With Reddit having officially seized the group and introduced replacement mods, it might be time for a little action report on Ask A Conservative.
Sites like Reddit operate by encouraging user-created content, then deleting anything which does not fit the Narrative. We flipped this around and permitted only genuinely conservative content, and this made them furious for years, but they could not justify doing anything about it.
With the panic over this election, they felt they could finally move. They want fake conservatives like the Lincoln Project: libertarians who are content to lose and watch society get destroyed because they are waiting for The Rapture to take them away.
That is the socially acceptable role of “conservative,” namely someone who gets to pose as principled but actually does nothing. They encourage these because their political idea consists of anarchy, with the good people going to Jesus when the world is destroyed.
As Nietzsche pointed out, this is self-pity and rejection of life disguised as something benevolent, much like the altruism of the Left conceals their desire for revenge on the naturally talented and their desire to seize what the talented create and destroy it by making it prole-friendly.
The following are the texts from the Ask A Conservative group. They are not perfect, many being only links or paragraphs cribbed from a post by a user, but on the whole, they offer an overview of conservatism that could be found nowhere else — until now.
Conservatism is a preference for the historically inherited rather than the abstract and ideal. This preference has traditionally rested on an organic conception of society—that is, on the belief that society is not merely a loose collection of individuals but a living organism comprising closely connected, interdependent members. Conservatives thus favour institutions and practices that have evolved gradually and are manifestations of continuity and stability. Government’s responsibility is to be the servant, not the master, of existing ways of life, and politicians must therefore resist the temptation to transform society and politics.
It was not until the late 18th century, in reaction to the upheavals of the French Revolution (1789), that conservatism began to develop as a distinct political attitude and movement. The term conservative was introduced after 1815 by supporters of the newly restored Bourbon monarchy in France, including the author and diplomat François-Auguste-René, vicomte de Chateaubriand. In 1830 the British politician and writer John Wilson Croker used the term to describe the British Tory Party (see Whig and Tory), and John C. Calhoun, an ardent defender of states’ rights in the United States, adopted it soon afterward.^(1)
Republican presidential candidate Mitt Romney was captured making some inflammatory comments about people who don’t pay income tax in America – the people he says will vote for President Obama “no matter what.” Below, CBSNews.com looks into the validity of his controversial statement.
The Quote “There are 47 percent of the people who will vote for the president no matter what. All right, there are 47 percent who are with him, who are dependent upon government, who believe that they are victims, who believe the government has a responsibility to care for them, who believe that they are entitled to health care, to food, to housing, to you-name-it — that that’s an entitlement. And the government should give it to them. And they will vote for this president no matter what. … These are people who pay no income tax. … [M]y job is not to worry about those people. I’ll never convince them they should take personal responsibility and care for their lives.”
So is it true that 47 percent of Americans don’t pay income tax? Essentially, yes, according to the Tax Policy Center, which provides data showing that in 2011, 46.4 percent of American households paid no federal income tax.
You will often hear the claim that 97% of climate scientists endorse global warming as a theory. However, there are some issues with this.
If you look at the literature, the specific meaning of the 97% claim is: 97 percent of climate scientists agree that there is a global warming trend and that human beings are the main cause–that is, that we are over 50% responsible. The warming is a whopping 0.8 degrees over the past 150 years, a warming that has tapered off to essentially nothing in the last decade and a half.
One of the main papers behind the 97 percent claim is authored by John Cook, who runs the popular website SkepticalScience.com, a virtual encyclopedia of arguments trying to defend predictions of catastrophic climate change from all challenges.
Here is Cook’s summary of his paper: “Cook et al. (2013) found that over 97 percent [of papers he surveyed] endorsed the view that the Earth is warming up and human emissions of greenhouse gases are the main cause.”
Where did most of the 97 percent come from, then? Cook had created a category called “explicit endorsement without quantification”—that is, papers in which the author, by Cook’s admission, did not say whether 1 percent or 50 percent or 100 percent of the warming was caused by man. He had also created a category called “implicit endorsement,” for papers that imply (but don’t say) that there is some man-made global warming and don’t quantify it. In other words, he created two categories that he labeled as endorsing a view that they most certainly didn’t.
There are little glitches with other studies as well:
Oreskes claimed that an analysis of 928 abstracts in the ISI database containing the phrase “climate change” proved the alleged consensus. It turned out that she had searched the database using three keywords (“global climate change”) instead of the two (“climate change”) she reported—reducing the search results by an order of magnitude. Searching just on “climate change” instead found almost 12,000 articles in the same database in the relevant decade. Excluded from Oreskes’s list were “countless research papers that show that global temperatures were similar or even higher during the Holocene Climate Optimum and the Medieval Warm Period when atmospheric CO2 levels were much lower than today; that solar variability is a key driver of recent climate change; and that climate modeling is highly uncertain.”
In fact, it seems that “97%” was always a political trope:
Where did this 97% figure originate? It appears to have started with a short 2009 paper by Peter Doran and Maggie Zimmerman of the University of Illinois at Chicago. In this paper, they announced the results of a two-question poll. This poll was sent to 10,257 “Earth scientists.”
The two questions were:
- When compared with pre-1800s levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?
- Do you think human activity is a significant contributing factor in changing mean global temperatures?
The poll received 3,146 responses. Of these, only 79 of the respondents listed climate science as their area of expertise and had published more than 50% of their recent peer-reviewed papers on the subject of climate change. Of those 79, 97% responded “yes” to both questions. 97% of 79 is 77. When they tell you that 97% of scientists agree, you need to know that they mean 77 scientists out of 10,257 polled. I admit that I’m not very good at this new math, but the way I learned it, 77 out of 10,257 is 0.75%.
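The arithmetic above is easy to check. A minimal sketch, using only the figures quoted from the poll (10,257 polled, 3,146 responses, 79 specialists, 97% agreement):

```python
# Sanity check of the Doran & Zimmerman (2009) figures cited above.
polled = 10_257
responded = 3_146
specialists = 79                      # respondents listing climate science as
                                      # their expertise with >50% recent papers
                                      # on climate change
agreeing = round(specialists * 0.97)  # 97% of 79 -> 77 scientists

print(agreeing)                       # 77
print(f"{agreeing / polled:.2%}")     # share of everyone polled: 0.75%
print(f"{agreeing / responded:.2%}")  # share of all respondents: 2.45%
```

So the widely quoted "97%" is 97% of a hand-picked subgroup of 79, which is indeed 0.75% of the scientists originally polled.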
The arguments against abortion are twofold:
The arguments for abortion tend to be utilitarian, such as a common one based on the idea that abortion reduces the number of people born into unstable, impoverished, lower-class, and less conscientious homes and therefore reduces crime by up to 50%:
We offer evidence that legalized abortion has contributed significantly to recent crime reductions. Crime began to fall roughly 18 years after abortion legalization. The 5 states that allowed abortion in 1970 experienced declines earlier than the rest of the nation, which legalized in 1973 with Roe v. Wade. States with high abortion rates in the 1970s and 1980s experienced greater crime reductions in the 1990s. In high abortion states, only arrests of those born after abortion legalization fall relative to low abortion states. Legalized abortion appears to account for as much as 50 percent of the recent drop in crime.
Academic philosophers have tangled with this issue:
A government body authorized to implement legislative directives by developing more precise and technical rules than possible in a legislative setting. Many administrative agencies also have law enforcement responsibilities.
The first administrative agency was created by Congress in 1789 to provide pensions for wounded Revolutionary War soldiers. Also in the late 1700s, agencies were created to determine the amount of duties charged on imported goods, but it was not until 1887 that the first permanent administrative agency was created. The INTERSTATE COMMERCE COMMISSION (ICC), created by the INTERSTATE COMMERCE ACT (49 U.S.C.A. § 10101 et seq.), was established by Congress to regulate commerce among the states, especially the interstate transportation of persons or property by carriers. The ICC was designed to ensure that carriers involved in interstate commerce provided the public with fair and reasonable rates and services. To buttress the Interstate Commerce Act, the Federal Reserve System was established by the Federal Reserve Act of 1913 (12 U.S.C.A. § 221) to serve as the United States’ central bank and execute U.S. monetary policy.
After the STOCK MARKET crash of October 1929, and during the Great Depression of the 1930s, Congress created numerous agencies in an effort to regulate the production and marketing of goods. Agencies such as the SOCIAL SECURITY ADMINISTRATION (created by the SOCIAL SECURITY ACT OF 1935 [42 U.S.C.A. § 301 et seq.]) and the Federal Savings and Loan Insurance Corporation (established by a 1933 amendment to the Federal Reserve Act, 12 U.S.C.A. § 264, and now codified at 12 U.S.C.A. §§ 1811–1831) helped provide financial security for many Americans.
When the United States entered WORLD WAR II, more agencies were created or enlarged to mobilize human resources and production and to administer price controls and rationing. The social upheaval of the 1960s spawned agencies designed to improve urban areas, provide opportunities for people who were historically disadvantaged and marginalized, and promote artistic endeavors. In the 1970s, 1980s, and 1990s, pressing issues such as human and environmental health were addressed through the creation of agencies such as the Environmental Protection Agency and a new, enlarged DEPARTMENT OF ENERGY.
Administrative agencies are considered the “fourth branch of government”:
In arguing for the states to ratify the Constitution, James Madison wrote in Federalist 47, “The accumulation of all powers, legislative, executive and judiciary, in the same hands, whether of one, a few, or many … may justly be pronounced the very definition of tyranny.” In our time, the regulatory state has become a form of tyranny led by bureaucrats who have wrested power from Congress and even influenced the Supreme Court to bow to their power.
This concentration of power in the regulatory state has led to outrage after outrage as unelected, unaccountable bureaucrats act as tyrannical kings, issuing edicts that harm small businesses, restrict freedom and cost our economy almost $2 trillion per year. This fourth branch of government exists outside of and unrestrained by the constitutional system of checks and balances. It represents one of the greatest threats to the liberties of American citizens.
This branch is unaccountable to the voters and represents a massive administrative-managerial bureaucracy or “nanny state”:
The growing dominance of the federal government over the states has obscured more fundamental changes within the federal government itself: It is not just bigger, it is dangerously off kilter. Our carefully constructed system of checks and balances is being negated by the rise of a fourth branch, an administrative state of sprawling departments and agencies that govern with increasing autonomy and decreasing transparency.
For much of our nation’s history, the federal government was quite small. In 1790, it had just 1,000 nonmilitary workers. In 1962, there were 2,515,000 federal employees. Today, we have 2,840,000 federal workers in 15 departments, 69 agencies and 383 nonmilitary sub-agencies.
The rise of the fourth branch has been at the expense of Congress’s lawmaking authority. In fact, the vast majority of “laws” governing the United States are not passed by Congress but are issued as regulations, crafted largely by thousands of unnamed, unreachable bureaucrats. One study found that in 2007, Congress enacted 138 public laws, while federal agencies finalized 2,926 rules, including 61 major regulations.
When people talk about repealing regulations, they are speaking of undoing the administrative agency state:
In one recent year alone, Congress passed 138 laws—while federal agencies finalized 2,926 rules. Federal judges conduct about 95,000 trials a year, but federal agencies conduct nearly 1 million. Put all that together and you have a situation in which one branch of government, the executive, is arrogating to itself the powers of the other two.
All of this has happened thanks largely to a 1984 Supreme Court case called Chevron. The Reagan administration chose to relax some air-quality regulations, and the Natural Resources Defense Council challenged the decision in court. The Supreme Court sided with the Environmental Protection Agency. It did so for commendable reasons: to avoid turning the courts themselves into policy-making bodies. Rather than decide whether the EPA was right or wrong, the high court deferred to the agency. This is judicial modesty.
It owes its origins to “Progressive” ideas:
The story begins more than a century ago, when new assumptions about the role and configuration of government gradually superseded the classical liberal ideas of the founding generation. A look at the political thought of Woodrow Wilson provides a useful illustration of this new way of thinking about the state, now known as progressivism. Wilson believed the “science of administration,” which he saw as still in its nonage, must be adapted to accommodate widening “new conceptions of state duty.” To Wilson, “the weightier debates of constitutional principle” were passé, increasingly irrelevant to the more-pressing questions of running a large and complex government apparatus. The idea of limited government itself belonged to a simpler time.
Wilson’s answer to the admittedly “poisonous atmosphere” of corruption and confusion in government at all levels was an appeal to the “impartial scientific method.” Here, he was a product of his time. Successive breakthroughs in the natural sciences had convinced Wilson’s generation that virtually everything, government included, could be understood and restructured in terms of fixed scientific laws; government and human nature were believed to be perfectible through science.
Wilson and the progressives accordingly believed bureaucrats were, through an august commitment to the common good, lifted above ordinary greed and self-interest. The federal bureaucracy would be their temple, a thing apart from partisan melees and their raucous debates. It was to be the cloistered, rarified world of trained subject-matter experts, objective and scientific, unmoved by selfish interests and unsoiled by politics. In principle, the modern administrative state, this new fourth branch of government, represents a forthright repudiation of the liberal Enlightenment principles upon which the constitutional order was premised.
This entrenched bureaucracy forms the basis of “the swamp”:
The term “big government” in effect means more responsibilities for federal employees, who make up the federal bureaucracy. Their job is to interpret and enforce laws enacted by Congress. The regulations developed by the bureaucracy are published in the Federal Register for comment before going into effect. In 2013 the pages of the Federal Register ran 80,000-plus pages.
The results of the comments and final decisions of the various departments of the federal government are published in the Code of Federal Regulations, which runs 20,000 pages. These regulations have the force of law unless struck down by the courts, which rarely happens. The United States Code (of laws) fills 35 volumes with about 45,000 pages (up from 400 pages when first published in 1913), of which Obamacare claims 13,000 and counting. Then there is the United States Tax Code, which runs 73,954 pages.
So, our friends in the bureaucracy have been busy. Since 2001, they have issued 4,680 changes of regulations, in addition to processing the new laws passed by Congress and signed by the president. As long as Congress keeps passing more laws and not repealing any, however, this job will continue to expand.
Degree of Popular Support
Amerinds attacked the new colonists, ushering in centuries of warfare:
We may examine representative incidents by following the geographic route of European settlement, beginning in the New England colonies. There, at first, the Puritans did not regard the Indians they encountered as natural enemies, but rather as potential friends and converts. But their Christianizing efforts showed little success, and their experience with the natives gradually yielded a more hostile view. The Pequot tribe in particular, with its reputation for cruelty and ruthlessness, was feared not only by the colonists but by most other Indians in New England. In the warfare that eventually ensued, caused in part by intertribal rivalries, the Narragansett Indians became actively engaged on the Puritan side.
Hostilities opened in late 1636 after the murder of several colonists. When the Pequots refused to comply with the demands of the Massachusetts Bay Colony for the surrender of the guilty and other forms of indemnification, a punitive expedition was led against them by John Endecott, the first resident governor of the colony; although it ended inconclusively, the Pequots retaliated by attacking any settler they could find. Fort Saybrook on the Connecticut River was besieged, and members of the garrison who ventured outside were ambushed and killed. One captured trader, tied to a stake in sight of the fort, was tortured for three days, expiring after his captors flayed his skin with the help of hot timbers and cut off his fingers and toes. Another prisoner was roasted alive.
While this form of government was once the norm among human societies, now it comes to us mostly from J.R.R. Tolkien:
The text of his sole anarcho-monarchist manifesto, such as it is, comes from a letter he wrote to his son Christopher in 1943 (forgive me for quoting at such length):
My political opinions lean more and more to Anarchy (philosophically understood, meaning the abolition of control not whiskered men with bombs)—or to ‘unconstitutional’ Monarchy. I would arrest anybody who uses the word State (in any sense other than the inanimate realm of England and its inhabitants, a thing that has neither power, rights nor mind); and after a chance of recantation, execute them if they remained obstinate! If we could go back to personal names, it would do a lot of good. Government is an abstract noun meaning the art and process of governing and it should be an offence to write it with a capital G or so to refer to people . . . .
And anyway, he continues, “the proper study of Man is anything but Man; and the most improper job of any man, even saints (who at any rate were at least unwilling to take it on), is bossing other men”:
Not one in a million is fit for it, and least of all those who seek the opportunity. At least it is done only to a small group of men who know who their master is. The mediaevals were only too right in taking nolo episcopari as the best reason a man could give to others for making him a bishop. Grant me a king whose chief interest in life is stamps, railways, or race-horses; and who has the power to sack his Vizier (or whatever you dare call him) if he does not like the cut of his trousers. And so on down the line. But, of course, the fatal weakness of all that—after all only the fatal weakness of all good natural things in a bad corrupt unnatural world—is that it works and has only worked when all the world is messing along in the same good old inefficient human way . . . . There is only one bright spot and that is the growing habit of disgruntled men of dynamiting factories and power-stations; I hope that, encouraged now as ‘patriotism’, may remain a habit! But it won’t do any good, if it is not universal.
Last week, as I watched the waves of the Republican electoral counterinsurgency washing across the heartland, and falling back only at the high littoral shelves of the Pacific coast and the Northeast, I found myself reflecting on what a devil’s bargain electoral democracy is.
In addition, we must consider the benefits of monarchy/aristocracy:
With our knowledge of the complete unalterability both of character and of mental faculties, we are led to the view that a real and thorough improvement of the human race might be reached not so much from outside as from within, not so much by theory and instruction as rather by the path of generation. Plato had something of the kind in mind when, in the fifth book of his Republic, he explained his plan for increasing and improving his warrior caste. If we could castrate all scoundrels and stick all stupid geese in a convent, and give men of noble character a whole harem, and procure men, and indeed thorough men, for all girls of intellect and understanding, then a generation would soon arise which would produce a better age than that of Pericles. – Schopenhauer, Arthur (1969). E. F. J. Payne (ed.). The World as Will and Representation. II. New York: Dover Publications. p. 527.
And a contra-Utopian statement of optimization:
If you want Utopian plans, I would say: the only solution to the problem is the despotism of the wise and noble members of a genuine aristocracy, a genuine nobility, achieved by mating the most magnanimous men with the cleverest and most gifted women. This proposal constitutes my Utopia and my Platonic Republic. – Essays and Aphorisms, trans. R.J. Hollingdale, Middlesex: London, 1970, p. 154
An assault rifle is a rifle chambered for ammunition intermediate in size between pistol and full-power rifle cartridges and designed for rapid fire:
Assault rifle, military firearm that is chambered for ammunition of reduced size or propellant charge and that has the capacity to switch between semiautomatic and fully automatic fire. Because they are light and portable yet still able to deliver a high volume of fire with reasonable accuracy at modern combat ranges of 1,000–1,600 feet (300–500 metres), assault rifles have replaced the high-powered bolt-action and semiautomatic rifles of the World War II era as the standard infantry weapon of modern armies.
During World War II, Hugo Schmeisser designed a light rifle to fire the Germans’ 7.92-mm Kurz (“Short”) cartridge, which was of the same calibre as the Mauser rifle cartridge but was lighter and shorter and was therefore of a less-potent “intermediate” power. The weapon, known variously as the MP43, MP44, or Sturmgewehr (“Assault Rifle”) 44, was loaded by a curved box magazine holding 30 rounds and was designed for most-effective fire at about 300 yards (270 metres). Only some 425,000 to 440,000 of these rifles were built—too few and too late for the German war effort—but they were based on a concept that would dominate infantry weapons into the 21st century.
This is controversial only because Leftists want to extend the definition of “assault rifle” to mean “any semi-automatic rifle” for the purposes of banning a wider range of guns:
Yet media and politicians often use this term inaccurately, as doing so furthers their desire of getting Americans to support gun-control policies. As Sean Davis pointed out on our pages last year, when the United States had a federal “assault weapons” ban, lawmakers defined the term cosmetically instead of by function and, contra Merriam-Webster, the definition had nothing to do with a military-esque design (whatever that means).
A political group which takes a “base-building” approach toward organizing a constituency approaches “issue” work a bit differently. The first step involves canvassing the “base” where the group is attempting to organize: talking to coworkers, knocking on doors in a neighborhood, or chatting with commuters on the train or at the bus stop. Since the vast majority of individuals in our society are not members of a particular political group or even a union, we say that recruiting someone from a working class constituency into a mass organization is an act of “organizing the unorganized.”
This topic baffles people coming over from Leftism more than any other. “Big government” refers to the mission creep which happens when a government decides that it is not, as the Constitution intends, designed to protect an organic culture by preventing government itself from violating natural rights, law, and order. When a government leaves those behind, it becomes ideological, or committed to social engineering of its population in order to provide a Utopian end result someday, and these big governments are inevitably of an egalitarian or civil/human rights nature.
In essence, the conservative argument for “small government” means that we want to avoid ideological government in order to avoid this mission creep and thus government feeling justified in reaching into all areas of life so that it can complete its social engineering agenda. This does not mean that we oppose the use of government power, only that we think government should see itself as defending the natural rights, law, and order that come to us from history instead of pursuing some quantitatively new and hence conjectural, hypothetical, and theoretical strategy.
The broken window theory — similar to Keynesianism — states that if a window is broken, someone must be hired to fix it, and therefore money has been injected into the economy. Critics point out that this fails to take into account the loss of the window and possibly of trust and goodwill in the community.
The broken window fallacy was first expressed by the 19th-century French economist Frederic Bastiat.
In Bastiat’s tale, a boy breaks a window. The townspeople looking on decide that the boy has actually done the community a service because his father will have to pay the town’s glazier to replace the broken pane. The glazier will then spend the extra money on something else, jump-starting the local economy. The onlookers come to believe that breaking windows stimulates the economy.
Bastiat points out that further analysis exposes the fallacy. By forcing his father to pay for a window, the boy has reduced his father’s disposable income. His father will not be able to purchase new shoes or some other luxury good. Thus, the broken window might help the glazier, but at the same time, it robs other industries and reduces the amount spent on other goods.
Bastiat also noted that the townspeople should have regarded the broken window as a loss of some of the town’s real value.
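Bastiat’s bookkeeping can be sketched as a toy ledger. The figures below are invented for illustration; the point is only that the glazier’s gain is exactly offset by the unseen loss elsewhere, while the town’s real wealth falls:

```python
# Toy ledger for Bastiat's broken window fallacy. All figures hypothetical.
window_cost = 100          # what the father must pay the glazier

# "Seen": the glazier earns the cost of the repair.
glazier_gain = window_cost

# "Unseen": the father would have spent the same money on shoes instead,
# so the shoemaker loses a sale of equal size.
shoemaker_loss = window_cost

# Net change in town spending is zero...
net_spending_change = glazier_gain - shoemaker_loss   # 0

# ...but the town's stock of real goods is down one window.
net_real_wealth_change = -window_cost                 # -100

print(net_spending_change, net_real_wealth_change)    # 0 -100
```

The "stimulus" onlookers see is pure redistribution between the glazier and the shoemaker; only the destruction of the window is a net change, and it is negative.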
See also Broken Windows.
Broken windows theory had an enormous impact on police policy throughout the 1990s and remained influential into the 21st century. Perhaps the most notable application of the theory was in New York City under the direction of Police Commissioner William Bratton. He and others were convinced that the aggressive order-maintenance practices of the New York City Police Department were responsible for the dramatic decrease in crime rates within the city during the 1990s. Bratton began translating the theory into practice as the chief of New York City’s transit police from 1990 to 1992. Squads of plainclothes officers were assigned to catch turnstile jumpers, and, as arrests for misdemeanours increased, subway crimes of all kinds decreased dramatically. In 1994, when he became New York City police commissioner, Bratton introduced his broken windows-based “quality of life initiative.” This initiative cracked down on panhandling, disorderly behaviour, public drinking, street prostitution, and unsolicited windshield washing or other such attempts to obtain cash from drivers stopped in traffic. When Bratton resigned in 1996, felonies were down almost 40 percent in New York, and the homicide rate had been halved.
Prior to the development and implementation of various incivility theories such as broken windows, law enforcement scholars and police tended to focus on serious crime; that is, the major concern was with crimes that were perceived to be the most serious and consequential for the victim, such as rape, robbery, and murder. Wilson and Kelling took a different view. They saw serious crime as the final result of a lengthier chain of events, theorizing that crime emanated from disorder and that if disorder were eliminated, then serious crimes would not occur.
Their theory further posits that the prevalence of disorder creates fear in the minds of citizens who are convinced that the area is unsafe. This withdrawal from the community weakens social controls that previously kept criminals in check. Once this process begins, it feeds itself. Disorder causes crime, and crime causes further disorder and crime.
See also our page for the Broken Window Theory.
You might notice the following:
1-5 are Leftist programs, and 7 pays mostly for debt incurred by them.
Most of these are New Deal/Great Society welfare entitlements benefits state programs.
From the first source:
Five decades, nearly $22 trillion and roughly 80 welfare programs later, it’s fair to ask how we’re doing. The short answer? Not well.
In contrast, our wars as of 2010 cost us only $738 billion in Vietnam and $1.1 trillion in Iraq/Afghanistan. The driver for our budgetary crisis is the entitlements programs, and politicians are scared to death of touching them.
The entitlements budget crisis is likely the issue that ends democracy:
But now, in its third century of existence, it is producing dysfunctional and potentially self-destructive forms of governance. The United States has been deadlocked in the monumental issue of its budget deficit and entitlements, unable to cut spending or raise taxes. Europe as a whole is no less fiscally bankrupt, and measures to restore its public finances are throwing the continent into economic depression and political upheaval.
A Cargo Cult is a superstitious belief that confuses symbolism and the cause of something desired:
After World War II anthropologists discovered that an unusual religion had developed among the islanders of the South Pacific. It was oriented around the concept of cargo, which the islanders perceived as the source of the wealth and power of the Europeans and Americans. This religion, known as the Cargo Cult, held that if the proper ceremonies were performed, shipments of riches would be sent from some heavenly place. It was all very logical to the islanders. The islanders saw that they worked hard but were poor, whereas the Europeans and Americans did not work but instead wrote things down on paper, and in due time a shipment of wonderful things would arrive.
The Cargo Cult members built replicas of airports and airplanes out of twigs and branches and made the sounds associated with airplanes to try to activate the shipment of cargo.
Although the existence of the Cargo Cult only became known after World War II, the cult had developed long before, when the Europeans first arrived in the area in ships. There were legends among the islanders of their distant ancestor-god having journeyed to the west and promised to someday return. The West was thought to be the land of the dead.
When the Portuguese and Dutch came into the area of the South Pacific they came from the west and they were pale skinned just as the islanders would have expected people coming from the land of the dead to be. The Europeans of the time also did not work but sent messages which led to the arrival of wonderful things as cargoes from ships.
This sort of talismanic, symbolic, and superstitious behavior is a form of neurosis:
He struggled with feelings of superiority with respect to his parents, yet inferiority with respect to rich relatives. He became phobic about mathematics, and developed fainting spells, which enabled him to avoid going to school. This developed into a school phobia, which allowed Jung to play alone and ruminate in his secret fantasy world for months. Only when there was talk of his being epileptic did he begin to consciously battle the fainting feelings. He overcame them, returning to school and improving his performance. Jung concluded: “That was when I learned what a neurosis is.”
The association of part of reality (fear of mathematics) with a talisman to drive it away (fainting) forms the basis of the symbolism that divides the individual from their inner reality and its knowledge and causes them to embrace an (erroneous) external source.
Caste refers to hereditary groupings of social classes by genetic ability and tendencies:
Others might present a biological explanation of India’s stratification system, based on the notion that all living things inherit a particular set of qualities. Some inherit wisdom and intelligence, some get pride and passion, and others are stuck with less fortunate traits. Proponents of this theory attribute all aspects of one’s lifestyle — social status, occupation, and even diet — to these inherent qualities and thus use them to explain the foundation of the caste system.
In ancient India, the ranked occupational groups were referred to as varnas, and the hereditary occupational groups within the varnas were known as jatis. Many have immediately assumed that ascribed social groups and rules prohibiting intermarriage among the groups signify the existence of a racist culture. But this assumption is false. Varnas are not racial groups but rather classes.
Four varna categories were constructed to organize society along economic and occupational lines. Spiritual leaders and teachers were called Brahmins. Warriors and nobility were called Kshatriyas. Merchants and producers were called Vaishyas. Laborers were called Sudras.
Europe had its own version:
One example is Medieval Europe, which had a system similar to the caste system featuring four social groups: nobility, knights and clergy, artisans, and peasants.
It was part of the feudal system including manorialism:
manorialism — a communal agricultural system that was really an almost all-encompassing socio-religious-political system which, although its features and importance did vary at different times and in different locales, pretty much regulated nearly all aspects of medieval europeans’ lives. where it existed — a key point which i’ll come back to later.
manorialism — “classic,” bipartite manorialism (more on that below) — started with the franks in austrasia by at least the 600s or perhaps earlier and spread gradually southwards with the frankish conquest of, well, france and eastwards during the ostsiedlung. we find it just across the channel in southern england very early as well — there are references to what sounds like features of a manor system in the laws of king ine of wessex (688-726)
the bipartite estate was a key aspect of classical (north)western european manorialism. basically, the manor was divided into two parts: the lord’s part — his farm or demesne — and the peasants’ or serfs’ parts — all their individual farms. the serfs or villeins or whatever you want to call them (there were multiple categories of these peasant farmers and a range of names for them) each had farms to work which were granted to them by the lords (keep in mind that sometimes those “lords” were bishops or monks who ran the monasteries). in the earlier part of the medieval period, the serfs owed labor to the lord of the manor as payment — they were obliged to help work the lord’s demesne — but they also independently worked the farms which they were granted, both to sustain themselves and perhaps make a little profit by selling any extra produce to the neighbors or in a market
There was nothing wrong with the confessions:
“The confessions were not coerced,” she explains, or else the trial court wouldn’t have found them admissible. The videotapes show that “the questioning was respectful, dignified, carried out according to the letter of the law and with sensitivity to the young age of the men. . . . If you spot the first sign of a coercive questioning, don’t hesitate to write to me and point it out.”
The Reyes confession is dubious:
The D.A.’s report was based solely on the confession of Matias Reyes, career criminal, serial rapist and murderer. Reyes had absolutely nothing to lose by confessing to the rape — the statute of limitations had run — and much to gain by claiming he acted alone: He got a favorable prison transfer and the admiration of his fellow inmates for smearing the police.
Multiple people participated in the assault:
Here’s the first big problem with the confession of the alleged lone rapist Reyes: his tale of being the only attacker goes against the medical evidence that indicates the Central Park Jogger was attacked by multiple people. Part of this evidence includes bruising on both legs of the victim indicating she was held down by more than one person and cuts from a blade (Reyes said he only hit her with a rock and tree branch).
The five were involved in violent criminal activity that night:
For starters, no one has ever disputed that the Harlem thugs had been in Central Park that fateful evening for the sole purpose of assaulting and mugging innocents (one of whom had been bludgeoned with a pipe). As is the wont of cowards, “the Five” set upon only those who they outnumbered, those who were weaker and more vulnerable. This they confessed from the moment they were in police custody.
The fact of another rapist was known:
However, Trump isn’t the only one who wasn’t entirely convinced that Reyes acted alone. A panel of three attorneys commissioned by the NYPD in 2003 believed that the Central Park Five were most likely involved in the attack, just not the rape. “Our examination of the facts leads us to suggest that there is an alternative theory of the attack upon the jogger, that both the defendants and Reyes assaulted her, perhaps successively,” the lawyers wrote in their report, according to a New York Times article. They described their theory, saying that the Central Park Five attacked the woman first, and that “Mr. Reyes, drawn by her screams, either joined in the attack as it was ending or waited until the defendants had moved on to their next victims before descending upon her himself, raping her and inflicting upon her the brutal injuries that almost caused her death.”
https://lawandcrime.com/high-profile/donald-trump-isnt-alone-in-believing-central-park-five-are-guilty/ quoting https://www.nytimes.com/2003/01/28/nyregion/boys-guilt-likely-in-rape-of-jogger-police-panel-says.html
The CP5 were indicted because they had knowledge of the attacks which was not made public, and they were known to be part of a group of thirty mostly minority youths who went around Central Park and attacked several people:
Antron McCray: “We charged her. We got her on the ground. Everybody started hitting her and stuff. She was on the ground. Everybody stomping and everything. Then we got, each — I grabbed one arm, some other kid grabbed one arm, and we grabbed her legs and stuff. Then we all took turns getting on her, getting on top of her.”
Kevin Richardson: “Raymond [Santana] had her arms, and Steve [Lopez, who accepted a plea bargain rather than face trial] had her legs. He spread it out. And Antron [McCray] got on top, took her panties off.”
Raymond Santana: “He was smacking her. He was saying, ‘Shut up, bitch!’ Just smacking her…. I was grabbing the lady’s tits.”
Kharey Wise: “This was my first rape.”
When investigators at one point asked the fifth suspect, Yusef Salaam, why he had tried to smash the victim’s skull, he replied, “It was fun.”
The accusation of racism has always been spurious:
“Fascism! White fascism,” said another.
“Racism,” said a man. “Racism is what it is.”
Galligan, trying to stop the outbursts, ordered everyone but relatives from the courtroom.
But Richardson’s brother-in-law, who had been crying and clutching Grace Cuffee as she was given oxygen by a court officer, stood and pointed at Assistant District Attorney Elizabeth Lederer.
“Bitch, you mine. You f—–g mine,” he yelled.
The quotation you’re looking for is from Chesterton’s 1929 book, The Thing, in the chapter entitled, “The Drift from Domesticity”:
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.
The complaint says Ng brought a suitcase full of $400,000 in cash to the United States on June 13 and later that day brought the suitcase to a meeting with “Business Associate-1” in Queens, New York.
Ng was identified in a 1998 Senate report as the source of hundreds of thousands of dollars illegally funneled through an Arkansas restaurant owner, Charlie Trie, to the Democratic National Committee during the Clinton administration.
Ng and Trie made a number of visits to the White House to attend Democratic National Committee-sponsored events and were photographed with President Bill Clinton and then-First Lady Hillary Clinton. ABC News reported in 1997 that Ng had made six trips to the White House.
Leftists endorse standard demand-side economics (versus conservative supply-side economics, based on the law of supply and demand). We saw this most clearly in the “fast money” policies of Robert Rubin and Alan Greenspan under Bill Clinton.
Conservatives tend to have several issues with climate change:
We believe it mistakes a methodological problem for a universal phenomenon
Michael Crichton provides the basic outline of this argument:
Crichton argues that concern about global warming is best understood as a fad. In particular, he argues that many people concerned about global warming follow a herd mentality, failing critically to examine the data. Crichton is especially harsh in his portrayal of other members of the Hollywood elite, though his critique extends more broadly to the news media, intelligentsia and general public.
First, he highlights the “urban heat island effect.” Crichton explains that cities are often warmer than the surrounding countryside and implies that observed temperature increases during the past century are the result of urban growth, not rising greenhouse gas concentrations.
For more information on the citations Crichton used, see this list.
Crichton is not anti-conservationist:
“In closing, I want to state emphatically that nothing in my remarks should be taken to imply that we can ignore our environment, or that we should not take climate change seriously. On the contrary, we must dramatically improve our record on environmental management. That’s why a focused effort on climate science, aimed at securing sound, independently verified answers to policy questions, is so important now.”
The “heat islands” argument is somewhat misstated. The broader criticism is that everywhere humans go, we do the same things: cut down forests, pave the land, erect concrete buildings that obstruct drainage and radiate heat, and block natural wind currents with large construction projects. This disrupts jet streams and radiates heat into the wind, producing local heat effects and warmed currents that reach other parts of the globe; to scientists reasoning statistically, this looks like global warming rather than a flawed measurement method.
It is used to justify expanding Leftist power and wealth transfer from first world to third
“One has to free oneself from the illusion that international climate policy is environmental policy. This has almost nothing to do with the environmental policy anymore, with problems such as deforestation or the ozone hole,” said Edenhofer, who co-chaired the U.N.’s Intergovernmental Panel on Climate Change working group on Mitigation of Climate Change from 2008 to 2015.
So what is the goal of environmental policy?
“We redistribute de facto the world’s wealth by climate policy,” said Edenhofer.
It follows a pattern of previous doom pathologies used to justify power seizure
A 2009 investigative report from UK’s Telegraph detailed the extent of dictatorial-like powers Connolley possessed at Wikipedia, allowing him to remove inconvenient scientific information that didn’t conform to his point of view.
“All told, Connolley created or rewrote 5,428 unique Wikipedia articles. His control over Wikipedia was greater still, however, through the role he obtained at Wikipedia as a website administrator, which allowed him to act with virtual impunity. When Connolley didn’t like the subject of a certain article, he removed it — more than 500 articles of various descriptions disappeared at his hand. When he disapproved of the arguments that others were making, he often had them barred — over 2,000 Wikipedia contributors who ran afoul of him found themselves blocked from making further contributions. Acolytes whose writing conformed to Connolley’s global warming views, in contrast, were rewarded with Wikipedia’s blessings. In these ways, Connolley turned Wikipedia into the missionary wing of the global warming movement.“
…when including the papers from the 1960s and 1970s that indicated the globe had cooled (by -0.3° C between the 1940s and ’70s), that this cooling was concerning (leading to extreme weather, drought, depressed crop yields, etc.), and/or that CO2’s climate influence was questionable to negligible, a conservative estimate for the number of scientific publications that did not agree with the alleged CO2-warming “consensus” was 220 papers for the 1965-’79 period, not 7. If including papers published between 1960 and 1989, the “non-consensus” or “cooling” papers reaches 285.
The scapegoating of industry allows us to continue problematic behaviors like using too much land, deforestation, urbanization, damming, and overfishing
Forests still cover about 30 percent of the world’s land area, but they are disappearing at an alarming rate. Between 1990 and 2016, the world lost 502,000 square miles (1.3 million square kilometers) of forest, according to the World Bank—an area larger than South Africa. Since humans started cutting down forests, 46 percent of trees have been felled, according to a 2015 study in the journal Nature. About 17 percent of the Amazonian rainforest has been destroyed over the past 50 years, and losses recently have been on the rise.
Poor air and water quality, insufficient water availability, waste-disposal problems, and high energy consumption are exacerbated by the increasing population density and demands of urban environments.
“Dams change rivers by creating artificial lakes, fragmenting river networks and distorting natural patterns of sediment transport and seasonal variations in water temperature and stream flow,” says Schmidt, who served as chief of the U.S. Geological Survey’s Grand Canyon Monitoring and Research Center from 2011 to 2014.
The number of overfished stocks globally has tripled in half a century and today fully one-third of the world’s assessed fisheries are currently pushed beyond their biological limits, according to the Food and Agriculture Organization of the United Nations. Overfishing is closely tied to bycatch—the capture of unwanted sea life while fishing for a different species. This, too, is a serious marine threat that causes the needless loss of billions of fish, along with hundreds of thousands of sea turtles and cetaceans.
More than three percent of global carbon dioxide emissions can be attributed to ocean-going ships. This is an amount comparable to major carbon-emitting countries — and the industry continues to grow rapidly.
In fact, if global shipping were a country, it would be the sixth largest producer of greenhouse gas emissions.
The International Maritime Organization (IMO) calculated that ocean-going vessels released 1.12 billion metric tons of carbon dioxide in 2007. This is equivalent to the annual greenhouse gas emissions from over 205 million cars, or more cars than were registered in the entire United States in 2006 (135 million).
It turns out to be quite a health threat:
Confidential data from maritime industry insiders based on engine size and the quality of fuel typically used by ships and cars shows that just 15 of the world’s biggest ships may now emit as much pollution as all the world’s 760m cars. Low-grade ship bunker fuel (or fuel oil) has up to 2,000 times the sulphur content of diesel fuel used in US and European automobiles.
The setting up of a low emission shipping zone follows US academic research which showed that pollution from the world’s 90,000 cargo ships leads to 60,000 deaths a year and costs up to $330bn per year in health costs from lung and heart diseases.
Cars driving 15,000km a year emit approximately 101 grammes of sulphur oxide gases (or SOx) in that time. The world’s largest ships’ diesel engines which typically operate for about 280 days a year generate roughly 5,200 tonnes of SOx.
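A back-of-envelope check of the figures just quoted (101 g of SOx per car-year, roughly 5,200 tonnes per large ship-year, both from the cited Guardian report) shows where the widely repeated "one ship equals roughly 50 million cars" comparison comes from; the arithmetic is a sketch of the claim, not new data:

```python
# Back-of-envelope check of the SOx figures quoted above.
CAR_SOX_G_PER_YEAR = 101           # grams of SOx, car driving ~15,000 km/yr
SHIP_SOX_TONNES_PER_YEAR = 5_200   # large marine diesel, ~280 operating days/yr

ship_sox_g = SHIP_SOX_TONNES_PER_YEAR * 1_000_000   # tonnes -> grams
cars_equivalent = ship_sox_g / CAR_SOX_G_PER_YEAR   # ~51 million cars

print(f"One large ship emits roughly {cars_equivalent:,.0f} cars' worth of SOx")
```

At roughly 51 million cars per ship, 15 such ships do indeed rival the world's ~760 million cars on this one pollutant, as the excerpt above states.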
Sounds a bit toxic:
The fuel used in ships is waste oil, basically what is left over after the crude oil refining process. It is the same as asphalt and is so thick that when cold it can be walked upon. It's the cheapest and most polluting fuel available, and the world's 90,000 ships chew through an astonishing 7.29 million barrels of it each day, or more than 84% of all exported oil production from Saudi Arabia, the world's largest oil exporter.
These are producing a toxic environment:
In European coastal areas, shipping emissions contribute 1–7% of ambient air PM10 levels, 1–14% of PM2.5, and at least 11% of PM1. Contributions from shipping to ambient NO2 levels range between 7 and 24%, with the highest values being recorded in the Netherlands and Denmark. Impacts from shipping emissions on SO2 concentrations were reported for Sweden and Spain. Shipping emissions impact not only the levels and composition of particulate and gaseous pollutants, but may also enhance new particle formation processes in urban areas.
In addition, the environmental panic industry has a bad record when it comes to predicting the future
This includes Al Gore and his movie An Inconvenient Truth (2006)
There are also concerns about those who use predictions of imminent doom to argue for their own absolute power
Here is the standard mythos:
The Clinton years showed the effects of a large tax increase that Clinton pushed through in his first year, and that Republicans incorrectly claim is the “largest tax increase in history.” It fell almost exclusively on upper-income taxpayers. Clinton’s fiscal 1994 budget also contained some spending restraints. An equally if not more powerful influence was the booming economy and huge gains in the stock markets, the so-called dot-com bubble, which brought in hundreds of millions in unanticipated tax revenue from taxes on capital gains and rising salaries.
Clinton’s large budget surpluses also owe much to the Social Security tax on payrolls. Social Security taxes now bring in more than the cost of current benefits, and the “Social Security surplus” makes the total deficit or surplus figures look better than they would if Social Security wasn’t counted. But even if we remove Social Security from the equation, there was a surplus of $1.9 billion in fiscal 1999 and $86.4 billion in fiscal 2000. So any way you count it, the federal budget was balanced and the deficit was erased, if only for a while.
N.B. the federal budget is somewhere around $3tn per year.
There was never a surplus and the facts support that position. In fact, far from a $360 billion reduction in the national debt in FY1998-FY2000, there was an increase of $281 billion.
[i]n no year did the national debt go down, nor did Clinton leave President Bush with a surplus that Bush subsequently turned into a deficit. Yes, the deficit was almost eliminated in FY2000 (ending in September 2000 with a deficit of “only” $17.9 billion), but it never reached zero–let alone a positive surplus number. And Clinton’s last budget proposal for FY2001, which ended in September 2001, generated a $133.29 billion deficit. The growing deficits started in the year of the last Clinton budget, not in the first year of the Bush administration.
It turns out that devious accounting created this “surplus”:
Even as the Clinton administration took a victory lap with the projected surplus, then-Federal Reserve Chairman Alan Greenspan was reminding people that the Social Security Administration was sitting on approximately $10 trillion in unfunded promises. A study by Howell Jackson, a Harvard professor, had shown that if the program were administered using the same accounting methods required by private pension plans, Social Security was actually running a $500 billion deficit each year.
Excluding Social Security obligations from the national debt allowed the White House to claim a surplus while the government’s financial obligations were actually increasing (this practice still exists, with the official national debt figures omitting $83 trillion in future Social Security bills). This financial approach is not dissimilar to infamous Enron accounting methods, where huge liabilities are not included on balance sheets.
This may have laid the groundwork for the Great Recession:
But there’s also a deeper critique to be made: Namely, that Clinton’s budget surplus wasn’t everything it’s cracked up to be. In fact, it might have hurt the economy pretty badly.
A sectoral balances analysis starts with the recognition that the U.S. economy, like any national economy, is roughly comprised of three sectors. There’s the government sector: the federal government, the Federal Reserve, and the state and local governments. There’s the private domestic sector: individuals, households, businesses, the banks, all the major industries, etc. And then there’s the foreign sector: i.e. the rest of the world, or every entity outside the U.S. national border that we trade with.
And because of the way we calculate gross domestic product (GDP), the sum of the deficits or surpluses of these three sectors will always be zero. So if the domestic private sector is running a surplus of 4 percent of GDP, for instance, then the government and foreign sectors might each run a deficit of 2 percent.
[T]he government sector has almost always been in deficit since the mid-20th century while the private sector has almost always been in surplus. But what do you notice about the late 1990s? Something weird happened: The private domestic sector (the blue bars) went into deficit for the first time since 1952. Then it did it again in the second half of the 2000s. There’s no way for the spending of private households and businesses to collectively outpace saving unless it is being driven by unsustainable debt. So what we’re seeing here is the stock bubble of the late ’90s, which burst in 2001, and the out-of-control mortgages and household debt of the mid-to-late ’00s, which culminated in the 2008 financial crisis.
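The accounting identity behind this sectoral-balances argument can be sketched in a few lines. The numbers are the illustrative percentages from the passage above plus one hypothetical late-1990s scenario of my own construction, not measured data:

```python
# Sectoral balances: by GDP accounting, the government, private, and
# foreign sectors' balances (as shares of GDP) must sum to zero.
def third_balance(a: float, b: float) -> float:
    """Given two sectoral balances (% of GDP), return the third."""
    return -(a + b)

# The passage's example: a private-sector surplus of 4% of GDP means
# the government and foreign sectors together run a 4% deficit
# (e.g. 2% each).
government, private = -2.0, 4.0
foreign = third_balance(government, private)   # -2.0

# The late-1990s anomaly: a government surplus (+1%) alongside a trade
# deficit (a foreign-sector surplus, here +2%) forces the private
# sector into deficit -- the debt-driven pattern the passage describes.
gov_surplus, foreign_surplus = 1.0, 2.0
private_balance = third_balance(gov_surplus, foreign_surplus)  # -3.0
```

The identity says nothing about causation; it only constrains which combinations of surpluses and deficits can coexist, which is the whole of the argument above.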
The average cost of tuition and fees at a private, non-profit, four-year university this school year was $31,231—up sharply from $1,832 in 1971-1972 (in current dollars). At public, four-year schools, tuition and fees cost about $9,139 this year. In the 1971 school year, they added up to less than $500 in current dollars, according to the College Board.
Since 1971, annual college enrollment has more than doubled in the U.S. to 19.5 million, as of 2013, the latest Census data available. In that year, there were 5.3 million in two-year colleges, 10.5 million in four-year colleges and 3.7 million in graduate school.
College enrollment peaked in 2011.
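As a rough check on the tuition figures above: the private-school numbers imply a roughly seventeen-fold nominal increase. Assuming the end year is about 2014–15 (the College Board data the article appears to draw on; the 43-year span is my assumption), that works out to just under 7% compound growth per year:

```python
# Implied nominal growth of private four-year tuition and fees.
tuition_1971 = 1_832    # 1971-72, from the passage
tuition_now = 31_231    # "this school year" (assumed ~2014-15)
years = 43              # assumed span; adjust if the end year differs

multiple = tuition_now / tuition_1971      # ~17x nominal increase
cagr = multiple ** (1 / years) - 1         # ~6.8% per year, compounded

print(f"{multiple:.1f}x increase, about {cagr:.1%} per year compounded")
```

For comparison, long-run U.S. consumer-price inflation over the same decades ran well below that rate, which is why the passage treats the increase as sharp even before adjusting for inflation.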
People form first impressions and then defend them, reading intervening data points into their existing narrative rather than re-analyzing the assumptions foundational to that narrative:
Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.
Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.
The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.
The problem is probably not confirmation bias, but people searching for reasons to justify their beliefs, which means that they seize on information favorable to their thesis and ignore or disregard everything else:
Unfortunately, people do not always revise their beliefs in light of new information. On the contrary, they often stubbornly maintain their views. Certain disagreements stay entrenched and polarized.
We asked 900 United States residents which candidate they wanted to win the election, and which candidate they believed was most likely to win. Respondents fell into two groups. In one group were those who believed the candidate they wanted to win was also most likely to win (for example, the Clinton supporter who believed Mrs. Clinton would win). In the other group were those who believed the candidate they wanted to win was not the candidate most likely to win (for example, the Trump supporter who believed Mrs. Clinton would win). Each person in the study then read about recent polling results emphasizing either that Mrs. Clinton or Mr. Trump was more likely to win.
After reading about the recent polling numbers, all the participants once again indicated which candidate they believed was most likely to win. The results, which we report in a forthcoming paper in the Journal of Experimental Psychology: General, were clear and robust. Those people who received desirable evidence — polls suggesting that their preferred candidate was going to win — took note and incorporated the information into their subsequent belief about which candidate was most likely to win the election. In contrast, those people who received undesirable evidence barely changed their belief about which candidate was most likely to win.
Importantly, this bias in favor of the desirable evidence emerged irrespective of whether the polls confirmed or disconfirmed peoples’ prior belief about which candidate would win. In other words, we observed a general bias toward the desirable evidence.
To our surprise, those people who received confirming evidence — polls supporting their prior belief about which candidate was most likely to win — showed no bias in favor of this information. They tended to incorporate this evidence into their subsequent belief to the same extent as those people who had their prior belief disconfirmed. In other words, we observed little to no bias toward the confirming evidence.
Confirmation bias can be seen as one of the tools that our brains use to build cases for us to use to justify our behavior, wants, or interpretation so that we do not lose social position:
As the term is used in this article and, I believe, generally by psychologists, confirmation bias connotes a less explicit, less consciously one-sided case-building process. It refers usually to unwitting selectivity in the acquisition and use of evidence. The line between deliberate selectivity in the use of evidence and unwitting molding of facts to fit hypotheses or beliefs is a difficult one to draw in practice, but the distinction is meaningful conceptually, and confirmation bias has more to do with the latter than with the former.
Conservatism, the belief system of conservatives, can be described as a philosophy but is more accurately portrayed as a folkway, lifestyle or way of life. It concerns how people think of their own actions and their place in the universe within the context of human civilization, and our default human tendency is to oppose it because it minimizes our sense of self-importance, or individualism.
The French aristocratic political philosopher Alexis de Tocqueville (1805–59) described individualism in terms of a kind of moderate selfishness that disposed humans to be concerned only with their own small circle of family and friends. Observing the workings of the American democratic tradition for Democracy in America (1835–40), Tocqueville wrote that by leading “each citizen to isolate himself from his fellows and to draw apart with his family and friends,” individualism sapped the “virtues of public life,” for which civic virtue and association were a suitable remedy.^(1)
The root of conservatism and conservative comes from the Proto-Indo-European root word ser meaning “to protect”:
Proto-Indo-European root meaning “to protect.” It forms all or part of: conservation; conservative; conserve; hero; observance; observatory; observe; preserve; reservation; reserve; reservoir.^(2)
It also includes the prefix con^(3) meaning “together, together with, in combination” for a final definition of “protecting together.” Conservatives protect the best of the past; this requires that they be realists who recognize what works in reality, and also have an aesthetic ability to assess the best above the rest, an ability that we normally describe as the pursuit of excellence (arete^(4)) as a means of conserving the best.
From this we get notions such as “to conserve” which implies the preservation and continuity of something original, in this case civilization and humanity in its most advanced state. Conservatism necessarily implies a qualitative assessment of the results of our actions, and conserves the best of those, usually as enshrined in various “golden ages” of highly advanced civilization that degraded later into its modern form.
Conservatism (or the conservative philosophy) is that which conserves the best of human endeavors. It is not a utilitarian but an optimalist view, meaning that it aims for the best possible results even where those are not pragmatic given the nature of most human beings. Conservatism challenges us to rise above our Simian impulses and to instead pay attention to our inner intuition, where we find knowledge of how Reality works and how to adapt to it such that the best possible results are achieved. In the conservative viewpoint, the purpose of life is pleasure, and this is found only through the “transcendentals” such as beauty, excellence, truth, goodness and national identity.
The root of conservative thinking comes to us from the writing of Plato, specifically The Republic, which is a thought-experiment regarding the prospects of government outside of the known traditional working model of king, faith, culture, nation and values. Plato introduced the cyclic model of history to the Western canon, and elaborated on the elusive nature of realistic thinking with his metaphor of the cave.
This leads to several realizations about conservatism:
Conservatism is the belief system (more like a philosophy or culture that explains our world than an ideology, or theory about how life “should” be) of conservatives. It holds that history is cyclic, and humans cycle between “golden ages” of tradition and the degradation of the same, which produces worse outcomes. As a result, conservatives attempt to conserve the best of the past, meaning that they are not mindless reactionaries but those who assess the outcomes of human actions qualitatively and preserve those that worked better than others, the ones that point toward those golden ages rather than degraded forms like modernity.
Contrary to what we are told by some modern public neoconservatives, conservatism is an eternal order that does not vary between societies. It is not a preservation of recent traditions, but of timeless truths. Conservatism exists as a philosophy and way of life, but this is implemented in public form through corporations like conservative media, conservative book publishers, and of course, the GOP and other conservative parties.
Many of these public conservatives, especially in America, attempt to play with the definition of conservatism because they want to bring their Leftist ideals into the conservative arena, and to re-define conservatism to mean a hybrid between Left and Right. These hybrids take two forms:
The term “neoconservative” in particular is used to mean a “classical liberal” who endorses modern methods of government:
also neoconservative; used in the modern sense by 1979:
My Republican vote [in the 1972 presidential election] produced little shock waves in the New York intellectual community. It didn’t take long – a year or two – for the socialist writer Michael Harrington to come up with the term “neoconservative” to describe a renegade liberal like myself. To the chagrin of some of my friends, I decided to accept that term; there was no point calling myself a liberal when no one else did. [Irving Kristol, “Forty Good Years,” “The Public Interest,” Spring 2005]
The term is attested from 1960, but it originally often was applied to Russell Kirk and his followers, who would be philosophically opposed to the later neocons.^(6)
Neoconservatives are essentially classical liberals, or those who recognized that the new Leftist order was doomed and decided the best way to protect themselves against it was to insist on individual freedom, protected by economics. Plato details this response in Chapter XVIII of The Republic:^(7)
The inevitable division: such a State is not one, but two States, the one of poor, the other of rich men; and they are living on the same spot and always conspiring against one another.
That, surely, is at least as bad.
Another discreditable feature is, that, for a like reason, they are incapable of carrying on any war. Either they arm the multitude, and then they are more afraid of them than of the enemy; or, if they do not call them out in the hour of battle, they are oligarchs indeed, few to fight as they are few to rule. And at the same time their fondness for money makes them unwilling to pay taxes.
…On the other hand, the men of business, stooping as they walk, and pretending not even to see those whom they have already ruined, insert their sting – that is, their money – into some one else who is not on his guard against them, and recover the parent sum many times over multiplied into a family of children: and so they make drone and pauper to abound in the State.

Yes, he said, there are plenty of them – that is certain.
The evil blazes up like a fire; and they will not extinguish it, either by restricting a man’s use of his own property, or by another remedy:
One which is the next best, and has the advantage of compelling the citizens to look to their characters: –Let there be a general rule that every one shall enter into voluntary contracts at his own risk, and there will be less of this scandalous money-making, and the evils of which we were speaking will be greatly lessened in the State.
…Then there is another class which is always being severed from the mass.
What is that?
They are the orderly class, which in a nation of traders is sure to be the richest.
They are the most squeezable persons and yield the largest amount of honey to the drones.
Why, he said, there is little to be squeezed out of people who have little.
…What else can they do?
And then, although they may have no desire of change, the others charge them with plotting against the people and being friends of oligarchy?

True.
And the end is that when they see the people, not of their own accord, but through ignorance, and because they are deceived by informers, seeking to do them wrong, then at last they are forced to become oligarchs in reality; they do not wish to be, but the sting of the drones torments them and breeds revolution in them.^(3)
Here, Plato establishes the path by which people become unwilling oligarchs, or those who rule by money and wish to pay no taxes, an analogue to our contemporary libertarians, classical liberals and neoconservatives. They dislike oligarchy, but when the state demands money to pay off the many impoverished and non-productive, they turn to oligarchy-within-democracy as a means of preserving their wealth.
This is the nature of the neoconservative, libertarian and classical liberal: someone who rationalizes decay, and defends against it with excuses for keeping their personal wealth. While they may be hybrids of conservatives, or even a type of conservative, their definition of conservative is not the whole of conservatism, and therefore is not conservatism itself.
These hybrids — like National Socialism and Fascism — reflect an interpretation of conservatism adapted to a Leftist framework. National Socialism and Fascism simply came later, attempting to adapt the ultra-modernist methods of the Communist total state or the Jacobin militarized empire to work for conservatism. This attempt misses what makes conservatism distinct, which is that it is a philosophy of how the order of nature operates, not an attempt to change that order into something else.
Consider the following duality:
Left: In politics, the portion of the political spectrum associated in general with egalitarianism and popular or state control of the major institutions of political and economic life. The term dates from the 1790s, when in the French revolutionary parliament the socialist representatives sat to the presiding officer’s left. Leftists tend to be hostile to the interests of traditional elites, including the wealthy and members of the aristocracy, and to favour the interests of the working class (see proletariat). They tend to regard social welfare as the most important goal of government. Socialism is the standard leftist ideology in most countries of the world; communism is a more radical leftist ideology.^(8)
Conservatism is a preference for the historically inherited rather than the abstract and ideal. This preference has traditionally rested on an organic conception of society—that is, on the belief that society is not merely a loose collection of individuals but a living organism comprising closely connected, interdependent members. Conservatives thus favour institutions and practices that have evolved gradually and are manifestations of continuity and stability. Government’s responsibility is to be the servant, not the master, of existing ways of life, and politicians must therefore resist the temptation to transform society and politics.^(9)
If we boil this down to its essence, conservatism is results-based where Leftism is methods-based. Conservatism favors an eternal natural order, instead of human desires and intentions. It means we recognize our place in the universe, and that mathematically-precise orders like natural selection and self-improvement rule us no matter what we do.
Conservatism at its heart is about seeing the world as it actually exists, and working within those confines. We do not have to like reality, but unlike progressives, we do not see it as clay to be shaped in our image.^(10)
Conservatism favors what works in the laboratory of history, where Leftism favors the theory of egalitarianism, which is conjectural and hypothetical but, like other theories such as anarchy or the absence of personal responsibility for the maintenance of civilization, eternally popular.
Like the terms “classical music” or “heavy metal,” “conservative” refers both to a genre and an era within it. In the case of conservatives, our philosophy was only given a name and semi-formalized after the rise of an alternative, egalitarianism or Leftism. As a result, “conservative” serves as a generic term for anyone who favored, to any degree, the order that preceded egalitarianism, and it is a “big tent” ranging from classical liberals with social conservative leanings all the way through monarchists.
Consider a control situation: ten people in a lifeboat. Two armed self-appointed leaders force the other eight to do the rowing while they dispose of the food and water, keeping most of it for themselves and doling out only enough to keep the other eight rowing. The two leaders now need to exercise control to maintain an advantageous position which they could not hold without it. Here the method of control is force – the possession of guns. Decontrol would be accomplished by overpowering the leaders and taking their guns. This effected, it would be advantageous to kill them at once. So once embarked on a policy of control, the leaders must continue the policy as a matter of self-preservation. Who, then, needs to control others but those who protect by such control a position of relative advantage? Why do they need to exercise control? Because they would soon lose this position and advantage and in many cases their lives as well, if they relinquished control.
Conservation is the conservative response to environmentalism:
planned management of a natural resource to prevent exploitation, destruction, or neglect
It advocates setting aside land for use by nature, without human intervention:
Alarmed by the public’s attitude toward natural resources as well as the exploitation of natural resources for private gain, conservationists called for federal supervision of the nation’s resources and the preservation of those resources for future generations. In President Theodore Roosevelt, the conservationists found a sympathetic ear and man of action. Conservation of the nation’s resources, putting an end to wasteful uses of raw materials, and the reclamation of large areas of neglected land have been identified as some of the major achievements of the Roosevelt era.
President Roosevelt’s concern for the environment was influenced by American naturalists, such as John Muir, and by his own political appointees, including Gifford Pinchot, Chief of Forestry. Working in concert with many individuals and organizations, the Roosevelt administration was responsible for the following: the Newlands Act of 1902, which funded irrigation projects from the proceeds of the sale of federal lands in the West; the appointment of the Inland Waterways Commission in 1907 to study the relation of rivers, soil, forest, waterpower development, and water transportation; and the National Conservation Commission of 1909, which was charged with drawing up long-range plans for preserving national resources. Along with a vocal group of conservationists, the Roosevelt administration created an environmental conservation movement whose words and actions continue to be heard and felt throughout the nation today.
Conservationism follows the ideals of deep ecology:
We believe that current problems are largely rooted in the following circumstances:
- The loss of traditional knowledge, values, and ethics of behavior that celebrate the intrinsic value and sacredness of the natural world and that give the preservation of Nature prime importance. Correspondingly, the assumption of human superiority to other life forms, as if we were granted royalty status over Nature; the idea that Nature is mainly here to serve human will and purpose.
- The prevailing economic and development paradigms of the modern world, which place primary importance on the values of the market, not on Nature. The conversion of Nature to commodity form, the emphasis upon economic growth as a panacea, the industrialization of all activity, from forestry to farming to fishing, even to education and culture; the rush to economic globalization, cultural homogenization, commodity accumulation, urbanization, and human alienation. All of these are fundamentally incompatible with ecological sustainability on a finite Earth.
- Technology worship and an unlimited faith in the virtues of science; the modern paradigm that technological development is inevitable, invariably good, and to be equated with progress and human destiny. From this, we are left dangerously uncritical, blind to profound problems that technology has wrought, and in a state of passivity that confounds democracy.
- Overpopulation, in both the overdeveloped and the underdeveloped worlds, placing unsustainable burdens upon biodiversity and the human condition.
Its ultimate expression may be the “Half Earth” idea of E.O. Wilson:
The crucial factor in the life and death of species is the amount of suitable habitat left to them. As defined by the theory of island biogeography, a change in area of a habitat results in a change in the sustainable number of species by approximately the fourth root. As reserves grow in size, the diversity of life surviving within them also grows. As reserves are reduced in area, the diversity within them declines to a mathematically predictable degree swiftly – often immediately and, for a large fraction, forever.
When 90% of habitat is removed, the number of species that can persist sustainably will descend to about a half. Such is the actual condition of many of the most species-rich localities around the world. In these places, if 10% of the remaining natural habitat were then also removed, most or all of the surviving resident species would disappear.
If, on the other hand, we protect half the global surface, the fraction of species protected will be 85%, or more. At one-half and above, life on Earth enters the safe zone.
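Wilson’s “fourth root” rule is the species-area relationship from island biogeography, S = c·A^z with z ≈ 0.25. A minimal sketch checks the two figures quoted above; the exponent z = 0.25 is the assumption Wilson uses, while empirical values vary by taxon and region:

```python
# Species-area relationship: the fraction of species a habitat can
# sustain scales as (area fraction) ** z, with z ~ 0.25 ("fourth root").
def species_fraction(area_fraction: float, z: float = 0.25) -> float:
    """Fraction of species sustainable when habitat shrinks to area_fraction."""
    return area_fraction ** z

# Removing 90% of habitat leaves roughly half the species:
print(round(species_fraction(0.10), 2))  # 0.56
# Protecting half the surface retains roughly 85%:
print(round(species_fraction(0.50), 2))  # 0.84
```

The same relation explains the final step of the quote: taking 10% of the *remaining* habitat down to zero drives the local area fraction to zero, so the surviving residents vanish entirely.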
This arises from older European approaches and makes a gentler version of the far-Right environmental policies of years past:
With Hess’s enthusiastic backing, the “green wing” was able to achieve its most notable successes. As early as March 1933, a wide array of environmentalist legislation was approved and implemented at national, regional and local levels. These measures, which included reforestation programs, bills protecting animal and plant species, and preservationist decrees blocking industrial development, undoubtedly “ranked among the most progressive in the world at that time.”60 Planning ordinances were designed for the protection of wildlife habitat and at the same time demanded respect for the sacred German forest. The Nazi state also created the first nature preserves in Europe.
Along with Darré’s efforts toward re-agrarianization and support for organic agriculture, as well as Todt and Seifert’s attempts to institutionalize an environmentally sensitive land use planning and industrial policy, the major accomplishment of the Nazi ecologists was the Reichsnaturschutzgesetz of 1935. This completely unprecedented “nature protection law” not only established guidelines for safeguarding flora, fauna, and “natural monuments” across the Reich; it also restricted commercial access to remaining tracts of wilderness. In addition, the comprehensive ordinance “required all national, state and local officials to consult with Naturschutz authorities in a timely manner before undertaking any measures that would produce fundamental alterations in the countryside.”
The crab mentality is the outlook that one must minimize others in order to succeed:
Crabs in a bucket can easily escape from the bucket one at a time, but instead of doing that they pull each other down whenever one rises to the top – thus ensuring their collective demise.
This is analogous to the behavior of a person who diminishes or pulls down anyone else who achieves or is about to achieve success greater than their own.
Others have noted this outlook in Enlightenment™/egalitarian thought:
Lo, this is the tarantula’s den! Would’st thou see the tarantula itself? Here hangeth its web: touch this, so that it may tremble.
There cometh the tarantula willingly: Welcome, tarantula! Black on thy back is thy triangle and symbol; and I know also what is in thy soul.
Revenge is in thy soul: wherever thou bitest, there ariseth black scab; with revenge, thy poison maketh the soul giddy!
Thus do I speak unto you in parable, ye who make the soul giddy, ye preachers of EQUALITY! Tarantulas are ye unto me, and secretly revengeful ones!
Cultural Marxism means the transformation of American culture through political correctness and pop culture:
While classical Marxism argued that capitalism and the class structure it created must be overthrown because it is oppressive to workers, cultural Marxism argues that it is not economics that creates oppression but rather the nuclear family, traditional morality and concepts of race, gender and sexual identity. These are the chains of tyranny which must be broken by revolution.
Cultural Marxism is the Marxist dialectic fused with Freudian theory and applied to identity and culture. Like all forms of Marxism, it is based upon categorizing people into abstract groups and then creating a narrative of historical oppression between them. The strategy of Marxists is always to cultivate a victimized group and then convince its members that solidarity is required against the oppressors. This creates resentment and hatred and is how Marxist ideologies fulfill their revolutionary objectives.
The cultural Marxism that our societies are infected with is a particularly Western phenomenon. After the Russian Revolution of 1917, Marxists in Europe believed that the dictatorship of the proletariat was at hand. They were wrong. The revolution failed to spread. In despair, and in one of Mussolini’s prisons, a young Italian socialist Antonio Gramsci wrote that the problem was the Christian bedrock of Western European cultures. He encouraged Marxists to develop a fifth column inside these countries to destroy the foundations of Western cultures.
This would lead to the long march through the institutions which would end in Leftist dominance of media:
When the socialist revolution failed to materialise beyond the Soviet Union, Marxist thinkers like Antonio Gramsci and Georg Lukacs tried to explain why. Their answer was that culture and religion blunted the proletariat’s desire to revolt, and the solution was that Marxists should carry out a “long march through the institutions” – universities and schools, government bureaucracies and the media – so that cultural values could be progressively changed from above.
Adapting this, later thinkers of the Frankfurt School decided that the key to destroying capitalism was to mix up Marx with a bit of Freud, since workers were not only economically oppressed, but made orderly by sexual repression and other social conventions. The problem was not only capitalism as an economic system, but the family, gender hierarchies, normal sexuality – in short, the whole suite of traditional western values.
While it had some counterparts in the Frankfurt School, Cultural Marxism is more likely simply Marxist-Leninism in a Westernized form:
The Frankfurt School invented the intellectual pestilences now known as Cultural Studies and Media Studies. They called their method Critical Theory or Social Theory. The gist of their interminable argument is that the reason the proles don’t join the revolution is that their thick heads are blunted by capitalist culture and sexual repression.
The phrase ‘cultural Marxism’ might even precede the Frankfurt School. Marx had applied his ideas to culture; the germ of ‘false consciousness’ theory lurks in Marx’s reflections on the French revolution of 1848 and his report on the Great Exhibition of 1851. The Frankfurters certainly didn’t invent the idea of a comprehensive Marxism of culture, either. The key ideas arose in the aftermath of World War One, from the Hungarian literary critic Georg Lukács, and the Italian communist Antonio Gramsci. It was Gramsci who adopted the dreaded term ‘hegemony’, probably from Lenin, and devised the strategy now known as ‘the long march through the institutions’.
William Lind of the American Conservative and the Free Congress Foundation seems to have been central to popularizing the idea that ‘multiculturalism and Political Correctness’ were the latest face of the Gramsci-Lukacs-Frankfurt program to destroy ‘Western culture and the Christian religion’ by mobilizing what Marcuse called ‘a coalition of blacks, students, feminist women and homosexuals’.
This fits with known Marxist strategies of subversion:
Bezmenov explained that the most striking thing about ideological subversion is that it happens in the open as a legitimate process. “You can see it with your own eyes,” he said. The American media would be able to see it, if it just focused on it.
Here’s how he further defined ideological subversion:
“What it basically means is: to change the perception of reality of every American to such an extent that despite of the abundance of information no one is able to come to sensible conclusions in the interest of defending themselves, their families, their community, and their country.”
The culture war refers to the struggle between Left and Right for the orientation of American culture:
For six decades or more, America’s political history has been driven by cultural warfare. This is usually traced to the social revolutions of the 1960s, but it began earlier. Culture War 1.0 began in the 1950s as religious enthusiasts sought to win hearts, minds, and souls for Christ in a society that was rapidly liberalizing and secularizing.
Culture War 2.0 rotates around three axes: 1) the new rules of engagement, 2) the correspondence theory of truth, and 3) the role intersectionality ought to play in everyone’s worldview.
The correspondence theory of truth basically states that objective truth exists and we can know something about it through evidence and reason. That is, there are objective truths to be known, and we gain reliable knowledge about them when our beliefs align with reality. It’s termed “the correspondence theory of truth” because a statement is considered true when it corresponds with reality and false when it does not.
In Culture War 2.0 the correspondence theory of truth—with its commitment to the idea that there are better and worse ways to come to knowledge about an objectively knowable world—is no longer common ground.
The culture war reflects a post-factual political landscape where ideology predominates over reality:
As long as America keeps sorting itself into two factions divided by geography, ethnicity and ideology, pitting a multiracial team of progressives who live in cities and inner-ring suburbs against a white team of conservatives who live in exurbs and rural areas, this is what debates about public policy—or for that matter about the FBI, the dictator of North Korea and the credibility of various sexual assault allegations—will look like. We will twist the facts into our partisan narratives.
This in turn reflects questions of identity which the Right resolves with either nationalism or patriotism, and the Left resolves with more egalitarian ideology:
Who and what constituted America was up for grabs in the 1960s. This was the decade that planted the seeds of the culture wars, according to Hartman, through a frontal assault on what he calls “normative America.” Before the ‘60s, the irreverent and unsettling sporadic messages of radical artists, academics, and politicians had largely failed to reach normative Americans, who continued to believe in God, hard work, American exceptionalism (“their nation was the best in human history”), and “traditional” gender roles. During the ‘60s, however, conflict, fracture, and dissent were unavoidable. Cultural disruption was no longer the exclusive province of little magazines, the occasional seminar room, and fringe political parties. With civil rights, anti-war protests, and the flowering counterculture, it was broadcast into American living rooms everyday on the nightly news.
The New Left was the most significant force in terms of reshaping American culture. This “loose configuration” of the antiwar, Black Power, feminist, and gay liberation movements may not have achieved their “utopian political dreams” but they did manage to change hearts and minds, fostering skepticism about the government, drawing attention to deeply entrenched racism, and challenging conventional ideas about gender and sexuality. For many on the Right, this cultural shift was an “abomination,” a loud and public denunciation of their most cherished values and beliefs. Hartman’s overarching argument is that the culture wars should be seen as a right-wing backlash against the ‘60s “cultural revolution.”
The 1970s, in Hartman’s view, were a transitional decade, providing an elaborate training ground for the culture wars to come. This decade saw the rise of neoconservatives, that argumentative faction led primarily by Jewish New York intellectuals—onetime liberals who had been “mugged by reality,” in Irving Kristol’s memorable phrase. In the pages of Commentary, Encounter, and Public Interest, the likes of Kristol, Gertrude Himmelfarb, and Norman Podhoretz formulated a neoconservative platform that attacked affirmative action, the welfare state, and identity politics while promoting colorblind social policies, personal responsibility, and the “fundamental goodness” of America and its institutions.
The federal government spent a record $3,727,014,000,000 in the first ten months of fiscal 2019 (October through July), according to the Monthly Treasury Statement released today.
Before this year, the most that the federal government had ever spent in the first ten months of a fiscal year was in fiscal 2009, when the Treasury spent $3,576,745,930,000 (in constant June 2019 dollars, adjusted using the Bureau of Labor Statistics inflation calculator).
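The parenthetical above relies on a standard inflation adjustment: a nominal figure is restated in constant dollars by scaling it by the ratio of price-index levels between the two dates. A minimal sketch of that arithmetic, where the index values in the usage line are round illustrative numbers rather than actual BLS CPI data:

```python
def to_constant_dollars(nominal: float, cpi_base: float, cpi_target: float) -> float:
    """Restate `nominal` (measured when the price index stood at cpi_base)
    in dollars of the period when the index stood at cpi_target."""
    return nominal * (cpi_target / cpi_base)

# Illustrative only: $100 spent when the index was 200, restated at index 250.
print(to_constant_dollars(100.0, 200.0, 250.0))  # 125.0
```

In practice one would pull the two CPI-U levels (here, fiscal 2009 and June 2019) from the BLS calculator and apply exactly this ratio.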
Across America, elected officials, taxpayer groups, and other researchers have launched a forensic accounting of state and municipal debt, and their fact-finding mission is rewriting the country’s balance sheet. Just a few years ago, most experts estimated that state and local governments owed about $2.5 trillion, mostly in the form of municipal bonds and other debt securities. But late last year, the States Project, a joint venture of Harvard’s Institute of Politics and the University of Pennsylvania’s Fels Institute of Government, projected that if you also count promises made to retired government workers and money borrowed without taxpayer approval, the figure might be higher than $7 trillion.
- Chicago’s combined Taxpayer Burden: $119,110
- New York City’s combined Taxpayer Burden: $85,600
- Los Angeles’ combined Taxpayer Burden: $56,390
- Philadelphia’s combined Taxpayer Burden: $50,120
- San Jose’s combined Taxpayer Burden: $43,120
- San Diego’s combined Taxpayer Burden: $35,410
- Dallas’ combined Taxpayer Burden: $33,490
- Houston’s combined Taxpayer Burden: $22,940
- San Antonio’s combined Taxpayer Burden: $16,660
- Phoenix’s combined Taxpayer Burden: $13,290
Meanwhile, record American household debt, near $14 trillion including mortgages and student loans, is some $1 trillion higher than during the Great Recession of 2008. Credit card debt of $1 trillion also exceeds the 2008 peak.
Total corporate debt has swelled from nearly $4.9 trillion in 2007 as the Great Recession was just starting to break out to nearly $9.1 trillion halfway through 2018, quietly surging 86 percent, according to Securities Industry and Financial Markets Association data. Other than a few hiccups and some fairly substantial turbulence in the energy sector in late-2015 and 2016, the market has performed well.
Quantitative Easing printed $3.5 trillion:
When conducting its quantitative easing programs, the Fed created brand new money out of thin-air (in digital form) and used it to buy Treasury bonds and mortgage-backed securities (MBS). These programs helped to boost the overall bond market, not just Treasuries or MBS.
Total outstanding non-financial corporate debt has increased by over $2.5 trillion or 40% since its 2008 high, which was already a dangerously high level in its own right.
The world’s debt pile is hovering near a record at $244 trillion, which is more than three times the size of the global economy, according to an analysis by the Institute of International Finance.
The global debt-to-GDP ratio exceeded 318 percent in the third quarter of last year, despite a stronger pace of economic growth, according to a report by the Washington-based IIF released on Tuesday. That’s slightly below a record 320 percent of GDP in the same quarter of 2016.
Pension debt (the driver of much city and state debt):
As of fiscal year 2015, the latest year for which complete accounts are available for all cities and states, governments reported unfunded liabilities of $1.378 trillion under recently implemented governmental accounting standards. However, we calculate using market valuation techniques that the true unfunded liability owed to workers based on their current service and salaries is $3.846 trillion. These calculations reflect the fact that accrued pension promises are a form of government debt with strong rights. These unfunded liabilities represent an increase of $434 billion over 2014, as realized asset returns fell far short of their targets.
The federal government also has debt that has not been accounted for, and which one doesn’t often hear about. The debt that has been accounted for is the $15.6 trillion held by the public in the form of US Treasury bonds. The debts that have not been accounted for include the deferred costs of maintenance on roads, water systems, and 54,560 structurally deficient bridges, as well as the yet-to-be-built low-carbon energy systems necessary to mitigate the catastrophic effects of climate change. And these are just two broad examples.
So, just how much hidden US debt is there? At this point, we must rely on rough estimates. For example, according to a 2016 report from the American Society of Civil Engineers (ASCE), upgrading the country’s crumbling infrastructure would cost $5.2 trillion.
Entitlements trust fund obligations:
As Social Security provides benefits to millions of retiring baby boomers, its costs will balloon to $1.4 trillion. That includes a rapidly increasing number of Social Security disability recipients. Their scheduled benefits will increase by more than $60 billion in the next decade.
In Social Security’s early years, the ratio of workers to retired beneficiaries was high—16 to 1. And, on average, individuals died about three years before they were due to collect benefits. Thus, tax revenue generally exceeded benefit payments. By 2035, the ratio of workers to retired beneficiaries is projected to drop to 2-to-1.
During this decade and the next, the number of Americans 65 or over will jump by 75%, while those of working age will nudge up by just 7%. During the next 17 years, 77 million workers will retire—that’s 10,000 people a day. Thirty-six million Americans are already retired.
The cost to make these programs financially solvent for the next 75 years is almost $40 trillion.
The primary idea of deep ecology is that civilization must reorganize to include nature as a necessary participant, on par with the economy or ideology. It comes from the deep ecology platform:
We believe that stopping the global extinction crisis and achieving true ecological sustainability will require rethinking our values as a society. Present assumptions about economics, development, and the place of human beings in the natural order must be reevaluated. Nature can no longer be viewed merely as a commodity—a storehouse of “resources” for human use and profit. It must be seen as a partner and model in all human enterprise.
We believe that current problems are largely rooted in the following circumstances:
- The loss of traditional knowledge, values, and ethics of behavior that celebrate the intrinsic value and sacredness of the natural world and that give the preservation of Nature prime importance. Correspondingly, the assumption of human superiority to other life forms, as if we were granted royalty status over Nature; the idea that Nature is mainly here to serve human will and purpose.
- The prevailing economic and development paradigms of the modern world, which place primary importance on the values of the market, not on Nature. The conversion of Nature to commodity form, the emphasis upon economic growth as a panacea, the industrialization of all activity, from forestry to farming to fishing, even to education and culture; the rush to economic globalization, cultural homogenization, commodity accumulation, urbanization, and human alienation. All of these are fundamentally incompatible with ecological sustainability on a finite Earth.
- Technology worship and an unlimited faith in the virtues of science; the modern paradigm that technological development is inevitable, invariably good, and to be equated with progress and human destiny. From this, we are left dangerously uncritical, blind to profound problems that technology has wrought, and in a state of passivity that confounds democracy.
- Overpopulation, in both the overdeveloped and the underdeveloped worlds, placing unsustainable burdens upon biodiversity and the human condition.
We believe that values other than market values must be recognized and given importance, and that Nature provides the ultimate measure by which to judge human endeavors.
Many see the ultimate fulfillment of this as coming from a plan such as Half Earth, which would set aside half of the land, air, and sea for use by natural ecosystems only:
The ongoing mass extinction of the natural world ranks with pandemics, world war, and climate change as among the greatest threats that humanity has imposed on itself. To lose so much of Earth’s biodiversity is to both destroy our living heritage, and to risk the stability of the planet, today and for all future generations.
Half-Earth is a call to protect half the land and sea in order to manage sufficient habitat to reverse the species extinction crisis and ensure the long-term health of our planet.
The president’s reference Tuesday to “Deep State Justice Dept” suggests that federal law enforcement is part of an entrenched bureaucracy that Trump and his supporters say didn’t want him to be elected and is actively working to undermine his presidency.
Elections may be useful in removing politicians, but the career bureaucrats who toil away in obscurity, often in blatant defiance of the Constitution, never go anywhere. Instead, they protect their turf as they dump an unfathomable number of regulations and decrees on the very taxpaying Americans who pay their salaries — tens of thousands of pages worth every year. And when there is a perceived threat to their power and agenda — say, for example, a president who promises to “drain the swamp” and rein in the bureaucracy — they react with fury. Meet the infamous “Deep State,” or at least one crucial component of it.
In a memo produced by Rich Higgins while he was serving as U.S. national security council director for strategic planning in the Trump administration, the “Deep State” is referred to multiple times. Under “The Deep State,” the document outlines the general idea: “The successful outcome of cultural Marxism is a bureaucratic state beholden to no one, certainly not the American people. [Emphasis added.] With no rule of law considerations outside those that further deep state power, the deep state truly becomes, as Hegel advocated, god bestriding the earth.” Throughout the memo, there are more than half a dozen references to this “Deep State,” including the idea that Democratic leadership “protects cultural Marxist programs of action and facilitates the relentless expansion of the deep state.” Even the Republican leadership, in cooperation with “globalists, corporatists, and the international financial interests,” is willing to “service the deep state,” Higgins explained.
Though the deep state is sometimes discussed as a shadowy conspiracy, it helps to think of it instead as a political conflict between a nation’s leader and its governing institutions.
In Egypt, for instance, the military and security services actively undermined Mohamed Morsi, the country’s democratically elected Islamist president, contributing to the upheaval that culminated in his ouster in a 2013 coup.
Mr. Flynn, in his short tenure, exemplified the breakdown between the president’s inner circle and career civil servants…“the deep state is not official institutions rebelling,” [El Amrani] said, but rather “shadowy networks within those institutions, and within business, who are conspiring together and forming parallel state institutions.”
Dwight Eisenhower delivered his farewell address after serving two terms as U.S. president; the five-star general chose to warn Americans of this specific threat to democracy: “In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.” That warning was issued prior to the decadelong escalation of the Vietnam War, three more decades of Cold War mania, and the post-9/11 era, all of which radically expanded that unelected faction’s power even further.
This is the faction that is now engaged in open warfare against the duly elected and already widely disliked president-elect, Donald Trump.
Their most valuable instrument is the U.S. media, much of which reflexively reveres, serves, believes, and sides with hidden intelligence officials.
Yes, there is another government concealed behind the one that is visible at either end of Pennsylvania Avenue, a hybrid entity of public and private institutions ruling the country according to consistent patterns in season and out, connected to, but only intermittently controlled by, the visible state whose leaders we choose. My analysis of this phenomenon is not an exposé of a secret, conspiratorial cabal; the state within a state is hiding mostly in plain sight, and its operators mainly act in the light of day. Nor can this other government be accurately termed an “establishment.” All complex societies have an establishment, a social network committed to its own enrichment and perpetuation. In terms of its scope, financial resources and sheer global reach, the American hybrid state, the Deep State, is in a class by itself.
But, like virtually every employed person, I became, to some extent, assimilated into the culture of the institution I worked for, and only by slow degrees, starting before the invasion of Iraq, did I begin fundamentally to question the reasons of state that motivate the people who are, to quote George W. Bush, “the deciders.”
Cultural assimilation is partly a matter of what psychologist Irving L. Janis called “groupthink,” the chameleon-like ability of people to adopt the views of their superiors and peers. This syndrome is endemic to Washington: The town is characterized by sudden fads, be it negotiating biennial budgeting, making grand bargains or invading countries.
Definition of bureaucracy
1. a. a body of nonelective government officials
   b. an administrative policy-making group
2. government characterized by specialization of functions, adherence to fixed rules, and a hierarchy of authority
3. a system of administration marked by officialism, red tape, and proliferation
careerism: devotion to a successful career, often at the expense of one’s personal life, ethics, etc.
But the greatest sin of all for Francis is perhaps careerism; he chides those who honor people rather than God.
Searching for analogies and differences, I find that in common, routine nonconformity, mistake, misconduct, and disaster are systematically produced by the interconnection between environment, organizations, cognition, and choice. These patterns amplify what is known about social structure and have implications for theory, research, and policy.
This research started with the claim that managers had distinct interests and an ability to control the firm that led corporations to different goals than those intended by their owners (Berle & Means, 1932). The “managerialist thesis” contained a general view of organizations as arenas with contested goals, as well as specific suggestions on how managerial goals of stable growth rates led to foregone profits, an idea that is connected to research on goal displacement (Selznick, 1949) and dominant coalitions (Cyert & March, 1963) in organizational theory.
How Democrats and Republicans “switched positions”:
Republicans didn’t immediately adopt the opposite position of favoring limited government. “Instead, for a couple of decades, both parties are promising an augmented federal government devoted in various ways to the cause of social justice,” Rauchway wrote in a 2010 blog post for The Chronicle of Higher Education. Only gradually did Republican rhetoric drift to the counterarguments. The party’s small-government platform cemented in the 1930s with its heated opposition to the New Deal.
But why did Bryan and other turn-of-the-century Democrats start advocating for big government? According to Rauchway, they, like Republicans, were trying to win the West…Democrats seized upon a way of ingratiating themselves to western voters: Republican federal expansions in the 1860s and 1870s had turned out favorable to big businesses based in the northeast, such as banks, railroads and manufacturers, while small-time farmers like those who had gone west received very little. Both parties tried to exploit the discontent this generated, by promising the little guy some of the federal largesse that had hitherto gone to the business sector. From this point on, Democrats stuck with this stance — favoring federally funded social programs and benefits — while Republicans were gradually driven to the counterposition of hands-off government.
This intensified in the 1960s:
Born and raised a Democrat in the Solid South, Helms switched parties in 1970, two years before his first Senate run. In 1974, Helms remarked of his decision:
The party veered so far to the left nationally, and was taken over by the people whom I’d describe as substantially left of center in North Carolina. And I think I felt, as many other Democrats felt and feel, that really I had no real faith in the party. But I didn’t do anything about it. Changing parties, changing party registration, is like moving from a church. But President Nixon’s speech at Kansas State, I think it was, persuaded me that maybe the Republican party in North Carolina and in the nation had a chance to restore the two party system.
After the New Deal, the Supreme Court’s desegregation ruling in Brown v. Board in 1954, and the civil-rights movement, Helms shepherded white conservatives of the Solid South to the Republican Party, but continued the old Democratic Party’s hard line against civil-rights reforms.
Republicans opposed the Civil Rights Act because they saw it as bad law:
Goldwater wanted to support the Civil Rights Act of 1964, as he had the civil rights acts of 1957 and 1960. But he reluctantly decided he could not, because he could see that the bill’s Title II and Title VII were unconstitutional. He predicted that Title VII, which dealt with employment, would end in the government dictating hiring and firing policy for millions of Americans. So it has come to pass.
[Goldwater] was half-Jewish and as a private citizen and U.S. senator had fought discrimination time and again. He led the way in desegregating the Arizona Air National Guard in 1946, two years before President Truman desegregated the armed forces. He was an early member of the Phoenix chapters of the NAACP and the Urban League, even making up the latter’s operating deficit when it was getting started. He desegregated the Senate cafeteria in early 1953, demanding that his black legislative assistant be served along with every other Senate employee, after learning she had been denied service.
Demographic Replacement is a Leftist theory that white Americans are dying out while being replaced by high numbers of third world immigrants. Most Leftists seem to view this as logical, necessary, and good, as well as being a natural result of the Hart-Celler Act in the 1960s.
The theory comes to us from The New Republic, which ran an article about “post-white America”:
But whites’ tenure as America’s mainstream population is on the wane, in a demographic sense.
The most recent information from the census and elsewhere shows how quickly the shift is happening. From 2000 to 2010, a decade during which the white population as a whole grew by just 1.2 percent, the number of white children in the United States declined by 4.3 million. Meanwhile the child populations of Hispanics, Asians, and people of two or more races were increasing. In comparative terms, whites constituted just 53 percent of America’s young people (down from nearly 70 percent in 1990) while Hispanics constituted 23 percent (up from just 12 percent).
The Pew survey found marked differences between baby boomers and millennials—who are known for their racial inclusiveness—with regard to agreement that the following are changes for the better: that more people of different races are marrying each other (36 percent versus 60 percent), that the population of Hispanics is growing (21 percent versus 33 percent), and that the population of Asians is growing (24 percent versus 43 percent).
Not to be outdone, National Geographic picked up the mythos and ran with it:
The U.S. Census Bureau has projected that non-Hispanic whites will make up less than 50 percent of the population by 2044, a change that almost certainly will recast American race relations and the role and status of white Americans, who have long been a comfortable majority.
Hazleton’s experience offers a glimpse into the future as white Americans confront the end of their majority status, which often has meant that their story, their traditions, their tastes, and their cultural aesthetic were seen as being quintessentially American. This is a conversation already exploding across the country as some white Americans, in online forums and protests over the removal of Confederate monuments, react anxiously and angrily to a sense that their way of life is under threat. Those are the stories that grab headlines and trigger social media showdowns. But the shift in status—or what some are calling “the altitude adjustment”—is also playing out in much more subtle ways in classrooms, break rooms, factory floors, and shopping malls, where the future has arrived ahead of schedule. Since 2000, the minority population has grown to outnumber the population of whites who aren’t Hispanic in such counties as Suffolk in Massachusetts, Montgomery in Maryland, Mecklenburg in North Carolina, as well as counties in California, Colorado, Florida, Georgia, New Jersey, and Texas.
In particular, this means that minority groups have the swing vote on who wins national elections:
By 2020, the report estimates, the percentage of eligible voters who fall into the category of “white without a college degree” will drop by 2 points, from 46 percent in 2016 to 44 percent. Meanwhile, voter segments that tend to favor Democrats will all grow as a share of the total eligible vote: “whites with a college degree,” African-Americans, Hispanics, and Asians/other are all forecast to climb by one percentage point.
But if the third-party vote in 2020 looks more like its historical norm and those voters go back to their home parties, the report says the 2020 election could yield an extraordinary deadlock.
This has caused some pushback against the demographic change:
Ann Coulter said that Donald Trump will continue to do well in the polls as long as he keeps talking about immigration.
“The voters keep saying, ‘We don’t want any more immigration,’” Coulter said. “That’s why Trump is so popular. So pick it up, Republicans.”
She also pointed out that demographic change has radically transformed America:
From 1620 to 1970, the U.S. was demographically stable — not to be confused with “a nation of immigrants.” The country was about 85% to 90% white, almost entirely British, German, French and Dutch, and 10% to 15% African American. (The American Indian population, technically in their own nations, steadily plummeted — an example of how vast numbers of new people can displace the old, both accidentally and on purpose.)
In a generation, the white majority has nearly disappeared, while the black percentage has remained about the same, with more than 90% of African Americans still native-born. White Americans are one border surge away from becoming a minority in their own country.
If everyone assimilated to our culture, who cares what race they are? But given sufficient numbers, they don’t. They don’t need to, and we certainly aren’t asking them to. The reason we successfully assimilated not-so-different European cultures was that we controlled the numbers — essentially stopping immigration for 50 years while we forged an American character.
This demographic change was decided in the 1960s:
Compared to almost entirely European immigration under the national-origins system, flows since 1965 have been more than half Latin American and one-quarter Asian. The largest share of today’s immigrant population, about 11.6 million, is from Mexico. Together with India, the Philippines, China, Vietnam, El Salvador, Cuba, South Korea, the Dominican Republic, and Guatemala, these ten countries account for nearly 60 percent of the current immigrant population.
This act caused the “fundamental transformation” that Barack Obama spoke about:
In the decades following Hart-Celler, America experienced drastic changes in both the numbers and origins of immigrants. The number of immigrants entering the U.S. after 1965 rose significantly, from approximately 250,000 in the 1950s to 700,000 by the 1980s. Doors were opened to large-scale immigration from Eastern Europe, Asia, Latin America, and the Caribbean, where extremely motivated immigrants took advantage of the family reunification provisions of the law to engage in chain migration, bringing an average of 2 relatives to the U.S. for each new green card granted. Within a few decades of Hart-Celler, family unification had become the driving force in U.S. immigration, favoring those who were most determined to move: exactly those nationalities the critics of the Act had hoped to keep out.
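An average of 2 sponsored relatives per green card implies geometric growth once each arrival can in turn sponsor relatives of their own. A rough sketch under simplifying assumptions (a fixed multiplier, no attrition, discrete sponsorship waves):

```python
# Rough geometric-growth sketch of the chain migration implied by the
# "2 relatives per new green card" average quoted above. The multiplier,
# wave count, and zero-attrition assumption are simplifications.

def chain_total(seed, relatives_per_immigrant, rounds):
    """Total arrivals after `rounds` waves of family sponsorship."""
    total, wave = seed, seed
    for _ in range(rounds):
        wave *= relatives_per_immigrant  # each arrival sponsors this many
        total += wave
    return total

# 1 original immigrant, 2 relatives each, 3 sponsorship waves:
# 1 + 2 + 4 + 8 = 15 arrivals from a single initial green card.
print(chain_total(1, 2, 3))  # 15
```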
This radical transformation of demographics was bound to cause theories such as the one expressed by New Republic:
“The bill that we sign today is not a revolutionary bill,” President Johnson said during the signing ceremony. “It does not affect the lives of millions. It will not reshape the structure of our daily lives, or really add importantly to either our wealth or our power.” Senator Ted Kennedy (D-MA), the bill’s floor manager, stated: “It will not upset the ethnic mix of our society.” Even advocacy groups who had favored the national-origins quotas became supporters, predicting little change to the profile of immigration streams.
Despite these predictions, the measure had a profound effect on the flow of immigrants to the United States, and in only a matter of years began to transform the U.S. demographic profile. The number of new lawful permanent residents (or green-card holders) rose from 297,000 in 1965 to an average of about 1 million each year since the mid-2000s (see Figure 1). Accordingly, the foreign-born population has risen from 9.6 million in 1965 to a record high of 45 million in 2015 as estimated by a new study from the Pew Research Center Hispanic Trends Project. Immigrants accounted for just 5 percent of the U.S. population in 1965 and now comprise 14 percent.
This was motivated by an earlier demographic replacement, that of WASPs by mixed-ethnic whites, as noted by the National Geographic article linked above:
“These days, I understand the WASPs.” Glover explains that he was born in the 1970s to a family of mixed European origin—Jewish, Irish, Greek, German, Slovene, people once not seen as fully white by the gatekeepers of social class. But over time they moved into the mainstream. “I definitely felt that I was a white American, which I understood to mean just plain American,” he says.
These new Americans, fueled by waves of European immigrants like Glover’s great-grandparents, were starting to displace the white Anglo-Saxon Protestants who had run the country for two centuries. In a short, candid essay he submitted to the Race Card Project, Glover wrote, “We had taken over their colleges, their clubs, and even the White House,” referring to the election of an Irish Catholic president, John F. Kennedy, in 1960.
“Well, now we’re in their shoes,” he wrote. “People of Color are moving into the mainstream now; ‘White’ is no longer the default setting for ‘American.’ And though it’s clear that this process is inevitable—it’s just a matter of numbers and demographics—a lot of the time, to be honest, I’m sad about it. The country is changing in ways that aren’t very good for me, and I’ve got no choice but to adapt. I’m not complaining; it’s only fair that other people get the same opportunity we got. But now I find myself looking back at the WASPs with new respect. Though there were many notable exceptions, for the most part during their fall from power they conducted themselves with quiet dignity. I’m sure it didn’t feel good for them at the time, but for the most part they just got on with their lives. We could learn from their example.”
Some have noted that the intention, with both the ethnic (Irish, Italian, Greek, Jewish, Spanish, Slavs) and racial (Hispanic, African, Asian, mixed) demographic shifts, was simply to provide a permanent audience for the Left, since the places of origin of these immigrants tend to be Leftist:
Terry McAuliffe knew it would benefit his party if felons could vote again, so he unilaterally restored those rights and attacked anyone who disagreed as a bigot. Remember that? Well currently, control of Virginia’s House of Delegates hinges on a single race where the candidates literally tied with more than 23,000 votes cast.
In other words, McAuliffe’s gambit worked, so why not try it on a national scale, with a group far larger than just convicted felons?
Democrats know if they keep up the flood of illegals into the country, they can eventually turn it into a flood of voters for them. They don’t have to foster economic growth, or be capable administrators, or provide good government. They just have to keep the pump flowing, and power will be theirs.
If this is true, demographic replacement probably qualifies as genocide under the UN definition:
Any of the following acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group, as such: killing members of the group; causing serious bodily or mental harm to members of the group; deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part; imposing measures intended to prevent births within the group; [and] forcibly transferring children of the group to another group.
The New York Times, however, wants us to believe that “demographic replacement” and “replacement theory” are Right-wing conspiracy ideas instead:
Behind the idea is a racist conspiracy theory known as “the replacement theory,” which was popularized by a right-wing French philosopher. An extension of colonialist theory, it is predicated on the notion that white women are not having enough children and that falling birthrates will lead to white people around the world being replaced by nonwhite people.
And like so many fundamentalist ideologies, the foundation of this one requires the subjugation of women.
“In their minds, in this clash of civilization, white men are in a weaker position because their women are not doing the work of reproducing,” said Arun Kundnani, a professor at New York University and author of “The Muslims Are Coming! Islamophobia, Extremism and the Domestic War on Terror.” “They are saying, ‘Look, Muslims have got their women where they need to be, and we’re not doing a good job at that.’”
Unfortunately for The New York Times, much of the rest of Leftist media is celebrating replacement theory as a Leftist victory:
The U.S. white majority will soon disappear forever
While estimates are that the USA is 76.6% white, we are already seeing a demographic shift that will impact the 2020 election, as Leftists intended:
Hispanics are expected to outnumber whites in Texas by 2020 and expected to make up the state’s majority population by 2042.
That’s according to a recent report from the Office of the State Demographer, which outlines numerous population projections through 2050.
That covers the important state of Texas, but the impact will be nationwide:
And in another first, there will be more Hispanic voters eligible to vote than African Americans, according to the analysis from Pew Research Center.
The growth of non-white voters, which Pew said favored Democrat Hillary Rodham Clinton in the 2016 election, comes at the expense of the white vote.
Pew said that the white vote will total 66.7 percent in 2020, down from 76.4 in 2000.
This mirrors what happened to California through demographic replacement:
A better reason California turned blue is the liberal immigration policies pushed by Republicans in the era of Reagan and Bush. Measures such as the 1986 amnesty and the Immigration Act of 1990 both had a tremendous impact on California and significantly altered its demographics.
In 1980, Hispanics made up 19 percent of the state. Latinos are now the largest ethnic group in the state and make up around 40 percent of its population.
California was the leading destination for illegal immigrants in the 1980s, with an estimated yearly inflow of over 200,000 aliens into the state, around the time Reagan’s amnesty was passed, according to the Public Policy Institute of California. That amnesty legalized nearly three million illegal aliens and offered them a path to citizenship.
This also mirrors what has happened to America after the 1965 Hart-Celler Act:
That act is rarely mentioned when recounting the high points of 1960s liberalism, but its impact arguably rivals the Voting Rights Act, the creation of Medicare, or other legislative landmarks of the era. It transformed a nation 85 percent white in 1965 into one that’s one-third minority today, and on track for a nonwhite majority by 2042.
In the 1950s, 53 percent of all immigrants were Europeans and just 6 percent were Asians; by the 1990s, just 16 percent were Europeans and 31 percent were Asians. The percentages of Latino and African immigrants also jumped significantly.
By adding so many Asians, Latinos, and African immigrants, Rosenberg says, the act changed the racial narrative in America from one of oppression – the white-black divide dating to slavery – to one of diversity. That change was strongly echoed in the Obama campaign, which emphasized the candidate’s mixed-race background as making him representative of a new generation of Americans.
The Dunning-Kruger Effect holds that unskilled people are unable to gauge their own level of cognitive competence and therefore assume they are more competent than they are, since they are oblivious to the areas outside their understanding necessary for having a complete conceptual picture:
People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.
Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd.
The full paper is worth reading.
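The 12th-versus-62nd-percentile gap can be mimicked with a toy model in which self-estimates blend one's true percentile with a flattering anchor. The blend weight and anchor below are illustrative assumptions, not parameters from the paper:

```python
# Toy model of the self-assessment gap described above: each person's
# self-estimated percentile is a blend of their true percentile and a
# flattering anchor in the middle-upper range. Purely illustrative.

def self_estimate(true_percentile, anchor=65.0, weight=0.25):
    """Blend true standing (0-100) with a flattering anchor."""
    return weight * true_percentile + (1 - weight) * anchor

# Bottom quartile: true standing around the 12th percentile on average.
bottom = self_estimate(12.0)
# Top quartile: true standing around the 88th percentile on average.
top = self_estimate(88.0)

print(round(bottom))  # ~52: gross overestimation, echoing the 12th -> 62nd gap
print(round(top))     # ~71: mild underestimation by the skilled
```

The same blending produces both effects at once: the unskilled are pulled far up toward the anchor, while the skilled are pulled slightly down toward it.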
Correspondingly, there is the Downing Effect, which states, among other things, that smarter people underestimate their intelligence:
His studies also evidenced that the ability to accurately estimate others’ IQ was proportional to one’s own IQ. This means that the lower the IQ of an individual, the less capable they are of appreciating and accurately appraising others’ IQ. Therefore individuals with a lower IQ are more likely to rate themselves as having a higher IQ than those around them. Conversely, people with a higher IQ, while better at appraising others’ IQ overall, are still likely to rate people of similar IQ as themselves as having higher IQs.
As we noted in this space earlier, while Clinton’s overall margin looks large and impressive, it is due to Clinton’s huge margin of victory in one state — California — where she got a whopping 4.3 million more votes than Trump.
California is the only state, in fact, where Clinton’s margin of victory was bigger than President Obama’s in 2012 — 61.5% vs. Obama’s 60%.
But California is the exception that proves the true genius of the Electoral College — which was designed to prevent regional candidates from dominating national elections.
EMTALA is a federal statute designed to prevent hospitals from turning away patients:
The Federal Emergency Medical Treatment and Labor Act (EMTALA) was passed as a part of the Consolidated Omnibus Budget Reconciliation Act of 1986, also called COBRA. It is a Federal statute which governs when and how a patient may be 1) refused treatment or 2) transferred from one hospital to another. Tellingly, the statute is also known as the “Patient Anti-Dumping Law”, which provides a clue as to its intent. The purpose of EMTALA is essentially to prevent hospitals from rejecting patients, refusing to treat them, or transferring them to “county hospitals” because they are unable to pay or are covered under Medicare/Medicaid.
One of the many provisions of the statute is to provide treatment without regard to the ability to pay. Any inquiry into payment ability is not to discourage individuals from remaining in the emergency department, or to delay stabilizing treatment based on the patient’s ability to pay. Operationally, this can pose a challenge to the revenue cycle in the emergency department.
Patients are sorted, or “triaged”, based upon acuity level. Patients that present minor or non-emergent symptoms may be expedited, or “fast-tracked”, through the emergency department, to make room and time for patients with more extensive injuries. It is not uncommon for hospitals to create an operational goal based upon the length of stay of these “fast track” patients.
Many health care organizations have wisely interpreted EMTALA conservatively in order to avoid liability.
It was made necessary by Medicare/Medicaid:
Before the 1980s, private hospitals charged patients according to their ability to pay, and this “cost shifting” allowed them to deliver a small amount of charity care. Over the years, this amount dwindled. Recent Internal Revenue Service reports found that 45 percent of private hospitals spend 4.8 percent or less of their revenues on uncompensated care. In contrast, public hospitals spend more than four times that amount (18.1 percent) on uncompensated care.
In 1983 the federal government established through Medicare a system that placed caps on how much hospitals could charge for treating patients with given diagnoses. This system, with charges tied to diagnosis-related groups (DRGs), made cost-shifting impossible, and, after its implementation, hospitals lost financial support for charity care. As changes in the economic climate made it more difficult for hospital EDs to care for indigent patients, reports surfaced that uninsured and publicly insured patients were either unable to access emergency care or were redirected from private EDs to public EDs.
Commentators who imply a causal relationship between EMTALA’s enactment and the nation’s health care crisis cite the surge in ED use from 85 million to almost 115 million visits per year, the closing of more than 560 hospitals and 1,200 EDs, and the shuttering of many trauma centers, maternity wards, and tertiary referral centers. In 90 percent of larger hospitals, the capacity to treat patients is saturated, primarily because of the lack of money to support inpatient critical care beds and nurses to staff them. The emergency care capability that does exist is plagued by rampant emergency medical services diversion and ED overcrowding, which alone accounts for 33 percent increases in wait times and has tripled the number of individuals who leave the ED before being seen.
Not surprisingly, this is bankrupting hospitals:
From 1994 through 2004, the number of ED visits increased 18 percent from 93.4 million to 110.2 million visits annually. This is an average increase of more than 1.5 million visits per year spread over all age groups. During roughly the same period, the United States also experienced a net loss of 703 hospitals, 198,000 hospital beds, and 425 EDs, mainly in response to rising costs of care and lower reimbursements by managed care organizations and other payers, including Medicare and Medicaid.
It has also reduced the effectiveness of emergency care:
America’s emergency rooms (ERs) are in crisis. Crowding, delays, and diversions have increased to epidemic proportions. In the United States healthcare system, ER visits account for 11% of outpatient encounters, 28% of acute care visits, and 50% of hospital admissions. By default, ERs have become, as noted in the 2006 Institute of Medicine report, “the safety net of the safety net”. For many Americans, it is now a place of last and first resort.
Entitlements are government payments or payments-in-kind made directly to citizens.
1 a : the state or condition of being entitled : right
b : a right to benefits specified especially by law or contract
2 : belief that one is deserving of or entitled to certain privileges
3 : a government program providing benefits to members of a specified group also : funds supporting or distributed by such a program
They are called entitlements because they resemble property rights:
entitlement – A Federal program or provision of law that requires payments to any person or unit of government that meets the eligibility criteria established by law. Entitlements constitute a binding obligation on the part of the Federal Government, and eligible recipients have legal recourse if the obligation is not fulfilled. Social Security and veterans’ compensation and pensions are examples of entitlement programs.
They are the fastest-rising area of spending since 1950:
In 2010, entitlement spending had grown to be almost 100 times higher than it was in 1960; it has increased by an explosive 9.5 percent per year for 50 straight years. Entitlement transfer payments to individuals (such as for income, healthcare, age, and unemployment) have been growing twice as fast as per capita income for 20 years, totaling $2.2 trillion in 2010 alone—which was greater than the entire gross domestic product of Italy and roughly the same as the GDP of Great Britain.
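The “almost 100 times higher” figure follows directly from the stated growth rate; a quick sketch of the compounding arithmetic (assuming a constant 9.5 percent annual rate over the 50 years from 1960 to 2010, as the excerpt states):

```python
# Compound growth at 9.5% per year for 50 years (1960 to 2010)
rate = 0.095
years = 50
multiple = (1 + rate) ** years
print(f"Growth multiple over {years} years: {multiple:.1f}x")  # roughly 93x
```

A sustained 9.5 percent annual increase multiplies the base by about 93, which is the “almost 100 times” the excerpt describes.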
In 1960, entitlement spending accounted for less than a third of all federal spending; in 2010, it was just about two thirds of government outlays, with everything else—defense, justice, all the other duties of government—making up less than one third. Over the last half-century, income-related assistance (which we used to call “welfare”) multiplied more than thirtyfold after adjusting for inflation. The most shocking growth has been in Medicare and Medicaid. In the early 1960s, neither program existed; by 2010, these two programs cost more than $900 billion a year.
Half of all American households currently receive transfer payments from the government. According to the Census Bureau, only 30 percent of American households in the 1980s relied on any public assistance.
Because many government benefits to individuals are in the form of cash, the share of government transfers as a percentage of personal income has grown over the last four decades, which means people are actually earning less of their incomes.
This may in turn be killing our economy:
The long-time central bank chief repeated his warnings about the weight that Social Security, Medicare and other programs are having on what have been otherwise solid gains over the past few years.
“I think the real problem is over the long run, we’ve got this significant continued drain coming from entitlements, which are basically draining capital investment dollar for dollar,” he told CNBC’s Sara Eisen during a “Squawk on the Street” interview.
“Without any major change in entitlements, entitlements are going to rise. Why? Because the population is aging. There’s no way to reverse that, and the politics of it are awful, as you well know,” Greenspan added.
We were warned against this:
I have long been convinced that institutions purely democratic must, sooner or later, destroy liberty or civilization, or both. In Europe, where the population is dense, the effect of such institutions would be almost instantaneous. What happened lately in France is an example. In 1848 a pure democracy was established there. During a short time there was reason to expect a general spoliation, a national bankruptcy, a new partition of the soil, a maximum of prices, a ruinous load of taxation laid on the rich for the purpose of supporting the poor in idleness. Such a system would, in twenty years, have made France as poor and barbarous as the France of the Carlovingians. Happily the danger was averted, and now there is a despotism, a silent tribune, an enslaved press. Liberty is gone: but civilisation has been saved. – T.B. Macaulay, “Letter to Henry Stephens Randall,” May 23, 1857
A democracy cannot exist as a permanent form of government. It can only exist until the majority discovers it can vote itself largess out of the public treasury. After that, the majority always votes for the candidate promising the most benefits with the result the democracy collapses because of the loose fiscal policy ensuing, always to be followed by a dictatorship, then a monarchy. – Elmer T. Peterson, “This is the Hard Core of Freedom,” The Daily Oklahoman, December 9, 1951
Let us never forget this fundamental truth: the State has no source of money other than money which people earn themselves. If the State wishes to spend more it can do so only by borrowing your savings or by taxing you more. It is no good thinking that someone else will pay – that ‘someone else’ is you. There is no such thing as public money; there is only taxpayers’ money. – Margaret Thatcher, “Speech to Conservative Party Conference,” October 14, 1983
I think they’ve made the biggest financial mess that any government’s ever made in this country for a very long time, and Socialist governments traditionally do make a financial mess. They always run out of other people’s money. It’s quite a characteristic of them. They then start to nationalise everything, and people just do not like more and more nationalisation, and they’re now trying to control everything by other means. They’re progressively reducing the choice available to ordinary people. – Margaret Thatcher, “TV Interview For Thames TV This Week,” February 5, 1976
See also Budget.
Externalization refers to two things: (1) the tendency of individuals to delegate control of their minds to the group, state, or other external influence, and (2) the tendency of individuals to pass on costs to society at large.
People prefer electric shocks to being left alone with their thoughts:
One study published in Science even found that people would rather do mundane activities or — wait for it — administer electric shocks to themselves than be left alone with their thoughts.
“I think most people use distraction and are afraid to be alone and or sit in silence due to fear of unresolved feelings or thoughts that could come up,” says Kelley Kitley, LCSW, psychotherapist and owner of Serendipitous Psychotherapy in Chicago.
Our fear of contemplation comes from a sense that things are unwell out there:
Prof Ivo Vlaev, a behavioural psychologist at Warwick University and Imperial College, London, thinks the findings are “very interesting” but the electric shocks could be over-emphasised.
“The bottom line is that they felt miserable,” he told BBC News. “Research has shown that happiness is not only about experiencing pleasure. You need a sense of meaning and purpose – which you lack in these conditions. And when you have a task to do, you do have that sense – even if it’s a simple task.”
This is a process known as atomization:
In his 1995 essay, sociologist Robert Putnam warned of the increasing atomization of American society. The institutions of American social capital, he wrote, are on the decline: Attendance at public forums, religious groups, civic organizations, and even his eponymous bowling leagues have been steadily declining since the heyday of the 1950s American suburban community. The social fabric of America is coming apart on the neighborhood level, wrote Putnam—and it’s only going to get worse.
Unfortunately, it seems Putnam was on to something. In a report for urbanism think-tank City Observatory, economist Joe Cortright tracks the decline of American social capital over the past 40 years not simply in terms of membership to voluntary organizations, but also through the relationships Americans have with their geographical neighbors. Data used in the report from the General Social Survey doesn’t paint a pretty picture: According to Cortright, the degree to which Americans trust one another is at a 40-year low.
An externality is an economic term referring to a cost or benefit incurred or received by a third party. However, the third party has no control over the creation of that cost or benefit.
Pollution emitted by a factory that muddies the surrounding environment and affects the health of nearby residents is a negative externality. The effect of a well-educated labor force on the productivity of a company is an example of a positive externality.
People seem to confuse these terms around here, so it’s time for us to do what researchers and academics do… go to primary sources!
Fascism [is] the complete opposite of…Marxian Socialism, the materialist conception of history of human civilization can be explained simply through the conflict of interests among the various social groups and by the change and development in the means and instruments of production…. Fascism, now and always, believes in holiness and in heroism; that is to say, in actions influenced by no economic motive, direct or indirect. And if the economic conception of history be denied, according to which theory men are no more than puppets, carried to and fro by the waves of chance, while the real directing forces are quite out of their control, it follows that the existence of an unchangeable and unchanging class-war is also denied – the natural progeny of the economic conception of history. And above all Fascism denies that class-war can be the preponderant force in the transformation of society
After Socialism, Fascism combats the whole complex system of democratic ideology, and repudiates it, whether in its theoretical premises or in its practical application. Fascism denies that the majority, by the simple fact that it is a majority, can direct human society; it denies that numbers alone can govern by means of a periodical consultation, and it affirms the immutable, beneficial, and fruitful inequality of mankind, which can never be permanently leveled through the mere operation of a mechanical process such as universal suffrage….
Fascism conceives of the State as an absolute, in comparison with which all individuals or groups are relative, only to be conceived of in their relation to the State. The conception of the Liberal State is not that of a directing force, guiding the play and development, both material and spiritual, of a collective body, but merely a force limited to the function of recording results: on the other hand, the Fascist State is itself conscious and has itself a will and a personality — thus it may be called the “ethic” State
Fascism is the doctrine best adapted to represent the tendencies and the aspirations of a people, like the people of Italy, who are rising again after many centuries of abasement and foreign servitude. But empire demands discipline, the coordination of all forces and a deeply felt sense of duty and sacrifice: this fact explains many aspects of the practical working of the regime, the character of many forces in the State, and the necessarily severe measures which must be taken against those who would oppose this spontaneous and inevitable movement of Italy in the twentieth century, and would oppose it by recalling the outworn ideology of the nineteenth century – repudiated wheresoever there has been the courage to undertake great experiments of social and political transformation; for never before has the nation stood more in need of authority, of direction and order. If every age has its own characteristic doctrine, there are a thousand signs which point to Fascism as the characteristic doctrine of our time. For if a doctrine must be a living thing, this is proved by the fact that Fascism has created a living faith; and that this faith is very powerful in the minds of men is demonstrated by those who have suffered and died for it.
We demand that the State shall make it its primary duty to provide a livelihood for its citizens. If it should prove impossible to feed the entire population, foreign nationals (non-citizens) must be deported from the Reich.
All citizens shall have equal rights and duties.
It must be the first duty of every citizen to perform physical or mental work. The activities of the individual must not clash with the general interest, but must proceed within the framework of the community and be for the general good.
We demand the creation and maintenance of a healthy middle class, the immediate communalizing of big department stores, and their lease at a cheap rate to small traders, and that the utmost consideration shall be shown to all small traders in the placing of State and municipal orders.
The enforcement or advocacy of strict obedience to authority at the expense of personal freedom.
Totalitarianism is best understood as any system of political ideas that is both thoroughly dictatorial and utopian. It is an ideal type of governing notion, and as such, it cannot be realised perfectly.
In 1889, on the centenary of the French Revolution, a Second International emerged from two rival socialist conventions in Paris. Intended as a revival of the International Working Men’s Association, this new organization was dominated by Marxists in general and the SPD in particular. By this time the SPD was both officially Marxist and a force to be reckoned with in German politics. Despite Otto von Bismarck’s attempts to suppress it, Wilhelm Liebknecht, August Bebel, and other leaders had transformed the SPD into a mass party. But its considerable success—the SPD won almost one-fifth of the votes cast in the parliamentary elections of 1890, for example—raised the question of whether socialism might be achieved through the ballot box rather than through revolution. The “orthodox” position, as developed by the SPD’s chief theorist, Karl Kautsky, tried to reconcile the SPD’s electoral practice with Marx’s revolutionary doctrine. But others had begun to think that it would be better to recognize that circumstances had changed and to revise Marx’s doctrine accordingly.
…Among the remaining orthodox Marxists was the Russian revolutionary V.I. Ulyanov, better known by his pseudonym Lenin. As the leader of the Bolshevik, or “majority,” faction of the Russian Social-Democratic Workers’ Party, Lenin himself had been accused of straying from the Marxist path. The problem for Russian Marxists was that Russia in the late 19th century remained a semifeudal country with barely the beginnings of industrial capitalism. To be sure, Marx had allowed that it might be possible for a country such as Russia to move directly from feudalism to socialism, but the standard position among Marxists was that capitalism was a necessary stage of economic and historical development; otherwise, there would be neither the productive power to overcome necessity nor the revolutionary proletariat to win freedom for all as it emancipated itself from capitalist exploitation.
This had been the standard position among Russian Marxists too, but it was not Lenin’s. Lenin had little faith in the revolutionary potential of the proletariat, arguing in What Is to Be Done? (1902) that the workers, left to themselves, would fight only for better wages and working conditions; they therefore needed to be educated, enlightened, and led to revolution by a “vanguard” party of professional revolutionaries. Moreover, the authoritarian nature of the Russian government required that the vanguard party be conspiratorial, disciplined, and elitist. Lenin’s Russian-Marxist rivals disputed these points, but his manipulation of the vote at a party congress enabled him to label them the Menshevik, or “minority,” faction.
Lenin’s commitment to revolution thus put him at odds with those who advocated a revised, evolutionary Marxism. In Imperialism, the Highest Stage of Capitalism (1916), Lenin argued against the revisionists, stating that the improvement in conditions enjoyed by the proletariat of Europe and the United States was a kind of bribe made possible by the “superprofits” that their countries’ capitalists were extracting from the labour and resources of the poorer parts of the world. But imperialism would also be the last stage of capitalism, for it was bound to expose the contradictions of capitalism not only in the industrial countries but also in the countries exploited by the imperialistic powers—hence the possibility of revolution in a country that had not itself gone through capitalism.
More than three-quarters of U.S. law enforcement officers say they are reluctant to use force when necessary, and nearly as many — 72% — say they or their colleagues are more reluctant to stop and question people who seem suspicious as a result of increased scrutiny of police, according to a new study published Wednesday by the Pew Research Center.
Former Chicago police superintendent Garry McCarthy recently tied the surge in violence in the nation’s third largest city — which tallied 762 murders and more than 4,300 shooting victims in 2016 — to a decline in street stops by cops. McCarthy was fired from his post in December 2015 after the court-ordered release of a video that showed a white police officer firing 16 shots at 17-year-old Laquan McDonald.
When homicide and violent crime rates and deadly attacks on police rose in 2015 and again in 2016, police were ready to point to the perceived causes and, in some places, to pull back from the proactive encounters needed to keep citizens safe.
The consequences of the ‘Ferguson effect’ are already appearing: The nation’s two-decades-long crime decline may be over. Gun violence in particular is spiraling upward in cities across America.
Looking at data from 56 large cities across the country, Rosenfeld found a 17% increase in homicide in 2015. Much of that increase came from only 10 cities, which saw an average 33% increase in homicide.
All 10 cities that saw sudden increases in homicide had large African American populations, he said. While it’s not clear what drove the increases, he said, he believes there is some connection between high-profile protests over police killings of unarmed black men, a further breakdown in black citizens’ trust of the police, and an increase in community violence.
“The only explanation that gets the timing right is a version of the Ferguson effect,” Rosenfeld said. Now, he said, that’s his “leading hypothesis”.
“Study: There Has Been No ‘Ferguson Effect’ in Baltimore,” one CityLab post was headlined in March of this year. The study actually showed that (A) arrests declined in Baltimore following the events of Ferguson and (B) arrests declined further, and crime exploded, when Baltimore suffered its own policing incident, the death of Freddie Gray. The study’s authors claimed it was unclear whether this could be called a Ferguson Effect, but conceded: “One reasonable interpretation is that the crime spike is a Ferguson effect that might have remained dormant had it not been ignited by a localized Gray effect.”
The strong version of what I have called the Ferguson Effect—a drop in proactive policing leading to rising crime—is the only explanation for the crime increase that matches the data. The country has just elected a new president who understands that the false narrative about the police has led to the breakdown of law and order in inner cities.
Threat to police
Still, “the cops I’ve spoken to say it’s different now,” Peter Moskos, a former Baltimore police officer who is now a professor at John Jay College of Criminal Justice, told Time magazine. “Cops are saying, ‘If we’re going to get in trouble for well-intentioned mistakes, then [forget] it, I’m not working.’ ”
Fear of social media
A female Chicago police officer did not use her service pistol to defend herself while she was being beaten to the point of unconsciousness because, “She didn’t want her family or the department to go through the scrutiny the next day on national news,” Chicago Police Superintendent Eddie Johnson told the Chicago Tribune in September 2016. “(S)he looked at me and said she thought she was gonna die, and she knew that she should shoot this guy, but she chose not to.”
SLS claims there was a “pattern and practice” of unconstitutional behavior because “African Americans account for 85% of vehicle stops, 90% of citations, and 93% of arrests made by” Ferguson police officers, despite the city’s population being only 67 percent black. This claim ignores the fact that numerous studies, including data from DOJ itself, demonstrate that blacks commit crimes and routine traffic violations at a much higher rate than whites. Other than this statistical disparity that is easily explained by such higher crime rates, SLS is unable to point to specific, intentional, knowing conduct and discriminatory policies promulgated by the city that are causing any unconstitutional policing.
The above-referenced factors have had the effect of “de-policing” in law enforcement agencies across the country, which the assailants have exploited. Departments – and individual officers – have increasingly made the conscious decision to stop engaging in proactive policing. The intense scrutiny and criticism law enforcement has received in the wake of several high-profile incidents has caused several officers to (1) “become scared and demoralized” and (2) avoid interacting with the community. This was highlighted when a police officer was beaten and slammed to the ground by a subject, and the officer was afraid to shoot the subject because of the fear of community backlash. The officer informed the superintendent that the officer chose not to shoot because the officer didn’t want his/her “family or the department to have to go through the scrutiny the next day on the national news.”
Kochel found that for black officers, their experiences with the components of the Ferguson effect were not as severe and their measures of self-legitimacy were higher. Additionally, black officers felt more prepared, had more confidence in their leadership abilities, felt more successful at delivering procedural justice during the civil unrest and reported having less of a mental and emotional impact from the experience.
“In spite of the fact that minority police officers had the worst experiences, they had better outcomes when compared to white police officers,” Kochel said.
Similar outcomes in UK
The inquiry team noted fears among council staff of being labelled “racist” if they focused on victims’ descriptions of the majority of abusers as “Asian” men.
The free rider problem is a situation where some individuals consume more than their fair share or pay less than their fair share of the cost of a shared resource. It is a market failure that occurs when people take advantage of being able to use a common resource, or collective good, without paying for it, as is the case when citizens of a country utilize public goods without paying their fair share in taxes. The free rider problem only arises in a market in which supply is not diminished by the number of people consuming it and consumption cannot be restricted. Goods and services such as national defense, metropolitan police presence, flood control systems, access to clean water, sanitation infrastructure, libraries and public broadcasting services can be obtained through free riding.
Free riding depletes a tax base, can be the cause of natural resource exploitation and can even lead to the disappearance of a good’s supply if enough people jump on board with the mentality. For some people, a free ride means there is little incentive to expend money or time toward the production of a collective good when they stand to enjoy its benefits even if they expend none at all.
Free rider problems occur for two reasons. First, non-excludability: when a good is provided for everyone, there is no way to stop anyone from using it. Second, non-rivalry: one person’s use of the good does not reduce its availability to others, so there is no natural check on consumption.
See also the related idea of the Tragedy of the Commons.
Why some advocate privatization:
The “free rider problem” occurs in situations in which a person derives a “positive externality” from the actions of another — that is, a benefit that he did not pay for. This occurs in situations where the beneficial effect of an action is “nonexcludable,” meaning that the benefits cannot be withheld from people who had nothing to do with the action.
For example, a beekeeper may keep bees solely as a means of producing honey. However, an ancillary effect of this activity — an externality — is that the bees will pollinate flowers in surrounding properties, benefiting the owners of those properties at no cost to them.
[This situation] is a “problem” only when compared to what might have been done instead — a problem of allegedly inefficient underproduction of the good in question. In other words, the problem is that, if not for the nonexcludability of the good, things could potentially have been even better.
If the beekeeper possessed some means to prevent surrounding property owners from benefiting from his bees, without detracting from his own enjoyment, then he would be able to negotiate with them to pay him for the benefit. Since he would then derive an additional benefit from his bees — the payment — he would have an incentive to keep even more bees, benefiting both himself and his neighbors to an even greater extent.
Free riders thwart cooperation:
Collective action occurs when a number of people work together to achieve some common objective. However, it has long been recognized that individuals often fail to work together to achieve some group goal or common good. The origin of that problem is the fact that, while each individual in any given group may share common interests with every other member, each also has conflicting interests. If taking part in a collective action is costly, then people would sooner not have to take part. If they believe that the collective act will occur without their individual contributions, then they may try to free ride.
Collective action problems have often been represented by simple game theory. The simple, one-shot “prisoner’s dilemma” game represents a series of more complex situations, where individual rational action leads to a suboptimal outcome. It would be in the interests of both players to cooperate, but they end up not cooperating because they can see the advantages of free riding and fear the dangers of being taken for a ride.
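The one-shot prisoner’s dilemma described above can be made concrete with a small payoff table. This is an illustrative sketch; the specific payoff numbers are my own assumptions, not from the text, chosen only to satisfy the dilemma’s structure (free riding beats cooperating against any opponent, yet mutual cooperation beats mutual defection):

```python
# One-shot prisoner's dilemma with illustrative payoffs.
# Each entry maps (row move, column move) -> (row payoff, column payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),   # both contribute to the collective good
    ("cooperate", "defect"):    (0, 5),   # the cooperator is "taken for a ride"
    ("defect",    "cooperate"): (5, 0),   # the free rider benefits at no cost
    ("defect",    "defect"):    (1, 1),   # the good is underprovided for everyone
}

def best_response(opponent_move):
    """Row player's best reply to a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda move: payoffs[(move, opponent_move)][0])

# Defection is a dominant strategy: it is the best reply to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual cooperation would leave both players better off.
assert payoffs[("cooperate", "cooperate")][0] > payoffs[("defect", "defect")][0]
```

Because each player reasons this way individually, both defect and land on the suboptimal (1, 1) outcome, which is exactly the “individual rational action leads to a suboptimal outcome” pattern the excerpt describes.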
A supply-side response is to attempt to convince would-be free riders that if they do not contribute, they will not receive the good, not through exclusion but because the good will not be provided at all.
The more homogeneous the group, the easier it is to discover any shared preferences, the fewer the cross-cutting cleavages, and, thus, the fewer the sources of conflict within the group. Homogeneity in another sense may work in the opposite direction. If the group is heterogeneous in terms of wealth, then it may be easier to secure collective action, because the rich members may provide the goods and allow poorer members to free ride.
Francis Fukuyama wrote a book entitled The End of History and the Last Man which posited that, with the end of the Cold War, the world had reached its ultimate evolution in liberal democracy and that human history had nowhere to go after that point.
From the book jacket:
It is Fukuyama’s brilliantly argued theme that, over time, the economic logic of modern science together with the struggle for recognition (thymos) lead to the eventual collapse of tyrannies, as we have witnessed on both the left and right. These forces drive even culturally disparate societies toward establishing capitalist liberal democracies as the end state of the historical process. The great question then becomes: can liberty and equality, both political and economic — the state of affairs at the presumed end of history produce a stable society in which man may be said to be, at last, completely satisfied? Or will the spiritual condition of this last man in history, deprived of outlets for his striving for mastery, inevitably lead him to plunge himself and the world back into the chaos and bloodshed of history?
Genes and Intelligence
Genes and Political Inclination
However, we now know that much of politics is genetic:
John Hibbing is a political scientist at the University of Nebraska-Lincoln. Over the years, he’s studied how our political views may also be influenced by our biology.
“We would look at brain scan results and we could be incredibly accurate knowing whether they’re liberal or conservative, just on the basis of that,” he says.
Genes aren’t the only driver behind our political views, though. Hibbing says environment and upbringing play a large role as well. But he has found that, on average, about 30 or 40 percent of our political attitudes come from genetics. And he thinks the idea that our politics may come, at least in part, from our biology may help us to have more empathy for people who disagree with us.
“Our political beliefs are part and parcel of our entire being,” he says.
This is probably the result of overall genetic makeup, not specific genes alone:
“Though the issues may change across time, the underlying trait of liberalism versus conservatism probably has existed since forever,” said Brad Verhulst, a Canadian researcher at Virginia Commonwealth University.
The way that our genes influence us changes over time. The genes themselves do not change, but the interaction between gene and behaviour is dynamic as people age, from puberty through menopause and mid-life crises. It is a generally stable system with crisis points.
Even in physical traits like height, for example, there is obviously a strong genetic component, in that tall parents tend to have tall children, but no one has yet found an actual gene or genes that make people tall.
These personality traits are visible in childhood:
The results, published in 2006 by the Journal of Research in Personality, were astonishing. In analyzing their data, the Blocks found a clear set of childhood personality traits that accurately predicted conservatism in adulthood. For instance, at the ages of three and four, the “conservative” preschoolers had been described as “uncomfortable with uncertainty,” as “rigidifying when experiencing duress,” and as “relatively over-controlled.” The girls were “quiet, neat, compliant, fearful and tearful, [and hoped] for help from the adults around.”
The third cluster shows the amazing finding of Bouchard’s survey: Identical twins reared apart had a strong correlation between their political orientations; but the scores of fraternal twins raised separately didn’t correlate significantly. These results suggest that genetics plays a decisive role in determining political attitudes. In other words, identical twins are more likely than fraternal twins to agree on divisive issues, precisely because they’re more closely related to one another.
Genes and Religion
Italian writer Antonio Gramsci came up with the idea of dominating culture in order to dominate politics, enabling a Leftist transformation of society:
Many of his propositions became a fundamental part of Western Marxist thought and influenced the post-World War II strategies of communist parties in the West. His reflections on the cultural and political concept of hegemony (notably in southern Italy), on the Italian Communist Party itself, and on the Roman Catholic Church were particularly important.
These ideas became the basis of Cultural Marxism:
Gramsci wrote in the 1930s of a “war of position,” later dubbed “the long march through the institutions” by 1960s German student activist Rudi Dutschke, in which socialists and communists would subvert Western culture from the inside in an attempt to compel it to redefine itself.
Gramsci used war metaphors to distinguish between a political “war of position”—which he compared to trench warfare—and the “war of movement (or maneuver),” which would be a sudden full-frontal assault resulting in complete social upheaval.
Gramsci believed that the conditions in Russia in 1917 that made revolution possible would not materialize in more advanced capitalist countries in the West. The strategy must be different and must include a mass democratic movement, an ideological struggle.
His advocacy of a war of position instead of a war of movement was not a rebuke of revolution itself, just a differing tactic—a tactic that required the infiltration of influential organizations that make up civil society.
He saw the task before them ultimately as destroying the economy in order to create the misery needed for the proles to demand Leftist rule:
Gramsci’s linking of the reality of class rule and class power with the equally real amalgam of practices and ideal principles of behavior, conformity, and law, is well synthesized in the specific connection between his concepts of ideology and hegemony, in particular, the concepts of “organic ideology” and the “organic intellectual.” It should not be overlooked that conferring upon the superstructures and indeed ideology a great degree of efficacy and even materiality within the social totality of class society is in the tradition of Marx’s notion of ideology. This recognized, it cannot be ignored that Gramsci was instrumental in rectifying the notion of ideology, as was held then by the “marxist” theoreticians of the Second International and the Bolshevik Party of the Stalin period.
Finally, it must be borne in mind that Gramsci’s conception of the dictatorship of the proletariat must be elaborated out of what he outlined through the concepts of ideology, hegemony, power, and organic intellectuals. Indeed, for Gramsci power rested on what was given, and what was given, i.e. the network of civil society, could not be overlooked and circumscribed in the course of the class struggle. Hence, power for a class rested not only on the economic level and on the simple capture and smashing of the dominant state apparatus, but was highly dependent on the legitimacy the class gained from subordinate classes in civil society through effective ideological struggle therein.
His transition from politics to economy to culture enabled him to argue for fundamental transformation which would in turn control economics and politics:
Gramsci’s signal contribution was to liberate the Marxist project from the prison of economic dogma, thereby dramatically enhancing its ability to subvert Christian society.
Looking back on the 20th century, it is clear that Marx was wrong in his assumption that most workers and peasants were dissatisfied with their places in, and alienated from, their societies, that they were seething with resentment against the middle and upper classes, or that they in any way were predisposed to revolution. Moreover, wherever Communism achieved power, its use of unprecedented levels of violence, coercion, and repression have generated underground opposition at home and militant opposition abroad, making endless killing and repression endemic to Marxism and essential for Communist survival. All of these undeniable facts, when examined honestly, posed insurmountable difficulties insofar as further extensions of Communist power were concerned, and assured some kind of ultimate crisis for Marxism.
In the Gramscian view workers and peasants were not, by and large, revolutionary-minded and they harbored no desire for the destruction of the existing order. Most had loyalties beyond, and far more powerful than, class considerations, even in those instances where their lives were less than ideal. More meaningful to ordinary people than class solidarity and class warfare were such things as faith in God and love of family and country. These were foremost among their overriding allegiances.
Furthermore, Communists were enjoined to put aside some of their class prejudice in the struggle for power, seeking to win even elements within the bourgeois classes, a process which Gramsci described as “the absorption of the elites of the enemy classes.” Not only would this strengthen Marxism with new blood, but it would deprive the enemy of this lost talent. Winning the bright young sons and daughters of the bourgeoisie to the red banner, wrote Gramsci, “results in [the anti-Marxist forces’] decapitation and renders them impotent.”
Gramscian and Marxist plans converged on a method of sabotaging, transforming, weakening, and taking over the West through cultural subversion:
I identified some of the most important of the Soviet Union’s memetic weapons:
- There is no truth, only competing agendas.
- All Western (and especially American) claims to moral superiority over Communism/Fascism/Islam are vitiated by the West’s history of racism and colonialism.
- There are no objective standards by which we may judge one culture to be better than another. Anyone who claims that there are such standards is an evil oppressor.
- The prosperity of the West is built on ruthless exploitation of the Third World; therefore Westerners actually deserve to be impoverished and miserable.
- Crime is the fault of society, not the individual criminal. Poor criminals are entitled to what they take. Submitting to criminal predation is more virtuous than resisting it.
- The poor are victims. Criminals are victims. And only victims are virtuous. Therefore only the poor and criminals are virtuous. (Rich people can borrow some virtue by identifying with poor people and criminals.)
- For a virtuous person, violence and war are never justified. It is always better to be a victim than to fight, or even to defend oneself. But ‘oppressed’ people are allowed to use violence anyway; they are merely reflecting the evil of their oppressors.
- When confronted with terror, the only moral course for a Westerner is to apologize for past sins, understand the terrorist’s point of view, and make concessions.
Indeed, the index of Soviet success is that most of us no longer think of these memes as Communist propaganda.
The recession was triggered by the default of subprime and other low-quality loans, which by 2008 made up half of all mortgages in America:
For most of his career, Barney Frank was the principal advocate in Congress for using the government’s authority to force lower underwriting standards in the business of housing finance. Although he claims to have tried to reverse course as early as 2003, that was the year he made the oft-quoted remark, “I want to roll the dice a little bit more in this situation toward subsidized housing.” Rather than reversing course, he was pressing on when others were beginning to have doubts.
By 2000, Fannie was offering no-downpayment loans. By 2002, Fannie and Freddie had bought well over $1 trillion of subprime and other low quality loans. Fannie and Freddie were by far the largest part of this effort, but the FHA, Federal Home Loan Banks, Veterans Administration and other agencies–all under congressional and HUD pressure–followed suit. This continued through the 1990s and 2000s until the housing bubble–created by all this government-backed spending–collapsed in 2007. As a result, in 2008, before the mortgage meltdown that triggered the crisis, there were 27 million subprime and other low quality mortgages in the US financial system. That was half of all mortgages.
This policy, which originated under Bill Clinton, was increased by neoconservative George W. Bush for altruistic reasons:
Bush, in Atlanta to introduce a plan to increase the number of minority homeowners by 5.5 million, was touring Park Place South, a development of starter homes in a neighborhood once marked by blight and crime.
Clinton also had altruistic intentions:
The meltdown was the consequence of a combination of the easy money and low interest rates engineered by the Federal Reserve and the easy housing engineered by a variety of government agencies and policies. Those agencies include the Department of Housing and Urban Development (HUD) and two nominally private “government-sponsored enterprises” (GSEs), Fannie Mae and Freddie Mac. The agencies — along with laws such as the Community Reinvestment Act (passed in the 1970s, then fortified in the Clinton years), which required banks to make loans to people with poor and nonexistent credit histories — made widespread homeownership a national goal.
Clinton’s contribution to the crisis lay in his appointment of Cuomo to HUD. Cuomo became HUD secretary in 1997 after becoming assistant secretary in 1993. In a heavily researched 2008 article in the Village Voice, Wayne Barrett writes,
Andrew Cuomo, the youngest Housing and Urban Development secretary in history, made a series of decisions between 1997 and 2001 that gave birth to the country’s current crisis. He took actions that — in combination with many other factors — helped plunge Fannie and Freddie into the subprime markets without putting in place the means to monitor their increasingly risky investments. He turned the Federal Housing Administration mortgage program into a sweetheart lender with sky-high loan ceilings and no money down, and he legalized what a federal judge has branded ‘kickbacks’ to brokers that have fueled the sale of overpriced and unsupportable loans. Three to four million families are now facing foreclosure, and Cuomo is one of the reasons why.
Perhaps the only domestic issue George Bush and Bill Clinton were in complete agreement about was maximizing home ownership, each trying to lay claim to a record percentage of homeowners, and both describing their efforts as a boon to blacks and Hispanics. HUD, Fannie, and Freddie were their instruments, and, as is now apparent, the more unsavory the means, the greater the growth.…
This coincided with other altruistic programs in devaluing currency as we pursued a “fast money” demand-side economic monetary theory:
The Clinton-era “fast money” policies did what wealth redistribution normally does, which is to destroy the middle class and increase “inequality” as a result. Not surprisingly, Americans lost 40% of their purchasing power in the aftermath of this recession, with minorities suffering perhaps the most.
Clinton was credited with restoring the economy, which was not correct as he merely deferred the crisis so that it could mature alongside the housing bubble, creating a mega-recession instead of two minor ones. This was an inevitable detonation of a Ponzi scheme based on demand-side economics.
It was part of the ongoing expansion of government by Leftists from the Clinton era:
What’s more, the “regulation” Frank now takes credit for was not his (H.R.1427 passed the House last year but never escaped Senate committee) but rather Nancy Pelosi’s (H.R. 3221 – The Housing and Economic Recovery Act of 2008). And Pelosi’s version, not surprisingly and unlike its Republican predecessors, was marked up with over 66 pages of Liberal wealth redistribution wish-fulfillment under the guise of assuring “affordable housing.” While it did establish (and way too late, Barney) the Federal Housing Finance Agency, with regulatory authority over Fannie Mae, Freddie Mac, the Federal Home Loan Banks, and the Office of Finance, it’s bogged down with tons of pork-fat. This oinker even increased the national debt limit from $9.82 trillion to $10.62 trillion, and commissioned a boatload of programs for low income families to spend it on.
In the end, a well-intentioned diversity program plus Clinton “fast money” policies made America’s economy collapse like that of the Soviets:
Every time Republicans in Congress or President Bush talked about reforming housing programs, Democrats like Rep. Barney Frank of Massachusetts and Sen. Chris Dodd of Connecticut threw fits, threatening to gum up Congress and implying that GOP lawmakers were racists. The Republicans backed off.
From 1997 to 2007, with the Fed slashing interest rates and flooding the banking system with liquidity, home lending soared. Banks abandoned long-standing lending standards to avoid being punished by regulators or singled out by newly empowered “community groups” such as ACORN as anti-minority.
When the Fed began raising interest rates to slow inflation, and put a brake on soaring housing prices, many of the loans made to low-income black and Hispanic borrowers predictably fell into delinquency or default — leaving mortgage lenders, Fannie and Freddie, and Wall Street with enormous losses.
Guns are essential to home protection and to defense against tyranny. The Left wants to disarm the suburbs so that urban violence can dominate those people, making them pacified and dependent on government as well. Gun culture, however, is a culture of self-reliance, and it encourages people to see reality as something with immediate, often violent consequences. The only gun control we need is ensuring that crazy people stay away from guns; the best way to do this is to watch or isolate crazy people, because they can find guns by legal or illegal means.
Mandatory Gun Ownership
For that “Wild West” feeling:
They may rarely punish their citizens for choosing not to own a gun, but their loose mandates are more about making a statement than enforcing a law.
- Kennesaw, Georgia
- Nelson, Georgia
- Nucla, Colorado
- Gun Barrel City, Texas
- Virgin, Utah
Some of these are resistance to gun control:
The new law requires every head of household “to maintain a firearm, together with ammunition.”
Failure to own a gun, however, will not be prosecuted.
“It’s a Norman Rockwell painting. That’s what it is to me. It’s rare that you find a town like this these days,” Mitchell said.
It also has a deterrent effect:
Still, the crime rate, not that high to begin with, plummeted after the law was enacted — by 89%, compared with a 10% drop statewide, according to published accounts. Davis says there were 11 burglaries per 1,000 residents before the law, 2.7 after. Despite slight fluctuations, she says, crime here “is significantly lower” than similar-sized Georgia cities.
The Nelson ordinance is one of several similar laws around the USA that sprang up in the wake of the Newtown massacre, which sparked an intense debate on gun rights:
Spring City, Utah, with a population near 1,000, passed an ordinance earlier this year recommending that every household own a gun.
A big part of Haidt’s moral narrative is faith. He lays out the case that religion is an evolutionary adaptation for binding people into groups and enabling those units to better compete against other groups. Through faith, humans developed the “psychology of sacredness,” the notion that “some people, objects, days, words, values, and ideas are special, set apart, untouchable, and pure.” If people revere the same sacred objects, he writes, they can trust one another and cooperate toward larger goals.
Haidt sees morality as a “social construction” that varies by time and place. We all live in a “web of shared meanings and values” that become our moral matrix, he writes, and these matrices form what Haidt, quoting the science-fiction writer William Gibson, likens to “a consensual hallucination.”
Building on ideas from the anthropologist Richard Shweder, Haidt and his colleagues synthesize anthropology, evolutionary theory, and psychology to propose six innate moral foundations:
- care/harm,
- fairness/cheating,
- liberty/oppression,
- loyalty/betrayal,
- authority/subversion, and
- sanctity/degradation.
The moral mind, to him, resembles an audio equalizer with a series of slider switches that represent different parts of the moral spectrum. All political movements base appeals on different settings of the foundations—and the culture wars arise from what they choose to emphasize. Liberals jack up care, followed by fairness and liberty. They rarely value loyalty and authority. Conservatives dial up all six.
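The equalizer metaphor can be sketched as a simple data structure; the slider settings below are illustrative assumptions, not measurements from Haidt:

```python
# Haidt's "equalizer" metaphor: each political profile is a set of slider
# settings over the six moral foundations. Values (0-10) are illustrative.
FOUNDATIONS = ["care", "fairness", "liberty", "loyalty", "authority", "sanctity"]

profiles = {
    # Liberals jack up care, followed by fairness and liberty;
    # they rarely value loyalty and authority.
    "liberal":      {"care": 9, "fairness": 7, "liberty": 7,
                     "loyalty": 2, "authority": 2, "sanctity": 2},
    # Conservatives dial up all six more evenly.
    "conservative": {f: 6 for f in FOUNDATIONS},
}

def emphasized(profile, threshold=5):
    """Foundations a movement's appeals emphasize (slider above threshold)."""
    return [f for f in FOUNDATIONS if profile[f] > threshold]

print(emphasized(profiles["liberal"]))        # the three "liberal" foundations
print(emphasized(profiles["conservative"]))   # all six
```

On these toy settings, the culture-war asymmetry falls out directly: the liberal profile emphasizes only three foundations, while the conservative profile registers on all six.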
The Immigration and Nationality Act of 1965 was promised as no threat to the American way of life:
“This bill that we will sign today is not a revolutionary bill,” the president said. “It does not affect the lives of millions. It will not reshape the structure of our daily lives. … Yet it is still one of the most important acts of this Congress and of this administration. For it does repair a very deep and painful flaw in the fabric of American justice. It corrects a cruel and enduring wrong in the conduct of the American nation.”
The wrong that Johnson and Congress sought to correct was codified in legislation passed 41 years earlier, during a post-war era fraught with anxiety about mass immigration, the shadow of European radicalism, and theories of racial superiority.
The 1924 law established a quota system based on national origins. It directed nearly 70 percent of the immigration slots to northern Europeans, cutting back drastically on immigration from southern and eastern Europe. It maintained formidable barriers against immigration from Asia and Africa, while leaving immigration from the Western Hemisphere unrestricted — a gesture of hemispheric solidarity that also served the cheap-labor interests of American employers.
By 1980, most immigrants were coming from Latin America, Asia, and Africa — in numbers far greater than the annual average of 300,000 that had prevailed during the 1960s. Despite assurances by Hart-Celler advocates that the bill would add little to the immigrant stream, more than seven million newcomers entered the country legally during the 1980s. That trend has continued. Meanwhile, illegal immigration also began a decades-long surge.
This was designed to be fair to other ethnic, religious, and racial groups:
After World War II, opponents of the racially discriminatory national origins system spent twenty years working to dismantle the quotas, and the Hart-Celler Act was the product of these struggles. In doing away with national origins, and in replacing it with a system that was on its face race-neutral, the 1965 act can be seen as part of the civil rights moment in which it was passed, coming just one year after the Civil Rights Act of 1964, and in the same year as the Voting Rights Act.
Although in signing the act into law, President LBJ stated that it was “not a revolutionary bill,” Hart-Celler opened the doors to immigrants from around the world, ending the heavy emphasis on European immigrants that marked the earlier immigration system. To give one example, the number of immigrants gaining permanent visas from Asia in the 1970s was ten times as many as those in the 1950s. This represented a really remarkable change: while we have always been a nation of immigrants, and a nation of immigrants from around the world, the act helped to make us a far more multicultural nation.
This fundamentally transformed America, first in the name of ethnic diversity and later racial diversity:
The civil rights movement’s focus on equal treatment regardless of race or nationality led many to view the quota system as backward and discriminatory. In particular, Greeks, Poles, Portuguese and Italians–of whom increasing numbers were seeking to enter the U.S.–claimed that the quota system discriminated against them in favor of Northern Europeans. President John F. Kennedy even took up the immigration reform cause, giving a speech in June 1963 calling the quota system “intolerable.”
In the first five years after the bill’s passage, immigration to the U.S. from Asian countries–especially those fleeing war-torn Southeast Asia (Vietnam, Cambodia)–would more than quadruple. (Under past immigration policies, Asian immigrants had been effectively barred from entry.) Other Cold War-era conflicts during the 1960s and 1970s saw millions of people fleeing poverty or the hardships of communist regimes in Cuba, Eastern Europe and elsewhere to seek their fortune on American shores. All told, in the three decades following passage of the Immigration and Naturalization Act of 1965, more than 18 million legal immigrants entered the United States, more than three times the number admitted over the preceding 30 years.
By the end of the 20th century, the policies put into effect by the Immigration Act of 1965 had greatly changed the face of the American population. Whereas in the 1950s, more than half of all immigrants were Europeans and just 6 percent were Asians, by the 1990s only 16 percent were Europeans and 31 percent were of Asian descent, while the percentages of Latino and African immigrants had also jumped significantly. Between 1965 and 2000, the highest number of immigrants (4.3 million) to the U.S. came from Mexico, in addition to some 1.4 million from the Philippines. Korea, the Dominican Republic, India, Cuba and Vietnam were also leading sources of immigrants, each sending between 700,000 and 800,000 over this period.
This reversed, for ideological reasons, an earlier bias against diversity, multiculturalism, internationalism, and globalism:
The 1921 national-origins quota law was enacted in a special congressional session after President Wilson’s pocket veto. Along with earlier and other contemporary statutory bars to immigration from Asian countries, the quotas were proposed at a time when eugenics theories were widely accepted. The quota for each country was set at 2 percent of the foreign-born population of that nationality as enumerated in the 1890 census. The formula was designed to favor Western and Northern European countries and drastically limit admission of immigrants from Asia, Africa, the Middle East, and Southern and Eastern Europe.
Building on a campaign promise by President Kennedy, and with a strong push by President Johnson amid the enactment of other major civil-rights legislation, the 1965 law abolished the national-origins quota system. It was replaced with a preference system based on immigrants’ family relationships with U.S. citizens or legal permanent residents and, to a lesser degree, their skills. The law placed an annual cap of 170,000 visas for immigrants from the Eastern Hemisphere, with no single country allowed more than 20,000 visas, and for the first time established a cap of 120,000 visas for immigrants from the Western Hemisphere. Three-fourths of admissions were reserved for those arriving in family categories.
“The bill that we sign today is not a revolutionary bill,” President Johnson said during the signing ceremony. “It does not affect the lives of millions. It will not reshape the structure of our daily lives, or really add importantly to either our wealth or our power.” Senator Ted Kennedy (D-MA), the bill’s floor manager, stated: “It will not upset the ethnic mix of our society.” Even advocacy groups who had favored the national-origins quotas became supporters, predicting little change to the profile of immigration streams.
Despite these predictions, the measure had a profound effect on the flow of immigrants to the United States, and in only a matter of years began to transform the U.S. demographic profile. The number of new lawful permanent residents (or green-card holders) rose from 297,000 in 1965 to an average of about 1 million each year since the mid-2000s (see Figure 1). Accordingly, the foreign-born population has risen from 9.6 million in 1965 to a record high of 45 million in 2015 as estimated by a new study from the Pew Research Center Hispanic Trends Project. Immigrants accounted for just 5 percent of the U.S. population in 1965 and now comprise 14 percent.
In particular, it put us on course to be an Asian nation:
More than half of the entire foreign-born population of the United States has entered the country since 1990, and at the time of this writing, APAs represent the fastest-growing group. In fact, in the 30 years between 1980 and 2010, the APA population jumped nearly fourfold.
Finally, people have noticed that these new groups were favored because they vote Leftist exclusively:
The Hart-Celler Act, so-called after its co-sponsors New York Congressman Emanuel Celler and Michigan Senator Philip Hart, opened the floodgates to new immigrants when it went into effect in 1968. But the vast majority of them didn’t come from Europe; they came instead from Latin America, Africa and Asia. In 1965, non-Hispanic whites comprised over 85 percent of the American population. Fifty years later, that portion is just 62 percent, and falling.
In the first decade of the bill’s enactment, an average of 100,000 legal immigrants above the cap relocated to the U.S.; by 1980 the annual number soared to 730,000. Today, foreign-born immigrants comprise roughly 13 percent of the total population, approaching the all-time high of 14.7 percent in 1910. Another 20 percent were born in the United States but have at least one foreign-born parent. In other words, first- and second-generation Americans comprise a third of the country.
Today, Trump is polling at between 0 percent and 2 percent of the African-American vote, and 20 percent among Latinos—dismal statistics that can all be traced back to the Immigration and Nationality Act of 1965.
An Irishman managed to convince conservatives to support it:
Feighan agreed to support the reform proposal, but he insisted on a key change. Rather than giving preference to those immigrants whose skills were “especially advantageous” to the United States, Feighan insisted on prioritizing those immigrants who already had relatives in the United States, with a new preference category for adult brothers and sisters of naturalized U.S. citizens.
In justifying the change, Feighan told his conservative allies that a family unification preference would favor those nationalities already represented in the U.S. population, meaning Europeans. Among the conservative groups persuaded by Feighan’s argument was the American Legion, which came out in support of the immigration reform after originally opposing it.
This seems to come from a mistaken notion of defeating class warfare by agreeing to its terms:
For supporters, the intent of the legislation was to bring immigration policy into line with other anti-discrimination measures, not to fundamentally change the face of the nation. “We have removed all elements of second-class citizenship from our laws by the Civil Rights Act,” declared Vice President Hubert Humphrey. “We must in 1965 remove all elements in our immigration law which suggest there are second-class people.”
The more typical response to the nativist arguments was simply to deny that the proposed immigration reform would bring any significant shift in the pattern of immigration. Secretary of State Dean Rusk, testifying in Congress, said he saw no indication of “a world situation where everybody is just straining to move to the United States.”
The heightened emphasis on family unification, rather than replicating the existing ethnic structure of the American population, led to the phenomenon of chain migration. The naturalization of a single immigrant from an Asian or African or Hispanic background opened the door to his or her brothers and sisters and their spouses, who in turn could sponsor their own brothers and sisters. Within a few decades, family unification had become the driving force in U.S. immigration, and it favored exactly those nationalities the critics of the 1965 Act had hoped to keep out, because those were the people most determined to move.
It faced opposition from the nativist wing of conservatives:
But some conservative senators were skeptical about altering the “national origins” system. They argued, as South Carolina Republican Strom Thurmond did, that the imperative “to preserve one’s identity and the identity of one’s nation” justified restrictionist policies that favored Western Europeans because they could more easily assimilate into the existing (white) culture. These opponents furthermore added that uncapped Western Hemisphere immigration was itself a looming threat to the United States’ identity and stability. “The day is not far off when the population explosion in Latin American countries will exert great pressures upon those people to emigrate to the United States,” said West Virginia Democrat Robert Byrd.
As it turns out, minorities never vote in any appreciable number for conservatives, but always vote Leftist, much as they do in their native lands:
As was the case in the 2016 presidential election, white men voted Republican by a wide margin (60% to 39%) while white women were divided (49% favored the Democratic candidate; as many supported the Republican).
Blacks voted overwhelmingly (90%) for the Democratic candidate, including comparable shares of black men (88%) and black women (92%).
Overall, 41% of voters said whites in the country today are favored over minorities; 19% said that minorities are favored over whites, while 33% said that no group is favored. Attitudes on this question were strongly correlated with vote choice. Among those who said whites are favored in the U.S., 87% voted for Democrats. By contrast, large majorities of those who said minorities are favored (85%) or that no group is favored (69%) voted for Republican candidates.
This is consistent with past patterns:
As in the past, white voters have tilted Republican, while minorities strongly favor Democrats. Fifty-four percent of white voters chose Republican this year, while 90 percent of blacks, 69 percent of Hispanics, 77 percent of Asians and 54 percent of other races voted Democrat. That Republicans have failed to make inroads with minority voters — who, come what may, will constitute a larger and larger share of the electorate in the years to come — will yet cause tears. But even in the shorter run, like 2020, this should make Republicans nervous. Whereas 52 percent of white women voted Republican in 2016, the party lost ground in 2018. An equal number of white women gave their votes to Democrats (49 percent) as to Republicans (49 percent).
Affordable Care Act (ACA)
It presupposes an obligation exists to pay for the care down the road as well, which brings the same attendant problems. Also consider the problems of socialized medicine.
Heat death is a type of equilibrium where everything is equal:
Thermodynamics dictates that large systems evolve toward equilibrium over time. This is a balanced, calm state where no more reactions are favorable; nothing has energy to gain or lose compared to anything else.
In the far distant future, all these hot heavy specks will all be spread out into the enormous cold void, mixing until everything is a thin uniform mist. Like boiling water added to a bowl of cold soup, the two extremes will balance out and leave lukewarm broth.
The idea of heat death originated in the work of several prodigious physicists who began studying how machines transform heat into mechanical work. Lord Kelvin, Sadi Carnot, and others formed an empirical understanding of how steam engines and other suppliers of motive force do this. They discovered that the machines were harnessing the tendency of energy to flow from hot areas to cold ones. Eventually, the entire system settles down to an intermediate temperature and no more net energy transfer occurs. (This is the maximization of entropy.)
It means that no further interaction can occur, hence the “death” part of the name:
One way to generalize the example is to consider the heat engine and its heat reservoir as parts of an isolated (or closed) system—i.e., one that does not exchange heat or work with its surroundings. For example, the heat engine and reservoir could be encased in a rigid container with insulating walls. In this case the second law of thermodynamics (in the simplified form presented here) says that no matter what process takes place inside the container, its entropy must increase or remain the same in the limit of a reversible process. Similarly, if the universe is an isolated system, then its entropy too must increase with time. Indeed, the implication is that the universe must ultimately suffer a “heat death” as its entropy progressively increases toward a maximum value and all parts come into thermal equilibrium at a uniform temperature. After that point, no further changes involving the conversion of heat into useful work would be possible. In general, the equilibrium state for an isolated system is precisely that state of maximum entropy. (This is equivalent to an alternate definition for the term entropy as a measure of the disorder of a system, such that a completely random dispersion of elements corresponds to maximum entropy, or minimum information. See information theory: Entropy.)
So what exactly is the connection between entropy and the second law? Recall that heat at the molecular level is the random kinetic energy of motion of molecules, and collisions between molecules provide the microscopic mechanism for transporting heat energy from one place to another. Because individual collisions are unchanged by reversing the direction of time, heat can flow just as well in one direction as the other. Thus, from the point of view of fundamental interactions, there is nothing to prevent a chance event in which a number of slow-moving (cold) molecules happen to collect together in one place and form ice, while the surrounding water becomes hotter. Such chance events could be expected to occur from time to time in a vessel containing only a few water molecules. However, the same chance events are never observed in a full glass of water, not because they are impossible but because they are exceedingly improbable. This is because even a small glass of water contains an enormous number of interacting molecules (about 10²⁴), making it highly unlikely that, in the course of their random thermal motion, a significant fraction of cold molecules will collect together in one place. Although such a spontaneous violation of the second law of thermodynamics is not impossible, an extremely patient physicist would have to wait many times the age of the universe to see it happen.
Heat death means an end to change and evolution:
The idea comes from the second law of thermodynamics, which states that entropy – a measure of “disorder” or the number of ways a system can be arranged – always increases. Any system, including the universe, will eventually evolve into a state of maximum disorder – just like a sugar cube will always dissolve in a cup of tea but would take an insanely long time to randomly go back to an orderly cube structure. When all the energy in the cosmos is uniformly spread out, there is no more heat or free energy to fuel processes that consume energy, such as life.
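The argument in the passages above can be condensed into the standard relations of statistical thermodynamics (a sketch, using Boltzmann's definition of entropy):

```latex
% Second law: for an isolated system, entropy never decreases
\Delta S \geq 0
% Boltzmann's definition: W counts the accessible microstates
S = k_B \ln W
% Helmholtz free energy available to do work at temperature T
F = U - TS
% At equilibrium S is at its maximum; for an isolated system
% (U constant) no change can lower F, so no work can be extracted:
\Delta F = \Delta U - T\,\Delta S = 0
```

Heat death is simply the state in which $S$ has reached its maximum for the universe as a whole, leaving $\Delta F = 0$ everywhere: no gradient remains from which work, and hence change, can be drawn.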
The heckler’s veto has two definitions, one legal and one colloquial:
A heckler’s veto occurs when the government accepts restrictions on speech because of the anticipated or actual reactions of opponents of the speech. The Supreme Court first recognized the term in Brown v. Louisiana (1966), citing the work of First Amendment expert Harry Kalven Jr., who coined the phrase. The term is also used in general conversation to refer to any incident in which opponents block speech by direct action or by “shouting down” a speaker through protest.
The landmark heckler’s veto case is Terminiello v. Chicago (1949), in which a riot took place outside an auditorium before, during, and after a controversial speech. Justice William O. Douglas, writing for a 5-4 majority, held unconstitutional Arthur Terminiello’s conviction for causing a breach of the peace, noting that speech fulfills “its high purpose when it induces a condition of unrest, creates dissatisfaction with conditions as they are, or even stirs people to anger.”
In general, the core concern with the heckler’s veto is that allowing the suppression of speech because of the discontent of the opponents provides the perverse incentive for opponents to threaten violence rather than to meet ideas with more speech.
See it in action:
The progressive left has become increasingly hostile to free speech over the past few decades. Claims that speech can be violent, and that it should get different treatment depending on whether it operates for or against historically oppressed groups, have become the unchallenged truisms of freshman orientation courses and social-justice efforts.
What’s troubling is that the ACLU is moving in the same direction, yielding to the heckler’s veto and even declining to defend its own speech rights. The Virginia chapter initially issued a strong statement criticizing the disruption of Ms. Gastañaga’s speech—then redacted it in favor of ambiguous language. It brings to mind Robert Frost’s description of a liberal as someone too broad-minded to take his own side in a quarrel.
Most traits are mostly heritable:
We report a meta-analysis of twin correlations and reported variance components for 17,804 traits from 2,748 publications including 14,558,903 partly dependent twin pairs, virtually all published twin studies of complex traits. Estimates of heritability cluster strongly within functional domains, and across all traits the reported heritability is 49%. For a majority (69%) of traits, the observed twin correlations are consistent with a simple and parsimonious model where twin resemblance is solely due to additive genetic variation. The data are inconsistent with substantial influences from shared environment or non-additive genetic variation.
This is consistent with a historical view that “nature > nurture”:
In the context of current concerns about replication in psychological science, we describe 10 findings from behavioral genetic research that have replicated robustly. These are “big” findings, both in terms of effect size and potential impact on psychological science, such as linearly increasing heritability of intelligence from infancy (20%) through adulthood (60%).
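The twin-study logic behind these estimates can be sketched with Falconer's formulas, which decompose trait variance into additive genetic (A), shared environment (C), and unique environment (E) components from monozygotic and dizygotic twin correlations. The correlation values below are illustrative, chosen to mirror a purely additive model like the one the meta-analysis favors; they are not figures from the quoted studies.

```python
def ace_decomposition(r_mz: float, r_dz: float):
    """Falconer's formulas: estimate variance components from
    MZ and DZ twin correlations. MZ twins share ~100% of their
    genes, DZ twins ~50%, so the MZ-DZ gap isolates genetics."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic share (heritability h^2)
    c2 = 2 * r_dz - r_mz     # shared-environment share
    e2 = 1 - r_mz            # unique-environment share
    return a2, c2, e2

# Illustrative correlations: MZ twins 0.74, DZ twins 0.37
a2, c2, e2 = ace_decomposition(0.74, 0.37)
print(a2, c2, e2)  # ≈ 0.74 heritability, ≈ 0.0 shared env, ≈ 0.26 unique env
```

Note that when the DZ correlation is exactly half the MZ correlation, as here, the shared-environment estimate goes to zero: this is the "simple and parsimonious" additive pattern the meta-analysis reports for a majority of traits.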
Intelligence is inherited:
We estimate that 40% of the variation in crystallized-type intelligence and 51% of the variation in fluid-type intelligence between individuals is accounted for by linkage disequilibrium between genotyped common SNP markers and unknown causal variants. These estimates provide lower bounds for the narrow-sense heritability of the traits.
Intelligence follows social class lines:
We used matched birth and school records from Florida siblings and twins born in 1994–2002 to provide the largest, most population-diverse consideration of this hypothesis to date. We found no evidence of SES moderation of genetic influence on test scores, suggesting that articulating gene-environment interactions for cognition is more complex and elusive than previously supposed.
It influences age of onset of sexual activity:
Controlling for age, physical maturity, and mother’s education, a significant curvilinear relationship between intelligence and coital status was demonstrated; adolescents at the upper and lower ends of the intelligence distribution were less likely to have sex. Higher intelligence was also associated with postponement of the initiation of the full range of partnered sexual activities.
This can even extend to political and social outlooks:
Genetic modelling showed that variation in homophobia scores could be explained by additive genetic (36%), shared environmental (18%) and unique environmental factors (46%). However, corrections based on previous findings show that the shared environmental estimate may be almost entirely accounted for as extra additive genetic variance arising from assortative mating for homophobic attitudes. The results suggest that variation in attitudes toward homosexuality is substantially inherited, and that social environmental influences are relatively minor.
Criminal behavior (CB) shows the same pattern:
The risk for all CB was significantly elevated in the adopted-away offspring of biological parents of which at least one had CB [odds ratio (OR) 1.5, 95% confidence interval (CI) 1.4–1.6] and in the biological full and half-siblings of CB adoptees (OR 1.4, 95% CI 1.2–1.6 and OR 1.3, 95% CI 1.2–1.3, respectively). A genetic risk index (including biological parental/sibling history of CB and alcohol abuse) and an environmental risk index (including adoptive parental and sibling CB and a history of adoptive parental divorce, death, and medical illness) both strongly predicted probability of CB. These genetic and environmental risk indices acted additively on adoptee risk for CB.
And includes not just criminality, but type of criminal behavior:
Specifically, we trace the history of behavioral genetics and show that 1) the Burt and Simons critique dates back 40 years and has been subject to a broad array of empirical investigations, 2) the violation of assumptions in twin models does not invalidate their results, and 3) Burt and Simons created a distorted and highly misleading portrait of behavioral genetics and those who use quantitative genetic approaches.
Even more, general approach to life is mostly genetic:
A single higher-order factor, indicating a general life history strategy, composed of three lower-order factors, was replicated. Factor analyses were then performed on the genetic variance-covariance matrices. We found that (a) a single higher-order factor explained the preponderance of the genetic correlations among the scales and (b) this higher-order factor was itself 68 percent heritable and accounted for 82 percent of the genetic variance among the three component lower-order factors.
This, too, is consistent with historical knowledge:
There is now a large body of evidence that supports the conclusion that individual differences in most, if not all, reliably measured psychological traits, normal and abnormal, are substantively influenced by genetic factors.
The Log Cabin Republicans may interest those researching this topic:
Log Cabin Republicans is the nation’s largest Republican organization dedicated to representing LGBT conservatives and allies. For more than 30 years, we have promoted the fight for equality through our state and local chapters, our full-time office in Washington, DC, and our federal and state political action committees.
Conservatism is best understood as a philosophy or custom. It consists of a desire to conserve the best of the past, which means that it is exclusively driven by results we have seen in history, and is not driven by conjecture. It abstracts the lessons of history into several general principles, which it uses alongside specific historical events to ascertain the correct course of action in terms of its results and not anthropocentric concerns like social morality, law, economics, and optics.
Ideology, on the other hand, is speculative, conjectural, or hypothetical theory. It states that our outcome will be more correct if we adhere to certain human value-judgments, like equality or fairness, which are not found in nature, and it is not a philosophy because it speaks about how much better — in theory — life will be if we implement the theory, which is why it is said that ideology talks about how life “should” be rather than how it is.
Ideology is absolute and categorical. For example, the most popular ideology in history is egalitarianism, which roughly states that (1) humans are actually equal in social worth and (2) humanity must achieve equality through social engineering including strong government action. In this ideology, anything which achieves equality is an absolute good and therefore everything else is a means-to-the-end of this absolute good.
Principles are balanced heuristics. That is, we know roughly where we are going, and the usual methods we want to take to get there, but these have to be balanced against each other because all of life is trade-offs rather than absolute and universal change. That means that no single principle controls; we have no fixed and finite answers except specifics from history, and so in unknown situations or unknown variants of past situations, we use a balance of our principles and assessment of likely results in reality to determine tentatively what to do, and then we keep an eye on the situation as it develops, incorporating new data and changing our response as necessary.
Never in recorded history has diversity been anything but a problem. Look at Ireland with its Protestant and Catholic populations, Canada with its French and English populations, Israel with its Jewish and Palestinian populations.
Or consider the warring factions in India, Sri Lanka, China, Iraq, Czechoslovakia (until it happily split up), the Balkans and Chechnya. Also look at the festering hotbeds of tribal warfare — I mean the “beautiful mosaic” — in Third World hellholes like Afghanistan, Rwanda and South Central, L.A.
“Diversity” is a difficulty to be overcome, not an advantage to be sought. True, America does a better job than most at accommodating a diverse population. We also do a better job at curing cancer and containing pollution. But no one goes around mindlessly exclaiming: “Cancer is a strength!” “Pollution is our greatest asset!”
If there is any place in the Guinness Book of World Records for words repeated the most often, over the most years, without one speck of evidence, “diversity” should be a prime candidate.
Is diversity our strength? Or anybody’s strength, anywhere in the world? Does Japan’s homogeneous population cause the Japanese to suffer? Have the Balkans been blessed by their heterogeneity — or does the very word “Balkanization” remind us of centuries of strife, bloodshed and unspeakable atrocities, extending into our own times?
Has Europe become a safer place after importing vast numbers of people from the Middle East, with cultures hostile to the fundamental values of Western civilization?
Be it enacted by the Senate and House of Representatives of the United States of America, in Congress assembled, That any Alien being a free white person, who shall have resided within the limits and under the jurisdiction of the United States for the term of two years, may be admitted to become a citizen thereof on application to any common law Court of record in any one of the States wherein he shall have resided for the term of one year at least, and making proof to the satisfaction of such Court that he is a person of good character, and taking the oath or affirmation prescribed by law to support the Constitution of the United States, which Oath or Affirmation such Court shall administer, and the Clerk of such Court shall record such Application, and the proceedings thereon; and thereupon such person shall be considered as a Citizen of the United States.
Since in the 1790s the only “whites” in America were of Western European heritage — English, Scots, German, Dutch, Scandinavians, northern French — it is clear that when the term “white” is used it refers to people of Nordic-Germanic or blended Yamnaya-Cro-Magnon heritage. The 1790 census confirms this with its discrete categories for free whites, slaves, and “other persons,” which is generally acknowledged to mean Indians and free Negroes/Africans, who at the time were a very small population.
Liberalism is derived from two related features of Western culture. The first is the West’s preoccupation with individuality, as compared to the emphasis in other civilizations on status, caste, and tradition. Throughout much of history, the individual has been submerged in and subordinate to his clan, tribe, ethnic group, or kingdom. Liberalism is the culmination of developments in Western society that produced a sense of the importance of human individuality, a liberation of the individual from complete subservience to the group, and a relaxation of the tight hold of custom, law, and authority. In this respect, liberalism stands for the emancipation of the individual. See also individualism.
Liberalism also derives from the practice of adversariality in European political and economic life, a process in which institutionalized competition—such as the competition between different political parties in electoral contests, between prosecution and defense in adversary procedure, or between different producers in a market economy (see monopoly and competition)—generates a dynamic social order. Adversarial systems have always been precarious, however, and it took a long time for the belief in adversariality to emerge from the more traditional view, traceable at least to Plato, that the state should be an organic structure, like a beehive, in which the different social classes cooperate by performing distinct yet complementary roles.
“Equality” consists of the mythic state of human relations which Leftists assume on a conjectural, speculative, and hypothetical basis can be achieved within civilizations by means and bounds unspecified. This is a variation, like pluralism, of pacifism, or the idea that we can eliminate conflict through compromise (sitzpinkel in German).
In reality, “Inequality” simply refers to the fact that we have IQ inequality, and some people are smarter than others, and smarter people tend to earn more money and keep it in the family between generations until someone marries an idiot (waitress, stewardess, actress, public relations, stripper) and then dumbs down the family line.
Surveyed members of the Class of 2019 who identified as legacies reported higher best overall SAT scores—2269 on average—than their non-legacy peers, who reported SAT scores of 2221 on average.
The solution to “inequality” is to stop Leftism. Leftism seeks to take money, power, and prestige from the successful and give it to the unsuccessful under the theory that this will pacify the unsuccessful, and everyone will henceforth be happy. Instead, you simply work to death your most promising people, pay off the oblivious idiots who do not notice, and then your society fails like the Soviet Union or Venezuela.
IQ and the Wealth of Nations
https://www.gwern.net/docs/iq/2013-lynn.pdf (old data)
The formation of modern Israel was inspired by a need to end the diaspora and the anti-Semitism that it created:
In 1894, Captain Alfred Dreyfus, a Jewish officer in the French army, was unjustly accused of treason, mainly because of the prevailing anti-Semitic atmosphere. Herzl witnessed mobs shouting “Death to the Jews” in France, the home of the French Revolution, and resolved that there was only one solution: the mass immigration of Jews to a land that they could call their own. Thus, the Dreyfus Case became one of the determinants in the genesis of Political Zionism.
Herzl concluded that anti-Semitism was a stable and immutable factor in human society, which assimilation did not solve. He mulled over the idea of Jewish sovereignty, and, despite ridicule from Jewish leaders, published Der Judenstaat (The Jewish State, 1896). Herzl argued that the essence of the Jewish problem was not individual but national. He declared that the Jews could gain acceptance in the world only if they ceased being a national anomaly. The Jews are one people, he said, and their plight could be transformed into a positive force by the establishment of a Jewish state with the consent of the great powers. He saw the Jewish question as an international political question to be dealt with in the arena of international politics.
This was designed to acknowledge Jewish genetic roots and commonality:
Here, genome-wide analysis of seven Jewish groups (Iranian, Iraqi, Syrian, Italian, Turkish, Greek, and Ashkenazi) and comparison with non-Jewish groups demonstrated distinctive Jewish population clusters, each with shared Middle Eastern ancestry, proximity to contemporary Middle Eastern populations, and variable degrees of European and North African admixture. Two major groups were identified by principal component, phylogenetic, and identity by descent (IBD) analysis: Middle Eastern Jews and European/Syrian Jews. The IBD segment sharing and the proximity of European Jews to each other and to southern European populations suggested similar origins for European Jewry and refuted large-scale genetic contributions of Central and Eastern European and Slavic populations to the formation of Ashkenazi Jewry. Rapid decay of IBD in Ashkenazi Jewish genomes was consistent with a severe bottleneck followed by large expansion, such as occurred with the so-called demographic miracle of population expansion from 50,000 people at the beginning of the 15th century to 5,000,000 people at the beginning of the 19th century. Thus, this study demonstrates that European/Syrian and Middle Eastern Jews represent a series of geographical isolates or clusters woven together by shared IBD genetic threads.
These show us an origin in the original Jews of Israel:
Jewish groups share a lot of the genome identical by descent. Additionally, there’s a general agreement with the other results as to which groups are close to each other. They note in the text that the segments identical by descent among Jews are rather small, which implies that recombination has broken up the large blocks. So that means that a high proportion of Jewish-Jewish IBD is a function more of many common ancestors deep in the past, rather than a few more recent common ancestors. Ashkenazi Jews in particular exhibit increased sharing of the genome across short blocks as opposed to longer ones, suggestive of a demographic expansion from a small population.
Despite admixture with Europeans, this original group remains the genetic root of Jewish populations:
We find that the Jewish populations show a high level of genetic similarity to each other, clustering together in several types of analysis of population structure. Further, Bayesian clustering, neighbor-joining trees, and multidimensional scaling place the Jewish populations as intermediate between the non-Jewish Middle Eastern and European populations.
These results support the view that the Jewish populations largely share a common Middle Eastern ancestry and that over their history they have undergone varying degrees of admixture with non-Jewish populations of European descent.
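The clustering analyses these studies describe (principal components of a genotype matrix) can be sketched with plain NumPy on simulated data. Everything below — the sample sizes, SNP count, and allele-frequency divergence — is invented for illustration; it only demonstrates the generic technique, not the actual datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_genotypes(n, freqs):
    """Toy diploid genotypes (0/1/2 alt-allele counts) for n individuals
    at len(freqs) SNPs with the given population allele frequencies."""
    return rng.binomial(2, freqs, size=(n, len(freqs)))

n_snps = 500
# Two hypothetical populations whose allele frequencies have drifted apart
base = rng.uniform(0.1, 0.9, n_snps)
drift = rng.normal(0, 0.08, n_snps)
pop_a = simulate_genotypes(50, base)
pop_b = simulate_genotypes(50, np.clip(base + drift, 0.05, 0.95))

# Standard PCA: center the genotype matrix, take top component via SVD
G = np.vstack([pop_a, pop_b]).astype(float)
G -= G.mean(axis=0)
U, S, Vt = np.linalg.svd(G, full_matrices=False)
pc1 = U[:, 0] * S[0]  # projection of each individual on PC1

# Individuals from the same population cluster on PC1: the two group
# means separate by far more than the within-group spread.
sep = abs(pc1[:50].mean() - pc1[50:].mean())
spread = (pc1[:50].std() + pc1[50:].std()) / 2
print(sep > spread)
```

This is the mechanism by which "clustering together in several types of analysis of population structure" is detected: small per-SNP frequency differences, aggregated over many markers, separate groups cleanly along the leading principal components.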
This shows consistency with an origin in the middle east near or in Israel:
In conclusion, we show that, at least in the context of the studied sample, it is possible to predict full Ashkenazi Jewish ancestry with 100% sensitivity and 100% specificity, although it should be noted that the exact dividing line between a Jewish and non-Jewish cluster will vary across sample sets which in practice would reduce the accuracy of the prediction. While the full historical demographic explanations for this distinction remain to be resolved, it is clear that the genomes of individuals with full Ashkenazi Jewish ancestry carry an unambiguous signature of their Jewish heritage, and this seems more likely to be due to their specific Middle Eastern ancestry than to inbreeding.
This is consistent throughout scientific literature on the subject:
Progressively more detailed population genetic analysis carried out independently by multiple research groups over the past two decades has revealed a pattern for the population genetic architecture of contemporary Jews descendant from globally dispersed Diaspora communities. This pattern is consistent with a major, but variable component of shared Near East ancestry, together with variable degrees of admixture and introgression from the corresponding host Diaspora populations.
Ashkenazi Jews, the most Europeanized group, descend from a small band that ventured north into Europe:
Reconstruction of recent AJ history from such segments confirms a recent bottleneck of merely ≈350 individuals. Modelling of ancient histories for AJ and European populations using their joint allele frequency spectrum determines AJ to be an even admixture of European and likely Middle Eastern origins. We date the split between the two ancestral populations to ≈12–25 Kyr, suggesting a predominantly Near Eastern source for the repopulation of Europe after the Last Glacial Maximum.
This proves consistent with religious tradition in Judaism:
The Cohanim Modal Haplotype (CMH), the Jewish genetic marker that has received the most attention, was first publicized in the journal Nature in a study that identified six differences in the DNA sequence of male Jews that identified as Cohens or Cohanim (Jewish priests).[47] Some scientists think that the Cohanim signature could represent the inheritance of over 100 generations from the founder of the patrilineal genetic line.[48] The signature is traced to a date over 3000 years ago, in accordance with the oral tradition that the Cohens maintain a line of patrilineal descent from Aaron, the first Jewish priest.[49]
During the diaspora, Ashkenazi Jews admixed with Europeans on the maternal line especially:
Here we show that all four major founders, ~40% of Ashkenazi mtDNA variation, have ancestry in prehistoric Europe, rather than the Near East or Caucasus. Furthermore, most of the remaining minor founders share a similar deep European ancestry. Thus the great majority of Ashkenazi maternal lineages were not brought from the Levant, as commonly supposed, nor recruited in the Caucasus, as sometimes suggested, but assimilated within Europe.
However, this occurred on top of a population with near east ancestry:
Here, using complete sequences of the maternally inherited mitochondrial DNA (mtDNA), we show that close to one-half of Ashkenazi Jews, estimated at 8,000,000 people, can be traced back to only 4 women carrying distinct mtDNAs that are virtually absent in other populations, with the important exception of low frequencies among non-Ashkenazi Jews. We conclude that four founding mtDNAs, likely of Near Eastern ancestry, underwent major expansion(s) in Europe within the past millennium.
This makes for a whole view, which is of a Semitic population in ancient Israel which was partially European, and then partially mixed during the diaspora:
Monoallelic genetic markers, Y-chromosomal DNA and mitochondrial DNA, have proven their usefulness in understanding the patrilineal and matrilineal origins of Jewish Diaspora groups. Y-DNA analysis showed that most Diaspora Jews are descended from a smaller group of Middle Eastern men. Seven Y chromosome major branches (E3b, G, J1, J2, Q, R1a1, and R1b) are prevalent among Ashkenazi Jews. Five of these (E3b, G, J1, J2, and Q) were part of the ancestral gene pool from the Middle East, whereas R1b and certain R1a sub-lineages are from Europe and may have incorporated into the Ashkenazi Jewish population. The presence of European Y-chromosomal lineages is the major difference between Ashkenazi Jews, Middle Eastern and Sephardic Jews.
The most common Ashkenazi Jewish Y chromosomal haplogroups are R1a1 and R1b. R1a1 is very common among Eastern European populations, Russians, Ukrainians, and Sorbs (Slavic speakers in Germany), as well as among certain Central Asian groups. However, it should be noted that a Middle Eastern origin for some R1a1 lineages cannot be ruled out. R1b is the most common Y-chromosome branch of Atlantic Europe. Its occurrence among Ashkenazi Jews may be an indicator of the mixture prior to the Ashkenazi Jewish migration to Eastern Europe or at later time points in certain locales.
This includes some Khazar/Turkish DNA but as a minority contribution after the foundation of the group in Israel:
Employing a variety of standard techniques for the analysis of population-genetic structure, we find that Ashkenazi Jews share the greatest genetic ancestry with other Jewish populations, and among non-Jewish populations, with groups from Europe and the Middle East. No particular similarity of Ashkenazi Jews with populations from the Caucasus is evident, particularly with the populations that most closely represent the Khazar region. Thus, analysis of Ashkenazi Jews together with a large sample from the region of the Khazar Khaganate corroborates the earlier results that Ashkenazi Jews derive their ancestry primarily from populations of the Middle East and Europe, that they possess considerable shared ancestry with other Jewish populations, and that there is no indication of a significant genetic contribution either from within or from north of the Caucasus region.
This points to what history would consider the prevailing opinion: in the middle east, people from North Africa, Western Europe, and Asia mixed in trading communities, producing a unique tribe in Israel which then spread outward during the Roman Empire and embarked on a diaspora, enduring some admixture but remaining essentially Jewish and linked to the historical land of Israel of which modern Israel is now part.
Journolist was an unofficial mailing list where Leftist activists coordinated media manipulation in order to present talking points as common wisdom:
JournoList e-mails obtained by the Daily Caller reveal what anybody with two neurons to rub together already knew: Professional liberals don’t like Republicans and do like Democrats…In 2008, participants shared talking points about how to shape coverage to help Obama. They tried to paint any negative coverage of Obama’s racist and hateful pastor, Jeremiah Wright, as out of bounds. Journalists at such “objective” news organizations as Newsweek, Bloomberg, Time, and The Economist joined conversations with open partisans about the best way to criticize Sarah Palin.
In the 1930s, the New York Times deliberately whitewashed Stalin’s murders. In 1964, CBS reported that Barry Goldwater was tied up with German Nazis. In 1985, the Los Angeles Times polled 2,700 journalists at 621 newspapers and found that journalists identified themselves as liberal by a factor of 3 to 1. Their actual views on issues were far more liberal than even that would suggest…In other words, JournoList is a symptom, not the disease. And the disease is not a secret conspiracy but something more like the “open conspiracy” H. G. Wells fantasized about, where the smartest, best people at every institution make their progressive vision for the world their top priority.
Even the Left admits that this collusion in the media market was designed to promote a Leftist agenda:
The point was that it connected a bunch of people in divergent but related industries, all liberal or left-wing, and gave them space to talk about what they were/should have been working on.
Previously undisclosed FBI documents suggest that the Kent State antiwar protests were more meticulously planned than originally thought and that one or more gunshots may have been fired at embattled Ohio National Guardsmen before their killings of four students and woundings of at least nine others on that searing day in May 1970.
This includes an audio tape:
A noisy, violent altercation and four pistol shots took place about 70 seconds before Ohio National Guardsmen opened fire on antiwar protesters at Kent State University, according to a new analysis of a 40-year-old audiotape of the event.
Even before this evidence, the Kent State shooting was known as a mystery:
Later there would be much debate over why the Guardsmen had fired—whether they had been ordered to do so, whether they thought they had been fired upon themselves and shot in self-defense, or whether a small group of them had indeed conspired to shoot.
It was widely acknowledged that the crowd of demonstrators was the aggressor:
The answer offered by the Guardsmen is that they fired because they were in fear of their lives. Guardsmen testified before numerous investigating commissions as well as in federal court that they felt the demonstrators were advancing on them in such a way as to pose a serious and immediate threat to the safety of the Guardsmen, and they therefore had to fire in self-defense. Some authors (e.g., Michener, 1971 and Grant and Hill, 1974) agree with this assessment. Much more importantly, federal criminal and civil trials have accepted the position of the Guardsmen. In a 1974 federal criminal trial, District Judge Frank Battisti dismissed the case against eight Guardsmen indicted by a federal grand jury, ruling at mid-trial that the government’s case against the Guardsmen was so weak that the defense did not have to present its case. In the much longer and more complex federal civil trial of 1975, a jury voted 9-3 that none of the Guardsmen were legally responsible for the shootings. This decision was appealed, however, and the Sixth Circuit Court of Appeals ruled that a new trial had to be held because of the improper handling of a threat to a jury member.
The legal aftermath of the May 4 shootings ended in January of 1979 with an out-of-court settlement involving a statement signed by 28 defendants as well as a monetary settlement, and the Guardsmen and their supporters view this as a final vindication of their position. The financial settlement provided $675,000 to the wounded students and the parents of the students who had been killed. This money was paid by the State of Ohio rather than by any Guardsmen, and the amount equaled what the State estimated it would cost to go to trial again. Perhaps most importantly, the statement signed by members of the Ohio National Guard was viewed by them to be a declaration of regret, not an apology or an admission of wrongdoing:
One of the suspects in the firing of shots from the demonstrator side is Terry Norman:
The FBI initially denied any connection with Norman, although the bureau had paid him for undercover work a month before the Kent State shootings. His relationship with the FBI may have begun even earlier than Norman has acknowledged, and he may later have had ties to the CIA.
The individual states in the United States are sometimes called “laboratories of democracy” because they can experiment with innovative policy ideas. This allows other states and the nation as a whole to see if the new ideas work or not before they adopt them.
The idea that states are ideal laboratories for democracy was popularized in the New State Ice Co. v. Liebmann case by U.S. Supreme Court Justice Louis Brandeis.
“Liberals” referred to those who, around and after the French Revolution, embraced the Enlightenment-era ideal of individualism: the belief that we needed no order higher than the individual, specifically rejecting aristocracy and the caste system.
Instead, they suggested a mixture of tolerance and Darwinism: everybody do whatever they want — barring what we might call the “simple social compact” of avoiding murder, theft, and assault — and the best will rise over time, maintaining a social order without aristocrats.
This presented a problem in that it left society headless, so it was rapidly assimilated into the democratic ideal, which held that not only should people be able to live in a state of near-anarchy, but that they would then be polled in a utilitarian manner to see what a plurality thought was a good idea on questions of leadership.
Democracy created another problem, in that in addition to no one paying attention to the task of maintaining civilization and culture, it encouraged great resentment as people asked, effectively, “If we are all equal, why am I impoverished, powerless, and socially insignificant?”
In response to this, liberalism fragmented. The former type became “classical liberals” who are today represented by libertarians and some paleoconservatives, and the new type became infused with socialism, or the idea that society owed a duty to its citizens to subsidize them until they were “equal,” and the cost would be externalized or spread across the whole of society.
Liberalism is derived from two related features of Western culture. The first is the West’s preoccupation with individuality, as compared to the emphasis in other civilizations on status, caste, and tradition. Throughout much of history, the individual has been submerged in and subordinate to his clan, tribe, ethnic group, or kingdom. Liberalism is the culmination of developments in Western society that produced a sense of the importance of human individuality, a liberation of the individual from complete subservience to the group, and a relaxation of the tight hold of custom, law, and authority. In this respect, liberalism stands for the emancipation of the individual. See also individualism.
Liberalism also derives from the practice of adversariality in European political and economic life, a process in which institutionalized competition—such as the competition between different political parties in electoral contests, between prosecution and defense in adversary procedure, or between different producers in a market economy (see monopoly and competition)—generates a dynamic social order. Adversarial systems have always been precarious, however, and it took a long time for the belief in adversariality to emerge from the more traditional view, traceable at least to Plato, that the state should be an organic structure, like a beehive, in which the different social classes cooperate by performing distinct yet complementary roles.
Many of us refer to both groups as Leftists because the basis of their philosophy is egalitarian:
The opposite of this is a belief in human differences and individuality, which is separate from individualism. Individualism holds that the needs, judgments, feelings, and desires of the individual come first before all else, and that the largest social unit in civilization should be the individual; conservatives instead believe in the need for hierarchy, family, culture, heritage, and social structure, all of which are orders larger than the individual.
Contrary to public opinion, Mandela defined his career by being willing to use violence:
Mandela was no Christ nor even Gandhi nor Martin Luther King. He was for decades a man of violence. In 1961, he broke with African National Congress colleagues who preached non-violence, creating a terrorist wing.
He later pleaded guilty in court to acts of public violence, and behind bars sanctioned more, including the 1983 Church St car bomb that killed 19 people.
Mandela even suggested cutting off the noses of blacks deemed collaborators. His then wife Winnie advocated “necklacing” instead – a burning tyre around the neck.
Mandela’s support for other leaders of violence is even less forgivable. He maintained close ties to Cuban dictator Fidel Castro and backed Palestinian terrorist leader Yasser Arafat. As president in 1997, he gave his country’s highest award for a foreigner to Libya’s dictator, Colonel Muammar Gaddafi, who’d donated $10 million to the ANC. He gave the same award to the corrupt Indonesian president Suharto, who he said had donated $60 million.
He supported Nigerian coup leader Sani Abacha, refusing to say a word publicly to stop the 1995 hanging of activist Ken Saro-Wiwa.
Contrary to what the Leftist media says, he was a card-carrying Communist:
On the day of Nelson Mandela’s death the South African Communist Party chose to reveal a fact that it had long denied: that he was a party member. Indeed, at the time of his arrest he was on the Central Committee. The statement read: “At his arrest in August 1962, Nelson Mandela was not only a member of the then underground South African Communist Party, but was also a member of our Party’s Central Committee… After his release from prison in 1990, Cde Madiba became a great and close friend of the communists till his last days.”
One only has to consider some of the ANC’s current positions to see the Party’s imprint on its thinking. Reading the ANC’s most important current blueprint, Strategy and Tactics, adopted in 2007 we see its analysis of the nature of South African society. This refers to the country as “Colonialism of a Special Type, with both the coloniser and the colonised located in a common territory and with a large European settler population.” This formulation is lifted, almost word for word, from the programme of the South African Communist Party adopted in 1962.
This gained him support from the Communist international:
South Africans of various political persuasions were willing to take up arms against the government after the Sharpeville massacre of March 1960, but in this new context it was the Communist Party that had the best international connections. Four top party members secretly visited Beijing, where they had discussions with Mao Zedong personally, and Moscow. In both capitals they received assurances of support.
He continued in this belief after the fall of Communism:
At the very moment communism was collapsing in Eastern Europe, Mandela praised the South African Communist Party in his first speech following his release from prison. Mandela said in Cape Town on February 11: “We are heartened by the fact that the alliance between ourselves and the [communist] party remains as strong as it always was.”
Mandela was not jailed because of his political viewpoints. Rather, he was imprisoned in 1962 for possessing explosive devices, which were used in sabotage attacks inside South Africa, and for inciting violence. Mandela’s violent actions would have resulted in imprisonment in virtually any country of the world.
This included endorsement of typical Communist murder, torture, and theft:
Indeed, outside of open support from ruthless communist dictatorships — the tyrants ruling over Cuba, East Germany, and the Soviet Union, for example — Mandela’s ANC and its South African Communist Party partners were widely viewed as ruthless communist terrorists. Considering their murderous activities, which included the barbaric executions and torture of countless South African blacks who opposed them, it is easy to understand why.
With help from elements of the Western establishment and the media, however, all of that gradually changed. Widely adored in South Africa and around the world, today Mandela is almost universally portrayed as a peaceful hero who struggled to bring down the white-led Apartheid regime that ruled the area for decades — all in the name of “democracy,” “equality,” and racial harmony.
In reality, the Soviet-backed revolutionary was imprisoned for terrorism, sedition, and sabotage — an integral part of Mandela’s long communist history that his adoring fans tend to downplay, at best, or more often, ignore altogether. Almost none of the adoring eulogies pouring forth from around the world have noted, for example, that Mandela was offered the chance to walk out of prison a free man if he would just renounce violence. He refused.
In fact, the ANC was essentially a Communist project:
The South African Communist Party and its patrons in Russia and China were a source of money and weapons for the largely feckless armed struggle, and for many, it meant solidarity with a cause larger than South Africa. Communist ideology undoubtedly seeped into the A.N.C., where it became part of a uniquely South African cocktail with African nationalism, Black Consciousness, religious liberalism and other, inchoate angers and resentments and yearnings.
The ANC followed Communist doctrine in initiating violence in South Africa:
Documents which have surfaced – including Mandela’s original autobiography written during his time in prison, minutes of meetings and statements from members of the SACP Central Committee – have, however, cast doubt on the insistence of the former alliance partners that the decision to take up arms was arrived at simultaneously, and have suggested that the decision to launch an armed struggle was primarily an initiative of the SACP, inspired by Fidel Castro’s 26th of July movement during the Cuban Revolution. Steven Ellis, professor at the University of Amsterdam, has researched the formation of MK extensively and has concluded that the decision to establish the armed movement was taken by the SACP, decided at a small conference in Emmarentia in December 1960. Mandela was among the 25 people in attendance.
Mandela conducted himself in a manner familiar to those who study Communism:
His book also provides fresh detail on how the ANC’s military wing had bomb-making lessons from the IRA, and intelligence training from the East German Stasi, which it used to carry out brutal interrogations of suspected “spies” at secret prison camps.
His doctrines of equality were influenced by earlier Communist activity:
Despite remonstrances about the need for appealing also to the white working class, the Comintern in 1928 ordered as correct the slogan: “an independent native South African republic as a stage towards a workers’ and peasants’ republic with full, equal rights for all races.” The factionalism and expulsions that followed this declaration virtually decimated the party. With the rise of Hitler and Moscow’s new emphasis on organizing all-class united fronts, the party slowly revived.
In fact, Communist theory formed the core of his thinking:
He already had a rough grasp of the essentials of Marxist ideas, acquired again through reading and discussions with Moses Kotane at the beginning of the 1950s. As he noted much later in his 1994 autobiography, during the 1950s, the certainties offered by “the scientific underpinnings of dialectical materialism” were for him powerfully compulsive.
Mandela counted on Soviet support for his agenda:
In South Africa, it was the Soviet bloc — the same communist governments that were brutally repressing their own people — that helped the ANC fight apartheid. In the 1980s, they were joined by an American and European anti-apartheid movement willing to overlook the ANC’s communist ties because they refused to see South Africa’s freedom struggle through a Cold War lens. At a time when men like Reagan and Cheney were insisting that the most important thing about Mandela was where he stood in the standoff between Washington and Moscow, millions of citizens across the West insisted that the ANC could be Soviet-backed, communist-influenced, and still lead a movement for freedom.
Western liberals covered up this fact in order to lionize Mandela:
This much could be easily gleaned by reading the SACP journal, African Communist, or just sniffing the air outside the London headquarters of the African National Congress; during the struggle years (1960-1990) the SACP reeked of Soviet orthodoxy, and the ANC reeked of the SACP. As a journalist, you had to be very careful what you said about this. The civilized line was the one ceaselessly propounded in The New York Times — Nelson Mandela was basically a black liberal, and his movement was striving for universal democratic values. Anyone who disagreed was an anti-Communist crank, as Keller labels me.
Mandela repeatedly expounded on the glories of the coming Communist society:
Under a Communist Party Government South Africa will become a land of milk and honey. Political, economic and social rights will cease to be enjoyed by Whites only. They will be shared equally by Whites and Non-Whites. There will be enough land and houses for all. There will be no unemployment, starvation and disease.
The test measures time preference, or the ability to wait for greater long-term gratification rather than achieve a smaller reward in the short term:
Mischel and his colleagues presented a preschooler with a plate of treats such as marshmallows. The child was then told that the researcher had to leave the room for a few minutes, but not before giving the child a simple choice: If the child waited until the researcher returned, she could have two marshmallows. If the child simply couldn’t wait, she could ring a bell and the researcher would come back immediately, but she would only be allowed one marshmallow.
[S]usceptibility to emotional responses may influence their behavior throughout life, as Mischel discovered when he revisited his marshmallow-test subjects as adolescents. He found that teenagers who had waited longer for the marshmallows as preschoolers were more likely to score higher on the SAT, and their parents were more likely to rate them as having a greater ability to plan, handle stress, respond to reason, exhibit self-control in frustrating situations and concentrate without becoming distracted.
Casey and colleagues examined brain activity in some subjects using functional magnetic resonance imaging. When presented with tempting stimuli, individuals with low self-control showed brain patterns that differed from those with high self-control. The researchers found that the prefrontal cortex (a region that controls executive functions, such as making choices) was more active in subjects with higher self-control. And the ventral striatum (a region thought to process desires and rewards) showed boosted activity in those with lower self-control.
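The notion of time preference described above is commonly formalized as temporal discounting: a delayed reward is worth less right now, and the steeper one's discount rate, the harder it is to wait. Here is a minimal sketch; the discount rate `k` and the per-minute framing are illustrative assumptions, not figures from Mischel's work:

```python
import math

def discounted_value(reward, delay_minutes, k=0.03):
    """Present value of a reward under exponential discounting.
    k is an illustrative per-minute discount rate, not an empirical one."""
    return reward * math.exp(-k * delay_minutes)

# One marshmallow now vs. two after a 20-minute wait.
# A patient (low-k) chooser still prefers the delayed pair...
assert discounted_value(2, 20, k=0.03) > discounted_value(1, 0, k=0.03)
# ...while a high-time-preference (high-k) chooser rings the bell.
assert discounted_value(2, 20, k=0.05) < discounted_value(1, 0, k=0.05)
```

The same trade-off, different choices: only the discount rate changed, which is the sense in which the test measures a stable trait of the chooser rather than the reward itself.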
The ability to anticipate and wait for delayed gratification led to better results in life:
The longer a child delayed gratification, Mischel found—that is, the longer she was able to wait—the better she would fare later in life at numerous measures of what we now call executive function. She would perform better academically, earn more money, and be healthier and happier. She would also be more likely to avoid a number of negative outcomes, including jail time, obesity, and drug use.
Mischel followed the kids in the original Bing sample for five decades, tracking how the ability to exercise self-control at an early age was correlated with various life outcomes as the children grew into adolescents and adults. (More recently, he also studied brain scans of that original cohort to examine how the ability to delay gratification is related to neural structures.)
Increased willpower led to better life results:
It began in the early 1960s at Stanford University’s Bing Nursery School, where Mischel and his graduate students gave children the choice between one reward (like a marshmallow, pretzel, or mint) they could eat immediately, and a larger reward (two marshmallows) for which they would have to wait alone, for up to 20 minutes. Years later, Mischel and his team followed up with the Bing preschoolers and found that children who had waited for the second marshmallow generally fared better in life. For example, studies showed that a child’s ability to delay eating the first treat predicted higher SAT scores and a lower body mass index (BMI) 30 years after their initial Marshmallow Test. Researchers discovered that parents of “high delayers” even reported that they were more competent than “instant gratifiers”—without ever knowing whether their child had gobbled the first marshmallow.
But there’s been criticism of Mischel’s findings too—that his samples are too small or homogenous to support sweeping scientific conclusions and that the Marshmallow Test actually measures trust in authority, not what he says his grandmother called sitzfleisch, the ability to sit in a seat and reach a goal, despite obstacles.
Some have criticized it as rewarding wealth:
Mischel and his colleagues administered the test and then tracked how children went on to fare later in life. They described the results in a 1990 study, which suggested that delayed gratification had huge benefits, including on such measures as standardized test scores.
In restaging the experiment, Watts and his colleagues thus adjusted the experimental design in important ways: The researchers used a sample that was much larger—more than 900 children—and also more representative of the general population in terms of race, ethnicity, and parents’ education. The researchers also, when analyzing their test’s results, controlled for certain factors—such as the income of a child’s household—that might explain children’s ability to delay gratification and their long-term success.
Ultimately, the new study finds limited support for the idea that being able to delay gratification leads to better outcomes. Instead, it suggests that the capacity to hold out for a second marshmallow is shaped in large part by a child’s social and economic background—and, in turn, that that background, not the ability to delay gratification, is what’s behind kids’ long-term success.
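The replication's central move, controlling for background so that a confounder cannot masquerade as an effect, can be sketched with a toy regression. The data below are fabricated purely for illustration: background fully determines the outcome, while delay ability is merely correlated with background.

```python
import numpy as np

# Fabricated data: 'ses' (family background) fully determines the outcome;
# 'delay' is correlated with ses but has no effect of its own.
ses     = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
delay   = ses + np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
outcome = 2 * ses

def ols(X, y):
    # Least-squares fit with an intercept column appended; returns coefficients.
    X = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive      = ols(delay[:, None], outcome)[0]                 # no controls
controlled = ols(np.column_stack([delay, ses]), outcome)[0]  # ses held fixed

# The apparent "delay effect" is large without controls
# and vanishes once background enters the model.
```

With these numbers the uncontrolled coefficient on delay is positive, but once `ses` is in the model it drops to zero, which is the pattern the replication reports for household income.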
This objection, however, ignores the heritability of intelligence and other personality traits, which is why higher-intelligence kids end up both wealthier and healthier.
Market distortions are disparities between actual value and market value caused by intervention from outside the markets:
Market distortion is an economic scenario that occurs when there is an intervention in a given market by a governing body. The intervention may take the form of price ceilings, price floors or tax subsidies.
Market distortions create market failures, which are not economically ideal. Market distortions are often a byproduct of government policies that aim to protect and raise the general well-being of all market participants.
Such acts alter expressed value relative to potential value:
A market distortion refers to an event in which a governing body intervenes in a market. Generally, it sees the market clearing price for an item significantly differing from the price that a market would achieve while operating under conditions of perfect competition.
Market distortions may not always be economically efficient, but they generally intend to enhance a society’s welfare. As represented in the example above, markets can become distorted when a single business holds a monopoly and a lack of competition typically leads to higher prices – and in some cases, this requires the government to intervene.
These are usually “the path to Hell is paved with good intentions” style policies:
A governing body’s intervention in a specific market takes one or more forms, like price ceilings, price floors, or tax subsidies. It can enhance the welfare of society, but usually culminates in market failure.
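The gap between a market-clearing price and a distorted one can be shown with the textbook linear supply-and-demand model; every parameter below is made up for illustration.

```python
# Linear demand Qd = a - b*p and supply Qs = c + d*p (illustrative parameters).
a, b = 100.0, 2.0   # demand intercept and slope
c, d = 10.0, 1.0    # supply intercept and slope

# Market-clearing price under perfect competition: solve a - b*p = c + d*p.
p_star = (a - c) / (b + d)   # equilibrium price
q_star = a - b * p_star      # quantity traded at equilibrium

# A binding price ceiling below p_star distorts the market:
ceiling = 20.0
q_demanded = a - b * ceiling
q_supplied = c + d * ceiling
shortage   = q_demanded - q_supplied  # demand exceeds supply at the capped price
```

At the capped price, buyers want more units than sellers will produce; that persistent shortage is the "market failure" the definitions above describe.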
Even natural disasters can cause market distortions:
But there’s good reason for the astonishing increase: Hurricane Harvey.
After last year’s storm hit Houston at the end of August — and even days before — closings, showings and mortgage lending in this area effectively came to a halt as buyers and home shoppers put their plans on hold. For the last week of the month, real estate offices were dark.
Yet a comparison of the first three weeks of August with the same period in 2017 shows that sales activity still increased, especially for higher-end homes, the Houston Association of Realtors said Wednesday in a monthly report.
McCarthy knew that something was afoot, but had trouble identifying it directly:
In The Secret World of American Communism, John Haynes and I reprinted nearly one hundred Russian KGB documents establishing that Soviet intelligence had recruited American communists to spy on its behalf. We also showed that from its inception in 1919, the CPUSA had been generously funded by the Soviet Union, with subsidies that reached $3,000,000 a year by the mid-1980s, and that the Party leadership had worked closely with Soviet intelligence to ferret out American secrets. And we found snippets of information about a very hush-hush American project, code-named Venona, that had worked to decipher coded Soviet messages.
All told, some 350 Americans turn out to have worked for Soviet intelligence during World War II — a time when we were allies. American counter-intelligence eventually identified more than 125 of these agents — but was never able to nail down who the other 200-plus were. Virtually every one of the people accused of being a Soviet agent by Elizabeth Bentley and Whittaker Chambers — both reviled and denounced for making false charges not only by political partisans in the 1940s but by historians ever since — turns out to have been a Soviet spy.
Venona makes crystal clear that the leadership of the CPUSA was not only aware of Soviet intelligence networks in the government, but also actively assisted the KGB in recruiting American communists to spy. The CPUSA even had several liaisons who worked with KGB spymasters. The KGB code word for members of the CPUSA was “Fellow Countrymen.” Nearly every American who worked for the KGB or GRU was a member of the CPUSA.
But if McCarthy was wrong on the details — and what is history but details — many historians today are both wrong on the details about McCarthyism and morally obtuse about the nature of communism. Far too many American historians believe that anti-communism or the search for Soviet spies was baseless paranoia. They recoil so strongly from McCarthy that they are unable to recognize that just because an objectionable politician cynically employed anti-communism does not mean that anti-communism was objectionable. The CPUSA was a haven for spies and Soviet subversion presented a genuine security threat to the United States.
However, additional intelligence showed him to be more accurate:
We can be grateful to Herbert Romerstein and the late Eric Breindel for making the huge effort to tell the story of this extraordinary cipher-breaking program in their book, The Venona Secrets: Exposing Soviet Espionage and America’s Traitors (Regnery Publishing, $19.95, softcover, 608 pages). This book, in part, documents the revelations in Ann Coulter’s current bestseller, Treason. It is based both on the declassified Venona information and on the archives of the Communist International that were kept in Moscow, as well as the files of other Communist parties in Eastern and Central Europe. They became available to researchers after the fall of the Soviet Union.
The truth is, in the 1940s and the decades that followed, the State Department, the US Treasury, the CIO until it expelled some of its unions, and even the US Army were thoroughly infiltrated by Americans whose loyalty was to the Soviet Union, a nation bent on the destruction of capitalism and the democracy upon which it is based. Ironically, you can thank the Soviet Union for Social Security and for the failure of our present educational system. Both were implemented by Leftists, the latter being based on the former Soviet model.
In his general outlook, McCarthy was right:
In the 1940s, the NSA had a top-secret program called Venona which intercepted (and much later decoded) messages between Moscow and its American agents. The recent publication of a batch of Venona transcripts gives evidence that the Roosevelt and Truman administrations were rife with communist spies and political operatives who reported, directly or indirectly, to the Soviet government, much as their anti-communist opponents charged. The Age of McCarthyism, it turns out, was not the simple witch hunt of the innocent by the malevolent as two generations of high school and college students have been taught.
Sen. Robert Taft of Ohio said, “The greatest Kremlin asset in our history has been the pro-communist group in the State Department who surrendered to every demand of Russia at Yalta and Potsdam, and promoted at every opportunity the communist cause in China until today communism threatens to take over all of Asia.” Secretary of State Dean Acheson, a pillar of the establishment, concluded that Taft had joined “the primitives.”
Yet, in a global sense McCarthy was on to something. McCarthy may have exaggerated the scope of the problem but not by much. The government was the workplace of perhaps 100 communist agents in 1943-45. He just didn’t know their names.
The media is 93% non-conservative and thanks to the internet, even more concentrated than before:
“As of 2013, only 7 percent of [journalists] identified as Republicans,” Silver wrote in March, chiding the press for its political homogeneity. Just after the election, presidential strategist Steve Bannon savaged the press on the same point but with a heartier vocabulary. “The media bubble is the ultimate symbol of what’s wrong with this country,” Bannon said.
The national media really does work in a bubble, something that wasn’t true as recently as 2008. And the bubble is growing more extreme. Concentrated heavily along the coasts, the bubble is both geographic and political. If you’re a working journalist, odds aren’t just that you work in a pro-Clinton county—odds are that you reside in one of the nation’s most pro-Clinton counties. And you’ve got company: If you’re a typical reader of Politico, chances are you’re a citizen of bubbleville, too.
Today, 73 percent of all internet publishing jobs are concentrated in either the Boston-New York-Washington-Richmond corridor or the West Coast crescent that runs from Seattle to San Diego and on to Phoenix. The Chicagoland area, a traditional media center, captures 5 percent of the jobs, with a paltry 22 percent going to the rest of the country.
A study found a heavy concentration of journalists in Leftist circles:
Write Lars Wilnat and David Weaver, professors of journalism at Indiana, of their findings:
Compared with 2002, the percentage of full-time U.S. journalists who claim to be Democrats has dropped 8 percentage points in 2013 to about 28 percent, moving this figure closer to the overall population percentage of 30 percent, according to a December 12-15, 2013, ABC News/Washington Post national poll of 1,005 adults. This is the lowest percentage of journalists saying they are Democrats since 1971. An even larger drop was observed among journalists who said they were Republicans in 2013 (7.1 percent) than in 2002 (18 percent), but the 2013 figure is still notably lower than the percentage of U.S. adults who identified with the Republican Party (24 percent according to the poll mentioned above).
Back in 1971, the first time this survey was conducted, there was simply more partisanship among reporters. More than one in three (35.5 percent) said they were Democrats while more than one in four (25.7 percent) described themselves as Republicans. At that point, 32.5 percent called themselves independents.
Even when they claim to be “independents,” journalists vote Left:
The main reason why bias exists, I believe, is simply that newsrooms are filled overwhelmingly with liberals. Here’s the most important fact to know, if you want to understand media bias: If you poll Washington correspondents and ask “Who’d you vote for last election?”, about 93% will say the Democrat.
Not only that, but they collude:
A young blogger, Ezra Klein, formerly of the avowedly left-wing American Prospect and now with the avowedly mainstream Washington Post, founded the e-mail listserv JournoList for like-minded liberals to hash out and develop ideas. Some 400 people joined the by-invitation-only group. Most, it seems, were in the media, but many hailed from academia, think tanks, and the world of forthright liberal activism generally. They spoke freely about their political and personal biases, including their hatred of Fox News and Rush Limbaugh.
In 2008, participants shared talking points about how to shape coverage to help Obama. They tried to paint any negative coverage of Obama’s racist and hateful pastor, Jeremiah Wright, as out of bounds. Journalists at such “objective” news organizations as Newsweek, Bloomberg, Time, and The Economist joined conversations with open partisans about the best way to criticize Sarah Palin.
As James DeLong, a fellow at the Digital Society, correctly noted on the Enterprise Blog, “The real problem with JournoList is that much of it consisted of exchanges among people who worked for institutions about how to best hijack their employers for the cause of Progressivism.”
Members of most major Leftist publications and some “neutral” ones are represented:
According to records obtained by The Daily Caller, at several points during the 2008 presidential campaign a group of liberal journalists took radical steps to protect their favored candidate. Employees of news organizations including Time, Politico, the Huffington Post, the Baltimore Sun, the Guardian, Salon and the New Republic participated in outpourings of anger over how Obama had been treated in the media, and in some cases plotted to fix the damage.
In one instance, Spencer Ackerman of the Washington Independent urged his colleagues to deflect attention from Obama’s relationship with Wright by changing the subject. Pick one of Obama’s conservative critics, Ackerman wrote, “Fred Barnes, Karl Rove, who cares — and call them racists.”
Michael Tomasky, a writer for the Guardian, also tried to rally his fellow members of Journolist: “Listen folks–in my opinion, we all have to do what we can to kill ABC and this idiocy in whatever venues we have. This isn’t about defending Obama. This is about how the [mainstream media] kills any chance of discourse that actually serves the people.”
And when one collusion center shuts down, they start up another:
A prominent CNN commentator, the top two political reporters for The Huffington Post, a Reuters reporter, the editor of The Nation magazine, a producer for Al Jazeera America television, a U.S. News & World Report columnist, and approximately two dozen Huffington Post contributors are among the more than 1,000 members of Gamechanger Salon. Founded by leftwing activist Billy Wimsatt, the group is a secretive digital gathering of writers, opinion leaders, activists and political hands who share information, ideas and strategy via a closed Google group.
It spans media, unions, and NGOs:
“Gamechanger Salon” is a (now not-so-) secret Google group with a membership of over 1000 left wing influencers. Media Trackers discovered the group after filing an open records request concerning a professor and activist at the University of Wisconsin. Members include journalists from outlets like the Huffington Post, MSNBC, ThinkProgress, and Media Matters, and activists from groups like the Progressive Change Campaign Committee, Change.org, Planned Parenthood, and the AFL-CIO.
These people also belong to non-profit organizations dedicated to sharing propaganda:
Shareblue — which has partnered with SiriusXM Radio’s Progress channel — is a project of True Blue Media, a company formed by longtime Hillary Clinton ally David Brock. Brock has built a constellation of influential liberal organizations like the news watchdog group Media Matters and Correct the Record, a pro-Clinton super PAC.
Nerpel told Mediaite that American Bridge began funding Shareblue in 2016. American Bridge did not disclose making any payments to True Blue Media in its 2016 tax return, even though Brock served as the nonprofit’s chairman in 2015 and as its senior advisor through April 2016. While the group did report having roughly $1.6 million in notes and loans receivable as assets in 2016, IRS rules generally require nonprofits to specify when they do business with their former officers.
It operates beyond its own web site, influencing broader social media:
“Now, we can use our own social media platform to both weaponize oppo research and perfect its delivery system to expose Trump and his allies and collaborators, and to damage Trump. We will aim to have our factual news stories repeated in Democratic communications and paid political ads.”
Shareblue’s full editorial priorities have been published elsewhere.
Its operatives also infiltrated other sites disguised as normal users:
They were able to accommodate such a heavy presence of paid shills, and master their art of deception so fantastically, that the average social media user likely couldn’t tell that they were arguing with or reading posts from hired guns, that weren’t firing bullets but firing deception in a war for your mind.
It’s seemingly brilliant, hire a few thousand people, place clean profile pictures onto each of them, and send them off to the likes of Facebook, Twitter, Reddit, YouTube, forums, and imageboards; to intentionally deceive the masses.
They would use the various platforms and their systems to continue their hateful push to demonize President Trump, and his supporters. Brigading their own posts with upvotes or likes with their thousands of paid accounts, to attempt to sway the undecided voters into believing the lies they saw at the top of the chain, were widely believed or the most popular opinion.
Metapolitics is a theory, popular among the European New Right, that to influence politics, one must change culture:
Metapolitics started from the idea of ‘the primacy of culture over politics as the premise to a revolution in the spirit of ‘right-wing Gramscism’ (Griffin, 2000). It directly refers to the long-term strategy of GRECE (Groupement de recherche et d’études pour la civilisation Européenne), better known as La Nouvelle Droite, the school of thought Alain De Benoist, Guillaume De Faye and Dominique Venner founded in 1968 (Maly, 2018b).
Metapolitics is currently a keyword within the New Right movement worldwide (Nagle, 2017, Maly, 2018 a & b, Hawley, 2017; Johnson, 2012). According to Maly (2018), metapolitics is at its core, an ideological project: the goal is to achieve cultural hegemony (Gramsci, 1971).
It aims at hegemonizing a traditionalist, right wing ideology and a völkish (ethnonationalist) or in their own words a ‘differentialist’ position (Benoist & Champertier, 2000). This ideology is – contrary to what the etiquette ‘New Right’ suggests – not new: ‘It expressly and explicitly borrows a great deal from the Weimar Republic’s ‘conservative revolution’ (Salzberg, 2016: 40).
This targets cultural change:
The metapolitical analysis does not simply relate to the obvious, surface actions of everyday politics, but examines what controls and affects the development of society as a whole over the course of long periods, which relates to the underlying assumptions and consciousness of the average citizens. Metapolitics considers culture, economy, history, and both foreign and domestic policy – not simply state, party, or nation. We must understand society as a whole, as an organism, to be able to reform it in a constructive and lasting fashion.
Gramsci claimed that the state is not limited to its political apparatus. In fact, it works in tandem with the so-called civil apparatus. In other words, every political power structure is reinforced by a civil consensus, which is the social and psychological support given by the masses. This support expresses itself in the assumptions which underlie their culture, worldview, and customs. In order for any political ideology to maintain its grip on power, it must support itself by establishing and disseminating these cultural assumptions among the masses.
The Milgram experiment seemed to show that people obey totalitarian orders out of deference to authority, but it may have shown simply that compliant people go along with what they believe their superiors want of them:
Milgram made sketches of a long box with circular buttons numbered one through nine. It was an electric shock indicator, a way to quantify and measure a person’s willingness to torture. This was the significant test he was looking for. No one would be actually shocked, of course, but the confederate would fake it. There was only one problem: in Asch’s experiment, it was easy to get a control test without group pressure; all you had to do was give the same line test to an isolated individual. But without the group, a lone person wouldn’t have any cause to shock a stranger. The control would require the experimenter to order the subject to perform.
Milgram surveyed other psychologists before he ran the experiments, and his consulting group guessed about a tenth of one per cent (.125) of subjects (only sadists and psychopaths) would max out the voltage before refusing. Instead, 65 per cent of subjects hit the 450 volt button – labelled ‘XXX’ instead of ‘lethal’ in the final model – three times before Milgram cut them off. All subjects reached 300 volts, which meant they believed they had administered 20 distinct shocks. It was a successful experiment. Too successful. Cross-cultural comparisons were beside the point if most Americans were already Nazis just waiting for the right orders.
In June, a group of European researchers released a Milgram-based study that cross-referenced participants’ shock scores (on a mock game show instead of a lab session) with their results from a personality survey administered months later. Though the results weren’t dramatic, they found that ‘nice’ and ‘agreeable’ people were more likely to follow instructions from a game-show host telling them to torture strangers.
There were, however, some issues:
Over 700 people took part in the experiments. When the news of the experiment was first reported, and the shocking statistic that 65 percent of people went to maximum voltage on the shock machine was reported, very few people, I think, realized then and even realize today that that statistic applied to 26 of 40 people. Of those other 700-odd people, obedience rates varied enormously. In fact, there were variations of the experiment where no one obeyed.
Including the influence of experimenters:
It’s fairly well known, though, that many participants refused to go to that level, particularly in the conditions when the study participants felt less pressured to conform. For example, if the learner and the teacher were in the same room, fewer teachers moved up the shock scale to its maximum voltage. What is less well broadcasted is the fact that many participants sensed that the learner wasn’t really receiving any shock.
These in fact make the results questionable:
By examining records of the experiment held at Yale, I found that in over half of the 24 variations, 60% of people disobeyed the instructions of the authority and refused to continue.
In listening to the original recordings of the experiments, it’s clear that Milgram’s experimenter John Williams deviated significantly from the script in his interactions with subjects. Williams – with Milgram’s approval – improvised in all manner of ways to exert pressure on subjects to keep administering shocks.
He left the lab to “check” on the learner, returning to reassure the teacher that the learner was OK. Instead of sticking to the standard four verbal commands described in accounts of the experimental protocol, Williams often abandoned the script and commanded some subjects 25 times and more to keep going. Teachers were blocked in their efforts to swap places with the learner or to check on him themselves.
The slavish obedience to authority we have come to associate with Milgram’s experiments comes to sound much more like bullying and coercion when you listen to these recordings.
Subjects wrote to Milgram or called him afterwards to describe what had made them suspicious. Some commented on how the learner’s cries seemed to be coming from a speaker in the corner of the room, suggesting it was a tape recording. Others noticed the check given to the learner looked dog-eared and worn, an indication that it had been handed over many times before. The experimenter’s lack of concern for the learner and failure to respond to the learner’s complaints suggested there was nothing to worry about. Some subjects described how they had surreptitiously pressed switches of lower voltage but still the learner’s cries intensified.
The desire to participate in an experiment and do what was necessary to prove the thesis may have been the real conformity:
After trawling through the Yale archives, the team gained access to the feedback that 659 of the 800 volunteers provided at the end of the experiment, when the set-up had been revealed.
Far from being distressed by the experience, the researchers found that most volunteers said they were very happy to have participated.
Professor Haslam said: “It appears from this feedback that the main reason participants weren’t distressed is that they did not think they had done anything wrong. This was largely due to Milgram’s ability to convince them that they had made an important contribution to science.”
This bullying may have selected for people who were easily misled in their desire to please:
After analyzing the conversation patterns from audio recordings of 117 study participants, Hollander found that Milgram’s original classification of his subjects—either obedient or disobedient—failed to capture the true dynamics of the situation. Rather, he argued, people in both categories tried several different forms of protest—those who successfully ended the experiment early were simply better at resisting than the ones that continued shocking.
“Research subjects may say things like ‘I can’t do this anymore’ or ‘I’m not going to do this anymore,’” he said, even those who went all the way to 450 volts. “I understand those practices to be a way of trying to stop the experiment in a relatively aggressive, direct, and explicit way.”
Morality refers to self-guidance: even when no authorities are around to compel good behavior, people practice it nonetheless. The classic parable follows:
Now that those who practise justice do so involuntarily and because they have not the power to be unjust will best appear if we imagine something of this kind: having given both to the just and the unjust power to do what they will, let us watch and see whither desire will lead them; then we shall discover in the very act the just and unjust man to be proceeding along the same road, following their interest, which all natures deem to be their good, and are only diverted into the path of justice by the force of law.
The liberty which we are supposing may be most completely given to them in the form of such a power as is said to have been possessed by Gyges the ancestor of Croesus the Lydian. According to the tradition, Gyges was a shepherd in the service of the king of Lydia; there was a great storm, and an earthquake made an opening in the earth at the place where he was feeding his flock. Amazed at the sight, he descended into the opening, where, among other marvels, he beheld a hollow brazen horse, having doors, at which he stooping and looking in saw a dead body of stature, as appeared to him, more than human, and having nothing on but a gold ring; this he took from the finger of the dead and reascended. Now the shepherds met together, according to custom, that they might send their monthly report about the flocks to the king; into their assembly he came having the ring on his finger, and as he was sitting among them he chanced to turn the collet of the ring inside his hand, when instantly he became invisible to the rest of the company and they began to speak of him as if he were no longer present. He was astonished at this, and again touching the ring he turned the collet outwards and reappeared; he made several trials of the ring, and always with the same result-when he turned the collet inwards he became invisible, when outwards he reappeared.
Whereupon he contrived to be chosen one of the messengers who were sent to the court; where as soon as he arrived he seduced the queen, and with her help conspired against the king and slew him, and took the kingdom. Suppose now that there were two such magic rings, and the just put on one of them and the unjust the other; no man can be imagined to be of such an iron nature that he would stand fast in justice. No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a God among men. Then the actions of the just would be as the actions of the unjust; they would both come at last to the same point.
And this we may truly affirm to be a great proof that a man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever any one thinks that he can safely be unjust, there he is unjust. For all men believe in their hearts that injustice is far more profitable to the individual than justice, and he who argues as I have been supposing, will say that they are right. If you could imagine any one obtaining this power of becoming invisible, and never doing any wrong or touching what was another’s, he would be thought by the lookers-on to be a most wretched idiot, although they would praise him to one another’s faces, and keep up appearances with one another from a fear that they too might suffer injustice. Enough of this.
You can download the original “Moynihan Report,” The Negro Family: The Case For National Action, in full.
Sometimes it pops up in the news:
Nearly 50 years after the release of the U.S. Department of Labor report “The Negro Family: The Case for National Action,” which was highly controversial and widely criticized at the time, the new Urban Institute study found that the alarming statistics in the report back then “have only grown worse, not only for blacks, but for whites and Hispanics as well.”
“Today, the share of white children born outside marriage is about the same as the share of black children born outside marriage in Moynihan’s day,” the Urban Institute report said. “The percentage of black children born to unmarried mothers, in comparison, tripled between the early 1960s and 2009, remaining far higher than the percentage of white children born to unmarried mothers.”
“We have scores of studies that show that kids that grow up in single-women-headed families don’t fare as well, are more likely to do poorly in school and to drop out of school, to be arrested, to become single parents themselves,” [Haskins] said. “These factors reinforce the economic disadvantages that these kids face and impact the larger black community.”
It was not a jeremiad against blacks, or against welfare, but against welfare that did not include families in its calculus:
But Moynihan still professed concern for the family, and for the black family in particular. He began pushing for a minimum income for all American families. Nixon promoted Moynihan’s proposal—called the Family Assistance Plan—before the American public in a television address in August of 1969, and officially presented it to Congress in October.
Between 1963 and 1993, the murder rate doubled, the robbery rate quadrupled, and the aggravated-assault rate nearly quintupled. But the relationship between crime and incarceration is more discordant than it appears. Imprisonment rates actually fell from the 1960s through the early ’70s, even as violent crime increased. From the mid-’70s to the late ’80s, both imprisonment rates and violent-crime rates rose. Then, from the early ’90s to the present, violent-crime rates fell while imprisonment rates increased.
It seems he may have been right that strong families reduce crime.
Despite the jihadist slogans accompanying the mailed anthrax, it had nothing to do with Saddam Hussein or any foreign element; the FBI ignored a 2002 tip from a scientific colleague of the actual anthrax killer, who turned out to be a Fort Detrick scientist named Bruce Edwards Ivins; the reason is that they had quickly obsessed on an innocent man named Steven Hatfill; the bureau was bullied into focusing on the government scientist by Democratic Sen. Patrick Leahy (whose office, along with that of Senate Majority Leader Tom Daschle, was targeted by an anthrax-laced letter) and was duped into focusing on Hatfill by two sources – a conspiracy-minded college professor with a political agenda who’d never met Hatfill and by Nicholas Kristof, who put his conspiracy theories in the paper while mocking the FBI for not arresting Hatfill.
Mueller, who micromanaged the anthrax case and fell in love with the dubious dog evidence, personally assured Ashcroft and presumably George W. Bush that in Steven Hatfill the bureau had its man. Comey, in turn, was asked by a skeptical Deputy Secretary of Defense Paul Wolfowitz if Hatfill was another Richard Jewell – the security guard wrongly accused of the Atlanta Olympics bombing. Comey replied that he was “absolutely certain” they weren’t making a mistake.
Nationalism means identification of nation (as opposed to nation-state) with an ethnic group and its culture. The three are viewed as inseparable. Therefore, a nationalist is someone who believes that mixed-ethnic or mixed-racial nations are illegitimate, opposes the nation-state, and is patriotic to his tribe rather than his nation-state.
All nations begin in a state of nature as extensions of tribalism, itself an extension of the family, but nationalism did not need to be formalized until Europe began adopting republics instead of monarchies in the wake of the French Revolution. The most famous nationalists were the Chinese nationalists, who fought as allies of America in WW2, and the Axis forces, all of whom were known as the “nationalist powers.”
Nationalism, translated into world politics, implies the identification of the state or nation with the people — or at least the desirability of determining the extent of the state according to ethnographic principles. In the age of nationalism, but only in the age of nationalism, the principle was generally recognized that each nationality should form a state—its state—and that the state should include all members of that nationality. Formerly states, or territories under one administration, were not delineated by nationality. ^(1)
The ancients tell us that nationalism did not arise during history, but was a condition of our natural state in prehistory:
The kinship of all Greeks in blood and speech, and the shrines of gods and the sacrifices that we have in common, and the likeness of our way of life. – Herodotus, Histories, 8.144.2 ^(2)
Our Constitution guarantees us natural rights by restricting government from impeding them. This follows a long line of thinking in Western Civilization which says that essentially, life is unfair but our solution is not to order people around, Mongol-style, but to rely on natural selection by rewarding those who have done the good, productive, excellent, wise, sensible, etc. and punishing those who have done the opposite, while doing nothing for the mediocre. To that end, we encourage society to allow people to find their own level of performance and reap the rewards from it.
We can see a great deal of evidentiary support for this view in the writings of the founders and those who inspired them, including the Declaration of Independence, which says that “all men are created equal.” The important word there is created, as opposed to made by government, which means that at birth, you are as equal as you are going to ever be. The purpose of the Constitution, then, is to preserve the “natural rights” of humanity including “life, liberty, and the pursuit of happiness” which are rights, privileges, or at the very least abilities that we have in the state of nature. If you could find a way to survive, you were able to live, move and interact freely, and make choices that you believed would lead to your greatest “happiness,” which meant a complex mosaic of prosperity, family, personal habits, and accumulation of knowledge.
Freedom of association derives from the liberty and pursuit of happiness prongs of the Constitutional definition of natural rights. You would not be forced to associate, or interact with, those that you did not want to; further, you would not be taxed to pay for them, nor to support their existence among you. They had the right to survive if they could and thus enjoy the other two prongs, but you were not responsible for them doing so, and your refusal to aid them in doing so was not only your right, but also an important part of natural selection and social Darwinism. That is, people who were useless, criminal, or insane would be allowed to die out, as also happens in the state of nature.
With Civil Rights, however, this was changed, because society and government became committed to be an agent of enforcing egalitarianism, or making everyone “equal” in political, social, and economic ways. Under natural rights, slavery was an institution of culture which was supported by law but not created by it, and so if it needed to be abolished, the correct way was to change culture. This is slow change and irritated many, so they used it as a pretext for war, and in order to do that, they had to redefine American government as an ideological regime.
After WW2, this accelerated even further, with America committing itself Soviet-style to enforcing mixed association between different social classes, races, and ethnic groups (remember, at the time, most WASPs did not invite the Southern Europeans, Eastern Europeans, and Mediterraneans/Levantines into their social groups, clubs, organizations, and businesses). This was changed through laws that forced government to enforce equality in all areas of public life, which effectively constituted social engineering of its population instead of support of culture, heritage, and the population as was envisioned by the founders.
In other words, America reversed itself from “government cannot impede your natural state” to “government must enforce an unnatural state upon you,” all in the name of equality, which as you know is a sister philosophy to that of socialism and Communism.
Let’s look at the second paragraph of the Declaration of Independence:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.
- That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed,
- That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.
Prudence, indeed, will dictate that Governments long established should not be changed for light and transient causes; and accordingly all experience hath shewn, that mankind are more disposed to suffer, while evils are sufferable, than to right themselves by abolishing the forms to which they are accustomed. But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.–Such has been the patient sufferance of these Colonies; and such is now the necessity which constrains them to alter their former Systems of Government.
This is a natural rights argument which in fact is opposed to egalitarianism. It says that “all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”
The material after the first comma elaborates on what comes before it, but even more, the word created is our clue. Men are born as equal as they are going to get, and that birth is the only equality; in nature, they have “certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness” which come from God and not Man.
In other words, there is no equality except being alive, and government should not impede the natural ability of a living creature to survive and make choices.
And while no one should deny that there are rights intrinsic to humanity, we must be careful about seizing onto the “human rights” mantra too quickly. For the phrase “human rights” does not denote what the West has long referenced as “natural rights.” In fact, the two categories of rights are, in some ways, not only unrelated but actually at enmity with one another.
Consider this: “natural rights” are frequently described as God-given, and as such provide a bulwark against government’s tendency to become tyrannical. “Human rights,” on the other hand, are usually the constructs of men: men who are most often atheistic (or “enlightened”) in their worldview, and therefore looking for some earthly-yet-quasi-universal justification for being nice to one another and abiding by the rules of the state.
Although the terms are similar, neoliberalism is distinct from modern liberalism. Both have their ideological roots in the classical liberalism of the 19th century, which championed economic laissez-faire and the freedom (or liberty) of individuals against the excessive power of government. That variant of liberalism is often associated with the economist Adam Smith, who argued in The Wealth of Nations (1776) that markets are governed by an “invisible hand” and thus should be subject to minimal government interference. But liberalism evolved over time into a number of different (and often competing) traditions. Modern liberalism developed from the social-liberal tradition, which focused on impediments to individual freedom—including poverty and inequality, disease, discrimination, and ignorance—that had been created or exacerbated by unfettered capitalism and could be ameliorated only through direct state intervention. Such measures began in the late 19th century with workers’ compensation schemes, the public funding of schools and hospitals, and regulations on working hours and conditions and eventually, by the mid-20th century, encompassed the broad range of social services and benefits characteristic of the so-called welfare state.
By the 1970s, however, economic stagnation and increasing public debt prompted some economists to advocate a return to classical liberalism, which in its revived form came to be known as neoliberalism. The intellectual foundations of that revival were primarily the work of the Austrian-born British economist Friedrich von Hayek, who argued that interventionist measures aimed at the redistribution of wealth lead inevitably to totalitarianism, and of the American economist Milton Friedman, who rejected government fiscal policy as a means of influencing the business cycle (see also monetarism). Their views were enthusiastically embraced by the major conservative political parties in Britain and the United States, which achieved power with the lengthy administrations of British Prime Minister Margaret Thatcher (1979–90) and U.S. Pres. Ronald Reagan (1981–89).
Net neutrality, also called “nut noot,” was a federal regulatory policy that required internet service providers (ISPs) to forward traffic to all sites on the net equally. Revoked under the Trump administration, it produced much protest, which seemed to overshadow protests against content censorship like FOSTA/SESTA and Articles 11 and 13 of the EU Copyright Directive.
Most people, when they talk about net neutrality, are actually complaining because American broadband is really bad. It is really bad because of local regulations, not federal ones:
Broadband policy discussions usually revolve around the U.S. government’s Federal Communications Commission (FCC), yet it’s really our local governments and public utilities that impose the most significant barriers to entry.^(1)
The high cost arises not from breaking ground, but from local regulations that limit market entry and thereby entrench national monopolies:
States have given municipalities the authority to offer broadband but made it difficult with tons of bureaucratic requirements, he said. “The bottom line is some states have created thickets of red tape designed to limit competition,” he said. Local residents and businesses are the ones suffering the consequences, he argued, pointing to members of the two communities in the audience. ^(2)
Those national monopolies were in fact caused by government regulation:
And whose fault is that? Well, that would be the government’s fault. It regulated the cable TV business with a heavy hand since its infancy, giving monopoly rights to operators to string cities with coaxial cable. Those policies have been relaxed, so now it’s easier for a new provider — like telephone companies or fiber-upstarts like Google — to create broadband competition. But the market power of entrenched cable operators and the remaining regulatory hurdles still deter new entrants, suppressing the sort of competition that would make broadband companies more mindful of the needs of customers. ^(3)
And other regulations like environmental law:
Many fine California city proposals for the Google Fiber project were ultimately passed over in part because of the regulatory complexity here brought about by CEQA [California Environmental Quality Act] and other rules. Other states have equivalent processes in place to protect the environment without causing such harm to business processes, and therefore create incentives for new services to be deployed there instead.^(5)
It costs about $4,000 per home to roll out new fiber, much of it regulatory cost, which makes it impossible for any one firm to build out enough fiber to gain the market share needed to pay for itself:
“We remain skeptical that Google will find a scalable and economically feasible model to extend its build out to a large portion of the US, as costs would be substantial, regulatory and competitive barriers material, and in the end the effort would have limited impact on the global trajectory of the business.”^(6)
In fact, the people who get fiber — such as in Kansas City and Austin — did so because the community was not very spread out and local politicians waived permitting, rights-of-way, and regulatory requirements:
Most encouraging is the recognition by government officials that policies which eliminate unnecessary regulation, lower costs and speed infrastructure deployment, can be a meaningful catalyst to additional investment in advanced networks which drives employment and economic growth.^(7)
There are multiple costs created and managed by local authorities which can be waived and lead to the rollout of new fiber:
But the key thing was that city officials promised to get out of the media giant’s way. They didn’t dangle tax breaks, but they did deliver access to public rights of way, expedite the permitting process, offer space in city facilities and provide assistance with marketing and public relations.
In testimony before Congress last year, Medin discussed how infrastructure issues, including rights-of-way, utility poles, conduit and ducts are critical to making the economics of a fiber network work.
“Let’s start with rights-of-way,” he said. “Governments across the country control access to the rights-of-way that private companies need in order to lay fiber. And government regulation of these rights-of-way often results in unreasonable fees, anti-investment terms and conditions, and long and unpredictable build-out timeframes. The expense and complexity of obtaining access to public rights-of-way in many jurisdictions increase the cost and slow the pace of broadband network investment and deployment.”
Medin also described how outdated pole attachment regulations can create huge delays. While hanging fiber from utility poles should be easier and less expensive than tearing up a street, regulations often get in the way. ^(8)
With these regulations, rights-of-way, permits, and environmental rules in place, the cost becomes prohibitive for any firm to gain enough market share to achieve economies of scale:
Goldman Sachs Telco analyst Jason Armstrong noted that if Google devoted 25% of its $4.5bn annual capex to this project, it could equip 830K homes per year, or 0.7% of US households. As such, even a 50mn household build out, which would represent less than half of all US homes, could cost as much as $70bn. We note that Jason Armstrong estimates Verizon has spent roughly $15bn to date building out its FiOS fiber network covering an area of approximately 17mn homes.^(9)
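The per-home figures implied by the quoted analyst estimates can be checked with simple arithmetic. A quick sketch (the dollar amounts come from the quote; the division into per-home costs is our own back-of-the-envelope calculation):

```python
# Rough per-home build-out costs implied by the quoted analyst figures.
google_capex = 4.5e9            # Google's annual capex (from the quote)
fiber_share = 0.25              # the hypothetical 25% devoted to fiber
homes_per_year = 830_000        # homes equipped per year at that spend

per_home = google_capex * fiber_share / homes_per_year
print(round(per_home))          # ~1355 dollars per home passed

# A 50-million-household build-out at the quoted ~$70bn total:
print(round(70e9 / 50e6))       # 1400 dollars per home

# Verizon FiOS: ~$15bn covering ~17mn homes:
print(round(15e9 / 17e6))       # ~882 dollars per home passed
```

Note these are costs per home *passed* by fiber; the roughly $4,000-per-home figure cited earlier presumably also covers actually connecting homes plus the regulatory overhead, which is why it runs several times higher.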
One option to get around this is to allow local communities to build their own internet, but that may only work under special circumstances (and may have its own problems):
Greenfield has a few things going for it, certainly. It’s relatively small, so the buildout doesn’t have to incorporate a wide new expanse. It also has easy-to-navigate terrain; there are no mountains or swaths of heavy vegetation that engineers must cut through. Finally, it has the ability to hook into the “middle-mile” broadband infrastructure that has been built by the Massachusetts Broadband Initiative, which allows Greenfield to connect directly to the access point in Boston.^(10)
Conservatives do not trust net neutrality because any government regulation introduces a backdoor method of regulating content.
For example, if someone published something the government does not like, it could claim evidence of a net neutrality violation and then drown the business in subpoenas, confiscate its equipment, or scare off its customers.
Net neutrality was dead anyway because ISPs installed fast lanes and large companies like Google were installing servers on ISP networks.
When a network sends more than twice the traffic it receives, it is required to pay AT&T an interconnection fee, and the company won’t upgrade capacity to a CDN with heavy traffic until it is paid. After all, the FCC doesn’t require ISPs to upgrade their infrastructures to handle larger volumes of traffic (even though AT&T customers might believe that their hefty monthly tithes entitle them to a network capable of handling the traffic they request).
This may sound like Internet traffic being held for ransom, but it’s all perfectly legal and has been standard operating procedure for 20 years. CDNs and peering connections came about as a means to deliver content faster and more efficiently to Internet users. By making arrangements to put servers inside an ISP network and set up direct connections to ISPs, large content providers were able to facilitate the delivery of their traffic to users.
Today it’s estimated that half of all Internet traffic comes from just 30 providers, including Google, Facebook, and Netflix. And more and more of these large content providers have set up their own CDNs, rather than use a company like Akamai, and signed agreements with multiple ISPs for their CDN connections. Companies like Google, Facebook, and Netflix — and consumers — have benefited from these kinds of Internet fast lanes for years.^(16)
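The “more than twice the traffic it receives” rule described in the quote can be sketched as a simple ratio check. This is a hypothetical illustration only; real peering agreements involve far more detailed terms:

```python
def owes_interconnection_fee(bytes_sent: int, bytes_received: int) -> bool:
    """Per the quoted rule of thumb: a network that sends more than
    twice the traffic it receives owes the ISP an interconnection fee."""
    return bytes_sent > 2 * bytes_received

# A CDN pushing 3 TB while pulling back 1 TB would owe a fee;
# a peer at exactly the 2:1 ratio, or below it, would not.
print(owes_interconnection_fee(3_000, 1_000))  # True
print(owes_interconnection_fee(2_000, 1_000))  # False
```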
Fast lanes may also benefit the consumer:
And if bandwidth-heavy traffic that would have traveled over the open Internet (adding to congestion) is offloaded onto a separate fast lane that does not impair the preexisting pipe’s bandwidth capabilities, it should actually ease congestion on the existing lanes, rather than create slow lanes.^(17)
That means that heavy-use sites, like the big six (Google, Facebook, Amazon, Apple, Netflix, and Twitter), pay to have servers located inside ISP networks and connected to “fast lanes,” so that all other traffic is not slowed down by these sites, which make up the majority of internet bandwidth usage:
During 2013, streaming speeds declined for customers at several of the largest ISPs. The underlying cause of this slowdown is still not totally clear, but in February 2014 Netflix paid an unspecified sum to Comcast for the right to place its own servers in Comcast’s facilities, in effect gaining preferred access to Comcast’s network.
In the subsequent months, speeds increased for Comcast customers streaming Netflix movies and TV, and Netflix eventually struck similar deals with other major ISPs later in the year (see figure above).^(18)
We do not trust net neutrality law because the ISPs have been writing it:
Beyond the dismissive rhetoric, ISPs are coincidentally united today in calling for Congress to act — and that’s because they’ve paid handsomely to control what Congress does. There’s one thing Republicans and Democrats can agree on, and that’s taking money from ISPs. The telecommunications industry was the most powerful lobbying force of the 20th century, and that power endures. It’s no secret that lobbyists in Washington write many of the laws, and the telecom industry spends a lot of money to make sure lawmakers use them. We’ve already seen net neutrality legislation written by the ISPs, and it’s filled with loopholes. It’s not just in Congress — companies like AT&T have deep influence over local and state broadband laws, and write those policies, too. Some pro-net neutrality advocates are also arguing today that Congress should act, and there are some good reasons for that. Laws can be stickier than the judgements of regulatory agencies, and if you want to make net neutrality the law of the land that’s a job for Congress. But there’s a reason the ISPs are all saying the same thing, and it’s because they’re very confident they will defeat the interests of consumers and constituents.^(4)
Not surprisingly, net neutrality benefits monopolists.
The internet works by sending little chunks of information, called packets, from one site to another. It sends these through other sites, with each site sending the packet along until it reaches its destination.
Net neutrality, which has not existed in practice since the late 1980s, was the convention that each site would forward any packet toward its destination, with all packets naturally traveling at the same speed.
When the internet became commercialized, starting in 1987 and accelerating in the mid-1990s, some companies began cutting back on this practice for a very sensible reason: if you are a small network and some huge internet firm accounts for most of the traffic you carry, you may “throttle” that traffic so it does not delay or obstruct traffic to smaller sites.
The political concept of “net neutrality” is that we can write rules to bring back this condition, despite having politicized the internet and allowed giant monopolies to dominate it. This would give government control over the internet and allow selective enforcement to penalize content it does not like, and, worse, it would be impossible to enforce. In addition, firms would work around it by downgrading their basic services and then offering non-neutral expedited services.
Basically, by commercializing the internet, we ruined its chance at ever having packet neutrality; by allowing large companies like FANG (Facebook, Apple/Amazon, Netflix and Google) to be essentially monopolies, we have created an incentive to throttle their traffic.
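The store-and-forward behavior described above, and the incentive to delay traffic from a few heavy sources, can be illustrated with a toy simulation. All names and the quota rule are illustrative assumptions, not a model of any real router:

```python
from collections import deque

def deliver(packets, throttle_quota=None):
    """Toy store-and-forward model. `packets` is a list of
    (source, destination) tuples processed FIFO. With no quota, every
    source is treated alike (a "neutral" network). With a quota, a heavy
    source's excess packets are deferred to the back of the line,
    a crude stand-in for throttling."""
    queue = deque(packets)
    forwarded = {}              # packets forwarded so far, per source
    delivered = []
    deferred = deque()
    while queue:
        src, dst = queue.popleft()
        if throttle_quota is not None and forwarded.get(src, 0) >= throttle_quota:
            deferred.append((src, dst))   # delay this source's excess traffic
            continue
        forwarded[src] = forwarded.get(src, 0) + 1
        delivered.append((src, dst))
    delivered.extend(deferred)            # throttled packets arrive last
    return delivered

# Neutral network: arrival order matches sending order.
print(deliver([("G", "a"), ("G", "b"), ("s", "c")]))
# Throttled: the small source "s" is no longer stuck behind heavy source "G".
print(deliver([("G", "a"), ("G", "b"), ("s", "c")], throttle_quota=1))
```

The second call shows the trade-off the paragraph describes: the small site’s packet jumps ahead, at the cost of delaying the heavy sender’s excess traffic.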
Democrats have sold “net neutrality” to millennials who are terrified of paying more to get to Instagram, not realizing that American internet access is already terrible in part because we do not allow the kind of competitive practices that net neutrality laws would prohibit. This turns internet access into a generic commodity and offers no incentive to improve service.
It makes sense to look at who is behind the “net neutrality” laws. These are favored by FANG companies because, by forcing everyone to have equal access to the FANG sites, they would effectively maintain the FANG monopolies.
Some economists think that net neutrality is harmful:
One paper in the American Economic Review in 1987 showed that discriminatory pricing (by an upstream monopolist selling to downstream competitors) would often tend to be against large successful incumbents—in this case, against established edge providers—not struggling new entrants, which is the opposite of the pro-net neutrality narrative. Moreover, the Commission ignored another paper of mine in Information and Economic Policy in 2007 that concluded net neutrality would harm applications that did not require high speeds and was more likely to harm welfare than improve it.^(11)
Net neutrality may simply protect monopolists and discourage new entrants to the market:
Ironically, Title II regulation may actually exacerbate the risk of anticompetitive conduct by broadband internet access service providers.^(12)
It will also raise prices to the consumer:
To the contrary, the economic evidence provides no support for the existence of market failure sufficient to warrant ex ante regulation of the type proposed by the Commission, and strongly suggests that the regulations, if adopted, would reduce consumer welfare in both the short and long run.^(13)
And will discourage investment in infrastructure:
However, long before the OIO, studies have raised concerns that network neutrality policies will discourage investment by internet service providers (ISPs) in broadband infrastructure, to the detriment of broadband accessibility, and may increase average consumer costs—both of which would only further exacerbate the digital divide.^(14)
As well as harming consumer welfare:
We show that such interrelationships are more complex than claimed by net neutrality proponents and do not provide a compelling rationale for regulation. We conclude that antitrust enforcement and/or more limited regulatory mechanisms provide a better framework for addressing competitive concerns raised by proponents of net neutrality.^(15)
And transfer wealth from the poor to the rich:
Moreover, the internal subsidization required by net neutrality generates a transfer from the relatively poor to the relatively rich. The potential welfare gains that might come from controlling anticompetitive abuse or government coercion through implementation of the policy can be achieved by alternative policies with less harmful consequences.^(16)
Net neutrality can be used as a censorship tool
DMCA claims have caused material to be pulled from the net even when it was not under copyright, simply because one party asserted that it was infringing and pretended to be willing to go to court over that claim. Even if the case did go to court and was dismissed, the claimant would owe only about $500 in fees. Copyright claims can be, and are, used to enforce censorship.
In fact, we have seen this type of censorship used against a site that publishes retractions of scientific papers:
Of course, what’s not clear is who actually posted the content to NewsBulet.in and what the plan is. But, it certainly suggests some very questionable behavior from someone who wanted the stories about Anil Potti on RetractionWatch to disappear.^(19)
One reason for this problem is overuse of copyright claims that remove unrelated material:
Google previously noted that 37% of all DMCA notices they receive are not valid copyright claims.
Google previously revealed that 57% of all the DMCA notices they receive come from companies targeting competitors.^(20)
In fact, DMCA censorship is now a broadly used tool:
- Actress Cindy Garcia tries to have “Innocence of Muslims” film removed from Youtube by claiming she owns the copyright to her performance in the film.
- Human Synergistics International gets a 2-year-old blog post removed for quoting four sentences from its “exclusive” trapped-in-a-desert team building exercise.
- Universal Music uses a false copyright claim to remove a negative review of one of its artist’s albums.
- A minority owner of the Miami Heat sues a blogger and Google in an attempt to censor an unflattering photo of him.
- A cartoonist, irritated by criticism of her work, attempts to get her cartoons removed from Something Awful, accusing the site of theft.^(21)
In short, if you want to resolve the issues raised by the debate on net neutrality:
Just a reminder: The New York Times covered up genocide.
They also dramatized news stories into fake news:
And the grey lady has con artists working for her on the regular:
Obamagate refers to the Watergate-style scandal where President Barack Obama used the FBI to spy on his political opponents.
The Obamagate controversy refers to several scandals during President Barack Obama’s tenure, notably the misuse of government agencies, media manipulation, illegal wiretapping, and domestic spying on American citizens to harass political opponents and critics. By the election year of 2016, Obama officials had vastly expanded the use of the foreign intelligence gathering apparatus as a weapon against domestic political opponents. In 2013, there were 9,600 FISA search queries involving 195 Americans.
The media covered for him, and so a massive scandal was brushed under the rug:
Earlier on Sunday, Trump retweeted conservative commentator Buck Sexton, who claimed that “the outgoing president” — an apparent reference to Obama — “used his last weeks in office to target incoming officials and sabotage the new administration.” Trump in his retweet wrote, “The biggest political crime in American history, by far!”
It ties into Obama’s use of the IRS to investigate and harass non-Leftists:
The president shared a message from Rep. Jim Jordan, noting Obama’s IRS targeted Tea Party organizations before the 2012 elections and the FBI targeted President Trump before the 2016 election.
This led to the fake Trump Dossier used as the basis of the impeachment attempt against Trump:
The FBI spied on an official they believed — or pretended to believe — was working inside the 2016 Trump presidential campaign on behalf of the Russian government to hijack the election and install a Manchurian candidate who would give America away to Moscow.
FBI agents — operating at the highest levels of DOJ authority — sought secret warrant applications normally reserved for our worst enemies such as active terrorists plotting to kill as many innocent Americans as possible.
They cobbled those secret warrant applications together with bad information supplied by — among others — Mr. Trump’s political opponents during the 2016 election. Much of that information was gathered abroad from America’s enemies seeking to sow discord in our elections.
The inspector general found specifically at least 17 “significant errors or omissions” in the FBI’s secret warrant applications.
Evidence suggests Obama used the FBI and IRS as political arms of the Democratic Party and the Justice Department covered it up:
The FOIA request was made following a scandal in 2013 that revealed the US Internal Revenue Service had selected political groups applying for tax-exempt status for intensive scrutiny based on their names or political themes. The revelation led to wide condemnation of the agency and prompted several investigations, including an FBI investigation ordered by US Attorney General Eric Holder.
Initial reports described the selection as nearly exclusively targeting conservative groups with terms such as “Tea Party” in their names but later it was found that some liberal groups were also selected for additional review.
In January 2014, the FBI told Fox News that its investigation had found no evidence so far warranting the filing of federal criminal charges in connection with the controversy, as it had not found any evidence of “enemy hunting”, and that the investigation continued. On October 23, 2015, the Justice Department declared that no criminal charges would be filed.
Mentioned by James Taranto, this term has its origin in the writings of Roger Scruton:
Argues that the advocate of multiculturalism as it is generally presented is rebelling against the established order and suffering from “oikophobia,” a hatred of home, a frequent disease of intellectuals.
It means hatred of the familiar, the home, the origins, and by extension, the self. Scruton again:
Being the opposite of xenophobia I propose to call this state of mind oikophobia, by which I mean (stretching the Greek a little) the repudiation of inheritance and home. Oikophobia is a stage through which the adolescent mind normally passes. But it is a stage in which some people — intellectuals especially — tend to become arrested. As George Orwell pointed out, intellectuals on the Left are especially prone to it, and this has often made them willing agents of foreign powers. The Cambridge spies offer a telling illustration of what oikophobia has meant for our country. And it is interesting to note that a recent BBC ‘docudrama’ constructed around that deplorable episode neither examined the realities of their treason nor addressed the suffering of the millions of their East European victims, but merely endorsed the oikophobia that had caused the spies to act as they did.
Something curious occurred a minute before Pianka began speaking. An official of the Academy approached a video camera operator at the front of the auditorium and engaged him in animated conversation. The camera operator did not look pleased as he pointed the lens of the big camera to the ceiling and slowly walked away.
Pianka then began laying out his concerns about how human overpopulation is ruining the Earth. He presented a doomsday scenario in which he claimed that the sharp increase in human population since the beginning of the industrial age is devastating the planet. He warned that quick steps must be taken to restore the planet before it’s too late.
Professor Pianka said the Earth as we know it will not survive without drastic measures. Then, and without presenting any data to justify this number, he asserted that the only feasible solution to saving the Earth is to reduce the population to 10 percent of the present number.
His favorite candidate for eliminating 90 percent of the world’s population is airborne Ebola (Ebola Reston), because it is both highly lethal and it kills in days, instead of years.
A paleoconservative is the furthest Right one can go before arriving at the Old Right, or those who favor the Old Order prior to the French Revolution.
Following the definition of conservative, a paleoconservative is a social conservative who accepts classical liberalism as better than socialism:
Paleoconservative: Following the French Revolution, a Right Wing was created which accepted the historical changes but disagreed with them, and did its best to resist further decay. Its primary method was classical liberalism, which is a Social Darwinist or libertarian system in which no person is obligated to subsidize any other, and thus in this state of natural freedom, the best rise and the rest fall. This existed for some time until rising Marxist thought in the 1930s-1945 made some degree of Marxism a de facto assumption of the Western states. Paleoconservatives re-unite classical liberalism with a heritage-based nationalism and strong cultural and socially conservative policies.
Paleoconservatives oppose the postwar order because they are nationalists, and race-, ethnic-, and caste-realists who reject egalitarianism in all forms:
Its representatives resisted neoconservatism and assumed positions that were in opposition to those of its influential opponents. But they also drew on older conservative thought, going back into the interwar period, which incorporated both European and American traditions of thought. Paleoconservatism was the last recognizably rightist form of the conservative movement, if we exclude some Alt-Right bloggers who, although occasionally worth reading, hardly form a coherent movement. It was precisely this rightist gestalt that has made the paleoconservatives and their efforts to represent the Old Right so profoundly distasteful to Conservatism, Inc.
These holdouts have never accepted equality as a “conservative” principle; they continue to believe in traditional gender distinctions and are not especially bothered by the hierarchies that existed in pre-modern communities. They also make faces when they hear the vague platitude “human rights”—what Richard Weaver called a “god term”—thrown into a conversation. Although paleos believe in universally applicable moral standards, they insist that rights are historic and attached to particular societies with their own histories. Paleoconservatives also believe the U.S. was founded as a “constitutional republic,” not as a “liberal democracy.” Perhaps most controversially, they stress lines of continuity extending from the civil rights and immigration legislation of the 1960s to the cultural and political transformation of our country that is now going on. Often attacked as racists or xenophobes, the Cassandra-like paleos are neither. They have boldly pointed out developmental connections that others choose to ignore.
Personhood is a legal fiction, or symbolic status that enables something or someone to participate in our legal system. This usually comes up regarding corporate personhood:
Obviously, corporations are not human. Yet the Court has held that they, like people, are entitled to certain fundamental rights, including the freedom to make political expenditures (Citizens United) and the religious freedom to object to birth-control coverage in their employees’ health insurance (Hobby Lobby).
Opponents of corporate rights too simplistically champion the notion that “corporations are not people.” Corporations deserve some Constitutional protections, both in order to keep government in check and because the ultimate beneficiaries are citizens. A corporate right to be free from government takings, for example, makes sense both as a matter of constitutional law and of economics. Government overreach is problematic whether the raisin grower is a family farm or a business corporation. And corporations left exposed to government expropriation would find investors reluctant to take that risk, undermining the basic social purpose of the corporation, to make money.
There are benefits to giving personhood to nature, tradition, even genetics:
Toledo voters passed the Lake Erie Bill of Rights, a unique charter amendment that establishes the huge lake as a person and grants it the legal rights that a human being or corporation would have.
The new law will allow the people of Toledo to act as legal guardians for Lake Erie – as if the citizens were the parents and the lake were their child – and polluters of the lake could be sued to pay for cleanup costs and prevention programs.
Adversarial systems have always been precarious, however, and it took a long time for the belief in adversariality to emerge from the more traditional view, traceable at least to Plato, that the state should be an organic structure, like a beehive, in which the different social classes cooperate by performing distinct yet complementary roles.
To study the conditions that promote delay of gratification, the American psychologist Walter Mischel and his colleagues designed an experimental situation (“the marshmallow test”) in which a child is asked to choose between a larger treat, such as two cookies or marshmallows, and a smaller treat, such as one cookie or marshmallow. After stating a preference for the larger treat, the child learns that to obtain that treat, it is necessary to wait for the experimenter to return. The child is also told that if he or she signals the experimenter, the experimenter will return and the child will receive the smaller treat. Thus, the smaller treat is available now, but the larger treat requires waiting. To get the larger treat, the child must resist the temptation to get an immediate treat.
That experimental situation has proven very useful both in demonstrating the importance of the ability to delay gratification and in identifying strategies that make it possible for children to delay gratification. Children who were best able to wait in that situation when they were four years old are more socially and academically successful as high-school students and earn higher Scholastic Aptitude Test (SAT) scores. The situation, adapted for adolescents and teens by the psychologist Edelgard Wulfert and her colleagues, also revealed that middle- and high-school students who can wait a week for a monetary reward earn higher grades, show less problem behaviour in school, and are less likely to use cigarettes, alcohol, and other drugs than their peers who choose not to wait.
This is similar to what we are told by the Biblical parable of the five talents:
14 “For it will be like a man going on a journey, who called his servants and entrusted to them his property. 15 To one he gave five talents, to another two, to another one, to each according to his ability. Then he went away. 16 He who had received the five talents went at once and traded with them, and he made five talents more. 17 So also he who had the two talents made two talents more. 18 But he who had received the one talent went and dug in the ground and hid his master’s money.
19 Now after a long time the master of those servants came and settled accounts with them. 20 And he who had received the five talents came forward, bringing five talents more, saying, ‘Master, you delivered to me five talents; here, I have made five talents more.’ 21 His master said to him, ‘Well done, good and faithful servant. You have been faithful over a little; I will set you over much. Enter into the joy of your master.’ 22 And he also who had the two talents came forward, saying, ‘Master, you delivered to me two talents; here, I have made two talents more.’ 23 His master said to him, ‘Well done, good and faithful servant. You have been faithful over a little; I will set you over much. Enter into the joy of your master.’
24 He also who had received the one talent came forward, saying, ‘Master, I knew you to be a hard man, reaping where you did not sow, and gathering where you scattered no seed, 25 so I was afraid, and I went and hid your talent in the ground. Here, you have what is yours.’ 26 But his master answered him, ‘You wicked and slothful servant! You knew that I reap where I have not sown and gather where I scattered no seed? 27 Then you ought to have invested my money with the bankers, and at my coming I should have received what was my own with interest. 28 So take the talent from him and give it to him who has the ten talents. 29 For to everyone who has will more be given, and he will have an abundance. But from the one who has not, even what he has will be taken away. 30 And cast the worthless servant into the outer darkness. In that place there will be weeping and gnashing of teeth.’
America spends a lot on public education:
Real spending per pupil ranges from a low of nearly $12,000 in the Phoenix area schools to a high of nearly $27,000 in the New York metro area. The gap between real and reported per-pupil spending ranges from a low of 23 percent in the Chicago area to a high of 90 percent in the Los Angeles metro region.
To put public school spending in perspective, we compare it to estimated total expenditures in local private schools. We find that, in the areas studied, public schools are spending 93 percent more than the estimated median private school.
This cost has been going up:
The United States spent an average of $11,392 per pupil educating its young people in the nation’s elementary-secondary school systems in fiscal year 2015. According to the most recent data available from the U.S. Census Bureau (released in 2017), this is the largest increase in per-pupil spending since 2008, when the U.S. spent a reported $11,009 per pupil.
Public education by state rankings show that New York spent the most per pupil at $21,206. Nationally, the top five school districts in per-student spending were Anchorage School District, Alaska ($17,046), Baltimore City Schools, Maryland ($15,818), Boston City Schools, Massachusetts ($21,552), Howard County Schools, Maryland ($15,714), and New York City School District, New York ($21,980).
The states spending the least on a per-pupil basis in 2015 are Utah ($6,575), Idaho ($6,923), Arizona ($7,489), and Oklahoma ($8,082).
America spends more than most nations:
The United States spent more than $11,000 per elementary student in 2010 and more than $12,000 per high school student. When researchers factored in the cost for programs after high school education such as college or vocational training, the United States spent $15,171 on each young person in the system — more than any other nation covered in the report.
As a share of its economy, the United States spent more than the average country in the survey. In 2010, the United States spent 7.3 percent of its gross domestic product on education, compared with the 6.3 percent average of other OECD countries. Denmark topped the list on that measure with 8 percent of its gross domestic product going toward education.
The United States routinely trails its rival countries in performances on international exams despite being among the heaviest spenders on education.
Much of the spending goes to “support services,” or non-education activities provided by public schools:
Of the $639.5 billion in budget spending by U.S. schools for the 2015 fiscal year, $344.3 billion was spent on Instruction: salaries and wages comprising $216.9 billion and employee benefits another $87.1 billion. Support services expenditures totaled $194.4 billion.
This includes education for illegal aliens:
The guidelines do not change existing practices, but rather remind schools of obligations established by Plyler v. Doe, a 1982 Supreme Court case that affirmed all children are guaranteed equal access to a basic public education regardless of their immigration status.
Schools are allowed to request proof that children live within the boundaries of the district, for which they typically accept documents like copies of phone and water bills, lease agreements, and affidavits, the guidance says.
More detailed statistics from the US Census:
Robert Putnam is a researcher famous for his explorations of social capital, social cohesion, and diversity.
Putnam claims the US has experienced a pronounced decline in “social capital,” a term he helped popularize. Social capital refers to the social networks — whether friendships or religious congregations or neighborhood associations — that he says are key indicators of civic well-being. When social capital is high, says Putnam, communities are better places to live. Neighborhoods are safer; people are healthier; and more citizens vote.
But even after statistically taking them all into account, the connection remained strong: Higher diversity meant lower social capital. In his findings, Putnam writes that those in more diverse communities tend to “distrust their neighbors, regardless of the color of their skin, to withdraw even from close friends, to expect the worst from their community and its leaders, to volunteer less, give less to charity and work on community projects less often, to register to vote less, to agitate for social reform more but have less faith that they can actually make a difference, and to huddle unhappily in front of the television.”
“People living in ethnically diverse settings appear to ‘hunker down’ — that is, to pull in like a turtle,” Putnam writes.
In documenting that hunkering down, Putnam challenged the two dominant schools of thought on ethnic and racial diversity, the “contact” theory and the “conflict” theory. Under the contact theory, more time spent with those of other backgrounds leads to greater understanding and harmony between groups. Under the conflict theory, that proximity produces tension and discord.
Putnam’s findings reject both theories. In more diverse communities, he says, there were neither great bonds formed across group lines nor heightened ethnic tensions, but a general civic malaise. And in perhaps the most surprising result of all, levels of trust were not only lower between groups in more diverse settings, but even among members of the same group.
Race is real:
Because human races emerged through such subtle changes, it can be underwhelming to look at a single gene — to borrow an example from Razib Khan of Gene Expression, a variant might be present 40 percent of the time in one racial group but 45 percent of the time in another. But as Wade notes, these small differences add up quickly, and scientists can use these “ancestry informative” DNA markers to easily sort humans into population clusters — clusters that correspond almost perfectly to the casual classifications people have used since well before the genetic age.
One can debate how broadly or narrowly to define the clusters — just how many races are there? — but it’s undeniable that human populations exhibit distinctive genetic patterns. Racial groupings are human decisions, and so is the social importance we attach to those groupings. But race, more broadly construed, is a feature of humanity itself.
People know their own racial and ethnic mix:
The study is by far the largest, consisting of 3,636 people who all identified themselves as either white, African-American, East Asian or Hispanic. Of these, only five individuals had DNA that matched an ethnic group different from the box they checked at the beginning of the study. That’s an error rate of 0.14 percent.
Ethnic differences are real and measurable:
We present a method to predict the ethnic origin of samples by comparing the sample genotypes with those from a reference set of samples of known origin. These predictions can be performed using just summary information on the known samples, and individual genotype data are not required.
Average IQs and other traits vary between racial, ethnic, and social class groups:
It was found that national IQs are correlated at 0.757 with real GDP (Gross Domestic Product) per capita 1998 and 0.706 with per capita GNP (Gross National Product) 1998; and at 0.605 with the growth of per capita GDP 1950-90 and 0.643 with growth of per capita GNP 1976-98.
Physical appearance, including race, is coded in DNA:
Nearly every physical attribute we have can be predicted by looking at our DNA. That’s essentially what goes into the process of DNA Phenotyping.
For example, our DNA can tell scientists our eye color, skin color, hair color, ancestry, face shape, and much more. At Parabon Nanolabs, they use this information to create a predictive picture of the person in question.
“What we produce is not a photograph,” Greytak said. “And it never could be, because if that person went out and got a tattoo tomorrow, then what we produce will not look like them, or if they shave their head, or eat cheeseburgers for 20 years.”
He trusted the Left on spending cuts:
It happened to my father early in his first term when he sought to close a growing federal deficit caused by the deep economic recession. He believed Democrats in Congress would keep their pledge to make $3 in future spending cuts for every $1 in immediate tax increases.
In 1982 he signed a compromise tax bill with the horrible name of TEFRA — the Tax Equity and Fiscal Responsibility Act. And, when those promised spending cuts never materialized in Congress, TEFRA became one of the biggest regrets of my father’s presidency.
My father was duped by the duplicity of Democrats.
He was promised $1 of tax increases for $3 in spending cuts:
Sen. Bob Dole, R-Kan., convinced Reagan that Congress would make $3 in spending cuts for every $1 of tax increases. Reagan signed the tax increase — but Congress never made the spending cuts.
Meanwhile, Reagan stood back as Federal Reserve Chairman Paul Volcker, a Carter appointee, was squeezing inflation out of the economy by restricting the money supply.
Volcker later commended him, saying “People in the White House and Treasury put pressure on Reagan, but they could never get Reagan to criticize me.” The president, Volcker said, “had this visceral feeling that fighting inflation was a good thing.”
His tax cuts increased revenue:
Many critics of reducing taxes claim that the Reagan tax cuts drained the U.S. Treasury. The reality is that federal revenues increased significantly between 1980 and 1990:
- Total federal revenues doubled from just over $517 billion in 1980 to more than $1 trillion in 1990. In constant inflation-adjusted dollars, this was a 28 percent increase in revenue.
- As a percentage of the gross domestic product (GDP), federal revenues declined only slightly from 18.9 percent in 1980 to 18 percent in 1990.
- Revenues from individual income taxes climbed from just over $244 billion in 1980 to nearly $467 billion in 1990. In inflation-adjusted dollars, this amounts to a 25 percent increase.
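The inflation adjustment behind these bullets can be sanity-checked with a quick calculation. This is a minimal sketch: the revenue figures come from the text above, but the CPI-U annual averages are my own approximations, not from the source.

```python
# Checking the claim that nominal federal revenue roughly doubled from 1980
# to 1990 while real (inflation-adjusted) revenue rose by a double-digit
# percentage. CPI values are assumed approximations, not from the source.
rev_1980 = 517e9     # nominal federal revenue, 1980 (from the text)
rev_1990 = 1032e9    # nominal federal revenue, 1990 ("more than $1 trillion")
cpi_1980 = 82.4      # assumed CPI-U annual average, 1980
cpi_1990 = 130.7     # assumed CPI-U annual average, 1990

nominal_growth = rev_1990 / rev_1980 - 1
real_growth = (rev_1990 / cpi_1990) / (rev_1980 / cpi_1980) - 1

print(f"nominal growth: {nominal_growth:.0%}")  # roughly a doubling
print(f"real growth:    {real_growth:.0%}")     # about 26%, near the cited 28%
```

The exact real-growth number depends on which deflator and fiscal-year endpoints are used, which is why published figures vary by a few points.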
He shrank the government, but less than he wanted, although it still had positive effects:
When the budget is looked at as a share of the economy, Reagan’s legacy looks a bit better from a small government perspective. Federal revenues as a share of gross domestic product fell from 19.6 percent in 1981 to 18.3 percent by 1989. Spending fell from 22.2 percent to 21.2 percent. Thus, Ronald Reagan shrank the federal government by about 5 percent — a less radical change than supporters or detractors often claim.
This shifted America away from socialist-style policies:
Ronald Reagan sought–and won–more spending cuts than any other modern president. He is the only president in the last forty years to cut inflation-adjusted nondefense outlays, which fell by 9.7 percent during his first term (see table 1). Sadly, during his second term, President Reagan did not manage to cut nondefense discretionary spending, and it grew by 0.2 percent. But his record is still quite remarkable if compared to other administrations. Every other president since Lyndon Johnson serving a full four-year term did not even do as well as Reagan in his less-impressive second term.
He presided over a return to “supply-side economics” which took a few years for the benefits to become visible:
Following the so-called “supply-side” economic program he propounded in his campaign, Reagan proposed massive tax cuts—30 percent reductions in both individual and corporate income taxes over a three-year period—which he believed would stimulate the economy and eventually increase revenues from taxes as income levels grew. At the same time, he proposed large increases in military expenditures ($1.5 trillion over a five-year period) and significant cuts in “discretionary” spending on social-welfare programs such as education, food stamps, low-income housing, school lunches for poor children, Medicaid (the major program of health insurance for the poor), and Aid to Families with Dependent Children (AFDC). In 1981 Congress passed most of the president’s budget proposals, though the tax cut was scaled back slightly, to 25 percent.
The results were mixed. A severe recession in 1982 pushed the nation’s unemployment rate to nearly 11 percent, the highest it had been since the Great Depression… By early 1983 the economy had begun to recover, and by the end of that year unemployment and inflation were significantly reduced; they remained relatively low in later years. Economic growth continued through the remainder of Reagan’s presidency, a period that his supporters would hail as “the longest peacetime expansion in American history.”
These highly-criticized changes created an economic boost for at least a decade:
President Ronald Reagan implemented policies to reduce the federal government’s reach into the daily lives and pocketbooks of Americans, including tax cuts intended to spur growth (known as Reaganomics). He also advocated for increases in military spending, reductions in certain social programs and measures to deregulate business.
By 1983, the nation’s economy had started to recover and enter a period of prosperity that would extend through the rest of Reagan’s presidency. Critics maintained that his policies led to budget deficits and a more significant national debt; some also held that his economic programs favored the rich.
His shift in economic policy reduced the inflation caused by “demand-side” economic policy under Carter:
The Federal Reserve’s adoption of a more monetarist approach to policy-making in 1979 changed the operating instrument of monetary policy, shifting it from interest rates to the monetary aggregates. This major shift in policy, which occurred in October 1979 and only two months after Volcker became Chairman of the Fed, marked the beginning of the disinflation effort.
Reality is what it is independent of our judgments, feelings, and emotions about it:
There are two general aspects to realism, illustrated by looking at realism about the everyday world of macroscopic objects and their properties. First, there is a claim about existence. Tables, rocks, the moon, and so on, all exist, as do the following facts: the table’s being square, the rock’s being made of granite, and the moon’s being spherical and yellow. The second aspect of realism about the everyday world of macroscopic objects and their properties concerns independence. The fact that the moon exists and is spherical is independent of anything anyone happens to say or think about the matter.
In politics this means a study of the means of power:
Political realism assumes that interests are to be maintained through the exercise of power, and that the world is characterised by competing power bases. In international politics, most political theorists emphasise the nation state as the relevant agent, whereas Marxists focus on classes.
At its most general, it means a focus on tendencies and results in reality, not human intentions:
In everyday use realism is commonly attributed to caution, or moderation in one’s aspirations—the converse of utopianism. The word is also used to describe a variety of approaches in literature and the visual arts in which accurate depiction of reality is the aim. Each of these uses involves a contrast between human thought or imagination, on the one hand, and an external reality independent of mind, on the other. The notion that reality has a cognitive or normative authority over the mind is also generally present. In philosophy, realism signifies the assertion of the existence of a reality independently of our thoughts or beliefs about it.
It is the opposite of human cognition and group approval:
Realism, in philosophy, the viewpoint which accords to things which are known or perceived an existence or nature which is independent of whether anyone is thinking about or perceiving them.
When looking at re-incarceration rates rather than re-arrest rates, the U.S. rate is far lower than the figure commonly cited, at 28.8%, while Norway’s actual re-incarceration rate is higher than its advertised figure, at about 25 percent.
This came up in conversation, so it is useful to see what the context is here on Reddit:
About seven-in-ten (71%) of Reddit news users are men, 59% are between the ages of 18 and 29, and 47% identify as liberal, while only 13% are conservative (39% say they are moderate). In comparison, among all U.S. adults, about half (49%) are men, just 22% are 18- to 29-year-olds and about a quarter (24%) say they are liberal.
As could be expected, Reddit news users are also heavy internet users: 47% report going online almost constantly (compared with 21% of U.S. adults overall).
Also found this hilarious nugget:
The company announced Monday it had raised $300 million in its Series D investment round at a valuation of $3 billion. CNBC previously reported the company’s annual revenue topped $100 million, according to sources familiar with the matter, and at 330 million monthly active users (MAUs), this would make Reddit’s average revenue per user (ARPU) about $0.30.
That estimate would make Reddit’s ARPU significantly lower than other social networks, even those with similar MAUs. Twitter, for example, reported 321 MAUs for its latest quarterly report, and with annual revenue of about $3.04 billion in 2018, that would make its ARPU about $9.48.
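The ARPU figures in these two paragraphs are simple divisions of annual revenue by monthly active users; a quick sketch using the numbers quoted above:

```python
# Average revenue per user (ARPU) = annual revenue / monthly active users,
# using the figures cited in the excerpts above.
reddit_revenue = 100e6     # ~$100 million annual revenue
reddit_maus = 330e6        # 330 million monthly active users
twitter_revenue = 3.04e9   # ~$3.04 billion annual revenue (2018)
twitter_maus = 321e6       # 321 million MAUs

reddit_arpu = reddit_revenue / reddit_maus     # ~$0.30 per user
twitter_arpu = twitter_revenue / twitter_maus  # ~$9.47, vs. the $9.48 cited

print(f"Reddit ARPU:  ${reddit_arpu:.2f}")
print(f"Twitter ARPU: ${twitter_arpu:.2f}")
```

The penny-level difference from the quoted $9.48 comes from rounding Twitter's revenue to $3.04 billion.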
Which makes sense in this context:
The collaborative studies focus on an understanding of how U.S. Internet users click on display ads. The updated results based on March 2009 Comscore data, and presented by Comscore chairman Gian Fulgoni and Kim McCarthy, manager, Research & Analytics at Starcom, at the iMedia Brand Summit in San Diego on September 14, 2009, indicated that the number of people who click on display ads in a month has fallen from 32 percent of Internet users in July 2007 to only 16 percent in March 2009, with an even smaller core of people (representing 8 percent of the Internet user base) accounting for the vast majority (85 percent) of all clicks.
The low value is because they are broke:
A February study from Starcom USA, Tacoda and ComScore found that click-throughs are dominated by “natural born clickers”–a mere 6 percent of online users who make about 50 percent of all click-throughs. Generally between the ages of 25 and 44, their household incomes are usually less than $40,000.
Instead of direct sales, they found that internet advertising is most useful for brand awareness and propaganda:
The comScore Brand Metrix norms database contains the results of studies that have been conducted across ten vertical industries and includes the following metrics: top-of mind unaided awareness, total unaided awareness, aided awareness, total advertising awareness, online ad recall, favorability, likelihood to recommend, and likelihood to purchase. For a subset of the studies, the norms database also includes the important behavioral metrics of advertiser trademark searches, site visitation and purchasing – both online and at retail stores.
These are closely related to the fourth branch of government.
The cost of federal regulation neared $2 trillion in 2014, according to a new report by the Competitive Enterprise Institute (CEI).
Ten Thousand Commandments: An Annual Snapshot of the Federal Regulatory State, a report by Clyde Wayne Crews, CEI’s vice president for policy, also reveals that the U.S. debt now exceeds the size of China’s economy.
“Federal regulation and intervention cost American consumers and businesses an estimated $1.88 trillion in 2014 in lost economic productivity and higher prices,” amounting to roughly $15,000 per household, the report said.
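The per-household figure follows from dividing the total cost estimate by the number of U.S. households. The household count below is my own assumption (roughly the Census figure for 2014), not stated in the source:

```python
# $1.88 trillion in estimated 2014 regulatory costs spread across households.
total_cost = 1.88e12   # estimated cost of federal regulation, 2014 (from text)
households = 124e6     # assumed: ~124 million U.S. households in 2014

cost_per_household = total_cost / households
print(f"${cost_per_household:,.0f} per household")  # roughly $15,000
```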
Crews’ analysis found that in 2010 the federal government spent around $55.4 billion funding federal agencies and enforcing existing regulations. But these costs barely compare to the compliance costs that regulation imposes on the economy. Crews’ report cites the work of economists Nicole V. Crain and W. Mark Crain, whose study of the net cost of regulations determined that in 2009 federal regulation cost businesses and consumers $1.75 trillion, or nearly 12% of America’s 2009 GDP. As a comparison, in the same year, corporate pre-tax profits for all businesses totaled about $1.46 trillion.
These could well be a huge part of our economic slowdown:
During the Clinton administration, the average number of major regulations — those with an economic impact of more than $100 million — enacted each year was thirty-six. During the Bush administration that rose to forty-five per year. In the Obama administration it has been seventy-two each year. And while it says more about an anemic economy than runaway regulation, the inflation-adjusted compliance costs for manufacturers have risen on average 7.6 percent each year since 1998, compared with average annual GDP growth of just 2.2 percent, and manufacturing output growth of only 0.4 percent. Given the other competitive challenges facing U.S. manufacturers, escalating regulatory costs are clearly a problem.
The Trump response:
Since taking office in January 2017, President Donald Trump has carried through on his campaign promise to cut the number of federal regulations. On January 30, 2017, he issued an executive order entitled “Reducing Regulation and Controlling Regulatory Costs” directing the federal agencies to repeal two existing regulations for every new regulation and to do so in such a way that the total cost of regulations does not increase.
According to a status report on Trump’s order from the OMB, the agencies are far exceeding the two-for-one and regulatory-cap requirements, having achieved a 22-to-1 ratio during the first eight months of FY 2017. Overall, notes the OMB, the agencies had cut 67 regulations while adding only 3 “significant” ones.
By August 2017, Congress had exercised the Congressional Review Act to eliminate 47 regulations issued by President Barack Obama. In addition, the agencies had voluntarily withdrawn over 1,500 of Obama’s regulations that were under consideration but not yet finalized. Under Trump, the agencies have generally been more reluctant to propose new regulations.
Regulations often have unintended effects:
Stringent fuel economy regulations imposed on cars in the 1970s had made it practically impossible for automakers to keep selling big station wagons. Yet many Americans still wanted roomy vehicles.
The answer, Mr. Sperlich and Mr. Iacocca realized, was to make family vehicles that were regulated as light trucks, a category of vehicles that includes pickups.
When Chrysler introduced the minivan in 1983, fewer than 3 percent of them were configured as cargo vehicles, with just a couple of seats in the front and a long, flat bed in the back. But that was enough for Mr. Iacocca to persuade federal regulators to label all minivans as light trucks.
On the internet we commonly hear about America that “we are not a democracy, we are a republic.” This riff has a kernel of truth, which is that our founding fathers feared both strong government and mob rule. However, the method of our political system is still democracy.
I often hear people argue (often quite militantly) that the United States is a republic, not a democracy. But that’s a false dichotomy. A common definition of “republic” is, to quote the American Heritage Dictionary, “A political order in which the supreme power lies in a body of citizens who are entitled to vote for officers and representatives responsible to them” — we are that. A common definition of “democracy” is, “Government by the people, exercised either directly or through elected representatives” — we are that, too.
The United States is not a direct democracy, in the sense of a country in which laws (and other government decisions) are made predominantly by majority vote. Some lawmaking is done this way, on the state and local levels, but it’s only a tiny fraction of all lawmaking. But we are a representative democracy, which is a form of democracy.
We are in fact a liberal democracy:
Liberal democracy is made up of two constituent parts: liberalism and democracy. Liberalism, traditionally, means support for limited government, individual rights, private property, and freedom of speech and association. Democracy denotes “government by consent of the governed” or some form of majority rule. The liberal democratic nation-state combines these two elements: A distinct “people” govern themselves, but this popular government is limited by individual rights.
America is returning to its racial divisions:
U.S. cities have grown more segregated over the past 40 years, and persistent and intensifying racial disparities between white communities and people of color have emerged.
Racial resegregation in U.S. public schools is also deepening. A 2016 Government Accountability Office report announced: “The promise of Brown v. Board of Education is unraveling.”
In part, this is a product of what is known as school secession, a common practice by which wealthy white communities around the country choose to “break away from their public-school districts to form smaller, more exclusive ones.”
Moreover, as the New York Times reporters John Eligon and Robert Gebeloff noted in 2016: “Even when black households try to cross color boundaries, they are not always met with open arms: Studies have shown that white people prefer to live in communities where there are fewer black people, regardless of their income.”
“The median white family held 13 times as much net wealth as the median black family” in 2013, according to the Federal Reserve’s 2017 Survey of Consumer Finances, “and 10 times as much wealth as the median Latino family. Just a decade earlier, the disparity was 7 to 1 for black families and 9 to 1 for Latino families.”
Much of this is driven by a desire to get away from integrated schools:
Gardendale, a mostly white city 15 minutes north of Birmingham, had proposed separating from the Jefferson County School District, which encompasses Birmingham’s suburbs. The majority of children living in Jefferson County’s increasingly diverse subdivisions are black and Latino; Gardendale’s new district would be about 80 percent white. The move had come to the attention of the federal judge overseeing a decades-old desegregation order that requires Jefferson County—once a front in the resistance against the Brown decision—to maintain racially integrated schools.
In 1954, when the Supreme Court handed down its landmark Brown ruling declaring that separate schools for black and white children were inherently unequal, there were five school districts in Jefferson County. In the 63 years since then, that number has more than doubled as white communities established new school districts separate from the increasingly black and Latino county district.
Researchers have observed that while the racial test-score gap isn’t completely closed when schools are integrated, black students’ scores tend to go up when they’re in integrated environments. White children’s scores, meanwhile, aren’t affected either way by exposure to children of color. In his research, Rucker Johnson, an economist at the University of California, Berkeley, has looked beyond test scores. He’s found that white students who attend integrated schools have measurably less racial prejudice and tend to live in more integrated neighborhoods as adults.
This happened as school districts resegregated in response to Brown v. Board of Education:
Measured by the percentage of black students in schools that are 90 percent minority or more, segregation has increased in all regions of the country since the mid-’80s (when court orders were most common). Indeed, the Northeast’s segregation has increased compared to 1968, and is now the most segregated part of the whole country.
This appears to be a “white flight” away from high-poverty, highly-diverse schools:
First, the American South is resegregating, after two and a half decades in which civil rights law broke the tradition of apartheid in the region’s schools and made it the section of the country with the highest levels of integration in its schools. Second, the data shows continuously increasing segregation for Latino students, who are rapidly becoming our largest minority group and have been more segregated than African Americans for several years. Third, the report shows large and increasing numbers of African American and Latino students enrolled in suburban schools, but serious segregation within these communities, particularly in the nation’s large metropolitan areas. Since trends suggest that we will face a vast increase in suburban diversity, this raises challenges for thousands of communities. Fourth, we report a rapid ongoing change in the racial composition of American schools and the emergence of many schools with three or more racial groups. The report shows that all racial groups except whites experience considerable diversity in their schools but whites are remaining in overwhelmingly white schools even in regions with very large non-white enrollments.
Even more, people prefer neighborhoods for their own tribes (a mosaic of race, class, region, culture, religion, and ethnicity):
More than a half century after the civil rights era, many urban neighborhoods and institutions in American life remain unintegrated and, experts say, segregation is a primary driver in creating economic and social disparities and straining relations between police and the communities they serve.
While an analysis of census data by the Brookings Institution in Washington last year found a modest decline in black-white segregation nationwide over the past decade, the levels were still high. According to the report, “more than half of blacks would need to move to achieve complete integration.” School segregation has become even worse in recent decades. A report by the US Government Accountability Office earlier this year found that the percentage of public schools with high concentrations of poor and black or Hispanic students has nearly doubled since 2000.
Some blame charter schools for accelerating the process:
Although there remain some bright spots of innovation in socioeconomic-based integration, housing policy and collective community efforts, most trends are regressive and the dissolution of voluntary plans has eroded the gains of the 1980s. The charter school movement, with its tendency to enroll a less-diverse student body, has further accelerated resegregation.
Unlike the 1960s-1980s, the current version is not whites moving out of inner cities, but fleeing one suburb for another:
This pattern of “white flight” to the suburbs was characteristic of American metro areas until the 1970s and 1980s, when newer suburbs – bigger, more spacious, more contemporary – began stealing residents away from the older inner-ring suburbs. And by the 1990s, more minorities were beginning to follow the same aspirational path as the former white city dwellers before them. Just as previous generations did, minorities sought larger homes, quieter environments and better schools. And white residents who craved insulation from the perils of urban living now saw it coming to their front lawns – again.
Indiana University doctoral student Samuel Kye examined census data from 1990-2010, and found that, as affluent minority populations in the suburbs grow, “white flight” continues. White residents in these transitioning suburbs are “especially sensitive” to racial and ethnic change, he argues: “Ethnoburbs [Kye’s term for suburbs with large numbers of racial or ethnic minorities] have lost a steady flow of white residents over the past 20 years.” The end result? African-American suburban migration has only led to greater segregation, creating ethnic pockets: whites in one, blacks in the other.
This has been an active decision. As black people move into their suburban idylls, longtime white residents flee to other suburbs, or retreat to the highest value enclaves in town.
This has created the rise of segregated neighborhoods or “Chinatowns” as all races flee the others:
In a study published Thursday in the August issue of American Sociological Review, a trio of academics looks into the data and finds that segregation is actually becoming more pronounced in many American neighborhoods. The practices derided by the Kerner Commission, including white flight, exclusionary zoning, and outright prejudice, are continuing to create black areas and white areas, but this time around, those areas exist in both the cities and the suburbs.
Previous data has suggested that segregation between black and white populations is declining. But much of that research looked at entire metropolitan areas, and found more minorities in suburbs, which led researchers to conclude that the nation was no longer divided into black cities and white suburbs. Lichter and his colleagues looked at smaller communities, and found that while black residents don’t just live in inner cities anymore, the suburbs they’ve moved to are majority black, while other suburbs are majority white.
Segregation isn’t just happening between black and white towns. Hispanic and Asian residents are segregated into their own cities and towns, too. Dover, New Jersey, for instance, a town 30 miles west of New York, was 70 percent Hispanic in the 2010 Census. In 1980, it was only 25 percent Hispanic.
Obama attempted to forcibly reverse this:
Many have identified this as the psychology of the Left: a feeling of hopeless inefficacy and hatred for those who are not similarly mired.
deep-seated resentment, frustration, and hostility accompanied by a sense of being powerless to express these feelings directly
After the French Revolution, it became known as a pathology in the West:
Nietzsche sees ressentiment as the core of Christian and Judaic thought and, consequently, the central facet of western thought more generally. In this context, ressentiment is more fully defined as the desire to live a pious existence and thereby position oneself to judge others, apportion blame, and determine responsibility. Nietzsche did not invent the concept of ressentiment; it was a term that was very much ‘in the air’ in his lifetime (the late 19th century), as Fredric Jameson points out in his sharp critique of the concept in The Political Unconscious (1981).
He saw it as arising from class warfare and specifically, from mercantile populations obsessed with seizing control from natural leaders:
Nietzsche’s famous answer is unflattering to our modern conception. He insists that the transformation was the result of a “slave revolt in morality” (GM I, 10; cf. BGE 260). The exact nature of this alleged revolt is a matter of ongoing scholarly controversy (in recent literature, see Bittner 1994; Reginster 1997; Migotti 1998; Ridley 1998; May 1999: 41–54; Leiter 2002: 193–222; Janaway 2007: 90–106, 223–9; Owen 2007: 78–89; Wallace 2007; Anderson 2011; Poellner 2011), but the broad outline is clear enough. People who suffered from oppression at the hands of the noble, excellent, (but uninhibited) people valorized by good/bad morality—and who were denied any effective recourse against them by relative powerlessness—developed a persistent, corrosive emotional pattern of resentful hatred against their enemies, which Nietzsche calls ressentiment. That emotion motivated the development of the new moral concept “evil,” purpose-designed for the moralistic condemnation of those enemies. (How conscious or unconscious—how “strategic” or not—this process is supposed to have been is one matter of scholarly controversy.) Afterward, via negation of the concept of evil, the new concept of goodness emerges, rooted in altruistic concern of a sort that would inhibit evil actions. Moralistic condemnation using these new values does little by itself to satisfy the motivating desire for revenge, but if the new way of thinking could spread, gaining more adherents and eventually influencing the evaluations even of the nobility, then the revenge might be impressive—indeed, “the most spiritual” form of revenge (GM I, 7; see also GM I, 10–11). For in that case, the revolt would accomplish a “radical revaluation” (GM I, 7) that would corrupt the very values that gave the noble way of life its character and made it seem admirable in the first place.
For Nietzsche, then, our morality amounts to a vindictive effort to poison the happiness of the fortunate (GM III, 14), instead of a high-minded, dispassionate, and strictly rational concern for others.
He identified this with Judaic philosophy:
It was the Jews who, with awe-inspiring consistency, dared to invert the aristocratic value-equation (good = noble = powerful = beautiful = happy = beloved of God) and to hang on to the inversion with their teeth . . ., saying “the wretched alone are the good; the poor, impotent, lowly alone are the good; the suffering, deprived, sick, ugly alone are pious, alone are blessed by God . . .”
Science — the idea of reproducible experimentation — underlies much of our modern technological ability, but has its limits. Specifically, it works poorly with polycausality, and it is subject to human frailty, being administered by humans. It also does not do well outside the material realm, in the world of thoughts, the metaphysical, and aesthetics. In addition, science is a “product,” whether through published research or justifications for policy, and so is subject to lobbying and self-interest by scientists, causing a reproducibility crisis.
This began to be observed in psychology first:
Brian Nosek, a social psychologist and head of the Center for Open Science in Charlottesville, Virginia, and 269 co-authors repeated work reported in 98 original papers from three psychology journals, to see if they independently came up with the same results.
According to the replicators’ qualitative assessments, as previously reported by Nature, only 39 of the 100 replication attempts were successful. (There were 100 completed replication attempts on the 98 papers, as in two cases replication efforts were duplicated by separate teams.) But whether a replication attempt is considered successful is not straightforward. Today in Science, the team report the multiple different measures they used to answer this question^(1).
Another method assessed whether a statistically significant effect could be found, and produced an even bleaker result. Whereas 97% of the original studies found a significant effect, only 36% of replication studies found significant results. The team also found that the average size of the effects found in the replicated studies was only half that reported in the original studies.
Nosek believes that other scientific fields are likely to have much in common with psychology. One analysis found that only 6 of 53 high-profile papers in cancer biology could be reproduced^(2) and a related reproducibility project in cancer biology is currently under way. The incentives to find results worthy of high-profile publications are very strong in all fields, and can spur people to lose objectivity. “If this occurs on a broad scale, then the published literature may be more beautiful than reality,” says Nosek.
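The headline figures above are simple arithmetic on the reported counts; a minimal sketch, using only the numbers quoted in the passages:

```python
# Reproducibility arithmetic from the quoted psychology replication project:
# 100 completed replication attempts were made on 98 papers (two papers were
# replicated by separate teams).
attempts = 100
qualitative_successes = 39       # replications judged successful qualitatively

success_rate = qualitative_successes / attempts
print(f"Qualitative success rate: {success_rate:.0%}")

# Statistical-significance criterion: 97% of originals vs 36% of replications.
original_significant = 0.97
replication_significant = 0.36
gap = original_significant - replication_significant
print(f"Drop in significant findings: {gap:.0%} points")

# Replicated effect sizes averaged roughly half those of the originals.
shrinkage = 0.5
print(f"Average effect-size shrinkage: about {shrinkage:.0%}")
```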
These results were replicated in a follow-up study, which found not only that many studies are not reproducible, but that their failure could have been anticipated:
But their study, published Monday in Nature Human Behaviour, also finds that social scientists can actually sniff out the dubious results with remarkable skill.
The results were better than the average of a previous review of the psychology literature, but still far from perfect. Of the 21 studies, the experimenters were able to reproduce 13. And the effects they saw were on average only about half as strong as had been trumpeted in the original studies.
“The likelihood that a finding will replicate or not is one part of what a reviewer would consider,” says Nosek. “But other things might influence the decision to publish. It may be that this finding isn’t likely to be true, but if it is true, it is super important, so we do want to publish it because we want to get it into the conversation.”
Biomedical research and other fields are also susceptible:
Over the recent years, there has been an increasing recognition of the weaknesses that pervade our current system of basic and preclinical research. This has been highlighted empirically in preclinical research by the inability to replicate the majority of findings presented in high-profile journals.^(1) ^(2) ^(3) The estimates for irreproducibility based on these empirical observations range from 75% to 90%. These estimates fit remarkably well with estimates of 85% for the proportion of biomedical research that is wasted at-large.^(4) ^(5) ^(6) ^(7) ^(8) ^(9) This irreproducibility is not unique to preclinical studies. It is seen across the spectrum of biomedical research. For example, similar concerns have been expressed for observational research where zero of 52 predictions from observational studies were confirmed in randomized clinical trials.^(10) ^(11) ^(12) At the heart of this irreproducibility lie some common, fundamental flaws in the currently adopted research practices. Although disappointing, this experience should probably not be surprising, and it is what one would expect also theoretically for many biomedical research fields based on how research efforts are conducted.^(13)
Intelligent people plan more, while less intelligent people act more in line with bodily desires:
“Intelligence is negatively associated with sex frequency,” says Rosemary Hopcroft, a sociologist at the University of North Carolina at Charlotte. “It’s a bit dismaying.”
And people with higher education levels generally have lower numbers of sexual partners. The latest National Survey of Family Growth shows that, for example, men with college degrees are half as likely to have had four or more partners in the last year as men with a high school education alone.
Carolyn Halpern, a professor at the UNC School of Public Health, found a high concentration of teen virgins at the top of the intelligence scale. She thinks the smartest kids might hold off on sex because they’re thinking through its potential consequences.
People with high executive functioning—in judgment, decision-making, and impulse control—usually have what’s called a slow life history strategy, notes Aurelio José Figueredo, an evolutionary psychologist at the University of Arizona: They tend to have fewer partners and less sex but more resources (such as money and status) to invest in potential offspring.
This leads to a tendency of the intelligent to delay onset of first sexual experience:
Last December I passed a paper along to Razib showing that high-school age adolescents with higher IQs and extremely low IQs were less likely to have had first intercourse than those with average to below-average intelligence (for males: 63.3% of those with IQs under 70 were still virgins, only 50.2% of those with IQs between 70 and 90, 58.6% of those with IQs between 90 and 110, and 70.3% of those with IQs over 110).
In fact, a more detailed study from 2000 is devoted strictly to this topic, and finds the same thing: Smart Teens Don’t Have Sex (or Kiss Much Either).
Depending on the specific age and gender, an adolescent with an IQ of 100 was 1.5 to 5 times more likely to have had intercourse than a teen with a score of 120 or 130. Each additional point of IQ increased the odds of virginity by 2.7% for males and 1.7% for females. But higher IQ had a similar relationship across the entire range of romantic/sexual interactions, decreasing the odds that teens had ever kissed or even held hands with a member of the opposite sex at each age.
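Since the quoted odds figures are per IQ point, they compound multiplicatively across larger gaps; a minimal sketch of the implied cumulative odds ratios (the 20- and 30-point gaps mirror the study's 120-vs-100 and 130-vs-100 comparisons):

```python
# Odds-ratio compounding: +2.7% odds of virginity per IQ point for males,
# +1.7% per point for females, per the quoted study.
def odds_multiplier(per_point: float, iq_gap: int) -> float:
    """Cumulative odds ratio across an IQ gap, compounding per point."""
    return (1 + per_point) ** iq_gap

# A 20-point gap implies roughly 1.7x the odds of virginity for males;
# a 30-point gap, roughly 2.2x.
for gap in (20, 30):
    m = odds_multiplier(0.027, gap)
    f = odds_multiplier(0.017, gap)
    print(f"IQ gap {gap}: male odds x{m:.2f}, female odds x{f:.2f}")
```

This compounding is consistent with the article's observation that a 20- to 30-point IQ difference corresponds to a severalfold difference in the likelihood of having had intercourse.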
This sub exists for users to ask conservatives questions about the definition of conservatism, Right-wing politics, conservative theory, traditional values, policy in conservatism, and conservative principles.
Only conservatives will answer those questions. New conservatives, independents, libertarians, moderates, (polite) liberals and others can then engage in Socratic dialogue in order to understand and explore conservatism, in contrast to the shouting matches found across most of Reddit and other social media sites. Those shouting matches are not productive; we aim toward civil discussion of high quality, and we moderate to match.
If anything here shocks, confuses, offends, triggers, traumatizes, or disturbs you and you have self-destructive thoughts, please consult this resource.
Social class is genetic:
In effect, the Babylonians took away the Jewish elites, selected in part for high intelligence, and left behind the poor and unskilled, selected in part for low intelligence. By the time the exiles returned, more than a century later, many of those remaining behind in Judah had been absorbed into other religions.
There are IQ band differences related to social class:
Yet in all this debate a simple and vital fact has been missed: higher social classes have a significantly higher average IQ than lower social classes.
The exact size of the measured IQ difference varies according to the precision of definitions of social class – but in all studies I have seen, the measured social class IQ difference is substantial and of significance and relevance to the issue of university admissions.
The existence of substantial class differences in average IQ seems to be uncontroversial and widely accepted for many decades among those who have studied the scientific literature. And IQ is highly predictive of a wide range of positive outcomes in terms of educational duration and attainment, attained income levels, and social status (see Deary – Intelligence, 2001).
This accounts for “inequality” and “privilege”:
As long ago as 1922, Professor Sir Godfrey Thomson and Professor Sir James Fitzjames Duff performed IQ tests on more than 13,000 Northumbrian children aged 11-12, and found that the children of professionals had an average IQ of 112 compared with an average of 96 for the children of unskilled labourers. These differences in IQ were predictive of future educational attainment.
Dozens of similar results have been reported since; indeed, I am not aware of a single study which contradicts this finding. Social class differences in intelligence are described in the authoritative textbook IQ and Human Intelligence by N. J. Mackintosh, a Professor of Psychology at Cambridge University, and in the 1996 American Psychological Association consensus statement Intelligence: Knowns and Unknowns: http://www.gifted.uconn.edu/siegle/research/Correlation/Intelligence.pdf.
Because IQ is substantially (although not entirely) hereditary (as has been shown by numerous studies of siblings including twins, and in adoption studies), and because IQ level is a good predictor of educational attainment; therefore with a fair system of exam-based selection, children from higher Social Classes will inevitably gain a disproportionately greater number of places at universities than those from lower Social Classes.
This conventional wisdom is proven true by looking at attainment in education as well as ultimate social class status:
We found that people with more education-linked genetics were more successful compared with parents and siblings. We also found mothers’ education-linked genetics predicted their children’s attainment over and above the children’s own genetics, indicating an environmentally mediated genetic effect.
The Left aims to blunt this, supporting socialization against genetics with anti-natural selection programs like entitlements:
To be included in the meta-analysis, the studies had to contain an objective measure of intelligence and a measure of participants’ family socioeconomic status in childhood. The studies also had to include participants that varied in their genetic relatedness (i.e., siblings versus identical twins) so that the researchers would be able to statistically disentangle genetic and environmental influences.
The researchers found that the relationship between genes, socioeconomic status, and intelligence depended on which country the participants were from.
It turns out that genetics is more important than nurture/socialization:
A core hypothesis in developmental theory predicts that genetic influences on intelligence and academic achievement are suppressed under conditions of socioeconomic privation and more fully realized under conditions of socioeconomic advantage: a Gene × Childhood Socioeconomic Status (SES) interaction. Tests of this hypothesis have produced apparently inconsistent results.
We can see this in competitive schooling:
Respondents who came from households with an income of less than $40,000 a year on average had lower overall best SAT scores—about 2189 on average—than those who came from more affluent backgrounds. Respondents whose parents make $500,000 or more each year reported a best overall SAT score of 2239 on average.
Surveyed members of the Class of 2019 who identified as legacies reported higher best overall SAT scores—2269 on average—than their non-legacy peers, who reported SAT scores of 2221 on average.
In addition to some entertaining Iron Maiden lyrics on the topic:
Social decay occurs and can be seen both on the physical level of everyday life in a city (abandoned buildings, vacant collapsing houses, streets in poor condition, etc.) and in the emotional state of its inhabitants (narcissism, social anxiety, paranoia, etc.). Typically, people are generalized into a vast and generic group called “strangers,” and these strangers are often ignored completely (by other strangers) in order to keep the city running efficiently and problems associated with poor behavior suppressed. The only time strangers interact with each other is when one stranger offers another a service or something that the other needs or wants.
Are we on the verge of societal collapse? Many of the greatest empires throughout world history were not conquered by outside forces. Rather, they crumbled inwardly as extreme social decay set in.
The fundamental level of trust that any society needs in order to operate efficiently is breaking down, and more Americans than ever are living in fear.
Once confidence in our societal institutions and our faith in one another is gone, it is going to be incredibly difficult to ever rebuild it.
And yet the level of economic pessimism, the cynicism about whether your kids will have a better life than you had, it’s much worse among the white community than it is among any other pocket of the country. And I think that’s really revealing. It suggests that people are seeing something on the ground that isn’t necessarily captured by just income and employment statistics.
Collapse can be defined as a rapid and enduring loss of population, identity and socio-economic complexity. Public services crumble and disorder ensues as government loses control of its monopoly on violence.
Societies of the past and present are just complex systems composed of people and technology. The theory of “normal accidents” suggests that complex technological systems regularly give way to failure. So collapse may be a normal phenomenon for civilisations, regardless of their size and stage.
Collapse expert and historian Joseph Tainter has proposed that societies eventually collapse under the weight of their own accumulated complexity and bureaucracy. Societies are problem-solving collectives that grow in complexity in order to overcome new issues. However, the returns from complexity eventually reach a point of diminishing returns. After this point, collapse will eventually ensue.
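Tainter's argument is that the marginal return on complexity eventually turns negative; a toy model makes the shape of that curve concrete (the square-root benefit and linear cost functions here are illustrative assumptions of mine, not Tainter's own):

```python
# Toy model of Tainter's diminishing returns on complexity:
# problem-solving benefit grows sublinearly with complexity while
# maintenance cost grows linearly, so net returns peak and then fall.
# (Functional forms are illustrative assumptions, not from Tainter.)
import math

def net_return(c: float, a: float = 10.0, b: float = 1.0) -> float:
    """Net payoff of complexity level c: sqrt benefit minus linear cost."""
    return a * math.sqrt(c) - b * c

# Net returns rise, peak at c = (a / (2*b))**2 = 25, then decline to zero
# and below: past the peak, added complexity makes the society worse off.
for c in (1, 9, 25, 49, 100, 144):
    print(f"complexity {c:>3}: net return {net_return(c):6.1f}")
```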
Once upon a time, our beautiful western cities were the envy of the rest of the world, but now they serve as shining examples of America’s accelerating decline. The worst parts of our major western cities literally look like post-apocalyptic wastelands, and the hordes of zombified homeless people that live in those areas are too drugged-out to care. The ironic thing is that these cities are not poor. In fact, San Francisco and Seattle are among the wealthiest cities in the entire nation. So if things are falling apart this dramatically now, how bad will things get when economic conditions really start to deteriorate?
America is in an advanced state of decay, and it is getting worse with each passing year.
If we keep doing the same things we will keep getting the same results, and right now there are no signs that the overall direction of this nation will change any time soon.
“Social fabric” refers to the intermeshing of informal institutions that creates daily life according to the inclinations of a culture. As opposed to public institutions, which take funding and have mission statements, informal institutions “just happen” in the same way that culture, ethnicity, customs, and faith organically arise wherever people originate.
Some countries have a strong and deep-set social fabric (see Japan, or China). Others have a much looser social fabric. It is important because a society functions, on average, in accordance with what is acceptable socially. When we change this, we change how society functions in potentially unpredictable ways.
A mod writes about Reddit:
Reddit is sandwiched between two forces: cynical, greedy admins who merely want to pump up the share price by showing more warm bodies using the service, and the voters (I mean users), who demonstrate a wide range of traits, with many of these users being entirely illogical, antisocial, broken, criminal, and outright stupid people.
Being a mod is like being a janitor. Every morning, there are a dozen turds on the floor. There are also a dozen users that you pray for every Sunday because they’re awesome and you’d love to know them in real life. There are maybe three people who provided a challenge to your thinking, and usually, all three read it somewhere else. Once a year you encounter someone who brings something actually new to the table. The rest of the time, expect to see:
As with humanity, Reddit divides into three groups:
In other words, if your standard is “Just inform the users!” you are going to stick the mods with a lot of work informing people who will not understand what they are told, will not be reasonable, and will just come back to vandalize. And that is before the admins get involved and crack down, because they are basically kindergarten teachers who want everybody to just get along and keep posting stupid cat pictures, photos of food, and personal drama so that Reddit’s user count goes up, and so do the NWO-bucks that the admins receive for the stock that they own.
It seems that these systems always fail the same way:
After two decades of relative stability fueled by cheap Venezuelan oil, shortages of food and medicine have once again become a serious daily problem for millions of Cubans. A plunge in aid from Venezuela, the end of a medical services deal with Brazil, and poor performances in sectors including nickel mining, sugar, and tourism have left the communist state $1.5 billion in debt to the vendors that supply products ranging from frozen chicken to equipment for grinding grain into flour, according to former Economy Minister José Luis Rodríguez.
Stores no longer routinely stock eggs, flour, chicken, cooking oil, rice, powdered milk, and ground turkey, among other products. These basics disappear for days or weeks. Hours-long lines appear within minutes of trucks showing up with new supplies. Shelves are empty again within hours.
This is history repeating itself:
Yeltsin, then 58, “roamed the aisles of Randall’s nodding his head in amazement,” wrote Asin. He told his fellow Russians in his entourage that if their people, who often must wait in line for most goods, saw the conditions of U.S. supermarkets, “there would be a revolution.”
“Even the Politburo doesn’t have this choice. Not even Mr. Gorbachev,” he said.
The fact that stores like these were on nearly every street corner in America amazed him. They even offered free cheese samples.
About a year after the Russian leader left office, a Yeltsin biographer later wrote that on the plane ride to Yeltsin’s next destination, Miami, he was despondent. He couldn’t stop thinking about the plentiful food at the grocery store and what his countrymen had to subsist on in Russia.
“When I saw those shelves crammed with hundreds, thousands of cans, cartons and goods of every possible sort, for the first time I felt quite frankly sick with despair for the Soviet people,” Yeltsin wrote. “That such a potentially super-rich country as ours has been brought to a state of such poverty! It is terrible to think of it.
And even long before that:
On March 17, 1790, the revolutionary National Assembly voted to issue a new paper currency called the assignat, and in April, 400 million were put into circulation. Short of funds, the government issued another 800 million at the end of the summer. By late 1791, 1.5 billion assignats were circulating and purchasing power had decreased 14 percent. In August 1793 the number of assignats had increased to almost 4.1 billion, its value having depreciated 60 percent. In November 1795 the assignats numbered 19.7 billion, and by then its purchasing power had decreased 99 percent since first issued. In five years the money of revolutionary France had become worth less than the paper it was printed on.
The effects of this monetary collapse were fantastic. A huge debtor class was created with a vested interest in the inflation because depreciating assignats meant debtors repaid in increasingly worthless money. Others had speculated in land, often former Church properties the government had seized and sold off, and their fortunes were now tied to inflationary rises in land values. With money more worthless each day, pleasures of the moment took precedence over long-term planning and investment.
Goods were hoarded—and thus became scarcer—because sellers expected higher prices tomorrow. Soap became so scarce that Parisian washerwomen demanded that any sellers who refused to sell their product for assignats should be put to death. In February 1793 mobs in Paris attacked more than 200 stores, looting everything from bread and coffee to sugar and clothing.
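The depreciation figures quoted above translate directly into remaining purchasing power; a minimal sketch (the five-year window used for the annualized rate is an approximation from the 1790-1795 dates):

```python
# Assignat purchasing power, from the depreciation figures quoted above:
# depreciation since issuance -> remaining fraction of 1790 value.
milestones = {
    "late 1791": 0.14,
    "Aug 1793": 0.60,
    "Nov 1795": 0.99,
}
for date, lost in milestones.items():
    print(f"{date}: {1 - lost:.0%} of original purchasing power remains")

# Implied average annual depreciation over roughly five years (1790-1795):
years = 5
remaining = 1 - 0.99
annual_kept = remaining ** (1 / years)   # fraction of value kept each year
print(f"Implied average annual loss: {1 - annual_kept:.0%}")
```

Losing 99% of purchasing power over five years works out to roughly 60% of remaining value destroyed every year, on average.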
However, these ideas are still popular with many people:
According to a Gallup poll published Monday, a majority of Democrats no longer hold a positive view of capitalism, while nearly 60 percent of them feel good about socialism.
The positive view of socialism among Democrats, and those who lean Democrat, actually dropped a point from 58 percent in 2016. But over those same two years, positive feelings about capitalism plummeted from 56 to 47 percent.
Gallup’s results reflect what many analysts and pundits have identified as a shift toward socialism within the Democratic Party. The success of self-identified socialist Sen. Bernie Sanders, I-Vt., in the 2016 Democratic presidential primary, and the upset victory of Alexandria Ocasio-Cortez – a member of the Democratic Socialists of America – over incumbent Rep. Joe Crowley, D-N.Y., are often cited as evidence of that shift.
In addition, I offer the following reasons why socialism must be rejected:
In short, if you want your society to rise up to greatness, avoid socialism.
A brief history of American socialized medicine:
The Socialist Party had endorsed a compulsory system as early as 1904, and in 1912 Theodore Roosevelt’s insurgent Progressive Party included a health insurance plank in its campaign platform.
In 1915, progressive reformers proposed a system of compulsory health insurance to protect workers against both wage loss and medical costs during sickness. The American Association for Labor Legislation’s (AALL) proposal, modeled on existing programs in Germany and England, was debated throughout the country and introduced as legislation in several states.
In the 1940s, new potential for grassroots mobilization arose when organized labor became a major backer of national health insurance. As the cost of medical care began eating up more of the average worker’s budget, both the AFL and the Congress of Industrial Organizations (CIO) took leadership roles in the struggle for health reform. In 1943, labor unions joined the reformer-experts of the Committee for the Nation’s Health and liberal administration officials in drafting the Wagner–Murray–Dingell bill (named for its congressional sponsors), the major health insurance legislation of the Truman era.
The outpouring of civil rights activity in the early 1960s spurred politicians to support Medicare as part of Johnson’s War on Poverty, and major civil rights groups all endorsed the legislation. Organized labor was again a strong supporter of health reform, not just to ensure care for the uninsurable but also “to eliminate the increasingly costly problem of negotiating health benefits for [union] retirees.”
Clinton, fearful of business and insurance company opposition, proposed a dauntingly complex system of “health alliances” that would preserve both employer-based coverage and the commercial insurance industry. Advocates for universal health coverage argued that this model would increase the power of private insurers and take away patients’ choice of doctors. One physician-activist dubbed the plan the “Health Insurance Industry Protection Act of 1993,” and another agreed that managed competition “won’t control costs and the entire health care system will be owned by a handful of insurance giants.”
See also Healthcare.
Solyndra filed fraudulent data, but it was accepted by the Obama Administration anyway:
Our investigation confirmed that during the loan guarantee application process and while drawing down loan proceeds, Solyndra provided the Department with statements, assertions, and certifications that were inaccurate and misleading, misrepresented known facts, and, in some instances, omitted information that was highly relevant to key decisions in the process to award and execute the $535 million loan guarantee. In our view, the investigative record suggests that the actions of certain Solyndra officials were, at best, reckless and irresponsible or, at worst, an orchestrated effort to knowingly and intentionally deceive and mislead the Department.
We also found that the Department’s due diligence efforts were less than fully effective. At various points during the loan guarantee process, Solyndra officials provided certain information to the Department that, had it been considered more closely, would have cast doubt on the accuracy of certain of Solyndra’s prior representations. In these instances, the Department missed opportunities to detect and resolve indicators that portions of the data provided by Solyndra were unreliable. In the end, however, the actions of the Solyndra officials were at the heart of this matter, and they effectively undermined the Department’s efforts to manage the loan guarantee process. In so doing, they placed more than $500 million in U.S. taxpayers’ funds in jeopardy.
Obama approved this “green” effort that the Bush Administration thought was too risky:
On March 20, 2009, then-Secretary of Energy Steven Chu announced Solyndra would be the recipient of a $535 million loan from his department under the Obama administration’s revamped loan guarantee program. Solyndra used the money, along with hundreds of millions more from private investors, to build a new facility where it would mass-produce its easy-to-install cylindrical solar “panels.” The whole thing lasted about two years.
The ill-fated energy company had initially asked President George Bush for cash under the loan guarantee program, which was created to help companies working with clean energy technologies that might be considered too risky for private investors.
But it wasn’t until President Obama launched his sweeping stimulus spending plan that Solyndra’s application was approved, launching the California company to poster-child status despite what were apparently growing concerns about its long-term (and even short-term) viability.
This was part of a larger pattern of fraud by the Obama Energy Department:
Meant to create jobs and cut reliance on foreign oil, Obama’s green-technology program was infused with politics at every level, The Washington Post found in an analysis of thousands of memos, company records and internal e-mails. Political considerations were raised repeatedly by company investors, Energy Department bureaucrats and White House officials.
The southern strategy is typically represented by Leftists as an ongoing conservative policy. The facts are more complex: there was a Southern Strategy, but it did not work, it was used for one election only, and it was not as the Left describes it. Furthermore, Republican control of the South did not concretely establish itself until the 2000 election.
The following Presidential candidates won the south in the years below:
The rest is recent enough that I won’t bother citing them.
Democrats were racist in the 30s/40s/50s
Robert Byrd, a high-ranking KKK member, was the longest-serving Democratic senator (actually the longest-serving US senator ever) and held office until 2010. I won’t harp on you too badly for stopping at the ’50s, though, seeing as the Dixiecrats still existed in the Democratic Party until the last of them, Byrd, died. There are other examples, but I’m just pointing things out.
The problem with the southern-strategy narrative is that it brands anyone who argues for states’ rights, a classical GOP viewpoint, though the GOP is a split party (see Lincoln’s anti-federalist arguments citing the Federalist Papers, which took a moderate position between the classical federalists and anti-federalists of the time; this is particularly interesting because the GOP, which came to be under Lincoln, was itself seen at the time as an obstruction to states’ rights, and federal power outside of wartime is now several orders of magnitude greater than it was then). The libertarians, a very anti-federalist group who have had a lot of say in the GOP since Rand really gave birth to them as we know them today, believe strongly in states’ rights. That does not make them a southern-strategy base with racial undertones. The main base echoing these sentiments today is indeed the libertarians, and those same people are also the most socially liberal section of the overall GOP.
Until the GOP’s willing to turn their rhetorical guns on the Dixiecrats-turned-Republicans, though, they’re not going to make significant in-roads.
Interesting, because only 3 of the 26 Dixiecrats turned Republican: two governors and one senator. The other 23 stayed Democrat until the end of their political careers. As shown above, the votes didn’t follow the Dixiecrats until Reagan, who won in a massive landslide anyway and was behind in the southern states (or behind overall: Carter was considered the presumptive winner) until he stomped the debates. Even after Reagan, the key Dixiecrat state still went Democrat in 1992 and 1996.
Student experiments are notorious because white college students tend to do what they believe is expected of them:
Blum’s expose — based on previously unpublished recordings of Zimbardo, a Stanford psychology professor, and interviews with the participants — offers evidence that the “guards” were coached to be cruel.
One of the men who acted as an inmate told Blum he enjoyed the experiment because he knew the guards couldn’t actually hurt him.
“There were no repercussions. We knew [the guards] couldn’t hurt us, they couldn’t hit us. They were white college kids just like us, so it was a very safe situation,” said Douglas Korpi, who was 22 years old when he acted as an inmate in the study.
In fact, the students aimed for the results that they believed were wanted:
“That first day was very mellow,” Eshelman tells The Post. “It was so mellow that I made the decision to get something started. My thinking was, ‘Somebody’s paying a lot of money for this experiment and nothing’s happening. They must be trying to prove that prison’s a bad environment, so I’m gonna make it a bad environment.’ So I took on this tough-guy persona based on ‘Cool Hand Luke’ and the fraternity hazing I’d endured the previous year.”
Eshelman became the ringleader, and says most of the stuff he had the prisoners do was fairly harmless. “You line people up, shout at them, get them to get down and do 20 push-ups, have one prisoner turn to another and shout out ‘I LOVE YOU’ or something that would embarrass them,” he says.
Eshelman was taking acting classes, and he looked at his role as an improv opportunity.
The environment was also influenced by the adults:
Occasionally, disputes between prisoner and guards got out of hand, violating an explicit injunction against physical force that both prisoners and guards had read prior to enrolling in the study. When the “superintendent” and “warden” overlooked these incidents, the message to the guards was clear: all is well; keep going as you are. The participants knew that an audience was watching, and so a lack of feedback could be read as tacit approval. And the sense of being watched may also have encouraged them to perform. Dave Eshelman, one of the guards, recalled that he “consciously created” his guard persona. “I was in all kinds of drama productions in high school and college. It was something I was very familiar with: to take on another personality before you step out on the stage,” Eshelman said. In fact, he continued, “I was kind of running my own experiment in there, by saying, ‘How far can I push these things and how much abuse will these people take before they say, ‘Knock it off?’ ”
It’s often said that the study participants were ordinary guys—and they were, indeed, determined to be “normal” and healthy by a battery of tests. But they were also a self-selected group who responded to a newspaper advertisement seeking volunteers for “a psychological study of prison life.” In a 2007 study, the psychologists Thomas Carnahan and Sam McFarland asked whether that wording itself may have stacked the odds. They recreated the original ad, and then ran a separate ad omitting the phrase “prison life.” They found that the people who responded to the two ads scored differently on a set of psychological tests. Those who thought that they would be participating in a prison study had significantly higher levels of aggressiveness, authoritarianism, Machiavellianism, narcissism, and social dominance, and they scored lower on measures of empathy and altruism.
While some guard shifts were especially cruel, others remained humane. Many of the supposedly passive prisoners rebelled. Richard Yacco, a prisoner, remembered “resisting what one guard was telling me to do and being willing to go into solitary confinement. As prisoners, we developed solidarity—we realized that we could join together and do passive resistance and cause some problems.”
In fact, the whole experiment seemed designed to go off the rails:
Philip Zimbardo, who led the experiment and is now a professor emeritus of psychology at Stanford University, encouraged the guards to act “tough,” according to newfound audio from the Stanford archive.
The experiment did not live up to the standards expected:
Twenty-one boys (OK, young men) are asked to play a game of prisoners and guards. It’s 1971. There have recently been many news reports about prison riots and the brutality of guards. So, in this game, what are these young men supposed to do? Are they supposed to sit around talking pleasantly with one another about sports, girlfriends, movies, and such? No, of course not. This is a study of prisoners and guards, so their job clearly is to act like prisoners and guards—or, more accurately, to act out their stereotyped views of what prisoners and guards do. Surely, Professor Zimbardo, who is right there watching them (as the Prison Superintendent) would be disappointed if, instead, they had just sat around chatting pleasantly and having tea. Much research has shown that participants in psychological experiments are highly motivated to do what they believe the researchers want them to do. Any characteristics of an experiment that let research participants guess how the experimenters expect or want them to behave are referred to as demand characteristics. In any valid experiment it is essential to eliminate or at least minimize demand characteristics. In this experiment, the demands were everywhere.
Subsequent revelations about the experiment—published since the first edition of my textbook—reveal that the guards didn’t even have to guess how they were supposed to behave; they were largely told how by Zimbardo and his associates. In his relatively recent book, The Lucifer Effect [4, p 55] Zimbardo describes in the following terms what he told the guards at the outset of the study:
“We cannot physically abuse or torture them,” I said. “We can create boredom. We can create a sense of frustration. We can create fear in them, to some degree. We can create a notion of the arbitrariness that governs their lives, which are totally controlled by us, by the system, by you, me, [Warden] Jaffe. They’ll have no privacy at all, there will be constant surveillance — nothing they do will go unobserved. They will have no freedom of action. They will be able to do nothing and say nothing that we don’t permit. We’re going to take away their individuality in various ways. They’re going to be wearing uniforms, and at no time will anybody call them by name; they will have numbers and be called only by their numbers. In general, what all this should create in them is a sense of powerlessness. We have total power in the situation. They have none. …”
Is this not an overt invitation to be abusive in all sorts of psychological ways? And, when the guards did behave in these ways and escalated that behavior, with Zimbardo watching and apparently (by his silence) approving, would that not have confirmed in the subjects’ minds that they were behaving as they should? They were doing this all for the sake of an experiment, for the good of science, and apparently they were doing the right thing—so they continued, and did even more of it, until the experiment was stopped. They may also have been motivated by the same ideological passion that motivated Zimbardo to conduct the experiment in the first place–to prove to the world that prison guards are abusive because of the situation they are in, or to prove that prisoners are hurt by such abuse.
We all use profiling.
First, stereotypes are not bugs in our cultural software but features of our biological hardware. This is because the ability to stereotype is often essential for efficient decision-making, which facilitates survival. As Yale psychologist Paul Bloom has noted, “you don’t ask a toddler for directions, you don’t ask a very old person to help you move a sofa, and that’s because you stereotype.”
Our evolutionary ancestors were often called to act fast, on partial information from a small sample, in novel or risky situations. Under those conditions, the ability to form a better-than-chance prediction is an advantage. Our brain constructs general categories, from which it derives predictions about category-relevant specific, and novel, situations. That trick has served us well enough to be selected into our brain’s basic repertoire. Wherever humans live, so do stereotypes. The impulse to stereotype is not a cultural innovation, like couture, but a species-wide adaptation, like color vision.
“Our ability to stereotype people is not some sort of arbitrary quirk of the mind, but rather it’s a specific instance of a more general process, which is that we have experience with things and people in the world that fall into categories and we could use our experience to make generalizations of novel instances of these categories. So everyone here has a lot of experience with chairs and apples and dogs and based on this, you could see these unfamiliar examples and you could guess — you could sit on the chair, you could eat the apple, the dog will bark.”
Second, contrary to popular sentiment, stereotypes are usually accurate. (Not always to be sure. And some false stereotypes are purposefully promoted in order to cause harm. But this fact should further compel us to study stereotype accuracy well so that we can distinguish truth from lies in this area). That stereotypes are often accurate should not be surprising to the open and critically minded reader. From an evolutionary perspective, stereotypes had to confer a predictive advantage to be elected into the repertoire, which means that they had to possess a considerable degree of accuracy, not merely a ‘kernel of truth.’
Survey of the literature:
- Over 50 studies have now been performed assessing the accuracy of demographic, national, political, and other stereotypes.
- Stereotype accuracy is one of the largest and most replicable effects in all of social psychology. Richard et al (2003) found that fewer than 5% of all effects in social psychology exceeded r’s of .50. In contrast, nearly all consensual stereotype accuracy correlations and about half of all personal stereotype accuracy correlations exceed .50.
- The evidence from both experimental and naturalistic studies indicates that people apply their stereotypes when judging others approximately rationally. When individuating information is absent or ambiguous, stereotypes often influence person perception. When individuating information is clear and relevant, its effects are “massive” (Kunda & Thagard, 1996, yes, that is a direct quote, p. 292), and stereotype effects tend to be weak or nonexistent. This puts the lie to longstanding claims that “stereotypes lead people to ignore individual differences.”
- There are only a handful of studies that have examined whether the situations in which people rely on stereotypes when judging individuals increases or reduces person perception accuracy. Although those studies typically show that doing so increases person perception accuracy, there are too few to reach any general conclusion. Nonetheless, that body of research provides no support whatsoever for the common presumption that the ways and conditions under which people rely on stereotypes routinely reduces person perception accuracy.
the theory that prices are determined by the interaction of supply and demand: an increase in supply will lower prices if not accompanied by increased demand, and an increase in demand will raise prices unless accompanied by increased supply
The law of supply and demand is an unwritten rule: when demand for a product is low, less of it is supplied and its price falls; when demand is high, its price rises. If demand for a product stays high, supply eventually increases, driving the price back down.
The law of demand states that the higher the price of a product, the less of it consumers will demand. The law of supply states that producers will offer more of a particular good as its price rises.
Generally, low supply and high demand increase price. In contrast, the greater the supply and the lower the demand, the price tends to fall.
Also called a market-clearing price, the equilibrium price is the price at which the producer can sell all the units he wants to produce and the buyer can buy all the units he wants.
For example, if unemployment is high, there is a large supply of workers. As a result, businesses tend to lower wages. Conversely, when unemployment is low, the supply of workers is also low, and as a result, to entice workers, employers tend to offer higher salaries.
The most basic laws in economics are the law of supply and the law of demand. Indeed, almost every economic event or phenomenon is the product of the interaction of these two laws. The law of supply states that the quantity of a good supplied (i.e., the amount owners or producers offer for sale) rises as the market price rises, and falls as the price falls. Conversely, the law of demand says that the quantity of a good demanded falls as the price rises, and vice versa.
Economists often talk of “demand curves” and “supply curves.” A demand curve traces the quantity of a good that consumers will buy at various prices. As the price rises, the number of units demanded declines. That is because everyone’s resources are finite; as the price of one good rises, consumers buy less of that and, sometimes, more of other goods that now are relatively cheaper. Similarly, a supply curve traces the quantity of a good that sellers will produce at various prices. As the price falls, so does the number of units supplied. Equilibrium is the point at which the demand and supply curves intersect—the single price at which the quantity demanded and the quantity supplied are the same.
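The intersection described above can be made concrete with a small numerical sketch; the linear curve coefficients below are invented purely for illustration, not drawn from any source:

```python
# Hypothetical linear curves: demand Qd = 100 - 2P, supply Qs = 10 + 3P.
# Equilibrium is the price where quantity demanded equals quantity supplied:
#   100 - 2P = 10 + 3P  =>  5P = 90  =>  P = 18, Q = 64.

def demand(price):
    """Units consumers will buy at a given price (falls as price rises)."""
    return 100 - 2 * price

def supply(price):
    """Units producers will offer at a given price (rises as price rises)."""
    return 10 + 3 * price

def equilibrium():
    """Solve demand(P) == supply(P) for the market-clearing price."""
    price = 90 / 5  # algebraic solution of 100 - 2P = 10 + 3P
    return price, demand(price)

price, quantity = equilibrium()
print(price, quantity)  # prints 18.0 64.0
```

At any price above 18 the sketch shows a surplus (supply exceeds demand), and at any price below it a shortage, which is exactly the pressure toward equilibrium the passage describes.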
Leftist talking points are oft-repeated cliche explanations produced by the Leftist establishment that purport to show why conservative positions are wrong. They are customarily expressed in a circular manner to reinforce liberals’ perception that they are right, and when introduced in political debate, make the debate a circular my-team-versus-your-team in lieu of argument.
Talking points are usually produced by Jon Stewart, Stephen Colbert, Rachel Maddow, Anderson Cooper and other liberal meme-generating facilities. Their dominant trait is using one aspect of a situation to explain the whole.
One recent example: “Texas has no zoning; a fertilizer plant blew up there; therefore, the lack of zoning caused this disaster.” As it turns out, the problem was not a lack of zoning but the plant lying about what it had on the premises, and zoning would not have prohibited the plant’s location.
Like memes, liberal talking points depend on heavy repetition to be successful, and Reddit is the perfect place for circular conversation to repeat them. Their purpose is not to be logical, but to be plausible, and thus to be a suitable weapon in a shouting match to drown out the opposition.
The origin of the income tax on individuals is generally cited as the passage of the 16th Amendment, passed by Congress on July 2, 1909, and ratified February 3, 1913; however, its history actually goes back even further. During the Civil War Congress passed the Revenue Act of 1861 which included a tax on personal incomes to help pay war expenses. The tax was repealed ten years later. However, in 1894 Congress enacted a flat rate Federal income tax, which was ruled unconstitutional the following year by the U.S. Supreme Court because it was a direct tax not apportioned according to the population of each state. The 16th amendment, ratified in 1913, removed this objection by allowing the Federal government to tax the income of individuals without regard to the population of each State.
Effective Tax Rate
If you add up the four income-based categories of taxation (Federal, state/local, Social Security, and Medicare), the average American’s effective tax rate is 29.8%. This is in addition to any other taxes paid, such as sales tax, property tax, or other taxes on specific items.
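The 29.8% figure is a simple sum of the four component rates. A sketch of the arithmetic follows; the federal and state/local rates below are hypothetical placeholders chosen only so the total matches the article’s figure (only the 6.2% Social Security and 1.45% Medicare employee rates are statutory):

```python
def effective_income_tax_rate(federal, state_local,
                              social_security=6.2, medicare=1.45):
    """Sum the four income-based tax categories, each in percent of income."""
    return federal + state_local + social_security + medicare

# e.g. an assumed 17% federal + 5.15% state/local burden reproduces
# the article's 29.8% average effective rate:
print(effective_income_tax_rate(17.0, 5.15))  # prints 29.8
```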
Keep in mind also that 44% pay nothing:
Approximately 76.4 million or 44.4% of Americans won’t pay any federal income tax in 2018, up from 72.6 million people or 43.2% in 2016 before President Trump’s Tax Cuts and Jobs Act, according to estimates from the Tax Policy Center, a nonprofit joint venture by the Urban Institute and Brookings Institution, which are both Washington, D.C.-based think tanks. That’s below the 50% peak during the Great Recession. They still obviously pay sales tax, property taxes and other taxes.
We are all living off of the earnings of a relatively small group:
In reality, the 1 percent paid about 40 percent of all income taxes in 2017.
The Weekly Pulse also found that people wanted the “1 percent” to pay more in taxes—but lowered those numbers when asked how much someone who makes $500,000 in income should pay.
In fact, most high-tax states send more money to Washington than they get back in federal spending. Most low-tax states make a profit from the federal government’s system of taxing and spending.
The point is that when people can deduct their state taxes, the rest of the nation picks up the slack, a transfer that this measurement does not capture and this article does not cover.
In addition, these payment shortfalls are only a couple percentage points, and go toward Leftist programs mandated by federal law:
The IRS collected taxes equal to 37.03 percent of the income earned in America (again, this includes non-individual income taxes collected in a state, such as corporate tax and excises). In the blue states, the figure is a little higher: 39.16 percent. In the purples and reds, it is a little lower: 36.37 percent and 34.29 percent, respectively.
One way of looking at it is to see what percentage of states’ budgets are paid for by direct transfers of federal funds. That was the analysis done in a Pew Trusts report in July 2017, based on federal government data from fiscal year 2015. Intergovernmental transfers from the feds to the states include the money used to pay for programs that Washington funds but the states administer. About half goes to health-care programs; education and transportation spending also come out of these funds. They also include welfare programs, grants to local police forces, and other indirect federal spending within a state (this 2013 CBO report gives more details).
In fact, the reality is more muddled:
In fact, it’s not as clear as liberals think that the system consists of “Makers and Takers,” with the blue states making the money, and the red states taking it. That belief seems to come from a years-old graphic, based on data that dates back to the middle of the George W. Bush administration. Since then, the electoral and economic maps have both changed a little bit. Thankfully, New York State has helpfully updated us, at least to 2013.
On a per-capita basis — which is the right way to calculate this — deep-blue New Jersey is the biggest donor state. But red-blooded Wyoming is the next biggest, and North Dakota makes the list too. There is certainly a preponderance of blue states at that end of the spectrum, but it’s not a clear “Donor states are blue” story. And if we match the 2013 data to the closest election (2012) we find that New Mexico, the biggest net recipient, went for Obama in 2012, as did Virginia, Maryland, Maine and Hawaii. What’s driving the net subsidies isn’t anything as simple as political identification.
Before 2011, Title IX was rarely enforced and largely ignored because of a strict standard of proof, according to K.C. Johnson, a professor at Brooklyn College and expert on due process in college sexual assault cases.
But changes triggered during the Obama Administration swung the pendulum to the other side, Johnson said.
The guidance switched to a “preponderance of the evidence” standard, meaning the incident was more likely than not to have occurred. It also made it more difficult for the defendants to access all evidence against them and to cross-examine the accusers.
Sexual assaults on and off campus reported to college authorities more than doubled at Maryland schools, according to U.S. Department of Education data, mirroring a national trend.
Population, as Malthus said, naturally tends to grow “geometrically,” or, as we would now say, exponentially. In a finite world this means that the per capita share of the world’s goods must steadily decrease. Is ours a finite world?
A fair defense can be put forward for the view that the world is infinite; or that we do not know that it is not. But, in terms of the practical problems that we must face in the next few generations with the foreseeable technology, it is clear that we will greatly increase human misery if we do not, during the immediate future, assume that the world available to the terrestrial human population is finite.
… The tragedy of the commons develops in this way. Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.
As a rational being, each herdsman seeks to maximize his gain. Explicitly or implicitly, more or less consciously, he asks, “What is the utility to me of adding one more animal to my herd?” This utility has one negative and one positive component.
- The positive component is a function of the increment of one animal. Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly +1.
- The negative component is a function of the additional overgrazing created by one more animal. Since, however, the effects of overgrazing are shared by all the herdsmen, the negative utility for any particular decision-making herdsman is only a fraction of -1.
Adding together the component partial utilities, the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another…. But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit–in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.
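Hardin’s partial-utility argument can be sketched numerically. The herd count and the unit overgrazing cost below are invented for illustration; the structure (full private gain, shared loss) is taken from the passage:

```python
def private_utility(n_herdsmen, overgraze_cost=1.0):
    """Net utility to ONE herdsman of adding one animal to his herd.

    Gain: he keeps the full proceeds of the extra animal (+1).
    Loss: the overgrazing cost is shared by all herdsmen, so he
    personally bears only a 1/n fraction of it.
    """
    return 1.0 - overgraze_cost / n_herdsmen

# With 10 herdsmen, each sees +0.9 from adding an animal, so every
# rational herdsman adds another -- even though the commons as a whole
# gains +1 and loses the full overgraze_cost for each animal added.
for n in (1, 2, 10, 100):
    print(n, private_utility(n))
```

The key property is that the private utility is positive for every herdsman whenever more than one shares the commons, which is exactly why each concludes that adding another animal is the only sensible course.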
There are two sides to the aesthetic experience. There’s the kind of relishing side, and the exploring side. Relishing a beautiful work of art or a sublime work of music is something that you can do without necessarily exploring the depths of the human heart, even though the work of art touches on them. And, perhaps, this kind of aestheticism means forgetting the cognitive dimension of aesthetic pleasures and realizing that they’re not just sensory pleasures. They’re not just pleasures in the way you experience things. They’re also directed toward a vision of the world. Each of those goals that I’ve talked about—truth and goodness and beauty—are important because they are what you’re focusing on but they seem to reduce art itself to an inadequate means. They seem to leave out the aesthetic dimension. Only when combined in a unity—the kind of truth and the kind of goodness of which beauty is the sign—do these values mark out a path for art. If beauty is the way in which truth is presented, the way in which goodness comes to your consciousness, then we seem to have something like an account of the value of art.
As mentioned earlier, symptoms of GID at prepubertal ages decrease or even disappear in a considerable percentage of children (estimates range from 80–95%) 11, 13. Therefore, any intervention in childhood would seem premature and inappropriate.
The overall mortality for sex-reassigned persons was higher during follow-up (aHR 2.8; 95% CI 1.8–4.3) than for controls of the same birth sex, particularly death from suicide (aHR 19.1; 95% CI 5.8–62.9). Sex-reassigned persons also had an increased risk for suicide attempts (aHR 4.9; 95% CI 2.9–8.5) and psychiatric inpatient care (aHR 2.8; 95% CI 2.0–3.9). Comparisons with controls matched on reassigned sex yielded similar results. Female-to-males, but not male-to-females, had a higher risk for criminal convictions than their respective birth sex controls.
This refers to the movement to keep modern society in the hands of the natural elites instead of the democratic elites:
Ultra, abbreviation of ultraroyalist, French ultraroyaliste, the extreme right wing of the royalist movement in France during the Second Restoration (1815–30). The ultras represented the interests of the large landowners, the aristocracy, clericalists, and former émigrés. They were opposed to the egalitarian and secularizing principles of the Revolution, but they did not aim at restoring the ancien régime; rather, they were concerned with manipulating France’s new constitutional machinery in order to regain the assured political and social predominance of the interests they represented.
The term descends from “royalism” or support of the monarchy, aristocracy, feudalism, and caste system:
a supporter or adherent of a king or royal government, especially in times of rebellion or civil war.
It is growing in popularity as democracy craters:
On October 14, as night fell on the newly grieving country, a large, angry crowd gathered outside a small shophouse in Phuket. Dressed in black, they demanded the arrest of a young man inside, who they accused of disrespecting the royal family on social media. A thick line of police officers was needed to keep the outraged mob back from the shop. It took several hours to calm the throng, which eventually dispersed at around 3 am.
Unions dominated from the 1940s through 1980s, at which point most labor went offshore:
But the end of the war saw a wave of strikes in many industries and it was at this point that union power and membership reached its zenith. The unions were a controlling force in the economy during the late ’40s and ’50s, and the AFL merged with the Congress of Industrial Organizations (CIO) at this point to spearhead the American labor force.
But the strength of the unions during this era led many union leaders into corruption and complacency, and the power of the unions began to decline in subsequent decades. As additional laws were passed outlawing child labor and mandating equal pay for equal work regardless of race or gender, unions became less important to workers who were able to rely on federal laws to protect them.
Despite the erosion in their power and influence, labor unions continue to prove their importance, as they were instrumental in getting President Obama elected in 2008 and reelected in 2012.
Associations with Communism may have hampered them:
The communist penetration contributed to the decline of American unions. When the Truman administration imposed “loyalty oaths” to get communists out of the government, union leaders were trapped. John L. Lewis, the president of the coal miners union and a (literally) violent foe of communists in his own rank and file, resisted loyalty oaths because he understood that they would lead to a kind of political regularity that would curb the labor movement’s ability to challenge its business adversaries. He proved to be right. With the reality of domestic communism downplayed, old political prejudices were passed on and unthinkingly accepted.
Their insistence on collective reward schemes raises costs and makes business unable to react to the market:
Unions can have the power to impede a company’s ability to compete and thrive. A firm might be in desperate trouble, yet its unions may be unwilling to bend or compromise in order to help the company survive. Many employers find themselves left very inflexible when they have union contracts to abide by. Meanwhile, if a union negotiates high wages for workers at a company, it may lead the company to charge higher prices for its offerings, which can make it less competitive with rivals.
Some argue that unions have led to a decline in the value of merit. In many union settings, workers can’t advance much or at all on their merits, but rather they must generally progress within the limits defined by union contracts (where advances might be based on seniority, for example). Employers may have trouble weeding out ineffective employees if they belong to unions. In theory, at least, unionized workers might become so comfortable and protected that they lose the incentive to work hard for their employer. And outstanding employees might lose their get-up-and-go if there’s no incentive to excel — or worse, if they’re pressured by the union not to go the extra mile.
Simply put, this is someone who believes the positive propaganda about the Left when it is an obvious lie:
The phrase “useful idiots,” often attributed to an earlier Vladimir (Lenin), referred to Westerners who had been successfully manipulated by Soviet propaganda.
It simultaneously refers to their utility as tools of World Leftism, and the scorn that their manipulators feel for them:
Many years ago, a phrase was coined by the leaders of the Soviet Union to describe those in the West who naively promoted the cause of Russian Communism when in reality they were held in contempt and were being cynically used by the Soviet hierarchy. The term “useful idiot” more than ever applies to a vast swath of citizens in the United States who have been cynically used by the hardcore left for a cause they are unwilling to understand.
Among the mysteries confronting those of us who have immigrated to the United States from countries that have experienced the devastating outcome of socialist/Marxist ideology is why seemingly successful and educated people could be so easily swayed to support those whose end-game is to transform the country into a socialist “utopia” and to control the day-to-day lives of all Americans. Among these “useful idiots” are a seeming majority of the Jewish population as well as many in business, and nearly all in entertainment and the media.
The answer appears to be that despite the hardcore left accounting for less than 20% of the population, their influence extends far beyond thanks to the apparent inability of their peripheral supporters to use any modicum of reasoning — as the left in the United States has been able to identify and manipulate those susceptible to emotional arguments.
The Soviets knew that we had many weak people among us, and carefully manipulated them in order to force changes in our policy:
During the Vietnam War we spread vitriolic stories around the world, pretending that America’s presidents sent Genghis Khan-style barbarian soldiers to Vietnam who raped at random, taped electrical wires to human genitals, cut off limbs, blew up bodies and razed entire villages. Those weren’t the facts. They were our tales, but some seven million Americans ended up being convinced their own president, not communism, was the enemy. As Yuri Andropov, who conceived this dezinformatsiya war against the United States used to tell me, people are more willing to believe smut than holiness.
Like most disposable tools, they will be used and discarded when they no longer have utility for achieving the goal of Equality-Utopia:
[T]he useful idiots, the leftists who are idealistically believing in the beauty of the Soviet socialist or Communist or whatever system, when they get disillusioned, they become the worst enemies. That’s why my KGB instructors specifically made the point: never bother with leftists. Forget about these political prostitutes. Aim higher. […] They serve a purpose only at the stage of destabilization of a nation. For example, your leftists in the United States: all these professors and all these beautiful civil rights defenders. They are instrumental in the process of the subversion only to destabilize a nation. When their job is completed, they are not needed any more. They know too much. Some of them, when they get disillusioned, when they see that Marxist-Leninists come to power—obviously they get offended—they think that they will come to power. That will never happen, of course. They will be lined up against the wall and shot.” ― Yuri Bezmenov
Endemic to democracy:
Contrary to the claims of many liberals, the problem of voter fraud is as old as the country itself. As the U.S. Supreme Court noted when it upheld Indiana’s voter identification law, “flagrant examples” of voter fraud “have been documented throughout this Nation’s history by respected historians and journalists.”
Attempts to commandeer election results have been documented dating back to the 19th century, when New York City’s infamous Tammany Hall was synonymous with political corruption and election fraud. In one New York election in 1844, 55,000 votes were recorded even though there were only 41,000 eligible voters. In the decades since, these efforts have continued, and determined fraudsters have become only more creative in their efforts to fix the outcome of elections.
However, as the National Commission on Federal Election Reform has stated, the problem “is not the magnitude of voter fraud. In close or disputed elections, and there are many, a small amount of fraud could make the margin of difference.” The U.S. Supreme Court has concurred with this assessment, noting that known instances of fraud “demonstrate that not only is the risk of voter fraud real but that it could affect the outcome of a close election.”
Indeed, recent elections bear this out. In 2015, a city council election in the New Jersey town of Perth Amboy was decided by a mere 10 votes. A judge overturned the election and ordered a new one after it was revealed that at least 13 illegal absentee ballots had been cast. The 2003 mayoral primary in East Chicago, Indiana, was overturned by the state Supreme Court after evidence of widespread fraud was revealed. The new election resulted in a different winner. Numerous convictions for election fraud resulted from this election, and are documented in The Heritage Foundation’s Voter Fraud Database.
Also in the UK:
I reported that the wholesale falsification of postal votes had not been confined to the wards of Aston and Bordesley Green, the subject of the Petitions, but had been widespread in those wards of Birmingham where the Labour Party was attempting to counteract the collapse of the Labour vote in the Muslim Asian community following the invasion of Iraq in 2003.
This often reflects political sensitivities:
Abuses of postal voting on demand were noted too often to be carried out in communities where an individual’s right to vote in secret and exercise free choice may not be fully valued. Evidence was presented of pressure being put on vulnerable members of some ethnic minority communities, particularly women and young people, to vote according to the will of the elders, especially in communities of Pakistani and Bangladeshi background.
Sometimes it involves double voting:
More than 700 Pennsylvania voters might have cast two ballots in recent elections, the secretary of State said Thursday.
Data from the Kansas-based Interstate Voter Registration Data Crosscheck, a multistate coalition that agreed to work together to clean voter registration rolls of voters who have moved or died, found that 731 Pennsylvanians potentially cast two votes in Pennsylvania or a vote in Pennsylvania and a vote in another participating state in the 2012 election.
And in Colorado:
Election sleuthing by Brian Maass of KCNC-TV in Denver exposed multiple instances in recent years where dead Coloradans were still voting. A dead World War II veteran named John Grosso voted in a 2006 primary election, and a woman named Sara Sosa who died in 2009 cast ballots in 2010, 2011, 2012 and 2013. Mrs. Sosa’s husband Miguel died in 2008, but a vote was cast in his name one year later.
Administrators with the Secretary of State’s Office said the veteran’s vote may have been the result of an election judge’s error, but the station said that still didn’t explain why “dozens of others were still listed as active voters months and sometimes years after their deaths.”
The CBS affiliate noted that voter fraud is important because oftentimes a race is decided by a slim margin. Colorado’s 7th Congressional district came down to 121 votes out of more than 175,000 that were cast in 2002, the station reported.
Harrisonburg officials and the FBI are investigating allegations of voter registration fraud after officials say almost 20 voter applications were turned in under the names of dead people.
Logan said applications using a deceased person’s real name and address but a false social security number would not be flagged in the voter system.
The perpetrators order mail-in ballots by forging the names of citizens in target districts. They then hire kindly middle-aged and older women to go door-to-door with those ballots in hand. They knock on the door of the citizen whose ballot they have, and make fraudulent claims as to why they are visiting. They may claim they are gathering signatures for a petition, or beg for a signature so they might “meet their quota” for whatever alleged cause they are soliciting on behalf of. Frequently, it is something like “Republicans are trying to take away the rights of black voters”.
The citizen then unknowingly signs the yellow ballot carrier envelope that contains their ballot – a ballot the perpetrators have already filled out that supports their candidate.
Especially local elections:
Authorities charged Edinburg Mayor Richard Molina with organized election fraud, a first-degree felony, and two counts of illegal voting for allegedly making voters change their addresses to places they did not live, including an apartment complex he owned.
Molina unseated the city’s longtime mayor by about 1,200 votes in 2017. Located along the U.S.-Mexico border, Edinburg is home to headquarters for U.S. Customs and Border Protection operations in the Rio Grande Valley. The city’s population is about 90,000.
Indeed according to a Pew Charitable Trust report from February 2012, one in eight voter registrations are “significantly inaccurate or no longer valid.” Since there are 146 million Americans registered to vote, this translates to a stunning 18 million invalid voter registrations on the books. Further, “More than 1.8 million deceased individuals are listed as voters, and approximately 2.75 million people have registrations in more than one state.” Numbers of this scale obviously provide ripe opportunity for fraud.
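The arithmetic behind the Pew figures can be checked directly. A minimal sketch, using only the two numbers quoted above (146 million registered voters, one-in-eight invalid rate):

```python
# Back-of-the-envelope check of the Pew Charitable Trust figures quoted above.
registered_voters = 146_000_000  # registered Americans, per the report
invalid_rate = 1 / 8             # "one in eight" significantly inaccurate or invalid

invalid_registrations = registered_voters * invalid_rate
print(f"~{invalid_registrations / 1e6:.2f} million invalid registrations")
# → ~18.25 million, consistent with the "stunning 18 million" cited above
```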
Some 3.5 million more people are registered to vote in the U.S. than are alive among America’s adult citizens. Such staggering inaccuracy is an engraved invitation to voter fraud.
The Election Integrity Project of Judicial Watch — a Washington-based legal-watchdog group — analyzed data from the U.S. Census Bureau’s 2011–2015 American Community Survey and last month’s statistics from the federal Election Assistance Commission. The latter included figures provided by 38 states. According to Judicial Watch, eleven states gave the EAC insufficient or questionable information. Pennsylvania’s legitimate numbers place it just below the over-registration threshold.
My tabulation of Judicial Watch’s state-by-state results yielded 462 counties where the registration rate exceeded 100 percent. There were 3,551,760 more people registered to vote than adult U.S. citizens who inhabit these counties.
In all, the analysis showed 119 dead people have voted a total of 229 times in Chicago in the last decade.
Allen says about 60,000 dead voters have been purged from the rolls over the last decade — but 2 Investigators found numerous examples of that not happening.
A comparison of records by David Goldstein, investigative reporter for CBS2/KCAL9, has revealed hundreds of so-called dead voters in Southern California, a vast majority of them in Los Angeles County.
CBS2 compared millions of voting records from the California Secretary of State’s office with death records from the Social Security Administration and found hundreds of so-called dead voters.
Specifically, 265 in Southern California and a vast majority of them, 215, in Los Angeles County alone.
Gov. Rick Scott’s elections chief on Wednesday defiantly refused a federal demand to stop purging non-citizens from Florida’s voter rolls, intensifying an election-year confrontation with President Barack Obama’s administration as each side accuses the other of breaking federal law.
What’s more, the Voting Rights Act applies to only five Florida counties — Monroe, Hillsborough, Collier, Hardee and Hendry — and not the other 62 in Florida, including Miami-Dade, where about 1,600 of the 2,700 potential noncitizens were initially identified by the state in a database created by the Department of Highway Safety and Motor Vehicles.
They found many more potential violations:
Last month, Florida election officials announced that by cross-referencing voter rolls with driver’s licenses and other materials, they believed 2,600 registered voters were in fact not U.S. citizens, and that they were looking into the records of another 180,000 registered voters. Suspected noncitizens were then sent letters requiring them to confirm their citizenship in order to retain their voter rights.
And convictions arose:
In notebooks collected by investigators, Cabrera had names and addresses of more than 500 voters who were mostly elderly Hispanics who live in Hialeah. The lists, titled “Deisy’s Voters,” reportedly included information as to whether the voter was illiterate or was blind, deaf or had Alzheimer’s.
In 2008, she allegedly received more than $9,000 in payments from more than half a dozen judicial candidates. Cabrera worked as a campaign worker for three of the candidates, but what they paid her was about half of what she listed in her notes.
Last July, acting on a tip police stopped Cabrera and found she had 12 absentee ballots belonging to other voters in her possession.
Federal jurors have convicted two Magoffin County officials in a vote-fraud scheme in which the judge-executive also was implicated.
In one election, for instance, he added 60 votes to the total for a state representative candidate, and Risner signed names to the precinct log of people who hadn’t shown up to cover the extra votes, McCarty said.
Prosecutors also presented testimony from several people who said various members of the conspiracy paid them $50 in 2014 to vote for the right slate.
In Parker’s case, he was charged with voter fraud for votes in the Nov. 6, 2012, presidential election. He voted in person at his polling place in Spring Hill, Tenn. Authorities say that was after previously mailing in another vote by absentee ballot in Florida on Oct. 28, and yet another absentee ballot vote in North Carolina the following day. He pleaded guilty to felony voter registration and felony voting fraud in Rutherford County, N.C., last November, and was spared jail time under the law.
In Wisconsin, 52-year-old Robert Monroe also was sentenced to jail earlier this year after he was charged with 13 counts of election fraud, including multiple voting and voting twice in the 2012 presidential race. Prosecutors say Monroe voted by absentee ballot, where he lives in Shorewood, Wis., on Nov. 1, 2012. Then on election day five days later, authorities say he drove four hours south to Lebanon, Ind., to vote in person, using his Indiana drivers license to sign in.
Party officials said candidate Wendy Rosen, who was running in Maryland’s 1st Congressional District, had voted twice in Maryland and Florida in two separate elections. She pleaded guilty to voting illegally in two elections.
And in Cincinnati, veteran poll worker Melowese Richardson was accused of voting twice in the 2012 presidential election, after Hamilton County prosecutors charged her in 2013 with eight counts of illegal voting over several elections. She pleaded guilty to four counts, and prosecutors said she had even voted in the presidential election for her sister, who had been in a coma for almost a decade. Richardson was sentenced to five years in prison but was released early.
In Kansas, Lincoln Wilson also was charged with voting in both that state and in Colorado, where records show he is a registered Republican. He was accused of multiple counts stemming from the 2010, 2012, and 2014 elections.
Both Hudson and Kent Hallum waived indictment and entered guilty pleas to a conspiracy charge in an information on September 5, 2012. In doing so, each acknowledged his participation in a conspiracy to bribe voters to influence absentee votes in the Arkansas District 54 primary, its runoff election, and the general election, all of which were held between February and July 2011. Hudson Hallum was a candidate in that election and won the District 54 House seat.
Hudson Hallum and Kent Hallum tasked Carter, Malone, and others with identifying absentee ballot voters within District 54; obtaining and distributing absentee ballot applications to particular voters; determining when absentee ballots were mailed to absentee voters by the Crittenden County Clerk’s Office; and making contact with recipients of absentee ballots to assist those voters in completing the ballots. Once such absentee ballots were completed, the absentee voters typically placed their ballots in unsealed envelopes, which were retrieved by Carter, Malone, and others and then subsequently delivered to either Hudson Hallum or Kent Hallum for inspection to ensure that the absentee ballot votes had been cast for Hudson Hallum. After inspection by Hudson Hallum or Kent Hallum, the absentee ballots that contained votes for Hudson Hallum were sealed and mailed to the Crittenden County Clerk’s Office. If a ballot contained a vote for Hudson Hallum’s opponent, it was destroyed.
The Washington Secretary of State’s Office has found 74 possible cases of voter fraud in the state, including one from Lewis County, in the 2016 general election.
More than one out of every five registered Ohio voters is probably ineligible to vote.
In two counties, the number of registered voters actually exceeds the voting-age population: Northwestern Ohio’s Wood County shows 109 registered voters for every 100 eligible, while in Lawrence County along the Ohio River it’s a mere 104 registered per 100 eligible.
Another 31 counties show registrations at more than 90 percent of those eligible, a rate regarded as unrealistic by most voting experts. The national average is a little more than 70 percent.
Many other instances of state, local, and federal voting fraud were found:
As the debate rages, here are examples of voter fraud in 23 different states.
A 2014 article published in the journal Electoral Studies found that “non-citizen voters likely gave Senate Democrats the pivotal 60th vote needed to overcome filibusters in order to pass health care reform.” The report also noted that “there is reason to believe non-citizen voting changed one state’s Electoral College votes in 2008, delivering North Carolina to Obama.”
Researchers from Old Dominion University (ODU) and George Mason University (GMU) analyzed participation rates by non-citizens using data from 2008 and 2010 Cooperative Congressional Election Studies. With this data, the researchers estimated that roughly 620,000 non-citizens were registered to vote prior to the 2008 election.
The researchers focused on the North Carolina presidential tally as well as the senate race in Minnesota. By comparing non-citizen turnout to the vote margin needed to win the elections, they concluded that non-citizen voting likely won the elections for the Democratic Party candidates in both instances. Referring to the North Carolina election, the authors wrote that “it is likely … that John McCain would have won North Carolina were it not for the votes for Obama cast by non-citizens.” They described the Minnesota senate election as one of the most important congressional races in that election cycle, given that it ensured a 60-vote filibuster-proof Democratic majority. Notably, the vote was decided by only 312 votes. Highlighting the razor-thin margin in which candidate Al Franken won, the authors wrote that “participation by more than 0.65 percent of non-citizens in MN is sufficient to account for the entirety of Franken’s margin. Our best guess is that nearly ten times as many voted.”
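The claim about the Franken margin reduces to simple arithmetic. A hedged sketch of the implied numbers (the 312-vote margin and 0.65 percent threshold are from the study quoted above; the resulting non-citizen population figure is the implied value, not a number stated in the source):

```python
# If 0.65% of Minnesota's non-citizen adults voting would cover the entire
# 312-vote margin, what non-citizen adult population does that imply?
franken_margin = 312     # certified margin, per the passage above
turnout_share = 0.0065   # 0.65% participation threshold from the study

implied_noncitizen_adults = franken_margin / turnout_share
print(f"implied non-citizen adult population: ~{implied_noncitizen_adults:,.0f}")
# → roughly 48,000
```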
Some have voted in presidential elections:
Robert J. Higdon, Jr., United States Attorney for the Eastern District of North Carolina today announced that nineteen foreign nationals were charged with, among other crimes, voting by alien for their actions prior to and on November 8, 2016. A twentieth defendant was charged with aiding and abetting a fellow defendant in falsely claiming United States citizenship in order to register to vote.
Many of these involve payment for collecting absentee or mail-in ballots:
The voter fraud charges involve mail-in ballots sent in ahead of the 2016 primary election in Texas. Paxton’s office said the case targeted elderly voters on the north side of Fort Worth.
Paxton’s office said the women harvested votes, by filling out applications for mail-in ballots, with forged signatures. Then they would either “assist” the voter with filling out the ballot, or fill it out themselves, and use deception to get the voter to sign the envelope the ballot would be sent back in.
“The harvesters sit around and fill these out by the hundreds, often by the thousands,” he said Friday.
There are databases of voter fraud:
This happened during the 2008 election too:
In the criminal complaint, Milwaukee County Assistant District Attorney Bruce Landgraf alleged that Latoya Lewis submitted at least two of eight voter registration cards filled out for the same man, who told police he didn’t register to vote through Lewis. The complaint says Lewis told police that she was trying to meet her quota as a paid voter registrar for the Association of Community Organizations for Reform Now.
Lewis is the fourth former Milwaukee registrar to face charges stemming from the 2008 election and the first employed by ACORN. Of three others employed by the Community Voters Project, one has pleaded guilty, one is appearing in court March 23 and one is at large. Others remain under investigation.
This manifested in some dramatic results:
A comprehensive study by the Milwaukee Police Department found a strong possibility existed that there was “an illegal organized attempt to influence the outcome” through voter fraud of the 2004 elections in Wisconsin. The Colorado Secretary of State’s office determined that nearly 5,000 people who were not United States citizens—and therefore according to the law, ineligible to vote—voted in the 2010 U.S. Senate race.
Apparently some resembled organized vote fraud:
Haynes, along with two other SEIU organizers, registered to vote for the Badger State’s April 5, 2011, election using an out-of-state ID and claiming a Marriott hotel in Glendale, Wisconsin, as his residence, Media Trackers reported in October 2011.
SEIU documents obtained by Media Trackers show the union spent $146,000 on the hotel, and Landgraf’s affidavit reports as many as 50 SEIU employees lived in the hotel in late 2010 and early 2011.
“The SEIU has a history of questionable tactics, and some of its top leaders have worked for and on behalf of extremely liberal candidates, including President Obama,” Sikma said in a statement. “What remains to be seen is whether or not this union will take internal action to prevent its employees from committing voter fraud, or whether it will simply look the other way and refuse to halt its questionable and suspect tactics.”
Some say this is systematic:
This report reveals the Left’s vote fraud strategy for the 2012 elections. Like a KGB operation, it is thorough, multi-faceted and redundant. It has overt and covert, illegal and legal elements, the latter of which are designed, at least in part, to facilitate illegal activities later. It is a deliberate, premeditated, comprehensive plan to win the 2012 presidential election at all costs, and is in keeping with the organizational methods, associations and ethics of the Community-Organizer-in-Chief, Barack Obama.
Postal vote fraud is common in other countries as well:
“This affects no less than 573,275 absentee ballots. Of these, 31,814 were pre-screened as invalid. Here, suspicion is more than justified.”
The party also said they had proof of children and foreign citizens voting in the election.
Following a two week hearing, Gerhard Holzinger, head of the Constitutional Court, confirmed today: “The challenge brought by Freedom Party leader Heinz-Christian Strache against the 22 May election has been upheld.”
The investigation found postal ballots were illegally handled in 94 of 117 districts.
Even the Left has warned of this Achilles’ heel:
Nationwide, the use of absentee ballots and other forms of voting by mail has more than tripled since 1980 and now accounts for almost 20 percent of all votes.
In 2008, 18 percent of the votes in the nine states likely to decide this year’s presidential election were cast by mail. That number will almost certainly rise this year, and voters in two-thirds of the states have already begun casting absentee ballots.
The trend will probably result in more uncounted votes, and it increases the potential for fraud. While fraud in voting by mail is far less common than innocent errors, it is vastly more prevalent than the in-person voting fraud that has attracted far more attention, election administrators say.
Members of the NAACP have been sentenced for voter fraud:
Lessadolla Sowers was convicted in the Tunica County Circuit Court of ten counts of voter fraud as a habitual offender. Mississippi Bureau of Investigations officers determined that a significant number of absentee ballots had been mailed to a post office box held in Sowers’s name. She was sentenced to five years in the custody of the Mississippi Department of Corrections for each count, with each sentence ordered to run concurrently with the others.
Some evidence suggests that this is business as usual:
An activist group obtained ballots for several deceased voters during New Hampshire’s primary elections. The group, called Project Veritas, captured the possible voting fraud on camera.
The secretly recorded video shows activists requesting ballots for recently deceased voters. In most cases, they receive the ballot with no questions asked.
The group tried the same stunt again and again and they succeeded at least nine times in using the names of recently deceased voters.
In one study, 13% of illegal aliens stated that they voted:
Up to one in eight of America’s voter registrations are invalid:
Los Angeles County may be California’s worst offender, but 10 of the state’s 58 counties also have registration rates exceeding 100% of the voting age population. In fact, the voter registration rate for the entire state of California is 101%.
And the Golden State isn’t alone. Eight states, as well as the District of Columbia, have total voter registration tallies exceeding 100%, and in total, 38 states have counties where voter registration rates exceed 100%. Another state that stands out is Kentucky, where the voter registration rate in 48 of its 120 counties exceeded 100% last year. About 15% of America’s counties where there is reliable voter data – that is, over 400 counties out of 2,800 – have voter registration rates over 100%.
This echoes a 2012 Pew study that found that 24 million voter registrations in the United States, about one out of every eight, are “no longer valid or are significantly inaccurate” – a number greater than the current population of Florida or New York state.
Some studies have found that vote fraud, especially by illegal aliens, is tipping elections:
We find that some non-citizens participate in U.S. elections, and that this participation has been large enough to change meaningful election outcomes including Electoral College votes, and Congressional elections. Non-citizen votes likely gave Senate Democrats the pivotal 60th vote needed to overcome filibusters in order to pass health care reform and other Obama administration priorities in the 111th Congress.
These may have swayed elections:
As many as 5.7 million noncitizens may have voted in the 2008 election, which put Barack Obama in the White House.
The ODU professors, who stand by their work in the face of attacks from the left, concluded that in 2008 as few as 38,000 and as many as 2.8 million noncitizens voted.
Mr. Agresti’s analysis of the same polling data settled on much higher numbers. He estimated that as many as 7.9 million noncitizens were illegally registered that year and 594,000 to 5.7 million voted.
States are obstructing attempts to discover voter fraud:
A June 2017 letter addressed to states from the commission asked for “publicly available voter roll data.” However, the letter also requested a lot of specific details, including: parts of Social Security numbers, dates of birth, addresses and information regarding felony convictions or military status.
The commission was also hit with a bevy of legal obstacles by organizations and state lawmakers regarding its practices.
White House press secretary Sarah Sanders said Trump signed an executive order to dismantle the commission “rather than engage in endless legal battles at taxpayer expense.”
Several secretaries of state, Republican and Democrat, bucked a request for sensitive data by the commission or said they would only provide limited data.
The same was found in the previous election:
Based on the data, Richman estimated that 6.4 percent of non-citizens voted in the 2008 presidential election, which would translate to around 1.2 million votes. According to Richman, about 80 percent of these non-citizens voted for Barack Obama against John McCain.
When corrected, this problem changes the outcome of elections:
Many politicians are taking firm stances on issues affecting migrants, including amnesty, entitlements and sanctuary city policies. This gives noncitizens a significant incentive to register as voters and cast a ballot. For example, in East Chicago, Indiana, a city with 30,000 residents, voting fraud was so systemic in 2003 that the State Supreme Court ordered a new election with heightened verification. When unlawful voters were prohibited from casting a ballot the outcome of the election changed.
Registering to vote is commonplace among noncitizens:
But buried in the back of the survey on page 68 is a ‘Voter Profile’ that reveals that 13 percent of noncitizen respondents admitted they were registered to vote (a violation of state and federal law), which matches closely the Old Dominion/George Mason study finding that 14.8 percent of noncitizens admitted they were registered to vote in 2008 and 15.6 percent of noncitizens admitted they were registered in 2010.
Some recent voter fraud comedy:
Democratic elections determine our future. Thus, we should make sure that those who are voting are those who have a vested interest in the future of our nation. This means they should be citizens in decent standing, not illegal immigrants, criminals, illiterates, winos, etc., and should possibly have a flat tax rate imposed so that we do not have a “47%” who vote without paying into the system.
In January 1964, Johnson declared “unconditional war on poverty in America.” Since then, the taxpayers have spent $22 trillion on Johnson’s war. Adjusted for inflation, that’s three times the cost of all military wars since the American Revolution.
Last year, government spent $943 billion dollars providing cash, food, housing and medical care to poor and low-income Americans. (That figure doesn’t include Social Security or Medicare.) More than 100 million people, or one-third of Americans, received some type of welfare aid, at an average cost of $9,000 per recipient. If converted into cash, this spending was five times what was needed to eliminate all poverty in the U.S.
The U.S. Census Bureau has just released its annual poverty report. The report claims that in 2013, 14.5 percent of Americans were poor. Remarkably, that’s almost the same poverty rate as in 1967, three years after the War on Poverty started.
Evidence came out thirty years later suggesting that it achieved nothing:
Ordinary people have suspected that for decades, of course, but we had to wait for the New York Times to decide this news was fit to print—which it finally did on February 9, 1998. In a front-page story on poverty in rural Kentucky, Michael Janofsky detailed the failure of this effort in the one region that was supposed to be the centerpiece of reform. “Federal and state agencies have plowed billions of dollars into Appalachia,” he wrote, yet the area “looks much as it did 30 years ago, when President Lyndon B. Johnson declared a war on poverty, taking special aim at the rural decay.”
Most attempts to justify it are statistical fictions:
Desperate to spin the disastrous War on Poverty as a success, progressives have tried to divert our attention from America’s growing underclass by pointing to the large decline in the Official Poverty Measure (OPM, which includes cash transfer payments) for senior citizens. The OPM for Americans age 65 and above fell from about 30% in 1967 to about 9% in 2012.
Not so fast, progressives. It is not clear that the OPM for seniors would be higher today if the War on Poverty had never been mounted.
Because the War on Poverty made Social Security benefits more generous, and also created Medicare, it produced an instantaneous reduction in the OPM for senior citizens. And, obviously, if Social Security and Medicare were terminated tomorrow, the OPM for senior citizens would rise.
Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich and Democratic (WEIRD) societies. Researchers – often implicitly – assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population.
The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior – hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation.
This creates an information field distortion:
A recent survey by Arnett (2008) of the top journals in six sub-disciplines of psychology revealed that 68% of subjects were from the US and fully 96% from ‘Western’ industrialized nations (European, North American, Australian or Israeli). That works out to a 96% concentration on 12% of the world’s population (Henrich et al. 2010: 63). Or, to put it another way, you’re 4000 times more likely to be studied by a psychologist if you’re a university undergraduate at a Western university than a randomly selected individual strolling around outside the ivory tower.
I worry that W.E.I.R.D. classification flatters the WEIRD, focusing on traits that Westerners typically highlight to describe themselves in ways that are, however inadvertently, pretty self-congratulatory.
This causes us to falsely assume human universalism based on a small model:
When these affluent American and non-Western populations are compared there are important differences in domains as seemingly unrelated as visual perception, fairness, cooperation, spatial reasoning, moral reasoning, reasoning styles, and even the heritability of IQ. In all cases American undergraduates didn’t simply differ, they differed substantially. Nevertheless, they form the basis of most researchers’ assumptions about human nature even though, as Henrich and colleagues conclude, “this particular subpopulation is highly unrepresentative of the species.”
This in turn projects cultural bias onto the world, and gives false results for human standards:
WEIRD subjects (perhaps you were one?) are still human, of course, so you might think that what’s generalizable to them must be generalizable to the rest of humanity. But in fact, that’s not the case. WEIRD subjects, from countries that represent only about 12 percent of the world’s population, differ from other populations in moral decision making, reasoning style, fairness, even things like visual perception. This is because a lot of these behaviors and perceptions are based on the environments and contexts in which we grew up. There’s a big dose of sociology in our psychology.
This causes us confusion in understanding cultural differences as well:
A modern liberal arts education gives lots of lip service to the idea of cultural diversity. It’s generally agreed that all of us see the world in ways that are sometimes socially and culturally constructed, that pluralism is good, and that ethnocentrism is bad. But beyond that the ideas get muddy. That we should welcome and celebrate people of all backgrounds seems obvious, but the implied corollary—that people from different ethno-cultural origins have particular attributes that add spice to the body politic—becomes more problematic.
Economists and psychologists, for their part, did an end run around the issue with the convenient assumption that their job was to study the human mind stripped of culture. The human brain is genetically comparable around the globe, it was agreed, so human hardwiring for much behavior, perception, and cognition should be similarly universal. No need, in that case, to look beyond the convenient population of undergraduates for test subjects. A 2008 survey of the top six psychology journals dramatically shows how common that assumption was: more than 96 percent of the subjects tested in psychological studies from 2003 to 2007 were Westerners—with nearly 70 percent from the United States alone. Put another way: 96 percent of human subjects in these studies came from countries that represent only 12 percent of the world’s population.
This leads to fundamental differences in how we perceive laws, ideals, and objectives:
The WEIRDer you are, the more you perceive a world full of separate objects, rather than relationships, and the more you use an analytical thinking style, focusing on categories and laws, rather than a holistic style, focusing on patterns and contexts.
Morality is like The Matrix, from the movie “The Matrix.” Morality is a consensual hallucination, and when you read the WEIRD people article, it’s like taking the red pill. You see, oh my God, I am in one particular matrix. But there are lots and lots of other matrices out there.
Larry Sanger, a founder of the project, notes that it has become permanently biased:
Wikipedia’s “NPOV” is dead. The original policy long since forgotten, Wikipedia no longer has an effective neutrality policy. There is a rewritten policy, but it endorses the utterly bankrupt canard that journalists should avoid what they call “false balance.” The notion that we should avoid “false balance” is directly contradictory to the original neutrality policy. As a result, even as journalists turn to opinion and activism, Wikipedia now touts controversial points of view on politics, religion, and science.
Wikipedia can be counted on to cover not just political figures, but political issues as well from a liberal-left point of view. No conservative would write, in an abortion article, “When properly done, abortion is one of the safest procedures in medicine,” a claim that is questionable on its face, considering what an invasive, psychologically distressing, and sometimes lengthy procedure it can be even when done according to modern medical practices. More to the point, abortion opponents consider the fetus to be a human being with rights; their view, that it is not safe for the baby, is utterly ignored. To pick another, random issue, drug legalization, dubbed drug liberalization by Wikipedia, has only a little information about any potential hazards of drug legalization policies; it mostly serves as a brief for legalization, followed by a catalog of drug policies worldwide.
It is time for Wikipedia to come clean and admit that it has abandoned NPOV (i.e., neutrality as a policy). At the very least they should admit that they have redefined the term in a way that makes it utterly incompatible with its original notion of neutrality, which is the ordinary and common one. It might be better to embrace a “credibility” policy and admit that their notion of what is credible does, in fact, bias them against conservatism, traditional religiosity, and minority perspectives on science and medicine—to say nothing of many other topics on which Wikipedia has biases.
Conservatives have widely criticized the pro-Leftist bias at Wikipedia:
A cabal of editors exists who work together to bias articles and to hide embarrassing facts about left-wing political figures, while at the same time smearing conservatives. Scandals involving left-wing personalities are labeled “conspiracy theories”, for example, Spygate (conspiracy theory by Donald Trump) or “controversies” such as IRS targeting controversy or Hillary Clinton email controversy.
Biased editors’ standard tactics include claiming conservative-leaning sources as being unreliable (non-RS). This also applies to centrist sources that are simply being truthful. The best way to observe Wikipedia is by reading an article’s Talk page. One can see editors disparaging sources contrary to the mainstream media talking points. You will see scandals involving left-wing figures are typically dismissed as conspiracy theories. You will also see many derogatory comments about conservative figures, especially Donald Trump and Republican congressmen and senators. Editors who fight for balanced coverage eventually get permanently blocked.
Although Wales “made his original fortune as a pornography trafficker”, he has since tried to clean up his image and demands retractions when people report this fact.
It does this through political correctness, or enforcing Leftist language (and therefore, per Sapir-Whorf, Leftist bias) by removing non-Leftist language:
In 2016, researchers at the University of Koblenz-Landau in Germany found that the language of a Wikipedia entry influences the sources used, therefore offering a different version of the truth. In an English-language article about Russia’s annexation of Crimea, for example, 24% of sources were Ukrainian and 20% Russian.
A 2011 research paper that analysed almost 30,000 Wikipedia entries about US politics found the website to be slightly biased towards the Democrats. The study found that entries were more likely to use politically charged Democratic phrases (terms like “civil rights”) than Republican ones (“illegal immigration”).
Looking at US gubernatorial elections since 1978, the author found that Wikipedia’s coverage of elections held before 2001 (the year Wikipedia was created) was scant.
It is policed by a small number of people who lean Left:
Wikipedia feels like a service that’s magically going to exist forever. But while it’s doing well financially, it’s bleeding away users. “The number of ‘very active’ editors (defined as more than 100 edits in a month) dropped from nearly 5,000 to about 3,200 between 2006 and 2014. The number of ‘active’ users (at least five edits per month) dropped from 50,000 to 30,000. One of my friends dropped out of editing several times because things were too contentious. Admin statistics are worse. We only have 551 active admins. We’re not replacing admins at a fast enough rate. [The process for naming admins] is horrible and broken. It was not an experience I’d wish on anyone else.”
It tends to allow defamation by including dubious “facts”:
Our efforts to correct the site have been rejected by the editors of the self-described “free encyclopedia.” For instance, supporters of Heartland will be surprised to learn that we “worked with the tobacco company Philip Morris to question or deny the health risks of secondhand smoke and to lobby against smoking bans,” that we “support climate change denial,” or that our decision to spin off our work on finance and insurance into the R Street Institute is characterized as the “resignation of almost the entire Heartland Washington D.C. office, taking the Institute’s biggest project (on insurance) with it.”
These are simply lies. The editors of Wikipedia refuse to remove these libelous claims — and allow them to proliferate — because they damage our reputation and effectiveness in the most important public policy debates facing the nation.
Wikipedia encourages political bias through its inner cabal of editors:
But new research shows that the average Wikipedia article is more politically biased than its Britannica counterpart. That sounds like an indictment of crowdsourcing, but on closer inspection it instead reveals what makes the crowd really work.
In a working paper (Ed.: now published) released last month, Shane Greenstein of Kellogg and Feng Zhu of Harvard Business School measured the political bias of Wikipedia and Britannica by counting the number of politically charged words in pairs of articles. Previous research has demonstrated that political partisans use different language. In the U.S., Republicans are more likely to use terms like “illegal immigration” and “border security.” Democrats are more likely to use “war in Iraq”, “civil rights”, or “trade deficit”. These word choices predict the speaker’s ideological slant.
The group blog actively deletes accurate information that contradicts the narrative:
I was surprised to read on Wikipedia that Oreskes’s work had been vindicated and that, for instance, one of her most thorough critics, British scientist and publisher Benny Peiser, not only had been discredited but had grudgingly conceded Oreskes was right.
I checked with Peiser, who said he had done no such thing. I then corrected the Wikipedia entry, and advised Peiser that I had done so.
Peiser wrote back saying he couldn’t see my corrections on the Wikipedia page. I made the changes again, and this time confirmed that the changes had been saved. But then, in a twinkle, they were gone again. I made other changes. And others. They all disappeared shortly after they were made.
Wikipedia deletes non-narrative information in order to enforce political bias:
As my NewsBusters colleague P.J. Gladnick has documented, the online encyclopedia blocked all mention of allegations that former Democratic presidential candidate John Edwards had conducted an extramarital affair.
It tends to focus on criticism of conservatives but not Leftists:
Consider Ann Coulter versus Michael Moore. Coulter’s entry (on August 9, 2011) was 9028 words long. Of this longer-than-usual entry, 3220 words were devoted to “Controversies and criticism” in which a series of incidents involving Coulter and quotes from her are cited with accompanying condemnations, primarily from her opponents on the Left. That’s 35.6 percent of Coulter’s entry devoted to making her look bad. By contrast, Moore’s entry is 2876 words (the more standard length for entries on political commentators), with 130 devoted to “Controversy.” That’s 4.5% of the word count, a fraction of Coulter’s. Does this mean that an “unbiased” commentator would find Coulter eight times as “controversial” as Moore?
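The “eight times” figure follows directly from the word counts in the passage. A quick sanity check of the arithmetic (the figures come from the quote above; the variable names are mine):

```python
# Word counts cited in the passage (entries as of August 9, 2011).
coulter_total, coulter_criticism = 9028, 3220
moore_total, moore_criticism = 2876, 130

# Share of each entry devoted to criticism sections.
coulter_share = coulter_criticism / coulter_total  # about 35.7%
moore_share = moore_criticism / moore_total        # about 4.5%

# Coulter's criticism share works out to roughly eight times Moore's.
ratio = coulter_share / moore_share
print(f"{coulter_share:.1%} vs {moore_share:.1%}, ratio {ratio:.1f}x")
```

The small discrepancy with the quoted 35.6 percent is rounding: 3220/9028 is 35.67 percent.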
This enables FAANG companies to report Leftist content as fact by over-emphasizing it:
But there is another major reason that a lot of people care about Wikipedia, whether they participate themselves in it or not, and why there are many critics concerned about the increasingly widespread role of the site. Because of its popularity and also because of its interconnected network of links, Wikipedia articles tend to score extremely high on Google and other Internet searches. In particular, if one searches on an individual’s name, his or her Wikipedia article will generally be among the top group of Google hits — much of the time the very first one. This has implications that are quite significant and in many instances troubling, which I will be discussing over the next couple of days.
This creates an unreliable resource in both search engines and media:
Whether through vandalism, subtle disinformation, or the prolonged battling over biased accounts, many of Wikipedia’s articles are unsuitable for scholarly use. Because of poor standards of sourcing and citation, it is often difficult to determine the origin of statements made in Wikipedia in order to determine their correctness. Pursuit of biased points of view by powerful administrators is considered a particular problem, as opposing voices are often permanently banned from Wikipedia. Wikipedia’s culture of disrespect for expertise and scholarship (see below) makes it difficult to trust anything there.
Wikipedia specifically disregards authors with special knowledge, expertise, or credentials. There is no way for a real scholar to distinguish himself or herself from a random anonymous editor merely claiming scholarly credentials, and thus no claim of credentials is typically believed. Even when credentials are accepted, Wikipedia affords no special regard for expert editors contributing in their fields. This has driven most expert editors away from editing Wikipedia in their fields. Similarly, Wikipedia implements no controls that distinguish mature and educated editors from immature and uneducated ones.
Wind turbine noise causes infrasound noise pollution:
There is evidence that infrasound has a physiological effect on the ear. Until this effect is fully understood, it is impossible to conclude that wind turbine noise does not cause any of the symptoms described. However, many believe that these symptoms are related largely to the stress caused by unwanted noise exposure.
Noise pollution from wind turbines is linked to wind turbine syndrome:
In December 2011, in a peer-reviewed report in the Bulletin of Science, Technology & Society, Dr Carl Phillips – one of the U.S.’s most distinguished epidemiologists – concluded that there is ‘overwhelming evidence that wind turbines cause serious health problems in nearby residents, usually stress-disorder type diseases, at a nontrivial rate’.
According to a study by U.S. noise control engineer Rick James, wind farms generate the same symptoms as Sick Building Syndrome – the condition that plagued office workers in the Eighties and Nineties as a result of what was eventually discovered to be the Low Frequency Noise (LFN), caused by misaligned air conditioning systems.
The combination of LFN and ‘amplitude modulation’ (loudness that goes up and down) leads to fatigue, poor concentration and dizziness.
Infrasound, or low frequency undulating noise, causes the disorder:
The name was coined by Nina Pierpont, a Johns Hopkins University-trained pediatrician, whose husband is an anti-wind activist, criticizing the economics and physics of wind power. Pierpont, who lives in upstate New York, calls wind turbine syndrome the green energy industry’s “dirty little secret.” She self-published “Wind Turbine Syndrome” in 2009, including case studies of people who lived within 1.25 miles of these “spinning giants” who reportedly got sick.
More than 45 Falmouth residents have complained to the town’s Board of Selectmen, which curtailed the hours of its two turbines at night. The board said it’s the pressure of infrasound — sounds with frequencies below 20 Hz, which are at the low end of human audibility.
Rauch said he consulted with Pierpont and Alec Salt, an otolaryngology specialist at the Cochlear Fluids Research Laboratory at Washington University in St. Louis who suggests the level of infrasound generated by a wind turbine one mile away could be harmful.
Noise pollution is known to cause inflammation:
High decibel levels from road traffic and airplanes, for example, have been linked to high blood pressure, coronary artery disease, stroke and heart failure — even after controlling for other factors like air pollution and socioeconomic status.
In studies, nighttime noise has been linked to an increase in blood pressure — even if people didn’t wake up or realize their sleep had been disrupted. “One can close his eyes but not his ears,” Munzel said. “Our body will always react with a stress reaction.”
And other symptoms of wind turbine syndrome, many of which are inflammation-related like blood pressure:
In addition to causing hearing loss, excessive noise exposure can also raise blood pressure and pulse rates, cause irritability, anxiety, and mental fatigue, and interfere with sleep, recreation, and personal communication.
And inflammation correlates with the growth of cancers:
However, while the genetic changes that occur within cancer cells themselves, such as activated oncogenes or dysfunctional tumor suppressors, are responsible for many aspects of cancer development, they are not sufficient. Tumor promotion and progression are dependent on ancillary processes provided by cells of the tumor environment but that are not necessarily cancerous themselves. Inflammation has long been associated with the development of cancer.
High blood pressure is also linked to cancer:
Persistent high blood pressure can increase your risk of a number of serious conditions including vascular dementia, kidney disease and aortic aneurysms.
In September 2011, experts said raised blood pressure is linked to a higher risk of developing cancer or dying from the disease.
It revealed higher than normal blood pressure was associated with a ten to 20 per cent higher risk of developing cancer in men.
This seems to be the study in question.
Wind Turbines -> Noise pollution -> Inflammation -> Cancer.
Modern Europeans emerged in the Bronze Age, after a large wave of migration in the third millennium BC by a nomadic people known as the Yamnaya, who came from the Russian steppe.
They have already shown that modern Europeans share the genetic components of the early hunters but with the arrival of farming culture about 8500 years ago, there was a mixing with new genetic components. This shows up as a genetic difference between southern and northern Europe.
Neolithic people (4000-1700 BC) resemble us more but there is still something missing, and last year it became clear to scientists that there must have been a third wave of migration.
Some believe that they were a race of violent conquerors:
The migrants’ ultimate source was a group of livestock herders called the Yamnaya who occupied the Eurasian steppe north of the Black Sea and the Caucasus mountains. Britain wasn’t their only destination. Between 5000 and 4000 years ago, the Yamnaya and their descendants colonised swathes of Europe, leaving a genetic legacy that persists to this day. Their arrival coincided with profound social and cultural changes. Burial practices shifted dramatically, a warrior class appeared, and there seems to have been a sharp upsurge in lethal violence. “I’ve become increasingly convinced there must have been a kind of genocide,” says Kristian Kristiansen at the University of Gothenburg, Sweden.
They brought Indo-European languages with them:
A mysterious group of humans from the east stormed western Europe 4,500 years ago — bringing with them technologies such as the wheel, as well as a language that is the forebear of many modern tongues, suggests one of the largest studies of ancient DNA yet conducted. Vestiges of these eastern émigrés exist in the genomes of nearly all contemporary Europeans, according to the authors, who analysed genome data from nearly 100 ancient Europeans.
The first Homo sapiens to colonize Europe were hunter-gatherers who arrived from Africa, by way of the Middle East, around 45,000 years ago. (Neanderthals and other archaic human species had begun roaming the continent much earlier.) Archaeology and ancient DNA suggest that farmers from the Middle East started streaming in around 8,000 years ago, replacing the hunter-gatherers in some areas and mixing with them in others.
But last year, a study of the genomes of ancient and contemporary Europeans found echoes not only of these two waves from the Middle East, but also of an enigmatic third group that they said could be from farther east.
They left a strong impression on Western Europe:
The research team, led by David Reich of Harvard Medical School, discovered that the DNA of the Yamnaya, 5,000-year-old steppe herders in western Russia, was a close match for 4,500-year-old individuals from Germany’s Corded Ware culture. Contemporary northern Europeans, including Norwegians, Scots, and Lithuanians maintain the strongest genetic link to the Yamnaya, but Reich’s team says it’s possible that the Yamnaya completely replaced populations in what is now Germany.
This group was nomadic:
Both studies indicate that today’s Europeans descend from three groups who moved into Europe at different stages of history.
The first were hunter-gatherers who arrived some 45,000 years ago in Europe. Then came farmers who arrived from the Near East about 8,000 years ago.
Finally, a group of nomadic sheepherders from western Russia called the Yamnaya arrived about 4,500 years ago. The authors of the new studies also suggest that the Yamnaya language may have given rise to many of the languages spoken in Europe today.
Archaeologists have long been fascinated by the Yamnaya, who left behind artifacts on the steppes of western Russia and Ukraine dating from 5,300 to 4,600 years ago. The Yamnaya used horses to manage huge herds of sheep, and followed their livestock across the steppes with wagons full of food and water.
This popular trade history book, Howard Zinn’s “A People’s History of the United States,” omits some crucial details:
Who is the most influential historian in America? Could it be Pulitzer Prize winners Arthur Schlesinger, Jr. or Joseph Ellis or David McCullough, whose scholarly works have reached a broad literary public? The answer is none of the above. The accolade belongs instead to the unreconstructed, anti-American Marxist Howard Zinn, whose cartoon anti-history of the United States is still selling 128,000 copies a year twenty years after its original publication. Many of those copies are assigned readings for courses in colleges and high schools taught by leftist disciples of their radical mentor.
Through Zinn’s looking-glass, Maoist China, site of history’s bloodiest state-sponsored killings, becomes “the closest thing, in the long history of that ancient country, to a people’s government, independent of outside control.” The authoritarian Nicaraguan Sandinistas were “welcomed” by their own people, while the opposition Contras, who backed the candidate that triumphed when free elections were finally held, were a “terrorist group” that “seemed to have no popular support inside Nicaragua.” Castro’s Cuba, readers learn, “had no bloody record of suppression.”
According to Zinn, it was Mumia Abu-Jamal’s “race and radicalism,” as well as his “persistent criticism of the Philadelphia police” that landed him on death row in the early 1980s. Nothing about Abu-Jamal’s gun being found at the scene; nothing about the testimony of numerous witnesses pointing to him as the triggerman; nothing about additional witnesses reporting a confession by Abu-Jamal—it was Abu-Jamal’s dissenting voice that caused a jury of twelve to unanimously sentence him to death.
Its author belonged to the movement behind the deadliest regimes that humanity has known:
Zinn admitted membership in numerous Communist fronts, including the American Veterans Committee and the American Labor Party, which employed Zinn at its headquarters in Brooklyn at a time when Communists controlled it. But he steadfastly denied membership in the Communist Party itself.
Several Communist Party members said otherwise. The files paraphrase one informant’s conversation with Zinn in 1948 as the future historian traveled from a protest outside the Truman White House to a Brooklyn rally for presidential candidate Henry Wallace. According to the informant, “Zinn indicated that he is a member of the Communist Party and that he attends Party meetings five nights a week in Brooklyn.” The files summarize how another informant believed that Zinn was “selected as a delegate to the New York State Communist Party Convention.” The Zinn that emerges from the files manned picket lines, religiously attended almost daily party meetings, and collected subscriptions for The Daily Worker.
Perhaps it is unsurprising, then, that the FBI files also note “a photograph of Zinn taken in about 1951 which showed him instructing a class in Basic Marxism at the Twelfth Assembly District, CP Headquarters, Brooklyn, New York.” Were Stalin-era Communists in the habit of inviting “liberals” to teach them about Marxism?
It promotes a socialist narrative in the guise of representing history:
Key events were omitted. The mass slaughter that followed the Communist takeover of Cambodia? Good luck finding it in “A People’s History.” Like his fellow Communist historians in Moscow, Zinn conveniently “disappeared” the more than 2 million murdered by Cambodia’s Communist dictator Pol Pot.
Zinn was a member of numerous Soviet front groups, and he helped found the socialist New Party, which helped Barack Obama launch his political career. Zinn mentored a young neighbor in Connecticut, Matt Damon, who went on to be a movie star, and who plugged “A People’s History” in his film “Good Will Hunting.”
As for fake history, “Zinn did everything — misrepresented sources, omitted critical information, falsified evidence, and plagiarized,” Ms. Grabar writes. “Zinn liked to pretend [that] he wrote a ‘people’s’ history, telling the bottom-up story of neglected and forgotten men and women. The problem is that he falsified American history to promote Communist revolution. … all the while denying that he was a Communist.”
This narrative has lots of problems:
Much of the criticism of Zinn has come from dissenters on the left. Arthur M. Schlesinger Jr. once remarked that “I don’t take him very seriously. He’s a polemicist, not a historian.” Last year, the liberal historian Sean Wilentz referred to the “balefully influential works of Howard Zinn.” Reviewing A People’s History in The American Scholar, Harvard University professor Oscar Handlin denounced “the deranged quality of his fairy tale, in which the incidents are made to fit the legend, no matter how intractable the evidence of American history.” Socialist historian Michael Kazin judged Zinn’s most famous work “bad history, albeit gilded with virtuous intentions.”
Just how poor is Zinn’s history? After hearing of his death, I opened one of his books to a random page (Failure to Quit, p. 118) and was informed that there was “no evidence” that Muammar Qaddafi’s Libya was behind the 1986 bombing of La Belle Discotheque in Berlin. Whatever one thinks of the Reagan administration’s response, it is flat wrong, bordering on dishonest, to argue that the plot wasn’t masterminded in Tripoli. Nor is it correct to write that the American government, which funded the Afghan mujahideen in the 1980s, “train[ed] Osama bin Laden,” a myth conclusively debunked by Washington Post correspondent Steve Coll in his Pulitzer Prize-winning book Ghost Wars.
Of Cuba, the reader of A People’s History is told that upon taking power, “Castro moved to set up a nationwide system of education, of housing, of land distribution to landless peasants.” Castro’s vast network of gulags and the spasm of “revolutionary justice” that sent thousands to prison or the executioner’s wall is left unmentioned. This is unsurprising, I suppose, when one considers that Zinn recently told an interviewer “you have to admire Cuba for being undaunted by this colossus of the North and holding fast to its ideals and to Socialism….Cuba is one of those places in the world where we can see hope for the future. With its very meager resources Cuba gives free health care and free education to everybody. Cuba supports culture, supports dance and music and theatre.”
There are questions about the methods that Zinn uses to source his history:
Not only does Zinn put a far-left spin on events in American history, but he uses illegitimate sources (ideological New Left historians, a socialist novelist, a Holocaust-denying historian), plagiarizes, misrepresents authors’ words, leaves out critical information, and presents outright lies.
It is used extensively:
The only clear way in my mind to do this was to google Zinn’s book (Bing link) along with the word “Syllabus.” I stopped looking at Google’s results after 8 pages, but I scanned forward to page 27 of the search results and still found plenty of colleges and universities using Zinn’s book in some form. Some are old syllabi and some are for ethnic classes and such, but frankly, I was surprised. Zinn’s book has no business being used in a serious way at any level (I was also surprised by the number of AP History classes in high schools that use it) unless it is used to show how history should not be written.
Including by high schools:
Several years ago, the Ann Arbor, Mich., public schools faced complaints from the parents of minority students that the American history curriculum was alienating their children. At a meeting of the district’s social-studies department chairs, the superintendent thought that he had discovered the cure for the divisions plaguing the school system. Holding up a copy of “A People’s History,” he asked, “How many of you have heard of Howard Zinn?” The chairwoman of the social studies department at the district’s largest school responded, “Oh, we’re already using that.”
Many teachers follow its interpretation of history:
To date, 80,000 teachers have signed up to access the free people’s history lessons and nearly 10,000 more teachers sign up every year. The majority of educators who download the lessons are middle and high school social studies teachers in public, public charter, parochial, private, and home schools. In addition, there are teachers in other subject areas, librarians, administrators, and other school staff.
This has produced a backlash by many who point out that the work is propaganda, not history:
“A People’s History” was being employed by the state of Indiana as a textbook for summer courses for teachers to earn “professional development credit.”
Removing it is not censorship; it is, rather, setting responsible academic standards — much like using Darwin and not the Bible in science class.
You might enjoy visiting the free speech Ask A Conservative at Ruqqus.