Immigration at the U.S. southern border from Latin America, especially the Central American countries south of Mexico, exploded after Reagan’s ill-conceived intervention in the region’s politics. His decision, heavily influenced by the CIA, provided guns and money to right-wing militias in order to prevent legally elected leftist leaders from reforming the land policies and economies of those nations.
For example, his Iran-Contra deal illegally sold guns to Iran, with the profits channeled to finance Central American right-wing militias. During that same period, the CIA allegedly imported cocaine into the U.S. to raise money for the militias. The result was a bloodbath of local people who only wanted their land back from multinational corporations and a few wealthy despots.[1],[2]
These policies and the resulting disruption brought floods of Latin American immigrants to the U.S. as refugees. Salvadoran refugees in Los Angeles were subsequently preyed upon by established local gangs, which led some of them to form a gang of their own to protect their community. That gang became MS-13.
This is but one example of how U.S. foreign policy lies at the heart of our immigration troubles.
In an ideal world:
The U.S. President and Congress would agree to appoint a bipartisan or nonpartisan commission of policy experts to develop an entirely new immigration policy with a six-month deadline. This would replace the tangled and incomprehensible patchwork of laws currently on the books. Both the president and Congress would agree beforehand to implement the recommended policies as law within two months of the commission’s conclusion.
A separate nonpartisan commission, also with a six-month deadline, would draw up recommendations on foreign policy changes to address the root causes of immigration from afflicted countries. U.S. resources currently earmarked for immigration extremes such as housing detainees and/or a ‘wall’ would be diverted to provide those nations with aid for education, U.N. observers over law enforcement and judicial processes, and humanitarian relief.
Congress would create a five-member bipartisan committee to develop FACTS about immigration (pro and con) and mount a public education campaign to dispense those facts to the American people.
The president would encourage state and local governments to host forums where citizens could present ideas and concerns about immigration. This input would be channeled to the commission for consideration. This is not so much to expand commission information, although it is that, but mostly to engage the public as a force for proactive change.
During the commission’s study period, the president would direct an immediate suspension of I.C.E. activities regarding current U.S. residents who may be undocumented.
Sadly, there’s not currently a president or Congress capable of such action.
What’s missing from the debate about our borders? The reason why.
People don’t just pick up and leave their ancestral homes and extended families without a good reason. In so doing, they face a dangerous and expensive journey in search of a new home. Yet despite the risks and hardship, these folks feel they have no choice.
What we hear is news about brown-skinned folks mobbing our borders, crossing rivers and sneaking into the promised land. We see them standing in lines, tear-stained kids’ faces, our media swamped with shouting heads about illegal immigration. Build a wall! Trump yells.
What does any of that do to solve the problem?
Nothing.
The problem is ours. It is we who have caused this, maybe not as individuals, but as part of a Western culture willing to overrun and exploit anyone weaker in order to enrich itself.
As reported on the PBS NewsHour last night,[1] most of the current surge of immigration comes from three nations: Guatemala, El Salvador, and Honduras. These are collectively among the most violent, poverty-stricken areas of the Americas. To fully understand the terrible state of affairs in these countries, one must go back several centuries to the Spanish conquest, when everything of value was stolen from the people. Since then, land ownership by rich plantation owners and all-powerful foreign corporations has removed people from their traditional way of life and left them with nothing but poorly paid jobs, if that.
The role of the United States intensified during the 20th century as socialist ideals filtered into Latin America. People embraced the idea of taking back the land from foreign interests and the wealthy power brokers in their country. The U.S. took an active albeit secretive role in destroying such efforts, as described in an article in the May 2016 issue of The Nation:[2]
…the active role Washington played in the “dirty war” in El Salvador in the 1980s, which pitted a right-wing government against Marxist guerrillas. The United States sent military advisers to help the Salvadoran military fight its dirty war, as well as hundreds of millions of dollars in economic and military aid.
The United States went well beyond remaining largely silent in the face of human-rights abuses in El Salvador. The State Department and White House often sought to cover up the brutality, to protect the perpetrators of even the most heinous crimes.
In March of 1980, the much beloved and respected Archbishop Oscar Arnulfo Romero was murdered. A voice for the poor and repressed, Romero, in his final Sunday sermon, had issued a plea to the country’s military junta that rings through the ages: “In the name of God, in the name of this suffering people whose cries rise to heaven more loudly each day, I implore you, I beg you, I order you in the name of God: stop the repression.” The next day, he was cut down by a single bullet while he was saying a private mass…
Eight months after the assassination, a military informant gave the US embassy in El Salvador evidence that it had been plotted by Roberto D’Aubuisson, a charismatic and notorious right-wing leader. D’Aubuisson had presided over a meeting in which soldiers drew lots for the right to kill the archbishop, the informant said. While any number of right-wing death squads might have wanted to kill Romero, only a few, like D’Aubuisson’s, were “fanatical and daring” enough to actually do it, the CIA concluded in a report for the White House.
Yet, D’Aubuisson continued to be welcomed at the US embassy in El Salvador, and when Elliott Abrams, the State Department’s point man on Central America during the Reagan administration, testified before Congress, he said he would not consider D’Aubuisson an extremist. “You would have to be engaged in murder,” Abrams said, before he would call him an extremist.
But D’Aubuisson was engaged in murder, and Washington knew it. (He died of throat cancer in 1992, at the age of 48. Abrams was convicted in 1991 of misleading Congress about the shipment of arms to the anti-Sandinista forces in Nicaragua, the so-called “Iran/Contra” affair. He was pardoned by President George H.W. Bush, later served as special adviser to President George W. Bush on democracy and human rights, and is now a foreign-policy adviser to GOP presidential candidate Ted Cruz.)
Then there was the murder of four churchwomen, three of them nuns. The Nation’s article continues:
No act of barbarism is more emblematic of the deceit that marked Washington’s policy in El Salvador in the 1980s than the sexual assault and murder of four US churchwomen—three Roman Catholic nuns and a lay missionary—in December 1980, a month after Ronald Reagan was elected president.
The American ambassador, Robert White, who had been appointed by President Jimmy Carter, knew immediately that the Salvadoran military was responsible—even if he didn’t have the names of the perpetrators—but that was not what the incoming administration wanted to hear.
One of Reagan’s top foreign-policy advisers, Jeane Kirkpatrick, when asked if she thought the government had been involved, said, “The answer is unequivocal. No, I don’t think the government was responsible.” She then sought to besmirch the women. “The nuns were not just nuns,” she told The Tampa Tribune. “The nuns were also political activists,” with a leftist political coalition (Kirkpatrick died in 2006).
This history of criminal U.S. behavior in El Salvador is but one of many similar episodes across Latin America. Our violent suppression of activists like Che Guevara and other native leaders has occurred time and again. We’ve been unwilling to allow local people to reclaim their lands, which now function largely as an extended plantation for multinational agribusiness.
El Salvador has always been a largely agricultural country, and despite recent shifts agriculture has remained a mainstay of the economy. Conflicts and peasant uprisings over land date back more than four centuries, to the arrival of the Spanish conquistadores. Since the late 19th century, the most fertile lands have been concentrated in the hands of an oligarchy known as las catorce (the original fourteen aristocratic families, since expanded in number) and used to grow coffee for export, forcing small-scale farmers onto marginal-quality lands and making their subsistence increasingly precarious. In the second half of the twentieth century, an alliance of conservative civilians (dominated by las catorce) and military officers ruled the country until the late 1970s.
“A vicious circle was created whereby concentration of land by the wealthy furthered inequality, which led to land degradation and caused conflict that finally escalated into full scale civil war in 1980.” The long civil war decimated the environment, a result of the government’s “’scorched earth’ strategy designed to decimate the insurgency’s base of support in the countryside.” [3]
This destruction resulted in large-scale migration to urban areas which has placed further stress on the country’s delicate ecosystem. A long term result of the war and the ensuing shift in demography has been continuing conflicts over land and the ecological impact of its use near urban areas.
“… the real cause of the civil war in El Salvador is the issue of agrarian reform. The oligarchy tries to prevent it at all cost. The party of the landholding elite has close ties with the death squads…”[4],[5]
Its topsoil depleted, its forests all but gone, its water and air polluted by chemicals, livestock, and human waste, El Salvador is a picture of where we’re headed. It’s the canary in the coal mine, a predictor of Western hemisphere futures where overpopulation, lack of environmental protections, and concentration of land ownership are allowed free rein.
Trump’s eager rallying cry against evil gangs—in particular MS-13—barely skims the surface of the real problems facing El Salvador and, by default, the rest of us.
The Mara Salvatrucha gang originated in Los Angeles, formed in the 1980s in the city’s Pico-Union neighborhood by Salvadoran immigrants who had fled the Central American civil wars.
Originally, the gang’s main purpose was to protect Salvadoran immigrants from other, more established gangs of Los Angeles, who were predominantly composed of Mexicans and African-Americans.[6]
With over 30,000 members internationally and its power concentrated in the so-called ‘Northern Triangle’ of Honduras, Guatemala, and El Salvador, MS-13 is a cautionary tale for us all. But that’s not the full picture for Salvadorans:
The defense ministry has estimated that more than 500,000 Salvadorans are involved with gangs. (This number includes gang members’ relatives and children who have been coerced into crimes.) Turf wars between MS-13, the country’s largest gang, and its chief rivals, two factions of Barrio 18, have exacerbated what is the world’s highest homicide rate for people under the age of 19. In 2016, 540 Salvadoran minors were murdered—an average of 1.5 every day.
While a majority of El Salvador’s homicide victims are young men from poor urban areas, the gangs’ practice of explicitly targeting girls for sexual violence or coerced relationships is well known. Since 2000, the homicide rate for young women in El Salvador has also increased sharply, according to the latest data from the World Health Organization. To refuse the gangs’ demands can mean death for girls and their families.[7]
This explains why the people surging north toward U.S. borders in search of safety are increasingly single young people, especially young women. It also exposes the ignorance and immorality of the Trump administration’s recent decision to no longer accept gang violence as an adequate reason to offer sanctuary to immigrants, and of its plans to reduce foreign aid to El Salvador. As further evidence of the administration’s deaf ear to the very real crisis of the region, it has reduced the immigration quota for people from the Caribbean and Latin America from 5,000 to 1,500.[8]
[4] M. Dufumier, “Réforme agraire au Salvador,” Civilisations, Vol. 35, No. 2, Pour une conscience latino-américaine, préalable à des rapports Sud-Sud, Centre d’étude de l’Amérique latine (Institut de Sociologie de l’Université de Bruxelles), 1985, p. 190. http://www.jstor.org.myaccess.library.utoronto.ca/stable/41229331.
A recent flap in Eureka Springs focused attention on the peculiar activity of ‘yarn bombing.’ This is where people obsessed with knitting/crochet apply their talents to public spaces in the name of art. In Eureka Springs, this includes wrapping tree trunks in complex patterns of colorful yarn.
Yikes!
No wonder a resident who sees these vibrantly adorned trees in the park across from her home allegedly cut the yarn under cover of darkness and freed the trees.
I do understand how the artist(s) who had applied themselves to these difficult tasks would feel hurt that their handiwork was so rudely destroyed. On the other hand, I’m afraid my sympathy in this case lies with the vandal. Parks are, after all, supposed to be places where everyone can enjoy the beauty of nature, set aside in cities where everything else bears the heavy hand of humanity.
Why gild the lily? Aren’t trees fascinating and beautiful enough on their own? I get that creations of yarn probably don’t stand well on their own in a public venue, but then, perhaps that’s a challenge for yarn artists to figure out rather than glomming onto nature’s talents.
And while I’m on the subject, what about those stacked rocks now cropping up in public spaces around the country? Who exactly wants to go to a wilderness like the Buffalo River and come across someone’s self-important effort to be noticed? Wow, man, how cool is that guy, stacking all those rocks so carefully?
Who gets off on venturing to a remote rural stream only to find the tracks of other humans? Isn’t that why we go out to embrace Nature in the first place, to leave behind the streets, noise, neon lights, exhaust fumes, shouts and cries, and all the other overwhelming and increasingly inescapable evidence of human habitation?
It doesn’t matter if the trace of other humans appears in the form of discarded fast food containers, rusting appliances, or yarn art. What matters is that we somehow come to an agreement that common spaces meant to preserve some tiny fragment of Nature remain exactly that—Nature.
Same goes for those Western lands currently contested by ranchers who think they have some kind of God-given right to use public lands. If it’s “public lands,” that means it’s for all of us, NOT for an individual. Federal law needs to change so that no one gets dibs on public lands, which should no more be leased for running cattle than for oil and mineral exploration. These destructive practices permanently mar the landscape, change the entire ecosystem, and leave only the heavy footprints of humans.
No. Just no.
So about those hiking/biking trails through our Northwest Arkansas cities. Why must these be pockmarked with works of art? Are we really so inured to the natural wonders of our environment that we can’t exist without putting our mark everywhere? Artworks are fine in museums, in front of fancy buildings, and even in the public square. Why is it necessary to drape the natural landscape of trail routes with reminders of people?
This is similar to the systematic abuse of music, another art form, by layering it into every single waking moment–television programs and movies, elevators, every store and office, every trip to anywhere. What about extended periods of silence? What about music as a singular amazing rendering of an art form worthy of our attention?
I have nothing against art. But putting art where it doesn’t belong is as much a violation of art as ignoring art entirely. Let’s create art parks where people visit to enjoy visual art. Maybe rotate installations surrounded by tended beds of colorful flowers, fountains, and other contributions from the natural world. But art in such settings is the key feature, not a bit thrown in here and there where people visit for other reasons besides art. Art museums, art galleries, art installations in designated public places—these are venues where art belongs, not superimposed on places meant to be natural.
Maybe this is a discussion to be held in any community. Where does art fit best in our town? Where can we preserve Nature for everyone’s enjoyment? If we’re trying to preserve Nature, can we limit our invasion to the creation of a pathway for us to walk/bike and leave the rest untouched?
Can we not at least agree to set aside places where the ham-fist of humanity is not allowed? Can we not recognize the rights of animals and plants to live undisturbed, at least in a few tiny corners of our world? Can we not pull back our tendencies to claim territory and preserve some fragments of nature amid urban spaces?
That means no yarn bombs on parkland tree trunks—unless that park is a designated art park. It means no cute stacked rocks along river banks and shorelines. We can improve on our cities by the use of Nature, but we cannot improve on Nature with the use of art. If the urge to leave your mark overwhelms you, keep it in your yard.
Once upon a time, people reserved loud outbursts for very special occasions.
HELP!
FIRE!
CHARGE!
In each case, the raised voice with its guttural message alerted anyone within earshot that an emergency required their immediate attention. Or in the case of warfare, now was the time to kill or be killed.
Polite company abhors a loud voice; such a breach of manners is considered the province only of drunkards, boors, or madmen. Like the boy crying wolf, raising our voice serves us when normal communication fails: it calls attention and provokes a fight-or-flight response in those who hear it.
We respond to shouting both physically and emotionally as adrenaline dumps into our system. Our hands may form fists, our jaw clenches, our heart rate accelerates. Psychological studies have shown the negative impact of shouting:
Yelling activates structures in the limbic system that regulate “fight or flight” reactions. Repeated activation to these areas tells the brain that their environment is not safe, thus the interconnecting neurons in these areas must remain intact. …At work, overreacting creates a perceived unsafe environment and can also put others into constant fight or flight mode.[1]
Countless studies and publications warn against shouting at children, spouses, or employees. But why? Here’s an explanation.
The threat response is both mentally taxing and deadly to the productivity of a person — or of an organization. Because this response uses up oxygen and glucose from the blood, they are diverted from other parts of the brain, including the working memory function, which processes new information and ideas. This impairs analytic thinking, creative insight, and problem solving; in other words, just when people most need their sophisticated mental capabilities, the brain’s internal resources are taken away from them.[2]
Most of us realize that shouting is bad form. We also recognize that we don’t like to be the target of shouts. Then why do some of us tolerate shouting on a daily basis?
In the mid-1980s, a certain conservative radio announcer discovered that shouting on air provoked a rewarding response – people listened. Rush Limbaugh had been fired from previous radio jobs but finally found his niche after Congress repealed the Fairness Doctrine.
In 1984, Limbaugh returned to radio as a talk show host at KFBK in Sacramento… The repeal of the Fairness Doctrine—which had required that stations provide free air time for responses to any controversial opinions that were broadcast—by the FCC in 1987 meant stations could broadcast editorial commentary without having to present opposing views. … Rush Limbaugh was the first man to proclaim himself liberated from…liberal media domination.”[3]
It’s no surprise that the media had become, in some views, rife with so-called liberal viewpoints. Journalists are exposed to higher education before qualifying for a media job. Not only do journalists study literature, history, and political science, which paint the broad picture of human suffering, but once hired they are immediately thrust onto the front lines of all the world’s social ills—crime, disease, prejudice, and injustice among them. Through these experiences, many journalists embrace a point of view that can be described as ‘liberal’ – by definition, “tolerant of different views and standards of behavior in others” and “concerned with general cultural matters and broadening of the mind.”
Professional journalists and the media outlets where they work must adhere to professional standards.
Members of the Society of Professional Journalists believe that public enlightenment is the forerunner of justice and the foundation of democracy. The duty of the journalist is to further those ends by seeking truth and providing a fair and comprehensive account of events and issues. Conscientious journalists from all media and specialties strive to serve the public with thoroughness and honesty. Professional integrity is the cornerstone of a journalist’s credibility.[4]
Not so with Rush Limbaugh, a college dropout. His admitted objective in radio is to sway people to a conservative point of view. People not only listened to his bombastic style but became agitated as if whatever was said in this shouting voice carried greater meaning, more importance, and undoubtedly revealed a threat heretofore unnoticed. His attention-grabbing delivery gained purchase among a vulnerable demographic.
The lesson quickly spread to other media, most notably to FOX News, which went on the air in 1996 with commentators who never miss an opportunity to shout. Few of these ‘announcers’ are professional journalists. As noted in a 2017 report in the Washington Post,
With the departure of credible centrist and conservative voices and professional journalists (e.g. Megyn Kelly, Greta Van Susteren, George Will, Major Garrett), the alternative-reality programming seen in the Fox evening and afternoon lineup and on “Fox & Friends” now overwhelms the rest of the operation.[5]
Neither Sean Hannity nor Glenn Beck, both popular FOX News commentators, completed college, and neither is a journalist. Yet their audiences believe these men are delivering unbiased news.
The success of both outlets in hooking rapt viewers didn’t go without notice among other media. Some CNN reporters stepped up to the plate and began shouting as well, in particular Wolf Blitzer who doesn’t seem capable of speaking normally. Thus the current political and social crisis was born.
The Rush Limbaughs of the world use shouting not to intimidate listeners as might a parent, spouse, or employer, but to signal alarm. LISTEN TO ME! I’VE GOT NEWS! Whatever the content of such commentary, it’s not simply information that we can take or leave or interpret in comparison to equal but opposing information. This is life or death information. Dangerous. The context screams EMERGENCY!
Not only are listeners held captive by the threat of such emergencies, they suffer physical and emotional damage that makes them vulnerable to manipulation.
Researchers have long known about the infectious nature of stress… Studies have shown that there is “crossover” stress from one spouse to the other, between coworkers, and “spill over” from the work domain to home. The stress contagion effect, as it’s known, spreads anxiety like a virus. Our mirror neurons help suck us into the emotional eruptions of others. …Emotions are highly contagious, as film directors and fear-mongering propagandists know, especially negative emotions.[6]
Held captive by unconscious physical and emotional response to shouting newscasters, listeners become victims of a kind of Stockholm syndrome, “strong emotional ties that develop between two persons where one person intermittently harasses, beats, threatens, abuses, or intimidates the other.”[7] An urgent need to hear what the shouters say takes over normal intellectual function. There’s an emergency and they’re telling us about it. We have to listen.
No one questions that regular shouting at a spouse is a form of domestic abuse, or that shouting repeatedly at children is a form of child abuse. So why do so many people not question the harmful impact of loud-mouthed media personalities?
What could be a more perfect explanation for the masses of people walking around seemingly without the ability to think rationally about matters of critical importance in our nation’s politics? While liberals may gravitate to quietly spoken news of the day uttered by a calm commentator on the PBS NewsHour, many conservatives seem to require regular doses of shouting. There’s probably a clear connection between being shouted at with its rush of body chemistry and the acceptance of a point of view that seems to solve the problem just described in those shouts.
What any reasoning adult should know is that shouting is a theatrical tactic used to capture the attention of listeners/viewers, a form of bullying meant to hold its beleaguered audience. Sportscasters shout in order to build visceral excitement for whatever game they’re announcing. But why would we want the adrenaline rush of sports when we’re hearing news?
Isn’t ‘news,’ at its most basic, a source of information about important events around the world? About electing those who will steer our nation through challenging times? Do we really want to unquestioningly accept a shouter’s point of view on such critical topics?
Limbaugh, FOX and other conservative shouters groom their audiences by occasionally lowering their voices, providing strokes to calm those just incited by the shouts. “Here, here,” the shouters say. “It’s not so bad. Here’s how to think about this.” And then the prescription is delivered, a calming pill of hate and prejudice, of unthinking narrow-mindedness convinced that any further information is not needed. The audience becomes like other sufferers of Stockholm syndrome, eager to defend their captors, afraid to turn away from the source of their agitation.
~~~
“Don’t raise your voice, improve your argument.” – Desmond Tutu
With trade politics thickening the air like an April snowstorm, you might be intrigued to know a bit about China’s history with exports and imports. As in, we should be very careful.
We all know China’s civilization is one of the world’s oldest with historical records dating back at least to 2000 BCE when the Xia dynasty began. Legendary emperors introduced natural medicines including ephedrine, cannabis, and tea—the latter being the crux of a trade matter that would come back to haunt Westerners today.
More than 3,000 years passed as the country went through various changes in leadership and cultural developments, but here’s the important thing—they stayed within their borders. With all their advancements, they seemed content for all those centuries to keep to themselves.
Just because the Chinese did not use their advancing sophistication in efforts at world conquest did not mean they didn’t trade. Bits of Chinese silk dating to around 1000 BCE have been found in Egypt. The famous Silk Road, established around 200 BCE, carried Chinese silks, herbs and spices, and cultural ideas ranging from Buddhism to the use of horses. Only with the Mongol invasions of the early 13th century were the Chinese forced to deal with outside forces.
Before the Mongols arrived, however, the Song dynasty had presided over what is considered the high point of classical Chinese civilization.
Empress Zheng (1079–1131)
The Song economy, facilitated by technology advancement, had reached a level of sophistication probably unseen in world history before its time. The population soared to over 100 million and the living standards of common people improved tremendously due to improvements in rice cultivation and the wide availability of coal for production. The capital cities of Kaifeng and subsequently Hangzhou were both the most populous cities in the world for their time, and encouraged vibrant civil societies unmatched by previous Chinese dynasties. Although land trading routes to the far west were blocked by nomadic empires, there were extensive maritime trade with neighboring states, which facilitated the use of Song coinage as the de facto currency of exchange. Giant wooden vessels equipped with compasses traveled throughout the China Seas and northern Indian Ocean. The concept of insurance was practiced by merchants to hedge the risks of such long-haul maritime shipments. With prosperous economic activities, the historically first use of paper currency emerged in the western city of Chengdu, as a supplement to the existing copper coins.
The Song dynasty was considered to be the golden age of great advancements in science and technology of China …Inventions such as the hydro-mechanical astronomical clock, the first continuous and endless power-transmitting chain, woodblock printing and paper money were all invented during the Song dynasty.
China’s military strength did eventually lead to imperialistic ambition. By the 1400s, Chinese colonization efforts extended to foreign lands such as Japan and Vietnam. That was small potatoes compared to the Europeans, who had begun far-flung expeditions to virtually every corner of the earth. In fact, by 1500 a strong isolationist fervor had developed in China. When contacted by Western powers such as Portugal in 1520 and the Dutch in 1622, the Chinese vigorously repelled any and all attempts at collaboration.
Meanwhile, the West had begun to thirst for all things exotic, including spices from Indonesia and India and especially tea, silk, and porcelain from China. Despite rich colonial profits from Caribbean sugar and tobacco to American cotton, African ivory, and Mexican silver, imperial appetites were insatiable. Just like today, the West—especially the fanatical tea-drinking British—suffered a terrible trade imbalance with China. China didn’t have much interest in the woolens and other commodities offered in trade by Britain and insisted on payment in silver for its tea. By the late 18th century, Britain faced a monetary crisis over its trade with China.
And here’s where it becomes very instructive as to our current trade situation with China, “we” meaning Americans, that peculiar offshoot of the British Empire who fired their first shot across the king’s bow by dumping, yes, you’ve heard this before, crates of tea overboard in Boston Harbor. British efforts to shore up their finances meant hiking taxes on tea, and the colonists weren’t having it.
Meanwhile, among its other imperial conquests around the world, Britain’s hold on India brought it into contact with local merchants dealing in an ancient and powerful substance known as opium. Clever Brits thought to export opium to China in the belief that it could offset their trade deficit from tea. It didn’t take long for Chinese authorities to recognize the threat widespread opium use posed to their social order. In 1780, the Qing government issued an edict against opium, and other restrictions soon followed.
… Qing dynasty Qianlong Emperor wrote to King George III in response to the MaCartney Mission’s request for trade in 1793: “Our Celestial Empire possesses all things in prolific abundance and lacks no product within its borders. There is therefore no need to import the manufactures of outside barbarians in exchange for our own produce.” Tea also had to be paid in silver bullion, and critics of the tea trade at this time would point to the damage caused to Britain’s wealth by this loss of bullion. As a way to generate the silver needed as payment for tea, Britain began exporting opium from the traditional growing regions of British India (in present-day Pakistan and Afghanistan) into China. Although opium use in China had a long history, the British importation of opium, which began in the late 18th century, increased fivefold between 1821 and 1837, and usage of the drug became more widespread across Chinese society. The Qing government attitude towards opium, which was often ambivalent, hardened due to the social problems created by drug use, and took serious measures to curtail importation of opium in 1838–39. Tea by now had become an important source of tax revenue for the British Empire and the banning of the opium trade and thus the creation of funding issues for tea importers was one of the main causes of the First Opium War.
Delicate business, trade. By the early 1800s, Americans also began shipping opium to China, sold as a blend of opium and Turkish tobacco. The resulting competition between America and Britain drove opium prices down, making the drug more accessible to the average Chinese resident. By 1838, Britain alone was shipping more than 1,400 tons of opium into China. It was estimated that 27% of adult male Chinese were addicted.
Lin Zexu, the imperial commissioner charged with suppressing the opium trade, wrote an impassioned letter to Queen Victoria, explaining the harms of opium use and questioning Britain’s “moral judgement.” Sources say the queen never received the missive, but it probably wouldn’t have made much difference. The economics of trade meant that the nation’s leaders bowed to commercial interests.
Under a new law in 1839, Chinese authorities began boarding British ships and confiscating opium. In one action alone, they destroyed over 1,200 tons of opium on a public beach. Outraged, British importers demanded the assistance of the British military. Matters devolved as the Chinese banned British ships from taking on supplies or water at Chinese ports, various skirmishes ensued, and the dispute reached the British Parliament. The House of Lords (which included owners of most of the ships and trading companies) wanted war with China. The House of Commons, more sympathetic to the problems caused by opium, wanted the opium trade to stop.
No extra points for guessing who won. In a military buildup of British ships and personnel beginning in mid-1840 and reinforced by Indian troops in 1841, Western powers with their heavily armed gunships and superior technology sailed up the Pearl River, destroying the less-well-armed Chinese vessels and the forts protecting Canton. The British blockaded Chinese ports up and down the coast, and in the summer of 1842 British warships steamed up the Yangtze River, threatening Nanking. Ultimately, China had no choice but to surrender and accept terms including stiff indemnities payable in silver and British control over Hong Kong, initiating what the Chinese called “The Century of Humiliation.”
Meanwhile, the British East India Company sent Scottish botanist Robert Fortune to sneak into China to steal tea plants and to smuggle out a few Chinese growers who knew how to cultivate the crop. Vast tea plantations in India were the result. India remained under British control until 1947, and its tea crop eventually surpassed China’s, ending the need to trade with China for the tea supply.
I don’t think I need to belabor the point. It seems the Chinese have learned their lessons well.
Willful ignorance is a pathetic condition I’ve written about before, but a new and unexpected manifestation came to my attention in the Saturday paper.[1] In an extended interview with the Arkansas Democrat-Gazette, Dr. J. Carlos Roman voiced his thoughts on the Arkansas Medical Marijuana Act and the various twists and turns on its way to becoming a functioning service to people in need. Among those thoughts was this stellar quote: “What are we going to do as a state and culture to make sure medical marijuana doesn’t become the next opioid crisis?”
Oh please, Scotty, beam me up now.
It’s possible Dr. Roman made this statement in an attempt to be politically correct, considering that he’s under fire for a possible conflict of interest in his role as one of five members of the commission that oversees the licensing of Arkansas’ first growing and dispensing facilities. In that role, he gave the highest score to the Natural State Medicinals Cultivation group. Entities that didn’t score so high were understandably miffed that Natural State was one of only five chosen for a license, considering that Dr. Roman’s friend Dr. Scott Schlesinger is one of Natural State’s owners. Consequently, several of the applicants not chosen have sued, alleging bias.
Roman argues that he didn’t expect or receive any quid pro quo for his ranking of Natural State. He also pointed out that he has worked for years in his role as a pain management physician to fight the opioid crisis. He says his reason for accepting the voluntary role on the licensing board was in part to “ensure that the medical marijuana industry gets off the ground responsibly.”
He goes on to admit that he initially opposed the amendment voters passed in 2016 legalizing medical use, not because he was totally opposed to marijuana’s medical use but because of public “ignorance” and the false information about its medical potential touted by many supporters of the new law. He concedes that natural marijuana may offer a few benefits, such as appetite stimulation and anxiety relief, and says he will “reluctantly” certify patients to receive the ID cards required by the program.
He’s such a great guy, isn’t he? And now, through no fault of his own, he’s being villainized by permit applicants who didn’t score as high as the group co-owned by his friend.
Sometimes you have to appreciate karma. Because this scandal about his potential conflict of interest is exactly the kind of spotlight that’s needed for people like Dr. Roman.
Why? Because who should be more qualified or informed about medical research than a physician? Yet here we have a physician who specializes in pain management worrying that marijuana could become the next opioid crisis. Talk about willful ignorance.
Farmer slicing opium flower pod to harvest the resin. Condensed resin forms raw opium.
Any physician, especially a specialist in pain treatment, should be fully aware of the history and effects of opiates. The opium poppy has been used medically as far back as 4000 BCE. For that matter, so has marijuana. But opium has served a greater role in pain relief.
Not content with what nature had to offer in the opium poppy, chemists in the 19th century began tinkering. The first result was morphine, brought to market by Merck in 1827. Then, after the Civil War left thousands of injured soldiers addicted to morphine, Bayer Pharmaceuticals gallantly offered heroin, marketed in the 1890s as a “safe” alternative. Less than twenty years later, as the addictive potential of heroin became more widely known, German chemists synthesized oxycodone.
This new “safe” alternative spawned generations of synthesized opiate clones, each touted as safer than its precursor: OxyContin, Percocet, Vicodin, Percodan, Tylox, and Demerol, to name a few. Now we have the latest spawn, fentanyl, roughly fifty times the strength of heroin.
Now, in order to capitalize on marijuana’s therapeutic gifts, the chemists are busy again. Already pharmaceutical grade THC, one of many active ingredients in marijuana, has been synthesized for legal sale as Marinol. You see where this is headed. Soon, coming to a town near you, we’ll have a potentially lethal form of marijuana.
But not yet. What Dr. Roman should know and apparently doesn’t is that marijuana differs from opiates in two important ways. It’s not addictive; opiates are. And marijuana is non-toxic, meaning no matter how much you manage to ingest, it won’t kill you.
And therein lies the absurdity of his statement.
Not to single him out. I’d wager that most physicians in Arkansas and elsewhere have made zero effort to learn more about the chemical properties of cannabis.
…In a large-scale survey published in 1994, epidemiologist James Anthony, then at the National Institute on Drug Abuse, and his colleagues asked more than 8,000 people between the ages of 15 and 64 about their use of marijuana and other drugs. The researchers found that of those who had tried marijuana at least once, about 9 percent eventually fit a diagnosis of cannabis dependence. The corresponding figure for alcohol was 15 percent; for cocaine, 17 percent; for heroin, 23 percent; and for nicotine, 32 percent. So although marijuana may be addictive for some, 91 percent of those who try it do not get hooked. Further, marijuana is less addictive than many other legal and illegal drugs.[2]
Please note that “dependence” and “addiction” are two very different things, no matter how Anthony and others might interchange them.
Addiction is a primary, chronic, neurobiologic disease, with genetic, psychosocial, and environmental factors influencing its development and manifestations. It is characterized by behaviors that include one or more of the following: impaired control over drug use, compulsive use, continued use despite harm, and craving.[3]
Psychological dependence develops through consistent and frequent exposure to a stimulus. Behaviors which can produce observable psychological withdrawal symptoms include physical exercise, shopping, sex and self-stimulation using pornography, and eating food with high sugar or fat content, among others.[4]
Marijuana plant showing leaves, generally not containing much of the active ingredients, and flower buds, the primary medically-useful portion of the plant.
“Dependence” in itself is simply an adaptive state associated with a withdrawal syndrome upon cessation of repeated exposure to a stimulus such as the ‘high’ associated with marijuana. Some studies report that ending heavy marijuana use causes some users to experience wakefulness in subsequent nights and possibly headaches.
Compare that to opiate withdrawal. Within six to thirty hours of last use, symptoms include tearing up, muscle aches, agitation, trouble falling and staying asleep, excessive yawning, anxiety, runny nose, sweats, racing heart, hypertension, and fever. Then within 72 hours, more severe symptoms set in and last a week or more, including nausea and vomiting, diarrhea, goosebumps, stomach cramps, depression, and intense drug cravings.
But more important than withdrawal symptoms are the risks associated with use, the most critical being the risk of overdose death. And this is where Dr. Roman’s ignorance takes center stage. People die from opiates at an increasing rate, about 181 per day in 2017.
…Victims of a fatal [opiate] overdose usually die from respiratory depression—literally choking to death because they cannot get enough oxygen to feed the demands of the brain and other organ systems. This happens for several reasons… When the drug binds to the mu-opioid receptors it can have a sedating effect, which suppresses brain activity that controls breathing rate. It also hampers signals to the diaphragm, which otherwise moves to expand or contract the lungs. Opioids additionally depress the brain’s ability to monitor and respond to carbon dioxide when it builds up to dangerous levels in the blood.[5]
Compare that to the effects of marijuana.
“Because cannabinoid receptors, unlike opioid receptors, are not located in the brainstem areas controlling respiration, lethal overdoses from Cannabis and cannabinoids do not occur.”[6]
Here’s a wake-up call to Dr. Roman and others in Arkansas playing this Mickey Mouse game over marijuana: in states where medical marijuana has been legalized, opiate-related deaths have decreased.
Over the past two decades, deaths from drug overdoses have become the leading cause of injury death in the United States. In 2011, 55% of drug overdose deaths were related to prescription medications; 75% of those deaths involved opiate painkillers. However, researchers found that in the 13 states that had legalized medical marijuana, opiate-related deaths decreased by approximately 33% in the six years following legalization.
“The striking implication is that medical marijuana laws, when implemented, may represent a promising approach for stemming runaway rates of non-intentional opioid-analgesic-related deaths,” wrote opiate abuse researchers Dr. Mark S. Brown and Marie J. Hayes in a commentary published alongside the study.[7]
We are nearly two years from the day Arkansas voters approved a measure to provide medical marijuana to citizens of the state. With these lawsuits filed against the commission for potential conflict of interest, the date when persons in need might obtain legal weed moves even further from reach.
Dr. Roman’s apparent failure to educate himself is only the latest of so many failures regarding public health and marijuana. Prohibition propaganda remains deeply entrenched in those who don’t bother to become informed. Legislative foot-dragging has never been more egregious than in the months spent throwing everything but the kitchen sink in front of the voters’ choice on this measure. The tragedy is that while all these men and women responsible for the public welfare fiddle with the law’s implementation, people are suffering needlessly. And dying.
I recently read a news report that Walmart is investigating the use of drones in pollinating agricultural crops.[1] That just about knocked me out of my chair, but then, on reflection, I saw the Walmart dream: total control of our food supply.
Granted, the bee die-offs are a serious problem for farmers, a result—according to most experts—of our love affair with poisoning our food. You see, spraying herbicides, fungicides, and pesticides on our crops to kill off pests (like, well, anything that hurts the crop) also kills off the bees. Without the pollination that bees perform so expertly, we’ll have no food.
How clever of Walmart to attempt some redress of this terrible problem! Their concept is to enlist drones with “sticky material or bristles” to spread pollen as they move from plant to plant. Of course the elephant in the room is the obvious question: if poisons used in agriculture are killing the bees, what are they doing to us?
Already we’ve heard—and mostly ignored—reports that frogs and other amphibians are experiencing reproductive deformities[2] due to environmental pollutants like Roundup’s glyphosate and atrazine, the latter now banned in Europe yet applied to tens of millions of acres of corn grown in the United States, making it one of the world’s most widely used agricultural chemicals. A powerful, low-cost herbicide, atrazine is also the subject of persistent controversy.[3]
“Atrazine demasculinizes male gonads producing testicular lesions associated with reduced germ cell numbers in teleost fish, amphibians, reptiles, and mammals, and induces partial and/or complete feminization in fish, amphibians, and reptiles,” according to years of study by scientist Tyrone Hayes whose reports on his research are the target of relentless attacks by atrazine’s primary manufacturer, Syngenta.[4]
Atrazine is just one of many chemicals in wide use across the United States known as endocrine disrupters, “shown to disrupt reproductive and sexual development, and these effects seem to depend on several factors, including gender, age, diet, and occupation… Human fetuses, infants and children show greater susceptibility than adults… in diseases such as cancer, allergies, neurological disorders and reproductive disorders.”[5]
Then there are the hundreds of other chemical cocktails we routinely ingest, not only in our food but also in our drinking water. Tens of thousands of chemicals are released into the environment in products ranging from shampoo to toilet bowl cleaner; few if any have been tested for potential harmful effects on human health, and at last count only a handful are tested for or removed from drinking water supplies. Not that anyone has any idea how to remove them from the water. This is part of the don’t-ask-don’t-tell philosophy of the chemical industry, which is not required by law to test for human health effects unless and until some harm is proven.
Europe, more intelligently, requires testing to prove no harm before new chemicals can be used. What a concept.
It’s such a downer to the chemical and agricultural corporations that someone might want to avoid cancer, allergies, neurological disorders and reproductive disorders. What a hero Walmart will be for its clever solution to the bee die-off, allowing for continuing and possibly increasing chemical poisoning of our food supply through the use of drones! According to the report, its grocery business will be “aided by farm-related drones, which could be used to pollinate crops, monitor fields for pests, and spray pesticides.”
If we could believe for one second that Walmart’s concern is the nourishment of Americans, we might also be sold a bridge somewhere in Manhattan. We already know from years of experience with this corporation that its objective, at least since ole Sam Walton died and left the biz to his greedy kids, is only the bottom line. Squeeze producers to make the cheapest possible product. Eliminate warehousers and trucking firms. Pay employees wages so low they qualify for food stamps. Pocket the difference, a method that propels these money grubbers to the top of the wealth lists and gives them extra spending money to proclaim their ‘generosity’ with projects like Crystal Bridges.
They care nothing about American jobs. Sam was eager to advertise that his products were made in America. His body had hardly cooled when the kids were over there making deals in China. It’s hard to find any product in Walmart today that’s made in America.
Or about customer service. They can’t work fast enough to eliminate those damn middle-management jobs like department supervisors. If the computer models show that a particular inventory item isn’t the very best-selling product, the motto is to shit-can the damn thing. It doesn’t matter if people have been purchasing that product at Walmart for the last twenty years. Such was the case last week when I rushed in to purchase a battery for my camera, already late for a photo-shoot appointment for a book I’m working on, only to discover that Walmart had eliminated its camera department.
Then there are the Williams seasoning mixes we’ve relied on for chili, tacos, and spaghetti, now swept from the shelves because Walmart is rolling out its store-brand seasoning mixes. Okay, if you really want to set my hair on fire, this is the right topic. How many times have you or I visited Walmart for a particular brand-name product we especially enjoy, only to discover its shelf space filled with Walmart’s Great Value brand? I wrote the corporate CEO: “First, let me say that I’d rather live the rest of my life without chili than buy a brand I’m being coerced to buy.”
Do I have to tell you there was no response? Oh, and by the way, there’s no online email complaint method and in order to get the snail-mail address for the CEO, you have to spend an hour dodging through multiple departments who are trained, probably on threat of death, to take your complaint and “deliver the message.” Or to direct you to the store where you encountered your problem…
Then there’s the total incompetence of Walmart’s grocery buyers who don’t know the difference between a sliced almond and a slivered almond. Since last October, Walmart stores have had only sliced almonds. Big fat bags of sliced almonds. Great Value brand, of course.
The point is, if Walmart has no idea what it’s doing with almond inventory and no sense of patriotism about supporting American industry and no honor or reliability in customer service, then what will they do to our food supply? Already we can see a hint of how that will go with their careful bait and switch methods in supplanting traditional brands with their store brands. Once they’ve got their thumb in the pie from crops on up to the shelves, we’ll be completely at their mercy.
Yes, I’ve shopped other stores. But so many of us haven’t that the other stores have one by one folded up shop and drifted into shadow. There’s no local stationery store, unless you want to call Office Depot by that name. Which they’re not. They’re as bad or worse than Walmart. No nice little note cards on thick vellum paper. Now even the standard four-squares per inch graph pads have been supplanted by the smaller five per inch, no doubt some efficiency expert’s idea of customer service. Where is McRoy-McNair with their dusty basement of old colored paper and clasp envelopes in every conceivable size?
For years I’ve made it a point to buy everything I can from anyone but Walmart. This year I’ll be especially interested in farmers’ markets in the area where I live in order to support local farmers doing things the old fashioned way. I’ll be growing my own tomatoes, peppers, squash, and niceties like dill, thyme, sage, and basil. I live in the woods where there’s still a modest bee population, and I’m planting more bee-friendly flowers like lavender, rhododendron, California Lilac, and for my cats and the bees late into autumn, catnip.[6]
It’s dangerously late in this game when they start using drones to replace bees.
If gun advocates really want to protect the 2nd Amendment, they would be well advised to disavow assault weapons. They’ve been banned before and they will be banned again. Why confuse the argument?
The 2nd Amendment is fairly precise in its statement: “A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”
The elemental phrase of this sentence is “a well regulated Militia.” We have that. It’s called the National Guard. And yes, these well-regulated military units possess weapons. No one is infringing their right to bear those arms.
And no one is calling for a ban on all guns. The only source of that hue and cry is the National Rifle Association which uses that particular lie to strengthen its grip on the minds of a certain type of person. It’s also a useful lie to sell when your primary objective is to profit from the sale of any and all firearms, mass shootings be damned.
Are military-style weapons really so important that gun owners are willing to risk losing even more rights? This is a no-win argument, conflating assault rifles with more traditional firearms. If the objective is to strengthen the 2nd Amendment, a smart strategy would be to distance personal gun ownership from assault rifles.
It bears saying again that at the time the 2nd Amendment was written, the Founding Fathers did not and could not conceive of a gun like modern day assault rifles. Even the most basic revolver was unheard of to the common man. Fast forward to a more crowded, more urban, and more vulnerable population and add in the well-demonstrated threat to human life posed by assault weapons and you face the certainty that the Founding Fathers would have excluded assault weapons from the 2nd Amendment.
And let’s get real. What exactly does anyone expect to do with an assault rifle? They are not sporting guns. They are built to kill as many people as possible in the shortest possible amount of time. So unless you plan on becoming yet another mass murderer, you don’t need one.
It seems that assault-rifle aficionados live in fantasy land, believing that they must have such weapons to protect themselves from a potentially tyrannical government. All one must do to see through this delusion is to look at recent uprisings such as the rebellion in Syria. Entire cities have been destroyed in the government’s willingness to wipe out these rebels—chemical weapons, barrel bombs, white phosphorus bombs, entire populations of men, women, and children killed in their homes, schools and hospitals.
Never think for one minute that an armed insurrection within the United States would be met with less than deadly force. We’ve seen a few minor efforts along those lines. The Branch Davidians spring to mind, you know, that group of armed religionists who ended up burning themselves to death at Waco, Texas, rather than yield to the government. Or maybe the role model would be the Covenant, the Sword, and the Arm of the Lord, a white supremacist militia group based in Arkansas that was active in the late 1970s and the ’80s.
These bands of brothers with their camo trucks, prepper shelters, food rations, and an arsenal of AR-15s are hallucinating if they believe they can stand against the U.S. military. The idea is laughable. Even illegal grenade launchers and machine guns won’t help them. Will they shoot at the sky as guided missiles start flying their way? What happens when a few M270 Multiple Launch Rocket Systems rumble into range? They won’t even see the high-flying jets dropping cluster bombs. They won’t hear the Abrams tanks rolling toward them until the first rounds start blowing them and their shelters into the next life.
The reality is that U.S. citizens with assault weapons will have zero impact as resistance fighters against a government gone rogue. We already have a way to ensure that our government doesn’t go rogue. It’s called voting.
Then there’s meeting with legislators, running for election, and forming cogent arguments to be voiced among friends and neighbors. It’s called citizenship.
And if the scenario is a world where governments have crumbled and nothing is left but little groups of tough men fighting for God and the American Way, who exactly are they fighting? Other Americans who also have AR-15s? What happens when the ammunition runs out? Why not skip straight to that part?
Assault weapons do not belong in the hands of civilians. So let’s get past that whole insanity and start working toward a peaceful future for everyone. Enough already.
[Photo: Students who walked out of their Montgomery County, Maryland, schools protest against gun violence in front of the White House in Washington, February 21, 2018. REUTERS/Kevin Lamarque]
The student walkout last Wednesday presented an excellent parenting opportunity. Sadly, a significant number of parents failed to recognize it. Instead, many of them sided with authoritarian school boards who insisted that any student who walked out of class in honor of the seventeen students and staff members killed at the high school in Parkland, Florida, should face disciplinary action for such outrageous rebellion.
Parents had the choice to support this, to teach their kids about the history of protest that forms the foundation of our nation. Our country began as a protest. Important change over the subsequent centuries occurred due to protest, not quiet little acceptable moments condoned by the powers that be, but full-bodied march-in-the-streets, dump-your-damn-tea-overboard protests. This was a chance for parents and community leaders to demonstrate a true understanding of what protest means to our nation.
And to our future.
It’s sad to see that so many don’t understand, tragic that so many accept the conditioning to stand quietly and wait for someone to tell them what to do, what to think. It bodes ill for our future that what we seem to be teaching in our schools and homes today focuses on doing what we’re told instead of what is right. Sit in your assigned desk. Don’t talk. Remember to bring a pencil. Stand quietly in line.
This is training for lock-step corporate workers, not the thinking adults needed to guide our nation forward in a time of great global change. Where is the critical thinking, the understanding of history, that education is supposed to instill?
Yes, standing up for the right thing may have consequences. Many of the students who walked out of class despite draconian threats of suspension and other disciplinary action understood that and stood up for their rights anyway. This alone should give us all hope for our collective future.
The acceptance and embrace of authoritarianism reared its ugly head last week, a fitting reminder of the mindset that allowed Donald Trump to become president. Above all else, his election to the highest office in our political system reflects a pervasive eagerness for an authority figure who claims to know the right path. This worldview reflects the fear of so many who can’t catch up with the times, with a world going too fast for their 19th-century brains to understand what is required of them. They don’t like change. So the solution is a strict set of simple rules enforced by a blustering promise-maker.
Discussion ebbs and flows on the topic of U.S. spending. Of particular interest to several of my liberal friends is military spending. Frequent Facebook posts on this subject claim that military spending consumes over half the budget.
I agree that the military is not an ideal place to invest so many billions of dollars. I also agree that the U.S. has a history of blowing money on weapons and aggression. Further, I question whether the U.S. uses military means when a better, longer-lasting path to peace and stability in troubled parts of the world would be investments in education, infrastructure, agriculture, and commercial development.
All that said, I have to protest the continuing use of incorrect data in arguments against military spending. The case for less military spending is not strengthened by presenting incorrect information. Just the opposite.
The accurate 2017 budget breakdown:
Please note that the portion designated as military spending appears in the lower left of this pie chart and does not constitute half of the U.S. budget. It’s important to distinguish between a breakdown of discretionary spending and a breakdown of overall spending. Discretionary spending is only one category of overall spending. It’s within the discretionary slice of the pie that the military takes its big bite:

So yes, military spending is over half of the discretionary slice, but it is not over half of the total budget. And there’s no limit to the close examination this distribution of funds deserves. But please, let’s make our arguments based on the actual facts.
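To make that distinction concrete, here is a minimal sketch using rough, round FY2017 figures that are my own illustrative assumptions, not official numbers: roughly $2.5 trillion in mandatory spending, $1.2 trillion in discretionary spending, $0.3 trillion in net interest, and about $0.6 trillion of the discretionary total going to defense. The same defense dollars come out to about half of one pie and only a modest fraction of the other:

```python
# Rough, illustrative FY2017 figures in trillions of dollars (assumptions, not official numbers).
mandatory = 2.5       # Social Security, Medicare, Medicaid, other entitlements
discretionary = 1.2   # everything Congress appropriates each year
interest = 0.3        # net interest on the debt
defense = 0.6         # defense portion of discretionary spending

total = mandatory + discretionary + interest

share_of_discretionary = defense / discretionary
share_of_total = defense / total

print(f"Defense as share of discretionary spending: {share_of_discretionary:.0%}")  # ~50%
print(f"Defense as share of total spending:         {share_of_total:.0%}")          # ~15%
```

Both framings are true, but they answer different questions; quoting the discretionary share as though it were the share of all federal spending is how the “over half the budget” claim gets its legs.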
A second area of considerable error among liberals involves calls for a shift in U.S. spending to better honor social programs like Social Security. A popular mantra on social media these days mistakenly claims that we ‘own’ our retirement funds because we paid into them. The following points, drawn from the Social Security Administration and Just Facts, spell out the reality:
“It’s My Money” [WRONG!]
* A common perception about Social Security benefits is: I am entitled to the money. It’s my money. I’ve saved it.
* Social Security is mainly a “pay-as-you-go” program. This means that it pays most of its benefits by taxing people who are currently working.
* Per the Social Security Administration: The money you pay in taxes is not held in a personal account for you to use when you get benefits. Your taxes are being used right now to pay people who now are getting benefits. Any unused money goes to the Social Security trust funds, not a personal account with your name on it.
* From the start of the Social Security program in 1937 through the end of 2016:
94% of all Social Security payroll taxes were spent in the same year they were collected.
13% of Social Security’s total income (including payroll taxes, taxes on Social Security benefits, transfers from the general fund of the Treasury, and interest on the Social Security Trust Fund) has accumulated in the Social Security Trust Fund.
* Per the Social Security Administration: Since the Social Security system has not accumulated assets equal to the liability of promised future benefits, the social security wealth that individuals hold represents a claim against the earnings of future generations rather than a claim against existing real assets.
* After the federal government pays back with interest all of the money it has borrowed from Social Security, the program’s current claim against the earnings of future generations is $30.8 trillion. This amounts to an average of $132,914 for every person now receiving Social Security benefits or paying Social Security payroll taxes (a quick check of that arithmetic follows this list).
* Per the Social Security Administration: There has been a temptation throughout the program’s history for some people to suppose that their FICA payroll taxes entitle them to a benefit in a legal, contractual sense. … Congress clearly had no such limitation in mind when crafting the law. … Benefits which are granted at one time can be withdrawn.…
* In 1960, the U.S. Supreme Court ruled (5 to 4) that entitlement to Social Security benefits is not a contractual right. [emphasis added]
[For more discussion of Social Security taxes, allocations, and projections, visit the Just Facts page.]
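As a quick sanity check of the per-person figure in the list above, here is a back-of-the-envelope sketch using only the two numbers quoted there: dividing the stated $30.8 trillion claim by $132,914 per person implies roughly 232 million people now receiving benefits or paying payroll taxes, which is the population that average is spread across.

```python
# Back-of-the-envelope check using only the two figures quoted above.
total_claim = 30.8e12    # claim against the earnings of future generations, in dollars
per_person = 132_914     # stated average per beneficiary or payroll taxpayer, in dollars

implied_people = total_claim / per_person
print(f"Implied number of beneficiaries and taxpayers: {implied_people / 1e6:.0f} million")
# prints ~232 million
```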
So let’s get our heads screwed on straight, fellow progressives. While a large chunk of U.S. tax dollars goes to military expenditures, the military is NOT consuming over half of our tax dollars. Our Social Security and Medicare funds are NOT held for our future use like individual savings accounts; rather, they are spent immediately as payouts to people currently receiving Social Security and Medicare benefits.
If we expect to prevail in directing our nation toward a more equitable and socially conscientious future, we need to be well informed and make our arguments for social justice in ways that make sense and align with the facts.