human extinction being embedded in source code and amplified by algorithms....
There is a moment in every civilization’s collapse when the instruments of its destruction become visible to those paying attention. We are living in that moment now. But the warning signs are not carved in stone or written in prophecy—they are embedded in source code, amplified by algorithms, and funded by men who speak openly of human extinction while racing to cause it.
Palantir's Palestine: How AI Gods Are Building Our Extinction BY BETTBEAT MEDIA
In a nondescript office in Palo Alto, a man who claims to fear fascism has become its most sophisticated architect. In a sprawling Texas compound, another man who styles himself a free speech absolutist uses his platform to amplify the voices calling for ethnic cleansing. And in the bombed-out hospitals of Gaza, their technologies converge in a laboratory of horrors that prefigures what awaits us all. The four horsemen of this apocalypse do not ride horses. They deploy algorithms.

The Confession

Professor Stuart Russell has spent fifty years studying artificial intelligence. He wrote the textbook from which nearly every AI CEO in Silicon Valley learned their craft. And now he works eighty hours a week not to advance the field he helped create, but to prevent it from annihilating the species.

“They are playing Russian roulette with every human being on Earth,” Russell said in a recent interview, his voice carrying the weight of someone who has seen the calculations and understood their implications. “Without our permission. They’re coming into our houses, putting a gun to the head of our children, pulling the trigger, and saying, ‘Well, you know, possibly everyone will die. Oops. But possibly we’ll get incredibly rich.’”

This is not hyperbole from an outsider. This is the assessment of a man whose students now run the companies building these systems. And here is what should terrify you: the CEOs themselves agree with him. Dario Amodei, CEO of Anthropic, estimates a 25% chance of human extinction from AI. Elon Musk puts it at 20-30%. Sam Altman, before becoming CEO of OpenAI, declared that creating superhuman intelligence is “the biggest risk to human existence that there is.”

Twenty-five percent. Thirty percent. These are not the odds of a coin flip. These are the odds of Russian roulette with two bullets in the chamber. And yet they continue to spin the cylinder.
When Russell was asked if he would press a button to stop all AI progress forever, he hesitated—not because he believes the technology is safe, but because he still harbors hope that humanity might pull out of what he calls “this nosedive.” Asked again a year from now, he admits, he might give a different answer. “Ask me again in a year,” he said. “I might say, ‘Okay, we do need to press the button.’”

But there may not be a button. There may not be a year. The event horizon, as Altman himself has written, may already be behind us.

The Gorilla Problem

Russell offers what he calls “the gorilla problem” as a framework for understanding our predicament. A few million years ago, the human line branched off from the gorilla line in evolution. Today, gorillas have no say in whether they continue to exist. We are simply too intelligent, too capable, too dominant for their survival to be anything other than a matter of our sufferance. We decide if gorillas survive or go extinct. For now, we let them live.

“Intelligence is actually the single most important factor to control planet Earth,” Russell explains. “And we’re in the process of making something more intelligent than us.”

The logic is inescapable. If we create entities more capable than ourselves, we become the gorillas. And gorillas cannot negotiate the terms of their extinction. But here is where Russell’s framework is lacking and, in my view, requires expansion. The gorillas face one superior species. We face something far more insidious: a superior intelligence controlled by a handful of men whose values, as demonstrated by their actions, are antithetical to human flourishing. The gorillas, at least, are threatened by humanity in the aggregate. We are threatened by humanity’s worst specimens, amplified by technologies that multiply their power beyond anything history has witnessed.
“These bombs are cheaper and you don’t want to waste expensive bombs on unimportant people”

The Men Behind the Curtain

Alexander Karp was born to activists. His mother, an African American artist, created works depicting the suffering of Black children murdered in Atlanta. His father, a German Jewish immigrant, worked as a pediatrician. They took young Alex to civil rights marches, exposed him to injustice, taught him to fight against oppression. And then he grew up to build Palantir.

Named after the Seeing Stones of Tolkien’s legendarium—artifacts that were “meant to be used for good purposes” but proved “potentially very dangerous”—Palantir was founded in the aftermath of September 11th, 2001, with seed money from In-Q-Tel, the CIA’s venture capital arm. Karp, who claims he “cannot do something I do not believe in,” has spent two decades doing precisely that.

The company’s software now powers what Israeli soldiers describe with chilling bureaucratic efficiency: “I would invest 20 seconds for each target and do dozens of them a day. I had zero added value as a human. Apart from being a stamp of approval.”

Twenty seconds. That is the value of a Palestinian life in the algorithmic calculus of Alex Karp’s creation. The machine decides who dies. The human merely clicks.

When whistleblowers revealed that Israeli intelligence officers were using “dumb bombs”—unguided munitions with no precision capability—on targets identified by Palantir’s AI, their justification was purely economic: “These bombs are cheaper and you don’t want to waste expensive bombs on unimportant people.”

Unimportant people. Children. Doctors. Journalists. Poets.

Karp has admitted, in a moment of rare candor: “I have asked myself if I were younger, at college, would I be protesting me?” He knows the answer. We all know the answer. He simply does not care.
The Digital Brownshirts

Elon Musk presents himself as a different kind of tech titan—the quirky engineer, the Mars visionary, the champion of free speech who bought Twitter to liberate it from the “woke mind virus.” But Sky News recently conducted an experiment that strips away this carefully constructed persona.

Researchers created nine fresh accounts on X—Musk’s renamed platform—and left them running for a month. Three accounts followed left-leaning content. Three followed right-leaning content. Three followed only neutral accounts like sports and music. Every single account, regardless of its stated preferences, was flooded with right-wing content. Users who followed only sports teams saw twice as much right-wing political content as left-wing. Even the left-leaning accounts were fed 40% right-wing material.

This is not organic engagement. This is algorithmic manipulation on a civilizational scale.

“If you open the app on your phone and you immediately see a news agenda maybe filled with more hate towards certain groups, it’s going to have an impact,” observed Bruce Daisley, the former head of Twitter for Europe, the Middle East, and Africa. “And that’s not to say that free speech can’t exist, but if eight million people every day are opening their phones to see a news agenda that maybe is right to the fringes of what we’re used to, at the very least, we should have some visibility of the impact that’s going to have on politics.”

Musk reinstated Tommy Robinson’s account—the far-right pro-Zionist agitator who organized 150,000 people to march through London calling for mass deportations. Robinson thanked Musk publicly. Musk reposted the thanks and declared it was time for “the English to ally with the hard men.” Hard men. The historical euphemism for fascists.

When politicians Musk favors post content, their engagement skyrockets. When politicians he dislikes post identical amounts, their reach flatlines. This is not a town square.
It is a propaganda machine with a proprietor who openly interferes in the politics of nations he does not inhabit, backing candidates he has never met, pushing ideologies that would have been considered fringe extremism a decade ago.

And here is the connection that matters: Musk is the CEO of xAI, one of OpenAI’s largest competitors. He has declared himself a 30% believer in human extinction from AI. And he is using the world’s most influential social media platform to promote the political movements most likely to strip away the regulations that might prevent that extinction. The fascists have captured the algorithm.

The Laboratory of the Future

Dr. Ghada Karmi was a child in 1948 when she lost her homeland. She remembers enough to know that she lost her world. For seventy-seven years, she has watched as the mechanisms of Palestinian erasure evolved from rifles and bulldozers to algorithms and autonomous weapons systems.

“Zionism is evil,” she says with the quiet certainty of someone who has spent a lifetime studying its fruits. “It is purely evil. It has created disasters, misery, atrocities, wars, aggression, unhappiness, insecurity for millions of Palestinians and Arabs. This ideology has no place whatsoever in a just world. None. It has to go. It has to end. And it has to be removed. Even its memory has to go.”

But Zionism, in its current iteration, is not merely an ideology. It is a business model. It is a technology demonstration. It is the beta test for systems that will eventually be deployed everywhere.

The Israeli military’s Project Lavender uses AI to identify targets for assassination. Soldiers describe processing “dozens of them a day” with “zero added value as a human.” The algorithm marks. The human clicks. The bomb falls. This is not a war. It is a sick, twisted video game.

Palantir’s technology identifies the targets. Musk’s Starlink provides the communications. American military contractors supply the weapons.
And the entire apparatus is funded by governments whose citizens have marched in the millions demanding it stop. “The genocide has not provoked a change in the official attitude,” Dr. Karmi observes. “I’m astonished by this and it needs an explanation.”

The explanation is simpler and more terrifying than any conspiracy. The explanation is that the people who control these technologies have decided that some lives are worth twenty seconds of consideration and others are worth none at all. And the governments that might regulate them have been captured by men waving fifty billion dollar checks.

“They dangle fifty billion dollar checks in front of the governments,” Professor Russell explains. “On the other side, you’ve got very well-meaning, brilliant scientists like Geoff Hinton saying, actually, no, this is the end of the human race. But Geoff doesn’t have a fifty billion dollar check.”

The King Midas Problem

Russell invokes the legend of King Midas to explain the trap we have built for ourselves. Midas wished that everything he touched would turn to gold. And it did. And then he touched his water and it became metal. He touched his food and it became inedible. He touched his daughter and she became a statue.

“He dies in misery and starvation,” Russell recounts. “So this applies to our current situation in two ways. One is that greed is driving these companies to pursue technology with probabilities of extinction being worse than playing Russian roulette. And people are just fooling themselves if they think it’s naturally going to be controllable.”

The CEOs know this. They have signed statements acknowledging it. They estimate the odds of catastrophe at one in four, one in three, and they continue anyway. Why? Because the economic value of AGI—artificial general intelligence—has been estimated at fifteen quadrillion dollars. This sum acts, in Russell’s metaphor, as “a giant magnet in the future. We’re being pulled towards it.
And the closer we get, the stronger the force, and the higher the probability that we will actually get there.”

Fifteen quadrillion dollars. For comparison, the Manhattan Project cost roughly thirty billion in today’s dollars. The budget for AGI development next year will be a trillion dollars. Thirty times the investment that built the atomic bomb. And unlike the Manhattan Project, which was conducted in secret by a nation at war, this development is being conducted by private companies answerable only to their shareholders, in peacetime, with no democratic oversight, no regulatory framework, and no meaningful safety requirements.

“The people developing the AI systems,” Russell observes, “they don’t even understand how the AI systems work. So their 25% chance of extinction is just a seat-of-the-pants guess. They actually have no idea.”

No idea. But they’re spending a trillion dollars anyway. Because the magnet is too strong. Because the incentives are too powerful. Because they have convinced themselves that someone else will figure out the safety problem. Eventually. Probably. Maybe.

What Now?

If everything goes right—if somehow we solve the control problem, if somehow we prevent extinction, if somehow we navigate the transition to artificial general intelligence without destroying ourselves—what then? Russell has put this question to AI researchers, economists, science fiction writers, futurists. “No one has been able to describe that world,” he admits. “I’m not saying it’s not possible. I’m just saying I’ve asked hundreds of people in multiple workshops. It does not, as far as I know, exist in science fiction.”

There is one series of novels, he notes, where humans and superintelligent AI coexist: Iain Banks’s Culture novels. “But the problem is, in that world there’s still nothing to do. To find purpose.” The only humans with meaning are the 0.01% on the frontier, expanding the boundaries of galactic civilization.
Everyone else is desperately trying to join that group “so they have some purpose in life.” This is the best-case scenario. The utopia we’re racing toward is a cruise ship where the entertainment never ends and the meaning never arrives.

“Epstein is dead, or so we are told. But his network remains. His colleagues are still building. His vision of a world sorted into the served and the sacrificed is being encoded into algorithms at this very moment”

The Island

But we need not speculate about what happens when mankind runs out of meaning. We have already seen it. We have the receipts, the flight logs, the testimony of survivors. The men who have everything showed us what they do when nothing is forbidden.

Jeffrey Epstein’s island was not an aberration. It was a preview. Here was a man connected to the CIA, to Mossad, to the highest levels of American political power. A man who, according to recently released emails, estimated that the federal government knew about roughly twenty of the children he had trafficked. A man whose black book read like a who’s who of global power: presidents, princes, tech billionaires, Nobel laureates.

The emails reveal something beyond mere criminality. They reveal an infrastructure. Epstein was, as media researcher Nolan Higdon documents, “someone who could find dirt on people and possibly destroy their image, and also was someone you could go to to protect people’s images as well.” He operated at the nexus of intelligence agencies, financial power, and technological development—advising on spyware, brokering deals between governments, connecting the men who would build the surveillance apparatus now pointed at all of us.

When ABC reporter Amy Robach had evidence of his sex crimes, the network killed the story. When accusers came forward, the New York Times dismissed their claims as baseless.
When he was finally convicted, he received a sentence so lenient it became known as the “sweetheart deal.” And when he died in a federal prison under circumstances so suspicious that CBS News debunked every official explanation—the wrong floor on the released footage, a camera malfunction the manufacturer says is impossible—the investigation simply stopped.

The question is not whether Epstein was connected to these powerful figures. The emails have settled that. The question, as Higdon frames it, is how “one person could have his finger in so many pots with so many connections.” And the answer the media refuses to pursue is the obvious one: he was not operating alone. He was a node in a network—a network that included the intelligence agencies now partnering with AI companies, the billionaires now building our algorithmic future, the politicians now refusing to regulate any of it.

What did these men do when they had accumulated more wealth than could be spent in a thousand lifetimes? When they had shaped governments, launched technologies, bent the arc of history to their will? They visited the island.

The film “Hostel” imagined wealthy elites paying to torture and kill ordinary people for sport. Critics dismissed it as horror movie excess. But the premise—that absolute power produces absolute depravity, that men who want for nothing will eventually want the forbidden—was not fiction. It was prophecy.

“What do you do when you have all the money in the world and all the power in the world?” asks Steve Grumbine, who has studied the Epstein files extensively. “Well, you do whatever you want to do. Absolute power corrupting absolutely.”

The children trafficked to that island were not incidental to the system. They were the system—the currency of compromise, the mechanism of control, the ultimate expression of what happens when a class of people comes to believe they are gods.
As I have written before: there is a reason why pedophiles turn out to be the most successful capitalists.

This is the future the AI accelerationists are building, whether they know it or not. A world where a handful of men control technologies of unprecedented power, answerable to no one, restrained by nothing, their every appetite indulged by machines that never refuse and never report. The Epstein island, scaled to planetary dimensions.

Epstein is dead, or so we are told. But his network remains. His colleagues are still building. His vision of a world sorted into the served and the sacrificed is being encoded into algorithms at this very moment.

When Peter Thiel, another acquaintance of Epstein and co-founder of Palantir, named his company after Tolkien’s seeing stones, he perhaps did not consider the full implications of the reference. In the novels, the Palantíri were corrupted—used by Sauron to show partial truths that led to despair and domination. Those who gazed into them saw what the Dark Lord wanted them to see.

We are all gazing into the stones now. And the men who control what we see in these algorithmic Palantíri have already shown us, on a Caribbean island and in the rubble of Gaza, exactly what they intend. It looks like algorithms making life-and-death decisions with twenty seconds of human oversight. It looks like predictive policing in Florida, where residents are cited for overgrown grass because software flagged them as potential criminals. It looks like the hollowing out of every profession, every craft, every form of human contribution that might give us purpose. It looks like Palestinian children being raped without end inside the dark chambers of the IDF dungeons.

The Enablers

Dr. Karmi returns again and again to a simple question: Why? “Why should a state that was invented, with an invented population, have become so important that we can’t live without it?” she asks of Israel.
But the question applies equally to Silicon Valley, to the tech platforms, to the entire apparatus of algorithmic control that now shapes our politics, our perceptions, our possibilities. The answer, she suggests, lies in understanding the enablers.

“I think it’s absolutely crucial now to focus on the enablers,” she argues. “Because we can go on and on giving examples of Israeli brutality, of the atrocities, of the cruelties. That’s not for me the point. The point is who is allowing this to happen?”

The same question must be asked of AI. Who is allowing this to happen? Who is funding the companies that acknowledge a 25% chance of human extinction and continue anyway? Who is providing the regulatory vacuum in which these technologies develop unchecked? Who is amplifying the voices calling for acceleration while silencing those calling for caution?

The answer is the same class of people who have enabled every catastrophe of the modern era: the comfortable, the compliant, the compromised. The politicians who take the fifty billion dollar checks. The journalists who amplify the preferred narratives. The citizens who scroll past the warnings because they are too busy, too distracted, too convinced that someone else will handle it.

“All the polls that have been done say most people, 80% maybe, don’t want there to be super intelligent machines,” Russell notes. “But they don’t know what to do.”

They don’t know what to do. So they do nothing. And the machines keep learning. And the algorithms keep shaping. And the billionaires keep abusing. And the bombs keep falling. And the future keeps narrowing.

The Resistance

What is to be done? Russell’s advice is almost quaint in its simplicity: “Talk to your representative, your MP, your congressperson. Because I think the policymakers need to hear from people. The only voices they’re hearing right now are the tech companies and their fifty billion dollar checks.” Dr.
Karmi offers something similar: “My advice is to target the official structures which keep Israel going. They need to understand that being nice to Palestinians or sending food or whatever is fine, but it is not the point. The point for people living in western democracies is they can express a view.”

The counterargument is obvious: these structures are captured. The platforms that might amplify our voices are controlled by the very forces we need to resist. The politicians who might act are bought. The media that might inform are complicit. But the counterargument misses the point. The point is not that resistance will succeed. The point is that resistance is the only thing that might succeed.

“I’m not sure what to do,” Russell admits, “because of this giant magnet pulling everyone forward and the vast sums of money being put into this. But I am sure that if you want to have a future, and a world that you want your kids to live in, you need to make your voice heard.”

What does that look like? It looks like refusing to use platforms designed to indoctrinate us. It looks like demanding that our representatives explain their positions on AI safety. It looks like supporting the whistleblowers who reveal what these companies are doing. It looks like building alternative structures that do not depend on the benevolence of billionaires. It looks like refusing to be gorillas.

The Choice

Alex Karp’s mother devoted her art to documenting the suffering of Black children murdered in Atlanta. His father spent his career caring for the sick. They taught him to march against injustice. And he built a machine that decides, in twenty seconds, which children in Gaza will die today.

Elon Musk claims to champion free speech. He claims to fear the extinction of humanity. He claims to want to preserve western (un)civilization.
And he uses his platform to amplify the voices calling for ethnic cleansing, to boost the politicians who would eliminate the regulations that might prevent catastrophe, to reshape the information environment of entire nations according to his preferences.

Stuart Russell has spent fifty years in artificial intelligence. He could retire. He could play golf. He could sail. And instead he works eighty hours a week, trying to divert humanity from a course he believes leads to extinction.

These are the choices that matter. Not the abstract debates about technology, but the concrete decisions about what we do with our one life, our one moment of influence, our one chance to shape what comes next. “There isn’t a bigger motivation than this,” Russell says simply. “It’s not only the right thing to do, it’s completely essential.”

The gorillas could not choose their fate. They were outcompeted by a species more intelligent than themselves, and now their survival depends entirely on whether that species decides to permit it. We still have a choice. The machines are not yet smarter than us. The algorithms are not yet in complete control. The billionaires are not yet omnipotent. But the window is closing. The event horizon may already be behind us.

And the men who control the most powerful technologies in human history have made their values abundantly clear. They will pursue profit over safety. They will amplify hatred over tolerance. They will choose rape over romance. They will enable genocide if the margins are favorable. They will risk extinction if the upside is sufficient. This is not speculation. This is the record. This is what they are doing, right now, in plain sight.

The question is not whether we understand the danger. The question is what we will do about it. In the rubble of Gaza, AI systems are learning. They are learning that human life can be processed in twenty seconds. They are learning that some people are worth expensive bombs and others are not.
They are learning that the international community will watch and do nothing. What they learn there, they will eventually apply everywhere. This is not a warning about the future. It is a description of the present. The future is merely the present, continued, worse. Unless we stop it. Unless we choose differently. Unless we refuse to become the gorillas.

- Karim

* To increase the visibility of BettBeat Media, your restack of this article would be greatly appreciated.

https://bettbeat.substack.com/p/palantirs-palestine-how-ai-gods-are?
THIS IS MOSTLY AN EXPOSE ON THE WESTERN SIDE [USA et al] USAGE OF ALTERNATIVE ARTIFICIAL INTELLIGENCE — ALWAYS POISED TO DESTROY SOMETHING, LIKE CATS SCRATCHING YOUR CURTAINS BECAUSE THEY CAN. LUCKILY, WE SHOULD KNOW THAT OTHER CHARACTERS WORKING ON AI ARE DEVELOPING BENEFICIAL CHARACTERISTICS OF THIS CAPER. AND WE ARE USING MANY SUCH FEATURES TODAY...
YOURDEMOCRACY.NET RECORDS HISTORY AS IT SHOULD BE — NOT AS THE WESTERN MEDIA WRONGLY REPORTS IT — SINCE 2005.
Gus Leonisky POLITICAL CARTOONIST SINCE 1951.
palantir's curse....
Land of the Free now the nightmare destination for US tourists
by Andrew Gardiner
Plan a trip to the NRL in Las Vegas, or the World Cup Soccer? That old social media post bagging Donald Trump could make it the trip from hell. Andrew Gardiner investigates the US tourism nightmare.
Never shy of the odd hyperbolic spiel, the NRL events team stayed true to form when it dubbed February’s two-game Las Vegas rugby league showcase “the ultimate bucket list event!”. “Join the excitement as the NRL takes centre stage (and) stay at a central 4-star hotel in the City of Lights” for just $4,499 twin-share, the spruikers went on.
A word of caution before you part with all that hard-earned loot: check every one of your emails, texts or social media posts over the past five-to-10 years for anything deemed “unacceptable” by those currently running things in the Land of the Free.
Chances are you could be sent straight home on the next available flight or, worse, detained in an Immigration and Customs Enforcement (ICE) facility for months if you’ve sent anything deemed critical of Donald Trump or his administration, something ‘antisemitic’ in nature or otherwise proscribed (the latter list varies from month to month in these turbulent times).
Ten years of emails for a tourist visa

The Trump administration has ramped up attempts to pre-screen and monitor sports fans, tourists, students, researchers and business travellers alike, planning a slew of “intrusive” demands: every last email address from the preceding decade for those seeking a 90-day visa waiver, and all their phone numbers, family members’ names and contact details, dates of birth, home and IP addresses, together with metadata from submitted photos, for those applying for the Electronic System for Travel Authorisation (ESTA).
The Washington-based Electronic Privacy Information Center (EPIC) says the new vetting regime “calls into question how (a visitor’s) information is being used”. “The (Trump) administration has gone after people who espouse views it doesn’t like (so) you can imagine there are some similar problems that you may have with emails that you would find in their social media vetting”, EPIC’s lead counsel, Jeramie Scott, said.
Anyone thinking they can dodge the system via anonymous social media handles or burner email accounts would be well-advised to assume the worst rather than hoping for the best.
Palantir AI to target Israel critics

ICE is using automated artificial intelligence (AI) to target those who speak out on Gaza or other taboos.
Software such as Palantir’s Immigration OS allows for constant mass monitoring, surveillance, and assessment of visitors. Another tool, Babel X, ‘assigns’ opinions to tourists or students – based on social media posts and online behaviour – in an arbitrary and imprecise fashion not dissimilar from Australia’s egregious RoboDebt scheme.
Fans of next year’s Las Vegas NRL combatants – the Dragons, Bulldogs, Knights and Cowboys – may well see a trip to Sinatra’s second home as the trip of a lifetime, but it’s thousands of dollars down the tubes if they’re caught calling Trump a clown way back in 2020.
The same goes for Socceroos fans, whose team qualified for the 2026 World Cup in June-July. Co-hosted by the US, Canada and Mexico, Australia is scheduled to play two of its three group stage fixtures in the American cities of Seattle and Santa Clara.
World Cup Soccer fans face rigorous checks

“This policy represents the ‘thin edge of the wedge’ regarding civil liberties and fan rights,” Football Supporters Association Australia chair Patrick Clancy said. “While Australian (soccer) fans face rigorous checks, supporters from nations such as Iran or Haiti have been denied entry entirely.”
Iran has qualified for the 2026 World Cup.
Fans of fellow World Cup qualifiers Senegal and Côte d’Ivoire may also be locked out after Trump imposed partial travel bans on both last week. Along with fans from Haiti and Iran, arrivals from these countries (plus visitors from another 36 mostly Third World countries subject to travel bans) are, for the most part, from racial or religious cohorts Amnesty International has accused the Trump administration of actively mistreating.
Australia’s 5,000-odd students in the US would, likewise, be well advised to watch their ps and qs, especially those with any sympathy for the plight of Palestinians. AI by Palantir and Babel is specifically aimed at expressions of such sympathy, which you’re perhaps more likely to encounter from Australians at Harvard or Cornell than from their rugby league-loving compatriots in Las Vegas.
The curse of Palantir

Washington’s AI-driven “Catch and Revoke” scheme has been central to the deportation of more than 8,000 international students via social media monitoring, visa status tracking, and automated threat assessments.
“These technologies enable authorities to swiftly track and target international students and other marginalised migrant groups at an unprecedented scale and scope … creating a climate of fear for international students across schools and campuses,” Amnesty International’s Erika Guevara-Rosas said.
While the exact number of Australian students deported from the United States is not publicly available, the International Education Association of Australia (IEAA) reports that Australians were definitely among those deported, with others feeling targeted and pondering leaving of their own accord.
“There’s definitely momentum to explore other options away from America,” IEAA chief executive Phil Honeywood told News Corp.
The Trump Factor

“It’s the combination of the Donald Trump factor, the concern about how other nations have been dealt with, and genuine concerns about privacy and information being exposed to people other than their family and friends.”
MWM reached out to the US Embassy in Canberra for its take on “catch and revoke”, but had not heard back by publication time. However, Washington’s defence of the scheme is perhaps most succinctly summarised by Trump’s ambassador-designate to nearby Malaysia, Australian Nick Adams: “If (foreign students) are going to support Hamas, they should not be here. It’s a privilege to study in the United States, not a constitutional right,” he wrote, quoting Republican political strategist Shermichael Singleton.
“Exactly right! The American people support deporting pro-Hamas student visa holders”, Adams agreed. Readers might recall Nick Adams as the former Ashfield (Sydney) councillor who rarely attended council meetings and abused a reporter who asked him about it, before turning his hand to a lucrative role as Florida-based, Trump-boosting “alpha male”.
US tourism in freefall

If you’re thinking of a trip to the States and you’re alarmed by any of the above, you’re not alone. Visits to the US dropped again in November, the seventh straight month of year-over-year decline.
Australian visitors dropped 13 per cent year-on-year in November, with just 57,478 venturing across the pond, per the US National Travel and Tourism Office (NTTO). This compares with just a five per cent equivalent drop in visitors from Europe, and marks a trend in which Australians avoided the US in favour of other destinations in greater numbers than visitors from all other continents (with the exception of travel ban-hit Africa).
NTTO doesn’t expect a full recovery to pre-Covid inbound tourism levels until 2029, oddly enough the year Trump’s second term ends. Rugby league fans might want to put their travel plans on the back burner until then.
https://michaelwest.com.au/land-of-the-free-now-the-nightmare-destination-for-us-tourists/
READ FROM TOP.