Here's the latest edition of Worldcrunch Magazine, a selection of our best articles of the week from top international journalists, produced exclusively in English for Worldcrunch readers.
Our cover story, by Katarzyna Skiba and Valeria Berghinz for Worldcrunch, looks into how the tech world and tech giants are changing, from Poland to India, France to Argentina, Israel to the United States, and becoming more right-wing. The industry, born in Silicon Valley with a reputation for open-mindedness and politically progressive values, has taken on a central role in today's economy, and has shifted toward the far right since the presidency of Donald Trump.
The culture of Silicon Valley was once associated with social liberalism and tolerance. However, the tech community worldwide, from moguls such as Elon Musk or Peter Thiel, to IT professionals in Poland, and self-described OSINT users in India, is showing signs of a noted right-wing shift.
PARIS — For decades, the tech world acquired a reputation for open-mindedness and politically progressive values. Indeed, the origins of Silicon Valley are intimately linked to the 1960s counter-culture scene just a few miles up the road in San Francisco.
With its central role in today's economy, and arrival in mainstream culture, those would-be hippie days were bound to fade. Yet there has been a notable shift to more conservative — and even far-right — voices from the tech community that first began during the presidency of Donald Trump. Now the rightward direction of tech appears to be accelerating, with the emergence over the past year of Elon Musk as a hero of the populist far-right as only the most visible example.
But it's not just an American thing: a look around the world finds that the growing connections between tech and the far right go well beyond the U.S., with examples showing up from Poland to India to Argentina.
The inner workings of Artificial Intelligence are impenetrable, unexplainable and unpredictable. That builds in some fundamental limits to its capacity and utility.
There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.
But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.
If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?
Why AI is unpredictable
Trust is grounded in predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, then your perception of their trustworthiness diminishes.
In neural networks, the strength of the connections between "neurons" changes as data passes from the input layer through hidden layers to the output layer, enabling the network to "learn" patterns.
Many AI systems are built on deep learning neural networks, which in some ways emulate the human brain. These networks contain interconnected “neurons” with variables or “parameters” that affect the strength of connections between the neurons. As a naïve network is presented with training data, it “learns” how to classify the data by adjusting these parameters. In this way, the AI system learns to classify data it hasn’t seen before. It doesn’t memorize what each data point is, but instead predicts what a data point might be.
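The parameter-adjustment idea can be sketched in a few lines of Python. This is a deliberately tiny, hypothetical example, a single "neuron" rather than a deep network, trained on invented data: its weights are nudged whenever it misclassifies a training point, and the learned parameters then classify a point it has never seen.

```python
# Toy illustration (not any production system): one "neuron" whose
# parameters (connection weights) are adjusted on each mistake.

def train(data, epochs=50, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0            # parameters: connection strengths
    for _ in range(epochs):
        for (x0, x1), label in data:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = label - pred           # nonzero only on a misclassification
            w0 += lr * err * x0          # nudge parameters toward the answer
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

# invented training data: two separable classes of points
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1), ((2, 1), 1)]
w0, w1, b = train(data)

def classify(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

# the network never saw (2, 2); it predicts from its learned parameters
print(classify(2, 2))
```

The network stores no training point verbatim; everything it "knows" lives in three numbers, which is the same opacity problem at miniature scale, since nothing in `w0, w1, b` explains a decision in human terms.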
AI can't rationalize its decision making.
Many of the most powerful AI systems contain trillions of parameters. Because of this, the reasons AI systems make the decisions that they do are often opaque. This is the AI explainability problem – the impenetrable black box of AI decision-making.
Consider a variation of the “Trolley Problem.” Imagine that you are a passenger in a self-driving vehicle, controlled by an AI. A small child runs into the road, and the AI must now decide: run over the child or swerve and crash, potentially injuring its passengers. This choice would be difficult for a human to make, but a human has the benefit of being able to explain their decision. Their rationalization – shaped by ethical norms, the perceptions of others and expected behavior – supports trust.
In contrast, an AI can’t rationalize its decision-making. You can’t look under the hood of the self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.
AI behavior and human expectations
Trust relies not only on predictability, but also on normative or ethical motivations. You typically expect people to act not only as you assume they will, but also as they should. Human values are influenced by common experience, and moral reasoning is a dynamic process, shaped by ethical standards and others’ perceptions.
Unlike humans, AI doesn’t adjust its behavior based on how it is perceived by others or by adhering to ethical norms. AI’s internal representation of the world is largely static, set by its training data. Its decision-making process is grounded in an unchanging model of the world, unfazed by the dynamic, nuanced social interactions constantly influencing human behavior. Researchers are working on programming AI to include ethics, but that’s proving challenging.
The self-driving car scenario illustrates this issue. How can you ensure that the car’s AI makes decisions that align with human expectations? For example, the car could decide that hitting the child is the optimal course of action, something most human drivers would instinctively avoid. This issue is the AI alignment problem, and it’s another source of uncertainty that erects barriers to trust.
AI expert Stuart Russell explains the AI alignment problem.
Critical systems and trusting AI
One way to reduce uncertainty and boost trust is to ensure people are in on the decisions AI systems make. This is the approach taken by the U.S. Department of Defense, which requires that for all AI decision-making, a human must be either in the loop or on the loop. In the loop means the AI system makes a recommendation but a human is required to initiate an action. On the loop means that while an AI system can initiate an action on its own, a human monitor can interrupt or alter it.
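The two oversight modes can be illustrated with a minimal, hypothetical sketch; the function and action names here are invented, not drawn from any Department of Defense system:

```python
# "in the loop": the AI only recommends; nothing runs without human sign-off.
# "on the loop": the AI acts by default; a human monitor can veto or alter it.

def in_the_loop(recommendation, human_approves):
    # No action is initiated unless a human explicitly approves it.
    return recommendation if human_approves(recommendation) else None

def on_the_loop(action, human_vetoes):
    # The system initiates the action itself; the human can only interrupt.
    return None if human_vetoes(action) else action

rec = "reroute power to grid sector 7"
print(in_the_loop(rec, human_approves=lambda a: True))   # runs after sign-off
print(on_the_loop(rec, human_vetoes=lambda a: False))    # runs unless vetoed
```

The structural difference is where the default lies: in the loop, inaction is the default and a human supplies the "go"; on the loop, action is the default and the human supplies the "stop."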
As AI integration becomes more complex, we must resolve issues that limit trustworthiness.
While keeping humans involved is a great first step, I am not convinced that this will be sustainable long term. As companies and governments continue to adopt AI, the future will likely include nested AI systems, where rapid decision-making limits the opportunities for people to intervene. It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible. At that point, there will be no option other than to trust AI.
Avoiding that threshold is especially important because AI is increasingly being integrated into critical systems, which include things such as electric grids, the internet and military systems. In critical systems, trust is paramount, and undesirable behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to resolve issues that limit trustworthiness.
Can people ever trust AI?
AI is alien – an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it.
If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust. More research in this area will hopefully shed light on this issue, ensuring that AI systems of the future are worthy of our trust.
A new biography of the Tesla, X (formerly Twitter) and SpaceX boss reveals that Elon Musk prevented the Ukrainian army from attacking the Russian fleet in Crimea last year, by limiting the beam of his Starlink satellites. Unchecked power is a problem.
This article was updated Sept. 14, 2023 at 12:20 p.m.
-OpEd-
PARIS — Nothing Elon Musk does leaves us indifferent. The billionaire is often admired for his audacity, and regularly criticized for his attitude and some of his decisions.
A biography of the founder and CEO of Tesla and SpaceX came out today in the United States: 688 pages published by Simon & Schuster and written by Walter Isaacson (the renowned biographer of Steve Jobs and Albert Einstein).
One revelation from this book is making headlines, and it's a big one. Elon Musk — brace yourselves — prevented the Ukrainian army from destroying the Russian Black Sea fleet last year.
A bit of context: Starlink, the communications and internet satellite constellation owned by Musk, initially enabled Ukraine to escape Russian blackout attempts.
But when the Ukrainian army decided to send naval drones to destroy Russian ships anchored in Crimea, it found that the signal was blocked. And Starlink refused to extend it to Crimea, because, according to Isaacson, Musk feared it would trigger World War III.
It's dizzying, and raises serious questions.
A geopolitical actor
First, the question of responsibility — where does Elon Musk get the legitimacy to decide what the Ukrainian army can and cannot do? He has the technology, which makes him a participant, but does he have the right to decide how a war should be fought? Notably, Isaacson doesn't say whether this decision was coordinated with the U.S. administration.
He has neither the rights nor the responsibilities of state actors.
This is the first time that a private contractor has had so much influence. As cyber-power specialist Asma Mhalla points out, Musk has become, whether we like it or not, a "geopolitical actor."
But he has neither the rights nor the responsibilities of state actors in conflicts, nor the freedom of non-governmental organizations. Starlink, or any other brand in the Musk universe, has its own interests.
Taiwan, for example, is scrutinizing the war in Ukraine to prepare for a possible Chinese invasion. Taiwan has also realized that Starlink is not to be counted on because Tesla, Musk's other brand, has a strong presence in China. The entrepreneur will do nothing that could displease Beijing.
SpaceX Falcon 9 rocket carrying 53 Starlink internet satellites
So how do we deal with such a figure? It's uncharted territory.
On Wednesday, Musk and other tech heavyweights, like Meta's Mark Zuckerberg and Google's Sundar Pichai, met with U.S. lawmakers behind closed doors to discuss artificial intelligence — another subject of keen interest for the business magnate. Speaking to reporters after the meeting, Musk said there was "overwhelming consensus" over the need for a regulator to ensure the safe use of AI.
Elon Musk does as he pleases, as we can see from the irresponsible way in which he manages the social network X, previously known as Twitter, even though it continues to be a crucial way for information in the world to circulate.
After following Musk for two years, Isaacson asks two hard-hitting questions about the 52-year-old's whimsical personality. To be truly innovative, must one be half-mad, or even a genius? And how do you stop such a brilliant mind from spiraling out of control?
The considerable power accumulated by Musk, but also by other, perhaps less flamboyant tech giants, is such that it must be taken into account by governments around the world, which, until further notice, remain the only legitimate source of governance. The question is whether it may already be too late.
The IBM RAMAC 305, introduced on this day in 1956, was the world's first computer to use a magnetic hard disk drive for data storage. Its name stood for "Random Access Method of Accounting and Control," and it was designed primarily for business data processing.
What was the significance of the RAMAC 305?
The RAMAC 305 marked a significant milestone in the history of computing as it introduced the concept of random access storage using magnetic disks. It provided faster data access and retrieval compared to previous storage methods, which were sequential in nature. This advancement laid the foundation for modern disk-based storage systems that are still used today.
How did the RAMAC 305's storage system work?
The RAMAC 305's storage system consisted of a set of 50 magnetic disks, each with a diameter of 24 inches (about 60 cm). Data was stored in concentric tracks on these disks, and a read/write head moved mechanically to the desired track for data access. This allowed the computer to directly access any specific piece of data without having to sequentially read through the entire storage medium.
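The contrast between sequential and random access can be sketched with a toy model in Python. The record counts and key names below are invented; only the "scan everything" versus "seek directly" distinction comes from the description above.

```python
# Simplified model (not the actual RAMAC geometry): a sequential medium must
# pass every earlier record to reach a target; a random-access medium's
# read/write head seeks straight to the track holding the record.

records = {f"account-{i:04d}": f"balance for customer {i}" for i in range(5000)}
tape = list(records.items())   # sequential medium: an ordered run of records
disk = dict(records)           # random-access medium: direct keyed lookup

def sequential_read(key):
    steps = 0
    for k, v in tape:          # must scan past every earlier record
        steps += 1
        if k == key:
            return v, steps
    return None, steps

def random_read(key):
    return disk[key], 1        # one seek, regardless of position

_, seq_steps = sequential_read("account-4999")
_, rnd_steps = random_read("account-4999")
print(seq_steps, rnd_steps)    # prints: 5000 1
```

For a record near the end of the medium, the sequential read touches every record before it, while the random read costs the same wherever the record sits — the property that made disk storage transformative for business workloads like account lookups.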
How was the RAMAC 305 used?
The RAMAC 305 found applications in various business and scientific fields that required fast data processing and storage. It was used for tasks like accounting, inventory management, and scientific calculations. Its random access storage capabilities made it particularly valuable for applications that needed quick data retrieval and updates.
Depicted by some artists as a threat to creativity, algorithms are used by others as a powerful new instrument, able to stimulate their imagination, expand their creative capabilities and open doors to so-far unexplored worlds.
PARIS — In the music world, there are those who, as Australian singer Nick Cave confided in the New Yorker, consider that ChatGPT should “go to hell and leave songwriting alone," and those who want to give it a try.
French-born mega DJ David Guetta tried his hand at a concert in February, playing, to a stunned crowd, a track composed using only online artificial intelligence services and rapped by a synthesized voice borrowed from Eminem. Two months later, a masked Internet user, Ghostwriter977, posted a fake AI-generated duet by Drake and The Weeknd, “Heart on My Sleeve," on TikTok, without the authorization of either musician.
This did not stop the track from racking up millions of views and becoming a short-lived success on streaming platforms. After just a few days, Spotify, YouTube and Apple Music removed the track to avoid upsetting Universal Music Group, the artists’ rights holder.
Generating new sounds
In the music industry, generative AI is often depicted as a threat, both to the rights of songwriters and composers on whom the algorithms feed, and to human creation, at risk of competing with and being diluted by a tsunami of soulless, machine-generated tracks. But this technology also opens up new creative possibilities, which are already being seized upon by artists in all genres, from the most expert to the most popular, while respecting copyright.
Research institute Sony CSL learned this lesson: in 2016, they published a track composed by AI in the style of the Beatles, entitled “Daddy’s Car." But the initiative ended in “bad buzz," as not all the necessary authorization had been sought. Since then, “We’ve laid down the rules: we don’t do anything without the artists; only tools with and for them," says Michael Turbot, head of technology promotion at Sony CSL.
His laboratory offers three types of service, based on databases whose rights have been fully respected. First, synthesizers, which produce new sounds. “AI is able to generate an infinite number of sounds which didn’t exist — for example, any interpolation between a guitar sound and a saxophone sound,” he says.
The second kind of tools: creative assistants. “You’re in the studio, but are not very good with such-and-such instruments,” suggests Michael Turbot. “The algorithm will then react to your musical idea and suggest creative possibilities, such as a bass, piano or drum line.”
Bass lines
“This doesn’t mean we don’t need instrumentalists anymore,” he insists. “But today, rare are the musicians who have access to their own drummers, for example. Failing that, they buy ready-made drum lines on the Internet. We offer customized melody lines.” Finally, the algorithms of Sony CSL take care of mixing a track. In this complex process, “AI will do all the calculations for you, allowing you to scan the whole spectrum of possibilities,” he explains.
AI is like an invisible partner without an ego.
Some artists are already playing the game, like Whim Therapy (Jérémy Benichou’s stage name). “I started to try these tools to see if I should be afraid of them, but as I used them, I quickly realized that the big replacement wasn’t around the corner,” the pop composer says. That didn’t stop him from getting a taste for it and trying out CSL's melody generators — drums, bass, piano — and the lyrics assistant, to create his first song, “Let It Go," which won second prize from the audience at the AI Song Contest competition in 2021.
“Imagine a score with three sheets; we remove the one in the middle, and ask the AI what it suggests," he says, describing how the service works. Won over by this first try, the artist kept going and produced an EP with these same technologies. He doesn’t use them to generate an entire track, but rather as a back-up, like an assistant when he’s lacking an idea.
Invisible partner
“It avoids blockages and old habits,” he explains. Most of the time, he doesn’t use the algorithm’s suggestions as they are, but uses them as a jumping-off point for a personal approach. “Take the bass line generator: all I had to do was turn it an octave higher and send it to a guitar amp so that interesting things would happen in terms of sound,” he says.
For him, AI is like an “invisible partner without an ego, with whom I don’t need to argue for 15 minutes," which unblocks his work and leads him in unexpected directions. Other artists have experimented with Sony CSL tools, such as electronic music producer DeLaurentis, whose latest album revisits pieces from the classical repertoire (Debussy, Ravel, Satie), with the help of AI.
A formidable compositional tool, AI is also an improv partner. Double bass player Joëlle Léandre, one of France’s leading contemporary musicians, has proven as much, as part of a program at the Institute for Research and Coordination in Acoustics/Music (Ircam) named Reach (Raising Cooperative Creativity in Cyber-Human Musicianship), which creates generative AI models.
“Our AI-based systems are capable of listening to musicians live,” explains Gérard Assayag, head of the program at Ircam, “Breaking down the sound signal into meaningful units (note, rhythm, harmony), to analyze the logic of what is played, even if it’s improvised, by building a cartography giving the range of possibilities, and at the moment of playing, choosing a trajectory within that cartography.”
For some, the universe of possibilities is only just starting to open.
Joëlle Léandre was able to work with the algorithms during a concert at the Centre Pompidou in mid-June, inventing musical phrases as she went along, to which the researchers’ “machines” answered, as if in front of other musicians. “For me, there is absolutely no difference,” she says. "I’ve been playing double bass for years; it’s a tool. My friends from Ircam have a tool too. The only difference is that they can offer a proliferation of sounds,” she says — while she is limited to her double bass.
We’re at the dawn of a great revolution.
With these interactive algorithms, Ircam sets itself apart from existing music-generating AI services, like Aiva or Soundful, which produce ready-made pieces on command, with a few prior adjustments, or from Google’s experimental program MusicLM, which generates music based on a text command. “When I ask Google for 10 seconds in the style of Brahms, I know what to expect. This type of algorithm is not very creative and produces ‘more of the same,' as the English say,” comments Gérard Assayag.
“For us, the challenge is to surprise and push back the limits of creation,” he adds. Ircam has other projects in the pipeline, notably the installation of its algorithms within the HyVibe intelligent acoustic guitar, which would be capable of becoming autonomous and playing on its own, following the musician’s lead.
The need for transparency
The universe of possibilities is only just starting to open. “We’re at the dawn of a great revolution,” he predicts. There will be losers too. “I can understand that some are worried,” says Jérémy Benichou. “Things are moving very fast; soon, AI will be able to create summary pieces that can be used in musical libraries. But it will become an enormous asset for those who will try something more daring.”
The majors, for their part, take a dim view of the massive influx of standardized tracks flooding the streaming platforms, threatening to dilute the value attributed to genuine artists. The International Federation of the Phonographic Industry was alarmed by this in its latest report.
“We know that there are playlists lasting several hours where no human being has intervened to showcase their inventiveness, and which are capturing revenue,” warns the president of the National Music Center (CNM), Jean-Philippe Thiellay. He insists on the need for transparency, both in the use of AI in a track and in the way playlists are composed on streaming platforms.
As with the origin of a product in commerce, “We should display that a playlist is made up of so many percent of AI-created tracks,” he suggests, as these authorless tracks potentially guarantee platforms a higher margin. “We also need to make sure that not a single cent of public money is used to finance productions that would have done without human intervention,” insists the president of the CNM — to ensure that, in music and elsewhere, AI remains a tool in human hands, and not the other way around.
Poland has received widespread investment from multinational companies and now, the country is bucking the worldwide trend by adding jobs in the tech sector.
The Polish economy and its tech sector have experienced marked growth in recent years, especially since the country joined the European Union. Poland currently has 60,000 tech companies, including 10 of its own "unicorns," companies that reach a value of $1 billion without being listed on the stock market.
IT and tech currently account for 8% of the Polish GDP. Giants such as Microsoft, Google, Meta, Intel, Samsung and Amazon have all invested in Polish IT and established their own centers within the country. Poland’s central location within Europe, and its proximity to other countries experiencing their own tech successes such as Germany and Lithuania, has also granted it a strategic advantage for additional investment.
In late March, experts began to warn that Polish tech had tried to do too much too fast, especially when the IT market was hit by the same layoffs sweeping the sector worldwide. Some technicians who did not lose their jobs were allowed to keep them only on the condition that they accept a lower pension in the future.
But in spite of these challenges, the sector is still expected to grow. As of June 22, 38% of Polish IT firms said they are looking to hire new staff, according to Polish tech news service CRN. Even accounting for the firms that say they are looking to cut staff, this amounts to a net 26% of firms expecting employment growth across the industry.
Silicon Valley comes to Poland
Last year, Google invested 2.7 billion PLN (upwards of 600 million euros) in Polish tech. The company now owns the Warsaw tech hub, a space it had previously rented, using it as a center for developing Google Cloud technologies. Google has also expanded its Warsaw offices to include space for up to 2,500 employees, with the possibility of further development in the future. It is currently the largest center for the development of Cloud technology in Europe.
As of 2022, Google employed over 1,000 Poles, over 600 of whom are programming engineers.
Aside from supporting Poland’s tech sector, senior officials at Google had a geopolitical message. “Our activities in Poland go beyond supporting the development of the digital economy. We will use our resources and spaces to support those who have been impacted by the ongoing war in Ukraine,” said Ruth Porat, Senior Vice President and Chief Financial Officer of Google and Alphabet. Earlier that year, the company announced financial support to NGOs working for refugees from Ukraine coming to Poland.
This solution is very cost-competitive.
The company claims that over 270,000 Poles have taken part in initiatives developing their digital skills, and that, in the last two years, Google has trained over 24,000 cloud computing specialists, according to CRN.
On June 16, tech giant Intel announced their own investments in the Polish tech sector. The U.S. company plans to build a Semiconductor Integration and Testing Plant near Wrocław, according to Gazeta Wyborcza. “This investment will create the first of its kind, comprehensive and modern value chain in Europe in the field of semiconductor production,” Prime Minister Mateusz Morawiecki said after the deal was announced.
The plant, which is projected to open in 2027, has received $4.6 billion of investment from Intel. Aside from temporary construction and related supplier jobs, the center will employ 2,000 individuals in total.
For Intel, this isn’t the beginning of a relationship with Poland, but a continuation. Intel has held various operations in Poland for over 30 years and currently employs a total of 4,000 Poles.
“Poland has already been the place for Intel’s operations,” said Intel CEO Pat Gelsinger. “The geographical location of the country will allow for effective cooperation with the company's production plants in Germany and Ireland.”
Gelsinger also cited lower costs in comparison to other countries, saying, “This solution is very cost-competitive compared to other manufacturing locations globally while offering great potential for a talent base that we will be helping to develop.”
Investing in green energy
Recent investments by Intel and Google have sparked a new wave of interest in the Polish tech sector. In 2020, Microsoft launched a landmark $1 billion IT investment in Poland, the largest ever at that point in time.
Intel’s arrival in Poland, as well as the growth of the tech sector in the country as a whole, have not been without their own added costs. According to unofficial estimates by Bloomberg, Germany was set to pay 11 billion euros in subsidies to Intel in exchange for the production plant. Whether Poland paid for the plant, and how much it may have promised Intel in subsidies for its investment, are unknown. Polish Prime Minister Morawiecki declined to answer Gazeta Wyborcza’s request for comment.
And, despite keeping their eyes on Poland, large firms have made it clear that their investment in the country is not unconditional. In February, a list of multinational companies, including tech giants Amazon and Google, but also Mercedes-Benz and Ikea, signed a letter addressed to the Polish Prime Minister and the Polish Parliament, urging Poland to invest in green energy in order to continue attracting their investments.
The verdict is a positive light at the end of the tunnel.
Poland is currently the most coal-dependent country in the EU, with 71% of its energy being coal-generated as of 2021.
Amazon's WRO2 distribution center in Wroclaw, Poland.
Polish law has also sprung to the defense of the country’s burgeoning sector of tech entrepreneurs, who may end up paying less in taxes than they did before. After the Polish Treasury tried to implement higher taxes for workers in the sector, the Administrative Court of the Gdansk Voivodeship struck them down.
Workers in Poland’s tech industry are now taxed between 8.5 and 12%, but the recent judgment has increased IT technicians' chances of being taxed at a lower rate.
The judgment applies differential rates depending on whether technicians work on "software" or on activities "related to software," concepts that remain undefined and unclear. But workers in the sector are optimistic that the lower tax rates will soon apply to them.
“The verdict is a positive light at the end of the tunnel,” Piotr Sekulski, a tax advisor at the law firm Outsourced.pl, told Gazeta Wyborcza.
More modern than the West
In recent years, growth has been so significant that Poland has been experiencing what some have referred to as “reverse brain drain.” The presence of tech and software companies such as Google, Microsoft, and Nvidia, as well as the lower costs of living compared to countries like the US or UK, have prompted members of the Polish diaspora to return home.
This, combined with Poland’s own recent achievements in the IT and AI fields, has created a vocal group of Poles advocating for its tech industry over others.
"Poland is sitting on a gold mine of tech talent, ranking fourth overall in STEM graduates and number one in female STEM graduates," Polish-American venture capitalist Dominik Andrzejczuk told Euronews. "It’s this high concentration of tech talent that sets Poland up to be a real contender in the next 5-10 years.”
Other industries, even those completely unrelated to technology and IT, are taking note of Poland’s success in the area.
“In many ways that are very important to us, Poland is more modern than the West,” Valery Gaucherand, chief of L’Oréal’s division for Poland and the Baltic states, told Gazeta Wyborcza. "Digitization is Poland's great strength and it will be the source of its growth."
AI is so far unlikely to trigger a global nuclear catastrophe, but it might gradually undermine humans' capacity for critical and creative thinking as some decision-making and even writing tasks may increasingly be delegated to artificial intelligence.
The rise of ChatGPT and similar artificial intelligence systems has been accompanied by a sharp increase in anxiety about AI. For the past few months, executives and AI safety researchers have been offering predictions, dubbed “P(doom),” about the probability that AI will bring about a large-scale catastrophe.
Worries peaked in May 2023 when the nonprofit research and advocacy organization Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” The statement was signed by many key players in the field, including the leaders of OpenAI, Google and Anthropic, as well as two of the so-called “godfathers” of AI: Geoffrey Hinton and Yoshua Bengio.
You might ask how such existential fears are supposed to play out. One famous scenario is the “paper clip maximizer” thought experiment articulated by Oxford philosopher Nick Bostrom. The idea is that an AI system tasked with producing as many paper clips as possible might go to extraordinary lengths to find raw materials, like destroying factories and causing car accidents.
A less resource-intensive variation has an AI tasked with procuring a reservation to a popular restaurant shutting down cellular networks and traffic lights in order to prevent other patrons from getting a table.
Office supplies or dinner, the basic idea is the same: AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.
Actual harm
In the past few years, my colleagues and I at UMass Boston’s Applied Ethics Center have been studying the impact of engagement with AI on people’s understanding of themselves, and I believe these catastrophic anxieties are overblown and misdirected.
Yes, AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent. In fact, that is already happening: Russian operatives likely attempted to embarrass Kremlin critic Bill Browder by ensnaring him in a conversation with an avatar for former Ukrainian President Petro Poroshenko. Cybercriminals have been using AI voice cloning for a variety of crimes – from high-tech heists to ordinary scams.
AI decision-making systems that offer loan approval and hiring recommendations carry the risk of algorithmic bias, since the training data and decision models they run on reflect long-standing social prejudices.
These are big problems, and they require the attention of policymakers. But they have been around for a while, and they are hardly cataclysmic.
What it means to be human
Actually, there is an existential danger inherent in using AI, but that risk is existential in the philosophical rather than apocalyptic sense. AI in its current form can alter the way people view themselves. It can degrade abilities and experiences that people consider essential to being human.
People will gradually lose the capacity to make these judgments themselves
For example, humans are judgment-making creatures. People rationally weigh particulars and make daily judgment calls at work and during leisure time about whom to hire, who should get a loan, what to watch and so on. But more and more of these judgments are being automated and farmed out to algorithms. As that happens, the world won’t end. But people will gradually lose the capacity to make these judgments themselves. The fewer of them people make, the worse they are likely to become at making them.
Or consider the role of chance in people’s lives. Humans value serendipitous encounters: coming across a place, person or activity by accident, being drawn into it and retrospectively appreciating the role accident played in these meaningful finds. But the role of algorithmic recommendation engines is to reduce that kind of serendipity and replace it with planning and prediction.
Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.
07 June 2023, Baden-Württemberg, Karlsruhe: The humanoid robot “NAO” is introduced at the inclusive daycare center at the Lebenshilfehaus Karlsruhe.
Nuclear weapons killed more than 200,000 people in Hiroshima and Nagasaki in 1945, claimed many more lives from cancer in the years that followed, generated decades of profound anxiety during the Cold War and brought the world to the brink of annihilation during the Cuban Missile Crisis in 1962. They have also changed the calculations of national leaders on how to respond to international aggression, as currently playing out with Russia’s invasion of Ukraine.
AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.
Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.
Not dead but diminished
So, no, AI won’t blow up the world. But the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.
The human species will survive such losses. But our way of existing will be impoverished in the process. The fantastic anxieties around the coming AI cataclysm, singularity, Skynet, or however you might think of it, obscure these more subtle costs. Recall T.S. Eliot’s famous closing lines of “The Hollow Men”: “This is the way the world ends,” he wrote, “not with a bang but a whimper.”
The man behind ChatGPT is in Europe to press leaders on the Continent to face hard questions about what artificial intelligence is bringing to our world, whether they like it or not.
-Analysis-
PARIS — Six months ago, Sam Altman’s name was only known to a small circle of technophiles. Earlier this week, when he came to France, he was received by President Emmanuel Macron and the Minister of Economy, and he is back in Paris on Friday to make other connections. On his Twitter account, he described his trip as a "World Tour," like a pop star.
Altman is the CEO of OpenAI, the U.S. company that created ChatGPT, the natural language artificial intelligence tool that has shaken the world. With 200 million users worldwide in just six months, ChatGPT has broken all sorts of records for the speed of technology adoption.
The world of tech is prone to trends, and not all of them last. However, to quote Gilles Babinet, co-president of the National Digital Council in France, who recently published an essay on the history of the internet titled Comment les hippies, Dieu et la science ont inventé Internet ("How the Internet Was Invented by Hippies, God and Science"), we are currently facing an "anthropological break."
In other words, a qualitative leap that will impact all human activities, and even the political organization of our societies — with both positive and negative results.
Negative consequences
With such a significant breakthrough, concerns have quickly emerged, some of which have been voiced by researchers and entrepreneurs themselves. Some have futilely called for a moratorium. Calls for "regulation" are almost a natural reflex, especially in Europe, where technological advancements are too often simply experienced rather than invented.
Altman himself advocates for state regulation that allows for the emergence of a technology that has entered an exponential phase of development, while ensuring the prevention of abuse and negative consequences. However, during his visit to London this week, he issued a warning: If the impending European regulation is too restrictive, he will not hesitate to withdraw his software from the continent.
Could they cooperate on this issue when technology lies at the heart of their confrontation?
An even more radical figure in the debate is Eric Schmidt, the former CEO of Google who now leads a powerful defense technology fund. He believes that regulation should be left in the hands of tech players rather than politicians who, in his opinion, have no understanding of it whatsoever.
Schmidt aside, however, the idea of regulation is widely shared. No serious observer would consider allowing the players in such an existential sector to police themselves.
The recent G7 summit in Hiroshima even established a global think tank on AI, and some saw it as a precursor to an international authority similar to the UN's International Atomic Energy Agency.
The second concern is raised by Schmidt: How many elected officials or technocrats in our countries have an actual grasp of the subject? In 2018, the Chinese Communist Party's Politburo dedicated an entire session to artificial intelligence, a wise move.
Such evangelization is necessary in our societies, to be sure that we are not solely driven by fears and fantasies — and can become more than simply passive recipients.
A healthy dose of cynicism and a few shortcuts still allow weapons parts and other technology to make their way into Russia. The independent Russian-language media outlet Vazhnye Istorii traces the way both Moscow and much of the rest of the world circumvent export bans.
When Western countries imposed sanctions on Russia after the invasion of Ukraine, exporting Western technologies to Russia was effectively banned — at least, on paper.
But through a web of third parties, Russia is still finding ways to dodge the sanctions and import crucial components for weapons and other technology.
In the United States, personal sanctions prohibit American citizens and companies from doing business with specific Russian people and businesses. Other sanctions prevent them from doing business with entire industries. Secondary sanctions may be imposed on non-U.S. companies caught violating U.S. prohibitions.
A special permit is required for any export of high-tech products to Russia. These are only issued in exceptional circumstances, if ever. The largest manufacturers of microelectronics — Analog Devices, Texas Instruments and others — have all ceased commercial activities in Russia.
Still, products made by these companies are increasingly being found in the remains of Russian drones and missiles.
Components continue to enter Russia through a chain of intermediary firms in different countries. For example, an American company can buy them from a manufacturer, then sell them to a Chinese company, which can in turn sell them to a Russian intermediary who is not formally connected with the defense complex — who will then transfer the goods to the arms manufacturer.
For the independent Russian media site Vazhnye Istorii ("Important Stories"), defense expert Eric Woods of the James Martin Center for Nonproliferation Studies explains how these schemes work.
Export law loopholes
It is important to distinguish between sanctions and export controls. People often assume they are one and the same, and in function they overlap, but sanctions and export controls use different mechanisms to control goods. There are many overlapping rules, and they are written in such a way that even Americans themselves struggle to understand them.
Consumer electronics are not subject to export controls.
If I am a U.S. citizen who wants to export something to Russia, I need to check if the goods are under export control. All goods subject to export control are dual-use goods (i.e. they can be used for both civilian and military purposes), but not all dual-use goods are under export control. Here lies the first source of confusion.
Consumer electronics are not subject to export controls. But it all depends on the context and who the end user will be. If my grandmother ordered electronics, it would be legal, but if the buyer was a Russian military enterprise, it wouldn’t be.
We have many sanctions, but there is not enough understanding of how they work. The laws are so complex that it is difficult for customs officers at the airport or port to understand them all.
Smugglers take advantage of this. It is by no means difficult to circumvent sanctions.
It's what your clients do
If the military or intelligence agencies want to get components that are under export control, they usually get them through intermediaries. It is often a difficult and laborious process, and it makes goods more expensive, but if customers have the time, energy and resources, they’ll do it.
American companies can easily say, "We don't ship to Russia, we don't ship to Iran, we don't ship to North Korea." And it's true, they don't. But their clients can, and often do.
The manufacturer wants to make money, not spend millions of dollars vetting every customer or business that walks in. When someone comes in and says, "Here's a million dollars, I need a product," you don't ask questions.
One American company, for example, sold computer equipment directly to a Russian company that makes launchers for the S-400 anti-aircraft missile system. The company’s compliance department told their bosses: "We can't do that; this is a rocket factory in Moscow." But they ignored the warning because the order was a big one.
The Washington Post reported another example last October, of an American company making hypersonic missile technology for the Pentagon. It sold the technology to one company in the U.S., that company sold it to another, and now the technology is being used in Chinese weapons.
It is difficult to verify everything when there are so many intermediaries involved. But companies should and need to ask questions.
A well-rehearsed scheme
The scheme of using third countries to access goods under export control has been operating since the Soviet era. There are documents and studies from the 1970s and 1980s showing that the Soviet Union obtained a great deal of computer equipment and electronics from the United States.
Back in Stalin's day, an international coordinating committee for export controls (CoCom) was created, which was supposed to ensure that dual-use technologies did not reach the USSR. However, third countries such as Finland traded with both sides, undermining the committee's goals.
Until last February 24, there were many cases when prohibited goods were imported, for example, through Finland or Estonia. Today, of course, they don’t cross the border — Estonia and Finland are now desperately trying to make their borders with Russia more secure. But instead, we see attempts in places like Taiwan and Hong Kong to do similar third-party deals with Russia.
Do sanctions work?
Many studies show that the Russian defense complex has been in total disorder since 2014. Sanctions work. There’s no doubt about it. The cases of circumvention of sanctions that we become aware of are the success stories of smugglers.
Of course, there are businesspeople who use sanctions as an opportunity and supply millions of dollars worth of military components to Russia. But are these supplies sufficient? We don't yet know for sure.
The United States would need the help of China, Malaysia, Indonesia and all countries that produce sanctioned components to combat sanctions violators. But it's almost impossible. Why would China want to help America fight Russia?
Moving microchip production to friendlier countries would cost a vast amount of money. It is simply cheaper to produce components in Malaysia, Indonesia and other countries than on home turf. Only the most advanced components are made in the U.S.
Secondary sanctions should be of concern to smaller countries doing business with Russia. If I were a Taiwanese company, I would be worried. Americans have a lot of money and they are willing to spend it in Taiwan. From an economic standpoint, it would be terrible to lose the American market. Mainland Chinese companies might be worried too, but it depends on who they consider their main customer.
A device on show to demonstrate the use of foreign components by Russian troops
Missiles hit civilians when you use inaccurate weapons in populated areas. Missiles, especially those designed and built during the Cold War, are not as accurate as the military claims them to be, even though they have been upgraded under Putin. This may be due to inattention, lack of information about the target, for example, when using old Soviet maps, or due to political pressure to launch. We have seen this multiple times already during the war.
If Russia were to lose Western tech, they would be left with 1970s era weapons.
The Americans have what is called combat damage assessment, when the military checks whether it hit what it aimed at. Whether this happens to the same extent in Russia remains to be seen; if these reports are falsified, like others are, to tell the authorities what they want to hear, that is very bad. In this sense, the human factor plays a greater role than electronics.
But even so, if Russia were to lose Western tech, they would be left with 1970s era weapons.
Will Russia replace Western tech?
Even in the time of the USSR, Russian microchips lagged far behind American tech and often copied the developments of the U.S. instead of creating their own chips. I can’t see Putin, especially now, being able to change that.
Even when the West began to impose sanctions back in 2014, it seems that Russian arms manufacturers did not replace foreign components in their weapons. While a huge amount of money was allocated to the defense complex to solve this problem, the money simply disappeared.
As for the replacement of Western components with Chinese ones: many US chips are part of complex supply chains involving companies that have offices in China. They are already Chinese to some extent. Whether the Russian defense complex will be able to switch to solely Chinese-designed electronics is unclear.
Russia's capabilities for the production of microelectronics are decades behind even a country like Malaysia. The equipment needed to set up your own production is big, heavy, hard to hide and hard to smuggle in. Maybe Russia will be able to buy used equipment. Or maybe the Chinese semiconductor market will move forward. Whatever the case, Putin has had two decades to build the semiconductor industry — and his attempts were about as successful as his war.
On this day in 1971, NASDAQ, the world's first electronic stock market, was created in New York City.
Get This Happened straight to your inbox ✉️ each day! Sign up here.
What is NASDAQ?
The NASDAQ (National Association of Securities Dealers Automated Quotations) is an American stock exchange. It is owned and operated by Nasdaq, Inc. and is home to many technology companies and startups.
How did the NASDAQ differ from traditional stock markets?
The NASDAQ was the first market to use electronic trading systems, as opposed to the traditional open outcry system used by other stock markets. This allowed for faster and more efficient trading.
How has the NASDAQ evolved over the years?
Over the years, the NASDAQ grew more robust by adding automated trading systems, and in 1998 it became the first stock market in the United States to trade online, using the slogan "the stock market for the next hundred years".
What is the current role of the NASDAQ in the global financial market?
The NASDAQ is currently one of the world's largest stock markets, with a wide range of companies listed on it, including technology giants like Apple and Microsoft. It plays a significant role in the global financial market.
PARIS — There’s a dual story about the U.S. tech scene circulating in the world’s media. The first is structural, about trendlines and economics as Silicon Valley’s all-powerful platforms and companies have seen their stocks tanking and announced large layoffs for the first time ever. The second storyline is about the big tech titans themselves.
No surprise, Twitter is still taking up extraordinary amounts of headline real estate. And it’s impossible to disentangle Twitter the company from its Very-Online new owner, as Elon Musk’s barrage of changes continues to cross new red lines that could wind up threatening the viability of the company itself.
To receive Eyes on U.S. each week in your inbox, sign up here.
France’s Alternatives Economiques pulls no punches in comparing Musk to Donald Trump, saying that like the twice-impeached former president, Musk uses Twitter to “shock, provoke, and even manipulate markets and public opinion.”
On Tuesday, Dubai-based Al-Arabiya quotes one of Musk’s irreverent Tweets warning (or threatening) that the company “will do lots of dumb things.” Al-Arabiya declares: “He couldn’t have made a clearer statement.”
International observers note that the spate of firings at Twitter may come back to haunt the company. Musk, who has laid off roughly half the company’s workers, has, by all accounts, decimated the teams responsible for content moderation.
In the U.S., this might be causing problems with advertisers, but in Europe, it’s potentially a problem with governments — which impose much stricter regulations on hate speech, and requirements that companies remove it. Italy’s Il Fatto Quotidiano and France’s Le Monde contextualize Musk’s purchase of Twitter as a peculiarly American battle about the limits of free speech.
Another French daily, La Croix, also expressed its worries about "the shadow side" of Twitter
“Elon would like to present himself as the grand moderator of the most political content on social media in the name of free speech,” Il Fatto’s Luca Ciarrocca writes. “The risk though is that it won’t be necessary because if he continues like this all the users could wind up leaving.”
As the FT reports, this puts Twitter on a “collision course with Brussels,” which has the ability to fine the company up to 6% of its global revenue under the Digital Services Act.
The upheavals at Twitter are partially responsible for the precipitous decline in the stock price of another Musk company — Tesla. Investors in the electric vehicle manufacturer are concerned that Musk might have to unload significant numbers of Tesla shares in order to cover the debt incurred in the acquisition of Twitter, leading the stock price lower.
But falling tech stocks are more than an Elon Musk story. Where the world used to look at Silicon Valley in awe, publications are striking a different tone these days. France’s Le Figaro writes: “There’s something broken in the kingdom of American tech.”
That something increasingly now includes employee headcounts.
It seems that Silicon Valley is cracking right now.
From Facebook’s parent company Meta (11,000 jobs cut) to Amazon (10,000) to Twitter (3,700), U.S. tech companies are shedding employees by the thousands. That’s a lot of newly unemployed programmers… But the one company that’s not laying anyone off en masse? Dutch payments darling Adyen, notes Business Insider Nederland — even though its main competitor Stripe is cutting 14% of its workforce.
German weekly Die Zeit senses déjà vu: “The wave of layoffs brings back uncomfortable memories of 2000, when the dot-com bubble burst. Back then, the share prices of the then-new internet companies soared before crashing violently,” the paper writes. “It seems that Silicon Valley is cracking right now. Google was hit by collateral damage after the cryptocurrency crisis. And crypto itself is also struggling to survive.”
Not all are ready to count out the American tech giants so quickly. Georges Nahon, writing in French business daily Les Echos, notes how many fields are now opening up to the latest breakthroughs in artificial intelligence and the blockchain-backed web3.
“In Silicon Valley the next cycle has already started, driven by Generative AI which is already setting off a new gold rush with the creation of more than 100 start-ups in a very short time,” Nahon writes. “Any obituary of Silicon Valley has been written prematurely.”
Mexico City daily La Jornada was one of many international newspapers to feature former U.S. President Donald Trump on its front page Wednesday, following the announcement that Trump would be a candidate for the White House in 2024.
In an article titled “Trump: The Monster’s Back”, French-language Canadian daily Le Journal de Montréal compares Trump’s comeback plans to a zombie movie, writing: “Le Grand Orange was stabbed in the heart when he lost the House of Representatives back in 2018. He was gunned down when he lost the presidential election on Nov. 3, 2020. Now he’s tasted some midterms flamethrower. And still, who crawled out from his grave last Tuesday?”
😅 GOTT BLESS AMERICA
“That was one of the greatest football experiences I’ve ever had.” That’s how Tampa Bay Buccaneers superstar quarterback Tom Brady described the NFL game between his team and the Seattle Seahawks, which was taking place on German soil for the first time. The game, which the Buccaneers won 21-16, saw 69,811 fans gather in Munich's Allianz Arena (more used to the other kind of football, i.e. soccer).
It was also a streaming hit, with 5.8 million viewers on TV and online, the highest ratings for an NFL game played abroad to date. As German sports website Ran writes: Deutschland ist ein Football-Land!
There is definitely bad blood between Taylor Swift fans and Ticketmaster, after the ticket giant canceled this week’s general public sale for the U.S. singer’s upcoming tour because of “historically unprecedented” demand.
As for Europe, as Belgian media Moustique notes, Swift is not expected to tour there before fall 2023, which should leave local ticket providers ample time “to avoid a new fiasco”.