AI, Chatbots, Society & Technology May-October 2023
Written by Diana Thebaud Nicholson // October 30, 2023 // Global economy, Government & Governance, Science & Technology
Mila – Quebec Artificial Intelligence Institute, is recognized worldwide for its major contributions to AI.
Today, the Mila community boasts the largest concentration of deep learning academic researchers globally.
Mila is the result of a unique partnership between Université de Montréal and McGill University, in close collaboration with Polytechnique Montréal and HEC Montréal. The institute also hosts professors from Université Laval in Quebec City, Université de Sherbrooke and the École de technologie supérieure in Montreal, bringing together over 1,000 individuals advancing AI for the benefit of all. Since 2017, Mila and its sister institutes, Amii (Alberta) and Vector (Ontario), have played a central role in the Pan-Canadian Artificial Intelligence Strategy, the first national AI strategy in the world, led by CIFAR.
AI Chatbot
A chatbot is a computer program or application that simulates and processes human conversation (through text or voice), enabling the user to interact with digital entities as if they were communicating with a real human.
Chatbots fall into two major categories: rule-based (declarative) chatbots, which match user input against scripted patterns, and conversational AI (predictive) chatbots, which generate responses from machine-learning models, as the sketch below illustrates.
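A minimal sketch of the first category (in Python, with made-up rules): a declarative chatbot simply matches the user's message against scripted patterns and returns a canned reply, falling back when nothing matches. Chatbots in the second category instead generate replies from statistical models trained on data.

```python
import re

# Hypothetical pattern -> response rules; a production rule-based
# chatbot would have many more rules plus dialogue state.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\bhours?\b", re.I), "We are open 9 a.m. to 5 p.m., Monday to Friday."),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye! Have a great day."),
]
FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the response of the first rule whose pattern matches."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK

if __name__ == "__main__":
    print(reply("Hi there"))              # greeting rule fires
    print(reply("What are your hours?"))  # hours rule fires
    print(reply("Tell me a joke"))        # no rule matches -> fallback
```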
Foreign Affairs September/October 2022
Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous
By Henry Farrell, Abraham Newman, and Jeremy Wallace
The challenges to democracies such as the United States are all too visible. Machine learning may increase polarization—reengineering the online world to promote political division. It will certainly increase disinformation in the future, generating convincing fake speech at scale. The challenges to autocracies are more subtle but possibly more corrosive. Just as machine learning reflects and reinforces the divisions of democracy, it may confound autocracies, creating a false appearance of consensus and concealing underlying societal fissures until it is too late.
6 January
A Skeptical Take on the A.I. Revolution
The A.I. expert Gary Marcus asks: What if ChatGPT isn’t as intelligent as it seems?
Gary Marcus is an emeritus professor of psychology and neural science at N.Y.U. who has become one of the leading voices of A.I. skepticism. He’s not “anti-A.I.”; in fact, he’s founded multiple A.I. companies himself. But Marcus is deeply worried about the direction current A.I. research is headed, and even calls the release of ChatGPT A.I.’s “Jurassic Park moment.”
27-30 October
Biden Signs Sweeping Executive Order Regulating Artificial Intelligence
Biden today will take his first major step to confront the emerging risks of artificial intelligence by issuing a broad executive order regarding government agencies’ use of it.
Industry urged to boost privacy, security of AI systems
Government to assist with creating AI ‘watermarking’ standards
(Bloomberg) His directive is intended to promote the safe deployment of AI with a government-wide strategy. The executive order's release comes as countries around the world struggle to grasp the new technology's potential benefits — and perils. Lawmakers on Capitol Hill have been trying to establish a framework that would provide safety without stifling innovation.
Even as many executives plead for guidance from Washington, corporate America, particularly Silicon Valley, has been rushing into AI. Last week, Alphabet’s Google committed to invest $2 billion in the artificial intelligence company Anthropic.
… With AI policy as one of her key portfolio items, VP Kamala Harris will represent the administration at the Global Summit on AI Safety.
While there, White House aides tell us, Harris will be pitching the administration’s domestic policy as a blueprint for regulatory efforts around the globe — protecting human, civil, labor and consumer rights “in a way that does not stifle innovation,” as one put it. In addition to the EO, we’re told, Harris will announce other U.S. actions in London that remain under wraps for now.
As with most foreign policy matters these days, the subtext is obvious: China.
While Harris is not expected to directly focus on America’s emerging global rival, “she’s making sure these rules and norms [around AI] represent our interests and our values, not those of authoritarians,” a White House aide told us. “She’ll make clear that we’re ready to work with anyone if they’re prepared to promote responsible international rules and norms for AI.”
AI doomsday warnings a distraction from the danger it already poses, warns expert
A leading researcher, who will attend this week’s AI safety summit in London, warns of ‘real threat to the public conversation’
(The Guardian) Focusing on doomsday scenarios in artificial intelligence is a distraction that plays down immediate risks such as the large-scale generation of misinformation, according to … Aidan Gomez, co-author of a research paper that helped create the technology behind chatbots. … “Misinformation is one that is top of mind for me. These [AI] models can create media that is extremely convincing, very compelling, virtually indistinguishable from human-created text or images or media. … We need to figure out how we’re going to give the public the ability to distinguish between these different types of media.”
27 October
Sweeping new Biden order aims to alter the AI landscape
The White House is poised to make an all-hands effort to impose national rules on a fast-moving technology, according to a draft executive order.
President Joe Biden will deploy numerous federal agencies to monitor the risks of artificial intelligence and develop new uses for the technology while attempting to protect workers, according to a draft executive order obtained by POLITICO.
24 October
How language gaps constrain generative AI development
Regina Ta and Nicol Turner Lee
There are over 7,000 languages spoken worldwide, yet the internet is primarily written in English and a small group of other languages.
Generative AI tools are often trained on internet data, meaning access to these tools may be limited to those who speak a few data-rich languages like English, Spanish, and Mandarin.
Building inclusive digital ecosystems will require bridging the digital language divide by ensuring greater linguistic representation in AI training data.
(Brookings) Prompt-based generative artificial intelligence (AI) tools are quickly being deployed for a range of use cases, from writing emails and compiling legal cases to personalizing research essays in a wide range of educational, professional, and vocational disciplines. But language is not monolithic, and opportunities may be missed in developing generative AI tools for non-standard languages and dialects.
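To make the "data-rich languages" point concrete, here is a rough sketch of how one might measure the language skew of a training corpus. It assumes the third-party langdetect package, and the five-document corpus is entirely made up; real web-scale corpora contain billions of documents with a far heavier English skew.

```python
from collections import Counter
from langdetect import detect  # third-party: pip install langdetect

# Toy stand-in for a training corpus (made-up examples).
corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "Machine learning models are trained on internet text.",
    "Generative AI tools are often trained on internet data.",
    "El zorro marrón salta sobre el perro perezoso.",
    "Les modèles sont entraînés sur des textes du web.",
]

# Tally which languages are represented, and how heavily.
distribution = Counter(detect(doc) for doc in corpus)

for lang, count in distribution.most_common():
    print(f"{lang}: {count / len(corpus):.0%} of corpus")
# Languages absent from the tally contribute nothing for a model
# to learn from -- the digital language divide in miniature.
```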
13 October
The Path to AI Arms Control
America and China Must Work Together to Avert Catastrophe
By Henry A. Kissinger and Graham Allison
(Foreign Affairs) Today, as the world confronts the unique challenges posed by another unprecedented and in some ways even more terrifying technology—artificial intelligence—it is not surprising that many have been looking to history for instruction. Will machines with superhuman capabilities threaten humanity’s status as master of the universe? Will AI undermine nations’ monopoly on the means of mass violence? Will AI enable individuals or small groups to produce viruses capable of killing on a scale that was previously the preserve of great powers? Could AI erode the nuclear deterrents that have been a pillar of today’s world order?
At this point, no one can answer these questions with confidence. But as we have explored these issues for the last two years with a group of technology leaders at the forefront of the AI revolution, we have concluded that the prospects that the unconstrained advance of AI will create catastrophic consequences for the United States and the world are so compelling that leaders in governments must act now. Even though neither they nor anyone else can know what the future holds, enough is understood to begin making hard choices and taking actions today—recognizing that these will be subject to repeated revision as more is discovered.
29 September
Hollywood’s Deal With Screenwriters Just Rewrote the Rules Around A.I.
By Dr. Adam Seth Litwin, associate professor of industrial and labor relations at Cornell University.
(NYT opinion) The W.G.A. contract establishes a precedent that an employer’s use of A.I. can be a central subject of bargaining. It further establishes the precedent that workers can and should have a say in when and how they use artificial intelligence at work.
It may come as a surprise to some that the W.G.A. apparently neither wanted nor sought an outright ban on the use of tools like ChatGPT. Instead, it aimed for a more important assurance: that if A.I. raises writers’ productivity or the quality of their output, guild members should snare an equitable share of the performance gains. And the W.G.A. got it.
How did it achieve this? In this case, the parties agreed that A.I. is not a writer. The studios cannot use A.I. in place of a credited and paid guild member. Studios can rely on A.I. to generate a first draft, but the writers to whom they deliver it get the credit. These writers receive the same minimum pay they would have had they written the piece from scratch. Likewise, writers can elect to use A.I. on their own, when a studio allows it. However, no studio can require a guild member to use A.I.
23 September
The Internet Is About to Get Much Worse
By Julia Angwin
(NYT Opinion) We are in a time of eroding trust, as people realize that their contributions to a public space may be taken, monetized and potentially used to compete with them. When that erosion is complete, I worry that our digital public spaces might become even more polluted with untrustworthy content.
Already, artists are deleting their work from X, formerly known as Twitter, after the company said it would be using data from its platform to train its A.I. Hollywood writers and actors are on strike partly because they want to ensure their work is not fed into A.I. systems that companies could try to replace them with. News outlets including The New York Times and CNN have added files to their website to help prevent A.I. chatbots from scraping their content.
… While creators of quality content are contesting how their work is being used, dubious A.I.-generated content is stampeding into the public sphere. NewsGuard has identified 475 A.I.-generated news and information websites in 14 languages. A.I.-generated music is flooding streaming websites and generating A.I. royalties for scammers. A.I.-generated books — including a mushroom foraging guide that could lead to mistakes in identifying highly poisonous fungi — are so prevalent on Amazon that the company is asking authors who self-publish on its Kindle platform to also declare if they are using A.I.
18 September
The Google Trial Is Going to Rewrite Our Future
By Tim Wu
(NYT Opinion) The Google antitrust trial, which began last week, is ostensibly focused on the past — on a series of deals that Google made with other companies over the past two decades. The prosecution in the case, U.S. et al. v. Google, contends that Google illegally spent billions of dollars paying off Samsung and Apple to prevent anyone else from gaining a foothold in the market for online search.
But the true focus of the trial, like that of the Federal Trade Commission’s coming trial of Facebook’s parent company, Meta, on monopolization charges, is on the future. For the verdict will effectively establish the rules governing tech competition for the next decade, including the battle over commercialized artificial intelligence, as well as newer technologies we cannot yet envision.
The history of antitrust prosecutions shows this again and again: Loosening the grip of a controlling monopolist may not always solve the problem at hand (here, an online search monopoly). But it can open up closed markets, shake up the industry and spark innovation in unexpected areas.
13 September
In Show of Force, Silicon Valley Titans Pledge ‘Getting This Right’ With A.I.
Ever since ChatGPT, the A.I.-powered chatbot, exploded in popularity last year, lawmakers and regulators have grappled with how the technology might alter jobs, spread disinformation and potentially develop its own kind of intelligence.
(NYT) Elon Musk, Sam Altman, Mark Zuckerberg, Sundar Pichai and others discussed artificial intelligence with lawmakers, as tech companies strive to influence potential regulations.
The tech titans held forth on Wednesday in a three-hour meeting with lawmakers in Washington about A.I. and future regulations. The gathering, known as the A.I. Insight Forum, was part of a crash course for Congress on the technology and organized by the Senate leader, Chuck Schumer, Democrat of New York.
The meeting — also attended by Bill Gates, a founder of Microsoft; Sam Altman of OpenAI; Satya Nadella of Microsoft; and Jensen Huang of Nvidia — was a rare congregation of more than a dozen top tech executives in the same room. It amounted to one of the industry’s most proactive shows of force in the nation’s capital as companies race to be at the forefront of A.I. and to be seen to influence its direction.
10 September
An A.I. Leader Urges Regulation and a Rethink
The entrepreneur Mustafa Suleyman’s new book calls for lawmakers to seize the opportunities and mitigate the potentially catastrophic risks of artificial intelligence.
Mustafa Suleyman is one of the world’s leading artificial intelligence entrepreneurs, having co-founded not one but two start-ups at the cutting edge of the most transformative technology since the internet.
Now he has written a book, “The Coming Wave: Technology, Power, and the 21st Century’s Greatest Dilemma,” that calls for an urgent shift in how we think about and “contain” A.I. Failing to do so, he says, will leave us humans in the worst position: unable to tap into the huge opportunities of A.I. and at risk of being subsumed by a technology that poses an existential threat.
6 September
Ian Bremmer: Can we govern AI before it’s too late?
First, a disclaimer: I’m an AI enthusiast. I believe AI will drive nothing less than a new globalization that will give billions of people access to world-leading intelligence, facilitate impossible-to-imagine scientific advances, and unleash extraordinary innovation, opportunity, and growth. Importantly, we’re heading in this direction without policy intervention: The fundamental technologies are proven, the money is available, and the incentives are aligned for full-steam-ahead progress.
At the same time, artificial intelligence has the potential to cause unprecedented social, economic, political, and geopolitical disruption that upends our lives in lasting and irreversible ways.
Can we trust AI to tell the truth?
Is it possible to create artificial intelligence that doesn’t lie?
On GZERO World with Ian Bremmer, cognitive scientist, psychologist, and author Gary Marcus sat down to unpack some of the major recent advances–and limitations–in the field of generative AI. Despite large language model tools like ChatGPT doing impressive things like writing movie scripts or college essays in a matter of seconds, there’s still a lot that artificial intelligence can’t do: namely, it has a pretty hard time telling the truth.
Until there is a real breakthrough or new synthesis in the field, Marcus thinks we’re a long way from truthful AI, and incremental updates to the current large language models will continue to generate false information. “I will go on the record now in saying GPT-5 will [continue to hallucinate],” Marcus says. “If it’s just a bigger version trained on more data, it will continue to hallucinate. And the same with GPT-6.”
Google to require disclosure of AI use in political ads
Starting in November, election and campaign ads will have to clearly state whether they use synthetic or AI-generated content.
Starting in November, Google will mandate all political advertisements label the use of artificial intelligence tools and synthetic content in their videos, images and audio.
As campaigns and digital strategists explore using generative AI tools heading into the 2024 election cycle, Google is the first tech company to announce an AI-related disclosure requirement for political advertisers.
3 September
Lower fees, fewer lawyers and disruptive startups: Legal sector braces for impact from ChatGPT
Sean Silcoff
When Scott Stevenson was building his first startup in St. John’s, he was shocked that half of his initial financing was consumed by legal fees. There had to be a way to make legal services less expensive, he thought. So, in 2018, he co-founded another startup called Rally to automate the drafting of routine documents by lawyers through the use of customized online templates.
Revenue grew by more than 20 per cent each quarter as 100 law firms signed on, though many lawyers were indifferent, telling Mr. Stevenson their work was too “bespoke” for Rally’s software.
Then last September, his company launched an artificial intelligence tool called Spellbook. The plug-in for Microsoft Word conjured up full clauses for documents, anticipating the necessary legalese. The tool used the large language models underlying OpenAI’s ChatGPT general-purpose chatbot to draft legal documents in as little as one-quarter the usual time. “Our first users were immediately in love,” Mr. Stevenson said in an interview.
Two months later, ChatGPT became a global sensation. The effect on Mr. Stevenson’s company, now renamed Spellbook, was like magic. More than 74,000 people have joined a wait list for a trial (two-thirds of those who try it sign up) and Spellbook now has more than 1,000 clients. It doubled revenues in the first quarter from the prior period and raised $10.9-million in June.
29 August
The AI Power Paradox
AI cannot be governed like any previous technology, and it is already shifting traditional notions of geopolitical power. The challenge is clear: to design a new governance framework fit for this unique technology. If global governance of AI is to become possible, the international system must move past traditional conceptions of sovereignty and welcome technology companies to the table.
By Ian Bremmer and Mustafa Suleyman
(Foreign Affairs September/October 2023) Like past technological waves, AI will pair extraordinary growth and opportunity with immense disruption and risk. But unlike previous waves, it will also initiate a seismic shift in the structure and balance of global power as it threatens the status of nation-states as the world’s primary geopolitical actors. Whether they admit it or not, AI’s creators are themselves geopolitical actors, and their sovereignty over AI further entrenches the emerging “technopolar” order—one in which technology companies wield the kind of power in their domains once reserved for nation-states.
A vision for inclusive AI governance
Casting a spotlight on the intricate landscape of AI governance, Ian Bremmer, president and founder of GZERO Media and Eurasia Group, and Mustafa Suleyman, CEO and co-founder of Inflection AI, lay out the pressing need for collaboration between governments, advanced industrial players, corporations, and a diverse spectrum of stakeholders in the AI domain. The exponential pace of this technological evolution demands a united front, and the stakes have never been higher. There is urgency in getting AI governance right, and the perils of getting it wrong could be catastrophic. While tech giants acknowledge this necessity, they remain engrossed in their own domains, which makes collective action all the more imperative.
23 August
Murdered by My Replica?
Margaret Atwood responds to the revelation that pirated copies of her books are being used to train AI.
Stephen King: My Books Were Used to Train AI
(The Atlantic) I have said in one of my few forays into nonfiction (On Writing) that you can’t learn to write unless you’re a reader, and unless you read a lot. AI programmers have apparently taken this advice to heart. Because the capacity of computer memory is so large—everything I ever wrote could fit on one thumb drive, a fact that never ceases to blow my mind—these programmers can dump thousands of books into state-of-the-art digital blenders. Including, it seems, mine. The real question is whether you get a sum that’s greater than the parts, when you pour back out.
21 August
Bad news for execs in Hollywood who want to replace writers with AI. On Friday, a judge ruled that a piece of art created by AI is not open for copyright protection: “Human authorship is a bedrock requirement” when it comes to copyrighting art. This ruling is huge for writers and actors currently on strike. If AI work is not protected, it could raise issues for studios.
AI Art Cannot Be Copyrighted, Judge Rules
A human has to be involved in a piece of art’s creation in order for it to be copyrighted.
(PC Magazine) While the ruling focused on a piece of physical art, it’s also bound to catch the attention of studio execs. One factor in the ongoing strikes in Hollywood is the use of AI in script writing and acting. If that AI work is not protected, then it could raise issues for the studios in the future.
AI-generated art cannot receive copyrights, US court says
(Reuters) – A work of art created by artificial intelligence without any human input cannot be copyrighted under U.S. law, a U.S. court in Washington, D.C., has ruled.
Only works with human authors can receive copyrights, U.S. District Judge Beryl Howell said on Friday, affirming the Copyright Office’s rejection of an application filed by computer scientist Stephen Thaler on behalf of his DABUS system.
As Fight Over A.I. Artwork Unfolds, Judge Rejects Copyright Claim
But more legal challenges are on the way.
(NYT) There is an onslaught of upcoming cases challenging the legality of images and texts generated by artificial intelligence. In July, the comedian Sarah Silverman joined lawsuits accusing OpenAI and Meta of training their algorithms on her writing without permission. Other companies like GitHub and Stability AI are facing litigation that accuses them of illegally scraping artists’ work for their A.I. products.
20 August
There’s Only One Way to Control AI: Nationalization
AI’s infinite potential — and infinite risk — requires federal ownership.
by Charles Jennings, former CEO of an AI company partnered with CalTech/JPL. His 2019 book, Artificial Intelligence: Rise of the Lightspeed Learners, was reissued this year in a paperback edition.
(Politico opinion) While the best AI scientists obviously know a great deal about AI, certain aspects of today’s thinking machines are beyond anyone’s understanding. Scientists cleverly invented the term “black box” to describe the core of an AI’s brain, to avoid having to explain what’s going on inside it. There’s an element of uncertainty — even unknowability — in AI’s most powerful applications. This uncertainty grows as AIs get faster, smarter and more interconnected.
The AI threat is not Hollywood-style killer robots; it’s AIs so fast, smart and efficient that their behavior becomes dangerously unpredictable. As I used to tell potential tech investors, “The one thing we know for certain about AIs is that they will surprise us.”
Runaway AIs could cause sudden changes in power generation, food and water supply, world financial markets, public health and geopolitics. There is no end to the damage AIs could do if they were to leap ahead of us and start making their own arbitrary decisions — perhaps with nudges from bad actors trying to use AI against us.
Yet AI risk is only half the story. My years of work in AI have convinced me a huge AI dividend awaits if we can somehow muster the political will to align AI with humanity’s best interests.
16 August
The AI Power Paradox – Can States Learn to Govern Artificial Intelligence—Before It’s Too Late?
By Ian Bremmer and Mustafa Suleyman
(Foreign Affairs September/October 2023) AI is different—different from other technologies and different in its effect on power. It does not just pose policy challenges; its hyper-evolutionary nature also makes solving those challenges progressively harder. That is the AI power paradox. … Soon, AI developers will likely succeed in creating systems with self-improving capabilities—a critical juncture in the trajectory of this technology that should give everyone pause.
… AI also differs from older technologies in that almost all of it can be characterized as “dual use”—having both military and civilian applications. Many systems are inherently general, and indeed, generality is the primary goal of many AI companies. They want their applications to help as many people in as many ways as possible. But the same systems that drive cars can drive tanks. An AI application built to diagnose diseases might be able to create—and weaponize—a new one. The boundaries between the safely civilian and the militarily destructive are inherently blurred.
As their enormous benefits become self-evident, AI systems will only grow bigger, better, cheaper, and more ubiquitous. They will even become capable of quasi autonomy—able to achieve concrete goals with minimal human oversight—and, potentially, of self-improvement. Any one of these features would challenge traditional governance models; all of them together render these models hopelessly inadequate.
Within countries, AI will empower those who wield it to surveil, deceive, and even control populations—supercharging the collection and commercial use of personal data in democracies and sharpening the tools of repression authoritarian governments use to subdue their societies. Across countries, AI will be the focus of intense geopolitical competition.
Government to ‘prioritize’ generative-AI regulation, outlines plans for voluntary code of conduct
(National Post) Developers would have to prevent malicious or harmful use, such as impersonating real people, tricking individuals into disclosing private information, or giving legal or medical advice.
Innovation Canada is consulting with experts, civil society, and industry this summer on a voluntary code of conduct for generative AI. It released a document outlining its plans for the code on Wednesday, after the consultation became public knowledge following an accidental early online posting.
The voluntary code would be in place before Bill C-27, privacy legislation with an AI component called the Artificial Intelligence and Data Act (AIDA), becomes law. Once that happens, the government “intends to prioritize the regulation of generative AI systems,” Innovation Canada said.
Experts have been calling for government regulation since the emergence of generative-AI systems like ChatGPT in the last nine months. The exponentially growing technology, which can be used to generate written text, photos, videos or code, has the potential to transform jobs and industries — and to be misused and cause harm.
Bill C-27 will move forward in the legislative process once the House of Commons reconvenes in the fall, but critics have pointed out that the bill predates generative AI, which means it could already be outdated.
4 August
The AI Regulation Paradox
Regulating artificial intelligence to protect U.S. democracy could end up jeopardizing democracy abroad.
By Bhaskar Chakravorti, dean of global business at Tufts University’s Fletcher School of Law and Diplomacy.
(Foreign Policy) …artificial intelligence (AI) has added a fresh boost of creativity to the disinformation industry. Now anyone can become a political content creator thanks to new generative AI tools such as DALL-E, Reface, FaceMagic, and scores of others. Indeed, Meta just announced plans to release its new generative AI technology for public use, leading to even more possibilities for an explosion of such “creativity.”
The democratization of the disinformation process may well be the most serious threat yet to the functioning of U.S. democracy—an institution already under attack. Even the AI overlords are worried: Former Google CEO Eric Schmidt warned that “you can’t trust anything that you see or hear” in the elections thanks to AI. Sam Altman, CEO of OpenAI, the company that gave us ChatGPT, mentioned to U.S. lawmakers that he is nervous about the future of democracy.
3 August
What Can You Do When A.I. Lies About You?
(NYT) People have little protection or recourse when the technology creates and spreads falsehoods about them.
Ian Bremmer commented: “the AI did make up a new book of mine that doesn’t exist: ‘The Dawn of the New Cold War,’ complete with Kissinger foreword (that also doesn’t exist) and ISBN number (ditto).”
1 August
Chatbots sometimes make things up. Is AI’s hallucination problem fixable?
Spend enough time with ChatGPT and other artificial intelligence chatbots and it doesn’t take long for them to spout falsehoods.
Described as hallucination, confabulation or just plain making things up, it’s now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done. Some are using it on tasks with the potential for high-stakes consequences, from psychotherapy to researching and writing legal briefs.
26 July
Google, Microsoft, OpenAI and startup form body to regulate AI development
Tech companies say Frontier Model Forum will focus on ‘safe and responsible’ creation of new models
Four of the most influential companies in artificial intelligence have announced the formation of an industry body to oversee safe development of the most advanced models.
The Frontier Model Forum has been formed by the ChatGPT developer OpenAI, Anthropic, Microsoft and Google, the owner of the UK-based DeepMind.
The group said it would focus on the “safe and responsible” development of frontier AI models, referring to AI technology even more advanced than the examples available currently.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, the president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Canadian AI pioneer brings plea to U.S. Congress: Pass a law now
Hearing comes as lawmakers in U.S. and Canada weigh legislation on world-changing tech
(CBC) A giant in the field of artificial intelligence has issued a warning to American lawmakers: Regulate this technology, and do it quickly.
That appeal came at a hearing in Washington on Tuesday from Yoshua Bengio, a professor at the University of Montreal and founder of Mila, the Quebec AI institute.
“I firmly believe that urgent efforts, preferably in the coming months, are required,” said Bengio, one of three witnesses.
The hearing before a U.S. Senate subcommittee came as lawmakers study possible legislation to regulate the fast-evolving technology, touted for its potential world-changing economic and scientific benefits as well as unfathomable risks.
21 July
One of the “godfathers of AI” airs his concerns
Yoshua Bengio argues that the risk of catastrophe is real enough that action is needed now
(The Economist) WHERE IS RESEARCH into artificial intelligence (AI) heading? Is it all beneficial for humanity, or are there risks big enough that we need to make more effort to understand them and develop countermeasures? I believe the latter is true.
Hollywood writers strike hits 50 days with no end in sight as WGA seeks deal
By Krysta Fauria And Andrew Dalton
(AP) Fifty days into a strike with no end in sight, about 1,000 Hollywood writers and their supporters marched and rallied in Los Angeles for a new contract with studios that includes payment guarantees and job security.
20 July
Google is testing a new AI tool that can write news articles and reportedly pitching it to The New York Times and News Corp
(Business Insider) Some executives were unsettled by the AI, but Google says it can’t replace journalists.
Several news organizations have already announced that they will implement AI in the newsroom.
19 July
9,000 authors rebuke AI companies, saying they exploited books as ‘food’ for chatbots
(LA Times) If users prompt GPT-4 to summarize works by Roxane Gay or Margaret Atwood, it can do so in detail, chapter by chapter. If users want ChatGPT to write a story in the style of an acclaimed author such as Maya Angelou, they can ask it to “write a personal essay in the style of Maya Angelou, exploring the theme of self-discovery and personal growth.” And voilà.
These generative AI programs are powered by large language models, which forgo traditional programming methods and instead ingest massive amounts of text in order to produce natural and lifelike responses to user prompts.
In Tuesday’s open letter, the Authors Guild writes that “Generative AI technologies built on large language models owe their existence to our writings. These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the ‘food’ for AI systems, endless meals for which there has been no bill.”
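As a toy illustration of what ingesting text instead of traditional programming means: the sketch below learns word-to-word statistics from a tiny made-up corpus and samples language-like output from them. Real large language models are transformer neural networks trained on vastly more data, not bigram tables, but the contrast with hand-written rules is the same.

```python
import random
from collections import defaultdict

# Tiny made-up training text; real LLMs ingest terabytes.
text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog"
).split()

# "Training": record which words follow which in the corpus.
following = defaultdict(list)
for current_word, next_word in zip(text, text[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    """Sample a continuation word by word from the learned statistics."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: no observed follower
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog sat"
```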
15 July
Maureen Dowd: Watch Out for the Fake Tom Cruise
…if you’re making a living driving a vehicle or working in a place where you use heavy machines like an auto body shop, all kinds of jobs, this is going to create the legal precedents that could protect you in the future, too. Almost nobody is immune to the risk that A.I. could devalue their economic position, even though A.I. will also have widespread benefits.
Tinseltown is going dark, as the actors join the writers on the picket line. Hollywood’s century-old business model was upended by Covid and also by streaming, which swept in like an occupying army. Then streaming hit a ceiling, and Netflix and Co. scrambled to pivot.
With a dramatically different economic model shaped by transformative technologies — A.I. is a key issue in the strike — the writers and actors want a new deal. And they deserve it. …
It’s a complex issue. Even as writers are demanding that studios not replace them with A.I., some studio execs are no doubt wondering if the writers are being hypocritical: Will they start using A.I. to help them finish their scripts on deadline?
13 July
David Brooks: ‘Human Beings Are Soon Going to Be Eclipsed’
I don’t know about you, but this is what life has been like for me since ChatGPT 3 was released. I find myself surrounded by radical uncertainty — uncertainty not only about where humanity is going but about what being human is. As soon as I begin to think I’m beginning to understand what’s happening, something surprising happens — the machines perform a new task, an authority figure changes his or her mind.
Beset by unknowns, I get defensive and assertive. I find myself clinging to the deepest core of my being — the vast, mostly hidden realm of the mind from which emotions emerge, from which inspiration flows, from which our desires pulse — the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: “This is the essence of being human. It is never going to be replicated by machine.”
But then some technologist whispers: “Nope, it’s just neural nets all the way down. There’s nothing special in there. There’s nothing about you that can’t be surpassed.”
Some of the technologists seem oddly sanguine as they talk this way. At least [eminent cognitive scientist Douglas] Hofstadter is enough of a humanist to be horrified.
12 July
Why Bill Gates Isn’t Worried About AI Models Making Stuff Up
In his latest blog post, Bill Gates proposes that AI can be used to address the problems that AI has created.
(Forbes) Making AI models more “self-aware” and “conscious” of their biases and factual errors could potentially prevent them from producing more false information in the future, the Microsoft cofounder wrote on his blog, GatesNotes. … One of the most well-known issues with large language models is their tendency to “hallucinate” or produce factually incorrect and biased or harmful information. That’s because models are trained on a vast amount of data collected from the internet, which is mired in bias and misinformation. But Gates believes that it’s possible to build AI tools that are conscious of the faulty data they are trained on and the biased assumptions they make.
10 July
Full Speed Ahead on A.I. Our Economy Needs It.
By Steven Rattner
(NYT Opinion) Now, the launch of ChatGPT and other generative A.I. platforms has unleashed a tsunami of hyperbolic fretting, this time about the fate of white-collar workers. Will paralegals — or maybe even a chunk of lawyers — be rendered superfluous? Will A.I. diagnose some medical conditions faster and better than doctors? Will my next guest essay be ghostwritten by a machine? A breathless press has already begun chronicling the first job losses.
Unlike most past rounds of technological improvement, the advent of A.I. has also birthed a small armada of non-economic fears, from disinformation to privacy to the fate of democracy itself. Some suggest in seriousness that A.I. could have a more devastating impact on humanity than nuclear war. …
Technological advances have both destroyed and created jobs
When was the last time you talked to a telephone operator? Or were conveyed by a manned elevator? In the place of these and so many other defunct tasks, a vast array of new categories has been created. A recent study co-authored by M.I.T. economist David Autor found that approximately 60 percent of jobs in 2018 were in occupations that didn’t exist in 1940.
… We can only achieve lasting economic progress and rising standards of living by increasing how much each worker produces. Technology — whether in the form of looms or robots or artificial intelligence — is central to that objective.
Generative A.I. — as dazzling and scary as it can be because of its potential to be a particularly transformative innovation — is just another step in the continuum of progress. Were our ancestors any less startled when they first witnessed other exceptional inventions, like a telephone transmitting voice or a light bulb illuminating a room?
6 July
This essay is adapted from David Brin’s nonfiction book in progress, Soul on Ai.
Give Every AI a Soul—or Else
To solve the “crisis” in artificial intelligence, AI beings must say, “I am me.”
(Wired) … Amid the toppling of many clichéd assumptions, we’ve learned that so-called Turing tests are irrelevant, providing no insight at all into whether generative large language models—GLLMs or “gollems”—are actually sapient beings. They will feign personhood, convincingly, long before there’s anything or anyone “under the skull.”
… How can such beings be held accountable? Especially when their speedy mental clout will soon be impossible for organic humans to track? Soon only AIs will be quick enough to catch other AIs that are engaged in cheating or lying. Um … duh? And so, the answer should be obvious. Sic them on each other. Get them competing, even tattling or whistle-blowing on each other.
Only there’s a rub. In order to get true reciprocal accountability via AI-vs.-AI competition, the top necessity is to give them a truly separated sense of self or individuality.
By individuation I mean that each AI entity (he/she/they/ae/wae) must have what author Vernor Vinge, way back in 1981, called a true name and an address in the real world. As with every other kind of elite, these mighty beings must say, “I am me. This is my ID and home-root. And yes, I did that.”
… I propose a new AI format for consideration: We should urgently incentivize AI entities to coalesce into discretely defined, separated individuals of relatively equal competitive strength.
Each such entity would benefit from having an identifiable true name or registration ID, plus a physical “home” for an operational-referential kernel. (Possibly “soul”?) And thereupon, they would be incentivized to compete for rewards. Especially for detecting and denouncing those of their peers who behave in ways we deem insalubrious. And those behaviors do not even have to be defined in advance, as most AI mavens and regulators and politicians now demand.
Not only does this approach farm out enforcement to entities who are inherently better capable of detecting and denouncing each other’s problems or misdeeds. The method has another, added advantage. It might continue to function, even as these competing entities get smarter and smarter, long after the regulatory tools used by organic humans—and prescribed now by most AI experts—lose all ability to keep up.
Putting it differently, if none of us organics can keep up with the programs, then how about we recruit entities who inherently can keep up? Because the watchers are made of the same stuff as the watched.
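Brin does not spell out a mechanism, but his individuation idea (a registered true name, a home-root, peers denouncing peers) can be sketched as a data structure. Everything below is a hypothetical illustration, not a real protocol:

```python
from dataclasses import dataclass, field

@dataclass
class AIEntity:
    """One individuated AI: a registered 'true name' plus a real-world root."""
    true_name: str                 # unique registered identifier
    home_root: str                 # physical home of the operational kernel
    reports_filed: list = field(default_factory=list)

class Registry:
    """Hypothetical registry enabling AI-vs-AI accountability."""

    def __init__(self) -> None:
        self._entities = {}

    def register(self, entity: AIEntity) -> None:
        if entity.true_name in self._entities:
            raise ValueError(f"{entity.true_name} is already registered")
        self._entities[entity.true_name] = entity

    def denounce(self, accuser: str, accused: str, reason: str) -> None:
        """One registered entity reports another's misbehavior."""
        if accuser not in self._entities or accused not in self._entities:
            raise KeyError("both parties must be registered")
        self._entities[accuser].reports_filed.append((accused, reason))

registry = Registry()
registry.register(AIEntity("alpha-001", "datacenter-a/rack-7"))
registry.register(AIEntity("beta-002", "datacenter-b/rack-3"))
registry.denounce("alpha-001", "beta-002", "generated deceptive media")
```

The incentive layer, rewards for accurate denunciations, is the part Brin leaves open, and the part any real scheme would stand or fall on.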
The biggest hedge fund in the world says ChatGPT was able to pass its investment associate test – and it’s like ‘having millions of them at once’
(Markets Insider) The co-CIO of Bridgewater Associates seems pretty impressed with the investment acumen of OpenAI’s ChatGPT artificial intelligence tool.
Greg Jensen, co-CIO of the world’s biggest hedge fund, told Bloomberg that ChatGPT was able to pass its investment associate test, and that the power of the buzzy AI chatbot is like having “millions” of junior staffers working all at once.
Speaking on the Odd Lots podcast, Jensen – who had flagged AI as a major interest for Bridgewater well before ChatGPT’s viral craze – said the hedge fund was now experimenting with machine learning AI in its trading strategies.
29 June
AI-generated text is hard to spot. It could play a big role in the 2024 campaign
(NPR) Generative AI apps are more accessible than ever. While AI-generated images are still relatively easy to detect, spotting text written by AI is much harder. Experts are concerned about what this means for the 2024 election.
27 June
Can the EU bring law and order to AI?
As countries scramble to deal with the risks and rewards of AI, the European Union is way ahead on the first laws regulating artificial intelligence. Here’s what’s really in the new AI Act
(The Guardian) Deepfakes, facial recognition and existential threat: politicians, watchdogs and the public must confront daunting issues when it comes to regulating artificial intelligence.
Tech regulation has a history of lagging the industry, with the UK’s online safety bill and the EU’s Digital Services Act only just arriving almost two decades after the launch of Facebook. AI is streaking ahead as well. ChatGPT already has more than 100 million users, the pope is in a puffer jacket and an array of experts have warned that the AI race is getting out of control.
But at least the European Union, as is often the case with tech, is making a start with the AI Act.
17 June
Congress is racing to regulate AI. Silicon Valley is eager to teach them how.
Lawmakers are flocking to private meetings, dinners and briefings with AI experts — including CEOs of the companies they’re trying to regulate
The overnight success of AI-powered ChatGPT has triggered a frenzy among Washington lawmakers to draft new laws addressing the promise and peril of the burgeoning field. When [Dragos Tudorache, a Romanian member of the European Parliament who co-leads AI work] visited Washington last month, he witnessed a tumult of activity around AI and attended a bipartisan briefing with OpenAI CEO Sam Altman.
But tackling the swiftly evolving technology requires a sophisticated understanding of complicated systems that back AI, which sometimes confound even experts. Congressional salary caps that pale in comparison to Silicon Valley’s sky-high paychecks make it difficult to retain staff technologists, putting lawmakers at a disadvantage in getting up to speed — a goal that has become increasingly urgent as the European Union has leaped ahead of Washington, advancing robust AI legislation just this week.
14 June
Europe moves ahead on AI regulation, challenging tech giants’ power
European lawmakers voted to approve the E.U. AI Act, putting Brussels a step closer to shaping global standards for artificial intelligence
(WaPo) European Union lawmakers on Wednesday took a key step toward passing a landmark artificial intelligence bill, putting Brussels on a collision course with American tech giants funneling billions of dollars into the burgeoning technology.
The European Parliament overwhelmingly approved the E.U. AI Act, a sweeping package that aims to protect consumers from potentially dangerous applications of artificial intelligence — reacting to concerns that recent advances in the technology could be used to nefarious ends, ushering in surveillance, algorithmically driven discrimination and prolific misinformation that could upend democracy.
The bill takes aim at the recent boom in generative AI, creating new obligations for applications like ChatGPT that make text or images, often with humanlike flair. Companies would have to label AI-generated content, to prevent AI from being abused to spread falsehoods. The legislation also requires firms to publish summaries of what copyrighted data is used to train their tools, addressing concerns from publishers that corporations are profiting off materials scraped from their websites.
The threat posed by the legislation is so grave that OpenAI, the maker of ChatGPT, said it may be forced to pull out of Europe, depending on what is included in the final text. The European Parliament’s approval is a critical step in the legislative process, but the bill still awaits negotiations with the Council of the European Union, which represents the governments of the E.U. member states. Officials say they hope to reach a final agreement by the end of the year. …
The vote cements the European Union’s position as the de facto global leader on tech regulation, as other governments — including the United States — are just beginning to grapple with the threat presented by AI, concerns fueled by the surging popularity of ChatGPT. The legislation would add to an arsenal of regulatory tools that Europe adopted over the last five years targeting Silicon Valley companies, while similar domestic efforts have languished. If adopted, the proposed rules are likely to influence policymakers around the world and usher in standards that could trickle down to all consumers, as companies shift their practices internationally to avoid a patchwork of policies.
12 June
UN chief backs idea of global AI watchdog like nuclear agency
(Reuters) – U.N. Secretary-General Antonio Guterres on Monday backed a proposal by some artificial intelligence executives for the creation of an international AI watchdog body like the International Atomic Energy Agency (IAEA).
Generative AI technology that can spin authoritative prose from text prompts has captivated the public since ChatGPT launched six months ago and became the fastest-growing app of all time. AI has also become a focus of concern over its ability to create deepfake pictures and other misinformation.
“Alarm bells over the latest form of artificial intelligence – generative AI – are deafening. And they are loudest from the developers who designed it,” Guterres told reporters. “We must take those warnings seriously.”
He has announced plans to start work by the end of the year on a high-level AI advisory body to regularly review AI governance arrangements and offer recommendations on how they can align with human rights, the rule of law and common good.
11 June
UK PM Sunak pitches Britain as future home for AI regulation
(Reuters) – Prime Minister Rishi Sunak said on Monday Britain could be the global home of artificial intelligence regulation as he pitched London as a tech hub to industry leaders and urged them to grasp the opportunities and challenges of AI.
Sunak’s government will host a summit on the risks and regulation of AI later this year, and on Monday he said the “tectonic plates of technology are shifting.”
“The possibilities (of AI) are extraordinary. But we must – and we will – do it safely,” Sunak said in a speech at the London Tech Week conference.
30 May
Artificial intelligence poses ‘risk of extinction,’ tech execs and experts warn
More than 350 industry leaders sign letter equating risks with pandemics, nuclear war
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS). …among them were Geoffrey Hinton and Yoshua Bengio — two of the three so-called “godfathers of AI” who received the 2018 Turing Award for their work on deep learning — and professors from institutions ranging from Harvard to China’s Tsinghua University.
… Recent developments in AI have created tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but this has sparked fears that the technology could lead to privacy violations, powerful misinformation campaigns and issues with “smart machines” thinking for themselves.
Deepfaking it: America’s 2024 election collides with AI boom
Welcome to America’s 2024 presidential race, where reality is up for grabs.
(Reuters) …deepfakes – realistic yet fabricated videos created by AI algorithms trained on copious online footage – are among thousands surfacing on social media, blurring fact and fiction in the polarized world of U.S. politics.
… “It’s going to be very difficult for voters to distinguish the real from the fake. And you could just imagine how either Trump supporters or Biden supporters could use this technology to make the opponent look bad,” said Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation.
24 May
AI pioneer Yoshua Bengio says regulation in Canada is too slow, warns of ‘existential’ threats
Artificial intelligence pioneer Yoshua Bengio says regulation in Canada is on the right path, but progress is far too sluggish.
Speaking in Montreal, the Université de Montréal professor said he backed a bill tabled in the House of Commons last June that adopts a more general, principles-based approach to AI guardrails and leaves details to a later date.
However, Ottawa has said the act known as Bill C-27 will come into force no sooner than 2025.
“That’s way too slow,” Bengio told reporters Wednesday. “There are simple things that could happen that don’t need two years to be figured out.”
He is calling on the federal government to begin rolling out rules immediately against certain threats, such as “counterfeiting humans” using AI-driven bots.
“The users need to know that they’re talking to a machine or a human. Accounts on social media and so on need to be regulated so we know who’s behind the account – and it has to be human beings most of the time,” said Bengio, who in 2019 won the Turing Award, known as the Nobel Prize of the technology industry.
20 May
From Sauvé alumnus Charles C. Onu “The story of our journey to save newborn lives using AI cannot be told without featuring Mila. Honoured to be a Mila startup and to keep the light shining on AI for good!”
Ubenwa Health is a Montréal-based MedTech startup building the future of automated sound-based medical diagnostics. We are developing the first technology for rapid detection of neurological conditions in infants using only their cry sounds.
Yuval Noah Harari on AI’s threat to human existence
The Economist newsletter
“Language is the stuff almost all human culture is made of,” writes Yuval Noah Harari, a historian and philosopher, in a recent By Invitation essay. Religion, human rights, money—these things are not inscribed in our DNA, and require language to make sense. In his essay, Mr Harari poses the question: “What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures?” The answer, he believes, casts a dark cloud over the future of human civilisation.
We have spent a lot of time thinking about the staggering potential of language-focused artificial-intelligence tools. We recently published a cover package that considers how to worry wisely about AI. We’ve written about how large, creative AI models will transform lives and labour markets; explained why it is too soon to fear an AI-induced jobs apocalypse; and considered how good China can get at generative AI. On balance we believe that, properly regulated, the new generation of AI tools can be a net positive for humans. But I will leave the last word to Mr Harari: “We should regulate AI before it regulates us.”
The debate over whether AI will destroy us is dividing Silicon Valley
Prominent tech leaders are warning that artificial intelligence could take over. Other researchers and executives say that’s science fiction.
Gerrit De Vynck, Tech reporter covering Google, algorithms and artificial intelligence
The debate stems from breakthroughs in a field of computer science called machine learning over the past decade that has created software that can pull novel insights out of large amounts of data without explicit instructions from humans. That tech is ubiquitous now, helping power social media algorithms, search engines and image-recognition programs.
(WaPo) “Out of the actively practicing researchers in this discipline, far more are centered on current risk than on existential risk,” said Sara Hooker, director of Cohere for AI, the research lab of AI start-up Cohere, and a former Google researcher.
The current risks include unleashing bots trained on racist and sexist information from the web, reinforcing those ideas. The vast majority of the training data that AIs have learned from is written in English and from North America or Europe, potentially making the internet even more skewed away from the languages and cultures of most of humanity. The bots also often make up false information, passing it off as factual. In some cases, they have been pushed into conversational loops where they take on hostile personas. The ripple effects of the technology are still unclear, and entire industries are bracing for disruption, with even high-paying jobs like those of lawyers or physicians at risk of being replaced.
19 May
ChatGPT Is Already Obsolete
The next generation of AI is leaving behind the viral chatbot
(The Atlantic) Last week, at Google’s annual conference dedicated to new products and technologies, the company announced a change to its premier AI product: The Bard chatbot, like OpenAI’s GPT-4, will soon be able to describe images. Although it may seem like a minor update, the enhancement is part of a quiet revolution in how companies, researchers, and consumers develop and use AI—pushing the technology not only beyond remixing written language and into different media, but toward the loftier goal of a rich and thorough comprehension of the world. ChatGPT is six months old, and it’s already starting to look outdated.
16 May
Congress took on AI regulation – and raised a lot more questions than answers
(Yahoo! Finance) Senator John Kennedy (R-La.) pressed the witnesses on the practical steps that Congress can take – and Marcus and Altman obliged.
“Number one, a safety review, like we use with the [Food and Drug Administration] prior to widespread deployment,” said Marcus. “If you’re going to introduce something to 100 million people, somebody has to have their eyeballs on it… Number two, a nimble monitoring agency to follow what’s going on, not just pre-review, but also post as things are out there in the world, with the authority to call things back.”
Marcus added that there should also be funding focused on AI safety research and building an “AI constitution.”
For his part, Altman also called for an AI-focused regulating entity and regulation of AI overall.
“I would form a new agency that licenses any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards,” he said. “Number two, I would create a set of safety standards…We can give your office a longer list of things that we think are important there, but (there should be) specific tests a model has to pass before it can be deployed in the world.”
CEO behind ChatGPT warns Congress AI could cause ‘harm to the world’
In his first Congressional testimony, OpenAI CEO Sam Altman called for extensive regulation, including a new government agency charged with licensing AI models.
OpenAI CEO tells Senate that he fears AI’s potential to manipulate views
OpenAI chief executive Sam Altman testified before Congress for the first time on Tuesday, as the surging popularity of his company’s ChatGPT continues to trigger debate about the possibilities and perils of artificial intelligence.
The hearing featured lawmakers grappling with the ways artificial intelligence could upend the economy, democratic institutions and key social values. The Senate Judiciary subcommittee also heard testimony from IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus.
Altman has emerged as a powerful voice in the growing debate about AI capabilities and regulation. He is currently in the midst of a month-long, international goodwill tour to talk to policymakers about the technology. Earlier this month, he was among a group of CEOs who convened at the White House about AI regulation.
Notably absent from Altman’s proposals: requiring AI models to offer transparency into their training data, as his fellow expert witness Gary Marcus has called for, or prohibiting them from being trained on artists’ copyrighted works, as Sen. Marsha Blackburn (R-Tenn.) has suggested.
The Biden administration is increasingly calling AI an important priority, and there are growing efforts on Capitol Hill to draft legislation addressing the technology. Senate Majority Leader Charles E. Schumer (D-N.Y.) has been developing a new AI framework, which would “deliver transparent, responsible AI while not stifling critical and cutting edge innovation.”
14 May
AI presents political peril for 2024 with threat to mislead voters
(AP) — Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.
The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.
No more.
Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.
12 May
Special Report: ChatGPT And The Curse Of The Second Law
Peter Berezin
Unlike in past technological revolutions, the impact of superintelligent AI could arrive quite quickly. It will usher in an era of unprecedented prosperity or turn us all into paper clips.
• Most discussions of AI extrapolate linearly from what AI can do today to what it can do tomorrow. But AI’s progression is following an exponential curve, not a linear one, meaning that advances could come much faster than expected (see the sketch after this list).
• Just as the investment community and the broader public were blindsided by the exponential rise in cases during the early days of the pandemic, they will be blindsided by how quickly AI transforms society and the economy.
• Assuming that humanity survives the transition to superintelligent AI, the impact on growth could be comparable to what occurred first during the Agricultural Revolution, and later during the Industrial Revolution. Both revolutions saw a 30-to-100-fold increase in growth relative to the previous epoch.
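A toy Python sketch of the linear-versus-exponential point in the first bullet. The "capability index" and the yearly doubling rate are invented numbers, chosen only to show how far apart the two forecasts drift.

```python
# Toy comparison: a straight-line forecast versus a doubling curve,
# both starting from the same pace. All figures are illustrative.
years = range(0, 11)
linear = [1 + t for t in years]       # naive straight-line extrapolation
exponential = [2 ** t for t in years] # hypothetical doubling every year

for t in years:
    print(f"year {t:2d}: linear {linear[t]:4d} vs exponential {exponential[t]:5d}")

# By year 10 the linear forecast has grown 11-fold; the doubling curve,
# 1024-fold. For scale, the 30-to-100-fold jumps cited above correspond
# to only about five to seven doublings.
```

This is the intuition behind the pandemic analogy in the second bullet: observers who mentally extend a straight line are consistently surprised by a process that compounds.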
The world is at an AI crossroads and it needs Canada to act now
Bill C-27 could be a model for the globe, but experts fear it will be too late if it isn’t adopted before Parliament breaks for the summer.
(Montreal Gazette Editorial Board) In Canada, Bill C-27 [the Digital Charter Implementation Act, 2022] was tabled a year ago. Experts like [Montreal deep-learning luminary Yoshua] Bengio laud it as a strong regulatory framework that could be a model for the world, but fear it will be too late if it doesn’t get adopted before the House of Commons breaks for the summer.
The European Union is considering its own law. It seeks to ensure the responsible, transparent and ethical deployment of AI by rating machine learning applications according to their risk to human health and fundamental rights. The more potentially harmful the technology, the more stringent the regulation attached to its use.
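A conceptual Python sketch of that tiered approach, not the EU's actual legal text: the tier names loosely follow public summaries of the draft EU AI Act, and the example applications and obligations are illustrative assumptions, not legal detail.

```python
# Conceptual sketch of a risk-tiered regulatory scheme: the higher the
# assessed risk of an application, the heavier the obligations attached.
# Tier names echo public summaries of the draft EU AI Act; the example
# obligations are illustrative assumptions only.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters
    LIMITED = "limited"            # e.g. chatbots
    HIGH = "high"                  # e.g. hiring or credit scoring
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring

OBLIGATIONS = {
    RiskTier.MINIMAL: ["none beyond existing law"],
    RiskTier.LIMITED: ["disclose that users are interacting with an AI"],
    RiskTier.HIGH: ["risk assessment", "human oversight", "logging and audits"],
    RiskTier.UNACCEPTABLE: ["deployment banned"],
}

def requirements(tier: RiskTier) -> list[str]:
    """Return the regulatory obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(requirements(RiskTier.HIGH))
```

The design choice the EU approach embodies is visible even in this toy form: regulation keys off the application's risk class, not off the underlying model or technology.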
AI has a global reach, and adequate bulwarks can only be achieved through international co-operation. Much like treaties limiting nuclear proliferation and tackling climate change, world leaders must put aside their differences and work together to find common ground for the sake of protecting all humanity. Easier said than done, as always.
Canada, as a world hub for AI development, and a respected middle power with a long history of brokering critical international agreements, could and should play a pivotal role in this effort.
10 May
Winners and Losers in the AI Arms Race
Barry Eichengreen, Professor of Economics at the University of California, Berkeley, and a former senior policy adviser at the International Monetary Fund.
Generative artificial-intelligence models like ChatGPT will revolutionize the economy, though no one can say when. Equally important, no one can say where, though there is no reason why AI, like previous general-purpose technologies, shouldn’t produce widely shared net benefits.
1-2 May
Toronto prof called ‘Godfather of AI’ quits Google to warn world about the dangerous technology
As artificial intelligence explodes into the public realm, advancing by leaps and bounds, Geoffrey Hinton is now frightened by his child
(National Post) When he was a computer science professor at the University of Toronto, Geoffrey Hinton revolutionized the way machines interact with people and the world. His work was so innovative that he was scooped up by Google and dubbed the Godfather of Artificial Intelligence.
Now, however, as the technology explodes into the public realm, he is frightened by what he helped create.
He has quit his job at Google, he said, “so that I could talk about the dangers of AI.”
Hinton’s remarkable shift from leading AI proponent to AI klaxon pushes concerns over the rapid pace of its development out of the confines of the scientific community and chronic doomsayers, and into the mainstream.
Hinton, 75, is starting to regret his life’s work, according to a feature interview in The New York Times.
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.
(NYT) … Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
29 March
Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’
More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems.
(NYT) A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which the nonprofit Future of Life Institute released on Wednesday.
Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.
“These things are shaping our world,” said Gary Marcus, an entrepreneur and academic who has long complained of flaws in A.I. systems, in an interview. “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”