
A.I. Is Mastering Language. Should We Trust What It Says?

But as GPT-3’s fluency has dazzled many observers, the large-language-model approach has also attracted significant criticism over the last few years. Some skeptics argue that the software is capable only of blind mimicry — that it’s imitating the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the L.L.M. approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of A.I. hype, channeling research dollars and attention into what will ultimately prove to be a dead end, keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever remain compromised by the biases and propaganda and misinformation in the data it has been trained on, meaning that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won’t be deployed commercially in the coming years. And that raises the question of exactly how they — and, for that matter, the other headlong advances of A.I. — should be unleashed on the world. In the rise of Facebook and Google, we have seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and A.I. threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such scale and ambition, with such promise and such potential for abuse?

Or should we be building it at all?

OpenAI’s origins date to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place amid two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computational power — and some new breakthroughs in the design of neural nets — had created a palpable sense of excitement in the field of machine learning; there was a feeling that the long ‘‘A.I. winter,’’ the decades in which the field failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural net had previously achieved. Google quickly swooped in to hire the AlexNet creators, then acquired DeepMind, all while building out an in-house initiative of its own, Google Brain. The mainstream adoption of intelligent assistants like Siri and Alexa demonstrated that even scripted agents could be breakout consumer hits.

But during that same stretch of time, a seismic shift in public attitudes toward Big Tech was underway, as once-popular companies like Google and Facebook drew criticism for their near-monopoly power, their amplification of conspiracy theories and their inexorable siphoning of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing in op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book ‘‘Superintelligence,’’ introducing a range of scenarios whereby advanced A.I. might deviate from humanity’s interests with potentially disastrous consequences. In late 2014, Stephen Hawking told the BBC that ‘‘the development of full artificial intelligence could spell the end of the human race.’’ It seemed as if the cycle of corporate consolidation that characterized the social media age was already happening with A.I., only this time around, the algorithms might not just sow polarization or sell our attention to the highest bidder — they might end up destroying humanity itself. And once again, all the evidence suggested that this power was going to be controlled by a few Silicon Valley megacorporations.

The agenda for the dinner on Sand Hill Road that July night was nothing if not ambitious: figuring out the best way to steer A.I. research toward the most positive outcome possible, avoiding both the short-term negative consequences that bedeviled the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape — one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as it was organizational: If A.I. was going to be unleashed on the world in a safe and beneficial way, it was going to require innovation on the level of governance and incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or A.G.I., was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that the achievement of humanlike intelligence by A.I.s would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control them.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman had signed on to be chief executive of the enterprise, with Brockman overseeing the technology; another attendee at the dinner, the AlexNet co-creator Ilya Sutskever, had been recruited from Google to be head of research. (Elon Musk, who was also present at the dinner, joined the board of directors, but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: ‘‘OpenAI is a nonprofit artificial-intelligence research company,’’ they wrote. ‘‘Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.’’ They added: ‘‘We believe A.I. should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.’’

The OpenAI founders would release a public charter three years later, spelling out the core principles behind the new organization. The document was easily interpreted as a not-so-subtle dig at Google’s ‘‘Don’t be evil’’ slogan from its early days, an acknowledgment that maximizing the social benefits — and minimizing the harms — of new technology was not always that simple a calculation. While Google and Facebook had reached global domination through closed-source algorithms and proprietary networks, the OpenAI founders promised to go in the other direction, sharing new research and code freely with the world.


