Yet Hodgkinson worries that researchers in the field will pay attention to the technique, rather than the science, when trying to reverse engineer why the trio won the prize this year. “What I hope this doesn’t do is make researchers inappropriately use chatbots, by wrongly thinking that all AI tools are equivalent,” he says.
The fear that this could happen is rooted in the explosion of interest around other supposedly transformative technologies. “There’s always hype cycles, recent ones being blockchain and graphene,” says Hodgkinson. Following graphene’s discovery in 2004, some 45,000 academic papers mentioning the material were published between 2005 and 2009, according to Google Scholar. But after Andre Geim and Konstantin Novoselov won the 2010 Nobel Prize in Physics for that discovery, the number shot up to 454,000 between 2010 and 2014, and to more than a million between 2015 and 2020. This surge in research has arguably had only a modest real-world impact so far.
Hodgkinson believes the energizing power of multiple researchers being recognized by the Nobel Prize panel for their work in AI could cause others to start congregating around the field—which could result in science of variable quality. “Whether there’s substance to the proposals and applications [of AI] is another matter,” he says.
We’ve already seen the impact of media and public attention toward AI on the academic community. The number of publications around AI tripled between 2010 and 2022, according to research by Stanford University, with nearly a quarter of a million papers published in 2022 alone: more than 660 new publications a day. And that was before the November 2022 release of ChatGPT kickstarted the generative AI revolution.
The extent to which academics are likely to follow the media attention, money, and Nobel Prize committee plaudits is a question that vexes Julian Togelius, an associate professor of computer science at New York University’s Tandon School of Engineering who works on AI. “Scientists in general follow some combination of path of least resistance and most bang for their buck,” he says. And given the competitive nature of academia, where funding is increasingly scarce and directly linked to researchers’ job prospects, a trendy topic that—as of this week—has the potential to earn high achievers a Nobel Prize could prove too tempting to resist.
The risk is that this could stymie innovative new thinking. “Getting more fundamental data out of nature, and coming up with new theories that humans can understand, are hard things to do,” says Togelius. That kind of work requires deep thought. It’s far more productive, at least in publication terms, for researchers to instead carry out AI-enabled simulations that support existing theories using existing data—producing small hops forward in understanding rather than giant leaps. Togelius foresees a new generation of scientists doing exactly that, because it’s easier.
There’s also the risk that overconfident computer scientists, having helped advance the field of AI and now seeing AI work win Nobel Prizes in unrelated scientific fields—in this instance, physics and chemistry—decide to follow in the laureates’ footsteps and encroach on other people’s turf. “Computer scientists have a well-deserved reputation for sticking their noses into fields they know nothing about, injecting some algorithms, and calling it an advance, for better and/or worse,” says Togelius. He admits to having previously been tempted to add deep learning to another field of science and “advance” it, before thinking better of it, because he doesn’t know much about physics, biology, or geology.