The Power and Peril of Creative AI

March 4, 2019

OpenAI, the artificial intelligence nonprofit behind some of the field’s most impressive recent achievements, just released a new language model so powerful that it helped journalists write about it. The system, innocuously dubbed GPT-2, has given new fodder to the ongoing debate about the power and safety of modern AI.

Given nothing but an enormous corpus of text scraped from the web, GPT-2 is able to string together coherent, context-sensitive paragraphs in response to brief prompts. Even more impressively, the system seems to have developed partial versions of specific cognitive faculties (among them the capacity to translate French, generate acronyms, and weave story arcs) merely as a consequence of learning to predict the next word in a passage of text. GPT-2 is an unsupervised learning system, meaning it requires neither hardcoded capacities nor labeled training data to accomplish this feat.
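
To make that training objective concrete, here is a minimal sketch of next-word prediction in Python. It is a toy bigram model built on a made-up miniature corpus, not GPT-2’s actual transformer architecture; the point is only to illustrate the “predict the next word” signal that GPT-2 scales up by many orders of magnitude.

```python
import random
from collections import defaultdict

# A toy "predict the next word" model. GPT-2 pursues the same objective,
# but with a large neural network and billions of words of training text.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(seed, length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # no observed continuation; stop early
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat and the cat"
```

Where this sketch conditions on a single preceding word, GPT-2 conditions on hundreds of prior words at once, which is what lets it sustain coherent paragraphs rather than word salad.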

Of particular interest to psychologists, though, is the threat GPT-2 poses to traditionally creative domains of human activity. AI has already begun to replace some journalistic writing, but GPT-2’s generative capacity represents a major advance over existing systems. This puts pressure on those who would cite creativity as a remaining bastion of human uniqueness. (As Jon Gratch pointed out in our interview with him, this battle for human uniqueness has a long history.) Whether GPT-2 and systems like it are truly creative, or indeed whether AI will ever be capable of creativity at all, is a subject of vigorous debate.

In a recent article for the MIT Technology Review, the philosopher Sean Dorrance Kelly claims that AI cannot be creative because of the socially embedded nature of real creativity. At best, he argues, AI can aid and inspire humans in their creative discoveries. This line is echoed by the chess grandmaster Garry Kasparov (no stranger to the threat of machines), who has argued that human-machine collaboration — not domination — is the future of AI. “The solution isn’t less technology,” he recently told Verdict, “but better humans.”

As fears mount that scientific innovation is not keeping pace with investment in research, some have suggested that this collaborative approach could make invention cheaper and faster. But it may also come with a psychological cost, as researchers confront the prospect of becoming obsolete. As Mohammed AlQuraishi, a biologist who recently saw his work bested by DeepMind’s AlphaFold, told Vox: “A lot of scientists judge themselves based on how smart they are, how quickly they can solve a problem... So this could lead to a kind of widespread existential crisis.”

In the meantime, social scientists still have a major role to play in ensuring that the deployment of advanced AI goes smoothly. This was driven home when, alongside the ominous decision to withhold GPT-2’s full trained model for safety reasons, OpenAI released a detailed call for social science research into AI safety questions. While many AI researchers have tried their hands at specifying human values, OpenAI is one of only a handful of organizations working on specific, generalizable value-alignment techniques. A useful discussion of their approach can be found here.

Screens & Teens

Though consumers and tech companies alike are increasingly awakening to the need for a healthier relationship with screens, research on the topic remains decidedly mixed. This is particularly true for teens and children, whose extensive use of apps like Snapchat and TikTok has been central to the concerns of the Time Well Spent community. Which of these concerns are valid and which are overblown remains very much an open question. As Larry Rosen said in our interview with him, “we have no earthly clue what it means to give a ten-year-old a smartphone.”

In an effort to open up debate on this topic, the social psychologist Jonathan Haidt (in collaboration with Jean Twenge) has created an open-source annotated bibliography. Haidt has been a vocal advocate of screen-time limits and, following Twenge’s research, has cited social media as a key cause of declining mental health among teens. The bibliography contains both a summary of his arguments and an ongoing list of countervailing studies. Social scientists are encouraged to contribute.

One source of confusion, as discussed in our recent interview with Adam Alter, is that screen time may be too blunt a metric to capture any meaningful effects. Unlike books, newspapers, and other physical media, screens are universal, meaning that any number of previously disparate tasks now fall under the single banner of screen time. And while any screen time past a certain hour may be damaging to sleep, not all uses of screens are equally damaging to mental health.

Now that many device manufacturers are building time-tracking software into their operating systems, it will become easier for researchers to gather precise data on how teens use screens without relying on self-report. But given the prevalence of digital technologies and data collection in all of our lives, we’re likely to experience the consequences of this “grand experiment” long before we’re in a position to control them.

Nathanael Fast