Freedom from Facebook?

February 18, 2019

Facebook recently turned 15. As mistrust and misgivings about the platform continue to spread, a recent working paper looks at the effects of deactivating Facebook for four weeks. The results, as summarized by the New York Times: “More in-person time with friends and family. Less political knowledge, but also less partisan fever. A small bump in one’s daily moods and life satisfaction. And, for the average Facebook user, an extra hour a day of downtime.”

Interestingly, deactivation produced a significant drop in the polarization of users’ views on policy issues, but not in their affective evaluations of their political opponents. This would seem to be consistent with fears that social media foster ideological “echo chambers,” allowing users to tailor their information streams to fit their existing political opinions. But research suggests that exposure to opposing views can actually increase polarization. Indeed, the most salient individual effect on polarization in the deactivation study came from seeing less information that improved one’s understanding of the other party. Understanding one another better, at least online, is not sufficient to reduce polarization.

Short of deactivating our Facebook accounts en masse, is there another way to calm partisan furor? There’s good reason to believe that polarizing moral and emotional content will always be more attention-grabbing than pacifying material. Fixing this problem will likely require a change in the design incentives of the platforms themselves. Prompting users to consider the partisanship of their messages before posting, for example, could stem the tide of viral outrage. This and other possible design fixes are detailed in a useful Quartz piece by Tobias Rose-Stockwell.

Attitudes Toward AI

There is a broad spectrum of opinion among artificial intelligence researchers as to the status of the field and its future implications. Still, many of them agree that the media and the public woefully misrepresent the subject. This is concerning, given the massive impact the technology is likely to have in the near future. According to a recent report from Brookings, one-quarter of all jobs face a “high risk” of automation in the coming decades (to say nothing of the existential concerns some have voiced about the technology).

In an effort to gauge public opinion on AI, researchers from the Center for the Governance of AI at Oxford’s Future of Humanity Institute polled 2,000 American participants. Here are some key findings from their report:

  • “Americans express mixed support for the development of AI. After reading a short explanation, a substantial minority (41%) somewhat support or strongly support the development of AI, while a smaller minority (22%) somewhat or strongly oppose it.

  • Among 13 AI governance challenges, Americans prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake and harmful content online, preventing AI cyber attacks, and protecting data privacy. All challenges were rated as 'important' and as over 50% likely to affect a large number of people in the US in the next 10 years by the respondents.

  • Americans have discernibly different levels of trust in different organizations to develop AI for the best interests of the public. The most trusted are university researchers and the U.S. military; the least trusted is Facebook. There was no actor for which the average respondent had 'a fair amount of confidence.'

  • The median respondent predicts that there is a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as when machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task.”

Nathanael Fast