Archive
Advancing technological intelligence to improve the human-technology relationship
Updates on our work
July 29 - In a recent Substack post, Matt Motyl, a Senior Advisor at the Neely Center, analyzes new data from the Neely Social Media Index to determine whether the rate of users finding meaningful connections on social platforms has increased, decreased, or remained the same over the past 15 months. The findings are not straightforward. Looking at the aggregate data at a glance, it may seem that the overall rate of experiencing meaningful connections on any social platform has been stable over the past 15 months. However, breaking it down by demographic and social identities gives us a more nuanced picture. Less educated people became significantly less likely to report meaningful connections online (-12.1%), while more educated individuals were significantly more likely to report meaningful connections online (+4.4%). These significant diverging trends led to the gap between the least and most educated groups expanding from 27.8% to 34.2%. Similarly, the lowest income users became somewhat less likely to report meaningful connections online (-2.5%), while the highest income users became somewhat more likely to report meaningful connections online (+3.5%). Examining specific platforms, we see that FaceTime and text messaging, both direct communication-oriented services, exhibited significant increases in the rate of users reporting meaningful connections over the past 15 months. Snapchat and LinkedIn also exhibited significant increases in the rate at which their users report experiencing meaningful connections with others. WhatsApp also exhibited an increase in this rate, but the change was within the margin of error and thus not statistically significant. Conversely, we see non-significant decreases of at least 1% in the rate at which users on Nextdoor, X (formerly Twitter), and Reddit report meaningful connections.
July 19-22 - The Neely Center, in partnership with Helena, Close Up, Generation Lab, and the Deliberative Democracy Lab at Stanford University, co-sponsored America in One Room: The Youth Vote, held in Washington DC. This groundbreaking event brought together over 400 first-time Gen Z voters to engage in Deliberative Polling on critical issues like climate change, social justice, and economic inequality. By providing balanced information and facilitating moderated discussions, the event enabled participants to delve deeply into these topics and consider diverse viewpoints, enhancing their understanding of complex issues and highlighting the importance of civil discourse in a democratic society. Notable opinion shifts from before to after the deliberative polling event included an increase in satisfaction with democracy from 29% to 58%, a rise in opposition to a nationwide ban on abortion medication from 78% to 80%, a decrease in support for raising the federal minimum wage from 62% to 48%, and an increase in support for US energy independence from 62% to 76%. Another key observation was that while Gen Z is aware of social media's potential harms, they strongly oppose regulatory limits, valuing the freedom these platforms provide. The event was covered by major media outlets including ABC, NBC, CNN, Bloomberg, Axios, and Ms. Magazine, showcasing the fresh perspectives of these young voters. As we continue to support initiatives that foster informed and civil discourse, we believe that events like "America in One Room" are crucial in shaping a more inclusive and thoughtful political landscape.
July 16 - In a new piece for The Boston Globe, Managing Director of the Neely Center for Ethical Leadership and Decision Making, Ravi Iyer, shares insights on how we can better protect young people online. He recently collaborated with policy leaders in Minnesota to pass groundbreaking legislation that will “force social media platforms like Facebook, Instagram, TikTok, and Snapchat to reveal the results of their user experiments, disclose how their algorithms prioritize what users see on their feeds, explain how they treat abusive actors, and reveal how much time people spend on these platforms, including how often people are receiving notifications.” In the op-ed, he argues “Minnesota’s new law is an important step to begin holding social media companies accountable, and serves as a call to action to other states.”
July 9 - In a recent Tech Policy Press article discussing current design policy, with a focus on the California Age Appropriate Design Code, the authors discussed the Neely Center’s Design Code for Social Media as well as the recent Minnesota legislation that builds on it. Through the Neely Center, Ravi worked with stakeholders to propose his own design code reflecting consensus best practices across the industry. He explained the core value of “upstream” design and how it can address challenges like hate speech. “If you attack hate speech by identifying it and demoting it,” he explained, “you will miss out on, for example, ‘fear speech,’ which still creates a lot of hate. It can be as or more harmful than hate speech. You can’t define all the ways people hate or mislead one another … so you want to discourage all the harmful content, not just what you can identify. This affects the whole ecosystem: publishers see what is and isn’t rewarding.”
June 21 - On June 21, 2024, the inaugural cohort of the Neely Ethics and Technology Fellows Program presented their research on immersive technology along with industry experts and insiders. The fellows shared their findings on the positive and negative impacts of AR, VR, and mixed reality on areas such as fashion, gaming, healthcare, and advertising, culminating in the announcement of the Neely Purpose-Driven XR Library. Featuring industry insights from leaders such as Avi Bar-Zeev, April Boyd Noronha, and Sonya Haskins, this event explored the burgeoning $377 billion spatial computing market. “[Immersive Technology] has been around for a long time, and we’ve gone through many cycles of hype and excitement,” said Nathanael Fast, director of the Neely Center for Ethical Leadership and Decision Making. “But each time we go through a cycle, the technology is getting better and better … We remain very excited about the potential for societal benefit from extended reality, virtual and augmented reality, and spatial computing.”
June 11 - The National Academies of Sciences, Engineering, and Medicine is hosting a series of workshops to explore the latest thinking about human and organizational factors in implementing AI, including how to incorporate human insights about AI-produced output, human oversight of AI systems, and AI operation in real-world environments. Nate Fast, the Director of the Neely Center, has been invited to serve on the leadership committee for the workshops. The workshop sessions will focus on ways to broaden stakeholder participation (June 11, virtual); needs for evaluation, testing, and oversight (June 20, virtual); and the interplay between AI and organizational cultures (June 26, virtual). A final session will recap insights from the prior sessions and highlight paths forward (July 2, in DC). Events are free and open to the public. You are invited to sign up for the workshops. Registration is required.
May 23 - The Neely Center is committed to ensuring that technology is a positive force for our youth. The Anxious Generation, a recent bestseller authored by our longtime collaborator Jonathan Haidt, explores the impact of technology on youth. We are proud to share that our managing director, Ravi Iyer, contributed to a chapter in The Anxious Generation that addresses “what governments and tech companies can do now.” In a recent Substack article, Ravi expands on specific ideas from the Design Code for Social Media, particularly how device-based verification could help protect children. “The current system for protecting children online does not work. … The providers of operating systems, which is a market that Apple, Google, and Microsoft dominate, could help. … Device-based Age Verification could provide the control that parents want without the complexity that prevents the widespread use of current parental settings. … Device-based Age Verification would allow users to designate the user of a device as needing added protections across all applications used on that device.” The article and proposals within are already being considered by legislators across jurisdictions as the Neely Center continues to lead on design-based tech policy.
May 23-26 - The Association for Psychological Science (APS) Annual Convention is the premier global event in psychological science that attracts more than 2,000 researchers every year from around the world who share the latest discoveries and developments in a variety of fields. In 2024, the convention took place in San Francisco, California. Nathanael Fast, Director of the Neely Center, was invited to contribute to a panel at the event to speak about his research on AI, while Ravi Iyer explored opportunities for psychological scientists in the industry. Their participation highlighted the Neely Center's commitment to tracking technology’s growing impact on psychological science.
May 1-2 - In collaboration with the Council on Technology and Social Cohesion, the Neely Center co-hosted events on May 1st and 2nd, 2024, at the Internet Archive in San Francisco. The May 1st event brought together funders, investors, and founders to discuss how prosocial design could be incentivized financially. The May 2nd event brought together researchers and practitioners to examine evidence-based ideas for improving online comment spaces. Ravi Iyer, the Managing Director of the Neely Center, moderated a panel on fixing algorithmic feeds that included Jay Baxter from Twitter and Matt Motyl, who works at the Neely Center as well as the Integrity Institute. Stay tuned for the talks from the May 2nd event, which will be posted publicly in the near future.
April 15 - In a recent article in Time, Jigsaw, a Google subsidiary, revealed a new set of AI tools that can score posts based on the likelihood that they qualify as good content: Is a post nuanced? Does it contain evidence-based reasoning? Does it share a personal story or foster human compassion? By returning a numerical score (from 0 to 1) representing the likelihood of a post containing each of those virtues and others, these new AI tools could allow the designers of online spaces to rank posts in a new way. Instead of posts that receive the most likes or comments rising to the top, platforms could, in an effort to foster a better community, choose to put the most nuanced comments, or the most compassionate ones, first. Jigsaw’s new AI tools could result in a paradigm shift for social media. Elevating more desirable forms of online speech could create new incentives for more positive online and possibly offline social norms. If a platform amplifies toxic comments, “then people get the signal they should do terrible things,” stated Ravi Iyer, a technologist and the Managing Director of the Neely Center who also helps run the nonprofit Psychology of Technology Research Network. He went on to add, “If all the top comments are informative and useful, then people follow the norm and create more informative and useful comments.”
April 8-9 - The University of Michigan organized a hybrid conference on Social Media and Society in India, featuring a host of speakers discussing the various ways in which social media is shaping contemporary life in India. Now in its fourth iteration at the University of Michigan, the event is a premier venue for these conversations. Jimmy Narang, a postdoc at the Neely Center, presented his research on the mechanics of misinformation distribution in India. The Neely Center is proud to co-sponsor the event.
April 1 - In the summer of 2023, a small team at Yale's Justice Collaboratory (Matt Katsaros, Andrea Gately, and Jessica Araujo) collaborated with Ishita Chordia, a researcher at the University of Washington Information School, to better understand discussions about crime on Nextdoor. They leveraged both the Neely Center Social Media Index and the Neely Center Design Code in their work, which was published recently in Tech Policy Press. The paper proposes recommendations for enhancing online crime discussions.
March 27 - When we launched the Neely Social Media Index last year, we found that US adults who used X (Twitter) and Facebook were 2-3 times more likely to see content on those platforms that they considered bad for the world. In a Substack post, Matt Motyl delved into whether these experiences with detrimental content have evolved over the past year and how this varies across different social media and communication platforms. He found a decrease in reports of harmful content on Facebook, whereas on X (Twitter), there was an increase in reports of content potentially escalating the risk of violence.
March 26 - The European Commission recently sought our feedback on guidelines to mitigate systemic risks in electoral processes on large online platforms. The Neely Center provided input on the importance of design-based solutions that address many of the known limitations of watermarking. The Commission specifically cited the Neely Center among academic stakeholders who warned against an over-reliance on watermarking and labeling. We advised exploring design-based approaches, especially for scenarios where malicious actors might circumvent detection through watermarking.
March 26 - The Neely Center is proud to contribute to Jonathan Haidt's new book, The Anxious Generation, where several ideas from our Design Code were included in Chapter 10, concerning steps that technology companies and governments can take to improve teen mental health. Ravi Iyer, Neely Center Managing Director and longtime research collaborator of Prof. Haidt, helped write parts of the chapter, which included a focus on design and the use of device-based age authentication. The Neely Center continues to work with the team behind the book to turn its momentum into positive societal change.
March 22 - It is exciting to see several of our partners leveraging the Neely Center’s Design Code in engaging with global policymakers and civil society groups. Our partners at Build Up have engaged with the Kenyan and Ghanaian governments about specific ideas within our Design Code and continue to have fruitful dialogues about how these codes can be integrated into government policies. On March 22, 2024, our partners at Search for Common Ground organized a gathering in Sri Lanka of global civil society organizations working to combat Technology Facilitated Gender Based Violence (TFGBV) and invited the Neely Center to present our design recommendations to the group, as an alternative to current content-based legislation being considered in places like Sri Lanka, which civil society organizations worry will be used to curb free expression.
March 13 - Ravi Iyer, Managing Director of the Neely Center, was invited to join a panel at Stanford to present academic and industry perspectives to the White House Kids Online Health and Safety Task Force. In his remarks, he emphasized the recommendations from the Neely Center's Design Code that call for specific changes to platforms, backed by empirical evidence, that empower kids to avoid negative experiences with technology.
March 12 - In this Atlantic article, Nate Lubin, who collaborates with us on a variety of initiatives, discusses what to do about the "junkification" of the internet due to the rise of AI-generated synthetic content. In the article, he calls for platforms that are designed differently, so as not to set bad ecosystem incentives that foster junkification - specifically citing design changes from the Neely Center Design Code for Social Media. He also advocates for public health tools to assess platform risk and product experiment transparency.
March 12 - In an invited talk with the Federal Trade Commission (FTC), the Neely Center's Ravi Iyer discussed the impact of manipulative design patterns in social media, aligning with the FTC's focus on "Dark Patterns." His testimony emphasized our role in advocating for transparency and fairness in digital design. The Neely Design Code provides specific design recommendations for policymakers and technologists to improve the impact of social media platforms on society. We are excited to see the Neely Center's work contributing to substantive discussions on digital ethics.
March 11 - The Neely Center, in collaboration with the Council on Technology and Social Cohesion, hosted the Design Solutions Summit 2024 in Washington DC. This event brought together a select group of thought leaders and innovators at the forefront of technology and democracy, focusing on the critical role of design in enhancing online civic discourse. The event was kicked off by a speech from Kathy Boockvar, former Secretary of State of Pennsylvania, who discussed the effect that online discourse can have on elections, and included talks by numerous technologists with experience at Google, Facebook, and Instagram on potential design-based solutions. Among the participants in the workshop were representatives from Meta, Twitter, Google, USC, Notre Dame, Build Up, Search for Common Ground, Villanova University, the Prosocial Design Network, the National Democratic Institute, Stanford, Reset.Tech, Aspen Institute, the US State Department, Knight Georgetown, the Department of Homeland Security, the American Psychological Association, Athena Strategies, the Alliance for Peacebuilding, Protect Democracy, and India Civil Watch International. The event was co-sponsored by the Council on Technology and Social Cohesion and hosted at Search for Common Ground headquarters. Several participants leveraged the Neely Center's design code and election recommendations in their remarks. The convening proved productive and resonated with participants. As one attendee noted, “For someone working in the responsible tech field, the summit was an incredible opportunity to learn not just about new design solutions but, almost more importantly, where the field is converging on which design solutions are most powerful.”
March 7-8 - The Managing Director of the Neely Center, Ravi Iyer, spoke at Story Movements 2024, a convening supported by the MacArthur Foundation and hosted by American University's Center for Media and Social Impact. Ravi was part of a session entitled "AI, Social Media & Tech for the Future." Held in Washington DC, the conference brought together people and organizations doing the good work of repairing and imagining a just world through media, storytelling, comedy, research, and technology. The event was open to the public.
March 5 - In a recent Substack post, Neely Center’s senior advisor Matt Motyl delves into the shifting dynamics of social media usage and its impact on user well-being and societal norms between 2023 and 2024. The study, supported by the Neely Social Media Index, provides a comparative look at how engagement with social media platforms has evolved since our initial survey in early 2023, revealing notable trends such as a 5.8% decrease in YouTube usage and a 2.9% decrease in LinkedIn and X usage among US adults. No platform increased its share of users in this time span.
March - We are excited to share our Design Code with decision-makers within the UK Government and the UK’s communications regulator (Ofcom) as they implement the new Online Safety Act. Ofcom is now designing and consulting on its codes of practice to implement the Act. Ofcom also has a history of measuring user experiences online, similar to our Neely Center Indices, and there is much to be learned methodologically across both efforts. Ofcom recently added Ravi Iyer, our Managing Director, as a member of its Economics and Analytics Group Academic Panel. As Ofcom implements the Online Safety Act in the UK, Ravi will be advising on conceptual frameworks and empirical approaches to understand, measure, and improve outcomes for people in digital communications.
February 27 - Following recommendations from the Neely Center, Rep. Zach Stephenson has introduced the “Prohibiting Social Media Manipulation Act” aimed at curbing design practices that undermine user autonomy and elevate risks for Minnesotans on social media platforms. Ravi Iyer, Managing Director at the Neely Center, contributed insights and testified in support of the bill, which incorporates several of the Center's proposals such as enhanced privacy settings, ethical content amplification, reasonable usage limits, and greater transparency in platform testing.
February 20 - In an article discussing the risk of AI-powered deepfakes for India's 2024 election, Al Jazeera talked with Ravi Iyer, the Neely Center's Managing Director, about how platforms should respond. In keeping with our previous work on algorithmic design, Ravi discussed the difficulty platforms would have in detecting deepfakes and instead suggested redesigning the algorithms that currently incentivize polarizing content. The ethical implications of deepfakes are undeniable, and regulating them remains a complex issue. Yet safeguarding the integrity of our elections and democracy is paramount.
February 18 - We are excited to share this recently released paper, which the Neely Center helped sponsor and co-author, illuminating industry knowledge about the tradeoffs between quality and engagement optimization within algorithms. The paper highlights one of our core design code proposals: that platforms should optimize not for engagement but for judgments of quality. Written in collaboration with numerous partners in academia (University of California, Berkeley; Cornell Tech), civil society (Integrity Institute), and industry (Pinterest, LinkedIn), it also discusses many concrete ways that platforms have introduced signals of quality into algorithms, often by eliciting explicit preferences, with measurable results. The paper was also recently covered on Tech Policy Press's Sunday Show podcast.
February 6 - Ravi Iyer, Managing Director of the Neely Center, presented on “AI and Human Relationships: The Problem of Authenticity” at a symposium titled Science and Religion: Being Human in the Age of AI, organized by the Nova Forum. Ravi’s panel explored the intricate interplay between science and religion in our rapidly evolving technological landscape.
February 1 - In a recently released report, the Minnesota Attorney General's office leveraged the Neely Center for Ethical Leadership and Decision Making's Design Code for its comprehensive study on the impacts of social media and artificial intelligence on young people. The report not only highlights the challenges posed by digital platforms but also recommends actionable steps toward creating a safer online environment for youth, drawing on the principles outlined in the Design Code. Moreover, it cites the Neely Center's Social Media Index as a credible tool for monitoring user experiences with technology. Attorney General Ellison emphasized the report's importance in shaping policies that protect young internet users from the adverse effects of emerging technologies: "The report my office released today details how technologies like social media and AI are harming children and teenagers and makes recommendations for what we can do to create a better online environment for young people. I will continue to use all the tools at my disposal to prevent ruthless corporations from preying on our children. I hope other policymakers will use the contents of this report to do the same."
February 1 - NBC11 in Minneapolis spoke with Ravi Iyer, Managing Director, Neely Center for Ethical Leadership and Decision Making, about the Center's role in helping to shape the state's recommendations to safeguard social media user experiences.
February - We are thrilled to share an insightful essay that emerged from our collaboration with global peacebuilding organizations. Published by Conciliation Resources in “Accord: An International Review of Peace Initiatives” (Issue 30), this piece advocates for stakeholders to not only identify and address individual instances of harmful content within their communities but also push for systemic reforms of the incentives within these digital ecosystems. The essay argues that peacebuilders and mediators must move beyond reactive moderation to proactive prevention, influencing the foundational policies that govern social media platforms.
January 31 - X CEO Linda Yaccarino's recent Senate testimony revealed a shift in the platform's approach to safety, with a notable increase in trust and safety staff and plans to hire more moderators. However, this move has sparked discussions about its sufficiency in ensuring user protection, especially for minors. In this Wired article, Matt Motyl, our senior advisor at the Neely Center, highlights the challenges of such measures, calling for a more genuine commitment to safety in tech.
January 31 - In the American Psychological Association "Speaking of Psychology" podcast (Episode 271), Nathanael Fast, Director of the Neely Center for Ethical Leadership and Decision Making, discussed how AI affects people’s decision-making and why it’s important that the potential benefits of AI flow to everyone, not just the privileged.
January 12 - In a recent insightful article by the American Psychological Association, the Neely Center’s Director Nathanael Fast and Managing Director Ravi Iyer shared their thoughts and recommendations on integrating psychological principles into tech design. Ravi's expertise in ethical tech applications could help shape a future of social media that is safer and healthier for children and youth. Nate emphasized the critical need for diverse inputs in AI design, advocating for ethical frameworks to prevent biases and societal harm. This piece is a great read for anyone interested in how psychology can drive technological advancements.
January 11 - Artificial intelligence is weaving its way into the fabric of our daily lives more seamlessly than ever before. According to the latest analysis from the Neely UAS AI Index, 18% of US adults have interacted with AI-driven chat tools such as ChatGPT, Bard, and Claude. With adoption growing this rapidly and the generative AI market estimated to reach $1.3T by 2030, it is critical to examine how chat-based AI tools are being taken up. In a thought-provoking Substack post, our senior advisor Matt Motyl, postdoctoral researcher Jimmy Narang, and Neely Center Director Nate Fast unpack the potential ramifications of chat-based AI tool adoption.
January 9 - In this article by Politico, the Neely Center's Director Nathanael Fast and Affiliate Faculty Director Juliana Schroeder were featured for their insights on AI's growing influence. The piece delves into the rapid integration of AI technologies in various industries and the ethical implications that accompany this trend. Addressing the issues around AI ethics and the challenges we face in this rapidly evolving landscape is crucial for understanding how we can navigate these advancements responsibly.
January 9 - Ravi Iyer, the Managing Director of the Neely Center, was a featured guest on the 375th episode of the Techdirt Podcast. In the segment, Ravi talked about the Design Code for Social Media developed by the Neely Center which proposes specific steps we can take to design social media systems that safeguard society more effectively. A lively debate ensued!
January 9-12 - The 2024 Consumer Technology Association (CES) conference, a pivotal event in the tech world, featured the Neely Center's Director, Nathanael Fast, and Managing Director, Ravi Iyer, as contributing speakers. They presented at sessions that provided enlightening insights into the ethical implications of technology, covering both well-established and emerging areas. The conference represented an invaluable opportunity for attendees to delve into the rapidly evolving landscape of tech ethics and understand the critical role of leadership in navigating its complexities.
December 6-7 - The Neely Center’s Managing Director, Ravi Iyer, presented a talk on how AI-powered social media systems are affecting mental and physical health at Stanford University’s AI+Health Conference, held on December 6-7, 2023. The audience, which included medical practitioners from a wide range of institutions, was eager to understand how AI is likely to impact their professional work. In his talk, Ravi discussed how systems could be designed to improve mental health and reduce health misinformation.
December 1-3 - Organized by How to Build Up Inc., the Build Peace Conference explores emergent challenges to peace in the digital era and introduces peacebuilding innovations to address them. The conference served as an interdisciplinary forum for addressing critical topics and transformative practices in peace, conflict, and innovation. The USC Marshall Neely Center for Ethical Leadership and Decision Making was pleased to sponsor the Build Peace 2023 conference, which took place in Kenya from December 1-3. On Day 1 of the conference, the Neely Center hosted a workshop on “Risks and Benefits of AI for a Global Community”.
December - The Neely Center is excited to share the "Blueprint for Action" by the Convergence Collaborative on Digital Discourse, featuring contributions from our Managing Director Ravi Iyer. The digital environment can be fertile ground for disinformation and misinformation, psychological and behavioral manipulation, polarization, radicalization, surveillance, and addiction. As we kick off 2024, a pivotal election year for many countries around the world, this report offers timely strategies for enhancing digital discourse and strengthening democracy, including resources such as the Neely Center's proposed Design Code for Social Media.
November 19 - Stanford University’s McCoy Family Center for Ethics in Society hosted a conference on November 19, 2023, entitled Beyond Moderation 2023, which brought together academics and organizations interested in exploring how society can move beyond content moderation to improve technology's impact. At the event, Ravi Iyer introduced the Neely Center's Design Code for Social Media, arguing that society has a meaningful role to play in designing better technological systems.
November 1 - In this Time magazine op-ed, Ravi Iyer, the Managing Director of Neely Center, highlights how challenges in moderating content will always be present when dealing with conflict, including the recent conflict between Israel and Hamas. Leveraging the Neely Center's Design Code for Social Media, he presents a case for improving the design of online platforms as a timely alternative to attempting to adjudicate what people should and should not be able to say online.
October 29 - Tech Policy Press's Sunday Show podcast hosted Ravi Iyer, the Neely Center Managing Director, to discuss the recently released Neely Center Design Code for Social Media. The podcast is widely followed among those working in technology policy. In the episode, Ravi discussed the specifics of the Design Code and how it advances current efforts to improve the impact of social media on youth.
October 10 - The 2nd Annual Metaverse Summit: Building Connections and Communities through Mixed Reality took place on October 10, 2023, in Los Angeles. The summit featured Nathanael Fast, Director of the Neely Center, as a speaker. His panel focused on "Moving at the Speed of Innovation: How Can Policy Keep Up?".
September 28-29 - Hosted at Stanford University, the Trust and Safety Research Conference took place on September 28-29, 2023. The event brought together a cross-disciplinary group of academics and researchers from computer science, sociology, law, and political science to connect with practitioners and policymakers on challenges and new ideas for studying and addressing online trust and safety issues. Both our Director, Nathanael Fast, and Managing Director, Ravi Iyer, presented at the event.
September 27-29 - The Global Big Data Conference took place on September 27-29, 2023. This virtual event focused on artificial intelligence (AI). Neely Center's Managing Director, Ravi Iyer, presented at the keynote panel on Day 1 about Generative AI: Balancing Innovation and Responsibility.
September 26 - Ravi Iyer, Managing Director of the Neely Center for Ethical Leadership and Decision Making, was the inaugural speaker for the Pro-Social series on September 26, 2023. At the event, Ravi presented the Center's "Design Code for Social Media.” The talk series is organized by the ProSocial Design Network (PDN).
August 2 - What Can We Learn from the First Studies of Facebook’s and Instagram’s Roles in the US 2020 Election? Co-written by Ravi Iyer, Managing Director of the USC Marshall School’s Neely Center for Ethical Leadership and Decision Making, and Juliana Schroeder, professor at UC Berkeley’s Haas School of Business, this Tech Policy Press article discusses four studies on Facebook and Instagram's role in the US 2020 election, indicating that optimizing for engagement can incentivize divisive content and potentially lead to polarization. It also highlights that chronological feeds may not improve social media, and that measures of stable attitudes may not capture the effects of short-term platform changes. Additionally, optimizing for reshares can increase views of divisive content. The need to examine the long-term effects on vulnerable users, publishers, and politicians is emphasized.
July 19 - How User Experience Metrics complement "Content that Requires Enforcement": On July 19, 2023, a Bloomberg article leveraged the Neely Social Media Index to examine an emerging trend on Twitter: a surge in harmful posts potentially undermining advertiser trust and revenue. Twitter CEO Linda Yaccarino rebutted the assertions, labeling the cited data as incorrect, misleading, and outdated. The USC Marshall Neely Center for Ethical Leadership and Decision Making addressed these concerns in this Substack post, articulating the complexity of the issue and sharing details of the research design the Center employed to derive the cited data points.
July 19 - Twitter’s Surge in Harmful Content a Barrier to Advertiser Return: This Bloomberg article delved into the acquisition of Twitter by Elon Musk and subsequent policy changes that have allegedly led to an increase in harmful posts, negatively impacting advertiser confidence and revenue. In their discussion, the authors drew from the USC Marshall Neely Social Media Index, which shows that 30% of U.S. adults who used Twitter between March and May 2023 reported seeing content they consider bad for the world. The article has since been reprinted in Time and the Financial Post.
July 17 - Efforts to Rein In AI Tap Lesson From Social Media: Don’t Wait Until It’s Too Late: In this Wall Street Journal article, Ravi Iyer, Managing Director of the USC Marshall Neely Center for Ethical Leadership and Decision Making, discusses the recently launched Neely Artificial Intelligence (AI) Index, which tracks how people experience interactions with AI systems. The Neely Center strives to shape AI product design and deployment by advocating for rewarding platforms that make ethically sound design choices.
July 11 - How Tech Regulation Can Leverage Product Experimentation Results: Cowritten by Ravi Iyer, Managing Director of the USC Marshall Neely Center for Ethical Leadership and Decision Making, and Nathaniel Lubin, this Lawfare article emphasizes the need for transparency in technology regulation, particularly concerning the experimental results that tech companies use for product decisions. The authors propose a system where product experiment records and their impact on decisions and goals are shared with approved third-party reviewers and published on a regulated timeline, ensuring scrutiny while respecting business innovation and user privacy.
June 23 - Plurality Institute’s Spring Symposium: Bridging the Divide: On June 23, 2023, the Plurality Institute organized their Spring Symposium: Bridging the Divide. This live streamed event delved deep into exploring the theme of “bridging” during a time of increasing polarization. At the symposium, Ravi Iyer, Managing Director of the USC Marshall Neely Center for Ethical Leadership and Decision Making, underscored the value of diversity in improving algorithms.
June 21 - 3. Themes: The Most Harmful or Menacing Changes in Digital Life that are Likely by 2035: In this Pew article, Ravi Iyer, Managing Director of the USC Marshall Neely Center for Ethical Leadership and Decision Making, speculated on a scenario where a rogue state could build autonomous killing machines, with potentially disastrous consequences.
June 21 - Why Haidt and Schmidt’s Proposed Social Media Reforms Are Insufficient – In this After Babel Substack post, Ravi Iyer, Neely Center's Managing Director, casts a critical eye on the effectiveness of content moderation in social media. He posits that rather than being a sustainable remedy, content moderation is more akin to a short-term band-aid. He further proposes that achieving long-term, scalable solutions necessitates a comprehensive redesigning of the core algorithms that power social media platforms.
June 16 - AI Is Already Causing Unintended Harm. What Happens When it Falls Into the Wrong Hands? - In this article published in The Guardian, PTI’s senior advisor, David Evan Harris, warns of the potential dangers posed by powerful AI systems, citing the need for stringent controls, regulatory bodies, and legal measures to prevent misuse and ensure AI safety and integrity.
June 13 - No One Knows Exactly What Social Media is Doing to Teens - Some research suggests that social media platforms have contributed to an increase in teen depression and suicide attempts over the past decade and a half. This article in The Atlantic features work by two of our PTI collaborators, Jeff Hancock and Angela Lee, including their recent paper that highlights how social media may be more harmful for specific youth who have particular "mindsets". Our recent nationally representative panel on social media experiences includes their measure and we are looking forward to collaborating on platform specific relationships that mirror the results they have found in their work.
June 8 - NYC Social Media Summit - Organized by New York City Mayor Eric Adams and Commissioner Dr. Ashwin Vasan of the NYC Department of Health and Mental Hygiene (DOHMH) on June 8, 2023, this high-level summit focused on New York City’s role in addressing potential online threats to the mental health and well-being of young people. Ravi Iyer, Managing Director of the Psychology of Technology Institute at USC Marshall School of Business, addressed the responsibility that lies with social media companies themselves. He suggested that advocating for stricter content moderation policies may not be as effective as advocating for social media companies to design their platforms with the promotion of well-being in mind.
May 26 - Tech Layoffs Ravage the Teams That Fight Online Misinformation and Hate Speech - Despite the increasing threats of cyberbullying, misinformation, and potential for the abuse of AI, tech giants, including Meta, Alphabet, Amazon, and Microsoft, have made extensive layoffs in recent months, especially in departments dedicated to trust and safety and AI ethics. In this CNBC article, Ravi Iyer, Neely Center’s Managing Director, remarks that if platforms are not going to invest in reconsidering design choices that have been proven to be harmful, then there is indeed cause for concern.
May 23 - 4 Ways AI Safety Efforts Could Learn from Experience with Social Media - This week we published an article on how our experiences with social media can inform efforts to safeguard against the undesirable consequences of generative AI. We also wanted to share a few other press pieces highlighting our work.
May 18 - How Do We Fix It? AI Revolution: Disaster or Great Leap Forward? - In this podcast, Nathanael Fast, the Director of the Neely Center for Ethical Leadership and Decision Making and the Co-founder of PTI, shared his thoughts on whether AI represents a looming disaster or a great leap forward.
May 7 - Elon Musk’s Goal for Twitter: ‘Unregretted User-Minutes’ - This recent Wall Street Journal article about Elon Musk’s proposal to optimize for unregretted time spent quoted our substack article on the topic, where we leverage the extensive research on regret for acts of commission vs. omission.
May 6 - Few Are Addressing One of Social Media’s Greatest Perils - Both PTI member Kiran Garimella’s work on fear speech and algorithmic design ideas from our working paper on the algorithmic management of polarization and violence were featured in this recent New York Times article by Julia Angwin on the fact that the proliferation of fear speech is an under-appreciated online concern.
May 4 - PeaceCon 2023: Reimagining Technology - We spoke on a panel on “The Role of Social Media Algorithms in Promoting Social Cohesion” leveraging the paper we wrote for the Knight First Amendment Institute.
April 27-28 - Optimizing for What? Algorithmic Amplification and Society - We presented a paper on The Algorithmic Management of Polarization and Violence on Social Media, to be published by Columbia’s Knight First Amendment Institute, in collaboration with colleagues from UC Berkeley’s Center for Human Compatible AI and the peacebuilding organization BuildUp. A YouTube recording of the event is available here.
April 21 - AirTalk Discussion of What We have to Gain or Lose From AI - PTI Co-founder Nathanael Fast discusses how technology has changed our behavior in the past and how we can therefore expect AI to change us as well.
April 14 - Article 14 Coverage - Following our presentation on Social Media & Society in India at the University of Michigan, we were asked to comment on the rise of hate influencers in India and were quoted in this Article 14 piece concerning platform responsibility and how algorithmic design can help hate influencers build audiences.
April 9 - Tech Policy Press Podcast - Following our appearance on Lawfare’s podcast, several podcast hosts (starting around 20:40) discuss our ideas about using design, rather than content moderation, to improve social media’s impact on society.
April 7-8 - Social Media & Society in India - We presented work on how design can be a way to create a healthier online ecosystem across international contexts, such as in India.
March 31 - Buzzfeed article on AI Therapy - We commented on the dangers of using AI technologies for mental health therapy for this Buzzfeed article.
March 30-31 - Social Media Governance Initiative Spring Conference: Beyond Moderation - We organized a panel on “Design and Architecting of Healthy Online Ecosystems” as well as sponsoring part of the event, which was held at Yale Law school in collaboration with The Justice Collaboratory.
March 27 - Lawfare’s Arbiters of Truth Podcast - We discussed how to improve technology’s impact on society through design, rather than via content moderation.
March 24 - Anticipating the Metaverse - This workshop, held at the University of Southern California on Friday, March 24, brought together leading experts in AR and VR to discuss the latest developments in mixed reality and the exciting opportunities and ethical challenges these technologies present. The USC Marshall Neely Center for Ethics and the Psychology of Technology Institute partnered to organize a gathering of influential leaders and scholars to share personal insights and develop practices and principles that will be included in a white paper on the opportunities and ethics surrounding mixed reality.
February 23-24 - Tech + Social Cohesion Conference - This conference provided a unique space for tech innovators, Trust & Safety staff, and practitioners with community bridge building and global peacebuilding experience to explore a new generation of tech products that offer design affordances and algorithms optimized for social cohesion. Psych of Tech spoke at the conference and sponsored parts of it as well, in collaboration with numerous partner organizations.
February 8 - A Call for More Research on the Psychology of Technology - Co-Director Nate Fast published an op-ed at Fast Company making the case for our need to study not only AI but also psychology. He wrote, “In the past 20 years, we embraced social media platforms like Facebook and Twitter with open arms. Only now, when it’s too late to turn back the clock, do we comprehend their power to influence our worldviews, spread misinformation and hate speech, and even sway elections.” By advancing our understanding of the psychology of technology we can identify and address harms of new tech sooner, while keeping the benefits.
February 7 - Dissertation Award deadline - Winners were announced in April.
January 5 - Wall Street Journal covers Removing Engagement Incentives for Political Content - Managing Director Ravi Iyer is quoted in this article that shows how removing engagement incentives can be a content-neutral way to improve the societal impact of technology, as measured by reduced anger, bullying and misinformation. Also covered in this episode of The Gist podcast.
December 9-10, 2022 - HumanTech Summit - Co-Director Nate Fast gave a keynote address on “The Psychology of Technology at Work: Anticipating Organizations in the Age of AI” at the HumanTech Summit in Warsaw, Poland.
November 11-12, 2022 - PTI’s 6th Annual Conference
The sixth annual “New Directions in Research on the Psychology of Technology” conference was held at the Wharton School, University of Pennsylvania on November 11-12, 2022, and was co-hosted by the Wharton Human-Centered Technology Initiative, Neely Center for Ethical Leadership, and the Psychology of Technology Institute. Recordings coming soon.
November 4, 2022 - Online Workshop on Technology, Trust, and Democracy
How is tech influencing our ability to trust each other and maintain a healthy democracy? To discuss the answer to this critical question, we convened a set of experts to discuss: Jonathan Haidt, Frances Haugen, Shankar Vedantam, Pia Shah, Talia Stroud, and Kamy Akhavan. No recording is available, but you can read some of the highlights and quotes in this Twitter thread.