2023 has been dominated by ChatGPT and the accelerating, widespread adoption of LLMs. This technology does seem set to disrupt all our lives. In The House Next Door series, in The Recoverist Quartet, and in my recent short story An Object Misplaced in Time, I imagine AI being used in warfare and eventually superseding humans.

The scenario of AI destroying humanity is now widely discussed in the media, with some, including scientists and “godfathers of AI” Geoffrey Hinton and Yoshua Bengio, as well as Google DeepMind CEO Demis Hassabis, calling for governments to regulate the use of AI in a similar way to nuclear weapons.

A slightly more lighthearted approach to discussing the potential pathways of our human future living with AI has been taken by Scott Aaronson and Boaz Barak, computer scientists at the University of Texas at Austin and Harvard University respectively, in their essay Five Worlds of AI.

Nevertheless, as with all technology, AI does pose significant risks, including:

  • Ethical Concerns: Bias in AI, privacy issues.
  • Security Risks: AI in cybersecurity, potential for misuse.
  • Economic Impact: Job displacement, economic inequality.

What we can expect from AI in the next 5 years:

  • Healthcare: Personalised medicine, advanced diagnostics.
  • Environmental Applications: Climate modelling, renewable energy optimisation.
  • Automotive Industry: Advancements in autonomous vehicles.

Innovations to expect in the long term:

  • General AI: AI with human-like cognitive abilities.
  • Space Exploration: AI in unmanned missions, extraterrestrial research.
  • Human-AI Integration: Brain-computer interfaces, augmented reality applications.

What the academics are researching

It’s always interesting to look at what top universities are working on. At MIT, researchers like Pataranutaporn, Liu, Finn, and others in the Fluid Interfaces group are studying human-AI interaction, particularly how priming beliefs about AI can influence trustworthiness, empathy, and effectiveness. Robert Mahari, a PhD student in the Human Dynamics group, is working on issues related to copyright and generative AI tools. Markus Elkatsha and Kent Larson are developing platforms dedicated to solving spatial design and urban planning challenges using AI, including projects like Mobility On-Demand, which looks at reducing dependency on fossil fuels in urban environments.

AI and Machine Learning at EECS (Electrical Engineering and Computer Science): This MIT department combines computer science and electrical engineering traditions to develop techniques for systems that interact with an external world through perception, communication, and action. It also focuses on learning, decision-making, and adapting to a changing environment.

AI@Cam Initiative: This is Cambridge University’s flagship mission on artificial intelligence, which aims to leverage world-leading research across the University and create connections between disciplines, sectors, and communities. It focuses on the rapidly advancing field of AI and its applications to benefit science, society, and the economy. The initiative is interdisciplinary and challenge-led, connecting the University’s AI capabilities to real-world needs.

Minderoo Centre for Technology and Democracy: Researchers from this centre are part of a £31 million consortium to develop trustworthy and secure AI. The consortium aims to create a UK and international research and innovation ecosystem for responsible AI. Gina Neff, Executive Director of the Minderoo Centre at Cambridge, is directing the strategy group for the project, which will focus on linking Britain’s responsible AI ecosystem and leading a national conversation around AI.

Centre for Human-Inspired Artificial Intelligence (CHIA): Led by Professor Anna Korhonen, Professor Per Ola Kristensson, and Dr John Suckling, CHIA focuses on developing AI grounded in human values and benefiting humanity. The centre’s research includes responsible AI, human-centred robotics, human-machine interaction, healthcare, economic sustainability, and climate change. The centre is also supported by a partnership with Google.

Fei-Fei Li: Fei-Fei Li is the Sequoia Professor of Computer Science at Stanford University, co-director of Stanford’s Human-Centered AI Institute, and an affiliated faculty member at Stanford Bio-X. Her research interests include cognitively inspired AI, machine learning, computer vision, and ambient intelligent systems for healthcare delivery.

Stanford Institute for Human-Centered Artificial Intelligence (HAI): HAI focuses on developing human-centered AI technologies. Its research falls into three key areas: Intelligence (developing AI that understands human language, emotions, intentions, behaviors, and interactions), Augmenting Human Capabilities (creating AI that collaborates with and augments humans), and Human Impact (studying how AI interacts with humans and social structures).

One of the major focuses at Tsinghua University is the integration of AI with transportation. The university has collaborated on a white paper titled “Key Technologies and Prospects of Vehicle-Infrastructure Collaboration for Autonomous Driving,” a significant contribution to vehicle-infrastructure collaboration technology for autonomous driving. This reflects Tsinghua’s commitment to advancing AI in practical applications such as smart transportation.