Scientists have developed a method to convert thoughts into written words using a sensor-equipped cap and artificial intelligence. In a study, participants wore the cap, which captured their brain activity through electroencephalogram (EEG) recordings while they read text. An AI model named DeWave then translated these recordings into text.
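As a rough illustration of the shape such a pipeline takes, the Python sketch below encodes EEG windows into discrete codes and maps the code sequence to words. Every component here, from the power-based features to the codebook and the stand-in vocabulary, is invented for illustration; DeWave's actual encoder and language-model decoder are learned from data and far more sophisticated.

```python
# Toy sketch of an EEG-to-text pipeline in the spirit of DeWave
# (illustrative only: every component below is invented).
import numpy as np

rng = np.random.default_rng(0)

# 1. Pretend EEG: 5 one-second windows of 8-channel recordings.
eeg_windows = rng.normal(size=(5, 8, 256))  # (windows, channels, samples)

# 2. "Encoder": collapse each window to a feature vector (here, just
#    per-channel signal power; a real encoder is a trained network).
features = (eeg_windows ** 2).mean(axis=2)  # shape (5, 8)

# 3. Vector-quantise features against a codebook, so each window
#    becomes one discrete code.
codebook = rng.normal(size=(16, 8))         # 16 hypothetical codes
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
codes = dists.argmin(axis=1)                # one code index per window

# 4. A language model would map the code sequence to words; a
#    stand-in lookup table plays that role here.
vocab = [f"word{i}" for i in range(16)]
print("decoded:", " ".join(vocab[c] for c in codes))
```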

Chin-Teng Lin from the University of Technology Sydney highlights the system’s non-invasive nature, affordability, and portability. Translation accuracy was initially around 40%, but recent improvements have pushed it above 60%; the work was presented at the NeurIPS conference in New Orleans, Louisiana. In early experiments participants read the sentences aloud, but in later research they read silently.

This development contrasts with earlier research by Jerry Tang at the University of Texas at Austin, which achieved similar accuracy but interpreted brain activity from functional MRI (fMRI) scans. The EEG approach is more practical because it does not require subjects to remain stationary inside a scanner.

Meanwhile, a team led by Feng Zhou at New York University has developed tiny machines, approximately 100 nanometres across, constructed from four strands of DNA. These nanobots can replicate themselves exponentially in a solution containing raw DNA building blocks, using their own structure as a scaffold to arrange those materials into copies of themselves.

Andrew Surman from King’s College London notes that these nanobots represent progress in creating DNA-based machines that could potentially manufacture drugs or chemicals, or function as rudimentary robots or computers. Earlier efforts were limited to 2D shapes that needed folding into 3D structures, a process prone to errors. This new method allows for direct construction of 3D structures.

Richard Handy from the University of Plymouth explains that these DNA structures serve as moulds or scaffolds for building nanostructures, whether replicas of the original or drugs and chemicals. The technology could be particularly beneficial for people with genetic deficiencies, allowing necessary enzymes or proteins to be produced directly in their tissue.

However, Surman notes that the self-replication process has real limitations. It requires specific DNA chains, organic molecules, gold nanorods, and precise cycles of heating, cooling, and UV light exposure: the UV light solidifies each new nanobot, and heating separates it from its parent structure so that both can go on to template another copy. This intricate process is currently confined to controlled laboratory settings, allaying fears of an uncontrolled replication run that consumes all available DNA.
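A toy simulation makes the dynamics concrete: each heat-cool-UV cycle at most doubles the population, and replication halts once the raw DNA in the solution is exhausted. All quantities below are invented for illustration.

```python
# Toy model of cycle-limited self-replication: each heat/cool/UV cycle
# lets every existing nanobot template one copy, doubling the
# population until the raw DNA in solution runs out.

def replicate(initial_bots: int, raw_strands: int, cycles: int,
              strands_per_bot: int = 4) -> int:
    """Return the nanobot count after the given number of cycles."""
    bots = initial_bots
    for _ in range(cycles):
        want = bots                           # each bot templates one copy
        can_build = raw_strands // strands_per_bot
        built = min(want, can_build)
        raw_strands -= built * strands_per_bot
        bots += built                         # heating frees parent and copy
    return bots

# Doubling: 1 -> 2 -> 4 -> ... until the feedstock is exhausted.
print(replicate(initial_bots=1, raw_strands=4_000, cycles=12))
```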

While the potential for these nanobots to run out of control is low, Handy cautions that some risk remains because protein folding and 3D structures in cells are still incompletely understood. Safeguards can be built, but they don’t guarantee absolute safety. For now, the “grey goo” scenario, a science-fiction concept in which self-replicating machines turn all matter into copies of themselves, remains just that: fiction.

Elsewhere, a team of scientists has created a biocomputing system from living clusters of brain cells, known as brain organoids, that can recognize an individual’s voice among hundreds of sound clips, demonstrating a basic form of speech recognition. The organoids, grown from stem cells, sit on a microelectrode array through which a connected computer stimulates them and records their responses. The system, named “Brainoware,” has the potential to perform AI tasks with significantly less energy than silicon chips.

Feng Guo at Indiana University Bloomington acknowledges that this is a preliminary step and much progress is needed. The organoids, a few millimetres wide and containing up to 100 million nerve cells, receive the audio clips as sequences of electrical signals. Their accuracy in recognizing voices was initially 30-40%, but after two days of training it rose to 70-80%.

This process, termed adaptive learning, is unsupervised: the organoids receive no feedback on how well they are performing. Their ability to learn depends on the formation of new connections between nerve cells.
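As a flavour of feedback-free learning, here is a toy Hebbian model (“cells that fire together wire together”) in which repeated exposure to one pattern strengthens connections without any error signal. It is purely illustrative; the organoids’ actual plasticity mechanisms are far richer and not fully understood, and all sizes and rates below are invented.

```python
# Toy Hebbian plasticity: connections strengthen whenever two units
# are co-active, with no error signal or performance feedback, loosely
# mirroring how repeated stimulation could reshape organoid wiring.
import numpy as np

rng = np.random.default_rng(1)
n = 20
weights = np.zeros((n, n))
speaker_pattern = (rng.random(n) > 0.5).astype(float)  # one "voice"

for _ in range(50):                          # repeated exposure
    noisy = speaker_pattern * (rng.random(n) > 0.1)    # jittered replay
    weights += 0.01 * np.outer(noisy, noisy)           # Hebbian update
    np.fill_diagonal(weights, 0.0)           # no self-connections

# After training, the network responds more strongly to the familiar
# pattern than to a novel one, despite never receiving feedback.
novel = (rng.random(n) > 0.5).astype(float)
print("familiar drive:", speaker_pattern @ weights @ speaker_pattern)
print("novel drive:   ", novel @ weights @ novel)
```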

While this approach addresses the high energy consumption and other limitations of the silicon chips used in conventional AI, it faces its own challenges: the organoids can be maintained for only one or two months, and the current system can identify who is speaking, not what is being said.

Experts like Titouan Parcollet at the University of Cambridge see potential in biocomputing but also caution against overestimating its capabilities compared to current deep learning models, which excel in specific tasks. Guo’s team aims to overcome these limitations and fully harness the computational power of brain organoids for AI computing.

And finally, in 2024 the DeepSouth neuromorphic supercomputer is set to launch in Australia, marking a significant advance in computational neuroscience. Developed by the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University in collaboration with Intel and Dell, DeepSouth differs from traditional computers in using hardware designed to implement spiking neural networks, mimicking the way networks of synapses process information in the human brain.

This machine is expected to perform 228 trillion synaptic operations per second, comparable to the estimated rate of the human brain. Andre van Schaik of ICNS, who leads the project, notes that DeepSouth will be the first machine to simulate a spiking neural network at the scale of the human brain in real time. While it won’t surpass current supercomputers in raw power, it aims to advance our understanding of neuromorphic computing and brain function.
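As a back-of-envelope check on that figure, here is one way the arithmetic could work out, assuming the commonly cited ~86 billion neurons, an average of about 1,000 synapses per neuron, and a mean firing rate of a few hertz. These inputs are assumptions chosen for illustration; the project’s own accounting may differ.

```python
# Back-of-envelope for the 228-trillion figure. All inputs are assumed
# round numbers, not DeepSouth's own accounting.
neurons = 86e9                 # ~86 billion neurons (common estimate)
synapses_per_neuron = 1_000    # assumed average
avg_firing_hz = 2.65           # assumed mean rate, a few hertz
ops_per_second = neurons * synapses_per_neuron * avg_firing_hz
print(f"{ops_per_second:.3g} synaptic ops/s")  # ~2.28e+14, i.e. ~228 trillion
```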

Notably, DeepSouth addresses the issue of energy consumption in computing. Traditional supercomputers are significant energy consumers, whereas the human brain runs on roughly 20 watts, about the power of a light bulb. Neuromorphic systems like DeepSouth process data differently: they perform many operations in parallel, move less data around, and compute only when events occur rather than running continuously, which yields substantial power savings.
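The event-driven idea is easy to sketch in code. In the toy leaky integrate-and-fire neuron below, computation happens only when a spike event arrives, not on every clock tick; the model and its constants are simplified illustrations, not DeepSouth’s actual design.

```python
# Minimal sketch of event-driven spiking computation: instead of
# updating every neuron at every clock tick, work is done only when a
# spike event arrives, which is where neuromorphic chips save power.
import math

events = [(0.001, 0.4), (0.003, 0.5), (0.010, 0.3), (0.011, 0.6)]
tau, threshold = 0.005, 1.0          # membrane time constant (s), firing threshold
v, last_t, spikes = 0.0, 0.0, []

for t, weight in events:             # one update per event only
    v *= math.exp(-(t - last_t) / tau)   # analytic decay since last event
    v += weight                          # integrate the incoming spike
    last_t = t
    if v >= threshold:                   # fire and reset
        spikes.append(t)
        v = 0.0

print("output spikes at:", spikes)
```

Between events the membrane state is advanced analytically in one step, so quiet periods cost nothing, which is the essence of the power argument.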

Ralph Etienne-Cummings from Johns Hopkins University, not directly involved in the project, believes DeepSouth will expedite neuroscience research and AI development. It offers an ideal platform for brain study and prototyping AI solutions. Additionally, the technology’s potential miniaturization could revolutionize drones and robots, enhancing their autonomy and energy efficiency.