More stories

  • Brain signals transformed into speech through implants and AI

    Researchers from Radboud University and the UMC Utrecht have succeeded in transforming brain signals into audible speech. By decoding signals from the brain through a combination of implants and AI, they were able to predict the words people wanted to say with an accuracy of 92 to 100%. Their findings are published in the Journal of Neural Engineering this month.
    The research indicates a promising development in the field of Brain-Computer Interfaces, according to lead author Julia Berezutskaya, researcher at Radboud University’s Donders Institute for Brain, Cognition and Behaviour and UMC Utrecht. Berezutskaya and colleagues at the UMC Utrecht and Radboud University used brain implants in patients with epilepsy to infer what people were saying.
    Bringing back voices
    ‘Ultimately, we hope to make this technology available to patients in a locked-in state, who are paralyzed and unable to communicate,’ says Berezutskaya. ‘These people lose the ability to move their muscles, and thus to speak. By developing a brain-computer interface, we can analyse brain activity and give them a voice again.’
    For the experiment in their new paper, the researchers asked non-paralyzed people with temporary brain implants to speak a number of words out loud while their brain activity was being measured. Berezutskaya: ‘We were then able to establish direct mapping between brain activity on the one hand, and speech on the other hand. We also used advanced artificial intelligence models to translate that brain activity directly into audible speech. That means we weren’t just able to guess what people were saying, but we could immediately transform those words into intelligible, understandable sounds. In addition, the reconstructed speech even sounded like the original speaker in their tone of voice and manner of speaking.’
    Researchers around the world are working on ways to recognize words and sentences in brain patterns. The researchers were able to reconstruct intelligible speech with relatively small datasets, showing their models can uncover the complex mapping between brain activity and speech with limited data. Crucially, they also conducted listening tests with volunteers to evaluate how identifiable the synthesized words were. The positive results from those tests indicate the technology isn’t just succeeding at identifying words correctly, but also at getting those words across audibly and understandably, just like a real voice.
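At a high level, word decoding of this kind maps a vector of brain-activity features to one of a small set of candidate words. The sketch below is entirely synthetic; the channel count, noise level and simple ridge-regression decoder are illustrative stand-ins, not the study's models or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 12 candidate words, each with a distinct "neural
# signature" across 64 electrode channels (purely synthetic data).
n_words, n_channels, n_trials = 12, 64, 20
templates = rng.normal(size=(n_words, n_channels))

# Simulated trials: template plus noise, labelled by the spoken word.
X = np.vstack([templates[w] + 0.5 * rng.normal(size=(n_trials, n_channels))
               for w in range(n_words)])
y = np.repeat(np.arange(n_words), n_trials)

# Linear decoder via ridge regression onto one-hot word labels,
# a much simpler stand-in for the advanced AI models in the study.
Y = np.eye(n_words)[y]
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

pred = np.argmax(X @ W, axis=1)
accuracy = (pred == y).mean()
print(f"decoding accuracy on training trials: {accuracy:.2f}")
```

The real system goes further by mapping brain activity to audio features rather than discrete labels, which is what allows it to synthesize intelligible speech instead of only picking a word.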
    Limitations
    ‘For now, there are still a number of limitations,’ warns Berezutskaya. ‘In these experiments, we asked participants to say twelve words out loud, and those were the words we tried to detect. In general, predicting individual words is less complicated than predicting entire sentences. In the future, large language models that are used in AI research can be beneficial. Our goal is to predict full sentences and paragraphs of what people are trying to say based on their brain activity alone. To get there, we’ll need more experiments, more advanced implants, larger datasets and advanced AI models. All these processes will still take a number of years, but it looks like we’re heading in the right direction.’

  • Making the invisible, visible: New method makes mid-infrared light detectable at room temperature

    Scientists from the University of Birmingham and the University of Cambridge have developed a new method for detecting mid-infrared (MIR) light at room temperature using quantum systems.
    The research, published today (28th August) in Nature Photonics, was conducted at the Cavendish Laboratory at the University of Cambridge and marks a significant breakthrough in the ability for scientists to gain insight into the working of chemical and biological molecules.
    In the new method using quantum systems, the team converted low-energy MIR photons into high-energy visible photons using molecular emitters. The new innovation has the capability to help scientists detect MIR and perform spectroscopy at a single-molecule level, at room temperature.
    Dr Rohit Chikkaraddy, an Assistant Professor at the University of Birmingham and lead author on the study, explained: “The bonds that maintain the distance between atoms in molecules can vibrate like springs, and these vibrations resonate at very high frequencies. These springs can be excited by mid-infrared region light which is invisible to the human eye. At room temperature, these springs are in random motion which means that a major challenge in detecting mid-infrared light is avoiding this thermal noise. Modern detectors rely on cooled semiconductor devices that are energy-intensive and bulky, but our research presents a new and exciting way to detect this light at room temperature.”
    The new approach is called MIR Vibrationally-Assisted Luminescence (MIRVAL), and uses molecules that can interact with both MIR and visible light. The team were able to assemble the molecular emitters into a very small plasmonic cavity which was resonant in both the MIR and visible ranges. They further engineered it so that the molecular vibrational states and electronic states were able to interact, resulting in an efficient transduction of MIR light into enhanced visible luminescence.
    Dr Chikkaraddy continued: “The most challenging aspect was to bring together three widely different length scales — visible wavelengths, which are hundreds of nanometres; molecular vibrations, which are less than a nanometre; and mid-infrared wavelengths, which are ten thousand nanometres — into a single platform and combine them effectively.”
    Through the creation of picocavities, incredibly small cavities that trap light and are formed by single-atom defects on the metallic facets, the researchers were able to confine light to extreme volumes below one cubic nanometre. This meant the team could confine MIR light all the way down to the scale of a single molecule.
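The energy bookkeeping behind this kind of upconversion is straightforward: photon energies add, so inverse wavelengths add, and the low-energy MIR photon shows up as a blue-shift of the visible emission. The visible pump wavelength below is an assumed example value, not a figure from the paper:

```python
# Illustrative energy arithmetic for vibration-assisted upconversion.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

lam_pump = 633e-9    # hypothetical visible pump wavelength (m)
lam_mir = 10e-6      # mid-infrared wavelength, ~10,000 nm (m)

E_pump = h * c / lam_pump
E_mir = h * c / lam_mir
E_out = E_pump + E_mir            # upconverted photon carries both energies
lam_out = h * c / E_out           # resulting (shorter) visible wavelength

print(f"MIR photon energy:  {E_mir / 1.602e-19:.3f} eV")
print(f"emitted wavelength: {lam_out * 1e9:.1f} nm")
```

The MIR quantum is only about 0.12 eV, roughly five times smaller than thermal-noise-resistant visible photons, which is why reading it out as a shift in visible luminescence sidesteps the cooled-detector problem.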
    This breakthrough could deepen understanding of complex systems, and opens the gateway to infrared-active molecular vibrations, which are typically inaccessible at the single-molecule level. MIRVAL could also prove beneficial in a number of fields beyond pure scientific research.
    Dr Chikkaraddy concluded: “MIRVAL could have a number of uses such as real-time gas sensing, medical diagnostics, astronomical surveys and quantum communication, as we can now see the vibrational fingerprint of individual molecules at MIR frequencies. The ability to detect MIR at room temperature means that it is that much easier to explore these applications and conduct further research in this field. Through further advancements, this novel method could not only find its way into practical devices that will shape the future of MIR technologies, but also unlock the ability to coherently manipulate the intricate interplay of ‘balls with springs’ atoms in molecular quantum systems.”

  • Pros and cons of ChatGPT plugin, Code Interpreter, in education, biology, health

    While West Virginia University researchers see potential in educational settings for the newest official ChatGPT plugin, called Code Interpreter, they’ve found limitations for its use by scientists who work with biological data utilizing computational methods to prioritize targeted treatment for cancer and genetic disorders.
    “Code Interpreter is a good thing and it’s helpful in an educational setting as it makes coding in the STEM fields more accessible to students,” said Gangqing “Michael” Hu, assistant professor in the Department of Microbiology, Immunology and Cell Biology at the WVU School of Medicine and director of the Bioinformatics Core. “However, it doesn’t have the features you need for bioinformatics. These are technical issues that can be overcome. Future developments of Code Interpreter are likely to extend its use to many fields such as bioinformatics, finance and economics.”
    Since its release in December 2022, the popular artificial intelligence chatbot ChatGPT has gained the attention of businesses, educators and the general public. However, it didn’t quite live up to the needs of people working in biomedical research including bioinformatics — the field where computer science meets biology — who eagerly awaited OpenAI’s Code Interpreter plugin hoping it would fill the gaps.
    Hu and his team put Code Interpreter to the test on a variety of tasks to evaluate its features. Their findings, published in Annals of Biomedical Engineering, show the plugin breaks down some of the barriers, but not all of them.
    For example, Code Interpreter makes coding, or computer programming, more accessible to people without a science background. Hu said it’s also cost-effective, sparks students’ curiosity to explore data analysis and boosts their interest in learning. He points out, though, that users will need to understand how to interpret data, recognize whether the results are accurate and know how to interact with the chatbot.
    Bioinformaticians rely on precise coding, computer software programs and internet access to store, analyze and interpret biological data, such as DNA and human genome sequences, used for advancements in modern medicine.
    Despite the need for improvements specific to bioinformatics, Hu said, Code Interpreter helps users determine whether a response is accurate or a fictitious answer presented with confidence, known as a hallucination.

  • New quantum device generates single photons and encodes information

    A new approach to quantum light emitters generates a stream of circularly polarized single photons, or particles of light, that may be useful for a range of quantum information and communication applications. A Los Alamos National Laboratory team stacked two different, atomically thin materials to realize this chiral quantum light source.
    “Our research shows that it is possible for a monolayer semiconductor to emit circularly polarized light without the help of an external magnetic field,” said Han Htoon, scientist at Los Alamos National Laboratory. “This effect has only been achieved before with high magnetic fields created by bulky superconducting magnets, by coupling quantum emitters to very complex nanoscale photonics structures or by injecting spin-polarized carriers into quantum emitters. Our proximity-effect approach has the advantage of low-cost fabrication and reliability.”
    The polarization state is a means of encoding the photon, so this achievement is an important step in the direction of quantum cryptography or quantum communication.
    “With a source to generate a stream of single photons and also introduce polarization, we have essentially combined two devices in one,” Htoon said.
    Indentation key to photoluminescence
    As described in Nature Materials, the research team worked at the Center for Integrated Nanotechnologies to stack a single-molecule-thick layer of tungsten diselenide semiconductor onto a thicker layer of nickel phosphorus trisulfide magnetic semiconductor. Xiangzhi Li, postdoctoral research associate, used atomic force microscopy to create a series of nanometer-scale indentations on the thin stack of materials. The indentations are approximately 400 nanometers in diameter, so more than 200 such indentations can fit across the width of a human hair.
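As a quick sanity check of that scale comparison (the hair width below is a typical assumed value of about 90 micrometres, not a figure from the paper):

```python
# How many 400 nm indentations fit across one human hair?
indent_diameter_m = 400e-9   # indentation diameter from the study
hair_width_m = 90e-6         # assumed typical hair width (~70-100 um range)

fits_across = hair_width_m / indent_diameter_m
print(f"~{fits_across:.0f} indentations across one hair width")
```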
    The indentations created by the atomic force microscopy tool proved useful for two effects when a laser was focused on the stack of materials. First, the indentation forms a well, or depression, in the potential energy landscape. Electrons of the tungsten diselenide monolayer fall into the depression, stimulating the emission of a stream of single photons from the well.

  • AI helps robots manipulate objects with their whole bodies

    Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box.
    Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To a robot, each spot where the box could touch the carrier’s fingers, arms or torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.
    Now, MIT researchers have found a way to simplify this process, known as contact-rich manipulation planning. They use an AI technique called smoothing, which summarizes many contact events into a smaller number of decisions, to enable even a simple algorithm to quickly identify an effective manipulation plan for the robot.
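The core idea of smoothing can be illustrated with a toy one-dimensional example. This is a generic randomized-smoothing sketch under assumed numbers, not the researchers' formulation: a contact cost that is flat almost everywhere gives a planner no gradient signal, while its noise-averaged version decreases steadily toward the object.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a contact-rich cost: the cost drops only inside a
# narrow "contact" band around the object at x = 2. Outside the band
# the cost is flat, so its gradient offers no guidance to a planner.
def contact_cost(x):
    return 0.0 if abs(x - 2.0) < 0.1 else 1.0

# Smoothing: average the cost over random perturbations of the decision.
# Many discrete contact outcomes are summarized into one smooth value
# that now varies gradually as x approaches the object.
eps = rng.normal(size=20_000)
def smoothed_cost(x, sigma=0.5):
    return np.mean([contact_cost(x + sigma * e) for e in eps])

for x in (1.0, 1.5, 2.0):
    print(f"x={x}: raw cost {contact_cost(x):.0f}, "
          f"smoothed cost {smoothed_cost(x):.3f}")
```

Because the smoothed cost decreases steadily toward the contact region, even simple gradient-based search can make progress, which is the effect the researchers exploit at far larger scale.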
    While still in its early days, this method could potentially enable factories to use smaller, mobile robots that can manipulate objects with their entire arms or bodies, rather than large robotic arms that can only grasp using fingertips. This may help reduce energy consumption and drive down costs. In addition, this technique could be useful in robots sent on exploration missions to Mars or other solar system bodies, since they could adapt to the environment quickly using only an onboard computer.
    “Rather than thinking about this as a black-box system, if we can leverage the structure of these kinds of robotic systems using models, there is an opportunity to accelerate the whole procedure of trying to make these decisions and come up with contact-rich plans,” says H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this technique.
    Joining Suh on the paper are co-lead author Tao Pang PhD ’23, a roboticist at Boston Dynamics AI Institute; Lujie Yang, an EECS graduate student; and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research appears this week in IEEE Transactions on Robotics.
    Learning about learning
    Reinforcement learning is a machine-learning technique where an agent, like a robot, learns to complete a task through trial and error with a reward for getting closer to a goal. Researchers say this type of learning takes a black-box approach because the system must learn everything about the world through trial and error.
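That trial-and-error loop can be sketched with a minimal two-action example. The action names, reward probabilities and epsilon-greedy rule below are generic illustrations, not the systems studied in the paper:

```python
import random

random.seed(0)

# Two candidate actions with unknown success rates (hidden from the agent).
true_reward = {"push": 0.2, "grasp": 0.8}
totals = {a: 0.0 for a in true_reward}   # accumulated reward per action
counts = {a: 0 for a in true_reward}     # times each action was tried

for step in range(2000):
    if random.random() < 0.1 or step < 2:     # occasionally explore
        action = random.choice(list(true_reward))
    else:                                      # otherwise exploit best estimate
        action = max(totals, key=lambda a: totals[a] / max(counts[a], 1))
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    totals[action] += reward
    counts[action] += 1

best = max(totals, key=lambda a: totals[a] / max(counts[a], 1))
print(f"learned best action: {best}")
```

The agent never sees the true reward probabilities; its averages of observed rewards are the "everything about the world" it must learn through interaction, which is what makes the approach black-box.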

  • ChatGPT shows limited ability to recommend guidelines-based cancer treatments

    For many patients, the internet serves as a powerful tool for self-education on medical topics. With ChatGPT now at patients’ fingertips, researchers from Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, assessed how consistently the artificial intelligence chatbot provides recommendations for cancer treatment that align with National Comprehensive Cancer Network (NCCN) guidelines. Their findings, published in JAMA Oncology, show that in approximately one-third of cases, ChatGPT 3.5 provided an inappropriate (“non-concordant”) recommendation, highlighting the need for awareness of the technology’s limitations.
    “Patients should feel empowered to educate themselves about their medical conditions, but they should always discuss with a clinician, and resources on the Internet should not be consulted in isolation,” said corresponding author Danielle Bitterman, MD, of the Department of Radiation Oncology and the Artificial Intelligence in Medicine (AIM) Program of Mass General Brigham. “ChatGPT responses can sound a lot like a human and can be quite convincing. But, when it comes to clinical decision-making, there are so many subtleties for every patient’s unique situation. A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide.”
    The emergence of artificial intelligence tools in health has been groundbreaking and has the potential to positively reshape the continuum of care. Mass General Brigham, as one of the nation’s top integrated academic health systems and largest innovation enterprises, is leading the way in conducting rigorous research on new and emerging technologies to inform the responsible incorporation of AI into care delivery, workforce support, and administrative processes.
    Although medical decision-making can be influenced by many factors, Bitterman and colleagues chose to evaluate the extent to which ChatGPT’s recommendations aligned with the NCCN guidelines, which are used by physicians at institutions across the country. They focused on the three most common cancers (breast, prostate and lung cancer) and prompted ChatGPT to provide a treatment approach for each cancer based on the severity of the disease. In total, the researchers included 26 unique diagnosis descriptions and used four slightly different prompts to ask ChatGPT to provide a treatment approach, generating a total of 104 prompts.
    Nearly all responses (98 percent) included at least one treatment approach that agreed with NCCN guidelines. However, the researchers found that 34 percent of these responses also included one or more non-concordant recommendations, which were sometimes difficult to detect amidst otherwise sound guidance. A non-concordant treatment recommendation was defined as one that was only partially correct; for example, for a locally advanced breast cancer, a recommendation of surgery alone, without mention of another therapy modality. Notably, complete agreement in scoring only occurred in 62 percent of cases, underscoring both the complexity of the NCCN guidelines themselves and the extent to which ChatGPT’s output could be vague or difficult to interpret.
    In 12.5 percent of cases, ChatGPT produced “hallucinations,” or a treatment recommendation entirely absent from NCCN guidelines. These included recommendations of novel therapies, or curative therapies for non-curative cancers. The authors emphasized that this form of misinformation can incorrectly set patients’ expectations about treatment and potentially impact the clinician-patient relationship.
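The headline figures follow from simple arithmetic. The counts below are reconstructed from the study's rounded percentages, so treat them as approximate rather than as reported tallies:

```python
# Back-of-envelope counts implied by the reported percentages.
diagnoses = 26
prompt_variants = 4
total_prompts = diagnoses * prompt_variants
print(f"total prompts: {total_prompts}")

hallucination_rate = 0.125   # responses with a recommendation absent from NCCN
print(f"responses with hallucinations: ~{hallucination_rate * total_prompts:.0f}")

non_concordant_rate = 0.34   # responses with >=1 non-concordant recommendation
print(f"responses with non-concordant advice: ~{non_concordant_rate * total_prompts:.0f}")
```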
    Going forward, the researchers are exploring how well both patients and clinicians can distinguish between medical advice written by a clinician versus a large language model (LLM) like ChatGPT. They are also prompting ChatGPT with more detailed clinical cases to further evaluate its clinical knowledge.
    The authors used GPT-3.5-turbo-0301, one of the largest models available at the time they conducted the study and the model class that is currently used in the open-access version of ChatGPT (a newer version, GPT-4, is only available with the paid subscription). They also used the 2021 NCCN guidelines, because GPT-3.5-turbo-0301 was developed using data up to September 2021. While results may vary if other LLMs and/or clinical guidelines are used, the researchers emphasize that many LLMs are similar in the way they are built and the limitations they possess.
    “It is an open research question as to the extent LLMs provide consistent logical responses as oftentimes ‘hallucinations’ are observed,” said first author Shan Chen, MS, of the AIM Program. “Users are likely to seek answers from the LLMs to educate themselves on health-related topics — similarly to how Google searches have been used. At the same time, we need to raise awareness that LLMs are not the equivalent of trained medical professionals.”

  • Scientists invent micrometers-thin battery charged by saline solution that could power smart contact lenses

    Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a flexible battery as thin as a human cornea, which stores electricity when it is immersed in saline solution, and which could one day power smart contact lenses.
    Smart contact lenses are high-tech contact lenses capable of displaying visible information on our corneas and can be used to access augmented reality. Current uses include helping to correct vision, monitoring wearers’ health, and flagging and treating diseases for people with chronic health conditions such as diabetes and glaucoma. In the future, smart contact lenses could be developed to record and transmit everything a wearer sees and hears to cloud-based data storage.
    However, to reach this future potential a safe and suitable battery needs to be developed to power them. Existing rechargeable batteries rely on wires or induction coils that contain metal and are unsuitable for use in the human eye, as they are uncomfortable and present risks to the user.
    The NTU-developed battery is made of biocompatible materials and does not contain wires or toxic heavy metals, such as those in lithium-ion batteries or wireless charging systems. It has a glucose-based coating that reacts with the sodium and chloride ions in the saline solution surrounding it, while the water the battery contains serves as the ‘wire’ or ‘circuitry’ for electricity to be generated.
    The battery could also be powered by human tears, as they contain sodium and potassium ions at a lower concentration. Testing the current battery with a simulated tear solution, the researchers showed that the battery’s life would be extended by an additional hour for every twelve-hour wearing cycle it is used. The battery can also be charged conventionally by an external power supply.
    Associate Professor Lee Seok Woo, from NTU’s School of Electrical and Electronic Engineering (EEE), who led the study, said: “This research began with a simple question: could contact lens batteries be recharged with our tears? There were similar examples for self-charging batteries, such as those for wearable technology that are powered by human perspiration.
    “However, previous techniques for lens batteries were not perfect as one side of the battery electrode was charged and the other was not. Our approach can charge both electrodes of a battery through a unique combination of enzymatic reaction and self-reduction reaction. Besides the charging mechanism, it relies on just glucose and water to generate electricity, both of which are safe to humans and would be less harmful to the environment when disposed, compared to conventional batteries.”
    Co-first author Dr Yun Jeonghun, a research fellow from NTU’s EEE, said: “The most common battery charging system for smart contact lenses requires metal electrodes in the lens, which are harmful if they are exposed to the naked human eye. Meanwhile, another mode of powering lenses, induction charging, requires a coil to be in the lens to transmit power, much like a wireless charging pad for a smartphone. Our tear-based battery eliminates the two potential concerns that these two methods pose, while also freeing up space for further innovation in the development of smart contact lenses.”

  • The pressure is real for mums managing their children’s digital use

    Parents are spending considerable amounts of energy thinking about and mitigating the risks associated with their kids using mobile phones and the internet.
    The impacts of too much screen time on children’s physical and mental health, development and education are common concerns among parents.
    New research by the University of South Australia suggests that mums in particular are experiencing a “relentless and intense” mental load linked to their children’s digital use.
    UniSA researcher Dr Fae Heaselgrave calls this additional burden “digital care work,” which involves mothers monitoring their children’s digital activity, familiarising themselves with social media platforms and coming up with strategies to manage their kids’ media use.
    “At a societal level, we already know that the use of mobile phones, laptops, and computers in the home is more prevalent than ever. Families in Australia own on average almost eight digital devices, with children owning up to three devices each,” she says.
    “What we don’t know as much about is the effect children’s digital media use has on a mother’s role. Digital care work — which is an extension of the wider unpaid care role mothers provide in the home — is more often the domain of women, as mothers tend to be the primary caregiver.
    “This means the increased use of digital devices is having a bigger impact on mums in terms of demanding more time, energy and mental and cognitive work, which can also affect their career choices and paid work patterns.”
    In a series of interviews with Adelaide mothers of children aged 9 to 16, Dr Heaselgrave found that digital care work intensifies modern mothering by requiring an additional investment of time and energy to monitor children’s digital media use.