More stories

  • Computer scientists use AI to accelerate computing speed by thousands of times

    A team of computer scientists at the University of Massachusetts Amherst, led by Emery Berger, recently unveiled a prize-winning Python profiler called Scalene. Programs written in Python are notoriously slow — up to 60,000 times slower than code written in other programming languages — and Scalene efficiently identifies exactly where Python is lagging, allowing programmers to troubleshoot and streamline their code for higher performance.
    There are many different programming languages — C++, Fortran and Java are some of the more well-known ones — but, in recent years, one language has become nearly ubiquitous: Python.
    “Python is a ‘batteries-included’ language,” says Berger, who is a professor of computer science in the Manning College of Information and Computer Sciences at UMass Amherst, “and it has become very popular in the age of data science and machine learning because it is so user-friendly.” The language comes with libraries of easy-to-use tools and has an intuitive and readable syntax, allowing users to quickly begin writing Python code.
    “But Python is crazy inefficient,” says Berger. “It easily runs between 100 to 1,000 times slower than other languages, and some tasks might take 60,000 times as long in Python.”
    Programmers have long known this, and to help fight Python’s inefficiency, they can use tools called “profilers.” Profilers run programs and pinpoint which parts are slow and why.
    Unfortunately, existing profilers do surprisingly little to help Python programmers. At best, they indicate that a region of code is slow, and leave it to the programmer to figure out what, if anything, can be done.
    Berger’s team, which included UMass computer science graduate students Sam Stern and Juan Altmayer Pizzorno, built Scalene to be the first profiler that not only precisely identifies inefficiencies in Python code, but also uses AI to suggest how the code can be improved.
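    Scalene itself is run from the command line (for example, `scalene program.py`); the article doesn't show its output, but the basic job any profiler does can be sketched with Python's standard-library `cProfile`: run the code, then attribute runtime to individual functions. The function names below are invented for illustration.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Pure-Python loop: the interpreter dispatches every add and multiply,
    # which is where much of Python's famous overhead lives.
    total = 0
    for i in range(n):
        total += i * i
    return total

def tiny_setup():
    # A cheap function, for contrast in the profile.
    return list(range(10))

profiler = cProfile.Profile()
profiler.enable()
slow_sum(200_000)
tiny_setup()
profiler.disable()

# Print the top entries sorted by cumulative time; a profiler's job is
# to make the hot spot (slow_sum) jump out of this table.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print("slow_sum" in report)
```

    Scalene goes well beyond this sketch: it profiles line by line, separates Python time from native time, tracks memory, and, as described above, can ask an AI model to propose concrete optimizations.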

  • Quantum computer unveils atomic dynamics of light-sensitive molecules

    Researchers at Duke University have implemented a quantum-based method to observe a quantum effect in the way light-absorbing molecules interact with incoming photons. Known as a conical intersection, the effect puts limitations on the paths molecules can take to change between different configurations.
    The observation method makes use of a quantum simulator, developed from research in quantum computing, and addresses a long-standing, fundamental question in chemistry critical to processes such as photosynthesis, vision and photocatalysis. It is also an example of how advances in quantum computing are being used to investigate fundamental science.
    The results appear online August 28 in the journal Nature Chemistry.
    “As soon as quantum chemists ran into these conical intersection phenomena, the mathematical theory said that there were certain molecular arrangements that could not be reached from one to the other,” said Kenneth Brown, the Michael J. Fitzpatrick Distinguished Professor of Engineering at Duke. “That constraint, called a geometric phase, isn’t impossible to measure, but nobody has been able to do it. Using a quantum simulator gave us a way to see it in its natural quantum existence.”
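    The “geometric phase” Brown mentions is the Berry phase: the extra phase the electronic wavefunction ψ acquires when the nuclear coordinates R are carried around a closed loop C. As a textbook statement (not a formula from this study), a loop that encircles a conical intersection yields a phase of π, flipping the sign of the wavefunction:

```latex
\gamma \;=\; i \oint_C \big\langle \psi(\mathbf{R}) \big| \nabla_{\mathbf{R}}\, \psi(\mathbf{R}) \big\rangle \cdot \mathrm{d}\mathbf{R} \;=\; \pi
```

    That sign flip is what forbids certain interconversion pathways, and it is the signature the quantum simulator was built to observe.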
    Conical intersections, which can be visualized as a mountain peak touching the tip of its reflection from above, govern the motion of electrons between energy states. The bottom half of the conical intersection represents the energy states and physical locations of an unexcited molecule in its ground state. The top half represents the same molecule with its electrons excited, having absorbed energy from an incoming light particle.
    The molecule can’t stay in the top state — its electrons are out of position relative to their host atoms. To return to the more favorable lower energy state, the molecule’s atoms begin rearranging themselves to meet the electrons. The point where the two mountains meet — the conical intersection — represents an inflection point. The atoms can either fail to get to the other side by readjusting to their original state, dumping excess energy in the molecules around them in the process, or they can successfully make the switch.
    Because the atoms and electrons are moving so fast, however, they exhibit quantum effects. Rather than being in any one shape — at any one place on the mountain — at any given time, the molecule is actually in many shapes at once. One could think of all these possible locations as being represented by a blanket wrapped around a portion of the mountainous landscape.

  • Brain signals transformed into speech through implants and AI

    Researchers from Radboud University and the UMC Utrecht have succeeded in transforming brain signals into audible speech. By decoding signals from the brain through a combination of implants and AI, they were able to predict the words people wanted to say with an accuracy of 92 to 100%. Their findings are published in the Journal of Neural Engineering this month.
    The research indicates a promising development in the field of Brain-Computer Interfaces, according to lead author Julia Berezutskaya, researcher at Radboud University’s Donders Institute for Brain, Cognition and Behaviour and UMC Utrecht. Berezutskaya and colleagues at the UMC Utrecht and Radboud University used brain implants in patients with epilepsy to infer what people were saying.
    Bringing back voices
    ‘Ultimately, we hope to make this technology available to patients in a locked-in state, who are paralyzed and unable to communicate,’ says Berezutskaya. ‘These people lose the ability to move their muscles, and thus to speak. By developing a brain-computer interface, we can analyse brain activity and give them a voice again.’
    For the experiment in their new paper, the researchers asked non-paralyzed people with temporary brain implants to speak a number of words out loud while their brain activity was being measured. Berezutskaya: ‘We were then able to establish direct mapping between brain activity on the one hand, and speech on the other hand. We also used advanced artificial intelligence models to translate that brain activity directly into audible speech. That means we weren’t just able to guess what people were saying, but we could immediately transform those words into intelligible, understandable sounds. In addition, the reconstructed speech even sounded like the original speaker in their tone of voice and manner of speaking.’
    Researchers around the world are working on ways to recognize words and sentences in brain patterns. The researchers were able to reconstruct intelligible speech with relatively small datasets, showing their models can uncover the complex mapping between brain activity and speech with limited data. Crucially, they also conducted listening tests with volunteers to evaluate how identifiable the synthesized words were. The positive results from those tests indicate the technology isn’t just succeeding at identifying words correctly, but also at getting those words across audibly and understandably, just like a real voice.
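    The paper's models are far more sophisticated, but the core classification task (twelve candidate words, pick the one whose neural signature best matches a trial) can be sketched with a toy nearest-centroid decoder on simulated data. Every number, word, and feature here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the experiment: 12 candidate words, each associated
# with a characteristic pattern of simulated neural features.
WORDS = ["yes", "no", "up", "down", "left", "right",
         "stop", "go", "more", "less", "open", "close"]
N_FEATURES = 64

# Hypothetical "template" brain activity per word.
templates = rng.normal(size=(len(WORDS), N_FEATURES))

def decode(trial, templates):
    """Nearest-centroid decoding: pick the word whose template is closest."""
    distances = np.linalg.norm(templates - trial, axis=1)
    return int(np.argmin(distances))

# Simulate noisy trials around the templates and measure decoding accuracy.
correct = 0
n_trials = 100
for _ in range(n_trials):
    true_word = rng.integers(len(WORDS))
    trial = templates[true_word] + 0.3 * rng.normal(size=N_FEATURES)
    if decode(trial, templates) == true_word:
        correct += 1
accuracy = correct / n_trials
print(accuracy)
```

    The real system additionally synthesizes audible speech from the decoded activity, which is a regression problem on top of this classification sketch.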
    Limitations
    ‘For now, there’s still a number of limitations,’ warns Berezutskaya. ‘In these experiments, we asked participants to say twelve words out loud, and those were the words we tried to detect. In general, predicting individual words is less complicated than predicting entire sentences. In the future, large language models that are used in AI research can be beneficial. Our goal is to predict full sentences and paragraphs of what people are trying to say based on their brain activity alone. To get there, we’ll need more experiments, more advanced implants, larger datasets and advanced AI models. All these processes will still take a number of years, but it looks like we’re heading in the right direction.’

  • Making the invisible, visible: New method makes mid-infrared light detectable at room temperature

    Scientists from the University of Birmingham and the University of Cambridge have developed a new method for detecting mid-infrared (MIR) light at room temperature using quantum systems.
    The research, published today (28th August) in Nature Photonics, was conducted at the Cavendish Laboratory at the University of Cambridge and marks a significant breakthrough in the ability for scientists to gain insight into the working of chemical and biological molecules.
    In the new method, which uses quantum systems, the team converted low-energy MIR photons into high-energy visible photons using molecular emitters. The innovation could help scientists detect MIR light and perform spectroscopy at the single-molecule level, at room temperature.
    Dr Rohit Chikkaraddy, an Assistant Professor at the University of Birmingham, and lead author on the study explained: “The bonds that maintain the distance between atoms in molecules can vibrate like springs, and these vibrations resonate at very high frequencies. These springs can be excited by mid-infrared region light which is invisible to the human eye. At room temperature, these springs are in random motion which means that a major challenge in detecting mid-infrared light is avoiding this thermal noise. Modern detectors rely on cooled semiconductor devices that are energy-intensive and bulky, but our research presents a new and exciting way to detect this light at room temperature.”
    The new approach, called MIR Vibrationally-Assisted Luminescence (MIRVAL), uses molecules that are active at both MIR and visible wavelengths. The team assembled the molecular emitters into a very small plasmonic cavity resonant in both the MIR and visible ranges, then engineered it so that the molecular vibrational states and electronic states could interact, resulting in efficient transduction of MIR light into enhanced visible luminescence.
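    The energy bookkeeping behind this transduction is simple anti-Stokes arithmetic: the emitted visible photon carries the pump photon's energy plus the vibrational quantum deposited by the MIR light. The pump wavelength below is an assumption for illustration; only the "ten thousand nanometre" MIR figure comes from the article.

```python
# Vibrationally-assisted upconversion, illustrative numbers only:
# a visible pump photon picks up a quantum of molecular vibration
# (excited by MIR light) and emerges blue-shifted.
PLANCK_C_EV_NM = 1239.84  # h*c in eV·nm

def photon_energy_ev(wavelength_nm):
    return PLANCK_C_EV_NM / wavelength_nm

pump_nm = 633.0      # assumed visible pump wavelength (not from the paper)
mir_nm = 10_000.0    # "ten thousand nanometres" MIR, as in the article

# Anti-Stokes emission: E_out = E_pump + E_vibration
e_out = photon_energy_ev(pump_nm) + photon_energy_ev(mir_nm)
out_nm = PLANCK_C_EV_NM / e_out
print(round(out_nm, 1))  # upconverted photon, still in the visible range
```

    The blue shift of the emitted light is the visible, room-temperature fingerprint of an otherwise invisible MIR photon.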
    Dr Chikkaraddy continued: “The most challenging aspect was to bring together three widely different length scales — visible wavelengths, which are hundreds of nanometres; molecular vibrations, which are less than a nanometre; and mid-infrared wavelengths, which are ten thousand nanometres — into a single platform and combine them effectively.”
    Through the creation of picocavities, incredibly small light-trapping cavities formed by single-atom defects on the metallic facets, the researchers were able to achieve an extreme light-confinement volume below one cubic nanometre. This meant the team could confine MIR light all the way down to the scale of a single molecule.
    This breakthrough could deepen understanding of complex systems and opens the gateway to infrared-active molecular vibrations, which are typically inaccessible at the single-molecule level. MIRVAL could also prove beneficial in a number of fields beyond pure scientific research.
    Dr Chikkaraddy concluded: “MIRVAL could have a number of uses such as real-time gas sensing, medical diagnostics, astronomical surveys and quantum communication, as we can now see the vibrational fingerprint of individual molecules at MIR frequencies. The ability to detect MIR at room temperature means that it is that much easier to explore these applications and conduct further research in this field. Through further advancements, this novel method could not only find its way into practical devices that will shape the future of MIR technologies, but also unlock the ability to coherently manipulate the intricate interplay of ‘balls with springs’ atoms in molecular quantum systems.”

  • Chemists turned plastic waste into tiny bars of soap

    Luis Melecio-Zambrano is the summer 2023 science writing intern at Science News. They are finishing their master’s degree in science communication from the University of California, Santa Cruz, where they have reported on issues of environmental justice and agriculture.

  • Pros and cons of ChatGPT plugin, Code Interpreter, in education, biology, health

    While West Virginia University researchers see potential in educational settings for the newest official ChatGPT plugin, called Code Interpreter, they’ve found limitations for its use by scientists who work with biological data utilizing computational methods to prioritize targeted treatment for cancer and genetic disorders.
    “Code Interpreter is a good thing and it’s helpful in an educational setting as it makes coding in the STEM fields more accessible to students,” said Gangqing “Michael” Hu, assistant professor in the Department of Microbiology, Immunology and Cell Biology at the WVU School of Medicine and director of the Bioinformatics Core. “However, it doesn’t have the features you need for bioinformatics. These are technical issues that can be overcome. Future developments of Code Interpreter are likely to extend its use to many fields such as bioinformatics, finance and economics.”
    Since its release in December 2022, the popular artificial intelligence chatbot ChatGPT has gained the attention of businesses, educators and the general public. However, it didn’t quite live up to the needs of people working in biomedical research including bioinformatics — the field where computer science meets biology — who eagerly awaited OpenAI’s Code Interpreter plugin hoping it would fill the gaps.
    Hu and his team put Code Interpreter to the test on a variety of tasks to evaluate its features. Their findings, published in Annals of Biomedical Engineering, show the plugin breaks down some of the barriers, but not all of them.
    For example, Code Interpreter makes coding, or computer programming, more accessible to people without a science background. Hu said it’s also cost-effective, sparks students’ curiosity to explore data analysis and boosts their interest in learning. He points out, though, that users will need to understand how to interpret data, recognize whether the results are accurate and know how to interact with the chatbot.
    Bioinformaticians rely on precise coding, computer software programs and internet access to store, analyze and interpret biological data, such as DNA and the human genome, used for advances in modern medicine.
    Despite the need for improvements specific to bioinformatics, Hu said, Code Interpreter helps users determine whether a response is accurate or if it is a fictitious answer presented with confidence, known as a hallucination.

  • New quantum device generates single photons and encodes information

    A new approach to quantum light emitters generates a stream of circularly polarized single photons, or particles of light, that may be useful for a range of quantum information and communication applications. A Los Alamos National Laboratory team stacked two different, atomically thin materials to realize this chiral quantum light source.
    “Our research shows that it is possible for a monolayer semiconductor to emit circularly polarized light without the help of an external magnetic field,” said Han Htoon, scientist at Los Alamos National Laboratory. “This effect has only been achieved before with high magnetic fields created by bulky superconducting magnets, by coupling quantum emitters to very complex nanoscale photonics structures or by injecting spin-polarized carriers into quantum emitters. Our proximity-effect approach has the advantage of low-cost fabrication and reliability.”
    The polarization state is a means of encoding the photon, so this achievement is an important step in the direction of quantum cryptography or quantum communication.
    “With a source to generate a stream of single photons and also introduce polarization, we have essentially combined two devices in one,” Htoon said.
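    Why does circular polarization amount to encoding? In the standard Jones-vector picture (textbook optics, not code from the study), left- and right-circular states are orthogonal superpositions of horizontal and vertical polarization, so each photon can carry one bit, or one qubit, in its polarization:

```python
import numpy as np

# Jones-vector sketch of polarization qubits (standard optics convention):
# horizontal/vertical basis states, and their circular superpositions.
H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
L = (H + 1j * V) / np.sqrt(2)   # left-circular polarization
R = (H - 1j * V) / np.sqrt(2)   # right-circular polarization

# Orthogonal circular states are perfectly distinguishable in principle,
# which is what makes them usable for encoding information per photon.
overlap = abs(np.vdot(L, R))
print(round(overlap, 12))  # orthogonal states: overlap is 0
```

    A source that emits single photons already prepared in one circular state, as described above, therefore combines generation and encoding in one device.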
    Indentation key to photoluminescence
    As described in Nature Materials, the research team worked at the Center for Integrated Nanotechnologies to stack a single-molecule-thick layer of tungsten diselenide semiconductor onto a thicker layer of nickel phosphorus trisulfide magnetic semiconductor. Xiangzhi Li, postdoctoral research associate, used atomic force microscopy to create a series of nanometer-scale indentations on the thin stack of materials. The indentations are approximately 400 nanometers in diameter, so over 200 of such indents can easily be fit across the width of a human hair.
    The indentations created by the atomic microscopy tool proved useful for two effects when a laser was focused on the stack of materials. First, the indentation forms a well, or depression, in the potential energy landscape. Electrons of the tungsten diselenide monolayer fall into the depression. That stimulates the emission of a stream of single photons from the well. More

  • AI helps robots manipulate objects with their whole bodies

    Imagine you want to carry a large, heavy box up a flight of stairs. You might spread your fingers out and lift that box with both hands, then hold it on top of your forearms and balance it against your chest, using your whole body to manipulate the box.
    Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To a robot, each spot where the box could touch the carrier’s fingers, arms or torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable.
    Now, MIT researchers have found a way to simplify this process, known as contact-rich manipulation planning. They use an AI technique called smoothing, which summarizes many contact events into a smaller number of decisions, enabling even a simple algorithm to quickly identify an effective manipulation plan for the robot.
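    The paper develops smoothing rigorously for contact dynamics; as a toy illustration of the underlying idea only, randomized smoothing replaces a hard, non-differentiable contact event with its average under noise, turning a cliff into a slope an optimizer can follow. The reward function and numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy non-smooth "contact" reward: zero until the gripper position x
# touches the object at x >= 1.0 (a hard contact discontinuity).
def contact_reward(x):
    return np.where(x >= 1.0, 1.0, 0.0)

def smoothed_reward(x, sigma=0.2, n_samples=2000):
    """Randomized smoothing: average the reward under Gaussian noise.
    The hard step becomes a gentle slope with a usable gradient."""
    noise = rng.normal(scale=sigma, size=n_samples)
    return contact_reward(x + noise).mean()

# The raw reward gives no signal just short of contact at x = 0.9;
# the smoothed reward is already nonzero there, pointing the way.
raw = contact_reward(0.9)
smooth = smoothed_reward(0.9)
print(float(raw), round(smooth, 2))
```

    Summarizing billions of individual contact events into one smooth function is what lets a comparatively simple planner make progress where exhaustive reasoning is intractable.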
    While still in its early days, this method could potentially enable factories to use smaller, mobile robots that can manipulate objects with their entire arms or bodies, rather than large robotic arms that can only grasp using fingertips. This may help reduce energy consumption and drive down costs. In addition, this technique could be useful in robots sent on exploration missions to Mars or other solar system bodies, since they could adapt to the environment quickly using only an onboard computer.
    “Rather than thinking about this as a black-box system, if we can leverage the structure of these kinds of robotic systems using models, there is an opportunity to accelerate the whole procedure of trying to make these decisions and come up with contact-rich plans,” says H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this technique.
    Joining Suh on the paper are co-lead author Tao Pang PhD ’23, a roboticist at Boston Dynamics AI Institute; Lujie Yang, an EECS graduate student; and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research appears this week in IEEE Transactions on Robotics.
    Learning about learning
    Reinforcement learning is a machine-learning technique where an agent, like a robot, learns to complete a task through trial and error with a reward for getting closer to a goal. Researchers say this type of learning takes a black-box approach because the system must learn everything about the world through trial and error.