More stories


    Scientists use A.I.-generated images to map visual functions in the brain

    Researchers at Weill Cornell Medicine, Cornell Tech and Cornell’s Ithaca campus have demonstrated the use of AI-selected natural images and AI-generated synthetic images as neuroscientific tools for probing the visual processing areas of the brain. The goal is to apply a data-driven approach to understand how vision is organized while potentially removing biases that may arise when looking at responses to a more limited set of researcher-selected images.
    In the study, published Oct. 23 in Communications Biology, the researchers had volunteers look at images that had been selected or generated based on an AI model of the human visual system. The images were predicted to maximally activate several visual processing areas. Using functional magnetic resonance imaging (fMRI) to record the brain activity of the volunteers, the researchers found that the images did activate the target areas significantly better than control images.
    The researchers also showed that they could use this image-response data to tune their vision model for individual volunteers, so that images generated to be maximally activating for a particular individual worked better than images generated based on a general model.
    “We think this is a promising new approach to study the neuroscience of vision,” said study senior author Dr. Amy Kuceyeski, a professor of mathematics in radiology and of mathematics in neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine.
    The study was a collaboration with the laboratory of Dr. Mert Sabuncu, a professor of electrical and computer engineering at Cornell Engineering and Cornell Tech, and of electrical engineering in radiology at Weill Cornell Medicine. The study’s first author, Dr. Zijin Gu, was a doctoral student co-mentored by Dr. Sabuncu and Dr. Kuceyeski at the time of the study.
    Making an accurate model of the human visual system, in part by mapping brain responses to specific images, is one of the more ambitious goals of modern neuroscience. Researchers have found, for example, that one visual processing region may activate strongly in response to an image of a face whereas another may respond to a landscape. Scientists must rely mainly on non-invasive methods in pursuit of this goal, given the risk and difficulty of recording brain activity directly with implanted electrodes. The preferred non-invasive method is fMRI, which essentially records changes in blood flow in small vessels of the brain — an indirect measure of brain activity — as subjects are exposed to sensory stimuli or otherwise perform cognitive or physical tasks. An fMRI machine can read out these tiny changes in three dimensions across the brain, at a resolution on the order of cubic millimeters.
    For their own studies, Dr. Kuceyeski and Dr. Sabuncu and their teams used an existing dataset comprising tens of thousands of natural images, with corresponding fMRI responses from human subjects, to train an AI-type system called an artificial neural network (ANN) to model the human brain’s visual processing system. They then used this model to predict which images, across the dataset, should maximally activate several targeted vision areas of the brain. They also coupled the model with an AI-based image generator to generate synthetic images to accomplish the same task.
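In outline, the selection step works like this — a schematic sketch, not the study's actual network; the random "features" and "weights" below stand in for a trained encoder:

```python
import numpy as np

# Schematic sketch: a model scores every image in a dataset by its
# predicted activation of a target visual area, and the top scorers
# become the stimuli shown to subjects in the scanner.
rng = np.random.default_rng(0)
n_images, n_features = 1000, 64
image_features = rng.normal(size=(n_images, n_features))
region_weights = rng.normal(size=n_features)  # hypothetical trained readout

# Predicted fMRI response of the target region to each image
scores = image_features @ region_weights

order = np.argsort(scores)
max_activators = order[-10:]                                   # predicted maximal activators
avg_activators = order[n_images // 2 - 5 : n_images // 2 + 5]  # "average activator" controls
```

    In the study itself, the encoder is an ANN trained on tens of thousands of image–fMRI pairs, and a generative model plays the analogous role for the synthetic images.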

    “Our general idea here has been to map and model the visual system in a systematic, unbiased way, in principle even using images that a person normally wouldn’t encounter,” Dr. Kuceyeski said.
    The researchers enrolled six volunteers and recorded their fMRI responses to these images, focusing on the responses in several visual processing areas. The results showed that, for both the natural images and the synthetic images, the predicted maximal activator images, on average across the subjects, did activate the targeted brain regions significantly more than a set of images that were selected or generated to be only average activators. This supports the general validity of the team’s ANN-based model and suggests that even synthetic images may be useful as probes for testing and improving such models.
    In a follow-on experiment, the team used the image and fMRI-response data from the first session to create separate ANN-based visual system models for each of the six subjects. They then used these individualized models to select or generate predicted maximal-activator images for each subject. The fMRI responses to these images showed that, at least for the synthetic images, there was greater activation of the targeted visual region, a face-processing region called FFA1, compared to the responses to images based on the group model. This result suggests that AI and fMRI can be useful for individualized visual-system modeling, for example to study differences in visual system organization across populations.
    The researchers are now running similar experiments using a more advanced version of the image generator, called Stable Diffusion.
    The same general approach could be useful in studying other senses such as hearing, they noted.
    Dr. Kuceyeski also hopes ultimately to study the therapeutic potential of this approach.
    “In principle, we could alter the connectivity between two parts of the brain using specifically designed stimuli, for example to weaken a connection that causes excess anxiety,” she said.


    2D material reshapes 3D electronics for AI hardware

    Multifunctional computer chips have evolved to do more with integrated sensors, processors, memory and other specialized components. However, as chips have expanded, the time required to move information between functional components has also grown.
    “Think of it like building a house,” said Sang-Hoon Bae, an assistant professor of mechanical engineering and materials science at the McKelvey School of Engineering at Washington University in St. Louis. “You build out laterally and up vertically to get more function, more room to do more specialized activities, but then you have to spend more time moving or communicating between rooms.”
    To address this challenge, Bae and a team of international collaborators, including researchers from the Massachusetts Institute of Technology, Yonsei University, Inha University, Georgia Institute of Technology and the University of Notre Dame, demonstrated monolithic 3D integration of layered 2D material into novel processing hardware for artificial intelligence (AI) computing. They envision that their new approach will not only provide a material-level solution for fully integrating many functions into a single, small electronic chip, but also pave the way for advanced AI computing. Their work was published Nov. 27 in Nature Materials, where it was selected as a front cover article.
    The team’s monolithic 3D-integrated chip offers advantages over existing laterally integrated computer chips. The device contains six atomically thin 2D layers, each with its own function, and achieves significantly reduced processing time, power consumption, latency and footprint. This is accomplished through tightly packing the processing layers to ensure dense interlayer connectivity. As a result, the hardware offers unprecedented efficiency and performance in AI computing tasks.
    This discovery offers a novel solution to integrate electronics and also opens the door to a new era of multifunctional computing hardware. With ultimate parallelism at its core, this technology could dramatically expand the capabilities of AI systems, enabling them to handle complex tasks with lightning speed and exceptional accuracy, Bae said.
    “Monolithic 3D integration has the potential to reshape the entire electronics and computing industry by enabling the development of more compact, powerful and energy-efficient devices,” Bae said. “Atomically thin 2D materials are ideal for this, and my collaborators and I will continue improving this material until we can ultimately integrate all functional layers on a single chip.”
    Bae said these devices also are more flexible and functional, making them suitable for more applications.
    “From autonomous vehicles to medical diagnostics and data centers, the applications of this monolithic 3D integration technology are potentially boundless,” he said. “For example, in-sensor computing combines sensor and computer functions in one device, instead of a sensor obtaining information then transferring the data to a computer. That lets us obtain a signal and directly compute data, resulting in faster processing, less energy consumption and enhanced security because data isn’t being transferred.”


    Straining memory leads to new computing possibilities

    By strategically straining materials that are as thin as a single layer of atoms, University of Rochester scientists have developed a new form of computing memory that is at once fast, dense, and low-power. The researchers outline their new hybrid resistive switches in a study published in Nature Electronics.
    Developed in the lab of Stephen M. Wu, an assistant professor of electrical and computer engineering and of physics, the approach marries the best qualities of two existing forms of resistive switches used for memory: memristors and phase-change materials. Both forms have been explored for their advantages over today’s most prevalent forms of memory, including dynamic random access memory (DRAM) and flash memory, but have their drawbacks.
    Wu says that memristors, which operate by applying voltage to a thin filament between two electrodes, tend to suffer from a relative lack of reliability compared to other forms of memory. Meanwhile, phase-change materials, which involve selectively melting a material into either an amorphous state or a crystalline state, require too much power.
    “We’ve combined the idea of a memristor and a phase-change device in a way that can go beyond the limitations of either device,” says Wu. “We’re making a two-terminal memristor device, which drives one type of crystal to another type of crystal phase. Those two crystal phases have different resistance that you can then store as memory.”
    The key is leveraging 2D materials that can be strained to the point where they lie precariously between two different crystal phases and can be nudged in either direction with relatively little power.
    “We engineered it by essentially just stretching the material in one direction and compressing it in another,” says Wu. “By doing that, you enhance the performance by orders of magnitude. I see a path where this could end up in home computers as a form of memory that’s ultra-fast and ultra-efficient. That could have big implications for computing in general.”
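A loose back-of-the-envelope illustration of why lowering the barrier matters — an Arrhenius-style toy estimate; the 1 eV barrier and the linear strain dependence are invented for illustration, not taken from the paper:

```python
import math

K_BT = 0.025  # thermal energy at room temperature, in eV

def effective_barrier(barrier_ev, strain_fraction):
    """Energy barrier between the two crystal phases after straining.

    The linear reduction with strain is an illustrative assumption."""
    return max(barrier_ev * (1.0 - strain_fraction), 0.0)

unstrained = effective_barrier(1.0, 0.0)  # material far from the transition
strained = effective_barrier(1.0, 0.9)    # poised between the two phases

# Arrhenius-style rate estimate: lowering the barrier makes switching
# exponentially easier, i.e. faster and lower-power by orders of magnitude.
rate_ratio = math.exp((unstrained - strained) / K_BT)
```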
    Wu and his team of graduate students conducted the experimental work and partnered with researchers from Rochester’s Department of Mechanical Engineering, including assistant professors Hesam Askari and Sobhit Singh, to identify where and how to strain the material. According to Wu, the biggest hurdle remaining to making the phase-change memristors is continuing to improve their overall reliability — but he is nonetheless encouraged by the team’s progress to date.


    Researchers show an old law still holds for quirky quantum materials

    Long before researchers discovered the electron and its role in generating electrical current, they knew about electricity and were exploring its potential. One thing they learned early on was that metals were great conductors of both electricity and heat.
    And in 1853, two scientists showed that those two admirable properties of metals were somehow related: At any given temperature, the ratio of electronic conductivity to thermal conductivity was roughly the same in any metal they tested. This so-called Wiedemann-Franz law has held ever since — except in quantum materials, where electrons stop behaving as individual particles and glom together into a sort of electron soup. Experimental measurements have indicated that the 170-year-old law breaks down in these quantum materials, and by quite a bit.
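For reference, the law states that the ratio of a metal's electronic thermal conductivity κ to its electrical conductivity σ is proportional to temperature, κ/(σT) = L, where L is the universal Lorenz number. A quick check of the numbers (the copper figures are textbook values):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
E = 1.602176634e-19  # elementary charge, C

# Lorenz number: L = (pi^2 / 3) * (k_B / e)^2, about 2.44e-8 W·Ω/K²
lorenz = (math.pi ** 2 / 3) * (K_B / E) ** 2

# Sanity check for an ordinary metal: copper at room temperature.
sigma_copper, temperature = 5.96e7, 300.0  # conductivity in S/m, T in K
kappa_predicted = lorenz * sigma_copper * temperature
# ... close to copper's measured thermal conductivity of roughly 400 W/(m·K)
```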
    Now, a theoretical argument put forth by physicists at the Department of Energy’s SLAC National Accelerator Laboratory, Stanford University and the University of Illinois suggests that the law should, in fact, approximately hold for one type of quantum material — the copper oxide superconductors, or cuprates, which conduct electricity with no loss at relatively high temperatures.
    In a paper published in Science today, they propose that the Wiedemann-Franz law should still roughly hold if one considers only the electrons in cuprates. They suggest that other factors, such as vibrations in the material’s atomic latticework, must account for experimental results that make it look like the law does not apply.
    This surprising result is important to understanding unconventional superconductors and other quantum materials, said Wen Wang, lead author of the paper and a PhD student with the Stanford Institute for Materials and Energy Sciences (SIMES) at SLAC.
    “The original law was developed for materials where electrons interact with each other weakly and behave like little balls that bounce off defects in the material’s lattice,” Wang said. “We wanted to test the law theoretically in systems where neither of these things was true.”
    Peeling a quantum onion
    Superconducting materials, which carry electric current without resistance, were discovered in 1911. But they operated at such extremely low temperatures that their usefulness was quite limited.

    That changed in 1986, when the first family of so-called high-temperature or unconventional superconductors — the cuprates — was discovered. Although cuprates still require extremely cold conditions to work their magic, their discovery raised hopes that superconductors could someday work at much closer to room temperature — making revolutionary technologies like no-loss power lines possible.
    After nearly four decades of research, that goal is still elusive, although a lot of progress has been made in understanding the conditions in which superconducting states flip in and out of existence.
    Theoretical studies, performed with the help of powerful supercomputers, have been essential for interpreting the results of experiments on these materials and for understanding and predicting phenomena that are out of experimental reach.
    For this study, the SIMES team ran simulations based on what’s known as the Hubbard model, which has become an essential tool for simulating and describing systems where electrons stop acting independently and join forces to produce unexpected phenomena.
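For reference, the single-band Hubbard model takes the standard form below, with hopping amplitude t between neighboring lattice sites and an on-site repulsion U that penalizes two electrons occupying the same site — the interaction that makes the electrons stop acting independently:

```latex
H = -t \sum_{\langle i,j \rangle,\, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```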
    The results show that when you only take electron transport into account, the ratio of electronic conductivity to thermal conductivity approaches what the Wiedemann-Franz law predicts, Wang said. “So, the discrepancies that have been seen in experiments should be coming from other things like phonons, or lattice vibrations, that are not in the Hubbard model,” she said.
    SIMES staff scientist and paper co-author Brian Moritz said that although the study did not investigate how vibrations cause the discrepancies, “somehow the system still knows that there is this correspondence between charge and heat transport amongst the electrons. That was the most surprising result.”
    From here, he added, “maybe we can peel the onion to understand a little bit more.”
    Major funding for this study came from the DOE Office of Science. Computational work was carried out at Stanford University and on resources of the National Energy Research Scientific Computing Center, which is a DOE Office of Science user facility.


    Researchers develop novel deep learning-based detection system for autonomous vehicles

    Autonomous vehicles hold the promise of tackling traffic congestion, enhancing traffic flow through vehicle-to-vehicle communication, and revolutionizing the travel experience by offering comfortable and safe journeys. Additionally, integrating autonomous driving technology into electric vehicles could contribute to more eco-friendly transportation solutions.
    A critical requirement for the success of autonomous vehicles is their ability to detect and navigate around obstacles, pedestrians, and other vehicles across diverse environments. Current autonomous vehicles employ smart sensors such as LiDAR (Light Detection and Ranging) for a 3D view of the surroundings and depth information, RADAR (Radio Detection and Ranging) for detecting objects at night and in cloudy weather, and a set of cameras for providing RGB images and a 360-degree view, collectively forming a comprehensive dataset known as a point cloud. However, these sensors often face challenges like reduced detection capabilities in adverse weather, on unstructured roads, or due to occlusion.
    To overcome these shortcomings, an international team of researchers led by Professor Gwanggil Jeon from the Department of Embedded Systems Engineering at Incheon National University (INU), Korea, has recently developed a groundbreaking Internet-of-Things-enabled deep learning-based end-to-end 3D object detection system. “Our proposed system operates in real time, enhancing the object detection capabilities of autonomous vehicles, making navigation through traffic smoother and safer,” explains Prof. Jeon. Their paper was made available online on October 17, 2022, and published in Volume 24, Issue 11 of the journal IEEE Transactions on Intelligent Transportation Systems in November 2023.
    The proposed system is built on the YOLOv3 (You Only Look Once) deep learning object detection technique, one of the most widely used state-of-the-art techniques for 2D visual detection. The researchers first used this model for 2D object detection and then modified the YOLOv3 technique to detect 3D objects. Using both point cloud data and RGB images as input, the system generates bounding boxes with confidence scores and labels for visible obstacles as output.
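Detections of this kind are conventionally matched against ground-truth boxes using intersection-over-union (IoU). A minimal 2D version — the 3D case extends the same idea from areas to box volumes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    Standard criterion for deciding whether a predicted bounding box
    matches a ground-truth one when scoring a detector."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```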
    To assess the system’s performance, the team conducted experiments using the Lyft dataset, which consisted of road information captured from 20 autonomous vehicles traveling a predetermined route in Palo Alto, California, over a four-month period. The results demonstrated that YOLOv3 exhibits high accuracy, surpassing other state-of-the-art architectures. Notably, the overall accuracies for 2D and 3D object detection were an impressive 96% and 97%, respectively.
    Prof. Jeon emphasizes the potential impact of this enhanced detection capability: “By improving detection capabilities, this system could propel autonomous vehicles into the mainstream. The introduction of autonomous vehicles has the potential to transform the transportation and logistics industry, offering economic benefits through reduced dependence on human drivers and the introduction of more efficient transportation methods.”
    Furthermore, the present work is expected to drive research and development in various technological fields such as sensors, robotics, and artificial intelligence. Going forward, the team aims to explore additional deep learning algorithms for 3D object detection, given that development to date has focused largely on 2D images.
    In summary, this groundbreaking study could pave the way for widespread adoption of autonomous vehicles and, in turn, a more environment-friendly and comfortable mode of transport.


    Broadband buzz: Periodical cicadas’ chorus measured with fiber optic cables

    Hung from a common utility pole, a fiber optic cable — the kind bringing high-speed internet to more and more American households — can be turned into a sensor to detect temperature changes, vibrations, and even sound, through an emerging technology called distributed fiber optic sensing.
    However, as NEC Labs America photonics researcher Sarper Ozharar, Ph.D., explains, acoustic sensing in fiber optic cables “is limited to only nearby sound sources or very loud events, such as emergency vehicles, car alarms, or cicada emergences.”
    Cicadas? Indeed, periodical cicadas — the insects known for emerging by the billions on 13- or 17-year cycles and making a collective racket with their buzzy mating calls — are loud enough to be detected through fiber optic acoustic sensing. And a new proof-of-concept study shows how the technology could open new pathways for charting the populations of these famously ephemeral bugs.
    “I was surprised and excited to learn how much information about the calls was gathered, despite it being located near a busy section of Middlesex County in New Jersey,” says entomologist Jessica Ware, Ph.D., associate curator and chair of the Division of Invertebrate Zoology at the American Museum of Natural History and co-author on the study, published in the Entomological Society of America’s Journal of Insect Science.
    As the researchers explain in their report, distributed fiber optic sensing is based on detecting and analyzing “backscatter” in a cable. When an optical pulse is sent through a fiber cable, tiny imperfections or disturbances in the cable cause a small fraction of the signal to bounce back to the source. Timing the arrival of the backscattered light can be used to calculate the exact point along the cable from which it bounced back. And, monitoring how the backscatter varies over time creates a signature of the disturbance — which, in the case of acoustic sensing, can indicate volume and frequency of the sound.
    A single sensor can be deployed on a huge segment of cable, too; the researchers offer an example of a 50-kilometer cable with a sensor that can detect the location of disturbances at a scale as precise as 1 meter. “This is identical to installing 50,000 [acoustic] sensors in the monitored region that are inherently synchronized and do not require onsite power supply,” they write.
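The distance arithmetic behind that localization is simple time-of-flight: light travels at c/n inside the glass, and the backscattered signal covers the path twice. A sketch using a typical refractive index for silica fiber (the exact value is an assumption, not NEC's parameter):

```python
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
N_SILICA = 1.468          # typical refractive index of silica fiber (assumed)

def backscatter_origin_m(round_trip_seconds):
    """Distance along the fiber at which a backscattered pulse originated.

    The pulse travels out and back, so the one-way distance is half the
    round-trip path at the speed of light in glass."""
    return (C_VACUUM / N_SILICA) * round_trip_seconds / 2.0

# Backscatter arriving ~490 microseconds after the probe pulse was sent
# comes from roughly 50 km down the cable.
distance = backscatter_origin_m(489.6e-6)
```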
    In 2021, Brood X, the largest of several populations of cicadas that emerge on 17-year cycles, came out of the ground in at least 15 states and the District of Columbia in the Midwest and mid-Atlantic regions of the U.S., including New Jersey, where Ozharar works at NEC Laboratories America, Inc. There, Ozharar and colleagues used NEC’s fiber-sensing test apparatus — cable strung on three 35-foot utility poles on the grounds of NEC’s lab in Princeton — to see if they could detect and analyze the sound of Brood X cicadas buzzing in trees nearby between June 9 and June 24 that year.

    Sure enough, the cicadas’ buzzing was evident. It showed up as a strong signal at 1.33 kilohertz (kHz) via the fiber optic sensing, which matched the frequency of the cicadas’ call measured with a traditional audio sensor placed in the same location. The researchers also observed the cicadas’ peak frequency varying between 1.2 kHz and 1.5 kHz, a pattern that appeared to follow changes in temperature at the test site. The overall intensity of the cicadas’ buzzing was also observed through the fiber optic sensing, and the signal decreased over the course of the test period, as the cicadas’ chorus peaked and then faded as they reached the end of their reproductive period.
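Recovering a peak like that 1.33 kHz signal from a sensed waveform is a standard spectral measurement. A sketch with a synthetic tone standing in for the fiber-derived signal (sample rate and noise level are illustrative):

```python
import numpy as np

SAMPLE_RATE = 8000.0  # Hz
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)

# Synthetic stand-in for the sensed waveform: a 1.33 kHz "cicada chorus"
# tone buried in broadband noise.
rng = np.random.default_rng(1)
waveform = np.sin(2 * np.pi * 1330.0 * t) + 0.5 * rng.normal(size=t.size)

# Locate the dominant frequency via the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(waveform))
freqs = np.fft.rfftfreq(t.size, 1.0 / SAMPLE_RATE)
peak_hz = freqs[np.argmax(spectrum)]
```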
    “We think it is really exciting and interesting that this new technology, designed and optimized for other applications and seemingly unrelated to entomology, can support entomological studies,” Ozharar says. Indeed, fiber optic sensors are multifunctional, meaning they can be installed and used for any number of purposes, detecting cicadas one day and some other disturbance the next.
    Ware says fiber optic sensing could soon play a role in detecting a variety of insects. “Periodical cicadas were a noisy cohort that was picked up by these systems, but it will be interesting to see if annual measurements of insect soundscapes and vibrations could be useful in monitoring insect abundance in an area across seasons and years,” she says.
    As for periodical cicadas, more than a dozen broods are known to emerge in different years and different areas of the eastern United States. The growing network of fiber optic infrastructure in the country — with fiber internet available to more than 40 percent of U.S. households as of 2022, according to the Fiber Broadband Association — could be incorporated into entomologists’ efforts to observe and measure these emergences over time.
    “Thanks to the booming development of broadband access and telecommunications, fiber cables are ubiquitously available across communities, weaving a vast network that not only provides high-speed internet but also serves as a foundation for the next generation of sensing technologies,” Ozharar says.
    Brood X cicadas will remain underground until 2038. Their brief appearances and massive numbers make them a challenge to study, but the long gap between their arrivals allows entomologists to make significant technological leaps in the interim. In 2021, Brood X was observed in unprecedented volume through a crowdsourced mobile smartphone app — a method barely conceivable when Brood X had last emerged in 2004. By 2038, fiber optic sensing could well be the next avenue leading to a similar advance.


    Artificial intelligence paves way for new medicines

    Researchers have developed an AI model that can predict where a drug molecule can be chemically altered.
    A team of researchers from LMU, ETH Zurich, and Roche Pharma Research and Early Development (pRED) Basel has used artificial intelligence (AI) to develop an innovative method that predicts the optimal method for synthesizing drug molecules. “This method has the potential to significantly reduce the number of required lab experiments, thereby increasing both the efficiency and sustainability of chemical synthesis,” says David Nippa, lead author of the corresponding paper, which has been published in the journal Nature Chemistry. Nippa is a doctoral student in Dr. David Konrad’s research group at the Faculty of Chemistry and Pharmacy at LMU and at Roche.
    Active pharmaceutical ingredients typically consist of a framework to which functional groups are attached. These groups enable a specific biological function. To achieve new or improved medical effects, functional groups are altered and added to new positions in the framework. However, this process is particularly challenging in chemistry, as the frameworks, which mainly consist of carbon and hydrogen atoms, are hardly reactive themselves. One method of activating the framework is the so-called borylation reaction. In this process, a chemical group containing the element boron is attached to a carbon atom of the framework. This boron group can then be replaced by a variety of medically effective groups. Although borylation has great potential, it is difficult to control in the lab.
    Together with Kenneth Atz, a doctoral student at ETH Zurich, David Nippa developed an AI model that was trained on data from trustworthy scientific works and experiments from an automated lab at Roche. It can successfully predict the position of borylation for any molecule and provides the optimal conditions for the chemical transformation. “Interestingly, the predictions improved when the three-dimensional information of the starting materials was taken into account, not just their two-dimensional chemical formulas,” says Atz.
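The shape of the prediction task can be pictured roughly as follows — a hypothetical sketch of the inputs and outputs, not the published model; the atom labels, scores, and conditions are invented for illustration:

```python
# Hypothetical per-site output: for each candidate carbon atom in a
# molecule, the model assigns a borylation probability, and separately
# recommends reaction conditions. All values below are invented.
candidate_sites = {"C1": 0.02, "C3": 0.81, "C5": 0.11}
best_site = max(candidate_sites, key=candidate_sites.get)

# Recommended conditions would accompany the site prediction
# (illustrative placeholders only).
conditions = {"catalyst": "Ir-based", "solvent": "THF", "temp_c": 80}
```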
    The method has already been successfully used to identify positions in existing active ingredients where additional active groups can be introduced. This helps researchers develop new and more effective variants of known drug active ingredients more quickly.


    What was thought of as noise points to new type of ultrafast magnetic switching

    Noise on the radio when reception is poor is a typical example of how fluctuations mask a physical signal. In fact, such interference or noise occurs in every physical measurement in addition to the actual signal. “Even in the loneliest place in the universe, where there should be nothing at all, there are still fluctuations of the electromagnetic field,” says physicist Ulrich Nowak. In the Collaborative Research Centre (CRC) 1432 “Fluctuations and Nonlinearities in Classical and Quantum Matter beyond Equilibrium” at the University of Konstanz, researchers do not see this omnipresent noise as a disturbing factor that needs to be eliminated as far as possible, but as a source of information that tells us something about the signal.
    No magnetic effect, but fluctuations
    This approach has now proved successful when investigating antiferromagnets. Antiferromagnets are magnetic materials in which the magnetizations of several sub-lattices cancel each other out. Nevertheless, antiferromagnetic insulators are considered promising for energy-efficient components in the field of information technology. As they have hardly any magnetic fields on the outside, they are very difficult to characterize physically. Yet, antiferromagnets are surrounded by magnetic fluctuations, which can tell us a lot about these weakly magnetic materials.
    In this spirit, the groups of the two materials scientists Ulrich Nowak and Sebastian Gönnenwein analysed the fluctuations of antiferromagnetic materials in the context of the CRC. The decisive factor in their theoretical as well as experimental study, recently published in the journal Nature Communications, was the specific frequency range. “We measure very fast fluctuations and have developed a method with which fluctuations can still be detected on the ultrashort time scale of femtoseconds,” says experimental physicist Sebastian Gönnenwein. A femtosecond is one millionth of a billionth of a second.
    New experimental approach for ultrafast time scales
    On slower time scales, one could use electronics that are fast enough to measure these fluctuations. On ultrafast time scales, this no longer works, which is why a new experimental approach had to be developed. It is based on an idea from the research group of Alfred Leitenstorfer, who is also a member of the Collaborative Research Centre. Employing laser technology, the researchers use pulse sequences or pulse pairs in order to obtain information about fluctuations. Initially, this measurement approach was developed to investigate quantum fluctuations, and has now been extended to fluctuations in magnetic systems. Takayuki Kurihara from the University of Tokyo played a key role in this development as the third cooperation partner. He was a member of the Leitenstorfer research group and the Zukunftskolleg at the University of Konstanz from 2018 to 2020.
    Detection of fluctuations using ultrashort light pulses
    In the experiment, two ultrashort light pulses are transmitted through the magnet with a time delay, each probing the magnetic properties during its transit. The light pulses are then checked for similarity using sophisticated electronics. The first pulse serves as a reference; the second contains information about how much the antiferromagnet has changed in the time between the first and second pulse. Different measurement results at the two points in time confirm the fluctuations. Ulrich Nowak’s research group also modelled the experiment in elaborate computer simulations in order to better understand its results.
    One unexpected result was the discovery of what is known as telegraph noise on ultrashort time scales. This means that there is not only unsorted noise, but also fluctuations in which the system switches back and forth between two well-defined states. Such fast, purely random switching has never been observed before and could be interesting for applications such as random number generators. In any case, the new methodological possibilities for analyzing fluctuations on ultrashort time scales offer great potential for further discoveries in the field of functional materials.
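Random telegraph noise of the kind described — two-state switching at random times — is easy to visualize with a toy simulation; the time base and switching probability here are illustrative, not the femtosecond-scale values measured in the experiment:

```python
import numpy as np

# Toy random telegraph noise: the system hops at random times between
# two well-defined levels, unlike unstructured Gaussian noise.
rng = np.random.default_rng(42)
n_steps, p_switch = 10_000, 0.01       # switching probability per step
switch_events = rng.random(n_steps) < p_switch
state = np.cumsum(switch_events) % 2   # toggles between 0 and 1
trace = np.where(state == 0, -1.0, 1.0)
# The resulting trace occupies exactly two levels -- the telegraph signature.
```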