More stories

  • New imager microchip helps devices bring hidden objects to light

    Researchers from The University of Texas at Dallas and Oklahoma State University have developed an innovative terahertz imager microchip that can enable devices to detect and create images through obstacles that include fog, smoke, dust and snow.
    The team is working on a device for industrial applications that require imaging up to 20 meters away. The technology could also be adapted for use in cars to help drivers or autonomous vehicle systems navigate through hazardous conditions that reduce visibility. On an automotive display, for example, the technology could show pixelated outlines and shapes of objects, such as another vehicle or pedestrians.
    “The technology allows you to see in vision-impaired environments. In industrial settings, for example, devices using the microchips could help with packaging inspections for manufacturing process control, monitoring moisture content or seeing through steam. If you are a firefighter, it could help you see through smoke and fire,” said Dr. Kenneth K. O, professor of electrical and computer engineering and the Texas Instruments Distinguished University Chair in the Erik Jonsson School of Engineering and Computer Science.
    Yukun Zhu, a doctoral candidate in electrical engineering, announced the imaging technology on Feb. 21 at the virtual International Solid-State Circuits Conference, sponsored by the Institute of Electrical and Electronics Engineers (IEEE) and its Solid-State Circuits Society.
    The advance is the result of more than 15 years of work by O and his team of students, researchers and collaborators. This latest effort is supported by Texas Instruments (TI) through its TI Foundational Technology Research Program.
    “TI has been part of the journey through much of the 15 years,” said O, who is director of the Texas Analog Center of Excellence (TxACE) at UT Dallas. “The company has been a key supporter of the research.”
    The microchip emits radiation beams in the terahertz range (430 GHz) of the electromagnetic spectrum from pixels no larger than a grain of sand. The beams travel through fog, dust and other obstacles that optical light cannot penetrate and bounce off objects and back to the microchip, where the pixels pick up the signal to create images. Without the use of external lenses, the terahertz imager includes the microchip and a reflector that increases the imaging distance and quality and reduces power consumption.
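    For a rough sense of scale, the free-space wavelength at the reported 430 GHz can be computed directly; it is comparable to the quoted pixel size. The snippet below is a back-of-the-envelope check, not a figure from the researchers.
      # Back-of-the-envelope check (not from the paper): the free-space
      # wavelength at 430 GHz, which is roughly the scale of an on-chip antenna pixel.
      C = 299_792_458.0   # speed of light, m/s
      FREQ_HZ = 430e9     # reported operating frequency, 430 GHz
      wavelength_mm = C / FREQ_HZ * 1e3
      print(f"Wavelength at 430 GHz: {wavelength_mm:.2f} mm")   # ~0.70 mm
      # A half-wavelength antenna element would be ~0.35 mm across,
      # consistent with pixels "no larger than a grain of sand."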
    The researchers designed the imager using complementary metal-oxide semiconductor (CMOS) technology. This type of integrated circuit technology is used to manufacture the bulk of consumer electronics devices, which makes the imager affordable. O’s group was one of the first to show that CMOS technology was viable for terahertz devices, and since then they have worked to develop a variety of new applications.
    “Another breakthrough result enabled through innovations that overcame fundamental active-gain limits of CMOS is that this imaging technology consumes more than 100 times less power than the phased arrays currently being investigated for the same imaging applications. This and the use of CMOS make consumer applications of this technology possible,” said O, a fellow of the IEEE.
    TxACE is supported by the Semiconductor Research Corp., TI, the UT System and UT Dallas.
    “UT Dallas and Oklahoma State continue to discover technological innovations that will help shape the future,” said Dr. Swaminathan Sankaran, design director and Distinguished Member Technical Staff at TI Kilby Labs. “What Dr. O and his research team were able to accomplish was truly remarkable with this terahertz monostatic reflection-mode imager work. Their research paves a path for improved raw angular resolution and low-power, low-cost system integration, and we are excited to see what applications and use cases this terahertz imaging technology will lead to.”
    Story Source:
    Materials provided by University of Texas at Dallas. Original written by Kim Horner. Note: Content may be edited for style and length.

  • Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.
    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.
    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.
    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.
    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.
    Probing probabilities
    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. Those data points which are least likely to occur correspond to anomalies.
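    The article does not give the model’s details, but the core idea above (estimate a probability density over the sensor readings, then flag the lowest-density points) can be sketched with an off-the-shelf density estimator. Everything below, from the synthetic voltage and current readings to the 0.5 percent threshold, is illustrative only; the Lab’s actual method also learns the grid’s interconnectedness, which this toy version ignores.
      # Toy sketch of density-based anomaly detection (illustration only; the
      # MIT-IBM model is a learned, graph-aware density model, not a KDE).
      import numpy as np
      from sklearn.neighbors import KernelDensity
      rng = np.random.default_rng(0)
      # Fake "grid sensor" readings: voltage (V) and current (A) pairs.
      normal = rng.normal(loc=[230.0, 10.0], scale=[2.0, 0.5], size=(5000, 2))
      spikes = rng.normal(loc=[260.0, 18.0], scale=[2.0, 0.5], size=(10, 2))  # injected anomalies
      readings = np.vstack([normal, spikes])
      # Fit a density model on the readings, then score every point.
      kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(readings)
      log_density = kde.score_samples(readings)
      # Flag the lowest-density 0.5 percent of points as candidate anomalies.
      threshold = np.quantile(log_density, 0.005)
      anomalies = readings[log_density < threshold]
      print(f"Flagged {len(anomalies)} of {len(readings)} readings as anomalous")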

  • A security technique to fool would-be cyber attackers

    Multiple programs running on the same computer may not be able to directly access each other’s hidden information, but because they share the same memory hardware, their secrets could be stolen by a malicious program through a “memory timing side-channel attack.”
    This malicious program notices delays when it tries to access a computer’s memory, because the hardware is shared among all programs using the machine. It can then interpret those delays to obtain another program’s secrets, like a password or cryptographic key.
    One way to prevent these types of attacks is to allow only one program to use the memory controller at a time, but this dramatically slows down computation. Instead, a team of MIT researchers has devised a new approach that allows memory sharing to continue while providing strong security against this type of side-channel attack. Their method is able to speed up programs by 12 percent when compared to state-of-the-art security schemes.
    In addition to providing better security while enabling faster computation, the technique could be applied to a range of different side-channel attacks that target shared computing resources, the researchers say.
    “Nowadays, it is very common to share a computer with others, especially if you are doing computation in the cloud or even on your own mobile device. A lot of this resource sharing is happening. Through these shared resources, an attacker can seek out even very fine-grained information,” says senior author Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
    The co-lead authors are CSAIL graduate students Peter Deutsch and Yuheng Yang. Additional co-authors include Joel Emer, a professor of the practice in EECS, and CSAIL graduate students Thomas Bourgeat and Jules Drean. The research will be presented at the International Conference on Architectural Support for Programming Languages and Operating Systems.

  • Deep neural network to find hidden turbulent motion on the sun

    Scientists developed a neural network deep learning technique to extract hidden turbulent motion information from observations of the Sun. Tests on three different sets of simulation data showed that it is possible to infer the horizontal motion from data for the temperature and vertical motion. This technique will benefit solar astronomy and other fields such as plasma physics, fusion science, and fluid dynamics.
    The Sun is important to the Sustainable Development Goal of Affordable and Clean Energy, both as the source of solar power and as a natural example of fusion energy. Our understanding of the Sun is limited by the data we can collect. It is relatively easy to observe the temperature and vertical motion of solar plasma, gas so hot that the component atoms break down into electrons and ions. But it is difficult to determine the horizontal motion.
    To tackle this problem, a team of scientists led by the National Astronomical Observatory of Japan and the National Institute for Fusion Science created a neural network model, and fed it data from three different simulations of plasma turbulence. After training, the neural network was able to correctly infer the horizontal motion given only the vertical motion and the temperature.
    The team also developed a novel coherence spectrum to evaluate the performance of the output at different size scales. This new analysis showed that the method succeeded at predicting the large-scale patterns in the horizontal turbulent motion, but had trouble with small features. The team is now working to improve the performance at small scales. It is hoped that this method can be applied to future high resolution solar observations, such as those expected from the SUNRISE-3 balloon telescope, as well as to laboratory plasmas, such as those created in fusion science research for new energy.
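    The announcement does not specify the network architecture, so the following is only a minimal sketch of the general setup: a small convolutional network (written here in PyTorch) that takes temperature and vertical-velocity maps as two input channels and predicts the two horizontal velocity components at every pixel. The layer sizes, patch size, and placeholder training target are illustrative, not values from the study.
      # Minimal sketch (not the authors' architecture): a CNN mapping
      # (temperature, vertical velocity) maps to (v_x, v_y) maps.
      import torch
      import torch.nn as nn
      class HorizontalFlowNet(nn.Module):
          def __init__(self, hidden: int = 32):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(2, hidden, kernel_size=3, padding=1),   # inputs: T, v_z
                  nn.ReLU(),
                  nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
                  nn.ReLU(),
                  nn.Conv2d(hidden, 2, kernel_size=3, padding=1),   # outputs: v_x, v_y
              )
          def forward(self, x: torch.Tensor) -> torch.Tensor:
              return self.net(x)
      model = HorizontalFlowNet()
      fake_batch = torch.randn(4, 2, 128, 128)      # 4 simulated 128x128 patches
      pred = model(fake_batch)                      # shape: (4, 2, 128, 128)
      loss = nn.functional.mse_loss(pred, torch.randn_like(pred))  # placeholder target
      print(pred.shape, float(loss))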
    Story Source:
    Materials provided by National Institutes of Natural Sciences. Note: Content may be edited for style and length.

  • Machine learning antibiotic prescriptions can help minimize resistance spread

    Antibiotics are a double-edged sword: on the one hand, antibiotics are essential to curing bacterial infections. On the other, their use promotes the appearance and proliferation of antibiotic-resistant bacteria. Using genomic sequencing techniques and machine learning analysis of patient records, researchers have developed an antibiotic prescribing algorithm that cuts the risk of emergence of antibiotic resistance by half.
    The paper, published today in Science, is a collaboration between the research group of Professor Roy Kishony, of the Technion-Israel Institute of Technology Faculty of Biology and the Henry and Marilyn Taub Faculty of Computer Science, and Professors Varda Shalev, Gabriel Chodick, and Jacob Kuint at the Maccabi KSM Research and Innovation Center, headed by Dr. Tal Patalon. Focusing on two very common bacterial infections, urinary tract infections and wound infections, the paper describes how each patient’s past infection history can be used to choose the best antibiotic to prescribe in order to reduce the chances of antibiotic resistance emerging.
    Clinical treatment of infections focuses on correctly matching an antibiotic to the resistance profile of the pathogen, but even such correctly matched treatments can fail, as resistance can emerge during treatment itself. “We wanted to understand how antibiotic resistance emerges during treatment and find ways to better tailor antibiotic treatment for each patient to not only correctly match the patient’s current infection susceptibility, but also to minimize their risk of infection recurrence and gain of resistance to treatment,” said Prof. Kishony.
    The key to the success of the approach was understanding that the emergence of antibiotic resistance could be predicted in individual patients’ infections. Bacteria can evolve by randomly acquiring mutations that make them resistant, but the randomness of the process makes it hard to predict and to avoid. However, the researchers discovered that in most patients’ infections resistance was not acquired by random mutations. Instead, resistance emerged due to reinfection by existing resistant bacteria from the patient’s own microbiome. The researchers turned these findings into an advantage: they proposed matching an antibiotic not only to the susceptibility of the bacteria causing the patient’s current infection, but also to the bacteria in their microbiome that could replace it.
    “We found that the antibiotic susceptibility of the patient’s past infections could be used to predict their risk of returning with a resistant infection following antibiotic treatment,” explained Dr. Mathew Stracy, the first author of the paper. “Using this data, together with the patient’s demographics like age and gender, allowed us to develop the algorithm.”
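    The published models and features are not reproduced here; the sketch below only illustrates the general recipe described above: fit a per-antibiotic resistance-risk model on past susceptibility results and demographics, then recommend the candidate drug with the lowest predicted risk. The antibiotic names, feature columns, and synthetic data are all placeholders.
      # Illustrative sketch only (not the published model): one risk model per
      # candidate antibiotic, trained on demographics plus past resistance results,
      # then recommend the drug with the lowest predicted risk of resistance.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      rng = np.random.default_rng(1)
      CANDIDATES = ["antibiotic_A", "antibiotic_B", "antibiotic_C"]  # placeholder names
      # Fake features: [age, is_male, past_resistance_to_A, _to_B, _to_C]
      X = rng.integers(0, 2, size=(2000, 5)).astype(float)
      X[:, 0] = rng.integers(18, 90, size=2000)
      models = {}
      for i, drug in enumerate(CANDIDATES):
          # Placeholder label: resistance re-emerges mostly when the patient's past
          # samples were already resistant to this drug (column 2 + i).
          y = (X[:, 2 + i] + rng.random(2000) > 1.2).astype(int)
          models[drug] = LogisticRegression(max_iter=1000).fit(X, y)
      def recommend(patient: np.ndarray) -> str:
          """Return the candidate drug with the lowest predicted resistance risk."""
          risks = {d: m.predict_proba(patient.reshape(1, -1))[0, 1] for d, m in models.items()}
          return min(risks, key=risks.get)
      print(recommend(np.array([70.0, 1.0, 1.0, 0.0, 0.0])))  # past resistance to A -> avoid A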
    The study was supported by the National Institutes of Health (NIH), the Israel Science Foundation within the Israel Precision Medicine Partnership program, the Ernest and Bonnie Beutler Research Program of Excellence in Genomic Medicine, the European Research Council (ERC), the Wellcome Trust, and the D. Dan & Betty Kahn Foundation.
    “I hope to see the algorithm applied at the point of care, providing doctors with better tools to personalize antibiotic treatments to improve treatment and minimize the spread of resistance,” said Dr. Tal Patalon.
    Story Source:
    Materials provided by Technion-Israel Institute of Technology. Note: Content may be edited for style and length.

  • Faster, more efficient living cell separation achieved with new microfluidic chip

    A Japanese research team created a new way to sort living cells suspended in fluid using an all-in-one operation in a lab-on-chip that required only 30 minutes for the entire separation process. This device eliminated the need for labor-intensive sample pretreatment and chemical tagging techniques while preserving the original structure of the cells. The team constructed a prototype microfluidic chip that uses electric fields to gently coax cells in one direction or another through dielectrophoresis, the movement of neutral particles when they are subjected to an external non-uniform electric field.
    The research team, led by Professor Fumito Maruyama of the Hiroshima University Office of Academic Research and Industry-Academia-Government and Community Collaboration, published its findings on January 14 in iScience.
    Dielectrophoresis induces the motion of suspended particles, such as cells, by applying a non-uniform electric field. Since the strength of dielectrophoretic force depends on the size of the cell and its dielectric properties, this technique can be used to selectively separate cells based on these differences. In this paper, Maruyama and his team introduced the separation of two types of eukaryotic cells with the developed microfluidic chip that used dielectrophoresis.
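    As a rough illustration of that size dependence (a textbook estimate, not a calculation from the paper), the time-averaged dielectrophoretic force on a spherical particle scales with the cube of its radius, so even modest differences in cell size translate into large differences in force. The field gradient and Clausius-Mossotti factor below are arbitrary example values.
      # Textbook estimate (not from the paper): time-averaged DEP force on a sphere,
      # F = 2*pi*eps_m*r^3*Re[K]*grad(|E|^2), showing the strong r^3 dependence.
      import math
      EPS0 = 8.854e-12          # vacuum permittivity, F/m
      eps_medium = 78 * EPS0    # aqueous medium (relative permittivity ~78)
      re_K = 0.5                # example Clausius-Mossotti factor (ranges ~ -0.5 to 1.0)
      grad_E2 = 1e13            # example gradient of |E|^2, V^2/m^3
      for radius_um in (2, 5, 10):                 # example cell radii, micrometers
          r = radius_um * 1e-6
          force = 2 * math.pi * eps_medium * r**3 * re_K * grad_E2
          print(f"r = {radius_um:2d} um -> F_DEP ~ {force:.2e} N")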
    Dielectrophoresis could be particularly useful in separating living cells for medical research applications and the medical industry. Its most significant advantage over other methods is its simplicity.
    “In conventional cell separation methods such as commercially available cell sorters, cells are generally labeled with markers such as fluorescent substances or antibodies, and cells cannot be maintained in their original physical state,” Maruyama said. “Therefore, separating differently sized cells using microfluidic channels and dielectrophoresis has been studied as a potentially great method for separating cells without labeling.”
    Maruyama noted, “Dielectrophoresis cannot entirely replace existing separation methods such as centrifuge and polyester mesh filters. However, it opens the door to faster cell separation that may be useful in certain research and industrial areas, such as the preparation of cells for therapeutics; platelets and cancer-fighting T-cells come to mind.”
    Other common medical industry uses of cell separation include removing unwanted bacteria cells from donated blood and separating stem cells and their derivatives, which are crucial for developing stem cell therapies.
    “If enrichment of a certain cell type from a solution of two or more cell types is needed, our dielectrophoresis-based system is an excellent option as it can simply enable a continuous pass-through of a large number of cells. The enriched cells are then easily collected from an outlet port,” Maruyama added.
    The process outlined by Maruyama and his colleagues was all-in-one.
    “The device eliminated sample pretreatment and established cell separation by all-in-one operation in a lab-on-chip, requiring only a small volume (0.5-1 mL) to enumerate the target cells and completing the entire separation process within 30 minutes. Such a rapid cell separation technique is in high demand by many researchers to promptly characterize the target cells,” he said.
    “Future research may examine refinements, allowing us to use dielectrophoresis to target certain cell types with greater specificity.”
    Story Source:
    Materials provided by Hiroshima University. Note: Content may be edited for style and length.

  • New simulations refine axion mass, refocusing dark matter search

    Physicists searching — unsuccessfully — for today’s most favored candidate for dark matter, the axion, have been looking in the wrong place, according to a new supercomputer simulation of how axions were produced shortly after the Big Bang 13.8 billion years ago.
    Using new calculational techniques and one of the world’s largest computers, Benjamin Safdi, assistant professor of physics at the University of California, Berkeley; Malte Buschmann, a postdoctoral research associate at Princeton University; and colleagues at MIT and Lawrence Berkeley National Laboratory simulated the era when axions would have been produced, approximately a billionth of a billionth of a billionth of a second after the universe came into existence and after the epoch of cosmic inflation.
    The simulation at Berkeley Lab’s National Energy Research Scientific Computing Center (NERSC) found the axion’s mass to be more than twice as big as theorists and experimenters have thought: between 40 and 180 microelectron volts (micro-eV, or μeV), or about one 10-billionth the mass of the electron. There are indications, Safdi said, that the mass is close to 65 μeV. Since physicists began looking for the axion 40 years ago, estimates of the mass have ranged widely, from a few μeV to 500 μeV.
    “We provide over a thousandfold improvement in the dynamic range of our axion simulations relative to prior work and clear up a 40-year-old question regarding the axion mass and axion cosmology,” Safdi said.
    The more definitive mass means that the most common type of experiment to detect these elusive particles — a microwave resonance chamber containing a strong magnetic field, in which scientists hope to snag the conversion of an axion into a faint electromagnetic wave — won’t be able to detect them, no matter how much the experiment is tweaked. The chamber would have to be smaller than a few centimeters on a side to detect the higher-frequency wave from a higher-mass axion, Safdi said, and that volume would be too small to capture enough axions for the signal to rise above the noise.
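    The cavity-size argument can be checked with a standard unit conversion (a back-of-the-envelope illustration, not a calculation from the study): an axion of mass m_a would convert to a photon of frequency f = m_a c^2 / h, and a resonant cavity is of order half that wavelength across.
      # Back-of-the-envelope check using the standard relation f = m_a * c^2 / h
      # (illustration only, not a calculation from the study).
      H_EV_S = 4.135667696e-15   # Planck constant, eV*s
      C = 299_792_458.0          # speed of light, m/s
      for mass_microev in (40, 65, 180):
          freq_hz = (mass_microev * 1e-6) / H_EV_S    # photon frequency after conversion
          half_wavelength_cm = C / freq_hz / 2 * 100  # rough resonant-cavity scale
          print(f"{mass_microev:3d} micro-eV -> {freq_hz / 1e9:5.1f} GHz, "
                f"cavity scale ~ {half_wavelength_cm:.2f} cm")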
    “Our work provides the most precise estimate to date of the axion mass and points to a specific range of masses that is not currently being explored in the laboratory,” he said. “I really do think it makes sense to focus experimental efforts on 40 to 180 μeV axion masses, but there’s a lot of work gearing up to go after that mass range.”
    One newer type of experiment, a plasma haloscope, which looks for axion excitations in a metamaterial — a solid-state plasma — should be sensitive to an axion particle of this mass, and could potentially detect one.

  • Researchers develop design scheme for fiber reinforced composites

    Fiber reinforced composites (FRCs), which are engineering materials comprising stiff fibers embedded in a soft matrix, typically have a constant fiber radius that limits their performance. Now, researchers from the Gwangju Institute of Science and Technology in Korea have developed a scheme for AI-assisted design of FRC structures with spatially varying optimal fiber sizes, making FRCs lighter without compromising their mechanical strength and stiffness, which will reduce the energy consumption of cars, aircraft, and other vehicles.
    Fiber reinforced composites (FRCs) are a class of sophisticated engineering materials composed of stiff fibers embedded in a soft matrix. When properly designed, FRCs provide outstanding structural strength and stiffness for their weight, making them an attractive option for aircraft, spacecraft, and other vehicles where having a lightweight structure is essential.
    Despite their usefulness, however, FRCs are limited by the fact that they are designed using fibers with a constant radius and a spatially-fixed fiber density, which compromises the trade-off between weight and mechanical strength. Simply put, currently available FRCs are, in fact, heavier than necessary to meet the application standards.
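    To see the local trade-off such a design tunes, the classical rule-of-mixtures estimates for a unidirectional composite (a standard homogenization shortcut, not the paper’s multiscale method) show how both stiffness and density track the local fiber volume fraction, which is the quantity a spatially varying design can adjust point by point. The material values below are generic carbon-fiber/epoxy numbers used only for illustration.
      # Rule-of-mixtures estimate for a unidirectional fiber composite
      # (illustration only; the paper uses multiscale topology optimization).
      # E_L = Vf*Ef + (1 - Vf)*Em, rho = Vf*rho_f + (1 - Vf)*rho_m
      E_FIBER, E_MATRIX = 230e9, 3.5e9          # Young's moduli, Pa (generic carbon/epoxy)
      RHO_FIBER, RHO_MATRIX = 1800.0, 1200.0    # densities, kg/m^3
      for vf in (0.2, 0.4, 0.6):                # local fiber volume fractions
          e_long = vf * E_FIBER + (1 - vf) * E_MATRIX
          rho = vf * RHO_FIBER + (1 - vf) * RHO_MATRIX
          print(f"Vf={vf:.1f}: E_L={e_long / 1e9:6.1f} GPa, rho={rho:6.0f} kg/m^3, "
                f"specific stiffness={e_long / rho / 1e6:5.1f} MJ/kg")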
    To tackle this issue, an international research team led by Professor Jaewook Lee of the Gwangju Institute of Science and Technology in Korea recently developed a new approach for the inverse design of FRCs with spatially-varying fiber size and orientation, also known as “functionally graded composites.” The proposed method is based on “multiscale topology optimization,” which allows one to automatically find the best functionally graded composite structure given a set of design parameters and constraints.
    “Topology optimization is an AI-based design technique that relies on computer simulation to generate an optimal structural shape instead of on the designer’s intuition and experience,” explains Prof. Lee. “On the other hand, a multiscale approach is a numerical method that combines the results of analyses conducted at different scales to derive structural characteristics.” Unlike similar existing approaches that are limited to two-dimensional functionally graded composites, the proposed methodology can simultaneously determine the optimal three-dimensional composite structure alongside its microscale fiber densities and fiber orientations.
    The team demonstrated the potential of their method through several computer-assisted experiments where various functionally graded composite designs with constant or varying fiber sizes were compared. The experiments included designs for a bell crank, a displacement inverter mechanism, and a simple support beam. As expected, the results showed improved performances in the designs with locally tailored fiber sizes. This paper was made available online on October 9, 2021, and published in Volume 279 of Composite Structures on January 1, 2022.
    Many applications for vehicles, aircraft, and robotics benefit from lightweight structures, and the proposed approach can now assist engineers to this end. However, the benefits can extend well beyond the target applications themselves. As Prof. Lee explains: “Our methodology could help develop more energy-efficient vehicles and machinery by weight reduction, which would reduce their energy consumption and, in turn, contribute towards achieving carbon neutrality.”
    Story Source:
    Materials provided by GIST (Gwangju Institute of Science and Technology). Note: Content may be edited for style and length.