More stories

  •

    New twist on DNA data storage lets users preview stored files

    Researchers from North Carolina State University have turned a longstanding challenge in DNA data storage into a tool, using it to offer users previews of stored data files — such as thumbnail versions of image files.
    DNA data storage is an attractive technology because it has the potential to store a tremendous amount of data in a small package, it can store that data for a long time, and it does so in an energy-efficient way. However, until now, it wasn’t possible to preview the data in a file stored as DNA — if you wanted to know what a file was, you had to “open” the entire file.
    “The advantage to our technique is that it is more efficient in terms of time and money,” says Kyle Tomek, lead author of a paper on the work and a Ph.D. student at NC State. “If you are not sure which file has the data you want, you don’t have to sequence all of the DNA in all of the potential files. Instead, you can sequence much smaller portions of the DNA files to serve as previews.”
    Here’s a quick overview of how this works.
    Users “name” their data files by attaching sequences of DNA called primer-binding sequences to the ends of DNA strands that are storing information. To identify and extract a given file, most systems use polymerase chain reaction (PCR). Specifically, they use a small DNA primer that matches the corresponding primer-binding sequence to identify the DNA strands containing the file you want. The system then uses PCR to make lots of copies of the relevant DNA strands, then sequences the entire sample. Because the process makes numerous copies of the targeted DNA strands, the signal of the targeted strands is stronger than the rest of the sample, making it possible to identify the targeted DNA sequence and read the file.
    However, one challenge that DNA data storage researchers have grappled with is that if two or more files have similar file names, the PCR will inadvertently copy pieces of multiple data files. As a result, users have to give files very distinct names to avoid getting messy data.
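    The naming scheme, and the cross-talk problem with similar file names, can be sketched in a few lines. This is a toy model, not the NC State system; all sequences, payloads, and function names below are made up for illustration:

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def pcr_select(pool, primer, max_mismatch=1):
    """Toy model of PCR-based file retrieval: primers also anneal to
    near-matching primer-binding sequences, so files with similar
    'names' get copied along with the target."""
    return [payload for tag, payload in pool if hamming(tag, primer) <= max_mismatch]

# (primer-binding sequence, data payload) pairs -- two files with
# deliberately similar "names" and one with a distinct name.
pool = [
    ("ACGTAC", "file-A chunk 1"),
    ("ACGTAG", "file-B chunk 1"),   # differs from file-A's name by one base
    ("TTGCAA", "file-C chunk 1"),
]

print(pcr_select(pool, "ACGTAC"))   # pulls in file-B too: messy data
print(pcr_select(pool, "TTGCAA"))   # a distinct name retrieves cleanly
```

    The toy `max_mismatch` tolerance stands in for the biochemical reality that PCR primers can bind imperfect matches, which is exactly why conventional systems demand very distinct file names.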

  •

    Researchers create quantum microscope that can see the impossible

    In a major scientific leap, University of Queensland researchers have created a quantum microscope that can reveal biological structures that would otherwise be impossible to see.
    This paves the way for applications in biotechnology, and could extend far beyond this into areas ranging from navigation to medical imaging.
    The microscope is powered by the science of quantum entanglement, an effect Einstein described as “spooky action at a distance.”
    Professor Warwick Bowen, from UQ’s Quantum Optics Lab and the ARC Centre of Excellence for Engineered Quantum Systems (EQUS), said it was the first entanglement-based sensor with performance beyond the best possible existing technology.
    “This breakthrough will spark all sorts of new technologies — from better navigation systems to better MRI machines, you name it,” Professor Bowen said.
    “Entanglement is thought to lie at the heart of a quantum revolution.”

  •

    Important contribution to spintronics has received little consideration until now

    The movement of electrons can have a significantly greater influence on spintronic effects than previously assumed. This discovery was made by an international team of researchers led by physicists from the Martin Luther University Halle-Wittenberg (MLU). Until now, calculations of these effects have primarily taken the spin of electrons into consideration. The study was published in the journal “Physical Review Research” and offers a new approach to developing spintronic components.
    Many technical devices are based on conventional semiconductor electronics. Charge currents are used to store and process information in these components. However, this electric current generates heat and energy is lost. To get around this problem, spintronics uses a fundamental property of electrons known as spin. “This is an intrinsic angular momentum, which can be imagined as a rotational movement of the electron around its own axis,” explains Dr Annika Johansson, a physicist at MLU. The spin is linked to a magnetic moment that, in addition to the charge of the electrons, could be used in a new generation of fast and energy-efficient components.
    Achieving this requires an efficient conversion between charge and spin currents. This conversion is made possible by the Edelstein effect: by applying an electric field, a charge current is generated in an originally non-magnetic material. In addition, the electron spins align, and the material becomes magnetic. “Previous papers on the Edelstein effect primarily focused on how electron spin contributes to magnetisation, but electrons can also carry an orbital moment that also contributes to magnetisation. If the spin is the intrinsic rotation of the electron, then the orbital moment is the motion around the nucleus of the atom,” says Johansson. This is similar to the earth, which rotates both on its own axis and around the sun. Like spin, this orbital moment generates a magnetic moment.
    In this latest study, the researchers used simulations to investigate the interface between two oxide materials commonly used in spintronics. “Although both materials are insulators, a metallic electron gas is present at their interface which is known for its efficient charge-to-spin conversion,” says Johansson. The team also factored the orbital moment into the calculation of the Edelstein effect and found that its contribution to the Edelstein effect is at least one order of magnitude greater than that of spin. These findings could help to increase the efficiency of spintronic components.
    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.

  •

    Machine learning speeds up simulations in material science

    Research, development, and production of novel materials depend heavily on the availability of fast and at the same time accurate simulation methods. Machine learning, in which artificial intelligence (AI) autonomously acquires and applies new knowledge, will soon enable researchers to develop complex material systems in a purely virtual environment. How does this work, and which applications will benefit? In an article published in the Nature Materials journal, a researcher from Karlsruhe Institute of Technology (KIT) and his colleagues from Göttingen and Toronto explain it all.
    Digitization and virtualization are becoming increasingly important in a wide range of scientific disciplines. One of these disciplines is materials science: research, development, and production of novel materials depend heavily on the availability of fast and at the same time accurate simulation methods. This, in turn, is beneficial for a wide range of different applications — from efficient energy storage systems, such as those indispensable for the use of renewable energies, to new medicines, for whose development an understanding of complex biological processes is required. AI and machine learning methods can take simulations in material sciences to the next level. “Compared to conventional simulation methods based on classical or quantum mechanical calculations, the use of neural networks specifically tailored to material simulations enables us to achieve a significant speed advantage,” explains physicist and AI expert Professor Pascal Friederich, Head of the AiMat — Artificial Intelligence for Materials Sciences research group at KIT’s Institute of Theoretical Informatics (ITI). “With faster simulation systems, scientists will be able to develop larger and more complex material systems in a purely virtual environment, and to understand and optimize them down to the atomic level.”
    High Precision from the Atom to the Material
    In an article published in Nature Materials, Pascal Friederich, who is also associate group leader of the Nanomaterials by Information-Guided Design division at KIT’s Institute of Nanotechnology (INT), presents, together with researchers from the University of Göttingen and the University of Toronto, an overview of the basic principles of machine learning used for simulations in material sciences. This also includes the data acquisition process and active learning methods. Machine learning algorithms not only enable artificial intelligence to process the input data, but also to find patterns and correlations in large data sets, learn from them, and make autonomous predictions and decisions. For simulations in materials science, it is important to achieve high accuracy over different time and size scales, ranging from the atom to the material, while limiting computational costs. In their article, the scientists also discuss various current applications, such as small organic molecules and large biomolecules, structurally disordered solid, liquid, and gaseous materials, as well as complex crystalline systems — for example, metal-organic frameworks that can be used for gas storage or for separation, for sensors or for catalysts.
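    The surrogate-model idea behind this speed-up can be illustrated with a deliberately tiny example. Here plain least-squares stands in for the neural networks the article discusses, and all data are synthetic:

```python
import random

random.seed(0)

# Synthetic "reference calculations": a structural descriptor x mapped
# to an energy y by an unknown law (here linear, with a little noise).
TRUE_W = 1.7
xs = [random.uniform(-1.0, 1.0) for _ in range(200)]
ys = [TRUE_W * x + random.gauss(0.0, 0.01) for x in xs]

# Fit the surrogate once (closed-form least squares); in real material
# simulations a neural network tailored to atomic structures plays this role.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# From now on, an "energy" for a new structure costs one multiplication
# instead of a full classical or quantum mechanical calculation.
print(round(w, 2))
```

    The trade-off sketched here is the general one: expensive reference calculations are spent once on training data, after which predictions for new structures are nearly free.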
    Even More Speed with Hybrid Methods
    To further extend the possibilities of material simulations in the future, the researchers from Karlsruhe, Göttingen, and Toronto suggest the development of hybrid methods: these combine machine learning (ML) and molecular mechanics (MM) methods. MM simulations use so-called force fields in order to calculate the forces acting on each individual particle and thus predict motions. As the potentials of the ML and MM methods are quite similar, a tight integration with variable transition areas is possible. These hybrid methods could significantly accelerate the simulation of large biomolecules or enzymatic reactions in the future, for example.
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.

  •

    Physicists achieve significant improvement in spotting neutrinos in a cosmic haystack

    How do you spot a subatomic neutrino in a “haystack” of particles streaming from space? That’s the daunting prospect facing physicists studying neutrinos with detectors near Earth’s surface. With little to no shielding in such non-subterranean locations, surface-based neutrino detectors, usually searching for neutrinos produced by particle accelerators, are bombarded by cosmic rays — relentless showers of subatomic and nuclear particles produced in Earth’s atmosphere by interactions with particles streaming from more-distant cosmic locations. These abundant travelers, mostly muons, create a web of crisscrossing particle tracks that can easily obscure a rare neutrino event.
    Fortunately, physicists have developed tools to tone down the cosmic “noise.”
    A team including physicists from the U.S. Department of Energy’s Brookhaven National Laboratory describes the approach in two papers recently accepted for publication in Physical Review Applied and the Journal of Instrumentation (JINST). These papers demonstrate the scientists’ ability to extract clear neutrino signals from the MicroBooNE detector at DOE’s Fermi National Accelerator Laboratory (Fermilab). The method combines CT-scanner-like image reconstruction with data-sifting techniques that make accelerator-produced neutrino signals stand out 5 to 1 against the cosmic ray background.
    “We developed a set of algorithms that reduce the cosmic ray background by a factor of 100,000,” said Chao Zhang, one of the Brookhaven Lab physicists who helped to develop the data-filtering techniques. Without the filtering, MicroBooNE would see 20,000 cosmic rays for every neutrino interaction, he said. “This paper demonstrates the crucial ability to eliminate the cosmic ray backgrounds.”
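    The quoted numbers are mutually consistent, which is worth checking: cutting a 20,000-to-1 cosmic-ray background by a factor of 100,000 leaves about 0.2 background events per neutrino, i.e. the reported 5-to-1 signal-to-background ratio:

```python
cosmic_per_neutrino = 20_000    # raw background: cosmic rays per neutrino event
rejection_factor = 100_000      # background reduction from the filtering algorithms

background_per_neutrino = cosmic_per_neutrino / rejection_factor
signal_to_background = 1 / background_per_neutrino

print(background_per_neutrino)  # 0.2
print(signal_to_background)     # 5.0
```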
    Bonnie Fleming, a professor at Yale University who is a co-spokesperson for MicroBooNE, said, “This work is critical both for MicroBooNE and for the future U.S. neutrino research program. Its impact will extend notably beyond the use of this ‘Wire-Cell’ analysis technique, even on MicroBooNE, where other reconstruction paradigms have adopted these data-sorting methods to dramatically reduce cosmic ray backgrounds.”
    Tracking neutrinos
    MicroBooNE is one of three detectors that form the international Short-Baseline Neutrino program at Fermilab, each located a different distance from a particle accelerator that generates a carefully controlled neutrino beam. The three detectors are designed to count up different types of neutrinos at increasing distances to look for discrepancies from what’s expected based on the mix of neutrinos in the beam and what’s known about neutrino “oscillation.” Oscillation is a process by which neutrinos swap identities among three known types, or “flavors.” Spotting discrepancies in neutrino counts could point to a new unknown oscillation mechanism — and possibly a fourth neutrino variety.
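    Why measure at several distances? In the standard two-flavor approximation, the probability of one flavor appearing as another depends on the distance-to-energy ratio, P = sin²(2θ) · sin²(1.27 · Δm² · L / E), with Δm² in eV², L in km and E in GeV. This is a textbook sketch, not MicroBooNE’s analysis, and the parameter values below are arbitrary:

```python
import math

def appearance_probability(theta, dm2_ev2, L_km, E_gev):
    """Two-flavor neutrino oscillation probability (standard approximation):
    P = sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    return math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

# At the source nothing has oscillated yet; farther away the flavor mix
# changes -- hence detectors at several baselines.
print(appearance_probability(0.8, 2.5e-3, 0.0, 1.0))    # 0.0 at L = 0
print(appearance_probability(0.8, 2.5e-3, 500.0, 1.0))
```

    An extra, unexpected wiggle in counts versus L/E is exactly the kind of discrepancy that would hint at a fourth neutrino variety.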

  •

    'PrivacyMic': For a smart speaker that doesn't eavesdrop

    Microphones are perhaps the most common electronic sensor in the world, with an estimated 320 million listening for our commands in the world’s smart speakers. The trouble is that they’re capable of hearing everything else, too.
    But now, a team of University of Michigan researchers has developed a system that can inform a smart home — or listen for the signal that would turn on a smart speaker — without eavesdropping on audible sound.
    The key to the device, called PrivacyMic, is ultrasonic sound at frequencies above the range of human hearing. Running dishwashers, computer monitors, even finger snaps, all generate ultrasonic sounds, which have a frequency of 20 kilohertz or higher. We can’t hear them — but dogs, cats and PrivacyMic can.
    The system pieces together the ultrasonic information that’s all around us to identify when its services are needed, and sense what’s going on around it. Researchers have demonstrated that it can identify household and office activities with greater than 95% accuracy.
    “There are a lot of situations where we want our home automation system or our smart speaker to understand what’s going on in our home, but we don’t necessarily want it listening to our conversations,” said Alanson Sample, U-M associate professor of electrical engineering and computer science. “And what we’ve found is that you can have a system that understands what’s going on and a hard guarantee that it will never record any audible information.”
    Ubiquitous computing + privacy
    PrivacyMic can filter out audible information right on the device. That makes it more secure than encryption or other security measures that take steps to secure audio data after it’s recorded or limit who has access to it. Those measures could all leave sensitive information vulnerable to hackers, but with PrivacyMic, the information simply doesn’t exist.
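    The audible/ultrasonic split can be sketched with a toy frequency-domain filter. This is illustrative only — the actual PrivacyMic hardware discards audible content before anything is recorded, rather than filtering a stored signal:

```python
import cmath
import math

FS = 96_000          # sample rate high enough to capture ultrasound
N = 96               # 1 ms window -> DFT bins spaced 1 kHz apart
CUTOFF_HZ = 20_000   # approximate upper limit of human hearing

# Synthetic input: an audible 4 kHz tone plus an ultrasonic 24 kHz tone.
signal = [math.sin(2 * math.pi * 4_000 * n / FS) +
          math.sin(2 * math.pi * 24_000 * n / FS) for n in range(N)]

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

spectrum = dft(signal)

# Zero every bin in the audible band (and its mirror image), keeping only
# ultrasonic content -- the PrivacyMic idea in miniature.
filtered = [c if CUTOFF_HZ <= k * FS / N <= FS - CUTOFF_HZ else 0
            for k, c in enumerate(spectrum)]

print(abs(filtered[4]))          # audible 4 kHz tone: removed
print(abs(filtered[24]) > 1.0)   # ultrasonic 24 kHz tone: preserved
```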

  •

    Researchers create intelligent electronic microsystems from 'green' material

    A research team from the University of Massachusetts Amherst has created an electronic microsystem that can intelligently respond to information inputs without any external energy input, much like a self-autonomous living organism. The microsystem is constructed from a novel type of electronics that can process ultralow electronic signals and incorporates a device that can generate electricity “out of thin air” from the ambient environment.
    The groundbreaking research was published June 7 in the journal Nature Communications.
    Jun Yao, an assistant professor of electrical and computer engineering (ECE) and an adjunct professor of biomedical engineering, led the research with his longtime collaborator, Derek R. Lovley, a Distinguished Professor of microbiology.
    Both of the key components of the microsystem are made from protein nanowires, a “green” electronic material that is renewably produced from microbes without producing “e-waste.” The research heralds the potential of future green electronics made from sustainable biomaterials that are more amenable to interacting with the human body and diverse environments.
    This breakthrough project is producing a “self-sustained intelligent microsystem,” according to the U.S. Army Combat Capabilities Development Command Army Research Laboratory, which is funding the research.
    Tianda Fu, a graduate student in Yao’s group, is the lead author. “It’s an exciting start to explore the feasibility of incorporating ‘living’ features in electronics. I’m looking forward to further evolved versions,” Fu said.
    The project represents a continuing evolution of the team’s recent research. Previously, the team discovered that electricity can be generated from moisture in the ambient environment with a protein-nanowire-based Air Generator (or ‘Air-Gen’), a device that continuously produces electricity in almost all environments found on Earth. The Air-Gen invention was reported in Nature in 2020.
    Also in 2020, Yao’s lab reported in Nature Communications that the protein nanowires can be used to construct electronic devices called memristors that can mimic brain computation and work with ultralow electrical signals that match the biological signal amplitudes.
    “Now we piece the two together,” Yao said of the creation. “We make microsystems in which the electricity from Air-Gen is used to drive sensors and circuits constructed from protein-nanowire memristors. Now the electronic microsystem can get energy from the environment to support sensing and computation without the need of an external energy source (e.g. battery). It has full energy self-sustainability and intelligence, just like the self-autonomy in a living organism.”
    The system is also made from environmentally friendly biomaterial — protein nanowires harvested from bacteria. Yao and Lovley developed the Air-Gen from the microbe Geobacter, discovered by Lovley many years ago, which was then utilized to create electricity from humidity in the air and later to build memristors capable of mimicking human intelligence.
    “So, from both function and material,” says Yao, “we are making an electronic system more bio-alike or living-alike.”
    “The work demonstrates that one can fabricate a self-sustained intelligent microsystem,” said Albena Ivanisevic, the biotronics program manager at the U.S. Army Combat Capabilities Development Command Army Research Laboratory. “The team from UMass has demonstrated the use of artificial neurons in computation. It is particularly exciting that the protein nanowire memristors show stability in aqueous environment and are amenable to further functionalization. Additional functionalization not only promises to increase their stability but also expand their utility for sensor and novel communication modalities of importance to the Army.”
    Story Source:
    Materials provided by University of Massachusetts Amherst. Note: Content may be edited for style and length.

  •

    Artificial intelligence enhances efficacy of sleep disorder treatments

    Difficulty sleeping, sleep apnea and narcolepsy are among a range of sleep disorders that thousands of Danes suffer from. Furthermore, it is estimated that sleep apnea is undiagnosed in as many as 200,000 Danes.
    In a new study, researchers from the University of Copenhagen’s Department of Computer Science have collaborated with the Danish Center for Sleep Medicine at the Danish hospital Rigshospitalet to develop an artificial intelligence algorithm that can improve diagnoses, treatments, and our overall understanding of sleep disorders.
    “The algorithm is extraordinarily precise. We completed various tests in which its performance rivaled that of the best doctors in the field, worldwide,” states Mathias Perslev, a PhD student at the Department of Computer Science and lead author of the study, recently published in the journal npj Digital Medicine.
    Can support doctors in their treatments
    Today’s sleep disorder examinations typically begin with admission to a sleep clinic. Here, a person’s sleep is monitored overnight using various measuring instruments. A specialist in sleep disorders then reviews the 7-8 hours of measurements from the patient’s night of sleep.
    The doctor manually divides these 7-8 hours of sleep into 30-second intervals, all of which must be categorized into different sleep phases, such as REM (rapid eye movement) sleep, light sleep, deep sleep, etc. It is a time-consuming job that the algorithm can perform in seconds.
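    The scale of that manual task is easy to quantify (illustrative arithmetic, using the figures above):

```python
# One night of recording, scored in 30-second epochs as described above.
hours_recorded = 8
epoch_seconds = 30

epochs_per_night = hours_recorded * 3600 // epoch_seconds
print(epochs_per_night)  # 960 epochs to label by hand, per patient, per night
```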