More stories

  • AI helps design the perfect chickpea

A massive international research effort has led to the development of a genetic model for the ‘ultimate’ chickpea, with the potential to lift crop yields by up to 12 per cent.
    The research consortium genetically mapped thousands of chickpea varieties, and the UQ team then used this information to identify the most valuable gene combinations using artificial intelligence (AI).
Professor Ben Hayes led the UQ component of the project with Professor Kai Voss-Fels and Associate Professor Lee Hickey, developing a ‘haplotype’ genomic prediction breeding strategy to improve seed weight performance.
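    To illustrate the idea of haplotype-based genomic prediction in code, the sketch below fits a simple ridge-regression (GBLUP-style) model that maps haplotype dosages to seed weight and ranks candidate lines. All data, block counts and effect sizes are simulated placeholders, not the consortium's actual pipeline or figures.

    ```python
    # Minimal sketch (not the consortium's pipeline): haplotype-based genomic
    # prediction as ridge regression from haplotype dosages to seed weight.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data: 500 chickpea lines, 200 haplotype blocks, dosage 0/1/2.
    n_lines, n_blocks = 500, 200
    X = rng.integers(0, 3, size=(n_lines, n_blocks)).astype(float)

    # Simulated "true" haplotype effects on seed weight (arbitrary units).
    true_effects = rng.normal(0, 0.5, n_blocks)
    y = 20 + X @ true_effects + rng.normal(0, 1.0, n_lines)

    # Ridge (GBLUP-like) solution on centered data:
    # beta = (X'X + lambda*I)^-1 X'y
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    lam = 10.0
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_blocks), Xc.T @ yc)

    # Predict breeding values and rank candidate lines for selection.
    pred = Xc @ beta + y.mean()
    best = np.argsort(pred)[::-1][:10]
    print("Top 10 candidate lines by predicted seed weight:", best)
    ```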
    “Most crop species only have a few varieties sequenced, so it was a massive undertaking by the international team to analyse more than 3000 cultivated and wild varieties,” Professor Hayes said.
    The landmark international study was led by Dr Rajeev Varshney from the International Crops Research Institute for the Semi-Arid Tropics in Hyderabad, India. The study confirmed chickpea’s origin in the Fertile Crescent and provides a complete picture of genetic variation within chickpea.
“We identified 1,582 novel genes and established the pan-genome of chickpea, which will serve as a foundation for breeding superior chickpea varieties with enhanced yield, higher resistance to drought, heat and diseases,” Dr Varshney said.

  • Machine learning refines earthquake detection capabilities

    Researchers at Los Alamos National Laboratory are applying machine learning algorithms to help interpret massive amounts of ground deformation data collected with Interferometric Synthetic Aperture Radar (InSAR) satellites; the new algorithms will improve earthquake detection.
    “Applying machine learning to InSAR data gives us a new way to understand the physics behind tectonic faults and earthquakes,” said Bertrand Rouet-Leduc, a geophysicist in Los Alamos’ Geophysics group. “That’s crucial to understanding the full spectrum of earthquake behavior.”
    New satellites, such as the Sentinel 1 Satellite Constellation and the upcoming NISAR Satellite, are opening a new window into tectonic processes by allowing researchers to observe length and time scales that were not possible in the past. However, existing algorithms are not suited for the vast amount of InSAR data flowing in from these new satellites, and even more data will be available in the near future.
    In order to process all of this data, the team at Los Alamos developed the first tool based on machine learning algorithms to extract ground deformation from InSAR data, which enables the detection of ground deformation automatically — without human intervention — at a global scale. Equipped with autonomous detection of deformation on faults, this tool can help close the gap in existing detection capabilities and form the foundations for a systematic exploration of the properties of active faults.
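    As a rough illustration of what automated deformation detection involves, the sketch below flags a pixel whose InSAR line-of-sight time series shows a statistically significant trend. The rates, noise level, acquisition schedule and threshold are assumed values, and this is not the Los Alamos team's algorithm.

    ```python
    # Illustrative sketch only (not the Los Alamos tool): automatically flag
    # millimeter-scale deformation in a noisy InSAR line-of-sight time series
    # by comparing a fitted linear trend against its estimated uncertainty.
    import numpy as np

    rng = np.random.default_rng(1)

    def detect_deformation(times_yr, los_mm, z_threshold=3.0):
        """Return (detected, rate_mm_per_yr) for one pixel's time series."""
        A = np.column_stack([times_yr, np.ones_like(times_yr)])
        coef, *_ = np.linalg.lstsq(A, los_mm, rcond=None)
        resid = los_mm - A @ coef
        dof = max(len(los_mm) - 2, 1)
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(A.T @ A)
        rate, rate_std = coef[0], np.sqrt(cov[0, 0])
        return abs(rate) > z_threshold * rate_std, rate

    # Hypothetical example: 5 years of 12-day acquisitions,
    # 3 mm/yr of fault creep buried in 2 mm of noise.
    t = np.arange(0, 5, 12 / 365.25)
    noisy = 3.0 * t + rng.normal(0, 2.0, t.size)
    print(detect_deformation(t, noisy))   # e.g. (True, ~3 mm/yr)
    ```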
    Systematically characterizing slip behavior on active faults is key to unraveling the physics of tectonic faulting, and will help researchers understand the interplay between slow earthquakes, which gently release stress, and fast earthquakes, which quickly release stress and can cause significant damage to surrounding communities.
    The team’s new methodology enables the detection of ground deformation automatically at a global scale, with a much finer temporal resolution than existing approaches, and a detection threshold of a few millimeters. Previous detection thresholds were in the centimeter range.
In preliminary results, applied to data over the North Anatolian Fault, the method reaches a two-millimeter detection threshold, revealing a slow earthquake twice as extensive as previously recognized.
    This work was funded through Los Alamos National Laboratory’s Laboratory Directed Research and Development Office.
    Story Source:
Materials provided by DOE/Los Alamos National Laboratory.

  • First quantum simulation of baryons

A team of researchers led by an Institute for Quantum Computing (IQC) faculty member performed the first-ever simulation of baryons — composite quantum particles made up of three quarks — on a quantum computer.
    With their results, the team has taken a step towards more complex quantum simulations that will allow scientists to study neutron stars, learn more about the earliest moments of the universe, and realize the revolutionary potential of quantum computers.
    “This is an important step forward — it is the first simulation of baryons on a quantum computer ever,” Christine Muschik, an IQC faculty member, said. “Instead of smashing particles in an accelerator, a quantum computer may one day allow us to simulate these interactions that we use to study the origins of the universe and so much more.”
    Muschik, also a physics and astronomy professor at the University of Waterloo and associate faculty member at the Perimeter Institute, leads the Quantum Interactions Group, which studies the quantum simulation of lattice gauge theories. These theories are descriptions of the physics of reality, including the Standard Model of particle physics. The more inclusive a gauge theory is of fields, forces, particles, spatial dimensions and other parameters, the more complex it is — and the more difficult it is for a classical supercomputer to model.
Non-Abelian gauge theories are particularly interesting candidates for simulations because they are responsible for the stability of matter as we know it. Classical computers can simulate the non-Abelian matter described in these theories, but there are important situations — such as matter with high densities — that are inaccessible for regular computers. And while the ability to describe and simulate non-Abelian matter is fundamental for being able to describe our universe, it had never before been simulated on a quantum computer.
    Working with Randy Lewis from York University, Muschik’s team at IQC developed a resource-efficient quantum algorithm that allowed them to simulate a system within a simple non-Abelian gauge theory on IBM’s cloud quantum computer paired with a classical computer.
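    The hybrid quantum-classical idea can be sketched on a conventional computer: a small parameterized circuit is simulated, and a classical optimizer adjusts its parameters to minimize the energy of a Hamiltonian. The two-qubit Hamiltonian and ansatz below are illustrative stand-ins, not the team's SU(2) lattice gauge model, their resource-efficient algorithm, or the IBM hardware interface.

    ```python
    # Toy sketch of the hybrid variational idea (a classical stand-in):
    # simulate a tiny parameterized circuit with NumPy and minimize its
    # energy for a toy two-qubit Hamiltonian with a classical optimizer.
    import numpy as np
    from scipy.optimize import minimize

    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.diag([1.0, -1.0])
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

    # Hypothetical stand-in Hamiltonian (not the SU(2) gauge Hamiltonian).
    H = np.kron(Z, Z) + 0.5 * (np.kron(X, I) + np.kron(I, X))

    def ry(theta):
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    def energy(params):
        """<psi(params)|H|psi(params)> for a 2-parameter RY + CNOT ansatz."""
        psi = np.zeros(4); psi[0] = 1.0                     # start in |00>
        psi = np.kron(ry(params[0]), ry(params[1])) @ psi   # single-qubit rotations
        psi = CNOT @ psi                                     # entangling gate
        return float(psi @ H @ psi)

    result = minimize(energy, x0=[0.1, 0.1], method="Nelder-Mead")
    print("Estimated ground-state energy within the ansatz:", result.fun)
    ```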
    With this landmark step, the researchers are blazing a trail towards the quantum simulation of gauge theories far beyond the capabilities and resources of even the most powerful supercomputers in the world.
    “What’s exciting about these results for us is that the theory can be made so much more complicated,” Jinglei Zhang, a postdoctoral fellow at IQC and the University of Waterloo Department of Physics and Astronomy, said. “We can consider simulating matter at higher densities, which is beyond the capability of classical computers.”
    As scientists develop more powerful quantum computers and quantum algorithms, they will be able to simulate the physics of these more complex non-Abelian gauge theories and study fascinating phenomena beyond the reach of our best supercomputers.
    This breakthrough demonstration is an important step towards a new era of understanding the universe based on quantum simulation.
    Story Source:
Materials provided by University of Waterloo.

  • A personalized exosuit for real-world walking

People rarely walk at a constant speed and a single incline. We change speed when rushing to the next appointment, catching a crosswalk signal, or going for a casual stroll in the park. Slopes change all the time too, whether we’re going for a hike or up a ramp into a building. In addition to environmental variability, how we walk is influenced by sex, height, age, and muscle strength, and sometimes by neural or muscular disorders such as stroke or Parkinson’s disease.
    This human and task variability is a major challenge in designing wearable robotics to assist or augment walking in real-world conditions. To date, customizing wearable robotic assistance to an individual’s walking requires hours of manual or automatic tuning — a tedious task for healthy individuals and often impossible for older adults or clinical patients.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a new approach in which robotic exosuit assistance can be calibrated to an individual and adapt to a variety of real-world walking tasks in a matter of seconds. The bioinspired system uses ultrasound measurements of muscle dynamics to develop a personalized and activity-specific assistance profile for users of the exosuit.
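    Conceptually, an activity-specific profile can be built by timing and scaling a nominal assistance curve from a muscle signal. The sketch below does this with a hypothetical ultrasound-derived contraction estimate; the curve shape, gain, width and signals are illustrative assumptions, not the SEAS controller.

    ```python
    # Conceptual sketch only (hypothetical signals and parameters): derive a
    # personalized assistance profile by timing and scaling a nominal torque
    # curve from an ultrasound-derived muscle-contraction estimate.
    import numpy as np

    gait_pct = np.linspace(0, 100, 101)          # percent of gait cycle

    def personalize_profile(contraction, gain=1.0, width=12.0):
        """Shift/scale a nominal bell-shaped assistance curve so its peak
        aligns with the measured peak muscle contraction and scales with it."""
        peak_pct = gait_pct[np.argmax(contraction)]   # when the muscle works hardest
        amplitude = gain * contraction.max()           # how hard it works (arbitrary units)
        return amplitude * np.exp(-0.5 * ((gait_pct - peak_pct) / width) ** 2)

    # Hypothetical ultrasound-derived contraction estimate for one walking task.
    contraction = np.clip(np.sin((gait_pct - 30) / 40 * np.pi), 0, None)
    profile = personalize_profile(contraction)
    print("Assistance peaks at", gait_pct[np.argmax(profile)], "% of the gait cycle")
    ```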
    “Our muscle-based approach enables relatively rapid generation of individualized assistance profiles that provide real benefit to the person walking,” said Robert D. Howe, the Abbott and James Lawrence Professor of Engineering, and co-author of the paper.
    The research is published in Science Robotics.
Previous bioinspired attempts at developing individualized assistance profiles for robotic exosuits focused on the dynamic movements of the limbs of the wearer. The SEAS researchers took a different approach. The research was a collaboration between Howe’s Harvard Biorobotics Laboratory, which has extensive experience in ultrasound imaging and real-time image processing, and the Harvard Biodesign Lab, run by Conor J. Walsh, the Paul A. Maeder Professor of Engineering and Applied Sciences at SEAS, which develops soft wearable robots for augmenting and restoring human performance.

  • When algorithms get creative

    Our brains are incredibly adaptive. Every day, we form new memories, acquire new knowledge, or refine existing skills. This stands in marked contrast to our current computers, which typically only perform pre-programmed actions. At the core of our adaptability lies synaptic plasticity. Synapses are the connection points between neurons, which can change in different ways depending on how they are used. This synaptic plasticity is an important research topic in neuroscience, as it is central to learning processes and memory. To better understand these brain processes and build adaptive machines, researchers in the fields of neuroscience and artificial intelligence (AI) are creating models for the mechanisms underlying these processes. Such models for learning and plasticity help to understand biological information processing and should also enable machines to learn faster.
    Algorithms mimic biological evolution
    Working in the European Human Brain Project, researchers at the Institute of Physiology at the University of Bern have now developed a new approach based on so-called evolutionary algorithms. These computer programs search for solutions to problems by mimicking the process of biological evolution, such as the concept of natural selection. Thus, biological fitness, which describes the degree to which an organism adapts to its environment, becomes a model for evolutionary algorithms. In such algorithms, the “fitness” of a candidate solution is how well it solves the underlying problem.
    Amazing creativity
    The newly developed approach is referred to as the “evolving-to-learn” (E2L) approach or “becoming adaptive.” The research team led by Dr. Mihai Petrovici of the Institute of Physiology at the University of Bern and Kirchhoff Institute for Physics at the University of Heidelberg, confronted the evolutionary algorithms with three typical learning scenarios. In the first, the computer had to detect a repeating pattern in a continuous stream of input without receiving feedback about its performance. In the second scenario, the computer received virtual rewards when behaving in a particular desired manner. Finally, in the third scenario of “guided learning,” the computer was precisely told how much its behavior deviated from the desired one.
    “In all these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity, and thereby successfully solved a new task,” says Dr. Jakob Jordan, corresponding and co-first author from the Institute of Physiology at the University of Bern. In doing so, the algorithms showed amazing creativity: “For example, the algorithm found a new plasticity model in which signals we defined are combined to form a new signal. In fact, we observe that networks using this new signal learn faster than with previously known rules,” emphasizes Dr. Maximilian Schmidt from the RIKEN Center for Brain Science in Tokyo, co-first author of the study. The results were published in the journal eLife.
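    The flavor of the approach can be conveyed with a toy example: an evolutionary search over the coefficients of a local synaptic update rule, where fitness is how well a single linear neuron learns a small supervised ("guided learning") task. The rule form, task and hyperparameters below are illustrative assumptions, not the published E2L implementation.

    ```python
    # Toy sketch of the evolving-to-learn idea: evolve coefficients (a, b, c, d)
    # of a local update rule  dw = lr * (a*pre*err + b*pre + c*err + d),
    # scoring each rule by how well a single linear neuron learns a toy task.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 5))
    w_target = rng.normal(size=5)
    y = X @ w_target                         # toy "guided learning" targets

    def fitness(coeffs, lr=0.05, epochs=10):
        a, b, c, d = coeffs
        w = np.zeros(5)
        for _ in range(epochs):
            for x, t in zip(X, y):
                post = w @ x
                err = t - post               # teaching signal available to the rule
                w += lr * (a * x * err + b * x + c * err + d)
        return -np.mean((X @ w - y) ** 2)    # higher is better

    # Simple (mu + lambda) evolutionary loop with Gaussian mutation.
    population = [rng.normal(0, 0.5, 4) for _ in range(20)]
    for generation in range(15):
        parents = sorted(population, key=fitness, reverse=True)[:5]
        population = parents + [p + rng.normal(0, 0.1, 4)
                                for p in parents for _ in range(3)]

    best = max(population, key=fitness)
    print("Best evolved rule coefficients (a, b, c, d):", np.round(best, 2))
    ```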
“We see E2L as a promising approach to gain deep insights into biological learning principles and accelerate progress towards powerful artificial learning machines,” says Mihai Petrovici. “We hope it will accelerate the research on synaptic plasticity in the nervous system,” concludes Jakob Jordan. The findings will provide new insights into how healthy and diseased brains work. They may also pave the way for the development of intelligent machines that can better adapt to the needs of their users.
    Story Source:
Materials provided by University of Bern.

  • Adding sound to quantum simulations

    When sound was first incorporated into movies in the 1920s, it opened up new possibilities for filmmakers such as music and spoken dialogue. Physicists may be on the verge of a similar revolution, thanks to a new device developed at Stanford University that promises to bring an audio dimension to previously silent quantum science experiments.
    In particular, it could bring sound to a common quantum science setup known as an optical lattice, which uses a crisscrossing mesh of laser beams to arrange atoms in an orderly manner resembling a crystal. This tool is commonly used to study the fundamental characteristics of solids and other phases of matter that have repeating geometries. A shortcoming of these lattices, however, is that they are silent.
    “Without sound or vibration, we miss a crucial degree of freedom that exists in real materials,” said Benjamin Lev, associate professor of applied physics and of physics, who set his sights on this issue when he first came to Stanford in 2011. “It’s like making soup and forgetting the salt; it really takes the flavor out of the quantum ‘soup.'”
    After a decade of engineering and benchmarking, Lev and collaborators from Pennsylvania State University and the University of St. Andrews have produced the first optical lattice of atoms that incorporates sound. The research was published Nov. 11 in Nature. By designing a very precise cavity that held the lattice between two highly reflective mirrors, the researchers made it so the atoms could “see” themselves repeated thousands of times via particles of light, or photons, that bounce back and forth between the mirrors. This feedback causes the photons to behave like phonons — the building blocks of sound.
    “If it were possible to put your ear to the optical lattice of atoms, you would hear their vibration at around 1 kHz,” said Lev.
    A supersolid with sound
Previous optical lattice experiments were silent affairs because they lacked the special elasticity of this new system. Lev, then-graduate student Sarang Gopalakrishnan — now an assistant professor of physics at Penn State and co-author of the paper — and Paul Goldbart (now provost of Stony Brook University) came up with the foundational theory for this system. But it took collaboration with Jonathan Keeling — a reader at the University of St. Andrews and co-author of the paper — and years of work to build the corresponding device.

  • Nuclear radiation used to transmit digital data wirelessly

    Engineers have successfully transferred digitally encoded information wirelessly using nuclear radiation instead of conventional technology.
Radio waves and mobile phone signals rely on electromagnetic radiation for communication, but in a new development, engineers from Lancaster University in the UK, working with the Jožef Stefan Institute in Slovenia, transferred digitally encoded information using “fast neutrons” instead.
    The researchers measured the spontaneous emission of fast neutrons from californium-252, a radioactive isotope produced in nuclear reactors.
    Modulated emissions were measured using a detector and recorded on a laptop.
Several examples of information, including a word, the alphabet and a blindly selected random number, were encoded serially into the modulation of the neutron field, and the output was decoded on a laptop, which recovered the encoded information on screen.
A double-blind test was performed in which a number derived from a random number generator was encoded without prior knowledge on the part of those uploading it, and then transmitted and decoded.
    All transmission tests attempted proved to be 100% successful.
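    The underlying scheme can be imitated in a simple simulation: text is encoded as on-off modulation of a Poisson-distributed count rate and decoded by thresholding the counts in each bit slot. The rates, bit timing and threshold below are assumed values, not the parameters of the Lancaster and Jožef Stefan hardware.

    ```python
    # Illustrative simulation only (hypothetical rates and timing): encode
    # ASCII text as on-off modulation of a Poisson neutron count rate, then
    # decode by thresholding detector counts in each bit slot.
    import numpy as np

    rng = np.random.default_rng(3)
    HIGH_RATE, LOW_RATE = 200.0, 20.0     # mean counts per bit slot (assumed)

    def encode(text):
        return [int(b) for ch in text.encode("ascii") for b in f"{ch:08b}"]

    def transmit(bits):
        rates = np.where(np.array(bits) == 1, HIGH_RATE, LOW_RATE)
        return rng.poisson(rates)          # detector counts in each bit slot

    def decode(counts, threshold=(HIGH_RATE + LOW_RATE) / 2):
        bits = (counts > threshold).astype(int)
        chars = [chr(int("".join(map(str, bits[i:i + 8])), 2))
                 for i in range(0, len(bits), 8)]
        return "".join(chars)

    counts = transmit(encode("NEUTRON"))
    print(decode(counts))                  # expected: NEUTRON
    ```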
    Professor Malcolm Joyce of Lancaster University said: “We demonstrate the potential of fast neutron radiation as a medium for wireless communications for applications where conventional electromagnetic transmission is either not feasible or is inherently limited.”
    He said fast neutrons have an advantage over conventional electromagnetic waves, which are significantly weakened by transmission through materials including metals.
    “In some safety-critical scenarios, such as concerning the integrity of reactor containments, and metal vaults and bulkheads in maritime structures, it can be important to minimise the number of penetrations made through such metal structures for communications cabling. The use of neutrons for information transmission through such structures could negate the need for such penetrations and is perhaps also relevant to scenarios where limited transmissions are desirable in difficult circumstances, such as for emergency rescue operations.”
Fast neutrons could also be incorporated into mixed-signal electronic systems to achieve signal mixing between electrons and neutrons. This could help meet the requirement to ensure the integrity of information transfer.
    Story Source:
Materials provided by Lancaster University.

  • New computer model is a key step toward low-temperature preservation of 3D tissues, organs

Medical science is a key step closer to the cryopreservation of brain slices used in neurological research, pancreatic cells for the treatment of diabetes and even whole organs, thanks to a new computer model that predicts how a tissue’s size will change during the preservation process.
    Findings of the study led by Adam Higgins of the Oregon State University College of Engineering were published in Biophysical Journal.
    “Cryopreservation of tissues would be useful for biomedical research and for transplantation medicine, but it’s difficult to cryopreserve tissues for various reasons,” said Higgins, associate professor of bioengineering. “A major reason is that formation of ice can break apart a tissue from the inside. Folks who cook are probably already familiar with this — a tomato that has been frozen and thawed becomes mushy.”
    Cryopreservation has long been widely used in comparatively simpler applications such as preserving semen, blood, embryos and plant seeds. A barrier to other uses has been damage from ice crystallization and the harmful nature of the compounds added to prevent ice formation.
    Vitrification, Higgins explains, is a cryopreservation strategy that thwarts ice crystal damage through chemicals known as cryoprotectants, or CPAs, that can keep ice from forming. An example of a CPA is ethylene glycol, used in automobile antifreeze.
In tissues, a high enough concentration of CPAs causes a solid “glass” to form rather than ice crystals when tissue temperature is reduced to liquid nitrogen levels; liquid nitrogen boils at minus-320 degrees Fahrenheit.
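    A toy version of such a model can convey why tissue first shrinks and then re-swells during CPA loading: water leaves quickly down the osmotic gradient, while the CPA (for example, ethylene glycol) permeates more slowly and draws water back in. The permeabilities, concentrations and partial volume below are illustrative assumptions, not the Oregon State model published in Biophysical Journal.

    ```python
    # Toy sketch only (hypothetical parameters): a simple osmotic model of
    # tissue volume change during exposure to a cryoprotectant (CPA) bath.
    import numpy as np

    def simulate_cpa_loading(minutes=120.0, dt=0.01, cpa_bath=1.0,
                             p_water=1.0, p_cpa=0.05, osm_iso=0.3):
        """Return (time, relative volume) during CPA loading.
        Concentrations are osmolality-like units; permeabilities per minute."""
        v_w, n_cpa = 1.0, 0.0                 # normalized water volume, CPA content
        t = np.arange(0, minutes, dt)
        volume = np.empty_like(t)
        for i, _ in enumerate(t):
            osm_in = (osm_iso + n_cpa) / v_w                 # intracellular osmolality
            osm_out = osm_iso + cpa_bath                     # bath osmolality
            v_w += dt * p_water * (osm_in - osm_out)         # water follows the gradient
            n_cpa += dt * p_cpa * (cpa_bath - n_cpa / v_w)   # CPA slowly permeates
            volume[i] = v_w + n_cpa * 0.06                   # small CPA partial volume
        return t, volume

    t, v = simulate_cpa_loading()
    print(f"Minimum relative volume: {v.min():.2f} (shrink), final: {v[-1]:.2f} (re-swell)")
    ```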