More stories

  • Breakthrough innovation could solve temperature issues for source-gated transistors and lead to low-cost, flexible displays

    Low-cost, flexible displays that use very little energy could be a step closer, thanks to an innovation from the University of Surrey that solves a problem that has plagued source-gated transistors (SGTs).
    SGTs are not widely used because current designs suffer from performance that drifts with temperature. To solve this problem, scientists from the University of Surrey have developed a new design for the part of the transistor known as the source. They have proposed adding very thin layers of insulating material at the source contact to change the way in which electric charges flow.
    Dr Radu Sporea, project lead from the University of Surrey, said:
    “We used a rapidly emerging semiconductor material called IGZO or indium-gallium-zinc oxide to create the next generation of source-gated transistors. Through nanoscale contact engineering, we obtained transistors that are much more stable with temperature than previous attempts. Device simulations allowed us to understand this effect.
    “This new design adds temperature stability to SGTs and retains usual benefits like using low power, producing high signal amplification, and being more reliable under different conditions. While source-gated transistors are not mainstream because of a handful of performance limitations, we are steadily chipping away at their shortcomings.”
    A source-gated transistor (SGT) is a special type of transistor that combines two fundamental components of electronics — a thin-film transistor and a carefully engineered metal-semiconductor contact. It has many advantages over traditional transistors, including using less power and being more stable. SGTs are suitable for large-area electronics and are promising candidates to be used in various fields such as medicine, engineering and computing.
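    To get a feel for why the contact governs temperature stability, the sketch below evaluates the textbook Richardson thermionic-emission relation for a Schottky-style source contact. This is a generic illustration, not the team's device model: the barrier height and Richardson constant are assumed, illustrative values rather than measured IGZO parameters.

```python
import numpy as np

# Textbook thermionic emission over a Schottky-style source barrier:
#   J = A* T^2 exp(-phi_B / (k_B T))
# Illustrative values only -- NOT measured IGZO/SGT parameters.
A_STAR = 41.0      # effective Richardson constant (A cm^-2 K^-2), assumed
PHI_B = 0.35       # barrier height (eV), assumed
K_B = 8.617e-5     # Boltzmann constant (eV/K)

def contact_current_density(T_kelvin):
    """Saturation current density (A/cm^2) of the source contact at T."""
    return A_STAR * T_kelvin**2 * np.exp(-PHI_B / (K_B * T_kelvin))

for T in (280.0, 300.0, 320.0):
    print(f"T = {T:.0f} K -> J = {contact_current_density(T):.2e} A/cm^2")
```

    With these illustrative numbers the contact current more than doubles over a 20 K rise, which conveys why an unmodified Schottky-type source drifts so strongly with temperature, and why changing how charge crosses the contact, as the Surrey design does with its thin insulating layers, can suppress that drift.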
    Salman Alfarisyi performed the simulations at the University of Surrey as part of his final-year undergraduate project. Salman said:
    “Source-gate transistors could be the building block to new power-efficient flexible electronics technology that helps to meet our energy needs without damaging the health of our planet. For example, their sensing and signal amplification ability makes it easy to recommend them as key elements for medical devices that interface with our entire body, allowing us to better understand human health.”
    The study has been published in IEEE Transactions on Electron Devices.
    The University of Surrey is a world-leading centre for excellence in sustainability — where our multi-disciplinary research connects society and technology to equip humanity with the tools to tackle climate change, clean our air, reduce the impacts of pollution on health and help us live better, more sustainable lives. The University is committed to improving its own resource efficiency on its estate and being a sector leader, aiming to be carbon neutral by 2030. A focus on research that makes a difference to the world has contributed to Surrey being ranked 55th in the world in the Times Higher Education (THE) University Impact Rankings 2022, which assesses more than 1,400 universities’ performance against the United Nations’ Sustainable Development Goals (SDGs).

  • Physicists discover a new switch for superconductivity

    Under certain conditions — usually exceedingly cold ones — some materials shift their structure to unlock new, superconducting behavior. This structural shift is known as a “nematic transition,” and physicists suspect that it offers a new way to drive materials into a superconducting state where electrons can flow entirely friction-free.
    But what exactly drives this transition in the first place? The answer could help scientists improve existing superconductors and discover new ones.
    Now, MIT physicists have identified the key to how one class of superconductors undergoes a nematic transition, and it’s in surprising contrast to what many scientists had assumed.
    The physicists made their discovery studying iron selenide (FeSe), a two-dimensional material that is the highest-temperature iron-based superconductor. The material is known to switch to a superconducting state at temperatures as high as 70 kelvins (about -334 degrees Fahrenheit). Though still ultracold, this transition temperature is higher than that of most superconducting materials.
    The higher the temperature at which a material can exhibit superconductivity, the more promising it can be for use in the real world, such as for realizing powerful electromagnets for more precise and lightweight MRI machines or high-speed, magnetically levitating trains.
    For those and other possibilities, scientists will first need to understand what drives a nematic switch in high-temperature superconductors like iron selenide. In other iron-based superconducting materials, scientists have observed that this switch occurs when individual atoms suddenly shift their magnetic spin toward one coordinated, preferred magnetic direction.

    But the MIT team found that iron selenide shifts through an entirely new mechanism. Rather than undergoing a coordinated shift in spins, atoms in iron selenide undergo a collective shift in their orbital energy. It’s a fine distinction, but one that opens a new door to discovering unconventional superconductors.
    “Our study reshuffles things a bit when it comes to the consensus that was created about what drives nematicity,” says Riccardo Comin, the Class of 1947 Career Development Associate Professor of Physics at MIT. “There are many pathways to get to unconventional superconductivity. This offers an additional avenue to realize superconducting states.”
    Comin and his colleagues will publish their results in a study appearing in Nature Materials. Co-authors at MIT include Connor Occhialini, Shua Sanchez, and Qian Song, along with Gilberto Fabbris, Yongseong Choi, Jong-Woo Kim, and Philip Ryan at Argonne National Laboratory.
    Following the thread
    The word “nematicity” stems from the Greek word “nema,” meaning “thread” — as in the thread-like body of the nematode worm. Nematicity is also used to describe conceptual threads, such as coordinated physical phenomena. In the study of liquid crystals, for instance, nematic behavior can be observed when molecules assemble in coordinated lines.

    In recent years, physicists have used nematicity to describe a coordinated shift that drives a material into a superconducting state. Strong interactions between electrons cause the material as a whole to stretch infinitesimally, like microscopic taffy, in one particular direction that allows electrons to flow freely in that direction. The big question has been what kind of interaction causes the stretching. In some iron-based materials, this stretching seems to be driven by atoms that spontaneously shift their magnetic spins to point in the same direction. Scientists have therefore assumed that most iron-based superconductors make the same, spin-driven transition.
    But iron selenide seems to buck this trend. The material, which happens to transition into a superconducting state at the highest temperature of any iron-based material, also seems to lack any coordinated magnetic behavior.
    “Iron selenide has the least clear story of all these materials,” says Sanchez, who is an MIT postdoc and NSF MPS-Ascend Fellow. “In this case, there’s no magnetic order. So, understanding the origin of nematicity requires looking very carefully at how the electrons arrange themselves around the iron atoms, and what happens as those atoms stretch apart.”
    A super continuum
    In their new study, the researchers worked with ultrathin, millimeter-long samples of iron selenide, which they glued to a thin strip of titanium. They mimicked the structural stretching that occurs during a nematic transition by physically stretching the titanium strip, which in turn stretched the iron selenide samples. As they stretched the samples by a fraction of a micron at a time, they looked for any properties that shifted in a coordinated fashion.
    Using ultrabright X-rays, the team tracked how the atoms in each sample were moving, as well as how each atom’s electrons were behaving. After a certain point, they observed a definite, coordinated shift in the atoms’ orbitals. Atomic orbitals are essentially energy levels that an atom’s electrons can occupy. In iron selenide, electrons can occupy one of two orbital states around an iron atom. Normally, the choice of which state to occupy is random. But the team found that as they stretched the iron selenide, its electrons began to overwhelmingly prefer one orbital state over the other. This signaled a clear, coordinated shift, revealing a new mechanism of nematicity and, with it, superconductivity.
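    For readers who want the distinction in symbols: in the standard two-orbital picture used for iron-based materials (conventional notation from the field, not taken from the paper itself), orbital nematicity is summarized by the imbalance between the occupations of the two iron d-orbitals:

```latex
% Orbital nematic order parameter in the standard two-orbital picture
% (conventional notation from the field, not taken from the paper itself).
% n_{xz}, n_{yz}: electron occupations of the Fe d_{xz} and d_{yz} orbitals.
\[
  \varphi_{\mathrm{orb}} = n_{xz} - n_{yz}
\]
```

    A spin-driven nematic transition would instead be flagged by an order parameter built from magnetic correlations; the point of the measurement above is that in iron selenide the orbital imbalance becomes nonzero under strain even though no magnetic order appears.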
    “What we’ve shown is that there are different underlying physics when it comes to spin versus orbital nematicity, and there’s going to be a continuum of materials that go between the two,” says Occhialini, an MIT graduate student. “Understanding where you are on that landscape will be important in looking for new superconductors.”
    This research was supported by the Department of Energy, the Air Force Office of Scientific Research, and the National Science Foundation.

  • New microcomb device advances photonic technology

    A new tool for generating microwave signals could help propel advances in wireless communication, imaging, atomic clocks, and more.
    Frequency combs are photonic devices that produce many equally spaced laser lines, each locked to a specific frequency to produce a comb-like structure. They can be used to generate high-frequency, stable microwave signals, and scientists have been attempting to miniaturize the approach so that it can be used on microchips.
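    Concretely, the frequency of every comb line is fixed by just two quantities, the line spacing and a common offset; this is the standard comb relation from the frequency-metrology literature, not a formula specific to this paper:

```latex
% Standard frequency-comb relation (textbook; not specific to this work).
% f_rep: line spacing (repetition rate); f_ceo: carrier-envelope offset
% frequency; n: integer index of the comb line.
\[
  f_n = f_{\mathrm{ceo}} + n\, f_{\mathrm{rep}}
\]
```

    Photodetecting the comb yields microwave tones at multiples of the repetition rate, which is why a comb whose line spacing can be tuned quickly doubles as a fast, highly tunable microwave source.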
    Scientists have so far been limited in their ability to tune these microcombs quickly enough to make them effective. But a team of researchers led by the University of Rochester’s Qiang Lin, professor of electrical and computer engineering and optics, outlined a new high-speed tunable microcomb in Nature Communications.
    “One of the hottest areas of research in nonlinear integrated photonics is trying to produce this kind of a frequency comb on a chip-scale device,” says Lin. “We are excited to have developed the first microcomb device to produce a highly tunable microwave source.”
    The device is a lithium niobate resonator that allows users to manipulate the bandwidth and frequency modulation rates several orders of magnitude faster than existing microcombs.
    “The device provides a new approach to electro-optic processing of coherent microwaves and opens up a great avenue towards high-speed control of soliton comb lines that is crucial for many applications including frequency metrology, frequency synthesis, RADAR/LiDAR, sensing, and communication,” says Yang He ’20 (PhD), who was an electrical and computer engineering postdoctoral scholar in Lin’s lab and is the first author on the paper.
    Other coauthors from Lin’s group include Raymond Lopez-Rios, Usman A. Javid, Jingwei Ling, Mingxiao Li, and Shixin Xue.
    The project was a collaboration between faculty and students at Rochester’s Department of Electrical and Computer Engineering and Institute of Optics as well as the California Institute of Technology. The work was supported in part by the Defense Threat Reduction Agency, the Defense Advanced Research Projects Agency, and the National Science Foundation.

  • Now, every biologist can use machine learning

    The amount of data generated by scientists today is massive, thanks to the falling costs of sequencing technology and the increasing amount of available computing power. But parsing through all that data to uncover useful information is like searching for a molecular needle in a haystack. Machine learning (ML) and other artificial intelligence (AI) tools can dramatically speed up the process of data analysis, but most ML tools are difficult for non-ML experts to access and use. Recently, automated machine learning (AutoML) methods have been developed that can automate the design and deployment of ML tools, but they are often very complex and require a facility with ML that few scientists outside of the AI field have.
    A group of scientists at the Wyss Institute for Biologically Inspired Engineering at Harvard University and MIT has now filled that unmet need by building a new, comprehensive AutoML platform designed for biologists with little to no ML experience. Their platform, called BioAutoMATED, can use sequences of nucleic acids, peptides, or glycans as input data, and its performance is comparable to other AutoML platforms while requiring minimal user input. The platform is described in a new paper published in Cell Systems and is available to download from GitHub.
    “Our tool is for folks who don’t have the ability to build their own custom ML models, who find themselves asking questions like, ‘I have this cool data set, will ML even work for it? How do I get it into an ML model? The complexity of ML is what’s stopping me from going further with this data set, so how do I overcome that?’,” said co-first author Jackie Valeri, a graduate student in the lab of Wyss Core Faculty member Jim Collins, Ph.D. “We wanted to make it easy for biologists and experts in other domains to use the power of ML and AutoML to answer fundamental questions and help uncover biology that means something.”
    AutoML for all
    Like many great ideas, the seed that would become BioAutoMATED was planted not in the lab, but over lunch. Valeri and co-first authors Luis Soenksen, Ph.D. and Katie Collins were eating together at one of the Wyss Institute’s dining tables when they realized that despite the Institute’s reputation as a world-class destination for biological research, only a handful of the top experts working there were capable of building and training ML models that could greatly benefit their work.
    “We decided that we needed to do something about that, because we wanted the Wyss to be at the forefront of the AI biotech revolution, and we also wanted the development of these tools to be driven by biologists, for biologists,” said Soenksen, a Postdoctoral Fellow at the Wyss Institute who is also a serial entrepreneur in the science and technology space. “Now, everyone agrees that AI is the future, but four years ago when we got this idea, it wasn’t that obvious, particularly for biological research. So, it started as a tool that we wanted to build to serve ourselves and our Wyss colleagues, but now we know that it can serve much more.”
    While various AutoML systems have already been developed to simplify the process of generating ML models from datasets, they typically have drawbacks; among them, the fact that each AutoML tool is designed to look at only one type of model (e.g., neural networks) when searching for an optimal solution. This limits the resulting model to a narrow set of possibilities, when in reality, a different type of model altogether may be more optimal. Another issue is that most AutoML tools aren’t designed specifically to take biological sequences as their input data. Some tools have been developed that use language models for analyzing biological sequences, but these lack automation features and are difficult to use.

    To build a robust all-in-one AutoML for biology, the team modified three existing AutoML tools that each use a different approach for generating models: AutoKeras, which searches for optimal neural networks; DeepSwarm, which uses swarm-based algorithms to search for convolutional neural networks; and TPOT, which searches non-neural networks using a variety of methods including genetic programming and self-learning. BioAutoMATED then produces standardized output results for all three tools, so that the user can easily compare them and determine which type produces the most useful insights from their data.
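    The workflow is easier to picture with a toy analog. The sketch below is not BioAutoMATED's actual API (see the team's GitHub repository for that); it simply mimics the core idea with scikit-learn, training one stand-in model per family and comparing them with a single standardized metric.

```python
# Toy analog of BioAutoMATED's idea: train several model families on the
# same data and compare them with one standardized metric. This is NOT the
# tool's real API, just a minimal scikit-learn sketch of the workflow.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in for featurized biological sequences (e.g., one-hot encoded RBS).
X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# Analogy: AutoKeras/DeepSwarm search neural networks, TPOT searches
# non-neural pipelines; here each family is reduced to one fixed model.
candidates = {
    "neural_net": MLPClassifier(max_iter=2000, random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: mean CV accuracy = {score:.3f}")
```

    The real platform automates the search within each family as well; the point of the sketch is only the standardized side-by-side comparison that lets a non-expert pick the best model family for their data.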
    The team built BioAutoMATED to be able to take as inputs DNA, RNA, amino acid, and glycan (sugar molecules found on the surfaces of cells) sequences of any length, type, or biological function. BioAutoMATED automatically pre-processes the input data, then generates models that can predict biological functions from the sequence information alone.
    The platform also has a number of features that help users determine whether they need to gather additional data to improve the quality of the output, learn which features of a sequence the models “paid attention” to most (and thus may be of more biological interest), and design new sequences for future experiments.
    Nucleotides and peptides and glycans, oh my!
    To test-drive their new framework, the team first used it to explore how changing the sequence of a stretch of RNA called the ribosome binding site (RBS) affected the efficiency with which a ribosome could bind to the RNA and translate it into protein in E. coli bacteria. They fed their sequence data into BioAutoMATED, which identified a model generated by the DeepSwarm algorithm that could accurately predict translation efficiency. This model performed as well as models created by a professional ML expert, but was generated in just 26.5 minutes and only required ten lines of input code from the user (other models can require more than 750). They also used BioAutoMATED to identify which areas of the sequence seemed to be the most important in determining translation efficiency, and to design new sequences that could be tested experimentally.

    They then moved on to trials of feeding peptide and glycan sequence data into BioAutoMATED and using the results to answer specific questions about those sequences. The system generated highly accurate information about which amino acids in a peptide sequence are most important in determining an antibody’s ability to bind to the drug ranibizumab (Lucentis), and also classified different types of glycans into immunogenic and non-immunogenic groups based on their sequences. The team also used it to optimize the sequences of RNA-based toehold switches, informing the design of new toehold switches for experimental testing with minimal input coding from the user.
    “Ultimately, we were able to show that BioAutoMATED helps people 1) recognize patterns in biological data, 2) ask better questions about that data, and 3) answer those questions quickly, all within a single framework — without having to become an ML expert themselves,” said Katie Collins, who is currently a graduate student at the University of Cambridge and worked on the project while an undergraduate at MIT.
    Any models predicted with the help of BioAutoMATED, as with any other ML tool, need to be experimentally validated in the lab whenever possible. But the team is hopeful that it could be further integrated into the ever-growing set of AutoML tools, one day extending its function beyond biological sequences to any sequence-like object, such as fingerprints.
    “Machine learning and artificial intelligence tools have been around for a while now, but it’s only with the recent development of user-friendly interfaces that they’ve exploded in popularity, as in the case of ChatGPT,” said Jim Collins, who is also the Termeer Professor of Medical Engineering & Science at MIT. “We hope that BioAutoMATED can enable the next generation of biologists to faster and more easily discover the underpinnings of life.”
    “Enabling non-experts to use these platforms is critical for being able to harness ML techniques’ full potential to solve long-standing problems in biology, and beyond. This advance by the Collins team is a major step forward for making AI a key collaborator for biologists and bioengineers,” said Wyss Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and the Hansjörg Wyss Professor of Bioinspired Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS).
    Additional authors of the paper include George Cai from the Wyss Institute and Harvard Medical School; former Wyss Institute members Pradeep Ramesh, Rani Powers, Nicolaas Angenent-Mari, and Diogo Camacho; and Felix Wong and Timothy Lu from MIT.
    This research was supported by the Defense Threat Reduction Agency (grant HDTRA-12210032), the DARPA SD2 program, the Paul G. Allen Frontiers Group, the Wyss Institute for Biologically Inspired Engineering, an MIT-Takeda Fellowship, CONACyT grant 342369/408970, and an MIT-TATA Center fellowship (2748460).

  • An app can transform smartphones into thermometers that accurately detect fevers

    If you’ve ever thought you may be running a temperature yet couldn’t find a thermometer, you aren’t alone. A fever is the most commonly cited symptom of COVID-19 and an early sign of many other viral infections. For quick diagnoses and to prevent viral spread, a temperature check can be crucial. Yet accurate at-home thermometers aren’t commonplace, despite the rise of telehealth consultations.
    There are a few potential reasons for that. The devices can range from $15 to $300, and many people need them only a few times a year. In times of sudden demand — such as the early days of the COVID-19 pandemic — thermometers can sell out. Many people, particularly those in under-resourced areas, can end up without a vital medical device when they need it most.
    To address this issue, a team led by researchers at the University of Washington has created an app called FeverPhone, which transforms smartphones into thermometers without adding new hardware. Instead, it uses the phone’s touchscreen and repurposes the existing battery temperature sensors to gather data that a machine learning model uses to estimate people’s core body temperatures. When the researchers tested FeverPhone on 37 patients in an emergency department, the app estimated core body temperatures with accuracy comparable to some consumer thermometers. The team published its findings March 28 in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
    “In undergrad, I was doing research in a lab where we wanted to show that you could use the temperature sensor in a smartphone to measure air temperature,” said lead author Joseph Breda, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “When I came to the UW, my adviser and I wondered how we could apply a similar technique for health. We decided to measure fever in an accessible way. The primary concern with temperature isn’t that it’s a difficult signal to measure; it’s just that people don’t have thermometers.”
    The app is the first to use existing phone sensors and screens to estimate whether people have fevers. It needs more training data to be widely used, Breda said, but for doctors, the potential of such technology is exciting.
    “People come to the ER all the time saying, ‘I think I was running a fever.’ And that’s very different than saying ‘I was running a fever,’” said Dr. Mastafa Springston, a co-author on the study and a UW clinical instructor at the Department of Emergency Medicine in the UW School of Medicine. “In a wave of influenza, for instance, people running to the ER can take five days, or even a week sometimes. So if people were to share fever results with public health agencies through the app, similar to how we signed up for COVID exposure warnings, this earlier sign could help us intervene much sooner.”
    Clinical-grade thermometers use tiny sensors known as thermistors to estimate body temperature. Off-the-shelf smartphones also happen to contain thermistors; they’re mostly used to monitor the temperature of the battery. But the UW researchers realized they could use these sensors to track heat transfer between a person and a phone. The phone touchscreen could sense skin-to-phone contact, and the thermistors could gauge the air temperature and the rise in heat when the phone touched a body.

    To test this idea, the team started by gathering data in a lab. To simulate a warm forehead, the researchers heated a plastic bag of water with a sous-vide machine and pressed phone screens against the bag. To account for variations in circumstances, such as different people using different phones, the researchers tested three phone models. They also added accessories such as a screen protector and a case and changed the pressure on the phone.
    The researchers used the data from different test cases to train a machine learning model that used the complex interactions to estimate body temperature. Since the sensors are supposed to gauge the phone’s battery heat, the app tracks how quickly the phone heats up and then uses the touchscreen data to account for how much of that comes from a person touching it. As they added more test cases, the researchers were able to calibrate the model to account for the variations in things such as phone accessories.
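    A minimal sketch of that calibration step, under stated assumptions, might look like the following. The features (pre-contact thermistor reading, heating rate during the press, touchscreen contact fraction) are inferred from the description above, and the data are synthetic so the example runs end to end; none of this is the authors' actual pipeline.

```python
# Minimal sketch of FeverPhone-style calibration: predict core body
# temperature from battery-thermistor heating curves plus touchscreen
# contact. Features and data are ASSUMED/SYNTHETIC, not the authors' code.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # simulated "forehead press" sessions

# Assumed features per session:
ambient_temp = rng.normal(22.0, 3.0, n)    # thermistor reading pre-contact (C)
heating_rate = rng.normal(0.02, 0.005, n)  # thermistor slope during press (C/s)
contact_area = rng.uniform(0.5, 1.0, n)    # touchscreen contact fraction

# Synthetic ground truth ONLY so the example runs end to end;
# real labels came from an oral thermometer in the clinical study.
core_temp = (36.8 + 40.0 * heating_rate + 0.3 * (1 - contact_area)
             - 0.01 * (ambient_temp - 22.0) + rng.normal(0, 0.15, n))

X = np.column_stack([ambient_temp, heating_rate, contact_area])
mae = -cross_val_score(Ridge(alpha=1.0), X, core_temp, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"mean absolute error: {mae:.2f} C")  # study reported ~0.23 C
```

    The design point is that the thermistor alone cannot separate a warm room from a warm forehead; combining its heating rate with the touchscreen's contact signal is what lets a model attribute the heat to the person.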
    Then the team was ready to test the app on people. The researchers took FeverPhone to the UW School of Medicine’s Emergency Department for a clinical trial where they compared its temperature estimates against an oral thermometer reading. They recruited 37 participants, 16 of whom had at least a mild fever.
    To use FeverPhone, the participants held the phones like point-and-shoot cameras — with forefingers and thumbs touching the corner edges to reduce heat from the hands being sensed (some had the researcher hold the phone for them). Then participants pressed the touchscreen against their foreheads for about 90 seconds, which the researchers found to be the ideal time to sense body heat transferring to the phone.
    Overall, FeverPhone estimated patient core body temperatures with an average error of about 0.41 degrees Fahrenheit (0.23 degrees Celsius), which is within the clinically acceptable margin of 0.5 degrees Celsius.
    The researchers have highlighted a few areas for further investigation. The study didn’t include participants with severe fevers above 101.5 F (38.6 C), because these temperatures are easy to diagnose and because sweaty skin tends to confound other skin-contact thermometers, according to the team. Also, FeverPhone was tested on only three phone models. Training it to run on other smartphones, as well as devices such as smartwatches, would increase its potential for public health applications, the team said.
    “We started with smartphones since they’re ubiquitous and easy to get data from,” Breda said. “I am already working on seeing if we can get a similar signal with a smartwatch. What’s nice, because watches are much smaller, is their temperature will change more quickly. So you could imagine having a user put a Fitbit to their forehead and measure in 10 seconds whether they have a fever or not.”
    Shwetak Patel, a UW professor in the Allen School and the electrical and computer engineering department, was a senior author on the paper, and Alex Mariakakis, an assistant professor in the University of Toronto’s computer science department, was a co-author. This research was supported by the University of Washington Gift Fund.

  • AI that uses sketches to detect objects within an image could boost tumor detection and the search for rare bird species

    Teaching machine learning tools to detect specific objects in a specific image and discount others is a “game-changer” that could lead to advancements in cancer detection, according to leading researchers from the University of Surrey.
    Surrey is set to present its unique sketch-based object detection tool at this year’s Conference on Computer Vision and Pattern Recognition (CVPR). The tool allows the user to sketch an object, which the AI will use as a basis to search within an image to find something that matches the sketch — while discounting more general options.
    Professor Yi-Zhe Song, who leads this research at the University of Surrey’s Institute for People-Centred AI, commented:
    “An artist’s sketch is full of individual cues that words cannot convey concisely, reiterating the phrase ‘a picture paints a thousand words’. For newer AI systems, simple descriptive words help to generate images, but none can express the individualism of the user or the exact match the user is looking for.
    “This is where our sketch-based tool comes into play. AI is instructed by the artist via sketches to find an exact object and discount others, which can be amazingly helpful in medicine, by finding more aggressive tumours, or in wildlife conservation, by detecting rare animals.”
    An example the researchers use in their conference paper is of the tool searching a picture full of zebras, with only a sketch of a single zebra eating to direct its search. The AI tool takes visual cues into account, such as pose and structure, but bases its decisions on the exact requirements given by the amateur artist.
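    In broad strokes, tools of this kind embed the query sketch and candidate image regions into a shared feature space and keep the regions whose embeddings match. The skeleton below illustrates only that retrieval pattern; the untrained linear encoders and the 64x64 input size are placeholder assumptions, not Surrey's model.

```python
# Generic sketch-based object detection skeleton: embed the query sketch
# and each candidate image region into a shared space, then keep regions
# whose cosine similarity exceeds a threshold. Encoders here are untrained
# placeholders, NOT Surrey's actual model.
import torch
import torch.nn.functional as F

embed_dim = 128
sketch_encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                     torch.nn.Linear(64 * 64, embed_dim))
region_encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                     torch.nn.Linear(64 * 64, embed_dim))

def detect(sketch, regions, threshold=0.5):
    """Return indices of image regions whose embedding matches the sketch."""
    q = F.normalize(sketch_encoder(sketch), dim=-1)   # (1, D)
    r = F.normalize(region_encoder(regions), dim=-1)  # (N, D)
    sims = (r @ q.T).squeeze(-1)                      # cosine similarities
    return (sims > threshold).nonzero().flatten().tolist()

# Toy usage: one 64x64 sketch against 10 candidate 64x64 regions.
print(detect(torch.randn(1, 64, 64), torch.randn(10, 64, 64)))
```

    Training the two encoders so that a sketch of a specific pose lands near image regions with that pose, and away from generic instances of the category, is what gives the tool its "exact match, not just any zebra" behaviour.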
    Professor Song continued:
    “The ability for AI to detect objects based on individual amateur sketches introduces a significant leap in harnessing human creativity in Computer Vision. It allows humans to interact with AI from a whole different perspective, no longer letting AI dictate the decisions, but asking it to behave exactly as instructed, keeping necessary human intervention.”
    This research will be presented at the Conference on Computer Vision and Pattern Recognition (CVPR) 2023, which showcases world-leading AI research on a global stage. The University of Surrey has an exceptionally strong showing at CVPR 2023, with more than 18 papers accepted and one nominated for the Best Paper Award.
    The University of Surrey is a research-intensive university, producing world-leading research and delivering innovation in teaching to transform lives and change the world for the better. The University of Surrey’s Institute for People-Centred AI combines over 30 years of technical excellence in the field of machine learning with multi-disciplinary research to answer the technical, ethical and governance questions that will enable the future of AI to be truly people-centred.

  • AI reveals hidden traits about our planet’s flora to help save species

    In a world first, scientists from UNSW and the Botanic Gardens of Sydney have trained AI to unlock data from millions of plant specimens kept in herbaria around the world, to study and combat the impacts of climate change on flora.
    “Herbarium collections are amazing time capsules of plant specimens,” says lead author on the study, Associate Professor Will Cornwell. “Each year over 8000 specimens are added to the National Herbarium of New South Wales alone, so it’s not possible to go through things manually anymore.”
    Using a new machine learning algorithm to process over 3000 leaf samples, the team discovered that contrary to frequently observed interspecies patterns, leaf size doesn’t increase in warmer climates within a single species.
    Published in the American Journal of Botany, this research not only reveals that factors other than climate have a strong effect on leaf size within a plant species, but demonstrates how AI can be used to transform static specimen collections and to quickly and effectively document climate change effects.
    Herbarium collections move to the digital world
    Herbaria are scientific libraries of plant specimens that have existed since at least the 16th century.

    “Historically, a valuable scientific effort was to go out, collect plants, and then keep them in a herbarium. Every record has a time and a place and a collector and a putative species ID,” says A/Prof. Cornwell, a researcher at the UNSW School of Biological, Earth and Environmental Sciences (BEES) and a member of the UNSW Data Science Hub.
    A couple of years ago, to help facilitate scientific collaboration, there was a movement to transfer these collections online.
    “The herbarium collections were locked in small boxes in particular places, but the world is very digital now. So to get the information about all of the incredible specimens to the scientists who are now scattered across the world, there was an effort to scan the specimens to produce high resolution digital copies of them.”
    The largest herbarium imaging project was undertaken at the Botanic Gardens of Sydney when over 1 million plant specimens at the National Herbarium of New South Wales were transformed into high-resolution digital images.
    “The digitisation project took over two years and shortly after completion, one of the researchers — Dr Jason Bragg — contacted me from the Botanic Gardens of Sydney. He wanted to see how we could incorporate machine learning with some of these high-resolution digital images of the Herbarium specimens.”
    “I was excited to work with A/Prof. Cornwell in developing models to detect leaves in the plant images, and to then use those big datasets to study relationships between leaf size and climate,” says Dr Bragg.

    “Computer vision” measures leaf sizes
    Together with Dr Bragg at the Botanic Gardens of Sydney and UNSW Honours student Brendan Wilde, A/Prof. Cornwell created an algorithm that could be automated to detect and measure the size of leaves of scanned herbarium samples for two plant genera — Syzygium (generally known as lillipillies, brush cherries or satinash) and Ficus (a genus of about 850 species of woody trees, shrubs and vines).
    “This is a type of AI called a convolutional neural network, an approach from the field known as computer vision,” says A/Prof. Cornwell. The process essentially teaches the AI to see and identify the components of a plant in the same way a human would.
    “We had to build a training data set to teach the computer, this is a leaf, this is a stem, this is a flower,” says A/Prof. Cornwell. “So we basically taught the computer to locate the leaves and then measure the size of them.
    “Measuring the size of leaves is not novel, because lots of people have done this. But the speed with which these specimens can be processed and their individual characteristics can be logged is a new development.”
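    As a drastically simplified stand-in for that measurement step (classical thresholding in place of the team's trained convolutional network, with an assumed scan resolution), the "locate the leaves, then measure them" pipeline can be compressed into a few lines:

```python
# Drastically simplified leaf-area measurement on a scanned herbarium sheet.
# The real pipeline used a trained convolutional neural network; this uses
# plain Otsu thresholding, and the scale constant is an assumed value.
from skimage import color, filters, io, measure

PIXELS_PER_CM = 118.0  # assumed scan resolution (~300 dpi)

def leaf_areas_cm2(image_path):
    """Segment dark plant material from a light sheet; return region areas."""
    gray = color.rgb2gray(io.imread(image_path))
    mask = gray < filters.threshold_otsu(gray)  # leaves darker than paper
    labels = measure.label(mask)
    return [region.area / PIXELS_PER_CM**2      # px^2 -> cm^2
            for region in measure.regionprops(labels)
            if region.area > 500]               # ignore specks and labels

# Usage: print(leaf_areas_cm2("herbarium_sheet.jpg"))
```

    The trained network replaces the crude threshold with learned leaf-versus-stem-versus-flower labels, but the speed advantage the researchers describe comes from the same place: once trained, each digitized sheet is measured in seconds without human effort.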
    A break in frequently observed patterns
    A general rule of thumb in the botanical world is that in wetter climates, like tropical rainforests, the leaves of plants are bigger compared to drier climates, such as deserts.
    “And that’s a very consistent pattern that we see in leaves between species all across the globe,” says A/Prof. Cornwell. “The first test we did was to see if we could reconstruct that relationship from the machine learned data, which we could. But the second question was, because we now have so much more data than we had before, do we see the same thing within species?”
    The machine learning algorithm was developed, validated, and applied to analyse the relationship between leaf size and climate within and among species for Syzygium and Ficus plants.
    The results from this test were surprising — the team discovered that while this pattern can be seen between different plant species, the same correlation isn’t seen within a single species across the globe, likely because a different process, known as gene flow, is operating within species. That process weakens plant adaptation on a local scale and could be preventing the leaf size-climate relationship from developing within species.
    Using AI to predict future climate change responses
    The machine learning approach used here to detect and measure leaves, though not pixel perfect, provided levels of accuracy suitable for examining links between leaf traits and climate.
    “But because the world is changing quite fast, and there is so much data, these kinds of machine learning methods can be used to effectively document climate change effects,” says A/Prof. Cornwell.
    What’s more, the machine learning algorithms can be trained to identify trends that might not be immediately obvious to human researchers. This could lead to new insights into plant evolution and adaptations, as well as predictions about how plants might respond to future effects of climate change.

  • Open-source software to speed up quantum research

    Quantum technology is expected to fundamentally change many key areas of society. Researchers are convinced that there are many more useful quantum properties and applications to explore than those we know today. A team of researchers at Chalmers University of Technology in Sweden has now developed open-source, freely available software that will pave the way for new discoveries in the field and accelerate quantum research significantly.
    Within a few decades, quantum technology is expected to become a key technology in areas such as health, communication, defence and energy. The power and potential of the technology lie in the odd and very special properties of quantum particles. Of particular interest to researchers in the field are the superconducting properties of quantum particles that give components perfect conductivity with unique magnetic properties. These superconducting properties are considered conventional today and have already paved the way for entirely new technologies used in applications such as magnetic resonance imaging equipment, maglev trains and quantum computer components. However, years of research and development remain before a quantum computer can be expected to solve real computing problems in practice, for example. The research community is convinced that there are many more revolutionary discoveries to be made in quantum technology than those we know today.
    Open-source code to explore new superconducting properties
    Basic research in quantum materials is the foundation of all quantum technology innovation, from the birth of the transistor in 1947, through the laser in the 1960s, to the quantum computers of today. However, experiments on quantum materials are often very resource-intensive to develop and conduct, take many years to prepare and mostly produce results that are difficult to interpret. Now a team of researchers at Chalmers has developed the open-source software SuperConga, which is free for everyone to use and specifically designed to perform advanced simulations and analyses of quantum components. The programme operates at the mesoscopic level, which means that it can carry out simulations capable of ‘picking up’ the strange properties of quantum particles and applying them in practice. The open-source code is the first of its kind in the world and is expected to enable the exploration of completely new superconducting properties, eventually paving the way for quantum computers that can use advanced computing to tackle societal challenges in several areas.
    “We are specifically interested in unconventional superconductors, which are an enigma in terms of how they even work and what their properties are. We know that they have some desirable properties that allow quantum information to be protected from interference and fluctuations. Interference is what currently limits us from having a quantum computer that can be used in practice. And this is where basic research into quantum materials is crucial if we are to make any progress,” says Mikael Fogelström, Professor of Theoretical Physics at Chalmers.
    These new superconductors continue to be highly enigmatic materials — just as their conventional siblings once were when they were discovered in a laboratory more than a hundred years ago. After that discovery, it would be more than 40 years before researchers could describe them in theory. The Chalmers researchers now hope that their open-source code can contribute to completely new findings and areas of application.
    “We want to find out about all the other exciting properties of unconventional superconductors. Our software is powerful, educational and user-friendly, and we hope that it will help generate new understanding and suggest entirely new applications for these unexplored superconductors,” says Patric Holmvall, postdoctoral researcher in condensed matter physics at Uppsala University.
    Desire to make life easier for quantum researchers and students
    To be able to explore revolutionary new discoveries, tools are needed that can study and utilise the extraordinary quantum properties at the level of individual particles, and that can also be scaled up enough to be used in practice. Researchers need to work at the mesoscopic scale. This lies at the interface between the microscopic scale, i.e. the atomic level at which the quantum properties of the particles can still be utilised, and the macroscopic scale, which describes everyday objects in our world that, unlike quantum particles, are subject to the laws of classical physics. Because the software works at this mesoscopic level, the Chalmers researchers now hope to make life easier for researchers and students working with quantum physics.
    “Extremely simplified models based on either the microscopic or macroscopic scale are often used at present. This means that they do not manage to identify all the important physics or that they cannot be used in practice. With this free software, we want to make it easier for others to accelerate and improve their quantum research without having to reinvent the wheel every time,” says Tomas Löfwander, Professor of Applied Quantum Physics at Chalmers.