More stories

  • Virtual reality game to objectively detect ADHD

    Researchers have used virtual reality games, eye tracking and machine learning to show that differences in eye movements can be used to detect ADHD, potentially providing a tool for more precise diagnosis of attention deficits. Their approach could also be used as the basis for an ADHD therapy and, with some modifications, to assess other conditions, such as autism.
    ADHD is a common attention disorder that affects around six percent of the world’s children. Despite decades of searching for objective markers, ADHD diagnosis is still based on questionnaires, interviews and subjective observation. The results can be ambiguous, and standard behavioural tests don’t reveal how children manage everyday situations. Recently, a team of researchers from Aalto University, the University of Helsinki, and Åbo Akademi University developed a virtual reality game called EPELI that can be used to assess ADHD symptoms in children by simulating situations from everyday life.
    Now, the team tracked the eye movements of children in a virtual reality game and used machine learning to look for differences in children with ADHD. The new study involved 37 children diagnosed with ADHD and 36 children in a control group. The children played EPELI and a second game, Shoot the Target, in which the player is instructed to locate objects in the environment and “shoot” them by looking at them. 
    ‘We tracked children’s natural eye movements as they performed different tasks in a virtual reality game, and this proved to be an effective way of detecting ADHD symptoms. The ADHD children’s gaze paused longer on different objects in the environment, and their gaze jumped faster and more often from one spot to another. This might indicate a delay in visual system development and poorer information processing than in other children,’ said Liya Merzon, a doctoral researcher at Aalto University.
    Brushing your teeth with distractions
    Project lead Juha Salmitaival, an Academy Research Fellow at Aalto, explains that part of the game’s strength is its motivational value. ‘This isn’t just a new technology to objectively assess ADHD symptoms. Children also find the game more interesting than standard neuropsychological tests,’ he says.
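    The study’s analysis pipeline is not reproduced in this summary, but the general recipe it describes, summarising each child’s gaze trace into fixation and saccade features and feeding them to a classifier, can be sketched roughly as follows. All names and the random stand-in data below are illustrative, not the authors’ code.

        # Illustrative sketch only: turn gaze traces into simple features
        # (how long gaze pauses, how far and how often it jumps) and train
        # a classifier to separate ADHD from control children.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def gaze_features(fixation_durations_ms, saccade_amplitudes_deg, session_s):
            """Summarise one child's gaze trace into a small feature vector."""
            return np.array([
                np.mean(fixation_durations_ms),            # how long gaze pauses on objects
                np.median(saccade_amplitudes_deg),         # how far gaze jumps
                len(saccade_amplitudes_deg) / session_s,   # how often gaze jumps (saccades/s)
            ])

        # In practice X would stack gaze_features(...) for each child; here random
        # placeholder data stand in for the 37 ADHD and 36 control children.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(73, 3))
        y = np.array([1] * 37 + [0] * 36)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())     # near chance on random data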

  • New software based on Artificial Intelligence helps to interpret complex data

    Experimental data are often not only high-dimensional but also noisy and full of artefacts, which makes them difficult to interpret. Now a team at HZB has designed software that uses self-learning neural networks to compress the data in a smart way and then reconstruct a low-noise version, making it possible to recognise correlations that would otherwise not be discernible. The software has now been used successfully in photon diagnostics at the FLASH free-electron laser at DESY, but it is suitable for a wide range of scientific applications.
    More is not always better; sometimes it is a problem. In highly complex data, which have many dimensions owing to their numerous parameters, correlations are often no longer recognisable, especially since experimentally obtained data are also distorted and noisy because of influences that cannot be controlled.
    Helping humans to interpret the data
    Now, new software based on artificial intelligence methods can help: it is a special class of neural networks (NNs) that experts call a “disentangled variational autoencoder network” (β-VAE). Put simply, the first NN takes care of compressing the data, while the second NN subsequently reconstructs the data. “In the process, the two NNs are trained so that the compressed form can be interpreted by humans,” explains Dr Gregor Hartmann. The physicist and data scientist supervises the Joint Lab on Artificial Intelligence Methods at HZB, which is run by HZB together with the University of Kassel.
    Extracting core principles without prior knowledge
    Google DeepMind had already proposed using β-VAEs in 2017. Many experts assumed that applying them in the real world would be challenging, because non-linear components are difficult to disentangle. “After several years of learning how the NNs learn, it finally worked,” says Hartmann. β-VAEs are able to extract the underlying core principle from data without prior knowledge.
    Photon energy of FLASH determined
    In the newly published study, the group used the software to determine the photon energy of FLASH from single-shot photoelectron spectra. “We succeeded in extracting this information from noisy electron time-of-flight data, and much better than with conventional analysis methods,” says Hartmann. Even data with detector-specific artefacts can be cleaned up this way.
    A powerful tool for different problems
    “The method is really good when it comes to impaired data,” Hartmann emphasises. The programme is even able to reconstruct tiny signals that were not visible in the raw data. Such networks can help uncover unexpected physical effects or correlations in large experimental data sets. “AI-based intelligent data compression is a very powerful tool, not only in photon science,” says Hartmann.
    Now plug and play
    In total, Hartmann and his team spent three years developing the software. “But now, it is more or less plug and play. We hope that soon many colleagues will come with their data and we can support them.”
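    The HZB implementation is not shown in this summary, but the two-network idea described above can be sketched as a minimal β-VAE in PyTorch: an encoder compresses each spectrum into a handful of latent variables, a decoder reconstructs it, and the β factor weights the disentanglement (KL) term that keeps those latent variables interpretable. The layer sizes and β value below are assumptions chosen only for illustration.

        # Minimal β-VAE sketch (assumed sizes; not the HZB code).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class BetaVAE(nn.Module):
            def __init__(self, n_in=1000, n_latent=4, beta=4.0):
                super().__init__()
                self.beta = beta
                self.enc = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU())
                self.mu = nn.Linear(128, n_latent)        # latent means
                self.logvar = nn.Linear(128, n_latent)    # latent log-variances
                self.dec = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                         nn.Linear(128, n_in))

            def forward(self, x):
                h = self.enc(x)                           # first NN: compress
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
                return self.dec(z), mu, logvar            # second NN: reconstruct

            def loss(self, x):
                recon, mu, logvar = self(x)
                rec = F.mse_loss(recon, x, reduction="sum")                   # reconstruction error
                kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # disentanglement term
                return rec + self.beta * kl

        model = BetaVAE()
        spectra = torch.randn(32, 1000)   # stand-in for noisy time-of-flight spectra
        print(model.loss(spectra).item())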
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie.

  • New winged robot can land like a bird

    A bird landing on a branch makes the maneuver look like the easiest thing in the world, but in fact, the act of perching involves an extremely delicate balance of timing, high-impact forces, speed, and precision. It’s a move so complex that no flapping-wing robot (ornithopter) has been able to master it, until now.
    Raphael Zufferey, a postdoctoral fellow in the Laboratory of Intelligent Systems (LIS) and Biorobotics Lab (BioRob) in the School of Engineering, is the first author on a recent Nature Communications paper describing the unique landing gear that makes such perching possible. He built and tested it in collaboration with colleagues at the University of Seville, Spain, where the 700-gram ornithopter itself was developed as part of the European project GRIFFIN.
    “This is the first phase of a larger project. Once an ornithopter can master landing autonomously on a tree branch, then it has the potential to carry out specific tasks, such as unobtrusively collecting biological samples or measurements from a tree. Eventually, it could even land on artificial structures, which could open up further areas of application,” Zufferey says.
    He adds that the ability to land on a perch could provide a more efficient way for ornithopters – which, like many unmanned aerial vehicles (UAVs), have limited battery life – to recharge using solar energy, potentially making them ideal for long-range missions.
    “This is a big step toward using flapping-wing robots, which as of now can really only do free flights, for manipulation tasks and other real-world applications,” he says.
    Maximizing strength and precision; minimizing weight and speed
    The engineering problems involved in landing an ornithopter on a perch without any external commands required managing many factors that nature has already so perfectly balanced. The ornithopter had to be able to slow down significantly as it perched, while still maintaining flight. The claw needed to be strong enough to grasp the perch and support the weight of the robot, without being so heavy that it could not be held aloft. “That’s one reason we went with a single claw rather than two,” Zufferey notes. Finally, the robot needed to be able to perceive its environment and the perch in front of it in relation to its own position, speed, and trajectory.
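    As a purely illustrative toy, not the authors’ controller, the timing side of such a manoeuvre can be reduced to a simple trigger: close the claw once the estimated time to contact, computed from the perch’s relative distance and the closing speed, drops below the claw’s actuation delay. All values below are hypothetical.

        # Toy perching trigger: hypothetical values, not the GRIFFIN ornithopter's logic.
        from dataclasses import dataclass

        @dataclass
        class PerchState:
            distance_m: float           # distance from claw to perch
            closing_speed_mps: float    # speed toward the perch (positive = approaching)

        def should_close_claw(state: PerchState, actuation_delay_s: float = 0.08) -> bool:
            """Close early enough that the claw finishes closing right at contact."""
            if state.closing_speed_mps <= 0:
                return False            # moving away or hovering: don't trigger
            time_to_contact = state.distance_m / state.closing_speed_mps
            return time_to_contact <= actuation_delay_s

        print(should_close_claw(PerchState(distance_m=0.05, closing_speed_mps=0.7)))  # True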

  • Lucky find! How science behind epidemics helped physicists to develop state-of-the-art conductive paint

    In new research published in Nature Communications, University of Sussex scientists demonstrate how a highly conductive paint coating that they have developed mimics the network spread of a virus through a process called ‘explosive percolation’ — a mathematical process which can also be applied to population growth, financial systems and computer networks, but which has not been seen before in materials systems. The finding was a serendipitous development as well as a scientific first for the researchers.
    The process of percolation — the statistical connectivity in a system, such as when water flows through soil or through coffee grounds — is an important component in the development of liquid technology. And it was that process which researchers in the University of Sussex Material Physics group were expecting to see when they added graphene oxide to polymer latex spheres, such as those used in emulsion paint, to make a polymer composite.
    But when they gently heated the graphene oxide to make it electrically conductive, the scientists kick-started a process that saw this conductive system grow exponentially, to the extent that the new material created consumed the network, similar to the way a new strain of a virus can become dominant. This emergent material behaviour led to a new highly conductive paint solution that, because graphene oxide is a cheap and easy-to-mass-produce nanomaterial, is both one of the most affordable and one of the most conductive low-loading composites reported. Before now, it was accepted that such paints or coatings were necessarily one or the other.
    Electrically conductive paints and inks have a range of useful applications in new printed technologies, for example imparting anti-static properties to coatings or making coatings that block electromagnetic interference (EMI), as well as being vital in the development of wearable health monitors.
    Alan Dalton, Professor of Experimental Physics, who heads up the Materials Physics Group at the University of Sussex explains the potential of this serendipitous finding: “My research team and I have been working on developing conductive paints and inks for the last ten years and it was to both my surprise and delight that we have discovered the key to revolutionising this work is a mathematical process that we normally associate with population growth and virus transmission.
    “By enabling us to create highly-conductive polymer composites that are also affordable, thanks to the cheap and scalable nature of graphene oxide, this development opens up the doors to a range of applications that we’ve not even been able to fully consider yet, but which could greatly enhance the sustainability of Electric Vehicle materials — including batteries — as well as having the potential to add conductive coatings to materials, such as ceramics, that aren’t inherently so. We can’t wait to get going on exploring the possibilities.”
    Dr Sean Ogilvie, a research fellow in Professor Dalton’s Materials Physics Group at the University of Sussex, who worked on this development, adds: “The most exciting aspect of these nanocomposites is that we are using a very simple process, similar to applying emulsion paint and drying with a heat gun, which then kickstarts a process creating chemical bridges between the graphene sheets, producing electrical paths which are more conductive than if they were made entirely from graphene.”
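    The Sussex group’s materials model is not reproduced in this summary, but explosive percolation itself is commonly illustrated with a product-rule (Achlioptas) process: at each step two candidate links are drawn at random and only the one joining the smaller clusters is kept, which delays and then dramatically sharpens the emergence of a single dominant network. A rough sketch:

        # Standard product-rule (Achlioptas) illustration of explosive percolation;
        # not the Sussex composite model. Tracks the largest connected cluster.
        import random

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path compression
                x = parent[x]
            return x

        def explosive_percolation(n=10000, steps=12000, seed=1):
            random.seed(seed)
            parent, size = list(range(n)), [1] * n
            largest = 1
            for _ in range(steps):
                # Draw two candidate links; keep the one whose cluster-size product is smaller.
                candidates = [(random.randrange(n), random.randrange(n)) for _ in range(2)]
                a, b = min(candidates,
                           key=lambda e: size[find(parent, e[0])] * size[find(parent, e[1])])
                ra, rb = find(parent, a), find(parent, b)
                if ra != rb:
                    parent[rb] = ra
                    size[ra] += size[rb]
                    largest = max(largest, size[ra])
            return largest / n                  # fraction of sites in the dominant cluster

        print(explosive_percolation())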

  • AI better than human eye at predicting brain metastasis outcomes

    A recent study by York University researchers suggests an innovative artificial intelligence (AI) technique they developed is considerably more effective than the human eye when it comes to predicting therapy outcomes in patients with brain metastases. The team hopes the new research and technology could eventually lead to more tailored treatment plans and better health outcomes for cancer patients.
    “This is a sophisticated and comprehensive analysis of MRIs to find features and patterns that are not usually captured by the human eye,” says York Research Chair Ali Sadeghi-Naini, associate professor of biomedical engineering and computer science in the Lassonde School of Engineering, and lead on the study.
    “We hope our technique, which is a novel AI-based predictive method of detecting radiotherapy failure in brain metastasis, will be able to help oncologists and patients make better informed decisions and adjust treatment in a situation where time is of the essence.”
    Previous studies have shown that, using standard practices such as MRI imaging — assessing the size, location and number of brain metastases — as well as the primary cancer type and overall condition of the patient, oncologists are able to predict treatment failure (defined as continued growth of the tumour) about 65 per cent of the time. The researchers created and tested several AI models, and their best one achieved 83 per cent accuracy.
    Brain metastases are a type of cancerous tumour that develops when primary cancers in the lungs, breasts, colon or other parts of the body spread to the brain via the bloodstream or lymphatic system. While there are various treatment options, stereotactic radiotherapy is one of the more common, with treatment consisting of concentrated doses of radiation targeted at the area with the tumour.
    “Not all of the tumours respond to radiation — up to 30 per cent of these patients have continued growth of their tumour, even after treatment,” Sadeghi-Naini says. “This is often not discovered until months after treatment via follow-up MRI.”
    This delay is time patients with brain metastases cannot afford, as it is a particularly debilitating condition, with most people succumbing to the disease between three months and five years after diagnosis. “It’s very important to predict therapy response even before that therapy begins,” Sadeghi-Naini continues.
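    The York team’s actual model and MRI-derived features are not described in this summary; the sketch below only illustrates the generic pattern of combining clinical variables with image-derived features to predict treatment failure. The feature names and random stand-in data are placeholders.

        # Generic failure-prediction sketch with placeholder data; not the York model.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        n = 120
        clinical = rng.normal(size=(n, 4))    # e.g. tumour size, location, lesion count, primary type (encoded)
        imaging = rng.normal(size=(n, 20))    # stand-in for MRI texture/intensity features
        X = np.hstack([clinical, imaging])
        y = rng.integers(0, 2, size=n)        # 1 = treatment failure (continued tumour growth)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        print(accuracy_score(y_te, model.predict(X_te)))   # about 0.5 on random stand-in data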

  • Designing better battery electrolytes

    Looking at the future of battery materials.
    Designing a battery is a three-part process. You need a positive electrode, you need a negative electrode, and — importantly — you need an electrolyte that works with both electrodes.
    An electrolyte is the battery component that transfers ions — charge-carrying particles — back and forth between the battery’s two electrodes, allowing the battery to charge and discharge. For today’s lithium-ion batteries, electrolyte chemistry is relatively well-defined. For future generations of batteries being developed around the world and at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, however, the question of electrolyte design is wide open.
    “While we are locked into a particular concept for electrolytes that will work with today’s commercial batteries, for beyond-lithium-ion batteries the design and development of different electrolytes will be crucial,” said Shirley Meng, chief scientist at the Argonne Collaborative Center for Energy Storage Science (ACCESS) and professor of molecular engineering at the Pritzker School of Molecular Engineering of The University of Chicago. “Electrolyte development is one key to the progress we will achieve in making these cheaper, longer-lasting and more powerful batteries a reality, and taking one major step towards continuing to decarbonize our economy.”
    In a new paper published in Science, Meng and colleagues laid out their vision for electrolyte design in future generations of batteries.
    Even relatively small departures from today’s batteries will require a rethinking of electrolyte design, according to Meng. Switching from a nickel-containing oxide to a sulfur-based material as the main constituent of a lithium-ion battery’s positive electrode could yield significant performance benefits and reduce costs if scientists can figure out how to rejigger the electrolyte, she said.

  • Study shows how machine learning could predict rare disastrous events, like earthquakes or pandemics

    When it comes to predicting disasters brought on by extreme events (think earthquakes, pandemics or “rogue waves” that could destroy coastal structures), computational modeling faces an almost insurmountable challenge: Statistically speaking, these events are so rare that there’s just not enough data on them to use predictive models to accurately forecast when they’ll happen next.
    But a team of researchers from Brown University and the Massachusetts Institute of Technology says it doesn’t have to be that way.
    In a new study in Nature Computational Science, the scientists describe how they combined statistical algorithms — which need less data to make accurate, efficient predictions — with a powerful machine learning technique developed at Brown and trained it to predict scenarios, probabilities and sometimes even the timeline of rare events despite the lack of historical record on them.
    Doing so, the research team found that this new framework can provide a way to circumvent the massive amounts of data traditionally needed for these kinds of computations, essentially boiling down the grand challenge of predicting rare events to a matter of quality over quantity.
    “You have to realize that these are stochastic events,” said George Karniadakis, a professor of applied mathematics and engineering at Brown and a study author. “An outburst of pandemic like COVID-19, environmental disaster in the Gulf of Mexico, an earthquake, huge wildfires in California, a 30-meter wave that capsizes a ship — these are rare events and because they are rare, we don’t have a lot of historical data. We don’t have enough samples from the past to predict them further into the future. The question that we tackle in the paper is: What is the best possible data that we can use to minimize the number of data points we need?”
    The researchers found the answer in a sequential sampling technique called active learning. These types of statistical algorithms are not only able to analyze data input into them, but more importantly, they can learn from the information to label new relevant data points that are equally or even more important to the outcome that’s being calculated. At the most basic level, they allow more to be done with less.
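    The paper’s specific surrogate model and sampling criterion are not given in this summary; the sketch below shows a generic active-learning loop in which a surrogate model (here a Gaussian process standing in for the Brown technique) is refit after each expensive simulation, and the next sample is placed where the surrogate is least certain, so informative points are found from very few runs.

        # Generic active-learning loop; the simulator and acquisition rule are illustrative.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_simulation(x):
            # Stand-in for a costly model whose rare, sharp response we want to find.
            return np.exp(-50 * (x - 0.7) ** 2) + 0.05 * np.sin(20 * x)

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(4, 1))            # tiny initial design
        y = expensive_simulation(X).ravel()
        candidates = np.linspace(0, 1, 200).reshape(-1, 1)

        for _ in range(15):                           # sequential sampling
            gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
            _, std = gp.predict(candidates, return_std=True)
            x_next = candidates[np.argmax(std)]       # query where the surrogate is least sure
            X = np.vstack([X, [x_next]])
            y = np.append(y, expensive_simulation(x_next)[0])

        print(f"{len(y)} simulations; peak response found near x = {X[np.argmax(y)][0]:.2f}")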

  • When using virtual reality as a teaching tool, context and 'feeling real' matter

    A new study by UCLA psychologists reveals that when VR is used to teach language, context and realism matter.
    The research is published in the journal npj Science of Learning.
    “The context in which we learn things can help us remember them better,” said Jesse Rissman, the paper’s corresponding author and a UCLA associate professor of psychology. “We wanted to know if learning foreign languages in virtual reality environments could improve recall, especially when there was the potential for two sets of words to interfere with each other.”
    Researchers asked 48 English-speaking participants to try to learn 80 words in two phonetically similar African languages, Swahili and Chinyanja, as they navigated virtual reality settings.
    Wearing VR headsets, participants explored one of two environments — a fantasy fairyland or a science fiction landscape — where they could click to learn the Swahili or Chinyanja names for the objects they encountered. Some participants learned both languages in the same VR environment; others learned one language in each environment.
    Participants navigated through the virtual worlds four times over the course of two days, saying the translations aloud each time. One week later, the researchers followed up with a pop quiz to see how well the participants remembered what they had learned.