More stories

  • AI for astrophysics: Algorithms help chart the origins of heavy elements

    The origin of heavy elements in our universe is theorized to be the result of neutron star collisions, which produce conditions hot and dense enough for free neutrons to merge with atomic nuclei and form new elements in a split-second window of time. Testing this theory and answering other astrophysical questions requires predictions for a vast range of atomic nuclear masses. Los Alamos National Laboratory scientists are front and center in using machine learning algorithms (an application of artificial intelligence) to successfully model the atomic masses of the entire nuclide chart — all possible combinations of protons and neutrons that define the elements and their isotopes.
    “Many thousands of atomic nuclei that have yet to be measured may exist in nature,” said Matthew Mumpower, a theoretical physicist and co-author on several recent papers detailing atomic masses research. “Machine learning algorithms are very powerful, as they can find complex correlations in data, a result that theoretical nuclear physics models struggle to efficiently produce. These correlations can provide information to scientists about ‘missing physics’ and can in turn be used to strengthen modern nuclear models of atomic masses.”
    Simulating the rapid neutron-capture process
    Most recently, Mumpower and his colleagues, including former Los Alamos summer student Mengke Li and postdoc Trevor Sprouse, authored a paper in Physics Letters B that described simulating an important astrophysical process with a physics-based machine learning mass model. The r process, or rapid neutron-capture process, is the astrophysical process that occurs in extreme environments, like those produced by neutron star collisions. Heavy elements may result from this “nucleosynthesis”; in fact, half of the heavy isotopes up to bismuth and all of thorium and uranium in the universe may have been created by the r process.
    But modeling the r process requires theoretical predictions of atomic masses currently beyond experimental reach. The team’s physics-informed machine-learning approach trains a model based on random selection from the Atomic Mass Evaluation, a large database of masses. Next the researchers use these predicted masses to simulate the r process. The model allowed the team to simulate r-process nucleosynthesis with machine-learned mass predictions for the first time — a significant feat, as machine learning predictions generally break down when extrapolating.
    “We’ve shown that machine learning atomic masses can open the door to predictions beyond where we have experimental data,” Mumpower said. “The critical piece is that we tell the model to obey the laws of physics. By doing so, we enable physics-based extrapolations. Our results are on par with or outperform contemporary theoretical models and can be immediately updated when new data is available.”
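    The "physics-informed" idea can be sketched in a few lines: use a known analytic mass formula as the baseline and learn only the residual from measured nuclei, so that far from data the prediction falls back on physics rather than an unconstrained fit. The semi-empirical mass formula and the distance-weighted correction below are illustrative stand-ins, not the Los Alamos team's actual model.

```python
import numpy as np

def semf_binding(Z, N):
    """Semi-empirical (liquid-drop) mass formula: binding energy in MeV.

    This is the 'physics' part -- a textbook analytic baseline, used here
    as an illustrative stand-in for a modern nuclear model.
    """
    A = Z + N
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    if Z % 2 == 0 and N % 2 == 0:
        pairing = aP / np.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -aP / np.sqrt(A)
    else:
        pairing = 0.0
    return (aV * A - aS * A ** (2 / 3) - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (N - Z) ** 2 / A + pairing)

def predict_binding(Z, N, train):
    """Baseline plus a residual learned from measured nuclei.

    `train` maps (Z, N) -> measured binding energy (MeV). The correction is
    a distance-weighted average of residuals over the chart of nuclides, so
    far from any data the prediction reverts to the physics baseline.
    """
    points = np.array(list(train.keys()), dtype=float)
    residuals = np.array([b - semf_binding(z, n) for (z, n), b in train.items()])
    d2 = ((points - np.array([Z, N], dtype=float)) ** 2).sum(axis=1)
    w = np.exp(-d2 / 4.0)              # nearby nuclei dominate the correction
    if w.sum() < 1e-12:                # no data nearby: pure physics extrapolation
        return semf_binding(Z, N)
    return semf_binding(Z, N) + float(w @ residuals / w.sum())

# Physics baseline alone for Fe-56 (experimental value: ~492.3 MeV)
print(f"Fe-56 binding: {semf_binding(26, 30):.1f} MeV")
```

    The key property, which the real model shares in spirit, is that the machine-learned piece corrects the physics where data exist and vanishes where they do not.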
    Investigating nuclear structures
    The r-process simulations complement the research team’s application of machine learning to related investigations of nuclear structure. In a recent article in Physical Review C selected as an Editor’s Suggestion, the team used machine learning algorithms to reproduce nuclear binding energies with quantified uncertainties; that is, they were able to ascertain the energy needed to separate an atomic nucleus into protons and neutrons, along with an associated error bar for each prediction. The algorithm thus provides information that would otherwise take significant computational time and resources to obtain from current nuclear modeling.
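    One common way to attach an error bar to each prediction (shown here for illustration; not necessarily the method used in the paper) is a bootstrap ensemble: refit the model many times on resampled data and report the spread of its predictions. The linear toy data below stand in for measured binding energies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for measured data: y = 2x plus noise.
x = np.linspace(0, 10, 40)
y = 2.0 * x + rng.normal(0, 0.5, size=x.size)

def ensemble_predict(x_new, n_models=200):
    """Bootstrap ensemble: each member is fit on resampled data; the
    spread of member predictions quantifies the uncertainty."""
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, x.size, x.size)      # resample with replacement
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        preds.append(slope * x_new + intercept)
    preds = np.array(preds)
    return preds.mean(), preds.std()               # prediction and its error bar

mean, err = ensemble_predict(5.0)
print(f"prediction: {mean:.2f} +/- {err:.2f}")
```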
    In related work, the team used their machine learning model to combine precision experimental data with theoretical knowledge. These results have motivated some of the first experimental campaigns at the new Facility for Rare Isotope Beams, which seeks to expand the known region of the nuclear chart and uncover the origin of the heavy elements.

  • Robot ANYmal can do parkour and walk across rubble

    ANYmal has for some time had no problem coping with the stony terrain of Swiss hiking trails. Now researchers at ETH Zurich have taught this quadrupedal robot some new skills: it is proving rather adept at parkour, the popular sport of using athletic manoeuvres to smoothly negotiate obstacles in an urban environment. ANYmal is also proficient at dealing with the tricky terrain commonly found on building sites or in disaster areas.
    To teach ANYmal these new skills, two teams, both from the group led by ETH Professor Marco Hutter of the Department of Mechanical and Process Engineering, followed different approaches.
    Exhausting the mechanical options
    Working in one of the teams is ETH doctoral student Nikita Rudin, who does parkour in his free time. “Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential,” he says, “but I had a different opinion. In fact, I was sure that a lot more could be done with the mechanics of legged robots.”
    With his own parkour experience in mind, Rudin set out to further push the boundaries of what ANYmal could do. And he succeeded, by using machine learning to teach the quadrupedal robot new skills. ANYmal can now scale obstacles and perform dynamic manoeuvres to jump back down from them.
    In the process, ANYmal learned like a child would — through trial and error. Now, when presented with an obstacle, ANYmal uses its camera and artificial neural network to determine what kind of impediment it’s dealing with. It then performs movements that seem likely to succeed based on its previous training.
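    Trial-and-error learning of this kind can be illustrated with tabular Q-learning on a toy obstacle course (a drastic simplification of the robot's actual training), where the agent must discover through failures that the right action just before the obstacle is to jump:

```python
import numpy as np

# Toy obstacle course: cells 0..5, an obstacle at cell 3, goal at cell 5.
# Actions: 0 = walk, 1 = jump. Walking into the obstacle fails (reset to
# start); jumping anywhere else just wastes energy. All numbers illustrative.
N_CELLS, OBSTACLE, GOAL = 6, 3, 5
rng = np.random.default_rng(0)
Q = np.zeros((N_CELLS, 2))

def step(pos, action):
    nxt = pos + 1
    if nxt == OBSTACLE and action == 0:      # walked straight into the obstacle
        return 0, -1.0                       # back to the start, with a penalty
    reward = -0.1 if action == 1 and nxt != OBSTACLE else 0.0  # jumps cost energy
    if nxt == GOAL:
        return nxt, 1.0
    return nxt, reward

for episode in range(500):                   # trial and error, episode by episode
    pos = 0
    while pos != GOAL:
        a = rng.integers(2) if rng.random() < 0.2 else int(Q[pos].argmax())
        nxt, r = step(pos, a)
        # Standard Q-learning update: nudge Q toward reward + discounted future
        Q[pos, a] += 0.5 * (r + 0.9 * Q[nxt].max() - Q[pos, a])
        pos = nxt

policy = Q.argmax(axis=1)   # learned behaviour: jump only right before cell 3
```

    The robot's real training uses deep reinforcement learning over camera input and continuous joint commands, but the learning signal is the same kind of thing: actions that lead to failure lose value, actions that clear the obstacle gain it.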
    Is that the full extent of what’s technically possible? Rudin suggests that this is largely the case for each individual new skill. But he adds that this still leaves plenty of potential improvements. These include allowing the robot to move beyond solving predefined problems and instead asking it to negotiate difficult terrain like rubble-strewn disaster areas.
    Combining new and traditional technologies
    Getting ANYmal ready for precisely that kind of application was the goal of the other project, conducted by Rudin’s colleague and fellow ETH doctoral student Fabian Jenelten. But rather than relying on machine learning alone, Jenelten combined it with a tried-and-tested approach used in control engineering known as model-based control. This provides an easier way of teaching the robot accurate manoeuvres, such as how to recognise and get past gaps and recesses in piles of rubble. In turn, machine learning helps the robot master movement patterns that it can then flexibly apply in unexpected situations. “Combining both approaches lets us get the most out of ANYmal,” Jenelten says.
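    The division of labour can be sketched as follows: a model-based controller handles the nominal task, while a term learned from experience absorbs what the model misses. Everything below (the PD gains, the constant disturbance, the one-shot "learning" step) is illustrative, not ANYmal's control stack.

```python
# A point mass must reach a target position. The model-based part is a PD
# control law; the true dynamics hide a constant disturbance the model does
# not know about, which a learned correction then compensates.
dt, target, disturbance = 0.01, 1.0, -2.0

def simulate(learned_correction=0.0, steps=2000):
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        # Model-based part: PD law from the nominal (disturbance-free) model
        force = 40.0 * (target - pos) - 12.0 * vel + learned_correction
        acc = force + disturbance        # true dynamics include the disturbance
        vel += acc * dt
        pos += vel * dt
    return pos

# 'Learning' from experience: the steady-state error reveals the unmodeled
# force, and inverting the known proportional gain recovers it.
residual = target - simulate(0.0)
correction = 40.0 * residual
final_error = abs(target - simulate(correction))
print(f"error without learning: {abs(residual):.3f}, with learning: {final_error:.6f}")
```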
    As a result, the quadrupedal robot is now better at gaining a sure footing on slippery surfaces or unstable boulders. ANYmal is soon also to be deployed on building sites or anywhere that is too dangerous for people — for instance to inspect a collapsed house in a disaster area.

  • Scientists use novel technique to create new energy-efficient microelectronic device

    Breakthrough could help lead to the development of new low-power semiconductors or quantum devices.
    As the integrated circuits that power our electronic devices get more powerful, they are also getting smaller. This trend toward denser microelectronics has only accelerated in recent years as scientists try to fit ever more semiconducting components on a chip.
    Microelectronics face a key challenge because of their small size. To avoid overheating, microelectronics need to consume only a fraction of the electricity of conventional electronics while still operating at peak performance.
    Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have achieved a breakthrough that could allow for a new kind of microelectronic material to do just that. In a new study published in Advanced Materials, the Argonne team proposed a new kind of “redox gating” technique that can control the movement of electrons in and out of a semiconducting material.
    “Redox” refers to a chemical reaction that causes a transfer of electrons. Microelectronic devices typically rely on an electric “field effect” to control the flow of electrons to operate. In the experiment, the scientists designed a device that could regulate the flow of electrons from one end to another by applying a voltage — essentially, a kind of pressure that pushes electricity — across a material that acted as a kind of electron gate. When the voltage reached a certain threshold, roughly half of a volt, the material would begin to inject electrons through the gate from a source redox material into a channel material.
    By using the voltage to modify the flow of electrons, the semiconducting device could act like a transistor, switching between more conducting and more insulating states.
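    The switching behaviour described above can be caricatured with a simple threshold conductance model. The numbers below are illustrative, with only the roughly half-volt threshold taken from the article; the real device physics is of course far richer.

```python
import numpy as np

# Cartoon of the described gating behaviour: below ~0.5 V the gate injects
# essentially no electrons (insulating state); above it, injection -- and
# hence channel conductance -- rises steeply (conducting state).
V_THRESHOLD = 0.5   # volts, the rough threshold quoted in the article

def channel_conductance(gate_voltage, g_off=1e-9, g_on=1e-3, width=0.02):
    """Smooth switch between insulating (g_off) and conducting (g_on)
    states; g_off, g_on and width are illustrative parameters."""
    x = (gate_voltage - V_THRESHOLD) / width
    return g_off + (g_on - g_off) / (1.0 + np.exp(-x))

# Transistor-like behaviour: a small swing in gate voltage around the
# threshold modulates conductance by orders of magnitude.
on_off_ratio = channel_conductance(0.6) / channel_conductance(0.3)
print(f"on/off ratio for a 0.3 V swing: {on_off_ratio:.0f}")
```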
    “The new redox gating strategy allows us to modulate the electron flow by an enormous amount even at low voltages, offering much greater power efficiency,” said Argonne materials scientist Dillon Fong, an author of the study. “This also prevents damage to the system. We see that these materials can be cycled repeatedly with almost no degradation in performance.”
    “Controlling the electronic properties of a material also has significant advantages for scientists seeking emergent properties beyond conventional devices,” said Argonne materials scientist Wei Chen, one of the study’s co-corresponding authors.

    “The subvolt regime, which is where this material operates, is of enormous interest to researchers looking to make circuits that act similarly to the human brain, which also operates with great energy efficiency,” he said.
    The redox gating phenomenon could also be useful for creating new quantum materials whose phases could be manipulated at low power, said Argonne physicist Hua Zhou, another co-corresponding author of the study. Moreover, the redox gating technique may extend across versatile functional semiconductors and low-dimensional quantum materials composed of sustainable elements.
    Work done at Argonne’s Advanced Photon Source, a DOE Office of Science user facility, helped characterize the redox gating behavior.
    Additionally, Argonne’s Center for Nanoscale Materials, also a DOE Office of Science user facility, was used for materials synthesis, device fabrication and electrical measurements of the device.
    A paper based on the study, “Redox Gating for Colossal Carrier Modulation and Unique Phase Control,” appeared in the Jan. 6, 2024 issue of Advanced Materials. In addition to Fong, Chen and Zhou, contributing authors include Le Zhang, Changjiang Liu, Hui Cao, Andrew Erwin, Anand Bhattacharya, Luping Yu, Liliana Stan, Chongwen Zou and Matthew V. Tirrell.
    The work was funded by DOE’s Office of Science, Office of Basic Energy Sciences, and Argonne’s laboratory-directed research and development program.

  • Supply chain disruptions will further exacerbate economic losses from climate change

    When the cascading impact on global supply chains is factored in, global GDP losses from climate change increase exponentially the warmer the planet gets, finds a new study led by UCL researchers.
    The study, published in Nature, is the first to chart the “indirect economic losses” that climate change inflicts through global supply chains — losses that will reach regions otherwise less affected by projected warming temperatures.
    These previously unquantified disruptions in supply chains will further exacerbate projected economic losses due to climate change, bringing a projected net economic loss of between $3.75 trillion and $24.7 trillion in adjusted 2020 dollars by 2060, depending on how much carbon dioxide gets emitted.
    Senior author Professor Dabo Guan (UCL Bartlett School of Sustainable Construction) said: “These projected economic impacts are staggering. These losses get worse the more the planet warms, and when you factor in the effects on global supply chains it shows how everywhere is at economic risk.”
    As the global economy has grown more interconnected, disruptions in one part of the world have knock-on effects elsewhere in the world, sometimes in unexpected ways. Crop failures, labour slowdowns and other economic disruptions in one region can affect the supplies of raw materials flowing to other parts of the world that depend on them, disrupting manufacturing and trade in faraway regions. This is the first study to analyse and quantify the propagation of these disruptions from climate change, as well as their economic impacts.
    The warmer the Earth gets, the worse off economically it becomes, with compounding damage and economic losses climbing exponentially over time. Climate change disrupts the global economy primarily through the health costs of heat exposure, work stoppages when it is too hot to work, and economic disruptions cascading through supply chains.
    The researchers compared expected economic losses across three projected global warming scenarios, called “Shared Socioeconomic Pathways,” based on low, medium and high projected global emissions levels. The best-case scenario would see global temperatures rise by only 1.5 degrees C over preindustrial levels by 2060; the middle track, which most experts believe Earth is on now, would see a rise of around 3 degrees C; and the worst-case scenario would see a rise of 7 degrees C.

    By 2060, projected economic losses will be nearly five times as high under the highest emissions path as under the lowest, with losses getting progressively worse the warmer it gets. Total GDP losses by 2060 will amount to 0.8% under 1.5 degrees of warming, 2.0% under 3 degrees and 3.9% under 7 degrees.
    The team calculated that supply chain disruptions also get progressively worse the warmer the climate gets, accounting for a greater and greater proportion of economic losses. By 2060, supply chain losses will amount to 0.1% of total global GDP (13% of the total GDP lost) under 1.5 degrees of warming, 0.5% of total GDP (25% of the total GDP lost) under 3 degrees, and 1.5% of total GDP (38% of the total GDP lost) under 7 degrees.
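    These shares can be cross-checked directly from the quoted figures — for instance, 0.5% of GDP out of a 2.0% total loss is indeed a quarter of the losses:

```python
# Cross-check of the figures quoted above: supply-chain losses as a share
# of total climate-driven GDP loss in each warming scenario.
scenarios = {
    "1.5 C": {"total_gdp_loss_pct": 0.8, "supply_chain_loss_pct": 0.1, "quoted_share": 13},
    "3 C":   {"total_gdp_loss_pct": 2.0, "supply_chain_loss_pct": 0.5, "quoted_share": 25},
    "7 C":   {"total_gdp_loss_pct": 3.9, "supply_chain_loss_pct": 1.5, "quoted_share": 38},
}

for name, s in scenarios.items():
    share = 100 * s["supply_chain_loss_pct"] / s["total_gdp_loss_pct"]
    # Each computed share lands within a point of the article's rounded figure
    print(f"{name}: supply chains account for {share:.1f}% of losses "
          f"(article: {s['quoted_share']}%)")
```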
    Co-lead author, Dr Daoping Wang of King’s College London, said: “The negative impacts of extreme heat sometimes occur quietly on global supply chains, even escaping our notice altogether. Our developed Disaster Footprint model tracks and visually represents these impacts, underlining the imperative for global collaborative efforts in adapting to extreme heat.”
    For example, although extreme heat events occur more often in low-latitude countries, high-latitude regions, such as Europe or the United States, are also at significant risk. Future extreme heat is likely to cost Europe and the US about 2.2% and about 3.5% of their GDP respectively under the high emission scenario. The UK would lose about 1.5% of its GDP, with chemical products, tourism and electrical equipment industries suffering the greatest losses. Some of these losses originate from supply chain fluctuations caused by extreme heat in countries close to the equator.
    The direct human cost is likewise significant. Even under the lowest path, 2060 will see 24% more days of extreme heatwaves and an additional 590,000 heatwave deaths annually, while under the highest path there would be more than twice as many heatwaves and an expected 1.12 million additional annual heatwave deaths. These impacts will not be evenly distributed around the world, but countries situated near to the equator will bear the brunt of climate change, particularly developing countries.
    Co-lead author, Yida Sun from Tsinghua University said: “Developing countries suffer disproportionate economic losses compared to their carbon emissions. As multiple nodes in developing countries are hit simultaneously, economic damage can spread rapidly through the global value chain.”
    The researchers highlighted two illustrative examples of industries that are part of supply chains at risk from climate change: Indian food production and tourism in the Dominican Republic.

    The Indian food industry is heavily reliant on imports of fats and oils from Indonesia and Malaysia, Brazilian sugar, as well as vegetables, fruits and nuts from Southeast Asia and Africa. These supplier countries are among those most affected by climate change, diminishing India’s access to raw materials, which will diminish its food exports. As a result, the economies of countries reliant on these foods will feel the pinch of diminished supply and higher prices.
    The Dominican Republic is expected to see a decline in tourism as its climate grows too warm to attract vacationers. In a nation whose economy is heavily reliant on tourism, this slowdown will hurt tourism-reliant industries including manufacturing, construction, insurance, financial services, and electronic equipment.
    Professor Guan said: “This research is an important reminder that preventing every additional degree of climate change is critical. Understanding which nations and industries are most vulnerable is crucial for devising effective and targeted adaptation strategies.”

  • New AI technology enables 3D capture and editing of real-life objects

    Imagine performing a sweep around an object with your smartphone and getting a realistic, fully editable 3D model that you can view from any angle — this is fast becoming reality, thanks to advances in AI.
    Researchers at Simon Fraser University (SFU) in Canada have unveiled new AI technology for doing exactly this. Soon, rather than merely taking 2D photos, everyday consumers will be able to take 3D captures of real-life objects and edit their shapes and appearance as they wish, just as easily as they would with regular 2D photos today.
    In a new paper presented at the annual flagship international conference on AI research, the Conference on Neural Information Processing Systems (NeurIPS) in New Orleans, Louisiana, researchers demonstrated a new technique called Proximity Attention Point Rendering (PAPR) that can turn a set of 2D photos of an object into a cloud of 3D points that represents the object’s shape and appearance. Each point then gives users a knob to control the object with — dragging a point changes the object’s shape, and editing the properties of a point changes the object’s appearance. In a process known as “rendering,” the 3D point cloud can then be viewed from any angle and turned into a 2D photo that shows the edited object as if the photo was taken from that angle in real life.
    Using the new AI technology, researchers showed how a statue can be brought to life — the technology automatically converted a set of photos of the statue into a 3D point cloud, which is then animated. The end result is a video of the statue turning its head from side to side as the viewer is guided on a path around it.
    “AI and machine learning are really driving a paradigm shift in the reconstruction of 3D objects from 2D images. The remarkable success of machine learning in areas like computer vision and natural language is inspiring researchers to investigate how traditional 3D graphics pipelines can be re-engineered with the same deep learning-based building blocks that were responsible for the runaway AI success stories of late,” said Dr. Ke Li, an assistant professor of computer science at Simon Fraser University (SFU), director of the APEX lab and the senior author on the paper. “It turns out that doing so successfully is a lot harder than we anticipated and requires overcoming several technical challenges. What excites me the most is the many possibilities this brings for consumer technology — 3D may become as common a medium for visual communication and expression as 2D is today.”
    One of the biggest challenges in 3D is how to represent 3D shapes in a way that allows users to edit them easily and intuitively. One previous approach, known as neural radiance fields (NeRFs), does not allow for easy shape editing because it needs the user to provide a description of what happens to every continuous coordinate. A more recent approach, known as 3D Gaussian splatting (3DGS), is also not well-suited for shape editing because the shape surface can get pulverized or torn to pieces after editing.
    A key insight came when the researchers realized that instead of considering each 3D point in the point cloud as a discrete splat, they can think of each as a control point in a continuous interpolator. Then when the point is moved, the shape changes automatically in an intuitive way. This is similar to how animators define the motion of objects in animated videos — by specifying the positions of objects at a few points in time, their motion at every point in time is automatically generated by an interpolator.

    However, how to mathematically define an interpolator between an arbitrary set of 3D points is not straightforward. The researchers formulated a machine learning model that can learn the interpolator in an end-to-end fashion using a novel mechanism known as proximity attention.
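    The control-point idea can be sketched with a fixed proximity kernel: the value at a query point is an attention-weighted sum over the point cloud, with weights given by a softmax over negative squared distance, so dragging one point deforms the field smoothly. In PAPR itself the weighting is learned end-to-end rather than fixed as here.

```python
import numpy as np

def proximity_interpolate(query, points, values, temperature=0.1):
    """Attention-weighted interpolation: each 3D control point contributes
    according to a softmax over (negative squared) distance to the query.

    Simplified sketch of the idea -- the actual PAPR mechanism learns the
    attention end-to-end instead of using this fixed kernel.
    """
    d2 = ((points - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / temperature)
    w /= w.sum()                       # softmax over proximity
    return w @ values

# Three control points carrying scalar 'appearance' values
points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
values = np.array([1.0, 2.0, 3.0])

query = np.array([0.5, 0.0, 0.0])
before = proximity_interpolate(query, points, values)

# Dragging a control point reshapes the interpolated field smoothly:
points[1] = [2.0, 0.0, 0.0]
after = proximity_interpolate(query, points, values)
print(f"value at query before drag: {before:.2f}, after: {after:.2f}")
```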
    In recognition of this technological leap, the paper was awarded a spotlight at the NeurIPS conference, an honour reserved for the top 3.6% of paper submissions to the conference.
    The research team is excited for what’s to come. “This opens the way to many applications beyond what we’ve demonstrated,” said Dr. Li. “We are already exploring various ways to leverage PAPR to model moving 3D scenes and the results so far are incredibly promising.”
    The authors of the paper are Yanshu Zhang, Shichong Peng, Alireza Moazeni and Ke Li. Zhang and Peng are co-first authors; Zhang, Peng and Moazeni are PhD students at the School of Computing Science; and all are members of the APEX Lab at Simon Fraser University (SFU).

  • Scientists develop ultra-thin semiconductor fibers that turn fabrics into wearable electronics

    Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed ultra-thin semiconductor fibres that can be woven into fabrics, turning them into smart wearable electronics.
    To function reliably, semiconductor fibres must be flexible and free of defects for stable signal transmission. However, existing manufacturing methods cause stress and instability, leading to cracks and deformities in the semiconductor cores, negatively impacting their performance and limiting their development.
    NTU scientists conducted modelling and simulations to understand how stress and instability occur during the manufacturing process. They found that the challenge could be overcome through careful material selection and a specific series of steps taken during fibre production.
    They developed a mechanical design and successfully fabricated hair-thin, defect-free fibres spanning 100 metres, which indicates the method’s market scalability. Importantly, the new fibres can be woven into fabrics using existing methods.
    To demonstrate their fibres’ high quality and functionality, the NTU research team developed prototypes. These included a smart beanie hat to help a visually impaired person cross the road safely through alerts on a mobile phone application; a shirt that receives information and transmits it through an earpiece, like a museum audio guide; and a smartwatch with a strap that functions as a flexible sensor that conforms to the wrist of users for heart rate measurement even during physical activities.
    The team believes that their innovation is a fundamental breakthrough in the development of semiconductor fibres that are ultra-long and durable, meaning they are cost-effective and scalable while offering excellent electrical and optoelectronic performance (the ability to sense, transmit and interact with light).
    NTU Associate Professor Wei Lei at the School of Electrical and Electronic Engineering (EEE) and lead-principal investigator of the study said, “The successful fabrication of our high-quality semiconductor fibres is thanks to the interdisciplinary nature of our team. Semiconductor fibre fabrication is a highly complex process, requiring know-how from materials science, mechanical, and electrical engineering experts at different stages of the study. The collaborative team effort allowed us a clear understanding of the mechanisms involved, which ultimately helped us unlock the door to defect-free threads, overcoming a long-standing challenge in fibre technology.”
    The study, published in the top scientific journal Nature, is aligned with the University’s commitment to fostering innovation and translating research into practical solutions that benefit society under its NTU2025 five-year strategic plan.

    Developing semiconductor fibre
    To develop their defect-free fibres, the NTU-led team selected pairs of a common semiconductor material and a synthetic glass — a silicon semiconductor core with a silica glass tube and a germanium core with an aluminosilicate glass tube. The materials were selected for their complementary attributes, including thermal stability, electrical conductivity and resistivity (a material’s opposition to the flow of electric current).
    Silicon was selected for its ability to be heated to high temperatures and manipulated without degrading, and for its ability to work in the visible light range, making it ideal for use in devices meant for extreme conditions, such as sensors on the protective clothing of firefighters. Germanium, on the other hand, allows electrons to move through the fibre quickly (carrier mobility) and works in the infrared range, which makes it suitable for applications in wearable or fabric-based (i.e. curtains, tablecloths) sensors that are compatible with indoor light fidelity (LiFi) wireless optical networks.
    Next, the scientists inserted the semiconductor material (core) inside the glass tube, heating it at high temperature until the tube and core were soft enough to be pulled into a thin continuous strand.
    Due to the different melting points and thermal expansion rates of their selected materials, the glass functioned like a wine bottle during the heating process, containing the semiconductor material which, like wine, fills the bottle, as it melted.
    First author of the study Dr Wang Zhixun, Research Fellow in the School of EEE, said, “It took extensive analysis before landing on the right combination of materials and process to develop our fibres. By exploiting the different melting points and thermal expansion rates of our chosen materials, we successfully pulled the semiconductor materials into long threads as they entered and exited the heating furnace while avoiding defects.”
    Once the strand cools, the glass is removed and the semiconductor core is combined with a polymer tube and metal wires. After another round of heating, the materials are pulled to form a hair-thin, flexible thread.

    In lab experiments, the semiconductor fibres showed excellent performance. When subjected to responsivity tests, the fibres could detect light across the entire range from ultraviolet to infrared and robustly transmit signals of up to 350 kilohertz (kHz) bandwidth, making them top performers of their kind. Moreover, the fibres were 30 times tougher than regular ones.
    The fibres were also evaluated for washability: a cloth woven with semiconductor fibres was cleaned in a washing machine ten times, with results showing no significant drop in fibre performance.
    Co-principal investigator, Distinguished University Professor Gao Huajian, who completed the study while he was at NTU, said, “Silicon and germanium are two widely used semiconductors which are usually considered highly brittle and prone to fracture. The fabrication of ultra-long semiconductor fibre demonstrates the possibility and feasibility of making flexible components using silicon and germanium, providing extensive space for the development of flexible wearable devices of various forms. Next, our team will work collaboratively to apply the fibre manufacturing method to other challenging materials and to discover more scenarios where the fibres play key roles.”
    Compatibility with industry’s production methods hints at easy adoption
    To demonstrate the feasibility of use in real-life applications, the team built smart wearable electronics using their newly created semiconductor fibres. These include a beanie, a sweater, and a watch that can detect and process signals.
    To create a device that assists the visually impaired in crossing busy roads, the NTU team wove fibres into a beanie hat, along with an interface board. When tested experimentally outdoors, light signals received by the beanie were sent to a mobile phone application, triggering an alert.
    A shirt woven with the fibres, meanwhile, functioned as a ‘smart top’, which could be worn at a museum or art gallery to receive information about exhibits and feed it into an earpiece as the wearer walked around the rooms.
    A smartwatch with a wrist band integrated with the fibres functioned as a flexible, conformal heart-rate sensor. In traditional designs, a rigid sensor installed on the body of the smartwatch may lose contact with the skin and become unreliable when users are very active. Moreover, the fibres replaced bulky sensors in the body of the smartwatch, saving space and freeing up opportunities for slimmer watch designs.
    Co-author Dr Li Dong, a Research Fellow in the School of Mechanical and Aerospace Engineering, said, “Our fibre fabrication method is versatile and easily adopted by industry. The fibre is also compatible with current textile industry machinery, meaning it has the potential for large-scale production. By demonstrating the fibres’ use in everyday wearable items like a beanie and a watch, we prove that our research findings can serve as a guide to creating functional semiconductor fibres in the future.”
    For their next steps, the researchers plan to expand the types of materials used for the fibres and to develop semiconductors with differently shaped hollow cores, such as rectangular and triangular profiles, to broaden their applications.

  • Artificial intelligence detects heart defects in newborns

    Many children announce their arrival in the delivery room with a piercing cry. As a newborn automatically takes its first breath, the lungs inflate, the blood vessels in the lungs widen, and the whole circulatory system reconfigures itself to life outside the womb. This process doesn’t always go to plan, however. Some infants — particularly those who are very sick or born prematurely — suffer from pulmonary hypertension, a serious disorder in which the arteries to the lungs remain narrowed after delivery or close up again in the first few days or weeks after birth. This constricts the flow of blood to the lungs, reducing the amount of oxygen in the blood.
    Prompt diagnosis and treatment improve prognosis
    Severe cases of pulmonary hypertension need to be detected and treated as rapidly as possible. The sooner treatment begins, the better the prognosis for the newborn infant. Yet making the correct diagnosis can be challenging. Only experienced paediatric cardiologists are able to diagnose pulmonary hypertension based on a comprehensive ultrasound examination of the heart. “Detecting pulmonary hypertension is time-consuming and requires a cardiologist with highly specific expertise and many years of experience. Only the largest paediatric clinics tend to have those skills on hand,” says Professor Sven Wellmann, Medical Director of the Department of Neonatology at KUNO Klinik St. Hedwig, part of the Hospital of the Order of St. John in Regensburg in Germany.
    Researchers from the group led by Julia Vogt, who runs the Medical Data Science Group at ETH Zurich, recently teamed up with neonatologists at KUNO Klinik St. Hedwig to develop a computer model that provides reliable support in diagnosing the disease in newborn infants. Their results have now been published in the International Journal of Computer Vision.
    Making AI reliable and explainable
    The ETH researchers began by training their algorithm on hundreds of video recordings taken from ultrasound examinations of the hearts of 192 newborns. This dataset included moving images of the beating heart taken from different angles, together with diagnoses by experienced paediatric cardiologists (whether pulmonary hypertension was present or not) and an evaluation of the disease’s severity (“mild” or “moderate to severe”). To determine how well the algorithm interprets the images, the researchers then tested it on a second dataset of ultrasound images from 78 newborn infants, which the model had never seen before. The model suggested the correct diagnosis in around 80 to 90 percent of cases and determined the correct level of disease severity in around 65 to 85 percent of cases.
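    The held-out evaluation described above — training on one cohort, then scoring the model’s predictions against the cardiologists’ diagnoses on an unseen cohort — can be sketched as follows. This is a minimal illustration with made-up labels, not the researchers’ actual code:

    ```python
    def accuracy(predictions, labels):
        """Fraction of cases where the model's diagnosis matches the cardiologist's label."""
        assert len(predictions) == len(labels) and labels, "need equal-length, non-empty lists"
        correct = sum(p == t for p, t in zip(predictions, labels))
        return correct / len(labels)

    # Hypothetical held-out test set (1 = pulmonary hypertension present, 0 = absent).
    cardiologist_labels = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    model_predictions   = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]

    print(f"diagnostic accuracy: {accuracy(model_predictions, cardiologist_labels):.0%}")
    # prints "diagnostic accuracy: 80%"
    ```

    In practice the severity grading would be scored the same way, with three labels (none, mild, moderate to severe) instead of two.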
    “The key to using a machine-learning model in a medical context is not just the prediction accuracy, but also whether humans are able to understand the criteria the model uses to make decisions,” Vogt says. Her model makes this possible by highlighting the parts of the ultrasound image on which its categorisation is based. This allows doctors to see exactly which areas or characteristics of the heart and its blood vessels the model considered to be suspicious. When the paediatric cardiologists examined the datasets, they discovered that the model looks at the same characteristics as they do, even though it was not explicitly programmed to do so.
    A human makes the diagnosis
    This machine-learning model could potentially be extended to other organs and diseases, for example to diagnose heart septal defects or valvular heart disease.
    It could also be useful in regions where no specialists are available: standardised ultrasound images could be taken by a healthcare professional, and the model could then provide a preliminary risk assessment and an indication of whether a specialist should be consulted. Medical facilities that do have access to highly qualified specialists could use the model to ease their workload and to help reach a better and more objective diagnosis. “AI has the potential to make significant improvements to healthcare. The crucial issue for us is that the final decision should always be made by a human, by a doctor. AI should simply be providing support to ensure that the maximum number of people can receive the best possible medical care,” Vogt says.

    Opening new doors in the VR world, literally

    Room-scale virtual reality (VR) is a form of VR in which users explore a virtual environment by physically walking through it. Its highly immersive experience offers many benefits, but it also has drawbacks: it requires a large physical space, and it lacks haptic feedback when users touch virtual objects.
    Take, for example, opening a door. Implementing this seemingly mundane task in the virtual world means recreating the haptics of grasping a doorknob whilst simultaneously preventing users from walking into the actual walls around them.
    Now, a research group has developed a new system to overcome this problem: RedirectedDoors+.
    The group was led by Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura from Tohoku University and Morten Fjeld from Chalmers University of Technology and the University of Bergen.
    “Our system, which builds upon an existing visuo-haptic door-opening redirection technique, allows us to subtly manipulate users’ walking direction while they open doors in VR, guiding them away from real walls,” points out Professor Fujita, who is based at Tohoku University’s Research Institute of Electrical Communication (RIEC). “At the same time, our system reproduces the realistic haptics of touching a doorknob, enhancing the quality of the experience.”
    To provide users with that experience, RedirectedDoors+ employs a small number of ‘door robots.’ The robots have a doorknob-shaped attachment and can move in any direction, giving immediate touch feedback when the user interacts with the doorknob. In addition, the VR environment rotates in sync with the door movement, ensuring the user stays within the physical space limits.
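    The redirection principle — rotating the virtual scene slightly more or less than the physical door swing, so that the user’s real-world heading drifts away from walls without them noticing — might be sketched like this. The gain value here is hypothetical and the function is an illustration, not the authors’ implementation:

    ```python
    def redirected_door_rotation(door_swing_deg, gain):
        """Return the virtual scene rotation and the extra rotation injected
        when the physical door swing is scaled by a redirection gain."""
        virtual_deg = door_swing_deg * gain          # rotation the user sees in VR
        injected_deg = virtual_deg - door_swing_deg  # real-world heading drift
        return virtual_deg, injected_deg

    # Hypothetical: the user swings a door 60 degrees under a 1.2x rotation gain,
    # so the scene turns 72 degrees while 12 degrees of real-world drift is injected.
    virtual, injected = redirected_door_rotation(60.0, 1.2)
    print(virtual, injected)
    ```

    Keeping the gain close to 1 keeps the injected rotation below the user’s perceptual threshold, which is what makes the steering feel seamless.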
    A simulation study evaluating the system’s performance demonstrated that the required physical space could be significantly reduced across six different VR environments. A validation study in which 12 users walked with the system likewise demonstrated that it operates safely in real-world environments.
    “RedirectedDoors+ has redefined the boundaries of VR exploration, offering unprecedented freedom and realism in virtual environments,” adds Fujita. “It has a wide range of applications, such as VR vocational training, architectural design, and urban planning.”