More stories

  • Scientists use novel technique to create new energy-efficient microelectronic device

    Breakthrough could help lead to the development of new low-power semiconductors or quantum devices.
    As the integrated circuits that power our electronic devices get more powerful, they are also getting smaller. This trend toward miniaturization has only accelerated in recent years as scientists try to fit ever more semiconducting components on a chip.
    Microelectronics face a key challenge because of their small size. To avoid overheating, microelectronics need to consume only a fraction of the electricity of conventional electronics while still operating at peak performance.
    Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have achieved a breakthrough that could allow for a new kind of microelectronic material to do just that. In a new study published in Advanced Materials, the Argonne team proposed a new kind of “redox gating” technique that can control the movement of electrons in and out of a semiconducting material.
    “Redox” refers to a chemical reaction that causes a transfer of electrons. Microelectronic devices typically rely on an electric “field effect” to control the flow of electrons to operate. In the experiment, the scientists designed a device that could regulate the flow of electrons from one end to another by applying a voltage — essentially, a kind of pressure that pushes electricity — across a material that acted as a kind of electron gate. When the voltage reached a certain threshold, roughly half a volt, the material would begin to inject electrons through the gate from a source redox material into a channel material.
    By using the voltage to modify the flow of electrons, the semiconducting device could act like a transistor, switching between more conducting and more insulating states.
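    The transistor-like switching described above can be pictured with a toy model. The half-volt threshold comes from the article, but the function, names and two-state simplification below are illustrative assumptions, not the Argonne team's device physics:

```python
# Toy sketch of redox-gating-style switching: below a ~0.5 V threshold
# the channel stays insulating; once the gate voltage crosses it,
# electrons are injected and the channel conducts.
# All values and names here are illustrative only.
def channel_state(gate_voltage, threshold=0.5):
    """Return the channel state for a given gate voltage (in volts)."""
    return "conducting" if gate_voltage >= threshold else "insulating"

for v in (0.1, 0.3, 0.5, 0.7):
    print(f"{v:.1f} V -> {channel_state(v)}")
```

    In reality the transition is a continuous modulation of carrier density rather than a hard switch, but the two-state picture captures why a small gate voltage can toggle the device between more conducting and more insulating states.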
    “The new redox gating strategy allows us to modulate the electron flow by an enormous amount even at low voltages, offering much greater power efficiency,” said Argonne materials scientist Dillon Fong, an author of the study. “This also prevents damage to the system. We see that these materials can be cycled repeatedly with almost no degradation in performance.”
    “Controlling the electronic properties of a material also has significant advantages for scientists seeking emergent properties beyond conventional devices,” said Argonne materials scientist Wei Chen, one of the study’s co-corresponding authors.

    “The subvolt regime, which is where this material operates, is of enormous interest to researchers looking to make circuits that act similarly to the human brain, which also operates with great energy efficiency,” he said.
    The redox gating phenomenon could also be useful for creating new quantum materials whose phases could be manipulated at low power, said Argonne physicist Hua Zhou, another co-corresponding author of the study. Moreover, the redox gating technique may extend across versatile functional semiconductors and low-dimensional quantum materials composed of sustainable elements.
    Work done at Argonne’s Advanced Photon Source, a DOE Office of Science user facility, helped characterize the redox gating behavior.
    Additionally, Argonne’s Center for Nanoscale Materials, also a DOE Office of Science user facility, was used for materials synthesis, device fabrication and electrical measurements of the device.
    A paper based on the study, “Redox Gating for Colossal Carrier Modulation and Unique Phase Control,” appeared in the Jan. 6, 2024 issue of Advanced Materials. In addition to Fong, Chen and Zhou, contributing authors include Le Zhang, Changjiang Liu, Hui Cao, Andrew Erwin, Anand Bhattacharya, Luping Yu, Liliana Stan, Chongwen Zou and Matthew V. Tirrell.
    The work was funded by DOE’s Office of Science, Office of Basic Energy Sciences, and Argonne’s laboratory-directed research and development program.

  • Supply chain disruptions will further exacerbate economic losses from climate change

    Global GDP losses from climate change will increase exponentially the warmer the planet gets once the cascading impact on global supply chains is factored in, finds a new study led by UCL researchers.
    The study, published in Nature, is the first to chart the “indirect economic losses” that climate change inflicts on global supply chains — losses that will reach regions otherwise less affected by projected warming temperatures.
    These previously unquantified disruptions in supply chains will further exacerbate projected economic losses due to climate change, bringing a projected net economic loss of between $3.75 trillion and $24.7 trillion in adjusted 2020 dollars by 2060, depending on how much carbon dioxide gets emitted.
    Senior author Professor Dabo Guan (UCL Bartlett School of Sustainable Construction) said: “These projected economic impacts are staggering. These losses get worse the more the planet warms, and when you factor in the effects on global supply chains it shows how everywhere is at economic risk.”
    As the global economy has grown more interconnected, disruptions in one part of the world have knock-on effects elsewhere in the world, sometimes in unexpected ways. Crop failures, labour slowdowns and other economic disruptions in one region can affect the supplies of raw materials flowing to other parts of the world that depend on them, disrupting manufacturing and trade in faraway regions. This is the first study to analyse and quantify the propagation of these disruptions from climate change, as well as their economic impacts.
    The warmer the Earth gets, the worse off economically it becomes, with damage compounding and economic losses climbing exponentially over time. Climate change disrupts the global economy primarily through health costs from heat exposure, work stoppages when it is too hot to work, and economic disruptions cascading through supply chains.
    The researchers compared expected economic losses across three projected global warming scenarios, called “Shared Socioeconomic Pathways,” based on low, medium and high projected global emissions levels. The best-case scenario would see global temperatures rise by only 1.5 degrees C over preindustrial levels by 2060; the middle track, which most experts believe Earth is on now, would see a rise of around 3 degrees C; and the worst-case scenario would see a rise of 7 degrees C.

    By 2060, projected economic losses will be nearly five times as much under the highest emissions path as under the lowest, with economic losses getting progressively worse the warmer it gets. By 2060, total GDP losses will amount to 0.8% under 1.5 degrees of warming, 2.0% under 3 degrees of warming and 3.9% under 7 degrees of warming.
    The team calculated that supply chain disruptions also get progressively worse the warmer the climate gets, accounting for a greater and greater proportion of economic losses. By 2060, supply chain losses will amount to 0.1% of total global GDP (13% of the total GDP lost) under 1.5 degrees of warming, 0.5% of total GDP (25% of the total GDP lost) under 3 degrees, and 1.5% of total GDP (38% of the total GDP lost) under 7 degrees.
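    The supply-chain share of total losses can be recovered from the reported figures with a quick calculation. The percentages below are taken from the article; the scenario labels and variable names are our own illustrative choices:

```python
# GDP-loss figures reported in the study, by warming scenario:
# total GDP lost by 2060, and the portion attributed to supply chain
# disruption, both as percentages of global GDP.
scenarios = {
    "1.5 degrees": {"total_loss_pct": 0.8, "supply_chain_loss_pct": 0.1},
    "3 degrees":   {"total_loss_pct": 2.0, "supply_chain_loss_pct": 0.5},
    "7 degrees":   {"total_loss_pct": 3.9, "supply_chain_loss_pct": 1.5},
}

for name, s in scenarios.items():
    # Share of the total loss that comes from supply chain disruption.
    share = s["supply_chain_loss_pct"] / s["total_loss_pct"] * 100
    print(f"{name}: supply chains account for {share:.1f}% of the GDP lost")
```

    The computed shares (roughly 13%, 25% and 38%) match the rounded figures quoted above, showing how supply chain disruption makes up a growing fraction of the damage as warming intensifies.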
    Co-lead author, Dr Daoping Wang of King’s College London, said: “The negative impacts of extreme heat sometimes occur quietly on global supply chains, even escaping our notice altogether. Our developed Disaster Footprint model tracks and visually represents these impacts, underlining the imperative for global collaborative efforts in adapting to extreme heat.”
    For example, although extreme heat events occur more often in low-latitude countries, high-latitude regions, such as Europe or the United States, are also at significant risk. Future extreme heat is likely to cost Europe and the US about 2.2% and about 3.5% of their GDP respectively under the high emission scenario. The UK would lose about 1.5% of its GDP, with chemical products, tourism and electrical equipment industries suffering the greatest losses. Some of these losses originate from supply chain fluctuations caused by extreme heat in countries close to the equator.
    The direct human cost is likewise significant. Even under the lowest path, 2060 will see 24% more days of extreme heatwaves and an additional 590,000 heatwave deaths annually, while under the highest path there would be more than twice as many heatwaves and an expected 1.12 million additional annual heatwave deaths. These impacts will not be evenly distributed around the world; countries near the equator, particularly developing countries, will bear the brunt of climate change.
    Co-lead author, Yida Sun from Tsinghua University said: “Developing countries suffer disproportionate economic losses compared to their carbon emissions. As multiple nodes in developing countries are hit simultaneously, economic damage can spread rapidly through the global value chain.”
    The researchers highlighted two illustrative examples of industries that are part of supply chains at risk from climate change: Indian food production and tourism in the Dominican Republic.

    The Indian food industry is heavily reliant on imports of fats and oils from Indonesia and Malaysia, sugar from Brazil, and vegetables, fruits and nuts from Southeast Asia and Africa. These supplier countries are among those most affected by climate change, which will diminish India’s access to raw materials and, in turn, its food exports. As a result, the economies of countries reliant on these foods will feel the pinch of diminished supply and higher prices.
    The Dominican Republic is expected to see a decline in tourism as its climate grows too warm to attract vacationers. Because the nation’s economy is heavily reliant on tourism, this slowdown will hurt dependent industries including manufacturing, construction, insurance, financial services, and electronic equipment.
    Professor Guan said: “This research is an important reminder that preventing every additional degree of climate change is critical. Understanding which nations and industries are most vulnerable is crucial for devising effective and targeted adaptation strategies.”

  • New AI technology enables 3D capture and editing of real-life objects

    Imagine performing a sweep around an object with your smartphone and getting a realistic, fully editable 3D model that you can view from any angle — this is fast becoming reality, thanks to advances in AI.
    Researchers at Simon Fraser University (SFU) in Canada have unveiled new AI technology for doing exactly this. Soon, rather than merely taking 2D photos, everyday consumers will be able to take 3D captures of real-life objects and edit their shapes and appearance as they wish, just as easily as they would with regular 2D photos today.
    In a new paper presented at the annual flagship international conference on AI research, the Conference on Neural Information Processing Systems (NeurIPS) in New Orleans, Louisiana, researchers demonstrated a new technique called Proximity Attention Point Rendering (PAPR) that can turn a set of 2D photos of an object into a cloud of 3D points that represents the object’s shape and appearance. Each point then gives users a knob to control the object with — dragging a point changes the object’s shape, and editing the properties of a point changes the object’s appearance. Then, in a process known as “rendering,” the 3D point cloud can be viewed from any angle and turned into a 2D photo that shows the edited object as if the photo had been taken from that angle in real life.
    Using the new AI technology, researchers showed how a statue can be brought to life — the technology automatically converted a set of photos of the statue into a 3D point cloud, which is then animated. The end result is a video of the statue turning its head from side to side as the viewer is guided on a path around it.
    “AI and machine learning are really driving a paradigm shift in the reconstruction of 3D objects from 2D images. The remarkable success of machine learning in areas like computer vision and natural language is inspiring researchers to investigate how traditional 3D graphics pipelines can be re-engineered with the same deep learning-based building blocks that were responsible for the runaway AI success stories of late,” said Dr. Ke Li, an assistant professor of computer science at Simon Fraser University (SFU), director of the APEX lab and the senior author on the paper. “It turns out that doing so successfully is a lot harder than we anticipated and requires overcoming several technical challenges. What excites me the most is the many possibilities this brings for consumer technology — 3D may become as common a medium for visual communication and expression as 2D is today.”
    One of the biggest challenges in 3D is how to represent 3D shapes in a way that allows users to edit them easily and intuitively. One previous approach, known as neural radiance fields (NeRFs), does not allow for easy shape editing because it needs the user to provide a description of what happens to every continuous coordinate. A more recent approach, known as 3D Gaussian splatting (3DGS), is also not well-suited for shape editing because the shape surface can get pulverized or torn to pieces after editing.
    A key insight came when the researchers realized that instead of considering each 3D point in the point cloud as a discrete splat, they can think of each as a control point in a continuous interpolator. Then when the point is moved, the shape changes automatically in an intuitive way. This is similar to how animators define the motion of objects in animated videos — by specifying the positions of objects at a few points in time, their motion at every point in time is automatically generated by an interpolator.

    However, mathematically defining an interpolator between an arbitrary set of 3D points is not straightforward. The researchers formulated a machine learning model that can learn the interpolator in an end-to-end fashion using a novel mechanism known as proximity attention.
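    The core idea can be sketched in a few lines. This is our reading of the concept, not the authors' model: attention weights that fall off with distance blend the features of nearby control points, so moving a point smoothly deforms everything it influences. The function and parameter names below are invented for illustration:

```python
import math

# Minimal sketch of distance-based ("proximity") attention interpolation:
# each 3D point in the cloud acts as a control point carrying a feature,
# and a query location receives a blend of those features weighted by a
# softmax over negative squared distances. Names are illustrative only.
def proximity_attention(query, points, features, temperature=0.1):
    """Interpolate point features at a query location in 3D."""
    # Squared distance from the query to each control point.
    d2 = [sum((q - p) ** 2 for q, p in zip(query, pt)) for pt in points]
    # Softmax over negative distances: closer points get larger weights.
    w = [math.exp(-d / temperature) for d in d2]
    total = sum(w)
    w = [x / total for x in w]
    # Attention-weighted blend of the control points' features.
    return sum(wi * fi for wi, fi in zip(w, features))

points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
features = [1.0, 0.0]
# A query close to the first control point inherits mostly its feature.
print(proximity_attention((0.1, 0.0, 0.0), points, features))
```

    Because the blend varies continuously with position, dragging a control point changes the interpolated field everywhere nearby, which is the intuition behind editing shapes by moving points.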
    In recognition of this technological leap, the paper was awarded a spotlight at the NeurIPS conference, an honour reserved for the top 3.6% of paper submissions to the conference.
    The research team is excited for what’s to come. “This opens the way to many applications beyond what we’ve demonstrated,” said Dr. Li. “We are already exploring various ways to leverage PAPR to model moving 3D scenes and the results so far are incredibly promising.”
    The authors of the paper are Yanshu Zhang, Shichong Peng, Alireza Moazeni and Ke Li. Zhang and Peng are co-first authors; Zhang, Peng and Moazeni are PhD students at the School of Computing Science; and all are members of the APEX Lab at Simon Fraser University (SFU).

  • Scientists develop ultra-thin semiconductor fibers that turn fabrics into wearable electronics

    Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed ultra-thin semiconductor fibres that can be woven into fabrics, turning them into smart wearable electronics.
    To function reliably, semiconductor fibres must be flexible and free of defects for stable signal transmission. However, existing manufacturing methods cause stress and instability, leading to cracks and deformities in the semiconductor cores, which degrade performance and limit their development.
    NTU scientists conducted modelling and simulations to understand how stress and instability occur during the manufacturing process. They found that the challenge could be overcome through careful material selection and a specific series of steps taken during fibre production.
    They developed a mechanical design and successfully fabricated hair-thin, defect-free fibres spanning 100 metres, which indicates their market scalability. Importantly, the new fibres can be woven into fabrics using existing methods.
    To demonstrate their fibres’ high quality and functionality, the NTU research team developed prototypes. These included a smart beanie hat to help a visually impaired person cross the road safely through alerts on a mobile phone application; a shirt that receives information and transmits it through an earpiece, like a museum audio guide; and a smartwatch with a strap that functions as a flexible sensor that conforms to the wrist of users for heart rate measurement even during physical activities.
    The team believes that their innovation is a fundamental breakthrough in the development of semiconductor fibres that are ultra-long and durable, meaning they are cost-effective and scalable while offering excellent electrical and optoelectronic (meaning it can sense, transmit and interact with light) performance.
    NTU Associate Professor Wei Lei at the School of Electrical and Electronic Engineering (EEE) and lead-principal investigator of the study said, “The successful fabrication of our high-quality semiconductor fibres is thanks to the interdisciplinary nature of our team. Semiconductor fibre fabrication is a highly complex process, requiring know-how from materials science, mechanical, and electrical engineering experts at different stages of the study. The collaborative team effort allowed us a clear understanding of the mechanisms involved, which ultimately helped us unlock the door to defect-free threads, overcoming a long-standing challenge in fibre technology.”
    The study, published in the top scientific journal Nature, is aligned with the University’s commitment to fostering innovation and translating research into practical solutions that benefit society under its NTU2025 five-year strategic plan.

    Developing semiconductor fibre
    To develop their defect-free fibres, the NTU-led team selected pairs of common semiconductor and glass materials — a silicon semiconductor core with a silica glass tube, and a germanium core with an aluminosilicate glass tube. The materials were chosen for complementary attributes, including thermal stability, electrical conductivity and resistivity.
    Silicon was selected for its ability to withstand high temperatures and manipulation without degrading, and for its ability to work in the visible light range, making it ideal for devices meant for extreme conditions, such as sensors on protective clothing for firefighters. Germanium, on the other hand, allows electrons to move through the fibre quickly (high carrier mobility) and works in the infrared range, which makes it suitable for wearable or fabric-based (e.g. curtains, tablecloths) sensors that are compatible with indoor light fidelity (‘LiFi’) wireless optical networks.
    Next, the scientists inserted the semiconductor material (core) inside the glass tube, heating it at high temperature until the tube and core were soft enough to be pulled into a thin continuous strand.
    Due to the different melting points and thermal expansion rates of the selected materials, the glass functioned like a wine bottle during the heating process, containing the molten semiconductor material the way a bottle holds wine.
    First author of the study Dr Wang Zhixun, Research Fellow in the School of EEE, said, “It took extensive analysis before landing on the right combination of materials and process to develop our fibres. By exploiting the different melting points and thermal expansion rates of our chosen materials, we successfully pulled the semiconductor materials into long threads as they entered and exited the heating furnace while avoiding defects.”
    Once the strand cools, the glass is removed and the strand is combined with a polymer tube and metal wires. After another round of heating, the materials are pulled to form a hair-thin, flexible thread.

    In lab experiments, the semiconductor fibres showed excellent performance. When subjected to responsivity tests, the fibres could detect light across a broad spectrum, from ultraviolet to infrared, and robustly transmit signals at up to 350 kilohertz (kHz) bandwidth, making them top performers of their kind. Moreover, the fibres were 30 times tougher than regular ones.
    The fibres were also evaluated for washability: a cloth woven with semiconductor fibres was machine-washed ten times, with no significant drop in fibre performance.
    Co-principal investigator, Distinguished University Professor Gao Huajian, who completed the study while he was at NTU, said, “Silicon and germanium are two widely used semiconductors which are usually considered highly brittle and prone to fracture. The fabrication of ultra-long semiconductor fibre demonstrates the possibility and feasibility of making flexible components using silicon and germanium, providing extensive space for the development of flexible wearable devices of various forms. Next, our team will work collaboratively to apply the fibre manufacturing method to other challenging materials and to discover more scenarios where the fibres play key roles.”
    Compatibility with industry’s production methods hints at easy adoption
    To demonstrate the feasibility of use in real-life applications, the team built smart wearable electronics using their newly created semiconductor fibres. These include a beanie, a sweater, and a watch that can detect and process signals.
    To create a device that assists the visually impaired in crossing busy roads, the NTU team wove fibres into a beanie hat, along with an interface board. When tested experimentally outdoors, light signals received by the beanie were sent to a mobile phone application, triggering an alert.
    A shirt woven with the fibres, meanwhile, functioned as a ‘smart top’, which could be worn at a museum or art gallery to receive information about exhibits and feed it into an earpiece as the wearer walked around the rooms.
    A smartwatch with a wrist band integrated with the fibres functioned as a flexible and conformal sensor to measure heart rate, as opposed to traditional designs where a rigid sensor is installed on the body of the smartwatch, which may not be reliable in circumstances when users are very active, and the sensor is not in contact with the skin. Moreover, the fibres replaced bulky sensors in the body of the smartwatch, saving space and freeing up design opportunities for slimmer watch designs.
    Co-author Dr Li Dong, a Research Fellow in the School of Mechanical and Aerospace Engineering, said, “Our fibre fabrication method is versatile and easily adopted by industry. The fibre is also compatible with current textile industry machinery, meaning it has the potential for large-scale production. By demonstrating the fibres’ use in everyday wearable items like a beanie and a watch, we prove that our research findings can serve as a guide to creating functional semiconductor fibres in the future.”
    For their next steps, the researchers plan to expand the types of materials used for the fibres and to develop fibres with differently shaped hollow cores, such as rectangular and triangular ones, to broaden their applications.

  • Artificial intelligence detects heart defects in newborns

    Many children announce their arrival in the delivery room with a piercing cry. As a newborn automatically takes its first breath, the lungs inflate, the blood vessels in the lungs widen, and the whole circulatory system reconfigures itself to life outside the womb. This process doesn’t always go to plan, however. Some infants — particularly those who are very sick or born prematurely — suffer from pulmonary hypertension, a serious disorder in which the arteries to the lungs remain narrowed after delivery or close up again in the first few days or weeks after birth. This constricts the flow of blood to the lungs, reducing the amount of oxygen in the blood.
    Prompt diagnosis and treatment improve prognosis
    Severe cases of pulmonary hypertension need to be detected and treated as rapidly as possible. The sooner treatment begins, the better the prognosis for the newborn infant. Yet making the correct diagnosis can be challenging. Only experienced paediatric cardiologists are able to diagnose pulmonary hypertension based on a comprehensive ultrasound examination of the heart. “Detecting pulmonary hypertension is time-consuming and requires a cardiologist with highly specific expertise and many years of experience. Only the largest paediatric clinics tend to have those skills on hand,” says Professor Sven Wellmann, Medical Director of the Department of Neonatology at KUNO Klinik St. Hedwig, part of the Hospital of the Order of St. John in Regensburg in Germany.
    Researchers from the group led by Julia Vogt, who runs the Medical Data Science Group at ETH Zurich, recently teamed up with neonatologists at KUNO Klinik St. Hedwig to develop a computer model that provides reliable support in diagnosing the disease in newborn infants. Their results have now been published in the International Journal of Computer Vision.
    Making AI reliable and explainable
    The ETH researchers began by training their algorithm on hundreds of video recordings taken from ultrasound examinations of the hearts of 192 newborns. This dataset included moving images of the beating heart taken from different angles, as well as diagnoses by experienced paediatric cardiologists (whether pulmonary hypertension was present) and an evaluation of the disease’s severity (“mild” or “moderate to severe”). To determine the algorithm’s success at interpreting the images, the researchers then evaluated it on a second dataset of ultrasound images from 78 newborn infants, which the model had never seen before. The model suggested the correct diagnosis in around 80 to 90 percent of cases and determined the correct level of disease severity in around 65 to 85 percent of cases.
    “The key to using a machine-learning model in a medical context is not just the prediction accuracy, but also whether humans are able to understand the criteria the model uses to make decisions,” Vogt says. Her model makes this possible by highlighting the parts of the ultrasound image on which its categorisation is based. This allows doctors to see exactly which areas or characteristics of the heart and its blood vessels the model considered to be suspicious. When the paediatric cardiologists examined the datasets, they discovered that the model looks at the same characteristics as they do, even though it was not explicitly programmed to do so.
    A human makes the diagnosis
    This machine-learning model could potentially be extended to other organs and diseases, for example to diagnose heart septal defects or valvular heart disease.
    It could also be useful in regions where no specialists are available: standardised ultrasound images could be taken by a healthcare professional, and the model could then provide a preliminary risk assessment and an indication of whether a specialist should be consulted. Medical facilities that do have access to highly qualified specialists could use the model to ease their workload and to help reach a better and more objective diagnosis. “AI has the potential to make significant improvements to healthcare. The crucial issue for us is that the final decision should always be made by a human, by a doctor. AI should simply be providing support to ensure that the maximum number of people can receive the best possible medical care,” Vogt says.

  • Opening new doors in the VR world, literally

    In room-scale virtual reality (VR), users explore a virtual environment by physically walking through it. The technology is highly immersive, but it has drawbacks: it requires a large physical space, and it can lack haptic feedback when users touch objects.
    Take, for example, opening a door. Implementing this seemingly mundane task in the virtual world means recreating the haptics of grasping a doorknob while simultaneously preventing users from walking into actual walls in their surroundings.
    Now, a research group has developed a new system to overcome this problem: RedirectedDoors+.
    The group was led by Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura from Tohoku University and Morten Fjeld from Chalmers University of Technology and the University of Bergen.
    “Our system, which built upon an existing visuo-haptic door-opening redirection technique, allows participants to subtly manipulate the walking direction while opening doors in VR, guiding them away from real walls,” points out Professor Fujita, who is based at Tohoku University’s Research Institute of Electrical Communication (RIEC). “At the same time, our system reproduces the realistic haptics of touching a doorknob, enhancing the quality of the experience.”
    To provide users with that experience, RedirectedDoors+ employs a small number of ‘door robots.’ The robots have a doorknob-shaped attachment and can move in any direction, giving immediate touch feedback when the user interacts with the doorknob. In addition, the VR environment rotates in sync with the door movement, ensuring the user stays within the physical space limits.
    A simulation study conducted to evaluate the performance of the system demonstrated that the required physical space could be significantly reduced across six different VR environments. A validation study with 12 users walking with the system likewise demonstrated that it works safely in real-world environments.
    “RedirectedDoors+ has redefined the boundaries of VR exploration, offering unprecedented freedom and realism in virtual environments,” adds Fujita. “It has a wide range of applicability, such as in VR vocational training, architectural design, and urban planning.”

  • Researchers develop a new control method that optimizes autonomous ship navigation

    Existing ship control systems using Model Predictive Control for Maritime Autonomous Surface Ships (MASS) do not consider the various forces acting on ships in real sea conditions. Addressing this gap, in a new study, researchers developed a novel time-optimal control method that accounts for the real wave loads acting on a ship, enabling effective planning and control of MASS at sea.
    The study of ship manoeuvring at sea has long been the central focus of the shipping industry. With the rapid advancements in remote control, communication technologies and artificial intelligence, the concept of Maritime Autonomous Surface Ships (MASS) has emerged as a promising solution for autonomous marine navigation. This shift highlights the growing need for optimal control models for autonomous ship manoeuvring.
    Designing a control system for time-efficient ship manoeuvring is one of the most difficult challenges in autonomous ship control. While many studies have investigated this problem and proposed various control methods, including Model Predictive Control (MPC), most have focused on control in calm waters, which do not represent real operating conditions. At sea, ships are continuously affected by different external loads, with loads from sea waves being the most significant factor affecting manoeuvring performance.
    To address this gap, a team of researchers, led by Assistant Professor Daejeong Kim from the Division of Navigation Convergence Studies at the Korea Maritime & Ocean University in South Korea, designed a novel time-optimal control method for MASS. “Our control model accounts for various forces that act on the ship, enabling MASS to better navigate and track targets in dynamic sea conditions,” says Dr. Kim. Their study was made available online on January 05, 2024, and published in Volume 293 of the journal Ocean Engineering on February 1, 2024.
    At the heart of this innovative control system is a comprehensive mathematical ship model that accounts for various forces in the sea, including wave loads, acting on key parts of a ship such as the hull, propellers, and rudders. However, this model cannot be directly used to optimise the manoeuvring time. For this, the researchers developed a novel time optimisation model that transforms the mathematical model from a temporal formulation to a spatial one. This successfully optimises the manoeuvring time.
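    The temporal-to-spatial idea can be illustrated with a deliberately tiny, invented example (this is not the paper's ship model): once motion is parameterised by distance along the path rather than by time, the total manoeuvring time becomes an explicit sum of segment length divided by speed, which an optimiser can then minimise directly.

```python
import numpy as np

# Toy illustration of a spatial (path-parameterised) formulation: the
# manoeuvring time T = sum of (segment length / speed on that segment)
# becomes an explicit objective once the path is fixed. All numbers here
# are invented for illustration.
def manoeuvring_time(path_lengths, speeds):
    """Total traversal time for piecewise-constant speed over path segments."""
    return float(np.sum(np.asarray(path_lengths) / np.asarray(speeds)))

# Two candidate speed profiles over the same four 100 m segments:
segments = [100.0, 100.0, 100.0, 100.0]   # metres
uniform = [5.0, 5.0, 5.0, 5.0]            # m/s
fast_then_slow = [8.0, 8.0, 4.0, 4.0]     # m/s

manoeuvring_time(segments, uniform)         # 80 s
manoeuvring_time(segments, fast_then_slow)  # 75 s
```

An optimiser working in this spatial form can trade speed between segments, subject to dynamic and wave-load constraints, to shrink the total time.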
    These two models were integrated into a nonlinear MPC controller to achieve time-optimal control. The researchers tested this controller by simulating a real ship model navigating at sea under different wave loads. Additionally, for effective course planning and tracking, they proposed three control strategies: Strategy A excluded wave loads from both the planning and tracking stages, serving as a reference; Strategy B included wave loads only in the planning stage; and Strategy C included them in both stages, accounting for their influence on both propulsion and steering.
    Experiments revealed that wave loads increased the expected manoeuvring time under both strategies B and C. Comparing the two, the researchers found strategy B simpler but less reliable than strategy C. However, strategy C places an additional burden on the controller, since it must also predict wave loads in the planning stage.
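    A heavily simplified, hypothetical sketch can show why modelling the wave load in the tracking stage, as in Strategy C, improves accuracy. The 1-D dynamics, controller gains, and wave force below are all invented and bear no relation to the authors' actual controller:

```python
# Toy 1-D position-tracking problem: the true ship feels a constant wave
# force; the tracking controller can either ignore it (Strategy B-like)
# or feed it forward (Strategy C-like). All constants are invented.
def simulate(compensate_wave, steps=200, dt=0.1):
    wave = 0.5                      # assumed constant wave disturbance
    x, v, target = 0.0, 0.0, 10.0
    for _ in range(steps):
        u = 2.0 * (target - x) - 3.0 * v   # simple PD tracking law
        if compensate_wave:
            u -= wave               # feed-forward the modelled wave load
        x, v = x + v * dt, v + (u + wave) * dt  # true dynamics include wave
    return x

err_B = abs(10.0 - simulate(False))  # wave ignored while tracking
err_C = abs(10.0 - simulate(True))   # wave compensated while tracking
# Without compensation the ship settles with a steady offset of
# wave / k_p = 0.5 / 2.0 = 0.25 here; with compensation it reaches the target.
```

The trade-off in the study appears in the same way: compensating the disturbance improves tracking, but requires the controller to carry a wave-load model.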
    “Our method enhances the efficiency and safety of autonomous vessel operations and potentially reduces shipping costs and carbon emissions, benefiting various sectors of the economy,” remarks Dr. Kim, highlighting the potential of this study. “Overall, our study addresses a critical gap in autonomous ship manoeuvring which could contribute to the development of a more technologically advanced maritime industry.”

  • in

    Straightening teeth? AI can help

    A new tool being developed by the University of Copenhagen and 3Shape will help orthodontists correctly fit braces onto teeth. Using artificial intelligence and virtual patients, the tool predicts how teeth will move, so as to ensure that braces are neither too loose nor too tight.
    Many of us remember the feeling of having our braces regularly adjusted and retightened at the orthodontist’s office. Every year, about 30 percent of Danish youth up to the age of 15 wear braces to align crooked teeth. Orthodontists draw on their training and experience to fit them, but without the predictive power that a computer can provide for forecasting the final result.
    A new tool, developed in a collaboration between the University of Copenhagen’s Department of Computer Science and the company 3Shape, makes it possible to simulate how braces should fit to give the best result without too many unnecessary inconveniences.
    The tool has been developed with the help of scanned imagery of teeth and bone structures from human jaws, which artificial intelligence then uses to predict how sets of braces should be designed to best straighten a patient’s teeth.
    “Our simulation is able to let an orthodontist know where braces should and shouldn’t exert pressure to straighten teeth. Currently, these interventions are based entirely upon the discretion of orthodontists and involve a great deal of trial and error. This can lead to many adjustments and visits to the orthodontist’s office, which our simulation can help reduce in the long run,” says Professor Kenny Erleben, who heads IMAGE (Image Analysis, Computational Modelling and Geometry), a research section at UCPH’s Department of Computer Science.
    Helps predict tooth movement
    It’s no wonder that it can be difficult to predict exactly how braces will move teeth, because teeth continue shifting slightly throughout a person’s life. And, these movements are very different from mouth to mouth.

    “The fact that tooth movements vary from one patient to another makes it even more challenging to accurately predict how teeth will move for different people. Which is why we’ve developed a new tool and a dataset of different models to help overcome these challenges,” explains Torkan Gholamalizadeh of 3Shape, who holds a PhD from the Department of Computer Science.
    As an alternative to classic bracket-and-wire braces, a new generation of clear braces, known as aligners, has gained ground. Aligners are transparent plastic casts that patients fit over their teeth.
    Patients must wear aligners for at least 22 hours a day and they need to be swapped for new and tighter sets every two weeks. Because aligners are made of plastic, a person’s teeth also change the contours of the aligner itself, something that the new tool also takes into account.
    “As transparent aligners are softer than metal braces, calculating how much force it takes to move the teeth becomes even more complicated. But it’s a factor that we’ve taught our model to take into account, so that one can predict tooth movements when using aligners as well,” says Torkan Gholamalizadeh.
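    As a much-simplified, hypothetical illustration of why a softer appliance complicates the force calculation: if the aligner and the tooth's support are idealised as two elastic elements in series, a softer aligner converts less of the prescribed displacement into force on the tooth. All constants below are invented; the real model works from detailed scans, not spring constants.

```python
# Two elastic elements in series: a compliant aligner "absorbs" part of
# the prescribed displacement, delivering less force to the tooth.
def series_stiffness(k_aligner, k_support):
    """Effective stiffness of two springs in series."""
    return 1.0 / (1.0 / k_aligner + 1.0 / k_support)

k_support = 10.0   # assumed stiffness of the tooth's support (N/mm)
k_metal = series_stiffness(1000.0, k_support)  # near-rigid metal bracket
k_soft = series_stiffness(5.0, k_support)      # compliant plastic shell

# Force delivered for the same 0.2 mm prescribed displacement:
f_metal = k_metal * 0.2
f_soft = k_soft * 0.2   # much smaller: the soft shell deforms instead
```

This is why the softness of the aligner material has to be taught to the model explicitly rather than ignored.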
    Digital twins can improve treatment
    The researchers created a computer model that generates accurate 3D simulations of an individual patient’s jaw, which dentists and technicians can use to plan the best possible treatment.

    To create these simulations, researchers mapped sets of human teeth using detailed CT scans of teeth and of the small, fine structures between the jawbone and the teeth known as periodontal ligaments — a kind of fiber-rich connective tissue that holds teeth firmly in the jaw.
    This type of precise digital imitation is referred to as a digital twin — and in this context, the researchers built up a database of ‘digital dental patients’.
    But they didn’t stop there. The researchers’ database also contains other digital patient types that could one day be of use elsewhere in the healthcare sector:
    “Right now, we have a database of digital patients that, besides simulating aligner designs, can be used for hip implants, among other things. In the long run, this could make life easier for patients and save resources for society,” says Kenny Erleben.
    The area of research that makes use of digital twins is relatively new and, for the time being, Professor Erleben’s database of virtual patients is a world leader. However, the database will need to grow even bigger if digital twins are to truly take root and benefit the healthcare sector and society.
    “More data will allow us to simulate treatments and adapt medical devices so as to more precisely target patients across entire populations,” says Professor Erleben.
    Furthermore, the tool must clear various regulatory hurdles before it is rolled out for orthodontists. This is something that the researchers hope to see in the foreseeable future.
    Box: Digital twins
    A digital twin is a virtual model that lives in the cloud, and is designed to accurately mirror a human being, physical object, system, or real-world process.
    “The virtual model can answer what’s happening in the real world, and do so instantly. For example, one can ask what would happen if you pushed on one tooth, and learn where it would move and how it would affect other teeth. This can be done quickly, so that you know what’s happening. Today, weeks must pass before finding out whether a desired effect has been achieved,” says Professor Kenny Erleben.
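    The kind of query Erleben describes can be sketched with a toy linear-elastic model: three teeth, each anchored to the jaw and coupled to its neighbours, with invented spring constants. (The real digital twins use detailed finite element models built from CT scans, not this hand-written stiffness matrix.)

```python
import numpy as np

# Toy "push one tooth" query: solve K x = f for the displacement of every
# tooth when force is applied to just one. Spring constants are invented.
k_lig, k_root = 1.0, 4.0   # assumed ligament coupling and root anchoring
K = np.array([
    [k_root + k_lig,   -k_lig,            0.0],
    [-k_lig,            k_root + 2*k_lig, -k_lig],
    [0.0,              -k_lig,            k_root + k_lig],
])
f = np.array([0.0, 1.0, 0.0])   # unit push on the middle tooth only
x = np.linalg.solve(K, f)        # displacements of all three teeth
# The middle tooth moves most, but its neighbours shift too — the kind of
# side effect the digital twin reports instantly.
```

Scaling this idea up to a full jaw, with realistic geometry and material laws, is what the finite element pipeline in the study provides.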
    Digital twins can be used to plan, design, and optimize, and can therefore help run companies, robots, and factories, with further uses in the energy, healthcare, and other sectors.
    One of the goals of working with digital twins at the Department of Computer Science is to create simulations of entire populations, for example in the healthcare sector. When developing a medical product, virtual people can be exposed to various situations and their reactions tested. A simulation then shows what would happen to an individual — and ultimately, to an entire population.
    About the study
    In their study, the researchers developed a simulation tool using CT scans of teeth, which can predict how a dental brace should best be designed and attached.
    The research is described in the studies: “Deep-learning-based segmentation of individual tooth and bone with periodontal ligament interface details for simulation purposes” and “Open-Full-Jaw: An open-access dataset and pipeline for finite element models of human jaw.”
    The research is part of the EU research project Rainbow, which conducts research into computer-simulated medicine across seven European universities in collaboration with government agencies and industry.
    The research was conducted in collaboration with the company 3Shape, which manufactures intraoral scanners and provides medical software for digital dentistry purposes.