More stories

  •

    New AI technology enables 3D capture and editing of real-life objects

    Imagine performing a sweep around an object with your smartphone and getting a realistic, fully editable 3D model that you can view from any angle — this is fast becoming reality, thanks to advances in AI.
    Researchers at Simon Fraser University (SFU) in Canada have unveiled new AI technology for doing exactly this. Soon, rather than merely taking 2D photos, everyday consumers will be able to take 3D captures of real-life objects and edit their shapes and appearance as they wish, just as easily as they would with regular 2D photos today.
    In a new paper presented at the annual flagship international conference on AI research, the Conference on Neural Information Processing Systems (NeurIPS) in New Orleans, Louisiana, researchers demonstrated a new technique called Proximity Attention Point Rendering (PAPR) that can turn a set of 2D photos of an object into a cloud of 3D points that represents the object’s shape and appearance. Each point then gives users a knob to control the object with — dragging a point changes the object’s shape, and editing the properties of a point changes the object’s appearance. Then, in a process known as “rendering,” the 3D point cloud can be viewed from any angle and turned into a 2D photo that shows the edited object as if the photo had been taken from that angle in real life.
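    The paper’s own renderer is more sophisticated, but the basic idea of turning a 3D point cloud into a 2D image from a chosen viewpoint can be illustrated with an ordinary pinhole-camera projection. The sketch below is a generic illustration, not PAPR’s method; the camera parameters, function names and the naive one-pixel “splat” are assumptions for demonstration only.

    # Minimal sketch (not the PAPR renderer): project a 3D point cloud onto a
    # 2D image plane with a pinhole camera, so the same points can be "viewed"
    # from any angle. Point colours are splatted onto the nearest pixel.
    import numpy as np

    def render_point_cloud(points, colours, R, t, f=500.0, size=(256, 256)):
        """points: (N,3) world coords; colours: (N,3) RGB in [0,1];
        R, t: camera rotation (3,3) and translation (3,); f: focal length."""
        h, w = size
        image = np.zeros((h, w, 3))
        cam = points @ R.T + t                 # world -> camera coordinates
        in_front = cam[:, 2] > 1e-6            # keep points in front of the camera
        cam, colours = cam[in_front], colours[in_front]
        u = (f * cam[:, 0] / cam[:, 2] + w / 2).astype(int)   # perspective divide
        v = (f * cam[:, 1] / cam[:, 2] + h / 2).astype(int)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        image[v[ok], u[ok]] = colours[ok]      # naive splat: one pixel per point
        return image

    # Example: a random coloured cloud rendered from one assumed viewpoint.
    pts = np.random.randn(2000, 3)
    cols = np.random.rand(2000, 3)
    front_view = render_point_cloud(pts, cols, np.eye(3), np.array([0.0, 0.0, 5.0]))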
    Using the new AI technology, researchers showed how a statue can be brought to life — the technology automatically converted a set of photos of the statue into a 3D point cloud, which is then animated. The end result is a video of the statue turning its head from side to side as the viewer is guided on a path around it.
    “AI and machine learning are really driving a paradigm shift in the reconstruction of 3D objects from 2D images. The remarkable success of machine learning in areas like computer vision and natural language is inspiring researchers to investigate how traditional 3D graphics pipelines can be re-engineered with the same deep learning-based building blocks that were responsible for the runaway AI success stories of late,” said Dr. Ke Li, an assistant professor of computer science at Simon Fraser University (SFU), director of the APEX lab and the senior author on the paper. “It turns out that doing so successfully is a lot harder than we anticipated and requires overcoming several technical challenges. What excites me the most is the many possibilities this brings for consumer technology — 3D may become as common a medium for visual communication and expression as 2D is today.”
    One of the biggest challenges in 3D is how to represent 3D shapes in a way that allows users to edit them easily and intuitively. One previous approach, known as neural radiance fields (NeRFs), does not allow for easy shape editing because it requires the user to describe what happens to every continuous coordinate. A more recent approach, known as 3D Gaussian splatting (3DGS), is also not well suited to shape editing because the shape surface can become pulverized or torn to pieces after editing.
    A key insight came when the researchers realized that instead of considering each 3D point in the point cloud as a discrete splat, they could think of each as a control point in a continuous interpolator. Then, when a point is moved, the shape changes automatically in an intuitive way. This is similar to how animators define the motion of objects in animated videos — by specifying the positions of objects at a few points in time, their motion at every point in time is automatically generated by an interpolator.

    However, how to mathematically define an interpolator between an arbitrary set of 3D points is not straightforward. The researchers formulated a machine learning model that can learn the interpolator in an end-to-end fashion using a novel mechanism known as proximity attention.
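    The exact proximity attention mechanism is defined in the paper; the sketch below only illustrates the general flavour of distance-based attention used as an interpolator, with a softmax over negative distances standing in for the learned mechanism. The function names and the temperature parameter are illustrative assumptions.

    # Minimal sketch of the general idea behind proximity-based attention
    # (not the exact PAPR formulation): a query location attends to nearby
    # control points, with weights that fall off with distance, and the
    # interpolated value follows the control points when they are dragged.
    import numpy as np

    def proximity_attention(query, control_points, control_values, temperature=0.5):
        """Interpolate a value at `query` (3,) from control points (N,3) with
        attached values (N,D), using a softmax over negative distances."""
        dists = np.linalg.norm(control_points - query, axis=1)
        logits = -dists / temperature          # closer points get larger logits
        weights = np.exp(logits - logits.max())
        weights /= weights.sum()               # softmax attention weights
        return weights @ control_values        # weighted average of point values

    ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    vals = np.array([[1.0], [2.0], [3.0]])
    print(proximity_attention(np.array([0.2, 0.1, 0.0]), ctrl, vals))

    # Dragging a control point shifts the interpolated result with it, which is
    # the intuition behind editing a shape by moving its points.
    ctrl[0] += np.array([0.0, 0.0, 0.5])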
    In recognition of this technological leap, the paper was awarded a spotlight at the NeurIPS conference, an honour reserved for the top 3.6% of paper submissions to the conference.
    The research team is excited for what’s to come. “This opens the way to many applications beyond what we’ve demonstrated,” said Dr. Li. “We are already exploring various ways to leverage PAPR to model moving 3D scenes and the results so far are incredibly promising.”
    The authors of the paper are Yanshu Zhang, Shichong Peng, Alireza Moazeni and Ke Li. Zhang and Peng are co-first authors; Zhang, Peng and Moazeni are PhD students at the School of Computing Science; and all are members of the APEX Lab at Simon Fraser University (SFU).

  •

    Scientists develop ultra-thin semiconductor fibers that turn fabrics into wearable electronics

    Scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed ultra-thin semiconductor fibres that can be woven into fabrics, turning them into smart wearable electronics.
    To function reliably, semiconductor fibres must be flexible and free of defects so that they can transmit signals stably. However, existing manufacturing methods cause stress and instability, leading to cracks and deformities in the semiconductor cores, negatively impacting their performance and limiting their development.
    NTU scientists conducted modelling and simulations to understand how stress and instability occur during the manufacturing process. They found that the challenge could be overcome through careful material selection and a specific series of steps taken during fibre production.
    They developed a mechanical design and successfully fabricated hair-thin, defect-free fibres spanning 100 metres, which indicates their market scalability. Importantly, the new fibres can be woven into fabrics using existing methods.
    To demonstrate their fibres’ high quality and functionality, the NTU research team developed prototypes. These included a smart beanie hat to help a visually impaired person cross the road safely through alerts on a mobile phone application; a shirt that receives information and transmits it through an earpiece, like a museum audio guide; and a smartwatch with a strap that functions as a flexible sensor that conforms to the wrist of users for heart rate measurement even during physical activities.
    The team believes that their innovation is a fundamental breakthrough in the development of semiconductor fibres that are ultra-long and durable, making them cost-effective and scalable while offering excellent electrical and optoelectronic performance (the ability to sense, transmit and interact with light).
    NTU Associate Professor Wei Lei at the School of Electrical and Electronic Engineering (EEE) and lead-principal investigator of the study said, “The successful fabrication of our high-quality semiconductor fibres is thanks to the interdisciplinary nature of our team. Semiconductor fibre fabrication is a highly complex process, requiring know-how from materials science, mechanical, and electrical engineering experts at different stages of the study. The collaborative team effort allowed us a clear understanding of the mechanisms involved, which ultimately helped us unlock the door to defect-free threads, overcoming a long-standing challenge in fibre technology.”
    The study, published in the top scientific journal Nature, is aligned with the University’s commitment to fostering innovation and translating research into practical solutions that benefit society under its NTU2025 five-year strategic plan.

    Developing semiconductor fibre
    To develop their defect-free fibres, the NTU-led team selected pairs of common semiconductor and synthetic materials — a silicon semiconductor core with a silica glass tube and a germanium core with an aluminosilicate glass tube. The materials were selected because their attributes complemented each other, including thermal stability, electrical conductivity and electrical resistivity (how strongly a material opposes the flow of electric current).
    Silicon was selected for its ability to be heated to high temperatures and manipulated without degrading and for its ability to work in the visible light range, making it ideal for use in devices meant for extreme conditions, such as sensors on the protective clothing of firefighters. Germanium, on the other hand, allows electrons to move through the fibre quickly (high carrier mobility) and works in the infrared range, which makes it suitable for wearable or fabric-based (e.g. curtains, tablecloths) sensors that are compatible with indoor light fidelity (‘LiFi’) wireless optical networks.
    Next, the scientists inserted the semiconductor material (the core) inside the glass tube and heated the assembly at high temperature until the tube and core were soft enough to be pulled into a thin continuous strand.
    Due to the different melting points and thermal expansion rates of the selected materials, the glass functioned like a wine bottle during the heating process, containing the semiconductor material as it melted, much as a bottle holds wine.
    First author of the study Dr Wang Zhixun, Research Fellow in the School of EEE, said, “It took extensive analysis before landing on the right combination of materials and process to develop our fibres. By exploiting the different melting points and thermal expansion rates of our chosen materials, we successfully pulled the semiconductor materials into long threads as they entered and exited the heating furnace while avoiding defects.”
    Once the strand cools, the glass is removed and the semiconductor core is combined with a polymer tube and metal wires. After another round of heating, the materials are pulled to form a hair-thin, flexible thread.

    In lab experiments, the semiconductor fibres showed excellent performance. When subjected to responsivity tests, the fibres could detect light across the entire range from ultraviolet to infrared and robustly transmit signals with up to 350 kilohertz (kHz) of bandwidth, making them top performers of their kind. Moreover, the fibres were 30 times tougher than regular ones.
    The fibres were also evaluated for washability: a cloth woven with the semiconductor fibres was cleaned in a washing machine ten times, with no significant drop in fibre performance.
    Co-principal investigator, Distinguished University Professor Gao Huajian, who completed the study while he was at NTU, said, “Silicon and germanium are two widely used semiconductors which are usually considered highly brittle and prone to fracture. The fabrication of ultra-long semiconductor fibre demonstrates the possibility and feasibility of making flexible components using silicon and germanium, providing extensive space for the development of flexible wearable devices of various forms. Next, our team will work collaboratively to apply the fibre manufacturing method to other challenging materials and to discover more scenarios where the fibres play key roles.”
    Compatibility with industry’s production methods hints at easy adoption
    To demonstrate the feasibility of use in real-life applications, the team built smart wearable electronics using their newly created semiconductor fibres. These include a beanie, a sweater, and a watch that can detect and process signals.
    To create a device that assists the visually impaired in crossing busy roads, the NTU team wove fibres into a beanie hat, along with an interface board. When tested experimentally outdoors, light signals received by the beanie were sent to a mobile phone application, triggering an alert.
    A shirt woven with the fibres, meanwhile, functioned as a ‘smart top’, which could be worn at a museum or art gallery to receive information about exhibits and feed it into an earpiece as the wearer walked around the rooms.
    A smartwatch with a wrist band integrated with the fibres functioned as a flexible, conformal heart-rate sensor. In traditional designs, a rigid sensor is installed on the body of the smartwatch, which may be unreliable when users are very active and the sensor loses contact with the skin. Moreover, the fibres replaced bulky sensors in the body of the smartwatch, saving space and freeing up opportunities for slimmer watch designs.
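    As a rough illustration of the heart-rate use case, the hypothetical sketch below applies standard peak counting to a synthetic pulse-like waveform of the kind a conformal strap sensor might output. It is not the NTU team’s code, and the sample rate and thresholds are assumptions.

    # Hypothetical sketch: estimate heart rate from the waveform a flexible
    # strap sensor might produce (here a synthetic pulse signal). This is not
    # the NTU team's code, just a standard peak-counting approach.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100.0                                    # sample rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)                  # 10 seconds of data
    true_bpm = 72
    signal = np.sin(2 * np.pi * (true_bpm / 60) * t) + 0.1 * np.random.randn(t.size)

    # Detect one peak per heartbeat; enforce a refractory gap of ~0.4 s.
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs), height=0.3)
    bpm = 60.0 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
    print(f"Estimated heart rate: {bpm:.0f} BPM")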
    Co-author Dr Li Dong, a Research Fellow in the School of Mechanical and Aerospace Engineering, said, “Our fibre fabrication method is versatile and easily adopted by industry. The fibre is also compatible with current textile industry machinery, meaning it has the potential for large-scale production. By demonstrating the fibres’ use in everyday wearable items like a beanie and a watch, we prove that our research findings can serve as a guide to creating functional semiconductor fibres in the future.”
    For their next steps, the researchers plan to expand the types of materials used for the fibres and to develop semiconductors with differently shaped hollow cores, such as rectangular and triangular ones, to broaden their applications.

  •

    Artificial intelligence detects heart defects in newborns

    Many children announce their arrival in the delivery room with a piercing cry. As a newborn automatically takes its first breath, the lungs inflate, the blood vessels in the lungs widen, and the whole circulatory system reconfigures itself to life outside the womb. This process doesn’t always go to plan, however. Some infants — particularly those who are very sick or born prematurely — suffer from pulmonary hypertension, a serious disorder in which the arteries to the lungs remain narrowed after delivery or close up again in the first few days or weeks after birth. This constricts the flow of blood to the lungs, reducing the amount of oxygen in the blood.
    Prompt diagnosis and treatment improve prognosis
    Severe cases of pulmonary hypertension need to be detected and treated as rapidly as possible. The sooner treatment begins, the better the prognosis for the newborn infant. Yet making the correct diagnosis can be challenging. Only experienced paediatric cardiologists are able to diagnose pulmonary hypertension based on a comprehensive ultrasound examination of the heart. “Detecting pulmonary hypertension is time-consuming and requires a cardiologist with highly specific expertise and many years of experience. Only the largest paediatric clinics tend to have those skills on hand,” says Professor Sven Wellmann, Medical Director of the Department of Neonatology at KUNO Klinik St. Hedwig, part of the Hospital of the Order of St. John in Regensburg in Germany.
    Researchers from the group led by Julia Vogt, who runs the Medical Data Science Group at ETH Zurich, recently teamed up with neonatologists at KUNO Klinik St. Hedwig to develop a computer model that provides reliable support in diagnosing the disease in newborn infants. Their results have now been published in the International Journal of Computer Vision.
    Making AI reliable and explainable
    The ETH researchers began by training their algorithm on hundreds of video recordings taken from ultrasound examinations of the hearts of 192 newborns. This dataset also included moving images of the beating heart taken from different angles as well as diagnoses by experienced paediatric cardiologists (whether pulmonary hypertension is present or not) and an evaluation of the disease’s severity (“mild” or “moderate to severe”). To determine the algorithm’s success at interpreting the images, the researchers subsequently evaluated it on a second dataset of ultrasound images from 78 newborn infants, which the model had never seen before. The model suggested the correct diagnosis in around 80 to 90 percent of cases and was able to determine the correct level of disease severity in around 65 to 85 percent of cases.
    “The key to using a machine-learning model in a medical context is not just the prediction accuracy, but also whether humans are able to understand the criteria the model uses to make decisions,” Vogt says. Her model makes this possible by highlighting the parts of the ultrasound image on which its categorisation is based. This allows doctors to see exactly which areas or characteristics of the heart and its blood vessels the model considered to be suspicious. When the paediatric cardiologists examined the datasets, they discovered that the model looks at the same characteristics as they do, even though it was not explicitly programmed to do so.
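    The paper describes its own explanation mechanism; purely as a generic illustration of how a model’s decision-relevant image regions can be highlighted, the sketch below uses occlusion sensitivity: patches of the frame are masked and the drop in the predicted probability is recorded. The patch size, stride and the toy stand-in model are assumptions, not the ETH model.

    # Hedged sketch: occlusion sensitivity is one generic way to see which
    # regions of an ultrasound frame drive a classifier's output -- mask a
    # patch, re-run the model, and record how much the score drops.
    import numpy as np

    def occlusion_map(image, predict_fn, patch=16, stride=16):
        """image: (H,W) array; predict_fn: maps an image to a probability."""
        h, w = image.shape
        base = predict_fn(image)
        heat = np.zeros((h // stride, w // stride))
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.copy()
                occluded[y:y + patch, x:x + patch] = image.mean()  # grey out patch
                heat[i, j] = base - predict_fn(occluded)  # importance = score drop
        return heat

    # Toy stand-in for a trained model: its "probability" rises with the mean
    # brightness of the central region, so the map should light up there.
    def toy_model(img):
        return img[48:80, 48:80].mean()

    heatmap = occlusion_map(np.random.rand(128, 128), toy_model)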
    A human makes the diagnosis
    This machine-learning model could potentially be extended to other organs and diseases, for example to diagnose heart septal defects or valvular heart disease.
    It could also be useful in regions where no specialists are available: standardised ultrasound images could be taken by a healthcare professional, and the model could then provide a preliminary risk assessment and an indication of whether a specialist should be consulted. Medical facilities that do have access to highly qualified specialists could use the model to ease their workload and to help reach a better and more objective diagnosis. “AI has the potential to make significant improvements to healthcare. The crucial issue for us is that the final decision should always be made by a human, by a doctor. AI should simply be providing support to ensure that the maximum number of people can receive the best possible medical care,” Vogt says.

  •

    Opening new doors in the VR world, literally

    Room-scale virtual reality (VR) is a form of VR in which users explore a virtual environment by physically walking through it. The technology offers a highly immersive experience, but it has drawbacks: it requires a large physical space, and it can lack haptic feedback when users touch objects.
    Take, for example, opening a door. Implementing this seemingly mundane task in the virtual world means recreating the haptics of grasping a doorknob whilst simultaneously preventing users from walking into actual walls in their surroundings.
    Now, a research group has developed a new system to overcome this problem: RedirectedDoors+.
    The group was led by Kazuyuki Fujita, Kazuki Takashima, and Yoshifumi Kitamura from Tohoku University and Morten Fjeld from Chalmers University of Technology and the University of Bergen.
    “Our system, which built upon an existing visuo-haptic door-opening redirection technique, allows participants to subtly manipulate the walking direction while opening doors in VR, guiding them away from real walls,” points out Professor Fujita, who is based at Tohoku University’s Research Institute of Electrical Communication (RIEC). “At the same time, our system reproduces the realistic haptics of touching a doorknob, enhancing the quality of the experience.”
    To provide users with that experience, RedirectedDoors+ employs a small number of ‘door robots.’ The robots have a doorknob-shaped attachment and can move in any direction, giving immediate touch feedback when the user interacts with the doorknob. In addition, the VR environment rotates in sync with the door movement, ensuring the user stays within the physical space limits.
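    The actual RedirectedDoors+ controller is more involved, but the core redirection idea can be sketched as injecting a small extra scene rotation, proportional to how far the virtual door has swung, so that the user’s physical heading drifts away from real walls. The gain and comfort cap below are illustrative assumptions, not values from the paper.

    # Minimal sketch of the redirection idea (not the actual RedirectedDoors+
    # code): while the user swings a virtual door open, the whole scene is
    # rotated by a small extra amount, so the physical walking direction
    # drifts away from real walls without the user noticing.
    import math

    def redirected_scene_rotation(door_angle_deg, gain=0.15, max_extra_deg=10.0):
        """door_angle_deg: how far the virtual door has been opened.
        Returns the extra scene rotation to inject, capped for comfort."""
        extra = gain * door_angle_deg
        return math.copysign(min(abs(extra), max_extra_deg), extra)

    # As the door swings from 0 to 90 degrees, the scene is quietly rotated by
    # up to ~10 degrees, shifting where the user physically ends up walking.
    for angle in (0, 30, 60, 90):
        print(angle, redirected_scene_rotation(angle))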
    A simulation study conducted to evaluate the performance of the system demonstrated that the required physical space could be significantly reduced across six different VR environments. A validation study with 12 users walking with the system likewise demonstrated that it works safely in real-world environments.
    “RedirectedDoors+ has redefined the boundaries of VR exploration, offering unprecedented freedom and realism in virtual environments,” adds Fujita. “It has a wide range of applicability, such as in VR vocational training, architectural design, and urban planning.”

  •

    Researchers develop a new control method that optimizes autonomous ship navigation

    Existing ship control systems using Model Predictive Control for Maritime Autonomous Surface Ships (MASS) do not consider the various forces acting on ships in real sea conditions. Addressing this gap, in a new study, researchers developed a novel time-optimal control method that accounts for the real wave loads acting on a ship, enabling effective planning and control of MASS at sea.
    The study of ship manoeuvring at sea has long been the central focus of the shipping industry. With the rapid advancements in remote control, communication technologies and artificial intelligence, the concept of Maritime Autonomous Surface Ships (MASS) has emerged as a promising solution for autonomous marine navigation. This shift highlights the growing need for optimal control models for autonomous ship manoeuvring.
    Designing a control system for time-efficient ship manoeuvring is one of the most difficult challenges in autonomous ship control. While many studies have investigated this problem and proposed various control methods, including Model Predictive Control (MPC), most have focused on control in calm waters, which do not represent real operating conditions. At sea, ships are continuously affected by different external loads, with loads from sea waves being the most significant factor affecting manoeuvring performance.
    To address this gap, a team of researchers, led by Assistant Professor Daejeong Kim from the Division of Navigation Convergence Studies at the Korea Maritime & Ocean University in South Korea, designed a novel time-optimal control method for MASS. “Our control model accounts for various forces that act on the ship, enabling MASS to better navigate and track targets in dynamic sea conditions,” says Dr. Kim. Their study was made available online on January 05, 2024, and published in Volume 293 of the journal Ocean Engineering on February 1, 2024.
    At the heart of this innovative control system is a comprehensive mathematical ship model that accounts for various forces in the sea, including wave loads, acting on key parts of a ship such as the hull, propellers, and rudders. However, this model cannot be directly used to optimise the manoeuvring time. For this, the researchers developed a novel time optimisation model that transforms the mathematical model from a temporal formulation to a spatial one. This successfully optimises the manoeuvring time.
    These two models were integrated into a nonlinear MPC controller to achieve time-optimal control. They tested this controller by simulating a real ship model navigating at sea under different wave loads. Additionally, for effective course planning and tracking, the researchers proposed three control strategies: Strategy A excluded wave loads during both the planning and tracking stages, serving as a reference; Strategy B included wave loads only in the planning stage; and Strategy C included wave loads in both stages, measuring their influence on both propulsion and steering.
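    To make the role of wave loads in planning concrete, the toy sketch below runs a single receding-horizon optimisation over a one-dimensional surge model in which the predicted dynamics may or may not include a wave-load term, loosely echoing the wave-aware planning described above. The ship parameters, wave model and cost function are assumptions, not the paper’s formulation.

    # Toy sketch (not the paper's controller): a 1-D surge model with an added
    # wave-load force, and one receding-horizon MPC step that picks the thrust
    # sequence minimising distance-to-goal at the end of the horizon.
    import numpy as np
    from scipy.optimize import minimize

    dt, horizon, mass, drag = 1.0, 60, 5.0e5, 2.0e3   # assumed ship parameters
    goal = 300.0                                       # metres ahead

    def wave_force(t):
        return 1.5e4 * np.sin(0.4 * t)                 # simple periodic wave load

    def rollout(thrust_seq, include_waves=True):
        """Integrate position under a thrust sequence, with optional wave load."""
        x, v = 0.0, 0.0
        for k, u in enumerate(thrust_seq):
            f_wave = wave_force(k * dt) if include_waves else 0.0
            a = (u - drag * v * abs(v) + f_wave) / mass
            v += a * dt
            x += v * dt
        return x

    def plan(include_waves):
        cost = lambda u: (goal - rollout(u, include_waves)) ** 2 + 1e-9 * np.sum(u ** 2)
        res = minimize(cost, x0=np.zeros(horizon),
                       bounds=[(0.0, 1.0e5)] * horizon, method="L-BFGS-B")
        return res.x

    thrust_plan = plan(include_waves=True)   # wave-aware planning, in the spirit of B/C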
    Experiments revealed that wave loads increased the expected manoeuvring time under both strategies B and C. Comparing the two, the researchers found strategy B to be simpler but to perform worse than strategy C, which was more reliable. However, strategy C places an additional burden on the controller by including wave load prediction in the planning stage.
    “Our method enhances the efficiency and safety of autonomous vessel operations and potentially reduces shipping costs and carbon emissions, benefiting various sectors of the economy,” remarks Dr. Kim, highlighting the potential of this study. “Overall, our study addresses a critical gap in autonomous ship manoeuvring which could contribute to the development of a more technologically advanced maritime industry.”

  •

    Straightening teeth? AI can help

    A new tool being developed by the University of Copenhagen and 3Shape will help orthodontists correctly fit braces onto teeth. Using artificial intelligence and virtual patients, the tool predicts how teeth will move, so as to ensure that braces are neither too loose nor too tight.
    Many of us remember the feeling of having our braces regularly adjusted and retightened at the orthodontist’s office. And every year, about 30 percent of Danish youth up to the age of 15 wear braces to align crooked teeth. Orthodontists use the knowledge gained from their education and experience to perform their jobs, but without the possibilities that a computer can provide for predicting final results.
    A new tool, developed in a collaboration between the University of Copenhagen’s Department of Computer Science and the company 3Shape, makes it possible to simulate how braces should fit to give the best result without too many unnecessary inconveniences.
    The tool has been developed with the help of scanned imagery of teeth and bone structures from human jaws, which artificial intelligence then uses to predict how sets of braces should be designed to best straighten a patient’s teeth.
    “Our simulation is able to let an orthodontist know where braces should and shouldn’t exert pressure to straighten teeth. Currently, these interventions are based entirely upon the discretion of orthodontists and involve a great deal of trial and error. This can lead to many adjustments and visits to the orthodontist’s office, which our simulation can help reduce in the long run,” says Professor Kenny Erleben, who heads IMAGE (Image Analysis, Computational Modelling and Geometry), a research section at UCPH’s Department of Computer Science.
    Helps predict tooth movement
    It’s no wonder that it can be difficult to predict exactly how braces will move teeth: teeth continue shifting slightly throughout a person’s life, and these movements differ greatly from mouth to mouth.

    “The fact that tooth movements vary from one patient to another makes it even more challenging to accurately predict how teeth will move for different people. Which is why we’ve developed a new tool and a dataset of different models to help overcome these challenges,” explains Torkan Gholamalizadeh of 3Shape, who holds a PhD from the Department of Computer Science.
    As an alternative to classic brackets and braces, a new generation of clear braces, known as aligners, has gained ground. Aligners are transparent plastic casts that patients fit over their teeth.
    Patients must wear aligners for at least 22 hours a day and they need to be swapped for new and tighter sets every two weeks. Because aligners are made of plastic, a person’s teeth also change the contours of the aligner itself, something that the new tool also takes into account.
    “As transparent aligners are softer than metal braces, calculating how much force it takes to move the teeth becomes even more complicated. But it’s a factor that we’ve taught our model to take into account, so that one can predict tooth movements when using aligners as well,” says Torkan Gholamalizadeh.
    Digital twins can improve treatment
    The researchers built a computer model that creates accurate 3D simulations of an individual patient’s jaw, which dentists and technicians can use to plan the best possible treatment.

    To create these simulations, researchers mapped sets of human teeth using detailed CT scans of teeth and of the small, fine structures between the jawbone and the teeth known as periodontal ligaments — a kind of fiber-rich connective tissue that holds teeth firmly in the jaw.
    This type of precise digital imitation is referred to as a digital twin — and in this context, the researchers built up a database of ‘digital dental patients’.
    But they didn’t stop there. The researchers’ database also contains other digital patient types that could one day be of use elsewhere in the healthcare sector:
    “Right now, we have a database of digital patients that, besides simulating aligner designs, can be used for hip implants, among other things. In the long run, this could make life easier for patients and save resources for society,” says Kenny Erleben.
    The area of research that makes use of digital twins is relatively new and, for the time being, Professor Erleben’s database of virtual patients is a world leader. However, the database will need to get even bigger if digital twins are to really take root and benefit the healthcare sector and society.
    “More data will allow us to simulate treatments and adapt medical devices so as to more precisely target patients across entire populations,” says Professor Erleben.
    Furthermore, the tool must clear various regulatory hurdles before it is rolled out for orthodontists. This is something that the researchers hope to see in the foreseeable future.
    Box: Digital twins
    A digital twin is a virtual model that lives in the cloud, and is designed to accurately mirror a human being, physical object, system, or real-world process.
    “The virtual model can answer what’s happening in the real world, and do so instantly. For example, one can ask what would happen if you pushed on one tooth and get answers with regards to where it would move and how it would affect other teeth. This can be done quickly, so that you know what’s happening. Today, weeks must pass before finding out whether a desired effect has been achieved,” says Professor Kenny Erleben.
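    The researchers’ digital twins are built from patient-specific finite element models; purely as an illustration of the “push one tooth and see how the others respond” idea, the sketch below uses a toy linear spring model in which teeth are coupled through periodontal-ligament-like stiffnesses. All stiffness values and the geometry are made-up assumptions.

    # Illustrative sketch only (the actual work uses patient-specific finite
    # element models): treat a row of teeth as points coupled by springs that
    # stand in for the periodontal ligament, push on one tooth, and solve for
    # how every tooth shifts.
    import numpy as np

    n_teeth = 6
    k_lig, k_couple = 80.0, 20.0           # assumed stiffnesses (N/mm)

    # Stiffness matrix: each tooth anchored to the jaw, coupled to neighbours.
    K = np.zeros((n_teeth, n_teeth))
    for i in range(n_teeth):
        K[i, i] += k_lig
        if i + 1 < n_teeth:
            K[i, i] += k_couple
            K[i + 1, i + 1] += k_couple
            K[i, i + 1] -= k_couple
            K[i + 1, i] -= k_couple

    force = np.zeros(n_teeth)
    force[2] = 1.0                          # push on the third tooth with 1 N

    displacement = np.linalg.solve(K, force)   # mm of movement for every tooth
    print(np.round(displacement, 4))           # neighbours move too, most at tooth 2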
    Digital twins can be used to plan, design and optimize — and can therefore help operate companies, robots and factories, with many further uses in the energy, healthcare and other sectors.
    One of the goals of working with digital twins at the Department of Computer Science is to be able to create simulations of populations, for example in the healthcare sector. When developing a medical product, virtual people can be exposed to various situations and their reactions tested. A simulation provides a picture of what would happen to an individual — and, ultimately, to an entire population.
    About the study
    In their study, the researchers developed a simulation tool using CT scans of teeth, which can predict how a dental brace should best be designed and attached.
    The research is described in the studies: “Deep-learning-based segmentation of individual tooth and bone with periodontal ligament interface details for simulation purposes” and “Open-Full-Jaw: An open-access dataset and pipeline for finite element models of human jaw.”
    The research is part of the EU research project Rainbow, which conducts research into computer-simulated medicine across seven European universities in collaboration with government agencies and industry.
    The research was conducted in collaboration with the company 3Shape, which manufactures intraoral scanners and provides medical software for digital dentistry purposes.

  •

    You don’t need glue to hold these materials together — just electricity

    Is there a way to stick hard and soft materials together without any tape, glue or epoxy? A new study published in ACS Central Science shows that applying a small voltage to certain objects forms chemical bonds that securely link the objects together. Reversing the direction of electron flow easily separates the two materials. This electroadhesion effect could help create biohybrid robots, improve biomedical implants and enable new battery technologies.
    When an adhesive is used to attach two things, it binds the surfaces either through mechanical or electrostatic forces. But sometimes those attractions or bonds are difficult, if not impossible, to undo. As an alternative, reversible adhesion methods are being explored, including electroadhesion (EA). Though the term is used to describe a few different phenomena, one definition involves running an electric current through two materials, causing them to stick together thanks to attractions or chemical bonds. Previously, Srinivasa Raghavan and colleagues demonstrated that EA can hold soft, oppositely charged materials together, and even be used to build simple structures. This time, they wanted to see if EA could reversibly bind a hard material, such as graphite, to a soft material, such as animal tissue.
    The team first tested EA using two graphite electrodes and an acrylamide gel. A small voltage (5 volts) was applied for a few minutes, causing the gel to permanently adhere to the positively charged electrode. The resulting chemical bond was so strong that, when one of the researchers tried to wrench the two pieces apart, the gel tore before it disconnected from the electrode. Notably, when the current’s direction was reversed, the graphite and gel easily separated — and the gel instead adhered to the other electrode, which was now positively charged. Similar tests were run on a variety of materials — metals, various gel compositions, animal tissues, fruits and veggies — to determine the phenomenon’s ubiquity.
    For EA to occur, the authors found that the hard material needs to conduct electrons, and the soft material needs to contain salt ions. They hypothesize that the adhesion arises from chemical bonds that form between the surfaces after an exchange of electrons. This may explain why some metals that hold onto their electrons strongly, including titanium, and some fruits that contain more sugar than salts, including grapes, failed to adhere in some situations. A final experiment showed that EA can occur completely underwater, revealing an even wider range of possible applications. The team says that this work could help create new batteries, enable biohybrid robotics, enhance biomedical implants and much more.

  •

    Staying in the loop: How superconductors are helping computers ‘remember’

    Computers work in digits — 0s and 1s to be exact. Their calculations are digital; their processes are digital; even their memories are digital. All of this requires extraordinary power resources. As we look to the next evolution of computing and the development of neuromorphic or “brain-like” computing, those power requirements become unfeasible.
    To advance neuromorphic computing, some researchers are looking at analog improvements. In other words, not just advancing software, but advancing hardware too. Research from the University of California San Diego and UC Riverside shows a promising new way to store and transmit information using disordered superconducting loops.
    The team’s research, which appears in the Proceedings of the National Academy of Sciences, demonstrates the ability of superconducting loops to exhibit associative memory, which, in humans, allows the brain to remember the relationship between two unrelated items.
    “I hope what we’re designing, simulating and building will be able to do that kind of associative processing really fast,” stated UC San Diego Professor of Physics Robert C. Dynes, who is one of the paper’s co-authors.
    Creating lasting memories
    Picture it: you’re at a party and run into someone you haven’t seen in a while. You know their name but can’t quite recall it. Your brain starts to root around for the information: where did I meet this person? How were we introduced? If you’re lucky, your brain finds the pathway to retrieve what was missing. Sometimes, of course, you’re unlucky.
    Dynes believes that short-term memory moves into long-term memory with repetition. In the case of a name, the more you see the person and use the name, the more deeply it is written into memory. This is why we still remember a song from when we were ten years old but can’t remember what we had for lunch yesterday.

    “Our brains have this remarkable gift of associative memory, which we don’t really understand,” stated Dynes, who is also president emeritus of the University of California and former UC San Diego chancellor. “It can work through the probability of answers because it’s so highly interconnected. This computer brain we built and modeled is also highly interactive. If you input a signal, the whole computer brain knows you did it.”
    Staying in the loop
    How do disordered superconducting loops work? You need a superconducting material — in this case, the team used yttrium barium copper oxide (YBCO). Known as a high-temperature superconductor, YBCO becomes superconducting around 90 Kelvin (-297 F), which, in the world of physics, is not that cold. This made it relatively easy to modify. The YBCO thin films (about 10 microns wide) were manipulated with a combination of magnetic fields and currents to create a single flux quantum on the loop. When the current was removed, the flux quantum stayed in the loop. Think of this as a piece of information or memory.
    This is one loop, but associative memory and processing require at least two pieces of information. For this, Dynes used disordered loops, meaning the loops are different sizes and follow different patterns — essentially random.
    A Josephson junction, or “weak link,” as it is sometimes known, in each loop acted as a gate through which the flux quanta could pass. This is how information is transferred and the associations are built.
    Although traditional computing architecture has continuous high-energy requirements, not just for processing but also for memory storage, these superconducting loops show significant power savings — on the scale of a million times less. This is because the loops only require power when performing logic tasks. Memories are stored in the physical superconducting material and can remain there permanently, as long as the loop remains superconducting.

    The number of memory locations available increases exponentially with more loops: one loop has three locations, but three loops have 27. For this research, the team built four loops with 81 locations. Next, Dynes would like to expand the number of loops and the number of memory locations.
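    Assuming each loop can sit in one of three flux states (which is one way to read “one loop has three locations”), the counts quoted above follow directly from enumerating the configurations, as in this short worked example:

    # Worked example of the scaling: if each loop can hold one of three flux
    # states (an assumption consistent with "one loop has three locations"),
    # n loops give 3**n distinct memory configurations.
    from itertools import product

    for n_loops in (1, 3, 4):
        states = list(product((-1, 0, +1), repeat=n_loops))   # flux per loop
        print(n_loops, "loops ->", len(states), "configurations")
    # 1 -> 3, 3 -> 27, 4 -> 81, matching the counts reported above.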
    “We know these loops can store memories. We know the associative memory works. We just don’t know how stable it is with a higher number of loops,” he said.
    This work is not only noteworthy to physicists and computer engineers; it may also be important to neuroscientists. Dynes talked to another University of California president emeritus, Richard Atkinson, a world-renowned cognitive scientist who helped create a seminal model of human memory called the Atkinson-Shiffrin model.
    Atkinson, who is also former UC San Diego chancellor and professor emeritus in the School of Social Sciences, was excited about the possibilities he saw: “Bob and I have had some great discussions trying to determine if his physics-based neural network could be used to model the Atkinson-Shiffrin theory of memory. His system is quite different from other proposed physics-based neural networks, and is rich enough that it could be used to explain the workings of the brain’s memory system in terms of the underlying physical process. It’s a very exciting prospect.”
    Full list of authors: Uday S. Goteti and Robert C. Dynes (both UC San Diego); Shane A. Cybart (UC Riverside).
    This work was primarily supported as part of the Quantum Materials for Energy Efficient Neuromorphic Computing (Q-MEEN-C) (Department of Energy DE-SC0019273). Other support was provided by the Department of Energy National Nuclear Security Administration (DE-NA0004106) and the Air Force Office of Scientific Research (FA9550-20-1-0144).