More stories

  • Magnon-based computation could signal computing paradigm shift

    Like electronics and photonics, magnonics is an engineering subfield that aims to advance information technologies in speed, device architecture, and energy consumption. A magnon corresponds to the specific amount of energy required to change a material’s magnetization via a collective excitation called a spin wave.
    Because they interact with magnetic fields, magnons can be used to encode and transport data without electron flows, which lose energy through Joule heating of the conductor. As Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics (LMGN) in the School of Engineering, explains, these energy losses are an increasingly serious barrier to electronics as data speeds and storage demands soar.
    “With the advent of AI, the use of computing technology has increased so much that energy consumption threatens its development,” Grundler says. “A major issue is traditional computing architecture, which separates processors and memory. The signal conversions involved in moving data between different components slow down computation and waste energy.”
    This inefficiency, known as the memory wall or von Neumann bottleneck, has had researchers searching for new computing architectures that can better support the demands of big data. And now, Grundler believes his lab might have stumbled on such a “holy grail.”
    While doing other experiments on a commercial wafer of the ferrimagnetic insulator yttrium iron garnet (YIG) with nanomagnetic strips on its surface, LMGN PhD student Korbinian Baumgaertl was inspired to develop precisely engineered YIG-nanomagnet devices. With the Center of MicroNanoTechnology’s support, Baumgaertl was able to excite spin waves in the YIG at specific gigahertz frequencies using radiofrequency signals, and — crucially — to reverse the magnetization of the surface nanomagnets.
    “The two possible orientations of these nanomagnets represent magnetic states 0 and 1, which allows digital information to be encoded and stored,” Grundler explains.

    A route to in-memory computation
    The scientists made their discovery using a conventional vector network analyzer, which sent a spin wave through the YIG-nanomagnet device. Nanomagnet reversal happened only when the spin wave hit a certain amplitude, and could then be used to write and read data.
    “We can now show that the same waves we use for data processing can be used to switch the magnetic nanostructures so that we also have nonvolatile magnetic storage within the very same system,” Grundler explains, adding that “nonvolatile” refers to the stable storage of data over long time periods without additional energy consumption.
    It’s this ability to process and store data in the same place that gives the technique its potential to change the current computing architecture paradigm by putting an end to the energy-inefficient separation of processors and memory storage, and achieving what is known as in-memory computation.
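    As a toy illustration of that write/read idea, the Python sketch below flips simulated nanomagnet bits only when a spin-wave pulse exceeds an amplitude threshold; the threshold and pulse amplitudes are hypothetical stand-ins, not parameters from the experiment.

    ```python
    # Toy model of amplitude-thresholded nanomagnet switching. The threshold
    # and readout behavior are illustrative assumptions, not measured values.

    SWITCH_THRESHOLD = 1.0  # spin-wave amplitude needed to reverse a nanomagnet

    def write(bits, amplitudes):
        """Spin-wave pulses at or above threshold flip the targeted nanomagnets."""
        return [1 - b if a >= SWITCH_THRESHOLD else b
                for b, a in zip(bits, amplitudes)]

    def read(bits, probe_amplitude=0.1):
        """A weak (sub-threshold) wave senses state without disturbing it."""
        assert probe_amplitude < SWITCH_THRESHOLD
        return list(bits)  # transmission contrast would reveal 0 vs. 1

    state = [0, 0, 0, 0]
    state = write(state, [1.2, 0.3, 1.5, 0.2])  # strong pulses target bits 0 and 2
    print(read(state))  # -> [1, 0, 1, 0]
    ```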
    Optimization on the horizon
    Baumgaertl and Grundler have published the groundbreaking results in the journal Nature Communications, and the LMGN team is already working on optimizing their approach.
    “Now that we have shown that spin waves write data by switching the nanomagnets from states 0 to 1, we need to work on a process to switch them back again — this is known as toggle switching,” Grundler says.
    He also notes that theoretically, the magnonics approach could process data in the terahertz range of the electromagnetic spectrum (for comparison, current computers function in the slower gigahertz range). However, they still need to demonstrate this experimentally.
    “The promise of this technology for more sustainable computing is huge. With this publication, we are hoping to reinforce interest in wave-based computation, and attract more young researchers to the growing field of magnonics.”

  • Could changes in the Fed's interest rates affect pollution and the environment?

    Can monetary policy, such as the United States Federal Reserve raising interest rates, affect the environment? According to a new study by Florida Atlantic University’s College of Business, it can.
    Using a stylized dynamic aggregate demand-aggregate supply (AD-AS) model, researchers explored the consequences of traditional monetary tools — namely, changes in the short-term interest rate — for the environment. Specifically, they looked at how monetary policy impacts CO2 emissions in the short and long run. The AD-AS model captures several interlocking relationships among the four macroeconomic goals of growth, unemployment, inflation and a sustainable balance of trade.
    For the study, researchers also used the Global Vector AutoRegressive (GVAR) methodology, which interconnects regions using an explicit economic integration variable, in this case, bilateral trade, allowing for spillover effects.
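    As a rough illustration of the GVAR idea, the Python sketch below simulates two regions whose variables depend on their own lags plus a trade-weighted average of the other region’s variables; the variables, coefficients, and weights are hypothetical, not estimates from the study.

    ```python
    import numpy as np

    # Two-region GVAR-style toy. Each region tracks [interest rate, output, CO2];
    # its next state depends on its own lag and a trade-weighted "foreign" block.
    # All numbers here are made up for illustration.

    rng = np.random.default_rng(0)
    T, k = 200, 3                          # time periods, variables per region
    W = np.array([[0.0, 1.0],              # bilateral trade weights (rows sum to 1)
                  [1.0, 0.0]])
    A_own = 0.8 * np.eye(k)                # own-lag coefficients (shared for brevity)
    A_star = 0.1 * np.eye(k)               # foreign-block coefficients

    x = np.zeros((2, T, k))
    x[0, 0] = [0.05, 0.0, 0.0]             # region 0 starts with a rate shock
    for t in range(1, T):
        for i in range(2):
            x_star = W[i] @ x[:, t - 1]    # trade-weighted foreign variables
            x[i, t] = (A_own @ x[i, t - 1] + A_star @ x_star
                       + 0.01 * rng.standard_normal(k))

    # The shock decays at home and leaks (weakly) into region 1 via trade.
    print(x[0, :5, 2], x[1, :5, 2])
    ```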
    Joao Ricardo Faria, Ph.D., co-author and a professor in the Economics Department within FAU’s College of Business, and collaborators from Federal University of Ouro Preto and the University of São Paulo in Brazil, examined four regions for the study: the U.S., the United Kingdom, Japan and the Eurozone (the European Union countries that use the euro as their currency).
    In addition, they used data from eight other countries to characterize the international economy. Their method explicitly models the interplay among regions to assess not only the domestic impact of a policy shift, but also its repercussions for other economies.
    Results of the study, published in the journal Energy Economics, suggest that the impact of monetary policy on pollution is essentially domestic: a monetary contraction in a region reduces its own emissions, but the effect does not seem to spread to other economies. However, the findings do not imply that the international economy is irrelevant to determining one region’s emissions level.
    “The actions of a country, like the U.S., are not restricted to its borders. For example, a positive shock in the Federal Reserve’s monetary policy may cause adjustments in the whole system, including the carbon emissions of the other regions,” said Faria.
    The approach used in this study considered the U.S.’s own dynamics as well as the responses of other economies. Moreover, analysis of four distinct regions allowed researchers to verify and compare how domestic markets react to the same policy.
    The study also identified important differences across regions. For example, monetary policy does not seem to reduce short-run emissions in the U.K., or long-run emissions in the Eurozone. Moreover, the cointegration coefficient for Japan is much larger than those of the other regions, suggesting strong effects of monetary policy on CO2 emissions. Furthermore, cointegration analysis suggests a relationship between interest rates and emissions in the long run.
    Statistical analyses also suggest that external factors are relevant to understanding each region’s fluctuations in emissions. A large fraction of the fluctuations in domestic CO2 emissions come from external sources.
    “Findings from our study suggest efforts to reduce emissions can benefit from internationally coordinated policies,” said Faria. “Thus, the main policy prescription is to increase international coordination and efforts to reduce CO2 emissions. We realize that achieving coordination is not an easy endeavor despite international efforts to reduce carbon emissions, such as the Paris Agreement. Our paper highlights the payoffs of coordinated policies. We hope it motivates future research on how to achieve successful coordination.”

  • Preschoolers prefer to learn from a competent robot over an incompetent human

    Who do children prefer to learn from? Previous research has shown that even infants can identify the best informant. But would preschoolers prefer learning from a competent robot over an incompetent human?
    According to a new paper by Concordia researchers published in the Journal of Cognition and Development, the answer largely depends on age.
    The study compared two groups of preschoolers: one of three-year-olds, the other of five-year-olds. The children participated in Zoom meetings featuring a video of a young woman and a small robot with humanoid characteristics (head, face, torso, arms and legs) called Nao sitting side by side. Between them were familiar objects that the robot would label correctly while the human would label them incorrectly, e.g., referring to a car as a book, a ball as a shoe and a cup as a dog.
    Next, the two groups of children were presented with unfamiliar items: the top of a turkey baster, a roll of twine and a silicone muffin container. Both the robot and the human used different nonsense terms like “mido,” “toma,” “fep” and “dax” to label the objects. The children were then asked what each object was called, and could endorse either the label offered by the robot or the one offered by the human.
    While the three-year-olds showed no preference for one word over another, the five-year-olds were much more likely to state the term provided by the robot than the human.
    “We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot,” says the paper’s lead author, PhD candidate Anna-Elisabeth Baumann. Horizon Postdoctoral Fellow Elizabeth Goldman and undergraduate research assistant Alexandra Meltzer also contributed to the study. Professor and Concordia University Chair of Developmental Cybernetics Diane Poulin-Dubois in the Department of Psychology supervised the study.

    The researchers repeated the experiments with new groups of three- and five-year-olds, replacing the humanoid Nao with a small truck-shaped robot called Cozmo. The results resembled those observed with the human-like robot, suggesting that the robot’s morphology does not affect the children’s selective trust strategies.
    Baumann adds that, along with the labelling task, the researchers administered a naive biology task. The children were asked if biological organs or mechanical gears formed the internal parts of unfamiliar animals and robots. The three-year-olds appeared confused, assigning both biological and mechanical internal parts to the robots. However, the five-year-olds were much more likely to indicate that only mechanical parts belonged inside the robots.
    “This data tells us that the children will choose to learn from a robot even though they know it is not like them. They know that the robot is mechanical,” says Baumann.
    Being right is better than being human
    While there has been a substantial amount of literature on the benefits of using robots as teaching aides for children, the researchers note that most studies focus on a single robot informant or two robots pitted against each other. This study, they write, is the first to use both a human speaker and a robot to see if children deem social affiliation and similarity more important than competency when choosing which source to trust and learn from.
    Poulin-Dubois points out that this study builds on a previous paper she co-wrote with Goldman and Baumann. That paper shows that by age five, children treat robots similarly to how adults do, i.e., as depictions of social agents.
    “Older preschoolers know that robots have mechanical insides, but they still anthropomorphize them. Like adults, these children attribute certain human-like qualities to robots, such as the ability to talk, think and feel,” she says.
    “It is important to emphasize that we see robots as tools to study how children can learn from both human and non-human agents,” concludes Goldman. “As technology use increases, and as children interact with technological devices more, it is important for us to understand how technology can be a tool to help facilitate their learning.”

  • First silicon-integrated ECRAM for a practical AI accelerator

    The transformative changes brought by deep learning and artificial intelligence are accompanied by immense costs. For example, OpenAI’s ChatGPT algorithm costs at least $100,000 every day to operate. This could be reduced with accelerators, or computer hardware designed to efficiently perform the specific operations of deep learning. However, such a device is only viable if it can be integrated with mainstream silicon-based computing hardware on the material level.
    This was preventing the implementation of one highly promising deep learning accelerator — arrays of electrochemical random-access memory, or ECRAM — until a research team at the University of Illinois Urbana-Champaign achieved the first material-level integration of ECRAMs onto silicon transistors. The researchers, led by graduate student Jinsong Cui and professor Qing Cao of the Department of Materials Science & Engineering, recently reported in Nature Electronics an ECRAM device designed and fabricated with materials that can be deposited directly onto silicon during fabrication, realizing the first practical ECRAM-based deep learning accelerator.
    “Other ECRAM devices have been made with the many difficult-to-obtain properties needed for deep learning accelerators, but ours is the first to achieve all these properties and be integrated with silicon without compatibility issues,” Cao said. “This was the last major barrier to the technology’s widespread use.”
    ECRAM is a memory cell, or a device that stores data and uses it for calculations in the same physical location. This nonstandard computing architecture eliminates the energy cost of shuttling data between the memory and the processor, allowing data-intensive operations to be performed very efficiently.
    ECRAM encodes information by shuffling mobile ions between a gate and a channel. Electrical pulses applied to a gate terminal either inject ions into or draw ions from a channel, and the resulting change in the channel’s electrical conductivity stores information. It is then read by measuring the electric current that flows across the channel. An electrolyte between the gate and the channel prevents unwanted ion flow, allowing ECRAM to retain data as a nonvolatile memory.
    The research team selected materials compatible with silicon microfabrication techniques: tungsten oxide for the gate and channel, zirconium oxide for the electrolyte, and protons as the mobile ions. This allowed the devices to be integrated onto and controlled by standard microelectronics. Other ECRAM devices draw inspiration from neurological processes or even rechargeable battery technology and use organic substances or lithium ions, both of which are incompatible with silicon microfabrication.
    In addition, the Cao group’s device has numerous other features that make it ideal for deep learning accelerators. “While silicon integration is critical, an ideal memory cell must achieve a whole slew of properties,” Cao said. “The materials we selected give rise to many other desirable features.”
    Since the same material was used for the gate and channel terminals, injecting ions into and drawing ions from the channel are symmetric operations, simplifying the control scheme and significantly enhancing reliability. The channel reliably held ions for hours at a time, which is sufficient for training most deep neural networks. Since the ions were protons, the smallest ion, the devices switched quite rapidly. The researchers found that their devices lasted for over 100 million read-write cycles and were vastly more efficient than standard memory technology. Finally, since the materials are compatible with microfabrication techniques, the devices could be shrunk to the micro- and nanoscales, allowing for high density and computing power.
    The researchers demonstrated their device by fabricating arrays of ECRAMs on silicon microchips to perform matrix-vector multiplication, a mathematical operation crucial to deep learning. Matrix entries, or neural network weights, were stored in the ECRAMs, and the array performed the multiplication on the vector inputs, represented as applied voltages, by using the stored weights to change the resulting currents. This operation as well as the weight update was performed with a high level of parallelism.
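    For intuition, here is a minimal Python sketch of the analog matrix-vector multiply such an array performs: weights live as cell conductances, inputs arrive as row voltages, and the column currents are the outputs. The array sizes, conductance range, and update step are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # In-memory matrix-vector multiply on a resistive crossbar: by Ohm's and
    # Kirchhoff's laws, each column current is I_j = sum_i V_i * G[i, j].

    rng = np.random.default_rng(1)
    n_rows, n_cols = 4, 3
    G = rng.uniform(1e-6, 1e-5, size=(n_rows, n_cols))  # weights as conductances (S)
    V = rng.uniform(0.0, 0.2, size=n_rows)              # inputs as voltages (V)

    I = V @ G  # column currents: the multiplication happens where data is stored
    print(I)

    # A weight update is a conductance change driven by gate pulses; a simple
    # additive step stands in for the electrochemical programming here.
    G += 1e-7 * np.outer(V, np.ones(n_cols))
    ```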
    “Our ECRAM devices will be most useful for AI edge-computing applications sensitive to chip size and energy consumption,” Cao said. “That’s where this type of device has the most significant benefits compared to what is possible with silicon-based accelerators.”
    The researchers are patenting the new device, and they are working with semiconductor industry partners to bring this new technology to market. According to Cao, a prime application of this technology is in autonomous vehicles, which must rapidly learn their surrounding environment and make decisions with limited computational resources. He is collaborating with Illinois electrical & computer engineering faculty to integrate the ECRAMs with foundry-fabricated silicon chips, and with Illinois computer science faculty to develop software and algorithms that take advantage of ECRAM’s unique capabilities.

  • AI 'brain' created from core materials for OLED TVs

    ChatGPT’s impact extends beyond the education sector and is causing significant changes in other areas. The AI language model is recognized for its ability to perform various tasks, including paper writing, translation, coding, and more, all through question-and-answer-based interactions. The AI system relies on deep learning, which requires extensive training to minimize errors, resulting in frequent data transfers between memory and processors. However, traditional digital computer systems’ von Neumann architecture separates the storage and computation of information, resulting in increased power consumption and significant delays in AI computations. Researchers have developed semiconductor technologies suitable for AI applications to address this challenge.
    A research team at POSTECH, led by Professor Yoonyoung Chung (Department of Electrical Engineering, Department of Semiconductor Engineering), Professor Seyoung Kim (Department of Materials Science and Engineering, Department of Semiconductor Engineering), and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), has developed a high-performance AI semiconductor device using indium gallium zinc oxide (IGZO), an oxide semiconductor widely used in OLED displays. The new device has proven to be excellent in terms of performance and power efficiency.
    Efficient AI operations, such as those behind ChatGPT, require computations to occur within the memory responsible for storing information. Unfortunately, previous AI semiconductor technologies fell short of meeting all the requirements for improving AI accuracy, such as linear and symmetric programming and uniformity.
    The research team identified IGZO as a key material for AI computations that could be mass-produced and provide uniformity, durability, and computing accuracy. The compound comprises indium, gallium, zinc, and oxygen in a fixed ratio, and it has excellent electron mobility and leakage-current properties, which have made it the backplane of OLED displays.
    Using this material, the researchers developed a novel synapse device composed of two transistors interconnected through a storage node. Precise control of the node’s charging and discharging speed enabled the AI semiconductor to meet the diverse performance metrics required for high-level operation. Furthermore, applying synaptic devices to a large-scale AI system requires that their output current be minimized; the researchers confirmed that the ultra-thin film insulators inside the transistors can control this current, making the devices suitable for large-scale AI.
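    The “linear and symmetric programming” requirement mentioned above can be made concrete with a short sketch: an ideal synapse changes its weight by the same increment on every programming pulse, while a nonlinear device drifts away from the value the learning rule requested. The step size and nonlinearity figures below are hypothetical.

    ```python
    import numpy as np

    # Why linear, symmetric updates matter for analog training: with a
    # nonlinear device, later pulses change the weight less than earlier ones,
    # so the programmed weight undershoots the target.

    def program(n_pulses, step=0.01, nonlinearity=0.0):
        """Apply n potentiation pulses; nonlinearity shrinks later steps."""
        w = 0.0
        for j in range(n_pulses):
            w += step * np.exp(-nonlinearity * j)
        return w

    print(program(50))                     # ideal linear device: 0.50 as requested
    print(program(50, nonlinearity=0.05))  # nonlinear device: ~0.19, a large error
    ```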
    The researchers used the newly developed synaptic device to train and classify handwritten data, achieving a high accuracy of over 98%, which verifies its potential application in high-accuracy AI systems in the future.
    Professor Chung explained, “The significance of my research team’s achievement is that we overcame the limitations of conventional AI semiconductor technologies that focused solely on material development. To do this, we utilized materials already in mass production. Furthermore, linear and symmetrical programming characteristics were obtained through a new structure using two transistors as one synaptic device. Thus, our successful development and application of this new AI semiconductor technology show great potential to improve the efficiency and accuracy of AI.”
    This study was published last week on the inside back cover of Advanced Electronic Materials and was supported by the Next-Generation Intelligent Semiconductor Technology Development Program through the National Research Foundation, funded by the Ministry of Science and ICT of Korea.

  • Scientists discover easy way to make atomically thin metal layers for new technology

    The secret to a perfect croissant is the layers — as many as possible, each one interspersed with butter. Similarly, a new class of materials with promise for new applications is made of many extremely thin layers of metal, between which scientists can slip different ions for various purposes. This structure makes them potentially very useful for future high-tech electronics or energy storage.
    Until recently, these materials — known as MXenes, pronounced “max-eens” — were as labor-intensive to produce as good croissants made in a French bakery.
    But a new breakthrough by scientists with the University of Chicago shows how to make these MXenes far more quickly and easily, with fewer toxic byproducts.
    Researchers hope the discovery, published March 24 in Science, will spur new innovation and pave the way towards using MXenes in everyday electronics and devices.
    Atom economy
    When they were discovered in 2011, MXenes made a lot of scientists very excited. Usually, when you shave a metal like gold or titanium down to atomically thin sheets, it stops behaving like a metal. But unusually strong chemical bonds in MXenes allow them to retain a metal’s special abilities, like conducting electricity well.

    They’re also easily customizable: “You can put ions between the layers to use them to store energy, for example,” said chemistry graduate student Di Wang, co-first author of the paper along with postdoctoral scholar Chenkun Zhou.
    All of these advantages could make MXenes extremely useful for building new devices — for example, to store electricity or to block electromagnetic wave interference.
    However, the only way we knew to make MXenes involved several intensive chemical engineering steps, including heating the mixture to 3,000°F followed by a bath in hydrofluoric acid.
    “This is fine if you’re making a few grams for experiments in the laboratory, but if you wanted to make large amounts to use in commercial products, it would become a major corrosive waste disposal issue,” explained Dmitri Talapin, the Ernest DeWitt Burton Distinguished Service Professor of Chemistry at the University of Chicago, joint appointee at Argonne National Laboratory and the corresponding author on the paper.
    To design a more efficient and less toxic method, the team used the principles of chemistry — in particular “atom economy,” which seeks to minimize the number of wasted atoms during a reaction.
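    Atom economy has a simple definition: the fraction of reactant mass that ends up in the desired product. The snippet below computes it for a hypothetical reaction; the masses are generic placeholders, not those of the actual MXene synthesis.

    ```python
    # Atom economy = 100% x (molar mass of desired product) /
    #                (total molar mass of all reactants).

    def atom_economy(product_mass: float, reactant_masses: list) -> float:
        """Percent of reactant mass that ends up in the desired product."""
        return 100.0 * product_mass / sum(reactant_masses)

    # Hypothetical example: 180 g/mol of product from reactants totaling 250 g/mol.
    print(f"{atom_economy(180.0, [150.0, 100.0]):.0f}%")  # -> 72%
    ```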

    The UChicago team discovered new chemical reactions that allow scientists to make MXenes from simple and inexpensive precursors, without the use of hydrofluoric acid. The process consists of just one step: mixing several chemicals with whichever metal you wish to make layers of, then heating the mixture to 1,700°F. “Then you open it up and there they are,” said Wang.
    The easier, less toxic method opens up new avenues for scientists to create and explore new varieties of MXenes for different applications — such as different metal alloys or different ion flavorings. The team tested the method with titanium and zirconium metals, but they think the technique can also be used for many other different combinations.
    “These new MXenes are also visually beautiful,” Wang added. “They stand up like flowers — which may even make them better for reactions, because the edges are exposed and accessible for ions and molecules to move in between the metal layers.”
    Graduate student Wooje Cho was also a co-author on the paper. The exploration was made possible by help from UChicago colleagues across departments, including theoretical chemist Suri Vaikuntanathan, X-ray research facility director Alexander Filatov, and electrochemists Chong Liu and Mingzhan Wang of the Pritzker School of Molecular Engineering. Electron microscopy was performed by Robert Klie and Francisco Lagunas with the University of Illinois Chicago.
    Part of the research was conducted via the U.S. Department of Energy’s Advanced Materials for Energy-Water Systems, an Energy Frontier Research Center; the University of Chicago Materials Research Science and Engineering Center; and at the Center for Nanoscale Materials at Argonne National Laboratory.

  • Artificial intelligence predicts genetics of cancerous brain tumors in under 90 seconds

    Using artificial intelligence, researchers have discovered how to screen for genetic mutations in cancerous brain tumors in under 90 seconds — and possibly streamline the diagnosis and treatment of gliomas, a study suggests.
    A team of neurosurgeons and engineers at Michigan Medicine, in collaboration with investigators from New York University, the University of California, San Francisco and others, developed an AI-based diagnostic screening system called DeepGlioma that uses rapid imaging to analyze tumor specimens taken during an operation and detect their genetic mutations.
    In a study of more than 150 patients with diffuse glioma, the most common and deadly primary brain tumor, the newly developed system identified, with an average accuracy above 90%, the mutations used by the World Health Organization to define molecular subgroups of the condition. The results are published in Nature Medicine.
    “This AI-based tool has the potential to improve the access and speed of diagnosis and care of patients with deadly brain tumors,” said lead author and creator of DeepGlioma Todd Hollon, M.D., a neurosurgeon at University of Michigan Health and assistant professor of neurosurgery at U-M Medical School.
    Molecular classification is increasingly central to the diagnosis and treatment of gliomas, as the benefits and risks of surgery vary among brain tumor patients depending on their genetic makeup. In fact, patients with a specific type of diffuse glioma called astrocytomas can gain an average of five years with complete tumor removal compared to other diffuse glioma subtypes.
    However, access to molecular testing for diffuse glioma is limited and not uniformly available at centers that treat patients with brain tumors. When it is available, Hollon says, the turnaround time for results can take days, even weeks.

    “Barriers to molecular diagnosis can result in suboptimal care for patients with brain tumors, complicating surgical decision-making and selection of chemoradiation regimens,” Hollon said.
    Prior to DeepGlioma, surgeons had no method for differentiating diffuse gliomas during surgery. Conceived in 2019, the system combines deep neural networks with stimulated Raman histology, an optical imaging method also developed at U-M, to image brain tumor tissue in real time.
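    To give a flavor of the deep-learning half of that pipeline, here is a minimal sketch of a network mapping an imaged tissue patch to per-marker mutation probabilities. The two-channel input, layer sizes, and three-marker output head are assumptions chosen for illustration, not the actual DeepGlioma architecture.

    ```python
    import torch
    import torch.nn as nn

    # Tiny CNN: image patch in, per-marker mutation probabilities out.
    # (Markers such as IDH and 1p/19q define WHO molecular subgroups.)

    class MarkerCNN(nn.Module):
        def __init__(self, n_markers: int = 3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, n_markers)

        def forward(self, x):                   # x: (batch, 2, H, W) image patch
            z = self.features(x).flatten(1)
            return torch.sigmoid(self.head(z))  # per-marker probability

    patch = torch.randn(1, 2, 300, 300)         # assumed two-channel patch
    print(MarkerCNN()(patch))
    ```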
    “DeepGlioma creates an avenue for accurate and more timely identification that would give providers a better chance to define treatments and predict patient prognosis,” Hollon said.
    Even with optimal standard-of-care treatment, patients with diffuse glioma face limited treatment options. The median survival time for patients with malignant diffuse gliomas is only 18 months.
    While the development of medications to treat the tumors is essential, fewer than 10% of patients with glioma are enrolled in clinical trials, which often limit participation by molecular subgroups. Researchers hope that DeepGlioma can be a catalyst for early trial enrollment.
    “Progress in the treatment of the most deadly brain tumors has been limited in the past decades — in part because it has been hard to identify the patients who would benefit most from targeted therapies,” said senior author Daniel Orringer, M.D., an associate professor of neurosurgery and pathology at NYU Grossman School of Medicine, who developed stimulated Raman histology. “Rapid methods for molecular classification hold great promise for rethinking clinical trial design and bringing new therapies to patients.”
    Additional authors include Cheng Jiang, Asadur Chowdury, Akhil Kondepudi, Arjun Adapa, Wajd Al-Holou, Jason Heth, Oren Sagher, Maria Castro, Sandra Camelo-Piragua, Honglak Lee, all of University of Michigan, Mustafa Nasir-Moin, John Golfinos, Matija Snuderl, all of New York University, Alexander Aabedi, Pedro Lowenstein, Mitchel Berger, Shawn Hervey-Jumper, all of University of California, San Francisco, Lisa Irina Wadiura, Georg Widhalm, both of Medical University Vienna, Volker Neuschmelting, David Reinecke, Niklas von Spreckelsen, all of University Hospital Cologne, and Christian Freudiger, Invenio Imaging, Inc.
    This work was supported by the National Institutes of Health, Cook Family Brain Tumor Research Fund, the Mark Trauner Brain Research Fund, the Zenkel Family Foundation, Ian’s Friends Foundation and the UM Precision Health Investigators Awards grant program.

  • New in-home AI tool monitors the health of elderly residents

    Engineers are harnessing artificial intelligence (AI) and wireless technology to unobtrusively monitor elderly people in their living spaces and provide early detection of emerging health problems.
    The new system, built by researchers at the University of Waterloo, follows an individual’s activities accurately and continuously, gathers vital information without the need for a wearable device, and alerts medical experts when they need to step in and provide help.
    “After more than five years of working on this technology, we’ve demonstrated that very low-power, millimetre-wave radio systems enabled by machine learning and artificial intelligence can be reliably used in homes, hospitals and long-term care facilities,” said Dr. George Shaker, an adjunct associate professor of electrical and computer engineering.
    “An added bonus is that the system can alert healthcare workers to sudden falls, without the need for privacy-intrusive devices such as cameras.”
    The work by Shaker and his colleagues comes as overburdened public healthcare systems struggle to meet the urgent needs of rapidly growing elderly populations.
    While a senior’s physical or mental condition can change rapidly, it’s almost impossible to track their movements and discover problems 24/7 — even if they live in long-term care. In addition, other existing systems for monitoring gait — how a person walks — are expensive, difficult to operate, impractical for clinics and unsuitable for homes.

    The new system represents a major step forward and works this way: first, a wireless transmitter sends low-power waveforms across an interior space, such as a long-term care room, apartment or home.
    As the waveforms bounce off different objects and the people being monitored, they’re captured and processed by a receiver. That information goes into an AI engine that deciphers the processed waves for detection and monitoring applications.
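    To see how such a receiver turns reflections into useful measurements, here is a minimal sketch of the core step of the FMCW radar the system is built on: mixing the transmitted chirp with its echo yields a beat tone whose frequency is proportional to target range. The chirp parameters are illustrative, not the system’s actual specifications.

    ```python
    import numpy as np

    # FMCW range estimation: beat frequency f_b = 2*R*B / (c*T),
    # so R = f_b * c * T / (2*B). Parameters below are made up.

    c = 3e8                 # speed of light (m/s)
    B, T = 4e9, 1e-3        # chirp bandwidth (Hz) and duration (s)
    fs = 2e6                # ADC sample rate (Hz)
    R_true = 3.0            # target range (m), e.g. a person across the room

    t = np.arange(0, T, 1 / fs)
    f_beat = 2 * R_true * B / (c * T)
    beat = np.cos(2 * np.pi * f_beat * t)       # ideal mixer output, no noise

    spectrum = np.abs(np.fft.rfft(beat))
    freqs = np.fft.rfftfreq(len(beat), 1 / fs)
    R_est = freqs[np.argmax(spectrum)] * c * T / (2 * B)
    print(f"estimated range: {R_est:.2f} m")    # ~3.00 m
    ```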
    The system, which employs extremely low-power radar technology, can be mounted simply on a ceiling or by a wall and doesn’t suffer the drawbacks of wearable monitoring devices, which can be uncomfortable and require frequent battery charging.
    “Using our wireless technology in homes and long-term care homes can effectively monitor various activities such as sleeping, watching TV, eating and the frequency of bathroom use,” Shaker said.
    “Currently, the system can alert care workers to a general decline in mobility, increased likelihood of falls, possibility of a urinary tract infection, and the onset of several other medical conditions.”
    Waterloo researchers have partnered with a Canadian company, Gold Sentintel, to commercialize the technology, which has already been installed in several long-term care homes.
    A paper on the work, “AI-Powered Non-Contact In-Home Gait Monitoring and Activity Recognition System Based on mm-Wave FMCW Radar and Cloud Computing,” appears in the IEEE Internet of Things Journal.
    Doctoral student Hajar Abedi was the lead author, with contributions from Ahmad Ansariyan, Dr. Plinio Morita, Dr. Jen Boger and Dr. Alexander Wong.