More stories

  • Could wearables capture well-being?

    Applying machine learning models, a type of artificial intelligence (AI), to data collected passively from wearable devices can identify a patient’s degree of resilience and well-being, according to investigators at the Icahn School of Medicine at Mount Sinai in New York.
    The findings, reported in the May 2nd issue of JAMIA Open, support wearable devices, such as the Apple Watch®, as a way to monitor and assess psychological states remotely without requiring the completion of mental health questionnaires.
    The paper points out that resilience, or an individual’s ability to overcome difficulty, is an important stress mitigator, reduces morbidity, and improves chronic disease management.
    “Wearables provide a means to continually collect information about an individual’s physical state. Our results provide insight into the feasibility of assessing psychological characteristics from this passively collected data,” said first author Robert P. Hirten, MD, Clinical Director, Hasso Plattner Institute for Digital Health at Mount Sinai. “To our knowledge, this is the first study to evaluate whether resilience, a key mental health feature, can be evaluated from devices such as the Apple Watch.”
    Mental health disorders are common, accounting for 13 percent of the global burden of disease, and a quarter of the population will experience psychological illness at some point in their lives. Yet resources for evaluating these disorders remain limited, the researchers say.
    “There are wide disparities in access across geography and socioeconomic status, and the need for in-person assessment or the completion of validated mental health surveys is further limiting,” said senior author Zahi Fayad, PhD, Director of the BioMedical Engineering and Imaging Institute at Icahn Mount Sinai. “A better understanding of who is at psychological risk and an improved means of tracking the impact of psychological interventions is needed. The growth of digital technology presents an opportunity to improve access to mental health services for all people.”
    To determine if machine learning models could be trained to distinguish an individual’s degree of resilience and psychological well-being using data from wearable devices, the Icahn Mount Sinai researchers analyzed data from the Warrior Watch Study. The data set, leveraged for this digital observational study, comprised 329 health care workers enrolled at seven hospitals in New York City.

    Subjects wore an Apple Watch® Series 4 or 5 for the duration of their participation, which measured heart rate variability and resting heart rate throughout the follow-up period. Surveys measuring resilience, optimism, and emotional support were collected at baseline. The collected metrics proved predictive of resilience and well-being states. Although the Warrior Watch Study was not designed to evaluate this endpoint, the findings support further assessment of psychological characteristics from passively collected wearable data.
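    As a rough illustration of this kind of analysis (a minimal sketch under assumptions, not the study’s actual pipeline), the snippet below trains an off-the-shelf classifier to predict a hypothetical high/low-resilience label from per-subject heart rate variability and resting heart rate summaries; the data, feature choices, and label are synthetic stand-ins.

    ```python
    # Minimal sketch: predicting a hypothetical high/low resilience label from
    # wearable-derived features. The data and labels below are synthetic and only
    # illustrate the general approach described in the article.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_subjects = 329  # cohort size reported in the study

    # Hypothetical per-subject summaries of passively collected signals:
    # mean heart rate variability (SDNN, ms) and mean resting heart rate (bpm).
    X = np.column_stack([
        rng.normal(45, 12, n_subjects),
        rng.normal(68, 8, n_subjects),
    ])
    # Hypothetical binary label derived from a baseline resilience questionnaire.
    y = (X[:, 0] + rng.normal(0, 10, n_subjects) > 45).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
    ```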
    “We hope that this approach will enable us to bring psychological assessment and care to a larger population, who may not have access at this time,” said Micol Zweig, MPH, co-author of the paper and Associate Director of Clinical Research, Hasso Plattner Institute for Digital Health at Mount Sinai. “We also intend to evaluate this technique in other patient populations to further refine the algorithm and improve its applicability.”
    To that end, the research team plans to continue using wearable data to observe a range of physical and psychological disorders and diseases. The simultaneous development of sophisticated analytical tools, including artificial intelligence, say the investigators, can facilitate the analysis of data collected from these devices and apps to identify patterns associated with a given mental or physical disease condition.
    The paper is titled “A machine learning approach to determine resilience utilizing wearable device data: analysis of an observational cohort.”
    Additional co-authors are Matteo Danieletto, PhD, Maria Suprun, PhD, Eddye Golden, MPH, Sparshdeep Kaur, BBA, Drew Helmus, MPH, Anthony Biello, BA, Dennis Charney, MD, Laurie Keefer, PhD, Mayte Suarez-Farinas, PhD, and Girish N. Nadkarni, MD, all from the Icahn School of Medicine at Mount Sinai.
    Support for this study was provided by the Ehrenkranz Lab for Human Resilience, the BioMedical Engineering and Imaging Institute, the Hasso Plattner Institute for Digital Health, the Mount Sinai Clinical Intelligence Center, and the Dr. Henry D. Janowitz Division of Gastroenterology, all at Icahn Mount Sinai, and by the National Institutes of Health, grant number K23DK129835.

  • Sensor enables high-fidelity input from everyday objects, human body

    Couches, tables, sleeves and more can turn into a high-fidelity input device for computers using a new sensing system developed at the University of Michigan.
    The system repurposes technology from new bone-conduction microphones, known as Voice Pickup Units (VPUs), which detect only those acoustic waves that travel along the surface of objects. It works in noisy environments, along odd geometries such as toys and arms, and on soft fabrics such as clothing and furniture.
    Called SAWSense, for the surface acoustic waves it relies on, the system recognizes different inputs, such as taps, scratches and swipes, with 97% accuracy. In one demonstration, the team used a normal table to replace a laptop’s trackpad.
    “This technology will enable you to treat, for example, the whole surface of your body like an interactive surface,” said Yasha Iravantchi, U-M doctoral candidate in computer science and engineering. “If you put the device on your wrist, you can do gestures on your own skin. We have preliminary findings that demonstrate this is entirely feasible.”
    Taps, swipes and other gestures send acoustic waves along the surfaces of materials. The system then classifies these waves with machine learning to turn all touch into a robust set of inputs. The system was presented last week at the 2023 Conference on Human Factors in Computing Systems, where it received a best paper award.
    As more objects continue to incorporate smart or connected technology, designers face a number of challenges in giving them intuitive input mechanisms. The result is often a clunky mix of input methods such as touch screens and mechanical or capacitive buttons, Iravantchi says. Touch screens may be too costly to enable gesture inputs across large surfaces like counters and refrigerators, while buttons only allow one kind of input at predefined locations.

    Past approaches to overcome these limitations have included the use of microphones and cameras for audio- and gesture-based inputs, but the authors say techniques like these have limited practicality in the real world.
    “When there’s a lot of background noise, or something comes between the user and the camera, audio and visual gesture inputs don’t work well,” Iravantchi said.
    To overcome these limitations, the sensors powering SAWSense are housed in a hermetically sealed chamber that completely blocks even very loud ambient noise. The only entryway is through a mass-spring system that conducts the surface-acoustic waves inside the housing without ever coming in contact with sounds in the surrounding environment. When combined with the team’s signal processing software, which generates features from the data before feeding it into the machine learning model, the system can record and classify the events along an object’s surface.
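    To make the classification step concrete, here is an illustrative sketch (not the SAWSense implementation): it extracts simple spectral features from short surface-vibration recordings and trains a standard classifier to distinguish taps, swipes and scratches. The sample rate, feature choice, synthetic signals and labels are all assumptions.

    ```python
    # Illustrative sketch (not the SAWSense code): classify tap / swipe / scratch
    # events from a 1-D surface-vibration signal using simple spectral features
    # and an off-the-shelf classifier. Signals and labels here are synthetic.
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    FS = 16_000  # assumed sample rate of the vibration sensor, in Hz

    def spectral_features(signal):
        """Mean log-power per frequency band over a short event window."""
        _, _, sxx = spectrogram(signal, fs=FS, nperseg=256)
        return np.log(sxx + 1e-9).mean(axis=1)

    rng = np.random.default_rng(1)

    def synthetic_event(kind):
        """Toy stand-ins: taps decay quickly, swipes sweep upward in frequency,
        scratches are noisy high-frequency bursts."""
        t = np.arange(0, 0.2, 1 / FS)
        if kind == "tap":
            return np.exp(-60 * t) * rng.normal(size=t.size)
        if kind == "swipe":
            return np.sin(2 * np.pi * (200 + 400 * t) * t) + 0.1 * rng.normal(size=t.size)
        return rng.normal(size=t.size) * (1 + np.sin(2 * np.pi * 3000 * t))

    labels = ["tap", "swipe", "scratch"] * 60
    X = np.array([spectral_features(synthetic_event(k)) for k in labels])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
    clf = SVC().fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```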
    “There are other ways you could detect vibrations or surface-acoustic waves, like piezo-electric sensors or accelerometers,” said Alanson Sample, U-M associate professor of electrical engineering and computer science, “but they can’t capture the broad range of frequencies that we need to tell the difference between a swipe and a scratch, for instance.”
    The high fidelity of the VPUs allows SAWSense to identify a wide range of activities on a surface beyond user touch events. For instance, a VPU on a kitchen countertop can detect chopping, stirring, blending or whisking, as well as identify electronic devices in use, such as a blender or microwave.

    “VPUs do a good job of sensing activities and events happening in a well-defined area,” Iravantchi said. “This allows the functionality that comes with a smart object without the privacy concerns of a standard microphone that senses the whole room, for example.”
    When multiple VPUs are used in combination, SAWSense could enable more specific and sensitive inputs, especially those that require a sense of space and distance like the keys on a keyboard or buttons on a remote.
    In addition, the researchers are exploring the use of VPUs for medical sensing, including picking up delicate noises such as the sounds of joints and connective tissues as they move. The high-fidelity audio data VPUs provide could enable real-time analytics about a person’s health, Sample says.
    The research is partially funded by Meta Platforms Inc.
    The team has applied for patent protection with the assistance of U-M Innovation Partnerships and is seeking partners to bring the technology to market.

  • Lithography-free photonic chip offers speed and accuracy for artificial intelligence

    Photonic chips have revolutionized data-heavy technologies. On their own or in concert with traditional electronic circuits, these laser-powered devices send and process information at the speed of light, making them a promising solution for artificial intelligence’s data-hungry applications.
    In addition to their incomparable speed, photonic circuits use significantly less energy than electronic ones. Electrons move relatively slowly through hardware, colliding with other particles and generating heat, while photons flow without losing energy, generating no heat at all. Unburdened by the energy loss inherent in electronics, integrated photonics are poised to play a leading role in sustainable computing.
    Photonics and electronics draw on separate areas of science and use distinct architectural structures. Both, however, rely on lithography to define their circuit elements and connect them sequentially. While photonic chips don’t make use of the transistors that populate electronic chips’ ever-shrinking and increasingly layered grooves, their complex lithographic patterning guides laser beams through a coherent circuit to form a photonic network that can perform computational algorithms.
    But now, for the first time, researchers at the University of Pennsylvania School of Engineering and Applied Science have created a photonic device that provides programmable on-chip information processing without lithography, offering the speed of photonics augmented by superior accuracy and flexibility for AI applications.
    Achieving unparalleled control of light, this device consists of spatially distributed optical gain and loss. Lasers cast light directly on a semiconductor wafer, without the need for defined lithographic pathways.
    Liang Feng, Professor in the Departments of Materials Science and Engineering (MSE) and Electrical Systems and Engineering (ESE), along with Ph.D. student Tianwei Wu (MSE) and postdoctoral fellows Zihe Gao and Marco Menarini (ESE), introduced the microchip in a recent study published in Nature Photonics.

    Silicon-based electronic systems have transformed the computational landscape. But they have clear limitations: they are slow at processing signals, they process data serially rather than in parallel, and they can be miniaturized only to a certain extent. Photonics is one of the most promising alternatives because it can overcome all of these shortcomings.
    “But photonic chips intended for machine learning applications face the obstacles of an intricate fabrication process where lithographic patterning is fixed, limited in reprogrammability, subject to error or damage and expensive,” says Feng. “By removing the need for lithography, we are creating a new paradigm. Our chip overcomes those obstacles and offers improved accuracy and ultimate reconfigurability given the elimination of all kinds of constraints from predefined features.”
    Without lithography, these chips become adaptable data-processing powerhouses. Because patterns are not pre-defined and etched in, the device is intrinsically free of defects. Perhaps more impressively, the lack of lithography renders the microchip impressively reprogrammable, able to tailor its laser-cast patterns for optimal performance, be the task simple (few inputs, small datasets) or complex (many inputs, large datasets).
    In other words, the intricacy or minimalism of the device is a sort of living thing, adaptable in ways no etched microchip could be.
    “What we have here is something incredibly simple,” says Wu. “We can build and use it very quickly. We can integrate it easily with classical electronics. And we can reprogram it, changing the laser patterns on the fly to achieve real-time reconfigurable computing for on-chip training of an AI network.”
    An unassuming slab of semiconductor, the device couldn’t be simpler. The key to the research team’s breakthrough is the manipulation of this slab’s material properties: projecting lasers in dynamically programmable patterns reconfigures the computing functions of the photonic information processor.

    This ultimate reconfigurability is critical for real-time machine learning and AI.
    “The interesting part,” says Menarini, “is how we are controlling the light. Conventional photonic chips are technologies based on passive material, meaning its material scatters light, bouncing it back and forth. Our material is active. The beam of pumping light modifies the material such that when the signal beam arrives, it can release energy and increase the amplitude of signals.”
    “This active nature is the key to this science, and the solution required to achieve our lithography-free technology,” adds Gao. “We can use it to reroute optical signals and program optical information processing on-chip.”
    Feng compares the technology to an artistic tool, a pen for drawing pictures on a blank page.
    “What we have achieved is exactly the same: pumping light is our pen to draw the photonic computational network (the picture) on a piece of unpatterned semiconductor wafer (the blank page).”
    But unlike indelible lines of ink, these beams of light can be drawn and redrawn, their patterns tracing innumerable paths to the future.

  • Realistic simulated driving environment based on ‘crash-prone’ Michigan intersection

    The first statistically realistic roadway simulation has been developed by researchers at the University of Michigan. While it currently represents a particularly perilous roundabout, future work will expand it to include other driving situations for testing autonomous vehicle software.
    The simulation is a machine-learning model that trained on data collected at a roundabout on the south side of Ann Arbor, recognized as one of the most crash-prone intersections in the state of Michigan and conveniently just a few miles from the offices of the research team.
    Known as the Neural Naturalistic Driving Environment, or NeuralNDE, it turned that data into a simulation of what drivers experience every day. Virtual roadways like this are needed to ensure the safety of autonomous vehicle software before other cars, cyclists and pedestrians ever cross its path.
    “The NeuralNDE reproduces the driving environment and, more importantly, realistically simulates these safety-critical situations so we can evaluate the safety performance of autonomous vehicles,” said Henry Liu, U-M professor of civil engineering and director of Mcity, a U-M-led public-private mobility research partnership.
    Liu is also director of the Center for Connected and Automated Transportation and corresponding author of the study in Nature Communications.
    Safety-critical events, which require a driver to make split-second decisions and take action, don’t happen that often. Drivers can go many hours between events that force them to slam on the brakes or swerve to avoid a collision, and each event has its own unique circumstances.

    Together, these represent two bottlenecks in the effort to simulate our roadways, known as the “curse of rarity” and the “curse of dimensionality” respectively. The curse of dimensionality is caused by the complexity of the driving environment, which includes factors like pavement quality, the current weather conditions, and the different types of road users including pedestrians and bicyclists.
    To model it all, the team tried to see it all. They installed sensor systems on light poles which continuously collect data at the State Street/Ellsworth Road roundabout.
    “The reason that we chose that location is that roundabouts are a very challenging, urban driving scenario for autonomous vehicles. In a roundabout, drivers are required to spontaneously negotiate and cooperate with other drivers moving through the intersection. In addition, this particular roundabout experiences high traffic volume and is two lanes, which adds to its complexity,” said Xintao Yan, a Ph.D. student in civil and environmental engineering and first author of the study, who is advised by Liu.
    The NeuralNDE serves as a key component of the CCAT Safe AI Framework for Trustworthy Edge Scenario Tests, or SAFE TEST, a system developed by Liu’s team that uses artificial intelligence to reduce the testing miles required to ensure the safety of autonomous vehicles by 99.99%. It essentially breaks the “curse of rarity,” introducing safety-critical incidents a thousand times more frequently than they occur in real driving. The NeuralNDE is also critical to a project designed to enable the Mcity Test Facility to be used for remote testing of AV software.
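    To see why oversampling rare events doesn’t have to bias safety estimates, consider the following toy illustration (a sketch of generic importance sampling, not the actual SAFE TEST algorithm); all probabilities and the boost factor are hypothetical.

    ```python
    # Toy illustration (not the SAFE TEST algorithm): oversample rare safety-critical
    # scenarios by 1,000x in simulation, then correct with importance weights so the
    # estimated crash rate per mile stays unbiased. All numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(42)

    P_CRITICAL = 1e-4              # assumed probability a real mile is safety-critical
    BOOST = 1_000                  # oversampling factor for critical scenarios
    P_CRASH_GIVEN_CRITICAL = 0.05  # assumed crash probability in a critical scenario
    N_TEST_MILES = 100_000

    # Proposal distribution: critical scenarios drawn 1,000x more often than in reality.
    q_critical = min(P_CRITICAL * BOOST, 1.0)
    is_critical = rng.random(N_TEST_MILES) < q_critical
    crashes = is_critical & (rng.random(N_TEST_MILES) < P_CRASH_GIVEN_CRITICAL)

    # Importance weight = real-world probability / sampling probability for each mile.
    weights = np.where(is_critical,
                       P_CRITICAL / q_critical,
                       (1 - P_CRITICAL) / (1 - q_critical))

    est_crash_rate = np.sum(weights * crashes) / N_TEST_MILES
    true_crash_rate = P_CRITICAL * P_CRASH_GIVEN_CRITICAL
    print(f"estimated crashes per mile: {est_crash_rate:.2e} (ground truth {true_crash_rate:.2e})")
    ```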
    But unlike a fully virtual environment, these tests take place in mixed reality on closed test tracks such as the Mcity Test Facility and the American Center for Mobility in Ypsilanti, Michigan. In addition to the real conditions of the track, the autonomous vehicles also experience virtual drivers, cyclists and pedestrians behaving in both safe and dangerous ways. By testing these scenarios in a controlled environment, AV developers can fine-tune their systems to better handle all driving situations.
    The NeuralNDE is not only beneficial for AV developers but also for researchers studying human driver behavior. The simulation can be used to study how drivers respond to different scenarios, which can help in designing more functional road infrastructure.
    In 2021, the U-M Transportation Research Institute was awarded $9.95 million in funding by the U.S. Department of Transportation to expand the number of intersections equipped with these sensors to 21. This implementation will expand the capabilities of the NeuralNDE and provide real-time alerts to drivers with connected vehicles.
    The research was funded by Mcity, CCAT and the U-M Transportation Research Institute. Founded in 1965, UMTRI is a global leader in multidisciplinary research and a partner of choice for industry leaders, foundations and government agencies to advance safe, equitable and efficient transportation and mobility. CCAT is a regional university transportation research center that was recently awarded a $15 million, five-year renewal by the USDOT.

  • Brain activity decoder can reveal stories in people’s minds

    A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
    The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.
    Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is willing to have their thoughts decoded, listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.
    “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
    The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.
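    One plausible way to frame such a decoder (a hedged sketch, not necessarily the UT Austin implementation) is as a search over candidate word sequences, scored by how well a model that predicts brain responses from text matches the measured fMRI signal. In the sketch below, propose_continuations and encoding_model are hypothetical placeholders.

    ```python
    # Hedged sketch of a gist-level decoder: a language model proposes candidate
    # continuations and an encoding model (trained to predict fMRI responses from
    # text) scores them against the measured brain activity. All callables here
    # are hypothetical placeholders, not the published system.
    import numpy as np

    def decode_story(fmri_frames, propose_continuations, encoding_model, beam_width=5):
        """Beam-search decoding of text from a sequence of fMRI frames."""
        beams = [("", 0.0)]  # (decoded text so far, cumulative score)
        for frame in fmri_frames:
            candidates = []
            for text, score in beams:
                for continuation in propose_continuations(text):
                    hypothesis = (text + " " + continuation).strip()
                    predicted = encoding_model(hypothesis)  # predicted brain response
                    # Score by similarity between predicted and measured responses.
                    match = np.corrcoef(predicted, frame)[0, 1]
                    candidates.append((hypothesis, score + match))
            # Keep only the best-scoring hypotheses for the next frame.
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        return beams[0][0]
    ```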
    For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!'” was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.'”
    Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable.

    “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
    In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.
    The system currently is not practical for use outside of the laboratory because of its reliance on the time needed on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
    “fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth said. “So, our exact kind of approach should translate to fNIRS,” although, he noted, the resolution with fNIRS would be lower.
    This work was supported by the Whitehall Foundation, the Alfred P. Sloan Foundation and the Burroughs Wellcome Fund.
    The study’s other co-authors are Amanda LeBel, a former research assistant in the Huth lab, and Shailee Jain, a computer science graduate student at UT Austin.
    Alexander Huth and Jerry Tang have filed a PCT patent application related to this work.

  • Researchers explore why some people get motion sick playing VR games while others don’t

    The way our senses adjust while playing high-intensity virtual reality games plays a critical role in understanding why some people experience severe cybersickness and others don’t.
    Cybersickness is a form of motion sickness that occurs from exposure to immersive VR and augmented reality applications.
    A new study, led by researchers at the University of Waterloo, found that the subjective visual vertical — a measure of how individuals perceive the orientation of vertical lines — shifted considerably after participants played a high-intensity VR game.
    “Our findings suggest that the severity of a person’s cybersickness is affected by how our senses adjust to the conflict between reality and virtual reality,” said Michael Barnett-Cowan, a professor in the Department of Kinesiology and Health Sciences. “This knowledge could be invaluable for developers and designers of VR experiences, enabling them to create more comfortable and enjoyable environments for users.”
    The researchers collected data from 31 participants. They assessed their perceptions of the vertical before and after playing two VR games, one high-intensity and one low-intensity.
    Those who experienced less sickness were more likely to have the largest change in the subjective visual vertical following exposure to VR, particularly at a high intensity. Conversely, those who had the highest levels of cybersickness were less likely to have changed how they perceived vertical lines. There were no significant differences between males and females, nor between participants with low and high gaming experience.
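    A minimal sketch of the kind of analysis behind such a finding (illustrative only, with synthetic data, not the study’s code) would regress cybersickness severity on the post-exposure shift in the subjective visual vertical:

    ```python
    # Illustrative sketch with synthetic data: does the shift in subjective visual
    # vertical (SVV) after VR exposure predict cybersickness severity? The sign of
    # the toy relationship mirrors the reported result (larger shift, less sickness).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 31  # number of participants in the study

    svv_shift = rng.normal(3.0, 1.5, n)                  # degrees of SVV change (toy)
    sickness = 40 - 4 * svv_shift + rng.normal(0, 5, n)  # questionnaire score (toy)

    result = stats.linregress(svv_shift, sickness)
    print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.4f}")
    ```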
    “While the subjective vertical visual task significantly predicted the severity of cybersickness symptoms, there is still much to be explained,” said co-author William Chung, a former Waterloo doctoral student who is now a postdoctoral fellow at the Toronto Rehabilitation Institute.
    “By understanding the relationship between sensory reweighting and cybersickness susceptibility, we can potentially develop personalized cybersickness mitigation strategies and VR experiences that take into account individual differences in sensory processing and hopefully lower the occurrence of cybersickness.”
    As VR continues to revolutionize gaming, education and social interaction, addressing the pervasive issue of cybersickness — marked by symptoms such as nausea, disorientation, eye strain and fatigue — is critical for ensuring a positive user experience.

  • Structured exploration allows biological brains to learn faster than AI

    Neuroscientists have uncovered how exploratory actions enable animals to learn their spatial environment more efficiently. Their findings could help build better AI agents that can learn faster and require less experience.
    Researchers at the Sainsbury Wellcome Centre and Gatsby Computational Neuroscience Unit at UCL found the instinctual exploratory runs that animals carry out are not random. These purposeful actions allow mice to learn a map of the world efficiently. The study, published today in Neuron, describes how neuroscientists tested their hypothesis that the specific exploratory actions that animals undertake, such as darting quickly towards objects, are important in helping them learn how to navigate their environment.
    “There are a lot of theories in psychology about how performing certain actions facilitates learning. In this study, we tested whether simply observing obstacles in an environment was enough to learn about them, or if purposeful, sensory-guided actions help animals build a cognitive map of the world,” said Professor Tiago Branco, Group Leader at the Sainsbury Wellcome Centre and corresponding author on the paper.
    In previous work, scientists at SWC observed a correlation between how well animals learn to go around an obstacle and the number of times they had run to the object. In this study, Philip Shamash, SWC PhD student and first author of the paper, carried out experiments to test the impact of preventing animals from performing exploratory runs. By expressing a light-activated protein called channelrhodopsin in one part of the motor cortex, Philip was able to use optogenetic tools to prevent animals from initiating exploratory runs towards obstacles.
    The team found that even though mice had spent a lot of time observing and sniffing obstacles, if they were prevented from running towards them, they did not learn. This shows that the instinctive exploratory actions themselves help the animals learn a map of their environment.
    To explore the algorithms that the brain might be using to learn, the team worked with Sebastian Lee, a PhD student in Andrew Saxe’s lab at SWC, to run different models of reinforcement learning that people have developed for artificial agents, and observe which one most closely reproduces the mouse behaviour.
    There are two main classes of reinforcement learning models: model-free and model-based. The team found that under some conditions mice act in a model-free way but under other conditions, they seem to have a model of the world. And so the researchers implemented an agent that can arbitrate between model-free and model-based. This is not necessarily how the mouse brain works, but it helped them to understand what is required in a learning algorithm to explain the behaviour.
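    As a hedged sketch of what such an arbitrating agent could look like (an illustrative toy, not the paper’s actual model), the agent below keeps both a model-free Q-table and a learned one-step world model, and switches to model-based action selection only once the world model’s recent prediction error is low; the arbitration rule and parameters are assumptions.

    ```python
    # Illustrative toy agent (not the published model): arbitration between a
    # model-free controller (cached Q-values) and a model-based controller
    # (one-step lookahead through a learned transition model). The arbitration
    # rule -- trust the model only when its recent prediction error is low --
    # and all parameters are assumptions for this sketch.
    import numpy as np
    from collections import defaultdict

    class ArbitratedAgent:
        def __init__(self, n_actions, alpha=0.1, gamma=0.95, error_threshold=0.2):
            self.q = defaultdict(lambda: np.zeros(n_actions))  # model-free values
            self.model = {}            # (state, action) -> predicted next state
            self.model_error = 1.0     # running prediction-error estimate
            self.alpha, self.gamma = alpha, gamma
            self.error_threshold = error_threshold
            self.n_actions = n_actions

        def act(self, state):
            if self.model and self.model_error < self.error_threshold:
                # Model-based: evaluate each action by the value of its predicted successor.
                values = [self.q[self.model.get((state, a), state)].max()
                          for a in range(self.n_actions)]
                return int(np.argmax(values))
            # Model-free: act greedily on cached Q-values.
            return int(np.argmax(self.q[state]))

        def update(self, state, action, reward, next_state):
            # Model-free temporal-difference update.
            td_target = reward + self.gamma * self.q[next_state].max()
            self.q[state][action] += self.alpha * (td_target - self.q[state][action])
            # Model learning, plus a running estimate of how often the model is wrong.
            predicted = self.model.get((state, action))
            self.model_error += 0.1 * (float(predicted != next_state) - self.model_error)
            self.model[(state, action)] = next_state
    ```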
    “One of the problems with artificial intelligence is that agents need a lot of experience in order to learn something. They have to explore the environment thousands of times, whereas a real animal can learn an environment in less than ten minutes. We think this is in part because, unlike artificial agents, animals’ exploration is not random and instead focuses on salient objects. This kind of directed exploration makes the learning more efficient and so they need less experience to learn,” explained Professor Branco.
    The next steps for the researchers are to explore the link between the execution of exploratory actions and the representation of subgoals. The team are now carrying out recordings in the brain to discover which areas are involved in representing subgoals and how the exploratory actions lead to the formation of the representations.
    This research was funded by a Wellcome Senior Research Fellowship (214352/Z/18/Z) and by the Sainsbury Wellcome Centre Core Grant from the Gatsby Charitable Foundation and Wellcome (090843/F/09/Z), the Sainsbury Wellcome Centre PhD Programme and a Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (216386/Z/19/Z).

  • Engineers ‘grow’ atomically thin transistors on top of computer chips

    Emerging AI applications, like chatbots that generate natural human language, demand denser, more powerful computer chips. But semiconductor chips are traditionally made with bulk materials, which are boxy 3D structures, so stacking multiple layers of transistors to create denser integrations is very difficult.
    However, semiconductor transistors made from ultrathin 2D materials, each only about three atoms in thickness, could be stacked up to create more powerful chips. To this end, MIT researchers have now demonstrated a novel technology that can effectively and efficiently “grow” layers of 2D transition metal dichalcogenide (TMD) materials directly on top of a fully fabricated silicon chip to enable denser integrations.
    Growing 2D materials directly onto a silicon CMOS wafer has posed a major challenge because the process usually requires temperatures of about 600 degrees Celsius, while silicon transistors and circuits could break down when heated above 400 degrees. Now, the interdisciplinary team of MIT researchers has developed a low-temperature growth process that does not damage the chip. The technology allows 2D semiconductor transistors to be directly integrated on top of standard silicon circuits.
    In the past, researchers have grown 2D materials elsewhere and then transferred them onto a chip or a wafer. This often causes imperfections that hamper the performance of the final devices and circuits. Also, transferring the material smoothly becomes extremely difficult at wafer-scale. By contrast, this new process grows a smooth, highly uniform layer across an entire 8-inch wafer.
    The new technology is also able to significantly reduce the time it takes to grow these materials. While previous approaches required more than a day to grow a single layer of 2D materials, the new approach can grow a uniform layer of TMD material in less than an hour over entire 8-inch wafers.
    Due to its rapid speed and high uniformity, the new technology enabled the researchers to successfully integrate a 2D material layer onto much larger surfaces than has been previously demonstrated. This makes their method better-suited for use in commercial applications, where wafers that are 8 inches or larger are key.

    “Using 2D materials is a powerful way to increase the density of an integrated circuit. What we are doing is like constructing a multistory building. If you have only one floor, which is the conventional case, it won’t hold many people. But with more floors, the building will hold more people that can enable amazing new things. Thanks to the heterogenous integration we are working on, we have silicon as the first floor and then we can have many floors of 2D materials directly integrated on top,” says Jiadi Zhu, an electrical engineering and computer science graduate student and co-lead author of a paper on this new technique.
    Zhu wrote the paper with co-lead author Ji-Hoon Park, an MIT postdoc; corresponding authors Jing Kong, professor of electrical engineering and computer science (EECS) and a member of the Research Laboratory of Electronics; and Tomás Palacios, professor of EECS and director of the Microsystems Technology Laboratories (MTL); as well as others at MIT, MIT Lincoln Laboratory, Oak Ridge National Laboratory, and Ericsson Research. The paper appears today in Nature Nanotechnology.
    Slim materials with vast potential
    The 2D material the researchers focused on, molybdenum disulfide, is flexible, transparent, and exhibits powerful electronic and photonic properties that make it ideal for a semiconductor transistor. It is composed of a one-atom-thick layer of molybdenum atoms sandwiched between two layers of sulfur atoms.
    Growing thin films of molybdenum disulfide on a surface with good uniformity is often accomplished through a process known as metal-organic chemical vapor deposition (MOCVD). Molybdenum hexacarbonyl and diethyl sulfide, two organic chemical compounds that contain molybdenum and sulfur atoms, are vaporized and heated inside the reaction chamber, where they “decompose” into smaller molecules. Then they link up through chemical reactions to form chains of molybdenum disulfide on a surface.

    But decomposing these molybdenum and sulfur compounds, which are known as precursors, requires temperatures above 550 degrees Celsius, while silicon circuits start to degrade when temperatures surpass 400 degrees.
    So, the researchers started by thinking outside the box — they designed and built an entirely new furnace for the metal-organic chemical vapor deposition process.
    The oven consists of two chambers, a low-temperature region in the front, where the silicon wafer is placed, and a high-temperature region in the back. Vaporized molybdenum and sulfur precursors are pumped into the furnace. The molybdenum stays in the low-temperature region, where the temperature is kept below 400 degrees Celsius — hot enough to decompose the molybdenum precursor but not so hot that it damages the silicon chip.
    The sulfur precursor flows through into the high-temperature region, where it decomposes. Then it flows back into the low-temperature region, where the chemical reaction to grow molybdenum disulfide on the surface of the wafer occurs.
    “You can think about decomposition like making black pepper — you have a whole peppercorn and you grind it into a powder form. So, we smash and grind the pepper in the high-temperature region, then the powder flows back into the low-temperature region,” Zhu explains.
    Faster growth and better uniformity
    One problem with this process is that silicon circuits typically have aluminum or copper as a top layer so the chip can be connected to a package or carrier before it is mounted onto a printed circuit board. But sulfur causes these metals to sulfurize, the same way some metals rust when exposed to oxygen, which destroys their conductivity. The researchers prevented sulfurization by first depositing a very thin layer of passivation material on top of the chip. Then later they could open the passivation layer to make connections.
    They also placed the silicon wafer into the low-temperature region of the furnace vertically, rather than horizontally. By placing it vertically, neither end is too close to the high-temperature region, so no part of the wafer is damaged by the heat. Plus, the molybdenum and sulfur gas molecules swirl around as they bump into the vertical chip, rather than flowing over a horizontal surface. This circulation effect improves the growth of molybdenum disulfide and leads to better material uniformity.
    In addition to yielding a more uniform layer, their method was also much faster than other MOCVD processes. They could grow a layer in less than an hour, while typically the MOCVD growth process takes at least an entire day.
    Using the state-of-the-art MIT.Nano facilities, they were able to demonstrate high material uniformity and quality across an 8-inch silicon wafer, which is especially important for industrial applications where bigger wafers are needed.
    “By shortening the growth time, the process is much more efficient and could be more easily integrated into industrial fabrications. Plus, this is a silicon-compatible low-temperature process, which can be useful to push 2D materials further into the semiconductor industry,” Zhu says.
    In the future, the researchers want to fine-tune their technique and use it to grow many stacked layers of 2D transistors. In addition, they want to explore the use of the low-temperature growth process for flexible surfaces, like polymers, textiles, or even papers. This could enable the integration of semiconductors onto everyday objects like clothing or notebooks.
    This work is partially funded by the MIT Institute for Soldier Nanotechnologies, the National Science Foundation Center for Integrated Quantum Materials, Ericsson, MITRE, the U.S. Army Research Office, and the U.S. Department of Energy. The project also benefitted from the support of TSMC University Shuttle.