More stories

  • Reduced power consumption in semiconductor devices

    Stepping stones are placed to help travelers cross streams. As long as stepping stones connect both sides of the water, one can easily get across in just a few steps. Using the same principle, a research team at POSTECH has developed technology that cuts the power consumption of semiconductor devices in half by placing electronic "stepping stones."
    A research team led by Professor Junwoo Son and Dr. Minguk Cho (Department of Materials Science and Engineering) at POSTECH has succeeded in maximizing the switching efficiency of oxide semiconductor devices by inserting platinum nanoparticles. The findings from the study were recently published in the international journal Nature Communications.
    Oxide materials with a metal-insulator phase transition, in which the material rapidly changes from an insulator to a metal once a threshold voltage is reached, are in the spotlight as key materials for fabricating low-power semiconductor devices.
    The metal-insulator phase transition occurs when insulator domains, each several nanometers (nm, one billionth of a meter) in size, are transformed into metal domains. The key to increasing the switching efficiency of such a device is to reduce the voltage that must be applied to it.
    The research team succeeded in increasing the switching efficiency of the device by using platinum nanoparticles. When voltage was applied to a device, an electric current “skipped” through these particles and a rapid phase transition occurred.
    The memory effect of the device also increased by more than a million times. Ordinarily, once the voltage is cut off, the device reverts almost immediately to the insulator phase, in which no current flows; this retention lasts only about one millionth of a second. The researchers confirmed, however, that the memory effect of remembering the previous firing of the device can be extended to several seconds, and that the device can be operated again at a relatively low voltage, owing to residual metallic domains remaining near the platinum nanoparticles.
    This technology is anticipated to be essential for the development of next-generation electronic devices, such as intelligent semiconductors or neuromorphic semiconductor devices that can process vast amounts of data with less power.
    This study was conducted with the support from the Basic Science Research Program, Mid-career Researcher Program, and the Next-generation Intelligence Semiconductor Program of the National Research Foundation of Korea.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • A swarm of 3D printing drones for construction and repair

    An international research team led by drone expert Mirko Kovac of Empa and Imperial College London has taken bees as a model to develop a swarm of cooperative, 3D-printing drones. Under human control, these flying robots work as a team to print 3D materials for building or repairing structures while flying, as the scientists report in the cover story of the latest issue of Nature.
    3D printing is gaining momentum in the construction industry. Both on-site and in the factory, static and mobile robots print materials for use in construction projects, such as steel and concrete structures.
    A new approach to 3D printing — led in its development by Imperial College London and Empa, the Swiss Federal Laboratories for Materials Science and Technology — uses flying robots, known as drones, that employ collective building methods inspired by natural builders like bees and wasps.
    The system, called Aerial Additive Manufacturing (Aerial-AM), involves a fleet of drones working together from a single blueprint.
    It consists of BuilDrones, which deposit materials during flight, and quality-controlling ScanDrones, which continually measure the BuilDrones’ output and inform their next manufacturing steps.
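The print-and-check cycle described above can be sketched in a few lines of Python. This is purely illustrative: the layer heights, the deposition bias, and the simple proportional correction are invented for the sketch, not taken from the Aerial-AM system.

```python
# Toy model of the Aerial-AM loop: a BuilDrone deposits each layer, a
# ScanDrone measures the deviation from the blueprint, and the measured
# error informs the next deposition. All numbers are hypothetical.

BLUEPRINT_LAYER_HEIGHT = 10.0   # target height per layer, arbitrary units

def build_layer(target: float, bias: float) -> float:
    """BuilDrone deposits a layer; 'bias' models systematic under-extrusion."""
    return target + bias

def scan_error(deposited: float, target: float) -> float:
    """ScanDrone measures how far the printed layer is from the blueprint."""
    return deposited - target

correction, bias = 0.0, -0.8    # the drone consistently under-deposits
heights = []
for layer in range(3):
    deposited = build_layer(BLUEPRINT_LAYER_HEIGHT + correction, bias)
    err = scan_error(deposited, BLUEPRINT_LAYER_HEIGHT)
    correction -= err           # feed the measurement into the next step
    heights.append(deposited)

print(heights)                  # deviation shrinks after the first layer
```

After the first under-deposited layer, the scan feedback compensates and subsequent layers land on the blueprint height, which is the role the ScanDrones play in the real system.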
    The researchers say that, in contrast to alternative methods, in-flight 3D printing opens the door to on-site manufacturing and building in difficult-to-access or dangerous locations, such as post-disaster relief construction and work on tall buildings or infrastructure.

  • Key element for a scalable quantum computer

    Millions of quantum bits are required for quantum computers to prove useful in practical applications. Scalability is one of the greatest challenges in the development of future devices. One problem is that the qubits have to sit very close to each other on the chip in order to couple them. Researchers at Forschungszentrum Jülich and RWTH Aachen University have now come a significant step closer to solving the problem: they succeeded in transferring electrons, the carriers of quantum information, over several micrometres on a quantum chip. Their “quantum bus” could be the key component for mastering the leap to millions of qubits.
    Quantum computers have the potential to vastly exceed the capabilities of conventional computers for certain tasks. But there is still a long way to go before they can help to solve real-world problems. Many applications require quantum processors with millions of quantum bits; today’s prototypes offer only a few of these computing units.
    “Currently, each individual qubit is connected via several signal lines to control units about the size of a cupboard. That still works for a few qubits. But it no longer makes sense if you want to put millions of qubits on the chip, because that’s necessary for quantum error correction,” says Dr. Lars Schreiber from the JARA Institute for Quantum Information at Forschungszentrum Jülich and RWTH Aachen University.
    At some point, the number of signal lines becomes a bottleneck. The lines take up too much space compared to the size of the tiny qubits. And a quantum chip cannot have millions of inputs and outputs — a modern classical chip only contains about 2000 of these. Together with colleagues at Forschungszentrum Jülich and RWTH Aachen University, Schreiber has been conducting research for several years to find a solution to this problem.
    Their overall goal is to integrate parts of the control electronics directly on the chip. The approach is based on so-called semiconductor spin qubits made of silicon and germanium. This type of qubit is comparatively tiny. The manufacturing processes largely match those of conventional silicon processors. This is considered to be advantageous when it comes to realising very many qubits. But first, some fundamental barriers have to be overcome.
    “The natural entanglement that is caused by the proximity of the particles alone is limited to a very small range, about 100 nanometres. To couple the qubits, they currently have to be placed very close to each other. There is simply no space for additional control electronics that we would like to install there,” says Schreiber.

  • Artificial intelligence tools quickly detect signs of injection drug use in patients' health records

    An automated process that combines natural language processing and machine learning identified people who inject drugs (PWID) in electronic health records more quickly and accurately than current methods that rely on manual record reviews.
    Currently, people who inject drugs are identified through International Classification of Diseases (ICD) codes that healthcare providers enter in patients’ electronic health records, or that trained human coders extract from clinical notes while reviewing them for billing purposes. But there is no specific ICD code for injection drug use, so providers and coders must rely on combinations of non-specific codes as proxies to identify PWIDs — a slow approach that can lead to inaccuracies.
    The researchers manually reviewed 1,000 records, from 2003-2014, of people admitted to Veterans Administration hospitals with Staphylococcus aureus bacteremia, a common infection that develops when the bacteria enter openings in the skin, such as those at injection sites. They then developed and trained algorithms using natural language processing and machine learning and compared them against 11 proxy combinations of ICD codes for identifying PWIDs.
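As a rough illustration of why free-text notes carry signal that ICD proxy codes miss, here is a minimal Python sketch. The terms, codes, and note below are hypothetical, and the study's actual models were far more sophisticated than this keyword score.

```python
# Illustrative sketch (not the study's model): flagging injection drug use
# from free-text notes versus a proxy lookup over non-specific ICD codes.

INJECTION_TERMS = {"inject", "injects", "injected", "ivdu", "needle", "syringe"}

def note_flags_pwid(note: str, threshold: int = 1) -> bool:
    """Count injection-related terms in a note; flag if count >= threshold."""
    tokens = note.lower().replace(",", " ").replace(".", " ").split()
    return sum(tok in INJECTION_TERMS for tok in tokens) >= threshold

# The proxy approach: flag only when a combination of non-specific codes
# co-occurs (e.g. an opioid-use code plus a skin infection). Codes invented.
PROXY_COMBOS = [{"F11.20", "L02.91"}]

def codes_flag_pwid(codes: set[str]) -> bool:
    return any(combo <= codes for combo in PROXY_COMBOS)

note = "Patient reports she injects heroin daily; needle marks on forearm."
print(note_flags_pwid(note))        # True: the text mentions injection
print(codes_flag_pwid({"F11.20"}))  # False: the proxy combination is incomplete
```

The note is caught directly from its wording, while the code-based proxy misses the case because only one of the required codes was recorded — the kind of gap the NLP approach is meant to close.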
    Limitations to the study include potentially poor documentation by providers. Also, the dataset spans 2003 to 2014, but the injection drug use epidemic has since shifted from prescription opioids and heroin to synthetic opioids like fentanyl, which the algorithm may miss because the dataset it was trained on contains few examples of that drug. Finally, the findings may not be applicable to other circumstances given that they are based entirely on data from the Veterans Administration.
    Use of this artificial intelligence model significantly speeds up the process of identifying PWIDs, which could improve clinical decision making, health services research, and administrative surveillance.
    “By using natural language processing and machine learning, we could identify people who inject drugs in thousands of notes in a matter of minutes compared to several weeks that it would take a manual reviewer to do this,” said lead author Dr. David Goodman-Meza, assistant professor of medicine in the division of infectious diseases at the David Geffen School of Medicine at UCLA. “This would allow health systems to identify PWIDs to better allocate resources like syringe services programs and substance use and mental health treatment for people who use drugs.”
    The study’s other researchers are Dr. Amber Tang, Dr. Matthew Bidwell Goetz, Steven Shoptaw, and Alex Bui of UCLA; Dr. Michihiko Goto of University of Iowa and Iowa City VA Medical Center; Dr. Babak Aryanfar of VA Greater Los Angeles Healthcare System; Sergio Vazquez of Dartmouth College; and Dr. Adam Gordon of University of Utah and VA Salt Lake City Health Care System. Goodman-Meza and Goetz also have appointments with VA Greater Los Angeles Healthcare System.
    Story Source:
    Materials provided by University of California – Los Angeles Health Sciences.

  • Smart microrobots walk autonomously with electronic 'brains'

    Cornell University researchers have installed electronic “brains” on solar-powered robots that are 100 to 250 micrometers in size — smaller than an ant’s head — so that they can walk autonomously without being externally controlled.
    While Cornell researchers and others have previously developed microscopic machines that can crawl, swim, walk and fold themselves up, there were always “strings” attached; to generate motion, wires were used to provide electrical current or laser beams had to be focused directly onto specific locations on the robots.
    “Before, we literally had to manipulate these ‘strings’ in order to get any kind of response from the robot,” said Itai Cohen, professor of physics. “But now that we have these brains on board, it’s like taking the strings off the marionette. It’s like when Pinocchio gains consciousness.”
    The innovation sets the stage for a new generation of microscopic devices that can track bacteria, sniff out chemicals, destroy pollutants, conduct microsurgery and scrub the plaque out of arteries.
    The project brought together researchers from the labs of Cohen, Alyosha Molnar, associate professor of electrical and computer engineering; and Paul McEuen, professor of physical science, all co-senior authors on the paper. The lead author is postdoctoral researcher Michael Reynolds.
    The team’s paper, “Microscopic Robots with Onboard Digital Control,” published Sept. 21 in Science Robotics.
    The “brain” in the new robots is a complementary metal-oxide-semiconductor (CMOS) clock circuit that contains a thousand transistors, plus an array of diodes, resistors and capacitors. The integrated CMOS circuit generates a signal that produces a series of phase-shifted square wave frequencies that in turn set the gait of the robot. The robot legs are platinum-based actuators. Both the circuit and the legs are powered by photovoltaics.
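The gait-setting idea (phase-shifted square waves, one per actuator) can be illustrated with a small Python sketch. The period and phase values here are invented for the example, not taken from the paper's circuit.

```python
# Toy model of the CMOS clock's output: each leg is driven by a square wave,
# and the fixed phase offsets between the waves define the walking gait.

def square_wave(tick: int, period: int, phase: int) -> int:
    """1 during the first half of the (phase-shifted) period, else 0."""
    return 1 if (tick + phase) % period < period // 2 else 0

PERIOD = 8             # clock ticks per gait cycle (hypothetical)
PHASES = [0, 2, 4, 6]  # one offset per leg, a quarter-cycle apart

for tick in range(PERIOD):
    drive = [square_wave(tick, PERIOD, p) for p in PHASES]
    print(tick, drive)  # which legs' actuators are driven this tick
```

Over one cycle the high phase sweeps across the four legs in sequence, which is the essence of how a fixed set of phase offsets encodes a repeating gait.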
    “Eventually, the ability to communicate a command will allow us to give the robot instructions, and the internal brain will figure out how to carry them out,” Cohen said. “Then we’re having a conversation with the robot. The robot might tell us something about its environment, and then we might react by telling it, ‘OK, go over there and try to suss out what’s happening.'”
    The new robots are approximately 10,000 times smaller than macroscale robots that feature onboard CMOS electronics, and they can walk at speeds faster than 10 micrometers per second.
    The fabrication process that Reynolds designed, basically customizing foundry-built electronics, has resulted in a platform that can enable other researchers to outfit microscopic robots with their own apps — from chemical detectors to photovoltaic “eyes” that help robots navigate by sensing changes in light.
    “What this lets you imagine is really complex, highly functional microscopic robots that have a high degree of programmability, integrated with not only actuators, but also sensors,” Reynolds said. “We’re excited about the applications in medicine — something that could move around in tissue and identify good cells and kill bad cells — and in environmental remediation, like if you had a robot that knew how to break down pollutants or sense a dangerous chemical and get rid of it.”
    Video: https://youtu.be/bCjnekohBAY
    Story Source:
    Materials provided by Cornell University. Original written by David Nutt, courtesy of the Cornell Chronicle.

  • People who distrust fellow humans show greater trust in artificial intelligence

    A person’s distrust in humans predicts they will have more trust in artificial intelligence’s ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media.
    “We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”
    The study, published in the journal New Media & Society, also found that “power users,” who are experienced users of information technology, had the opposite tendency: they trusted the AI moderators less because they believe that machines lack the ability to detect the nuances of human language.
    The study found that individual differences such as distrust of others and power usage predict whether users will invoke positive or negative characteristics of machines when faced with an AI-based system for content moderation, which will ultimately influence their trust toward the system. The researchers suggest that personalizing interfaces based on individual differences can positively alter user experience. The type of content moderation in the study involves monitoring social media posts for problematic content like hate speech and suicidal ideation.
    “One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us,” said Maria D. Molina, an assistant professor of communication arts and sciences at Michigan State University, and the first author of this paper. “This study may offer a solution to that problem by suggesting that for people who hold negative stereotypes of AI for content moderation, it is important to reinforce human involvement when making a determination. On the other hand, for people with positive stereotypes of machines, we may reinforce the strength of the machine by highlighting elements like the accuracy of AI.”
    The study also found users with conservative political ideology were more likely to trust AI-powered moderation. Molina and coauthor Sundar, who also co-directs Penn State’s Media Effects Research Laboratory, said this may stem from a distrust in mainstream media and social media companies.
    The researchers recruited 676 participants from the United States. The participants were told they were helping test a content moderating system that was in development. They were given definitions of hate speech and suicidal ideation, followed by one of four different social media posts. The posts were either flagged for fitting those definitions or not flagged. The participants were also told if the decision to flag the post or not was made by AI, a human or a combination of both.
    The demonstration was followed by a questionnaire that asked the participants about their individual differences. Differences included their tendency to distrust others, political ideology, experience with technology and trust in AI.
    “We are bombarded with so much problematic content, from misinformation to hate speech,” Molina said. “But, at the end of the day, it’s about how we can help users calibrate their trust toward AI due to the actual attributes of the technology, rather than being swayed by those individual differences.”
    Molina and Sundar say their results may help shape future acceptance of AI. By creating systems customized to the user, designers could alleviate skepticism and distrust, and build appropriate reliance on AI.
    “A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process.”
    Story Source:
    Materials provided by Penn State. Original written by Jonathan McVerry.

  • Artificial soft surface autonomously mimics shapes of nature

    Engineers at Duke University have developed a scalable soft surface that can continuously reshape itself to mimic objects in nature. Relying on electromagnetic actuation, mechanical modeling and machine learning to form new configurations, the surface can even learn to adapt to hindrances such as broken elements, unexpected constraints or changing environments.
    The research appears online September 21 in the journal Nature.
    “We’re motivated by the idea of controlling material properties or mechanical behaviors of an engineered object on the fly, which could be useful for applications like soft robotics, augmented reality, biomimetic materials, and subject-specific wearables,” said Xiaoyue Ni, assistant professor of mechanical engineering and materials science at Duke. “We are focusing on engineering the shape of matter that hasn’t been predetermined, which is a pretty tall task to achieve, especially for soft materials.”
    Previous work on morphing matter, according to Ni, hasn’t typically been programmable; it’s been programmed instead. That is, soft surfaces equipped with designed active elements can shift between a few predetermined shapes, like a piece of origami, in response to triggers such as light or heat. In contrast, Ni and her laboratory wanted to create something much more controllable that could morph and reconfigure, as often as desired, into any physically possible shape.
    To create such a surface, the researchers started by laying out a grid of snake-like beams made of a thin layer of gold encapsulated by a thin polymer layer. The individual beams are just eight micrometers thick — about the thickness of a cotton fiber — and less than a millimeter wide. The lightness of the beams allows magnetic forces to easily and rapidly deform them.
    To generate local forces, the surface is placed in a weak static magnetic field. Voltage changes create a complex but easily predictable electrical current along the gold grid, driving its out-of-plane displacement.

  • Artificial intelligence used to uncover the cellular origins of Alzheimer's disease and other cognitive disorders

    Mount Sinai researchers have used novel artificial intelligence methods to examine structural and cellular features of human brain tissues to help determine the causes of Alzheimer’s disease and other related disorders. The research team found that studying the causes of cognitive impairment by using an unbiased AI-based method — as opposed to traditional markers such as amyloid plaques — revealed unexpected microscopic abnormalities that can predict the presence of cognitive impairment. These findings were published in the journal Acta Neuropathologica Communications on September 20.
    “AI represents an entirely new paradigm for studying dementia and will have a transformative effect on research into complex brain diseases, especially Alzheimer’s disease,” said co-corresponding author John Crary, MD, PhD, Professor of Pathology, Molecular and Cell-Based Medicine, Neuroscience, and Artificial Intelligence and Human Health, at the Icahn School of Medicine at Mount Sinai. “The deep learning approach was applied to the prediction of cognitive impairment, a challenging problem for which no current human-performed histopathologic diagnostic tool exists.”
    The Mount Sinai team identified and analyzed the underlying architecture and cellular features of two regions in the brain, the medial temporal lobe and frontal cortex. In an effort to improve the standard of postmortem brain assessment to identify signs of diseases, the researchers used a weakly supervised deep learning algorithm to examine slide images of human brain autopsy tissues from a group of more than 700 elderly donors to predict the presence or absence of cognitive impairment. The weakly supervised deep learning approach is able to handle noisy, limited, or imprecise sources to provide signals for labeling large amounts of training data in a supervised learning setting. This deep learning model was used to pinpoint a reduction in Luxol fast blue staining, which is used to quantify the amount of myelin, the protective layer around brain nerves. The machine learning models identified a signal for cognitive impairment that was associated with decreasing amounts of myelin staining; scattered in a non-uniform pattern across the tissue; and focused in the white matter, which affects learning and brain functions. The two sets of models trained and used by the researchers were able to predict the presence of cognitive impairment with an accuracy that was better than random guessing.
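The weakly supervised setup can be illustrated schematically: only a slide-level label (impaired or not) is available, so per-tile scores from a model are pooled into a single slide-level prediction. The sketch below is a hypothetical toy, with invented scores, and is not the study's actual model.

```python
# Toy sketch of weak supervision on slide images: the model scores many
# tiles per slide, and the tile scores are pooled into one slide label.

def slide_prediction(tile_scores: list[float], threshold: float = 0.5) -> int:
    """Aggregate tile-level scores into a slide-level label.
    Max-pooling: one strongly abnormal region is enough to flag the slide."""
    return 1 if max(tile_scores) >= threshold else 0

# Hypothetical per-tile scores, e.g. reflecting reduced Luxol fast blue
# staining intensity in each region of the slide.
healthy_slide  = [0.05, 0.10, 0.08, 0.12]
impaired_slide = [0.07, 0.62, 0.15, 0.09]  # one focal abnormality

print(slide_prediction(healthy_slide))    # 0: no tile crosses the threshold
print(slide_prediction(impaired_slide))   # 1: a single abnormal tile suffices
```

Max-pooling is one common aggregation choice in weakly supervised pathology pipelines; it matches the observation above that the cognitive-impairment signal was scattered non-uniformly across the tissue rather than spread evenly.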
    In their analysis, the researchers believe the diminished staining intensity in particular areas of the brain identified by AI may serve as a scalable platform to evaluate the presence of brain impairment in other associated diseases. The methodology lays the groundwork for future studies, which could include deploying larger scale artificial intelligence models as well as further dissection of the algorithms to increase their predictive accuracy and reliability. The team said, ultimately, the goal of this neuropathologic research program is to develop better tools for diagnosis and treatment of people suffering from Alzheimer’s disease and related disorders.
    “Leveraging AI allows us to look at exponentially more disease relevant features, a powerful approach when applied to a complex system like the human brain,” said co-corresponding author Kurt W. Farrell, PhD, Assistant Professor of Pathology, Molecular and Cell-Based Medicine, Neuroscience, and Artificial Intelligence and Human Health, at Icahn Mount Sinai. “It is critical to perform further interpretability research in the areas of neuropathology and artificial intelligence, so that advances in deep learning can be translated to improve diagnostic and treatment approaches for Alzheimer’s disease and related disorders in a safe and effective manner.”
    Lead author Andrew McKenzie, MD, PhD, Co-Chief Resident for Research in the Department of Psychiatry at Icahn Mount Sinai, added: “Interpretation analysis was able to identify some, but not all, of the signals that the artificial intelligence models used to make predictions about cognitive impairment. As a result, additional challenges remain for deploying and interpreting these powerful deep learning models in the neuropathology domain.”
    Researchers from the University of Texas Health Science Center in San Antonio, Texas, Newcastle University in Tyne, United Kingdom, Boston University School of Medicine in Boston, and UT Southwestern Medical Center in Dallas also contributed to this research. The study was supported by funding from the National Institute of Neurological Disorders and Stroke, the National Institute on Aging, and the Tau Consortium by the Rainwater Charitable Foundation.