More stories

  • Smart microrobots walk autonomously with electronic 'brains'

    Cornell University researchers have installed electronic “brains” on solar-powered robots that are 100 to 250 micrometers in size — smaller than an ant’s head — so that they can walk autonomously without being externally controlled.
    While Cornell researchers and others have previously developed microscopic machines that can crawl, swim, walk and fold themselves up, there were always “strings” attached; to generate motion, wires were used to provide electrical current or laser beams had to be focused directly onto specific locations on the robots.
    “Before, we literally had to manipulate these ‘strings’ in order to get any kind of response from the robot,” said Itai Cohen, professor of physics. “But now that we have these brains on board, it’s like taking the strings off the marionette. It’s like when Pinocchio gains consciousness.”
    The innovation sets the stage for a new generation of microscopic devices that can track bacteria, sniff out chemicals, destroy pollutants, conduct microsurgery and scrub the plaque out of arteries.
    The project brought together researchers from the labs of Cohen; Alyosha Molnar, associate professor of electrical and computer engineering; and Paul McEuen, professor of physical science, all co-senior authors on the paper. The lead author is postdoctoral researcher Michael Reynolds.
    The team’s paper, “Microscopic Robots with Onboard Digital Control,” was published Sept. 21 in Science Robotics.
    The “brain” in the new robots is a complementary metal-oxide-semiconductor (CMOS) clock circuit that contains a thousand transistors, plus an array of diodes, resistors and capacitors. The integrated CMOS circuit generates a series of phase-shifted square-wave signals that in turn set the gait of the robot. The robot legs are platinum-based actuators, and both the circuit and the legs are powered by photovoltaics.
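    The role of the clock circuit can be pictured with a short simulation: several square waves at the same frequency but offset in phase determine when each group of leg actuators is driven, and the staggered offsets are what produce a walking sequence. The sketch below is a minimal illustration of that idea in Python; the frequency, the number of drive channels and the equal phase spacing are assumptions for illustration, not parameters from the paper.

    ```python
    import numpy as np

    def gait_signals(freq_hz=1.0, n_channels=4, duration_s=2.0, fs=1000):
        """Generate phase-shifted square waves, one channel per group of leg actuators.

        Each channel is offset by 1/n_channels of a period, so the legs are driven
        in sequence rather than all at once, which is what yields a walking gait.
        """
        t = np.arange(0.0, duration_s, 1.0 / fs)
        channels = []
        for k in range(n_channels):
            phase = k / n_channels                        # fractional phase offset
            cycle = (t * freq_hz + phase) % 1.0           # position within the current period
            channels.append((cycle < 0.5).astype(float))  # 1 = actuator driven, 0 = relaxed
        return t, np.stack(channels)

    t, drive = gait_signals()
    print(drive.shape)  # (4, 2000): four phase-shifted drive signals over two seconds
    ```

    In the robots themselves this role is played by the onboard CMOS circuit running off the photovoltaics, not by software.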
    “Eventually, the ability to communicate a command will allow us to give the robot instructions, and the internal brain will figure out how to carry them out,” Cohen said. “Then we’re having a conversation with the robot. The robot might tell us something about its environment, and then we might react by telling it, ‘OK, go over there and try to suss out what’s happening.’”
    The new robots are approximately 10,000 times smaller than macroscale robots that feature onboard CMOS electronics, and they can walk at speeds faster than 10 micrometers per second.
    The fabrication process that Reynolds designed, basically customizing foundry-built electronics, has resulted in a platform that can enable other researchers to outfit microscopic robots with their own apps — from chemical detectors to photovoltaic “eyes” that help robots navigate by sensing changes in light.
    “What this lets you imagine is really complex, highly functional microscopic robots that have a high degree of programmability, integrated with not only actuators, but also sensors,” Reynolds said. “We’re excited about the applications in medicine — something that could move around in tissue and identify good cells and kill bad cells — and in environmental remediation, like if you had a robot that knew how to break down pollutants or sense a dangerous chemical and get rid of it.”
    Video: https://youtu.be/bCjnekohBAY
    Story Source:
    Materials provided by Cornell University. Original written by David Nutt, courtesy of the Cornell Chronicle. Note: Content may be edited for style and length.

  • People who distrust fellow humans show greater trust in artificial intelligence

    A person’s distrust in humans predicts they will have more trust in artificial intelligence’s ability to moderate content online, according to a recently published study. The findings, the researchers say, have practical implications for both designers and users of AI tools in social media.
    “We found a systematic pattern of individuals who have less trust in other humans showing greater trust in AI’s classification,” said S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State. “Based on our analysis, this seems to be due to the users invoking the idea that machines are accurate, objective and free from ideological bias.”
    The study, published in the journal New Media & Society, also found that “power users,” who are experienced users of information technology, had the opposite tendency: they trusted the AI moderators less because they believed that machines lack the ability to detect the nuances of human language.
    The study found that individual differences such as distrust of others and power usage predict whether users will invoke positive or negative characteristics of machines when faced with an AI-based system for content moderation, which will ultimately influence their trust toward the system. The researchers suggest that personalizing interfaces based on individual differences can positively alter user experience. The type of content moderation in the study involves monitoring social media posts for problematic content like hate speech and suicidal ideation.
    “One of the reasons why some may be hesitant to trust content moderation technology is that we are used to freely expressing our opinions online. We feel like content moderation may take that away from us,” said Maria D. Molina, an assistant professor of communication arts and sciences at Michigan State University, and the first author of this paper. “This study may offer a solution to that problem by suggesting that for people who hold negative stereotypes of AI for content moderation, it is important to reinforce human involvement when making a determination. On the other hand, for people with positive stereotypes of machines, we may reinforce the strength of the machine by highlighting elements like the accuracy of AI.”
    The study also found users with conservative political ideology were more likely to trust AI-powered moderation. Molina and coauthor Sundar, who also co-directs Penn State’s Media Effects Research Laboratory, said this may stem from a distrust in mainstream media and social media companies.
    The researchers recruited 676 participants from the United States. The participants were told they were helping test a content moderating system that was in development. They were given definitions of hate speech and suicidal ideation, followed by one of four different social media posts. The posts were either flagged for fitting those definitions or not flagged. The participants were also told if the decision to flag the post or not was made by AI, a human or a combination of both.
    The demonstration was followed by a questionnaire that asked the participants about their individual differences. Differences included their tendency to distrust others, political ideology, experience with technology and trust in AI.
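    One way to picture the analysis of such a design is a regression of trust in the moderation decision on the attributed source and its interactions with the measured individual differences; this is an illustration only, not the authors' actual model or data. In the sketch below the data are simulated and the variable names are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 676  # number of participants reported in the study

    # Simulated stand-ins for the measured variables (not the real data).
    df = pd.DataFrame({
        "distrust_humans": rng.normal(size=n),
        "power_usage": rng.normal(size=n),
        "source_ai": rng.integers(0, 2, size=n),  # 1 = decision attributed to AI
    })
    # Toy outcome wired so the reported pattern appears in the simulation:
    # distrust of humans raises trust in AI moderation, power usage lowers it.
    df["trust_in_decision"] = (
        0.4 * df.source_ai * df.distrust_humans
        - 0.3 * df.source_ai * df.power_usage
        + rng.normal(scale=1.0, size=n)
    )

    model = smf.ols(
        "trust_in_decision ~ source_ai * (distrust_humans + power_usage)", data=df
    ).fit()
    print(model.params.round(2))  # interaction terms carry the individual-difference effects
    ```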
    “We are bombarded with so much problematic content, from misinformation to hate speech,” Molina said. “But, at the end of the day, it’s about how we can help users calibrate their trust toward AI due to the actual attributes of the technology, rather than being swayed by those individual differences.”
    Molina and Sundar say their results may help shape future acceptance of AI. By creating systems customized to the user, designers could alleviate skepticism and distrust and build appropriate reliance on AI.
    “A major practical implication of the study is to figure out communication and design strategies for helping users calibrate their trust in automated systems,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “Certain groups of people who tend to have too much faith in AI technology should be alerted to its limitations and those who do not believe in its ability to moderate content should be fully informed about the extent of human involvement in the process.”
    Story Source:
    Materials provided by Penn State. Original written by Jonathan McVerry. Note: Content may be edited for style and length.

  • Artificial soft surface autonomously mimics shapes of nature

    Engineers at Duke University have developed a scalable soft surface that can continuously reshape itself to mimic objects in nature. Relying on electromagnetic actuation, mechanical modeling and machine learning to form new configurations, the surface can even learn to adapt to hindrances such as broken elements, unexpected constraints or changing environments.
    The research appears online September 21 in the journal Nature.
    “We’re motivated by the idea of controlling material properties or mechanical behaviors of an engineered object on the fly, which could be useful for applications like soft robotics, augmented reality, biomimetic materials, and subject-specific wearables,” said Xiaoyue Ni, assistant professor of mechanical engineering and materials science at Duke. “We are focusing on engineering the shape of matter that hasn’t been predetermined, which is a pretty tall task to achieve, especially for soft materials.”
    Previous work on morphing matter, according to Ni, hasn’t typically been programmable; it’s been programmed instead. That is, soft surfaces equipped with designed active elements can shift between a few predetermined shapes, like a piece of origami, in response to light, heat or other stimuli. In contrast, Ni and her laboratory wanted to create something much more controllable that could morph and reconfigure as often as needed into any physically possible shape.
    To create such a surface, the researchers started by laying out a grid of snake-like beams made of a thin layer of gold encapsulated by a thin polymer layer. The individual beams are just eight micrometers thick — about the thickness of a cotton fiber — and less than a millimeter wide. The lightness of the beams allows magnetic forces to easily and rapidly deform them.
    To generate local forces, the surface is placed in a low-level static magnetic field. Voltage changes create a complex but easily predictable electrical current along the gold grid, driving the out-of-plane displacement of the grid.
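    The actuation principle here is essentially the Lorentz force on a current-carrying conductor: a segment of the grid carrying current I along direction L in a static magnetic field B experiences a force F = I L × B, which points out of the plane when both the current and the field lie in the plane. A minimal sketch, using placeholder numbers rather than values from the paper:

    ```python
    import numpy as np

    def lorentz_force(current_a, segment_vec_m, b_field_t):
        """Force on a straight current-carrying segment: F = I (L x B)."""
        return current_a * np.cross(segment_vec_m, b_field_t)

    # Placeholder values: 1 mA along a 0.5 mm in-plane beam segment (x direction)
    # in a 10 mT static field that is also in-plane (y direction).
    F = lorentz_force(1e-3,
                      np.array([0.5e-3, 0.0, 0.0]),   # segment vector, metres
                      np.array([0.0, 10e-3, 0.0]))    # magnetic field, tesla
    print(F)  # [0, 0, 5e-9] N: a small out-of-plane push that deflects the beam
    ```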

  • Artificial intelligence used to uncover the cellular origins of Alzheimer's disease and other cognitive disorders

    Mount Sinai researchers have used novel artificial intelligence methods to examine structural and cellular features of human brain tissues to help determine the causes of Alzheimer’s disease and other related disorders. The research team found that studying the causes of cognitive impairment by using an unbiased AI-based method — as opposed to traditional markers such as amyloid plaques — revealed unexpected microscopic abnormalities that can predict the presence of cognitive impairment. These findings were published in the journal Acta Neuropathologica Communications on September 20.
    “AI represents an entirely new paradigm for studying dementia and will have a transformative effect on research into complex brain diseases, especially Alzheimer’s disease,” said co-corresponding author John Crary, MD, PhD, Professor of Pathology, Molecular and Cell-Based Medicine, Neuroscience, and Artificial Intelligence and Human Health, at the Icahn School of Medicine at Mount Sinai. “The deep learning approach was applied to the prediction of cognitive impairment, a challenging problem for which no current human-performed histopathologic diagnostic tool exists.”
    The Mount Sinai team identified and analyzed the underlying architecture and cellular features of two regions in the brain, the medial temporal lobe and frontal cortex. In an effort to improve the standard of postmortem brain assessment to identify signs of diseases, the researchers used a weakly supervised deep learning algorithm to examine slide images of human brain autopsy tissues from a group of more than 700 elderly donors to predict the presence or absence of cognitive impairment. The weakly supervised deep learning approach is able to handle noisy, limited, or imprecise sources to provide signals for labeling large amounts of training data in a supervised learning setting. This deep learning model was used to pinpoint a reduction in Luxol fast blue staining, which is used to quantify the amount of myelin, the protective layer around brain nerves. The machine learning models identified a signal for cognitive impairment that was associated with decreasing amounts of myelin staining; scattered in a non-uniform pattern across the tissue; and focused in the white matter, which affects learning and brain functions. The two sets of models trained and used by the researchers were able to predict the presence of cognitive impairment with an accuracy that was better than random guessing.
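    The article does not spell out the network architecture, but a common way to set up weakly supervised learning on whole-slide images, where only a slide-level label ("impaired" or "not impaired") is available, is attention-based multiple-instance learning over tile embeddings. The PyTorch sketch below illustrates that general pattern as an assumption for illustration; it is not the authors' published model.

    ```python
    import torch
    import torch.nn as nn

    class AttentionMIL(nn.Module):
        """Slide-level classifier trained from tile embeddings and a single slide label.

        Each slide is a 'bag' of tile feature vectors; an attention layer learns which
        tiles matter, and their weighted average is classified.
        """
        def __init__(self, feat_dim=512, hidden=128):
            super().__init__()
            self.attention = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
            )
            self.classifier = nn.Linear(feat_dim, 1)

        def forward(self, tiles):  # tiles: (n_tiles, feat_dim)
            weights = torch.softmax(self.attention(tiles), dim=0)  # (n_tiles, 1)
            slide_embedding = (weights * tiles).sum(dim=0)         # (feat_dim,)
            return self.classifier(slide_embedding), weights

    model = AttentionMIL()
    logit, attention = model(torch.randn(1000, 512))  # 1000 tile embeddings from one slide
    print(torch.sigmoid(logit))  # predicted probability of cognitive impairment
    ```

    The attention weights also indicate which tiles drove a prediction, which is the kind of signal that interpretability analyses of such models examine.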
    In their analysis, the researchers believe the diminished staining intensity in particular areas of the brain identified by AI may serve as a scalable platform to evaluate the presence of brain impairment in other associated diseases. The methodology lays the groundwork for future studies, which could include deploying larger scale artificial intelligence models as well as further dissection of the algorithms to increase their predictive accuracy and reliability. The team said, ultimately, the goal of this neuropathologic research program is to develop better tools for diagnosis and treatment of people suffering from Alzheimer’s disease and related disorders.
    “Leveraging AI allows us to look at exponentially more disease relevant features, a powerful approach when applied to a complex system like the human brain,” said co-corresponding author Kurt W. Farrell, PhD, Assistant Professor of Pathology, Molecular and Cell-Based Medicine, Neuroscience, and Artificial Intelligence and Human Health, at Icahn Mount Sinai. “It is critical to perform further interpretability research in the areas of neuropathology and artificial intelligence, so that advances in deep learning can be translated to improve diagnostic and treatment approaches for Alzheimer’s disease and related disorders in a safe and effective manner.”
    Lead author Andrew McKenzie, MD, PhD, Co-Chief Resident for Research in the Department of Psychiatry at Icahn Mount Sinai, added: “Interpretation analysis was able to identify some, but not all, of the signals that the artificial intelligence models used to make predictions about cognitive impairment. As a result, additional challenges remain for deploying and interpreting these powerful deep learning models in the neuropathology domain.”
    Researchers from the University of Texas Health Science Center in San Antonio, Texas; Newcastle University in Newcastle upon Tyne, United Kingdom; Boston University School of Medicine in Boston; and UT Southwestern Medical Center in Dallas also contributed to this research. The study was supported by funding from the National Institute of Neurological Disorders and Stroke, the National Institute on Aging, and the Tau Consortium by the Rainwater Charitable Foundation.

  • How the brain develops: A new way to shed light on cognition

    A new study introduces a neurocomputational model of the human brain that could shed light on how the brain develops complex cognitive abilities and advance neural artificial intelligence research. Published Sept. 19, the study was carried out by an international group of researchers from the Institut Pasteur and Sorbonne Université in Paris, the CHU Sainte-Justine, Mila — Quebec Artificial Intelligence Institute, and Université de Montréal.
    The model, which made the cover of the journal Proceedings of the National Academy of Sciences (PNAS), describes neural development over three hierarchical levels of information processing: the first, sensorimotor level explores how the brain’s inner activity learns patterns from perception and associates them with action; the cognitive level examines how the brain contextually combines those patterns; and lastly, the conscious level considers how the brain dissociates from the outside world and manipulates learned patterns (via memory) that are no longer accessible to perception.
    The team’s research gives clues into the core mechanisms underlying cognition thanks to the model’s focus on the interplay between two fundamental types of learning: Hebbian learning, which is associated with statistical regularity (i.e., repetition) — or, as neuropsychologist Donald Hebb put it, “neurons that fire together, wire together” — and reinforcement learning, which is associated with reward and the dopamine neurotransmitter.
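    The two kinds of learning the model combines can be caricatured in a few lines of code: a local Hebbian update that strengthens connections between co-active neurons, and a global update in which the same correlation term is gated by a scalar reward signal. The learning rates and the exact reward-modulated form below are illustrative assumptions, not the equations used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    w = np.zeros((4, 4))  # synaptic weights from 4 presynaptic to 4 postsynaptic neurons

    def hebbian_step(w, pre, post, lr=0.01):
        """Local rule: strengthen synapses whose pre- and post-neurons fire together."""
        return w + lr * np.outer(post, pre)

    def reward_modulated_step(w, pre, post, reward, lr=0.01):
        """Global rule: the same correlation term, gated by a scalar reward signal
        (a stand-in for dopamine-like reinforcement)."""
        return w + lr * reward * np.outer(post, pre)

    pre = rng.integers(0, 2, size=4).astype(float)   # binary firing pattern, presynaptic
    post = rng.integers(0, 2, size=4).astype(float)  # binary firing pattern, postsynaptic
    w = hebbian_step(w, pre, post)
    w = reward_modulated_step(w, pre, post, reward=+1.0)
    print(w)
    ```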
    The model solves three tasks of increasing complexity across those levels, from visual recognition to cognitive manipulation of conscious percepts. Each time, the team introduced a new core mechanism to enable it to progress.
    The results highlight two fundamental mechanisms for the multilevel development of cognitive abilities in biological neural networks: synaptic epigenesis, with Hebbian learning at the local scale and reinforcement learning at the global scale; and self-organized dynamics, through spontaneous activity and a balanced excitatory/inhibitory ratio of neurons.
    “Our model demonstrates how the neuro-AI convergence highlights biological mechanisms and cognitive architectures that can fuel the development of the next generation of artificial intelligence and even ultimately lead to artificial consciousness,” said team member Guillaume Dumas, an assistant professor of computational psychiatry at UdeM and a principal investigator at the CHU Sainte-Justine Research Centre.
    Reaching this milestone may require integrating the social dimension of cognition, he added. The researchers are now looking at integrating biological and social dimensions at play in human cognition. The team has already pioneered the first simulation of two whole brains in interaction.
    Anchoring future computational models in biological and social realities will not only continue to shed light on the core mechanisms underlying cognition, the team believes, but will also help provide a unique bridge between artificial intelligence and the only known system with advanced social consciousness: the human brain.
    Story Source:
    Materials provided by University of Montreal. Note: Content may be edited for style and length.

  • Did my computer say it best?

    With autocorrect and auto-generated email responses, algorithms offer plenty of assistance to help people express themselves.
    But new research from the University of Georgia shows that people who relied on algorithms for assistance with language-related, creative tasks didn’t improve their performance and were more likely to trust low-quality advice.
    Aaron Schecter, an assistant professor in management information systems at the Terry College of Business, had his study “Human preferences toward algorithmic advice in a word association task” published this month in Scientific Reports. His co-authors are Nina Lauharatanahirun, a biobehavioral health assistant professor at Pennsylvania State University, and recent Terry College Ph.D. graduate and current Northeastern University assistant professor Eric Bogert.
    The paper is the second in the team’s investigation into individual trust in advice generated by algorithms. In an April 2021 paper, the team found people were more reliant on algorithmic advice in counting tasks than on advice purportedly given by other participants.
    This study aimed to test if people deferred to a computer’s advice when tackling more creative and language-dependent tasks. The team found participants were 92.3% more likely to use advice attributed to an algorithm than to take advice attributed to people.
    “This task did not require the same type of thinking (as the counting task in the prior study) but in fact we saw the same biases,” Schecter said. “They were still going to use the algorithm’s answer and feel good about it, even though it’s not helping them do any better.”

  • Mathematics enable scientists to understand organization within a cell's nucleus

    Science fiction writer Arthur C. Clarke’s third law says “Any sufficiently advanced technology is indistinguishable from magic.”
    Indika Rajapakse, Ph.D., is a believer. The engineer and mathematician now finds himself a biologist. And he believes the beauty of blending these three disciplines is crucial to unraveling how cells work.
    His latest development is a new mathematical technique to begin to understand how a cell’s nucleus is organized. The technique, which Rajapakse and collaborators tested on several types of cells, revealed what the researchers termed self-sustaining transcription clusters, a subset of proteins that play a key role in maintaining cell identity.
    They hope this understanding will expose vulnerabilities that can be targeted to reprogram a cell to stop cancer or other diseases.
    “More and more cancer biologists think genome organization plays a huge role in understanding uncontrollable cell division and whether we can reprogram a cancer cell. That means we need to understand more detail about what’s happening in the nucleus,” said Rajapakse, associate professor of computational medicine and bioinformatics, mathematics, and biomedical engineering at the University of Michigan. He is also a member of the U-M Rogel Cancer Center.
    Rajapakse is senior author on the paper, published in Nature Communications. The project was led by a trio of graduate students with an interdisciplinary team of researchers.
    The team improved upon an older technology for examining chromatin, called Hi-C, which maps which pieces of the genome are close together. It can identify chromosome translocations, like those that occur in some cancers. Its limitation, however, is that it captures these genomic contacts only as pairs of regions.
    The new technology, called Pore-C, uses much more data to visualize how all of the pieces within a cell’s nucleus interact. The researchers analyzed the data with a mathematical structure called a hypergraph. Think: a three-dimensional Venn diagram. It allows researchers to see not just pairs of genomic regions that interact but the totality of the complex and overlapping genome-wide relationships within the cells.
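    A hypergraph of this kind has a simple computational form: each Pore-C read is a hyperedge, that is, the set of genomic bins it touches, and a collection of reads can be stored as a bins-by-reads incidence matrix. The toy example below uses invented bin names purely to illustrate the data structure.

    ```python
    import numpy as np

    # Toy multi-way contacts: each Pore-C-style read touches several genomic bins at once.
    reads = [
        {"bin3", "bin7", "bin12"},            # one hyperedge = one multi-way contact
        {"bin3", "bin12"},
        {"bin5", "bin7", "bin12", "bin20"},
    ]

    bins = sorted(set().union(*reads))
    incidence = np.zeros((len(bins), len(reads)), dtype=int)  # rows: bins, columns: hyperedges
    for j, read in enumerate(reads):
        for b in read:
            incidence[bins.index(b), j] = 1

    print(bins)
    print(incidence)
    # Unlike a pairwise Hi-C contact matrix, each column records an entire
    # multi-way relationship, not just one pair of interacting regions.
    ```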
    “This multi-dimensional relationship we can understand unambiguously. It gives us a more detailed way to understand organizational principles inside the nucleus. If you understand that, you can also understand where these organizational principles deviate, like in cancer,” Rajapakse said. “This is like putting three worlds together — technology, math and biology — to study more detail inside the nucleus.”
    The researchers tested their approach on neonatal fibroblasts, biopsied adult fibroblasts and B lymphocytes. They identified organizations of transcription clusters specific to each cell type. They also found what they called self-sustaining transcription clusters, which serve as key transcriptional signatures for a cell type.
    Rajapakse describes this as the first step in a bigger picture.
    “My goal is to construct this kind of picture over the cell cycle to understand how a cell goes through different stages. Cancer is uncontrollable cell division,” Rajapakse said. “If we understand how a normal cell changes over time, we can start to examine controlled and uncontrolled systems and find ways to reprogram that system.”

  • Silicon nanopillars for quantum communication

    Around the world, specialists are working on implementing quantum information technologies. One important path involves light: Looking ahead, single light packets, also known as light quanta or photons, could transmit data that is both encoded and effectively tap-proof. To this end, new photon sources are required that emit single light quanta in a controlled fashion — and on demand. Only recently has it been discovered that silicon can host sources of single photons with properties suitable for quantum communication. So far, however, no one has known how to integrate the sources into modern photonic circuits. For the first time, a team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has now presented an appropriate production technology using silicon nanopillars: a chemical etching method followed by ion bombardment.
    “Silicon and single-photon sources in the telecommunication field have long been the missing link in speeding up the development of quantum communication by optical fibers. Now we have created the necessary preconditions for it,” explains Dr. Yonder Berencén of HZDR’s Institute of Ion Beam Physics and Materials Research, who led the current study. Although single-photon sources have been fabricated in materials like diamond, only silicon-based sources generate light particles at the right wavelength to propagate in optical fibers — a considerable advantage for practical purposes.
    The researchers achieved this technical breakthrough by choosing a wet etching technique known as MacEtch (metal-assisted chemical etching) rather than the conventional dry etching techniques for processing the silicon on a chip. These standard methods, which allow the creation of silicon photonic structures, use highly reactive ions that induce light-emitting defects through radiation damage in the silicon. These defects, however, are randomly distributed and overlay the desired optical signal with noise. Metal-assisted chemical etching, on the other hand, does not generate these defects — instead, the material is etched away chemically under a kind of metallic mask.
    The goal: single photon sources compatible with the fiber-optic network
    Using the MacEtch method, the researchers initially fabricated the simplest form of a potential light wave-guiding structure: silicon nanopillars on a chip. They then bombarded the finished nanopillars with carbon ions, just as they would a solid silicon block, and thus generated photon sources embedded in the pillars. Employing the new etching technique means the size, spacing and surface density of the nanopillars can be precisely controlled and adjusted to be compatible with modern photonic circuits. Per square millimeter of chip, thousands of silicon nanopillars conduct and bundle the light from the sources by directing it vertically through the pillars.
    The researchers varied the diameter of the pillars because “we had hoped this would mean we could perform single-defect creation on thin pillars and actually generate a single photon source per pillar,” explains Berencén. “It didn’t work perfectly the first time. Even for the thinnest pillars, the dose of our carbon bombardment was still too high. But now it’s just a short step to single photon sources.”
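    Whether a pillar ends up hosting exactly one emitter is, to a first approximation, Poisson statistics: the expected number of defects in a pillar scales with the ion fluence, the pillar’s cross-sectional area and a defect-creation yield, so the probability of exactly one defect peaks at a particular dose. The numbers and the unit yield below are placeholders for illustration, not parameters from the study.

    ```python
    import math

    def single_defect_probability(fluence_per_cm2, pillar_diameter_nm, yield_per_ion=1.0):
        """Poisson estimate of creating exactly one defect in one pillar.

        mean = fluence * pillar cross-sectional area * defect-creation yield.
        """
        radius_cm = (pillar_diameter_nm * 1e-7) / 2.0
        area_cm2 = math.pi * radius_cm ** 2
        mean = fluence_per_cm2 * area_cm2 * yield_per_ion
        return mean * math.exp(-mean)  # P(N = 1) for a Poisson-distributed count

    # Placeholder numbers: a 100 nm diameter pillar at two different carbon-ion fluences.
    for fluence in (1e11, 1e13):
        print(fluence, round(single_defect_probability(fluence, 100), 3))
    ```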
    A step on which the team is already working intensively because the new technique has also unleashed something of a race for future applications. “My dream is to integrate all the elementary building blocks, from a single photon source via photonic elements through to a single photon detector, on one single chip and then connect lots of chips via commercial optical fibers to form a modular quantum network,” says Berencén.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.