More stories

  • How the brain develops: A new way to shed light on cognition

    A new study introduces a neurocomputational model of the human brain that could shed light on how the brain develops complex cognitive abilities and advance research on neural artificial intelligence. Published Sept. 19, the study was carried out by an international group of researchers from the Institut Pasteur and Sorbonne Université in Paris, the CHU Sainte-Justine, Mila — Quebec Artificial Intelligence Institute, and Université de Montréal.
    The model, which made the cover of the journal Proceedings of the National Academy of Sciences (PNAS), describes neural development over three hierarchical levels of information processing: the sensorimotor level explores how the brain’s inner activity learns patterns from perception and associates them with action; the cognitive level examines how the brain contextually combines those patterns; and the conscious level considers how the brain dissociates from the outside world and manipulates learned patterns (via memory) that are no longer accessible to perception.
    The team’s research gives clues into the core mechanisms underlying cognition thanks to the model’s focus on the interplay between two fundamental types of learning: Hebbian learning, which is associated with statistical regularity (i.e., repetition) — or, as neuropsychologist Donald Hebb put it, “neurons that fire together, wire together” — and reinforcement learning, which is associated with reward and the dopamine neurotransmitter.
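    The press materials include no code, but the contrast between the two learning rules can be sketched in a few lines. The NumPy snippet below is a rough illustration only: a plain Hebbian update driven by co-activation, next to a reward-modulated variant standing in for reinforcement learning. The layer sizes, learning rates, and reward signal are invented for illustration and are not taken from the authors’ model.
```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and learning rates -- not the authors' parameters.
n_pre, n_post = 8, 4
W = rng.normal(0.0, 0.1, size=(n_post, n_pre))

def hebbian_update(W, pre, post, eta=0.01):
    """'Neurons that fire together, wire together': strengthen each weight
    in proportion to the co-activation of its pre- and post-synaptic units."""
    return W + eta * np.outer(post, pre)

def reward_modulated_update(W, pre, post, reward, eta=0.05):
    """Reinforcement-style variant: the same co-activation term, but gated
    by a global reward signal, loosely analogous to dopamine."""
    return W + eta * reward * np.outer(post, pre)

# One illustrative step: an input pattern, the resulting activity, a reward.
pre = rng.random(n_pre)
post = np.tanh(W @ pre)
W = hebbian_update(W, pre, post)                       # local statistical regularity
W = reward_modulated_update(W, pre, post, reward=1.0)  # global, reward-driven change
```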
    The model solves three tasks of increasing complexity across those levels, from visual recognition to cognitive manipulation of conscious percepts. Each time, the team introduced a new core mechanism to enable it to progress.
    The results highlight two fundamental mechanisms for the multilevel development of cognitive abilities in biological neural networks: synaptic epigenesis, with Hebbian learning at the local scale and reinforcement learning at the global scale; and self-organized dynamics, through spontaneous activity and a balanced excitatory/inhibitory ratio of neurons.
    “Our model demonstrates how the neuro-AI convergence highlights biological mechanisms and cognitive architectures that can fuel the development of the next generation of artificial intelligence and even ultimately lead to artificial consciousness,” said team member Guillaume Dumas, an assistant professor of computational psychiatry at UdeM and a principal investigator at the CHU Sainte-Justine Research Centre.
    Reaching this milestone may require integrating the social dimension of cognition, he added. The researchers are now looking at integrating biological and social dimensions at play in human cognition. The team has already pioneered the first simulation of two whole brains in interaction.
    Anchoring future computational models in biological and social realities will not only continue to shed light on the core mechanisms underlying cognition, the team believes, but will also provide a unique bridge from artificial intelligence to the only known system with advanced social consciousness: the human brain.
    Story Source:
    Materials provided by University of Montreal.

  • Did my computer say it best?

    With autocorrect and auto-generated email responses, algorithms offer plenty of assistance to help people express themselves.
    But new research from the University of Georgia shows that people who relied on algorithms for assistance with language-related, creative tasks didn’t improve their performance and were more likely to trust low-quality advice.
    Aaron Schecter, an assistant professor in management information systems at the Terry College of Business, had his study “Human preferences toward algorithmic advice in a word association task” published this month in Scientific Reports. His co-authors are Nina Lauharatanahirun, a biobehavioral health assistant professor at Pennsylvania State University, and recent Terry College Ph.D. graduate and current Northeastern University assistant professor Eric Bogert.
    The paper is the second in the team’s investigation into individual trust in advice generated by algorithms. In an April 2021 paper, the team found people were more reliant on algorithmic advice in counting tasks than on advice purportedly given by other participants.
    This study aimed to test if people deferred to a computer’s advice when tackling more creative and language-dependent tasks. The team found participants were 92.3% more likely to use advice attributed to an algorithm than to take advice attributed to people.
    “This task did not require the same type of thinking (as the counting task in the prior study) but in fact we saw the same biases,” Schecter said. “They were still going to use the algorithm’s answer and feel good about it, even though it’s not helping them do any better.”
    Using an algorithm during word association

  • Mathematics enables scientists to understand organization within a cell's nucleus

    Science fiction writer Arthur C. Clarke’s third law says “Any sufficiently advanced technology is indistinguishable from magic.”
    Indika Rajapakse, Ph.D., is a believer. The engineer and mathematician now finds himself a biologist. And he believes the beauty of blending these three disciplines is crucial to unraveling how cells work.
    His latest development is a new mathematical technique to begin to understand how a cell’s nucleus is organized. The technique, which Rajapakse and collaborators tested on several types of cells, revealed what the researchers termed self-sustaining transcription clusters, a subset of proteins that play a key role in maintaining cell identity.
    They hope this understanding will expose vulnerabilities that can be targeted to reprogram a cell to stop cancer or other diseases.
    “More and more cancer biologists think genome organization plays a huge role in understanding uncontrollable cell division and whether we can reprogram a cancer cell. That means we need to understand more detail about what’s happening in the nucleus,” said Rajapakse, associate professor of computational medicine and bioinformatics, mathematics, and biomedical engineering at the University of Michigan. He is also a member of the U-M Rogel Cancer Center.
    Rajapakse is senior author on the paper, published in Nature Communications. The project was led by a trio of graduate students with an interdisciplinary team of researchers.
    The team improved upon an older technology for examining chromatin, called Hi-C, which maps which pieces of the genome are close together. It can identify chromosome translocations, like those that occur in some cancers. Its limitation, however, is that it sees only pairs of adjacent genomic regions.
    The new technology, called Pore-C, uses much more data to visualize how all of the pieces within a cell’s nucleus interact. The researchers used a mathematical technique called hypergraphs. Think: three-dimensional Venn diagram. It allows researchers to see not just pairs of genomic regions that interact but the totality of the complex and overlapping genome-wide relationships within the cells.
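    As a loose illustration of the difference between the pairwise contacts that Hi-C records and the multi-way contacts that Pore-C captures as hyperedges, consider the short Python sketch below. The region names and reads are invented for illustration and are not data from the study.
```python
from itertools import combinations
from collections import Counter

# Invented example reads: each read lists all genomic regions found in
# contact at the same time (a multi-way contact).
multiway_reads = [
    {"chr1:A", "chr1:B", "chr5:C"},            # three-way contact
    {"chr1:A", "chr5:C"},                      # pairwise contact
    {"chr1:B", "chr5:C", "chr7:D", "chr7:E"},  # four-way contact
]

# Hi-C-style view: collapse every read into pairs, losing higher-order structure.
pairwise_contacts = Counter(
    frozenset(pair)
    for read in multiway_reads
    for pair in combinations(sorted(read), 2)
)

# Hypergraph view: keep each read intact as a single hyperedge.
hyperedges = Counter(frozenset(read) for read in multiway_reads)

print(len(pairwise_contacts), "distinct pairs vs.", len(hyperedges), "hyperedges")
```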
    “This multi-dimensional relationship we can understand unambiguously. It gives us a more detailed way to understand organizational principles inside the nucleus. If you understand that, you can also understand where these organizational principles deviate, like in cancer,” Rajapakse said. “This is like putting three worlds together — technology, math and biology — to study more detail inside the nucleus.”
    The researchers tested their approach on neonatal fibroblasts, biopsied adult fibroblasts and B lymphocytes. They identified organizations of transcription clusters specific to each cell type. They also found what they called self-sustaining transcription clusters, which serve as key transcriptional signatures for a cell type.
    Rajapakse describes this as the first step in a bigger picture.
    “My goal is to construct this kind of picture over the cell cycle to understand how a cell goes through different stages. Cancer is uncontrollable cell division,” Rajapakse said. “If we understand how a normal cell changes over time, we can start to examine controlled and uncontrolled systems and find ways to reprogram that system.”

  • Silicon nanopillars for quantum communication

    Around the world, specialists are working on implementing quantum information technologies. One important path involves light: Looking ahead, single light packages, also known as light quanta or photons, could transmit data that is both encoded and effectively tap-proof. To this end, new photon sources are required that emit single light quanta in a controlled fashion — and on demand. Only recently has it been discovered that silicon can host single-photon sources with properties suitable for quantum communication. So far, however, no one has known how to integrate the sources into modern photonic circuits. For the first time, a team led by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has now presented an appropriate production technology using silicon nanopillars: a chemical etching method followed by ion bombardment.
    “Silicon and single-photon sources in the telecommunication field have long been the missing link in speeding up the development of quantum communication by optical fibers. Now we have created the necessary preconditions for it,” explains Dr. Yonder Berencén of HZDR’s Institute of Ion Beam Physics and Materials Research, who led the current study. Although single-photon sources have been fabricated in materials like diamond, only silicon-based sources generate light particles at the right wavelength to propagate in optical fibers — a considerable advantage for practical purposes.
    The researchers achieved this technical breakthrough by choosing a wet etching technique — what is known as MacEtch (metal-assisted chemical etching) — rather than conventional dry etching techniques for processing the silicon on a chip. These standard methods, which allow the creation of silicon photonic structures, use highly reactive ions. These ions induce light-emitting defects through radiation damage in the silicon; because the defects are randomly distributed, they overlay the desired optical signal with noise. Metal-assisted chemical etching, on the other hand, does not generate these defects — instead, the material is etched away chemically under a kind of metallic mask.
    The goal: single photon sources compatible with the fiber-optic network
    Using the MacEtch method, the researchers initially fabricated the simplest form of a potential light wave-guiding structure: silicon nanopillars on a chip. They then bombarded the finished nanopillars with carbon ions, just as they would with a massive silicon block, and thus generated photon sources embedded in the pillars. Employing the new etching technique means the size, spacing, and surface density of the nanopillars can be precisely controlled and adjusted to be compatible with modern photonic circuits. Per square millimeter of chip, thousands of silicon nanopillars conduct and bundle the light from the sources by directing it vertically through the pillars.
    The researchers varied the diameter of the pillars because “we had hoped this would mean we could perform single defect creation on thin pillars and actually generate a single photon source per pillar,” explains Berencén. “It didn’t work perfectly the first time. By comparison, even for the thinnest pillars, the dose of our carbon bombardment was too high. But now it’s just a short step to single photon sources.”
    A step on which the team is already working intensively because the new technique has also unleashed something of a race for future applications. “My dream is to integrate all the elementary building blocks, from a single photon source via photonic elements through to a single photon detector, on one single chip and then connect lots of chips via commercial optical fibers to form a modular quantum network,” says Berencén.
    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf.

  • Can eyes on self-driving cars reduce accidents?

    Robotic eyes on autonomous vehicles could improve pedestrian safety, according to a new study at the University of Tokyo. Participants played out scenarios in virtual reality (VR) and had to decide whether to cross a road in front of a moving vehicle or not. When that vehicle was fitted with robotic eyes, which either looked at the pedestrian (registering their presence) or away (not registering them), the participants were able to make safer or more efficient choices.
    Self-driving vehicles seem to be just around the corner. Whether they’ll be delivering packages, plowing fields or busing kids to school, a lot of research is underway to turn a once futuristic idea into reality.
    While the main concern for many is the practical side of creating vehicles that can autonomously navigate the world, researchers at the University of Tokyo have turned their attention to a more “human” concern of self-driving technology. “There is not enough investigation into the interaction between self-driving cars and the people around them, such as pedestrians. So, we need more investigation and effort into such interaction to bring safety and assurance to society regarding self-driving cars,” said Professor Takeo Igarashi from the Graduate School of Information Science and Technology.
    One key difference with self-driving vehicles is that the driver may become more of a passenger, so they may not be paying full attention to the road, or there may be nobody at the wheel at all. This makes it difficult for pedestrians to gauge whether a vehicle has registered their presence, as there might be no eye contact or other indications from the people inside it.
    So, how could pedestrians be made aware of when an autonomous vehicle has noticed them and is intending to stop? Like a character from the Pixar movie Cars, a self-driving golf cart was fitted with two large, remote-controlled robotic eyes. The researchers called it the “gazing car.” They wanted to test whether putting moving eyes on the cart would affect riskier pedestrian behavior: in this case, whether people would still cross the road in front of a moving vehicle when in a hurry.
    The team set up four scenarios, two where the cart had eyes and two without. The cart had either noticed the pedestrian and was intending to stop or had not noticed them and was going to keep driving. When the cart had eyes, the eyes would either be looking towards the pedestrian (going to stop) or looking away (not going to stop).

  • New software platform advances understanding of the surface finish of manufactured components

    Scientists from the University of Freiburg, Germany, and the University of Pittsburgh have developed a software platform that facilitates and standardizes the analysis of surfaces. The contact.engineering platform enables users to create a digital twin of a surface and thus to help predict, for example, how quickly it wears out, how well it conducts heat, or how well it adheres to other materials. The team included Michael Röttger from the Department of Microsystems Engineering, Lars Pastewka and Antoine Sanner from the Department of Microsystems Engineering and the University of Freiburg’s Cluster of Excellence livMatS, and Tevis Jacobs from the Department of Mechanical Engineering and Materials Science at the University of Pittsburgh’s Swanson School of Engineering.
    Topography influences material properties
    All engineered materials have surface roughness, even if they appear smooth when seen with the naked eye. Viewed under a microscope, they resemble the surfaces of a mountain landscape. “It is of particular interest, in both industrial applications and scientific research, to have precise knowledge of a surface’s topography, as this influences properties like the adhesion, friction, wettability, and durability of the material,” says Pastewka.
    Saving time and cost in manufacturing
    Manufacturers must carefully control the surface finish of, for example, automobiles or medical devices to ensure proper performance of the final application. At present, the optimal surface finish is found primarily by a trial-and-error process, where a series of components are made with different machining practices and then their properties are tested to determine which is best. This is a slow and costly process. “It would be far more efficient to use scientific models to design the optimal topography for a given application, but this is not possible at present,” says Jacobs. “It would require scientific advancements in linking topography to properties, and technical advancements in measuring and describing a surface.”
    The contact.engineering platform facilitates both of these advances and standardizes the procedure: It automatically integrates the various data from different tools, corrects measurement errors, and uses the data to create a digital twin of the surface. The platform calculates statistical metrics and applies mechanical models to the surfaces, helping to predict behavior. “The users can thus identify which topographical features influence which properties. This allows a systematic optimization of finishing processes,” says Pastewka.
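    The platform’s own pipeline is not reproduced here, but the kind of statistical metric it computes can be illustrated with a short sketch. The Python example below calculates two standard roughness measures, RMS height and RMS slope, from a synthetic one-dimensional height profile; the profile and grid spacing are invented stand-ins for real measurement data.
```python
import numpy as np

# Synthetic 1D height profile standing in for a measured topography
# (real data would come from, e.g., a profilometer or AFM scan).
rng = np.random.default_rng(1)
dx = 1e-6                                          # grid spacing in metres (assumed)
heights = 1e-8 * np.cumsum(rng.normal(size=1024))  # random rough line scan

heights -= heights.mean()                  # measure heights relative to the mean line

rms_height = np.sqrt(np.mean(heights**2))  # classic "Rq" roughness
rms_slope = np.sqrt(np.mean(np.gradient(heights, dx)**2))

print(f"RMS height: {rms_height:.3e} m, RMS slope: {rms_slope:.3e}")
```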
    Facilitating open science
    The software platform also serves as a database on which users can share their measurements with colleagues or collaborators. Users can also choose to make their surface measurements available to the public. When they publish the data, a digital object identifier (DOI) is generated that can be referenced in scientific publications.
    “We are continually developing contact.engineering and would like to add even more analysis tools, for example for the chemical composition of surfaces,” says Pastewka. “The goal is to provide users with a digital twin that is as comprehensive as possible. That’s why we also welcome suggestions for improvements to the software platform from users in industry and research.”
    The development of contact.engineering was funded by the European Research Council and the US National Science Foundation, as well as by the University of Freiburg’s Cluster of Excellence Living, Adaptive, and Energy-autonomous Materials Systems (livMatS).
    Story Source:
    Materials provided by University of Pittsburgh.

  • Machine learning generates 3D model from 2D pictures

    Researchers from the McKelvey School of Engineering at Washington University in St. Louis have developed a machine learning algorithm that can create a continuous 3D model of cells from a partial set of 2D images that were taken using the same standard microscopy tools found in many labs today.
    Their findings were published Sept. 16 in the journal Nature Machine Intelligence.
    “We train the model on the set of digital images to obtain a continuous representation,” said Ulugbek Kamilov, assistant professor of electrical and systems engineering and of computer science and engineering. “Now, I can show it any way I want. I can zoom in smoothly and there is no pixelation.”
    The key to this work was the use of a neural field network, a particular kind of machine learning system that learns a mapping from spatial coordinates to the corresponding physical quantities. When the training is complete, researchers can point to any coordinate and the model can provide the image value at that location.
    A particular strength of neural field networks is that they do not need to be trained on copious amounts of similar data. Instead, as long as there is a sufficient number of 2D images of the sample, the network can represent it in its entirety, inside and out.
    The image used to train the network is just like any other microscopy image. In essence, a cell is lit from below; the light travels through it and is captured on the other side, creating an image.
    “Because I have some views of the cell, I can use those images to train the model,” Kamilov said. This is done by feeding the model information about a point in the sample where the image captured some of the internal structure of the cell.
    Then the network takes its best shot at recreating that structure. If the output is wrong, the network is tweaked. If it’s correct, that pathway is reinforced. Once the predictions match real-world measurements, the network is ready to fill in parts of the cell that were not captured by the original 2D images.
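    The paper’s exact architecture and training setup are not described in this summary, but the basic idea of a neural field, a small network that maps spatial coordinates to image values, can be sketched as follows. This PyTorch example fits such a network to a synthetic 2D intensity pattern; the layer sizes, target function, and training schedule are illustrative assumptions rather than the authors’ implementation.
```python
import torch
import torch.nn as nn

# A neural field: an MLP mapping (x, y) coordinates to an intensity value.
# Sizes and settings below are illustrative, not the published model.
class NeuralField(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):          # coords: (N, 2) points in [0, 1]^2
        return self.net(coords)

# Synthetic "measurements": intensities sampled at known coordinates.
coords = torch.rand(2048, 2)
target = torch.sin(6.28 * coords[:, :1]) * torch.cos(6.28 * coords[:, 1:])

model = NeuralField()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(500):                    # tweak the network until it matches the data
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(coords), target)
    loss.backward()
    optimizer.step()

# Once trained, the field can be queried at any coordinate -- including
# locations that were never measured -- giving a continuous representation.
with torch.no_grad():
    print(model(torch.tensor([[0.123, 0.456]])))
```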
    The model now contains a full, continuous representation of the cell — there’s no need to save a data-heavy image file because the cell can always be recreated by the neural field network.
    And, Kamilov said, not only is the model an easy-to-store, true representation of the cell, but also, in many ways, it’s more useful than the real thing.
    “I can put any coordinate in and generate that view,” he said. “Or I can generate entirely new views from different angles.” He can use the model to spin a cell like a top or zoom in for a closer look; use the model to do other numerical tasks; or even feed it into another algorithm.
    Story Source:
    Materials provided by Washington University in St. Louis.

  • A smartphone's camera and flash could help people measure blood oxygen levels at home

    First, pause and take a deep breath.
    When we breathe in, our lungs fill with oxygen, which is distributed to our red blood cells for transportation throughout our bodies. Our bodies need a lot of oxygen to function, and healthy people have at least 95% oxygen saturation all the time.
    Conditions like asthma or COVID-19 make it harder for the body to absorb oxygen from the lungs. This can cause oxygen saturation to drop to 90% or below, an indication that medical attention is needed.
    In a clinic, doctors monitor oxygen saturation using pulse oximeters — those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep an eye on COVID symptoms, for example.
    In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. This is the lowest value that pulse oximeters should be able to measure, as recommended by the U.S. Food and Drug Administration.
    The technique involves participants placing a finger over the camera and flash of a smartphone; a deep-learning algorithm then deciphers blood oxygen levels from the camera readings. When the team delivered a controlled mixture of nitrogen and oxygen to six subjects to artificially lower their blood oxygen levels, the smartphone correctly predicted whether the subject had low blood oxygen levels 80% of the time.
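    The researchers’ processing pipeline is not detailed here, but the raw signal such a method starts from can be sketched briefly. The Python example below uses OpenCV to average each video frame’s red channel over time, yielding a pulse-waveform trace of the kind a trained network could then map to an oxygen-saturation estimate; the file name and the model call are hypothetical placeholders.
```python
import cv2
import numpy as np

# Rough sketch only: extract a pulse-waveform (PPG-style) trace from a video
# of a fingertip pressed over the camera with the flash on. The file name is
# a placeholder; "trained_model" stands in for whatever network would map
# such signals to blood oxygen levels.
cap = cv2.VideoCapture("fingertip_video.mp4")

red_means = []
while True:
    ok, frame = cap.read()                    # OpenCV frames are BGR
    if not ok:
        break
    red_means.append(frame[:, :, 2].mean())   # mean red-channel intensity
cap.release()

signal = np.asarray(red_means)
if signal.size:
    signal = (signal - signal.mean()) / (signal.std() + 1e-8)  # normalize the trace
    # A real system would feed windows of this trace into a trained network:
    # spo2_estimate = trained_model(signal_window)   # hypothetical call
print(f"extracted {signal.size} samples of the pulse waveform")
```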