More stories

  •

    Brain builds and uses maps of social networks and physical space in the same way

    Even in these socially distanced days, we keep in our heads a map of our relationships with other people: family, friends and coworkers, and how they relate to each other. New research from the Center for Mind and Brain at the University of California, Davis shows that we put together this social map in much the same way that we assemble a map of physical places and things.
    “When we’re learning to navigate the real world, we don’t start off by seeing a whole map,” said Erie Boorman, assistant professor at the Center for Mind and Brain and UC Davis Department of Psychology. “We sample the world and reconstruct it.” The work is published July 22 in the journal Neuron.
    Research has shown that animals navigate using a representation of the outside world in their brain. Whether rats in a maze or people in a new city, they build this internal map piece by piece, then stitch the pieces together. That work earned the Nobel Prize in Physiology or Medicine for John O’Keefe, May-Britt Moser and Edvard Moser in 2014.
    Boorman and UC Davis colleagues Seongmin Park, Douglas Miller and Charan Ranganath, with Hamed Nili at the University of Oxford, wondered if our brains represent abstract relationships, such as social networks, in the same way.
    To find out, they gave volunteers pieces of information about two groups of people, ranked by perceived relative competence and popularity. The volunteers were told about relations along only one dimension at a time, between pairs of people who differed by a single rank level: for example, that Alice is more popular than Bob, or that Bob is seen as more competent than Charles.
    The true social hierarchy could be mapped as a two-dimensional grid defined by the dimensions of competence and popularity, but this grid was never shown to the volunteers. They could only infer it by integrating the piecemeal relationships learned between pairs of individuals, one dimension at a time.
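    The inference step described here can be sketched in code: given only adjacent pairwise comparisons on one dimension, transitivity recovers the full ranking. This is a minimal illustrative sketch, not the study's analysis; the `infer_ranking` helper and its closure loop are assumptions for illustration (the names Alice, Bob and Charles come from the example above).

```python
# Illustrative sketch: integrating one-dimensional pairwise comparisons
# into a full ranking by transitive closure. Not the study's method;
# infer_ranking is a hypothetical helper for illustration.

def infer_ranking(pairs):
    """Given (higher, lower) pairs on one dimension, rank each person
    by how many others they transitively outrank."""
    people = {p for pair in pairs for p in pair}
    beats = {p: set() for p in people}
    for hi, lo in pairs:
        beats[hi].add(lo)
    # Transitive closure: if A beats B and B beats C, then A beats C.
    changed = True
    while changed:
        changed = False
        for p in people:
            for q in list(beats[p]):
                extra = beats[q] - beats[p]
                if extra:
                    beats[p] |= extra
                    changed = True
    return {p: len(beats[p]) for p in people}

# Popularity learned only as adjacent pairs, as in the experiment:
popularity = infer_ranking([("Alice", "Bob"), ("Bob", "Charles")])
# Alice transitively outranks Charles even though that pair was never shown.
```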


    The volunteers also learned the relative ranks of a few people across the two groups.
    Assembling a map
    The volunteers were later asked about relationships between new pairs of people in the grid while the researchers used functional magnetic resonance imaging to measure brain activity. Without being prompted, and based only on the pairwise comparisons, the volunteers organized the information into a two-dimensional grid in their brains. This two-dimensional map was present across three brain regions: the hippocampus, entorhinal cortex and ventromedial prefrontal cortex/medial orbitofrontal cortex.
    Based on limited comparisons between the two groups, the volunteers were also able to generalize to the remaining members. For example, if Cynthia from group 1 was more popular than David from group 2, that affected the rank of other members of group 2 relative to group 1.
    The volunteers weren’t told to think about the data in that way, Boorman said. Given only pairwise comparisons, they inferred the remaining hierarchical arrangement of the whole set.


    “If you know how two social networks are related to each other, you can make a good inference about the relationship between two individuals in different social networks before any direct experience,” Park said.
    The study points to a general principle behind how we make decisions based on past experience. Whether we are remembering a route in the physical world, or learning about a set of friends and acquaintances, we start with a template, such as a 2-D topology, and a few landmarks, and fit new data around them.
    “Our results show that our brain organizes knowledge learned from separate experiences in a structural form like a map, which allows us to use past experiences to make a novel decision,” Park said.
    That allows us to quickly adapt to a new situation based on past experience. This may help to explain humans’ remarkable flexibility in generalizing experiences from one task to another, a key challenge in artificial intelligence research.
    “We know a lot about the neural codes for representing physical space,” Boorman said. “It looks like the human brain uses the same codes to organize abstract, non-spatial information as well.”

  •

    Twitter data reveals global communication network

    Twitter mentions reveal distinct community structures that result from individuals’ communication preferences, which are shaped by the physical distance between users and by commonalities such as shared language and history.
    While previous investigations have identified patterns using other data, such as mobile phone usage and Facebook friend connections, research from the New England Complex Systems Institute looks at the collective effect of message transfer in the global community. The group’s results are reported in an article in the journal Chaos, by AIP Publishing.
    The scientists used the mentions mechanism in Twitter data to map the flow of information around the world. A mention in Twitter occurs when a user explicitly includes another @username in their tweet. This is a way to directly communicate with another user but is also a way to retransmit or retweet content.
    The investigators examined Twitter data from December 2013 and divided the world into 8,000 cells, each approximately 100 kilometers wide. A network was built on this lattice, where each node is a precise location and a link, or edge, is the number of Twitter users in one location who are mentioned in another location.
    Twitter is banned in several countries and is known to be more prevalent in countries with higher gross domestic product, so this affects the data. Their results show large regions, such as the U.S. and Europe, are strongly connected inside each region, but they are also weakly connected to other areas.
    “While strong ties keep groups cohesive, weak ties integrate groups at the large scale and are responsible for the spread of information systemwide,” said co-author Leila Hedayatifar.
    The researchers used a computational technique to determine modularity, a value that quantifies distance between communities on a network compared to a random arrangement. They also investigated a quantity known as betweenness centrality, which measures the number of shortest paths through each node. This measure highlights the locations that serve as connectors between many places.
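    The two measures described above can be illustrated on a toy graph with networkx. This is an illustrative sketch with made-up nodes, not the authors' pipeline or the Twitter lattice data: two tight clusters joined by a single weak tie, where modularity optimization recovers the communities and betweenness centrality picks out the connector.

```python
# Illustrative sketch (toy graph, not the Twitter data): community
# detection by modularity optimization, and betweenness centrality to
# find the node that connects the communities.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("a", "c"),      # tight cluster 1
    ("x", "y"), ("y", "z"), ("x", "z"),      # tight cluster 2
    ("c", "bridge"), ("bridge", "x"),        # weak ties via a connector
])

# Greedy modularity maximization groups the two clusters into communities.
communities = greedy_modularity_communities(G)

# Betweenness centrality counts shortest paths through each node; the
# bridge lies on every inter-cluster shortest path, so it scores highest.
bc = nx.betweenness_centrality(G)
connector = max(bc, key=bc.get)
```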
    By optimizing the modularity, the investigators found 16 significant global communities. Three large communities exist in the Americas: an English-speaking region, Central and South American countries, and Brazil in its own group. Multiple communities exist in Europe, Asia and Africa.
    The data can also be analyzed on a finer scale, revealing subcommunities. Strong regional associations exist within countries or even cities. Istanbul, for example, has Twitter conversations that are largely restricted to certain zones within the city.
    The investigators also looked at the effect of common languages, borders and shared history.
    “We found, perhaps surprisingly, that countries that had a common colonizer have a decreased preference of interaction,” Hedayatifar said.
    She suggests hierarchical interactions with the colonizing country might inhibit interactions between former colonies.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  •

    Silver-plated gold nanostars detect early cancer biomarkers

    Biomedical engineers at Duke University have engineered a method for simultaneously detecting the presence of multiple specific microRNAs in RNA extracted from tissue samples without the need for labeling or target amplification. The technique could be used to identify early biomarkers of cancer and other diseases without the need for the elaborate, time-consuming, expensive processes and special laboratory equipment required by current technologies.
    The results appeared online on May 4 in the journal Analyst.
    “The general research focus in my lab has been on the early detection of diseases in people before they even know they’re sick,” said Tuan Vo-Dinh, director of the Fitzpatrick Institute for Photonics and the R. Eugene and Susie E. Goodson Distinguished Professor of Biomedical Engineering at Duke. “And to do that, you need to be able to go upstream, at the genomic level, to look at biomarkers like microRNA.”
    MicroRNAs are short RNA molecules that bind to messenger RNAs and stop them from delivering their instructions to the body’s protein-producing machinery. This can effectively silence certain sections of DNA or regulate gene expression, altering the behavior of certain biological functions. More than 2,000 microRNAs have been discovered in humans that affect development, differentiation, growth and metabolism.
    As researchers have discovered and learned more about these tiny genetic packages, many microRNAs have been linked to the misregulation of biological functions, resulting in diseases ranging from brain tumors to Alzheimer’s. These discoveries have led to an increasing interest in using microRNAs as disease biomarkers and therapeutic targets. Due to the very small amounts of miRNAs present in bodily samples, traditional methods of studying them require genetic-amplification processes such as quantitative reverse transcription PCR (qRT-PCR) and RNA sequencing.
    While these technologies perform admirably in well-equipped laboratories and research studies that can take months or years, they aren’t as well-suited for fast diagnostic results at the clinic or out in the field. To try to bridge this gap in applicability, Vo-Dinh and his colleagues are turning to silver-plated gold nanostars.


    “Gold nanostars have multiple spikes that can act as lightning rods for enhancing electromagnetic waves, which is a unique feature of the particle’s shape,” said Vo-Dinh, who also holds a faculty appointment in Duke chemistry. “Our tiny nanosensors, called ‘inverse molecular sentinels,’ take advantage of this ability to create clear signals of the presence of multiple microRNAs.”
    While the name is a mouthful, the basic idea of the nanosensor design is to get a label molecule to move very close to the star’s spikes when a specific stretch of target RNA is recognized and captured. When a laser is then shined on the triggered sensor, the lightning rod effect of the nanostar tips causes the label molecule to shine extremely brightly, signaling the capture of the target RNA.
    The researchers set this trigger by tethering a label molecule to one of the nanostar’s points with a stretch of DNA. Although it’s built to curl in on itself in a loop, the DNA is held open by an RNA “spacer” that is tailored to bind with the target microRNA being tested for. When that microRNA comes by, it sticks to and removes the spacer, allowing the DNA to curl in on itself in a loop and bring the label molecule in close contact with the nanostar.
    Under laser excitation, that label emits a light called a Raman signal, which is generally very weak. But the shape of the nanostars, combined with a coupling effect between the gold nanostars and their silver coating, amplifies Raman signals several million-fold, making them easier to detect.
    “The Raman signals of label molecules exhibit sharp peaks with very specific colors like spectral fingerprints that make them easily distinguished from one another when detected,” said Vo-Dinh. “Thus we can actually design different sensors for different microRNAs on nanostars, each with label molecules exhibiting their own specific spectral fingerprints. And because the signal is so strong, we can detect each one of these fingerprints independently of each other.”
    In this clinical study, Vo-Dinh and his team collaborated with Katherine Garman, associate professor of medicine, and colleagues at the Duke Cancer Institute to use the new nanosensor platform to demonstrate that they can detect miR-21, a specific microRNA associated with very early stages of esophageal cancer, just as well as other more elaborate state-of-the-art methods. In this case, the use of miR-21 alone is enough to distinguish healthy tissue samples from cancerous samples. For other diseases, however, it might be necessary to detect several other microRNAs to get a reliable diagnosis, which is exactly why the researchers are so excited by the general applicability of their inverse molecular sentinel nanobiosensors.
    “Usually three or four genetic biomarkers might be sufficient to get a good diagnosis, and these types of biomarkers can unmistakably identify each disease,” said Vo-Dinh. “That’s why we’re encouraged by just how strong of a signal our nanostars create without the need of time-consuming target amplification. Our method could provide a diagnostic alternative to histopathology and PCR, thus simplifying the testing process for cancer diagnostics.”
    For more than three years, Vo-Dinh has worked with his colleagues and Duke’s Office of Licensing and Ventures to patent his nanostar-based biosensors. With that patent recently awarded, the researchers are excited to begin testing the limits of their technology’s abilities and exploring technology transfer possibilities with the private sector.
    “Following these encouraging results, we are now very excited to apply this technology to detect colon cancer directly from blood samples in a new NIH-funded project,” said Vo-Dinh. “It’s very challenging to detect early biomarkers of cancer directly in the blood before a tumor even forms, but we have high hopes.”

  •

    Can social unrest, riot dynamics be modeled?

    Episodes of social unrest rippled throughout Chile in 2019 and disrupted the daily routines of many citizens. Researchers specializing in economics, mathematics and physics in Chile and the U.K. banded together to explore the surprising social dynamics people were experiencing.
    To do this, they combined well-known epidemic models with tools from the physics of chaos and interpreted their findings through the lens of social sciences such as economics.
    In the journal Chaos, from AIP Publishing, the team reports that social media is changing the rules of the game, and previously applied epidemic-like models, on their own, may no longer be enough to explain current rioting dynamics. Using epidemiological mathematical models to understand the spread of infectious diseases dates back more than 100 years.
    “In the 1970s, this type of methodology was used to understand the dynamics of riots that occurred in U.S. cities in the 1960s,” said Jocelyn Olivari Narea, co-author and an assistant professor at Adolfo Ibáñez University in Chile. “More recently, it was used to model French rioting events in 2005.”
    From a mathematical point of view, the team’s work is based on the SIR epidemiological model, known for modeling infectious disease spread. This technique separates the population into susceptible, infectious and recovered individuals.
    “Within a rioting context, someone ‘susceptible’ is a potential rioter, an ‘infected individual’ is an active rioter, and a ‘recovered person’ is one that stopped rioting,” explained co-author Katia Vogt-Geisse. “Rioting spreads when effective contact between an active rioter and a potential rioter occurs.”
    They discovered that the SIR model can be recast in the language of Hamiltonian mechanics, the same mathematical framework that underlies Newton’s laws in physics.


    “This allowed us to apply well-known tools of the physics of chaos to show that within the presence of an external force, the dynamics become very rich,” said co-author Sergio Rica Mery. “The external force that we included in the model represents the occasional trigger that increases rioting activity.”
    When including such triggers, the team found the way a sequence of events occurs varies greatly based on the initial number of potential rioters and active rioters.
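    A minimal numerical sketch of this setup, assuming illustrative parameters and trigger times rather than the paper's fitted model: the standard SIR equations are integrated forward in time, with an occasional external trigger that converts potential rioters into active ones.

```python
# Minimal sketch (illustrative parameters, not the paper's fit):
# SIR dynamics where S = potential rioters, I = active rioters,
# R = former rioters, plus an external trigger event.

def simulate(S, I, R, beta, gamma, steps, dt=0.1, triggers=(), boost=0.05):
    """Forward-Euler integration of dS/dt = -beta*S*I,
    dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    activity = []
    for t in range(steps):
        if t in triggers:              # external event sparks new rioters
            moved = min(boost, S)
            S, I = S - moved, I + moved
        new = beta * S * I * dt        # effective rioter/potential-rioter contact
        rec = gamma * I * dt           # rioters who stop rioting
        S, I, R = S - new, I + new - rec, R + rec
        activity.append(I)
    return activity

# One spontaneous wave, then a trigger at step 200 sparks a second burst.
activity = simulate(S=0.99, I=0.01, R=0.0, beta=0.8, gamma=0.3,
                    steps=400, triggers={200})
```

    Because the second burst starts from a depleted pool of potential rioters, it decays rather than growing into a full wave; how the sequence of bursts unfolds depends sensitively on how many potential and active rioters remain when a trigger arrives.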
    “Even the sequence of rioting events can be chaotic,” Rica Mery said. “Rich dynamics reveal the complexity involved in making predictions of rioting activity.”
    The team’s work comes at a timely moment as social unrest is becoming more common — even within the context of the current pandemic.
    “We just saw episodes of rioting in Minnesota due to racial unrest and how it ended up spreading to various locations within the U.S. and even abroad,” Olivari Narea said.
    The team noted it was surprising that the idea of disease spread could be applied so well to the spread of rioting activity, yielding a good fit to rioting activity data.
    “While you might think that the study of disease transmission and problems of a social nature vary greatly, our work shows epidemiological models of the most simple SIR type, enriched by triggers and tools of the physics of chaos, can describe rioting activities well,” Vogt-Geisse said.

    Story Source:
    Materials provided by American Institute of Physics.

  •

    Photon-based processing units enable more complex machine learning

    Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications.
    A paper in the journal Applied Physics Reviews, by AIP Publishing, proposes a new approach to performing the computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs matrix multiplications in parallel, improving the speed and efficiency of current deep learning paradigms.
    In machine learning, neural networks are trained to make decisions and classify unseen data. Once a neural network is trained on data, it can perform inference to recognize and classify objects and patterns and find a signature within the data.
    The photonic TPU stores and processes data in parallel, featuring an electro-optical interconnect, which allows the optical memory to be efficiently read and written and the photonic TPU to interface with other architectures.
    “We found that integrated photonic platforms that incorporate efficient optical memory can perform the same operations as a tensor processing unit, but they consume a fraction of the power, have higher throughput and, when opportunely trained, can be used for performing inference at the speed of light,” said Mario Miscuglio, one of the authors.
    Most neural networks comprise multiple layers of interconnected neurons that aim to mimic the human brain. An efficient way to represent these networks is as a composite function that multiplies matrices and vectors together. This representation allows parallel operations to be performed through architectures specialized in vectorized operations such as matrix multiplication.
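    The representation described above can be made concrete in a few lines. This is a generic illustrative sketch with arbitrary weights, not the paper's hardware: the matrix-vector products below are exactly the operations a tensor core, electronic or photonic, accelerates.

```python
# Sketch of the point in the text: a feed-forward network is a
# composition of matrix-vector products. Weights are arbitrary
# illustrative values, not trained parameters.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # layer 1 weights: 3 inputs -> 4 neurons
W2 = rng.normal(size=(2, 4))   # layer 2 weights: 4 neurons -> 2 outputs

def relu(v):
    # Elementwise nonlinearity between the linear layers.
    return np.maximum(v, 0.0)

def network(x):
    # Two layers = two matrix-vector multiplications, composed.
    return W2 @ relu(W1 @ x)

x = np.array([1.0, -0.5, 2.0])
y = network(x)                 # output vector of shape (2,)
```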
    However, the more intelligent the task and the higher accuracy of the prediction desired, the more complex the network becomes. Such networks demand larger amounts of data for computation and more power to process that data.
    Current digital processors suitable for deep learning, such as graphics processing units or tensor processing units, are limited in performing more complex operations with greater accuracy by the power required to do so and by the slow transmission of electronic data between the processor and the memory.
    The researchers showed that the performance of their TPU could be 2 to 3 orders of magnitude higher than that of an electrical TPU. Photons may also be an ideal match for computing in node-distributed networks and engines performing intelligent tasks with high throughput at the edge of networks, such as 5G. At network edges, data signals may already exist in the form of photons from surveillance cameras, optical sensors and other sources.
    “Photonic specialized processors can save a tremendous amount of energy, improve response time and reduce data center traffic,” said Miscuglio.
    For the end user, that means data is processed much faster, because a large portion of it is preprocessed at the edge, so only a fraction needs to be sent to the cloud or data center.

    Story Source:
    Materials provided by American Institute of Physics.

  •

    Spinal stimulators repurposed to restore touch in lost limb

    Imagine tying your shoes or taking a sip of coffee or cracking an egg but without any feeling in your hand. That’s life for users of even the most advanced prosthetic arms.
    Although it’s possible to simulate touch by stimulating the remaining nerves in the stump after an amputation, such a surgery is highly complex and individualized. But according to a new study from the University of Pittsburgh’s Rehab Neural Engineering Labs, spinal cord stimulators commonly used to relieve chronic pain could provide a straightforward and universal method for adding sensory feedback to a prosthetic arm.
    For this study, published today in eLife, four amputees received spinal stimulators, which, when turned on, create the illusion of sensations in the missing arm.
    “What’s unique about this work is that we’re using devices that are already implanted in 50,000 people a year for pain — physicians in every major medical center across the country know how to do these surgical procedures — and we get similar results to highly specialized devices and procedures,” said study senior author Lee Fisher, Ph.D., assistant professor of physical medicine and rehabilitation, University of Pittsburgh School of Medicine.
    The strings of implanted spinal electrodes, which Fisher describes as about the size and shape of “fat spaghetti noodles,” run along the spinal cord, where they sit slightly to one side, atop the same nerve roots that would normally transmit sensations from the arm. Since it’s a spinal cord implant, even a person with a shoulder-level amputation can use this device.
    Fisher’s team sent electrical pulses through different spots in the implanted electrodes, one at a time, while participants used a tablet to report what they were feeling and where.


    All the participants experienced sensations somewhere on their missing arm or hand, and they indicated the extent of the area affected by drawing on a blank human form. Three participants reported feelings localized to a single finger or part of the palm.
    “I was pretty surprised at how small the areas of these sensations were that people were reporting,” Fisher said. “That’s important because we want to generate sensations only where the prosthetic limb is making contact with objects.”
    When asked to describe not just where but how the stimulation felt, all four participants reported feeling natural sensations, such as touch and pressure, though these feelings often were mixed with decidedly artificial sensations, such as tingling, buzzing or prickling.
    Although some degree of electrode migration is inevitable in the first few days after the leads are implanted, Fisher’s team found that the electrodes, and the sensations they generated, mostly stayed put across the month-long duration of the experiment. That’s important for the ultimate goal of creating a prosthetic arm that provides sensory feedback to the user.
    “Stability of these devices is really critical,” Fisher said. “If the electrodes are moving around, that’s going to change what a person feels when we stimulate.”
    The next big challenges are to design spinal stimulators that can be fully implanted rather than connecting to a stimulator outside the body and to demonstrate that the sensory feedback can help to improve the control of a prosthetic hand during functional tasks like tying shoes or holding an egg without accidentally crushing it. Shrinking the size of the contacts — the parts of the electrode where current comes out — is another priority. That might allow users to experience even more localized sensations.
    “Our goal here wasn’t to develop the final device that someone would use permanently,” Fisher said. “Mostly we wanted to demonstrate the possibility that something like this could work.”

  •

    3D hand-sensing wristband signals future of wearable tech

    In a potential breakthrough in wearable sensing technology, researchers from Cornell University and the University of Wisconsin, Madison, have designed a wrist-mounted device that continuously tracks the entire human hand in 3D.
    The bracelet, called FingerTrak, can sense and translate into 3D the many positions of the human hand, including 20 finger joint positions, using three or four miniature, low-resolution thermal cameras that read contours on the wrist. The device could be used in sign language translation, virtual reality, mobile health, human-robot interaction and other areas, the researchers said.
    “This was a major discovery by our team — that by looking at your wrist contours, the technology could reconstruct in 3D, with keen accuracy, where your fingers are,” said Cheng Zhang, assistant professor of information science and director of Cornell’s new SciFi Lab, where FingerTrak was developed. “It’s the first system to reconstruct your full hand posture based on the contours of the wrist.”
    Past wrist-mounted cameras have been considered too bulky and obtrusive for everyday use, and most could reconstruct only a few discrete hand gestures.
    FingerTrak’s breakthrough is a lightweight bracelet, allowing for free movement. Instead of using cameras to directly capture the position of the fingers, the focus of most prior research, FingerTrak uses a combination of thermal imaging and machine learning to virtually reconstruct the hand. The bracelet’s four miniature, thermal cameras — each about the size of a pea — snap multiple “silhouette” images to form an outline of the hand.
    A deep neural network then stitches these silhouette images together and reconstructs the virtual hand in 3D. Through this method, Zhang and his fellow researchers were able to capture the entire hand pose, even when the hand is holding an object.
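    The silhouette-based input described above might be sketched as follows. This is an illustrative toy with synthetic temperature frames and an assumed skin-temperature threshold, not FingerTrak's actual pipeline: each low-resolution thermal frame is thresholded into a silhouette, and the stacked silhouettes form the input a pose-regression network would consume.

```python
# Illustrative sketch (synthetic data, not FingerTrak's pipeline):
# thermal frames -> binary silhouettes -> stacked network input.
import numpy as np

rng = np.random.default_rng(1)
# Four miniature cameras, each a low-resolution frame in degrees C.
frames = rng.uniform(20.0, 38.0, size=(4, 32, 24))

def to_silhouette(frame, skin_temp=30.0):
    # Pixels warmer than the threshold are treated as hand/wrist contour.
    # The 30 C threshold is an assumption for illustration.
    return (frame > skin_temp).astype(np.float32)

silhouettes = np.stack([to_silhouette(f) for f in frames])  # (4, 32, 24)
features = silhouettes.reshape(-1)  # flattened input for a regressor
# A trained network would map `features` to the 20 finger-joint positions.
```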
    While the technology has a wide range of possible uses, Zhang said the most promising is its potential application in sign language translation.
    “Current sign language translation technology requires the user to either wear a glove or have a camera in the environment, both of which are cumbersome,” he said. “This could really push the current technology into new areas.”
    FingerTrak could also have an impact on health care applications, specifically in monitoring disorders that affect fine-motor skills, said Yin Li, assistant professor of biostatistics and medical informatics at the University of Wisconsin, Madison School of Medicine and Public Health, who contributed to the software behind FingerTrak.
    “How we move our hands and fingers often tells about our health condition,” Li said. “A device like this might be used to better understand how the elderly use their hands in daily life, helping to detect early signs of diseases like Parkinson’s and Alzheimer’s.”
    “FingerTrak: Continuous 3D Hand Pose Tracking by Deep Learning Hand Silhouettes Captured by Miniature Thermal Cameras on Wrist,” was published in the Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies. It also will be presented at the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing, taking place virtually Sept. 12-16.

    Story Source:
    Materials provided by Cornell University. Original written by Louis DiPietro.

  •

    Powerful human-like hands create safer human-robotics interactions

    Need a robot with a soft touch? A team of Michigan State University engineers has designed and developed a novel humanoid hand that may be able to help.
    In industrial settings, robots often are used for tasks that require repetitive grasping and manipulation of objects. The end of a robot where a human hand would be found is known as an end effector or gripper.
    “The novel humanoid hand design is a soft-hard hybrid flexible gripper. It can generate larger grasping force than a traditional pure soft hand, and simultaneously be more stable for accurate manipulation than other counterparts used for heavier objects,” said lead author Changyong Cao, director of the Laboratory for Soft Machines and Electronics at MSU and assistant professor in Packaging, Mechanical Engineering, and Electrical and Computer Engineering.
    This new research, “Soft Humanoid Hands with Large Grasping Force Enabled by Flexible Hybrid Pneumatic Actuators,” is published in Soft Robotics.
    Generally, soft-hand grippers — which are used primarily in settings where an object may be fragile, light and irregularly shaped — present several disadvantages: sharp surfaces, poor stability in grasping unbalanced loads and relatively weak grasping force for handling heavy loads.
    When designing the new model, Cao and his team took into consideration a number of human-environment interactions, from fruit picking to sensitive medical care. They identified that some processes require a safe but firm interaction with fragile objects; most existing gripping systems are not suitable for these purposes.


    The team explained that the design novelty resulted in a prototype demonstrating the merits of a responsive, fast, lightweight gripper capable of handling a multitude of tasks that traditionally required different types of gripping systems.
    Each finger of the soft humanoid hand is constructed from a flexible hybrid pneumatic actuator — or FHPA — driven to bend by pressurized air, creating a modular framework for movement in which each digit moves independently of the others.
    “Traditional rigid grippers for industrial applications are generally made of simple but reliable rigid structures that help in generating large forces, high accuracy and repeatability,” Cao said. “The proposed soft humanoid hand has demonstrated excellent adaptability and compatibility in grasping complex-shaped and fragile objects while simultaneously maintaining a high level of stiffness for exerting strong clamping forces to lift heavy loads.”
    In essence, the best of both worlds, Cao explained.
    The FHPA is composed of both hard and soft components, built around a unique structural combination of actuated air bladders and a bone-like spring core.
    “They combine the advantages of the deformability, adaptability and compliance of soft grippers while maintaining the large output force originating from the rigidity of the actuator,” Cao said.
    He believes the prototype can be useful in industries such as fruit picking, automated packaging, medical care, rehabilitation and surgical robotics.
    With ample room for future research and development, the team hopes to combine its advances with Cao’s recent work on so-called ‘smart’ grippers, integrating printed sensors in the gripping material. And by combining the hybrid gripper with ‘soft arms’ models, the researchers aim to more accurately mimic precise human actions.

    Story Source:
    Materials provided by Michigan State University.