More stories

  • Bleak cyborg future from brain-computer interfaces if we're not careful

    Surpassing the biological limitations of the brain and using one’s mind to interact with and control external electronic devices may sound like the distant cyborg future, but it could come sooner than we think.
    Researchers from Imperial College London conducted a review of modern commercial brain-computer interface (BCI) devices, and they discuss the primary technological limitations and humanitarian concerns of these devices in APL Bioengineering, from AIP Publishing.
    The most promising method to achieve real-world BCI applications is through electroencephalography (EEG), a method of monitoring the brain noninvasively through its electrical activity. EEG-based BCIs, or eBCIs, will require a number of technological advances prior to widespread use, but more importantly, they will raise a variety of social, ethical, and legal concerns.
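    As a loose illustration of what EEG-based signal processing involves (a generic sketch, not anything from the review), the snippet below estimates how much power a single recorded channel carries in a given frequency band, the kind of low-level feature an eBCI pipeline might feed to a classifier. The sampling rate, band edges, and synthetic signal are illustrative assumptions.
    ```python
    # Generic sketch of EEG band-power extraction; parameters are illustrative assumptions.
    import numpy as np
    from scipy.signal import welch

    def band_power(eeg_epoch, fs=250.0, band=(8.0, 12.0)):
        """Average power of one EEG channel within a frequency band (e.g. the alpha band)."""
        freqs, psd = welch(eeg_epoch, fs=fs, nperseg=min(len(eeg_epoch), 512))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])

    # Example: one second of simulated single-channel EEG at 250 Hz
    rng = np.random.default_rng(0)
    epoch = rng.standard_normal(250)
    alpha = band_power(epoch, fs=250.0, band=(8.0, 12.0))
    beta = band_power(epoch, fs=250.0, band=(13.0, 30.0))
    print(f"alpha power: {alpha:.4f}, beta power: {beta:.4f}")
    ```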
    Though it is difficult to understand exactly what a user experiences when operating an external device with an eBCI, a few things are certain. For one, eBCIs can communicate both ways. This allows a person to control electronics, which is particularly useful for medical patients who need help controlling wheelchairs, for example, but it also potentially changes the way the brain functions.
    “For some of these patients, these devices become such an integrated part of themselves that they refuse to have them removed at the end of the clinical trial,” said Rylie Green, one of the authors. “It has become increasingly evident that neurotechnologies have the potential to profoundly shape our own human experience and sense of self.”
    Aside from these potentially bleak mental and physiological side effects, intellectual property is also a concern: current arrangements may allow private companies that develop eBCI technologies to own users’ neural data.
    “This is particularly worrisome, since neural data is often considered to be the most intimate and private information that could be associated with any given user,” said Roberto Portillo-Lara, another author. “This is mainly because, apart from its diagnostic value, EEG data could be used to infer emotional and cognitive states, which would provide unparalleled insight into user intentions, preferences, and emotions.”
    As the availability of these platforms expands beyond medical treatment, disparities in access to these technologies may exacerbate existing social inequalities. For example, eBCIs could be used for cognitive enhancement, creating extreme imbalances in academic or professional success and educational advancement.
    “This bleak panorama brings forth an interesting dilemma about the role of policymakers in BCI commercialization,” Green said. “Should regulatory bodies intervene to prevent misuse and unequal access to neurotech? Should society follow instead the path taken by previous innovations, such as the internet or the smartphone, which originally targeted niche markets but are now commercialized on a global scale?”
    She calls on global policymakers, neuroscientists, manufacturers, and potential users of these technologies to begin having these conversations early and collaborate to produce answers to these difficult moral questions.
    “Despite the potential risks, the ability to integrate the sophistication of the human mind with the capabilities of modern technology constitutes an unprecedented scientific achievement, which is beginning to challenge our own preconceptions of what it is to be human,” Green said.

  • Study finds surprising source of social influence

    Imagine you’re a CEO who wants to promote an innovative new product — a time management app or a fitness program. Should you send the product to Kim Kardashian in the hope that she’ll love it and spread the word to her legions of Instagram followers? The answer would be ‘yes’ if successfully transmitting new ideas or behavior patterns were as simple as showing them to as many people as possible.
    However, a forthcoming study in the journal Nature Communications finds that as prominent and revered as social influencers seem to be, in fact, they are unlikely to change a person’s behavior by example — and might actually be detrimental to the cause.
    Why?
    “When social influencers present ideas that are dissonant with their followers’ worldviews — say, for example, that vaccination is safe and effective — they can unintentionally antagonize the people they are seeking to persuade because people typically only follow influencers whose ideas confirm their beliefs about the world,” says Damon Centola, Elihu Katz Professor of Communication, Sociology, and Engineering at Penn, and senior author on the paper.
    So what strategy do we take if we want to use an online or real-world neighborhood network to ‘plant’ a new idea? Is there anyone in a social network who is effective at transmitting new beliefs? The new study delivers a surprising answer: yes, and it’s the people you’d least expect to have any pull. To stimulate a shift in thinking, target small groups of people in the “outer edge” or fringe of a network.
    Centola and Douglas Guilbeault, Ph.D., a recent Annenberg graduate, studied over 400 public health networks to discover which people could spread new ideas and behaviors most effectively. They tested every possible person in every network to determine who would be most effective for spreading everything from celebrity gossip to vaccine acceptance.
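    To make the seeding question concrete (a toy simulation, not the authors' experiment or data), the sketch below spreads a 'complex contagion' (a node adopts only after a fraction of its neighbors have) across a random clustered network and compares a single high-degree hub seed with a small clustered group of low-degree fringe nodes. The graph model, adoption threshold, and group size are illustrative assumptions.
    ```python
    # Toy complex-contagion seeding comparison; all parameters are illustrative assumptions.
    import networkx as nx

    def spread(graph, seeds, threshold=0.25, rounds=50):
        """Nodes adopt once at least `threshold` of their neighbors have adopted."""
        adopted = set(seeds)
        for _ in range(rounds):
            new = {
                n for n in graph
                if n not in adopted
                and graph.degree(n) > 0
                and sum(nb in adopted for nb in graph[n]) / graph.degree(n) >= threshold
            }
            if not new:
                break
            adopted |= new
        return len(adopted)

    G = nx.powerlaw_cluster_graph(n=300, m=3, p=0.4, seed=1)
    hub = max(G, key=G.degree)                           # the "influencer"
    fringe_center = min(G, key=G.degree)                 # a low-degree node on the network's edge
    fringe_group = list(nx.ego_graph(G, fringe_center))  # that node plus its immediate neighbors
    print("hub seed reaches:    ", spread(G, [hub]))
    print("fringe group reaches:", spread(G, fringe_group))
    ```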

  • Ultrathin magnet operates at room temperature

    The development of an ultrathin magnet that operates at room temperature could lead to new applications in computing and electronics — such as high-density, compact spintronic memory devices — and new tools for the study of quantum physics.
    The ultrathin magnet, which was recently reported in the journal Nature Communications, could enable major advances in next-generation memory, computing, spintronics, and quantum physics. It was discovered by scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley.
    “We’re the first to make a room-temperature 2D magnet that is chemically stable under ambient conditions,” said senior author Jie Yao, a faculty scientist in Berkeley Lab’s Materials Sciences Division and associate professor of materials science and engineering at UC Berkeley.
    “This discovery is exciting because it not only makes 2D magnetism possible at room temperature, but it also uncovers a new mechanism to realize 2D magnetic materials,” added Rui Chen, a UC Berkeley graduate student in the Yao Research Group and lead author on the study.
    The magnetic component of today’s memory devices is typically made of magnetic thin films. But at the atomic level, these magnetic films are still three-dimensional — hundreds or thousands of atoms thick. For decades, researchers have searched for ways to make thinner and smaller 2D magnets and thus enable data to be stored at a much higher density.
    Previous achievements in the field of 2D magnetic materials have brought promising results. But these early 2D magnets lose their magnetism and become chemically unstable at room temperature.

  • New algorithm may help autonomous vehicles navigate narrow, crowded streets

    It is a scenario familiar to anyone who has driven down a crowded, narrow street. Parked cars line both sides, and there isn’t enough space for vehicles traveling in both directions to pass each other. One has to duck into a gap in the parked cars or slow and pull over as far as possible for the other to squeeze by.
    Drivers find a way to negotiate this, but not without close calls and frustration. Programming an autonomous vehicle (AV) to do the same — without a human behind the wheel or knowledge of what the other driver might do — presented a unique challenge for researchers at the Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research.
    “It’s the unwritten rules of the road, that’s pretty much what we’re dealing with here,” said Christoph Killing, a former visiting research scholar in the School of Computer Science’s Robotics Institute and now part of the Autonomous Aerial Systems Lab at the Technical University of Munich. “It’s a difficult bit. You have to learn to negotiate this scenario without knowing if the other vehicle is going to stop or go.”
    While at CMU, Killing teamed up with research scientist John Dolan and Ph.D. student Adam Villaflor to crack this problem. The team presented its research, “Learning To Robustly Negotiate Bi-Directional Lane Usage in High-Conflict Driving Scenarios,” at the International Conference on Robotics and Automation.
    The team believes their research is the first into this specific driving scenario. It requires drivers — human or not — to collaborate to make it past each other safely without knowing what the other is thinking. Drivers must balance aggression with cooperation. An overly aggressive driver, one that just goes without regard for other vehicles, could put itself and others at risk. An overly cooperative driver, one that always pulls over in the face of oncoming traffic, may never make it down the street.
    “I have always found this to be an interesting and sometimes difficult aspect of driving in Pittsburgh,” Dolan said.
    Autonomous vehicles have been heralded as a potential solution to the last-mile challenges of delivery and transportation. But for an AV to deliver a pizza, package or person to its destination, it has to be able to navigate tight spaces and deal with unknown driver intentions.
    The team developed a method to model different levels of driver cooperativeness — how likely a driver was to pull over to let the other driver pass — and used those models to train an algorithm that could assist an autonomous vehicle to safely and efficiently navigate this situation. The algorithm has only been used in simulation and not on a vehicle in the real world, but the results are promising. The team found that their algorithm performed better than current models.
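    As a rough illustration of that idea (a toy model, not the CMU/Argo AI algorithm), the sketch below treats cooperativeness as the probability that the oncoming driver pulls over, and counts how often encounters end in a clean pass, a stalemate, or a conflict for a fixed AV policy. All probabilities and the decision rule are illustrative assumptions.
    ```python
    # Toy model of driver cooperativeness on a narrow street; parameters are illustrative assumptions.
    import random

    def episode(av_aggressiveness, other_cooperativeness, rng):
        """One encounter: each driver independently decides to press forward or yield."""
        av_goes = rng.random() < av_aggressiveness
        other_yields = rng.random() < other_cooperativeness
        if av_goes and not other_yields:
            return "conflict"      # both pressed forward
        if not av_goes and not other_yields:
            return "stalemate"     # both waited
        return "pass"              # exactly one yielded, so the other squeezes by

    rng = random.Random(0)
    for coop in (0.2, 0.5, 0.8):
        results = [episode(0.6, coop, rng) for _ in range(10_000)]
        print(f"cooperativeness={coop}: pass rate {results.count('pass') / len(results):.2f}")
    ```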
    Driving is full of complex scenarios like this one. As the autonomous driving researchers tackle them, they look for ways to make the algorithms and models developed for one scenario, say merging onto a highway, work for other scenarios, like changing lanes or making a left turn against traffic at an intersection.
    “Extensive testing is bringing to light the last percent of tough cases,” Dolan said. “We keep finding these corner cases and keep coming up with ways to handle them.”
    Video: https://www.youtube.com/watch?v=5njRSHcHMBk
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • Novel techniques extract more accurate data from images degraded by environmental factors

    Computer vision technology is increasingly used in areas such as automatic surveillance systems, self-driving cars, facial recognition, healthcare and social distancing tools. Users require accurate and reliable visual information to fully harness the benefits of video analytics applications, but the quality of the video data is often degraded by environmental factors such as rain, night-time conditions or crowds, where multiple people overlap in a scene.
    Using computer vision and deep learning, a team of researchers led by Yale-NUS College Associate Professor of Science (Computer Science) Robby Tan, who is also from the National University of Singapore’s (NUS) Faculty of Engineering, has developed novel approaches that resolve the problem of low-level vision in videos caused by rain and night-time conditions, as well as improve the accuracy of 3D human pose estimation in videos.
    The research was presented at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR).
    Combating visibility issues during rain and night-time conditions
    Night-time images are affected by low light and human-made light effects such as glare, glow, and floodlights, while rain images are affected by rain streaks or rain accumulation (or rain veiling effect).
    “Many computer vision systems, like automatic surveillance and self-driving cars, rely on clear visibility of the input videos to work well. For instance, self-driving cars cannot work robustly in heavy rain and CCTV automatic surveillance systems often fail at night, particularly if the scenes are dark or there is significant glare or floodlights,” explained Assoc Prof Tan.
    In two separate studies, Assoc Prof Tan and his team introduced deep learning algorithms to enhance the quality of night-time videos and rain videos, respectively. In the first study, they boosted the brightness yet simultaneously suppressed noise and light effects (glare, glow and floodlights) to yield clear night-time images. This technique is new and addresses the challenge of clarity in night-time images and videos when the presence of glare cannot be ignored. In comparison, the existing state-of-the-art methods fail to handle glare.
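    For context, the sketch below is a classical baseline for the same trade-off, not the authors' learned method: it brightens a dark frame with gamma correction and then suppresses the amplified noise with an edge-preserving bilateral filter. Unlike the approach described in the study, it does nothing about glare, glow or floodlights, and all parameters are illustrative assumptions.
    ```python
    # Classical low-light baseline (not the study's deep learning method); parameters are assumptions.
    import cv2
    import numpy as np

    def enhance_night_frame(bgr_frame, gamma=0.5, d=9, sigma_color=75, sigma_space=75):
        """Brighten a low-light frame, then smooth noise while preserving edges."""
        normalized = bgr_frame.astype(np.float32) / 255.0
        brightened = np.power(normalized, gamma)          # gamma < 1 lifts dark regions
        brightened = (brightened * 255).astype(np.uint8)
        return cv2.bilateralFilter(brightened, d, sigma_color, sigma_space)

    # Example on a synthetic dark frame: mean intensity rises after enhancement
    dark = (np.random.default_rng(0).random((240, 320, 3)) * 40).astype(np.uint8)
    enhanced = enhance_night_frame(dark)
    print(dark.mean(), enhanced.mean())
    ```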

  • Scientists adopt deep learning for multi-object tracking

    Implementing algorithms that can simultaneously track multiple objects is essential to unlock many applications, from autonomous driving to advanced public surveillance. However, it is difficult for computers to discriminate between detected objects based on their appearance. Now, researchers at the Gwangju Institute of Science and Technology (GIST) have adapted deep learning techniques in a multi-object tracking framework, overcoming short-term occlusion and achieving remarkable performance without sacrificing computational speed.
    Computer vision has progressed greatly over the past decade and made its way into all sorts of relevant applications, both in academia and in our daily lives. There are, however, some tasks in this field that are still extremely difficult for computers to perform with acceptable accuracy and speed. One example is object tracking, which involves recognizing persistent objects in video footage and tracking their movements. While computers can simultaneously track more objects than humans, they usually fail to discriminate between the appearances of different objects. This, in turn, can lead the algorithm to mix up objects in a scene and ultimately produce incorrect tracking results.
    At the Gwangju Institute of Science and Technology in Korea, a team of researchers led by Professor Moongu Jeon seeks to solve these issues by incorporating deep learning techniques into a multi-object tracking framework. In a recent study published in Information Sciences, they present a new tracking model based on a technique they call ‘deep temporal appearance matching association (Deep-TAMA)’ which promises innovative solutions to some of the most prevalent problems in multi-object tracking. This paper was made available online in October 2020 and was published in volume 561 of the journal in June 2021.
    Conventional tracking approaches determine object trajectories by associating a bounding box to each detected object and establishing geometric constraints. The inherent difficulty in this approach is in accurately matching previously tracked objects with objects detected in the current frame. Differentiating detected objects based on hand-crafted features like color usually fails because of changes in lighting conditions and occlusions. Thus, the researchers focused on equipping the tracking model with the ability to accurately extract the known features of detected objects and compare them not only with those of other objects in the frame but also with a recorded history of known features. To this end, they combined joint-inference neural networks (JI-Nets) with long short-term memory networks (LSTMs).
    LSTMs help to associate stored appearances with those in the current frame whereas JI-Nets allow for comparing the appearances of two detected objects simultaneously from scratch — one of the most unique aspects of this new approach. Using historical appearances in this way allowed the algorithm to overcome short-term occlusions of the tracked objects. “Compared to conventional methods that pre-extract features from each object independently, the proposed joint-inference method exhibited better accuracy in public surveillance tasks, namely pedestrian tracking,” highlights Dr. Jeon. Moreover, the researchers also offset a main drawback of deep learning — low speed — by adopting indexing-based GPU parallelization to reduce computing times. Tests on public surveillance datasets confirmed that the proposed tracking framework offers state-of-the-art accuracy and is therefore ready for deployment.
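    To give a feel for appearance-based association in general (a generic sketch, not Deep-TAMA or the JI-Net/LSTM architecture), the snippet below matches new detections to existing tracks by comparing each detection's appearance embedding against a short history of embeddings per track, then solves the resulting assignment problem with the Hungarian algorithm. The embedding source, dimension, and similarity threshold are illustrative assumptions.
    ```python
    # Generic appearance-history association (not Deep-TAMA); embeddings and threshold are assumptions.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(track_histories, detection_embeddings, min_similarity=0.5):
        """Return (track_idx, detection_idx) matches; unmatched items would start or end tracks."""
        cost = np.zeros((len(track_histories), len(detection_embeddings)))
        for i, history in enumerate(track_histories):        # history: list of past embeddings
            for j, det in enumerate(detection_embeddings):
                sims = [
                    np.dot(h, det) / (np.linalg.norm(h) * np.linalg.norm(det) + 1e-8)
                    for h in history
                ]
                cost[i, j] = 1.0 - max(sims)                  # best match over the stored history
        rows, cols = linear_sum_assignment(cost)
        return [(i, j) for i, j in zip(rows, cols) if 1.0 - cost[i, j] >= min_similarity]

    # Example: two tracks with three stored 128-d embeddings each, and three new detections
    rng = np.random.default_rng(0)
    tracks = [[rng.standard_normal(128) for _ in range(3)] for _ in range(2)]
    detections = [rng.standard_normal(128) for _ in range(3)]
    print(associate(tracks, detections, min_similarity=-1.0))
    ```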
    Multi-object tracking unlocks a plethora of applications ranging from autonomous driving to public surveillance, which can help combat crime and reduce the frequency of accidents. “We believe our methods can inspire other researchers to develop novel deep-learning-based approaches to ultimately improve public safety,” concludes Dr. Jeon. For everyone’s sake, let us hope their vision soon becomes a reality!
    Story Source:
    Materials provided by GIST (Gwangju Institute of Science and Technology).

  • The mathematics of repulsion for new graphene catalysts

    A new mathematical model helps predict the tiny changes in carbon-based materials that could yield interesting properties.
    Scientists at Tohoku University and colleagues in Japan have developed a mathematical model that abstracts the key effects of changes to the geometry of carbon materials and predicts their unique properties.
    The details were published in the journal Carbon.
    Scientists generally use mathematical models to predict the properties that might emerge when a material is changed in certain ways. Changing the geometry of three-dimensional (3D) graphene, which is made of networks of carbon atoms, by adding chemicals or introducing topological defects, can improve its catalytic properties, for example. But it has been difficult for scientists to understand why this happens exactly.
    The new mathematical model, called standard realization with repulsive interaction (SRRI), reveals the relationship between these changes and the properties that arise from them. It does this using less computational power than the typical model employed for this purpose, called density functional theory (DFT), but it is less accurate.
    With the SRRI model, the scientists have refined another existing model by showing the attractive and repulsive forces that exist between adjacent atoms in carbon-based materials. The SRRI model also takes into account two types of curvature in such materials: local curvatures and mean curvature.
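    The published SRRI formulation is not reproduced here, but the sketch below conveys the general flavor of a "standard realization with repulsion": place the atoms of a bond network in space by balancing an attractive harmonic term along bonds against a pairwise repulsive term, then relax the positions by gradient descent. The energy form, weights, and step size are illustrative assumptions only.
    ```python
    # Toy attraction/repulsion relaxation of a bond network (not the published SRRI model).
    import numpy as np

    def relax(edges, n_atoms, dim=3, repulsion=0.5, steps=500, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.standard_normal((n_atoms, dim))
        for _ in range(steps):
            grad = np.zeros_like(x)
            for i, j in edges:                    # attractive (harmonic) term along each bond
                d = x[i] - x[j]
                grad[i] += 2 * d
                grad[j] -= 2 * d
            for i in range(n_atoms):              # soft 1/r repulsion between every pair of atoms
                for j in range(i + 1, n_atoms):
                    d = x[i] - x[j]
                    r = np.linalg.norm(d) + 1e-8
                    g = -repulsion * d / r**3     # gradient of repulsion / r with respect to x[i]
                    grad[i] += g
                    grad[j] -= g
            x -= lr * grad
        return x

    # Example: a hexagonal ring, the basic motif of graphene
    ring = [(k, (k + 1) % 6) for k in range(6)]
    print(np.round(relax(ring, n_atoms=6), 2))
    ```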
    The researchers, led by Tohoku University mathematician Motoko Kotani, used their model to predict the catalytic properties that would arise when local curvatures and dopants were introduced into 3D graphene. Their results were similar to those produced by the DFT model.
    “The accuracy of the SRRI model showed qualitative agreement with DFT calculations, and the model is able to screen potential materials roughly one billion times faster than DFT,” says Kotani.
    The team next fabricated the material and determined its properties using scanning electrochemical cell microscopy. This method can show a direct link between the material’s geometry and its catalytic activity. It revealed that the catalytically active sites are on the local curvatures.
    “Our mathematical model can be used as an effective pre-screening tool for exploring new 2D and 3D carbon materials for unique properties before applying DFT modelling,” says Kotani. “This shows the importance of mathematics in accelerating material design.”
    The team next plans to use their model to look for links between the design of a material and its mechanical and electron transport properties.
    Story Source:
    Materials provided by Tohoku University.

  • Projecting bond properties with machine learning

    Designing materials that have the necessary properties to fulfill specific functions is a challenge faced by researchers working in areas from catalysis to solar cells. To speed up development processes, modeling approaches can be used to predict information to guide refinements. Researchers from The University of Tokyo Institute of Industrial Science have developed a machine learning model to determine characteristics of bonded and adsorbed materials based on parameters of the individual components. Their findings are published in Applied Physics Express.
    Factors such as the length and strength of bonds in materials play crucial roles in determining the structures and properties we experience on the macroscopic scale. The ability to easily predict these characteristics is therefore valuable when designing new materials.
    The density of states (DOS) is a parameter that can be calculated for individual atoms, molecules, and materials. Put simply, it describes the options available to the electrons that arrange themselves in a material. A modeling approach that can take this information for selected components and produce useful data for the desired product — with no need to make and analyze the material — is an attractive tool.
    The researchers used a machine learning approach — where the model refines its response without human intervention — to predict four different properties of products from the DOS information of the individual components. Although the DOS has been used as a descriptor to establish single parameters before, this is the first time multiple different properties have been predicted.
    “We were able to quantitatively predict the binding energy, bond length, number of covalent electrons, and the Fermi energy after bonding for three different general types of system,” explains study first author Eiki Suzuki. “And our predictions were very accurate across all of the properties.”
    Because the calculation of DOS of an isolated state is less complex than for bonded systems, the analysis is relatively efficient. In addition, the neural network model used performed well even when only 20% of the dataset was used for training.
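    As a rough illustration of the workflow (not the authors' model, descriptors, or data), the sketch below trains a small neural network to map the concatenated DOS vectors of two isolated components to four target properties, using only 20% of a dataset for training as the study reports. Every array here is a random placeholder, so the printed score is meaningless; only the shape of the pipeline is meant to be informative.
    ```python
    # Placeholder DOS-to-properties regression sketch; all data are synthetic assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_bins = 500, 64
    dos_a = rng.random((n_samples, n_bins))   # DOS of isolated component A on an energy grid
    dos_b = rng.random((n_samples, n_bins))   # DOS of isolated component B
    X = np.hstack([dos_a, dos_b])
    y = rng.random((n_samples, 4))            # binding energy, bond length, covalent electrons, Fermi energy

    # Mirror the paper's observation by training on only 20% of the data
    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.2, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))
    ```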
    “A significant advantage of our model is that it is general and can be applied to a wide variety of systems,” study corresponding author Teruyasu Mizoguchi explains. “We believe that our findings could make a significant contribution to numerous development processes, for example in catalysis, and could be particularly useful in newer research areas such as nano clusters and nanowires.”
    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo.