More stories

  • Is your mobile provider tracking your location? New technology could stop it

    Right now, there is a good chance your phone is tracking your location — even with GPS services turned off. That’s because, to receive service, our phones reveal personal identifiers to cell towers owned by major network operators. This has led to vast and largely unregulated data-harvesting industries based around selling users’ location data to third parties without consent.
    For the first time, researchers at the University of Southern California (USC) Viterbi School of Engineering and Princeton University have found a way to stop this privacy breach using existing cellular networks. The new system, presented at the USENIX Security conference on Aug. 11, protects users’ mobile privacy while providing normal mobile connectivity.
    The new architecture, called “Pretty Good Phone Privacy” or PGPP, decouples phone connectivity from authentication and billing by anonymizing personal identifiers sent to cell towers. The software-based solution, described by the researchers as an “architecture change,” does not alter cellular network hardware.
    “We’ve unwittingly accepted that our phones are tracking devices in disguise, but until now we’ve had no other option — using mobile devices meant accepting this tracking,” said study co-author Barath Raghavan, an assistant professor in computer science at USC. “We figured out how to decouple authentication from connectivity and ensure privacy while maintaining seamless connectivity, and it is all done in software.”
    Decoupling authentication and phone connectivity
    Currently, for your phone to work, the network has to know your location and identify you as a paying customer. As such, both your identity and location data are tracked by the device at all times. Data brokers and major operators have taken advantage of this system to profit from revealing sensitive user data; to date, there are no federal laws in the United States restricting the use of location data.
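    The decoupling idea can be pictured with a small software sketch. The following is a hypothetical illustration, not the PGPP protocol itself: a billing service checks that an account is in good standing and hands out an opaque token, and the network attachment point admits the token without ever seeing a personal identifier such as an IMSI. The class and function names are invented for illustration; a real design would use blind signatures so that even the billing service cannot link a token to a network attachment.

    ```python
    # Hypothetical sketch of decoupling billing from connectivity (not the
    # actual PGPP protocol).
    import secrets

    class BillingService:
        def __init__(self):
            self.accounts = {"alice": True, "bob": False}  # subscriber -> paid up?
            self.valid_tokens = set()                      # tokens carry no identity

        def issue_token(self, subscriber):
            """Hand out an anonymous connectivity token if the account is in good standing."""
            if not self.accounts.get(subscriber):
                return None
            token = secrets.token_hex(16)                  # random, unlinkable to the name
            self.valid_tokens.add(token)
            return token

    class NetworkGateway:
        """Grants connectivity based only on token validity, never on identity."""
        def __init__(self, billing):
            self.billing = billing

        def attach(self, token):
            return token in self.billing.valid_tokens

    billing = BillingService()
    gateway = NetworkGateway(billing)
    token = billing.issue_token("alice")
    print(gateway.attach(token))  # True, yet the gateway never learned who attached
    ```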

  • Toward next-generation brain-computer interface systems

    Brain-computer interfaces (BCIs) are emerging assistive devices that may one day help people with brain or spinal injuries to move or communicate. BCI systems depend on implantable sensors that record electrical signals in the brain and use those signals to drive external devices like computers or robotic prosthetics.
    Most current BCI systems use one or two sensors to sample up to a few hundred neurons, but neuroscientists are interested in systems that are able to gather data from much larger groups of brain cells.
    Now, a team of researchers has taken a key step toward a new concept for a future BCI system — one that employs a coordinated network of independent, wireless microscale neural sensors, each about the size of a grain of salt, to record and stimulate brain activity. The sensors, dubbed “neurograins,” independently record the electrical pulses made by firing neurons and send the signals wirelessly to a central hub, which coordinates and processes the signals.
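    As a rough software analogue of that hub-and-sensor arrangement (hypothetical, not the published neurograin design), the sketch below merges timestamped spike reports from many independent simulated sensors into a single chronologically ordered record.

    ```python
    # Hypothetical sketch: a central hub merges timestamped spike packets
    # reported independently by many tiny sensors into one time-ordered feed.
    import heapq
    import random

    def neurograin_stream(sensor_id, n_events=5, seed=None):
        """Simulate one sensor emitting (timestamp_ms, sensor_id) spike packets."""
        rng = random.Random(seed)
        t = 0.0
        for _ in range(n_events):
            t += rng.expovariate(1 / 20)  # roughly 20 ms mean inter-spike interval
            yield (round(t, 2), sensor_id)

    # The hub merges ~50 independent, individually time-sorted streams.
    streams = [neurograin_stream(i, seed=i) for i in range(50)]
    merged = heapq.merge(*streams)

    for timestamp, sensor in list(merged)[:10]:
        print(f"t={timestamp:7.2f} ms  spike from neurograin #{sensor}")
    ```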
    In a study published on August 12 in Nature Electronics, the research team demonstrated the use of nearly 50 such autonomous neurograins to record neural activity in a rodent.
    The results, the researchers say, are a step toward a system that could one day enable the recording of brain signals in unprecedented detail, leading to new insights into how the brain works and new therapies for people with brain or spinal injuries.
    “One of the big challenges in the field of brain-computer interfaces is engineering ways of probing as many points in the brain as possible,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “Up to now, most BCIs have been monolithic devices — a bit like little beds of needles. Our team’s idea was to break up that monolith into tiny sensors that could be distributed across the cerebral cortex. That’s what we’ve been able to demonstrate here.”
    The team, which includes experts from Brown, Baylor University, University of California at San Diego and Qualcomm, began the work of developing the system about four years ago. The challenge was two-fold, said Nurmikko, who is affiliated with Brown’s Carney Institute for Brain Science. The first part required shrinking the complex electronics involved in detecting, amplifying and transmitting neural signals into the tiny silicon neurograin chips. The team first designed and simulated the electronics on a computer, and went through several fabrication iterations to develop operational chips.

  • New study shows the potential of DNA-based data structures

    Newcastle University research offers important insights into how DNA could be turned into a green-by-design data structure that organises data the way conventional computers do.
    The team, led by researchers from Newcastle University’s School of Computing, created new dynamic DNA data structures able to store and recall information in an ordered way from DNA molecules. They also analysed how these structures can be interfaced with external nucleic acid computing circuits.
    Publishing their findings in the journal Nature Communications, the scientists present an in vitro implementation of a stack data structure using DNA polymers. Developed as a DNA chemical reaction system, the stack system is able to record combinations of two different DNA signals (0s and 1s), release the signals into solution in reverse order, and then re-record.
    The stack, a linear data structure in which operations follow a fixed order, stores and retrieves information (DNA signal strands) in last-in, first-out order by building and truncating DNA “polymers” of individual ssDNA strands. Such a stack data structure may eventually be embedded in an in vivo context to store messenger RNAs and reverse the temporal order of a translational response, among other applications.
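    As a plain in-silico analogue (an ordinary software stack, not the DNA chemistry itself), the sketch below records two kinds of signals and releases them in last-in, first-out order, mirroring how the DNA polymer is extended and then truncated strand by strand.

    ```python
    # Software analogue of the DNA signal stack (illustrative only).
    class SignalStack:
        def __init__(self):
            self._polymer = []  # a Python list stands in for the growing DNA polymer

        def record(self, signal):
            """Append a signal strand ('0' or '1') to the end of the polymer."""
            if signal not in ("0", "1"):
                raise ValueError("only two signal types are stored")
            self._polymer.append(signal)

        def release(self):
            """Cleave the most recently added strand and return it to 'solution'."""
            return self._polymer.pop()

    stack = SignalStack()
    for s in ["0", "1", "1"]:
        stack.record(s)
    print([stack.release() for _ in range(3)])  # ['1', '1', '0'], i.e. reverse order
    ```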
    Professor Natalio Krasnogor, of Newcastle University’s School of Computing, who led the study explains: “Our civilisation is data hungry and all that information processing thirst is having a strong environmental impact. For example, digital technologies pollute more than the aviation industry, the top 7000 data centers in the world use around 2% of global electricity and we all heard about the environmental footprint of some cryptocurrencies.
    “In recent years DNA has been shown to be an excellent substrate to store data and the DNA is a renewable, sustainable resource. At Newcastle we are passionate about sustainability and thus we wanted to start taking baby steps into green-by-design molecular information processing in DNA and go beyond simply storing data. We wanted to be able to organise it. In computer science, data structures are at the core of all the algorithms that run our modern economy; this is so because you need a way to have a unified and standardised way to operate on the data that is stored. This is what data structures enable. We are the first to demonstrate a molecular realisation of this crucial component of the modern information age.”
    Study co-author, Dr Annunziata Lopiccolo, Research Associate at Newcastle University’s Centre for Synthetic Biology and the Bioeconomy, added: “If we start thinking about data storage, immediately our minds picture electronic microchips, USB drives and many other existing technologies. But over the last few years biologists have challenged the data storage media sector by demonstrating that DNA, as a highly stable and resilient medium, can function as quaternary data storage rather than binary. In our work we wanted to demonstrate that it is possible to use the quaternary code to craft readable inputs and outputs in the form of programmable signals, with a linear and organised data structure. Our work expands knowledge in the context of information processing at the nanoscale level.”
    Study co-author Dr Harold Fellermann, Lecturer at Newcastle University School of Computing added: “Our biomolecular data structure, where both data as well as operations are represented by short pieces of DNA, has been designed with biological implementations in mind. In principle, we can imagine such a device to be used inside a living cell, bacteria for example. This makes it possible to bring computational power to domains that are currently hard to access with traditional silicon-based, electronic computing. In the future, such data structures might be used in environmental monitoring, bioremediation, green production, and even personalised nanomedicine.”
    Study co-author, Dr Benjamin Shirt-Ediss, Research Associate, Newcastle University School of Computing, said: “It was really interesting to develop a computational model of the DNA chemistry and to see good agreement with experimental results coming out of the lab. The computational model allowed us to really get a handle on the performance of the DNA stack data structure — we could systematically explore its absolute limits and suggest future avenues for improvement.”
    The experimental DNA stack system constitutes proof of principle that a polymerising DNA chemistry can be used as a dynamic data structure to store two types of DNA signal in last-in, first-out order. While more research is needed to determine the best possible way to archive and access DNA-based data, the study highlights the enormous potential of this technology and how it could help meet rapidly growing data demands.
    Story Source:
    Materials provided by Newcastle University. Note: Content may be edited for style and length.

  • Impenetrable optical OTP security platform

    An anticounterfeiting smart label and security platform that makes forgery fundamentally impossible has been proposed. The device accomplishes this by controlling several properties of light, including color, phase, and polarization, in a single optical device.
    A POSTECH research team, led by Professor Junsuk Rho of the departments of mechanical engineering and chemical engineering, Dr. Inki Kim, and Ph.D. candidates Jaehyuck Jang and Gyeongtae Kim, has developed an encrypted hologram printing platform that works in both natural light and laser light using a metasurface, an ultra-thin optical material about one-thousandth the thickness of a human hair. A label printed with this technology can produce a holographic color image that retains a specific polarization, which the researchers call a “vectorial hologram.” The findings from this study were recently published in Nature Communications.
    The metasurface devices reported so far can modulate only one property of light, such as color, phase, or polarization. To overcome this limitation, the researchers devised a pixelated bifunctional metasurface by grouping multiple metasurfaces.
    In the unit structure that forms the basis of the metasurface, the research team designed a device that uses the structure’s size to control color and its orientation angles to control phase, while the relative angle difference and the ratio within each pixelated group, which generate left-handed and right-handed circularly polarized light, express all polarizations of light. To freely modulate these degrees of freedom of light while maximizing efficiency, the metasurface acts as both a resonator and an optical waveguide.
    The vectorial hologram label designed in this manner displays QR codes containing a variety of colors to the naked eye or when scanned with a camera. Simultaneously, under laser illumination, polarization-encoded 3D holographic images are rendered. Each part of this holographic image carries a specific polarization state, which sets it apart from previously reported holograms.
    The vectorial holographic color printing technology developed in this research is an optical take on the two-level encrypted one-time password (OTP) security mechanism used by current banking systems to generate a password that verifies the user. First, when a user scans the QR code of the meta-optical device with a smartphone, a first password composed of random numbers is generated. When this password is applied to the meta-optical device as a voltage value, the secondary password is displayed as an encrypted holographic image.
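    A software analogue of that two-level flow might look like the sketch below (hypothetical; in the real platform the second factor is an optical hologram read from the physical label, not a hash). The first password is random, and the second can only be produced by something that holds the label’s secret, modelled here as an HMAC key.

    ```python
    # Hypothetical software analogue of a two-level OTP check.
    import hashlib
    import hmac
    import secrets

    label_secret = secrets.token_bytes(32)  # stands in for the physical metasurface label

    def first_password():
        """Step 1: scanning the QR code yields a fresh random password."""
        return f"{secrets.randbelow(10**6):06d}"

    def second_password(first):
        """Step 2: 'applying' the first password to the label reveals the second one."""
        digest = hmac.new(label_secret, first.encode(), hashlib.sha256).hexdigest()
        return digest[:8]

    otp1 = first_password()
    otp2 = second_password(otp1)
    print(otp1, otp2)  # a verifier holding the same secret can check both values
    ```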
    “This vectorial holographic color printing platform is more advanced than the metasurface devices reported so far, and has demonstrated that various degrees of freedom of light can be modulated with one optical device,” explained Professor Junsuk Rho. “It is a highly perfected optical OTP device that shows promise as an original optical encryption technology applicable in designing and analyzing meta-atoms.”
    The research team has been conducting leading research on metasurface optical devices for the past five years, and the newly developed device shows strong potential for commercialization in optical sensors, holographic displays, and security and anticounterfeiting applications.
    This study was supported by a grant from the Samsung Research Funding & Incubation Center for Future Technology, funded by Samsung Electronics.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Deep learning model classifies brain tumors with single MRI scan

    A team of researchers at Washington University School of Medicine has developed a deep learning model capable of classifying a brain tumor as one of six common types using a single 3D MRI scan, according to a study published in Radiology: Artificial Intelligence.
    “This is the first study to address the most common intracranial tumors and to directly determine the tumor class or the absence of tumor from a 3D MRI volume,” said Satrajit Chakrabarty, M.S., a doctoral student under the direction of Aristeidis Sotiras, Ph.D., and Daniel Marcus, Ph.D., in Mallinckrodt Institute of Radiology’s Computational Imaging Lab at Washington University School of Medicine in St. Louis, Missouri.
    The six most common intracranial tumor types are high-grade glioma, low-grade glioma, brain metastases, meningioma, pituitary adenoma and acoustic neuroma. Each was documented through histopathology, which requires surgically removing tissue from the site of a suspected cancer and examining it under a microscope.
    According to Chakrabarty, machine and deep learning approaches using MRI data could potentially automate the detection and classification of brain tumors.
    “Non-invasive MRI may be used as a complement, or in some cases, as an alternative to histopathologic examination,” he said.
    To build their machine learning model, called a convolutional neural network, Chakrabarty and researchers from Mallinckrodt Institute of Radiology developed a large, multi-institutional dataset of intracranial 3D MRI scans from four publicly available sources. In addition to the institution’s own internal data, the team obtained pre-operative, post-contrast T1-weighted MRI scans from the Brain Tumor Image Segmentation, The Cancer Genome Atlas Glioblastoma Multiforme, and The Cancer Genome Atlas Low Grade Glioma datasets.
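    As a rough illustration of the kind of model involved (a minimal sketch, not the authors’ published network), a small 3D convolutional classifier can map a single post-contrast T1-weighted volume to one of seven classes, the six tumor types plus a no-tumor class. The architecture and input size below are invented for illustration.

    ```python
    # Minimal 3D CNN classifier sketch (illustrative, not the published model).
    import torch
    import torch.nn as nn

    class Tiny3DClassifier(nn.Module):
        def __init__(self, n_classes=7):  # six tumor types plus "no tumor"
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),  # global pooling to a fixed-size vector
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = Tiny3DClassifier()
    volume = torch.randn(1, 1, 96, 96, 96)  # a batch of one 96^3 MRI volume
    logits = model(volume)
    print(logits.argmax(dim=1))             # predicted class index
    ```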

  • A novel virtual reality technology to make MRI a new experience

    Researchers from King’s College London have created a novel interactive VR system for patients to use while undergoing an MRI scan.
    In a new paper published in Scientific Reports, the researchers say they hope this advancement will make it easier for those who find having an MRI scan challenging, such as children, people with cognitive difficulties, or those who suffer from claustrophobia or anxiety.
    In normal circumstances, MRI scans fail in up to 50 percent of children under 5 years of age, which means that hospitals often rely on sedative medication or even anesthesia to get children successfully scanned.
    These measures are time consuming and expensive and carry their own associated risks. From a neuroscience point of view, it also means that MRI-based studies of brain function in these vulnerable populations are generally only ever conducted during an artificially induced sleep state, so they may not be representative of how the brain works under normal circumstances.
    Lead researcher Dr Kun Qian from the School of Biomedical Engineering & Imaging Sciences at King’s College London said having an MRI scan can be quite an alien experience as it involves going into a narrow tunnel, with loud and often strange noises in the background, all while having to stay as still as possible.
    “We were keen to find other ways of enabling children and vulnerable people to have an MRI scan,” Dr Qian said.

  • System trains drones to fly around obstacles at high speeds

    If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.
    But if they can be pushed to be faster and more nimble, drones could be put to use in time-critical operations beyond the race course, for instance to search for survivors in a natural disaster.
    Now, aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course in a physical space.
    The researchers found that a drone trained with their algorithm flew through a simple obstacle course up to 20 percent faster than a drone trained on conventional planning algorithms. Interestingly, the new algorithm didn’t always keep a drone ahead of its competitor throughout the course. In some cases, it chose to slow a drone down to handle a tricky curve, or save its energy in order to speed up and ultimately overtake its rival.
    “At high speeds, there are intricate aerodynamics that are hard to simulate, so we use experiments in the real world to fill in those black holes to find, for instance, that it might be better to slow down first to be faster later,” says Ezra Tal, a graduate student in MIT’s Department of Aeronautics and Astronautics. “It’s this holistic approach we use to see how we can make a trajectory overall as fast as possible.”
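    One simple way to picture combining simulation with real-flight data (a hedged sketch, not MIT’s published planner, which is far more sophisticated) is to fit a correction from simulated segment times to measured ones and then rank candidate trajectories by their corrected predicted times.

    ```python
    # Illustrative sketch: correct simulated trajectory times with real measurements.
    import numpy as np

    # Simulated vs. measured times (seconds) for a few trajectory segments that
    # were actually flown (numbers are made up for illustration).
    sim_times = np.array([1.8, 2.4, 3.1, 1.2])
    real_times = np.array([2.1, 2.9, 3.9, 1.3])

    # Fit a linear correction real ~ a * sim + b by least squares.
    a, b = np.polyfit(sim_times, real_times, deg=1)

    def corrected_time(simulated_total):
        """Map a purely simulated trajectory time to a real-world estimate."""
        return a * simulated_total + b

    candidates = {"aggressive": 7.9, "slow-into-the-curve": 8.3}  # simulated totals
    best = min(candidates, key=lambda name: corrected_time(candidates[name]))
    print(best, round(corrected_time(candidates[best]), 2))
    ```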
    “These kinds of algorithms are a very valuable step toward enabling future drones that can navigate complex environments very fast,” adds Sertac Karaman, associate professor of aeronautics and astronautics, and director of the Laboratory for Information and Decision Systems at MIT. “We are really hoping to push the limits in a way that they can travel as fast as their physical limits will allow.”
    Tal, Karaman, and MIT graduate student Gilhyun Ryou have published their results in the International Journal of Robotics Research.

  • Researchers use artificial intelligence to unlock extreme weather mysteries

    From lake-draining drought in California to bridge-breaking floods in China, extreme weather is wreaking havoc. Preparing for weather extremes in a changing climate remains a challenge, however, because their causes are complex and their response to global warming is often not well understood. Now, Stanford researchers have developed a machine learning tool to identify conditions for extreme precipitation events in the Midwest, which account for over half of all major U.S. flood disasters. Published in Geophysical Research Letters, their approach is one of the first examples of using AI to analyze the causes of long-term changes in extreme events, and it could help make projections of such events more accurate.
    “We know that flooding has been getting worse,” said study lead author Frances Davenport, a PhD student in Earth system science in Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth). “Our goal was to understand why extreme precipitation is increasing, which in turn could lead to better predictions about future flooding.”
    Among other impacts, global warming is expected to drive heavier rain and snowfall by creating a warmer atmosphere that can hold more moisture. Scientists hypothesize that climate change may affect precipitation in other ways, too, such as changing when and where storms occur. Revealing these impacts has remained difficult, however, in part because global climate models do not necessarily have the spatial resolution to model these regional extreme events.
    “This new approach to leveraging machine learning techniques is opening new avenues in our understanding of the underlying causes of changing extremes,” said study co-author Noah Diffenbaugh, the Kara J Foundation Professor in the School of Earth, Energy & Environmental Sciences. “That could enable communities and decision makers to better prepare for high-impact events, such as those that are so extreme that they fall outside of our historical experience.”
    Davenport and Diffenbaugh focused on the upper Mississippi watershed and the eastern part of the Missouri watershed. The highly flood-prone region, which spans parts of nine states, has seen extreme precipitation days and major floods become more frequent in recent decades. The researchers started by using publicly available climate data to calculate the number of extreme precipitation days in the region from 1981 to 2019. Then they trained a machine learning algorithm designed for analyzing grid data, such as images, to identify large-scale atmospheric circulation patterns associated with extreme precipitation (above the 95th percentile).
    “The algorithm we use correctly identifies over 90 percent of the extreme precipitation days, which is higher than the performance of traditional statistical methods that we tested,” Davenport said.
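    The setup can be pictured with a minimal sketch (not the authors’ model): a small convolutional network takes a gridded atmospheric field for one day, such as a pressure-anomaly map, and labels the day as extreme precipitation or not. The grid size and architecture below are illustrative.

    ```python
    # Minimal CNN sketch for labeling days as extreme-precipitation or not.
    import torch
    import torch.nn as nn

    class CirculationCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(16, 2),  # extreme vs. non-extreme day
            )

        def forward(self, x):
            return self.net(x)

    model = CirculationCNN()
    day_grid = torch.randn(1, 1, 40, 60)   # one day's latitude x longitude anomaly field
    print(model(day_grid).softmax(dim=1))  # class probabilities
    ```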
    The trained machine learning algorithm revealed that multiple factors are responsible for the recent increase in Midwest extreme precipitation. During the 21st century, the atmospheric pressure patterns that lead to extreme Midwest precipitation have become more frequent, increasing at a rate of about one additional day per year, although the researchers note that the changes are much weaker going back further in time to the 1980s.
    However, the researchers found that when these atmospheric pressure patterns do occur, the amount of precipitation that results has clearly increased. As a result, days with these conditions are more likely to have extreme precipitation now than they did in the past. Davenport and Diffenbaugh also found that increases in the precipitation intensity on these days were associated with higher atmospheric moisture flows from the Gulf of Mexico into the Midwest, bringing the water necessary for heavy rainfall in the region.
    The researchers hope to extend their approach to look at how these different factors will affect extreme precipitation in the future. They also envision redeploying the tool to focus on other regions and types of extreme events, and to analyze distinct extreme precipitation causes, such as weather fronts or tropical cyclones. These applications will help further parse climate change’s connections to extreme weather.
    “While we focused on the Midwest initially, our approach can be applied to other regions and used to understand changes in extreme events more broadly,” said Davenport. “This will help society better prepare for the impacts of climate change.”
    Story Source:
    Materials provided by Stanford University. Original written by Rob Jordan. Note: Content may be edited for style and length.