More stories

  • Spinal stimulators repurposed to restore touch in lost limb

    Imagine tying your shoes, taking a sip of coffee or cracking an egg, all without any feeling in your hand. That’s life for users of even the most advanced prosthetic arms.
    Although it’s possible to simulate touch by stimulating the remaining nerves in the stump after an amputation, such a surgery is highly complex and individualized. But according to a new study from the University of Pittsburgh’s Rehab Neural Engineering Labs, spinal cord stimulators commonly used to relieve chronic pain could provide a straightforward and universal method for adding sensory feedback to a prosthetic arm.
    For this study, published today in eLife, four amputees received spinal stimulators, which, when turned on, create the illusion of sensations in the missing arm.
    “What’s unique about this work is that we’re using devices that are already implanted in 50,000 people a year for pain — physicians in every major medical center across the country know how to do these surgical procedures — and we get similar results to highly specialized devices and procedures,” said study senior author Lee Fisher, Ph.D., assistant professor of physical medicine and rehabilitation, University of Pittsburgh School of Medicine.
    The strings of implanted spinal electrodes, which Fisher describes as about the size and shape of “fat spaghetti noodles,” run along the spinal cord, where they sit slightly to one side, atop the same nerve roots that would normally transmit sensations from the arm. Since it’s a spinal cord implant, even a person with a shoulder-level amputation can use this device.
    Fisher’s team sent electrical pulses through different spots in the implanted electrodes, one at a time, while participants used a tablet to report what they were feeling and where.

    All the participants experienced sensations somewhere on their missing arm or hand, and they indicated the extent of the area affected by drawing on a blank human form. Three participants reported feelings localized to a single finger or part of the palm.
    “I was pretty surprised at how small the areas of these sensations were that people were reporting,” Fisher said. “That’s important because we want to generate sensations only where the prosthetic limb is making contact with objects.”
    When asked to describe not just where but how the stimulation felt, all four participants reported feeling natural sensations, such as touch and pressure, though these feelings often were mixed with decidedly artificial sensations, such as tingling, buzzing or prickling.
    Although some degree of electrode migration is inevitable in the first few days after the leads are implanted, Fisher’s team found that the electrodes, and the sensations they generated, mostly stayed put across the month-long duration of the experiment. That’s important for the ultimate goal of creating a prosthetic arm that provides sensory feedback to the user.
    “Stability of these devices is really critical,” Fisher said. “If the electrodes are moving around, that’s going to change what a person feels when we stimulate.”
    The next big challenges are to design spinal stimulators that can be fully implanted, rather than connected to a stimulator outside the body, and to demonstrate that the sensory feedback can improve control of a prosthetic hand during functional tasks like tying shoes or holding an egg without accidentally crushing it. Shrinking the size of the contacts — the parts of the electrode where current comes out — is another priority; that might allow users to experience even more localized sensations.
    “Our goal here wasn’t to develop the final device that someone would use permanently,” Fisher said. “Mostly we wanted to demonstrate the possibility that something like this could work.”

  • 3D hand-sensing wristband signals future of wearable tech

    In a potential breakthrough in wearable sensing technology, researchers from Cornell University and the University of Wisconsin-Madison have designed a wrist-mounted device that continuously tracks the entire human hand in 3D.
    The bracelet, called FingerTrak, can sense and translate into 3D the many positions of the human hand, including 20 finger joint positions, using three or four miniature, low-resolution thermal cameras that read contours on the wrist. The device could be used in sign language translation, virtual reality, mobile health, human-robot interaction and other areas, the researchers said.
    “This was a major discovery by our team — that by looking at your wrist contours, the technology could reconstruct in 3D, with keen accuracy, where your fingers are,” said Cheng Zhang, assistant professor of information science and director of Cornell’s new SciFi Lab, where FingerTrak was developed. “It’s the first system to reconstruct your full hand posture based on the contours of the wrist.”
    Past wrist-mounted cameras have been considered too bulky and obtrusive for everyday use, and most could reconstruct only a few discrete hand gestures.
    FingerTrak’s breakthrough is a lightweight bracelet that allows free movement. Instead of using cameras to directly capture the position of the fingers, as most prior research has done, FingerTrak uses a combination of thermal imaging and machine learning to virtually reconstruct the hand. The bracelet’s four miniature thermal cameras, each about the size of a pea, snap multiple “silhouette” images to form an outline of the hand.
    A deep neural network then stitches these silhouette images together and reconstructs the virtual hand in 3D. Through this method, Zhang and his fellow researchers were able to capture the entire hand pose, even when the hand is holding an object.
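The article's description of the pipeline (thermal silhouettes in, joint positions out) can be sketched in miniature. The sketch below is purely illustrative, not the authors' network: the camera resolution, layer sizes, and random stand-in weights are all assumptions.

```python
import numpy as np

# Toy sketch of a FingerTrak-style pipeline (not the authors' architecture):
# four low-resolution thermal "silhouette" images of the wrist are flattened,
# concatenated, and mapped by a small network to 20 joint-position estimates.
rng = np.random.default_rng(0)

N_CAMERAS, H, W = 4, 32, 24       # hypothetical sensor count and resolution
N_JOINTS = 20                     # finger joint positions, as in the article

# Random weights stand in for a trained network.
W1 = rng.normal(0, 0.01, (N_CAMERAS * H * W, 64))
W2 = rng.normal(0, 0.01, (64, N_JOINTS))

def predict_hand_pose(silhouettes):
    """Map stacked wrist silhouettes to 20 joint-angle estimates."""
    x = silhouettes.reshape(-1)   # flatten and concatenate all four images
    h = np.tanh(x @ W1)           # hidden layer
    return h @ W2                 # joint-position regression output

frames = rng.random((N_CAMERAS, H, W))   # fake thermal frames in [0, 1)
pose = predict_hand_pose(frames)
print(pose.shape)                        # (20,)
```

The real system uses a deep network trained on captured hand data; the point here is only the data flow from several tiny wrist images to a full-hand pose vector.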
    While the technology has a wide range of possible uses, Zhang said the most promising is its potential application in sign language translation.
    “Current sign language translation technology requires the user to either wear a glove or have a camera in the environment, both of which are cumbersome,” he said. “This could really push the current technology into new areas.”
    FingerTrak could also have an impact on health care applications, specifically in monitoring disorders that affect fine-motor skills, said Yin Li, assistant professor of biostatistics and medical informatics at the University of Wisconsin-Madison School of Medicine and Public Health, who contributed to the software behind FingerTrak.
    “How we move our hands and fingers often tells about our health condition,” Li said. “A device like this might be used to better understand how the elderly use their hands in daily life, helping to detect early signs of diseases like Parkinson’s and Alzheimer’s.”
    “FingerTrak: Continuous 3D Hand Pose Tracking by Deep Learning Hand Silhouettes Captured by Miniature Thermal Cameras on Wrist” was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. It will also be presented at the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing, taking place virtually Sept. 12-16.

    Story Source:
    Materials provided by Cornell University. Original written by Louis DiPietro. Note: Content may be edited for style and length.

  • Powerful human-like hands create safer human-robotics interactions

    Need a robot with a soft touch? A team of Michigan State University engineers has designed and developed a novel humanoid hand that may be able to help.
    In industrial settings, robots often are used for tasks that require repetitive grasping and manipulation of objects. The end of a robot where a human hand would be found is known as an end effector or gripper.
    “The novel humanoid hand design is a soft-hard hybrid flexible gripper. It can generate larger grasping force than a traditional pure soft hand, and simultaneously be more stable for accurate manipulation than other counterparts used for heavier objects,” said lead author Changyong Cao, director of the Laboratory for Soft Machines and Electronics at MSU and assistant professor in Packaging, Mechanical Engineering, and Electrical and Computer Engineering.
    This new research, “Soft Humanoid Hands with Large Grasping Force Enabled by Flexible Hybrid Pneumatic Actuators,” is published in Soft Robotics.
    Generally, soft-hand grippers — which are used primarily in settings where an object may be fragile, light and irregularly shaped — present several disadvantages: sharp surfaces, poor stability in grasping unbalanced loads and relatively weak grasping force for handling heavy loads.
    When designing the new model, Cao and his team took into consideration a number of human-environment interactions, from fruit picking to sensitive medical care. They identified that some processes require a safe but firm interaction with fragile objects; most existing gripping systems are not suitable for these purposes.

    The team explained that the design novelty resulted in a prototype demonstrating the merits of a responsive, fast, lightweight gripper capable of handling a multitude of tasks that traditionally required different types of gripping systems.
    Each finger of the soft humanoid hand is constructed from a flexible hybrid pneumatic actuator — or FHPA — driven to bend by pressurized air, creating a modular framework for movement in which each digit moves independently of the others.
    “Traditional rigid grippers for industrial applications are generally made of simple but reliable rigid structures that help in generating large forces, high accuracy and repeatability,” Cao said. “The proposed soft humanoid hand has demonstrated excellent adaptability and compatibility in grasping complex-shaped and fragile objects while simultaneously maintaining a high level of stiffness for exerting strong clamping forces to lift heavy loads.”
    In essence, the best of both worlds, Cao explained.
    The FHPA is composed of both hard and soft components, built around a unique structural combination of actuated air bladders and a bone-like spring core.
    “They combine the advantages of the deformability, adaptability and compliance of soft grippers while maintaining the large output force that originates from the rigidity of the actuator,” Cao said.
    He believes the prototype can be useful in industries such as fruit picking, automated packaging, medical care, rehabilitation and surgical robotics.
    With ample room for future research and development, the team hopes to combine its advances with Cao’s recent work on so-called ‘smart’ grippers, integrating printed sensors in the gripping material. And by combining the hybrid gripper with ‘soft arms’ models, the researchers aim to more accurately mimic precise human actions.

    Story Source:
    Materials provided by Michigan State University. Note: Content may be edited for style and length.

  • New model connects respiratory droplet physics with spread of Covid-19

    Respiratory droplets from a cough or sneeze travel farther and last longer in humid, cold climates than in hot, dry ones, according to a study on droplet physics by an international team of engineers. The researchers incorporated this understanding of the impact of environmental factors on droplet spread into a new mathematical model that can be used to predict the early spread of respiratory viruses including COVID-19, and the role of respiratory droplets in that spread.
    The team developed this new model to better understand the role that droplet clouds play in the spread of respiratory viruses. Their model is the first to be based on a fundamental approach used to study chemical reactions, called collision rate theory, which looks at the interaction and collision rates of a droplet cloud exhaled by an infected person with healthy people. Their work connects population-scale human interactions with the team’s micro-scale results on how far and fast droplets spread, and how long they last.
    Their results were published June 30 in the journal Physics of Fluids.
    “The basic fundamental form of a chemical reaction is two molecules are colliding. How frequently they’re colliding will give you how fast the reaction progresses,” said Abhishek Saha, a professor of mechanical engineering at the University of California San Diego, and one of the authors of the paper. “It’s exactly the same here; how frequently healthy people are coming in contact with an infected droplet cloud can be a measure of how fast the disease can spread.”
    They found that, depending on weather conditions, some respiratory droplets travel between 8 feet and 13 feet away from their source before evaporating, without even accounting for wind. This means that without masks, six feet of social distance may not be enough to keep one person’s exhaled particles from reaching someone else.
    “Droplet physics are significantly dependent on weather,” said Saha. “If you’re in a colder, humid climate, droplets from a sneeze or cough are going to last longer and spread farther than if you’re in a hot dry climate, where they’ll get evaporated faster. We incorporated these parameters into our model of infection spread; they aren’t included in existing models as far as we can tell.”
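Saha's chemical-reaction analogy can be illustrated with a toy calculation. The sketch below is not the published model: the rate constant k, which loosely bundles droplet-cloud reach and lifetime, and all the numbers in it are hypothetical.

```python
# Toy illustration of the collision-rate analogy (not the published model):
# new infections accrue at a rate proportional to how often healthy people
# encounter infected droplet clouds, like a bimolecular reaction rate
# ~ k * [A] * [B]. Weather enters through the effective rate constant k.
def simulate_spread(population, infected0, k, days, dt=0.1):
    """Euler-integrate dI/dt = k * S * I, where k bundles droplet-cloud
    reach and lifetime (larger in cold, humid weather per the study)."""
    S = population - infected0
    I = infected0
    for _ in range(int(days / dt)):
        new = k * S * I * dt
        new = min(new, S)       # cannot infect more than remain susceptible
        S -= new
        I += new
    return I

# Hypothetical rate constants: short-lived droplets vs. far-travelling ones.
hot_dry = simulate_spread(10_000, 10, k=1e-5, days=30)
cold_humid = simulate_spread(10_000, 10, k=2e-5, days=30)
print(cold_humid > hot_dry)     # True: a larger contact rate spreads faster
```

Doubling the effective collision rate more than doubles the 30-day case count because the growth is exponential at first, which is the qualitative point of the analogy.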
    The researchers hope that their more detailed model for rate of infection spread and droplet spread will help inform public health policies at a more local level, and can be used in the future to better understand the role of environmental factors in virus spread.

    They found that at 35°C (95°F) and 40 percent relative humidity, a droplet can travel about 8 feet. However, at 5°C (41°F) and 80 percent humidity, a droplet can travel up to 12 feet. The team also found that droplets in the range of 14-48 microns pose a higher risk, as they take longer to evaporate and travel greater distances. Smaller droplets evaporate within a fraction of a second, while droplets larger than 100 microns quickly settle to the ground due to their weight.
    This is further evidence of the importance of wearing masks, which would trap particles in this critical range.
    The team of engineers from the UC San Diego Jacobs School of Engineering, University of Toronto and Indian Institute of Science are all experts in the aerodynamics and physics of droplets for applications including propulsion systems, combustion or thermal sprays. They turned their attention and expertise to droplets released when people sneeze, cough or talk when it became clear that COVID-19 is spread through these respiratory droplets. They applied existing models for chemical reactions and physics principles to droplets of a salt water solution — saliva is high in sodium chloride — which they studied in an ultrasonic levitator to determine the size, spread, and lifespan of these particles in various environmental conditions.
    Many current pandemic models use fitting parameters to be able to apply the data to an entire population. The new model aims to change that.
    “Our model is completely based on ‘first principles’ by connecting physical laws that are well understood, so there is next to no fitting involved,” said Swetaprovo Chaudhuri, professor at University of Toronto and a co-author. “Of course, we make idealized assumptions, and there are variabilities in some parameters, but as we improve each of the submodels with specific experiments and include the present best practices in epidemiology, maybe a first-principles pandemic model with high predictive capability could be possible.”
    There are limitations to this new model, but the team is already working to increase the model’s versatility.
    “Our next step is to relax a few simplifications and to generalize the model by including different modes of transmission,” said Saptarshi Basu, professor at the Indian Institute of Science and a co-author. “A set of experiments are also underway to investigate the respiratory droplets that settle on commonly touched surfaces.”

  • Quantum exciton found in magnetic van der Waals material NiPS3

    Things can always be done faster, but can anything beat light? Computing with light instead of electricity is seen as a breakthrough for boosting computer speeds. Transistors, the building blocks of data circuits, require electrical signals to be converted into light in order to transmit information over a fiber-optic cable. Optical computing could save the time and energy currently spent on that conversion. Beyond high-speed transmission, the outstanding low-noise properties of photons make them ideal for exploring quantum mechanics. At the heart of such compelling applications is securing a stable light source, especially one in a quantum state.
    When light is shone onto electrons in a semiconductor crystal, a conduction electron can combine with a positively charged hole in the semiconductor to create a bound state, the so-called exciton. Excitons flow like electrons but emit light when the electron-hole pair recombines, so they could speed up data transmission circuits. In addition, exotic physical phases such as superconductivity are speculated to arise from excitons. Yet despite the richness of these theoretical predictions and the field’s long history, the physics of excitons has rarely moved beyond the initial concept, first reported in the 1930s, of a “simple” binding of an electron and a hole.
    In the latest issue of the journal Nature, a research team led by Professor PARK Je-Geun of the Department of Physics and Astronomy, Seoul National University — previously Associate Director of the Center for Correlated Electron Systems within the Institute for Basic Science (IBS, South Korea) — found a new type of exciton in the magnetic van der Waals material NiPS3. “To host such a novel state of exciton physics requires a direct bandgap and, most importantly, magnetic order with strong quantum correlation. Notably, this study makes the latter possible with NiPS3, a magnetic van der Waals material and an intrinsically correlated system,” notes Professor PARK Je-Geun, corresponding author of the study. Prof. Park’s group reported the first realization of truly 2D magnetic van der Waals materials using NiPS3 in 2016. Using the same material, they have now demonstrated that NiPS3 hosts a magnetic exciton state completely different from the conventional excitons known to date. This exciton state is intrinsically of many-body origin, which makes it an actual realization of a genuine quantum state. As such, the work signals a significant shift in this vibrant field’s 80-year history.
    All of this unusual exciton physics in NiPS3 began with bizarrely high peaks spotted in early photoluminescence (PL) experiments done in 2016 by Prof. CHEONG Hyeonsik of Sogang University, soon followed by an optical absorption experiment by Prof. KIM Jae Hoon of Yonsei University. Both sets of optical data clearly indicated two points of significant importance: one is the temperature dependence of the exciton, and the other is its extremely narrow resonant nature.
    To understand these unusual findings, Prof. Park used a resonant inelastic X-ray scattering technique, known as RIXS, together with Dr. Ke-Jin Zhou at the Diamond Light Source in the UK. This experiment was critical to the success of the overall project. First, it confirmed the existence of the 1.5 eV exciton peak beyond any doubt. Second, it provided an inspiring guide to how a theoretical model and the ensuing calculations could be constructed. This connection between experiment and theory played a pivotal role in cracking the big puzzle of NiPS3.
    Guided by this analysis, Dr. KIM Beom Hyun and Prof. SON Young-Woo of the Korea Institute for Advanced Study carried out extensive theoretical many-body calculations. By exploring some 1,500,000 quantum states in the Hilbert space, they concluded that all the experimental results were consistent with a particular set of parameters. When they compared the theoretical results with the RIXS data (Fig. 3-a), it was clear that they had arrived at a full understanding of the very unusual exciton phase of NiPS3. At last, the team could theoretically understand the magnetic exciton state of many-body nature, i.e., a genuine quantum exciton state.
    There are several vital distinctions between the quantum magnetic exciton discovered in NiPS3 and the more conventional excitons found in other 2D materials and in other insulators that host exciton states. First and foremost, the exciton found in NiPS3 is intrinsically a quantum state arising from a transition from a Zhang-Rice triplet to a Zhang-Rice singlet. Second, it is an almost resolution-limited state, indicative of some kind of coherence among the states. By comparison, all previously reported exciton states arise from extended Bloch states.
    It is probably too early to make definite predictions about where this discovery will lead the field of magnetic van der Waals research, let alone our daily lives. But it is already clear, as Professor Park explains, that “The quantum nature of the new exciton state is unique and will attract a lot of attention for its potentials in the field of quantum information and quantum computing, to name only a few. Our work opens an interesting possibility of many magnetic van der Waals materials having similar quantum exciton states.”

    Story Source:
    Materials provided by Institute for Basic Science. Note: Content may be edited for style and length.

  • Which way to the fridge? Common sense helps robots navigate

    A robot travelling from point A to point B is more efficient if it understands that point A is the living room couch and point B is a refrigerator, even if it’s in an unfamiliar place. That’s the common sense idea behind a “semantic” navigation system developed by Carnegie Mellon University and Facebook AI Research (FAIR).
    That navigation system, called SemExp, last month won the Habitat ObjectNav Challenge during the virtual Computer Vision and Pattern Recognition conference, edging a team from Samsung Research China. It was the second consecutive first-place finish for the CMU team in the annual challenge.
    SemExp, or Goal-Oriented Semantic Exploration, uses machine learning to train a robot to recognize objects — knowing the difference between a kitchen table and an end table, for instance — and to understand where in a home such objects are likely to be found. This enables the system to think strategically about how to search for something, said Devendra S. Chaplot, a Ph.D. student in CMU’s Machine Learning Department.
    “Common sense says that if you’re looking for a refrigerator, you’d better go to the kitchen,” Chaplot said. Classical robotic navigation systems, by contrast, explore a space by building a map showing obstacles. The robot eventually gets to where it needs to go, but the route can be circuitous.
    Previous attempts to use machine learning to train semantic navigation systems have been hampered because they tend to memorize objects and their locations in specific environments. Not only are these environments complex, but the system often has difficulty generalizing what it has learned to different environments.
    Chaplot — working with FAIR’s Dhiraj Gandhi, along with Abhinav Gupta, associate professor in the Robotics Institute, and Ruslan Salakhutdinov, professor in the Machine Learning Department — sidestepped that problem by making SemExp a modular system.
    The system uses its semantic insights to determine the best places to look for a specific object, Chaplot said. “Once you decide where to go, you can just use classical planning to get you there.”
    This modular approach turns out to be efficient in several ways. The learning process can concentrate on relationships between objects and room layouts, rather than also learning route planning. The semantic reasoning determines the most efficient search strategy. Finally, classical navigation planning gets the robot where it needs to go as quickly as possible.
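The two-stage idea (semantic reasoning picks a destination, classical planning reaches it) can be sketched as follows. This is a toy illustration, not SemExp itself: the object-room priors, the occupancy grid, and the room locations are all made up.

```python
from collections import deque

# Toy sketch of the modular idea behind SemExp (not the actual system):
# 1) a semantic prior says which room an object is most likely in;
# 2) classical grid-based planning (here, plain BFS) gets the robot there.
OBJECT_ROOM_PRIOR = {            # hypothetical learned object-room priors
    "refrigerator": "kitchen",
    "couch": "living_room",
    "bed": "bedroom",
}

def plan_path(grid, start, goal):
    """BFS over a 2D occupancy grid; 0 = free cell, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:           # reconstruct the path back to start
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
room_of = {"kitchen": (2, 0)}                   # hypothetical room location
goal_room = OBJECT_ROOM_PRIOR["refrigerator"]   # semantic step: go to kitchen
path = plan_path(grid, (0, 0), room_of[goal_room])   # classical step
print(path[0], path[-1])                        # (0, 0) (2, 0)
```

The split mirrors the article: learning is spent only on the object-room relationships, while the pathfinding itself stays classical and reliable.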
    Semantic navigation ultimately will make it easier for people to interact with robots, enabling them to simply tell the robot to fetch an item in a particular place, or give it directions such as “go to the second door on the left.”
    Video: https://www.youtube.com/watch?v=FhIut4bqFyw&feature=emb_logo

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice. Note: Content may be edited for style and length.

  • Atomtronic device could probe boundary between quantum, everyday worlds

    A new device that relies on flowing clouds of ultracold atoms promises potential tests of the intersection between the weirdness of the quantum world and the familiarity of the macroscopic world we experience every day. The atomtronic Superconducting QUantum Interference Device (SQUID) is also potentially useful for ultrasensitive rotation measurements and as a component in quantum computers.
    “In a conventional SQUID, the quantum interference in electron currents can be used to make one of the most sensitive magnetic field detectors,” said Changhyun Ryu, a physicist with the Material Physics and Applications Quantum group at Los Alamos National Laboratory. “We use neutral atoms rather than charged electrons. Instead of responding to magnetic fields, the atomtronic version of a SQUID is sensitive to mechanical rotation.”
    Although small, at only about ten millionths of a meter across, the atomtronic SQUID is thousands of times larger than the molecules and atoms that are typically governed by the laws of quantum mechanics. The relatively large scale of the device lets it test theories of macroscopic realism, which could help explain how the world we are familiar with is compatible with the quantum weirdness that rules the universe on very small scales. On a more pragmatic level, atomtronic SQUIDs could offer highly sensitive rotation sensors or perform calculations as part of quantum computers.
    The researchers created the device by trapping cold atoms in a sheet of laser light. A second laser intersecting the sheet “painted” patterns that guided the atoms into two semicircles separated by small gaps known as Josephson Junctions.
    When the SQUID is rotated and the Josephson Junctions are moved toward each other, the populations of atoms in the two semicircles change as a result of quantum mechanical interference of the currents through the junctions. By counting the atoms in each semicircle, the researchers can very precisely determine the rate at which the system is rotating.
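The rotation sensitivity can be illustrated with the standard matter-wave Sagnac relation, in which the interference phase grows linearly with the rotation rate. The sketch below is an idealized toy, not the experiment's analysis: the atomic species (rubidium-87) and the simple two-path interference formula are assumptions.

```python
import numpy as np

# Idealized sketch of an atomtronic rotation readout (not the experiment's
# analysis): matter waves enclosing area A pick up a Sagnac phase
#   phi = 2 * m * Omega * A / hbar
# at rotation rate Omega, which modulates the atom populations in the two
# semicircles through interference.
HBAR = 1.054_571_817e-34     # reduced Planck constant, J*s
M_RB87 = 1.443e-25           # kg, mass of a rubidium-87 atom (assumed species)

def sagnac_phase(omega, radius):
    """Interference phase accumulated at rotation rate omega (rad/s)."""
    area = np.pi * radius ** 2
    return 2 * M_RB87 * omega * area / HBAR

def population_fraction(omega, radius):
    """Idealized two-path interference: fraction of atoms in one semicircle."""
    return 0.5 * (1 + np.cos(sagnac_phase(omega, radius)))

radius = 5e-6   # a ring about ten millionths of a meter across, as in the article
# Faster rotation shifts the phase, and hence the measured atom counts.
print(sagnac_phase(1.0, radius) > sagnac_phase(0.1, radius))   # True
```

Because the atomic mass replaces the photon energy in the phase formula, even a micron-scale atom ring accumulates a measurable phase, which is why such devices are attractive as rotation sensors.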
    As the first prototype atomtronic SQUID, the device has a long way to go before it can lead to new guidance systems or insights into the connection between the quantum and classical worlds. The researchers expect that scaling the device up to produce larger diameter atomtronic SQUIDs could open the door to practical applications and new quantum mechanical insights.

    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Predicting your personality from your smartphone data

    Everyone who uses a smartphone unavoidably generates masses of digital data that are accessible to others, and these data provide clues to the user’s personality. Psychologists at Ludwig-Maximilians-Universitaet in Munich (LMU) are studying how revealing these clues are.
    For most people around the world, smartphones have become an integral and indispensable component of their daily lives. The digital data that these devices incessantly collect are a veritable goldmine — not only for the five largest American IT companies, which make use of them for advertising purposes, but also in other contexts. For instance, computational social scientists utilize smartphone data in order to learn more about personality traits and social behavior. In a study that appears in the journal PNAS, a team of researchers led by LMU psychologist Markus Bühner set out to determine whether conventional data passively collected by smartphones (such as times or frequencies of use) provide insights into users’ personalities. The answer was clear-cut. “Yes, automated analysis of these data does allow us to draw conclusions about the personalities of users, at least for most of the major dimensions of personality,” says Clemens Stachl, who used to work with Markus Bühner (Chair of Psychological Methodologies and Diagnostics at LMU) and is now a researcher at Stanford University in California.
    The LMU team recruited 624 volunteers for their PhoneStudy project. The participants agreed to fill out an extensive questionnaire describing their personality traits, and to install an app that had been developed specially for the study on their phones for 30 days. The app was designed to collect coded information relating to the behavior of the user. The researchers were primarily interested in data pertaining to communication patterns, social behavior and mobility, together with users’ choice and consumption of music, the selection of apps used, and the temporal distribution of their phone usage over the course of the day. All the data on personality and smartphone use were then analyzed with the aid of machine-learning algorithms, which were trained to recognize and extract patterns from the behavioral data, and relate these patterns to the information obtained from the personality surveys. The ability of the algorithms to predict the personality traits of the users was then cross-validated on the basis of a new dataset. “By far the most difficult part of the project was the pre-processing of the huge amount of data collected and the training of the predictive algorithms,” says Stachl. “In fact, in order to perform the necessary calculations, we had to resort to the cluster of high-performance computers at the Leibniz Supercomputing Centre in Garching (LRZ).”
    The researchers focused on the five most significant personality dimensions (the Big Five) identified by psychologists, which enable them to characterize personality differences between individuals in a comprehensive way. These dimensions relate to the self-assessed contribution of each of the following traits to a given individual’s personality: (1) openness (willingness to adopt new ideas, experiences and values), (2) conscientiousness (dependability, punctuality, ambitiousness and discipline), (3) extraversion (sociability, assertiveness, adventurousness, dynamism and friendliness), (4) agreeableness (willingness to trust others, good natured, outgoing, obliging, helpful) and (5) emotional stability (self-confidence, equanimity, positivity, self-control). The automated analysis revealed that the algorithm was indeed able to successfully derive most of these personality traits from combinations of the multifarious elements of their smartphone usage. Moreover, the results provide hints as to which types of digital behavior are most informative for specific self-assessments of personality. For example, data pertaining to communication patterns and social behavior (as reflected by smartphone use) correlated strongly with levels of self-reported extraversion, while information relating to patterns of day and night-time activity was significantly predictive of self-reported degrees of conscientiousness. Notably, links with the category ‘openness’ only became apparent when highly disparate types of data (e.g., app usage) were combined.
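The overall shape of the analysis (behavioral features in, cross-validated trait predictions out) can be sketched on synthetic data. The sketch below is not the study's pipeline: the features, the linear "trait," and the simple ridge regression are stand-ins for the far richer PhoneStudy data and machine-learning models.

```python
import numpy as np

# Toy sketch of the study's pipeline shape (synthetic data, not PhoneStudy's):
# behavioral features extracted from phone logs predict a self-reported trait
# score, evaluated with held-out cross-validation as in the article.
rng = np.random.default_rng(42)

n_users, n_features = 624, 12    # 624 volunteers, as in the article
X = rng.normal(size=(n_users, n_features))           # e.g. call counts, app use
true_w = rng.normal(size=n_features)                 # hidden behavior-trait link
y = X @ true_w + rng.normal(scale=2.0, size=n_users) # noisy "extraversion" score

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def cross_val_corr(X, y, folds=5):
    """Correlation between held-out predictions and self-reports."""
    idx = np.arange(len(y))
    preds = np.empty_like(y)
    for f in range(folds):
        test = idx[f::folds]                  # every folds-th user held out
        train = np.setdiff1d(idx, test)
        w = ridge_fit(X[train], y[train])
        preds[test] = X[test] @ w
    return np.corrcoef(preds, y)[0, 1]

r = cross_val_corr(X, y)
print(r > 0.3)   # the synthetic trait is recoverable from the "behavior"
```

The cross-validation step is the important part: predictions are always scored on users the model never saw, which is what lets the study claim the behavior-trait links generalize rather than being memorized.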
    The results of the study are of great value to researchers, as studies have so far been almost exclusively based on self-assessments. The conventional method has proven to be sufficiently reliable in predicting levels of professional success, for instance. “Nevertheless, we still know very little about how people actually behave in their everyday lives — apart from what they choose to tell us on our questionnaires,” says Markus Bühner. “Thanks to their broad distribution, their intensive use and their very high level of performance, smartphones are an ideal tool with which to probe the relationships between self-reported and real patterns of behavior.”
    Clemens Stachl is aware that his research might further stimulate the appetites of the dominant IT firms for data. In addition to regulating the use of passively collected data and strengthening rights to privacy, we also need to take a comprehensive look at the field of artificial intelligence, he says. “The user, not the machine, must be the primary focus of research in this area. It would be a serious mistake to adopt machine-based methods of learning without serious consideration of their wider implications.” The potential of these applications — in both research and business — is tremendous. “The opportunities opened up by today’s data-driven society will undoubtedly improve the lives of large numbers of people,” Stachl says. “But we must ensure that all sections of the population share the benefits offered by digital technologies.”