More stories

  • AI learns to type on a phone like humans

    Touchscreens are notoriously difficult to type on. Since we can’t feel the keys, we rely on the sense of sight to move our fingers to the right places and check for errors, a combination of efforts we can’t pull off at the same time. To really understand how people type on touchscreens, researchers at Aalto University and the Finnish Center for Artificial Intelligence (FCAI) have created the first artificial intelligence model that predicts how people move their eyes and fingers while typing.
    The AI model can simulate how a human user would type any sentence on any keyboard design. It makes errors, detects them — though not always immediately — and corrects them, very much like humans would. The simulation also predicts how people adapt to changing circumstances, such as how their writing style shifts when they start using a new auto-correction system or keyboard design.
    ‘Previously, touchscreen typing has been understood mainly from the perspective of how our fingers move. AI-based methods have helped shed new light on these movements: what we’ve discovered is the importance of deciding when and where to look. Now, we can make much better predictions on how people type on their phones or tablets,’ says Dr. Jussi Jokinen, who led the work.
    The study, to be presented at ACM CHI on 12 May, lays the groundwork for developing, for instance, better and even personalized text entry solutions.
    ‘Now that we have a realistic simulation of how humans type on touchscreens, it should be a lot easier to optimize keyboard designs for better typing — meaning less errors, faster typing, and, most importantly for me, less frustration,’ Jokinen explains.
    In addition to predicting how a generic person would type, the model is also able to account for different types of users, like those with motor impairments, and could be used to develop typing aids or interfaces designed with these groups in mind. For those facing no particular challenges, it can deduce from personal writing styles — by noting, for instance, the mistakes that repeatedly occur in texts and emails — what kind of a keyboard, or auto-correction system, would best serve a user.
    The novel approach builds on the group’s earlier empirical research, which provided the basis for a cognitive model of how humans type. The researchers then produced the generative model capable of typing independently. The work was done as part of a larger project on Interactive AI at the Finnish Center for Artificial Intelligence.
    The results are underpinned by a classic machine learning method, reinforcement learning, that the researchers extended to simulate people. Reinforcement learning is normally used to teach robots to solve tasks by trial and error; the team found a new way to use this method to generate behavior that closely matches that of humans — mistakes, corrections and all.
    ‘We gave the model the same abilities and bounds that we, as humans, have. When we asked it to type efficiently, it figured out how to best use these abilities. The end result is very similar to how humans type, without having to teach the model with human data,’ Jokinen says.
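    The toy script below is only an illustrative sketch of that idea, not the researchers’ actual model: a tabular Q-learning agent with a noisy “finger” learns when it pays to pause and proofread rather than keep pressing keys. The keyboard, reward values and noise level are hypothetical choices made for this example.

      import random
      from collections import defaultdict

      KEYS = "abcdefgh"        # toy one-dimensional keyboard
      TARGET = "bad"           # "sentence" to type
      MOTOR_NOISE = 0.15       # chance the finger lands on a neighbouring key

      def press(intended):
          # Simulate a noisy finger movement toward the intended key.
          if random.random() < MOTOR_NOISE:
              i = KEYS.index(intended)
              return KEYS[max(0, min(len(KEYS) - 1, i + random.choice((-1, 1))))]
          return intended

      # Two actions: press the next target character, or spend time proofreading,
      # which finds and fixes an error in the last character at a small time cost.
      ACTIONS = ("press", "proofread")
      Q = defaultdict(float)
      ALPHA, GAMMA, EPS = 0.2, 0.95, 0.1

      def last_ok(typed, pos):
          # True if nothing has been typed yet or the last character is correct.
          return (not typed) or typed[-1] == TARGET[pos - 1]

      def episode():
          typed, pos = [], 0
          while pos < len(TARGET):
              state = (pos, last_ok(typed, pos))
              if random.random() < EPS:
                  action = random.choice(ACTIONS)
              else:
                  action = max(ACTIONS, key=lambda a: Q[(state, a)])
              if action == "press":
                  ch = press(TARGET[pos])
                  typed.append(ch)
                  reward = 1.0 if ch == TARGET[pos] else -2.0   # errors are costly
                  pos += 1
              else:
                  reward = -0.3                                  # proofreading costs time
                  if typed and typed[-1] != TARGET[pos - 1]:
                      typed[-1] = TARGET[pos - 1]                # correct the slip
                      reward += 1.5
              nstate = (pos, last_ok(typed, pos))
              best_next = max(Q[(nstate, a)] for a in ACTIONS)
              Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

      for _ in range(3000):
          episode()
      print("value of proofreading right after a suspect keypress:",
            round(Q[((1, False), "proofread")], 2))

    After a few thousand episodes the agent typically assigns a positive value to pausing and proofreading right after a likely slip, a crude echo of the detect-and-correct behaviour described above.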
    Comparison with data from human typists confirmed that the model’s predictions were accurate. In the future, the team hopes to simulate slow and fast typing techniques to, for example, design useful learning modules for people who want to improve their typing.
    The paper, Touchscreen Typing As Optimal Supervisory Control, will be presented 12 May 2021 at the ACM CHI conference.
    Video: https://www.youtube.com/watch?v=6cl2OoTNB6g&t=1s
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Harnessing the hum of fluorescent lights for more efficient computing

    The property that makes fluorescent lights buzz could power a new generation of more efficient computing devices that store data with magnetic fields, rather than electricity.
    A team led by University of Michigan researchers has developed a material that’s at least twice as “magnetostrictive” and far less costly than other materials in its class. In addition to computing, it could also lead to better magnetic sensors for medical and security devices.
    Magnetostriction, which causes the buzz of fluorescent lights and electrical transformers, occurs when a material’s shape and magnetic field are linked — that is, a change in shape causes a change in magnetic field. The property could be key to a new generation of computing devices called magnetoelectrics.
    Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient, slashing the electricity requirements of the world’s computing infrastructure.
    Made of a combination of iron and gallium, the material is detailed in a paper published May 12 in Nature Communications. The team is led by U-M materials science and engineering professor John Heron and includes researchers from Intel; Cornell University; University of California, Berkeley; University of Wisconsin; Purdue University and elsewhere.
    Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data. Tiny pulses of electricity cause them to expand or contract slightly, flipping their magnetic field from positive to negative or vice versa. Because they don’t require a steady stream of electricity, as today’s chips do, they use a fraction of the energy.
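    As a rough, purely illustrative sketch of that energy argument, a cell that pays only when a bit is flipped can be compared with one that also burns standby power to hold its state. All numbers below are hypothetical placeholders, not figures from the paper.

      E_FLIP = 1e-15     # assumed energy per write (bit flip), joules
      P_STANDBY = 1e-12  # assumed standby power of a conventional volatile cell, watts
      FLIPS_PER_SECOND = 10.0
      SECONDS = 1.0

      # A magnetoelectric bit pays only when it is flipped; a volatile bit also
      # pays to hold its state while idle.
      magnetoelectric = E_FLIP * FLIPS_PER_SECOND * SECONDS
      conventional = E_FLIP * FLIPS_PER_SECOND * SECONDS + P_STANDBY * SECONDS

      print(f"magnetoelectric cell: {magnetoelectric:.1e} J")
      print(f"volatile cell:        {conventional:.1e} J")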

  • Tiny, wireless, injectable chips use ultrasound to monitor body processes

    Widely used to monitor and map biological signals, to support and enhance physiological functions, and to treat diseases, implantable medical devices are transforming healthcare and improving the quality of life for millions of people. Researchers are increasingly interested in designing wireless, miniaturized implantable medical devices for in vivo and in situ physiological monitoring. These devices could be used to monitor physiological conditions, such as temperature, blood pressure, glucose, and respiration for both diagnostic and therapeutic procedures.
    To date, conventional implanted electronics have been highly volume-inefficient — they generally require multiple chips, packaging, wires, and external transducers, and batteries are often needed for energy storage. A constant trend in electronics has been tighter integration of electronic components, often moving more and more functions onto the integrated circuit itself.
    Researchers at Columbia Engineering report that they have built what they say is the world’s smallest single-chip system, occupying a total volume of less than 0.1 mm³. The system is as small as a dust mite and visible only under a microscope. To achieve this, the team used ultrasound to both power the device and communicate with it wirelessly. The study was published online May 7 in Science Advances.
    “We wanted to see how far we could push the limits on how small a functioning chip we could make,” said the study’s leader Ken Shepard, Lau Family professor of electrical engineering and professor of biomedical engineering. “This is a new idea of ‘chip as system’ — this is a chip that alone, with nothing else, is a complete functioning electronic system. This should be revolutionary for developing wireless, miniaturized implantable medical devices that can sense different things, be used in clinical applications, and eventually approved for human use.”
    The team also included Elisa Konofagou, Robert and Margaret Hariri Professor of Biomedical Engineering and professor of radiology, as well as Stephen A. Lee, a PhD student in the Konofagou lab who assisted with the animal studies.
    The design was done by doctoral student Chen Shi, who is the first author of the study. Shi’s design is unique in its volumetric efficiency, the amount of function contained in a given volume. Traditional RF communication links are not possible for a device this small because the wavelength of the electromagnetic wave is too large relative to the size of the device. Ultrasound wavelengths at a given frequency are far smaller, since the speed of sound is so much lower than the speed of light, which is why the team chose ultrasound to both power the device and communicate with it wirelessly. They fabricated the “antenna” for communicating and powering with ultrasound directly on top of the chip.
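    To make the wavelength comparison concrete, here is a small illustrative calculation; the 10 MHz frequency and the tissue sound speed are assumed example values, not specifications of the Columbia device.

      C_LIGHT = 3.0e8          # speed of light in vacuum, m/s
      C_SOUND_TISSUE = 1540.0  # typical speed of sound in soft tissue, m/s
      FREQ = 10e6              # assumed example frequency: 10 MHz

      lambda_em = C_LIGHT / FREQ          # electromagnetic wavelength
      lambda_us = C_SOUND_TISSUE / FREQ   # ultrasound wavelength in tissue

      print(f"EM wavelength at 10 MHz:         {lambda_em:.1f} m")         # about 30 m
      print(f"ultrasound wavelength at 10 MHz: {lambda_us * 1e3:.2f} mm")  # about 0.15 mm

    An acoustic wavelength of a fraction of a millimetre is compatible with a sub-millimetre chip, whereas a tens-of-metres electromagnetic wavelength is not, which is the point made in the paragraph above.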
    The chip, which is the entire implantable/injectable mote with no additional packaging, was fabricated at the Taiwan Semiconductor Manufacturing Company with additional process modifications performed in the Columbia Nano Initiative cleanroom and the City University of New York Advanced Science Research Center (ASRC) Nanofabrication Facility.
    Shepard commented, “This is a nice example of ‘more than Moore’ technology — we introduced new materials onto standard complementary metal-oxide-semiconductor (CMOS) to provide new function. In this case, we added piezoelectric materials directly onto the integrated circuit to transduce acoustic energy to electrical energy.”
    Konofagou added, “Ultrasound is continuing to grow in clinical importance as new tools and techniques become available. This work continues this trend.”
    The team’s goal is to develop chips that can be injected into the body with a hypodermic needle and then communicate back out of the body using ultrasound, providing information about something they measure locally. The current devices measure body temperature, but there are many more possibilities the team is working on.
    Story Source:
    Materials provided by Columbia University School of Engineering and Applied Science. Original written by Holly Evarts. Note: Content may be edited for style and length.

  • Engine converts random jiggling of microscopic particle into stored energy

    Simon Fraser University researchers have designed a remarkably fast engine that taps into a new kind of fuel — information.
    The development of this engine, which converts the random jiggling of a microscopic particle into stored energy, is outlined in research published this week in the Proceedings of the National Academy of Sciences (PNAS) and could lead to significant advances in the speed and cost of computers and bio-nanotechnologies.
    SFU physics professor and senior author John Bechhoefer says researchers’ understanding of how to rapidly and efficiently convert information into “work” may inform the design and creation of real-world information engines.
    “We wanted to find out how fast an information engine can go and how much energy it can extract, so we made one,” says Bechhoefer, whose experimental group collaborated with theorists led by SFU physics professor David Sivak.
    Engines of this type were first proposed over 150 years ago but actually making them has only recently become possible.
    “By systematically studying this engine, and choosing the right system characteristics, we have pushed its capabilities over ten times farther than other similar implementations, thus making it the current best-in-class,” says Sivak.
    The information engine designed by SFU researchers consists of a microscopic particle immersed in water and attached to a spring which, itself, is fixed to a movable stage. Researchers then observe the particle bouncing up and down due to thermal motion.
    “When we see an upward bounce, we move the stage up in response,” explains lead author and PhD student Tushar Saha. “When we see a downward bounce, we wait. This ends up lifting the entire system using only information about the particle’s position.”
    Repeating this procedure, they raise the particle “a great height, and thus store a significant amount of gravitational energy,” without having to directly pull on the particle.
    Saha further explains that, “in the lab, we implement this engine with an instrument known as an optical trap, which uses a laser to create a force on the particle that mimics that of the spring and stage.”
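    A minimal toy simulation of that feedback rule looks like the sketch below; the parameters are arbitrary made-up values, not those of the SFU experiment. The particle jiggles in a trap, and the trap centre is ratcheted upward only when a thermal kick has already carried the particle above it.

      import random

      random.seed(1)
      K = 0.05          # relaxation rate toward the trap centre per step
      NOISE = 0.05      # size of a thermal kick per step
      trap_center = 0.0
      x = 0.0           # particle height in the lab frame
      STEPS = 20000

      for _ in range(STEPS):
          # Overdamped motion: relax toward the trap centre, plus a thermal kick.
          x += -K * (x - trap_center) + random.gauss(0.0, NOISE)
          # Feedback rule from the passage above: if the particle has fluctuated
          # above the trap centre, move the trap up to meet it; otherwise wait.
          if x > trap_center:
              trap_center = x

      print(f"net height ratcheted up from thermal fluctuations: {trap_center:.1f} (arbitrary units)")

    Because the trap is only ever moved up to where the particle already is, the net rise comes from thermal fluctuations selected by measurement, which is the sense in which information acts as the fuel.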
    Joseph Lucero, a Master of Science student, adds: “In our theoretical analysis, we find an interesting trade-off between the particle mass and the average time for the particle to bounce up. While heavier particles can store more gravitational energy, they generally also take longer to move up.”
    “Guided by this insight, we picked the particle mass and other engine properties to maximize how fast the engine extracts energy, outperforming previous designs and achieving power comparable to molecular machinery in living cells, and speeds comparable to fast-swimming bacteria,” says postdoctoral fellow Jannik Ehrich.
    Story Source:
    Materials provided by Simon Fraser University. Note: Content may be edited for style and length.

  • Novel circuitry solves a myriad of computationally intensive problems with minimum energy

    From the branching pattern of leaf veins to the variety of interconnected pathways that spread the coronavirus, nature thrives on networks — grids that link the different components of complex systems. Networks underlie such real-life problems as determining the most efficient route for a trucking company to deliver life-saving drugs and calculating the smallest number of mutations required to transform one string of DNA into another.
    Instead of relying on software to tackle these computationally intensive puzzles, researchers at the National Institute of Standards and Technology (NIST) took an unconventional approach. They created a design for an electronic hardware system that directly replicates the architecture of many types of networks.
    The researchers demonstrated that their proposed hardware system, using a computational technique known as race logic, can solve a variety of complex puzzles both rapidly and with a minimum expenditure of energy. Race logic requires less power and solves network problems more rapidly than competing general-purpose computers.
    The scientists, who include Advait Madhavan of NIST and the University of Maryland in College Park and Matthew Daniels and Mark Stiles of NIST, describe their work in Volume 17, Issue 3, May 2021 of the ACM Journal on Emerging Technologies in Computing Systems.
    A key feature of race logic is that it encodes information differently from a standard computer. Digital information is typically encoded and processed using values of computer bits — a “1” if a logic statement is true and a “0” if it’s false. When a bit flips its value, say from 0 to 1, it means that a particular logic operation has been performed in order to solve a mathematical problem.
    In contrast, race logic encodes and processes information by representing it as time signals — the time at which a particular group of computer bits transitions, or flips, from 0 to 1. Large numbers of bit flips are the primary cause of the large power consumption in standard computers. In this respect, race logic offers an advantage because signals encoded in time involve only a few carefully orchestrated bit flips to process information, requiring much less power than signals encoded as 0s or 1s.
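    A small sketch can make the time-encoding idea concrete. In race logic, a value is the moment a wire first goes high, so taking a minimum means letting the earliest edge win and adding a constant means inserting a delay; propagating a single rising edge through a network of delays then reads out shortest-path distances. The graph below is a made-up example, not one from the NIST paper.

      import heapq

      # Directed graph: node -> list of (neighbour, delay in "ticks").
      GRAPH = {
          "A": [("B", 2), ("C", 5)],
          "B": [("C", 1), ("D", 4)],
          "C": [("D", 2)],
          "D": [],
      }

      def race(graph, source):
          # Event-driven simulation: each node latches the first moment a signal
          # reaches it (a MIN), and each edge adds a fixed delay (an ADD).
          first_arrival = {}
          events = [(0, source)]                 # the source wire rises at t = 0
          while events:
              t, node = heapq.heappop(events)
              if node in first_arrival:          # later arrivals are ignored
                  continue
              first_arrival[node] = t
              for nxt, delay in graph[node]:
                  heapq.heappush(events, (t + delay, nxt))
          return first_arrival

      print(race(GRAPH, "A"))   # arrival times double as shortest-path distances

    Each node here fires only once, on its first incoming signal, so very few transitions occur compared with iterating the same computation in conventional binary arithmetic.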

  • Focus on outliers creates flawed snap judgments

    You enter a room and quickly scan the crowd to gain a sense of who’s there — how many men versus women. How reliable is your estimate?
    Not very, according to new research from Duke University.
    In an experimental study, researchers found that participants consistently erred in estimating the proportion of men and women in a group. And participants erred in a particular way: They overestimated whichever group was in the minority.
    “Our attention is drawn to outliers,” said Mel W. Khaw, a postdoctoral research associate at Duke and the study’s lead author. “We tend to overestimate people who stand out in a crowd.”
    For the study, which appears online in the journal Cognition, researchers recruited 48 observers ages 18-28. Participants were presented with a grid of 12 faces and were given just one second to glance at the grid. Study participants were then asked to estimate the number of men and women in the grid.
    Participants accurately assessed homogeneous groups — groups containing all men or all women. But if a group contained fewer women, say, participants overestimated the number of women present.

  • Online therapy effective against OCD symptoms in the young

    Obsessive-compulsive disorder (OCD) in children and adolescents is associated with impaired education and worse general health later in life. Access to specialist treatment is often limited. According to a study from Centre for Psychiatry Research at Karolinska Institutet in Sweden and Region Stockholm, internet-delivered cognitive behavioural therapy (CBT) can be as effective as conventional CBT. The study, published in the journal JAMA, can help make treatment for OCD more widely accessible.
    Obsessive-compulsive disorder (OCD) is a potentially serious mental disorder that typically begins in childhood.
    Symptoms include intrusive thoughts that trigger anxiety (obsessions), and associated repetitive behaviours (compulsions), which are distressing and time-consuming.
    Early diagnosis and treatment are essential to minimise the long-term medical and socioeconomic consequences of the disorder, including suicide risk.
    The psychological treatment of OCD requires highly trained therapists and access to this kind of competence is currently limited to a handful of specialist centres across Sweden.
    Earlier research has shown that while CBT helps a majority of young people who receive it, several years can pass between the onset of symptoms and receipt of treatment.

  • Patients may not take advice from AI doctors who know their names

    As the use of artificial intelligence (AI) in health applications grows, health providers are looking for ways to improve patients’ experience with their machine doctors.
    Researchers from Penn State and University of California, Santa Barbara (UCSB) found that people may be less likely to take health advice from an AI doctor when the robot knows their name and medical history. On the other hand, patients want to be on a first-name basis with their human doctors.
    When the AI doctor used patients’ first names and referred to their medical history in the conversation, study participants were more likely to consider the AI health chatbot intrusive and less likely to heed its medical advice, the researchers found. In contrast, participants expected human doctors to differentiate them from other patients and were less likely to comply when a human doctor failed to remember their information.
    The findings offer further evidence that machines walk a fine line in serving as doctors, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.
    “Machines don’t have the ability to feel and experience, so when they ask patients how they are feeling, it’s really just data to them,” said Sundar, who is also an affiliate of Penn State’s Institute for Computational and Data Sciences (ICDS). “It’s possibly a reason why people in the past have been resistant to medical AI.”
    Machines do have advantages as medical providers, said Joseph B. Walther, distinguished professor in communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB. He said that, like a family doctor who has treated a patient for a long time, computer systems could — hypothetically — know a patient’s complete medical history. In comparison, seeing a new doctor or a specialist who knows only your latest lab tests might be a more common experience, said Walther, who is also director of the Center for Information Technology and Society at UCSB.