More stories

  • Locomotion Vault will help guide innovations in virtual reality locomotion

    Experts in virtual reality locomotion have developed a new resource that analyses all the different possibilities of locomotion currently available.
    Moving around in a virtual reality world can be very different from walking or driving a vehicle in the real world, and new approaches and techniques are continually being developed to meet the challenges of different applications.
    Called Locomotion Vault, the project was developed by researchers at the University of Birmingham, the University of Copenhagen, and Microsoft Research. It aims to provide a central, freely available resource for analysing the numerous locomotion techniques currently available.
    The aim is to make it easier for developers to make informed decisions about the appropriate technique for their application, and for researchers to study which methods work best. By cataloguing available techniques in the Locomotion Vault, the project will also give creators and designers a head start on identifying gaps where future investigation might be necessary. The database is an interactive resource, so it can be expanded through contributions from researchers and practitioners.
    Researcher Massimiliano Di Luca, of the University of Birmingham, said: “Locomotion is an essential part of virtual reality environments, but there are many challenges. A fundamental question, for example, is whether there should be a unique ‘best’ approach, or instead whether the tactics and methods used should be selected according to the application being designed or the idiosyncrasies of the available hardware. Locomotion Vault will help developers with these decisions.”
    The database also aims to address vital questions of accessibility and inclusivity. Both of these attributes were assessed in relation to each technique included in the Vault.
    Co-researcher, Mar Gonzalez-Franco, of Microsoft Research, said: “As new and existing technologies progress and become a more regular part of our lives, new challenges and opportunities around accessibility and inclusivity will present themselves. Virtual reality is a great example. We need to consider how VR can be designed to accommodate the variety of capabilities represented by those who want to use it.”
    The research team are presenting Locomotion Vault this week at the online Conference on Human Factors in Computing Systems (CHI 2021).
    “This is an area of constant and rapid innovation,” says co-author Hasti Seifi, of the University of Copenhagen. “Locomotion Vault is designed to help researchers tackle the challenges they face right now, but also to help support future discoveries in this exciting field.”
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Smaller chips open door to new RFID applications

    Researchers at North Carolina State University have made what is believed to be the smallest state-of-the-art RFID chip, which should drive down the cost of RFID tags. In addition, the chip’s design makes it possible to embed RFID tags into high value chips, such as computer chips, boosting supply chain security for high-end technologies.
    “As far as we can tell, it’s the world’s smallest Gen2-compatible RFID chip,” says Paul Franzon, corresponding author of a paper on the work and Cirrus Logic Distinguished Professor of Electrical and Computer Engineering at NC State.
    Gen2 RFID chips are state of the art and are already in widespread use. One of the things that sets these new RFID chips apart is their size. They measure 125 micrometers (µm) by 245 µm. Manufacturers were able to make smaller RFID chips using earlier technologies, but Franzon and his collaborators have not been able to identify smaller RFID chips that are compatible with the current Gen2 technology.
    “The size of an RFID tag is largely determined by the size of its antenna — not the RFID chip,” Franzon says. “But the chip is the expensive part.”
    The smaller the chip, the more chips you can get from a single silicon wafer. And the more chips you can get from the silicon wafer, the less expensive they are.
    “In practical terms, this means that we can manufacture RFID tags for less than one cent each if we’re manufacturing them in volume,” Franzon says.
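    The economics here come down to simple geometry: the number of dies per wafer scales inversely with die area. The sketch below illustrates this; the wafer diameter and utilization factor are illustrative assumptions, and only the chip dimensions come from the article.

    ```python
    import math

    def gross_dies_per_wafer(wafer_diameter_mm, die_w_mm, die_h_mm, utilization=0.85):
        """Approximate gross die count: usable wafer area divided by die area.
        `utilization` roughly accounts for edge loss and scribe lines (assumed value)."""
        wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
        return int(utilization * wafer_area / (die_w_mm * die_h_mm))

    # The NC State chip measures 125 um x 245 um, i.e. 0.125 mm x 0.245 mm.
    small = gross_dies_per_wafer(300, 0.125, 0.245)
    # A hypothetical die twice as long on each side covers 4x the area...
    big = gross_dies_per_wafer(300, 0.250, 0.490)
    print(small, big)  # ...so the smaller die yields roughly 4x as many chips per wafer
    ```

    On these assumptions a single 300 mm wafer yields well over a million of the small dies, which is what pushes the per-tag cost below a cent at volume.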
    That makes it more feasible for manufacturers, distributors or retailers to use RFID tags to track lower-cost items. For example, the tags could be used to track all of the products in a grocery store without requiring employees to scan items individually.
    “Another advantage is that the design of the circuits we used here is compatible with a wide range of semiconductor technologies, such as those used in conventional computer chips,” says Kirti Bhanushali, who worked on the project as a Ph.D. student at NC State and is first author of the paper. “This makes it possible to incorporate RFID tags into computer chips, allowing users to track individual chips throughout their life cycle. This could help to reduce counterfeiting, and allow you to verify that a component is what it says it is.”
    “We’ve demonstrated what is possible, and we know that these chips can be made using existing manufacturing technologies,” Franzon says. “We’re now interested in working with industry partners to explore commercializing the chip in two ways: creating low-cost RFID at scale for use in sectors such as grocery stores; and embedding RFID tags into computer chips in order to secure high-value supply chains.”
    The paper, “A 125µm×245µm Mainly Digital UHF EPC Gen2 Compatible RFID tag in 55nm CMOS process,” was presented April 29 at the IEEE International Conference on RFID. The paper was co-authored by Wenxu Zhao, who worked on the project as a Ph.D. student at NC State; and Shepherd Pitts, who worked on the project while a research assistant professor at NC State.
    The work was done with support from the National Science Foundation, under grant 1422172; and from NC State’s Chancellor’s Innovation Fund.
    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • AI learns to type on a phone like humans

    Touchscreens are notoriously difficult to type on. Since we can’t feel the keys, we rely on the sense of sight to move our fingers to the right places and check for errors, a combination of efforts we can’t pull off at the same time. To really understand how people type on touchscreens, researchers at Aalto University and the Finnish Center for Artificial Intelligence (FCAI) have created the first artificial intelligence model that predicts how people move their eyes and fingers while typing.
    The AI model can simulate how a human user would type any sentence on any keyboard design. It makes errors, detects them — though not always immediately — and corrects them, very much like humans would. The simulation also predicts how people adapt to changing circumstances, such as how their writing style shifts when they start using a new auto-correction system or keyboard design.
    ‘Previously, touchscreen typing has been understood mainly from the perspective of how our fingers move. AI-based methods have helped shed new light on these movements: what we’ve discovered is the importance of deciding when and where to look. Now, we can make much better predictions on how people type on their phones or tablets,’ says Dr. Jussi Jokinen, who led the work.
    The study, to be presented at ACM CHI on 12 May, lays the groundwork for developing, for instance, better and even personalized text entry solutions.
    ‘Now that we have a realistic simulation of how humans type on touchscreens, it should be a lot easier to optimize keyboard designs for better typing — meaning less errors, faster typing, and, most importantly for me, less frustration,’ Jokinen explains.
    In addition to predicting how a generic person would type, the model is also able to account for different types of users, like those with motor impairments, and could be used to develop typing aids or interfaces designed with these groups in mind. For those facing no particular challenges, it can deduce from personal writing styles — by noting, for instance, the mistakes that repeatedly occur in texts and emails — what kind of a keyboard, or auto-correction system, would best serve a user.
    The novel approach builds on the group’s earlier empirical research, which provided the basis for a cognitive model of how humans type. The researchers then produced the generative model capable of typing independently. The work was done as part of a larger project on Interactive AI at the Finnish Center for Artificial Intelligence.
    The results are underpinned by a classic machine learning method, reinforcement learning, that the researchers extended to simulate people. Reinforcement learning is normally used to teach robots to solve tasks by trial and error; the team found a new way to use this method to generate behavior that closely matches that of humans — mistakes, corrections and all.
    ‘We gave the model the same abilities and bounds that we, as humans, have. When we asked it to type efficiently, it figured out how to best use these abilities. The end result is very similar to how humans type, without having to teach the model with human data,’ Jokinen says.
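    The trial-and-error principle described above can be illustrated with a generic tabular value-learning toy. This is only an analogy, not the Aalto group's model; every name and parameter below is invented for illustration. An agent that can press one of five keys learns, from reward alone, which key is correct.

    ```python
    import random

    random.seed(0)

    def learn_key(n_keys=5, target=3, episodes=2000, alpha=0.5, eps=0.2):
        """Return learned value estimates for each key after trial and error."""
        q = [0.0] * n_keys
        for _ in range(episodes):
            if random.random() < eps:
                action = random.randrange(n_keys)               # explore a random key
            else:
                action = max(range(n_keys), key=q.__getitem__)  # exploit best key so far
            reward = 1.0 if action == target else 0.0           # correct keypress is rewarded
            q[action] += alpha * (reward - q[action])           # simple value update
        return q

    q = learn_key()
    print(max(range(5), key=q.__getitem__))  # the agent has discovered the target key
    ```

    The Aalto model applies the same learn-from-reward loop to a far richer task: given human-like perceptual and motor constraints, the agent discovers where to look and when to press, rather than being trained on recorded human data.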
    Comparison to data of human typing confirmed that the model’s predictions were accurate. In the future, the team hopes to simulate slow and fast typing techniques to, for example, design useful learning modules for people who want to improve their typing.
    The paper, “Touchscreen Typing As Optimal Supervisory Control,” will be presented 12 May 2021 at the ACM CHI conference.
    Video: https://www.youtube.com/watch?v=6cl2OoTNB6g&t=1s
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Harnessing the hum of fluorescent lights for more efficient computing

    The property that makes fluorescent lights buzz could power a new generation of more efficient computing devices that store data with magnetic fields, rather than electricity.
    A team led by University of Michigan researchers has developed a material that’s at least twice as “magnetostrictive” and far less costly than other materials in its class. In addition to computing, it could also lead to better magnetic sensors for medical and security devices.
    Magnetostriction, which causes the buzz of fluorescent lights and electrical transformers, occurs when a material’s shape and magnetic field are linked — that is, a change in shape causes a change in magnetic field. The property could be key to a new generation of computing devices called magnetoelectrics.
    Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient, slashing the electricity requirements of the world’s computing infrastructure.
    Made of a combination of iron and gallium, the material is detailed in a paper published May 12 in Nature Communications. The team is led by U-M materials science and engineering professor John Heron and includes researchers from Intel; Cornell University; the University of California, Berkeley; the University of Wisconsin; Purdue University; and elsewhere.
    Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data. Tiny pulses of electricity cause them to expand or contract slightly, flipping their magnetic field from positive to negative or vice versa. Because they don’t require a steady stream of electricity, as today’s chips do, they use a fraction of the energy.

  • Tiny, wireless, injectable chips use ultrasound to monitor body processes

    Widely used to monitor and map biological signals, to support and enhance physiological functions, and to treat diseases, implantable medical devices are transforming healthcare and improving the quality of life for millions of people. Researchers are increasingly interested in designing wireless, miniaturized implantable medical devices for in vivo and in situ physiological monitoring. These devices could be used to monitor physiological conditions, such as temperature, blood pressure, glucose, and respiration for both diagnostic and therapeutic procedures.
    To date, conventional implanted electronics have been highly volume-inefficient — they generally require multiple chips, packaging, wires, and external transducers, and batteries are often needed for energy storage. A constant trend in electronics has been tighter integration of electronic components, often moving more and more functions onto the integrated circuit itself.
    Researchers at Columbia Engineering report that they have built what they say is the world’s smallest single-chip system, occupying a total volume of less than 0.1 mm³. The system is as small as a dust mite and visible only under a microscope. In order to achieve this, the team used ultrasound to both power the device and communicate with it wirelessly. The study was published online May 7 in Science Advances.
    “We wanted to see how far we could push the limits on how small a functioning chip we could make,” said the study’s leader Ken Shepard, Lau Family professor of electrical engineering and professor of biomedical engineering. “This is a new idea of ‘chip as system’ — this is a chip that alone, with nothing else, is a complete functioning electronic system. This should be revolutionary for developing wireless, miniaturized implantable medical devices that can sense different things, be used in clinical applications, and eventually approved for human use.”
    The team also included Elisa Konofagou, Robert and Margaret Hariri Professor of Biomedical Engineering and professor of radiology, as well as Stephen A. Lee, a PhD student in the Konofagou lab who assisted in the animal studies.
    The design was done by doctoral student Chen Shi, who is the first author of the study. Shi’s design is unique in its volumetric efficiency, the amount of function that is contained in a given amount of volume. Traditional RF communication links are not possible for a device this small because the wavelength of the electromagnetic wave is too large relative to the size of the device. Because the speed of sound is far lower than the speed of light, ultrasound wavelengths at a given frequency are much smaller, so the team used ultrasound instead to both power the device and communicate with it wirelessly. They fabricated the “antenna” for communicating and powering with ultrasound directly on top of the chip.
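    The wavelength argument can be checked with the basic relation λ = v/f. The 10 MHz frequency below is an assumed, typical medical-ultrasound value; the article does not state the chip's actual operating frequency.

    ```python
    SPEED_OF_LIGHT = 3.0e8          # m/s, electromagnetic wave in vacuum
    SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, typical value for soft tissue

    def wavelength_m(speed_m_s, freq_hz):
        """Wavelength = propagation speed / frequency."""
        return speed_m_s / freq_hz

    f = 10e6  # 10 MHz, an assumed illustrative frequency
    acoustic = wavelength_m(SPEED_OF_SOUND_TISSUE, f)  # ~1.5e-4 m: on the order of 150 um
    em = wavelength_m(SPEED_OF_LIGHT, f)               # 30 m: far too large for a sub-millimetre antenna
    print(acoustic, em)
    ```

    At the same frequency, the acoustic wavelength is roughly five orders of magnitude shorter than the electromagnetic one, which is why an on-chip ultrasound "antenna" can be matched to a device this small.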
    The chip, which is the entire implantable/injectable mote with no additional packaging, was fabricated at the Taiwan Semiconductor Manufacturing Company with additional process modifications performed in the Columbia Nano Initiative cleanroom and the City University of New York Advanced Science Research Center (ASRC) Nanofabrication Facility.
    Shepard commented, “This is a nice example of ‘more than Moore’ technology — we introduced new materials onto standard complementary metal-oxide-semiconductor to provide new function. In this case, we added piezoelectric materials directly onto the integrated circuit to transduce acoustic energy to electrical energy.”
    Konofagou added, “Ultrasound is continuing to grow in clinical importance as new tools and techniques become available. This work continues this trend.”
    The team’s goal is to develop chips that can be injected into the body with a hypodermic needle and then communicate back out of the body using ultrasound, providing information about something they measure locally. The current devices measure body temperature, but there are many more possibilities the team is working on.
    Story Source:
    Materials provided by Columbia University School of Engineering and Applied Science. Original written by Holly Evarts. Note: Content may be edited for style and length.

  • Engine converts random jiggling of microscopic particle into stored energy

    Simon Fraser University researchers have designed a remarkably fast engine that taps into a new kind of fuel — information.
    The development of this engine, which converts the random jiggling of a microscopic particle into stored energy, is outlined in research published this week in the Proceedings of the National Academy of Sciences (PNAS) and could lead to significant advances in the speed and cost of computers and bio-nanotechnologies.
    SFU physics professor and senior author John Bechhoefer says researchers’ understanding of how to rapidly and efficiently convert information into “work” may inform the design and creation of real-world information engines.
    “We wanted to find out how fast an information engine can go and how much energy it can extract, so we made one,” says Bechhoefer, whose experimental group collaborated with theorists led by SFU physics professor David Sivak.
    Engines of this type were first proposed over 150 years ago, but actually making them has only recently become possible.
    “By systematically studying this engine, and choosing the right system characteristics, we have pushed its capabilities over ten times farther than other similar implementations, thus making it the current best-in-class,” says Sivak.
    The information engine designed by SFU researchers consists of a microscopic particle immersed in water and attached to a spring which, itself, is fixed to a movable stage. Researchers then observe the particle bouncing up and down due to thermal motion.
    “When we see an upward bounce, we move the stage up in response,” explains lead author and PhD student Tushar Saha. “When we see a downward bounce, we wait. This ends up lifting the entire system using only information about the particle’s position.”
    Repeating this procedure, they raise the particle “a great height, and thus store a significant amount of gravitational energy,” without having to directly pull on the particle.
    Saha further explains that, “in the lab, we implement this engine with an instrument known as an optical trap, which uses a laser to create a force on the particle that mimics that of the spring and stage.”
    Joseph Lucero, a Master of Science student, adds, “in our theoretical analysis, we find an interesting trade-off between the particle mass and the average time for the particle to bounce up. While heavier particles can store more gravitational energy, they generally also take longer to move up.”
    “Guided by this insight, we picked the particle mass and other engine properties to maximize how fast the engine extracts energy, outperforming previous designs and achieving power comparable to molecular machinery in living cells, and speeds comparable to fast-swimming bacteria,” says postdoctoral fellow Jannik Ehrich.
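    The feedback rule Saha describes — raise the stage after an upward bounce, wait after a downward one — can be sketched as a toy simulation. The Gaussian noise model and all parameters below are illustrative assumptions, not the SFU experiment's.

    ```python
    import random

    random.seed(1)

    def ratchet(steps, sigma=1.0):
        """Toy information ratchet: the particle's position relative to the trap
        fluctuates thermally; when it fluctuates upward, raise the stage by that
        amount, otherwise just wait. Returns total height gained (arbitrary units),
        a proxy for stored gravitational energy."""
        height = 0.0
        for _ in range(steps):
            bounce = random.gauss(0.0, sigma)  # thermal jiggle at this instant
            if bounce > 0:                     # upward bounce observed...
                height += bounce               # ...so ratchet the stage up
            # downward bounce: do nothing, just wait
        return height

    print(ratchet(10_000))  # height grows steadily using only position information
    ```

    No net force ever pulls the particle upward; the gain comes entirely from measuring the fluctuations and acting on that information, which is what makes it an "information engine."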
    Story Source:
    Materials provided by Simon Fraser University. Note: Content may be edited for style and length.

  • Novel circuitry solves a myriad of computationally intensive problems with minimum energy

    From the branching pattern of leaf veins to the variety of interconnected pathways that spread the coronavirus, nature thrives on networks — grids that link the different components of complex systems. Networks underlie such real-life problems as determining the most efficient route for a trucking company to deliver life-saving drugs and calculating the smallest number of mutations required to transform one string of DNA into another.
    Instead of relying on software to tackle these computationally intensive puzzles, researchers at the National Institute of Standards and Technology (NIST) took an unconventional approach. They created a design for an electronic hardware system that directly replicates the architecture of many types of networks.
    The researchers demonstrated that their proposed hardware system, using a computational technique known as race logic, can solve a variety of complex puzzles both rapidly and with a minimum expenditure of energy. Race logic requires less power and solves network problems more rapidly than competing general-purpose computers.
    The scientists, who include Advait Madhavan of NIST and the University of Maryland in College Park and Matthew Daniels and Mark Stiles of NIST, describe their work in Volume 17, Issue 3, May 2021 of the ACM Journal on Emerging Technologies in Computing Systems.
    A key feature of race logic is that it encodes information differently from a standard computer. Digital information is typically encoded and processed using values of computer bits — a “1” if a logic statement is true and a “0” if it’s false. When a bit flips its value, say from 0 to 1, it means that a particular logic operation has been performed in order to solve a mathematical problem.
    In contrast, race logic encodes and processes information by representing it as time signals — the time at which a particular group of computer bits transitions, or flips, from 0 to 1. Large numbers of bit flips are the primary cause of the large power consumption in standard computers. In this respect, race logic offers an advantage because signals encoded in time involve only a few carefully orchestrated bit flips to process information, requiring much less power than signals encoded as 0s or 1s.
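    One way to see why time-encoded signals suit network problems: if each edge of a network is a delay element and a signal is injected at the source, the signal's first arrival time at every node equals its shortest-path distance. The sketch below is a software analogy of that race, not NIST's circuit; simulating it with an event queue is equivalent to Dijkstra's algorithm.

    ```python
    import heapq

    def race(graph, source):
        """graph: {node: [(neighbor, delay), ...]}. Returns first-arrival times,
        which equal shortest-path distances from `source`."""
        arrival = {}
        events = [(0, source)]            # (time, node): the advancing signal wavefront
        while events:
            t, node = heapq.heappop(events)
            if node in arrival:            # a signal already got here earlier; ignore
                continue
            arrival[node] = t              # first arrival wins the race
            for nbr, delay in graph.get(node, []):
                if nbr not in arrival:
                    heapq.heappush(events, (t + delay, nbr))
        return arrival

    g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
    print(race(g, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
    ```

    In hardware, no event queue is needed: all the racing signals propagate through the delay elements in parallel, and the answer is simply read off from when each node's bit flips.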

  • Focus on outliers creates flawed snap judgments

    You enter a room and quickly scan the crowd to gain a sense of who’s there — how many men versus women. How reliable is your estimate?
    Not very, according to new research from Duke University.
    In an experimental study, researchers found that participants consistently erred in estimating the proportion of men and women in a group. And participants erred in a particular way: They overestimated whichever group was in the minority.
    “Our attention is drawn to outliers,” said Mel W. Khaw, a postdoctoral research associate at Duke and the study’s lead author. “We tend to overestimate people who stand out in a crowd.”
    For the study, which appears online in the journal Cognition, researchers recruited 48 observers ages 18-28. Participants were presented with a grid of 12 faces and were given just one second to glance at the grid. Study participants were then asked to estimate the number of men and women in the grid.
    Participants accurately assessed homogeneous groups — groups containing all men or all women. But if a group contained fewer women, say, participants overestimated the number of women present.