More stories

  • Interactive typeface for digital text

AdaptiFont was recently presented at CHI, the leading Conference on Human Factors in Computing Systems.
Language is without doubt the most pervasive medium for exchanging knowledge between humans. However, spoken language or abstract text needs to be made visible in order to be read, be it in print or on screen.
How does the way a text looks affect its readability, that is, how it is read, processed, and understood? A team at TU Darmstadt’s Centre for Cognitive Science investigated this question at the intersection of perceptual science, cognitive science, and linguistics. Electronic text is even more complex. Texts are read on different devices under different external conditions. And although any digital text is formatted initially, users might resize it on screen, change the brightness and contrast of the display, or even select a different font when reading text on the web.
The team of researchers from TU Darmstadt has now developed a system that leaves font design to the user’s visual system. First, they needed a way of synthesizing new fonts. This was achieved with a machine learning algorithm, which learned the structure of fonts by analysing 25 popular and classic typefaces. The system can create an infinite number of new fonts that are any intermediate form of the others — for example, visually halfway between Helvetica and Times New Roman.
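    Conceptually, the generative step amounts to interpolation in a learned font space. The minimal sketch below illustrates only the idea; the embeddings are invented stand-ins, not AdaptiFont's actual learned representation:

    ```python
    import numpy as np

    # Hypothetical latent vectors for two fonts, standing in for whatever
    # representation the learned model derives from the 25 source typefaces.
    helvetica_like = np.array([0.12, 0.85, 0.33, 0.71])
    times_like = np.array([0.58, 0.42, 0.90, 0.15])

    def interpolate_font(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
        """Blend two font embeddings; t=0 returns a, t=1 returns b."""
        return (1.0 - t) * a + t * b

    # A font "visually halfway" between the two source typefaces; a decoder
    # model would then render actual glyphs from this vector.
    halfway = interpolate_font(helvetica_like, times_like, 0.5)
    print(halfway)
    ```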
Some fonts may make the text more difficult to read and thus slow the reader down; others may help the user read more fluently. By measuring reading speed, a second algorithm can then generate further typefaces that increase it.
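    This closed loop can be pictured as black-box optimization: render a candidate font, measure the user's reading speed, and use the result to pick the next candidate. The sketch below uses simple random search and a simulated measurement purely to illustrate the loop; it is not the published algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def measure_reading_speed(font_vec: np.ndarray) -> float:
        """Simulated measurement; the real system would observe the user's
        actual words per minute while reading in the rendered font."""
        ideal = np.array([0.3, 0.6, 0.5, 0.4])  # hypothetical per-user optimum
        return 250.0 - 80.0 * float(np.linalg.norm(font_vec - ideal))

    best_vec, best_wpm = None, float("-inf")
    for trial in range(30):                        # 30 simulated reading sessions
        candidate = rng.uniform(0.0, 1.0, size=4)  # propose a font embedding
        wpm = measure_reading_speed(candidate)
        if wpm > best_wpm:
            best_vec, best_wpm = candidate, wpm

    print(f"best simulated speed: {best_wpm:.1f} wpm at {np.round(best_vec, 2)}")
    ```

    In practice a more sample-efficient search strategy would be needed, since each "evaluation" costs the user real reading time.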
In a laboratory experiment in which users read texts for one hour, the research team showed that their algorithm indeed generates new fonts that increase an individual user’s reading speed. Interestingly, all readers had their own personalized font that made reading especially easy for them. However, this individual favorite typeface does not necessarily fit all situations. “AdaptiFont therefore can be understood as a system which creates fonts for an individual dynamically and continuously while reading, which maximizes the reading speed at the time of use. This may depend on the content of the text, whether you are tired, or perhaps are using different display devices,” explains Professor Constantin A. Rothkopf, Centre for Cognitive Science and head of the Institute of Psychology of Information Processing at TU Darmstadt.
The AdaptiFont system was recently presented to the scientific community at the Conference on Human Factors in Computing Systems (CHI). A patent application has been filed. Possible future applications include all electronic devices on which text is read.
    Story Source:
Materials provided by Technische Universität Darmstadt. Note: Content may be edited for style and length.

  • Brain-computer interface turns mental handwriting into text on screen

    Scientists are exploring a number of ways for people with disabilities to communicate with their thoughts. The newest and fastest turns back to a vintage means for expressing oneself: handwriting.
    For the first time, researchers have deciphered the brain activity associated with trying to write letters by hand. Working with a participant with paralysis who has sensors implanted in his brain, the team used an algorithm to identify letters as he attempted to write them. Then, the system displayed the text on a screen — in real time.
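    At its core, this is a pattern-classification problem: multi-channel neural activity in, intended character out. The toy sketch below is not the study's decoder, which is considerably more sophisticated and works on continuous attempted-handwriting signals; it simply shows the shape of such a pipeline on simulated data:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    letters = list("abc")  # toy alphabet; the study decoded full text
    n_channels = 96        # assumed channel count, for the simulation only

    # Simulate training data: each attempted letter yields a noisy but
    # characteristic pattern of activity across the recording channels.
    prototypes = {c: rng.normal(size=n_channels) for c in letters}
    X = np.vstack([prototypes[c] + 0.5 * rng.normal(size=n_channels)
                   for c in letters for _ in range(50)])
    y = [c for c in letters for _ in range(50)]

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Decode a new attempt at writing "b" and display it "on screen".
    attempt = prototypes["b"] + 0.5 * rng.normal(size=n_channels)
    print(clf.predict([attempt])[0])
    ```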
    The innovation could, with further development, let people with paralysis rapidly type without using their hands, says study coauthor Krishna Shenoy, a Howard Hughes Medical Institute Investigator at Stanford University who jointly supervised the work with Jaimie Henderson, a Stanford neurosurgeon.
    By attempting handwriting, the study participant typed 90 characters per minute — more than double the previous record for typing with such a “brain-computer interface,” Shenoy and his colleagues report in the journal Nature on May 12, 2021.
    This technology and others like it have the potential to help people with all sorts of disabilities, says Jose Carmena, a neural engineer at the University of California, Berkeley, who was not involved in the study. Though the findings are preliminary, he says, “it’s a big advancement in the field.”
    Brain-computer interfaces convert thought into action, Carmena says. “This paper is a perfect example: the interface decodes the thought of writing and produces the action.”

  • How smartphones can help detect ecological change

Leipzig/Jena/Ilmenau. Mobile apps like Flora Incognita that allow automated identification of wild plants can not only identify plant species, but also uncover large-scale ecological patterns. These patterns are surprisingly similar to those derived from long-term inventory data of the German flora, even though the app data were acquired over much shorter time periods and are influenced by user behaviour. This opens up new perspectives for the rapid detection of biodiversity changes. These are the key results of a study led by a team of researchers from Central Germany, which has recently been published in Ecography.
With the help of artificial intelligence, plant species can today be classified with high accuracy. Smartphone applications leverage this technology to enable users to easily identify plant species in the field, giving laypersons access to biodiversity at their fingertips. Against the backdrop of climate change, habitat loss and land-use change, these applications may serve another purpose: by gathering information on the locations of identified plant species, valuable datasets are created, potentially providing researchers with information on changing environmental conditions.
    But is this information reliable — as reliable as the information provided by data collected over long time periods? A team of researchers from the German Centre for Integrative Biodiversity Research (iDiv), the Remote Sensing Centre for Earth System Research (RSC4Earth) of Leipzig University (UL) and Helmholtz Centre for Environmental Research (UFZ), the Max Planck Institute for Biogeochemistry (MPI-BGC) and Technical University Ilmenau wanted to find an answer to this question. The researchers analysed data collected with the mobile app Flora Incognita between 2018 and 2019 in Germany and compared it to the FlorKart database of the German Federal Agency for Nature Conservation (BfN). This database contains long-term inventory data collected by over 5,000 floristic experts over a period of more than 70 years.
    Mobile app uncovers macroecological patterns in Germany
The researchers report that the Flora Incognita data, collected over only two years, allowed them to uncover macroecological patterns in Germany similar to those derived from long-term inventory data of the German flora. The data thus also reflected the effects of several environmental drivers on the distribution of different plant species.
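    One simple way to compare a crowdsourced dataset with a long-term inventory, sketched below with invented records and column names, is to aggregate both onto the same spatial grid and correlate per-cell species richness; the published analysis is of course more thorough:

    ```python
    import pandas as pd
    from scipy.stats import spearmanr

    # Invented example records; real data would hold many thousands of rows.
    app = pd.DataFrame({
        "lat": [51.34, 51.36, 50.98, 51.34],
        "lon": [12.37, 12.41, 11.03, 12.37],
        "species": ["Bellis perennis", "Urtica dioica",
                    "Bellis perennis", "Urtica dioica"],
    })
    inventory = pd.DataFrame({
        "lat": [51.35, 50.97, 50.99],
        "lon": [12.39, 11.01, 11.05],
        "species": ["Bellis perennis", "Urtica dioica", "Urtica dioica"],
    })

    def richness_per_cell(df: pd.DataFrame, cell_deg: float = 0.1) -> pd.Series:
        """Species richness per grid cell (cells keyed by floored lat/lon)."""
        cell = list(zip((df.lat // cell_deg).astype(int),
                        (df.lon // cell_deg).astype(int)))
        return df.assign(cell=cell).groupby("cell")["species"].nunique()

    joint = pd.concat([richness_per_cell(app), richness_per_cell(inventory)],
                      axis=1, keys=["app", "inventory"]).fillna(0)
    rho, _ = spearmanr(joint["app"], joint["inventory"])
    print(f"Spearman correlation of per-cell richness: {rho:.2f}")
    ```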
However, directly comparing the two datasets revealed major differences between the Flora Incognita data and the long-term inventory data in regions with a low human population density. “Of course, how much data is collected in a region strongly depends on the number of smartphone users in that region,” said last author Dr. Jana Wäldchen from MPI-BGC, one of the developers of the mobile app. Deviations in the data were therefore more pronounced in rural areas, except for well-known tourist destinations such as the Zugspitze, Germany’s highest mountain, or Amrum, an island on the North Sea coast.
    User behaviour also influences which plant species are recorded by the mobile app. “The plant observations carried out with the app reflect what users see and what they are interested in,” said Jana Wäldchen. Common and conspicuous species were recorded more often than rare and inconspicuous species. Nonetheless, the large quantity of plant observations still allows a reconstruction of familiar biogeographical patterns. For their study, the researchers had access to more than 900,000 data entries created within the first two years after the app had been launched.
    Automated species recognition bears great potential
The study shows the potential of this kind of data collection for biodiversity and environmental research; it could soon be integrated into strategies for long-term inventories. “We are convinced that automated species recognition bears much greater potential than previously thought and that it can contribute to a rapid detection of biodiversity changes,” said first author Miguel Mahecha, professor at UL and iDiv member. In the future, a growing number of users of apps like Flora Incognita could help detect and analyse ecosystem changes worldwide in real time.
The Flora Incognita mobile app was developed jointly by the research group of Dr. Jana Wäldchen at MPI-BGC and the group of Professor Patrick Mäder at TU Ilmenau. It is the first plant identification app in Germany to use deep neural networks (deep learning) for this purpose. Trained on thousands of plant images that have been identified by experts, it can already identify over 4,800 plant species.
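    Under the hood, identification of this kind is image classification with a convolutional neural network. The sketch below shows the general inference pattern using a publicly available, general-purpose ImageNet model as a stand-in; Flora Incognita's own network is not public here, and the file name is hypothetical:

    ```python
    import torch
    from PIL import Image
    from torchvision import models, transforms

    # General-purpose classifier as a stand-in for the app's plant network.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    image = Image.open("field_photo.jpg")        # hypothetical user photo
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    print(int(logits.argmax()))                  # index of the predicted class
    ```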
“When we developed Flora Incognita, we realized there was a huge potential and growing interest in improved technologies for the detection of biodiversity data. As computer scientists we are happy to see how our technologies make an important contribution to biodiversity research,” said co-author Patrick Mäder, professor at TU Ilmenau.

  • Locomotion Vault will help guide innovations in virtual reality locomotion

Experts in virtual reality locomotion have developed a new resource that analyses the many locomotion techniques currently available.
Moving around in a virtual reality world can be very different to walking or driving a vehicle in the real world, and new approaches and techniques are continually being developed to meet the challenges of different applications.
Called Locomotion Vault, the project was developed by researchers at the Universities of Birmingham and Copenhagen and at Microsoft Research. It aims to provide a central, freely available resource for analysing the numerous locomotion techniques currently available.
    The aim is to make it easier for developers to make informed decisions about the appropriate technique for their application and researchers to study which methods are best. By cataloguing available techniques in the Locomotion Vault, the project will also give creators and designers a head-start on identifying gaps where future investigation might be necessary. The database is an interactive resource, so it can be expanded through contributions from researchers and practitioners.
    Researcher Massimiliano Di Luca, of the University of Birmingham, said: “Locomotion is an essential part of virtual reality environments, but there are many challenges. A fundamental question, for example, is whether there should be a unique ‘best’ approach, or instead whether the tactics and methods used should be selected according to the application being designed or the idiosyncrasies of the available hardware. Locomotion Vault will help developers with these decisions.”
    The database also aims to address vital questions of accessibility and inclusivity. Both of these attributes were assessed in relation to each technique included in the Vault.
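    The article does not describe the database's internal format, but a catalogue of this kind might represent each technique roughly as in the hypothetical sketch below; every field name here is an assumption for illustration:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class LocomotionTechnique:
        """Hypothetical record for one catalogued VR locomotion technique."""
        name: str
        description: str
        hardware: list[str]            # e.g. tracked controllers, treadmill
        accessibility_notes: str       # assessed per technique, per the project
        inclusivity_notes: str
        references: list[str] = field(default_factory=list)

    teleport = LocomotionTechnique(
        name="Point-and-teleport",
        description="User points at a destination and is instantly moved there.",
        hardware=["tracked controller"],
        accessibility_notes="Usable while seated; low motion-sickness risk.",
        inclusivity_notes="Relies on fine pointing, which may exclude some users.",
    )
    print(teleport.name)
    ```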
    Co-researcher, Mar Gonzalez-Franco, of Microsoft Research, said: “As new and existing technologies progress and become a more regular part of our lives, new challenges and opportunities around accessibility and inclusivity will present themselves. Virtual reality is a great example. We need to consider how VR can be designed to accommodate the variety of capabilities represented by those who want to use it.”
    The research team are presenting Locomotion Vault this week at the online Conference on Human Factors in Computing Systems (CHI 2021).
    “This is an area of constant and rapid innovation,” says co-author Hasti Seifi, of the University of Copenhagen. “Locomotion Vault is designed to help researchers tackle the challenges they face right now, but also to help support future discoveries in this exciting field.”
    Story Source:
Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Smaller chips open door to new RFID applications

    Researchers at North Carolina State University have made what is believed to be the smallest state-of-the-art RFID chip, which should drive down the cost of RFID tags. In addition, the chip’s design makes it possible to embed RFID tags into high value chips, such as computer chips, boosting supply chain security for high-end technologies.
    “As far as we can tell, it’s the world’s smallest Gen2-compatible RFID chip,” says Paul Franzon, corresponding author of a paper on the work and Cirrus Logic Distinguished Professor of Electrical and Computer Engineering at NC State.
Gen2 RFID chips are state of the art and are already in widespread use. One of the things that sets these new RFID chips apart is their size. They measure 125 micrometers (µm) by 245 µm. Manufacturers were able to make smaller RFID chips using earlier technologies, but Franzon and his collaborators have not been able to identify smaller RFID chips that are compatible with the current Gen2 technology.
    “The size of an RFID tag is largely determined by the size of its antenna — not the RFID chip,” Franzon says. “But the chip is the expensive part.”
    The smaller the chip, the more chips you can get from a single silicon wafer. And the more chips you can get from the silicon wafer, the less expensive they are.
    “In practical terms, this means that we can manufacture RFID tags for less than one cent each if we’re manufacturing them in volume,” Franzon says.
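    The sub-cent claim follows from simple geometry, as the back-of-the-envelope sketch below shows; the wafer size and wafer cost are illustrative assumptions, not figures from the paper:

    ```python
    import math

    die_w_mm, die_h_mm = 0.125, 0.245   # reported chip dimensions
    wafer_d_mm = 300.0                  # assumed standard 300 mm wafer
    wafer_cost_usd = 3000.0             # assumed processed-wafer cost

    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_d_mm / 2.0) ** 2

    # Standard dies-per-wafer approximation: gross area divided by die area,
    # minus a correction for partial dies lost at the wafer edge.
    dies = wafer_area / die_area - math.pi * wafer_d_mm / math.sqrt(2.0 * die_area)

    print(f"~{dies:,.0f} dies per wafer")
    print(f"~{100.0 * wafer_cost_usd / dies:.2f} cents per die")
    ```

    Under these assumptions a single wafer yields on the order of two million dies at a small fraction of a cent each, so even generous losses to testing and yield leave the per-chip cost well below one cent.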
    That makes it more feasible for manufacturers, distributors or retailers to use RFID tags to track lower-cost items. For example, the tags could be used to track all of the products in a grocery store without requiring employees to scan items individually.
    “Another advantage is that the design of the circuits we used here is compatible with a wide range of semiconductor technologies, such as those used in conventional computer chips,” says Kirti Bhanushali, who worked on the project as a Ph.D. student at NC State and is first author of the paper. “This makes it possible to incorporate RFID tags into computer chips, allowing users to track individual chips throughout their life cycle. This could help to reduce counterfeiting, and allow you to verify that a component is what it says it is.”
    “We’ve demonstrated what is possible, and we know that these chips can be made using existing manufacturing technologies,” Franzon says. “We’re now interested in working with industry partners to explore commercializing the chip in two ways: creating low-cost RFID at scale for use in sectors such as grocery stores; and embedding RFID tags into computer chips in order to secure high-value supply chains.”
The paper, “A 125 µm × 245 µm Mainly Digital UHF EPC Gen2 Compatible RFID tag in 55 nm CMOS process,” was presented April 29 at the IEEE International Conference on RFID. The paper was co-authored by Wenxu Zhao, who worked on the project as a Ph.D. student at NC State; and Shepherd Pitts, who worked on the project while a research assistant professor at NC State.
    The work was done with support from the National Science Foundation, under grant 1422172; and from NC State’s Chancellor’s Innovation Fund.
    Story Source:
Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • AI learns to type on a phone like humans

    Touchscreens are notoriously difficult to type on. Since we can’t feel the keys, we rely on the sense of sight to move our fingers to the right places and check for errors, a combination of efforts we can’t pull off at the same time. To really understand how people type on touchscreens, researchers at Aalto University and the Finnish Center for Artificial Intelligence (FCAI) have created the first artificial intelligence model that predicts how people move their eyes and fingers while typing.
The AI model can simulate how a human user would type any sentence on any keyboard design. It makes errors, detects them — though not always immediately — and corrects them, very much like humans would. The simulation also predicts how people adapt to changing circumstances, like how their writing style changes when they start using a new auto-correction system or keyboard design.
    ‘Previously, touchscreen typing has been understood mainly from the perspective of how our fingers move. AI-based methods have helped shed new light on these movements: what we’ve discovered is the importance of deciding when and where to look. Now, we can make much better predictions on how people type on their phones or tablets,’ says Dr. Jussi Jokinen, who led the work.
    The study, to be presented at ACM CHI on 12 May, lays the groundwork for developing, for instance, better and even personalized text entry solutions.
‘Now that we have a realistic simulation of how humans type on touchscreens, it should be a lot easier to optimize keyboard designs for better typing — meaning fewer errors, faster typing, and, most importantly for me, less frustration,’ Jokinen explains.
    In addition to predicting how a generic person would type, the model is also able to account for different types of users, like those with motor impairments, and could be used to develop typing aids or interfaces designed with these groups in mind. For those facing no particular challenges, it can deduce from personal writing styles — by noting, for instance, the mistakes that repeatedly occur in texts and emails — what kind of a keyboard, or auto-correction system, would best serve a user.
    The novel approach builds on the group’s earlier empirical research, which provided the basis for a cognitive model of how humans type. The researchers then produced the generative model capable of typing independently. The work was done as part of a larger project on Interactive AI at the Finnish Center for Artificial Intelligence.
    The results are underpinned by a classic machine learning method, reinforcement learning, that the researchers extended to simulate people. Reinforcement learning is normally used to teach robots to solve tasks by trial and error; the team found a new way to use this method to generate behavior that closely matches that of humans — mistakes, corrections and all.
    ‘We gave the model the same abilities and bounds that we, as humans, have. When we asked it to type efficiently, it figured out how to best use these abilities. The end result is very similar to how humans type, without having to teach the model with human data,’ Jokinen says.
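    As a caricature of that idea, the toy sketch below trains a tabular Q-learning agent that must choose between typing the next character (fast, but increasingly error-prone as unchecked text piles up) and glancing at the screen to proofread (slow, but it resets the uncertainty). All numbers are invented, and the published model is far richer:

    ```python
    import random

    random.seed(0)
    ACTIONS = ["type", "proofread"]
    ERROR_RATE = 0.1   # assumed slip chance per unchecked character
    MAX_UNCHECKED = 5  # state: how many characters are typed but unverified

    Q = {(s, a): 0.0 for s in range(MAX_UNCHECKED + 1) for a in ACTIONS}

    def step(state: int, action: str) -> tuple[int, float]:
        """Reward speed; penalize errors that slip through unchecked text."""
        if action == "type":
            penalty = 3.0 if random.random() < ERROR_RATE * state else 0.0
            return min(state + 1, MAX_UNCHECKED), 1.0 - penalty
        return 0, -0.5  # proofreading costs time but clears uncertainty

    for episode in range(5000):
        state = 0
        for _ in range(20):
            if random.random() < 0.1:  # occasional exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward = step(state, action)
            target = reward + 0.9 * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += 0.1 * (target - Q[(state, action)])
            state = nxt

    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(MAX_UNCHECKED + 1)})
    ```

    The agent typically learns to type freely at first and to glance at the screen once enough unverified text has accumulated, mirroring the when-and-where-to-look trade-off the researchers describe.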
Comparison with human typing data confirmed that the model’s predictions were accurate. In the future, the team hopes to simulate slow and fast typing techniques to, for example, design useful learning modules for people who want to improve their typing.
The paper, ‘Touchscreen Typing As Optimal Supervisory Control,’ will be presented on 12 May 2021 at the ACM CHI conference.
    Video: https://www.youtube.com/watch?v=6cl2OoTNB6g&t=1s
    Story Source:
Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Harnessing the hum of fluorescent lights for more efficient computing

    The property that makes fluorescent lights buzz could power a new generation of more efficient computing devices that store data with magnetic fields, rather than electricity.
    A team led by University of Michigan researchers has developed a material that’s at least twice as “magnetostrictive” and far less costly than other materials in its class. In addition to computing, it could also lead to better magnetic sensors for medical and security devices.
    Magnetostriction, which causes the buzz of fluorescent lights and electrical transformers, occurs when a material’s shape and magnetic field are linked — that is, a change in shape causes a change in magnetic field. The property could be key to a new generation of computing devices called magnetoelectrics.
    Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient, slashing the electricity requirements of the world’s computing infrastructure.
Made of a combination of iron and gallium, the material is detailed in a paper published May 12 in Nature Communications. The team is led by U-M materials science and engineering professor John Heron and includes researchers from Intel; Cornell University; University of California, Berkeley; University of Wisconsin; Purdue University and elsewhere.
Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data. Tiny pulses of electricity cause them to expand or contract slightly, flipping their magnetic field from positive to negative or vice versa. Because they don’t require a steady stream of electricity, as today’s chips do, they use a fraction of the energy.

  • Tiny, wireless, injectable chips use ultrasound to monitor body processes

    Widely used to monitor and map biological signals, to support and enhance physiological functions, and to treat diseases, implantable medical devices are transforming healthcare and improving the quality of life for millions of people. Researchers are increasingly interested in designing wireless, miniaturized implantable medical devices for in vivo and in situ physiological monitoring. These devices could be used to monitor physiological conditions, such as temperature, blood pressure, glucose, and respiration for both diagnostic and therapeutic procedures.
    To date, conventional implanted electronics have been highly volume-inefficient — they generally require multiple chips, packaging, wires, and external transducers, and batteries are often needed for energy storage. A constant trend in electronics has been tighter integration of electronic components, often moving more and more functions onto the integrated circuit itself.
Researchers at Columbia Engineering report that they have built what they say is the world’s smallest single-chip system, occupying a total volume of less than 0.1 mm³. The system is as small as a dust mite and visible only under a microscope. To achieve this, the team used ultrasound to both power the device and communicate with it wirelessly. The study was published online May 7 in Science Advances.
    “We wanted to see how far we could push the limits on how small a functioning chip we could make,” said the study’s leader Ken Shepard, Lau Family professor of electrical engineering and professor of biomedical engineering. “This is a new idea of ‘chip as system’ — this is a chip that alone, with nothing else, is a complete functioning electronic system. This should be revolutionary for developing wireless, miniaturized implantable medical devices that can sense different things, be used in clinical applications, and eventually approved for human use.”
The team also included Elisa Konofagou, Robert and Margaret Hariri Professor of Biomedical Engineering and professor of radiology, as well as Stephen A. Lee, a PhD student in the Konofagou lab who assisted in the animal studies.
The design was done by doctoral student Chen Shi, who is the first author of the study. Shi’s design is unique in its volumetric efficiency: the amount of function contained in a given volume. Traditional RF communication links are not possible for a device this small, because the wavelength of the electromagnetic wave is too large relative to the size of the device. Ultrasound wavelengths at a given frequency are much smaller, since the speed of sound is so much lower than the speed of light, which is why the team chose ultrasound for wireless power and communication. They fabricated the “antenna” for communicating and powering with ultrasound directly on top of the chip.
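    The scale argument is easy to verify with the wave relation λ = v/f. The quick sketch below compares a typical RF carrier with a typical medical-ultrasound frequency in soft tissue; both frequencies are illustrative assumptions, not values from the paper:

    ```python
    SPEED_OF_LIGHT = 3.0e8          # m/s, RF propagation in free space
    SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, typical for soft tissue

    def wavelength_mm(speed_m_s: float, freq_hz: float) -> float:
        """Wavelength in millimeters from the relation lambda = v / f."""
        return speed_m_s / freq_hz * 1000.0

    print(f"RF at 2.4 GHz:        {wavelength_mm(SPEED_OF_LIGHT, 2.4e9):.1f} mm")
    print(f"Ultrasound at 10 MHz: {wavelength_mm(SPEED_OF_SOUND_TISSUE, 10e6):.3f} mm")
    ```

    An RF wavelength of roughly 125 mm dwarfs a sub-millimeter implant, while a 10 MHz ultrasound wavelength of about 0.154 mm is on the same scale as the chip, which is what makes an on-chip acoustic “antenna” workable.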
    The chip, which is the entire implantable/injectable mote with no additional packaging, was fabricated at the Taiwan Semiconductor Manufacturing Company with additional process modifications performed in the Columbia Nano Initiative cleanroom and the City University of New York Advanced Science Research Center (ASRC) Nanofabrication Facility.
Shepard commented, “This is a nice example of ‘more than Moore’ technology — we introduced new materials onto standard complementary metal-oxide-semiconductor to provide new function. In this case, we added piezoelectric materials directly onto the integrated circuit to transduce acoustic energy to electrical energy.”
    Konofagou added, “Ultrasound is continuing to grow in clinical importance as new tools and techniques become available. This work continues this trend.”
    The team’s goal is to develop chips that can be injected into the body with a hypodermic needle and then communicate back out of the body using ultrasound, providing information about something they measure locally. The current devices measure body temperature, but there are many more possibilities the team is working on.
    Story Source:
Materials provided by Columbia University School of Engineering and Applied Science. Original written by Holly Evarts. Note: Content may be edited for style and length.