More stories

  •

    New research may explain shortages in STEM careers

    A new study by the University of Georgia found that more college students change majors within the STEM pipeline than leave science, technology, engineering and mathematics altogether.
    Funded by a National Institutes of Health grant and a National Science Foundation Postdoctoral Fellowship, and conducted in collaboration with the University of Wisconsin, the study examined interviews, surveys and institutional data from 1,193 students at a Midwestern U.S. university over more than six years, following a single segment of the STEM pipeline: biomedical fields of study.
    Out of 921 students who stayed in the biomedical pipeline through graduation, almost half changed their career plans within the biomedical fields.
    “This was almost double the number of students who left biomedical fields altogether,” said Emily Rosenzweig, co-author of the study and assistant professor in the Mary Frances Early College of Education’s department of educational psychology. “This suggests that if we want to fully understand why there are shortages in certain STEM careers, we need to look at those who change plans within the pipeline, not just those who leave it.”
    Rosenzweig examined students’ motivations for changing career plans and found that students were more often inspired to make a change because a new field seemed more attractive.
    This finding points to an underexplored research area that educators, policymakers and administrators should devote more attention to: rather than focusing only on what makes students disenchanted with a particular career, they should also consider the factors that make alternative career paths seem valuable to students.
    “The sheer number of changes made by students who remained in the biomedical pipeline highlights the divergence of paths students take in their career decision-making,” Rosenzweig said. “We should not simply assume that students are staying on course and progressing smoothly toward intended careers just because they have not left the [STEM] pipeline.”
    Ultimately, the research provides new insights about students’ motivations for choosing various careers inside the STEM pipeline and demonstrates the importance of understanding this group if schools are to promote retention in particular STEM careers.
    Story Source:
    Materials provided by University of Georgia. Original written by Lauren Leathers. Note: Content may be edited for style and length.

  •

    Interactive typeface for digital text

    AdaptiFont was recently presented at CHI, the leading conference on human factors in computing systems.
    Language is without doubt the most pervasive medium for exchanging knowledge between humans. However, language must be made visible before it can be read, be it in print or on screen.
    How does the way a text looks affect its readability, that is, how it is read, processed, and understood? A team at TU Darmstadt’s Centre for Cognitive Science investigated this question at the intersection of perceptual science, cognitive science, and linguistics. Electronic text is even more complex. Texts are read on different devices under different external conditions. And although any digital text is formatted initially, users might resize it on screen, change brightness and contrast of the display, or even select a different font when reading text on the web.
    The team of researchers from TU Darmstadt has now developed a system that leaves font design to the user’s visual system. First, they needed a way of synthesizing new fonts. This was achieved with a machine learning algorithm that learned the structure of fonts by analysing 25 popular and classic typefaces. The system can create an unlimited number of new fonts that are intermediate forms of existing ones, for example a font visually halfway between Helvetica and Times New Roman.
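    The "intermediate form" idea can be pictured as interpolation in a learned font space. The sketch below is purely illustrative: the latent vectors and the `interpolate` helper are invented stand-ins, not the actual AdaptiFont model.

```python
# Illustrative sketch only: treat each font as a point in a learned latent
# space (the vectors here are invented toy data, not real model output).
def interpolate(font_a, font_b, t):
    """Blend two latent font vectors; t=0 gives font_a, t=1 gives font_b."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie in [0, 1]")
    return [(1 - t) * a + t * b for a, b in zip(font_a, font_b)]

helvetica_vec = [0.2, 0.9, 0.4]   # toy latent coordinates
times_vec     = [0.8, 0.1, 0.6]
halfway = interpolate(helvetica_vec, times_vec, 0.5)  # "visually halfway"
```

    A generative model trained on the 25 typefaces would map such latent vectors back to rendered glyphs; here the vectors simply stand in for that machinery.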
    Some fonts make text more difficult to read and may slow the reader down; others may help the user read more fluently. By measuring reading speed, a second algorithm can then generate further typefaces that increase it.
    In a laboratory experiment in which users read texts for one hour, the research team showed that their algorithm indeed generates new fonts that increase an individual user’s reading speed. Interestingly, every reader had their own personalized font that made reading especially easy for them. However, this individual favorite typeface does not necessarily fit all situations. “AdaptiFont therefore can be understood as a system which creates fonts for an individual dynamically and continuously while reading, which maximizes the reading speed at the time of use. This may depend on the content of the text, whether you are tired, or perhaps are using different display devices,” explains Professor Constantin A. Rothkopf, Centre for Cognitive Science and head of the Institute of Psychology of Information Processing at TU Darmstadt.
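    The adapt-while-reading loop can be caricatured as a simple stochastic search: propose a nearby font, keep it if the measured reading speed improves. This is a toy sketch under invented assumptions (a two-dimensional font vector and a synthetic speed measure), not the published algorithm.

```python
import random

def adapt_font(font, measure_speed, steps=50, seed=0):
    """Hill-climb in font space, keeping only changes that read faster."""
    rng = random.Random(seed)
    best, best_speed = list(font), measure_speed(font)
    for _ in range(steps):
        candidate = [x + rng.gauss(0, 0.05) for x in best]  # small mutation
        speed = measure_speed(candidate)
        if speed > best_speed:          # keep only improvements
            best, best_speed = candidate, speed
    return best, best_speed

# Stand-in "reading speed": peaks at a hidden ideal font for this reader.
ideal = [0.3, 0.7]
speed_fn = lambda f: -sum((a - b) ** 2 for a, b in zip(f, ideal))
tuned, final_speed = adapt_font([0.0, 0.0], speed_fn)
```

    Because only improving candidates are kept, the measured speed never decreases over the session, mirroring the continuous per-reader optimization described above.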
    The AdaptiFont system was recently presented to the scientific community at the Conference on Human Factors in Computing Systems (CHI). A patent application has been filed. Future possible applications are with all electronic devices on which text is read.
    Story Source:
    Materials provided by Technische Universität Darmstadt.

  •

    Brain computer interface turns mental handwriting into text on screen

    Scientists are exploring a number of ways for people with disabilities to communicate with their thoughts. The newest and fastest turns back to a vintage means for expressing oneself: handwriting.
    For the first time, researchers have deciphered the brain activity associated with trying to write letters by hand. Working with a participant with paralysis who has sensors implanted in his brain, the team used an algorithm to identify letters as he attempted to write them. Then, the system displayed the text on a screen — in real time.
    The innovation could, with further development, let people with paralysis rapidly type without using their hands, says study coauthor Krishna Shenoy, a Howard Hughes Medical Institute Investigator at Stanford University who jointly supervised the work with Jaimie Henderson, a Stanford neurosurgeon.
    By attempting handwriting, the study participant typed 90 characters per minute — more than double the previous record for typing with such a “brain-computer interface,” Shenoy and his colleagues report in the journal Nature on May 12, 2021.
    This technology and others like it have the potential to help people with all sorts of disabilities, says Jose Carmena, a neural engineer at the University of California, Berkeley, who was not involved in the study. Though the findings are preliminary, he says, “it’s a big advancement in the field.”
    Brain-computer interfaces convert thought into action, Carmena says. “This paper is a perfect example: the interface decodes the thought of writing and produces the action.”

  •

    How smartphones can help detect ecological change

    Leipzig/Jena/Ilmenau. Mobile apps like Flora Incognita that allow automated identification of wild plants can not only identify plant species, but also uncover large-scale ecological patterns. These patterns are surprisingly similar to those derived from long-term inventory data of the German flora, even though they were acquired over much shorter time periods and are influenced by user behaviour. This opens up new perspectives for the rapid detection of biodiversity changes. These are the key results of a study led by a team of researchers from Central Germany, recently published in Ecography.
    With the help of Artificial Intelligence, plant species today can be classified with high accuracy. Smartphone applications leverage this technology to enable users to easily identify plant species in the field, giving laypersons access to biodiversity at their fingertips. Against the backdrop of climate change, habitat loss and land-use change, these applications may serve another use: by gathering information on the locations of identified plant species, valuable datasets are created, potentially providing researchers with information on changing environmental conditions.
    But is this information reliable — as reliable as the information provided by data collected over long time periods? A team of researchers from the German Centre for Integrative Biodiversity Research (iDiv), the Remote Sensing Centre for Earth System Research (RSC4Earth) of Leipzig University (UL) and Helmholtz Centre for Environmental Research (UFZ), the Max Planck Institute for Biogeochemistry (MPI-BGC) and Technical University Ilmenau wanted to find an answer to this question. The researchers analysed data collected with the mobile app Flora Incognita between 2018 and 2019 in Germany and compared it to the FlorKart database of the German Federal Agency for Nature Conservation (BfN). This database contains long-term inventory data collected by over 5,000 floristic experts over a period of more than 70 years.
    Mobile app uncovers macroecological patterns in Germany
    The researchers report that the Flora Incognita data, collected over only two years, allowed them to uncover macroecological patterns in Germany similar to those derived from long-term inventory data of German flora. The data was therefore also a reflection of the effects of several environmental drivers on the distribution of different plant species.
    However, directly comparing the two datasets revealed major differences between the Flora Incognita data and the long-term inventory data in regions with a low human population density. “Of course, how much data is collected in a region strongly depends on the number of smartphone users in that region,” said last author Dr. Jana Wäldchen from MPI-BGC, one of the developers of the mobile app. Deviations in the data were therefore more pronounced in rural areas, except for well-known tourist destinations such as the Zugspitze, Germany’s highest mountain, or Amrum, an island on the North Sea coast.
    User behaviour also influences which plant species are recorded by the mobile app. “The plant observations carried out with the app reflect what users see and what they are interested in,” said Jana Wäldchen. Common and conspicuous species were recorded more often than rare and inconspicuous species. Nonetheless, the large quantity of plant observations still allows a reconstruction of familiar biogeographical patterns. For their study, the researchers had access to more than 900,000 data entries created within the first two years after the app had been launched.
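    One way to quantify how "surprisingly similar" two spatial patterns are is to correlate per-grid-cell species richness from app records against the long-term inventory. The sketch below uses a hand-rolled Spearman rank correlation on invented counts; it is not the analysis from the Ecography paper, and it does not handle tied ranks.

```python
# Hedged sketch (not the paper's analysis): compare per-grid-cell species
# richness from app records against long-term inventory counts using a
# Spearman rank correlation. Ties are not handled; all counts are toy data.
def spearman(xs, ys):
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

app_counts       = [12, 40, 7, 25, 30]   # invented per-cell richness
inventory_counts = [15, 38, 9, 22, 33]
rho = spearman(app_counts, inventory_counts)   # high rho = similar pattern
```

    A rank correlation is a natural choice here because app data over-samples common, conspicuous species: absolute counts differ between the datasets, but the relative ordering of cells can still agree.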
    Automated species recognition bears great potential
    The study shows the potential of this kind of data collection for biodiversity and environmental research, which could soon be integrated into strategies for long-term inventories. “We are convinced that automated species recognition bears much greater potential than previously thought and that it can contribute to a rapid detection of biodiversity changes,” said first author Miguel Mahecha, professor at UL and iDiv Member. In the future, a growing number of users of apps like Flora Incognita could help detect and analyse ecosystem changes worldwide in real time.
    The Flora Incognita mobile app was developed jointly by the research group of Dr. Jana Wäldchen at MPI-BGC and the group of Professor Patrick Mäder at TU Ilmenau. It is the first plant identification app in Germany to use deep neural networks (deep learning) for this task. Trained on thousands of plant images identified by experts, it can already identify over 4,800 plant species.
    “When we developed Flora Incognita, we realized there was a huge potential and growing interest in improved technologies for the detection of biodiversity data. As computer scientists we are happy to see how our technologies make an important contribution to biodiversity research,” said co-author Patrick Mäder, professor at TU Ilmenau.

  •

    Locomotion Vault will help guide innovations in virtual reality locomotion

    Experts in virtual reality locomotion have developed a new resource that analyses the many locomotion techniques currently available.
    Moving around in a virtual reality world can be very different from walking or driving a vehicle in the real world, and new approaches and techniques are continually being developed to meet the challenges of different applications.
    Called Locomotion Vault, the project was developed by researchers at the University of Birmingham, the University of Copenhagen, and Microsoft Research. It aims to provide a central, freely available resource for analysing the numerous locomotion techniques currently available.
    The aim is to make it easier for developers to choose an appropriate technique for their application, and for researchers to study which methods work best. By cataloguing available techniques in the Locomotion Vault, the project will also give creators and designers a head-start on identifying gaps where future investigation might be necessary. The database is an interactive resource, so it can be expanded through contributions from researchers and practitioners.
    Researcher Massimiliano Di Luca, of the University of Birmingham, said: “Locomotion is an essential part of virtual reality environments, but there are many challenges. A fundamental question, for example, is whether there should be a unique ‘best’ approach, or instead whether the tactics and methods used should be selected according to the application being designed or the idiosyncrasies of the available hardware. Locomotion Vault will help developers with these decisions.”
    The database also aims to address vital questions of accessibility and inclusivity. Both of these attributes were assessed in relation to each technique included in the Vault.
    Co-researcher, Mar Gonzalez-Franco, of Microsoft Research, said: “As new and existing technologies progress and become a more regular part of our lives, new challenges and opportunities around accessibility and inclusivity will present themselves. Virtual reality is a great example. We need to consider how VR can be designed to accommodate the variety of capabilities represented by those who want to use it.”
    The research team are presenting Locomotion Vault this week at the online Conference on Human Factors in Computing Systems (CHI 2021).
    “This is an area of constant and rapid innovation,” says co-author Hasti Seifi, of the University of Copenhagen. “Locomotion Vault is designed to help researchers tackle the challenges they face right now, but also to help support future discoveries in this exciting field.”
    Story Source:
    Materials provided by University of Birmingham.

  •

    Smaller chips open door to new RFID applications

    Researchers at North Carolina State University have made what is believed to be the smallest state-of-the-art RFID chip, which should drive down the cost of RFID tags. In addition, the chip’s design makes it possible to embed RFID tags into high value chips, such as computer chips, boosting supply chain security for high-end technologies.
    “As far as we can tell, it’s the world’s smallest Gen2-compatible RFID chip,” says Paul Franzon, corresponding author of a paper on the work and Cirrus Logic Distinguished Professor of Electrical and Computer Engineering at NC State.
    Gen2 RFID chips are state of the art and are already in widespread use. One of the things that sets these new RFID chips apart is their size: they measure 125 micrometers (µm) by 245 µm. Manufacturers were able to make smaller RFID chips using earlier technologies, but Franzon and his collaborators have not been able to identify smaller RFID chips that are compatible with the current Gen2 technology.
    “The size of an RFID tag is largely determined by the size of its antenna — not the RFID chip,” Franzon says. “But the chip is the expensive part.”
    The smaller the chip, the more chips you can get from a single silicon wafer. And the more chips you can get from the silicon wafer, the less expensive they are.
    “In practical terms, this means that we can manufacture RFID tags for less than one cent each if we’re manufacturing them in volume,” Franzon says.
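    The cost arithmetic is easy to sketch. The 125 µm × 245 µm die size comes from the study, but the wafer size, the per-wafer cost, and the zero-loss packing below are illustrative assumptions, not figures from the paper.

```python
import math

def dies_per_wafer(die_w_um, die_h_um, wafer_diameter_mm=300.0):
    """Upper bound on dies per wafer, ignoring edge loss and scribe lanes."""
    wafer_radius_um = wafer_diameter_mm * 1000.0 / 2.0
    wafer_area = math.pi * wafer_radius_um ** 2      # in square micrometers
    return int(wafer_area // (die_w_um * die_h_um))

n = dies_per_wafer(125, 245)        # die size reported in the paper
cost_per_die = 3000.0 / n           # assume $3000 to process one wafer
```

    Even with a generous assumed per-wafer cost, millions of dies per 300 mm wafer push the per-chip cost well under one cent, consistent with Franzon's estimate.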
    That makes it more feasible for manufacturers, distributors or retailers to use RFID tags to track lower-cost items. For example, the tags could be used to track all of the products in a grocery store without requiring employees to scan items individually.
    “Another advantage is that the design of the circuits we used here is compatible with a wide range of semiconductor technologies, such as those used in conventional computer chips,” says Kirti Bhanushali, who worked on the project as a Ph.D. student at NC State and is first author of the paper. “This makes it possible to incorporate RFID tags into computer chips, allowing users to track individual chips throughout their life cycle. This could help to reduce counterfeiting, and allow you to verify that a component is what it says it is.”
    “We’ve demonstrated what is possible, and we know that these chips can be made using existing manufacturing technologies,” Franzon says. “We’re now interested in working with industry partners to explore commercializing the chip in two ways: creating low-cost RFID at scale for use in sectors such as grocery stores; and embedding RFID tags into computer chips in order to secure high-value supply chains.”
    The paper, “A 125µm×245µm Mainly Digital UHF EPC Gen2 Compatible RFID tag in 55nm CMOS process,” was presented April 29 at the IEEE International Conference on RFID. The paper was co-authored by Wenxu Zhao, who worked on the project as a Ph.D. student at NC State; and Shepherd Pitts, who worked on the project while a research assistant professor at NC State.
    The work was done with support from the National Science Foundation, under grant 1422172; and from NC State’s Chancellor’s Innovation Fund.
    Story Source:
    Materials provided by North Carolina State University.

  •

    AI learns to type on a phone like humans

    Touchscreens are notoriously difficult to type on. Since we can’t feel the keys, we rely on the sense of sight to move our fingers to the right places and check for errors, a combination of efforts we can’t pull off at the same time. To really understand how people type on touchscreens, researchers at Aalto University and the Finnish Center for Artificial Intelligence (FCAI) have created the first artificial intelligence model that predicts how people move their eyes and fingers while typing.
    The AI model can simulate how a human user would type any sentence on any keyboard design. It makes errors, detects them — though not always immediately — and corrects them, very much like humans would. The simulation also predicts how people adapt to changing circumstances, such as how their writing style shifts when they start using a new auto-correction system or keyboard design.
    ‘Previously, touchscreen typing has been understood mainly from the perspective of how our fingers move. AI-based methods have helped shed new light on these movements: what we’ve discovered is the importance of deciding when and where to look. Now, we can make much better predictions on how people type on their phones or tablets,’ says Dr. Jussi Jokinen, who led the work.
    The study, to be presented at ACM CHI on 12 May, lays the groundwork for developing, for instance, better and even personalized text entry solutions.
    ‘Now that we have a realistic simulation of how humans type on touchscreens, it should be a lot easier to optimize keyboard designs for better typing — meaning less errors, faster typing, and, most importantly for me, less frustration,’ Jokinen explains.
    In addition to predicting how a generic person would type, the model is also able to account for different types of users, like those with motor impairments, and could be used to develop typing aids or interfaces designed with these groups in mind. For those facing no particular challenges, it can deduce from personal writing styles — by noting, for instance, the mistakes that repeatedly occur in texts and emails — what kind of a keyboard, or auto-correction system, would best serve a user.
    The novel approach builds on the group’s earlier empirical research, which provided the basis for a cognitive model of how humans type. The researchers then produced the generative model capable of typing independently. The work was done as part of a larger project on Interactive AI at the Finnish Center for Artificial Intelligence.
    The results are underpinned by a classic machine learning method, reinforcement learning, that the researchers extended to simulate people. Reinforcement learning is normally used to teach robots to solve tasks by trial and error; the team found a new way to use this method to generate behavior that closely matches that of humans — mistakes, corrections and all.
    ‘We gave the model the same abilities and bounds that we, as humans, have. When we asked it to type efficiently, it figured out how to best use these abilities. The end result is very similar to how humans type, without having to teach the model with human data,’ Jokinen says.
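    The learning-without-human-data idea can be illustrated with a minimal Q-learning loop: an agent is rewarded only when the pressed key matches the target character, and a typing policy emerges from trial and error alone. This toy (three keys, tabular Q-values) is vastly simpler than the published model; every parameter below is an invented stand-in.

```python
import random

# Toy reinforcement-learning sketch (much simpler than the published model):
# reward arrives only when the pressed key matches the target character.
def train_typist(alphabet, episodes=4000, eps=0.1, lr=0.5, seed=0):
    rng = random.Random(seed)
    q = {(c, k): 0.0 for c in alphabet for k in alphabet}   # Q[target, key]
    for _ in range(episodes):
        target = rng.choice(alphabet)
        if rng.random() < eps:
            key = rng.choice(alphabet)                         # explore
        else:
            key = max(alphabet, key=lambda k: q[(target, k)])  # exploit
        reward = 1.0 if key == target else 0.0
        q[(target, key)] += lr * (reward - q[(target, key)])
    return q

q = train_typist("abc")
policy = {c: max("abc", key=lambda k: q[(c, k)]) for c in "abc"}
```

    As in the study's approach, no human typing data enters the loop: given abilities (actions) and a goal (reward), efficient behavior is discovered rather than imitated.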
    Comparison to data of human typing confirmed that the model’s predictions were accurate. In the future, the team hopes to simulate slow and fast typing techniques to, for example, design useful learning modules for people who want to improve their typing.
    The paper, Touchscreen Typing As Optimal Supervisory Control, will be presented 12 May 2021 at the ACM CHI conference.
    Video: https://www.youtube.com/watch?v=6cl2OoTNB6g&t=1s
    Story Source:
    Materials provided by Aalto University.

  •

    Harnessing the hum of fluorescent lights for more efficient computing

    The property that makes fluorescent lights buzz could power a new generation of more efficient computing devices that store data with magnetic fields, rather than electricity.
    A team led by University of Michigan researchers has developed a material that is at least twice as “magnetostrictive” as others in its class, and far less costly. In addition to computing, it could also lead to better magnetic sensors for medical and security devices.
    Magnetostriction, which causes the buzz of fluorescent lights and electrical transformers, occurs when a material’s shape and magnetic field are linked — that is, a change in shape causes a change in magnetic field. The property could be key to a new generation of computing devices called magnetoelectrics.
    Magnetoelectric chips could make everything from massive data centers to cell phones far more energy efficient, slashing the electricity requirements of the world’s computing infrastructure.
    Made of a combination of iron and gallium, the material is detailed in a paper published May 12 in Nature Communications. The team is led by U-M materials science and engineering professor John Heron and includes researchers from Intel; Cornell University; University of California, Berkeley; University of Wisconsin; Purdue University and elsewhere.
    Magnetoelectric devices use magnetic fields instead of electricity to store the digital ones and zeros of binary data. Tiny pulses of electricity cause them to expand or contract slightly, flipping their magnetic field from positive to negative or vice versa. Because they don’t require a steady stream of electricity, as today’s chips do, they use a fraction of the energy.
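    The shape-field coupling itself is often summarized by a standard single-constant relation for an isotropic magnetostrictive material. This is general textbook physics, not a result from the paper, and the saturation magnetostriction value used below is illustrative rather than the iron-gallium figure.

```python
import math

# Standard single-constant relation (general physics, not from the paper):
# fractional length change of an isotropic magnetostrictive material when
# its magnetization makes angle theta with the measurement direction.
def magnetostrictive_strain(theta_rad, lambda_s):
    return 1.5 * lambda_s * (math.cos(theta_rad) ** 2 - 1.0 / 3.0)

lambda_s = 2e-4                                     # illustrative value
aligned = magnetostrictive_strain(0.0, lambda_s)            # +lambda_s
perpendicular = magnetostrictive_strain(math.pi / 2, lambda_s)  # -lambda_s/2
```

    Reversing the coupling is what magnetoelectric bits exploit: an applied strain pulse shifts which magnetization direction is energetically favored, flipping the stored value without a steady current.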