More stories

  • Displays controlled by flexible fins and liquid droplets more versatile, efficient than LED screens

    Flexible displays that can change color, convey information and even send veiled messages via infrared radiation are now possible, thanks to new research from the University of Illinois Urbana-Champaign. Engineers inspired by the morphing skins of animals like chameleons and octopuses have developed capillary-controlled robotic flapping fins to create switchable optical and infrared light multipixel displays that are 1,000 times more energy efficient than light-emitting devices.
    The new study, led by mechanical science and engineering professor Sameh Tawfick, demonstrates how bendable fins and fluids can switch simultaneously between straight and bent shapes and between hot and cold states by controlling the volume and temperature of tiny fluid-filled pixels. Varying the volume of fluid within a pixel changes the direction in which its flaps flip, similar to old-fashioned flip clocks, and varying the temperature allows the pixels to communicate via infrared energy.
    The study findings are published in the journal Science Advances.
    Tawfick’s interest in the interaction of elastic and capillary forces — or elasto-capillarity — started as a graduate student, spanned the basic science of hair wetting and led to his research in soft robotic displays at Illinois.
    “An everyday example of elasto-capillarity is what happens to our hair when we get in the shower,” Tawfick said. “When our hair gets wet, it sticks together and bends or bundles as capillary forces are applied and released when it dries out.”
    In the lab, the team created small boxes, or pixels, a few millimeters in size, that contain fins made of a flexible polymer that bend when the pixels are filled with fluid and drained using a system of tiny pumps. The pixels can have single or multiple fins and are arranged into arrays that form a display to convey information, Tawfick said.

    “We are not limited to cubic pixel boxes, either,” Tawfick said. “The fins can be arranged in various orientations to create different images, even along curved surfaces. The control is precise enough to achieve complex motions, like simulating the opening of a flower bloom.”
    The study reports that another feature of the new displays is the ability to send two simultaneous signals — one that can be seen with the human eye and another that can only be seen with an infrared camera.
    “Because we can control the temperature of these individual droplets, we can display messages that can only be seen using an infrared device,” Tawfick said. “Or we can send two different messages at the same time.”
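    The dual-message idea can be pictured with a small toy model (purely illustrative and hypothetical, not the authors' control scheme): each pixel carries two independent bits, a visible one set by droplet volume (flap straight or bent) and an infrared one set by droplet temperature (hot or cold).

```python
# Toy model of a dual-channel flap-pixel display (illustrative only).
# Each pixel encodes one visible bit (flap straight/bent, set by fluid
# volume) and one infrared bit (droplet hot/cold, set by temperature).

def encode(visible_bits, infrared_bits):
    """Pair visible and IR bits into per-pixel (flap, temperature) states."""
    assert len(visible_bits) == len(infrared_bits)
    pixels = []
    for v, ir in zip(visible_bits, infrared_bits):
        flap = "bent" if v else "straight"  # controlled by droplet volume
        temp = "hot" if ir else "cold"      # controlled by droplet temperature
        pixels.append((flap, temp))
    return pixels

def read_visible(pixels):
    return [1 if flap == "bent" else 0 for flap, _ in pixels]

def read_infrared(pixels):
    return [1 if temp == "hot" else 0 for _, temp in pixels]

# Two different 4-bit messages sent simultaneously through the same pixels:
pixels = encode([1, 0, 1, 1], [0, 0, 1, 0])
print(read_visible(pixels))   # what the eye sees
print(read_infrared(pixels))  # what an infrared camera sees
```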
    However, there are a few limitations to the new displays, Tawfick said.
    While building the new devices, the team found that the tiny pumps needed to control the pixel fluids were not commercially available, and the entire device is sensitive to gravity — meaning that it only works while in a horizontal position.

    “Once we turn the display by 90 degrees, the performance is greatly degraded, which is detrimental to applications like billboards and other signs intended for the public,” Tawfick said. “The good news is, we know that when liquid droplets become small enough, they become insensitive to gravity, like when you see a rain droplet sticking on your window and it doesn’t fall. We have found that if we use fluid droplets that are five times smaller, gravity will no longer be an issue.”
    The team said that because the science behind gravity’s effect on droplets is well understood, it will provide the focal point for their next application of the emerging technology.
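    The droplet-size threshold Tawfick describes corresponds to the capillary length, the scale below which surface tension dominates gravity. A quick back-of-envelope check, using textbook values for water (the study's working fluid may differ):

```python
import math

# Capillary length: lambda_c = sqrt(gamma / (rho * g)).
# Droplets much smaller than this stick to surfaces without sagging,
# as in the rain-droplet-on-a-window example.

gamma = 0.072   # surface tension of water, N/m (room temperature)
rho = 1000.0    # density of water, kg/m^3
g = 9.81        # gravitational acceleration, m/s^2

capillary_length = math.sqrt(gamma / (rho * g))
print(f"capillary length ≈ {capillary_length * 1000:.1f} mm")  # ≈ 2.7 mm
```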
    Tawfick said he is very excited to see where this technology is headed because it brings a fresh idea to the large market for reflective displays. “We have developed a whole new breed of displays that require minimal energy, are scalable and even flexible enough to be placed onto curved surfaces.”
    Illinois researchers Jonghyun Ha, Yun Seong Kim, Chengzhang Li, Jonghyun Hwang, Sze Chai Leung and Ryan Siu also participated in this research.
    The Air Force Office of Scientific Research and the National Science Foundation supported this research.

  • Robotic glove that ‘feels’ lends a ‘hand’ to relearn playing piano after a stroke

    For people who have suffered neurotrauma such as a stroke, everyday tasks can be extremely challenging because of decreased coordination and strength in one or both upper limbs. These problems have spurred the development of robotic devices to help enhance their abilities. However, the rigid nature of these assistive devices can be problematic, especially for more complex tasks like playing a musical instrument.
    A first-of-its-kind robotic glove is lending a “hand” and providing hope to piano players who have suffered a disabling stroke. Developed by researchers from Florida Atlantic University’s College of Engineering and Computer Science, the soft robotic hand exoskeleton uses artificial intelligence to improve hand dexterity.
    Combining flexible tactile sensors, soft actuators and AI, this robotic glove is the first to “feel” the difference between correct and incorrect versions of the same song and to combine these features into a single hand exoskeleton.
    “Playing the piano requires complex and highly skilled movements, and relearning tasks involves the restoration and retraining of specific movements or skills,” said Erik Engeberg, Ph.D., senior author, a professor in FAU’s Department of Ocean and Mechanical Engineering within the College of Engineering and Computer Science, and a member of the FAU Center for Complex Systems and Brain Sciences and the FAU Stiles-Nicholson Brain Institute. “Our robotic glove is composed of soft, flexible materials and sensors that provide gentle support and assistance to individuals to relearn and regain their motor abilities.”
    Researchers integrated special sensor arrays into each fingertip of the robotic glove. Unlike prior exoskeletons, this new technology provides precise force and guidance in recovering the fine finger movements required for piano playing. By monitoring and responding to users’ movements, the robotic glove offers real-time feedback and adjustments, making it easier for them to grasp the correct movement techniques.
    To demonstrate the robotic glove’s capabilities, researchers programmed it to feel the difference between correct and incorrect versions of the well-known tune “Mary Had a Little Lamb,” played on the piano. To introduce variations in the performance, they created a pool of 12 different types of errors that could occur at the beginning or end of a note, or due to timing that was either premature or delayed, and that persisted for 0.1, 0.2 or 0.3 seconds. The ten song variations consisted of three groups of three error variations each, plus the correct song played with no errors.

    To classify the song variations, Random Forest (RF), K-Nearest Neighbor (KNN) and Artificial Neural Network (ANN) algorithms were trained with data from the tactile sensors in the fingertips. The robotic glove distinguished the correct and incorrect versions of the song both on its own and while worn by a person, and the classification accuracy of the three algorithms was compared in each setting.
    Results of the study, published in the journal Frontiers in Robotics and AI, demonstrated that the ANN algorithm had the highest classification accuracy of 97.13 percent with the human subject and 94.60 percent without the human subject. The algorithm successfully determined the percentage error of a certain song as well as identified key presses that were out of time. These findings highlight the potential of the smart robotic glove to aid individuals who are disabled to relearn dexterous tasks like playing musical instruments.
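    The classification pipeline described above can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in data; the study's actual features, preprocessing and hyperparameters are not specified here and the names below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for fingertip tactile-sensor features: one row per performance,
# labeled with which of the 10 song versions it was (0 = error-free).
n_samples, n_features, n_classes = 500, 40, 10
y = rng.integers(0, n_classes, size=n_samples)
X = rng.normal(size=(n_samples, n_features)) + y[:, None]  # separable classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The three classifier families named in the study, compared on held-out data.
models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "ANN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```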
    Researchers designed the robotic glove using 3D printed polyvinyl acid stents and hydrogel casting to integrate five actuators into a single wearable device that conforms to the user’s hand. The fabrication process is new, and the form factor could be customized to the unique anatomy of individual patients with the use of 3D scanning technology or CT scans.
    “Our design is significantly simpler than most designs as all the actuators and sensors are combined into a single molding process,” said Engeberg. “Importantly, although this study’s application was for playing a song, the approach could be applied to myriad tasks of daily life and the device could facilitate intricate rehabilitation programs customized for each patient.”
    Clinicians could use the data to develop personalized action plans to pinpoint patient weaknesses, which may present themselves as sections of the song that are consistently played erroneously and can be used to determine which motor functions require improvement. As patients progress, more challenging songs could be prescribed by the rehabilitation team in a game-like progression to provide a customizable path to improvement.
    “The technology developed by professor Engeberg and the research team is truly a game-changer for individuals with neuromuscular disorders and reduced limb functionality,” said Stella Batalama, Ph.D., dean of the FAU College of Engineering and Computer Science. “Although other soft robotic actuators have been used to play the piano, our robotic glove is the only one that has demonstrated the capability to ‘feel’ the difference between correct and incorrect versions of the same song.”
    Study co-authors are Maohua Lin, first author and a Ph.D. student; Rudy Paul, a graduate student; and Moaed Abd, Ph.D., a recent graduate; all from the FAU College of Engineering and Computer Science; James Jones, Boise State University; Darryl Dieujuste, a graduate research assistant, FAU College of Engineering and Computer Science; and Harvey Chim, M.D., a professor in the Division of Plastic and Reconstructive Surgery at the University of Florida.
    This research was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIH), the National Institute of Aging of the NIH and the National Science Foundation. This research was supported in part by a seed grant from the FAU College of Engineering and Computer Science and the FAU Institute for Sensing and Embedded Network Systems Engineering (I-SENSE).

  • Researchers demonstrate single-molecule electronic ‘switch’ using ladder-like molecules

    Researchers have demonstrated a new material for single-molecule electronic switches, which can effectively vary current at the nanoscale in response to external stimuli. The material for this molecular switch has a unique structure created by locking a linear molecular backbone into a ladder-type structure. A new study finds that the ladder-type molecular structure greatly enhances the stability of the material, making it highly promising for use in single-molecule electronics applications.
    Reported in the journal Chem, the study shows that the ladder-type molecule serves as a robust and reversible molecular switch over a wide range of conductivity levels and different molecular states.
    “Our work provides a significant step forward towards the development of functional molecular electronic devices,” says Charles Schroeder, who is the James Economy Professor of Materials Science and Engineering and Professor of Chemical and Biomolecular Engineering at the University of Illinois Urbana-Champaign.
    To enhance the chemical and mechanical stability of the molecule, the team used new strategies in chemical synthesis to lock the molecular backbone and prevent the molecule from rotating, much as replacing a rope ladder with a rigid metal or wooden one.
    “Imagine a light switch that we turn on and off every day, but instead of flipping an actual switch, we add chemical or electrochemical stimuli to turn the electrical signal from the material on and off,” says lead author and former graduate student Jialing (Caroline) Li. Compared to bulk inorganic materials, organic single molecules can be made into basic electrical components, like wires and transistors, and will help enable the ultimate goal of shrinking electrical circuits.
    Single-molecule electronic devices are constructed as junctions with a single molecule bridge that is generally anchored to two terminal groups connected to metal electrodes. These devices can be made programmable by using a stimuli-responsive element in the bridge that can be switched on and off by using an array of stimuli such as pH, optical fields, electric fields, magnetic fields, mechanical forces and electrochemical control.
    “The molecular scale switch has been a very popular subject in studies of single molecule electronics,” Li explains. “But realizing a multi-state switch on a molecular scale is challenging because we require a material that is conductive and has several different molecular charge states, and we require the material to be very stable so it can be switched on and off for many cycles.”
    Though Li explored many other organic materials, the drawback of those materials was that they were not stable in ambient conditions and could break down easily when exposed to oxygen. After searching for the ideal material for a long time, Li struck gold when she stumbled upon a material from a research group at Texas A&M University (collaborators on this project) and immediately identified it as ideal for her purposes.
    Locking the backbone of the molecule prevents hydrolysis (chemical breakdown through reaction with water) and other degradation reactions, and it makes the material easier to characterize because the molecule can no longer rotate and change form. The rigid, coplanar structure also enhances the molecule’s electronic properties, easing the flow of electrons through the material. Under external stimuli, the ladder-type structure supports stable molecular charge states with significantly different levels of conductivity, which is what makes multi-state switching possible.
    This material meets almost all of the requirements needed to serve in single-molecule electronic devices: it is stable in ambient conditions, can be cycled on/off many times, is conductive (although not as conductive as metal) and has different molecular states accessible to be utilized.
    “Researchers have been struggling to minimize the size of the transistor to fit as many as possible on chips for semiconductors, usually using inorganic materials like silicon,” Li says. “An alternative way of doing that is using organic materials like a single-molecule material to conduct the electrons and replace the inorganic counterparts.” The ladder-type structure used in this research shows promise to be used as functional materials for single-molecule transistors.
    For now, only one unit of the molecule is used for single-molecule electronics, but it is possible to extend the length to include many repeating units to make a longer molecular wire. The team believes that the material will still be highly conductive, even over a longer distance.

  • Discovering features of band topology in amorphous thin films

    In recent years, scientists have been studying special materials called topological materials, with special attention paid to the shape, i.e., topology, of their electronic structures (electronic bands). Although it is not visible in real space, their unusual shape in topological materials produces various unique properties that can be suitable for making next-generation devices.
    It was thought that exploiting topological physical properties required crystalline materials, in which atoms are highly ordered and arranged in repeating patterns. Materials in the amorphous state, where atoms are disordered and ordered only over short distances, were considered unsuitable for hosting the outstanding physical properties of topological materials.
    Now, a collaborative research group has verified that even amorphous materials can have these special properties. The group was led by Associate Professor Kohei Fujiwara and Professor Atsushi Tsukazaki from Tohoku University’s Institute for Materials Research (IMR); Lecturer Yasuyuki Kato and Professor Yukitoshi Motome from the University of Tokyo’s Graduate School of Engineering and Associate Professor Hitoshi Abe at the High Energy Accelerator Research Organization’s Institute for Materials Structure Science.
    Details of their findings were reported in the journal Nature Communications on June 13, 2023.
    “We discovered that the concept of band topology, which has been discussed mainly in crystals, is also valid and technologically useful in amorphous states,” stated Fujiwara.
    To make their discovery, the team performed experiments and model calculations on iron-tin amorphous thin films. They demonstrated that despite a short-range atom arrangement, the amorphous material still showed the same special effects as in the crystalline materials, notably the anomalous Hall effect and the Nernst effect.
    “Amorphous materials are easier and cheaper to make compared to crystals, so this opens up new possibilities for developing devices using these materials. This could lead to advancements in sensing technology, which is important for creating the Internet of Things (IoT) where many devices are connected and communicate with each other,” adds Fujiwara.
    Looking ahead, the group is eager to unearth more amorphous materials and develop innovative devices using them.

  • Scientists edge toward scalable quantum simulations on a photonic chip

    Scientists have made an important step toward developing computers advanced enough to simulate complex natural phenomena at the quantum level. While these types of simulations are too cumbersome or outright impossible for classical computers to handle, photonics-based quantum computing systems could provide a solution.
    A team of researchers from the University of Rochester’s Hajim School of Engineering & Applied Sciences developed a new chip-scale optical quantum simulation system that could help make such a system feasible. The team, led by Qiang Lin, a professor of electrical and computer engineering and optics, published their findings in Nature Photonics.
    Lin’s team ran the simulations in a synthetic space that mimics the physical world by controlling the frequency, or color, of quantum entangled photons as time elapses. This approach differs from the traditional photonics-based computing methods in which the paths of photons are controlled, and also drastically reduces the physical footprint and resource requirements.
    “For the first time, we have been able to produce a quantum-correlated synthetic crystal,” says Lin. “Our approach significantly extends the dimensions of the synthetic space, enabling us to perform simulations of several quantum-scale phenomena such as random walks of quantum entangled photons.”
    The researchers say that this system can serve as a basis for more intricate simulations in the future.
    “Though the systems being simulated are well understood, this proof-of-principle experiment demonstrates the power of this new approach for scaling up to more complex simulations and computation tasks, something we are very excited to investigate in the future,” says Usman Javid ’23 PhD (optics), the lead author on the study.
    Other coauthors from Lin’s group include Raymond Lopez-Rios, Jingwei Ling, Austin Graf, and Jeremy Staffa.
    The project was supported with funding from the National Science Foundation, the Defense Threat Reduction Agency’s Joint Science and Technology Office for Chemical and Biological Defense, and the Defense Advanced Research Projects Agency.

  • Researchers teach an AI to write better chart captions

    Chart captions that explain complex trends and patterns are important for improving a reader’s ability to comprehend and retain the data being presented. And for people with visual disabilities, the information in a caption often provides their only means of understanding the chart.
    But writing effective, detailed captions is a labor-intensive process. While autocaptioning techniques can alleviate this burden, they often struggle to describe cognitive features that provide additional context.
    To help people author high-quality chart captions, MIT researchers have developed a dataset to improve automatic captioning systems. Using this tool, researchers could teach a machine-learning model to vary the level of complexity and type of content included in a chart caption based on the needs of users.
    The MIT researchers found that machine-learning models trained for autocaptioning with their dataset consistently generated captions that were precise, semantically rich, and described data trends and complex patterns. Quantitative and qualitative analyses revealed that their models captioned charts more effectively than other autocaptioning systems.
    The team’s goal is to provide the dataset, called VisText, as a tool researchers can use as they work on the thorny problem of chart autocaptioning. These automatic systems could help provide captions for uncaptioned online charts and improve accessibility for people with visual disabilities, says co-lead author Angie Boggust, a graduate student in electrical engineering and computer science at MIT and member of the Visualization Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
    “We’ve tried to embed a lot of human values into our dataset so that when we and other researchers are building automatic chart-captioning systems, we don’t end up with models that aren’t what people want or need,” she says.

    Boggust is joined on the paper by co-lead author and fellow graduate student Benny J. Tang and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in CSAIL. The research will be presented at the Annual Meeting of the Association for Computational Linguistics.
    Human-centered analysis
    The researchers were inspired to develop VisText from prior work in the Visualization Group that explored what makes a good chart caption. In that study, researchers found that sighted users and blind or low-vision users had different preferences for the complexity of semantic content in a caption.
    The group wanted to bring that human-centered analysis into autocaptioning research. To do that, they developed VisText, a dataset of charts and associated captions that could be used to train machine-learning models to generate accurate, semantically rich, customizable captions.
    Developing effective autocaptioning systems is no easy task. Existing machine-learning methods often try to caption charts the way they would an image, but people and models interpret natural images differently from how we read charts. Other techniques skip the visual content entirely and caption a chart using its underlying data table. However, such data tables are often not available after charts are published.

    Given the shortfalls of using images and data tables, VisText also represents charts as scene graphs. Scene graphs, which can be extracted from a chart image, contain all the chart data but also include additional image context.
    “A scene graph is like the best of both worlds — it contains almost all the information present in an image while being easier to extract from images than data tables. As it’s also text, we can leverage advances in modern large language models for captioning,” Tang explains.
    They compiled a dataset that contains more than 12,000 charts — each represented as a data table, image, and scene graph — as well as associated captions. Each chart has two separate captions: a low-level caption that describes the chart’s construction (like its axis ranges) and a higher-level caption that describes statistics, relationships in the data, and complex trends.
    The researchers generated low-level captions using an automated system and crowdsourced higher-level captions from human workers.
    “Our captions were informed by two key pieces of prior research: existing guidelines on accessible descriptions of visual media and a conceptual model from our group for categorizing semantic content. This ensured that our captions featured important low-level chart elements like axes, scales, and units for readers with visual disabilities, while retaining human variability in how captions can be written,” says Tang.
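    A single record in such a dataset might be organized as follows. This is a hypothetical sketch of the structure described above (three chart representations plus two caption levels), not VisText's actual schema; all field names and values are invented for illustration.

```python
# Hypothetical sketch of one chart record: the three representations
# (image, data table, scene graph) and the two caption levels.
record = {
    "image_path": "charts/0001.png",
    "data_table": [
        {"year": 2020, "sales": 120},
        {"year": 2021, "sales": 180},
        {"year": 2022, "sales": 150},
    ],
    # Scene graph: chart structure extracted from the image, as text,
    # so language models can consume it directly.
    "scene_graph": (
        "chart(type=bar) axis(x, label=year, range=2020-2022) "
        "axis(y, label=sales, range=0-200) "
        "bar(2020, 120) bar(2021, 180) bar(2022, 150)"
    ),
    # Low-level caption: construction details (axes, scales, units).
    "caption_low": "A bar chart of sales by year, with the x-axis ranging "
                   "from 2020 to 2022 and the y-axis from 0 to 200.",
    # Higher-level caption: statistics, relationships, trends.
    "caption_high": "Sales peaked in 2021 at 180 before falling to 150 in 2022.",
}
print(sorted(record))
```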
    Translating charts
    Once they had gathered chart images and captions, the researchers used VisText to train five machine-learning models for autocaptioning. They wanted to see how each representation — image, data table, and scene graph — and combinations of the representations affected the quality of the caption.
    “You can think about a chart captioning model like a model for language translation. But instead of saying, translate this German text to English, we are saying translate this ‘chart language’ to English,” Boggust says.
    Their results showed that models trained with scene graphs performed as well or better than those trained using data tables. Since scene graphs are easier to extract from existing charts, the researchers argue that they might be a more useful representation.
    They also trained models with low-level and high-level captions separately. This technique, known as semantic prefix tuning, enabled them to teach the model to vary the complexity of the caption’s content.
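    In semantic prefix tuning, a short textual prefix tells the model which level of caption to produce, so a single model learns both. A minimal sketch follows; the prefix strings and function name are hypothetical, not the actual strings used in the study.

```python
# Minimal sketch of semantic prefix tuning: the desired caption level is
# prepended to the model input, so one trained model can generate either
# a low-level (construction) or high-level (trends) caption on demand.

def build_input(scene_graph: str, level: str) -> str:
    prefixes = {
        "low": "translate chart to low-level caption: ",   # axes, scales, units
        "high": "translate chart to high-level caption: ", # statistics, trends
    }
    return prefixes[level] + scene_graph

sg = "chart(type=line) axis(x, label=month) axis(y, label=temperature)"
print(build_input(sg, "low"))
print(build_input(sg, "high"))
```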
    In addition, they conducted a qualitative examination of captions produced by their best-performing method and categorized six types of common errors. For instance, a directional error occurs if a model says a trend is decreasing when it is actually increasing.
    This fine-grained, robust qualitative evaluation was important for understanding how the model was making its errors. For example, using quantitative methods, a directional error might incur the same penalty as a repetition error, where the model repeats the same word or phrase. But a directional error could be more misleading to a user than a repetition error. The qualitative analysis helped them understand these types of subtleties, Boggust says.
    These sorts of errors also expose limitations of current models and raise ethical considerations that researchers must consider as they work to develop autocaptioning systems, she adds.
    Generative machine-learning models, such as those that power ChatGPT, have been shown to hallucinate or give incorrect information that can be misleading. While there is a clear benefit to using these models for autocaptioning existing charts, it could lead to the spread of misinformation if charts are captioned incorrectly.
    “Maybe this means that we don’t just caption everything in sight with AI. Instead, perhaps we provide these autocaptioning systems as authorship tools for people to edit. It is important to think about these ethical implications throughout the research process, not just at the end when we have a model to deploy,” she says.
    Boggust, Tang, and their colleagues want to continue optimizing the models to reduce some common errors. They also want to expand the VisText dataset to include more charts, and more complex charts, such as those with stacked bars or multiple lines. And they would also like to gain insights into what these autocaptioning models are actually learning about chart data.
    This research was supported, in part, by a Google Research Scholar Award, the National Science Foundation, the MLA@CSAIL Initiative, and the United States Air Force Research Laboratory.

  • Combining maths with music leads to higher scores, suggests review of 50 years of research

    Children do better at maths when music is a key part of their lessons, an analysis of almost 50 years of research on the topic has revealed.
    It is thought that music can make maths more enjoyable, keep students engaged and help ease any fear or anxiety they have about maths. Motivation may be increased and pupils may appreciate maths more, the peer-reviewed article in Educational Studies details.
    Techniques for integrating music into maths lessons range from clapping to pieces with different rhythms when learning numbers and fractions, to using maths to design musical instruments.
    Previous research has shown that children who are better at music also do better at maths. But whether teaching music to youngsters actually improves their maths has been less clear.
    To find out more, Turkish researcher Dr. Ayça Akin, from the Department of Software Engineering, Antalya Belek University, searched academic databases for research on the topic published between 1975 and 2022.
    She then combined the results of 55 studies from around the world, involving almost 78,000 young people from kindergarten pupils to university students, to come up with an answer.

    Three types of musical intervention were included in the meta-analysis: standardised music interventions (typical music lessons, in which children sing, listen to and compose music), instrumental musical interventions (lessons in which children learn how to play instruments, either individually or as part of a band) and music-maths integrated interventions, in which music is integrated into maths lessons.
    Students took maths tests before and after taking part in the intervention and the change in their scores was compared with that of youngsters who didn’t take part in an intervention.
    The use of music, whether in separate lessons or as part of maths classes, was associated with greater improvement in maths over time.
    The integrated lessons had the biggest effect, with around 73% of students who had integrated lessons doing significantly better than youngsters who didn’t have any type of musical intervention.
    Some 69% of students who learned how to play instruments and 58% of students who had normal music lessons improved more than pupils with no musical intervention.
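    Percentages like these are often obtained by converting a standardized effect size into the "common language effect size": the probability that a randomly chosen intervention student outscores a randomly chosen control student. Whether the review used exactly this conversion is an assumption on our part; the mapping itself is standard.

```python
import math

def cles(d: float) -> float:
    """Common-language effect size: P(random treated score > random control
    score) for a standardized mean difference d (Cohen's d), assuming
    normally distributed scores with equal variances.
    Equals Phi(d / sqrt(2)) = 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1.0 + math.erf(d / 2.0))

# No effect gives 50/50; a large effect (d = 1.2) gives roughly 80%.
print(round(cles(0.0), 2), round(cles(1.2), 2))  # 0.5 0.8
```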

    The results also indicate that music helps more with learning arithmetic than other types of maths and has a bigger impact on younger pupils and those learning more basic mathematical concepts.
    Dr Akin, who carried out the research while at Turkey’s National Ministry of Education and Antalya Belek University, points out that maths and music have much in common, such as the use of symbols and symmetry. Both subjects also require abstract thought and quantitative reasoning.
    Arithmetic may lend itself particularly well to being taught through music because core concepts, such as fractions and ratios, are also fundamental to music. For example, musical notes of different lengths can be represented as fractions and added together to create several bars of music.
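    The fraction arithmetic is easy to make concrete (a generic illustration, not an example taken from the review):

```python
from fractions import Fraction

# Note lengths as fractions of a whole (4/4) bar.
half, quarter, eighth = Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)

# A half note, a quarter note and two eighth notes fill one bar exactly:
bar = half + quarter + eighth + eighth
assert bar == 1
print(bar)  # prints 1
```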
    Integrated lessons may be especially effective because they allow pupils to build connections between the maths and music and provide extra opportunities to explore, interpret and understand maths.
    Plus, if they are more enjoyable than traditional maths lessons, any anxiety students feel about maths may be eased.
    Limitations of the analysis include the relatively small number of studies available for inclusion. This meant it wasn’t possible to look at the effect of factors such as gender, socio-economic status and length of musical instruction on the results.
    Dr Akin, who is now based at Antalya Belek University, concludes that while musical instruction overall has a small to moderate effect on achievement in maths, integrated lessons have a large impact.
    She adds: “Encouraging mathematics and music teachers to plan lessons together could help ease students’ anxiety about mathematics, while also boosting achievement.”


    We are wasting up to 20 percent of our time on computer problems

    Even though our computers are better now than they were 15 years ago, they still malfunction 11 to 20 per cent of the time, a new study from the University of Copenhagen and Roskilde University concludes. The researchers behind the study therefore see major gains for society in rethinking the systems and involving users more in their development.
    An endlessly rotating beach ball, a program that crashes without saving data, systems that demand illogical procedures or simply do not work: struggling with computers remains a familiar experience for most of us, according to new Danish research.
    On average, we waste between 11 and 20 per cent of our time in front of our computers on systems that do not work, or that are so difficult to understand that we cannot perform the task we want. That is far from good enough, says Professor Kasper Hornbæk, one of the researchers behind the study.
    “It’s incredible that the figure is so high. However, most people experience frustration when using computers and can tell a horror story about an important PowerPoint presentation that was not saved or a system that crashed at a critical moment. Everyone knows that it is difficult to create IT systems that match people’s needs, but the figure should be much lower, and one thing that it shows is that ordinary people aren’t involved enough when the systems are developed,” he says.
    Professor Morten Hertzum, the other researcher behind the study, emphasises that most frustrations are experienced in connection with the performance of completely ordinary tasks.
    “The frustrations are not due to people using their computers for something highly advanced, but because they experience problems in their performance of everyday tasks. This makes it easier to involve users in identifying problems. But it also means that problems that are not identified and solved will probably frustrate a large number of users,” says Morten Hertzum.

    The problems are only too recognisable
    To examine this issue, the researchers recruited 234 participants who spend between six and eight hours a day in front of a computer in their day-to-day work.
    For one hour, the participants were asked to report situations in which their computer would not work properly, or in which they were frustrated at being unable to perform the task they wanted.
    The problems the participants reported most often included: “the system was slow,” “the system froze temporarily,” “the system crashed” and “it is difficult to find things.” The participants had backgrounds as students, accountants and consultants, and several of them actually worked in the IT industry.
    “A number of the participants in the survey were IT professionals, while most of the other participants were highly competent IT and computer users. Nevertheless, they encountered these problems, and it turns out that this involves some fundamental functions,” says Kasper Hornbæk.

    The participants in the survey also responded that 84 per cent of the episodes had occurred before and that 87 per cent of the episodes could happen again. And, according to Kasper Hornbæk, we are having the same fundamental problems today that we had 15-20 years ago.
    “The two biggest categories of problems are still about insufficient performance and lack of user-friendliness,” he says.
    Morten Hertzum adds: “Our technology can do more today, and it has also become better, but, at the same time, we expect more from it. Even though downloads are faster now, they are often still experienced as frustratingly slow.”
    88 per cent use a computer at work
    According to Statistics Denmark, 88 per cent of Danes used computers, laptops, smartphones, tablets or other mobile devices at work in 2018. In this context, the new study indicates that a half to a whole day of a normal working week may be wasted on computer problems.
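    The arithmetic behind that estimate is easy to check. A quick sketch, assuming a standard 37-hour Danish working week split over five days (the 37-hour figure is an assumption for illustration, not a number from the study):

```python
# Back-of-the-envelope check: 11-20 per cent of a working week,
# assuming a 37-hour week split over five days.
HOURS_PER_WEEK = 37
HOURS_PER_DAY = HOURS_PER_WEEK / 5  # 7.4 hours

for share in (0.11, 0.20):
    wasted = share * HOURS_PER_WEEK
    print(f"{share:.0%}: {wasted:.1f} hours, ~{wasted / HOURS_PER_DAY:.2f} working days")
```

    Under these assumptions, 11 per cent of the week comes to roughly four hours and 20 per cent to about 7.4 hours, i.e. about half to a whole working day, consistent with the article’s estimate.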
    “There is a lot of productivity lost in workplaces throughout Denmark because people are unable to perform their ordinary work because the computer is not running as it should. It also causes a lot of frustrations for the individual user,” says Kasper Hornbæk.
    This means there are major benefits to be gained for society if we can reduce the problems we experience in front of our computers. According to Kasper Hornbæk, some of those gains can be achieved by investing more resources in rethinking how faults are presented to us on the computer.
    “Part of the solution may be to shield us from knowing that the computer is working to solve a problem. In reality, there is no reason why we need to look at an incomprehensible box of commands or a frozen computer. The computer could easily solve the problem without displaying this, while providing a back-up version of the system so that we could continue working on our tasks undisturbed,” says Kasper Hornbæk.
    At the same time, IT developers should involve the users even more when designing the systems to make them as easy to use — and understand — as possible. For, according to the researcher, there are no poor IT users, only poor systems.
    “When we’re all surrounded by IT systems that we’re cursing, it’s very healthy to ascertain that it’s probably not the users that are the problem, but those who make the systems. The study clearly shows that there is still much room for improvement, and we therefore hope that it can create more focus on making more user-friendly systems in the future,” concludes Kasper Hornbæk.
    Facts:
    • 234 participants, aged 10-69, took part in the survey; the majority spent between six and eight hours a day in front of a computer.
    • The participants reported an average of one computer problem or frustration per hour.
    • The participants responded that 84 per cent of the episodes had occurred before and that 87 per cent could happen again.
    • A large share of the problems concerned slow systems and systems that did not respond or crashed.
    • The study is a new version of a study conducted 15 years ago, which found that participants wasted as much as 40-50 per cent of their time on frustrations with the computer.
    • The study was conducted by Morten Hertzum from Roskilde University and Kasper Hornbæk from the University of Copenhagen.