More stories

  • Accessibility toolkit for game engine Unity

    The growing popularity of video games is putting an increased focus on their accessibility for people with disabilities. While large productions increasingly take this into account by adding accessibility features, such features are usually absent from indie productions due to a lack of resources. To make them easier to implement, Klemens Strasser developed an accessibility toolkit for the Unity game engine as part of his master’s thesis at the Institute of Interactive Systems and Data Science at Graz University of Technology (TU Graz). The toolkit is freely available on GitHub and makes it easy to integrate support tools for people with visual impairments into a game project. Together with his master’s thesis supervisor Johanna Pirker, Klemens Strasser has now published the toolkit, along with a guide to making games more accessible, in a paper.
    Help with orientation
    When creating the “toolbox,” Klemens Strasser focused on four points: (1) support for operating menus, (2) perception of the game environment, (3) control on a fixed grid and (4) free navigation when the character can move in all directions. The first three points could be handled by a screen reader, but free navigation required implementing a so-called navigation agent. After calculating a route to a destination the players have specified, the agent guides them there with audio signals, as in the sketch below.
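    A minimal sketch of how such a navigation agent could work, assuming the engine supplies a list of waypoints and a `play_cue` audio callback (both hypothetical names, not from the toolkit): pan the cue toward the next waypoint and beep faster as the player approaches.

```python
import math

# Hypothetical sketch, not code from the toolkit: an audio-based navigation agent.
# It assumes the game supplies a path of waypoints and a stereo audio callback.

def guide_player(player_pos, player_heading_deg, waypoints, play_cue):
    """Steer the player toward the next waypoint with panned audio cues."""
    if not waypoints:
        play_cue(pan=0.0, rate_hz=0.0)        # destination reached: go silent
        return

    (tx, ty), (px, py) = waypoints[0], player_pos
    dx, dy = tx - px, ty - py
    distance = math.hypot(dx, dy)

    if distance < 0.5:                        # waypoint reached, advance to the next one
        waypoints.pop(0)
        return

    # Signed angle between the player's facing direction and the waypoint, in degrees
    target_bearing = math.degrees(math.atan2(dy, dx))
    relative = (target_bearing - player_heading_deg + 180) % 360 - 180

    pan = max(-1.0, min(1.0, relative / 90.0))    # -1 = hard left, +1 = hard right
    rate_hz = min(5.0, 5.0 / max(distance, 1.0))  # beep faster as the player closes in
    play_cue(pan=pan, rate_hz=rate_hz)
```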
    For the screen reader to support menu operation, perception of the environment and control on a grid, all visible and usable objects and characters on screen first had to be captured. A component known as an accessibility signifier recognises these elements and assigns each of them a label, traits, a value and a description. The game passes this information to the players’ screen reader, which reads it out to them.
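    As a rough illustration of the metadata such a signifier carries, here is a minimal sketch; the field names follow the description above, while the class and the announcement format are invented for illustration (the actual toolkit targets Unity and is not written in Python).

```python
from dataclasses import dataclass, field

# Hypothetical sketch, not toolkit code: the metadata an accessibility signifier
# attaches to an on-screen element so a screen reader can announce it.

@dataclass
class AccessibilitySignifier:
    label: str                                      # short name, e.g. "New game"
    traits: list = field(default_factory=list)      # e.g. ["button"], ["selected"]
    value: str = ""                                 # current state, e.g. "volume 80%"
    description: str = ""                           # longer hint read on request

    def announcement(self) -> str:
        """Compose the string the screen reader speaks for this element."""
        parts = [self.label] + list(self.traits)
        if self.value:
            parts.append(self.value)
        if self.description:
            parts.append(self.description)
        return ", ".join(parts)

# Example: announced as "New game, button, starts a fresh run"
print(AccessibilitySignifier("New game", ["button"],
                             description="starts a fresh run").announcement())
```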
    Positive feedback from developers
    The toolkit was evaluated in a test with nine game developers, all of whom have a university background in software engineering. Their task was to implement it in a simple match-3 game, in which the aim is to arrange three identical symbols or elements next to each other by moving them. The feedback from the developers was consistently positive: they described the implementation as simple, found the task easy to understand and found their way around the toolkit comfortably. Before the test, only three of the developers had worked with accessibility features, but afterwards most of them wanted to use such features in their next project.
    “Games should be open to as many people as possible, which is why it is so important to make them more accessible for people with disabilities,” says Klemens Strasser. “With the Accessibility Toolkit for Unity, we want to make it as easy as possible for indie developers to implement these options. Since, according to the WHO, 253 million people worldwide live with a visual impairment, this would include a very large group. Nevertheless, there is still a lot to be done here, as there are numerous other impairments for which easy-to-implement solutions should be provided.” The Game Lab at TU Graz is constantly carrying out research on such solutions and other topics relating to accessibility in computer games.
    Years of success as an independent game developer
    Klemens Strasser himself has been working on accessibility in games for several years. Already during his studies, and after completing his Master’s degree in Computer Science at Graz University of Technology (TU Graz), he independently developed games that take accessibility into account. In 2015, he won the Apple Design Award in the Student category with his game Elementary Minute, and he was nominated for the award in the Inclusivity category in 2022 with Letter Rooms and in 2023 with the Ancient Board Game Collection. His games published for iOS have been downloaded over 200,000 times to date.
    Link to the toolkit on GitHub: https://github.com/KlemensStrasser/KAP

  • Design rules and synthesis of quantum memory candidates

    Quantum computers and networks will require many components that are fundamentally different from those used in today’s machines. As in a modern computer, each of these components has its own constraints. However, it is currently unclear which materials can be used to build the components that transmit and store quantum information.
    In new research published in the Journal of the American Chemical Society, University of Illinois Urbana-Champaign materials science & engineering professor Daniel Shoemaker and graduate student Zachary Riedel used density functional theory (DFT) calculations to identify possible europium (Eu) compounds to serve as a new quantum memory platform. They also synthesized one of the predicted compounds, a brand-new, air-stable material that is a strong candidate for use in quantum memory, a system for storing the quantum states of photons or other entangled particles without destroying the information they hold.
    “The problem that we are trying to tackle here is finding a material that can store that quantum information for a long time. One way to do this is to use ions of rare earth metals,” says Shoemaker.
    Found at the very bottom of the periodic table, rare earth elements such as europium have shown promise for use in quantum information devices because of their unique atomic structure. Specifically, rare earth ions have many electrons densely clustered close to the nucleus of the atom. Excitations of these electrons from the resting state can “live” for a long time: seconds or possibly even hours, an eternity in the world of computing. Such long-lived states are crucial for avoiding the loss of quantum information and position rare earth ions as strong candidates for qubits, the fundamental units of quantum information.
    “Normally in materials engineering, you can go to a database and find what known material should work for a particular application,” Shoemaker explains. “For example, people have worked for over 200 years to find proper lightweight, high strength materials for different vehicles. But in quantum information, we have only been working at this for a decade or two, so the population of materials is actually very small, and you quickly find yourself in unknown chemical territory.”
    Shoemaker and Riedel imposed a few rules on their search for possible new materials. First, they wanted to use the ionic configuration Eu3+ (as opposed to the other possible configuration, Eu2+) because it operates at the right optical wavelength; to be “written” optically, the materials should be transparent. Second, they wanted a material whose other elements have only one stable isotope each, because elements with more than one isotope yield a mixture of nuclear masses that vibrate at slightly different frequencies, scrambling the stored information. Third, they wanted a large separation between individual europium ions to limit unintended interactions. Without that separation, the large clouds of europium electrons would act like the canopy of a forest, where the rustling leaves of one tree constantly brush against those of its neighbours, rather than like well-spaced trees in a suburban neighborhood.
    With those rules in place, Riedel set up a DFT computational screening to predict which materials could form. From this screening he identified new Eu compound candidates and went on to synthesize the top suggestion on the list, the double perovskite halide Cs2NaEuF6. The new compound is air stable, which means it can be integrated with other components, a critical property for scalable quantum computing. The DFT calculations also predicted several other possible compounds that have yet to be synthesized.
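    To make the screening logic concrete, here is a minimal sketch of the three design rules applied as a pre-filter over candidate compositions; the isotope list is partial, the distance cutoff is an illustrative choice, and none of this is the authors’ actual workflow.

```python
# Hypothetical sketch, not the published workflow: applying the three design rules
# as a quick pre-filter before doing expensive DFT work on candidate Eu compounds.

MONOISOTOPIC = {"F", "Na", "Al", "P", "Sc", "Mn", "Co", "Y", "Nb", "I", "Cs"}  # partial list

def passes_design_rules(composition, eu_oxidation_state, min_eu_distance_angstrom):
    """Return True if a candidate satisfies all three screening rules."""
    # Rule 1: europium must be present as Eu3+ so it operates at the right wavelength.
    if eu_oxidation_state != 3:
        return False
    # Rule 2: every other element must have a single stable isotope, so a spread of
    # nuclear masses does not scramble the stored information.
    others = [element for element in composition if element != "Eu"]
    if not all(element in MONOISOTOPIC for element in others):
        return False
    # Rule 3: Eu ions must be well separated to limit unintended ion-ion interactions.
    return min_eu_distance_angstrom > 7.0    # illustrative cutoff, not from the paper

# Example: the synthesized double perovskite Cs2NaEuF6 (Eu3+, with Cs, Na and F all monoisotopic)
print(passes_design_rules({"Cs": 2, "Na": 1, "Eu": 1, "F": 6},
                          eu_oxidation_state=3,
                          min_eu_distance_angstrom=8.0))   # -> True
```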

    “We have shown that there are a lot of unknown materials left to be made that are good candidates for quantum information storage,” Shoemaker says. “And we have shown that we can make them efficiently and predict which ones are going to be stable.”
    Daniel Shoemaker is also an affiliate of the Materials Research Laboratory (MRL) and the Illinois Quantum Information Science and Technology Center (IQUIST) at UIUC.
    Zachary Riedel is currently a postdoctoral researcher at Los Alamos National Laboratory.
    This research was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Center Q-NEXT. The National Science Foundation through the University of Illinois Materials Research Science and Engineering Center supported the use of facilities and instrumentation.

  • Going top shelf with AI to better track hockey data

    Researchers from the University of Waterloo got a valuable assist from artificial intelligence (AI) tools to help capture and analyze data from professional hockey games faster and more accurately than ever before, with big implications for the business of sports.
    The growing field of hockey analytics currently relies on the manual analysis of video footage from games. Professional hockey teams across the sport, notably in the National Hockey League (NHL), make important decisions regarding players’ careers based on that information.
    “The goal of our research is to interpret a hockey game through video more effectively and efficiently than a human,” said Dr. David Clausi, a professor in Waterloo’s Department of Systems Design Engineering. “One person cannot possibly document everything happening in a game.”
    Hockey players move fast and in a non-linear fashion, dynamically skating across the ice in short shifts. Apart from the numbers and last names on jerseys, which are not always visible to the camera, uniforms are not a reliable way to identify players, particularly at the pace hockey is known for. This makes manually tracking and analyzing each player during a game very difficult and prone to human error.
    The AI tool developed by Clausi, Dr. John Zelek, a professor in Waterloo’s Department of Systems Design Engineering, research assistant professor Yuhao Chen, and a team of graduate students uses deep learning techniques to automate and improve player tracking and analysis.
    The research was undertaken in partnership with Stathletes, an Ontario-based professional hockey performance data and analytics company. Working through NHL broadcast video clips frame-by-frame, the research team manually annotated the teams, the players and the players’ movements across the ice. They ran this data through a deep learning neural network to teach the system how to watch a game, compile information and produce accurate analyses and predictions.
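    As a rough illustration of the frame-by-frame pipeline described above, here is a minimal sketch; `detect_players`, `classify_team` and `read_jersey_number` are hypothetical stand-ins for the trained deep-learning components, not the team’s actual models.

```python
# Hypothetical sketch, not the Waterloo/Stathletes system: turning broadcast frames
# into per-player records that downstream analytics can aggregate.

def track_game(frames, detect_players, classify_team, read_jersey_number):
    """Run detection and identification on every frame of a broadcast clip."""
    records = []
    for frame_index, frame in enumerate(frames):
        for box in detect_players(frame):                  # bounding boxes around skaters
            records.append({
                "frame": frame_index,
                "box": box,
                "team": classify_team(frame, box),         # e.g. "home" or "away"
                "player": read_jersey_number(frame, box),  # None when the number is hidden
            })
    return records
```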
    When tested, the system’s algorithms delivered high rates of accuracy. It scored 94.5 per cent for tracking players correctly, 97 per cent for identifying teams and 83 per cent for identifying individual players.
    The research team is working to refine their prototype, but Stathletes is already using the system to annotate video footage of hockey games. The potential for commercialization goes beyond hockey: by retraining the system’s components, the approach can be applied to other team sports such as soccer or field hockey.
    “Our system can generate data for multiple purposes,” Zelek said. “Coaches can use it to craft winning game strategies, team scouts can hunt for players, and statisticians can identify ways to give teams an extra edge on the rink or field. It really has the potential to transform the business of sport.”

  • Flexible artificial intelligence optoelectronic sensors towards health monitoring

    From creating images and generating text to enabling self-driving cars, the potential uses of artificial intelligence (AI) are vast and transformative. However, all this capability comes at a very high energy cost. For instance, estimates indicate that training OpenAI’s popular GPT-3 model consumed over 1,287 MWh, enough to supply an average U.S. household for 120 years. This energy cost poses a substantial roadblock, particularly for large-scale applications of AI such as health monitoring, where large amounts of critical health information are sent to centralized data centers for processing. This not only consumes a lot of energy but also raises concerns about sustainability, bandwidth overload and communication delays.
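    A quick back-of-the-envelope check of that comparison, assuming an average U.S. household uses roughly 10.7 MWh of electricity per year (a typical EIA-scale figure; the exact value varies by year and region):

```python
# Rough arithmetic behind the "120 years" comparison; the household figure is an assumption.
training_energy_mwh = 1287          # reported estimate for training GPT-3
household_mwh_per_year = 10.7       # assumed average U.S. household consumption

years = training_energy_mwh / household_mwh_per_year
print(f"{years:.0f} years")         # -> roughly 120 years
```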
    Achieving AI-based health monitoring and biological diagnosis requires a standalone sensor that operates without a constant connection to a central server. At the same time, the sensor must consume little power so it can be used for prolonged periods, handle rapidly changing biological signals for real-time monitoring, be flexible enough to attach comfortably to the human body, and be easy to make and dispose of, since it needs frequent replacement for hygiene reasons.
    Considering these criteria, researchers from Tokyo University of Science (TUS) led by Associate Professor Takashi Ikuno have developed a flexible paper-based sensor that operates like the human brain. Their findings were published online in the journal Advanced Electronic Materials on 22 February 2024.
    “A paper-based optoelectronic synaptic device composed of nanocellulose and ZnO was developed for realizing physical reservoir computing. This device exhibits synaptic behavior and cognitive tasks at a suitable timescale for health monitoring,” says Dr. Ikuno.
    In the human brain, information travels between networks of neurons through synapses. Each neuron can process information on its own, enabling the brain to handle multiple tasks at the same time. This capacity for parallel processing makes the brain much more efficient than traditional computing systems. To mimic it, the researchers fabricated a photo-electronic artificial synapse device composed of gold electrodes on top of a 10 µm-thick transparent film consisting of zinc oxide (ZnO) nanoparticles and cellulose nanofibers (CNFs).
    The transparent film serves three main purposes. Firstly, it allows light to pass through, enabling it to handle optical input signals representing various kinds of biological information. Secondly, the cellulose nanofibers impart flexibility and can be easily disposed of by incineration. Thirdly, the ZnO nanoparticles are photoresponsive and generate a photocurrent when exposed to pulsed UV light under a constant voltage. This photocurrent mimics the responses transmitted by synapses in the human brain, enabling the device to interpret and process biological information received from optical sensors.
    Notably, the film was able to distinguish 4-bit input optical pulses and generate distinct currents in response to time-series optical input, with a rapid response time on the order of subseconds. This quick response is crucial for detecting sudden changes or abnormalities in health-related signals. Furthermore, when the film was exposed to two successive light pulses, the electrical current response was stronger for the second pulse. This behavior, termed post-potentiation facilitation, contributes to short-term memory processes in the brain and enhances the ability of synapses to detect and respond to familiar patterns.
    To test this, the researchers converted MNIST images, a dataset of handwritten digits, into 4-bit optical pulses. They then irradiated the film with these pulses and measured the current response. Using this data as input, a neural network was able to recognize handwritten numbers with an accuracy of 88%.
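    A minimal sketch of that readout idea, assuming a toy stand-in for the film’s photocurrent response: the real features come from measured device currents, and the device model, bit grouping and least-squares readout below are illustrative choices, not the authors’ pipeline.

```python
import numpy as np

# Hypothetical sketch, not the study's pipeline: a physical-reservoir-style readout
# where a device's current responses to 4-bit optical pulses feed a linear classifier.

def to_4bit_pulses(image):
    """Binarize an image and slice its pixels into 4-bit optical pulse patterns.
    Assumes the pixel count is a multiple of four (e.g. 28 x 28 MNIST images)."""
    bits = (np.asarray(image).ravel() > 0.5).astype(float)
    return bits.reshape(-1, 4)                       # one 4-bit pulse train per group

def device_response(pulse):
    """Toy nonlinear 'photocurrent' for one 4-bit pulse train (stand-in for the film)."""
    weights = np.array([8.0, 4.0, 2.0, 1.0])         # distinct patterns give distinct sums
    return np.tanh(pulse @ weights / 10.0)

def reservoir_features(image):
    return np.array([device_response(p) for p in to_4bit_pulses(image)])

def train_readout(feature_matrix, labels, n_classes=10):
    """Least-squares linear readout, the usual choice in reservoir computing."""
    X = np.hstack([feature_matrix, np.ones((len(feature_matrix), 1))])  # add bias column
    Y = np.eye(n_classes)[np.asarray(labels)]                           # one-hot targets
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def predict(W, feature_vector):
    return int(np.argmax(np.append(feature_vector, 1.0) @ W))
```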
    Remarkably, this handwritten-digit recognition capability remained unaffected even when the device was repeatedly bent and stretched up to 1,000 times, demonstrating its ruggedness and feasibility for repeated use. “This study highlights the potential of embedding semiconductor nanoparticles in flexible CNF films for use as flexible synaptic devices for PRC [physical reservoir computing],” concludes Dr. Ikuno.
    Let us hope that these advancements pave the way for wearable sensors in health monitoring applications!

  • AI may predict spread of lung cancer to brain

    Physicians treating patients with early-stage lung cancer face a conundrum: choosing potentially helpful yet toxic therapies such as chemotherapy, radiation or immunotherapy to knock out the cancer and lessen the risk of it spreading to the brain, or waiting to see if lung surgery alone proves sufficient. Given that up to 70% of such patients never experience brain metastasis, the spread of cancer to the brain, the question arises: who should receive additional aggressive treatments, and who can safely wait?
    A new study led by Washington University School of Medicine in St. Louis could help physicians strike the right balance between proactive intervention and cautious monitoring for patients with early-stage lung cancer. The study, published March 4 in The Journal of Pathology, uses an artificial intelligence (AI) method to study patients’ lung biopsy images and predict whether the cancer will spread to the brain.
    “There are no predictive tools available to help physicians when treating patients with lung cancer,” said Richard J. Cote, MD, the Edward Mallinckrodt Professor and head of the Department of Pathology & Immunology. “We have risk predictors that tell us which population is more likely to progress to more advanced stages, but we lack the ability to predict individual patient outcomes. Our study is an indication that AI methods may be able to make meaningful predictions that are specific and sensitive enough to impact patient management.”
    Lung cancer is the leading cause of cancer death in the U.S. and worldwide. Most lung cancers are characterized as non-small cell lung cancers, which are largely, but not exclusively, caused by smoking. For early-stage cancer patients, tumors are confined to the lung, and surgery is recommended as a first line of treatment. Roughly 30% of such patients progress to advanced stages, when the cancer spreads to the lymph nodes and other organs. With the brain often affected first, such patients require additional treatments, including chemotherapy, targeted drug therapy, radiation therapy and/or immunotherapy. However, physicians have no way of knowing whose cancer will progress, so they frequently treat patients with aggressive therapies out of caution.
    Cote worked with Ramaswamy Govindan, MD, the Anheuser Busch Endowed Chair in Medical Oncology and associate director of the oncology division at Washington University; Mark Watson, MD, PhD, the Margaret Gladys Smith Professor in the Department of Pathology & Immunology; and Changhuei Yang, PhD, a professor of electrical engineering, bioengineering, and medical engineering at the California Institute of Technology, to determine if AI could predict whether cancer will spread to the brain.
    In diagnostic testing, a pathologist examines biopsied tissues under a microscope to identify cellular abnormalities that may hint at disease. Advanced technologies — such as AI — are being explored to replicate what a pathologist sees when making diagnoses but with greater accuracy, Cote explained.
    A key question: Can AI detect abnormal features that a pathologist cannot?

    The researchers trained a machine-learning algorithm to predict brain metastasis using 118 lung biopsy samples from patients with early-stage non-small cell lung cancer. Some of the patients developed brain cancer during a five-year monitoring period; others did not and remained in remission. The researchers then tested the method’s ability to predict brain metastasis, and to identify patients who would not develop it, on lung biopsy samples from 40 other patients.
    The algorithm was able to predict the eventual development of brain cancer with 87% accuracy. In comparison, four pathologists who participated in the study achieved an average accuracy of 57.3%. Importantly, the algorithm was highly accurate in predicting which patients would not develop brain metastasis.
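    In outline, the evaluation protocol looks something like the following sketch; the model and the `extract_features` step are hypothetical placeholders for whatever the deep-learning system derives from a digitized biopsy slide, not the study’s actual code.

```python
import numpy as np

# Hypothetical sketch, not the study's implementation: fit on the training cohort,
# then report accuracy (and the negative-class accuracy) on a held-out cohort.

def evaluate(model, extract_features, train_slides, train_labels, test_slides, test_labels):
    X_train = np.array([extract_features(s) for s in train_slides])
    X_test = np.array([extract_features(s) for s in test_slides])

    model.fit(X_train, np.asarray(train_labels))      # label 1 = later brain metastasis
    predictions = np.asarray(model.predict(X_test))
    test_labels = np.asarray(test_labels)

    accuracy = float(np.mean(predictions == test_labels))
    # Correctly identifying patients who stay metastasis-free is what could spare
    # them aggressive therapy, so report that rate separately.
    negatives = test_labels == 0
    specificity = float(np.mean(predictions[negatives] == 0))
    return accuracy, specificity
```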
    “Our results need to be validated in a larger study, but we think there is great potential for AI to make accurate predictions and impact care decisions,” said Govindan, who treats lung cancer patients at Siteman Cancer Center, based at Barnes-Jewish Hospital and Washington University School of Medicine. “Systemic treatments such as chemotherapy, while effective in killing cancer cells, can also harm healthy cells and are not always the preferred treatment method for all early-stage cancer patients. Identification of patients who are likely to relapse in the brain may help us develop strategies to intercept cancer early in the process of metastasis. We think AI-based predictions could, one day, inform personalized treatments.”
    The AI system evaluates tumors’ and healthy cells’ features, similar to how the human brain allows us to scan facial features for quick recognition of familiar faces. However, what the algorithm sees is unknown; the scientists are working to understand the molecular and cellular features that AI uses for its predictions. This knowledge could lead to the development of novel therapeutics and influence the design of imaging instruments optimized for the collection of data for AI.
    “This study started as an attempt to find predictive biomarkers,” said Yang. “But we couldn’t find any. Instead, we found that AI has the potential to make predictions about cancer progression using biopsy samples that are already being collected for diagnosis. If we can get to a prediction accuracy that will allow us to use this algorithm clinically and not have to resort to expensive biomarkers, we are talking about significant ramifications in cost-effectiveness.”

  • AI-generated food images look tastier than real ones

    With Global Nutrition and Hydration Week 2024 starting today, researchers have announced an intriguing discovery: consumers generally prefer AI-generated images of food over photographs of real food, especially when they are unaware of how the images were made. The new findings have been published in Food Quality and Preference.
    According to the researchers, the results suggest that AI-generated food visuals excel at enhancing the appeal of depicted foods by leveraging key features such as symmetry, shape, glossiness, and overall lighting and colour. All of these are known to contribute significantly to the attractiveness of food imagery.
    Even subtle tweaks in positioning may enhance the appeal of AI-generated food images. Lead author Giovanbattista Califano (Department of Agricultural Sciences, University of Naples Federico II) explained: ‘As humans, we tend to feel uneasy with objects pointing towards us, interpreting them as threats, even when it’s just food. When tasked with replicating food photos featuring items pointing at the viewer, such as a bunch of carrots or a piece of cake, the AI often positions the food so that it doesn’t directly point at the viewer. This warrants further studies, but it’s plausible that this approach enhances the perceived attractiveness of the depicted food.’
    In the study, the researchers asked 297 participants to rate real or AI-generated food images on a scale from “Not at all appetizing” to “Extremely appetizing.” The images depicted a range of natural, processed and ultra-processed foods, from apples and carrots to chocolate milkshakes and potato fries. When participants were told how each image had been created, whether through photography or AI, they tended to rate the real and AI-generated versions as equally appealing. However, when participants were unaware of how the images had been created, the AI-generated versions were consistently rated as significantly more appetizing than the real food images.
    Study supervisor and co-author Professor Charles Spence (Department of Experimental Psychology, University of Oxford) said: ‘While AI-generated visuals may offer cost-saving opportunities for marketers and the industry by reducing the cost of commissioning food photoshoots, these findings highlight potential risks associated with exacerbating ‘visual hunger’ amongst consumers — the phenomenon where viewing images of food triggers appetite and cravings. This could potentially influence unhealthy eating behaviours or foster unrealistic expectations about food among consumers.’
    The researchers also found that AI-generated images tend to depict foods as more energy-dense than the originals, particularly in the abundance portrayed. For instance, the AI may increase the number of fries in an image or add more whipped cream to a dessert. Given that humans have an evolutionary drive to pay more attention to energy-dense foods, this raises concerns that widespread dissemination of such idealized food images could promote cue-induced eating of unhealthy foods.
    Furthermore, with the global movement towards more sustainable consumption patterns, including the promotion of ‘ugly’ fruits and vegetables, there is a concern that constant production of AI-enhanced food images might nudge consumers towards an unrealistic standard of how natural foods should look, potentially harming sustainability efforts.

  • Natural history specimens have never been so accessible

    With the help of 16 grants from the National Science Foundation, researchers have painstakingly taken computed tomography (CT) scans of more than 13,000 individual specimens to create 3D images of more than half of all the world’s animal groups, including mammals, fishes, amphibians and reptiles.
    The research team, made up of members from The University of Texas at Arlington and 25 other institutions, is now a quarter of the way through uploading nearly 30,000 media files to the open-source repository MorphoSource. This will allow researchers and scholars to share findings and improve access to material critical for scientific discovery.
    “Thanks to this exciting openVertebrate project, also called oVert, anyone — scientists, researchers, students, teachers, artists — can now look online to research the anatomy of just about any animal imaginable without leaving home,” said Gregory Pandelis, collections manager of UT Arlington’s Amphibian and Reptile Diversity Research Center. “This will help reduce wear and tear on many rare specimens while increasing access to them at the same time.”
    A summary of the project has just been published in the peer-reviewed journal BioScience, reviewing the specimens that have been scanned to date and offering a glimpse of how the data might be used in the future.
    For example, one research team has used the data to conclude that Spinosaurus, a massive dinosaur that was larger than Tyrannosaurus rex and thought to be aquatic, would have actually been a poor swimmer, and thus likely stayed on land. Another study revealed that frogs have evolved to gain and lose the ability to grow teeth more than any other animal.
    The value of oVert extends beyond scientific inquiry. Artists are using the 3D models to create realistic animal replicas. Photographs of oVert specimens have been displayed as part of museum exhibits. In addition, specimens have been incorporated into virtual reality headsets that allow users to interact with the animals.
    Educators also are able to use oVert models in their classrooms. From the outset of the project, the research team placed a strong emphasis on K-12 outreach, organizing workshops where teachers could learn how to use the data in their classrooms.
    “As a kid who loved all things science- and nature-related and had a particular interest in skeletal anatomy, I would go through great pains to collect, preserve and study skulls and other specimens for my childhood natural history collection, the start of my scientific inspiration,” Pandelis said. “Realizing that you could study these things digitally with just a few clicks on a computer was eye-opening for me, and it opened up the path to my current research using CT scans of snake specimens to study their skull evolution. Now, this wealth of data has been opened and made publicly accessible to anyone who has a professional, recreational or educational interest in anatomy and morphology. Natural history specimens have never been so accessible and impactful.”
    In the next phase of the research project, the team will be creating sophisticated tools to analyze the data collected. Since researchers have never had digital access to so many 3D natural history specimens before, it will take further developments in machine learning and supercomputing to use them to their full potential.

  • How surface roughness influences the adhesion of soft materials

    Adhesive tape and sticky notes are easy to attach to a surface but can be difficult to remove. This phenomenon, known as adhesion hysteresis, is observed quite generally in soft, elastic materials: adhesive contact is formed more easily than it is broken. Researchers at the University of Freiburg, the University of Pittsburgh and the University of Akron in the US have now discovered that this adhesion hysteresis is caused by the surface roughness of the adhering soft materials. Through a combination of experimental observations and simulations, the team demonstrated that roughness interferes with the separation process, causing the materials to detach in minute, abrupt movements that release parts of the adhesive bond incrementally. Dr. Antoine Sanner and Prof. Dr. Lars Pastewka from the Department of Microsystems Engineering and the livMatS Cluster of Excellence at the University of Freiburg, Dr. Nityanshu Kumar and Prof. Dr. Ali Dhinojwala from the University of Akron, and Prof. Dr. Tevis Jacobs from the University of Pittsburgh have published their results in the journal Science Advances.
    “Our findings will make it possible to specifically control the adhesion properties of soft materials through surface roughness,” says Sanner. “They will also allow new and improved applications to be developed in soft robotics or production technology in the future, for example for grippers or placement systems.”
    Sudden jumping movement of the edge of the contact
    Until now, researchers have hypothesized that viscoelastic energy dissipation causes adhesion hysteresis in soft solids. In other words, energy is lost as heat in the material because it deforms over the contact cycle: it is compressed when making contact and expands during release. Those energy losses counteract the movement of the contact surface, which increases the adhesive force during separation. Contact ageing, i.e. the formation of chemical bonds at the contact surface, has also been suggested as a cause; in that case, the longer the contact exists, the greater the adhesion. “Our simulations show that the observed hysteresis can be explained without these specific energy dissipation mechanisms. The only source of energy dissipation in our numerical model is the sudden jumping movement of the edge of the contact, which is induced by the roughness,” says Sanner.
    Adhesion hysteresis calculated for realistic surface roughness
    This sudden jumping motion is clearly recognisable in the simulations of the Freiburg researchers and in the adhesion experiments of the University of Akron. “The abrupt change in the contact surface was already mentioned in the 1990s as a possible cause of adhesion hysteresis, but previous theoretical work on this was limited to simplified surface properties,” explains Kumar. “We have succeeded for the first time in calculating the adhesion hysteresis for realistic surface roughness. This is based on the efficiency of the numerical model and an extremely detailed surface characterisation carried out by researchers at the University of Pittsburgh,” says Jacobs.