More stories


    Laser technology offers breakthrough in detecting illegal ivory

    A new way of quickly distinguishing between illegal elephant ivory and legal mammoth tusk ivory could prove critical to fighting the illegal ivory trade. A laser-based approach developed by scientists at the Universities of Bristol and Lancaster could be used by customs teams worldwide to help stop illegal ivory being traded under the guise of legal ivory. Results from the study are published in PLOS ONE today [24 April].
    Despite the ban on ivory under the Convention on International Trade in Endangered Species (CITES), poaching for the illegal trade continues to cause the suffering of elephants and is estimated to cause an eight per cent loss in the world’s elephant population every year. The 2016 African Elephant Database survey estimated a total of 410,000 elephants remaining in Africa, a decrease of approximately 90,000 elephants from the previous 2013 report.
    While trading or procuring elephant ivory is illegal, it is not illegal to sell ivory from extinct species, such as preserved mammoth tusk ivory. This legal source of ivory is now part of a growing and lucrative ‘mammoth hunter’ industry. It also poses a time-consuming enforcement problem for customs teams, as ivory from the two types of tusk is broadly similar, making them difficult to distinguish from one another, especially once specimens have been worked or carved.
    In this new study, scientists from Bristol’s School of Anatomy and Lancaster Medical School sought to establish whether Raman spectroscopy, which is already used in the study of bone and mineral chemistry, could be adapted to accurately detect differences in the chemistry of mammoth and elephant ivory. The non-destructive technology, which involves shining a high-energy light at an ivory specimen, can detect small biochemical differences between the tusks of elephants and mammoths.
    Researchers scanned samples of mammoth and elephant tusks from London’s Natural History Museum using the laser-based method. The experiment showed that the technology provided accurate, quick and non-destructive species identification.
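    The paper’s analysis pipeline is not reproduced here, but as a rough illustration of how Raman spectra can be separated by species, the sketch below applies PCA and a linear discriminant to simulated spectra. All data, peak positions and names are invented placeholders, not the study’s own code or measurements.

    ```python
    # Hypothetical sketch: classifying simulated "Raman spectra" by species
    # with PCA + linear discriminant analysis. Placeholder data only.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_samples, n_wavenumbers = 60, 500

    # Simulate spectra: a shared peak, with a small species-specific shift.
    baseline = np.exp(-((np.linspace(0, 1, n_wavenumbers) - 0.5) ** 2) / 0.02)
    elephant = baseline + 0.05 * rng.standard_normal((n_samples, n_wavenumbers))
    mammoth = np.roll(baseline, 5) + 0.05 * rng.standard_normal((n_samples, n_wavenumbers))

    X = np.vstack([elephant, mammoth])
    y = np.array([0] * n_samples + [1] * n_samples)  # 0 = elephant, 1 = mammoth

    # Compress the high-dimensional spectra, then fit a linear discriminant.
    model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
    print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
    ```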
    Dr Rebecca Shepherd, formerly of Lancaster Medical School and now at the University of Bristol’s School of Anatomy, explained: “The gold-standard methods of identification recommended by the United Nations Office on Drugs and Crime for assessing the legality of ivory are predominantly expensive, destructive and time-consuming.
    “Raman spectroscopy can provide results quickly (a single scan takes only a few minutes), and is easier to use than current methods, making it simpler to distinguish between illegal elephant ivory and legal mammoth tusk ivory. Increased surveillance and monitoring of samples passing through customs worldwide using Raman spectroscopy could act as a deterrent to those poaching endangered and critically endangered species of elephant.”
    Dr Jemma Kerns of Lancaster Medical School, added: “The combined approach of a non-destructive laser-based method of Raman spectroscopy with advanced data analysis holds a lot of promise for the identification of unknown samples of ivory, which is especially important, given the increase in available mammoth tusks and the need for timely identification.”

    Alice Roberts, Professor of Public Engagement in Science, from the University of Birmingham and one of the study’s co-authors, said: “There’s a real problem when it comes to stamping down on the illegal trade in elephant ivory, because trading in ancient mammoth ivory is legal. The complete tusks of elephants and mammoths look very different, but if the ivory is cut into small pieces, it can be practically impossible to tell elephant ivory apart from well-preserved mammoth ivory. I was really pleased to be part of this project exploring a new technique for telling apart elephant and mammoth ivory. This is great science, and should help the enforcers — giving them a valuable and relatively inexpensive tool to help them spot illegal ivory.”
    Professor Adrian Lister, one of the study’s co-authors from the Natural History Museum, added: “Stopping the trade in elephant ivory has been compromised by illegal ivory objects being described or disguised as mammoth ivory (for which trade is legal). A quick and reliable method for distinguishing the two has long been a goal, as other methods (such as radiocarbon dating and DNA analysis) are time-consuming and expensive. The demonstration that the two can be separated by Raman spectroscopy is therefore a significant step forward; also, this method (unlike the others) does not require any sampling, leaving the ivory object intact.”
    Professor Charlotte Deane, Executive Chair of EPSRC, said: “By offering a quick and simple alternative to current methods, the use of Raman spectroscopy could play an important role in tackling the illegal ivory trade.
    “The researchers’ work illustrates how the development and adoption of innovative new techniques can help us to address problems of global significance.”
    The study was funded by the Engineering and Physical Sciences Research Council (EPSRC) and involved researchers from the Universities of Lancaster and Birmingham and the Natural History Museum.
    Although the percentage decline in Asian elephants as a result of illegal poaching is lower, since female Asian elephants do not have tusks, the species has nonetheless declined by 50% over the last three generations.


    Why can’t robots outrun animals?

    Robotics engineers have worked for decades and invested many millions of research dollars in attempts to create a robot that can walk or run as well as an animal. And yet, it remains the case that many animals are capable of feats that would be impossible for robots that exist today.
    “A wildebeest can migrate for thousands of kilometres over rough terrain, a mountain goat can climb up a literal cliff, finding footholds that don’t even seem to be there, and cockroaches can lose a leg and not slow down,” says Dr. Max Donelan, Professor in Simon Fraser University’s Department of Biomedical Physiology and Kinesiology. “We have no robots capable of anything like this endurance, agility and robustness.”
    To understand why, and quantify how, robots lag behind animals, an interdisciplinary team of scientists and engineers from leading research universities completed a detailed study of various aspects of running robots, comparing them with their equivalents in animals, for a paper published in Science Robotics. The paper finds that, by the metrics engineers use, biological components performed surprisingly poorly compared to fabricated parts. Where animals excel, though, is in their integration and control of those components.
    Alongside Donelan, the team comprised Drs. Sam Burden, Associate Professor in the Department of Electrical & Computer Engineering at the University of Washington; Tom Libby, Senior Research Engineer, SRI International; Kaushik Jayaram, Assistant Professor in the Paul M Rady Department of Mechanical Engineering at the University of Colorado Boulder; and Simon Sponberg, Dunn Family Associate Professor of Physics and Biological Sciences at the Georgia Institute of Technology.
    The researchers each studied one of five different “subsystems” that combine to create a running robot — Power, Frame, Actuation, Sensing, and Control — and compared them with their biological equivalents. Previously, it was commonly accepted that animals’ outperformance of robots must be due to the superiority of biological components.
    “The way things turned out is that, with only minor exceptions, the engineering subsystems outperform the biological equivalents — and sometimes radically outperformed them,” says Libby. “But also what’s very, very clear is that, if you compare animals to robots at the whole system level, in terms of movement, animals are amazing. And robots have yet to catch up.”
    More optimistically for the field of robotics, the researchers noted that, if you compare the relatively short time that robotics has had to develop its technology with the countless generations of animals that have evolved over many millions of years, the progress has actually been remarkably quick.
    “It will move faster, because evolution is undirected,” says Burden. “Whereas we can very much correct how we design robots and learn something in one robot and download it into every other robot, biology doesn’t have that option. So there are ways that we can move much more quickly when we engineer robots than we can through evolution — but evolution has a massive head start.”
    More than simply an engineering challenge, effective running robots would offer countless potential uses. Whether solving ‘last mile’ delivery challenges in a world designed for humans and often difficult for wheeled robots to navigate, carrying out searches in dangerous environments, or handling hazardous materials, there are many potential applications for the technology.
    The researchers hope that this study will help direct future development in robot technology, with an emphasis not on building better hardware but on understanding how to better integrate and control existing hardware. Donelan concludes: “As engineering learns integration principles from biology, running robots will become as efficient, agile, and robust as their biological counterparts.”


    On the trail of deepfakes, researchers identify ‘fingerprints’ of AI-generated video

    In February, OpenAI released videos created by its generative artificial intelligence program Sora. The strikingly realistic content, produced via simple text prompts, is the latest breakthrough for companies demonstrating the capabilities of AI technology. It has also raised concerns about generative AI’s potential to enable the creation of misleading and deceptive content on a massive scale. According to new research from Drexel University, current methods for detecting manipulated digital media will not be effective against AI-generated video; but a machine-learning approach could be the key to unmasking these synthetic creations.
    In a paper accepted for presentation at the IEEE Computer Vision and Pattern Recognition Conference in June, researchers from the Multimedia and Information Security Lab (MISL) in Drexel’s College of Engineering explained that while existing synthetic image detection technology has so far failed to spot AI-generated video, they have had success with a machine learning algorithm that can be trained to extract and recognize the digital “fingerprints” of many different video generators, such as Stable Video Diffusion, VideoCrafter and CogVideo. Additionally, they have shown that this algorithm can learn to detect new AI generators after studying just a few examples of their videos.
    “It’s more than a bit unnerving that this video technology could be released before there is a good system for detecting fakes created by bad actors,” said Matthew Stamm, PhD, an associate professor in Drexel’s College of Engineering and director of the MISL. “Responsible companies will do their best to embed identifiers and watermarks, but once the technology is publicly available, people who want to use it for deception will find a way. That’s why we’re working to stay ahead of them by developing the technology to identify synthetic videos from patterns and traits that are endemic to the media.”
    Deepfake Detectives
    Stamm’s lab has been active in efforts to flag digitally manipulated images and videos for more than a decade, but the group has been particularly busy in the last year, as editing technology is being used to spread political misinformation.
    Until recently, these manipulations have been the product of photo and video editing programs that add, remove or shift pixels; or slow, speed up or clip out video frames. Each of these edits leaves a unique digital breadcrumb trail and Stamm’s lab has developed a suite of tools calibrated to find and follow them.
    The lab’s tools use a sophisticated machine learning program called a constrained neural network. This algorithm can learn, in ways similar to the human brain, what is “normal” and what is “unusual” at the sub-pixel level of images and videos, rather than searching for specific predetermined identifiers of manipulation from the outset. This makes the program adept both at identifying deepfakes from known sources and at spotting those created by a previously unknown program.

    The neural network is typically trained on hundreds or thousands of examples to get a very good feel for the difference between unedited media and something that has been manipulated — this can be anything from variation between adjacent pixels, to the order or spacing of frames in a video, to the size and compression of the files themselves.
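    Stamm’s lab has published work on constrained convolutional layers for image forensics; purely as an illustrative sketch of that general idea (not the lab’s actual code), the snippet below forces each first-layer filter to act as a prediction-error filter, so the network must learn relationships between neighbouring pixel values rather than image content.

    ```python
    # Illustrative sketch of a "constrained" first convolutional layer:
    # the centre tap of each filter is fixed to -1 and the remaining taps
    # are renormalised to sum to +1, so the layer outputs prediction errors.
    import torch
    import torch.nn as nn

    class ConstrainedConv2d(nn.Conv2d):
        def constrain_weights(self):
            # Call after every optimizer step to re-impose the constraint.
            with torch.no_grad():
                w = self.weight              # (out_ch, in_ch, k, k)
                c = w.shape[-1] // 2         # centre tap index
                w[:, :, c, c] = 0.0
                w /= w.sum(dim=(2, 3), keepdim=True)  # off-centre taps sum to 1
                w[:, :, c, c] = -1.0

    layer = ConstrainedConv2d(in_channels=1, out_channels=3, kernel_size=5, bias=False)
    layer.constrain_weights()
    residuals = layer(torch.randn(1, 1, 64, 64))  # prediction-error feature maps
    ```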
    A New Challenge
    “When you make an image, the physical and algorithmic processing in your camera introduces relationships between various pixel values that are very different than the pixel values if you photoshop or AI-generate an image,” Stamm said. “But recently we’ve seen text-to video generators, like Sora, that can make some pretty impressive videos. And those pose a completely new challenge because they have not been produced by a camera or photoshopped.”
    Last year, a campaign ad circulated in support of Florida Gov. Ron DeSantis that appeared to show former President Donald Trump embracing and kissing Anthony Fauci; it was among the first to use generative-AI technology. This means the video was not edited or spliced together from other footage; rather, it was created whole-cloth by an AI program.
    And if there is no editing, Stamm notes, then the standard clues do not exist — which poses a unique problem for detection.
    “Until now, forensic detection programs have been effective against edited videos by simply treating them as a series of images and applying the same detection process,” Stamm said. “But with AI-generated video, there is no evidence of image manipulation frame-to-frame, so for a detection program to be effective it will need to be able to identify new traces left behind by the way generative-AI programs construct their videos.”
    In the study, the team tested 11 publicly available synthetic image detectors. Each of these programs was highly effective — at least 90% accuracy — at identifying manipulated images. But their performance dropped by 20-30% when faced with discerning videos created by publicly available AI generators: Luma, VideoCrafter-v1, CogVideo and Stable Video Diffusion.

    “These results clearly show that synthetic image detectors experience substantial difficulty detecting synthetic videos,” they wrote. “This finding holds consistent across multiple different detector architectures, as well as when detectors are pretrained by others or retrained using our dataset.”
    A Trusted Approach
    The team speculated that convolutional neural network (CNN) detectors, like its MISLnet algorithm, could be successful against synthetic video because such a program is designed to constantly shift its learning as it encounters new examples, making it possible to recognize new forensic traces as they evolve. Over the last several years, the team has demonstrated MISLnet’s acuity at spotting images that had been manipulated using new editing programs, including AI tools, so testing it against synthetic video was a natural step.
    “We’ve used CNN algorithms to detect manipulated images and video and audio deepfakes with reliable success,” said Tai D. Nguyen, a doctoral student in MISL and a coauthor of the paper. “Due to their ability to adapt with small amounts of new information, we thought they could be an effective solution for identifying AI-generated synthetic videos as well.”
    For the test, the group trained eight CNN detectors, including MISLnet, on the same dataset used to train the image detectors, which included real videos and AI-generated videos produced by the four publicly available programs. They then tested the programs against a set of videos that included a number created by generative AI programs that are not yet publicly available: Sora, Pika and VideoCrafter-v2.
    By analyzing a small portion — a patch — from a single frame of each video, the CNN detectors were able to learn what a synthetic video looks like at a granular level and apply that knowledge to the new set of videos. Each program was more than 93% effective at identifying the synthetic videos, with MISLnet performing the best, at 98.3%.
    The programs were slightly more effective when analyzing an entire video, by pulling out a random sampling of a few dozen patches from various frames and using those as a mini training set to learn the characteristics of the new video. Using a set of 80 patches, the programs were between 95% and 98% accurate.
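    The paper’s exact evaluation protocol is not reproduced here; the sketch below simply illustrates the general idea of patch-level scoring with video-level aggregation. The `detector` callable is a stand-in for a trained CNN, and the patch count and threshold are placeholders.

    ```python
    # Hypothetical sketch: classify a video by averaging per-patch scores.
    import numpy as np

    def sample_patches(frames, n_patches=80, size=128, rng=None):
        """Draw random square patches from random frames of a (T, H, W, C) video."""
        rng = rng or np.random.default_rng()
        T, H, W, _ = frames.shape
        patches = []
        for _ in range(n_patches):
            t = rng.integers(T)
            y = rng.integers(H - size)
            x = rng.integers(W - size)
            patches.append(frames[t, y:y + size, x:x + size])
        return np.stack(patches)

    def classify_video(frames, detector, threshold=0.5):
        """detector(patch) returns P(synthetic); average the scores, then threshold."""
        scores = np.array([detector(p) for p in sample_patches(frames)])
        return scores.mean() > threshold
    ```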
    With a bit of additional training, the programs were also more than 90% accurate at identifying the program that was used to create the videos, which the team suggests is because of the unique, proprietary approach each program uses to produce a video.
    “Videos are generated using a wide variety of strategies and generator architectures,” the researchers wrote. “Since each technique imparts significant traces, this makes it much easier for networks to accurately discriminate between each generator.”
    A Quick Study
    While the programs struggled when faced with a completely new generator they had not previously been exposed to, with a small amount of fine-tuning MISLnet could quickly learn to make the identification at 98% accuracy. This strategy, called “few-shot learning”, is an important capability because new AI technology is being created every day, so detection programs must be agile enough to adapt with minimal training.
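    As a loose sketch of what few-shot adaptation can look like in practice (not the paper’s procedure), one common recipe is to freeze most of a pretrained detector and fine-tune only its final classification layers on a handful of labelled clips from the new generator. The `model.head` attribute below is an assumed placeholder.

    ```python
    # Hypothetical sketch: few-shot fine-tuning of a pretrained detector.
    import torch
    import torch.nn as nn

    def few_shot_finetune(model, patches, labels, epochs=20, lr=1e-4):
        # Freeze everything, then unfreeze only the classification head.
        for p in model.parameters():
            p.requires_grad = False
        for p in model.head.parameters():   # assumes the model exposes a `head`
            p.requires_grad = True

        opt = torch.optim.Adam(model.head.parameters(), lr=lr)
        loss_fn = nn.BCEWithLogitsLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(patches).squeeze(1), labels.float())
            loss.backward()
            opt.step()
        return model
    ```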
    “We’ve already seen AI-generated video being used to create misinformation,” Stamm said. “As these programs become more ubiquitous and easier to use, we can reasonably expect to be inundated with synthetic videos. While detection programs shouldn’t be the only line of defense against misinformation — information literacy efforts are key — having the technological ability to verify the authenticity of digital media is certainly an important step.”
    Further information: https://ductai199x.github.io/beyond-deepfake-images/


    AI designs new drugs based on protein structures

    A new computer process developed by chemists at ETH Zurich makes it possible to generate active pharmaceutical ingredients quickly and easily based on a protein’s three-dimensional surface. The new process could revolutionise drug research.
    “It’s a real breakthrough for drug discovery,” says Gisbert Schneider, Professor at ETH Zurich’s Department of Chemistry and Applied Biosciences. Together with his former doctoral student Kenneth Atz, he has developed an algorithm that uses artificial intelligence (AI) to design new active pharmaceutical ingredients. For any protein with a known three-dimensional shape, the algorithm generates the blueprints for potential drug molecules that increase or inhibit the activity of the protein. Chemists can then synthesise and test these molecules in the laboratory.
    All the algorithm needs is a protein’s three-dimensional surface structure. Based on that, it designs molecules that bind specifically to the protein according to the lock-and-key principle so they can interact with it.
    Excluding side effects from the outset
    The new method builds on the decades-long efforts of chemists to elucidate the three-dimensional structure of proteins and to use computers to search for suitable potential drug molecules. Until now, this has often involved laborious manual work, and in many cases the search yielded molecules that were very difficult or impossible to synthesise. If researchers used AI in this process at all in recent years, it was primarily to improve existing molecules.
    Now, without human intervention, a generative AI is able to develop drug molecules from scratch that match a protein structure. This groundbreaking new process ensures right from the start that the molecules can be chemically synthesised. In addition, the algorithm suggests only molecules that interact with the specified protein at the desired location and hardly at all with any other proteins. “This means that when designing a drug molecule, we can be sure that it has as few side effects as possible,” Atz says.
    To create the algorithm, the scientists trained an AI model with information from hundreds of thousands of known interactions between chemical molecules and the corresponding three-dimensional protein structures.
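    The ETH algorithm itself is not shown in this article; as a loose illustration of one small piece of such a pipeline (checking that proposed molecules are chemically valid and drug-sized before synthesis is attempted), the sketch below filters hypothetical generator output with RDKit. The SMILES strings and the weight cutoff are invented for illustration.

    ```python
    # Hypothetical sketch: filtering generative-model output for chemical
    # validity with RDKit. The SMILES strings are invented placeholders;
    # this is not the ETH team's algorithm.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    proposed = [
        "CC(=O)Oc1ccccc1C(=O)O",  # aspirin, a valid structure
        "NC(=O)C1CC1",            # a valid small amide
        "C1CC(=O",                # malformed SMILES, should be rejected
    ]

    candidates = []
    for smiles in proposed:
        mol = Chem.MolFromSmiles(smiles)  # returns None for invalid structures
        if mol is None:
            continue
        if Descriptors.MolWt(mol) < 500:  # crude, illustrative drug-likeness cut
            candidates.append(smiles)

    print(candidates)
    ```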

    Successful tests with industry
    Together with researchers from the pharmaceutical company Roche and other cooperation partners, the ETH team tested the new process and demonstrated what it is capable of. The scientists searched for molecules that interact with proteins in the PPAR class — proteins that regulate sugar and fatty acid metabolism in the body. Several diabetes drugs used today increase the activity of PPARs, which causes the cells to absorb more sugar from the blood and the blood sugar level to fall.
    Straightaway the AI designed new molecules that also increase the activity of PPARs, like the drugs currently available, but without a lengthy discovery process. After the ETH researchers had produced these molecules in the lab, colleagues at Roche subjected them to a variety of tests. These showed that the new substances are indeed stable and non-toxic right from the start.
    The researchers are not pursuing these molecules any further with a view to bringing them to market as drugs. Instead, the purpose of the molecules was to subject the new AI process to an initial rigorous test. Schneider says, however, that the algorithm is already being used for similar studies at ETH Zurich and in industry. One of these is a project with the Children’s Hospital Zurich for the treatment of medulloblastomas, the most common malignant brain tumours in children. Moreover, the researchers have published the algorithm and its software so that researchers worldwide can now use them for their own projects.
    “Our work has made the world of proteins accessible for generative AI in drug research,” Schneider says. “The new algorithm has enormous potential.” This is especially true for all medically relevant proteins in the human body that don’t interact with any known chemical compounds.


    Advancing the safety of AI-driven machinery requires closer collaboration with humans

    An ongoing research project at Tampere University aims to create adaptable safety systems for highly automated off-road mobile machinery to meet industry needs. Research has revealed critical gaps in compliance with legislation related to public safety when using mobile working machines controlled by artificial intelligence.
    As the adoption of highly automated off-road machinery increases, so does the need for robust safety measures. Conventional safety processes often fail to consider the health and safety risks posed by systems controlled by artificial intelligence (AI).
    Marea de Koning, a doctoral researcher specialising in automation at Tampere University, aims to ensure public safety without holding back technological advancement. She is developing a safety framework specifically tailored for autonomous mobile machines that operate in collaboration with humans. The framework intends to enable original equipment manufacturers (OEMs), safety and system engineers, and industry stakeholders to create safety systems that comply with evolving legislation.
    Balance between humans and autonomous machines
    Anticipating all the possible ways a hazard can emerge and ensuring that the AI can safely manage hazardous scenarios is practically impossible. We need to adjust our approach to safety to focus more on finding ways to successfully manage unforeseen events.
    We need robust risk management systems, often incorporating a human-in-the-loop safety option. Here a human supervisor is expected to intervene when necessary. But in autonomous machinery, relying on human intervention is impractical. According to de Koning, there can be measurable degradations in human performance when automation is used due to, for example, boredom, confusion, cognitive capacities, loss of situational awareness, and automation bias. These factors significantly impact safety, and a machine must become capable of safely managing its own behaviour.
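    De Koning’s framework is not published as code; purely to illustrate the idea that a machine must manage its own degraded states rather than wait for a human supervisor, the toy supervisory state machine below degrades behaviour as sensor confidence drops. All states and thresholds are invented for illustration.

    ```python
    # Toy sketch: a supervisory state machine that degrades an autonomous
    # machine's behaviour safely on its own. States/thresholds are invented.
    from enum import Enum, auto

    class Mode(Enum):
        NOMINAL = auto()
        DEGRADED = auto()   # e.g. reduced speed, restricted workspace
        SAFE_STOP = auto()  # controlled stop, await inspection

    def next_mode(sensor_confidence, hazard_detected):
        if hazard_detected or sensor_confidence < 0.3:
            return Mode.SAFE_STOP
        if sensor_confidence < 0.7:
            return Mode.DEGRADED
        return Mode.NOMINAL

    for confidence, hazard in [(0.9, False), (0.6, False), (0.2, False)]:
        print(confidence, hazard, "->", next_mode(confidence, hazard).name)
    ```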
    “My approach considers hazards associated with AI-driven decision-making, as well as risk assessment and adaptability to unforeseen scenarios. I think it is important to actively engage with industry partners to ensure real-world applicability. By collaborating with manufacturers, it is possible to bridge the gap between theoretical frameworks and practical implementation,” she says.

    The framework intends to support OEMs in designing and developing compliant safety systems and to ensure that their products adhere to evolving regulations.
    Integrating the framework into existing machinery
    Marea de Koning started her research in November 2020 and will finish it by November 2024. The project is funded partly by the Doctoral School of Industry Innovations and partly by a Finnish system supplier.
    De Koning’s next research project, starting in April, will focus on integrating a subset of her safety framework and rigorously testing its effectiveness. Regulation 2023/1230 replaces Directive 2006/42/EC as of January 2027, posing a significant challenge to OEMs.
    “I’m doing everything I can to ensure that safety remains at the forefront of technological advancements,” she concludes.
    The research provides valuable insights for policymakers, engineers and safety professionals. The article presenting the findings, titled “A Comprehensive Approach to Safety for Highly Automated Off-Road Machinery under Regulation 2023/1230”, was published in the journal Safety Science.


    Social media can be used to increase fruit and vegetable intake in young people

    Researchers from Aston University have found that people following healthy eating accounts on social media for as little as two weeks ate more fruit and vegetables and less junk food.
    Previous research has shown that positive social norms about fruit and vegetables increase individuals’ consumption of them. The research team sought to investigate whether positive representation of healthier food on social media would have the same effect. The research was led by Dr Lily Hawkins as part of her PhD, supervised by Dr Jason Thomas and Professor Claire Farrow in the School of Psychology.
    The researchers recruited 52 volunteers, all social media users, with a mean age of 22, and split them into two groups. Volunteers in the first group, known as the intervention group, were asked to follow healthy eating Instagram accounts in addition to their usual accounts. Volunteers in the second group, known as the control group, were asked to follow interior design accounts. The experiment lasted two weeks, and the volunteers recorded what they ate and drank during the time period.
    Overall, participants following the healthy eating accounts ate an extra 1.4 portions of fruit and vegetables per day and 0.8 fewer energy dense items, such as high-calorie snacks and sugar-sweetened drinks, per day. This is a substantial improvement compared to previous educational and social media-based interventions attempting to improve diets.
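    The study’s statistical analysis is not reproduced here; the sketch below shows the kind of two-group comparison that underlies such a finding, run on simulated food-diary data (26 volunteers per group, matching the study’s split, but with invented numbers).

    ```python
    # Hypothetical sketch: comparing daily fruit-and-vegetable portions between
    # intervention and control groups. Simulated data, not the study's.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    control = rng.normal(loc=3.0, scale=1.2, size=26)       # portions/day
    intervention = rng.normal(loc=4.4, scale=1.2, size=26)  # ~1.4 extra portions

    t, p = stats.ttest_ind(intervention, control)
    print(f"mean difference: {intervention.mean() - control.mean():.2f} portions/day")
    print(f"t = {t:.2f}, p = {p:.4f}")
    ```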
    Dr Thomas and the team believe affiliation is a key component of the change in eating behaviour. For example, the effect was more pronounced amongst participants who felt affiliated with other Instagram users.
    The 2018 NHS Health Survey for England showed that only 28% of the UK population consumed the recommended five portions of fruit and vegetables per day. Low consumption of such food is linked to heart disease, cancer and stroke, so identifying ways to encourage higher consumption is vital. Exposing people to positive social norms, such as posters in canteens encouraging vegetable consumption or posters in bars discouraging dangerous levels of drinking, has been shown to work. Social media is now so prevalent that the researchers believe it could be an ideal way to spread positive social norms around high fruit and vegetable consumption, particularly amongst younger people.
    Dr Thomas said:
    “This is only a pilot intervention study at the moment, but it’s quite an exciting suite of findings, as it suggests that even some minor tweaks to our social media accounts might lead to substantial improvements in diet, at zero cost! Our future work will examine whether such interventions actually do change our perceptions of what others are consuming, and also, whether these interventions produce effects that are sustained over time.”
    Dr Hawkins, who is now at the University of Exeter, said:
    “Our previous research has demonstrated that social norms on social media may nudge food consumption, but this pilot demonstrates that this translates to the real world. Of course, we would like to now understand whether this can be replicated in a larger, community sample.”


    Computer game in school made students better at detecting fake news

    A computer game helped upper secondary school students become better at distinguishing between reliable and misleading news. This is shown by a study conducted by researchers at Uppsala University and elsewhere.
    “This is an important step towards equipping young people with the tools they need to navigate in a world full of disinformation. We all need to become better at identifying manipulative strategies — prebunking, as it is known — since it is virtually impossible to discern deep fakes, for example, and other AI-generated disinformation with the naked eye,” says Thomas Nygren, Professor of Education at Uppsala University.
    Along with three other researchers, he conducted a study involving 516 Swedish upper secondary school students in different programmes at four schools. The study, published in the Journal of Research on Technology in Education, investigated the effect of the game Bad News in a classroom setting — this is the first time the game has been scientifically tested in an ordinary classroom. The game was created for research and teaching, and participants assume the role of a spreader of misleading news. The students in the study played the game either individually, in pairs, or as a whole class with a shared scorecard — all three formats had positive effects. This surprised the researchers, who had expected students to learn more by working at the computer together.
    “The students improved their ability to identify manipulative techniques in social media posts and to distinguish between reliable and misleading news,” Nygren comments.
    The study also showed that students who already had a positive attitude towards trustworthy news sources were better at distinguishing disinformation, and this attitude became significantly more positive after playing the game. Moreover, many students improved their assessments of credibility and were able to explain how they could identify manipulative techniques in a more sophisticated way.
    The researchers noted that the competitive elements in the game increased student interest and engagement. They conclude that the study offers teachers insights into how serious games can be used in formal instruction to promote media and information literacy.
    “Some people believe that gamification can enhance learning in school. However, our results show that more gamification in the form of competitive elements does not necessarily mean that students learn more — though it can be perceived as more fun and interesting,” Nygren says.
    Participating researchers: Carl-Anton Werner Axelsson (Mälardalen and Uppsala), Thomas Nygren (Uppsala), Jon Roozenbeek (Cambridge) and Sander van der Linden (Cambridge).


    Holographic displays offer a glimpse into an immersive future

    Setting the stage for a new era of immersive displays, researchers are one step closer to mixing the real and virtual worlds in an ordinary pair of eyeglasses using high-definition 3D holographic images, according to a study led by Princeton University researchers.
    Holographic images have real depth because they are three dimensional, whereas monitors merely simulate depth on a 2D screen. Because we see in three dimensions, holographic images could be integrated seamlessly into our normal view of the everyday world.
    The result is a virtual and augmented reality display that has the potential to be truly immersive, the kind where you can move your head normally and never lose the holographic images from view. “To get a similar experience using a monitor, you would need to sit right in front of a cinema screen,” said Felix Heide, assistant professor of computer science and senior author on a paper published April 22 in Nature Communications.
    And you wouldn’t need to wear a screen in front of your eyes to get this immersive experience. Optical elements required to create these images are tiny and could potentially fit on a regular pair of glasses. Virtual reality displays that use a monitor, as current displays do, require a full headset. And they tend to be bulky because they need to accommodate a screen and the hardware necessary to operate it.
    “Holography could make virtual and augmented reality displays easily usable, wearable and ultrathin,” said Heide. They could transform how we interact with our environments, everything from getting directions while driving, to monitoring a patient during surgery, to accessing plumbing instructions while doing a home repair.
    One of the most important challenges is quality. Holographic images are created by a small chip-like device called a spatial light modulator. Until now, these modulators could only create images that are either small and clear or large and fuzzy. This tradeoff between image size and clarity results in a narrow field of view, too narrow to give the user an immersive experience. “If you look towards the corners of the display, the whole image may disappear,” said Nathan Matsuda, research scientist at Meta and co-author on the paper.
    Heide, Matsuda and Ethan Tseng, doctoral student in computer science, have created a device to improve image quality and potentially solve this problem. Along with their collaborators, they built a second optical element to work in tandem with the spatial light modulator. Their device filters the light from the spatial light modulator to expand the field of view while preserving the stability and fidelity of the image. It creates a larger image with only a minimal drop in quality.
    Image quality has been a core challenge preventing the practical applications of holographic displays, said Matsuda. “The research brings us one step closer to resolving this challenge,” he said.
    The new optical element is like a very small custom-built piece of frosted glass, said Heide. The pattern etched into the frosted glass is the key. Designed using AI and optical techniques, the etched surface scatters light created by the spatial light modulator in a very precise way, pushing some elements of an image into frequency bands that are not easily perceived by the human eye. This improves the quality of the holographic image and expands the field of view.
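    The team’s actual design pipeline is not described in code in this article; as a rough illustration of the underlying Fourier optics, the sketch below propagates light from a simulated spatial light modulator to an image plane with the textbook angular spectrum method. The wavelength, pixel pitch and distance are invented parameters.

    ```python
    # Illustrative sketch: angular-spectrum propagation of the field leaving a
    # simulated spatial light modulator (SLM). Textbook Fourier optics only.
    import numpy as np

    wavelength = 532e-9   # green laser, metres
    pitch = 8e-6          # SLM pixel pitch, metres
    n = 512               # SLM resolution (n x n)
    z = 0.05              # propagation distance, metres

    # A random phase pattern stands in for a computed hologram.
    rng = np.random.default_rng(0)
    field = np.exp(1j * 2 * np.pi * rng.random((n, n)))

    # Spatial-frequency grid and angular-spectrum transfer function.
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0)  # evanescent waves dropped

    propagated = np.fft.ifft2(np.fft.fft2(field) * H)
    intensity = np.abs(propagated) ** 2  # what a camera or eye would record
    print(intensity.shape, float(intensity.mean()))
    ```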
    Still, hurdles to making a working holographic display remain. The image quality isn’t yet perfect, and the fabrication process for the optical elements needs to be improved, said Heide. “A lot of technology has to come together to make this feasible,” he said. “But this research shows a path forward.”