More stories


    AI-based app can help physicians find skin melanoma

    A mobile app that uses artificial intelligence (AI) to analyse images of suspected skin lesions can diagnose melanoma with very high precision. This is shown in a study led by Linköping University in Sweden, where the app was tested in primary care. The results have been published in the British Journal of Dermatology.
    “Our study is the first in the world to test an AI-based mobile app for melanoma in primary care in this way. A great many studies have been done on previously collected images of skin lesions, and those studies largely agree that AI is good at distinguishing dangerous lesions from harmless ones. We were quite surprised by the fact that no one had done a study on primary care patients,” says Magnus Falk, senior associate professor at the Department of Health, Medicine and Caring Sciences at Linköping University, specialist in general practice at Region Östergötland, who led the current study.
    Melanoma can be difficult to differentiate from other skin changes, even for experienced physicians. However, it is important to detect melanoma as early as possible, as it is a serious type of skin cancer.
    There is currently no established AI-based support for assessing skin lesions in Swedish healthcare.
    “Primary care physicians encounter many skin lesions every day and with limited resources need to make decisions about treatment in cases of suspected skin melanoma. This often results in an abundance of referrals to specialists or the removal of skin lesions, which in the majority of cases turn out to be harmless. We wanted to see if the AI support tool in the app could perform better than primary care physicians when it comes to identifying pigmented skin lesions as dangerous or not, in comparison with the final diagnosis,” says Panos Papachristou, researcher affiliated with Karolinska Institutet and specialist in general practice, main author of the study and co-founder of the company that developed the app.
    And the results are promising.
    “First of all, the app missed no melanoma. This disease is so dangerous that it’s essential not to miss it. But it’s almost equally important that the AI decision support tool could clear many suspected skin lesions and determine that they were harmless,” says Magnus Falk.

    In the study, primary care physicians followed the usual procedure for diagnosing suspected skin tumours. If the physicians suspected melanoma, they either referred the patient to a dermatologist for diagnosis, or the skin lesion was cut away for tissue analysis and diagnosis.
    Only after the physician decided how to handle the suspected melanoma did they use the AI-based app. This involves the physician taking a picture of the skin lesion with a mobile phone equipped with a magnifying lens called a dermatoscope. The app analyses the image and provides guidance on whether or not the skin lesion appears to be melanoma.
    To find out how well the AI-based app worked as a decision support tool, the researchers compared the app’s response to the diagnoses made by the regular diagnostic procedure.
    Of the more than 250 skin lesions examined, physicians found 11 melanomas and 10 precursors of cancer, known as in situ melanoma. The app found all the melanomas, and missed only one precursor. In cases where the app responded that a suspected lesion was not a melanoma, including in situ melanoma, there was a 99.5 percent probability that this was correct.
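    As a quick arithmetic illustration of the 99.5 percent figure, the probability that a “not melanoma” response was correct is the negative predictive value: true negatives divided by all negative responses. The counts below are hypothetical, chosen only to be consistent with the article’s single missed precursor; only the 99.5 percent value comes from the study.

```python
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """Probability that a 'not melanoma' response is correct."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts: one missed in situ melanoma among ~200 negative calls.
npv = negative_predictive_value(true_negatives=199, false_negatives=1)
print(f"NPV: {npv:.1%}")  # NPV: 99.5%
```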
    “It seems that this method could be useful. But in this study, physicians weren’t allowed to let their decision be influenced by the app’s response, so we don’t know what happens in practice if you use an AI-based decision support tool. So even if this is a very positive result, there is uncertainty and we need to continue to evaluate the usefulness of this tool with scientific studies,” says Magnus Falk.
    The researchers now plan to proceed with a large follow-up primary care study in several countries, where use of the app as an active decision support tool will be compared to not using it at all.
    The study was funded with support from Region Östergötland and the Analytic Imaging Diagnostics Arena, AIDA, in Linköping, which is funded by the strategic innovation programme Medtech4Health.


    AI ethics are ignoring children, say researchers

    Researchers from the Oxford Martin Programme on Ethical Web and Data Architectures (EWADA), University of Oxford, have called for a more considered approach when embedding ethical principles in the development and governance of AI for children.
    In a perspective paper published today in Nature Machine Intelligence, the authors highlight that although there is a growing consensus around what high-level AI ethical principles should look like, too little is known about how to effectively apply them in practice for children. The study mapped the global landscape of existing ethics guidelines for AI and identified four main challenges in adapting such principles for children’s benefit:
    • A lack of consideration for the developmental side of childhood, especially the complex and individual needs of children, age ranges, development stages, backgrounds, and characters.
    • Minimal consideration for the role of guardians (e.g. parents) in childhood. For example, parents are often portrayed as having superior experience to children, when the digital world may need to reflect on this traditional role of parents.
    • Too few child-centred evaluations that consider children’s best interests and rights. Quantitative assessments are the norm when assessing issues like safety and safeguarding in AI systems, but these tend to fall short when considering factors like the developmental needs and long-term wellbeing of children.
    • Absence of a coordinated, cross-sectoral, and cross-disciplinary approach to formulating ethical AI principles for children that is necessary to effect impactful practice changes.
    The researchers also drew on real-life examples and experiences when identifying these challenges. They found that although AI is being used to keep children safe, typically by identifying inappropriate content online, there has been a lack of initiative to incorporate safeguarding principles into AI innovations, including those supported by Large Language Models (LLMs). Such integration is crucial to prevent children from being exposed to biased content based on factors such as ethnicity, or to harmful content, especially for vulnerable groups, and the evaluation of such methods should go beyond mere quantitative metrics such as accuracy or precision. Through their partnership with the University of Bristol, the researchers are also designing tools to help children with ADHD, carefully considering their needs and designing interfaces to support their sharing of data with AI-related algorithms in ways that are aligned with their daily routines, digital literacy skills, and need for simple yet effective interfaces.
    In response to these challenges, the researchers recommended:
    • increasing the involvement of key stakeholders, including parents and guardians, AI developers, and children themselves;
    • providing more direct support for industry designers and developers of AI systems, especially by involving them more in the implementation of ethical AI principles;
    • establishing legal and professional accountability mechanisms that are child-centred; and
    • increasing multidisciplinary collaboration around a child-centred approach involving stakeholders in areas such as human-computer interaction, design, algorithms, policy guidance, data protection law, and education.
    Dr Jun Zhao, Oxford Martin Fellow, Senior Researcher at the University’s Department of Computer Science, and lead author of the paper, said:
    “The incorporation of AI in children’s lives and our society is inevitable. While there are increased debates about who should ensure technologies are responsible and ethical, a substantial proportion of such burdens falls on parents and children to navigate this complex landscape.”
    “This perspective article examined existing global AI ethics principles and identified crucial gaps and future development directions. These insights are critical for guiding our industries and policymakers. We hope this research will serve as a significant starting point for cross-sectoral collaborations in creating ethical AI technologies for children and global policy development in this space.”
    The authors outlined several ethical AI principles that would especially need to be considered for children. They include ensuring fair, equal, and inclusive digital access, delivering transparency and accountability when developing AI systems, safeguarding privacy and preventing manipulation and exploitation, guaranteeing the safety of children, and creating age-appropriate systems while actively involving children in their development.
    Professor Sir Nigel Shadbolt, co-author, Director of the EWADA Programme, Principal of Jesus College Oxford and a Professor of Computing Science at the Department of Computer Science, said:
    “In an era of AI-powered algorithms, children deserve systems that meet their social, emotional, and cognitive needs. Our AI systems must be ethical and respectful at all stages of development, but this is especially critical during childhood.”


    Powerful new AI can predict people’s attitudes to vaccines

    A powerful new tool in artificial intelligence is able to predict whether someone is willing to be vaccinated against COVID-19.
    The predictive system uses a small set of data from demographics and personal judgments such as aversion to risk or loss.
    The findings point to a new technology that could have broad applications for predicting mental health and could lead to more effective public health campaigns.
    A team led by researchers at the University of Cincinnati and Northwestern University created a predictive model that combines machine learning with an integrated system of mathematical equations describing the lawful patterns in reward and aversion judgment.
    “We used a small number of variables and minimal computational resources to make predictions,” said lead author Nicole Vike, a senior research associate in UC’s College of Engineering and Applied Science.
    “COVID-19 is unlikely to be the last pandemic we see in the next decades. Having a new form of AI for prediction in public health provides a valuable tool that could help prepare hospitals for predicting vaccination rates and consequential infection rates.”
    The study was published in the Journal of Medical Internet Research Public Health and Surveillance.

    Researchers surveyed 3,476 adults across the United States in 2021 during the COVID-19 pandemic. At the time of the survey, the first vaccines had been available for more than a year.
    Respondents provided information such as where they live, income, highest education level completed, ethnicity and access to the internet. The respondents’ demographics mirrored those of the United States based on U.S. Census Bureau figures.
    Participants were asked if they had received either of the available COVID-19 vaccines. About 73% of respondents said they were vaccinated, slightly more than the 70% of the nation’s population that had been vaccinated in 2021.
    Further, they were asked if they routinely followed four recommendations designed to prevent the spread of the virus: wearing a mask, social distancing, washing their hands and not gathering in large groups.
    Participants were asked to rate how much they liked or disliked a randomly sequenced set of 48 pictures on a seven-point scale from -3 to 3. The pictures were from the International Affective Picture Set, a large set of emotionally evocative color photographs, in six categories: sports, disasters, cute animals, aggressive animals, nature and food.
    Vike said the goal of this exercise is to quantify mathematical features of people’s judgments as they observe mildly emotional stimuli. Measures from this task include concepts familiar to behavioral economists, or even people who gamble, such as aversion to risk (the point at which someone is willing to accept potential loss for a potential reward) and aversion to loss, the willingness to avoid risk by, for example, obtaining insurance.

    “The framework by which we judge what is rewarding or aversive is fundamental to how we make medical decisions,” said co-senior author Hans Breiter, a professor of computer science at UC. “A seminal paper in 2017 hypothesized the existence of a standard model of the mind. Using a small set of variables from mathematical psychology to predict medical behavior would support such a model. The work of this collaborative team has provided such support and argues that the mind is a set of equations akin to what is used in particle physics.”
    The judgment variables and demographics were compared between respondents who were vaccinated and those who were not. Three machine learning approaches were used to test how well the respondents’ judgment, demographics and attitudes toward COVID-19 precautions predicted whether they would get the vaccine.
    The study demonstrates that artificial intelligence can make accurate predictions about human attitudes with surprisingly little data or reliance on expensive and time-consuming clinical assessments.
    “We found that a small set of demographic variables and 15 judgment variables predict vaccine uptake with moderate to high accuracy and high precision,” the study said. “In an age of big-data machine learning approaches, the current work provides an argument for using fewer but more interpretable variables.”
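    The “fewer but more interpretable variables” approach the study describes can be sketched as a small logistic regression. This is a hedged illustration, not the study’s actual pipeline: the three features, the synthetic data, and the training settings below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for a few interpretable predictors
# (e.g. risk aversion, loss aversion, a demographic variable).
n = 1000
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, -1.0, 0.5])          # made-up "true" effects
p = 1 / (1 + np.exp(-(X @ true_w)))
y = (rng.random(n) < p).astype(float)        # binary "vaccinated" label

# Fit logistic regression by batch gradient descent.
w = np.zeros(3)
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (pred - y)) / n

accuracy = float(np.mean(((X @ w) > 0) == (y == 1)))
print(f"learned weights: {np.round(w, 2)}, accuracy: {accuracy:.2f}")
```

    With only three coefficients, the fitted model stays directly inspectable, which is the point the authors make against big-data approaches.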
    “The study is anti-big-data,” said co-senior author Aggelos Katsaggelos, an endowed professor of electrical engineering and computer science at Northwestern University. “It can work very simply. It doesn’t need super-computation, it’s inexpensive and can be applied by anyone who has a smartphone. We refer to it as computational cognition AI. It is likely you will be seeing other applications regarding alterations in judgment in the very near future.”


    Bendable energy storage materials by cool science

    Imagine being able to wear your smartphone on your wrist, not as a watch, but literally as a flexible band that wraps around your arm. How about clothes that charge your gadgets just by wearing them?
    Recently, a collaborative team led by Professor Jin Kon Kim and Dr. Keon-Woo Kim of Pohang University of Science and Technology (POSTECH), Professor Taesung Kim and M.S./Ph.D. student Hyunho Seok of Sungkyunkwan University (SKKU), and Professor Hong Chul Moon of University of Seoul (UOS) has brought us a step closer to achieving this reality. This research work was published in Advanced Materials.
    Mesoporous metal oxides (MMOs) are characterized by pores ranging from 2 to 50 nanometers (nm) in size. Due to their extensive surface area, MMOs have various applications, such as high-performance energy storage, efficient catalysis, semiconductors, and sensors. However, integrating MMOs into wearable and flexible devices remains a great challenge, because plastic substrates cannot maintain their integrity at the elevated temperatures (350°C or above) at which MMOs are typically synthesized.
    The research team tackled this problem by using the synergistic effect of heat and plasma to synthesize various MMOs on flexible materials at much lower temperatures (150 to 200°C), including vanadium oxide (V2O5), a renowned high-performance energy storage material, as well as V6O13, TiO2, Nb2O5, and WO3. The highly reactive chemical species in the plasma supply energy that would otherwise have to be provided by high temperature. The fabricated devices could be bent thousands of times without losing energy storage performance.
    Professor Jin Kon Kim, the leading researcher, said: “We’re on the brink of a revolution in wearable tech.”
    “Our breakthrough could lead to gadgets that are not only more flexible but also much more adaptable to our daily needs.”
    This research was supported by the National Creative Initiative Research Program, the Basic Research in Science & Engineering Program, and the Nano & Material Technology Development Program.


    Brain-inspired wireless system to gather data from salt-sized sensors

    Tiny chips may equal a big breakthrough for a team of scientists led by Brown University engineers.
    Writing in Nature Electronics, the research team describes a novel approach for a wireless communication network that can efficiently transmit, receive and decode data from thousands of microelectronic chips that are each no larger than a grain of salt.
    The sensor network is designed so the chips can be implanted into the body or integrated into wearable devices. Each submillimeter-sized silicon sensor mimics how neurons in the brain communicate through spikes of electrical activity. The sensors detect specific events as spikes and then transmit that data wirelessly in real time using radio waves, saving both energy and bandwidth.
    “Our brain works in a very sparse way,” said Jihun Lee, a postdoctoral researcher at Brown and study lead author. “Neurons do not fire all the time. They compress data and fire sparsely so that they are very efficient. We are mimicking that structure here in our wireless telecommunication approach. The sensors would not be sending out data all the time — they’d just be sending relevant data as needed as short bursts of electrical spikes, and they would be able to do so independently of the other sensors and without coordinating with a central receiver. By doing this, we would manage to save a lot of energy and avoid flooding our central receiver hub with less meaningful data.”
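    The sparse, event-driven scheme Lee describes can be sketched as a threshold rule: a sensor transmits only when a reading crosses a threshold, rather than streaming every sample. This is an illustration of the idea only; the readings and threshold below are made-up numbers, not the team’s actual protocol.

```python
def spike_events(readings, threshold):
    """Return (index, value) pairs only for samples whose magnitude
    crosses the threshold, mimicking sparse, neuron-like firing."""
    return [(i, v) for i, v in enumerate(readings) if abs(v) >= threshold]

readings = [0.1, 0.0, 2.3, 0.2, -1.9, 0.05]
events = spike_events(readings, threshold=1.5)
print(events)  # [(2, 2.3), (4, -1.9)] -- only two samples are "transmitted"
```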
    This radiofrequency transmission scheme also makes the system scalable and tackles a common problem with current sensor communication networks: they all need to be perfectly synced to work well.
    The researchers say the work marks a significant step forward in large-scale wireless sensor technology and may one day help shape how scientists collect and interpret information from these little silicon devices, especially since electronic sensors have become ubiquitous as a result of modern technology.
    “We live in a world of sensors,” said Arto Nurmikko, a professor in Brown’s School of Engineering and the study’s senior author. “They are all over the place. They’re certainly in our automobiles, they are in so many places of work and increasingly getting into our homes. The most demanding environment for these sensors will always be inside the human body.”
    That’s why the researchers believe the system can help lay the foundation for the next generation of implantable and wearable biomedical sensors. There is a growing need in medicine for microdevices that are efficient, unobtrusive and unnoticeable but that also operate as part of large ensembles to map physiological activity across an entire area of interest.

    “This is a milestone in terms of actually developing this type of spike-based wireless microsensor,” Lee said. “If we continue to use conventional methods, we cannot collect the high-channel-count data these applications will require in these kinds of next-generation systems.”
    The events the sensors identify and transmit can be specific occurrences such as changes in the environment they are monitoring, including temperature fluctuations or the presence of certain substances.
    The sensors are able to use as little energy as they do because external transceivers supply wireless power to the sensors as they transmit their data, meaning they just need to be within range of the energy waves sent out by the transceiver to get a charge. This ability to operate without needing to be plugged into a power source or battery makes them convenient and versatile for use in many different situations.
    The team designed and simulated the complex electronics on a computer and has worked through several fabrication iterations to create the sensors. The work builds on previous research from Nurmikko’s lab at Brown that introduced a new kind of neural interface system called “neurograins.” This system used a coordinated network of tiny wireless sensors to record and stimulate brain activity.
    “These chips are pretty sophisticated as miniature microelectronic devices, and it took us a while to get here,” said Nurmikko, who is also affiliated with Brown’s Carney Institute for Brain Science. “The amount of work and effort that is required in customizing the several different functions in manipulating the electronic nature of these sensors — that being basically squeezed to a fraction of a millimeter space of silicon — is not trivial.”
    The researchers demonstrated the efficiency of their system as well as just how much it could potentially be scaled up. They tested the system using 78 sensors in the lab and found they were able to collect and send data with few errors, even when the sensors were transmitting at different times. Through simulations, they were able to show how to decode data collected from the brains of primates using about 8,000 hypothetically implanted sensors.
    The researchers say next steps include optimizing the system for reduced power consumption and exploring broader applications beyond neurotechnology.
    “The current work provides a methodology we can further build on,” Lee said.


    Artificial nanofluidic synapses can store computational memory

    Memory, or the ability to store information in a readily accessible way, is an essential operation in computers and human brains. A key difference is that while brain information processing involves performing computations directly on stored data, computers shuttle data back and forth between a memory unit and a central processing unit (CPU). This inefficient separation (the von Neumann bottleneck) contributes to the rising energy cost of computers.
    Since the 1970s, researchers have been working on the concept of a memristor (memory resistor): an electronic component that can, like a synapse, both compute and store data. But Aleksandra Radenovic in the Laboratory of Nanoscale Biology (LBEN) in EPFL’s School of Engineering set her sights on something even more ambitious: a functional nanofluidic memristive device that relies on ions, rather than electrons and their oppositely charged counterparts (holes). Such an approach would more closely mimic the brain’s own, much more energy-efficient, way of processing information.
    “Memristors have already been used to build electronic neural networks, but our goal is to build a nanofluidic neural network that takes advantage of changes in ion concentrations, similar to living organisms,” Radenovic says.
    “We have fabricated a new nanofluidic device for memory applications that is significantly more scalable and much more performant than previous attempts,” says LBEN postdoctoral researcher Théo Emmerich. “This has enabled us, for the very first time, to connect two such ‘artificial synapses’, paving the way for the design of brain-inspired liquid hardware.”
    The research has recently been published in Nature Electronics.
    Just add water
    Memristors can switch between two conductance states — on and off — through manipulation of an applied voltage. While electronic memristors rely on electrons and holes to process digital information, LBEN’s memristor can take advantage of a range of different ions. For their study, the researchers immersed their device in an electrolyte water solution containing potassium ions, but others could be used, including sodium and calcium.

    “We can tune the memory of our device by changing the ions we use, which affects how it switches from on to off, or how much memory it stores,” Emmerich explains.
    The device was fabricated on a chip at EPFL’s Center of MicroNanoTechnology by creating a nanopore at the center of a silicon nitride membrane. The researchers added palladium and graphite layers to create nano-channels for ions. As a current flows through the chip, the ions percolate through the channels and converge at the pore, where their pressure creates a blister between the chip surface and the graphite. As the graphite layer is forced up by the blister, the device becomes more conductive, switching its memory state to ‘on’. Since the graphite layer stays lifted, even without a current, the device ‘remembers’ its previous state. A negative voltage puts the layers back into contact, resetting the memory to the ‘off’ state.
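    The switching behaviour described above can be summarized as a tiny state machine: a sufficiently positive voltage lifts the graphite layer and sets the device “on”, the state persists with no applied voltage (the memory), and a negative voltage resets it. This is a toy model of the reported behaviour, not the device physics; the threshold values are invented.

```python
class NanofluidicMemristor:
    """Toy on/off state model of the device behaviour described above."""

    def __init__(self):
        self.on = False  # 'off': graphite layer in contact with the chip

    def apply_voltage(self, volts: float) -> bool:
        if volts > 0.5:        # blister lifts the graphite layer -> 'on'
            self.on = True
        elif volts < -0.5:     # layers pushed back into contact -> 'off'
            self.on = False
        # between the thresholds the state persists: this is the "memory"
        return self.on

m = NanofluidicMemristor()
m.apply_voltage(1.0)                   # set 'on'
state_retained = m.apply_voltage(0.0)  # no voltage: state is remembered
print(state_retained)  # True
```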
    “Ion channels in the brain undergo structural changes inside a synapse, so this also mimics biology,” says LBEN PhD student Yunfei Teng, who worked on fabricating the devices — dubbed highly asymmetric channels (HACs) in reference to the shape of the ion flow toward the central pores.
    LBEN PhD student Nathan Ronceray adds that the team’s observation of the HAC’s memory action in real time is also a novel achievement in the field. “Because we were dealing with a completely new memory phenomenon, we built a microscope to watch it in action.”
    By collaborating with Riccardo Chiesa and Edoardo Lopriore of the Laboratory of Nanoscale Electronics and Structures, led by Andras Kis, the researchers succeeded in connecting two HACs with an electrode to form a logic circuit based on ion flow. This achievement represents the first demonstration of digital logic operations based on synapse-like ionic devices. But the researchers aren’t stopping there: their next goal is to connect a network of HACs with water channels to create fully liquid circuits. In addition to providing an in-built cooling mechanism, the use of water would facilitate the development of bio-compatible devices with potential applications in brain-computer interfaces or neuromedicine.


    Researchers develop deep learning model to predict breast cancer

    Researchers have developed a new, interpretable artificial intelligence (AI) model to predict 5-year breast cancer risk from mammograms, according to a new study published today in Radiology, a journal of the Radiological Society of North America (RSNA).
    One in 8 women, or approximately 13% of the female population in the U.S., will develop invasive breast cancer in their lifetime and 1 in 39 women (3%) will die from the disease, according to the American Cancer Society. Breast cancer screening with mammography, for many women, is the best way to find breast cancer early when treatment is most effective. Having regularly scheduled mammograms can significantly lower the risk of dying from breast cancer. However, it remains unclear how to precisely predict which women will develop breast cancer through screening alone.
    Mirai, a state-of-the-art, deep learning-based algorithm, has demonstrated proficiency as a tool to help predict breast cancer but, because little is known about its reasoning process, the algorithm has the potential for overreliance by radiologists and incorrect diagnoses.
    “Mirai is a black box — a very large and complex neural network, similar in construction to ChatGPT — and no one knew how it made its decisions,” said the study’s lead author, Jon Donnelly, B.S., a Ph.D. student in the Department of Computer Science at Duke University in Durham, North Carolina. “We developed an interpretable AI method that allows us to predict breast cancer from mammograms 1 to 5 years in advance. AsymMirai is much simpler and much easier to understand than Mirai.”
    For the study, Donnelly and colleagues in the Department of Computer Science and Department of Radiology compared their newly developed mammography-based deep learning model called AsymMirai to Mirai’s 1- to 5-year breast cancer risk predictions. AsymMirai was built on the “front end” deep learning portion of Mirai, while replacing the rest of that complicated method with an interpretable module: local bilateral dissimilarity, which looks at tissue differences between the left and right breasts.
    “Previously, differences between the left and right breast tissue were used only to help detect cancer, not to predict it in advance,” Donnelly said. “We discovered that Mirai uses comparisons between the left and right sides, which is how we were able to design a substantially simpler network that also performs comparisons between the sides.”
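    The “local bilateral dissimilarity” idea Donnelly describes can be sketched as comparing mirrored left- and right-side feature maps patch by patch. This is a hedged illustration, not the actual AsymMirai module: the array shapes and the choice of L2 distance are assumptions.

```python
import numpy as np

def local_bilateral_dissimilarity(left_feats, right_feats):
    """Mean patch-wise L2 distance between left features and the
    horizontally mirrored right features, so that anatomically
    corresponding regions are compared."""
    right_mirrored = right_feats[:, ::-1]
    return float(np.mean(np.linalg.norm(left_feats - right_mirrored, axis=-1)))

rng = np.random.default_rng(1)
left = rng.normal(size=(8, 8, 16))   # hypothetical 8x8 grid of 16-dim features
right = left[:, ::-1] + rng.normal(scale=0.1, size=(8, 8, 16))  # near-symmetric
score = local_bilateral_dissimilarity(left, right)
print(f"dissimilarity score: {score:.3f}")  # small for near-symmetric tissue
```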
    For the study, the researchers compared 210,067 mammograms from 81,824 patients in the EMory BrEast imaging Dataset (EMBED) from January 2013 to December 2020 using both Mirai and AsymMirai models. The researchers found that their simplified deep learning model performed almost as well as the state-of-the-art Mirai for 1- to 5-year breast cancer risk prediction.
    The results also supported the clinical importance of breast asymmetry and, as a result, highlighted the potential of bilateral dissimilarity as a future imaging marker for breast cancer risk.
    Since the reasoning behind AsymMirai’s predictions is easy to understand, it could be a valuable adjunct to human radiologists in breast cancer diagnoses and risk prediction, Donnelly said.
    “We can, with surprisingly high accuracy, predict whether a woman will develop cancer in the next 1 to 5 years based solely on localized differences between her left and right breast tissue,” he said. “This could have public impact because it could, in the not-too-distant future, affect how often women receive mammograms.”


    Backyard insect inspires invisibility devices, next gen tech

    Leafhoppers, a common backyard insect, secrete and coat themselves in tiny mysterious particles that could provide both the inspiration and the instructions for next-generation technology, according to a new study led by Penn State researchers. In a first, the team precisely replicated the complex geometry of these particles, called brochosomes, and gained a better understanding of how they absorb both visible and ultraviolet light.
    This could allow the development of bioinspired optical materials with possible applications ranging from invisible cloaking devices to coatings to more efficiently harvest solar energy, said Tak-Sing Wong, professor of mechanical engineering and biomedical engineering. Wong led the study, which was published today (March 18) in the Proceedings of the National Academy of Sciences of the United States of America (PNAS).
    The unique, tiny particles have an unusual soccer ball-like geometry with cavities, and their exact purpose for the insects has been something of a mystery to scientists since the 1950s. In 2017, Wong led the Penn State research team that was the first to create a basic, synthetic version of brochosomes in an effort to better understand their function.
    “This discovery could be very useful for technological innovation,” said Lin Wang, postdoctoral scholar in mechanical engineering and the lead author of the study. “With a new strategy to regulate light reflection on a surface, we might be able to hide the thermal signatures of humans or machines. Perhaps someday people could develop a thermal invisibility cloak based on the tricks used by leafhoppers. Our work shows how understanding nature can help us develop modern technologies.”
    Wang went on to explain that even though scientists have known about brochosome particles for three-quarters of a century, making them in a lab has been a challenge due to the complexity of the particle’s geometry.
    “It has been unclear why the leafhoppers produce particles with such complex structures,” Wang said. “We managed to make these brochosomes using a high-tech 3D-printing method in the lab. We found that these lab-made particles can reduce light reflection by up to 94%. This is a big discovery because it’s the first time we’ve seen nature do something like this, where it controls light in such a specific way using hollow particles.”
    Theories on why leafhoppers coat themselves with a brochosome armor have ranged from keeping the insects free of contaminants and water to serving as a superhero-like invisibility cloak. However, a new understanding of the particles’ geometry raises a strong possibility that their main purpose is camouflage to avoid predators, according to Wong, the study’s corresponding author.

    The researchers have found that the size of the holes in the brochosome that give it a hollow, soccer ball-like appearance is extremely important. The size is consistent across leafhopper species, no matter the size of the insect’s body. The brochosomes are roughly 600 nanometers in diameter — about half the size of a single bacterium — and the brochosome pores are around 200 nanometers.
    “That makes us ask a question,” Wong said. “Why this consistency? What is the secret of having brochosomes of about 600 nanometers with about 200-nanometer pores? Does that serve some purpose?”
    The researchers found that the unique design of brochosomes serves a dual purpose: absorbing ultraviolet (UV) light, which reduces visibility to predators with UV vision, such as birds and reptiles, and scattering visible light, creating an anti-reflective shield against potential threats. The size of the pores is well suited to absorbing light at ultraviolet wavelengths.
    This potentially could lead to a variety of applications for humans using synthetic brochosomes, such as more efficient solar energy harvesting systems, coatings that protect pharmaceuticals from light-induced damage, advanced sunscreens for better skin protection against sun damage and even cloaking devices, researchers said. To test this, the team first had to make synthetic brochosomes, a major challenge in and of itself.
    In their 2017 study, the researchers mimicked some features of brochosomes, particularly the dimples and their distribution, using synthetic materials. This allowed them to begin understanding the optical properties. However, they were only able to make something that looked like brochosomes, not an exact replica.
    “This is the first time we are able to make the exact geometry of the natural brochosome,” Wong said, explaining that the researchers were able to create scaled synthetic replicas of the brochosome structures by using advanced 3D-printing technology.

    They printed a scaled-up version that was 20,000 nanometers in size, or roughly one-fifth the diameter of a human hair. The researchers precisely replicated the shape and morphology, as well as the number and placement of pores using 3D printing, to produce still-small faux brochosomes that were large enough to characterize optically.
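As a back-of-the-envelope check on the figures reported above (a minimal sketch; the human-hair diameter of roughly 100 micrometers is an assumed typical value, not a number from the study):

```python
# Rough scale check on the dimensions reported in the article.
# All values except hair_diameter_nm come from the text; the hair
# diameter is an assumed typical value (~100 micrometers).

natural_diameter_nm = 600      # natural brochosome diameter
pore_diameter_nm = 200         # natural pore size
replica_diameter_nm = 20_000   # 3D-printed replica size
hair_diameter_nm = 100_000     # assumed typical human hair

scale_factor = replica_diameter_nm / natural_diameter_nm
print(f"Replica is ~{scale_factor:.0f}x the natural particle")

# The article says the replica is roughly one-fifth of a hair's diameter:
print(f"Replica / hair = {replica_diameter_nm / hair_diameter_nm:.2f}")

# The replicas keep the natural pore-to-diameter proportion:
print(f"Pore-to-diameter ratio: {pore_diameter_nm / natural_diameter_nm:.2f}")
```

The numbers are mutually consistent: a 20,000-nanometer replica is about 33 times the natural particle and one-fifth of an assumed 100-micrometer hair, as the article states.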
    They used a micro-Fourier transform infrared (micro-FTIR) spectrometer to examine how the brochosomes interacted with infrared light of different wavelengths, helping the researchers understand how the structures manipulate light.
    Next, the researchers said they plan to improve the synthetic brochosome fabrication to enable production at a scale closer to the size of natural brochosomes. They will also explore additional applications for synthetic brochosomes, such as information encryption, in which brochosome-like structures would make data visible only under certain light wavelengths.
    Wang noted that their brochosome work demonstrates the value of a biomimetic research approach, in which scientists look to nature for inspiration.
    “Nature has been a good teacher for scientists to develop novel advanced materials,” Wang said. “In this study, we have just focused on one insect species, but there are many more amazing insects out there that are waiting for material scientists to study, and they may be able to help us solve various engineering problems. They are not just bugs; they are inspirations.”
    Along with Wong and Wang from Penn State, other researchers on the study include Sheng Shen, professor of mechanical engineering, and Zhuo Li, doctoral candidate in mechanical engineering, both at Carnegie Mellon University, who contributed to the simulations in this study. Wang and Li contributed equally to this work, for which the researchers have filed a U.S. provisional patent. The Office of Naval Research supported this research. More