More stories

  • Are these newly found rare cells a missing link in color perception?

    Scientists have long wondered how the eye’s three cone photoreceptor types work together to allow humans to perceive color. In a new study in the Journal of Neuroscience, researchers at the University of Rochester used adaptive optics to identify rare retinal ganglion cells (RGCs) that could help fill in the gaps in existing theories of color perception.
    The retina has three types of cones that detect color, each sensitive to short, medium, or long wavelengths of light. Retinal ganglion cells transmit input from these cones to the central nervous system.
    In the 1980s, David Williams, the William G. Allyn Professor of Medical Optics, helped map the “cardinal directions” that explain color detection. However, there are differences between the way the eye detects color and the way colors appear to humans. Scientists suspected that while most RGCs follow the cardinal directions, they may work in tandem with small numbers of non-cardinal RGCs to create more complex perceptions.
    Recently, a team of researchers from Rochester’s Center for Visual Science, the Institute of Optics, and the Flaum Eye Institute identified some of these elusive non-cardinal RGCs in the fovea that could explain how humans see red, green, blue, and yellow.
    “We don’t really know anything for certain yet about these cells other than that they exist,” says Sara Patterson, a postdoctoral researcher at the Center for Visual Science who led the study. “There’s so much more that we have to learn about how their response properties operate, but they’re a compelling option as a missing link in how our retina processes color.”
    Using adaptive optics to overcome light distortion in the eye
    The team leveraged adaptive optics, which uses a deformable mirror to overcome light distortion and was first developed by astronomers to reduce image blur in ground-based telescopes. In the 1990s, Williams and his colleagues began applying adaptive optics to study the human eye. They created a camera that compensated for distortions caused by the eye’s natural aberrations, producing a clear image of individual photoreceptor cells.

    “The optics of the eye’s lens are imperfect and really reduce the amount of resolution you can get with an ophthalmoscope,” says Patterson. “Adaptive optics detects and corrects for these aberrations and gives us a crystal-clear look into the eye. This gives us unprecedented access to the retinal ganglion cells, which are the sole source of visual information to the brain.”
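    To make the closed-loop idea concrete, here is a minimal Python sketch of how an adaptive optics system can cancel aberrations: a simulated wavefront sensor measures the residual aberration, and an integrator controller updates the deformable-mirror command until the residual vanishes. The three-coefficient aberration model and all numbers are illustrative assumptions, not the Rochester instrument.

        import numpy as np

        # Toy adaptive optics loop: drive the deformable mirror so that the
        # residual aberration (eye aberration + mirror shape) goes to zero.
        rng = np.random.default_rng(0)
        eye_aberration = np.array([0.8, -0.5, 0.3])  # static aberration coefficients (arbitrary units)
        mirror = np.zeros(3)                         # deformable-mirror command
        gain = 0.5                                   # integrator gain, 0 < gain <= 1

        for step in range(15):
            residual = eye_aberration + mirror              # true residual wavefront error
            measured = residual + rng.normal(0.0, 0.01, 3)  # wavefront-sensor reading with noise
            mirror -= gain * measured                       # integrator update cancels the residual
            print(f"step {step:2d}: residual RMS = {np.sqrt(np.mean(residual**2)):.4f}")
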
    Patterson says improving our understanding of the retina’s complex processes could ultimately help lead to better methods for restoring vision for people who have lost it.
    “Humans have more than 20 types of ganglion cells and our models of human vision are only based on three,” says Patterson. “There’s so much going on in the retina that we don’t know about. This is one of the rare areas where engineering has totally outpaced basic visual science. People are out there with retinal prosthetics in their eyes right now, but if we knew what all those cells do, we could actually have retinal prosthetics drive ganglion cells in accordance with their actual functional roles.”
    The work was supported through funding by the National Institutes of Health, Air Force Office of Scientific Research, and Research to Prevent Blindness.

  • Millions of gamers advance biomedical research

    Leveraging gamers and video game technology can dramatically boost scientific research, according to a new study published today in Nature Biotechnology.
    Some 4.5 million gamers around the world have advanced medical science by helping to reconstruct microbial evolutionary histories using a minigame included inside the critically and commercially successful video game Borderlands 3. Their play has led to a significantly refined estimate of the relationships of microbes in the human gut. The results of this collaboration will both substantially advance our knowledge of the microbiome and improve the AI programs that will be used to carry out this work in the future.
    Tracing the evolutionary relationships of bacteria
    By playing Borderlands Science, a minigame within the looter-shooter video game Borderlands 3, these players have helped trace the evolutionary relationships of more than a million different kinds of bacteria that live in the human gut, some of which play a crucial role in our health. This represents a dramatic increase over what had previously been mapped of the microbiome. By aligning rows of tiles representing the genetic building blocks of different microbes, players have taken on alignment tasks that even the best existing computer algorithms have yet to solve.
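    The computational task the puzzles crowdsource is sequence alignment. As a point of reference, here is a minimal Python sketch of Needleman-Wunsch global alignment, the classic dynamic-programming approach to the problem; the scoring values are illustrative and not the game's actual scheme.

        # Minimal Needleman-Wunsch global alignment (illustrative scoring).
        def align_score(a, b, match=1, mismatch=-1, gap=-1):
            n, m = len(a), len(b)
            # score[i][j] = best score aligning a[:i] with b[:j]
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap
            for j in range(1, m + 1):
                score[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            return score[n][m]

        print(align_score("GATTACA", "GCATGCA"))  # best achievable global alignment score
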
    The project was led by McGill University researchers, developed in collaboration with Gearbox Entertainment Company, an award-winning interactive entertainment firm, and Massively Multiplayer Online Science (MMOS), a Swiss IT company connecting scientists to video games, and supported by expertise and genomic material from the Microsetta Initiative led by Rob Knight of the Departments of Pediatrics, Bioengineering, and Computer Science & Engineering at the University of California San Diego.
    Humans improve on existing algorithms and lay groundwork for the future
    Not only have the gamers improved on the results produced by the existing programs used to analyze DNA sequences, but they are also helping lay the groundwork for improved AI programs that can be used in the future.

    “We didn’t know whether the players of a popular game like Borderlands 3 would be interested or whether the results would be good enough to improve on what was already known about microbial evolution. But we’ve been amazed by the results,” says Jérôme Waldispühl, an associate professor in McGill’s School of Computer Science and senior author on the paper published today. “In half a day, the Borderlands Science players collected five times more data about microbial DNA sequences than our earlier game, Phylo, had collected over a 10-year period.”
    The idea for integrating DNA analysis into a commercial video game with mass-market appeal came from Attila Szantner, an adjunct professor in McGill’s School of Computer Science and CEO and co-founder of MMOS. “As almost half of the world’s population plays video games, it is of utmost importance that we find new creative ways to extract value from all this time and brainpower that we spend gaming,” says Szantner. “Borderlands Science shows how far we can get by teaming up with the game industry and its communities to tackle the big challenges of our times.”
    “Gearbox’s developers were eager to engage millions of Borderlands players globally with our creation of an appealing in-game experience to demonstrate how clever minds playing Borderlands are capable of producing tangible, useful, and valuable scientific data at a level not approachable with non-interactive technology and mediums,” said Randy Pitchford, founder and CEO of Gearbox Entertainment Company. “I’m proud that Borderlands Science has become one of the largest and most accomplished citizen science projects of all time, forecasting the opportunity for similar projects in future video games and pushing the boundaries of the positive effect that video games can make on the world.”
    Relating microbes to disease and lifestyle
    The tens of trillions of microbes that colonize our bodies play a crucial role in maintaining human health. But microbial communities can change over time in response to factors such as diet, medications, and lifestyle habits.
    Because of the sheer number of microbes involved, scientists are still only in the early days of being able to identify which microorganisms are affected by, or can affect, which conditions. That is why the researchers’ project and the results from the gamers are so important.

    “We expect to be able to use this information to relate specific kinds of microbes to what we eat, to how we age, and to the many diseases ranging from inflammatory bowel disease to Alzheimer’s that we now know microbes to be involved in,” adds Knight, who also directs the Center for Microbiome Innovation at UC San Diego. “Because evolution is a great guide to function, having a better tree relating our microbes to one another gives us a more precise view of what they are doing within and around us.”
    Building communities to advance knowledge
    “Here we have 4.5 million people who contributed to science. In a sense, this result is theirs too and they should feel proud about it,” says Waldispühl. “It shows that we can fight the fear or misconceptions that members of the public may have about science and start building communities who work collectively to advance knowledge.”
    “Borderlands Science created an incredible opportunity to engage with citizen scientists on a novel and important problem, using data generated by a separate massive citizen science project,” adds Daniel McDonald, the Scientific Director of the Microsetta Initiative. “These results demonstrate the remarkable value of open access data, and the scale of what is possible with inclusive practices in scientific endeavors.”

  • New colorful plastic films for versatile sensors and electronic displays

    Innovative electronics is one of the many applications of modern plastics. Some recent research efforts have used plastic to improve the color realism of display technologies.
    Now, in a study recently published in Angewandte Chemie International Edition, researchers from Osaka University and collaborating partners have developed a borane molecule that exhibits unusual light emission upon binding to fluoride. Incorporating their molecule into common plastic is straightforward, resulting in versatile materials for electronic display and chemical sensing applications.
    A class of molecules known as triarylboranes (TABs) has photochemical properties that are useful in optics. For example, upon binding to an anion such as fluoride, disruption of the TAB electronic structure often does two things to the light emission: shortens the wavelength (blue-shift) and reduces the intensity (turn-off response). Lengthening the emission wavelength (red-shift) is nearly unprecedented because corresponding design principles are unavailable. The researchers therefore aimed to develop a new class of TAB that exhibits a red-shifted sensing response and can be easily incorporated into plastic electronics and similar technologies.
    “Our borane-based sensor exhibits a red-shifted response upon binding to an anion such as fluoride,” explains Nae Aota, lead author of the study. “Our method is based on reducing the orbital energy gap of the molecule in the ground state and enhancing charge transfer in the excited state by reversing the role of the TAB from electron acceptor to donor.”
    A highlight of the researchers’ work is the facile incorporation of the TAB–fluoride complex into polystyrene and poly(methyl methacrylate) polymer films. The polymer matrix did not impair the red-shifted light emission. In fact, one film exhibited warm white light — a highly desired property that mimics sunlight. Furthermore, the color of the light emission was finely tunable by simply adjusting the quantity of added fluoride.
    “We’re excited by the versatility of our thin films,” says Youhei Takeda, senior author. “We can use the bipolarity of the phenazaboride to prepare plastic films ranging from blue to near-infrared, for displays and ultra-sensitive anion detection.”
    This work is an important step forward in electronic display technologies. Furthermore, by tuning the selectivity of the TAB to anion binding (i.e., detecting only one type of anion even in the presence of other potentially competing anions), applications to highly sought sensing technologies will be straightforward.

  • Quantum precision: A new kind of resistor

    Researchers at the University of Würzburg have developed a method that can improve the performance of quantum resistance standards. It’s based on a quantum phenomenon called the quantum anomalous Hall effect.
    The precise measurement of electrical resistance is essential in industrial production or electronics — for example, in the manufacture of high-tech sensors, microchips and flight controls. “Very precise measurements are essential here, as even the smallest deviations can significantly affect these complex systems,” explains Professor Charles Gould, a physicist at the Institute for Topological Insulators at the University of Würzburg (JMU). “With our new measurement method, we can significantly improve the accuracy of resistance measurements, without any external magnetic field, using the Quantum Anomalous Hall Effect (QAHE).”
    How the New Method Works
    Many people may remember the classic Hall effect from their physics lessons: when a current flows through a conductor that is exposed to a magnetic field, a voltage is created — the so-called Hall voltage. The Hall resistance, obtained by dividing this voltage by the current, increases as the magnetic field strength increases. In thin layers and at large enough magnetic fields, this resistance develops discrete steps with values of exactly h/ne², where h is Planck’s constant, e is the elementary charge, and n is an integer. This is known as the quantum Hall effect because the resistance depends only on fundamental constants of nature (h and e), which makes it an ideal standard resistor.
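    The n = 1 step, h/e², is the von Klitzing constant, about 25,812.807 ohms. Because h and e have exact defined values in the SI, the quantized resistances can be computed directly, as this short Python check shows:

        # Quantized Hall resistance h/(n e^2) from the exact SI constants.
        h = 6.62607015e-34   # Planck constant, J*s (exact)
        e = 1.602176634e-19  # elementary charge, C (exact)

        for n in (1, 2, 3):
            print(f"n = {n}: R = {h / (n * e**2):,.3f} ohm")
        # n = 1 gives the von Klitzing constant, ~25,812.807 ohm
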
    The special feature of the QAHE is that it allows the quantum Hall effect to exist at zero magnetic field. “The operation in the absence of any external magnetic field not only simplifies the experiment, but also gives an advantage when it comes to determining another physical quantity: the kilogram. To define a kilogram, one has to measure the electrical resistance and the voltage at the same time,” says Gould, “but measuring the voltage only works without a magnetic field, so the QAHE is ideal for this.”
    Thus far, the QAHE has been measured only at currents far too low for practical metrological use. The reason is an electric field that disrupts the QAHE at higher currents. The Würzburg physicists have now developed a solution to this problem. “We neutralize the electric field using two separate currents in a geometry we call a multi-terminal Corbino device,” explains Gould. “With this new trick, the resistance remains quantized to h/e² up to larger currents, making the resistance standard based on the QAHE more robust.”
    On the Way to Practical Application
    In their feasibility study, the researchers showed that the new measurement method works at the precision level offered by basic d.c. techniques. Their next goal is to test the feasibility of the method using more precise metrological tools. To this end, the Würzburg group is working closely with the Physikalisch-Technische Bundesanstalt (PTB, the German national metrology institute), which specializes in this kind of ultra-precise metrological measurement. Gould also notes: “This method is not limited to the QAHE. Given that the conventional quantum Hall effect experiences similar electric-field-driven limitations at sufficiently large currents, this method can also improve the existing state-of-the-art metrological standards for applications where even larger currents are useful.”
    The research was funded by the Free State of Bavaria, the German Research Foundation DFG, the Cluster of Excellence ct.qmat (Complexity and Topology in Quantum Matter) and the European Commission.

  • AI can write you a poem and edit your video. Now, it can help you be funnier

    University of Sydney researchers have used an AI-assisted application to help people write captions for cartoons published in The New Yorker Cartoon Caption Contest.
    Twenty participants with little to no experience writing cartoon captions wrote 400 captions in total. Of these, 200 were written with help from the AI tool, and the remainder were written without assistance.
    A second group of 67 people then rated how funny these captions were. Jokes written with the help of the tool were rated as significantly funnier than those written without it. Ratings for the AI-assisted captions were also almost 30 percent closer to those of the winning captions in The New Yorker Cartoon Caption Contest.
    Participants said the tool helped them get started, piece together humorous narratives, understand nuances and funny elements, and come up with new ideas.
    Almost half (95 of the 200) of the jokes written with the help of AI were also rated as funnier than the original captions published by The New Yorker.
    “The AI tool helps people be significantly funnier, but more importantly, it may be a cure for writer’s block,” said Dr Anusha Withana from the School of Computer Science and the Digital Sciences Initiative.
    AI helps non-native speakers be funny in a new language
    Dr Withana and his team conceived the tool to help non-native speakers understand humour in their new language. The results also showed non-native speakers found the tool more helpful, bringing them 43 percent closer to the winning caption.

    Born in Sri Lanka and having lived in Japan, Singapore, Germany and now Australia, Dr Withana said understanding local humour could often be a “minefield” for a new arrival.
    “In a new country I would often find myself ‘off-key’,” he said. “For example, I once made a sarcastic comment that didn’t go down well in Germany. Here in Australia, it would have gotten a laugh.”
    Hasindu Kariyawasam led the research project as an undergraduate research intern.
    “Humour is such an important way to relate to others,” he said. “It is also important for emotional wellbeing and creativity, and for managing stress, depression, and anxiety. As a non-native speaker myself, I found the system helped me write jokes more easily, and it made the experience fun.”
    How can AI help us understand humour?
    The original aspiration for the research was to use technology to help get creative juices flowing and get words down on the page. Alister Palmer, a master’s student and amateur cartoonist, conceived the idea as a way to engage more people in cartooning.

    The tool works through an algorithm which assesses incongruity. It analyses the words in a description of the cartoon and generates incongruous words as hints for the cartoonist.
    For example, in one cartoon where a person is depicted wearing a rabbit suit to the office, the tool suggested the words “rabbit” and “soup” (derived from the incongruity with the word “suit”). One of the pilot study participants came up with the caption “I meant the rabbit soup, not suit.” The winning caption at The New Yorker competition was “It’s not just Henderson. Corporate laid off the entire bunny division.”
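    As a rough illustration of the incongruity idea (and only that; the study's actual algorithm is more sophisticated), one can suggest hint words that sound like a word in the scene but come from an unrelated topic. The word list and topic tags below are hypothetical.

        import difflib

        scene = ["rabbit", "suit", "office", "desk"]
        scene_topics = {"animal", "clothing", "work"}  # rough topics of the scene
        vocabulary = {                                 # hypothetical topic-tagged word list
            "soup": "food",
            "suite": "hotel",
            "stew": "food",
            "boot": "clothing",
            "root": "garden",
        }

        for word in scene:
            for candidate, topic in vocabulary.items():
                sounds_alike = difflib.SequenceMatcher(None, word, candidate).ratio()
                if sounds_alike >= 0.5 and topic not in scene_topics:
                    # incongruous (different topic) yet phonetically close: a hint
                    print(f"hint: {candidate!r} (sounds like {word!r}, topic {topic!r})")
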
    Professor Judy Kay said this approach means we can explain how the AI works: “With AI playing a bigger role in our lives, our team wanted to create this tool so that people can feel in control.”
    Dr Withana said: “Ultimately, humans are still the ones creating the humour, but this research is a great example of how AI can augment and aid our social interactions.”

  • Clear guidelines needed for synthetic data to ensure transparency, accountability and fairness, study says

    Clear guidelines should be established for the generation and processing of synthetic data to ensure transparency, accountability and fairness, a new study says.
    Synthetic data — generated through machine learning algorithms from original real-world data — is gaining prominence because it may provide privacy-preserving alternatives to traditional data sources. It can be particularly useful in situations where the actual data is too sensitive to share, too scarce, or of too low quality.
    Synthetic data differs from real-world data as it is generated by algorithmic models known as synthetic data generators, such as Generative Adversarial Networks or Bayesian networks.
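    As a bare-bones illustration of what a synthetic data generator does (far simpler than a GAN or a Bayesian network, and offering no formal privacy guarantee by itself), one can fit a simple statistical model to real numeric records and sample artificial records from it. The data here are simulated stand-ins.

        import numpy as np

        rng = np.random.default_rng(42)
        # Simulated "real" records with two numeric columns: age, income.
        real = rng.multivariate_normal(
            mean=[35.0, 52000.0],
            cov=[[90.0, 12000.0], [12000.0, 4.0e7]],
            size=500,
        )

        # Fit a simple generative model (a multivariate Gaussian) to the real data...
        fitted_mean = real.mean(axis=0)
        fitted_cov = np.cov(real, rowvar=False)
        # ...and sample fully synthetic records that mimic its statistics.
        synthetic = rng.multivariate_normal(fitted_mean, fitted_cov, size=500)

        print("real means:     ", real.mean(axis=0).round(1))
        print("synthetic means:", synthetic.mean(axis=0).round(1))
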
    The study warns that existing data protection laws, which apply only to personal data, are not well equipped to regulate the processing of all types of synthetic data.
    Laws such as the GDPR only apply to the processing of personal data. The GDPR’s definition of personal data encompasses ‘any information relating to an identified or identifiable natural person’. However, not all synthetic datasets are fully artificial — some may contain personal information or present a risk of re-identification. Fully synthetic datasets are, in principle, exempt from GDPR rules, except when there is a possibility of re-identification.
    It remains unclear what level of re-identification risk would be sufficient to trigger the application of these rules to fully synthetic data. This creates legal uncertainty and practical difficulties for the processing of such datasets.
    The study, by Professor Ana Beduschi from the University of Exeter, is published in the journal Big Data and Society.
    It says there should be clear procedures for holding to account those responsible for the generation and processing of synthetic data, and guarantees that synthetic data is not generated and used in ways that have adverse effects on individuals and society, such as perpetuating existing biases or creating new ones.
    Professor Beduschi said: “Clear guidelines for all types of synthetic data should be established. They should prioritise transparency, accountability and fairness. Having such guidelines is especially important as generative AI and advanced language models such as DALL-E 3 and GPT-4 — which can both be trained on and generate synthetic data — may facilitate the dissemination of misleading information and have detrimental effects on society. Adhering to these principles could thus help mitigate potential harm and encourage responsible innovation.
    “Accordingly, synthetic data should be clearly labelled as such, and information about its generation should be provided to users.”

  • New computer vision tool wins prize for social impact

    A team of computer scientists at the University of Massachusetts Amherst working on two different problems — how to quickly detect damaged buildings in crisis zones and how to accurately estimate the size of bird flocks — recently announced an AI framework that can do both. The framework, called DISCount, blends the speed and massive data-crunching power of artificial intelligence with the reliability of human analysis to deliver trustworthy estimates that can quickly pinpoint and count specific features in very large collections of images. The research, published by the Association for the Advancement of Artificial Intelligence, has been recognized by that association with an award for the best paper on AI for social impact.
    “DISCount came together as two very different applications,” says Subhransu Maji, associate professor of information and computer sciences at UMass Amherst and one of the paper’s authors. “Through UMass Amherst’s Center for Data Science, we have been working with the Red Cross for years in helping them to build a computer vision tool that could accurately count buildings damaged during events like earthquakes or wars. At the same time, we were helping ornithologists at Colorado State University and the University of Oklahoma interested in using weather radar data to get accurate estimates of the size of bird flocks.”
    Maji and his co-authors, lead author Gustavo Pérez, who completed this research as part of his doctoral training at UMass Amherst, and Dan Sheldon, associate professor of information and computer sciences at UMass Amherst, thought they could solve the damaged-buildings-and-bird-flock problems with computer vision, a type of AI that can scan enormous archives of images in search of something particular — a bird, a rubble pile — and count it.
    But the team was running into the same roadblock on each project: “The standard computer vision models were not accurate enough,” says Pérez. “We wanted to build automated tools that could be used by non-AI experts, but which could provide a higher degree of reliability.”
    The answer, says Sheldon, was to fundamentally rethink the typical approaches to solving counting problems.
    “Typically, you either have humans do time-intensive and accurate hand-counts of a very small data set, or you have computer vision run less-accurate automated counts of enormous data sets,” Sheldon says. “We thought: why not do both?”
    DISCount is a framework that can work with any existing AI computer vision model. It works by using the AI to analyze very large data sets — say, all the images taken of a particular region in a decade — to determine which smaller set of data a human researcher should look at. This smaller set could, for example, be all the images from a few critical days that the computer vision model has determined best show the extent of building damage in that region. The human researcher then hand-counts the damaged buildings in the much smaller set of images, and the algorithm uses those counts to extrapolate the number of buildings affected across the entire region. Finally, DISCount estimates how accurate the human-derived estimate is.
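    A minimal Python sketch of this estimate-and-extrapolate idea follows: the detector counts every image, humans recount a small random sample, and the sampled detector errors both correct the AI total and yield a confidence interval. It is illustrative only; the published DISCount estimator and its sampling scheme differ in detail.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 10_000
        true_counts = rng.poisson(3.0, N)                # unknown ground-truth counts per image
        detector = true_counts + rng.integers(-1, 2, N)  # imperfect AI counts

        sample = rng.choice(N, size=200, replace=False)  # images sent for human review
        errors = true_counts[sample] - detector[sample]  # human count minus AI count

        estimate = detector.sum() + N * errors.mean()    # error-corrected total
        se = N * errors.std(ddof=1) / np.sqrt(len(sample))
        print(f"true total: {true_counts.sum()}")
        print(f"estimate:   {estimate:.0f} +/- {1.96 * se:.0f} (95% CI)")
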
    “DISCount works significantly better than random sampling for the tasks we considered,” says Pérez. “And part of the beauty of our framework is that it is compatible with any computer-vision model, which lets the researcher select the best AI approach for their needs. Because it also gives a confidence interval, it gives researchers the ability to make informed judgments about how good their estimates are.”
    “In retrospect, we had a relatively simple idea,” says Sheldon. “But that small mental shift — that we didn’t have to choose between human and artificial intelligence — has let us build a tool that is faster, more comprehensive, and more reliable than either approach alone.”

  • Artificial intelligence can help people feel heard

    A new study published in the Proceedings of the National Academy of Sciences (PNAS) found that AI-generated messages made recipients feel more “heard” than messages generated by untrained humans, and that AI was better at detecting emotions than these individuals. However, recipients reported feeling less heard when they learned that a message came from AI.
    As AI becomes more ubiquitous in daily life, understanding its potential and limitations in meeting human psychological needs becomes more pertinent. With dwindling empathetic connections in a fast-paced world, many are finding their human needs for feeling heard and validated increasingly unmet.
    The research conducted by Yidan Yin, Nan Jia, and Cheryl J. Wakslak from the USC Marshall School of Business addresses a pivotal question: Can AI, which lacks human consciousness and emotional experience, succeed in making people feel heard and understood?
    “In the context of an increasing loneliness epidemic, a large part of our motivation was to see whether AI can actually help people feel heard,” said the paper’s first author, Yidan Yin, a postdoctoral researcher at the Lloyd Greif Center for Entrepreneurial Studies at USC Marshall.
    The team’s findings highlight not only the potential of AI to augment human capacity for understanding and communication, but also raise important conceptual questions about the meaning of being heard and practical questions about how best to leverage AI’s strengths to support greater human flourishing.
    In an experiment and subsequent follow-up study, “we identified that while AI demonstrates enhanced potential compared to non-trained human responders to provide emotional support, the devaluation of AI responses poses a key challenge for effectively deploying AI’s capabilities,” said Nan Jia, associate professor of strategic management.
    The USC Marshall research team investigated people’s feelings of being heard and other related perceptions and emotions after receiving a response from either AI or a human. The study varied both the actual source of the message and its ostensible source: participants received messages that were actually generated by either an AI or a human responder, and were told in each case that the message was either AI- or human-generated.

    “What we found was that both the actual source of the message and the presumed source of the message played a role,” said Cheryl Wakslak, associate professor of management and organization at USC Marshall. “People felt more heard when they received an AI message than a human one, but when they believed a message came from AI, this made them feel less heard.”
    AI bias
    Yin noted that their research “basically finds a bias against AI. It’s useful, but they don’t like it.”
    Perceptions about AI are bound to change, added Wakslak, “Of course these effects may change over time, but one of the interesting things we found was that the two effects we observed were fairly similar in magnitude. Whereas there is a positive effect of getting an AI message, there is a similar degree of response bias when a message is identified as coming from AI, leading the two effects to essentially cancel each other out.”
    Individuals further reported an “uncanny valley” response — a sense of unease when made aware that the empathetic response originated from AI, highlighting the complex emotional landscape of AI-human interactions.
    The research survey also asked participants about their general openness to AI, which moderated some of the effects, explained Wakslak.

    “People who feel more positively toward AI don’t exhibit the response penalty as much and that’s intriguing because over time, will people gain more positive attitudes toward AI?” she posed. “That remains to be seen … but it will be interesting to see how this plays out as people’s familiarity and experience with AI grows.”
    AI offers better emotional support
    The study highlighted important nuances. Responses generated by AI were associated with increased hope and lessened distress, indicating a positive emotional effect on recipients. AI also demonstrated a more disciplined approach than humans in offering emotional support and refrained from making overwhelming practical suggestions.
    Yin explained that, “Ironically, AI was better at using emotional support strategies that have been shown in prior research to be empathetic and validating. Humans may potentially learn from AI because a lot of times when our significant others are complaining about something, we want to provide that validation, but we don’t know how to effectively do so.”
    Instead of AI replacing humans, the research points to different advantages of AI and human responses. The advanced technology could become a valuable tool, empowering humans to use AI to help them better understand one another and learn how to respond in ways that provide emotional support and demonstrate understanding and validation.
    Overall, the paper’s findings have important implications for the integration of AI into more social contexts. Leveraging AI’s capabilities might provide an inexpensive, scalable solution for social support, especially for those who might otherwise lack access to individuals who can provide it. However, as the research team notes, the findings suggest that careful consideration must be given to how AI is presented and perceived in order to maximize its benefits and reduce negative responses.