More stories

  • Here’s how many shark bites there were in 2023

    Despite the sensationalized portrayal of sharks in movies like Jaws, the ocean’s apex predators have far more to fear from people than vice versa.

    Even though millions of people around the world swim in the ocean each year, just 91 people were bitten by sharks in 2023 and only 10 of those bites were fatal, according to a new report from the Florida Museum of Natural History in Gainesville. Out of all bites, 69 were unprovoked while 22 were provoked, defined as a human-initiated interaction such as trying to touch or feed a shark. These numbers — reported by beach safety officers, hospital staff and other emergency responders — are consistent with the five-year global average.

  • Technique could improve the sensitivity of quantum sensing devices

    In quantum sensing, atomic-scale quantum systems are used to measure electromagnetic fields, as well as properties like rotation, acceleration, and distance, far more precisely than classical sensors can. The technology could enable devices that image the brain with unprecedented detail, for example, or air traffic control systems with precise positioning accuracy.
    As real-world quantum sensing devices begin to emerge, one promising direction is the use of microscopic defects inside diamonds to create “qubits” for quantum sensing. Qubits are the building blocks of quantum devices.
    Researchers at MIT and elsewhere have developed a technique that enables them to identify and control a greater number of these microscopic defects. This could help them build a larger system of qubits that can perform quantum sensing with greater sensitivity.
    Their method builds off a central defect inside a diamond, known as a nitrogen-vacancy (NV) center, which scientists can detect and excite using laser light and then control with microwave pulses. This new approach uses a specific protocol of microwave pulses to identify and extend that control to additional defects that can’t be seen with a laser, which are called dark spins.
    The researchers seek to control larger numbers of dark spins by locating them through a network of connected spins. Starting from this central NV spin, the researchers build this chain by coupling the NV spin to a nearby dark spin, and then use this dark spin as a probe to find and control a more distant spin which can’t be sensed by the NV directly. The process can be repeated on these more distant spins to control longer chains.
    “One lesson I learned from this work is that searching in the dark may be quite discouraging when you don’t see results, but we were able to take this risk. It is possible, with some courage, to search in places that people haven’t looked before and find potentially more advantageous qubits,” says Alex Ungar, a PhD student in electrical engineering and computer science and a member of the Quantum Engineering Group at MIT, who is lead author of a paper on this technique, which is published today in PRX Quantum.
    His co-authors include his advisor and corresponding author, Paola Cappellaro, the Ford Professor of Engineering in the Department of Nuclear Science and Engineering and professor of physics; as well as Alexandre Cooper, a senior research scientist at the University of Waterloo’s Institute for Quantum Computing; and Won Kyu Calvin Sun, a former researcher in Cappellaro’s group who is now a postdoc at the University of Illinois at Urbana-Champaign.

    Diamond defects
    To create NV centers, scientists implant nitrogen into a sample of diamond.
    But introducing nitrogen into the diamond creates other types of atomic defects in the surrounding environment. Some of these defects, including the NV center, can host what are known as electronic spins, which originate from the valence electrons around the site of the defect. Valence electrons are those in the outermost shell of an atom. A defect’s interaction with an external magnetic field can be used to form a qubit.
    Researchers can harness these electronic spins from neighboring defects to create more qubits around a single NV center. This larger collection of qubits is known as a quantum register. Having a larger quantum register boosts the performance of a quantum sensor.
    Some of these electronic spin defects are connected to the NV center through magnetic interaction. In past work, researchers used this interaction to identify and control nearby spins. However, this approach is limited because the NV center’s quantum state stays stable only for a short amount of time, a property known as coherence. The NV can therefore only be used to control the few spins that can be reached within this coherence limit.
    In this new paper, the researchers use an electronic spin defect that is near the NV center as a probe to find and control an additional spin, creating a chain of three qubits.

    They use a technique known as spin echo double resonance (SEDOR), which involves a series of microwave pulses that decouple an NV center from all electronic spins that are interacting with it. Then, they selectively apply another microwave pulse to pair the NV center with one nearby spin.
    Unlike the NV, these neighboring dark spins can’t be excited, or polarized, with laser light. This polarization is a required step to control them with microwaves.
    Once the researchers find and characterize a first-layer spin, they can transfer the NV’s polarization to this first-layer spin through the magnetic interaction by applying microwaves to both spins simultaneously. Then once the first-layer spin is polarized, they repeat the SEDOR process on the first-layer spin, using it as a probe to identify a second-layer spin that is interacting with it.
    Controlling a chain of dark spins
    This repeated SEDOR process allows the researchers to detect and characterize a new, distinct defect located outside the coherence limit of the NV center. To control this more distant spin, they carefully apply a specific series of microwave pulses that enable them to transfer the polarization from the NV center along the chain to this second-layer spin.
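    As a rough illustration of the hand-off described above (a toy numerical sketch, not the authors’ protocol or code, and with coupling strengths invented for the example), the following simulates polarization hopping from the NV to a first-layer spin and then to a second-layer spin, with each SEDOR-style recoupling idealized as switching on a single flip-flop coupling for the transfer time pi/(2J):

```python
# Toy model of sequential polarization transfer along a three-spin chain:
# NV (spin 0) -> first-layer dark spin (spin 1) -> second-layer dark spin (spin 2).
# Each hand-off is idealized as turning on one flip-flop coupling at a time,
# standing in for the SEDOR-based recoupling used in the experiment.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, n=3):
    """Embed a single-qubit operator at `site` in an n-spin chain."""
    mats = [I2] * n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def flip_flop(i, j, J):
    """Exchange (flip-flop) coupling of strength J between spins i and j."""
    return 0.5 * J * (op(sx, i) @ op(sx, j) + op(sy, i) @ op(sy, j))

# Start with the NV polarized ("up") and the two dark spins unpolarized ("down").
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
psi = np.kron(up, np.kron(down, down))

# Illustrative (not measured) couplings in rad/s; each stage runs for pi/(2J),
# which fully swaps the polarization between the two coupled spins.
for (i, j), J in [((0, 1), 2 * np.pi * 0.10e6),   # NV <-> first-layer spin
                  ((1, 2), 2 * np.pi * 0.05e6)]:  # first- <-> second-layer spin
    U = expm(-1j * flip_flop(i, j, J) * (np.pi / (2 * J)))
    psi = U @ psi

polarization = [np.real(psi.conj() @ op(sz, k) @ psi) for k in range(3)]
print(polarization)  # ~[-1, -1, +1]: the NV's initial polarization now sits on spin 2
```

    In the actual experiment, each hand-off also requires first locating the next spin’s resonance through SEDOR scans before the polarization can be transferred.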
    “This is setting the stage for building larger quantum registers to higher-layer spins or longer spin chains, and also showing that we can find these new defects that weren’t discovered before by scaling up this technique,” Ungar says.
    To control a spin, the microwave pulses must be very close to the resonance frequency of that spin. Tiny drifts in the experimental setup, due to temperature or vibrations, can throw off the microwave pulses.
    The researchers were able to optimize their protocol for sending precise microwave pulses, which enabled them to effectively identify and control second-layer spins, Ungar says.
    “We are searching for something in the unknown, but at the same time, the environment might not be stable, so you don’t know if what you are finding is just noise. Once you start seeing promising things, you can put all your best effort in that one direction. But before you arrive there, it is a leap of faith,” Cappellaro says.
    While they were able to effectively demonstrate a three-spin chain, the researchers estimate they could scale their method to a fifth layer using their current protocol, which could provide access to hundreds of potential qubits. With further optimization, they may be able to scale up to more than 10 layers.
    In the future, they plan to continue enhancing their technique to efficiently characterize and probe other electronic spins in the environment and explore different types of defects that could be used to form qubits.
    This research is supported, in part, by the U.S. National Science Foundation and the Canada First Research Excellence Fund.

  • Combining materials may support unique superconductivity for quantum computing

    A new fusion of materials, each with special electrical properties, has all the components required for a unique type of superconductivity that could provide the basis for more robust quantum computing. The new combination of materials, created by a team led by researchers at Penn State, could also provide a platform to explore physical behaviors similar to those of mysterious, theoretical particles known as chiral Majoranas, which could be another promising component for quantum computing.
    The new study appeared online today (Feb. 8) in the journal Science. The work describes how the researchers combined the two magnetic materials, a step they called critical to realizing the emergent interfacial superconductivity they are currently working toward.
    Superconductors — materials with no electrical resistance — are widely used in digital circuits, the powerful magnets in magnetic resonance imaging (MRI) and particle accelerators, and other technology where maximizing the flow of electricity is crucial. When superconductors are combined with materials called magnetic topological insulators — thin films only a few atoms thick that have been made magnetic and restrict the movement of electrons to their edges — the novel electrical properties of each component work together to produce “chiral topological superconductors.” The topology, or specialized geometries and symmetries of matter, generates unique electrical phenomena in the superconductor, which could facilitate the construction of topological quantum computers.
    Quantum computers have the potential to perform complex calculations in a fraction of the time it takes traditional computers because, unlike traditional computers which store data as a one or a zero, the quantum bits of quantum computers store data simultaneously in a range of possible states. Topological quantum computers further improve upon quantum computing by taking advantage of how electrical properties are organized to make the computers robust to decoherence, or the loss of information that happens when a quantum system is not perfectly isolated.
    “Creating chiral topological superconductors is an important step toward topological quantum computation that could be scaled up for broad use,” said Cui-Zu Chang, Henry W. Knerr Early Career Professor and associate professor of physics at Penn State and co-corresponding author of the paper. “Chiral topological superconductivity requires three ingredients: superconductivity, ferromagnetism and a property called topological order. In this study, we produced a system with all three of these properties.”
    The researchers used a technique called molecular beam epitaxy to stack together a topological insulator that has been made magnetic and an iron chalcogenide (FeTe), a transition-metal compound that is promising for harnessing superconductivity. The topological insulator is a ferromagnet — a type of magnet whose electrons spin the same way — while FeTe is an antiferromagnet, whose electrons spin in alternating directions. The researchers used a variety of imaging techniques and other methods to characterize the structure and electrical properties of the resulting combined material and confirmed the presence of all three critical components of chiral topological superconductivity at the interface between the materials.
    Prior work in the field has focused on combining superconductors and nonmagnetic topological insulators. According to the researchers, adding in the ferromagnet has been particularly challenging.

    “Normally, superconductivity and ferromagnetism compete with each other, so it is rare to find robust superconductivity in a ferromagnetic material system,” said Chao-Xing Liu, professor of physics at Penn State and co-corresponding author of the paper. “But the superconductivity in this system is actually very robust against the ferromagnetism. You would need a very strong magnetic field to remove the superconductivity.”
    The research team is still exploring why superconductivity and ferromagnetism coexist in this system.
    “It’s actually quite interesting because we have two magnetic materials that are non-superconducting, but we put them together and the interface between these two compounds produces very robust superconductivity,” Chang said. “Iron chalcogenide is antiferromagnetic, and we anticipate its antiferromagnetic property is weakened around the interface to give rise to the emergent superconductivity, but we need more experiments and theoretical work to verify if this is true and to clarify the superconducting mechanism.”
    The researchers said they believe this system will be useful in the search for material systems that exhibit similar behaviors as Majorana particles — theoretical subatomic particles first hypothesized in 1937. Majorana particles act as their own antiparticle, a unique property that could potentially allow them to be used as quantum bits in quantum computers.
    “Providing experimental evidence for the existence of chiral Majorana will be a critical step in the creation of a topological quantum computer,” Chang said. “Our field has had a rocky past in trying to find these elusive particles, but we think this is a promising platform for exploring Majorana physics.”
    In addition to Chang and Liu, the research team at Penn State at the time of the research included postdoctoral researcher Hemian Yi; graduate students Yi-Fan Zhao, Ruobing Mei, Zi-Jie Yan, Ling-Jie Zhou, Ruoxi Zhang, Zihao Wang, Stephen Paolini and Run Xiao; assistant research professors in the Materials Research Institute Ke Wang and Anthony Richardella; Evan Pugh University Professor Emeritus of Physics Moses Chan; and Verne M. Willaman Professor of Physics and Professor of Materials Science and Engineering Nitin Samarth. The research team also includes Ying-Ting Chan and Weida Wu at Rutgers University; Jiaqi Cai and Xiaodong Xu at the University of Washington; Xianxin Wu at the Chinese Academy of Sciences; John Singleton and Laurel Winter at the National High Magnetic Field Laboratory; Purnima Balakrishnan and Alexander Grutter at the National Institute of Standards and Technology; and Thomas Prokscha, Zaher Salman, and Andreas Suter at the Paul Scherrer Institute of Switzerland.
    This research is supported by the U.S. Department of Energy. Additional support was provided by the U.S. National Science Foundation (NSF), the NSF-funded Materials Research Science and Engineering Center for Nanoscale Science at Penn State, the Army Research Office, the Air Force Office of Scientific Research, the state of Florida and the Gordon and Betty Moore Foundation’s EPiQS Initiative.

  • AI model as diabetes early warning system when driving

    Based solely on driving behavior and head/gaze motion, the newly developed tool recognizes low blood sugar levels.
    Low blood sugar levels (hypoglycemia) are one of the most dangerous complications of diabetes and pose high risk during cognitively demanding tasks requiring complex motor skills, such as driving a car. The utility of current tools to detect hypoglycemia is limited by diagnostic delay, invasiveness, low availability, and high costs. A recent study published in the journal NEJM AI provides a novel way to detect hypoglycemia during driving. The research was the work of LMU scientists in collaboration with colleagues from the University Hospital of Bern (Inselspital), ETH Zurich, and the University of St. Gallen.
    In their study, the researchers collected data from 30 diabetics as they drove a real car. For each patient, data was recorded once during a state with normal blood sugar levels and once during a hypoglycemic state. To this end, each patient was deliberately put into a hypoglycemic state by medical professionals present in the car. The collected data comprised driving signals such as car speed and head/gaze motion data — for example, the speed of eye movements.
    Subsequently, the scientists developed a novel machine learning (ML) model capable of automatically and reliably detecting hypoglycemic episodes using only routinely collected driving data and head/gaze motion data. “This technology could serve as an early warning system in cars and enable drivers to take necessary precautions before hypoglycemic symptoms impair their ability to drive safely,” says Simon Schallmoser, doctoral candidate at the Institute of AI in Management at LMU and one of the contributing researchers.
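    As a schematic illustration of such a detector (a sketch only: the feature names and the synthetic data below are assumptions invented for the example, not the study’s signals or model), a simple classifier can be trained to separate normal from hypoglycemic drives using summary driving and gaze features:

```python
# Minimal sketch of a hypoglycemia detector built from per-drive summary features.
# The feature set and data are synthetic placeholders, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical features: speed variability, steering reversal rate,
# lane-position variability, mean saccade speed, mean fixation duration.
X_normo = rng.normal(loc=[1.0, 0.8, 0.3, 1.2, 0.25], scale=0.20, size=(n, 5))
X_hypo  = rng.normal(loc=[1.4, 1.1, 0.5, 0.9, 0.35], scale=0.25, size=(n, 5))
X = np.vstack([X_normo, X_hypo])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = hypoglycemic episode

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC on held-out drives:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```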
    The newly developed ML model also performed well when only head/gaze motion data was used, which is crucial for future self-driving cars. Professor Stefan Feuerriegel, head of the Institute of AI in Management and project partner, explains: “This study not only showcases the potential for AI to improve individual health outcomes but also its role in improving safety on public roads.”

  • A new ‘metal swap’ method for creating lateral heterostructures of 2D materials

    Heterostructures of two-dimensional materials have unique properties. Among them, lateral heterostructures, which can be used to make electronic devices, are challenging to synthesize. To address this, researchers used a new transmetallation technique to fabricate heterostructures with in-plane heterojunctions using a Zn3BHT coordination nanosheet. This simple and powerful method enables the fabrication of ultrathin electronic devices for ultralarge-scale integrated circuits, marking a significant step forward for 2D materials research.
    Electronically conducting two-dimensional (2D) materials are currently hot topics of research in both physics and chemistry owing to their unique properties that have the potential to open up new avenues in science and technology. Moreover, the combination of different 2D materials, called heterostructures, expands the diversity of their electrical, photochemical, and magnetic properties. This can lead to innovative electronic devices not achievable with a single material alone.
    Heterostructures can be fabricated in two ways: vertically, with materials stacked on top of each other, or laterally, where materials are stacked side-by-side on the same plane. Lateral arrangements offer a special advantage, confining charge carriers to a single plane and paving the way for exceptional “in-plane” electronic devices. However, the construction of lateral junctions is challenging.
    In this regard, conducting 2D materials made using organic materials, called “coordination nanosheets,” are promising. They can be created by combining metals and ligands, and their properties can span the same range seen in conventional 2D materials, from metallic, as in graphene, and semiconducting, as in transition metal dichalcogenides, to insulating, as in boron nitride. These nanosheets enable a unique method called transmetallation. This allows the synthesis of lateral heterostructures with “heterojunctions,” which cannot be achieved through direct reaction. Heterojunctions are interfaces between two materials with distinct electronic properties and can therefore serve as electronic devices. Furthermore, by utilizing heterojunctions of coordination nanosheets, new electronic properties that have been difficult to achieve with conventional 2D materials can be created. Despite these advantages, research on transmetallation as a method to fabricate heterostructures is still limited.
    To address this knowledge gap, a team of researchers from Japan, led by Professor Hiroshi Nishihara from the Research Institute for Science and Technology at Tokyo University of Science (TUS), Japan, used sequential transmetallation to synthesize lateral heterojunctions of Zn3BHT coordination nanosheets. The team included Dr. Choon Meng Tan, Assistant Professor Naoya Fukui, Assistant Professor Kenji Takada, and Assistant Professor Hiroaki Maeda, also from TUS. The study, a joint research effort by TUS, the University of Cambridge, the National Institute for Materials Science (NIMS), Kyoto Institute of Technology, and the Japan Synchrotron Radiation Research Institute (JASRI), was published in the journal Angewandte Chemie International Edition on January 05, 2024.
    The team first fabricated and characterized the Zn3BHT coordination nanosheet. Next, they investigated the transmetallation of Zn3BHT with copper and iron. Prof. Nishihara explains: “Via sequential and spatially limited immersion of the nanosheet into aqueous copper and iron ion solutions under mild conditions, we easily fabricated heterostructures with in-plane heterojunctions of transmetallated iron and copper nanosheets.”
    The entire method, from the fabrication of the coordination nanosheets to the fabrication of the in-plane heterojunctions, is a solution process carried out at room temperature and atmospheric pressure. This is completely different from the high-temperature, vacuum, gas-phase processing used in lithography for silicon semiconductors. It is a simple and inexpensive process that does not require large equipment. The challenge is how to create highly crystalline thin films that are free of impurities. If clean rooms and highly purified reagents are available, commercially viable manufacturing techniques will soon be achievable.
    The resulting seamless heterojunction obtained by the researchers demonstrated rectifying behavior common in electronic circuits. Testing the characteristics of the diode revealed the versatility of the Zn3BHT coordination nanosheet. These characteristics can be changed easily without any special equipment. Moreover, this material also enables the fabrication of an integrated circuit from only a single coordination sheet, without any patchworking from different materials. Prof. Nishihara highlights the importance of this technique: “Ultrathin (nanometer-thick) rectifying elements obtained from our method will be quite useful for the fabrication of ultralarge-scale integrated circuits. Simultaneously, the unique physical properties of monoatomic layer films with in-plane heterojunctions can lead to the development of new elements.”
    Furthermore, by using this transmetallation reaction, it is possible to create junctions with various electronic properties, such as p-n, MIM (metal-insulator-metal) and MIS (metal-insulator-semiconductor) junctions. The ability to bond single-layer topological insulators will also enable new electronic devices such as electron splitters and multilevel devices that have only been theoretically predicted.
    Overall, this study presents a simple yet powerful technique for crafting lateral heterostructures, marking a significant step in 2D materials research.

  • Scientists code ChatGPT to design new medicine

    Generative artificial intelligence platforms, from ChatGPT to Midjourney, grabbed headlines in 2023. But GenAI can do more than create collaged images and help write emails — it can also design new drugs to treat disease.
    Today, scientists use advanced technology to design new synthetic drug compounds with the right properties and characteristics, also known as “de novo drug design.” However, current methods can be labor-, time-, and cost-intensive.
    Inspired by ChatGPT’s popularity and wondering if this approach could speed up the drug design process, scientists in the Schmid College of Science and Technology at Chapman University in Orange, California, decided to create their own genAI model, detailed in the new paper, “De Novo Drug Design using Transformer-based Machine Translation and Reinforcement Learning of Adaptive Monte-Carlo Tree Search,” to be published in the journal Pharmaceuticals. Dony Ang, Cyril Rakovski, and Hagop Atamian coded a model to learn from a massive dataset of known chemicals, how they bind to target proteins, and the rules and syntax of chemical structure and properties writ large.
    The end result can generate countless unique molecular structures that follow essential chemical and biological constraints and effectively bind to their targets — promising to vastly accelerate the process of identifying viable drug candidates for a wide range of diseases, at a fraction of the cost.
    To create the breakthrough model, researchers integrated two cutting-edge AI techniques for the first time in the fields of bioinformatics and cheminformatics: the well-known “Encoder-Decoder Transformer architecture” and “Reinforcement Learning via Monte Carlo Tree Search” (RL-MCTS). The platform, fittingly named “drugAI,” allows users to input a target protein sequence (for instance, a protein typically involved in cancer progression). DrugAI, trained on data from the comprehensive public database BindingDB, can generate unique molecular structures from scratch, and then iteratively refine candidates, ensuring finalists exhibit strong binding affinities to respective drug targets — crucial for the efficacy of potential drugs. The model identifies 50-100 new molecules likely to inhibit these particular proteins.
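    The “machine translation” framing can be sketched as follows (an illustration only, not drugAI’s code: the vocabulary sizes, tokenization, and model dimensions are placeholder assumptions, and the RL-MCTS refinement stage is omitted). An encoder-decoder Transformer reads the target protein’s amino-acid tokens and predicts SMILES tokens for a candidate molecule:

```python
# Minimal, untrained encoder-decoder Transformer that "translates" a protein
# sequence into SMILES tokens. Vocabulary sizes and dimensions are illustrative.
import torch
import torch.nn as nn

AA_VOCAB = 22        # 20 amino acids plus padding and end-of-sequence (toy choice)
SMILES_VOCAB = 64    # toy SMILES token vocabulary
D_MODEL = 128

class ProteinToSmiles(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(AA_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(SMILES_VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            dim_feedforward=256, batch_first=True)
        self.out = nn.Linear(D_MODEL, SMILES_VOCAB)

    def forward(self, protein_tokens, smiles_tokens):
        # Causal mask so each SMILES position only attends to earlier tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(smiles_tokens.size(1))
        h = self.transformer(self.src_emb(protein_tokens),
                             self.tgt_emb(smiles_tokens),
                             tgt_mask=tgt_mask)
        return self.out(h)  # logits over the next SMILES token at each position

model = ProteinToSmiles()
protein = torch.randint(0, AA_VOCAB, (1, 120))        # dummy 120-residue target
smiles_prefix = torch.randint(0, SMILES_VOCAB, (1, 10))
print(model(protein, smiles_prefix).shape)            # torch.Size([1, 10, 64])
```

    In the published system, candidates generated this way are then refined iteratively by the Monte Carlo tree search so that the finalists exhibit strong binding to their targets.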
    “This approach allows us to generate a potential drug that has never been conceived of,” Dr. Atamian said. “It’s been tested and validated. Now, we’re seeing magnificent results.”
    Researchers assessed the molecules drugAI generated along several criteria and found drugAI’s results were of similar quality to those from two other common methods, and in some cases better. DrugAI’s candidate drugs had a validity rate of 100%, and none of the generated molecules were present in the training set. The candidates were also measured for drug-likeness, or the similarity of a compound’s properties to those of oral drugs, and scored at least 42% and 75% higher than those from the other models. Plus, all drugAI-generated molecules exhibited strong binding affinities to their respective targets, comparable to those identified via traditional virtual screening approaches.
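    The kinds of checks described above can be sketched with the open-source RDKit toolkit (a hedged illustration: the paper’s exact metrics and thresholds may differ, and binding affinity, which requires docking or related tools, is not computed here):

```python
# Score a few example SMILES strings for chemical validity, novelty against a
# stand-in training set, and drug-likeness (QED). Example molecules are arbitrary.
from rdkit import Chem
from rdkit.Chem import QED

training_set = {"CCO", "c1ccccc1"}                            # stand-in training SMILES
generated = ["CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "C1CC1N("]  # last one is malformed

canonical_training = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in training_set}
for smi in generated:
    mol = Chem.MolFromSmiles(smi)
    if mol is None:
        print(f"{smi!r}: invalid SMILES")
        continue
    novel = Chem.MolToSmiles(mol) not in canonical_training
    print(f"{smi!r}: valid, novel={novel}, QED={QED.qed(mol):.2f}")
```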
    Ang, Rakovski and Atamian also wanted to see how drugAI’s results for a specific disease compared to existing known drugs for that disease. In a different experiment, screening methods generated a list of natural products that inhibited COVID-19 proteins; drugAI generated a list of novel drugs targeting the same protein to compare their characteristics. They compared drug-likeness and binding affinity between the natural molecules and drugAI’s, and found similar measurements in both — but drugAI was able to identify these in a much quicker and less expensive way.
    Plus, the scientists designed the algorithm to have a flexible structure that allows future researchers to add new functions. “That means you’re going to end up with more refined drug candidates with an even higher probability of ending up as a real drug,” said Dr. Atamian. “We’re excited for the possibilities moving forward.”

  • How teachers make ethical judgments when using AI in the classroom

    A teacher’s gender and comfort with technology factor into whether artificial intelligence is adopted in the classroom, as shown in a new report from the USC Center for Generative AI and Society.
    The study, “AI in K-12 Classrooms: Ethical Considerations and Lessons Learned,” explores how teachers make ethical judgments about using AI in their classrooms. The paper — authored by Stephen Aguilar, associate director of the center and assistant professor of education at the USC Rossier School of Education — details differences in ethical evaluations of generative AI, as well as rule-based and outcome-based views regarding AI.
    “The way we teach critical thinking will change with AI,” said Aguilar. “Students will need to judge when, how and for what purpose they will use generative AI. Their ethical perspectives will drive those decisions.”
    The study is part of a larger report from the USC Center for Generative AI and Society titled “Critical Thinking and Ethics in the Age of Generative AI in Education.” In addition to the study, the report introduces the center’s inaugural AI Fellows Program to support critical thinking and writing in the age of AI for undergraduate students, and looks ahead to building the next generation of generative AI tools. The center advances the USC Frontiers of Computing initiative, a $1 billion-plus investment to promote and expand advanced computing research and education across the university in a strategic, thoughtful way.
    Ethical ramifications a key factor in adoption of AI in the classroom
    As AI technologies become more prevalent in the classroom, it is essential for educators to consider the ethical implications and foster critical thinking skills among students. Taking a thoughtful approach, educators will need to guide students in evaluating AI-generated content and encourage them to question the ethical considerations surrounding the use of AI.
    The study’s goal was to understand the teachers’ perspectives on ethics around AI. Teachers were asked to rate how much they agreed with different ethical ideas and to rate their willingness to use generative AI, like ChatGPT, in their classrooms.

    The study included 248 K-12 educators from public, charter and private schools, who had an average of 11 years of teaching experience. Of those who participated, 43% taught elementary school, 16% taught middle school and 40% taught high school. Over half of participants identified as women; educators from 41 states in the United States participated.
    The published results suggest gender-based nuances. “What we found was that women teachers in our study were more likely to rate their deontological approaches higher,” said Aguilar. “Male teachers cared more about the consequences of AI.” Female teachers supported rule-based (deontological) perspectives when compared to male teachers.
    The sample also suggests that self-efficacy (confidence in using technology) and anxiety (worry about using technology) were important in both rule-based and outcome-based views regarding AI use. “Teachers who had more self-advocacy with using [AI] felt more confident using technologies or had less anxiety,” said Aguilar. “Both of those were important in terms of the sorts of judgments that they’re making.”
    Aguilar found the philosophical thought experiment the “trolley problem” applicable in his research. The trolley problem is a moral dilemma that questions whether it is morally acceptable to sacrifice one to save a greater number. In education, is a teacher a rule-follower (“deontological” perspective) or an outcome-seeker (“consequentialist” perspective)? Educators would have to decide when, where and how students can use generative AI in the classroom.
    In the study, Aguilar concluded that teachers are “active participants, grappling with the moral challenges posed by AI.” Educators are also asking deeper questions about AI system values and student fairness. While teachers have different points of view on AI, there is a consensus for the need to adopt an ethical framework for AI in education.
    Generative AI holds ‘great promise’ as educational tool
    The report is the first from the USC Center for Generative AI and Society. Announced in March 2023, the center was created to explore the transformative impact of AI on culture, education, media and society. The center is led by co-directors William Swartout, chief science officer for the Institute for Creative Technologies at the USC Viterbi School of Engineering, who leads the education effort; and Holly Willis, a professor and chair of the media arts + practice divisions at the USC School of Cinematic Arts, who is researching the intersection with media and culture.

    “Rather than banning generative AI from the classroom, we need to rethink the educational process and consider how generative AI might be used to improve education, much like we did years ago for mathematics education when cheap calculators became available,” said Swartout, whose analysis “Generative AI and Education: Deny and Detect or Embrace and Enhance?” appears in the overall report. “For example, by asking students to look at texts produced by generative AI and consider whether the facts are right and the arguments make sense, we could help improve their critical thinking skills.”
    Swartout said generative AI could be used to help a student brainstorm a topic before they begin writing. Posing questions like “Are there alternative points of view on this topic?” or “What would be a counterargument to what I’m proposing?” to generative AI can also be used to critique an essay, pointing out ways it could be improved, he added. Fears about using these tools to cheat could be alleviated with a process-based approach to evaluate a student’s work.
    “To reduce the risk of cheating, we need to record and evaluate the process that a student goes through in creating an essay, rather than just grading the artifact at the end,” he said.
    “Incorporating generative AI into the classroom — if done right — holds great promise as an educational tool.”
    The report also includes research from Gale Sinatra and Changzhao Wang of USC Rossier, undergraduate Eric Bui of the USC Dornsife College of Letters, Arts and Sciences and Benjamin Nye of the USC Institute for Creative Technologies, who also serves as an associate director for the Center for Generative AI and Society.
    “We must ensure that such technologies are employed to augment human capabilities, not to replace them, to preserve the inherently relational and emotional aspects of teaching and learning,” said USC Rossier Dean Pedro Noguera. “The USC Center for Generative AI and Society’s new report is an invitation to educators, policymakers, technologists and learners to examine how generative AI can contribute to the future of education.”

  • A machine learning framework that encodes images like a retina

    A major challenge to developing better neural prostheses is sensory encoding: transforming information captured from the environment by sensors into neural signals that can be interpreted by the nervous system. But because the number of electrodes in a prosthesis is limited, this environmental input must be reduced in some way, while still preserving the quality of the data that is transmitted to the brain.
    Demetri Psaltis (Optics Lab) and Christophe Moser (Laboratory of Applied Photonics Devices) collaborated with Diego Ghezzi of the Hôpital ophtalmique Jules-Gonin — Fondation Asile des Aveugles (previously Medtronic Chair in Neuroengineering at EPFL) to apply machine learning to the problem of compressing image data with multiple dimensions, such as color, contrast, etc. In their case, the compression goal was downsampling, or reducing the number of pixels of an image to be transmitted via a retinal prosthesis.
    “Downsampling for retinal implants is currently done by pixel averaging, which is essentially what graphics software does when you want to reduce a file size. But at the end of the day, this is a mathematical process; there is no learning involved,” Ghezzi explains.
    “We found that if we applied a learning-based approach, we got improved results in terms of optimized sensory encoding. But more surprising was that when we used an unconstrained neural network, it learned to mimic aspects of retinal processing on its own.”
    Specifically, the researchers’ machine learning approach, called an actor-model framework, was especially good at finding a “sweet spot” for image contrast. Ghezzi uses Photoshop as an example. “If you move the contrast slider too far in one or the other direction, the image becomes harder to see. Our network evolved filters to reproduce some of the characteristics of retinal processing.”
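    For reference, the learning-free baseline Ghezzi mentions, plain pixel (block) averaging, can be written in a few lines (a generic illustration, not the actual prosthesis pipeline):

```python
# Block averaging: shrink a grayscale image by averaging non-overlapping blocks.
import numpy as np

def downsample_by_averaging(image: np.ndarray, factor: int) -> np.ndarray:
    """Average non-overlapping factor x factor blocks of a 2D image."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor                 # crop to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

image = np.random.rand(64, 64)                  # stand-in for a high-resolution frame
print(downsample_by_averaging(image, 8).shape)  # (8, 8) coarse "electrode" grid
```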
    The results have recently been published in Nature Communications.
    Validation both in-silico and ex-vivo
    In the actor-model framework, two neural networks work in a complementary fashion. The model portion, or forward model, acts as a digital twin of the retina: it is first trained to receive a high-resolution image and output a binary neural code that is as similar as possible to the neural code generated by a biological retina. The actor network is then trained to downsample a high-resolution image that can elicit a neural code from the forward model that is as close as possible to that produced by the biological retina in response to the original image.
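    In code, one training step of this scheme might look roughly like the following sketch (an illustration of the idea only: the network sizes, the loss, the frozen digital twin standing in for recorded retinal responses, and the nearest-neighbor presentation of the coarse image are simplifying assumptions, not the published implementation):

```python
# Actor-model sketch: a frozen "retina" forward model and an actor that learns
# to downsample images so the elicited code matches the code for the original.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardModel(nn.Module):
    """Digital twin of the retina: image -> probabilities of a binary neural code."""
    def __init__(self, code_size=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, code_size), nn.Sigmoid())

    def forward(self, img):
        return self.net(img)

class Actor(nn.Module):
    """Learned downsampler: high-resolution image -> low-resolution stimulation pattern."""
    def __init__(self, out_size=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
            nn.AdaptiveAvgPool2d(out_size), nn.Sigmoid())

    def forward(self, img):
        return self.net(img)

forward_model = ForwardModel()              # in the study, trained first to mimic retinal responses
for p in forward_model.parameters():        # keep the digital twin fixed while training the actor
    p.requires_grad_(False)
actor = Actor()
optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)

high_res = torch.rand(8, 1, 128, 128)             # stand-in batch of high-resolution images
target_code = forward_model(high_res).detach()    # code the "retina" produces for the originals

# One training step: make the code elicited by the downsampled image match the target code.
stimulus = actor(high_res)                                           # e.g. 16 x 16 pattern
presented = F.interpolate(stimulus, size=high_res.shape[-2:], mode="nearest")
loss = F.binary_cross_entropy(forward_model(presented), target_code)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```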

    Using this framework, the researchers tested downsampled images both on the retina digital twin and on mouse cadaver retinas that had been removed (explanted) and placed in a culture medium. Both experiments revealed that the actor-model approach produced downsampled images eliciting a neuronal response closer to the original-image response than images generated by a learning-free computational approach, such as pixel averaging.
    Despite the methodological and ethical challenges involved in using explanted mouse retinas, Ghezzi says that it was this ex-vivo validation of their model that makes their study a true innovation in the field.
    “We cannot only trust the digital, or in-silico, model. This is why we did these experiments — to validate our approach.”
    Other sensory horizons
    Given the team’s past experience working on retinal prostheses, vision was a natural first application of the actor-model framework for sensory encoding. But Ghezzi sees potential to expand the framework’s applications within and beyond the realm of vision restoration. He adds that it will be important to determine how much of the model, which was validated using mouse retinas, is applicable to humans.
    “The obvious next step is to see how can we compress an image more broadly, beyond pixel reduction, so that the framework can play with multiple visual dimensions at the same time. Another possibility is to transpose this retinal model to outputs from other regions of the brain. It could even potentially be linked to other devices, like auditory or limb prostheses,” Ghezzi says.