More stories

  •

    Deep neural networks don’t see the world the way we do

    Human sensory systems are very good at recognizing objects that we see or words that we hear, even if the object is upside down or the word is spoken by a voice we’ve never heard.
    Computational models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of what color its fur is, or a word regardless of the pitch of the speaker’s voice. However, a new study from MIT neuroscientists has found that these models often also respond the same way to images or words that have no resemblance to the target.
    When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that were unrecognizable to human observers. This suggests that these models build up their own idiosyncratic “invariances” — meaning that they respond the same way to stimuli with very different features.
    The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.
    “This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, who is the senior author of the study. “This test should become part of a battery of tests that we as a field are using to evaluate models.”
    Jenelle Feather PhD ’22, who is now a research fellow at the Flatiron Institute Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, an MIT graduate student, and Aleksander Mądry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.
    Different perceptions
    In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object roughly as accurately as humans do. These models are currently regarded as the leading models of biological sensory systems.

  •

    Virtual driving assessment predicts risk of crashing for newly licensed teen drivers

    New research published today in the journal Pediatrics found that a virtual driving assessment (VDA) administered at the time of licensure, which measures driving skills by exposing drivers to common serious crash scenarios, helps predict crash risk in newly licensed young drivers.
    This study, conducted by the Center for Injury Research and Prevention (CIRP) at Children’s Hospital of Philadelphia (CHOP) with colleagues at the University of Pennsylvania and the University of Michigan, brings the research community one step closer to identifying which skill deficits put young new drivers at higher risk for crashes. With this cutting-edge information, more personalized interventions can be developed to improve the driving skills that prevent crashes.
    While drivers between the ages of 15 and 20 only make up about 5% of all drivers on the road, they are involved in approximately 12% of all vehicle crashes and 8.5% of fatal crashes. The time of greatest crash risk is in the months right after these young drivers receive their license, largely due to deficits in driving skills.
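    Those shares translate into a concrete over-representation factor, which a quick back-of-the-envelope calculation makes explicit (using only the figures quoted above):

```python
# Figures from the article: 15- to 20-year-olds are ~5% of drivers,
# but account for ~12% of all crashes and ~8.5% of fatal crashes.
share_of_drivers = 0.05
share_of_all_crashes = 0.12
share_of_fatal_crashes = 0.085

# How many times their population share do they represent in crashes?
all_crash_ratio = share_of_all_crashes / share_of_drivers      # ~2.4x
fatal_crash_ratio = share_of_fatal_crashes / share_of_drivers  # ~1.7x

print(f"Teen drivers appear in all crashes at {all_crash_ratio:.1f}x their "
      f"share of the driving population, and in fatal crashes at "
      f"{fatal_crash_ratio:.1f}x.")
```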
    However, many of these newly licensed drivers do avoid crashes. The challenge for policymakers, clinicians, and families has been identifying which drivers are at increased risk of crashing during the learning phase before they drive on their own. Early identification of at-risk drivers offers the opportunity to intervene with training and other resources known to help prevent crashes, making the roads safer for everyone.
    Over the past two decades, CIRP researchers have systematically determined the primary reason for novice driver crashes — inadequate driving skills, such as speed management — and conducted studies that informed the development and validation of a self-guided VDA that measures performance of these driving skills in common serious crash scenarios that cannot be evaluated with on-road testing. The VDA utilizes the Ready-Assess™ platform developed by Diagnostic Driving, Inc., an AI-driven virtual driving assessment that provides the driver with the insights and tools to improve.
    In this study, researchers examined the ability of the VDA, delivered at the time of the licensing road test, to predict crash risk in the first year after obtaining licensure in the state of Ohio. Using a unique study design, the results of the VDA were linked to police-reported crash records for the first year after obtaining a license.
    “Our previous research showed that performance on the VDA predicted actual on-road driving performance, as measured by failure on the licensing road test. This new study went further to determine whether VDA performance could identify unsafe driving performance predictive of future crash risk,” said lead study author Elizabeth Walshe, PhD, a cognitive neuroscientist and clinical researcher who directs the Neuroscience of Driving team at CIRP. “We found that drivers categorized by their performance as having major issues with dangerous behavior were at higher risk of crashing than average new drivers.”
    The researchers analyzed a unique integrated dataset of individual results of VDA performance, collected in the Ohio Bureau of Motor Vehicles before the licensing road test, linked to licensing and police-reported crash records in 16,914 first-time newly licensed drivers under the age of 25. Data were collected from applicants who completed the VDA between July 2017 and December 2019 on the day they passed the on-road licensing examination in Ohio. Researchers examined crash records up to mid-March 2020.

  •

    Photonic crystals bend light as though it were under the influence of gravity

    A collaborative group of researchers has manipulated the behavior of light as if it were under the influence of gravity. The findings, which were published in the journal Physical Review A on September 28, 2023, have far-reaching implications for the world of optics and materials science, and bear significance for the development of 6G communications.
    Albert Einstein’s theory of relativity has long established that the trajectory of electromagnetic waves — including light and terahertz electromagnetic waves — can be deflected by gravitational fields.
    Scientists have recently theoretically predicted that replicating the effects of gravity — i.e., pseudogravity — is possible by deforming crystals in the lower normalized energy (or frequency) region.
    “We set out to explore whether lattice distortion in photonic crystals can produce pseudogravity effects,” said Professor Kyoko Kitamura from Tohoku University’s Graduate School of Engineering.
    Photonic crystals possess unique properties that enable scientists to manipulate and control the behavior of light, serving as ‘traffic controllers’ for light within crystals. They are constructed by periodically arranging two or more different materials with varying abilities to interact with and slow down light in a regular, repeating pattern. Furthermore, pseudogravity effects due to adiabatic changes have been observed in photonic crystals.
    Kitamura and her colleagues modified photonic crystals by introducing lattice distortion: gradual deformation of the regular spacing of elements, which disrupted the grid-like pattern of the photonic crystals. This manipulated the photonic band structure of the crystals, resulting in a curved beam trajectory in-medium — just like a light ray passing by a massive celestial body such as a black hole.
    Specifically, they employed a distorted photonic crystal made of silicon, with an undistorted lattice constant of 200 micrometers, and terahertz waves. Experiments successfully demonstrated the deflection of these waves.
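    As a toy illustration of what "lattice distortion" means here (not the authors' actual geometry), one can picture a one-dimensional lattice whose 200-micrometer period is stretched slightly more with each repeat; the distortion rate below is an assumed, made-up number:

```python
import numpy as np

# Nominal lattice constant matches the experiment; the distortion
# rate is purely illustrative.
a0 = 200e-6          # undistorted lattice constant, in meters
distortion = 0.002   # assumed fractional stretch added per period

n = 20
x = np.zeros(n)      # positions of the lattice elements
for i in range(1, n):
    # Each successive period is slightly longer than the previous one,
    # so the "grid" the light sees changes gradually across the crystal.
    x[i] = x[i - 1] + a0 * (1 + distortion * i)

spacing = np.diff(x)
print(f"first period: {spacing[0]*1e6:.3f} um, "
      f"last period: {spacing[-1]*1e6:.3f} um")
```

    It is this gradual change in the grid, rather than any real mass, that curves the beam's trajectory inside the crystal.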
    “Much like gravity bends the trajectory of objects, we came up with a means to bend light within certain materials,” added Kitamura. “Such in-plane beam steering within the terahertz range could be harnessed in 6G communication. Academically, the findings show that photonic crystals could harness gravitational effects, opening new pathways within the field of graviton physics,” said Associate Professor Masayuki Fujita from Osaka University.

  •

    Researchers measure global consensus over the ethical use of AI

    To examine the global state of AI ethics, a team of researchers from Brazil performed a systematic review and meta-analysis of global guidelines for AI use. Publishing October 13 in the journal Patterns, the researchers found that, while most of the guidelines valued privacy, transparency, and accountability, very few valued truthfulness, intellectual property, or children’s rights. Additionally, most of the guidelines described ethical principles and values without proposing practical methods for implementing them and without pushing for legally binding regulation.
    “Establishing clear ethical guidelines and governance structures for the deployment of AI around the world is the first step to promoting trust and confidence, mitigating its risks, and ensuring that its benefits are fairly distributed,” says social scientist and co-author James William Santos of the Pontifical Catholic University of Rio Grande do Sul.
    “Previous work predominantly centered around North American and European documents, which prompted us to actively seek and include perspectives from regions such as Asia, Latin America, Africa, and beyond,” says lead author Nicholas Kluge Corrêa of the Pontifical Catholic University of Rio Grande do Sul and the University of Bonn.
    To determine whether a global consensus exists regarding the ethical development and use of AI, and to help guide such a consensus, the researchers conducted a systematic review of policy and ethical guidelines published between 2014 and 2022. From this, they identified 200 documents related to AI ethics and governance from 37 countries and six continents and written or translated into five different languages (English, Portuguese, French, German, and Spanish). These documents included recommendations, practical guides, policy frameworks, legal landmarks, and codes of conduct.
    Then, the team conducted a meta-analysis of these documents to identify the most common ethical principles, examine their global distribution, and assess biases in terms of the type of organizations or people producing these documents.
    The researchers found that the most common principles were transparency, security, justice, privacy, and accountability, which appeared in 82.5%, 78%, 75.5%, 68.5%, and 67% of the documents, respectively. The least common principles were labor rights, truthfulness, intellectual property, and children/adolescent rights, which appeared in 19.5%, 8.5%, 7%, and 6% of the documents, and the authors emphasize that these principles deserve more attention. For example, truthfulness — the idea that AI should provide truthful information — is becoming increasingly relevant with the release of generative AI technologies like ChatGPT. And since AI has the potential to displace workers and change the way we work, practical measures are needed to avoid mass unemployment or monopolies.
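    Since the corpus contained exactly 200 documents, the reported percentages can be converted back into whole document counts as a simple sanity check (our arithmetic, not part of the study's analysis):

```python
# Cross-check the reported percentages against the 200-document corpus:
# each percentage should correspond to a whole number of documents.
corpus_size = 200
principle_pct = {
    "transparency": 82.5, "security": 78.0, "justice": 75.5,
    "privacy": 68.5, "accountability": 67.0,
    "labor rights": 19.5, "truthfulness": 8.5,
    "intellectual property": 7.0, "children/adolescent rights": 6.0,
}
counts = {name: pct * corpus_size / 100 for name, pct in principle_pct.items()}
for name, n in counts.items():
    assert n == int(n), f"{name}: {n} documents is not a whole number"

print(f"transparency appears in {int(counts['transparency'])} of "
      f"{corpus_size} documents; children/adolescent rights in only "
      f"{int(counts['children/adolescent rights'])}.")
```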
    Most (96%) of the guidelines were “normative” — describing ethical values that should be considered during AI development and use — while only 2% recommended practical methods of implementing AI ethics, and only 4.5% proposed legally binding forms of AI regulation.

  •

    Physicists demonstrate powerful physics phenomenon

    In a new breakthrough, researchers have used a novel technique to confirm a previously undetected physics phenomenon that could be used to improve data storage in the next generation of computer devices.
    Spintronic memories, like those used in some high-tech computers and satellites, use magnetic states generated by an electron’s intrinsic angular momentum to store and read information. Depending on its physical motion, an electron’s spin produces a magnetic current. Known as the “spin Hall effect,” this has key applications for magnetic materials across many different fields, ranging from low power electronics to fundamental quantum mechanics.
    More recently, scientists have found that electrons are also capable of generating electricity through a second kind of movement: orbital angular momentum, similar to how Earth revolves around the sun. This is known as the “orbital Hall effect,” said Roland Kawakami, co-author of the study and a professor in physics at The Ohio State University.
    Theorists predicted that by using light transition metals — materials that have weak spin Hall currents — magnetic currents generated by the orbital Hall effect would be easier to spot flowing alongside them. Until now, directly detecting such a thing has been a challenge, but the study, led by Igor Lyalin, a graduate student in physics, and published today in the journal Physical Review Letters, showed a method to observe the effect.
    “Over the decades, there’s been a continuous discovery of various Hall effects,” said Kawakami. “But the idea of these orbital currents is really a brand new one. The difficulty is that they are mixed with spin currents in typical heavy metals and it’s difficult to tell them apart.”
    Instead, Kawakami’s team demonstrated the orbital Hall effect by reflecting polarized light, in this case, a laser, onto various thin films of the light metal chromium to probe the metal’s atoms for a potential build-up of orbital angular momentum. After nearly a year of painstaking measurements, researchers were able to detect a clear magneto-optical signal which showed that electrons gathered at one end of the film exhibited strong orbital Hall effect characteristics.
    This successful detection could have huge consequences for future spintronics applications, said Kawakami.

  •

    Immune system aging can be revealed by CT scan

    The thymus, a small and relatively unknown organ, may play a bigger role in the immune system of adults than was previously believed. With age, the glandular tissue in the thymus is replaced by fat but, according to a new study from Linköping University (LiU) in Sweden, the rate at which this happens is linked to sex, age and lifestyle factors. These findings also indicate that the appearance of the thymus reflects the ageing of the immune system.
    “We doctors can assess the appearance of the thymus from largely all chest CT scans, but we tend to not see this as very important. But now it turns out that the appearance of the thymus can actually provide a lot of valuable information that we could benefit from and learn more about,” says Mårten Sandstedt, MD, PhD, at the Department of Radiology in Linköping and Department of Health, Medicine and Caring Sciences, Faculty of Medicine and Health Sciences, Linköping University.
    The thymus is a gland located in the upper part of the chest. It has long been known that this small organ is important for immune defence development in children. After puberty, the thymus decreases in size and is eventually replaced by fat, in a process known as fatty degeneration. This has been taken to mean that it loses its function, which is why the thymus has long been considered unimportant in adult life. That view has, however, been challenged by a few smaller studies, mainly in animals, indicating that an active thymus in adulthood may be an advantage, providing increased resilience against infectious disease and cancer. Very few studies so far have examined the thymus in adults.
    In the present study, published in Immunity & Ageing, the researchers have examined thymus appearance in chest CT scans of more than 1,000 Swedish individuals aged 50 to 64, who participated in the large SCAPIS study (Swedish cardiopulmonary bioimage study). SCAPIS includes both extensive imaging and comprehensive health assessments including lifestyle factors, such as dietary habits and physical activity. In their sub-study of SCAPIS, the researchers also analysed immune cells in the blood.
    “We saw a huge variation in thymus appearance. Six out of ten participants had complete fatty degeneration of thymus, which was much more common in men than in women, and in people with abdominal obesity. Lifestyle also mattered. Low intake of fibres in particular was associated with fatty degeneration of thymus,” says Mårten Sandstedt.
    The Linköping researchers’ study provides new knowledge by associating thymus appearance with lifestyle and health factors, and with the immune system. In the development of the immune system, the thymus acts like a school for a type of immune cell known as T-cells (where the T stands for “thymus”). This is where the T-cells learn to recognise bacteria, viruses and other things that are alien to the body. They also learn to be tolerant and not attack anything that is part of the person’s own body, which could otherwise lead to various autoimmune diseases.
    In their study, the LiU researchers saw that individuals with fatty degeneration of the thymus showed lower T-cell regeneration.
    “This association with T-cell regeneration is interesting. It indicates that what we see in CT scans is not only an image, it actually also reflects the functionality of the thymus. You can’t do anything about your age and your sex, but lifestyle-related factors can be influenced. It might be possible to influence immune system ageing,” says Lena Jonasson, professor at the Department of Cardiology in Linköping and Department of Health, Medicine and Caring Sciences, Faculty of Medicine and Health Sciences, Linköping University.
    But more research is needed before it will be possible to know whether thymus appearance, and thereby immune defence ageing, will have any implications for our health. The researchers are now moving on to follow-up studies of the thymus of all 5,000 participants in SCAPIS Linköping to see whether CT scan thymus images can provide information on future risk of disease.
    This research was funded by the Heart-Lung Foundation, the Swedish Research Council, the Swedish Grandlodge of Freemasonry, and Region Östergötland and Linköping University through ALF Grants. Mårten Sandstedt is also affiliated with the Center for Medical Image Science and Visualization, CMIV, in Linköping.

  •

    New organ-on-a-chip model of human synovium could accelerate development of treatments for arthritis

    The synovium is a membrane-like structure that lines the knee joint and helps to keep the joint happy and healthy, mainly by producing and maintaining synovial fluid. Inflammation of this tissue is implicated in the onset and progression of arthritic diseases such as rheumatoid arthritis and osteoarthritis, so treatments that target the synovium are promising candidates for treating these diseases. Finding and testing such treatments, however, requires better laboratory models; a new organ-on-a-chip model of the human synovium and its associated vasculature addresses this need.
    Researchers at Queen Mary University of London have developed a new organ-on-a-chip model of the human synovium, a membrane-like tissue that lines the joints. The model, published in the journal Biomedical Materials, could help researchers to better understand the mechanisms of arthritis and to develop new treatments for this group of debilitating diseases.
    In the UK, more than 10 million people live with a form of arthritis, which affects the joints and can cause pain, stiffness, and swelling. There is currently no cure for arthritis and the search for new therapeutics is limited by a lack of accurate models.
    The new synovium-on-a-chip model is a three-dimensional microfluidic device that contains human synovial cells and blood vessel cells. The device is subjected to mechanical loading, which mimics the forces applied to the synovium during joint movement.
    The developed synovium-on-a-chip model was able to mimic the behaviour of native human synovium, producing key synovial fluid components and responding to inflammation. This suggests that the new platform has immense potential to help researchers understand disease mechanisms and identify and test new therapies for arthritic diseases.
    “Our model is the first human, vascularised, synovium-on-a-chip model with applied mechanical loading and successfully replicates a number of key features of native synovium biology,” said Dr Timothy Hopkins, Versus Arthritis Foundation Fellow, joint lead author of the study. “The model was developed upon a commercially available platform (Emulate Inc.), that allows for widespread adoption without the need for specialist knowledge of chip fabrication. The vascularised synovium-on-a-chip can act as a foundational model for academic research, with which fundamental questions can be addressed, and complexity (further cell and tissue types) can be added. In addition, we envisage that our model could eventually form part of the drug discovery pipeline in an industrial setting. Some of these conversations have already commenced.”
    The researchers are currently using the synovium-on-a-chip model to study the disease mechanisms of arthritis and to develop stratified and personalized organ-on-a-chip models of human synovium and associated tissues.
    “We believe that our synovium-on-a-chip model, and related models of human joints currently under development in our lab, have the potential to transform pre-clinical testing, streamlining delivery of new therapeutics for treatment of arthritis,” said Prof. Martin Knight, Professor of Mechanobiology. “We are excited to share this model with the scientific community and to work with industry partners to bring new treatments to patients as quickly as possible.”

  •

    Self-correcting quantum computers within reach?

    Quantum computers promise to reach speeds and efficiencies impossible for even the fastest supercomputers of today. Yet the technology hasn’t seen much scale-up and commercialization largely due to its inability to self-correct. Quantum computers, unlike classical ones, cannot correct errors by copying encoded data over and over. Scientists had to find another way.
    Now, a new paper in Nature illustrates a Harvard quantum computing platform’s potential to solve the longstanding problem known as quantum error correction.
    Leading the Harvard team is quantum optics expert Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative. The work reported in Nature was a collaboration among Harvard, MIT, and Boston-based QuEra Computing. Also involved was the group of Markus Greiner, the George Vasmer Leverett Professor of Physics.
    An effort spanning the last several years, the Harvard platform is built on an array of very cold, laser-trapped rubidium atoms. Each atom acts as a bit — or a “qubit” as it’s called in the quantum world — which can perform extremely fast calculations.
    The team’s chief innovation is configuring their “neutral atom array” to be able to dynamically change its layout by moving and connecting atoms — this is called “entangling” in physics parlance — mid-computation. Operations that entangle pairs of atoms, called two-qubit logic gates, are units of computing power.
    Running a complicated algorithm on a quantum computer requires many gates. However, these gate operations are notoriously error-prone, and a buildup of errors renders the algorithm useless.
    In the new paper, the team reports near-flawless performance of its two-qubit entangling gates with extremely low error rates. For the first time, they demonstrated the ability to entangle atoms with error rates below 0.5 percent. In terms of operation quality, this puts their technology’s performance on par with other leading types of quantum computing platforms, like superconducting qubits and trapped-ion qubits.
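    To see why sub-0.5 percent gate errors matter, one can assume (simplistically, as a rough model rather than anything from the paper) that gate errors are independent; the chance a circuit runs without any gate error then shrinks geometrically with the number of gates:

```python
# Rough independent-error estimate: the probability that a circuit of
# n entangling gates executes with no gate error at all.
def circuit_success(p_gate_error, n_gates):
    return (1 - p_gate_error) ** n_gates

# Compare a 1% error rate against the sub-0.5% regime reported here.
for p in (0.01, 0.005):
    for n in (100, 1000):
        print(f"error rate {p:.3f}, {n:4d} gates -> "
              f"success probability {circuit_success(p, n):.3f}")
```

    Halving the per-gate error rate roughly squares the circuit depth that remains usable, which is why pushing below 0.5 percent is significant for running the many-gate algorithms that error correction requires.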