More stories

  • Boys who play video games have lower depression risk

    Boys who regularly play video games at age 11 are less likely to develop depressive symptoms three years later, finds a new study led by a UCL researcher.
    The study, published in Psychological Medicine, also found that girls who spend more time on social media appear to develop more depressive symptoms.
    Taken together, the findings demonstrate how different types of screen time can positively or negatively influence young people’s mental health, and may also impact boys and girls differently.
    Lead author, PhD student Aaron Kandola (UCL Psychiatry) said: “Screens allow us to engage in a wide range of activities. Guidelines and recommendations about screen time should be based on our understanding of how these different activities might influence mental health and whether that influence is meaningful.
    “While we cannot confirm whether playing video games actually improves mental health, it didn’t appear harmful in our study and may have some benefits. Particularly during the pandemic, video games have been an important social platform for young people.
    “We need to reduce how much time children — and adults — spend sitting down, for their physical and mental health, but that doesn’t mean that screen use is inherently harmful.”
    Kandola has previously led studies finding that sedentary behaviour (sitting still) appeared to increase the risk of depression and anxiety in adolescents. To gain more insight into what drives that relationship, he and colleagues chose to investigate screen time as it is responsible for much of sedentary behaviour in adolescents. Other studies have found mixed results, and many did not differentiate between different types of screen time, compare between genders, or follow such a large group of young people over multiple years.

    The research team from UCL, Karolinska Institutet (Sweden) and the Baker Heart and Diabetes Institute (Australia) reviewed data from 11,341 adolescents who are part of the Millennium Cohort Study, a nationally representative sample of young people who have been involved in research since they were born in the UK in 2000-2002.
    The study participants had all answered questions at age 11 about their time spent on social media, playing video games, and using the internet, and answered questions at age 14 about depressive symptoms such as low mood, loss of pleasure and poor concentration. The clinical questionnaire measures depressive symptoms and their severity on a spectrum, rather than providing a clinical diagnosis.
    In the analysis, the research team accounted for other factors that might have explained the results, such as socioeconomic status, physical activity levels, reports of bullying, and prior emotional symptoms.
    The researchers found that boys who played video games most days had 24% fewer depressive symptoms, three years later, than boys who played video games less than once a month, although this effect was only significant among boys with low physical activity levels, and was not found among girls. The researchers say this might suggest that less active boys could derive more enjoyment and social interaction from video games.
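    The adjusted analysis described above amounts to regressing symptom scores on screen-time exposure with the listed confounders as covariates. Below is a minimal sketch of that kind of model on synthetic data; the variable names, the Poisson model family, and all values are illustrative assumptions, not the study's actual code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per adolescent.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "gaming_freq_11": rng.integers(0, 5, n),    # 0 = never .. 4 = most days
    "physical_activity": rng.integers(0, 5, n),
    "ses": rng.normal(size=n),                  # socioeconomic status proxy
    "bullied": rng.integers(0, 2, n),
    "prior_symptoms": rng.poisson(2, n),
})
df["symptoms_14"] = rng.poisson(3, n)           # placeholder outcome at age 14

# Symptom counts at 14 regressed on gaming frequency at 11,
# adjusting for the confounders named in the article.
fit = smf.glm(
    "symptoms_14 ~ gaming_freq_11 + physical_activity + ses"
    " + bullied + prior_symptoms",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(fit.params["gaming_freq_11"])             # adjusted association
```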
    While their study cannot confirm if the relationship is causal, the researchers say there are some positive aspects of video games which could support mental health, such as problem-solving, and social, cooperative and engaging elements.

    There may also be other explanations for the link between video games and depression, such as differences in social contact or parenting styles, which the researchers did not have data for. They also did not have data on hours of screen time per day, so they cannot confirm whether multiple hours of screen time each day could impact depression risks.
    The researchers found that girls (but not boys) who used social media most days at age 11 had 13% more depressive symptoms three years later than those who used social media less than once a month, although they did not find an association for more moderate use of social media. Other studies have previously found similar trends, and researchers have suggested that frequent social media use could increase feelings of social isolation.
    Screen use patterns between boys and girls may have influenced the findings, as boys in the study played video games more often than girls and used social media less frequently.
    The researchers did not find clear associations between general internet use and depressive symptoms in either gender.
    Senior author Dr Mats Hallgren (Karolinska Institutet) has conducted other studies in adults finding that mentally active types of screen time, such as playing video games or working at a computer, might not affect depression risk in the way that more passive forms of screen time appear to do.
    He said: “The relationship between screen time and mental health is complex, and we still need more research to help understand it. Any initiatives to reduce young people’s screen time should be targeted and nuanced. Our research points to possible benefits of screen time; however, we should still encourage young people to be physically active and to break up extended periods of sitting with light physical activity.”

  • Explainable AI for decoding genome biology

    Researchers at the Stowers Institute for Medical Research, in collaboration with colleagues at Stanford University and the Technical University of Munich, have developed advanced explainable artificial intelligence (AI) in a technical tour de force to decipher regulatory instructions encoded in DNA. In a report published online February 18, 2021, in Nature Genetics, the team found that a neural network trained on high-resolution maps of protein-DNA interactions can uncover subtle DNA sequence patterns throughout the genome and provide a deeper understanding of how these sequences are organized to regulate genes.
    Neural networks are powerful AI models that can learn complex patterns from diverse types of data such as images, speech signals, or text to predict associated properties with impressively high accuracy. However, many see these models as uninterpretable since the learned predictive patterns are hard to extract from the model. This black-box nature has hindered the wide application of neural networks to biology, where interpretation of predictive patterns is paramount.
    One of the big unsolved problems in biology is the genome’s second code — its regulatory code. DNA bases (commonly represented by letters A, C, G, and T) encode not only the instructions for how to build proteins, but also when and where to make these proteins in an organism. The regulatory code is read by proteins called transcription factors that bind to short stretches of DNA called motifs. However, how particular combinations and arrangements of motifs specify regulatory activity is an extremely complex problem that has been hard to pin down.
    Now, an interdisciplinary team of biologists and computational researchers led by Stowers Investigator Julia Zeitlinger, PhD, and Anshul Kundaje, PhD, from Stanford University, has designed a neural network — named BPNet for Base Pair Network — that can be interpreted to reveal the regulatory code by predicting transcription factor binding from DNA sequences with unprecedented accuracy. The key was to perform transcription factor-DNA binding experiments and computational modeling at the highest possible resolution, down to the level of individual DNA bases. This increased resolution allowed them to develop new interpretation tools to extract the key elemental sequence patterns such as transcription factor binding motifs and the combinatorial rules by which motifs function together as a regulatory code.
    “This was extremely satisfying,” says Zeitlinger, “as the results fit beautifully with existing experimental results, and also revealed novel insights that surprised us.”
    For example, the neural network models enabled the researchers to discover a striking rule that governs binding of the well-studied transcription factor called Nanog. They found that Nanog binds cooperatively to DNA when multiples of its motif are present in a periodic fashion such that they appear on the same side of the spiraling DNA helix.
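    The geometric rule is easy to state numerically: B-form DNA makes one helical turn roughly every 10.5 base pairs, so motifs spaced by a near-multiple of that period face the same side of the helix. A toy check of this follows; the period and tolerance are textbook approximations, not values from the paper.

```python
# Toy check of helical periodicity (assumption: ~10.5 bp per turn of B-DNA).
HELICAL_PERIOD = 10.5  # base pairs per full turn of the double helix

def same_helix_face(spacing_bp, tolerance=1.0):
    """True if two motifs spaced by spacing_bp sit on (roughly) the same
    side of the DNA helix, i.e. the spacing is near a multiple of the period."""
    remainder = spacing_bp % HELICAL_PERIOD
    return min(remainder, HELICAL_PERIOD - remainder) <= tolerance

# Spacings up to 40 bp that keep two motifs on the same helical face:
print([s for s in range(5, 41) if same_helix_face(s)])  # near-multiples of 10.5
```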

    “There has been a long trail of experimental evidence that such motif periodicity sometimes exists in the regulatory code,” Zeitlinger says. “However, the exact circumstances were elusive, and Nanog had not been a suspect. Discovering that Nanog has such a pattern, and seeing additional details of its interactions, was surprising because we did not specifically search for this pattern.”
    “This is the key advantage of using neural networks for this task,” says Žiga Avsec, PhD, first author of the paper. Avsec and Kundaje created the first version of the model when Avsec visited Stanford during his doctoral studies in the lab of Julien Gagneur, PhD, at the Technical University of Munich, Germany.
    “More traditional bioinformatics approaches model data using pre-defined rigid rules that are based on existing knowledge. However, biology is extremely rich and complicated,” says Avsec. “By using neural networks, we can train much more flexible and nuanced models that learn complex patterns from scratch without previous knowledge, thereby allowing novel discoveries.”
    BPNet’s network architecture is similar to that of neural networks used for facial recognition in images: such a network first detects edges in the pixels, then learns how edges form facial elements like the eye, nose, or mouth, and finally detects how facial elements together form a face. Instead of learning from pixels, BPNet learns from the raw DNA sequence, detecting sequence motifs and eventually the higher-order rules by which those elements predict the base-resolution binding data.
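    For readers who think in code, a heavily simplified sketch of such a base-resolution model is shown below. The layer counts, filter sizes, and single output head are illustrative assumptions; the published BPNet differs in detail.

```python
import torch
import torch.nn as nn

class BPNetSketch(nn.Module):
    """Illustrative BPNet-style model: one-hot DNA in, per-base profile out."""

    def __init__(self, n_filters=64, n_dilated=9):
        super().__init__()
        # Wide first layer scans for short sequence motifs.
        self.first = nn.Conv1d(4, n_filters, kernel_size=25, padding=12)
        # Dilated convolutions with residual connections widen the receptive
        # field so distant motifs can be combined, base resolution intact.
        self.dilated = nn.ModuleList(
            nn.Conv1d(n_filters, n_filters, kernel_size=3,
                      dilation=2 ** i, padding=2 ** i)
            for i in range(1, n_dilated + 1)
        )
        # Profile head: a predicted binding signal at every single base.
        self.profile = nn.Conv1d(n_filters, 1, kernel_size=25, padding=12)

    def forward(self, x):                  # x: (batch, 4, seq_len) one-hot DNA
        h = torch.relu(self.first(x))
        for conv in self.dilated:
            h = h + torch.relu(conv(h))    # residual skip connection
        return self.profile(h)             # (batch, 1, seq_len)

model = BPNetSketch()
print(model(torch.zeros(1, 4, 1000)).shape)   # torch.Size([1, 1, 1000])
```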
    Once the model is trained to be highly accurate, the learned patterns are extracted with interpretation tools. The output signal is traced back to the input sequences to reveal sequence motifs. The final step is to use the model as an oracle and systematically query it with specific DNA sequence designs, similar to what one would do to test hypotheses experimentally, to reveal the rules by which sequence motifs function in a combinatorial manner.
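    The oracle-style querying can be pictured as in-silico mutagenesis: change one base, re-run the model, and measure how the predicted binding profile shifts. A hedged sketch, written against the hypothetical toy model above (the function and its names are illustrative, not the paper's tooling):

```python
import torch

def predicted_effect(model, onehot_seq, pos, new_base):
    """Total change in the predicted binding profile when the base at
    position `pos` is replaced by `new_base` (0-3 for A/C/G/T)."""
    mutated = onehot_seq.clone()
    mutated[0, :, pos] = 0.0
    mutated[0, new_base, pos] = 1.0
    with torch.no_grad():
        ref = model(onehot_seq)
        alt = model(mutated)
    return (alt - ref).abs().sum().item()

# E.g., probe every position of a sequence for bases the model cares about:
# scores = [max(predicted_effect(model, seq, p, b) for b in range(4))
#           for p in range(seq.shape[-1])]
```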
    “The beauty is that the model can predict far more sequence designs than we could test experimentally,” Zeitlinger says. “Furthermore, by predicting the outcome of experimental perturbations, we can identify the experiments that are most informative to validate the model.” Indeed, with the help of CRISPR gene editing techniques, the researchers confirmed experimentally that the model’s predictions were highly accurate.
    Since the approach is flexible and applicable to a variety of different data types and cell types, it promises to lead to a rapidly growing understanding of the regulatory code and how genetic variation impacts gene regulation. Both the Zeitlinger Lab and the Kundaje Lab are already using BPNet to reliably identify binding motifs for other cell types, relate motifs to biophysical parameters, and learn other structural features in the genome such as those associated with DNA packaging. To enable other scientists to use BPNet and adapt it for their own needs, the researchers have made the entire software framework available with documentation and tutorials.

  • Engineers place molecule-scale devices in precise orientation

    Engineers have developed a technique that allows them to precisely place microscopic devices formed from folded DNA molecules not only in a specific location but also in a specific orientation.
    As a proof-of-concept, they arranged more than 3,000 glowing moon-shaped nanoscale molecular devices into a flower-shaped instrument for indicating the polarization of light. Each of 12 petals pointed in a different direction around the center of the flower, and within each petal about 250 moons were aligned to the direction of the petal. Because each moon only glows when struck by polarized light matching its orientation, the end result is a flower whose petals light up in sequence as the polarization of light shined upon it is rotated. The flower, which spans a distance smaller than the width of a human hair, demonstrates that thousands of molecules can be reliably oriented on the surface of a chip.
    This method for precisely placing and orienting DNA-based molecular devices may make it possible to use these molecular devices to power new kinds of chips that integrate molecular biosensors with optics and electronics for applications such as DNA sequencing or measuring the concentrations of thousands of proteins at once.
    The research, published on February 19 by the journal Science, builds on more than 15 years of work by Caltech’s Paul Rothemund (BS ’94), research professor of bioengineering, computing and mathematical sciences, and computation and neural systems, and his colleagues. In 2006, Rothemund showed that DNA could be directed to fold itself into precise shapes through a technique dubbed DNA origami. In 2009, Rothemund and colleagues at IBM Research Almaden described a technique through which DNA origami could be positioned at precise locations on surfaces. To do so, they used a printing process based on electron beams and created “sticky” patches having the same size and shape as the origami. In particular, they showed that origami triangles bound precisely at the location of triangular sticky patches.
    Next, Rothemund and Ashwin Gopinath, formerly a Caltech senior postdoctoral scholar and now an assistant professor at MIT, refined and extended this technique to demonstrate that molecular devices constructed from DNA origami could be reliably integrated into larger optical devices. “The technological barrier has been how to reproducibly organize vast numbers of molecular devices into the right patterns on the kinds of materials used for chips,” says Rothemund.
    In 2016, Rothemund and Gopinath showed that triangular origami carrying fluorescent molecules could be used to reproduce a 65,000-pixel version of Vincent van Gogh’s The Starry Night. In that work, triangular DNA origami were used to position fluorescent molecules within bacterium-sized optical resonators; precise placement of the fluorescent molecules was critical since a move of just 100 nanometers to the left or right would dim or brighten the pixel by more than five times.
    But the technique had an Achilles’ heel: “Because the triangles were equilateral and were free to rotate and flip upside-down, they could stick flat onto the triangular sticky patch on the surface in any of six different ways. This meant we couldn’t use any devices that required a particular orientation to function. We were stuck with devices that would work equally well when pointed up, down, or in any direction,” says Gopinath. Molecular devices intended for DNA sequencing or measuring proteins absolutely have to land right side up, so the team’s older techniques would ruin 50 percent of the devices. For devices also requiring a unique rotational orientation, such as transistors, only 16 percent would function.
    The first problem to solve, then, was to get the DNA origami to reliably land with the correct side facing up. “It’s a bit like guaranteeing toast always magically lands butter side up when thrown on the floor,” says Rothemund. To the researchers’ surprise, coating origami with a carpet of flexible DNA strands on one side enabled more than 95 percent of them to land face up. But the problem of controlling rotation remained. Right triangles with three different edge lengths were the researchers’ first attempt at a shape that might land in the preferred rotation.
    However, after wrestling to get just 40 percent of right triangles to point in the correct orientation, Gopinath recruited computer scientists Chris Thachuk of the University of Washington, co-author of the Science paper, and a former Caltech postdoc; and David Kirkpatrick of the University of British Columbia, also a co-author of the Science paper. Their job was to find a shape which would only get stuck in the intended orientation, no matter what orientation it might land in. The computer scientists’ solution was a disk with an off-center hole, which the researchers termed a “small moon.” Mathematical proofs suggested that, unlike a right triangle, small moons could smoothly rotate to find the best alignment with their sticky patch without getting stuck. Lab experiments verified that over 98 percent of the small moons found the correct orientation on their sticky patches.
    The team then added special fluorescent molecules that jam themselves tightly into the DNA helices of the small moons, perpendicular to the axis of the helices. This ensured that the fluorescent molecules within a moon were all oriented in the same direction and would glow most brightly when stimulated with light of a particular polarization. “It’s as if every molecule carries a little antenna, which can accept energy from light most efficiently only when the polarization of light matches the orientation of the antenna,” says Gopinath. This simple effect is what enabled the construction of the polarization-sensitive flower.
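    The antenna picture corresponds to a Malus-type cos-squared law: an oriented fluorophore responds most strongly when the light's polarization matches its transition dipole. A toy calculation for a 12-petal arrangement like the one described (the cos² response is a standard physical idealization, not a value taken from the paper):

```python
import numpy as np

petal_angles = np.deg2rad(np.arange(12) * 30)   # 12 petals, 30 degrees apart

def petal_brightness(polarization_deg):
    """Relative brightness of each petal under light polarized at the given
    angle, using the cos^2 antenna response described above."""
    theta = np.deg2rad(polarization_deg) - petal_angles
    return np.cos(theta) ** 2

print(petal_brightness(60).round(2))   # petals aligned with 60° light glow
```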
    With robust methods for controlling the up-down and rotational orientation of DNA origami, a wide range of molecular devices may now be cheaply integrated into computer chips in high yield for a variety of potential applications. For example, Rothemund and Gopinath have founded a company, Palamedrix, to commercialize the technology for building semiconductor chips that enable simultaneous study of all the proteins relevant to human health. Caltech has filed patent applications for the work.

  • Smartphone study points to new ways to measure food consumption

    A team of researchers has devised a method that uses smartphones to measure food consumption — an approach that also offers new ways to predict physical well-being.
    “We’ve harnessed the expanding presence of mobile and smartphones around the globe to measure food consumption over time with precision and with the potential to capture seasonal shifts in diet and food consumption patterns,” explains Andrew Reid Bell, an assistant professor in New York University’s Department of Environmental Studies and an author of the paper, which appears in the journal Environmental Research Letters.
    Food consumption has traditionally been measured by questionnaires that require respondents to recall what they ate over the previous 24 hours, to keep detailed consumption records over a three-to-four-day period, or to indicate their typical consumption patterns over one-week to one-month periods. Because these methods ask participants to report behaviors over extended periods of time, they raise concerns about the accuracy of such documentation.
    Moreover, these forms of data collection don’t capture “real-time” food consumption, preventing analyses that directly link nutrition with physical activity and other measures of well-being — a notable shortcoming given the estimated two billion people in the world who are affected by moderate to severe food insecurity.
    Finally, while food consumption as well as food production have a significant impact on the environment, “we do not yet have the tools to analyze food consumption in the same ways as we do for environmental variables and food production,” write the study’s authors, who also include Mary Killilea, a clinical professor in NYU’s Department of Environmental Studies, and Mari Roberts, an NYU graduate student. “This is a critical gap, as it hampers our understanding of how environmental shocks carry through to become consumption shocks to households, communities, or regions and how responses to these shocks feed back into further environmental stress.”
    The team, which also included researchers from the University of Minnesota, Imperial College London, the Palli Karma-Sahayak Foundation, and Duke Kunshan University, turned to smartphones as an alternative means to track food consumption and its relationship to physical activity.
    “Access to mobile devices is changing how we gather information in many ways, all the way down to the possibility of reaching respondents on their own time, on their own devices, and in their own spaces,” explains Bell.
    Participants included nearly 200 adults in Bangladesh who reported which among a set of general food types (e.g., nuts and seeds, oils, vegetables, leafy vegetables, fruits, meat and eggs, fish, etc.) their household had consumed in the immediately preceding 24 hours as well as which specific food items within the more general food types they had consumed (e.g., rice, wheat, barley, maize, etc.) and how much they ate. Finally, participants reported the age, gender, literacy, education level, occupation, height, and weight of each member of their household, as well as the following measures of their own physical well-being: whether they could stand up on their own after sitting down, whether they could walk for 5 kilometers (3.1 miles), and whether they could carry 20 liters (5.3 gallons) of water for 20 meters (65.6 feet). All of the information was entered by the participants on their phones using a data-collection app, with response rates as high as 90 percent.
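    The kind of self-reported record just described might look like the following in code; the field names and values are invented for illustration and are not the study's actual schema.

```python
# One illustrative survey response (hypothetical schema and values).
survey_response = {
    "household_id": "BD-0042",
    "submitted_at": "2019-07-14T18:30:00+06:00",
    "food_groups_last_24h": ["vegetables", "fish", "oils"],
    "items_consumed": {"rice": "500 g", "lentils": "100 g"},
    "household_members": [
        {"age": 34, "gender": "F", "literate": True,
         "education": "secondary", "occupation": "farmer",
         "height_cm": 155, "weight_kg": 52},
    ],
    "wellbeing": {
        "can_stand_after_sitting": True,
        "can_walk_5_km": True,
        "can_carry_20L_water_20m": False,
    },
}
print(survey_response["food_groups_last_24h"])
```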
    “Food stress is dynamic, and people’s needs — particularly for expectant mothers and young children — can change quickly,” explains Bell. “Reaching respondents in real time allows us to map those changes in a way conventional approaches don’t capture.”
    “Mainstreaming data collection by respondents themselves, through their own devices, would be transformative for understanding food security and for empirical social science in general,” he adds. “It would mean their voices being counted through participation on their own time and terms, and not only by giving up a half-day or longer of work. For researchers, it would mean having connections to rural communities and a picture of their well-being all the time, not just when resources flow to a place in response to crisis, potentially unearthing an understanding of resilience in the face of stressors that has never before been possible.”
    The authors recognize concerns about smartphone availability in both rural and impoverished communities. However, they point to recent studies that show how digital technologies, such as mobile phones and satellites, have offered new ways for rural populations in developing countries to access savings, credit, and insurance.
    “We now see mobile phone penetration almost everywhere in the world, with smartphone and mobile broadband subscriptions following the same trend,” says Bell.

  • Quantum computing: When ignorance is wanted

    Quantum computers promise not only to outperform classical machines in certain important tasks, but also to maintain the privacy of data processing. The secure delegation of computations has become an increasingly important issue with the advent of cloud computing and cloud networks. Of particular interest is the ability to exploit quantum technology that allows for unconditional security, meaning that no assumptions about the computational power of a potential adversary need to be made.
    Different quantum protocols have been proposed, all of which make trade-offs between computational performance, security, and resources. Classical protocols, for example, are either limited to trivial computations or are restricted in their security. In contrast, homomorphic quantum encryption is one of the most promising schemes for secure delegated computation. Here, the client’s data is encrypted in such a way that the server can process it even though it cannot decrypt it. Moreover, as opposed to other protocols, the client and server do not need to communicate during the computation, which dramatically boosts the protocol’s performance and practicality.
    In an international collaboration led by Prof. Philip Walther from the University of Vienna, scientists from Austria, Singapore and Italy teamed up to implement a new quantum computation protocol in which the client has the option of encrypting their input data so that the computer cannot learn anything about it, yet can still perform the calculation. After the computation, the client can then decrypt the output data again to read out the result of the calculation. For the experimental demonstration, the team used quantum light, which consists of individual photons, to implement this so-called homomorphic quantum encryption in a quantum walk process. Quantum walks are interesting special-purpose examples of quantum computation because they are hard for classical computers, yet feasible for single photons.
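    To make the quantum-walk ingredient concrete, here is a minimal simulation of a standard discrete-time quantum walk on a line. This is a generic textbook construction, not the encrypted photonic protocol of the experiment.

```python
import numpy as np

steps, n = 50, 101                       # walk length and lattice sites
psi = np.zeros((n, 2), dtype=complex)    # amplitudes: position x coin state
psi[n // 2, 0] = 1.0                     # walker starts at the center

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard "coin" operator

for _ in range(steps):
    psi = psi @ H.T                      # flip the quantum coin at every site
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]         # coin state 0 moves right
    shifted[:-1, 1] = psi[1:, 1]         # coin state 1 moves left
    psi = shifted

prob = (abs(psi) ** 2).sum(axis=1)       # position distribution after the walk
print(prob.argmax() - n // 2)            # peaks away from the origin, unlike
                                         # a classical random walk
```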
    By combining an integrated photonic platform built at the Polytechnic University of Milan with a novel theoretical proposal developed at the Singapore University of Technology and Design, scientists from the University of Vienna demonstrated the security of the encrypted data and investigated how the protocol behaves as the complexity of the computations increases.
    The team was able to show that the security of the encrypted data improves as the dimension of the quantum walk calculation grows. Furthermore, recent theoretical work indicates that future experiments taking advantage of various photonic degrees of freedom could further improve data security, so additional optimizations can be anticipated. “Our results indicate that the level of security improves even further when increasing the number of photons that carry the data,” says Philip Walther. “This is exciting and we anticipate further developments of secure quantum computing in the future.”

    Story Source:
    Materials provided by University of Vienna.

  • Blueprint for fault-tolerant qubits

    Building a universal quantum computer is a challenging task because of the fragility of quantum bits, or qubits for short. To deal with this problem, various types of error correction have been developed. Conventional methods rely on active correction techniques. In contrast, researchers led by Prof. David DiVincenzo from Forschungszentrum Jülich and RWTH Aachen University, together with partners from the University of Basel and QuTech Delft, have now proposed a design for a circuit with passive error correction. Such a circuit would already be inherently fault protected and could significantly accelerate the construction of a quantum computer with a large number of qubits.
    To encode quantum information reliably, several imperfect physical qubits are usually combined to form a so-called logical qubit. Quantum error correction codes, or QEC codes for short, thus make it possible to detect errors and subsequently correct them, so that the quantum information is preserved over a longer period of time.
    In principle, the techniques work in a similar way to active noise cancellation in headphones: In a first step, any fault is detected. Then, a corrective operation is performed to remove the error and restore the information to its original pure form.
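    The detect-then-correct loop can be illustrated with the simplest possible code, the three-bit repetition code, shown here in its classical form as a stand-in for the quantum versions (which must also handle phase errors):

```python
def encode(bit):
    """One logical bit becomes three physical bits."""
    return [bit, bit, bit]

def detect_and_correct(bits):
    """Majority vote both detects a single flipped bit and repairs it."""
    return max(set(bits), key=bits.count)

codeword = encode(1)
codeword[0] ^= 1                      # noise flips one bit
print(detect_and_correct(codeword))   # the logical 1 is recovered
```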
    However, the application of such active error correction in a quantum computer is very complex and comes with an extensive use of hardware. Typically, complex error-correcting electronics are required for each qubit, making it difficult to build circuits with many qubits, as required to build a universal quantum computer.
    The proposed design for a superconducting circuit, on the other hand, has a kind of built-in error correction. The circuit is designed in such a way that it is already inherently protected against environmental noise while still controllable. The concept thus bypasses the need for active stabilization in a highly hardware-efficient manner, and would therefore be a promising candidate for a future large-scale quantum processor that has a large number of qubits.
    “By implementing a gyrator – a two-port device that couples current on one port to voltage on the other – in between two superconducting devices (so-called Josephson junctions), we could waive the demand for active error detection and stabilization: when cooled down, the qubit is inherently protected against common types of noise,” said Martin Rymarz, a PhD student in the group of David DiVincenzo and first author of the paper, published in Physical Review X.
    “I hope that our work will inspire efforts in the lab; I recognize that this, like many of our proposals, may be a bit ahead of its time”, said David DiVincenzo, Founding Director of the JARA-Institute for Quantum Information at RWTH Aachen University and Director of the Institute of Theoretical Nanoelectronics (PGI-2) at Forschungszentrum Jülich. “Nevertheless, given the professional expertise available, we recognize the possibility to test our proposal in the lab in the foreseeable future”.
    David DiVincenzo is considered a pioneer in the development of quantum computers. Among other things, his name is associated with the criteria that a quantum computer must fulfil, the so-called “DiVincenzo criteria”.

    Story Source:
    Materials provided by Forschungszentrum Juelich.

  • Identifying 'ugly ducklings' to catch skin cancer earlier

    Melanoma is by far the deadliest form of skin cancer, killing more than 7,000 people in the United States in 2019 alone. Early detection of the disease dramatically reduces the risk of death and the costs of treatment, but widespread melanoma screening is not currently feasible. There are about 12,000 practicing dermatologists in the US, and they would each need to see 27,416 patients per year to screen the entire population for suspicious pigmented lesions (SPLs) that can indicate cancer.
    Computer-aided diagnosis (CAD) systems have been developed in recent years to try to solve this problem by analyzing images of skin lesions and automatically identifying SPLs, but so far have failed to meaningfully impact melanoma diagnosis. These CAD algorithms are trained to evaluate each skin lesion individually for suspicious features, but dermatologists compare multiple lesions from an individual patient to determine whether they are cancerous — a method commonly called the “ugly duckling” criteria. No CAD systems in dermatology, to date, have been designed to replicate this diagnosis process.
    Now, that oversight has been corrected thanks to a new CAD system for skin lesions based on convolutional deep neural networks (CDNNs) developed by researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and the Massachusetts Institute of Technology (MIT). The new system successfully distinguished SPLs from non-suspicious lesions in photos of patients’ skin with ~90% accuracy, and for the first time established an “ugly duckling” metric capable of matching the consensus of three dermatologists 88% of the time.
    “We essentially provide a well-defined mathematical proxy for the deep intuition a dermatologist relies on when determining whether a skin lesion is suspicious enough to warrant closer examination,” said the study’s first author Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute who is also a Venture Builder at MIT. “This innovation allows photos of patients’ skin to be quickly analyzed to identify lesions that should be evaluated by a dermatologist, allowing effective screening for melanoma at the population level.”
    The technology is described in Science Translational Medicine, and the CDNN’s source code is openly available on GitHub (https://github.com/lrsoenksen/SPL_UD_DL).
    Bringing ugly ducklings into focus
    Melanoma is personal for Soenksen, who has watched several close friends and family members suffer from the disease. “It amazed me that people can die from melanoma simply because primary care doctors and patients currently don’t have the tools to find the ‘odd’ ones efficiently. I decided to take on that problem by leveraging many of the techniques I learned from my work in artificial intelligence at the Wyss and MIT,” he said.

    Soenksen and his collaborators discovered that all the existing CAD systems created for identifying SPLs only analyzed lesions individually, completely omitting the ugly duckling criteria that dermatologists use to compare several of a patient’s moles during an exam. So they decided to build their own.
    To ensure that their system could be used by people without specialized dermatology training, the team created a database of more than 33,000 “wide field” images of patients’ skin that included backgrounds and other non-skin objects, so that the CDNN would be able to use photos taken from consumer-grade cameras for diagnosis. The images contained both SPLs and non-suspicious skin lesions that were labeled and confirmed by a consensus of three board-certified dermatologists. After training on the database and subsequent refinement and testing, the system was able to distinguish suspicious from non-suspicious lesions with 90.3% sensitivity and 89.9% specificity, improving upon previously published systems.
    But this baseline system was still analyzing the features of individual lesions, rather than features across multiple lesions as dermatologists do. To add the ugly duckling criteria into their model, the team used the extracted features in a secondary stage to create a 3D “map” of all of the lesions in a given image, and calculated how far away from “typical” each lesion’s features were. The more “odd” a given lesion was compared to the others in an image, the further away it was from the center of the 3D space. This distance is the first quantifiable definition of the ugly duckling criteria, and serves as a gateway to leveraging deep learning networks to overcome the challenging and time-consuming task of identifying and scrutinizing the differences between all the pigmented lesions in a single patient.
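    In code, that distance-from-typical metric could look like the sketch below; the feature extraction itself (the CDNN embedding) is assumed and replaced by random vectors for illustration.

```python
import numpy as np

def oddness_scores(features):
    """features: (n_lesions, n_features) embeddings from one patient's photo.
    Each lesion's score is its distance from the patient's "typical" lesion,
    taken here as the centroid of all lesions in the image."""
    centroid = features.mean(axis=0)
    return np.linalg.norm(features - centroid, axis=1)

lesions = np.random.rand(8, 64)       # 8 lesions, 64-dim dummy embeddings
scores = oddness_scores(lesions)
print(scores.argmax())                # index of the "ugliest duckling"
```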
    Deep learning vs. dermatologists
    Their CDNN still had to pass one final test: performing as well as living, breathing dermatologists at the task of identifying SPLs from images of patients’ skin. Three dermatologists examined 135 wide-field photos from 68 patients, and assigned each lesion an “oddness” score that indicated how concerning it looked. The same images were analyzed and scored by the algorithm. When the assessments were compared, the researchers found that the algorithm agreed with the dermatologists’ consensus 88% of the time, and with the individual dermatologists 86% of the time.

    “This high level of consensus between artificial intelligence and human clinicians is an important advance in this field, because dermatologists’ agreement with each other is typically very high, around 90%,” said co-author Jim Collins, Ph.D., a Core Faculty member of the Wyss Institute and co-leader of its Predictive Bioanalytics Initiative who is also the Termeer Professor of Medical Engineering and Science at MIT. “Essentially, we’ve been able to achieve dermatologist-level accuracy in diagnosing potential skin cancer lesions from images that can be taken by anybody with a smartphone, which opens up huge potential for finding and treating melanoma earlier.”
    Recognizing that such a technology should be made available to as many people as possible for maximum benefit, the team has made their algorithm open-source on GitHub. They hope to partner with medical centers to launch clinical trials further demonstrating their system’s efficacy, and with industry to turn it into a product that could be used by primary care providers around the world. They also recognize that in order to be universally helpful, their algorithm needs to be able to function equally well across the full spectrum of human skin tones, which they plan to incorporate into future development.
    “Allowing our scientists to pursue their passions and visions is key to the success of the Wyss Institute, and it’s wonderful to see this advance that can impact all of us in such a meaningful way emerge from a collaboration with our newly formed Predictive Bioanalytics Initiative,” said Wyss Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.
    Additional authors of the paper include Regina Barzilay, Martha L. Gray, Timothy Kassis, Susan T. Conover, Berta Marti-Fuster, Judith S. Birkenfeld, Jason Tucker-Schwartz, and Asif Naseem from MIT, Robert R. Stavert from the Beth Israel Deaconess Medical Center, Caroline C. Kim from Tufts Medical Center, Maryanne M. Senna from Massachusetts General Hospital, and José Avilés-Izquierdo from Hospital General Universitario Gregorio Marañón.
    This research was supported by the Abdul Latif Jameel Clinic for Machine Learning in Health, the Consejería de Educación, Juventud y Deportes de la Comunidad de Madrid through the Madrid-MIT M+Visión Consortium and the People Programme of the European Union’s Seventh Framework Programme, the Mexico CONACyT grant 342369/40897, and the US DOE training grant DE-SC0008430.

  • This robot doesn't need any electronics

    Engineers at the University of California San Diego have created a four-legged soft robot that doesn’t need any electronics to work. The robot only needs a constant source of pressurized air for all its functions, including its controls and locomotion systems.
    The team, led by Michael T. Tolley, a professor of mechanical engineering at the Jacobs School of Engineering at UC San Diego, details its findings in the Feb. 17, 2021 issue of the journal Science Robotics.
    “This work represents a fundamental yet significant step towards fully-autonomous, electronics-free walking robots,” said Dylan Drotman, a Ph.D. student in Tolley’s research group and the paper’s first author.
    Applications include low-cost robotics for entertainment, such as toys, and robots that can operate in environments where electronics cannot function, such as MRI machines or mine shafts. Soft robots are of particular interest because they easily adapt to their environment and operate safely near humans.
    Most soft robots are powered by pressurized air and are controlled by electronic circuits. But this approach requires complex components like circuit boards, valves and pumps — often outside the robot’s body. These components, which constitute the robot’s brains and nervous system, are typically bulky and expensive. By contrast, the UC San Diego robot is controlled by a lightweight, low-cost system of pneumatic circuits, made up of tubes and soft valves, onboard the robot itself. The robot can walk on command or in response to signals it senses from the environment.
    “With our approach, you could make a very complex robotic brain,” said Tolley, the study’s senior author. “Our focus here was to make the simplest air-powered nervous system needed to control walking.”
    The robot’s computational power roughly mimics mammalian reflexes that are driven by a neural response from the spine rather than the brain. The team was inspired by neural circuits found in animals, called central pattern generators, made of very simple elements that can generate rhythmic patterns to control motions like walking and running.

    To mimic the generator’s functions, engineers built a system of valves that act as oscillators, controlling the order in which pressurized air enters air-powered muscles in the robot’s four limbs. Researchers built an innovative component that coordinates the robot’s gait by delaying the injection of air into the robot’s legs. The robot’s gait was inspired by sideneck turtles.
    The robot is also equipped with simple mechanical sensors — little soft bubbles filled with fluid placed at the end of booms protruding from the robot’s body. When the bubbles are depressed, the fluid flips a valve in the robot that causes it to reverse direction.
    The Science Robotics paper builds on previous work by other research groups that developed oscillators and sensors based on pneumatic valves, and adds the components necessary to achieve high-level functions like walking.
    How it works
    The robot is equipped with three valves acting as inverters that cause a high pressure state to spread around the air-powered circuit, with a delay at each inverter.
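    The three valves form a classic ring oscillator: with an odd number of inverting stages the loop has no stable state and cycles forever, producing the rhythmic signal that sequences the legs. A toy discrete-time simulation of that logic follows (the real circuit is pneumatic and continuous; timescales here are abstract):

```python
# Three inverters in a ring: an odd number of inversions has no stable
# state, so the pressure pattern cycles forever, acting as a clock.
state = [1, 0, 0]                        # high/low pressure at each inverter

for step in range(6):
    # Each inverter outputs the opposite of the previous inverter's output.
    state = [1 - state[i - 1] for i in range(3)]
    print(step, state)                   # the pattern repeats with period 6
```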

    Each of the robot’s four legs has three degrees of freedom powered by three muscles. The legs are angled downward at 45 degrees and composed of three parallel, connected pneumatic cylindrical chambers with bellows. When a chamber is pressurized, the limb bends in the opposite direction. As a result, the three chambers of each limb provide multi-axis bending required for walking. Researchers paired chambers from each leg diagonally across from one another, simplifying the control problem.
    A soft valve switches the direction of rotation of the limbs between counterclockwise and clockwise. That valve acts as what’s known as a latching double pole, double throw switch — a switch with two inputs and four outputs, so each input has two corresponding outputs it’s connected to. That mechanism is a little like taking two nerves and swapping their connections in the brain.
    Next steps
    In the future, researchers want to improve the robot’s gait so it can walk on natural terrains and uneven surfaces, which would allow it to navigate a variety of obstacles. That would require a more sophisticated network of sensors and, as a result, a more complex pneumatic system.
    The team will also look at how the technology could be used to create robots that are partly controlled by pneumatic circuits for some functions, such as walking, while traditional electronic circuits handle higher-level functions.
    This work is supported by the Office of Naval Research, grant numbers N00014-17-1-2062 and N00014-18-1-2277.
    Video: https://www.youtube.com/watch?v=X5caSAb4kz0&feature=emb_logo