More stories

  • Fostering creativity in researchers: How automation can revolutionize materials research

    At the heart of many past scientific breakthroughs lies the discovery of novel materials. However, the cycle of synthesizing, testing, and optimizing new materials routinely takes scientists long hours of hard work. Because of this, many potentially useful materials with exotic properties remain undiscovered. But what if we could automate the entire process of developing novel materials using robotics and artificial intelligence, making it much faster?
    In a recent study published in APL Materials, scientists from Tokyo Institute of Technology (Tokyo Tech), Japan, led by Associate Professor Ryota Shimizu and Professor Taro Hitosugi, devised a strategy that could make fully autonomous materials research a reality. Their work centers on the revolutionary idea of laboratory equipment that is ‘CASH’ (Connected, Autonomous, Shared, High-throughput). With a CASH setup in a materials laboratory, researchers need only decide which material properties they want to optimize and feed the system the necessary ingredients; the automated system then takes control, repeatedly preparing and testing new compounds until the best one is found. Using machine learning algorithms, the system draws on previous results to decide how the synthesis conditions should be changed in each cycle to approach the desired outcome.
    To demonstrate that CASH is a feasible strategy for solid-state materials research, Associate Professor Shimizu and his team created a proof-of-concept system comprising a robotic arm surrounded by several modules. Their setup was geared toward minimizing the electrical resistance of a titanium dioxide thin film by adjusting the deposition conditions; accordingly, the modules were a sputter deposition apparatus and a device for measuring resistance. The robotic arm transferred the samples from module to module as needed, and the system autonomously predicted the synthesis parameters for the next iteration based on previous data, using a Bayesian optimization algorithm.
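    To make the closed loop concrete, the sketch below mimics the kind of Bayesian optimization cycle a CASH system could run. It is illustrative only: the two deposition parameters, their normalized ranges, the deposit_and_measure() placeholder, and the lower-confidence-bound acquisition rule are assumptions made for the sketch, not details of the Tokyo Tech apparatus.

    ```python
    # Minimal sketch of an autonomous optimize-synthesize-measure loop.
    # deposit_and_measure() is a synthetic stand-in for the robotic
    # sputtering and resistance-measurement modules.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def deposit_and_measure(params):
        """Pretend to sputter a TiO2 film at the given (normalized)
        temperature and oxygen pressure, then return its resistance."""
        temp, p_o2 = params
        return (temp - 0.6) ** 2 + (p_o2 - 0.3) ** 2 + 0.01 * rng.normal()

    # Candidate deposition conditions on a normalized grid.
    grid = np.array([[t, p] for t in np.linspace(0, 1, 21)
                            for p in np.linspace(0, 1, 21)])

    # Seed the loop with a few random experiments.
    X = grid[rng.choice(len(grid), 3, replace=False)].tolist()
    y = [deposit_and_measure(x) for x in X]

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(12):  # roughly one simulated day of samples
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(grid, return_std=True)
        # Lower confidence bound trades off exploitation and exploration.
        candidate = grid[np.argmin(mu - sigma)]
        X.append(candidate.tolist())
        y.append(deposit_and_measure(candidate))

    best = int(np.argmin(y))
    print("best conditions:", X[best], "resistance:", round(y[best], 4))
    ```

    In a real CASH setup, deposit_and_measure() would be replaced by the sputtering and measurement modules themselves, with the robotic arm ferrying each sample between them.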
    Amazingly, their CASH setup managed to produce and test about twelve samples per day, a tenfold increase in throughput compared to what scientists can manually achieve in a conventional laboratory. In addition to this significant increase in speed, one of the main advantages of the CASH strategy is the possibility of creating huge shared databases describing how material properties vary according to synthesis conditions. In this regard, Prof Hitosugi remarks: “Today, databases of substances and their properties remain incomplete. With the CASH approach, we could easily complete them and then discover hidden material properties, leading to the discovery of new laws of physics and resulting in insights through statistical analysis.”
    The research team believes that the CASH approach will bring about a revolution in materials science. Databases generated quickly and effortlessly by CASH systems will be combined into big data, and scientists will use advanced algorithms to process them and extract human-understandable expressions. However, as Prof Hitosugi notes, machine learning and robotics alone cannot find insights or discover concepts in physics and chemistry. “The training of future materials scientists must evolve; they will need to understand what machine learning can solve and set the problem accordingly. The strength of human researchers lies in creating concepts or identifying problems in society. Combining those strengths with machine learning and robotics is very important,” he says.
    Overall, this perspective article highlights the tremendous benefits that automation could bring to materials science. If the weight of repetitive tasks is lifted off the shoulders of researchers, they will be able to focus more on uncovering the secrets of the material world for the benefit of humanity.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Researchers establish proof of principle in superconductor study

    Three physicists in the Department of Physics and Astronomy at the University of Tennessee, Knoxville, together with their colleagues from the Southern University of Science and Technology and Sun Yat-sen University in China, have successfully modified a semiconductor to create a superconductor.
    Professor and Department Head Hanno Weitering, Associate Professor Steve Johnston, and PhD candidate Tyler Smith were part of the team that made the breakthrough in fundamental research, which may lead to unforeseen advancements in technology.
    Semiconductors are electrical insulators but conduct electrical currents under special circumstances. They are an essential component in many of the electronic circuits used in everyday items including mobile phones, digital cameras, televisions, and computers.
    As technology has progressed, so has the development of semiconductors, allowing the fabrication of electronic devices that are smaller, faster, and more reliable.
    Superconductors, first discovered in 1911, allow electrical charges to move without resistance, so current flows without any energy loss. Although scientists are still exploring practical applications, superconductors are currently used most widely in MRI machines.
    Working with a silicon semiconductor platform — the standard for nearly all electronic devices — Weitering and his colleagues used tin to create the superconductor.
    “When you have a superconductor and you integrate it with a semiconductor, there are also new types of electronic devices that you can make,” Weitering stated.
    Superconductors are typically discovered by accident; the development of this novel superconductor is the first example ever of intentionally creating an atomically thin superconductor on a conventional semiconductor template, exploiting the knowledge base of high-temperature superconductivity in doped ‘Mott insulating’ copper oxide materials.
    “The entire approach — doping a Mott insulator, the tin on silicon — was a deliberate strategy. Then came proving we’re seeing the properties of a doped Mott insulator as opposed to anything else and ruling out other interpretations. The next logical step was demonstrating superconductivity, and lo and behold, it worked,” Weitering said.
    “Discovery of new knowledge is a core mission of UT,” Weitering stated. “Although we don’t have an immediate application for our superconductor, we have established a proof of principle, which may lead to future practical applications.”

    Story Source:
    Materials provided by University of Tennessee at Knoxville. Note: Content may be edited for style and length.

  • Parental restrictions on tech use have little lasting effect into adulthood

    “Put your phone away!” “No more video games!” “Ten more minutes of YouTube and you’re done!”
    Kids growing up in the mobile internet era have heard them all, often uttered by well-meaning parents fearing long-term problems from overuse.
    But new University of Colorado Boulder research suggests such restrictions have little effect on technology use later in life, and that fears of widespread and long-lasting “tech addiction” may be overblown.
    “Are lots of people getting addicted to tech as teenagers and staying addicted as young adults? The answer from our research is ‘no’,” said lead author Stefanie Mollborn, a professor of sociology at the Institute of Behavioral Science. “We found that there is only a weak relationship between early technology use and later technology use, and what we do as parents matters less than most of us believe it will.”
    The study, which analyzes a survey of nearly 1,200 young adults plus extensive interviews with another 56, is the first to use such data to examine how digital technology use evolves from childhood to adulthood.
    The data were gathered prior to the pandemic, which has resulted in dramatic increases in the use of technology as millions of students have been forced to attend school and socialize online. But the authors say the findings should come as some comfort to parents worried about all that extra screen time.

    “This research addresses the moral panic about technology that we so often see,” said Joshua Goode, a doctoral student in sociology and co-author of the paper. “Many of those fears were anecdotal, but now that we have some data, they aren’t bearing out.”
    Published in Advances in Life Course Research, the paper is part of a 4-year National Science Foundation-funded project aimed at exploring how the mobile internet age truly is shaping America’s youth.
    Since 1997, time spent with digital technology has risen 32% among 2- to 5-year-olds and 23% among 6- to 11-year-olds, the team’s previous papers found. Even before the pandemic, adolescents spent 33 hours per week using digital technology outside of school.
    For the latest study, the research team focused on young adults ages 18 to 30, interviewing dozens of people about their current technology use, their tech use as teens, and how their parents or guardians restricted or encouraged it. The researchers also analyzed survey data from a nationally representative sample of nearly 1,200 participants, following the same people from adolescence to young adulthood.
    Surprisingly, parenting practices like setting time limits or prohibiting kids from watching shows during mealtimes had no effect on how much the study subjects used technology as young adults, researchers found.

    Those study subjects who grew up with fewer devices in the home or spent less time using technology as kids tended to spend slightly less time with tech in young adulthood — but statistically, the relationship was weak.
    What does shape how much time young adults spend on technology? Life in young adulthood, the research suggests.
    Young adults who hang out with a lot of people who are parents spend more time with tech (perhaps as a means of sharing parenting advice). Those whose friends are single tend toward higher use than the married crowd. College students, meantime, tend to believe they spend more time with technology than they ever have before or ever plan to again, the study found.
    “They feel like they are using tech a lot because they have to, they have it under control and they see a future when they can use less of it,” said Mollborn.
    From the dawn of comic books and silent movies to the birth of radio and TV, technological innovation has bred moral panic among older generations, the authors note.
    “We see that everyone is drawn to it, we get scared and we assume it is going to ruin today’s youth,” said Mollborn.
    In some cases, excess can have downsides. For instance, the researchers found that adolescents who play a lot of video games tend to get less physical activity.
    But digital technology use does not appear to crowd out sleep among teens, as some had feared, and use of social media or online videos doesn’t squeeze out exercise.
    In many ways, Goode notes, teens today are just swapping one form of tech for another, streaming YouTube instead of watching TV, or texting instead of talking on the phone.
    That is not to say that no one ever gets addicted, or that parents should never set limits or talk to their kids about technology’s pros and cons, Mollborn stresses.
    “What these data suggest is that the majority of American teens are not becoming irrevocably addicted to technology. It is a message of hope.”
    She recently launched a new study, interviewing teens and parents in the age of COVID-19. Interestingly, she said, parents seem less worried about their kids’ tech use during the pandemic than they were in the past.
    “They realize that kids need social interaction and the only way to get that right now is through screens. Many of them are saying, ‘Where would we be right now without technology?’”

  • For neural research, wireless chip shines light on the brain

    Researchers have developed a chip that is powered wirelessly and can be surgically implanted to read neural signals and stimulate the brain with both light and electrical current. The technology has been demonstrated successfully in rats and is designed for use as a research tool.
    “Our goal was to create a research tool that can be used to help us better understand the behavior of different regions of the brain, particularly in response to various forms of neural stimulation,” says Yaoyao Jia, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at North Carolina State University. “This tool will help us answer fundamental questions that could then pave the way for advances in addressing neurological disorders such as Alzheimer’s or Parkinson’s disease.”
    The new technology has two features that set it apart from the previous state of the art.
    First, it is fully wireless. Researchers can power the 5×3 mm² chip, which has an integrated power receiver coil, by applying an electromagnetic field. For example, in testing the researchers did with lab rats, the electromagnetic field surrounded each rat’s cage — so the device was fully powered regardless of what the rat was doing. The chip is also capable of sending and receiving information wirelessly.
    The second feature is that the chip is trimodal, meaning that it can perform three tasks.
    Current state-of-the-art neural interface chips of this kind can do two things: they can read neural signals in targeted regions of the brain by detecting electrical changes in those regions; and they can stimulate the brain by introducing a small electrical current into the brain tissue.
    The new chip can do both of those things, but it can also shine light onto the brain tissue — a function called optical stimulation. But for optical stimulation to work, you have to first genetically modify targeted neurons to make them respond to specific wavelengths of light.
    “When you use electrical stimulation, you have little control over where the electrical current goes,” Jia says. “But with optical stimulation, you can be far more precise, because you have only modified those neurons that you want to target in order to make them sensitive to light. This is an active field of research in neuroscience, but the field has lacked the electronic tools it needs to move forward. That’s where this work comes in.”
    In other words, by helping researchers (literally) shine a light on neural tissue, the new chip will help them (figuratively) shine a light on how the brain works.
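    As a purely hypothetical reading aid, the sketch below shows what host-side control of such a trimodal device might look like in software. Every class, method, and parameter name here is invented for illustration; none of it comes from the paper or from the chip’s actual wireless protocol.

    ```python
    # Hypothetical mock of the three operating modes described above;
    # all names, units, and values are invented for illustration.
    class TrimodalImplant:
        def record(self, channel: int, n_samples: int) -> list:
            """Mode 1: read neural signals from a targeted region."""
            return [0.0] * n_samples  # placeholder for streamed data

        def stimulate_electrical(self, channel: int, current_ua: float, ms: float) -> None:
            """Mode 2: inject a small electrical current into the tissue."""
            print(f"electrical stim: {current_ua} uA for {ms} ms on channel {channel}")

        def stimulate_optical(self, wavelength_nm: int, ms: float) -> None:
            """Mode 3: light pulse, affecting only the genetically
            modified, light-sensitive neurons."""
            print(f"optical stim: {wavelength_nm} nm for {ms} ms")

    implant = TrimodalImplant()
    baseline = implant.record(channel=3, n_samples=1000)
    implant.stimulate_optical(wavelength_nm=470, ms=5)  # target opsin-expressing cells
    response = implant.record(channel=3, n_samples=1000)
    ```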

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • New test reveals AI still lacks common sense

    Natural language processing (NLP) has taken great strides recently — but how much does AI understand of what it reads? Less than we thought, according to researchers at USC’s Department of Computer Science. In a recent paper, Assistant Professor Xiang Ren and PhD student Yuchen Lin found that despite advances, AI still doesn’t have the common sense needed to generate plausible sentences.
    “Current machine text-generation models can write an article that may be convincing to many humans, but they’re basically mimicking what they have seen in the training phase,” said Lin. “Our goal in this paper is to study the problem of whether current state-of-the-art text-generation models can write sentences to describe natural scenarios in our everyday lives.”
    Understanding scenarios in daily life
    Specifically, Ren and Lin tested the models’ ability to reason and showed there is a large gap between current text generation models and human performance. Given a set of common nouns and verbs, state-of-the-art NLP computer models were tasked with creating believable sentences describing an everyday scenario. While the models generated grammatically correct sentences, they were often logically incoherent.
    For instance, here’s one example sentence generated by a state-of-the-art model using the words “dog, frisbee, throw, catch”:
    “Two dogs are throwing frisbees at each other.”
    The test is based on the assumption that coherent ideas (in this case, “a person throws a frisbee and a dog catches it”) can’t be generated without a deeper awareness of common-sense concepts. In other words, common sense is more than just the correct understanding of language — it means you don’t have to explain everything in a conversation. This is a fundamental challenge in the goal of developing generalizable AI — but beyond academia, it’s relevant for consumers, too.

    Without common sense, chatbots and voice assistants built on these state-of-the-art natural-language models are vulnerable to failure. Common sense is also crucial if robots are to become more present in human environments. After all, if you ask a robot for hot milk, you expect it to know you want a cup of milk, not the whole carton.
    “We also show that if a generation model performs better on our test, it can also benefit other applications that need commonsense reasoning, such as robotic learning,” said Lin. “Robots need to understand natural scenarios in our daily life before they make reasonable actions to interact with people.”
    Joining Lin and Ren on the paper are USC’s Wangchunshu Zhou, Ming Shen, and Pei Zhou; Chandra Bhagavatula from the Allen Institute for Artificial Intelligence; and Yejin Choi from the Allen Institute for Artificial Intelligence and the Paul G. Allen School of Computer Science & Engineering at the University of Washington.
    The common sense test
    Common-sense reasoning, or the ability to make inferences using basic knowledge about the world — like the fact that dogs cannot throw frisbees to each other — has resisted AI researchers’ efforts for decades. State-of-the-art deep-learning models can now reach around 90% accuracy on existing benchmarks, so it would seem that NLP has gotten closer to its goal.

    But Ren, an expert in natural language processing, and Lin, his student, needed more convincing about this statistic’s accuracy. In their paper, published in the Findings of the Empirical Methods in Natural Language Processing (EMNLP) conference on Nov. 16, they challenge the effectiveness of the benchmark and, therefore, the level of progress the field has actually made.
    “Humans acquire the ability to compose sentences by learning to understand and use common concepts that they recognize in their surrounding environment,” said Lin.
    “Acquiring this ability is regarded as a major milestone in human development. But we wanted to test if machines can really acquire such generative commonsense reasoning ability.”
    To evaluate different machine models, the pair developed a constrained text-generation task called CommonGen, which can be used as a benchmark to test the generative common sense of machines. The researchers presented a dataset consisting of 35,141 concepts associated with 77,449 sentences. They found that even the best-performing model achieved an accuracy rate of only 31.6%, versus 63.5% for humans.
    “We were surprised that the models cannot recall the simple commonsense knowledge that ‘a human throwing a frisbee’ should be much more reasonable than a dog doing it,” said Lin. “We find even the strongest model, called the T5, after training with a large dataset, can still make silly mistakes.”
    It seems, the researchers said, that previous tests have not sufficiently challenged the models’ common-sense abilities, allowing them to simply mimic what they have seen in the training phase.
    “Previous studies have primarily focused on discriminative common sense,” said Ren. “They test machines with multi-choice questions, where the search space for the machine is small — usually four or five candidates.”
    A typical setting for discriminative common-sense testing is a multiple-choice question-answering task, for example: “Where do adults use glue sticks?” A: classroom B: office C: desk drawer.
    The answer here, of course, is “B: office.” Even computers can figure this out without much trouble. In contrast, a generative setting is more open-ended, such as the CommonGen task, where a model is asked to generate a natural sentence from given concepts.
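    To see the generative setting concretely, the snippet below feeds a CommonGen-style concept set to an off-the-shelf T5 checkpoint through the Hugging Face transformers library. The prompt format is an assumed stand-in rather than the exact encoding used in the paper, and an untuned model will often produce exactly the kind of implausible sentence the benchmark is designed to expose.

    ```python
    # Illustrative only: the prompt format and checkpoint choice are
    # assumptions, not the fine-tuned CommonGen setup from the paper.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")

    concepts = ["dog", "frisbee", "throw", "catch"]
    prompt = "generate a sentence with: " + ", ".join(concepts)

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=32, num_beams=4)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    # A commonsense-aware model should prefer "A person throws a frisbee
    # and the dog catches it" over "Two dogs are throwing frisbees at each other."
    ```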
    Ren explains: “With extensive model training, it is very easy to have a good performance on those tasks. Unlike those discriminative commonsense reasoning tasks, our proposed test focuses on the generative aspect of machine common sense.”
    Ren and Lin hope the data set will serve as a new benchmark to benefit future research about introducing common sense to natural language generation. In fact, they even have a leaderboard depicting scores achieved by the various popular models to help other researchers determine their viability for future projects.
    “By introducing common sense and other domain-specific knowledge to machines, I believe that one day we can see AI agents such as Samantha in the movie Her that generate natural responses and interact with our lives,” said Lin.

  • Researcher aids in the development of a pathway to solve cybersickness

    Bas Rokers, Associate Professor of Psychology and Director of the Neuroimaging Center at NYU Abu Dhabi, and a team of researchers have evaluated the state of research on cybersickness and formulated a research and development agenda to eliminate it, allowing for broader adoption of immersive technologies.
    In the paper titled Identifying Causes of and Solutions for Cybersickness in Immersive Technology: Reformulation of a Research and Development Agenda, published in the International Journal of Human-Computer Interaction, Rokers and his team discuss the process of creating a research and development agenda based on participant feedback from a workshop titled Cybersickness: Causes and Solutions and analysis of related research. The new agenda recommends prioritizing the creation of powerful, lightweight, and untethered head-worn displays, reducing visual latencies, standardizing symptom and aftereffect measurement, developing improved countermeasures, and improving the understanding of the magnitude of the problem and its implications for job performance.
    The results of this study identify a clear path toward solving cybersickness and enabling the widespread use of immersive technologies. In addition to their use in entertainment and gaming, VR and AR have significant applications in education, manufacturing, training, health care, retail, and tourism. For example, they can enable educators to introduce students to distant locations and let them immerse themselves in a way that textbooks cannot. They can also allow healthcare workers to reach patients in remote and underserved areas, providing diagnostics, surgical planning, and image-guided treatment.
    “As there are possible applications across many industries, understanding how to identify and evaluate the opportunities for mass adoption and the collaborative use of AR and VR is critical,” said Rokers. “Achieving the goal of resolving cybersickness will allow the world to embrace the potential of immersive technology to enhance training, performance, and recreation.”

    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

  • New electronic chip delivers smarter, light-powered AI

    Researchers have developed artificial intelligence technology that brings together imaging, processing, machine learning and memory in one electronic chip, powered by light.
    The prototype shrinks artificial intelligence technology by imitating the way that the human brain processes visual information.
    The nanoscale advance combines the core software needed to drive artificial intelligence with image-capturing hardware in a single electronic device.
    With further development, the light-driven prototype could enable smarter and smaller autonomous technologies like drones and robotics, plus smart wearables and bionic implants like artificial retinas.
    The study, from an international team of Australian, American and Chinese researchers led by RMIT University, is published in the journal Advanced Materials.
    Lead researcher Associate Professor Sumeet Walia, from RMIT, said the prototype delivered brain-like functionality in one powerful device.

    “Our new technology radically boosts efficiency and accuracy by bringing multiple components and functionalities into a single platform,” said Walia, who also co-leads the Functional Materials and Microsystems Research Group.
    “It’s getting us closer to an all-in-one AI device inspired by nature’s greatest computing innovation — the human brain.
    “Our aim is to replicate a core feature of how the brain learns, through imprinting vision as memory.
    “The prototype we’ve developed is a major leap forward towards neurorobotics, better technologies for human-machine interaction and scalable bionic systems.”
    Total package: advancing AI
    Typically, artificial intelligence relies heavily on software and off-site data processing.

    The new prototype aims to integrate electronic hardware and intelligence together, for fast on-site decisions.
    “Imagine a dash cam in a car that’s integrated with such neuro-inspired hardware — it can recognise lights, signs, objects and make instant decisions, without having to connect to the internet,” Walia said.
    “By bringing it all together into one chip, we can deliver unprecedented levels of efficiency and speed in autonomous and AI-driven decision-making.”
    The technology builds on an earlier prototype chip from the RMIT team, which used light to create and modify memories.
    New built-in features mean the chip can now capture and automatically enhance images, classify numbers, and be trained to recognise patterns and images with an accuracy rate of over 90%.
    The device is also readily compatible with existing electronics and silicon technologies, for effortless future integration.
    Seeing the light: how the tech works
    The prototype is inspired by optogenetics, an emerging tool in biotechnology that allows scientists to delve into the body’s electrical system with great precision and use light to manipulate neurons.
    The AI chip is based on an ultra-thin material — black phosphorus — that changes electrical resistance in response to different wavelengths of light.
    The different functionalities such as imaging or memory storage are achieved by shining different colours of light on the chip.
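    As a deliberately crude illustration of light-programmed functionality, the toy model below stores a value that is raised or lowered depending on the colour of light it is exposed to. The wavelength threshold, step size, and numbers are invented for the example and are not device physics from the RMIT chip.

    ```python
    # Toy model only: the 500 nm threshold and 0.1 step are arbitrary
    # choices meant to mimic 'different colours drive different functions'.
    class PhotoMemoryCell:
        def __init__(self) -> None:
            self.conductance = 0.5  # normalized stored state, 0..1

        def expose(self, wavelength_nm: float, dose: float) -> None:
            """Shorter wavelengths raise conductance ('write');
            longer wavelengths lower it ('erase')."""
            step = 0.1 * dose
            if wavelength_nm < 500:
                self.conductance = min(1.0, self.conductance + step)
            else:
                self.conductance = max(0.0, self.conductance - step)

        def read(self) -> float:
            """Electrical read-out that leaves the stored state unchanged."""
            return self.conductance

    cell = PhotoMemoryCell()
    cell.expose(wavelength_nm=450, dose=3)  # 'write' with blue light
    print(round(cell.read(), 2))            # 0.8
    cell.expose(wavelength_nm=650, dose=2)  # partial 'erase' with red light
    print(round(cell.read(), 2))            # 0.6
    ```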
    Study lead author Dr Taimur Ahmed, from RMIT, said light-based computing was faster, more accurate and required far less energy than existing technologies.
    “By packing so much core functionality into one compact nanoscale device, we can broaden the horizons for machine learning and AI to be integrated into smaller applications,” Ahmed said.
    “Using our chip with artificial retinas, for example, would enable scientists to miniaturise that emerging technology and improve accuracy of the bionic eye.
    “Our prototype is a significant advance towards the ultimate in electronics: a brain-on-a-chip that can learn from its environment just like we do.”

  • Machine learning innovation to develop chemical library

    Machine learning has been used widely in the chemical sciences for drug design and other processes.
    However, models that are prospectively tested on new reaction outcomes, and that help human chemists interpret the reactivity decisions such models make, remain extremely limited.
    Purdue University innovators have introduced chemical reactivity flowcharts to help chemists interpret reaction outcomes using statistically robust machine learning models trained on a small number of reactions. The work is published in Organic Letters.
    “Developing new and fast reactions is essential for chemical library design in drug discovery,” said Gaurav Chopra, an assistant professor of analytical and physical chemistry in Purdue’s College of Science. “We have developed a new, fast and one-pot multicomponent reaction (MCR) of N-sulfonylimines that was used as a representative case for generating training data for machine learning models, predicting reaction outcomes and testing new reactions in a blind prospective manner.
    “We expect this work to pave the way in changing the current paradigm by developing accurate, human understandable machine learning models to interpret reaction outcomes that will augment the creativity and efficiency of human chemists to discover new chemical reactions and enhance organic and process chemistry pipelines.”
    Chopra said the Purdue team’s human-interpretable machine learning approach, introduced as chemical reactivity flowcharts, can be extended to explore the reactivity of any MCR or any other chemical reaction. It does not require large-scale robotics, since chemists can use these methods while doing reaction screening in their laboratories.
    “We provide the first report of a framework to combine fast synthetic chemistry experiments and quantum chemical calculations for understanding reaction mechanism and human-interpretable statistically robust machine learning models to identify chemical patterns for predicting and experimentally testing heterogeneous reactivity of N-sulfonylimines,” Chopra said.
    This work aligns with other innovations and research from Chopra’s labs, whose team members work with the Purdue Research Foundation Office of Technology Commercialization to patent numerous technologies.
    “The unprecedented use of a machine learning model in generating chemical reactivity flowcharts helped us understand the reactivity of the different N-sulfonylimines traditionally used in MCRs,” said Krupal Jethava, a postdoctoral fellow in Chopra’s laboratory, who co-authored the work. “We believe that working hand in hand with organic and computational chemists will open up a new avenue for solving complex chemical reactivity problems for other reactions in the future.”
    Chopra said the Purdue researchers hope their work will become one of many examples showcasing the power of machine learning for developing new synthetic methodology in drug design and beyond.
    “In this work, we strived to ensure that our machine learning model can be easily understood by chemists not well versed in this field,” said Jonathan Fine, a former Purdue graduate student, who co-authored the work. “We believe that these models can be used not only to predict reactions but also to better understand when a given reaction will occur. To demonstrate this, we used our model to guide the selection of additional substrates to test whether a reaction would occur.”
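    The spirit of a human-readable reactivity flowchart can be sketched with a small interpretable model. The example below trains a shallow decision tree on a handful of invented reactions and prints it as flowchart-like rules; the descriptors, labels, and data are hypothetical placeholders, not the N-sulfonylimine dataset or the exact models used in the paper.

    ```python
    # Hypothetical data and descriptors, used only to illustrate turning a
    # small interpretable model into a reactivity 'flowchart'.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each row: [electrophilicity score, steric bulk score, temperature in C]
    X = [
        [0.9, 0.2, 25], [0.8, 0.3, 25], [0.7, 0.6, 40],
        [0.3, 0.7, 25], [0.2, 0.8, 40], [0.4, 0.5, 60],
    ]
    y = ["reacts", "reacts", "reacts", "no reaction", "no reaction", "reacts"]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # The exported rules read like a flowchart a chemist can follow by hand.
    print(export_text(tree, feature_names=["electrophilicity", "steric_bulk", "temp_C"]))
    ```

    A model this small can be printed, argued over, and checked against chemical intuition, which is the point of keeping it human-interpretable.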

    Story Source:
    Materials provided by Purdue University. Original written by Chris Adam. Note: Content may be edited for style and length.