More stories

  • Like adults, children by age 3 prefer seeing fractal patterns

    By the time children are 3 years old they already have an adult-like preference for visual fractal patterns commonly seen in nature, according to University of Oregon researchers.
    That discovery emerged among children who’ve been raised in a world of Euclidean geometry, such as houses with rooms constructed with straight lines in a simple non-repeating manner, said the study’s lead author Kelly E. Robles, a doctoral student in the UO’s Department of Psychology.
    “Unlike early humans who lived outside on savannahs, modern-day humans spend the majority of their early lives inside these human-made structures,” Robles said. “So, since children are not heavily exposed to these natural low-to-moderate complexity fractal patterns, this preference must come from something earlier in development or perhaps be innate.”
    The study was published online Nov. 25 in the Nature journal Humanities and Social Sciences Communications. In it, researchers explored how individual differences in processing styles may account for trends in fractal fluency. Previous research had suggested that a preference for fractal patterns may develop as a result of environmental and developmental factors acquired across a person’s lifespan.
    In the UO study, researchers exposed participants — 82 adults, ages 18-33, and 96 children, ages 3-10 — to images of fractal patterns, exact and statistical, ranging in complexity on computer screens.
    Exact fractals are highly ordered such that the same basic pattern repeats exactly at every scale and may possess spatial symmetry such as that seen in snowflakes. Statistical fractals, in contrast, repeat in a similar but not exact fashion across scale and do not possess spatial symmetry as seen in coastlines, clouds, mountains, rivers and trees. Both forms appear in art across many cultures.
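The distinction can be illustrated with a short sketch. The two constructions below (a Cantor-style exact fractal and a midpoint-displacement statistical fractal) are standard textbook examples chosen for illustration, not the stimuli used in the study:

```python
import random

def exact_fractal(depth):
    """Cantor-style exact fractal: the same split repeats identically at
    every scale, so the pattern is perfectly ordered."""
    if depth == 0:
        return [(0.0, 1.0)]
    out = []
    for a, b in exact_fractal(depth - 1):
        third = (b - a) / 3.0
        out.append((a, a + third))      # keep the left third
        out.append((b - third, b))      # keep the right third
    return out

def statistical_fractal(depth, roughness=0.5, seed=42):
    """Midpoint displacement: structure repeats in a similar but not exact
    fashion across scales, like a coastline profile."""
    rng = random.Random(seed)
    heights = [0.0, 0.0]                # endpoints of the profile
    scale = 1.0
    for _ in range(depth):
        refined = []
        for i in range(len(heights) - 1):
            mid = (heights[i] + heights[i + 1]) / 2 + rng.uniform(-scale, scale)
            refined += [heights[i], mid]
        refined.append(heights[-1])
        heights = refined
        scale *= roughness              # smaller displacements at finer scales
    return heights

print(len(exact_fractal(4)), len(statistical_fractal(4)))
```

The exact construction doubles its segment count at every level in a perfectly predictable way, while the statistical one injects random displacements whose size shrinks with scale, which is what makes it resemble coastlines and clouds rather than snowflakes.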

    When viewing the fractal patterns, Robles said, subjects chose favorites between different pairs of images that differed in complexity. When looking at exact fractal patterns, selections involved different pairs of snowflake-like or tree-branch-like images. For the statistical fractals, selections involved choosing between pairs of cloud-like images.
    “Since people prefer a balance of simplicity and complexity, we were looking to confirm that people preferred low-to-moderate complexity in statistically repeating patterns, and that the presence of order in exact repeating patterns allowed for a tolerance of and preference for more complex patterns,” she said.
    Although there were some differences in the preferences of adults and children, the overall trend was similar. Exact patterns with greater complexity were preferred, while preference for statistical patterns peaked at low-to-moderate complexity and then decreased with additional complexity.
    In subsequent steps with the participants, the UO team was able to rule out the possibility that age-related perceptual strategies or biases may have driven different preferences for statistical and exact patterns.
    “We found that people prefer the most common natural pattern, the statistical fractal patterns of low-moderate complexity, and that this preference does not stem from or vary across decades of exposure to nature or to individual differences in how we process images,” Robles said. “Our preferences for fractals are set before our third birthdays, suggesting that our visual system is tuned to better process these patterns that are highly prevalent in nature.”
    The aesthetic experience of viewing nature’s fractals holds huge potential benefits, ranging from stress reduction to relief from mental fatigue, said co-author Richard Taylor, professor and head of the UO’s Department of Physics.
    “Nature provides these benefits for free, but we increasingly find ourselves surrounded by urban landscapes devoid of fractals,” he said. “This study shows that incorporating fractals into urban environments can begin providing benefits from a very early age.”
    In his own research, Taylor is using fractal-inspired designs in an effort to create implants to treat macular degeneration. He and co-author Margaret Sereno, professor of psychology and director of the Integrative Perception Lab, have also published on the positive aesthetic benefits of installing fractal solar panels and window blinds.
    Fractal carpets, recently installed in the UO’s Phil and Penny Knight Campus for Accelerating Scientific Impact, are seen in the new facility’s virtual grand opening tour. Sereno and Taylor are also collaborating on future applications with Ihab Elzeyadi, a professor in the UO’s Department of Architecture.

  • New computational method validates images without 'ground truth'

    A realtor sends a prospective homebuyer a blurry photograph of a house taken from across the street. The homebuyer can compare it to the real thing — look at the picture, then look at the real house — and see that the bay window is actually two windows close together, the flowers out front are plastic and what looked like a door is actually a hole in the wall.
    What if you aren’t looking at a picture of a house, but something very small — like a protein? There is no way to see it without a specialized device so there’s nothing to judge the image against, no “ground truth,” as it’s called. There isn’t much to do but trust that the imaging equipment and the computer model used to create images are accurate.
    Now, however, researchers in the lab of Matthew Lew at the McKelvey School of Engineering at Washington University in St. Louis have developed a computational method to determine how much confidence a scientist should have that their measurements, at any given point, are accurate, given the model used to produce them.
    The research was published Dec. 11 in Nature Communications.
    “Fundamentally, this is a forensic tool to tell you if something is right or not,” said Lew, assistant professor in the Preston M. Green Department of Electrical & Systems Engineering. It’s not simply a way to get a sharper picture. “This is a whole new way of validating the trustworthiness of each detail within a scientific image.
    “It’s not about providing better resolution,” he added of the computational method, called Wasserstein-induced flux (WIF). “It’s saying, ‘This part of the image might be wrong or misplaced.'”
    The process used by scientists to “see” the very small — single-molecule localization microscopy (SMLM) — relies on capturing massive amounts of information from the object being imaged. That information is then interpreted by a computer model that ultimately strips away most of the data, reconstructing an ostensibly accurate image — a true picture of a biological structure, like an amyloid protein or a cell membrane.

    There are a few methods already in use to help determine whether an image is, generally speaking, a good representation of the thing being imaged. These methods, however, cannot determine how likely it is that any single data point within an image is accurate.
    Hesam Mazidi, a recent graduate who carried out this research as a PhD student in Lew’s lab, tackled the problem.
    “We wanted to see if there was a way we could do something about this scenario without ground truth,” he said. “If we could use modeling and algorithmic analysis to quantify if our measurements are faithful, or accurate enough.”
    The researchers didn’t have ground truth, no house to compare to the realtor’s picture, but they weren’t empty-handed. Mazidi took advantage of a trove of information gathered by the imaging device that usually gets discarded as noise. Because the distribution of that noise conforms to specific laws of physics, the researchers could use it as a stand-in for ground truth.
    “He was able to say, ‘I know how the noise of the image is manifested, that’s a fundamental physical law,'” Lew said of Mazidi’s insight.
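As a rough illustration of the principle that noise statistics can play the role of ground truth (this is a simplified sketch of the idea, not the actual WIF computation): photon counts on a camera follow Poisson statistics, so a reconstruction whose predicted intensities are faithful explains the observed noisy counts far better than one whose intensities are misplaced. All pixel values below are hypothetical.

```python
import math
import random

def poisson_log_likelihood(observed, predicted):
    """Log-likelihood of observed photon counts under a Poisson noise model
    whose means are the reconstruction's predicted intensities."""
    ll = 0.0
    for k, lam in zip(observed, predicted):
        lam = max(lam, 1e-9)                        # guard against log(0)
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

def poisson_draw(lam, rng):
    """Poisson sample via Knuth's method (fine for small means)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(0)
true_means = [5.0, 20.0, 50.0, 8.0]                  # hypothetical pixel intensities
counts = [poisson_draw(m, rng) for m in true_means]  # simulated camera counts

good_model = true_means                              # faithful reconstruction
bad_model = [50.0, 5.0, 8.0, 20.0]                   # same intensities, misplaced

# The faithful model explains the observed noise far better:
print(poisson_log_likelihood(counts, good_model) >
      poisson_log_likelihood(counts, bad_model))
```

The key point is that no noiseless reference image is needed: the statistical law governing the noise itself is what the observed data are checked against.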

    “He went back to the noisy, imperfect domain of the actual scientific measurement,” Lew said, referring to all of the data points recorded by the imaging device. “There is real data there that people throw away and ignore.”
    Instead of ignoring it, Mazidi looked to see how well the model predicted the noise — given the final image and the model that created it.
    Analyzing so many data points is akin to running the imaging device over and over again, performing multiple test runs to calibrate it.
    “All of those measurements give us statistical confidence,” Lew said.
    Rather than determining whether the entire image is probable given the model, WIF determines whether any given point on the image is probable, based on the assumptions built into the model.
    Ultimately, Mazidi developed a method that can say with strong statistical confidence that any given data point in the final image should or should not be in a particular spot.
    It’s as if the algorithm analyzed the picture of the house and — without ever having seen the place — it cleaned up the image, revealing the hole in the wall.
    In the end, the analysis yields a single number per data point between -1 and 1. The closer the score is to 1, the more confident a scientist can be that a point on an image is, in fact, accurately representing the thing being imaged.
    This process can also help scientists improve their models. “If you can quantify performance, then you can also improve your model by using the score,” Mazidi said. Without access to ground truth, “it allows us to evaluate performance under real experimental conditions rather than a simulation.”
    The potential uses for WIF are far-reaching. Lew said the next step is to use it to validate machine learning, where biased datasets may produce inaccurate outputs.
    How would a researcher know, in such a case, that their data was biased? “Using this model, you’d be able to test on data that has no ground truth, where you don’t know if the neural network was trained with data that are similar to real-world data.
    “Care has to be taken in every type of measurement you take,” Lew said. “Sometimes we just want to push the big red button and see what we get, but we have to remember, there’s a lot that happens when you push that button.”

  • Challenges of fusing robotics and neuroscience

    Combining neuroscience and robotics research has yielded impressive results in the rehabilitation of paraplegic patients. A research team led by Prof. Gordon Cheng from the Technical University of Munich (TUM) was able to show that exoskeleton training not only helped patients to walk, but also stimulated their healing process. With these findings in mind, Prof. Cheng wants to take the fusion of robotics and neuroscience to the next level.
    Prof. Cheng, in your sensational study under the “Walk Again” project, you found that paraplegic patients who trained with the exoskeleton regained a certain degree of control over the movement of their legs. Back then, this came as a complete surprise to you …
    … and it somehow still is. Even though we had this breakthrough four years ago, it was only the beginning. To my regret, none of these patients is walking around freely and unaided yet. We have only touched the tip of the iceberg. To develop better medical devices, we need to dig deeper into understanding how the brain works and how to translate this into robotics.
    In your paper published in Science Robotics this month, you and your colleague Prof. Nicolelis, a leading expert in neuroscience and in particular in the area of the human-machine interface, argue that some key challenges in the fusion of neuroscience and robotics need to be overcome in order to take the next steps. One of them is to “close the loop between the brain and the machine” — what do you mean by that?
    The idea behind this is that the coupling between the brain and the machine should work in a way where the brain thinks of the machine as an extension of the body. Let’s take driving as an example. While driving a car, you don’t think about your moves, do you? But we still don’t know how this really works. My theory is that the brain somehow adapts to the car as if it is a part of the body. With this general idea in mind, it would be great to have an exoskeleton that would be embraced by the brain in the same way.
    How could this be achieved in practice?
    The exoskeleton that we were using for our research so far is actually just a big chunk of metal and thus rather cumbersome for the wearer. I want to develop a “soft” exoskeleton — something that you can just wear like a piece of clothing that can both sense the user’s movement intentions and provide instantaneous feedback. Integrating this with recent advances in brain-machine interfaces that allow real-time measurement of brain responses would enable the seamless adaptation of such exoskeletons to the needs of individual users. Given the recent technological advances and a better understanding of how to decode the user’s momentary brain activity, the time is ripe for their integration into more human-centered, or better, brain-centered solutions.
    What other pieces are still missing? You talked about providing a “more realistic functional model” for both disciplines.
    We have to facilitate the transfer through new developments, for example robots that come closer to human behaviour and the construction of the human body, and thus lower the threshold for the use of robots in neuroscience. This is why we need more realistic functional models, which means that robots should be able to mimic human characteristics. Take the example of a humanoid robot actuated with artificial muscles. This natural construction, mimicking muscles rather than the traditional motorized actuation, would provide neuroscientists with a more realistic model for their studies. We see this as a win-win situation that will facilitate better cooperation between neuroscience and robotics in the future.
    You are not alone in the mission of overcoming these challenges. In your Elite Graduate Program in Neuroengineering, the first and only one of its kind in Germany combining experimental and theoretical neuroscience with in-depth training in engineering, you are bringing together the best students in the field.
    As described above, combining the two disciplines of robotics and neuroscience is a tough exercise, which is one of the main reasons why I created this master’s program in Munich. To me, it is important to teach the students to think more broadly and across disciplines, to find previously unimagined solutions. This is why lecturers from various fields, for example hospitals or the sports department, teach our students. We need to create a new community and a new culture in the field of engineering. From my standpoint, education is the key factor.

    Story Source:
    Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.

  • 'The robot made me do it': Robots encourage risk-taking behavior in people

    New research has shown that robots can encourage people to take greater risks in a simulated gambling scenario than they would if there were nothing to influence their behaviour. Increasing our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications, which this study set out to explore.
    Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, who led the study, explained: “We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
    This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduate students taking the Balloon Analogue Risk Task (BART), a computer assessment that asks participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press of the spacebar, the balloon inflates slightly, and 1 penny is added to the player’s “temporary money bank.” Each balloon can explode at random, in which case the player loses any money they have won for that balloon; the player has the option to “cash in” before this happens and move on to the next balloon.
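The risk trade-off in the task can be made concrete with a minimal simulation. Only the 1-penny-per-pump rule comes from the article; the burst threshold, its range, and the pumping strategies below are illustrative assumptions:

```python
import random

def play_balloon(pump_target, max_pumps=20, rng=random):
    """One BART balloon: each pump adds 1 penny to the temporary bank;
    the balloon bursts at a uniformly random threshold, losing that bank."""
    burst_at = rng.randint(1, max_pumps)    # hidden burst point (assumed uniform)
    pumps = 0
    while pumps < pump_target:
        pumps += 1
        if pumps >= burst_at:
            return 0                        # balloon exploded: bank lost
    return pumps                            # cashed in: pennies earned

def expected_earnings(pump_target, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(play_balloon(pump_target, rng=rng) for _ in range(trials)) / trials

# Pumping more raises the possible payoff but also the burst risk:
for target in (5, 10, 15):
    print(target, round(expected_earnings(target), 2))
```

With a uniform burst point, expected earnings peak at an intermediate pumping strategy, which is exactly the risk-versus-reward tension the task is designed to measure.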
    One third of the participants took the test in a room on their own (the control group), one third took the test alongside a robot that provided them with the instructions but was silent the rest of the time, and the final third, the experimental group, took the test with the robot providing the instructions as well as speaking encouraging statements such as “why did you stop pumping?”
    The results showed that the group who were encouraged by the robot took more risks, blowing up their balloons significantly more frequently than those in the other groups did. They also earned more money overall. There was no significant difference in the behaviours of the students accompanied by the silent robot and those with no robot.
    Dr Hanoch said: “We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
    The researchers now believe that further studies are needed to see whether similar results would emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars.
    Dr Hanoch concluded, “With the wide spread of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.”
    “On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behavior. On the other hand, our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”

    Story Source:
    Materials provided by University of Southampton. Note: Content may be edited for style and length.

  • Artificial intelligence helps scientists develop new general models in ecology

    In ecology, millions of species interact with one another and with their environment in billions of different ways. Ecosystems often seem chaotic, or at least overwhelming, for someone trying to understand them and make predictions for the future.
    Artificial intelligence and machine learning are able to detect patterns and predict outcomes in ways that often resemble human reasoning. They pave the way to increasingly powerful cooperation between humans and computers.
    Within AI, evolutionary computation methods replicate in some sense the processes of evolution of species in the natural world. A particular method called symbolic regression allows the evolution of human-interpretable formulas that explain natural laws.
    “We used symbolic regression to demonstrate that computers are able to derive formulas that represent the way ecosystems or species behave in space and time. These formulas are also easy to understand. They pave the way for general rules in ecology, something that most methods in AI cannot do,” says Pedro Cardoso, curator at the Finnish Museum of Natural History, University of Helsinki.
    With the help of the symbolic regression method, an interdisciplinary team from Finland, Portugal, and France was able to explain why some species exist in some regions and not in others, and why some regions have more species than others.
    The researchers were able, for example, to find a new general model that explains why some islands have more species than others. Oceanic islands have a natural life cycle, emerging from volcanoes and eventually submerging through erosion after millions of years. With no human input, the algorithm found that the number of species on an island increases with island age and peaks at intermediate ages, when erosion is still low.
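To illustrate the kind of hump-shaped, human-interpretable formula such a method can recover, here is a sketch that fits one candidate form, S(t) = a·t·exp(-b·t), to synthetic island data. Both the formula and the data are assumptions for illustration, not the model reported in the study, and a crude grid search stands in for symbolic regression's evolutionary search over formulas:

```python
import math

def richness(t, a, b):
    """Hump-shaped richness model S(t) = a * t * exp(-b * t): richness rises
    for young islands, peaks at t = 1/b, then declines as erosion dominates."""
    return a * t * math.exp(-b * t)

# Synthetic island data (age in Myr, species count), made up for illustration.
islands = [(1, 18), (3, 42), (5, 50), (8, 45), (12, 30), (20, 12)]

def sse(a, b):
    """Sum of squared errors of the model against the island data."""
    return sum((s - richness(t, a, b)) ** 2 for t, s in islands)

# Crude grid search stands in for symbolic regression's search over formulas.
best = min(((i / 10, j / 100) for i in range(1, 300) for j in range(1, 100)),
           key=lambda p: sse(*p))
a, b = best
print(f"fit: S(t) = {a:.1f} * t * exp(-{b:.2f} * t); peak age ~ {1 / b:.1f} Myr")
```

The appeal of the approach is that the output is a closed-form expression a human can read and reason about, rather than an opaque predictive model.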
    “The explanation was known, a couple of formulas already existed, but we were able to find new ones that outperform the existing ones under certain circumstances,” says Vasco Branco, PhD student working on the automation of extinction risk assessments at the University of Helsinki.
    The research proposes explainable artificial intelligence as a field to explore, promoting cooperation between humans and machines in ways that are only now beginning to scratch the surface.
    “Evolving free-form equations purely from data, often without prior human inference or hypotheses, may represent a very powerful tool in the arsenal of a discipline as complex as ecology,” says Luis Correia, computer science professor at the University of Lisbon.

    Story Source:
    Materials provided by University of Helsinki. Note: Content may be edited for style and length.

  • Artificial intelligence improves control of powerful plasma accelerators

    Researchers have used AI to control beams for the next generation of smaller, cheaper accelerators for research, medical and industrial applications.
    Experiments led by Imperial College London researchers, using the Science and Technology Facilities Council’s Central Laser Facility (CLF), showed that an algorithm was able to tune the complex parameters involved in controlling the next generation of plasma-based particle accelerators.
    The algorithm was able to optimize the accelerator much more quickly than a human operator, and could even outperform experiments on similar laser systems.
    These accelerators focus the energy of the world’s most powerful lasers down to a spot the size of a skin cell, producing electrons and x-rays with equipment a fraction of the size of conventional accelerators.
    The electrons and x-rays can be used for scientific research, such as probing the atomic structure of materials; in industrial applications, such as for producing consumer electronics and vulcanised rubber for car tyres; and could also be used in medical applications, such as cancer treatments and medical imaging.
    Several facilities using these new accelerators are in various stages of planning and construction around the world, including the CLF’s Extreme Photonics Applications Centre (EPAC) in the UK, and the new discovery could help them work at their best in the future. The results are published today in Nature Communications.
    First author Dr Rob Shalloo, who completed the work at Imperial and is now at the accelerator centre DESY, said: “The techniques we have developed will be instrumental in getting the most out of a new generation of advanced plasma accelerator facilities under construction within the UK and worldwide.

    “Plasma accelerator technology provides uniquely short bursts of electrons and x-rays, which are already finding uses in many areas of scientific study. With our developments, we hope to broaden accessibility to these compact accelerators, allowing scientists in other disciplines and those wishing to use these machines for applications, to benefit from the technology without being an expert in plasma accelerators.”
    The team worked with laser wakefield accelerators. These combine the world’s most powerful lasers with a source of plasma (ionised gas) to create concentrated beams of electrons and x-rays. Traditional accelerators need hundreds of metres to kilometres to accelerate electrons, but wakefield accelerators can manage the same acceleration within the space of millimetres, drastically reducing the size and cost of the equipment.
    However, because wakefield accelerators operate in the extreme conditions created when lasers are combined with plasma, they can be difficult to control and optimise to get the best performance. In wakefield acceleration, an ultrashort laser pulse is driven into plasma, creating a wave that is used to accelerate electrons. Both the laser and plasma have several parameters that can be tweaked to control the interaction, such as the shape and intensity of the laser pulse, or the density and length of the plasma.
    While a human operator can tweak these parameters, it is difficult to know how to optimise so many parameters at once. Instead, the team turned to artificial intelligence, creating a machine learning algorithm to optimise the performance of the accelerator.
    The algorithm adjusted up to six parameters controlling the laser and plasma, fired the laser, analysed the data, and reset the parameters, repeating this loop many times until the optimal parameter configuration was reached.
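The measure-and-adjust loop can be sketched as follows. The real system used a more sophisticated machine learning optimiser and a physical measurement after each laser shot; this sketch substitutes simple hill climbing against a stand-in objective with a purely hypothetical optimum:

```python
import random

def beam_quality(params):
    """Stand-in objective: in the experiment this is a measured beam property
    after firing the laser; the optimum below is purely hypothetical."""
    target = [0.3, -0.1, 0.8, 0.0, 0.5, -0.4]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def optimise(n_iters=500, n_params=6, step=0.3, seed=7):
    """Propose settings, 'fire', measure, keep the best: a simple hill climb."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(n_params)]
    best_score = beam_quality(best)
    for _ in range(n_iters):
        cand = [p + rng.gauss(0, step) for p in best]  # tweak near current best
        score = beam_quality(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

params, score = optimise()
print(round(score, 4))   # negative squared distance to the hypothetical optimum
```

The advantage over manual tuning is the same one the article describes: the loop can evaluate many six-parameter configurations far faster than a human operator could.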
    Lead researcher Dr Matthew Streeter, who completed the work at Imperial and is now at Queen’s University Belfast, said: “Our work resulted in an autonomous plasma accelerator, the first of its kind. As well as allowing us to efficiently optimise the accelerator, it also simplifies their operation and allows us to spend more of our efforts on exploring the fundamental physics behind these extreme machines.”
    The team demonstrated their technique using the Gemini laser system at the CLF, and have already begun to use it in further experiments to probe the atomic structure of materials in extreme conditions and in studying antimatter and quantum physics.
    The data gathered during the optimisation process also provided new insight into the dynamics of the laser-plasma interaction inside the accelerator, potentially informing future designs to further improve accelerator performance.

  • Artificial visual system of record-low energy consumption for the next generation of AI

    A joint research project led by City University of Hong Kong (CityU) has built an ultralow-power artificial visual system that mimics the human brain and successfully performed data-intensive cognitive tasks. The experimental results could provide a promising device system for the next generation of artificial intelligence (AI) applications.
    The research team is led by Professor Johnny Chung-yin Ho, Associate Head and Professor of the Department of Materials Science and Engineering (MSE) at CityU. Their findings have been published in the scientific journal Science Advances, titled “Artificial visual system enabled by quasi-two-dimensional electron gases in oxide superlattice nanowires.”
    As advances in the semiconductor technologies used in digital computing show signs of stagnation, neuromorphic (brain-like) computing systems have been regarded as one of the alternatives for the future. Scientists have been trying to develop the next generation of advanced AI computers, which would be as lightweight, energy-efficient and adaptable as the human brain.
    “Unfortunately, effectively emulating the brain’s neuroplasticity — the ability to change its neural network connections or re-wire itself — in existing artificial synapses through an ultralow-power manner is still challenging,” said Professor Ho.
    Enhancing energy efficiency of artificial synapses
    An artificial synapse is a device version of the biological synapse, the junction across which two neurons pass electrical signals to communicate in the brain. It mimics the brain’s efficient neural signal transmission and memory formation process.

    To enhance the energy efficiency of artificial synapses, Professor Ho’s research team has introduced quasi-two-dimensional electron gases (quasi-2DEGs) into artificial neuromorphic systems for the first time. Utilising oxide superlattice nanowires they developed, a kind of semiconductor with intriguing electrical properties, the team designed quasi-2DEG photonic synaptic devices that achieve a record-low energy consumption down to sub-femtojoule (0.7 fJ) per synaptic event, a 93% decrease compared with synapses in the human brain.
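The 93% figure is consistent with a biological synaptic event costing roughly 10 fJ; note that this reference value is inferred here from the reported percentage rather than stated in the text:

```python
device_energy_fj = 0.7   # reported energy per synaptic event (sub-femtojoule)
brain_energy_fj = 10.0   # assumed energy of a biological synaptic event,
                         # inferred from the reported 93% figure

reduction = 1 - device_energy_fj / brain_energy_fj
print(f"{reduction:.0%} lower energy per event than a biological synapse")
```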
    “Our experiments have demonstrated that the artificial visual system based on our photonic synapses could simultaneously perform light detection, brain-like processing and memory functions in an ultralow-power manner. We believe our findings can provide a promising strategy to build artificial neuromorphic systems for applications in bionic devices, electronic eyes, and multifunctional robotics in future,” said Professor Ho.
    Resembling conductance change in synapses
    He explained that a two-dimensional electron gas occurs when electrons are confined to a two-dimensional interface between two different materials. Since there are no electron-electron or electron-ion interactions, the electrons move freely along the interface.
    Upon exposure to a light pulse, a series of reactions is induced between oxygen molecules from the environment adsorbed onto the nanowire surface and the free electrons from the two-dimensional electron gas inside the oxide superlattice nanowires, changing the conductance of the photonic synapses. Given the outstanding charge-carrier mobility and sensitivity to light stimuli of the superlattice nanowires, the change of conductance in the photonic synapses resembles that in a biological synapse. Hence the quasi-2DEG photonic synapses can mimic how the neurons in the human brain transmit and memorise signals.

    A combo of photo-detection and memory functions
    “The special properties of the superlattice nanowire materials enable our synapses to have both photo-detecting and memory functions simultaneously. Simply put, the nanowire superlattice cores can detect the light stimulus with high sensitivity, and the nanowire shells promote the memory functions. So there is no need to construct additional memory modules for charge storage in an image-sensing chip. As a result, our device can save energy,” explained Professor Ho.
    With this quasi-2DEG photonic synapse, they have built an artificial visual system that could accurately and efficiently detect a patterned light stimulus and “memorise” the shape of the stimulus for an hour. “It is just like our brain will remember what we saw for some time,” described Professor Ho.
    He added that the way the team synthesised the photonic synapses and the artificial visual system did not require complex equipment. And the devices could be made on flexible plastics in a scalable and low-cost manner.
    Professor Ho is the corresponding author of the paper. The co-first authors are Meng You and Li Fangzhou, PhD students from MSE at CityU. Other team members include Dr Bu Xiuming, Dr Yip Sen-po, Kang Xiaolin, Wei Renjie, Li Dapan and Wang Fei, who are all from CityU. Other collaborating researchers come from University of Electronic Science and Technology of China, Kyushu University, and University of Tokyo.
    The study received funding support from CityU, the Research Grants Council of Hong Kong SAR, the National Natural Science Foundation of China and the Science, Technology and Innovation Commission of Shenzhen Municipality.

  • Artificial Chemist 2.0: quantum dot R&D in less than an hour

    A new technology, called Artificial Chemist 2.0, allows users to go from requesting a custom quantum dot to completing the relevant R&D and beginning manufacturing in less than an hour. The tech is completely autonomous, and uses artificial intelligence (AI) and automated robotic systems to perform multi-step chemical synthesis and analysis.
    Quantum dots are colloidal semiconductor nanocrystals, which are used in applications such as LED displays and solar cells.
    “When we rolled out the first version of Artificial Chemist, it was a proof of concept,” says Milad Abolhasani, corresponding author of a paper on the work and an assistant professor of chemical and biomolecular engineering at North Carolina State University. “Artificial Chemist 2.0 is industrially relevant for both R&D and manufacturing.”
    From a user standpoint, the whole process essentially consists of three steps. First, a user tells Artificial Chemist 2.0 the parameters for the desired quantum dots. For example, what color light do you want to produce? The second step is effectively the R&D stage, where Artificial Chemist 2.0 autonomously conducts a series of rapid experiments, allowing it to identify the optimum material and the most efficient means of producing that material. Third, the system switches over to manufacturing the desired amount of the material.
    “Quantum dots can be divided up into different classes,” Abolhasani says. “For example, well-studied II-VI, IV-VI, and III-V materials, or the recently emerging metal halide perovskites, and so on. Basically, each class consists of a range of materials that have similar chemistries.
    “And the first time you set up Artificial Chemist 2.0 to produce quantum dots in any given class, the robot autonomously runs a set of active learning experiments. This is how the brain of the robotic system learns the materials chemistry,” Abolhasani says. “Depending on the class of material, this learning stage can take between one and 10 hours. After that one-time active learning period, Artificial Chemist 2.0 can identify the best possible formulation for producing the desired quantum dots from 20 million possible combinations with multiple manufacturing steps in 40 minutes or less.”
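The request-experiment-refine loop can be caricatured in a few lines. The emission-versus-composition relationship below is entirely hypothetical, and simple bisection over one composition parameter stands in for the system's active-learning strategy over 20 million candidate formulations:

```python
import random

def run_synthesis(ratio, rng):
    """Stand-in for one robotic synthesis + spectroscopy run: emission peak (nm)
    as a function of a precursor ratio. The chemistry here is hypothetical."""
    return 450 + 200 * ratio ** 0.7 + rng.gauss(0, 2)   # 2 nm measurement noise

def find_formulation(target_nm=550, budget=12, seed=3):
    """Sequential experiments that adapt to previous measurements:
    bisect the composition interval toward the target emission."""
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0
    results = {}
    for _ in range(budget):
        mid = (lo + hi) / 2
        peak = run_synthesis(mid, rng)
        results[mid] = peak
        if peak < target_nm:
            lo = mid    # longer-wavelength emission needs a larger ratio
        else:
            hi = mid
    best = min(results, key=lambda r: abs(results[r] - target_nm))
    return best, results[best]

ratio, peak = find_formulation()
print(round(ratio, 3), round(peak, 1))
```

The essential idea carries over: each experiment's outcome decides what to try next, so a small budget of runs homes in on the requested optical property instead of sweeping the whole formulation space.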
    The researchers note that the R&D process will almost certainly become faster every time people use it, since the AI algorithm that runs the system will learn more — and become more efficient — with every material that it is asked to identify.
    Artificial Chemist 2.0 incorporates two chemical reactors, which operate in a series. The system is designed to be entirely autonomous, and allows users to switch from one material to another without having to shut down the system. Video of how the system works can be found at https://youtu.be/e_DyV-hohLw.
    “In order to do this successfully, we had to engineer a system that leaves no chemical residues in the reactors and allows the AI-guided robotic system to add the right ingredients, at the right time, at any point in the multi-step material production process,” Abolhasani says. “So that’s what we did.
    “We’re excited about what this means for the specialty chemicals industry. It really accelerates R&D to warp speed, but it is also capable of making kilograms per day of high-value, precisely engineered quantum dots. Those are industrially relevant volumes of material.”

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.