More stories

  • Deep learning algorithm to speed up materials discovery in emerging tech industries

    Solid-state inorganic materials are critical to the growth and development of electric vehicle, cellphone, laptop battery and solar energy technologies. However, finding the ideal materials with the desired functions for these industries is extremely challenging. Jianjun Hu, an associate professor of computer science at the University of South Carolina, is the lead researcher on a project to generate new hypothetical materials.
    Due to the vast chemical design space and the high sparsity of viable candidates, experimental trials and first-principles computational simulations cannot be used as screening tools to solve this problem. Instead, the researchers developed a deep learning-based algorithm that uses a generative adversarial network (GAN) model to improve materials search efficiency by up to two orders of magnitude. It has the potential to greatly speed up the discovery of novel functional materials.
    The work, published in npj Computational Materials, was a collaboration between researchers at the University of South Carolina College of Engineering and Computing and Guizhou University, a research university located in Guiyang, China.
    Inspired by the deep learning technique used in Google’s AlphaGo, which learned the implicit rules of the board game Go to defeat the game’s top players, the researchers used their GAN neural network to learn the implicit rules by which atoms of different elements combine into chemically valid formulas. By training their deep learning models on the tens of thousands of known inorganic materials deposited in databases such as ICSD and OQMD, they created a generative machine learning model capable of generating millions of new hypothetical inorganic material formulas.
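    The team’s published model is not reproduced here, but the core idea can be sketched in a few lines of Python. In the minimal, illustrative GAN below, the layer sizes, the element vocabulary and the fraction-vector encoding are assumptions made for the sketch, not details from the paper: a generator maps random noise to a composition vector, while a discriminator learns to distinguish generated compositions from real ones drawn from databases such as ICSD or OQMD.

    ```python
    # Minimal GAN sketch for composition generation (illustrative only, not the
    # authors' published architecture). A formula is encoded as a fixed-length
    # vector of per-element atom fractions.
    import torch
    import torch.nn as nn

    N_ELEMENTS = 85    # size of the element vocabulary (assumed)
    LATENT_DIM = 64

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                nn.Linear(128, N_ELEMENTS), nn.Softmax(dim=-1),  # fractions sum to 1
            )

        def forward(self, z):
            return self.net(z)

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_ELEMENTS, 128), nn.LeakyReLU(0.2),
                nn.Linear(128, 1),
            )

        def forward(self, x):
            return self.net(x)

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(real_compositions):
        """real_compositions: tensor of shape (batch, N_ELEMENTS)."""
        batch = real_compositions.size(0)
        fake = G(torch.randn(batch, LATENT_DIM))

        # Discriminator: push real formulas toward 1, generated ones toward 0.
        d_loss = loss_fn(D(real_compositions), torch.ones(batch, 1)) + \
                 loss_fn(D(fake.detach()), torch.zeros(batch, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to fool the discriminator.
        g_loss = loss_fn(D(fake), torch.ones(batch, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    # After training, new hypothetical compositions are sampled from noise.
    candidates = G(torch.randn(5, LATENT_DIM))   # shape (5, N_ELEMENTS)
    ```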
    “There is almost an infinite number of new materials that could exist, but they haven’t been discovered yet,” said Jianjun Hu. “Our algorithm, it’s like a generation engine. Using this model, we can generate a lot of new hypothetical materials that have very high likelihoods to exist.”
    Without explicitly modeling or enforcing chemical constraints such as charge neutrality and electronegativity, the deep learning-based algorithm learned to observe such rules when generating millions of hypothetical material formulas. The predictive power of the algorithm has been verified both against known materials and against recent findings in the materials discovery literature. “One major advantage of our algorithm is the high validity, uniqueness and novelty, which are the three major evaluation metrics of such generative models,” said Shaobo Li, a professor at Guizhou University who was involved in this study.
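    As a concrete illustration of one such constraint, the short check below tests whether a candidate formula can be charge-balanced using a small table of common oxidation states. The table is a tiny assumed subset and the function is purely didactic; it is not the validity check used in the study.

    ```python
    # Hypothetical charge-neutrality check for a generated formula (illustrative).
    from itertools import product

    # Common oxidation states for a handful of elements (assumed, incomplete).
    OXIDATION_STATES = {
        "Li": [1], "Na": [1], "Mg": [2], "Al": [3],
        "Ti": [2, 3, 4], "Fe": [2, 3], "Co": [2, 3],
        "O": [-2], "S": [-2], "F": [-1], "Cl": [-1],
    }

    def is_charge_neutral(formula):
        """formula: dict of element -> atom count, e.g. {"Li": 1, "Co": 1, "O": 2}.
        True if some assignment of common oxidation states sums to zero charge."""
        elements = list(formula)
        choices = [OXIDATION_STATES.get(el, []) for el in elements]
        if any(not c for c in choices):
            return False   # element not in the table: cannot verify
        for states in product(*choices):
            total = sum(s * formula[el] for s, el in zip(states, elements))
            if total == 0:
                return True
        return False

    print(is_charge_neutral({"Li": 1, "Co": 1, "O": 2}))   # True  (Li+, Co3+, 2 x O2-)
    print(is_charge_neutral({"Na": 1, "O": 2}))            # False with these common states
    ```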
    This is not the first time that an algorithm has been created for materials discovery. Past algorithms were also able to produce millions of potential new materials. However, very few of the materials proposed by those algorithms were synthesizable, due to their high free energy and instability. In contrast, nearly 70 percent of the inorganic materials identified by Hu’s team are likely to be stable and therefore potentially synthesizable.
    “You can get any number of formula combinations by putting elements’ symbols together. But it doesn’t mean the physics can exist,” said Ming Hu, an associate professor of mechanical engineering at UofSC also involved in the research. “So, our algorithm and the next step, a structure prediction algorithm, will dramatically increase the speed of screening new functional materials by creating synthesizable compounds.”
    These new materials will help researchers in fields such as electric vehicles, green energy, solar energy and cellphone development as they continually search for new materials with optimized functionalities. With the current materials discovery process being so slow, these industries’ growth has been limited by the materials available to them.
    The next major step for the team is to predict the crystal structure of the generated formulas, which is currently a major challenge. However, the team has already started working on this challenge along with several leading international teams. Once solved, the two steps can be combined to discover many potential materials for energy conversion, storage and other applications.
    About University of South Carolina:
    The University of South Carolina is a globally recognized, high-impact research university committed to a superior student experience and dedicated to innovation in learning, research and community engagement. Founded in 1801, the university offers more than 350 degree programs and is the state’s only top-tier Carnegie Foundation research institution. More than 50,000 students are enrolled at one of 20 locations throughout the state, including the research campus in Columbia. With 56 nationally ranked academic programs including top-ranked programs in international business, the nation’s best honors college and distinguished programs in engineering, law, medicine, public health and the arts, the university is helping to build healthier, more educated communities in South Carolina and around the world.

  • Fifty new planets confirmed in machine learning first

    Fifty potential planets have had their existence confirmed by a new machine learning algorithm developed by University of Warwick scientists.
    For the first time, astronomers have used a process based on machine learning, a form of artificial intelligence, to analyse a sample of potential planets and determine which ones are real and which are ‘fakes’, or false positives, calculating the probability that each candidate is a true planet.
    Their results are reported in a new study published in the Monthly Notices of the Royal Astronomical Society, in which they also perform the first large-scale comparison of such planet validation techniques. Their conclusions make the case for using multiple validation techniques, including their machine learning algorithm, when statistically confirming future exoplanet discoveries.
    Many exoplanet surveys search through huge amounts of data from telescopes for the signs of planets passing between the telescope and their star, known as transiting. This results in a telltale dip in light from the star that the telescope detects, but it could also be caused by a binary star system, interference from an object in the background, or even slight errors in the camera. These false positives can be sifted out in a planetary validation process.
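    A toy example makes that signal concrete. In the sketch below, the light curve, the planet parameters and the detection threshold are all synthetic and purely illustrative; real survey pipelines use far more careful detrending and period searches.

    ```python
    # Synthetic transit light curve with a crude dip detector (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    t = np.arange(0.0, 30.0, 0.01)                   # time in days
    flux = 1.0 + rng.normal(0.0, 5e-4, t.size)       # normalised flux with noise
    period, duration, depth = 3.5, 0.12, 2e-3        # toy planet parameters
    flux[(t % period) < duration] -= depth           # the telltale periodic dip

    # Flag points that fall well below the out-of-transit baseline.
    baseline = np.median(flux)
    in_dip = flux < baseline - 3 * 5e-4
    print(f"{in_dip.mean():.1%} of points flagged as in transit "
          f"(true in-transit fraction ~{duration / period:.1%})")
    ```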
    Researchers from Warwick’s Departments of Physics and Computer Science, as well as The Alan Turing Institute, built a machine learning based algorithm that can separate out real planets from fake ones in the large samples of thousands of candidates found by telescope missions such as NASA’s Kepler and TESS.
    It was trained to recognise real planets using two large samples of confirmed planets and false positives from the now-retired Kepler mission. The researchers then applied the algorithm to a dataset of still-unconfirmed planetary candidates from Kepler, resulting in fifty new confirmed planets, the first to be validated by machine learning. Previous machine learning techniques have ranked candidates, but never themselves determined the probability that a candidate was a true planet, a required step for planet validation.

    Those fifty planets range from worlds as large as Neptune to smaller than Earth, with orbital periods from as long as 200 days to as little as a single day. By confirming that these fifty planets are real, astronomers can now prioritise them for further observations with dedicated telescopes.
    Dr David Armstrong, from the University of Warwick Department of Physics, said: “The algorithm we have developed lets us take fifty candidates across the threshold for planet validation, upgrading them to real planets. We hope to apply this technique to large samples of candidates from current and future missions like TESS and PLATO.
    “In terms of planet validation, no-one has used a machine learning technique before. Machine learning has been used for ranking planetary candidates but never in a probabilistic framework, which is what you need to truly validate a planet. Rather than saying which candidates are more likely to be planets, we can now say what the precise statistical likelihood is. Where there is less than a 1% chance of a candidate being a false positive, it is considered a validated planet.”
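    Conceptually, that validation step can be sketched as follows. This is not the Warwick team’s model: the features, the random-forest classifier and the toy data are stand-ins, and a real pipeline needs carefully calibrated probabilities, but the sketch shows how a per-candidate false-positive probability is compared against the 1% threshold.

    ```python
    # Conceptual sketch of probabilistic planet validation (stand-in model and data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Toy features (e.g. transit depth, duration, signal-to-noise) and labels:
    # 1 = confirmed planet, 0 = false positive. Real training data come from Kepler.
    X_train = rng.normal(size=(1000, 3))
    y_train = (X_train[:, 0] + 0.5 * X_train[:, 2] + rng.normal(0, 0.5, 1000)) > 0

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_train, y_train)

    X_candidates = rng.normal(size=(50, 3))             # unconfirmed candidates
    p_planet = clf.predict_proba(X_candidates)[:, 1]    # P(candidate is a planet)
    p_false_positive = 1.0 - p_planet

    validated = p_false_positive < 0.01                 # the 1% validation criterion
    print(f"{validated.sum()} of {len(validated)} candidates pass the 1% threshold")
    ```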
    Dr Theo Damoulas from the University of Warwick Department of Computer Science, and Deputy Director, Data Centric Engineering and Turing Fellow at The Alan Turing Institute, said: “Probabilistic approaches to statistical machine learning are especially suited for an exciting problem like this in astrophysics that requires incorporation of prior knowledge — from experts like Dr Armstrong — and quantification of uncertainty in predictions. A prime example when the additional computational complexity of probabilistic methods pays off significantly.”
    Once built and trained, the algorithm is faster than existing techniques and can be completely automated, making it ideal for analysing the potentially thousands of planetary candidates observed in current surveys like TESS. The researchers argue that it should be one of the tools used collectively to validate planets in future.
    Dr Armstrong adds: “Almost 30% of the known planets to date have been validated using just one method, and that’s not ideal. Developing new methods for validation is desirable for that reason alone. But machine learning also lets us do it very quickly and prioritise candidates much faster.
    “We still have to spend time training the algorithm, but once that is done it becomes much easier to apply it to future candidates. You can also incorporate new discoveries to progressively improve it.
    “A survey like TESS is predicted to have tens of thousands of planetary candidates and it is ideal to be able to analyse them all consistently. Fast, automated systems like this that can take us all the way to validated planets in fewer steps let us do that efficiently.”

  • Teamwork can make the 5G dream work: A collaborative system architecture for 5G networks

    A research team led by Prof Jeongho Kwak from Daegu Gyeongbuk Institute of Science and Technology (DGIST) has designed a novel system architecture where collaboration between cloud service providers and mobile network operators plays a central role. Such a collaborative architecture would allow for optimizing the use of network, computing, and storage resources, thereby unlocking the potential of various novel services and applications.
    It is evident that many novel network- and cloud-dependent services will become commonplace in the next few years, including highly demanding technological feats like 8K video streaming, remote virtual reality, and large-scale data processing. But it is also likely that today’s network infrastructures won’t make the cut unless significant improvements are made to enable the advanced “killer” applications expected in the imminent 5G era.
    So, instead of having cloud service providers (CSPs) like Google and mobile network operators (MNOs) like Verizon independently improve their systems, what if they actively collaborated to achieve common goals? In a recent paper published in IEEE Network, a team of scientists, including Prof Jeongho Kwak from Daegu Gyeongbuk Institute of Science and Technology in Korea, explored the benefits and challenges of implementing a system focused on MNO-CSP collaboration.
    In their study, the scientists propose an overarching system architecture in which both CSPs and MNOs share information and exert unified control over the available network, computing, and storage resources. Prof Kwak explains, “The proposed architecture includes vertical collaboration from end devices to centralized cloud systems and horizontal collaboration between cloud providers and network providers. Hence, via vertical-horizontal optimization of the architecture, we can experience holistic improvement in the services for both current and future killer applications of 5G.” For example, by having MNOs share information about current traffic congestion and CSPs inform MNOs about their available computing resources, a collaborative system becomes more agile, flexible, and efficient.
    Through simulations, the research team went on to demonstrate how CSP-MNO collaboration could bring about potential performance improvements. Moreover, they discussed the present challenges that need to be overcome before such a system can be implemented, including calculating the financial incentives for each party and certain compatibility issues during the transition to a collaborative system architecture.
    Embracing collaboration between CSPs and MNOs might be necessary to unlock many of the features that were promised during the early development of 5G. Prof Kwak concludes, “We envision unconstrained use of augmented or virtual reality services and autonomous vehicles with almost zero latency. However, this ideal world will be possible only through the joint optimization of networking, processing, and storage resources.”
    One thing is clear: “teamwork” among various service providers is essential if we are to keep up with the demands of the current Information Age.

  • Cutting surgical robots down to size

    Teleoperated surgical robots are becoming commonplace in operating rooms, but many are massive (sometimes taking up an entire room) and difficult to manipulate. A new collaboration between Harvard’s Wyss Institute and Sony Corporation has created the mini-RCM, a surgical robot the size of a tennis ball that weighs as much as a penny and performed significantly better than manually operated tools in delicate mock-surgical procedures.
    Minimally invasive laparoscopic surgery, in which a surgeon uses tools and a tiny camera inserted into small incisions to perform operations, has made surgical procedures safer for both patients and doctors over the last half-century. Recently, surgical robots have started to appear in operating rooms to further assist surgeons by allowing them to manipulate multiple tools at once with greater precision, flexibility, and control than is possible with traditional techniques. However, these robotic systems are extremely large, often taking up an entire room, and their tools can be much larger than the delicate tissues and structures on which they operate.
    A collaboration between Wyss Associate Faculty member Robert Wood, Ph.D. and Robotics Engineer Hiroyuki Suzuki of Sony Corporation has brought surgical robotics down to the microscale by creating a new, origami-inspired miniature remote center of motion manipulator (the “mini-RCM”). The robot is the size of a tennis ball, weighs about as much as a penny, and successfully performed a difficult mock surgical task, as described in a recent issue of Nature Machine Intelligence.
    “The Wood lab’s unique technical capabilities for making micro-robots have led to a number of impressive inventions over the last few years, and I was convinced that it also had the potential to make a breakthrough in the field of medical manipulators as well,” said Suzuki, who began working with Wood on the mini-RCM in 2018 as part of a Harvard-Sony collaboration. “This project has been a great success.”
    A mini robot for micro tasks
    To create their miniature surgical robot, Suzuki and Wood turned to the Pop-Up MEMS manufacturing technique developed in Wood’s lab, in which materials are deposited on top of each other in layers that are bonded together, then laser-cut in a specific pattern that allows the desired three-dimensional shape to “pop up,” as in a children’s pop-up picture book. This technique greatly simplifies the mass-production of small, complex structures that would otherwise have to be painstakingly constructed by hand.

    The team created a parallelogram shape to serve as the main structure of the robot, then fabricated three linear actuators (mini-LAs) to control the robot’s movement: one parallel to the bottom of the parallelogram that raises and lowers it, one perpendicular to the parallelogram that rotates it, and one at the tip of the parallelogram that extends and retracts the tool in use. The result was a robot that is much smaller and lighter than other microsurgical devices previously developed in academia.
    The mini-LAs are themselves marvels in miniature, built around a piezoelectric ceramic material that changes shape when an electrical field is applied. The shape change pushes the mini-LA’s “runner unit” along its “rail unit” like a train on train tracks, and that linear motion is harnessed to move the robot. Because piezoelectric materials inherently deform as they change shape, the team also integrated LED-based optical sensors into the mini-LA to detect and correct any deviations from the desired movement, such as those caused by hand tremors.
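    The controller itself is not described in the release, but the role of the sensors can be illustrated with a toy proportional feedback loop; the gain, the simulated tremor and the micrometre scale below are assumptions made for the sketch, not specifications of the mini-LA.

    ```python
    # Toy closed-loop correction for a linear actuator (illustrative only).
    def run_control_loop(target_um, steps=200, gain=0.4):
        position = 0.0                                # position read by the optical sensor
        for step in range(steps):
            tremor = 0.3 * ((step % 7) - 3) / 3.0     # fake periodic disturbance, in um
            error = target_um - position              # deviation detected by the sensor
            position += gain * error + tremor         # corrective move plus disturbance
        return position

    final = run_control_loop(target_um=50.0)
    print(f"final position: {final:.2f} um (target 50.00 um)")
    ```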
    Steadier than a surgeon’s hands
    To mimic the conditions of a teleoperated surgery, the team connected the mini-RCM to a Phantom Omni device, which manipulated the mini-RCM in response to the movements of a user’s hand controlling a pen-like tool. Their first test evaluated a human’s ability to trace a tiny square smaller than the tip of a ballpoint pen, looking through a microscope and either tracing it by hand, or tracing it using the mini-RCM. The mini-RCM dramatically improved user accuracy, reducing error by 68% compared to manual operation — an especially important quality given the precision required to repair small and delicate structures in the human body.
    Given the mini-RCM’s success on the tracing test, the researchers then created a mock version of a surgical procedure called retinal vein cannulation, in which a surgeon must carefully insert a needle through the eye to inject therapeutics into the tiny veins at the back of the eyeball. They fabricated a silicone tube the same size as the retinal vein (about twice the thickness of a human hair), and successfully punctured it with a needle attached to the end of the mini-RCM without causing local damage or disruption.
    In addition to its efficacy in performing delicate surgical maneuvers, the mini-RCM’s small size provides another important benefit: it is easy to set up and install and, in the case of a complication or electrical outage, the robot can be easily removed from a patient’s body by hand.
    “The Pop-Up MEMS method is proving to be a valuable approach in a number of areas that require small yet sophisticated machines, and it was very satisfying to know that it has the potential to improve the safety and efficiency of surgeries to make them even less invasive for patients,” said Wood, who is also the Charles River Professor of Engineering and Applied Sciences at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).
    The researchers aim to increase the force of the robot’s actuators to cover the maximum forces experienced during an operation, and improve its positioning precision. They are also investigating using a laser with a shorter pulse during the machining process, to improve the mini-LAs’ sensing resolution.
    “This unique collaboration between the Wood lab and Sony illustrates the benefits that can arise from combining the real-world focus of industry with the innovative spirit of academia, and we look forward to seeing the impact this work will have on surgical robotics in the near future,” said Wyss Institute Founding Director Don Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and Professor of Bioengineering at SEAS.

  • Machines rival expert analysis of stored red blood cell quality

    Each year, nearly 120 million units* of donated blood flow from donor veins into storage bags at collection centres around the world. The fluid is packed, processed and reserved for later use. But once outside the body, stored red blood cells (RBCs) undergo continuous deterioration. By day 42 in most countries, the products are no longer usable.
    For years, labs have used expert microscopic examinations to assess the quality of stored blood. How viable is a unit by day 24? How about day 37? Depending on what technicians’ eyes perceive, answers may vary. This manual process is laborious, complex and subjective.
    Now, after three years of research, a study published in the Proceedings of the National Academy of Sciences unveils two new strategies to automate the process and achieve objective RBC quality scoring — with results that match and even surpass expert assessment.
    The methodologies showcase the potential of combining artificial intelligence with state-of-the-art imaging to solve a longstanding biomedical problem. If standardized, they could ensure more consistent, accurate assessments, with increased efficiency and better patient outcomes.
    Trained machines match expert human assessment
    The interdisciplinary collaboration spanned five countries, twelve institutions and nineteen authors, including universities, research institutes, and blood collection centres in Canada, the USA, Switzerland, Germany and the UK. The research was led by computational biologist Anne Carpenter of the Broad Institute of MIT and Harvard, physicist Michael Kolios of Ryerson University’s Department of Physics, and Jason Acker of Canadian Blood Services.

    They first investigated whether a neural network could be taught to “see” in images of RBCs the same six categories of cell degradation as human experts could. To generate the vast quantity of images required, imaging flow cytometry played a crucial role. Joseph Sebastian, co-author and Ryerson undergraduate then working under Kolios, explains.
    “With this technique, RBCs are suspended and flowed through the cytometer, an instrument that takes thousands of images of individual blood cells per second. We can then examine each RBC without handling or inadvertently damaging them, which sometimes happens during microscopic examinations.”
    The researchers used 40,900 cell images to train the neural networks on classifying RBCs into the six categories — in a collection that is now the world’s largest, freely available database of RBCs individually annotated with the various categories of deterioration.
    When tested, the machine learning algorithm achieved 77% agreement with human experts. Although a 23% error rate might sound high, perfectly matching an expert’s judgment in this test is impossible: even human experts agree with one another only 83% of the time. Thus, this fully supervised machine learning model could replace the tedious visual examination performed by humans with little loss of accuracy.
    Even so, the team wondered: could a different strategy push the upper limits of accuracy further?

    Machines surpass human vision, detect cellular subtleties
    In the study’s second part, the researchers avoided human input altogether and devised an alternative, “weakly-supervised” deep learning model in which neural networks learned about RBC degradation on their own.
    Instead of being taught the six visual categories used by experts, the machines learned solely by analyzing over one million images of RBCs that were unlabelled and ordered only by blood storage duration. Eventually, the machines correctly discerned features in single RBCs that correspond to the descent from healthy to unhealthy cells.
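    One way to realize such weak supervision is sketched below, purely for illustration: a small convolutional network is trained to predict the storage day of the unit each cell image came from, so it must discover age-related morphology on its own. The network, image size and dummy batch are assumptions for the sketch, not the architecture published in the study.

    ```python
    # Weak supervision by storage time: a toy regressor on single-cell images.
    import torch
    import torch.nn as nn

    class StorageAgeRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)     # predicted storage time in days

        def forward(self, x):                # x: (batch, 1, H, W) grayscale cell images
            return self.head(self.features(x).flatten(1))

    model = StorageAgeRegressor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # One training step on a dummy batch labelled only with storage day.
    images = torch.randn(8, 1, 64, 64)
    storage_day = torch.tensor([[3.], [3.], [10.], [10.], [24.], [24.], [42.], [42.]])
    loss = loss_fn(model(images), storage_day)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    print(f"training loss: {loss.item():.2f}")
    ```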
    “Allowing the computer to teach itself the progression of stored red blood cells as they degrade is a really exciting development,” says Carpenter, “particularly because it can capture more subtle changes in cells that humans don’t recognize.”
    When tested against other relevant measures, such as a biochemical assay, the weakly supervised machines predicted RBC quality better than the current six-category assessment method used by experts.
    Deep learning strategies: Blood quality and beyond
    Further training is still needed before the model is ready for clinical testing, but the outlook is promising. The fully-supervised machine learning model could soon automate and streamline the current manual method, minimizing sample handling, discrepancies and procedural errors in blood quality assessments.
    The second, alternative weakly-supervised framework may further eliminate human subjectivity from the process. Objective, accurate blood quality predictions would allow doctors to better personalize blood products to patients. Beyond stored blood, the time-based deep learning strategy may be transferable to other applications involving chronological progression, such as the spread of cancer.
    “People used to ask what the alternative is to the manual process,” says Kolios. “Now, we’ve developed an approach that integrates cutting-edge developments from several disciplines, including computational biology, transfusion medicine, and medical physics. It’s a testament to how technology and science are now interconnecting to solve today’s biomedical problems.”
    *Data reported by the World Health Organization

  • Storing information in antiferromagnetic materials

    Researchers at Mainz University were able to show that information can be stored in antiferromagnetic materials and to measure the efficiency of the writing operation.
    We all store more and more information, while the end devices are supposed to get smaller and smaller. However, due to continuous technological improvement, conventional silicon-based electronics is rapidly reaching its limits — for example, physical limits such as the bit size or the number of electrons required to store information. Spintronics, and antiferromagnetic materials in particular, offers an alternative: not only the charge of electrons is used to store information, but also their spin, which carries magnetic information. In this way, twice as much information can be stored in the same space. So far, however, it has been controversial whether it is even possible to store information electrically in antiferromagnetic materials.
    Physicists unveil the potential of antiferromagnetic materials
    Researchers at Johannes Gutenberg University Mainz (JGU), in collaboration with Tohoku University in Sendai, Japan, have now been able to prove that it works: “We were not only able to show that information storage in antiferromagnetic materials is fundamentally possible, but also to measure how efficiently information can be written electrically in insulating antiferromagnetic materials,” said Dr. Lorenzo Baldrati, Marie Skłodowska-Curie Fellow in Professor Mathias Kläui’s group at JGU. For their measurements, the researchers used the antiferromagnetic insulator cobalt oxide (CoO) — a model material that paves the way for applications. The result: currents are much more efficient than magnetic fields for manipulating antiferromagnetic materials. This discovery opens the way toward applications ranging from smart cards that cannot be erased by external magnetic fields to ultrafast computers — thanks to the superior properties of antiferromagnets over ferromagnets. The research paper has recently been published in Physical Review Letters. In further steps, the researchers at JGU want to investigate how quickly information can be saved and how small the written memory regions can be.
    Active German-Japanese exchange
    “Our longstanding collaboration with the leading university in the field of spintronics, Tohoku University, has generated another exciting piece of work,” emphasized Professor Mathias Kläui. “With the support of the German Academic Exchange Service, the Graduate School of Excellence Materials Science in Mainz, and the German Research Foundation, we initiated a lively exchange between Mainz and Sendai, working with theory groups at the forefront of this topic. We have opportunities for first joint degrees between our universities, which is being noticed by students. This is a next step in the formation of an international team of excellence in the burgeoning field of antiferromagnetic spintronics.”

    Story Source:
    Materials provided by Johannes Gutenberg Universitaet Mainz. Note: Content may be edited for style and length.

  • Contagion model predicts flooding in urban areas

    Inspired by the same modeling and mathematical laws used to predict the spread of pandemics, researchers at Texas A&M University have created a model to accurately forecast the spread and recession of floodwaters in urban road networks. The result is a simple yet powerful mathematical approach to a complex problem.
    “We were inspired by the fact that the spread of epidemics and pandemics in communities has been studied by people in health sciences and epidemiology and other fields, and they have identified some principles and rules that govern the spread process in complex social networks,” said Dr. Ali Mostafavi, associate professor in the Zachry Department of Civil and Environmental Engineering. “So we ask ourselves, are these spreading processes the same for the spread of flooding in cities? We tested that, and surprisingly, we found that the answer is yes.”
    The findings of this study were recently published in Scientific Reports.
    The contagion model, Susceptible-Exposed-Infected-Recovered (SEIR), is used to mathematically model the spread of infectious diseases. In relation to flooding, Mostafavi and his team integrated the SEIR model with the network spread process in which the probability of flooding of a road segment depends on the degree to which the nearby road segments are flooded.
    In the context of flooding, susceptible is a road that can be flooded because it is in a flood plain; exposed is a road that has flooding due to rainwater or overflow from a nearby channel; infected is a road that is flooded and cannot be used; and recovered is a road where the floodwater has receded.
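    The mapping can be sketched as a discrete-time simulation on a small road graph. The adjacency list and the rate parameters below are illustrative only, not the study’s fitted values: a susceptible road’s chance of becoming exposed grows with the fraction of flooded neighboring segments, exposed roads become inundated, and inundated roads eventually recover.

    ```python
    # Toy SEIR-style flood spread over a road network (illustrative parameters).
    import random

    random.seed(42)

    roads = {                          # adjacency list of road segments
        "A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "E"],
        "D": ["B", "E"], "E": ["C", "D"],
    }
    state = {r: "S" for r in roads}    # S, E, I (flooded/impassable) or R (receded)
    state["A"] = "I"                   # initial flooded segment, e.g. from a gauge report

    BETA, SIGMA, GAMMA = 0.6, 0.5, 0.25    # exposure, inundation and recession rates

    for step in range(10):
        new_state = dict(state)
        for road, neighbours in roads.items():
            if state[road] == "S":
                flooded = sum(state[n] == "I" for n in neighbours) / len(neighbours)
                if random.random() < BETA * flooded:
                    new_state[road] = "E"
            elif state[road] == "E" and random.random() < SIGMA:
                new_state[road] = "I"
            elif state[road] == "I" and random.random() < GAMMA:
                new_state[road] = "R"
        state = new_state
        print(step, state)
    ```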
    The research team verified the model’s use with high-resolution historical data of road flooding in Harris County during Hurricane Harvey in 2017. The results show that the model can monitor and predict the evolution of flooded roads over time.

    “The power of this approach is that it offers a simple and powerful mathematical approach and provides great potential to support emergency managers, public officials, residents, first responders and other decision makers for flood forecasting in road networks,” Mostafavi said.
    The proposed model can achieve decent precision and recall for the spatial spread of the flooded roads.
    “If you look at the flood monitoring system of Harris County, it can show you if a channel is overflowing now, but it’s not able to predict anything about the next four hours or next eight hours. Also, the existing flood monitoring systems provide limited information about the propagation of flooding in road networks and the impacts on urban mobility. But our models, and this specific model for the road networks, are robust at predicting the future spread of flooding,” he said. “In addition to flood prediction in urban networks, the findings of this study provide very important insights about the universality of the network spread processes across various social, natural, physical and engineered systems; this is significant for better modeling and managing cities, as complex systems.”
    The only limitation to this flood prediction model is that it cannot identify where the initial flooding will begin, but Mostafavi said there are other mechanisms in place such as sensors on flood gauges that can address this.
    “As soon as flooding is reported in these areas, we can use our model, which is very simple compared to hydraulic and hydrologic models, to predict the flood propagation in future hours. The forecast of road inundations and mobility disruptions is critical to inform residents to avoid high-risk roadways and to enable emergency managers and responders to optimize relief and rescue in impacted areas based on predicted information about road access and mobility. This forecast could be the difference between life and death during crisis response,” he said.
    Civil engineering doctoral student and graduate research assistant Chao Fan led the analysis and modeling of the Hurricane Harvey data, along with Xiangqi (Alex) Jiang, a graduate student in computer science, who works in Mostafavi’s UrbanResilience.AI Lab.
    “By doing this research, I realize the power of mathematical models in addressing engineering problems and real-world challenges. This research expands my research capabilities and will have a long-term impact on my career,” Fan said. “In addition, I am also very excited that my research can contribute to reducing the negative impacts of natural disasters on infrastructure services.”

    Story Source:
    Materials provided by Texas A&M University. Original written by Alyson Chapman. Note: Content may be edited for style and length.

  • Beam me up: Researchers use 'behavioral teleporting' to study social interactions

    Teleporting is a science fiction trope often associated with Star Trek. But a different kind of teleporting is being explored at the NYU Tandon School of Engineering, one that could let researchers investigate the very basis of social behavior, study interactions between invasive and native species to preserve natural ecosystems, explore predator/prey relationships without posing a risk to the welfare of the animals, and even fine-tune human/robot interfaces.
    The team, led by Maurizio Porfiri, Institute Professor at NYU Tandon, devised a novel approach to getting physically separated fish to interact with each other, leading to insights about what kinds of cues influence social behavior.
    The innovative system, called “behavioral teleporting” — the transfer of the complete inventory of behaviors and actions (ethogram) of a live zebrafish onto a remotely located robotic replica — allowed the investigators to independently manipulate multiple factors underpinning social interactions in real-time. The research, “Behavioral teleporting of individual ethograms onto inanimate robots: experiments on social interactions in live zebrafish,” appears in the Cell Press journal iScience.
    The team, including Mert Karakaya, a Ph.D. candidate in the Department of Mechanical and Aerospace Engineering at NYU Tandon, and Simone Macrì of the Centre for Behavioral Sciences and Mental Health, Istituto Superiore di Sanità, Rome, devised a setup consisting of two separate tanks, each containing one fish and one robotic replica. Within each tank, the live fish of the pair swam with the zebrafish replica matching the morphology and locomotory pattern of the live fish located in the other tank.
    An automated tracking system scored each of the live subjects’ locomotory patterns, which were, in turn, used to control the robotic replica swimming in the other tank via an external manipulator. Therefore, the system allowed the transfer of the complete ethogram of each fish across tanks within a fraction of a second, establishing a complex robotics-mediated interaction between two remotely-located live animals. By independently controlling the morphology of these robots, the team explored the link between appearance and movements in social behavior.
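    In outline, the closed loop is simple even if the engineering behind it is not. The sketch below uses placeholder functions for the tracker and the manipulator; they are stand-ins invented for illustration, not the NYU Tandon system’s actual interfaces.

    ```python
    # Conceptual behavioral-teleporting cycle with placeholder I/O functions.
    import time

    def track_fish(tank_id):
        """Placeholder for the automated tracker: returns the live fish's
        (x, y, heading) in the given tank."""
        return (0.0, 0.0, 0.0)

    def move_replica(tank_id, pose):
        """Placeholder for the external manipulator driving the replica."""
        pass

    def teleport_step():
        pose_1 = track_fish(tank_id=1)
        pose_2 = track_fish(tank_id=2)
        move_replica(tank_id=2, pose=pose_1)   # replica in tank 2 mirrors fish 1
        move_replica(tank_id=1, pose=pose_2)   # replica in tank 1 mirrors fish 2

    t0 = time.perf_counter()
    teleport_step()
    print(f"one teleport cycle took {time.perf_counter() - t0:.4f} s")  # target << 0.2 s
    ```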
    The investigators found that the replica teleported the fish motion in almost all trials (85% of the total experimental time), with a 95% accuracy at a maximum time lag of less than two-tenths of a second. The high accuracy in the replication of fish trajectory was confirmed by equivalent analysis on speed, turn rate, and acceleration.

    Porfiri explained that the behavioral teleporting system avoids the limits of typical modeling using robots.
    “Since existing approaches involve the use of a mathematical representation of social behavior for controlling the movements of the replica, they often lead to unnatural behavioral responses of live animals,” he said. “But because behavioral teleporting ‘copy/pastes’ the behavior of a live fish onto robotic proxies, it confers a high degree of precision with respect to such factors as position, speed, turn rate, and acceleration.”
    Porfiri’s previous research proving robots are viable as behavior models for zebrafish showed that schools of zebrafish could be made to follow the lead of their robotic counterparts.
    “In humans, social behavior unfolds in actions, habits, and practices that ultimately define our individual life and our society,” added Macrì. “These depend on complex processes, mediated by individual traits — baldness, height, voice pitch, and outfit, for example — and behavioral feedback, vectors that are often difficult to isolate. This new approach demonstrates that we can isolate influences on the quality of social interaction and determine which visual features really matter.”
    The research included experiments to understand the asymmetric relationship between large and small fish and identify leader/follower roles, in which a large fish swam with a small replica that mirrored the behavior of the small fish positioned in the other tank and vice-versa.

    Karakaya said the team was surprised to find that the smaller — not larger — fish “led” the interactions.
    “There are no strongly conclusive results on why that could be, but one reason might be due to the ‘curious’ nature of the smaller individuals to explore a novel space,” he said. “In known environments, large fish tend to lead; however, in new environments larger and older animals can be cautious in their approach, whereas the smaller and younger ones could be ‘bolder.'”
    The method also led to the discovery that interaction between fish was not determined by locomotor patterns alone, but also by appearance.
    “It is interesting to see that, as is the case with our own species, there is a relationship between appearance and social interaction,” he added.
    Karakaya added that this could serve as an important tool for human interactions in the near future, whereby, through the closed-loop teleporting, people could use robots as proxies of themselves.
    “One example would be the colonies on Mars, where experts from Earth could use humanoid robots as an extension of themselves to interact with the environment and people there. This would provide easier and more accurate medical examination, improve human contact, and reduce isolation. Detailed studies on the behavioral and psychological effects of these proxies must be completed to better understand how these techniques can be implemented into daily life.”
    This work was supported by the National Science Foundation, the National Institute on Drug Abuse, and the Office of Behavioral and Social Sciences Research.