More stories


    Microfluidic system with cell-separating powers may unravel how novel pathogens attack

    To develop effective therapeutics against pathogens, scientists need to first uncover how they attack host cells. An efficient way to conduct these investigations on an extensive scale is through high-speed screening tests called assays.
    Researchers at Texas A&M University have invented a high-throughput cell separation method that can be used in conjunction with droplet microfluidics, a technique whereby tiny drops of fluid containing biological or other cargo can be moved precisely and at high speeds. Specifically, the researchers successfully isolated pathogens attached to host cells from those that were unattached within a single fluid droplet using an electric field.
    “Other than cell separation, most biochemical assays have been successfully converted into droplet microfluidic systems that allow high-throughput testing,” said Arum Han, professor in the Department of Electrical and Computer Engineering and principal investigator of the project. “We have addressed that gap, and now cell separation can be done in a high-throughput manner within the droplet microfluidic platform. This new system certainly simplifies studying host-pathogen interactions, but it is also very useful for environmental microbiology or drug screening applications.”
    The researchers reported their findings in the August issue of the journal Lab on a Chip.
    Microfluidic devices consist of networks of micron-sized channels or tubes that allow for controlled movements of fluids. Recently, microfluidics using water-in-oil droplets have gained popularity for a wide range of biotechnological applications. These droplets, which are picoliters (or a million times less than a microliter) in volume, can be used as platforms for carrying out biological reactions or transporting biological materials. Millions of droplets within a single chip facilitate high-throughput experiments, saving not just laboratory space but the cost of chemical reagents and manual labor.
    Biological assays can involve different cell types within a single droplet, which eventually need to be separated for subsequent analyses. This task is extremely challenging in a droplet microfluidic system, Han said.
    “Getting cell separation within a tiny droplet is extremely difficult because, if you think about it, first, it’s a tiny 100-micron diameter droplet, and second, within this extremely tiny droplet, multiple cell types are all mixed together,” he said.
    To develop the technology needed for cell separation, Han and his team chose a host-pathogen model system consisting of the salmonella bacteria and the human macrophage, a type of immune cell. When both these cell types are introduced within a droplet, some of the bacteria adhere to the macrophage cells. The goal of their experiments was to separate the salmonella that attached to the macrophage from the ones that did not.
    For cell separation, Han and his team constructed two pairs of electrodes that generated an oscillating electric field in close proximity to the droplet containing the two cell types. Since the bacteria and the host cells have different shapes, sizes and electrical properties, they found that the electric field produced a different force on each cell type. This force resulted in the movement of one cell type at a time, separating the cells into two different locations within the droplet. To separate the mother droplet into two daughter droplets containing one type of cells, the researchers also made a downstream Y-shaped splitting junction.
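    The principle at work here is dielectrophoresis: in a non-uniform oscillating field, the time-averaged force on a cell scales with its volume and a frequency-dependent polarizability term (the Clausius-Mossotti factor). The sketch below is a minimal, hypothetical illustration of why two cell types end up in different places; the sizes and factor values are invented, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): why an oscillating field
# moves two cell types differently. The time-averaged dielectrophoretic
# (DEP) force scales with cell volume (r^3) and the real part of the
# Clausius-Mossotti (CM) factor, which depends on the electrical
# properties of cell and medium. All values below are hypothetical.

def dep_force_scale(radius_um, cm_factor):
    """Relative DEP force magnitude ~ r^3 * Re(CM) (field gradient term omitted)."""
    return (radius_um ** 3) * cm_factor

# Hypothetical properties: a ~1 um Salmonella cell vs a ~10 um macrophage,
# with different CM factors at the chosen field frequency.
bacterium = dep_force_scale(1.0, 0.4)
macrophage = dep_force_scale(10.0, -0.2)

# Opposite signs mean the two cell types are pulled toward different
# regions of the droplet (positive vs negative DEP), enabling separation.
print(bacterium > 0 and macrophage < 0)  # True
```

    Once the two populations occupy different regions, the downstream Y-junction can split the mother droplet so each daughter droplet carries one cell type.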
    Han said although these experiments were carried out with a host and pathogen whose interaction is well-established, their new microfluidic system equipped with in-drop separation is most useful when the pathogenicity of a bacterial species is unknown. He added that their technology enables quick, high-throughput screening in these situations and in other applications where cell separation is required.
    “Liquid handling robotic hands can conduct millions of assays but are extremely costly. Droplet microfluidics can do the same in millions of droplets, much faster and much cheaper,” Han said. “We have now integrated cell separation technology into droplet microfluidic systems, allowing the precise manipulation of cells in droplets in a high-throughput manner, which was not possible before.”

    Story Source:
    Materials provided by Texas A&M University. Original written by Vandana Suresh. Note: Content may be edited for style and length.


    Report assesses promises and pitfalls of private investment in conservation

    The Ecological Society of America (ESA) today released a report entitled “Innovative Finance for Conservation: Roles for Ecologists and Practitioners” that offers guidelines for developing standardized, ethical and effective conservation finance projects.
    Public and philanthropic sources currently supply most of the funds for protecting and conserving species and ecosystems. However, the private sector is now driving demand for market-based mechanisms that support conservation projects with positive environmental, social and financial returns. Examples of projects that can support this triple bottom line include green infrastructure for stormwater management, clean transport projects and sustainable production of food and fiber products.
    “The reality is that public and philanthropic funds are insufficient to meet the challenge to conserve the world’s biodiversity,” said Garvin Professor and Senior Director of Conservation Science at Cornell University Amanda Rodewald, the report’s lead author. “Private investments represent a new path forward both because of their enormous growth potential and their ability to be flexibly adapted to a wide variety of social and ecological contexts.”
    Today’s report examines the legal, social and ethical issues associated with innovative conservation finance and offers resources and guidelines for increasing private capital commitments to conservation. It also identifies priority actions that individuals and organizations working in conservation finance will need to adopt in order to “mainstream” the field.
    One priority action is to standardize the metrics that allow practitioners to compare and evaluate projects. While the financial services and investment sectors regularly employ standardized indicators of financial risk and return, it is more difficult to apply such indicators to conservation projects. Under certain conservation financing models, for example, returns on investment are partially determined by whether the conservation project is successful — but “success” can be difficult to quantify when it is defined by complex social or environmental changes, such as whether a bird species is more or less at risk of going extinct as a result of a conservation project.
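    A hypothetical illustration of that point (not taken from the ESA report): in an outcome-contingent financing model, the investor's return depends on a measured conservation outcome, so the choice of "success" metric directly changes the payout. All numbers below are invented.

```python
# Toy sketch of an outcome-contingent conservation payout. The bonus is
# paid only if a measured outcome (e.g. a bird-population index) meets a
# target -- which is why standardized success metrics matter for
# comparing projects. Values are hypothetical.

def payout(principal, base_rate, bonus_rate, outcome, target):
    """Repay principal plus base interest; add a bonus only if the
    measured conservation outcome meets the agreed target."""
    rate = base_rate + (bonus_rate if outcome >= target else 0.0)
    return principal * (1.0 + rate)

# The same project scored against two outcome measurements yields
# different investor returns.
print(payout(1_000_000, 0.02, 0.03, outcome=1.08, target=1.05))  # ~1,050,000
print(payout(1_000_000, 0.02, 0.03, outcome=0.98, target=1.05))  # ~1,020,000
```

    When "success" is a complex ecological change rather than a single index, the `outcome` input itself becomes contested, which is the standardization gap the report highlights.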
    Another priority action is to establish safeguards and ethical standards for involving local stakeholders, including Indigenous communities. In the absence of robust accountability and transparency measures, mobilizing private capital in conservation can result in unjust land grabs or in unscrupulous investments where profits flow disproportionately to wealthy or powerful figures. The report offers guidelines for ensuring that conservation financing improves the prosperity of local communities.
    According to co-author Peter Arcese, a professor at the University of British Columbia and adjunct professor at Cornell University, opportunities in conservation finance are growing for patient investors who are interested in generating modest returns while simultaneously supporting sustainable development.
    “Almost all landowners I’ve worked with in Africa and North and South America share a deep desire to maintain or enhance the environmental, cultural and aesthetic values of the ecosystems their land supports,” Arcese said. “By creating markets and stimulating investment in climate mitigation, and forest, water and biodiversity conservation projects, we can offer landowners alternative income sources and measurably slow habitat loss and degradation.”
    Rodewald sees a similar landscape of interest and opportunity. “No matter the system — be it a coffee plantation in the Andes, a timber harvest in the Pacific Northwest, or a farm in the Great Plains — I am reminded again and again that conservation is most successful when we safeguard the health and well-being of local communities. Private investments can be powerful tools to do just that,” said Rodewald.
    Report: Amanda Rodewald, et al. 2020. “Innovative Finance for Conservation: Roles for Ecologists and Practitioners.”

    Story Source:
    Materials provided by Ecological Society of America. Note: Content may be edited for style and length.


    Esports: Fit gamers challenge ‘fat’ stereotype

    Esports players are up to 21 per cent more likely to be a healthy weight than the general population, and they smoke and drink less too, finds a new QUT (Queensland University of Technology) study.
    The findings, published in the International Journal of Environmental Research and Public Health, were based on 1400 survey participants from 65 countries.
    Key findings:
    • The study is the first to investigate the BMI (Body Mass Index) status of a global sample of esports players.
    • Esports players were between 9 and 21 per cent more likely to be a healthy weight than the general population.
    • Esports players drank and smoked less than the general population.
    • The top 10 per cent of esports players were significantly more physically active than lower-level players, suggesting that physical activity could influence esports expertise.
    QUT eSports researcher Michael Trotter said the results were surprising considering global obesity levels.
    “The findings challenge the stereotype of the morbidly obese gamer,” he said.
    Mr Trotter said the animated satire South Park poked fun at the unfit gamer but the link between video gaming and obesity had not been strongly established.
    “When you think of esports, there are often concerns raised regarding sedentary behaviour and poor health as a result, and the study revealed some interesting and mixed results,” he said.


    “As part of their training regime, elite esports athletes spend more than an hour per day engaging in physical exercise as a strategy to enhance gameplay and manage stress,” he said.
    World Health Organisation guidelines recommend a minimum of 150 minutes of physical activity per week.
    “Only top-level players surveyed met physical activity guidelines, with the best players exercising on average four days a week,” the PhD student said.
    However, the study found that esports players were 4.03 per cent more likely to be morbidly obese than the global population.
    Mr Trotter said strategies should be developed to support players classed at the higher end of BMI categories.


    “Exercise and physical activity play a role in success in esports and should be a focus for players and organisations training esports players,” Mr Trotter said.
    “This will mean that in the future, young gamers will have more reason and motivation to be physically active.
    “Grassroots esports pathways, such as growing university and high school esports, are likely to be the best place for young esports players to develop good health habits for gamers.”
    The research also found that esports players are 7.8 per cent more likely than the general population to abstain from alcohol, and of those players who do drink, only 0.5 per cent reported drinking daily.
    The survey showed that only 3.7 per cent of esports players smoked daily, compared with 18.7 per cent globally.
    Future research will investigate how high-school and university esports programs can improve health outcomes and increase physical activity for gaming students.
    The study was led by QUT’s Faculty of Health School of Exercise and Nutrition Sciences and in collaboration with the Department of Psychology at Umeå University in Sweden.

    Story Source:
    Materials provided by Queensland University of Technology. Note: Content may be edited for style and length.


    Teaching computers the meaning of sensor names in smart home

    The aim of smart homes is to make life easier for those living in them. Applications for environment-aided daily life may have a major social impact, fostering active ageing and enabling older adults to remain independent for longer. One of the keys to smart homes is the system’s ability to deduce the human activities taking place. To this end, different types of sensors are used to detect the changes triggered by inhabitants in this environment (turning lights on and off, opening and closing doors, etc.).
    Normally, the information generated by these sensors is processed using data analysis methods, and the most successful systems are based on supervised, knowledge-based learning techniques, with someone supervising the data and an algorithm automatically learning the meaning. Nevertheless, one of the main problems with smart homes is that a system trained in one environment is not valid in another one: ‘Algorithms are usually closely linked to a specific smart environment, to the types of sensor existing in that environment and their configuration, as well as to the concrete habits of one individual. The algorithm learns all this easily, but is then unable to transfer it to a different environment,’ explains Gorka Azkune, a member of the UPV/EHU’s IXA group.
    Giving sensors names
    To date, sensors have been identified using numbers, meaning that ‘they lost any meaning they may have had,’ continues Dr Azkune. ‘We propose using sensor names instead of identifiers, to enable their meaning, their semantics, to be used to determine the activity to which they are linked. Thus, what the algorithm learns in one environment may be valid in a different one, even if the sensors are not the same, because their semantics are similar. This is why we use natural language processing techniques.’
    The researcher also explains that the techniques used are totally automatic. ‘At the end of the day, the algorithm learns the words first and then the representation that we develop using those words. There is no human intervention. This is important from the perspective of scalability, since it has been proven to overcome the aforementioned difficulty.’ Indeed, the new approach has achieved similar results to those obtained using the knowledge-based method.
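    A minimal sketch of the idea (not the IXA group's implementation): represent each sensor by a word vector built from its name, so activity patterns learned from one home's sensors transfer to a new home whose sensors have different identifiers but semantically similar names. The tiny hand-made vectors below stand in for real word embeddings.

```python
# Sensor names -> embedding vectors -> semantic similarity. Hypothetical
# 3-d "word vectors"; a real system would use embeddings learned from a
# large text corpus.
import math

EMBEDDINGS = {
    "door":   [0.9, 0.1, 0.0],
    "light":  [0.0, 0.9, 0.1],
    "faucet": [0.1, 0.0, 0.9],
}

def sensor_vector(name):
    """Average the word vectors of the words in a sensor's name."""
    words = [EMBEDDINGS[w] for w in name.split() if w in EMBEDDINGS]
    return [sum(c) / len(words) for c in zip(*words)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# A door sensor in a new house looks like the door sensors seen in
# training, and unlike light sensors, even though its numeric ID differs.
print(cosine(sensor_vector("door"), sensor_vector("door")) >
      cosine(sensor_vector("door"), sensor_vector("light")))  # True
```

    Because similarity is computed from names rather than identifiers, no retraining or human relabeling is needed in the new environment, which is the scalability point Azkune makes above.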

    Story Source:
    Materials provided by University of the Basque Country. Note: Content may be edited for style and length.


    Math enables custom arrangements of liquid 'nesting dolls'

    While the mesmerizing blobs in a classic lava lamp may appear magical, the colorful shapes move in response to temperature-induced changes in density and surface tension. This process, known as liquid-liquid phase separation, is critical to many functions in living cells, and plays a part in making products like medicines and cosmetics.
    Now Princeton University researchers have overcome a major challenge in studying and engineering phase separation. Their system, reported in a paper published Nov. 19 in Physical Review Letters, allows for the design and control of complex mixtures with multiple phases — such as nested structures reminiscent of Russian matryoshka dolls, which are of special interest for applications such as drug synthesis and delivery.
    Their system provides researchers a new way to examine, predict and engineer interactions between multiple liquid phases, including arrangements of mixtures with an arbitrary number of separated phases, the researchers said.
    The arrangement of phases is based on the minimization of surface energies, which capture the interaction energies between molecules at the interfaces of phases. This tends to maximize the contact area between two phases with low surface tension, and minimize or eliminate contact between phases with high surface tension.
    The new method uses the mathematical tools of graph theory to track which phases contact each other within a mixture. The method can predict the final arrangements of phases in a mixture when the surface energies are known, and can also be used to reverse-engineer mixture properties that give rise to desired structures.
    “If you tell us which phases you have and what the surface tensions are, we can tell you how phases will arrange themselves. We can also do it the other way around — if you know how you want the phases to be arranged, we can tell you what surface tensions are needed,” said senior author Andrej Košmrlj, an assistant professor of mechanical and aerospace engineering.
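    The graph idea can be made concrete with a toy example (this is an illustration of the concept, not the Princeton code): each candidate arrangement is a graph of which phases touch, scored by summing surface tensions over its contact edges. The tension values are hypothetical, and interface areas are set to 1 for simplicity.

```python
# Toy surface-energy comparison of three-phase arrangements, where
# A = outer medium and B, C = droplets. Tensions are hypothetical.

TENSION = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 5.0}

def energy(contacts):
    """Total surface energy of an arrangement = sum over contact edges
    (unit interface areas assumed for illustration)."""
    return sum(TENSION[tuple(sorted(e))] for e in contacts)

# Candidate arrangements, each represented by its contact graph:
nested   = [("A", "B"), ("B", "C")]             # C inside B ("matryoshka")
separate = [("A", "B"), ("A", "C")]             # two independent droplets
janus    = [("A", "B"), ("A", "C"), ("B", "C")]  # B and C side by side

best = min([nested, separate, janus], key=energy)
# Because the A-C tension is high, the lowest-energy graph hides C
# inside B -- the nested, Russian-doll arrangement.
print(best is nested)  # True
```

    Running the logic in reverse, as Košmrlj describes, means searching for tension values under which a desired contact graph becomes the energy minimum.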


    “The approach is very general, and we think it will have an impact on many different fields,” from cell biology and pharmaceuticals to 3D printing and carbon sequestration technologies, said Košmrlj.
    The work began as the junior paper of Milena Chakraverti-Wuerthwein, a physics concentrator from Princeton’s Class of 2020. She was working with Sheng Mao, then a postdoctoral research associate in Košmrlj’s group, building on previous research that explored phase-separated mixtures. That work developed a computational framework for predicting the number of separated phases and their composition, but did not systematically investigate the actual arrangements of phases.
    Chakraverti-Wuerthwein started drawing examples of multicomponent mixtures, with each phase represented by a different color. At one point, she said, she felt like she was “going in circles,” but then “took a step back and thought about the distinguishing feature that makes one of these morphologies different from another. I came up with the idea that it’s really the edges where phases are touching each other. That was the birth of the idea of using the graphs,” in which each phase is represented by a colored dot, and the lines between dots indicate which phases touch one another in a mixture.
    “That was the spark we needed, because once you can represent it in terms of graphs, then it’s very easy to enumerate all the possibilities” for different arrangements of phases, said Košmrlj.
    Chakraverti-Wuerthwein is a co-lead author of the paper along with Mao, who is now an assistant professor at Peking University in China. Coauthor Hunter Gaudio, a 2020 graduate of Villanova University, helped run simulations to produce all distinct arrangements of four phases during summer 2019 as a participant in the Princeton Center for Complex Materials’ Research Experience for Undergraduates program.


    “Normally, liquids like to make simple droplets, and not much else. With this theory, one can program droplets to spontaneously organize into chains, stacks, or nested layers, like Russian dolls,” said Eric Dufresne, a professor of soft and living materials at ETH Zürich in Switzerland, who was not involved in the research. “This could be useful for controlling a complex sequence of chemical reactions, as found in living cells. The next challenge will be to develop experimental methods to realize the interactions specified by the theory.”
    Košmrlj is part of a group of Princeton faculty members exploring various facets and applications of liquid-liquid phase separation — a major focus of an Interdisciplinary Research Group recently launched by the Princeton Center for Complex Materials with support from the National Science Foundation.
    In liquid environments, there is a tendency for small droplets to morph into larger droplets over time — a process called coarsening. However, in living cells and industrial processes it is desirable to achieve structures of specific size. Košmrlj said his team’s future work will consider how coarsening might be controlled to achieve mixtures with targeted small-scale structures. Another open question is how multicomponent mixtures form in living systems, where active biological processes and the basic physics of the materials are both contributing factors.
    Chakraverti-Wuerthwein, who will begin a Ph.D. program in biophysical sciences at the University of Chicago in 2021, said it was gratifying to see “that this kernel of an idea that I came up with ended up being something valuable that could be expanded into a more broadly applicable tool.”
    The work was supported by the U.S. National Science Foundation through the Princeton University Materials Research Science and Engineering Center, and through the Research Experience for Undergraduates program of the Princeton Center for Complex Materials.


    AI model uses retinal scans to predict Alzheimer's disease

    A form of artificial intelligence designed to interpret a combination of retinal images was able to successfully identify a group of patients who were known to have Alzheimer’s disease, suggesting the approach could one day be used as a predictive tool, according to an interdisciplinary study from Duke University.
    The novel computer software looks at retinal structure and blood vessels on images of the inside of the eye that have been correlated with cognitive changes.
    The findings, appearing last week in the British Journal of Ophthalmology, provide proof-of-concept that machine learning analysis of certain types of retinal images has the potential to offer a non-invasive way to detect Alzheimer’s disease in symptomatic individuals.
    “Diagnosing Alzheimer’s disease often relies on symptoms and cognitive testing,” said senior author Sharon Fekrat, M.D., retina specialist at the Duke Eye Center. “Additional tests to confirm the diagnosis are invasive, expensive, and carry some risk. Having a more accessible method to identify Alzheimer’s could help patients in many ways, including improving diagnostic precision, allowing entry into clinical trials earlier in the disease course, and planning for necessary lifestyle adjustments.”
    Fekrat is part of an interdisciplinary team at Duke that also includes expertise from Duke’s departments of Neurology, Electrical and Computer Engineering, and Biostatistics and Bioinformatics. The team built on earlier work in which they identified changes in retinal blood vessel density that correlated with changes in cognition. They found decreased density of the capillary network around the center of the macula in patients with Alzheimer’s disease.
    Using that knowledge, they then trained a machine learning model, known as a convolutional neural network (CNN), using four types of retinal scans as inputs to teach a computer to discern relevant differences among images.
    Scans from 159 study participants were used to build the CNN; 123 patients were cognitively healthy, and 36 patients were known to have Alzheimer’s disease.
    “We tested several different approaches, but our best-performing model combined retinal images with clinical patient data,” said lead author C. Ellis Wisely, M.D., a comprehensive ophthalmologist at Duke. “Our CNN differentiated patients with symptomatic Alzheimer’s disease from cognitively healthy participants in an independent test group.”
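    The multimodal idea can be sketched schematically (this is a conceptual toy, not Duke's model): features extracted from the four scan types are concatenated with clinical variables and scored by a classifier. Here a tiny logistic model with made-up weights stands in for the CNN.

```python
# Conceptual sketch: combine image-derived features with clinical data
# in one classifier. All feature values and weights are invented.
import math

def classify(scan_features, clinical, weights, bias):
    """Logistic score over concatenated image + clinical features."""
    x = [v for scan in scan_features for v in scan] + clinical
    z = sum(w * v for w, v in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # illustrative probability

# Hypothetical inputs: one feature per retinal scan type, plus age and a
# cognitive test score as the clinical variables.
scans = [[0.2], [0.7], [0.1], [0.4]]         # 4 scan types
clinical = [72.0, 24.0]                      # age, cognitive score
weights = [1.0, 2.0, 0.5, 1.0, 0.01, -0.1]   # toy weights
p = classify(scans, clinical, weights, bias=-1.0)
print(0.0 < p < 1.0)  # True: output is a valid probability
```

    The real model replaces the hand-set weights with parameters learned from the 159 labeled participants, but the fusion of image and clinical inputs is the same structural idea.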
    Wisely said it will be important to enroll a more diverse group of patients to build models that can predict Alzheimer’s in all racial groups as well as in those who have conditions such as glaucoma and diabetes, which can also alter retinal and vascular structures.
    “We believe additional training using images from a larger, more diverse population with known confounders will improve the model’s performance,” added co-author Dilraj S. Grewal, M.D., Duke retinal specialist.
    He said additional studies will also determine how well the AI approach compares to current methods of diagnosing Alzheimer’s disease, which often include expensive and invasive neuroimaging and cerebral spinal fluid tests.
    “Links between Alzheimer’s disease and retinal changes — coupled with non-invasive, cost-effective, and widely available retinal imaging platforms — position multimodal retinal image analysis combined with artificial intelligence as an attractive additional tool, or potentially even an alternative, for predicting the diagnosis of Alzheimer’s,” Fekrat said.

    Story Source:
    Materials provided by Duke University Medical Center. Note: Content may be edited for style and length.


    Big data saves lives, and patient safeguards are needed

    The use of big data to address the opioid epidemic in Massachusetts poses ethical concerns that could undermine its benefits without clear governance guidelines that protect and respect patients and society, a University of Massachusetts Amherst study concludes.
    In research published in the open-access journal BMC Medical Ethics, Elizabeth Evans, associate professor in the School of Public Health and Health Sciences, sought to identify concerns and develop recommendations for the ethical handling of opioid use disorder (OUD) information stored in the Public Health Data Warehouse (PHD).
    “Efforts informed by big data are saving lives, yielding significant benefits,” the paper states. “Uses of big data may also undermine public trust in government and cause other unintended harms.”
    Maintained by the Massachusetts Department of Health, the PHD was established in 2015 as an unprecedented public health monitoring and research tool to link state government data sets and provide timely information to address health priorities, analyze trends and inform public policies. The initial focus was on the devastating opioid crisis.
    “It’s an amazing resource for research and public health planning,” Evans says, “but with a lot of information being linked on about 98% of the population of Massachusetts, I realized that it could cause some ethical issues that have not really been considered.”
    In 2019, Evans and a team of her students and staff interviewed and conducted focus groups with 39 big data stakeholders, including gatekeepers, researchers and patient advocates who were familiar with or interested in the PHD. They discussed the potential misuses of big data on opioids and how to create safeguards to ensure its ethical use.
    “While most participants understood that big data were anonymized and bound by other safeguards designed to preclude individual-level harms, some nevertheless worried that these data could be used to deny health insurance claims or use of social welfare programs, jeopardize employment, threaten parental rights, or increase criminal justice surveillance, prosecution, and incarceration,” the study states.
    One significant shortcoming of the data is the limited measurement of opioid and other substance use itself. “This blind spot and other ones like it are baked into big data, which can contribute to biased results, unjustified conclusions and policy implications, and not enough attention paid to the upstream or contextual contributors to OUD,” says Evans, whose research focuses on how health care systems and public policies can better promote health and wellness among vulnerable and underserved populations. “We know that people have addiction for many years before they come to the attention of public institutions.”
    A goal of the PHD is to improve health equity; however, “given data limitations, we do not examine or address conditions that enable the [opioid] epidemic, a problem that ultimately contributes to continued health disparities,” one focus group participant comments.
    The study participants helped develop recommendations for ethical big data governance that would prioritize health equity, set topics and methods that are off-limits and recognize the data’s blind spots.
    Shared data governance might include establishing community advisory boards, cultivating public trust by instituting safeguards and practicing transparency, and conducting engagement projects and media campaigns that communicate how the PHD serves the greater good.
    Special consideration should be given to people with opioid use disorder, the study emphasizes. “When considering big data policies and procedures, it may be useful to view individuals with OUD as a population whose status warrants added protections to guard against potential harms,” the paper concludes. “It is also important to ensure that big data research mitigates vulnerabilities rather than creates or exacerbates them.”


    Computer-aided creativity in robot design

    So, you need a robot that climbs stairs. What shape should that robot be? Should it have two legs, like a person? Or six, like an ant?
    Choosing the right shape will be vital for your robot’s ability to traverse a particular terrain. And it’s impossible to build and test every potential form. But now an MIT-developed system makes it possible to simulate them and determine which design works best.
    You start by telling the system, called RoboGrammar, which robot parts are lying around your shop — wheels, joints, etc. You also tell it what terrain your robot will need to navigate. And RoboGrammar does the rest, generating an optimized structure and control program for your robot.
    The advance could inject a dose of computer-aided creativity into the field. “Robot design is still a very manual process,” says Allan Zhao, the paper’s lead author and a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He describes RoboGrammar as “a way to come up with new, more inventive robot designs that could potentially be more effective.”
    Zhao is the lead author of the paper, which he will present at this month’s SIGGRAPH Asia conference. Co-authors include PhD student Jie Xu, postdoc Mina Konaković-Luković, postdoc Josephine Hughes, PhD student Andrew Spielberg, and professors Daniela Rus and Wojciech Matusik, all of MIT.
    Ground rules
    Robots are built for a near-endless variety of tasks, yet “they all tend to be very similar in their overall shape and design,” says Zhao. For example, “when you think of building a robot that needs to cross various terrains, you immediately jump to a quadruped,” he adds, referring to a four-legged animal like a dog. “We were wondering if that’s really the optimal design.”


    Zhao’s team speculated that more innovative design could improve functionality. So they built a computer model for the task — a system that wasn’t unduly influenced by prior convention. And while inventiveness was the goal, Zhao did have to set some ground rules.
    The universe of possible robot forms is “primarily composed of nonsensical designs,” Zhao writes in the paper. “If you can just connect the parts in arbitrary ways, you end up with a jumble,” he says. To avoid that, his team developed a “graph grammar” — a set of constraints on the arrangement of a robot’s components. For example, adjoining leg segments should be connected with a joint, not with another leg segment. Such rules ensure each computer-generated design works, at least at a rudimentary level.
    Zhao says the rules of his graph grammar were inspired not by other robots but by animals — arthropods in particular. These invertebrates include insects, spiders, and lobsters. As a group, arthropods are an evolutionary success story, accounting for more than 80 percent of known animal species. “They’re characterized by having a central body with a variable number of segments. Some segments may have legs attached,” says Zhao. “And we noticed that that’s enough to describe not only arthropods but more familiar forms as well,” including quadrupeds. Zhao adopted the arthropod-inspired rules thanks in part to this flexibility, though he did add some mechanical flourishes. For example, he allowed the computer to conjure wheels instead of legs.
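    A tiny grammar conveys the flavor (these rewrite rules are hypothetical, not RoboGrammar's actual grammar): a start symbol expands into chains of body segments, and segments may sprout legs only in the configurations the rules allow, so nonsensical jumbles never appear.

```python
# Minimal graph-grammar-style generator: every derivable design is a
# chain of body segments, each with zero or two legs. Rules invented
# for illustration.
import itertools

RULES = {
    "ROBOT":   [["SEGMENT"], ["SEGMENT", "ROBOT"]],  # 1..n body segments
    "SEGMENT": [["body"], ["body", "leg", "leg"]],   # legs come in pairs
}

def expand(symbol, depth=0, max_depth=3):
    """Enumerate the terminal part lists derivable from a symbol."""
    if symbol not in RULES:          # terminal part ("body", "leg")
        return [[symbol]]
    if depth >= max_depth:
        return [["body"]]            # cut off the recursion
    results = []
    for rhs in RULES[symbol]:
        options = [expand(s, depth + 1, max_depth) for s in rhs]
        for combo in itertools.product(*options):
            results.append([p for opt in combo for p in opt])
    return results

designs = expand("ROBOT")
# Arbitrary jumbles (e.g. a leg attached directly to a leg) cannot be
# produced: legs only ever appear in pairs on a body segment.
print(all(d.count("leg") % 2 == 0 for d in designs))  # True
```

    The real grammar operates on graphs of physical components rather than strings, but the principle is the same: the rules define the space of valid designs before any search begins.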
    A phalanx of robots
    Using Zhao’s graph grammar, RoboGrammar operates in three sequential steps: defining the problem, drawing up possible robotic solutions, then selecting the optimal ones. Problem definition largely falls to the human user, who inputs the set of available robotic components, like motors, legs, and connecting segments. “That’s key to making sure the final robots can actually be built in the real world,” says Zhao. The user also specifies the variety of terrain to be traversed, which can include combinations of elements like steps, flat areas, or slippery surfaces.


    With these inputs, RoboGrammar then uses the rules of the graph grammar to design hundreds of thousands of potential robot structures. Some look vaguely like a racecar. Others look like a spider, or a person doing a push-up. “It was pretty inspiring for us to see the variety of designs,” says Zhao. “It definitely shows the expressiveness of the grammar.” But while the grammar can crank out quantity, its designs aren’t always of optimal quality.
    Choosing the best robot design requires controlling each robot’s movements and evaluating its function. “Up until now, these robots are just structures,” says Zhao. The controller is the set of instructions that brings those structures to life, governing the movement sequence of the robot’s various motors. The team developed a controller for each robot with an algorithm called Model Predictive Control, which prioritizes rapid forward movement.
    “The shape and the controller of the robot are deeply intertwined,” says Zhao, “which is why we have to optimize a controller for every given robot individually.” Once each simulated robot is free to move about, the researchers seek high-performing robots with a “graph heuristic search.” This neural network algorithm iteratively samples and evaluates sets of robots, and it learns which designs tend to work better for a given task. “The heuristic function improves over time,” says Zhao, “and the search converges to the optimal robot.”
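    The sample-evaluate-update loop can be sketched schematically (this is a stand-in for the actual MIT algorithm, which uses a neural network and a physics simulator): candidates are sampled, scored, and the scores feed a heuristic that steers later samples toward promising designs. The design space and scoring function below are made up.

```python
# Schematic heuristic search over a toy design space. The "simulator"
# is a fabricated stand-in for controller optimization + physics.
import random

random.seed(0)
DESIGNS = [("legs", n) for n in range(1, 9)]  # hypothetical design space

def simulate(design):
    """Stand-in score: pretend four legs traverse the terrain best."""
    return -abs(design[1] - 4)

heuristic = {d: 0.0 for d in DESIGNS}
for _ in range(200):
    # Sample mostly where the heuristic currently looks good, but keep
    # some random exploration.
    if random.random() < 0.3:
        d = random.choice(DESIGNS)
    else:
        d = max(DESIGNS, key=lambda x: heuristic[x])
    score = simulate(d)
    heuristic[d] += 0.5 * (score - heuristic[d])  # running update

best = max(DESIGNS, key=lambda x: heuristic[x])
print(best)  # the search settles on the four-legged design: ('legs', 4)
```

    In the real system each evaluation is far more expensive, since every sampled structure first gets its own optimized controller; the learned heuristic is what makes the search affordable.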
    This all happens before the human designer ever picks up a screw.
    “This work is a crowning achievement in a 25-year quest to automatically design the morphology and control of robots,” says Hod Lipson, a mechanical engineer and computer scientist at Columbia University, who was not involved in the project. “The idea of using shape-grammars has been around for a while, but nowhere has this idea been executed as beautifully as in this work. Once we can get machines to design, make and program robots automatically, all bets are off.”
    Zhao intends the system as a spark for human creativity. He describes RoboGrammar as a “tool for robot designers to expand the space of robot structures they draw upon.” To show its feasibility, his team plans to build and test some of RoboGrammar’s optimal robots in the real world. Zhao adds that the system could be adapted to pursue robotic goals beyond terrain traversing. And he says RoboGrammar could help populate virtual worlds. “Let’s say in a video game you wanted to generate lots of kinds of robots, without an artist having to create each one,” says Zhao. “RoboGrammar would work for that almost immediately.”
    One surprising outcome of the project? “Most designs did end up being four-legged in the end,” says Zhao. Perhaps manual robot designers were right to gravitate toward quadrupeds all along. “Maybe there really is something to it.”