More stories

  • AI can predict the effectiveness of breast cancer chemotherapy

    Engineers at the University of Waterloo have developed artificial intelligence (AI) technology to predict if women with breast cancer would benefit from chemotherapy prior to surgery.
    The new AI algorithm, part of the open-source Cancer-Net initiative led by Dr. Alexander Wong, could help unsuitable candidates avoid the serious side effects of chemotherapy and pave the way for better surgical outcomes for those who are suitable.
    “Determining the right treatment for a given breast cancer patient is very difficult right now, and it is crucial to avoid unnecessary side effects from using treatments that are unlikely to have real benefit for that patient,” said Wong, a professor of systems design engineering.
    “An AI system that can help predict if a patient is likely to respond well to a given treatment gives doctors the tool needed to prescribe the best personalized treatment for a patient to improve recovery and survival.”
In a project led by Amy Tai, a graduate student with the Vision and Image Processing (VIP) Lab, the AI software was trained with images of breast cancer made with a new magnetic resonance imaging modality, invented by Wong and his team, called synthetic correlated diffusion imaging (CDI).
    With knowledge gleaned from CDI images of old breast cancer cases and information on their outcomes, the AI can predict if pre-operative chemotherapy treatment would benefit new patients based on their CDI images.
    Known as neoadjuvant chemotherapy, the pre-surgical treatment can shrink tumours to make surgery possible or easier and reduce the need for major surgery such as mastectomies.
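In outline, such a predictor is a classifier over features extracted from imaging volumes. The sketch below is a minimal stand-in, not the Cancer-Net BCa pipeline: the volumes are synthetic, the three "radiomic" features are toy summaries, and the classifier is plain logistic regression, whereas the real system learns deep radiomic features from CDI scans.

```python
import numpy as np

rng = np.random.default_rng(0)

def radiomic_features(volume):
    """Toy volumetric features: mean signal, spread, and bright-voxel fraction."""
    return np.array([volume.mean(), volume.std(), (volume > 1.0).mean()])

def make_case(responder):
    # Hypothetical stand-in for a CDI volume: responders get higher signal.
    vol = rng.normal(loc=1.2 if responder else 0.8, scale=0.4, size=(8, 8, 8))
    return radiomic_features(vol)

X = np.array([make_case(r) for r in [True, False] * 100])
y = np.array([1, 0] * 100, dtype=float)

# Plain logistic regression fitted by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

acc = (((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y.astype(bool)).mean()
print(f"training accuracy: {acc:.2f}")
```

The point is only the shape of the problem: imaging volume in, benefit-versus-no-benefit probability out.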
    “I’m quite optimistic about this technology as deep-learning AI has the potential to see and discover patterns that relate to whether a patient will benefit from a given treatment,” said Wong, a director of the VIP Lab and the Canada Research Chair in Artificial Intelligence and Medical Imaging.
    A paper on the project, Cancer-Net BCa: Breast Cancer Pathologic Complete Response Prediction using Volumetric Deep Radiomic Features from Synthetic Correlated Diffusion Imaging, was recently presented at Med-NeurIPS as part of NeurIPS 2022, a major international conference on AI.
The new AI algorithm and the complete dataset of CDI images of breast cancer have been made publicly available through the Cancer-Net initiative so other researchers can help advance the field.

  • AI-Powered FRIDA robot collaborates with humans to create art

    Carnegie Mellon University’s Robotics Institute has a new artist-in-residence.
    FRIDA, a robotic arm with a paintbrush taped to it, uses artificial intelligence to collaborate with humans on works of art. Ask FRIDA to paint a picture, and it gets to work putting brush to canvas.
    “There’s this one painting of a frog ballerina that I think turned out really nicely,” said Peter Schaldenbrand, a School of Computer Science Ph.D. student in the Robotics Institute working with FRIDA and exploring AI and creativity. “It is really silly and fun, and I think the surprise of what FRIDA generated based on my input was really fun to see.”
    FRIDA, named after Frida Kahlo, stands for Framework and Robotics Initiative for Developing Arts. The project is led by Schaldenbrand with RI faculty members Jean Oh and Jim McCann, and has attracted students and researchers across CMU.
    Users can direct FRIDA by inputting a text description, submitting other works of art to inspire its style, or uploading a photograph and asking it to paint a representation of it. The team is experimenting with other inputs as well, including audio. They played ABBA’s “Dancing Queen” and asked FRIDA to paint it.
    “FRIDA is a robotic painting system, but FRIDA is not an artist,” Schaldenbrand said. “FRIDA is not generating the ideas to communicate. FRIDA is a system that an artist could collaborate with. The artist can specify high-level goals for FRIDA and then FRIDA can execute them.”
    The robot uses AI models similar to those powering tools like OpenAI’s ChatGPT and DALL-E 2, which generate text or an image, respectively, in response to a prompt. FRIDA simulates how it would paint an image with brush strokes and uses machine learning to evaluate its progress as it works.

    FRIDA’s final products are impressionistic and whimsical. The brushstrokes are bold. They lack the precision sought so often in robotic endeavors. If FRIDA makes a mistake, it riffs on it, incorporating the errant splotch of paint into the end result.
    “FRIDA is a project exploring the intersection of human and robotic creativity,” McCann said. “FRIDA is using the kind of AI models that have been developed to do things like caption images and understand scene content and applying it to this artistic generative problem.”
    FRIDA taps into AI and machine learning several times during its artistic process. First, it spends an hour or more learning how to use its paintbrush. Then, it uses large vision-language models trained on massive datasets that pair text and images scraped from the internet, such as OpenAI’s Contrastive Language-Image Pre-Training (CLIP), to understand the input. AI systems use these models to generate new text or images based on a prompt.
Other image-generating tools, such as OpenAI’s DALL-E 2, use large vision-language models to produce digital images. FRIDA takes that a step further and uses its embodied robotic system to produce physical paintings. One of the biggest technical challenges in producing a physical image is reducing the simulation-to-real gap, the difference between what FRIDA composes in simulation and what it paints on the canvas. FRIDA uses an idea known as real2sim2real. The robot’s actual brush strokes are used to train the simulator to reflect and mimic the physical capabilities of the robot and painting materials.
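The planning loop can be caricatured as "propose a stroke, simulate it, keep it if the evaluation improves." The sketch below is a deliberately crude stand-in for FRIDA's differentiable planner: random rectangular dabs on a toy canvas, scored by pixel distance to a target image rather than by a vision-language model like CLIP.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target "painting": a bright square on a dark canvas.
target = np.zeros((32, 32))
target[8:24, 8:24] = 1.0

def apply_stroke(canvas, row, col, size, tone):
    """Simulate one crude rectangular brush dab."""
    out = canvas.copy()
    out[row:row + size, col:col + size] = tone
    return out

canvas = np.zeros_like(target)
loss = np.square(canvas - target).sum()
for _ in range(500):
    row, col = rng.integers(0, 28, size=2)
    cand = apply_stroke(canvas, row, col,
                        int(rng.integers(2, 8)), float(rng.uniform(0, 1)))
    cand_loss = np.square(cand - target).sum()
    if cand_loss < loss:   # keep strokes that help; riff past the rest
        canvas, loss = cand, cand_loss

print(f"final L2 loss: {loss:.1f}")
```

FRIDA's actual planner optimizes strokes with gradients through a learned simulator; the greedy loop here is only meant to show the simulate-evaluate-refine cycle the article describes.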
    FRIDA’s team also seeks to address some of the limitations in current large vision-language models by continually refining the ones they use. The team fed the models the headlines from news articles to give it a sense of what was happening in the world and further trained them on images and text more representative of diverse cultures to avoid an American or Western bias. This multicultural collaboration effort is led by Zhixuan Liu and Beverley-Claire Okogwu, first-year RI master’s students, and Youeun Shin and Youngsik Yun, visiting master’s students from Dongguk University in Korea. Their efforts include training data contributions from China, Japan, Korea, Mexico, Nigeria, Norway, Vietnam and other countries.

Once FRIDA’s human user has specified a high-level concept of the painting they want to create, the robot uses machine learning to create its simulation and develop a plan to make a painting to achieve the user’s goals. FRIDA displays a color palette on a computer screen for a human to mix and provide to the robot. Automatic paint mixing is currently being developed, led by Jiaying Wei, a master’s student in the School of Architecture, with Eunsu Kang, faculty in the Machine Learning Department.
    Armed with a brush and paint, FRIDA will make its first strokes. Every so often, the robot uses an overhead camera to capture an image of the painting. The image helps FRIDA evaluate its progress and refine its plan, if needed. The whole process takes hours.
    “People wonder if FRIDA is going to take artists’ jobs, but the main goal of the FRIDA project is quite the opposite. We want to really promote human creativity through FRIDA,” Oh said. “For instance, I personally wanted to be an artist. Now, I can actually collaborate with FRIDA to express my ideas in painting.”
More information about FRIDA is available on its website. The team will present its latest research from the project, “FRIDA: A Collaborative Robot Painter With a Differentiable, Real2Sim2Real Planning Environment” at the 2023 IEEE International Conference on Robotics and Automation this May in London. FRIDA resides in the RI’s Bot Intelligence Group (BIG) lab in the Squirrel Hill neighborhood of Pittsburgh.

  • Solving a machine-learning mystery

    Large language models like OpenAI’s GPT-3 are massive neural networks that can generate human-like text, from poetry to programming code. Trained using troves of internet data, these machine-learning models take a small bit of input text and then predict the text that is likely to come next.
    But that’s not all these models can do. Researchers are exploring a curious phenomenon known as in-context learning, in which a large language model learns to accomplish a task after seeing only a few examples — despite the fact that it wasn’t trained for that task. For instance, someone could feed the model several example sentences and their sentiments (positive or negative), then prompt it with a new sentence, and the model can give the correct sentiment.
    Typically, a machine-learning model like GPT-3 would need to be retrained with new data for this new task. During this training process, the model updates its parameters as it processes new information to learn the task. But with in-context learning, the model’s parameters aren’t updated, so it seems like the model learns a new task without learning anything at all.
    Scientists from MIT, Google Research, and Stanford University are striving to unravel this mystery. They studied models that are very similar to large language models to see how they can learn without updating parameters.
    The researchers’ theoretical results show that these massive neural network models are capable of containing smaller, simpler linear models buried inside them. The large model could then implement a simple learning algorithm to train this smaller, linear model to complete a new task, using only information already contained within the larger model. Its parameters remain fixed.
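The idea can be made concrete with linear regression: a "model" whose own weights never change, but whose forward pass runs a simple learning algorithm over whatever examples sit in its context. The sketch below is an illustrative reduction of the paper's construction, not the transformer mechanism itself; here the internal learning algorithm is ordinary ridge regression.

```python
import numpy as np

rng = np.random.default_rng(2)

def in_context_predict(context_x, context_y, query_x, ridge=1e-3):
    """A frozen 'model': its forward pass fits a small linear model to the
    in-context examples. No parameters of this function are ever updated."""
    X = np.asarray(context_x)
    y = np.asarray(context_y)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    return np.asarray(query_x) @ w

# A new "task": an unseen linear function the model was never trained on.
true_w = rng.normal(size=4)
X_ctx = rng.normal(size=(16, 4))     # a few in-context examples
y_ctx = X_ctx @ true_w
x_query = rng.normal(size=4)

pred = in_context_predict(X_ctx, y_ctx, x_query)
print(pred, x_query @ true_w)
```

Nothing about `in_context_predict` changed between tasks, yet it solves each new task from its examples alone, which is the flavor of the paper's claim about what the transformer's hidden states implement.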
    An important step toward understanding the mechanisms behind in-context learning, this research opens the door to more exploration around the learning algorithms these large models can implement, says Ekin Akyürek, a computer science graduate student and lead author of a paper exploring this phenomenon. With a better understanding of in-context learning, researchers could enable models to complete new tasks without the need for costly retraining.

    “Usually, if you want to fine-tune these models, you need to collect domain-specific data and do some complex engineering. But now we can just feed it an input, five examples, and it accomplishes what we want. So in-context learning is a pretty exciting phenomenon,” Akyürek says.
Joining Akyürek on the paper are Dale Schuurmans, a research scientist at Google Brain and professor of computing science at the University of Alberta; as well as senior authors Jacob Andreas, the X Consortium Assistant Professor in the MIT Department of Electrical Engineering and Computer Science and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Tengyu Ma, an assistant professor of computer science and statistics at Stanford; and Denny Zhou, principal scientist and research director at Google Brain. The research will be presented at the International Conference on Learning Representations.
    A model within a model
    In the machine-learning research community, many scientists have come to believe that large language models can perform in-context learning because of how they are trained, Akyürek says.
    For instance, GPT-3 has hundreds of billions of parameters and was trained by reading huge swaths of text on the internet, from Wikipedia articles to Reddit posts. So, when someone shows the model examples of a new task, it has likely already seen something very similar because its training dataset included text from billions of websites. It repeats patterns it has seen during training, rather than learning to perform new tasks.

    Akyürek hypothesized that in-context learners aren’t just matching previously seen patterns, but instead are actually learning to perform new tasks. He and others had experimented by giving these models prompts using synthetic data, which they could not have seen anywhere before, and found that the models could still learn from just a few examples. Akyürek and his colleagues thought that perhaps these neural network models have smaller machine-learning models inside them that the models can train to complete a new task.
    “That could explain almost all of the learning phenomena that we have seen with these large models,” he says.
    To test this hypothesis, the researchers used a neural network model called a transformer, which has the same architecture as GPT-3, but had been specifically trained for in-context learning.
    By exploring this transformer’s architecture, they theoretically proved that it can write a linear model within its hidden states. A neural network is composed of many layers of interconnected nodes that process data. The hidden states are the layers between the input and output layers.
    Their mathematical evaluations show that this linear model is written somewhere in the earliest layers of the transformer. The transformer can then update the linear model by implementing simple learning algorithms.
    In essence, the model simulates and trains a smaller version of itself.
    Probing hidden layers
The researchers explored this hypothesis using probing experiments, in which they looked in the transformer’s hidden layers to try to recover a certain quantity.
    “In this case, we tried to recover the actual solution to the linear model, and we could show that the parameter is written in the hidden states. This means the linear model is in there somewhere,” he says.
    Building off this theoretical work, the researchers may be able to enable a transformer to perform in-context learning by adding just two layers to the neural network. There are still many technical details to work out before that would be possible, Akyürek cautions, but it could help engineers create models that can complete new tasks without the need for retraining with new data.
    Moving forward, Akyürek plans to continue exploring in-context learning with functions that are more complex than the linear models they studied in this work. They could also apply these experiments to large language models to see whether their behaviors are also described by simple learning algorithms. In addition, he wants to dig deeper into the types of pretraining data that can enable in-context learning.
“With this work, people can now visualize how these models can learn from exemplars. So, my hope is that it changes some people’s views about in-context learning,” Akyürek says. “These models are not as dumb as people think. They don’t just memorize these tasks. They can learn new tasks, and we have shown how that can be done.”

  • Researchers focus AI on finding exoplanets

    New research from the University of Georgia reveals that artificial intelligence can be used to find planets outside of our solar system. The recent study demonstrated that machine learning can be used to find exoplanets, information that could reshape how scientists detect and identify new planets very far from Earth.
    “One of the novel things about this is analyzing environments where planets are still forming,” said Jason Terry, doctoral student in the UGA Franklin College of Arts and Sciences department of physics and astronomy and lead author on the study. “Machine learning has rarely been applied to the type of data we’re using before, specifically for looking at systems that are still actively forming planets.”
The first exoplanet was found in 1992, and though more than 5,000 are known to exist, those have been among the easiest for scientists to find. Exoplanets at the formation stage are difficult to see for two primary reasons. They are too far away, often hundreds of light-years from Earth, and the discs where they form are very thick, thicker than the distance from the Earth to the sun. Data suggests the planets tend to be in the middle of these discs, conveying a signature of dust and gases kicked up by the planet.
    The research showed that artificial intelligence can help scientists overcome these difficulties.
    “This is a very exciting proof of concept,” said Cassandra Hall, assistant professor of astrophysics, principal investigator of the Exoplanet and Planet Formation Research Group, and co-author on the study. “The power here is that we used exclusively synthetic telescope data generated by computer simulations to train this AI, and then applied it to real telescope data. This has never been done before in our field, and paves the way for a deluge of discoveries as James Webb Telescope data rolls in.”
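The train-on-simulation, test-on-observation recipe can be sketched in a few lines. Everything below is synthetic and deliberately simplified: a toy disc image with a Gaussian blob standing in for a planet's kinematic signature, and a ridge-regression classifier standing in for the study's neural network.

```python
import numpy as np

rng = np.random.default_rng(3)

def disc_image(has_planet, noise=0.1):
    """Toy protoplanetary disc: radial brightness falloff, plus an
    off-center blob standing in for a forming planet's signature."""
    yy, xx = np.mgrid[-16:16, -16:16]
    img = np.exp(-np.hypot(xx, yy) / 8.0)
    if has_planet:
        img += 0.5 * np.exp(-((xx - 8) ** 2 + yy ** 2) / 8.0)
    return img + rng.normal(scale=noise, size=img.shape)

# Train on clean simulations, evaluate on noisier "observations".
X_train = np.array([disc_image(i % 2 == 0).ravel() for i in range(200)])
y_train = np.array([i % 2 == 0 for i in range(200)], dtype=float)
X_test = np.array([disc_image(i % 2 == 0, noise=0.3).ravel() for i in range(100)])
y_test = np.array([i % 2 == 0 for i in range(100)])

# Ridge classifier trained only on the simulated images.
w = np.linalg.solve(X_train.T @ X_train + np.eye(X_train.shape[1]),
                    X_train.T @ (2 * y_train - 1))
acc = (((X_test @ w) > 0) == y_test).mean()
print(f"accuracy on noisier 'observations': {acc:.2f}")
```

The transfer from simulated training data to real telescope data is, of course, far harder than this toy version suggests; the point is only the structure of the experiment.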
The James Webb Space Telescope, launched by NASA in 2021, has inaugurated a new level of infrared astronomy, bringing stunning new images and reams of data for scientists to analyze. It’s just the latest iteration of the agency’s quest to find exoplanets, scattered unevenly across the galaxy. The Nancy Grace Roman Space Telescope, a 2.4-meter survey telescope scheduled to launch in 2027 that will look for dark energy and exoplanets, will be the next major expansion in capability — and delivery of information and data — in the search across the universe for life.

The Webb telescope gives scientists the ability to look at exoplanetary systems at extremely high brightness and resolution, and the forming environments themselves are a subject of great interest, since they determine the resulting planetary system.
    “The potential for good data is exploding, so it’s a very exciting time for the field,” Terry said.
    New analytical tools are essential
Next-generation analytical tools are urgently needed to handle this high-quality data, so scientists can spend more time on theoretical interpretation rather than meticulously combing through the data for tiny signatures.
    “In a sense, we’ve sort of just made a better person,” Terry said. “To a large extent the way we analyze this data is you have dozens, hundreds of images for a specific disc and you just look through and ask ‘is that a wiggle?’ then run a dozen simulations to see if that’s a wiggle and … it’s easy to overlook them — they’re really tiny, and it depends on the cleaning, and so this method is one, really fast, and two, its accuracy gets planets that humans would miss.”
    Terry says this is what machine learning can already accomplish — improve on human capacity to save time and money as well as efficiently guide scientific time, investments and new proposals.
    “There remains, within science and particularly astronomy in general, skepticism about machine learning and of AI, a valid criticism of it being this black box — where you have hundreds of millions of parameters and somehow you get out an answer. But we think we’ve demonstrated pretty strongly in this work that machine learning is up to the task. You can argue about interpretation. But in this case, we have very concrete results that demonstrate the power of this method.”
The research team’s work is designed to develop a concrete foundation for future applications on observational data, demonstrating the method’s effectiveness using simulated observations.

  • Can pigeons match wits with artificial intelligence?

    Can a pigeon match wits with artificial intelligence? At a very basic level, yes.
    In a new study, psychologists at the University of Iowa examined the workings of the pigeon brain and how the “brute force” of the bird’s learning shares similarities with artificial intelligence.
    The researchers gave the pigeons complex categorization tests that high-level thinking, such as using logic or reasoning, would not aid in solving. Instead, the pigeons, by virtue of exhaustive trial and error, eventually were able to memorize enough scenarios in the test to reach nearly 70% accuracy.
    The researchers equate the pigeons’ repetitive, trial-and-error approach to artificial intelligence. Computers employ the same basic methodology, the researchers contend, being “taught” how to identify patterns and objects easily recognized by humans. Granted, computers, because of their enormous memory and storage power — and growing ever more powerful in those domains — far surpass anything the pigeon brain can conjure.
    Still, the basic process of making associations — considered a lower-level thinking technique — is the same between the test-taking pigeons and the latest AI advances.
    “You hear all the time about the wonders of AI, all the amazing things that it can do,” says Ed Wasserman, Stuit Professor of Experimental Psychology in the Department of Psychological and Brain Sciences at Iowa and the study’s corresponding author. “It can beat the pants off people playing chess, or at any video game, for that matter. It can beat us at all kinds of things. How does it do it? Is it smart? No, it’s using the same system or an equivalent system to what the pigeon is using here.”
    The researchers sought to tease out two types of learning: one, declarative learning, is predicated on exercising reason based on a set of rules or strategies — a so-called higher level of learning attributed mostly to people. The other, associative learning, centers on recognizing and making connections between objects or patterns, such as, say, “sky-blue” and “water-wet.”

    Numerous animal species use associative learning, but only a select few — dolphins and chimpanzees among them — are thought to be capable of declarative learning.
    Yet AI is all the rage, with computers, robots, surveillance systems, and so many other technologies seemingly “thinking” like humans. But is that really the case, or is AI simply a product of cunning human inputs? Or, as the study’s authors put it, have we shortchanged the power of associative learning in human and animal cognition?
    Wasserman’s team devised a “diabolically difficult” test, as he calls it, to find out.
    Each test pigeon was shown a stimulus and had to decide, by pecking a button on the right or on the left, to which category that stimulus belonged. The categories included line width, line angle, concentric rings, and sectioned rings. A correct answer yielded a tasty pellet; an incorrect response yielded nothing. What made the test so demanding, Wasserman says, is its arbitrariness: No rules or logic would help decipher the task.
    “These stimuli are special. They don’t look like one another, and they’re never repeated,” says Wasserman, who has studied pigeon intelligence for five decades. “You have to memorize the individual stimuli or regions from where the stimuli occur in order to do the task.”
    Each of the four test pigeons began by correctly answering about half the time. But over hundreds of tests, the quartet eventually upped their score to an average of 68% right.
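A pure associative learner of this kind is easy to simulate: memorize every stimulus together with its outcome, and answer each new stimulus like the most similar remembered one. The task below is an invented stand-in for the pigeons' categories, not the study's actual stimuli, but it shows accuracy climbing well above chance through trial and error, with no rule ever being learned.

```python
import numpy as np

rng = np.random.default_rng(4)

def stimulus():
    """Never-repeating stimuli; the category depends on which hidden
    region of stimulus space a point falls in (an assumed structure)."""
    x = rng.uniform(-1, 1, size=2)
    label = int(x[0] ** 2 + x[1] ** 2 > 0.5)
    return x, label

memory_x, memory_y, correct = [], [], []
for trial in range(2000):
    x, label = stimulus()
    if memory_x:
        # Associative response: answer like the nearest remembered stimulus.
        d = np.linalg.norm(np.array(memory_x) - x, axis=1)
        guess = memory_y[int(d.argmin())]
    else:
        guess = int(rng.integers(0, 2))
    correct.append(guess == label)
    memory_x.append(x)          # brute-force trial and error: remember everything
    memory_y.append(label)

early = np.mean(correct[:200])
late = np.mean(correct[-200:])
print(f"early accuracy {early:.2f}, late accuracy {late:.2f}")
```

Like the pigeons, the learner never extracts a rule; performance improves only because its store of remembered stimulus-outcome pairs grows.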

    “The pigeons are like AI masters,” Wasserman says. “They’re using a biological algorithm, the one that nature has given them, whereas the computer is using an artificial algorithm that humans gave them.”
    The common denominator is that AI and pigeons both employ associative learning, and yet that base-level thinking is what allowed the pigeons to ultimately score successfully. If people were to take the same test, Wasserman says, they’d score poorly and would probably give up.
    “The goal was to see to what extent a simple associative mechanism was capable of solving a task that would trouble us because people rely so heavily on rules or strategies,” Wasserman adds. “In this case, those rules would get in the way of learning. The pigeon never goes through that process. It doesn’t have that high-level thinking process. But it doesn’t get in the way of their learning. In fact, in some ways it facilitates it.”
    Wasserman sees a paradox in how associative learning is viewed.
    “People are wowed by AI doing amazing things using a learning algorithm much like the pigeon,” he says, “yet when people talk about associative learning in humans and animals, it is discounted as rigid and unsophisticated.”
    The study, “Resolving the associative learning paradox by category learning in pigeons,” was published online Feb. 7 in the journal Current Biology.
    Study co-authors include Drew Kain, who graduated with a neuroscience degree from Iowa in 2022 and is pursuing a doctorate in neuroscience at Iowa; and Ellen O’Donoghue, who earned a doctorate in psychology at Iowa last year and is now a postdoctoral scholar at Cardiff University.
The National Institutes of Health funded the research.

  • Physicists stored data in quantum holograms made of twisted light

    Particles of twisted light that have been entangled using quantum mechanics offer a new approach to dense and secure data storage.

    Holograms that produce 3-D images and serve as security features on credit cards are usually made with patterns laid down with beams of laser light. In recent years, physicists have found ways to create holograms with entangled photons instead. Now there is, literally, a new twist to the technology.

    Entangled photons that travel in corkscrew paths have resulted in holograms that offer the possibility of dense and ultrasecure data encryption, researchers report in a study to appear in Physical Review Letters.

    Light can move in a variety of ways, including the up-and-down and side-to-side patterns of polarized light. But when it carries a type of rotation known as orbital angular momentum, it can also propagate in spirals that resemble twisted rotini pasta.

    Like any other photons, the twisted versions can be entangled so that they essentially act as one entity. Something that affects one of an entangled photon pair instantly affects the other, even if they are very far apart.

    In previous experiments, researchers have sent data through the air in entangled pairs of twisted photons (SN: 8/5/15). The approach should allow high-speed data transmission because light can come with different amounts of twist, with each twist serving as a different channel of communication.

    Now the same approach has been applied to record data in holograms. Instead of transmitting information on multiple, twisted light channels, photon pairs with different amounts of twist create distinct sets of data in a single hologram. The more orbital angular momentum states involved, each with different amounts of twist, the more data researchers can pack into a hologram.

    In addition to cramming more data into holograms, increasing the variety of twists used to record the data boosts security. Anyone who wants to read the information out needs to know, or guess, how the light that recorded it was twisted.

    For a hologram relying on two types of twist, says physicist Xiangdong Zhang of the Beijing Institute of Technology, you would have to pick the right combination of the twists from about 80 possibilities to decode the data. Bumping that up to combinations of seven distinct twists leads to millions of possibilities. That, Zhang says, “should be enough to ensure our quantum holographic encryption system has enough security level.”
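The quoted key sizes are consistent with a simple counting argument. If each of the k twists in a decoding key can take about nine distinct orbital-angular-momentum values (the nine is our assumption to match the quoted figures, not a number from the paper), the keyspace is v**k:

```python
# Back-of-envelope keyspace count (v = 9 is an assumed number of
# distinguishable twist values per slot, chosen to match the article).
v = 9
print(v ** 2)   # two twists: 81, "about 80 possibilities"
print(v ** 7)   # seven twists: 4,782,969, i.e. "millions"
```

Keyspace thus grows exponentially with the number of twists in the key, which is why adding a few more orbital-angular-momentum states sharply raises the cost of guessing.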

    The researchers demonstrated their technique by encoding words and letters in holograms and reading the data back out again with twisted light. Although the researchers produced images from the holographic data, says physicist Hugo Defienne of the Paris Institute of Nanosciences, the storage itself should not be confused with holographic images.

    Defienne, who was not involved with the new research, says that other quantum holography schemes, such as his efforts with polarized photons, produce direct images of objects including microscopic structures.

“[Their] idea there is very different . . . from our approach in this sense,” Defienne says. “They’re using holography to store information,” rather than creating the familiar 3-D images that most people associate with holograms.

    The twisted light data storage that Zhang and his colleagues demonstrated is slow, requiring nearly 20 minutes to decode an image of the acronym “BIT,” for the Beijing Institute of Technology where the experiments were performed. And the security that the researchers have demonstrated is still relatively low because they included only up to six forms of twisted light in their experiments.

Zhang is confident that both limitations can be overcome with technical improvements. “We think that our technology has potential application in quantum information encryption,” he says, “especially quantum image encryption.”

  • Engineers devise a modular system to produce efficient, scalable aquabots

    Underwater structures that can change their shapes dynamically, the way fish do, push through water much more efficiently than conventional rigid hulls. But constructing deformable devices that can change the curve of their body shapes while maintaining a smooth profile is a long and difficult process. MIT’s RoboTuna, for example, was composed of about 3,000 different parts and took about two years to design and build.
    Now, researchers at MIT and their colleagues — including one from the original RoboTuna team — have come up with an innovative approach to building deformable underwater robots, using simple repeating substructures instead of unique components. The team has demonstrated the new system in two different example configurations, one like an eel and the other a wing-like hydrofoil. The principle itself, however, allows for virtually unlimited variations in form and scale, the researchers say.
    The work is being reported in the journal Soft Robotics, in a paper by MIT research assistant Alfonso Parra Rubio, professors Michael Triantafyllou and Neil Gershenfeld, and six others.
    Existing approaches to soft robotics for marine applications are generally made on small scales, while many useful real-world applications require devices on scales of meters. The new modular system the researchers propose could easily be extended to such sizes and beyond, without requiring the kind of retooling and redesign that would be needed to scale up current systems.
    “Scalability is a strong point for us,” says Parra Rubio. Given the low density and high stiffness of the lattice-like pieces, called voxels, that make up their system, he says, “we have more room to keep scaling up,” whereas most currently used technologies “rely on high-density materials facing drastic problems” in moving to larger sizes.
    The individual voxels in the team’s experimental, proof-of-concept devices are mostly hollow structures made up of cast plastic pieces with narrow struts in complex shapes. The box-like shapes are load-bearing in one direction but soft in others, an unusual combination achieved by blending stiff and flexible components in different proportions.

    “Treating soft versus hard robotics is a false dichotomy,” Parra Rubio says. “This is something in between, a new way to construct things.” Gershenfeld, head of MIT’s Center for Bits and Atoms, adds that “this is a third way that marries the best elements of both.”
    “Smooth flexibility of the body surface allows us to implement flow control that can reduce drag and improve propulsive efficiency, resulting in substantial fuel saving,” says Triantafyllou, who is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and was part of the RoboTuna team.
In one of the devices produced by the team, the voxels are attached end-to-end in a long row to form a meter-long, snake-like structure. The body is made up of four segments, each consisting of five voxels, with an actuator in the center that can pull a wire attached to each of the two voxels on either side, contracting them and causing the structure to bend. The whole structure of 20 units is then covered with a rib-like supporting structure, and then a tight-fitting waterproof neoprene skin. The researchers deployed the structure in an MIT tow tank to show its efficiency in the water, and demonstrated that it was indeed capable of generating enough thrust to propel itself forward using undulating motions.
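A common way to drive such a segmented body, and an assumption here rather than a detail reported from the paper, is to give each segment's actuator the same sinusoid with a fixed phase lag, so that a bending wave travels down the body and produces the undulating motion:

```python
import numpy as np

# Hypothetical undulating gait for a four-segment body like the one above:
# identical sinusoids per segment, offset in phase, form a traveling wave.
n_segments = 4
amplitude = 0.4          # assumed max bend per segment, radians
phase_lag = np.pi / 2    # assumed phase offset between neighboring segments
frequency = 1.0          # assumed undulation frequency, Hz

def segment_angles(t):
    """Commanded bend angle of each segment at time t (seconds)."""
    k = np.arange(n_segments)
    return amplitude * np.sin(2 * np.pi * frequency * t - k * phase_lag)

for t in np.linspace(0.0, 1.0, 5):
    print(np.round(segment_angles(t), 2))
```

With a quarter-period lag per segment, each segment repeats its neighbor's motion slightly later, which is the traveling wave that pushes water rearward.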
    “There have been many snake-like robots before,” Gershenfeld says. “But they’re generally made of bespoke components, as opposed to these simple building blocks that are scalable.”
    For example, Parra Rubio says, a snake-like robot built by NASA was made up of thousands of unique pieces, whereas for this group’s snake, “we show that there are some 60 pieces.” And compared to the two years spent designing and building the MIT RoboTuna, this device was assembled in about two days, he says.
    The other device they demonstrated is a wing-like shape, or hydrofoil, made up of an array of the same voxels but able to change its profile shape and therefore control the lift-to-drag ratio and other properties of the wing. Such wing-like shapes could be used for a variety of purposes, ranging from generating power from waves to helping to improve the efficiency of ship hulls — a pressing demand, as shipping is a significant source of carbon emissions.
    The wing shape, unlike the snake, is covered in an array of scale-like overlapping tiles, designed to press down on each other to maintain a waterproof seal even as the wing changes its curvature. One possible application might be in some kind of addition to a ship’s hull profile that could reduce the formation of drag-inducing eddies and thus improve its overall efficiency, a possibility that the team is exploring with collaborators in the shipping industry.
Ultimately, the concept might be applied to a whale-like submersible craft, using its morphable body shape to create propulsion. Such a craft could evade bad weather by staying below the surface, without the noise and turbulence of conventional propulsion. The concept could also be applied to parts of other vessels, such as racing yachts, where having a keel or a rudder that could curve gently during a turn instead of remaining straight could provide an extra edge. “Instead of being rigid or just having a flap, if you can actually curve the way fish do, you can morph your way around the turn much more efficiently,” Gershenfeld says.
The research team included Dixia Fan of Westlake University in China; Benjamin Jenett SM ’15, PhD ’20 of Discrete Lattice Industries; Jose del Aguila Ferrandis, Amira Abdel-Rahman and David Preiss of MIT; and Filippos Tourlomousis of the Demokritos Research Center of Greece. The work was supported by the U.S. Army Research Lab, CBA Consortia funding, and the MIT Sea Grant Program.