More stories

  • Baby mice have a skill that humans want, and this microchip might help us learn it

    Baby mice might be small, but they’re tough, too.
    For their first seven days of life, they have the special ability to regenerate damaged heart tissue.
    Humans, on the other hand, aren’t so lucky: any heart injuries we suffer could lead to permanent damage. But what if we could learn to repair our hearts, just like baby mice?
    A team of researchers led by UNSW Sydney has developed a microchip that can help scientists study the regenerative potential of mouse heart cells. This microchip — which combines microengineering with biomedicine — could help pave the way for new research into regenerative heart medicine.
    “We’ve developed a simple, reliable, cheap and fast way to identify and separate these important mouse heart cells,” says lead author Dr Hossein Tavassoli, a biomedical engineer and stem cell researcher at UNSW Medicine & Health who conducted this work as part of his doctoral thesis.
    “Our method uses a microchip that’s easy to fabricate and can be made in any laboratory in the world.”
    The process for identifying and separating mouse heart cells is rather complex.

    First, scientists need to separate the right kind of heart cells (called proliferative cardiomyocytes) from other types of cells present in the heart.
    Their next challenge is keeping the cells alive.
    “Newborn mice heart cells (called proliferative cardiomyocytes) are very sensitive,” says Dr Vashe Chandrakanthan, a senior research fellow at UNSW Medicine & Health and co-senior author of the study.
    “Only about 20 per cent usually survive the conventional isolation and separation process. If we want to study these cells, we need to isolate them before they undergo cell death.”
    Dr Tavassoli says that this new method is much more efficient.

    “We reduced the stress applied on these cells by minimising the isolation and processing time,” he says. “Our method can purify millions of cells in less than 10 minutes.
    “Almost all of the cells survived when we used our microfluidic chip — over 90 per cent.”
    The spiral-shaped device is a microfluidic chip — that is, a chip designed to handle liquids on a tiny scale. It filters cells according to their size, separating the cardiomyocytes from other cells. The chip costs less than $500 to produce, making it cheaper than other isolation and separation methods.
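    To make the size-based sorting idea concrete, here is a minimal software sketch in Python; the cell diameters and the 12-micron cutoff are invented for illustration, and the real chip of course sorts cells hydrodynamically in its spiral channel rather than in code.

    ```python
    # Illustrative only: a software analogue of size-based cell sorting.
    # The real chip separates cells hydrodynamically in a spiral channel;
    # the cutoff and cell diameters below are invented for the example.

    cells_um = [15.2, 7.8, 14.1, 6.5, 13.3]   # hypothetical cell diameters in microns
    CUTOFF_UM = 12.0                          # hypothetical size cutoff

    large_outlet = [d for d in cells_um if d >= CUTOFF_UM]
    small_outlet = [d for d in cells_um if d < CUTOFF_UM]

    print(f"{len(large_outlet)} cells routed to the large-cell outlet")
    print(f"{len(small_outlet)} cells routed to the small-cell outlet")
    ```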
    This tool will make it easier for researchers to study how baby mice repair their hearts — and whether humans might be able to use the same technique.
    “Heart disease is the number one killer in the world,” says Dr Tavassoli. “In Australia, someone dies of heart disease every 12 minutes, and every four hours a baby is born with a heart defect.
    “We hope that our device will help accelerate heart disease research.”
    Characterising mouse heart cells
    Once the heart cells were separated from other cells with the help of their chip, the researchers seized the opportunity to study the cells’ physico-mechanical properties — that is, the way they respond to force.
    This involved asking questions like ‘How do these individual heart cells beat?’, ‘Do the cells have distinct features?’ and ‘What are their differences in size, shape and elasticity?’.
    The findings could provide new insights for developing materials that repair heart tissue, like cardiac patches, scaffolds and hydrogels.
    “The fast, large-scale characterisation of cells’ physico-mechanical features is a relatively new field of research,” says Dr Tavassoli, who originally trained as an engineer before specialising in medicine.
    “This is the first time microfluidic technology has been used to study mechanical properties of baby mouse heart cells.”
    A multipurpose microchip
    Dr Chandrakanthan says that even though the microchip was created for baby mouse heart cells, it could potentially be adapted for use in other types of cell applications.
    “The principles are compatible with isolating cardiomyocytes from mouse heart cells of all ages,” he says.
    “We could potentially also use this method to separate not only the heart cells, but all sorts of cells from different organs.”
    Dr Tavassoli says this method could also help other areas of medical research, including cardiac biology, drug discovery and nanoengineering. He is currently conducting research at the Garvan Institute and Lowy Cancer Research Centre on how this method could help cancer diagnosis.
    “This microchip opens up the opportunity for new discoveries by researchers all over the world,” he says.

  • Pushing computing to the edge by rethinking microchips’ design

    Responding to artificial intelligence’s exploding demands on computer networks, Princeton University researchers in recent years have radically increased the speed and slashed the energy use of specialized AI systems. Now, the researchers have moved their innovation closer to widespread use by creating co-designed hardware and software that will allow designers to blend these new types of systems into their applications.
    “Software is a critical part of enabling new hardware,” said Naveen Verma, a professor of electrical and computer engineering at Princeton and a leader of the research team. “The hope is that designers can keep using the same software system — and just have it work ten times faster or more efficiently.”
    By cutting both power demand and the need to exchange data with remote servers, systems made with the Princeton technology will be able to bring artificial intelligence applications, such as piloting software for drones or advanced language translators, to the very edge of computing infrastructure.
    “To make AI accessible to the real-time and often personal processes all around us, we need to address latency and privacy by moving the computation itself to the edge,” said Verma, who is the director of the University’s Keller Center for Innovation in Engineering Education. “And that requires both energy efficiency and performance.”
    Two years ago, the Princeton research team fabricated a new chip designed to improve the performance of neural networks, the technology at the heart of today’s artificial intelligence. The chip, which performed tens to hundreds of times better than other advanced microchips, marked a radical new approach by several measures. In fact, the chip was so different from anything being used for neural nets that it posed a challenge for the developers.
    “The chip’s major drawback is that it uses a very unusual and disruptive architecture,” Verma said in a 2018 interview. “That needs to be reconciled with the massive amount of infrastructure and design methodology that we have and use today.”
    Over the next two years, the researchers worked to refine the chip and to create a software system that would allow artificial intelligence systems to take advantage of the new chip’s speed and efficiency. In a presentation to the International Solid-State Circuits Virtual Conference on Feb. 22, lead author Hongyang Jia, a graduate student in Verma’s research lab, described how the new software would allow the new chips to work with different types of networks and let the systems scale in both hardware and software execution.

    “It is programmable across all these networks,” Verma said. “The networks can be very big, and they can be very small.”
    Verma’s team developed the new chip in response to growing demand for artificial intelligence and to the burden AI places on computer networks. Artificial intelligence, which allows machines to mimic cognitive functions such as learning and judgement, plays a critical role in new technologies such as image recognition, translation, and self-driving vehicles. Ideally, the computation for technology such as drone navigation would take place on the drone itself, rather than on a remote networked computer. But digital microchips’ power demand and need for memory storage can make designing such a system difficult. Typically, the solution places much of the computation and memory on a remote server, which communicates wirelessly with the drone. But this adds to the demands on the communications system, and it introduces security problems and delays in sending instructions to the drone.
    To approach the problem, the Princeton researchers rethought computing in several ways. First, they designed a chip that conducts computation and stores data in the same place. This technique, called in-memory computing, slashes the energy and time used to exchange information with dedicated memory. The technique boosts efficiency, but it introduces new problems: because it crams the two functions into a small area, in-memory computing relies on analog operation, which is sensitive to corruption by sources such as voltage fluctuation and temperature spikes. To solve this problem, the Princeton team designed their chips using capacitors rather than transistors. The capacitors, devices that store an electrical charge, can be manufactured with greater precision and are not highly affected by shifts in voltage. Capacitors can also be very small and placed on top of memory cells, increasing processing density and cutting energy needs.
    But even after making analog operation robust, many challenges remained. The analog core needed to be efficiently integrated in a mostly digital architecture, so that it could be combined with the other functions and software needed to actually make practical systems work. A digital system uses off-and-on switches to represent ones and zeros that computer engineers use to write the algorithms that make up computer programming. An analog computer takes a completely different approach. In an article in the IEEE Spectrum, Columbia University Professor Yannis Tsividis described an analog computer as a physical system designed to be governed by equations identical to those the programmer wants to solve. An abacus, for example, is a very simple analog computer. Tsividis says that a bucket and a hose can serve as an analog computer for certain calculus problems: to solve an integration function, you could do the math, or you could just measure the water in the bucket.
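    Tsividis’s bucket example can be made concrete with a short numerical sketch: the water level in the bucket is the running integral of the inflow rate. The inflow profile and time step below are invented for illustration.

    ```python
    import math

    # Illustrative sketch of the bucket-and-hose analogy: the water level in the
    # bucket is the running integral of the inflow rate. The inflow profile and
    # time step are invented for the example.
    dt, T = 0.01, 10.0
    level = 0.0
    for i in range(int(T / dt)):
        rate = math.sin(i * dt)   # litres per second flowing in at time t = i*dt
        level += rate * dt        # the bucket "computes" the integral by accumulating water

    print(f"bucket level after {T} s: {level:.3f} L")
    print(f"analytic integral of sin(t) on [0, {T}]: {1 - math.cos(T):.3f} L")
    ```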
    Analog computing was the dominant technology through the Second World War. It was used to perform functions from predicting tides to directing naval guns. But analog systems were cumbersome to build and usually required highly trained operators. After the emergence of the transistor, digital systems proved more efficient and adaptable. But new technologies and new circuit designs have allowed engineers to eliminate many shortcomings of the analog systems. For applications such as neural networks, the analog systems offer real advantages. Now, the question is how to combine the best of both worlds. Verma points out that the two types of systems are complementary: digital systems play a central role, while neural networks using analog chips can run specialized operations extremely fast and efficiently. That is why developing a software system that can integrate the two technologies seamlessly and efficiently is such a critical step.
    “The idea is not to put the entire network into in-memory computing,” he said. “You need to integrate the capability to do all the other stuff and to do it in a programmable way.”

  • Data transfer system connects silicon chips with a hair's-width cable

    Researchers have developed a data transfer system that can transmit information 10 times faster than USB. The new link pairs high-frequency silicon chips with a polymer cable as thin as a strand of hair. The system may one day boost energy efficiency in data centers and lighten the loads of electronics-rich spacecraft.
    The research was presented at this month’s IEEE International Solid-State Circuits Conference. The lead author is Jack Holloway ’03, MNG ’04, who completed his PhD in MIT’s Department of Electrical Engineering and Computer Science (EECS) last fall and currently works for Raytheon. Co-authors include Ruonan Han, associate professor and Holloway’s PhD adviser in EECS, and Georgios Dogiamis, a senior researcher at Intel.
    The need for snappy data exchange is clear, especially in an era of remote work. “There’s an explosion in the amount of information being shared between computer chips — cloud computing, the internet, big data. And a lot of this happens over conventional copper wire,” says Holloway. But copper wires, like those found in USB or HDMI cables, are power-hungry — especially when dealing with heavy data loads. “There’s a fundamental tradeoff between the amount of energy burned and the rate of information exchanged.” Despite a growing demand for fast data transmission (beyond 100 gigabits per second) through conduits longer than a meter, Holloway says the typical solution has been “increasingly bulky and costly” copper cables.
    One alternative to copper wire is fiber-optic cable, though that has its own problems. Whereas copper wires use electrical signaling, fiber-optics use photons. That allows fiber-optics to transmit data quickly and with little energy dissipation. But silicon computer chips generally don’t play well with photons, making interconnections between fiber-optic cables and computers a challenge. “There’s currently no way to efficiently generate, amplify, or detect photons in silicon,” says Holloway. “There are all kinds of expensive and complex integration schemes, but from an economics perspective, it’s not a great solution.” So, the researchers developed their own.
    The team’s new link draws on the benefits of both copper and fiber optic conduits, while ditching their drawbacks. “It’s a great example of a complementary solution,” says Dogiamis. Their conduit is made of plastic polymer, so it’s lighter and potentially cheaper to manufacture than traditional copper cables. But when the polymer link is operated with sub-terahertz electromagnetic signals, it’s far more energy-efficient than copper in transmitting a high data load. The new link’s efficiency rivals that of fiber optics, but it has a key advantage: “It’s compatible directly with silicon chips, without any special manufacturing,” says Holloway.
    The team engineered such low-cost chips to pair with the polymer conduit. Typically, silicon chips struggle to operate at sub-terahertz frequencies. Yet the team’s new chips generate those high-frequency signals with enough power to transmit data directly into the conduit. That clean connection from the silicon chips to the conduit means the overall system can be manufactured with standard, cost-effective methods, the researchers say.
    The new link also beats out copper and fiber optics in terms of size. “The cross-sectional area of our cable is 0.4 millimeters by a quarter millimeter,” says Han. “So, it’s super tiny, like a strand of hair.” Despite its slim size, it can carry a hefty load of data, since it sends signals over three different parallel channels, separated by frequency. The link’s total bandwidth is 105 gigabits per second, nearly an order of magnitude faster than a copper-based USB cable. Dogiamis says the cable could “address the bandwidth challenges as we see this megatrend toward more and more data.”
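    A quick back-of-the-envelope check of those figures, assuming the three channels carry equal shares of the load and taking 10 gigabits per second as a representative copper USB rate (both assumptions for illustration):

    ```python
    # Back-of-the-envelope check of the quoted figures. The equal split across
    # channels and the 10 Gbps USB reference rate are assumptions for illustration.
    total_gbps = 105          # reported aggregate bandwidth of the polymer link
    channels = 3              # parallel frequency-separated channels
    usb_gbps = 10             # representative copper USB data rate (assumed)

    print(f"per-channel rate: {total_gbps / channels:.0f} Gbps")
    print(f"roughly {total_gbps / usb_gbps:.1f}x a {usb_gbps} Gbps copper USB link")
    ```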
    In future work, Han hopes to make the polymer conduits even faster by bundling them together. “Then the data rate will be off the charts,” he says. “It could be one terabit per second, still at low cost.”
    The researchers suggest “data-dense” applications, like server farms, could be early adopters of the new links, since they could dramatically cut data centers’ high energy demands. The link could also be a key solution for the aerospace and automotive industries, which place a premium on small, light devices. And one day, the link could replace the consumer electronic cables in homes and offices, thanks to the link’s simplicity and speed. “It’s far less costly than [copper or fiber optic] approaches, with significantly wider bandwidth and lower loss than conventional copper solutions,” says Holloway. “So, high fives all round.”
    This research was funded, in part, by Intel, Raytheon, the Naval Research Laboratory, and the Office of Naval Research.

  • Twin atoms: A source for entangled particles

    Heads or tails? If we toss two coins into the air, the result of one coin toss has nothing to do with the result of the other. Coins are independent objects. In the world of quantum physics, things are different: quantum particles can be entangled, in which case they can no longer be regarded as independent individual objects, they can only be described as one joint system.
    For years, it has been possible to produce entangled photons — pairs of light particles that move in completely different directions but still belong together. Spectacular results have been achieved, for example in the field of quantum teleportation or quantum cryptography. Now, a new method has been developed at TU Wien (Vienna) to produce entangled atom pairs — and not just atoms which are emitted in all directions, but well-defined beams. This was achieved with the help of ultracold atom clouds in electromagnetic traps.
    Entangled particles
    “Quantum entanglement is one of the essential elements of quantum physics,” says Prof. Jörg Schmiedmayer from the Institute of Atomic and Subatomic Physics at TU Wien. “If particles are entangled with each other, then even if you know everything there is to know about the total system, you still cannot say anything at all about one specific particle. Asking about the state of one particular particle makes no sense, only the overall state of the total system is defined.”
    There are different methods of creating quantum entanglement. For example, special crystals can be used to create pairs of entangled photons: a photon with high energy is converted by the crystal into two photons of lower energy — this is called “down conversion.” This allows large numbers of entangled photon pairs to be produced quickly and easily.
    Entangling atoms, however, is much more difficult. Individual atoms can be entangled using complicated laser operations — but then you only get a single pair of atoms. Random processes can also be used to create quantum entanglement: if two particles interact with each other in a suitable way, they can turn out to be entangled afterwards. Molecules can be broken up, creating entangled fragments. But these methods cannot be controlled. “In this case, the particles move in random directions. But when you do experiments, you want to be able to determine exactly where the atoms are moving,” says Jörg Schmiedmayer.

    The twin pair
    Controlled twin pairs could now be produced at TU Wien with a novel trick: a cloud of ultracold atoms is created and held in place by electromagnetic forces on a tiny chip. “We manipulate these atoms so that they do not end up in the state with the lowest possible energy, but in a state of higher energy,” says Schmiedmayer. From this excited state, the atoms then spontaneously return to the ground state with the lowest energy.
    However, the electromagnetic trap is constructed in such a way that this return to the ground state is physically impossible for a single atom — this would violate the conservation of momentum. The atoms can therefore only be transferred to the ground state as pairs and fly away in opposite directions, so that their total momentum remains zero. This creates twin atoms that move exactly in the direction specified by the geometry of the electromagnetic trap on the chip.
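    In symbols, the pair emission described above is simply momentum conservation (a schematic sketch, not the paper’s full treatment):

    ```latex
    % Schematic momentum balance for the pair emission, not the paper's full treatment.
    % The excited cloud starts essentially at rest, so the two emitted atoms must
    % carry equal and opposite momenta:
    \[
    \vec{p}_1 + \vec{p}_2 = 0 \quad\Longrightarrow\quad \vec{p}_2 = -\vec{p}_1 ,
    \]
    % which is why the twin atoms fly apart in exactly opposite directions, whereas a
    % single atom returning to the ground state alone would have nothing to balance
    % its recoil.
    ```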
    The double-slit experiment
    The trap consists of two elongated, parallel waveguides. The pair of twin atoms may have been created in the left or in the right waveguide — or, as quantum physics allows, in both simultaneously. “It’s like the well-known double-slit experiment, where you shoot a particle at a wall with two slits,” says Jörg Schmiedmayer. “The particle can pass through both the left and the right slit at the same time, and behind the wall it interferes with itself, creating wave patterns that can be measured.”
    The same principle can be used to prove that the twin atoms are indeed entangled particles: only if you measure the entire system — i.e. both atoms at the same time — can you detect the wave-like superpositions typical of quantum phenomena. If, on the other hand, you restrict yourself to a single particle, the wave superposition disappears completely.
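    Schematically (an illustrative sketch, not the authors’ notation), a pair created in the left (L) or right (R) waveguide, in superposition, is an entangled two-atom state, and looking at only one atom leaves a mixture with no interference:

    ```latex
    % Illustrative sketch, not the authors' notation: the pair created in the left (L)
    % or right (R) waveguide, in superposition, is the entangled two-atom state
    \[
    |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|L\rangle_1 |L\rangle_2 + |R\rangle_1 |R\rangle_2\bigr).
    \]
    % Measuring only atom 1 corresponds to the reduced state
    \[
    \rho_1 = \tfrac{1}{2}\bigl(|L\rangle\langle L| + |R\rangle\langle R|\bigr),
    \]
    % which has no off-diagonal (interference) terms, exactly as described in the text.
    ```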
    “This shows us that in this case it makes no sense to look at the particles individually,” explains Jörg Schmiedmayer. “In the double-slit experiment, the superpositions disappear as soon as you measure whether the particle goes through the left or the right slit. As soon as this information is available, the quantum superposition is destroyed. It is very similar here: if the atoms are entangled and you only measure one of them, you could theoretically still use the other atom to measure whether they both originated in the left or the right part of the trap. Therefore, the quantum superpositions are destroyed.”
    Now that it has been proven that ultracold atom clouds can indeed be used to reliably produce entangled twin atoms in this way, further quantum experiments are to be carried out with these atom pairs — similar to those that have already been possible with photon pairs.

  • Scientists begin building highly accurate digital twin of our planet

    To become climate neutral by 2050, the European Union launched two ambitious programmes: the “Green Deal” and the “Digital Strategy.” As a key component of their successful implementation, climate scientists and computer scientists launched the “Destination Earth” initiative, which will start in mid-2021 and is expected to run for up to ten years. During this period, a highly accurate digital model of the Earth, a digital twin, is to be created to map climate development and extreme events as accurately as possible in space and time.
    Observational data will be continuously incorporated into the digital twin in order to make the digital Earth model more accurate for monitoring its evolution and predicting possible future trajectories. But in addition to the observational data conventionally used for weather and climate simulations, the researchers also want to integrate new data on relevant human activities into the model. The new “Earth system model” will represent virtually all processes on the Earth’s surface as realistically as possible, including the influence of humans on water, food and energy management, and the processes in the physical Earth system.
    Information system for decision-making
    The digital twin of the Earth is intended to be an information system that develops and tests scenarios that show more sustainable development and thus better inform policies. “If you are planning a two-metre high dike in The Netherlands, for example, I can run through the data in my digital twin and check whether the dike will in all likelihood still protect against expected extreme events in 2050,” says Peter Bauer, deputy director for Research at the European Centre for Medium-Range Weather Forecasts (ECMWF) and co-initiator of Destination Earth. The digital twin will also be used for strategic planning of fresh water and food supplies or wind farms and solar plants.
    The driving forces behind Destination Earth are the ECMWF, the European Space Agency (ESA), and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). Together with other scientists, Bauer is driving the climate science and meteorological aspects of the Earth’s digital twin, but they also rely on the know-how of computer scientists from ETH Zurich and the Swiss National Supercomputing Centre (CSCS), namely ETH professors Torsten Hoefler, from the Institute for High Performance Computing Systems, and Thomas Schulthess, Director of CSCS.
    In order to take this big step in the digital revolution, Bauer emphasises the need for earth sciences to be married to the computer sciences. In a recent publication in Nature Computational Science, the team of researchers from the earth and computer sciences discusses which concrete measures they would like to use to advance this “digital revolution of earth-system sciences,” where they see the challenges and what possible solutions can be found.

    Weather and climate models as a basis
    In their paper, the researchers look back on the steady development of weather models since the 1940s, a success story that took place quietly. Meteorologists were, in effect, the pioneers of simulating physical processes on the world’s largest computers. As a physicist and computer scientist, CSCS’s Schulthess is therefore convinced that today’s weather and climate models are ideally suited to show many other scientific disciplines entirely new ways of using supercomputers efficiently.
    In the past, weather and climate modelling used different approaches to simulate the Earth system. Climate models represent a very broad set of physical processes, but they typically neglect the small-scale processes that are essential for the more precise weather forecasts, which in turn focus on a smaller number of processes. The digital twin will bring both areas together and enable high-resolution simulations that depict the complex processes of the entire Earth system. To achieve this, however, the codes of the simulation programmes must be adapted to new technologies that promise much greater computing power.
    With the computers and algorithms available today, the highly complex simulations can hardly be carried out at the planned extremely high resolution of one kilometre, because code development stagnated for decades from a computer-science perspective. Climate research benefited from being able to gain higher performance from each new generation of processors without having to fundamentally change its programmes. This free performance gain with each new processor generation stopped about 10 years ago. As a result, today’s programmes can often only utilise 5 per cent of the peak performance of conventional processors (CPUs).
    To achieve the necessary improvements, the authors emphasise the need for co-design, i.e. developing hardware and algorithms together and simultaneously, as CSCS has successfully demonstrated over the last ten years. They suggest paying particular attention to generic data structures, optimised spatial discretisation of the grid to be calculated, and optimisation of the time step lengths. The scientists further propose separating the code that solves the scientific problem from the code that performs the computation optimally on the respective system architecture. This more flexible programme structure would allow a faster and more efficient switch to future architectures.
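    The separation the authors propose can be pictured with a small Python sketch: the science code expresses an operator once, while a pluggable backend decides how it is executed on a given machine. The names and the toy diffusion step are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    # Illustrative sketch of separating science code from architecture-specific code,
    # in the spirit of the co-design approach described above. Names and the toy
    # diffusion operator are invented for the example.

    class NumpyBackend:
        """Backend: knows how the stencil is executed on this machine (here, NumPy)."""
        def laplacian(self, field, dx):
            return (np.roll(field, 1) - 2 * field + np.roll(field, -1)) / dx**2

    def diffusion_step(field, dx, dt, kappa, backend):
        """Science code: one explicit diffusion step, independent of the backend."""
        return field + dt * kappa * backend.laplacian(field, dx)

    # A GPU or vectorised backend could be swapped in without touching diffusion_step.
    field = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
    field = diffusion_step(field, dx=0.1, dt=0.001, kappa=1.0, backend=NumpyBackend())
    print(field[:4])
    ```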
    Profiting from artificial intelligence
    The authors also see great potential in artificial intelligence (AI). It can be used, for example, for data assimilation, the processing of observational data, the representation of uncertain physical processes in the models, and data compression. AI thus makes it possible to speed up the simulations and to filter out the most important information from large amounts of data. Additionally, the researchers assume that the use of machine learning will not only make the calculations more efficient but can also help to describe the physical processes more accurately.
    The scientists see their strategy paper as a starting point on the path to a digital twin of the Earth. Among the computer architectures available today and those expected in the near future, supercomputers based on graphics processing units (GPUs) appear to be the most promising option. The researchers estimate that operating a digital twin at full scale would require a system with about 20,000 GPUs, consuming an estimated 20 MW of power. For both economic and ecological reasons, such a computer should be operated at a location where CO2-neutral electricity is available in sufficient quantities.

    Story Source:
    Materials provided by ETH Zurich. Original written by Simone Ulmer.

  • Impact of online communities

    The Governance Lab (The GovLab) at the NYU Tandon School of Engineering released a report, “The Power of Virtual Communities,” which examines the role online groups play in creating opportunities for people to build new kinds of meaningful communities they often could not form in real space.
    This first-of-its-kind research was built on interviews with 50 Facebook community leaders in 17 countries, interviews with 26 global experts from academia and industry, unique access to Facebook’s underlying research, and an original global survey conducted by YouGov of 15,000 people in 15 countries who are currently members of online and in-person communities. The survey found that in 11 of those countries, the majority of people said that the most meaningful communities to which they belong are primarily online.
    “Around the world, people who are otherwise voiceless in physical space are becoming powerful leaders of groups that confer a true sense of meaning and belonging for their members,” said Beth Simone Noveck, director of The GovLab. “This brief report, which tells the stories of several of those leaders and how they govern global communities, is, we hope, the beginning of greater and much-needed study of online groups and their impact on social and political life.”
    Many of these Facebook groups cut across traditional social groupings and bring together people around a shared trait or interest:
    Female IN (FIN) was created as a safe space for women in the Nigerian diaspora to discuss and seek support for problems associated with such challenges as relationship struggles, health issues, abuse, grief and loss. It grew by word of mouth into a 1.8 million-person community with members in more than 100 countries.
    Surviving Hijab encourages its 920,000 female members to take up or continue wearing the Muslim head covering in the face of political and social criticism.
    Blind PenPals enables its 7,000 blind and visually impaired members to share stories and advice.
    Canterbury Residents Group acts as a public square in the British city of Canterbury and has 38,000 members, about the same size as the city’s population.
    Subtle Asian Traits, which began as a modest initiative among nine young Australians of Chinese background to share funny memes about their Asian heritage, has expanded to a group of 1.82 million people who discuss and share the experience of growing up Asian in mostly majority-White societies.
    The GovLab’s report notes that:
    Membership in online communities confers a strong sense of community, the lack of physical proximity notwithstanding.
    Online groups are a still fluid form of human organization that in many cases attract members and leaders who are marginalized in the physical societies they inhabit, and who use the platform to build new kinds of communities that would be difficult to form otherwise.
    Many of these groups have counter-cultural norms and are what political scientists might call “cross-cleavage” communities. These groups cut across traditional social groupings, and bring together people normally divided by geography around a shared trait or interest.
    The flexible affordances of online platforms have enabled new kinds of leaders to emerge in these groups with unique skills in moderating often divisive dialogues, sometimes among millions of members.
    Most groups are run as a labor of love: many leaders are neither trained nor paid, the rules that govern their internal operations are often uncodified, and the hosting platform — in this case Facebook — holds significant power over their operations and future.
    These groups, some of which have huge memberships, remain emergent and largely unrecognized: they are outside traditional power structures, institutions and forms of governance.
    More research is needed to understand whether and how these groups will operate as genuine communities over the long term, especially given the tensions that derive from conducting public life on a private platform such as Facebook, and how such groups and their leaders can be supported to ensure they provide maximum voice, participation and benefit to their members.
    Further, results from the YouGov survey and the interviews with group leaders indicated that the three most essential traits and behaviors for leaders to exhibit were welcoming differences of opinion, being visible and communicating well, and acting ethically at all times.
    This report, published in six languages, further shines a light on the role leaders play and why it is important to support them in running their communities.

    Story Source:
    Materials provided by NYU Tandon School of Engineering.

  • An intelligent soft material that curls under pressure or expands when stretched

    Plants and animals can rapidly respond to changes in their environment, such as a Venus flytrap snapping shut when a fly touches it. However, replicating similar actions in soft robots requires complex mechanics and sensors. Now, researchers reporting in ACS Applied Materials & Interfaces have printed liquid metal circuits onto a single piece of soft polymer, creating an intelligent material that curls under pressure or mechanical strain.
    Ideally, soft robots could mimic intelligent and autonomous behaviors in nature, combining sensing and controlled movement. But the integration of sensors and the moving parts that respond can be clunky or require an external computer. A single-unit design is needed that responds to environmental stimuli, such as mechanical pressure or stretching. Liquid metals could be the solution, and some researchers have already investigated their use in soft robots. These materials can be used to create thin, flexible circuits in soft materials, and the circuits can rapidly produce heat when an electric current is generated, either from an electrical source or from pressure applied to the circuit. When the soft circuits are stretched, the current drops, cooling the material. To make a soft robot capable of autonomous, intelligent movement, Chao Zhao, Hong Liu and colleagues wanted to integrate liquid metal circuits with liquid crystal elastomers (LCE) — polymers that can undergo large changes to their shape when heated or cooled.
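    The heating-and-cooling behaviour described here is consistent with ordinary Joule heating; as a rough textbook sketch (not the authors’ model):

    ```latex
    % Rough textbook sketch of Joule heating, not the authors' model: at a fixed
    % drive voltage V, the power dissipated in the liquid-metal trace is
    \[
    P = I^{2} R = \frac{V^{2}}{R},
    \]
    % so stretching the trace raises its resistance R, the current I and the dissipated
    % power P fall, and the liquid crystal elastomer cools and relaxes back.
    ```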
    The researchers applied a nickel-infused gallium-indium alloy onto an LCE and magnetically moved the liquid metal into lines to form an uninterrupted circuit. A silicone sealant that changed from pink to dark red when warmed kept the circuit protected and in place. In response to a current, the soft material curled as the temperature increased, and the film turned redder over time. The team used the material to develop autonomous grippers that perceived and responded to pressure or stretching applied to the circuits. The grippers could pick up small round objects and then drop them when the pressure was released or the material was stretched. Finally, the researchers formed the film into a spiral shape. When pressure was applied to the circuit at the bottom of the spiral, it unfurled with a rotating motion, as the spiral’s temperature increased. The researchers say that these pressure- and stretch-sensitive materials could be adapted for use in soft robots performing complex tasks or locomotion.

    Story Source:
    Materials provided by American Chemical Society.

  • Quantum systems learn joint computing

    Researchers realize the first quantum-logic computer operation between two separate quantum modules in different laboratories.
    Today’s quantum computers contain up to several dozen memory and processing units, the so-called qubits. Severin Daiss, Stefan Langenfeld, and colleagues from the Max Planck Institute of Quantum Optics in Garching have successfully interconnected two such qubits located in different labs into a distributed quantum computer by linking them with a 60-meter-long optical fiber. Over this distance they realized a quantum-logic gate — the basic building block of a quantum computer. This makes the system the world’s first prototype of a distributed quantum computer.
    The limitations of previous qubit architectures
    Quantum computers are considerably different from traditional “binary” computers: future realizations of them are expected to easily perform specific calculations for which traditional computers would take months or even years — for example in the field of data encryption and decryption. While the performance of binary computers results from large memories and fast computing cycles, the success of the quantum computer rests on the fact that one single memory unit — a quantum bit, also called “qubit” — can contain superpositions of different possible values at the same time. Therefore, a quantum computer does not calculate only one result at a time, but instead many possible results in parallel. The more qubits there are interconnected in a quantum computer, the more complex the calculations it can perform.
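    The scaling behind that last sentence can be written out explicitly (a textbook sketch, not specific to this experiment):

    ```latex
    % Textbook sketch, not specific to this experiment: a register of n qubits is
    % described by a superposition over all 2^n classical bit strings,
    \[
    |\psi\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\, |x\rangle , \qquad \sum_{x} |c_{x}|^{2} = 1 ,
    \]
    % so each additional qubit doubles the number of amplitudes the machine
    % manipulates in parallel.
    ```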
    The basic computing operations of a quantum computer are quantum-logic gates between two qubits. Such an operation changes — depending on the initial state of the qubits — their quantum mechanical states. For a quantum computer to be superior to a normal computer for various calculations, it would have to reliably interconnect many dozens, or even thousands, of qubits for equally many thousands of quantum operations. Despite great successes, all laboratories are still struggling to build such a large and reliable quantum computer, since every additional qubit makes it much harder to build the machine in a single set-up. The qubits are implemented, for instance, with single atoms, superconductive elements, or light particles, all of which need to be isolated perfectly from each other and the environment. The more qubits are arranged next to one another, the harder it is to both isolate them and control them from outside at the same time.
    Data line and processing unit combined
    One way to overcome the technical difficulties in the construction of quantum computers is presented in a new study in the journal Science by Severin Daiss, Stefan Langenfeld and colleagues from the research group of Gerhard Rempe at the Max Planck Institute of Quantum Optics in Garching. In this work supported by the Institute of Photonic Sciences (Castelldefels, Spain), the team succeeded in connecting two qubit modules across a 60-meter distance in such a way that they effectively form a basic quantum computer with two qubits. “Across this distance, we perform a quantum computing operation between two independent qubit setups in different laboratories,” Daiss emphasizes. This enables the possibility to merge smaller quantum computers to a joint processing unit.
    Simply coupling distant qubits to generate entanglement between them has been achieved in the past, but now the connection can additionally be used for quantum computations. For this purpose, the researchers employed modules consisting of a single atom as a qubit, positioned between two mirrors. Between these modules, they send a single light quantum, a photon, which is transported in the optical fiber. This photon is then entangled with the quantum states of the qubits in the different modules. Subsequently, the state of one of the qubits is changed according to the measured state of the “ancilla photon,” realizing a quantum mechanical CNOT operation with a fidelity of 80 percent. A next step would be to connect more than two modules and to host more qubits in the individual modules.
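    As a reminder of what the gate itself does, here is an abstract sketch in Python with NumPy; it shows only the ideal CNOT acting on a two-qubit state, leaving out the photonic link and the ancilla-photon measurement used in the experiment.

    ```python
    import numpy as np

    # Abstract sketch of an ideal CNOT gate; the experiment realizes this operation
    # between qubits in two different laboratories via an ancilla photon.
    CNOT = np.array([[1, 0, 0, 0],   # |00> -> |00>
                     [0, 1, 0, 0],   # |01> -> |01>
                     [0, 0, 0, 1],   # |10> -> |11>
                     [0, 0, 1, 0]])  # |11> -> |10>

    plus = np.array([1, 1]) / np.sqrt(2)   # control qubit in the superposition (|0>+|1>)/sqrt(2)
    zero = np.array([1, 0])                # target qubit in |0>

    state = CNOT @ np.kron(plus, zero)     # two-qubit state after the gate
    print(state)                           # [0.707 0 0 0.707]: the entangled Bell state (|00>+|11>)/sqrt(2)
    ```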
    Higher performance quantum computers through distributed computing
    Team leader and institute director Gerhard Rempe believes the result will help advance the technology further: “Our scheme opens up a new development path for distributed quantum computing.” It could enable, for instance, the construction of a distributed quantum computer consisting of many modules with few qubits each, interconnected with the newly introduced method. This approach could circumvent the limitation of existing quantum computers, which struggle to integrate more qubits into a single setup, and could therefore allow more powerful systems.

    Story Source:
    Materials provided by Max-Planck-Gesellschaft.