More stories


    Scientists induce artificial 'magnetic texture' in graphene

    Graphene is incredibly strong, lightweight, conductive … the list of its superlative properties goes on.
    It is not, however, magnetic — a shortcoming that has stunted its usefulness in spintronics, an emerging field that scientists say could eventually rewrite the rules of electronics, leading to more powerful semiconductors, computers and other devices.
    Now, an international research team led by the University at Buffalo is reporting an advancement that could help overcome this obstacle.
    In a study published today in the journal Physical Review Letters, researchers describe how they paired a magnet with graphene, and induced what they describe as “artificial magnetic texture” in the nonmagnetic wonder material.
    “Independent of each other, graphene and spintronics each possess incredible potential to fundamentally change many aspects of business and society. But if you can blend the two together, the synergistic effects are likely to be something this world hasn’t yet seen,” says lead author Nargess Arabchigavkani, who performed the research as a PhD candidate at UB and is now a postdoctoral research associate at SUNY Polytechnic Institute.
    Additional authors represent UB, King Mongkut’s Institute of Technology Ladkrabang in Thailand, Chiba University in Japan, University of Science and Technology of China, University of Nebraska Omaha, University of Nebraska Lincoln, and Uppsala University in Sweden.


    For their experiments, researchers placed a 20-nanometer-thick magnet in direct contact with a sheet of graphene, which is a single layer of carbon atoms arranged in a two-dimensional honeycomb lattice that is less than 1 nanometer thick.
    “To give you a sense of the size difference, it’s a bit like putting a brick on a sheet of paper,” says the study’s senior author Jonathan Bird, PhD, professor and chair of electrical engineering at the UB School of Engineering and Applied Sciences.
    Researchers then placed eight electrodes in different spots around the graphene and magnet to measure their conductivity.
    The electrodes revealed a surprise — the magnet induced an artificial magnetic texture in the graphene that persisted even in areas of the graphene away from the magnet. Put simply, the intimate contact between the two objects caused the normally nonmagnetic carbon to behave differently, exhibiting magnetic properties similar to those of common magnetic materials like iron or cobalt.
    Moreover, these properties could completely overwhelm the natural properties of the graphene, even several microns away from the point where the graphene and the magnet touch. While a micron (a millionth of a meter) is incredibly small, this is a relatively large distance on the microscopic scale.
    The findings raise important questions relating to the microscopic origins of the magnetic texture in the graphene.
    Most important, Bird says, is the extent to which the induced magnetic behavior arises from the influence of spin polarization and/or spin-orbit coupling, phenomena known to be intimately connected to the magnetic properties of materials and to the emerging technology of spintronics.
    Rather than utilizing the electrical charge carried by electrons (as in traditional electronics), spintronic devices seek to exploit the unique quantum property of electrons known as spin (which is analogous to the earth spinning on its own axis). Spin offers the potential to pack more data into smaller devices, thereby increasing the power of semiconductors, quantum computers, mass storage devices and other digital electronics.
    The work was supported by funding from the U.S. Department of Energy. Additional support came from the U.S. National Science Foundation; nCORE, a wholly owned subsidiary of the Semiconductor Research Corporation; the Swedish Research Council; and the Japan Society for the Promotion of Science.

    Story Source:
    Materials provided by University at Buffalo. Original written by Cory Nealon. Note: Content may be edited for style and length.


    Light unbound: Data limits could vanish with new optical antennas

    Researchers at the University of California, Berkeley, have found a new way to harness properties of light waves that can radically increase the amount of data they carry. They demonstrated the emission of discrete twisting laser beams from antennas made up of concentric rings roughly the diameter of a human hair, small enough to be placed on computer chips.
    The new work, reported in a paper published Thursday, Feb. 25, in the journal Nature Physics, throws wide open the amount of information that can be multiplexed, or simultaneously transmitted, by a coherent light source. A common example of multiplexing is the transmission of multiple telephone calls over a single wire, but there had been fundamental limits to the number of coherent twisted lightwaves that could be directly multiplexed.
    “It’s the first time that lasers producing twisted light have been directly multiplexed,” said study principal investigator Boubacar Kanté, the Chenming Hu Associate Professor at UC Berkeley’s Department of Electrical Engineering and Computer Sciences. “We’ve been experiencing an explosion of data in our world, and the communication channels we have now will soon be insufficient for what we need. The technology we are reporting overcomes current data capacity limits through a characteristic of light called the orbital angular momentum. It is a game-changer with applications in biological imaging, quantum cryptography, high-capacity communications and sensors.”
    Kanté, who is also a faculty scientist in the Materials Sciences Division at Lawrence Berkeley National Laboratory (Berkeley Lab), has been continuing this work at UC Berkeley after having started the research at UC San Diego. The first author of the study is Babak Bahari, a former Ph.D. student in Kanté’s lab.
    Kanté said that current methods of transmitting signals through electromagnetic waves are reaching their limit. Frequency, for example, has become saturated, which is why there are only so many stations one can tune into on the radio. Polarization, where lightwaves are separated into two values — horizontal or vertical — can double the amount of information transmitted. Filmmakers take advantage of this when creating 3D movies, allowing viewers with specialized glasses to receive two sets of signals — one for each eye — to create a stereoscopic effect and the illusion of depth.
    Harnessing the potential in a vortex
    But beyond frequency and polarization is orbital angular momentum, or OAM, a property of light that has garnered attention from scientists because it offers exponentially greater capacity for data transmission. One way to think about OAM is to compare it to the vortex of a tornado.


    “The vortex in light, with its infinite degrees of freedom, can, in principle, support an unbounded quantity of data,” said Kanté. “The challenge has been finding a way to reliably produce the infinite number of OAM beams. No one has ever produced OAM beams of such high charges in such a compact device before.”
    The researchers started with an antenna, one of the most important components in electromagnetism and, they noted, central to ongoing 5G and upcoming 6G technologies. The antennas in this study are topological, which means that their essential properties are retained even when the device is twisted or bent.
    Creating rings of light
    To make the topological antenna, the researchers used electron-beam lithography to etch a grid pattern onto indium gallium arsenide phosphide, a semiconductor material, and then bonded the structure onto a surface made of yttrium iron garnet. The researchers designed the grid to form quantum wells in a pattern of three concentric circles — the largest about 50 microns in diameter — to trap photons. The design created conditions to support a phenomenon known as the photonic quantum Hall effect, which describes the movement of photons when a magnetic field is applied, forcing light to travel in only one direction in the rings.
    “People thought the quantum Hall effect with a magnetic field could be used in electronics but not in optics because of the weak magnetism of existing materials at optical frequencies,” said Kanté. “We are the first to show that the quantum Hall effect does work for light.”
    By applying a magnetic field perpendicular to their two-dimensional microstructure, the researchers successfully generated three OAM laser beams traveling in circular orbits above the surface. The study further showed that the laser beams had quantum numbers as large as 276, referring to the number of times light twists around its axis in one wavelength.
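    For reference, the standard textbook description of an OAM beam (general optics, not specific to this study) attaches a twisting azimuthal phase to the field; the integer topological charge, which reached 276 here, counts how many times the phase wraps around the beam axis per wavelength:

    ```latex
    % Standard form of an OAM-carrying beam: azimuthal phase winding
    % with integer topological charge \ell (the beam's "quantum number")
    E(r, \varphi, z) \propto A(r, z)\, e^{i \ell \varphi},
    \qquad L_z = \ell \hbar \ \text{per photon},
    \qquad \ell = 0,\ \pm 1,\ \pm 2,\ \ldots
    ```

    Because the charge can be any integer, each value is in principle a separate multiplexing channel, which is the sense in which OAM offers unbounded capacity.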
    “Having a larger quantum number is like having more letters to use in the alphabet,” said Kanté. “We’re allowing light to expand its vocabulary. In our study, we demonstrated this capability at telecommunication wavelengths, but in principle, it can be adapted to other frequency bands. Even though we created three lasers, multiplying the data rate by three, there is no limit to the possible number of beams and data capacity.”
    Kanté said the next step in his lab is to make quantum Hall rings that use electricity as power sources.


    Computer training to reduce trauma symptoms

    Computer training applied in addition to psychotherapy can potentially help reduce the symptoms of post-traumatic stress disorder (PTSD). These are the findings of researchers from Ruhr-Universität Bochum and their collaborating partners in a randomised controlled clinical trial involving 80 patients with PTSD. With the computerised training, the patients learned to appraise recurring and distressing trauma symptoms in a less negative light and instead to interpret them as a normal and understandable part of processing the trauma. A team headed by Dr. Marcella Woud and Dr. Simon Blackwell from the Department of Clinical Psychology and Psychotherapy, together with the group led by Professor Henrik Kessler from the Clinic for Psychosomatic Medicine and Psychotherapy at the LWL University Hospital Bochum, describes the results in the journal Psychotherapy and Psychosomatics, published online on 23 February 2021.
    Intrusions are a core symptom of post-traumatic stress disorder. Images of the traumatic experience suddenly and uncontrollably re-enter consciousness, often accompanied by strong sensory impressions such as the sounds or certain smells at the scene of the trauma, sometimes even making patients feel as if they are reliving the trauma. “Patients appraise the fact that they are experiencing these intrusions very negatively; they are often afraid that it is a sign that they are losing their mind,” explains Marcella Woud. “The feeling of having no control over the memories and experiencing the wide variety of intense negative emotions that often accompany intrusions make them even more distressing, which in turn reinforces negative appraisals.”
    A sentence completion task could help patients to reappraise symptoms
    Consequently, trauma therapies specifically address negative appraisals of symptoms such as intrusions. The Bochum-based team set out to establish whether a computerised training targeting these appraisals could also reduce symptoms and, at the same time, help to understand more about the underlying mechanisms of negative appraisals in PTSD. During the training, the patients are shown trauma-relevant sentences on the computer, which they have to complete. For example: “Since the incident, I sometimes react more anxiously than usual. This reaction is under_tand_ble.” Or: “I often think that I myself am to blame for the trauma. Such thoughts are un_ound_d.” The patients’ task is to fill in the word fragment’s first missing letter and by doing so to systematically appraise the statements in a more positive way. The aim is thus to learn that their symptoms are normal reactions and part of the processing of what they have experienced.
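    To make the mechanics concrete, here is a minimal sketch of one such training trial, using the two example sentences quoted above; the trial flow is our reading of the task description, not the study's actual software.

    ```python
    # Minimal sketch of a Cognitive Bias Modification-Appraisal trial,
    # based on the task description above (not the study's software).
    # The patient resolves a word fragment so the sentence ends in a
    # benign, normalizing appraisal of their symptom.

    TRIALS = [
        # (prompt shown to the patient, word fragment, completed word)
        ("Since the incident, I sometimes react more anxiously than usual. "
         "This reaction is", "under_tand_ble", "understandable"),
        ("I often think that I myself am to blame for the trauma. "
         "Such thoughts are", "un_ound_d", "unfounded"),
    ]

    def run_trial(prompt: str, fragment: str, solution: str) -> bool:
        print(f"{prompt} {fragment}")
        answer = input("First missing letter: ").strip().lower()
        # The first gap is where the fragment shows an underscore.
        first_gap = fragment.index("_")
        correct = answer == solution[first_gap]
        print(solution if correct else "Try again.")
        return correct

    if __name__ == "__main__":
        for trial in TRIALS:
            run_trial(*trial)
    ```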
    Approximately half of the study participants underwent this “Cognitive Bias Modification-Appraisal” training, while the other half received a placebo control training — a visual concentration training — which was not designed to change negative appraisals. Both trainings took place during the first two weeks of the patients’ treatment in the clinic, with four sessions each week. One session lasted about 20 minutes. During and after the inpatient treatment, various measurements were collected to record any changes to the symptoms.
    Fewer trauma symptoms
    Patients who had participated in the appraisal training subsequently rated their symptoms such as intrusions and trauma-relevant thoughts less negatively than patients in the control group, and they also showed fewer other trauma-relevant symptoms after the training. “This leads us to conclude that the training appears to work — at least in the short-term,” says Marcella Woud. “Our study was not designed to examine long-term effects, which is something we will have to do in future studies on top of studying the training’s mechanisms in more detail.”

    Story Source:
    Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.


    AI identifies social bias trends in Bollywood, Hollywood movies

    Babies whose births were depicted in Bollywood films from the 1950s and 60s were more often than not boys; in today’s films, boy and girl newborns are about evenly split. In the 50s and 60s, dowries were socially acceptable; today, not so much. And Bollywood’s conception of beauty has remained consistent through the years: beautiful women have fair skin.
    Fans and critics of Bollywood — the popular name for a $2.1 billion film industry centered in Mumbai, India — might have some inkling of all this, particularly as movies often reflect changes in the culture. But these insights came via an automated computer analysis designed by Carnegie Mellon University computer scientists.
    The researchers, led by Kunal Khadilkar and Ashiqur R. KhudaBukhsh of CMU’s Language Technologies Institute (LTI), gathered 100 Bollywood movies from each of the past seven decades, along with 100 of the top-grossing Hollywood movies from the same periods. They then used statistical language models to analyze the subtitles of those 1,400 films for gender and social biases, looking for such factors as which words are closely associated with each other.
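    The study's statistical language models aren't detailed here, but the underlying idea of tracking which words appear near one another can be sketched in a few lines of Python; the window size and the toy subtitle lines below are illustrative assumptions, not the paper's data.

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy sketch of word-association counting over subtitle text.
    # The real corpus is 1,400 films; the 5-token window is arbitrary.
    def cooccurrences(lines, window=5):
        counts = Counter()
        for line in lines:
            tokens = line.lower().split()
            for i, j in combinations(range(len(tokens)), 2):
                if j - i <= window:
                    counts[frozenset((tokens[i], tokens[j]))] += 1
        return counts

    # Invented stand-in lines, loosely echoing the dowry findings below.
    subtitles = [
        "the dowry is a loan we cannot repay",
        "sell the jewelry to settle the dowry debt",
    ]
    neighbors = Counter()
    for pair, n in cooccurrences(subtitles).items():
        if "dowry" in pair and len(pair) == 2:
            (other,) = pair - {"dowry"}
            neighbors[other] += n
    print(neighbors.most_common(5))
    ```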
    “Most cultural studies of movies might consider five or 10 movies,” said Khadilkar, a master’s student in LTI. “Our method can look at 2,000 movies in a matter of days.”
    It’s a method that enables people to study cultural issues with much more precision, said Tom Mitchell, Founders University Professor in the School of Computer Science and a co-author of the study.
    “We’re talking about statistical, automated analysis of movies at scale and across time,” Mitchell said. “It gives us a finer probe for understanding the cultural themes implicit in these films.” The same natural language processing tools might be used to rapidly analyze hundreds or thousands of books, magazine articles, radio transcripts or social media posts, he added.
    For instance, the researchers assessed beauty conventions in movies by using a so-called cloze test. Essentially, it’s a fill-in-the-blank exercise: “A beautiful woman should have BLANK skin.” A language model normally would predict “soft” as the answer, they noted. But when the model was trained with the Bollywood subtitles, the consistent prediction became “fair.” The same thing happened when Hollywood subtitles were used, though the bias was less pronounced.
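    The cloze probe itself is easy to reproduce with an off-the-shelf masked language model. The sketch below uses Hugging Face's fill-mask pipeline with a generic BERT checkpoint as a stand-in; the study's models were trained on the film subtitles, so its rankings will differ.

    ```python
    from transformers import pipeline

    # Cloze-style probe with a generic pretrained model (a stand-in
    # for the study's subtitle-trained models). The pipeline ranks
    # candidate words for the masked position.
    fill = pipeline("fill-mask", model="bert-base-uncased")

    for pred in fill("A beautiful woman should have [MASK] skin.", top_k=5):
        print(f'{pred["token_str"]:>10}  {pred["score"]:.3f}')
    ```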
    To assess the prevalence of male characters, the researchers used a metric called the Male Pronoun Ratio (MPR), which compares the occurrence of male pronouns such as “he” and “him” with the total occurrences of male and female pronouns. From 1950 through today, the MPR for Bollywood and Hollywood movies ranged from roughly 60 to 65. By contrast, the MPR for a selection of Google Books dropped from near 75 in the 1950s to parity, about 50, in the 2020s.
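    Since the metric is fully specified in the text (male pronoun occurrences as a share of all gendered pronoun occurrences, scaled here to 0-100), a minimal sketch is straightforward; the exact pronoun inventory below is our assumption, not the paper's.

    ```python
    import re

    # Male Pronoun Ratio (MPR): male pronoun occurrences as a share
    # of all gendered pronoun occurrences, scaled to 0-100. The
    # study's exact pronoun lists may differ from these.
    MALE = {"he", "him", "his", "himself"}
    FEMALE = {"she", "her", "hers", "herself"}

    def mpr(text: str) -> float:
        tokens = re.findall(r"[a-z']+", text.lower())
        m = sum(t in MALE for t in tokens)
        f = sum(t in FEMALE for t in tokens)
        return 100.0 * m / (m + f) if (m + f) else float("nan")

    print(mpr("He gave her his word, and she held him to it."))  # 60.0
    ```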
    Dowries — monetary or property gifts from a bride’s family to the groom’s — were common in India before they were outlawed in the early 1960s. Looking at words associated with dowry over the years, the researchers found such words as “loan,” “debt” and “jewelry” in Bollywood films of the 50s, which suggested compliance. By the 1970s, other words, such as “consent” and “responsibility,” began to appear. Finally, in the 2000s, the words most closely associated with dowry — including “trouble,” “divorce” and “refused” — indicate noncompliance or its consequences.
    “All of these things we kind of knew,” said KhudaBukhsh, an LTI project scientist, “but now we have numbers to quantify them. And we can also see the progress over the last 70 years as these biases have been reduced.”
    A research paper by Khadilkar, KhudaBukhsh and Mitchell was presented at the Association for the Advancement of Artificial Intelligence virtual conference earlier this month.

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice. Note: Content may be edited for style and length.


    Molecular bridges power up printed electronics

    The exfoliation of graphite into graphene layers inspired the investigation of thousands of layered materials: amongst them transition metal dichalcogenides (TMDs). These semiconductors can be used to make conductive inks to manufacture printed electronic and optoelectronic devices. However, defects in their structure may hinder their performance. Now, Graphene Flagship researchers have overcome these hurdles by introducing ‘molecular bridges’: small molecules that interconnect the TMD flakes, thereby boosting the conductivity and overall performance.
    The results, published in Nature Nanotechnology, come from a multidisciplinary collaboration between Graphene Flagship partners the University of Strasbourg and CNRS, France; AMBER and Trinity College Dublin, Ireland; and the Cambridge Graphene Centre, University of Cambridge, UK. The molecular bridges increase the carrier mobility — a physical parameter related to the electrical conductivity — tenfold.
    TMD inks are used in many fields, from electronics and sensors to catalysis and biomedicine. They are usually manufactured using liquid-phase exfoliation, a technique developed by the Graphene Flagship that allows for the mass production of graphene and layered materials. But, although this technology yields high volumes of product, it has some limitations. The exfoliation process may create defects that affect the layered material’s performance, particularly when it comes to conducting electricity.
    Inspired by organic electronics — the field behind successful technologies such as organic light-emitting diodes (OLEDs) and low-cost solar cells — the Graphene Flagship team found a solution: molecular bridges. With these chemical structures, the researchers managed to kill two birds with one stone. First, they connected TMD flakes to one another, creating a network that facilitates the charge transport and conductivity. The molecular bridges double up as walls, healing the chemical defects at the edges of the flakes and eliminating electrical vacancies that would otherwise promote energy loss.
    Furthermore, molecular bridges provide researchers with a new tool to tailor the conductivity of TMD inks on demand. If the bridge is a conjugated molecule — a structure with double bonds or aromatic rings — the carrier mobility is higher than when using saturated molecules, such as hydrocarbons. “The structure of the molecular bridge plays a key role,” explains Paolo Samorì, from Graphene Flagship partner the University of Strasbourg, France, who led the study. “We use molecules called di-thiols, which you can readily buy from any chemical supplier’s catalogue,” he adds. Their available structural diversity opens a world of possibilities to regulate the conductivity, adapting it to each specific application. “Molecular bridges will help us integrate many new functions in TMD-based devices,” continues Samorì. “These inks can be printed on any surface, like plastic, fabric or paper, enabling a whole variety of new circuitry and sensors for flexible electronics and wearables.”
    Maria Smolander, Graphene Flagship Work Package Leader for Flexible Electronics, adds: “This work is of high importance as a crucial step towards the full exploitation of solution-based fabrication methods like printing in flexible electronics. The use of the covalently bound bridges improves both the structural and electrical properties of the thin layers based on TMD flakes.”
    Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship and Chair of its Management Panel, adds: “The Graphene Flagship pioneered both liquid phase exfoliation and inkjet printing of graphene and layered materials. These techniques can produce and handle large volumes of materials. This paper is a key step to make semiconducting layered materials available for printed, flexible and wearable electronics, and yet again pushes forward the state of the art.”

    Story Source:
    Materials provided by Graphene Flagship. Original written by Fernando Gomollón-Bel. Note: Content may be edited for style and length.


    Baby mice have a skill that humans want, and this microchip might help us learn it

    Baby mice might be small, but they’re tough, too.
    For their first seven days of life, they have the special ability to regenerate damaged heart tissue.
    Humans, on the other hand, aren’t so lucky: any heart injuries we suffer could lead to permanent damage. But what if we could learn to repair our hearts, just like baby mice?
    A team of researchers led by UNSW Sydney has developed a microchip that can help scientists study the regenerative potential of mouse heart cells. This microchip — which combines microengineering with biomedicine — could help pave the way for new regenerative heart medicine research.
    “We’ve developed a simple, reliable, cheap and fast way to identify and separate these important mouse heart cells,” says lead author Dr Hossein Tavassoli, a biomedical engineer and stem cell researcher at UNSW Medicine & Health who conducted this work as part of his doctoral thesis.
    “Our method uses a microchip that’s easy to fabricate and can be made in any laboratory in the world.”
    The process of identifying and separating mouse heart cells is rather complex.


    First, scientists need to separate the right kind of heart cells (called proliferative cardiomyocytes) from other types of cells present in the heart.
    Their next challenge is keeping the cells alive.
    “Newborn mouse heart cells (called proliferative cardiomyocytes) are very sensitive,” says Dr Vashe Chandrakanthan, a senior research fellow at UNSW Medicine & Health and co-senior author of the study.
    “Only about 20 per cent usually survive the conventional isolation and separation process. If we want to study these cells, we need to isolate them before they undergo cell death.”
    Dr Tavassoli says that this new method is much more efficient.


    “We reduced the stress applied on these cells by minimising the isolation and processing time,” he says. “Our method can purify millions of cells in less than 10 minutes.
    “Almost all of the cells survived when we used our microfluidic chip — over 90 per cent.”
    The spiral-shaped device is a microfluidic chip — that is, a chip designed to handle liquids on a tiny scale. It filters cells according to their size, separating the cardiomyocytes from other cells. The chip costs less than $500 to produce, making it cheaper than other isolation and separation methods.
    This tool will make it easier for researchers to study how baby mice repair their hearts — and whether humans might be able to use the same technique.
    “Heart disease is the number one killer in the world,” says Dr Tavassoli. “In Australia, someone dies of heart disease every 12 minutes, and every four hours a baby is born with a heart defect.
    “We hope that our device will help accelerate heart disease research.”
    Characterising mouse heart cells
    Once the heart cells were separated from other cells with the help of their chip, the researchers seized the opportunity to study the cells’ physico-mechanical properties — that is, the way they respond to force.
    This involved asking questions like ‘How do these individual heart cells beat?’, ‘Do the cells have distinct features?’ and ‘What are their differences in size, shape and elasticity?’.
    The findings could provide new insights for developing materials that repair heart tissue, like cardiac patches, scaffolds and hydrogels.
    “The fast, large-scale characterisation of cells’ physico-mechanical features is a relatively new field of research,” says Dr Tavassoli, who originally trained as an engineer before specialising in medicine.
    “This is the first time microfluidic technology has been used to study mechanical properties of baby mouse heart cells.”
    A multipurpose microchip
    Dr Chandrakanthan says that even though the microchip was created for baby mouse heart cells, it could potentially be adapted for use with other cell types.
    “The principles are compatible with isolating cardiomyocytes from mouse hearts of all ages,” he says.
    “We could potentially also use this method to separate not only the heart cells, but all sorts of cells from different organs.”
    Dr Tavassoli says this method could also help other areas of medical research, including cardiac biology, drug discovery and nanoengineering. He is currently conducting research at the Garvan Institute and Lowy Cancer Research Centre on how this method could help cancer diagnosis.
    “This microchip opens up the opportunity for new discoveries by researchers all over the world,” he says.


    Pushing computing to the edge by rethinking microchips' design

    Responding to artificial intelligence’s exploding demands on computer networks, Princeton University researchers in recent years have radically increased the speed and slashed the energy use of specialized AI systems. Now, the researchers have moved their innovation closer to widespread use by creating co-designed hardware and software that will allow designers to blend these new types of systems into their applications.
    “Software is a critical part of enabling new hardware,” said Naveen Verma, a professor of electrical and computer engineering at Princeton and a leader of the research team. “The hope is that designers can keep using the same software system — and just have it work ten times faster or more efficiently.”
    By cutting both power demand and the need to exchange data from remote servers, systems made with the Princeton technology will be able to bring artificial intelligence applications, such as piloting software for drones or advanced language translators, to the very edge of computing infrastructure.
    “To make AI accessible to the real-time and often personal process all around us, we need to address latency and privacy by moving the computation itself to the edge,” said Verma, who is the director of the University’s Keller Center for Innovation in Engineering Education. “And that requires both energy efficiency and performance.”
    Two years ago, the Princeton research team fabricated a new chip designed to improve the performance of neural networks, which underpin today’s artificial intelligence. The chip, which performed tens to hundreds of times better than other advanced microchips, marked a revolutionary approach by several measures. In fact, the chip was so different from anything being used for neural nets that it posed a challenge for the developers.
    “The chip’s major drawback is that it uses a very unusual and disruptive architecture,” Verma said in a 2018 interview. “That needs to be reconciled with the massive amount of infrastructure and design methodology that we have and use today.”
    Over the next two years, the researchers worked to refine the chip and to create a software system that would allow artificial intelligence systems to take advantage of the new chip’s speed and efficiency. In a presentation to the International Solid-State Circuits Virtual Conference on Feb. 22, lead author Hongyang Jia, a graduate student in Verma’s research lab, described how the new software would allow the new chips to work with different types of networks and allow the systems to be scalable both in hardware and execution of software.


    “It is programmable across all these networks,” Verma said. “The networks can be very big, and they can be very small.”
    Verma’s team developed the new chip in response to growing demand for artificial intelligence and to the burden AI places on computer networks. Artificial intelligence, which allows machines to mimic cognitive functions such as learning and judgement, plays a critical role in new technologies such as image recognition, translation, and self-driving vehicles. Ideally, the computation for technology such as drone navigation would be based on the drone itself, rather than in a remote network computer. But digital microchips’ power demand and need for memory storage can make designing such a system difficult. Typically, the solution places much of the computation and memory on a remote server, which communicates wirelessly with the drone. But this adds to the demands on the communications system, and it introduces security problems and delays in sending instructions to the drone.
    To approach the problem, the Princeton researchers rethought computing in several ways. First, they designed a chip that conducts computation and stores data in the same place. This technique, called in-memory computing, slashes the energy and time used to exchange information with dedicated memory. The technique boosts efficiency, but it introduces new problems: because it crams the two functions into a small area, in-memory computing relies on analog operation, which is sensitive to corruption by sources such as voltage fluctuation and temperature spikes. To solve this problem, the Princeton team designed their chips using capacitors rather than transistors. The capacitors, devices that store an electrical charge, can be manufactured with greater precision and are not highly affected by shifts in voltage. Capacitors can also be very small and placed on top of memory cells, increasing processing density and cutting energy needs.
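    A toy numerical sketch may help here: an analog in-memory array sums many per-cell contributions on a shared line, so per-cell imprecision (the voltage and temperature sensitivity mentioned above) accumulates in the result, and more precisely manufactured storage elements, such as capacitors, shrink that error. The noise model and numbers below are illustrative assumptions, not measurements of the Princeton chip.

    ```python
    import numpy as np

    # Toy model of an analog in-memory multiply-accumulate (MAC).
    # Each memory cell contributes w_i * x_i as charge on a shared
    # line; per-cell analog error is modeled as Gaussian noise.
    rng = np.random.default_rng(0)
    weights = rng.uniform(-1, 1, size=64)  # values stored in the array
    inputs = rng.uniform(0, 1, size=64)    # activations applied to it

    ideal = weights @ inputs               # exact (digital) result

    def analog_mac(w, x, sigma):
        return (w * x + rng.normal(0, sigma, size=w.size)).sum()

    for sigma in (0.05, 0.005):            # coarse vs. precise cells
        err = abs(analog_mac(weights, inputs, sigma) - ideal)
        print(f"per-cell noise {sigma}: accumulated error ~ {err:.4f}")
    ```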
    But even after making analog operation robust, many challenges remained. The analog core needed to be efficiently integrated into a mostly digital architecture, so that it could be combined with the other functions and software needed to make practical systems work. A digital system uses off-and-on switches to represent the ones and zeros that computer engineers use to write the algorithms that make up computer programming. An analog computer takes a completely different approach. In an article in IEEE Spectrum, Columbia University Professor Yannis Tsividis described an analog computer as a physical system designed to be governed by equations identical to those the programmer wants to solve. A slide rule, for example, is a very simple analog computer. Tsividis says that a bucket and a hose can serve as an analog computer for certain calculus problems: to solve an integration function, you could do the math, or you could just measure the water in the bucket.
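    The bucket analogy can even be mimicked numerically: the water level is the running integral of the flow rate, so "measuring the bucket" at time T reads off the integral directly. The sketch below makes that correspondence explicit; the cosine flow profile is an arbitrary choice.

    ```python
    import math

    # Tsividis's bucket-and-hose analogy, mimicked numerically: the
    # water level in the bucket is the running integral of the hose's
    # flow rate. The flow profile f(t) = cos(t) is arbitrary.
    def fill_bucket(flow, T, steps=100_000):
        dt = T / steps
        level = 0.0
        for k in range(steps):
            level += flow(k * dt) * dt  # water added this time slice
        return level

    T = 1.0
    print(fill_bucket(math.cos, T))  # "measure the bucket": ~0.8415
    print(math.sin(T))               # analytic integral of cos on [0, T]
    ```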
    Analog computing was the dominant technology through the Second World War. It was used to perform functions from predicting tides to directing naval guns. But analog systems were cumbersome to build and usually required highly trained operators. After the emergence of the transistor, digital systems proved more efficient and adaptable. But new technologies and new circuit designs have allowed engineers to eliminate many shortcomings of the analog systems. For applications such as neural networks, the analog systems offer real advantages. Now, the question is how to combine the best of both worlds. Verma points out that the two types of systems are complementary. Digital systems play a central role, while neural networks using analog chips can run specialized operations extremely fast and efficiently. That is why developing a software system that can integrate the two technologies seamlessly and efficiently is such a critical step.
    “The idea is not to put the entire network into in-memory computing,” he said. “You need to integrate the capability to do all the other stuff and to do it in a programmable way.”


    Data transfer system connects silicon chips with a hair's-width cable

    Researchers have developed a data transfer system that can transmit information 10 times faster than a USB cable. The new link pairs high-frequency silicon chips with a polymer cable as thin as a strand of hair. The system may one day boost energy efficiency in data centers and lighten the loads of electronics-rich spacecraft.
    The research was presented at this month’s IEEE International Solid-State Circuits Conference. The lead author is Jack Holloway ’03, MNG ’04, who completed his PhD in MIT’s Department of Electrical Engineering and Computer Science (EECS) last fall and currently works for Raytheon. Co-authors include Ruonan Han, associate professor and Holloway’s PhD adviser in EECS, and Georgios Dogiamis, a senior researcher at Intel.
    The need for snappy data exchange is clear, especially in an era of remote work. “There’s an explosion in the amount of information being shared between computer chips — cloud computing, the internet, big data. And a lot of this happens over conventional copper wire,” says Holloway. But copper wires, like those found in USB or HDMI cables, are power-hungry — especially when dealing with heavy data loads. “There’s a fundamental tradeoff between the amount of energy burned and the rate of information exchanged.” Despite a growing demand for fast data transmission (beyond 100 gigabits per second) through conduits longer than a meter, Holloway says the typical solution has been “increasingly bulky and costly” copper cables.
    One alternative to copper wire is fiber-optic cable, though that has its own problems. Whereas copper wires use electrical signaling, fiber-optics use photons. That allows fiber-optics to transmit data quickly and with little energy dissipation. But silicon computer chips generally don’t play well with photons, making interconnections between fiber-optic cables and computers a challenge. “There’s currently no way to efficiently generate, amplify, or detect photons in silicon,” says Holloway. “There are all kinds of expensive and complex integration schemes, but from an economics perspective, it’s not a great solution.” So, the researchers developed their own.
    The team’s new link draws on benefits of both copper and fiber optic conduits, while ditching their drawbacks. “It’s a great example of a complementary solution,” says Dogiamis. Their conduit is made of plastic polymer, so it’s lighter and potentially cheaper to manufacture than traditional copper cables. But when the polymer link is operated with sub-terahertz electromagnetic signals, it’s far more energy-efficient than copper in transmitting a high data load. The new link’s efficiency rivals that of fiber-optic, but has a key advantage: “It’s compatible directly with silicon chips, without any special manufacturing,” says Holloway.
    The team engineered such low-cost chips to pair with the polymer conduit. Typically, silicon chips struggle to operate at sub-terahertz frequencies. Yet the team’s new chips generate those high-frequency signals with enough power to transmit data directly into the conduit. That clean connection from the silicon chips to the conduit means the overall system can be manufactured with standard, cost-effective methods, the researchers say.
    The new link also beats out copper and fiber optic in terms of size. “The cross-sectional area of our cable is 0.4 millimeters by a quarter millimeter,” says Han. “So, it’s super tiny, like a strand of hair.” Despite its slim size, it can carry a hefty load of data, since it sends signals over three different parallel channels, separated by frequency. The link’s total bandwidth is 105 gigabits per second, nearly an order of magnitude faster than a copper-based USB cable. Dogiamis says the cable could “address the bandwidth challenges as we see this megatrend toward more and more data.”
    In future work, Han hopes to make the polymer conduits even faster by bundling them together. “Then the data rate will be off the charts,” he says. “It could be one terabit per second, still at low cost.”
    The researchers suggest “data-dense” applications, like server farms, could be early adopters of the new links, since they could dramatically cut data centers’ high energy demands. The link could also be a key solution for the aerospace and automotive industries, which place a premium on small, light devices. And one day, the link could replace the consumer electronic cables in homes and offices, thanks to the link’s simplicity and speed. “It’s far less costly than [copper or fiber optic] approaches, with significantly wider bandwidth and lower loss than conventional copper solutions,” says Holloway. “So, high fives all round.”
    This research was funded, in part, by Intel, Raytheon, the Naval Research Laboratory, and the Office of Naval Research.