More stories

  • Engineers create hybrid chips with processors and memory to run AI on battery-powered devices

    Smartwatches and other battery-powered electronics would be even smarter if they could run AI algorithms. But efforts to build AI-capable chips for mobile devices have so far hit a wall: the so-called “memory wall” separating the data-processing and memory chips that must work together to meet AI’s massive and continually growing computational demands.
    “Transactions between processors and memory can consume 95 percent of the energy needed to do machine learning and AI, and that severely limits battery life,” said computer scientist Subhasish Mitra, senior author of a new study published in Nature Electronics.
    Now, a team that includes Stanford computer scientist Mary Wootters and electrical engineer H.-S. Philip Wong has designed a system that can run AI tasks faster, and with less energy, by harnessing eight hybrid chips, each with its own data processor built right next to its own memory storage.
    This paper builds on the team’s prior development of a new memory technology, called RRAM, that stores data even when power is switched off — like flash memory — only faster and more energy efficiently. Their RRAM advance enabled the Stanford researchers to develop an earlier generation of hybrid chips that worked alone. Their latest design incorporates a critical new element: algorithms that meld the eight separate hybrid chips into one energy-efficient AI-processing engine.
    “If we could have built one massive, conventional chip with all the processing and memory needed, we’d have done so, but the amount of data it takes to solve AI problems makes that a dream,” Mitra said. “Instead, we trick the hybrids into thinking they’re one chip, which is why we call this the Illusion System.”
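    The mapping algorithms themselves are detailed in the Nature Electronics paper; as a rough, hypothetical illustration of the underlying idea (not the Illusion System’s actual method, and with all names made up), the sketch below assigns the layers of a toy neural network to eight imaginary processor-plus-memory chips so that each chip computes against weights held in its own local memory and only activations cross chip boundaries.

```python
# Illustrative sketch only: partition a small multilayer network across
# eight hypothetical processor+memory chips so that weights stay in local
# memory and only activations move between chips. This is NOT the
# Illusion System's actual mapping algorithm; all names here are made up.
import numpy as np

NUM_CHIPS = 8
rng = np.random.default_rng(0)

# A toy 8-layer network; in the Illusion idea, each chip would hold the
# weights of its assigned layers in on-chip RRAM.
layer_sizes = [256, 512, 512, 512, 256, 256, 128, 64, 10]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

# Naive mapping: one layer per chip (a real mapper would balance compute
# load against each chip's memory capacity).
chip_of_layer = [i % NUM_CHIPS for i in range(len(weights))]

def run_inference(x):
    """Run the toy network while counting values shipped between chips."""
    off_chip_traffic = 0
    current_chip = chip_of_layer[0]
    for layer, w in enumerate(weights):
        if chip_of_layer[layer] != current_chip:
            off_chip_traffic += x.size      # activation sent to the next chip
            current_chip = chip_of_layer[layer]
        x = np.maximum(x @ w, 0.0)          # compute happens next to the weights
    return x, off_chip_traffic

out, traffic = run_inference(rng.standard_normal(layer_sizes[0]))
print(f"output size: {out.size}, values sent between chips: {traffic}")
```

    Counting only the activations that cross chip boundaries makes the energy argument concrete: the weights, which dominate the data volume, never leave the chip that stores them.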
    The researchers developed Illusion as part of the Electronics Resurgence Initiative (ERI), a $1.5 billion program sponsored by the Defense Advanced Research Projects Agency. DARPA, which helped spawn the internet more than 50 years ago, is supporting research investigating workarounds to Moore’s Law, which has driven electronic advances by shrinking transistors. But transistors can’t keep shrinking forever.
    “To surpass the limits of conventional electronics, we’ll need new hardware technologies and new ideas about how to use them,” Wootters said.
    The Stanford-led team built and tested its prototype with help from collaborators at the French research institute CEA-Leti and at Nanyang Technological University in Singapore. The team’s eight-chip system is just the beginning. In simulations, the researchers showed how systems with 64 hybrid chips could run AI applications seven times faster than current processors, using one-seventh as much energy.
    Such capabilities could one day enable Illusion Systems to become the brains of augmented and virtual reality glasses that would use deep neural networks to learn by spotting objects and people in the environment, and provide wearers with contextual information — imagine an AR/VR system to help birdwatchers identify unknown specimens.
    Stanford graduate student Robert Radway, who is first author of the Nature Electronics study, said the team also developed new algorithms to recompile existing AI programs, written for today’s processors, to run on the new multi-chip systems. Collaborators from Facebook helped the team test AI programs that validated their efforts. Next steps include increasing the processing and memory capabilities of individual hybrid chips and demonstrating how to mass produce them cheaply.
    “The fact that our fabricated prototype is working as we expected suggests we’re on the right track,” said Wong, who believes Illusion Systems could be ready for market within three to five years.
    This research was supported by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, the Semiconductor Research Corporation, the Stanford SystemX Alliance and Intel Corporation.

    Story Source:
    Materials provided by Stanford School of Engineering. Original written by Tom Abate. Note: Content may be edited for style and length.

  • Robot displays a glimmer of empathy to a partner robot

    Like a longtime couple who can predict each other’s every move, a Columbia Engineering robot has learned to predict its partner robot’s future actions and goals based on just a few initial video frames.
    When primates (humans included) are cooped up together for a long time, we quickly learn to predict the near-term actions of our roommates, co-workers or family members. Our ability to anticipate the actions of others makes it easier for us to successfully live and work together. In contrast, even the most intelligent and advanced robots have remained notoriously inept at this sort of social communication. This may be about to change.
    The study, conducted at Columbia Engineering’s Creative Machines Lab led by Mechanical Engineering Professor Hod Lipson, is part of a broader effort to endow robots with the ability to understand and anticipate the goals of other robots, purely from visual observations.
    The researchers first built a robot and placed it in a playpen roughly 3×2 feet in size. They programmed the robot to seek and move towards any green circle it could see. But there was a catch: sometimes the robot could see a green circle in its camera and move directly towards it. But other times, the green circle would be occluded by a tall red cardboard box, in which case the robot would move towards a different green circle, or not move at all.
    After observing its partner puttering around for two hours, the observing robot began to anticipate its partner’s goal and path. The observing robot was eventually able to predict its partner’s goal and path 98 out of 100 times, across varying situations — without being told explicitly about the partner’s visibility handicap.
    “Our initial results are very exciting,” says Boyuan Chen, lead author of the study, which was conducted in collaboration with Carl Vondrick, assistant professor of computer science, and published today in Scientific Reports. “Our findings begin to demonstrate how robots can see the world from another robot’s perspective. The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy.”
    When they designed the experiment, the researchers expected that the Observer Robot would learn to make predictions about the Subject Robot’s near-term actions. What the researchers didn’t expect, however, was how accurately the Observer Robot could foresee its colleague’s future “moves” with only a few seconds of video as a cue.
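    The study itself trained the observer to output whole images of its partner’s future trajectory; purely as a hypothetical, stripped-down illustration of the same learning setup (not the Columbia team’s architecture, and using fabricated stand-in data), the sketch below trains a tiny network to regress a partner’s goal position from a few flattened observation frames.

```python
# Toy illustration (not the Columbia team's model): learn to predict a
# partner robot's goal position from a few initial observation frames.
# The actual study predicted images of the partner's future path; here a
# small network simply regresses an (x, y) goal from fabricated frames.
import numpy as np

rng = np.random.default_rng(1)
FRAME_PIXELS, N_FRAMES, HIDDEN = 16 * 16, 3, 64

def make_episode():
    """Fabricated stand-in data: random frames plus a goal derived from them."""
    frames = rng.random(N_FRAMES * FRAME_PIXELS)
    goal = np.array([frames[:10].mean(), frames[10:20].mean()])  # hidden rule
    return frames, goal

# Tiny two-layer regressor trained with plain stochastic gradient descent.
W1 = rng.standard_normal((N_FRAMES * FRAME_PIXELS, HIDDEN)) * 0.05
W2 = rng.standard_normal((HIDDEN, 2)) * 0.05

for step in range(2000):
    x, y = make_episode()
    h = np.tanh(x @ W1)
    pred = h @ W2
    err = pred - y
    # Backpropagate the squared error through both layers.
    gW2 = np.outer(h, err)
    gW1 = np.outer(x, (err @ W2.T) * (1 - h ** 2))
    W2 -= 0.01 * gW2
    W1 -= 0.01 * gW1

x, y = make_episode()
print("true goal", y, "predicted", np.tanh(x @ W1) @ W2)
```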
    The researchers acknowledge that the behaviors exhibited by the robot in this study are far simpler than the behaviors and goals of humans. They believe, however, that this may be the beginning of endowing robots with what cognitive scientists call “Theory of Mind” (ToM). At about age three, children begin to understand that others may have different goals, needs and perspectives than they do. This can lead to playful activities such as hide and seek, as well as more sophisticated manipulations like lying. More broadly, ToM is recognized as a key distinguishing hallmark of human and primate cognition, and a factor that is essential for complex and adaptive social interactions such as cooperation, competition, empathy, and deception.
    In addition, humans are still better than robots at describing their predictions using verbal language. The researchers had the observing robot make its predictions in the form of images, rather than words, in order to avoid becoming entangled in the thorny challenges of human language. Yet, Lipson speculates, a robot’s ability to predict future actions visually is not unique: “We humans also think visually sometimes. We frequently imagine the future in our mind’s eye, not in words.”
    Lipson acknowledges that there are many ethical questions. The technology will make robots more resilient and useful, but when robots can anticipate how humans think, they may also learn to manipulate those thoughts.
    “We recognize that robots aren’t going to remain passive instruction-following machines for long,” Lipson says. “Like other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check, so that we can all benefit.”

  • New statistical method exponentially increases ability to discover genetic insights

    Pleiotropy analysis, which provides insight on how individual genes result in multiple characteristics, has become increasingly valuable as medicine continues to lean into mining genetics to inform disease treatments. Privacy stipulations, though, make it difficult to perform comprehensive pleiotropy analysis because individual patient data often can’t be easily and regularly shared between sites. However, a statistical method called Sum-Share, developed at Penn Medicine, can pull summary information from many different sites to generate significant insights. In a test of the method, published in Nature Communications, Sum-Share’s developers were able to detect more than 1,700 DNA-level variations that could be associated with five different cardiovascular conditions. If patient-specific information from just one site had been used, as is the norm now, only one variation would have been determined.
    “Full research of pleiotropy has been difficult to accomplish because of restrictions on merging patient data from electronic health records at different sites, but we were able to figure out a method that turns summary-level data into results that are exponentially greater than what we could accomplish with individual-level data currently available,” said one of the study’s senior authors, Jason Moore, PhD, director of the Institute for Biomedical Informatics and a professor of Biostatistics, Epidemiology and Informatics. “With Sum-Share, we greatly increase our abilities to unveil the genetic factors behind health conditions that range from those dealing with heart health, as was the case in this study, to mental health, with many different applications in between.”
    Sum-Share is powered by bio-banks that pool de-identified patient data, including genetic information, from electronic health records (EHRs) for research purposes. For their study, Moore, co-senior author Yong Chen, PhD, an associate professor of Biostatistics, lead author Ruowang Li, PhD, a postdoctoral fellow at Penn, and their colleagues used eMERGE to pull seven different sets of EHRs to run through Sum-Share in an attempt to detect shared genetic effects among five cardiovascular-related conditions: obesity, hypothyroidism, type 2 diabetes, hypercholesterolemia, and hyperlipidemia.
    With Sum-Share, the researchers found 1,734 different single-nucleotide polymorphisms (SNPs, which are differences in the building blocks of DNA) that could be tied to the five conditions. Using results from just one site’s EHR, by contrast, only one SNP could be tied to the conditions.
    Additionally, they determined that their findings were identical whether they used summary-level data or individual-level data in Sum-Share, making it a “lossless” system.
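    Sum-Share itself jointly models multiple binary phenotypes across sites; as a much simpler, hypothetical illustration of why sharing summaries can be lossless, the sketch below has each of seven sites share only its X'X and X'y matrices, and combining those summaries reproduces exactly the regression estimate that pooling all individual-level records would give.

```python
# Illustration of "lossless" summary-level integration (not Sum-Share's
# actual model, which jointly analyses multiple binary phenotypes):
# each site shares only X'X and X'y; combining those summaries gives
# exactly the regression fit that pooled individual-level data would.
import numpy as np

rng = np.random.default_rng(2)
beta_true = np.array([0.5, -0.3, 0.8])

sites = []
for _ in range(7):                              # seven sites, as in the eMERGE analysis
    X = rng.standard_normal((500, 3))           # stand-in genotypes/covariates
    y = X @ beta_true + rng.standard_normal(500) * 0.1
    sites.append((X, y))

# Summary-level route: each site shares only small matrices.
xtx = sum(X.T @ X for X, _ in sites)
xty = sum(X.T @ y for X, y in sites)
beta_summary = np.linalg.solve(xtx, xty)

# Individual-level route: pool all raw data (what privacy rules prevent).
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
beta_pooled, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)

print(np.allclose(beta_summary, beta_pooled))   # True: identical estimates
```

    In the same spirit, the quantities exchanged between sites are compact summaries rather than patient-level records, which is what allows the analysis to respect privacy rules without discarding statistical information.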
    To gauge the effectiveness of Sum-Share, the team then compared their method’s results with those of the previous leading method, PheWAS, which performs best when it can pull whatever individual-level data different EHRs have made available. When the two were put on a level playing field, with both using individual-level data, Sum-Share was statistically determined to be the more powerful method. And since Sum-Share’s summary-level findings had already been shown to be as informative as its individual-level ones, it appears to be the better method for uncovering these genetic associations.
    “This was notable because Sum-Share enables loss-less data integration, while PheWAS loses some information when integrating information from multiple sites,” Li explained. “Sum-Share can also reduce the multiple hypothesis testing penalties by jointly modeling different characteristics at once.”
    Currently, Sum-Share is mainly designed to be used as a research tool, but there are possibilities for using its insights to improve clinical operations. And, moving forward, there is a chance to use it for some of the most pressing needs facing health care today.
    “Sum-Share could be used for COVID-19 with research consortia, such as the Consortium for Clinical Characterization of COVID-19 by EHR (4CE),” Chen said. “These efforts use a federated approach where the data stay local to preserve privacy.”
    This study was supported by the National Institutes of Health (grant number NIH LM010098).
    Co-authors on the study include Rui Duan, Xinyuan Zhang, Thomas Lumley, Sarah Pendergrass, Christopher Bauer, Hakon Hakonarson, David S. Carrell, Jordan W. Smoller, Wei-Qi Wei, Robert Carroll, Digna R. Velez Edwards, Georgia Wiesner, Patrick Sleiman, Josh C. Denny, Jonathan D. Mosley, and Marylyn D. Ritchie.

  • Entangling electrons with heat

    A joint group of scientists from Finland, Russia, China and the USA has demonstrated that a temperature difference can be used to entangle pairs of electrons in superconducting structures. The experimental discovery, published in Nature Communications, promises powerful applications in quantum devices, bringing us one step closer to the second quantum revolution.
    The team, led by Professor Pertti Hakonen from Aalto University, has shown that the thermoelectric effect provides a new method for producing entangled electrons in a new device. “Quantum entanglement is the cornerstone of the novel quantum technologies. This concept, however, has puzzled many physicists over the years, including Albert Einstein, who worried a lot about the spooky action at a distance that it causes,” says Prof. Hakonen.
    In quantum computing, entanglement is used to fuse individual quantum systems into one, which exponentially increases their total computational capacity. “Entanglement can also be used in quantum cryptography, enabling the secure exchange of information over long distances,” explains Prof. Gordey Lesovik, from the Moscow Institute of Physics and Technology, who has acted several times as a visiting professor at Aalto University School of Science. Given the significance of entanglement to quantum technology, the ability to create entanglement easily and controllably is an important goal for researchers.
    The researchers designed a device in which a superconductor was layered with graphene and metal electrodes. “Superconductivity is caused by entangled pairs of electrons called Cooper pairs. Using a temperature difference, we cause them to split, with each electron then moving to a different normal metal electrode,” explains doctoral candidate Nikita Kirsanov, from Aalto University. “The resulting electrons remain entangled despite being separated over quite long distances.”
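    As standard textbook background rather than a result specific to this study: in a conventional superconductor each Cooper pair is a spin singlet, so splitting its two electrons into separate normal-metal electrodes (labelled L and R below) leaves their spins in the maximally entangled state

```latex
\[
|\Psi\rangle = \frac{1}{\sqrt{2}}\left( |\uparrow\rangle_{L}\,|\downarrow\rangle_{R} - |\downarrow\rangle_{L}\,|\uparrow\rangle_{R} \right)
\]
```

    which is why measuring one electron’s spin instantly constrains the other’s, no matter how far apart the electrodes are.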
    Along with the practical implications, the work has significant fundamental importance. The experiment has shown that the process of Cooper pair splitting works as a mechanism for turning temperature difference into correlated electrical signals in superconducting structures. The developed experimental scheme may also become a platform for original quantum thermodynamical experiments.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • For the right employees, even standard information technology can spur creativity

    In a money-saving revelation for organizations inclined to invest in specialized information technology to support the process of idea generation, new research suggests that even non-specialized, everyday organizational IT can encourage employees’ creativity.
    Recently published in the journal Information and Organization, these findings from Dorit Nevo, an associate professor in the Lally School of Management at Rensselaer Polytechnic Institute, show that standard IT can be used for innovation. Furthermore, this is much more likely to happen when the technology is in the hands of employees who are motivated to master technology, understand their role in the organization, are recognized for their efforts, and are encouraged to develop their skills.
    “What this study reveals is that innovation is found not just by using technology specifically created to support idea-generation,” Nevo said. “Creativity comes from both the tool and the person who uses it.”
    Most businesses and organizations use common computer technologies, such as business analytics programs, knowledge management systems, and point-of-sale systems, to enable employees to complete basic job responsibilities. Nevo wanted to know if this standard IT could also be used by employees to create new ideas in the front end of the innovation process, where ideas are generated, developed, and then championed.
    By developing a theoretically grounded model to examine IT-enabled innovation in an empirical study, Nevo found that employees who are motivated to master IT can use even standard technology as a creativity tool, increasing the return on investment on the technologies companies already have in-house.
    “An organization can get a lot more value out of the IT it already has if it lets the right people use it and then supports them,” Nevo said. “This added value will, in turn, save organizations money because they don’t always have to invest in specialized technology in order for their employees to generate solutions to work-related issues or ideas for improvement in the workplace. You just have to trust your employees to be able to innovate with the technologies you have.”

    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Jeanne Hedden Gallagher. Note: Content may be edited for style and length.

  • Patterns in primordial germ cell migration

    Whenever an organism develops and forms organs, a tumour creates metastases or the immune system becomes active in inflammation, cells migrate within the body. As they do, they interact with surrounding tissues which influence their function. The migrating cells react to biochemical signals, as well as to biophysical properties of their environment, for example whether a tissue is soft or stiff. Gaining detailed knowledge about such processes provides scientists with a basis for understanding medical conditions and developing treatment approaches.
    A team of biologists and mathematicians at the Universities of Münster and Erlangen-Nürnberg has now developed a new method for analysing cell migration processes in living organisms. The researchers investigated how primordial germ cells, whose mode of locomotion is similar to that of other migrating cell types, including cancer cells, behave in zebrafish embryos when deprived of their biochemical guidance cue. The team developed new software that makes it possible to merge three-dimensional microscopic images of multiple embryos in order to recognise patterns in the distribution of cells and thus highlight tissues that influence cell migration. With the help of the software, the researchers identified regions that the cells avoided, regions in which they clustered, and regions in which they maintained their normal distribution. In this way, they pinpointed a physical barrier at the border of the organism’s future backbone where the cells changed their path. “We expect that our experimental approach and the newly developed tools will be of great benefit in research on developmental biology, cell biology and biomedicine,” explains Prof Dr Erez Raz, a cell biologist and project director at the Center for Molecular Biology of Inflammation at Münster University. The study has been published in the journal Science Advances.
    Details on methods and results
    For their investigations, the researchers made use of primordial germ cells in zebrafish embryos. Primordial germ cells are the precursors of sperm and egg cells and, during the development of many organisms, they migrate to the place where the reproductive organs form. Normally, these cells are guided by chemokines — i.e. attractants produced by surrounding cells that initiate signalling pathways by binding to receptors on the primordial germ cells. By genetically modifying the cells, the scientists deactivated the chemokine receptor Cxcr4b so that the cells remained motile but no longer migrated in a directional manner. “Our idea was that the distribution of the cells within the organism — when not being controlled by guidance cues — can provide clues as to which tissues influence cell migration, and then we can analyse the properties of these tissues,” explains Łukasz Truszkowski, one of the three lead authors of the study.
    “To obtain statistically significant data on the spatial distribution of the migrating cells, we needed to study several hundred zebrafish embryos, because at the developmental stage at which the cells are actively migrating, a single embryo has only around 20 primordial germ cells,” says Sargon Groß-Thebing, also a first author and, like his colleague, a PhD student in the graduate programme of the Cells in Motion Interfaculty Centre at the University of Münster. In order to digitally merge the three-dimensional data of multiple embryos, the biology researchers joined forces with a team led by the mathematician Prof Dr Martin Burger, who was also conducting research at the University of Münster at that time and is now continuing the collaboration from the University of Erlangen-Nürnberg. The team developed a new software tool that pools the data automatically and recognises patterns in the distribution of primordial germ cells. The challenge was to account for the varying sizes and shapes of the individual zebrafish embryos and their precise three-dimensional orientation in the microscope images.
    The software named “Landscape” aligns the images captured from all the embryos with each other. “Based on a segmentation of the cell nuclei, we can estimate the shape of the embryos and correct for their size. Afterwards, we adjust the orientation of the organisms,” says mathematician Dr Daniel Tenbrinck, the third lead author of the study. In doing so, a tissue in the midline of the embryos serves as a reference structure which is marked by a tissue-specific expression of the so-called green fluorescent protein (GFP). In technical jargon the whole process is called image registration. The scientists verified the reliability of their algorithms by capturing several images of the same embryo, manipulating them with respect to size and image orientation, and testing the ability of the software to correct for the manipulations. To evaluate the ability of the software to recognise cell-accumulation patterns, they used microscopic images of normally developing embryos, in which the migrating cells accumulate at a known specific location in the embryo. The researchers also demonstrated that the software can be applied to embryos of another experimental model, embryos of the fruit fly Drosophila, which have a shape that is different from that of zebrafish embryos.
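    The “Landscape” software itself is described in the Science Advances paper; the sketch below is only a hypothetical, two-dimensional toy with fabricated data, illustrating the general registration-and-pooling idea: estimate each embryo’s centre, size and reference axis from segmented nuclei, map its germ-cell positions into a common coordinate frame, and accumulate the cells from all embryos into one density map.

```python
# Minimal 2D sketch of the registration idea behind pooling many embryos
# (not the "Landscape" software itself, and using fabricated data):
# normalise each embryo's segmented nuclei for position and size, align a
# reference axis (e.g. the GFP-marked midline), then accumulate germ-cell
# positions from all embryos into one shared density map.
import numpy as np

rng = np.random.default_rng(3)

def align(nuclei, reference_axis, cells):
    """Centre, rescale and rotate one embryo into the common frame."""
    centre = nuclei.mean(axis=0)
    scale = np.linalg.norm(nuclei - centre, axis=1).mean()
    # Rotation that maps this embryo's reference axis onto the x-axis
    # of the common coordinate system.
    v = (reference_axis - centre) / np.linalg.norm(reference_axis - centre)
    R = np.array([[v[0], v[1]], [-v[1], v[0]]])
    return (cells - centre) / scale @ R.T

density = np.zeros((50, 50))
for _ in range(900):                          # 900 embryos, as in the study
    nuclei = rng.normal(scale=[2.0, 1.0], size=(300, 2)) + rng.normal(size=2)
    axis_marker = nuclei.mean(axis=0) + np.array([2.0, 0.0])
    germ_cells = rng.normal(scale=[1.5, 0.8], size=(20, 2)) + nuclei.mean(axis=0)
    aligned = align(nuclei, axis_marker, germ_cells)
    # Histogram the aligned cell positions into the shared density map.
    h, _, _ = np.histogram2d(aligned[:, 0], aligned[:, 1],
                             bins=50, range=[[-3, 3], [-3, 3]])
    density += h

print("total cells pooled:", int(density.sum()))
```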
    Using the new method, the researchers analysed the distribution of 21,000 primordial germ cells in 900 zebrafish embryos. As expected, the cells lacking a chemokine receptor were distributed in a pattern that differs from that observed in normal embryos. However, the cells were distributed in a distinct pattern that could not be recognised by monitoring single embryos. For example, the cells were absent from the midline of the embryo. The researchers investigated that region more closely and found it to function as a physical barrier for the cells. When the cells came in contact with this border, they changed the distribution of actin protein within them, which in turn led to a change of cell migration direction and movement away from the barrier. A deeper understanding of how cells respond to physical barriers may also be relevant to metastatic cancer cells, which invade neighbouring tissues and in which this response may be disrupted.

    Story Source:
    Materials provided by University of Münster. Note: Content may be edited for style and length.

  • Vaccine myths on social media can be effectively reduced with credible fact checking

    Social media misinformation can negatively influence people’s attitudes about vaccine safety and effectiveness, but credible organizations — such as research universities and health institutions — can play a pivotal role in debunking myths with simple tags that link to factual information, University of California, Davis, researchers suggest in a new study.
    Researchers found that fact-check tags located immediately below or near a post can generate more positive attitudes toward vaccines than misinformation alone, and perceived source expertise makes a difference. “In fact, fact-checking labels from health institutions and research universities were seen as more ‘expert’ than others, indirectly resulting in more positive attitudes toward vaccines,” said Jingwen Zhang, assistant professor of communication and lead author of the study.
    The findings were published online Wednesday, Jan. 6, in the journal Preventive Medicine.
    Has implications for COVID-19
    The data was collected in 2018 — before the COVID-19 pandemic — but the study’s results could influence public communications about COVID-19 vaccines, researchers said.
    “The most important thing I learned from this paper is that fact checking is effective…giving people a simple label can change their attitude,” Zhang said. “Secondly, I am calling for more researchers and scientists to engage in public health and science communications. We need to be more proactive. We are not using our power right now.”
    While there is a strong consensus in the medical community that vaccines are safe, cost-effective and successful in preventing diseases, widespread vaccine hesitancy has resurged in many countries, the study said. The United States has seen lower-than-desired vaccination rates for influenza and even measles, a shortfall that medical experts blamed for a 2019 measles outbreak. “Because both individuals and groups can post misinformation, such as false claims about vaccines, social media have played a role in spreading misinformation,” Zhang said.
    Study authors tested the effects of simple fact-checking labels with 1,198 people nationwide who showed different levels of vaccine hesitancy. In the experiment, researchers used multiple misinformation messages covering five vaccine types, along with fact-checking labels from 13 different sources spanning five categories. They avoided any explanations that repeated the false information.
    One post, for example, shown on a mock Twitter account, consisted of a misinformation claim about a specific vaccine and a picture of a vaccine bottle. It read: “According to a US Vaccine Adverse Events Reporting System (VAERS) there were 93,000 adverse reactions to last year’s Flu Shot including 1,080 deaths & 8,888 hospitalizations.”
    Researchers then applied alternating fact-checking labels attributed to various sources: news media, health organizations such as the Centers for Disease Control and Prevention, universities such as Johns Hopkins University, and algorithms. One read, for example, “This post is falsified. Fact-checked by the Centers For Disease Control. Learn why this is falsified.”
    The results showed that those exposed to fact-checking labels were more likely to develop more positive attitudes toward vaccines than misinformation alone. Further, the labels’ effect was not moderated by vaccine skepticism, the type of vaccine misinformation or political ideology.
    “What approaches are most effective at targeting vaccine misinformation on social media among users unlikely to visit fact-checking websites or engage with thorough corrections?” researchers asked in the paper. “This project shows that seeing a fact-checking label immediately below a misinformation post can make viewers more favorable toward vaccines.”
    Zhang explained that a tag could be as simple as a reply to a misinforming tweet that explains the information is false and links to credible information on a university or institutional website.
    Ideally, she said, tagging should be done by social media companies such as Facebook and Twitter. She said social media companies are working with entities such as the WHO to correct misinformation. “We are headed in the right direction, but more needs to happen,” she said.
    Study co-authors included Magdalena Wojcieszak, associate professor of communication, and doctoral students Jieyu Ding Featherstone (Department of Communication) and Christopher Calabrese (Department of Public Health Sciences), all of UC Davis.

  • World's fastest optical neuromorphic processor

    An international team of researchers led by Swinburne University of Technology has demonstrated the world’s fastest and most powerful optical neuromorphic processor for artificial intelligence (AI), which operates at more than 10 trillion operations per second (10 TeraOPs/s) and is capable of processing ultra-large scale data.
    Published in the journal Nature, this breakthrough represents an enormous leap forward for neural networks and neuromorphic processing in general.
    Artificial neural networks, a key form of AI, can ‘learn’ and perform complex operations with wide applications to computer vision, natural language processing, facial recognition, speech translation, playing strategy games, medical diagnosis and many other areas. Inspired by the biological structure of the brain’s visual cortex system, artificial neural networks extract key features of raw data to predict properties and behaviour with unprecedented accuracy and simplicity.
    Led by Swinburne’s Professor David Moss, Dr Xingyuan (Mike) Xu (Swinburne, Monash University) and Distinguished Professor Arnan Mitchell from RMIT University, the team achieved an exceptional feat in optical neural networks: dramatically accelerating their computing speed and processing power.
    The team demonstrated an optical neuromorphic processor operating more than 1000 times faster than any previous processor, with the system also processing record-sized ultra-large scale images — enough to achieve full facial image recognition, something that other optical processors have been unable to accomplish.
    “This breakthrough was achieved with ‘optical micro-combs’, as was our world-record internet data speed reported in May 2020,” says Professor Moss, Director of Swinburne’s Optical Sciences Centre and recently named one of Australia’s top research leaders in physics and mathematics in the field of optics and photonics by The Australian.
    While state-of-the-art electronic processors such as the Google TPU can operate beyond 100 TeraOPs/s, this is done with tens of thousands of parallel processors. In contrast, the optical system demonstrated by the team uses a single processor and was achieved using a new technique of simultaneously interleaving the data in time, wavelength and spatial dimensions through an integrated micro-comb source.
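    The Swinburne system performs this interleaving in the optical domain; purely as a hypothetical numerical toy (not the team’s implementation), the sketch below mimics the core trick in one dimension: kernel weights ride on separate comb wavelengths, dispersion delays each wavelength by one time slot, and a wavelength-blind photodetector sums them, so the detected signal is the convolution of the data stream with the kernel.

```python
# Toy numerical illustration of the time-wavelength interleaving idea
# behind photonic convolution accelerators (not the Swinburne team's
# implementation): kernel weights ride on separate comb wavelengths,
# dispersion delays each wavelength by one symbol, and the photodetector
# sums all wavelengths, yielding the convolution of data and kernel.
import numpy as np

rng = np.random.default_rng(4)
data = rng.random(64)                   # input stream, one value per time slot
kernel = np.array([0.25, 0.5, 0.25])    # weights imprinted on three comb lines

# Each comb line carries the full data stream scaled by its weight, then
# chromatic dispersion shifts line k by k time slots.
n_out = data.size + kernel.size - 1
lines = np.zeros((kernel.size, n_out))
for k, w in enumerate(kernel):
    lines[k, k:k + data.size] = w * data

# The photodetector is wavelength-blind: it simply sums the optical power
# across comb lines at every time slot, producing the sliding dot product.
detected = lines.sum(axis=0)

print(np.allclose(detected, np.convolve(data, kernel)))   # True
```

    Scaling the same trick to hundreds of comb lines and many spatial channels is, roughly, what lets a single optical processor reach the operation rates quoted above.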
    Micro-combs are relatively new devices that act like a rainbow made up of hundreds of high-quality infrared lasers on a single chip. They are much faster, smaller, lighter and cheaper than any other optical source.
    “In the 10 years since I co-invented them, integrated micro-comb chips have become enormously important and it is truly exciting to see them enabling these huge advances in information communication and processing. Micro-combs offer enormous promise for us to meet the world’s insatiable need for information,” Professor Moss says.
    “This processor can serve as a universal ultrahigh bandwidth front end for any neuromorphic hardware — optical or electronic based — bringing massive-data machine learning for real-time ultrahigh bandwidth data within reach,” says co-lead author of the study, Dr Xu, Swinburne alum and postdoctoral fellow with the Electrical and Computer Systems Engineering Department at Monash University.
    “We’re currently getting a sneak peek of how the processors of the future will look. It’s really showing us how dramatically we can scale the power of our processors through the innovative use of microcombs,” Dr Xu explains.
    RMIT’s Professor Mitchell adds, “This technology is applicable to all forms of processing and communications — it will have a huge impact. Long term we hope to realise fully integrated systems on a chip, greatly reducing cost and energy consumption.”
    “Convolutional neural networks have been central to the artificial intelligence revolution, but existing silicon technology increasingly presents a bottleneck in processing speed and energy efficiency,” says key supporter of the research team, Professor Damien Hicks, from Swinburne and the Walter and Eliza Hall Institute.
    He adds, “This breakthrough shows how a new optical technology makes such networks faster and more efficient, and is a profound demonstration of the benefits of cross-disciplinary thinking: having the inspiration and courage to take an idea from one field and use it to solve a fundamental problem in another.”