More stories

  • A novel method for easy and quick fabrication of biomimetic robots with life-like movement

    Ultraviolet-laser processing is a promising technique for creating the intricate microstructures needed to align muscle cells in complex patterns, a prerequisite for building life-like biohybrid actuators, as shown by Tokyo Tech researchers. Compared with traditional, more complicated methods, this technique enables easy and quick fabrication of intricately patterned microstructures for different muscle-cell arrangements, paving the way for biohybrid actuators capable of complex, flexible movements.
    Biomimetic robots, which mimic the movements and biological functions of living organisms, are a fascinating area of research that can not only lead to more efficient robots but also serve as a platform for understanding muscle biology. Among these, biohybrid actuators, made up of soft materials and muscle cells that can replicate the forces of actual muscles, have the potential to achieve life-like movements and functions, such as self-healing, high efficiency, and a high power-to-weight ratio, that have been difficult for traditional bulky robots requiring heavy energy sources.
    One way to achieve these life-like movements is to arrange the muscle cells in biohybrid actuators anisotropically, aligning them in a specific pattern in which they are oriented in different directions, much as they are in living organisms. While previous studies have reported biohybrid actuators with significant movement using this approach, they have mostly aligned muscle cells along straight lines, producing only simple motions rather than the complex movements of native muscle tissue, such as twisting, bending, and shrinking. Real muscle tissue has a complex arrangement of muscle cells, including curved and helical patterns.
    Creating such complex arrangements requires forming curved microgrooves (MGs) on a substrate, which then serve as guides for aligning muscle cells in the required patterns. Fabrication of complex MGs has been achieved by methods such as photolithography, wavy micrography and micro-contact printing. However, these methods involve multiple intricate steps and are not suitable for rapid fabrication.
    To address this, a team of researchers from Tokyo Institute of Technology (Tokyo Tech) in Japan, led by Associate Professor Toshinori Fujie from the School of Life Science and Technology, has developed an ultraviolet (UV) laser-processing technique for fabricating complex microstructures. “Based on our previous prototypes, we hypothesized that biohybrid actuators using an SBS (hard rubber) thin film with arbitrary anisotropic MGs fabricated by a UV laser processing can control cellular alignment in an arbitrarily anisotropic direction to reproduce more life-like flexible movements,” explains Dr. Fujie. Their study has been published in the journal Biofabrication.
    The novel technique involves forming curved MGs on a polyimide film through UV-laser processing; these are then transcribed onto a thin film made of SBS. Next, skeletal muscle cells called myotubes are aligned along the MGs to achieve an anisotropic curved muscle pattern. The researchers used this method to develop two different biohybrid actuators: one tethered to a glass substrate and the other untethered. Upon electrical stimulation, both actuators deformed through a twisting-like motion. Interestingly, when untethered, the biohybrid actuator transformed into a 3D free-standing structure, owing to the curved alignment of the myotubes, much like a native sphincter.
    “These results signify that compared to traditional methods, UV-laser processing is a quicker and easier method for the fabrication of tunable MG patterns. This method raises intriguing opportunities for achieving more life-like biohybrid actuators through guided alignment of myotubes,” remarks Dr. Fujie, emphasizing the potential of this innovative technique.
    Overall, this study demonstrates the potential of UV-laser processing for fabricating different anisotropic muscle-tissue patterns, paving the way for more life-like biohybrid actuators capable of complex, flexible movements.

  • Scientists closer to solving mysteries of universe after measuring gravity in quantum world

    Scientists are a step closer to unravelling the mysterious forces of the universe after working out how to measure gravity on a microscopic level.
    Experts have never fully understood how the force, first described by Isaac Newton, works in the tiny quantum world.
    Even Einstein was baffled by quantum gravity and, in his theory of general relativity, said there is no realistic experiment that could show a quantum version of gravity.
    But now physicists at the University of Southampton, working with scientists in Europe, have successfully detected a weak gravitational pull on a tiny particle using a new technique.
    They claim it could pave the way to finding the elusive quantum gravity theory.
    The experiment, published in the journal Science Advances, used levitating magnets to detect gravity on microscopic particles — small enough to border on the quantum realm.
    Lead author Tim Fuchs, from the University of Southampton, said the results could help experts find the missing puzzle piece in our picture of reality.

    He added: “For a century, scientists have tried and failed to understand how gravity and quantum mechanics work together.
    “Now we have successfully measured gravitational signals at the smallest mass ever recorded, it means we are one step closer to finally realising how it works in tandem.
    “From here we will start scaling the source down using this technique until we reach the quantum world on both sides.
    “By understanding quantum gravity, we could solve some of the mysteries of our universe — like how it began, what happens inside black holes, or uniting all forces into one big theory.”
    The rules of the quantum realm are still not fully understood by science — but it is believed that particles and forces at a microscopic scale interact differently than regular-sized objects.
    Academics from Southampton conducted the experiment with scientists at Leiden University in the Netherlands and the Institute for Photonics and Nanotechnologies in Italy, with funding from the EU Horizon Europe EIC Pathfinder grant (QuCoM).

    Their study used a sophisticated setup involving superconducting devices, known as traps, with magnetic fields, sensitive detectors and advanced vibration isolation.
    It measured a weak pull of just 30 attonewtons (aN) on a tiny particle with a mass of 0.43 mg by levitating it at freezing temperatures a hundredth of a degree above absolute zero — about minus 273 degrees Celsius.
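    For a rough sense of scale, the short Newtonian estimate below computes the gravitational pull on a 0.43 mg particle; the source mass and separation are illustrative assumptions, not the experiment's actual geometry.

      # Back-of-the-envelope Newtonian estimate of the pull on a 0.43 mg
      # particle (source mass and separation are hypothetical, chosen only
      # to land near the reported signal).
      G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
      m_particle = 0.43e-6   # 0.43 mg in kg
      m_source = 0.01        # assumed 10 g source mass
      r = 0.10               # assumed 10 cm separation

      force = G * m_particle * m_source / r**2
      print(f"force = {force:.2e} N = {force / 1e-18:.0f} aN")
      # -> about 29 aN, the same order as the reported 30 aN signal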
    The results open the door for future experiments between even smaller objects and forces, said Hendrik Ulbricht, professor of physics, also at the University of Southampton.
    He added: “We are pushing the boundaries of science that could lead to new discoveries about gravity and the quantum world.
    “Our new technique that uses extremely cold temperatures and devices to isolate vibration of the particle will likely prove the way forward for measuring quantum gravity.
    “Unravelling these mysteries will help us unlock more secrets about the universe’s very fabric, from the tiniest particles to the grandest cosmic structures.”

  • Measuring the properties of light: Scientists realize new method for determining quantum states

    Scientists at Paderborn University have used a new method to determine the characteristics of optical, i.e. light-based, quantum states. For the first time, they are using certain photon detectors — devices that can detect individual light particles — for so-called homodyne detection. The ability to characterise optical quantum states makes the method an essential tool for quantum information processing. Precise knowledge of the characteristics is important for use in quantum computers, for example. The results have now been published in the specialist journal Optica Quantum.
    “Homodyne detection is a method frequently used in quantum optics to investigate the wave-like nature of optical quantum states,” explains Timon Schapeler from the Paderborn “Mesoscopic Quantum Optics” working group at the Department of Physics. Together with Dr Maximilian Protte, he has used the method to investigate the so-called continuous variables of optical quantum states. These are variable properties of light waves, such as the amplitude or phase, i.e. the oscillation behaviour of the waves, which are important for the targeted manipulation of light, among other things.
    For the first time, the physicists have used superconducting nanowire single photon detectors for the measurements — currently the fastest devices for photon counting. With their special experimental setup, the two scientists have shown that a homodyne detector with superconducting single photon detectors has a linear response to the input photon flux. Translated, this means that the measured signal is proportional to the input signal.
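    As a loose illustration of what a linear response means in practice, the sketch below simulates detector counts at increasing photon flux and fits a straight line; the numbers and the Poisson-noise model are assumptions for illustration, not the Paderborn group's analysis.

      import numpy as np

      # Toy linearity check: simulated counts should scale in proportion
      # to the input photon flux (all values hypothetical).
      rng = np.random.default_rng(seed=1)
      flux = np.linspace(1e3, 1e5, 20)         # input photon flux (photons/s)
      efficiency = 0.98                        # assumed detection efficiency
      counts = rng.poisson(efficiency * flux)  # detected clicks with shot noise

      slope, intercept = np.polyfit(flux, counts, deg=1)
      print(f"slope = {slope:.3f}")          # ~efficiency for a linear detector
      print(f"intercept = {intercept:.1f}")  # ~0 if response is proportional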
    “In principle, the integration of superconducting single-photon detectors brings many advantages in the area of continuous variables, not least the intrinsic phase stability. These systems also have almost 100 per cent on-chip detection efficiency. This means that no particles are lost during detection. Our results could enable the development of highly efficient homodyne detectors with single-photon sensitive detectors,” says Schapeler.
    Working with continuous variables of light opens up new and exciting possibilities in quantum information processing beyond qubits, the usual computing units of quantum computers.

  • Mixed-dimensional transistors enable high-performance multifunctional electronic devices

    Downscaling of electronic devices, such as transistors, has reached a plateau, posing challenges for semiconductor fabrication. However, a research team led by materials scientists from City University of Hong Kong (CityUHK) recently discovered a new strategy for developing highly versatile electronics with outstanding performance, using transistors made of mixed-dimensional nanowires and nanoflakes. This innovation paves the way for simplified chip circuit design, offering versatility and low power dissipation in future electronics.
    In recent decades, as the continuous scaling of transistors and integrated circuits has started to reach physical and economic limits, fabricating semiconductor devices in a controllable and cost-effective manner has become challenging. Further scaling of transistor size increases current leakage and thus power dissipation. Complex wiring networks also have an adverse impact on power consumption.
    Multivalued logic (MVL) has emerged as a promising technology for overcoming increasing power consumption. It transcends the limitations of conventional binary logic systems by greatly reducing the number of transistor components and their interconnections, enabling higher information density and lower power dissipation. Significant efforts have been devoted to constructing various multivalued logic devices, including anti-ambipolar transistors (AAT).
    Anti-ambipolar devices are a class of transistors in which positive (hole) and negative (electron) charge carriers are transported concurrently within the semiconducting channel. However, existing AAT-based devices predominantly utilize 2D or organic materials, which are unstable for large-scale semiconductor device integration. Also, their frequency characteristics and energy efficiency have rarely been explored.
    To address these limitations, a research team led by Professor Johnny Ho, Associate Vice-President (Enterprise) and Associate Head in the Department of Materials Science and Engineering at CityUHK, embarked on research to develop anti-ambipolar device-based circuits with higher information density and fewer interconnections, and explore their frequency characteristics.
    The team developed an advanced chemical vapour-deposition technique to create a novel, mixed-dimensional hetero-transistor, which combines the unique properties of high-quality GaAsSb nanowires and MoS2 nanoflakes.
    The new anti-ambipolar transistors showed exceptional performance. Owing to the strong interfacial coupling and band-structure alignment of the mixed-dimensional GaAsSb/MoS2 junction, the hetero-transistor exhibits prominent anti-ambipolar transfer characteristics with a flipping of the transconductance.

    The flipping of the transconductance doubles the output frequency relative to the input analog signal, greatly reducing the number of devices required compared with conventional frequency multipliers in CMOS technology.
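    A bell-shaped transfer curve is all that is needed for this kind of frequency doubling, as the sketch below illustrates: biasing a hypothetical Gaussian anti-ambipolar transfer curve at its current peak and driving the gate with a sine wave yields an output dominated by twice the input frequency. The curve shape and every parameter value are assumptions for illustration, not device data.

      import numpy as np

      # Frequency doubling from a bell-shaped (anti-ambipolar) transfer
      # curve: bias at the peak, drive with a sinusoid, and the output
      # current oscillates at twice the input frequency.
      t = np.linspace(0, 1e-3, 4096)   # 1 ms window
      f_in = 10e3                      # 10 kHz input tone
      v_peak, sigma = 0.0, 0.5         # peak position and width (V), hypothetical
      v_gate = v_peak + 0.2 * np.sin(2 * np.pi * f_in * t)

      i_drain = np.exp(-((v_gate - v_peak) ** 2) / (2 * sigma**2))  # normalized

      spectrum = np.abs(np.fft.rfft(i_drain - i_drain.mean()))
      freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
      print(f"dominant output tone: {freqs[spectrum.argmax()] / 1e3:.1f} kHz")
      # -> ~20 kHz, double the 10 kHz input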
    “Our mixed-dimensional, anti-ambipolar transistors can implement multi-valued logic circuits and frequency multipliers simultaneously, making this the first of its kind in the field of anti-ambipolar transistor applications,” said Professor Ho.
    The multi-valued logic characteristics simplify the complicated wiring networks and reduce chip power dissipation. The shrinking of device dimensionality, together with the downscaled junction region, renders the device fast and energy efficient, resulting in high-performance digital and analog circuits.
    “Our findings show that mixed-dimensional anti-ambipolar devices enable chip circuit design with high information storage density and information processing capacity,” said Professor Ho. “So far, most researchers in the semiconductor industry have focused on device miniaturization to keep Moore’s law rolling. But the advent of the anti-ambipolar device shows its comparative superiority over the existing binary logic-based technology. The technology developed in this research represents a big step towards next-generation multifunctional integrated circuits and telecommunications technologies.”
    The research also opens the possibility of further simplifying complex integrated circuit designs to improve performance.
    The mixed-dimensional anti-ambipolar device’s transconductance-flipping feature has shown the possibility of versatile applications in digital and analog signal processing, including ternary logic inverters, advanced optoelectronics and frequency-doubling circuits. “The new device structure heralds the potential of a technological revolution in future versatile electronics,” added Professor Ho.

  • Researchers harness 2D magnetic materials for energy-efficient computing

    Experimental computer memories and processors built from magnetic materials use far less energy than traditional silicon-based devices. Two-dimensional magnetic materials, composed of layers that are only a few atoms thick, have incredible properties that could allow magnetic-based devices to achieve unprecedented speed, efficiency, and scalability.
    While many hurdles must be overcome before these so-called van der Waals magnetic materials can be integrated into functioning computers, MIT researchers took an important step in this direction by demonstrating precise control of a van der Waals magnet at room temperature.
    This is key, since magnets composed of atomically thin van der Waals materials can typically only be controlled at extremely cold temperatures, making them difficult to deploy outside a laboratory.
    The researchers used pulses of electrical current to switch the direction of the device’s magnetization at room temperature. Magnetic switching can be used in computation, the same way a transistor switches between open and closed to represent 0s and 1s in binary code, or in computer memory, where switching enables data storage.
    The team fired bursts of electrons at a magnet made of a new material that can sustain its magnetism at higher temperatures. The experiment leveraged a fundamental property of electrons known as spin, which makes the electrons behave like tiny magnets. By manipulating the spin of electrons that strike the device, the researchers can switch its magnetization.
    “The heterostructure device we have developed requires an order of magnitude lower electrical current to switch the van der Waals magnet, compared to that required for bulk magnetic devices,” says Deblina Sarkar, the AT&T Career Development Assistant Professor in the MIT Media Lab and Center for Neurobiological Engineering, head of the Nano-Cybernetic Biotrek Lab, and the senior author of a paper on this technique. “Our device is also more energy efficient than other van der Waals magnets that are unable to switch at room temperature.”
    In the future, such a magnet could be used to build faster computers that consume less electricity. It could also enable magnetic computer memories that are nonvolatile, meaning they don’t lose information when powered off, or processors that make complex AI algorithms more energy-efficient.

    “There is a lot of inertia around trying to improve materials that worked well in the past. But we have shown that if you make radical changes, starting by rethinking the materials you are using, you can potentially get much better solutions,” says Shivam Kajale, a graduate student in Sarkar’s lab and co-lead author of the paper.
    Kajale and Sarkar are joined on the paper by co-lead author Thanh Nguyen, a graduate student in the Department of Nuclear Science and Engineering (NSE); Corson Chao, a graduate student in the Department of Materials Science and Engineering (DMSE); David Bono, a DMSE research scientist; Artittaya Boonkird, an NSE graduate student; and Mingda Li, associate professor of nuclear science and engineering. The research appears this week in Nature Communications.
    An atomically thin advantage
    The methods used to fabricate tiny computer chips in a clean room from bulk materials like silicon can hamper the devices themselves. For instance, the layers of material may be barely 1 nanometer thick, so minuscule rough spots on the surface can be severe enough to degrade performance.
    By contrast, van der Waals magnetic materials are intrinsically layered and structured in such a way that the surface remains perfectly smooth, even as researchers peel off layers to make thinner devices. In addition, atoms in one layer won’t leak into other layers, enabling the materials to retain their unique properties when stacked in devices.
    “In terms of scaling and making these magnetic devices competitive for commercial applications, van der Waals materials are the way to go,” Kajale says.

    But there’s a catch. This new class of magnetic materials has typically only been operated at temperatures below 60 kelvins (-351 degrees Fahrenheit). To build a magnetic computer processor or memory, researchers need to use electrical current to operate the magnet at room temperature.
    To achieve this, the team focused on an emerging material called iron gallium telluride. This atomically thin material has all the properties needed for effective room temperature magnetism and doesn’t contain rare earth elements, which are undesirable because extracting them is especially destructive to the environment.
    Nguyen carefully grew bulk crystals of this 2D material using a special technique. Then, Kajale fabricated a two-layer magnetic device using nanoscale flakes of iron gallium telluride underneath a six-nanometer layer of platinum.
    Tiny device in hand, they used an intrinsic property of electrons known as spin to switch its magnetization at room temperature.
    Electron ping-pong
    While electrons don’t technically “spin” like a top, they do possess the same kind of angular momentum. That spin has a direction, either up or down. The researchers can leverage a property known as spin-orbit coupling to control the spins of electrons they fire at the magnet.
    The same way momentum is transferred when one ball hits another, electrons will transfer their “spin momentum” to the 2D magnetic material when they strike it. Depending on the direction of their spins, that momentum transfer can reverse the magnetization.
    In a sense, this transfer rotates the magnetization from up to down (or vice-versa), so it is called a “torque,” as in spin-orbit torque switching. Applying a negative electric pulse causes the magnetization to go downward, while a positive pulse causes it to go upward.
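    The switching logic itself can be pictured with a minimal toy model, sketched below: a pulse only flips the magnet when it exceeds some critical current, and its polarity selects the final direction. The threshold value and function names are hypothetical, not taken from the MIT paper.

      # Toy model of spin-orbit-torque switching: pulse polarity sets the
      # magnetization, but only above a critical current (hypothetical).
      CRITICAL_CURRENT = 1.0  # arbitrary units

      def apply_pulse(magnetization: int, pulse: float) -> int:
          """Return the new magnetization (+1 up, -1 down) after a pulse."""
          if abs(pulse) < CRITICAL_CURRENT:
              return magnetization        # too weak: state unchanged
          return 1 if pulse > 0 else -1   # polarity sets the direction

      state = 1  # start magnetized "up"
      for pulse in [0.5, -1.2, -0.8, 1.5]:
          state = apply_pulse(state, pulse)
          print(f"pulse {pulse:+.1f} -> {'up' if state > 0 else 'down'}")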
    The researchers can do this switching at room temperature for two reasons: the special properties of iron gallium telluride and the fact that their technique uses small amounts of electrical current. Pumping too much current into the device would cause it to overheat and demagnetize.
    The team faced many challenges over the two years it took to achieve this milestone, Kajale says. Finding the right magnetic material was only half the battle. Since iron gallium telluride oxidizes quickly, fabrication must be done inside a glovebox filled with nitrogen.
    “The device is only exposed to air for 10 or 15 seconds, but even after that I have to do a step where I polish it to remove any oxide,” he says.
    Now that they have demonstrated room-temperature switching and greater energy efficiency, the researchers plan to keep pushing the performance of magnetic van der Waals materials.
    “Our next milestone is to achieve switching without the need for any external magnetic fields. Our aim is to enhance our technology and scale up to bring the versatility of van der Waals magnet to commercial applications,” Sarkar says.
    This work was carried out, in part, using the facilities at MIT.nano and the Harvard University Center for Nanoscale Systems.

  • Improving efficiency, reliability of AI medical summarization tools

    Medical summarization, a process that uses artificial intelligence (AI) to condense complex patient information, is currently used in health care settings for tasks such as creating electronic health records and simplifying medical text for insurance claims processing. While the practice is intended to create efficiencies, it can be labor-intensive, according to Penn State researchers, who created a new method to streamline the way AI creates these summaries, efficiently producing more reliable results.
    In their work, which was presented at the 2023 Conference on Empirical Methods in Natural Language Processing in Singapore last December, the researchers introduced a framework to fine-tune the training of natural language processing (NLP) models that are used to create medical summaries.
    “There is a faithfulness issue with the current NLP tools and machine learning algorithms used in medical summarization,” said Nan Zhang, a graduate student pursuing a doctorate in informatics in the College of Information Sciences and Technology (IST) and the first author on the paper. “To ensure records of doctor-patient interactions are reliable, a medical summarization model should remain 100% consistent with the reports and conversations they are documenting.”
    Existing medical text summarization tools involve human supervision to prevent the generation of unreliable summaries that could lead to serious health care risks, according to Zhang. This “unfaithfulness” has been understudied despite its importance for ensuring safety and efficiency in healthcare reporting.
    The researchers began by examining three datasets — online health question summarization, radiology report summarization and medical dialogue summarization — generated by existing AI models. They randomly selected between 100 and 200 summaries from each dataset and manually compared them to the doctors’ original medical reports, or source text, from which they were condensed. Summaries that did not accurately reflect the source text were placed into error categories.
    “There are various types of errors that can occur with models that generate text,” Zhang said. “The model may miss a medical term or change it to something else. Summarization that is untrue or not consistent with source inputs can potentially cause harm to a patient.”
    The data analysis revealed instances of summarization that were contradictory to the source text. For example, a doctor prescribed a medication to be taken three times a day, but the summary reported that the patient should not take said medication. The datasets also included what Zhang called “hallucinations,” resulting in summaries that contained extraneous information not supported by the source text.

    The researchers set out to mitigate the unfaithfulness problem with their Faithfulness for Medical Summarization (FaMeSumm) framework. They began by using simple problem-solving techniques to construct sets of contrastive summaries — a set of faithful, error-free summaries and a set of unfaithful summaries containing errors. They also identified medical terms through external knowledge graphs or human annotations. Then, they fine-tuned existing pre-trained language models to the categorized data, modified objective functions to learn from the contrastive summaries and medical terms and made sure the models were trained to address each type of error instead of just mimicking specific words.
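    The paper's exact objective is not reproduced here, but the flavor of such contrastive fine-tuning can be sketched as a standard likelihood loss plus a margin term that pushes the model to score faithful summaries above unfaithful ones; the function names and margin value below are hypothetical.

      # Sketch of a margin-based contrastive objective in the spirit of
      # training on faithful vs. unfaithful summary pairs (hypothetical
      # names and margin; not the actual FaMeSumm loss).
      MARGIN = 1.0

      def contrastive_loss(logp_faithful: float, logp_unfaithful: float) -> float:
          """Likelihood term on the faithful summary + hinge on the score gap."""
          nll = -logp_faithful
          hinge = max(0.0, MARGIN + logp_unfaithful - logp_faithful)
          return nll + hinge

      # Example with mean token log-probabilities from a model:
      print(contrastive_loss(-0.4, -2.1))  # faithful well ahead: loss ~ 0.4
      print(contrastive_loss(-1.5, -0.9))  # unfaithful scores higher: penalized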
    “Medical summarization models are trained to pay more attention to medical terms,” Zhang said. “But it’s important that those medical terms be summarized precisely as intended, which means including non-medical words like no, not or none. We don’t want the model to make modifications near or around those words, or the error is likely to be higher.”
    FaMeSumm effectively and accurately summarized information from different kinds of training data. For example, if the provided training data comprised doctor notes, then the trained AI product was suited to generate summaries that facilitate doctors’ understanding of their notes. If the training data contained complex questions from patients, the trained AI product generated summaries that helped both patients and doctors understand the questions.
    “Our method works on various kinds of datasets involving medical terms and for the mainstream, pre-trained language models we tested,” Zhang said. “It delivered a consistent improvement in faithfulness, which was confirmed by the medical doctors who checked our work.”
    Fine-tuning large language models (LLMs) can be expensive and unnecessary, according to Zhang, so the experiments were conducted on five smaller mainstream language models.
    “We did compare one of our fine-tuned models against GPT-3, which is an example of a large language model,” he said. “We found that our model reached significantly better performance in terms of faithfulness and showed the strong capability of our method, which is promising for its use on LLMs.”
    This work contributes to the future of automated medical summarization, according to Zhang.

    “Maybe, in the near future, AI will be trained to generate medical summaries as templates,” he said. “Doctors could simply doublecheck the output and make minor edits, which could significantly reduce the amount of time it takes to create the summaries.”
    Prasenjit Mitra, professor in the College of IST and Zhang’s graduate adviser; Rui Zhang, assistant professor in the College of Engineering and Zhang’s graduate co-adviser; and Yusen Zhang, doctoral student in the College of Engineering — all from Penn State — and Wu Guo, with the Children’s Hospital Affiliated to Zhengzhou University in China, contributed to this research.
    The Federal Ministry of Education and Research in Germany, under the LeibnizKILabor project, partially funded this research. Rui Zhang provided travel funding.

  • Graphene research: Numerous products, no acute dangers found by study

    Think big. Despite its research topic, this could well be the motto of the Graphene Flagship, which was launched in 2013: With an overall budget of one billion Euros, it was Europe’s largest research initiative to date, alongside the Human Brain Flagship, which was launched at the same time. The same applies to the review article on the effects of graphene and related materials on health and the environment, which Empa researchers Peter Wick and Tina Bürki just published together with 30 international colleagues in the scientific journal ACS Nano; on 57 pages, they summarize the findings on the health and ecological risks of graphene materials, and the reference list includes almost 500 original publications.
    A wealth of knowledge — which also gives the all-clear. “We have investigated the potential acute effects of various graphene and graphene-like materials on the lungs, in the gastrointestinal tract and in the placenta — and no serious acute cell-damaging effects were observed in any of the studies,” says Wick, summarizing the results. Although stress reactions can certainly occur in lung cells, the tissue recovers rather quickly. However, some of the newer 2D materials such as boron nitrides, transition metal dichalcogenides, phosphorenes and MXenes have not yet been investigated much, Wick points out; further investigations are needed here.
    In their analyses, Wick and Co. did not limit themselves to newly produced graphene-like materials, but also looked at the entire life cycle of various applications of graphene-containing materials. In other words, they investigated questions such as: What happens when these materials are abraded or burnt? Are graphene particles released, and can this fine dust harm cells, tissues or the environment?
    One example: The addition of a few percent graphene to polymers, such as epoxy resins or polyamides, significantly improves material properties such as mechanical stability or conductivity, but the abrasion particles do not cause any graphene-specific nanotoxic effect on the cells and tissues tested. Wick’s team will be able to continue this research even after the flagship project has come to an end, also thanks to funding from the EU as part of so-called Spearhead projects, of which Wick is deputy head.
    In addition to Wick’s team, Empa researchers led by Bernd Nowack have used material flow analyses as part of the Graphene Flagship to calculate the potential future environmental impact of materials containing graphene and have modeled which ecosystems are likely to be impacted and to what extent. Roland Hischier’s team, like Nowack’s at Empa’s Technology and Society lab, used life cycle assessments to investigate the environmental sustainability of different production methods and application examples for various graphene-containing materials. And Roman Fasel’s team from Empa’s nanotech@surfaces lab has advanced the development of electronic components based on narrow graphene ribbons.
    A European success story for research and innovation
    Launched in 2013, the Graphene Flagship represented a completely new form of joint, coordinated research on an unprecedented scale. The aim of the large-scale project was to bring together researchers from research institutions and industry to bring practical applications based on graphene from the laboratory to the market within ten years, thereby creating economic growth, new jobs and new opportunities for Europe in key technologies. Over its ten-year lifetime, the consortium consisted of more than 150 academic and industrial research teams in 23 countries plus numerous associated members.

    Last September, the ten-year funding period ended with the Graphene Week in Gothenburg, Sweden. The final report impressively demonstrates the success of the ambitious large-scale project: The Flagship has “produced” almost 5,000 scientific publications and more than 80 patents. It has created 17 spin-offs in the graphene sector, which have raised a total of more than 130 million Euros in venture capital. According to a study by the German economic research institute WifOR, the Graphene Flagship has led to a total added value of around 5.9 billion Euros in the participating countries and created more than 80,000 new jobs in Europe. This means that the impact of the Graphene Flagship is more than 10 times greater than that of shorter EU projects.
    In the course of the project, Empa received a total of around three million Swiss francs in funding — which had a “catalytic” effect, as Peter Wick emphasizes: “We have roughly tripled this sum through follow-up projects totaling around 5.5 million Swiss francs, including further EU projects, projects funded by the Swiss National Science Foundation (SNSF) and direct cooperation projects with our industrial partners — and all this in the last five years.”
    But the advantage of such projects goes far beyond the generous funding, emphasizes Wick: “It is truly unique to be involved in such a large project and broad network over such a long period of time. On the one hand, it has resulted in numerous new collaborations and ideas for projects. On the other hand, working together with international partners over such a long period of time has a completely different quality: we trust each other almost blindly, and such a well-coordinated team is much more efficient and produces better scientific results,” Wick is convinced. Last but not least, many personal friendships came about.
    A new dimension: graphene and other 2D materials
    Graphene is an enormously promising material. It consists of a single layer of carbon atoms arranged in a honeycomb pattern and has extraordinary properties: exceptional mechanical strength, flexibility, transparency and outstanding thermal and electrical conductivity. If the already two-dimensional material is spatially restricted even more, for example into a narrow ribbon, controllable quantum effects can be created. This could enable a wide range of applications, from vehicle construction and energy storage to quantum computing.
    For a long time, this “miracle material” existed only in theory. It was not until 2004 that physicists Konstantin Novoselov and Andre Geim at the University of Manchester were able to specifically produce and characterize graphene. To do this, the researchers removed layers of graphite with a piece of adhesive tape until they had flakes just one atom thick. They were awarded the Nobel Prize in Physics for this work in 2010.
    Since then, graphene has been the subject of intensive research. In the meantime, researchers have discovered more 2D materials, such as the graphene-derived graphene acid, graphene oxide and cyanographene, which could have applications in medicine. Researchers want to use inorganic 2D materials such as boron nitride or MXenes to build more powerful batteries, develop electronic components or improve other materials.

  • Method identified to double computer processing speeds

    Imagine doubling the processing power of your smartphone, tablet, personal computer, or server using the existing hardware already in these devices.
    Hung-Wei Tseng, a UC Riverside associate professor of electrical and computer engineering, has laid out a paradigm shift in computer architecture to do just that in a recent paper titled, “Simultaneous and Heterogeneous Multithreading.”
    Tseng explained that today’s computer devices increasingly have graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), or digital signal processing units as essential components. These components process information separately, moving information from one processing unit to the next, which in effect creates a bottleneck.
    In their paper, Tseng and UCR computer science graduate student Kuan-Chieh Hsu introduce what they call “simultaneous and heterogeneous multithreading” or SHMT. They describe their development of a proposed SHMT framework on an embedded system platform that simultaneously uses a multi-core ARM processor, an NVIDIA GPU, and a Tensor Processing Unit hardware accelerator.
    The system achieved a 1.96 times speedup and a 51% reduction in energy consumption.
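    The core idea can be pictured with a small sketch: partition one workload and dispatch the pieces concurrently to whatever processors are present, then merge the partial results. The stand-in "device" functions below are plain Python placeholders, not the authors' SHMT framework.

      from concurrent.futures import ThreadPoolExecutor

      # Conceptual sketch of simultaneous heterogeneous multithreading:
      # one task is split across different kinds of processors at once.
      def run_on_cpu(chunk): return sum(x * x for x in chunk)
      def run_on_gpu(chunk): return sum(x * x for x in chunk)  # stand-in
      def run_on_tpu(chunk): return sum(x * x for x in chunk)  # stand-in

      data = list(range(9000))
      chunks = [data[0:3000], data[3000:6000], data[6000:9000]]
      devices = [run_on_cpu, run_on_gpu, run_on_tpu]

      with ThreadPoolExecutor(max_workers=3) as pool:
          partials = pool.map(lambda dc: dc[0](dc[1]), zip(devices, chunks))

      print(sum(partials) == sum(x * x for x in data))  # merged result matches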
    “You don’t have to add new processors because you already have them,” Tseng said.
    The implications are huge.
    Simultaneous use of existing processing components could reduce computer hardware costs while also reducing carbon emissions from the energy produced to keep servers running in warehouse-size data processing centers. It also could reduce the need for scarce freshwater used to keep servers cool.
    Tseng’s paper, however, cautions that further investigation is needed to answer several questions about system implementation, hardware support, code optimization, and what kind of applications stand to benefit the most, among other issues.
    The paper was presented at the 56th Annual IEEE/ACM International Symposium on Microarchitecture held in October in Toronto, Canada. The paper garnered recognition from Tseng’s professional peers in the Institute of Electrical and Electronics Engineers, or IEEE, who selected it as one of 12 papers included in the group’s “Top Picks from the Computer Architecture Conferences” issue to be published this coming summer.