More stories

  • New quantum computing feat is a modern twist on a 150-year-old thought experiment

    A team of quantum engineers at UNSW Sydney has developed a method to reset a quantum computer — that is, to prepare a quantum bit in the ‘0’ state — with very high confidence, as needed for reliable quantum computations. The method is surprisingly simple: it is related to the old concept of ‘Maxwell’s demon’, an omniscient being that can separate a gas into hot and cold by watching the speed of the individual molecules.
    “Here we used a much more modern ‘demon’ — a fast digital voltmeter — to watch the temperature of an electron drawn at random from a warm pool of electrons. In doing so, we made it much colder than the pool it came from, and this corresponds to a high certainty of it being in the ‘0’ computational state,” says Professor Andrea Morello of UNSW, who led the team.
    “Quantum computers are only useful if they can reach the final result with very low probability of errors. And one can have near-perfect quantum operations, but if the calculation started from the wrong code, the final result will be wrong too. Our digital ‘Maxwell’s demon’ gives us a 20x improvement in how accurately we can set the start of the computation.”
    The research was published in Physical Review X, a journal published by the American Physical Society.
    Watching an electron to make it colder
    Prof. Morello’s team has pioneered the use of electron spins in silicon to encode and manipulate quantum information, and demonstrated record-high fidelity — that is, very low probability of errors — in performing quantum operations. The last remaining hurdle for efficient quantum computations with electrons was the fidelity of preparing the electron in a known state as the starting point of the calculation.
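    The post-selection logic behind this kind of measurement-based reset can be captured in a toy Monte Carlo model: draw electrons from a thermal pool, let a noisy “demon” readout accept only those it sees in the ‘0’ state, and compare the residual initialization error with and without the demon. This is an illustrative Python sketch; the parameter values are assumptions, not the UNSW experiment’s.

    ```python
    import math
    import random

    # Toy model of demon-assisted qubit initialization (illustrative only).
    E_OVER_KT = 3.0        # assumed qubit energy splitting over thermal energy
    READOUT_ERROR = 0.05   # assumed chance the "demon" voltmeter misreads a state

    # Thermal probability that a randomly drawn electron starts in the excited '1'
    p_excited = 1.0 / (1.0 + math.exp(E_OVER_KT))

    N = 1_000_000
    kept = kept_excited = 0
    for _ in range(N):
        excited = random.random() < p_excited       # draw from the warm pool
        misread = random.random() < READOUT_ERROR
        reads_zero = excited if misread else not excited
        if reads_zero:                              # post-select on reading '0'
            kept += 1
            kept_excited += excited

    error_without = p_excited
    error_with = kept_excited / kept
    print(f"initialization error without demon: {error_without:.4f}")
    print(f"initialization error with demon:    {error_with:.4f}")
    print(f"improvement: ~{error_without / error_with:.0f}x")
    ```

    With these assumed numbers, post-selection improves the initialization error by roughly an order of magnitude, echoing (but not reproducing) the 20x figure reported by the team.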

  • Basketball study automates patterns of play to compare teams' performance

    New analysis of elite women’s basketball automatically pinpoints a team’s chances of a high- or low-scoring play, even when the ball’s trajectory looks much the same, in research developed by QUT data scientists.
    The study, published recently in PLOS One, used existing data from 72 women’s international basketball games ranging from the 2014 FIBA World Championships to the 2016 Rio Olympics.
    The results could help coaches scrutinise a team’s effective or problematic plays by classifying and tracking the dynamics of ball movement, relating each type of play to whether the team scored.
    While wide-ranging data already exist on spatial characteristics such as a basketball’s bounce, its speed, possession time, points scored and teams’ previous history, research attempting to cluster play types using ball movement is limited.
    This new research, applying the concept of ‘dynamic time warping’, was led by Distinguished Professor Kerrie Mengersen and Dr Paul Wu from QUT’s Centre for Data Science, Dr Wade Hobbs from the Australian Institute of Sport, and University of Sydney and QUT student Alan Yu.
    Dr Wu said the study was prompted by questions about the unpredictability of plays and whether that led to better scoring outcomes.
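    At the heart of such clustering is a distance between plays that tolerates differences in timing. Below is a minimal Python sketch of dynamic time warping between two ball trajectories, represented as sequences of (x, y) court coordinates; the study’s actual features and implementation details are not given in this summary, so treat this purely as an illustration of the concept.

    ```python
    import math

    def dtw(a, b):
        """Dynamic time warping distance between two sequences of 2-D points."""
        n, m = len(a), len(b)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = math.dist(a[i - 1], b[j - 1])  # point-to-point distance
                # extend the cheapest alignment: match, stretch a, or stretch b
                cost[i][j] = d + min(cost[i - 1][j - 1],
                                     cost[i - 1][j],
                                     cost[i][j - 1])
        return cost[n][m]

    # Two plays with similar-looking paths that unfold at different speeds:
    play_a = [(0, 0), (3, 1), (6, 2), (9, 4), (12, 5)]
    play_b = [(0, 0), (2, 1), (3, 1), (6, 2), (8, 3), (12, 5)]
    print(f"DTW distance: {dtw(play_a, play_b):.2f}")
    ```

    Pairwise DTW distances like this can then be fed to a standard clustering algorithm to group plays by ball-movement pattern, which is the step that lets play types be related to scoring outcomes.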

  • Making 'transport' robots smarter

    Imagine a team of humans and robots working together to process online orders — real-life workers strategically positioned among their automated coworkers, who move intelligently back and forth in a warehouse space, picking items for shipping to the customer. This could become a reality sooner rather than later, thanks to researchers at the University of Missouri, who are working to speed up the online delivery process by developing a software model designed to make “transport” robots smarter.
    “The robotic technology already exists,” said Sharan Srinivas, an assistant professor with a joint appointment in the Department of Industrial and Manufacturing Systems Engineering and the Department of Marketing. “Our goal is to best utilize this technology through efficient planning. To do this, we’re asking questions like ‘given a list of items to pick, how do you optimize the route plan for the human pickers and robots?’ or ‘how many items should a robot pick in a given tour?’ or ‘in what order should the items be collected for a given robot tour?’ Likewise, we have a similar set of questions for the human worker. The most challenging part is optimizing the collaboration plan between the human pickers and robots.”
    Currently, a lot of human effort and labor costs are involved with fulfilling online orders. To help optimize this process, robotic companies have already developed collaborative robots — also known as cobots or autonomous mobile robots (AMRs) — to work in a warehouse or distribution center. The AMRs are equipped with sensors and cameras to help them navigate around a controlled space like a warehouse. The proposed model will help create faster fulfillment of customer orders by optimizing the key decisions or questions pertaining to collaborative order picking, Srinivas said.
    “The robot is intelligent, so if it’s instructed to go to a particular location, it can navigate the warehouse and not hit any workers or other obstacles along the way,” Srinivas said.
    Srinivas, who specializes in data analytics and operations research, said AMRs are not designed to replace human workers, but instead can work collaboratively alongside them to help increase the efficiency of the order fulfillment process. For instance, AMRs can help fulfill multiple orders at a time from separate areas of the warehouse quicker than a person, but human workers are still needed to help pick items from shelves and place them onto the robots to be transported to a designated drop-off point inside the warehouse.
    “The one drawback is these robots do not have good grasping abilities,” Srinivas said. “But humans are good at grasping items, so we are trying to leverage the strength of both resources — the human workers and the collaborative robots. So, what happens in this case is the humans are at different points in the warehouse, and instead of one worker going through the entire aisle to pick up multiple items along the way, the robot will come to the human worker, and the human worker will take an item and put it on the robot. Therefore, the human worker will not have to strain himself or herself in order to move large carts of heavy items throughout the warehouse.”
    Srinivas said the software could also be applied in other settings, such as grocery stores, where robots could be used to fill orders while also navigating among members of the general public. He could see this potentially happening within the next three to five years.
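    To give a flavour of the decisions involved, the Python sketch below plans a single robot tour with a greedy nearest-neighbour rule under a capacity limit. The published model optimizes batching, sequencing and picker-robot routing jointly; this toy, with made-up coordinates, illustrates only the routing sub-problem.

    ```python
    def manhattan(p, q):
        # Warehouse travel is aisle-constrained, so Manhattan distance is a
        # common approximation for travel effort between two grid locations.
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def plan_tour(depot, item_locations, capacity):
        """Greedy: repeatedly visit the nearest unpicked item, up to capacity."""
        tour, pos, remaining = [], depot, list(item_locations)
        while remaining and len(tour) < capacity:
            nxt = min(remaining, key=lambda loc: manhattan(pos, loc))
            tour.append(nxt)
            remaining.remove(nxt)
            pos = nxt
        return tour, remaining

    depot = (0, 0)
    items = [(2, 8), (5, 1), (5, 9), (1, 3), (7, 4)]   # hypothetical pick spots
    tour, leftover = plan_tour(depot, items, capacity=3)
    print("robot tour:", tour)            # items this robot collects, in order
    print("left for a later tour:", leftover)
    ```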
    “Collaborative order picking with multiple pickers and robots: Integrated approach for order batching, sequencing and picker-robot routing” was published in the International Journal of Production Economics. Shitao Yu, a doctoral candidate in the Department of Industrial and Manufacturing Systems Engineering at MU, is a co-author of the study.
    Story Source:
    Materials provided by University of Missouri-Columbia.

  • Smart inverters' vulnerability to cyberattacks needs to be identified and countered, according to researchers

    The emergence of distributed energy resources (DERs) — facilities owned by individuals or small companies that can generate, store and return power to energy grids — is transforming the way power is used across the world.
    The technology is becoming more prevalent as society looks for alternative sources of energy, but its rapid growth brings with it a new field of vulnerabilities open to cyberattacks.
    DERs like home-based solar panels or electric vehicle chargers rely on field devices known as smart inverters to interface with power grids. As a new study by Concordia researchers shows, these devices’ reliance on digital information and communication technology exposes them to multiple forms of attack by malicious actors, with serious repercussions for the public.
    The paper, published in IEEE Transactions on Power Electronics, surveys the landscape of smart inverter cybersecurity and identifies attack strategies at the device and grid level. It also looks at ways to defend against, mitigate and prevent them.
    “We are still in the first decade of trying to understand the problem and identifying the most prominent risks,” says the paper’s co-author Jun Yan, associate professor at the Concordia Institute for Information Systems Engineering.
    “Threats are inevitable. We have so many homeowners and third parties using these devices that having a perfect line of defence is impossible. We must look at our strategic priorities to start.”
    Yuanling Li, a Concordia PhD student and research intern at Ericsson’s Global Artificial Intelligence Accelerator (GAIA), is the paper’s lead author.

  • Protons fix a long-standing issue in silicon carbide electronics

    Silicon carbide (SiC) is a semiconductor material that outperforms pure silicon-based semiconductors in several applications. Used mostly in power inverters, motor drives, and battery chargers, SiC devices offer benefits such as high power density and reduced power losses at high frequencies, even at high voltages. Although these properties and its relatively low cost make SiC a promising contender in various sectors of the semiconductor market, its poor long-term reliability has remained a major barrier for the past two decades.
    One of the most pressing issues with 4H-SiC, a SiC polytype with superior physical properties, is bipolar degradation. This phenomenon is caused by the expansion of stacking faults in 4H-SiC crystals. Put simply, small dislocations in the crystal structure grow over time into large defects called “single Shockley stacking faults” that progressively degrade performance and cause the device to fail. Although some methods to mitigate this problem exist, they make the device fabrication process more expensive.
    Fortunately, a team of researchers from Japan, led by Associate Professor Masashi Kato from the Nagoya Institute of Technology, has now found a feasible solution to this issue. In their study, made available online and published in Scientific Reports on 5 November 2022, they present a fault-suppression technique called “proton implantation” that can prevent bipolar degradation in 4H-SiC semiconductor wafers when applied prior to the device fabrication process. Explaining the motivation for this study, Dr. Kato says, “Even in the recently developed SiC epitaxial wafers, bipolar degradation persists in the substrate layers. We wanted to help the industry navigate this challenge and find a way for developing reliable SiC devices, and, therefore, decided to investigate this method for eliminating bipolar degradation.” Associate Professor Shunta Harada from Nagoya University and Hitoshi Sakane, an academic researcher at SHI-ATEX, both in Japan, were also part of this study.
    Proton implantation involves “injecting” hydrogen ions into the substrate using a particle accelerator. The idea is to prevent the formation of single Shockley stacking faults by pinning down partial dislocations in the crystal, one of the effects of introducing proton impurities. However, proton implantation itself can damage the 4H-SiC substrate, so high-temperature annealing is used as an additional processing step to repair this damage.
    The research team wanted to verify if proton implantation would be effective when applied before the device fabrication process, which typically includes a high-temperature annealing step. Accordingly, they applied proton implantation at different doses on 4H-SiC wafers and used them to fabricate PiN diodes. They then analyzed the current-voltage characteristics of these diodes and compared them to those of a regular diode without proton implantation. Finally, they captured electroluminescence images of the diodes to check whether stacking faults had formed or not.
    Overall, the results were very promising: diodes that had undergone proton implantation performed just as well as regular ones, but without signs of bipolar degradation. At lower doses, proton implantation caused no significant deterioration in the diodes’ current-voltage characteristics, yet it still significantly suppressed the expansion of single Shockley stacking faults.
    The researchers hope that these findings will help realize more reliable and cost-effective SiC devices that can reduce power consumption in trains and vehicles. “Although the additional fabrication costs of proton implantation should be considered, they would be similar to those incurred in aluminum-ion implantation, currently an essential step in the fabrication of 4H-SiC power devices,” speculates Dr. Kato. “Moreover, with further optimization of implantation conditions, there is a possibility of applying this method to the fabrication of other kinds of devices based on 4H-SiC.”
    Hopefully, these findings will help unlock the full potential of SiC as a semiconductor material for powering next-generation electronics.
    Story Source:
    Materials provided by Nagoya Institute of Technology.

  • High-performance and compact vibration energy harvester created for self-charging wearable devices

    Walking can boost not only your own energy but also, potentially, the energy of your wearable electronic devices. Osaka Metropolitan University scientists made a significant advance toward self-charging wearable devices with their invention of a dynamic magnifier-enhanced piezoelectric vibration energy harvester that can amplify power generated from impulsive vibrations, such as from human walking, by about 90 times, while remaining as small as currently developed energy harvesters. The results were published in Applied Physics Letters.
    These days, people carry multiple electronic devices such as smartphones, and wearable devices are expected to become increasingly widespread in the near future. The resulting demand for more efficient recharging of these devices has increased the attention paid to energy harvesting, a technology that converts energy such as heat and light into electricity that can power small devices. One form of energy harvesting called vibration energy harvesting is deemed highly practical given that it can transform the kinetic energy from vibration into electricity and is not affected by weather or climate.
    A research team led by Associate Professor Takeshi Yoshimura from the Graduate School of Engineering at Osaka Metropolitan University has developed a microelectromechanical system (MEMS) piezoelectric vibration energy harvester that is only approximately 2 cm in diameter with a U-shaped metal component called a dynamic magnifier. Compared with conventional harvesters, the new harvester allows for an increase of about 90 times in the power converted from impulsive vibrations, which can be generated by human walking motion.
    The team has been working on developing vibration energy harvesters that utilize the piezoelectric effect, a phenomenon in which specific types of materials produce an electric charge or voltage in response to applied pressure. So far, they have succeeded in generating microwatt-level electricity from mechanical vibrations with a constant frequency, such as those generated by motors and washing machines. However, the power generation of these harvesters drops drastically when the applied vibrations are nonstationary and impulsive, such as those generated by human walking.
    Responding to this challenge, the team developed and incorporated the U-shaped vibration-amplification component under the harvester, which improved power generation without increasing the device size. The technology is expected to generate electric power from non-steady vibrations, including walking motion, to power small electronic devices such as smartphones and wireless earphones.
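    The magnifier principle can be illustrated with a small frequency-response calculation: compare the motion across the piezoelectric element when the harvester mass is mounted directly on a vibrating base versus on a tuned intermediate stage. The Python sketch below uses assumed toy parameters, not the published device’s, and leaves out the piezoelectric coupling itself for simplicity.

    ```python
    import numpy as np

    m2, f0 = 0.01, 30.0                        # harvester mass (kg), tuning (Hz)
    w0 = 2 * np.pi * f0
    k2 = m2 * w0**2                            # harvester stiffness
    c2 = 2 * 0.02 * np.sqrt(k2 * m2)           # 2% damping ratio

    m1 = 5 * m2                                # magnifier stage, tuned to same f0
    k1 = m1 * w0**2
    c1 = 2 * 0.02 * np.sqrt(k1 * m1)

    def direct_mount(w):
        """|relative displacement across piezo| per unit base displacement."""
        return abs(m2 * w**2 / (k2 - m2 * w**2 + 1j * w * c2))

    def with_magnifier(w):
        """Same ratio when the harvester sits on the intermediate stage."""
        A = np.array([[k1 + k2 - m1 * w**2 + 1j * w * (c1 + c2),
                       -(k2 + 1j * w * c2)],
                      [-(k2 + 1j * w * c2),
                       k2 - m2 * w**2 + 1j * w * c2]])
        b = np.array([k1 + 1j * w * c1, 0.0])  # unit base displacement forcing
        x1, x2 = np.linalg.solve(A, b)         # absolute motion of each mass
        return abs(x2 - x1)

    ws = 2 * np.pi * np.linspace(15, 45, 4000)
    print(f"peak response, direct mount:   {max(direct_mount(w) for w in ws):.1f}")
    print(f"peak response, with magnifier: {max(with_magnifier(w) for w in ws):.1f}")
    # Harvested power scales roughly with the square of this motion, so a modest
    # amplitude gain at resonance compounds into a much larger power gain.
    ```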
    Professor Yoshimura concluded, “Since electronic devices are expected to become more energy-efficient, we hope that this invention will contribute to the realization of self-charging wearable devices.”
    Story Source:
    Materials provided by Osaka Metropolitan University.

  • Sinonasal cancer: AI facilitates breakthrough in diagnostics

    Researchers at LMU and Charité hospital in Berlin have developed a method for classifying difficult-to-diagnose nasal cavity tumors.
    Although tumors in the nasal cavity and the paranasal sinus are confined to a small space, they encompass a very broad spectrum with many tumor types. As they often do not exhibit any specific pattern or appearance, they are difficult to diagnose. This applies especially to so-called sinonasal undifferentiated carcinomas (SNUCs).
    Now a team led by Dr. Philipp Jurmeister and Prof. Frederick Klauschen from the Institute of Pathology at LMU and Prof. David Capper from Charité University Hospital, together with the German Cancer Consortium (DKTK), partner sites Munich and Berlin, has achieved a decisive improvement in diagnostics. The team developed an AI tool that reliably distinguishes tumors on the basis of chemical DNA modifications and assigns SNUCs, which previously available methods were unable to tell apart, to four clearly distinct groups. This breakthrough could open up new opportunities for targeted therapies.
    Tumor-specific DNA modifications
    Chemical modifications in DNA play a vital role in the regulation of gene activity. This includes DNA methylation, whereby an extra methyl group is added to the DNA building blocks. In earlier studies, the scientists had already demonstrated that the methylation pattern of the genome is specific for different tumor types, because it can be traced back to the tumor’s cell of origin.
    “On this basis, we’ve now recorded the DNA methylation patterns of almost 400 tumors in the nasal cavity and paranasal sinus,” says Capper. Thanks to an extensive international collaboration, the researchers managed to compile such a large number of samples even though these tumors are rare and comprise only about four percent of all malignant tumors in the nose and throat area.
    Four tumor groups with different prognoses
    For the analysis of the DNA methylation data, the researchers have developed an AI model that assigns the tumors to different classes. “Due to the large volumes of data involved, machine learning methods are indispensable,” says Jurmeister. “To actually recognize patterns, we had to evaluate several thousand methylation positions in our study.” This revealed that SNUCs can be classified into four groups, which also differ in terms of further molecular characteristics.
    Furthermore, these results are clinically relevant, as the various groups have different prognoses. “One group takes a surprisingly good course, for example, even though the tumors look very aggressive under the microscope,” says Klauschen. “Whereas another group has a poor prognosis.” On the basis of the molecular characteristics of the groups, researchers may also be able to develop targeted new therapy approaches in the future.
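    Computationally, each tumor in such an analysis is a long vector of per-site methylation levels (beta values between 0 and 1) that a classifier maps to a group. The Python sketch below runs that kind of pipeline on synthetic data; the random-forest model and every number here are assumptions for illustration, not the team’s published method.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_per_class, n_sites = 50, 2000            # synthetic cohort and CpG sites

    X, y = [], []
    for cls in range(4):                       # four tumor groups, as in the study
        profile = rng.uniform(0.2, 0.8, n_sites)   # class-specific methylation
        betas = np.clip(profile + rng.normal(0, 0.1, (n_per_class, n_sites)), 0, 1)
        X.append(betas)
        y += [cls] * n_per_class
    X, y = np.vstack(X), np.array(y)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
    ```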
    Story Source:
    Materials provided by Ludwig-Maximilians-Universität München.

  • Breaking the scaling limits of analog computing

    As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.
    An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy.
    However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.
    Even with error-correction techniques, some amount of error is unavoidable due to fundamental properties of the devices that make up an optical neural network. A network large enough to be implemented in the real world would be far too imprecise to be effective.
    MIT researchers have overcome this hurdle and found a way to effectively scale an optical neural network. By adding a tiny hardware component to the optical switches that form the network’s architecture, they can reduce even the uncorrectable errors that would otherwise accumulate in the device.
    Their work could enable a super-fast, energy-efficient, analog neural network that can function with the same accuracy as a digital one. With this technique, as an optical circuit becomes larger, the amount of error in its computations actually decreases.
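    Why errors accumulate with scale is easy to see in a toy model: treat the network as a deep cascade of 2x2 interferometer stages and give every stage a small, random angle error. The Python sketch below illustrates only the problem the MIT researchers set out to solve; it does not model their hardware correction, and it uses plain real rotations as stand-ins for the complex-valued unitaries of a real mesh.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def stage(theta):
        """A 2x2 rotation, standing in for one interferometer stage's unitary."""
        return np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])

    def cascade_error(depth, angle_error_std):
        """Output deviation between an ideal cascade and one with noisy stages."""
        ideal = real = np.array([1.0, 0.0])
        for _ in range(depth):
            theta = rng.uniform(0, 2 * np.pi)
            ideal = stage(theta) @ ideal
            real = stage(theta + rng.normal(0, angle_error_std)) @ real
        return np.linalg.norm(real - ideal)

    for depth in (10, 100, 1000):
        mean_err = np.mean([cascade_error(depth, 0.01) for _ in range(50)])
        print(f"depth {depth:4d}: mean output error {mean_err:.3f}")
    ```

    The deeper the cascade, the larger the output deviation, which is why an uncorrected analog network of practical size loses its precision; the MIT result inverts this trend.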