More stories

  • Computer simulation models potential asteroid collisions

    An asteroid impact can be enough to ruin anyone’s day, but several small factors can make the difference between an out-of-this-world story and total annihilation. In AIP Advances, published by AIP Publishing, a researcher from the National Institute of Natural Hazards in China developed a computer simulation of asteroid collisions to better understand these factors.
    The computer simulation initially sought to replicate model asteroid strikes performed in a laboratory. After verifying the accuracy of the simulation, Duoxing Yang believes it could be used to predict the result of future asteroid impacts or to learn more about past impacts by studying their craters.
    “From these models, we learn generally a destructive impact process, and its crater formation,” said Yang. “And from crater morphologies, we could learn impact environment temperatures and its velocity.”
    Yang’s simulation was built using the space-time conservation element and solution element method, designed by NASA and used by many universities and government agencies, to model shock waves and other acoustic problems.
    The goal was to simulate a small rocky asteroid striking a larger metal asteroid at several thousand meters per second. Using his simulation, Yang was able to calculate the effects this would have on the metal asteroid, such as the size and shape of the crater.
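    To get a feel for the energies involved, a back-of-envelope kinetic-energy calculation can be sketched as follows. The mass and velocity are hypothetical round numbers chosen only for illustration; they are not parameters from Yang's simulation.

```python
# Illustrative only: the kinetic energy E = 1/2 * m * v^2 carried by an
# impactor at the "several thousand meters per second" described above.
# The mass and velocity are hypothetical round numbers, not study values.

def impact_energy_joules(mass_kg: float, velocity_m_s: float) -> float:
    """Kinetic energy of an impactor in joules."""
    return 0.5 * mass_kg * velocity_m_s ** 2

# A hypothetical 1,000 kg rocky body striking at 5,000 m/s:
energy = impact_energy_joules(1_000.0, 5_000.0)
print(f"{energy:.3e} J")  # prints 1.250e+10 J, roughly 3 tonnes of TNT
```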
    The simulation results were compared against mock asteroid impacts created experimentally in a laboratory. The simulation held up against these experimental tests, which means the next step in the research is to use the simulation to generate more data that can’t be produced in the laboratory.
    This data is being created in preparation for NASA’s Psyche mission, which aims to be the first spacecraft to explore an asteroid made entirely of metal. Unlike more familiar rocky asteroids, which are made of roughly the same materials as the Earth’s crust, metal asteroids are made of materials found in the Earth’s inner core. NASA believes studying such an asteroid can reveal more about the conditions found in the center of our own planet.
    Yang believes computer simulation models can generalize his results to all metal asteroid impacts and, in the process, answer several existing questions about asteroid interactions.
    “What kind of geochemistry components will be generated after impacts?” said Yang. “What kinds of impacts result in good or bad consequences to local climate? Can we change trajectory of asteroids heading to us?”
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Researchers develop new measurements for designing cooler electronics

    When cell phones, electric vehicle chargers, or other electronic devices get too hot, performance degrades, and eventually overheating can cause them to shut down or fail. To prevent that from happening, researchers are working to solve the problem of dissipating the heat produced during operation. Heat generated in a device has to flow out, ideally with little hindrance, to limit the temperature rise. Often this thermal energy must cross several dissimilar materials along the way, and the interfaces between these materials can impede heat flow.
    A new study from researchers at the Georgia Institute of Technology, Notre Dame, University of California Los Angeles, University of California Irvine, Oak Ridge National Laboratory, and the Naval Research Laboratory observed interfacial phonon modes that exist only at the interface between silicon (Si) and germanium (Ge). This discovery, published in the journal Nature Communications, shows experimentally that decades-old conventional theories of interfacial heat transfer are incomplete and that the inclusion of these phonon modes is warranted.
    “The discovery of interfacial phonon modes suggests that the conventional models of heat transfer at interfaces, which only use bulk phonon properties, are not accurate,” said Zhe Cheng, a Ph.D. graduate from Georgia Tech’s George W. Woodruff School of Mechanical Engineering who is now a postdoc at the University of Illinois at Urbana-Champaign (UIUC). “There is more space for research at the interfaces. Even though these modes are localized, they can contribute to thermal conductance across interfaces.”
    The discovery opens a new pathway for consideration when engineering thermal conductance at interfaces for electronics cooling and other applications where phonons are majority heat carriers at material interfaces.
    “These results will lead to great progress in real-world engineering applications for thermal management of power electronics,” said co-author Samuel Graham, a professor in the Woodruff School of Mechanical Engineering at Georgia Tech and new dean of engineering at University of Maryland. “Interfacial phonon modes should exist widely at solid interfaces. The understanding and manipulation of these interface modes will give us the opportunity to enhance thermal conductance across technologically-important interfaces, for example, GaN-SiC, GaN-diamond, β-Ga2O3-SiC, and β-Ga2O3-diamond interfaces.”
    Presence of Interfacial Phonon Modes Confirmed in Lab
    The researchers observed the interfacial phonon modes experimentally at a high-quality Si-Ge epitaxial interface using Raman spectroscopy and high-energy-resolution electron energy-loss spectroscopy (EELS). To determine the role of interfacial phonon modes in heat transfer at interfaces, they used a technique called time-domain thermoreflectance in labs at Georgia Tech and UIUC to measure the temperature-dependent thermal conductance across these interfaces.
    They also observed a clean additional peak in the Raman spectroscopy measurements of the sample with the Si-Ge interface, which did not appear when they measured a Si wafer and a Ge wafer with the same system. Both the observed interfacial modes and the thermal boundary conductance were fully captured by molecular dynamics (MD) simulations and were confined to the interfacial region, as predicted by theory.
    “This research is the result of great team work with all the collaborators,” said Graham. “Without this team and the unique tools that were available to us, this work would not have been possible.”
    Moving forward the researchers plan to continue to pursue the measurement and prediction of interfacial modes, increase the understanding of their contribution to heat transfer, and determine ways to manipulate these phonon modes to increase thermal transport. Breakthroughs in this area could lead to better performance in semiconductors used in satellites, 5G devices, and advanced radar systems, among other devices.
    The epitaxial Si-Ge samples used in this research were grown at the U.S. Naval Research Lab. The TEM and EELS measurements were done at University of California, Irvine and Oak Ridge National Labs. The MD simulations were performed by the University of Notre Dame. The XRD study was done at UCLA.
    This work is financially supported by the U.S. Office of Naval Research under a MURI project. The EELS study at UC Irvine is supported by the U.S. Department of Energy.
    Story Source:
    Materials provided by Georgia Institute of Technology.

  • Study finds artificial intelligence accurately detects fractures on x-rays, alerts human readers

    Emergency rooms and urgent care clinics are typically busy, and patients often have to wait many hours before they can be seen, evaluated, and treated. Waiting for x-rays to be interpreted by radiologists can contribute to this long wait time, because radiologists often read x-rays for a large number of patients.
    A new study has found that artificial intelligence (AI) can help physicians in interpreting x-rays after an injury and suspected fracture.
    “Our AI algorithm can quickly and automatically detect x-rays that are positive for fractures and flag those studies in the system so that radiologists can prioritize reading x-rays with positive fractures. The system also highlights regions of interest with bounding boxes around areas where fractures are suspected. This can potentially contribute to less waiting time at the time of hospital or clinic visit before patients can get a positive diagnosis of fracture,” explained corresponding author Ali Guermazi, MD, PhD, chief of radiology at VA Boston Healthcare System and Professor of Radiology & Medicine at Boston University School of Medicine (BUSM).
    Fracture interpretation errors represent up to 24 percent of harmful diagnostic errors seen in the emergency department. Furthermore, inconsistencies in the radiographic diagnosis of fractures are more common during the evening and overnight hours (5 p.m. to 3 a.m.), likely related to non-expert reading and fatigue.
    The AI algorithm (AI BoneView) was trained on a very large number of x-rays from multiple institutions to detect fractures of the limbs, pelvis, torso, lumbar spine and rib cage. Expert human readers (musculoskeletal radiologists, radiologists with focused subspecialty training in reading bone x-rays) defined the gold standard in this study, against which the performance of human readers with and without AI assistance was compared.
    A variety of readers were included to simulate a real-life scenario: radiologists, orthopedic surgeons, emergency physicians and physician assistants, rheumatologists, and family physicians, all of whom read x-rays in real clinical practice to diagnose fractures in their patients. Each reader’s diagnostic accuracy for fractures, with and without AI assistance, was compared against the gold standard. The researchers also assessed the diagnostic performance of AI alone against the gold standard. AI assistance helped reduce missed fractures by 29% and increased readers’ sensitivity by 16% (and by 30% for exams with more than one fracture), while improving specificity by 5%.
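    For readers unfamiliar with the metrics, sensitivity and specificity are computed from a reader's confusion matrix. A minimal sketch, using made-up counts rather than data from the study:

```python
# Definitions of the metrics reported above. The counts used below are
# hypothetical illustrative numbers, not data from the study.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of exams with a fracture that the reader flags: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of fracture-free exams correctly cleared: TN / (TN + FP)."""
    return tn / (tn + fp)

# A hypothetical reader who catches 80 of 100 fractures and correctly
# clears 90 of 100 fracture-free exams:
print(sensitivity(tp=80, fn=20))  # prints 0.8
print(specificity(tn=90, fp=10))  # prints 0.9
```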
    Guermazi believes that AI can be a powerful tool to help radiologists and other physicians improve diagnostic performance and increase efficiency, while potentially improving the patient experience at the time of a hospital or clinic visit. “Our study was focused on fracture diagnosis, but a similar concept can be applied to other diseases and disorders. Our ongoing research interest is how best to utilize AI to help human healthcare providers improve patient care, rather than having AI replace them. Our study showed one such example,” he added.
    These findings appear online in the journal Radiology.
    Funding for this study was provided by GLEAMER Inc.
    Story Source:
    Materials provided by Boston University School of Medicine.

  • Face detection in untrained deep neural networks?

    Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity of facial images can arise even in completely untrained deep neural networks.
    This new finding has provided revelatory insights into mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, also making a significant impact on our understanding of the origin of early brain functions before sensory experiences.
    The study published in Nature Communications on December 16 demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that they show the characteristics of those observed in biological brains.
    The ability to identify and recognize faces is a crucial function for social behavior, and this ability is thought to originate from neuronal tuning at the single- or multi-neuronal level. Neurons that selectively respond to faces are observed in young animals of various species, which has raised intense debate over whether face-selective neurons arise innately in the brain or require visual experience.
    Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face-selectivity can emerge spontaneously from random feedforward wirings in untrained deep neural networks. The team showed that the character of this innate face-selectivity is comparable to that observed with face-selective neurons in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face detection tasks.
    These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.
    Professor Paik said, “Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning.”
    He continued, “Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence.”
    This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST singularity research project.
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST).

  • Swinging on the quantum level

    After the “first quantum revolution” — the development of devices such as lasers and the atomic clock — the “second quantum revolution” is currently in full swing. Experts from all over the world are developing fundamentally new technologies based on quantum physics. One key application is quantum communication, where information is written and sent in light. For many applications making use of quantum effects, the light has to be in a certain state — namely, a single-photon state. But what is the best way of generating such single-photon states? In the journal PRX Quantum, researchers from Münster, Bayreuth and Berlin (Germany) have now proposed an entirely new way of preparing quantum systems in order to develop components for quantum technology.
    In the experts’ view it is highly promising to use quantum systems for generating single photon states. One well-known example of such a quantum system is a quantum dot. This is a semiconductor structure, just a few nanometres in size. Quantum dots can be controlled using laser pulses. Although quantum dots have properties similar to those of atoms, they are embedded in a crystal matrix, which is often more practical for applications. “Quantum dots are excellent for generating single photons, and that is something we are already doing in our labs almost every day,” says Dr. Tobias Heindel, who runs an experimental lab for quantum communication at the Technical University of Berlin. “But there is still much room for improvement, especially in transferring this technology from the lab to real applications,” he adds.
    One difficulty that has to be overcome is to separate the generated single photons from the exciting laser pulse. In their work, the researchers propose an entirely new method of solving this problem. “The excitation exploits a swing-up process in the quantum system,” explains Münster University’s Thomas Bracht, the lead author of the study. “For this, we use one or more laser pulses which have frequencies which differ greatly from those in the system. This makes spectral filtering very easy.”
    Scientists define the “swing-up process” as a particular behaviour of the particles excited by the laser light in the quantum system — the electrons or, to be more precise, electron-hole pairs (excitons). Here, light from two lasers emitting pulses almost simultaneously is used. As a result of the interaction of the pulses with one another, a rapid modulation occurs, and in each modulation cycle the particle is excited a little, then dips back towards the ground state. In this process, however, it does not fall back to its previous level but is excited more strongly with each swing-up, until it reaches the maximum state. The advantage of this method is that the laser light does not have the same frequency as the light emitted by the excited particles, so photons generated by the quantum dot can be clearly assigned.
    The team simulated this process in the quantum system, thus providing guidelines for experimental implementation. “We also explain the physics of the swing-up process, which helps us to gain a better understanding of the dynamics in the quantum system,” says associate professor Dr. Doris Reiter, who led the study.
    In order to be able to use the photons in quantum communication, they have to possess certain properties. In addition, the preparation of the quantum system should not be negatively influenced by environmental processes or disruptive influences. In quantum dots, the interaction with the surrounding semiconductor material in particular is often a big problem for such preparation schemes. “Our numerical simulations show that the properties of the photons generated by the swing-up process are comparable with the results of established methods for generating single photons, which are less practical,” adds Prof. Martin Axt, who heads the team of researchers from Bayreuth.
    The study constitutes theoretical work. As a result of the collaboration between theoretical and experimental groups, however, the proposal is very close to realistic experimental laboratory conditions, and the authors are confident that an experimental implementation of the scheme will soon be possible. With their results, the researchers are taking a further step towards developing the quantum technologies of tomorrow.
    Story Source:
    Materials provided by University of Münster.

  • IT security: Computer attacks with laser light

    Computer systems that are physically isolated from the outside world (air-gapped) can still be attacked. This is demonstrated by IT security experts of the Karlsruhe Institute of Technology (KIT) in the LaserShark project. They show that data can be transmitted to light-emitting diodes of regular office devices using a directed laser. With this, attackers can secretly communicate with air-gapped computer systems over distances of several meters. In addition to conventional information and communication technology security, critical IT systems need to be protected optically as well.
    Hackers attack computers with lasers. This sounds like a scene from the latest James Bond movie, but it actually is possible in reality. In early December 2021, researchers of KIT, TU Braunschweig, and TU Berlin presented the LaserShark attack at the 37th Annual Computer Security Applications Conference (ACSAC). This research project focuses on hidden communication via optical channels. Computers or networks in critical infrastructures are often physically isolated to prevent external access. “Air-gapping” means that these systems have neither wired nor wireless connections to the outside world. Previous attempts to bypass such protection via electromagnetic, acoustic, or optical channels work only at short distances or low data rates. Moreover, they frequently allow for data exfiltration only, that is, receiving data.
    Hidden Optical Channel Uses LEDs in Commercially Available Office Devices
    The Intelligent System Security Group of KASTEL — Institute of Information Security and Dependability at KIT, in cooperation with researchers from TU Braunschweig and TU Berlin, has now demonstrated a new attack: with a directed laser beam, an adversary can introduce data into air-gapped systems and retrieve data without additional hardware on-site at the attacked device. “This hidden optical communication uses light-emitting diodes already built into office devices, for instance, to display status messages on printers or telephones,” explains Professor Christian Wressnegger, head of the Intelligent System Security Group of KASTEL. Light-emitting diodes (LEDs) can receive light, although they are not designed to do so.
    Data Are Transmitted in Both Directions
    By directing laser light to already installed LEDs and recording their response, the researchers establish a hidden communication channel over a distance of up to 25 m that can be used bidirectionally (in both directions). It reaches data rates of 18.2 kilobits per second inwards and 100 kilobits per second outwards. This optical attack is possible in commercially available office devices used at companies, universities, and authorities. “The LaserShark project demonstrates how important it is to additionally protect critical IT systems optically next to conventional information and communication technology security measures,” Christian Wressnegger says.
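    The practical impact of those rates can be sketched with a back-of-envelope calculation. The payload size below is hypothetical, and the sketch assumes 1 kilobit = 1,000 bits with no protocol overhead.

```python
# Rough transfer times at the LaserShark channel rates reported above
# (18.2 kbit/s inward, 100 kbit/s outward). Assumes 1 kbit = 1,000 bits
# and ignores any framing or protocol overhead.

def transfer_seconds(payload_bytes: int, rate_kbit_s: float) -> float:
    """Seconds needed to move a payload over a channel of the given rate."""
    return payload_bytes * 8 / (rate_kbit_s * 1_000)

payload = 1_000_000  # a hypothetical 1 MB payload
print(f"inward:  {transfer_seconds(payload, 18.2):.0f} s")   # prints ~440 s
print(f"outward: {transfer_seconds(payload, 100.0):.0f} s")  # prints 80 s
```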
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT).

  • Consciousness in humans, animals and artificial intelligence

    Two researchers at Ruhr-Universität Bochum (RUB) have come up with a new theory of consciousness. They have long been exploring the nature of consciousness, the question of how and where the brain generates consciousness, and whether animals also have consciousness. The new concept describes consciousness as a state that is tied to complex cognitive operations — and not as a passive basic state that automatically prevails when we are awake.
    Professor Armin Zlomuzica from the Behavioral and Clinical Neuroscience research group at RUB and Professor Ekrem Dere, formerly at Université Paris-Sorbonne, now at RUB, describe their theory in the journal Behavioural Brain Research. The printed version will be published on 15 February 2022, the online article has been available since November 2021.
    “The hypotheses underlying our platform theory of consciousness can be tested in experimental studies,” as the authors describe one advantage of their concept over alternative models. “Thus, the process of consciousness can be explored in humans and animals or even in the context of artificial intelligence.”
    The platform theory in detail
    The complex cognitive operations that, according to platform theory, are associated with consciousness are applied to mental representations that are maintained and processed. They can include perceptions, emotions, sensations, memories, imaginations and associations. Conscious cognitive operations are necessary, for example, in situations where learned behaviour or habits are no longer sufficient for coping. People don’t necessarily need consciousness to drive a car or take a shower. But when something unexpected happens, conscious cognitive actions are required to resolve the situation. They are also necessary to predict future events or problems and to develop suitable coping strategies. Most importantly, conscious cognitive operations form the basis for the adaptive and flexible behaviour that enables humans and animals to adapt to new environmental conditions.
    According to the new theory, conscious cognitive actions take place on the basis of a so-called online platform, a kind of central executive that controls subordinate platforms. The subordinate platforms can act, for example, as storage media for knowledge or activities.
    Electrical junctions between nerve cells crucial
    Conscious cognitive operations are facilitated by the interaction of different neuronal networks. Armin Zlomuzica and Ekrem Dere consider electrical synapses, also known as gap junctions, to be crucial in this context. These structures enable extremely fast transmission of signals between nerve cells. They work much faster than chemical synapses, where communication between cells takes place through the exchange of neurotransmitters and neuromodulators.
    A possible experiment
    The authors suggest for example the following experiment to test their platform theory: a human, an experimental animal or artificial intelligence is confronted with a novel problem that can only be solved by combining two or more rules learned in a different context. This creative combination of stored information and application to a new problem can only be accomplished using conscious cognitive operations.
    By administering pharmacological substances that block gap junctions, the researchers would be able to test whether gap junctions do indeed play a decisive role in the processes. Gap junction blockers should inhibit performance in the experiment. However, routine execution of the individual rules, in the contexts in which they were learned, should still be possible.
    “To what extent an artificial intelligence which is capable of independently solving a new and complex problem for which it has no predefined solution algorithm can likewise be considered conscious has to be tested,” point out the authors. “Several conditions would have to be fulfilled: The first one, for example, would be fulfilled, if it successfully proposes a strategy to combat a pandemic by autonomously screening, evaluating, selecting and creatively combining information from the Internet.”
    Story Source:
    Materials provided by Ruhr-University Bochum. Original written by Julia Weiler.

  • Shellac for printed circuits

    More precise, faster, cheaper: researchers all over the world have been working for years on producing electrical circuits using additive processes such as robotic 3D printing (so-called robocasting) — with great success. But this success is now becoming a problem: the metal particles that make such “inks” electrically conductive are exacerbating the problem of electronic waste, especially since the amount of waste is likely to grow with new types of disposable sensors, some of which are used for only a few days.
    Unnecessary waste, thinks Gustav Nyström, head of Empa’s Cellulose & Wood Materials lab: “There is an urgent need for materials that balance electronic performance, cost and sustainability.” To develop an environmentally friendly ink, Nyström’s team therefore set ambitious goals: metal-free, non-toxic, biodegradable. And with practical applications in mind: easily formable and stable to moisture and moderate heat.
    With carbon and shellac
    The researchers chose inexpensive carbon as the conductive material, as they recently reported in the journal Scientific Reports. More precisely: elongated graphite platelets mixed with tiny soot particles that establish electrical contact between these platelets — all this in a matrix made of a well-known biomaterial: shellac, which is obtained from the excretions of scale insects. In the past, it was used to make records; today it is used, among other things, as a varnish for wooden instruments and fingernails. Its advantages correspond exactly to the researchers’ desired profile. And on top of that, it is soluble in alcohol — an inexpensive solvent that evaporates after the ink is applied so that it dries.
    Despite these ingredients, the task proved challenging. That’s because whether used in simple screen printing or with modern 3D printers, the ink must exhibit shear thinning behavior: At “rest,” the ink is rather viscous. But at the moment of printing, when it is subjected to a lateral shear force, it becomes somewhat more fluid — just like a non-drip wall paint that only acquires a softer consistency when applied by the force of the roller. When used in additive manufacturing such as 3D printing with a robotic arm, however, this is particularly tricky: An ink that is too viscous would be too tough — but if it becomes too liquid during printing, the solid components could separate and clog the printer’s tiny nozzle.
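    Shear thinning of this kind is often approximated by the power-law (Ostwald-de Waele) model, in which apparent viscosity falls as shear rate rises. The sketch below uses hypothetical parameter values chosen only to illustrate the behaviour; the article does not report the ink's actual rheological parameters.

```python
# Power-law (Ostwald-de Waele) model of a shear-thinning fluid:
# apparent viscosity eta = K * gamma_dot ** (n - 1), with n < 1.
# K and n below are hypothetical, not measured values for the shellac ink.

def apparent_viscosity(shear_rate: float, K: float = 10.0, n: float = 0.5) -> float:
    """Apparent viscosity (Pa*s) at a given shear rate (1/s)."""
    return K * shear_rate ** (n - 1)

# Viscosity drops as shear rate rises, i.e. the ink flows more easily
# the harder it is pushed through the nozzle:
for rate in (0.1, 1.0, 10.0, 100.0):
    print(f"shear rate {rate:6.1f} 1/s -> viscosity {apparent_viscosity(rate):6.2f} Pa*s")
```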
    Tests with real applications
    To meet the requirements, the researchers tinkered intensively with the formulation for their ink. They tested two sizes of graphite platelets: 40 micrometers and 7 to 10 micrometers in length. Many variations were also needed in the mixing ratio of graphite and carbon black, because too much carbon black makes the material brittle — with the risk of cracking as the ink dries. By optimizing the formulation and the relative composition of the components, the team was able to develop several variants of the ink that can be used in different 2D and 3D printing processes.
    “The biggest challenge was to achieve high electrical conductivity,” says Xavier Aeby, one of the researchers involved, “and at the same time form a gel-like network of carbon, graphite and shellac.” The team investigated how this material behaves in practice in several steps. For example, with a tiny test cuboid: 15 superimposed grids from the 3D printer — made of fine strands just 0.4 millimeters in diameter. This showed that the ink was also sufficient for demanding processes such as robocasting.
    To prove its suitability for real components, the researchers constructed, among other things, a sensor for deformations: a thin PET strip with an ink structure printed on it, whose electrical resistance changed precisely with varying degrees of bending. In addition, tests for tensile strength, stability under water and other properties showed promising results — and so the research team is confident that the new material, which has already been patented, could prove itself in practice. “We hope that this ink system can be used for applications in sustainable printed electronics,” says Gustav Nyström, “for example, for conductive tracks and sensor elements in smart packaging and biomedical devices or in the field of food and environmental sensing.”