More stories

  • A potential breakthrough for production of superior battery technology

    Micro supercapacitors could revolutionise the way we use batteries by increasing their lifespan and enabling extremely fast charging. Manufacturers of everything from smartphones to electric cars are therefore investing heavily in research and development of these electronic components. Now, researchers at Chalmers University of Technology, Sweden, have developed a method that represents a breakthrough for how such supercapacitors can be produced.
    “When discussing new technologies, it is easy to forget how important the manufacturing method is, so that they can actually be commercially produced and be impactful in society. Here, we have developed methods that can really work in production,” explains Agin Vyas, doctoral student at the Department of Microtechnology and Nanoscience at Chalmers University of Technology and lead author of the article.
    Supercapacitors consist of two electrical conductors separated by an insulating layer. They can store electrical energy and have many advantages over a normal battery, such as much more rapid charging, more efficient energy distribution, and a much longer lifespan without loss of performance over charge and discharge cycles. When a supercapacitor is combined with a battery in an electrically powered product, the battery's life can be extended many times over, up to four times for commercial electric vehicles. And whether for personal electronic devices or industrial technologies, the benefits for the end consumer could be huge.
    “It would of course be very convenient to be able to quickly charge, for example, an electric car or not have to change or charge batteries as often as we currently do in our smartphones. But it would also represent a great environmental benefit and be much more sustainable, if batteries had a longer lifespan and did not need to be recycled in complicated processes,” says Agin Vyas.
    Manufacturing a big challenge
    But in practice, today’s supercapacitors are too large for many applications where they could be useful. They need to be about the same size as the battery they are connected to, which is an obstacle to integrating them in mobile phones or electric cars. Therefore, a large part of today’s research and development of supercapacitors is about making them smaller — significantly so.
    Agin Vyas and his colleagues have been working on developing ‘micro’ supercapacitors. These are so small that they can fit on the system circuits that control various functions in mobile phones, computers, electric motors and almost all of the electronics we use today. This solution is also called ‘system-on-a-chip’.
    One of the most important challenges is that the miniature units need to be manufactured in such a way that they are compatible with other components in a system circuit and can easily be tailored for different areas of use. The new paper demonstrates a manufacturing process in which micro-supercapacitors are integrated with the most common way of manufacturing system circuits (known as CMOS).
    “We used a method known as spin coating, a cornerstone technique in many manufacturing processes. This allows us to choose different electrode materials. We also use alkylamine chains in reduced graphene oxide, to show how that leads to a higher charging and storage capacity,” explains Agin Vyas.
    “Our method is scalable and would involve reduced costs for the manufacturing process. It represents a great step forward in production technology and an important step towards the practical application of micro-supercapacitors in both everyday electronics and industrial applications.”
    A method has also been developed for producing micro-supercapacitors of up to ten different materials in one unified manufacturing process, which means that properties can be easily tailored to suit several different end applications.
    Story Source:
    Materials provided by Chalmers University of Technology. Original written by Karin Wik. Note: Content may be edited for style and length.

  • A new gravity sensor used atoms’ weird quantum behavior to peer underground

    The best way to find buried treasure may be with a quantum gravity sensor.

    In these devices, free-falling atoms reveal subtle variations in Earth’s gravitational pull at different places. Those variations reflect differences in the density of material beneath the sensor — effectively letting the instrument peer underground. In a new experiment, one of these machines teased out the tiny gravitational signature of an underground tunnel, researchers report in the Feb. 24 Nature.

    “Instruments like this would find many, many applications,” says Nicola Poli, an experimental physicist at the University of Florence, who coauthored a commentary on the study in the same issue of Nature.

    Poli imagines using quantum gravity sensors to monitor groundwater or magma beneath volcanoes, or to help archaeologists uncover hidden tombs or other artifacts without having to dig them up (SN: 11/2/17). These devices could also help farmers check soil quality or help engineers inspect potential construction sites for unstable ground.

    “There are many tools to measure gravity,” says Xuejian Wu, an atomic physicist at Rutgers University in Newark, N.J., who wasn’t involved in the study. Some devices measure how far gravity pulls down a mass hanging from a spring. Other tools use lasers to clock how fast an object tumbles down a vacuum chamber. But free-falling atoms, like those in quantum gravity sensors, are the most pristine, reliable test masses out there, Wu says. As a result, quantum sensors promise to be more accurate and stable in the long run than other gravity probes.

    Inside a quantum gravity sensor, a cloud of supercooled atoms is dropped down a chute. A pulse of light then splits each of the falling atoms into a superposition state — a quantum limbo where each atom exists in two places at once (SN: 11/7/19). Due to their slightly different positions in Earth’s gravitational field, the two versions of each atom feel a different downward tug as they fall. Another light pulse then recombines the split atoms.

    Thanks to the atoms’ wave-particle duality — a strange rule of quantum physics that says atoms can act like waves — the reunited atoms interfere with each other (SN: 1/13/22). That is, as the atom waves overlap, their crests and troughs can reinforce or cancel each other out, creating an interference pattern. That pattern reflects the slightly different downward pulls that the split versions of each atom felt as they fell — revealing the gravity field at the atom cloud’s location.
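
    In a standard Mach-Zehnder-type atom interferometer, the measured interference phase relates to local gravity as Δφ = k_eff g T², where k_eff is the effective wavenumber of the light pulses and T is the time between pulses. The article does not give the Birmingham instrument's parameters, so the sketch below uses illustrative textbook values for a rubidium sensor rather than the actual experiment's numbers.

    ```python
    # Illustrative only: textbook phase-to-gravity relation for a Mach-Zehnder
    # atom interferometer (delta_phi = k_eff * g * T^2). Parameter values are
    # typical for rubidium sensors, not the instrument described in the article.
    import math

    LAMBDA = 780e-9                      # Rb D2 laser wavelength (m), assumed
    K_EFF = 2 * (2 * math.pi / LAMBDA)   # counter-propagating beams: k_eff = 2k
    T = 0.05                             # pulse separation time (s), assumed

    def gravity_from_phase(delta_phi: float) -> float:
        """Invert delta_phi = k_eff * g * T^2 to recover local g (m/s^2)."""
        return delta_phi / (K_EFF * T**2)

    # With these parameters, a phase of ~3.95e5 rad corresponds to g ≈ 9.81 m/s^2.
    print(gravity_from_phase(3.9511e5))
    ```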

    Extremely precise measurements made by such atom-based devices have helped test Einstein’s theory of gravity (SN: 10/28/20) and measure fundamental constants, such as Newton’s gravitational constant (SN: 4/12/18). But atom-based gravity sensors are highly sensitive to vibrations from seismic activity, traffic and other sources.

    “Even very, very small vibrations create enough noise that you have to measure for a long time” at any location to weed out background tremors, says Michael Holynski, a physicist at the University of Birmingham in England. That has made quantum gravity sensing impractical for many uses outside the lab.  

    Holynski’s team solved that problem by building a gravity sensor with not one but two falling clouds of rubidium atoms. With one cloud suspended a meter above the other, the instrument could gauge the strength of gravity at two different heights in a single location. Comparing those measurements allowed the researchers to cancel out the effects of background noise.
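
    A rough numerical sketch of why the two-cloud arrangement helps: vibrations shift both readings by nearly the same amount, so the per-drop difference cancels the shared noise while keeping the real difference in gravity between the two heights. The values below are invented for illustration and are not data from the study.

    ```python
    # Toy illustration of common-mode noise rejection in a two-cloud gradiometer.
    # All numbers are invented for demonstration and do not come from the study.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                       # repeated drops
    separation = 1.0               # vertical distance between atom clouds (m)

    g_lower = 9.8100               # "true" gravity at the lower cloud (m/s^2)
    g_upper = 9.8100 - 3e-6        # slightly weaker gravity 1 m higher (illustrative)

    vibration = 1e-5 * rng.standard_normal(n)   # vibration noise shared by both clouds
    meas_lower = g_lower + vibration
    meas_upper = g_upper + vibration

    # Each single-cloud reading is dominated by vibration noise...
    print("std of single-cloud reading:", meas_lower.std())
    # ...but the per-drop difference cancels it, exposing the tiny gradient signal.
    gradient = (meas_lower - meas_upper) / separation
    print("recovered gradient:", gradient.mean(), "+/-", gradient.std())
    ```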

    Holynski and colleagues tested whether their sensor — a 2-meter-tall chute on wheels tethered to a rolling cart of equipment — could detect an underground passageway on the University of Birmingham campus. The 2-by-2-meter concrete tunnel lay beneath a road between two multistory buildings. The quantum sensor measured the local gravitational field every 0.5 meters along an 8.5-meter line that crossed over the tunnel. Those readouts matched the predictions of a computer simulation, which had estimated the gravitational signal of the tunnel based on its structure and other factors that could influence the local gravitational field, such as nearby buildings.

    Based on the machine’s sensitivity in this experiment, it could probably provide a reliable gravity measurement at each location in less than two minutes, the researchers estimate. That’s about one-tenth the time needed for other types of gravity sensors.

    The team has since built a downsized version of the gravity sensor used in the tunnel-detecting experiment. The new machine weighs about 15 kilograms, compared with the 300-kilogram beast used for the tunnel test. Other upgrades could also boost the gravity sensor’s speed.

    In the future, engineer Nicole Metje envisions building a quantum gravity sensor that could be pushed from place to place like a lawn mower. But portability isn’t the only challenge for making these tools more user-friendly, says Metje, a coauthor on the study who is also at the University of Birmingham. “At the moment, we still need someone with a physics degree to operate the sensor.”

    So hopeful beachcombers may be waiting a long time to trade in their metal detectors for quantum gravity sensors.

  • Research team makes breakthrough discovery in light interactions with nanoparticles, paving the way for advances in optical computing

    Computers are an indispensable part of our daily lives, and the need for ones that can work faster, solve complex problems more efficiently, and leave smaller environmental footprints by minimizing the required energy for computation is increasingly urgent. Recent progress in photonics has shown that it’s possible to achieve more efficient computing through optical devices that use interactions between metamaterials and light waves to apply mathematical operations of interest on the input signals, and even solve complex mathematical problems. But to date, such computers have required a large footprint and precise, large-area fabrication of the components, which, because of their size, are difficult to scale into more complex networks.
    A newly published paper in Physical Review Letters from researchers at the Advanced Science Research Center at the CUNY Graduate Center (CUNY ASRC) details a breakthrough discovery in nanomaterials and light-wave interactions that paves the way for development of small, low-energy optical computers capable of advanced computing.
    “The increasing energy demands of large data centers and inefficiencies in current computing architectures have become a real challenge for our society,” said Andrea Alù, Ph.D., the paper’s corresponding author, founding director of the CUNY ASRC’s Photonics Initiative and Einstein Professor of Physics at the Graduate Center. “Our work demonstrates that it’s possible to design a nanoscale object that can efficiently interact with light to solve complex mathematical problems with unprecedented speeds and nearly zero energy demands.”
    In their study, CUNY ASRC researchers designed a nanoscale object made of silicon so that, when interrogated with light waves carrying an arbitrary input signal, it is able to encode the corresponding solution of a complex mathematical problem into the scattered light. The solution is calculated at the speed of light, and with minimal energy consumption.
    “This finding is promising because it offers a practical pathway for creating a new generation of very energy-efficient, ultrafast, ultracompact nanoscale optical computers and other nanophotonic technologies that can be used for classical and quantum computations,” said Heedong Goh, Ph.D., the paper’s lead author and a postdoctoral research associate with Alù’s lab. “The very small size of these nanoscale optical computers is particularly appealing for scalability, because multiple nanostructures can be combined and connected together through light scattering to realize complex nanoscale computing networks.”
    Story Source:
    Materials provided by Advanced Science Research Center, GC/CUNY. Note: Content may be edited for style and length.

  • Researcher urges caution on AI in mammography

    Analyzing breast-cancer tumors with artificial intelligence has the potential to improve healthcare efficiency and outcomes. But doctors should proceed cautiously, because similar technological leaps previously led to higher rates of false-positive tests and over-treatment.
    That’s according to a new editorial in JAMA Health Forum co-written by Joann G. Elmore, MD, MPH, a researcher at the UCLA Jonsson Comprehensive Cancer Center, the Rosalinde and Arthur Gilbert Foundation Endowed Chair in Health Care Delivery and professor of medicine at the David Geffen School of Medicine at UCLA.
    “Without a more robust approach to the evaluation and implementation of AI, given the unabated adoption of emergent technology in clinical practice, we are failing to learn from our past mistakes in mammography,” the JAMA Health Forum editorial states. The piece, posted online Friday, was co-written with Christoph I. Lee, MD, MS, MBA, a professor of radiology at the University of Washington School of Medicine.
    One of those “past mistakes in mammography,” according to the authors, was adjunct computer-aided detection (CAD) tools, which grew rapidly in popularity in the field of breast cancer screening starting more than two decades ago. CAD was approved by the FDA in 1998, and by 2016 more than 92% of U.S. imaging facilities were using the technology to interpret mammograms and hunt for tumors. But the evidence showed CAD did not improve mammography accuracy. “CAD tools are associated with increased false positive rates, leading to overdiagnosis of ductal carcinoma in situ and unnecessary diagnostic testing,” the authors wrote. Medicare stopped paying for CAD in 2018, but by then the tools had racked up more than $400 million a year in unnecessary health costs.
    “The premature adoption of CAD is a premonitory symptom of the wholehearted embrace of emergent technologies prior to fully understanding their impact on patient outcomes,” Elmore and Lee wrote.
    The doctors suggest several safeguards to put in place to avoid “repeating past mistakes,” including tying Medicare reimbursement to “improved patient outcomes, not just improved technical performance in artificial settings.”
    Story Source:
    Materials provided by University of California – Los Angeles Health Sciences. Note: Content may be edited for style and length.

  • New imager microchip helps devices bring hidden objects to light

    Researchers from The University of Texas at Dallas and Oklahoma State University have developed an innovative terahertz imager microchip that can enable devices to detect and create images through obstacles that include fog, smoke, dust and snow.
    The team is working on a device for industrial applications that require imaging up to 20 meters away. The technology could also be adapted for use in cars to help drivers or autonomous vehicle systems navigate through hazardous conditions that reduce visibility. On an automotive display, for example, the technology could show pixelated outlines and shapes of objects, such as another vehicle or pedestrians.
    “The technology allows you to see in vision-impaired environments. In industrial settings, for example, devices using the microchips could help with packaging inspections for manufacturing process control, monitoring moisture content or seeing through steam. If you are a firefighter, it could help you see through smoke and fire,” said Dr. Kenneth K. O, professor of electrical and computer engineering and the Texas Instruments Distinguished University Chair in the Erik Jonsson School of Engineering and Computer Science.
    Yukun Zhu, a doctoral candidate in electrical engineering, announced the imaging technology on Feb. 21 at the virtual International Solid-State Circuits Conference, sponsored by the Institute of Electrical and Electronics Engineers (IEEE) and its Solid-State Circuits Society.
    The advance is the result of more than 15 years of work by O and his team of students, researchers and collaborators. This latest effort is supported by Texas Instruments (TI) through its TI Foundational Technology Research Program.
    “TI has been part of the journey through much of the 15 years,” said O, who is director of the Texas Analog Center of Excellence (TxACE) at UT Dallas. “The company has been a key supporter of the research.”
    The microchip emits radiation beams in the terahertz range (430 GHz) of the electromagnetic spectrum from pixels no larger than a grain of sand. The beams travel through fog, dust and other obstacles that optical light cannot penetrate and bounce off objects and back to the microchip, where the pixels pick up the signal to create images. Without the use of external lenses, the terahertz imager includes the microchip and a reflector that increases the imaging distance and quality and reduces power consumption.
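    Some back-of-envelope numbers follow from the figures in the article: at 430 GHz the wavelength is roughly 0.7 mm, and a round trip to a target 20 meters away takes on the order of 130 nanoseconds. This is illustrative arithmetic only, not part of the published work.
    ```python
    # Back-of-envelope numbers implied by the article's figures (430 GHz, 20 m range).
    C = 3.0e8          # speed of light (m/s)
    FREQ = 430e9       # terahertz-band operating frequency (Hz)
    RANGE_M = 20.0     # target imaging distance (m)

    wavelength_mm = C / FREQ * 1e3            # ~0.70 mm
    round_trip_ns = 2 * RANGE_M / C * 1e9     # ~133 ns
    print(f"wavelength: {wavelength_mm:.2f} mm, round trip at 20 m: {round_trip_ns:.0f} ns")
    ```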
    The researchers designed the imager using complementary metal-oxide semiconductor (CMOS) technology. This type of integrated circuit technology is used to manufacture the bulk of consumer electronics devices, which makes the imager affordable. O’s group was one of the first to show that CMOS technology was viable for terahertz applications, and since then they have worked to develop a variety of new applications.
    “Another breakthrough result enabled through innovations that overcame fundamental active-gain limits of CMOS is that this imaging technology consumes more than 100 times less power than the phased arrays currently being investigated for the same imaging applications. This and the use of CMOS make consumer applications of this technology possible,” said O, a fellow of the IEEE.
    TxACE is supported by the Semiconductor Research Corp., TI, the UT System and UT Dallas.
    “UT Dallas and Oklahoma State continue to discover technological innovations that will help shape the future,” said Dr. Swaminathan Sankaran, design director and Distinguished Member Technical Staff at TI Kilby Labs. “What Dr. O and his research team were able to accomplish was truly remarkable with this terahertz monostatic reflection-mode imager work. Their research paves a path for improved raw angular resolution and low-power, cost system integration, and we are excited to see what applications and use cases this terahertz imaging technology will lead to.”
    Story Source:
    Materials provided by University of Texas at Dallas. Original written by Kim Horner. Note: Content may be edited for style and length.

  • Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.
    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.
    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.
    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.
    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.
    Probing probabilities
    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. The data points that are least likely to occur correspond to anomalies.
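    As a minimal sketch of this idea (not the lab's actual model, which the article describes only at a high level): fit a probability density to normal sensor readings, then flag the readings whose estimated density falls below a threshold.
    ```python
    # Minimal density-based anomaly flagging: fit a Gaussian to "normal" readings,
    # then mark readings with unusually low probability density as anomalies.
    # This is a conceptual sketch, not the MIT-IBM Watson AI Lab model.
    import numpy as np

    rng = np.random.default_rng(1)
    normal = rng.normal(loc=120.0, scale=1.5, size=5000)    # e.g. voltage readings
    stream = np.concatenate([rng.normal(120.0, 1.5, 100), [135.0, 98.0]])  # two spikes appended

    mu, sigma = normal.mean(), normal.std()

    def log_density(x):
        """Log-density of readings under the fitted Gaussian."""
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    threshold = np.quantile(log_density(normal), 0.001)   # bottom 0.1% of training density
    anomalies = stream[log_density(stream) < threshold]
    print("flagged readings:", anomalies)
    ```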

  • A security technique to fool would-be cyber attackers

    Multiple programs running on the same computer may not be able to directly access each other’s hidden information, but because they share the same memory hardware, their secrets could be stolen by a malicious program through a “memory timing side-channel attack.”
    This malicious program notices delays when it tries to access a computer’s memory, because the hardware is shared among all programs using the machine. It can then interpret those delays to obtain another program’s secrets, like a password or cryptographic key.
    One way to prevent these types of attacks is to allow only one program to use the memory controller at a time, but this dramatically slows down computation. Instead, a team of MIT researchers has devised a new approach that allows memory sharing to continue while providing strong security against this type of side-channel attack. Their method is able to speed up programs by 12 percent when compared to state-of-the-art security schemes.
    In addition to providing better security while enabling faster computation, the technique could be applied to a range of different side-channel attacks that target shared computing resources, the researchers say.
    “Nowadays, it is very common to share a computer with others, especially if you do computation in the cloud or even on your own mobile device. A lot of this resource sharing is happening. Through these shared resources, an attacker can seek out even very fine-grained information,” says senior author Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
    The co-lead authors are CSAIL graduate students Peter Deutsch and Yuheng Yang. Additional co-authors include Joel Emer, a professor of the practice in EECS, and CSAIL graduate students Thomas Bourgeat and Jules Drean. The research will be presented at the International Conference on Architectural Support for Programming Languages and Operating Systems.

  • Deep neural network to find hidden turbulent motion on the sun

    Scientists have developed a deep learning technique based on neural networks to extract hidden turbulent motion from observations of the Sun. Tests on three different sets of simulation data showed that it is possible to infer the horizontal motion from data for the temperature and vertical motion. This technique will benefit solar astronomy and other fields such as plasma physics, fusion science, and fluid dynamics.
    The Sun is important to the Sustainable Development Goal of Affordable and Clean Energy, both as the source of solar power and as a natural example of fusion energy. Our understanding of the Sun is limited by the data we can collect. It is relatively easy to observe the temperature and vertical motion of solar plasma, gas so hot that the component atoms break down into electrons and ions. But it is difficult to determine the horizontal motion.
    To tackle this problem, a team of scientists led by the National Astronomical Observatory of Japan and the National Institute for Fusion Science created a neural network model, and fed it data from three different simulations of plasma turbulence. After training, the neural network was able to correctly infer the horizontal motion given only the vertical motion and the temperature.
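    The article does not describe the network's architecture. As a purely hypothetical sketch of the general setup, a small fully convolutional network could map per-pixel temperature and vertical velocity (two input channels) to the two horizontal velocity components; the code below is an assumption-laden illustration in PyTorch, not the team's model.
    ```python
    # Hypothetical sketch: a small fully convolutional network mapping 2 input
    # channels (temperature, vertical velocity) on a solar-surface patch to
    # 2 output channels (horizontal velocity components vx, vy).
    # The actual architecture used by the NAOJ/NIFS team is not given in the article.
    import torch
    import torch.nn as nn

    class HorizontalFlowNet(nn.Module):
        def __init__(self, width: int = 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, width, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(width, 2, kernel_size=3, padding=1),   # outputs (vx, vy)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 2, H, W) -> (batch, 2, H, W)
            return self.net(x)

    model = HorizontalFlowNet()
    fake_input = torch.randn(4, 2, 128, 128)       # temperature + vertical velocity maps
    pred = model(fake_input)                       # predicted horizontal velocity maps
    loss = nn.functional.mse_loss(pred, torch.randn_like(pred))  # train against simulation truth
    print(pred.shape, loss.item())
    ```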
    The team also developed a novel coherence spectrum to evaluate the performance of the output at different size scales. This new analysis showed that the method succeeded at predicting the large-scale patterns in the horizontal turbulent motion, but had trouble with small features. The team is now working to improve the performance at small scales. It is hoped that this method can be applied to future high resolution solar observations, such as those expected from the SUNRISE-3 balloon telescope, as well as to laboratory plasmas, such as those created in fusion science research for new energy.
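    The summary does not define the team's coherence spectrum. One conventional way to quantify scale-dependent agreement between a predicted and a true 2-D field is a normalized cross-spectrum computed in annular wavenumber bins; the sketch below shows that generic construction and is only a guess at the spirit of the paper's metric, not its definition.
    ```python
    # Generic scale-dependent coherence between predicted and true 2-D fields:
    # coherence(k) = |sum(P * conj(T))| / sqrt(sum|P|^2 * sum|T|^2), evaluated
    # over annuli of wavenumber magnitude k. A common construction, used here
    # only to illustrate the idea of checking agreement scale by scale.
    import numpy as np

    def coherence_spectrum(pred: np.ndarray, true: np.ndarray, nbins: int = 20):
        P, T = np.fft.fft2(pred), np.fft.fft2(true)
        ky, kx = np.meshgrid(np.fft.fftfreq(pred.shape[0]),
                             np.fft.fftfreq(pred.shape[1]), indexing="ij")
        k = np.sqrt(kx**2 + ky**2)
        bins = np.linspace(0, k.max(), nbins + 1)
        coh = np.zeros(nbins)
        for i in range(nbins):
            mask = (k >= bins[i]) & (k < bins[i + 1])
            if mask.any():
                cross = np.abs(np.sum(P[mask] * np.conj(T[mask])))
                power = np.sqrt(np.sum(np.abs(P[mask])**2) * np.sum(np.abs(T[mask])**2))
                coh[i] = cross / power if power > 0 else 0.0
        return 0.5 * (bins[1:] + bins[:-1]), coh

    # Identical fields give coherence ~1 at every scale; noise drives it toward 0.
    field = np.random.default_rng(2).standard_normal((64, 64))
    print(coherence_spectrum(field, field)[1][:5])
    ```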
    Story Source:
    Materials provided by National Institutes of Natural Sciences. Note: Content may be edited for style and length.