More stories

  • Black hole or no black hole: On the outcome of neutron star collisions

    A new study led by GSI scientists and international colleagues investigates black-hole formation in neutron star mergers. Computer simulations show that the properties of dense nuclear matter play a crucial role, which directly links the astrophysical merger event to heavy-ion collision experiments at GSI and FAIR. These properties will be studied more precisely at the future FAIR facility. The results have now been published in Physical Review Letters. With the 2020 Nobel Prize in Physics awarded for the theoretical description of black holes and for the discovery of a supermassive object at the center of our galaxy, the topic is currently receiving a great deal of attention.
    But under which conditions does a black hole actually form? This is the central question of a study led by the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt within an international collaboration. Using computer simulations, the scientists focus on a particular process that forms black holes: the merging of two neutron stars.
    Neutron stars consist of highly compressed, dense matter. A mass of about one and a half solar masses is squeezed into a sphere just a few kilometers across, corresponding to densities similar to, or even higher than, those in the interior of atomic nuclei. If two neutron stars merge, the matter is compressed further during the collision, bringing the merger remnant to the brink of collapse into a black hole. Black holes are the most compact objects in the universe; not even light can escape them, so they cannot be observed directly.
    “The critical parameter is the total mass of the neutron stars. If it exceeds a certain threshold, the collapse to a black hole is inevitable,” summarizes Dr. Andreas Bauswein from the GSI theory department. However, the exact threshold mass depends on the properties of highly dense nuclear matter, which are still not completely understood in detail. This is why research labs like GSI collide atomic nuclei, recreating the conditions of a neutron star merger on a much smaller scale. In fact, heavy-ion collisions produce conditions very similar to those in neutron star mergers. Based on theoretical developments and heavy-ion experiments, it is possible to compute models of neutron star matter, so-called equations of state.
    Employing many of these equations of state, the new study calculated the threshold mass for black-hole formation. If neutron star matter, or nuclear matter, is easily compressible (a “soft” equation of state), even the merger of relatively light neutron stars leads to the formation of a black hole. If nuclear matter is “stiffer” and less compressible, the remnant is stabilized against gravitational collapse and a massive, rotating neutron star forms from the collision. The threshold mass for collapse therefore itself carries information about the properties of high-density matter. The new study furthermore revealed that the threshold for collapse may even clarify whether nucleons dissolve into their constituents, the quarks, during the collision.
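    The logic of this argument can be sketched in a few lines of code. The threshold masses below are made-up placeholders chosen only to illustrate the soft-versus-stiff comparison; the study derives the actual values from detailed relativistic merger simulations for many candidate equations of state.

```python
# Illustrative sketch only: the threshold masses are invented placeholders,
# not results from the GSI study, which computes them with relativistic
# merger simulations for many candidate equations of state.

# A softer equation of state is easier to compress, so the merger remnant
# collapses at a lower total mass; a stiffer one supports more mass.
THRESHOLD_MASS_MSUN = {"soft_eos": 2.8, "stiff_eos": 3.4}   # hypothetical values

def merger_outcome(m1_msun: float, m2_msun: float, eos: str) -> str:
    """Qualitative outcome of a binary neutron star merger for a given EOS."""
    total_mass = m1_msun + m2_msun
    if total_mass > THRESHOLD_MASS_MSUN[eos]:
        return "prompt collapse to a black hole"
    return "massive rotating neutron star remnant"

# Example: a 1.4 + 1.5 solar-mass binary collapses only if the EOS is soft.
for eos in THRESHOLD_MASS_MSUN:
    print(eos, "->", merger_outcome(1.4, 1.5, eos))
```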
    “We are very excited about these results because we expect that future observations can reveal the threshold mass,” adds Professor Nikolaos Stergioulas of the physics department of the Aristotle University of Thessaloniki in Greece. Just a few years ago a neutron star merger was observed for the first time by measuring gravitational waves from the collision. Telescopes also found the “electromagnetic counterpart” and detected light from the merger event. If a black hole is formed directly during the collision, the optical emission of the merger is rather dim. Thus, the observational data indicate whether a black hole was created. At the same time, the gravitational-wave signal carries information about the total mass of the system. The more massive the stars, the stronger the gravitational-wave signal, which thus allows the threshold mass to be determined.
    While gravitational-wave detectors and telescopes wait for the next neutron star mergers, the course is being set in Darmstadt for even more detailed knowledge. The new accelerator facility FAIR, currently under construction at GSI, will create conditions that are even more similar to those in neutron star mergers. Ultimately, only the combination of astronomical observations, computer simulations and heavy-ion experiments can settle the questions about the fundamental building blocks of matter and their properties, and thereby also clarify how the collapse to a black hole occurs.

    Story Source:
    Materials provided by GSI Helmholtzzentrum für Schwerionenforschung GmbH.

  • Computer model can predict how COVID-19 spreads in cities

    A team of researchers has created a computer model that accurately predicted the spread of COVID-19 in 10 major cities this spring by analyzing three factors that drive infection risk: where people go in the course of a day, how long they linger and how many other people are visiting the same place at the same time.
    “We built a computer model to analyze how people of different demographic backgrounds, and from different neighborhoods, visit different types of places that are more or less crowded. Based on all of this, we could predict the likelihood of new infections occurring at any given place or time,” said Jure Leskovec, the Stanford computer scientist who led the effort, which involved researchers from Northwestern University.
    The study, published today in the journal Nature, merges demographic data, epidemiological estimates and anonymous cellphone location information, and appears to confirm that most COVID-19 transmissions occur at “superspreader” sites, like full-service restaurants, fitness centers and cafes, where people remain in close quarters for extended periods. The researchers say their model’s specificity could serve as a tool for officials to help minimize the spread of COVID-19 as they reopen businesses by revealing the tradeoffs between new infections and lost sales if establishments open, say, at 20 percent or 50 percent of capacity.
    Study co-author David Grusky, a professor of sociology at Stanford’s School of Humanities and Sciences, said this predictive capability is particularly valuable because it provides useful new insights into the factors behind the disproportionate infection rates of minority and low-income people. “In the past, these disparities have been assumed to be driven by preexisting conditions and unequal access to health care, whereas our model suggests that mobility patterns also help drive these disproportionate risks,” he said.
    Grusky, who also directs the Stanford Center on Poverty and Inequality, said the model shows how reopening businesses with lower occupancy caps tends to benefit disadvantaged groups the most. “Because the places that employ minority and low-income people are often smaller and more crowded, occupancy caps on reopened stores can lower the risks they face,” Grusky said. “We have a responsibility to build reopening plans that eliminate — or at least reduce — the disparities that current practices are creating.”
    Leskovec said the model “offers the strongest evidence yet” that stay-at-home policies enacted this spring reduced the number of trips outside the home and slowed the rate of new infections.

    Following footsteps
    The study traced the movements of 98 million Americans in 10 of the nation’s largest metropolitan areas through half a million different establishments, from restaurants and fitness centers to pet stores and new car dealerships.
    The team included Stanford PhD students Serina Chang, Pang Wei Koh and Emma Pierson, who graduated this summer, and Northwestern University researchers Jaline Gerardin and Beth Redbird, who assembled study data for the 10 metropolitan areas. In order of population, these cities are New York, Los Angeles, Chicago, Dallas, Washington, D.C., Houston, Atlanta, Miami, Philadelphia and San Francisco.
    SafeGraph, a company that aggregates anonymized location data from mobile applications, provided the researchers data showing which of 553,000 public locations such as hardware stores and religious establishments people visited each day; for how long; and, crucially, what the square footage of each establishment was so that researchers could determine the hourly occupancy density.
    The researchers analyzed data from March 8 to May 9 in two distinct phases. In phase one, they fed their model mobility data and designed their system to calculate a crucial epidemiological variable: the transmission rate of the virus under a variety of different circumstances in the 10 metropolitan areas. In real life, it is impossible to know in advance when and where an infectious and a susceptible person come into contact to create a potential new infection. But in their model, the researchers developed and refined a series of equations to compute the probability of infectious events at different places and times. The equations were able to solve for the unknown variables because the researchers fed the computer one important known fact: how many COVID-19 infections were reported to health officials in each city each day.
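    The general idea behind such a model can be illustrated with a short sketch. The functional form and the parameter below are illustrative assumptions, not the equations published in the Nature study; they are only meant to show how visit duration, crowding and the share of infectious visitors combine into a venue-level risk.

```python
import math

# Toy sketch of a mobility-based infection model: the hourly risk at a venue
# grows with how densely it is occupied, what fraction of visitors are
# infectious, and how long visitors stay. The functional form and the beta
# parameter are assumptions for illustration, not the study's fitted model.

def hourly_infection_probability(
    infectious_visitors: int,   # infectious people in the venue that hour
    total_visitors: int,        # all people in the venue that hour
    dwell_time_hours: float,    # average time a visitor spends inside
    venue_area_m2: float,       # venue floor area (e.g., from location data)
    beta: float = 0.005,        # assumed per-venue transmission parameter
) -> float:
    """Probability that one susceptible visitor is infected during that hour."""
    if total_visitors == 0:
        return 0.0
    density = total_visitors / venue_area_m2
    infectious_share = infectious_visitors / total_visitors
    exposure = beta * density * infectious_share * dwell_time_hours
    return 1.0 - math.exp(-exposure)

# A small, crowded restaurant versus a large, briefly visited store:
print(hourly_infection_probability(3, 60, 1.5, venue_area_m2=150))
print(hourly_infection_probability(3, 60, 0.3, venue_area_m2=2000))
```

    In the actual study, quantities like the transmission rate were not assumed but fitted so that the model’s predicted infections matched the daily case counts reported in each city.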

    The researchers refined the model until it was able to determine the transmission rate of the virus in each city. The rate varied from city to city depending on factors ranging from how often people ventured out of the house to which types of locations they visited.
    Once the researchers obtained transmission rates for the 10 metropolitan areas, they tested the model during phase two by asking it to multiply the rate for each city against their database of mobility patterns to predict new COVID-19 infections. The predictions tracked closely with the actual reports from health officials, giving the researchers confidence in the model’s reliability.
    Predicting infections
    By combining their model with demographic data available from a database of 57,000 census block groups — 600 to 3,000-person neighborhoods — the researchers show how minority and low-income people leave home more often because their jobs require it, and shop at smaller, more crowded establishments than people with higher incomes, who can work from home, use home delivery to avoid shopping and patronize roomier businesses when they do go out. For instance, the study revealed that it’s roughly twice as risky for non-white populations to buy groceries compared to whites. “By merging mobility, demographic and epidemiological datasets, we were able to use our model to analyze the effectiveness and equity of different reopening policies,” Chang said.
    The team has made its tools and data publicly available so other researchers can replicate and build on the findings.
    “In principle, anyone can use this model to understand the consequences of different stay-at-home and business closure policy decisions,” said Leskovec, whose team is now working to develop the model into a user-friendly tool for policymakers and public health officials.
    Jure Leskovec is an associate professor of computer science at Stanford Engineering, and a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute. David Grusky is the Edward Ames Edmonds Professor in the School of Humanities and Sciences, and a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR).
    This research was supported by the National Science Foundation, the Stanford Data Science Initiative, the Wu Tsai Neurosciences Institute and the Chan Zuckerberg Biohub.

  • With Theta, 2020 sets the record for most named Atlantic storms

    It’s official: 2020 now has the most named storms ever recorded in the Atlantic in a single year.
    On November 9, a tropical disturbance brewing in the northeastern Atlantic Ocean gained enough strength to become a subtropical storm. With that, Theta became the year’s 29th named storm, topping the 28 that formed in 2005.
    With maximum sustained winds near 110 kilometers per hour as of November 10, Theta is expected to churn over the open ocean for several days. It’s too early to predict Theta’s ultimate strength and trajectory, but forecasters with the National Oceanic and Atmospheric Administration say they expect the storm to weaken later in the week.
    If so, like most of the storms this year, Theta likely won’t become a major hurricane. That track record might be the most surprising thing about this season — there’s been a record-breaking number of storms, but overall they’ve been relatively weak. Only five — Laura, Teddy, Delta, Epsilon and Eta — have become major hurricanes with winds topping 178 kilometers per hour, although only Laura and Eta made landfall near the peak of their strength as Category 4 storms.

    Even so, the 2020 hurricane season started fast, with the first nine storms arriving earlier than ever before (SN: 9/7/20). And the season has turned out to be the most active since naming began in 1953, thanks to warmer-than-usual water in the Atlantic and the arrival of La Niña, a regularly occurring period of cooling in the Pacific that affects winds in the Atlantic and helps hurricanes form (SN: 9/21/19). If a swirling storm reaches wind speeds of 63 kilometers per hour, it gets a name from a list of 21 predetermined names. When that list runs out, the storm gets a Greek letter.
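    That naming rule is simple enough to express directly. The sketch below only illustrates the rule as described here; the placeholder names stand in for the official 21-name list.

```python
from typing import Optional

# Sketch of the Atlantic naming rule described above: a storm gets the next
# name on the season's 21-name list once its sustained winds reach 63 km/h,
# and Greek letters are used after the list runs out. The placeholder names
# stand in for the official list.
NAMED_STORM_THRESHOLD_KMH = 63
SEASON_NAMES = [f"Name{i:02d}" for i in range(1, 22)]        # 21 predetermined names
GREEK_LETTERS = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon",
                 "Zeta", "Eta", "Theta", "Iota", "Kappa"]

def assign_name(storms_already_named: int, max_wind_kmh: float) -> Optional[str]:
    """Name for the next storm, given how many have been named this season."""
    if max_wind_kmh < NAMED_STORM_THRESHOLD_KMH:
        return None                                          # stays unnamed
    if storms_already_named < len(SEASON_NAMES):
        return SEASON_NAMES[storms_already_named]
    return GREEK_LETTERS[storms_already_named - len(SEASON_NAMES)]

# With 28 storms already named, the next qualifying storm is the 29th:
print(assign_name(28, 110))   # -> "Theta", the eighth Greek letter
```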
    While the wind patterns and warm Atlantic water temperatures set the stage for the string of storms, it’s unclear if climate change is playing a role in the number of storms. As the climate warms, though, you would expect to see more of the destructive, high-category storms, says Kerry Emanuel, an atmospheric scientist at MIT. “And this year is not a poster child for that.” So far, no storm in 2020 has been stronger than a Category 4. The 2005 season had multiple Category 5 storms, including Hurricane Katrina (SN: 12/20/05).
    There’s a lot of energy in the ocean and atmosphere this year, including the unusually warm water, says Emanuel. “The fuel supply could make a much stronger storm than we’ve seen,” he says, “so the question is: What prevents a lot of storms from living up to their potential?”
    On September 14, five named storms (from left to right: Sally, Paulette, Rene, Teddy and Vicky) swirled in the Atlantic simultaneously. The last time the Atlantic held five at once was 1971. (Image: NOAA)
    A major factor is wind shear, a change in the speed or direction of wind at different altitudes. Wind shear “doesn’t seem to have stopped a lot of storms from forming this year,” Emanuel says, “but it inhibits them from getting too intense.” Hurricanes can also create their own wind shear, so when multiple hurricanes form in close proximity, they can weaken each other, Emanuel says. And at times this year, several storms did occupy the Atlantic simultaneously — on September 14, five storms swirled at once.
    It’s not clear if seeing hurricane season run into the Greek alphabet is a “new normal,” says Emanuel. The historical record, especially before the 1950s, is spotty, he says, so it’s hard to put this year’s record-setting season into context. It’s possible that there were just as many storms before naming began in the ’50s, but that only the big, destructive ones were recorded or noticed. Now, of course, forecasters have the technology to detect all of them, “so I wouldn’t get too bent out of shape about this season,” Emanuel says.
    Some experts are hesitant to even use the term “new normal.”
    “People talk about the ‘new normal,’ and I don’t think that is a good phrase,” says James Done, an atmospheric scientist at the National Center for Atmospheric Research in Boulder, Colo. “It implies some new stable state. We’re certainly not in a stable state — things are always changing.”

  • Skills development in Physical AI could give birth to lifelike intelligent robots

    The research suggests that teaching materials science, mechanical engineering, computer science, biology and chemistry as a combined discipline could help students develop the skills they need, as researchers, to create lifelike artificially intelligent (AI) robots.
    Known as Physical AI, these robots would be designed to look and behave like humans or other animals while possessing intellectual capabilities normally associated with biological organisms. These robots could in future help humans at work and in daily living, performing tasks that are dangerous for humans, and assisting in medicine, caregiving, security, building and industry.
    Although machines and biological beings exist separately, the intelligence capabilities of the two have not yet been combined. There have so far been no autonomous robots that interact with the surrounding environment and with humans in a similar way to how current computer and smartphone-based AI does.
    Co-lead author Professor Mirko Kovac of Imperial’s Department of Aeronautics and the Swiss Federal Laboratories for Materials Science and Technology (Empa)’s Materials and Technology Centre of Robotics said: “The development of robot ‘bodies’ has significantly lagged behind the development of robot ‘brains’. Unlike digital AI, which has been intensively explored in the last few decades, breathing physical intelligence into robot bodies has remained comparatively unexplored.”
    The researchers say that the reason for this gap might be that no systematic educational approach has yet been developed for teaching students and researchers to create robot bodies and brains combined as whole units.
    This new research, which is published today in Nature Machine Intelligence, defines the term Physical AI. It also suggests an approach for overcoming the skills gap by integrating scientific disciplines to help future researchers create lifelike robots with capabilities associated with intelligent organisms, such as developing bodily control, autonomy and sensing at the same time.

    The authors identified five main disciplines that are essential for creating Physical AI: materials science, mechanical engineering, computer science, biology and chemistry.
    Professor Kovac said: “The notion of AI is often confined to computers, smartphones and data intensive computation. We are proposing to think of AI in a broader sense and co-develop physical morphologies, learning systems, embedded sensors, fluid logic and integrated actuation. This Physical AI is the new frontier in robotics research and will have major impact in the decades to come, and co-evolving students’ skills in an integrative and multidisciplinary way could unlock some key ideas for students and researchers alike.”
    The researchers say that achieving nature-like functionality in robots requires combining conventional robotics and AI with other disciplines to create Physical AI as its own discipline.
    Professor Kovac said: “We envision Physical AI robots being evolved and grown in the lab by using a variety of unconventional materials and research methods. Researchers will need a much broader stock of skills for building lifelike robots. Cross-disciplinary collaborations and partnerships will be very important.”
    One example of such a partnership is the Imperial-Empa joint Materials and Technology Centre of Robotics that links up Empa’s material science expertise with Imperial’s Aerial Robotics Laboratory.

    The authors also propose intensifying research activities in Physical AI by supporting teachers on both the institutional and community level. They suggest hiring and supporting faculty members whose priority will be multidisciplinary Physical AI research.
    Co-lead author Dr Aslan Miriyev of Empa and the Department of Aeronautics at Imperial said: “Such backing is especially needed, as working in the multidisciplinary playground requires daring to leave the comfort zones of narrow disciplinary knowledge for the sake of high-risk research and career uncertainty.
    “Creating lifelike robots has thus far been an impossible task, but it could be made possible by including Physical AI in the higher education system. Developing skills and research in Physical AI could bring us closer than ever to redefining human-robot and robot-environment interaction.”
    The researchers hope that their work will encourage active discussion of the topic and will lead to integration of Physical AI disciplines in the educational mainstream.
    The researchers intend to implement the Physical AI methodology in their research and education activities to pave the way to human-robot ecosystems.

    Story Source:
    Materials provided by Imperial College London. Original written by Caroline Brogan.

  • Five mistakes people make when sharing COVID-19 data visualizations on Twitter

    The frantic swirl of coronavirus-related information sharing that took place this year on social media is the subject of a new analysis led by researchers at the School of Informatics and Computing at IUPUI.
    Published in the open-access journal Informatics, the study focuses on the sharing of data visualizations on Twitter — by health experts and average citizens alike — during the initial struggle to grasp the scope of the COVID-19 pandemic, and its effects on society. Many social media users continue to encounter similar charts and graphs every day, especially as a new wave of coronavirus cases has begun to surge across the globe.
    The work found that more than half of the analyzed visualizations from average users contained one of five common errors that reduced their clarity, accuracy or trustworthiness.
    “Experts have not yet begun to explore the world of casual visualizations on Twitter,” said Francesco Cafaro, an assistant professor in the School of Informatics and Computing, who led the study. “Studying the new ways people are sharing information online to understand the pandemic and its effect on their lives is an important step in navigating these uncharted waters.”
    Casual data visualizations refer to charts and graphs that rely upon tools available to average users in order to visually depict information in a personally meaningful way. These visualizations differ from traditional data visualization because they aren’t generated or distributed by the traditional “gatekeepers” of health information, such as the Centers for Disease Control and Prevention or the World Health Organization, or by the media.
    “The reality is that people depend upon these visualizations to make major decisions about their lives: whether or not it’s safe to send their kids back to school, whether or not it’s safe to take a vacation, and where to go,” Cafaro said. “Given their influence, we felt it was important to understand more about them, and to identify common issues that can cause people creating or viewing them to misinterpret data, often unintentionally.”
    For the study, IU researchers crawled Twitter to identify 5,409 data visualizations shared on the social network between April 14 and May 9, 2020. Of these, 540 were randomly selected for analysis — with full statistical analysis reserved for 435 visualizations based upon additional criteria. Of these, 112 were made by average citizens.
    Broadly, Cafaro said the study identified five pitfalls common to the data visualizations analyzed. In addition to identifying these problems, the study’s authors suggest steps to overcome or reduce their negative impact:
    Mistrust: Over 25 percent of the posts analyzed failed to clearly identify the source of their data, sowing distrust in their accuracy. This information was often obscured due to poor design — such as bad color choices, busy layout, or typos — not intentional obfuscation. To overcome these issues, the study’s authors suggest clearly labeling data sources as well as placing this information on the graphic itself rather than in the accompanying text, as images are often unpaired from their original post during social sharing.
    Proportional reasoning: Eleven percent of posts exhibited issues related to proportional reasoning, which refers to the users’ ability to compare variables based on ratios or fractions. Understanding infection rates across different geographic locations is a challenge of proportional reasoning, for example, since similar numbers of infections can indicate different levels of severity in low- versus high-population settings. To overcome this challenge, the study’s authors suggest using labels such as “number of infections per 1,000 people” to compare regions with disparate populations, as this metric is easier to understand than absolute numbers or percentages (see the sketch after this list).
    Temporal reasoning: The researchers identified issues related to temporal reasoning, which refers to users’ ability to understand change over time, in 7 percent of the posts. These included visualizations that compared the number of deaths from flu in a full year to the number of deaths from COVID-19 in a few months, or visualizations that failed to account for the delay between the date of infection and the date of death. Recommendations to address these issues included breaking metrics that depend upon different time scales into separate charts, rather than conveying the data in a single chart.
    Cognitive bias: A small percentage of posts (0.5 percent) contained text that seemed to encourage users to misinterpret data based upon the creator’s “biases related to race, country and immigration.” The researchers state that information should be presented with clear, objective descriptions carefully separated from any accompanying political commentary.
    Misunderstanding of the virus: Two percent of visualizations were based upon misunderstandings about the novel coronavirus, such as the use of data related to SARS or influenza.
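    The per-capita normalization recommended under proportional reasoning is easy to demonstrate. The case counts and populations below are made-up numbers used only to show why a per-1,000 label tells a different story than raw totals.

```python
# Minimal sketch of the per-capita comparison recommended above.
# The case counts and populations are invented, illustrative numbers.

regions = {
    # region: (confirmed_infections, population)
    "Small Town": (900, 30_000),
    "Big City": (9_000, 3_000_000),
}

for name, (infections, population) in regions.items():
    per_1000 = 1000 * infections / population
    print(f"{name}: {infections} cases, {per_1000:.1f} per 1,000 residents")

# Big City reports ten times more cases in absolute terms, but Small Town's
# rate (30.0 per 1,000) is ten times higher than Big City's (3.0 per 1,000),
# which is exactly the comparison the per-capita label makes visible.
```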
    The study also found certain types of data visualizations performed strongest on social media. Data visualizations that showed change over time, such as line or bar graphs, were most commonly shared. They also found that users engaged more frequently with charts conveying numbers of deaths as opposed to numbers of infections or impact on the economy, suggesting that people were more interested in the virus’s lethality than its other negative health or societal effects.
    “The challenge of accurately conveying information visually is not limited to information-sharing on Twitter, but we feel these communications should be considered especially carefully given the influence of social media on people’s decision-making,” Cafaro said. “We believe our findings can help government agencies, news media and average people better understand the types of information about which people care the most, as well as the challenges people may face while interpreting visual information related to the pandemic.”
    Additional leading authors on the study are Milka Trajkova, A’aeshah Alhakamy, Sanika Vedak, Rashmi Mallappa and Sreekanth R. Kankara, research assistants in the School of Informatics and Computing at IUPUI at the time of the study. Alhakamy is currently a lecturer at the University of Tabuk in Saudi Arabia.

    Story Source:
    Materials provided by Indiana University.

  • Scientists develop AI-powered 'electronic nose' to sniff out meat freshness

    A team of scientists led by Nanyang Technological University, Singapore (NTU Singapore) has invented an artificial olfactory system that mimics the mammalian nose to assess the freshness of meat accurately.
    The ‘electronic nose’ (e-nose) comprises a ‘barcode’ that changes colour over time in reaction to the gases produced by meat as it decays, and a barcode ‘reader’ in the form of a smartphone app powered by artificial intelligence (AI). The e-nose has been trained to recognise and predict meat freshness from a large library of barcode colours.
    When tested on commercially packaged chicken, fish and beef samples that were left to age, the team found that the deep convolutional neural network AI algorithm powering the e-nose predicted the freshness of the meats with 98.5 per cent accuracy. As a comparison, the research team assessed the prediction accuracy of a commonly used algorithm for measuring the response of sensors like the barcode used in this e-nose; that type of analysis showed an overall accuracy of 61.7 per cent.
    The e-nose, described in a paper published in the scientific journal Advanced Materials in October, could help to reduce food wastage by confirming to consumers whether meat is fit for consumption, more accurately than a ‘Best Before’ label could, said the research team from NTU Singapore, who collaborated with scientists from Jiangnan University, China, and Monash University, Australia.
    Co-lead author Professor Chen Xiaodong, the Director of Innovative Centre for Flexible Devices at NTU, said: “Our proof-of-concept artificial olfactory system, which we tested in real-life scenarios, can be easily integrated into packaging materials and yields results in a short time without the bulky wiring used for electrical signal collection in some e-noses that were developed recently.
    “These barcodes help consumers to save money by ensuring that they do not discard products that are still fit for consumption, which also helps the environment. The biodegradable and non-toxic nature of the barcodes also means they could be safely applied in all parts of the food supply chain to ensure food freshness.”
    A patent has been filed for this method of real-time monitoring of food freshness, and the team is now working with a Singapore agribusiness company to extend this concept to other types of perishables.

    A nose for freshness
    The e-nose developed by NTU scientists and their collaborators comprises two elements: a coloured ‘barcode’ that reacts with gases produced by decaying meat; and a barcode ‘reader’ that uses AI to interpret the combination of colours on the barcode. To make the e-nose portable, the scientists integrated it into a smartphone app that can yield results in 30 seconds.
    The e-nose mimics how a mammalian nose works. When gases produced by decaying meat bind to receptors in the mammalian nose, signals are generated and transmitted to the brain. The brain then collects these responses and organises them into patterns, allowing the mammal to identify the odour present as meat ages and rots.
    In the e-nose, the 20 bars in the barcode act as the receptors. Each bar is made of chitosan (a natural sugar) embedded on a cellulose derivative and loaded with a different type of dye. These dyes react with the gases emitted by decaying meat and change colour in response to the different types and concentrations of gases, resulting in a unique combination of colours that serves as a ‘scent fingerprint’ for the state of any meat.
    For instance, the first bar in the barcode contains a yellow dye that is weakly acidic. When exposed to nitrogen-containing compounds produced by decaying meat (called bioamines), this yellow dye changes to blue as it reacts with the compounds. The colour intensity changes with the increasing concentration of bioamines as the meat decays further.
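    The overall reading step can be pictured with a short sketch. The reference fingerprints and the simple nearest-match rule below are stand-ins chosen for illustration; the actual system classifies photographs of the barcode with a deep convolutional neural network trained on a large library of barcode colours.

```python
import math

# Illustrative sketch of the barcode-reading idea: each of the 20 dye bars
# reports a colour-change intensity, and the combined pattern (the "scent
# fingerprint") is matched to a freshness class. The reference fingerprints
# and nearest-match rule are placeholders for the trained neural network.

N_BARS = 20

# Hypothetical mean colour-change intensity per bar (0 = no change, 1 = fully
# changed) for each freshness class used in the study.
REFERENCES = {
    "fresh":      [0.05] * N_BARS,
    "less fresh": [0.45] * N_BARS,
    "spoiled":    [0.90] * N_BARS,
}

def classify(barcode_reading):
    """Assign the freshness class whose reference fingerprint is closest."""
    def distance(ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(barcode_reading, ref)))
    return min(REFERENCES, key=lambda cls: distance(REFERENCES[cls]))

# Example: a package whose bars have mostly shifted colour reads as spoiled.
print(classify([0.8] * N_BARS))   # -> "spoiled"
```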
    For this study, the scientists first developed a classification system (fresh, less fresh, or spoiled) using an international standard for determining meat freshness. They did this by extracting and measuring the amounts of ammonia and two other bioamines in fish packages wrapped in widely used transparent PVC (polyvinyl chloride) packaging film and stored at 4°C (39° Fahrenheit), sampling at different intervals over five days.
    They concurrently monitored the freshness of these fish packages with barcodes glued on the inner side of the PVC film without touching the fish. Images of these barcodes were taken at different intervals over five days.

  • 'Electronic skin' promises cheap and recyclable alternative to wearable devices

    Researchers at the University of Colorado Boulder are developing a wearable electronic device that’s “really wearable” — a stretchy and fully recyclable circuit board that’s inspired by, and sticks onto, human skin.
    The team, led by Jianliang Xiao and Wei Zhang, describes its new “electronic skin” in a paper published today in the journal Science Advances. The device can heal itself, much like real skin. It also reliably performs a range of sensory tasks, from measuring the body temperature of users to tracking their daily step counts.
    And it’s reconfigurable, meaning that the device can be shaped to fit anywhere on your body.
    “If you want to wear this like a watch, you can put it around your wrist,” said Xiao, an associate professor in the Paul M. Rady Department of Mechanical Engineering at CU Boulder. “If you want to wear this like a necklace, you can put it on your neck.”
    He and his colleagues are hoping that their creation will help to reimagine what wearable devices are capable of. The group said that, one day, such high-tech skin could allow people to collect accurate data about their bodies — all while cutting down on the world’s surging quantities of electronic waste.
    “Smart watches are functionally nice, but they’re always a big chunk of metal on a band,” said Zhang, a professor in the Department of Chemistry. “If we want a truly wearable device, ideally it will be a thin film that can comfortably fit onto your body.”
    Stretching out

    Those thin, comfortable films have long been a staple of science fiction. Picture skin peeling off the face of Arnold Schwarzenegger in the Terminator film franchise. “Our research is kind of going in that direction, but we still have a long way to go,” Zhang said.
    His team’s goals, however, are both robot and human. The researchers previously described their design for electronic skin in 2018. But their latest version of the technology makes a lot of improvements on the concept — for a start, it’s far more elastic, not to mention functional.
    To manufacture their bouncy product, Xiao and his colleagues use screen printing to create a network of liquid metal wires. They then sandwich those circuits in between two thin films made out of a highly flexible and self-healing material called polyimine.
    The resulting device is a little thicker than a Band-Aid and can be applied to skin with heat. It can also stretch by 60% in any direction without disrupting the electronics inside, the team reports.
    “It’s really stretchy, which enables a lot of possibilities that weren’t an option before,” Xiao said.

    The team’s electronic skin can do a lot of the same things that popular wearable fitness devices like Fitbits do: reliably measure body temperature, heart rate, movement patterns and more.
    Less waste
    Arnold may want to take note: The team’s artificial epidermis is also remarkably resilient.
    If you slice a patch of electronic skin, Zhang said, all you have to do is pinch the broken areas together. Within a few minutes, the bonds that hold together the polyimine material will begin to reform. Within 13 minutes, the damage will be almost entirely undetectable.
    “Those bonds help to form a network across the cut. They then begin to grow together,” Zhang said. “It’s similar to skin healing, but we’re talking about covalent chemical bonds here.”
    Xiao added that the project also represents a new approach to manufacturing electronics — one that could be much better for the planet. By 2021, estimates suggest that humans will have produced over 55 million tons of discarded smartphones, laptops and other electronics.
    His team’s stretchy devices, however, are designed to skip the landfills. If you dunk one of these patches into a recycling solution, the polyimine will depolymerize, or separate into its component molecules, while the electronic components sink to the bottom. Both the electronics and the stretchy material can then be reused.
    “Our solution to electronic waste is to start with how we make the device, not from the end point, or when it’s already been thrown away,” Xiao said. “We want a device that is easy to recycle.”
    The team’s electronic skin is a long way away from being able to compete with the real thing. For now, these devices still need to be hooked up to an external source of power to work. But, Xiao said, his group’s research hints that cyborg skin could soon be the fashion fad of the future.
    “We haven’t realized all of these complex functions yet,” he said. “But we are marching toward that device function.”

  • Getting single-crystal diamond ready for electronics

    Silicon has been the workhorse of electronics for decades because it is a common element, is easy to process, and has useful electronic properties. A limitation of silicon is that high temperatures damage it, which limits the operating speed of silicon-based electronics. Single-crystal diamond is a possible alternative to silicon. Researchers recently fabricated a single-crystal diamond wafer, but common methods of polishing the surface — a requirement for use in electronics — are either slow or damaging.
    In a study recently published in Scientific Reports, researchers from Osaka University and collaborating partners polished a single-crystal diamond wafer to be nearly atomically smooth. This procedure will be useful for helping diamond replace at least some of the silicon components of electronic devices.
    Diamond is the hardest known substance and essentially does not react with chemicals. Polishing it with a similarly hard tool damages the surface, and conventional polishing chemistry is slow. In this study, the researchers in essence first modified the diamond surface with a plasma and then polished it with quartz glass tools.
    “Plasma-assisted polishing is an ideal technique for single-crystal diamond,” explains lead author Nian Liu. “The plasma activates the carbon atoms on the diamond surface without destroying the crystal structure, which lets a quartz glass plate gently smooth away surface irregularities.”
    The single-crystal diamond, before polishing, had many step-like features and was wavy overall, with an average root mean square roughness of 0.66 micrometers. After polishing, the topographical defects were gone, and the surface roughness was far less: 0.4 nanometers.
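    The roughness figures quoted here refer to root mean square (RMS) roughness, which can be computed directly from a measured height profile. The profile below is made up for illustration; only the formula, not the data, reflects the study.

```python
import math

# Root mean square (RMS) roughness: the standard deviation of surface heights
# about their mean. The height profile below is invented for illustration;
# the study's values (0.66 micrometres before polishing, 0.4 nanometres after)
# come from actual measurements of the diamond wafer.

def rms_roughness(heights):
    """RMS roughness of a height profile, in the same units as the input."""
    mean = sum(heights) / len(heights)
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))

wavy_unpolished_um = [0.0, 0.9, -0.7, 1.1, -0.8, 0.6, -1.0, 0.5]  # micrometres
print(f"Rq = {rms_roughness(wavy_unpolished_um):.2f} micrometres")
```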
    “Polishing decreased the surface roughness to near-atomic smoothness,” says senior author Kazuya Yamamura. “There were no scratches on the surface of the kind seen with scaife mechanical smoothing approaches.”
    Furthermore, the researchers confirmed that the polished surface was unaltered chemically. For example, they detected no graphite — therefore, no damaged carbon. The only detected impurity was a very small amount of nitrogen from the original wafer preparation.
    “Using Raman spectroscopy, the full widths at half maximum of the diamond lines in the wafer were the same, and the peak positions were almost identical,” says Liu. “Other polishing techniques show clear deviations from pure diamond.”
    With this research development, high-performance power devices and heat sinks based on single-crystal diamond are now attainable. Such technologies will dramatically lower the power use and carbon input, and improve the performance, of future electronic devices.

    Story Source:
    Materials provided by Osaka University.