More stories

  • Graphene-hBN breakthrough to spur new LEDs, quantum computing

    In a discovery that could speed research into next-generation electronics and LED devices, a University of Michigan research team has developed the first reliable, scalable method for growing single layers of hexagonal boron nitride on graphene.
    The process, which can produce large sheets of high-quality hBN with the widely used molecular-beam epitaxy process, is detailed in a study in Advanced Materials.
    Graphene-hBN structures can power LEDs that generate deep-UV light, something today’s LEDs cannot do, said Zetian Mi, U-M professor of electrical engineering and computer science and a corresponding author of the study. Deep-UV LEDs could make a variety of devices, including lasers and air purifiers, smaller and more efficient.
    “The technology used to generate deep-UV light today is mercury-xenon lamps, which are hot, bulky, inefficient and contain toxic materials,” Mi said. “If we can generate that light with LEDs, we could see an efficiency revolution in UV devices similar to what we saw when LED light bulbs replaced incandescents.”
    Hexagonal boron nitride is the world’s thinnest insulator while graphene is the thinnest of a class of materials called semimetals, which have highly malleable electrical properties and are important for their role in computers and other electronics.
    Bonding hBN and graphene together in smooth, single-atom-thick layers unleashes a treasure trove of exotic properties. In addition to deep-UV LEDs, graphene-hBN structures could enable quantum computing devices, smaller and more efficient electronics and optoelectronics, and a variety of other applications.

  • Researchers create miniature wide-angle camera with flat metalenses

    Researchers have designed a new compact camera that acquires high-quality wide-angle images using an array of metalenses — flat nanopatterned surfaces used to manipulate light. By eliminating the bulky and heavy lenses typically required for this type of imaging, the new approach could enable wide-angle cameras to be incorporated into smartphones and portable imaging devices for vehicles such as cars or drones.
    Tao Li and colleagues from Nanjing University in China report their new ultrathin camera in Optica, Optica Publishing Group’s journal for high-impact research. The new camera, which is just 0.3 centimeters thick, can produce clear images of a scene with a viewing angle of more than 120 degrees.
    Wide-angle imaging is useful for capturing large amounts of information that can create stunning, high-quality images. For machine vision applications such as autonomous driving and drone-based surveillance, wide-angle imaging can enhance performance and safety, for example by revealing an obstacle you couldn’t otherwise see while backing up in a vehicle.
    “To create an extremely compact wide-angle camera, we used an array of metalenses that each capture certain parts of the wide-angle scene,” said Li. “The images are then stitched together to create a wide-angle image without any degradation in image quality.”
    Miniaturizing the wide-angle lens
    Wide-angle imaging is usually accomplished with a fish-eye compound lens or another type of multilayer lens. Although researchers have previously tried to use metalenses to create wide-angle cameras, those cameras tend to suffer from poor image quality or other drawbacks.
    In the new work, the researchers used an array of metalenses that are each carefully designed to focus a different range of illumination angles. This allows each lens to clearly image part of a wide-angle object or scene. The clearest parts of each image can then be computationally stitched together to create the final image.
    “Thanks to the flexible design of the metasurfaces, the focusing and imaging performance of each lens can be optimized independently,” said Li. “This gives rise to a high quality final wide-angle image after a stitching process. What’s more, the array can be manufactured using just one layer of material, which helps keep cost down.”
    Seeing more with flat lenses
    To demonstrate the new approach, the researchers used nanofabrication to create a metalens array and mounted it directly to a CMOS sensor, creating a planar camera that measured about 1 cm × 1 cm × 0.3 cm. They then used this camera to image a wide-angle scene created by using two projectors to illuminate a curved screen surrounding the camera at a distance of 15 cm.
    They compared their new planar camera with one based on a single traditional metalens while imaging the words “Nanjing University” projected across the curved screen. The planar camera produced an image that showed every letter clearly and had a viewing angle larger than 120°, more than three times larger than that of the camera based on a traditional metalens.
    The researchers note that the planar camera demonstrated in this research used individual metalenses just 0.3 millimeters in diameter. They plan to enlarge these to about 1 to 5 millimeters to increase the camera’s imaging quality. After optimization, the array could be mass produced to reduce the cost of each device.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length.

  • Coastal cities around the globe are sinking

    Coastal cities around the globe are sinking by up to several centimeters per year, on average, satellite observations reveal. The one-two punch of subsiding land and rising seas means that these coastal regions are at greater risk for flooding than previously thought, researchers report in the April 16 Geophysical Research Letters.

    Matt Wei, an earth scientist at the University of Rhode Island in Narragansett, and colleagues studied 99 coastal cities on six continents. “We tried to balance population and geographic location,” he says. While subsidence has been measured in cities previously, earlier research has tended to focus on just one city or region. This investigation is different, Wei says. “It’s one of the first to really use data with global coverage.”

    Wei and his team relied on observations made from 2015 to 2020 by a pair of European satellites. Instruments onboard beam microwave signals toward Earth and then record the waves that bounce back. By measuring the timing and intensity of those reflected waves, the team determined the height of the ground with millimeter accuracy. And because each satellite flies over the same part of the planet every 12 days, the researchers were able to trace how the ground deformed over time.
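    The rate estimate from repeated satellite passes boils down to fitting a line through a displacement time series. A minimal sketch, assuming an idealized noise-free series sampled at the 12-day revisit interval (real InSAR analysis must also correct for atmospheric delay and orbital errors):

```python
import numpy as np

def subsidence_rate_cm_per_yr(days, height_mm):
    """Fit a straight line to ground-height measurements (mm) taken at
    satellite revisits; return the rate in cm/year (negative = sinking)."""
    slope_mm_per_day = np.polyfit(days, height_mm, 1)[0]
    return slope_mm_per_day * 365.25 / 10.0

# Revisit every 12 days over ~5 years, for a city sinking 2 cm/year:
days = np.arange(0, 5 * 365, 12)
heights = (-20.0 / 365.25) * days   # mm of subsidence over time
print(round(subsidence_rate_cm_per_yr(days, heights), 2))  # → -2.0
```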

    The largest subsidence rates — up to five centimeters per year — are mostly in Asian cities like Tianjin, China; Karachi, Pakistan; and Manila, Philippines, the team found. What’s more, one-third, or 33, of the analyzed cities are sinking in some places by more than a centimeter per year.

    That’s a worrying trend, says Darío Solano-Rojas, an earth scientist at the National Autonomous University of Mexico in Mexico City who was not involved in the research. These cities are being hit with a double whammy: At the same time that sea levels are rising due to climate change, the land is sinking (SN: 8/15/18). “Understanding that part of the problem is a big deal,” Solano-Rojas says.

    Wei and his colleagues think that the subsidence is largely caused by people. When the researchers looked at Google Earth imagery of the regions within cities that were rapidly sinking, the team saw mostly residential or commercial areas. That’s a tip-off that the culprit is groundwater extraction, the team concluded. Landscapes tend to settle as water is pumped out of aquifers (SN: 10/22/12).

    But there’s reason to be hopeful. In the past, cities such as Shanghai and Indonesia’s Jakarta were sinking by more than 10 centimeters per year, on average. But now subsidence in those places has slowed, possibly due to recent governmental regulations limiting groundwater extraction.

  • Tear-free hair brushing? All you need is math

    As anyone who has ever had to brush long hair knows, knots are a nightmare. But with enough experience, most learn the tricks of detangling with the least amount of pain — start at the bottom, work your way up to the scalp with short, gentle brushes, and apply detangler when necessary.
    L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics, learned the mechanics of combing years ago while brushing his young daughter’s hair.
    “I recall that detangling spray seemed to work sometimes, but I still had to be careful to comb gently, by starting from the free ends,” said Mahadevan. “But I was soon fired from the job as I was not very patient.”
    While Mahadevan lost his role as hairdresser, he was still a scientist, and the topology, geometry and mechanics of detangling posed interesting mathematical questions relevant to a range of applications, including textile manufacturing and chemical processes such as polymer processing.
    In a new paper, published in the journal Soft Matter, Mahadevan and co-authors Thomas Plumb-Reyes and Nicholas Charles explore the mathematics of combing and explain why the brushing technique used by so many is the most effective method to detangle a bundle of fibers.
    To simplify the problem, the researchers simulated two helically entwined filaments, rather than a whole head of hair.
    “Using this minimal model, we study the detangling of the double helix via a single stiff tine that moves along it, leaving two untangled filaments in its wake,” said Plumb-Reyes, a graduate student at SEAS. “We measured the forces and deformations associated with combing and then simulated it numerically.”
    “Short strokes that start at the free end and move towards the clamped end remove tangles by creating a flow of a mathematical quantity called the ‘link density’ that characterizes the degree to which hair strands are braided with each other, consistent with simulations of the process,” said Nicholas Charles, a graduate student at SEAS.
    The researchers also identified the optimal minimum length for each stroke — any shorter and it would take forever to comb out all the tangles, and any longer and it would be too painful.
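    The "link density" above is a local version of the topological linking number, which for two closed curves is given by the Gauss double integral. A minimal numeric sketch of that integral, evaluated on two once-linked rings rather than hair strands:

```python
import numpy as np

def linking_number(curve1, curve2):
    """Numerically evaluate the Gauss linking integral
    Lk = (1/4π) ∮∮ (r1 - r2) · (dr1 × dr2) / |r1 - r2|³
    for two closed curves, each discretized as an (N, 3) array of points."""
    d1 = np.roll(curve1, -1, axis=0) - curve1   # segment vectors dr1
    d2 = np.roll(curve2, -1, axis=0) - curve2   # segment vectors dr2
    total = 0.0
    for p1, t1 in zip(curve1, d1):
        r = p1 - curve2                          # r1 - r2 for all pairs
        cross = np.cross(t1, d2)                 # dr1 × dr2
        total += np.sum(np.einsum('ij,ij->i', r, cross)
                        / np.linalg.norm(r, axis=1) ** 3)
    return total / (4 * np.pi)

# Two rings linked once, like adjacent links of a chain:
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
ring1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
ring2 = np.stack([1 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print(round(abs(linking_number(ring1, ring2))))  # → 1
```

    Combing "creates a flow" of this quantity in the sense that each stroke pushes linked segments toward the free end, where the link can escape.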
    The mathematical principles of brushing developed by Plumb-Reyes, Charles and Mahadevan were recently used by Professor Daniela Rus and her team at MIT to design algorithms for brushing hair by a robot.
    Next, the team aims to study the mechanics of brushing curlier hair and how it responds to humidity and temperature, which may lead to a mathematical understanding of a fact every person with curly hair knows: never brush dry hair.
    This research was supported by funds from the US National Science Foundation and the Henri Seydoux Fund.

  • Joystick-operated robot could help surgeons treat stroke remotely

    MIT engineers have developed a telerobotic system to help surgeons quickly and remotely treat patients experiencing a stroke or aneurysm. With a modified joystick, surgeons in one hospital may control a robotic arm at another location to safely operate on a patient during a critical window of time that could save the patient’s life and preserve their brain function.
    The robotic system, whose movement is controlled through magnets, is designed to remotely assist in endovascular intervention — a procedure performed in emergency situations to treat strokes caused by a blood clot. Such interventions normally require a surgeon to manually guide a thin wire to the clot, where it can physically clear the blockage or deliver drugs to break it up.
    One limitation of such procedures is accessibility: Neurovascular surgeons are often based at major medical institutions that are difficult to reach for patients in remote areas, particularly during the “golden hour” — the critical period after a stroke’s onset, during which treatment should be administered to minimize any damage to the brain.
    The MIT team envisions that its robotic system could be installed at smaller hospitals and remotely guided by trained surgeons at larger medical centers. The system includes a medical-grade robotic arm with a magnet attached to its wrist. With a joystick and live imaging, an operator can adjust the magnet’s orientation and manipulate the arm to guide a soft and thin magnetic wire through arteries and vessels.
    The researchers demonstrated the system in a “phantom,” a transparent model with vessels replicating complex arteries of the brain. With just an hour of training, neurosurgeons were able to remotely control the robot’s arm to guide a wire through a maze of vessels to reach target locations in the model.
    “We imagine, instead of transporting a patient from a rural area to a large city, they could go to a local hospital where nurses could set up this system. A neurosurgeon at a major medical center could watch live imaging of the patient and use the robot to operate in that golden hour. That’s our future dream,” says Xuanhe Zhao, a professor of mechanical engineering and of civil and environmental engineering at MIT.

  • Layered controls can significantly curb exposure to COVID-19

    As the COVID-19 pandemic unfolded, a team at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory set out to better understand how well face masks, ventilation, and physical distancing can cut down transmission of airborne pathogens like SARS-CoV-2, the virus that causes COVID-19.
    Using a new computational model that simulates the life cycle of pathogen-laden particles, the researchers found that a combination of distancing of six feet, universal mask-wearing, and increased room ventilation could reduce the risk of infection by more than 98 percent in more than 95 percent of scenarios studied.
    “Wide adoption of layered controls dramatically reduces exposure to existing airborne viruses, such as SARS-CoV-2, and will be critical to control outbreaks of novel airborne viruses in the future,” said Laura Fierce, an atmospheric scientist formerly with Brookhaven Lab, now at DOE’s Pacific Northwest National Laboratory. “These nonpharmaceutical interventions can be applied in combination with vaccinations.”
    The study is published in the journal Indoor Air. It focuses on how face masks and ventilation work alone and in combination with distancing to reduce the likelihood of someone inhaling virus-laden aerosol particles in particular scenarios — namely, where an infectious person is speaking continuously in an indoor space for three hours — while also accounting for uncertainty in factors governing airborne transmission.
    Fierce collaborated with Alison Robey and Catherine Hamilton — who were participants in the DOE’s Science Undergraduate Laboratory Internships (SULI) program at Brookhaven — to develop the model of respiratory aerosols and droplets used in the study. The model simulates how virus-laden particles move through the jet of air expelled by an infectious person and within the larger indoor space. It considers how expelled particles change in size as water evaporates, how pathogens within those particles become inactive, and how particles are removed through ventilation, deposition on surfaces, and gravitational settling.
    The researchers’ simulations showed that exposure to airborne pathogens is significantly lowered by individual controls, such as face masks. But layering controls — that is, using them in combination — can be even more effective. According to the study, the combination of universal mask-wearing and distancing of even just three feet reduced a susceptible person’s risk of infection by 99 percent. Without face masks, by contrast, distancing of at least six feet was needed to avoid increased exposure to respiratory pathogens near an infectious person. The team also showed that increasing ventilation rates by completely replacing the air in a room with fresh or filtered air four times per hour reduces the risk of transmission by more than 70 percent, so long as the infectious person and susceptible person are distanced by at least six feet. However, ventilation does little to reduce the risk of infection when the infectious person is close by.
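    As a back-of-the-envelope illustration of why layering helps (this is not the Brookhaven aerosol model, and the per-control numbers below are invented): if each control independently removed a fixed fraction of the exposure, the residual risks would multiply.

```python
def residual_risk(*reductions):
    """Combine fractional exposure reductions (each 0-1) multiplicatively.
    Assumes the controls act independently, which the full aerosol model
    does not require."""
    risk = 1.0
    for r in reductions:
        risk *= (1.0 - r)
    return risk

# Hypothetical reductions: mask on each of two people (~70% each),
# six-foot distancing (~50%), four air changes per hour (~70%):
combined = residual_risk(0.70, 0.70, 0.50, 0.70)
print(f"{1 - combined:.1%} total reduction")
```

    Even modest individual reductions compound quickly, which is consistent with the better-than-98-percent figures the model produced.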
    “Our detailed modeling of respiratory particles shows how different controls on airborne transmission work in combination, which is important for prioritizing mitigation strategies for different indoor spaces,” Fierce said.
    This research was supported by the DOE Office of Science through the National Virtual Biotechnology Laboratory, a consortium of DOE national laboratories focused on response to COVID-19, with funding provided by the Coronavirus CARES Act. This project was supported in part by the U.S. Department of Energy through the Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internships Program (SULI). The quadrature-based model was originally developed with support from the DOE Atmospheric System Research program.
    Story Source:
    Materials provided by DOE/Brookhaven National Laboratory. Original written by Kelly Zegers.

  • How to compete with robots

    When it comes to the future of intelligent robots, the first question people ask is often: how many jobs will they make disappear? Whatever the answer, the second question is likely to be: how can I make sure that my job is not among them?
    In a study just published in Science Robotics, a team of roboticists from EPFL and economists from the University of Lausanne offers answers to both questions. By combining the scientific and technical literature on robotic abilities with employment and wage statistics, they have developed a method to calculate which of the currently existing jobs are more at risk of being performed by machines in the near future. Additionally, they have devised a method for suggesting career transitions to jobs that are less at risk and require the smallest retraining effort.
    “There are several studies predicting how many jobs will be automated by robots, but they all focus on software robots, such as speech and image recognition, financial robo-advisers, chatbots, and so forth. Furthermore, those predictions oscillate wildly depending on how job requirements and software abilities are assessed. Here, we consider not only artificial intelligence software, but also real intelligent robots that perform physical work, and we developed a method for a systematic comparison of human and robotic abilities used in hundreds of jobs,” says Prof. Dario Floreano, Director of EPFL’s Laboratory of Intelligent Systems, who led the study at EPFL.
    The key innovation of the study is a new mapping of robot capabilities onto job requirements. The team looked into the European H2020 Robotic Multi-Annual Roadmap (MAR), a strategy document by the European Commission that is periodically revised by robotics experts. The MAR describes dozens of abilities that are required of current robots or may be required of future ones, organised in categories such as manipulation, perception, sensing, and interaction with humans. The researchers went through research papers, patents, and descriptions of robotic products to assess the maturity level of robotic abilities, using a well-known scale for measuring the level of technology development, the “technology readiness level” (TRL).
    For human abilities, they relied on the O*net database, a widely used resource on the US job market that classifies approximately 1,000 occupations and breaks down the skills and knowledge that are most crucial for each of them.
    After selectively matching the human abilities from the O*net list to robotic abilities from the MAR document, the team could calculate how likely each existing job occupation is to be performed by a robot. Say, for example, that a job requires a human to work at millimetre-level precision of movements. Robots are very good at that, and the TRL of the corresponding ability is thus the highest. If a job requires enough such skills, it will be more likely to be automated than one that requires abilities such as critical thinking or creativity.
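    A hypothetical sketch of this kind of scoring (the paper's actual formula is more elaborate, and the job data below is invented): weight the maturity (TRL, on a 1-9 scale) of each matched robotic ability by how important that ability is to the job.

```python
def automation_risk(job_abilities, robot_trl, max_trl=9):
    """job_abilities: {ability: importance weight, 0-1}
    robot_trl: {ability: technology readiness level, 1-9}
    Returns a 0-1 score; higher means more automatable.
    Hypothetical scoring, not the paper's exact method."""
    total_weight = sum(job_abilities.values())
    score = sum(w * robot_trl.get(ability, 0) / max_trl
                for ability, w in job_abilities.items())
    return score / total_weight

# An invented precision-manual job: dominated by abilities robots do well.
precision_job = {"fine manipulation": 0.9, "repetitive motion": 0.8,
                 "critical thinking": 0.1}
trl = {"fine manipulation": 9, "repetitive motion": 8,
       "critical thinking": 2}
print(round(automation_risk(precision_job, trl), 2))  # → 0.91
```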
    The result is a ranking of the 1,000 jobs, with “Physicists” being the ones who have the lowest risk of being replaced by a machine, and “Slaughterers and Meat Packers,” who face the highest risk. In general, jobs in food processing, building and maintenance, construction and extraction appear to have the highest risk.
    “The key challenge for society today is how to become resilient against automation,” says Prof. Rafael Lalive, who co-led the study at the University of Lausanne. “Our work provides detailed career advice for workers who face high risks of automation, which allows them to take on more secure jobs while re-using many of the skills acquired in the old job. Through this advice, governments can support society in becoming more resilient against automation.”
    The authors then created a method to find, for any given job, alternative jobs that have a significantly lower automation risk and are reasonably close to the original one in terms of the abilities and knowledge they require — thus keeping the retraining effort minimal and making the career transition feasible. To test how that method would perform in real life, they used data from the US workforce and simulated thousands of career moves based on the algorithm’s suggestions, finding that it would indeed allow workers in the occupations with the highest risk to shift towards medium-risk occupations, while undergoing a relatively low retraining effort.
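    The transition search can be pictured as a nearest-neighbour problem in ability space. A hypothetical sketch with invented job vectors and risk scores, not the paper's algorithm or data:

```python
import numpy as np

def suggest_transition(current, jobs, risk, margin=0.2):
    """jobs: {name: ability vector}; risk: {name: automation risk, 0-1}.
    Return the job with meaningfully lower risk (by at least `margin`)
    that is closest to the current job's ability vector, i.e. the one
    requiring the least retraining."""
    cur_vec, cur_risk = jobs[current], risk[current]
    candidates = [j for j in jobs
                  if j != current and risk[j] < cur_risk - margin]
    return min(candidates,
               key=lambda j: np.linalg.norm(jobs[j] - cur_vec))

# Invented ability vectors: [manual dexterity, repetitiveness, analysis]
jobs = {"meat packer":    np.array([0.9, 0.8, 0.1]),
        "machine tender": np.array([0.8, 0.7, 0.3]),
        "lab technician": np.array([0.5, 0.4, 0.7])}
risk = {"meat packer": 0.9, "machine tender": 0.6, "lab technician": 0.4}
print(suggest_transition("meat packer", jobs, risk))  # → machine tender
```

    The lab technician role has lower risk still, but the machine-tender vector is closer, so it wins on retraining effort — the trade-off the authors' method is designed to balance.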
    The method could be used by governments to measure how many workers could face automation risks and adjust retraining policies, by companies to assess the costs of increasing automation, by robotics manufacturers to better tailor their products to market needs, and by the public to identify the easiest route to reposition themselves on the job market.
    Finally, the authors translated the new methods and data into an algorithm that predicts the risk of automation for hundreds of jobs and suggests resilient career transitions at minimal retraining effort, publicly accessible at https://lis2.epfl.ch/resiliencetorobots.

  • New polymer materials make fabricating optical interconnects easier

    Researchers have developed new polymer materials that are ideal for making the optical links necessary to connect chip-based photonic components with board-level circuits or optical fibers. The polymers can be used to easily create interconnects between photonic chips and optical printed circuit boards, the light-based equivalent of electronic printed circuit boards.
    “These new materials and the processes they enable could lead to powerful new photonic modules based on silicon photonics,” said research team leader Robert Norwood from the University of Arizona. “They could also be useful for optical sensing or making holographic displays for augmented and virtual reality applications.”
    Silicon photonics technology allows light-based components to be integrated onto a tiny chip. Although many of the basic building blocks of silicon photonic devices have been demonstrated, better methods are needed to fabricate the optical connections that link these components together to make more complex systems.
    In the journal Optical Materials Express, the researchers report new polymer materials that feature low optical losses and a refractive index that can be adjusted with ultraviolet (UV) light. These materials allow a single-mode optical interconnect to be printed directly into a dry film material using a low-cost, high-throughput lithography system that is compatible with the CMOS manufacturing techniques used to make chip-based photonic components.
    “This technology makes it more practical to fabricate optical interconnects, which can be used to make the Internet — especially the data centers that make it run — more efficient,” said Norwood. “Compared to their electronic counterparts, optical interconnects can increase data throughput while also generating less heat. This reduces power consumption and cooling requirements.”
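    For background on the single-mode requirement (a standard waveguide criterion, not a detail reported in this study): a step-index guide with a circular cross-section carries only its fundamental mode when the normalized frequency V stays below about 2.405. The index values below are illustrative, not measured FS-BOC data.

```python
import math

def v_number(core_radius_um, wavelength_um, n_core, n_clad):
    """Normalized frequency (V number) of a step-index circular waveguide.
    Single-mode operation requires V < 2.405."""
    na = math.sqrt(n_core**2 - n_clad**2)   # numerical aperture
    return 2 * math.pi * core_radius_um * na / wavelength_um

# Hypothetical polymer guide: a small UV-induced index contrast written
# into the film, probed at a telecom wavelength:
v = v_number(core_radius_um=2.0, wavelength_um=1.55,
             n_core=1.525, n_clad=1.515)
print(f"V = {v:.2f}, single-mode: {v < 2.405}")  # → V = 1.41, single-mode: True
```

    This is why a UV-tunable index is so convenient: the writable contrast between exposed and unexposed polymer directly sets whether a printed channel stays single-mode.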
    Replacing wires with light
    The research expands on a vinylthiophenol polymer material system known as S-BOC that the investigators developed previously. This material has a refractive index that can be modified using UV illumination. In the new work, the researchers partially fluorinated S-BOC to improve its light efficiency. The new material system, called FS-BOC, exhibits lower optical propagation losses than many other optical interconnect materials.