More stories

  • Robotic drug capsule can deliver drugs to gut

    One reason that it’s so difficult to deliver large protein drugs orally is that these drugs can’t pass through the mucus barrier that lines the digestive tract. This means that insulin and most other “biologic drugs” — drugs consisting of proteins or nucleic acids — have to be injected or administered in a hospital.
    A new drug capsule developed at MIT may one day be able to replace those injections. The capsule has a robotic cap that spins and tunnels through the mucus barrier when it reaches the small intestine, allowing drugs carried by the capsule to pass into cells lining the intestine.
    “By displacing the mucus, we can maximize the dispersion of the drug within a local area and enhance the absorption of both small molecules and macromolecules,” says Giovanni Traverso, the Karl van Tassel Career Development Assistant Professor of Mechanical Engineering at MIT and a gastroenterologist at Brigham and Women’s Hospital.
    In a study appearing today in Science Robotics, the researchers demonstrated that they could use this approach to deliver insulin as well as vancomycin, an antibiotic peptide that currently has to be injected.
    Shriya Srinivasan, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research and a junior fellow at the Society of Fellows at Harvard University, is the lead author of the study.
    Tunneling through
    For several years, Traverso’s lab has been developing strategies to deliver protein drugs such as insulin orally. This is a difficult task because protein drugs tend to be broken down in the acidic environment of the digestive tract, and they also have difficulty penetrating the mucus barrier that lines it.

  • Fluidic circuits add analog options for controlling soft robots

    In a study published online this week, robotics researchers, engineers and materials scientists from Rice University and Harvard University showed it is possible to make programmable, nonelectronic circuits that control the actions of soft robots by processing information encoded in bursts of compressed air.
    “Part of the beauty of this system is that we’re really able to reduce computation down to its base components,” said Rice undergraduate Colter Decker, lead author of the study in the Proceedings of the National Academy of Sciences. He said electronic control systems have been honed and refined for decades, and recreating computer circuitry “with analogs to pressure and flow rate instead of voltage and current” made it easier to incorporate pneumatic computation.
    Decker, a senior majoring in mechanical engineering, constructed his soft robotic control system primarily from everyday materials like plastic drinking straws and rubber bands. Despite its simplicity, experiments showed the system’s air-driven logic gates could be configured to perform operations called Boolean functions that are the meat and potatoes of modern computing.
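    To make that mapping concrete, here is a minimal sketch (an illustration of the general idea only, not the authors’ actual valve designs) of how valve-style pneumatic elements can implement Boolean operations, with a pressurized line standing in for true and a vented line for false:

```python
# Pneumatic logic sketch: pressure plays the role of voltage. Each "gate"
# below models an idealized soft valve, not any specific device from the paper.
HIGH, LOW = True, False  # HIGH = pressurized line, LOW = vented line

def valve_not(ctrl):
    """Normally-open valve: control pressure pinches the supply line shut."""
    return not ctrl

def valve_and(a, b):
    """Two valves in series: air flows only if both inputs are pressurized."""
    return a and b

def valve_or(a, b):
    """Two lines teed together: air flows if either input is pressurized."""
    return a or b

# Hypothetical controller: actuate a limb on each step signal, unless a
# contact-sensor line is pressurized (actuate = step AND NOT contact).
for step in (LOW, HIGH):
    for contact in (LOW, HIGH):
        actuate = valve_and(step, valve_not(contact))
        print(f"step={step!s:5} contact={contact!s:5} -> actuate={actuate}")
```

    Any Boolean function can be composed from these three primitives, which is the sense in which air-driven logic gates support general-purpose control.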
    “The goal was never to entirely replace electronic computers,” Decker said. He said there are many cases where soft robots or wearables need only be programmed for a few simple movements, and it’s possible the technology demonstrated in the paper “would be much cheaper and safer for use and much more durable” than traditional electronic controls.
    As a freshman, Decker began working in the lab of Daniel Preston, an assistant professor of mechanical engineering at Rice. Decker studied fluidic control systems and became interested in creating one when he won a competitive summer research fellowship that would allow him to spend a few months working in the lab of Harvard chemist and materials scientist George Whitesides.

  • The pros and cons of telemental health

    New research led by the National Institute for Health & Care Research (NIHR) Mental Health Policy Research Unit (MHPRU) at King’s College London and University College London (UCL) has shown that certain groups of people benefit from the freedom of choice that telemental health provides, but this is not true for all.
    The research, published today in the Interactive Journal of Medical Research, investigates which telemental health approaches work (or do not work) for whom, in which contexts, and through which mechanisms. Telemental health was found to be effective overall, but researchers highlight that there is no ‘one size fits all’.
    Telemental health (or telemedicine) is mental health care — patient care, administrative activities and health education — delivered via ‘telecommunications technologies’ e.g. video calls, telephone calls or SMS text messages. It has become increasingly widespread, as it can be useful in providing care to service users in remote communities, or during an emergency restricting face-to-face contact, such as the COVID-19 pandemic.
    The study found telemental health can be effective in reducing treatment gaps and barriers by improving access to mental health care across different service user groups (e.g. adults, children and adolescents, older adults, and ethnic minority groups) and across personal contexts (e.g. difficulty accessing services, caring responsibilities or condition). However, it is crucial that providers consider the key factors that lead to variation in people’s responses to telemental health; for example, access to a private and confidential space, the ability to develop therapeutic relationships, individual preferences and circumstances, and the quality of the internet connection.
    King’s researcher Dr Katherine Saunders, from the NIHR MHPRU and joint lead author, said, “We live in an increasingly digital world, and the COVID-19 pandemic accelerated the role of technology in mental health care. Our study found that, while certain groups do benefit from the opportunities telemental health can provide, it is not a one size fits all solution. Receiving telemental health requires access to a device, an internet connection and an understanding of technology. If real world barriers to telemental health are ignored in favour of wider implementation, we risk further embedding inequalities into our healthcare system.”
    An important limitation reported is that implementing telemental health could reinforce pre-existing inequalities in service provision. Those who benefit less include people without access to the internet or a phone, and those experiencing social and economic disadvantage, cognitive difficulties, auditory or visual impairments, or severe mental health problems (such as psychosis).
    Professor Sonia Johnson from UCL, Director of the NIHR MHPRU and senior author, adds, “Our research findings emphasise the importance of personal choice, privacy and safety, and therapeutic relationships in telemental health care. The review also identified particular service users likely to be disadvantaged by telemental health implementation. For those people, we recommend ensuring that face-to-face care of equivalent timeliness remains available.”
    The authors suggest the findings have implications across the board of clinical practice, service planning, policy and research. If telemental health is to be widely incorporated into routine care, a clear understanding is needed of when and for whom it is an acceptable and effective approach and when face-to-face care is needed.
    Professor Alan Simpson, from King’s and Co-Director of the NIHR MHPRU, concludes, “As well as reviewing a huge amount of research literature, in this study we also involved and consulted with many clinicians and users of mental health services. This included young people, those who worked in or used inpatient and crisis services, and those who had personal lived experience of telemental health throughout the pandemic. This gives this research a relevance that will be of interest to policy makers, service providers and those working in and using our services.”
    Merle Schlief, joint lead author from the NIHR MHPRU at UCL, said, “Working entirely online to conduct this study gave us access to experts and stakeholders who we simply would not have been able to include if we had been working in-person, including people living and working internationally, and those who would have been unable to travel. This highlights one of the key strengths of technology.”
    The authors recommend that guidelines and strategies be co-produced with service users and frontline staff to optimize telemental health implementation in real-world settings.
    The MHPRU is a joint enterprise between researchers at UCL and King’s College London with a national network of collaborators. It conducts research commissioned by the NIHR Policy Research Programme to help the Department of Health and Social Care and others involved in making nationwide plans for mental health services to make decisions based on good evidence. The MHPRU contributed research evidence to the national review of the Mental Health Act and is currently undertaking a number of studies.

  • Discovery of new nanowire assembly process could enable more powerful computer chips

    In a newly published study, a team of researchers in Oxford University’s Department of Materials led by Harish Bhaskaran, Professor of Applied Nanomaterials, describe a breakthrough approach for picking up single nanowires from the growth substrate and placing them on virtually any platform with sub-micron accuracy.
    The innovative method uses novel tools, including ultra-thin filaments of polyethylene terephthalate (PET) with tapered nanoscale tips that are used to pick up individual nanowires. At this fine scale, adhesive van der Waals forces (tiny forces of attraction that occur between atoms and molecules) cause the nanowires to ‘jump’ into contact with the tips. The nanowires are then transferred to a transparent dome-shaped elastic stamp mounted on a glass slide. The stamp is then turned upside down and aligned with the device chip, and the nanowire is printed gently onto the surface.
    Deposited nanowires showed strong adhesive qualities, remaining in place even when the device was immersed in liquid. The research team were also able to place nanowires on fragile substrates, such as ultra-thin 50 nanometre membranes, demonstrating the delicacy and versatility of the stamping technique.
    In addition, the researchers used the method to build an optomechanical sensor (an instrument that uses laser light to measure vibrations) that was 20 times more sensitive than existing nanowire-based devices.
    Nanowires, materials with diameters roughly 1,000 times smaller than a human hair and with fascinating physical properties, could enable major advances in many different fields, from energy harvesters and sensors to information and quantum technologies. In particular, their minuscule size could allow the development of smaller transistors and miniaturised computer chips. A major obstacle to realising the full potential of nanowires, however, has been the inability to position them precisely within devices.
    Most electronic device manufacturing techniques cannot tolerate the conditions needed to produce nanowires. Consequently, nanowires are usually grown on a separate substrate and then mechanically or chemically transferred to the device. In all existing nanowire transfer techniques, however, the nanowires are placed randomly onto the chip surface, which limits their application in commercial devices.
    DPhil student Utku Emre Ali (Department of Materials), who developed the technique, said: ‘This new pick-and-place assembly process has enabled us to create first-of-its-kind devices in the nanowire realm. We believe that it will inexpensively advance nanowire research by allowing users to incorporate nanowires with existing on-chip platforms, be it electronic or photonic, unlocking physical properties that have not been attainable so far. Furthermore, this technique could be fully automated, making full-scale fabrication of high quality nanowire-integrated chips a real possibility.’
    Professor Harish Bhaskaran (Department of Materials) added: ‘This technique is readily scalable to larger areas, and brings the promise of nanowires to devices made on any substrate and using any process. This is what makes this technique so powerful.’
    Story Source:
    Materials provided by University of Oxford. Note: Content may be edited for style and length.

  • Do humans think computers make fair decisions?

    Today, machine learning helps determine the loan we qualify for, the job we get, and even who goes to jail. But when it comes to these potentially life-altering decisions, can computers make a fair call? In a study published September 29 in the journal Patterns, researchers from Germany showed that with human supervision, people think a computer’s decision can be as fair as a decision primarily made by humans.
    “A lot of the discussion on fairness in machine learning has focused on technical solutions, like how to fix unfair algorithms and how to make the systems fair,” says computational social scientist and co-author Ruben Bach of the University of Mannheim, Germany. “But our question is, what do people think is fair? It’s not just about developing algorithms. They need to be accepted by society and meet normative beliefs in the real world.”
    Automated decision-making, where a conclusion is made solely by a computer, excels at analyzing large datasets to detect patterns. Computers are often considered objective and neutral compared with humans, whose biases can cloud judgments. Yet, bias can creep into computer systems as they learn from data that reflects discriminatory patterns in our world. Understanding fairness in computer and human decisions is crucial to building a more equitable society.
    To understand what people consider fair about automated decision-making, the researchers surveyed 3,930 individuals in Germany. The researchers gave them hypothetical scenarios related to banking, employment, prison, and unemployment systems. Within the scenarios, they further compared different situations, including whether the decision leads to a positive or negative outcome, where the data for evaluation comes from, and who makes the final decision — human, computer, or both.
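    The design described above is essentially a factorial vignette survey: every combination of the varied factors yields one scenario a respondent might rate. A small sketch makes the combinatorics explicit (the dimension names are my paraphrase, not the paper’s exact labels):

```python
# Factorial vignette structure: each combination of factors is one
# hypothetical decision scenario a survey respondent could be shown.
from itertools import product

domains = ["banking", "employment", "prison", "unemployment benefits"]
decision_makers = ["human", "computer", "computer with human supervision"]
outcomes = ["positive", "negative"]
data_sources = ["scenario-related data", "additional internet data"]

vignettes = list(product(domains, decision_makers, outcomes, data_sources))
print(len(vignettes))  # 4 * 3 * 2 * 2 = 48 distinct combinations
```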
    “As expected, we saw that completely automated decision-making was not favored,” says computational social scientist and co-first author Christoph Kern of the University of Mannheim. “But what was interesting is that when you have human supervision over the automated decision-making, the level of perceived fairness becomes similar to human-centered decision-making.” The results showed that people perceive a decision as fairer when humans are involved.
    People also had more concerns over fairness when decisions related to the criminal justice system or job prospects, where the stakes are higher. Possibly viewing the weight of losses as greater than the weight of gains, the participants deemed decisions that can lead to positive outcomes fairer than those leading to negative ones. Compared with systems that rely only on scenario-related data, those that draw on additional unrelated data from the internet were considered less fair, confirming the importance of data transparency and privacy. Together, the results showed that context matters: automated decision-making systems need to be carefully designed when concerns about fairness arise.
    While hypothetical situations in the survey may not fully translate to the real world, the team is already brainstorming next steps to better understand fairness. They plan to take the study further to understand how different people define fairness. They also want to use similar surveys to ask more questions about ideas such as distributive justice, the fairness of allocating resources within a community.
    “In a way, we hope that people in the industry can take these results as food for thought and as things they should check before developing and deploying an automated decision-making system,” says Bach. “We also need to ensure that people understand how the data is processed and how decisions are made based on it.”
    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Bitcoin mining is environmentally unsustainable, researchers find

    Taken as a share of the market price, the climate change impacts of mining the digital cryptocurrency Bitcoin are more comparable to the impacts of extracting and refining crude oil than to those of mining gold, according to an analysis published in Scientific Reports by researchers at The University of New Mexico.
    The authors suggest that rather than being considered akin to ‘digital gold’, Bitcoin should instead be compared to much more energy-intensive products such as beef, natural gas, and crude oil.
    “We find no evidence that Bitcoin mining is becoming more sustainable over time,” said UNM Economics Associate Professor Benjamin A. Jones. “Rather, our results suggest the opposite: Bitcoin mining is becoming dirtier and more damaging to the climate over time. In short, Bitcoin’s environmental footprint is moving in the wrong direction.”
    In December 2021, Bitcoin had a market capitalization of approximately 960 billion US dollars, with a roughly 41 percent share of the global cryptocurrency market. Although Bitcoin is known to be energy intensive, the extent of its climate damages has been unclear.
    Jones and colleagues Robert Berrens and Andrew Goodkind present economic estimates of climate damages from Bitcoin mining between January 2016 and December 2021. They report that in 2020 Bitcoin mining used 75.4 terawatt-hours (TWh) of electricity, more than Austria (69.9 TWh) or Portugal (48.4 TWh) used that year.
    “Globally, the mining, or production, of Bitcoin is using tremendous amounts of electricity, mostly from fossil fuels, such as coal and natural gas. This is causing huge amounts of air pollution and carbon emissions, which is negatively impacting our global climate and our health,” said Jones. “We find several instances between 2016-2021 where Bitcoin is more damaging to the climate than a single Bitcoin is actually worth. Put differently, Bitcoin mining, in some instances, creates climate damages in excess of a coin’s value. This is extremely troubling from a sustainability perspective.”
    The authors assessed Bitcoin climate damages according to three sustainability criteria: whether the estimated climate damages are increasing over time; whether the climate damages of Bitcoin exceeds the market price; and how the climate damages as a share of market price compare to other sectors and commodities.
    They find that the CO2-equivalent emissions from electricity generation for Bitcoin mining have increased 126-fold, from 0.9 tonnes per coin in 2016 to 113 tonnes per coin in 2021. Calculations suggest each Bitcoin mined in 2021 generated 11,314 US dollars (USD) in climate damages, with total global damages exceeding 12 billion USD between 2016 and 2021. Damages peaked at 156% of the coin price in May 2020, meaning that each 1 USD of Bitcoin market value generated 1.56 USD in global climate damages that month.
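    As a quick sanity check, the headline ratios follow directly from the figures quoted above (this sketch uses only those numbers):

```python
# Re-deriving the reported ratios from the numbers quoted in the article.
emissions_2016 = 0.9    # tonnes CO2e per coin mined in 2016
emissions_2021 = 113.0  # tonnes CO2e per coin mined in 2021
print(emissions_2021 / emissions_2016)  # ~125.6, i.e. the ~126-fold increase

# May 2020 peak: damages at 156% of the coin price means each 1 USD of
# market value carried about 1.56 USD in climate damages.
damages_per_dollar = 1.56
print(f"{damages_per_dollar:.0%} of market value")  # 156% of market value
```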
    “Across the class of digitally scarce goods, our focus is on those cryptocurrencies that rely on proof-of-work (POW) production techniques, which can be highly energy intensive,” said Regents Professor of Economics Robert Berrens. “Within broader efforts to mitigate climate change, the policy challenge is creating governance mechanisms for an emergent, decentralized industry, which includes energy-intensive POW cryptocurrencies. We believe that such efforts would be aided by measurable, empirical signals concerning potentially unsustainable climate damages, in monetary terms.”
    Finally, the authors compared Bitcoin climate damages to damages from other industries and products such as electricity generation from renewable and non-renewable sources, crude oil processing, agricultural meat production, and precious metal mining. Climate damages for Bitcoin averaged 35% of its market value between 2016 and 2021. This share for Bitcoin was slightly less than the climate damages as a share of market value of electricity produced by natural gas (46%) and gasoline produced from crude oil (41%), but more than those of beef production (33%) and gold mining (4%).
    The authors conclude that Bitcoin does not meet any of the three key sustainability criteria they assessed it against. Absent voluntary switching away from proof-of-work mining, as recently done for the cryptocurrency Ether, regulation may be required to make Bitcoin mining sustainable.

  • Jacky Austermann looks to the solid earth for clues to sea level rise

    It’s no revelation that sea levels are rising. Rising temperatures brought on by human-caused climate change are melting ice sheets and expanding ocean water. What’s happening inside Earth will also shape future shorelines. Jacky Austermann is trying to understand those inner dynamics.

    A geophysicist at Columbia University’s Lamont-Doherty Earth Observatory, Austermann didn’t always know she would end up studying climate. Her fascination with math from a young age coupled with her love of nature and the outdoors — she grew up hiking in the Alps — led her to study physics as an undergraduate, and later geophysics.

    As Austermann dug deeper into Earth’s geosystems, she learned just how much the movement of hot rock in the mantle influences life on the surface. “I got really interested in this entire interplay of the solid earth and the oceans and the climate,” she says.

    Big goal

    Much of Austermann’s work focuses on how that interplay influences changes in sea level. The global average sea level has risen more than 20 centimeters since 1880, and the yearly rise is increasing. But shifts in local sea level can vary, with those levels rising or falling along different shorelines, Austermann says, and the solid earth plays a role.

    “We think about sea level change generally as ‘ice is melting, so sea level is rising.’ But there’s a lot more nuance to it,” she says. “A lot of sea level change is driven by land motion.” 

    Understanding that nuance could lead to more accurate climate models for predicting sea level rise in the future. Such work should help inform practical solutions for communities in at-risk coastal areas.

    So Austermann is building computer models that reconstruct sea level changes over the last few million years. Her models incorporate data on how the creeping churning of the mantle and other geologic phenomena have altered land and sea elevation, particularly during interglacial periods when Earth’s temperatures were a few degrees higher than they are today.

    Standout research

    Previous studies had suggested that this churning, known as mantle convection, sculpted Earth’s surface millions of years ago. “It pushes the surface up where hot material wells up,” Austermann says. “And it also drags [the surface] down where cold material sinks back into the mantle.”

    In 2015, Austermann and colleagues were the first to show that mantle-induced topographic changes influenced the melting of Antarctic ice over the last 3 million years. Near the ice sheet’s edges, ice retreated more quickly in areas where the land surface was lower due to convection.

    What’s more, mantle convection is affecting land surfaces even on relatively short time scales. Since the last interglacial period, around 130,000 to 115,000 years ago, mantle convection has warped ancient shorelines by as much as several meters, her team reported in Science Advances in 2017.  

    Image: Jacky Austermann builds computer models that reconstruct sea level changes over the last few million years. The work could improve models that forecast the future. Credit: Bill Menke

    The growing and melting of ice sheets can deform the solid earth too, Austermann says. As land sinks under the weight of accumulating ice, local sea levels rise. And where ice melts and the land rebounds upward, local sea levels fall. This effect, along with the way an ice sheet gravitationally tugs on the water around it, is shifting local sea levels around the globe today, she says, making it very relevant to coastal areas planning their defenses in the current climate crisis.
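    The bookkeeping behind that point is simple. This back-of-envelope sketch (my illustration, not Austermann’s models) shows how vertical land motion enters local, or relative, sea level change:

```python
# Relative sea level change = change in the sea surface minus vertical land
# motion: subsiding land sees extra local rise, uplifting land sees less.
def relative_sea_level_change(sea_surface_rise_mm_yr, land_uplift_mm_yr):
    """Rates in mm/year; positive land_uplift means the land is rising."""
    return sea_surface_rise_mm_yr - land_uplift_mm_yr

print(relative_sea_level_change(3.0, -2.0))  # land sinking 2 mm/yr -> 5.0 mm/yr local rise
print(relative_sea_level_change(3.0, 4.0))   # land rising 4 mm/yr -> -1.0 mm/yr (local fall)
```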

    Understanding these geologic processes can help improve models of past sea level rise. Austermann’s team is gathering more data from the field, scouring the coasts of Caribbean islands for clues to what areas were once near or below sea level. Such clues include fossilized corals and water ripples etched in stone, as well as tiny chutes in rocks that indicate air bubbles once rose through sand on ancient beaches. The work is “really fun,” Austermann says. “It’s essentially like a scavenger hunt.”

    Her efforts put the solid earth at the forefront of the study of sea level changes, says Douglas Wiens, a seismologist at Washington University in St. Louis. Before, “a lot of those factors were kind of ignored.” What’s most remarkable is her ability “to span what we normally consider to be several different disciplines and bring them together to solve the sea level problem,” he says.

    Building community

    Austermann says the most enjoyable part of her job is working with her students and postdocs. More than writing the next big paper, she wants to cultivate a happy, healthy and motivated research group. “It’s really rewarding to see them grow academically, scientifically, come up with their own ideas … and also help each other out.”

    Roger Creel, a Ph.D. student in Austermann’s group and the first to join her lab, treasures Austermann’s mentorship. She offers realistic, clear and fluid expectations, gives prompt and thoughtful feedback and meets for regular check-ins, he says. “Sometimes I think of it like water-skiing, and Jacky’s the boat.”

    For Oana Dumitru, a postdoc in the group, one aspect of that valued mentorship came in the form of a gentle push to write and submit a grant proposal on her own. “I thought I was not ready for it, but she was like, you’ve got to try,” Dumitru says.

    Austermann prioritizes her group’s well-being, which fosters collaboration, Creel and Dumitru say. That sense of inclusion, support and community “is the groundwork for having an environment where great ideas can blossom,” Austermann says.


  • 3D printing can now manufacture customized sensors for robots, pacemakers, and more

    A newly developed 3D printing technique could be used to cost-effectively produce customized, insect-sized electronic “machines” that enable advanced applications in robotics, medical devices and other fields.
    The breakthrough could be a game-changer for manufacturing customized chip-based microelectromechanical systems (MEMS). These mini-machines are mass-produced in large volumes for hundreds of electronic products, including smartphones and cars, where they provide positioning accuracy. But for more specialized sensors produced in smaller volumes, such as accelerometers for aircraft and vibration sensors for industrial machinery, MEMS technologies demand costly customization.
    Frank Niklaus, who led the research at KTH Royal Institute of Technology in Stockholm, says the new 3D printing technique, which was published in Nature Microsystems & Nanoengineering, provides a way to get around the limitations of conventional MEMS manufacturing.
    “The costs of manufacturing process development and device design optimizations do not scale down for lower production volumes,” he says. The result is that engineers are faced with a choice between suboptimal off-the-shelf MEMS devices and economically unviable start-up costs.
    Other low-volume products that could benefit from the technique include motion and vibration control units for robots and industrial tools, as well as wind turbines.
    The researchers built on a process called two-photon polymerization, which can produce high-resolution objects as small as a few hundred nanometers but cannot by itself provide sensing functionality. To form the transducing elements, the method uses a technique called shadow masking, which works something like a stencil. On the 3D-printed structure, the researchers fabricate features with a T-shaped cross-section, which work like umbrellas. They then deposit metal from above; as a result, the sides of the T-shaped features are not coated, so the metal on top of the T is electrically isolated from the rest of the structure.
    With this method, Niklaus says, it takes only a few hours to manufacture a dozen or so custom-designed MEMS accelerometers using relatively inexpensive commercial manufacturing tools. The method can be used for prototyping MEMS devices and for manufacturing small and medium-sized batches, from a few thousand to tens of thousands of MEMS sensors per year, in an economically viable way, he says.
    “This is something that has not been possible until now, because the start-up costs for manufacturing a MEMS product using conventional semiconductor technology are on the order of hundreds of thousands of dollars and the lead times are several months or more,” he says. “The new capabilities offered by 3D-printed MEMS could result in a new paradigm in MEMS and sensor manufacturing.
    “Scalability isn’t just an advantage in MEMS production, it’s a necessity. This method would enable fabrication of many kinds of new, customized devices.”
    Story Source:
    Materials provided by KTH, Royal Institute of Technology. Original written by David Callahan. Note: Content may be edited for style and length.