More stories

  • Fluidic circuits add analog options for controlling soft robots

    In a study published online this week, robotics researchers, engineers and materials scientists from Rice University and Harvard University showed it is possible to make programmable, nonelectronic circuits that control the actions of soft robots by processing information encoded in bursts of compressed air.
    “Part of the beauty of this system is that we’re really able to reduce computation down to its base components,” said Rice undergraduate Colter Decker, lead author of the study in the Proceedings of the National Academy of Sciences. He said electronic control systems have been honed and refined for decades, and recreating computer circuitry “with analogs to pressure and flow rate instead of voltage and current” made it easier to incorporate pneumatic computation.
    Decker, a senior majoring in mechanical engineering, constructed his soft robotic control system primarily from everyday materials like plastic drinking straws and rubber bands. Despite its simplicity, experiments showed the system’s air-driven logic gates could be configured to perform operations called Boolean functions that are the meat and potatoes of modern computing.
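    To make "Boolean functions" concrete for readers outside computing: any such function maps binary inputs to a binary output and can be built by wiring together a small set of universal gates. The short Python sketch below is purely illustrative software, not the paper's pneumatic hardware; it composes an XOR function from NAND gates, the same kind of composition a network of air-driven gates can perform with pressure and flow standing in for voltage and current.
```python
# Illustrative only: building a Boolean function (XOR) from a single
# universal gate type (NAND). A pneumatic logic circuit can realize the
# same composition with pressure signals instead of voltages.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    # Classic four-NAND construction of XOR.
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(f"XOR({a}, {b}) = {xor(a, b)}")
```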
    “The goal was never to entirely replace electronic computers,” Decker said. He said there are many cases where soft robots or wearables need only be programmed for a few simple movements, and it’s possible the technology demonstrated in the paper “would be much cheaper and safer for use and much more durable” than traditional electronic controls.
    As a freshman, Decker began working in the lab of Daniel Preston, an assistant professor of mechanical engineering at Rice. Decker studied fluidic control systems and became interested in creating one when he won a competitive summer research fellowship that would allow him to spend a few months working in the lab of Harvard chemist and materials scientist George Whitesides.

  • The pros and cons of telemental health

    New research led by the National Institute for Health & Care Research (NIHR) Mental Health Policy Research Unit (MHPRU) at King’s College London and University College London (UCL) has shown that certain groups of people benefit from the freedom of choice that telemental health provides, but this is not true for all.
    The research, published today in the Interactive Journal of Medical Research, investigates which telemental health approaches work (or do not work) for whom, in which contexts, and through which mechanisms. Telemental health was found to be effective overall, but researchers highlight that there is no ‘one size fits all’.
    Telemental health (or telemedicine) is mental health care — patient care, administrative activities and health education — delivered via ‘telecommunications technologies’ e.g. video calls, telephone calls or SMS text messages. It has become increasingly widespread, as it can be useful in providing care to service users in remote communities, or during an emergency restricting face-to-face contact, such as the COVID-19 pandemic.
    The study found telemental health can be effective in reducing treatment gaps and barriers by improving access to mental health care across different service user groups (e.g. adults, children and adolescents, older adults, and ethnic minority groups) and across personal contexts (e.g. difficulty accessing services, caring responsibilities or health condition). However, it is crucial that providers consider the key factors that lead to variations in people’s response to telemental health; for example, access to a private and confidential space, the ability to develop therapeutic relationships, individual preferences and circumstances, and the quality of the internet connection.
    King’s researcher Dr Katherine Saunders, from NIHR MHPRU and joint lead author said, “We live in an increasingly digital world, and the COVID-19 pandemic accelerated the role of technology in mental health care. Our study found that, while certain groups do benefit from the opportunities telemental health can provide, it is not a one size fits all solution. Receiving telemental health requires access to a device, an internet connection and an understanding of technology. If real world barriers to telemental health are ignored in favour of wider implementation, we risk further embedding inequalities into our healthcare system.”
    An important limitation reported is that implementing telemental health could reinforce pre-existing inequalities in service provision. Those who benefit less include people without access to the internet or a phone, and those experiencing social and economic disadvantage, cognitive difficulties, auditory or visual impairments, or severe mental health problems (such as psychosis).
    Professor Sonia Johnson from UCL, Director of the NIHR MHPRU and senior author, adds: “Our research findings emphasise the importance of personal choice, privacy and safety, and therapeutic relationships in telemental health care. The review also identified particular service users likely to be disadvantaged by telemental health implementation. For those people, we recommend ensuring that face-to-face care of equivalent timeliness remains available.”
    The authors suggest the findings have implications across the board of clinical practice, service planning, policy and research. If telemental health is to be widely incorporated into routine care, a clear understanding is needed of when and for whom it is an acceptable and effective approach and when face-to-face care is needed.
    Professor Alan Simpson, from King’s and Co-Director of the NIHR MHPRU, concludes: “As well as reviewing a huge amount of research literature, in this study we also involved and consulted with many clinicians and users of mental health services. This included young people, those who worked in or used inpatient and crisis services, and those who had personal lived experience of telemental health throughout the pandemic. This gives this research a relevance that will be of interest to policy makers, service providers and those working in and using our services.”
    Merle Schlief, joint lead author from the NIHR MHPRU at UCL, said: “Working entirely online to conduct this study gave us access to experts and stakeholders who we simply would not have been able to include if we had been working in person, including people living and working internationally, and those who would have been unable to travel. This highlights one of the key strengths of technology.”
    The authors recommend that guidelines and strategies be co-produced with service users and frontline staff to optimize telemental health implementation in real-world settings.
    The MHPRU is a joint enterprise between researchers at UCL and King’s College London with a national network of collaborators. It conducts research commissioned by the NIHR Policy Research Programme to help the Department of Health and Social Care, and others involved in making nationwide plans for mental health services, make decisions based on good evidence. The MHPRU contributed research evidence to the national review of the Mental Health Act and is currently undertaking a number of studies.

  • Discovery of new nanowire assembly process could enable more powerful computer chips

    In a newly published study, a team of researchers in Oxford University’s Department of Materials, led by Harish Bhaskaran, Professor of Applied Nanomaterials, describes a breakthrough approach for picking up single nanowires from the growth substrate and placing them on virtually any platform with sub-micron accuracy.
    The innovative method uses novel tools, including ultra-thin filaments of polyethylene terephthalate (PET) with tapered nanoscale tips that are used to pick up individual nanowires. At this fine scale, adhesive van der Waals forces (tiny forces of attraction that occur between atoms and molecules) cause the nanowires to ‘jump’ into contact with the tips. The nanowires are then transferred to a transparent dome-shaped elastic stamp mounted on a glass slide. The stamp is then turned upside down and aligned with the device chip, and the nanowire is gently printed onto the surface.
    Deposited nanowires showed strong adhesive qualities, remaining in place even when the device was immersed in liquid. The research team were also able to place nanowires on fragile substrates, such as ultra-thin 50 nanometre membranes, demonstrating the delicacy and versatility of the stamping technique.
    In addition, the researchers used the method to build an optomechanical sensor (an instrument that uses laser light to measure vibrations) that was 20 times more sensitive than existing nanowire-based devices.
    Nanowires, materials with diameters roughly 1,000 times smaller than a human hair and with fascinating physical properties, could enable major advances in many different fields, from energy harvesters and sensors to information and quantum technologies. In particular, their minuscule size could allow the development of smaller transistors and miniaturised computer chips. A major obstacle to realising the full potential of nanowires, however, has been the inability to position them precisely within devices.
    Most electronic device manufacturing techniques cannot tolerate the conditions needed to produce nanowires. Consequently, nanowires are usually grown on a separate substrate and then mechanically or chemically transferred to the device. In all existing nanowire transfer techniques, however, the nanowires are placed randomly onto the chip surface, which limits their application in commercial devices.
    DPhil student Utku Emre Ali (Department of Materials), who developed the technique, said: ‘This new pick-and-place assembly process has enabled us to create first-of-its-kind devices in the nanowire realm. We believe that it will inexpensively advance nanowire research by allowing users to incorporate nanowires with existing on-chip platforms, be it electronic or photonic, unlocking physical properties that have not been attainable so far. Furthermore, this technique could be fully automated, making full-scale fabrication of high quality nanowire-integrated chips a real possibility.’
    Professor Harish Bhaskaran (Department of Materials) added: ‘This technique is readily scalable to larger areas, and brings the promise of nanowires to devices made on any substrate and using any process. This is what makes this technique so powerful.’
    Story Source:
    Materials provided by University of Oxford.

  • Do humans think computers make fair decisions?

    Today, machine learning helps determine the loan we qualify for, the job we get, and even who goes to jail. But when it comes to these potentially life-altering decisions, can computers make a fair call? In a study published September 29 in the journal Patterns, researchers from Germany showed that with human supervision, people think a computer’s decision can be as fair as a decision primarily made by humans.
    “A lot of the discussion on fairness in machine learning has focused on technical solutions, like how to fix unfair algorithms and how to make the systems fair,” says computational social scientist and co-author Ruben Bach of the University of Mannheim, Germany. “But our question is, what do people think is fair? It’s not just about developing algorithms. They need to be accepted by society and meet normative beliefs in the real world.”
    Automated decision-making, where a conclusion is made solely by a computer, excels at analyzing large datasets to detect patterns. Computers are often considered objective and neutral compared with humans, whose biases can cloud judgments. Yet, bias can creep into computer systems as they learn from data that reflects discriminatory patterns in our world. Understanding fairness in computer and human decisions is crucial to building a more equitable society.
    To understand what people consider fair in automated decision-making, the researchers surveyed 3,930 individuals in Germany. The researchers gave them hypothetical scenarios related to the banking, employment, criminal justice, and unemployment benefit systems. Within the scenarios, they further compared different situations, including whether the decision leads to a positive or negative outcome, where the data for evaluation comes from, and who makes the final decision — human, computer, or both.
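    To illustrate the structure of such a factorial vignette design (the factor labels below are assumed stand-ins for this sketch, not the survey’s exact wording), the following Python snippet enumerates the combinations of domain, outcome, data source and decision-maker a respondent could be shown:
```python
from itertools import product

# Illustrative factorial vignette design; the factor labels are assumptions
# for this sketch, not the exact conditions used in the published survey.
domains = ["banking", "employment", "criminal justice", "unemployment benefits"]
outcomes = ["positive outcome", "negative outcome"]
data_sources = ["scenario-related data only", "additional data from the internet"]
decision_makers = ["human", "computer", "computer with human supervision"]

vignettes = list(product(domains, outcomes, data_sources, decision_makers))
print(f"{len(vignettes)} distinct vignette conditions, for example:")
for vignette in vignettes[:3]:
    print("  -", vignette)
```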
    “As expected, we saw that completely automated decision-making was not favored,” says computational social scientist and co-first author Christoph Kern of the University of Mannheim. “But what was interesting is that when you have human supervision over the automated decision-making, the level of perceived fairness becomes similar to human-centered decision-making.” The results showed that people perceive a decision as fairer when humans are involved.
    People also had more concerns over fairness when decisions related to the criminal justice system or job prospects, where the stakes are higher. Possibly weighing losses more heavily than gains, the participants deemed decisions that can lead to positive outcomes fairer than those leading to negative ones. Compared with systems that rely only on scenario-related data, those that draw on additional unrelated data from the internet were considered less fair, confirming the importance of data transparency and privacy. Together, the results showed that context matters. Automated decision-making systems need to be carefully designed when concerns for fairness arise.
    While hypothetical situations in the survey may not fully translate to the real world, the team is already brainstorming next steps to better understand fairness. They plan on taking the study further to understand how different people define fairness. They also want to use similar surveys to ask more questions about ideas such as distributive justice, the fairness of resource allocation among the community.
    “In a way, we hope that people in the industry can take these results as food for thought and as things they should check before developing and deploying an automated decision-making system,” says Bach. “We also need to ensure that people understand how the data is processed and how decisions are made based on it.”
    Story Source:
    Materials provided by Cell Press.

  • Bitcoin mining is environmentally unsustainable, researchers find

    Taken as a share of the market price, the climate change impacts of mining the digital cryptocurrency Bitcoin are more comparable to the impacts of extracting and refining crude oil than to those of mining gold, according to an analysis published in Scientific Reports by researchers at The University of New Mexico.
    The authors suggest that rather than being considered akin to ‘digital gold’, Bitcoin should instead be compared to much more energy-intensive products such as beef, natural gas, and crude oil.
    “We find no evidence that Bitcoin mining is becoming more sustainable over time,” said UNM Economics Associate Professor Benjamin A. Jones. “Rather, our results suggest the opposite: Bitcoin mining is becoming dirtier and more damaging to the climate over time. In short, Bitcoin’s environmental footprint is moving in the wrong direction.”
    In December 2021, Bitcoin had a market capitalization of approximately 960 billion US dollars and a roughly 41 percent global market share among cryptocurrencies. Although Bitcoin mining is known to be energy intensive, the extent of its climate damages is unclear.
    Jones and colleagues Robert Berrens and Andrew Goodkind present economic estimates of climate damages from Bitcoin mining between January 2016 and December 2021. They report that in 2020 Bitcoin mining used 75.4 terawatt hours (TWh) of electricity — higher electricity usage than Austria (69.9 TWh) or Portugal (48.4 TWh) in that year.
    “Globally, the mining, or production, of Bitcoin is using tremendous amounts of electricity, mostly from fossil fuels, such as coal and natural gas. This is causing huge amounts of air pollution and carbon emissions, which is negatively impacting our global climate and our health,” said Jones. “We find several instances between 2016-2021 where Bitcoin is more damaging to the climate than a single Bitcoin is actually worth. Put differently, Bitcoin mining, in some instances, creates climate damages in excess of a coin’s value. This is extremely troubling from a sustainability perspective.”
    The authors assessed Bitcoin climate damages according to three sustainability criteria: whether the estimated climate damages are increasing over time; whether the climate damages of Bitcoin exceeds the market price; and how the climate damages as a share of market price compare to other sectors and commodities.
    They find that the CO2 equivalent emissions from electricity generation for Bitcoin mining have increased 126-fold, from 0.9 tonnes per coin in 2016 to 113 tonnes per coin in 2021. Calculations suggest each Bitcoin mined in 2021 generated 11,314 US dollars (USD) in climate damages, with total global damages exceeding 12 billion USD between 2016 and 2021. Damages peaked at 156% of the coin price in May 2020, suggesting that each 1 USD of Bitcoin market value created that month led to 1.56 USD in global climate damages.
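    As a rough back-of-the-envelope check on how such figures fit together (the study itself uses more detailed damage modelling), the sketch below multiplies per-coin emissions by an assumed social cost of carbon and compares the result to an assumed coin price; the 100 USD per tonne and 45,000 USD values are illustrative placeholders, not numbers taken from the paper. At roughly 100 USD per tonne, 113 tonnes per coin works out to about 11,300 USD, close to the reported per-coin figure.
```python
# Back-of-the-envelope estimate of per-coin climate damages and their share
# of market value. The social cost of carbon and the coin price below are
# illustrative assumptions, not values reported in the study.

CO2E_TONNES_PER_COIN_2021 = 113     # reported emissions per coin mined in 2021
SOCIAL_COST_OF_CARBON_USD = 100     # assumed damages per tonne of CO2e (placeholder)
COIN_PRICE_USD = 45_000             # assumed average market price (placeholder)

damages_per_coin = CO2E_TONNES_PER_COIN_2021 * SOCIAL_COST_OF_CARBON_USD
damage_share = damages_per_coin / COIN_PRICE_USD

print(f"Estimated climate damages per coin: ${damages_per_coin:,.0f}")
print(f"Damages as a share of market value: {damage_share:.0%}")
```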
    “Across the class of digitally scarce goods, our focus is on those cryptocurrencies that rely on proof-of-work (POW) production techniques, which can be highly energy intensive,” said Regents Professor of Economics Robert Berrens. “Within broader efforts to mitigate climate change, the policy challenge is creating governance mechanisms for an emergent, decentralized industry, which includes energy-intensive POW cryptocurrencies. We believe that such efforts would be aided by measurable, empirical signals concerning potentially unsustainable climate damages, in monetary terms.”
    Finally, the authors compared Bitcoin climate damages to damages from other industries and products such as electricity generation from renewable and non-renewable sources, crude oil processing, agricultural meat production, and precious metal mining. Climate damages for Bitcoin averaged 35% of its market value between 2016 and 2021. This share for Bitcoin was slightly less than the climate damages as a share of market value of electricity produced by natural gas (46%) and gasoline produced from crude oil (41%), but more than those of beef production (33%) and gold mining (4%).
    The authors conclude that Bitcoin does not meet any of the three key sustainability criteria they assessed it against. Absent voluntary switching away from proof-of-work mining, as very recently done for the cryptocurrency Ether, regulation may be required to make Bitcoin mining sustainable.

  • 3D printing can now manufacture customized sensors for robots, pacemakers, and more

    A newly developed 3D printing technique could be used to cost-effectively produce customized electronic “machines” the size of insects, enabling advanced applications in robotics, medical devices and other fields.
    The breakthrough could be a potential game-changer for manufacturing customized chip-based microelectromechanical systems (MEMS). These mini-machines are mass-produced in large volumes for hundreds of electronic products, including smartphones and cars, where they provide positioning accuracy. But for more specialized manufacturing of sensors in smaller volumes, such as accelerometers for aircraft and vibration sensors for industrial machinery, MEMS technologies demand costly customization.
    Frank Niklaus, who led the research at KTH Royal Institute of Technology in Stockholm, says the new 3D printing technique, which was published in Nature Microsystems & Nanoengineering, provides a way to get around the limitations of conventional MEMS manufacturing.
    “The costs of manufacturing process development and device design optimizations do not scale down for lower production volumes,” he says. The result is that engineers are faced with a choice between suboptimal off-the-shelf MEMS devices and economically unviable start-up costs.
    Other low-volume products that could benefit from the technique include motion and vibration control units for robots and industrial tools, as well as wind turbines.
    The researchers built on a process called two-photon polymerization, which can produce high-resolution objects as small as a few hundred nanometers but cannot by itself provide sensing functionality. To form the transducing elements, the method uses a technique called shadow masking, which works something like a stencil. On the 3D-printed structure the researchers fabricate features with a T-shaped cross-section, which work like umbrellas. They then deposit metal from above, and as a result the sides of the T-shaped features are not coated with the metal. This means the metal on top of the T is electrically isolated from the rest of the structure.
    With this method, he says, it takes only a few hours to manufacture a dozen or so custom-designed MEMS accelerometers using relatively inexpensive commercial manufacturing tools. The method can be used for prototyping MEMS devices and for manufacturing small and medium-sized batches, from a few thousand to tens of thousands of MEMS sensors per year, in an economically viable way, he says.
    “This is something that has not been possible until now, because the start-up costs for manufacturing a MEMS product using conventional semiconductor technology are on the order of hundreds of thousands of dollars and the lead times are several months or more,” he says. “The new capabilities offered by 3D-printed MEMS could result in a new paradigm in MEMS and sensor manufacturing.
    “Scalability isn’t just an advantage in MEMS production, it’s a necessity. This method would enable fabrication of many kinds of new, customized devices.”
    Story Source:
    Materials provided by KTH Royal Institute of Technology. Original written by David Callahan.

  • Physicists take self-assembly to new level by mimicking biology

    A team of physicists has created a new way to self-assemble particles — an advance that offers new promise for building complex and innovative materials at the microscopic level.
    Self-assembly, introduced in the early 2000s, gives scientists a means to “pre-program” particles, allowing for the building of materials without further human intervention — the microscopic equivalent of Ikea furniture that can assemble itself.
    The breakthrough, reported in the journal Nature, centers on emulsions — droplets of oil immersed in water — and their use in the self-assembly of foldamers, which are unique shapes that can be theoretically predicted from the sequence of droplet interactions.
    The self-assembly process borrows from the field of biology, mimicking the folding of proteins and RNA using colloids. In the Nature work, the researchers created tiny oil-based droplets in water, each carrying an array of DNA sequences that served as assembly “instructions.” These droplets first assemble into flexible chains and then sequentially collapse, or fold, via sticky DNA molecules. This folding yields a dozen types of foldamers and, with further specificity, could encode more than half of the 600 possible geometric shapes.
    “Being able to pre-program colloidal architectures gives us the means to create materials with intricate and innovative properties,” explains Jasna Brujic, a professor in New York University’s Department of Physics and one of the researchers. “Our work shows how hundreds of self-assembled geometries can be uniquely created, offering new possibilities for the creation of the next generation of materials.”
    The research also included Angus McMullen, a postdoctoral fellow in NYU’s Department of Physics, as well as Maitane Muñoz Basagoiti and Zorana Zeravcic of ESPCI Paris.
    The scientists emphasize the counterintuitive, and pioneering, aspect of the method: Rather than requiring a large number of building blocks to encode precise shapes, its folding technique means only a few are necessary because each block can adopt a variety of forms.
    “Unlike a jigsaw puzzle, in which every piece is different, our process uses only two types of particles, which greatly reduces the variety of building blocks needed to encode a particular shape,” explains Brujic. “The innovation lies in using folding similar to the way that proteins do, but on a length scale 1,000 times bigger — about one-tenth the width of a strand of hair. These particles first bind together to make a chain, which then folds according to preprogrammed interactions that guide the chain through complex pathways into a unique geometry.”
    “The ability to obtain a lexicon of shapes opens the path to further assembly into larger scale materials, just as proteins hierarchically aggregate to build cellular compartments in biology,” she adds.
    Story Source:
    Materials provided by New York University.

  • Scalable and fully coupled quantum-inspired processor solves optimization problems

    Have you ever been faced with a problem where you had to find an optimal solution out of many possible options, such as finding the quickest route to a certain place, considering both distance and traffic? If so, the problem you were dealing with is what is formally known as a “combinatorial optimization problem.” While mathematically formulated, these problems are common in the real world and spring up across several fields, including logistics, network routing, machine learning, and materials science.
    However, large-scale combinatorial optimization problems are very computationally intensive to solve using standard computers, making researchers turn to other approaches. One such approach is based on the “Ising model,” which mathematically represents the magnetic orientation of atoms, or “spins,” in a ferromagnetic material. At high temperatures, these atomic spins are oriented randomly. But as the temperature decreases, the spins line up to reach the minimum energy state where the orientation of each spin depends on its neighbors. It turns out that this process, known as “annealing,” can be used to model combinatorial optimization problems such that the final state of the spins yields the optimal solution.
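    To make the mapping concrete, the minimal Python sketch below runs simulated annealing on a small fully connected Ising model whose couplings encode a max-cut instance. It is a software illustration of the principle only, not the LSI hardware described in the study, and the random graph, temperature and cooling schedule are arbitrary choices.
```python
import math
import random

# Simulated annealing on a small fully connected Ising model.
# Max-cut mapping: for an edge (i, j) with weight w, set the coupling
# J[i][j] = w. Minimizing E = sum_{i<j} J[i][j]*s[i]*s[j] then maximizes
# the total weight of cut edges, since cut edges contribute -w and
# uncut edges contribute +w to the energy.

random.seed(0)
N = 8
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = J[j][i] = float(random.choice([0, 1, 2]))  # random edge weights

spins = [random.choice([-1, 1]) for _ in range(N)]
T = 5.0                                  # initial "temperature"
for step in range(20000):
    i = random.randrange(N)
    # Energy change from flipping spin i: only its couplings change sign.
    dE = -2 * spins[i] * sum(J[i][j] * spins[j] for j in range(N) if j != i)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]
    T *= 0.9995                          # geometric cooling schedule

cut_weight = sum(J[i][j] for i in range(N) for j in range(i + 1, N)
                 if spins[i] != spins[j])
print("Spin assignment:", spins)
print("Cut weight     :", cut_weight)
```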
    Researchers have tried creating annealing processors that mimic the behavior of spins using quantum devices, and have attempted to develop semiconductor devices using large-scale integration (LSI) technology aiming to do the same. In particular, Professor Takayuki Kawahara’s research group at Tokyo University of Science (TUS) in Japan has been making important breakthroughs in this particular field.
    In 2020, Prof. Kawahara and his colleagues presented one of the first fully coupled (that is, accounting for all possible spin-spin interactions instead of interactions with only neighboring spins) LSI annealing processors, comprising 512 fully connected spins, at the IEEE SAMI 2020 international conference. Their work appeared in the journal IEEE Transactions on Circuits and Systems I: Regular Papers. These systems are notoriously hard to implement and upscale owing to the sheer number of connections between spins that need to be considered. While using multiple fully connected chips in parallel was a potential solution to the scalability problem, this made the required number of interconnections (wires) between chips prohibitively large.
    In a recent study published in Microprocessors and Microsystems, Prof. Kawahara and his colleague demonstrated a clever solution to this problem. They developed a new method in which the calculation of the system’s energy state is divided among multiple fully coupled chips first, forming an “array calculator.” A second type of chip, called “control chip,” then collects the results from the rest of the chips and computes the total energy, which is used to update the values of the simulated spins. “The advantage of our approach is that the amount of data transmitted between the chips is extremely small,” explains Prof. Kawahara. “Although its principle is simple, this method allows us to realize a scalable, fully connected LSI system for solving combinatorial optimization problems through simulated annealing.”
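    The data-volume argument can be sketched in software (as an analogy only, not the actual chip architecture): the local field needed to update one spin is a sum over all of its couplings, and it can be split into per-chip partial sums so that only one scalar per array chip has to reach the control chip, regardless of how many spins each chip holds.
```python
import random

# Software analogy of the divided energy calculation: each "array chip" owns a
# block of couplings and reports a single partial sum; the "control chip" adds
# these scalars to obtain the local field h_i = sum_j J[i][j]*s[j], from which
# the energy change of flipping spin i follows as dE = -2*s[i]*h_i.

random.seed(1)
N, CHIPS = 12, 3
J = [[0 if i == j else random.choice([-1, 1]) for j in range(N)] for i in range(N)]
spins = [random.choice([-1, 1]) for _ in range(N)]
blocks = [range(c * N // CHIPS, (c + 1) * N // CHIPS) for c in range(CHIPS)]

i = 5  # spin being updated
partials = [sum(J[i][j] * spins[j] for j in block) for block in blocks]  # array chips
h_i = sum(partials)                                                      # control chip
assert h_i == sum(J[i][j] * spins[j] for j in range(N))
print("Per-chip partial fields:", partials, "-> total local field:", h_i)
```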
    The researchers successfully implemented their approach using commercial FPGA chips, which are widely used programmable semiconductor devices. They built a fully connected annealing system with 384 spins and used it to solve several optimization problems, including a 92-node graph coloring problem and a 384-node maximum cut problem. Most importantly, these proof-of-concept experiments showed that the proposed method brings true performance benefits. Compared with a standard modern CPU modeling the same annealing system, the FPGA implementation was 584 times faster and 46 times more energy efficient when solving the maximum cut problem.
    Now, with this successful demonstration of the operating principle of their method in FPGA, the researchers plan to take it to the next level. “We wish to produce a custom-designed LSI chip to increase the capacity and greatly improve the performance and power efficiency of our method,” Prof. Kawahara remarks. “This will enable us to realize the performance required in the fields of material development and drug discovery, which involve very complex optimization problems.”
    Finally, Prof. Kawahara notes that he wishes to promote the implementation of their results to solve real problems in society. His group hopes to engage in joint research with companies and bring their approach to the core of semiconductor design technology, opening doors to the revival of semiconductors in Japan.
    Story Source:
    Materials provided by Tokyo University of Science.