More stories

  • Do humans think computers make fair decisions?

    Today, machine learning helps determine the loan we qualify for, the job we get, and even who goes to jail. But when it comes to these potentially life-altering decisions, can computers make a fair call? In a study published September 29 in the journal Patterns, researchers from Germany showed that with human supervision, people think a computer’s decision can be as fair as a decision primarily made by humans.
    “A lot of the discussion on fairness in machine learning has focused on technical solutions, like how to fix unfair algorithms and how to make the systems fair,” says computational social scientist and co-author Ruben Bach of the University of Mannheim, Germany. “But our question is, what do people think is fair? It’s not just about developing algorithms. They need to be accepted by society and meet normative beliefs in the real world.”
    Automated decision-making, where a conclusion is made solely by a computer, excels at analyzing large datasets to detect patterns. Computers are often considered objective and neutral compared with humans, whose biases can cloud judgments. Yet, bias can creep into computer systems as they learn from data that reflects discriminatory patterns in our world. Understanding fairness in computer and human decisions is crucial to building a more equitable society.
    To understand what people consider fair in automated decision-making, the researchers surveyed 3,930 individuals in Germany, presenting them with hypothetical scenarios related to the banking, employment, criminal justice, and unemployment systems. Within the scenarios, they further compared different situations, including whether the decision led to a positive or negative outcome, where the data for the evaluation came from, and who made the final decision: a human, a computer, or both.
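    The scenario factors named above amount to a factorial vignette design. As a minimal, illustrative sketch (the exact factor levels are assumptions for exposition, not the study’s published design), the combinations can be enumerated like this:
    ```python
    # Hypothetical enumeration of the vignette factors described in the article.
    # The level names are illustrative assumptions, not the study's actual wording.
    from itertools import product

    domains = ["banking", "employment", "criminal justice", "unemployment"]
    outcomes = ["positive", "negative"]
    data_sources = ["scenario-related data only", "additional internet data"]
    decision_makers = ["human", "computer", "computer with human supervision"]

    vignettes = list(product(domains, outcomes, data_sources, decision_makers))
    print(len(vignettes))  # 4 * 2 * 2 * 3 = 48 hypothetical scenario variants
    print(vignettes[0])    # ('banking', 'positive', 'scenario-related data only', 'human')
    ```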
    “As expected, we saw that completely automated decision-making was not favored,” says computational social scientist and co-first author Christoph Kern of the University of Mannheim. “But what was interesting is that when you have human supervision over the automated decision-making, the level of perceived fairness becomes similar to human-centered decision-making.” The results showed that people perceive a decision as fairer when humans are involved.
    People also had more concerns about fairness when decisions related to the criminal justice system or job prospects, where the stakes are higher. Possibly because they weigh losses more heavily than gains, participants deemed decisions that can lead to positive outcomes fairer than those leading to negative ones. Compared with systems that rely only on scenario-related data, those that draw on additional, unrelated data from the internet were considered less fair, confirming the importance of data transparency and privacy. Together, the results showed that context matters: automated decision-making systems need to be carefully designed when concerns about fairness arise.
    While hypothetical situations in the survey may not fully translate to the real world, the team is already brainstorming next steps to better understand fairness. They plan on taking the study further to understand how different people define fairness. They also want to use similar surveys to ask more questions about ideas such as distributive justice, the fairness of resource allocation among the community.
    “In a way, we hope that people in the industry can take these results as food for thought and as things they should check before developing and deploying an automated decision-making system,” says Bach. “We also need to ensure that people understand how the data is processed and how decisions are made based on it.”
    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Bitcoin mining is environmentally unsustainable, researchers find

    Taken as a share of market price, the climate change impacts of mining the digital cryptocurrency Bitcoin are more comparable to the impacts of extracting and refining crude oil than to those of mining gold, according to an analysis published in Scientific Reports by researchers at The University of New Mexico.
    The authors suggest that rather than being considered akin to ‘digital gold’, Bitcoin should instead be compared to much more energy-intensive products such as beef, natural gas, and crude oil.
    “We find no evidence that Bitcoin mining is becoming more sustainable over time,” said UNM Economics Associate Professor Benjamin A. Jones. “Rather, our results suggest the opposite: Bitcoin mining is becoming dirtier and more damaging to the climate over time. In short, Bitcoin’s environmental footprint is moving in the wrong direction.”
    In December 2021, Bitcoin had a market capitalization of approximately 960 billion US dollars and a roughly 41 percent share of the global cryptocurrency market. Although Bitcoin is known to be energy intensive, the extent of its climate damages has been unclear.
    Jones and colleagues Robert Berrens and Andrew Goodkind present economic estimates of climate damages from Bitcoin mining between January 2016 and December 2021. They report that in 2020 Bitcoin mining used 75.4 terawatt hours of electricity (TWh) — higher electricity usage than Austria (69.9 TWh) or Portugal (48.4 TWh) in that year.
    “Globally, the mining, or production, of Bitcoin is using tremendous amounts of electricity, mostly from fossil fuels, such as coal and natural gas. This is causing huge amounts of air pollution and carbon emissions, which is negatively impacting our global climate and our health,” said Jones. “We find several instances between 2016-2021 where Bitcoin is more damaging to the climate than a single Bitcoin is actually worth. Put differently, Bitcoin mining, in some instances, creates climate damages in excess of a coin’s value. This is extremely troubling from a sustainability perspective.”
    The authors assessed Bitcoin’s climate damages according to three sustainability criteria: whether the estimated climate damages are increasing over time; whether the climate damages exceed the market price; and how the climate damages as a share of market price compare with those of other sectors and commodities.
    They find that the CO2-equivalent emissions from electricity generation for Bitcoin mining have increased 126-fold, from 0.9 tonnes per coin in 2016 to 113 tonnes per coin in 2021. Calculations suggest each Bitcoin mined in 2021 generated 11,314 US dollars (USD) in climate damages, with total global damages exceeding 12 billion USD between 2016 and 2021. Damages peaked at 156% of the coin price in May 2020, meaning that each 1 USD of Bitcoin market value created that month came with 1.56 USD in global climate damages.
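    The damages-as-a-share-of-price figures above follow from a simple ratio. A minimal sketch, using the article’s per-coin damage estimate and an assumed, illustrative market price (not a number from the study):
    ```python
    # Climate damages as a share of market value: share = damages / price.
    # The 11,314 USD per-coin damage figure comes from the article; the price
    # used below is an assumed, illustrative value, not a study number.
    def damage_share(damages_usd: float, price_usd: float) -> float:
        """Climate damages per coin as a fraction of the coin's market price."""
        return damages_usd / price_usd

    # 2021 average cited above: 11,314 USD in climate damages per coin mined.
    print(damage_share(11_314, 47_000))  # ~0.24 at an assumed 47,000 USD price

    # A share above 1.0 means mining a coin caused more climate damage than the
    # coin was worth, as in May 2020 when the share peaked at 1.56 (156%).
    ```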
    “Across the class of digitally scarce goods, our focus is on those cryptocurrencies that rely on proof-of-work (POW) production techniques, which can be highly energy intensive,” said Regents Professor of Economics Robert Berrens. “Within broader efforts to mitigate climate change, the policy challenge is creating governance mechanisms for an emergent, decentralized industry, which includes energy-intensive POW cryptocurrencies. We believe that such efforts would be aided by measurable, empirical signals concerning potentially unsustainable climate damages, in monetary terms.”
    Finally, the authors compared Bitcoin climate damages to damages from other industries and products such as electricity generation from renewable and non-renewable sources, crude oil processing, agricultural meat production, and precious metal mining. Climate damages for Bitcoin averaged 35% of its market value between 2016 and 2021. This share for Bitcoin was slightly less than the climate damages as a share of market value of electricity produced by natural gas (46%) and gasoline produced from crude oil (41%), but more than those of beef production (33%) and gold mining (4%).
    The authors conclude that Bitcoin does not meet any of the three key sustainability criteria they assessed it against. Absent a voluntary shift away from proof-of-work mining, as recently happened with the cryptocurrency Ether, regulation may be required to make Bitcoin mining sustainable.

  • Jacky Austermann looks to the solid earth for clues to sea level rise

    It’s no revelation that sea levels are rising. Rising temperatures brought on by human-caused climate change are melting ice sheets and expanding ocean water. What’s happening inside Earth will also shape future shorelines. Jacky Austermann is trying to understand those inner dynamics.

    A geophysicist at Columbia University’s Lamont-Doherty Earth Observatory, Austermann didn’t always know she would end up studying climate. Her fascination with math from a young age coupled with her love of nature and the outdoors — she grew up hiking in the Alps — led her to study physics as an undergraduate, and later geophysics.

    As Austermann dug deeper into Earth’s geosystems, she learned just how much the movement of hot rock in the mantle influences life on the surface. “I got really interested in this entire interplay of the solid earth and the oceans and the climate,” she says.

    Big goal

    Much of Austermann’s work focuses on how that interplay influences changes in sea level. The global average sea level has risen more than 20 centimeters since 1880, and the yearly rise is increasing. But shifts in local sea level can vary, with those levels rising or falling along different shorelines, Austermann says, and the solid earth plays a role.

    “We think about sea level change generally as ‘ice is melting, so sea level is rising.’ But there’s a lot more nuance to it,” she says. “A lot of sea level change is driven by land motion.” 

    Understanding that nuance could lead to more accurate climate models for predicting sea level rise in the future. Such work should help inform practical solutions for communities in at-risk coastal areas.

    So Austermann is building computer models that reconstruct sea level changes over the last few million years. Her models incorporate data on how the creeping churning of the mantle and other geologic phenomena have altered land and sea elevation, particularly during interglacial periods when Earth’s temperatures were a few degrees higher than they are today.

    Standout research

    Previous studies had suggested that this churning, known as mantle convection, sculpted Earth’s surface millions of years ago. “It pushes the surface up where hot material wells up,” Austermann says. “And it also drags [the surface] down where cold material sinks back into the mantle.”

    In 2015, Austermann and colleagues were the first to show that mantle-induced topographic changes influenced the melting of Antarctic ice over the last 3 million years. Near the ice sheet’s edges, ice retreated more quickly in areas where the land surface was lower due to convection.

    What’s more, mantle convection is affecting land surfaces even on relatively short time scales. Since the last interglacial period, around 130,000 to 115,000 years ago, mantle convection has warped ancient shorelines by as much as several meters, her team reported in Science Advances in 2017.  

    Jacky Austermann builds computer models that reconstruct sea level changes over the last few million years. The work could improve models that forecast the future. Credit: Bill Menke

    The growing and melting of ice sheets can deform the solid earth too, Austermann says. As land sinks under the weight of accumulating ice, local sea levels rise. And as land rebounds where ice melts, local sea levels fall. This effect, along with how the ice sheet gravitationally tugs on the water around it, is shifting local sea levels around the globe today, she says, making it very relevant to coastal areas planning their defenses in the current climate crisis.

    Understanding these geologic processes can help improve models of past sea level rise. Austermann’s team is gathering more data from the field, scouring the coasts of Caribbean islands for clues to what areas were once near or below sea level. Such clues include fossilized corals and water ripples etched in stone, as well as tiny chutes in rocks that indicate air bubbles once rose through sand on ancient beaches. The work is “really fun,” Austermann says. “It’s essentially like a scavenger hunt.”

    Her efforts put the solid earth at the forefront of the study of sea level changes, says Douglas Wiens, a seismologist at Washington University in St. Louis. Before, “a lot of those factors were kind of ignored.” What’s most remarkable is her ability “to span what we normally consider to be several different disciplines and bring them together to solve the sea level problem,” he says.

    Building community

    Austermann says the most enjoyable part of her job is working with her students and postdocs. More than writing the next big paper, she wants to cultivate a happy, healthy and motivated research group. “It’s really rewarding to see them grow academically, scientifically, come up with their own ideas … and also help each other out.”

    Roger Creel, a Ph.D. student in Austermann’s group and the first to join her lab, treasures Austermann’s mentorship. She offers realistic, clear and fluid expectations, gives prompt and thoughtful feedback and meets for regular check-ins, he says. “Sometimes I think of it like water-skiing, and Jacky’s the boat.”

    For Oana Dumitru, a postdoc in the group, one aspect of that valued mentorship came in the form of a gentle push to write and submit a grant proposal on her own. “I thought I was not ready for it, but she was like, you’ve got to try,” Dumitru says.

    Austermann prioritizes her group’s well-being, which fosters collaboration, Creel and Dumitru say. That sense of inclusion, support and community “is the groundwork for having an environment where great ideas can blossom,” Austermann says.

    Want to nominate someone for the next SN 10 list? Send their name, affiliation and a few sentences about them and their work to sn10@sciencenews.org.

  • 3D printing can now manufacture customized sensors for robots, pacemakers, and more

    A newly developed 3D printing technique could be used to cost-effectively produce customized, insect-sized electronic “machines” that enable advanced applications in robotics, medical devices and other fields.
    The breakthrough could be a game-changer for manufacturing customized chip-based microelectromechanical systems (MEMS). These mini-machines are mass-produced for hundreds of electronic products, including smartphones and cars, where they provide, for example, positioning accuracy. But for more specialized sensors needed in smaller volumes, such as accelerometers for aircraft and vibration sensors for industrial machinery, MEMS technologies demand costly customization.
    Frank Niklaus, who led the research at KTH Royal Institute of Technology in Stockholm, says the new 3D printing technique, which was published in Nature Microsystems & Nanoengineering, provides a way to get around the limitations of conventional MEMS manufacturing.
    “The costs of manufacturing process development and device design optimizations do not scale down for lower production volumes,” he says. The result is engineers are faced with a choice of suboptimal off-the-shelf MEMS devices or economically unviable start-up costs.
    Other low-volume products that could benefit from the technique include motion and vibration control units for robots and industrial tools, as well as wind turbines.
    The researchers built on a process called two-photon polymerization, which can produce high-resolution objects with features as small as a few hundred nanometers but by itself provides no sensing functionality. To form the transducing elements, the method uses a technique called shadow masking, which works something like a stencil. On the 3D-printed structure, the researchers fabricate features with a T-shaped cross section, which work like umbrellas. They then deposit metal from above, and as a result the sides of the T-shaped features are not coated with metal, so the metal on top of the T is electrically isolated from the rest of the structure.
    With this method, he says, it takes only a few hours to manufacture a dozen or so custom-designed MEMS accelerometers using relatively inexpensive commercial manufacturing tools. The method can be used for prototyping MEMS devices and for manufacturing small and medium-sized batches of a few thousand to tens of thousands of MEMS sensors per year in an economically viable way, he says.
    “This is something that has not been possible until now, because the start-up costs for manufacturing a MEMS product using conventional semiconductor technology are on the order of hundreds of thousands of dollars and the lead times are several months or more,” he says. “The new capabilities offered by 3D-printed MEMS could result in a new paradigm in MEMS and sensor manufacturing.
    “Scalability isn’t just an advantage in MEMS production, it’s a necessity. This method would enable fabrication of many kinds of new, customized devices.”
    Story Source:
    Materials provided by KTH, Royal Institute of Technology. Original written by David Callahan. Note: Content may be edited for style and length.

  • Physicists take self-assembly to new level by mimicking biology

    A team of physicists has created a new way to self-assemble particles — an advance that offers new promise for building complex and innovative materials at the microscopic level.
    Self-assembly, introduced in the early 2000s, gives scientists a means to “pre-program” particles, allowing for the building of materials without further human intervention — the microscopic equivalent of Ikea furniture that can assemble itself.
    The breakthrough, reported in the journal Nature, centers on emulsions — droplets of oil immersed in water — and their use in the self-assembly of foldamers, which are unique shapes that can be theoretically predicted from the sequence of droplet interactions.
    The self-assembly process borrows from the field of biology, mimicking the folding of proteins and RNA using colloids. In the Nature work, the researchers created tiny, oil-based droplets in water carrying an array of DNA sequences that served as assembly “instructions.” These droplets first assemble into flexible chains and then sequentially collapse, or fold, via sticky DNA molecules. This folding yields a dozen types of foldamers, and further specificity could encode more than half of the 600 possible geometric shapes.
    “Being able to pre-program colloidal architectures gives us the means to create materials with intricate and innovative properties,” explains Jasna Brujic, a professor in New York University’s Department of Physics and one of the researchers. “Our work shows how hundreds of self-assembled geometries can be uniquely created, offering new possibilities for the creation of the next generation of materials.”
    The research also included Angus McMullen, a postdoctoral fellow in NYU’s Department of Physics, as well as Maitane Muñoz Basagoiti and Zorana Zeravcic of ESPCI Paris.
    The scientists emphasize the counterintuitive, and pioneering, aspect of the method: Rather than requiring a large number of building blocks to encode precise shapes, its folding technique means only a few are necessary because each block can adopt a variety of forms.
    “Unlike a jigsaw puzzle, in which every piece is different, our process uses only two types of particles, which greatly reduces the variety of building blocks needed to encode a particular shape,” explains Brujic. “The innovation lies in using folding similar to the way that proteins do, but on a length scale 1,000 times bigger — about one-tenth the width of a strand of hair. These particles first bind together to make a chain, which then folds according to preprogrammed interactions that guide the chain through complex pathways into a unique geometry.”
    “The ability to obtain a lexicon of shapes opens the path to further assembly into larger scale materials, just as proteins hierarchically aggregate to build cellular compartments in biology,” she adds.
    Story Source:
    Materials provided by New York University. Note: Content may be edited for style and length.

  • Scalable and fully coupled quantum-inspired processor solves optimization problems

    Have you ever been faced with a problem where you had to find an optimal solution out of many possible options, such as finding the quickest route to a certain place, considering both distance and traffic? If so, the problem you were dealing with is what is formally known as a “combinatorial optimization problem.” While mathematically formulated, these problems are common in the real world and spring up across several fields, including logistics, network routing, machine learning, and materials science.
    However, large-scale combinatorial optimization problems are very computationally intensive to solve using standard computers, making researchers turn to other approaches. One such approach is based on the “Ising model,” which mathematically represents the magnetic orientation of atoms, or “spins,” in a ferromagnetic material. At high temperatures, these atomic spins are oriented randomly. But as the temperature decreases, the spins line up to reach the minimum energy state where the orientation of each spin depends on its neighbors. It turns out that this process, known as “annealing,” can be used to model combinatorial optimization problems such that the final state of the spins yields the optimal solution.
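    As a concrete, minimal illustration of the annealing idea (a generic textbook sketch, not the processor described below; the couplings, problem size, and cooling schedule are arbitrary assumptions):
    ```python
    # Simulated annealing on a small, fully connected Ising model (illustrative
    # only; not the LSI/FPGA hardware discussed in the article).
    import math
    import random

    def anneal(J, steps=20_000, t_start=5.0, t_end=0.01):
        """Anneal spins (+1/-1) coupled by a symmetric matrix J; E = -sum J_ij s_i s_j."""
        n = len(J)
        spins = [random.choice([-1, 1]) for _ in range(n)]
        for step in range(steps):
            t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
            i = random.randrange(n)
            # Energy change from flipping spin i: dE = 2 * s_i * sum_j J[i][j] * s_j
            dE = 2 * spins[i] * sum(J[i][j] * spins[j] for j in range(n) if j != i)
            if dE <= 0 or random.random() < math.exp(-dE / t):
                spins[i] = -spins[i]  # accept the flip (Metropolis rule)
        return spins

    random.seed(0)
    n = 6
    J = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            J[i][j] = J[j][i] = random.choice([-1, 1])  # random symmetric couplings
    print(anneal(J))  # a low-energy spin configuration for this small problem
    ```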
    Researchers have tried creating annealing processors that mimic the behavior of spins using quantum devices, and have attempted to develop semiconductor devices using large-scale integration (LSI) technology aiming to do the same. In particular, Professor Takayuki Kawahara’s research group at Tokyo University of Science (TUS) in Japan has been making important breakthroughs in this particular field.
    In 2020, at the international conference IEEE SAMI 2020, Prof. Kawahara and his colleagues presented one of the first fully coupled LSI annealing processors (that is, accounting for all possible spin-spin interactions instead of only interactions between neighboring spins), comprising 512 fully connected spins. Their work appeared in the journal IEEE Transactions on Circuits and Systems I: Regular Papers. Such systems are notoriously hard to implement and scale up owing to the sheer number of spin-spin connections that need to be considered. While using multiple fully connected chips in parallel was a potential solution to the scalability problem, this made the required number of interconnections (wires) between chips prohibitively large.
    In a recent study published in Microprocessors and Microsystems, Prof. Kawahara and his colleague demonstrated a clever solution to this problem. They developed a new method in which the calculation of the system’s energy state is divided among multiple fully coupled chips first, forming an “array calculator.” A second type of chip, called “control chip,” then collects the results from the rest of the chips and computes the total energy, which is used to update the values of the simulated spins. “The advantage of our approach is that the amount of data transmitted between the chips is extremely small,” explains Prof. Kawahara. “Although its principle is simple, this method allows us to realize a scalable, fully connected LSI system for solving combinatorial optimization problems through simulated annealing.”
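    The divide-and-aggregate scheme can be sketched in ordinary software. In this hedged illustration (the chunking, function names, and update rule are assumptions for exposition, not the actual chip architecture), each “array calculator” returns only a scalar partial energy for its block of the coupling matrix, and a “control” step sums those partial energies and decides whether a trial spin flip is kept:
    ```python
    # Software sketch of splitting the Ising energy calculation across blocks
    # ("array calculators") and aggregating in a "control" step. Illustrative
    # assumptions only; not the FPGA/LSI implementation from the study.
    def block_energy(J, spins, rows):
        """One 'array calculator': energy contribution of its block of rows."""
        return -sum(J[i][j] * spins[i] * spins[j]
                    for i in rows for j in range(i + 1, len(spins)))

    def total_energy(J, spins, n_blocks=4):
        """The 'control' step: sum the scalar partial energies from each block."""
        n = len(J)
        chunk = (n + n_blocks - 1) // n_blocks
        blocks = [range(start, min(start + chunk, n)) for start in range(0, n, chunk)]
        return sum(block_energy(J, spins, rows) for rows in blocks)

    def try_flip(J, spins, i):
        """Keep a trial flip of spin i only if it does not raise the total energy."""
        before = total_energy(J, spins)
        spins[i] = -spins[i]
        if total_energy(J, spins) > before:
            spins[i] = -spins[i]  # reject the flip and restore the spin
        return spins
    ```
    The point echoed here is that only small pieces of data, the current spin values and one scalar energy per block, cross the boundary between the blocks and the aggregating step.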
    The researchers successfully implemented their approach using commercial FPGA chips, which are widely used programmable semiconductor devices. They built a fully connected annealing system with 384 spins and used it to solve several optimization problems, including a 92-node graph coloring problem and a 384-node maximum cut problem. Most importantly, these proof-of-concept experiments showed that the proposed method brings real performance benefits. Compared with a standard modern CPU modeling the same annealing system, the FPGA implementation was 584 times faster and 46 times more energy efficient when solving the maximum cut problem.
    Now, with this successful demonstration of the operating principle of their method in FPGA, the researchers plan to take it to the next level. “We wish to produce a custom-designed LSI chip to increase the capacity and greatly improve the performance and power efficiency of our method,” Prof. Kawahara remarks. “This will enable us to realize the performance required in the fields of material development and drug discovery, which involve very complex optimization problems.”
    Finally, Prof. Kawahara notes that he wishes to promote the implementation of their results to solve real problems in society. His group hopes to engage in joint research with companies and bring their approach to the core of semiconductor design technology, opening doors to the revival of semiconductors in Japan.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Engineers discover new process for synthetic material growth, enabling soft robots that grow like plants

    An interdisciplinary team of University of Minnesota Twin Cities scientists and engineers has developed a first-of-its-kind, plant-inspired extrusion process that enables synthetic material growth. The new approach will allow researchers to build better soft robots that can navigate hard-to-reach places, complicated terrain, and potentially areas within the human body.
    The paper is published in the Proceedings of the National Academy of Sciences (PNAS).
    “This is the first time these concepts have been fundamentally demonstrated,” said Chris Ellison, a lead author of the paper and professor in the University of Minnesota Twin Cities Department of Chemical Engineering and Materials Science. “Developing new ways of manufacturing is paramount for the competitiveness of our country and for bringing new products to people. On the robotic side, robots are being used more and more in dangerous, remote environments, and these are the kinds of areas where this work could have an impact.”
    Soft robotics is an emerging field where robots are made of soft, pliable materials as opposed to rigid ones. Soft growing robots can create new material and “grow” as they move. These machines could be used for operations in remote areas where humans can’t go, such as inspecting or installing tubes underground or navigating inside the human body for biomedical applications.
    Current soft growing robots drag a trail of solid material behind them and can use heat and/or pressure to transform that material into a more permanent structure, much like how a 3D printer is fed solid filament to produce its shaped product. However, the trail of solid material gets more difficult to pull around bends and turns, making it hard for the robots to navigate terrain with obstacles or winding paths.
    The University of Minnesota team solved this problem by developing a new means of extrusion, a process where material is pushed through an opening to create a specific shape. Using this new process allows the robot to create its synthetic material from a liquid instead of a solid.

  • As few as 1 in 5 COVID cases may have been counted worldwide, mathematical models suggest

    Mathematical models indicate that as few as one in five of the COVID-19 cases that occurred during the first 29 months of the pandemic is accounted for in the half billion cases officially reported.
    The World Health Organization reported 513,955,910 cases and 6,190,349 deaths from Jan. 1, 2020, to May 6, 2022, numbers that have already made COVID-19 a top killer in some countries, including the United States, where it ranks just behind heart disease and cancer, according to the Centers for Disease Control and Prevention.
    Still, the mathematical models indicate overall underreporting ranging from 1 reported case in every 1.2 actual cases to 1 in every 4.7, investigators report in the journal Current Science. That underreporting translates to global estimates of between 600 million and 2.4 billion cases.
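    The scaling behind those totals is straightforward multiplication. A minimal sketch using the figures reported above (an underreporting factor of “1 in k” means only one of every k actual cases was officially counted):
    ```python
    # Reproduce the article's range: estimated true cases = reported cases * k,
    # where "1 reported case in every k actual cases" is the underreporting factor.
    reported_cases = 513_955_910  # WHO, Jan. 1, 2020 to May 6, 2022

    for k in (1.2, 4.7):
        estimated_total = reported_cases * k
        print(f"1 in {k}: about {estimated_total / 1e9:.1f} billion cases")
    # Prints roughly 0.6 billion and 2.4 billion, matching the reported range.
    ```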
    “We all acknowledge a huge impact on us as individuals, a nation and the world, but the true number of cases is very likely much higher than we realize,” says Dr. Arni S.R. Srinivasa Rao, director of the Laboratory for Theory and Mathematical Modeling in the Division of Infectious Diseases at the Medical College of Georgia. “We are trying to understand the extent of underreported cases.”
    The wide range of estimates generated by their models indicates problems with the accuracy of reported numbers, which include data tampering, the inability to conduct accurate case tracking and a lack of uniformity in how cases are reported, write Rao and his colleagues Dr. Steven G. Krantz, professor of mathematics at Washington University in St. Louis, Missouri, and Dr. David A. Swanson, Edward A. Dickson Emeritus Professor in the Department of Sociology at the University of California, Riverside.
    A dearth of information and inconsistency in reporting cases has been a major problem with getting a true picture of the impact of the pandemic, Rao says.
    Mathematical models use whatever information is available as well as relevant factors like global transmission rates and the number of people in the world, including the average population over the 29-month timeframe. That average, referred to as the effective population, better accounts for those who were born and died for any reason and so provides a more realistic number of the people out there who could potentially be infected, Rao says.
    “You have to know the true burden on patients and their families, on hospitals and caregivers, on the economy and the government,” Rao says. More accurate numbers also help in assessing indirect implications like the underdiagnosis of potentially long-term neurological and mental disorders that are now known to be directly associated with infection, he says.
    The mathematics experts had published similar model-based estimates for eight countries earlier in the pandemic in 2020, to provide more perspective on what they said then was clear underreporting. Their modeling predicted countries like Italy, despite their diligence in reporting, were likely capturing 1 in 4 actual cases while in China, where population numbers are tremendous, they calculated a huge range of potential underreporting, from 1 in 149 to 1 in 1,104 cases.
    Other contributors to underreporting include the reality that not everyone who has gotten COVID-19 has been tested. Also, a significant percentage of people, even vaccinated and boosted individuals, are getting infected more than once, and may only go to the doctor for PCR testing the first time, potentially relying on at-home tests or no test at all for subsequent illnesses. For example, a recent report in JAMA on reinfection rates in Iceland during the first 74 days of the Omicron variant wave there indicates, based on PCR testing, that reinfection rates were 10.9%, with a high of 15.1% among 18- to 29-year-olds, for those who received two or more doses of a vaccine.
    The number of fully vaccinated individuals globally reached a reported 5.1 billion by the end of their 29-month study timeframe.
    The CDC was reporting downward trends in new cases, hospitalizations and deaths in the United States from August to September.