More stories

  • Machine learning helps predict when immunotherapy will be effective

    When it comes to defense, the body goes on the attack thanks to the lymphatic and immune systems. The immune system is like the body’s own personal police force, hunting down and eliminating pathogenic villains.
    “The body’s immune system is very good at identifying cells that are acting strangely. These include cells that could develop into tumors or cancer in the future,” says Federica Eduati from the department of Biomedical Engineering at TU/e. “Once detected, the immune system strikes and kills the cells.”
    Stopping the attack
    But it’s not always so straightforward: tumor cells can develop ways to hide themselves from the immune system.
    “Unfortunately, tumor cells can block the natural immune response. Proteins on the surface of a tumor cell can turn off the immune cells and effectively put them in sleep mode,” says Oscar Lapuente-Santana, PhD researcher in the Computational Biology group.
    Fortunately, there is a way to wake up the immune cells and restore their antitumor immunity, and it’s based on immunotherapy.
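    The excerpt does not spell out the model the TU/e team uses, but the general shape of such a predictor can be sketched: learn a mapping from tumor molecular features to immunotherapy response. Below is a minimal sketch assuming synthetic gene-expression data and an off-the-shelf random-forest classifier; both are illustrative stand-ins, not the study’s actual features or method.

    ```python
    # Minimal sketch: predicting immunotherapy response from (synthetic)
    # tumor gene-expression features. Features, labels, and model choice
    # are illustrative assumptions, not the method used in the study.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_patients, n_genes = 200, 50
    X = rng.normal(size=(n_patients, n_genes))      # expression profiles
    signal = X[:, :5].sum(axis=1)                   # assume a few "immune" genes drive response
    y = signal + rng.normal(size=n_patients) > 0    # responder / non-responder

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"Cross-validated AUC: {scores.mean():.2f}")
    ```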

  • Common errors in internet energy analysis

    When it comes to understanding and predicting trends in energy use, the internet is a tough nut to crack. So say energy researchers Eric Masanet, of UC Santa Barbara, and Jonathan Koomey, of Koomey Analytics. The two just published a peer-reviewed commentary in the journal Joule discussing the pitfalls that plague estimates of the internet’s energy and carbon impacts.
    The paper describes how these errors can lead well-intentioned studies to predict massive energy growth in the information technology (IT) sector, which often doesn’t materialize. “We’re not saying the energy use of the internet isn’t a problem, or that we shouldn’t worry about it,” Masanet explained. “Rather, our main message is that we all need to get better at analyzing internet energy use and avoiding these pitfalls moving forward.”
    Masanet, the Mellichamp Chair in Sustainability Science for Emerging Technologies at UCSB’s Bren School of Environmental Science & Management, has researched energy analysis of IT systems for more than 15 years. Koomey, who has studied the subject for over three decades, was for many years a staff scientist and group leader at Lawrence Berkeley National Lab, and has served as a visiting professor at Stanford University, Yale University and UC Berkeley. The article, which has no external funding source, arose out of their combined experiences and observations and was motivated by the rising public interest in internet energy use. Although the piece contains no new data or conclusions about the current energy use or environmental impacts of different technologies and sectors, it raises some important technical issues the field currently faces.
    Masanet and Koomey’s work involves gathering data and building models of energy use to understand trends and make predictions. Unfortunately, IT systems are complicated and data is scarce. “The internet is a really complex system of technologies and it changes fast,” Masanet said. What’s more, in the competitive tech industry, companies often guard energy and performance data as proprietary trade secrets. “There’s a lot of engineering that goes into their operations,” he added, “and they often don’t want to give that up.”
    Four fallacies
    This feeds directly into the first of four major pitfalls the two researchers identified: oversimplification. Every model is a simplification of a real-world system. It has to be. But simplification becomes a pitfall when analysts overlook important aspects of the system. For example, models that underestimate improvements to data center efficiency often overestimate growth in their energy use.
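    A toy projection makes the pitfall concrete: a model that tracks data traffic growth but holds efficiency fixed predicts runaway energy use, while the same model with efficiency gains included does not. The growth rates below are made-up assumptions, purely for illustration.

    ```python
    # Toy illustration of the "oversimplification" pitfall in internet
    # energy projections. Both growth rates are assumed, not measured.
    years = 10
    traffic_growth = 1.25      # +25% data traffic per year (assumption)
    efficiency_gain = 1.20     # +20% energy efficiency per year (assumption)

    energy_naive = 1.0         # relative energy use, base year = 1.0
    energy_adjusted = 1.0
    for _ in range(years):
        energy_naive *= traffic_growth                      # efficiency ignored
        energy_adjusted *= traffic_growth / efficiency_gain

    print(f"Naive model after {years} years: {energy_naive:.1f}x energy use")
    print(f"With efficiency gains:           {energy_adjusted:.1f}x energy use")
    # Prints roughly 9.3x vs 1.5x: ignoring efficiency inflates the forecast.
    ```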

  • Researchers look to human 'social sensors' to better predict elections and other trends

    Election outcomes are notoriously difficult to predict. In 2016, for example, most polls suggested that Hillary Clinton would win the presidency, but Donald Trump defeated her. Researchers cite multiple explanations for the unreliability in election forecasts — some voters are difficult to reach, and some may wish to remain hidden. Among those who do respond to surveys, some may change their minds after being polled, while others may be embarrassed or afraid to report their true intentions.
    In a new perspective piece for Nature, Santa Fe Institute researchers Mirta Galesic, Jonas Dalege, Henrik Olsson, Daniel Stein, Tamara van der Does, and their collaborators* propose a surprising way to get around these shortcomings in survey design — not just in the world of politics, but in other types of research as well. While it’s widely assumed that cognitive bias clouds our assessment of the people around us, their research and that of others suggests that in fact, our estimations of what our friends and family believe are often accurate.
    “We realized that if we ask a national sample of people about who their friends are going to vote for, we get more accurate predictions than if we ask them who they’re going to vote for,” says Galesic, who is the corresponding author. “We found that people are actually pretty good at estimating the beliefs of people around them.”
    That means researchers can gather highly accurate information about social trends and groups by asking about a person’s social circle rather than interrogating their own individual beliefs. That’s because as highly social creatures, we have become very good at sizing up those around us — what researchers call “social sensing.”
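    A minimal simulation conveys why this can work: a self-report poll inherits any reluctance to disclose, while asking each respondent about several contacts effectively enlarges and diversifies the sample. The population size, hiding rate, and circle size below are invented for illustration; they are not the authors’ models or data.

    ```python
    # Sketch of "social sensing": estimate a vote share by asking a sample
    # about their social circles rather than only about themselves.
    # All parameters here are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    true_share = 0.52                         # true support for candidate A
    population = rng.random(100_000) < true_share

    sample = rng.choice(population.size, size=500, replace=False)

    # Self-report poll: assume 10% of A-supporters hide their intention.
    self_reports = population[sample] & (rng.random(500) > 0.10)

    # Social-circle poll: each respondent reports the support among 10
    # randomly drawn contacts (a stand-in for their real social circle).
    circle_reports = [population[rng.choice(population.size, size=10)].mean()
                      for _ in sample]

    print(f"True share:         {true_share:.3f}")
    print(f"Self-report poll:   {self_reports.mean():.3f}")    # biased low
    print(f"Social-circle poll: {np.mean(circle_reports):.3f}")
    ```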
    When people are selected to represent a particular group, their perceptions, combined with new computational models of human social dynamics, can be used to identify emerging trends and, in particular, to better predict political and health-related developments, the team writes. This approach, combining elements of psychology and sociology, can even be harnessed to devise interventions that “could steer social systems in different directions” after a major event, such as a natural disaster or a mass shooting, they suggest.
    “I really hope human social sensing will be included in the standard social science toolbox, because I think it can be a very useful strategy for predicting and modeling societal trends,” Galesic says.
    * Mirta Galesic (Santa Fe Institute); Wändi Bruine de Bruin (University of Southern California); Jonas Dalege (Santa Fe Institute); Scott Feld (Purdue University); Frauke Kreuter (LMU Munich, University of Maryland); Henrik Olsson (Santa Fe Institute); Drazen Prelec (Sloan School of Management, MIT); Daniel Stein (New York University, Santa Fe Institute); and Tamara van der Does (Santa Fe Institute) are co-authors on the perspective piece.
    Story Source:
    Materials provided by Santa Fe Institute.

  • New research lifts the clouds on land clearing and biodiversity loss

    QUT researchers have developed a new machine learning method that helps identify and detect changes in biodiversity, including land clearing, when satellite imagery is obstructed by clouds.
    Using statistical methods to quantify uncertainty, the research, published in Remote Sensing in Ecology and Conservation, analysed available satellite images of a 180 km square area in central south-east Queensland.
    The region is home to many native species including the critically endangered northern hairy-nosed wombat and the vulnerable greater glider, and the area mainly consists of forest, pasture, and agricultural land.
    Dr Jacinta Holloway-Brown says measuring changes in forest cover over time is essential for tracking and preserving habitats, and supports a key United Nations and World Bank sustainable development goal to manage forests sustainably.
    “Satellite imagery is important as it is too difficult and expensive to frequently collect field data over large, forested areas,” Dr Holloway-Brown said.
    “The problem with using satellite imagery is large portions of the earth are obscured by clouds and this cloud cover causes large and frequent amounts of missing data.”
    Dr Holloway-Brown said that, based on 12 years of satellite imagery, an estimated 67 per cent of the earth, on average, is obscured by cloud cover.
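    The article does not describe the internals of the QUT method, but the underlying task can be framed simply: treat cloud-covered pixels as missing data and predict them from the pixels that are visible. The sketch below uses a k-nearest-neighbours regressor on a synthetic image as an illustrative stand-in, not the published approach.

    ```python
    # Sketch: fill cloud-obscured pixels by predicting them from visible
    # pixels at nearby coordinates. Synthetic data; illustrative method.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(2)
    h = w = 50
    yy, xx = np.mgrid[0:h, 0:w]
    image = np.sin(xx / 8.0) + np.cos(yy / 11.0)   # stand-in vegetation index

    cloud = rng.random((h, w)) < 0.3               # ~30% of pixels obscured
    coords = np.column_stack([yy.ravel(), xx.ravel()])
    visible = ~cloud.ravel()

    model = KNeighborsRegressor(n_neighbors=8, weights="distance")
    model.fit(coords[visible], image.ravel()[visible])

    filled = image.ravel().copy()
    filled[~visible] = model.predict(coords[~visible])
    mae = np.abs(filled[~visible] - image.ravel()[~visible]).mean()
    print(f"Mean absolute error on cloud-filled pixels: {mae:.3f}")
    ```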

  • Thinking in 3D improves mathematical skills

    Spatial reasoning ability in small children reflects how well they will perform in mathematics later. Researchers from the University of Basel recently came to this conclusion, making the case for better cultivation of spatial reasoning.
    Good math skills open career doors in the natural sciences as well as technical and engineering fields. However, a nationwide study on basic skills conducted in Switzerland in 2019 found that schoolchildren achieved only modest results in mathematics. But it seems possible to begin promoting math skills from a young age, as Dr. Wenke Möhring’s team of researchers from the University of Basel reported after studying nearly 600 children.
    The team found a correlation between children’s spatial sense at the age of three and their mathematical abilities in primary school. “We know from past studies that adults think spatially when working with numbers — for example, representing small numbers to the left and large ones to the right,” explains Möhring. “But little research has been done on how spatial reasoning at an early age affects children’s learning and comprehension of mathematics later.”
    The study, which was published in the journal Learning and Instruction, suggests that there is a strong correlation between early spatial skills and the comprehension of mathematical concepts later. The researchers also ruled out the possibility that this correlation is due to other factors, such as socio-economic status or language ability. Exactly how spatial ability affects mathematical skills in children is still unclear, but the spatial conception of numbers might play a role.
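    “Ruling out” a confounder here amounts to checking that the correlation survives once the confounder is controlled for, as in a partial correlation. The sketch below does this on synthetic data; the effect sizes are assumptions, not the study’s estimates.

    ```python
    # Partial correlation sketch: does the spatial-math link survive after
    # controlling for socio-economic status (SES)? Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 586                                     # sample size from the study
    ses = rng.normal(size=n)
    spatial = 0.4 * ses + rng.normal(size=n)    # spatial skill at age 3
    math_score = 0.5 * spatial + 0.3 * ses + rng.normal(size=n)  # age 7

    def residuals(y, control):
        """Residuals of y after regressing out the control variable."""
        beta = np.polyfit(control, y, 1)
        return y - np.polyval(beta, control)

    raw_r = np.corrcoef(spatial, math_score)[0, 1]
    partial_r = np.corrcoef(residuals(spatial, ses),
                            residuals(math_score, ses))[0, 1]
    print(f"Raw correlation:     {raw_r:.2f}")
    print(f"Controlling for SES: {partial_r:.2f}")  # remains substantial
    ```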
    The findings are based on the analysis of data from 586 children in Basel, Switzerland. As part of a project on language acquisition of German as a second language, the researchers gave three-year-old children a series of tasks to test cognitive, socio-emotional and spatial abilities. For example, the children were asked to arrange colored cubes in certain shapes. The researchers repeated these tests four times at an interval of about 15 months and compared the results with the children’s academic performance in the first grade, at age seven.
    The researchers also closely examined whether the pace of development, i.e. particularly rapid development of spatial abilities, can predict future mathematical ability. Past studies with a small sample size had found a correlation, but Möhring and her colleagues were unable to confirm this in their own study. Three-year-old children who started out with low spatial abilities improved them faster in the subsequent years, but still performed at a lower level in mathematics when they were seven years old. Despite faster development, by the time they began school these children had still not fully caught up with the children possessing higher initial spatial reasoning skills.
    “Parents often push their children in the area of language skills,” says Möhring. “Our results suggest how important it is to cultivate spatial reasoning at an early age as well.” There are simple ways to do this, such as using “spatial language” (larger, smaller, same, above, below) and toys — e.g. building blocks — that help improve spatial reasoning ability.
    Spatial reasoning and gender
    The researchers found that boys and girls are practically indistinguishable in terms of their spatial reasoning ability at the age of three, but in subsequent years this ability develops more slowly in girls. Möhring and her colleagues suspect that boys may hear more “spatial language” and that toys typically designed for boys often promote spatial reasoning, whereas toys for girls focus mainly on social skills. Children may also internalize their parents’ and teachers’ expectations and then, as they grow up, live up to stereotypes — for example, that women do not perform as well in the areas of spatial reasoning and mathematics as men.
    Story Source:
    Materials provided by University of Basel.

  • Technology only two atoms thick could enable storage of information in thinnest unit

    Researchers from Tel Aviv University have engineered the world’s tiniest technology, with a thickness of only two atoms. According to the researchers, the new technology proposes a way of storing electric information in the thinnest unit known to science, in one of the most stable and inert materials in nature. Quantum-mechanical electron tunneling through the atomically thin film may speed up the information-reading process well beyond current technologies.
    The research was performed by scientists from the Raymond and Beverly Sackler School of Physics and Astronomy and Raymond and Beverly Sackler School of Chemistry. The group includes Maayan Vizner Stern, Yuval Waschitz, Dr. Wei Cao, Dr. Iftach Nevo, Prof. Eran Sela, Prof. Michael Urbakh, Prof. Oded Hod, and Dr. Moshe Ben Shalom. The work is now published in Science magazine.
    “Our research stems from curiosity about the behavior of atoms and electrons in solid materials, which has generated many of the technologies supporting our modern way of life,” says Dr. Ben Shalom. “We (and many other scientists) try to understand, predict, and even control the fascinating properties of these particles as they condense into an ordered structure that we call a crystal. At the heart of the computer, for example, lies a tiny crystalline device designed to switch between two states indicating different responses — “yes” or “no,” “up” or “down” etc. Without this dichotomy, it is not possible to encode and process information. The practical challenge is to find a mechanism that would enable switching in a small, fast, and inexpensive device.”
    Current state-of-the-art devices consist of tiny crystals that contain only about a million atoms (about a hundred atoms in height, width, and thickness), so that a million of these devices can be squeezed into the area of one coin, with each device switching at a speed of about a million times per second.
    Following the technological breakthrough, the researchers were able, for the first time, to reduce the thickness of the crystalline devices to two atoms only. Dr. Ben Shalom emphasizes that such a thin structure enables memories based on the quantum ability of electrons to hop quickly and efficiently through barriers that are just several atoms thick. Thus, it may significantly improve electronic devices in terms of speed, density, and energy consumption.
    In the study, the researchers used a two-dimensional material: one-atom-thick layers of boron and nitrogen, arranged in a repetitive hexagonal structure. In their experiment, they were able to break the symmetry of this crystal by artificially assembling two such layers. “In its natural three-dimensional state, this material is made up of a large number of layers placed on top of each other, with each layer rotated 180 degrees relative to its neighbors (antiparallel configuration)” says Dr. Ben Shalom. “In the lab, we were able to artificially stack the layers in a parallel configuration with no rotation, which hypothetically places atoms of the same kind in perfect overlap despite the strong repulsive force between them (resulting from their identical charges). In actual fact, however, the crystal prefers to slide one layer slightly in relation to the other, so that only half of each layer’s atoms are in perfect overlap, and those that do overlap are of opposite charges — while all others are located above or below an empty space — the center of the hexagon. In this artificial stacking configuration the layers are quite distinct from one another. For example, if in the top layer only the boron atoms overlap, in the bottom layer it’s the other way around.”
    Dr. Ben Shalom also highlights the work of the theory team, who conducted numerous computer simulations. “Together we established a deep understanding of why the system’s electrons arrange themselves just as we had measured in the lab. Thanks to this fundamental understanding, we expect fascinating responses in other symmetry-broken layered systems as well,” he says.
    Maayan Vizner Stern, the PhD student who led the study, explains: “The symmetry breaking we created in the laboratory, which does not exist in the natural crystal, forces the electric charge to reorganize itself between the layers and generate a tiny internal electrical polarization perpendicular to the layer plane. When we apply an external electric field in the opposite direction the system slides laterally to switch the polarization orientation. The switched polarization remains stable even when the external field is shut down. In this, the system is similar to thick three-dimensional ferroelectric systems, which are widely used in technology today.”
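    The behavior Vizner Stern describes is the hallmark of a ferroelectric: two stable polarization states, switched by a field and retained when the field is removed. A toy double-well energy model reproduces that qualitative behavior; the parameters are arbitrary and are not fitted to the boron nitride system.

    ```python
    # Toy bistable-polarization model: energy E(p) = -p**2 + p**4 - field*p
    # has two wells at zero field; an external field tilts the landscape.
    # Arbitrary units, purely illustrative.
    def settle(field, p_start, steps=10_000, lr=0.001):
        """Gradient descent on the tilted double well from p_start."""
        p = p_start
        for _ in range(steps):
            p -= lr * (-2 * p + 4 * p**3 - field)
        return p

    p = settle(field=0.0, p_start=0.1)      # settles in the "up" well
    print(f"Initial state:          p = {p:+.2f}")
    p = settle(field=-1.0, p_start=p)       # opposing field switches it
    print(f"Field applied:          p = {p:+.2f}")
    p = settle(field=0.0, p_start=p)        # field removed: state retained
    print(f"Field removed (stable): p = {p:+.2f}")
    ```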
    “The ability to force a crystalline and electronic arrangement in such a thin system, with unique polarization and inversion properties resulting from the weak Van der Waals forces between the layers, is not limited to the boron and nitrogen crystal,” adds Dr. Ben Shalom. “We expect the same behaviors in many layered crystals with the right symmetry properties. The concept of interlayer sliding as an original and efficient way to control advanced electronic devices is very promising, and we have named it Slide-Tronics.”
    Maayan Vizner Stern concludes: “We are excited about discovering what can happen in other states we force upon nature and predict that other structures that couple additional degrees of freedom are possible. We hope that miniaturization and flipping through sliding will improve today’s electronic devices, and moreover, allow other original ways of controlling information in future devices. In addition to computer devices, we expect that this technology will contribute to detectors, energy storage and conversion, interaction with light, etc. Our challenge, as we see it, is to discover more crystals with new and slippery degrees of freedom.”
    The study was funded through support from the European Research Council (ERC starting grant), the Israel Science Foundation (ISF), and the Ministry of Science and Technology (MOST).

  • Novel heat-management material keeps computers running cool

    UCLA engineers have demonstrated successful integration of a novel semiconductor material into high-power computer chips to reduce heat on processors and improve their performance. The advance greatly increases energy efficiency in computers and enables heat removal beyond the best thermal-management devices currently available.
    The research was led by Yongjie Hu, an associate professor of mechanical and aerospace engineering at the UCLA Samueli School of Engineering. Nature Electronics recently published the findings.
    Computer processors have shrunk down to nanometer scales over the years, with billions of transistors sitting on a single computer chip. While the increased number of transistors helps make computers faster and more powerful, it also generates more hot spots in a highly condensed space. Without an efficient way to dissipate heat during operation, computer processors slow down, resulting in unreliable and inefficient computing. In addition, the highly concentrated heat and soaring temperatures on computer chips require extra energy to prevent processors from overheating.
    To solve the problem, Hu and his team pioneered the development of a new ultrahigh thermal-management material in 2018. The researchers developed defect-free boron arsenide in their lab and found it to be much more effective in drawing and dissipating heat than other known metal or semiconductor materials such as diamond and silicon carbide. Now, for the first time, the team has successfully demonstrated the material’s effectiveness by integrating it into high-power devices.
    In their experiments, the researchers used computer chips with state-of-the-art, wide-bandgap transistors made of gallium nitride, called high-electron-mobility transistors (HEMTs). When running the processors at near maximum capacity, chips that used boron arsenide as a heat spreader showed a maximum temperature increase from room temperature to nearly 188 degrees Fahrenheit. This is significantly lower than chips using diamond to spread heat, which rose to approximately 278 degrees Fahrenheit, or those with silicon carbide, which rose to about 332 degrees Fahrenheit.
    “These results clearly show that boron-arsenide devices can sustain much higher operation power than processors using traditional thermal-management materials,” Hu said. “And our experiments were done under conditions where most current technologies would fail. This development represents a new benchmark performance and shows great potential for applications in high-power electronics and future electronics packaging.”
    According to Hu, boron arsenide is ideal for heat management because it not only exhibits excellent thermal conductivity but also displays low heat-transport resistance.
    “When heat crosses a boundary from one material to another, there’s typically some slowdown to get into the next material,” Hu said. “The key feature in our boron arsenide material is its very low thermal-boundary resistance. This is sort of like if the heat just needs to step over a curb, versus jumping a hurdle.”
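    A back-of-the-envelope series-resistance model shows how the two properties combine: heat first crosses the interface (boundary resistance), then conducts through the spreader (bulk resistance). Every number below is an assumption chosen for illustration, not a value from the UCLA study.

    ```python
    # 1-D thermal model: total temperature rise = power * (interface
    # resistance + bulk resistance). All parameter values are assumed.
    def temperature_rise(power_w, area_m2, thickness_m, k_w_mk, tbr_m2k_w):
        r_boundary = tbr_m2k_w / area_m2           # K/W across the interface
        r_bulk = thickness_m / (k_w_mk * area_m2)  # K/W through the spreader
        return power_w * (r_boundary + r_bulk)

    hotspot = dict(power_w=5.0, area_m2=1e-8, thickness_m=10e-6)

    # Assumed: boron arsenide ~1300 W/m-K with a very low-resistance
    # interface; diamond ~2000 W/m-K but a higher-resistance interface.
    dt_bas = temperature_rise(**hotspot, k_w_mk=1300, tbr_m2k_w=1e-9)
    dt_dia = temperature_rise(**hotspot, k_w_mk=2000, tbr_m2k_w=4e-8)
    print(f"Boron arsenide spreader: dT = {dt_bas:.1f} K")   # ~4 K
    print(f"Diamond spreader:        dT = {dt_dia:.1f} K")   # ~22 K
    ```

    Under these assumed numbers, the lower boundary resistance outweighs diamond’s higher bulk conductivity, which is the curb-versus-hurdle picture Hu describes.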
    The team has also developed boron phosphide as another excellent heat-spreader candidate. During their experiments, the researchers first demonstrated how to build a semiconductor structure using boron arsenide and then integrated the material into a HEMT-chip design. The successful demonstration opens up a path for industry adoption of the technology.
    Story Source:
    Materials provided by University of California – Los Angeles.

  • Bronze Age: how the market began

    Knowing the weight of a commodity provides an objective way to value goods in the marketplace. But did a self-regulating market even exist in the Bronze Age? And what can weight systems tell us about this? A team of researchers from the University of Göttingen investigated these questions by tracing the dissemination of weight systems throughout Western Eurasia. Their new simulation indicates that the interaction of merchants, even without substantial intervention from governments or institutions, is likely to explain the spread of Bronze Age technology to weigh goods. The results were published in Proceedings of the National Academy of Sciences (PNAS).
    To determine how different units of weight emerged in different regions, researchers compared all the weight systems in use between Western Europe and the Indus Valley from 3000 to 1000 BC. Analysis of 2,274 balance weights from 127 sites revealed that, with the exception of those from the Indus Valley, new and very similar units of weight appeared in a gradual spread west of Mesopotamia. To find out if the gradual formation of these systems could be due to the propagation of error from a single weight system, the researchers modelled the creation of 100 new units. Taking into account factors such as measurement error, the simulation supported a single origin between Mesopotamia and Europe. It also showed that the Indus Valley probably developed an independent weight system. The research demonstrated that if information flow in Eurasian trade was free enough to support a common weight system, it was likely to be sufficient to react to local price fluctuations.
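    The error-propagation idea can be sketched in a few lines of Monte Carlo: each new unit is copied from an existing one with a small random error, and the resulting units are compared. The 100-unit count comes from the article; the 5 per cent error scale is an assumption for illustration.

    ```python
    # Monte Carlo sketch of weight units spreading by copying with error.
    # The copying-error scale (5%) is an assumed illustration.
    import numpy as np

    rng = np.random.default_rng(4)
    units = [1.0]                            # the original unit (normalized)
    for _ in range(99):                      # create 99 further units
        parent = rng.choice(units)           # copy any existing unit...
        units.append(parent * rng.normal(1.0, 0.05))   # ...with ~5% error

    units = np.array(units)
    print(f"Mean unit: {units.mean():.3f}")
    print(f"Spread:    {units.std():.3f}")
    # Units copied from a single origin stay clustered even with no
    # central authority, consistent with a gradual spread of one system.
    ```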
    The weight systems that emerged between Mesopotamia and Europe were very similar. This meant that a single merchant could travel, for instance, from Mesopotamia to the Aegean and from there to Central Europe and never need to change their own set of weights. The merchant could trade with foreign partners while simply relying on approximating the weights. There was no international authority that could have regulated the accuracy of weight systems over such a wide territory and long time span. In Europe, beyond the Aegean, centralised authorities did not even exist at this time. The researchers conclude that the emergence of accurate weight systems must have been the outcome of a global network regulating itself from the bottom-up.
    “With the results of our statistical analysis and experimental tests, it is now possible to prove the long-held hypothesis that free entrepreneurship was already a primary driver of the world economy even as early as the Bronze Age,” explains Professor Lorenz Rahmstorf from the Institute for Prehistory and Early History, University of Göttingen. Merchants could interact freely, establish profitable partnerships, and take advantage of the opportunities offered by long-distance trade. “The idea of a self-regulating market existing some 4,000 years ago puts a new perspective on the global economy of the modern era,” says Dr Nicola Ialongo, University of Göttingen. He adds, “Try to imagine all the international institutions that currently regulate our modern world economy: is global trade possible thanks to these institutions, or in spite of them?”
    Story Source:
    Materials provided by University of Göttingen.