More stories

  • Creating meaningful change in cities takes decades, not years, and starts from the bottom

    Newly published research in Science Advances by University of Chicago researcher Luis Bettencourt proposes a new perspective on, and new models of, several well-known paradoxes of cities. Namely, if cities are engines of economic growth, why do poverty and inequality persist? If cities thrive on faster activity and more diversity, why are so many things so hard to change? And if growth and innovation are so important, how can urban planners and economists get away with describing cities with Groundhog Day-style models of equilibrium?
    Developing improved collective actions and policies, and creating more equitable, prosperous and environmentally sustainable pathways requires transcending these apparent paradoxes. The paper finds it critical that societies embrace and utilize the natural tensions of cities revealed by urban science in order to advance more holistic solutions.
    “To understand how cities can be simultaneously fast and slow, rich and poor, innovative and unstable, requires reframing our fundamental understanding of what cities are and how they work,” says Bettencourt. “There is plenty of room in cities to embody all this complexity, but to harness natural urban processes for good requires that we modify current thinking and action to include different scales and diverse kinds of people in interaction.”
    This is the goal of a new paper entitled “Urban Growth and the Emergent Statistics of Cities,” by Luis Bettencourt, the Inaugural Director of the Mansueto Institute for Urban Innovation and Professor of Ecology and Evolution at the University of Chicago. In the paper, Bettencourt develops a new set of mathematical models to describe cities along a sliding scale of processes of change, starting with individuals and deriving emergent properties of cities and nations as urban systems.
    At the heart of these models is a balancing act: humans must struggle to balance their budgets over time, including incomes and costs in units of money or energy. For most people, incomes and costs vary over time in unpredictable ways that are out of their full control. In cities — where we are all part of complicated webs of interdependence for jobs, services and many forms of collective action — these challenges gain new dimensions that require both individual and collective action. Accounting for these dynamics allows us to see how meaningful change at the levels of cities and nations can emerge from the aggregate daily hustle of millions of people, but also how all this struggle can fail to add up to much.
    The paper shows that relative changes in the status of cities are exceedingly slow, tied to variations in their growth rates, which are now very small in high-income nations such as the U.S. This leads to the problem that the effects of innovation across cities are barely observable, taking place on the time scale of several decades — much slower than any mayoral term, which blunts the ability to judge positive from harmful policies.
    Of special importance is the negative effect of uncertainty on processes of innovation and growth; uncertainty weighs most heavily on people in poverty, but during the current pandemic it affects everyone. Another challenge is policies that optimize for aggregate growth (such as GDP), which the paper shows typically promote increasing inequality and social instability. In the paper, these ideas are tested using a long time series for 382 U.S. metropolitan areas over nearly five decades.
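    The link between uncertainty and stunted growth can be illustrated with a toy simulation (a minimal sketch, not the paper's actual model): when many agents' budgets change by a small random percentage each year, higher volatility drags down the growth that compounds over time, even if the average yearly change stays the same.
```python
# Illustrative only: a toy multiplicative-growth simulation showing "variance
# drag" -- greater income/cost uncertainty lowers realized long-run growth
# even when the average annual change is held fixed.
import numpy as np

rng = np.random.default_rng(0)

def long_run_growth(mean_change=0.02, volatility=0.05, years=50, agents=10_000):
    """Average annual log-growth across many agents whose budgets change
    by a random percentage each year."""
    changes = rng.normal(mean_change, volatility, size=(agents, years))
    changes = np.clip(changes, -0.99, None)   # a budget cannot fall below zero
    log_growth = np.log1p(changes).sum(axis=1) / years
    return log_growth.mean()

for sigma in (0.05, 0.15, 0.30):
    print(f"volatility {sigma:.2f}: mean annual log-growth "
          f"{long_run_growth(volatility=sigma):+.4f}")
# Output shows growth shrinking (roughly mean - sigma^2 / 2) as volatility rises.
```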
    “Growth and change accumulate through the compounding of many small changes in how we lead our daily lives, allocate our time and effort, and interact with each other, especially in cities. Helping more people be creative and gain agency, in part by reducing crippling uncertainties, is predicted to make all the difference between a society that can face difficulties and thrive or one that becomes caught up in endless struggles and eventually decay,” says Bettencourt.

    Story Source:
    Materials provided by University of Chicago. Note: Content may be edited for style and length.

  • Deep learning will help future Mars rovers go farther, faster, and do more science

    NASA’s Mars rovers have been one of the great scientific and space successes of the past two decades.
    Four generations of rovers have traversed the red planet gathering scientific data, sending back evocative photographs, and surviving incredibly harsh conditions — all using on-board computers less powerful than an iPhone 1. The latest rover, Perseverance, was launched on July 30, 2020, and engineers are already dreaming of a future generation of rovers.
    While a major achievement, these missions have only scratched the surface (literally and figuratively) of the planet and its geology, geography, and atmosphere.
    “The surface area of Mars is approximately the same as the total area of the land on Earth,” said Masahiro (Hiro) Ono, group lead of the Robotic Surface Mobility Group at the NASA Jet Propulsion Laboratory (JPL) — which has led all the Mars rover missions — and one of the researchers who developed the software that allows the current rover to operate.
    “Imagine, you’re an alien and you know almost nothing about Earth, and you land on seven or eight points on Earth and drive a few hundred kilometers. Does that alien species know enough about Earth?” Ono asked. “No. If we want to represent the huge diversity of Mars we’ll need more measurements on the ground, and the key is substantially extended distance, hopefully covering thousands of miles.”
    Travelling across Mars’ diverse, treacherous terrain with limited computing power and a restricted energy diet — only as much sun as the rover can capture and convert to power in a single Martian day, or sol — is a huge challenge.

    The first rover, Sojourner, covered 330 feet over 91 sols; the second, Spirit, travelled 4.8 miles in about five years; Opportunity travelled 28 miles over 15 years; and Curiosity has travelled more than 12 miles since it landed in 2012.
    “Our team is working on Mars robot autonomy to make future rovers more intelligent, to enhance safety, to improve productivity, and in particular to drive faster and farther,” Ono said.
    NEW HARDWARE, NEW POSSIBILITIES
    The Perseverance rover, which launched this summer, computes using RAD 750s — radiation-hardened single board computers manufactured by BAE Systems Electronics.
    Future missions, however, would potentially use new high-performance, multi-core radiation hardened processors designed through the High Performance Spaceflight Computing (HPSC) project. (Qualcomm’s Snapdragon processor is also being tested for missions.) These chips will provide about one hundred times the computational capacity of current flight processors using the same amount of power.

    “All of the autonomy that you see on our latest Mars rover is largely human-in-the-loop” — meaning it requires human interaction to operate, according to Chris Mattmann, the deputy chief technology and innovation officer at JPL. “Part of the reason for that is the limits of the processors that are running on them. One of the core missions for these new chips is to do deep learning and machine learning, like we do terrestrially, on board. What are the killer apps given that new computing environment?”
    The Machine Learning-based Analytics for Autonomous Rover Systems (MAARS) program — which started three years ago and will conclude this year — encompasses a range of areas where artificial intelligence could be useful. The team presented results of the MAARS project at the IEEE Aerospace Conference in March 2020. The project was a finalist for the NASA Software Award.
    “Terrestrial high performance computing has enabled incredible breakthroughs in autonomous vehicle navigation, machine learning, and data analysis for Earth-based applications,” the team wrote in their IEEE paper. “The main roadblock to a Mars exploration rollout of such advances is that the best computers are on Earth, while the most valuable data is located on Mars.”
    Training machine learning models on the Maverick2 supercomputer at the Texas Advanced Computing Center (TACC), as well as on Amazon Web Services and JPL clusters, Ono, Mattmann and their team have been developing two novel capabilities for future Mars rovers, which they call Drive-By Science and Energy-Optimal Autonomous Navigation.
    ENERGY-OPTIMAL AUTONOMOUS NAVIGATION
    Ono was part of the team that wrote the on-board pathfinding software for Perseverance. Perseverance’s software includes some machine learning abilities, but the way it does pathfinding is still fairly naïve.
    “We’d like future rovers to have a human-like ability to see and understand terrain,” Ono said. “For rovers, energy is very important. There’s no paved highway on Mars. The drivability varies substantially based on the terrain — for instance, beach versus bedrock. That is not currently considered. Coming up with a path with all of these constraints is complicated, but that’s the level of computation that we can handle with the HPSC or Snapdragon chips. But to do so we’re going to need to change the paradigm a little bit.”
    Ono describes that new paradigm as “commanding by policy,” a middle ground between the human-dictated “Go from A to B and do C” and the purely autonomous “Go do science.”
    Commanding by policy involves pre-planning for a range of scenarios, and then allowing the rover to determine what conditions it is encountering and what it should do.
    “We use a supercomputer on the ground, where we have infinite computational resources like those at TACC, to develop a plan where a policy is: if X, then do this; if Y, then do that,” Ono explained. “We’ll basically make a huge to-do list and send gigabytes of data to the rover, compressing it in huge tables. Then we’ll use the increased power of the rover to de-compress the policy and execute it.”
    The pre-planned list is generated using machine learning-derived optimizations. The on-board chip can then use those plans to perform inference: taking the inputs from its environment and plugging them into the pre-trained model. The inference tasks are computationally much easier and can be computed on a chip like those that may accompany future rovers to Mars.
    “The rover has the flexibility of changing the plan on board instead of just sticking to a sequence of pre-planned options,” Ono said. “This is important in case something bad happens or it finds something interesting.”
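    A minimal sketch of the idea, with made-up states and rules rather than JPL's actual software: the ground side enumerates conditions into a lookup table of "if X, then do this" entries, and the rover side simply matches its observed conditions against that table instead of re-planning on board.
```python
# Hypothetical sketch of "commanding by policy". A trivial rule stands in for
# the machine-learning-derived optimization run on the ground; the rover side
# is a cheap lookup (the "inference" step).
from itertools import product

TERRAINS = ["bedrock", "sand", "rocky"]
BATTERY = ["low", "ok", "high"]

def plan_on_ground():
    """Ground side: enumerate every (terrain, battery) state and store an action."""
    policy = {}
    for terrain, battery in product(TERRAINS, BATTERY):
        if battery == "low":
            action = "stop_and_charge"
        elif terrain == "sand":
            action = "drive_slow"
        else:
            action = "drive_fast"
        policy[(terrain, battery)] = action
    return policy  # in practice this table would be compressed and uplinked

def act_on_rover(policy, observed_terrain, observed_battery):
    """Rover side: look up the pre-planned action for the observed conditions."""
    return policy[(observed_terrain, observed_battery)]

policy = plan_on_ground()
print(act_on_rover(policy, "sand", "ok"))   # -> drive_slow
```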
    DRIVE-BY SCIENCE
    Current Mars missions typically use tens of images a Sol from the rover to decide what to do the next day, according to Mattmann. “But what if in the future we could use one million image captions instead? That’s the core tenet of Drive-By Science,” he said. “If the rover can return text labels and captions that were scientifically validated, our mission team would have a lot more to go on.”
    Mattmann and the team adapted Google’s Show and Tell software — a neural image caption generator first launched in 2014 — for the rover missions, the first non-Google application of the technology.
    The algorithm takes in images and spits out human-readable captions. These include basic, but critical information, like cardinality — how many rocks, how far away? — and properties like the vein structure in outcrops near bedrock. “The types of science knowledge that we currently use images for to decide what’s interesting,” Mattmann said.
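    The interface is image in, caption out. Show and Tell itself is not distributed as a packaged library, so the snippet below uses a generic, publicly available captioning model purely to illustrate the flavor of what Drive-By Science produces; the model name and image path are assumptions, not the rover software.
```python
# Not the rover's actual model: an off-the-shelf image-captioning pipeline
# (ViT encoder + GPT-2 decoder) illustrating the image-in / caption-out step.
# Requires: pip install transformers torch pillow
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# "terrain.jpg" is a placeholder path for any local image of a rocky scene.
for result in captioner("terrain.jpg"):
    print(result["generated_text"])   # e.g. "a rocky hillside with large boulders"
```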
    Over the past few years, planetary geologists have labeled and curated Mars-specific image annotations to train the model.
    “We use the one million captions to find 100 more important things,” Mattmann said. “Using search and information retrieval capabilities, we can prioritize targets. Humans are still in the loop, but they’re getting much more information and are able to search it a lot faster.”
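    A toy version of that prioritization step, with invented captions and an invented science query (not the mission's actual search stack), ranks caption text against the query by TF-IDF similarity.
```python
# Rank caption strings against a science query with TF-IDF cosine similarity.
# Captions and query are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

captions = [
    "flat sandy plain with small pebbles",
    "layered bedrock outcrop with visible veins",
    "large boulder field near a crater rim",
]
query = ["outcrop with vein structure"]

vectorizer = TfidfVectorizer()
caption_vectors = vectorizer.fit_transform(captions)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, caption_vectors).ravel()
for score, caption in sorted(zip(scores, captions), reverse=True):
    print(f"{score:.2f}  {caption}")   # highest-scoring captions first
```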
    Results of the team’s work appear in the September 2020 issue of Planetary and Space Science.
    TACC’s supercomputers proved instrumental in helping the JPL team test the system. On Maverick2, the team trained, validated, and improved their model using 6,700 labels created by experts.
    The ability to travel much farther would be a necessity for future Mars rovers. An example is the Sample Fetch Rover, proposed to be developed by the European Space Agency and launched in the late 2020s, whose main task will be to collect the samples dug up by the Mars 2020 rover.
    “Those rovers in a period of years would have to drive 10 times further than previous rovers to collect all the samples and to get them to a rendezvous site,” Mattmann said. “We’ll need to be smarter about the way we drive and use energy.”
    Before the new models and algorithms are loaded onto a rover destined for space, they are tested on a dirt training ground next to JPL that serves as an Earth-based analogue for the surface of Mars.
    The team developed a demonstration that shows an overhead map and streaming images collected by the rover, with the algorithms running live on board to perform terrain classification and captioning. They had hoped to finish testing the new system this spring, but COVID-19 shuttered the lab and delayed testing.
    In the meantime, Ono and his team developed a citizen science app, AI4Mars, that allows the public to annotate more than 20,000 images taken by the Curiosity rover. These will be used to further train machine learning algorithms to identify and avoid hazardous terrains.
    The public has generated 170,000 labels so far, in less than three months. “People are excited. It’s an opportunity for people to help,” Ono said. “The labels that people create will help us make the rover safer.”
    The efforts to develop a new AI-based paradigm for future autonomous missions can be applied not just to rovers but to any autonomous space mission, from orbiters to fly-bys to interstellar probes, Ono says.
    “The combination of more powerful on-board computing power, pre-planned commands computed on high performance computers like those at TACC, and new algorithms has the potential to allow future rovers to travel much further and do more science.”

  • Understanding the inner workings of the human heart

    Researchers have investigated the function of a complex mesh of muscle fibers that line the inner surface of the heart. The study, published in the journal Nature, sheds light on questions asked by Leonardo da Vinci 500 years ago, and shows how the shape of these muscles impacts heart performance and heart failure.
    In humans, the heart is the first functional organ to develop and starts beating spontaneously only four weeks after conception. Early in development, the heart grows an intricate network of muscle fibers — called trabeculae — that form geometric patterns on the heart’s inner surface. These are thought to help oxygenate the developing heart, but their function in adults has remained an unsolved puzzle since the 16th century.
    “Our work significantly advanced our understanding of the importance of myocardial trabeculae,” explains Hannah Meyer, a Cold Spring Harbor Laboratory Fellow. “Perhaps even more importantly, we also showed the value of a truly multidisciplinary team of researchers. Only the combination of genetics, clinical research, and bioengineering led us to discover the unexpected role of myocardial trabeculae in the function of the adult heart.”
    To understand the roles and development of trabeculae, an international team of researchers used artificial intelligence to analyse 25,000 magnetic resonance imaging (MRI) scans of the heart, along with associated heart morphology and genetic data. The study reveals how trabeculae work and develop, and how their shape can influence heart disease. UK Biobank has made the study data openly available.
    Leonardo da Vinci was the first to sketch trabeculae and their snowflake-like fractal patterns in the 16th century. He speculated that they warm the blood as it flows through the heart, but their true importance has not been recognized until now.
    “Our findings answer very old questions in basic human biology. As large-scale genetic analyses and artificial intelligence progress, we’re rebooting our understanding of physiology to an unprecedented scale,” says Ewan Birney, deputy director general of EMBL.
    The research suggests that the rough surface of the heart ventricles allows blood to flow more efficiently during each heartbeat, just like the dimples on a golf ball reduce air resistance and help the ball travel further.
    The study also highlights six regions in human DNA that affect how the fractal patterns in these muscle fibers develop. Intriguingly, the researchers found that two of these regions also regulate branching of nerve cells, suggesting a similar mechanism may be at work in the developing brain.
    The researchers discovered that the shape of trabeculae affects the performance of the heart, suggesting a potential link to heart disease. To confirm this, they analyzed genetic data from 50,000 patients and found that different fractal patterns in these muscle fibers affected the risk of developing heart failure. Nearly five million Americans suffer from congestive heart failure.
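    One common way to quantify such patterns is a box-counting estimate of fractal dimension on a segmented image of the trabecular surface; the sketch below is a generic textbook method shown only for illustration, not the study's actual analysis pipeline.
```python
# Box-counting estimate of fractal dimension for a binary (0/1) image,
# e.g. a segmented trabecular pattern. Generic method, illustrative only.
import numpy as np

def box_count(image, box_size):
    """Number of box_size x box_size tiles containing at least one foreground pixel."""
    h, w = image.shape
    h_trim, w_trim = h - h % box_size, w - w % box_size
    tiles = image[:h_trim, :w_trim].reshape(
        h_trim // box_size, box_size, w_trim // box_size, box_size)
    return np.count_nonzero(tiles.any(axis=(1, 3)))

def fractal_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(image, s) for s in box_sizes]
    # Slope of log(count) vs log(1 / box size) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Quick check on synthetic noise (expected dimension close to 2 for a filled plane).
rng = np.random.default_rng(1)
demo = (rng.random((256, 256)) > 0.5).astype(np.uint8)
print(f"estimated dimension: {fractal_dimension(demo):.2f}")
```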
    Further research on trabeculae may help scientists better understand how common heart diseases develop and explore new approaches to treatment.
    “Leonardo da Vinci sketched these intricate muscles inside the heart 500 years ago, and it’s only now that we’re beginning to understand how important they are to human health. This work offers an exciting new direction for research into heart failure,” says Declan O’Regan, clinical scientist and consultant radiologist at the MRC London Institute of Medical Sciences. This project included collaborators at Cold Spring Harbor Laboratory, EMBL’s European Bioinformatics Institute (EMBL-EBI), the MRC London Institute of Medical Sciences, Heidelberg University, and the Politecnico di Milano.

    Story Source:
    Materials provided by Cold Spring Harbor Laboratory. Note: Content may be edited for style and length.

  • Digital contact tracing alone may not be miracle answer for COVID-19

    In infectious disease outbreaks, digital contact tracing alone could reduce the number of cases, but not as much as manual contact tracing, new University of Otago-led research published in the Cochrane Library reveals.
    Dr Andrew Anglemyer, Senior Research Fellow in the Department of Preventive and Social Medicine, led this systematic review of the effectiveness of digital technologies for identifying the contacts of a confirmed case of an infectious disease, so that they can be isolated to reduce further transmission.
    The team of researchers summarised the findings of six observational studies from outbreaks of different infectious diseases in Sierra Leone, Botswana and the USA, and six studies that used mathematical models to simulate the spread of disease in an epidemic.
    The results of the review suggest the need for caution by health authorities relying heavily on digital contact tracing systems.
    “Digital technologies, combined with other public health interventions, may help to prevent the spread of infectious diseases but the technology is largely unproven in real-world, outbreak settings,” Dr Anglemyer says.
    “Modelling studies provide low-certainty evidence of a reduction in cases, and this only occurred when digital contact tracing solutions were used together with other public health measures such as self-isolation,” he says.
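    A toy branching-process model, in the spirit of the simulation studies the review summarizes (not any specific study's model), shows why tracing coverage matters but is rarely sufficient on its own.
```python
# Each case generates a Poisson number of new cases; a traced fraction of
# contacts is found and isolated before transmitting. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def outbreak_size(r0=2.5, traced_fraction=0.0, generations=8,
                  initial_cases=10, cap=1_000_000):
    cases = initial_cases
    total = cases
    for _ in range(generations):
        effective_r = r0 * (1.0 - traced_fraction)
        cases = rng.poisson(effective_r * cases)
        total += cases
        if total > cap:          # stop runaway growth in the toy model
            return cap
    return total

for fraction in (0.0, 0.3, 0.6):
    print(f"traced fraction {fraction:.0%}: ~{outbreak_size(traced_fraction=fraction):,} cases")
# Tracing shrinks the outbreak, but pushing transmission below the epidemic
# threshold typically also requires other measures such as self-isolation.
```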

    “However, limited evidence shows that the technology itself may produce more reliable counts of contacts.”
    Overall, the team of researchers from New Zealand, the USA, the UK and Australia conclude there is a place for digital technologies in contact tracing.
    “The findings of our review suggest that to prevent the spread of infectious diseases, governments should consider digital technologies as a way to improve current contact tracing methods, not to replace them,” the researchers state.
    “In the real world, they won’t be pitted against each other, the technology would hopefully just augment the current contact tracing methods in a given country.”
    They recommend governments consider issues of privacy and equity when choosing digital contact tracing systems.

    “If governments implement digital contact tracing technologies, they should ensure that at-risk populations are not disadvantaged and they need to take privacy concerns into account.
    “The COVID-19 pandemic is disproportionately affecting ethnic minorities, the elderly and people living in high deprivation. These health inequities could be magnified with the introduction of digital solutions that do not consider these at-risk populations, who are likely to have poor access to smartphones with full connectivity.”
    Contact tracing teams in the studies reviewed reported that digital data entry and management systems were faster to use than paper systems for recording of new contacts and monitoring of known contacts and possibly less prone to data loss.
    But the researchers conclude there is “very low certainty evidence” that contact tracing apps could make a substantial impact on the spread of COVID-19, while issues of low adoption, technological variation and health equity persist.
    Accessibility, privacy and safety concerns were identified in some of the studies. Problems with system access included patchy network coverage, lack of data, technical problems with hardware or software that could not be resolved by local technical teams, and higher staff training needs, including the need for refresher training. Staff also noted concerns around accessibility and logistical issues in administering the systems, particularly in marginalised or under-developed areas of the world.
    The research, published today in the Cochrane Library, a collection of high-quality, independent evidence to inform healthcare decision-making, was carried out as the COVID-19 pandemic shows no signs of waning and as the World Health Organization and more than 30 countries explore how digital technology could help stop the spread of the virus.
    Senior Research Fellow Tim Chambers from the University of Otago, Wellington, and Associate Professor Matthew Parry from the Department of Statistics were also co-authors of the paper.

  • Portrait of a virus

    More than a decade ago, electronic medical records were all the rage, promising to transform health care and help guide clinical decisions and public health response.
    With the arrival of COVID-19, researchers quickly realized that electronic medical records (EMRs) had not lived up to their full potential — largely due to widespread decentralization of records and clinical systems that cannot “talk” to one another.
    Now, in an effort to circumvent these impediments, an international group of researchers has successfully created a centralized medical records repository that, in addition to rapid data collection, can perform data analysis and visualization.
    The platform, described Aug. 19 in npj Digital Medicine, contains data from 96 hospitals in five countries and has yielded intriguing, albeit preliminary, clinical clues about how the disease presents, evolves and affects different organ systems across different categories of patients with COVID-19.
    For now, the platform represents more of a proof-of-concept than a fully evolved tool, the research team cautions, adding that the initial observations enabled by the data raise more questions than they answer.
    However, as data collection grows and more institutions begin to contribute such information, the utility of the platform will evolve accordingly, the team said.

    “COVID-19 caught the world off guard and has exposed important deficiencies in our ability to use electronic medical records to glean telltale insights that could inform response during a shapeshifting pandemic,” said Isaac Kohane, senior author on the research and chair of the Department of Biomedical Informatics in the Blavatnik Institute at Harvard Medical School. “The new platform we have created shows that we can, in fact, overcome some of these challenges and rapidly collect critical data that can help us confront the disease at the bedside and beyond.”
    In its report, the Harvard Medical School-led multi-institutional research team provides insights from early analysis of records from 27,584 patients and 187,802 lab tests collected in the early days of the epidemic, from Jan. 1 to April 11. The data came from 96 hospitals in the United States, France, Italy, Germany and Singapore, as part of the 4CE Consortium, an international research repository of electronic medical records used to inform studies of the COVID-19 pandemic.
    “Our work demonstrates that hospital systems can organize quickly to collaborate across borders, languages and different coding systems,” said study first author Gabriel Brat, HMS assistant professor of surgery at Beth Israel Deaconess Medical Center and a member of the Department of Biomedical Informatics. “I hope that our ongoing efforts to generate insights about COVID-19 and improve treatment will encourage others from around the world to join in and share data.”
    The new platform underscores the value of such agile analytics in the rapid generation of knowledge, particularly during a pandemic that places extra urgency on answering key questions, but such tools must also be approached with caution and be subject to scientific rigor, according to an accompanying editorial penned by leading experts in biomedical data science.
    “The bar for this work needs to be set high, but we must also be able to move quickly. Examples such as the 4CE Collaborative show that both can be achieved,” writes Harlan Krumholz, senior author on the accompanying editorial and professor of medicine and cardiology and director of the Center for Outcomes Research and Evaluation at Yale-New Haven Hospital.

    What kind of intel can EMRs provide?
    In a pandemic, particularly one involving a new pathogen, rapid assessment of clinical records can provide information not only about the rate of new infections and the prevalence of disease, but also about key clinical features that can portend good or bad outcomes, disease severity and the need for further testing or certain interventions.
    These data can also yield clues about differences in disease course across various demographic groups and indicative fluctuations in biomarkers associated with the function of the heart, kidney, liver, immune system and more. Such insights are especially critical in the early weeks and months after a novel disease emerges and public health experts, physicians and policymakers are flying blind. Such data could prove critical later: indicative patterns can tell researchers how to design clinical trials to better understand the underlying drivers that influence observed outcomes. For example, if records show consistent changes in a protein that heralds aberrant blood clotting, researchers can focus their monitoring and treatments on organ systems whose dysfunction is associated with these abnormalities, or on organs that could be damaged by clots, notably the brain, heart and lungs.
    The analysis of the data collected in March demonstrates that it is possible to quickly create a clinical sketch of the disease that can later be filled in as more granular details emerge, the researchers said.
    In the current study, researchers tracked the following data (a minimal aggregation sketch follows the list):
    Total number of COVID-19 patients
    Number of intensive care unit admissions and discharges
    Seven-day average of new cases per 100,000 people by country
    Daily death toll
    Demographic breakdown of patients
    Laboratory tests assessing cardiac, immune, kidney and liver function; red and white blood cell counts; inflammatory markers such as C-reactive protein; and two proteins related to blood clotting (D-dimer) and cardiac muscle injury (troponin)
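    As a hypothetical illustration of how a participating hospital could turn patient-level records into shareable daily aggregates of this kind (column names are invented; this is not the 4CE code):
```python
# Reduce patient-level lab records to per-day, per-test site summaries.
import pandas as pd

labs = pd.DataFrame({
    "date": ["2020-03-01", "2020-03-01", "2020-03-02", "2020-03-02"],
    "patient_id": [1, 2, 1, 3],
    "test": ["CRP", "D-dimer", "CRP", "creatinine"],
    "value": [55.0, 1.8, 80.0, 1.1],
})

# Per-day, per-test summaries: number of patients tested and mean value.
daily_summary = (labs.groupby(["date", "test"])
                     .agg(patients=("patient_id", "nunique"),
                          mean_value=("value", "mean"))
                     .reset_index())

# Daily count of distinct patients with any test (a stand-in for case counts).
daily_patients = labs.groupby("date")["patient_id"].nunique().rename("patients_tested")

print(daily_summary)
print(daily_patients)
```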
    Telltale patterns
    The report’s observations included:
    Demographic analyses by country showed variations in the age of hospitalized patients, with Italy having the largest proportion of elderly patients (over 70 years) diagnosed with COVID-19.
    At initial presentation to the hospital, patients showed remarkable consistency in lab tests measuring cardiac, immune, blood-clotting and kidney and liver function.
    On day one of admission, most patients had relatively moderate disease as measured by lab tests, with initial tests showing moderate abnormalities but no indication of organ failure.
    Major abnormalities were evident on day one of diagnosis for C-reactive protein, a measure of inflammation, and D-dimer, a marker of blood clotting, with test results progressively worsening in patients who went on to develop more severe disease or died.
    Levels of bilirubin, a marker of liver function, were initially normal across hospitals but worsened among persistently hospitalized patients, a finding suggesting that most patients did not have liver impairment on initial presentation.
    Creatinine levels — which measure how well the kidneys are filtering waste — showed wide variations across hospitals, a finding that may reflect cross-country variations in testing, in the use of fluids to manage kidney function or differences in timing of patient presentation at various stages of the disease.
    On average, white blood cell counts — a measure of immune response — were within normal ranges for most patients but showed elevations among those who had severe disease and remained hospitalized longer.
    Even though the findings of the report are observations and cannot be used to draw conclusions, the trends they point to could provide a foundation for more focused and in-depth studies that get to the root of these observations, the team said.
    “It’s clear that amid an emerging pathogen, uncertainty far outstrips knowledge,” Kohane said. “Our efforts establish a framework to monitor the trajectory of COVID-19 across different categories of patients and help us understand response to different clinical interventions.”
    Co-investigators included Griffin Weber, Nils Gehlenborg, Paul Avillach, Nathan Palmer, Luca Chiovato, James Cimino, Lemuel Waitman, Gilbert Omenn, Alberto Malovini; Jason Moore, Brett Beaulieu-Jones; Valentina Tibollo; Shawn Murphy; Sehi L’Yi; Mark Keller; Riccardo Bellazzi; David Hanauer; Arnaud Serret-Larmande; Alba Gutierrez-Sacristan; John Holmes; Douglas Bell; Kenneth Mandl; Robert Follett; Jeffrey Klann; Douglas Murad; Luigia Scudeller; Mauro Bucalo; Katie Kirchoff; Jean Craig; Jihad Obeid; Vianney Jouhet; Romain Griffier; Sebastien Cossin; Bertrand Moal; Lav Patel; Antonio Bellasi; Hans Prokosch; Detlef Kraska; Piotr Sliz; Amelia Tan; Kee Yuan Ngiam; Alberto Zambelli; Danielle Mowery; Emily Schiver; Batsal Devkota; Robert Bradford; Mohamad Daniar; Christel Daniel; Vincent Benoit; Romain Bey; Nicolas Paris; Patricia Serre; Nina Orlova; Julien Dubiel; Martin Hilka; Anne Sophie Jannot; Stephane Breant; Judith Leblanc; Nicolas Griffon; Anita Burgun; Melodie Bernaux; Arnaud Sandrin; Elisa Salamanca; Sylvie Cormont; Thomas Ganslandt; Tobias Gradinger; Julien Champ; Martin Boeker; Patricia Martel; Loic Esteve; Alexandre Gramfort; Olivier Grisel; Damien Leprovost; Thomas Moreau; Gael Varoquaux; Jill-Jênn Vie; Demian Wassermann; Arthur Mensch; Charlotte Caucheteux; Christian Haverkamp; Guillaume Lemaitre; Silvano Bosari, Ian Krantz; Andrew South; Tianxi Cai.
    Relevant disclosures:
    Co-authors Riccardo Bellazzi of the University of Pavia and Arthur Mensch, of PSL University, are shareholders in Biomeris, a biomedical data analysis company.

  • New tool improves fairness of online search rankings

    When you search for something on the internet, do you scroll through page after page of suggestions — or pick from the first few choices?
    Because most people choose from the tops of these lists, they rarely see the vast majority of the options, creating a potential for bias in everything from hiring to media exposure to e-commerce.
    In a new paper, Cornell University researchers introduce a tool they’ve developed to improve the fairness of online rankings without sacrificing their usefulness or relevance.
    “If you could examine all your choices equally and then decide what to pick, that may be considered ideal. But since we can’t do that, rankings become a crucial interface to navigate these choices,” said computer science doctoral student Ashudeep Singh, co-first author of “Controlling Fairness and Bias in Dynamic Learning-to-Rank,” which won the Best Paper Award at the Association for Computing Machinery SIGIR Conference on Research and Development in Information Retrieval.
    “For example, many YouTubers will post videos of the same recipe, but some of them get seen way more than others, even though they might be very similar,” Singh said. “And this happens because of the way search results are presented to us. We generally go down the ranking linearly and our attention drops off fast.”
    The researchers’ method, called FairCo, gives roughly equal exposure to equally relevant choices and avoids preferential treatment for items that are already high on the list. This can correct the unfairness inherent in existing algorithms, which can exacerbate inequality and political polarization, and curtail personal choice.
    “What ranking systems do is they allocate exposure. So how do we make sure that everybody receives their fair share of exposure?” said Thorsten Joachims, professor of computer science and information science, and the paper’s senior author. “What constitutes fairness is probably very different in, say, an e-commerce system and a system that ranks resumes for a job opening. We came up with computational tools that let you specify fairness criteria, as well as the algorithm that will provably enforce them.”
    Algorithms seek the most relevant items to searchers, but because the vast majority of people choose one of the first few items in a list, small differences in relevance can lead to huge discrepancies in exposure. For example, if 51% of the readers of a news publication prefer opinion pieces that skew conservative, and 49% prefer essays that are more liberal, all of the top stories highlighted on the home page could conceivably lean conservative, according to the paper.
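    The dynamic can be made concrete with a small position-bias calculation; this is an illustration of the problem and of merit-proportional exposure, not the FairCo algorithm itself.
```python
# How a tiny relevance gap becomes a large exposure gap under a standard
# position-bias model, and what proportional exposure would look like instead.
import numpy as np

relevance = np.array([0.51, 0.49])               # two groups' average relevance
positions = np.arange(1, 11)                     # a ten-slot ranking
exposure_per_slot = 1.0 / np.log2(positions + 1) # standard position-bias weights

# Ranking purely by relevance: the 0.51 group occupies every top slot,
# so it receives all of the exposure.
winner_takes_all = np.array([exposure_per_slot.sum(), 0.0])

# Merit-proportional allocation: exposure split in proportion to relevance.
proportional = exposure_per_slot.sum() * relevance / relevance.sum()

print("relevance share:   ", relevance / relevance.sum())
print("naive exposure:    ", winner_takes_all / winner_takes_all.sum())
print("proportional share:", proportional / proportional.sum())
```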
    “When small differences in relevance lead to one side being amplified, that often causes polarization, where some people tend to dominate the conversation and other opinions get dropped without their fair share of attention,” Joachims said. “You might want to use it in an e-commerce system to make sure that if you’re producing a product that 30% of people like, you’re getting a certain amount of exposure based on that. Or if you have a resume database, you could formulate safeguards to make sure it’s not discriminating by race or gender.”
    The research was partly supported by the National Science Foundation and by Workday.

    Story Source:
    Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.

  • Social connection boosts fitness app appeal

    New research led by Flinders University PhD candidate Jasmine Petersen examining commercial physical activity apps has found that the social components of these apps hold great potential to increase physical activity engagement.
    Sharing physical activity outcomes and progress with app communities and social networking platforms provides the encouragement many people need to engage more enthusiastically with their apps.
    “Sharing posts and receiving encouragement provides the social support many people need to stay motivated with exercise programs — and this doesn’t change across different age groups,” says study co-author Dr Ivanka Prichard, from Flinders University’s Caring Futures Institute.
    The study — “Psychological mechanisms underlying the relationship between commercial physical activity app use and physical activity engagement,” by Jasmine Petersen, Lucy Lewis, Eva Kemps and Ivanka Prichard — is published in Psychology of Sport and Exercise.
    The study examined close to 1,300 adults (88% female, aged between 18 and 83 years), over half of whom used a commercial physical activity app (e.g. Fitbit, Garmin, Strava). The results showed that more competitive individuals responded best to the apps, engaging in significantly higher levels of physical activity due to the game-like incentives and rewards built into the apps.
    Dr Prichard says this suggests that people with a general disposition toward competition may benefit most from using activity apps.
    “App users are motivated by both the enjoyment derived from physical activity (intrinsic motivation) and the personal value placed on the outcomes of physical activity (identified regulation), and these combined motivations result in greater engagement in physical activity,” says Ms Petersen.
    This study shows that the social components of physical activity apps are particularly beneficial in promoting engagement in physical activity due to their capacity to facilitate social support, and positively influence motivation and beliefs in one’s ability to perform physical activity.
    However, it was also found that online interactions can have a negative effect on exercisers if social networking is used to make direct comparisons.
    “Engagement in comparisons was associated with lower self-efficacy and higher external regulation, and in turn, lower physical activity,” says Dr Prichard, emphasising the importance of exercising for enjoyment and the benefits that exercise can provide to general health.
    The team are now following up participants to see how commercial physical activity apps might support physical activity behaviour in light of COVID-19 restrictions.

    Story Source:
    Materials provided by Flinders University. Original written by Megan Andrews. Note: Content may be edited for style and length.

  • Mathematicians unravel a thread of string theory

    Simply put, string theory is a proposed method of explaining everything. Actually, there’s nothing simple about it. String theory is a theoretical framework from physics that describes one-dimensional, vibrating fibrous objects called “strings,” which propagate through space and interact with each other. Piece by piece, energetic minds are discovering and deciphering fundamental strings of the physical universe using mathematical models. Among these intrepid explorers are Utah State University mathematicians Thomas Hill and his faculty mentor, Andreas Malmendier.
    With colleague Adrian Clingher of the University of Missouri-St. Louis, the team published findings about two branches of string theory in the paper, “The Duality Between F-theory and the Heterotic String in D=8 with Two Wilson Lines,” in the August 7, 2020 online edition of ‘Letters in Mathematical Physics.’ The USU researchers’ work is supported by a grant from the Simons Foundation.
    “We studied a special family of K3 surfaces — compact, connected complex surfaces of dimension 2 — which are important geometric tools for understanding symmetries of physical theories,” says Hill, who graduated from USU’s Honors Program with a bachelor’s degree in mathematics in 2018 and completed a master’s degree in mathematics this past spring. “In this case, we were examining a string duality between F-theory and heterotic string theory in eight dimensions.”
    Hill says the team proved the K3 surfaces they investigated admit four unique ways to slice the surfaces as Jacobian elliptic fibrations, formations of torus-shaped fibers. The researchers constructed explicit equations for each of these fibrations.
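    For orientation, a Jacobian elliptic fibration can be written in the standard Weierstrass form shown below; this is the generic textbook form, not the paper's specific equations for the four fibrations.
```latex
% Generic Weierstrass form of a Jacobian elliptic fibration (illustrative).
\begin{equation*}
  y^{2} = x^{3} + f(t)\,x + g(t),
  \qquad
  \Delta(t) = -16\left(4 f(t)^{3} + 27 g(t)^{2}\right),
\end{equation*}
% where t is a coordinate on the base curve, the torus-shaped fiber over t is
% the elliptic curve in (x, y), and fibers degenerate where the discriminant
% \Delta(t) vanishes.
```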
    “An important part of this research involves identifying certain geometric building blocks, called ‘divisors,’ within each K3 surface,” he says. “Using these divisors, crucial geometric information is then encoded in an abstract graph.”
    This process, Hill says, enables researchers to investigate symmetries of underlying physical theories demonstrated by the graph.
    “You can think of this family of surfaces as a loaf of bread and each fibration as a ‘slice’ of that loaf,” says Malmendier, associate professor in USU’s Department of Mathematics and Statistics. “By examining the sequence of slices, we can visualize, and better understand, the entire loaf.”
    The undertaking described in the paper, he says, represents hours of painstaking “paper and pencil” work to prove theorems of each of the four fibrations, followed by pushing each theorem through difficult algebraic formulas.
    “For the latter part of this process, we used Maple Software and the specialized Differential Geometry Package developed at USU, which streamlined our computational efforts,” Malmendier says.

    Story Source:
    Materials provided by Utah State University. Original written by Mary-Ann Muffoletto. Note: Content may be edited for style and length.