More stories

  • The unexpected origins of a modern finance tool

    In the early 1600s, the officials running Durham Cathedral, in England, had serious financial problems. Soaring prices had raised expenses. Most cathedral income came from renting land to tenant farmers, who had long leases so officials could not easily raise the rent. Instead, church leaders started charging periodic fees, but these often made tenants furious. And the 1600s, a time of religious schism, was not the moment to alienate church members.
    But in 1626, Durham officials found a formula for fees that tenants would accept. If tenant farmers paid a fee equal to one year’s net value of the land, it earned them a seven-year lease. A fee equal to 7.75 years of net value earned a 21-year lease.
    This was a form of discounting, the now-common technique for evaluating the present and future value of money by assuming a certain rate of return on that money. The Durham officials likely got their numbers from new books of discounting tables. Volumes like this had never existed before, but suddenly local church officials were applying the technique up and down England.
    As financial innovation stories go, this one is unusual. Normally, avant-garde financial tools might come from, well, the financial avant-garde — bankers, merchants, and investors hunting for short-term profits, not clergymen.
    “Most people have assumed these very sophisticated calculations would have been implemented by hard-nosed capitalists, because really powerful calculations would allow you to get an economic edge and increase profits,” says MIT historian William Deringer, an expert in the deployment of quantitative reasoning in public life. “But that was not the primary or only driver in this situation.”
    Deringer has published a new research article about this episode, “Mr. Aecroid’s Tables: Economic Calculations and Social Customs in the Early Modern Countryside,” appearing in the current issue of the Journal of Modern History. In it, he uses archival research to explore how the English clergy started using discounting, and where. And one other question: Why?
    Enter inflation
    Today, discounting is a pervasive tool. A dollar in the present is worth more than a dollar a decade from now, since one can earn money investing it in the meantime. This concept heavily informs investment markets, corporate finance, and even the NFL draft (where trading this year’s picks yields a greater haul of future picks). As the historian William N. Goetzmann has written, the related idea of net present value “is the most important tool in modern finance.” But while discounting was known as far back as the mathematician Leonardo of Pisa (often called Fibonacci) in the 1200s, why were English clergy some of its most enthusiastic early adopters?

    The answer involves a global change in the 1500s: the “price revolution,” in which things began costing more, after a long period when prices had been constant. That is, inflation hit the world.
    “People up to that point lived with the expectation that prices would stay the same,” Deringer says. “The idea that prices changed in a systematic way was shocking.”
    For Durham Cathedral, inflation meant the organization had to pay more for goods while three-quarters of its revenues came from tenant rents, which were hard to alter. Many leases were complex, and some were locked in for a tenant’s lifetime. The Durham leaders did levy intermittent fees on tenants, but that led to angry responses and court cases.
    Meanwhile, tenants had additional leverage against the Church of England: religious competition following the Reformation. England’s political and religious schisms would lead it to a midcentury civil war. Maybe some private landholders could drastically increase fees, but the church did not want to lose followers that way.
    “Some individual landowners could be ruthlessly economic, but the church couldn’t, because it’s in the midst of incredible political and religious turmoil after the Reformation,” Deringer says. “The Church of England is in this precarious position. They’re walking a line between Catholics who don’t think there should have been a Reformation, and Puritans who don’t think there should be bishops. If they’re perceived to be hurting their flock, it would have real consequences. The church is trying to make the finances work but in a way that’s just barely tolerable to the tenants.”
    Enter the books of discounting tables, which allowed local church leaders to finesse the finances. Essentially, discounting more carefully calibrated the upfront fees tenants would periodically pay. Church leaders could simply plug in the numbers as compromise solutions.

    In this period, England’s first prominent discounting book with tables was published in 1613; its most enduring, Ambrose Acroyd’s “Table of Leasses and Interest,” dated to 1628-29. Acroyd was the bursar at Trinity College at Cambridge University, which as a landholder (and church-affiliated institution) faced the same issues concerning inflation and rent. Durham Cathedral began using off-the-shelf discounting formulas in 1626, resolving decades of localized disagreement as well.
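    To make the arithmetic behind such tables concrete, here is a minimal sketch, in Python, of the standard present-value calculation that books of discounting tables encoded: the number of "years' purchase" a lease of a given term is worth at an assumed interest rate. The 10 percent rate below is purely illustrative; it is not a claim about the rate Acroyd or the Durham officials used, and the sketch does not attempt to reproduce the specific Durham renewal fees described above.

```python
def years_purchase(n_years: int, rate: float) -> float:
    """Present value of 1 unit of annual net income received for n_years,
    discounted at an annual interest rate. Early modern discounting tables
    tabulated this quantity, the "years' purchase" a lease term was worth."""
    return (1 - (1 + rate) ** -n_years) / rate

# Illustrative rate only -- not the historical figure.
rate = 0.10
for term in (7, 14, 21):
    print(f"{term:>2}-year lease is worth {years_purchase(term, rate):5.2f} years' purchase at {rate:.0%}")
```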
    Performing fairness
    The discounting tables did not work only because the price was right. Once the practice had circulated among the clergy and become popular throughout England, local leaders could justify using the books because others were doing the same. The clergy were “performing fairness,” as Deringer puts it.
    “Strict calculative rules assured tenants and courts that fines were reasonable, limiting landlords’ ability to maximize revenues,” Deringer writes in the new article.
    To be sure, local church leaders in England were using discounting for their own economic self-interest. It just wasn’t the largest short-term economic self-interest possible. And it was a sound strategy.
    “In Durham they would fight with tenants every 20 years [in the 1500s] and come to a new deal, but eventually that evolves into these sophisticated mechanisms, the discounting tables,” Deringer adds. “And you get standardization. By about 1700, it seems like these procedures are used everywhere.”
    Thus, as Deringer writes, “mathematical tables for setting fines were not so much instruments of a capitalist transformation as the linchpin holding together what remained of an older system of customary obligations stretched nearly to breaking by macroeconomic forces.”
    Once discounting was widely introduced, it never went away. Deringer’s Journal of Modern History article is part of a larger book project he is currently pursuing, about discounting in many facets of modern life.
    Deringer was able to piece together the history of discounting in 17th-century England thanks in part to archival clues. For instance, Durham University owns a 1686 discounting book self-described as an update to Acroyd’s work; that copy was owned by a Durham Cathedral administrator in the 1700s. Of the 11 surviving copies of Acroyd’s work, two are at Canterbury Cathedral and Lincoln Cathedral.
    Hints like these helped Deringer recognize how interested church leaders were in discounting, and further research showed him that this chapter in the history of discounting is not merely about finance; it also opens a new window onto the turbulent 1600s.
    “I never expected to be researching church finances; I didn’t expect it to have anything to do with the countryside, landlord-tenant relationships, and tenant law,” Deringer says. “I was seeing this as an interesting example of a story about bottom-line economic calculation, and it wound up being more about this effort to use calculation to resolve social tensions.”

  • Study offers a better way to make AI fairer for everyone

    In a new paper, researchers from Carnegie Mellon University and Stevens Institute of Technology show a new way of thinking about the fairness of AI decisions. They draw on a well-established tradition known as social welfare optimization, which aims to make decisions fairer by focusing on the overall benefits and harms to individuals. This method can be used to evaluate the industry-standard assessment tools for AI fairness, which look at approval rates across protected groups.
    “In assessing fairness, the AI community tries to ensure equitable treatment for groups that differ in economic level, race, ethnic background, gender, and other categories,” explained John Hooker, professor of operations research at the Tepper School at Carnegie Mellon, who coauthored the study and presented the paper at the International Conference on the Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR) on May 29 in Uppsala, Sweden. The paper received the Best Paper Award.
    Imagine a situation where an AI system decides who gets approved for a mortgage or who gets a job interview. Traditional fairness methods might only ensure that the same percentage of people from different groups get approved. But what if being denied a mortgage has a much bigger negative impact on someone from a disadvantaged group than on someone from an advantaged group? By employing a social welfare optimization method, AI systems can make decisions that lead to better outcomes for everyone, especially for those in disadvantaged groups.
    The study focuses on “alpha fairness,” a family of criteria for balancing fairness against the total benefit delivered to everyone. A single parameter, alpha, controls how far that balance tilts toward fairness or toward efficiency, depending on the situation.
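    As a rough illustration of the idea (not code from the paper), the sketch below implements the textbook alpha-fair social welfare function and shows how the preferred allocation between an advantaged and a disadvantaged group shifts as alpha grows. The utility numbers are hypothetical.

```python
import numpy as np

def alpha_fair_welfare(utilities, alpha):
    """Alpha-fair social welfare of a vector of individual utilities.

    alpha = 0 gives the utilitarian sum (pure efficiency),
    alpha = 1 gives the sum of logs (proportional fairness),
    and larger alpha weights the worst-off individuals more heavily.
    Utilities are assumed to be positive.
    """
    u = np.asarray(utilities, dtype=float)
    if alpha == 1.0:
        return np.sum(np.log(u))
    return np.sum(u ** (1.0 - alpha)) / (1.0 - alpha)

# Two hypothetical allocations of benefit to an advantaged and a
# disadvantaged group: the first maximizes the total, the second
# gives up some total benefit to help the worse-off group.
allocations = {"efficient": [9.0, 1.0], "more equal": [6.0, 3.0]}

for alpha in (0.0, 1.0, 2.0):
    best = max(allocations, key=lambda k: alpha_fair_welfare(allocations[k], alpha))
    print(f"alpha={alpha}: preferred allocation = {best}")
```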
    Hooker and his coauthors show how social welfare optimization can be used to compare different assessments for group fairness currently used in AI. By using this method, we can understand the benefits of applying different group fairness tools in different contexts. It also ties these group fairness assessment tools to the larger world of fairness-efficiency standards used in economics and engineering.
    “Our findings suggest that social welfare optimization can shed light on the intensely discussed question of how to achieve group fairness in AI,” said study coauthor Leben.
    The study is important for both AI system developers and policymakers. Developers can create more equitable and effective AI models by adopting a broader approach to fairness and understanding the limitations of fairness measures. It also highlights the importance of considering social justice in AI development, ensuring that technology promotes equity across diverse groups in society.

  • How do you know where a fish goes?

    When scientists want to study the long-distance movement of marine animals, they fit each animal with a small device called an acoustic transmitter, or tag, which emits unique signals, or “pings.” These signals are picked up by receivers anchored to the seafloor, which record the date and time of each detection when a tagged animal comes within range.
    Data collected by the receivers are stored until they are retrieved by researchers and shared across members of cooperative acoustic telemetry networks. This information provides valuable insights into animal behavior, migration patterns, habitat preferences and ecosystem dynamics — all of which are vital for conservation and wildlife management efforts.
    However, this method is not without limitations. Receivers must be physically retrieved to access the data they have collected, so in marine studies they are often placed near the coast for easy access. Their distribution can also be uneven, with some areas having few receivers and others having many, which can bias data collection, especially for animals that move across large distances.
    A pioneering study by researchers at Florida Atlantic University and the Smithsonian Environmental Research Center addresses these limitations by filling in the gaps in sporadic detection data and tackles the tradeoff between spatial coverage and cost. Using a movement model, researchers reconstructed animal tracks and leveraged an iterative process to measure the accuracy and precision of these reconstructions from acoustic telemetry data.
    Results of the study, published in the journal Methods in Ecology and Evolution, demonstrate how researchers can apply these techniques and measure the accuracy and precision of the methods to their study sites.
    For the study, researchers simulated animal tracks on a computer, then tested how well their method could accurately reconstruct the tracks if they received detection data only from a cooperative acoustic telemetry array. While most of the data used were simulated, they further tested their methodology with data from highly migratory blacktip sharks (Carcharhinus limbatus) to demonstrate how this method can be applied ecologically.
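    The sketch below is a toy version of that workflow, not the authors' movement model: it simulates a random-walk track, records detections at a hypothetical line of coastal receivers, reconstructs the track by simple interpolation between detections, and scores the reconstruction with a root-mean-square error. All parameters (receiver spacing, detection range, step size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup: a 2-D random-walk "animal" and a sparse line of receivers ---
n_steps = 500
track = np.cumsum(rng.normal(0.0, 1.0, size=(n_steps, 2)), axis=0)  # true positions (km)

receivers = np.column_stack([np.linspace(-20, 20, 9), np.zeros(9)])  # hypothetical coastal array
detection_range = 3.0  # km within which a ping is logged

# --- Detections: time and receiver position whenever the animal is in range ---
det_times, det_pos = [], []
for t, p in enumerate(track):
    dists = np.linalg.norm(receivers - p, axis=1)
    j = np.argmin(dists)
    if dists[j] <= detection_range:
        det_times.append(t)
        det_pos.append(receivers[j])
det_times, det_pos = np.array(det_times), np.array(det_pos)

# --- Naive reconstruction: linearly interpolate between detection locations ---
recon = np.column_stack([
    np.interp(np.arange(n_steps), det_times, det_pos[:, k]) for k in range(2)
])

# --- Accuracy metric: root-mean-square error between true and reconstructed track ---
rmse = np.sqrt(np.mean(np.sum((track - recon) ** 2, axis=1)))
print(f"{len(det_times)} detections, RMSE = {rmse:.2f} km")
```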
    Findings demonstrate that their novel method can improve track reconstruction, even in regions with uneven receiver coverage. The track reconstruction methods performed well in coastal regions from Palm Beach County to Long Island, minimizing the clustering effect of high densities of receivers and closing the gaps in some regions that were lacking receiver coverage. Performance was primarily affected by the presence or absence of receivers, and to a lesser extent by receiver density and water depth, depending on the grid resolution.

    “Our approach could significantly reduce gaps in data collection and improve the reliability of ecological insights,” said Beth Bowers, Ph.D., senior author and a post-doctoral researcher at the Smithsonian Environmental Research Center, who conducted the research as a Ph.D. student with her mentor, Stephen Kajiura, Ph.D., in FAU’s Charles E. Schmidt College of Science. “Importantly, this method doesn’t rely on costly field techniques such as motorboat towed tests, which makes it suitable for large-scale studies across diverse habitats.”
    This new method increases the utility of acoustic telemetry technology and provides a framework for future studies to assess the accuracy and precision of animal movement calculated from track reconstructions that use acoustic telemetry.
    “Results from our study will enable resource managers and others to infer the reliability of ecological results in their decision-making processes,” said Kajiura, co-author and a professor of biological sciences, FAU College of Science.
    To foster collaboration and innovation, the researchers have made their data repository accessible, empowering fellow scientists to adapt and apply the methodology to their respective study organisms and habitats, whether that encompasses marine, freshwater or terrestrial habitats.
    “Importantly, implications of our findings extend beyond marine environments, offering a transformative approach to wildlife monitoring across aquatic and terrestrial landscapes,” said Bowers.

  • People feel more connected to ‘tweezer-like’ bionic tools that don’t resemble human hands

    Some say the next step in human evolution will be the integration of technology with flesh. Now, researchers have used virtual reality to test whether humans can feel embodiment — the sense that something is part of one’s body — toward prosthetic “hands” that resemble a pair of tweezers. They report June 6 in the journal iScience that participants felt just as strong a sense of embodiment for the tweezer-hands as for a virtual human hand, and were faster and more accurate at completing motor tasks in virtual reality with the tweezers than with the virtual hand.
    “For our biology to merge seamlessly with tools, we need to feel that the tools are part of our body,” says first author and cognitive neuroscientist Ottavia Maddaluno, who conducted the work at the Sapienza University of Rome and the Santa Lucia Foundation IRCCS with Viviana Betti. “Our findings demonstrate that humans can experience a grafted tool as an integral part of their own body.”
    Previous studies have shown that tool use induces plastic changes in the human brain, as does the use of anthropomorphic prosthetic limbs. However, an open scientific question is whether humans can embody bionic tools or prostheses that don’t resemble human anatomy.
    To investigate this possibility, the researchers used virtual reality to conduct a series of experiments on healthy participants. In the virtual reality environment, participants had either a human-like hand or a “bionic tool” resembling a large pair of tweezers grafted onto the end of their wrist. To test their motor ability and dexterity, participants were asked to pop bubbles of a specific color (by pinching them with their tweezers or between their index finger and thumb). For this simple task, the researchers found that participants were faster and more accurate at popping virtual bubbles when they had tweezer-hands.
    Next, the team used a test called the “cross-modal congruency task” to compare implicit or unconscious embodiment for the virtual hand and bionic tool. During this test, the researchers applied small vibrations to the participants’ fingertips and asked them to identify which fingers were stimulated. At the same time, a flickering light was displayed on the virtual reality screen, either on the same finger as the tactile stimulus or on a different finger. By comparing the participants’ accuracy and reaction times during trials with matched and mismatched stimuli, the researchers were able to assess how distracted they were by the visual stimulus.
    “This is an index of how much of a mismatch there is in your brain between what you feel and what you see,” says Maddaluno. “But this mismatch could only happen if your brain thinks that what you see is part of your own body; if I don’t feel that the bionic tool that I’m seeing through virtual reality is part of my own body, the visual stimulus should not give any interference.”
    In both cases, participants were faster and more accurate at identifying which of their real fingers were stimulated during trials with matched tactile and visual stimuli, indicating that participants felt a sense of embodiment toward both the virtual human hand and the tweezer-hands.

    However, there was a bigger difference between matched and mismatched trials when participants had tweezer-hands rather than human hands, indicating that the non-anthropomorphic prosthesis produced an even greater sense of embodiment. The researchers speculate that this is due to the tweezer-hands’ relative simplicity compared to a human-like hand, which might make them easier for the brain to compute and accept.
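    As a hedged illustration of how such an effect is typically quantified (the numbers below are invented, not data from the study), the cross-modal congruency effect can be computed as the difference in mean reaction time between mismatched and matched trials for each virtual effector:

```python
import numpy as np

rng = np.random.default_rng(1)

def congruency_effect(rt_congruent, rt_incongruent):
    """Cross-modal congruency effect: mean RT cost of a mismatched visual cue.

    A larger effect means the irrelevant visual stimulus on the virtual
    effector interfered more with judging the tactile stimulus, which is
    commonly read as stronger embodiment of that effector.
    """
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

# Hypothetical reaction times (seconds); values are made up for illustration.
trials = {
    "virtual hand": {
        "congruent": rng.normal(0.55, 0.05, 100),
        "incongruent": rng.normal(0.60, 0.05, 100),
    },
    "tweezer-hands": {
        "congruent": rng.normal(0.54, 0.05, 100),
        "incongruent": rng.normal(0.63, 0.05, 100),
    },
}

for effector, rts in trials.items():
    cce = congruency_effect(rts["congruent"], rts["incongruent"])
    print(f"{effector}: congruency effect = {cce * 1000:.0f} ms")
```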
    “In terms of the pinching task, the tweezers are functionally similar to a human hand, but simpler, and simple is also better computationally for the brain,” says Maddaluno.
    They note that it could also relate to the “uncanny valley” hypothesis: the virtual human hands may have been similar enough to real hands, yet subtly off, to feel eerie and so undermine full embodiment.
    In addition to the tweezer-hands, the researchers also tested a wrench-shaped bionic tool and a virtual human hand holding a pair of tweezers. They found evidence of embodiment in all cases, but the participants had higher embodiment and were more dexterous when the tweezers were grafted directly onto their virtual wrists than when they held them in their virtual hand.
    Participants also displayed a higher sense of embodiment for the bionic tools when they had the opportunity to explore the virtual reality environment before undertaking the cross-modal congruency test. “During the cross-modal congruency task participants had to stay still, whereas during the motor task, they actively interacted with the virtual environment, and these interactions in the virtual environment induce a sense of agency,” says Maddaluno.
    Ultimately, the researchers say that this study could inform robotics and prosthetic limb design. “The next step is to study if these bionic tools could be embodied in patients that have lost limbs,” says Maddaluno. “And we also want to investigate the plastic changes that this kind of bionic tool can induce in the brains of both healthy participants and amputees.”

  • Novel AI method could improve tissue, tumor analysis and advance treatment of disease

    Researchers at the University of Michigan and Brown University have developed a new computational method to analyze complex tissue data that could transform our current understanding of diseases and how we treat them.
    Integrative and Reference-Informed tissue Segmentation, or IRIS, is a novel machine learning and artificial intelligence method that gives biomedical researchers the ability to view more precise information about tissue development, disease pathology and tumor organization.
    The findings are published in the journal Nature Methods.
    IRIS draws from data generated by spatially resolved transcriptomics (SRT) and uniquely leverages single-cell RNA sequencing data as the reference to examine multiple layers of tissue simultaneously and distinguish various regions with unprecedented accuracy and computational speed.
    Unlike traditional techniques that yield averaged data from tissue samples, SRT provides a much more granular view, pinpointing thousands of locations within a single tissue section. However, the challenge has always been to interpret this vast and detailed dataset, says Xiang Zhou, professor of biostatistics at the University of Michigan School of Public Health and senior author of the study.
    Interpreting large and complex datasets is where IRIS becomes a helpful tool — its algorithms sort through the data to identify and segment various functional domains, such as tumor regions, and provide insights into cell interactions and disease progression mechanisms.
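    The toy sketch below illustrates the general flavor of reference-informed segmentation rather than the IRIS algorithm itself: it estimates each spot's cell-type composition against hypothetical single-cell reference profiles and then clusters spots by composition into domains. Real methods such as IRIS additionally model spatial neighborhoods and multiple tissue sections, which this sketch omits.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# --- Hypothetical inputs -------------------------------------------------
# reference_profiles: mean expression of each cell type from scRNA-seq (types x genes)
# spot_expression:    SRT measurements (spots x genes), each spot a mix of cell types
n_types, n_genes, n_spots = 4, 50, 300
reference_profiles = rng.gamma(2.0, 1.0, size=(n_types, n_genes))
true_props = rng.dirichlet(np.ones(n_types), size=n_spots)
spot_expression = true_props @ reference_profiles + rng.normal(0, 0.1, (n_spots, n_genes))

# --- Step 1: estimate the cell-type composition of each spot -------------
# Non-negative least squares per spot would be more principled; a plain
# least-squares fit followed by clipping keeps this sketch short.
coeffs, *_ = np.linalg.lstsq(reference_profiles.T, spot_expression.T, rcond=None)
props = np.clip(coeffs.T, 0, None)
props /= props.sum(axis=1, keepdims=True)

# --- Step 2: cluster spots by composition into candidate spatial domains --
domains = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(props)
print("spots per domain:", np.bincount(domains))
```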
    “Different from existing methods, IRIS directly characterizes the cellular landscape of the tissue and identifies biologically interpretable spatial domains, thus facilitating the understanding of the cellular mechanism underlying tissue function,” said U-M doctoral alum Ying Ma, assistant professor of biostatistics at Brown University, who helped develop IRIS.
    “We anticipate that IRIS will serve as a powerful tool for large-scale multisample spatial transcriptomics data analysis across a wide range of biological systems.”
    Zhou and Ma applied IRIS to six SRT datasets and compared its performance to other commonly used spatial domain detection methods. Ultimately, as SRT technology continues to grow in popularity and use, the researchers hope methods like IRIS will help identify targets for clinical intervention and drug development, improving personalized treatment plans and patient health outcomes.
    “The computational approach of IRIS pioneers a novel avenue for biologists to delve into the intricate architecture of complex tissues, offering unparalleled opportunities to explore the dynamic processes shaping tissue structure during development and disease progression,” Zhou said. “Through characterizing refined tissue structures and elucidating their alterations during disease states, IRIS holds the potential to unveil mechanistic insights crucial for understanding and combating various diseases.”

  • Pushing an information engine to its limits

    The molecules that make up the matter around us are in constant motion. What if we could harness that energy and put it to use?
    Over 150 years ago, James Clerk Maxwell theorized that if the motion of individual molecules could be measured accurately, that information could be used to power an engine. Until recently this was only a thought experiment, but technological breakthroughs have made it possible to build working information engines in the lab.
    With funding from the Foundational Questions Institute, SFU Physics professors John Bechhoefer and David Sivak teamed up to build an information engine and test its limits. Their work has greatly advanced our understanding of how these engines function, and a paper led by postdoctoral fellow Johan du Buisson and published recently in Advances in Physics: X summarizes the findings made during their collaboration.
    “We live in a world full of extra unused energy that potentially could be used,” says Bechhoefer. Understanding how information engines function can not only help us put that energy to work, it can also suggest ways that existing engines could be redesigned to use energy more efficiently, and help us learn how biological motors work in organisms and the human body.
    The team’s information engine consists of a tiny bead in a water bath that is held in place with an optical trap. When fluctuations in the water cause the bead to move in the desired direction, the trap can be adjusted to prevent the bead from returning to the place where it was before. By taking accurate measurements of the bead’s location and using that information to adjust the trap, the engine is able to convert the heat energy of the water into work.
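    As a rough illustration of this measure-and-ratchet logic (not the actual experimental parameters or the team's analysis code), the toy simulation below follows a bead in a harmonic trap with thermal noise and an effective gravitational pull, and moves the trap up to the bead whenever a fluctuation carries the bead above the trap centre:

```python
import numpy as np

rng = np.random.default_rng(3)

# --- Toy overdamped Langevin simulation of a feedback "information ratchet" ---
# A bead in a harmonic optical trap feels gravity and thermal kicks from the
# water. Whenever a fluctuation carries the bead above the trap centre, the
# trap is recentred on the bead so the gain is locked in. Units and parameter
# values are arbitrary, chosen only for illustration.
k = 1.0        # trap stiffness
gamma = 1.0    # drag coefficient
kT = 1.0       # thermal energy
f_g = 0.3      # effective gravitational force on the bead
dt = 1e-3      # time step
n_steps = 200_000

x = 0.0        # bead position
x_trap = 0.0   # trap centre

for _ in range(n_steps):
    force = -k * (x - x_trap) - f_g
    x += (force / gamma) * dt + np.sqrt(2 * kT * dt / gamma) * rng.normal()
    if x > x_trap:          # measurement + feedback: ratchet the trap upward
        x_trap = x

# Gravitational potential energy gained by ratcheting thermal fluctuations upward.
print(f"height gained: {x_trap:.2f}, energy harvested = {f_g * x_trap:.2f} kT")
```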
    To understand how fast and efficient the engine could be, the team tested multiple variables such as the mass of the bead and sampling frequency, and developed algorithms to reduce the uncertainty of their measurements.
    “Stripped down to its simplest essence, we can systematically understand how things like temperature and the size of the system changes the things we can take advantage of,” Sivak says. “What are the strategies that work best? How do they change with all those different properties?”
    The team was able to achieve the fastest speed recorded to date for an information engine, approximately ten times faster than the speed of E. coli, and comparable to the speed of motile bacteria found in marine environments.

    Next, the team wanted to learn if an information engine could harvest more energy than it costs to run. “In equilibrium, that’s always a losing game,” Bechhoefer says. “The costs of gathering the information and processing it will always exceed what you’re getting out of it, but when you have an environment that has extra energy, [molecules doing] extra jiggling around, then that can change the balance if it’s strong enough.”
    They found that in a non-equilibrium environment, where the engine was in a heat bath with a higher temperature than the measuring apparatus, it could output significantly more power than it cost to run.
    Nearly all of the energy on Earth comes from the sun, and it eventually radiates back out into space. That directional flow of energy manifests itself in many different ways, such as wind or ocean currents that can be harvested. Understanding the principles behind information engines can help us make better use of that energy.
    “We’re coming at [energy harvesting] from a very different point of view, and we hope that this different perspective can lead to some different insights about how to be more efficient,” Bechhoefer says.
    The pair is looking forward to working together on other projects in the future. “We were lucky to get a joint grant together. That really helped with the collaboration,” says Bechhoefer.
    Sivak, a theorist, and Bechhoefer, an experimentalist, bring complementary approaches to their work, and they have been able to attract trainees who want to work with both. “We have different styles in terms of how we go about mentoring and leading a group,” says Sivak. “Our students and post-docs can benefit from both approaches.”

  • Artificial intelligence blood test provides a reliable way to identify lung cancer

    Using artificial intelligence technology to identify patterns of DNA fragments associated with lung cancer, researchers from the Johns Hopkins Kimmel Cancer Center and other institutions have developed and validated a liquid biopsy that may help identify lung cancer earlier.
    In a prospective study published June 3 in Cancer Discovery, the team demonstrated that artificial intelligence technology could identify people more likely to have lung cancer based on DNA fragment patterns in the blood. The study enrolled about 1,000 participants with and without cancer who met the criteria for traditional lung cancer screening with low-dose computed tomography (CT). Individuals were recruited to participate at 47 centers in 23 U.S. states. By helping to identify patients most at risk and who would benefit from follow-up CT screening, this new blood test could potentially boost lung cancer screening and reduce death rates, according to computer modeling by the team.
    “We have a simple blood test that could be done in a doctor’s office that would tell patients whether they have potential signs of lung cancer and should get a follow-up CT scan,” says the study’s corresponding author, Victor E. Velculescu, M.D., Ph.D., professor of oncology and co-director of the Cancer Genetics and Epigenetics program at the Johns Hopkins Kimmel Cancer Center.
    Lung cancer is the deadliest cancer in the United States, according to the National Cancer Institute, and worldwide, according to the World Health Organization. Yearly screening with CT scans in high-risk patients can help detect lung cancers early, when they are most treatable, and help avert lung cancer deaths. Screening is recommended by the U.S. Preventive Services Task Force for 15 million people nationally who are between ages 50 and 80 and have a smoking history, yet only about 6%-10% of eligible individuals are screened each year. People may be reluctant to follow through on screening, Velculescu explains, because of the time it takes to arrange and attend an appointment and the low doses of radiation the scan exposes them to.
    To help overcome some of these hurdles, Velculescu and his colleagues developed a test over the past five years that uses artificial intelligence to detect patterns of DNA fragments found in patients with lung cancer. It takes advantage of differences in how DNA is packaged in normal and cancer cells. DNA is neatly and consistently folded up in healthy cells, almost like a rolled-up ball of yarn, but DNA in cancer cells is more disorganized. When both types of cells die, fragments of DNA end up in the blood. The DNA fragments in patients with cancer tend to be more chaotic and irregular than the DNA fragments found in individuals who do not have cancer.
    The team trained artificial intelligence software to identify the specific patterns of DNA fragments seen in the blood of 576 people with or without lung cancer. Then, they verified that the method worked in a second group of 382 people with and without cancer. Based on their analyses, the test has a negative predictive value of 99.8%, meaning that of every 1,000 people who test negative, only about two would actually have lung cancer and be missed.
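    For readers who want the arithmetic, negative predictive value is simply the share of negative results that are truly negative; the counts below are hypothetical and chosen only to match the reported figure:

```python
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """Fraction of negative test results that are truly cancer-free."""
    return true_negatives / (true_negatives + false_negatives)

# Hypothetical counts: out of 1,000 negative results, 998 are true negatives
# and 2 are missed cancers.
print(f"NPV = {negative_predictive_value(998, 2):.1%}")
```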
    The group’s computer simulations showed that if the test boosted the rate of lung cancer screening to 50% within five years, it could quadruple the number of lung cancers detected and increase the proportion of cancers detected early — when they are most treatable — by about 10%. That could prevent about 14,000 cancer deaths over five years.
    “The test is inexpensive and could be done at a very large scale,” Velculescu says. “We believe it will make lung cancer screening more accessible and help many more people get screened. This will lead to more cancers being detected and treated early.”
    The test is currently available through DELFI Diagnostics for use as a laboratory-based test under the Clinical Laboratory Improvement Amendments. However, the team plans to seek approval from the U.S. Food and Drug Administration for lung cancer screening. Velculescu and colleagues also plan to study whether a similar approach could be used to detect other types of cancer.

    Robert B. Scharpf of Johns Hopkins co-authored the study. Additional co-authors were from the Cleveland Clinic, DELFI Diagnostics, Medicus Economics LLC, Miami Cancer Institute, the Pan American Center for Oncology, Washington University, Centura Health, Vanderbilt Health, Stratevi, Massachusetts General Hospital, the Medical University of South Carolina, the Department of Veterans Affairs, the Perelman School of Medicine at the University of Pennsylvania, New York University Langone Health, Allegheny Health Network and Memorial Sloan Kettering Cancer Center.
    The work was supported in part by DELFI Diagnostics, the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation, Stand Up To Cancer-LUNGevity-American Lung Association Lung Cancer Interception Dream Team Translational Research Grant, Stand Up To Cancer-DCS International Translational Cancer Research Dream Team Grant, the Gray Foundation, The Honorable Tina Brozman Foundation, the Commonwealth Foundation, the Cole Foundation and the National Institutes of Health.
    Velculescu and Scharpf are inventors on patent applications submitted by The Johns Hopkins University related to cell-free DNA for cancer detection that have been licensed to DELFI Diagnostics, LabCorp, Qiagen, Sysmex, Agios, Genzyme, Esoterix, Ventana and ManaT Bio. Velculescu divested his equity in Personal Genome Diagnostics (PGDx) to LabCorp in February 2022. Velculescu is a founder of DELFI Diagnostics, serves on the board of directors, and owns DELFI Diagnostics stock. Scharpf is a founder and consultant of DELFI Diagnostics and owns DELFI Diagnostics stock. Velculescu, Scharpf and Johns Hopkins receive royalties and fees from the company. The Johns Hopkins University also owns equity in DELFI Diagnostics. Velculescu is an adviser to Viron Therapeutics and Epitope. These relationships are managed by Johns Hopkins in accordance with its conflict-of-interest policies.

  • Seeking social proximity improves flight routes among pigeons

    A new study conducted by Dr. Edwin Dalmaijer, a cognitive neuroscientist at the University of Bristol, UK, looked at the social influences on pigeon flight routes. Comparing the flight patterns of pairs of pigeons to a computer model, the researcher found that flight paths are improved as younger birds learn the route from older birds and also make route improvements, leading to overall more efficient routes over generations. The study publishes June 6 in the open-access journal PLOS Biology.
    Pigeons are known for their ability to travel long distances to specific locations. Like many birds, they navigate using the sun and by sensing the Earth’s magnetic field. Though these senses help pigeons find their bearings, they do not usually generate the most efficient routes.
    Dr. Dalmaijer gathered data from previously published studies where pigeons that were familiar with a route were paired with pigeons that had not flown the route before. These data demonstrated that when the inexperienced pigeon is introduced, the pair flies a more direct route to their destination. However, these previous studies could not determine how the paired birds generate more efficient routes.
    Dr. Dalmaijer compared the pigeon flight data to a computer model that prioritized four main factors. These four factors represent what might be involved in choosing a flight path with minimal cognition, including: direction to the goal, representing the bird’s internal compass; proximity to the other pigeon; the remembered route; and general consistency, since the birds are unlikely to make erratic turns.
    In the model, the simulated birds, referred to as “agents,” made over 60 journeys. Once every 12 journeys, one of the agents was replaced with an agent that had not made the trip before, simulating a young bird. This resulted in a generational increase in the efficiency of the flight routes. These improvements are similar to those seen in the real-life data from pigeon pairs, though the pigeon data did not match the optimal version of the model, likely because pigeons are influenced by additional factors that the model could not account for.
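    To make the setup concrete, here is a minimal sketch of how one step of such an agent's heading could be computed from the four weighted factors described above. The weights, positions, and update rule are illustrative assumptions, not Dr. Dalmaijer's published implementation.

```python
import numpy as np

def unit(v):
    """Normalize a 2-D vector; returns it unchanged if it has zero length."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def next_heading(pos, goal, partner_pos, remembered_waypoint, prev_heading,
                 w_goal=0.35, w_social=0.35, w_route=0.2, w_inertia=0.1):
    """Blend the four factors into one unit heading vector.

    The weights are illustrative assumptions; the published model has its
    own parameterization.
    """
    pull_goal = unit(goal - pos)                      # internal compass toward the goal
    pull_social = unit(partner_pos - pos)             # proximity to the other pigeon
    pull_route = (unit(remembered_waypoint - pos)     # remembered route, if any
                  if remembered_waypoint is not None else pull_goal)
    blended = (w_goal * pull_goal + w_social * pull_social
               + w_route * pull_route + w_inertia * prev_heading)  # consistency term
    return unit(blended)

# Example step: an experienced bird slightly north of its naive partner.
h = next_heading(pos=np.array([10.0, 2.0]),
                 goal=np.array([100.0, 0.0]),
                 partner_pos=np.array([10.0, -1.0]),
                 remembered_waypoint=np.array([15.0, 4.0]),
                 prev_heading=np.array([1.0, 0.0]))
print("next heading:", np.round(h, 3))
```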
    When some of the parameters of the model were removed, such as memory of the route or the desire to be near the other pigeon, there was no generational improvement. “These results suggest that stepwise improvement between generations can occur when individuals simply seek proximity to others,” Dr. Dalmaijer said.
    The model demonstrates learning in both directions. As expected, the younger agent benefits from the older agent by learning the route. However, it also shows that the older agent benefits from the younger agent: since younger agents are not following an internal route, they are more strongly oriented toward the final destination. The two agents’ mutual pull toward social proximity balances these tendencies, leading to an overall more efficient route. Additionally, these findings may be applicable to other species beyond pigeons, such as ants and some types of fish, which also make journeys based on memory and social factors.
    Dr. Dalmaijer adds, “I grew up in the Netherlands, in a city where pigeons constantly walk into oncoming bicycle traffic, so I don’t have the highest opinion of pigeon intellect. On the one hand, this study vindicates that, by showing the gradual improvement in route efficiency also emerges in ‘dumb’ artificial agents. On the other hand, I have gained a huge respect for all the impressive work done in pigeon navigation and cumulative culture, and even a little bit for the humble pigeon (as long as they stay away from my bike).”