More stories

  •

    Peers crucial in shaping boys’ confidence in math skills

    Boys are good at math, girls not so much? A study from the University of Zurich has analyzed the social mechanisms that contribute to the gender gap in math confidence. While peer comparisons seem to play a crucial role for boys, girls’ subjective evaluations are more likely to be based on objective performance.
    Research has shown that in Western societies, the average secondary school girl has less confidence in her mathematical abilities than the average boy of the same age. At the same time, no significant difference has been found between girls’ and boys’ performance in mathematics. This phenomenon is often framed as girls not being confident enough in their abilities, or that boys might in fact be overconfident.
    This math confidence gap has far-reaching consequences: self-perceived competence influences educational and occupational choices, and young people pursue the university subjects and careers they believe they are talented in. As a result, women are underrepresented in STEM (science, technology, engineering, math) subjects at university level and in high-paying STEM careers.
    Peer processes provide nuanced insights into varying self-perceptions
    A study from the University of Zurich (UZH) focuses on a previously neglected aspect of the math confidence gap: the role of peer relationships. “Especially in adolescence, peers are the primary social reference for individual development. Peer processes that operate through friendship networks determine a wide range of individual outcomes,” said the study’s lead author Isabel Raabe from the Department of Sociology at UZH. The study analyzed data from 8,812 individuals in 358 classrooms in a longitudinal social network analysis.
    As expected, the main predictor of math confidence is individual math grades. While girls translated their grades — more or less directly — into self-assessment, boys with below-average grades nevertheless believed they were good at math.
    Boys tend to be overconfident and sensitive to social processes
    “In general, boys seem to be more sensitive to social processes in their self-perception — they compare themselves more with others for validation and then adjust their confidence accordingly,” Raabe explains. “When they were confronted with girls’ self-assessments in cross-gender friendships, their math confidence tended to be lower.” Peers’ self-assessment was less relevant to girls’ math confidence. Their subjective evaluation seemed to be driven more by objective performance.
    Gender stereotypes did not appear to have negative social consequences for either boys or girls. “We found that confidence in mathematics is often associated with better social integration, both in same-sex and cross-sex friendships,” said Raabe. Thus, there was no evidence of harmful peer norms pressuring girls to underestimate their math skills.
    The results of the study suggest that math skills are more important to boys, who adjust their self-assessment in peer processes, while math confidence does not seem to be socially relevant for girls.
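    The asymmetry the study describes (boys weighting peer comparison, girls weighting grades) can be illustrated with a toy confidence-update model. The update rule, weights, and grading scale below are illustrative assumptions, not the study's statistical model:

```python
import random

def update_confidence(own_grade, peer_confidences, w_peer):
    # Weighted blend of one's own grade and friends' average self-assessment;
    # w_peer is the weight given to peer comparison (0 = grades only).
    peer_avg = sum(peer_confidences) / len(peer_confidences)
    return (1 - w_peer) * own_grade + w_peer * peer_avg

random.seed(0)
grades = [random.uniform(3.0, 6.0) for _ in range(30)]  # Swiss-style 1-6 scale
confidence = list(grades)

# With a high peer weight, self-assessments drift toward the group mean:
# below-average students end up more confident than their grades warrant.
for _ in range(20):
    confidence = [update_confidence(grades[i],
                                    confidence[:i] + confidence[i + 1:],
                                    w_peer=0.6)
                  for i in range(30)]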

  •

    Miniaturizing a laser on a photonic chip

    Lasers have revolutionized the world since the 1960s and are now indispensable in modern applications, from cutting-edge surgery and precise manufacturing to data transmission across optical fibers.
    But as the need for laser-based applications grows, so do challenges. For example, there is a growing market for fiber lasers, which are currently used in industrial cutting, welding, and marking applications.
    Fiber lasers use an optical fiber doped with rare-earth elements (erbium, ytterbium, neodymium, etc.) as their optical gain source (the part that produces the laser’s light). They emit high-quality, high-power beams and are efficient, low-maintenance, durable, and typically smaller than gas lasers. Fiber lasers are also the ‘gold standard’ for low phase noise, meaning that their beams remain stable over time.
    But despite all that, there is a growing demand for miniaturizing fiber lasers on a chip-scale level. Erbium-based fiber lasers are especially interesting, as they meet all the requirements for maintaining a laser’s high coherence and stability. But efforts to miniaturize them have been hampered by the challenge of maintaining their performance at small scales.
    Now, scientists led by Dr Yang Liu and Professor Tobias Kippenberg at EPFL have built the first chip-integrated erbium-doped waveguide laser whose performance approaches that of fiber-based lasers, combining wide wavelength tunability with the practicality of chip-scale photonic integration. The breakthrough is published in Nature Photonics.
    Building a chip-scale laser
    The researchers developed their chip-scale erbium laser using a state-of-the-art fabrication process. They began by constructing a meter-long, on-chip optical cavity (a set of mirrors that provide optical feedback) based on an ultralow-loss silicon nitride photonic integrated circuit.

    “We were able to design the laser cavity to be meter-scale in length despite the compact chip size, thanks to the integration of these microring resonators that effectively extend the optical path without physically enlarging the device,” says Dr. Liu.
    The team then implanted the circuit with high-concentration erbium ions to selectively create the active gain medium necessary for lasing. Finally, they integrated the circuit with a III-V semiconductor pump laser to excite the erbium ions so that they emit light and produce the laser beam.
    To refine the laser’s performance and achieve precise wavelength control, the researchers engineered an innovative intra-cavity design featuring microring-based Vernier filters, a type of optical filter that can select specific frequencies of light.
    The filters allow for dynamic tuning of the laser’s wavelength over a broad range, making it versatile and usable in various applications. This design supports stable, single-mode lasing with an impressively narrow intrinsic linewidth of just 50 Hz.
    It also allows for significant side mode suppression — the laser’s ability to emit light at a single, consistent frequency while minimizing the intensity of other frequencies (‘side modes’). This ensures “clean” and stable output across the light spectrum for high-precision applications.
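    A Vernier filter pairs two ring resonators whose free spectral ranges (FSRs) differ slightly; only resonances of both rings that coincide are selected, extending the effective FSR to FSR1·FSR2/|FSR1−FSR2|. A minimal sketch of that arithmetic, with assumed ring FSRs (the device's actual values may differ):

```python
C_NM_PER_S = 2.998e17  # speed of light in nm/s

def vernier_fsr_ghz(fsr1, fsr2):
    # Effective free spectral range of two cascaded rings (Vernier effect):
    # resonances align only every FSR1*FSR2/|FSR1-FSR2|.
    return fsr1 * fsr2 / abs(fsr1 - fsr2)

def span_nm(delta_f_ghz, wavelength_nm=1550.0):
    # Convert a frequency span to a wavelength span near 1550 nm.
    return wavelength_nm ** 2 * (delta_f_ghz * 1e9) / C_NM_PER_S

eff = vernier_fsr_ghz(192.0, 186.0)  # assumed ring FSRs, in GHz
tuning_span = span_nm(eff)           # a few tens of nm, C/L-band scale
```

    A small FSR mismatch thus converts two narrow-band rings into a filter tunable over tens of nanometers, the same order as the 40 nm range reported.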
    Power, precision, stability, and low noise
    The chip-scale erbium-based laser features output power exceeding 10 mW and a side mode suppression ratio greater than 70 dB, outperforming many conventional systems.

    It also has a very narrow linewidth, meaning the light it emits is very pure and steady, which is important for coherent applications such as sensing, gyroscopes, LiDAR, and optical frequency metrology.
    The microring-based Vernier filter gives the laser broad wavelength tunability across 40 nm within the C- and L-bands (ranges of wavelengths used in telecommunications), surpassing legacy fiber lasers in both tuning and low spectral spurs metrics (“spurs” are unwanted frequencies), while remaining compatible with current semiconductor manufacturing processes.
    Next-generation lasers
    Miniaturizing and integrating erbium fiber lasers into chip-scale devices can reduce their overall costs, making them accessible for portable and highly integrated systems across telecommunications, medical diagnostics, and consumer electronics.
    It can also scale down optical technologies in various other applications, such as LiDAR, microwave photonics, optical frequency synthesis, and free-space communications.
    “The application areas of such a new class of erbium-doped integrated lasers are virtually unlimited,” says Liu.
    The lab spin-off, EDWATEC SA, is an integrated device manufacturer that can now offer rare-earth-ion-doped photonic integrated circuit-based devices, including high-performance amplifiers and lasers.

  •

    Robotic device restores wavelike muscular function involved in processes like digestion, aiding patients with compromised organs

    A team of Vanderbilt researchers has developed a wirelessly activated device that mimics the wavelike muscular function in the esophagus and small intestine responsible for transporting food and viscous fluids for digestion.
    The soft-robotic prototype, which is driven by strong magnets controlled by a wearable external actuator, can aid patients suffering from blockages caused by tumors or those requiring stents. For example, traditional esophageal stents are metal tubes used in patients with esophageal cancer, mostly in an aging population. These patients risk food being blocked from entering the stomach, potentially causing a dangerous situation where food instead enters the lung.
    Restoring the natural motion of peristalsis, the wavelike muscular transport function that takes place inside tubular human organs, “paves the way for next-generation robotic medical devices to improve the quality of life especially for the aging population,” researchers wrote in a new paper in the journal Advanced Functional Materials describing the device.
    The study was led by Xiaoguang Dong, Assistant Professor of Mechanical Engineering, in collaboration with Vanderbilt University Medical Center colleague Dr. Rishi Naik, Assistant Professor of Medicine in the Division of Gastroenterology, Hepatology and Nutrition.
    The device itself consists of a soft sheet of small magnets arrayed in parallel rows that are activated in a precise undulating motion that produces the torque required to pump various solid and liquid cargoes. “Magnetically actuated soft robotic pumps that can restore peristalsis and seamlessly integrate with medical stents have not been reported before,” Dong and the researchers report in the paper.
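    The undulating motion of the magnet rows can be idealized as a traveling wave, with each row's deflection phase-shifted from its neighbor's. A minimal sketch of that idea (the wavelength, frequency, and amplitude are illustrative assumptions, not the device's parameters):

```python
import math

def row_deflections(n_rows, t, wavelength_rows=4.0, freq_hz=1.0, amp_mm=2.0):
    # Deflection of each magnet row under a traveling wave
    # y_i = A*sin(2*pi*(i/lambda - f*t)); the fixed phase offset between
    # neighboring rows is what propels cargo along the sheet.
    return [amp_mm * math.sin(2 * math.pi * (i / wavelength_rows - freq_hz * t))
            for i in range(n_rows)]

snap0 = row_deflections(4, t=0.0)
snap1 = row_deflections(4, t=0.25)  # a quarter period later

# The crest advances by one row between snapshots, as in peristalsis.
crest0 = snap0.index(max(snap0))
crest1 = snap1.index(max(snap1))
```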
    Dong, who also holds appointments in Biomedical Engineering and Electrical and Computer Engineering, said further refinements of the device could aid in other biological processes that may have been compromised by disease. For example, he said the design could be used to help transport human eggs from the ovaries when muscular function in the fallopian tubes has been impaired. In addition, the researchers said with advanced manufacturing processes, the device could be scaled down to adapt to even narrower passageways.
    Vanderbilt University School of Engineering provided funding support. Oak Ridge National Laboratory provided facility support for this research. The research team is affiliated with the Vanderbilt Institute for Surgery and Engineering (VISE).

  •

    Digital babies created to improve infant healthcare

    Researchers at University of Galway have created digital babies to better understand infants’ health in their critical first 180 days of life.
    The team created 360 advanced computer models that simulate the unique metabolic processes of each baby.
    The digital babies are the first sex-specific computational whole-body models representing newborn and infant metabolism with 26 organs, six cell types, and more than 80,000 metabolic reactions.
    Real-life data from 10,000 newborns, including sex, birth weight and metabolite concentrations, enabled the creation and validation of the models, which can be personalised — enabling scientists to investigate an individual infant’s metabolism for precision medicine applications.
    The work was conducted by a team of scientists at University of Galway’s Digital Metabolic Twin Centre and Heidelberg University, led by APC Microbiome Ireland principal investigator Professor Ines Thiele.
    The team’s research aims to advance precision medicine using computational modelling. They describe the computational modelling of babies as seminal, as it enhances understanding of infant metabolism and creates opportunities to improve the diagnosis and treatment of medical conditions during the early days of a baby’s life, such as inherited metabolic diseases.
    Lead author Elaine Zaunseder, Heidelberg University, said: “Babies are not just small adults — they have unique metabolic features that allow them to develop and grow up healthy. For instance, babies need more energy for regulating body temperature due to, for example, their high surface-area-to-mass ratio, but they cannot shiver in the first six months of life, so metabolic processes must ensure the infant keeps warm.

    “Therefore, an essential part of this research work was to identify these metabolic processes and translate them into mathematical concepts that could be applied in the computational model. We captured metabolism in an organ-specific manner, which offers the unique opportunity to model organ-specific energy demands that are very different in infants compared to adults.
    “As nutrition is the fuel for metabolism, we can use breast milk data from real newborns in our models to simulate the associated metabolism throughout the baby’s entire body, including various organs. Based on their nutrition, we simulated the development of digital babies over six months and showed that they will grow at the same rate as real-world infants.”
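    The thermoregulation scaling Zaunseder describes can be caricatured in a toy energy-balance model, in which heat loss grows with surface area (roughly mass to the 2/3 power) while intake and maintenance scale with mass. All parameter values here are illustrative assumptions, not those of the published whole-body models:

```python
def simulate_growth(days=180, mass_kg=3.5):
    # Toy daily energy balance: surplus energy becomes new tissue.
    INTAKE = 100.0        # kcal per kg per day (assumed milk intake)
    MAINTENANCE = 70.0    # kcal per kg per day (assumed)
    THERMO = 20.0         # kcal per kg**(2/3) per day: heat loss scales
                          # with surface area, not mass (assumed constant)
    TISSUE_COST = 5000.0  # kcal per kg of new tissue (assumed)
    for _ in range(days):
        surplus = (INTAKE - MAINTENANCE) * mass_kg - THERMO * mass_kg ** (2 / 3)
        mass_kg += max(surplus, 0.0) / TISSUE_COST
    return mass_kg

final_kg = simulate_growth()  # birth weight roughly doubles over 180 days
```

    Because the surface-area term weighs more heavily at small masses, the smallest infants spend proportionally more of their intake on staying warm, the effect the quote highlights.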
    Professor Ines Thiele, study lead on the project, said: “New-born screening programmes are crucial for detecting metabolic diseases early on, enhancing infant survival rates and health outcomes. However, the variability observed in how these diseases manifest in babies underscores the urgent need for personalised approaches to disease management.
    “Our models allow researchers to investigate the metabolism of healthy infants as well as infants suffering from inherited metabolic diseases, including those investigated in newborn screening. When simulating the metabolism of infants with a disease, the models showed we can predict known biomarkers for these diseases. Furthermore, the models accurately predicted metabolic responses to various treatment strategies, showcasing their potential in clinical settings.”
    Elaine Zaunseder added: “This work is a first step towards establishing digital metabolic twins for infants, providing a detailed view of their metabolic processes. Such digital twins have the potential to revolutionise paediatric healthcare by enabling tailored disease management for each infant’s unique metabolic needs.”

  •

    With programmable pixels, novel sensor improves imaging of neural activity

    Neurons communicate electrically, so to understand how they produce brain functions such as memory, neuroscientists must track how their voltage changes — sometimes subtly — on the timescale of milliseconds. In a new paper in Nature Communications, MIT researchers describe a novel image sensor that can substantially enhance that ability.
    The invention, led by Jie Zhang, a postdoctoral scholar in the lab of Sherman Fairchild Professor Matt Wilson at The Picower Institute for Learning and Memory, is a new take on the standard “CMOS” technology used in scientific imaging. In that standard approach, all pixels turn on and off at the same time — a configuration with an inherent trade-off in which fast sampling means capturing less light. The new chip enables each pixel’s timing to be controlled individually. That arrangement provides a “best of both worlds” in which neighboring pixels can essentially complement each other to capture all the available light without sacrificing speed.
    In experiments described in the study, Zhang and Wilson’s team demonstrates how “pixelwise” programmability enabled them to improve visualization of neural voltage “spikes,” which are the signals neurons use to communicate with each other, and even the more subtle, momentary fluctuations in their voltage that constantly occur between those spiking events.
    “Measuring with single-spike resolution is really important as part of our research approach,” said senior author Wilson, a Professor in MIT’s Departments of Biology and Brain and Cognitive Sciences (BCS), whose lab studies how the brain encodes and refines spatial memories both during wakeful exploration and during sleep. “Thinking about the encoding processes within the brain, single spikes and the timing of those spikes is important in understanding how the brain processes information.”
    For decades Wilson has helped to drive innovations in the use of electrodes to tap into neural electrical signals in real-time, but like many researchers he has also sought visual readouts of electrical activity because they can highlight large areas of tissue and still show which exact neurons are electrically active at any given moment. Being able to identify which neurons are active can enable researchers to learn which types of neurons are participating in memory processes, providing important clues about how brain circuits work.
    In recent years, neuroscientists including co-senior author Ed Boyden, Y. Eva Tan Professor of Neurotechnology in BCS and The McGovern Institute for Brain Research and a Picower Institute affiliate, have worked to meet that need by inventing “genetically encoded voltage indicators” (GEVIs), which make cells glow as their voltage changes in real-time. But as Zhang and Wilson have tried to employ GEVIs in their research, they’ve found that conventional CMOS image sensors were missing a lot of the action. If they operated too fast, they wouldn’t gather enough light. If they operated too slow, they’d miss rapid changes.
    But image sensors have such fine resolution that many pixels are really looking at essentially the same place on the scale of a whole neuron, Wilson said. Recognizing that there was resolution to spare, Zhang applied his expertise in sensor design to invent an image sensor chip that would enable neighboring pixels to each have their own timing. Faster ones could capture rapid changes. Slower-working ones could gather more light. No action or photons would be missed. Zhang also cleverly engineered the required control electronics so they barely cut into the space available for light-sensitive elements on a pixel. This ensured the sensor’s high sensitivity under low-light conditions, Zhang said.

    Two demos
    In the study the researchers demonstrated two ways in which the chip improved imaging of voltage activity of mouse hippocampus neurons cultured in a dish. They ran their sensor head-to-head against an industry-standard scientific CMOS image sensor chip.
    In the first set of experiments the team sought to image the fast dynamics of neural voltage. On the conventional CMOS chip, each pixel had a zippy 1.25 millisecond exposure time. On the pixelwise sensor, each pixel in neighboring groups of four stayed on for 5 milliseconds, but their start times were staggered so that each one turned on and off 1.25 milliseconds after the next. In the study, the team shows that each pixel, because it was on longer, gathered more light, but because one of them captured a new view every 1.25 milliseconds, the group retained the fast temporal resolution. The result was a doubling of the signal-to-noise ratio for the pixelwise chip, which achieves high temporal resolution at a fraction of the sampling rate of conventional CMOS chips, Zhang said.
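    The staggering scheme can be sketched numerically: four neighboring pixels each expose for 5 ms, offset by 1.25 ms, so some pixel in the group finishes a readout every 1.25 ms. This is a sketch of the timing only, not the chip's actual readout logic:

```python
def staggered_windows(n_pixels=4, exposure_ms=5.0, stagger_ms=1.25, n_frames=3):
    # (pixel, start, end) exposure windows: each pixel exposes for 5 ms,
    # but start times are offset by 1.25 ms across the group of four.
    windows = []
    for p in range(n_pixels):
        for f in range(n_frames):
            start = p * stagger_ms + f * exposure_ms
            windows.append((p, start, start + exposure_ms))
    return windows

# Some pixel in the group completes a readout every 1.25 ms, so the group
# samples 4x faster than any single pixel while each gathers 4x the light.
ends = sorted(w[2] for w in staggered_windows())
gaps = [round(b - a, 4) for a, b in zip(ends, ends[1:])]
```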
    Moreover, the pixelwise chip detected neural spiking activities that the conventional sensor missed. And when the researchers compared the performance of each kind of sensor against the electrical readings made with a traditional patch clamp electrode, they found that the staggered pixelwise measurements better matched that of the patch clamp.
    In the second set of experiments, the team sought to demonstrate that the pixelwise chip could capture both the fast dynamics and also the slower, more subtle “subthreshold” voltage variances neurons exhibit. To do so they varied the exposure durations of neighboring pixels in the pixelwise chip, ranging from 15.4 milliseconds down to just 1.9 milliseconds. In this way, fast pixels sampled every quick change (albeit faintly), while slower pixels integrated enough light over time to track even subtle slower fluctuations. By integrating the data from each pixel, the chip was indeed able to capture both fast spiking and slower subthreshold changes, the researchers reported.
    The experiments with small clusters of neurons in a dish were only a proof of concept, Wilson said. His lab’s ultimate goal is to conduct brain-wide, real-time measurements of activity in distinct types of neurons in animals even as they are freely moving about and learning how to navigate mazes. The development of GEVIs and of image sensors like the pixelwise chip that can take full advantage of what they show is crucial to making that goal feasible.

    “That’s the idea of everything we want to put together: large-scale voltage imaging of genetically tagged neurons in freely behaving animals,” Wilson said.
    To achieve this, Zhang added, “We are already working on the next iteration of chips with lower noise, higher pixel counts, time-resolution of multiple kHz, and small form factors for imaging in freely behaving animals.”
    The research is advancing pixel by pixel.
    In addition to Zhang, Wilson, and Boyden, the paper’s other authors are Jonathan Newman, Zeguan Wang, Yong Qian, Pedro Feliciano-Ramos, Wei Guo, Takato Honda, Zhe Sage Chen, Changyang Linghu, Ralph Etienne-Cummings, and Eric Fossum.
    The Picower Institute for Learning and Memory, The JPB Foundation, the Alana Foundation, The Louis B. Thalheimer Fund for Translational Research, the National Institutes of Health, HHMI, Lisa Yang and John Doerr provided support for the research.

  •

    Discovery highlights ‘critical oversight’ in perceived security of wireless networks

    A research team led by Rice University’s Edward Knightly has uncovered an eavesdropping security vulnerability in high-frequency and high-speed wireless backhaul links, widely employed in critical applications such as 5G wireless cell phone signals and low-latency financial trading on Wall Street.
    Contrary to the common belief that these links are inherently secure due to their elevated positioning and highly directive millimeter-wave and sub-terahertz “pencil-beams,” the team exposed a novel method of interception using a metasurface-equipped drone dubbed MetaFly. Their findings were published at a premier security conference, the IEEE Symposium on Security and Privacy, in May 2024.
    “The implications of our research are far-reaching, potentially affecting a broad spectrum of companies, government agencies and individuals relying on these links,” said Knightly, the Sheafor-Lindsay Professor of Electrical and Computer Engineering and professor of computer science. “Importantly, understanding this vulnerability is the first step toward developing robust countermeasures.”
    Wireless backhaul links, crucial for the backbone of modern communication networks connecting end users to the main networks, have been assumed immune from eavesdropping because of their underlying physical and technological barriers.
    Knightly and electrical and computer engineering Ph.D. research assistant Zhambyl Shaikhanov, in collaboration with researchers at Brown University and Northeastern University, have demonstrated how a strong adversary can bypass these defenses with alarming ease. By deploying MetaFly, they intercepted high-frequency signals between rooftops in the Boston metropolitan area, leaving almost no trace.
    “Our discovery highlights a critical oversight in the perceived security of our wireless backhaul links,” Shaikhanov said.
    As wireless technology advances into the realms of 5G and beyond, ensuring the security of these networks is paramount. The Rice team’s work is a significant step toward understanding sophisticated threats such as MetaFly and toward safeguarding the communication infrastructure.
    Other members of the research team include Sherif Badran, Northeastern graduate researcher and co-lead author; Josep M. Jornet, professor of electrical and computer engineering at Northeastern; Hichem Guerboukha, assistant professor of electrical and computer engineering at University of Missouri-Kansas City; and Daniel M. Mittleman, professor of engineering at Brown.

  •

    Liquid metal-based electronic logic device that mimics intelligent prey-capture mechanism of Venus flytrap

    A research team led by the School of Engineering of the Hong Kong University of Science and Technology (HKUST) has developed a liquid metal-based electronic logic device that mimics the intelligent prey-capture mechanism of Venus flytraps. Exhibiting memory and counting properties, the device can intelligently respond to various stimulus sequences without the need for additional electronic components. The intelligent strategies and logic mechanisms in the device provide a fresh perspective on understanding “intelligence” in nature and offer inspiration for the development of “embodied intelligence.”
    The unique prey-capture mechanism of Venus flytraps has long been an intriguing research focus in the realm of biological intelligence. It allows the plants to distinguish between external stimuli such as single and double touches, and thereby between environmental disturbances such as raindrops (a single touch) and insects (double touches), ensuring successful prey capture. This functionality is primarily attributed to the sensory hairs on these carnivorous plants, which exhibit features akin to memory and counting, enabling them to perceive stimuli, generate action potentials (a change of electrical signals in cells in response to stimulus), and remember the stimuli for a short duration.
    Inspired by the internal electrical signal accumulation/decay model of Venus flytraps, Prof. SHEN Yajing, Associate Professor of the Department of Electronic and Computer Engineering (ECE) at HKUST, who led the research, joined hands with his former PhD student at City University of Hong Kong, Dr. YANG Yuanyuan, now Associate Professor at Xiamen University, to propose a liquid metal-based logic module (LLM) built on the extension and contraction of liquid metal wires. The device employs liquid metal wires in sodium hydroxide solution as the conductive medium, controlling the length of the wires via electrochemical effects and thereby regulating cathode output according to the stimuli applied to the anode and gate. Research results demonstrate that the LLM itself can memorize the duration and interval of electrical stimuli, calculate the accumulation of signals from multiple stimuli, and exhibit significant logical functions similar to those of Venus flytraps.
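    The accumulation/decay behavior can be caricatured as a leaky integrator that "snaps" only when a second stimulus arrives before the first has decayed. The threshold and time constant below are illustrative assumptions, not the device's measured electrochemical parameters:

```python
import math

def flytrap_response(stimulus_times, threshold=1.5, tau=20.0):
    # Each touch adds one unit of "charge" that decays exponentially with
    # time constant tau (seconds); the trap snaps only if enough charge
    # from a previous touch remains when the next touch arrives.
    charge, last_t = 0.0, None
    for t in stimulus_times:
        if last_t is not None:
            charge *= math.exp(-(t - last_t) / tau)
        charge += 1.0
        if charge >= threshold:
            return True  # "snap": two touches close together in time
        last_t = t
    return False

single = flytrap_response([0.0])           # raindrop: no snap
double = flytrap_response([0.0, 5.0])      # insect: snap
too_slow = flytrap_response([0.0, 120.0])  # touches too far apart: no snap
```

    The same accumulate-and-decay rule yields both the memory (charge persists for a while) and the counting (charge sums across stimuli) that the LLM exhibits.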
    To demonstrate, Prof. Shen and Dr. Yang constructed an artificial Venus flytrap system comprising the LLM intelligent decision-making device, switch-based sensory hair, and soft electric actuator-based petal, replicating the predation process of Venus flytraps. Furthermore, they showcased the potential applications of LLM in functional circuit integration, filtering, artificial neural networks, and more. Their work not only provides insights into simulating intelligent behaviors in plants, but also serves as a reliable reference for the development of subsequent biological signal simulator devices and biologically inspired intelligent systems.
    “When people mention ‘artificial intelligence’, they generally think of intelligence that mimics animal nervous systems. However, in nature, many plants can also demonstrate intelligence through specific material and structural combinations. Research in this direction provides a new perspective and approach for us to understand ‘intelligence’ in nature and construct ‘life-like intelligence’,” said Prof. Shen.
    “Several years ago, when Dr. Yang was still pursuing her PhD in my research group, we discussed the idea of constructing intelligent entities inspired by plants together. It is gratifying that after several years of effort, we have achieved the conceptual verification and simulation of Venus flytrap intelligence. However, it is worth noting that this work is still relatively preliminary, and there is much work to be done in the future, such as designing more efficient structures, reducing the size of devices, and improving system responsiveness,” added Prof. Shen.

  •

    The unexpected origins of a modern finance tool

    In the early 1600s, the officials running Durham Cathedral, in England, had serious financial problems. Soaring prices had raised expenses. Most cathedral income came from renting land to tenant farmers, who had long leases so officials could not easily raise the rent. Instead, church leaders started charging periodic fees, but these often made tenants furious. And the 1600s, a time of religious schism, was not the moment to alienate church members.
    But in 1626, Durham officials found a formula for fees that tenants would accept. If tenant farmers paid a fee equal to one year’s net value of the land, it earned them a seven-year lease. A fee equal to 7.75 years of net value earned a 21-year lease.
    This was a form of discounting, the now-common technique for evaluating the present and future value of money by assuming a certain rate of return on that money. The Durham officials likely got their numbers from new books of discounting tables. Volumes like this had never existed before, but suddenly local church officials were applying the technique up and down England.
    As financial innovation stories go, this one is unusual. Normally, avant-garde financial tools might come from, well, the financial avant-garde — bankers, merchants, and investors hunting for short-term profits, not clergymen.
    “Most people have assumed these very sophisticated calculations would have been implemented by hard-nosed capitalists, because really powerful calculations would allow you to get an economic edge and increase profits,” says MIT historian William Deringer, an expert in the deployment of quantitative reasoning in public life. “But that was not the primary or only driver in this situation.”
    Deringer has published a new research article about this episode, “Mr. Aecroid’s Tables: Economic Calculations and Social Customs in the Early Modern Countryside,” appearing in the current issue of the Journal of Modern History. In it, he uses archival research to explore how the English clergy started using discounting, and where. And one other question: Why?
    Enter inflation
    Today, discounting is a pervasive tool. A dollar in the present is worth more than a dollar a decade from now, since one can earn money investing it in the meantime. This concept heavily informs investment markets, corporate finance, and even the NFL draft (where trading this year’s picks yields a greater haul of future picks). As the historian William N. Goetzmann has written, the related idea of net present value “is the most important tool in modern finance.” But while discounting was known as far back as the mathematician Leonardo of Pisa (often called Fibonacci) in the 1200s, why were English clergy some of its most enthusiastic early adopters?
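    The arithmetic behind such tables is the present value of an annuity: a stream of one year's net value for n years, discounted at an assumed interest rate. A minimal sketch (the 10% rate is illustrative; the rate underlying the Durham figures is a matter of historical reconstruction):

```python
def discount_factor(rate, years):
    # Present value of 1 unit received `years` from now.
    return (1 + rate) ** -years

def pv_annuity(rate, years):
    # Present value of 1 unit per year for `years` years: the kind of
    # figure tabulated, year by year, in 17th-century discounting books.
    return (1 - (1 + rate) ** -years) / rate

# At an assumed 10% rate, 21 years of rental income is worth only about
# 8.6 years of income paid up front, not 21.
factor_21 = pv_annuity(0.10, 21)
```

    Tables of these factors let a bursar read off a fee for any lease length without redoing the computation, which is why printed volumes spread the technique so quickly.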

    The answer involves a global change in the 1500s: the “price revolution,” in which things began costing more, after a long period when prices had been constant. That is, inflation hit the world.
    “People up to that point lived with the expectation that prices would stay the same,” Deringer says. “The idea that prices changed in a systematic way was shocking.”
    For Durham Cathedral, inflation meant the organization had to pay more for goods while three-quarters of its revenues came from tenant rents, which were hard to alter. Many leases were complex, and some were locked in for a tenant’s lifetime. The Durham leaders did levy intermittent fees on tenants, but that led to angry responses and court cases.
    Meanwhile, tenants had additional leverage against the Church of England: religious competition following the Reformation. England’s political and religious schisms would lead it to a midcentury civil war. Maybe some private landholders could drastically increase fees, but the church did not want to lose followers that way.
    “Some individual landowners could be ruthlessly economic, but the church couldn’t, because it’s in the midst of incredible political and religious turmoil after the Reformation,” Deringer says. “The Church of England is in this precarious position. They’re walking a line between Catholics who don’t think there should have been a Reformation, and Puritans who don’t think there should be bishops. If they’re perceived to be hurting their flock, it would have real consequences. The church is trying to make the finances work but in a way that’s just barely tolerable to the tenants.”
    Enter the books of discounting tables, which allowed local church leaders to finesse the finances. Essentially, discounting more carefully calibrated the upfront fees tenants would periodically pay. Church leaders could simply plug in the numbers and present the results as compromise solutions.

    In this period, England’s first prominent discounting book with tables was published in 1613; its most enduring, Ambrose Acroyd’s “Table of Leasses and Interest,” dated to 1628-29. Acroyd was the bursar at Trinity College at Cambridge University, which as a landholder (and church-affiliated institution) faced the same issues concerning inflation and rent. Durham Cathedral began using off-the-shelf discounting formulas in 1626, resolving decades of localized disagreement as well.
    Performing fairness
    The discounting tables did not work only because the price was right. Once circulating clergy had popularized the notion throughout England, local leaders could justify using the books because others were doing it. The clergy were “performing fairness,” as Deringer puts it.
    “Strict calculative rules assured tenants and courts that fines were reasonable, limiting landlords’ ability to maximize revenues,” Deringer writes in the new article.
    To be sure, local church leaders in England were using discounting for their own economic self-interest. It just wasn’t the largest short-term economic self-interest possible. And it was a sound strategy.
    “In Durham they would fight with tenants every 20 years [in the 1500s] and come to a new deal, but eventually that evolves into these sophisticated mechanisms, the discounting tables,” Deringer adds. “And you get standardization. By about 1700, it seems like these procedures are used everywhere.”
    Thus, as Deringer writes, “mathematical tables for setting fines were not so much instruments of a capitalist transformation as the linchpin holding together what remained of an older system of customary obligations stretched nearly to breaking by macroeconomic forces.”
    Once discounting was widely introduced, it never went away. Deringer’s Journal of Modern History article is part of a larger book project he is currently pursuing, about discounting in many facets of modern life.
    Deringer was able to piece together the history of discounting in 17th-century England thanks in part to archival clues. For instance, Durham University owns a 1686 discounting book self-described as an update to Acroyd’s work; that copy was owned by a Durham Cathedral administrator in the 1700s. Of the 11 existing copies of Acroyd’s work, two are at Canterbury Cathedral and Lincoln Cathedral.
    Hints like that helped Deringer recognize that church leaders were very interested in discounting; his further research helped him see that this chapter in the history of discounting is not merely about finance; it also opens a new window into the turbulent 1600s.
    “I never expected to be researching church finances, I didn’t expect it to have anything to do with the countryside, landlord-tenant relationships, and tenant law,” Deringer says. “I was seeing this as an interesting example of a story about bottom-line economic calculation, and it wound up being more about this effort to use calculation to resolve social tensions.” More