More stories

  • Assessing regulatory fairness through machine learning

    The perils of machine learning — using computers to identify and analyze data patterns, such as in facial recognition software — have made headlines lately. Yet the technology also holds promise to help enforce federal regulations, including those related to the environment, in a fair, transparent way, according to a new study by Stanford researchers.
    The analysis, published this week in the proceedings of the Association for Computing Machinery Conference on Fairness, Accountability and Transparency, evaluates machine learning techniques designed to support a U.S. Environmental Protection Agency (EPA) initiative to reduce severe violations of the Clean Water Act. It reveals how two key elements of so-called algorithmic design influence which communities are targeted for compliance efforts and, consequently, who bears the burden of pollution violations. The analysis — funded through the Stanford Woods Institute for the Environment’s Realizing Environmental Innovation Program — is timely given recent executive actions calling for renewed focus on environmental justice.
    “Machine learning is being used to help manage an overwhelming number of things that federal agencies are tasked to do — as a way to help increase efficiency,” said study co-principal investigator Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law at Stanford Law School. “Yet what we also show is that simply designing a machine learning-based system can have an additional benefit.”
    Pervasive noncompliance
    The Clean Water Act aims to limit pollution from entities that discharge directly into waterways, but in any given year, nearly 30 percent of such facilities self-report persistent or severe violations of their permits. In an effort to halve this type of noncompliance by 2022, EPA has been exploring the use of machine learning to target compliance resources.
    To test this approach, EPA reached out to the academic community. Among its chosen partners: Stanford’s Regulation, Evaluation and Governance Lab (RegLab), an interdisciplinary team of legal experts, data scientists, social scientists and engineers that Ho heads. The group has done ongoing work with federal and state agencies to aid environmental compliance.

    In the new study, RegLab researchers examined how permits with similar functions, such as wastewater treatment plants, were classified by each state in ways that would affect their inclusion in the EPA national compliance initiative. Using machine learning models, they also sifted through hundreds of millions of observations — an impossible task with conventional approaches — from EPA databases on historical discharge volumes, compliance history and permit-level variables to predict the likelihood of future severe violations and the amount of pollution each facility would likely generate. They then evaluated demographic data, such as household income and minority population, for the areas where each model indicated the riskiest facilities were located.
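    The core of such a pipeline can be sketched in a few lines. The example below is a minimal illustration, not the RegLab code: it trains a gradient-boosted classifier on permit-level history to rank facilities by predicted risk of a severe violation, then compares the demographics of the highest-risk areas. All data, column names and thresholds are hypothetical.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    n = 5000
    facilities = pd.DataFrame({
        "past_violations": rng.poisson(1.0, n),          # compliance history (hypothetical)
        "avg_discharge": rng.lognormal(2.0, 1.0, n),     # historical discharge volume
        "permit_age_years": rng.uniform(0, 30, n),       # permit-level variable
        "median_income": rng.normal(55_000, 15_000, n),  # area demographics
        "minority_share": rng.uniform(0, 1, n),
    })
    # Synthetic label standing in for "severe violation next year"
    p = 1 / (1 + np.exp(-(0.6 * facilities["past_violations"] - 2)))
    facilities["severe_violation"] = rng.binomial(1, p)

    features = ["past_violations", "avg_discharge", "permit_age_years"]
    model = GradientBoostingClassifier().fit(facilities[features], facilities["severe_violation"])
    facilities["risk"] = model.predict_proba(facilities[features])[:, 1]

    # Who bears the burden? Compare demographics of the top decile of predicted risk.
    top = facilities.nlargest(n // 10, "risk")
    print(top[["median_income", "minority_share"]].mean())
    print(facilities[["median_income", "minority_share"]].mean())
    ```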
    Devil in the details
    The team’s algorithmic process helped surface two key ways that the design of the EPA compliance initiative could influence who receives resources. These differences centered on which types of permits were included or excluded, as well as how the goal itself was articulated.
    In the process of figuring out how to achieve the compliance goal, the researchers first had to translate the overall objective into a series of concrete instructions — an algorithm — needed to fulfill it. As they were assessing which facilities to run predictions on, they noticed an important embedded decision. While the EPA initiative expands covered permits by at least sevenfold relative to prior efforts, it limits its scope to “individual permits,” which cover a specific discharging entity, such as a single wastewater treatment plant. Left out are “general permits,” intended to cover multiple dischargers engaged in similar activities and with similar types of effluent. A related complication: Most permitting and monitoring authority is vested in state environmental agencies. As a result, functionally similar facilities may be included or excluded from the federal initiative based on how states implement their pollution permitting process.
    “The impact of this environmental federalism makes partnership with states critical to achieving these larger goals in an equitable way,” said co-author Reid Whitaker, a RegLab affiliate and 2020 graduate of Stanford Law School now pursuing a PhD in the Jurisprudence and Social Policy Program at the University of California, Berkeley.
    Second, the current EPA initiative focuses on reducing rates of noncompliance. While there are good reasons for this policy goal, the researchers’ algorithmic design process made clear that favoring compliance rates over the volume of pollution discharged beyond permitted limits would have a powerful unintended effect. Namely, it would shift enforcement resources away from the most severe violators, which are more likely to be in densely populated minority communities, and toward smaller facilities in more rural, predominantly white communities, according to the researchers.
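    That design choice can be made concrete with a toy comparison: ranking facilities by predicted noncompliance probability versus by predicted excess discharge selects different facilities for the same inspection budget. The numbers and the community indicator below are invented for illustration; this is not the study’s model.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "p_noncompliance": rng.beta(2, 5, n),              # predicted probability of a violation
        "expected_excess_kg": rng.lognormal(1.0, 1.5, n),  # predicted discharge above the permit limit
        "urban_minority_area": rng.binomial(1, 0.3, n),    # crude community indicator (invented)
    })

    budget = 100  # number of facilities the agency can inspect
    by_rate = df.nlargest(budget, "p_noncompliance").index
    by_volume = df.nlargest(budget, "expected_excess_kg").index

    print("overlap between the two target lists:", len(set(by_rate) & set(by_volume)))
    print("minority-area share, rate-based:  ", df.loc[by_rate, "urban_minority_area"].mean())
    print("minority-area share, volume-based:", df.loc[by_volume, "urban_minority_area"].mean())
    ```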
    “Breaking down the big idea of the compliance initiative into smaller chunks that a computer could understand forced a conversation about making implicit decisions explicit,” said study lead author Elinor Benami, a faculty affiliate at the RegLab and assistant professor of agricultural and applied economics at Virginia Tech. “Careful algorithmic design can help regulators transparently identify how objectives translate to implementation while using these techniques to address persistent capacity constraints.”

  • Someone to watch over AI and keep it honest – and it's not the public!

    The public doesn’t need to know how Artificial Intelligence works to trust it. They just need to know that someone with the necessary skillset is examining AI and has the authority to mete out sanctions if it causes or is likely to cause harm.
    Dr Bran Knowles, a senior lecturer in data science at Lancaster University, says: “I’m certain that the public are incapable of determining the trustworthiness of individual AIs… but we don’t need them to do this. It’s not their responsibility to keep AI honest.”
    Dr Knowles presents the research paper ‘The Sanction of Authority: Promoting Public Trust in AI’ on March 8 at the ACM Conference on Fairness, Accountability and Transparency (ACM FAccT).
    The paper is co-authored by John T. Richards, of IBM’s T.J. Watson Research Center, Yorktown Heights, New York.
    The general public are, the paper notes, often distrustful of AI, which stems both from the way AI has been portrayed over the years and from a growing awareness that there is little meaningful oversight of it.
    The authors argue that greater transparency and more accessible explanations of how AI systems work, perceived to be a means of increasing trust, do not address the public’s concerns.

    A ‘regulatory ecosystem’, they say, is the only way that AI will be meaningfully accountable to the public, earning their trust.
    “The public do not routinely concern themselves with the trustworthiness of food, aviation, and pharmaceuticals because they trust there is a system which regulates these things and punishes any breach of safety protocols,” says Dr Richards.
    And, adds Dr Knowles: “Rather than asking that the public gain skills to make informed decisions about which AIs are worthy of their trust, the public needs the same guarantees that any AI they might encounter is not going to cause them harm.”
    She stresses the critical role of AI documentation in enabling this trustworthy regulatory ecosystem. As an example, the paper discusses work by IBM on AI Factsheets, documentation designed to capture key facts regarding an AI’s development and testing.
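    As an illustration only, the snippet below sketches the kind of fields such documentation might record. The schema is invented for this example and is not IBM’s actual FactSheet format.

    ```python
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelFactSheet:
        name: str
        intended_use: str
        training_data: str
        evaluation_metrics: dict = field(default_factory=dict)
        known_limitations: list = field(default_factory=list)

    sheet = ModelFactSheet(
        name="loan-approval-scorer-v2",
        intended_use="Rank applications for human review; not for automated denial.",
        training_data="2015-2019 applications, region X (hypothetical).",
        evaluation_metrics={"AUC": 0.81, "false_positive_rate_gap": 0.04},
        known_limitations=["Not validated for applicants under 21."],
    )
    print(json.dumps(asdict(sheet), indent=2))  # the record an auditor or regulator might review
    ```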
    But, while such documentation can provide information needed by internal auditors and external regulators to assess compliance with emerging frameworks for trustworthy AI, Dr Knowles cautions against relying on it to directly foster public trust.
    “If we fail to recognise that the burden to oversee trustworthiness of AI must lie with highly skilled regulators, then there’s a good chance that the future of AI documentation is yet another terms and conditions-style consent mechanism — something no one really reads or understands,” she says.
    The paper calls for AI documentation to be properly understood as a means to empower specialists to assess trustworthiness.
    “AI has material consequences in our world which affect real people; and we need genuine accountability to ensure that the AI that pervades our world is helping to make that world better,” says Dr Knowles.
    ACM FAccT is a computer science conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

    Story Source:
    Materials provided by Lancaster University. Note: Content may be edited for style and length.

  • Algorithm helps artificial intelligence systems dodge 'adversarial' inputs

    In a perfect world, what you see is what you get. If this were the case, the job of artificial intelligence systems would be refreshingly straightforward.
    Take collision avoidance systems in self-driving cars. If visual input to on-board cameras could be trusted entirely, an AI system could directly map that input to an appropriate action — steer right, steer left, or continue straight — to avoid hitting a pedestrian that its cameras see in the road.
    But what if there’s a glitch in the cameras that slightly shifts an image by a few pixels? If the car blindly trusted so-called “adversarial inputs,” it might take unnecessary and potentially dangerous action.
    A new deep-learning algorithm developed by MIT researchers is designed to help machines navigate in the real, imperfect world, by building a healthy “skepticism” of the measurements and inputs they receive.
    The team combined a reinforcement-learning algorithm with a deep neural network, both used separately to train computers in playing games like Go and chess, to build an approach they call CARRL, for Certified Adversarial Robustness for Deep Reinforcement Learning.
    The researchers tested the approach in several scenarios, including a simulated collision-avoidance test and the video game Pong, and found that CARRL performed better — avoiding collisions and winning more Pong games — than standard machine-learning techniques, even in the face of uncertain, adversarial inputs.

    “You often think of an adversary being someone who’s hacking your computer, but it could also just be that your sensors are not great, or your measurements aren’t perfect, which is often the case,” says Michael Everett, a postdoc in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “Our approach helps to account for that imperfection and make a safe decision. In any safety-critical domain, this is an important approach to be thinking about.”
    Everett is the lead author of a study outlining the new approach, which appears in IEEE’s Transactions on Neural Networks and Learning Systems. The study grew out of MIT PhD student Björn Lütjens’ master’s thesis, which was advised by MIT AeroAstro Professor Jonathan How.
    Possible realities
    To make AI systems robust against adversarial inputs, researchers have tried implementing defenses for supervised learning. Traditionally, a neural network is trained to associate specific labels or actions with given inputs. For instance, a neural network that is fed thousands of images labeled as cats, along with images labeled as houses and hot dogs, should correctly label a new image as a cat.
    In robust AI systems, the same supervised-learning techniques could be tested with many slightly altered versions of the image. If the network lands on the same label — cat — for every image, there’s a good chance that, altered or not, the image is indeed of a cat, and the network is robust to any adversarial influence.
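    That brute-force check can be sketched as follows. The classifier here is a stand-in decision rule rather than a trained network, and the sampled perturbations only approximate the exhaustive search described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def classify(image: np.ndarray) -> str:
        # Hypothetical decision rule standing in for a trained network.
        return "cat" if image.mean() > 0.5 else "house"

    image = rng.uniform(0.55, 0.65, size=(32, 32))  # nominally a "cat"
    epsilon = 0.05                                   # allowed per-pixel perturbation

    labels = set()
    for _ in range(1000):                            # sample perturbations; cannot cover them all
        noise = rng.uniform(-epsilon, epsilon, size=image.shape)
        labels.add(classify(np.clip(image + noise, 0.0, 1.0)))

    print("labels observed under perturbation:", labels)
    print("empirically robust" if len(labels) == 1 else "label flips under perturbation")
    ```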

    But running through every possible image alteration is computationally expensive and difficult to apply successfully to time-sensitive tasks such as collision avoidance. Furthermore, existing methods also don’t identify what label to use, or what action to take, if the network is less robust and labels some altered cat images as a house or a hot dog.
    “In order to use neural networks in safety-critical scenarios, we had to find out how to take real-time decisions based on worst-case assumptions on these possible realities,” Lütjens says.
    The best reward
    The team instead looked to build on reinforcement learning, another form of machine learning that does not require associating labeled inputs with outputs, but rather aims to reinforce certain actions in response to certain inputs, based on a resulting reward. This approach is typically used to train computers to play and win games such as chess and Go.
    Reinforcement learning has mostly been applied to situations where inputs are assumed to be true. Everett and his colleagues say they are the first to bring “certifiable robustness” to uncertain, adversarial inputs in reinforcement learning.
    Their approach, CARRL, uses an existing deep-reinforcement-learning algorithm to train a deep Q-network, or DQN — a neural network with multiple layers that ultimately associates an input with a Q value, or level of reward.
    The approach takes an input, such as an image with a single dot, and considers an adversarial influence, or a region around the dot where it actually might be instead. Every possible position of the dot within this region is fed through a DQN to find an associated action that would result in the best worst-case reward, based on a technique developed by recent MIT graduate Tsui-Wei “Lily” Weng PhD ’20.
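    The decision rule can be illustrated with a toy example: for each action, evaluate the worst-case reward over every plausible position of the observed dot, then pick the action whose worst case is best. The Q-function below is made up for illustration; CARRL itself uses certified bounds on a trained deep Q-network rather than brute-force enumeration.

    ```python
    import numpy as np

    def q_value(dot_position: float, action: str) -> float:
        # Invented reward model: actions are paddle targets on a [0, 1] axis.
        targets = {"up": 0.25, "stay": 0.5, "down": 0.75}
        return -abs(dot_position - targets[action])

    observed = 0.62   # measured dot position
    epsilon = 0.15    # adversarial uncertainty around the measurement
    candidates = np.linspace(observed - epsilon, observed + epsilon, 50)

    best_action = max(
        ("up", "stay", "down"),
        key=lambda a: min(q_value(x, a) for x in candidates),  # worst case for each action
    )
    print("robust action:", best_action)
    ```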
    An adversarial world
    In tests with the video game Pong, in which two players operate paddles on either side of a screen to pass a ball back and forth, the researchers introduced an “adversary” that reported the ball’s position as slightly lower than it actually was. They found that CARRL won more games than standard techniques did, even as the adversary’s influence grew.
    “If we know that a measurement shouldn’t be trusted exactly, and the ball could be anywhere within a certain region, then our approach tells the computer that it should put the paddle in the middle of that region, to make sure we hit the ball even in the worst-case deviation,” Everett says.
    The method was similarly robust in tests of collision avoidance, where the team simulated a blue and an orange agent attempting to switch positions without colliding. As the team perturbed the orange agent’s observation of the blue agent’s position, CARRL steered the orange agent around the other agent, giving it a wider berth as the adversary grew stronger and the blue agent’s position became more uncertain.
    There did come a point when CARRL became too conservative, causing the orange agent to assume the other agent could be anywhere in its vicinity, and in response completely avoid its destination. This extreme conservatism is useful, Everett says, because researchers can then use it as a limit to tune the algorithm’s robustness. For instance, the algorithm might consider a smaller deviation, or region of uncertainty, that would still allow an agent to achieve a high reward and reach its destination.
    In addition to overcoming imperfect sensors, Everett says CARRL may be a start to helping robots safely handle unpredictable interactions in the real world.
    “People can be adversarial, like getting in front of a robot to block its sensors, or interacting with them, not necessarily with the best intentions,” Everett says. “How can a robot think of all the things people might try to do, and try to avoid them? What sort of adversarial models do we want to defend against? That’s something we’re thinking about how to do.”
    This research was supported, in part, by Ford Motor Company as part of the Ford-MIT Alliance.

  • In a leap for battery research, machine learning gets scientific smarts

    Scientists have taken a major step forward in harnessing machine learning to accelerate the design of better batteries: Instead of using it just to speed up scientific analysis by looking for patterns in data, as researchers generally do, they combined it with knowledge gained from experiments and equations guided by physics to discover and explain a process that shortens the lifetimes of fast-charging lithium-ion batteries.
    It was the first time this approach, known as “scientific machine learning,” has been applied to battery cycling, said Will Chueh, an associate professor at Stanford University and investigator with the Department of Energy’s SLAC National Accelerator Laboratory who led the study. He said the results overturn long-held assumptions about how lithium-ion batteries charge and discharge and give researchers a new set of rules for engineering longer-lasting batteries.
    The research, reported today in Nature Materials, is the latest result from a collaboration between Stanford, SLAC, the Massachusetts Institute of Technology and Toyota Research Institute (TRI). The goal is to bring together foundational research and industry know-how to develop a long-lived electric vehicle battery that can be charged in 10 minutes.
    “Battery technology is important for any type of electric powertrain,” said Patrick Herring, senior research scientist for Toyota Research Institute. “By understanding the fundamental reactions that occur within the battery we can extend its life, enable faster charging and ultimately design better battery materials. We look forward to building on this work through future experiments to achieve lower-cost, better-performing batteries.”
    A trio of advances
    The new study builds on two previous advances where the group used more conventional forms of machine learning to dramatically accelerate both battery testing and the process of winnowing down many possible charging methods to find the ones that work best. While these studies allowed researchers to make much faster progress — reducing the time needed to determine battery lifetimes by 98%, for instance — they didn’t reveal the underlying physics or chemistry that made some batteries last longer than others, as the latest study did.

    Combining all three approaches could potentially slash the time needed to bring a new battery technology from the lab bench to the consumer by as much as two-thirds, Chueh said.
    “In this case, we are teaching the machine how to learn the physics of a new type of failure mechanism that could help us design better and safer fast-charging batteries,” Chueh said. “Fast charging is incredibly stressful and damaging to batteries, and solving this problem is key to expanding the nation’s fleet of electric vehicles as part of the overall strategy for fighting climate change.”
    The new combined approach can also be applied to developing the grid-scale battery systems needed for a greater deployment of wind and solar electricity, which will become even more urgent as the nation pursues recently announced Biden Administration goals of eliminating fossil fuels from electric power generation by 2035 and achieving net-zero carbon emissions by 2050.
    Zooming in for closeups
    The new study zoomed in on battery electrodes, which are made of nano-sized grains glommed together into particles. Lithium ions slosh back and forth between the cathode and anode during charging and discharging, seeping into the particles and flowing back out again. This constant back-and-forth makes particles swell, shrink and crack, gradually decreasing their ability to store charge, and fast charging just makes things worse.

    To look at this process in more detail, the team observed the behavior of cathode particles made of nickel, manganese and cobalt, a combination known as NMC that’s one of the most widely used materials in electric vehicle batteries. These particles absorb lithium ions when the battery discharges and release them when it charges.
    Stanford postdoctoral researchers Stephen Dongmin Kang and Jungjin Park used X-rays from SLAC’s Stanford Synchrotron Radiation Lightsource to get an overall look at particles that were undergoing fast charging. Then they took particles to Lawrence Berkeley National Laboratory’s Advanced Light Source to be examined with scanning transmission X-ray microscopy, which homes in on individual particles.
    The data from those experiments, along with information from mathematical models of fast charging and equations that describe the chemistry and physics of the process, were incorporated into scientific machine learning algorithms.
    “Rather than having the computer directly figure out the model by simply feeding it data, as we did in the two previous studies, we taught the computer how to choose or learn the right equations, and thus the right physics,” said Kang, who performed the modeling with MIT graduate student Hongbo Zhao, working with chemical engineering professor Martin Bazant.
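    In spirit, “learning the right equations” can look like the sketch below: fit the coefficients of a library of candidate physical terms to measured rates, then keep only the terms that matter. The candidate terms and synthetic data are illustrative; this is not the model used in the Nature Materials study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 2.0, 300)                        # sampled states (e.g., a concentration)
    rate_obs = -2.0 * x + 0.02 * rng.normal(size=x.size)  # noisy rate measurements; hidden law is dx/dt = -2x

    # Library of candidate right-hand-side terms for dx/dt = f(x)
    library = np.column_stack([np.ones_like(x), x, x**2, x**3])
    names = ["1", "x", "x^2", "x^3"]

    coeffs, *_ = np.linalg.lstsq(library, rate_obs, rcond=None)
    coeffs[np.abs(coeffs) < 0.1] = 0.0                    # crude sparsity threshold

    print("learned law: dx/dt ~ " +
          " + ".join(f"{c:.2f}*{n}" for c, n in zip(coeffs, names) if c != 0.0))
    ```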
    The rich-get-richer effect
    Until now, scientists had assumed that the differences between particles were insignificant and that their ability to store and release ions was limited by how fast lithium could move inside the particles, Kang said. In this way of seeing things, lithium ions flow in and out of all the particles at the same time and at roughly the same speed.
    But the new approach revealed that the particles themselves control how fast lithium ions move out of cathode particles when a battery charges, he said. Some particles immediately release a lot of their ions while others release very few or none at all. And the quick-to-release particles go on releasing ions at a faster rate than their neighbors – a positive feedback, or “rich get richer,” effect that had not been identified before.
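    A toy simulation conveys the idea: if each particle’s release rate grows with how much it has already released, small initial differences amplify and a few particles finish far ahead of the rest. The parameters below are arbitrary, not fitted to the experiments.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    released = rng.uniform(0.0, 0.05, 10)  # ten particles with slightly different head starts
    dt = 0.05

    for _ in range(20):
        rate = 0.05 + 4.0 * released       # release rate grows with the amount already released
        released = np.minimum(released + rate * dt * (1.0 - released), 1.0)

    print(np.round(released, 2))  # particles that started slightly ahead have released far more
    ```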
    “We now have a picture — literally a movie — of how lithium moves around inside the battery, and it’s very different than scientists and engineers thought it was,” Kang said. “This uneven charging and discharging puts more stress on the electrodes and decreases their working lifetimes. Understanding this process on a fundamental level is an important step toward solving the fast charging problem.”
    The scientists say their new method has potential for improving the cost, storage capacity, durability and other important properties of batteries for a wide range of applications, from electric vehicles to laptops to large-scale storage of renewable energy on the grid.
    “It was just two years ago that the 2019 Nobel Prize in chemistry was awarded for the development of rechargeable lithium-ion batteries, which dates back to the 1970s,” Chueh said. “So I am encouraged that there’s still so much to learn about how to make batteries better.”
    This research was funded by Toyota Research Institute. The Stanford Synchrotron Radiation Lightsource and Advanced Light Source are DOE Office of Science user facilities, and work there was supported by the DOE Office of Science and the DOE Advanced Battery Materials Research Program.

  • Virtual avatar coaching with community context for adult-child dyads

    Virtual reality avatar-based coaching shows promise to increase access to and extend the reach of nutrition education programs to children at risk for obesity, according to a new study in the Journal of Nutrition Education and Behavior, published by Elsevier.
    Researchers introduced 15 adult-child dyads to a virtual avatar-based coaching program that incorporated age-specific information on growth; physical, social, and emotional development; healthy lifestyles; common nutrition concerns; and interview questions around eating behaviors, food resources and counseling.
    “We developed a virtual reality avatar computer program as a way to get kids engaged in learning about nutrition education. The goal was to make this a program that could work to prevent childhood obesity,” said lead investigator Jared T. McGuirt, PhD, MPH, Department of Nutrition, University of North Carolina Greensboro, Greensboro, NC, USA. “We were primarily interested in how kids and parents reacted to this program — particularly lower income kids and parents who may not have been able to access this kind of experience in the past.”
    A key finding in the study was the avatar’s ability to spark dialogue between the children and adults around dietary habits and behavior. All children and adults reported liking the program and planned to use it in the future, as they found it fun, informational, and motivating. The personalized social aspect of the avatar experience was appealing, as participants thought the avatar would reinforce guidance and provide support while acting as a cue to change health behaviors.
    Looking to future implementation of this program in the public health field, Dr. McGuirt noted: “We feel we have a good start. The program was designed so others can build on it, and hopefully advance this technique into community nutrition education programs.”

    Story Source:
    Materials provided by Elsevier. Note: Content may be edited for style and length.

  • Reduced heat leakage improves wearable health device

    North Carolina State University engineers continue to improve the efficiency of a flexible device worn on the wrist that harvests heat energy from the human body to monitor health.
    In a paper published in npj Flexible Electronics, the NC State researchers report significant enhancements in preventing heat leakage in the flexible body heat harvester they first reported in 2017 and updated in 2020. The harvesters use heat energy from the human body to power wearable technologies — think of smart watches that measure your heart rate, blood oxygen, glucose and other health parameters — that never need to have their batteries recharged. The technology relies on the same principles governing rigid thermoelectric harvesters that convert heat to electrical energy.
    Flexible harvesters that conform to the human body are highly desired for use with wearable technologies. Mehmet Ozturk, an NC State professor of electrical and computer engineering and the corresponding author of the paper, mentioned superior skin contact with flexible devices, as well as the ergonomic and comfort considerations to the device wearer, as the core reasons behind building flexible thermoelectric generators, or TEGs.
    The performance and efficiency of flexible harvesters, however, historically trail well behind rigid devices, which have been superior in their ability to convert body heat into usable energy.
    The NC State proof-of-concept TEG originally reported in 2017 employed semiconductor elements that were connected electrically in series using liquid-metal interconnects made of EGaIn — a non-toxic alloy of gallium and indium. EGaIn provided both metal-like electrical conductivity and stretchability. The entire device was embedded in a stretchable silicone elastomer.
    The upgraded device reported in 2020 employed the same architecture but significantly improved the thermal engineering of the previous version, while increasing the density of the semiconductor elements responsible for converting heat into electricity. One of the improvements was a high thermal conductivity silicone elastomer — essentially a type of rubber — that encapsulated the EGaIn interconnects.
    The newest iteration adds aerogel flakes to the silicone elastomer to reduce the elastomer’s thermal conductivity. Experimental results showed that this innovation reduced the heat leakage through the elastomer by half.
    “The addition of aerogel stops the heat from leaking between the device’s thermoelectric ‘legs,'” Ozturk said. “The higher the heat leakage, the lower the temperature that develops across the device, which translates to lower output power.
    “The flexible device reported in this paper is performing an order of magnitude better than the device we reported in 2017 and continues to approach the performance of rigid devices,” Ozturk added.
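    A back-of-the-envelope thermal-circuit model shows why leakage matters: the elastomer between the legs forms a parallel heat path, so less leakage leaves a larger temperature difference across the legs, and output power grows roughly with the square of that difference. All values below are illustrative, not measurements from the device.

    ```python
    def delta_t_across_legs(k_legs, k_leak, k_contacts, dt_total):
        """Temperature drop across the legs for a leakage path in parallel with the
        legs and a series contact/skin resistance (simple thermal-circuit model)."""
        r_device = 1.0 / (k_legs + k_leak)  # parallel thermal conductances add
        r_contacts = 1.0 / k_contacts
        return dt_total * r_device / (r_device + r_contacts)

    dt_body_to_air = 10.0          # K, body-to-ambient difference (illustrative)
    k_legs, k_contacts = 1.0, 2.0  # W/K, illustrative conductances
    for k_leak in (2.0, 1.0):      # "before" vs. "after" halving leakage through the elastomer
        dt_legs = delta_t_across_legs(k_legs, k_leak, k_contacts, dt_body_to_air)
        print(f"leakage {k_leak} W/K: dT across legs = {dt_legs:.2f} K, relative power ~ {dt_legs**2:.0f}")
    ```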
    Ozturk said that one of the strengths of the NC State-patented technology is that it employs the very same semiconductor elements used in rigid devices perfected after decades of research. The approach also provides a low-cost opportunity to existing rigid thermoelectric module manufacturers to enter the flexible thermoelectric market.
    He added that his lab will continue to focus on improving the efficiency of these flexible devices.

    Story Source:
    Materials provided by North Carolina State University. Original written by Mick Kulikowski. Note: Content may be edited for style and length.

  • Building networks not enough to expand rural broadband

    Public grants to build rural broadband networks may not be sufficient to close the digital divide, new Cornell University research finds.
    High operations and maintenance costs and low population density in some rural areas result in prohibitively high service fees — even for a subscriber-owned cooperative structured to prioritize member needs over profits, the analysis found.
    Decades ago, cooperatives were key to the expansion of electric and telephone service to underserved rural areas, spurred by New Deal legislation providing low-interest government grants and loans. Public funding for rural broadband access should similarly consider its critical role supporting economic development, health care and education, said Todd Schmit, associate professor in the Charles H. Dyson School of Applied Economics and Management.
    “The New Deal of broadband has to incorporate more than building the systems,” Schmit said. “We have to think more comprehensively about the importance of getting equal access to these technologies.”
    Schmit is the co-author with Roberta Severson, an extension associate in Dyson, of “Exploring the Feasibility of Rural Broadband Cooperatives in the United States: The New New Deal?” The research was published Feb. 13 in Telecommunications Policy.
    More than 90% of Americans had broadband access in 2015, according to the study, but the figure in rural areas was below 70%. Federal programs have sought to help close that gap, including a $20.4 billion Federal Communications Commission initiative announced last year to subsidize network construction in underserved areas.

    Schmit and Severson studied the feasibility of establishing a rural broadband cooperative to improve access in Franklin County in northern New York state, which received funding for a feasibility study from the U.S. Department of Agriculture’s Rural Business Development Program.
    The researchers partnered with Slic Network Solutions, a local internet service provider, to develop estimates of market prices, the cost to build a fiber-to-the-home network, operations and maintenance costs, and the potential subscriber base — about 1,600 residents — and model a cooperative that would break even over a 10-year cycle.
    Federal and state grants and member investment would cover almost the entire estimated $8 million construction cost, so that wasn’t a significant factor in the analysis, the researchers said.
    But even with those subsidies, the study determined the co-op would need to charge $231 per month for its high-speed service option — 131% above market rates. At that price, it’s unlikely 40% of year-round residents would opt for high-speed broadband as the model had assumed, casting further doubt on its feasibility.
    The $231 fee included a surcharge to subsidize a lower-speed service option costing no more than $60 — a restriction the construction grants imposed to ensure affordability. Without that restriction, the high-speed price would drop to $175 and the low-speed climb to $105.
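    The arithmetic behind such fees can be sketched with a simple break-even calculation: spread annual operations, maintenance, pole-rent and tax costs, plus any unsubsidized capital, over the subscriber base for ten years. The figures below are illustrative stand-ins, not the study’s inputs.

    ```python
    def breakeven_monthly_fee(annual_costs, subscribers, unsubsidized_capital, years=10):
        total = annual_costs * years + unsubsidized_capital
        return total / (subscribers * 12 * years)

    subscribers = 640          # e.g., a 40% take rate among ~1,600 potential subscribers
    annual_costs = 1_700_000   # operations, maintenance, pole rents, taxes (assumed)
    capital_gap = 100_000      # construction not covered by grants or member equity (assumed)
    market_rate = 100.0        # assumed market price for comparable service, $/month

    fee = breakeven_monthly_fee(annual_costs, subscribers, capital_gap)
    print(f"break-even fee: ${fee:.0f}/month ({(fee / market_rate - 1) * 100:.0f}% above market)")
    ```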

    “In short,” the authors wrote, “grants covering investment and capital construction alone do not solve the rural broadband problem, at least in our study area.”
    As an alternative — though not one available in Franklin County — Schmit and Severson examined the possibility of an existing rural electric or telecommunications co-op expanding into broadband. They would gain efficiencies from already operating infrastructure such as the poles that would carry fiber lines. In that scenario, the high-speed price improved to $144 a month — still 44% above market rates.
    “These systems are very costly to operate and maintain,” Schmit said, “particularly in areas like we looked at that are very low density.”
    The feasibility improves with growth in a coverage area’s density and “take rate,” or percentage of potential subscribers signing up at different speeds, according to the analysis. But in Franklin County, the researchers determined a startup co-op would need 14 potential subscribers per mile to break even over 10 years — more than twice the study area’s actual density.
    To better serve such areas, Schmit and Severson said, policymakers should explore eliminating property taxes on broadband infrastructure and payments to rent space on poles owned by regulated utilities, which respectively accounted for 16% and 18% of the proposed co-op’s annual expenses. Those measures reduced an expanding rural utility co-op’s high-speed fee to 25% above market rates, a level members might be willing to pay, the authors said.
    “Consideration of the public benefits of broadband access arguably needs to be added to the equation,” they wrote. “The case was made for electricity and telephone services in the 1930s and similar arguments would seem to hold for this technology today.”

  • Study reveals how egg cells get so big

    Egg cells are by far the largest cells produced by most organisms. In humans, they are several times larger than a typical body cell and about 10,000 times larger than sperm cells.
    There’s a reason why egg cells, or oocytes, are so big: They need to accumulate enough nutrients to support a growing embryo after fertilization, plus mitochondria to power all of that growth. However, biologists don’t yet understand the full picture of how egg cells become so large.
    A new study in fruit flies, by a team of MIT biologists and mathematicians, reveals that the process through which the oocyte grows significantly and rapidly before fertilization relies on physical phenomena analogous to the exchange of gases between balloons of different sizes. Specifically, the researchers showed that “nurse cells” surrounding the much larger oocyte dump their contents into the larger cell, just as air flows from a smaller balloon into a larger one when they are connected by small tubes in an experimental setup.
    “The study shows how physics and biology come together, and how nature can use physical processes to create this robust mechanism,” says Jörn Dunkel, an MIT associate professor of physical applied mathematics. “If you want to develop as an embryo, one of the goals is to make things very reproducible, and physics provides a very robust way of achieving certain transport processes.”
    Dunkel and Adam Martin, an MIT associate professor of biology, are the senior authors of the paper, which appears this week in the Proceedings of the National Academy of Sciences. The study’s lead authors are postdoc Jasmin Imran Alsous and graduate student Nicolas Romeo. Jonathan Jackson, a Harvard University graduate student, and Frank Mason, a research assistant professor at Vanderbilt University School of Medicine, are also authors of the paper.
    A physical process
    In female fruit flies, eggs develop within cell clusters known as cysts. An immature oocyte undergoes four cycles of cell division to produce one egg cell and 15 nurse cells. However, the cell separation is incomplete, and each cell remains connected to the others by narrow channels that act as valves that allow material to pass between cells.

    Members of Martin’s lab began studying this process because of their longstanding interest in myosin, a class of proteins that can act as motors and help muscle cells contract. Imran Alsous performed high-resolution, live imaging of egg formation in fruit flies and found that myosin does indeed play a role, but only in the second phase of the transport process. During the earliest phase, the researchers were puzzled to see that the cells did not appear to be increasing their contractility at all, suggesting that a mechanism other than “squeezing” was initiating the transport.
    “The two phases are strikingly obvious,” Martin says. “After we saw this, we were mystified, because there’s really not a change in myosin associated with the onset of this process, which is what we were expecting to see.”
    Martin and his lab then joined forces with Dunkel, who studies the physics of soft surfaces and flowing matter. Dunkel and Romeo wondered if the cells might be behaving the same way that balloons of different sizes behave when they are connected. While one might expect that the larger balloon would leak air to the smaller until they are the same size, what actually happens is that air flows from the smaller to the larger.
    This happens because the smaller balloon, which has greater curvature, experiences more surface tension, and therefore higher pressure, than the larger balloon. Air is therefore forced out of the smaller balloon and into the larger one. “It’s counterintuitive, but it’s a very robust process,” Dunkel says.
    Adapting mathematical equations that had already been derived to explain this “two-balloon effect,” the researchers came up with a model that describes how cell contents are transferred from the 15 small nurse cells to the large oocyte, based on their sizes and their connections to each other. The nurse cells in the layer closest to the oocyte transfer their contents first, followed by the cells in more distant layers.
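    The underlying physics can be captured in a few lines: each cell’s internal pressure scales like surface tension divided by radius (the Young-Laplace relation), so the smaller cell empties into the larger one through the connecting channel. The geometry and rate constants below are illustrative, not the paper’s fitted model.

    ```python
    import numpy as np

    def pressure(volume, gamma=1.0):
        radius = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
        return 2.0 * gamma / radius           # Laplace pressure scales as 1/R

    v_small, v_large = 1.0, 8.0               # nurse cell vs. oocyte (arbitrary units)
    conductance, dt = 0.05, 0.01              # channel conductance and time step
    for _ in range(5000):
        flow = conductance * (pressure(v_small) - pressure(v_large))  # high -> low pressure
        v_small, v_large = v_small - flow * dt, v_large + flow * dt
        if v_small <= 0.01:                   # nurse cell nearly emptied
            break

    print(f"small cell: {v_small:.2f}  large cell: {v_large:.2f}")
    ```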

    “After I spent some time building a more complicated model to explain the 16-cell problem, we realized that the simulation of the simpler 16-balloon system looked very much like the 16-cell network. It is surprising to see that such counterintuitive but mathematically simple ideas describe the process so well,” Romeo says.
    The first phase of nurse cell dumping appears to coincide with when the channels connecting the cells become large enough for cytoplasm to move through them. Once the nurse cells shrink to about 25 percent of their original size, leaving them only slightly larger than their nuclei, the second phase of the process is triggered and myosin contractions force the remaining contents of the nurse cells into the egg cell.
    “In the first part of the process, there’s very little squeezing going on, and the cells just shrink uniformly. Then this second process kicks in toward the end where you start to get more active squeezing, or peristalsis-like deformations of the cell, that complete the dumping process,” Martin says.
    Cell cooperation
    The findings demonstrate how cells can coordinate their behavior, using both biological and physical mechanisms, to bring about tissue-level behavior, Imran Alsous says.
    “Here, you have several nurse cells whose job it is to nurse the future egg cell, and to do so, these cells appear to transport their contents in a coordinated and directional manner to the oocyte,” she says.
    Oocyte and early embryonic development in fruit flies and other invertebrates bears some similarities to those of mammals, but it’s unknown if the same mechanism of egg cell growth might be seen in humans or other mammals, the researchers say.
    “There’s evidence in mice that the oocyte develops as a cyst with other interconnected cells, and that there is some transport between them, but we don’t know if the mechanisms that we’re seeing here operate in mammals,” Martin says.
    The researchers are now studying what triggers the second, myosin-powered phase of the dumping process to start. They are also investigating how changes to the original sizes of the nurse cells might affect egg formation.
    The research was funded by the National Institute of General Medical Sciences, a Complex Systems Scholar Award from the James S. McDonnell Foundation, and the Robert E. Collins Distinguished Scholarship Fund.