More stories

  • Study reveals how egg cells get so big

    Egg cells are by far the largest cells produced by most organisms. In humans, they are several times larger than a typical body cell and about 10,000 times larger than sperm cells.
    There’s a reason why egg cells, or oocytes, are so big: They need to accumulate enough nutrients to support a growing embryo after fertilization, plus mitochondria to power all of that growth. However, biologists don’t yet understand the full picture of how egg cells become so large.
    A new study in fruit flies, by a team of MIT biologists and mathematicians, reveals that the process through which the oocyte grows significantly and rapidly before fertilization relies on physical phenomena analogous to the exchange of gases between balloons of different sizes. Specifically, the researchers showed that “nurse cells” surrounding the much larger oocyte dump their contents into the larger cell, just as air flows from a smaller balloon into a larger one when they are connected by small tubes in an experimental setup.
    “The study shows how physics and biology come together, and how nature can use physical processes to create this robust mechanism,” says Jörn Dunkel, an MIT associate professor of physical applied mathematics. “If you want to develop as an embryo, one of the goals is to make things very reproducible, and physics provides a very robust way of achieving certain transport processes.”
    Dunkel and Adam Martin, an MIT associate professor of biology, are the senior authors of the paper, which appears this week in the Proceedings of the National Academy of Sciences. The study’s lead authors are postdoc Jasmin Imran Alsous and graduate student Nicolas Romeo. Jonathan Jackson, a Harvard University graduate student, and Frank Mason, a research assistant professor at Vanderbilt University School of Medicine, are also authors of the paper.
    A physical process
    In female fruit flies, eggs develop within cell clusters known as cysts. An immature oocyte undergoes four cycles of cell division to produce one egg cell and 15 nurse cells. However, the cell separation is incomplete, and each cell remains connected to the others by narrow channels that act as valves that allow material to pass between cells.

    Members of Martin’s lab began studying this process because of their longstanding interest in myosin, a class of proteins that can act as motors and help muscle cells contract. Imran Alsous performed high-resolution, live imaging of egg formation in fruit flies and found that myosin does indeed play a role, but only in the second phase of the transport process. During the earliest phase, the researchers were puzzled to see that the cells did not appear to be increasing their contractility at all, suggesting that a mechanism other than “squeezing” was initiating the transport.
    “The two phases are strikingly obvious,” Martin says. “After we saw this, we were mystified, because there’s really not a change in myosin associated with the onset of this process, which is what we were expecting to see.”
    Martin and his lab then joined forces with Dunkel, who studies the physics of soft surfaces and flowing matter. Dunkel and Romeo wondered if the cells might be behaving the same way that balloons of different sizes behave when they are connected. While one might expect that the larger balloon would leak air to the smaller until they are the same size, what actually happens is that air flows from the smaller to the larger.
    This happens because the smaller balloon, which has greater curvature, experiences more surface tension, and therefore higher pressure, than the larger balloon. Air is therefore forced out of the smaller balloon and into the larger one. “It’s counterintuitive, but it’s a very robust process,” Dunkel says.
    Adapting mathematical equations that had already been derived to explain this “two-balloon effect,” the researchers came up with a model that describes how cell contents are transferred from the 15 small nurse cells to the large oocyte, based on their sizes and their connections to each other. The nurse cells in the layer closest to the oocyte transfer their contents first, followed by the cells in more distant layers.

    “After I spent some time building a more complicated model to explain the 16-cell problem, we realized that the simulation of the simpler 16-balloon system looked very much like the 16-cell network. It is surprising to see that such counterintuitive but mathematically simple ideas describe the process so well,” Romeo says.
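    To make the balloon analogy concrete, here is a minimal sketch (not the authors’ published model) of pressure-driven transport in a 16-cell network: each cell is treated as a droplet whose internal Laplace pressure scales as 2γ/r, so smaller cells sit at higher pressure, and volume flows through each connecting channel down the pressure gradient. The connectivity follows the standard fruit-fly cyst lineage, but the surface tension, rate constant, time step and initial sizes are all illustrative assumptions.
    ```python
    # Minimal sketch, not the authors' model: pressure-driven "dumping" in a
    # 16-cell cyst. Each cell is a droplet with Laplace pressure 2*gamma/r,
    # so smaller cells sit at higher pressure and volume flows from small
    # cells to large ones through the connecting channels (ring canals).
    import numpy as np

    # Cell 0 is the oocyte; edges follow the 16-cell fruit-fly cyst lineage tree.
    edges = [(0, 1), (0, 2), (1, 3), (0, 4), (1, 5), (2, 6), (3, 7),
             (0, 8), (1, 9), (2, 10), (3, 11), (4, 12), (5, 13), (6, 14), (7, 15)]
    depth = [0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4]   # distance from the oocyte
    V = np.array([16.0 * 0.5 ** d for d in depth])              # proximal cells start larger

    gamma, k, dt, steps = 1.0, 0.05, 0.01, 50_000               # illustrative parameters

    def pressure(vol):
        r = (3.0 * vol / (4.0 * np.pi)) ** (1.0 / 3.0)          # radius of a sphere of volume vol
        return 2.0 * gamma / r                                   # Laplace pressure

    for _ in range(steps):
        P = pressure(V)
        for i, j in edges:
            q = k * (P[i] - P[j]) * dt        # flow out of the higher-pressure (smaller) cell
            V[i] -= q
            V[j] += q
        V = np.clip(V, 0.05, None)            # crude floor once a nurse cell has essentially emptied

    print("oocyte volume:", round(V[0], 2))
    print("mean nurse-cell volume by layer:",
          [round(V[[c for c in range(1, 16) if depth[c] == d]].mean(), 2) for d in (1, 2, 3, 4)])
    ```
    In this toy version the assumed initial size hierarchy biases the flow toward the oocyte and the nurse cells drain over time; the published model treats the geometry and the two transport phases more carefully, but the driving idea is the same small-to-large pressure argument.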
    The first phase of nurse cell dumping appears to coincide with when the channels connecting the cells become large enough for cytoplasm to move through them. Once the nurse cells shrink to about 25 percent of their original size, leaving them only slightly larger than their nuclei, the second phase of the process is triggered and myosin contractions force the remaining contents of the nurse cells into the egg cell.
    “In the first part of the process, there’s very little squeezing going on, and the cells just shrink uniformly. Then this second process kicks in toward the end where you start to get more active squeezing, or peristalsis-like deformations of the cell, that complete the dumping process,” Martin says.
    Cell cooperation
    The findings demonstrate how cells can coordinate their behavior, using both biological and physical mechanisms, to bring about tissue-level behavior, Imran Alsous says.
    “Here, you have several nurse cells whose job it is to nurse the future egg cell, and to do so, these cells appear to transport their contents in a coordinated and directional manner to the oocyte,” she says.
    Oocyte and early embryonic development in fruit flies and other invertebrates bear some similarities to those of mammals, but it is unknown whether the same mechanism of egg cell growth operates in humans or other mammals, the researchers say.
    “There’s evidence in mice that the oocyte develops as a cyst with other interconnected cells, and that there is some transport between them, but we don’t know if the mechanisms that we’re seeing here operate in mammals,” Martin says.
    The researchers are now studying what triggers the second, myosin-powered phase of the dumping process to start. They are also investigating how changes to the original sizes of the nurse cells might affect egg formation.
    The research was funded by the National Institute of General Medical Sciences, a Complex Systems Scholar Award from the James S. McDonnell Foundation, and the Robert E. Collins Distinguished Scholarship Fund.

  • Speeding up commercialization of electric vehicles

    Professor Byoungwoo Kang’s team has developed a high-energy-density cathode material by controlling the local structure of Li-rich layered materials.
    Researchers in Korea have developed a high-capacity cathode material that can be stably charged and discharged for hundreds of cycles without using expensive cobalt (Co) metal. The day when electric vehicles can drive long distances on Li-ion batteries is fast approaching.
    Professor Byoungwoo Kang and Dr. Junghwa Lee of POSTECH’s Department of Materials Science and Engineering have developed a high-energy-density cathode material that maintains stable charge and discharge over more than 500 cycles without the expensive and toxic cobalt metal. The team achieved this by controlling the local structure through a simple synthesis process for the Li-rich layered material, which is attracting attention as a next-generation high-capacity cathode material. The findings were published in ACS Energy Letters, an energy journal of the American Chemical Society.
    An electric vehicle’s driving range and charge-discharge life depend on the properties of the electrode material in its rechargeable Li-ion battery. Electricity is generated as lithium ions shuttle back and forth between the cathode and the anode. In Li-rich layered materials, however, the cycle life drops sharply when large amounts of lithium are extracted and reinserted. In particular, when a large amount of lithium is extracted and oxygen reactions occur in the highly charged state, the structure collapses, making it impossible to maintain the charge-discharge properties or the high energy density over long-term cycling. This deterioration of cycling performance has hampered commercialization.
    The research team had previously shown that a homogeneous distribution of atoms between the transition-metal layer and the lithium layer of the Li-rich layered material is an important factor in activating the electrochemical reaction and sustaining cycling performance. In follow-up work, the team tuned the synthesis conditions to make that atomic distribution more uniform within the structure. Building on the previously published solid-state reaction, they developed a new, simple, and efficient process that produces a cathode material with an optimized atomic distribution.
    The synthesized Li-rich layered material was confirmed to have a local structure optimized for electrochemical activity and cycling, allowing a large amount of lithium to be used reversibly. The oxygen redox reaction was also shown to proceed stably and reversibly over several hundred cycles.
    Under these optimized conditions, the synthesized Co-free Li-rich layered material delivered a reversible energy density of 1,100 Wh/kg, roughly 1.8 times that of a conventionally commercialized high-nickel layered material (e.g., LiNi0.8Mn0.1Co0.1O2) at about 600 Wh/kg. Even when a large amount of lithium was extracted, the structure remained stable: the material retained about 95% of its capacity over 100 cycles and 83% over 500 cycles, indicating that stable, high-energy operation can be sustained for hundreds of cycles.
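    For orientation, cathode-level energy density is roughly the reversible capacity multiplied by the average discharge voltage; the capacity and voltage below are hypothetical round numbers chosen only to land near the quoted figure, not values from the paper:

    E \approx Q \times \bar{V} = 300\ \mathrm{mAh/g} \times 3.7\ \mathrm{V} \approx 1{,}110\ \mathrm{Wh/kg}.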
    “The significance of these research findings is that the cycle property, which is one of the important issues in the next-generation high-capacity Li-rich layered materials, has been dramatically improved through relatively simple process changes,” explained Professor Byoungwoo Kang of POSTECH. “This is noteworthy in that we have moved a step closer to commercializing the next-generation Li-rich layered materials.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Study finds no link between gender and physics course performance

    A new data-driven study from Texas A&M University casts serious doubt on the stereotype that male students perform better than female students in science — specifically, physics.
    A team of researchers in the Department of Physics and Astronomy analyzed both the midterm exam scores and final grades of more than 10,000 Texas A&M students enrolled in four introductory physics courses across more than a decade, finding no evidence that male students consistently outperform female students in these courses.
    The work was led by Texas A&M physicist and Presidential Professor for Teaching Excellence Tatiana Erukhimova.
    With the help of nearly two dozen departmental colleagues, the Texas A&M team built a database reflecting the complete introductory physics educational spectrum: the calculus-based course sequence primarily taken by engineering and physics majors, as well as the algebra-based course sequence typically taken by life sciences and premed majors. Their final analysis shows that exam performance and final letter grades are largely independent of student gender — results which Erukhimova says show promise in ending gender stereotypes that negatively impact so many female students in STEM.
    “There is no consistent trend on male students outperforming female students,” Erukhimova said. “Our study also provides new knowledge regarding whether statistically significant differences based on gender occurred on each exam for four introductory physics courses as the semesters were progressing — an area that has not previously been studied, at least not for such a large data set and over a long period of time.”
    When differences in final letter grades for a course were observed, there were no persistent differences across that course’s exams, she said. Conversely, when researchers found differences on exams within a course, they observed no differences for final letter grades in that course. In algebra-based mechanics, they found that female students outperformed male students by a small but statistically significant margin.

    Their findings were published recently in the American Physical Society journal Physical Review Physics Education Research.
    Prior to the team’s study and others like it, Erukhimova says, it had been an open question whether significant differences between male and female students could show up on particular exams yet remain slight enough not to affect final course grades. For the past 25 years, the physics education profession has relied on inventory tests — optional surveys intended to assess conceptual understanding and retention of key physics concepts — to answer that question, effectively substantiating the argument for gender differences in student performance by default because men tend to score higher on them.
    “In the field of physics education research, the majority of existing studies report a persistent gender gap with males performing significantly better than females on introductory mechanics concept inventory assessments, such as the Force Concept Inventory,” Erukhimova said. “The results of prior studies on the gendered differences in student performance based on course grades and examinations are less consistent. While a number of studies indicate that male students outperform female students on the exams and course grades, other groups found no significant gendered difference in student performance.”
    The team applied multiple statistical analyses to the course-level data they collected to study whether there were performance differences based on student gender. To see how their findings aligned with student perceptions, they also took a snapshot of the students’ feelings about course performance, inclusion and contributions using a short anonymous questionnaire distributed to 1,600 students in fall 2019.
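    The article does not spell out the statistical machinery, so the sketch below is only a generic illustration of the kind of per-exam comparison involved: a two-sample test plus an effect size, run on simulated scores. The sample sizes, means and spreads are invented placeholders, not data from the study.
    ```python
    # Illustrative only: a generic per-exam score comparison by gender on
    # simulated data. The study's actual analyses are in the published paper;
    # this just shows the shape of such a test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    scores_m = rng.normal(72, 12, size=1500)   # simulated male exam scores (placeholder values)
    scores_f = rng.normal(72, 12, size=1200)   # simulated female exam scores (placeholder values)

    t, p = stats.ttest_ind(scores_m, scores_f, equal_var=False)   # Welch's two-sample t-test

    # Cohen's d as a standardized effect size
    pooled_sd = np.sqrt((scores_m.var(ddof=1) + scores_f.var(ddof=1)) / 2)
    d = (scores_m.mean() - scores_f.mean()) / pooled_sd

    print(f"Welch t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.3f}")
    # A tiny |d| together with a non-significant p is what "no consistent
    # gender difference" looks like on a single exam.
    ```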
    “Responses indicated that female students had lower perception of their performance than their male classmates,” Erukhimova said. “The only class where female students perceived their performance as equal to their male classmates was algebra-based mechanics, in which females typically outperform males. Additionally, we found that although male and female students may feel differently regarding their performance and in-class contributions, they feel equally included in class.”
    Although the team’s study represents clear progress to Erukhimova, she acknowledges it comes with its own limitations — the most significant being that it relies solely on course-level data collected from faculty and does not analyze the possible impact of non-academic factors on student performance. In the future, she says the team would like to connect as much of their data set as possible to university-level records to see how prior preparation, such as SAT scores, affects these results.
    “We believe that all students should have equal opportunities and chances for success in physics,” Erukhimova said. “The results of this work may help with fighting the gender stereotype threat that negatively impacts so many female students. By contributing to the body of knowledge about how gender relates to student performance, we hope that our work, which would not have been possible without our colleagues’ data, can be another step in dismantling the preconceived notion of a societal bias based on gender in physics.”
    The team’s research was funded in part by the College of Science Diversity and Equity Small Grants Program.

    Story Source:
    Materials provided by Texas A&M University. Original written by Shana Hutchins. Note: Content may be edited for style and length.

  • Beauty is in the brain: AI reads brain data, generates personally attractive images

    Researchers have succeeded in making an AI understand our subjective notions of what makes faces attractive. The system demonstrated this knowledge by creating, on its own, new portraits tailored to be personally attractive to individual viewers. The results can be utilised, for example, in modelling preferences and decision-making as well as in potentially identifying unconscious attitudes.
    Researchers at the University of Helsinki and University of Copenhagen investigated whether a computer would be able to identify the facial features we consider attractive and, based on this, create new images matching our criteria. The researchers used artificial intelligence to interpret brain signals and combined the resulting brain-computer interface with a generative model of artificial faces. This enabled the computer to create facial images that appealed to individual preferences.
    “In our previous studies, we designed models that could identify and control simple portrait features, such as hair colour and emotion. However, people largely agree on who is blond and who smiles. Attractiveness is a more challenging subject of study, as it is associated with cultural and psychological factors that likely play unconscious roles in our individual preferences. Indeed, we often find it very hard to explain what it is exactly that makes something, or someone, beautiful: Beauty is in the eye of the beholder,” says Senior Researcher and Docent Michiel Spapé from the Department of Psychology and Logopedics, University of Helsinki.
    The study, which combines computer science and psychology, was published in February in the journal IEEE Transactions on Affective Computing.
    Preferences exposed by the brain
    Initially, the researchers gave a generative adversarial neural network (GAN) the task of creating hundreds of artificial portraits. The images were shown, one at a time, to 30 volunteers who were asked to pay attention to faces they found attractive while their brain responses were recorded via electroencephalography (EEG).

    “It worked a bit like the dating app Tinder: the participants ‘swiped right’ when coming across an attractive face. Here, however, they did not have to do anything but look at the images. We measured their immediate brain response to the images,” Spapé explains.
    The researchers analysed the EEG data with machine learning techniques, connecting individual EEG data through a brain-computer interface to a generative neural network.
    “A brain-computer interface such as this is able to interpret users’ opinions on the attractiveness of a range of images. By interpreting their views, the AI model interpreting brain responses and the generative neural network modelling the face images can together produce an entirely new face image by combining what a particular person finds attractive,” says Academy Research Fellow and Associate Professor Tuukka Ruotsalo, who heads the project.
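    In outline, and simplified well beyond the authors’ actual pipeline, the idea can be sketched as: decode each participant’s EEG response to every shown face into an attractiveness score, then move in the GAN’s latent space toward the faces the decoder scored highest. The decode_eeg function, the array shapes and the gan.generate call below are invented stand-ins, not the study’s models.
    ```python
    # Simplified sketch of the brain-computer-interface idea: score per-image
    # EEG responses, then average the GAN latent vectors of the highest-scoring
    # images to get a personalised latent code. `eeg_features`, `latents`,
    # `decode_eeg` and `gan` are placeholders, not the study's data or models.
    import numpy as np

    def decode_eeg(eeg_features):
        """Stand-in EEG decoder: returns a per-image attractiveness score in [0, 1]."""
        # In a real pipeline this would be a classifier trained on the
        # participant's own EEG responses to the shown images.
        return 1.0 / (1.0 + np.exp(-(eeg_features @ np.ones(eeg_features.shape[1])) * 0.1))

    def personalised_latent(latents, eeg_features, top_fraction=0.2):
        """Average the latent codes of the images the decoder scored highest."""
        scores = decode_eeg(eeg_features)
        k = max(1, int(top_fraction * len(scores)))
        best = np.argsort(scores)[-k:]         # indices of the top-scoring faces
        return latents[best].mean(axis=0)      # a single personalised latent code

    # Toy shapes: 240 shown faces, 512-dim GAN latents, 64-dim EEG feature vectors.
    latents = np.random.randn(240, 512)
    eeg_features = np.random.randn(240, 64)
    z_new = personalised_latent(latents, eeg_features)
    # A pretrained face generator would then render the new face, e.g.:
    # new_face = gan.generate(z_new)
    print(z_new.shape)   # (512,)
    ```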
    To test the validity of their modelling, the researchers generated new portraits for each participant, predicting they would find them personally attractive. Testing them in a double-blind procedure against matched controls, they found that the new images matched the preferences of the subjects with an accuracy of over 80%.
    “The study demonstrates that we are capable of generating images that match personal preference by connecting an artificial neural network to brain responses. Succeeding in assessing attractiveness is especially significant, as this is such a poignant, psychological property of the stimuli. Computer vision has thus far been very successful at categorising images based on objective patterns. By bringing in brain responses to the mix, we show it is possible to detect and generate images based on psychological properties, like personal taste,” Spapé explains.
    Potential for exposing unconscious attitudes
    Ultimately, the study may benefit society by advancing the capacity for computers to learn and increasingly understand subjective preferences, through interaction between AI solutions and brain-computer interfaces.
    “If this is possible in something that is as personal and subjective as attractiveness, we may also be able to look into other cognitive functions such as perception and decision-making. Potentially, we might gear the device towards identifying stereotypes or implicit bias and better understand individual differences,” says Spapé.

    Story Source:
    Materials provided by University of Helsinki. Original written by Aino Pekkarinen. Note: Content may be edited for style and length.

  • New quantum theory heats up thermodynamic research

    Researchers have developed a new quantum version of a 150-year-old thermodynamical thought experiment that could pave the way for the development of quantum heat engines.
    Mathematicians from the University of Nottingham have applied new quantum theory to the Gibbs paradox and demonstrated a fundamental difference in the roles of information and control between classical and quantum thermodynamics. Their research has been published today in Nature Communications.
    The classical Gibbs paradox led to crucial insights for the development of early thermodynamics and emphasises the need to consider an experimenter’s degree of control over a system.
    The research team developed a theory based on mixing two quantum gases — for example, one red and one blue, otherwise identical — which start separated and then mix in a box. Overall, the system becomes more uniform, which is quantified by an increase in entropy. If the observer then puts on purple-tinted glasses and repeats the process, the gases look the same, so it appears as if nothing changes; in that case, the entropy change is zero.
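    For context, the classical bookkeeping behind this is the textbook entropy of mixing (stated here for orientation, not taken from the paper): for two ideal gases of N particles each, initially separated at equal temperature and pressure, removing the partition gives

    \Delta S_{\mathrm{distinguishable}} = 2 N k_B \ln 2, \qquad \Delta S_{\mathrm{identical}} = 0,

    which captures the observer-dependent accounting described above: an observer who cannot tell the gases apart assigns zero entropy change to the very same mixing process.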
    The lead authors on the paper, Benjamin Yadin and Benjamin Morris, explain: “Our findings seem odd because we expect physical quantities such as entropy to have meaning independent of who calculates them. In order to resolve the paradox, we must realise that thermodynamics tells us what useful things can be done by an experimenter who has devices with specific capabilities. For example, a heated expanding gas can be used to drive an engine. In order to extract work (useful energy) from the mixing process, you need a device that can “see” the difference between red and blue gases.”
    Classically, an “ignorant” experimenter, who sees the gases as indistinguishable, cannot extract work from the mixing process. The research shows that in the quantum case, despite being unable to tell the difference between the gases, the ignorant experimenter can still extract work through mixing them.
    Considering the situation in which the system becomes large, where quantum behaviour would normally disappear, the researchers found that the quantum ignorant observer can extract as much work as if they had been able to distinguish the gases. A large quantum device controlling these gases would therefore behave entirely differently from a classical macroscopic heat engine. This phenomenon results from the existence of special superposition states that encode more information than is available classically.
    Professor Gerardo Adesso said: “Despite a century of research, there are so many aspects we don’t know or we don’t understand yet at the heart of quantum mechanics. Such a fundamental ignorance, however, doesn’t prevent us from putting quantum features to good use, as our work reveals. We hope our theoretical study can inspire exciting developments in the burgeoning field of quantum thermodynamics and catalyse further progress in the ongoing race for quantum-enhanced technologies.
    “Quantum heat engines are microscopic versions of our everyday heaters and refrigerators, which may be realised with just one or a few atoms (as already experimentally verified) and whose performance can be boosted by genuine quantum effects such as superposition and entanglement. Presently, to see our quantum Gibbs paradox played out in a laboratory would require exquisite control over the system parameters, something which may be possible in fine-tuned “optical lattice” systems or Bose-Einstein condensates — we are currently at work to design such proposals in collaboration with experimental groups.”

    Story Source:
    Materials provided by University of Nottingham. Note: Content may be edited for style and length.

  • Can't solve a riddle? The answer might lie in knowing what doesn't work

    You look for a pattern, or a rule, and you just can’t spot it. So you back up and start over.
    That’s your brain recognizing that your current strategy isn’t working, and that you need a new way to solve the problem, according to new research from the University of Washington. With the help of about 200 puzzle-takers, a computer model and functional MRI (fMRI) images, researchers have learned more about the processes of reasoning and decision-making, pinpointing the brain pathway that springs into action when problem-solving goes south.
    “There are two fundamental ways your brain can steer you through life — toward things that are good, or away from things that aren’t working out,” said Chantel Prat, associate professor of psychology and co-author of the new study, published Feb. 23 in the journal Cognitive Science. “Because these processes are happening beneath the hood, you’re not necessarily aware of how much driving one or the other is doing.”
    Using a decision-making task developed by Michael Frank at Brown University, the researchers measured exactly how much “steering” in each person’s brain involved learning to move toward rewarding things as opposed to away from less-rewarding things. Prat and her co-authors were focused on understanding what makes someone good at problem-solving.
    The research team first developed a computer model that specified the series of steps they believed were required for solving Raven’s Advanced Progressive Matrices (Raven’s) — a standard lab test made up of pattern-completion puzzles. To succeed, the puzzle-taker must identify patterns and predict the next image in the sequence. The model essentially describes the four steps people take to solve a puzzle:
    Identify a key feature in a pattern;
    Figure out where that feature appears in the sequence;
    Come up with a rule for manipulating the feature;
    Check whether the rule holds true for the entire pattern.
    At each step, the model evaluated whether it was making progress. When the model was given real problems to solve, it performed best when it was able to steer away from the features and strategies that weren’t helping it make progress. According to the authors, this ability to know when your “train of thought is on the wrong track” was central to finding the correct answer.
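    As a toy illustration of that “abandon what isn’t working” step (not the authors’ code, and with a number series standing in for a visual pattern), a rule-testing loop might look like this:
    ```python
    # Toy sketch of generate-and-test with early abandonment: try candidate
    # rules for a sequence and drop a rule as soon as it fails to explain an
    # observed transition, rather than persisting with it.
    def solve(sequence, candidate_rules):
        for rule in candidate_rules:
            if all(rule(sequence[i]) == sequence[i + 1] for i in range(len(sequence) - 1)):
                return rule(sequence[-1])    # rule explains the whole pattern: predict the next item
        return None                          # nothing worked: back up and start over with new rules

    rules = [lambda x: x + 1, lambda x: x * 2, lambda x: x + 3]
    print(solve([2, 4, 8, 16], rules))       # the doubling rule survives checking -> 32
    ```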
    The next step was to see whether this was true in people. To do so, the team had three groups of participants solve puzzles in three different experiments. In the first, they solved the original set of Raven’s problems using a paper-and-pencil test, along with Frank’s test, which separately measured their ability to “choose” the best options and to “avoid” the worse options. Their results suggested that only the ability to “avoid” the worst options related to problem-solving success. There was no relation between the ability to recognize the best choice in the decision-making test and the ability to solve the puzzles effectively.
    The second experiment replaced the paper-and-pencil version of the puzzles with a shorter, computerized version of the task that could also be implemented in an MRI brain-scanning environment. These results confirmed that those who were best at avoiding the worse options in the decision-making task were also the best problem solvers.
    The final group of participants completed the computerized puzzles while having their brain activity recorded using fMRI. Based on the model, the researchers gauged which parts of the brain would drive problem-solving success. They zeroed in on the basal ganglia — what Prat calls the “executive assistant” to the prefrontal cortex, or “CEO” of the brain. The basal ganglia assist the prefrontal cortex in deciding which action to take using parallel paths: one that turns the volume “up” on information it believes is relevant, and another that turns the volume “down” on signals it believes to be irrelevant. The “choose” and “avoid” behaviors associated with Frank’s decision-making test relate to the functioning of these two pathways. Results from this experiment suggest that the process of “turning down the volume” in the basal ganglia predicted how successful participants were at solving the puzzles.
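    A compact way to picture the two pathways, in the spirit of Frank-style models rather than the study’s own code, is a learner that keeps separate “Go” (approach) and “NoGo” (avoid) weights for each option and updates them with separate learning rates; an avoid-biased learner is one whose NoGo learning dominates. All probabilities and rates below are illustrative assumptions.
    ```python
    # Toy "choose vs. avoid" learner: separate Go (approach) and NoGo (avoid)
    # weights per option, updated with separate learning rates. Reward
    # probabilities and learning rates are illustrative, not from the study.
    import numpy as np

    rng = np.random.default_rng(1)
    p_reward = [0.8, 0.2]               # option A pays off 80% of the time, B 20%
    alpha_go, alpha_nogo = 0.05, 0.20   # an "avoid-biased" learner: faster NoGo learning

    go = np.zeros(2)
    nogo = np.zeros(2)

    for _ in range(500):
        value = go - nogo                             # net evidence for each option
        probs = np.exp(value) / np.exp(value).sum()   # softmax choice
        choice = rng.choice(2, p=probs)
        reward = rng.random() < p_reward[choice]
        if reward:
            go[choice] += alpha_go * (1 - go[choice])        # strengthen "Go"
        else:
            nogo[choice] += alpha_nogo * (1 - nogo[choice])  # strengthen "NoGo"

    print("Go weights:", go.round(2), "NoGo weights:", nogo.round(2))
    # An avoid-biased learner tends to solve the task mainly by suppressing the
    # bad option (high NoGo weight on B) rather than by inflating the good one.
    ```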
    “Our brains have parallel learning systems for avoiding the least good thing and getting the best thing. A lot of research has focused on how we learn to find good things, but this pandemic is an excellent example of why we have both systems. Sometimes, when there are no good options, you have to pick the least bad one! What we found here was that this is even more critical to complex problem-solving than recognizing what’s working.”
    Co-authors of the study were Andrea Stocco, associate professor, and Lauren Graham, assistant teaching professor, in the UW Department of Psychology. The research was supported by the UW Royalty Research Fund, a UW startup fund award and the Bezos Family Foundation.

    Story Source:
    Materials provided by University of Washington. Original written by Kim Eckart. Note: Content may be edited for style and length.

  • Extreme-scale computing and AI forecast a promising future for fusion power

    Efforts to duplicate on Earth the fusion reactions that power the sun and stars for unlimited energy must contend with an extreme heat-load density that can damage and shut down tokamaks, the doughnut-shaped facilities most widely used to house laboratory fusion reactions. These heat loads flow against the walls of the divertor plates that extract waste heat from the tokamaks.
    Far larger forecast
    But using high-performance computers and artificial intelligence (AI), researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have predicted a far larger and less damaging heat-load width for the full-power operation of ITER, the international tokamak under construction in France, than previous estimates have found. The new formula produced a forecast more than six times wider than those developed by simple extrapolation from present tokamaks to the much larger ITER facility, whose goal is to demonstrate the feasibility of fusion power.
    “If the simple extrapolation to full-power ITER from today’s tokamaks were correct, no known material could withstand the extreme heat load without some difficult preventive measures,” said PPPL physicist C.S. Chang, leader of the team that developed the new, wider forecast and first author of a paper that Physics of Plasmas has published as an Editor’s Pick. “An accurate formula can enable scientists to operate ITER in a more comfortable and cost-effective way toward its goal of producing 10 times more fusion energy than the input energy,” Chang said.
    Fusion reactions combine light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that makes up to 99 percent of the visible universe — to generate massive amounts of energy. Tokamaks, the most widely used fusion facilities, confine the plasma in magnetic fields and heat it to million-degree temperatures to produce fusion reactions. Scientists around the world are seeking to produce and control such reactions to create a safe, clean, and virtually inexhaustible supply of power to generate electricity.
    The Chang team’s surprisingly optimistic forecast harkens back to results the researchers produced on the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory in 2017. The team used the PPPL-developed XGC high-fidelity plasma turbulence code to forecast a heat load more than six times wider in full-power ITER operation than simple extrapolations from current tokamaks predicted.

    Surprise finding
    The surprising finding raised eyebrows by sharply contradicting the dangerously narrow heat-load forecasts. What accounted for the difference? Might there be some hidden plasma parameter, or condition of plasma behavior, that the previous forecasts had failed to detect?
    Those forecasts arose from parameters in the simple extrapolations that regarded plasma as a fluid without considering the important kinetic, or particle motion, effects. By contrast, the XGC code produces kinetic simulations using trillions of particles on extreme-scale computers, and its six-times wider forecast suggested that there might indeed be hidden parameters that the fluid approach did not factor in.
    The team performed more refined simulations of the full-power ITER plasma on the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory to ensure that their 2017 findings on Titan were not in error.
    The team also performed new XGC simulations on current tokamaks to compare the results to the much wider Summit and Titan findings. One simulation was on one of the highest magnetic-field plasmas on the Joint European Torus (JET) in the United Kingdom, which reaches 73 percent of the full-power ITER magnetic field strength. Another simulation was on one of the highest magnetic-field plasmas on the now decommissioned C-Mod tokamak at the Massachusetts Institute of Technology (MIT), which reaches 100 percent of the full-power ITER magnetic field.

    The results in both cases agreed with the narrow heat-load width forecasts from simple extrapolations. These findings strengthened the suspicion that there are indeed hidden parameters.
    Supervised machine learning
    The team then turned to a type of AI method called supervised machine learning to discover what the unnoticed parameters might be. Using kinetic XGC simulation data from future ITER plasma, the AI code identified the hidden parameter as related to the orbiting of plasma particles around the tokamak’s magnetic field lines, an orbiting called gyromotion.
    The AI program suggested a new formula that forecasts a far wider and less dangerous heat-load width for full-power ITER than the previous XGC formula derived from experimental results in present tokamaks predicted. Furthermore, the AI-produced formula recovers the previous narrow findings of the formula built for the tokamak experiments.
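    The article gives neither the formula nor the exact machine-learning procedure, but the general pattern of such an analysis, regressing simulated heat-load widths on candidate plasma parameters and checking which ones carry predictive weight, can be sketched as below. The feature names and synthetic data are invented placeholders, not XGC outputs.
    ```python
    # Generic illustration of "find the hidden parameter" via supervised
    # learning: regress a target (heat-load width) on candidate parameters and
    # inspect which features matter. The synthetic data stand in for simulation
    # outputs; the feature names are placeholders, not the paper's variables.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(2)
    n = 400
    features = ["B_poloidal", "machine_size", "gyroradius_ratio"]
    X = rng.uniform(0.5, 2.0, size=(n, 3))        # columns in the order listed above

    # Synthetic ground truth: the width is mostly set by the gyroradius-related
    # column, mimicking a "hidden" kinetic parameter a fluid scaling would miss.
    y = 0.3 / X[:, 0] + 2.0 * X[:, 2] + 0.05 * rng.normal(size=n)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    for name, imp in zip(features, model.feature_importances_):
        print(f"{name:16s} importance = {imp:.2f}")
    # A large importance for the gyroradius-related feature is the kind of
    # signal that would point an analyst toward a gyromotion-based correction.
    ```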
    “This exercise exemplifies the necessity for high-performance computing, by not only producing high-fidelity understanding and prediction but also improving the analytic formula to be more accurate and predictive,” Chang said. “It is found that the full-power ITER edge plasma is subject to a different type of turbulence than the edge in present tokamaks due to the large size of the ITER edge plasma compared to the gyromotion radius of particles.”
    Researchers then verified the AI-produced formula by performing three more simulations of future ITER plasmas on the supercomputers Summit at OLCF and Theta at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. “If this formula is validated experimentally,” Chang said, “this will be huge for the fusion community and for ensuring that ITER’s divertor can accommodate the heat exhaust from the plasma without too much complication.”
    The team would next like to see experiments on current tokamaks that could be designed to test the AI-produced extrapolation formula. If it is validated, Chang said, “the formula can be used for easier operation of ITER and for the design of more economical fusion reactors.”

  • Recommended for you: Role, impact of tools behind automated product picks explored

    As you scroll through Amazon looking for the perfect product, or flip through titles on Netflix searching for a movie to fit your mood, auto-generated recommendations can help you find exactly what you’re looking for among extensive offerings.
    These recommender systems are used in retail, entertainment, social networking and more. In a recently published study, two researchers from The University of Texas at Dallas investigated the informative role of these systems and the economic impacts on competing sellers and consumers.
    “Recommender systems have become ubiquitous in e-commerce platforms and are touted as sales-support tools that help consumers find their preferred or desired product among the vast variety of products,” said Dr. Jianqing Chen, professor of information systems in the Naveen Jindal School of Management. “So far, most of the research has been focused on the technical side of recommender systems, while the research on the economic implications for sellers is limited.”
    In the study, published in the December 2020 issue of MIS Quarterly, Chen and Dr. Srinivasan Raghunathan, the Ashbel Smith Professor of information systems, developed an analytical model in which sellers sell their products through a common electronic marketplace.
    The paper focuses on the informative role of the recommender system: how it affects consumers’ decisions by informing them about products about which they otherwise may be unaware. Recommender systems seem attractive to sellers because they do not have to pay the marketplace for receiving recommendations, while traditional advertising is costly.
    The researchers note that recommender systems have been reported to increase sales on these marketplaces: More than 35% of what consumers purchase on Amazon and more than 60% of what they watch on Netflix result from recommendations. The systems use information including purchase history, search behavior, demographics and product ratings to predict a user’s preferences and recommend the product the consumer is most likely to buy.
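    As a deliberately tiny illustration of the prediction step, one common approach is item-based collaborative filtering: score an unseen product for a user by similarity-weighting the ratings that user has already given. The ratings matrix below is made up, and real systems also fold in search behaviour, demographics and much more.
    ```python
    # Minimal item-based collaborative filtering sketch. The ratings matrix is
    # invented; 0 means "not rated". Real recommender systems are far richer,
    # but the core "predict what this user would like" step looks like this.
    import numpy as np

    # rows = users, columns = products
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    def item_similarity(R):
        """Cosine similarity between product columns (unrated entries kept as zeros)."""
        norms = np.linalg.norm(R, axis=0, keepdims=True)
        return (R.T @ R) / (norms.T @ norms + 1e-9)

    def predict(R, user, item):
        """Similarity-weighted average of the user's existing ratings for one item."""
        sims = item_similarity(R)[item]
        rated = R[user] > 0
        rated[item] = False
        return (sims[rated] @ R[user, rated]) / (sims[rated].sum() + 1e-9)

    # Around 2 on a 1-5 scale: user 0's tastes match items 0-1, not items 2-3.
    print(round(predict(ratings, user=0, item=2), 2))
    ```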

    While recommender systems introduce consumers to new products and increase the market size — which benefits sellers — the free exposure is not necessarily profitable, Chen said.
    The researchers found that the advertising effect causes sellers to advertise less on their own, and that the competition effect causes them to lower their prices. Sellers are also likely to benefit from the recommender system only when it has high precision.
    “This means that sellers are likely to benefit from the recommender system only when the recommendations are effective and the products recommended are indeed consumers’ preferred products,” Chen said.
    The researchers determined these results do not change whether sellers use targeted advertising or uniform advertising.
    Although the exposure is desirable for sellers, the negative effects on profitability could overshadow the positive effects. Sellers should carefully choose their advertising approach and adopt uniform advertising if they cannot accurately target customers, Chen said.

    “Free exposure turns out to not really be free,” he said. “To mitigate such a negative effect, sellers should strive to help the marketplace provide effective recommendations. For example, sellers should provide accurate product descriptions, which can help recommender systems provide better matching between products and consumers.”
    Consumers, on the other hand, benefit both directly and indirectly from recommender systems, Raghunathan said. For example, they might be introduced to a new product or benefit from price competition among sellers.
    Conversely, they also might end up paying more than the value of such recommendations in the form of increased prices, Raghunathan said.
    “Consumers should embrace recommender systems,” he said. “However, sharing additional information, such as their preference in the format of online reviews, with the platform is a double-edged sword. While it can help recommender systems more effectively find a product that a consumer might like, the additional information can be used to increase the recommendation precision, which in turn can reduce the competition pressure on sellers and can be bad for consumers.”
    The researchers said that although significant efforts are underway to develop more sophisticated recommender systems, the economic implications of these systems are poorly understood.
    “The business and societal value of recommender systems cannot be assessed properly unless economic issues surrounding them are examined,” Chen said. He and Raghunathan plan to conduct further research on this topic.
    Lusi Li PhD’17, now at California State University, Los Angeles, also contributed to the research. The project was part of Li’s doctoral dissertation at UT Dallas.