More stories

  • Personal interactions are important drivers of STEM identity in girls

    As head of the educational outreach arm of the Florida State University-headquartered National High Magnetic Field Laboratory, Roxanne Hughes has overseen dozens of science camps over the years, including numerous sessions of the successful SciGirls Summer Camp she co-organizes with WFSU.
    In a new paper published in the Journal of Research in Science Teaching, Hughes and her colleagues took a much closer look at one of those camps, a coding camp for middle school girls.
    They found that nuanced interactions between teachers and campers as well as among the girls themselves impacted how girls viewed themselves as coders.
    The MagLab offers both co-ed and girls-only summer camps, covering science in general as well as coding in particular. Hughes, director of the MagLab’s Center for Integrating Research and Learning, wanted to study the coding camp because computer science is the only STEM field in which the representation of women has actually declined since 1990.
    “It’s super gendered in how it has been advertised, beginning with the personal computer,” Hughes said. “And there are stereotypes behind what is marketed to girls versus what is marketed to boys. We wanted to develop a conceptual framework focusing specifically on coding identity — how the girls see themselves as coders — to add to existing research on STEM identity more broadly.”
    This specific study focused on the disparate experiences of three girls in the camp. The researchers looked at when and how the girls were recognized for their coding successes during the camp, and how teachers and peers responded when the girls demonstrated coding skills.
    “Each girl received different levels of recognition, which affected their coding identity development,” Hughes said. “We found that educators play a crucial role in amplifying recognition, which then influences how those interactions reinforce their identities as coders.”
    For example, positive praise often led a girl to pursue more challenging activities, strengthening her coding identity.
    Exactly how teachers praised the campers played a role in how that recognition impacted the girls. Being praised in front of other girls, for example, had more impact than a discreet pat on the back. More public praise prompted peer recognition, which further boosted a girl’s coding identity.
    The type of behavior recognized by teachers also appeared to have different effects. A girl praised for demonstrating a skill might feel more like a coder than one lauded for her persistence, for example. Lack of encouragement was also observed: One girl who sought attention for her coding prowess went unacknowledged, while another who was assisting her peers received lots of recognition, responses that seem to play into gender stereotypes, Hughes said. Even in a camp explicitly designed to bolster girls in the sciences, prevailing stereotypes can undermine best intentions.
    “To me, the most interesting piece was the way in which educators still carry the general gender stereotypes, and how that influenced the behavior they rewarded,” Hughes said. “They recognized the girl who was being a team player, checking in on how everyone was feeling — all very stereotypically feminine traits that are not necessarily connected to or rewarded in computing fields currently.”
    Messaging about science is especially important for girls in middle school, Hughes said. At that developmental stage, their interest in STEM disciplines begins to wane as they start to get the picture that those fields clash with their other identities.
    The MagLab study focused on three girls — one Black, one white and one Latina — as a means to develop a framework for future researchers to understand coding identity. Hughes says this is too small a data set to tease out definitive conclusions about the roles of race and gender, but the study does raise many questions for future researchers to examine with the help of these findings.
    “The questions that come out of the study to me are so fascinating,” Hughes said. “Like, how would these girls be treated differently if they were boys? How do the definitions of ‘coder’ that the girls develop in the camp open or constrain opportunities for them to continue this identity work as they move forward?”
    The study has also prompted Hughes to think about how to design more inclusive, culturally responsive camps at the MagLab.
    “Even though this is a summer camp, there is still the same carryover of stereotypes and sexism and racism from the outer world into this space,” she said. “How can we create a space where girls can behave differently from the social gendered expectations?”
    The challenge will be to show each camper that she and her culture are valued in the camp and to draw connections between home and camp that underscore that. “We need to show that each of the girls has value — in that camp space and in science in general,” Hughes said.
    Joining Hughes as co-authors on the study were Jennifer Schellinger of Florida State University and Kari Roberts of the MagLab.
    The National High Magnetic Field Laboratory is funded by the National Science Foundation and the State of Florida, and has operations at Florida State University, University of Florida and Los Alamos National Laboratory.

  • The impact of human mobility on disease spread

    Due to continual improvements in transportation technology, people travel more extensively than ever before. Although this strengthened connection between faraway countries comes with many benefits, it also poses a serious threat to disease control and prevention. When infected humans travel to regions that are free of their particular contagions, they might inadvertently transmit their infections to local residents and cause disease outbreaks. This process has occurred repeatedly throughout history; some recent examples include the SARS outbreak in 2003, the H1N1 influenza pandemic in 2009, and — most notably — the ongoing COVID-19 pandemic.
    Imported cases challenge the ability of nonendemic countries — countries where the disease in question does not occur regularly — to entirely eliminate the contagion. When combined with additional factors such as genetic mutation in pathogens, this issue makes the global eradication of many diseases exceedingly difficult, if not impossible. Therefore, reducing the number of infections is generally a more feasible goal. But to achieve control of a disease, health agencies must understand how travel between separate regions impacts its spread.
    In a paper publishing on Tuesday in the SIAM Journal on Applied Mathematics, Daozhou Gao of Shanghai Normal University investigated the way in which human dispersal affects disease control and the total extent of an infection’s spread. Few previous studies have explored the impact of human movement on infection size or disease prevalence — defined as the proportion of individuals in a population that are infected with a specific pathogen — in different regions. This area of research is especially pertinent during severe disease outbreaks, when governing leaders may dramatically reduce human mobility by closing borders and restricting travel. During these times, it is essential to understand how limiting people’s movements affects the spread of disease.
    To examine the spread of disease throughout a population, researchers often use mathematical models that sort individuals into multiple distinct groups, or “compartments.” In his study, Gao utilized a particular type of compartmental model called the susceptible-infected-susceptible (SIS) patch model. He divided the population in each patch — a group of people such as a community, city, or country — into two compartments: infected people who currently have the designated illness, and people who are susceptible to catching it. Human migration then connects the patches. Gao assumed that the susceptible and infected subpopulations spread out at the same rate, which is generally true for diseases like the common cold that often only mildly affect mobility.
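    To make the setup concrete, here is a minimal sketch of a two-patch SIS model in which susceptible and infected individuals disperse at the same rate. The parameter values and patch sizes are illustrative, not taken from Gao’s paper.

    ```python
    # A minimal two-patch SIS model: susceptible and infected individuals
    # disperse at the same rate d (illustrative parameters, not the paper's).
    import numpy as np
    from scipy.integrate import solve_ivp

    beta = np.array([0.5, 0.2])    # transmission rates; patch 1 is higher risk
    gamma = np.array([0.1, 0.1])   # recovery rates, equal across patches

    def total_infections(d, t_end=2000.0):
        """Equilibrium number of infected people, summed over both patches."""
        def rhs(t, y):
            S, I = y[:2], y[2:]
            new_inf = beta * S * I / (S + I)     # frequency-dependent transmission
            move = lambda x: d * (x[::-1] - x)   # symmetric migration between patches
            return np.concatenate([-new_inf + gamma * I + move(S),
                                    new_inf - gamma * I + move(I)])
        sol = solve_ivp(rhs, (0.0, t_end), [990.0, 1000.0, 10.0, 0.0], rtol=1e-8)
        return sol.y[2:, -1].sum()               # infected in both patches at t_end

    print(total_infections(d=0.05))
    ```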
    Each patch in Gao’s SIS model has a certain infection risk that is represented by its basic reproduction number (R0) — the quantity that predicts how many cases will be caused by the presence of a single contagious person within a susceptible population. “The larger the reproduction number, the higher the infection risk,” Gao said. “So the patch reproduction number of a higher-risk patch is assumed to be higher than that of a lower-risk patch.” However, this number only measures the initial transmission potential; it can rarely predict the true extent of infection.
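    The risk measure itself can be computed from the model. Below is a sketch, under the same illustrative parameters, of the patch-level reproduction numbers (transmission rate divided by recovery rate for an isolated patch) and the system-level R0 from the standard next-generation-matrix construction.

    ```python
    # Reproduction numbers for the two-patch SIS model above, via the standard
    # next-generation-matrix construction: R0 is the spectral radius of F V^-1.
    # Parameter values are illustrative.
    import numpy as np

    beta, gamma, d = np.array([0.5, 0.2]), np.array([0.1, 0.1]), 0.05
    print("patch-level R0:", beta / gamma)   # beta_i / gamma_i for isolated patches

    F = np.diag(beta)                        # rate of new infections near the DFE
    L = d * np.array([[-1.0, 1.0],
                      [1.0, -1.0]])          # symmetric movement of infecteds
    V = np.diag(gamma) - L                   # recovery plus net emigration
    R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
    print(f"system R0 = {R0:.2f}")
    ```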
    Gao first used his model to investigate the effect of human movement on disease control by comparing the total infection sizes that resulted when individuals dispersed quickly versus slowly. He found that if all patches recover at the same rate, large dispersal results in more infections than small dispersal. Surprisingly, an increase in the rate at which people disperse can actually reduce R0 while still increasing the total number of infections.
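    Reusing the total_infections() sketch above, this comparison amounts to sweeping the dispersal rate and recording the equilibrium totals (values illustrative):

    ```python
    # Compare equilibrium infection totals under slow versus fast dispersal,
    # reusing total_infections() from the sketch above (values illustrative).
    for d in (0.01, 0.1, 1.0):
        print(f"dispersal rate {d}: total infections = {total_infections(d):.0f}")
    ```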
    The SIS patch model can also help elucidate how dispersal impacts the distribution of infections and prevalence of the disease within each patch. Without diffusion between patches, a higher-risk patch will always have a higher prevalence of disease, but Gao wondered if the same was true when people can travel to and from that high-risk patch. The model revealed that diffusion can decrease infection size in the highest-risk patch since it exports more infections than it imports, but this consequently increases infections in the patch with the lowest risk. However, it is never possible for the highest-risk patch to have the lowest disease prevalence.
    Using a numerical simulation based on the common cold — the attributes of which are well-studied — Gao delved deeper into human migration’s impact on the total size of an infection. When Gao incorporated just two patches, his model exhibited a wide variety of behaviors under different environmental conditions. For example, the dispersal of humans often led to a larger total infection size than no dispersal, but rapid human scattering in one scenario actually reduced the infection size. Under different conditions, small dispersal was detrimental but large dispersal ultimately proved beneficial to disease management. Gao completely classified the combinations of mathematical parameters for which dispersal causes more infections than no dispersal in a two-patch environment. However, the situation becomes more complex if the model incorporates more than two patches.
    Further investigation into Gao’s SIS patch modeling approach could reveal more nuanced information about the complexities of travel restrictions’ impact on disease spread, which is relevant to real-world situations — such as border closures during the COVID-19 pandemic. “To my knowledge, this is possibly the first theoretical work on the influence of human movement on the total number of infections and their distribution,” Gao said. “There are numerous directions to improve and extend the current work.” For example, future work could explore the outcome of a ban on only some travel routes, such as when the U.S. banned travel from China to impede the spread of COVID-19 but failed to block incoming cases from Europe. Continuing research on these complicated effects may help health agencies and governments develop informed measures to control dangerous diseases.

    Story Source:
    Materials provided by Society for Industrial and Applied Mathematics. Original written by Jillian Kunze. Note: Content may be edited for style and length.

  • Ecologists confirm Alan Turing's theory for Australian fairy circles

    Fairy circles are one of nature’s greatest enigmas and most visually stunning phenomena. An international research team led by the University of Göttingen has now, for the first time, collected detailed data to show that Alan Turing’s model explains the striking vegetation patterns of the Australian fairy circles. In addition, the researchers showed that the grasses that make up these patterns act as “eco-engineers” to modify their own hostile and arid environment, thus keeping the ecosystem functioning. The results were published in the Journal of Ecology.
    Researchers from Germany, Australia and Israel undertook an in-depth fieldwork study in the remote Outback of Western Australia. They used drone technology, spatial statistics, quadrat-based field mapping, and continuous data-recording from a field-weather station. With the drone and a multispectral camera, the researchers mapped the “vitality status” of the Triodia grasses (how strong and how well they grew) in five one-hectare plots and classified them into high- and low-vitality.
    The systematic and detailed fieldwork enabled, for the first time in such an ecosystem, a comprehensive test of the “Turing pattern” theory. Turing’s concept was that in certain systems, due to random disturbances and a “reaction-diffusion” mechanism, the interaction between just two diffusible substances is enough for strongly patterned structures to spontaneously emerge. Physicists have used this model to explain the striking skin patterns of zebrafish and leopards, for instance. Earlier modelling had suggested this theory might apply to these intriguing vegetation patterns, and now there are robust data from multiple scales confirming that Alan Turing’s model applies to the Australian fairy circles.
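    For readers who want to see the mechanism in action, the following is a minimal sketch of a reaction-diffusion simulation — here the well-known Gray-Scott system, not the ecohydrological model used in the study — in which two diffusing substances plus small random disturbances are enough for regular spot and gap patterns to self-organize.

    ```python
    # Minimal Gray-Scott reaction-diffusion sketch: two diffusible substances
    # (u, v) self-organize into quasi-regular spots/gaps from random noise,
    # illustrating the Turing mechanism (parameters illustrative).
    import numpy as np

    n, Du, Dv, f, k = 128, 0.16, 0.08, 0.035, 0.060
    rng = np.random.default_rng(0)
    u = np.ones((n, n)) + 0.02 * rng.standard_normal((n, n))  # random disturbances
    v = np.zeros((n, n))
    v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5                     # local seed

    def lap(a):  # 5-point Laplacian with periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(10000):                  # explicit Euler time-stepping
        uvv = u * v * v
        u += Du * lap(u) - uvv + f * (1 - u)
        v += Dv * lap(v) + uvv - (f + k) * v

    # u now holds a patterned field; e.g. matplotlib's imshow(u) reveals it
    ```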
    The data show that the unique gap pattern of the Australian fairy circles, which occur only in a small area east of the town of Newman, emerges from ecohydrological biomass-water feedbacks from the grasses. In fact, the fairy circles — with their large diameters of 4m, clay crusts from weathering and resultant water run-off — are a critical extra source of water for the dryland vegetation. Clumps of grasses increased shading and water infiltration around the nearby roots. With increasing years after fire, they merged more and more at the periphery of the vegetation gaps to form a barrier so that they could maximize their water uptake from the fairy circle’s runoff. The protective plant cover of grasses could reduce soil-surface temperatures by about 25°C at the hottest time of the day, which facilitates the germination and growth of new grasses. In summary, the scientists found evidence both at the scale of the landscape and at much smaller scales that the grasses, with their cooperative growth dynamics, redistribute the water resources, modulate the physical environment, and thus function as “ecosystem engineers” to modify their own environment and better cope with the arid conditions.
    Dr Stephan Getzin, Department of Ecosystem Modelling at the University of Göttingen, explains, “The intriguing thing is that the grasses are actively engineering their own environment by forming symmetrically spaced gap patterns. The vegetation benefits from the additional runoff water provided by the large fairy circles, and so keeps the arid ecosystem functional even in very harsh, dry conditions.” This contrasts with the uniform vegetation cover seen in less water-stressed environments. “Without the self-organization of the grasses, this area would likely become desert, dominated by bare soil,” he adds. The emergence of Turing-like patterned vegetation seems to be nature’s way of managing an ancient and permanent shortage of water.
    In 1952, when the British mathematician Alan Turing published his ground-breaking theoretical paper on pattern formation, he had most likely never heard of fairy circles. But with his theory he laid the foundation for generations of physicists to explain highly symmetrical patterns — like sand ripples in dunes, cloud stripes in the sky or spots on an animal’s coat — with the reaction-diffusion mechanism. Now, ecologists have provided an empirical study that extends this principle from physics to dryland ecosystems with fairy circles.

    Story Source:
    Materials provided by University of Göttingen. Note: Content may be edited for style and length.

  • New freshwater database tells water quality story for 12K lakes globally

    Although less than one per cent of all water in the world is freshwater, it is what we drink and use for agriculture. In other words, it’s vital to human survival. York University researchers have just created a publicly available water quality database for close to 12,000 freshwater lakes globally — almost half of the world’s freshwater supply — that will help scientists monitor and manage the health of these lakes.
    The study, led by Faculty of Science Postdoctoral Fellow Alessandro Filazzola and Master’s student Octavia Mahdiyan, collected data for lakes in 72 countries, from Antarctica to the United States and Canada. Hundreds of the lakes are in Ontario.
    “The database can be used by scientists to answer questions about what lakes or regions may be faring worse than others, how water quality has changed over the years and which environmental stressors are most important in driving changes in water quality,” says Filazzola.
    The team included a host of graduate and undergraduate students working in the laboratory of Associate Professor Sapna Sharma, along with collaborators Assistant Professor Derek Gray of Wilfrid Laurier University, Associate Professor Catherine O’Reilly of Illinois State University and York University Associate Professor Roberto Quinlan.
    The researchers reviewed 3,322 studies from as far back as the 1950s, along with online data repositories, to collect data on chlorophyll levels, a commonly used marker of lake and ecosystem health. Chlorophyll is a predictor of the amount of vegetation and algae in lakes — known as primary production — including invasive species such as milfoil.
    “Human activity, climate warming, agricultural and urban runoff, and phosphorus from land use can all increase the level of chlorophyll in lakes. Primary production is best represented by the amount of chlorophyll in the lake, which has a cascading impact on the zooplankton that eat the algae, the fish that eat the zooplankton and the fish that eat those fish,” says Filazzola. “If the chlorophyll is too low, it can have cascading negative effects on the entire ecosystem, while too much can cause an abundance of algae growth, which is not always good.”
    Warming summer temperatures and increased solar radiation from decreased cloud cover in the northern hemisphere also contribute to an increase in chlorophyll, while more storm events caused by climate change contribute to degraded water quality, says Sharma. “Agricultural areas and urban watersheds are more associated with degraded water quality conditions because of the amount of nutrients input into these lakes.”
    The researchers also gathered data on phosphorus and nitrogen levels — often a predictor of chlorophyll — as well as lake characteristics, land use variables, and climate data for each lake. Freshwater lakes are particularly vulnerable to changes in nutrient levels, climate, land use and pollution.
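    As a hypothetical illustration of the kind of question such a database supports — say, which lakes are trending toward higher chlorophyll — here is a toy sketch; the column names and rows are invented and are not the database’s actual schema.

    ```python
    # Toy sketch of a water-quality query; the schema ("lake_id", "year",
    # "chlorophyll_ug_L") and the rows are invented for illustration.
    import pandas as pd

    df = pd.DataFrame({
        "lake_id": ["L1", "L1", "L2", "L2"],
        "year": [1985, 2015, 1985, 2015],
        "chlorophyll_ug_L": [3.1, 7.8, 5.0, 4.2],
    })

    df["decade"] = (df["year"] // 10) * 10
    trend = (df.groupby(["lake_id", "decade"])["chlorophyll_ug_L"]
               .mean().unstack("decade"))
    worsening = trend[trend[2010] > trend[1980]]   # chlorophyll rising over time
    print(worsening.index.tolist())                # -> ['L1']
    ```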
    “In addition to drinking water, freshwater is important for transportation, agriculture, and recreation, and provides habitats for more than 100,000 species of invertebrates, insects, animals and plants,” says Sharma. “The database can be used to improve our understanding of how chlorophyll levels respond to global environmental change and it provides baseline comparisons for environmental managers responsible for maintaining water quality in lakes.”
    The researchers started by looking only at Ontario lakes but quickly expanded the project globally: although there are thousands of lakes in Ontario, much of the data is not as readily available as it is in other regions of the world.
    “The creation of this database is a feat typically only accomplished by very large teams with millions of dollars, not by a single lab with a few small grants, which is why I am especially proud of this research,” says Sharma.

    Story Source:
    Materials provided by York University. Note: Content may be edited for style and length.

  • Thin and ultra-fast photodetector sees the full spectrum

    Researchers have developed the world’s first photodetector that can see all shades of light, in a prototype device that radically shrinks one of the most fundamental elements of modern technology.
    Photodetectors work by converting information carried by light into an electrical signal and are used in a wide range of technologies, from gaming consoles to fibre optic communication, medical imaging and motion detectors. Currently, photodetectors are unable to sense more than one colour in a single device.
    As a result, they have remained bigger and slower than the technologies they integrate with, such as the silicon chip.
    The new hyper-efficient broadband photodetector developed by researchers at RMIT University is at least 1,000 times thinner than the smallest commercially available photodetector device.
    In a significant leap for the technology, the prototype device can also see all shades of light between ultraviolet and near infrared, opening new opportunities to integrate electrical and optical components on the same chip.
    *New possibilities*
    The breakthrough technology opens the door for improved biomedical imaging, advancing early detection of health issues like cancer.
    Study lead author, PhD researcher Vaishnavi Krishnamurthi, said that in photodetection technologies, making a material thinner usually comes at the expense of performance.
    “But we managed to engineer a device that packs a powerful punch, despite being thinner than a nanometre, which is roughly a million times smaller than the width of a pinhead,” she said.
    As well as shrinking medical imaging equipment, the ultra-thin prototype opens possibilities for more effective motion detectors, low-light imaging and potentially faster fibre optical communication.
    “Smaller photodetectors in biomedical imaging equipment could lead to more accurate targeting of cancer cells during radiation therapy,” Krishnamurthi said.
    “Shrinking the technology could also help deliver smaller, portable medical imaging systems that could be brought into remote areas with ease, compared to the bulky equipment we have today.”
    *Lighting up the spectrum*
    How versatile and useful photodetectors are depends largely on three factors: their operating speed, their sensitivity to lower levels of light and how much of the spectrum they can sense.
    Typically, when engineers have tried improving a photodetector’s capabilities in one of those areas, at least one of the other capabilities has been diminished.
    Current photodetector technology relies on a stacked structure of three to four layers.
    Imagine a sandwich, where you have bread, butter, cheese and another layer of bread — regardless of how good you are at squashing that sandwich, it will always be four layers thick, and if you remove a layer, you’d compromise the quality.
    The researchers from RMIT’s School of Engineering scrapped the stacked model and worked out how to use a nanothin layer — just a single atom thick — on a chip.
    Importantly, they did this without diminishing the photodetector’s speed, low-light sensitivity or visibility of the spectrum.
    The prototype device can interpret light ranging from deep ultraviolet to near infrared wavelengths, making it sensitive to a broader spectrum than a human eye.
    And it does this over 10,000 times faster than the blink of an eye.
    *Nano-thin technology*
    A major challenge for the team was ensuring electronic and optical properties didn’t deteriorate when the photodetector was shrunk, a technological bottleneck that had previously prevented miniaturisation of light detection technologies.
    Chief investigator Associate Professor Sumeet Walia said the material used, tin monosulfide, is low-cost and naturally abundant, making it attractive for electronics and optoelectronics.
    “The material allows the device to be extremely sensitive in low-lighting conditions, making it suitable for low-light photography across a wide light spectrum,” he said.
    Walia said his team is now looking at industry applications for their photodetector, which can be integrated with existing technologies such as CMOS chips.
    “With further development, we could be looking at applications including more effective motion detection in security cameras at night and faster, more efficient data storage,” he said.

  • Web resources bring new insight into COVID-19

    Researchers around the world are a step closer to a better understanding of the intricacies of COVID-19 thanks to two new web resources developed by investigators at Baylor College of Medicine and the University of California San Diego. The resources are freely available through the Signaling Pathways Project (Baylor) and the Network Data Exchange (UCSD). They put at researchers’ fingertips information about cellular genes whose expression is affected by coronavirus infection and place these data points in the context of the complex network of host molecular signaling pathways. Using this resource has the potential to accelerate the development of novel therapeutic strategies.
    The study appears in the journal Scientific Data.
    “Our motivation for developing this resource is to contribute to making research about COVID-19 more accessible to the scientific community. When researchers have open access to each other’s work, discoveries move forward more efficiently,” said leading author Dr. Neil McKenna, associate professor of molecular and cellular biology and member of the Dan L Duncan Comprehensive Cancer Center at Baylor.
    The Signaling Pathways Project
    For years, the scientific community has been generating and archiving molecular datasets documenting how genes are expressed as cells conduct their normal functions, or in association with disease. However, usually this information is not easily accessible.
    In 2019, McKenna and his colleagues developed the Signaling Pathways Project, a web-based platform that integrates molecular datasets published in the scientific literature into consensus regulatory signatures, or what they are calling consensomes, that rank genes according to their rates of differential expression.
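    As a toy illustration of the consensome idea — not the actual SPP pipeline — genes can be ranked within each dataset by differential expression and the ranks then aggregated into a consensus ordering; the data below are fabricated.

    ```python
    # Toy sketch of the consensome idea (not the SPP pipeline): rank genes by
    # differential expression within each dataset, then average the ranks into
    # a consensus ordering. All data here are fabricated for illustration.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    genes = [f"GENE{i}" for i in range(100)]
    # five fake datasets of log fold changes for the same 100 genes
    datasets = [pd.Series(rng.standard_normal(100), index=genes) for _ in range(5)]

    # per-dataset rank of absolute fold change (1 = most differentially expressed)
    ranks = pd.DataFrame({f"ds{i}": ds.abs().rank(ascending=False)
                          for i, ds in enumerate(datasets)})
    consensus = ranks.mean(axis=1).sort_values()  # low mean rank = consistent change
    print(consensus.head())
    ```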
    In the current study, the researchers generated consensomes for genes affected by infection with three major coronaviruses: Middle East respiratory syndrome coronavirus (MERS) and severe acute respiratory syndrome coronaviruses 1 (SARS1) and 2 (SARS2, which causes COVID-19).
    McKenna and his colleagues provide a resource that helps researchers make the most of coronavirus datasets. The resource identifies the genes whose expression is most consistently affected by infection and integrates those responses with data about the cells’ molecular signaling pathways, in a sense painting a better picture of what happens inside a cell infected by coronavirus and how the cell responds.
    “The collaboration with UCSD makes our analyses available as intuitive Cytoscape-style networks,” says McKenna. “Because using these resources does not require training in meta-analysis, they greatly lower the barriers to usability by bench researchers.”
    Providing new insights into COVID-19
    The consensus strategy, the researchers explain, can bring to light previously unrecognized links or provide further support for suspected connections between coronavirus infection and human signaling pathways, ultimately simplifying the generation of hypotheses to be tested in the laboratory.
    For example, the connection between pregnancy and susceptibility to COVID-19 has been difficult to evaluate due to lack of clinical data, but McKenna and colleagues’ approach has provided new insights into this puzzle.
    “We found evidence that progesterone receptor signaling antagonizes SARS2-induced inflammatory signaling mediated by interferon in the airway epithelium. This finding suggests the hypothesis that the suppression of the interferon response to SARS2 infection by elevated circulating progesterone during pregnancy may contribute to the asymptomatic clinical course,” McKenna said.
    Consistent with their hypothesis, while this paper was being reviewed, a clinical trial was launched to evaluate progesterone as a treatment for COVID-19 in men.
    Scott A. Ochsner at Baylor College of Medicine and Rudolf T. Pillich at the University of California San Diego were also authors of this work.
    This study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) Information Network (DK097748), the National Cancer Institute (CA125123, CA184427) and by the Brockman Medical Research Foundation. The Signaling Pathways Project website is hosted by the Dan L Duncan Comprehensive Cancer Center.

  • Cities beat suburbs at inspiring cutting-edge innovations

    The disruptive inventions that make people go “Wow!” tend to come from research in the heart of cities and not in the suburbs, a new study suggests.
    Researchers found that, within metro areas, the majority of patents come from innovations created in suburbs — often in the office parks of big tech companies like Microsoft and IBM.
    But the unconventional, disruptive innovations — the ones that combine research from different technological fields — are more likely to be produced in cities, said Enrico Berkes, co-author of the study and postdoctoral researcher in economics at The Ohio State University.
    These unconventional patents are ones that, for example, may blend research on acoustics with research on information storage — the basis for digital music players like the iPod. Or patents that cite previous work on “vacuum cleaning” and “computing” to produce the Roomba.
    “Densely populated cities do not generate more patents than the suburbs, but they tend to generate more unconventional patents,” said Berkes, who did the work as a doctoral student at Northwestern University.
    “Our findings suggest that cities provide more opportunities for creative people in different fields to interact informally and exchange ideas, which can lead to more disruptive innovation.”
    Berkes conducted the study with Ruben Gaetani, assistant professor of strategic management at the University of Toronto. Their research was published online recently in The Economic Journal.
    Previous research had shown that large metropolitan areas are where patenting activity tends to concentrate, Berkes said, suggesting that population density is an important factor for innovation.
    But once Berkes and Gaetani started looking more closely at metro areas, they found that a sizable share of these patents was developed in the suburbs — the least densely populated part. Nearly three-quarters of patents came from places that had density below 3,650 people per square mile in 2000, about the density of Palo Alto, California.
    “If new technology is spurred by population density, we wanted to know why so much is happening in the least dense parts of the metro areas,” Berkes said.
    So Berkes and Gaetani analyzed more than 1 million U.S. patents granted between January 2002 and August 2014. They used finely geolocated data from the U.S. Patent and Trademark Office that allowed them to see exactly where in metro areas — including city centers and specific suburbs — patented discoveries were made.
    But they were also interested in determining the type of innovations produced — whether they would be considered conventional or unconventional. They did this by analyzing the previous work on which each patent was based.
    The researchers tagged new patents as unconventional if the inventors cited previous work in widely different areas.
    For example, a patent from 2000 developed in Pittsburgh is one of the first recorded inventions in wearable technologies and one of the precursors to products such as Fitbit. It was recognized as unconventional because it cites previous patents in both apparel and electrical equipment — two very distant fields.
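    A toy sketch of this tagging idea follows; the scoring rule and co-occurrence values are invented for illustration, and the paper’s actual measure of unconventionality is more involved.

    ```python
    # Toy sketch of tagging a patent as "unconventional" from the technology
    # classes of its cited prior work. The co-occurrence frequencies and the
    # threshold are invented; the paper's actual measure is more involved.
    from itertools import combinations

    # hypothetical frequency with which pairs of classes are cited together;
    # rare pairings indicate distant fields
    pair_freq = {("apparel", "electrical"): 0.001, ("hardware", "software"): 0.30}

    def is_unconventional(cited_classes, threshold=0.01):
        pairs = combinations(sorted(set(cited_classes)), 2)
        # unconventional if any cited pair of fields is rarely combined
        return any(pair_freq.get(p, 0.5) < threshold for p in pairs)

    print(is_unconventional(["apparel", "electrical"]))  # True: distant fields
    print(is_unconventional(["hardware", "software"]))   # False: common pairing
    ```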
    After analyzing the data, the researchers found that both urban and suburban areas played a prominent role in the innovation process, but in different ways, Berkes said.
    Large innovative companies, such as IBM or Microsoft, tend to perform their research in large office parks located outside the main city centers.
    “These companies are very successful in taking advantage of formal channels of knowledge diffusion, such as meetings or conferences, where they can capitalize on the expertise of their scientists and have them work together on specialized projects for the company,” Berkes said.
    “But it is more difficult for them to tap ideas from other scientific fields because this demands interactions with inventors they’re not communicating with every day or running into in the cafeteria or in the hallway.”
    That’s where the urban cores excelled. In cities like San Francisco and Boston, researchers may meet people in entirely different fields at bars, restaurants, museums and cultural events. Any chance encounter could lead to productive partnerships, he said.
    “If you want to create something truly new and disruptive, it helps if you have opportunities to casually bump into people from other scientific fields and exchange ideas and experiences and knowledge. That’s what happens in cities,” he said.
    “Density plays an important role in the type, rather than the amount, of innovation.”
    These findings show the potential value of tech parks that gather technology startup companies in a variety of fields in one place, Berkes said. But they have to be set up properly.
    “Our research suggests that informal interactions are important. Tech parks should be structured in a way that people from different startups can easily interact with each other on a regular basis and share ideas,” he said.

  • AI could expand healing with bioscaffolds

    A dose of artificial intelligence can speed the development of 3D-printed bioscaffolds that help injuries heal, according to researchers at Rice University.
    A team led by computer scientist Lydia Kavraki of Rice’s Brown School of Engineering used a machine learning approach to predict the quality of scaffold materials, given the printing parameters. The work also found that controlling print speed is critical in making high-quality implants.
    Bioscaffolds developed by co-author and Rice bioengineer Antonios Mikos are bonelike structures that serve as placeholders for injured tissue. They are porous to support the growth of cells and blood vessels that turn into new tissue and ultimately replace the implant.
    Mikos has been developing bioscaffolds, largely in concert with the Center for Engineering Complex Tissues, to improve techniques to heal craniofacial and musculoskeletal wounds. That work has progressed to include sophisticated 3D printing that can make a biocompatible implant custom-fit to the site of a wound.
    That doesn’t mean there isn’t room for improvement. With the help of machine learning techniques, designing materials and developing processes to create implants can be faster and eliminate much trial and error.
    “We were able to give feedback on which parameters are most likely to affect the quality of printing, so when they continue their experimentation, they can focus on some parameters and ignore the others,” said Kavraki, an authority on robotics, artificial intelligence and biomedicine and director of Rice’s Ken Kennedy Institute.
    The team reported its results in Tissue Engineering Part A.
    The study identified print speed as the most important of the five printing parameters the team measured, the others in descending order of importance being material composition, pressure, layering and spacing.
    Mikos and his students had already considered bringing machine learning into the mix. The COVID-19 pandemic created a unique opportunity to pursue the project.
    “This was a way to make great progress while many students and faculty were unable to get to the lab,” Mikos said.
    Kavraki said the researchers — graduate students Anja Conev and Eleni Litsa in her lab and graduate student Marissa Perez and postdoctoral fellow Mani Diba in the Mikos lab, all co-authors of the paper — took time at the start to establish an approach to a mass of data from a 2016 study on printing scaffolds with biodegradable poly(propylene fumarate), and then to figure out what more was needed to train the computer models.
    “The students had to figure out how to talk to each other, and once they did, it was amazing how quickly they progressed,” Kavraki said.
    From start to finish, the COVID-19 window let them assemble data, develop models and get the results published within seven months, record time for a process that can often take years.
    The team explored two modeling approaches. One was a classification method that predicted whether a given set of parameters would produce a “low” or “high” quality scaffold. The other was a regression-based approach that approximated the values of print-quality metrics to come to a result. Kavraki said both relied upon a “classical supervised learning technique” called random forest that builds multiple “decision trees” and “merges” them together to get a more accurate and stable prediction.
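    A minimal sketch of these two approaches using scikit-learn random forests might look like the following; the features, targets and data are placeholders rather than the study’s dataset.

    ```python
    # Minimal sketch of the two random-forest approaches described above;
    # the features, targets and data are placeholders, not the study's dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(0)
    # columns: print speed, material composition, pressure, layering, spacing
    X = rng.uniform(size=(200, 5))
    quality = 1.0 - X[:, 0] + 0.1 * rng.standard_normal(200)  # toy quality metric

    # classification: predict "high" vs "low" quality from printing parameters
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, quality > 0.5)

    # regression: approximate the print-quality metric directly
    reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, quality)

    # feature importances suggest which printing parameters matter most
    names = ["speed", "composition", "pressure", "layering", "spacing"]
    print(dict(zip(names, clf.feature_importances_.round(2))))
    ```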
    Ultimately, the collaboration could lead to better ways to quickly print a customized jawbone, kneecap or bit of cartilage on demand.
    “A hugely important aspect is the potential to discover new things,” Mikos said. “This line of research gives us not only the ability to optimize a system for which we have a number of variables — which is very important — but also the possibility to discover something totally new and unexpected. In my opinion, that’s the real beauty of this work.
    “It’s a great example of convergence,” he said. “We have a lot to learn from advances in computer science and artificial intelligence, and this study is a perfect example of how they will help us become more efficient.”
    “In the long run, labs should be able to understand which of their materials can give them different kinds of printed scaffolds, and in the very long run, even predict results for materials they have not tried,” Kavraki said. “We don’t have enough data to do that right now, but at some point we think we should be able to generate such models.”
    Kavraki noted that The Welch Institute, recently established at Rice to enhance the university’s already stellar reputation for advanced materials science, has great potential to expand such collaborations.
    “Artificial intelligence has a role to play in new materials, so what the institute offers should be of interest to people on this campus,” she said. “There are so many problems at the intersection of materials science and computing, and the more people we can get to work on them, the better.”