More stories

  • AI can identify people with abnormal heart rhythms

    Investigators from the Smidt Heart Institute at Cedars-Sinai found that an artificial intelligence (AI) algorithm can detect an abnormal heart rhythm in people not yet showing symptoms.
    The algorithm, which identified hidden signals in common medical diagnostic testing, may help doctors better prevent strokes and other cardiovascular complications in people with atrial fibrillation — the most common type of heart rhythm disorder.
    Previously developed algorithms have been primarily used in white populations. This algorithm works in diverse settings and patient populations, including U.S. veterans and underserved populations. The findings were published today in the peer-reviewed journal JAMA Cardiology.
    “This research allows for better identification of a hidden heart condition and informs the best way to develop algorithms that are equitable and generalizable to all patients,” said David Ouyang, MD, a cardiologist in the Department of Cardiology in the Smidt Heart Institute at Cedars-Sinai, a researcher in the Division of Artificial Intelligence in Medicine, and senior author of the study.
    Experts estimate that about 1 in 3 people with atrial fibrillation do not know they have the condition.
    In atrial fibrillation, the electrical signals in the heart that regulate the pumping of blood from the upper chambers to the lower chambers are chaotic. This can cause blood in the upper chambers to pool and form blood clots that can travel to the brain and trigger an ischemic stroke.
    To create the algorithm, investigators programmed an artificial intelligence tool to study patterns found in electrocardiogram readings. An electrocardiogram is a test that monitors electrical signals from the heart. People who undergo this test have electrodes placed on their body that detect the heart’s electrical activity.

  • How to build greener data centers? Scientists say crank up the heat

    Colder is not always better for energy-hungry data centers, especially when it comes to their power bills. A new analysis says that keeping the centers at 41°C, or around 105°F, could save up to 56% in cooling costs worldwide. The study, publishing October 10 in the journal Cell Reports Physical Science, proposes new temperature guidelines that may help develop and manage more efficient data centers and IT servers in the future.
    “The cooling system accounts for over one-third of the data center’s total energy consumption, so many studies talk about reducing the energy consumption of cooling systems,” says senior author Shengwei Wang of the Hong Kong Polytechnic University. “But rather than finding better ways to cool the data centers, why not redesign the servers to operate at higher temperatures?”
    Today, data centers typically operate at temperatures between 20°C and 25°C (68-77°F). The conventional cooling systems that maintain these centers work by pulling computer-generated hot air past water-chilled coils to cool the air before it cycles back into the space. The heated water then enters either chillers or a process called free-cooling before circulating back to the coils. Unlike energy-intensive chillers, which operate much like air conditioners, free-cooling uses ambient air to cool the water with far less energy.
    To save energy, data centers are often built in colder regions to leverage free-cooling. But thanks to advances in electronic technology, engineers and scientists know that blasting chiller-based air conditioning at data centers is no longer necessary. Many IT servers already tolerate operating temperatures above 30°C (86°F). This means that in most climates, including hotter ones, data centers can also benefit from free-cooling if their operating temperature is raised.
    “The question is, to what temperature?” says Wang. To find out, Wang and his team built a model of the conventional cooling system and simulated its operation under different climate conditions. The results showed that data centers in almost all regions across climate zones could rely nearly 100% on free-cooling throughout the year when operated at 41°C, a threshold the team coined the “global free-cooling temperature.” These data centers could save 13%-56% of energy compared with those that run at 22°C (71.6°F).
    Depending on an area’s temperature and humidity, the researchers say that data centers might not even need to raise the temperature that far to take full advantage of free-cooling. For example, the temperatures for Beijing, Kunming, and Hong Kong to entirely rely on free-cooling are 39°C (102.2°F), 38°C (100.4°F), and 40°C (104°F), respectively.
    “But before we raise the temperature settings, we need to ensure three things,” says Wang. “First, we need to ensure the reliability of server operation. Second, the computational efficiency needs to remain the same. Third, we need to ensure the servers’ energy consumption is not increased by activating their built-in cooling protection, such as the fans.” That said, Wang is optimistic that it is possible for the next generation of servers to work at up to 40°C without performance degradation.
    “For the first time we can provide cooling system engineers and server design engineers with a concrete goal to work towards,” says Wang. “I think 41°C is achievable in the near future. We’re only 10°C (18°F) or less away.”

  • Learn programming by playing

    A changing information technology industry, the latest artificial intelligence applications, high demand for IT professionals, and evolving learning needs are driving a search for educational innovations that let current and future employees acquire knowledge in a contemporary, accessible way.
    This is particularly relevant in the field of programming, where the complexity of the process often creates learning difficulties. Researchers from Kaunas University of Technology (KTU) and universities in Poland, Portugal, and Italy are proposing to gamify this process.
    “Gamification is a learning method in which traditional game elements and principles such as levels, points or leaderboards are used,” explains KTU researcher Rytis Maskeliūnas.
    According to him, the main goal of this approach is to make learning as enjoyable and challenging as a game. This dynamic method should encourage learners to become more involved in learning activities and help them retain information more easily.
    Creates a personalised learning process
    The KTU professor highlights that one of the main advantages of gamification is the possibility to adapt the learning process to each learner’s specific needs, abilities, and level of progress.
    Maskeliūnas says that such personal adaptation is a complex process, which starts with identifying the student’s initial knowledge, abilities, strengths, and weaknesses. Then, with the help of AI or a tutor, goals are selected and an individual learning plan is generated.

  • Founder personality could predict start-up success

    The stats don’t lie — the overwhelming majority of start-up companies fail. So, what makes the seemingly lucky few not only survive, but thrive?
    While good fortune and circumstances can play a part, new research reveals that when it comes to start-up success, a founder’s personality — or the combined personalities of the founding team — is paramount. The study, published today in Scientific Reports, shows founders of successful start-ups have personality traits that differ significantly from the rest of the population — and that these traits are more important for success than many other factors.
    “We find that personality traits don’t simply matter for start-ups — they are critical to elevating the chances of success,” says Paul X. McCarthy, lead author of the study and adjunct professor at UNSW Sydney. “A small number of astute venture capitalists have suspected this for some time, but now we have the data to demonstrate this is the case.”
    Personality key to start-up success
    For the study, the team, which also included researchers from the Oxford Internet Institute at the University of Oxford, the University of Technology Sydney (UTS), and the University of Melbourne, used a machine learning algorithm to infer the personality profiles of the founders of more than 21,000 founder-led companies from the language and activity in their publicly available Twitter accounts. The algorithm could distinguish successful start-up founders with 82.5 per cent accuracy.
    They then correlated the personality profiles with data from Crunchbase, the world’s largest directory of start-ups, to determine whether certain founder personalities, and their combinations in cofounded teams, relate to start-up success — defined as whether the company had been acquired, had acquired another company, or had listed on a public stock exchange.
    The researchers found that successful start-up founders’ core Big Five personality traits — the widely accepted model of human personality measuring openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism — differ significantly from those of the population at large.

  • Human Lung Chip leveraged to faithfully model radiation-induced lung injury

    Researchers have developed a human in vitro model that closely mimics the complexities of radiation-induced lung injury (RILI) and radiation dose sensitivity of the human lung. Using a previously developed microfluidic human Lung Alveolus Chip lined by human lung alveolar epithelial cells interfaced with lung capillary cells to recreate the alveolar-capillary interface in vitro, the researchers recapitulated many of the hallmarks of RILI, including radiation-induced DNA damage in lung tissue, cell-specific changes in gene expression, inflammation, and injury to both the lung epithelial cells and blood vessel-lining endothelial cells. By also evaluating the potential of two drugs to suppress the effects of acute RILI, the researchers demonstrated their model’s capabilities as an advanced, human-relevant, preclinical, drug discovery platform.
    The lung is one of the tissues most sensitive to radiation in the human body. People exposed to high radiation doses following nuclear incidents develop radiation-induced lung injury (RILI), which affects the function of many cell types in the lung, causing acute and sustained inflammation, and in the longer term, the thickening and scarring of lung tissue known as fibrosis. RILI also is a common side effect of radiation therapy administered to cancer patients to kill malignant cells in their bodies, and can limit the maximum radiation dose doctors can use to control their tumors, as well as dramatically impair patients’ quality of life.
    Anti-inflammatory drugs given to patients during radiation therapy can dampen the inflammation in the lungs, called pneumonitis, but not all patients respond equally well. This is because RILI is a complex disorder that varies between patients and is influenced by risk factors such as age, lung cancer stage, other pre-existing lung diseases, and likely the patient’s genetic makeup. In the event of nuclear accidents, which usually involve a one-time exposure to much higher doses of radiation, no medical countermeasures are yet available that could prevent and protect against the damage to the lungs and other organs, making this a key priority of the US Food and Drug Administration (FDA).
    A major obstacle to developing a deeper understanding of the pathological processes triggered by radiation in the lung and other organs, which is the basis for discovering medical countermeasures, is the lack of experimental model systems that recapitulate exactly how the damage occurs in people. Small-animal preclinical models fail to reproduce key hallmarks of the human pathophysiology and do not mimic the dose sensitivities observed in humans. And although non-human primate models are considered the gold standard for radiation injury, they are in short supply, costly, and raise serious ethical concerns; they also are not human and sometimes fail to predict responses observed when drugs move into the clinic.
    Now, a multi-disciplinary research team at the Wyss Institute for Biologically Inspired Engineering at Harvard University and Boston Children’s Hospital, led by Wyss Founding Director Donald Ingber, M.D., Ph.D., has developed exactly such a model in an FDA-funded project. Lung alveoli are the small air sacs where the exchange of oxygen and CO2 between the lung and blood takes place, and they are the major site of radiation pneumonitis; the Lung Alveolus Chip recreates this alveolar-capillary interface in vitro. The findings are published in Nature Communications.
    “Forming a better understanding of how radiation injury occurs and finding new strategies to treat and prevent it poses a multifaceted challenge that, in the face of nuclear threats and the realities of current cancer therapies, needs entirely new solutions,” said Ingber. “The Lung Chip model that we developed to recapitulate the development of RILI leverages our extensive microfluidic Organ Chip culture expertise and, in combination with new analytical and computational drug and biomarker discovery tools, gives us powerful new inroads into this problem.” Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and Boston Children’s Hospital, and the Hansjörg Wyss Professor of Bioinspired Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.
    Advanced human in vitro model of RILI
    The human Lung Alveolus Chip is a 2-channel microfluidic culture system in which primary human lung alveolar epithelial cells are cultured in one channel, where they are exposed to air as they would be in the lung. They are interfaced across a porous membrane with primary human lung capillary endothelial cells in the parallel channel, which is constantly perfused with a blood-like nutrient medium containing circulating human immune cells that can also contribute to radiation responses. This carefully engineered, immunologically active alveolar-capillary interface also experiences cyclic mechanical movements mimicking actual breathing motions. Importantly, this living, breathing Lung Chip can be transiently exposed to clinically relevant doses of radiation and then investigated for the effects over an extended period of time.

  • AI models identify biodiversity from animal sounds in tropical rainforests

    Tropical forests are among the most important habitats on our planet. They are characterised by extremely high species diversity and play a prominent role in the global carbon cycle and the world’s climate. However, many tropical forest areas have been deforested, and overexploitation continues day by day.
    Reforested areas in the tropics are therefore becoming increasingly important for the climate and biodiversity. How well biodiversity recovers in such areas can be monitored effectively through automated analysis of animal sounds, researchers report in the journal Nature Communications.
    Recordings on Former Cocoa Plantations and Pastures
    As part of the DFG research group Reassembly, the team worked in northern Ecuador on abandoned pastures and former cacao plantations where forest is gradually reestablishing itself. There, they investigated whether autonomous sound recorders and artificial intelligence (AI) can be used to automatically recognise how the species communities of birds, amphibians and mammals are composed.
    “The research results show that the sound data reflect the return of biodiversity to abandoned agricultural areas extremely well,” says a delighted Professor Jörg Müller. The head of the Ecological Station Fabrikschleichach at Julius-Maximilians-Universität (JMU) Würzburg led the study together with his colleague Oliver Mitesser.
    Overall, it is the communities of vocalising species in particular that mirror the recolonisation, because these communities closely follow the recovery gradients. A preliminary set of 70 AI bird models was able to describe the entire species communities of birds, amphibians, and some calling mammals. Even changes in nocturnal insects could be meaningfully correlated with them.
    AI Models are Being Further Refined
    The team is currently working to further improve the AI models used and to expand the set of models, with the goal of automatically recording even more species. The models are also to be established in other protected areas in Ecuador, in the Sailershausen JMU Forest, and in the Bavarian Forest, Germany’s oldest national park.
    “Our AI models can be the basis for a very universal tool for monitoring biodiversity in reforested areas,” says Jörg Müller. The Würzburg professor sees possible applications, for example, in the context of certifications or biodiversity credits. Biodiversity credits function similarly to carbon dioxide emissions trading. They are issued by projects that protect or improve biodiversity. They are purchased by companies or organisations that want to compensate for negative impacts of their activities.

  • Virtual reality helps people with hoarding disorder practice decluttering

    Many people who dream of an organized, uncluttered home à la Marie Kondo find it hard to decide what to keep and what to let go. But for those with hoarding disorder — a mental condition estimated to affect 2.5% of the U.S. population — the reluctance to let go can reach dangerous and debilitating levels.
    Now, a pilot study by Stanford Medicine researchers suggests that a virtual reality therapy that allows those with hoarding disorder to rehearse relinquishing possessions in a simulation of their own home could help them declutter in real life. The simulations can help patients practice organizational and decision-making skills learned in cognitive behavioral therapy — currently the standard treatment — and desensitize them to the distress they feel when discarding.
    The study was published in the October issue of the Journal of Psychiatric Research.
    A hidden problem
    Hoarding disorder is an under-recognized and under-treated condition that has been included in the Diagnostic and Statistical Manual of Mental Disorders — referred to as the DSM-5 — as a formal diagnosis only since 2013. People with the disorder, who tend to be older, have persistent difficulty parting with possessions, resulting in an accumulation of clutter that impairs their relationships, their work and even their safety.
    “Unfortunately, stigma and shame prevent people from seeking help for hoarding disorder,” said Carolyn Rodriguez, MD, PhD, professor of psychiatry and behavioral sciences and senior author of the study. “They may also be unwilling to have anyone else enter the home to help.”
    Sometimes the condition is discovered through cluttered backgrounds on Zoom calls or, tragically, when firefighters respond to a fire, Rodriguez said. Precarious piles of stuff not only prevent people from sleeping in their beds and cooking in their kitchens, but they can also attract pests; block fire exits; and collapse on occupants, first responders and clinicians offering treatment.

  • New polymer membranes, AI predictions could dramatically reduce energy, water use in oil refining

    A new kind of polymer membrane created by researchers at Georgia Tech could reshape how refineries process crude oil, dramatically reducing the energy and water required while extracting even more useful materials.
    The so-called DUCKY polymers — more on the unusual name in a minute — are reported Oct. 16 in Nature Materials. And they’re just the beginning for the team of Georgia Tech chemists, chemical engineers, and materials scientists. They also have created artificial intelligence tools to predict the performance of these kinds of polymer membranes, which could accelerate development of new ones.
    The implications are stark: the initial separation of crude oil components is responsible for roughly 1% of energy used across the globe. What’s more, the membrane separation technology the researchers are developing could have several uses, from biofuels and biodegradable plastics to pulp and paper products.
    “We’re establishing concepts here that we can then use with different molecules or polymers, but we apply them to crude oil because that’s the most challenging target right now,” said M.G. Finn, professor and James A. Carlos Family Chair in the School of Chemistry and Biochemistry.
    Crude oil in its raw state includes thousands of compounds that have to be processed and refined to produce useful materials — gas and other fuels, as well as plastics, textiles, food additives, medical products, and more. Squeezing out the valuable stuff involves dozens of steps, but it starts with distillation, a water- and energy-intensive process.
    Researchers have been trying to develop membranes to do that work instead, filtering out the desirable molecules and skipping all the boiling and cooling.
    “Crude oil is an enormously important feedstock for almost all aspects of life, and most people don’t think about how it’s processed,” said Ryan Lively, Thomas C. DeLoach Jr. Professor in the School of Chemical and Biomolecular Engineering. “These distillation systems are massive water consumers, and the membranes simply are not. They’re not using heat or combustion. They just use electricity. You could ostensibly run it off of a wind turbine, if you wanted. It’s just a fundamentally different way of doing a separation.”
    What makes the team’s new membrane formula so powerful is a new family of polymers. The researchers used building blocks called spirocyclic monomers that assemble into chains with lots of 90-degree turns, forming a kinky material that doesn’t compress easily and forms pores that selectively bind and permit desirable molecules to pass through. The polymers are not rigid, which means they’re easier to make in large quantities. They also have a well-controlled flexibility or mobility that allows pores of the right filtering structure to come and go over time.