More stories


    The complexity of forests cannot be explained by simple mathematical rules, study finds

The way trees grow together does not resemble the way branches grow on a single tree, scientists have discovered.
Nature is full of surprising repetitions. In trees, the large branches often look like entire trees, while smaller branches and twigs look like the larger branches they grow from. Seen in isolation, each part of the tree could be mistaken for a miniature version of the whole.
It has long been assumed that this property, called fractality, also applies to entire forests, but researchers from the University of Bristol have found that this is not the case.
The study, published in December in the Journal of Ecology, refutes claims that the self-similarity observed within individual trees can be extended to whole forest canopies and landscapes.
    Lead author Dr Fabian Fischer explained: “Fractality can be found in many natural systems. Transport networks such as arteries or rivers often show self-similarity in the way they branch, and many organic structures, such as trees, ferns or broccoli, are composed of parts that look like the whole.
    “Fractality provides a way of categorising and quantifying these self-similar patterns we so often observe in nature, and has been hypothesized to be an emergent property that is shared by many natural systems.
“Intuitively, if you look at a picture of something and you can’t quite determine how big it is, then this is a good indicator of fractality. For instance, is this a large mountain in front of me or just a small rock that looks like a mountain? Is it a branch or a whole tree?

    “Scientifically, this self-similarity has the attractive property that it allows you to describe an apparently complex object using some very simple rules and numbers.”
    If self-similarity extended from the small twigs of a single tree to entire forest ecosystems, it would help ecologists describe complex landscapes in much simpler ways, and potentially directly compare the complexity of very different ecosystems, such as coral reefs and forest canopies.
To test the idea that forest canopies behave like fractals, the team used airborne laser scanning data from nine sites spread across Australia’s Terrestrial Ecosystem Research Network (TERN). These sites span a large rainfall gradient and vary enormously in structure: from sparse, short arid woodlands in Western Australia to towering, 90-metre-tall mountain ash forests in Tasmania. From each laser scan, they derived high-resolution forest height maps and compared these to what forest heights would look like if the forests were fractal.
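The fractal test the team applied can be illustrated in miniature: for a truly fractal surface, a box-counting estimate of dimension stays stable across scales. The sketch below is a simplified 1-D stand-in using made-up data, not the TERN scans or the study's actual method; it estimates a box-counting dimension for a height profile by fitting the slope of log(box count) against log(1/scale).

```python
import math

def box_count_dimension(profile, scales):
    """Estimate the box-counting dimension of a 1-D height profile:
    count the boxes of side s needed to cover the curve, then fit
    the slope of log(count) against log(1/s)."""
    xs, ys = [], []
    for s in scales:
        boxes = {(x // s, int(h) // s) for x, h in enumerate(profile)}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# sanity check: a straight line is one-dimensional, so its estimated
# dimension should stay close to 1 across all box sizes
line = [0.5 * x for x in range(256)]
dim = box_count_dimension(line, [1, 2, 4, 8, 16])
```

For a real canopy height map the same estimate would be computed in 2-D and compared across scales; systematic drift in the estimate is the kind of deviation from fractality the study reports.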
    Dr Fischer said: “We found that forest canopies are not fractal, but they are very similar in how they deviate from fractality, irrespective of what ecosystem they are in.
    “That they are not fractal makes a lot of sense and was our hypothesis from the start. While it might be possible to confuse a branch for an entire tree, it’s usually easy to differentiate trees from a grove of trees or from an entire forest.
    “But it was surprising how similar all forest canopies were in the way they deviated from true fractals, and how deviations were linked to the size of the trees and how dry their environment was.

“The consistency of deviations also gave us an idea of how we could compare complexity across ecosystems. Most ecosystems, like forests, will hit an upper limit — most likely determined by the maximum size of their organisms — beyond which their structure cannot vary freely anymore.
    “If we could determine these upper limits, this could open up routes to understanding how very different organisms and systems (coral reefs, forests, etc.) work and to test whether they might share the same basic organising principles.”
Now the team plan to compare an even wider range of forest ecosystems across the globe, to find out whether similar organizing principles apply in forests and beyond, and to discover what drives these patterns by looking at multiple scans over time.
    Dr Fischer concluded: “A key question in science is whether there are generalizable patterns in nature, and an excellent candidate for this is fractality.
    “The forests we studied were not fractal, but there were clear similarities across all sites in how they deviated from fractality. From a theoretical point of view, this points the way to a framework for finding general organizing principles in biology.
“But this also has practical implications: if we cannot understand the forest from its trees, and vice versa, then we must monitor forests both at small and large scales to understand how they respond to climatic changes and growing human pressure.”


    Misinformation and irresponsible AI — experts forecast how technology may shape our near future

    From misinformation and invisible cyber attacks, to irresponsible AI that could cause events involving multiple deaths, expert futurists have forecast how rapid technology changes may shape our world by 2040.
As the pace of computer technology surges ahead and systems become increasingly interlinked, it is vital to know how these rapid advances could impact the world, in order to take steps to prevent the worst outcomes.
Using a Delphi study, a well-known forecasting technique, a team of cyber security researchers led by academics from Lancaster University interviewed 12 experts in the future of technologies.
The experts ranged from chief technology officers in businesses, consultant futurists and a technology journalist to academic researchers. They were asked how particular technologies might develop and change our world over the next 15 years, to 2040, what risks they might pose, and how to address the challenges that may arise.
    Most of the experts forecasted exponential growth in Artificial Intelligence (AI) over the next 15 years, and many also expressed concern that corners could be cut in the development of safe AI. They felt that this corner cutting could be driven by nation states seeking competitive advantage. Several of the experts even considered it possible that poorly implemented AI could lead to incidents involving many deaths, although other experts disagreed with this view.
    Dr Charles Weir, Lecturer at Lancaster University’s School of Computing and Communications and lead researcher of the study, said: “Technology advances have brought, and will continue to bring, great benefits. We also know there are risks around some of these technologies, including AI, and where their development may go — everyone’s been discussing them — but the possible magnitude of some of the risks forecast by some of the experts was staggering.
    “But by forecasting what potential risks lie just beyond the horizon we can take steps to avoid major problems.”
    Another significant concern held by most of the experts involved in the study was that technology advances will make it easier for misinformation to spread. This has the potential to make it harder for people to tell the difference between truth and fiction — with ramifications for democracies.

Dr Weir said: “We are already seeing misinformation on social media networks, and its use by some nation states. The experts are forecasting that advances in technologies will make it much easier for people and bad actors to continue spreading misleading material by 2040.”
Other technologies were forecast not to have as big an impact by 2040, including quantum computing, which the experts see as having effects over a much longer timeframe, and blockchain, which most of the experts dismissed as a source of major change.
    The experts forecast that:
    · By 2040, competition between nation states and big tech companies will lead to corners being cut in the development of safe AI
    · Quantum computing will have limited impact by 2040
    · By 2040 there will be ownership of public web assets. These will be identified and traded through digital tokens
    · By 2040 it will be harder to distinguish truth from fiction because widely accessible AI can massively generate doubtful content

    · By 2040 there will be less ability to distinguish accidents from criminal incidents due to the decentralised nature and complexity of systems
The forecasters also offered some suggested solutions to help mitigate some of the concerns raised. Their suggestions included governments introducing safety principles for AI purchasing and new laws to regulate AI safety. In addition, universities could play a vital role by introducing courses combining technical skills and legislation.
    These forecasts will help policy makers and technology professionals make strategic decisions around developing and deploying novel computing technologies. They are outlined in the paper ‘Interlinked Computing in 2040: Safety, Truth, Ownership and Accountability’ which has been published by the peer-reviewed journal IEEE Computer.
The paper’s authors are: Charles Weir and Anna Dyson of Lancaster University; Olamide Jogunola and Katie Paxton-Fear of Manchester Metropolitan University; and Louise Dennis of Manchester University.


    What coffee with cream can teach us about quantum physics

    Add a dash of creamer to your morning coffee, and clouds of white liquid will swirl around your cup. But give it a few seconds, and those swirls will disappear, leaving you with an ordinary mug of brown liquid.
    Something similar happens in quantum computer chips — devices that tap into the strange properties of the universe at its smallest scales — where information can quickly jumble up, limiting the memory capabilities of these tools.
    That doesn’t have to be the case, said Rahul Nandkishore, associate professor of physics at the University of Colorado Boulder.
    In a new coup for theoretical physics, he and his colleagues have used math to show that scientists could create, essentially, a scenario where the milk and coffee never mix — no matter how hard you stir them.
    The group’s findings may lead to new advances in quantum computer chips, potentially providing engineers with new ways to store information in incredibly tiny objects.
    “Think of the initial swirling patterns that appear when you add cream to your morning coffee,” said Nandkishore, senior author of the new study. “Imagine if these patterns continued to swirl and dance no matter how long you watched.”
    Researchers still need to run experiments in the lab to make sure that these never-ending swirls really are possible. But the group’s results are a major step forward for physicists seeking to create materials that remain out of balance, or equilibrium, for long periods of time — a pursuit known as “ergodicity breaking.”
    The team’s findings appeared this week in the latest issue of Physical Review Letters.

    Quantum memory
The study, which includes co-authors David Stephen and Oliver Hart, postdoctoral researchers in physics at CU Boulder, hinges on a common problem in quantum computing.
    Normal computers run on “bits,” which take the form of zeros or ones. Nandkishore explained that quantum computers, in contrast, employ “qubits,” which can exist as zero, one or, through the strangeness of quantum physics, zero and one at the same time. Engineers have made qubits out of a wide range of things, including individual atoms trapped by lasers or tiny devices called superconductors.
    But just like that cup of coffee, qubits can become easily mixed up. If you flip, for example, all of your qubits to one, they’ll eventually flip back and forth until the entire chip becomes a disorganized mess.
In the new research, Nandkishore and his colleagues may have figured out a way around that tendency toward mixing. The group calculated that if scientists arrange qubits into particular patterns, these assemblages will retain their information — even if you disturb them using a magnetic field or a similar disruption. That could, the physicist said, allow engineers to build devices with a kind of quantum memory.
    “This could be a way of storing information,” he said. “You would write information into these patterns, and the information couldn’t be degraded.”
    Tapping into geometry

    In the study, the researchers used mathematical modeling tools to envision an array of hundreds to thousands of qubits arranged in a checkerboard-like pattern.
The trick, they discovered, was to stuff the qubits into a tight spot. If qubits get close enough together, Nandkishore explained, they can influence the behavior of their neighbors, almost like a crowd of people trying to squeeze themselves into a telephone booth. Some of those people might be standing upright or on their heads, but they can’t flip the other way without pushing on everyone else.
The researchers calculated that if the qubits were arranged in just the right patterns, those patterns might flow around a quantum computer chip and never degrade — much like those clouds of cream swirling forever in your coffee.
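The crowd-in-a-phone-booth picture can be mimicked with a classical toy model (a deliberately simplified stand-in, not the quantum system analyzed in the paper): each site in a chain may flip only when its neighbors allow it, and a fully packed pattern then never changes, no matter how long the dynamics run.

```python
import random

def sweep(spins, rng):
    """One sweep of a kinetically constrained chain: a randomly chosen
    site may flip only when both of its neighbours are 0."""
    n = len(spins)
    flips = 0
    for _ in range(n):
        i = rng.randrange(n)
        if spins[(i - 1) % n] == 0 and spins[(i + 1) % n] == 0:
            spins[i] ^= 1
            flips += 1
    return flips

rng = random.Random(0)

# a fully packed pattern: every site's neighbours are 1, so no flip
# is ever allowed and the pattern survives indefinitely
packed = [1] * 12
spins = packed.copy()
frozen_flips = sum(sweep(spins, rng) for _ in range(1000))

# an empty chain, by contrast, starts scrambling on the first sweep
loose = [0] * 12
loose_flips = sweep(loose, rng)
```

The packed configuration plays the role of the information-bearing pattern: because every move is blocked by the neighbors, the stored state persists under the dynamics, while unconstrained configurations mix.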
    “The wonderful thing about this study is that we discovered that we could understand this fundamental phenomenon through what is almost simple geometry,” Nandkishore said.
    The team’s findings could influence a lot more than just quantum computers.
    Nandkishore explained that almost everything in the universe, from cups of coffee to vast oceans, tends to move toward what scientists call “thermal equilibrium.” If you drop an ice cube into your mug, for example, heat from your coffee will melt the ice, eventually forming a liquid with a uniform temperature.
    His new findings, however, join a growing body of research that suggests that some small organizations of matter can resist that equilibrium — seemingly breaking some of the most immutable laws of the universe.
“We’re not going to have to redo our math for ice and water,” Nandkishore said. “The field of mathematics that we call statistical physics is incredibly successful for describing things we encounter in everyday life. But there are settings where maybe it doesn’t apply.”


    Offshore wind farms are vulnerable to cyberattacks

The quickening pace of societal electrification is encouraging from a climate perspective. But the transition away from fossil fuels toward renewable sources like wind presents new risks that are not yet fully understood.
    Researchers from Concordia and Hydro-Quebec presented a new study on the topic in Glasgow, United Kingdom at the 2023 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm). Their study explores the risks of cyberattacks faced by offshore wind farms. Specifically, the researchers considered wind farms that use voltage-source-converter high-voltage direct-current (VSC-HVDC) connections, which are rapidly becoming the most cost-effective solution to harvest offshore wind energy around the world.
    “As we advance the integration of renewable energies, it is imperative to recognize that we are venturing into uncharted territory, with unknown vulnerabilities and cyber threats,” says Juanwei Chen, a PhD student at the Concordia Institute for Information Systems Engineering (CIISE) at the Gina Cody School of Engineering and Computer Science.
    “Offshore wind farms are connected to the main power grid using HVDC technologies. These farms may face new operational challenges,” Chen explains.
    “Our focus is to investigate how these challenges could be intensified by cyber threats and to assess the broader impact these threats might have on our power grid.”
    Concordia PhD student Hang Du, CIISE associate professor Jun Yan and Gina Cody School dean Mourad Debbabi, along with Rawad Zgheib from the Hydro-Quebec Research Institute (IREQ), also contributed to the study. This work is part of a broad research collaboration project involving the group of Prof. Debbabi and the IREQ cybersecurity research group led by Dr. Marthe Kassouf and involving a team of researchers including Dr. Zgheib.
    Complex and vulnerable systems
    Offshore wind farms require more cyber infrastructure than onshore wind farms, given that offshore farms are often dozens of kilometres from land and operated remotely. Offshore wind farms need to communicate with onshore systems via a wide area network. Meanwhile, the turbines also communicate with maintenance vessels and inspection drones, as well as with each other.

    This complex, hybrid-communication architecture presents multiple access points for cyberattacks. If malicious actors were able to penetrate the local area network of the converter station on the wind farm side, these actors could tamper with the system’s sensors. This tampering could lead to the replacement of actual data with false information. As a result, electrical disturbances would affect the offshore wind farm at the points of common coupling.
In turn, these disturbances could trigger poorly damped power oscillations from the offshore wind farms when all the offshore wind farms are generating their maximum output. If these cyber-induced electrical disturbances are repetitive and match the frequency of the poorly damped power oscillations, the oscillations could be amplified. These amplified oscillations might then be transmitted through the HVDC system, potentially reaching and affecting the stability of the main power grid. While existing systems usually have redundancies built in to protect them against physical contingencies, such protection is rare against cyber security breaches.
    “The system networks can handle events like router failures or signal decays. If there is an attacker in the middle who is trying to hijack the signals, then that becomes more concerning,” says Yan, the Concordia University Research Chair (Tier 2) in Artificial Intelligence in Cyber Security and Resilience.
Yan adds that considerable gaps exist in the industry, both among manufacturers and utilities. While many organizations are focusing on corporate issues such as data security and access controls, much remains to be done to strengthen the security of operational technologies.
    He notes that Concordia is leading the push for international standardization efforts but acknowledges the work is just beginning.
    “There are regulatory standards for the US and Canada, but they often only state what is required without specifying how it should be done,” he says. “Researchers and operators are aware of the need to protect our energy security, but there remain many directions to pursue and open questions to answer.”
This research is supported by the Concordia/Hydro-Québec/Hitachi Partnership Research Chair, with additional support from NSERC and PROMPT.


    Many but not all of the world’s aquifers are losing water

    The world’s precious stash of subterranean freshwater is shrinking — and in nearly a third of aquifers, that loss has been speeding up in the last couple of decades, researchers report in the Jan. 25 Nature.

A one-two punch of unsustainable groundwater withdrawals and a changing climate has been causing global water levels to fall on average, leading to water shortages, slumping land surfaces and seawater intrusion into aquifers. The new study suggests that groundwater decline has accelerated in many places since 2000, but also that these losses can be reversed with better water management.


    When lab-trained AI meets the real world, ‘mistakes can happen’

    Human pathologists are extensively trained to detect when tissue samples from one patient mistakenly end up on another patient’s microscope slides (a problem known as tissue contamination). But such contamination can easily confuse artificial intelligence (AI) models, which are often trained in pristine, simulated environments, reports a new Northwestern Medicine study.
    “We train AIs to tell ‘A’ versus ‘B’ in a very clean, artificial environment, but, in real life, the AI will see a variety of materials that it hasn’t trained on. When it does, mistakes can happen,” said corresponding author Dr. Jeffery Goldstein, director of perinatal pathology and an assistant professor of perinatal pathology and autopsy at Northwestern University Feinberg School of Medicine.
    “Our findings serve as a reminder that AI that works incredibly well in the lab may fall on its face in the real world. Patients should continue to expect that a human expert is the final decider on diagnoses made on biopsies and other tissue samples. Pathologists fear — and AI companies hope — that the computers are coming for our jobs. Not yet.”
    In the new study, scientists trained three AI models to scan microscope slides of placenta tissue to (1) detect blood vessel damage; (2) estimate gestational age; and (3) classify macroscopic lesions. They trained a fourth AI model to detect prostate cancer in tissues collected from needle biopsies. When the models were ready, the scientists exposed each one to small portions of contaminant tissue (e.g. bladder, blood, etc.) that were randomly sampled from other slides. Finally, they tested the AIs’ reactions.
    Each of the four AI models paid too much attention to the tissue contamination, which resulted in errors when diagnosing or detecting vessel damage, gestational age, lesions and prostate cancer, the study found.
    The findings were published earlier this month in the journal Modern Pathology. It marks the first study to examine how tissue contamination affects machine-learning models.
    ‘For a human, we’d call it a distraction, like a bright, shiny object’
    Tissue contamination is a well-known problem for pathologists, but it often comes as a surprise to non-pathologist researchers or doctors, the study points out. A pathologist examining 80 to 100 slides per day can expect to see two to three with contaminants, but they’ve been trained to ignore them.

    When humans examine tissue on slides, they can only look at a limited field within the microscope, then move to a new field and so on. After examining the entire sample, they combine all the information they’ve gathered to make a diagnosis. An AI model performs in the same way, but the study found AI was easily misled by contaminants.
    “The AI model has to decide which pieces to pay attention to and which ones not to, and that’s zero sum,” Goldstein said. “If it’s paying attention to tissue contaminants, then it’s paying less attention to the tissue from the patient that is being examined. For a human, we’d call it a distraction, like a bright, shiny object.”
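Goldstein's "zero sum" point is literal for attention-based models: softmax attention weights sum to one, so any weight spent on a contaminant is subtracted from the patient's tissue. A toy illustration with made-up patch scores (not the study's models or data):

```python
import math

def softmax(scores):
    """Normalise raw scores into attention weights that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# four patches of patient tissue, all equally relevant
clean = softmax([1.0, 1.0, 1.0, 1.0])

# the same four patches plus one "bright, shiny" contaminant patch
# that the model scores highly
contaminated = softmax([1.0, 1.0, 1.0, 1.0, 4.0])

# the weights still sum to 1, so the contaminant's large share comes
# directly out of the attention paid to the patient's tissue
```

In the clean case each patch receives a quarter of the attention; once the high-scoring contaminant appears, it absorbs most of the weight and every patient patch is down-weighted accordingly.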
    The AI models gave a high level of attention to contaminants, indicating an inability to encode biological impurities. Practitioners should work to quantify and improve upon this problem, the study authors said.
Previous AI research in pathology has examined different kinds of image artifacts, such as blurriness, debris on the slide, folds or bubbles, but this is the first study to examine tissue contamination.
    ‘Confident that AI for placenta is doable’
    Perinatal pathologists, such as Goldstein, are incredibly rare. In fact, there are only 50 to 100 in the entire U.S., mostly located in big academic centers, Goldstein said. This means only 5% of placentas in the U.S. are examined by human experts. Worldwide, that number is even lower. Embedding this type of expertise into AI models can help pathologists across the country do their jobs better and faster, Goldstein said.
“I’m actually very excited about how well we were able to build the models and how well they performed before we deliberately broke them for the study,” Goldstein said. “Our results make me confident that AI evaluations of placenta are doable. We ran into a real-world problem, but hitting that speedbump means we’re on the road to better integrating the use of machine learning in pathology.”


    Artificial intelligence and immunity

    Researchers from Cleveland Clinic and IBM have published a strategy for identifying new targets for immunotherapy through artificial intelligence (AI). This is the first peer-reviewed publication from the two organizations’ Discovery Accelerator partnership, designed to advance research in healthcare and life sciences.
The team worked together to develop supervised and unsupervised AI to reveal the molecular characteristics of peptide antigens, small pieces of protein molecules that immune cells use to recognize threats. Project members came from diverse groups led by Cleveland Clinic’s Timothy Chan, M.D., Ph.D., as well as IBM’s Jeff Weber, Ph.D., Senior Research Scientist, and Wendy Cornell, Ph.D., Manager and Strategy Lead for Healthcare and Life Sciences Accelerated Discovery.
    “In the past, all our data on cancer antigen targets came from trial and error,” says Dr. Chan, chair of Cleveland Clinic’s Center for Immunotherapy and Precision Immuno-Oncology and Sheikha Fatima Bint Mubarak Endowed Chair in Immunotherapy and Precision Immuno-Oncology. “Partnering with IBM allows us to push the boundaries of artificial intelligence and health sciences research to change the way we develop and evaluate targets for cancer therapy.”
    For decades, scientists have been researching how to better identify antigens and use them to attack cancer cells or cells infected with viruses. This task has proved challenging because antigen peptides interact with immune cells based on specific features on the surface of the cells, a process which is still not well understood. Research has been limited by the sheer number of variables that affect how immune systems recognize these targets. Identifying these variables is difficult and time intensive with regular computing, so current models are limited and at times inaccurate.
Published in Briefings in Bioinformatics, the study found that AI models that account for changes in molecular shape over time can accurately depict how immune systems recognize a target antigen. Through these models, researchers could home in on what processes are critical to target with immunotherapy treatments such as vaccines and engineered immune cells.
    Researchers can incorporate these insights into other AI models moving forward to identify more effective immunotherapy targets.
“These discoveries are an example of what makes this partnership successful — combining IBM’s cutting-edge computational resources with Cleveland Clinic’s medical expertise,” Dr. Weber says. “These findings resulted from a key collaboration between everyone from a world-class expert in cancer immunotherapy to our physics-based simulation and AI experts. Collaboration when combined with innovation has terrific potential.”


    AI surveillance tool successfully helps to predict sepsis, saves lives

Each year, at least 1.7 million adults in the United States develop sepsis, and approximately 350,000 die from the serious condition, in which the body’s extreme response to an infection can trigger a life-threatening chain reaction throughout the entire body.
    In a new study, published in the January 23, 2024 online edition of npj Digital Medicine, researchers at University of California San Diego School of Medicine utilized an artificial intelligence (AI) model in the emergency departments at UC San Diego Health in order to quickly identify patients at risk for sepsis infection.
The study found that the AI algorithm, named COMPOSER and previously developed by the research team, resulted in a 17% reduction in mortality.
    “Our COMPOSER model uses real-time data in order to predict sepsis before obvious clinical manifestations,” said study co-author Gabriel Wardi, MD, chief of the Division of Critical Care in the Department of Emergency Medicine at UC San Diego School of Medicine. “It works silently and safely behind the scenes, continuously surveilling every patient for signs of possible sepsis.”
    Once a patient checks in at the emergency department, the algorithm begins to continuously monitor more than 150 different patient variables that could be linked to sepsis, such as lab results, vital signs, current medications, demographics and medical history.
Should a patient present with multiple variables that together indicate a high risk of sepsis, the AI algorithm will notify nursing staff via the hospital’s electronic health record. The nursing team will then review the case with the physician and determine an appropriate treatment plan.
    “These advanced AI algorithms can detect patterns that are not initially obvious to the human eye,” said study co-author Shamim Nemati, PhD, associate professor of biomedical informatics and director of predictive analytics at UC San Diego School of Medicine. “The system can look at these risk factors and come up with a highly accurate prediction of sepsis. Conversely, if the risk patterns can be explained by other conditions with higher confidence, then no alerts will be sent.”
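The surveillance loop the two researchers describe, continuously scoring incoming patient variables and alerting only above a risk threshold, can be sketched in a few lines. This is a minimal illustration with a made-up two-variable model and threshold, not COMPOSER itself, which is a deep neural network over more than 150 variables:

```python
def monitor(vitals_stream, risk_model, threshold=0.8):
    """Score each new set of patient variables as it arrives and
    yield an alert whenever the modelled risk crosses the threshold."""
    for t, vitals in enumerate(vitals_stream):
        risk = risk_model(vitals)
        if risk >= threshold:
            yield (t, risk)

# hypothetical stand-in model: a clamped weighted sum of two variables
def toy_model(v):
    hr, temp = v["heart_rate"], v["temp_c"]
    return min(1.0, max(0.0, 0.01 * (hr - 60) + 0.2 * (temp - 37.0)))

stream = [
    {"heart_rate": 72, "temp_c": 36.9},   # normal vitals
    {"heart_rate": 110, "temp_c": 38.4},  # elevated, below threshold
    {"heart_rate": 135, "temp_c": 39.5},  # high risk: alert fires
]
alerts = list(monitor(stream, toy_model))
```

Only the third reading trips the threshold, so a single alert is raised; in the deployed system that alert is what reaches the nursing staff through the electronic health record for human review.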
    The study examined more than 6,000 patient admissions before and after COMPOSER was deployed in the emergency departments at UC San Diego Medical Center in Hillcrest and at Jacobs Medical Center in La Jolla.

It is the first study to report an improvement in patient outcomes from the use of an AI deep-learning model, one that uses artificial neural networks to safely and correctly identify health concerns in patients. The model can identify complex combinations of risk factors, which are then reviewed by the health care team for confirmation.
    “It is because of this AI model that our teams can provide life-saving therapy for patients quicker,” said Wardi, emergency medicine and critical care physician at UC San Diego Health.
    COMPOSER was activated in December 2022 and is now also being utilized in many hospital in-patient units throughout UC San Diego Health. It will soon be activated at the health system’s newest location, UC San Diego Health East Campus.
    UC San Diego Health, the region’s only academic medical system, is a pioneer in the field of AI health care, with a recent announcement of its inaugural chief health AI officer and opening of the Joan and Irwin Jacobs Center for Health Innovation at UC San Diego Health, which seeks to develop sophisticated and advanced solutions in health care.
Additionally, the health system recently launched a pilot in which Epic, a cloud-based electronic health record system, and Microsoft’s generative AI integration automatically draft more compassionate message responses through ChatGPT, relieving doctors and caregivers of this additional step so they can focus on patient care.
    “Integration of AI technology in the electronic health record is helping to deliver on the promise of digital health, and UC San Diego Health has been a leader in this space to ensure AI-powered solutions support high reliability in patient safety and quality health care,” said study co-author Christopher Longhurst, MD, executive director of the Jacobs Center for Health Innovation, and chief medical officer and chief digital officer at UC San Diego Health.
Co-authors of this study include Aaron Boussina, Theodore Chan, Allison Donahue, Robert El-Kareh, Atul Malhotra, Robert Owens, Kimberly Quintero and Supreeth Shashikumar, all at UC San Diego.