More stories

  • Deepfake challenges 'will only grow'

    Although most public attention surrounding deepfakes has focused on large propaganda campaigns, the problematic new technology is much more insidious, according to a new report by artificial intelligence (AI) and foreign policy experts at Northwestern University and the Brookings Institution.
    In the new report, the authors discuss deepfake videos, images and audio as well as their related security challenges. The researchers predict the technology is on the brink of being used much more widely, including in targeted military and intelligence operations.
    Ultimately, the experts make recommendations to security officials and policymakers for how to handle the unsettling new technology. Among their recommendations, the authors emphasize a need for the United States and its allies to develop a code of conduct for governments’ use of deepfakes.
    The research report, “Deepfakes and international conflict,” was published this month by Brookings.
    “The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations,” the authors write. “Security officials and policymakers will need to prepare accordingly.”
    Northwestern co-authors include AI and security expert V.S. Subrahmanian, the Walter P. Murphy Professor of Computer Science at Northwestern’s McCormick School of Engineering and Buffett Faculty Fellow at the Buffett Institute of Global Affairs, and Chongyang Gao, a Ph.D. student in Subrahmanian’s lab. Brookings Institution co-authors include Daniel L. Byman and Chris Meserole.

    Deepfakes require ‘little difficulty’
    Subrahmanian, who leads the Northwestern Security and AI Lab, and his student Gao previously developed TREAD (Terrorism Reduction with Artificial Intelligence Deepfakes), an algorithm that researchers can use to generate their own deepfake videos. By creating convincing deepfakes, researchers can better understand the technology within the context of security.
    Using TREAD, Subrahmanian and his team created sample deepfake videos of deceased Islamic State terrorist Abu Mohammed al-Adnani. While the resulting video looks and sounds like al-Adnani — with highly realistic facial expressions and audio — he is actually speaking words delivered by Syrian President Bashar al-Assad.
    The researchers created the lifelike video within hours. The process was so straightforward that Subrahmanian and his coauthors said militaries and security agencies should simply assume that rivals are capable of generating deepfake videos of any official or leader within minutes.
    “Anyone with a reasonable background in machine learning can — with some systematic work and the right hardware — generate deepfake videos at scale by building models similar to TREAD,” the authors write. “The intelligence agencies of virtually any country, which certainly includes U.S. adversaries, can do so with little difficulty.”
    Avoiding ‘cat-and-mouse games’

    The authors believe that state and non-state actors will leverage deepfakes to strengthen ongoing disinformation efforts. Deepfakes could help fuel conflict by legitimizing war, sowing confusion, undermining popular support, polarizing societies, discrediting leaders and more. In the short term, security and intelligence experts can counteract deepfakes by designing and training algorithms to identify potentially fake videos, images and audio. This approach, however, is unlikely to remain effective in the long term.
    “The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make ‘tweaks’ to evade the detector,” the authors said. “The detect-evade-detect-evade cycle plays out over time…Eventually, we may reach an endpoint where detection becomes infeasible or too computationally intensive to carry out quickly and at scale.”
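    As context for that short-term countermeasure, the sketch below shows in broad strokes what training an algorithm to flag potentially fake media can look like: a binary classifier fit on labeled real and fake examples. It is purely illustrative; the feature vectors are random stand-ins, and this is not the report's method or a production detector.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in features: in practice these would come from a network run over video
# frames or audio clips; here they are random vectors so the sketch stays self-contained.
rng = np.random.default_rng(42)
X_real = rng.normal(0.0, 1.0, size=(500, 128))
X_fake = rng.normal(0.4, 1.0, size=(500, 128))   # pretend fakes leave a statistical trace
X = np.vstack([X_real, X_fake])
y = np.array([0] * 500 + [1] * 500)               # 0 = real, 1 = deepfake

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# The cat-and-mouse problem in one line: an adversary who can probe a fixed detector
# like this one can perturb fakes until they score as real, forcing retraining.
```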
    For long-term strategies, the report’s authors make several recommendations:
    • Educate the general public to increase digital literacy and critical reasoning.
    • Develop systems capable of tracking the movement of digital assets by documenting each person or organization that handles the asset.
    • Encourage journalists and intelligence analysts to slow down and verify information before including it in published articles. “Similarly, journalists might emulate intelligence products that discuss ‘confidence levels’ with regard to judgments.”
    • Use information from separate sources, such as verification codes, to confirm the legitimacy of digital assets.
    Above all, the authors argue that the government should enact policies that offer robust oversight and accountability mechanisms for governing the generation and distribution of deepfake content. If the United States or its allies want to “fight fire with fire” by creating their own deepfakes, then policies first need to be agreed upon and put in place. The authors say this could include establishing a “Deepfakes Equities Process,” modeled after similar processes for cybersecurity.
    “The decision to generate and use deepfakes should not be taken lightly and not without careful consideration of the trade-offs,” the authors write. “The use of deepfakes, particularly designed to attack high-value targets in conflict settings, will affect a wide range of government offices and agencies. Each stakeholder should have the opportunity to offer input, as needed and as appropriate. Establishing such a broad-based, deliberative process is the best route to ensuring that democratic governments use deepfakes responsibly.”
    Further information: https://www.brookings.edu/research/deepfakes-and-international-conflict/

  • Researchers gain deeper understanding of mechanism behind superconductors

    Physicists at Leipzig University have once again gained a deeper understanding of the mechanism behind superconductors. This brings the research group led by Professor Jürgen Haase one step closer to its goal of laying the foundations for a theory of superconductors, materials that conduct current without resistance and without energy loss. The researchers found that in superconducting copper-oxygen compounds, known as cuprates, there must be a very specific charge distribution between the copper and the oxygen, even under pressure.
    This confirmed their own findings from 2016, when Haase and his team developed an experimental method based on magnetic resonance that can measure changes that are relevant to superconductivity in the structure of materials. They were the first team in the world to identify a measurable material parameter that predicts the maximum possible transition temperature — a condition required to achieve superconductivity at room temperature. Now they have discovered that cuprates, which under pressure enhance superconductivity, follow the charge distribution predicted in 2016. The researchers have published their new findings in the journal PNAS.
    “The fact that the transition temperature of cuprates can be enhanced under pressure has puzzled researchers for 30 years. But until now we didn’t know which mechanism was responsible for this,” Haase said. He and his colleagues at the Felix Bloch Institute for Solid State Physics have now come a great deal closer to understanding the actual mechanism in these materials. “At Leipzig University — with support from the Graduate School Building with Molecules and Nano-objects (BuildMoNa) — we have established the basic conditions needed to research cuprates using nuclear resonance, and Michael Jurkutat was the first doctoral researcher to join us. Together, we established the Leipzig Relation, which says that you have to take electrons away from the oxygen in these materials and give them to the copper in order to increase the transition temperature. You can do this with chemistry, but also with pressure. But hardly anyone would have thought that we could measure all of this with nuclear resonance,” Haase said.
    Their current research finding could be exactly what is needed to produce a superconductor at room temperature, which has been the dream of many physicists for decades and which Haase now expects to be only a few years away. To date, superconductivity has only been achieved at very low temperatures of around minus 150 degrees Celsius and below, far colder than anything found naturally on Earth. About a year ago, a Canadian research group verified the findings of Professor Haase’s team from 2016 using newly developed, computer-aided calculations and thus substantiated the findings theoretically.
    Superconductivity is already used today in a variety of ways, for example, in magnets for MRI machines and in nuclear fusion. But it would be much easier and less expensive if superconductors operated at room temperature. The phenomenon of superconductivity was discovered in metals as early as 1911, but even Albert Einstein did not attempt to come up with an explanation back then. Nearly half a century passed before BCS theory provided an understanding of superconductivity in metals in 1957. In 1986, the discovery of superconductivity in ceramic materials (cuprate superconductors) at much higher temperatures by physicists Georg Bednorz and Karl Alexander Müller raised new questions, but also raised hopes that superconductivity could be achieved at room temperature.

  • Researchers use AI to triage patients with chest pain

    Artificial intelligence (AI) may help improve care for patients who show up at the hospital with acute chest pain, according to a study published in Radiology, a journal of the Radiological Society of North America (RSNA).
    “To the best of our knowledge, our deep learning AI model is the first to utilize chest X-rays to identify individuals among acute chest pain patients who need immediate medical attention,” said the study’s lead author, Márton Kolossváry, M.D., Ph.D., radiology research fellow at Massachusetts General Hospital (MGH) in Boston.
    Acute chest pain syndrome may consist of tightness, burning or other discomfort in the chest or a severe pain that spreads to your back, neck, shoulders, arms, or jaw. It may be accompanied by shortness of breath.
    Acute chest pain syndrome accounts for over 7 million emergency department visits annually in the United States, making it one of the most common complaints.
    Fewer than 8% of these patients are diagnosed with the three major cardiovascular causes of acute chest pain syndrome, which are acute coronary syndrome, pulmonary embolism or aortic dissection. However, the life-threatening nature of these conditions and low specificity of clinical tests, such as electrocardiograms and blood tests, lead to substantial use of cardiovascular and pulmonary diagnostic imaging, often yielding negative results. As emergency departments struggle with high patient numbers and shortage of hospital beds, effectively triaging patients at very low risk of these serious conditions is important.
    Deep learning is an advanced type of artificial intelligence (AI) that can be trained to search X-ray images to find patterns associated with disease.

    For the study, Dr. Kolossváry and colleagues developed an open-source deep learning model to identify patients with acute chest pain syndrome who were at risk for 30-day acute coronary syndrome, pulmonary embolism, aortic dissection or all-cause mortality, based on a chest X-ray.
    The study used electronic health records of patients presenting with acute chest pain syndrome who had a chest X-ray and additional cardiovascular or pulmonary imaging and/or stress tests at MGH or Brigham and Women’s Hospital in Boston between January 2005 and December 2015. For the study, 5,750 patients (mean age 59, including 3,329 men) were evaluated.
    The deep-learning model was trained on 23,005 patients from MGH to predict a 30-day composite endpoint of acute coronary syndrome, pulmonary embolism or aortic dissection and all-cause mortality based on chest X-ray images.
    The deep-learning tool significantly improved prediction of these adverse outcomes beyond age, sex and conventional clinical markers, such as d-dimer blood tests. The model maintained its diagnostic accuracy across age, sex, ethnicity and race. Using a 99% sensitivity threshold, the model was able to defer additional testing in 14% of patients as compared to 2% when using a model only incorporating age, sex, and biomarker data.
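    The 99% sensitivity operating point is a standard way to trade a tiny loss in catch rate for a large reduction in downstream testing. The snippet below is a generic illustration of that arithmetic with made-up scores; it is not the study's model, data, or code.

```python
import numpy as np

# Hypothetical data: 1 = patient had the 30-day composite outcome, 0 = did not.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_prob = np.clip(0.5 * y_true + rng.normal(0.3, 0.2, size=1000), 0, 1)  # fake model scores

def threshold_at_sensitivity(y_true, y_prob, target=0.99):
    """Largest cutoff whose sensitivity (share of true positives kept above it) is >= target."""
    positives = np.sort(y_prob[y_true == 1])
    idx = int(np.floor((1 - target) * len(positives)))  # allow at most 1% of positives below the cutoff
    return positives[idx]

cut = threshold_at_sensitivity(y_true, y_prob)
deferred = np.mean(y_prob < cut)            # patients scored as very low risk, whose extra testing could be deferred
sens = np.mean(y_prob[y_true == 1] >= cut)  # sanity check: should be about 0.99
print(f"cutoff={cut:.3f}, sensitivity={sens:.3f}, share deferred={deferred:.1%}")
```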
    “Analyzing the initial chest X-ray of these patients using our automated deep learning model, we were able to provide more accurate predictions regarding patient outcomes as compared to a model that uses age, sex, troponin or d-dimer information,” Dr. Kolossváry said. “Our results show that chest X-rays could be used to help triage chest pain patients in the emergency department.”
    According to Dr. Kolossváry, such an automated model could one day analyze chest X-rays in the background, helping to flag the patients who would benefit most from immediate medical attention and to identify those who can be safely discharged from the emergency department.
    The study is titled “Deep Learning Analysis of Chest Radiographs to Triage Patients with Acute Chest Pain Syndrome.” Collaborating with Dr. Kolossváry were Vineet K. Raghu, Ph.D., John T. Nagurney, M.D., Udo Hoffmann, M.D., M.P.H., and Michael T. Lu, M.D., M.P.H.

  • Novel framework provides 'measuring stick' for assessing patient matching tools

    Accurate linking of an individual’s medical records from disparate sources within and between health systems, known as patient matching, plays a critical role in patient safety and quality of care, but has proven difficult to accomplish in the United States, the last developed country without a unique patient identifier. In the U.S., linking patient data is dependent on algorithms designed by researchers, vendors and others. Research scientists led by Regenstrief Institute Vice President for Data and Analytics Shaun Grannis, M.D., M.S., have developed an eight-point framework for evaluating the validity and performance of algorithms to match medical records to the correct patient.
    “The value of data standardization is well recognized. There are national healthcare provider IDs. There are facility IDs and object identifiers. There are billing codes. There are standard vocabularies for healthcare lab test results and medical observations — such as LOINC® here at Regenstrief. Patient identity is the last gaping hole in our health infrastructure,” said Dr. Grannis. “We are providing a framework to evaluate patient matching algorithms for accuracy.
    “We recognize that the need for patient matching is not going away and that we need standardized methods to uniquely identify patients,” said Dr. Grannis. “Current patient matching algorithms come in many different flavors, shapes and sizes. To be able to compare how one performs against the other, or even to understand how they might interact together, we have to have a standard way of assessment. We have produced a novel, robust framework for consistent and reproducible evaluation. Simply put, the framework we’ve developed at Regenstrief provides a ‘measuring stick’ for the effectiveness of patient matching tools.”
    Individuals increasingly receive care from multiple sources. While patient matching is complex, it is crucial to health information exchange. Is the William Jones seen at one healthcare system the same person as the William, Will or Willy Jones or perhaps Bill or Billy Jones receiving care at other facilities? Does Elizabeth Smith’s name appear at different medical offices or perhaps at a physical therapy or a dialysis facility as Liz or Beth? To which Juan J. Gomez do various lab test results belong? Typos, missing information and other data errors as well as typical variations add to the complexity.
    The framework’s eight-point approach to the creation of gold standard matching data sets necessary for record linkage encompasses technical areas including data preprocessing, blocking, record adjudication, linkage evaluation and reviewer characteristics. The authors note that the framework “can help record linkage method developers provide necessary transparency when creating and validating gold standard reference matching data sets. In turn, this transparency will support both the internal and external validity of record linkage studies and improve the robustness of new record linkage strategies.”
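    For readers unfamiliar with the moving parts the framework covers, here is a deliberately simplified, generic record-linkage sketch (not Regenstrief's algorithm) showing two of the steps it would help evaluate: blocking to limit comparisons, and field-level similarity scoring to adjudicate candidate pairs. All records and thresholds are invented for illustration.

```python
from difflib import SequenceMatcher
from collections import defaultdict

# Toy records from two sources; none of this is real patient data.
records_a = [{"id": "A1", "first": "William", "last": "Jones", "dob": "1970-03-02"}]
records_b = [{"id": "B1", "first": "Billy",   "last": "Jones", "dob": "1970-03-02"},
             {"id": "B2", "first": "Liz",     "last": "Smith", "dob": "1988-11-17"}]

def block(records):
    """Group records by a cheap key (last name + birth year) to limit pairwise comparisons."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["last"].lower(), r["dob"][:4])].append(r)
    return groups

def similarity(r1, r2):
    """Score a candidate pair: exact date-of-birth match plus fuzzy first-name similarity."""
    name_sim = SequenceMatcher(None, r1["first"].lower(), r2["first"].lower()).ratio()
    dob_sim = 1.0 if r1["dob"] == r2["dob"] else 0.0
    return 0.5 * name_sim + 0.5 * dob_sim

blocks_a, blocks_b = block(records_a), block(records_b)
for key in blocks_a.keys() & blocks_b.keys():
    for r1 in blocks_a[key]:
        for r2 in blocks_b[key]:
            score = similarity(r1, r2)
            print(r1["id"], r2["id"], round(score, 2), "match" if score > 0.7 else "non-match")
```

    Even in this toy version, the choice of blocking key, similarity measure and cutoff all change which records link, which is exactly why a standard way of evaluating such algorithms matters.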
    Measures and standards are ubiquitous. “When you go to a gas station pump, the measure of how much gas goes through is standardized so that we know exactly how much is flowing. Similarly, we need to have a common way of measuring and understanding how algorithms for patient matching work,” said Dr. Grannis. “Our eight-pronged approach helps to cover the waterfront of what needs to be evaluated. Laying out the framework and specifying the tasks and activities that need to be completed goes a long way toward standardizing patient matching.”
    In addition to playing a critical role in patient safety and quality of care, improved patient matching accuracy supports more cost-effective healthcare delivery in a variety of ways including reduction in the number of duplicate medical tests.

  • New small laser device can help detect signs of life on other planets

    As space missions delve deeper into the outer solar system, the need for more compact, resource-conserving and accurate analytical tools has become increasingly critical — especially as the hunt for extraterrestrial life and habitable planets or moons continues.
    A University of Maryland-led team developed a new instrument specifically tailored to the needs of NASA space missions. Their mini laser-sourced analyzer is significantly smaller and more resource efficient than its predecessors — all without compromising the quality of its ability to analyze planetary material samples and potential biological activity onsite. The team’s paper on this new device was published in the journal Nature Astronomy on January 16, 2023.
    Weighing only about 17 pounds, the instrument is a physically scaled-down combination of two important tools for detecting signs of life and identifying compositions of materials: a pulsed ultraviolet laser that removes small amounts of material from a planetary sample and an Orbitrap™ analyzer that delivers high-resolution data about the chemistry of the examined materials.
    “The Orbitrap was originally built for commercial use,” explained Ricardo Arevalo, lead author of the paper and an associate professor of geology at UMD. “You can find them in the labs of pharmaceutical, medical and proteomic industries. The one in my own lab is just under 400 pounds, so they’re quite large, and it took us eight years to make a prototype that could be used efficiently in space — significantly smaller and less resource-intensive, but still capable of cutting-edge science.”
    The team’s new gadget shrinks down the original Orbitrap while pairing it with laser desorption mass spectrometry (LDMS) — techniques that have yet to be applied in an extraterrestrial planetary environment. The new device boasts the same benefits as its larger predecessors but is streamlined for space exploration and onsite planetary material analysis, according to Arevalo.
    Thanks to its diminutive mass and minimal power requirements, the mini Orbitrap LDMS instrument can be easily stowed away and maintained on space mission payloads. The instrument’s analyses of a planetary surface or substance are also far less intrusive and thus much less likely to contaminate or damage a sample than many current methods that attempt to identify unknown compounds.

    “The good thing about a laser source is that anything that can be ionized can be analyzed. If we shoot our laser beam at an ice sample, we should be able to characterize the composition of the ice and see biosignatures in it,” Arevalo said. “This tool has such a high mass resolution and accuracy that any molecular or chemical structures in a sample become much more identifiable.”
    The laser component of the mini LDMS Orbitrap also allows researchers access to larger, more complex compounds that are more likely to be associated with biology. Smaller organic compounds like amino acids, for example, are more ambiguous signatures of life forms.
    “Amino acids can be produced abiotically, meaning that they’re not necessarily proof of life. Meteorites, many of which are chock full of amino acids, can crash onto a planet’s surface and deliver abiotic organics to the surface,” Arevalo said. “We know now that larger and more complex molecules, like proteins, are more likely to have been created by or associated with living systems. The laser lets us study larger and more complex organics that can reflect higher fidelity biosignatures than smaller, simpler compounds.”
    For Arevalo and his team, the mini LDMS Orbitrap will offer much-needed insight and flexibility for future ventures into the outer solar system, such as missions focused on life detection objectives (e.g., Enceladus Orbilander) and exploration of the lunar surface (e.g., the NASA Artemis Program). They hope to send their device into space and deploy it on a planetary target of interest within the next few years.
    “I view this prototype as a pathfinder for other future LDMS and Orbitrap-based instruments,” Arevalo said. “Our mini Orbitrap LDMS instrument has the potential to significantly enhance the way we currently study the geochemistry or astrobiology of a planetary surface.”
    Other UMD-affiliated researchers on the team include geology graduate students Lori Willhite and Ziqin “Grace” Ni, geology postdoctoral associates Anais Bardyn and Soumya Ray, and astronomy visiting associate research engineer Adrian Southard.
    This study was supported by NASA (Award Nos. 80NSSC19K0610, 80NSSC19K0768, 80GSFC21M0002), NASA Goddard Space Flight Center Internal Research and Development (IRAD) funding, and the University of Maryland Faculty Incentive Program.

  • Blocking radio waves and electromagnetic interference with the flip of a switch

    Researchers in Drexel University’s College of Engineering have developed a thin film device, fabricated by spray coating, that can block electromagnetic radiation with the flip of a switch. The breakthrough, enabled by versatile two-dimensional materials called MXenes, could adjust the performance of electronic devices, strengthen wireless connections and secure mobile communications against intrusion.
    The team, led by Yury Gogotsi, PhD, Distinguished University and Bach professor in Drexel’s College of Engineering, previously demonstrated that the two-dimensional layered MXene materials, discovered just over a decade ago, when combined with an electrolyte solution, can be turned into a potent active shield against electromagnetic waves. This latest MXene discovery, reported in Nature Nanotechnology, shows how this shielding can be tuned when a small voltage — less than that produced by an alkaline battery — is applied.
    “Dynamic control of electromagnetic wave jamming has been a significant technological challenge for protecting electronic devices working at gigahertz frequencies and a variety of other communications technologies,” Gogotsi said. “As the number of wireless devices being used in industrial and private sectors has increased by orders of magnitude over the past decade, the urgency of this challenge has grown accordingly. This is why our discovery — which would dynamically mitigate the effect of electromagnetic interference on these devices — could have a broad impact.”
    MXene is a unique material in that it is highly conductive — making it perfectly suited for reflecting microwave radiation that could cause static, feedback or diminish the performance of communications devices — but its internal chemical structure can also be temporarily altered to allow these electromagnetic waves to pass through.
    This means that a thin coating on a device or its electrical components prevents them both from emitting electromagnetic waves and from being penetrated by waves emitted by other electronics. Eliminating the possibility of interference from both internal and external sources can ensure the performance of the device, but some waves must be allowed to exit and enter when it is being used for communication.
    “Without being able to control the ebb and flow of electromagnetic waves within and around a device, it’s a bit like a leaky faucet — you’re not really turning off the water and that constant dripping is no good,” Gogotsi said. “Our shielding ensures the plumbing is tight — so-to-speak — no electromagnetic radiation is leaking out or getting in until we want to use the device.”
    The key to eliciting bidirectional tunability of MXene’s shielding property is using the flow and expulsion of ions to alternately expand and compress the space between the material’s layers, like an accordion, as well as to change the surface chemistry of the MXenes.

    With a small voltage applied to the film, ions enter — or intercalate — between the MXene layers, altering the charge of their surface and inducing electrostatic attraction, which serves to change the layer spacing, conductivity and shielding efficiency of the material. When the ions are deintercalated, as the current is switched off, the MXene layers return to their original state.
    The team tested 10 different MXene-electrolyte combinations, applying each via paint sprayer in a layer about 30 to 100 times thinner than a human hair. The materials consistently demonstrated the dynamic tunability of shielding efficiency in blocking microwave radiation, which is impossible for traditional metals like copper and steel. And the device sustained the performance through more than 500 charge-discharge cycles.
    “These results indicate that the MXene films can convert from electromagnetic interference shielding to quasi-electromagnetic wave transmission by electrochemical oxidation of MXenes,” Gogotsi and his co-authors wrote. “The MXene film can potentially serve as a dynamic EMI shielding switch.”
    For security applications, Gogotsi suggests that the MXene shielding could hide devices from detection by radar or other tracing systems. The team also tested the potential of a one-way shielding switch. This would allow a device to remain undetectable and protected from unauthorized access until it is deployed for use.
    “A one-way switch could open the protection and allow a signal to be sent or communication to be opened in an emergency or at the required moment,” Gogotsi said. “This means it could protect communications equipment from being influenced or tampered with until it is in use. For example, it could encase the device during transportation or storage and then activate only when it is ready to be used.”
    The next step for Gogotsi’s team is to explore additional MXene-electrolyte combinations and mechanisms to fine-tune the shielding to achieve a stronger modulation of electromagnetic wave transmission and dynamic adjustment to block radiation at a variety of bandwidths.

  • COVID calculations spur solution to old problem in computer science

    During the coronavirus pandemic, many of us became amateur mathematicians. How quickly would the number of hospitalized patients rise, and when would herd immunity be achieved? Professional mathematicians were challenged as well, and a researcher at the University of Copenhagen was inspired to solve a 30-year-old problem in computer science. The breakthrough has just been published in the Journal of the ACM (Association for Computing Machinery).
    “Like many others, I was out to calculate how the epidemic would develop. I wanted to investigate certain ideas from theoretical computer science in this context. However, I realized that the lack of a solution to the old problem was a showstopper,” says Joachim Kock, Associate Professor at the Department of Mathematics, University of Copenhagen.
    His solution to the problem can be of use in epidemiology and computer science, and potentially in other fields as well. A common feature of these fields is the presence of systems in which the various components influence one another. For instance, when a healthy person meets a person infected with COVID, the result can be two infected people.
    Smart method invented by German teenager
    To understand the breakthrough, one needs to know that such complex systems can be described mathematically through so-called Petri nets. The method was invented in 1939 by the German Carl Adam Petri (then only 13 years old) for chemistry applications. Just as a healthy person meeting a person infected with COVID can trigger a change, so can two chemical substances mixing and reacting.
    In a Petri net the various components are drawn as circles while events such as a chemical reaction or an infection are drawn as squares. Next, circles and squares are connected by arrows which show the interdependencies in the system.

    Figure: A simple version of a Petri net for COVID infection. The starting point is a non-infected person; “S” denotes “susceptible.” Contact with an infected person (“I”) is an event that leads to two people being infected. Later, another event removes a person from the group of infected; “R” denotes “recovered,” which in this context could mean either cured or dead. Either outcome removes the person from the infected group.
    Computer scientists regarded the problem as unsolvable
    In chemistry, Petri nets are applied for calculating how the concentrations of various chemical substances in a mixture will evolve. This manner of thinking has influenced the use of Petri nets in other fields such as epidemiology: we start out with a high “concentration” of uninfected people, after which the “concentration” of infected people rises. In computer science, the use of Petri nets is somewhat different: the focus is on individuals rather than concentrations, and the development happens in steps rather than continuously.
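    The individual-oriented reading can be made concrete with a small simulation of the net in the figure above: whole people move between the places S, I and R one event at a time. This is an illustrative textbook-style token game, not a construction from Kock's paper.

```python
import random

# Places hold whole individuals ("tokens"); events fire one at a time in discrete steps.
places = {"S": 99, "I": 1, "R": 0}

# Each event consumes tokens from its input places and produces tokens in its outputs.
#   infect:  S + I -> I + I   (a susceptible person meets an infected person)
#   recover: I -> R           (an infected person leaves the infected group)
events = {
    "infect":  {"consume": {"S": 1, "I": 1}, "produce": {"I": 2}},
    "recover": {"consume": {"I": 1},         "produce": {"R": 1}},
}

def enabled(name):
    """An event can fire only if every input place holds enough tokens."""
    return all(places[p] >= n for p, n in events[name]["consume"].items())

def fire(name):
    """Fire one occurrence of an event: remove its input tokens, add its output tokens."""
    for p, n in events[name]["consume"].items():
        places[p] -= n
    for p, n in events[name]["produce"].items():
        places[p] += n

random.seed(0)
while any(enabled(e) for e in events):
    fire(random.choice([e for e in events if enabled(e)]))

print(places)  # the run stops once no one is infected, so "I" ends at 0
```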
    What Joachim Kock had in mind was to apply the more individual-oriented Petri nets from computer science for COVID calculations. This was when he encountered the old problem:
    “Basically, the processes in a Petri net can be described through two separate approaches. The first approach regards a process as a series of events, while the second approach sees the net as a graphical expression of the interdependencies between components and events,” says Joachim Kock, adding:
    “The serial approach is well suited for performing calculations. However, it has a downside since it describes causalities less accurately than the graphical approach. Further, the serial approach tends to fall short when dealing with events that take place simultaneously.”

    “The problem was that nobody had been able to unify the two approaches. The computer scientists had more or less resigned themselves to regarding the problem as unsolvable. This was because no one had realized that you need to go all the way back and revise the very definition of a Petri net,” says Joachim Kock.
    Small modification with large impact
    The Danish mathematician realized that a minor modification to the definition of a Petri net would enable a solution to the problem:
    “By allowing parallel arrows rather than just counting them and writing a number, additional information is made available. Things work out and the two approaches can be unified.”
    The exact mathematical reason why this additional information matters is complex, but can be illustrated by an analogy:
    “Assigning numbers to objects has helped humanity greatly. For instance, it is highly practical that I can arrange the right number of chairs in advance for a dinner party instead of having to experiment with different combinations of chairs and guests after they have arrived. However, the number of chairs and guests does not reveal who will be sitting where. Some information is lost when we consider numbers instead of the real objects.”
    Similarly, information is lost when the individual arrows of the Petri net are replaced by a number.
    “It takes a bit more effort to treat the parallel arrows individually, but one is amply rewarded as it becomes possible to combine the two approaches so that the advantages of both can be obtained simultaneously.”
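    By way of illustration only (a toy encoding, not the paper's definition), the difference between counting parallel arrows and keeping them as individual objects can be seen in how the arcs of the infection net are stored:

```python
from collections import Counter

# Counting-based arcs (the classical picture): only the multiplicities survive.
arcs_counted = {("S", "infect"): 1, ("I", "infect"): 1, ("infect", "I"): 2}

# Individual arrows (the revised view): each arc is its own object, so the two
# parallel arrows from "infect" to "I" stay distinguishable (like knowing which
# guest sits on which chair rather than only how many chairs are needed).
arcs_individual = [
    {"id": "a1", "src": "S",      "dst": "infect"},
    {"id": "a2", "src": "I",      "dst": "infect"},
    {"id": "a3", "src": "infect", "dst": "I"},
    {"id": "a4", "src": "infect", "dst": "I"},  # parallel to a3, but not the same arrow
]

# The counted form can always be recovered from the individual arrows,
# but not the other way around: collapsing to numbers loses information.
assert Counter((a["src"], a["dst"]) for a in arcs_individual) == Counter(arcs_counted)
```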
    The circle to COVID has been closed
    The solution helps our mathematical understanding of how to describe complex systems with many interdependencies, but will not have much practical effect on the daily work of computer scientists using Petri nets, according to Joachim Kock:
    “This is because the necessary modifications are mostly backward-compatible and can be applied without revising the entire theory of Petri nets.”
    “Somewhat surprisingly, some epidemiologists have started using the revised Petri nets. So, one might say the circle has been closed!”
    Joachim Kock does see a further point to the story:
    “I wasn’t out to find a solution to the old problem in computer science at all. I just wanted to do COVID calculations. This was a bit like looking for your pen but realizing that you must find your glasses first. So, I would like to take the opportunity to advocate the importance of research which does not have a predefined goal. Sometimes research driven by curiosity will lead to breakthroughs.”

  • Clinical trial results indicate low rate of adverse events associated with implanted brain computer interface

    For people with paralysis caused by neurologic injury or disease — such as ALS (also known as Lou Gehrig’s disease), stroke, or spinal cord injury — brain-computer interfaces (BCIs) have the potential to restore communication, mobility, and independence by transmitting information directly from the brain to a computer or other assistive technology.
    Although implanted brain sensors, the core component of many brain-computer interfaces, have been used in neuroscientific studies with animals for decades and have been approved for short-term use (…