More stories

  • What violin synchronization can teach us about better networking in complex times

    Human networking spans every field, ranging from small groups of people to large, coordinated systems working together toward a goal, be it traffic management in an urban area, economic systems or epidemic control. A new study published in Nature Communications uses a model of violin synchronization in a network of violin players to show that there are ways to drown out distractions and miscommunication, strategies that could serve as a model for human networks in society.
    Titled “The Synchronization of Complex Human Networks,” the study was conceived by Elad Shniderman, a graduate student in the Department of Music in the College of Arts and Sciences at Stony Brook University, and scientist Moti Fridman, PhD, at the Institute of Nanotechnology and Advanced Materials at Bar-Ilan University. Shniderman co-authored the paper with Daniel Weymouth, PhD, Associate Professor of Composition and Theory in the Department of Music, and scientists at Bar-Ilan and the Weizmann Institute of Science in Israel. The collaboration was initiated at the Fetter Museum of Nanoscience and Art.
    The research team devised an experiment involving 16 violinists with electric violins connected to a computer system. Each of the violinists had sound-canceling headphones, hearing only the sound received from the computer. All violinists played a simple repeating musical phrase and tried to synchronize with other violinists according to what they heard in their headphones.
    According to Shniderman, Weymouth and their fellow authors: “Research on network links or coupling has focused predominantly on all-to-all coupling, whereas current social networks and human interactions are often based on complex coupling configurations.
    This study of synchronization between violin players in complex networks with full control over network connectivity, coupling strength and delay revealed that players can tune their playing period and delete connections by ignoring frustrating signals to find a stable solution. These controlled, new degrees of freedom enable new strategies and yield better solutions potentially applicable to other human networking models.”
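    The dynamics here are the territory of coupled-oscillator models. As a rough illustration only, and not the study’s actual model, the sketch below simulates Kuramoto-style phase oscillators with delayed, network-structured coupling; the network, coupling strength, delay and frequencies are all assumed values chosen simply to show synchronization emerging.

      import numpy as np

      # Illustrative delay-coupled phase oscillators (Kuramoto-style).
      # Each "player" nudges its phase toward the delayed phases of the
      # players it is connected to, mimicking hearing others through a lag.
      def simulate(A, omega, K=1.0, delay=20, dt=0.01, steps=5000, seed=0):
          rng = np.random.default_rng(seed)
          n = len(omega)
          theta = np.zeros((steps, n))
          theta[0] = rng.uniform(0, 2 * np.pi, n)
          for t in range(1, steps):
              lagged = theta[max(t - delay, 0)]
              pull = (A * np.sin(lagged[None, :] - theta[t - 1][:, None])).sum(1)
              theta[t] = theta[t - 1] + dt * (omega + K * pull)
          return theta

      n = 16                                  # 16 "violinists"
      A = np.ones((n, n)) - np.eye(n)         # all-to-all connectivity
      omega = np.random.default_rng(1).normal(2 * np.pi, 0.1, n)  # ~1 Hz each
      theta = simulate(A, omega)
      order = abs(np.exp(1j * theta[-1]).mean())  # 1.0 means fully in sync
      print(f"order parameter: {order:.2f}")

    Pruning rows of A, or ignoring selected entries, is the code-level analogue of players “deleting connections” to escape frustrated configurations.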
    “Society in its complexity is recognizing how human networks affect a broad range of crucial issues, including economic inequality, stock market crashes, political polarization and the spread of disease,” says Weymouth. “We believe there are a lot of important, real-world applications to the results of this experiment and ongoing work.”

    Story Source:
    Materials provided by Stony Brook University. Note: Content may be edited for style and length.

  • AI-enhanced precision medicine identifies novel autism subtype

    A novel precision medicine approach enhanced by artificial intelligence (AI) has laid the groundwork for what could be the first biomedical screening and intervention tool for a subtype of autism, reports a new study from Northwestern University, Ben-Gurion University, Harvard University and the Massachusetts Institute of Technology.
    The approach is believed to be the first of its kind in precision medicine.
    “Previously, autism subtypes have been defined based on symptoms only — autistic disorder, Asperger syndrome, etc. — and they can be hard to differentiate as it is really a spectrum of symptoms,” said study co-first author Dr. Yuan Luo, associate professor of preventive medicine (health and biomedical informatics) at the Northwestern University Feinberg School of Medicine. “The autism subtype characterized by the abnormal lipid levels identified in this study is the first multidimensional evidence-based subtype that has distinct molecular features and an underlying cause.”
    Luo is also chief AI officer at the Northwestern University Clinical and Translational Sciences Institute and the Institute of Augmented Intelligence in Medicine. He also is a member of the McCormick School of Engineering.
    The findings were published August 10 in Nature Medicine.
    Autism affects an estimated 1 in 54 children in the United States, according to the Centers for Disease Control and Prevention. Boys are four times more likely than girls to be diagnosed. Most children are diagnosed after age 4, although autism can be reliably diagnosed based on symptoms as early as age 2.

    The subtype of the disorder studied by Luo and colleagues is known as dyslipidemia-associated autism, which represents 6.55% of all diagnosed autism spectrum disorders in the U.S.
    “Our study is the first precision medicine approach to overlay an array of research and health care data — including genetic mutation data, sexually different gene expression patterns, animal model data, electronic health record data and health insurance claims data — and then use an AI-enhanced precision medicine approach to attempt to define one of the world’s most complex inheritable disorders,” said Luo.
    The idea is similar to that of today’s digital maps. In order to get a true representation of the real world, the team overlaid different layers of information on top of one another.
    “This discovery was like finding a needle in a haystack, as there are thousands of variants in hundreds of genes thought to underlie autism, each of which is mutated in less than 1% of families with the disorder. We built a complex map, and then needed to develop a magnifier to zoom in,” said Luo.
    To build that magnifier, the research team identified clusters of gene exons that function together during brain development. They then applied a state-of-the-art AI graph-clustering technique to gene expression data. Exons are the parts of genes that contain the information coding for a protein. Proteins do most of the work in our cells and organs, or in this case, the brain.
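    The paper’s pipeline is not reproduced here, but a minimal sketch of graph clustering on expression data conveys the idea: build a co-expression graph over exons, then let spectral clustering find groups that co-vary. All data, thresholds and parameters below are illustrative assumptions.

      import numpy as np
      from sklearn.cluster import SpectralClustering

      # Synthetic stand-in for an exon-expression matrix:
      # rows = exons, columns = brain-development samples.
      rng = np.random.default_rng(0)
      expr = rng.normal(size=(200, 40))

      # Co-expression affinity: correlation between exon profiles,
      # thresholded so only strongly co-varying exons share an edge.
      corr = np.corrcoef(expr)
      affinity = np.where(np.abs(corr) > 0.2, np.abs(corr), 0.0)
      np.fill_diagonal(affinity, 0.0)

      # Spectral clustering on the affinity graph groups exons that
      # tend to "function together" across samples.
      labels = SpectralClustering(
          n_clusters=5, affinity="precomputed", random_state=0
      ).fit_predict(affinity)
      print(np.bincount(labels))  # cluster sizes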
    “The map and magnifier approach showcases a generalizable way of using multiple data modalities for subtyping autism and it holds the potential for many other genetically complex diseases to inform targeted clinical trials,” said Luo.
    Using the tool, the research team also identified a strong association of parental dyslipidemia with autism spectrum disorder in their children. They further saw altered blood lipid profiles in infants later diagnosed with autism spectrum disorder. These findings have led the team to pursue subsequent studies, including clinical trials that aim to promote early screening and early intervention of autism.
    “Today, autism is diagnosed based only on symptoms, and the reality is when a physician identifies it, it’s often when early and critical brain developmental windows have passed without appropriate intervention,” said Luo. “This discovery could shift that paradigm.”

    Story Source:
    Materials provided by Northwestern University. Original written by Roger Anderson. Note: Content may be edited for style and length.

  • Machine learning can predict market behavior

    Machine learning can assess the effectiveness of mathematical tools used to predict the movements of financial markets, according to new Cornell research based on the largest dataset ever used in this area.
    The researchers’ model could also predict future market movements, an extraordinarily difficult task because of markets’ massive amounts of information and high volatility.
    “What we were trying to do is bring the power of machine learning techniques to not only evaluate how well our current methods and models work, but also to help us extend these in a way that we never could do without machine learning,” said Maureen O’Hara, the Robert W. Purcell Professor of Management at the SC Johnson College of Business.
    O’Hara is a co-author of “Microstructure in the Machine Age,” published July 7 in The Review of Financial Studies; her co-authors include David Easley and Marcos Lopez de Prado.
    “Trying to estimate these sorts of things using standard techniques gets very tricky, because the databases are so big. The beauty of machine learning is that it’s a different way to analyze the data,” O’Hara said. “The key thing we show in this paper is that in some cases, these microstructure features that attach to one contract are so powerful, they can predict the movements of other contracts. So we can pick up the patterns of how markets affect other markets, which is very difficult to do using standard tools.”
    Markets generate vast amounts of data, and billions of dollars are at stake in mining that data for patterns to shed light on future market behavior. Companies on Wall Street and elsewhere employ various algorithms, examining different variables and factors, to find such patterns and predict the future.

    In the study, the researchers used what’s known as a random forest machine learning algorithm to better understand the effectiveness of some of these models. They assessed the tools using a dataset of 87 futures contracts — agreements to buy or sell assets in the future at predetermined prices.
    “Our sample is basically all active futures contracts around the world for five years, and we use every single trade — tens of millions of them — in our analysis,” O’Hara said. “What we did is use machine learning to try to understand how well microstructure tools developed for less complex market settings work to predict the future price process both within a contract and then collectively across contracts. We find that some of the variables work very, very well — and some of them not so great.”
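    In spirit, the exercise resembles the sketch below, which is a toy reconstruction on synthetic data rather than the paper’s dataset; the feature names are examples of classic microstructure variables, not the authors’ exact inputs.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      # Toy example: how well do microstructure-style features predict
      # the next-period price move? Data are synthetic stand-ins.
      rng = np.random.default_rng(0)
      n = 10_000
      X = np.column_stack([
          rng.normal(size=n),  # e.g., order-flow imbalance
          rng.normal(size=n),  # e.g., bid-ask spread
          rng.normal(size=n),  # e.g., price impact (Kyle's lambda)
      ])
      y = 0.6 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.5, size=n)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

      # Out-of-sample fit, and which features carried the signal.
      print("R^2:", round(rf.score(X_te, y_te), 3))
      print("importances:", rf.feature_importances_.round(3))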
    Machine learning has long been used in finance, but typically as a so-called “black box” — in which an artificial intelligence algorithm uses reams of data to predict future patterns but without revealing how it makes its determinations. This method can be effective in the short term, O’Hara said, but sheds little light on what actually causes market patterns.
    “Our use for machine learning is: I have a theory about what moves markets, so how can I test it?” she said. “How can I really understand whether my theories are any good? And how can I use what I learned from this machine learning approach to help me build better models and understand things that I can’t model because it’s too complex?”
    Huge amounts of historical market data are available — every trade has been recorded since the 1980s — and vast volumes of information are generated every day. Increased computing power and greater availability of data have made it possible to perform more fine-grained and comprehensive analyses, but these datasets, and the computing power needed to analyze them, can be prohibitively expensive for scholars.
    In this research, finance industry practitioners partnered with the academic researchers to provide the data and the computers for the study as well as expertise in machine learning algorithms used in practice.
    “This partnership brings benefits to both,” said O’Hara, adding that the paper is one in a line of research she, Easley and Lopez de Prado have completed over the last decade. “It allows us to do research in ways generally unavailable to academic researchers.”

    Story Source:
    Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.

  • Brain-NET, a deep learning methodology, accurately predicts surgeon certification scores based on neuroimaging data

    In order to earn certification in general surgery, residents in the United States need to demonstrate proficiency in the Fundamentals of Laparoscopic Surgery (FLS) program, a test that requires manipulation of laparoscopic tools within a physical training unit. Central to that assessment is a quantitative score, known as the FLS score, which is manually calculated using a formula that is time-consuming and labor-intensive.
    By combining brain optical imaging and a deep learning framework they call “Brain-NET,” a multidisciplinary team of engineers at Rensselaer Polytechnic Institute, in close collaboration with the Department of Surgery at the Jacobs School of Medicine & Biomedical Sciences at the University at Buffalo, has developed a new methodology that has the potential to transform training and the certification process for surgeons.
    In a new article in IEEE Transactions on Biomedical Engineering, the researchers demonstrated how Brain-NET can accurately predict a person’s level of expertise in terms of their surgical motor skills, based solely on neuroimaging data. These results support the future adoption of a new, more efficient method of surgeon certification that the team has developed.
    “This is an area of expertise that is really unique to RPI,” said Xavier Intes, a professor of biomedical engineering at Rensselaer, who led this research.
    According to Intes, Brain-NET not only performed more quickly than the traditional prediction model, but also more accurately, especially as it analyzed larger datasets.
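    The architecture of Brain-NET itself is detailed in the paper rather than here; purely to make the task concrete, the sketch below regresses a skill score from neuroimaging-style channel time series. The layer sizes, input format and every parameter are assumptions for illustration, not the paper’s design.

      import torch
      import torch.nn as nn

      # Illustrative regressor: predict an FLS-style score from optical
      # neuroimaging channels over time. Not the paper's architecture.
      class ScoreRegressor(nn.Module):
          def __init__(self, n_channels=16):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
                  nn.ReLU(),
                  nn.AdaptiveAvgPool1d(8),
                  nn.Flatten(),
                  nn.Linear(32 * 8, 64),
                  nn.ReLU(),
                  nn.Linear(64, 1),  # single predicted score
              )

          def forward(self, x):  # x: (batch, channels, timesteps)
              return self.net(x).squeeze(-1)

      model = ScoreRegressor()
      fake_batch = torch.randn(4, 16, 256)  # four synthetic recordings
      print(model(fake_batch).shape)        # torch.Size([4])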
    Brain-NET builds upon the research team’s earlier work in this area. Researchers led by Suvranu De, the head of the Rensselaer Department of Mechanical, Aerospace, and Nuclear Engineering, previously showed that they could accurately assess a doctor’s surgical motor skills by analyzing brain activation signals using optical imaging.
    In addition to its potential to streamline the surgeon certification process, the development of Brain-NET, combined with that optical imaging analysis, also enables real-time score feedback for surgeons who are training.
    “If you can get the measurement of the predicted score, you can give feedback right away,” Intes said. “What this opens the door to is to engage in remediation or training.”

    Story Source:
    Materials provided by Rensselaer Polytechnic Institute. Original written by Torie Wells. Note: Content may be edited for style and length.

  • Ultraviolet communication for secure networks

    Of ever-increasing concern for operating a tactical communications network is the possibility that a sophisticated adversary may detect friendly transmissions. Army researchers have developed an analysis framework that enables the rigorous study of the detectability of ultraviolet communication systems, providing the insights needed to inform the requirements of future, more secure Army networks.
    In particular, ultraviolet communication has unique propagation characteristics that not only allow for a novel non-line-of-sight optical link, but also imply that the transmissions may be harder for an adversary to detect.
    Building on experimentally validated channel modeling, channel simulations, and detection and estimation theory, the framework enables the evaluation of tradeoffs associated with different design choices and modes of operation of ultraviolet communication systems, said Dr. Robert Drost of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory.
    “While many techniques have been proposed to decrease the detectability of conventional radio-frequency, or RF, communications, the increased atmospheric absorption of deep-ultraviolet wavelengths implies that ultraviolet communication, or UVC, has a natural low-probability-of-detection, or LPD, characteristic,” Drost said.
    “In order to fully take advantage of this characteristic, a rigorous understanding of the LPD properties of UVC is needed.”
    In particular, Drost said, such understanding is essential for optimizing the design and operation of UVC systems and networks and for predicting the quality of the LPD property in a given scenario, such as using UVC to securely network a command post that has an estimate of the direction and distance to the adversary.

    Without such a predictive capability, he said, users would lack the guidance needed to know the extent and limit of their detectability, and this lack of awareness would substantially limit the usefulness of the LPD capability.
    The researchers, including Drs. Mike Weisman, Fikadu Dagefu, Terrence Moore and Drost from CCDC ARL and Dr. Hakan Arslan, an Oak Ridge Associated Universities postdoctoral fellow at the lab, demonstrated this by applying their framework to produce a number of key insights regarding the LPD characteristics of UVC, including:
    • LPD capability is relatively insensitive to a number of system and channel properties, which is important for the robustness of the LPD property.
    • Adversarial line-of-sight detection of a non-line-of-sight communication link is not as significant of a concern as one might fear.
    • Perhaps counter to intuition, steering of a UVC transmitter does not appear to be an effective detection-mitigation strategy in many cases.
    • Line-of-sight UVC link provides non-line-of-sight standoff distances that are commensurate with the communication range.
    Prior modeling and experimental research has demonstrated that UVC signals attenuate dramatically at long distance, leading to the hypothesis that UVC has a fundamental LPD property, Drost said. However, there has been little effort on rigorously and precisely quantifying this property in terms of the detectability of a communication signal.
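    The attenuation argument can be made concrete with a toy link-budget calculation. The path-loss model and every constant below are assumptions for illustration; the study uses experimentally validated channel models rather than this simplification.

      import numpy as np

      # Toy deep-UV link budget: received power decays with exponential
      # atmospheric extinction on top of inverse-square spreading,
      #   P_r(r) = P_t * exp(-k_e * r) / r**2.
      def received_power(p_t, k_e, r):
          return p_t * np.exp(-k_e * r) / r**2

      p_t = 1.0             # transmit power (arbitrary units)
      k_e = 1.5e-3          # assumed deep-UV extinction coefficient, 1/m
      sensitivity = 1e-10   # assumed adversary detector threshold

      r = np.linspace(10, 5000, 5000)   # ranges from 10 m to 5 km
      p_r = received_power(p_t, k_e, r)
      max_detect = r[p_r > sensitivity].max()
      print(f"signal falls below the detector threshold near {max_detect:.0f} m")

    Because the exponential term dominates at long range, detectability collapses much faster than it would under spreading loss alone, which is the intuition behind UVC’s natural LPD characteristic.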
    “Our work provides a framework enabling the study of the fundamental limits of detectability for an ultraviolet communication system meeting desired communication performance requirements,” Drost said.

    Although this research is focused on longer-term applications, he said, it is addressing the Army Modernization Priority on Networks by developing the fundamental understanding of a novel communications capability, with a goal of providing the Soldier with network connectivity despite challenging environments that include adversarial activity.
    “The future communications and networking challenges that the Army faces are immense, and it is essential that we explore all possible means to overcoming those challenges,” Drost said. “Our research is ensuring that the community has the fundamental understanding of the potential for and limitations of using ultraviolet wavelengths for communications, and I am confident that this understanding will inform the development of future Army networking capabilities. Conducting fundamental research that impacts decision making and Army technologies is why we work for the Army, and it is very satisfying to know that our work will ultimately support the warfighter in his or her mission.”
    The researchers are currently continuing to develop refined understanding of how best to design and operate ultraviolet communications, and an important next step is the application of this framework to understand the detectability of a network of ultraviolet communications systems.
    Another key effort involves the experimental characterization, exploration and demonstration of this technology in a practical network using ARL’s Common Sensor Radio, a sophisticated mesh-networking radio designed to provide robust and energy-efficient networking.
    This research supports the laboratory’s FREEDOM (Foundational Research for Electronic Warfare in Multi-Domain Operations) Essential Research Program goal of studying the integration of low-signature communications technologies with advanced camouflage and decoy techniques.
    According to Drost, the work is also an on-ramp to studying how ultraviolet communications and other communications modalities, including conventional radio-frequency communications, can operate together in a seamless and autonomous extremely heterogeneous network, which the researchers believe is needed in order to fully realize the benefits of individual novel communication technologies.
    As they make continued progress on these fundamental research questions, the researchers will continue to work closely with their transition partner at the CCDC C5ISR (Command, Control, Computers, Communications, Cyber, Intelligence, Surveillance and Reconnaissance) Center to push ultraviolet communications toward nearer-term transition to the warfighter.

  • Digital content on track to equal half 'Earth's mass' by 2245

    As we use resources, such as coal, oil, natural gas, copper, silicon and aluminum, to power massive computer farms and process digital information, our technological progress is redistributing Earth’s matter from physical atoms to digital information — the fifth state of matter, alongside liquid, solid, gas and plasma.
    Eventually, we will reach a point of full saturation, a period in our evolution in which digital bits will outnumber atoms on Earth, a world “mostly computer simulated and dominated by digital bits and computer code,” according to an article published in AIP Advances, by AIP Publishing.
    It is just a matter of time.
    “We are literally changing the planet bit by bit, and it is an invisible crisis,” author Melvin Vopson said.
    Vopson examines the factors driving this digital evolution. He said the impending limit on the number of bits, the energy to produce them, and the distribution of physical and digital mass will overwhelm the planet soon.
    For example, using current data storage densities, the number of bits produced per year and the size of a bit compared with the size of an atom, and assuming 50% annual growth, the number of bits would equal the number of atoms on Earth in approximately 150 years.
    It would be approximately 130 years until the power needed to sustain digital information creation would equal all the power currently produced on planet Earth, and by 2245, half of Earth’s mass would be converted to digital information mass.
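    The 150-year figure is a compound-growth calculation and is easy to reproduce. The constants below are round-number assumptions consistent with the article, not values taken from the paper:

      import math

      atoms_on_earth = 1.3e50  # commonly cited estimate of Earth's atom count
      bits_per_year = 5e23     # assumed current annual bit production
      growth = 1.50            # 50% annual growth

      # Solve bits_per_year * growth**t = atoms_on_earth for t.
      t = math.log(atoms_on_earth / bits_per_year) / math.log(growth)
      print(f"bits overtake atoms in about {t:.0f} years")  # ~150 years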
    “The growth of digital information seems truly unstoppable,” Vopson said. “According to IBM and other big data research sources, 90% of the world’s data today has been created in the last 10 years alone. In some ways, the current COVID-19 pandemic has accelerated this process as more digital content is used and produced than ever before.”
    Vopson draws on the mass-energy equivalence in Einstein’s theory of special relativity; the work of Rolf Landauer, who applied the laws of thermodynamics to information; and the work of Claude Shannon, the inventor of the digital bit.
    In 2019, Vopson formulated a principle that postulates that information moves between states of mass and energy just like other matter.
    “The mass-energy-information equivalence principle builds on these concepts and opens up a huge range of new physics, especially in cosmology,” he said. “When one brings information content into existing physical theories, it is almost like an extra dimension to everything in physics.”

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.

  • Fear of stricter regulations spurs gun sales after mass shootings, new analysis suggests

    It’s commonly known that gun sales go up after a mass shooting, but two competing hypotheses have been put forth to explain why that’s the case: is it because people fear more violence and want to protect themselves, or is it because mass shootings trigger discussions about tighter gun regulations, which sends people out to stock up? In a new study appearing August 11 in the journal Patterns, investigators used data science to study this phenomenon. By working with spatio-temporal data from all the states in the US, they determined that the increase in firearm purchases after mass shootings is driven by a concern about regulations rather than a perceived need for protection.
    “It’s been well documented that mass shootings are linked to increases in firearm purchases, but the motivation behind this connection has been understudied,” says first author Maurizio Porfiri, Institute Professor at the New York University Tandon School of Engineering, who is currently on research sabbatical at the Technical University of Cartagena in Spain. “Previous research on this topic has been done mostly from the perspective of social science. We instead used a data-science approach.”
    Porfiri and his colleagues employed a statistical method called transfer entropy analysis, which is used to study large, complex systems like financial markets and climate-change models. With this approach, two variables are defined, and then computational techniques are used to determine if the future of one of them can be predicted by the past of the other. “This is a step above studying correlation,” Porfiri explains. “It’s actually looking at causation. Unique to this study is the analysis of spatio-temporal data, examining the behavior of all the US states.”
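    As a concrete sketch of the idea (not the estimator used in the study), transfer entropy asks whether knowing the past of series y reduces uncertainty about the next value of series x beyond what x’s own past provides. The minimal histogram-based version below uses synthetic data in which y drives x:

      import numpy as np

      def transfer_entropy(x, y, bins=2):
          """Transfer entropy from series y to series x (in bits),
          estimated from joint histograms of the discretized series."""
          xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
          yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
          x_next, x_now, y_now = xd[1:], xd[:-1], yd[:-1]

          te = 0.0
          for xn in range(bins):
              for xc in range(bins):
                  for yc in range(bins):
                      mask = (x_now == xc) & (y_now == yc)
                      p_joint = np.mean((x_next == xn) & mask)
                      if p_joint == 0:
                          continue
                      p_given_xy = p_joint / np.mean(mask)
                      p_given_x = (np.mean((x_next == xn) & (x_now == xc))
                                   / np.mean(x_now == xc))
                      te += p_joint * np.log2(p_given_xy / p_given_x)
          return te

      rng = np.random.default_rng(0)
      y = rng.normal(size=1000)
      x = np.roll(y, 1) + 0.5 * rng.normal(size=1000)  # x driven by past y
      print(transfer_entropy(x, y), transfer_entropy(y, x))  # first >> second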
    The data came from several sources: FBI background checks, which enabled the approximation of monthly gun sales by state; a Washington Post database on mass shootings; and news coverage about mass shootings from five major newspapers around the country. The news stories were put into two categories: those that mentioned gun regulations and those that didn’t. In all, the study used data related to 87 mass shootings that occurred in the United States between 1999 and 2017.
    The researchers also rated individual states by how restrictive their gun laws are. “We expected to find that gun sales increased in states that have more permissive gun laws, but it was less expected in states with restrictive laws. We saw it in both,” Porfiri says. “Also, when we looked at particular geographic areas, we didn’t find any evidence that gun sales increased when mass shootings happened nearby.”
    He adds that one limitation of the data is that news coverage may not fully capture public sentiment at a given time. In addition, although the study was successful in determining causal links among states, more work is needed to study the nature of these relationships, especially when one state has laws that are much more restrictive than another’s.
    Porfiri usually uses computational systems to study topics related to engineering, including ionic polymer metal composites and underwater robots. His reason for studying mass shootings is personal: he received his PhD in 2006 from Virginia Tech, which the following year was the site of what was then the deadliest mass shooting in the country. One member of his PhD committee was killed in the shooting, and he knew many others who were deeply affected.
    For him, this project is part of a larger effort to study gun violence. “Mass shootings are a small part of death from guns,” Porfiri says. “Suicide and homicide are much more common. But mass shootings are an important catalyst for a larger discussion. I plan to look at the wider role of guns in the future.”
    This study is part of the collaborative activities carried out under the programs of the region of Murcia (Spain): “Groups of Excellence of the region of Murcia, the Fundación Séneca, Science and Technology Agency” project 19884/GERM/15 and “Call for Fellowships for Guest Researcher Stays at Universities and OPIS” project 21144/IV/19. The researcher was also supported by the New York University Research Challenge Fund Program and Mitsui-USA foundation.

    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Why does COVID-19 impact only some organs, not others?

    In severe cases of COVID-19, damage can spread beyond the lungs and into other organs, such as the heart, liver, kidney and parts of the neurological system. Beyond these specific sets of organs, however, the virus seems to have little impact.
    Ernesto Estrada, from the University of Zaragoza and Agencia Aragonesa para la Investigación Foundation in Spain, aimed to uncover an explanation as to how it is possible for these damages to propagate selectively rather than affecting the entire body. He discusses his findings in the journal Chaos, from AIP Publishing.
    In order to enter human cells, the coronavirus relies on interactions with an abundant protein called angiotensin-converting enzyme 2.
    “This receptor is ubiquitous in most human organs, such that if the virus is circulating in the body, it can also enter into other organs and affect them,” Estrada said. “However, the virus affects some organs selectively and not all, as expected from these potential mechanisms.”
    Once inside a human cell, the virus’s proteins interact with those in the body, allowing its effects to take hold. COVID-19 damages only a subset of organs, signaling to Estrada that there must be a different pathway for its transmission. To uncover a plausible route, he considered the displacements of proteins prevalent in the lungs and how they interact with proteins in other organs.
    “For two proteins to find each other and form an interaction complex, they need to move inside the cell in a subdiffusive way,” Estrada said.
    He described this subdiffusive motion as resembling a drunkard walking on a crowded street. The crowd presents obstacles to the drunkard, stunting displacement and making it difficult to reach the destination.
    Similarly, proteins in a cell face several crowded obstacles they must overcome in order to interact. Adding to the complexity of the process, some proteins exist within the same cell or organ, but others do not.
    Taking these into account, Estrada developed a mathematical model that allowed him to find a group of 59 proteins within the lungs that act as the primary activators affecting other human organs. A chain of interactions, beginning with this set, triggers changes in proteins down the line, ultimately impacting their health.
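    To make “fractional diffusion on a network” concrete, the sketch below propagates a perturbation on a toy random graph using a fractional power of the graph Laplacian, a standard way to encode subdiffusive spreading. The graph, gamma and the time horizon are illustrative assumptions; the study works on the actual human protein-interaction network.

      import numpy as np
      import networkx as nx
      from scipy.linalg import expm

      # Toy network standing in for the protein-protein interaction graph.
      G = nx.erdos_renyi_graph(50, 0.1, seed=1)
      L = nx.laplacian_matrix(G).toarray().astype(float)

      # Fractional Laplacian L**gamma via eigendecomposition; gamma < 1
      # produces subdiffusion (the "drunkard on a crowded street").
      gamma = 0.5
      w, V = np.linalg.eigh(L)
      L_frac = V @ np.diag(np.clip(w, 0, None) ** gamma) @ V.T

      # Put all the initial "perturbation" on one protein and evolve:
      # x(t) = exp(-t * L**gamma) @ x(0).
      x0 = np.zeros(G.number_of_nodes())
      x0[0] = 1.0
      x_t = expm(-2.0 * L_frac) @ x0
      print("most-affected nodes:", np.argsort(x_t)[::-1][:5])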
    “Targeting some of these proteins in the lungs with existing drugs will prevent the perturbation of the proteins expressed in organs other than the lungs, avoiding the multiorgan failure that, in many cases, leads to the death of the patient,” Estrada said.
    How the affected proteins travel between organs remains an open question that Estrada plans to address in future studies.
    The article, “Fractional diffusion on the human proteome as an alternative to the multi-organ damage of SARS-CoV-2,” is authored by Ernesto Estrada. The article will appear in Chaos on Aug. 11, 2020.

    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.