More stories

  •

    'Edge of chaos' opens pathway to artificial intelligence discoveries

    Scientists at the University of Sydney and Japan’s National Institute for Materials Science (NIMS) have discovered that an artificial network of nanowires can be tuned to respond in a brain-like way when electrically stimulated.
    The international team, led by Joel Hochstetter with Professor Zdenka Kuncic and Professor Tomonobu Nakayama, found that by keeping the network of nanowires in a brain-like state “at the edge of chaos,” it performed tasks at an optimal level.
    This, they say, suggests the underlying nature of neural intelligence is physical, and their discovery opens an exciting avenue for the development of artificial intelligence.
    The study is published today in Nature Communications.
    “We used wires 10 micrometres long and no thicker than 500 nanometres arranged randomly on a two-dimensional plane,” said lead author Joel Hochstetter, a doctoral candidate in the University of Sydney Nano Institute and School of Physics.
    “Where the wires overlap, they form an electrochemical junction, like the synapses between neurons,” he said. “We found that electrical signals put through this network automatically find the best route for transmitting information. And this architecture allows the network to ‘remember’ previous pathways through the system.”
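    The pathway-finding and "memory" behavior described above can be pictured with a toy model (not the authors' simulation): treat each junction as an edge whose conductance grows whenever a signal crosses it, so repeated stimulation entrenches a route. The graph representation, the Dijkstra routing, and the reinforcement rule below are all illustrative assumptions.

```python
import heapq

def best_path(graph, conductance, src, dst):
    """Dijkstra over junction resistances (resistance = 1/conductance):
    a signal takes the currently easiest route through the network."""
    dist, prev = {src: 0.0}, {}
    pq, seen = [(0.0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v in graph[u]:
            nd = d + 1.0 / conductance[frozenset((u, v))]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def stimulate(graph, conductance, src, dst, gain=2.0):
    """One electrical pulse: find the easiest route, then strengthen
    every junction on it (a memristor-like update), so the network
    'remembers' the pathway for future signals."""
    path = best_path(graph, conductance, src, dst)
    for u, v in zip(path, path[1:]):
        conductance[frozenset((u, v))] *= gain
    return path
```

    Repeated stimulation between the same two electrodes keeps reinforcing the same junctions, which is the "memory" effect the researchers describe.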

  •

    RAMBO speeds searches on huge DNA databases

    Rice University computer scientists are sending RAMBO to rescue genomic researchers who sometimes wait days or weeks for search results from enormous DNA databases.
    DNA sequencing has become so popular that genomic datasets are doubling in size every two years, and the tools to search the data haven’t kept pace. Researchers who compare DNA across genomes or study the evolution of organisms like the virus that causes COVID-19 often wait weeks for software to index large, “metagenomic” databases, which grow every month and are now measured in petabytes.
    RAMBO, which is short for “repeated and merged Bloom filter,” is a new method that can cut indexing times for such databases from weeks to hours and search times from hours to seconds. Rice University computer scientists presented RAMBO last week at the Association for Computing Machinery data science conference SIGMOD 2021.
    “Querying millions of DNA sequences against a large database with traditional approaches can take several hours on a large compute cluster and can take several weeks on a single server,” said RAMBO co-creator Todd Treangen, a Rice computer scientist whose lab specializes in metagenomics. “Reducing database indexing times, in addition to query times, is crucially important as genomic databases continue to grow at an incredible pace.”
    To solve the problem, Treangen teamed with Rice computer scientist Anshumali Shrivastava, who specializes in creating algorithms that make big data and machine learning faster and more scalable, and graduate students Gaurav Gupta and Minghao Yan, co-lead authors of the peer-reviewed conference paper on RAMBO.
    RAMBO uses a data structure that has a significantly faster query time than state-of-the-art genome indexing methods, along with other advantages such as ease of parallelization, a zero false-negative rate and a low false-positive rate.
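    As a rough illustration of the idea behind the name (not the published implementation): a Bloom filter answers set-membership queries probabilistically, and a RAMBO-style index hashes each dataset into buckets of merged Bloom filters across several independent repetitions, then intersects the bucket hits to recover which datasets contain a query sequence. Class names and parameters below are invented for the sketch.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-slot array."""
    def __init__(self, m=1 << 16, k=3):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _hashes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h] = 1

    def __contains__(self, item):
        return all(self.bits[h] for h in self._hashes(item))

class RepeatedMergedBloom:
    """Toy RAMBO-style index: each dataset is hashed into one of B
    buckets, repeated R times with independent partitions; a bucket
    is a single Bloom filter merging all datasets assigned to it.
    Querying intersects bucket hits across repetitions to narrow
    down which datasets contain a k-mer (no false negatives)."""
    def __init__(self, B=4, R=3):
        self.B, self.R = B, R
        self.tables = [[BloomFilter() for _ in range(B)] for _ in range(R)]

    def _bucket(self, dataset_id, r):
        h = hashlib.sha256(f"{r}:{dataset_id}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.B

    def insert(self, dataset_id, kmers):
        for r in range(self.R):
            bf = self.tables[r][self._bucket(dataset_id, r)]
            for kmer in kmers:
                bf.add(kmer)

    def query(self, kmer, dataset_ids):
        hits = set(dataset_ids)
        for r in range(self.R):
            hits = {d for d in hits
                    if kmer in self.tables[r][self._bucket(d, r)]}
        return hits
```

    Because a dataset's k-mers are always inserted into its bucket's filter, a dataset that truly contains the query is never eliminated, which mirrors the zero false-negative rate noted above.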

  •

    Deep machine learning completes information about the bioactivity of one million molecules

    A tool developed by the Structural Bioinformatics and Network Biology lab at IRB Barcelona predicts the biological activity of chemical compounds, key information to evaluate their therapeutic potential.
    Using artificial neural networks, scientists have inferred experimental data for a million compounds and have developed a package of programs to make estimates for any type of molecule.
    The work has been published in the journal Nature Communications.
    The Structural Bioinformatics and Network Biology laboratory, led by ICREA Researcher Dr. Patrick Aloy, has completed the bioactivity information for a million molecules using deep machine-learning computational models. It has also released a tool to predict the biological activity of any molecule, even when no experimental data are available.
    This new methodology is based on the Chemical Checker, the largest database of bioactivity profiles for pseudo pharmaceuticals to date, developed by the same laboratory and published in 2020. The Chemical Checker collects information from 25 spaces of bioactivity for each molecule. These spaces are linked to the chemical structure of the molecule, the targets with which it interacts or the changes it induces at the clinical or cellular level. However, this highly detailed information about the mechanism of action is incomplete for most molecules, implying that for a particular one there may be information for one or two spaces of bioactivity but not for all 25.
    With this new development, the researchers have integrated all the available experimental information using deep machine-learning methods, so that the activity profiles of all molecules, from the chemical to the clinical level, can be completed.
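    The underlying task, filling in the missing entries of a molecules-by-bioactivity-spaces table from the observed ones, can be sketched with a far simpler technique than the paper's neural networks, such as iterative low-rank imputation. This is an illustrative stand-in, not the Chemical Checker method.

```python
import numpy as np

def complete_bioactivity(M, mask, rank=2, iters=50):
    """Iterative low-rank imputation. M holds bioactivity signatures
    (rows: molecules, cols: bioactivity spaces); mask is True where a
    measurement exists. Missing entries start at the column means of
    observed values, then are repeatedly refilled from a rank-`rank`
    SVD of the current estimate, while observed entries stay fixed."""
    X = np.where(mask, M, 0.0)
    col_means = X.sum(0) / np.maximum(mask.sum(0), 1)
    X = np.where(mask, M, col_means)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, low_rank)  # keep observed data fixed
    return X
```

    On data that really is low-rank, this recovers missing entries well; the appeal of the deep models in the study is that they capture relationships across the 25 spaces that a single low-rank assumption cannot.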

  •

    Fast IR imaging-based AI identifies tumor type in lung cancer

    The examined tissue does not need to be stained or otherwise marked, and the analysis takes only around half an hour. “This is a major step that shows that infrared imaging can be a promising methodology in future diagnostic testing and treatment prediction,” says Professor Klaus Gerwert, director of PRODI. The study was published in the American Journal of Pathology on 1 July 2021.
    Treatment decisions by means of genetic mutation analysis
    Lung tumours are divided into various types, such as small cell lung cancer, adenocarcinoma and squamous cell carcinoma. Many rare tumour types and sub-types also exist. This diversity hampers reliable rapid diagnosis in everyday clinical practice. In addition to histological typing, the tumour samples also need to be comprehensively examined for certain changes at the DNA level. “Detecting one of these mutations provides key information that influences both the prognosis and further therapeutic decisions,” says co-author Professor Reinhard Büttner, head of the Institute of General Pathology and Pathological Anatomy at University Hospital Cologne.
    Patients with lung cancer clearly benefit when the driver mutations have been characterised beforehand: for instance, tumours with activating mutations in the EGFR (epidermal growth factor receptor) gene often respond well to tyrosine kinase inhibitors, whereas non-EGFR-mutated tumours or tumours with other mutations, such as KRAS, do not respond at all to this medication. The differential diagnosis of lung cancer has previously relied on immunohistochemical staining of tissue samples followed by an extensive genetic analysis to determine the mutation.
    Fast and reliable measuring technique
    The group led by Klaus Gerwert had already shown in previous studies the potential of infrared (IR) imaging as a diagnostic tool for classifying tissue, an approach known as label-free digital pathology. The procedure identifies cancerous tissue without prior staining or other markings and runs automatically with the aid of artificial intelligence (AI). In contrast to the methods used in everyday clinical practice to determine tumour type and mutations in tumour tissue, which can take several days, the new procedure takes only around half an hour. In these 30 minutes, it is possible to ascertain not only whether the tissue sample contains tumour cells, but also what type of tumour it is and whether it carries a certain mutation.
    Infrared spectroscopy makes genetic mutations visible
    The Bochum researchers were able to verify the procedure on samples from over 200 lung cancer patients in their work. When identifying mutations, they concentrated on by far the most common lung tumour, adenocarcinoma, which accounts for over 50 per cent of tumours. Its most common genetic mutations can be determined with a sensitivity and specificity of 95 per cent compared to laborious genetic analysis. “For the first time, we were able to identify spectral markers that allow for a spatially resolved distinction between various molecular conditions in lung tumours,” explains Nina Goertzen from PRODI. A single infrared spectroscopic measurement offers information about the sample which would otherwise require several time-consuming procedures.
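    Conceptually, the classification step maps each measured IR spectrum to a tissue class. A deliberately simple stand-in for the study's AI, a nearest-centroid classifier on synthetic spectra, shows the shape of that workflow; all data, class names, and peak positions here are invented for illustration.

```python
import numpy as np

def train_centroids(spectra, labels):
    """Per-class mean spectrum: the simplest possible classifier for
    labelled IR spectra (the study uses deep networks; this only
    illustrates the train-then-classify workflow)."""
    classes = sorted(set(labels))
    labels = np.array(labels)
    return {c: spectra[labels == c].mean(axis=0) for c in classes}

def classify(spectrum, centroids):
    """Assign a pixel's spectrum to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(spectrum - centroids[c]))
```

    In the real pipeline each pixel of the IR image yields one spectrum, so classifying every pixel produces a spatially resolved map of tumour type, which is what "spectral markers" with spatial resolution refers to above.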
    A further step towards personalised medicine
    The results once again confirm the potential of label-free digital pathology for clinical use. “To further increase reliability and promote a translation of the method as a new diagnostic tool, studies with larger patient numbers adapted to clinical needs and external testing in everyday clinical practice are required,” says Dr. Frederik Großerüschkamp, IR imaging project manager. “In order to translate IR imaging into everyday clinical practice, it is crucial to shorten the measuring time, ensure simple and reliable operation of the measuring instruments, and provide answers to questions that are important and helpful both clinically and for the patients.”
    Story Source:
    Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.

  •

    A way to surmount supercooling

    Scientists at Osaka University, Panasonic Corporation, and Waseda University used scanning electron microscopy (SEM) and X-ray absorption spectroscopy to determine which additives induce crystallization in supercooled aqueous solutions. This work may lead to the development of new energy storage materials based on latent heat.
    If you put a bottle of water into the freezer, you expect to pull out a solid cylinder of ice after a few hours. However, if the water contains very few impurities and is left undisturbed, it may not freeze, instead remaining a supercooled liquid. Be careful, because this state is very unstable: the water will crystallize quickly if it is shaken or impurities are added, as many YouTube videos attest. Supercooling is a phenomenon in which an aqueous solution maintains its liquid state without solidifying, even though its temperature is below the freezing point. Although many studies have examined additives that trigger the freezing of supercooled liquids, the details of the mechanism remain unknown. One potential application is latent heat storage materials, which rely on freezing and melting to capture and later release heat, like a reusable freezer pack.
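    The freezer-pack idea comes down to the latent heat of fusion: the energy stored or released on melting or freezing is the material's mass times its latent heat. A minimal sketch, using water's textbook value of 334 J/g (clathrate hydrates have their own, different values):

```python
def latent_heat_storage(mass_g, latent_heat_j_per_g=334.0):
    """Energy in joules stored (on freezing) or released (on melting)
    by a phase-change material. 334 J/g is water's heat of fusion;
    substitute the appropriate value for a given hydrate."""
    return mass_g * latent_heat_j_per_g
```

    For example, a 500 g water-based freezer pack stores about 167 kJ across its phase change, which is why such packs hold a steady temperature far longer than the same mass of merely chilled material.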
    Now, a team of researchers led by Osaka University has shown that silver nanoparticles are very effective at inducing crystallization in clathrate hydrates. Clathrate hydrates physically resemble ice and are composed of hydrogen-bonded water cages with guest molecules inside. “Using SEM with the freeze-fracture replica method, we captured the moment when a nascent cluster enveloped a silver nanoparticle in the aqueous solution of latent heat storage materials,” explains corresponding author Professor Takeshi Sugahara. This occurs because the nanoparticles serve as a “seed,” or nucleation site, for tiny clusters to form. Once this process starts, the remaining solute and water molecules can quickly form additional clusters, and cluster densification then leads to crystallization. The researchers found that while silver nanoparticles tended to accelerate the formation of these clusters, other metal nanoparticles, such as palladium, gold, and iridium, do not promote crystallization. “The supercooling suppression effect obtained in the present study will contribute to achieving the practical use of clathrate hydrates as latent heat storage materials,” Professor Sugahara says. Material design guidelines for enhanced supercooling control, as described in this study, may lead to the application of latent heat storage materials in solar energy and heat recovery technologies with improved efficiency.
    Story Source:
    Materials provided by Osaka University.

  •

    AI learns to predict human behavior from videos

    Predicting what someone is about to do next based on their body language comes naturally to humans, but not to computers. When we meet another person, they might greet us with a hello, a handshake, or even a fist bump. We may not know which gesture they will use, but we can read the situation and respond appropriately.
    In a new study, Columbia Engineering researchers unveil a computer vision technique for giving machines a more intuitive sense for what will happen next by leveraging higher-level associations between people, animals, and objects.
    “Our algorithm is a step toward machines being able to make better predictions about human behavior, and thus better coordinate their actions with ours,” said Carl Vondrick, assistant professor of computer science at Columbia, who directed the study, which was presented at the Conference on Computer Vision and Pattern Recognition (CVPR) on June 24, 2021. “Our results open a number of possibilities for human-robot collaboration, autonomous vehicles, and assistive technology.”
    It’s the most accurate method to date for predicting video action events up to several minutes in the future, the researchers say. After analyzing thousands of hours of movies, sports games, and shows like “The Office,” the system learns to predict hundreds of activities, from handshaking to fist bumping. When it can’t predict the specific action, it finds the higher-level concept that links them, in this case, the word “greeting.”
    Past attempts in predictive machine learning, including those by the team, have focused on predicting just one action at a time. The algorithms decide whether to classify the action as a hug, high five, handshake, or even a non-action like “ignore.” But when the uncertainty is high, most machine learning models are unable to find commonalities between the possible options.
    Columbia Engineering PhD students Didac Suris and Ruoshi Liu decided to look at the longer-range prediction problem from a different angle. “Not everything in the future is predictable,” said Suris, co-lead author of the paper. “When a person cannot foresee exactly what will happen, they play it safe and predict at a higher level of abstraction. Our algorithm is the first to learn this capability to reason abstractly about future events.”
    Suris and Liu had to revisit questions in mathematics that date back to the ancient Greeks. In high school, students learn the familiar and intuitive rules of Euclidean geometry: straight lines go straight, and parallel lines never cross. Most machine learning systems also obey these rules. Other geometries, however, have bizarre, counter-intuitive properties: straight lines bend and triangles bulge. Suris and Liu used these unusual geometries to build AI models that organize high-level concepts and predict future human behavior.
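    The press release does not name the geometry, but the standard example of a space where "straight lines bend" and which is used to embed concept hierarchies is hyperbolic space. A minimal sketch of the distance function on the Poincaré ball shows the key property: the same Euclidean gap costs far more near the boundary, leaving room for many specific concepts under each abstract one near the origin.

```python
import math

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit (Poincare)
    ball. Distances blow up near the boundary, so abstract concepts
    can sit near the origin (moderately close to everything) while
    specific actions spread out near the edge."""
    sq = lambda p: sum(x * x for x in p)
    diff = sq([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq(u)) * (1.0 - sq(v))
    return math.acosh(1.0 + 2.0 * diff / denom)
```

    In such an embedding, a generic concept like "greeting" near the origin stays reasonably close to both "handshake" and "fist bump" near the boundary, even though the two specific actions are far from each other.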
    “Prediction is the basis of human intelligence,” said Aude Oliva, senior research scientist at the Massachusetts Institute of Technology and co-director of the MIT-IBM Watson AI Lab, an expert in AI and human cognition who was not involved in the study. “Machines make mistakes that humans never would because they lack our ability to reason abstractly. This work is a pivotal step towards bridging this technological gap.”
    The mathematical framework developed by the researchers enables machines to organize events by how predictable they are in the future. For example, we know that swimming and running are both forms of exercising. The new technique learns how to categorize these activities on its own. The system is aware of uncertainty, providing more specific actions when there is certainty, and more generic predictions when there is not.
    The technique could move computers closer to being able to size up a situation and make a nuanced decision, instead of a pre-programmed action, the researchers say. It’s a critical step in building trust between humans and computers, said Liu, co-lead author of the paper. “Trust comes from the feeling that the robot really understands people,” he explained. “If machines can understand and anticipate our behaviors, computers will be able to seamlessly assist people in daily activity.”
    While the new algorithm makes more accurate predictions on benchmark tasks than previous methods, the next steps are to verify that it works outside the lab, says Vondrick. If the system can work in diverse settings, there are many possibilities to deploy machines and robots that might improve our safety, health, and security, the researchers say. The group plans to continue improving the algorithm’s performance with larger datasets and computers, and other forms of geometry.
    “Human behavior is often surprising,” Vondrick commented. “Our algorithms enable machines to better anticipate what they are going to do next.”

  •

    Environmental impact of hydrofracking vs. conventional gas/oil drilling: Research shows the differences may be minimal

    Crude oil production and natural gas withdrawals in the United States have lessened the country’s dependence on foreign oil and provided financial relief to U.S. consumers, but have also raised longstanding concerns about environmental damage, such as groundwater contamination.
    A researcher in Syracuse University’s College of Arts and Sciences, and a team of scientists from Penn State, have developed a new machine learning technique to holistically assess water quality data in order to detect groundwater samples likely impacted by recent methane leakage during oil and gas production. Using that model, the team concluded that unconventional drilling methods like hydraulic fracturing — or hydrofracking — do not necessarily incur more environmental problems than conventional oil and gas drilling.
    The two common ways to extract oil and gas in the U.S. are conventional and unconventional methods. Conventional oil and gas are pumped from easily accessed sources using natural pressure. Unconventional oil and gas, by contrast, are extracted from hard-to-reach sources through a combination of horizontal drilling and hydraulic fracturing. Hydrofracking extracts natural gas, petroleum and brine from bedrock formations by injecting a mixture of sand, chemicals and water. Drilling into the earth and directing the high-pressure mixture into the rock releases the gas inside, which flows out to the head of the well.
    Tao Wen, assistant professor of earth and environmental sciences (EES) at Syracuse, recently led a study comparing data from different states to see which method might result in greater contamination of groundwater. They specifically tested levels of methane, which is the primary component of natural gas.
    The team selected four U.S. states located in important shale zones to target for their study: Pennsylvania, Colorado, Texas and New York. One of those states — New York — banned the practice of hydrofracking in 2015 following a review by the NYS Department of Health which found significant uncertainties about health, including increased water and air pollution.
    Wen and his colleagues compiled a large groundwater chemistry dataset from multiple sources, including federal agency reports, journal articles, and oil and gas companies. The majority of water samples in their study were collected from domestic water wells. Although methane itself is not toxic, Wen says that methane contamination detected in shallow groundwater poses risks to homeowners: it can be an explosion hazard, it can increase levels of other toxic chemical species such as manganese and arsenic, and it contributes to global warming, since methane is a greenhouse gas.
    Their model used sophisticated algorithms to analyze almost all of the retained geochemistry data in order to predict if a given groundwater sample was negatively impacted by recent oil and gas drilling.
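    The prediction task, labeling a groundwater sample as gas-impacted or not from its chemistry, can be sketched with plain logistic regression on made-up features; the study's actual model, features, and training data are more sophisticated than this stand-in.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=2000):
    """Logistic regression by gradient descent: a minimal stand-in
    for the study's model. Rows of X are water samples, columns are
    geochemistry features (e.g. methane and chloride concentrations);
    y is 1 for samples impacted by recent gas leakage, else 0."""
    X1 = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(X1.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))      # predicted probabilities
        w -= lr * X1.T @ (p - y) / len(y)      # gradient of log-loss
    return w

def predict(X, w):
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-X1 @ w)) > 0.5).astype(int)
```

    A linear model like this is readable (each feature gets one weight) but cannot capture the interactions among chemical species that motivate the more holistic approach described above.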
    The data comparison showed that methane contamination cases in New York, a state without unconventional drilling but with a high volume of conventional drilling, were similar to those in Pennsylvania, a state with a high volume of unconventional drilling. Wen says this suggests that unconventional drilling methods like fracking do not necessarily lead to more environmental problems than conventional drilling, although the result might alternatively be explained by the different sizes of the groundwater chemistry datasets compiled for the two states.
    The model also detected a higher rate of methane contamination cases in Pennsylvania than in Colorado and Texas. Wen says this difference could be attributed to different practices when drillers build the oil and gas wells in different states. According to previous research, most of the methane released into the environment from gas wells in the U.S. escapes because the cement that seals the well is not completed along the full length of the production casing. However, no data exist to establish whether drillers in those three states use different technologies; Wen says this requires further study and a review of the drilling data, should they become available.
    According to Wen, their machine learning model proved effective in detecting groundwater contamination, and applying it to other states and counties with ongoing or planned oil and gas production could make it an important resource for determining the safest methods of gas and oil drilling.
    Wen and his colleagues from Penn State, including Mengqi Liu, a graduate student in the College of Information Sciences and Technology; Josh Woda, a graduate student in the Department of Geosciences; Guanjie Zheng, a former Ph.D. student in the College of Information Sciences and Technology; and Susan L. Brantley, distinguished professor in the Department of Geosciences and director of the Earth and Environmental Systems Institute, recently had their findings published in the journal Water Research.
    The team’s work was funded by the National Science Foundation (IIS-16-39150), the US Geological Survey (104b award G16AP00079), and the College of Earth and Mineral Sciences Dean’s Fund for Postdoc-Facilitated Innovation at Penn State.
    Story Source:
    Materials provided by Syracuse University. Original written by Dan Bernardi.

  •

    Unbroken: New soft electronics don't break, even when punctured

    A team of Virginia Tech researchers from the Department of Mechanical Engineering and the Macromolecules Innovation Institute has created a new type of soft electronics, paving the way for devices that are self-healing, reconfigurable, and recyclable. These skin-like circuits are soft and stretchy, sustain numerous damage events under load without losing electrical conductivity, and can be recycled to generate new circuits at the end of a product’s life.
    Led by Assistant Professor Michael Bartlett, the team recently published its findings in Communications Materials, an open access journal from Nature Research.
    Current consumer devices, such as phones and laptops, contain rigid materials connected by soldered wires running throughout. The soft circuit developed by Bartlett’s team replaces these inflexible materials with soft electronic composites and tiny, electricity-conducting liquid metal droplets. These soft electronics are part of a rapidly emerging field of technology that gives gadgets a level of durability that would have been impossible just a few years ago.
    The liquid metal droplets are initially dispersed in an elastomer, a type of rubbery polymer, as electrically insulated, discrete drops.
    “To make circuits, we introduced a scalable approach through embossing, which allows us to rapidly create tunable circuits by selectively connecting droplets,” postdoctoral researcher and first author Ravi Tutika said. “We can then locally break the droplets apart to remake circuits and can even completely dissolve the circuits to break all the connections to recycle the materials, and then start back at the beginning.”
    The circuits are soft and flexible, like skin, continuing to work even under extreme damage. If a hole is punched in these circuits, the metal droplets can still transfer power. Instead of cutting the connection completely as in the case of a traditional wire, the droplets make new connections around the hole to continue passing electricity.
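    The rerouting behavior can be pictured as connectivity in a grid of droplets: punching a hole removes some nodes, but current still flows as long as any path of intact droplets connects the two terminals. A toy sketch, not the team's model:

```python
from collections import deque

def connected(nodes, src, dst):
    """Breadth-first search over a droplet grid: two terminals remain
    electrically connected as long as some path of intact droplets
    survives, however indirect the route."""
    def neighbors(p):
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in nodes:
                yield q
    seen, queue = {src}, deque([src])
    while queue:
        p = queue.popleft()
        if p == dst:
            return True
        for q in neighbors(p):
            if q not in seen:
                seen.add(q)
                queue.append(q)
    return False

# A 5x5 grid of liquid-metal droplets, then a puncture mid-circuit.
grid = {(x, y) for x in range(5) for y in range(5)}
punctured = grid - {(2, 1), (2, 2), (2, 3)}
```

    In this picture, a traditional wire is a single path (one cut breaks it), while the droplet composite is the whole grid: the puncture above still leaves routes around the hole, and only severing every remaining droplet in its column disconnects the terminals.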
    The circuits will also stretch without losing their electrical connection, as the team pulled the device to over 10 times its original length without failure during the research.
    At the end of a product’s life, the metal droplets and the rubbery materials can be reprocessed and returned to a liquid solution, effectively making them recyclable. From that point, they can be remade to start a new life, an approach that offers a pathway to sustainable electronics.
    While a stretchy smartphone has not yet been made, rapid development in the field also holds promise for wearable electronics and soft robotics. These emerging technologies require soft, robust circuitry to make the leap into consumer applications.
    “We’re excited about our progress and envision these materials as key components for emerging soft technologies,” Bartlett said. “This work gets closer to creating soft circuitry that could survive in a variety of real-world applications.”
    Story Source:
    Materials provided by Virginia Tech. Original written by Alex Parrish.