More stories

  • Chemical data management: an open way forward

    One of the most challenging aspects of modern chemistry is managing data. For example, when synthesizing a new compound, scientists go through multiple rounds of trial and error to find the right conditions for the reaction, generating massive amounts of raw data in the process. Such data are incredibly valuable because, like humans, machine-learning algorithms can learn much from failed and partially successful experiments.
    The current practice, however, is to publish only the most successful experiments, since no human can meaningfully process the massive number of failed ones. But AI has changed this: processing such volumes is exactly what machine-learning methods can do, provided the data are stored in a machine-actionable format for anyone to use.
    “For a long time, we needed to compress information due to the limited page count in printed journal articles,” says Professor Berend Smit, who directs the Laboratory of Molecular Simulation at EPFL Valais Wallis. “Nowadays, many journals do not even have printed editions anymore; however, chemists still struggle with reproducibility problems because journal articles are missing crucial details. Researchers ‘waste’ time and resources replicating authors’ ‘failed’ experiments and struggle to build on top of published results because raw data are rarely published.”
    But volume is not the only problem here; data diversity is another: research groups use different tools like Electronic Lab Notebook software, which store data in proprietary formats that are sometimes incompatible with each other. This lack of standardization makes it nearly impossible for groups to share data.
    Now Smit, together with Luc Patiny and Kevin Jablonka at EPFL, has published a perspective in Nature Chemistry presenting an open platform for the entire chemistry workflow: from the inception of a project to its publication.
    The scientists envision the platform as “seamlessly” integrating three crucial steps: data collection, data processing, and data publication, all at minimal cost to researchers. The guiding principle is that data should be FAIR: findable, accessible, interoperable, and reusable. “At the moment of data collection, the data will be automatically converted into a standard FAIR format, making it possible to automatically publish all ‘failed’ and partially successful experiments together with the most successful experiment,” says Smit.
    But the authors go a step further, proposing that data should also be machine-actionable. “We are seeing more and more data-science studies in chemistry,” says Jablonka. “Indeed, recent results in machine learning try to tackle some of the problems chemists believe are unsolvable. For instance, our group has made enormous progress in predicting optimal reaction conditions using machine-learning models. But those models would be much more valuable if they could also learn from reaction conditions that fail; otherwise, they remain biased, because only the successful conditions are published.”
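    As an illustration of what a machine-actionable record might look like in practice, each experiment, failed or successful, can be serialized into a structured open format such as JSON. The schema below is purely hypothetical, not the platform's actual format:

```python
import json

def to_fair_record(reaction, conditions, yield_percent, outcome):
    """Serialize one experiment into a machine-actionable record.
    All field names here are illustrative, not a community standard."""
    return json.dumps({
        "reaction": reaction,            # e.g. a reaction SMILES string
        "conditions": conditions,        # temperature, solvent, time, ...
        "yield_percent": yield_percent,  # None for failed runs
        "outcome": outcome,              # "success", "partial", or "failed"
    }, sort_keys=True)

# Failed and successful runs are stored identically, so a model
# trained later can learn from both instead of only from successes.
failed = to_fair_record("A + B >> C", {"T_celsius": 80, "solvent": "DMF"},
                        None, "failed")
success = to_fair_record("A + B >> C", {"T_celsius": 120, "solvent": "DMF"},
                         74.0, "success")
```

    Because every record shares the same fields, “failed” experiments become usable training data rather than discarded lab-notebook entries.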
    Finally, the authors propose five concrete steps the field must take to create a FAIR data-management plan:
    • The chemistry community should embrace its own existing standards and solutions.
    • Journals need to make the deposition of reusable raw data mandatory where community standards exist.
    • We need to embrace the publication of “failed” experiments.
    • Electronic Lab Notebooks that do not allow exporting all data into an open, machine-actionable form should be avoided.
    • Data-intensive research must enter our curricula.
    “We think there is no need to invent new file formats or technologies,” says Patiny. “In principle, all the technology is there, and we need to embrace existing technologies and make them interoperable.”
    The authors also point out that just storing data in any electronic lab notebook — the current trend — does not necessarily mean that humans and machines can reuse the data. Rather, the data must be structured and published in a standardized format, and they also must contain enough context to enable data-driven actions.
    “Our perspective offers a vision of what we think are the key components to bridge the gap between data and machine learning for core problems in chemistry,” says Smit. “We also provide an open science solution in which EPFL can take the lead.”
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Nik Papageorgiou. Note: Content may be edited for style and length.

  • Making a ‘sandwich’ out of magnets and topological insulators, potential for lossless electronics

    A Monash University-led research team has discovered that a structure comprising an ultra-thin topological insulator sandwiched between two 2D ferromagnetic insulators becomes a large-bandgap quantum anomalous Hall insulator.
    Such a heterostructure provides an avenue towards viable ultra-low energy future electronics, or even topological photovoltaics.
    Topological Insulator: The Filling in the Sandwich
    In the researchers’ new heterostructure, a ferromagnetic material forms the ‘bread’ of the sandwich, while a topological insulator (i.e., a material displaying nontrivial topology) takes the place of the ‘filling’.
    Combining magnetism with nontrivial band topology gives rise to exotic quantum phases such as the quantum anomalous Hall (QAH) effect, in which current flows without dissipation along quantized edge states.
    Inducing magnetic order in topological insulators via proximity to a magnetic material offers a promising pathway towards achieving the QAH effect at higher temperatures (approaching or exceeding room temperature) for lossless transport applications.

  • Understanding the use of bicycle sharing systems with statistics

    Bicycle sharing systems (BSSs) are a popular transport system in many of the world’s big cities. Not only do BSSs provide a convenient and eco-friendly mode of travel, they also help reduce traffic congestion. Moreover, bicycles can be rented at one port and returned at a different port. Despite these advantages, however, BSSs cannot rely solely on their users to maintain the availability of bicycles at all ports at all times. This is because many bicycle trips only go in one direction, causing excess bicycles at some ports and a lack of bicycles at others.
    This problem is generally solved by rebalancing, which involves strategically dispatching special trucks to relocate excess bicycles to the ports where they are needed. Efficient rebalancing, however, is an optimization problem of its own, and Professor Tohru Ikeguchi and his colleagues from Tokyo University of Science, Japan, have devoted much work to finding optimal rebalancing strategies. In a study from 2021, they proposed a method for finding optimal rebalancing tours in a relatively short time. However, the researchers had only tested the performance of their algorithm on randomly generated ports as benchmarks, which may not reflect the conditions of real-world BSS ports.
    Addressing this issue, Prof. Ikeguchi has recently led another study, together with PhD student Ms. Honami Tsushima, to find more realistic benchmarks. In their paper published in Nonlinear Theory and Its Applications, IEICE, the researchers sought to create these benchmarks by statistically analyzing the actual usage history of rented and returned bicycles in real BSSs. “Bike sharing systems could become the preferred public transport system globally in the future. It is, therefore, an important issue to address in our societies,” Prof. Ikeguchi explains.
    The researchers used publicly available data from four real BSSs located in four major cities in the USA: Boston, Washington DC, New York City, and Chicago. Save for Boston, these cities have over 560 ports each, with total bicycle counts in the thousands.
    First, a preliminary analysis revealed that an excess and lack of bicycles occurred across all four BSSs during all months of the year, verifying the need for active rebalancing. Next, the team sought to understand the temporal patterns of rented and returned bicycles, for which they treated the logged rent and return events as “point processes.”
    The researchers independently analyzed both point processes using three approaches, namely raster plots, coefficient of variation, and local variation. Raster plots helped them find periodic usage patterns, while coefficient of variation and local variation allowed them to measure the global and local variabilities, respectively, of the random intervals between consecutive bicycle rent or return events.
    The analyses of the raster plots yielded useful insights into how the four BSSs were used in their respective cities. Most bicycles were used during the daytime and fewer overnight, producing a periodic pattern. Interestingly, from the analyses of the local variation, the team found that usage patterns were similar between weekdays and weekends, contradicting the results of previous studies. Finally, the results indicated that the statistical characteristics of the temporal patterns of rented and returned bikes followed a Poisson process (a widely studied type of random process) only in New York City. This was an important finding, given the original objective of the research team. “We can now create realistic benchmark instances whose temporal patterns of rented and returned bicycles follow the Poisson process. This, in turn, can help improve the bicycle rebalancing model we proposed in our earlier work,” explains Prof. Ikeguchi.
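    The two variability measures named above have standard definitions over the intervals between consecutive events: the coefficient of variation is the ratio of the intervals' standard deviation to their mean, while the local variation compares each interval only with its neighbor. For a Poisson process, both are close to 1. A minimal sketch on synthetic data (not the study's actual BSS logs):

```python
import random
import statistics

def coefficient_of_variation(intervals):
    """Global variability: std / mean of inter-event intervals."""
    return statistics.pstdev(intervals) / statistics.mean(intervals)

def local_variation(intervals):
    """Local variability (Lv): compares consecutive intervals, so slow
    day/night changes in rate affect it less than the CV does."""
    n = len(intervals)
    s = sum(((intervals[i] - intervals[i + 1]) /
             (intervals[i] + intervals[i + 1])) ** 2
            for i in range(n - 1))
    return 3.0 * s / (n - 1)

# A Poisson process has exponentially distributed intervals,
# for which both measures are approximately 1.
random.seed(0)
poisson_intervals = [random.expovariate(1.0) for _ in range(20000)]
print(round(coefficient_of_variation(poisson_intervals), 2))  # ≈ 1.0
print(round(local_variation(poisson_intervals), 2))           # ≈ 1.0
```

    Applied to real rent/return timestamps, values far from 1 would indicate bursty or regular usage rather than Poisson behavior, which is what the team checked city by city.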
    Overall, this study paves the way to a deeper understanding of how people use BSSs. Moreover, through further detailed analyses, it should be possible to gain insight into more complex aspects of human life, as Prof. Ikeguchi remarks: “We believe that the analysis of BSS data will lead not only to efficient bike sharing but also to a better understanding of the social dynamics of human movement.”
    In any case, making BSSs a more efficient and attractive option will, hopefully, make a larger percentage of people choose the bicycle as their preferred means of transportation.

  • The future of 5G+ infrastructure could be built tile by tile

    5G+ (5G/Beyond 5G) is the fastest-growing segment and the only significant opportunity for investment growth in the wireless network infrastructure market, according to the latest forecast by Gartner, Inc. But currently 5G+ technologies rely on large antenna arrays that are typically bulky and come only in very limited sizes, making them difficult to transport and expensive to customize.
    Researchers from Georgia Tech’s College of Engineering have developed a novel and flexible solution to address the problem. Their additively manufactured, tile-based approach can construct on-demand, massively scalable arrays of 5G+-enabled smart skins with the potential to bring intelligence to nearly any surface or object. The study, recently published in Scientific Reports, describes the approach, which is not only much easier to scale and customize than current practices but also shows no performance degradation when flexed or scaled to a very large number of tiles.
    “Typically, there are a lot of smaller wireless network systems working together, but they are not scalable. With the current techniques, you can’t increase, decrease, or direct bandwidth, especially for very large areas,” said Tentzeris. “Being able to utilize and scale this novel tile-based approach makes this possible.”
    Tentzeris says his team’s modular application equipped with 5G+ capability has the potential for immediate, large-scale impact as the telecommunications industry continues to rapidly transition to standards for faster, higher capacity, and lower latency communications.
    Building the Tiles
    In Georgia Tech’s new approach, flexible and additively manufactured tiles are assembled onto a single, flexible underlying layer. This allows tile arrays to be attached to a multitude of surfaces. The architecture also allows very large 5G+ phased/electronically steerable antenna array networks to be installed on the fly. According to Tentzeris, a tile array could even be attached to an unmanned aerial vehicle (UAV) to surge broadband capacity in low-coverage areas.

  • Technology has the potential to change the patient-provider relationship

    Healthcare technology continues to evolve and has the potential to significantly change the relationship between providers and their patients. A study from the U.S. Department of Veterans Affairs, Regenstrief Institute and Indiana University School of Medicine analyzed perspectives on personal health records.
    Personal health records are different from electronic health records because they are used by the patient as opposed to the provider. They are sometimes referred to as patient portals and allow the patient to see test results, medications and other health information.
    The research team interviewed providers, patients and caregivers associated with the Richard L. Roudebush VA Medical Center about their thoughts on personal health records and how they could be used.
    “During the interviews, patients expressed the potential for personal health records to deepen their relationship with their provider and to allow them to be more understood. Physicians were interested in having more clinical information sharing to facilitate better care,” said study author David Haggstrom, M.D., MAS, director of the Regenstrief Institute Center for Health Services Research, core investigator at the VA Health Services Research and Development (HSR&D) Center for Health Information and Communication (CHIC) and associate professor of medicine at IU School of Medicine. “These different visions of the value of these records show the need for discussions between physicians and patients to set expectations about the uses of PHRs.”
    Both doctors and patients raised concerns about workflow.
    “Patient portals have already created an additional strain on medical staff, and patients are sensitive to that. Careful thought needs to be given to how health systems and teams deploy PHRs to still provide patient-centered care,” said Dr. Haggstrom.
    The next steps for personal health records involve implementing them more widely, tailoring them for specific conditions and making them more user-friendly.
    Dr. Haggstrom is currently leading a five-year clinical trial using a personal health record created specifically for cancer patients. The research team will be looking at both the quality of care and the impact on the patient-provider relationship.
    In addition to Dr. Haggstrom, Thomas Carr, M.D. of VA CHIC is an author. The study was supported in part by VA HSR&D CDA 07-016, the VA Advanced Medical Informatics Fellowship Program and the Livestrong Foundation.
    Story Source:
    Materials provided by Regenstrief Institute. Note: Content may be edited for style and length.

  • Study shows gaps in how STEM organizations collect demographic information

    Professional organizations in science, technology, engineering and mathematics (STEM) fields could more effectively collect data on underrepresented groups in their fields, according to a new survey published March 31 in Science. With more robust information, STEM organizations could better target efforts to recruit and retain a more diverse membership.
    “We want to start a conversation among STEM organizations,” said Nicholas Burnett, lead author of the study and a postdoctoral researcher in the Department of Neurobiology, Physiology and Behavior at the University of California, Davis. “The ultimate goal is to increase representation of these groups, and you can’t do that without knowing where to target resources.”
    Burnett’s coauthors on the study are: Alyssa Hernandez, Harvard University; Emily King, UC Berkeley; Richelle Tanner, Chapman University; and Kathryn Wilsterman, University of Montana, Missoula.
    The researchers surveyed 164 U.S.-based STEM organizations, drawn mostly from a list of societies affiliated with the American Association for the Advancement of Science. The organizations were asked about the kinds of demographic information they collected on their members and conference attendees, and how they put it to use. Survey results were not associated with any particular organization, and the researchers did not ask for actual demographic information from the respondents, only what categories of information were collected.
    Seventy-three organizations responded to the survey, representing over 700,000 constituents in a range of fields from life sciences and physical sciences to mathematics and technology.
    While most organizations (80 percent) collected some demographic data, exactly what they collected varied. Many organizations followed the kind of breakdown used by federal agencies, offering a number of options for “race and ethnicity” but also lumping together several disparate groups under one category (such as “Asian American and Pacific Islander”).

  • Can an image-based electrocardiographic algorithm improve access to care in remote settings?

    Researchers at the Yale Cardiovascular Data Science (CarDS) Lab have developed an artificial intelligence (AI)-based model for clinical diagnosis that can use electrocardiogram (ECG) images, regardless of format or layout, to diagnose multiple heart rhythm and conduction disorders.
    The team, led by Dr. Rohan Khera, assistant professor in cardiovascular medicine, developed a novel multilabel automated diagnosis model from ECG images. ECG Dx© is the latest tool from the CarDS Lab designed to make AI-based ECG interpretation accessible in remote settings. They hope the new technology provides an improved method to diagnose key cardiac disorders. The findings were published in Nature Communications on March 24.
    The first author of the study is Veer Sangha, a computer science major at Yale College. “Our study suggests that image and signal models performed comparably for clinical labels on multiple datasets,” said Sangha. “Our approach could expand the applications of artificial intelligence to clinical care targeting increasingly complex challenges.”
    As mobile technology improves, patients increasingly have access to ECG images, which raises new questions about how to incorporate these devices in patient care. Under Khera’s mentorship, Sangha’s research at the CarDS Lab analyzes multi-modal inputs from electronic health records to design potential solutions.
    The model is based on data collected from more than 2 million ECGs from more than 1.5 million patients who received care in Brazil from 2010 to 2017. One in six patients was diagnosed with rhythm disorders. The tool was independently validated through multiple international data sources, with high accuracy for clinical diagnosis from ECGs.
    Machine learning (ML) approaches, specifically those that use deep learning, have transformed automated diagnostic decision-making. For ECGs, they have led to the development of tools that allow clinicians to find hidden or complex patterns. However, deep learning tools use signal-based models, which according to Khera have not been optimized for remote health care settings. Image-based models may offer improvement in the automated diagnosis from ECGs.
    There are a number of clinical and technical challenges when using AI-based applications.
    “Current AI tools rely on raw electrocardiographic signals instead of stored images, which are far more common as ECGs are often printed and scanned as images. Also, many AI-based diagnostic tools are designed for individual clinical disorders, and therefore, may have limited utility in a clinical setting where multiple ECG abnormalities co-occur,” said Khera. “A key advance is that the technology is designed to be smart — it is not dependent on specific ECG layouts and can adapt to existing variations and new layouts. In that respect, it can perform like expert human readers, identifying multiple clinical diagnoses across different formats of printed ECGs that vary across hospitals and countries.”
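    The multilabel setup Khera describes, in which the model outputs an independent probability for each possible abnormality so that co-occurring disorders can be reported together, can be sketched as follows. The labels and logits below are toy values, not the CarDS Lab model:

```python
import math

# A few example diagnosis labels (illustrative, not the study's label set).
LABELS = ["atrial fibrillation", "RBBB", "LBBB", "sinus bradycardia"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_diagnoses(logits, threshold=0.5):
    """Unlike softmax classification, which picks exactly one class,
    each label gets its own independent probability, so several
    abnormalities can be flagged on the same ECG."""
    probs = [sigmoid(z) for z in logits]
    return [label for label, p in zip(LABELS, probs) if p >= threshold]

# Toy logits standing in for a network's output on one ECG image:
# positive logits push a label's probability above 0.5.
print(multilabel_diagnoses([2.1, -1.3, 0.8, -2.0]))
# → ['atrial fibrillation', 'LBBB']
```

    This per-label sigmoid design is what lets a single model handle ECGs where multiple abnormalities co-occur, the limitation of single-disorder tools that Khera points out.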
    This study was supported by research funding from the National Heart, Lung, and Blood Institute of the National Institutes of Health (K23HL153775).
    Story Source:
    Materials provided by Yale University. Original written by Elisabeth Reitman. Note: Content may be edited for style and length.

  • Physiological signals could be the key to 'emotionally intelligent' AI, scientists say

    Speech and language recognition technology is a rapidly developing field, which has led to the emergence of novel speech dialog systems, such as Amazon Alexa and Siri. A significant milestone in the development of dialog artificial intelligence (AI) systems is the addition of emotional intelligence. A system able to recognize the emotional states of the user, in addition to understanding language, would generate a more empathetic response, leading to a more immersive experience for the user.
    “Multimodal sentiment analysis” is a group of methods that constitute the gold standard for an AI dialog system with sentiment detection. These methods can automatically analyze a person’s psychological state from their speech, voice color, facial expression, and posture and are crucial for human-centered AI systems. The technique could potentially realize an emotionally intelligent AI with beyond-human capabilities, which understands the user’s sentiment and generates a response accordingly.
    However, current emotion estimation methods focus only on observable information and do not account for the information contained in unobservable signals, such as physiological signals. Such signals are a potential gold mine of emotions that could improve the sentiment estimation performance tremendously.
    In a new study published in the journal IEEE Transactions on Affective Computing, researchers from Japan added physiological signals to multimodal sentiment analysis for the first time. The collaborative team comprised Associate Professor Shogo Okada from the Japan Advanced Institute of Science and Technology (JAIST) and Prof. Kazunori Komatani from the Institute of Scientific and Industrial Research at Osaka University. “Humans are very good at concealing their feelings. The internal emotional state of a user is not always accurately reflected by the content of the dialog, but since it is difficult for a person to consciously control their biological signals, such as heart rate, it may be useful to use these for estimating their emotional state. This could make for an AI with sentiment estimation capabilities that are beyond human,” explains Dr. Okada.
    The team analyzed 2,468 exchanges between a dialog AI and 26 participants to estimate the level of enjoyment each user experienced during the conversation. Each user was then asked to assess how enjoyable or boring they found the conversation. The team used the multimodal dialog data set “Hazumi1911,” which uniquely combines speech recognition, voice-color sensing, and facial expression and posture detection with skin potential, a form of physiological response sensing.
    “On comparing all the separate sources of information, the biological signal information proved to be more effective than voice and facial expression. When we combined the language information with biological signal information to estimate the self-assessed internal state while talking with the system, the AI’s performance became comparable to that of a human,” comments an excited Dr. Okada.
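    One common way to combine language information with biological-signal information, as Dr. Okada describes, is late fusion: concatenating per-modality feature vectors and feeding the result to a single trained estimator. A minimal sketch with made-up features and weights (the study's actual model and features differ):

```python
def fuse(modalities):
    """Late fusion: concatenate per-modality feature vectors into one."""
    fused = []
    for features in modalities:
        fused.extend(features)
    return fused

def enjoyment_score(fused, weights, bias=0.0):
    """A linear scorer standing in for a trained sentiment estimator."""
    return bias + sum(w * x for w, x in zip(weights, fused))

# Hypothetical per-exchange features: 2 language features and
# 2 skin-potential features (all values and weights are made up).
language = [0.7, 0.1]
physiology = [0.9, 0.4]
x = fuse([language, physiology])
score = enjoyment_score(x, weights=[1.0, -0.5, 2.0, 0.3])
print(score > 0)  # a positive score would indicate estimated enjoyment
```

    The design choice here is that each modality can be sensed and featurized independently, and dropping or adding a sensor only changes the length of the fused vector, not the rest of the pipeline.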
    These findings suggest that the detection of physiological signals in humans, which typically remain hidden from view, could pave the way for highly emotionally intelligent AI-based dialog systems, making for more natural and satisfying human-machine interactions. Moreover, emotionally intelligent AI systems could help identify and monitor mental illness by sensing changes in daily emotional states. They could also come in handy in education, where the AI could gauge whether the learner is interested and excited about a topic of discussion, or bored, prompting changes in teaching strategy and more efficient educational services.
    Story Source:
    Materials provided by Japan Advanced Institute of Science and Technology. Note: Content may be edited for style and length.