More stories

  • Blockchain gives Indigenous Americans control over their genomic data

    Scientists today can access genomic data from Indigenous Peoples without their free, prior, and informed consent, leading to potential misuse and the reinforcement of stereotypes. Although existing tools facilitate the sharing of genomic information with researchers, none of them gives Indigenous governments control over how these data are used. In an article published July 21 in the journal Cell, the authors propose a new blockchain model in which researchers are only allowed to access the genomic data after the Indigenous entities have approved the research project.
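    The gist of the proposed access model fits in a few lines. The sketch below is a hypothetical illustration of consent-gated access: all names are invented, and a real deployment would record approvals on a blockchain rather than in memory.

```python
# Minimal sketch of consent-gated access to genomic data. All names are
# hypothetical illustrations of the model described above, not the
# authors' implementation; a real system would record approvals on-chain
# rather than in an in-memory set.

class ConsentLedger:
    """Append-only record of project approvals (stand-in for a blockchain)."""

    def __init__(self):
        self._approvals = set()

    def approve(self, entity: str, project_id: str) -> None:
        # In the proposed model, only the Indigenous governing entity
        # can write this record.
        self._approvals.add((entity, project_id))

    def is_approved(self, entity: str, project_id: str) -> bool:
        return (entity, project_id) in self._approvals


def fetch_genomic_data(ledger: ConsentLedger, entity: str, project_id: str) -> dict:
    """Release data only after the governing entity has approved the project."""
    if not ledger.is_approved(entity, project_id):
        raise PermissionError(f"project {project_id} lacks approval from {entity}")
    return {"entity": entity, "records": "..."}  # placeholder payload


ledger = ConsentLedger()
ledger.approve("ExampleNation", "proj-042")
data = fetch_genomic_data(ledger, "ExampleNation", "proj-042")  # permitted
```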


  • Patient deterioration predictor could surpass limits of traditional vital signs

    An artificial intelligence-driven device that works to detect and predict hemodynamic instability may provide a more accurate picture of patient deterioration than traditional vital sign measurements, a Michigan Medicine study suggests.
    Researchers captured data from over 5,000 adult patients at University of Michigan Health with the Analytic for Hemodynamic Instability. Developed at the U-M Weil Institute for Critical Care Research and Innovation, AHI is software as a medical device designed to detect and predict changes in hemodynamic status in real time using data from a single electrocardiogram lead. The researchers compared its output against gold-standard vital sign measurements (continuous heart rate and blood pressure from invasive arterial monitoring) in several intensive care units to determine whether AHI could indicate hemodynamic instability in real time.
    They found that the AHI detected standard indications of hemodynamic instability (a combination of elevated heart rate and low blood pressure) with nearly 97% sensitivity and 79% specificity. The results are published in Critical Care Explorations, a Society of Critical Care Medicine journal.
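    For readers unfamiliar with the two metrics: sensitivity is the fraction of truly unstable episodes the device flags, while specificity is the fraction of stable periods it correctly leaves unflagged. A quick sketch with invented counts (not the study's data), chosen only to mirror the reported ratios:

```python
# Sensitivity and specificity from a 2x2 confusion matrix. The counts
# are invented for illustration (not the study's data); only the ratios
# mirror the reported ~97% / ~79%.
true_positives = 970   # unstable episodes correctly flagged
false_negatives = 30   # unstable episodes missed
true_negatives = 790   # stable periods correctly left unflagged
false_positives = 210  # stable periods incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)  # TP / (TP + FN)
specificity = true_negatives / (true_negatives + false_positives)  # TN / (TN + FP)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# sensitivity = 97%, specificity = 79%
```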
    The findings suggest that the AHI may be able to provide continuous dynamic monitoring capabilities in patients who traditionally have intermittent static vital sign measurements, says senior author Ben Bassin, M.D., director of the Joyce and Don Massey Family Foundation Emergency Critical Care Center, also known as EC3, and an associate professor of emergency medicine at U-M Medical School.
    “AHI performs extremely well, and it functions in a way that we think may have transformative clinical utility,” Bassin said. “Most vital signs measurements are static, subject to human error, and require validation and interpretation. AHI is the opposite of that. It’s dynamic, produces a binary output of ‘stable’ or ‘unstable,’ and it may enable early marshaling of resources to patients who may not have been on a clinician’s radar.”
    Traditional vital signs have limitations, including the limited accuracy of non-invasive monitoring and the fact that patients who are not at obvious risk of immediate deterioration may have their vital signs checked only every 4-6 hours or even less often. The AHI, which was approved by the United States Food and Drug Administration in 2021 and is licensed to Fifth Eye, Inc. (a U-M spinoff), was designed to address those limitations.
    “The vision of AHI was born out of our continued inability to identify unstable patients and to predict when patients would become unstable, especially in settings where they cannot be intensively monitored,” said co-author Kevin Ward, M.D., executive director of the Weil Institute and professor of emergency medicine and biomedical engineering at Michigan Medicine.
    “AHI is ideally suited to be used with wearable monitors such as ECG patches, which could turn any hospital bed, waiting room or other setting into a sophisticated monitoring environment. The implication of such a technology is that it has the potential to save lives not only in the hospital, but also at home, in the ambulance and on the battlefield.”
    Researchers say future studies are needed to determine if AHI provides clinical and resource allocation benefits in patients undergoing infrequent blood pressure monitoring. The next phase of research will focus on how AHI is used at Michigan Medicine.
    Story Source:
    Materials provided by Michigan Medicine – University of Michigan. Original written by Noah Fromson. Note: Content may be edited for style and length.

  • Flexible method for shaping laser beams extends depth-of-focus for OCT imaging

    Researchers have developed a new method for flexibly creating various needle-shaped laser beams. These long, narrow beams can be used to improve optical coherence tomography (OCT), a noninvasive and versatile imaging tool that is used for scientific research and various types of clinical diagnoses.
    “Needle-shaped laser beams can effectively extend the depth-of-focus of an OCT system, improving the lateral resolution, signal-to-noise ratio, contrast and image quality over a long depth range,” said research team leader Adam de la Zerda from Stanford University School of Medicine. “However, before now, implementing a specific needle-shaped beam has been difficult due to the lack of a common, flexible generation method.”
    In Optica, Optica Publishing Group’s journal for high-impact research, the researchers describe their new platform for creating needle-shaped beams with different lengths and diameters. It can be used to create various types of beams, such as one with an extremely long depth of field or one narrower than the diffraction limit of light.
    The needle-shaped beams generated with this method could benefit a variety of OCT applications. For example, a long, narrow beam could allow high-resolution OCT imaging of the retina without any dynamic focusing, making the process faster and thus more comfortable for patients. It could also extend the depth-of-focus for OCT endoscopy, which would improve diagnostic accuracy.
    “The rapid high-resolution imaging ability of needle-shaped beams can also get rid of adverse effects that occur due to human movements during image acquisition,” said the paper’s first author Jingjing Zhao. “This can help to pinpoint melanoma and other skin problems using OCT.”
    A flexible solution
    As a noninvasive imaging tool, OCT features an axial resolution that is determined by the light source and stays constant along the imaging depth. Its lateral resolution, however, is set by the focusing optics and is only optimal within a very small depth of focus. To address this issue, OCT instruments are often built so that the focus can be moved along the depth to capture clear images of an entire region of interest. However, this dynamic focusing makes imaging slower and doesn’t work well for applications where the sample isn’t static.
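    The trade-off the researchers are working around follows from standard Gaussian-beam optics (textbook background, not taken from the paper): the tighter the focus, the shorter the depth over which it stays sharp.

```latex
% Standard Gaussian-beam relations (textbook optics, not from the paper).
% Lateral resolution tracks the focused waist radius w_0, while the
% depth of focus is twice the Rayleigh range:
\[
  \delta x \propto w_0,
  \qquad
  \mathrm{DOF} = 2 z_R = \frac{2 \pi w_0^2}{\lambda}.
\]
% Halving w_0 for sharper lateral resolution cuts the depth of focus
% fourfold; needle-shaped beams are designed to evade this trade-off.
```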

  • At the water's edge: Self-assembling 2D materials at a liquid-liquid interface

    The past few decades have witnessed a great amount of research in the field of two-dimensional (2D) materials. As the name implies, these thin film-like materials are composed of layers that are only a few atoms thick. Many of the chemical and physical properties of 2D materials can be fine-tuned, leading to promising applications in many fields, including optoelectronics, catalysis, renewable energy, and more.
    Coordination nanosheets are one particularly interesting type of 2D material. The “coordination” refers to the metal ions in these materials, which act as coordination centers: they can spontaneously organize the surrounding molecules into ordered arrangements that span multiple layers. These materials have attracted the attention of materials scientists due to their favorable properties. In fact, we have only begun to scratch the surface of what heterolayer coordination nanosheets (coordination nanosheets whose layers differ in atomic composition) can offer.
    In a recent study published on June 13, 2022, and featured on the front cover of Chemistry — A European Journal, a team of scientists from Tokyo University of Science (TUS) and The University of Tokyo in Japan reported a remarkably simple way to synthesize heterolayer coordination nanosheets. Composed of the organic ligand terpyridine coordinated to iron and cobalt, these nanosheets assemble themselves at the interface between two immiscible liquids in a peculiar way. The study, led by Prof. Hiroshi Nishihara from TUS, also included contributions from Mr. Joe Komeda, Dr. Kenji Takada, Dr. Hiroaki Maeda, and Dr. Naoya Fukui from TUS.
    To synthesize the heterolayer coordination nanosheets, the team first created the liquid-liquid interface to enable their assembly. They dissolved tris(terpyridine) ligand in dichloromethane (CH2Cl2), an organic liquid that does not mix with water. They then poured a solution of water and ferrous tetrafluoroborate, an iron-containing chemical, on top of the CH2Cl2. After 24 hours, the first layer of the coordination nanosheet, bis(terpyridine)iron (or “Fe-tpy”), formed at the interface between both liquids.
    Following this, they removed the iron-containing water and replaced it with cobalt-containing water. In the next few days, a bis(terpyridine)cobalt (or “Co-tpy”) layer formed right below the iron-containing one at the liquid-liquid interface.
    The team made detailed observations of the heterolayer using various advanced techniques, such as scanning electron microscopy, X-ray photoelectron spectroscopy, atomic force microscopy, and scanning transmission electron microscopy. They found that the Co-tpy layer formed neatly below the Fe-tpy layer at the liquid-liquid interface. Moreover, they could control the thickness of the second layer by adjusting how long they let the synthesis process run.
    Interestingly, the team also found that the ordering of the layers could be swapped by simply changing the order of the synthesis steps. In other words, if they first added a cobalt-containing solution and then replaced it with an iron-containing solution, the synthesized heterolayer would have cobalt coordination centers on the top layer and iron coordination centers on the bottom layer. “Our findings indicate that metal ions can go through the first layer from the aqueous phase to the CH2Cl2 phase to react with terpyridine ligands right at the boundary between the nanosheet and the CH2Cl2 phase,” explains Prof. Nishihara. “This is the first ever clarification of the growth direction of coordination nanosheets at a liquid/liquid interface.”
    Additionally, the team investigated the reduction-oxidation properties of their coordination nanosheets as well as their electrical rectification characteristics. They found that the heterolayers behaved much like a diode in a way that is consistent with the electronic energy levels of Co-tpy and Fe-tpy. These insights, coupled with the easy synthesis procedure developed by the team, could help in the design of heterolayer nanosheets made of other materials and tailored for specific electronics applications. “Our synthetic method could be applicable to other coordination polymers synthesized at liquid-liquid interfaces,” highlights Prof. Nishihara. “Therefore, the results of this study will expand the structural and functional diversity of molecular 2D materials.”
    With eyes set on the future, the team will keep investigating chemical phenomena occurring at liquid-liquid interfaces, elucidating the mechanisms of mass transport and chemical reactions. Their findings can help expand the design of 2D materials and, hopefully, lead to better performance of optoelectronic devices, such as solar cells.
    Story Source:
    Materials provided by Tokyo University of Science. Note: Content may be edited for style and length.

  • Electric nanomotor made from DNA material

    A research team led by the Technical University of Munich (TUM) has succeeded for the first time in producing a molecular electric motor using the DNA origami method. The tiny machine made of genetic material self-assembles and converts electrical energy into kinetic energy. The new nanomotors can be switched on and off, and the researchers can control the rotation speed and rotational direction.
    Be it in our cars, drills or automatic coffee grinders — motors help us perform work in our everyday lives to accomplish a wide variety of tasks. On a much smaller scale, natural molecular motors perform vital tasks in our bodies. For instance, a motor protein known as ATP synthase produces the molecule adenosine triphosphate (ATP), which our body uses for short-term storage and transfer of energy.
    While natural molecular motors are essential, it has been quite difficult to recreate motors on this scale with mechanical properties roughly similar to those of natural molecular motors like ATP synthase. A research team has now constructed a working nanoscale molecular rotary motor using the DNA origami method. The team was led by Hendrik Dietz, Professor of Biomolecular Nanotechnology at TUM, Friedrich Simmel, Professor of Physics of Synthetic Biological Systems at TUM, and Ramin Golestanian, director at the Max Planck Institute for Dynamics and Self-Organization.
    A self-assembling nanomotor
    The novel molecular motor consists of DNA, the genetic material. The researchers used the DNA origami method to assemble the motor from DNA molecules. This method was invented by Paul Rothemund in 2006 and later further developed by the research team at TUM. Several long single strands of DNA serve as a scaffold to which additional DNA strands attach as counterparts. The DNA sequences are selected in such a way that the attached strands and the resulting folds create the desired structures.
    “We’ve been advancing this method of fabrication for many years and can now develop very precise and complex objects, such as molecular switches or hollow bodies that can trap viruses. If you put the DNA strands with the right sequences in solution, the objects self-assemble,” says Dietz.
    The new nanomotor made of DNA material consists of three components: base, platform and rotor arm. The base is approximately 40 nanometers high and is fixed via chemical bonds to a glass plate in solution. A rotor arm of up to 500 nanometers in length is mounted on the base so that it can rotate. Another component is crucial for the motor to work as intended: a platform that lies between the base and the rotor arm. This platform contains obstacles that influence the movement of the rotor arm. To pass the obstacles and rotate, the rotor arm must bend upward a little, similar to a ratchet.
    Targeted movement through AC voltage
    Without energy supply, the rotor arms of the motors move randomly in one direction or the other, driven by random collisions with molecules from the surrounding solvent. However, as soon as AC voltage is applied via two electrodes, the rotor arms rotate in a targeted and continuous manner in one direction.
    “The new motor has unprecedented mechanical capabilities: It can achieve torques in the range of 10 piconewton times nanometer. And it can generate more energy per second than what’s released when two ATP molecules are split,” explains Ramin Golestanian, who led the theoretical analysis of the mechanism of the motor.
    The targeted movement of the motors results from a superposition of the fluctuating electrical forces with the forces experienced by the rotor arm due to the ratchet obstacles. The underlying mechanism realizes a so-called “flashing Brownian ratchet.” The researchers can control the speed and direction of the rotation via the direction of the electric field and also via the frequency and amplitude of the AC voltage.
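    To make the "flashing Brownian ratchet" concrete, here is a toy one-dimensional simulation. It illustrates the named mechanism with invented parameters; it is not the authors' model of the DNA motor.

```python
import math
import random

# Toy 1-D "flashing Brownian ratchet". While the asymmetric sawtooth
# potential is OFF, the particle diffuses freely; when it flashes ON,
# the particle slides into the minimum of whichever period it landed
# in. Because each minimum sits near the forward edge of its period,
# capture by the next well forward is far more likely than capture
# backward, so unbiased thermal noise is rectified into net drift.

L = 1.0          # spatial period of the sawtooth potential
A = 0.8 * L      # minimum sits 80% of the way through each period
D = 0.02         # diffusion constant (arbitrary units)
T_OFF = 0.5      # free-diffusion time while the potential is off
CYCLES = 20_000  # number of on/off flashes

random.seed(0)
x = A  # start in a potential minimum
for _ in range(CYCLES):
    # OFF phase: free Brownian step with variance 2*D*T_OFF.
    x += random.gauss(0.0, math.sqrt(2 * D * T_OFF))
    # ON phase: relax into the minimum of the current period.
    x = math.floor(x / L) * L + A

print(f"net displacement after {CYCLES} flashes: {x - A:+.1f} periods")
# Positive displacement: the flashing ratchet rectifies the noise.
```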
    “The new motor could also have technical applications in the future. If we develop it further, we could possibly use it to drive user-defined chemical reactions, inspired by how ATP synthase makes ATP driven by rotation. Surfaces could then be densely coated with such motors. You would add starting materials, apply a little AC voltage, and the motors would produce the desired chemical compound,” says Dietz.

  • Deep learning for new alloys

    When is something more than just the sum of its parts? Alloys show such synergy. Steel, for instance, revolutionized industry by taking iron, adding a little carbon and making an alloy much stronger than either of its components.
    Supercomputer simulations are helping scientists discover new types of alloys, called high-entropy alloys. The researchers ran their calculations on the Stampede2 supercomputer of the Texas Advanced Computing Center (TACC), allocated through the Extreme Science and Engineering Discovery Environment (XSEDE).
    Their research was published in April 2022 in npj Computational Materials. The approach could be applied to finding new materials for batteries, catalysts and more without the need for expensive metals such as platinum or cobalt.
    “High-entropy alloys represent a totally different design concept. In this case we try to mix multiple principal elements together,” said study senior author Wei Chen, associate professor of materials science and engineering at the Illinois Institute of Technology.
    In a nutshell, the term “high entropy” refers to the large entropy gained from randomly mixing multiple elements at similar atomic fractions; this entropy lowers the free energy and can stabilize the new and novel materials resulting from the ‘cocktail.’
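    In symbols, this is the ideal configurational entropy of mixing (standard thermodynamics background, not a formula quoted from the paper):

```latex
% Ideal configurational entropy of mixing for n elements with atomic
% fractions x_i (standard thermodynamics, not quoted from the paper):
\[
  \Delta S_{\mathrm{mix}} = -R \sum_{i=1}^{n} x_i \ln x_i ,
  \qquad
  \Delta S_{\mathrm{mix}} = R \ln n \quad \text{(equimolar mix)}.
\]
% Through \Delta G = \Delta H - T \Delta S, a large mixing entropy
% lowers the free energy and can stabilize a random solid solution,
% hence the name "high-entropy" alloys.
```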
    For the study, Chen and colleagues surveyed a large space of 14 elements and the combinations that yield high-entropy alloys. They performed high-throughput quantum mechanical calculations to determine the stability and elastic properties (a material's ability to regain its size and shape after stress) of more than 7,000 high-entropy alloys.
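    As a back-of-the-envelope illustration of the size of that search space (the element palette and alloy arities below are assumed for illustration; the paper's exact list and workflow may differ):

```python
from math import comb, log

# Rough size of the equimolar search space for a 14-element palette,
# plus the ideal mixing entropy each candidate carries. The palette is
# an assumed illustration; the study's element set may differ.
R = 8.314  # gas constant, J/(mol K)
elements = ["Al", "Co", "Cr", "Cu", "Fe", "Hf", "Mn",
            "Mo", "Nb", "Ni", "Ta", "Ti", "V", "Zr"]

for n in (4, 5, 6):  # quaternary, quinary, senary alloys
    n_alloys = comb(len(elements), n)  # choose n of the 14 elements
    s_mix = R * log(n)                 # Delta S_mix = R ln n (equimolar)
    print(f"{n}-component: {n_alloys:5d} candidates, "
          f"S_mix = {s_mix:5.2f} J/(mol K)")
# 4-component:  1001 candidates, S_mix = 11.53 J/(mol K)
# 5-component:  2002 candidates, S_mix = 13.38 J/(mol K)
# 6-component:  3003 candidates, S_mix = 14.90 J/(mol K)
```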

  • Robots learn household tasks by watching humans

    The robot watched as Shikhar Bahl opened the refrigerator door. It recorded his movements, the swing of the door, the location of the fridge and more, analyzing this data and readying itself to mimic what Bahl had done.
    It failed at first, missing the handle completely at times, grabbing it in the wrong spot or pulling it incorrectly. But after a few hours of practice, the robot succeeded and opened the door.
    “Imitation is a great way to learn,” said Bahl, a Ph.D. student at the Robotics Institute (RI) in Carnegie Mellon University’s School of Computer Science. “Having robots actually learn from directly watching humans remains an unsolved problem in the field, but this work takes a significant step in enabling that ability.”
    Bahl worked with Deepak Pathak and Abhinav Gupta, both faculty members in the RI, to develop a new learning method for robots called WHIRL, short for In-the-Wild Human Imitating Robot Learning. WHIRL is an efficient algorithm for one-shot visual imitation. It can learn directly from human-interaction videos and generalize that information to new tasks, making robots well-suited to learning household chores. People constantly perform various tasks in their homes. With WHIRL, a robot can observe those tasks and gather the video data it needs to eventually determine how to complete the job itself.
    The team added a camera and their software to an off-the-shelf robot, and it learned how to do more than 20 tasks — from opening and closing appliances, cabinet doors and drawers to putting a lid on a pot, pushing in a chair and even taking a garbage bag out of the bin. Each time, the robot watched a human complete the task once and then went about practicing and learning to accomplish the task on its own. The team presented their research this month at the Robotics: Science and Systems conference in New York.
    “This work presents a way to bring robots into the home,” said Pathak, an assistant professor in the RI and a member of the team. “Instead of waiting for robots to be programmed or trained to successfully complete different tasks before deploying them into people’s homes, this technology allows us to deploy the robots and have them learn how to complete tasks, all the while adapting to their environments and improving solely by watching.”
    Current methods for teaching a robot a task typically rely on imitation or reinforcement learning. In imitation learning, humans manually operate a robot to teach it how to complete a task. This process must be done several times for a single task before the robot learns. In reinforcement learning, the robot is typically trained on millions of examples in simulation and then asked to adapt that training to the real world.
    Both learning models work well when teaching a robot a single task in a structured environment, but they are difficult to scale and deploy. WHIRL can learn from any video of a human doing a task. It is easily scalable, not confined to one specific task and can operate in realistic home environments. The team is even working on a version of WHIRL trained by watching videos of human interaction from YouTube and Flickr.
    Progress in computer vision made the work possible. Using models trained on internet data, computers can now understand and model movement in 3D. The team used these models to understand human movement, facilitating training WHIRL.
    With WHIRL, robots can accomplish tasks in their natural environments. The appliances, doors, drawers, lids, chairs and garbage bag were not modified or manipulated to suit the robot. The robot’s first several attempts at a task ended in failure, but once it had a few successes, it quickly latched on to the task and mastered it. While the robot may not accomplish the task with the same movements as a human, that’s not the goal: humans and robots have different parts, and they move differently. What matters is that the end result is the same. The door is opened. The switch is turned off. The faucet is turned on.
    “To scale robotics in the wild, the data must be reliable and stable, and the robots should become better in their environment by practicing on their own,” Pathak said.
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.

  • Alexa and Siri, listen up! Teaching machines to really hear us

    University of Virginia cognitive scientist Per Sederberg has a fun experiment you can try at home. Take out your smartphone and, using a voice assistant such as the one for Google’s search engine, say the word “octopus” as slowly as you can.
    Your device will struggle to reiterate what you just said. It might supply a nonsensical response, or it might give you something close but still off — like “toe pus.” Gross!
    The point is, Sederberg said, that when it comes to receiving auditory signals the way humans and other animals do — despite all of the computing power dedicated to the task by such heavyweights as Google, DeepMind, IBM and Microsoft — current artificial intelligence remains a bit hard of hearing.
    The outcomes can range from comical and mildly frustrating to downright alienating for those who have speech problems.
    But using recent breakthroughs in neuroscience as a model, UVA collaborative research has made it possible to convert existing AI neural networks into technology that can truly hear us, no matter at what pace we speak.
    The deep learning tool is called SITHCon, and by generalizing input, it can understand words spoken at speeds different from those the network was trained on.
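    Below is a toy sketch of one principle behind such speed invariance: a logarithmically compressed timeline. It illustrates the general idea that scale-invariant models build on, not the published SITHCon architecture; all names and parameters are invented.

```python
import numpy as np

# Sampling the recent past at geometrically spaced lags turns a change
# of speaking speed into a *shift* along the log-time axis; a
# shift-invariant readout over that axis (here, a max) then responds
# the same at any speed. Illustrative only, not SITHCon itself.

def log_timeline(signal, n_taps=8, base=2.0):
    """Average the signal over geometrically growing windows into the past."""
    taps = []
    for k in range(n_taps):
        w = int(base ** k)               # window widths 1, 2, 4, ...
        taps.append(signal[-w:].mean())  # mean over the last w samples
    return np.array(taps)

rng = np.random.default_rng(1)
word = rng.random(64)
slow_word = np.repeat(word, 2)  # the same "word", spoken at half speed

fast_rep = log_timeline(word)
slow_rep = log_timeline(slow_word)
# The slow representation is the fast one shifted by one tap along the
# log-time axis, so a max over taps is essentially unchanged:
print(np.round(fast_rep, 2))
print(np.round(slow_rep, 2))
print(fast_rep.max(), slow_rep.max())
```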