More stories

  • This fabric can hear your heartbeat

    Someday our clothing may eavesdrop on the soundtrack of our lives, capturing the noises around and inside us.

    A new fiber acts as a microphone — picking up speech, rustling leaves and chirping birds — and turns those acoustic signals into electrical ones. Woven into a fabric, the material can even hear handclaps and faint sounds, such as its wearer’s heartbeat, researchers report March 16 in Nature. Such fabrics could provide a comfortable, nonintrusive — even fashionable — way to monitor body functions or aid with hearing.

    Acoustic fabrics have existed for perhaps hundreds of years, but they’re used to dampen sound, says Wei Yan, a materials scientist at Nanyang Technological University in Singapore. Fabric as a microphone is “totally a different concept,” says Yan, who worked on the fabric while at MIT.

    Yan and his colleagues were inspired by the human eardrum. Sound waves cause vibrations in the eardrum, which are converted to electrical signals by the cochlea. “It turns out that this eardrum is made of fibers,” says Yoel Fink, a materials scientist at MIT. In the eardrum’s inner layers, collagen fibers radiate from the center, while others form concentric rings. The crisscrossing fibers play a role in hearing and look similar to the fabrics people weave, Fink says.

    Analogous to what’s happening in an eardrum, sound vibrates fabric at the nanoscale. In the new fabric, cotton fibers and others of a somewhat stiff material called Twaron efficiently convert incoming sound to vibrations. Woven together with these threads is a single fiber that contains a blend of piezoelectric materials, which produce a voltage when pressed or bent (SN: 8/22/17). The buckling and bending of the piezoelectric-containing fiber create electrical signals that can be sent through a tiny circuit board to a device that reads and records the voltage.

    The fabric microphone is sensitive to a range of noise levels, from a quiet library to heavy traffic, the team reports, although it is continuing to investigate what signal processing is needed to disentangle target sounds from ambient noise. Integrated into clothing, this sound-sensing fabric feels like regular fabric, Yan says. And it continued to work as a microphone after being washed 10 times.
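    The kind of signal processing the team alludes to can be illustrated with a minimal sketch, not the authors' actual pipeline: band-pass filtering the fabric's voltage trace so that the low-frequency band where heart sounds live is kept while broadband ambient noise is attenuated. The sampling rate and cutoff frequencies below are assumptions chosen for illustration.

```python
# Minimal sketch (not the authors' pipeline): isolate heart sounds from a
# fabric-microphone voltage trace with a band-pass filter. Sampling rate and
# cutoff frequencies are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # assumed sampling rate of the voltage readout, in Hz

def isolate_heart_sounds(voltage: np.ndarray, low_hz=20.0, high_hz=150.0) -> np.ndarray:
    """Keep the 20-150 Hz band, where most heart-sound energy lies, and
    attenuate out-of-band ambient noise."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=FS, output="sos")
    return sosfiltfilt(sos, voltage)

# Example with synthetic data: brief "heartbeat" bursts plus broadband noise.
t = np.arange(0, 10, 1 / FS)
heartbeat = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.99)
noisy_voltage = heartbeat + 0.5 * np.random.randn(t.size)
clean = isolate_heart_sounds(noisy_voltage)
```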

    Woven into fabric, a specialized fiber (pictured, center) creates electrical signals when bent or buckled, turning the entire material into a microphone. Credit: Fink Lab/MIT, Elizabeth Meiklejohn/RISD, Greg Hren

    Piezoelectric materials have “huge potential” for applications from observing the function of bodies to monitoring the integrity of aircraft materials, says Vijay Thakur, a materials scientist at Scotland’s Rural College in Edinburgh who was not part of this work. They’ve even been proposed for energy generation, but, he says, many uses have been limited by the tiny voltages they produce (SN: 10/1/15). The way the fibers are made in this fabric — sandwiching a blend of piezoelectric materials between other components, including a flexible, stretchy outer material — concentrates the energy from the vibrations into the piezoelectric layer, enhancing the signal it produces.

    As a proof of concept, the team incorporated the fabric into a shirt, which could hear its wearer’s heart like a stethoscope does. Used this way, the fabric microphone could listen for murmurs and may someday be able to provide information similar to an echocardiogram, an ultrasound of the heart, Thakur says. If it proves effective as a monitoring and diagnostic tool, placing such microphones into clothing may someday make it easier for doctors to track heart conditions in young children, who have trouble keeping still, he says.

    The team also anticipates that fabric microphones could aid hearing and communication. Another shirt the team created had two piezoelectric fibers spaced apart on the shirt’s back. Based on when each fiber picked up the sound, this shirt can be used to detect the direction a clap came from. And when hooked up to a power source, the fabric microphones can project sound as a speaker.
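    The direction-finding idea behind the two-fiber shirt is classic time-difference-of-arrival reasoning: the fiber the clap reaches first is the one closer to the source. A minimal sketch, with an assumed fiber spacing and speed of sound rather than the study's actual geometry:

```python
# Minimal sketch of time-difference-of-arrival (TDOA) direction finding with two
# sensors, as a two-fiber shirt could use. Fiber spacing and the speed of sound
# are illustrative assumptions, not values from the study.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
FIBER_SPACING = 0.30    # assumed distance between the two fibers, in meters

def arrival_angle(delay_seconds: float) -> float:
    """Angle (degrees) of the incoming sound relative to the line joining the
    two fibers, given the arrival-time difference between them."""
    # Path-length difference implied by the delay, clamped to the physical maximum.
    path_diff = max(-FIBER_SPACING, min(FIBER_SPACING, delay_seconds * SPEED_OF_SOUND))
    return math.degrees(math.acos(path_diff / FIBER_SPACING))

# A clap whose sound reaches the second fiber 0.5 ms after the first
print(f"estimated angle: {arrival_angle(0.0005):.1f} degrees")
```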

    “For the past 20 years, we’ve been trying to introduce a new way of thinking about fabrics,” Fink says. Beyond providing beauty and warmth, fabrics may help solve technological problems. And perhaps, Fink says, they can beautify technology too.

  • AI to predict antidepressant outcomes in youth

    Mayo Clinic researchers have taken the first step in using artificial intelligence (AI) to predict early outcomes with antidepressants in children and adolescents with major depressive disorder, in a study published in The Journal of Child Psychology and Psychiatry. This work resulted from a collaborative effort between the departments of Molecular Pharmacology and Experimental Therapeutics, and Psychiatry and Psychology, at Mayo Clinic, with support from Mayo Clinic’s Center for Individualized Medicine.
    “This preliminary work suggests that AI has promise for assisting clinical decisions by informing physicians on the selection, use and dosing of antidepressants for children and adolescents with major depressive disorder,” says Paul Croarkin, D.O., a Mayo Clinic psychiatrist and senior author of the study. “We saw improved predictions of treatment outcomes in samples of children and adolescents across two classes of antidepressants.”
    In the study, researchers identified variation in six depressive symptoms: difficulty having fun, social withdrawal, excessive fatigue, irritability, low self-esteem and depressed feelings.
    They assessed these symptoms with the Children’s Depression Rating Scale-Revised and used them to predict the outcomes of 10 to 12 weeks of antidepressant pharmacotherapy. Measured at four to six weeks, the six symptoms predicted 10- to 12-week outcomes with an average accuracy of 73% in fluoxetine testing datasets and 76% in duloxetine testing datasets. In placebo-treated patients, the accuracy of predicting response and remission was significantly lower, at 67%. These outcomes show the potential of AI and patient data to ensure children and adolescents receive the treatment with the highest likelihood of delivering therapeutic benefits and minimal side effects, explains Arjun Athreya, Ph.D., a Mayo Clinic researcher and lead author of the study.
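    The general shape of such a prediction task can be sketched as follows. This is only an illustrative analogue with synthetic data, not the Mayo Clinic team's actual model or dataset: a classifier is fit on the six interim symptom scores measured at four to six weeks and asked to predict response at 10 to 12 weeks.

```python
# Illustrative analogue only, not the Mayo Clinic algorithm: fit a classifier on
# six interim symptom scores (weeks 4-6) to predict treatment response at weeks
# 10-12. Feature names follow the symptoms listed in the article; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

SYMPTOMS = ["difficulty_having_fun", "social_withdrawal", "excessive_fatigue",
            "irritability", "low_self_esteem", "depressed_feelings"]

rng = np.random.default_rng(0)
n_patients = 200
X = rng.integers(1, 8, size=(n_patients, len(SYMPTOMS)))   # CDRS-R style item scores
# Synthetic label: responders tend to have a lower interim symptom burden.
y = (X.sum(axis=1) + rng.normal(0, 4, n_patients) < 24).astype(int)

model = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy on synthetic data: {accuracy:.2f}")
```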
    “We designed the algorithm to mimic a clinician’s logic of treatment management at an interim time point based on their estimated guess of whether a patient will likely or not benefit from pharmacotherapy at the current dose,” says Dr. Athreya. “Hence, it was essential for me as a computer engineer to embed and observe the practice closely to not only understand the needs of the patient, but also how AI can be consumed and useful to the clinician to benefit the patient.”
    Next steps
    The research findings are a foundation for future work incorporating physiological information, brain-based measures and pharmacogenomic data for precision medicine approaches in treating youth with depression. This will improve the care of young patients with depression, and help clinicians initiate and dose antidepressants in patients who benefit most.
    “Technological advances are understudied tools that could enhance treatment approaches,” says Liewei Wang, M.D., Ph.D., the Bernard and Edith Waterman Director of the Pharmacogenomics Program and Director of the Center for Individualized Medicine at the Mayo Clinic. “Predicting outcomes in children and adolescents treated for depression is critical in managing what could become a lifelong disease burden.”
    Acknowledgments
    This work was supported by Mayo Clinic Foundation for Medical Education and Research; the National Science Foundation under award No. 2041339; and the National Institute of Mental Health under awards R01MH113700, R01MH124655 and R01AA027486. The content is solely the authors’ responsibility and does not necessarily represent the official views of the funding agencies. The authors have declared no competing or potential conflicts of interest.
    Story Source:
    Materials provided by Mayo Clinic. Original written by Colette Gallagher. Note: Content may be edited for style and length.

  • Nuclear reactor power levels can be monitored using seismic and acoustic data

    Seismic and acoustic data recorded 50 meters away from a research nuclear reactor could predict whether the reactor was in an on or off state with 98% accuracy, according to a new study published in Seismological Research Letters.
    By applying several machine learning models to the data, researchers at Oak Ridge National Laboratory could also predict when the reactor was transitioning between on and off, and estimate its power levels, with about 66% accuracy.
    The findings provide another tool for the international community to cooperatively verify and monitor nuclear reactor operations in a minimally invasive way, said the study’s lead author Chengping Chai, a geophysicist at Oak Ridge. “Nuclear reactors can be used for both benign and nefarious activities. Therefore, verifying that a nuclear reactor is operating as declared is of interest to the nuclear nonproliferation community.”
    Although seismic and acoustic data have long been used to monitor earthquakes and the structural properties of infrastructure such as buildings and bridges, some researchers now use the data to take a closer look at the movements associated with industrial processes. In this case, Chai and colleagues deployed seismic and acoustic sensors around the High Flux Isotope Reactor at Oak Ridge, a research reactor used to produce neutrons for studies in physics, chemistry, biology, engineering and materials science.
    Reactor operation is a thermal process, with a cooling tower that dissipates heat. “We found that seismo-acoustic sensors can record the mechanical signatures of vibrating equipment such as fans and pumps at the cooling tower with enough accuracy to shed light on operational questions,” Chai said.
    The researchers then compared a number of machine learning algorithms to discover which were best at estimating the reactor’s power state from specific seismo-acoustic signals. The algorithms were trained with seismic-only, acoustic-only and both types of data collected over a year. The combined data produced the best results, they found.
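    A minimal sketch of that kind of comparison, using placeholder features and labels rather than the laboratory's actual data or models, looks like this: train the same classifier on seismic-only, acoustic-only and combined feature sets and compare cross-validated accuracy.

```python
# Minimal sketch (not ORNL's actual pipeline): compare classifiers trained on
# seismic-only, acoustic-only and combined features for predicting reactor
# on/off state. Features and labels here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_windows = 1000
seismic = rng.normal(size=(n_windows, 8))    # e.g. band-limited spectral power features
acoustic = rng.normal(size=(n_windows, 8))
state = (seismic[:, 0] + acoustic[:, 0] + rng.normal(0, 0.5, n_windows) > 0).astype(int)

feature_sets = {
    "seismic only": seismic,
    "acoustic only": acoustic,
    "combined": np.hstack([seismic, acoustic]),
}
for name, X in feature_sets.items():
    acc = cross_val_score(RandomForestClassifier(n_estimators=100), X, state, cv=5).mean()
    print(f"{name:13s} accuracy: {acc:.2f}")
```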
    “The seismo-acoustic signals associated with different power levels show complicated patterns that are difficult to analyze with traditional techniques,” Chai explained. “The machine learning approaches are able to infer the complex relationship between different reactor systems and their seismo-acoustic fingerprint and use it to predict power levels.”
    Chai and colleagues detected some interesting signals during the course of their study, including the vibrations of a noisy pump in the reactor’s off state, which disappeared when the pump was replaced.
    Chai said it is a long-term and challenging goal to associate seismic and acoustic signatures with different industrial activities and equipment. For the High Flux Isotope Reactor, preliminary research shows that fans and pumps have different seismo-acoustic fingerprints, and that different fan speeds have their own unique signatures.
    “Some normal but less frequent activities such as yearly or incidental maintenance need to be distinguished in seismic and acoustic data,” Chai said. To better understand how these signatures relate to specific operations, “we need to study both the seismic and acoustic signatures of instruments and the background noise at various industrial facilities.”
    Story Source:
    Materials provided by Seismological Society of America. Note: Content may be edited for style and length.

  • Intensity control of projectors in parallel: A doorway to an augmented reality future

    A challenge to adopting augmented reality (AR) in wider applications is working with dynamic objects, owing to a delay between their movement and the projection of light onto their new position. But, Tokyo Tech scientists may have a workaround. They have developed a method that uses multiple projectors while reducing delay time. Their method could open the door to a future driven by AR, helping us live increasingly technology-centered lives.
    Technological advancements continue to redesign the way we interact with digital media, the world around us, and each other. Augmented reality (AR), which uses technology to alter the perception of objects in the real world, is unlocking unprecedented landscapes in entertainment, advertising, education, and across many other industries. The use of multiple projectors plays an important role in expanding the usage of AR, alongside a technique called projection mapping. However, an obstacle to the widespread adoption of AR is the application of this method to moving, or “dynamic,” targets without the loss of immersion in the AR space.
    This technique, known as dynamic projection mapping, relies on a combination of cameras and projectors that visually detect target surfaces and project onto them, respectively. A critical requirement is high-speed information transfer and low “latency,” or delay between detection and projection. Any latency leads to a misalignment of the projected image, which affects our perception and reduces the effectiveness of the AR space.
    Other issues like changes in shadowing and target overlap are solved easily by using multiple projectors. However, the addition of new projectors correspondingly drives up the latency. This is a result of the need to calculate the intensity at every pixel simultaneously for every frame of a moving scene. Simply put, more projectors lead to longer and more complex calculations. The latency is a massive hurdle towards AR taking a true foothold in broader applications across society.
    Thankfully, a team of scientists at Tokyo Institute of Technology (Tokyo Tech), led by Associate Professor Yoshihiro Watanabe, might just have the necessary solution. They have developed a novel method to calculate the intensity of each pixel on a target in parallel, reducing the need for a single large optimization calculation. Their method relies on the principle that if pixels are small enough, they can be evaluated independently. While based on an approximation, their results, published in IEEE Transactions on Visualization and Computer Graphics, suggest that they could achieve the same quality of images as conventional, more computationally expensive methods, while drastically increasing the mapping speed and thereby reducing the latency.
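    The core idea, that sufficiently small pixels can be treated independently, can be caricatured in a short sketch. This is an assumption-laden illustration rather than the published algorithm: instead of one global optimization over all projectors and pixels, each pixel solves its own tiny problem, so the work parallelizes naturally.

```python
# Assumption-laden sketch, not the published algorithm: per-pixel intensity
# control for multiple overlapping projectors. Each pixel independently chooses
# projector intensities so their (attenuated) contributions sum to the target
# brightness, which lets the loop below be parallelized across pixels or machines.
import numpy as np

def per_pixel_intensities(target, attenuation):
    """target: (n_pixels,) desired brightness at each pixel.
    attenuation: (n_pixels, n_projectors) fraction of each projector's output
    that actually reaches each pixel (geometry, distance, occlusion).
    Returns (n_pixels, n_projectors) per-projector drive values in [0, 1]."""
    n_pixels, n_projectors = attenuation.shape
    drive = np.zeros_like(attenuation)
    for p in range(n_pixels):            # independent problems: trivially parallel
        a = attenuation[p]
        if a.sum() > 0:
            # Minimum-norm solution of a . x = target, clipped to valid drive levels.
            drive[p] = np.clip(target[p] * a / (a @ a), 0.0, 1.0)
    return drive

target = np.array([0.8, 0.5, 1.0])
attenuation = np.array([[0.9, 0.4], [0.2, 0.7], [0.6, 0.6]])
print(per_pixel_intensities(target, attenuation))
```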
    “Another advantage of our proposed method is, as there is no longer need for a single global calculation, it allows the use of multiple rendering computers connected through a network, each only controlling a single projector,” explains Dr. Watanabe. “Such a network system is easily customizable to incorporate more projectors, without major sacrifices to the latency.”
    This new method can allow large spaces with many projectors for efficient dynamic projection mapping, taking us a step closer to broader AR applications, as Dr. Watanabe describes: “The presented high-speed multi-projection is expected to be a major part of important base technologies that will advance spatial AR to derive more practical uses in our daily life.”
    Video: https://youtu.be/ltwSmsYnlK8
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Stackable 'holobricks' can make giant 3D images

    Researchers have developed a new method to display highly realistic holographic images using ‘holobricks’ that can be stacked together to generate large-scale holograms.
    The researchers, from the University of Cambridge and Disney Research, developed a holobrick proof-of-concept, which can tile holograms together to form a large seamless 3D image. This is the first time this technology has been demonstrated and opens the door for scalable holographic 3D displays. The results are reported in the journal Light: Science & Applications.
    As technology develops, people want high-quality visual experiences, from 2D high resolution TV to 3D holographic augmented or virtual reality, and large true 3D displays. These displays need to support a significant amount of data flow: for a 2D full HD display, the information data rate is about three gigabits per second (Gb/s), but a 3D display of the same resolution would require a rate of three terabits per second, which is not yet available.
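    The back-of-the-envelope arithmetic behind those figures can be reproduced roughly; the frame rate, colour depth and the factor applied for a true 3D display are assumptions, since the article does not state them.

```python
# Rough reproduction of the article's data-rate arithmetic. Frame rate, colour
# depth and the ~1000x factor for a true 3D display are assumptions for illustration.
width, height = 1920, 1080          # full HD resolution
bits_per_pixel = 24                 # 8 bits per colour channel
frames_per_second = 60

rate_2d = width * height * bits_per_pixel * frames_per_second
print(f"2D full HD: {rate_2d / 1e9:.1f} Gb/s")            # ~3 Gb/s

# A true 3D display must carry many views / depth samples per pixel;
# roughly 1000x the 2D rate gives the ~3 Tb/s figure quoted in the article.
rate_3d = rate_2d * 1000
print(f"3D, same resolution: {rate_3d / 1e12:.1f} Tb/s")  # ~3 Tb/s
```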
    Holographic displays can reconstruct high quality images for a real 3D visual perception. They are considered the ultimate display technology to connect the real and virtual worlds for immersive experiences.
    “Delivering an adequate 3D experience using the current technology is a huge challenge,” said Professor Daping Chu from Cambridge’s Department of Engineering, who led the research. “Over the past ten years, we’ve been working with our industrial partners to develop holographic displays which allow the simultaneous realisation of large size and large field-of-view, which needs to be matched with a hologram with a large optical information content.”
    However, the information content of holograms is much greater than what current light engines, known as spatial light modulators, can display, owing to their limited space-bandwidth product.
    For 2D displays, it’s standard practice to tile small size displays together to form one large display. The approach being explored here is similar, but for 3D displays, which has not been done before. “Joining pieces of 3D images together is not trivial, because the final image must be seen as seamless from all angles and all depths,” said Chu, who is also Director of the Centre for Advanced Photonics and Electronics (CAPE). “Directly tiling 3D images in real space is just not possible.”
    To address this challenge, the researchers developed the holobrick unit, based on coarse integrated holographic displays for angularly tiled 3D images, a concept developed at CAPE with Disney Research about seven years ago.
    Each of the holobricks uses a high-information bandwidth spatial light modulator for information delivery in conjunction with coarse integrated optics, to form the angularly tiled 3D holograms with large viewing areas and fields of view.
    Careful optical design ensures that the holographic fringe pattern fills the entire face of the holobrick, so that multiple holobricks can be seamlessly stacked to form a scalable, spatially tiled 3D holographic display with both a wide field of view and a large size.
    The proof-of-concept developed by the researchers is made of two seamlessly tiled holobricks. Each full-colour brick is 1024×768 pixels, with a 40° field of view and 24 frames per second, to display tiled holograms for full 3D images.
    “There are still many challenges ahead to make ultra-large 3D displays with wide viewing angles, such as a holographic 3D wall,” said Chu. “We hope that this work can provide a promising way to tackle this issue based on the currently limited display capability of spatial light modulators.”
    Story Source:
    Materials provided by University of Cambridge. The original text of this story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • A new brain-computer interface with a flexible backing

    Engineering researchers have invented an advanced brain-computer interface with a flexible and moldable backing and penetrating microneedles. Adding a flexible backing to this kind of brain-computer interface allows the device to more evenly conform to the brain’s complex curved surface and to more uniformly distribute the microneedles that pierce the cortex. The microneedles, which are 10 times thinner than a human hair, protrude from the flexible backing, penetrate the surface of the brain tissue without piercing surface venules, and record signals from nearby nerve cells evenly across a wide area of the cortex.
    This novel brain-computer interface has thus far been tested in rodents. The details were published online on February 25 in the journal Advanced Functional Materials. This work is led by a team in the lab of electrical engineering professor Shadi Dayeh at the University of California San Diego, together with researchers at Boston University led by biomedical engineering professor Anna Devor.
    This new brain-computer interface is on par with and outperforms the “Utah Array,” which is the existing gold standard for brain-computer interfaces with penetrating microneedles. The Utah Array has been demonstrated to help stroke victims and people with spinal cord injury. People with implanted Utah Arrays are able to use their thoughts to control robotic limbs and other devices in order to restore some everyday activities such as moving objects.
    The backing of the new brain-computer interface is flexible, conformable, and reconfigurable, while the Utah Array has a hard and inflexible backing. The flexibility and conformability of the novel microneedle array’s backing favor closer contact between the brain and the electrodes, which allows for better and more uniform recording of brain-activity signals. Working with rodents as model species, the researchers demonstrated stable broadband recordings that produced robust signals for the duration of the implant, which lasted 196 days.
    In addition, the way the soft-backed brain-computer interfaces are manufactured allows for larger sensing surfaces, which means that a significantly larger area of the brain surface can be monitored simultaneously. In the Advanced Functional Materials paper, the researchers demonstrate that a penetrating microneedle array with 1,024 microneedles successfully recorded signals triggered by precise stimuli from the brains of rats. This represents ten times more microneedles and ten times the area of brain coverage, compared to current technologies.
    Thinner and transparent backings
    These soft backings are thinner and lighter than the traditional glass backings used in these kinds of brain-computer interfaces. The researchers note in their Advanced Functional Materials paper that light, flexible backings may reduce irritation of the brain tissue that contacts the arrays of sensors.

  • Assessing the impact of automation on long-haul trucking

    As automated truck technology continues to be developed in the United States, there are still many questions about how the technology will be deployed and what its potential impacts will be on the long-haul trucking market.
    A new study by researchers at the University of Michigan and Carnegie Mellon University assessed how and where automation might replace operator hours in long-haul trucking.
    They found that up to 94% of operator hours may be impacted if automated trucking technology improves to operate in all weather conditions across the continental United States. Currently, automated trucking is being tested mainly in the Sun Belt.
    “Our results suggest that the impacts of automation may not happen all at once. If automation is restricted to Sun Belt states (including Florida, Texas and Arizona) — because the technology may not initially work well in rough weather — about 10% of the operator hours will be affected,” said study co-author Parth Vaishnav, assistant professor of sustainable systems at the U-M School for Environment and Sustainability.
    Using transportation data from the 2017 Commodity Flow Survey, which is produced by the U.S. Bureau of Transportation Statistics, U.S. Census Bureau and U.S. Department of Commerce, the study authors gathered information on trucking shipments and the operator hours used to fulfill those shipments.
    In addition, they explored different automated trucking deployment scenarios, including deployment in southern, sunny states; deployment in spring and summer months (April 1 to Sept. 30); deployment for journeys more than 500 miles; and deployment across the United States.
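    The bookkeeping behind such percentages can be sketched simply, though this is not the study's actual model and the shipment records below are made up: sum the operator hours of all shipments, then sum the hours of shipments that fall inside a deployment scenario and take the ratio.

```python
# Simple sketch, not the study's model: estimate the share of operator hours a
# deployment scenario could affect by filtering shipment records. Records and
# the Sun Belt state list here are illustrative placeholders.
SUN_BELT = {"FL", "TX", "AZ"}

shipments = [  # (origin_state, distance_miles, month, operator_hours) - made-up data
    ("TX", 800, 6, 16.0),
    ("MI", 300, 1, 6.5),
    ("FL", 1200, 8, 24.0),
    ("OH", 550, 4, 11.0),
]

def share_affected(records, scenario):
    """Fraction of total operator hours falling inside a deployment scenario."""
    total = sum(rec[-1] for rec in records)
    affected = sum(rec[-1] for rec in records if scenario(rec))
    return affected / total

sun_belt_only = lambda rec: rec[0] in SUN_BELT
long_haul_only = lambda rec: rec[1] > 500
print(f"Sun Belt scenario:  {share_affected(shipments, sun_belt_only):.0%}")
print(f">500-mile scenario: {share_affected(shipments, long_haul_only):.0%}")
```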

  • Optimizer tool designs, evaluates, maximizes solar-powered cooling systems

    Solar-powered adsorption cooling systems (SACS) have gained traction as a renewable energy technology that could provide clean power for air conditioning and refrigeration while significantly reducing the load on the electric grid. But these systems lack energy efficiency.
    In the Journal of Renewable and Sustainable Energy, published by AIP Publishing, researchers from Anna University in India describe an optimizer tool they developed to design, evaluate, and maximize the performance of different types of SACS under various operating scenarios. The tool was created in the Visual Basic programming language, which is easy to learn and enables rapid application development.
    “Our user-friendly optimizer is a multifunctional tool capable of designing and analyzing a complete solar powered adsorption refrigeration system,” co-author Edwin Mohan said. “Our tool is capable of assessing different combinations of operational parameters to determine the settings that maximize system performance.”
    SACS, which work by turning solar energy into heat, consist of a sorption bed, a condenser, a liquid storage tank, an expansion valve, and an evaporator. At night, water or another refrigerant is vaporized through the evaporator.
    During daylight hours, heat obtained from the sun causes the vapor to travel through the condenser, where it is reliquefied to release latent heat. The liquid eventually returns to the evaporator to repeat the process.
    One of the most important elements of SACS is the pairing of materials used in the adsorption process, in which atoms or molecules of a substance (the adsorbate) adhere to the surface of a porous material (the adsorbent), such as activated carbon or zeolite, chosen to maximize the surface-to-volume ratio.
    In their study, the researchers used their computational tool to test two adsorbent/adsorbate pairs: activated-carbon and methanol, and zeolite and water. The experiments were carried out over four days in a prototype SACS with a cooling capacity of 0.25 kilowatts. They found the activated-carbon-methanol combination achieved a higher coefficient of performance, but the zeolite-water adsorption system could operate at higher temperatures.
    The optimizer tool predicted the proper material mass concentration ratios, calculated the cooling load, predicted maximum performance, and carried out an overall performance analysis of the cooling system.
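    The headline figure such a tool reports, the coefficient of performance (COP), is simply the useful cooling delivered divided by the driving heat input. A minimal sketch with made-up energy balances, not values from the paper:

```python
# Minimal sketch with made-up numbers (not values from the paper): the
# coefficient of performance (COP) of an adsorption cooling cycle is the useful
# cooling delivered at the evaporator divided by the solar heat driving the cycle.
def coefficient_of_performance(cooling_kwh: float, heat_input_kwh: float) -> float:
    return cooling_kwh / heat_input_kwh

# Hypothetical daily energy balances for the two adsorbent/adsorbate pairs.
pairs = {
    "activated carbon / methanol": (1.8, 3.6),   # (cooling, solar heat input) in kWh
    "zeolite / water":             (1.5, 3.6),
}
for name, (cooling, heat_in) in pairs.items():
    print(f"{name:28s} COP = {coefficient_of_performance(cooling, heat_in):.2f}")
```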
    Although the study focused on residential home cooling systems, the researchers said their optimizer tool could be extended to higher capacity systems.
    Story Source:
    Materials provided by American Institute of Physics. Note: Content may be edited for style and length.