More stories

    New technology speeds up organic data transfer

    Researchers are pushing the boundaries of data speed with a new type of organic LED.
    An international research team, involving Newcastle University experts, developed a visible light communication (VLC) setup capable of a data rate of 2.2 Mb/s by employing a new type of organic light-emitting diode (OLED).
    To reach this speed, the scientists created new far-red/near-infrared, solution-processed OLEDs. By extending the spectral range to 700–1000 nm, they expanded the bandwidth and achieved the fastest-ever data rate for solution-based OLEDs.
    Described in the journal Light: Science & Applications, the new OLEDs create opportunities for new internet-of-things (IoT) connectivity, as well as wearable and implantable biosensor technology.
    The project is a collaboration between Newcastle University, University College London, the London Centre for Nanotechnology, the Institute of Organic Chemistry — Polish Academy of Sciences (Warsaw, Poland) and the Institute for the Study of Nanostructured Materials — National Research Council (CNR-ISMN, Bologna, Italy).
    Dr Paul Haigh, Lecturer in Communications at Newcastle University’s Intelligent Sensing and Communications Group, was part of the research team. He led the development of the real-time signal transmission, using information modulation formats developed in-house to achieve approximately 2.2 Mb/s.
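    A modulation format is the rule that maps bits onto light intensity. The team’s in-house formats are not described in this article, so the sketch below uses plain on-off keying as a stand-in to show the general idea; every parameter in it is an illustrative assumption.

    ```python
    # On-off keying (OOK): a minimal VLC modulation sketch, used here as a
    # hedged stand-in for the team's in-house formats. The OLED is driven
    # high for a 1 bit and low for a 0 bit; the receiver averages each bit
    # period of the received light and thresholds it.
    import numpy as np

    def ook_modulate(bits, samples_per_bit=8):
        """Map each bit to a block of LED drive samples (1.0 = on, 0.0 = off)."""
        return np.repeat(np.asarray(bits, dtype=float), samples_per_bit)

    def ook_demodulate(signal, samples_per_bit=8, threshold=0.5):
        """Average each bit period and threshold it back to a bit."""
        return (signal.reshape(-1, samples_per_bit).mean(axis=1) > threshold).astype(int)

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 1000)
    received = ook_modulate(bits) + rng.normal(0.0, 0.2, bits.size * 8)  # noisy optical channel
    print("bit error rate:", np.mean(ook_demodulate(received) != bits))
    ```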
    Dr Haigh said: “Our team developed highly efficient long wavelength (far red/near-infrared) polymer LEDs for the first time, free of heavy metals, which has been a long-standing research challenge in the organic optoelectronics community. Achieving such high data rates opens up opportunities for the integration of portable, wearable or implantable organic biosensors into visible/nearly (in)visible light communication links.”
    The demand for faster data transmission speeds is driving the popularity of light-emitting devices in VLC systems. LEDs have multiple applications and are used in lighting systems, mobile phones and TV displays. While OLEDs don’t offer the same speed as inorganic LEDs and laser diodes, they are cheaper to produce, recyclable and more sustainable.
    The data rate the team achieved through the pioneering device is high enough to support an indoor point-to-point link, with a view to IoT applications.
    The researchers highlight the possibility of achieving such data rates without computationally complex and power-demanding equalisers. Together with the absence of toxic heavy metals in the active layer of the OLEDs, the new VLC setup is promising for the integration of portable, wearable or implantable organic biosensors.

    Story Source:
    Materials provided by Newcastle University. Note: Content may be edited for style and length.

    Will telehealth services become the norm following the COVID-19 pandemic?

    The onset of the COVID-19 pandemic has broadly affected how health care is provided in the United States. One notable change is the expanded use of telehealth services, which have been quickly adopted by many health care providers and payers, including Medicare, to ensure patients’ access to care while reducing their risk of exposure to the coronavirus.
    In an article published in JAMA Oncology, Trevor Royce, MD, MS, MPH, an assistant professor of radiation oncology at the University of North Carolina Lineberger Comprehensive Cancer Center and UNC School of Medicine, said the routine use of telehealth for patients with cancer could have long-lasting and unforeseen effects on the provision and quality of care.
    “The COVID-19 pandemic has resulted in the rapid deregulation of telehealth services. This was done in part by lifting geographical restrictions and broadening patient, health care professional, and service eligibility,” said Royce, the article’s corresponding author. “It is likely aspects of telehealth will continue to be part of the health care delivery system beyond the pandemic.”
    The article’s other authors are UNC Lineberger’s Hanna K. Sanoff, MD, MPH, clinical medical director of the North Carolina Cancer Hospital and associate professor in the UNC School of Medicine Division of Hematology, and Amar Rewari, MD, MBA, from the Associates in Radiation Medicine, Adventist HealthCare Radiation Oncology Center in Rockville, Maryland.
    Royce said the widespread shift to telehealth was made possible, in part, by three federal economic stimulus packages and the Centers for Medicare and Medicaid Services making several policy changes in March that expanded Medicare recipients’ access to telehealth services.
    The policy changes included allowing telehealth services to be provided in a patient’s home. Medicare previously only paid for telehealth services provided in a facility in nonurban areas or areas with a health professional shortage. Medicare also approved payment for new patient appointments, expanded telehealth coverage to include 80 additional services, allowed for services to be carried out on a wider assortment of telecommunication systems — including remote video communications platforms, such as Zoom — and modified the restrictions on who can provide and supervise care.
    While the potential benefits of telehealth have been demonstrated during the pandemic, Royce said they must be balanced with concerns about care quality and safety.
    “There is a lot we don’t know about telehealth, and how its rapid adoption will impact our patients,” Royce said. “How will the safety and quality of care be impacted? How will we integrate essential components of the traditional doctor visit, including physical exam, lab work, scans and imaging? Will patients and doctors be more or less satisfied with their care? These are all potential downsides if we are not thoughtful with our adoption.”
    He said appropriate oversight of care is critical. There will be a continued need for objective patient assessments, such as patient-reported outcomes, physical examinations and laboratory tests, as well as for measuring care quality and monitoring for fraud. There are also a number of standard measures of care quality that can be implemented during the transition to telehealth, including tracking emergency room visits, hospitalizations and adverse events.
    Telehealth presents other challenges, as well. Though technology and internet access are now more widely available, they are not universally accessible. Where patients live, their socioeconomic status and their comfort level with technology can all be barriers to using telehealth services. A reliance on telehealth might also lower participation in clinical trials, which can require regular in-person appointments.
    “Telehealth can be used to improve access to care in traditionally hard-to-reach populations. However, it is important to acknowledge that if we are not thoughtful in its adoption, the opposite could be true,” Royce said. “For example, will lower socioeconomic groups have the same level of access to an adequate internet connection or cellular services that make a virtual video visit possible? Telehealth needs to be adopted with equity in mind.”

    'Blinking' crystals may convert CO2 into fuels

    Imagine tiny crystals that “blink” like fireflies and can convert carbon dioxide, a key cause of climate change, into fuels.
    A Rutgers-led team has created ultra-small titanium dioxide crystals that exhibit unusual “blinking” behavior and may help to produce methane and other fuels, according to a study in the journal Angewandte Chemie. The crystals, also known as nanoparticles, stay charged for a long time and could benefit efforts to develop quantum computers.
    “Our findings are quite important and intriguing in a number of ways, and more research is needed to understand how these exotic crystals work and to fulfill their potential,” said senior author Tewodros (Teddy) Asefa, a professor in the Department of Chemistry and Chemical Biology in the School of Arts and Sciences at Rutgers University-New Brunswick. He’s also a professor in the Department of Chemical and Biochemical Engineering in the School of Engineering.
    More than 10 million metric tons of titanium dioxide are produced annually, making it one of the most widely used materials, the study notes. It is used in sunscreens, paints, cosmetics and varnishes, for example. It’s also used in the paper and pulp, plastic, fiber, rubber, food, glass and ceramic industries.
    The team of scientists and engineers discovered a new way to make extremely small titanium dioxide crystals. While it’s still unclear why the engineered crystals blink and research is ongoing, the “blinking” is believed to arise from single electrons trapped on titanium dioxide nanoparticles. At room temperature, electrons — surprisingly — stay trapped on nanoparticles for tens of seconds before escaping and then becoming trapped again and again in a continuous cycle.
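    To make that cycle concrete, the toy model below simulates a crystal switching between “on” and “off” states with random dwell times. The tens-of-seconds scale comes from the article; the exponential dwell statistics and the mean values are assumptions made purely for illustration, not the paper’s analysis.

    ```python
    # Toy two-state model of blinking: random dwell in one state, switch,
    # dwell in the other, and so on in a continuous cycle. Mean dwell times
    # and exponential statistics are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)

    def blinking_trace(total_s=600.0, mean_on=20.0, mean_off=15.0):
        """Return a list of (state, duration) segments covering total_s seconds."""
        t, state, segments = 0.0, "on", []
        while t < total_s:
            dwell = rng.exponential(mean_on if state == "on" else mean_off)
            segments.append((state, dwell))
            t += dwell
            state = "off" if state == "on" else "on"
        return segments

    for state, dwell in blinking_trace(120.0):
        print(f"{state:>3} for {dwell:5.1f} s")
    ```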
    The crystals, which blink when exposed to a beam of electrons, could be useful for environmental cleanups, sensors, electronic devices and solar cells, and the research team will further explore their capabilities.

    Story Source:
    Materials provided by Rutgers University. Note: Content may be edited for style and length.

    Machining the heart: New predictor for helping to beat chronic heart failure

    Tens of millions of people worldwide have chronic heart failure, and only a little over half of them survive five years beyond their diagnosis. Now, researchers from Japan are helping doctors to sort patients into groups based on their specific needs, to improve medical outcomes.
    In a study recently published in the Journal of Nuclear Cardiology, researchers from Kanazawa University have used computer science to disentangle patients most at risk of sudden arrhythmic cardiac death from patients most at risk of heart failure death.
    Doctors have many methods at their disposal for diagnosing chronic heart failure. However, there’s a need to better identify which treatment to pursue, in accordance with the risks of each approach. When combined with conventional clinical tests, a molecule known as iodine-123-labelled MIBG can help discriminate between high-risk and low-risk patients. However, there has been no way to assess the risk of arrhythmic death separately from the risk of heart failure death, something the researchers at Kanazawa University aimed to address.
    “We used artificial intelligence to show that numerous variables work in synergy to better predict chronic heart failure outcomes,” explains lead author of the study Kenichi Nakajima. “Neither variable, in and of itself, is quite up to the task.”
    To do this, the researchers examined the medical records of 526 patients with chronic heart failure who underwent consecutive iodine-123-MIBG imaging and standard clinical testing. Conventional medical care proceeded as normal after imaging.
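    The article does not spell out the model the team used, so the sketch below is only a hypothetical stand-in for this kind of multivariable analysis: a logistic-regression classifier combining a few assumed variables (age, MIBG heart-to-mediastinum ratio, NYHA class) to estimate per-patient probabilities of the competing outcomes, run here on synthetic data.

    ```python
    # Hypothetical sketch of a multivariable outcome model. The study's
    # actual algorithm, variables and data are not given in this article;
    # everything here is invented to show the shape of the approach.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 526  # matches the cohort size mentioned above

    # Assumed features: age (years), MIBG heart-to-mediastinum ratio, NYHA class.
    X = np.column_stack([
        rng.normal(68, 10, n),    # age
        rng.normal(1.8, 0.4, n),  # H/M ratio (lower = worse cardiac innervation)
        rng.integers(1, 5, n),    # NYHA class I-IV
    ])
    # Outcomes: 0 = alive, 1 = heart-failure death, 2 = arrhythmic death.
    y = rng.choice([0, 1, 2], size=n, p=[0.7, 0.2, 0.1])

    model = LogisticRegression(max_iter=1000).fit(X, y)
    # predict_proba gives each patient's risk across the three outcomes,
    # which is what would let doctors assign patients to risk groups.
    print(model.predict_proba(X[:3]).round(2))
    ```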
    “The results were clear,” says Nakajima. “Heart failure death was most common in older adult patients with very low MIBG activity, worse New York Heart Association class, and comorbidities.”
    Furthermore, arrhythmia was most common in younger patients with moderately low iodine-123-MIBG activity and less serious heart failure. Doctors can use the Kanazawa University researchers’ results to tailor medical care; for example, they could choose the type of implantable defibrillator most likely to meet a patient’s needs.
    “It’s important to note that our results need to be confirmed in a larger study,” explains Nakajima. “In particular, the arrhythmia outcomes were perhaps too infrequent to be clinically reliable.”
    Given that chronic heart failure is a global problem that frequently kills within a few years after diagnosis, if not treated appropriately, it’s essential to start the most appropriate medical care as soon as possible. With a reliable test that predicts which patients most likely need which treatments, a greater number of patients are likely to live longer.

    Story Source:
    Materials provided by Kanazawa University. Note: Content may be edited for style and length.

    Recognizing fake images using frequency analysis

    They look deceptively real, but they are made by computers: so-called deep-fake images are generated by machine learning algorithms, and humans are pretty much unable to distinguish them from real photos. Researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence “Cyber Security in the Age of Large-Scale Adversaries” (Casa) have developed a new method for efficiently identifying deep-fake images. To this end, they analyse the images in the frequency domain, an established signal processing technique.
    The team presented their work at the International Conference on Machine Learning (ICML), one of the leading conferences in the field of machine learning, on 15 July 2020. Additionally, the researchers have made their code freely available online at https://github.com/RUB-SysSec/GANDCTAnalysis, so that other groups can reproduce their results.
    Interaction of two algorithms results in new images
    Deep-fake images — a portmanteau of “deep learning” and “fake” — are generated with the help of computer models, so-called Generative Adversarial Networks, GANs for short. Two algorithms work together in these networks: the first algorithm creates random images based on certain input data. The second algorithm needs to decide whether the image is a fake or not. If the image is found to be a fake, the second algorithm tells the first algorithm to revise the image — until the second algorithm no longer recognises it as a fake.
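    The adversarial loop is easiest to see in code. The minimal PyTorch sketch below uses toy two-dimensional data standing in for images; it illustrates the general GAN recipe, not the networks behind the face images discussed here, and every architecture and hyperparameter choice is an assumption.

    ```python
    # Minimal GAN loop: algorithm 1 (generator) creates samples, algorithm 2
    # (discriminator) judges real vs. fake, and each update pushes the other.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 8, 2
    generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    def real_batch(n=64):
        # Stand-in for real images: points on a ring.
        angles = torch.rand(n, 1) * 6.2832
        return torch.cat([angles.cos(), angles.sin()], dim=1)

    for step in range(2000):
        real = real_batch()
        fake = generator(torch.randn(real.size(0), latent_dim))

        # Algorithm 2 (discriminator): decide real vs. fake.
        opt_d.zero_grad()
        loss_d = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
                 bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
        loss_d.backward()
        opt_d.step()

        # Algorithm 1 (generator): revise until its output passes as real.
        opt_g.zero_grad()
        loss_g = bce(discriminator(fake), torch.ones(real.size(0), 1))
        loss_g.backward()
        opt_g.step()
    ```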
    In recent years, this technique has helped make deep-fake images more and more authentic. On the website www.whichfaceisreal.com, users can check if they’re able to distinguish fakes from original photos. “In the era of fake news, it can be a problem if users don’t have the ability to distinguish computer-generated images from originals,” says Professor Thorsten Holz from the Chair for Systems Security.
    For their analysis, the Bochum-based researchers used the data sets that also form the basis of the above-mentioned page “Which face is real.” In this interdisciplinary project, Joel Frank, Thorsten Eisenhofer and Professor Thorsten Holz from the Chair for Systems Security cooperated with Professor Asja Fischer from the Chair of Machine Learning as well as Lea Schönherr and Professor Dorothea Kolossa from the Chair of Digital Signal Processing.
    Frequency analysis reveals typical artefacts
    To date, deep-fake images have been analysed using complex statistical methods. The Bochum group chose a different approach by converting the images into the frequency domain using the discrete cosine transform. The generated image is thus expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions.
    The analysis has shown that images generated by GANs exhibit artefacts in the high-frequency range. For example, a typical grid structure emerges in the frequency representation of fake images. “Our experiments showed that these artefacts do not only occur in GAN generated images. They are a structural problem of all deep learning algorithms,” explains Joel Frank from the Chair for Systems Security. “We assume that the artefacts described in our study will always tell us whether the image is a deep-fake image created by machine learning,” adds Frank. “Frequency analysis is therefore an effective way to automatically recognise computer-generated images.”
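    The core of the check can be sketched in a few lines: transform an image with the two-dimensional discrete cosine transform and measure how much energy sits in the high-frequency corner, where GAN artefacts accumulate. This is a simplified illustration of the idea, not the authors’ released pipeline (see their repository linked above); the cutoff is an assumed parameter that would need calibrating on real data.

    ```python
    # Frequency-domain check for GAN artefacts: natural images concentrate
    # energy at low frequencies, so unusual high-frequency energy is a flag.
    import numpy as np
    from scipy.fft import dctn

    def high_freq_energy(image: np.ndarray, cutoff: float = 0.75) -> float:
        """Fraction of spectral energy above `cutoff` of the maximum frequency."""
        spectrum = np.abs(dctn(image.astype(np.float64), norm="ortho"))
        h, w = spectrum.shape
        # Select coefficients whose combined normalised frequency is high.
        mask = np.add.outer(np.arange(h) / h, np.arange(w) / w) > 2 * cutoff
        return spectrum[mask].sum() / spectrum.sum()

    # A natural photo would typically give a small value; a GAN image with
    # grid-like artefacts would give a noticeably larger one.
    ```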

    Story Source:
    Materials provided by Ruhr-University Bochum. Note: Content may be edited for style and length.

    Marine drifters: Interdisciplinary study explores plankton diversity

    Ocean plankton are the drifters of the marine world. They’re algae, animals, bacteria, or protists that are at the mercy of the tide and currents. Many are microscopic and hidden from view, barely observable with the naked eye, though others, like jellyfish, can grow relatively large.
    There’s one thing about these drifting critters that has puzzled ecologists for decades — the diversity among ocean plankton is much higher than expected. Generally, in any given ocean sample, there are many rare species of plankton and a small number of abundant species. Researchers from the Okinawa Institute of Science and Technology Graduate University (OIST) have published a paper in Science Advances that combines mathematical models with metagenomics and marine science to uncover why this might be the case.
    “For years, scientists have been asking why there are so many species in the ocean,” said Professor Simone Pigolotti, who leads OIST’s Biological Complexity Unit. Professor Pigolotti explained that plankton can be transported across very large distances by currents, so they don’t seem to be limited by dispersal. This would suggest that niche preference is the factor that determines species diversity — in other words, a single species will outcompete all other species if the environment suits them best, leading to communities with only a few, highly abundant species.
    “Our research explored the theory that ocean currents promote species diversity, not because they help plankton to disperse, but because they can actually limit dispersal by creating barriers,” said Professor Pigolotti. “In contrast, when we looked at samples from lakes, where there are little or no currents, we found more abundant species, but fewer species altogether.”
    At first glance, this might seem counter-intuitive. But while currents may carry plankton from one area to another, they also prevent the plankton from crossing to the other side of the current. Thus, these currents reduce competition and force each species of plankton to coexist with other species, albeit in small numbers.
    Combining DNA tests with mathematical models
    For over a century, ecologists have measured diversity by counting the number of species, such as birds or insects, in an area. This allowed them to find the proportions of abundant species versus rare species. Today, the task is streamlined through both quantitative modelling that can predict species distributions and metagenomics — instead of just counting species, researchers can efficiently collect all the DNA in a sample.

    “Simply counting the number of species in a sample is very time consuming,” said Professor Tom Bourguignon, who leads OIST’s Evolutionary Genomics Unit. “With advancements in sequencing technologies, we can run just one test and have several thousand DNA sequences that represent a good estimation of planktonic diversity.”
    For this study, the researchers were particularly interested in protists — microscopic, usually single-celled, planktonic organisms. The group created a mathematical model that considered the role of oceanic currents in determining the genealogy of protists through simulations. They couldn’t just simulate a protist community at the DNA level because there would be a huge number of individuals. So, instead, they simulated the individuals in a given sample from the ocean.
    To find out how closely related the individuals were, and whether they were of the same species, the researchers then looked back in time. “We created a trajectory that went back one hundred years,” said Professor Pigolotti. “If two individuals came from a common ancestor in the timescale of our simulation, then we classed them as the same species.”
    What they were specifically measuring was the number of species, and the number of individuals per species. The model was simulated with and without ocean currents. As the researchers had hypothesized, it showed that the presence of ocean currents caused a sharp increase in the number of protist species, but a decline in the number of individuals per species.
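    The sketch below is a cartoon of that backward-in-time logic under heavy simplifying assumptions: sampled individuals are traced as lineages on a line of sites, lineages that meet coalesce, and lineages still distinct after the time horizon count as separate species. Barriers, a crude stand-in for currents, block movement between adjacent sites. The study’s actual spatial model is far more detailed.

    ```python
    # Cartoon coalescent simulation: barriers (currents) block dispersal,
    # preventing lineages from meeting and merging, so more species remain.
    import random

    def count_species(n_sampled=200, n_sites=50, generations=100, barriers=()):
        # Each sampled individual starts as its own lineage at a random site.
        lineages = {i: random.randrange(n_sites) for i in range(n_sampled)}
        species_of = {i: i for i in range(n_sampled)}

        for _ in range(generations):
            # Lineages take a random step, but cannot cross a barrier.
            for lid, site in list(lineages.items()):
                new = min(max(site + random.choice((-1, 0, 1)), 0), n_sites - 1)
                edge = (min(site, new), max(site, new))
                lineages[lid] = site if edge in barriers else new
            # Lineages that meet coalesce: shared ancestor -> same species.
            by_site = {}
            for lid, site in list(lineages.items()):
                if site in by_site:
                    keeper_class = species_of[by_site[site]]
                    merged_class = species_of[lid]
                    for k, v in species_of.items():
                        if v == merged_class:
                            species_of[k] = keeper_class
                    del lineages[lid]
                else:
                    by_site[site] = lid
        return len(set(species_of.values()))

    random.seed(1)
    print("no barriers:  ", count_species())
    print("with barriers:", count_species(barriers={(12, 13), (24, 25), (37, 38)}))
    ```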
    To confirm the results of this model, the researchers then analyzed datasets from two studies of aquatic protists. The first dataset was of oceanic protists’ DNA sequences and the second, freshwater protists’ DNA sequences. They found that, on average, oceanic samples contained more rare species and fewer abundant species and, overall, had a larger number of species. This agreed with the model’s predictions.
    “Our results support the theory that ocean currents positively impact the diversity of rare aquatic protists by creating these barriers,” said Professor Pigolotti. “The project was very interdisciplinary. By combining theoretical physics, marine science, and metagenomics, we’ve shed new light on a classic problem in ecology, which is of relevance for marine biodiversity.”

    Scientists identify new material with potential for brain-like computing

    The most powerful and advanced computing is still primitive compared to the power of the human brain, says Chinedu E. Ekuma, Assistant Professor in Lehigh University’s Department of Physics.
    Ekuma’s lab, which aims to gain an understanding of the physical properties of materials, develops models at the interface of computation, theory, and experiment. One area of focus: two-dimensional (2D) materials. Also dubbed low-dimensional, these are crystalline nanomaterials that consist of a single layer of atoms. Their novel properties make them especially useful for the next generation of AI-powered electronics, known as neuromorphic, or brain-like, devices.
    Neuromorphic devices attempt to mimic how the human brain processes information more closely than current computing methods do. A key challenge in neuromorphic research is matching the human brain’s flexibility and its ability to learn from unstructured inputs while remaining energy efficient. According to Ekuma, early successes in neuromorphic computing relied mainly on conventional silicon-based materials that are energy inefficient.
    “Neuromorphic materials have a combination of computing memory capabilities and energy efficiency for brain-like applications,” he says.
    Now Ekuma and his colleagues at the Sensors and Electron Devices Directorate of the U.S. Army Research Laboratory have developed a new complex material design strategy for potential use in neuromorphic computing, using metallocene intercalation in hafnium disulfide (HfS2). The work is the first to demonstrate the effectiveness of a design strategy that functionalizes a 2D material with an organic molecule. It has been published in an article called “Dynamically reconfigurable electronic and phononic properties in intercalated HfS2” in Materials Today. Additional authors: Sina Najmaei, Adam A. Wilson, Asher C. Leff and Madan Dubey of the United States Army Research Laboratory.
    “We knew that low-dimensional materials showed novel properties, but we did not expect such high tunability of the HfS2-based system,” says Ekuma. “The strategy was a concerted effort and synergy between experiment and computation. It started with an afternoon coffee chat where my colleagues and I discussed exploring the possibility of introducing organic molecules into a gap, known as van der Waals gap, in 2D materials. This was followed by the material design and rigorous computations to test the feasibility. Based on the encouraging computational data, we proceeded to make the sample, characterize the properties, and then made a prototype device with the designed material.”
    Scholars in search of energy-efficient materials may be particularly interested in this research, as may industry, especially semiconductor companies designing logic gates and other electronic devices.
    “The key takeaway here is that complex materials design based on 2D materials is a promising route to achieving high performing and energy-efficient materials,” says Ekuma.

    Story Source:
    Materials provided by Lehigh University. Note: Content may be edited for style and length.

    A GoPro for beetles: Researchers create a robotic camera backpack for insects

    In the movie “Ant-Man,” the title character can shrink in size and travel by soaring on the back of an insect. Now researchers at the University of Washington have developed a tiny wireless steerable camera that can also ride aboard an insect, giving everyone a chance to see an Ant-Man view of the world.
    The camera, which streams video to a smartphone at 1 to 5 frames per second, sits on a mechanical arm that can pivot 60 degrees. This allows a viewer to capture a high-resolution, panoramic shot or track a moving object while expending a minimal amount of energy. To demonstrate the versatility of this system, which weighs about 250 milligrams — about one-tenth the weight of a playing card — the team mounted it on top of live beetles and insect-sized robots.
    The results will be published July 15 in Science Robotics.
    “We have created a low-power, low-weight, wireless camera system that can capture a first-person view of what’s happening from an actual live insect or create vision for small robots,” said senior author Shyam Gollakota, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “Vision is so important for communication and for navigation, but it’s extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects.”
    Typical small cameras, such as those used in smartphones, use a lot of power to capture wide-angle, high-resolution photos, and that doesn’t work at the insect scale. While the cameras themselves are lightweight, the batteries they need to support them make the overall system too big and heavy for insects — or insect-sized robots — to lug around. So the team took a lesson from biology.
    “Similar to cameras, vision in animals requires a lot of power,” said co-author Sawyer Fuller, a UW assistant professor of mechanical engineering. “It’s less of a big deal in larger creatures like humans, but flies are using 10 to 20% of their resting energy just to power their brains, most of which is devoted to visual processing. To help cut the cost, some flies have a small, high-resolution region of their compound eyes. They turn their heads to steer where they want to see with extra clarity, such as for chasing prey or a mate. This saves power over having high resolution over their entire visual field.”
    To mimic an animal’s vision, the researchers used a tiny, ultra-low-power black-and-white camera that can sweep across a field of view with the help of a mechanical arm. The arm moves when the team applies a high voltage, which makes the material bend and move the camera to the desired position. Unless the team applies more power, the arm stays at that angle for about a minute before relaxing back to its original position. This is similar to how people can keep their head turned in one direction for only a short period of time before returning to a more neutral position.

    “One advantage to being able to move the camera is that you can get a wide-angle view of what’s happening without consuming a huge amount of power,” said co-lead author Vikram Iyer, a UW doctoral student in electrical and computer engineering. “We can track a moving object without having to spend the energy to move a whole robot. These images are also at a higher resolution than if we used a wide-angle lens, which would create an image with the same number of pixels divided up over a much larger area.”
    The camera and arm are controlled from a smartphone via Bluetooth at distances of up to 120 meters, just a little longer than a football field.
    The researchers attached their removable system to the backs of two different types of beetles — a death-feigning beetle and a Pinacate beetle. Similar beetles have been known to be able to carry loads heavier than half a gram, the researchers said.
    “We made sure the beetles could still move properly when they were carrying our system,” said co-lead author Ali Najafi, a UW doctoral student in electrical and computer engineering. “They were able to navigate freely across gravel, up a slope and even climb trees.”
    The beetles also lived for at least a year after the experiment ended.

    “We added a small accelerometer to our system to be able to detect when the beetle moves. Then it only captures images during that time,” Iyer said. “If the camera is just continuously streaming without this accelerometer, we could record one to two hours before the battery died. With the accelerometer, we could record for six hours or more, depending on the beetle’s activity level.”
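    As a rough illustration of that duty-cycling idea, the sketch below polls an accelerometer and only powers the camera while motion is detected. The driver calls, threshold and polling interval are all hypothetical placeholders; the real system runs on custom low-power hardware, not Python.

    ```python
    # Accelerometer-gated capture: spend camera power only while the beetle
    # moves, extending battery life. Sensor and camera APIs are hypothetical.
    import time

    MOTION_THRESHOLD = 0.05  # g's above the gravity baseline; an assumed value

    def capture_loop(accelerometer, camera, idle_poll_s=0.1):
        while True:
            ax, ay, az = accelerometer.read()              # hypothetical driver call
            magnitude = abs((ax**2 + ay**2 + az**2) ** 0.5 - 1.0)
            if magnitude > MOTION_THRESHOLD:
                camera.capture_and_stream()                # record only during motion
            else:
                time.sleep(idle_poll_s)                    # stay in low-power idle
    ```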
    The researchers also used their camera system to design the world’s smallest terrestrial, power-autonomous robot with wireless vision. This insect-sized robot uses vibrations to move and consumes almost the same power as low-power Bluetooth radios need to operate.
    The team found, however, that the vibrations shook the camera and produced distorted images. The researchers solved this issue by having the robot stop momentarily, take a picture and then resume its journey. With this strategy, the system was still able to move about 2 to 3 centimeters per second — faster than any other tiny robot that uses vibrations to move — and had a battery life of about 90 minutes.
    While the team is excited about the potential for lightweight and low-power mobile cameras, the researchers acknowledge that this technology comes with a new set of privacy risks.
    “As researchers we strongly believe that it’s really important to put things in the public domain so people are aware of the risks and so people can start coming up with solutions to address them,” Gollakota said.
    Applications could range from biology to exploring novel environments, the researchers said. The team hopes that future versions of the camera will require even less power and be battery free, potentially solar-powered.
    “This is the first time that we’ve had a first-person view from the back of a beetle while it’s walking around. There are so many questions you could explore, such as how does the beetle respond to different stimuli that it sees in the environment?” Iyer said. “But also, insects can traverse rocky environments, which is really challenging for robots to do at this scale. So this system can also help us out by letting us see or collect samples from hard-to-navigate spaces.”
    This research was funded by a Microsoft fellowship and the National Science Foundation.
    Video: https://www.youtube.com/watch?v=115BGUZopHs&feature=emb_logo