More stories

  • Hacking and loss of driving skills are major consumer concerns for self-driving cars

    A new study from the University of Kent, Toulouse Business School, ESSCA School of Management (Paris) and ESADE Business School (Spain) has revealed the three primary risks and benefits perceived by consumers towards autonomous vehicles (self-driving cars).
    The increased development of autonomous vehicles worldwide inspired the researchers to investigate how consumers feel about the growing market, and particularly what dissuades them from purchasing, in order to understand the challenges of marketing the product. The following perceptions, gained through qualitative interviews and quantitative surveys, are key to consumer decision making around autonomous vehicles.
    The three key perceived risks for autonomous vehicles, according to surveyed consumers, can be classified as:
    1. Performance (safety) risks of the vehicles’ Artificial Intelligence and sensor systems
    2. Loss of competencies by the driving public (primarily the ability to drive and use roads)
    3. Privacy and security breaches, similar to a personal computer or online account being hacked.
    These concerns, particularly regarding road and passenger safety, have long shaped how automotive companies market their products. Marketers have advertised continued improvements to the product’s technology in a bid to ease safety concerns. However, the loss of driving skills and privacy breaches remain major concerns and will need to be addressed as these products become more widespread.
    The three perceived benefits to consumers were:
    1. Freeing up time (otherwise spent driving)
    2. Removing the issue of human error (accidents caused by human drivers)
    3. Outperforming human capacity, such as improved route and traffic prediction and handling speed.
    Ben Lowe, Professor of Marketing at the University of Kent and co-author of the study, said: ‘The results of this study illustrate the perceived benefits of autonomous vehicles for consumers and how marketers can appeal to consumers in this growing market. However, we will now see how the manufacturers respond to concerns of these key perceived risks as they are major factors in the decision making of consumers, with the safety of the vehicles’ performance the greatest priority. Our methods used in this study will help clarify for manufacturers and marketers that, second to the issue of online account security, they will now have to address concerns that their product is reducing the autonomy of the consumer.’
    Story Source:
    Materials provided by University of Kent. Original written by Sam Wood. Note: Content may be edited for style and length.

  • The last 30 years were the hottest on record for the United States

    There’s a new normal for U.S. weather. On May 4, the National Oceanic and Atmospheric Administration announced an official change to its reference values for temperature and precipitation. Instead of using the average values from 1981 to 2010, NOAA’s new “climate normals” will be the averages from 1991 to 2020.

    This new period is the warmest on record for the country. Compared with the previous 30-year span, for example, the average temperature across the contiguous United States rose from 11.6° Celsius (52.8° Fahrenheit) to 11.8° C (53.3° F). Some of the largest increases were in the South and Southwest — and that same region also showed a dramatic decrease in precipitation (SN: 8/17/20).

    The United States and other members of the World Meteorological Organization are required to update their climate normals every 10 years. These data put daily weather events in historical context and also help track changes in drought conditions, energy use and freeze risks for farmers.

    That moving window of averages for the United States also tells a stark story about the accelerating pace of climate change. When each 30-year period is compared with the average temperatures from 1901 to 2000, no part of the country is cooler now than it was during the 20th century. And temperatures in large swaths of the country, from the American West to the Northeast, are 1 to 2 degrees Fahrenheit higher.
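    To make the idea of a rolling “climate normal” concrete, here is a minimal sketch of how a 30-year average is computed and compared against a 20th-century baseline; the yearly temperature series and its values are hypothetical placeholders, not NOAA data.

    ```python
    # Minimal sketch of computing rolling 30-year "climate normals" and comparing
    # them with a 20th-century baseline. The yearly_mean_temp series is a
    # hypothetical placeholder, not NOAA data.

    yearly_mean_temp = {year: 11.0 + 0.01 * (year - 1901) for year in range(1901, 2021)}

    def climate_normal(temps, start, end):
        """Average annual temperature over the inclusive window [start, end]."""
        window = [temps[y] for y in range(start, end + 1)]
        return sum(window) / len(window)

    baseline_1901_2000 = climate_normal(yearly_mean_temp, 1901, 2000)
    normal_1981_2010 = climate_normal(yearly_mean_temp, 1981, 2010)
    normal_1991_2020 = climate_normal(yearly_mean_temp, 1991, 2020)

    print(f"20th-century baseline: {baseline_1901_2000:.2f} C")
    print(f"1981-2010 normal:      {normal_1981_2010:.2f} C")
    print(f"1991-2020 normal:      {normal_1991_2020:.2f} C "
          f"(anomaly vs. baseline: {normal_1991_2020 - baseline_1901_2000:+.2f} C)")
    ```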

  • Scientific software – Quality not always good

    Computational tools are indispensable in almost all scientific disciplines. Especially where large amounts of research data are generated and need to be processed quickly, reliable and carefully developed software is crucial for analyzing and correctly interpreting such data. Nevertheless, scientific software can have quality deficiencies. To evaluate software quality in an automated way, computer scientists at Karlsruhe Institute of Technology (KIT) and Heidelberg Institute for Theoretical Studies (HITS) have designed the SoftWipe tool.
    “Adherence to coding standards is rarely considered in scientific software, although a lack of adherence can even lead to incorrect scientific results,” says Professor Alexandros Stamatakis, who works both at HITS and at the Institute of Theoretical Informatics (ITI) of KIT. The open-source SoftWipe software tool provides a fast, reliable, and cost-effective approach to addressing this problem by automatically assessing adherence to software development standards. Besides designing the above-mentioned tool, the computer scientists benchmarked 48 scientific software tools from different research areas to assess the degree to which they meet coding standards.
    “SoftWipe can also be used in the review process of scientific software and support the software selection process,” adds Adrian Zapletal. The Master’s student and his fellow student Dimitri Höhler have substantially contributed to the development of SoftWipe. To select assessment criteria, they relied on existing standards that are used in safety-critical environments, such as at NASA or CERN.
    “Our research revealed enormous discrepancies in software quality,” says co-author Professor Carsten Sinz of ITI. Many programs, such as covid-sim, which is used in the UK for mathematical modeling of the COVID-19 disease, had a very low quality score and thus performed poorly in the ranking. The researchers recommend using programs such as SoftWipe by default in the selection and review process of software for scientific purposes.
    How Does SoftWipe Work?
    SoftWipe is a pipeline written in the Python3 programming language that uses several static and dynamic code analyzers (most of them freely available) to assess the code quality of software written in C/C++. In this process, SoftWipe compiles the software and then executes it so that programming errors can be detected during execution. Based on the output of the code analysis tools, SoftWipe computes an overall quality score between 0 (poor) and 10 (excellent).
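    As a rough illustration of this kind of scoring pipeline (a sketch, not SoftWipe’s actual code), the snippet below normalizes the warning densities reported by a few analyzers and combines them into a single 0–10 score; the analyzer names, weights, and normalization are assumptions for illustration only.

    ```python
    # Illustrative sketch of aggregating analyzer outputs into a 0-10 quality
    # score, loosely modeled on the idea behind SoftWipe (not its actual code).
    # Analyzer names, weights, and the normalization are hypothetical.

    # Hypothetical raw results: warnings per 1,000 lines of code per analyzer.
    analyzer_warnings_per_kloc = {
        "compiler_warnings": 4.0,
        "static_analyzer": 12.5,
        "sanitizer_runtime_errors": 0.5,
    }

    # Hypothetical relative importance of each analyzer.
    weights = {
        "compiler_warnings": 0.3,
        "static_analyzer": 0.4,
        "sanitizer_runtime_errors": 0.3,
    }

    def sub_score(warnings_per_kloc, worst_case=20.0):
        """Map a warning density to a 0 (poor) .. 10 (excellent) sub-score."""
        clipped = min(warnings_per_kloc, worst_case)
        return 10.0 * (1.0 - clipped / worst_case)

    overall = sum(weights[name] * sub_score(value)
                  for name, value in analyzer_warnings_per_kloc.items())
    print(f"Overall quality score: {overall:.1f} / 10")
    ```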
    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT). Note: Content may be edited for style and length.

  • Quantum electronics: 'Bite' defects in bottom-up graphene nanoribbons

    Graphene nanoribbons (GNRs), narrow strips of single-layer graphene, have interesting physical, electrical, thermal, and optical properties because of the interplay between their crystal and electronic structures. These novel characteristics have pushed them to the forefront in the search for ways to advance next-generation nanotechnologies.
    While bottom-up fabrication techniques now allow the synthesis of a broad range of graphene nanoribbons that feature well-defined edge geometries, widths, and heteroatom incorporations, the question of whether or not structural disorder is present in these atomically precise GNRs, and to what extent, is still subject to debate. The answer to this riddle is of critical importance to any potential applications or resulting devices.
    Collaboration between Oleg Yazyev’s Chair of Computational Condensed Matter Physics theory group at EPFL and Roman Fasel’s experimental nanotech@surfaces Laboratory at Empa has produced two papers that look at this issue in armchair-edged and zigzag-edged graphene nanoribbons.
    “In these two works, we focused on characterizing ‘bite’ defects in graphene nanoribbons and their implications on GNR properties,” explains Gabriela Borin Barin from Empa’s nanotech@surfaces lab. “We observed that even though the presence of these defects can disrupt GNRs’ electronic transport, they could also yield spin-polarized currents. These are important findings in the context of the potential applications of GNRs in nanoelectronics and quantum technology.”
    Armchair graphene nanoribbons
    The paper “Quantum electronic transport across ‘bite’ defects in graphene nanoribbons,” recently published in 2D Materials, specifically looks at 9-atom wide armchair graphene nanoribbons (9-AGNRs). The mechanical robustness, long-term stability under ambient conditions, easy transferability onto target substrates, scalability of fabrication, and suitable band-gap width of these GNRs have made them one of the most promising candidates for integration as active channels in field-effect transistors (FETs). Indeed, among the graphene-based electronic devices realized so far, 9-AGNR-FETs display the highest performance.

  • Data from smartwatches can help predict clinical blood test results

    Smartwatches and other wearable devices may be used to sense illness, dehydration and even changes to the red blood cell count, according to biomedical engineers and genomics researchers at Duke University and the Stanford University School of Medicine.
    The researchers say that, with the help of machine learning, wearable device data on heart rate, body temperature and daily activities may be used to predict health measurements that are typically observed during a clinical blood test. The study appears in Nature Medicine on May 24, 2021.
    During a doctor’s office visit, a medical worker usually measures a patient’s vital signs, including their height, weight, temperature and blood pressure. Although this information is filed away in a person’s long-term health record, it isn’t usually used to create a diagnosis. Instead, physicians will order clinical lab tests of a patient’s urine or blood to gather specific biological information that helps guide health decisions.
    These vital measurements and clinical tests can inform a doctor about specific changes to a person’s health, like if a patient has diabetes or has developed pre-diabetes, if they’re getting enough iron or water in their diet, and if their red or white blood cell count is in the normal range.
    But these tests are not without their drawbacks. They require an in-person visit, which isn’t always easy for patients to arrange, and procedures like a blood draw can be invasive and uncomfortable. Most notably, these vitals and clinical samples are not usually taken at regular and controlled intervals. They only provide a snapshot of a patient’s health on the day of the doctor’s visit, and the results can be influenced by a host of factors, like when a patient last ate or drank, stress, or recent physical activity.
    “There is a circadian (daily) variation in heart rate and in body temperature, but these single measurements in clinics don’t capture that natural variation,” said Duke’s Jessilyn Dunn, a co-lead and co-corresponding author of the study. “But devices like smartwatches or Fitbits have the ability to track these measurements and natural changes over a prolonged period of time and identify when there is variation from that natural baseline.”
    To gain a consistent and fuller picture of patients’ health, Dunn, an assistant professor of biomedical engineering at Duke, Michael Snyder, a professor and chair of genetics at Stanford, and their team wanted to explore whether long-term data gathered from wearable devices could match changes observed during clinical tests and help indicate health abnormalities.
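    As a rough illustration of the general approach (a sketch under assumed data, not the study’s actual model), the snippet below fits a simple regression from simulated wearable-derived features to a simulated lab value; the feature set, the synthetic data, and the choice of ridge regression are all assumptions.

    ```python
    # Toy sketch: predict a lab value from wearable-sensor features with a simple
    # regression. NOT the study's model; features, data, and Ridge are assumptions.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500

    # Simulated daily summaries: resting heart rate (bpm), skin temperature (C),
    # and step count (thousands).
    X = np.column_stack([
        rng.normal(65, 8, n),      # resting heart rate
        rng.normal(33.5, 0.4, n),  # skin temperature
        rng.normal(7, 3, n),       # daily steps (thousands)
    ])

    # Simulated target: a hematocrit-like lab value loosely tied to the features.
    y = 42 + 0.05 * (X[:, 0] - 65) - 1.0 * (X[:, 1] - 33.5) + rng.normal(0, 1.0, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = Ridge(alpha=1.0).fit(X_train, y_train)
    print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
    ```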

  • Machine learning platform identifies activated neurons in real-time

    Biomedical engineers at Duke University have developed an automatic process that uses streamlined artificial intelligence (AI) to identify active neurons in videos faster and more accurately than current techniques.
    The technology should allow researchers to watch an animal’s brain activity in real time as the animal behaves.
    The work appears May 20 in Nature Machine Intelligence.
    One of the ways researchers study the activity of neurons in living animals is through a process known as two-photon calcium imaging, which makes active neurons appear as flashes of light. Analyzing these videos, however, typically requires a human to circle every burst of intensity they see, in a process called segmentation. While this may seem straightforward, these bursts often overlap in regions where thousands of neurons are imaged simultaneously. Analyzing just a five-minute video this way could take weeks or even months.
    “People try to figure out how the brain works by recording the activity of neurons as an animal does a behavior to study the relationship between the two,” said Yiyang Gong, the primary author on the paper. “But manual segmentation creates a big bottleneck and doesn’t allow researchers to see the activation of the neurons in real-time.”
    Gong, an assistant professor of biomedical engineering, and Sina Farsiu, a professor of biomedical engineering, previously addressed this bottleneck in a 2019 paper, where they shared the development of a deep-learning platform that maps active neurons as accurately as humans in a fraction of the time. But because videos can be tens of gigabytes, researchers still have to wait hours or days for them to process.
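    As a toy illustration of the segmentation task described above (not the authors’ deep-learning platform), the snippet below flags “active” pixels in a simulated calcium-imaging movie by thresholding the change in fluorescence over a baseline and grouping neighboring pixels; the simulated movie and threshold are assumptions.

    ```python
    # Toy illustration of segmenting "active" regions in a calcium-imaging movie
    # by simple delta-F/F thresholding. NOT the authors' deep-learning method;
    # the simulated movie, threshold, and connected-component step are assumptions.
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    frames, height, width = 100, 64, 64

    # Simulated movie: noisy baseline plus two "neurons" that flash mid-recording.
    movie = rng.normal(100, 2, size=(frames, height, width))
    movie[40:60, 10:14, 10:14] += 30   # neuron 1 flash
    movie[45:55, 40:45, 40:45] += 25   # neuron 2 flash

    baseline = movie[:20].mean(axis=0)       # per-pixel baseline fluorescence F0
    dff = (movie - baseline) / baseline      # delta-F/F for every frame and pixel
    active_mask = dff.max(axis=0) > 0.15     # pixels that ever exceed threshold

    # Group neighboring active pixels into candidate neurons (connected components).
    labels, n_regions = ndimage.label(active_mask)
    print(f"Detected {n_regions} candidate active regions")
    ```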

  • AI spots neurons better than human experts

    A new combination of optical coherence tomography (OCT), adaptive optics and deep neural networks should enable better diagnosis and monitoring for neuron-damaging eye and brain diseases like glaucoma.
    Biomedical engineers at Duke University led a multi-institution consortium to develop the process, which easily and precisely tracks changes in the number and shape of retinal ganglion cells in the eye.
    This work appears in a paper published on May 3 in the journal Optica.
    The retina of the eye is an extension of the central nervous system. Ganglion cells are one of the primary neurons in the eye that process and send visual information to the brain. In many neurodegenerative diseases like glaucoma, ganglion cells degenerate and disappear, leading to irreversible blindness. Traditionally, researchers use OCT, an imaging technology similar to ultrasound that uses light instead of sound, to peer beneath layers of eye tissue to diagnose and track the progression of glaucoma and other eye diseases.
    Although OCT allows researchers to efficiently view the ganglion cell layer in the retina, the technique is only sensitive enough to show the thickness of the cell layer — it can’t reveal individual ganglion cells. This hinders early diagnosis or rapid tracking of disease progression, as large numbers of ganglion cells need to disappear before physicians can see changes in thickness.
    To remedy this, a recent technology called adaptive optics OCT (AO-OCT) enables imaging sensitive enough to view individual ganglion cells. Adaptive optics is a technology that minimizes the effect of optical aberrations that occur when examining the eye, which are a major limiting factor in achieving high resolution in OCT imaging.

  • Quantum sensing: Odd angles make for strong spin-spin coupling

    Sometimes things are a little out of whack, and it turns out to be exactly what you need.
    That was the case when orthoferrite crystals turned up at a Rice University laboratory slightly misaligned. Those crystals inadvertently became the basis of a discovery that should resonate with researchers studying spintronics-based quantum technology.
    Rice physicist Junichiro Kono, alumnus Takuma Makihara and their collaborators found that an orthoferrite material, in this case yttrium iron oxide, placed in a high magnetic field showed uniquely tunable, ultrastrong interactions between magnons in the crystal.
    Orthoferrites are iron oxide crystals with the addition of one or more rare-earth elements.
    Magnons are quasiparticles, ghostly constructs that represent the collective excitation of electron spin in a crystal lattice.
    What one has to do with the other is the basis of a study that appears in Nature Communications, where Kono and his team describe an unusual coupling between two magnons dominated by antiresonance, through which both magnons gain or lose energy simultaneously.