More stories

  • Why computer security advice is more confusing than it should be

    If you find the computer security guidelines you get at work confusing and not very useful, you’re not alone. A new study highlights a key problem with how these guidelines are created, and outlines simple steps that would improve them — and probably make your computer safer.
    At issue are the computer security guidelines that organizations like businesses and government agencies provide their employees. These guidelines are generally designed to help employees protect personal and employer data and minimize risks associated with threats such as malware and phishing scams.
    “As a computer security researcher, I’ve noticed that some of the computer security advice I read online is confusing, misleading or just plain wrong,” says Brad Reaves, corresponding author of the new study and an assistant professor of computer science at North Carolina State University. “In some cases, I don’t know where the advice is coming from or what it’s based on. That was the impetus for this research. Who’s writing these guidelines? What are they basing their advice on? What’s their process? Is there any way we could do better?”
    For the study, researchers conducted 21 in-depth interviews with professionals who are responsible for writing computer security guidelines for organizations including large corporations, universities and government agencies.
    “The key takeaway here is that the people writing these guidelines try to give as much information as possible,” Reaves says. “That’s great, in theory. But the writers don’t prioritize the advice that’s most important. Or, more specifically, they don’t deprioritize the points that are significantly less important. And because there is so much security advice to include, the guidelines can be overwhelming — and the most important points get lost in the shuffle.”
    The researchers found that one reason security guidelines can be so overwhelming is that guideline writers tend to incorporate every possible item from a wide variety of authoritative sources.
    “In other words, the guideline writers are compiling security information, rather than curating security information for their readers,” Reaves says.

  • Novel thermal sensor could help drive down the heat

    Excess heat from electronic or mechanical devices is a sign, or a cause, of inefficient performance. In many cases, embedded sensors to monitor the flow of heat could help engineers alter device behavior or designs to improve their efficiency. For the first time, researchers have exploited a novel thermoelectric phenomenon to build a thin sensor that can visualize heat flow in real time. The sensor could be built deep inside devices where other kinds of sensors are impractical. It is also quick, cheap and easy to manufacture using well-established methods.
    According to the law of conservation of energy, energy is never created or destroyed, but only changes from one form to another. All energy eventually ends up as heat. That can be useful, for example when we want to heat our homes in winter, or detrimental, when we want to cool something down or get the most out of a battery-driven application. In any case, the better we can manage the thermal behavior of a device, the better we can engineer around this inevitable effect and improve its efficiency. This is easier said than done, however: knowing how heat flows inside a complex, miniature or hazardous device ranges from difficult to impossible, depending on the application.
    Inspired by this problem, Project Associate Professor Tomoya Higo and Professor Satoru Nakatsuji from the Department of Physics at the University of Tokyo, and their team, which included a corporate partnership, set out to find a solution. “The amount of heat conducted through a material is known as its heat flux. Finding new ways to measure this could not only help improve device efficiency, but also safety, as batteries with poor thermal management can be unsafe, and even health, as various health or lifestyle issues can relate to body heat,” said Higo. “But finding a sensor technology to measure heat flux, while also satisfying a number of other conditions, such as robustness, cost efficiency, ease of manufacture and so on, is not easy. Typical thermal diode devices are relatively large and only give a value for temperature in a specific area, rather than an image, of the heat flux across an entire surface.”
    The team explored how a heat flux sensor consisting of certain special magnetic materials and electrodes behaves under complex patterns of heat flow. The magnetic material, based on iron and gallium, exhibits a phenomenon known as the anomalous Nernst effect (ANE), in which a heat current through the material is converted into a transverse electrical signal. This is not the only magnetic effect that can turn heat into power: the Seebeck effect can actually generate more electrical power, but it requires bulk material, and such materials are brittle and hard to work with. ANE, on the other hand, allowed the team to engineer their device on an incredibly thin and malleable sheet of plastic.
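    As a rough numerical sketch of the sensing principle (all values below are assumed orders of magnitude for illustration, not the published device parameters), each element of such a sensor converts the local heat flux into a small transverse voltage, so an array of elements yields a heat-flux image:

```python
# Toy model of an ANE heat-flux imager. For magnetization perpendicular to the
# heat flow, the ANE produces a transverse field proportional to the local
# temperature gradient, which Fourier's law ties to the heat flux q = kappa * dT.
S_ane = 3e-6      # V/K, assumed ANE coefficient for an Fe-Ga film
kappa = 15.0      # W/(m*K), assumed film thermal conductivity
length = 1e-3     # m, assumed electrode length of one sensing element

def element_voltage(q):
    """Voltage of one element for a perpendicular heat flux q (W/m^2)."""
    grad_t = q / kappa              # temperature gradient across the film
    return S_ane * grad_t * length  # transverse ANE voltage

# A hot spot in the corner of a 4x4 element array:
flux_map = [[1e4 if (i, j) == (0, 0) else 1e3 for j in range(4)] for i in range(4)]
volt_map = [[element_voltage(q) for q in row] for row in flux_map]
print(volt_map[0][0] / volt_map[1][1])  # the hot spot stands out in the image
```

Because each element reports continuously, scanning the whole array gives the real-time heat-flux image described above.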
    “By finding the right magnetic and electrode materials and then applying them in a special repeating pattern, we created microscopic electronic circuits that are flexible, robust, cheap and easy to produce, and most of all are very good at outputting heat flux data in real time,” said Higo. “Our method involves rolling a thin sheet of clear, strong and lightweight PET plastic as a base layer, with magnetic and electrode materials sputtered onto it in thin and consistent layers. We then etch our desired patterns into the resultant film, similar to how electronic circuits are made.”
    The team designed the circuits specifically to boost the ANE while suppressing the Seebeck effect, which would otherwise interfere with the data-gathering potential of the ANE. Previous attempts to achieve this could not be easily scaled up or commercialized, making this sensor the first of its kind.
    “I envisage seeing downstream applications such as power generation or data centers, where heat impedes efficiency. But as the world becomes more automated, we might see these kinds of sensors in automated manufacturing environments where they could improve our ability to predict machine failures, certain safety issues, and more,” said Nakatsuji. “With further developments, we might even see internal medical applications to help doctors produce internal heat maps of specific areas of the body, or organs, to aid in imaging and diagnosis.”

  • Link found between childhood television watching and adulthood metabolic syndrome

    A University of Otago study has added weight to the evidence that watching too much television as a child can lead to poor health in adulthood.
    The research, led by Professor Bob Hancox, of the Department of Preventive and Social Medicine, and published this week in the journal Pediatrics, found that children who watched more television were more likely to develop metabolic syndrome as an adult.
    Metabolic syndrome is a cluster of conditions including high blood pressure, high blood sugar, excess body fat, and abnormal cholesterol levels that lead to an increased risk of heart disease, diabetes and stroke.
    Using data from 879 participants of the Dunedin study, researchers found those who watched more television between the ages of 5 and 15 were more likely to have these conditions at age 45.
    Participants were asked about their television viewing at ages 5, 7, 9, 11, 13 and 15. On average, they watched just over two hours per weekday.
    “Those who watched the most had a higher risk of metabolic syndrome in adulthood,” Professor Hancox says.
    “More childhood television viewing time was also associated with a higher risk of overweight and obesity and lower physical fitness.”
    Boys watched slightly more television than girls, and metabolic syndrome was more common in men than in women (34 percent and 20 percent, respectively). However, the link between childhood television viewing time and adult metabolic syndrome was seen in both sexes, and may even be stronger in women.

  • AI predicts the work rate of enzymes

    Enzymes play a key role in cellular metabolic processes. To enable the quantitative assessment of these processes, researchers need to know the so-called “turnover number” (for short: kcat) of the enzymes. In the scientific journal Nature Communications, a team of bioinformaticians from Heinrich Heine University Düsseldorf (HHU) now describes a tool for predicting this parameter for various enzymes using AI methods.
    Enzymes are important biocatalysts in all living cells. They are normally large proteins, which bind smaller molecules — so-called substrates — and then convert them into other molecules, the “products.” Without enzymes, the reaction that converts the substrates into the products could not take place, or could only do so at a very low rate. Most organisms possess thousands of different enzymes. Enzymes have many applications in a wide range of biotechnological processes and in everyday life — from the proving of bread dough to detergents.
    The maximum speed at which a specific enzyme can convert its substrates into products is determined by the so-called turnover number kcat. It is an important parameter for quantitative research on enzyme activities and plays a key role in understanding cellular metabolism.
    However, it is time-consuming and expensive to determine kcat turnover numbers in experiments, which is why they are not known for the vast majority of reactions. The Computational Cell Biology research group at HHU headed by Professor Dr Martin Lercher has now developed a new tool called TurNuP to predict the kcat turnover numbers of enzymes using AI methods.
    To train a kcat prediction model, information about the enzymes and catalysed reactions was converted into numerical vectors using deep learning models. These numerical vectors served as the input for a machine learning model — a so-called gradient boosting model — which predicts the kcat turnover numbers.
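    The pipeline described here, numerical feature vectors fed to a gradient boosting model, can be sketched end to end. The example below is purely illustrative: random vectors stand in for the deep-learning embeddings of enzymes and reactions, and a tiny stump-based booster stands in for the actual TurNuP model, but the mechanics (fit each weak learner to the current residuals, add it with a small learning rate) are the same.

```python
import random

# Hypothetical data: each "enzyme" is a feature vector (in TurNuP these come
# from deep-learning embeddings); the target plays the role of log10(kcat).
random.seed(0)
X = [[random.random() for _ in range(4)] for _ in range(200)]
y = [2.0 * x[0] - 1.0 * x[2] + 0.1 * random.random() for x in X]

def fit_stump(X, resid):
    """Find the single (feature, threshold) split minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in (0.25, 0.5, 0.75):
            left = [r for x, r in zip(X, resid) if x[j] <= t]
            right = [r for x, r in zip(X, resid) if x[j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - ml) ** 2 for r in left)
                   + sum((r - mr) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    return best[1:]

def boost(X, y, rounds=50, lr=0.3):
    """Gradient boosting for squared loss: fit each stump to the residuals."""
    pred = [0.0] * len(y)
    model = []
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        j, t, ml, mr = fit_stump(X, resid)
        model.append((j, t, lr * ml, lr * mr))
        pred = [p + (lr * ml if x[j] <= t else lr * mr) for x, p in zip(X, pred)]
    return model

def predict(model, x):
    return sum(vl if x[j] <= t else vr for j, t, vl, vr in model)

model = boost(X, y)
sse = sum((predict(model, x) - yi) ** 2 for x, yi in zip(X, y))
print(sse < sum((yi - sum(y) / len(y)) ** 2 for yi in y))  # beats the mean
```

The real model uses many deep trees and far richer features, but the residual-fitting loop above is the core of any gradient boosting method.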
    Lead author Alexander Kroll: “TurNuP outperforms previous models and can even be used successfully for enzymes that have only a low similarity to those in the training dataset.” Previous models could not make meaningful predictions unless at least 40% of the enzyme sequence was identical to at least one enzyme in the training set. By contrast, TurNuP can already make meaningful predictions for enzymes with sequence identities of 0–40%.
    Professor Lercher adds: “In our study, we show that the predictions made by TurNuP can be used to predict the concentrations of enzymes in living cells much more accurately than has been the case to date.”
    In order to make the prediction model easily accessible to as many users as possible, the HHU team has developed a user-friendly web server, which other researchers can use to predict the kcat turnover numbers of enzymes.

  • Robot preachers get less respect, fewer donations

    As artificial intelligence expands across more professions, robot preachers and AI programs offer new means of sharing religious beliefs, but they may undermine credibility and reduce donations for religious groups that rely on them, according to research published by the American Psychological Association.
    “It seems like robots take over more occupations every year, but I wouldn’t be so sure that religious leaders will ever be fully automated because religious leaders need credibility, and robots aren’t credible,” said lead researcher Joshua Conrad Jackson, PhD, an assistant professor at the University of Chicago in the Booth School of Business.
    The research was published in the Journal of Experimental Psychology: General.
    Jackson and his colleagues conducted an experiment with the Mindar humanoid robot at the Kodai-Ji Buddhist temple in Kyoto, Japan. The robot has a humanlike silicon face with moving lips and blinking eyes on a metal body. It delivers 25-minute Heart Sutra sermons on Buddhist principles with surround sound and multimedia projections.
    Mindar, which was created in 2019 by a Japanese robotics team in partnership with the temple, cost almost $1 million to develop, but it might be reducing donations to the temple, according to the study.
    The researchers surveyed 398 participants who were leaving the temple after hearing a sermon delivered either by Mindar or a human Buddhist priest. Participants viewed Mindar as less credible and gave smaller donations than those who heard a sermon from the human priest.
    In another experiment in a Taoist temple in Singapore, half of the 239 participants heard a sermon by a human priest while the other half heard the same sermon from a humanoid robot called Pepper. That experiment had similar findings — the robot was viewed as less credible and inspired smaller donations. Participants who heard the robot sermon also said they were less likely to share its message or distribute flyers to support the temple.

  • Going the distance for better wireless charging

    A better way to wirelessly charge over long distances has been developed at Aalto University. Engineers have optimized the way antennas transmitting and receiving power interact with each other, making use of the phenomenon of “radiation suppression.” The result is a significant advance in the field: a better theoretical understanding of wireless power transfer than the conventional inductive approach provides.
    Charging over short distances, such as through induction pads, uses magnetic near fields to transfer power with high efficiency, but at longer distances the efficiency dramatically drops. New research shows that this high efficiency can be sustained over long distances by suppressing the radiation resistance of the loop antennas that are sending and receiving power. Previously, the same lab created an omnidirectional wireless charging system that allowed devices to be charged at any orientation. Now, they have extended that work with a new dynamic theory of wireless charging that looks more closely at both near (non-radiative) and far (radiative) distances and conditions. In particular, they show that high transfer efficiency, over 80 percent, can be achieved at distances approximately five times the size of the antenna, utilizing the optimal frequency within the hundred-megahertz range.
    “We wanted to balance effectively transferring power with the radiation loss that always happens over longer distances,” says lead author Nam Ha-Van, a postdoctoral researcher at Aalto University. “It turns out that when the currents in the loop antennas have equal amplitudes and opposite phases, we can cancel the radiation loss, thus boosting efficiency.”
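    The cancellation Ha-Van describes can be illustrated with a toy far-field superposition (a scalar sketch with assumed numbers, not the authors' full antenna model): two sources of equal amplitude and opposite phase, separated by much less than a wavelength, radiate far less than the same pair driven in phase.

```python
import cmath
import math

# Two small current sources, each contributing a scalar far-field term
# proportional to I * exp(-i k r) / r at an observation point (toy model).
c = 3e8
f = 100e6                       # hundred-megahertz range, as in the article
k = 2 * math.pi * f / c         # wavenumber; wavelength is about 3 m
d = 0.05                        # 5 cm separation, much less than a wavelength

def field(I, r):
    return I * cmath.exp(-1j * k * r) / r

r = 100.0                        # distant observation point on the axis
in_phase = abs(field(1, r) + field(1, r + d))
anti_phase = abs(field(1, r) + field(-1, r + d))
print(anti_phase / in_phase)     # much less than 1: radiation is suppressed
```

With opposite phases the two contributions nearly cancel in the far field (the residual scales with k*d), which is why keeping the antenna currents anti-phased suppresses radiation loss while the near-field power transfer remains.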
    The researchers created a way to analyse any wireless power transfer system, either mathematically or experimentally. This allows for a more thorough evaluation of power transfer efficiency, at both near and far distances, which hasn’t been done before. They then tested how charging worked between two loop antennas (see image) positioned at a considerable distance relative to their sizes, establishing that radiation suppression is the mechanism that helps boost transfer efficiency.
    “This is all about figuring out the optimal setup for wireless power transfer, whether near or far,” says Ha-Van. “With our approach, we can now extend the transfer distance beyond that of conventional wireless charging systems, while maintaining high efficiency.” Wireless power transfer is not just important for phones and gadgets; biomedical implants with limited battery capacity can also benefit. The research of Ha-Van and colleagues can also account for barriers like human tissue that can impede charging.

  • Scientists develop AI-based tracking and early-warning system for viral pandemics

    Scripps Research scientists have developed a machine-learning system — a type of artificial intelligence (AI) application — that can track the detailed evolution of epidemic viruses and predict the emergence of viral variants with important new properties.
    In a paper in Cell Patterns on July 21, 2023, the scientists demonstrated the system by using data on recorded SARS-CoV-2 variants and COVID-19 mortality rates. They showed that the system could have predicted the emergence of new SARS-CoV-2 “variants of concern” (VOCs) ahead of their official designations by the World Health Organization (WHO). Their findings point to the possibility of using such a system in real time to track future viral pandemics.
    “There are rules of pandemic virus evolution that we have not understood but can be discovered, and used in an actionable sense by private and public health organizations, through this unprecedented machine-learning approach,” says study senior author William Balch, PhD, professor in the Department of Molecular Medicine at Scripps Research.
    The co-first authors of the study were Salvatore Loguercio, PhD, a staff scientist in the Balch lab at the time of the study, and currently a staff scientist at the Scripps Research Translational Institute; and Ben Calverley, PhD, a postdoctoral research associate in the Balch lab.
    The Balch lab specializes in the development of computational, often AI-based methods to illuminate how genetic variations alter the symptoms and spread of diseases. For this study, they applied their approach to the COVID-19 pandemic. They developed machine-learning software, using a strategy called Gaussian process-based spatial covariance, to relate three data sets spanning the course of the pandemic: the genetic sequences of SARS-CoV-2 variants found in infected people worldwide, the frequencies of those variants, and the global mortality rate for COVID-19.
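    The study's model is far richer than anything shown here, but the core ingredient, a Gaussian-process covariance model that relates noisy observations collected over the course of the pandemic, can be sketched on toy data (all numbers below are invented for illustration):

```python
import numpy as np

# Gaussian-process regression with a squared-exponential covariance,
# smoothing a noisy toy time series standing in for a pandemic trend.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 40)                 # toy time axis (weeks)
y = np.sin(t) + 0.1 * rng.normal(size=t.size)  # noisy observed trend

def sqexp(a, b, length=1.0):
    """Squared-exponential (RBF) covariance between two sets of points."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

noise_var = 0.01
K = sqexp(t, t) + noise_var * np.eye(t.size)   # covariance of the observations
alpha = np.linalg.solve(K, y)                  # K^{-1} y
mean_train = sqexp(t, t) @ alpha               # GP posterior mean at the data
# The posterior mean recovers the underlying trend from the noisy series.
print(float(np.max(np.abs(mean_train - np.sin(t)))))
```

In the actual study the covariance is defined over variant genetics, frequency and mortality jointly rather than over a single time axis, but the same posterior-mean machinery produces the predictions.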
    “This computational method used data from publicly available repositories,” Loguercio says. “But it can be applied to any genetic mapping resource.”
    The software enabled the researchers to track sets of genetic changes appearing in SARS-CoV-2 variants around the world. These changes — typically trending towards increased spread rates and decreased mortality rates — signified the virus’ adaptations to lockdowns, mask wearing, vaccines, increasing natural immunity in the global population, and the relentless competition among SARS-CoV-2 variants themselves.

  • Detecting threats beyond the limits of human, sensor sight

    Remember what it’s like to twirl a sparkler on a summer night? Hold it still and the fire crackles and sparks but twirl it around and the light blurs into a line tracing each whirl and jag you make.
    A new patented software system developed at Sandia National Laboratories can find the curves of motion in streaming video and images from satellites, drones and far-range security cameras and turn them into signals to find and track moving objects as small as one pixel. The developers say this system can enhance the performance of any remote sensing application.
    “Being able to track each pixel from a distance matters, and it is an ongoing and challenging problem,” said Tian Ma, a computer scientist and co-developer of the system. “For physical security surveillance systems, for example, the farther out you can detect a possible threat, the more time you have to prepare and respond. Often the biggest challenge is the simple fact that when objects are located far away from the sensors, their size naturally appears to be much smaller. Sensor sensitivity diminishes as the distance from the target increases.”
    Ma and Robert Anderson started working on the Multi-frame Moving Object Detection System in 2015 as a Sandia Laboratory Directed Research and Development project. A paper about MMODS was recently published in Sensors.
    Detecting one moving pixel in a sea of 10 million
    The ability to detect objects through remote sensing systems is typically limited to what can be seen in a single video frame, whereas MMODS uses a new multiframe method to detect small objects in low-visibility conditions, Ma said. At a computer station, image streams from various sensors flow in, and MMODS processes the data with an image filter frame by frame in real time. An algorithm finds movement in the video frames and converts it into target signals that can be correlated and then integrated across a sequence of video frames.
    This process improves the signal-to-noise ratio, and thus overall image quality, because the moving target’s signal can be correlated over time and accumulates steadily, whereas background noise such as wind-driven motion is random and uncorrelated, so it is filtered out.
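    The multiframe principle, a target signal that adds coherently along its motion track while uncorrelated noise averages out, can be sketched with a toy one-dimensional sensor (hypothetical numbers, not the MMODS implementation):

```python
import random

# A 1-pixel target with amplitude well below the noise floor moves one pixel
# per frame; stacking frames along its motion track makes it stand out.
random.seed(1)
W, N = 50, 100         # pixels per (1-D) frame, number of frames
amp, sigma = 0.5, 1.0  # target amplitude is half the per-frame noise level

frames = []
for t in range(N):
    row = [random.gauss(0.0, sigma) for _ in range(W)]
    row[t % W] += amp  # the moving target, invisible in any single frame
    frames.append(row)

# Shift each frame so the hypothesized track lines up, then average:
# the target adds coherently while the uncorrelated noise averages out.
aligned = [sum(frames[t][(i + t) % W] for t in range(N)) / N for i in range(W)]

print(aligned.index(max(aligned)))  # the track-start pixel should win
```

After integrating 100 frames the coherent target grows relative to the noise (roughly as the square root of the frame count), which is the same mechanism MMODS exploits across real 2-D image sequences and many candidate tracks.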