More stories

  • Link found between childhood television watching and adulthood metabolic syndrome

    A University of Otago study has added weight to the evidence that watching too much television as a child can lead to poor health in adulthood.
    The research, led by Professor Bob Hancox, of the Department of Preventive and Social Medicine, and published this week in the journal Pediatrics, found that children who watched more television were more likely to develop metabolic syndrome as an adult.
    Metabolic syndrome is a cluster of conditions including high blood pressure, high blood sugar, excess body fat, and abnormal cholesterol levels that lead to an increased risk of heart disease, diabetes and stroke.
    Using data from 879 participants of the Dunedin study, researchers found those who watched more television between the ages of 5 and 15 were more likely to have these conditions at age 45.
    Television viewing times were reported at ages 5, 7, 9, 11, 13 and 15. On average, participants watched just over two hours per weekday.
    “Those who watched the most had a higher risk of metabolic syndrome in adulthood,” Professor Hancox says.
    “More childhood television viewing time was also associated with a higher risk of overweight and obesity and lower physical fitness.”
    Boys watched slightly more television than girls, and metabolic syndrome was more common in men than women (34 percent and 20 percent respectively). However, the link between childhood television viewing time and adult metabolic syndrome was seen in both sexes, and may even be stronger in women. More

  • AI predicts the work rate of enzymes

    Enzymes play a key role in cellular metabolic processes. To enable the quantitative assessment of these processes, researchers need to know the so-called “turnover number” (for short: kcat) of the enzymes. In the scientific journal Nature Communications, a team of bioinformaticians from Heinrich Heine University Düsseldorf (HHU) now describes a tool for predicting this parameter for various enzymes using AI methods.
    Enzymes are important biocatalysts in all living cells. They are normally large proteins, which bind smaller molecules — so-called substrates — and then convert them into other molecules, the “products.” Without enzymes, the reaction that converts the substrates into the products could not take place, or could only do so at a very low rate. Most organisms possess thousands of different enzymes. Enzymes have many applications in a wide range of biotechnological processes and in everyday life — from the proving of bread dough to detergents.
    The maximum speed at which a specific enzyme can convert its substrates into products is determined by the so-called turnover number kcat. It is an important parameter for quantitative research on enzyme activities and plays a key role in understanding cellular metabolism.
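    To make the parameter concrete: in textbook Michaelis-Menten kinetics, kcat sets the ceiling on how fast a given amount of enzyme can work. The snippet below is a generic illustration of that relationship with made-up numbers, not part of the TurNuP study.
    ```python
    # Illustrative only: textbook Michaelis-Menten kinetics, not the TurNuP model.
    # v = kcat * [E]_total * [S] / (Km + [S]); at saturating substrate, v approaches
    # the maximum rate kcat * [E]_total.

    def reaction_rate(kcat, enzyme_conc, substrate_conc, km):
        """Reaction rate for a single-substrate enzyme (concentrations in M, kcat in 1/s)."""
        return kcat * enzyme_conc * substrate_conc / (km + substrate_conc)

    # Hypothetical values: kcat = 100 /s, 1 nM enzyme, 1 mM substrate, Km = 50 uM.
    print(reaction_rate(kcat=100.0, enzyme_conc=1e-9, substrate_conc=1e-3, km=5e-5))
    ```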
    However, it is time-consuming and expensive to determine kcat turnover numbers in experiments, which is why they are not known for the vast majority of reactions. The Computational Cell Biology research group at HHU headed by Professor Dr Martin Lercher has now developed a new tool called TurNuP to predict the kcat turnover numbers of enzymes using AI methods.
    To train a kcat prediction model, information about the enzymes and catalysed reactions was converted into numerical vectors using deep learning models. These numerical vectors served as the input for a machine learning model — a so-called gradient boosting model — which predicts the kcat turnover numbers.
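    As a rough sketch of that pipeline, the following uses scikit-learn's gradient boosting regressor on random placeholder vectors standing in for the deep-learning representations; the real TurNuP features, targets and model configuration differ.
    ```python
    # Minimal sketch of the described pipeline: numerical vectors in, turnover numbers out.
    # Features are random placeholders for the deep-learning embeddings of enzymes and
    # reactions, and the targets are synthetic; this is not the TurNuP training code.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 128))                # placeholder enzyme+reaction vectors
    signal = X[:, :5].sum(axis=1)                  # fake structure for the model to learn
    y_log10_kcat = 0.5 * signal + rng.normal(0.0, 0.5, size=500)  # kcat spans decades, so work in log10

    X_train, X_test, y_train, y_test = train_test_split(X, y_log10_kcat, random_state=0)
    model = GradientBoostingRegressor().fit(X_train, y_train)
    print("held-out R^2:", round(model.score(X_test, y_test), 2))
    ```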
    Lead author Alexander Kroll: “TurNuP outperforms previous models and can even be used successfully for enzymes that have only a low similarity to those in the training dataset.” Previous models could not make meaningful predictions unless at least 40% of the enzyme sequence was identical to at least one enzyme in the training set. By contrast, TurNuP already makes meaningful predictions for enzymes with a maximum sequence identity of 0–40%.
    Professor Lercher adds: “In our study, we show that the predictions made by TurNuP can be used to predict the concentrations of enzymes in living cells much more accurately than has been the case to date.”
    In order to make the prediction model easily accessible to as many users as possible, the HHU team has developed a user-friendly web server, which other researchers can use to predict the kcat turnover numbers of enzymes. More

  • Robot preachers get less respect, fewer donations

    As artificial intelligence expands across more professions, robot preachers and AI programs offer new means of sharing religious beliefs, but they may undermine credibility and reduce donations for religious groups that rely on them, according to research published by the American Psychological Association.
    “It seems like robots take over more occupations every year, but I wouldn’t be so sure that religious leaders will ever be fully automated because religious leaders need credibility, and robots aren’t credible,” said lead researcher Joshua Conrad Jackson, PhD, an assistant professor at the University of Chicago in the Booth School of Business.
    The research was published in the Journal of Experimental Psychology: General.
    Jackson and his colleagues conducted an experiment with the Mindar humanoid robot at the Kodai-Ji Buddhist temple in Kyoto, Japan. The robot has a humanlike silicone face with moving lips and blinking eyes on a metal body. It delivers 25-minute Heart Sutra sermons on Buddhist principles with surround sound and multimedia projections.
    Mindar, which was created in 2019 by a Japanese robotics team in partnership with the temple, cost almost $1 million to develop, but it might be reducing donations to the temple, according to the study.
    The researchers surveyed 398 participants who were leaving the temple after hearing a sermon delivered either by Mindar or a human Buddhist priest. Participants viewed Mindar as less credible and gave smaller donations than those who heard a sermon from the human priest.
    In another experiment in a Taoist temple in Singapore, half of the 239 participants heard a sermon by a human priest while the other half heard the same sermon from a humanoid robot called Pepper. That experiment had similar findings — the robot was viewed as less credible and inspired smaller donations. Participants who heard the robot sermon also said they were less likely to share its message or distribute flyers to support the temple. More

  • Going the distance for better wireless charging

    A better way to wirelessly charge over long distances has been developed at Aalto University. Engineers have optimized the way antennas transmitting and receiving power interact with each other, making use of the phenomenon of “radiation suppression.” The result is a better theoretical understanding of wireless power transfer than the conventional inductive approach provides, a significant advance in the field.
    Charging over short distances, such as through induction pads, uses magnetic near fields to transfer power with high efficiency, but at longer distances the efficiency dramatically drops. New research shows that this high efficiency can be sustained over long distances by suppressing the radiation resistance of the loop antennas that are sending and receiving power. Previously, the same lab created an omnidirectional wireless charging system that allowed devices to be charged at any orientation. Now, they have extended that work with a new dynamic theory of wireless charging that looks more closely at both near (non-radiative) and far (radiative) distances and conditions. In particular, they show that high transfer efficiency, over 80 percent, can be achieved at distances approximately five times the size of the antenna, utilizing the optimal frequency within the hundred-megahertz range.
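    For context, the textbook coupled-resonator bound gives a feel for why efficiency normally collapses with distance: link efficiency depends on the figure of merit U = k·sqrt(Q1·Q2), and the coupling coefficient k falls rapidly as the coils move apart. The sketch below evaluates that standard bound with made-up numbers; it is not the Aalto group's radiation-suppression theory.
    ```python
    # Illustrative only: the standard coupled-resonator efficiency bound,
    # eta_max = U^2 / (1 + sqrt(1 + U^2))^2 with U = k * sqrt(Q1 * Q2).
    # This is textbook wireless-power-transfer theory, not the new dynamic theory;
    # it just shows how efficiency drops as the coupling coefficient k falls with distance.
    import math

    def max_link_efficiency(k, q1, q2):
        u = k * math.sqrt(q1 * q2)
        return u**2 / (1.0 + math.sqrt(1.0 + u**2))**2

    for k in (0.1, 0.01, 0.001):   # hypothetical coupling coefficients at growing distance
        print(f"k = {k}: eta_max = {max_link_efficiency(k, q1=500, q2=500):.3f}")
    ```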
    “We wanted to balance effectively transferring power with the radiation loss that always happens over longer distances,” says lead author Nam Ha-Van, a postdoctoral researcher at Aalto University. “It turns out that when the currents in the loop antennas have equal amplitudes and opposite phases, we can cancel the radiation loss, thus boosting efficiency.”
    The researchers created a way to analyse any wireless power transfer system, either mathematically or experimentally. This allows for a more thorough evaluation of power transfer efficiency, at both near and far distances, which hasn’t been done before. They then tested how charging worked between two loop antennas (see image) positioned at a considerable distance relative to their sizes, establishing that radiation suppression is the mechanism that helps boost transfer efficiency.
    “This is all about figuring out the optimal setup for wireless power transfer, whether near or far,” says Ha-Van. “With our approach, we can now extend the transfer distance beyond that of conventional wireless charging systems, while maintaining high efficiency.” Wireless power transfer is not just important for phones and gadgets; biomedical implants with limited battery capacity can also benefit. The approach developed by Ha-Van and colleagues can also account for barriers, such as human tissue, that impede charging. More

  • Scientists develop AI-based tracking and early-warning system for viral pandemics

    Scripps Research scientists have developed a machine-learning system — a type of artificial intelligence (AI) application — that can track the detailed evolution of epidemic viruses and predict the emergence of viral variants with important new properties.
    In a paper in Cell Patterns on July 21, 2023, the scientists demonstrated the system by using data on recorded SARS-CoV-2 variants and COVID-19 mortality rates. They showed that the system could have predicted the emergence of new SARS-CoV-2 “variants of concern” (VOCs) ahead of their official designations by the World Health Organization (WHO). Their findings point to the possibility of using such a system in real-time to track future viral pandemics.
    “There are rules of pandemic virus evolution that we have not understood but can be discovered, and used in an actionable sense by private and public health organizations, through this unprecedented machine-learning approach,” says study senior author William Balch, PhD, professor in the Department of Molecular Medicine at Scripps Research.
    The co-first authors of the study were Salvatore Loguercio, PhD, a staff scientist in the Balch lab at the time of the study, and currently a staff scientist at the Scripps Research Translational Institute; and Ben Calverley, PhD, a postdoctoral research associate in the Balch lab.
    The Balch lab specializes in the development of computational, often AI-based methods to illuminate how genetic variations alter the symptoms and spread of diseases. For this study, they applied their approach to the COVID-19 pandemic. They developed machine-learning software, using a strategy called Gaussian process-based spatial covariance, to relate three data sets spanning the course of the pandemic: the genetic sequences of SARS-CoV-2 variants found in infected people worldwide, the frequencies of those variants, and the global mortality rate for COVID-19.
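    To give a flavour of the kind of model involved, the sketch below fits a generic Gaussian process regression to synthetic variant-frequency and mortality curves and extrapolates forward; it stands in for, but is much simpler than, the paper's Gaussian process-based spatial covariance approach, and every number in it is made up.
    ```python
    # Generic Gaussian process regression on synthetic data, standing in for the
    # paper's Gaussian process-based spatial covariance model (which relates variant
    # sequences, variant frequencies and mortality); all values here are synthetic.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    weeks = np.arange(0, 100, dtype=float).reshape(-1, 1)               # pandemic weeks
    variant_freq = 1.0 / (1.0 + np.exp(-(weeks.ravel() - 50.0) / 8.0))  # toy variant takeover curve
    mortality = 2.0 - 1.2 * variant_freq + rng.normal(0.0, 0.05, size=weeks.shape[0])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel(0.01))
    gp.fit(weeks, mortality)

    mean, std = gp.predict(np.array([[110.0]]), return_std=True)        # look ahead ten weeks
    print(f"predicted mortality proxy at week 110: {mean[0]:.2f} +/- {std[0]:.2f}")
    ```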
    “This computational method used data from publicly available repositories,” Loguercio says. “But it can be applied to any genetic mapping resource.”
    The software enabled the researchers to track sets of genetic changes appearing in SARS-CoV-2 variants around the world. These changes — typically trending towards increased spread rates and decreased mortality rates — signified the virus’ adaptations to lockdowns, mask wearing, vaccines, increasing natural immunity in the global population, and the relentless competition among SARS-CoV-2 variants themselves. More

  • Detecting threats beyond the limits of human, sensor sight

    Remember what it’s like to twirl a sparkler on a summer night? Hold it still and the fire crackles and sparks but twirl it around and the light blurs into a line tracing each whirl and jag you make.
    A new patented software system developed at Sandia National Laboratories can find the curves of motion in streaming video and images from satellites, drones and far-range security cameras and turn them into signals to find and track moving objects as small as one pixel. The developers say this system can enhance the performance of any remote sensing application.
    “Being able to track each pixel from a distance matters, and it is an ongoing and challenging problem,” said Tian Ma, a computer scientist and co-developer of the system. “For physical security surveillance systems, for example, the farther out you can detect a possible threat, the more time you have to prepare and respond. Often the biggest challenge is the simple fact that when objects are located far away from the sensors, their size naturally appears to be much smaller. Sensor sensitivity diminishes as the distance from the target increases.”
    Ma and Robert Anderson started working on the Multi-frame Moving Object Detection System in 2015 as a Sandia Laboratory Directed Research and Development project. A paper about MMODS was recently published in Sensors.
    Detecting one moving pixel in a sea of 10 million
    The ability to detect objects through remote sensing systems is typically limited to what can be seen in a single video frame, whereas MMODS uses a new, multiframe method to detect small objects in low-visibility conditions, Ma said. At a computer station, image streams from various sensors flow in, and MMODS processes the data with an image filter, frame by frame, in real time. An algorithm finds movement in the video frames and matches it to target signals that can be correlated and then integrated across a sequence of video frames.
    This process improves the signal-to-noise ratio or overall image quality because the moving target’s signal can be correlated over time and increases steadily, whereas movement from background noise like wind is filtered out because it moves randomly and is not correlated. More
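    The signal-to-noise argument can be seen in a toy calculation with assumed numbers (this is not the MMODS code): a target that reappears in every frame adds up linearly when frames are integrated along its track, while independent background noise only grows as the square root of the number of frames.
    ```python
    # Toy illustration of multi-frame integration, not the MMODS implementation:
    # summing N frames grows a correlated target signal ~N while uncorrelated noise
    # grows ~sqrt(N), so the signal-to-noise ratio improves by roughly sqrt(N).
    import math

    n_frames = 50          # assumed number of integrated frames
    target_amp = 1.0       # per-frame target amplitude (a dim, sub-noise target)
    noise_sigma = 2.0      # per-frame noise standard deviation

    single_frame_snr = target_amp / noise_sigma
    integrated_snr = (n_frames * target_amp) / (math.sqrt(n_frames) * noise_sigma)

    print(f"single-frame SNR: {single_frame_snr:.2f}")
    print(f"integrated SNR over {n_frames} frames: {integrated_snr:.2f}")
    ```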

  • Dreaming in technicolor

    A team of computer scientists and designers based at the University of Waterloo has developed a tool to help people use colour better in graphic design.
    The tool, De-Stijl, uses powerful machine learning technology to suggest intuitive colour palettes for novice designers and inexperienced users. The software combines and improves on the functionalities of existing tools like Figma, Pixlr, and Coolor, allowing users to select important theme colours and quickly visualize how they’ll impact a design.
    “You put your graphical elements into the canvas,” said Jian Zhao, an assistant professor of computer science at Waterloo. “De-Stijl separates it into background, image, decoration and text, and based on these it creates a palette and then can make recommendations based on the design elements of layout, colour proximity, and proportion.”
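    As a very rough analogue of the palette-extraction step, the sketch below pulls dominant colours and their proportions out of an image with k-means clustering; De-Stijl's actual decomposition into background, image, decoration and text and its 2-D palette model are considerably more sophisticated, and the random image here merely stands in for a real canvas.
    ```python
    # Generic dominant-colour extraction via k-means (scikit-learn), shown only to make
    # "derive a palette from the canvas" concrete; this is not De-Stijl's method, and the
    # random image below stands in for a real graphic you would normally load from disk.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)   # placeholder RGB canvas
    pixels = image.reshape(-1, 3).astype(float)

    kmeans = KMeans(n_clusters=5, random_state=0, n_init=10).fit(pixels)
    palette = kmeans.cluster_centers_.astype(int)                    # five dominant colours
    proportions = np.bincount(kmeans.labels_) / len(pixels)

    for colour, share in zip(palette, proportions):
        print(tuple(colour), f"{share:.0%}")   # colour and how much of the canvas it covers
    ```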
    De-Stijl’s most exciting contribution is an innovative 2-D colour palette, developed in consultation with expert graphic designers, that not only suggests colours but also demonstrates their impact in different distributions.
    “Humans perceive colors differently based on their proportion and their placement,” said Xinyu Shi, a PhD student in computer science and the lead author on the research. “With the 2D format, users can better perceive how their current graphic designs look, focusing on the colour itself.”
    The Waterloo-led project grew out of a longstanding relationship with Adobe, the design powerhouse responsible for products like Photoshop and InDesign.
    Adobe realized that a lot of people responsible for creating branding and other marketing materials didn’t have advanced graphic design knowledge or the resources to hire expert designers. They tasked the Waterloo team with helping them find AI-powered solutions for these novice designers. 
    The De-Stijl team worked with a combination of design experts and ordinary users to build and test the software. During the testing phase, users customized marketing materials from provided templates using both De-Stijl and its competitors. More

  • Future AI algorithms have potential to learn like humans

    Memories can be as tricky to hold onto for machines as they can be for humans. To help understand why artificial agents develop holes in their own cognitive processes, electrical engineers at The Ohio State University have analyzed how much a process called “continual learning” impacts their overall performance.
    Continual learning is when a computer is trained to continuously learn a sequence of tasks, using its accumulated knowledge from old tasks to better learn new tasks.
    Yet one major hurdle scientists still need to overcome to achieve this kind of continual learning is circumventing the machine learning equivalent of memory loss, a process known in AI agents as “catastrophic forgetting.” As artificial neural networks are trained on one new task after another, they tend to lose the information gained from those previous tasks, an issue that could become problematic as society comes to rely on AI systems more and more, said Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at The Ohio State University.
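    The effect is easy to reproduce in miniature. The sketch below, which uses generic scikit-learn code on two synthetic tasks rather than anything from the Ohio State study, trains a linear classifier on task A, keeps training it on an unrelated task B, and watches its accuracy on task A fall.
    ```python
    # Toy demonstration of catastrophic forgetting, unrelated to the authors' setup:
    # continuing to train on task B erodes the classifier's accuracy on task A.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier

    task_a = make_classification(n_samples=2000, n_features=20, random_state=0)
    task_b = make_classification(n_samples=2000, n_features=20, random_state=7)

    clf = SGDClassifier(random_state=0)
    for _ in range(20):                                   # train on task A only
        clf.partial_fit(*task_a, classes=[0, 1])
    print("task A accuracy after training on A:", clf.score(*task_a))

    for _ in range(20):                                   # then train on task B only
        clf.partial_fit(*task_b, classes=[0, 1])
    print("task A accuracy after training on B:", clf.score(*task_a))
    ```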
    “As automated driving applications or other robotic systems are taught new things, it’s important that they don’t forget the lessons they’ve already learned for our safety and theirs,” said Shroff. “Our research delves into the complexities of continuous learning in these artificial neural networks, and what we found are insights that begin to bridge the gap between how a machine learns and how a human learns.”
    Researchers found that in the same way that people might struggle to recall contrasting facts about similar scenarios but remember inherently different situations with ease, artificial neural networks can recall information better when faced with diverse tasks in succession, instead of ones that share similar features, Shroff said.
    The team, including Ohio State postdoctoral researchers Sen Lin and Peizhong Ju and professors Yingbin Liang and Shroff, will present their research this month at the 40th annual International Conference on Machine Learning in Honolulu, Hawaii, a flagship conference in machine learning.
    While it can be challenging to teach autonomous systems to exhibit this kind of dynamic, lifelong learning, possessing such capabilities would allow scientists to scale up machine learning algorithms at a faster rate as well as easily adapt them to handle evolving environments and unexpected situations. Essentially, the goal for these systems would be for them to one day mimic the learning capabilities of humans. More