More stories

  • Better way to determine safe drug doses for children

    Determining safe yet effective drug dosages for children is an ongoing challenge for pharmaceutical companies and medical doctors alike. A new drug is usually first tested on adults, and results from these trials are used to select doses for pediatric trials. The underlying assumption is typically that children are like adults, just smaller, which often holds true, but may also overlook differences that arise from the fact that children’s organs are still developing.
    Compounding the problem, pediatric trials don’t always shed light on other differences that can affect recommendations for drug doses. There are many factors that limit children’s participation in drug trials — for instance, some diseases simply are rarer in children — and consequently, the generated datasets tend to be very sparse.
    To make drugs and their development safer for children, researchers at Aalto University and the pharmaceutical company Novartis have developed a method that makes better use of available data.
    ‘This is a method that could help determine safe drug doses more quickly and with fewer observations than before,’ says co-author Aki Vehtari, an associate professor of computer science at Aalto University and the Finnish Center for Artificial Intelligence (FCAI).
    In their study, the research team created a model that improves our understanding of how organs develop.
    ‘The size of an organ is not necessarily the only thing that affects its performance. Kids’ organs are simply not as efficient as those of adults. In drug modeling, if we assume that size is the only thing that matters, we might end up giving doses that are too large,’ explains Eero Siivola, first author of the study and doctoral student at Aalto University.
    Whereas the standard approach of assessing pediatric data relies on subjective evaluations of model diagnostics, the new approach, based on Gaussian process regression, is more data-driven and consequently less prone to bias. It is also better at handling small sample sizes as uncertainties are accounted for.
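    The following is an illustrative sketch only, not the Aalto-Novartis model: it applies off-the-shelf Gaussian process regression (here via scikit-learn) to a handful of invented weight-versus-clearance observations, to show the property the article describes, namely that the fit reports wide uncertainty exactly where data are scarce. All variable names and numbers are hypothetical.

```python
# Illustrative sketch only -- not the authors' model. Invented data points
# stand in for sparse pediatric observations (body weight vs. drug clearance).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

weight = np.array([[6.0], [9.0], [14.0], [25.0], [40.0], [70.0]])  # kg (hypothetical)
clearance = np.array([1.1, 1.6, 2.4, 3.9, 5.2, 7.8])               # L/h (hypothetical)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(weight, clearance)

# Predictive mean and standard deviation over a grid of body weights;
# the standard deviation widens where observations are missing.
grid = np.linspace(5.0, 80.0, 50).reshape(-1, 1)
mean, std = gp.predict(grid, return_std=True)
print(mean[:3], std[:3])
```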
    The research comes out of FCAI’s research programme on Agile and probabilistic AI, and it offers a good example of a method that makes the most of even very sparse datasets.
    In the study, the researchers demonstrate their approach by re-analyzing a pediatric trial investigating Everolimus, a drug used to prevent the rejection of organ transplants. But the possible benefits of their method are far-reaching.
    ‘It works for any drug whose concentration we want to examine,’ Vehtari says, like allergy and pain medication.
    The approach could be particularly useful for situations where a new drug is tested on a completely new group — of children or adults — which is small in size, potentially making the trial phase much more efficient than it currently is. Another promising application relates to extending use of an existing drug to other symptoms or diseases; the method could support this process more effectively than current practices.
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • An uncrackable combination of invisible ink and artificial intelligence

    Coded messages in invisible ink sound like something only found in espionage books, but in real life, they can have important security purposes. Yet, they can be cracked if their encryption is predictable. Now, researchers reporting in ACS Applied Materials & Interfaces have printed complexly encoded data with normal ink and a carbon nanoparticle-based invisible ink, requiring both UV light and a computer that has been taught the code to reveal the correct messages.
    Even as electronic records advance, paper is still a common way to preserve data. Invisible ink can hide classified economic, commercial or military information from prying eyes, but many popular inks contain toxic compounds or can be seen with predictable methods, such as light, heat or chemicals. Carbon nanoparticles, which have low toxicity, can be essentially invisible under ambient lighting but can create vibrant images when exposed to ultraviolet (UV) light — a modern take on invisible ink. In addition, advances in artificial intelligence (AI) models — made by networks of processing algorithms that learn how to handle complex information — can ensure that messages are only decipherable on properly trained computers. So, Weiwei Zhao, Kang Li, Jie Xu and colleagues wanted to train an AI model to identify and decrypt symbols printed in a fluorescent carbon nanoparticle ink, revealing hidden messages when exposed to UV light.
    The researchers made carbon nanoparticles from citric acid and cysteine, which they diluted with water to create an invisible ink that appeared blue when exposed to UV light. The team loaded the solution into an ink cartridge and printed a series of simple symbols onto paper with an inkjet printer. Then, they taught an AI model, composed of multiple algorithms, to recognize symbols illuminated by UV light and decode them using a special codebook. Finally, they tested the AI model’s ability to decode messages printed using a combination of both regular red ink and the UV fluorescent ink. With 100% accuracy, the AI model read the regular ink symbols as “STOP,” but when a UV light was shone on the writing, the invisible ink revealed the intended message “BEGIN.” Because these algorithms can notice minute modifications in symbols, this approach has the potential to encrypt messages securely using hundreds of different unpredictable symbols, the researchers say.
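    As a rough illustration of the decoding idea described above, and not the authors’ actual network, the sketch below classifies UV-illuminated symbol images with a small convolutional model and maps the predicted classes through a private codebook. The network layout, image size, and codebook entries are all hypothetical.

```python
# Minimal sketch of the decoding idea, not the authors' actual model:
# a small CNN classifies each UV-illuminated symbol image, and a private
# codebook maps the predicted symbol class to its plaintext meaning.
import torch
import torch.nn as nn

class SymbolClassifier(nn.Module):
    def __init__(self, num_symbols: int = 26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, num_symbols),  # assumes 1x32x32 input crops
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical codebook: predicted symbol class index -> hidden meaning.
CODEBOOK = {0: "B", 1: "E", 2: "G", 3: "I", 4: "N"}

def decode(model, symbol_images):
    """Map a batch of 1x32x32 symbol crops to their codebook entries."""
    with torch.no_grad():
        classes = model(symbol_images).argmax(dim=1).tolist()
    return "".join(CODEBOOK.get(c, "?") for c in classes)

model = SymbolClassifier()
print(decode(model, torch.rand(5, 1, 32, 32)))  # untrained model: random output
```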
    Story Source:
    Materials provided by American Chemical Society. Note: Content may be edited for style and length.

  • SMART evaluates impact of competition between autonomous vehicles and public transit

    The rapid advancement of Autonomous Vehicle (AV) technology in recent years has changed transport systems and consumer habits globally. As countries worldwide see a surge in AV usage, the rise of shared Autonomous Mobility on Demand (AMoD) services is likely to be next on the cards. Public Transit (PT), a critical component of urban transportation, will inevitably be impacted by the upcoming influx of AMoD, and the question remains whether AMoD will co-exist with or threaten the PT system.
    Researchers at the Future Urban Mobility (FM) Interdisciplinary Research Group (IRG) at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and at the Massachusetts Institute of Technology (MIT) conducted a case study of the first-mile mobility market (trips from origins to subway stations) in Tampines, Singapore, to find out.
    In a paper titled “Competition between Shared Autonomous Vehicles and Public Transit: A Case Study in Singapore” recently published in the journal Transportation Research Part C: Emerging Technologies, the first-of-its-kind study used Game Theory to analyse the competition between AMoD and PT.
    The market was simulated and evaluated from a competitive perspective, in which both AMoD and PT operators are profit-oriented and have dynamically adjustable supply strategies. Using an agent-based simulation, the competition process and system performance were evaluated from the standpoints of four stakeholders — the AMoD operator, the PT operator, passengers, and the transport authority.
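    To make the game-theoretic setup concrete, here is a deliberately toy sketch, not the SMART model, of the kind of iterated supply-adjustment loop such a competition implies: each profit-oriented operator repeatedly picks the supply level that maximizes its profit given the other operator’s current choice. The demand split, fares, and costs below are made-up placeholders.

```python
# Toy sketch of iterated best-response between two profit-oriented operators.
# This is not the SMART model; demand split, fares, and costs are invented.

def profit(my_supply, rival_supply, fare, cost_per_unit, demand=1000):
    # Passengers split in proportion to supply (a crude stand-in for a
    # mode-choice model); profit is revenue minus operating cost.
    share = my_supply / (my_supply + rival_supply)
    return fare * demand * share - cost_per_unit * my_supply

def best_response(rival_supply, fare, cost, options=range(10, 200, 10)):
    # Pick the supply level that maximizes profit given the rival's supply.
    return max(options, key=lambda s: profit(s, rival_supply, fare, cost))

amod_supply, pt_supply = 100, 100
for _ in range(50):  # iterate until strategies settle (approximate equilibrium)
    amod_supply = best_response(pt_supply, fare=3.0, cost=1.0)
    pt_supply = best_response(amod_supply, fare=1.5, cost=0.4)

print(amod_supply, pt_supply)
```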
    “The objective of our study is to envision cities of the future and to understand how competition between AMoD and PT will impact the evolution of transportation systems,” says the corresponding author of the paper, SMART FM Lead Principal Investigator and Associate Professor in MIT’s Department of Urban Studies and Planning, Jinhua Zhao. “Our study found that competition between AMoD and PT can be favourable, leading to increased profits and system efficiency for both operators when compared to the status quo, while also benefiting the public and the transport authorities. However, the impact of the competition on passengers is uneven, and authorities may be required to provide support, in the form of discounts or other feeder modes, for people who suffer from higher travel costs or longer travel times.”
    The research found that competition between AMoD and PT would compel bus operators to reduce the frequency of inefficient routes and allow AMoDs to fill the gaps in service coverage. “Although the overall bus supply was reduced, the change was not uniform,” says the first author of the paper, Baichuan Mo, a PhD candidate at MIT. “We found that PT services will be spatially concentrated on shorter routes that feed directly into the subway station, and temporally concentrated in peak hours. On average, this reduces passengers’ travel time but increases travel costs. However, the generalised travel cost is reduced when incorporating the value of time.” The study also found that providing subsidies to PT services would result in a relatively higher supply, profit, and market share for PT compared to AMoD, but also in increased passenger generalised travel costs and total system passenger car equivalent (PCE), which is measured by the average vehicle load and the total vehicle kilometers traveled.
    The findings suggest that PT should be allowed to optimise its supply strategies under specific operational goals and constraints to improve efficiency. On the other hand, AMoD operations should be regulated to reduce detrimental system impacts, for example by limiting the number of licenses, operating times, and service areas, so that AMoD operates in a manner more complementary to the PT system.
    “Our research shows that under the right conditions, an AMoD-PT integrated transport system can effectively co-exist and complement each other, benefiting all four stakeholders involved,” says SMART FM alumnus Hongmou Zhang, a PhD graduate from MIT’s Department of Urban Studies and Planning, and now Assistant Professor at Peking University School of Government. “Our findings will help the industry, policy makers and government bodies create future policies and plans to maximise the efficiency and sustainability of transportation systems, as well as protect the social welfare of residents as passengers.”
    The findings of this study are important for future mobility industries and relevant government bodies, as they provide insight into possible evolutions of, and threats to, urban transportation systems with the rise of AVs and AMoD, and offer a predictive guide for future policy and regulation design for an AMoD-PT integrated transport system. Policymakers should consider the uneven social costs, such as increased travel costs or travel times, that fall especially on vulnerable groups, and support those groups with discounts or other feeder modes.
    The research was carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) programme.

  • New algorithm uses a hologram to control trapped ions

    Researchers have discovered the most precise way to control individual ions using holographic optical engineering technology.
    The new technology uses the first known holographic optical engineering device to control trapped ion qubits. It promises more precise control of qubits, which will aid the development of quantum industry-specific hardware, further new quantum simulation experiments and, potentially, enable quantum error correction for trapped ion qubits.
    “Our algorithm calculates the hologram’s profile and removes any aberrations from the light, which lets us develop a highly precise technique for programming ions,” says lead author Chung-You Shih, a PhD student at the University of Waterloo’s Institute for Quantum Computing (IQC).
    Kazi Rajibul Islam, a faculty member at IQC and in the Department of Physics and Astronomy at Waterloo, is the lead investigator on this work. His team has been trapping ions used in quantum simulation in the Laboratory for Quantum Information since 2019 but needed a precise way to control them.
    A laser aimed at an ion can “talk” to it and change the quantum state of the ion, forming the building blocks of quantum information processing. However, laser beams have aberrations and distortions that can result in a messy, wide focus spot, which is a problem because the distance between trapped ions is a few micrometers — much narrower than a human hair.
    The laser beam profiles the team needed to stimulate the ions had to be precisely engineered. To achieve this, they took a laser, expanded its beam to 1 cm wide and then sent it through a digital micromirror device (DMD), which is programmable and functions like a movie projector. The DMD chip has two million micron-scale mirrors on it that are individually controlled using electric voltage. Using an algorithm that Shih developed, the DMD chip is programmed to display a hologram pattern. The light produced from the DMD hologram can have its intensity and phase exactly controlled.
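    For flavour, the sketch below shows one textbook way to encode a target intensity and phase on a binary mirror array: a binarized carrier grating (a Lee-type hologram) whose local stripe position carries the phase and whose duty cycle carries the amplitude. This is not the Waterloo group’s algorithm, which additionally measures and cancels aberrations using an ion as a sensor; the grid size and carrier period here are arbitrary.

```python
# Hedged illustration of a Lee-type binary hologram for a DMD-like device.
# Not the Waterloo algorithm; grid size and carrier period are arbitrary.
import numpy as np

def binary_hologram(amplitude, phase, carrier_period=8):
    """Return a 0/1 mirror pattern approximating a target amplitude and phase.

    amplitude : 2-D array with values in [0, 1]
    phase     : 2-D array of phases in radians
    """
    nx = amplitude.shape[1]
    x = np.arange(nx)
    carrier = 2 * np.pi * x[None, :] / carrier_period
    # Shift the carrier grating by the target phase; widen or narrow the "on"
    # stripes according to the target amplitude (duty-cycle modulation).
    threshold = np.cos(np.pi * np.clip(amplitude, 0, 1) / 2)
    return (np.cos(carrier + phase) > threshold).astype(np.uint8)

# Example target: a Gaussian spot with a linear phase ramp on a 256x256 array.
yy, xx = np.mgrid[0:256, 0:256]
amp = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40.0 ** 2))
phi = 0.05 * xx
pattern = binary_hologram(amp, phi)
print(pattern.shape, pattern.mean())  # fraction of mirrors switched "on"
```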
    In testing, the team has been able to manipulate each ion with the holographic light. Previous research has struggled with cross talk, which means that if a laser focuses on one ion, the light leaks onto the surrounding ions. With this device, the team characterizes the aberrations using an ion as a sensor and can then cancel them by adjusting the hologram, obtaining the lowest cross talk in the world.
    “There is a challenge in using commercially available DMD technology,” Shih says. “Its controller is made for projectors and UV lithography, not quantum experiments. Our next step is to develop our own hardware for quantum computation experiments.”
    This research was supported in part by the Canada First Research Excellence Fund through Transformative Quantum Technologies.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • When algorithms go bad: How consumers respond

    Researchers from the University of Texas at Austin and Copenhagen Business School published a new paper in the Journal of Marketing that offers actionable guidance to managers on the deployment of algorithms in marketing contexts.
    The study, forthcoming in the Journal of Marketing, is titled “When Algorithms Fail: Consumers’ Responses to Brand Harm Crises Caused by Algorithm Errors” and is authored by Raji Srinivasan and Gulen Sarial-Abi.
    Marketers increasingly rely on algorithms to make important decisions. A perfect example is the Facebook News Feed. You do not know why some of your posts show up on some people’s News Feeds or not, but Facebook does. Or how about Amazon recommending books and products for you? All of these are driven by algorithms. Algorithms are software and are far from perfect. Like any software, they can fail, and some do fail spectacularly. Add in the glare of social media and a small glitch can quickly turn into a brand harm crisis, and a massive PR nightmare. Yet, we know little about consumers’ responses to brands following such brand harm crises.
    First, the research team finds that consumers penalize brands less when an algorithm (vs. a human) makes the error that causes a brand harm crisis. In addition, consumers’ perceptions of the algorithm’s lower agency for the error, and its resulting lower responsibility for the harm caused, mediate their less negative responses to a brand following such a crisis.
    Second, when the algorithm is more humanized — when it is anthropomorphized (e.g., Alexa, Siri), when it uses machine learning, or when it is used in a subjective (vs. objective) or interactive (vs. non-interactive) task — consumers’ responses to the brand are more negative following a brand harm crisis caused by an algorithm error. Srinivasan says that “Marketers must be aware that in contexts where the algorithm appears to be more human, it would be wise to have heightened vigilance in the deployment and monitoring of algorithms and to provide resources for managing the aftermath of brand harm crises caused by algorithm errors.”
    This study also generates insights about how to manage the aftermath of brand harm crises caused by algorithm errors. Managers can highlight the role of the algorithm and the algorithm’s lack of agency for the error, which may reduce consumers’ negative responses to the brand. However, highlighting the role of the algorithm will not reduce consumers’ negative responses to the brand when the algorithm is anthropomorphized, is a machine learning algorithm, or when the error occurs in a subjective or an interactive task, all of which tend to humanize the algorithm.
    Finally, insights indicate that marketers should not publicize human supervision of algorithms (which may actually be effective in fixing the algorithm) in communications with customers following brand harm crises caused by algorithm errors. However, they should publicize the technological supervision of the algorithm when they use it. The reason? Consumers are less negative when there is technological supervision of the algorithm following a brand harm crisis.
    “Overall, our findings suggest that people are more forgiving of algorithms used in algorithmic marketing when they fail than they are of humans. We see this as a silver lining to the growing usage of algorithms in marketing and their inevitable failures in practice,” says Sarial-Abi.
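    To make the mediation claim in the first finding concrete, here is a hedged sketch, not the authors’ analysis, of a simple Baron-Kenny style check on made-up data: if perceived agency mediates the effect, the direct effect of the error source shrinks once perceived agency enters the regression. Every variable and coefficient below is invented.

```python
# Hedged sketch (not the authors' analysis): a Baron-Kenny style mediation
# check on invented data. algorithm = 1 if the error was caused by an
# algorithm; agency = perceived agency for the error; brand_response = how
# negatively the consumer evaluates the brand afterwards.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
algorithm = rng.integers(0, 2, n)
agency = 5 - 1.5 * algorithm + rng.normal(0, 1, n)
brand_response = 3 - 0.8 * agency + rng.normal(0, 1, n)

# Step 1: total effect of error source on brand response.
total = sm.OLS(brand_response, sm.add_constant(algorithm)).fit()
# Step 2: effect of error source on the proposed mediator (perceived agency).
path_a = sm.OLS(agency, sm.add_constant(algorithm)).fit()
# Step 3: both predictors together; the algorithm coefficient should shrink
# if perceived agency carries (mediates) the effect.
both = sm.OLS(brand_response,
              sm.add_constant(np.column_stack([algorithm, agency]))).fit()
print(total.params, path_a.params, both.params)
```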
    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.

  • Loan applications processed around midday more likely to be rejected

    Bank credit officers are more likely to approve loan applications earlier and later in the day, while ‘decision fatigue’ around midday is associated with defaulting to the safer option of saying no.
    These are the findings of a study by researchers in Cambridge’s Department of Psychology, published today in the journal Royal Society Open Science.
    Decision fatigue is the tiredness caused by having to make difficult decisions over a long period. Previous studies have shown that people suffering from decision fatigue tend to fall back on the ‘default decision’: choosing whatever option is easier or seems safer.
    The researchers looked at the decisions made on 26,501 credit loan applications by 30 credit officers of a major bank over a month. The officers were making decisions on ‘restructuring requests’: where the customer already has a loan but is having difficulties paying it back, so asks the bank to adjust the repayments.
    By studying decisions made at a bank, the researchers could calculate the economic cost of decision fatigue in a specific context — the first time this has been done. They found the bank could have collected around an extra $500,000 in loan repayments if all decisions had been made in the early morning.
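    As a purely illustrative sketch, not the Cambridge team’s analysis, the code below fits a logistic regression of approval on decision hour and its square to synthetic data with a built-in midday dip; a positive coefficient on the squared term is the signature of such a dip. The sample size, hours, and dip size are invented.

```python
# Illustrative sketch only (not the study's analysis): test for a midday dip
# in approval probability using hour-of-day and its square on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hour = rng.uniform(8, 18, size=5000)                        # decision time
p_approve = 0.5 - 0.15 * np.exp(-((hour - 12.5) ** 2) / 2)  # synthetic midday dip
approved = rng.binomial(1, p_approve)

X = sm.add_constant(np.column_stack([hour, hour ** 2]))
model = sm.Logit(approved, X).fit(disp=False)
print(model.params)  # a positive quadratic term is consistent with a midday dip
```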
    “Credit officers were more willing to make the difficult decision of granting a customer more lenient loan repayment terms in the morning, but by midday they showed decision fatigue and were less likely to agree to a loan restructuring request. After lunchtime they probably felt more refreshed and were able to make better decisions again,” said Professor Simone Schnall in the University of Cambridge’s Department of Psychology, senior author of the report.
    Decisions on loan restructuring requests are cognitively demanding: credit officers have to weigh up the financial strength of the customer against risk factors that reduce the likelihood of repayment. Errors can be costly to the bank. Approving the request results in a loss relative to the original payment plan, but if the restructuring succeeds, the loss is significantly smaller than if the loan is not repaid at all.
    The study found that customers whose restructuring requests were approved were more likely to repay their loan than if they were instructed to stick to the original repayment terms. Credit officers’ tendency to decline more requests around lunchtime was associated with a financial loss for the bank.
    “Even decisions we might assume are very objective and driven by specific financial considerations are influenced by psychological factors. This is clear evidence that regular breaks during working hours are important for maintaining high levels of performance,” said Tobias Baer, a researcher in the University of Cambridge’s Department of Psychology and first author of the report.
    Modern work patterns have been characterised by extended hours and higher work volume. The results suggest that cutting down on prolonged periods of intensive mental exertion may make workers more productive.
    Story Source:
    Materials provided by University of Cambridge. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.

  • New application of AI just removed one of the biggest roadblocks in astrophysics

    Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online May 4 in Proceedings of the National Academy of Sciences.
    “At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume,” says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. “With our new technique, it’s possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications.”
    The new method developed by Li and his colleagues feeds a machine learning algorithm with models of a small region of space at both low and high resolutions. The algorithm learns how to upscale the low-res models to match the detail found in the high-res versions. Once trained, the code can take full-scale low-res models and generate ‘super-resolution’ simulations containing up to 512 times as many particles.
    The process is akin to taking a blurry photograph and adding the missing details back in, making it sharp and clear.
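    The sketch below conveys the generic idea in miniature and is not the authors’ model, which works on N-body particle data with a far more elaborate, GAN-style architecture: a small 3D convolutional network is trained to upscale a low-resolution box to a high-resolution target. The tensors here are random placeholders.

```python
# Minimal sketch of the generic super-resolution idea (not the authors' model):
# train a 3D convolutional upsampler to map low-res boxes to high-res targets.
import torch
import torch.nn as nn

class Upscaler(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="trilinear", align_corners=False),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = Upscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training pair: a 16^3 low-res box and its 32^3 high-res target.
low_res = torch.rand(1, 1, 16, 16, 16)
high_res = torch.rand(1, 1, 32, 32, 32)

for step in range(5):
    opt.zero_grad()
    loss = loss_fn(model(low_res), high_res)
    loss.backward()
    opt.step()
    print(step, loss.item())
```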
    This upscaling brings significant time savings. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-res simulation using a single processing core. With the new approach, the researchers need only 36 minutes.
    The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large with 134 billion particles, the researchers’ new method took 16 hours on a single graphics processing unit. Existing methods would take so long that they wouldn’t even be worth running without dedicated supercomputing resources, Li says.

  • New graphite-based sensor technology for wearable medical devices

    Researchers at AMBER, the SFI Centre for Advanced Materials and BioEngineering Research, and from Trinity’s School of Physics, have developed next-generation, graphene-based sensing technology using their innovative G-Putty material.
    The team’s printed sensors are 50 times more sensitive than the industry standard and outperform other comparable nano-enabled sensors in an important metric seen as a game-changer in the industry: flexibility.
    Maximising sensitivity and flexibility without reducing performance makes the team’s technology an ideal candidate for the emerging areas of wearable electronics and medical diagnostic devices.
    The team — led by Professor Jonathan Coleman from Trinity’s School of Physics, one of the world’s leading nanoscientists — demonstrated that they can produce a low-cost, printed, graphene nanocomposite strain sensor.
    They developed a method to formulate G-Putty based inks that can be printed as a thin-film onto elastic substrates, including band-aids, and attached easily to the skin.
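    For context on sensitivity claims like the one above, the standard figure of merit for a strain sensor is the gauge factor, the relative resistance change divided by the applied strain. The sketch below computes it from invented resistance-versus-strain readings; these numbers are placeholders, not measurements from the Trinity/AMBER devices.

```python
# Quick sketch of the standard strain-sensor figure of merit:
# gauge factor GF = (dR / R0) / strain. Readings below are invented
# placeholders, not measurements from the Trinity/AMBER sensors.
import numpy as np

strain = np.array([0.00, 0.01, 0.02, 0.03, 0.04])            # fractional elongation
resistance = np.array([100.0, 150.0, 201.0, 248.0, 299.0])   # ohms (hypothetical)

r0 = resistance[0]
relative_change = (resistance - r0) / r0
# Least-squares slope of relative resistance change vs. strain = gauge factor.
gauge_factor = np.polyfit(strain, relative_change, 1)[0]
print(f"gauge factor ~ {gauge_factor:.1f}")
```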