More stories

  • How automated vehicles can impede driver performance, and what to do about it

    As cars keep getting smarter, automation is taking many tricky tasks — from parallel parking to backing up — out of drivers’ hands.
    Now, a University of Toronto Engineering study is underscoring the importance of drivers keeping their eyes on the road — even when they are in an automated vehicle (AV).
    Using an AV driving simulator and eye-tracking equipment, Professor Birsen Donmez and her team studied two types of in-vehicle displays and their effects on the driving behaviours of 48 participants.
    The findings, published recently in the journal Accident Analysis &amp; Prevention, revealed that drivers can become over-reliant on AV technology. This was especially true with a type of in-vehicle display the team has dubbed takeover request and automation capability (TORAC).
    A “takeover request” asks the driver to take vehicle control when automation is not able to handle a situation; “automation capability” indicates how close to that limit the automation is.
    “Drivers find themselves in situations where, although they are not actively driving, they are still part of the driving task — they must be monitoring the vehicle and step in if the vehicle fails,” says Donmez.

    “And these vehicles fail, it’s just guaranteed. The technology on the market right now is not mature enough to the point where we can just let the car drive and we go to sleep. We are not at that stage yet.”
    Tesla’s AV system, for example, warns drivers every 30 seconds or less when their hands aren’t detected on the wheel. Such prompts can support driver engagement to some extent, but when the automation fails, driver attention and anticipation are the key factors that determine whether a crash is avoided.
    “Even though cars are advertised right now as self-driving, they are still just Level 2, or partially automated,” adds Dengbo He, postdoctoral fellow and lead author. “The driver should not rely on these types of vehicle automation.”
    In one of the team’s driving scenarios, the participants were given a non-driving, self-paced task — meant to mimic common distractions such as reading text messages — while takeover prompts and automation capability information were turned on.
    “Their monitoring of the road went way down compared to the condition where these features were turned off,” says Donmez. “Automated vehicles and takeover requests can give people a false sense of security, especially if they work most of the time. People are going to end up looking away and doing something non-driving related.”
    The researchers also tested a second in-vehicle display, called STTORAC, which added information on surrounding traffic to the data provided by the TORAC system. This display showed more promise in ensuring driving safety.
    STTORAC provides drivers with ongoing information about their surrounding driving environment, including highlighting potential traffic conflicts on the road. This type of display led to the shortest reaction time in scenarios where drivers had to take over control of the vehicle, showing a significant improvement from both the TORAC and the no-display conditions.
    “When you’re not driving and aren’t engaged, it’s easy to lose focus. Adding information on surrounding traffic kept drivers better engaged in monitoring and anticipating traffic conflicts,” says He, adding that the key takeaway for designers of next-generation AVs is to ensure systems are designed to keep drivers attentive. “Drivers should not be distracted, at least at this stage.”
    Donmez’s team will next look at the effects of non-driving behaviours on drowsiness while operating an AV. “If someone isn’t engaged in a non-driving task and is just monitoring the road, they can be more likely to fall into states of drowsiness, which is even more dangerous than being distracted.”

  • Shrinking massive neural networks used to model language

    You don’t need a sledgehammer to crack a nut.
    Jonathan Frankle is researching artificial intelligence — not noshing pistachios — but the same philosophy applies to his “lottery ticket hypothesis.” It posits that, hidden within massive neural networks, leaner subnetworks can complete the same task more efficiently. The trick is finding those “lucky” subnetworks, dubbed winning lottery tickets.
    In a new paper, Frankle and colleagues discovered such subnetworks lurking within BERT, a state-of-the-art neural network approach to natural language processing (NLP). As a branch of artificial intelligence, NLP aims to decipher and analyze human language, with applications like predictive text generation or online chatbots. In computational terms, BERT is bulky, typically demanding supercomputing power unavailable to most users. Access to BERT’s winning lottery ticket could level the playing field, potentially allowing more users to develop effective NLP tools on a smartphone — no sledgehammer needed.
    “We’re hitting the point where we’re going to have to make these models leaner and more efficient,” says Frankle, adding that this advance could one day “reduce barriers to entry” for NLP.
    Frankle, a PhD student in Michael Carbin’s group at the MIT Computer Science and Artificial Intelligence Laboratory, co-authored the study, which will be presented next month at the Conference on Neural Information Processing Systems. Tianlong Chen of the University of Texas at Austin is the lead author of the paper; collaborators include Zhangyang Wang, also of the University of Texas at Austin, as well as Shiyu Chang, Sijia Liu, and Yang Zhang, all of the MIT-IBM Watson AI Lab.
    You’ve probably interacted with a BERT network today. It’s one of the technologies that underlie Google’s search engine, and it has sparked excitement among researchers since Google released BERT in 2018. BERT is a method of creating neural networks — algorithms that use layered nodes, or “neurons,” to learn to perform a task through training on numerous examples. BERT is trained by repeatedly attempting to fill in words left out of a passage of writing, and its power lies in the gargantuan size of this initial training dataset. Users can then fine-tune BERT’s neural network to a particular task, like building a customer-service chatbot. But wrangling BERT takes a ton of processing power.

    “A standard BERT model these days — the garden variety — has 340 million parameters,” says Frankle, adding that the number can reach 1 billion. Fine-tuning such a massive network can require a supercomputer. “This is just obscenely expensive. This is way beyond the computing capability of you or me.”
    Chen agrees. Despite BERT’s burst in popularity, such models “suffer from enormous network size,” he says. Luckily, “the lottery ticket hypothesis seems to be a solution.”
    To cut computing costs, Chen and colleagues sought to pinpoint a smaller model concealed within BERT. They experimented by iteratively pruning parameters from the full BERT network, then comparing the new subnetwork’s performance to that of the original BERT model. They ran this comparison for a range of NLP tasks, from answering questions to filling the blank word in a sentence.
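    The pruning loop the researchers describe — repeatedly removing the weakest weights and re-testing what remains — can be sketched in plain Python. This is a toy illustration of magnitude-based pruning on a flat weight list, not the paper’s BERT code; all names and numbers below are made up.

```python
# Toy sketch of iterative magnitude pruning, loosely following the
# procedure described above. This is NOT the authors' BERT code:
# the weights, prune rate, and helper names are illustrative only.

def prune_lowest_magnitude(weights, mask, rate=0.2):
    """Zero out the `rate` fraction of remaining weights with the
    smallest absolute values, returning an updated binary mask."""
    remaining = [(abs(w), i)
                 for i, (w, m) in enumerate(zip(weights, mask)) if m]
    remaining.sort()                      # smallest magnitudes first
    n_to_prune = int(len(remaining) * rate)
    new_mask = list(mask)
    for _, i in remaining[:n_to_prune]:
        new_mask[i] = 0                   # weight i is pruned
    return new_mask

# Start with a small "network" of weights, all unpruned.
weights = [0.9, -0.1, 0.4, -0.8, 0.05, 0.6, -0.3, 0.2]
mask = [1] * len(weights)

# Iteratively prune 20% of the surviving weights per round.
for round_ in range(3):
    mask = prune_lowest_magnitude(weights, mask, rate=0.2)

sparsity = 1 - sum(mask) / len(mask)
print(f"surviving weights: {sum(mask)}/{len(mask)} (sparsity {sparsity:.0%})")
```

    In the actual study, the surviving subnetwork would be evaluated on an NLP task after pruning; the sketch only shows the mask-building step.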
    The researchers found successful subnetworks that were 40 to 90 percent slimmer than the initial BERT model, depending on the task. Plus, they were able to identify those winning lottery tickets before running any task-specific fine-tuning — a finding that could further minimize computing costs for NLP. In some cases, a subnetwork picked for one task could be repurposed for another, though Frankle notes this transferability wasn’t universal. Still, Frankle is more than happy with the group’s results.
    “I was kind of shocked this even worked,” he says. “It’s not something that I took for granted. I was expecting a much messier result than we got.”
    This discovery of a winning ticket in a BERT model is “convincing,” according to Ari Morcos, a scientist at Facebook AI Research. “These models are becoming increasingly widespread,” says Morcos. “So it’s important to understand whether the lottery ticket hypothesis holds.” He adds that the finding could allow BERT-like models to run using far less computing power, “which could be very impactful given that these extremely large models are currently very costly to run.”
    Frankle agrees. He hopes this work can make BERT more accessible, because it bucks the trend of ever-growing NLP models. “I don’t know how much bigger we can go using these supercomputer-style computations,” he says. “We’re going to have to reduce the barrier to entry.” Identifying a lean, lottery-winning subnetwork does just that — allowing developers who lack the computing muscle of Google or Facebook to still perform cutting-edge NLP. “The hope is that this will lower the cost, that this will make it more accessible to everyone … to the little guys who just have a laptop,” says Frankle. “To me that’s really exciting.”

  • Researchers study influence of cultural factors on gesture design

    Imagine changing the TV channel with a wave of your hand or turning on the car radio with a twist of your wrist.
    Freehand gesture-based interfaces in interactive systems are becoming more common, but what if your preferred way to gesture a command — say, changing the TV to channel 10 — significantly differed from that of a user from another culture? Would the system recognize your command?
    Researchers from the Penn State College of Information Sciences and Technology and their collaborators explored this question and found that some gesture choices are significantly influenced by the cultural backgrounds of participants.
    “Certain cultures may prefer particular gestures and we may see a difference, but there is common ground between cultures choosing some gestures for the same kind of purposes and actions,” said Xiaolong “Luke” Zhang, associate professor of information sciences and technology and principal investigator of the study. “So we wanted to find out what can be shared among the different cultures, and what the differences are among different cultures to design better products.”
    In their study, the researchers asked American and Chinese participants to perform their preferred gestures for different commands in three separate settings: answering a phone call in the car, rotating an object in a virtual reality environment, and muting the television.
    The team found that while many preferred commands were similar among both cultural groups, there were some gesture choices that differed significantly between the groups. For example, most American participants used a thumbs up gesture to confirm a task in the virtual reality environment, while Chinese participants preferred to make an OK sign with their fingers. To reject a phone call in the car, most American participants made a horizontal movement across their neck with a flat hand, similar to a “cut” motion, while Chinese participants waved a hand back and forth to reject the call. Additionally, in Chinese culture, one hand can represent digits above five, while in American culture an individual can only represent numbers one to five using one hand.
    “This project is among the first research efforts to study the existence of cultural influence on the use and preference of hand gestures,” said Zhang. “We provide empirical evidence to show that we should indeed be aware of this influence.”
    On the other hand, Zhang said, from the perspective of design, the study shows that certain gestures can be common across multiple cultures, while other gestures can be very different.
    “Designers have to be careful when delivering products to different markets,” he said. “(This work could inform companies) to enable users to customize the gesture commands, rather than have them pick something that is unnatural to learn from the perspective of their culture.”

    Story Source:
    Materials provided by Penn State. Original written by Jessica Hallman. Note: Content may be edited for style and length.

  • Next step in simulating the universe

    Computer simulations have struggled to capture the impact of elusive particles called neutrinos on the formation and growth of the large-scale structure of the Universe. But now, a research team from Japan has developed a method that overcomes this hurdle.
    In a study published this month in The Astrophysical Journal, researchers led by the University of Tsukuba present simulations that accurately depict the role of neutrinos in the evolution of the Universe.
    Why are these simulations important? One key reason is that they can set constraints on a currently unknown quantity: the neutrino mass. If this quantity is set to a particular value in the simulations and the simulation results differ from observations, that value can be ruled out. However, the constraints can be trusted only if the simulations are accurate, which was not guaranteed in previous work. The team behind this latest research aimed to address this limitation.
    “Earlier simulations used certain approximations that might not be valid,” says lead author of the study Lecturer Kohji Yoshikawa. “In our work, we avoided these approximations by employing a technique that accurately represents the velocity distribution function of the neutrinos and follows its time evolution.”
    To do this, the research team directly solved a system of equations known as the Vlasov-Poisson equations, which describe how particles move in the Universe. They then carried out simulations for different values of the neutrino mass and systematically examined the effects of neutrinos on the large-scale structure of the Universe.
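    For reference, the collisionless Vlasov-Poisson system, written here in its simplest non-cosmological form (the paper works in comoving coordinates, so its exact notation will differ), couples the phase-space distribution function f(x, v, t) to the gravitational potential φ:

```latex
% Collisionless Vlasov (left) and Poisson (middle) equations, shown in
% their simplest non-cosmological form for illustration only.
\frac{\partial f}{\partial t}
  + \mathbf{v} \cdot \nabla_{\mathbf{x}} f
  - \nabla_{\mathbf{x}} \phi \cdot \nabla_{\mathbf{v}} f = 0,
\qquad
\nabla^{2} \phi = 4 \pi G \rho,
\qquad
\rho(\mathbf{x}, t) = m \int f(\mathbf{x}, \mathbf{v}, t) \, d^{3}v
```

    Solving for f directly on a six-dimensional phase-space grid, rather than approximating neutrinos with sampled particles, is what allows the velocity distribution to be followed accurately.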
    The simulation results demonstrate, for example, that neutrinos suppress the clustering of dark matter — the ‘missing’ mass in the Universe — and in turn galaxies. They also show that neutrino-rich regions are strongly correlated with massive galaxy clusters and that the effective temperature of the neutrinos varies substantially depending on the neutrino mass.
    “Overall, our findings suggest that neutrinos considerably affect the large-scale structure formation, and that our simulations provide an accurate account for the important effect of neutrinos,” explains Lecturer Yoshikawa. “It is also reassuring that our new results are consistent with those from entirely different simulation approaches.”

    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • AI predicts which drug combinations kill cancer cells

    When healthcare professionals treat patients suffering from advanced cancers, they usually need to use a combination of different therapies. In addition to cancer surgery, the patients are often treated with radiation therapy, medication, or both.
    Medications can be combined, with different drugs acting on different cancer cells. Combinatorial drug therapies often improve the effectiveness of treatment and can reduce harmful side-effects if the dosage of individual drugs can be lowered. However, experimental screening of drug combinations is slow and expensive, and therefore often fails to uncover the full benefits of combination therapy. With the help of a new machine learning method, researchers can identify the best combinations to selectively kill cancer cells with a specific genetic or functional makeup.
    Researchers at Aalto University, University of Helsinki and the University of Turku in Finland developed a machine learning model that accurately predicts how combinations of different cancer drugs kill various types of cancer cells. The new AI model was trained with a large set of data obtained from previous studies, which had investigated the association between drugs and cancer cells. ‘The model learned by the machine is actually a polynomial function familiar from school mathematics, but a very complex one,’ says Professor Juho Rousu from Aalto University.
    The research results were published in the journal Nature Communications, demonstrating that the model found associations between drugs and cancer cells that were not observed previously. ‘The model gives very accurate results. For example, the values of the so-called correlation coefficient were more than 0.9 in our experiments, which points to excellent reliability,’ says Professor Rousu. In experimental measurements, a correlation coefficient of 0.8-0.9 is considered reliable.
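    For context on those reliability figures, the correlation coefficient (Pearson’s r) between predicted and measured responses can be computed as follows. The numbers below are invented for illustration; they are not data from the study.

```python
# Pearson correlation between predicted and measured drug-combination
# responses. The values below are made up for illustration only.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [0.10, 0.35, 0.52, 0.70, 0.91]
measured  = [0.12, 0.30, 0.55, 0.68, 0.95]

r = pearson(predicted, measured)
print(f"correlation coefficient r = {r:.3f}")  # close to 1: strong agreement
```

    A value of r above 0.9, as reported for the model, means predictions track measurements almost linearly.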
    The model accurately predicts how a drug combination selectively inhibits particular cancer cells when the effect of the drug combination on that type of cancer has not been previously tested. ‘This will help cancer researchers to prioritize which drug combinations to choose from thousands of options for further research,’ says researcher Tero Aittokallio from the Institute for Molecular Medicine Finland (FIMM) at the University of Helsinki.
    The same machine learning approach could be used for non-cancerous diseases. In this case, the model would have to be retrained with data related to that disease. For example, the model could be used to study how different combinations of antibiotics affect bacterial infections, or how effectively different combinations of drugs kill cells that have been infected by the SARS-CoV-2 coronavirus.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  • Microfluidic system with cell-separating powers may unravel how novel pathogens attack

    To develop effective therapeutics against pathogens, scientists need to first uncover how they attack host cells. An efficient way to conduct these investigations on an extensive scale is through high-speed screening tests called assays.
    Researchers at Texas A&M University have invented a high-throughput cell separation method that can be used in conjunction with droplet microfluidics, a technique whereby tiny drops of fluid containing biological or other cargo can be moved precisely and at high speeds. Specifically, the researchers successfully isolated pathogens attached to host cells from those that were unattached within a single fluid droplet using an electric field.
    “Other than cell separation, most biochemical assays have been successfully converted into droplet microfluidic systems that allow high-throughput testing,” said Arum Han, professor in the Department of Electrical and Computer Engineering and principal investigator of the project. “We have addressed that gap, and now cell separation can be done in a high-throughput manner within the droplet microfluidic platform. This new system certainly simplifies studying host-pathogen interactions, but it is also very useful for environmental microbiology or drug screening applications.”
    The researchers reported their findings in the August issue of the journal Lab on a Chip.
    Microfluidic devices consist of networks of micron-sized channels or tubes that allow for controlled movements of fluids. Recently, microfluidics using water-in-oil droplets have gained popularity for a wide range of biotechnological applications. These droplets, which are picoliters (or a million times less than a microliter) in volume, can be used as platforms for carrying out biological reactions or transporting biological materials. Millions of droplets within a single chip facilitate high-throughput experiments, saving not just laboratory space but the cost of chemical reagents and manual labor.
    Biological assays can involve different cell types within a single droplet, which eventually need to be separated for subsequent analyses. This task is extremely challenging in a droplet microfluidic system, Han said.
    “Getting cell separation within a tiny droplet is extremely difficult because, if you think about it, first, it’s a tiny 100-micron diameter droplet, and second, within this extremely tiny droplet, multiple cell types are all mixed together,” he said.
    To develop the technology needed for cell separation, Han and his team chose a host-pathogen model system consisting of the salmonella bacteria and the human macrophage, a type of immune cell. When both these cell types are introduced within a droplet, some of the bacteria adhere to the macrophage cells. The goal of their experiments was to separate the salmonella that attached to the macrophage from the ones that did not.
    For cell separation, Han and his team constructed two pairs of electrodes that generated an oscillating electric field in close proximity to the droplet containing the two cell types. Since the bacteria and the host cells have different shapes, sizes and electrical properties, they found that the electric field produced a different force on each cell type. This force resulted in the movement of one cell type at a time, separating the cells into two different locations within the droplet. To separate the mother droplet into two daughter droplets containing one type of cells, the researchers also made a downstream Y-shaped splitting junction.
    Han said that although these experiments were carried out with a host and pathogen whose interaction is well established, the new microfluidic system with in-drop separation is most useful when the pathogenicity of a bacterial species is unknown. He added that the technology enables quick, high-throughput screening in these situations and in other applications where cell separation is required.
    “Liquid handling robotic hands can conduct millions of assays but are extremely costly. Droplet microfluidics can do the same in millions of droplets, much faster and much cheaper,” Han said. “We have now integrated cell separation technology into droplet microfluidic systems, allowing the precise manipulation of cells in droplets in a high-throughput manner, which was not possible before.”

    Story Source:
    Materials provided by Texas A&M University. Original written by Vandana Suresh. Note: Content may be edited for style and length.

  • Report assesses promises and pitfalls of private investment in conservation

    The Ecological Society of America (ESA) today released a report entitled “Innovative Finance for Conservation: Roles for Ecologists and Practitioners” that offers guidelines for developing standardized, ethical and effective conservation finance projects.
    Public and philanthropic sources currently supply most of the funds for protecting and conserving species and ecosystems. However, the private sector is now driving demand for market-based mechanisms that support conservation projects with positive environmental, social and financial returns. Examples of projects that can support this triple bottom line include green infrastructure for stormwater management, clean transport projects and sustainable production of food and fiber products.
    “The reality is that public and philanthropic funds are insufficient to meet the challenge to conserve the world’s biodiversity,” said Garvin Professor and Senior Director of Conservation Science at Cornell University Amanda Rodewald, the report’s lead author. “Private investments represent a new path forward both because of their enormous growth potential and their ability to be flexibly adapted to a wide variety of social and ecological contexts.”
    Today’s report examines the legal, social and ethical issues associated with innovative conservation finance and offers resources and guidelines for increasing private capital commitments to conservation. It also identifies priority actions that individuals and organizations working in conservation finance will need to adopt in order to “mainstream” the field.
    One priority action is to standardize the metrics that allow practitioners to compare and evaluate projects. While the financial services and investment sectors regularly employ standardized indicators of financial risk and return, it is more difficult to apply such indicators to conservation projects. Under certain conservation financing models, for example, returns on investment are partially determined by whether the conservation project is successful — but “success” can be difficult to quantify when it is defined by complex social or environmental changes, such as whether a bird species is more or less at risk of going extinct as a result of a conservation project.
    Another priority action is to establish safeguards and ethical standards for involving local stakeholders, including Indigenous communities. In the absence of robust accountability and transparency measures, mobilizing private capital in conservation can result in unjust land grabs or in unscrupulous investments where profits flow disproportionately to wealthy or powerful figures. The report offers guidelines for ensuring that conservation financing improves the prosperity of local communities.
    According to co-author Peter Arcese, a professor at the University of British Columbia and adjunct professor at Cornell University, opportunities in conservation finance are growing for patient investors who are interested in generating modest returns while simultaneously supporting sustainable development.
    “Almost all landowners I’ve worked with in Africa and North and South America share a deep desire to maintain or enhance the environmental, cultural and aesthetic values of the ecosystems their land supports,” Arcese said. “By creating markets and stimulating investment in climate mitigation, and forest, water and biodiversity conservation projects, we can offer landowners alternative income sources and measurably slow habitat loss and degradation.”
    Rodewald sees a similar landscape of interest and opportunity. “No matter the system — be it a coffee plantation in the Andes, a timber harvest in the Pacific Northwest, or a farm in the Great Plains — I am reminded again and again that conservation is most successful when we safeguard the health and well-being of local communities. Private investments can be powerful tools to do just that,” said Rodewald.
    Report: Amanda Rodewald, et al. 2020. “Innovative Finance for Conservation: Roles for Ecologists and Practitioners.”

    Story Source:
    Materials provided by Ecological Society of America. Note: Content may be edited for style and length.

  • Esports: Fit gamers challenge ‘fat’ stereotype

    Esports players are up to 21 per cent more likely to be a healthy weight than the general population, hardly smoke and drink less too, finds a new QUT (Queensland University of Technology) study.
    The findings, published in the International Journal of Environmental Research and Public Health, were based on 1400 survey participants from 65 countries.
    The key findings:
  • It is the first study to investigate the BMI (Body Mass Index) status of a global sample of esports players.
  • Esports players were between 9 and 21 per cent more likely to be a healthy weight than the general population.
  • Esports players drank and smoked less than the general population.
  • The top 10 per cent of esports players were significantly more physically active than lower-level players, suggesting that physical activity could influence esports expertise.
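    For readers unfamiliar with the measure, BMI is simply weight in kilograms divided by height in metres squared. A minimal sketch with the standard WHO adult cut-offs (illustrative only, not code from the study):

```python
# Body Mass Index: weight (kg) divided by height (m) squared,
# classified with the standard WHO adult categories. Illustrative only.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def who_category(b):
    if b < 18.5:
        return "underweight"
    elif b < 25:
        return "healthy weight"
    elif b < 30:
        return "overweight"
    else:
        return "obese"

b = bmi(70, 1.78)
print(f"BMI {b:.1f}: {who_category(b)}")
```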
    QUT eSports researcher Michael Trotter said the results were surprising considering global obesity levels.
    “The findings challenge the stereotype of the morbidly obese gamer,” he said.
    Mr Trotter said the animated satire South Park poked fun at the unfit gamer but the link between video gaming and obesity had not been strongly established.
    “When you think of esports, there are often concerns raised regarding sedentary behaviour and poor health as a result, and the study revealed some interesting and mixed results,” he said.

    “As part of their training regime, elite esports athletes spend more than an hour per day engaging in physical exercise as a strategy to enhance gameplay and manage stress,” he said.
    The World Health Organisation guidelines recommend a minimum of 150 minutes of physical activity per week.
    “Only top-level players surveyed met physical activity guidelines, with the best players exercising on average four days a week,” the PhD student said.
    However, the study also found that esports players were 4.03 per cent more likely to be morbidly obese than the global population.
    Mr Trotter said strategies should be developed to support players classed at the higher end of BMI categories.

    “Exercise and physical activity play a role in success in esports and should be a focus for players and organisations training esports players,” Mr Trotter said.
    “This will mean that in the future, young gamers will have more reason and motivation to be physically active.
    “Grassroots esports pathways, such as growing university and high school esports are likely to be the best place for young esports players to develop good health habits for gamers.”
    The research also found that esports players are 7.8 per cent more likely than the general population to abstain from alcohol, and of those players who do drink, only 0.5 per cent reported drinking daily.
    The survey showed that only 3.7 per cent of esports players smoked daily, compared with 18.7 per cent in global data.
    Future research will investigate how high-school and university esports programs can improve health outcomes and increase physical activity for gaming students.
    The study was led by QUT’s Faculty of Health School of Exercise and Nutrition Sciences and in collaboration with the Department of Psychology at Umeå University in Sweden.

    Story Source:
    Materials provided by Queensland University of Technology. Note: Content may be edited for style and length.