More stories

  •

    Self-repairing gelatin-based film could be a smart move for electronics

    Dropping a cell phone can sometimes cause superficial cracks to appear. But other times, the device can stop working altogether because fractures develop in the material that stores data. Now, researchers have made an environmentally friendly, gelatin-based film that can repair itself multiple times and still maintain the electronic signals needed to access a device’s data. The material could be used someday in smart electronics and health-monitoring devices.

  •

    New lab-on-a-chip infection test could provide cheaper, faster portable diagnostics

    The chip, developed at Imperial College London and known as TriSilix, is a ‘micro laboratory’ that performs a miniature version of the polymerase chain reaction (PCR) on the spot. PCR is the gold-standard test for detecting viruses and bacteria in biological samples such as bodily fluids and faeces, as well as in environmental samples.
    Although PCR is usually performed in a laboratory, which means test results aren’t immediately available, this new lab-on-a-chip can process and present results in a matter of minutes.
    The chip is made from silicon, the same material that is used to make electronic chips. Silicon itself is cheap; however, it is expensive to process into chips, a step that requires massive, ‘extremely clean’ factories known as cleanrooms. To make the new lab-on-a-chip, the researchers developed a series of methods to produce the chips in a standard laboratory, cutting the costs and time they take to fabricate and potentially allowing them to be produced anywhere in the world.
    Lead researcher Dr Firat Guder of Imperial’s Department of Bioengineering said: “Rather than sending swabs to the lab or going to a clinic, the lab could come to you on a fingernail-sized chip. You would use the test much like how people with diabetes use blood sugar tests, by providing a sample and waiting for results — except this time it’s for infectious diseases.”
    The paper is published today in Nature Communications.
    The researchers have so far used TriSilix to diagnose a bacterial infection mainly present in animals as well as a synthetic version of the genetic material from SARS-CoV-2, the virus behind COVID-19.

    The researchers say the system could in future be mounted onto handheld blood sugar test-style devices. This would let people test themselves and receive results at home for colds, flu, recurrent infections like those of the urinary tract (UTIs), and COVID-19.
    Table-top devices for testing infections like COVID-19 already exist, but these tests can be time-consuming and costly because the patient must go to a clinic, have a sample taken by a healthcare worker, and then go home or stay at the clinic to wait. People leaving their homes when they are unwell also increases the risk of spreading a pathogen to others.
    If validated on human samples, this new test could provide results outside a clinic, at home or on-the-go within minutes.
    The researchers also say a highly portable test could accelerate the diagnosis of infections and reduce costs by eliminating the transportation of samples. Such tests could be performed by citizens without highly trained medical professionals on hand; anyone who needs to self-isolate could then begin doing so immediately, without potentially infecting others.
    Making testing more accessible and cheaper is especially important for people in rural areas of low-income countries, where clinics can be far away and expensive to travel to. If made available to patients, it could also be used to diagnose and monitor infections like UTIs, which often recur despite antibiotics.
    First author Dr Estefania Nunez-Bajo, also of the Department of Bioengineering, said: “Monitoring infections at home could even help patients, with the help of their doctor, to personalise and tailor their antibiotic use to help reduce the growing problem of antibiotic resistance.”
    Each lab-on-a-chip contains a DNA sensor, temperature detector and heater to automate the testing process. A typical smartphone battery could power up to 35 tests on a single charge.
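    The article does not give implementation details, but the division of labour between the heater and the temperature detector can be sketched as a simple thermal-cycling loop. The temperature profile, cycle count and the read_temperature/set_heater stubs below are illustrative placeholders, not the TriSilix firmware or the paper’s actual protocol.

    ```python
    import time

    # Hypothetical PCR thermal-cycling profile: (step, target temperature in C, hold in s).
    # Real assay settings, including TriSilix's own, will differ.
    PCR_PROFILE = [
        ("denaturation", 95.0, 15),
        ("annealing", 55.0, 15),
        ("extension", 72.0, 30),
    ]
    NUM_CYCLES = 35

    def read_temperature() -> float:
        """Placeholder for the chip's integrated temperature detector."""
        raise NotImplementedError

    def set_heater(power: float) -> None:
        """Placeholder for driving the chip's resistive heater (0.0 to 1.0)."""
        raise NotImplementedError

    def hold(target: float, seconds: float, tolerance: float = 0.5) -> None:
        """Crude bang-bang control: keep the sample near `target` for `seconds`."""
        start = time.monotonic()
        while time.monotonic() - start < seconds:
            set_heater(1.0 if read_temperature() < target - tolerance else 0.0)
            time.sleep(0.1)

    def run_pcr() -> None:
        """Repeat the three-step temperature cycle, then switch the heater off."""
        for _ in range(NUM_CYCLES):
            for _step, target, seconds in PCR_PROFILE:
                hold(target, seconds)
        set_heater(0.0)
    ```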
    Next, the researchers plan to validate their chip with clinical samples, automate the preparation of samples and advance their handheld electronics. They are looking for partners and funders to help accelerate the translation of the technology and deliver testing in resource-limited settings such as homes, farms or remote locations in the developing world.

    Story Source:
    Materials provided by Imperial College London. Original written by Caroline Brogan. Note: Content may be edited for style and length.

  •

    How automated vehicles can impede driver performance, and what to do about it

    As cars keep getting smarter, automation is taking many tricky tasks — from parallel parking to backing up — out of drivers’ hands.
    Now, a University of Toronto Engineering study is underscoring the importance of drivers keeping their eyes on the road — even when they are in an automated vehicle (AV).
    Using an AV driving simulator and eye-tracking equipment, Professor Birsen Donmez and her team studied two types of in-vehicle displays and their effects on the driving behaviours of 48 participants.
    The findings, published recently in the journal Accident Analysis & Prevention, revealed that drivers can become over-reliant on AV technology. This was especially true with a type of in-vehicle display the team has termed takeover request and automation capability (TORAC).
    A “takeover request” asks the driver to take vehicle control when automation is not able to handle a situation; “automation capability” indicates how close to that limit the automation is.
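    The study itself does not publish display code; purely as an illustration of how those two signals relate, a TORAC-style prompt could be driven by logic like the sketch below, where the capability estimate and the 0.2 threshold are invented for the example.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ToracDisplay:
        """Illustrative TORAC-style state, not the study's implementation.

        capability: 0.0-1.0 estimate of how far the automation is from its limit.
        takeover_threshold: invented cut-off below which the driver is prompted.
        """
        capability: float
        takeover_threshold: float = 0.2

        def takeover_requested(self) -> bool:
            # Ask the driver to take control as the automation nears its limit.
            return self.capability < self.takeover_threshold

    display = ToracDisplay(capability=0.15)
    print(display.takeover_requested())  # True: the driver is prompted to take over
    ```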
    “Drivers find themselves in situations where, although they are not actively driving, they are still part of the driving task — they must be monitoring the vehicle and step in if the vehicle fails,” says Donmez.

    “And these vehicles fail, it’s just guaranteed. The technology on the market right now is not mature enough to the point where we can just let the car drive and we go to sleep. We are not at that stage yet.”
    Tesla’s AV system, for example, warns drivers every 30 seconds or less when their hands aren’t detected on the wheel. This prompt can support driver engagement to some extent, but when the automation fails, driver attention and anticipation are the key factors that determine whether or not a crash occurs.
    “Even though cars are advertised right now as self-driving, they are still just Level 2, or partially automated,” adds Dengbo He, postdoctoral fellow and lead author. “The driver should not rely on these types of vehicle automation.”
    In one of the team’s driving scenarios, the participants were given a non-driving, self-paced task — meant to mimic common distractions such as reading text messages — while takeover prompts and automation capability information were turned on.
    “Their monitoring of the road went way down compared to the condition where these features were turned off,” says Donmez. “Automated vehicles and takeover requests can give people a false sense of security, especially if they work most of the time. People are going to end up looking away and doing something non-driving related.”
    The researchers also tested a second in-vehicle display, called STTORAC, which added information on surrounding traffic to the data provided by the TORAC system. This display showed more promise in ensuring driving safety.
    STTORAC provides drivers with ongoing information about their surrounding driving environment, including highlighting potential traffic conflicts on the road. This type of display led to the shortest reaction time in scenarios where drivers had to take over control of the vehicle, showing a significant improvement from both the TORAC and the no-display conditions.
    “When you’re not driving and aren’t engaged, it’s easy to lose focus. Adding information on surrounding traffic kept drivers better engaged in monitoring and anticipating traffic conflicts,” says He, adding that the key takeaway for designers of next-generation AVs is to ensure systems are designed to keep drivers attentive. “Drivers should not be distracted, at least at this stage.”
    Donmez’s team will next look at the effects of non-driving behaviours on drowsiness while operating an AV. “If someone isn’t engaged in a non-driving task and is just monitoring the road, they can be more likely to fall into states of drowsiness, which is even more dangerous than being distracted.”

  •

    Shrinking massive neural networks used to model language

    You don’t need a sledgehammer to crack a nut.
    Jonathan Frankle is researching artificial intelligence — not noshing pistachios — but the same philosophy applies to his “lottery ticket hypothesis.” It posits that, hidden within massive neural networks, leaner subnetworks can complete the same task more efficiently. The trick is finding those “lucky” subnetworks, dubbed winning lottery tickets.
    In a new paper, Frankle and colleagues discovered such subnetworks lurking within BERT, a state-of-the-art neural network approach to natural language processing (NLP). As a branch of artificial intelligence, NLP aims to decipher and analyze human language, with applications like predictive text generation or online chatbots. In computational terms, BERT is bulky, typically demanding supercomputing power unavailable to most users. Access to BERT’s winning lottery ticket could level the playing field, potentially allowing more users to develop effective NLP tools on a smartphone — no sledgehammer needed.
    “We’re hitting the point where we’re going to have to make these models leaner and more efficient,” says Frankle, adding that this advance could one day “reduce barriers to entry” for NLP.
    Frankle, a PhD student in Michael Carbin’s group at the MIT Computer Science and Artificial Intelligence Laboratory, co-authored the study, which will be presented next month at the Conference on Neural Information Processing Systems. Tianlong Chen of the University of Texas at Austin is the lead author of the paper, whose collaborators include Zhangyang Wang, also of the University of Texas at Austin, as well as Shiyu Chang, Sijia Liu, and Yang Zhang, all of the MIT-IBM Watson AI Lab.
    You’ve probably interacted with a BERT network today. It’s one of the technologies that underlie Google’s search engine, and it has sparked excitement among researchers since Google released BERT in 2018. BERT is a method of creating neural networks — algorithms that use layered nodes, or “neurons,” to learn to perform a task through training on numerous examples. BERT is trained by repeatedly attempting to fill in words left out of a passage of writing, and its power lies in the gargantuan size of this initial training dataset. Users can then fine-tune BERT’s neural network to a particular task, like building a customer-service chatbot. But wrangling BERT takes a ton of processing power.
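    The fill-in-the-blank objective is easy to try for yourself. The sketch below assumes the Hugging Face transformers package and the publicly released bert-base-uncased checkpoint are available locally; neither is part of the article.

    ```python
    # Masked-word prediction with a pretrained BERT model (pip install transformers).
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")

    # BERT is pretrained to guess the token hidden behind [MASK].
    for prediction in unmasker("Neural networks learn to perform a task from many [MASK]."):
        print(prediction["token_str"], round(prediction["score"], 3))
    ```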

    “A standard BERT model these days — the garden variety — has 340 million parameters,” says Frankle, adding that the number can reach 1 billion. Fine-tuning such a massive network can require a supercomputer. “This is just obscenely expensive. This is way beyond the computing capability of you or me.”
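    A rough calculation of my own, not a figure from the article, shows why that scale hurts: in 32-bit floating point the weights alone occupy

    ```latex
    340 \times 10^{6} \ \text{parameters} \times 4 \ \text{bytes} \approx 1.36 \ \text{GB},
    ```

    and fine-tuning with a typical optimizer needs several times that again for gradients and optimizer state, before any activations are counted.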
    Chen agrees. Despite BERT’s burst in popularity, such models “suffer from enormous network size,” he says. Luckily, “the lottery ticket hypothesis seems to be a solution.”
    To cut computing costs, Chen and colleagues sought to pinpoint a smaller model concealed within BERT. They experimented by iteratively pruning parameters from the full BERT network, then comparing the new subnetwork’s performance to that of the original BERT model. They ran this comparison for a range of NLP tasks, from answering questions to filling in a blank word in a sentence.
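    That prune-and-compare loop follows the general recipe of iterative magnitude pruning used in lottery-ticket work. The toy model, the 20 percent pruning fraction and the omitted training step below are placeholders; the authors applied the idea to BERT and real NLP benchmarks, not to this sketch.

    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy stand-in for a large network; the paper prunes BERT, not this model.
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    initial_state = {k: v.clone() for k, v in model.state_dict().items()}
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

    def prune_step(fraction: float) -> None:
        """Zero out the smallest-magnitude fraction of the still-unpruned weights."""
        remaining = torch.cat([p.abs()[masks[n].bool()]
                               for n, p in model.named_parameters() if n in masks])
        threshold = torch.quantile(remaining, fraction)
        for n, p in model.named_parameters():
            if n in masks:
                masks[n] *= (p.abs() > threshold).float()

    def rewind_and_mask() -> None:
        """Lottery-ticket step: reset surviving weights to their original initial values."""
        model.load_state_dict(initial_state)
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in masks:
                    p *= masks[n]

    # One round of "train (omitted here), prune 20%, rewind"; repeating the round
    # drives sparsity toward the 40-90% range reported in the study.
    prune_step(0.2)
    rewind_and_mask()
    kept = sum(m.sum().item() for m in masks.values())
    total = sum(m.numel() for m in masks.values())
    print(f"weights remaining: {kept / total:.0%}")
    ```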
    The researchers found successful subnetworks that were 40 to 90 percent slimmer than the initial BERT model, depending on the task. Plus, they were able to identify those winning lottery tickets before running any task-specific fine-tuning — a finding that could further minimize computing costs for NLP. In some cases, a subnetwork picked for one task could be repurposed for another, though Frankle notes this transferability wasn’t universal. Still, Frankle is more than happy with the group’s results.
    “I was kind of shocked this even worked,” he says. “It’s not something that I took for granted. I was expecting a much messier result than we got.”
    This discovery of a winning ticket in a BERT model is “convincing,” according to Ari Morcos, a scientist at Facebook AI Research. “These models are becoming increasingly widespread,” says Morcos. “So it’s important to understand whether the lottery ticket hypothesis holds.” He adds that the finding could allow BERT-like models to run using far less computing power, “which could be very impactful given that these extremely large models are currently very costly to run.”
    Frankle agrees. He hopes this work can make BERT more accessible, because it bucks the trend of ever-growing NLP models. “I don’t know how much bigger we can go using these supercomputer-style computations,” he says. “We’re going to have to reduce the barrier to entry.” Identifying a lean, lottery-winning subnetwork does just that — allowing developers who lack the computing muscle of Google or Facebook to still perform cutting-edge NLP. “The hope is that this will lower the cost, that this will make it more accessible to everyone … to the little guys who just have a laptop,” says Frankle. “To me that’s really exciting.”

  •

    Researchers study influence of cultural factors on gesture design

    Imagine changing the TV channel with a wave of your hand or turning on the car radio with a twist of your wrist.
    Freehand gesture-based interfaces in interactive systems are becoming more common, but what if your preferred way to gesture a command — say, changing the TV to channel 10 — significantly differed from that of a user from another culture? Would the system recognize your command?
    Researchers from the Penn State College of Information Sciences and Technology and their collaborators explored this question and found that some gesture choices are significantly influenced by the cultural backgrounds of participants.
    “Certain cultures may prefer particular gestures and we may see a difference, but there is common ground between cultures choosing some gestures for the same kind of purposes and actions,” said Xiaolong “Luke” Zhang, associate professor of information sciences and technology and principal investigator of the study. “So we wanted to find out what can be shared among the different cultures, and what the differences are among different cultures to design better products.”
    In their study, the researchers asked American and Chinese participants to perform their preferred gestures for different commands in three separate settings: answering a phone call in the car, rotating an object in a virtual reality environment, and muting the television.
    The team found that while many preferred commands were similar among both cultural groups, there were some gesture choices that differed significantly between the groups. For example, most American participants used a thumbs up gesture to confirm a task in the virtual reality environment, while Chinese participants preferred to make an OK sign with their fingers. To reject a phone call in the car, most American participants made a horizontal movement across their neck with a flat hand, similar to a “cut” motion, while Chinese participants waved a hand back and forth to reject the call. Additionally, in Chinese culture, one hand can represent digits above five, while in American culture an individual can only represent numbers one to five using one hand.
    “This project is one of the first kind of research to study the existence of cultural influence and the use of preferences of hand gestures,” said Zhang. “We provide empirical evidence to show indeed that we should be aware of the existence of this matter.”
    On the other hand, Zhang said, from the perspective of design, the study shows that certain gestures can be common across multiple cultures, while other gestures can be very different.
    “Designers have to be careful when delivering products to different markets,” he said. “(This work could inform companies) to enable users to customize the gesture commands, rather than have them pick something that is unnatural to learn from the perspective of their culture.”
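    One lightweight way to act on that advice is to treat the gesture-to-command mapping as user-editable data seeded with culture-specific defaults. The mapping below is a made-up illustration based on the gestures reported in the study, not software from the paper.

    ```python
    # Hypothetical per-locale defaults, overridable by each user.
    DEFAULT_GESTURES = {
        "en-US": {"confirm": "thumbs_up", "reject_call": "flat_hand_cut"},
        "zh-CN": {"confirm": "ok_sign", "reject_call": "hand_wave"},
    }

    def build_gesture_map(locale: str, user_overrides: dict | None = None) -> dict:
        """Start from the locale's defaults, then apply the user's own choices."""
        mapping = dict(DEFAULT_GESTURES.get(locale, DEFAULT_GESTURES["en-US"]))
        mapping.update(user_overrides or {})
        return mapping

    print(build_gesture_map("zh-CN", {"confirm": "thumbs_up"}))
    # {'confirm': 'thumbs_up', 'reject_call': 'hand_wave'}
    ```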

    Story Source:
    Materials provided by Penn State. Original written by Jessica Hallman. Note: Content may be edited for style and length.

  •

    Next step in simulating the universe

    Computer simulations have struggled to capture the impact of elusive particles called neutrinos on the formation and growth of the large-scale structure of the Universe. But now, a research team from Japan has developed a method that overcomes this hurdle.
    In a study published this month in The Astrophysical Journal, researchers led by the University of Tsukuba present simulations that accurately depict the role of neutrinos in the evolution of the Universe.
    Why are these simulations important? One key reason is that they can set constraints on a currently unknown quantity: the neutrino mass. If this quantity is set to a particular value in the simulations and the simulation results differ from observations, that value can be ruled out. However, the constraints can be trusted only if the simulations are accurate, which was not guaranteed in previous work. The team behind this latest research aimed to address this limitation.
    “Earlier simulations used certain approximations that might not be valid,” says lead author of the study Lecturer Kohji Yoshikawa. “In our work, we avoided these approximations by employing a technique that accurately represents the velocity distribution function of the neutrinos and follows its time evolution.”
    To do this, the research team directly solved a system of equations known as the Vlasov-Poisson equations, which describe how particles move in the Universe. They then carried out simulations for different values of the neutrino mass and systematically examined the effects of neutrinos on the large-scale structure of the Universe.
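    For reference, in their simplest Newtonian, non-expanding form the Vlasov-Poisson equations evolve the neutrino phase-space distribution f(x, v, t) under the gravitational potential phi; the cosmological version solved in the study adds scale-factor terms for the expanding Universe that are omitted here.

    ```latex
    \frac{\partial f}{\partial t}
      + \mathbf{v}\cdot\nabla_{\mathbf{x}} f
      - \nabla_{\mathbf{x}}\phi \cdot \nabla_{\mathbf{v}} f = 0,
    \qquad
    \nabla^{2}\phi = 4\pi G\, m \int f \, \mathrm{d}^{3}v,
    ```

    where m is the neutrino mass, so the mass density that sources the potential is the velocity integral of f.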
    The simulation results demonstrate, for example, that neutrinos suppress the clustering of dark matter — the ‘missing’ mass in the Universe — and in turn galaxies. They also show that neutrino-rich regions are strongly correlated with massive galaxy clusters and that the effective temperature of the neutrinos varies substantially depending on the neutrino mass.
    “Overall, our findings suggest that neutrinos considerably affect the large-scale structure formation, and that our simulations provide an accurate account for the important effect of neutrinos,” explains Lecturer Yoshikawa. “It is also reassuring that our new results are consistent with those from entirely different simulation approaches.”

    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  •

    AI predicts which drug combinations kill cancer cells

    When healthcare professionals treat patients suffering from advanced cancers, they usually need to use a combination of different therapies. In addition to cancer surgery, the patients are often treated with radiation therapy, medication, or both.
    Medication can be combined, with different drugs acting on different cancer cells. Combinatorial drug therapies often improve the effectiveness of the treatment and can reduce harmful side-effects, because the dosages of the individual drugs can be lowered. However, experimental screening of drug combinations is very slow and expensive, and therefore often fails to discover the full benefits of combination therapy. With the help of a new machine learning method, one could identify the best combinations to selectively kill cancer cells with a specific genetic or functional makeup.
    Researchers at Aalto University, University of Helsinki and the University of Turku in Finland developed a machine learning model that accurately predicts how combinations of different cancer drugs kill various types of cancer cells. The new AI model was trained with a large set of data obtained from previous studies, which had investigated the association between drugs and cancer cells. ‘The model learned by the machine is actually a polynomial function familiar from school mathematics, but a very complex one,’ says Professor Juho Rousu from Aalto University.
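    The article does not write out the polynomial. One common way to learn such a function over drug, dose and cell-line features is a factorization machine; its second-order form is shown below purely as an illustration, and whether it matches the paper’s exact model is an assumption on my part.

    ```latex
    \hat{y}(\mathbf{x}) = w_0
      + \sum_{i=1}^{n} w_i x_i
      + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j,
    ```

    where x encodes the drug pair, the doses and the cancer cell line, and each pairwise interaction weight is factorized into low-dimensional vectors v_i so the polynomial stays learnable even with many features.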
    The research results were published in the journal Nature Communications, demonstrating that the model found associations between drugs and cancer cells that were not observed previously. ‘The model gives very accurate results. For example, the values of the so-called correlation coefficient were more than 0.9 in our experiments, which points to excellent reliability,’ says Professor Rousu. In experimental measurements, a correlation coefficient of 0.8-0.9 is considered reliable.
    The model accurately predicts how a drug combination selectively inhibits particular cancer cells when the effect of the drug combination on that type of cancer has not been previously tested. ‘This will help cancer researchers to prioritize which drug combinations to choose from thousands of options for further research,’ says researcher Tero Aittokallio from the Institute for Molecular Medicine Finland (FIMM) at the University of Helsinki.
    The same machine learning approach could be used for non-cancerous diseases. In this case, the model would have to be re-taught with data related to that disease. For example, the model could be used to study how different combinations of antibiotics affect bacterial infections or how effectively different combinations of drugs kill cells that have been infected by the SARS-CoV-2 coronavirus.

    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.