More stories

  • Chaotic early solar system collisions resembled 'Asteroids' arcade game

    One Friday evening in 1992, a meteorite ended a more than 150 million-mile journey by smashing into the trunk of a red Chevrolet Malibu in Peekskill, New York. The car’s owner reported that the 30-pound remnant of the earliest days of our solar system was still warm and smelled of sulfur.
    Nearly 30 years later, a new analysis of that same Peekskill meteorite and 17 others by researchers at The University of Texas at Austin and the University of Tennessee, Knoxville, has led to a new hypothesis about how asteroids formed during the early years of the solar system.
    The meteorites studied in the research originated from asteroids and serve as natural samples of the space rocks. They indicate that the asteroids formed through violent bombardment and subsequent reassembly, a finding that runs counter to the prevailing idea that the young solar system was a peaceful place.
    The study was published in print Dec. 1 in the journal Geochimica et Cosmochimica Acta.
    The research began when co-author Nick Dygert was a postdoctoral fellow at UT’s Jackson School of Geosciences studying terrestrial rocks using a method that could measure the cooling rates of rocks from very high temperatures, up to 1,400 degrees Celsius.
    Dygert, now an assistant professor at the University of Tennessee, realized that this method — called a rare earth element (REE)-in-two-pyroxene thermometer — could work for space rocks, too.

    “This is a really powerful new technique for using geochemistry to understand geophysical processes, and no one had used it to measure meteorites yet,” Dygert said.
    Since the 1970s, scientists have been measuring minerals in meteorites to figure out how they formed. The work suggested that meteorites cooled very slowly from the outside inward in layers. This “onion shell model” is consistent with a relatively peaceful young solar system where chunks of rock orbited unhindered. But those studies were only capable of measuring cooling rates from temperatures of about 500 degrees Celsius.
    When Dygert and Michael Lucas, a postdoctoral scholar at the University of Tennessee who led the work, applied the REE-in-two-pyroxene method, with its much higher sensitivity to peak temperature, they found unexpected results. From around 900 degrees Celsius down to 500 degrees Celsius, cooling rates were 1,000 to 1 million times faster than at lower temperatures.
    How could these two very different cooling rates be reconciled?
    The scientists proposed that asteroids formed in stages. If the early solar system was, much like the old Atari game “Asteroids,” rife with bombardment, large rocks would have been smashed to bits. Those smaller pieces would have cooled quickly. Afterward, when the small pieces reassembled into larger asteroids we see today, cooling rates would have slowed.

    To test this rubble pile hypothesis, Jackson School Professor Marc Hesse and first-year doctoral student Jialong Ren built a computational model of a two-stage thermal history of rubble pile asteroids for the first time.
    Because of the vast number of pieces in a rubble pile — 10¹⁵, or a thousand trillion — and the vast array of their sizes, Ren had to develop new techniques to account for changes in mass and temperature before and after bombardment.
    “This was an intellectually significant contribution,” Hesse said.
    The resulting model supports the rubble pile hypothesis and provides other insights as well. One implication is that cooling slowed so much after reassembly not because the rock gave off heat in layers, but because the rubble pile contained pores.
    “The porosity reduces how fast you can conduct heat,” Hesse said. “You actually cool slower than you would have if you hadn’t fragmented because all of the rubble makes kind of a nice blanket. And that’s sort of unintuitive.”
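    To see intuitively why pores slow the cooling, note that a body's characteristic conductive cooling time scales roughly with the square of its radius divided by its thermal diffusivity (conductivity divided by density times heat capacity), so lowering the effective conductivity stretches the cooling time. The short calculation below is only a back-of-the-envelope illustration with assumed, order-of-magnitude material properties; it is not the team's numerical model.

    ```python
    # Back-of-the-envelope illustration of why porosity slows conductive cooling.
    # All property values are assumed, order-of-magnitude numbers for a generic
    # rocky body; this is not the study's thermal model of rubble pile asteroids.

    def cooling_time_myr(radius_m, conductivity, density, heat_capacity):
        """Characteristic conductive cooling time, t ~ R^2 / kappa, in millions of years."""
        kappa = conductivity / (density * heat_capacity)   # thermal diffusivity, m^2/s
        t_seconds = radius_m ** 2 / kappa
        return t_seconds / 3.15e13                         # seconds per million years

    radius = 50e3       # a 50 km body (assumed)
    density = 3300.0    # kg/m^3, rock-like (assumed)
    heat_cap = 800.0    # J/(kg*K), typical silicate (assumed)

    k_intact = 3.0      # W/(m*K), solid rock (assumed)
    k_rubble = 0.3      # W/(m*K), porous rubble conducting heat ~10x worse (assumed)

    print(f"intact body: ~{cooling_time_myr(radius, k_intact, density, heat_cap):.0f} Myr")
    print(f"rubble pile: ~{cooling_time_myr(radius, k_rubble, density, heat_cap):.0f} Myr")
    # Cutting the effective conductivity tenfold lengthens the cooling time by the
    # same factor: the pore-filled pile acts as an insulating blanket.
    ```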
    Tim Swindle of the Lunar and Planetary Laboratory at the University of Arizona, who studies meteorites but was not involved in the research, said that this work is a major step forward.
    “This seems like a more complete model, and they’ve added data to part of the question that people haven’t been talking about, but should have been. The jury is still out, but this is a strong argument.”
    The biggest implication of the new rubble pile hypothesis, Dygert said, is that these collisions characterized the early days of the solar system.
    “They were violent, and they started early on,” he said.
    The research was supported by NASA. The Smithsonian National Museum of Natural History supplied samples of meteorites for the study.

  • New machine learning tool tracks urban traffic congestion

    A new machine learning algorithm is poised to help urban transportation analysts relieve bottlenecks and chokepoints that routinely snarl city traffic.
    The tool, called TranSEC, was developed at the U.S. Department of Energy’s Pacific Northwest National Laboratory to help urban traffic engineers get access to actionable information about traffic patterns in their cities.
    Currently, publicly available traffic information at the street level is sparse and incomplete. Traffic engineers generally have relied on isolated traffic counts, collision statistics and speed data to determine roadway conditions. The new tool uses traffic datasets collected from UBER drivers and other publicly available traffic sensor data to map street-level traffic flow over time. It creates a big picture of city traffic using machine learning tools and the computing resources available at a national laboratory.
    “What’s novel here is the street level estimation over a large metropolitan area,” said Arif Khan, a PNNL computer scientist who helped develop TranSEC. “And unlike other models that only work in one specific metro area, our tool is portable and can be applied to any urban area where aggregated traffic data is available.”
    UBER-fast traffic analysis
    TranSEC (which stands for transportation state estimation capability) differentiates itself from other traffic monitoring methods by its ability to analyze sparse and incomplete information. It uses machine learning to connect segments with missing data, which allows it to make near real-time, street-level estimates.

    In contrast, the map features on our smartphones can help individual drivers optimize their journeys through a city, pointing out chokepoints and suggesting alternate routes. But those tools only work for a single driver trying to get from point A to point B. City traffic engineers are concerned with helping all vehicles get to their destinations efficiently. Sometimes a route that seems efficient for an individual driver leads to too many vehicles trying to access a road that wasn’t designed to handle that volume of traffic.
    Using public data from the entire 1,500-square-mile Los Angeles metropolitan area, the team reduced the time needed to create a traffic congestion model by an order of magnitude, from hours to minutes. The speed-up, accomplished with high-performance computing resources at PNNL, makes near-real-time traffic analysis feasible. The research team recently presented that analysis at the August 2020 virtual Urban Computing Workshop as part of the Knowledge Discovery and Data Mining (SIGKDD) conference, and in September 2020 they sought the input of traffic engineers at a virtual meeting on TranSEC.
    “TranSEC has the potential to initiate a paradigm shift in how traffic professionals monitor and predict system mobility performance,” said Mark Franz, a meeting attendee and a research engineer at the Center for Advanced Transportation Technology, University of Maryland, College Park. “TranSEC overcomes the inherent data gaps in legacy data collection methods and has tremendous potential.”
    Machine learning improves accuracy over time
    The machine learning feature of TranSEC means that as more data is acquired and processed, the model becomes more refined and useful over time. This kind of analysis is used to understand how disturbances spread across networks. Given enough data, the machine learning element will be able to predict impacts so that traffic engineers can create corrective strategies.

    “We use a graph-based model together with novel sampling methods and optimization engines, to learn both the travel times and the routes,” said Arun Sathanur, a PNNL computer scientist and a lead researcher on the team. “The method has significant potential to be expanded to other modes of transportation, such as transit and freight traffic. As an analytic tool, it is capable of investigating how a traffic condition spreads.”
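    The article does not spell out TranSEC’s internals, but the general idea of a graph-based travel-time estimator can be sketched simply: treat each road segment’s travel time as an unknown, and let every observed trip contribute one equation saying that the times of the segments it traversed add up to the observed duration. The toy example below, with made-up segments and trips, is only an illustration of that idea; it is not PNNL’s code and leaves out the sampling, routing and optimization machinery the team describes.

    ```python
    # Illustrative toy: infer per-segment travel times on a small road graph from
    # sparse, aggregated trip records. Each trip contributes one linear equation:
    # the sum of its traversed-segment times equals the observed trip duration.
    # This sketches the general idea only, not PNNL's TranSEC implementation.
    import numpy as np

    segments = ["A-B", "B-C", "C-D", "B-D"]   # road segments (made up)
    trips = [                                 # (segments used, observed minutes)
        (["A-B", "B-C"], 7.0),
        (["B-C", "C-D"], 9.0),
        (["A-B", "B-D"], 6.5),
        (["A-B", "B-C", "C-D"], 13.5),
    ]

    # Build the trip-by-segment incidence matrix and the observation vector.
    X = np.zeros((len(trips), len(segments)))
    y = np.zeros(len(trips))
    for i, (used, minutes) in enumerate(trips):
        for seg in used:
            X[i, segments.index(seg)] = 1.0
        y[i] = minutes

    # Least-squares estimate of per-segment travel times; a real system layers
    # regularization, route inference and temporal structure on top of this.
    times, *_ = np.linalg.lstsq(X, y, rcond=None)
    for seg, t in zip(segments, times):
        print(f"{seg}: ~{t:.1f} min")
    ```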
    With PNNL’s data-driven approach, users can upload real-time data and update TranSEC on a regular basis in a transportation control center. Engineers can use short-term forecasts for decision support to manage traffic issues. PNNL’s approach is also extensible to include weather or other data that affect conditions on the road.
    Computing power for transportation planners nationwide
    Just as situational awareness of conditions informs an individual driver’s decisions, TranSEC’s approach provides situational awareness on a system-wide basis to help reduce urban traffic congestion.
    “Traffic engineers nationwide have not had a tool to give them anywhere near real-time estimation of transportation network states,” said Robert Rallo, PNNL computer scientist and principal investigator on the TranSEC project. “Being able to predict conditions an hour or more ahead would be very valuable, to know where the blockages are going to be.”
    While running a full-scale city model still requires high-performance computing resources, TranSEC is scalable. For example, a road network with only the major highways and arterials could be modeled on a powerful desktop computer.
    “We are working toward making TranSEC available to municipalities nationwide,” said Katherine Wolf, project manager for TranSEC.
    Eventually, after further development, TranSEC could be used to help program autonomous vehicle routes, according to the research team.
    Video: https://www.youtube.com/watch?v=8S4bLv9CtOo
    The project was supported by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy’s Vehicle Technologies Office, Energy Efficient Mobility Systems Program.

  • Self-repairing gelatin-based film could be a smart move for electronics

    Dropping a cell phone can sometimes cause superficial cracks to appear. But other times, the device can stop working altogether because fractures develop in the material that stores data. Now, researchers have made an environmentally friendly, gelatin-based film that can repair itself multiple times and still maintain the electronic signals needed to access a device’s data. The material could be used someday in smart electronics and health-monitoring devices.

  • New lab-on-a-chip infection test could provide cheaper, faster portable diagnostics

    The chip, developed at Imperial College London and known as TriSilix, is a ‘micro laboratory’ which performs a miniature version of the polymerase chain reaction (PCR) on the spot. PCR is the gold-standard test for detecting viruses and bacteria in biological samples such as bodily fluids, faeces, or environmental samples.
    Although PCR is usually performed in a laboratory, which means test results aren’t immediately available, this new lab-on-a-chip can process and present results in a matter of minutes.
    The chip is made from silicon, the same material that is used to make electronic chips. Silicon itself is cheap; processing it into chips, however, is expensive, requiring massive, ‘extremely clean’ factories known as cleanrooms. To make the new lab-on-a-chip, the researchers developed a series of methods to produce the chips in a standard laboratory, cutting the costs and time they take to fabricate, potentially allowing them to be produced anywhere in the world.
    Lead researcher Dr Firat Guder of Imperial’s Department of Bioengineering said: “Rather than sending swabs to the lab or going to a clinic, the lab could come to you on a fingernail-sized chip. You would use the test much like how people with diabetes use blood sugar tests, by providing a sample and waiting for results — except this time it’s for infectious diseases.”
    The paper is published today in Nature Communications.
    The researchers have so far used TriSilix to diagnose a bacterial infection mainly present in animals as well as a synthetic version of the genetic material from SARS-CoV-2, the virus behind COVID-19.

    The researchers say the system could in future be mounted onto handheld blood sugar test-style devices. This would let people test themselves and receive results at home for colds, flu, recurrent infections like those of the urinary tract (UTIs), and COVID-19.
    Table-top devices for testing of infections like COVID-19 already exist, but these tests can be time-consuming and costly because the patient must go to a clinic, have a sample taken by a healthcare worker, and then go home or stay at the clinic to wait. People leaving their homes when not feeling well increases the risk of spreading a pathogen to others.
    If validated on human samples, this new test could provide results outside a clinic, at home or on-the-go within minutes.
    The researchers also say a highly portable test could accelerate diagnosis of infections and reduce costs by eliminating the transportation of samples. Such tests could be performed by citizens themselves, without highly trained medical professionals; if the result shows they need to self-isolate, they can begin immediately without potentially infecting others.
    Making testing more accessible and cheaper is especially important for people in rural areas of low-income countries, where clinics can be far away and expensive to travel to. If made available to patients, it could also be used to diagnose and monitor infections like UTIs, which often recur despite antibiotics.
    First author Dr Estefania Nunez-Bajo, also of the Department of Bioengineering, said: “Monitoring infections at home could even help patients, with the help of their doctor, to personalise and tailor their antibiotic use to help reduce the growing problem of antibiotic resistance.”
    Each lab-on-a-chip contains a DNA sensor, temperature detector and heater to automate the testing process. A typical smartphone battery could power up to 35 tests on a single charge.
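    For a rough sense of scale, that figure can be turned into an energy budget. Assuming a typical 3,000 mAh, 3.7-volt phone battery (numbers chosen for illustration, not taken from the paper), 35 tests per charge works out to roughly 0.3 watt-hours, or about 1.1 kilojoules, per test:

    ```python
    # Rough energy budget implied by "up to 35 tests per smartphone charge",
    # assuming a typical 3,000 mAh, 3.7 V battery. These figures are illustrative
    # assumptions, not numbers reported in the paper.
    battery_mah, battery_volts = 3000, 3.7
    battery_wh = battery_mah / 1000 * battery_volts   # ~11.1 Wh per charge
    per_test_wh = battery_wh / 35                     # ~0.32 Wh per test
    print(f"~{battery_wh:.1f} Wh per charge -> ~{per_test_wh:.2f} Wh "
          f"(~{per_test_wh * 3600:.0f} J) per test")
    ```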
    Next, the researchers plan to validate their chip with clinical samples, automate the preparation of samples and advance their handheld electronics. They are looking for partners and funders to help accelerate translation of the technology and deliver testing in resource-limited settings such as homes, farms and remote locations in the developing world.

    Story Source:
    Materials provided by Imperial College London. Original written by Caroline Brogan. Note: Content may be edited for style and length.

  • How automated vehicles can impede driver performance, and what to do about it

    As cars keep getting smarter, automation is taking many tricky tasks — from parallel parking to backing up — out of drivers’ hands.
    Now, a University of Toronto Engineering study is underscoring the importance of drivers keeping their eyes on the road — even when they are in an automated vehicle (AV).
    Using an AV driving simulator and eye-tracking equipment, Professor Birsen Donmez and her team studied two types of in-vehicle displays and their effects on the driving behaviours of 48 participants.
    The findings, published recently in the journal Accident Analysis & Prevention, revealed that drivers can become over-reliant on AV technology. This was especially true with a type of in-vehicle display the team has dubbed takeover request and automation capability (TORAC).
    A “takeover request” asks the driver to take vehicle control when automation is not able to handle a situation; “automation capability” indicates how close to that limit the automation is.
    “Drivers find themselves in situations where, although they are not actively driving, they are still part of the driving task — they must be monitoring the vehicle and step in if the vehicle fails,” says Donmez.

    “And these vehicles fail, it’s just guaranteed. The technology on the market right now is not mature enough to the point where we can just let the car drive and we go to sleep. We are not at that stage yet.”
    Tesla’s AV system, for example, warns drivers every 30 seconds or less when their hands aren’t detected on the wheel. This prompt can support driver engagement to some extent, but when the automation fails, driver attention and anticipation are the key factors that determine whether or not you get into a traffic accident.
    “Even though cars are advertised right now as self-driving, they are still just Level 2, or partially automated,” adds Dengbo He, postdoctoral fellow and lead author. “The driver should not rely on these types of vehicle automation.”
    In one of the team’s driving scenarios, the participants were given a non-driving, self-paced task — meant to mimic common distractions such as reading text messages — while takeover prompts and automation capability information were turned on.
    “Their monitoring of the road went way down compared to the condition where these features were turned off,” says Donmez. “Automated vehicles and takeover requests can give people a false sense of security, especially if they work most of the time. People are going to end up looking away and doing something non-driving related.”
    The researchers also tested a second in-vehicle display, called STTORAC, which added information on surrounding traffic to the data provided by the TORAC system. These displays showed more promise in ensuring driving safety.
    STTORAC provides drivers with ongoing information about their surrounding driving environment, including highlighting potential traffic conflicts on the road. This type of display led to the shortest reaction time in scenarios where drivers had to take over control of the vehicle, showing a significant improvement from both the TORAC and the no-display conditions.
    “When you’re not driving and aren’t engaged, it’s easy to lose focus. Adding information on surrounding traffic kept drivers better engaged in monitoring and anticipating traffic conflicts,” says He, adding that the key takeaway for designers of next-generation AVs is to ensure systems are designed to keep drivers attentive. “Drivers should not be distracted, at least at this stage.”
    Donmez’s team will next look at the effects of non-driving behaviours on drowsiness while operating an AV. “If someone isn’t engaged in a non-driving task and is just monitoring the road, they can be more likely to fall into states of drowsiness, which is even more dangerous than being distracted.”

  • Shrinking massive neural networks used to model language

    You don’t need a sledgehammer to crack a nut.
    Jonathan Frankle is researching artificial intelligence — not noshing pistachios — but the same philosophy applies to his “lottery ticket hypothesis.” It posits that, hidden within massive neural networks, leaner subnetworks can complete the same task more efficiently. The trick is finding those “lucky” subnetworks, dubbed winning lottery tickets.
    In a new paper, Frankle and colleagues discovered such subnetworks lurking within BERT, a state-of-the-art neural network approach to natural language processing (NLP). As a branch of artificial intelligence, NLP aims to decipher and analyze human language, with applications like predictive text generation or online chatbots. In computational terms, BERT is bulky, typically demanding supercomputing power unavailable to most users. Access to BERT’s winning lottery ticket could level the playing field, potentially allowing more users to develop effective NLP tools on a smartphone — no sledgehammer needed.
    “We’re hitting the point where we’re going to have to make these models leaner and more efficient,” says Frankle, adding that this advance could one day “reduce barriers to entry” for NLP.
    Frankle, a PhD student in Michael Carbin’s group at the MIT Computer Science and Artificial Intelligence Laboratory, co-authored the study, which will be presented next month at the Conference on Neural Information Processing Systems. Tianlong Chen of the University of Texas at Austin is the lead author of the paper, which included collaborators Zhangyang Wang, also of the University of Texas at Austin, as well as Shiyu Chang, Sijia Liu, and Yang Zhang, all of the MIT-IBM Watson AI Lab.
    You’ve probably interacted with a BERT network today. It’s one of the technologies that underlies Google’s search engine, and it has sparked excitement among researchers since Google released BERT in 2018. BERT is a method of creating neural networks — algorithms that use layered nodes, or “neurons,” to learn to perform a task through training on numerous examples. BERT is trained by repeatedly attempting to fill in words left out of a passage of writing, and its power lies in the gargantuan size of this initial training dataset. Users can then fine-tune BERT’s neural network to a particular task, like building a customer-service chatbot. But wrangling BERT takes a ton of processing power.
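    For readers who want to see that fill-in-the-blank objective in action, the open-source Hugging Face Transformers library exposes pretrained BERT models through a “fill-mask” pipeline. The snippet below is a generic illustration of the task, not code or models from the study.

    ```python
    # Illustration of BERT's fill-in-the-blank objective using the open-source
    # Hugging Face Transformers library (a generic example, not the study's code).
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # BERT is shown text with a hidden token and asked to predict the missing word.
    for prediction in fill_mask("The capital of France is [MASK]."):
        print(f"{prediction['token_str']:>10}  score={prediction['score']:.3f}")
    ```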

    “A standard BERT model these days — the garden variety — has 340 million parameters,” says Frankle, adding that the number can reach 1 billion. Fine-tuning such a massive network can require a supercomputer. “This is just obscenely expensive. This is way beyond the computing capability of you or me.”
    Chen agrees. Despite BERT’s burst in popularity, such models “suffer from enormous network size,” he says. Luckily, “the lottery ticket hypothesis seems to be a solution.”
    To cut computing costs, Chen and colleagues sought to pinpoint a smaller model concealed within BERT. They experimented by iteratively pruning parameters from the full BERT network, then comparing the new subnetwork’s performance to that of the original BERT model. They ran this comparison for a range of NLP tasks, from answering questions to filling the blank word in a sentence.
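    The study’s exact recipe is not reproduced in this article, but iterative magnitude pruning, the standard way of hunting for lottery tickets, can be sketched with PyTorch’s built-in pruning utilities. In the sketch below, the tiny stand-in network, the 20-percent-per-round schedule and the evaluate() placeholder are all illustrative assumptions rather than the authors’ setup.

    ```python
    # Minimal sketch of iterative magnitude pruning, the standard lottery-ticket
    # procedure. The tiny model, the pruning schedule and the evaluate() stub are
    # placeholders; the study prunes a full pretrained BERT, not this network.
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(                 # stand-in for a much larger network
        nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)
    )
    linear_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]

    def evaluate(model):
        """Placeholder: fine-tune on the downstream task and return a score."""
        return 0.0

    for round_idx in range(5):
        # Prune the 20% smallest-magnitude remaining weights in each layer.
        for layer in linear_layers:
            prune.l1_unstructured(layer, name="weight", amount=0.2)
        remaining = sum(int(layer.weight_mask.sum()) for layer in linear_layers)
        total = sum(layer.weight_mask.numel() for layer in linear_layers)
        print(f"round {round_idx + 1}: {remaining / total:.0%} of weights remain, "
              f"score={evaluate(model):.3f}")
    ```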
    The researchers found successful subnetworks that were 40 to 90 percent slimmer than the initial BERT model, depending on the task. Plus, they were able to identify those winning lottery tickets before running any task-specific fine-tuning — a finding that could further minimize computing costs for NLP. In some cases, a subnetwork picked for one task could be repurposed for another, though Frankle notes this transferability wasn’t universal. Still, Frankle is more than happy with the group’s results.
    “I was kind of shocked this even worked,” he says. “It’s not something that I took for granted. I was expecting a much messier result than we got.”
    This discovery of a winning ticket in a BERT model is “convincing,” according to Ari Morcos, a scientist at Facebook AI Research. “These models are becoming increasingly widespread,” says Morcos. “So it’s important to understand whether the lottery ticket hypothesis holds.” He adds that the finding could allow BERT-like models to run using far less computing power, “which could be very impactful given that these extremely large models are currently very costly to run.”
    Frankle agrees. He hopes this work can make BERT more accessible, because it bucks the trend of ever-growing NLP models. “I don’t know how much bigger we can go using these supercomputer-style computations,” he says. “We’re going to have to reduce the barrier to entry.” Identifying a lean, lottery-winning subnetwork does just that — allowing developers who lack the computing muscle of Google or Facebook to still perform cutting-edge NLP. “The hope is that this will lower the cost, that this will make it more accessible to everyone … to the little guys who just have a laptop,” says Frankle. “To me that’s really exciting.”

  • Researchers study influence of cultural factors on gesture design

    Imagine changing the TV channel with a wave of your hand or turning on the car radio with a twist of your wrist.
    Freehand gesture-based interfaces in interactive systems are becoming more common, but what if your preferred way to gesture a command — say, changing the TV to channel 10 — significantly differed from that of a user from another culture? Would the system recognize your command?
    Researchers from the Penn State College of Information Sciences and Technology and their collaborators explored this question and found that some gesture choices are significantly influenced by the cultural backgrounds of participants.
    “Certain cultures may prefer particular gestures and we may see a difference, but there is common ground between cultures choosing some gestures for the same kind of purposes and actions,” said Xiaolong “Luke” Zhang, associate professor of information sciences and technology and principal investigator of the study. “So we wanted to find out what can be shared among the different cultures, and what the differences are among different cultures to design better products.”
    In their study, the researchers asked American and Chinese participants to perform their preferred gestures for different commands in three separate settings: answering a phone call in the car, rotating an object in a virtual reality environment, and muting the television.
    The team found that while many preferred commands were similar among both cultural groups, there were some gesture choices that differed significantly between the groups. For example, most American participants used a thumbs up gesture to confirm a task in the virtual reality environment, while Chinese participants preferred to make an OK sign with their fingers. To reject a phone call in the car, most American participants made a horizontal movement across their neck with a flat hand, similar to a “cut” motion, while Chinese participants waved a hand back and forth to reject the call. Additionally, in Chinese culture, one hand can represent digits above five, while in American culture an individual can only represent numbers one to five using one hand.
    “This project is one of the first kind of research to study the existence of cultural influence and the use of preferences of hand gestures,” said Zhang. “We provide empirical evidence to show indeed that we should be aware of the existence of this matter.”
    On the other hand, Zhang said, from the perspective of design, the study shows that certain gestures can be common across multiple cultures, while other gestures can be very different.
    “Designers have to be careful when delivering products to different markets,” he said. “(This work could inform companies) to enable users to customize the gesture commands, rather than have them pick something that is unnatural to learn from the perspective of their culture.”

    Story Source:
    Materials provided by Penn State. Original written by Jessica Hallman. Note: Content may be edited for style and length.