More stories

  • Deep learning for new alloys

    When is something more than just the sum of its parts? Alloys show such synergy. Steel, for instance, revolutionized industry by taking iron, adding a little carbon and making an alloy much stronger than either of its components.
    Supercomputer simulations are helping scientists discover new types of alloys, called high-entropy alloys. Researchers have used the Stampede2 supercomputer of the Texas Advanced Computing Center (TACC), allocated by the Extreme Science and Engineering Discovery Environment (XSEDE), to run the simulations.
    Their research was published in April 2022 in npj Computational Materials. The approach could be applied to finding new materials for batteries, catalysts and more without the need for expensive metals such as platinum or cobalt.
    “High-entropy alloys represent a totally different design concept. In this case we try to mix multiple principal elements together,” said study senior author Wei Chen, associate professor of materials science and engineering at the Illinois Institute of Technology.
    The term “high entropy,” in a nutshell, refers to the decrease in free energy gained from randomly mixing multiple elements at similar atomic fractions, which can stabilize new and novel materials resulting from the ‘cocktail.’
    For the study, Chen and colleagues surveyed a large space of 14 elements and the combinations that yielded high-entropy alloys. They performed high-throughput quantum mechanical calculations to determine the stability and elastic properties (that is, the ability of a material to regain its size and shape after stress) of more than 7,000 high-entropy alloys.
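    As a rough illustration of what such a screen involves (not the authors’ actual workflow), the sketch below enumerates candidate equimolar compositions from a 14-element palette and keeps those passing a placeholder stability score; the element list, the scoring function and the cutoff are assumptions made purely for illustration.

```python
# Hypothetical sketch of a high-throughput alloy screen. The element palette,
# the stability score and the cutoff are illustrative placeholders, not the
# quantum mechanical calculations reported in the study.
from itertools import combinations
from math import comb

ELEMENTS = ["Al", "Co", "Cr", "Cu", "Fe", "Hf", "Mn",
            "Mo", "Nb", "Ni", "Ta", "Ti", "V", "Zr"]   # assumed 14-element palette

def stability_score(composition):
    """Placeholder for a first-principles stability/elasticity calculation."""
    return (hash(composition) % 1000) / 1000.0          # dummy value in [0, 1)

kept = [c for n in (4, 5)                               # equimolar 4- and 5-component alloys
        for c in combinations(ELEMENTS, n)
        if stability_score(c) > 0.7]

total = comb(14, 4) + comb(14, 5)                       # 3,003 candidate compositions
print(f"kept {len(kept)} of {total} screened compositions")
```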

  • Robots learn household tasks by watching humans

    The robot watched as Shikhar Bahl opened the refrigerator door. It recorded his movements, the swing of the door, the location of the fridge and more, analyzing this data and readying itself to mimic what Bahl had done.
    It failed at first, missing the handle completely at times, grabbing it in the wrong spot or pulling it incorrectly. But after a few hours of practice, the robot succeeded and opened the door.
    “Imitation is a great way to learn,” said Bahl, a Ph.D. student at the Robotics Institute (RI) in Carnegie Mellon University’s School of Computer Science. “Having robots actually learn from directly watching humans remains an unsolved problem in the field, but this work takes a significant step in enabling that ability.”
    Bahl worked with Deepak Pathak and Abhinav Gupta, both faculty members in the RI, to develop a new learning method for robots called WHIRL, short for In-the-Wild Human Imitating Robot Learning. WHIRL is an efficient algorithm for one-shot visual imitation. It can learn directly from human-interaction videos and generalize that information to new tasks, making robots well-suited to learning household chores. People constantly perform various tasks in their homes. With WHIRL, a robot can observe those tasks and gather the video data it needs to eventually determine how to complete the job itself.
    The team added a camera and their software to an off-the-shelf robot, and it learned how to do more than 20 tasks — from opening and closing appliances, cabinet doors and drawers to putting a lid on a pot, pushing in a chair and even taking a garbage bag out of the bin. Each time, the robot watched a human complete the task once and then went about practicing and learning to accomplish the task on its own. The team presented their research this month at the Robotics: Science and Systems conference in New York.
    “This work presents a way to bring robots into the home,” said Pathak, an assistant professor in the RI and a member of the team. “Instead of waiting for robots to be programmed or trained to successfully complete different tasks before deploying them into people’s homes, this technology allows us to deploy the robots and have them learn how to complete tasks, all the while adapting to their environments and improving solely by watching.”
    Current methods for teaching a robot a task typically rely on imitation or reinforcement learning. In imitation learning, humans manually operate a robot to teach it how to complete a task. This process must be done several times for a single task before the robot learns. In reinforcement learning, the robot is typically trained on millions of examples in simulation and then asked to adapt that training to the real world.
    Both learning models work well when teaching a robot a single task in a structured environment, but they are difficult to scale and deploy. WHIRL can learn from any video of a human doing a task. It is easily scalable, not confined to one specific task and can operate in realistic home environments. The team is even working on a version of WHIRL trained by watching videos of human interaction from YouTube and Flickr.
    Progress in computer vision made the work possible. Using models trained on internet data, computers can now understand and model movement in 3D. The team used these models to understand human movement, which made it easier to train WHIRL.
    With WHIRL, robots can accomplish tasks in their natural environments. The appliances, doors, drawers, lids, chairs and garbage bag were not modified or manipulated to suit the robot. The robot’s first several attempts at a task ended in failure, but once it had a few successes, it quickly latched on to how to accomplish the task and mastered it. While the robot may not accomplish the task with the same movements as a human, that’s not the goal. Humans and robots have different parts, and they move differently. What matters is that the end result is the same. The door is opened. The switch is turned off. The faucet is turned on.
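    A rough sketch of that observe-then-practice loop is given below. It is a simplified illustration of one-shot visual imitation under invented helper functions (extract_human_prior, sample_variation and execute_and_score are hypothetical names), not the actual WHIRL implementation.

```python
# Illustrative (hypothetical) observe-then-practice loop in the spirit of WHIRL.
# All helpers are placeholders; the real system uses learned vision models and
# task-specific rewards that are not reproduced here.
import random

def extract_human_prior(video):
    """Placeholder: estimate the human's hand/object motion from one video."""
    return {"grasp_point": (0.50, 0.20), "pull_direction": (-1.0, 0.0)}

def sample_variation(prior, noise=0.05):
    """Perturb the human-derived prior so the robot can explore around it."""
    perturbed = {}
    for key, (a, b) in prior.items():
        perturbed[key] = (a + random.uniform(-noise, noise),
                          b + random.uniform(-noise, noise))
    return perturbed

def execute_and_score(attempt):
    """Placeholder: run the attempt on the robot and score task success (0..1)."""
    return random.random()

prior = extract_human_prior("human_opens_fridge.mp4")   # a single demonstration
best, best_score = prior, 0.0
for episode in range(100):                              # autonomous practice
    attempt = sample_variation(best)
    score = execute_and_score(attempt)
    if score > best_score:                              # keep improvements
        best, best_score = attempt, score
print("best score after practice:", round(best_score, 2))
```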
    “To scale robotics in the wild, the data must be reliable and stable, and the robots should become better in their environment by practicing on their own,” Pathak said.
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • Alexa and Siri, listen up! Teaching machines to really hear us

    University of Virginia cognitive scientist Per Sederberg has a fun experiment you can try at home. Take out your smartphone and, using a voice assistant such as the one for Google’s search engine, say the word “octopus” as slowly as you can.
    Your device will struggle to repeat back what you just said. It might supply a nonsensical response, or it might give you something close but still off — like “toe pus.” Gross!
    The point is, Sederberg said, when it comes to receiving auditory signals like humans and other animals do — despite all of the computing power dedicated to the task by such heavyweights as Google, DeepMind, IBM and Microsoft — current artificial intelligence remains a bit hard of hearing.
    The outcomes can range from comical and mildly frustrating to downright alienating for those who have speech problems.
    But using recent breakthroughs in neuroscience as a model, UVA collaborative research has made it possible to convert existing AI neural networks into technology that can truly hear us, no matter at what pace we speak.
    The deep learning tool is called SITHCon, and by generalizing input, it can understand words spoken at different speeds from those the network was trained on.
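    A toy version of the underlying idea, time-scale invariance, is sketched below: if a signal is sampled on logarithmically spaced time points, slowing it down becomes a simple shift along that axis rather than a distortion. This is a conceptual illustration only, with a made-up signal, not the SITHCon architecture itself.

```python
# Toy illustration of why a logarithmically spaced time axis helps with changes
# in speaking rate: stretching a signal in time becomes a shift along the
# log-time axis. Conceptual sketch only; not SITHCon.
import numpy as np

def signal(t):
    """A made-up 'utterance': two bumps at fixed times (in seconds)."""
    return np.exp(-(t - 1.0) ** 2 / 0.05) + 0.5 * np.exp(-(t - 2.0) ** 2 / 0.05)

log_times = np.logspace(-1, 1, 50)        # log-spaced taps from 0.1 s to 10 s
normal = signal(log_times)                # spoken at normal speed
slow = signal(log_times / 2.0)            # same "words", spoken half as fast

# On the log axis, slowing down by 2x shifts the pattern by log(2):
shift = int(np.round(np.log(2.0) / np.log(log_times[1] / log_times[0])))
aligned = np.roll(slow, -shift)
print("correlation after shifting:",
      np.corrcoef(normal[:-shift], aligned[:-shift])[0, 1])
```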

  • Motion capture reveals why VAR in football struggles with offside decisions

    New research by the University of Bath has used motion capture technology to assess the accuracy of Video Assistant Referee (VAR) technologies in football. The study suggests that VAR is useful for preventing obvious mistakes but is currently not precise enough to give accurate judgements every time.
    VAR was introduced into association football in 2018 to help referees review decisions for goals, red cards, penalties and offsides. The technology uses film footage from pitch-side cameras, meaning that VAR operators can view the action from different angles and then offer their judgements on incidents to the head referee to make a final decision.
    However, the accuracy and application of VAR have also been questioned by some, including high-profile pundits like Gary Lineker and Alan Shearer, following controversial decisions which can change the course of the game.
    Critics of VAR further argue that it hampers the flow of the game; however, some research suggests it has reduced the number of fouls, offsides and yellow cards.
    Dr Pooya Soltani, from the University of Bath’s Centre for Analysis of Motion, Entertainment Research and Applications (CAMERA), used optical motion capture systems to assess the accuracy of VAR systems.
    He filmed a football player receiving the ball from a teammate, viewed from different camera angles, whilst recording the 3D positions of the ball and players using optical motion capture cameras.
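    The judgement being tested reduces, in the simplest case, to a geometric comparison at the frame the ball is played, as in the sketch below. All coordinates, the frame choice and the error figures are invented for illustration; real VAR decisions also hinge on limb positions and on pinpointing the exact moment of the kick.

```python
# Simplified (hypothetical) offside check from tracked positions along the pitch
# axis, in metres. All numbers are made up for illustration.
def is_offside(attacker_x, ball_x, defender_xs):
    """True if the attacker is nearer the goal line (larger x) than both the
    ball and the second-last defender at the moment the ball is played."""
    second_last_defender_x = sorted(defender_xs, reverse=True)[1]
    return attacker_x > ball_x and attacker_x > second_last_defender_x

attacker_x = 38.12                            # assumed position at the pass
ball_x = 31.40
defender_xs = [45.00, 38.05, 36.90, 30.20]    # 45.00 is the goalkeeper

print(is_offside(attacker_x, ball_x, defender_xs))           # True: ahead by ~7 cm

# A 40 ms error in picking the kick frame moves a sprinter (~8 m/s) about 32 cm,
# enough to flip such a marginal call:
print(is_offside(attacker_x - 0.32, ball_x, defender_xs))    # False
```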

  • Physicists use quantum simulation tools to study, understand exotic state of matter

    Physicists have demonstrated how simulations using quantum computing can enable observation of a distinctive state of matter taken out of its normal equilibrium. Such novel states of matter could one day lead to developments in fast, powerful quantum information storage and precision measurement science.
    Thomas Iadecola worked his way through the title of the latest research paper that includes his theoretical and analytical work, patiently explaining digital quantum simulation, Floquet systems and symmetry-protected topological phases.
    Then he offered explanations of nonequilibrium systems, time crystals, 2T periodicity and the 2016 Nobel Prize in Physics.
    Iadecola’s corner of quantum condensed matter physics — the study of how states of matter emerge from collections of atoms and subatomic particles — can be counterintuitive and needs an explanation at almost every turn and term.
    The bottom line, as explained by the Royal Swedish Academy of Sciences in announcing that 2016 physics prize to David Thouless, Duncan Haldane and Michael Kosterlitz, is that researchers are revealing more and more of the secrets of exotic matter, “an unknown world where matter can assume strange states.”
    The new paper published in the journal Nature and co-authored by Iadecola, an Iowa State University assistant professor of physics and astronomy and an Ames National Laboratory scientist, describes simulations using quantum computing that enabled observation of a distinctive state of matter taken out of its normal equilibrium.
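    The “2T periodicity” mentioned above is the hallmark of a discrete time crystal: the system is driven with period T but responds with period 2T. A classical cartoon of that period doubling (not the quantum simulation reported in the paper) is sketched below, in which a spin flipped by a near-perfect pi pulse each period only returns to its starting orientation after two periods.

```python
# Classical cartoon of 2T periodicity (period doubling): a spin flipped by a
# (near-)pi pulse every drive period T returns to its initial orientation only
# after two periods. Illustration only; not the quantum simulation itself.
import numpy as np

def rotate_about_x(v, angle):
    """Rotate a 3-vector about the x axis (stand-in for the drive pulse)."""
    c, s = np.cos(angle), np.sin(angle)
    x, y, z = v
    return np.array([x, c * y - s * z, s * y + c * z])

theta = 0.98 * np.pi                       # slightly imperfect pi pulse
spin = np.array([0.0, 0.0, 1.0])           # start pointing "up" (z = +1)

for period in range(1, 7):
    spin = rotate_about_x(spin, theta)
    print(f"after {period}T: z = {spin[2]:+.2f}")
# z alternates near -1, +1, -1, ...: the response repeats every 2T, not every T.
```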

  • Idea of ice age 'species pump' in the Philippines boosted by new way of drawing evolutionary trees

    Does the Philippines’ astonishing biodiversity result in part from rising and falling seas during the ice ages?
    Scientists have long thought the unique geography of the Philippines — coupled with seesawing ocean levels — could have created a “species pump” that triggered massive diversification by isolating, then reconnecting, groups of species again and again on islands. They call the idea the “Pleistocene aggregate island complex (PAIC) model” of diversification.
    But hard evidence, connecting bursts of speciation to the precise times that global sea levels rose and fell, has been scant until now.
    A groundbreaking Bayesian method and new statistical analyses of genomic data from geckos in the Philippines show that the timing of gecko diversification during the ice ages gives strong statistical support, for the first time, to the PAIC model, or “species pump.” The investigation, with roots at the University of Kansas, was just published in the Proceedings of the National Academy of Sciences.
    “The Philippines is an isolated archipelago, currently including more than 7,100 islands, but this number was dramatically reduced, possibly to as few as six or seven giant islands, during the Pleistocene,” said co-author Rafe Brown, curator-in-charge of the herpetology division of the Biodiversity Institute and Natural History Museum at KU. “The aggregate landmasses were composed of many of today’s smaller islands, which became connected together by dry land as sea levels fell, and all that water was tied up in glaciers. It’s been hypothesized that this kind of fragmentation and fusion of land, which happened as sea levels repeatedly fluctuated over the last 4 million years, sets the stage for a special evolutionary process, which may have triggered simultaneous clusters or bursts of speciation in unrelated organisms present at the time. In this case, we tested this prediction in two different genera of lizards, each with species found only in the Philippines.”
    For decades, the Philippines has been a hotbed of fieldwork by biologists with KU’s Biodiversity Institute, where the authors analyzed genetic samples of Philippine geckos as well as other animals. However, even with today’s technology and scientists’ ability to characterize variation from across the genome, the development of powerful statistical approaches capable of handling genome-scale data is still catching up — particularly in challenging cases, like the task of estimating past times that species formed, using genetic data collected from populations surviving today.
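    The statistical question at the core of such a test can be caricatured as asking whether estimated divergence times cluster inside the intervals when sea level was low. The sketch below uses entirely made-up divergence times and glacial intervals, and none of the machinery of the actual Bayesian model, just to show the shape of the comparison.

```python
# Schematic (hypothetical) check of the 'species pump' prediction: do divergence
# times cluster inside glacial intervals, when low sea levels fused islands and
# later re-isolated them? All numbers below are invented for illustration.
glacial_intervals_mya = [(0.01, 0.03), (0.13, 0.19), (0.24, 0.34), (0.42, 0.48)]
divergence_times_mya = [0.02, 0.15, 0.17, 0.27, 0.30, 0.45, 0.55, 0.80]

def in_glacial(t):
    return any(lo <= t <= hi for lo, hi in glacial_intervals_mya)

hits = sum(in_glacial(t) for t in divergence_times_mya)
glacial_fraction = sum(hi - lo for lo, hi in glacial_intervals_mya) / 1.0  # of the last 1 Myr

print(f"{hits}/{len(divergence_times_mya)} divergences fall inside glacial intervals,")
print(f"versus {glacial_fraction:.0%} of the last 1 Myr spent in those intervals")
```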

  • Magnetic memory milestone

    Computers and smartphones have different kinds of memory, which vary in speed and power efficiency depending on where they are used in the system. Typically, larger computers, especially those in data centers, use a lot of magnetic hard drives, which are now less common in consumer systems. The magnetic technology these drives are based on provides very high capacity but lacks the speed of solid-state system memory. Devices based on upcoming spintronic technology may be able to bridge that gap and radically improve upon even the theoretical performance of classical electronic devices.
    Professor Satoru Nakatsuji and Project Associate Professor Tomoya Higo from the Department of Physics at the University of Tokyo, together with their team, explore the world of spintronics and other related areas of solid state physics — broadly speaking, the physics of things that function without moving. Over the years, they have studied special kinds of magnetic materials, some of which have very unusual properties. You’ll be familiar with ferromagnets, as these are the kinds that exist in many everyday applications like computer hard drives and electric motors — you probably even have some stuck to your refrigerator. However, of greater interest to the team are more obscure magnetic materials called antiferromagnets.
    “Like ferromagnets, antiferromagnets’ magnetic properties arise from the collective behavior of their component particles, in particular the spins of their electrons, something analogous to angular momentum,” said Nakatsuji. “Both materials can be used to encode information by changing localized groups of constituent particles. However, antiferromagnets have a distinct advantage in the high speed at which these changes to the information-storing spin states can be made, at the cost of increased complexity.”
    “Some spintronic memory devices already exist. MRAM (magnetoresistive random access memory) has been commercialized and can replace electronic memory in some situations, but it is based on ferromagnetic switching,” said Higo. “After considerable trial and error, I believe we are the first to report the successful switching of spin states in antiferromagnetic material Mn3Sn by using the same method as that used for ferromagnets in the MRAM, meaning we have coaxed the antiferromagnetic substance into acting as a simple memory device.”
    This method of switching is called spin-orbit torque (SOT) switching and it’s cause for excitement in the technology sector. It uses a fraction of the power to change the state of a bit (1 or 0) in memory, and although the researchers’ experiments involved switching their Mn3Sn sample in as little as a few milliseconds (thousandths of a second), they are confident that SOT switching could occur on the picosecond (trillionth of a second) scale, which would be orders of magnitude faster than the switching speed of current state-of-the-art electronic computer chips.
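    To put those timescales in perspective, a back-of-the-envelope comparison (using round illustrative numbers, not measured values from the study):

```python
# Rough orders of magnitude for the switching timescales discussed above.
# The nanosecond figure for conventional electronic memory is an assumed
# ballpark, not a number from the study.
millisecond = 1e-3    # seconds; scale of the demonstrated Mn3Sn switching
nanosecond = 1e-9     # seconds; rough scale of conventional electronic switching
picosecond = 1e-12    # seconds; scale the researchers believe SOT switching can reach

print(f"millisecond / picosecond = {millisecond / picosecond:.0e}")  # 1e+09
print(f"nanosecond / picosecond  = {nanosecond / picosecond:.0e}")   # 1e+03
```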
    “We achieved this due to the unique material Mn3Sn,” said Nakatsuji. “It proved far easier to work with in this way than other antiferromagnetic materials may have been.”
    “There is no rule book on how to fabricate this material. We aim to create a pure, flat crystal lattice of Mn3Sn from manganese and tin using a process called molecular beam epitaxy,” said Higo. “There are many parameters to this process that have to be fine-tuned, and we are still refining the process to see how it might be scaled up if it’s to become an industrial method one day.”
    Story Source:
    Materials provided by University of Tokyo.

  • Melanoma thickness equally hard for algorithms and dermatologists to judge

    Assessing the thickness of melanoma is difficult, whether done by an experienced dermatologist or a well-trained machine-learning algorithm. A study from the University of Gothenburg shows that the algorithm and the dermatologists had an equal success rate in interpreting dermoscopic images.
    In diagnosing melanoma, dermatologists evaluate whether it is an aggressive form (“invasive melanoma”), where the cancer cells grow down into the dermis and there is a risk of spreading to other parts of the body, or a milder form (“melanoma in situ,” MIS) that develops in the outer skin layer, the epidermis, only. Invasive melanomas that grow deeper than one millimeter into the skin are considered thick and, as such, more aggressive.
    Importance of thickness
    Melanomas are assessed by investigation with a dermatoscope — a type of magnifying glass fitted with a bright light. Diagnosing melanoma is often relatively simple, but estimating its thickness is a much greater challenge.
    “As well as providing valuable prognostic information, the thickness may affect the choice of surgical margins for the first operation and how promptly it needs to be performed,” says Sam Polesie, associate professor (docent) of dermatology and venereology at Sahlgrenska Academy, University of Gothenburg. Polesie is also a dermatologist at Sahlgrenska University Hospital and the study’s first author.
    Tie between man and machine
    Using a web platform, 438 international dermatologists assessed nearly 1,500 melanoma images captured with a dermatoscope. The dermatologists’ results were then compared with those from a machine-learning algorithm trained in classifying melanoma depth.
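    The comparison at the heart of the study boils down to reader-versus-algorithm accuracy against a histopathology ground truth. A minimal sketch of that kind of evaluation, using invented labels rather than the study’s data, might look like this:

```python
# Minimal sketch of comparing dermatologist and algorithm predictions against
# histopathology ground truth for 'thick' (>1 mm, label 1) versus 'thin or
# in situ' (label 0) melanomas. All labels below are invented for illustration.
ground_truth   = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
dermatologists = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # e.g. pooled reader assessments
algorithm      = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

print("dermatologists:", accuracy(dermatologists, ground_truth))
print("algorithm:     ", accuracy(algorithm, ground_truth))
```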