More stories

  • Novel nanowire fabrication technique paves way for next generation spintronics

    The challenge of fabricating nanowires directly on silicon substrates for the creation of the next generation of electronics has finally been solved by researchers from Tokyo Tech. Next-generation spintronics will lead to better memory storage mechanisms in computers, making them faster and more efficient.
    As our world modernizes faster than ever before, there is an ever-growing need for better and faster electronics and computers. Spintronics is an emerging technology that uses the spin of an electron, in addition to its charge state, to encode data, making the entire system faster and more efficient. Ferromagnetic nanowires with high coercivity (resistance to changes in magnetization), in particular L10-ordered (a type of crystal structure) cobalt-platinum (CoPt) nanowires, are required to realize the potential of spintronics.
    Conventional fabrication processes for L10-ordered nanowires involve heat treatment of the material on the crystal substrate to improve its physical and chemical properties, a process called annealing; the transfer of a pattern onto the substrate through lithography; and finally the chemical removal of layers through a process called etching. Eliminating the etching step by fabricating nanowires directly on the silicon substrate would mark a major improvement in the fabrication of spintronic devices. However, when directly fabricated nanowires are subjected to annealing, they tend to break up into droplets as a result of internal stresses in the wire.
    Recently, a team of researchers led by Professor Yutaka Majima from the Tokyo Institute of Technology has found a solution to the problem. The team reported a new fabrication process to make L10-ordered CoPt nanowires on silicon/silicon dioxide (Si/SiO2) substrates. Talking about their research, published in Nanoscale Advances, Prof. Majima says, “Our nanostructure-induced ordering method allows the direct fabrication of ultrafine L10-ordered CoPt nanowires with the narrow widths of 30nm scale required for spintronics. This fabrication method could further be applied to other L10-ordered ferromagnetic materials such as iron-platinum and iron-palladium compounds.”
    In this study, the researchers first coated a Si/SiO2 substrate with a material called a ‘resist’ and subjected it to electron beam lithography and evaporation to create a stencil for the nanowires. They then deposited a multilayer of CoPt on the substrate. The deposited samples were then ‘lifted off’, leaving behind CoPt nanowires, which were subjected to high-temperature annealing. The researchers also examined the fabricated nanowires using several characterization techniques.
    They found that the nanowires took on L10-ordering during the annealing process. This transformation was induced by atomic interdiffusion, surface diffusion, and extremely large internal stress at the ultrasmall 10 nm scale curvature radii of the nanowires. They also found that the nanowires exhibited a large coercivity of 10 kilo-oersteds (kOe).
    According to Prof. Majima, “The internal stresses on the nanostructure here induce the L10-ordering. This is a different mechanism than in previous studies. We are hopeful that this discovery will open up a new field of research called ‘nanostructure-induced materials science and engineering.'”
    The wide applicability and convenience of the novel fabrication technique is sure to make a significant contribution to the field of spintronics research.
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Researchers discover security loophole allowing attackers to use WiFi to see through walls

    A research team based out of the University of Waterloo has developed a drone-powered device that can use WiFi networks to see through walls.
    The device, nicknamed Wi-Peep, can fly near a building and then use the inhabitants’ WiFi network to identify and locate all WiFi-enabled devices inside in a matter of seconds.
    The Wi-Peep exploits a loophole the researchers call polite WiFi. Even if a network is password protected, smart devices will automatically respond to contact attempts from any device within range. The Wi-Peep sends several messages to a device as it flies and then measures the response time on each, enabling it to identify the device’s location to within a metre.
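    The time-of-flight arithmetic behind this measurement is simple enough to sketch. The snippet below is an illustrative calculation with assumed timing values (the round-trip time and the fixed turnaround delay are hypothetical numbers), not the actual Wi-Peep implementation:

```python
# Hypothetical time-of-flight sketch: subtract a device's fixed turnaround
# delay from the measured round-trip time, then convert the remaining radio
# flight time into a one-way distance.
C = 299_792_458.0  # speed of light, m/s

def distance_from_rtt(rtt_s: float, turnaround_s: float) -> float:
    """One-way distance implied by a round-trip time measurement."""
    flight_time = rtt_s - turnaround_s  # time the signal actually spent in the air
    return C * flight_time / 2.0        # halve for the one-way distance

# Assumed example: a 256 ns measured round trip minus a 189.3 ns fixed delay
# leaves ~66.7 ns of flight time, i.e. roughly 10 m of one-way distance.
distance_m = distance_from_rtt(256.0e-9, 189.3e-9)
```

    Repeating such measurements from several drone positions would, in principle, let an attacker triangulate the device, which is why response timing alone can leak location.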
    Dr. Ali Abedi, an adjunct professor of computer science at Waterloo, explains the significance of this discovery.
    “The Wi-Peep devices are like lights in the visible spectrum, and the walls are like glass,” Abedi said. “Using similar technology, one could track the movements of security guards inside a bank by following the location of their phones or smartwatches. Likewise, a thief could identify the location and type of smart devices in a home, including security cameras, laptops, and smart TVs, to find a good candidate for a break-in. In addition, the device’s operation via drone means that it can be used quickly and remotely without much chance of the user being detected.”
    While scientists have explored WiFi security vulnerability in the past using bulky, expensive devices, the Wi-Peep is notable because of its accessibility and ease of transportation. Abedi’s team built it using a store-bought drone and $20 of easily purchased hardware.
    “As soon as the Polite WiFi loophole was discovered, we realized this kind of attack was possible,” Abedi said.
    The team built the Wi-Peep to test their theory and quickly realized that anyone with the right expertise could easily create a similar device.
    “On a fundamental level, we need to fix the Polite WiFi loophole so that our devices do not respond to strangers,” Abedi said. “We hope our work will inform the design of next-generation protocols.”
    In the meantime, he urges WiFi chip manufacturers to introduce an artificial, randomized variation in device response time, which will make calculations like the ones the Wi-Peep uses wildly inaccurate.
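    A back-of-the-envelope sketch (with illustrative numbers, not figures from the paper) shows why such jitter works: any extra response delay a chip adds is indistinguishable from flight time, so it feeds straight into the attacker's distance estimate.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_error_from_jitter(jitter_s: float) -> float:
    """Distance error introduced when a device adds `jitter_s` of random
    response delay: the attacker mistakes it for extra radio flight time."""
    return C * jitter_s / 2.0

# Even 100 ns of randomized delay shifts the apparent distance by ~15 m,
# swamping the metre-level accuracy described above.
error_m = distance_error_from_jitter(100e-9)
```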
    The paper summarizing this research, Non-cooperative wi-fi localization & its privacy implications, was presented at the 28th Annual International Conference on Mobile Computing and Networking.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • Researchers encourage retailers to embrace AI to better serve customers

    Three QUT researchers are part of an international research team that has identified new ways for retailers to use Artificial Intelligence in concert with in-store cameras to better understand consumer behaviour and tailor store layouts to maximise sales.
    In research published in Artificial Intelligence Review, the team proposes an AI-powered store layout design framework that lets retailers take advantage of recent advances in AI and its sub-fields of computer vision and deep learning to monitor the physical shopping behaviours of their customers.
    Any shopper who has retrieved milk from the farthest corner of a shop knows that an efficient store layout presents merchandise so as to attract customer attention to items they had not intended to buy, increase browsing time, and make related or viable alternative products easy to find together.
    A well thought out layout has been shown to positively correlate with increased sales and customer satisfaction. It is one of the most effective in-store marketing tactics which can directly influence customer decisions to boost profitability.
    QUT researchers Dr Kien Nguyen and Professor Clinton Fookes from the School of Electrical Engineering & Robotics and Professor Brett Martin of the QUT Business School teamed up with researchers Dr Minh Le, from the University of Economics, Ho Chi Minh City, Vietnam, and Professor Ibrahim Cil from Sakarya University, Serdivan, Turkey, to conduct a comprehensive review of existing approaches to in-store layout design.
    Dr Nguyen says improving supermarket layout design, through understanding and prediction, is a vital tactic to improve customer satisfaction and increase sales.

  • In the latest human vs. machine match, artificial intelligence wins by a hair

    Vikas Nanda has spent more than two decades studying the intricacies of proteins, the highly complex substances present in all living organisms. The Rutgers scientist has long contemplated how the unique patterns of amino acids that compose proteins determine whether they become anything from hemoglobin to collagen, as well as the subsequent, mysterious step of self-assembly where only certain proteins clump together to form even more complex substances.
    So, when scientists wanted to conduct an experiment pitting a human — one with a profound, intuitive understanding of protein design and self-assembly — against the predictive capabilities of an artificially intelligent computer program, Nanda, a researcher at the Center for Advanced Biotechnology and Medicine (CABM) at Rutgers, was one of those at the top of the list.
    Now, the results to see who — or what — could do a better job at predicting which protein sequences would combine most successfully are out. Nanda, along with researchers at Argonne National Laboratory in Illinois and colleagues from throughout the nation, reports in Nature Chemistry that the battle was close but decisive. The competition matching Nanda and several colleagues against an artificial intelligence (AI) program has been won, ever so slightly, by the computer program.
    Scientists are deeply interested in protein self-assembly because they believe understanding it better could help them design a host of revolutionary products for medical and industrial uses, such as artificial human tissue for wounds and catalysts for new chemical products.
    “Despite our extensive expertise, the AI did as good or better on several data sets, showing the tremendous potential of machine learning to overcome human bias,” said Nanda, a professor in the Department of Biochemistry and Molecular Biology at Rutgers Robert Wood Johnson Medical School.
    Proteins are made of large numbers of amino acids joined end to end. The chains fold up to form three-dimensional molecules with complex shapes. The precise shape of each protein, along with the amino acids it contains, determines what it does. Some researchers, such as Nanda, engage in “protein design,” creating sequences that produce new proteins. Recently, Nanda and a team of researchers designed a synthetic protein that quickly detects VX, a dangerous nerve agent, and could pave the way for new biosensors and treatments.

  • Machine learning facilitates 'turbulence tracking' in fusion reactors

    Fusion, which promises practically unlimited, carbon-free energy using the same processes that power the sun, is at the heart of a worldwide research effort that could help mitigate climate change.
    A multidisciplinary team of researchers is now bringing tools and insights from machine learning to aid this effort. Scientists from MIT and elsewhere have used computer-vision models to identify and track turbulent structures that appear under the conditions needed to facilitate fusion reactions.
    Monitoring the formation and movement of these structures, called filaments or “blobs,” is important for understanding the heat and particle flows exiting the reacting fuel, which ultimately determine the engineering requirements for the reactor walls. However, scientists typically study blobs using averaging techniques, which trade details of individual structures for aggregate statistics. To study individual blobs, researchers must mark them manually in video data.
    The researchers built a synthetic video dataset of plasma turbulence to make this process more effective and efficient. They used it to train four computer vision models, each of which identifies and tracks blobs. They trained the models to pinpoint blobs in the same ways that humans would.
    When the researchers tested the trained models using real video clips, the models could identify blobs with high accuracy — more than 80 percent in some cases. The models were also able to effectively estimate the size of blobs and the speeds at which they moved.
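    As a rough illustration of what identifying and tracking blobs involves computationally, here is a minimal, generic sketch (not the team's trained computer-vision models): threshold a 2-D frame and report the centroid of each connected bright region, the quantity a tracker would then follow from frame to frame.

```python
# Generic blob-detection sketch: flood-fill connected above-threshold pixels
# and report each region's centroid.
def find_blobs(frame, threshold):
    """Return (row, col) centroids of 4-connected above-threshold regions."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] > threshold and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:  # flood fill one connected region
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # the centroid is what a tracker follows between frames
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                blobs.append((cy, cx))
    return blobs

# Toy 4x5 "frame" with two bright regions
frame = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 7],
]
centroids = find_blobs(frame, 5)
```

    Real plasma frames are noisy and blobs deform and merge, which is why the team trained learned models on synthetic data instead of relying on fixed thresholds like this.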
    Because millions of video frames are captured during just one fusion experiment, using machine-learning models to track blobs could give scientists much more detailed information.

  • How network pruning can skew deep learning models

    Computer science researchers have demonstrated that a widely used technique called neural network pruning can adversely affect the performance of deep learning models, detailed what causes these performance problems, and demonstrated a technique for addressing the challenge.
    Deep learning is a type of artificial intelligence that can be used to classify things, such as images, text or sound. For example, it can be used to identify individuals based on facial images. However, deep learning models often require a lot of computing resources to operate. This poses challenges when a deep learning model is put into practice for some applications.
    To address these challenges, some systems engage in “neural network pruning.” This effectively makes the deep learning model more compact and, therefore, able to operate while using fewer computing resources.
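    As a generic illustration of the idea (not the specific pruning method studied in this work), one common form of neural network pruning zeroes the weights with the smallest magnitudes, shrinking the effective model while keeping its strongest connections:

```python
# Minimal magnitude-based pruning sketch: zero out the fraction `sparsity`
# of weights with the smallest absolute values.
def prune_by_magnitude(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction zeroed.
    Ties at the cutoff are all pruned."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)          # number of weights to remove
    cutoff = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= cutoff else w for w in weights]

weights = [0.8, -0.05, 0.3, -0.9, 0.02, 0.4]
pruned = prune_by_magnitude(weights, 0.5)  # half the weights are zeroed
```

    The researchers' point is that the weights removed this way are not equally important to every group in the data, which is how pruning can skew a model's performance.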
    “However, our research shows that this network pruning can impair the ability of deep learning models to identify some groups,” says Jung-Eun Kim, co-author of a paper on the work and an assistant professor of computer science at North Carolina State University.
    “For example, if a security system uses deep learning to scan people’s faces in order to determine whether they have access to a building, the deep learning model would have to be made compact so that it can operate efficiently. This may work fine most of the time, but the network pruning could also affect the deep learning model’s ability to identify some faces.”
    In their new paper, the researchers lay out why network pruning can adversely affect the performance of the model at identifying certain groups — which the literature calls “minority groups” — and demonstrate a new technique for addressing these challenges.

  • Here’s how polar bears might get traction on snow

    Tiny “fingers” can help polar bears get a grip.

    Like the rubbery nubs on the bottom of baby socks, microstructures on the bears’ paw pads offer some extra friction, scientists report November 1 in the Journal of the Royal Society Interface. The pad protrusions may keep polar bears from slipping on snow, says Ali Dhinojwala, a polymer scientist at the University of Akron in Ohio who has also studied the sticking power of gecko feet (SN: 8/9/05).

    Nathaniel Orndorf, a materials scientist at Akron who focuses on ice, adhesion and friction, was interested in the work Dhinojwala’s lab did on geckos, but “we can’t really put geckos on the ice,” he says. So he turned to polar bears.

    Orndorf teamed up with Dhinojwala and Austin Garner, an animal biologist now at Syracuse University in New York, and compared the paws of polar bears, brown bears, American black bears and a sun bear. All but the sun bear had paw pad bumps. But the polar bears’ bumps looked a little different. For a given diameter, their bumps tend to be taller, the team found. That extra height translates to more traction on lab-made snow, experiments with 3-D printed models of the bumps suggest.

    Until now, scientists didn’t know that bump shape could make the difference between gripping and slipping, Dhinojwala says.

    Rough bumps on the pads of polar bears’ paws (pictured) offer the animals extra traction on snow. (Image: N. Orndorf et al/Journal of the Royal Society Interface 2022)

    Polar bear paw pads are also ringed with fur and are smaller than those of other bears, the team reports, adaptations that might let the Arctic animals conserve body heat as they tread on ice. Smaller pads generally mean less real estate for grabbing the ground. So extra-grippy pads could help polar bears make the most of what they’ve got, Orndorf says.

    Along with bumpy pads, the team hopes to study polar bears’ fuzzy paws and short claws, which might also give the animals a nonslip grip.

  • Tracking trust in human-robot work interactions

    The future of work is here.
    As industries begin to see humans working closely with robots, there’s a need to ensure that the relationship is effective, smooth and beneficial to humans. Robot trustworthiness and humans’ willingness to trust robot behavior are vital to this working relationship. However, capturing human trust levels can be difficult due to subjectivity, a challenge researchers in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering at Texas A&M University aim to solve.
    Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s human-autonomy trust research stemmed from a series of projects on human-robot interactions in safety-critical work domains funded by the National Science Foundation (NSF).
    “While our focus so far was to understand how operator states of fatigue and stress impact how humans interact with robots, trust became an important construct to study,” Mehta said. “We found that as humans get tired, they let their guards down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.”
    Mehta’s latest NSF-funded work, recently published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior relationships of why and how an operator’s trusting behaviors are influenced by both human and robot factors.
    Mehta also has another publication in the journal Applied Ergonomics that investigates these human and robot factors.
    Using functional near-infrared spectroscopy, Mehta’s lab captured functional brain activity as operators collaborated with robots on a manufacturing task. They found faulty robot actions decreased the operator’s trust in the robots. That distrust was associated with increased activation of regions in the frontal, motor and visual cortices, indicating increasing workload and heightened situational awareness. Interestingly, the same distrusting behavior was associated with the decoupling of these brain regions working together, which otherwise were well connected when the robot behaved reliably. Mehta said this decoupling was greater at higher robot autonomy levels, indicating that neural signatures of trust are influenced by the dynamics of human-autonomy teaming.
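    As a generic illustration of how such coupling between brain regions is often quantified (a sketch, not the lab's actual analysis pipeline): functional connectivity is commonly approximated by the correlation between two regions' activity time series, so a drop in correlation reads as the regions decoupling.

```python
# Toy functional-connectivity sketch: Pearson correlation between two
# activity time series as a proxy for how "coupled" the regions are.
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Two perfectly tracking signals are fully coupled (r close to 1);
# decoupled regions would drift toward r near 0.
region_a = [0.1, 0.4, 0.3, 0.8, 0.6]
region_b = [0.2, 0.8, 0.6, 1.6, 1.2]  # region_a scaled by 2
coupling = pearson_r(region_a, region_b)
```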
    “What we found most interesting was that the neural signatures differed when we compared brain activation data across reliability conditions (manipulated using normal and faulty robot behavior) versus operator’s trust levels (collected via surveys) in the robot,” Mehta said. “This emphasized the importance of understanding and measuring brain-behavior relationships of trust in human-robot collaborations since perceptions of trust alone is not indicative of how operators’ trusting behaviors shape up.”
    Dr. Sarah Hopko ’19, lead author on both papers and recent industrial engineering doctoral student, said neural responses and perceptions of trust are both symptoms of trusting and distrusting behaviors and relay distinct information on how trust builds, breaches and repairs with different robot behaviors. She emphasized the strengths of multimodal trust metrics — neural activity, eye tracking, behavioral analysis, etc. — can reveal new perspectives that subjective responses alone cannot offer.
    The next step is to expand the research into different work contexts, such as emergency response, and understand how trust in multi-human robot teams impacts teamwork and taskwork in safety-critical environments. Mehta said the long-term goal is not to replace humans with autonomous robots but to support them by developing trust-aware autonomy agents.
    “This work is critical, and we are motivated to ensure that humans-in-the-loop robotics design, evaluation and integration into the workplace are supportive and empowering of human capabilities,” Mehta said.
    Story Source:
    Materials provided by Texas A&M University. Original written by Jennifer Reiley. Note: Content may be edited for style and length.