More stories

  • Researchers discover security loophole allowing attackers to use WiFi to see through walls

    A research team based out of the University of Waterloo has developed a drone-powered device that can use WiFi networks to see through walls.
    The device, nicknamed Wi-Peep, can fly near a building and then use the inhabitants’ WiFi network to identify and locate all WiFi-enabled devices inside in a matter of seconds.
    The Wi-Peep exploits a loophole the researchers call Polite WiFi. Even if a network is password-protected, smart devices will automatically respond to contact attempts from any device within range. The Wi-Peep sends several messages to a device as it flies and then measures the response time of each, enabling it to locate the device to within a metre.
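    The ranging principle described here is standard time-of-flight measurement. As a minimal sketch (the delay values below are illustrative, not the Wi-Peep's actual parameters), the distance to a responding device follows from the round-trip time once the device's fixed reply delay is subtracted:

```python
# Minimal sketch of time-of-flight ranging: a packet's round-trip time,
# minus the responder's fixed processing delay, gives twice the one-way
# travel time at the speed of light. Values here are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def estimate_distance(rtt_s: float, processing_delay_s: float) -> float:
    """Convert a measured round-trip time (seconds) into a one-way distance (metres)."""
    time_of_flight = rtt_s - processing_delay_s
    return time_of_flight * C / 2.0

# Example: a 250 ns round trip with a 216.8 ns fixed reply delay
# corresponds to roughly 5 m of one-way distance.
print(estimate_distance(250e-9, 216.8e-9))
```

    Repeating this measurement from several drone positions lets the attacker triangulate a device's location, which is why metre-level timing accuracy is enough for room-level tracking.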
    Dr. Ali Abedi, an adjunct professor of computer science at Waterloo, explains the significance of this discovery.
    “The Wi-Peep devices are like lights in the visible spectrum, and the walls are like glass,” Abedi said. “Using similar technology, one could track the movements of security guards inside a bank by following the location of their phones or smartwatches. Likewise, a thief could identify the location and type of smart devices in a home, including security cameras, laptops, and smart TVs, to find a good candidate for a break-in. In addition, the device’s operation via drone means that it can be used quickly and remotely without much chance of the user being detected.”
    While scientists have explored WiFi security vulnerability in the past using bulky, expensive devices, the Wi-Peep is notable because of its accessibility and ease of transportation. Abedi’s team built it using a store-bought drone and $20 of easily purchased hardware.
    “As soon as the Polite WiFi loophole was discovered, we realized this kind of attack was possible,” Abedi said.
    The team built the Wi-Peep to test their theory and quickly realized that anyone with the right expertise could easily create a similar device.
    “On a fundamental level, we need to fix the Polite WiFi loophole so that our devices do not respond to strangers,” Abedi said. “We hope our work will inform the design of next-generation protocols.”
    In the meantime, he urges WiFi chip manufacturers to introduce an artificial, randomized variation in device response time, which will make calculations like the ones the Wi-Peep uses wildly inaccurate.
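    The proposed mitigation can be sketched as follows (a conceptual illustration, not an actual chipset implementation): if the reply delay includes even a few tens of nanoseconds of random jitter, the timing no longer encodes distance reliably.

```python
import random

# Conceptual sketch of the suggested countermeasure: add random jitter to
# the device's reply delay so round-trip times no longer encode distance.
# The delay values are illustrative, not real chipset parameters.

C = 299_792_458.0  # speed of light, m/s

def reply_delay_with_jitter(base_delay_s: float, jitter_s: float) -> float:
    """Fixed reply delay plus a uniform random jitter component."""
    return base_delay_s + random.uniform(0.0, jitter_s)

# Even 20 ns of jitter translates into roughly 3 m of ranging error
# (20e-9 * C / 2), enough to defeat metre-level localization.
ranging_error_m = 20e-9 * C / 2.0
print(ranging_error_m)
```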
    The paper summarizing this research, “Non-cooperative wi-fi localization & its privacy implications,” was presented at the 28th Annual International Conference on Mobile Computing and Networking.
    Story Source:
    Materials provided by University of Waterloo. Note: Content may be edited for style and length.

  • Researchers encourage retailers to embrace AI to better service customers

    Three QUT researchers are part of an international research team that has identified new ways for retailers to use artificial intelligence, in concert with in-store cameras, to better serve customers and tailor store layouts to maximise sales.
    In research published in Artificial Intelligence Review, the team proposes an AI-powered store layout design framework that lets retailers take advantage of recent advances in AI and its sub-fields of computer vision and deep learning to monitor the physical shopping behaviour of their customers.
    Any shopper who has retrieved milk from the farthest corner of a shop knows that an efficient store layout attracts customer attention to items they had not intended to buy, increases browsing time, and groups related or viable alternative products together so they are easy to find.
    A well thought out layout has been shown to positively correlate with increased sales and customer satisfaction. It is one of the most effective in-store marketing tactics which can directly influence customer decisions to boost profitability.
    QUT researchers Dr Kien Nguyen and Professor Clinton Fookes from the School of Electrical Engineering & Robotics, and Professor Brett Martin from the QUT Business School, teamed up with Dr Minh Le from the University of Economics, Ho Chi Minh City, Vietnam, and Professor Ibrahim Cil from Sakarya University, Serdivan, Turkey, to conduct a comprehensive review of existing approaches to in-store layout design.
    Dr Nguyen says improving supermarket layout design — through understanding and prediction — is a vital tactic to improve customer satisfaction and increase sales.

  • In the latest human vs. machine match, artificial intelligence wins by a hair

    Vikas Nanda has spent more than two decades studying the intricacies of proteins, the highly complex substances present in all living organisms. The Rutgers scientist has long contemplated how the unique patterns of amino acids that compose proteins determine whether they become anything from hemoglobin to collagen, as well as the subsequent, mysterious step of self-assembly where only certain proteins clump together to form even more complex substances.
    So, when scientists wanted to conduct an experiment pitting a human — one with a profound, intuitive understanding of protein design and self-assembly — against the predictive capabilities of an artificially intelligent computer program, Nanda, a researcher at the Center for Advanced Biotechnology and Medicine (CABM) at Rutgers, was one of those at the top of the list.
    Now, the results to see who — or what — could do a better job at predicting which protein sequences would combine most successfully are out. Nanda, along with researchers at Argonne National Laboratory in Illinois and colleagues from throughout the nation, reports in Nature Chemistry that the battle was close but decisive. The competition matching Nanda and several colleagues against an artificial intelligence (AI) program has been won, ever so slightly, by the computer program.
    Scientists are deeply interested in protein self-assembly because they believe understanding it better could help them design a host of revolutionary products for medical and industrial uses, such as artificial human tissue for wounds and catalysts for new chemical products.
    “Despite our extensive expertise, the AI did as good or better on several data sets, showing the tremendous potential of machine learning to overcome human bias,” said Nanda, a professor in the Department of Biochemistry and Molecular Biology at Rutgers Robert Wood Johnson Medical School.
    Proteins are made of large numbers of amino acids joined end to end. The chains fold up to form three-dimensional molecules with complex shapes. The precise shape of each protein, along with the amino acids it contains, determines what it does. Some researchers, such as Nanda, engage in “protein design,” creating sequences that produce new proteins. Recently, Nanda and a team of researchers designed a synthetic protein that quickly detects VX, a dangerous nerve agent, and could pave the way for new biosensors and treatments.

  • Machine learning facilitates 'turbulence tracking' in fusion reactors

    Fusion, which promises practically unlimited, carbon-free energy using the same processes that power the sun, is at the heart of a worldwide research effort that could help mitigate climate change.
    A multidisciplinary team of researchers is now bringing tools and insights from machine learning to aid this effort. Scientists from MIT and elsewhere have used computer-vision models to identify and track turbulent structures that appear under the conditions needed to facilitate fusion reactions.
    Monitoring the formation and movements of these structures, called filaments or “blobs,” is important for understanding the heat and particle flows exiting from the reacting fuel, which ultimately determine the engineering requirements for the reactor walls. However, scientists typically study blobs using averaging techniques, which trade details of individual structures for aggregate statistics; to track individual blobs, researchers must mark them manually in video data.
    The researchers built a synthetic video dataset of plasma turbulence to make this process more effective and efficient. They used it to train four computer vision models, each of which identifies and tracks blobs. They trained the models to pinpoint blobs in the same ways that humans would.
    When the researchers tested the trained models using real video clips, the models could identify blobs with high accuracy — more than 80 percent in some cases. The models were also able to effectively estimate the size of blobs and the speeds at which they moved.
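    The core task the models perform can be illustrated with a much simpler baseline. The sketch below is not the authors' computer-vision models; it is a hypothetical minimal approach that identifies bright connected regions in a single frame and reports each one's size and centroid, the same quantities (blob size and position over time) the trained models estimate:

```python
import numpy as np
from collections import deque

# Illustrative baseline (not the authors' models): find "blobs" in one
# turbulence frame by thresholding intensity, then grouping connected
# bright pixels via breadth-first search. Returns (size, centroid) pairs.

def find_blobs(frame: np.ndarray, threshold: float):
    """Return (pixel_count, (row, col) centroid) for each bright connected region."""
    mask = frame > threshold
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        queue, pixels = deque([start]), []
        seen[start] = True
        while queue:
            r, c = queue.popleft()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    queue.append((nr, nc))
        ys, xs = zip(*pixels)
        blobs.append((len(pixels), (sum(ys) / len(ys), sum(xs) / len(xs))))
    return blobs

frame = np.zeros((6, 6))
frame[1:3, 1:3] = 1.0   # one 4-pixel blob
frame[4, 4] = 1.0       # one 1-pixel blob
print(find_blobs(frame, 0.5))  # two blobs: sizes 4 and 1
```

    Real plasma frames are far noisier than this toy example, which is precisely why the researchers turned to trained models rather than fixed thresholds.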
    Because millions of video frames are captured during just one fusion experiment, using machine-learning models to track blobs could give scientists much more detailed information.

  • How network pruning can skew deep learning models

    Computer science researchers have shown that a widely used technique called neural network pruning can adversely affect the performance of deep learning models, detailed what causes these performance problems, and demonstrated a technique for addressing them.
    Deep learning is a type of artificial intelligence that can be used to classify things, such as images, text or sound. For example, it can be used to identify individuals based on facial images. However, deep learning models often require a lot of computing resources to operate. This poses challenges when a deep learning model is put into practice for some applications.
    To address these challenges, some systems engage in “neural network pruning.” This effectively makes the deep learning model more compact and, therefore, able to operate while using fewer computing resources.
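    One common form of pruning, offered here as a hedged sketch and not necessarily the variant studied in the paper, is magnitude pruning: zeroing out the fraction of weights with the smallest absolute values so the compact model can skip them at inference time.

```python
import numpy as np

# Sketch of magnitude pruning (one common pruning approach; the paper may
# study other variants): zero the fraction of weights with the smallest
# absolute values, leaving the large-magnitude weights intact.

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    cutoff = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= cutoff] = 0.0
    return pruned

w = np.array([[0.1, -2.0], [0.05, 1.5]])
print(magnitude_prune(w, 0.5))
# the two smallest-magnitude weights (0.1 and 0.05) are zeroed
```

    The paper's concern is that the information carried by these discarded small weights is not spread evenly across groups, so the accuracy cost of pruning can fall disproportionately on under-represented ones.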
    “However, our research shows that this network pruning can impair the ability of deep learning models to identify some groups,” says Jung-Eun Kim, co-author of a paper on the work and an assistant professor of computer science at North Carolina State University.
    “For example, if a security system uses deep learning to scan people’s faces in order to determine whether they have access to a building, the deep learning model would have to be made compact so that it can operate efficiently. This may work fine most of the time, but the network pruning could also affect the deep learning model’s ability to identify some faces.”
    In their new paper, the researchers lay out why network pruning can adversely affect the performance of the model at identifying certain groups — which the literature calls “minority groups” — and demonstrate a new technique for addressing these challenges.

  • Tracking trust in human-robot work interactions

    The future of work is here.
    As industries begin to see humans working closely with robots, there’s a need to ensure that the relationship is effective, smooth and beneficial to humans. Robot trustworthiness and humans’ willingness to trust robot behavior are vital to this working relationship. However, capturing human trust levels can be difficult due to subjectivity, a challenge researchers in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering at Texas A&M University aim to solve.
    Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s human-autonomy trust research stemmed from a series of projects on human-robot interactions in safety-critical work domains funded by the National Science Foundation (NSF).
    “While our focus so far was to understand how operator states of fatigue and stress impact how humans interact with robots, trust became an important construct to study,” Mehta said. “We found that as humans get tired, they let their guards down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.”
    Mehta’s latest NSF-funded work, recently published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior relationships of why and how an operator’s trusting behaviors are influenced by both human and robot factors.
    Mehta also has another publication in the journal Applied Ergonomics that investigates these human and robot factors.
    Using functional near-infrared spectroscopy, Mehta’s lab captured functional brain activity as operators collaborated with robots on a manufacturing task. They found faulty robot actions decreased the operator’s trust in the robots. That distrust was associated with increased activation of regions in the frontal, motor and visual cortices, indicating increasing workload and heightened situational awareness. Interestingly, the same distrusting behavior was associated with the decoupling of these brain regions working together, which otherwise were well connected when the robot behaved reliably. Mehta said this decoupling was greater at higher robot autonomy levels, indicating that neural signatures of trust are influenced by the dynamics of human-autonomy teaming.
    “What we found most interesting was that the neural signatures differed when we compared brain activation data across reliability conditions (manipulated using normal and faulty robot behavior) versus operator’s trust levels (collected via surveys) in the robot,” Mehta said. “This emphasized the importance of understanding and measuring brain-behavior relationships of trust in human-robot collaborations, since perceptions of trust alone are not indicative of how operators’ trusting behaviors shape up.”
    Dr. Sarah Hopko ’19, lead author on both papers and a recent industrial engineering doctoral student, said neural responses and perceptions of trust are both symptoms of trusting and distrusting behaviors and relay distinct information on how trust builds, breaches and repairs with different robot behaviors. She emphasized that multimodal trust metrics — neural activity, eye tracking, behavioral analysis, etc. — can reveal new perspectives that subjective responses alone cannot offer.
    The next step is to expand the research into a different work context, such as emergency response, and understand how trust in multi-human robot teams impacts teamwork and taskwork in safety-critical environments. Mehta said the long-term goal is not to replace humans with autonomous robots but to support them by developing trust-aware autonomy agents.
    “This work is critical, and we are motivated to ensure that humans-in-the-loop robotics design, evaluation and integration into the workplace are supportive and empowering of human capabilities,” Mehta said.
    Story Source:
    Materials provided by Texas A&M University. Original written by Jennifer Reiley.

  • Machine learning, from you

    Many computer systems people interact with on a daily basis require knowledge about certain aspects of the world, or models, to work. These systems have to be trained, often needing to learn to recognize objects from video or image data. This data often contains superfluous content that reduces the accuracy of models. So researchers found a way to incorporate natural hand gestures into the teaching process. This way, users can more easily teach machines about objects, and the machines can also learn more effectively.
    You’ve probably heard the term machine learning before, but are you familiar with machine teaching? Machine learning is what happens behind the scenes when a computer uses input data to form models that can later be used to perform useful functions. But machine teaching is the somewhat less explored part of the process, of how the computer gets its input data to begin with. In the case of visual systems, for example ones that can recognize objects, people need to show objects to a computer so it can learn about them. But there are drawbacks to the ways this is typically done that researchers from the University of Tokyo’s Interactive Intelligent Systems Laboratory sought to improve.
    “In a typical object training scenario, people can hold an object up to a camera and move it around so a computer can analyze it from all angles to build up a model,” said graduate student Zhongyi Zhou. “However, machines lack our evolved ability to isolate objects from their environments, so the models they make can inadvertently include unnecessary information from the backgrounds of the training images. This often means users must spend time refining the generated models, which can be a rather technical and time-consuming task. We thought there must be a better way of doing this that’s better for both users and computers, and with our new system, LookHere, I believe we have found it.”
    Zhou, working with Associate Professor Koji Yatani, created LookHere to address two fundamental problems in machine teaching: teaching efficiency, aiming to minimize the users’ time and required technical knowledge; and learning efficiency, ensuring better learning data for machines to create models from. LookHere achieves these by doing something novel and surprisingly intuitive: it incorporates users’ hand gestures into the way an image is processed before the machine incorporates it into its model. For example, a user can point to or present an object to the camera in a way that emphasizes its significance compared to the other elements in the scene. This is exactly how people might show objects to each other. By eliminating extraneous details, thanks to the added emphasis on what’s actually important in the image, the computer gains better input data for its models.
    “The idea is quite straightforward, but the implementation was very challenging,” said Zhou. “Everyone is different and there is no standard set of hand gestures. So, we first collected 2,040 example videos of 170 people presenting objects to the camera into HuTics. These assets were annotated to mark what was part of the object and what parts of the image were just the person’s hands. LookHere was trained with HuTics, and when compared to other object recognition approaches, can better determine what parts of an incoming image should be used to build its models. To make sure it’s as accessible as possible, users can use their smartphones to work with LookHere and the actual processing is done on remote servers. We also released our source code and data set so that others can build upon it if they wish.”
    Factoring in the reduced demand on users’ time, Zhou and Yatani found that LookHere can build models up to 14 times faster than some existing systems. At present, LookHere deals with teaching machines about physical objects and it uses exclusively visual data for input. But in theory, the concept can be expanded to use other kinds of input data, such as sound or scientific data, and models made from that data would benefit from similar improvements in accuracy too.
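    The gesture-emphasis idea can be sketched in a few lines. The code below is a hypothetical illustration of the concept, not the LookHere implementation: it down-weights pixels far from a point the user's hand indicates, so background clutter contributes less to the training input. The `hand_point` and `falloff` parameters are invented for this example.

```python
import numpy as np

# Hypothetical illustration of gesture-guided teaching (not the actual
# LookHere system): scale pixel intensities by a Gaussian weight map
# centred on the point the user's hand indicates, suppressing background.

def emphasis_weights(shape, hand_point, falloff=5.0):
    """Gaussian weight map centred on the indicated (row, col) point."""
    ys, xs = np.indices(shape)
    d2 = (ys - hand_point[0]) ** 2 + (xs - hand_point[1]) ** 2
    return np.exp(-d2 / (2.0 * falloff ** 2))

def emphasize(image, hand_point):
    """Down-weight pixels far from the gesture-indicated point."""
    return image * emphasis_weights(image.shape, hand_point)

img = np.ones((32, 32))
out = emphasize(img, hand_point=(16, 16))
# pixels near (16, 16) keep ~full intensity; corners are suppressed
```

    The real system learns where the user's hands are and what they indicate from annotated video, rather than being handed a point directly, but the effect on the training data is the same in spirit: emphasize the object, suppress the background.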
    Story Source:
    Materials provided by University of Tokyo.

  • Quantum dots form ordered material

    Quantum dots are clusters of some 1,000 atoms that act as one large ‘super-atom’. It is possible to accurately design the electronic properties of these dots just by changing their size. However, to create functional devices, a large number of dots have to be combined into a new material, and during this process the properties of the dots are often lost. Now, a team led by University of Groningen professor of Photophysics and Optoelectronics Maria Antonietta Loi has succeeded in making a highly conductive optoelectronic metamaterial through self-organization. The metamaterial is described in the journal Advanced Materials, published on 29 October.
    Quantum dots of PbSe (lead selenide) or PbS (lead sulphide) can convert shortwave infrared light into an electrical current. This is a useful property for making detectors, or a switch for telecommunications. ‘However, a single dot does not make a device. And when dots are combined, the assembly often loses the unique optical properties of individual dots, or, if they do maintain them, their capacity to transport charge carriers becomes very poor’, explains Loi. ‘This is because it is difficult to create an ordered material from the dots.’
    Ordered layer
    Working with colleagues from the Zernike Institute for Advanced Materials at the Faculty of Science and Engineering, University of Groningen, Loi experimented with a method that allows the production of a metamaterial from a colloidal solution of quantum dots. These dots, each about five to six nanometres in size, show a very high conductivity when assembled in an ordered array, while maintaining their optical properties.
    ‘We knew from the literature that dots can self-organize into a two-dimensional, ordered layer. We wanted to expand this to a 3D material’, says Loi. To achieve this, they filled small containers with a liquid that acted as a ‘mattress’ for the colloidal quantum dots. ‘By injecting a small amount onto the surface of the liquid, we created a 2D material. Then, adding a bigger volume of quantum dots turned out to produce an ordered 3D material.’
    Superlattice
    The dots are not submersed in the liquid but orient themselves on the surface to achieve a low-energy state. ‘The dots have a truncated cubic shape, and when they are put together, they form an ordered structure in three dimensions; a superlattice, where the dots act like atoms in a crystal’, explains Loi. This superlattice, composed of the quantum-dot super-atoms, displays the highest electron mobility reported for quantum dot assemblies.
    Detectors
    It took special equipment to ascertain what the new metamaterial looks like. The team used an electron microscope which has atomic resolution to show the details of the material. They also ‘imaged’ the large-scale structure of the material using a technique called Grazing-incidence small-angle X-ray scattering. ‘Both techniques are available at the Zernike Institute, thanks to my colleagues Bart Kooi and Giuseppe Portale, respectively, which was a great help’, says Loi.
    Measurements of the electronic properties of the material show that its behaviour closely resembles that of a bulk semiconductor, but with the optical properties of the dots. Thus, the experiment paves the way to create new metamaterials based on quantum dots. The sensitivity of the dots used in the present study to infrared light could be used to create optical switches for telecommunication devices. ‘And they might also be used in infrared detectors for night-vision and autonomous driving.’
    ERC Grant
    Loi is extremely pleased with the results of the experiments: ‘People have been dreaming of achieving this since the 1980s. That is how long attempts have been made to assemble quantum dots into functional materials. The control of the structure and the properties we have achieved was beyond our wildest expectations.’ Loi is now working on understanding and improving the technology to build extended superlattices from quantum dots, but is also planning to do so with other building blocks, for which she was recently awarded an Advanced Grant from the European Research Council. ‘Our next step is to improve the technique in order to make the materials more perfect and fabricate photodetectors with them.’
    Story Source:
    Materials provided by University of Groningen.