More stories


    Can't solve a riddle? The answer might lie in knowing what doesn't work

    You look for a pattern, or a rule, and you just can’t spot it. So you back up and start over.
    That’s your brain recognizing that your current strategy isn’t working, and that you need a new way to solve the problem, according to new research from the University of Washington. With the help of about 200 puzzle-takers, a computer model and functional MRI (fMRI) images, researchers have learned more about the processes of reasoning and decision-making, pinpointing the brain pathway that springs into action when problem-solving goes south.
    “There are two fundamental ways your brain can steer you through life — toward things that are good, or away from things that aren’t working out,” said Chantel Prat, associate professor of psychology and co-author of the new study, published Feb. 23 in the journal Cognitive Science. “Because these processes are happening beneath the hood, you’re not necessarily aware of how much driving one or the other is doing.”
    Using a decision-making task developed by Michael Frank at Brown University, the researchers measured exactly how much “steering” in each person’s brain involved learning to move toward rewarding things as opposed to away from less-rewarding things. Prat and her co-authors were focused on understanding what makes someone good at problem-solving.
    The research team first developed a computer model that specified the series of steps they believed were required for solving the Raven’s Advanced Progressive Matrices (Raven’s) — a standard lab test made of visual pattern-completion puzzles. To succeed, the puzzle-taker must identify patterns and predict the next image in the sequence. The model essentially describes the four steps people take to solve a puzzle:
    Identify a key feature in a pattern;
    Figure out where that feature appears in the sequence;
    Come up with a rule for manipulating the feature;
    Check whether the rule holds true for the entire pattern.
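    The model’s steering logic can be sketched in a few lines. This is an illustrative toy, not the authors’ published model; the feature and rule names are invented:

```python
# A minimal sketch of the idea (not the authors' actual model): step through
# candidate features and rules, and steer *away* from any feature whose rules
# all fail, rather than persisting with a failing train of thought.

def solve_puzzle(features, rules, rule_holds):
    """Return the first (feature, rule) pair that holds for the whole pattern."""
    dead_ends = set()                       # features already ruled out
    for feature in features:
        if feature in dead_ends:
            continue
        for rule in rules:
            if rule_holds(feature, rule):   # step 4: check the full pattern
                return feature, rule
        dead_ends.add(feature)              # no rule worked: abandon this track
    return None

# Toy puzzle: only "one more shape per cell" explains the sequence.
holds = lambda f, r: (f, r) == ('shape_count', 'increment')
print(solve_puzzle(['color', 'shape_count'], ['rotate', 'increment'], holds))
```

    The design point mirrors the study’s finding: success comes less from spotting the right move outright than from marking failed strategies as dead ends and moving on.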
    At each step, the model evaluated whether it was making progress. When the model was given real problems to solve, it performed best when it was able to steer away from the features and strategies that weren’t helping it make progress. According to the authors, this ability to know when your “train of thought is on the wrong track” was central to finding the correct answer.
    The next step was to see whether this was true in people. To do so, the team had three groups of participants solve puzzles in three different experiments. In the first, they solved the original set of Raven’s problems using a paper-and-pencil test, along with Frank’s test, which separately measured their ability to “choose” the best options and to “avoid” the worse options. Their results suggested that only the ability to “avoid” the worst options related to problem-solving success. There was no relation between a person’s ability to recognize the best choice in the decision-making test and their ability to solve the puzzles effectively.
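    Frank’s task is typically modeled as reinforcement learning over probabilistic stimulus pairs. The sketch below is a hedged illustration; the pair reward probabilities, learning rate, exploration rate and trial count are common textbook choices, not the study’s parameters:

```python
import random

# Toy Frank-style probabilistic selection task: three stimulus pairs with
# different reward probabilities; the agent learns a value for each stimulus.

def train(alpha=0.1, trials=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    pairs = [('A', 'B', 0.80), ('C', 'D', 0.70), ('E', 'F', 0.60)]
    q = {s: 0.5 for s in 'ABCDEF'}           # learned stimulus values
    for _ in range(trials):
        left, right, p_left = rng.choice(pairs)
        if rng.random() < epsilon:            # occasional exploration
            choice = rng.choice((left, right))
        else:                                 # otherwise pick the better option
            choice = left if q[left] >= q[right] else right
        p_win = p_left if choice == left else 1 - p_left
        reward = 1.0 if rng.random() < p_win else 0.0
        q[choice] += alpha * (reward - q[choice])
    return q

q = train()
# "Choose" skill: reliably picking A (the best stimulus) in novel pairings.
# "Avoid" skill: reliably rejecting B (the worst stimulus).
```

    Test phases of the real task pit A and B against unfamiliar stimuli, separating how well someone learned to seek A from how well they learned to reject B.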
    The second experiment replaced the paper-and-pencil version of the puzzles with a shorter, computerized version of the task that could also be implemented in an MRI brain-scanning environment. These results confirmed that those who were best at avoiding the worse options in the decision-making task were also the best problem solvers.
    The final group of participants completed the computerized puzzles while having their brain activity recorded using fMRI. Based on the model, the researchers gauged which parts of the brain would drive problem-solving success. They zeroed in on the basal ganglia — what Prat calls the “executive assistant” to the prefrontal cortex, or “CEO” of the brain. The basal ganglia assist the prefrontal cortex in deciding which action to take using parallel paths: one that turns the volume “up” on information it believes is relevant, and another that turns the volume “down” on signals it believes to be irrelevant. The “choose” and “avoid” behaviors associated with Frank’s decision-making test relate to the functioning of these two pathways. Results from this experiment suggest that the process of “turning down the volume” in the basal ganglia predicted how successful participants were at solving the puzzles.
    “Our brains have parallel learning systems for avoiding the least good thing and getting the best thing. A lot of research has focused on how we learn to find good things, but this pandemic is an excellent example of why we have both systems. Sometimes, when there are no good options, you have to pick the least bad one! What we found here was that this is even more critical to complex problem-solving than recognizing what’s working.”
    Co-authors of the study were Andrea Stocco, associate professor, and Lauren Graham, assistant teaching professor, in the UW Department of Psychology. The research was supported by the UW Royalty Research Fund, a UW startup fund award and the Bezos Family Foundation.

    Story Source:
    Materials provided by University of Washington. Original written by Kim Eckart. Note: Content may be edited for style and length.


    Extreme-scale computing and AI forecast a promising future for fusion power

    Efforts to duplicate on Earth the fusion reactions that power the sun and stars, a potential source of unlimited energy, must contend with extreme heat-load densities that can damage and shut down tokamaks, the doughnut-shaped facilities that most commonly house laboratory fusion reactions. These heat loads flow against the walls of the divertor plates that extract waste heat from the tokamak.
    Far larger forecast
    But using high-performance computers and artificial intelligence (AI), researchers at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) have predicted a far larger and less damaging heat-load width for the full-power operation of ITER, the international tokamak under construction in France, than previous estimates have found. The new formula produced a forecast more than six times wider than estimates developed by simple extrapolation from present tokamaks to the much larger ITER facility, whose goal is to demonstrate the feasibility of fusion power.
    “If the simple extrapolation to full-power ITER from today’s tokamaks were correct, no known material could withstand the extreme heat load without some difficult preventive measures,” said PPPL physicist C.S. Chang, leader of the team that developed the new, wider forecast and first author of a paper that Physics of Plasmas has published as an Editor’s Pick. “An accurate formula can enable scientists to operate ITER in a more comfortable and cost-effective way toward its goal of producing 10 times more fusion energy than the input energy,” Chang said.
    Fusion reactions combine light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei that makes up 99 percent of the visible universe — to generate massive amounts of energy. Tokamaks, the most widely used fusion facilities, confine the plasma in magnetic fields and heat it to million-degree temperatures to produce fusion reactions. Scientists around the world are seeking to produce and control such reactions to create a safe, clean, and virtually inexhaustible supply of power to generate electricity.
    The Chang team’s surprisingly optimistic forecast harkens back to results the researchers produced on the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory in 2017. The team used the PPPL-developed XGC high-fidelity plasma turbulence code to forecast a heat load more than six times wider in full-power ITER operation than simple extrapolations from current tokamaks predicted.


    Surprise finding
    The surprising finding raised eyebrows by sharply contradicting the dangerously narrow heat-load forecasts. What accounted for the difference — might there be some hidden plasma parameter, or condition of plasma behavior, that the previous forecasts had failed to detect?
    Those forecasts arose from parameters in the simple extrapolations that regarded plasma as a fluid without considering the important kinetic, or particle motion, effects. By contrast, the XGC code produces kinetic simulations using trillions of particles on extreme-scale computers, and its six-times wider forecast suggested that there might indeed be hidden parameters that the fluid approach did not factor in.
    The team performed more refined simulations of the full-power ITER plasma on the Summit supercomputer at OLCF to ensure that their 2017 findings on Titan were not in error.
    The team also performed new XGC simulations on current tokamaks to compare the results to the much wider Summit and Titan findings. One simulation was on one of the highest magnetic-field plasmas on the Joint European Torus (JET) in the United Kingdom, which reaches 73 percent of the full-power ITER magnetic field strength. Another simulation was on one of the highest magnetic-field plasmas on the now decommissioned C-Mod tokamak at the Massachusetts Institute of Technology (MIT), which reaches 100 percent of the full-power ITER magnetic field.


    The results in both cases agreed with the narrow heat-load width forecasts from simple extrapolations. These findings strengthened the suspicion that there are indeed hidden parameters.
    Supervised machine learning
    The team then turned to a type of AI method called supervised machine learning to discover what the unnoticed parameters might be. Using kinetic XGC simulation data from future ITER plasma, the AI code identified the hidden parameter as related to the orbiting of plasma particles around the tokamak’s magnetic field lines, an orbiting called gyromotion.
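    One simple way to hunt for a hidden parameter, shown here as an illustrative stand-in for the team’s actual supervised-learning pipeline, is to screen candidate plasma quantities for correlation with the gap between kinetic and fluid predictions. The data and the power-law “fluid” formula below are synthetic:

```python
import random
import statistics

# Pearson correlation, from scratch so the sketch is self-contained.
def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

rng = random.Random(1)
n = 200
field_strength = [rng.uniform(1, 6) for _ in range(n)]    # tesla-like scale
gyro_ratio = [rng.uniform(0.001, 0.1) for _ in range(n)]  # gyroradius / machine size

# Stand-in power law for the fluid-style extrapolation; in this toy data the
# "true" kinetic width also depends on gyro_ratio, which the fluid fit ignores.
fluid_prediction = [0.63 * b ** -1.19 for b in field_strength]
kinetic_width = [f * (1 + 5 * g) for f, g in zip(fluid_prediction, gyro_ratio)]
residual = [k / f - 1 for k, f in zip(kinetic_width, fluid_prediction)]

# The candidate that tracks the residual is the "hidden parameter."
print(correlation(gyro_ratio, residual))       # ~1.0 in this toy setup
print(correlation(field_strength, residual))   # near zero: already accounted for
```

    The real analysis used kinetic XGC simulation data rather than synthetic samples, but the logic is the same: the unexplained gap points at the gyromotion-related parameter.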
    The AI program suggested a new formula that forecasts a far wider and less dangerous heat-load width for full-power ITER than the previous formula, derived from experimental results in present tokamaks, had predicted. Furthermore, when applied to those present-day tokamaks, the AI-produced formula recovers the narrow widths of the earlier experimental formula.
    “This exercise exemplifies the necessity for high-performance computing, by not only producing high-fidelity understanding and prediction but also improving the analytic formula to be more accurate and predictive,” Chang said. “It is found that the full-power ITER edge plasma is subject to a different type of turbulence than the edge in present tokamaks due to the large size of the ITER edge plasma compared to the gyromotion radius of particles.”
    Researchers then verified the AI-produced formula by performing three more simulations of future ITER plasmas on the supercomputers Summit at OLCF and Theta at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. “If this formula is validated experimentally,” Chang said, “this will be huge for the fusion community and for ensuring that ITER’s divertor can accommodate the heat exhaust from the plasma without too much complication.”
    The team would next like to see experiments on current tokamaks that could be designed to test the AI-produced extrapolation formula. If it is validated, Chang said, “the formula can be used for easier operation of ITER and for the design of more economical fusion reactors.”


    Recommended for you: Role, impact of tools behind automated product picks explored

    As you scroll through Amazon looking for the perfect product, or flip through titles on Netflix searching for a movie to fit your mood, auto-generated recommendations can help you find exactly what you’re looking for among extensive offerings.
    These recommender systems are used in retail, entertainment, social networking and more. In a recently published study, two researchers from The University of Texas at Dallas investigated the informative role of these systems and the economic impacts on competing sellers and consumers.
    “Recommender systems have become ubiquitous in e-commerce platforms and are touted as sales-support tools that help consumers find their preferred or desired product among the vast variety of products,” said Dr. Jianqing Chen, professor of information systems in the Naveen Jindal School of Management. “So far, most of the research has been focused on the technical side of recommender systems, while the research on the economic implications for sellers is limited.”
    In the study, published in the December 2020 issue of MIS Quarterly, Chen and Dr. Srinivasan Raghunathan, the Ashbel Smith Professor of information systems, developed an analytical model in which sellers sell their products through a common electronic marketplace.
    The paper focuses on the informative role of the recommender system: how it affects consumers’ decisions by informing them about products about which they otherwise may be unaware. Recommender systems seem attractive to sellers because they do not have to pay the marketplace for receiving recommendations, while traditional advertising is costly.
    The researchers note that recommender systems have been reported to increase sales on these marketplaces: More than 35% of what consumers purchase on Amazon and more than 60% of what they watch on Netflix result from recommendations. The systems use information including purchase history, search behavior, demographics and product ratings to predict a user’s preferences and recommend the product the consumer is most likely to buy.
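    A minimal sketch of how such a system turns rating history into predictions, using user-based collaborative filtering with invented users, items and ratings (real deployments are far more elaborate):

```python
from math import sqrt

# Toy rating matrix: users -> {item: rating}.
ratings = {
    'alice': {'camera': 5, 'tripod': 4, 'lens': 5},
    'bob':   {'camera': 4, 'tripod': 5},
    'carol': {'novel': 5, 'lamp': 4},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(user, k=1):
    """Score items the user hasn't rated, weighted by rater similarity."""
    scores = {}
    for other, prefs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], prefs)
        for item, r in prefs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend('bob'))  # bob's tastes track alice's, so her 'lens' ranks first
```

    Precision in this setting means the top-ranked item really is one the consumer would prefer, which, per the study, is exactly when sellers benefit.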


    While recommender systems introduce consumers to new products and increase the market size — which benefits sellers — the free exposure is not necessarily profitable, Chen said.
    The researchers found that the advertising effect causes sellers to advertise less on their own, and that the competition effect causes them to decrease their prices. Sellers are likely to benefit from the recommender system only when it has high precision.
    “This means that sellers are likely to benefit from the recommender system only when the recommendations are effective and the products recommended are indeed consumers’ preferred products,” Chen said.
    The researchers determined these results do not change whether sellers use targeted advertising or uniform advertising.
    Although the exposure is desirable for sellers, the negative effects on profitability could overshadow the positive effects. Sellers should carefully choose their advertising approach and adopt uniform advertising if they cannot accurately target customers, Chen said.


    “Free exposure turns out to not really be free,” he said. “To mitigate such a negative effect, sellers should strive to help the marketplace provide effective recommendations. For example, sellers should provide accurate product descriptions, which can help recommender systems provide better matching between products and consumers.”
    Consumers, on the other hand, benefit both directly and indirectly from recommender systems, Raghunathan said. For example, they might be introduced to a new product or benefit from price competition among sellers.
    Conversely, they also might end up paying more than the value of such recommendations in the form of increased prices, Raghunathan said.
    “Consumers should embrace recommender systems,” he said. “However, sharing additional information, such as their preference in the format of online reviews, with the platform is a double-edged sword. While it can help recommender systems more effectively find a product that a consumer might like, the additional information can be used to increase the recommendation precision, which in turn can reduce the competition pressure on sellers and can be bad for consumers.”
    The researchers said that although significant efforts are underway to develop more sophisticated recommender systems, the economic implications of these systems are poorly understood.
    “The business and societal value of recommender systems cannot be assessed properly unless economic issues surrounding them are examined,” Chen said. He and Raghunathan plan to conduct further research on this topic.
    Lusi Li PhD’17, now at California State University, Los Angeles, also contributed to the research. The project was part of Li’s doctoral dissertation at UT Dallas.


    'Egg carton' quantum dot array could lead to ultralow power devices

    A new path toward sending and receiving information with single photons of light has been discovered by an international team of researchers led by the University of Michigan.
    Their experiment demonstrated the possibility of using an effect known as nonlinearity to modify and detect extremely weak light signals, taking advantage of distinct changes to a quantum system to advance next generation computing.
    Today, as silicon-electronics-based information technology becomes increasingly throttled by heating and energy consumption, nonlinear optics is under intense investigation as a potential solution. The quantum egg carton captures and releases photons, supporting “excited” quantum states while it possesses the extra energy. As the energy in the system rises, it takes a bigger jump in energy to get to that next excited state — that’s the nonlinearity.
    “Researchers have wondered whether detectable nonlinear effects can be sustained at extremely low power levels — down to individual photons. This would bring us to the fundamental lower limit of power consumption in information processing,” said Hui Deng, professor of physics and senior author of the paper in Nature.
    “We demonstrated a new type of hybrid state to bring us to that regime, linking light and matter through an array of quantum dots,” she added.
    The physicists and engineers used a new kind of semiconductor to create quantum dots arranged like an egg carton. Quantum dots are essentially tiny structures that can isolate and confine individual quantum particles, such as electrons and other, stranger things. These dots are the pockets in the egg carton. In this case, they confine excitons, quasi-particles made up of an electron and a “hole.” A hole appears when an electron in a semiconductor is kicked into a higher energy band, leaving a positive charge behind in its usual spot. If the hole shadows the electron in its parallel energy band, the two are considered a single entity, an exciton.


    In conventional devices — with little to no nonlinearity — the excitons roam freely and scarcely meet with each other. These materials can contain many identical excitons at the same time without researchers noticing any change to the material properties.
    However, if the exciton is confined to a quantum dot, it becomes impossible to put in a second identical exciton in the same pocket. You’ll need an exciton with a higher energy if you want to get another one in there, which means you’ll need a higher energy photon to make it. This is known as quantum blockade, and it’s the cause of the nonlinearity.
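    The blockade can be pictured as an anharmonic energy ladder. The numbers below are arbitrary units chosen for illustration, not measured values for this material:

```python
# Toy anharmonic ladder illustrating quantum blockade. In a linear,
# blockade-free material every photon would cost the same energy; here each
# additional exciton in a pocket costs a little more.

E_EXCITON = 1.0   # energy to create the first exciton in a pocket (arb. units)
BLOCKADE = 0.1    # extra energy per additional exciton (the nonlinearity)

def photon_energy_needed(n_already_in_pocket):
    """Energy of the photon required to add one more exciton to a pocket."""
    return E_EXCITON + BLOCKADE * n_already_in_pocket

print(photon_energy_needed(0))   # first exciton
print(photon_energy_needed(1))   # second exciton costs more: the blockade shift
```

    Because the required photon energy shifts with occupancy, even a handful of photons produces a detectable change, which is what makes single-photon-level nonlinearity conceivable.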
    But typical quantum dots are only a few atoms across — they aren’t on a usable scale. As a solution, Deng’s team created an array of quantum dots that contribute to the nonlinearity all at once.
    The team produced this egg carton energy landscape with two flakes of semiconductor, which are considered two-dimensional materials because they are made of a single molecular layer, just a few atoms thick. 2D semiconductors have quantum properties that are very different from larger chunks. One flake was tungsten disulfide and the other was molybdenum diselenide. Laid with an angle of about 56.5 degrees between their atomic lattices, the two intertwined electronic structures created a larger electronic lattice, with pockets about 10 atoms across.
    In order for the array of quantum dots inside the 2D semiconductor to be controlled as a group with light, the team built a resonator by making one mirror at the bottom, laying the semiconductor on top of it, and then depositing a second mirror on top of the semiconductor.


    “You need to control the thickness very tightly so that the semiconductor is at the maximum of the optical field,” said Zhang Long, a postdoctoral research fellow in the Deng lab and first author on the paper.
    With the quantum egg carton embedded in the mirrored “cavity” that enabled red laser light to resonate, the team observed the formation of another quantum state, called a polariton. Polaritons are a hybrid of the excitons and the light in the cavity. This confirmed that all of the quantum dots interact with light in concert. In this system, Deng’s team showed that putting a few excitons into the carton led to a measurable change of the polariton’s energy — demonstrating nonlinearity and showing that quantum blockade was occurring.
    “Engineers can use that nonlinearity to discern energy deposited into the system, potentially down to that of a single photon, which makes the system promising as an ultra-low energy switch,” Deng said.
    Switches are among the devices needed to achieve ultralow power computing, and they can be built into more complex gates.
    “Professor Deng’s research describes how polariton nonlinearities can be tailored to consume less energy,” said Michael Gerhold, program manager at the Army Research Office, an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory. “Control of polaritons is aimed at future integrated photonics used for ultra-low energy computing and information processing that could be used for neuromorphic processing for vision systems, natural language processing or autonomous robots.”
    The quantum blockade also means a similar system could possibly be used for qubits, the building blocks for quantum information processing. One forward path is figuring out how to address each quantum dot in the array as an individual qubit. Another way would be to achieve polariton blockade, similar to the exciton blockade seen here. In this version, the array of excitons, resonating in time with the light wave, would be the qubit.
    Used in these ways, the new 2D semiconductors have potential for bringing quantum devices up to room temperature, rather than the extreme cold of liquid nitrogen or liquid helium.
    “We are coming to the end of Moore’s Law,” said Steve Forrest, the Peter A. Franken Distinguished University Professor of Electrical Engineering and co-author of the paper, referring to the trend of the density of transistors on a chip doubling every two years. “Two dimensional materials have many exciting electronic and optical properties that may, in fact, lead us to that land beyond silicon.”


    The (robotic) doctor will see you now

    In the era of social distancing, using robots for some health care interactions is a promising way to reduce in-person contact between health care workers and sick patients. However, a key question that needs to be answered is how patients will react to a robot entering the exam room.
    Researchers from MIT and Brigham and Women’s Hospital recently set out to answer that question. In a study performed in the emergency department at Brigham and Women’s, the team found that a large majority of patients reported that interacting with a health care provider via a video screen mounted on a robot was similar to an in-person interaction with a health care worker.
    “We’re actively working on robots that can help provide care to maximize the safety of both the patient and the health care workforce. The results of this study give us some confidence that people are ready and willing to engage with us on those fronts,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
    In a larger online survey conducted nationwide, the researchers also found that a majority of respondents were open to having robots not only assist with patient triage but also perform minor procedures such as taking a nose swab.
    Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital and a research affiliate in Traverso’s lab, is the lead author of the study, which appears today in JAMA Network Open.
    Triage by robot
    After the Covid-19 pandemic began early last year, Traverso and his colleagues turned their attention toward new strategies to minimize interactions between potentially sick patients and health care workers. To that end, they worked with Boston Dynamics to create a mobile robot that could interact with patients as they waited in the emergency department. The robots were equipped with sensors that allow them to measure vital signs, including skin temperature, breathing rate, pulse rate, and blood oxygen saturation. The robots also carried an iPad that allowed for remote video communication with a health care provider.


    This kind of robot could reduce health care workers’ risk of exposure to Covid-19 and help to conserve the personal protective equipment that is needed for each interaction. However, the question still remained whether patients would be receptive to this type of interaction.
    “Often as engineers, we think about different solutions, but sometimes they may not be adopted because people are not fully accepting of them,” Traverso says. “So, in this study we were trying to tease that out and understand if the population is receptive to a solution like this one.”
    The researchers first conducted a nationwide survey of about 1,000 people, working with a market research company called YouGov. They asked questions regarding the acceptability of robots in health care, including whether people would be comfortable with robots performing not only triage but also other tasks such as performing nasal swabs, inserting a catheter, or turning a patient over in bed. On average, the respondents stated that they were open to these types of interactions.
    The researchers then tested one of their robots in the emergency department at Brigham and Women’s Hospital last spring, when Covid-19 cases were surging in Massachusetts. Fifty-one patients were approached in the waiting room or a triage tent and asked if they would be willing to participate in the study, and 41 agreed. These patients were interviewed about their symptoms via video connection, using an iPad carried by a quadruped, dog-like robot developed by Boston Dynamics. More than 90 percent of the participants reported that they were satisfied with the robotic system.
    “For the purposes of gathering quick triage information, the patients found the experience to be similar to what they would have experienced talking to a person,” Chai says.


    Robotic assistants
    The numbers from the study suggest that it could be worthwhile to try to develop robots that can perform procedures that currently require a lot of human effort, such as turning a patient over in bed, the researchers say. Turning Covid-19 patients onto their stomachs, also known as “proning,” has been shown to boost their blood oxygen levels and make breathing easier. Currently the process requires several people to perform. Administering Covid-19 tests is another task that requires a lot of time and effort from health care workers, who could be deployed for other tasks if robots could help perform swabs.
    “Surprisingly, people were pretty accepting of the idea of having a robot do a nasal swab, which suggests that potential engineering efforts could go into thinking about building some of these systems,” Chai says.
    The MIT team is continuing to develop sensors that can obtain vital sign data from patients remotely, and they are working on integrating these systems into smaller robots that could operate in a variety of environments, such as field hospitals or ambulances.
    Other authors of the paper include Farah Dadabhoy, Hen-wei Huang, Jacqueline Chu, Annie Feng, Hien Le, Joy Collins, Marco da Silva, Marc Raibert, Chin Hur, and Edward Boyer. The research was funded by the National Institutes of Health, the Hans and Mavis Lopater Psychosocial Foundation, e-ink corporation, the Karl Van Tassel (1925) Career Development Professorship, MIT’s Department of Mechanical Engineering, and the Brigham and Women’s Hospital Division of Gastroenterology.


    Cutting off stealthy interlopers: a framework for secure cyber-physical systems

    In 2015, hackers infiltrated the corporate network of Ukraine’s power grid and injected malicious software, which caused a massive power outage. Such cyberattacks, along with the dangers to society that they represent, could become more common as the number of cyber-physical systems (CPS) increases.
    A CPS is any system controlled by a network involving physical elements that tangibly interact with the material world. CPSs are incredibly common in industry, especially where robotics or similar automated machinery is integrated into the production line. However, as CPSs make their way into societal infrastructures such as public transport and energy management, it becomes even more important to be able to efficiently fend off various types of cyberattacks.
    In a recent study published in IEEE Transactions on Industrial Informatics, researchers from Daegu Gyeongbuk Institute of Science and Technology (DGIST), Korea, have developed a framework for CPSs that is resilient against a sophisticated kind of cyberattack: the pole-dynamics attack (PDA). In a PDA, the hacker connects to a node in the network of the CPS and injects false sensor data. Without proper readings from the sensors of the physical elements of the system, the control signals sent by the control algorithm to the physical actuators are incorrect, causing them to malfunction and behave in unexpected, potentially dangerous ways.
    To address PDAs, the researchers adopted a technique known as software-defined networking (SDN), whereby the network of the CPS is made more dynamic by distributing the relaying of signals through controllable SDN switches. In addition, the proposed approach relies on a novel attack-detection algorithm embedded in the SDN switches, which can raise an alarm to the centralized network manager if false sensor data are being injected.
    Once the network manager is notified, it not only cuts the cyberattacker off by pruning the compromised nodes but also establishes a new safe path for the sensor data. “Existing studies have only focused on attack detection, but they fail to consider the implications of detection and recovery in real time,” explains Professor Kyung-Joon Park, who led the study. “In our study, we simultaneously considered these factors to understand their effects on real-time performance and guarantee stable CPS operation.”
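    The recovery step can be sketched as graph rerouting: prune the flagged node, then search for a new sensor-to-controller path. The topology and node names below are invented; the paper’s actual SDN mechanism is more involved:

```python
from collections import deque

# Toy network: a sensor reaches the controller through SDN switches.
topology = {
    'sensor':     ['sw1', 'sw2'],
    'sw1':        ['sensor', 'sw3'],
    'sw2':        ['sensor', 'sw3'],
    'sw3':        ['sw1', 'sw2', 'controller'],
    'controller': ['sw3'],
}

def reroute(graph, src, dst, compromised):
    """BFS for a shortest path that avoids all compromised nodes."""
    queue = deque([[src]])
    seen = {src} | set(compromised)
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no safe path remains

# Attack detected on sw1: prune it and re-establish delivery via sw2.
print(reroute(topology, 'sensor', 'controller', compromised={'sw1'}))
```

    Doing this fast enough is the hard part: the control loop must keep receiving trustworthy sensor data in real time while the path is rebuilt.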
    The new framework was validated experimentally in a dedicated testbed, showing promising results. Excited about the outcomes of the study, Park remarks, “Considering CPSs are a key technology of smart cities and unmanned transport systems, we expect our research will be crucial to provide reliability and resiliency to CPSs in various application domains.” Having a system that is robust against cyberattacks means that economic losses and personal injuries can be minimized. Therefore, this study paves the way to a more secure future for both CPSs and ourselves.

    Story Source:
    Materials provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology). Note: Content may be edited for style and length.


    Advance in 'optical tweezers' to boost biomedical research

    Much like the Jedi in Star Wars use ‘the force’ to control objects from a distance, scientists can use light, or ‘optical force’, to move very small particles.
    The inventors of this ground-breaking laser technology, known as ‘optical tweezers’, were awarded the 2018 Nobel Prize in physics.
    Optical tweezers are used in biology, medicine and materials science to assemble and manipulate nanoparticles, such as gold nanoparticles. However, the technology relies on a difference in the refractive properties of the trapped particle and the surrounding environment.
    Now scientists have discovered a new technique that allows them to manipulate particles that have the same refractive properties as the background environment, overcoming a fundamental technical challenge.
    The study ‘Optical tweezers beyond refractive index mismatch using highly doped upconversion nanoparticles’ has just been published in Nature Nanotechnology.
    “This breakthrough has huge potential, particularly in fields such as medicine,” says leading co-author Dr Fan Wang from the University of Technology Sydney (UTS).

    “The ability to push, pull and measure the forces of microscopic objects inside cells, such as strands of DNA or intracellular enzymes, could lead to advances in understanding and treating many different diseases such as diabetes or cancer.
    “Traditional mechanical micro-probes used to manipulate cells are invasive, and the positioning resolution is low. They can only measure things like the stiffness of a cell membrane, not the force of molecular motor proteins inside a cell,” he says.
    The research team developed a unique method to control the refractive properties and luminescence of nanoparticles by doping nanocrystals with rare-earth metal ions.
    Having overcome this first fundamental challenge, the team then optimised the ion doping concentration to trap nanoparticles at a much lower laser power, with a 30-fold gain in efficiency.
    “Traditionally, you need hundreds of milliwatts of laser power to trap a 20 nanometre gold particle. With our new technology, we can trap a 20 nanometre particle using tens of milliwatts of power,” says Xuchen Shan, first co-author and UTS PhD candidate in the UTS School of Electrical and Data Engineering.

    “Our optical tweezers also achieved a record high degree of sensitivity or ‘stiffness’ for nanoparticles in a water solution. Remarkably, the heat generated by this method was negligible compared with older methods, so our optical tweezers offer a number of advantages,” he says.
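The study's raw data are not given here, but the "stiffness" figure quoted above is routinely obtained via the equipartition theorem: a bead in a harmonic optical trap at temperature T satisfies k = k_B·T / Var(x), so recording the bead's position fluctuations yields the trap stiffness. A minimal sketch using simulated positions (the 10 nm spread below is an assumed illustrative value, not the study's measurement):

```python
# Equipartition-theorem calibration of optical-trap stiffness:
# k = k_B * T / Var(x). Positions are simulated, not measured.
import random
import statistics

KB = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0              # temperature, K
true_sigma = 10e-9     # assumed 10 nm position spread in the trap

random.seed(0)
positions = [random.gauss(0.0, true_sigma) for _ in range(100_000)]

k_est = KB * T / statistics.pvariance(positions)   # stiffness, N/m
print(f"estimated stiffness: {k_est * 1e6:.1f} uN/m")
```

A tighter position distribution (smaller variance) means a stiffer trap, which is why higher stiffness translates into higher force sensitivity.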
    Fellow leading co-author Dr Peter Reece, from the University of New South Wales, says this proof-of-concept research is a significant advancement in a field that is becoming increasingly sophisticated for biological researchers.
    “The prospect of developing a highly efficient nanoscale force probe is very exciting. The hope is that the force probe can be labelled to target intracellular structures and organelles, enabling the optical manipulation of these intracellular structures,” he says.
    Distinguished Professor Dayong Jin, Director of the UTS Institute for Biomedical Materials and Devices (IBMD) and a leading co-author, says this work opens up new opportunities for super resolution functional imaging of intracellular biomechanics.
    “IBMD research is focused on the translation of advances in photonics and material technology into biomedical applications, and this type of technology development is well aligned to this vision,” says Professor Jin.
    “Once we have answered the fundamental science questions and discovered new mechanisms of photonics and material science, we then move to apply them. This new advance will allow us to use lower-power and less-invasive ways to trap nanoscopic objects, such as live cells and intracellular compartments, for high precision manipulation and nanoscale biomechanics measurement.”


    Researchers discover that privacy-preserving tools leave private data anything but

    Machine-learning (ML) systems are becoming pervasive not only in technologies affecting our day-to-day lives, but also in those observing them, including facial expression recognition systems. Companies that make and use such widely deployed services rely on so-called privacy-preservation tools, often built on generative adversarial networks (GANs) and typically produced by a third party, to scrub images of individuals’ identity. But how good are they?
    Researchers at the NYU Tandon School of Engineering, who explored the machine-learning frameworks behind these tools, found that the answer is “not very.” In the paper “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images,” presented last month at the 35th AAAI Conference on Artificial Intelligence, a team led by Siddharth Garg, Institute Associate Professor of electrical and computer engineering at NYU Tandon, asked whether private data could still be recovered from images that had been “sanitized” by deep-learning discriminators such as privacy-protecting GANs (PP-GANs), even when those images had passed empirical privacy tests. The team, which included lead author Kang Liu, a Ph.D. candidate, and Benjamin Tan, research assistant professor of electrical and computer engineering, found that PP-GAN designs can in fact be subverted to pass privacy checks while still allowing secret information to be extracted from sanitized images.
    Machine-learning-based privacy tools have broad applicability, potentially in any privacy-sensitive domain, including removing location-relevant information from vehicular camera data, obfuscating the identity of a person who produced a handwriting sample, or removing barcodes from images. The design and training of GAN-based tools are outsourced to vendors because of the complexity involved.
    “Many third-party tools for protecting the privacy of people who may show up on a surveillance or data-gathering camera use these PP-GANs to manipulate images,” said Garg. “Versions of these systems are designed to sanitize images of faces and other sensitive data so that only application-critical information is retained. While our adversarial PP-GAN passed all existing privacy checks, we found that it actually hid secret data pertaining to the sensitive attributes, even allowing for reconstruction of the original private image.”
    The study provides background on PP-GANs and associated empirical privacy checks, formulates an attack scenario to ask if empirical privacy checks can be subverted, and outlines an approach for circumventing empirical privacy checks.
    The team provides the first comprehensive security analysis of privacy-preserving GANs and demonstrates that existing privacy checks are inadequate to detect leakage of sensitive information.
    Using a novel steganographic approach, they adversarially modify a state-of-the-art PP-GAN to hide a secret (the user ID) in purportedly sanitized face images.
    They show that their proposed adversarial PP-GAN can successfully hide sensitive attributes in “sanitized” output images that pass privacy checks, with a 100% secret recovery rate.
    Noting that empirical metrics are dependent on discriminators’ learning capacities and training budgets, Garg and his collaborators argue that such privacy checks lack the necessary rigor for guaranteeing privacy.
    “From a practical standpoint, our results sound a note of caution against the use of data sanitization tools, and specifically PP-GANs, designed by third parties,” explained Garg. “Our experimental results highlighted the insufficiency of existing DL-based privacy checks and the potential risks of using untrusted third-party PP-GAN tools.”
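The paper's attack embeds the secret in subtle, learned image statistics via an adversarially trained GAN. As a much simpler illustration of the underlying idea, a covert channel hidden inside ordinary-looking pixels, here is classic least-significant-bit (LSB) steganography. This is emphatically not the authors' method, only a toy analogue showing how an image can "pass inspection" while carrying a recoverable secret:

```python
# Toy LSB steganography: hide an integer user ID in the low bits of
# pixel values, then recover it. NOT the paper's GAN-based attack.

def hide(pixels, secret_id, bits=16):
    """Embed `secret_id` in the LSBs of the first `bits` pixel values."""
    out = list(pixels)
    for i in range(bits):
        out[i] = (out[i] & ~1) | ((secret_id >> i) & 1)
    return out

def recover(pixels, bits=16):
    """Read the secret back out of the LSBs."""
    return sum((pixels[i] & 1) << i for i in range(bits))

image = [200, 13, 77, 45] * 8            # toy 32-pixel "sanitized" image
stego = hide(image, secret_id=42_017)
print(recover(stego))                     # 42017
# The carrier barely changes: each pixel value moves by at most 1,
# so a casual visual or statistical check would see nothing amiss.
print(max(abs(a - b) for a, b in zip(image, stego)))
```

Simple LSB tricks are detectable by dedicated steganalysis; the point of the paper is that an adversarially trained PP-GAN can hide the secret in ways that current empirical privacy checks do not catch.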

    Story Source:
    Materials provided by NYU Tandon School of Engineering. Note: Content may be edited for style and length.