More stories


    A new path for electron optics in solid-state systems

    Electrons can interfere in the same manner as water, acoustic or light waves do. When exploited in solid-state materials, such effects promise novel functionality for electronic devices, in which elements such as interferometers, lenses or collimators could be integrated for controlling electrons at the scale of micro- and nanometres. However, so far such effects have been demonstrated mainly in one-dimensional devices, for example in nanotubes, or under specific conditions in two-dimensional graphene devices. Writing in Physical Review X, a collaboration including the Department of Physics groups of Klaus Ensslin, Thomas Ihn and Werner Wegscheider in the Laboratory for Solid State Physics and of Oded Zilberberg at the Institute of Theoretical Physics now introduces a general scenario for realizing electron optics in two dimensions.


    The main functional principle of optical interferometers is the interference of monochromatic waves that propagate in the same direction. In such interferometers, the interference can be observed as a periodic oscillation of the transmitted intensity on varying the wavelength of the light. However, the period of the interference pattern strongly depends on the incident angle of the light, and, as a result, the interference pattern is averaged out if light is sent through the interferometer at all possible incident angles at once. The same arguments apply to the interference of matter waves as described by quantum mechanics, and in particular to interferometers in which electrons interfere.
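    The angle-averaging argument can be made concrete with a toy calculation. The sketch below (all parameters are illustrative, not taken from the paper) computes the Airy transmission of an idealized Fabry-Pérot cavity: at fixed incidence the transmitted intensity oscillates as the wavelength is scanned, but averaging over a wide range of incidence angles washes the fringes out.

```python
import math

# Toy Airy-transmission model of a Fabry-Perot interferometer. All
# numbers (mirror reflectivity, spacing, wavelength range) are illustrative.
R = 0.3                            # mirror reflectivity
F = 4 * R / (1 - R) ** 2           # coefficient of finesse
d = 10_000.0                       # mirror spacing in nm

def transmission(lam_nm, theta):
    # round-trip phase for wavelength lam_nm at incidence angle theta
    phase = 4 * math.pi * d * math.cos(theta) / lam_nm
    return 1.0 / (1.0 + F * math.sin(phase / 2.0) ** 2)

wavelengths = [500.0 + 0.1 * i for i in range(201)]     # 500-520 nm scan
thetas = [math.radians(60.0) * i / 400 for i in range(401)]

# Fixed (normal) incidence: clear fringes while scanning the wavelength.
single = [transmission(lam, 0.0) for lam in wavelengths]
# All incidence angles at once: the fringes largely average out.
averaged = [sum(transmission(lam, t) for t in thetas) / len(thetas)
            for lam in wavelengths]

vis_single = max(single) - min(single)
vis_avg = max(averaged) - min(averaged)
print(f"fringe visibility, fixed angle:    {vis_single:.3f}")
print(f"fringe visibility, angle-averaged: {vis_avg:.3f}")
```

The fixed-angle visibility stays large, while the angle-averaged one collapses, which is exactly why a mechanism that survives averaging over all incidence angles is needed.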
    As part of their PhD projects, experimentalist Matija Karalic and theorist Antonio Štrkalj have investigated the phenomenon of electronic interference in a solid-state system consisting of two coupled semiconductor layers, InAs and GaSb. They discovered that the band inversion and hybridization present in this system provide a novel transport mechanism that guarantees non-vanishing interference even when all angles of incidence occur. Through a combination of transport measurements and theoretical modelling, they found that their devices operate as a Fabry-Pérot interferometer in which electrons and holes form hybrid states and interfere.
    The significance of these results extends well beyond the specific InAs/GaSb realization explored in this work, as the reported mechanism requires only the two ingredients of band inversion and hybridization. New paths are therefore open for engineering electron-optical phenomena in a broad variety of materials.


    Story Source:
    Materials provided by ETH Zurich Department of Physics. Note: Content may be edited for style and length.

    Journal Reference:
    Matija Karalic, Antonio Štrkalj, Michele Masseroni, Wei Chen, Christopher Mittag, Thomas Tschirky, Werner Wegscheider, Thomas Ihn, Klaus Ensslin, Oded Zilberberg. Electron-Hole Interference in an Inverted-Band Semiconductor Bilayer. Physical Review X, 2020; 10 (3) DOI: 10.1103/PhysRevX.10.031007

    Cite This Page:

    ETH Zurich Department of Physics. “A new path for electron optics in solid-state systems.” ScienceDaily. ScienceDaily, 14 July 2020.
    ETH Zurich Department of Physics. (2020, July 14). A new path for electron optics in solid-state systems. ScienceDaily. Retrieved July 14, 2020 from www.sciencedaily.com/releases/2020/07/200714132737.htm
    ETH Zurich Department of Physics. “A new path for electron optics in solid-state systems.” ScienceDaily. www.sciencedaily.com/releases/2020/07/200714132737.htm (accessed July 14, 2020).


    Wireless aquatic robot could clean water and transport cells

    Researchers at Eindhoven University of Technology developed a tiny plastic robot, made of responsive polymers, which moves under the influence of light and magnetism. In the future this ‘wireless aquatic polyp’ should be able to attract and capture contaminant particles from the surrounding liquid or pick up and transport cells for analysis in diagnostic devices. The researchers published their results in the journal PNAS.
    The mini robot is inspired by a coral polyp, a small soft creature with tentacles of the kind that makes up the corals in the ocean. Doctoral candidate Marina Pilz Da Cunha: “I was inspired by the motion of these coral polyps, especially their ability to interact with the environment through self-made currents.” The stem of a living polyp makes a specific movement that creates a current that attracts food particles; the tentacles then grab the food particles floating by.
    The wireless artificial polyp measures 1 by 1 cm and has a stem that reacts to magnetism and tentacles that are steered by light. “Combining two different stimuli is rare since it requires delicate material preparation and assembly, but it is interesting for creating untethered robots because it allows for complex shape changes and tasks to be performed,” explains Pilz Da Cunha. The tentacles move when light is shone on them, and different wavelengths lead to different results: the tentacles ‘grab’ under the influence of UV light, while they ‘release’ under blue light.
    FROM LAND TO WATER
    The device presented here can grab and release objects underwater, a capability that the light-guided package-delivery mini robot the researchers presented earlier this year lacked. That land-based robot couldn’t work underwater, because its polymers act through photothermal effects: the robot was fueled by the heat the light generated, rather than by the light itself. Pilz Da Cunha: “Heat dissipates in water, which makes it impossible to steer the robot under water.” She therefore developed a photomechanical polymer material that moves under the influence of light only, not heat.


    And that is not its only advantage. Besides operating underwater, this new material can hold its deformation after being activated by light. While a photothermal material returns to its original shape immediately after the stimulus has been removed, the molecules in the photomechanical material actually take on a new state. This allows different stable shapes to be maintained for a longer period of time. “That helps to control the gripper arm; once something has been captured, the robot can keep holding it until it is addressed by light once again to release it,” says Pilz Da Cunha.
    FLOW ATTRACTS PARTICLES
    By placing a rotating magnet underneath the robot, the researchers made the stem circle around its axis. Pilz Da Cunha: “It was therefore possible to actually move floating objects in the water towards the polyp, in our case oil droplets.”
    The position of the tentacles (open, closed or something in between) turned out to influence the fluid flow. “Computer simulations with different tentacle positions eventually helped us to understand and get the movement of the stem exactly right, and to ‘attract’ the oil droplets towards the tentacles,” explains Pilz Da Cunha.
    OPERATION INDEPENDENT OF THE WATER COMPOSITION
    An added advantage is that the robot operates independently of the composition of the surrounding liquid. This is unique, because hydrogels, the dominant stimuli-responsive materials used for underwater applications today, are sensitive to their environment and therefore behave differently in contaminated water. Pilz Da Cunha: “Our robot also works in the same way in salt water, or in water with contaminants. In fact, in the future the polyp may be able to filter contaminants out of the water by catching them with its tentacles.”


    NEXT STEP: SWIMMING ROBOT
    PhD student Pilz Da Cunha is now working on the next step: an array of polyps that can work together. She hopes to realize transport of particles, in which one polyp passes on a package to the other. A swimming robot is also on her wish list. Here, she thinks of biomedical applications such as capturing specific cells.
    To achieve this, the researchers still have to work on the wavelengths to which the material responds. “UV light affects cells and its depth of penetration in the human body is limited. In addition, UV light might damage the robot itself, making it less durable. Therefore we want to create a robot that doesn’t need UV light as a stimulus,” concludes Pilz Da Cunha.
    Video: https://www.youtube.com/watch?v=QYklipdzesI&feature=emb_logo


    'Knock codes' for smartphone security are easily predicted

    Smartphone owners who unlock their devices with knock codes aren’t as safe as they think, according to researchers from New Jersey Institute of Technology, the George Washington University and Ruhr University Bochum.
    Knock codes work by letting people select patterns to tap on a phone’s locked screen. LG popularized the method in 2014, and approximately 700,000 people now use it in the U.S. alone, along with one million downloads worldwide of clone applications for Android devices, the researchers said.
    Raina Samuel, a doctoral student in computer science at NJIT’s Ying Wu College of Computing, said she had the idea for this research while attending a security conference in 2017.
    “During that conference I heard our co-author Adam Aviv give a presentation. He was talking about passwords, PINs, shoulder surfing and how these mobile methods of authentication can be manipulated and insecure sometimes,” she said. “At the time, I had an LG phone and I was using the knock codes. It was a bit of a personal interest for me.”
    Knock codes typically present users with a 2-by-2 grid, which must be tapped in the correct sequence to unlock their phone. The sequence is between six and ten taps. The researchers analyzed how easily an attacker could guess a tapping pattern.
    In an online study, 351 participants picked codes. The researchers found that 65% of users started their codes in the top left corner, often proceeding to the top right corner next, which could be attributed to Western reading habits. They also found that increasing the size of the grid didn’t help, instead making the users more likely to pick shorter codes.
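    For scale, the raw space of possible knock codes is easy to count: with four grid cells and six to ten taps, there are 4^6 + 4^7 + … + 4^10 candidate codes. A short back-of-the-envelope script (the grid sizes and length limits are those described above; the comparison with a 3x3 grid is illustrative):

```python
# Counting the knock-code space: grid_cells choices per tap, code length
# between min_len and max_len taps (6-10, as described in the article).
def knock_code_space(grid_cells, min_len=6, max_len=10):
    return sum(grid_cells ** k for k in range(min_len, max_len + 1))

print(knock_code_space(4))   # 2x2 grid: 1,396,736 possible codes
print(knock_code_space(9))   # 3x3 grid: far larger on paper -- but the
                             # study found users then picked shorter codes
```

The raw space looks large, but the observed biases (65% of codes starting top-left) mean an attacker trying likely codes first faces a much smaller effective space.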


    “Knock codes really intrigued me as I have spent a lot of time working on other mobile authentication options, such as PINs or Android patterns, and had never heard of these,” Aviv, an associate professor of computer science at GW, said. “Turns out, while less popular than PINs or patterns, there are still a surprising number of people using knock codes, so it’s important to understand the security and usability properties of them.”
    The researchers also tested a blocklist of common codes, so that survey participants would pick something harder to guess. The list contained the 30 most popular codes. The first three were:
    Top left, top right, bottom left, bottom right, top left, top right (Hourglass shape)
    Top left, top right, bottom right, bottom left, top left, top right (Square shape)
    Top left, top left, top right, top right, bottom left, bottom left. (Number 7 shape)
    The researchers said there should be a feature that blocks codes which are too easy to guess and advises users to pick stronger ones, similar to how some websites respond when users create password-protected accounts.
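    A minimal sketch of such a blocklist feature might look as follows. The corner labels, messages and function name are hypothetical; only the three blocklisted patterns are taken from the list above.

```python
# Corner labels for the 2x2 grid: TL, TR, BL, BR.
# Hypothetical blocklist seeded with the three most popular codes above.
BLOCKLIST = {
    ("TL", "TR", "BL", "BR", "TL", "TR"),   # hourglass shape
    ("TL", "TR", "BR", "BL", "TL", "TR"),   # square shape
    ("TL", "TL", "TR", "TR", "BL", "BL"),   # number-7 shape
}

def check_code(code):
    """Advise the user when a chosen knock code is invalid or too common."""
    if not 6 <= len(code) <= 10:
        return "invalid: a knock code must be 6 to 10 taps"
    if tuple(code) in BLOCKLIST:
        return "too common: please pick a harder-to-guess code"
    return "ok"

print(check_code(["TL", "TR", "BL", "BR", "TL", "TR"]))   # hourglass: rejected
print(check_code(["BR", "BL", "TR", "TL", "BR", "TR"]))   # accepted
```

In practice the list would contain all 30 popular codes the researchers identified, mirroring the password-strength meters many websites already use.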
    The study also showed that knock codes are difficult to memorize: approximately one in ten participants forgot their code by the end of the study, even though it lasted only five minutes. In addition, entering a knock code to unlock the screen took 5 seconds on average, compared with roughly 4.5 seconds for a PIN and only 3 seconds for an Android unlock pattern.
    The research team also included Ruhr University’s Philipp Markert. Aviv asked Markert to join their project when peer reviewers said the study of knock code patterns should be done on phones, not on computer simulations. Markert adapted the study’s programming for this change.
    “I’m always interested in new authentication schemes, and I worked with Adam on a similar project about PINs, so when he asked me to join the team, I didn’t think twice,” Markert said.
    The paper will be presented at the 16th Symposium on Usable Privacy and Security, held August 9-11 concurrently with the prestigious USENIX Security Symposium. Funding was supplied by the Army Research Laboratory, the National Science Foundation and Germany’s North Rhine-Westphalian Experts on Research in Digitalization.


    Links between video games and gambling run deeper than previously thought, study reveals

    A range of video game practices have potentially dangerous links to problem gambling, a study has revealed.
    Building on previous research by the same author, which exposed a link between problem gambling and video game loot boxes, the new study suggests that a number of other practices in video games, such as token wagering, real-money gaming, and social casino spending, are also significantly linked to problem gambling.
    The research provides evidence that players who engage in these practices are also more likely to suffer from disordered gaming — a condition where persistent and repeated engagement with video games causes an individual significant impairment or distress.
    Author of the study, Dr David Zendle from the Department of Computer Science at the University of York, said: “These findings suggest that the relationship between gaming and problem gambling is more complex than many people think.”
    “When we go beyond loot boxes, we can see that there are multiple novel practices in gaming that incorporate elements of gambling. All of them are linked to problem gambling, and all seem prevalent. This may pose an important public health risk. Further research is urgently needed.”
    For the study, a group of just under 1,100 participants were quota-sampled to represent the UK population in terms of age, gender, and ethnicity. They were then asked about their gaming and gambling habits.
    The study revealed that a significant proportion (18.5%) of the participants had engaged in some behaviour that related to both gaming and gambling, such as playing a social casino game or spending money on a loot box.
    Dr Zendle added: “There are currently loopholes that mean some gambling-related elements of video games avoid regulation. For example, social casinos are ‘video games’ that are basically a simulation of gambling: you can spend real money in them, and the only thing that stops them being regulated as proper gambling is that winnings cannot be converted into cash.
    “We need to have regulations in place that address all of the similarities between gambling and video games. Loot boxes aren’t the only element of video games that overlaps with gambling: they’re just a tiny symptom of this broader convergence.”
    Last year, University of York academics, including Dr David Zendle, contributed to a House of Commons select committee inquiry whose report called for video game loot boxes to be regulated under gambling law and for their sale to children to be banned. Dr Zendle also provided key evidence to the recent House of Lords select committee inquiry that likewise produced a report recommending the regulation of loot boxes as gambling.

    Story Source:
    Materials provided by University of York. Note: Content may be edited for style and length.


    Artificial 'neurotransistor' created

    Activities in the field of artificial intelligence in particular, such as teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently: our own brain. For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics.
    Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely — we need new approaches,” Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.
    “Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.
    Silicon wafer + polymer = chip capable of learning
    Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua from the University of California at Berkeley, who had already postulated similar components in the early 1970s.
    Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance, called sol-gel, to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
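    The excitation-strengthens-the-connection behaviour can be caricatured in a few lines of code. This is a deliberately crude toy model, not the device physics from the paper: a slow "ionic" state variable rises with each excitation pulse, relaxes only gradually (the hysteresis), and determines how easily the transistor conducts.

```python
# Toy model of a hysteresis-based learning element (illustrative only).
class ToyNeurotransistor:
    def __init__(self, potentiation=0.2, decay=0.02):
        self.weight = 0.0               # slow ionic state: the "memory"
        self.potentiation = potentiation
        self.decay = decay

    def excite(self):
        # each pulse displaces ions; the state saturates towards 1
        self.weight += self.potentiation * (1.0 - self.weight)

    def relax(self, steps=1):
        # ions drift back only slowly -- this delay is the hysteresis
        for _ in range(steps):
            self.weight *= (1.0 - self.decay)

    def current(self, voltage):
        # the more prior excitation, the more easily the device conducts
        return voltage * self.weight

t = ToyNeurotransistor()
t.excite()
weak = t.current(1.0)                   # response after a single pulse
for _ in range(20):                     # repeated stimulation strengthens it
    t.excite()
strong = t.current(1.0)
print(f"after 1 pulse: {weak:.3f}, after 21 pulses: {strong:.3f}")
```

Repeated excitation drives the conductance towards saturation while relaxation slowly erodes it, which is the qualitative "learning" effect the quote describes.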
    Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.


    Tech sector job interviews assess anxiety, not software skills

    A new study from North Carolina State University and Microsoft finds that the technical interviews currently used in hiring for many software engineering positions test whether a job candidate has performance anxiety rather than whether the candidate is competent at coding. The interviews may also be used to exclude groups or favor specific job candidates.
    “Technical interviews are feared and hated in the industry, and it turns out that these interview techniques may also be hurting the industry’s ability to find and hire skilled software engineers,” says Chris Parnin, an assistant professor of computer science at NC State and co-author of a paper on the work. “Our study suggests that a lot of well-qualified job candidates are being eliminated because they’re not used to working on a whiteboard in front of an audience.”
    Technical interviews in the software engineering sector generally take the form of giving a job candidate a problem to solve, then requiring the candidate to write out a solution in code on a whiteboard — explaining each step of the process to an interviewer.
    Previous research found that many developers in the software engineering community felt the technical interview process was deeply flawed. So the researchers decided to run a study aimed at assessing the effect of the interview process on aspiring software engineers.
    For this study, researchers conducted technical interviews of 48 computer science undergraduates and graduate students. Half of the study participants were given a conventional technical interview, with an interviewer looking on. The other half of the participants were asked to solve their problem on a whiteboard in a private room. The private interviews did not require study participants to explain their solutions aloud, and had no interviewers looking over their shoulders.
    Researchers measured each study participant’s interview performance by assessing the accuracy and efficiency of each solution. In other words, they wanted to know whether the code a participant wrote would work, and how much computing power would be needed to run it.


    “People who took the traditional interview performed half as well as people who were able to interview in private,” Parnin says. “In short, the findings suggest that companies are missing out on really good programmers because those programmers aren’t good at writing on a whiteboard and explaining their work out loud while coding.”
    The researchers also note that the current format of technical interviews may also be used to exclude certain job candidates.
    “For example, interviewers may give easier problems to candidates they prefer,” Parnin says. “But the format may also serve as a barrier to entire classes of candidates. For example, in our study, all of the women who took the public interview failed, while all of the women who took the private interview passed. Our study was limited, and a larger sample size would be needed to draw firm conclusions, but the idea that the very design of the interview process may effectively exclude an entire class of job candidates is troubling.”
    What’s more, the specific nature of the technical interview process means that many job candidates try to spend weeks or months training specifically for the technical interview, rather than for the actual job they’d be doing.
    “The technical interview process gives people with industry connections an advantage,” says Mahnaz Behroozi, first author of the study and a Ph.D. student at NC State. “But it gives a particularly large advantage to people who can afford to take the time to focus solely on preparing for an interview process that has very little to do with the nature of the work itself.
    “And the problems this study highlights are in addition to a suite of other problems associated with the hiring process in the tech sector, which we presented at ICSE-SES [the International Conference on Software Engineering, Software Engineering In Society],” adds Behroozi. “If the tech sector can address all of these challenges in a meaningful way, it will make significant progress in becoming more fair and inclusive. More to the point, the sector will be drawing from a larger and more diverse talent pool, which would contribute to better work.”
    The study on technical interviews, “Does Stress Impact Technical Interview Performance?,” will be presented at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, being held virtually from Nov. 8-13. The study was co-authored by Shivani Shirolkar, a Ph.D. student at NC State who worked on the project while an undergraduate; and by Titus Barik, a researcher at Microsoft and former Ph.D. student at NC State.


    Robot jaws show medicated chewing gum could be the future

    Medicated chewing gum has been recognised as a new advanced drug delivery method, but currently there is no gold standard for testing drug release from chewing gum in vitro. New research has shown a chewing robot with built-in humanoid jaws could provide opportunities for pharmaceutical companies to develop medicated chewing gum.
    The aim of the University of Bristol study, published in IEEE Transactions on Biomedical Engineering, was to confirm whether a humanoid chewing robot could assess medicated chewing gum. The robot is capable of closely replicating the human chewing motion in a closed environment. It features artificial saliva and allows the release of xylitol from the gum to be measured.
    The study wanted to compare the amount of xylitol remaining in the gum between the chewing robot and human participants. The research team also wanted to assess the amount of xylitol released from chewing the gum.
    The researchers found the chewing robot demonstrated a similar release rate of xylitol as human participants. The greatest release of xylitol occurred during the first five minutes of chewing and after 20 minutes of chewing only a low amount of xylitol remained in the gum bolus, irrespective of the chewing method used.
    Saliva and artificial saliva solutions respectively were collected after five, ten, 15 and 20 minutes of continuous chewing, and the amount of xylitol released from the chewing gum was established.
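    The measured pattern, fast release at first and little xylitol left after 20 minutes, is what a simple first-order release model would predict. The sketch below is purely illustrative: the study measured xylitol directly, and the rate constant here is invented.

```python
import math

# Toy first-order release model: the xylitol remaining in the gum decays
# exponentially with chewing time (illustrative; k is a made-up constant).
def remaining(t_min, m0=100.0, k=0.15):
    return m0 * math.exp(-k * t_min)

samples = [0, 5, 10, 15, 20]           # the study's sampling times (minutes)
released_per_interval = [remaining(a) - remaining(b)
                         for a, b in zip(samples, samples[1:])]
print([round(x, 1) for x in released_per_interval])
print(f"left in gum after 20 min: {remaining(20):.1f}%")
```

The model reproduces the qualitative finding: each successive five-minute interval releases less than the one before, and only a few percent remains in the bolus at 20 minutes.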
    Dr Kazem Alemzadeh, Senior Lecturer in the Department of Mechanical Engineering, who led the research, said: “Bioengineering has been used to create an artificial oral environment that closely mimics that found in humans.
    “Our research has shown the chewing robot gives pharmaceutical companies the opportunity to investigate medicated chewing gum, with reduced patient exposure and lower costs using this new method.”
    Nicola West, Professor in Restorative Dentistry in the Bristol Dental School and co-author, added: “The most convenient drug administration route to patients is through oral delivery methods. This research, utilising a novel humanoid artificial oral environment, has the potential to revolutionise investigation into oral drug release and delivery.”

    Story Source:
    Materials provided by University of Bristol. Note: Content may be edited for style and length.


    Transparent, reflective objects now within grasp of robots

    Kitchen robots are a popular vision of the future, but if a robot of today tries to grasp a kitchen staple such as a clear measuring cup or a shiny knife, it likely won’t be able to. Transparent and reflective objects are the things of robot nightmares.
    Roboticists at Carnegie Mellon University, however, report success with a new technique they’ve developed for teaching robots to pick up these troublesome objects. The technique doesn’t require fancy sensors, exhaustive training or human guidance, but relies primarily on a color camera. The researchers will present this new system during this summer’s International Conference on Robotics and Automation virtual conference.
    David Held, an assistant professor in CMU’s Robotics Institute, said depth cameras, which shine infrared light on an object to determine its shape, work well for identifying opaque objects. But infrared light passes right through clear objects and scatters off reflective surfaces. Thus, depth cameras can’t calculate an accurate shape, resulting in largely flat or hole-riddled shapes for transparent and reflective objects.
    But a color camera can see transparent and reflective objects as well as opaque ones. So CMU scientists developed a color camera system to recognize shapes based on color. A standard camera can’t measure shapes like a depth camera, but the researchers nevertheless were able to train the new system to imitate the depth system and implicitly infer shape to grasp objects. They did so using depth camera images of opaque objects paired with color images of those same objects.
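    The training idea, a colour-only "student" learning to reproduce a depth "teacher" on paired images of opaque objects, can be illustrated schematically. The sketch below is emphatically not CMU's model: it shrinks the problem to a one-feature linear student fitted by stochastic gradient descent to a made-up teacher, purely to show the paired-supervision structure.

```python
import random

# Schematic of learning by pairing (NOT the CMU system): a "student" that
# sees only a colour-derived feature learns to reproduce a depth "teacher"
# on opaque objects, where the depth camera works well.
random.seed(0)

def teacher_depth(colour_feature):
    # stand-in for the depth camera's output on an opaque object
    return 2.0 * colour_feature + 1.0

# Paired training data: (colour feature, depth label), opaque objects only.
pairs = [(x, teacher_depth(x))
         for x in (random.uniform(0, 5) for _ in range(200))]

w, b = 0.0, 0.0          # the student: a tiny linear model
lr = 0.01
for _ in range(2000):    # plain stochastic gradient descent on squared error
    x, y = random.choice(pairs)
    err = (w * x + b) - y
    w -= lr * err * x
    b -= lr * err

print(f"student: w={w:.2f}, b={b:.2f}  (teacher: w=2.00, b=1.00)")
```

Once the student matches the teacher on opaque objects, it can be applied to transparent and reflective ones, where the teacher's direct measurement fails; that transfer step is the part the toy model cannot show.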
    Once trained, the color camera system was applied to transparent and shiny objects. Based on those images, along with whatever scant information a depth camera could provide, the system could grasp these challenging objects with a high degree of success.
    “We do sometimes miss,” Held acknowledged, “but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects.”
    The system can’t pick up transparent or reflective objects as efficiently as opaque objects, said Thomas Weng, a Ph.D. student in robotics. But it is far more successful than depth camera systems alone. And the multimodal transfer learning used to train the system was so effective that the color system proved almost as good as the depth camera system at picking up opaque objects.
    “Our system not only can pick up individual transparent and reflective objects, but it can also grasp such objects in cluttered piles,” he added.
    Other attempts at robotic grasping of transparent objects have relied on training systems based on exhaustively repeated attempted grasps — on the order of 800,000 attempts — or on expensive human labeling of objects.
    The CMU system uses a commercial RGB-D camera that’s capable of both color images (RGB) and depth images (D). The system can use this single sensor to sort through recyclables or other collections of objects — some opaque, some transparent, some reflective.
    Video: https://www.youtube.com/watch?v=Gny7NfmqyOk&feature=emb_logo

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice. Note: Content may be edited for style and length.