More stories

  • Artificial 'neurotransistor' created

    Activities in the field of artificial intelligence, such as teaching robots to walk or performing precise automatic image recognition, demand ever more powerful, yet at the same time more economical, computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint for how information can be processed and stored quickly and efficiently: our own brain. For the first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics.
    Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely — we need new approaches,” Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.
    “Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.
    Silicon wafer + polymer = chip capable of learning
    Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua from the University of California at Berkeley, who had already postulated similar components in the early 1970s.
    Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance — called a sol-gel — to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
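    The learning behaviour Cuniberti describes, where excitation strengthens a connection and slow ion relaxation lets part of the change persist, can be caricatured in a few lines of Python. This is a toy sketch with invented numbers, not the physics of the actual device:

```python
# Toy model of hysteresis-based learning: repeated excitation strengthens a
# "synaptic" conductance, and incomplete relaxation leaves part of that
# change behind. All parameter values are illustrative only.

def stimulate(conductance, pulses, gain=0.2, relaxation=0.05):
    """Apply excitation pulses, then let the ions partially relax back."""
    for _ in range(pulses):
        conductance += gain * (1.0 - conductance)  # excitation strengthens
    conductance -= relaxation * conductance        # slow, incomplete decay
    return conductance

g = 0.1  # initial (weak) connection
for epoch in range(5):
    g = stimulate(g, pulses=3)
print(f"conductance after training: {g:.2f}")  # well above the initial 0.1
```

Because the relaxation step never fully undoes the excitation, the conductance ratchets upward with repeated use, which is the "storage effect" in miniature.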
    Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Note: Content may be edited for style and length.

  • Tech sector job interviews assess anxiety, not software skills

    A new study from North Carolina State University and Microsoft finds that the technical interviews currently used in hiring for many software engineering positions test whether a job candidate has performance anxiety rather than whether the candidate is competent at coding. The interviews may also be used to exclude groups or favor specific job candidates.
    “Technical interviews are feared and hated in the industry, and it turns out that these interview techniques may also be hurting the industry’s ability to find and hire skilled software engineers,” says Chris Parnin, an assistant professor of computer science at NC State and co-author of a paper on the work. “Our study suggests that a lot of well-qualified job candidates are being eliminated because they’re not used to working on a whiteboard in front of an audience.”
    Technical interviews in the software engineering sector generally take the form of giving a job candidate a problem to solve, then requiring the candidate to write out a solution in code on a whiteboard — explaining each step of the process to an interviewer.
    Previous research found that many developers in the software engineering community felt the technical interview process was deeply flawed. So the researchers decided to run a study aimed at assessing the effect of the interview process on aspiring software engineers.
    For this study, researchers conducted technical interviews of 48 computer science undergraduates and graduate students. Half of the study participants were given a conventional technical interview, with an interviewer looking on. The other half of the participants were asked to solve their problem on a whiteboard in a private room. The private interviews did not require study participants to explain their solutions aloud, and had no interviewers looking over their shoulders.
    Researchers measured each study participant’s interview performance by assessing the accuracy and efficiency of each solution. In other words, they wanted to know whether the code a participant wrote would work, and how efficiently it used computing resources.
    “People who took the traditional interview performed half as well as people who were able to interview in private,” Parnin says. “In short, the findings suggest that companies are missing out on really good programmers because those programmers aren’t good at writing on a whiteboard and explaining their work out loud while coding.”
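    The headline comparison is simple arithmetic over the two groups’ scores. The sketch below uses made-up numbers chosen to reproduce the reported two-to-one gap; the real data are in the paper:

```python
# Hypothetical interview scores (0-100) for the two study conditions.
# These numbers are invented for illustration, not taken from the study.
public_scores  = [20, 30, 40, 25, 35, 30]   # whiteboard with interviewer
private_scores = [50, 60, 70, 55, 65, 60]   # same task, private room

mean = lambda xs: sum(xs) / len(xs)
ratio = mean(public_scores) / mean(private_scores)
print(f"public group scored {ratio:.0%} of the private group's mean")  # 50%
```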
    The researchers also note that the current format of technical interviews may also be used to exclude certain job candidates.
    “For example, interviewers may give easier problems to candidates they prefer,” Parnin says. “But the format may also serve as a barrier to entire classes of candidates. For example, in our study, all of the women who took the public interview failed, while all of the women who took the private interview passed. Our study was limited, and a larger sample size would be needed to draw firm conclusions, but the idea that the very design of the interview process may effectively exclude an entire class of job candidates is troubling.”
    What’s more, the specific nature of the technical interview process means that many job candidates spend weeks or months training specifically for the technical interview, rather than for the actual job they’d be doing.
    “The technical interview process gives people with industry connections an advantage,” says Mahnaz Behroozi, first author of the study and a Ph.D. student at NC State. “But it gives a particularly large advantage to people who can afford to take the time to focus solely on preparing for an interview process that has very little to do with the nature of the work itself.
    “And the problems this study highlights are in addition to a suite of other problems associated with the hiring process in the tech sector, which we presented at ICSE-SES [the International Conference on Software Engineering, Software Engineering In Society],” adds Behroozi. “If the tech sector can address all of these challenges in a meaningful way, it will make significant progress in becoming more fair and inclusive. More to the point, the sector will be drawing from a larger and more diverse talent pool, which would contribute to better work.”
    The study on technical interviews, “Does Stress Impact Technical Interview Performance?,” will be presented at the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, being held virtually from Nov. 8-13. The study was co-authored by Shivani Shirolkar, a Ph.D. student at NC State who worked on the project while an undergraduate; and by Titus Barik, a researcher at Microsoft and former Ph.D. student at NC State.

  • Robot jaws show medicated chewing gum could be the future

    Medicated chewing gum has been recognised as a new advanced drug delivery method, but currently there is no gold standard for testing drug release from chewing gum in vitro. New research has shown a chewing robot with built-in humanoid jaws could provide opportunities for pharmaceutical companies to develop medicated chewing gum.
    The aim of the University of Bristol study, published in IEEE Transactions on Biomedical Engineering, was to confirm whether a humanoid chewing robot could assess medicated chewing gum. The robot is capable of closely replicating the human chewing motion in a closed environment. It features artificial saliva and allows the release of xylitol from the gum to be measured.
    The study set out to compare the amount of xylitol remaining in the gum after chewing by the robot and by human participants, and to assess the amount of xylitol released during chewing.
    The researchers found the chewing robot demonstrated a similar release rate of xylitol as human participants. The greatest release of xylitol occurred during the first five minutes of chewing and after 20 minutes of chewing only a low amount of xylitol remained in the gum bolus, irrespective of the chewing method used.
    Saliva and artificial saliva solutions respectively were collected after five, ten, 15 and 20 minutes of continuous chewing and the amount of xylitol released from the chewing gum established.
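    The release profile described above (a steep drop in the first five minutes, little left at 20 minutes) is consistent with simple first-order kinetics. The sketch below assumes such a model with an invented rate constant; the study itself does not specify one:

```python
import math

# First-order release sketch: fraction of xylitol remaining after t minutes
# of chewing. The rate constant k is an illustrative value, not a parameter
# from the Bristol study.
def fraction_remaining(t_minutes, k=0.15):
    return math.exp(-k * t_minutes)

for t in (5, 10, 15, 20):   # the sampling times used in the study
    print(f"{t:2d} min: {fraction_remaining(t):.0%} remaining")
```

With these assumed kinetics, more than half the xylitol is released in the first five minutes and only a few percent remains at 20 minutes, matching the qualitative pattern the researchers report.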
    Dr Kazem Alemzadeh, Senior Lecturer in the Department of Mechanical Engineering, who led the research, said: “Bioengineering has been used to create an artificial oral environment that closely mimics that found in humans.
    “Our research has shown the chewing robot gives pharmaceutical companies the opportunity to investigate medicated chewing gum, with reduced patient exposure and lower costs using this new method.”
    Nicola West, Professor in Restorative Dentistry in the Bristol Dental School and co-author, added: “The most convenient drug administration route to patients is through oral delivery methods. This research, utilising a novel humanoid artificial oral environment, has the potential to revolutionise investigation into oral drug release and delivery.”

    Story Source:
    Materials provided by University of Bristol.

  • Transparent, reflective objects now within grasp of robots

    Kitchen robots are a popular vision of the future, but if a robot of today tries to grasp a kitchen staple such as a clear measuring cup or a shiny knife, it likely won’t be able to. Transparent and reflective objects are the stuff of robot nightmares.
    Roboticists at Carnegie Mellon University, however, report success with a new technique they’ve developed for teaching robots to pick up these troublesome objects. The technique doesn’t require fancy sensors, exhaustive training or human guidance, but relies primarily on a color camera. The researchers will present this new system during this summer’s International Conference on Robotics and Automation virtual conference.
    David Held, an assistant professor in CMU’s Robotics Institute, said depth cameras, which shine infrared light on an object to determine its shape, work well for identifying opaque objects. But infrared light passes right through clear objects and scatters off reflective surfaces. Thus, depth cameras can’t calculate an accurate shape, resulting in largely flat or hole-riddled shapes for transparent and reflective objects.
    But a color camera can see transparent and reflective objects as well as opaque ones. So CMU scientists developed a color camera system to recognize shapes based on color. A standard camera can’t measure shapes like a depth camera, but the researchers nevertheless were able to train the new system to imitate the depth system and implicitly infer shape to grasp objects. They did so using depth camera images of opaque objects paired with color images of those same objects.
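    The paired-supervision idea can be sketched compactly: use the depth camera’s output on opaque objects as training labels for a model that sees only colour. The CMU system presumably uses deep networks on full images; this stand-in fits a linear model to synthetic per-pixel features:

```python
import numpy as np

# Minimal sketch of training on colour/depth pairs: depth-camera readings on
# opaque objects serve as labels for a colour-only model. Real systems use
# deep networks and whole images; synthetic per-pixel features stand in here.
rng = np.random.default_rng(0)

n_pixels = 1000
color_features = rng.random((n_pixels, 3))          # stand-in RGB features
true_weights = np.array([0.5, -0.2, 0.8])           # hidden colour-depth link
depth_labels = color_features @ true_weights + 0.01 * rng.standard_normal(n_pixels)

# Fit colour -> depth by least squares (the "imitate the depth camera" step)
weights, *_ = np.linalg.lstsq(color_features, depth_labels, rcond=None)
pred = color_features @ weights
rmse = float(np.sqrt(np.mean((pred - depth_labels) ** 2)))
print(f"RMSE of colour-based depth estimate: {rmse:.3f}")
```

Once such a model is trained on opaque objects, it can be applied to transparent and shiny ones, where the depth camera itself fails; that transfer step is the core of the CMU result.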
    Once trained, the color camera system was applied to transparent and shiny objects. Based on those images, along with whatever scant information a depth camera could provide, the system could grasp these challenging objects with a high degree of success.
    “We do sometimes miss,” Held acknowledged, “but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects.”
    The system can’t pick up transparent or reflective objects as efficiently as opaque objects, said Thomas Weng, a Ph.D. student in robotics. But it is far more successful than depth camera systems alone. And the multimodal transfer learning used to train the system was so effective that the color system proved almost as good as the depth camera system at picking up opaque objects.
    “Our system not only can pick up individual transparent and reflective objects, but it can also grasp such objects in cluttered piles,” he added.
    Other attempts at robotic grasping of transparent objects have relied on training systems based on exhaustively repeated attempted grasps — on the order of 800,000 attempts — or on expensive human labeling of objects.
    The CMU system uses a commercial RGB-D camera that’s capable of both color images (RGB) and depth images (D). The system can use this single sensor to sort through recyclables or other collections of objects — some opaque, some transparent, some reflective.
    Video: https://www.youtube.com/watch?v=Gny7NfmqyOk&feature=emb_logo

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice.

  • Power of DNA to store information gets an upgrade

    A team of interdisciplinary researchers has discovered a new technique to store information in DNA — in this case “The Wizard of Oz,” translated into Esperanto — with unprecedented accuracy and efficiency. The technique harnesses the information-storage capacity of intertwined strands of DNA to encode and retrieve information in a way that is both durable and compact.
    The technique is described in a paper in this week’s Proceedings of the National Academy of Sciences.
    “The key breakthrough is an encoding algorithm that allows accurate retrieval of the information even when the DNA strands are partially damaged during storage,” said Ilya Finkelstein, an associate professor of molecular biosciences and one of the authors of the study.
    Humans are creating information at exponentially higher rates than we used to, contributing to the need for a way to store more information efficiently and in a way that will last a long time. Companies such as Google and Microsoft are among those exploring using DNA to store information.
    “We need a way to store this data so that it is available when and where it’s needed in a format that will be readable,” said Stephen Jones, a research scientist who collaborated on the project with Finkelstein; Bill Press, a professor jointly appointed in computer science and integrative biology; and Ph.D. alumnus John Hawkins. “This idea takes advantage of what biology has been doing for billions of years: storing lots of information in a very small space that lasts a long time. DNA doesn’t take up much space, it can be stored at room temperature, and it can last for hundreds of thousands of years.”
    DNA is about 5 million times more efficient than current storage methods. Put another way, a one milliliter droplet of DNA could store the same amount of information as two Walmarts full of data servers. And DNA doesn’t require permanent cooling and hard disks that are prone to mechanical failures.
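    A back-of-envelope calculation makes the density claim plausible. Assuming roughly 2 bits per base pair and about 650 g/mol per base pair (standard textbook figures, not numbers from this study), the theoretical ceiling works out to hundreds of exabytes per gram; practical systems achieve far less:

```python
# Back-of-envelope DNA storage density under illustrative assumptions:
# 2 bits per base pair, ~650 g/mol per base pair of double-stranded DNA.
AVOGADRO = 6.022e23          # base pairs per mole
GRAMS_PER_MOL_BP = 650.0     # approximate mass of one base pair
BITS_PER_BP = 2.0            # one of A/C/G/T per position

bits_per_gram = BITS_PER_BP * AVOGADRO / GRAMS_PER_MOL_BP
bytes_per_gram = bits_per_gram / 8
print(f"theoretical capacity: about {bytes_per_gram:.1e} bytes per gram")
```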
    There’s just one problem: DNA is prone to errors. And when a genetic code has errors, it’s a lot different from when computer code has errors. Errors in computer code tend to show up as blank spots in the code. Errors in DNA sequences show up as insertions or deletions. The problem there is that when something is deleted or added in DNA, the whole sequence shifts, with no blank spots to alert anyone.
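    The frame-shift problem is easy to demonstrate: a substitution corrupts a single position, while a deletion misaligns everything downstream. A minimal illustration:

```python
# Why deletions are worse than substitutions in a DNA read: a substitution
# corrupts one position, but a deletion shifts every later position.
reference = "ACGTACGTACGT"

substituted = reference[:4] + "G" + reference[5:]   # one base replaced
deleted     = reference[:4] + reference[5:]         # one base removed

mismatches_sub = sum(a != b for a, b in zip(reference, substituted))
mismatches_del = sum(a != b for a, b in zip(reference, deleted))
print(mismatches_sub, mismatches_del)  # prints "1 7": the deletion
                                       # frame-shifts the whole tail
```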
    Previously, when information was stored in DNA, the piece of information that needed to be saved, such as a paragraph from a novel, would be repeated 10 to 15 times. When the information was read, the repetitions would be compared to eliminate any insertions or deletions.
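    The repetition scheme amounts to a per-position majority vote across copies. A minimal sketch (with invented reads) shows why it handles substitutions well; a deletion, by shifting a copy’s frame, would poison every later vote:

```python
from collections import Counter

# Older repetition approach: store several copies, then take a per-position
# majority vote to cancel out substitution errors. The reads are invented.
copies = [
    "ACGTACGT",
    "ACGTACGA",   # substitution in the last position
    "ACCTACGT",   # substitution in the third position
]
decoded = "".join(Counter(column).most_common(1)[0][0]
                  for column in zip(*copies))
print(decoded)  # prints "ACGTACGT": both substitutions are outvoted
```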
    “We found a way to build the information more like a lattice,” Jones said. “Each piece of information reinforces other pieces of information. That way, it only needs to be read once.”
    The language the researchers developed also avoids sections of DNA that are prone to errors or that are difficult to read. The parameters of the language can also change with the type of information that is being stored. For instance, a dropped word in a novel is not as big a deal as a dropped zero in a tax return.
    To demonstrate information retrieval from degraded DNA, the team subjected its “Wizard of Oz” code to high temperatures and extreme humidity. Even though the DNA strands were damaged by these harsh conditions, all the information was still decoded successfully.
    “We tried to tackle as many problems with the process as we could at the same time,” said Hawkins, who recently was with UT’s Oden Institute for Computational Engineering and Sciences. “What we ended up with is pretty remarkable.”

    Story Source:
    Materials provided by University of Texas at Austin.

  • Social media-inspired models show winter warming hits fish stocks

    Mathematical modelling inspired by social media is identifying the significant impacts of warming seas on the world’s fisheries.
    University of Queensland School of Veterinary Science researcher Dr Nicholas Clark and colleagues from the University of Otago and James Cook University have assembled a holistic picture of climate change’s impacts on fish stocks in the Mediterranean Sea.
    “Usually, when modelling ecosystems to understand how nature is changing, we build models that only focus on the effects of the environment,” Dr Clark said.
    “But it’s just not accurate enough.
    “Newer models — commonly used in social media to document people’s social interactions — offer an exciting way to address this gap in scientific knowledge.
    “These innovative network models give us a more accurate picture of reality by incorporating biology, allowing us to ask how one species responds to both environmental change and to the presence of other species, including humans.”
    The team used this technique to analyse fish populations in the Mediterranean Sea, a fisheries-based biodiversity hotspot with its future under threat from rapidly warming seas.
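    The kind of network model described above can be caricatured as a regression in which one species’ presence depends both on temperature and on other species. The coefficients below are invented for illustration and are not fitted to the study’s data:

```python
import math

# Toy sketch of the network idea: the probability that a predator is present
# depends on winter temperature AND on whether its prey is present.
# All coefficients are invented for illustration.
def presence_probability(winter_temp_c, prey_present,
                         intercept=3.0, temp_coef=-0.3, prey_coef=2.0):
    logit = intercept + temp_coef * winter_temp_c + prey_coef * prey_present
    return 1.0 / (1.0 + math.exp(-logit))

cool         = presence_probability(winter_temp_c=12, prey_present=1)
warm         = presence_probability(winter_temp_c=18, prey_present=1)
warm_no_prey = presence_probability(winter_temp_c=18, prey_present=0)
print(f"{cool:.2f} {warm:.2f} {warm_no_prey:.2f}")
```

In this toy, winter warming reduces the predator’s presence directly, and losing its prey reduces it further, which is the sort of coupled environment-plus-biology effect the network models are built to capture.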
    “Experts from fisheries, ecology and the geographical sciences have compiled decades of research to describe the geographical ranges for more than 600 Mediterranean fish species,” Dr Clark said.
    “We put this information, along with data from the Intergovernmental Panel on Climate Change’s sophisticated climate models, into our network model.
    “We found that warming seas — particularly in winter — have widespread effects on fish biodiversity.”
    The University of Otago’s Associate Professor Ceridwen Fraser said winter warming was often overlooked when people thought about the impacts of climate change.
    “A great deal of research and media attention has been on the impacts of extreme summer temperatures on people and nature, but winters are getting warmer too,” Dr Fraser said.
    “Interestingly, coastal water temperatures are expected to increase at a faster rate in winter than in summer.
    “Even though winter warming might not reach the extreme high temperatures of summer heatwaves, this research shows that warmer winters could also lead to ecosystem disruption, in some cases more than summer warming will.
    “Our results suggest that winter warming will cause fish species to hang out together in different ways, and some species will disappear from some areas entirely.”
    The researchers hope the study will emphasise the need to understand and address climate change.
    “If fish communities are more strongly regulated by winter temperatures as our model suggests, this means that fish diversity may change more quickly than we previously thought,” Dr Clark said.
    “Catches for many bottom-dwelling and open-ocean fishery species in the Mediterranean Sea have been steadily declining, so any changes to fish communities may have widespread economic impacts.
    “For the sake of marine ecosystems and the people whose livelihoods depend on them, we need to gain a better understanding of how ocean warming will influence both species and economies.”

    Story Source:
    Materials provided by University of Queensland.

  • For next-generation semiconductors, 2D tops 3D

    Netflix, which provides an online streaming service around the world, has 42 million videos and about 160 million subscribers in total. It takes just a few seconds to download a 30-minute video clip, and you can watch a show within 15 minutes after it airs. As the distribution and transmission of high-quality content grow rapidly, it is critical to develop reliable and stable semiconductor memories.
    To this end, a POSTECH research team has developed a memory device using a two-dimensional layered-structure material, unlocking the possibility of commercializing a next-generation memory device that can be operated stably at low power.
    The POSTECH research team, consisting of Professor Jang-Sik Lee of the Department of Materials Science and Engineering, Professor Donghwa Lee of the Division of Advanced Materials Science, and PhD students Youngjun Park and Seong Hun Kim, succeeded in designing an optimal halide perovskite material (CsPb2Br5) that can be applied to a ReRAM (resistive random-access memory) device by applying first-principles calculations based on quantum mechanics. The findings were published in Advanced Science.
    The ideal next-generation memory device should process information at high speeds, store large amounts of information with non-volatile characteristics where the information does not disappear when power is off, and operate at low power for mobile devices.
    The recent discovery of the resistive switching property in halide perovskite materials has led to active research worldwide to apply them to ReRAM devices. However, the poor stability of halide perovskite materials when they are exposed to the atmosphere has been raised as an issue.
    The research team compared the relative stability and properties of halide perovskites with various structures using first-principles calculations. DFT calculations predicted that CsPb2Br5, a two-dimensional layered structure in the form of AB2X5, may have better stability than the three-dimensional structure of ABX3 or other structures (A3B2X7, A2BX4), and that this structure could show improved performance in memory devices.
    To verify this result, CsPb2Br5, an inorganic perovskite material with a two-dimensional layered structure, was synthesized and applied to memory devices for the first time. The memory devices with a three-dimensional structure of CsPbBr3 lost their memory characteristics at temperatures higher than 100 °C. However, the memory devices using a two-dimensional layered structure of CsPb2Br5 maintained their memory characteristics above 140 °C and could be operated at voltages below 1 V.
    Professor Jang-Sik Lee who led the research commented, “Using this materials-designing technique based on the first-principles screening and experimental verification, the development of memory devices can be accelerated by reducing the time spent on searching for new materials. By designing an optimal new material for memory devices through computer calculations and applying it to actually producing them, the material can be applied to memory devices of various electronic devices such as mobile devices that require low power consumption or servers that require reliable operation. This is expected to accelerate the commercialization of next-generation data storage devices.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH).

  • Electron cryo-microscopy: Using inexpensive technology to produce high-resolution images

    Biochemists at Martin Luther University Halle-Wittenberg (MLU) have used a standard electron cryo-microscope to achieve surprisingly good images that are on par with those taken by far more sophisticated equipment. They have succeeded in determining the structure of ferritin almost at the atomic level. Their results were published in the journal PLOS ONE.
    Electron cryo-microscopy has become increasingly important in recent years, especially in shedding light on protein structures. The developers of the new technology were awarded the Nobel Prize for Chemistry in 2017. The trick: the samples are flash frozen and then bombarded with electrons. In the case of traditional electron microscopy, all of the water is first extracted from the sample. This is necessary because the investigation takes place in a vacuum, which means water would evaporate immediately and make imaging impossible. However, because water molecules play such an important role in biomolecules, especially in proteins, they cannot be examined using traditional electron microscopy. Proteins are among the most important building blocks of cells and perform a variety of tasks. In-depth knowledge of their structure is necessary in order to understand how they work.
    The research group led by Dr Panagiotis Kastritis, who is a group leader at the Centre for Innovation Competence HALOmem and a junior professor at the Institute of Biochemistry and Biotechnology at MLU, acquired a state-of-the-art electron cryo-microscope in 2019. “There is no other microscope like it in Halle,” says Kastritis. The new “Thermo Fisher Glacios 200 kV,” financed by the Federal Ministry of Education and Research, is not the best and most expensive microscope of its kind. Nevertheless, Kastritis and his colleagues succeeded in determining the structure of the iron storage protein apoferritin down to 2.7 ångströms (Å), in other words, almost down to the individual atom. One ångström equals one-tenth of a nanometre. This puts the research group in a similar league to departments with far more expensive equipment. Apoferritin is often used as a reference protein to determine the performance levels of corresponding microscopes. Just recently, two research groups broke a new record with a resolution of about 1.2 Å. “Such values can only be achieved using very powerful instruments, which only a handful of research groups around the world have at their disposal. Our method is designed for microscopes found in many laboratories,” explains Kastritis.
    Electron cryo-microscopes are very complex devices. “Even tiny misalignments can render the images useless,” says Kastritis. It is important to programme them correctly and Halle has the technical expertise to do this. But the analysis that is conducted after the data has been collected is just as important. “The microscope produces several thousand images,” explains Kastritis. Image processing programmes are used to create a 3D structure of the molecule. In cooperation with Professor Milton T. Stubbs from the Institute of Biochemistry and Biotechnology at MLU, the researchers have developed a new method to create a high-resolution model of a protein. Stubbs’ research group uses X-ray crystallography, another technique for determining the structure of proteins, which requires the proteins to be crystallised. They were able to combine a modified form of an image analysis technique with the images taken with the electron cryo-microscope. This made charge states and individual water molecules visible.
    “It’s an attractive method,” says Kastritis. Instead of needing very expensive microscopes, a lot of computing capacity is required, which MLU has. Now, in addition to using X-ray crystallography, electron cryo-microscopy can be used to produce images of proteins — especially those that are difficult to crystallise. This enables collaboration, both inside and outside the university, on the structural analysis of samples with medical and biotechnological potential.

    Story Source:
    Materials provided by Martin-Luther-Universität Halle-Wittenberg.