More stories

  • Researchers are working on tech so machines can thermally 'breathe'

    In the era of electric cars, machine learning and ultra-efficient vehicles for space travel, computers and hardware are working faster and harder than ever. But this increase in power comes with a trade-off: they get extremely hot.
    To counter this, University of Central Florida researchers are developing a way for large machines to “breathe” in and out cooling blasts of water to keep their systems from overheating.
    The findings are detailed in a recent study in the journal Physical Review Fluids.
    The process is much like how humans and some animals breathe in air to cool their bodies down, except in this case, the machines would be breathing in cool blasts of water, says Khan Rabbi, a doctoral candidate in UCF’s Department of Mechanical and Aerospace Engineering and lead author of the study.
    “Our technique used a pulsed water-jet to cool a hot titanium surface,” Rabbi says. “The more water we pumped out of the spray jet nozzles, the greater the amount of heat that transferred between the solid titanium surface and the water droplets, thus cooling the titanium. Fundamentally, an idea of optimum jet-pulsation needs to be generated to ensure maximum heat transfer performance.”
    “It is essentially like exhaling the heat from the surface,” he says.

    The water is emitted from small water-jet nozzles, each about 10 times the thickness of a human hair, that douse the hot surface of a large electronic system. The water is then collected in a storage chamber, where it can be pumped out and circulated again to repeat the cooling process. The storage chamber in their study held about 10 ounces of water.
    Using high-speed, infrared thermal imaging, the researchers were able to find the optimum amount of water for maximum cooling performance.
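    The trade-off the researchers describe (pumping more water carries away more heat, but only up to the point where the surface floods) can be framed with two textbook relations: Newton's law of cooling for the hot surface and a simple energy balance for the coolant stream. The sketch below is a back-of-the-envelope illustration with made-up numbers, not the study's model.

```python
# Back-of-the-envelope energy balance for jet cooling. Every number below is an
# illustrative assumption, not a value from the study.

C_P_WATER = 4186.0   # specific heat of liquid water, J/(kg*K)

def coolant_capacity(mass_flow_kg_s, t_in_c, t_out_c):
    """Heat the water stream can carry away: Q = m_dot * c_p * (T_out - T_in), in watts."""
    return mass_flow_kg_s * C_P_WATER * (t_out_c - t_in_c)

def convective_flow(h_w_m2k, area_m2, t_surface_c, t_water_c):
    """Heat flowing from the hot surface into the water (Newton's law of cooling), in watts."""
    return h_w_m2k * area_m2 * (t_surface_c - t_water_c)

# Example: a 1 cm^2 titanium patch at 120 C sprayed with 25 C water.
q_surface = convective_flow(h_w_m2k=50_000, area_m2=1e-4, t_surface_c=120.0, t_water_c=25.0)
q_coolant = coolant_capacity(mass_flow_kg_s=2e-4, t_in_c=25.0, t_out_c=60.0)
print(f"surface can shed ~{q_surface:.0f} W; this flow rate can absorb ~{q_coolant:.0f} W")
```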
    Rabbi says everyday applications for the system could include cooling large electronics, space vehicles, batteries in electric vehicles and gas turbines.
    Shawn Putnam, an associate professor in UCF’s Department of Mechanical and Aerospace Engineering and study co-author, says that this research is part of an effort to explore different techniques to efficiently cool hot devices and surfaces.
    “Most likely, the most versatile and efficient cooling technology will take advantage of several different cooling mechanisms, where pulsed jet cooling is expected to be one of these key contributors,” Putnam says.
    The researchers say there are multiple ways to cool hot hardware, but water-jet cooling is a preferred method because it can be adjusted to different directions, has good heat-transfer ability, and uses minimal amounts of water or liquid coolant.
    However, it has its drawbacks, namely over- or under-watering, which results in flooding or dry hotspots. The UCF method overcomes this problem by offering a system that is tunable to the hardware's needs, so that only the amount of water needed is applied, and in the right spot.
    The technology is needed because once device temperatures surpass a threshold value, for example 194 degrees Fahrenheit (90 degrees Celsius), the device’s performance decreases, Rabbi says.
    “For this reason, we need better cooling technologies in place to keep the device temperature well within the maximum temperature for optimum operation,” he says. “We believe this study will provide engineers, scientists and researchers a unique understanding to develop future generation liquid cooling systems.”

    Story Source:
    Materials provided by University of Central Florida. Original written by Robert H Wells. Note: Content may be edited for style and length.

  • Engineers create helical topological exciton-polaritons

    Our understanding of quantum physics has involved the creation of a wide range of “quasiparticles.” These notional constructs describe emergent phenomena that appear to have the properties of multiple other particles mixed together.
    An exciton, for example, is a quasiparticle that acts like an electron bound to an electron hole, or the empty space in a semiconducting material where an electron could be. A step further, an exciton-polariton combines the properties of an exciton with that of a photon, making it behave like a combination of matter and light. Achieving and actively controlling the right mixture of these properties — such as their mass, speed, direction of motion, and capability to strongly interact with one another — is the key to applying quantum phenomena to technology, like computers.
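    For readers who want the standard textbook picture (a general model, not a result of this paper), an exciton-polariton can be described as an eigenstate of a two-mode Hamiltonian that mixes a cavity photon of energy E_C with an exciton of energy E_X through a coupling strength g:

        H = \begin{pmatrix} E_C & g \\ g & E_X \end{pmatrix}, \qquad
        E_\pm = \frac{E_C + E_X}{2} \pm \sqrt{g^2 + \left(\frac{E_C - E_X}{2}\right)^2}

    The two eigenvalues E_± are the upper and lower polariton branches; each is part photon and part exciton, which is what lets these quasiparticles combine a very small effective mass with strong mutual interactions.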
    Now, researchers at the University of Pennsylvania’s School of Engineering and Applied Science are the first to create an even more exotic form of the exciton-polariton, one which has a defined quantum spin that is locked to its direction of motion. Depending on the direction of their spin, these helical topological exciton-polaritons move in opposite directions along the surface of an equally specialized type of topological insulator.
    In a study published in the journal Science, they have demonstrated this phenomenon at temperatures much warmer than the near-absolute-zero usually required to maintain this sort of quantum phenomenon. The ability to route these quasiparticles based on their spin in more user-friendly conditions, and an environment where they do not back-scatter, opens up the possibility of using them to transmit information or perform computations at unprecedented speeds.
    The study was led by Ritesh Agarwal, professor in the Department of Materials Science and Engineering, and Wenjing Liu, a postdoctoral researcher in his lab. They collaborated with researchers from Hunan University and George Washington University.
    The study also demonstrates a new type of topological insulator, a class of materials developed at Penn by Charles Kane and Eugene Mele that has a conductive surface and an insulating core. Topological insulators are prized for their ability to propagate electrons at their surface without scattering them, and the same idea can be extended to quasiparticles such as photons or polaritons.

    “Replacing electrons with photons would make for even faster computers and other technologies, but photons are very hard to modulate, route or switch. They cannot be transported around sharp turns and leak out of the waveguide,” Agarwal says. “This is where topological exciton-polaritons can be useful, but that means we need to make new types of topological insulators that can work with polaritons. If we could make this type of quantum material, we could route exciton-polaritons along certain channels without any scattering, as well as modulate or switch them via externally applied electric fields or by slight changes in temperature.”
    Agarwal’s group has created several types of photonic topological insulators in the past. While the first “chiral” polariton topological insulator was reported by a group in Europe, it worked at extremely low temperatures while requiring strong magnetic fields. The missing piece, and the distinction between “chiral” and “helical” in this case, was the ability to control the direction of flow via the quasiparticles’ spin.
    “To create this phase, we used an atomically thin semiconductor, tungsten disulfide, which forms very tightly bound excitons, and coupled it strongly to a properly designed photonic crystal via symmetry engineering. This induced nontrivial topology to the resulting polaritons,” Agarwal says. “At the interface between photonic crystals with different topology, we demonstrated the generation of helical topological polaritons that did not scatter at sharp corners or defects, as well as spin-dependent transport.”
    Agarwal and his colleagues conducted the study at 200 K, or roughly -100 F, without the need to apply any magnetic fields. While that seems cold, it is considerably warmer — and easier to achieve — than similar systems that operate at 4 K, or roughly -450 F.
    They are confident that further research and improved fabrication techniques for their semiconductor material will easily allow their design to operate at room temperature.
    “From an academic point of view, 200K is already almost room temperature, so small advances in material purity could easily push it to working in ambient conditions,” says Agarwal. “Atomically thin, ‘2D’ materials form very strong excitons that survive room temperature and beyond, so we think we need only small modifications to how our materials are assembled.”
    Agarwal’s group is now working on studying how topological polaritons interact with one another, which would bring them a step closer to using them in practical photonic devices.

  • Want to wait less at the bus stop? Beware real-time updates

    Smartphone apps that tell commuters when a bus will arrive at a stop don’t result in less time waiting than reliance on an official bus route schedule, a new study suggests.
    In fact, people who followed the suggestions of transit apps to time their arrival for when the bus pulls up to the stop were likely to miss the bus about three-fourths of the time, results showed.
    “Following what transit apps tell you about when to leave your home or office for the bus stop is a risky strategy,” said Luyu Liu, lead author of the study and a doctoral student in geography at The Ohio State University.
    “The app may tell you the bus will be five minutes late, but drivers can make up time after you start walking, and you end up missing the bus.”
    The best choice on average for bus commuters is to refer to the official schedule, or at least build in extra time when using the app’s suggestions, according to the researchers.
    Liu conducted the study with Harvey Miller, professor of geography and director of Ohio State’s Center for Urban and Regional Analysis. The study was published recently online in the journal Transportation Research Part A.

    “We’re not saying that real-time bus information is bad. It is reassuring to know that a bus is coming,” Miller said.
    “But if you’re going to use these apps, you have to know how to use them and realize it still won’t be better on average than following the schedule.”
    For the study, the researchers analyzed bus traffic for one year (May 2018 to May 2019) on one route of the Central Ohio Transit Authority (COTA), the public bus system in Columbus.
    Liu and Miller used the same real-time data that publicly available apps use to tell riders where buses are and when they are likely to reach individual stops. They compared the real-time data predictions of when buses would arrive at stops to when buses actually arrived for a popular bus route that traverses a large part of the city. The researchers then calculated the average time commuters would wait at a stop if they used different tactics to time their arrival, including just following the bus schedule.
    The absolute worst way to catch the bus was using what the researchers called the “greedy tactic” — the one used by many transit apps — in which commuters timed their arrival at the stop to when the app said the bus would pull up.

    The average wait using the greedy tactic was about 12½ minutes — about three times longer than simply following the schedule. That’s because riders using this tactic are at high risk of missing the bus, researchers found.
    The app tells riders when the bus will arrive based on where it is and how fast it is traveling when a commuter checks it, Miller said.
    But there are two problems with that method, he said. For one, drivers can make up lost time.
    “COTA wants to deliver on-time service, so bus operators understandably will try to get back on schedule,” Miller said.
    Plus, the apps don’t check the bus location often enough to get accurate real-time information.
    Slightly better was the “arbitrary tactic,” in which a person just walked up to a stop at random and caught the next bus that arrived. Commuters using this tactic would wait on average about 8½ minutes for the next bus.
    The second-best tactic was what the researchers called the “prudent tactic,” which was using the app to plan for arrival at the stop but adding some time as an “insurance buffer.” Here the average wait time was four minutes and 42 seconds, with a 10 percent risk of missing the bus.
    The prudent tactic waiting time was similar to the “schedule tactic,” which is just using the public schedule to determine when to arrive at the stop. These commuters waited an average of four minutes and 12 seconds, with only a 6 percent risk of missing the bus.
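    The qualitative ranking of these tactics can be reproduced with a toy Monte Carlo model. The sketch below is purely illustrative: the headway, delay model, prediction error and buffer are assumptions, not the COTA data analysed in the study.

```python
import random

# Toy Monte Carlo comparison of bus-stop arrival tactics. All parameters are
# illustrative assumptions, not the COTA data analysed in the study.

HEADWAY = 15.0   # scheduled minutes between buses
BUFFER = 3.0     # "insurance buffer" used by the prudent tactic, in minutes
random.seed(42)

def simulate(tactic, trials=100_000):
    total_wait, missed = 0.0, 0
    for _ in range(trials):
        bus0 = max(0.0, random.gauss(2.0, 2.0))             # first bus, usually a bit late
        bus1 = HEADWAY + max(0.0, random.gauss(2.0, 2.0))    # the bus after that
        predicted = bus0 + random.gauss(0.0, 2.0)            # app estimate drifts while you walk
        if tactic == "schedule":                             # show up at the scheduled time
            arrive = 0.0
        elif tactic == "greedy":                             # aim for the app's predicted arrival
            arrive = max(0.0, predicted)
        elif tactic == "prudent":                            # app prediction minus a buffer
            arrive = max(0.0, predicted - BUFFER)
        else:                                                # "arbitrary": show up whenever
            arrive = random.uniform(0.0, HEADWAY)
        if arrive <= bus0:
            total_wait += bus0 - arrive
        else:                                                # missed it, wait for the next one
            missed += 1
            total_wait += max(0.0, bus1 - arrive)
    return total_wait / trials, missed / trials

for tactic in ("greedy", "arbitrary", "prudent", "schedule"):
    wait, miss = simulate(tactic)
    print(f"{tactic:9s}  average wait {wait:4.1f} min   missed first bus {miss:5.1%}")
```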
    There is some variation on waiting time within these averages, especially with the two tactics that use real-time information from apps. One of the most important factors is the length of a commuter’s walk to the bus stop.
    Those who have longer walks take more risks when they rely on real-time information. If the app tells commuters their bus is running late, a long walk gives the bus more time to speed up to get back on schedule.
    Another important factor is the length of time between buses arriving at a stop. A longer time between buses means more risk if you miss a bus, and results in more time waiting.
    While on average the schedule tactic worked best, there were minor exceptions.
    Results showed that it was generally better for work commuters to follow the schedule tactic in the morning when going to work and follow the prudent tactic using an app in the afternoon.
    But one thing was certain, the researchers said: It was never a good idea to be greedy and try to achieve no waiting at the bus stop.
    Waiting time for buses is an important issue, Miller said. For one, long waiting times are one of the top reasons people cite for not using public transportation.
    Long waits at stops are also a safety concern, especially at night, as is rushing across busy streets because of a late bus. And for many people, missing buses can jeopardize their jobs or important health care appointments, Miller said.
    Miller said the apps themselves could be more helpful by taking advantage of the data used in this study to make better recommendations.
    “These apps shouldn’t be pushing risky strategies on users for eliminating waiting time. They should be more sophisticated,” he said.

  • New deep learning models: Fewer neurons, more intelligence

    Artificial intelligence has arrived in our everyday lives — from search engines to self-driving cars. This has to do with the enormous computing power that has become available in recent years. But new results from AI research now show that simpler, smaller neural networks can be used to solve certain tasks even better, more efficiently, and more reliably than ever before.
    An international research team from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system based on the brains of tiny animals, such as threadworms. This novel AI system can control a vehicle with just a few artificial neurons. The team says the system has decisive advantages over previous deep learning models: It copes much better with noisy input, and, because of its simplicity, its mode of operation can be explained in detail. It does not have to be regarded as a complex “black box,” but it can be understood by humans. This new deep learning model has now been published in the journal Nature Machine Intelligence.
    Learning from nature
    Similar to living brains, artificial neural networks consist of many individual cells. When a cell is active, it sends a signal to other cells. All signals received by the next cell are combined to decide whether this cell will become active as well. The way in which one cell influences the activity of the next determines the behavior of the system — these parameters are adjusted in an automatic learning process until the neural network can solve a specific task.
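    The mechanism described above, each cell weighting its incoming signals and deciding how strongly to fire, can be written in a few lines. This is a generic textbook neuron for illustration, not the specialised neuron model the team developed.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A generic artificial neuron: a weighted sum of incoming signals passed
    through a nonlinearity that decides how active the cell becomes."""
    activation = np.dot(weights, inputs) + bias
    return np.tanh(activation)   # output signal sent on to the next cells

# Three incoming signals; the weights and bias are the parameters adjusted
# automatically during learning.
print(neuron(np.array([0.2, -1.0, 0.5]), np.array([1.5, 0.3, -0.8]), bias=0.1))
```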
    “For years, we have been investigating what we can learn from nature to improve deep learning,” says Prof. Radu Grosu, head of the research group “Cyber-Physical Systems” at TU Wien. “The nematode C. elegans, for example, lives its life with an amazingly small number of neurons, and still shows interesting behavioral patterns. This is due to the efficient and harmonious way the nematode’s nervous system processes information.”
    “Nature shows us that there is still lots of room for improvement,” says Prof. Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “Therefore, our goal was to massively reduce complexity and enhance interpretability of neural network models.”
    “Inspired by nature, we developed new mathematical models of neurons and synapses,” says Prof. Thomas Henzinger, president of IST Austria.

    “The processing of the signals within the individual cells follows different mathematical principles than previous deep learning models,” says Dr. Ramin Hasani, postdoctoral associate at the Institute of Computer Engineering, TU Wien and MIT CSAIL. “Also, our networks are highly sparse — this means that not every cell is connected to every other cell. This also makes the network simpler.”
    Autonomous Lane Keeping
    To test the new ideas, the team chose a particularly important test task: self-driving cars staying in their lane. The neural network receives camera images of the road as input and must automatically decide whether to steer to the right or left.
    “Today, deep learning models with many millions of parameters are often used for learning complex tasks such as autonomous driving,” says Mathias Lechner, TU Wien alumnus and PhD student at IST Austria. “However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters.”
    Alexander Amini, a PhD student at MIT CSAIL, explains that the new system consists of two parts: The camera input is first processed by a so-called convolutional neural network, which only perceives the visual data to extract structural features from incoming pixels. This network decides which parts of the camera image are interesting and important, and then passes signals to the crucial part of the network — a “control system” that then steers the vehicle.
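    As a rough schematic of this two-part layout, here is how the perception-plus-control split might look in standard tf.keras. The layer sizes and the plain dense "control" head are placeholders of my own, not the published architecture; the real control part is a far more sophisticated 19-neuron neural circuit policy (see the repository linked further below).

```python
import tensorflow as tf

# Schematic only: a convolutional "perception" network that compresses camera
# frames into a compact feature vector, followed by a small "control" head that
# turns those features into a steering command. Sizes are illustrative.

perception = tf.keras.Sequential([
    tf.keras.Input(shape=(66, 200, 3)),                       # one camera frame
    tf.keras.layers.Conv2D(24, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(36, 5, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(48, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),              # compact feature vector
])

control = tf.keras.Sequential([                                 # stand-in for the tiny control network
    tf.keras.layers.Dense(19, activation="tanh"),
    tf.keras.layers.Dense(1, activation="tanh"),                # steering command in [-1, 1]
])

model = tf.keras.Sequential([perception, control])
model.compile(optimizer="adam", loss="mse")   # imitation learning: match recorded human steering
model.summary()
```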

    Both subsystems are stacked together and are trained simultaneously. Many hours of traffic videos of human driving in the greater Boston area were collected and fed into the network, together with information on how to steer the car in any given situation, until the system learned to automatically connect images with the appropriate steering direction and could independently handle new situations.
    The control part of the system (called neural circuit policy, or NCP), which translates the data from the perception module into a steering command, only consists of 19 neurons. Mathias Lechner explains that NCPs are up to 3 orders of magnitude smaller than what would have been possible with previous state-of-the-art models.
    Causality and Interpretability
    The new deep learning model was tested on a real autonomous vehicle. “Our model allows us to investigate what the network focuses its attention on while driving. Our networks focus on very specific parts of the camera picture: The curbside and the horizon. This behavior is highly desirable, and it is unique among artificial intelligence systems,” says Ramin Hasani. “Moreover, we saw that the role of every single cell at any driving decision can be identified. We can understand the function of individual cells and their behavior. Achieving this degree of interpretability is impossible for larger deep learning models.”
    Robustness
    “To test how robust NCPs are compared to previous deep models, we perturbed the input images and evaluated how well the agents can deal with the noise,” says Mathias Lechner. “While this became an insurmountable problem for other deep neural networks, our NCPs demonstrated strong resistance to input artifacts. This attribute is a direct consequence of the novel neural model and the architecture.”
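    A generic way to probe this kind of input robustness is to add noise of increasing strength to the evaluation frames and measure how much the predicted steering drifts. The noise model and metric below are illustrative choices of mine, not the perturbations used in the paper.

```python
import numpy as np

def steering_drift(model, frames, noise_levels=(0.0, 0.05, 0.1, 0.2)):
    """Compare a Keras-style model's predictions on clean vs. noise-perturbed
    camera frames (pixel values assumed scaled to [0, 1]). A robust controller
    should show little drift as the noise grows."""
    clean = model.predict(frames, verbose=0)
    for sigma in noise_levels:
        noisy = np.clip(frames + np.random.normal(0.0, sigma, frames.shape), 0.0, 1.0)
        drift = np.mean(np.abs(model.predict(noisy, verbose=0) - clean))
        print(f"noise sigma={sigma:.2f}  mean steering drift={drift:.4f}")

# e.g. steering_drift(model, validation_frames) with the sketch model above
```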
    “Interpretability and robustness are the two major advantages of our new model,” says Ramin Hasani. “But there is more: Using our new methods, we can also reduce training time and the possibility to implement AI in relatively simple systems. Our NCPs enable imitation learning in a wide range of possible applications, from automated work in warehouses to robot locomotion. The new findings open up important new perspectives for the AI community: The principles of computation in biological nervous systems can become a great resource for creating high-performance interpretable AI — as an alternative to the black-box machine learning systems we have used so far.”
    Code Repository: https://github.com/mlech26l/keras-ncp
    Video: https://ist.ac.at/en/news/new-deep-learning-models/
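    The repository linked above ships a Python package, kerasncp, for building these sparse wirings. The sketch below follows my reading of its README; the parameter names should be verified against the repository's current documentation, and the neuron counts are illustrative rather than the paper's 19-neuron configuration.

```python
import tensorflow as tf
from kerasncp import wirings
from kerasncp.tf import LTCCell

# Sparse NCP wiring: sensory -> inter -> command -> motor neurons, with only a
# handful of synapses per neuron rather than all-to-all connections.
wiring = wirings.NCP(
    inter_neurons=12,
    command_neurons=8,
    motor_neurons=1,              # one output, e.g. a steering command
    sensory_fanout=4,
    inter_fanout=4,
    recurrent_command_synapses=4,
    motor_fanin=6,
)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 32)),                     # sequence of 32-d feature vectors
    tf.keras.layers.RNN(LTCCell(wiring), return_sequences=True),
])
model.compile(optimizer="adam", loss="mse")
```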

  • Software spots and fixes hang bugs in seconds, rather than weeks

    Hang bugs — when software gets stuck, but doesn’t crash — can frustrate both users and programmers, taking weeks for companies to identify and fix. Now researchers from North Carolina State University have developed software that can spot and fix the problems in seconds.
    “Many of us have experience with hang bugs — think of a time when you were on a website and the wheel just kept spinning and spinning,” says Helen Gu, co-author of a paper on the work and a professor of computer science at NC State. “Because these bugs don’t crash the program, they’re hard to detect. But they can frustrate or drive away customers and hurt a company’s bottom line.”
    With that in mind, Gu and her collaborators developed an automated program, called HangFix, that can detect hang bugs, diagnose the relevant problem, and apply a patch that corrects the root cause of the error. Video of Gu discussing the program can be found here.
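    The article does not describe how HangFix itself detects or repairs hangs. As a purely generic illustration of the detection half of the problem, one common approach is to run a suspect call under a watchdog and flag it when it exceeds a time budget; the sketch below shows that generic idea only, not HangFix's technique.

```python
import multiprocessing as mp

def call_with_watchdog(fn, args=(), timeout_s=5.0):
    """Run fn(*args) in a worker process; flag a likely hang if it does not
    finish within timeout_s and terminate the stuck worker. Generic hang
    *detection* only -- HangFix additionally diagnoses and patches the root cause."""
    worker = mp.Process(target=fn, args=args)
    worker.start()
    worker.join(timeout_s)
    if worker.is_alive():
        worker.terminate()
        worker.join()
        print(f"possible hang bug: {getattr(fn, '__name__', fn)} exceeded {timeout_s}s")
        return False
    return True

if __name__ == "__main__":
    import time
    call_with_watchdog(time.sleep, args=(60,), timeout_s=2.0)   # simulated hang
```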
    The researchers tested a prototype of HangFix against 42 real-world hang bugs in 10 commonly used cloud server applications. The bugs were drawn from a database of hang bugs that programmers discovered affecting various websites. HangFix fixed 40 of the bugs in seconds.
    “The remaining two bugs were identified and partially fixed, but required additional input from programmers who had relevant domain knowledge of the application,” Gu says.
    For comparison, it took weeks or months to detect, diagnose and fix those hang bugs when they were first discovered.
    “We’re optimistic that this tool will make hang bugs less common — and websites less frustrating for many users,” Gu says. “We are working to integrate HangFix into InsightFinder.” InsightFinder is the AI-based IT operations and analytics startup founded by Gu.
    The paper, “HangFix: Automatically Fixing Software Hang Bugs for Production Cloud Systems,” is being presented at the ACM Symposium on Cloud Computing (SoCC’20), being held online Oct. 19-21. The paper was co-authored by Jingzhu He, a Ph.D. student at NC State who is nearing graduation; Ting Dai, a Ph.D. graduate of NC State who is now at IBM Research; and Guoliang Jin, an assistant professor of computer science at NC State.
    The work was done with support from the National Science Foundation under grants 1513942 and 1149445.
    HangFix is the latest in a long line of tools Gu’s team has developed to address cloud computing challenges. Her 2011 paper, “CloudScale: Elastic Resource Scaling for Multi-tenant Cloud Systems,” was selected as the winner of the 2020 SoCC 10-Year Award at this year’s conference.

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • Using robotic assistance to make colonoscopy kinder and easier

    Scientists have made a breakthrough in their work to develop semi-autonomous colonoscopy, using a robot to guide a medical device into the body.
    The milestone brings closer the prospect of an intelligent robotic system being able to guide instruments to precise locations in the body to take biopsies or allow internal tissues to be examined.
    A doctor or nurse would still be on hand to make clinical decisions but the demanding task of manipulating the device is offloaded to a robotic system.
    The latest findings — ‘Enabling the future of colonoscopy with intelligent and autonomous magnetic manipulation’ — are the culmination of 12 years of research by an international team of scientists led by the University of Leeds.
    The research is published today (Monday, 12 October) in the scientific journal Nature Machine Intelligence. 
    Patient trials using the system could begin next year or in early 2022.

    Pietro Valdastri, Professor of Robotics and Autonomous Systems at Leeds, is supervising the research. He said: “Colonoscopy gives doctors a window into the world hidden deep inside the human body and it provides a vital role in the screening of diseases such as colorectal cancer. But the technology has remained relatively unchanged for decades.
    “What we have developed is a system that is easier for doctors or nurses to operate and is less painful for patients. It marks an important step in the move to make colonoscopy much more widely available — essential if colorectal cancer is to be identified early.”
    Because the system is easier to use, the scientists hope this can increase the number of providers who can perform the procedure and allow for greater patient access to colonoscopy.
    A colonoscopy is a procedure to examine the rectum and colon. Conventional colonoscopy is carried out using a semi-flexible tube which is inserted into the anus, a process some patients find so painful they require an anaesthetic.
    Magnetic flexible colonoscope
    The research team has developed a smaller, capsule-shaped device which is tethered to a narrow cable and is inserted into the anus and then guided into place — not by the doctor or nurse pushing the colonoscope but by a magnet on a robotic arm positioned over the patient.

    The robotic arm moves around the patient as it manoeuvres the capsule. The system is based on the principle that magnetic forces attract and repel.
    The magnet on the outside of the patient interacts with tiny magnets in the capsule inside the body, navigating it through the colon. The researchers say it will be less painful than having a conventional colonoscopy.
    Guiding the robotic arm can be done manually but it is a technique that is difficult to master. In response, the researchers have developed different levels of robotic assistance. This latest research evaluated how effective the different levels of robotic assistance were in aiding non-specialist staff to carry out the procedure.
    Levels of robotic assistance
    Direct robot control. This is where the operator has direct control of the robot via a joystick. In this case, there is no assistance.
    Intelligent endoscope teleoperation. The operator focuses on where they want the capsule to be located in the colon, leaving the robotic system to calculate the movements of the robotic arm necessary to get the capsule into place (a simplified sketch of this idea appears after this list).
    Semi-autonomous navigation. The robotic system autonomously navigates the capsule through the colon, using computer vision — although this can be overridden by the operator.
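    As a toy illustration of the teleoperation idea, where the operator specifies only where the capsule should go and software works out how to move the external magnet, here is a simple proportional-control step. The real system solves a much harder magnetic-manipulation problem; nothing below is taken from the paper, and all values are made up.

```python
import numpy as np

def teleoperation_step(capsule_pos, target_pos, magnet_pos, gain=0.5, max_step=0.005):
    """One toy control step: nudge the external magnet so the capsule's position
    error shrinks. Positions are 3-D coordinates in metres; the gain and the
    per-step movement limit are arbitrary illustrative values."""
    error = np.asarray(target_pos, dtype=float) - np.asarray(capsule_pos, dtype=float)
    step = gain * error                    # move the magnet in the direction of the error
    norm = np.linalg.norm(step)
    if norm > max_step:                    # respect the robot arm's speed limit
        step = step * (max_step / norm)
    return np.asarray(magnet_pos, dtype=float) + step   # new magnet position for the arm

# The operator only says "put the capsule here"; the loop does the rest.
new_magnet = teleoperation_step(capsule_pos=[0.00, 0.00, 0.00],
                                target_pos=[0.02, 0.00, 0.00],
                                magnet_pos=[0.00, 0.00, 0.15])
print(new_magnet)
```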
    During a laboratory simulation, 10 non-expert staff were asked to get the capsule to a point within the colon within 20 minutes. They did that five times, using the three different levels of assistance.
    Using direct robot control, the participants had a 58% success rate. That increased to 96% using intelligent endoscope teleoperation — and 100% using semi-autonomous navigation.
    In the next stage of the experiment, two participants were asked to navigate a conventional colonoscope into the colon of two anaesthetised pigs — and then to repeat the task with the magnet-controlled robotic system using the different levels of assistance. A vet was in attendance to ensure the animals were not harmed.
    The participants were scored on the NASA Task Load Index, a measure of how taxing a task was, both physically and mentally.
    The NASA Task Load Index revealed that they found it easier to operate the colonoscope with robotic assistance. Frustration was a major factor when participants operated the conventional colonoscope and when they had direct control of the robot.
    James Martin, a PhD researcher from the University of Leeds who co-led the study, said: “Operating the robotic arm is challenging. It is not very intuitive and that has put a brake on the development of magnetic flexible colonoscopes.
    “But we have demonstrated for the first time that it is possible to offload that function to the robotic system, leaving the operator to think about the clinical task they are undertaking — and it is making a measurable difference in human performance.”
    The techniques developed to conduct colonoscopy examinations could be applied to other endoscopic devices, such as those used to inspect the upper digestive tract or lungs.
    Dr Bruno Scaglioni, a Postdoctoral Research Fellow at Leeds and co-leader of the study, added: “Robot-assisted colonoscopy has the potential to revolutionize the way the procedure is carried out. It means people conducting the examination do not need to be experts in manipulating the device.
    “That will hopefully make the technique more widely available, where it could be offered in clinics and health centres rather than hospitals.”

  • Liquid metals come to the rescue of semiconductors

    Moore’s law is the empirical observation that the number of transistors in integrated circuits (ICs) doubles roughly every two years. However, Moore’s law has started to fail: transistors are now so small that current silicon-based technologies cannot shrink them further.
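    Written as a formula, the standard statement of the law is

        N(t) = N_0 \cdot 2^{(t - t_0)/T}, \qquad T \approx 2\ \text{years},

    where N(t) is the transistor count in year t and N_0 is the count at a reference year t_0.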
    One possibility for overcoming this limit is to resort to two-dimensional semiconductors. These materials are so thin that the free charge carriers (the electrons and holes that carry information in transistors) propagate along an ultra-thin plane. This confinement can potentially make the semiconductor very easy to switch. It also creates directional pathways along which the charge carriers can move without scattering, leading to vanishingly small transistor resistance. In theory, two-dimensional materials could therefore yield transistors that do not waste energy during on/off switching: they could switch very fast and turn off completely in their non-operational states. Sounds ideal, but life is not ideal! In reality, many technological barriers must still be overcome to create such perfect ultra-thin semiconductors. One barrier with current technologies is that the deposited ultra-thin films are full of grain boundaries, which bounce the charge carriers back and so increase resistive losses.
    One of the most exciting ultra-thin semiconductors is molybdenum disulphide (MoS2), which has been investigated for its electronic properties for the past two decades. However, obtaining very large-scale two-dimensional MoS2 without any grain boundaries has proven to be a real challenge. With current large-scale deposition technologies, the grain-boundary-free MoS2 that is essential for making ICs has not yet been achieved with acceptable maturity. Now, researchers at the School of Chemical Engineering, University of New South Wales (UNSW) have developed a deposition approach that eliminates such grain boundaries.
    “This unique capability was achieved with the help of gallium metal in its liquid state. Gallium is an amazing metal with a low melting point of only 29.8 °C. It means that at a normal office temperature it is solid, while it turns into a liquid when placed in the palm of someone’s hand. It is a melted metal, so its surface is atomically smooth. It is also a conventional metal, which means that its surface provides a large number of free electrons for facilitating chemical reactions,” said Ms Yifang Wang, the first author of the paper.
    “By bringing the sources of molybdenum and sulphur near the surface of gallium liquid metal, we were able to realize chemical reactions that form the molybdenum sulphur bonds to establish the desired MoS2. The formed two-dimensional material is templated onto an atomically smooth surface of gallium, so it is naturally nucleated and grain boundary free. This means that by a second annealing step, we were able to obtain very large area MoS2 with no grain boundaries. This is a very important step for scaling up this fascinating ultra-smooth semiconductor,” said Prof Kourosh Kalantar-Zadeh, the lead author of the work.
    The researchers at UNSW are now planning to extend their method to other two-dimensional semiconductors and dielectric materials, in order to create a range of materials that can be used as different parts of transistors.

    Story Source:
    Materials provided by ARC Centre of Excellence in Future Low-Energy Electronics Technologies. Note: Content may be edited for style and length.

  • New virtual reality software allows scientists to 'walk' inside cells

    Virtual reality software which allows researchers to ‘walk’ inside and analyse individual cells could be used to understand fundamental problems in biology and develop new treatments for disease.
    The software, called vLUME, was created by scientists at the University of Cambridge and 3D image analysis software company Lume VR Ltd. It allows super-resolution microscopy data to be visualised and analysed in virtual reality, and can be used to study everything from individual proteins to entire cells. Details are published in the journal Nature Methods.
    Super-resolution microscopy, which was awarded the Nobel Prize for Chemistry in 2014, makes it possible to obtain images at the nanoscale by using clever tricks of physics to get around the limits imposed by light diffraction. This has allowed researchers to observe molecular processes as they happen. However, a problem has been the lack of ways to visualise and analyse this data in three dimensions.
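    The limit referred to here is the textbook Abbe diffraction bound (a general result, not specific to this work):

        d = \frac{\lambda}{2\,\mathrm{NA}},

    so visible light with wavelength λ ≈ 500 nm focused through a high-numerical-aperture objective (NA ≈ 1.4) cannot ordinarily resolve features much finer than about 180 nm. Super-resolution methods sidestep this bound by tricks such as localising individual fluorophores one at a time.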
    “Biology occurs in 3D, but up until now it has been difficult to interact with the data on a 2D computer screen in an intuitive and immersive way,” said Dr Steven F. Lee from Cambridge’s Department of Chemistry, who led the research. “It wasn’t until we started seeing our data in virtual reality that everything clicked into place.”
    The vLUME project started when Lee and his group met with the Lume VR founders at a public engagement event at the Science Museum in London. While Lee’s group had expertise in super-resolution microscopy, the team from Lume specialised in spatial computing and data analysis, and together they were able to develop vLUME into a powerful new tool for exploring complex datasets in virtual reality.
    “vLUME is revolutionary imaging software that brings humans into the nanoscale,” said Alexandre Kitching, CEO of Lume. “It allows scientists to visualise, question and interact with 3D biological data, in real time all within a virtual reality environment, to find answers to biological questions faster. It’s a new tool for new discoveries.”
    Viewing data in this way can stimulate new initiatives and ideas. For example, Anoushka Handa — a PhD student from Lee’s group — used the software to image an immune cell taken from her own blood, and then stood inside her own cell in virtual reality. “It’s incredible — it gives you an entirely different perspective on your work,” she said.
    The software allows multiple datasets with millions of data points to be loaded in and finds patterns in the complex data using in-built clustering algorithms. These findings can then be shared with collaborators worldwide using image and video features in the software.
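    vLUME's in-built clustering algorithms are not specified in this article. As an illustration of the kind of analysis meant, the sketch below runs a common density-based clustering pass over a synthetic 3-D localisation point cloud with scikit-learn; the data and parameters are made up.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Illustrative only: cluster a synthetic 3-D point cloud of the kind produced by
# single-molecule localisation microscopy. DBSCAN is simply a common density-based
# choice, not necessarily the algorithm vLUME uses.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=(0, 0, 0), scale=20, size=(500, 3))     # localisations, in nm
blob_b = rng.normal(loc=(300, 0, 0), scale=20, size=(500, 3))
noise = rng.uniform(-200, 500, size=(100, 3))                    # scattered background points
points = np.vstack([blob_a, blob_b, noise])

labels = DBSCAN(eps=30, min_samples=10).fit_predict(points)      # -1 marks unclustered noise
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} clusters found; {np.sum(labels == -1)} points left as noise")
```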
    “Data generated from super-resolution microscopy is extremely complex,” said Kitching. “For scientists, running analysis on this data can be very time consuming. With vLUME, we have managed to vastly reduce that wait time allowing for more rapid testing and analysis.”
    The team are mostly using vLUME with biological datasets, such as neurons, immune cells or cancer cells. For example, Lee’s group has been studying how antigen cells trigger an immune response in the body. “Through segmenting and viewing the data in vLUME, we’ve quickly been able to rule out certain hypotheses and propose new ones,” said Lee. “This software allows researchers to explore, analyse, segment and share their data in new ways. All you need is a VR headset.”

    Story Source:
    Materials provided by University of Cambridge. The original story is licensed under a Creative Commons License. Note: Content may be edited for style and length.