More stories

  • Artificial intelligence aids gene activation discovery

    Scientists have long known that human genes spring into action through instructions delivered by the precise order of our DNA’s four types of individual links, or “bases,” coded A, C, G and T.
    Nearly 25% of our genes are widely known to be transcribed by sequences that resemble TATAAA, which is called the “TATA box.” How the other three-quarters are turned on, or promoted, has remained a mystery: the enormous number of possible DNA base sequences has kept the activation information shrouded.
    Now, with the help of artificial intelligence, researchers at the University of California San Diego have identified a DNA activation code that’s used at least as frequently as the TATA box in humans. Their discovery, which they termed the downstream core promoter region (DPR), could eventually be used to control gene activation in biotechnology and biomedical applications. The details are described September 9 in the journal Nature.
    “The identification of the DPR reveals a key step in the activation of about a quarter to a third of our genes,” said James T. Kadonaga, a distinguished professor in UC San Diego’s Division of Biological Sciences and the paper’s senior author. “The DPR has been an enigma — it’s been controversial whether or not it even exists in humans. Fortunately, we’ve been able to solve this puzzle by using machine learning.”
    In 1996, Kadonaga and his colleagues working in fruit flies identified a novel gene activation sequence, termed the DPE (which corresponds to a portion of the DPR), that enables genes to be turned on in the absence of the TATA box. Then, in 1997, they found a single DPE-like sequence in humans. However, since that time, deciphering the details and prevalence of the human DPE has been elusive. Most strikingly, there have been only two or three active DPE-like sequences found in the tens of thousands of human genes. To crack this case after more than 20 years, Kadonaga worked with lead author and post-doctoral scholar Long Vo ngoc, Cassidy Yunjing Huang, Jack Cassidy, a retired computer scientist who helped the team leverage the powerful tools of artificial intelligence, and Claudia Medrano.
    In what Kadonaga describes as “fairly serious computation” brought to bear on a biological problem, the researchers created a pool of 500,000 random DNA sequences and evaluated the DPR activity of each. From there, 200,000 sequences were used to create a machine learning model that could accurately predict DPR activity in human DNA.
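    As a rough sketch of what sequence-to-activity modeling of this kind can look like (an illustration only, not the authors’ actual pipeline), DNA sequences can be one-hot encoded and fed to a regression model; all data, sizes and model choices below are hypothetical.

```python
# Hypothetical sketch of sequence-to-activity modeling, not the study's actual code.
# Assumes a list of DNA sequences paired with measured DPR activity scores.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA sequence as a flat binary vector (4 bits per base)."""
    vec = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        vec[i, BASES.index(base)] = 1.0
    return vec.ravel()

# Toy stand-ins for the half-million measured sequences described in the article.
rng = np.random.default_rng(0)
sequences = ["".join(rng.choice(list(BASES), size=19)) for _ in range(2000)]
activity = rng.random(len(sequences))  # placeholder for measured DPR activity

X = np.array([one_hot(s) for s in sequences])
X_train, X_test, y_train, y_test = train_test_split(X, activity, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```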

    The results, as Kadonaga describes them, were “absurdly good.” So good, in fact, that they created a similar machine learning model as a new way to identify TATA box sequences. They evaluated the new models with thousands of test cases in which the TATA box and DPR results were already known and found that the predictive ability was “incredible,” according to Kadonaga.
    These results clearly revealed the existence of the DPR motif in human genes. Moreover, the frequency of occurrence of the DPR appears to be comparable to that of the TATA box. In addition, they observed an intriguing duality between the DPR and TATA. Genes that are activated with TATA box sequences lack DPR sequences, and vice versa.
    Kadonaga says finding the six bases in the TATA box sequence was straightforward. At 19 bases, cracking the code for DPR was much more challenging.
    “The DPR could not be found because it has no clearly apparent sequence pattern,” said Kadonaga. “There is hidden information that is encrypted in the DNA sequence that makes it an active DPR element. The machine learning model can decipher that code, but we humans cannot.”
    Going forward, the further use of artificial intelligence for analyzing DNA sequence patterns should increase researchers’ ability to understand as well as to control gene activation in human cells. This knowledge will likely be useful in biotechnology and in the biomedical sciences, said Kadonaga.
    “In the same manner that machine learning enabled us to identify the DPR, it is likely that related artificial intelligence approaches will be useful for studying other important DNA sequence motifs,” said Kadonaga. “A lot of things that are unexplained could now be explainable.”
    This study was supported by the National Institute of General Medical Sciences (NIGMS) at the National Institutes of Health.

  • How AI-controlled sensors could save lives in 'smart' hospitals and homes

    As many as 400,000 Americans die each year because of medical errors, but many of these deaths could be prevented by using electronic sensors and artificial intelligence to help medical professionals monitor and treat vulnerable patients in ways that improve outcomes while respecting privacy.
    “We have the ability to build technologies into the physical spaces where health care is delivered to help cut the rate of fatal errors that occur today due to the sheer volume of patients and the complexity of their care,” said Arnold Milstein, a professor of medicine and director of Stanford’s Clinical Excellence Research Center (CERC).
    Milstein, computer science professor Fei-Fei Li and graduate student Albert Haque are co-authors of a Nature paper that reviews the field of “ambient intelligence” in health care — an interdisciplinary effort to create smart hospital rooms equipped with AI systems that can do a range of things to improve outcomes. For example, sensors and AI can immediately alert clinicians and patient visitors when they fail to sanitize their hands before entering a hospital room. AI tools can also be built into smart homes, where technology could unobtrusively monitor the frail elderly for behavioral clues of impending health crises and prompt in-home caregivers, remotely located clinicians and patients themselves to make timely, life-saving interventions.
    Li, who is co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), said ambient technologies have many potential benefits, but they also raise legal and regulatory issues, as well as privacy concerns that must be identified and addressed in a public way to win the trust of patients and providers, as well as the various agencies and institutions that pay health care costs. “Technology to protect the health of medically fragile populations is inherently human-centered,” Li said. “Researchers must listen to all the stakeholders in order to create systems that supplement and complement the efforts of nurses, doctors and other caregivers, as well as patients themselves.”
    Li and Milstein co-direct the 8-year-old Stanford Partnership in AI-Assisted Care (PAC), one of a growing number of centers, including those at Johns Hopkins University and the University of Toronto, where technologists and clinicians have teamed up to develop ambient intelligence technologies to help health care providers manage patient volumes so huge — roughly 24 million Americans required an overnight hospital stay in 2018 — that even the tiniest margin of error can cost many lives.
    “We are in a foot race with the complexity of bedside care,” Milstein said. “By one recent count, clinicians in a hospital’s neonatal intensive care unit took 600 bedside actions, per patient, per day. Without technology assistance, perfect execution of this volume of complex actions is well beyond what is reasonable to expect of even the most conscientious clinical teams.”
    The Fix: Invisible light guided by AI?

    Haque, who compiled the 170 scientific papers cited in the Nature article, said the field is based largely on the convergence of two technological trends: the availability of infrared sensors that are inexpensive enough to build into high-risk care-giving environments, and the rise of machine learning systems as a way to use sensor input to train specialized AI applications in health care.
    The infrared technologies are of two types. The first is active infrared, such as the invisible light beams used by TV remote controls. But instead of simply beaming invisible light in one direction, like a TV remote, new active infrared systems use AI to compute how long it takes the invisible rays to bounce back to the source, like a light-based form of radar that maps the 3D outlines of a person or object.
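    The depth measurement itself is simple geometry: the sensor converts the round-trip travel time of the emitted light into a distance. A minimal sketch of that conversion, with illustrative numbers and not tied to any particular sensor:

```python
# Illustrative time-of-flight depth calculation (not specific to any sensor or product).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(round_trip_seconds):
    """Light travels to the object and back, so divide the path length by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A round trip of about 20 nanoseconds corresponds to an object roughly 3 meters away.
print(depth_from_round_trip(20e-9))  # ~3.0 meters
```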
    Such infrared depth sensors are already being used outside hospital rooms, for instance, to discern whether a person washed their hands before entering and, if not, issue an alert. In one Stanford experiment, a tablet computer hung near the door shows a solid green screen that transitions to red, or some other alert color that might be tested, should a hygiene failure occur. Researchers had considered using audible warnings until medical professionals advised otherwise. “Hospitals are already full of buzzes and beeps,” Milstein said. “Our human-centered design interviews with clinicians taught us that a visual cue would likely be more effective and less annoying.”
    These alert systems are being tested to see if they can reduce the number of ICU patients who get nosocomial infections — potentially deadly illnesses that patients contract because others in the hospital fail to fully adhere to infection prevention protocols.
    The second type of infrared technology is passive detection, of the sort that allows night vision goggles to create thermal images from the infrared rays generated by body heat. In a hospital setting, a thermal sensor above an ICU bed would enable the governing AI to detect twitching or writhing beneath the sheets and alert clinical team members to impending health crises without staff having to go constantly from room to room.
    So far, the researchers have avoided using high-definition video sensors, such as those in smartphones, as capturing video imagery could unnecessarily intrude on the privacy of clinicians and patients. “The silhouette images provided by infrared sensors may provide data that is sufficiently accurate to train AI algorithms for many clinically important applications,” Haque said.
    Constant monitoring by ambient intelligence systems in a home environment could also be used to detect clues of serious illness or potential accidents, and alert caregivers to make timely interventions. For instance, when frail seniors start moving more slowly or stop eating regularly, such behaviors can presage depression, a greater likelihood of a fall or the rapid onset of a dangerous health crisis. Researchers are developing activity recognition algorithms that can sift through infrared sensing data to detect changes in habitual behaviors, and help caregivers get a more holistic view of patient well-being.
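    One simple way to flag the kind of behavioral drift described above is to compare recent activity levels against a personal baseline. The sketch below is purely illustrative; the thresholds, data and function names are hypothetical, not the researchers’ algorithms.

```python
# Hypothetical sketch: flag when recent daily activity drops well below a personal baseline.
import statistics

def flag_behavior_change(daily_activity_minutes, recent_days=7, drop_fraction=0.6):
    """Return True if the recent average falls below drop_fraction of the long-term baseline."""
    baseline = statistics.mean(daily_activity_minutes[:-recent_days])
    recent = statistics.mean(daily_activity_minutes[-recent_days:])
    return recent < drop_fraction * baseline

# Example: a month of normal activity followed by a week of markedly slower days.
history = [120] * 30 + [60] * 7
print(flag_behavior_change(history))  # True -> prompt a caregiver to follow up
```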
    Privacy is of particular concern in homes, assisted living settings and nursing homes, but “the preliminary results we’re getting from hospitals and daily living spaces confirm that ambient sensing technologies can provide the data we need to curb medical errors,” Milstein said. “Our Nature review tells the field that we’re on the right track.”

  • New method prevents quantum computers from crashing

    Quantum information is fragile, which is why quantum computers must be able to correct errors. But what if whole qubits are lost? Researchers are now presenting a method that allows quantum computers to keep going even if they lose some qubits along the way.
    Qubits — the carriers of quantum information — are prone to errors induced by undesired environmental interactions. These errors accumulate during a quantum computation and correcting them is thus a key requirement for a reliable use of quantum computers.
    It is by now well known that quantum computers can withstand a certain amount of computational errors, such as bit flip or phase flip errors. However, in addition to computational errors, qubits might get lost altogether. Depending on the type of quantum computer, this can be due to actual loss of particles, such as atoms or ions, or due to quantum particles transitioning for instance to unwanted energy states, so that they are no longer recognized as a qubit. When a qubit gets lost, the information in the remaining qubits becomes scrambled and unprotected, rendering this process a potentially fatal type of error.
    Detect and correct loss in real time
    A team of physicists led by Rainer Blatt from the Department of Experimental Physics at the University of Innsbruck, in collaboration with theoretical physicists from Germany and Italy, has now developed and implemented advanced techniques that allow their trapped-ion quantum computer to adapt in real time to the loss of qubits and to maintain protection of the fragile stored quantum information. “In our trapped-ion quantum computer, ions hosting the qubits can be trapped for very long times, even days,” says Innsbruck physicist Roman Stricker. “However, our ions are much more complex than a simplified description as a two-level qubit captures. This offers great potential and additional flexibility in controlling our quantum computer, but unfortunately it also provides a possibility for quantum information to leak out of the qubit space due to imperfect operations or radiative decay.” Using an approach developed by Markus Müller’s theoretical quantum technology group at RWTH Aachen University and Forschungszentrum Jülich, in collaboration with Davide Vodola from the University of Bologna, the Innsbruck team has demonstrated that such leakage can be detected and corrected in real time. Müller emphasizes that “combining quantum error correction with correction of qubit loss and leakage is a necessary next step towards large-scale and robust quantum computing.”
    Widely applicable techniques
    The researchers had to develop two key techniques to protect their quantum computer from the loss of qubits. The first challenge was to detect the loss of a qubit in the first place: “Measuring the qubit directly was not an option as this would destroy the quantum information that is stored in it,” explains Philipp Schindler from the University of Innsbruck. “We managed to overcome this problem by developing a technique where we used an additional ion to probe whether the qubit in question was still there or not, without disturbing it,” explains Martin Ringbauer. The second challenge was to adapt the rest of the computation in real time in case the qubit was indeed lost. This adaptation is crucial to unscramble the quantum information after a loss and maintain protection of the remaining qubits. Thomas Monz, who led the Innsbruck team, emphasizes that “all the building blocks developed in this work are readily applicable to other quantum computer architectures and other leading quantum error correction protocols.”
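    The probe idea can be illustrated in the abstract: two controlled-NOT gates from the data qubit to a fresh ancilla, sandwiching bit flips on the data qubit, flip the ancilla exactly once whenever the data qubit is still in its computational subspace, regardless of the state it stores, and then return that state untouched. The Qiskit sketch below is a simplified illustration of this principle, not the Innsbruck team’s trapped-ion protocol.

```python
# Simplified illustration of an ancilla-based "is the qubit still there?" check.
# Not the trapped-ion protocol from the paper; assumes Qiskit is installed.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

probe = QuantumCircuit(2)          # qubit 0: data, qubit 1: ancilla
probe.ry(0.7, 0)                   # put the data qubit in an arbitrary superposition
probe.cx(0, 1)                     # ancilla flips if data is |1>
probe.x(0)
probe.cx(0, 1)                     # ancilla flips if data was |0>
probe.x(0)                         # restore the data qubit

state = Statevector.from_instruction(probe)
# If the data qubit is present, the ancilla ends in |1> with certainty while the
# data qubit's superposition is unchanged; if the ion were lost, the ancilla
# would stay in |0>, signalling the loss without measuring the data qubit itself.
print(state.probabilities([1]))    # -> [0. 1.]
```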
    The research was financed by the Austrian Science Fund FWF, the Austrian Research Promotion Agency FFG and the European Union, among others.

    Story Source:
    Materials provided by the University of Innsbruck.

  • New maps show how warm water may reach Thwaites Glacier’s icy underbelly

    New seafloor maps reveal the first clear view of a system of channels that may be helping to hasten the demise of West Antarctica’s vulnerable Thwaites Glacier. The channels are deeper and more complex than previously thought, and may be funneling warm ocean water all the way to the underside of the glacier, melting it from below, the researchers found.
    Scientists estimate that meltwater from Florida-sized Thwaites Glacier is currently responsible for about 4 percent of global sea level rise (SN: 1/7/20). A complete collapse of the glacier, which some researchers estimate could happen within the next few decades, could increase sea levels by about 65 centimeters. How and when that collapse might occur is the subject of a five-year international collaborative research effort.
    Glaciers like Thwaites are held back from sliding seaward both by buttressing ice shelves — tongues of floating ice that jut out into the sea — and by the shape of the seafloor itself, which can help pin the glacier’s ice in place (SN: 4/3/18). But in two new studies, published online September 9 in The Cryosphere, the researchers show how the relatively warm ocean waters may have a pathway straight to the glacier’s underbelly.
    [Image: Channels carved into the seafloor, extending several kilometers wide and hundreds of meters deep, may act as pathways (red line with yellow arrows in the 3-D illustration) to bring relatively warm ocean waters to the edges of vulnerable Thwaites Glacier, hastening its melting. Credit: International Thwaites Glacier Collaboration]
    From January to March 2019 researchers used a variety of airborne and ship-based methods — including radar, sonar and gravity measurements — to examine the seafloor around the glacier and two neighboring ice shelves. From those data, the team was able to estimate how the seafloor is shaped beneath the ice itself.
    These efforts revealed a rugged series of high ridges and deep troughs on the seafloor, varying between about 250 meters and 1,000 meters deep. In particular, one major channel, more than 800 meters deep, could be funneling warm water all the way from Pine Island Bay to the submerged edge of the glacier, the team found.

  • Lecturer takes laptops and smartphones away and musters student presence

    A Danish university lecturer experiments with banning screens in discussion lessons. In a new study, a UCPH researcher and her colleagues at Aarhus University analyzed the results, which include greater student presence, improved engagement and deeper learning.
    At a time when much of instruction is performed digitally and university lecture halls are often illuminated by a sea of laptops, it can be difficult to imagine that, until about 20 years ago, all instruction was recorded with pen and paper.
    Digital technology constitutes a significant presence in education, with many advantages — especially during the coronavirus pandemic, when a great number of students have been forced to work from home.
    But digital technology in the classroom is not without its drawbacks. Students’ lack of concentration and attention became too much for one Danish lecturer to bear.
    “The lecturer felt as if his students’ use of social media on their laptops and smartphones distracted them and prevented them from achieving deeper learning. Eventually, the frustration became so great that he decided to ban all screens in discussion lessons,” explains Katrine Lindvig, a postdoc at the University of Copenhagen’s Department of Science Education.
    Together with researchers Kim Jesper Herrmann and Jesper Aagaard of Aarhus University, she analyzed 100 university student evaluations of the lecturer’s screen-free lessons. Their findings resulted in a new study that had this to say about analog instruction:
    “Students felt compelled to be present — and liked it. When it suddenly became impossible to Google their way to an answer or more knowledge about a particular theorist, they needed to interact and, through shared reflection, develop as a group. It heightened their engagement and presence,” explains Katrine Lindvig.

    Without distraction, we engage in deeper learning
    What explains this deeper engagement and presence when our phones and computers are stashed away?
    According to Katrine Lindvig, the answer rests in the structure of our brains:
    “A great deal of research suggests that humans can’t really multitask. While we are capable of hopping from task to task, doing so usually results in accomplishing tasks more slowly. However, if we create a space where there’s only one thing — in this case, discussing cases and theories with fellow students — then we do what the brain is best at, and are rewarded by our brains for doing so,” she says.
    Furthermore, a more analog approach can lead to deeper learning, where one doesn’t just memorize things only to see them vanish immediately after an exam. According to Lindvig:
    “Learning, and especially deep learning, is about reflecting on what one has read and then comparing it to previously acquired knowledge. In this way, one can develop and think differently, as opposed to simply learning for the sake of passing an exam. When discussing texts with fellow students, one is exposed to a variety of perspectives that contribute to the achievement of deep learning.”

    We’re not going back to the Stone Age
    While there are numerous advantages to engaging in lessons where Facebook, Instagram and text messages don’t diminish concentration, there are also drawbacks.
    Several students weren’t so enthusiastic about handwritten note-taking, explains Katrine Lindvig.
    “They got tired of not being able to search through their notes afterwards and readily share notes with students who weren’t in attendance,” she says.
    Therefore, according to Lindvig, it is not a question of ‘to screen or not to screen’ — “we’re not going back to the Stone Age,” as she puts it. Instead, it’s about how to integrate screens with instruction in a useful way:
    “It’s about identifying what form best supports the content and type of instruction. In our case, screens were restricted during lessons where discussion was the goal. This makes sense, because there is no denying that conversation improves when people look into each other’s eyes rather than down at a screen,” Lindvig says.
    Speaking to the value of screens, she adds:
    “When it comes to lectures which are primarily one-way in nature, it can be perfectly fine for students to take notes on laptops, to help them feel better prepared for exams. We can also take advantage of students’ screens to increase interaction during larger lectures. It’s about matching tools with tasks. Just as a hammer works better than a hacksaw to pound in nails.”

  • Tool transforms world landmark photos into 4D experiences

    Using publicly available tourist photos of world landmarks such as the Trevi Fountain in Rome or Top of the Rock in New York City, Cornell University researchers have developed a method to create maneuverable 3D images that show changes in appearance over time.
    The method, which employs deep learning to ingest and synthesize tens of thousands of mostly untagged and undated photos, solves a problem that has eluded experts in computer vision for six decades.
    “It’s a new way of modeling scenes that not only allows you to move your head and see, say, the fountain from different viewpoints, but also gives you controls for changing the time,” said Noah Snavely, associate professor of computer science at Cornell Tech and senior author of “Crowdsampling the Plenoptic Function,” presented at the European Conference on Computer Vision, held virtually Aug. 23-28.
    “If you really went to the Trevi Fountain on your vacation, the way it would look would depend on what time you went — at night, it would be lit up by floodlights from the bottom. In the afternoon, it would be sunlit, unless you went on a cloudy day,” Snavely said. “We learned the whole range of appearances, based on time of day and weather, from these unorganized photo collections, such that you can explore the whole range and simultaneously move around the scene.”
    Representing a place in a photorealistic way is challenging for traditional computer vision, partly because of the sheer number of textures to be reproduced. “The real world is so diverse in its appearance and has different kinds of materials — shiny things, water, thin structures,” Snavely said.
    Another problem is the inconsistency of the available data. Describing how something looks from every possible viewpoint in space and time — known as the plenoptic function — would be a manageable task with hundreds of webcams affixed around a scene, recording data day and night. But since this isn’t practical, the researchers had to develop a way to compensate.

    “There may not be a photo taken at 4 p.m. from this exact viewpoint in the data set. So we have to learn from a photo taken at 9 p.m. at one location, and a photo taken at 4:03 from another location,” Snavely said. “And we don’t know the granularity of when these photos were taken. But using deep learning allows us to infer what the scene would have looked like at any given time and place.”
    The researchers introduced a new scene representation called Deep Multiplane Images to interpolate appearance in four dimensions — 3D, plus changes over time. Their method is inspired in part by a classic animation technique developed by the Walt Disney Company in the 1930s, which uses layers of transparencies to create a 3D effect without redrawing every aspect of a scene.
    “We use the same idea invented for creating 3D effects in 2D animation to create 3D effects in real-world scenes, to create this deep multilayer image by fitting it to all these disparate measurements from the tourists’ photos,” Snavely said. “It’s interesting that it kind of stems from this very old, classic technique used in animation.”
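    The layered idea can be illustrated with standard back-to-front “over” compositing of semi-transparent image planes, which is the basic operation behind multiplane representations. The sketch below uses random planes purely for illustration and is not the paper’s Deep Multiplane Image model.

```python
# Illustrative back-to-front "over" compositing of RGBA planes, the basic operation
# behind multiplane image representations (random data, for illustration only).
import numpy as np

def composite_planes(planes):
    """planes: list of (H, W, 4) RGBA arrays ordered from farthest to nearest."""
    height, width, _ = planes[0].shape
    out = np.zeros((height, width, 3))
    for plane in planes:                         # farthest plane first
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out  # nearer layers occlude farther ones
    return out

rng = np.random.default_rng(0)
layers = [rng.random((4, 4, 4)) for _ in range(8)]
print(composite_planes(layers).shape)  # (4, 4, 3)
```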
    In the study, they showed that this model could be trained to create a scene using around 50,000 publicly available images found on sites such as Flickr and Instagram. The method has implications for computer vision research, as well as virtual tourism — particularly useful at a time when few can travel in person.
    “You can get the sense of really being there,” Snavely said. “It works surprisingly well for a range of scenes.”
    First author of the paper is Cornell Tech doctoral student Zhengqi Li. Abe Davis, assistant professor of computer science in the Faculty of Computing and Information Science, and Cornell Tech doctoral student Wenqi Xian also contributed.
    The research was partly supported by philanthropist Eric Schmidt, former CEO of Google, and Wendy Schmidt, by recommendation of the Schmidt Futures Program.

    Story Source:
    Materials provided by Cornell University. Original written by Melanie Lefkowitz.

  • New perception metric balances reaction time, accuracy

    Researchers at Carnegie Mellon University have developed a new metric for evaluating how well self-driving cars respond to changing road conditions and traffic, making it possible for the first time to compare perception systems for both accuracy and reaction time.
    Mengtian Li, a Ph.D. student in CMU’s Robotics Institute, said academic researchers tend to develop sophisticated algorithms that can accurately identify hazards, but may demand a lot of computation time. Industry engineers, by contrast, tend to prefer simple, less accurate algorithms that are fast and require less computation, so the vehicle can respond to hazards more quickly.
    This tradeoff is a problem not only for self-driving cars, but also for any system that requires real-time perception of a dynamic world, such as autonomous drones and augmented reality systems. Yet until now, there’s been no systematic measure that balances accuracy and latency — the delay between when an event occurs and when the perception system recognizes that event. This lack of an appropriate metric has made it difficult to compare competing systems.
    The new metric, called streaming perception accuracy, was developed by Li, together with Deva Ramanan, associate professor in the Robotics Institute, and Yu-Xiong Wang, assistant professor at the University of Illinois at Urbana-Champaign. They presented it last month at the virtual European Conference on Computer Vision, where it received a best paper honorable mention award.
    Streaming perception accuracy is measured by comparing the output of the perception system at each moment with the ground-truth state of the world.
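    In spirit, the evaluation queries the system at every ground-truth timestamp and scores whichever prediction has actually finished by then, so slow systems are penalized for reporting stale results. A simplified sketch of that pairing, with a placeholder scoring function rather than the metric used in the paper:

```python
# Simplified sketch of streaming evaluation: at each ground-truth time, score the most
# recent prediction that had finished processing by then. The score function is a placeholder.
def streaming_accuracy(predictions, ground_truth, score):
    """predictions: list of (finish_time, output); ground_truth: list of (time, state)."""
    predictions = sorted(predictions)
    total = 0.0
    for gt_time, gt_state in ground_truth:
        latest = None
        for finish_time, output in predictions:
            if finish_time <= gt_time:
                latest = output             # most recent output available at gt_time
            else:
                break
        total += score(latest, gt_state)    # stale or missing outputs score poorly
    return total / len(ground_truth)

# Example with a toy exact-match score:
preds = [(0.05, "car_at_10m"), (0.25, "car_at_8m")]
truth = [(0.10, "car_at_10m"), (0.30, "car_at_8m")]
print(streaming_accuracy(preds, truth, lambda p, g: float(p == g)))  # 1.0
```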
    “By the time you’ve finished processing inputs from sensors, the world has already changed,” Li explained, noting that the car has traveled some distance while the processing occurs.
    “The ability to measure streaming perception offers a new perspective on existing perception systems,” Ramanan said. Systems that perform well according to classic measures of performance may perform quite poorly on streaming perception. Optimizing such systems using the newly introduced metric can make them far more reactive.
    One insight from the team’s research is that the solution isn’t necessarily for the perception system to run faster, but to occasionally take a well-timed pause. Skipping the processing of some frames prevents the system from falling farther and farther behind real-time events, Ramanan added.
    Another insight is to add forecasting methods to the perception processing. Just as a batter in baseball swings at where they think the ball is going to be — not where it is — a vehicle can anticipate some movements by other vehicles and pedestrians. The team’s streaming perception measurements showed that the extra computation necessary for making these forecasts doesn’t significantly harm accuracy or latency.
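    A minimal version of such forecasting is constant-velocity extrapolation of a tracked object’s position to the moment the processed result will actually be used. The snippet below is illustrative only, not the authors’ forecasting model.

```python
# Illustrative constant-velocity forecast: predict where a tracked object will be once
# the processing latency has elapsed (not the authors' forecasting method).
def forecast_position(position, velocity, latency_seconds):
    """Extrapolate each coordinate forward by the expected processing delay."""
    return tuple(p + v * latency_seconds for p, v in zip(position, velocity))

# A pedestrian at (2.0 m, 5.0 m) moving 1.5 m/s in x, observed with 200 ms latency.
print(forecast_position((2.0, 5.0), (1.5, 0.0), 0.2))  # (2.3, 5.0)
```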
    The CMU Argo AI Center for Autonomous Vehicle Research, directed by Ramanan, supported this research, as did the Defense Advanced Research Projects Agency.

    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Byron Spice.

  • Virtual tourism could offer new opportunities for travel industry, travelers

    A new proposal for virtual travel, using advanced mathematical techniques and combining livestream video with existing photos and videos of travel hotspots, could help revitalize an industry that has been devastated by the coronavirus pandemic, according to researchers at the Medical College of Georgia at Augusta University.
    In a new proposal published in Cell Patterns, Dr. Arni S.R. Srinivasa Rao, a mathematical modeler and director of the medical school’s Laboratory for Theory and Mathematical Modeling, and co-author Dr. Steven Krantz, a professor of mathematics and statistics at Washington University, suggest using data science to improve on existing television and internet-based tourism experiences. Their technique involves measuring and then digitizing the curvatures and angles of objects and the distances between them using drone footage, photos and videos, and could make virtual travel experiences more realistic for viewers and help revitalize the tourism industry, they say.
    They call this proposed technology LAPO, or Live Streaming with Actual Proportionality of Objects. LAPO employs both information geometry — the measures of an object’s curvatures, angles and area — and conformal mapping, which uses the measures of angles between the curves of an object and accounts for the distance between objects, to make images of people, places and things seem more real.
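    Conformal maps are angle-preserving transformations. A standard way to sketch one is to treat 2D coordinates as complex numbers and push them through an analytic function, as below; this is a generic illustration of the mathematical idea, not the proposed LAPO method.

```python
# Generic illustration of a conformal (angle-preserving) map, not the LAPO technique itself.
# Points are treated as complex numbers and pushed through an analytic function.
import numpy as np

def conformal_map(points_xy, f=np.exp):
    """Apply an analytic function to 2D points represented as x + iy."""
    z = points_xy[:, 0] + 1j * points_xy[:, 1]
    w = f(z)
    return np.column_stack([w.real, w.imag])

# A small grid of points around the origin, warped by w = exp(z);
# angles between intersecting grid lines are preserved locally.
grid = np.array([[x, y] for x in np.linspace(-1, 1, 5) for y in np.linspace(-1, 1, 5)])
print(conformal_map(grid)[:3])
```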
    “This is about having a new kind of technology that uses advanced mathematical techniques to turn digitized data, captured live at a tourist site, into more realistic photos and videos with more of a feel for the location than you would get watching a movie or documentary,” says corresponding author Rao. “When you go see the Statue of Liberty for instance, you stand on the bank of the Hudson River and look at it. When you watch a video of it, you can only see the object from one angle. When you measure and preserve multiple angles and digitize that in video form, you could visualize it from multiple angles. You would feel like you’re there while you’re sitting at home.”
    Their proposed combination of techniques is novel, Rao says. “Information geometry has seen wide applications in physics and economics, but the angle preservation of the captured footage is never applied,” he says.
    Rao and Krantz say the technology could help mediate some of the pandemic’s impact on the tourism industry and offer other advantages.
    Those include cost-effectiveness, because virtual tourism would be cheaper; health safety, because it can be done from the comfort of home; time savings, because travel time is eliminated; accessibility, because tourism hotspots that are not routinely accessible to seniors or people with physical disabilities would become reachable; safety and security, because risks like becoming a victim of crime while traveling are eliminated; and no need for special equipment — a standard home computer with a graphics card and internet access is all that’s needed to enjoy a “virtual trip.”
    “Virtual tourism (also) creates new employment opportunities for virtual tour guides, interpreters, drone pilots, videographers and photographers, as well as those building the new equipment for virtual tourism,” the authors write.
    “People would pay for these experiences like they pay airlines, hotels and tourist spots during regular travel,” Rao says. “The payments could go to each individual involved in creating the experience or to a company that creates the entire trip, for example.”
    Next steps include looking for investors and partners in the hospitality, tourism and technology industries, he says.
    The World Travel and Tourism Council, the trade group representing major global travel companies, projects a global loss of 75 million jobs and $2.1 trillion in revenue if the pandemic continues for several more months.
    Rao is a professor of health economics and modeling in the MCG Department of Population Health Sciences.