More stories

  • MonoEye: A human motion capture system using a single wearable camera

    Researchers at Tokyo Institute of Technology (Tokyo Tech) and Carnegie Mellon University have together developed a new human motion capture system that consists of a single ultra-wide fisheye camera mounted on the user’s chest. The simplicity of their system could be conducive to a wide range of applications in the sports, medical and entertainment fields.
    Computer vision-based technologies are advancing rapidly owing to recent developments in integrating deep learning. In particular, human motion capture is a highly active research area, driving advances in robotics, computer-generated animation and sports science, for example.
    Conventional motion capture systems in specially equipped studios typically rely on having several synchronized cameras attached to the ceiling and walls that capture movements by a person wearing a body suit fitted with numerous sensors. Such systems are often very expensive and limited in terms of the space and environment in which the wearer can move.
    Now, a team of researchers led by Hideki Koike at Tokyo Tech presents a new motion capture system that consists of a single ultra-wide fisheye camera mounted on the user’s chest. Their design not only overcomes the space constraints of existing systems but is also cost-effective.
    Named MonoEye, the system can capture the user’s body motion as well as the user’s perspective, or ‘viewport’. “Our ultra-wide fisheye lens has a 280-degree field-of-view and it can capture the user’s limbs, face, and the surrounding environment,” the researchers say.
    To achieve robust multimodal motion capture, the system has been designed with three deep neural networks capable of estimating 3D body pose, head pose and camera pose in real time.
    Already, the researchers have trained these neural networks with an extensive synthetic dataset consisting of 680,000 renderings of people with a range of body shapes, clothing, actions, background and lighting conditions, as well as 16,000 frames of photo-realistic images.
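    The article does not give the networks’ exact designs, but the overall shape of such a system can be sketched. The toy PyTorch model below is an illustration only: it assumes a single shared backbone with three regression heads, whereas MonoEye itself uses three separate networks, and the joint count, layer sizes and pose parameterizations are all hypothetical.
    ```python
    # Toy sketch of a multi-head pose estimator (hypothetical; MonoEye itself
    # uses three separate networks, and all sizes here are illustrative).
    import torch
    import torch.nn as nn

    class PoseHead(nn.Module):
        """Small regression head mapping shared image features to pose values."""
        def __init__(self, in_features, out_dim):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(in_features, 256), nn.ReLU(),
                                    nn.Linear(256, out_dim))

        def forward(self, x):
            return self.fc(x)

    class ChestCamPoseNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Sequential(          # stand-in for a real CNN
                nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.body_pose = PoseHead(64, 21 * 3)   # e.g. 21 joints x (x, y, z)
            self.head_pose = PoseHead(64, 3)        # e.g. yaw, pitch, roll
            self.camera_pose = PoseHead(64, 6)      # e.g. rotation + translation

        def forward(self, fisheye_frame):
            feats = self.backbone(fisheye_frame)
            return (self.body_pose(feats), self.head_pose(feats),
                    self.camera_pose(feats))

    model = ChestCamPoseNet()
    frame = torch.randn(1, 3, 256, 256)             # one captured fisheye frame
    body, head, camera = model(frame)               # all estimated per frame
    ```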
    Some challenges remain, however, due to the inevitable domain gap between synthetic and real-world datasets. The researchers plan to keep expanding their dataset with more photo-realistic images to help minimize this gap and improve accuracy.
    The researchers envision that the chest-mounted camera could go on to be transformed into an everyday accessory such as a tie clip, brooch or sports gear in future.
    The team’s work will be presented at the 33rd ACM Symposium on User Interface Software and Technology (UIST), a leading forum for innovations in human-computer interfaces, to be held virtually on 20-23 October 2020.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Kitchen temperature supercurrents from stacked 2D materials

    Could a stack of 2D materials allow for supercurrents at ground-breakingly warm temperatures, easily achievable in the household kitchen?
    An international study published in August opens a new route to high-temperature supercurrents at temperatures as ‘warm’ as inside a kitchen fridge.
    The ultimate aim is to achieve superconductivity (ie, electrical current without any energy loss to resistance) at a reasonable temperature.
    TOWARDS ROOM-TEMPERATURE SUPERCONDUCTIVITY
    Previously, superconductivity has only been possible at impractically low temperatures, below -170°C — even the Antarctic would be far too warm!
    For this reason, the cooling costs of superconductors have been high, requiring expensive and energy-intensive cooling systems.

    Superconductivity at everyday temperatures is the ultimate goal of researchers in the field.
    This new semiconductor superlattice device could form the basis of a radically new class of ultra-low energy electronics with vastly lower energy consumption per computation than conventional, silicon-based (CMOS) electronics.
    Such electronics, based on new types of conduction in which solid-state transistors switch between zero and one (ie, binary switching) without resistance at room temperature, is the aim of the FLEET Centre of Excellence.
    EXCITON SUPERCURRENTS IN ENERGY-EFFICIENT ELECTRONICS
    Because oppositely-charged electrons and holes in semiconductors are strongly attracted to each other electrically, they can form tightly-bound pairs. These composite particles are called excitons, and they open up new paths towards conduction without resistance at room temperature.

    Excitons can in principle form a quantum ‘superfluid’ state, in which they move together without resistance. With such tightly bound excitons, the superfluidity should exist at high temperatures — even as high as room temperature.
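    To see why tight binding matters, compare the relevant energy scales (a standard back-of-the-envelope argument, not a calculation from the study):
    ```latex
    E_b \gg k_B T, \qquad
    k_B T\big|_{T=300\,\mathrm{K}} \approx (8.617\times10^{-5}\,\mathrm{eV/K})(300\,\mathrm{K}) \approx 26\,\mathrm{meV}
    ```
    Excitons in atomically-thin TMD semiconductors are typically bound by a few hundred meV, an order of magnitude above the thermal energy at room temperature, which is why room-temperature superfluidity is plausible in principle.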
    But unfortunately, because the electron and hole are so close together, in practice excitons have extremely short lifetimes — just a few nanoseconds, not enough time to form a superfluid.
    As a workaround, the electron and hole can be kept completely apart in two separate, atomically-thin conducting layers, creating so-called ‘spatially indirect’ excitons. The electrons and holes move along separate but very close conducting layers. This makes the excitons long-lived, and indeed superfluidity has recently been observed in such systems.
    Counterflow in the exciton superfluid, in which the oppositely charged electrons and holes move together in their separate layers, allows so-called ‘supercurrents’ (dissipationless electrical currents) to flow with zero resistance and zero wasted energy. As such, it is clearly an exciting prospect for future, ultra-low-energy electronics.
    STACKED LAYERS OVERCOME 2D LIMITATIONS
    Sara Conti, a co-author on the study, notes another problem, however: atomically-thin conducting layers are two-dimensional, and in 2D systems there are rigid topological quantum restrictions, discovered by David Thouless and Michael Kosterlitz (2016 Nobel Prize), that destroy the superfluidity above a very low temperature, about -170°C.
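    That 2D cap can be stated quantitatively. For a Kosterlitz-Thouless superfluid, the transition temperature is tied to the 2D superfluid density at the transition (a textbook result, quoted here for context rather than taken from the study):
    ```latex
    k_B T_{\mathrm{KT}} = \frac{\pi \hbar^2}{2 m^*}\, n_s(T_{\mathrm{KT}})
    ```
    where n_s is the areal exciton superfluid density and m* the exciton effective mass. For the densities achievable in these layered systems this works out to roughly the -170°C figure quoted above, whereas a 3D condensate faces no such universal cap.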
    The key difference with the newly proposed system of stacked atomically-thin layers of transition metal dichalcogenide (TMD) semiconducting materials is that it is three-dimensional.
    The topological limitations of 2D are overcome by using this 3D ‘superlattice’ of thin layers. Alternate layers are doped with excess electrons (n-doped) and excess holes (p-doped), and these form the 3D excitons.
    The study predicts exciton supercurrents will flow in this system at temperatures as warm as -3°C.
    David Neilson, who has worked for many years on exciton superfluidity and 2D systems, says “The proposed 3D superlattice breaks out from the topological limitations of 2D systems, allowing for supercurrents at -3°C. Because the electrons and holes are so strongly coupled, further design improvements should carry this right up to room temperature.”
    “Amazingly, it is becoming routine today to produce stacks of these atomically-thin layers, lining them up atomically, and holding them together with the weak van der Waals atomic attraction,” explains Prof Neilson. “And while our new study is a theoretical proposal, it is carefully designed to be feasible with present technology.”

  • Simple software creates complex wooden joints

    Wood is considered an attractive construction material for both aesthetic and environmental reasons. Construction of useful wooden objects requires complicated structures and ways to connect components. Researchers created a novel 3D design application that hugely simplifies the design process and also provides milling machine instructions to efficiently produce the designed components. The designs do not require nails or glue, meaning items made with this system can be easily assembled, disassembled, reused, repaired or recycled.
    Carpentry is a practice as ancient as humanity itself. Equal parts art and engineering, it has figuratively and literally shaped the world around us. Yet despite its ubiquity, carpentry is a difficult and time-consuming skill, leading to relatively high prices for hand-crafted wooden items like furniture. For this reason, much of the wooden furniture around us is, at least to some degree, made by machines. Some machines can be highly automated and programmed with designs created on computers by human designers. This in itself can be a very technical and creative challenge, out of reach of many, until now.
    Researchers from the Department of Creative Informatics at the University of Tokyo have created a 3D design application to create structural wooden components quickly, easily and efficiently. They call it Tsugite, the Japanese word for joinery, and through a simple 3D interface, users with little or no prior experience in either woodworking or 3D design can create designs for functional wooden structures in minutes. These designs can then instruct milling machines to carve the structural components, which users can then piece together without the need for additional tools or adhesives, following on-screen instructions.
    “Our intention was to make the art of joinery available to people without specific experience. When we tested the interface in a user study, people new to 3D modeling not only designed some complex structures, but also enjoyed doing so,” said researcher Maria Larsson. “Tsugite is simple to use as it guides users through the process one step at a time, starting with a gallery of existing designs that can then be modified for different purposes. But more advanced users can jump straight to a manual editing mode for more freeform creativity.”
    Tsugite gives users a detailed view of wooden joints represented by what are known as voxels, essentially 3D pixels, in this case small cubes. These voxels can be moved around at one end of a component to be joined; this automatically adjusts the voxels at the end of the corresponding component such that they are guaranteed to fit together tightly without the need for nails or even glue. Two or more components can be joined and the software algorithm will adjust all accordingly. Different colors inform the user about properties of the joints such as how easily they will slide together, or problems such as potential weaknesses.
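    The complementarity at the heart of this scheme is easy to express in code. The NumPy sketch below is a simplified illustration of the idea, not Tsugite’s actual implementation: it treats the joint interface as a small voxel grid in which every cell belongs to exactly one of the two components, so the parts interlock by construction (real joints also need checks for slidability and millability, omitted here).
    ```python
    # Simplified illustration of complementary voxel joints (not the actual
    # Tsugite implementation; slidability and milling checks are omitted).
    import numpy as np

    # A 3x3x3 voxel grid at the joint interface: each cell is assigned to
    # component A (0) or component B (1).
    rng = np.random.default_rng(seed=0)
    assignment = rng.integers(0, 2, size=(3, 3, 3))

    solid_a = assignment == 0        # A's material at the interface
    solid_b = assignment == 1        # B's material is the exact complement

    assert not np.any(solid_a & solid_b)   # the parts never overlap...
    assert np.all(solid_a | solid_b)       # ...and leave no gaps

    # Editing one side automatically updates the other, which is what
    # guarantees a tight fit without nails or glue:
    assignment[0, 0, 0] = 1          # user moves one voxel to component B
    solid_a = assignment == 0        # component A's geometry follows suit
    solid_b = assignment == 1
    ```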
    Something that makes Tsugite unique is that it factors the fabrication process directly into the designs. This means that milling machines, which have physical limitations such as their degrees of freedom, tool size and so on, are only given designs they are able to create. Something that has plagued users of 3D printers, which share a common ancestry with milling machines, is that software for 3D printers cannot always be sure how the machine itself will behave, which can lead to failed prints.
    “There is some great research in the field of computer graphics on how to model a wide variety of joint geometries. But that approach often lacks the practical considerations of manufacturing and material properties,” said Larsson. “Conversely, research in the fields of structural engineering and architecture may be very thorough in this regard, but they might only be concerned with a few kinds of joints. We saw the potential to combine the strengths of these approaches to create Tsugite. It can explore a large variety of joints and yet keeps them within realistic physical limits.”
    Another advantage of incorporating fabrication limitations into the design process is that Tsugite’s underlying algorithms have an easier time navigating all the different possibilities they could present to users, as those that are physically impossible are simply not given as options. The researchers hope through further refinements and advancements that Tsugite can be scaled up to design not just furniture and small structures, but also entire buildings.
    “According to the U.N., the building and construction industry is responsible for almost 40% of worldwide carbon dioxide emissions. Wood is perhaps the only natural and renewable building material that we have, and efficient joinery can add further sustainability benefits,” said Larsson. “When connecting timbers with joinery, as opposed to metal fixings, for example, it reduces mixing materials. This is good for sorting and recycling. Also, unglued joints can be taken apart without destroying building components. This opens up the possibility for buildings to be disassembled and reassembled elsewhere. Or for defective parts to be replaced. This flexibility of reuse and repair adds sustainability benefits to wood.”
    This research is supported by JST ACT-I grant number JPMJPR17UT, JSPS KAKENHI grant number 17H00752, and JST CREST grant number JPMJCR17A1, Japan.

    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Seeing no longer believing: the manipulation of online images

    A peace sign from Martin Luther King, Jr, becomes a rude gesture; President Donald Trump’s inauguration crowd scenes are inflated; dolphins appear in Venice’s Grand Canal; and crocodiles roam the streets of flooded Townsville — all manipulated images posted as truth.
    Image editing software is so ubiquitous and easy to use, according to researchers from QUT’s Digital Media Research Centre, that it has the power to re-imagine history.
    And, they say, deadline-driven journalists lack the tools to tell the difference, especially when the images come through from social media.
    Their study, Visual mis/disinformation in journalism and public communications, has been published in Journalism Practice. It was driven by the increased prevalence of fake news and how social media platforms and news organisations are struggling to identify and combat visual mis/disinformation presented to their audiences.
    “When Donald Trump’s staff posted an image to his official Facebook page in 2019, journalists were able to spot the photoshopped edits to the president’s skin and physique because an unedited version exists on the White House’s official Flickr feed,” said lead author Dr T.J. Thomson.
    “But what about when unedited versions aren’t available online and journalists can’t rely on simple reverse-image searches to verify whether an image is real or has been manipulated?

    “When it is possible to alter past and present images, by methods like cloning, splicing, cropping, re-touching or re-sampling, we face the danger of a re-written history — a very Orwellian scenario.”
    Examples highlighted in the report include photos shared by news outlets last year of crocodiles on Townsville streets during a flood which were later shown to be images of alligators in Florida from 2014. It also quotes a Reuters employee on their discovery that a harrowing video shared during Cyclone Idai, which devastated parts of Africa in 2019, had been shot in Libya five years earlier.
    An image of Dr Martin Luther King Jr’s reaction to the US Senate’s passing of the civil rights bill in 1964, was manipulated to make it appear that he was flipping the bird to the camera. This edited version was shared widely on Twitter, Reddit, and white supremacist website The Daily Stormer.
    Dr Thomson, Associate Professor Daniel Angus, Dr Paula Dootson, Dr Edward Hurcombe, and Adam Smith have mapped journalists’ current social media verification techniques and suggest which tools are most effective for which circumstances.
    “Detection of false images is made harder by the number of visuals created daily — in excess of 3.2 billion photos and 720,000 hours of video — along with the speed at which they are produced, published, and shared,” said Dr Thomson.

    “Other considerations include the digital and visual literacy of those who see them. Yet being able to detect fraudulent edits masquerading as reality is critically important.
    “While journalists who create visual media are not immune to ethical breaches, the practice of incorporating more user-generated and crowd-sourced visual content into news reports is growing. Verification on social media will have to increase commensurately if we wish to improve trust in institutions and strengthen our democracy.”
    Dr Thomson said a recent quantitative study performed by the International Centre for Journalists (ICFJ) found a very low usage of social media verification tools in newsrooms.
    “The ICFJ surveyed over 2,700 journalists and newsroom managers in more than 130 countries and found only 11% of those surveyed used social media verification tools,” he said.
    “The lack of user-friendly forensic tools available and low levels of digital media literacy, combined, are chief barriers to those seeking to stem the tide of visual mis/disinformation online.”
    Associate Professor Angus said the study demonstrated an urgent need for better tools, developed with journalists, to provide greater clarity around the provenance and authenticity of images and other media.
    “Despite knowing little about the provenance and veracity of the visual content they encounter, journalists have to quickly determine whether to re-publish or amplify this content,” he said.
    “The many examples of misattributed, doctored, and faked imagery attest to the importance of accuracy, transparency, and trust in the arena of public discourse. People generally vote and make decisions based on information they receive via friends and family, politicians, organisations, and journalists.”
    The researchers cite current manual detection strategies — using a reverse image search, examining image metadata, examining light and shadows, and using image editing software — but say more tools need to be developed, including more advanced machine learning methods, to verify visuals on social media.
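    The second of those checks, examining image metadata, is simple to script. The sketch below uses the Pillow library to dump whatever EXIF tags survive in a file; the file name is a placeholder, and most social platforms strip metadata on upload, so missing EXIF data is common and not itself proof of manipulation.
    ```python
    # Minimal EXIF dump with Pillow ("suspect_photo.jpg" is a placeholder).
    from PIL import Image
    from PIL.ExifTags import TAGS

    def dump_exif(path):
        """Return whatever EXIF tags survive in the file, by readable name."""
        with Image.open(path) as img:
            return {TAGS.get(tag_id, tag_id): value
                    for tag_id, value in img.getexif().items()}

    metadata = dump_exif("suspect_photo.jpg")
    for key in ("DateTime", "Make", "Model", "Software"):
        # 'Software' often records the editor that last saved the file.
        print(key, "->", metadata.get(key, "missing"))
    ```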
    Video: https://www.youtube.com/watch?v=S_flHHn1280&feature=emb_logo

  • AI and photonics join forces to make it easier to find 'new Earths'

    Australian scientists have developed a new type of sensor to measure and correct the distortion of starlight caused by viewing through the Earth’s atmosphere, which should make it easier to study the possibility of life on distant planets.
    Using artificial intelligence and machine learning, University of Sydney optical scientists have developed a sensor that can neutralise a star’s ‘twinkle’ caused by heat variations in the Earth’s atmosphere. This will make the discovery and study of planets in distant solar systems easier from optical telescopes on Earth.
    “The main way we identify planets orbiting distant stars is by measuring regular dips in starlight caused by planets blocking out bits of their sun,” said lead author Dr Barnaby Norris, who holds a joint position as a Research Fellow in the University of Sydney Astrophotonic Instrumentation Laboratory and in the University of Sydney node of Australian Astronomical Optics in the School of Physics.
    “This is really difficult from the ground, so we needed to develop a new way of looking up at the stars. We also wanted to find a way to directly observe these planets from Earth,” he said.
    The team’s invention will now be deployed in one of the largest optical telescopes in the world, the 8.2-metre Subaru telescope in Hawaii, operated by the National Astronomical Observatory of Japan.
    “It is really hard to separate a star’s ‘twinkle’ from the light dips caused by planets when observing from Earth,” Dr Norris said. “Most observations of exoplanets have come from orbiting telescopes, such as NASA’s Kepler. With our invention, we hope to launch a renaissance in exoplanet observation from the ground.”
    The research is published today in Nature Communications.

    NOVEL METHODS
    Using the new ‘photonic wavefront sensor’ will help astronomers directly image exoplanets around distant stars from Earth.
    Over the past two decades, thousands of planets beyond our solar system have been detected, but only a small handful have been directly imaged from Earth. This severely limits scientific exploration of these exoplanets.
    Making an image of the planet gives far more information than indirect detection methods, like measuring starlight dips. Earth-like planets might appear a billion times fainter than their host star. And observing the planet separate from its star is like looking at a 10-cent coin held in Sydney, as viewed from Melbourne.
    To solve this problem, the scientific team in the School of Physics developed a ‘photonic wavefront sensor’, a new way to allow the exact distortion caused by the atmosphere to be measured, so it can then be corrected by the telescope’s adaptive optics systems thousands of times a second.

    “This new sensor merges advanced photonic devices with deep learning and neural network techniques to achieve an unprecedented type of wavefront sensor for large telescopes,” Dr Norris said.
    “Unlike conventional wavefront sensors, it can be placed at the same location in the optical instrument where the image is formed. This means it is sensitive to types of distortions invisible to other wavefront sensors currently used today in large observatories,” he said.
    Professor Olivier Guyon from the Subaru Telescope and the University of Arizona is one of the world’s leading experts in adaptive optics. He said: “This is no doubt a very innovative approach and very different to all existing methods. It could potentially resolve several major limitations of the current technology. We are currently working in collaboration with the University of Sydney team towards testing this concept at Subaru in conjunction with SCExAO, which is one of the most advanced adaptive optics systems in the world.”
    APPLICATION BEYOND ASTRONOMY
    The scientists have achieved this remarkable result by building on a novel method to measure (and correct) the wavefront of light that passes through atmospheric turbulence directly at the focal plane of an imaging instrument. This is done using an advanced light converter, known as a photonic lantern, linked to a neural network inference process.
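    In outline, that inference step is a learned inverse problem: lantern output intensities in, wavefront error out. The sketch below illustrates the idea only; the number of lantern outputs, the Zernike parameterization and the layer sizes are assumptions, not the instrument’s actual specification.
    ```python
    # Illustrative inverse model: lantern intensities -> Zernike coefficients.
    # The 19 outputs, 9 modes and layer sizes are assumptions for the sketch.
    import torch
    import torch.nn as nn

    N_OUTPUTS = 19   # single-mode outputs of the photonic lantern (assumed)
    N_MODES = 9      # low-order Zernike aberration modes to recover (assumed)

    inverse_model = nn.Sequential(
        nn.Linear(N_OUTPUTS, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, N_MODES))               # predicted wavefront coefficients

    # In operation this loop would run thousands of times per second:
    intensities = torch.rand(1, N_OUTPUTS)     # stand-in for detector readout
    coefficients = inverse_model(intensities)  # inferred distortion
    correction = -coefficients                 # command the adaptive optics
    ```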
    “This is a radically different approach to existing methods and resolves several major limitations of current approaches,” said co-author Jin (Fiona) Wei, a postgraduate student at the Sydney Astrophotonic Instrumentation Laboratory.
    The Director of the Sydney Astrophotonic Instrumentation Laboratory in the School of Physics at the University of Sydney, Associate Professor Sergio Leon-Saval, said: “While we have come to this problem to solve a problem in astronomy, the proposed technique is extremely relevant to a wide range of fields. It could be applied in optical communications, remote sensing, in-vivo imaging and any other field that involves the reception or transmission of accurate wavefronts through a turbulent or turbid medium, such as water, blood or air.”

  • Virtual Reality health appointments can help patients address eating disorders

    Research from the University of Kent, the Research centre on Interactive Media, Smart systems and Emerging technologies — RISE Ltd and the University of Cyprus has revealed that Virtual Reality (VR) technology can have a significant impact on the validity of remote health appointments for those with eating disorders, through a process called Virtual Reality Exposure Therapy (VRET).
    This paper demonstrates the potential value of Multi-User Virtual Reality (MUVR) remote psychotherapy for those with body shape and weight concerns.
    In the study, published in the Human-Computer Interaction Journal, participants and therapists were fitted with VR head-mounted displays and introduced to each other within the VR system. Participants would then customize their virtual avatar according to their look (body shape and size, skin tone, and hair colour and shape). The participant and therapist were then “teleported” to two Virtual Environment interventions for several discussions, building up to the Mirror Exposure.
    Mirror Exposure involves confronting one’s own shape and body in a mirror. In the MUVR, the participant faces the virtual avatar they customized to match their own physical body. Here, they were again able to adjust body shape using virtual sliders and change clothing, skin tone, and hair style and colour. Clothing was then gradually reduced until the participant’s avatar was in their virtual underwear.
    The participant was then asked to examine each part of their body and perform adjustments while describing their feelings, thoughts and concerns with the therapist, providing virtual exposure therapy to their own body shape and size through the customised avatar.
    The study found that the avatar of the therapist was vital to the participant. The cartoonish avatar facilitated greater openness from participants, whilst therapist avatars in human form suggested negative judgement. In post-session interviews, participants noted the lack of fear of judgement as enabling them to commit to the session’s aims.
    Dr Jim Ang, Senior Lecturer in Multimedia/Digital Systems and Supervisor of the study said: ‘The potential of Virtual Reality being used in addressing health issues with patients, remotely and without the issue of potential judgement, is for VR to be utilised throughout the health sector. Without the issue of judgement, which people can fear in advance of even seeking medical advice, VR can give people the confidence to engage with and embrace medical advice. In terms of the technical capabilities, the potential for VR to aid in remote non-contact medical appointments between patients and practitioners is huge, deserving particular consideration in times of pandemic.’
    Dr Maria Matsangidou, Research Associate at RISE Ltd and Experimental Researcher of the study said: ‘Multi-User Virtual Reality is an innovative medium for psychotherapeutic interventions that allows for the physical separation of therapist and patient, thus providing more ‘comfortable’ openness by the patients. Exposure to patient worries about body shape and size may provoke anxious reactions, but through remote exposure therapy this can elicit new learning that helps the patient to shape new experiences.’

    Story Source:
    Materials provided by University of Kent. Original written by Sam Wood. Note: Content may be edited for style and length.

  • 3D hand pose estimation using a wrist-worn camera

    Researchers at Tokyo Institute of Technology (Tokyo Tech) working in collaboration with colleagues at Carnegie Mellon University, the University of St Andrews and the University of New South Wales have developed a wrist-worn device for 3D hand pose estimation. The system consists of a camera that captures images of the back of the hand, and is supported by a neural network called DorsalNet which can accurately recognize dynamic gestures.
    Being able to track hand gestures is of crucial importance in advancing augmented reality (AR) and virtual reality (VR) devices that are already beginning to be much in demand in the medical, sports and entertainment sectors. To date, these devices have involved using bulky data gloves, which tend to hinder natural movement, or controllers with a limited range of sensing.
    Now, a research team led by Hideki Koike at Tokyo Tech has devised a camera-based wrist-worn 3D hand pose recognition system that could in future be on par with a smartwatch. Importantly, their system allows hand motion to be captured in mobile settings.
    “This work is the first vision-based real-time 3D hand pose estimator using visual features from the dorsal hand region,” the researchers say. The system consists of a camera supported by a neural network named DorsalNet which can accurately estimate 3D hand poses by detecting changes in the back of the hand.
    The researchers confirmed that their system outperforms previous work, with on average 20% higher accuracy in recognizing dynamic gestures, and achieves 75% accuracy in detecting eleven different grasp types.
    The work could advance the development of controllers that support bare-hand interaction. In preliminary tests, the researchers demonstrated that it would be possible to use their system for smart devices control, for example, changing the time on a smartwatch simply by changing finger angle. They also showed it would be possible to use the system as a virtual mouse or keyboard, for example by rotating the wrist to control the position of the pointer and using a simple 8-key keyboard for typing input.
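    As a concrete illustration of the virtual-mouse idea, the sketch below maps an estimated wrist rotation to a cursor position; the gain and screen dimensions are arbitrary assumptions, and the pose estimate itself would come from a DorsalNet-style model upstream.
    ```python
    # Illustrative virtual mouse: wrist rotation -> cursor position.
    # Gain and screen size are arbitrary assumptions for the sketch.
    SCREEN_W, SCREEN_H = 1920, 1080
    GAIN = 25.0   # pixels per degree of wrist rotation (assumed)

    def wrist_to_cursor(yaw_deg, pitch_deg):
        """Map wrist yaw/pitch in degrees (0 = neutral pose) to screen pixels,
        clamped to the screen bounds."""
        x = min(max(SCREEN_W / 2 + GAIN * yaw_deg, 0), SCREEN_W - 1)
        y = min(max(SCREEN_H / 2 - GAIN * pitch_deg, 0), SCREEN_H - 1)
        return int(x), int(y)

    # Rotating the wrist 10 degrees right and 4 degrees up:
    print(wrist_to_cursor(10.0, 4.0))   # -> (1210, 440)
    ```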
    They point out that further improvements to the system such as using a camera with a higher frame rate to capture fast wrist movement and being able to deal with more diverse lighting conditions will be needed for real world use.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Targeting the shell of the Ebola virus

    As the world grapples with the coronavirus (COVID-19) pandemic, another virus has been raging again in the Democratic Republic of the Congo in recent months: Ebola. Since the first terrifying outbreak in 2013, the Ebola virus has periodically emerged in Africa, causing horrific bleeding in its victims and, in many cases, death.
    How can we battle these infectious agents that reproduce by hijacking cells and reprogramming them into virus-replicating machines? Science at the molecular level is critical to gaining the upper hand — research you’ll find underway in the laboratory of Professor Juan Perilla at the University of Delaware.
    Perilla and his team of graduate and undergraduate students in UD’s Department of Chemistry and Biochemistry are using supercomputers to simulate the inner workings of Ebola, observing the way molecules move, atom by atom, to carry out their functions. In the team’s latest work, they reveal structural features of the virus’s coiled protein shell, or nucleocapsid, that may be promising therapeutic targets, more easily destabilized and knocked out by an antiviral treatment.
    The research is highlighted in the Tuesday, Oct. 20 issue of the Journal of Chemical Physics, which is published by the American Institute of Physics, a federation of societies in the physical sciences representing more than 120,000 members.
    “The Ebola nucleocapsid looks like a Slinky walking spring, whose neighboring rings are connected,” Perilla said. “We tried to find what factors control the stability of this spring in our computer simulations.”
    The life cycle of Ebola is highly dependent on this coiled nucleocapsid, which surrounds the virus’s genetic material consisting of a single strand of ribonucleic acid (ssRNA). Nucleoproteins protect this RNA from being recognized by cellular defense mechanisms. Through interactions with different viral proteins, such as VP24 and VP30, these nucleoproteins form a minimal functional unit — a copy machine — for viral transcription and replication.

    While nucleoproteins are important to the nucleocapsid’s stability, the team’s most surprising finding, Perilla said, is that in the absence of single-stranded RNA, the nucleocapsid quickly becomes disordered. But RNA alone is not sufficient to stabilize it. The team also observed charged ions binding to the nucleocapsid, which may reveal where other important cellular factors bind and stabilize the structure during the virus’s life cycle.
    Perilla compared the team’s work to a search for molecular “knobs” that control the nucleocapsid’s stability like volume control knobs that can be turned up to hinder virus replication.
    The UD team built two molecular dynamics systems of the Ebola nucleocapsid for their study. One included single-stranded RNA; the other contained only the nucleoprotein. The systems were then simulated using the Texas Advanced Computing Center’s Frontera supercomputer — the largest academic supercomputer in the world. The simulations took about two months to complete.
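    The article does not detail the team’s analysis pipeline, but a standard way to quantify the kind of (in)stability described above is the per-residue root-mean-square fluctuation (RMSF) over the trajectory. A minimal sketch using the MDAnalysis library, with placeholder file names:
    ```python
    # Per-residue RMSF with MDAnalysis (file names are placeholders; this is
    # a generic stability analysis, not the study's actual pipeline).
    import MDAnalysis as mda
    from MDAnalysis.analysis import align, rms

    u = mda.Universe("nucleocapsid.psf", "trajectory.dcd")
    calphas = u.select_atoms("protein and name CA")

    # Remove global rotation/translation so RMSF reflects internal motion only.
    align.AlignTraj(u, u, select="protein and name CA", in_memory=True).run()

    rmsf = rms.RMSF(calphas).run()
    for residue, value in zip(calphas.residues, rmsf.results.rmsf):
        print(residue.resid, round(value, 2))   # high RMSF = disordered region
    ```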
    Graduate research assistant Chaoyi Xu ran the molecular simulations, while the entire team was involved in developing the analytical framework and conducting the analysis. Writing the manuscript was a learning experience for Xu and undergraduate research assistant Tanya Nesterova, who had not been directly involved in this work before. She also received training as a next-generation computational scientist with support from UD’s Undergraduate Research Scholars program and NSF’s XSEDE-EMPOWER program. The latter has allowed her to perform the highest-level research using the nation’s top supercomputers. Postdoctoral researcher Nidhi Katyal’s expertise also was essential to bringing the project to completion, Perilla said.
    While a vaccine exists for Ebola, it must be kept extremely cold, which is difficult in remote African regions where outbreaks have occurred. Will the team’s work help advance new treatments?
    “As basic scientists we are excited to understand the fundamental principles of Ebola,” Perilla said. “The nucleocapsid is the most abundant protein in the virus and it’s highly immunogenic — able to produce an immune response. Thus, our new findings may facilitate the development of new antiviral treatments.”
    Currently, Perilla and Jodi Hadden-Perilla are using supercomputer simulations to study the novel coronavirus that causes COVID-19. Although the structures of the nucleocapsid in Ebola and COVID-19 share some similarities — both are rod-like helical protofilaments and both are involved in the replication, transcription and packing of viral genomes — that is where the similarities end.
    “We now are refining the methodology we used for Ebola to examine SARS-CoV-2,” Perilla said.