More stories

  • Projection mapping leaves the darkness behind

    Images projected onto objects in the real world create impressive displays that educate and entertain. However, current projection mapping systems all have one common limitation: they only work well in the dark. In a study recently published in IEEE Transactions on Visualization and Computer Graphics, researchers from Osaka University suggest a way to bring projection mapping “into the light.”
    Conventional projection mapping, which turns any three-dimensional surface into an interactive display, requires darkness because any illumination in the surroundings also illuminates the surface of the target object used for display. This means that black and dark colors appear too bright and can’t be displayed properly. In addition, the projected images always look like they are glowing, but not all real objects are luminous, which restricts the range of objects that can be displayed. Displays in dark environments have another disadvantage: although multiple viewers can interact with the illuminated scene, they are less able to interact with each other in the darkness.
    “To get around this problem, we use projectors to reproduce normal illumination on every part of the room except the display object itself,” says Masaki Takeuchi, lead author of the study. “In essence, we create the illusion of global illumination without using actual global illumination.”
    Projecting global illumination requires a set of techniques that differs from those of conventional projection mapping. The research team uses several standard projectors to illuminate the room, along with a projector with a wide aperture and large-format lens that softens the crisp edges of shadows. These luminaire projectors light the environment while keeping the target object in shadow. Conventional texture projectors then map the desired texture onto the object’s shadowed surface.
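    As a rough illustration of the compositing logic described above, the sketch below assumes that a binary mask of the target object is already available in each projector's image space (for example, from a prior projector-camera calibration); it is not the authors' code. Luminaire pixels that would land on the target are blacked out, while the texture projector lights only the masked region.

    ```python
    import numpy as np

    def luminaire_frame(ambient_image, target_mask):
        """Ambient lighting pattern with the target object left in shadow.

        ambient_image: HxWx3 float array, the desired room illumination.
        target_mask:   HxW bool array, True where this projector hits the target.
        """
        frame = ambient_image.copy()
        frame[target_mask] = 0.0          # keep the display object unlit
        return frame

    def texture_frame(texture_image, target_mask):
        """Texture projection confined to the (shadowed) target surface."""
        frame = np.zeros_like(texture_image)
        frame[target_mask] = texture_image[target_mask]
        return frame

    # Hypothetical usage with dummy data:
    H, W = 480, 640
    ambient = np.full((H, W, 3), 0.8)            # bright, uniform room light
    mask = np.zeros((H, W), dtype=bool)
    mask[180:300, 260:380] = True                # target region in this projector's view
    texture = np.random.rand(H, W, 3)            # texture to map onto the object

    room_light = luminaire_frame(ambient, mask)
    object_texture = texture_frame(texture, mask)
    ```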
    The researchers built a prototype environment and evaluated the performance of their approach. One aspect they evaluated was whether the objects were perceived by humans in aperture-color mode (where the colors appear to radiate from the object itself) or surface-color mode (in which the light appears to be reflected from a colored surface).
    “To our knowledge, we are the first to consider this,” says Daisuke Iwai, senior author of the study. “However, we believe it is fundamental for producing realistic environments.”
    The researchers found that, using their method, they could project texture images onto objects without making the object appear to glow. Instead, the textures were perceived to be the true colors of the object’s surface.
    In the future, the researchers plan to add more projectors to handle the complex illumination in the areas next to the display object. Eventually, they aim to produce scenes that are indistinguishable from real-world three-dimensional scenes. They believe that this approach will enable visual design environments for industrial products or packaging, in which participants can interact not only with their design under natural light but also with each other, facilitating communication and improving design performance.

  • Holographic message encoded in simple plastic

    There are many ways to store data — digitally, for instance on a hard disk, or with analogue storage technology, for example as a hologram. In most cases, creating a hologram is technically quite complicated: high-precision laser equipment is normally used for this.
    However, if the aim is simply to store data in a physical object, then holography can be done quite easily, as has now been demonstrated at TU Wien: A 3D printer can be used to produce a panel from normal plastic in which a QR code can be stored, for example. The message is read using terahertz rays — electromagnetic radiation that is invisible to the human eye.
    The hologram as a data storage device
    A hologram is completely different from an ordinary image. In an ordinary image, each pixel has a clearly defined position. If you tear off a piece of the picture, a part of the content is lost.
    In a hologram, however, the image is formed by contributions from all areas of the hologram simultaneously. If you take away a piece of the hologram, the rest can still create the complete image (albeit perhaps a blurrier version). With the hologram, the information is not stored pixel by pixel, but rather, all of the information is spread out over the whole hologram.
    “We have applied this principle to terahertz beams,” says Evan Constable from the Institute of Solid State Physics at TU Wien. “These are electromagnetic rays in the range of around one hundred to several thousand gigahertz, comparable to the radiation of a cell phone or a microwave oven — but with a significantly higher frequency.”
    This terahertz radiation is sent to a thin plastic plate. This plate is almost transparent to the terahertz rays, but it has a higher refractive index than the surrounding air, so at each point of the plate, it changes the incident wave a little. “A wave then emanates from each point of the plate, and all these waves interfere with each other,” says Evan Constable. “If you have adjusted the thickness of the plate in just the right way, point by point, then the superposition of all these waves produces exactly the desired image.”
    It is similar to throwing lots of little stones into a pond in a precisely calculated way so that the water waves from all these stones add up to a very specific overall wave pattern.
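    The thickness-profile computation itself is conceptually close to textbook phase retrieval: find a phase pattern whose far-field interference reproduces the target image, then translate that phase delay into a local plate thickness. The minimal Gerchberg-Saxton-style sketch below illustrates the idea only; the group's own code, mentioned below and available on GitHub, may use a different propagation model, and the wavelength and refractive index here are placeholders.

    ```python
    import numpy as np

    def thickness_profile(target_intensity, wavelength, n_plate, iterations=200):
        """Gerchberg-Saxton-style phase retrieval, then phase -> local plate thickness.

        target_intensity: 2D array with the desired far-field pattern (e.g. a QR code).
        wavelength:       wavelength of the terahertz radiation (metres).
        n_plate:          refractive index of the printed plastic.
        """
        target_amp = np.sqrt(target_intensity)
        rng = np.random.default_rng(0)
        phase = 2 * np.pi * rng.random(target_amp.shape)   # random starting phase
        for _ in range(iterations):
            field = np.exp(1j * phase)                     # hologram plane, ~unit amplitude
            far = np.fft.fft2(field)                       # propagate to the image plane
            far = target_amp * np.exp(1j * np.angle(far))  # impose the desired amplitude
            phase = np.angle(np.fft.ifft2(far))            # propagate back, keep the phase
        # A phase delay phi corresponds to an extra optical path phi * wavelength / (2*pi),
        # produced by a plastic layer of thickness t = phi * wavelength / (2*pi*(n_plate - 1)).
        phase = np.mod(phase, 2 * np.pi)
        return phase * wavelength / (2 * np.pi * (n_plate - 1))

    # Hypothetical usage: a 64x64 binary pattern read out at ~0.3 THz (wavelength ~1 mm)
    pattern = (np.random.default_rng(1).random((64, 64)) > 0.5).astype(float)
    plate = thickness_profile(pattern, wavelength=1e-3, n_plate=1.6)
    ```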

    A piece of cheap plastic as a high-tech storage unit for valuable items
    In this way, it was possible to encode a Bitcoin wallet address (consisting of 256 bits) in a piece of plastic. By shining terahertz rays of the correct wavelength through this plastic plate, a terahertz ray image is created that produces exactly the desired code. “In this way, you can securely store a value of tens of thousands of euros in an object that only costs a few cents,” says Evan Constable.
    In order for the plate to generate the correct code, one first has to calculate how thick the plate has to be at each point, so that it changes the terahertz wave in exactly the right way. Evan Constable and his collaborators made the code for obtaining this thickness profile available for free on GitHub. “Once you have this thickness profile, all you need is an ordinary 3D printer to print the plate and you have the desired information stored holographically,” explains Constable. The aim of the research work was not only to make holography with terahertz waves possible, but also to demonstrate how well the technology for working with these waves has progressed and how precisely this still rather unusual range of electromagnetic radiation can already be used today.

  • New technique helps AI tell when humans are lying

    Researchers have developed a new training tool to help artificial intelligence (AI) programs better account for the fact that humans don’t always tell the truth when providing personal information. The new tool was developed for use in contexts where humans have an economic incentive to lie, such as applying for a mortgage or trying to lower their insurance premiums.
    “AI programs are used in a wide variety of business contexts, such as helping to determine how large of a mortgage an individual can afford, or what an individual’s insurance premiums should be,” says Mehmet Caner, co-author of a paper on the work. “These AI programs generally use mathematical algorithms driven solely by statistics to do their forecasting. But the problem is that this approach creates incentives for people to lie, so that they can get a mortgage, lower their insurance premiums, and so on.
    “We wanted to see if there was some way to adjust AI algorithms in order to account for these economic incentives to lie,” says Caner, who is the Thurman-Raytheon Distinguished Professor of Economics in North Carolina State University’s Poole College of Management.
    To address this challenge, the researchers developed a new set of training parameters that can be used to inform how the AI teaches itself to make predictions. Specifically, the new training parameters focus on recognizing and accounting for a human user’s economic incentives. In other words, the AI trains itself to recognize circumstances in which a human user might lie to improve their outcomes.
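    The article does not specify the training parameters themselves. In the strategic-classification literature, one common way to encode such incentives is to train the model against the feature manipulations a self-interested applicant could afford; the sketch below is a generic illustration of that idea with made-up feature roles and a made-up misreporting budget, not the authors' method.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_incentive_aware(X, y, manipulable, budget=0.5, lr=0.1, epochs=500):
        """Logistic regression trained against worst-case feature inflation.

        X:           (n, d) applicant features; y: (n,) 0/1 approval labels.
        manipulable: boolean mask of features an applicant could misreport.
        budget:      assumed maximum misreporting per manipulable feature.
        """
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            # Applicants inflate manipulable features in whichever direction
            # raises their predicted score (sign of the current weight).
            shift = budget * np.sign(w) * manipulable
            X_adv = X + shift                       # worst-case reported features
            p = sigmoid(X_adv @ w + b)
            w -= lr * (X_adv.T @ (p - y) / n)
            b -= lr * np.mean(p - y)
        return w, b

    # Hypothetical usage with synthetic data: feature 0 is self-reported income
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 1] + X[:, 2] > 0).astype(float)       # "true" creditworthiness
    manipulable = np.array([True, False, False])
    w, b = train_incentive_aware(X, y, manipulable)
    ```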
    In proof-of-concept simulations, the modified AI was better able to detect inaccurate information from users.
    “This effectively reduces a user’s incentive to lie when submitting information,” Caner says. “However, small lies can still go undetected. We need to do some additional work to better understand where the threshold is between a ‘small lie’ and a ‘big lie.’”
    The researchers are making the new AI training parameters publicly available, so that AI developers can experiment with them.
    “This work shows we can improve AI programs to reduce economic incentives for humans to lie,” Caner says. “At some point, if we make the AI clever enough, we may be able to eliminate those incentives altogether.”

  • Advance for soft robotics manufacturing, design

    Soft robots use pliant materials such as elastomers to interact safely with the human body and other challenging, delicate objects and environments. A team of Rice University researchers has developed an analytical model that can predict the curing time of platinum-catalyzed silicone elastomers as a function of temperature. The model could help reduce energy waste and improve throughput for elastomer-based components manufacturing.
    “In our study, we looked at elastomers as a class of materials that enables soft robotics, a field that has seen a huge surge in growth over the past decade,” said Daniel Preston, a Rice assistant professor of mechanical engineering and corresponding author on a study published in Cell Reports Physical Science. “While there is some related research on materials like epoxies and even on several specific silicone elastomers, until now there was no detailed quantitative account of the curing reaction for many of the commercially available silicone elastomers that people are actually using to make soft robots. Our work fills that gap.”
    The platinum-catalyzed silicone elastomers that Preston and his team studied typically start out as two viscoelastic liquids that, when mixed together, transform over time into a rubbery solid. As a liquid mixture, they can be poured into intricate molds and thus used for casting complex components. The curing process can occur at room temperature, but it can also be sped up using heat.
    Manufacturing processes involving elastomers have typically relied on empirical estimates for temperature and duration to control the curing process. However, this ballpark approach makes it difficult to predict how elastomers will behave under varying curing conditions. Having a quantitative framework to determine exactly how temperature impacts curing speed will enable manufacturers to maximize efficiency and reduce waste.
    “Previously, using existing models to predict elastomers’ curing behavior under varying temperature conditions was a much more challenging task,” said Te Faye Yap, a graduate student in the Preston lab who is lead author on the study. “There’s a huge need to make manufacturing processes more efficient and reduce waste, both in terms of energy consumption and materials.”
    To understand how temperature impacts the curing process, the researchers used a rheometer — an instrument that measures the mechanical properties of liquids and soft solids — to analyze the curing behavior of six commercially available platinum-catalyzed elastomers.
    “We were able to develop a model based on what is called the Arrhenius relationship that relates this curing reaction rate to the temperature at which the elastomer is being cured,” Preston said. “Now we have a really nice quantitative understanding of exactly how temperature impacts curing speed.”
    The Arrhenius framework, a formula that relates the rate of chemical reactions to temperature, has been used in a variety of contexts such as semiconductor processing and virus inactivation. Preston and his group have used the framework in some of their prior work and found it also applies to curing reactions for materials like epoxies as described in previous studies. In this study, the researchers used the Arrhenius framework along with rheological data to develop an analytical model that could directly impact manufacturing practices.
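    For context, the Arrhenius relationship itself is compact: the cure-rate constant scales as k(T) = A·exp(−Ea/(R·T)), so the time to reach a given degree of cure scales roughly as 1/k(T). The sketch below illustrates that scaling with placeholder values for the pre-exponential factor and activation energy; the paper's fitted parameters for specific commercial elastomers would take their place.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def cure_rate(T_celsius, A, Ea):
        """Arrhenius rate constant k(T) = A * exp(-Ea / (R * T))."""
        T = T_celsius + 273.15
        return A * np.exp(-Ea / (R * T))

    def relative_cure_time(T_celsius, T_ref_celsius, A, Ea):
        """Cure time at T relative to a reference temperature (time ~ 1/k)."""
        return cure_rate(T_ref_celsius, A, Ea) / cure_rate(T_celsius, A, Ea)

    # Placeholder parameters (illustrative only, not fitted values from the paper)
    A, Ea = 1.0e7, 60e3   # 1/s and J/mol
    for T in (25, 50, 70):
        speedup = 1 / relative_cure_time(T, 25, A, Ea)
        print(f"curing at {T} C is ~{speedup:.1f}x faster than at 25 C")
    ```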

    “In this work, we really probed the curing reaction as a function of the temperature of the elastomer, but we also looked in depth at the mechanical properties of the elastomers when cured at elevated temperatures meant to achieve these higher throughputs and curing speeds,” Preston said.
    The researchers conducted mechanical testing on elastomer samples that were cured at room temperature and at elevated temperatures to see whether heating treatments impact the materials’ mechanical properties.
    “We found that exposing the elastomers to 70 degrees Celsius (158 Fahrenheit) does not alter the tensile and compressive properties of the material when compared to components that were cured at room temperature,” Yap said. “Moreover, to demonstrate the usage of accelerated curing when making a device, we fabricated soft, pneumatically actuated grippers at both elevated and room temperature conditions, and we observed no difference in the performance of the grippers upon pressurizing.”
    While temperature did not seem to have an effect on the elastomers’ ability to withstand mechanical stress, the researchers found that it did impact adhesion between components.
    “Say we’ve already cured a few different components that need to be assembled together into the complete, soft robotic system,” Preston said. “When we then try to adhere these components to each other, there’s an impact on the adhesion or the ability to stick them together. In this case, that is greatly affected by the extent of curing that has occurred before we tried to bond.”
    The research advances scientific understanding of how temperature can be used to manipulate fabrication processes involving elastomers, which could open up the soft robotics design space for new or improved applications. One key area of interest is the biomedical industry.

    “Surgical robots often benefit from being compliant or soft in nature, because operating inside the human body means you want to minimize the risk of puncture or bruising to tissue or organs,” Preston said. “So a lot of the robots that now operate inside the human body are moving to softer architectures and are benefiting from that. Some researchers have also started to look into using soft robotic systems to help reposition patients confined to a bed for long periods of time to try to avoid putting pressure on certain areas.”
    Other areas of potential use for soft robotics are agriculture (for instance picking fruits or vegetables that are fragile or bruise easily), disaster relief (search-and-rescue operations in impacted areas with limited or difficult access) and research (collecting or handling samples).
    “This study provides a framework that could expand the design space for manufacturing with thermally cured elastomers to create complex structures that exhibit high elasticity which can be used to develop medical devices, shock absorbers and soft robots,” Yap said.
    Silicone elastomers’ unique properties — biocompatibility, flexibility, thermal resistance, shock absorption, insulation and more — will continue to be an asset in a range of industries, and the current research can help expand and improve their use beyond current capabilities.
    The research was supported by the National Science Foundation (2144809), the Rice Academy of Fellows, NASA (80NSSC21K1276), the National GEM Consortium and the US Department of Energy through an appointment with the Energy Efficiency & Renewable Energy Science, Technology and Policy Program administered by the Oak Ridge Institute for Science and Education (ORISE) and managed by Oak Ridge Associated Universities (ORAU) under contract number DE-SC0014664.

  • An innovative mixed light field technique for immersive projection mapping

    Scientists at Tokyo Institute of Technology have developed a mixed light field technique that combines ray-controlled ambient lighting with projection mapping (PM) to achieve PM in bright surroundings. The technology uses a novel kaleidoscopic array to produce ray-controlled lighting and a binary search algorithm to keep ambient light off PM targets. It provides an immersive augmented reality experience with applications in various fields.
    Projection mapping (PM) is a fascinating technology that provides an immersive visual experience by projecting computer-generated images onto physical surfaces, smoothly merging real and virtual worlds. It allows us to experience augmented reality without the need for special glasses. As a result, PM is in high demand in various fields including enhanced stage productions, trying on clothing and make-up, and educational demonstrations.
    Despite its potential, current PM methods face challenges in bright environments with ambient lighting. Ambient lighting drowns the entire scene in light, reducing the contrast of PM targets, which is why conventional PM solutions mainly function in dark environments. However, even in the dark, PM fails to provide a natural scene: only the PM target is well lit while the rest of the surroundings remain dark, so the target appears overly bright. Additionally, non-PM objects appear too dark, breaking the immersion.
    To address these issues, a team of researchers from Japan, led by Associate Professor Yoshihiro Watanabe from the Department of Information and Communications Engineering at Tokyo Institute of Technology, has recently developed a new mixed light field approach for achieving PM in brightly lit environments. “In this approach, instead of using normal ambient light, we introduced a mixed light field in which a ray-controllable light avoids the PM target while adequately lighting other areas within the scene while the PM projector exclusively illuminates the target,” explains Dr. Watanabe. Their findings were published in the journal IEEE Transactions on Visualization and Computer Graphics and will be presented at the 31st IEEE Conference on Virtual Reality and 3D User Interfaces in Orlando, Florida, USA, with the presentation scheduled for March 19th at 1:30 PM local time (UTC-4).
    At the center of this novel approach lies the ray-controllable lighting unit. This unit reproduces a range of ambient lighting scenarios while also avoiding illuminating the PM target. To achieve this, the researchers developed a novel kaleidoscopic array comprising an array of mirrors positioned behind a lens array, which, in turn, was placed in front of a projector. This setup allowed the projector to produce a high-density light field, crucial for ray-controllable lighting.
    Furthermore, to avoid illuminating the PM target, the researchers deployed a camera to capture images of the scene and identify the pixels from the projector that illuminated the PM targets, subsequently turning them off. To identify these pixels, they employed a simple binary-search-based method, resulting in an effective mixed light field.
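    The press release describes the pixel identification only as a simple binary-search-based method. A generic divide-and-conquer version of that idea is sketched below with a hypothetical camera-feedback callback; the team's actual procedure may differ in how pixel groups are formed and tested.

    ```python
    def find_target_pixels(pixels, target_is_lit):
        """Binary-search-style identification of projector pixels hitting the PM target.

        pixels:        list of projector pixel indices to test.
        target_is_lit: callback (hypothetical camera feedback) that projects only
                       the given pixel group and reports whether the camera sees
                       any light on the PM target region.
        """
        if not pixels or not target_is_lit(pixels):
            return []                       # nothing in this group hits the target
        if len(pixels) == 1:
            return pixels                   # isolated an offending pixel
        mid = len(pixels) // 2
        return (find_target_pixels(pixels[:mid], target_is_lit) +
                find_target_pixels(pixels[mid:], target_is_lit))

    # Hypothetical usage: pretend pixels 1000-1015 are the ones hitting the target
    offending = set(range(1000, 1016))
    hits = find_target_pixels(list(range(4096)),
                              lambda group: any(p in offending for p in group))
    # In the real system, 'hits' would be turned off in the ray-controllable lighting unit.
    ```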
    This innovative approach allowed them to achieve high-contrast PM presentations in brightly lit surroundings. Notably, it preserved the natural appearance and shadows of ordinary non-PM objects, addressing a key challenge in PM technology. Through several captivating augmented scenes, the researchers showcased the seamless coexistence of PM targets and ordinary objects, providing an immersive visual experience.
    While the researchers identified some limitations, such as artefacts and the low efficiency of binary search algorithms with large PM targets, they have already identified potential solutions and are actively working to expand this approach in the future.
    “Our experiments prove the effectiveness of using this technique for achieving natural PM presentations with accurate lighting for all objects. Mixed light field has the potential to usher PM for various practical day-to-day applications, such as attractions, for support in manufacturing and trying on make-up,” says Dr. Watanabe, highlighting the applications of their technology.
    Overall, this approach marks a significant step for PM technology, paving the way for immersive augmented experiences in the future.

  • Virtual reality better than video for evoking fear, spurring climate action

    Depicting worst-case climate scenarios like expanding deserts and dying coral reefs may better motivate people to support environmental policies when delivered via virtual reality, according to a research team led by Penn State that studied how VR and message framing affect the impact of environmental advocacy communications. The study findings, published in the journal Science Communication, may help advocacy groups decide how best to frame and deliver their messages.
    The researchers examined individuals’ responses to climate change messaging when delivered through traditional video and desktop virtual reality — VR programs like Google Earth that can run on a mobile phone or computer. They found that loss-framed messages, or those that transitioned from a positive to negative climate scenario to emphasize what humanity has to lose, were more effective at convincing people to support environmental policies when delivered via VR. Gain-framed messages, which depict a more hope-inspiring change from a negative to a positive environmental outcome, had a greater impact when delivered through traditional video format.
    “The findings of this study suggest that in terms of seeking support for climate change policy, it’s the combination of the medium and the message that can determine the most effective solution for promoting a particular advocacy message,” said S. Shyam Sundar, senior author and the James P. Jimirro Professor of Media Effects at Penn State. “For consumers, the media literacy message here is that you’re much more emotionally vulnerable or more likely to be swayed by a VR presentation of an advocacy message, especially if the presentation focuses on loss.”
    The research team created two desktop virtual reality experiences, one gain-framed and one loss-framed, using the Unity3D game engine. In addition to the loss and gain framed messages, the VR programs also depicted healthy and unhealthy coral reef ecosystems, accompanied by lighter or darker ambient lighting and hopeful or sad audio, and allowed users to explore the aquatic environments. The researchers used the programs to record loss- and gain-framed videos based on the VR experiences.
    They chose to depict coral reef ecosystems because corals are one of the species most endangered by the effects of climate change and far removed from many peoples’ lived experiences.
    “It’s difficult to communicate environmental issues to non-scientists because the consequences are usually long-term and not easily foreseeable,” said Mengqi Liao, first author and doctoral candidate in mass communication at Penn State. “Not to mention that it’s usually very hard to bring people to an environment that has been damaged by climate change, such as coral reefs, which, based on decades of data collected in part from NASA’s airborne and satellite missions, have declined rapidly over the past 30 years. This is where VR comes in handy. You can bring the environment to people and show them what would happen if we fail to act.”
    The researchers recruited 130 participants from Amazon Mechanical Turk and asked them to complete a pre-questionnaire to measure variables like attitudes toward climate change and political ideology. Then they randomly assigned participants to a video or desktop VR experience. Within each of these groups, half saw the gain-framed messaging while the other half saw the loss-framed messaging.

    Participants in the loss-framed experiences saw healthy then unhealthy coral ecosystems, with a message explaining the negative consequences of failing to adopt climate change mitigation behaviors. Those in the gain-framed versions saw unhealthy then healthy coral ecosystems, with messages explaining the positive impacts of adopting climate policies. After completing the experiences, participants answered a questionnaire to measure how likely they would be to support environmental policies.
    The researchers found that loss-framed messages were most effective at motivating people to support climate change mitigation policies when delivered through desktop VR. Gain-framed messages were most effective when delivered in video format.
    Virtual reality is inherently interesting and attention-grabbing, and it has a low cognitive barrier to entry — even small children with limited reading ability can use it, according to Sundar.
    “The nickname for VR is empathy machine. It can generate better empathy because you’re one with the environment,” he said. “Loss-framed messaging tends to be more effective, more about emotions like fear rather than hope. Sometimes fear can be better represented in visually resplendent media like VR.”
    Gain-framed messaging, on the other hand, tends to involve more thinking about the consequences of action or inaction for the environment and what humans have to gain, Sundar explained. The movement and interactivity that come with VR may distract too much from the kind of thinking needed to process the potential gains highlighted in that type of messaging, which is better suited for traditional video or text.
    “With politicized topics like climate change, people are guided by their motivated reasoning, whereby an individual readily accepts information consistent with their worldview and ignores or rejects information that is inconsistent with that view,” Liao said. “Our study suggests that showing stark portrayals of environmental loss can be persuasive in spurring people into action, to support climate change issues regardless of their pre-existing worldviews.”
    Pejman Sajjadi, who completed the work as a postdoctoral scholar at Penn State and is now with Meta, also contributed to the research.

  • Machine learning classifier accelerates the development of cellular immunotherapies

    Making a personalised T cell therapy for cancer patients currently takes at least six months; scientists at the German Cancer Research Center (DKFZ) and the University Medical Center Mannheim have shown that the laborious first step of identifying tumor-reactive T cell receptors for patients can be replaced with a machine learning classifier that halves this time.
    Personalized cellular immunotherapies are considered promising new treatment options for various types of cancer. One of the therapeutic approaches currently being tested are so-called “T-cell receptor transgenic T-cells.” The idea behind this: immune T cells from a patient are equipped in the laboratory to recognize the patient’s own unique tumor, and then reinfused in large numbers to effectively kill the tumor cells.
    The development of such therapies is a complicated process. First, doctors isolate tumor-infiltrating T cells (TILs) from a sample of the patient’s tumor tissue. This cell population is then searched for T-cell receptors that recognize tumor-specific mutations and can thus kill tumor cells. This search is laborious and has so far required knowledge of the tumor-specific mutations that lead to protein changes that are recognized by the patients’ immune system. During this time the tumor is constantly mutating and spreading, making this step a race against time.
    “Finding the right T cell receptors is like looking for a needle in a haystack, costly and time-consuming,” says Michael Platten, Head of Department at the DKFZ and Director of the Department of Neurology at the University Medical Center Mannheim. “With a method that allows us to identify tumor-reactive T-cell receptors independently of knowledge of the respective tumor epitopes, the process could be considerably simplified and accelerated.”
    A team led by Platten and co-study head Ed Green has now presented a new technology that can achieve precisely this goal in a recent publication. As a starting point, the researchers isolated TILs from a melanoma patient’s brain metastasis and performed single-cell sequencing to characterise each cell. The T cell receptors expressed by these TILs were then individually tested in the lab to identify those that recognised and killed patient tumor cells. The researchers then combined these data to train a machine learning model to predict tumor-reactive T cell receptors. The resulting classifier could identify tumor-reactive T cells from TILs with 90% accuracy, works in many different types of tumor, and accommodates data from different cell sequencing technologies.
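    The press release does not describe the predicTCR model itself; as a generic illustration of the training setup it outlines (single-cell expression profiles as features, lab-confirmed reactivity as labels), a simple supervised classifier could be trained as below. The feature matrix, labels, and model choice are placeholders, not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Placeholder data standing in for single-cell profiles of TILs:
    # rows = individual T cells, columns = per-cell features (e.g. gene expression),
    # labels = 1 if the cell's TCR was confirmed tumor-reactive in the lab.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(1000, 50))
    y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

    # Ranking unlabeled TILs by predicted reactivity would replace the
    # exhaustive one-by-one receptor testing described above.
    reactivity_scores = clf.predict_proba(X)[:, 1]
    ```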
    “predicTCR enables us to cut the time it takes to identify personalised tumor reactive T cell receptors from over three months to a matter of days, regardless of tumor type,” said Ed Green.
    “We are now focusing on bringing this technology into clinical practice here in Germany. To finance further development, we have founded the biotech start-up Tcelltech,” adds Michael Platten. “predicTCR is one of the key technologies of this new DKFZ spin-off.”

  • New study shows analog computing can solve complex equations and use far less energy

    A team of researchers including University of Massachusetts Amherst engineers has proven that their analog computing device, called a memristor, can complete complex scientific computing tasks while bypassing the limitations of digital computing.
    Many of today’s important scientific questions — from nanoscale material modeling to large-scale climate science — can be explored using complex equations. However, today’s digital computing systems are reaching their limit for performing these computations in terms of speed, energy consumption and infrastructure.
    Qiangfei Xia, UMass Amherst professor of electrical and computer engineering, and one of the corresponding authors of the research published in Science, explains that, with current computing methods, every time you want to store information or give a computer a task, it requires moving data between memory and computing units. With complex tasks moving larger amounts of data, you essentially get a processing “traffic jam” of sorts.
    One way traditional computing has aimed to solve this is by increasing bandwidth. Instead, Xia and his colleagues at UMass Amherst, the University of Southern California, and computing technology maker, TetraMem Inc. have implemented in-memory computing with analog memristor technology as an alternative that can avoid these bottlenecks by reducing the number of data transfers.
    The team’s in-memory computing relies on an electrical component called a memristor — a combination of memory and resistor (which controls the flow of electricity in a circuit). A memristor controls the flow of electrical current in a circuit, while also “remembering” the prior state, even when the power is turned off, unlike today’s transistor-based computer chips, which can only hold information while there is power. The memristor device can be programmed into multiple resistance levels, increasing the information density in one cell.
    When organized into a crossbar array, such a memristive circuit does analog computing by using physical laws in a massively parallel fashion, substantially accelerating matrix operation, the most frequently used but very power-hungry computation in neural networks. The computing is performed at the site of the device, rather than moving the data between memory and processing. Using the traffic analogy, Xia compares in-memory computing to the nearly empty roads seen at the height of the pandemic: “You eliminated traffic because [nearly] everybody worked from home,” he says. “We work simultaneously, but we only send the important data/results out.”
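    Concretely, a crossbar performs a matrix-vector multiply in one step: input voltages drive the rows, each cross-point conductance scales its row voltage (Ohm's law), and the currents summed along each column (Kirchhoff's current law) form the output vector. The snippet below simulates only that ideal behaviour numerically; it is an idealized model, not a circuit or device simulation.

    ```python
    import numpy as np

    def crossbar_mvm(conductances, voltages):
        """Ideal memristor crossbar: column currents I = G^T @ V.

        conductances: (rows, cols) array G of programmed memristor states.
        voltages:     (rows,) input vector applied to the row lines.
        Returns the (cols,) vector of column currents, i.e. the matrix-vector product.
        """
        return conductances.T @ voltages

    # Hypothetical 4x3 crossbar encoding a small weight matrix
    G = np.array([[1.0, 0.5, 0.2],
                  [0.3, 0.8, 0.1],
                  [0.6, 0.4, 0.9],
                  [0.2, 0.7, 0.3]]) * 1e-4   # conductances in siemens
    V = np.array([0.1, 0.2, 0.05, 0.15])     # input voltages
    I = crossbar_mvm(G, V)                   # all multiply-accumulates happen at once
    ```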
    Previously, these researchers demonstrated that their memristor can complete low-precision computing tasks, like machine learning. Other applications have included analog signal processing, radiofrequency sensing, and hardware security.

    “In this work, we propose and demonstrate a new circuit architecture and programming protocol that can efficiently represent high-precision numbers using a weighted sum of multiple, relatively low-precision analog devices, such as memristors, with a greatly reduced overhead in circuitry, energy and latency compared with existing quantization approaches,” says Xia.
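    The quoted idea of representing a high-precision number as a weighted sum of several low-precision analog devices can be illustrated in software: split the value into low-precision digit slices, compute with each slice separately (each slice would live on its own crossbar), and recombine the per-slice results with the corresponding weights. The sketch below shows only that arithmetic, with made-up precision figures, not the circuit architecture or programming protocol from the paper.

    ```python
    import numpy as np

    def matrix_slices(W_int, base=8, n_slices=4):
        """Split an integer weight matrix into low-precision digit matrices."""
        slices = []
        for _ in range(n_slices):
            slices.append(W_int % base)     # each digit matrix fits low-precision cells
            W_int = W_int // base
        return slices                        # least-significant slice first

    def sliced_mvm(W_int, v, base=8, n_slices=4):
        """High-precision MVM as a weighted sum of low-precision (sliced) MVMs."""
        return sum((base**i) * (S.T @ v)     # each slice would run on its own crossbar
                   for i, S in enumerate(matrix_slices(W_int, base, n_slices)))

    W = np.random.randint(0, 8**4, size=(4, 3))   # "high-precision" integer weights
    v = np.random.randint(0, 10, size=4)
    assert np.array_equal(sliced_mvm(W, v), W.T @ v)
    ```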
    “The breakthrough for this particular paper is that we push the boundary further,” he adds. “This technology is not only good for low-precision, neural network computing, but it can also be good for high-precision, scientific computing.”
    For the proof-of-principle demonstration, the memristor solved static and time-evolving partial differential equations, Navier-Stokes equations, and magnetohydrodynamics problems.
    “We pushed ourselves out of our own comfort zone,” he says, expanding beyond the low-precision requirements of edge computing neural networks to high-precision scientific computing.
    It took over a decade for the UMass Amherst team and collaborators to design a proper memristor device and build sizeable circuits and computer chips for analog in-memory computing. “Our research in the past decade has made analog memristor a viable technology. It is time to move such a great technology into the semiconductor industry to benefit the broad AI hardware community,” Xia says.