More stories

  • Robots can use eye contact to draw out reluctant participants in groups

    Eye contact is key to establishing a connection, and teachers use it often to encourage participation. But can a robot do this too? Can it draw a response simply by making “eye” contact, even with people who are less inclined to speak up? A recent study suggests that it can.
    Researchers at KTH Royal Institute of Technology published results of experiments in which robots led a Swedish word game with players of varying proficiency in the language. They found that by redirecting its gaze toward less proficient players, a robot can elicit involvement from even the most reluctant participants.
    Researchers Sarah Gillet and Ronald Cumbal say the results offer evidence that robots could play a productive role in educational settings.
    Calling on someone by name isn’t always the best way to elicit engagement, Gillet says. “Gaze can by nature influence very dynamically how much people are participating, especially if there is this natural tendency for imbalance — due to the differences in language proficiency,” she says.
    “If someone is not inclined to participate for some reason, we showed that gaze is able to overcome this difference and help everyone to participate.”
    Cumbal says that studies have shown that robots can support group discussion, but this is the first study to examine what happens when a robot uses gaze in a group interaction that isn’t balanced — when it is dominated by one or more individuals.
    The experiment involved pairs of players — one fluent in Swedish and one learning the language. The players were instructed to give the robot clues in Swedish so that it could guess the correct term. The robot’s face was an animated projection on a specially designed plastic mask.
    While it would be natural for a fluent speaker to dominate such a scenario, Cumbal says, the robot was able to prompt the participation of the less fluent player by redirecting its gaze naturally toward them and silently waiting for them to hazard an attempt.
    “Robot gaze can modify group dynamics — what role people take in a situation,” he says. “Our work builds on that and shows further that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute.”
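    The behavior the researchers describe, namely monitoring who is speaking and shifting gaze toward the quietest participant before silently waiting, can be pictured as a simple policy. Below is a minimal sketch of that idea; it is not the study's actual controller, and the participant labels, timing bookkeeping, and selection rule are invented for illustration.

    ```python
    class GazeBalancer:
        """Toy gaze policy: look toward whoever has spoken least, then wait.

        Hypothetical sketch, not the study's controller; the speech-time
        bookkeeping and participant labels are invented for illustration.
        """

        def __init__(self, participants):
            # Accumulated speaking time, in seconds, per participant.
            self.speaking_time = {p: 0.0 for p in participants}

        def record_speech(self, participant, seconds):
            self.speaking_time[participant] += seconds

        def next_gaze_target(self):
            # Redirect gaze toward the least-active participant and hold it,
            # silently inviting them to take the next turn.
            return min(self.speaking_time, key=self.speaking_time.get)


    balancer = GazeBalancer(["fluent_player", "language_learner"])
    balancer.record_speech("fluent_player", 12.5)  # fluent speaker dominates
    balancer.record_speech("language_learner", 3.0)
    print(balancer.next_gaze_target())  # -> language_learner
    ```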

    Story Source:
    Materials provided by KTH, Royal Institute of Technology. Note: Content may be edited for style and length.

  • An electrically charged glass display smoothly transitions between a spectrum of colors

    Scientists have developed a see-through glass display with a high white light contrast ratio that smoothly transitions between a broad spectrum of colors when electrically charged. The technology, from researchers at Jilin University in Changchun, China, overcomes limitations of existing electrochromic devices by harnessing interactions between metal ions and ligands, opening the door for numerous future applications. The work appears March 10 in the journal Chem.
    “We believe that the method behind this see-through, non-emissive display may accelerate the development of transparent, eye-friendly displays with improved readability for bright working conditions,” says Yu-Mo Zhang, an associate professor of chemistry at Jilin University and an author on the study. “As an inevitable display technology in the near future, non-emissive see-through displays will be ubiquitous and irreplaceable as a part of the Internet of Things, in which physical objects are interconnected through software.”
    With the application of voltage, electrochromic displays offer a platform in which light’s properties can be continuously and reversibly manipulated. These devices have been proposed for use in windows, energy-saving electronic price tags, flashy billboards, rearview mirrors, augmented virtual reality, and even artificial irises. However, current models come with limitations — they tend to have low contrast ratios, especially for white light, poor stability, and limited color variations, all of which have prevented electrochromic displays from reaching their technological potential.
    To overcome these deficiencies, Yuyang Wang and colleagues developed a simple chemical approach in which metal ions induce a wide variety of switchable dyes to take on particular structures, then stabilize them once they have reached the desired configurations. To trigger a color change, an electric field is applied to switch the metal ions’ valences, forming new bonds between the metal ions and the molecular switches.
    “Differently from the traditional electrochromic materials, whose color-changing motifs and redox motifs are located at the same site, this new material is an indirect-redox-color-changing system composed by switchable dyes and multivalent metal ions,” says Zhang.
    To test this approach, the researchers fabricated an electrochromic device by injecting a material containing metal salts, dyes, electrolytes, and solvent into a sandwich structure of two electrodes separated by an adhesive spacer. Next, they performed a battery of spectral and electrochemical tests, finding that the devices could effectively achieve cyan, magenta, yellow, red, green, black, pink, purple, and gray-black displays while maintaining a high contrast ratio. The prototype also shifted seamlessly from a colorless, transparent display to black — the most useful color for commercial applications — with high coloration efficiency, low transmittance-change voltage, and a white light contrast ratio that would be suitable for real transparent displays.
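    For context, the contrast ratio reported in such tests is conventionally the transmittance of the device in its clear (bleached) state divided by its transmittance in the colored state. A small worked example with made-up transmittance values (the paper's measured numbers are not reproduced here):

    ```python
    def contrast_ratio(t_bleached: float, t_colored: float) -> float:
        """Contrast ratio of an electrochromic device: transmittance in the
        transparent (bleached) state over the colored state."""
        return t_bleached / t_colored

    # Hypothetical values for illustration only.
    print(contrast_ratio(t_bleached=0.80, t_colored=0.02))  # -> 40.0
    ```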
    “The low cost and simple preparation process of this glass device will also benefit its scalable production and commercial applications,” notes Zhang.
    Next, the researchers plan to optimize the display’s performance so that it may quickly meet the requirements of high-end displays for real-world applications. Additionally, to avoid leakage from its liquid components, they plan to develop improved fabrication technologies that can produce solid or semi-solid electrochromic devices.
    “We are hoping that more and more visionary researchers and engineers cooperate with each other to optimize the electrochromic displays and promote their commercialization,” says Zhang.
    The authors received financial support from the National Natural Science Foundation of China.

    Story Source:
    Materials provided by Cell Press. Note: Content may be edited for style and length.

  • Finding quvigints in a quantum treasure map

    Researchers have struck quantum gold — and created a new word — by enlisting machine learning to efficiently navigate a 20-dimensional quantum treasure map.
    Physicist Dr Markus Rambach from the ARC Centre of Excellence for Engineered Quantum Systems (EQUS) at The University of Queensland said the team was able to find unknown quantum states more quickly and accurately, using a technique called self-guided tomography.
    The team also introduced the ‘quvigint’, which is like a qubit (the quantum version of a classical bit that takes on the values ‘0’ or ‘1’) except that it takes on not two, but 20 possible values.
    Dr Rambach said high-dimensional quantum states such as quvigints were ideal for storing and sending large amounts of information securely.
    However, finding unknown states becomes increasingly difficult in higher dimensions, because the same scaling that gives quantum devices their power also limits our ability to describe them.
    He said this problem was akin to navigating a high-dimensional quantum treasure map.

    “We know where we are, and that there’s treasure, but we don’t know which way to go to get to it,” Dr Rambach said.
    “Using standard tomography, this problem would be solved by first determining which directions you need to look in to ensure you cover the whole map, then collecting and storing all the relevant data, and finally processing the data to find the treasure.
    “Instead, using self-guided tomography, we pick two directions at random, try them both, pick the one that gets us closer to the treasure based on clues from the machine learning algorithm, and then repeat this until we reach it.
    “This technique saves a huge amount of time and energy, meaning we can find the treasure — the unknown quvigint — much more quickly and easily.”
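    The loop Dr Rambach describes, picking two directions, trying both, and keeping whichever estimate lands closer to the treasure, can be sketched as a stochastic search. In the toy version below a "quvigint" is just a normalized 20-dimensional complex vector, fidelity is computed directly rather than estimated from measurements, and the step sizes and update rule are illustrative rather than the published algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    D = 20  # a quvigint lives in a 20-dimensional state space

    def normalize(v):
        return v / np.linalg.norm(v)

    def random_state():
        return normalize(rng.normal(size=D) + 1j * rng.normal(size=D))

    def fidelity(a, b):
        # Overlap |<a|b>|^2; in the lab this is estimated from measurements.
        return abs(np.vdot(a, b)) ** 2

    treasure = random_state()   # the unknown state to be found
    guess = random_state()      # starting point on the map

    step = 0.5
    for _ in range(2000):
        # Pick a random direction and try stepping both ways along it...
        direction = random_state()
        plus = normalize(guess + step * direction)
        minus = normalize(guess - step * direction)
        # ...then keep whichever candidate lies closer to the treasure.
        candidates = [guess, plus, minus]  # keeping guess guards against bad steps
        guess = max(candidates, key=lambda s: fidelity(s, treasure))
        step *= 0.999  # take gradually smaller steps as we close in

    print(f"final fidelity: {fidelity(guess, treasure):.3f}")  # climbs toward 1.0
    ```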
    To illustrate the technique, the team simulated a quvigint travelling through the atmosphere, as it would when being used to send quantum information between two points on Earth or to a satellite.

    As the quvigint travels, it is modified by atmospheric turbulence.
    Standard tomography is very susceptible to this type of noise, but by using self-guided tomography the team was able to reconstruct the original quvigint with high accuracy.
    Dr Jacq Romero, also at EQUS and UQ, said self-guided tomography was unlike other methods for finding unknown quantum states.
    “Self-guided tomography is efficient, accurate, robust to noise and readily scalable to high dimensions, such as quvigints,” Dr Romero said.
    “Self-guided tomography is a robust tomography method that is agnostic to the physical system, so it can be applied to other systems such as atoms or ions as well.”

    Story Source:
    Materials provided by University of Queensland. Note: Content may be edited for style and length.

  • Learning to help the adaptive immune system

    Scientists from the Institute of Industrial Science at The University of Tokyo demonstrated how the adaptive immune system uses a method similar to reinforcement learning to control the immune reaction to repeat infections. This work may lead to significant improvements in vaccine development and interventions to boost the immune system.
    In the human body, the adaptive immune system fights germs by remembering previous infections so it can respond quickly if the same pathogens return. This complex process depends on the cooperation of many cell types. Among these are T helpers, which assist by coordinating the response of other parts of the immune system — called effector cells — such as T killer and B cells. When an invading pathogen is detected, antigen presenting cells bring an identifying piece of the germ to a T cell. Certain T cells become activated and multiply many times in a process known as clonal selection. These clones then marshal a particular set of effector cells to battle the germs. Although the immune system has been extensively studied for decades, the “algorithm” used by T cells to optimize the response to threats is largely unknown.
    Now, scientists at The University of Tokyo have used an artificial intelligence framework to show that the abundances of T helper clones act like the “hidden layer” between inputs and outputs in an artificial neural network commonly used in adaptive learning. In this case, the antigens presented are the inputs, and the responding effector immune cells are the output.
    “Just as a neural network can be trained in machine learning, we believe the immune network can reflect associations between antigen patterns and the effective responses to pathogens,” first author Takuya Kato says.
    The main difference between the adaptive immune system and machine learning is that only the abundance of each type of T helper cell can be varied, as opposed to the connection weights between nodes in each layer. The team used computer simulations to predict the distribution of T cell abundances after adaptive learning. These values were found to agree with experimental data based on the genetic sequencing of actual T helper cells.
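    The analogy can be made concrete with a tiny network in which antigen patterns are the input, effector responses the output, and the only adjustable quantities are clone abundances, mirroring the constraint just described. The sketch below is schematic only: the connectivity matrices, the "reward" signal, and the update rule are invented for illustration and differ from the paper's actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_antigens, n_clones, n_effectors = 4, 6, 3

    # Fixed "wiring": which antigens each T helper clone recognizes, and
    # which effector responses it promotes. Only abundance is adjustable.
    recognizes = rng.random((n_clones, n_antigens))
    promotes = rng.random((n_effectors, n_clones))
    abundance = np.ones(n_clones)  # the trainable "hidden layer"

    def respond(antigen):
        activation = recognizes @ antigen  # clones sense the antigen
        hidden = abundance * activation    # weighted by clone abundance
        return promotes @ hidden           # effector response (the output)

    def reinforce(antigen, reward, lr=0.1):
        # Reinforcement-like update: clones active during a successful
        # response expand, a cartoon of clonal selection.
        global abundance
        abundance += lr * reward * (recognizes @ antigen)
        abundance = np.clip(abundance, 0.0, None)  # abundances stay nonnegative

    antigen = np.array([1.0, 0.0, 0.0, 0.2])
    before = respond(antigen)
    for _ in range(20):
        reinforce(antigen, reward=1.0)  # repeated exposure, successful clearance
    after = respond(antigen)
    print(before, after)  # response to the remembered antigen is amplified
    ```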
    “Our theoretical framework may completely change our understanding of adaptive immunity as a real learning system,” says co-author Tetsuya Kobayashi. “This research can shed light on other complex adaptive systems, as well as ways to optimize vaccines to evoke a stronger immune response.”

    Story Source:
    Materials provided by Institute of Industrial Science, The University of Tokyo. Note: Content may be edited for style and length.

  • Using artificial intelligence to generate 3D holograms in real-time

    Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick. Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.
    Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead — check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.
    Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
    “People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”
    Shi believes the new approach, which the team calls “tensor holography,” will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing.
    Shi worked on the study, published in Nature, with his advisor and co-author Wojciech Matusik. Other co-authors include Beichen Li of EECS and the Computer Science and Artificial Intelligence Laboratory at MIT, as well as former MIT researchers Changil Kim (now at Facebook) and Petr Kellnhofer (now at Stanford University).

    The quest for better 3D
    A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.
    In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. So, while a photograph of Monet’s “Water Lilies” can highlight the paintings’ color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
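    The distinction is easy to state in code: a light field at each point is a complex number, and a photograph keeps only the squared magnitude (brightness), while a hologram keeps magnitude and phase. A minimal generic sketch, not tied to any particular display:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # A toy optical field: complex amplitude at each of 4 x 4 pixels.
    amplitude = rng.random((4, 4))
    phase = rng.uniform(0, 2 * np.pi, (4, 4))
    field = amplitude * np.exp(1j * phase)

    photograph = np.abs(field) ** 2              # brightness only; phase (depth) lost
    hologram = (np.abs(field), np.angle(field))  # brightness and phase both retained

    print(photograph.shape, hologram[0].shape, hologram[1].shape)
    ```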
    First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard copy only, making them difficult to reproduce and share.
    Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don’t model occlusion with photorealistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.

    They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.
    The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.
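    In outline, such a network maps an RGB image plus per-pixel depth to an amplitude-and-phase hologram. Here is a schematic PyTorch sketch of that mapping; the layer counts, channel widths, and output ranges are placeholders, not the published tensor holography architecture:

    ```python
    import math

    import torch
    import torch.nn as nn

    class ToyHolographyNet(nn.Module):
        """Schematic only: maps an RGB-D image (4 channels) to a 2-channel
        hologram (amplitude and phase). Sizes are placeholders; the actual
        published network and its training procedure differ."""

        def __init__(self, width=32, depth=6):
            super().__init__()
            layers = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
            layers += [nn.Conv2d(width, 2, 3, padding=1)]
            self.net = nn.Sequential(*layers)

        def forward(self, rgbd):
            out = self.net(rgbd)
            amplitude = torch.sigmoid(out[:, :1])        # amplitude in [0, 1]
            phase = math.pi * torch.tanh(out[:, 1:])     # phase in [-pi, pi]
            return amplitude, phase

    net = ToyHolographyNet()
    rgbd = torch.rand(1, 4, 64, 64)  # one RGB-D frame
    amp, phi = net(rgbd)
    print(amp.shape, phi.shape)      # torch.Size([1, 1, 64, 64]) twice
    ```

    Training such a model would, as the article describes, consist of comparing its output against each image's ground-truth hologram from the 4,000-pair database and adjusting the convolution weights.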
    By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations, an efficiency that surprised even the team.
    “We are amazed at how well it performs,” says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.
    “A considerable leap”
    Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.
    Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.
    “It’s a considerable leap that could completely change people’s attitudes toward holography,” says Matusik. “We feel like neural networks were born for this task.”
    The work was supported, in part, by Sony.

  • Successful trial shows way forward on quieter drone propellers

    Researchers have published a study revealing their successful approach to designing much quieter propellers.
    The Australian research team used machine learning to design their propellers, then 3D printed several of the most promising prototypes for experimental acoustic testing at the Commonwealth Scientific and Industrial Research Organisation’s specialised ‘echo-free’ chamber.
    Results now published in Aerospace Research Central show the prototypes made around 15 dB less noise than commercially available propellers, validating the team’s design methodology.
    RMIT University aerospace engineer and lead researcher Dr Abdulghani Mohamed said the impressive results were enabled by two key innovations: the numerical algorithms developed to design the propellers, and the testing’s consideration of how noise is perceived by the human ear.
    “By using our algorithms to iterate through a variety of propeller designs, we were able to optimise for different metrics such as thrust, torque, sound directivity and much more. We also formulated a new metric, which involves how the human ear perceives sound, and propose to use that in future designs,” he said.
    “Our method for optimising design can be applied to small propellers used on drones to much larger ones used for future urban air mobility vehicles — or air taxis — designed to carry human passengers.”
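    The design loop Mohamed describes, iterating candidate geometries, scoring each on several metrics, and keeping the best trade-off, can be caricatured in a few lines. Everything below (the parameter ranges, the surrogate scoring formulas, the weighting) is invented for illustration; the team's actual aeroacoustic models are far more sophisticated:

    ```python
    import random

    random.seed(3)

    def evaluate(design):
        """Hypothetical surrogate metrics for a blade design. Stand-ins for
        validated aeroacoustic solvers, not real physics."""
        pitch, chord, blades = design["pitch"], design["chord"], design["blades"]
        thrust = blades * chord * pitch                    # more blade area, more thrust
        noise = 40 + 25 * pitch + 10 * blades - 8 * chord  # toy dB estimate
        return thrust, noise

    def score(design, noise_weight=0.5):
        thrust, noise = evaluate(design)
        return thrust - noise_weight * noise  # trade thrust against annoyance

    best = None
    for _ in range(10_000):  # iterate through a variety of designs
        candidate = {
            "pitch": random.uniform(0.1, 0.5),
            "chord": random.uniform(0.5, 2.0),
            "blades": random.choice([2, 3, 4]),
        }
        if best is None or score(candidate) > score(best):
            best = candidate

    print(best, evaluate(best))
    ```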
    The team, which also included Melbourne-based aerospace company XROTOR, explored how various manipulations of propeller blade noise affected how it was perceived by the human ear.

    Mohamed said this modulation had the potential to be used as an important design metric for future propellers.
    “The modulation of high frequency noise is a major factor in the human perception of rotor noise. Human ears are more sensitive to certain frequencies than others and our perception of sound also changes as we age,” he explained.
    “By designing to such metrics, which take into account human perception, we can design less annoying propellers, which one day may actually be pleasant to hear.”
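    The team's new perception metric is not spelled out in this article, but the general idea of weighting sound by the ear's frequency-dependent sensitivity is standard. As a stand-in, here is the classic A-weighting curve from IEC 61672 in code; it is offered only as an illustration of perceptual weighting, not as the study's metric, which also considers noise modulation:

    ```python
    import math

    def a_weighting_db(f: float) -> float:
        """A-weighting gain in dB at frequency f (Hz), per IEC 61672.

        A standard proxy for perceptual weighting; the study's own metric
        is not reproduced here.
        """
        ra = (12194**2 * f**4) / (
            (f**2 + 20.6**2)
            * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
            * (f**2 + 12194**2)
        )
        return 20 * math.log10(ra) + 2.0

    for f in (100, 1000, 4000):
        print(f, round(a_weighting_db(f), 1))  # ears hear mid-high frequencies best
    ```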
    XROTOR Managing Director, Geoff Durham, said it was exciting to see prototype testing show the new designs could significantly reduce the sound impact of drones.
    “Not only were the designs appreciably quieter to the human ear, but the propellers had a higher thrust profile against standard market propellers at the same throttle signal input,” he said.
    The RMIT research team also included Dr Woutijn Baars, Dr Robert Carrese, Professor Simon Watkins and Professor Pier Marzocca. The prototypes were 3D printed at RMIT’s Advanced Manufacturing Precinct.

    Story Source:
    Materials provided by RMIT University. Original written by Grace Taylor and Michael Quin. Note: Content may be edited for style and length.

  • Carbon nanotube patterns called moirés created for materials research

    A material’s behavior depends on many things, including not just its composition but also the arrangement of its molecular parts. For the first time, researchers have found a way to coax carbon nanotubes into creating moiré patterns. Such structures could be useful in materials research, in particular in the field of superconducting materials.
    Professor Hiroyuki Isobe from the Department of Chemistry at the University of Tokyo and his team create nanoscopic material structures, primarily from carbon. Their aim is to explore new ways to create carbon nanostructures and to find useful applications for them. The most recent breakthrough from their lab is a new form of carbon nanotube with a very specific arrangement of atoms that has attracted much attention in the field of nanomaterials.
    “We successfully created different kinds of atom-thick carbon nanotubes which self-assemble into complex structures,” said Isobe. “These nanotubes are made from rolled up sheets of carbon atoms arranged hexagonally. We made wide ones and narrow ones which fit inside them. This means the resulting complex tube structure has a double-layered wall. The hexagonal patterns of these layers are offset such that the two layers together create what is known as a moiré pattern. And this is significant for materials researchers.”
    You may have seen moiré patterns in everyday life: when repeating patterns overlay one another, a new resultant pattern emerges, and it shifts slightly if you move one of the layers or move relative to them. Look at a screen door through a mesh curtain, for example, or hold two sieves together. In the case of the team’s moiré patterns, they are made when one hexagonal grid of carbon atoms is rotated slightly relative to another, similar hexagonal grid.
    
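    The geometry is quantitative. For two identical lattices of period a twisted by a small angle, the moiré pattern repeats on a much longer length scale, L = a / (2 sin(theta/2)). This is the standard twisted-bilayer relation, not a result from this paper, and the lattice constant below is the graphene value used purely as a worked example:

    ```python
    import math

    def moire_period(a: float, theta_deg: float) -> float:
        """Moiré period L = a / (2 sin(theta/2)) for two identical hexagonal
        lattices twisted by theta. Standard geometry, not from this paper."""
        theta = math.radians(theta_deg)
        return a / (2 * math.sin(theta / 2))

    # Graphene-like lattice constant ~0.246 nm, twisted by 5 degrees:
    print(round(moire_period(0.246, 5.0), 2), "nm")  # ~2.82 nm superlattice
    ```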
    These patterns aren’t just for show; they can imbue materials with functional properties. Two areas that might especially benefit are synthetic chemistry, where the moiré carbon bilayer tubes could be challenging yet attractive targets for molecular self-assembly, and superconducting materials, where such structures could lead to a generational leap in electrical devices that require far less power and are far more capable than current ones.

    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.

  • Avatar marketing: Moving beyond gimmicks to results

    Researchers from the University of Texas-Arlington, the University of Virginia, Sun Yat-Sen University, and the University of Washington have published a new paper in the Journal of Marketing that seeks to advance the discipline of avatar-based marketing.
    The study, titled “An Emerging Theory of Avatar Marketing,” is authored by Fred Miao, Irina Kozlenkova, Haizhong Wang, Tao Xie, and Robert Palmatier.
    Samsung’s Star Labs brought digital avatars to CES 2020, but the promotion was burned by its own fanfare. The avatars looked realistic and successfully answered some questions, but only when they were heavily controlled. As this example illustrates, avatar-based marketing is still in its nascent stage.
    A pressing question is how to design effective avatars. Given the considerable ambiguity about the definition of an avatar, the researchers first identify and evaluate key conceptual elements of the term and offer this definition: digital entities with anthropomorphic appearance, controlled by a human or software, that have an ability to interact.
    Based on this definition, they present a typology of avatar design to isolate elements that academics and managers can leverage to ensure avatars’ effectiveness for achieving specific goals (e.g., providing standard vs. personalized solutions). Design elements affect avatars’ form realism and behavioral realism. Form realism refers to the extent to which the avatar’s shape appears human, while behavioral realism captures the degree to which it behaves as a human would in the physical world. Form realism includes design elements such as spatial dimension (2D/3D), movement (static vs. dynamic), and human characteristics (e.g., name, gender), whereas behavioral realism captures the avatar’s communication modality (e.g., verbal), response type (scripted vs. natural response), social content, and its controlling entity.
    The study reveals a key limitation in avatar design: lack of consideration of the alignment between form and behavioral realism of avatars. As Miao explains, “If the levels of form and behavioral realism are mismatched, the consequences for avatars’ effectiveness may be profound and can help explain inconsistent avatar performance.”
    Integrating form and behavioral realism, the study features a 2 x 2 avatar taxonomy that identifies four distinct categories of avatars: simplistic, superficial, intelligent unrealistic, and digital human avatars. A simplistic avatar has an unrealistic human appearance (e.g., 2D, visually static, cartoonish image) and engages in low intelligence behaviors (e.g., scripted, only task-specific communication). For example, in the Netherlands, ING Bank’s 2D, cartoonish-looking avatar Inge responds to simple customer inquiries from a set of predetermined answers. In contrast, a superficial avatar has a realistic anthropomorphic appearance (e.g., 3D, visually dynamic, photorealistic image), such as NatWest Bank’s Cora, but low behavioral realism in that it is only able to offer preprogrammed answers to specific questions. An intelligent unrealistic avatar (e.g., REA) is characterized by humanlike cognitive and emotional intelligence, but exhibits an unrealistic (e.g., cartoonish) human image. These avatars can engage customers in real-time, complex transactions without being mistaken for human agents. Finally, a digital human avatar such as SK-II’s YUMI is the most advanced category of avatars, characterized by both a highly realistic anthropomorphic appearance and humanlike cognitive and emotional intelligence, and is designed to provide the highest degree of realism during interactions with human users.
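    The taxonomy reduces to two axes, so it can be rendered compactly as code. In the sketch below the category names are the authors'; the yes/no realism judgments assigned to each example avatar are our reading of the descriptions above, not scores from the paper:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Avatar:
        name: str
        form_realism: bool        # realistic human appearance (3D, photorealistic)?
        behavioral_realism: bool  # humanlike cognitive/emotional intelligence?

        @property
        def category(self) -> str:
            # The paper's 2 x 2 taxonomy.
            if self.form_realism and self.behavioral_realism:
                return "digital human"
            if self.form_realism:
                return "superficial"
            if self.behavioral_realism:
                return "intelligent unrealistic"
            return "simplistic"

    for avatar in (
        Avatar("Inge", form_realism=False, behavioral_realism=False),
        Avatar("Cora", form_realism=True, behavioral_realism=False),
        Avatar("REA", form_realism=False, behavioral_realism=True),
        Avatar("YUMI", form_realism=True, behavioral_realism=True),
    ):
        print(avatar.name, "->", avatar.category)
    ```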

    Based on observations of relative effectiveness of these avatars in practice, the researchers present propositions that predict outcomes of avatar marketing. In particular:
    - As the form realism of an avatar increases, so do customers’ expectations for its behavioral realism.
    - Differences between the avatar’s form and behavioral realism have asymmetric effects, such that customers experience positive (negative) disconfirmation when an avatar’s behavioral realism is greater (less) than its form realism.
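    The second proposition is essentially a signed comparison. A tiny sketch makes it mechanical; treating realism as a number on an arbitrary scale is our simplification for illustration, not the paper's measurement model:

    ```python
    def disconfirmation(form_realism: float, behavioral_realism: float) -> str:
        """Positive disconfirmation when behavior exceeds the expectations
        its form sets; negative when a realistic form overpromises."""
        gap = behavioral_realism - form_realism
        if gap > 0:
            return "positive: avatar outperforms expectations"
        if gap < 0:
            return "negative: avatar underdelivers on its looks"
        return "confirmed: form and behavior aligned"

    # The Star Labs pattern: high form realism, low behavioral realism.
    print(disconfirmation(form_realism=0.9, behavioral_realism=0.3))
    ```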
    Recall the avatar of Samsung’s Star Labs, which is high in form realism but low in behavioral realism. Kozlenkova says that “Our analysis indicates that Samsung’s avatar sets audience expectations too high, which may have led to a negative disconfirmation, thereby resulting in an unfavorable customer experience.”
    Avatars’ effectiveness may be highly contingent on the level of perceived uncertainty users experience during their interactions with avatars as well as choice of media channel (e.g., smartphones vs. desktops). Finally, design efforts should take the customer relationship phase into account because the relative effects of customers’ cognitive, affective, and social responses differ across relationship stages.
    The framework generates practical implications that urge firms to consider five interrelated areas: (1) when to deploy avatars, (2) avatar form realism, (3) avatar behavioral realism, (4) form-behavioral realism alignment, and (5) avatar contingency effects for optimal avatar-based marketing applications.