More stories

  • A helping hand for working robots

    Until now, competing types of robotic hand designs offered a trade-off between strength and durability. One commonly used design, employing a rigid pin joint that mimics the mechanism in human finger joints, can lift heavy payloads, but is easily damaged in collisions, particularly if hit from the side. Meanwhile, fully compliant hands, typically made of molded silicone, are more flexible, harder to break, and better at grasping objects of various shapes, but they fall short on lifting power.
    The DGIST research team investigated the idea that a partially compliant robot hand, using a rigid link connected to a structure known as a Crossed Flexural Hinge (CFH), could increase the robot’s lifting power while minimizing damage in the event of a collision. Generally, a CFH is made of two strips of metal arranged in an X-shape that can flex or bend in one direction while remaining rigid in others, without creating friction.
    “Smart industrial robots and cooperative robots that interact with humans need both resilience and strength,” says Dongwon Yun, who heads the DGIST BioRobotics and Mechatronics Lab and led the research team. “Our findings show the advantages of both a rigid structure and a compliant structure can be combined, and this will overcome the shortcomings of both.”
    The team 3D-printed the metal strips that serve as the CFH joints connecting segments in each robotic finger, allowing the fingers to curve and straighten much as a human hand does. The researchers demonstrated the robotic hand’s ability to grasp different objects, including a box of tissues, a small fan and a wallet. The CFH-jointed robot hand absorbed 46.7 percent more shock than a robotic hand with conventional pin joints. It was also stronger than fully compliant robot hands, holding objects weighing up to four kilograms.
    Further improvements are needed before robots with these partially compliant hands can work alongside humans. The researchers note that additional analysis of materials is required, as well as field experiments to pinpoint the best practical applications.
    “The industrial and healthcare settings where robots are widely used are dynamic and demanding places, so it’s important to keep improving robots’ performance,” says DGIST engineering Ph.D. student Junmo Yang, the first paper author.
    Story Source:
    Materials provided by DGIST (Daegu Gyeongbuk Institute of Science and Technology). Note: Content may be edited for style and length.

  • Electrons waiting for their turn: New model explains 3D quantum material

    This new 3D effect can be the foundation for topological quantum phenomena, which are believed to be particularly robust and therefore promising candidates for extremely powerful quantum technologies. These results have just been published in the scientific journal Nature Communications.
    Dr. Tobias Meng and Dr. Johannes Gooth are early career researchers in the Würzburg-Dresden Cluster of Excellence ct.qmat, which has been researching topological quantum materials since 2019. They could hardly believe the findings of a recent publication in “Nature” claiming that electrons in the topological metal zirconium pentatelluride (ZrTe5) move only in two-dimensional planes, even though the material is three-dimensional. Meng and Gooth therefore started their own research and experiments on ZrTe5. Meng, from the Technische Universität Dresden (TUD), developed the theoretical model; Gooth, from the Max Planck Institute for Chemical Physics of Solids, designed the experiments. Seven measurements with different techniques all led to the same conclusion.
    Electrons waiting for their turn
    The research by Meng and Gooth paints a new picture of how the Hall effect works in three-dimensional materials. The scientists believe that electrons move through the metal along three-dimensional paths, but their electric transport can still appear as two-dimensional. In the topological metal zirconium pentatelluride, this is possible because a fraction of the electrons is still waiting to be activated by an external magnetic field.
    “The way electrons move is consistent in all of our measurements, and similar to what is otherwise known from the two-dimensional quantum Hall effects. But our electrons move upwards in spirals, rather than being confined to a circular motion in planes. This is an exciting difference to the quantum Hall effect and to the proposed scenarios for what happens in the material ZrTe5,” comments Meng on the genesis of their new scientific model. “This only works because not all electrons move at all times. Some remain still, as if they were queuing up. Only when an external magnetic field is applied do they become active.”
    Experiments confirm the model
    For their experiments, the scientists cooled the topological quantum material down to -271 degrees Celsius and applied an external magnetic field. Then, they performed electric and thermoelectric measurements by sending currents through the sample, studied its thermodynamics by analysing the magnetic properties of the material, and applied ultrasound. They even used X-ray, Raman and electronic spectroscopy to look into the inner workings of the material. “But none of our seven measurements hinted at the electrons moving only two-dimensionally,” explains Meng, head of the Emmy Noether group for Quantum Design at TUD and leading theorist in the present project. “Our model is in fact surprisingly simple, and still explains all the experimental data perfectly.”
    Outlook for topological quantum materials in 3D
    The Nobel-prize-winning quantum Hall effect was discovered in 1980 and describes the stepwise conduction of current in a metal. It is a cornerstone of topological physics, a field that has experienced a surge since 2005 due to its promise for the functional materials of the 21st century. To date, however, the quantum Hall effect has only been observed in two-dimensional metals. The scientific results of the present publication enlarge the understanding of how three-dimensional materials behave in magnetic fields. The cluster members Meng and Gooth intend to pursue this new research direction further: “We definitely want to investigate the queueing behavior of electrons in 3D metals in more detail,” says Meng.
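    For context, the “stepwise conduction” of the quantum Hall effect refers to the Hall conductance being locked to integer multiples of a fundamental constant. The textbook two-dimensional relation is sketched below purely as background; the paper’s three-dimensional generalization is not reproduced here.

```latex
% Standard 2D quantum Hall quantization, shown only as background:
% the Hall conductance jumps between plateaus at integer multiples of e^2/h.
\sigma_{xy} = \nu \,\frac{e^{2}}{h}, \qquad \nu = 1, 2, 3, \ldots
```

    In the three-dimensional picture described above, the electrons additionally advance along the magnetic field direction in spirals, which is what distinguishes the ZrTe5 measurements from this strictly two-dimensional textbook case.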
    Story Source:
    Materials provided by Technische Universität Dresden. Note: Content may be edited for style and length.

  • When to release free and paid apps for maximal revenue

    Researchers from Tulane University and University of Maryland published a new paper in the Journal of Marketing that examines the dynamic interplay between free and paid versions of an app over its lifetime and suggests a possible remedy for the failure of apps.
    The study, forthcoming in the Journal of Marketing, is titled “Managing the Versioning Decision over an App’s Lifetime” and is authored by Seoungwoo Lee, Jie Zhang, and Michel Wedel.
    Is it really over for paid mobile apps? The mobile app industry is unusual in that free apps are far more prevalent than paid apps in most app categories, in contrast to many other product markets where free products primarily play a supporting role to paid products. Apps have trended toward free versions over the past decade: by July 2020, 96% of apps on the Google Play platform were free. However, 63% of free apps had fewer than a thousand downloads per month, and 60% of app publishers generated less than $500 per month in 2015.
    Are there ways for paid apps to make free apps more profitable? And how can app publishers improve profitability by strategically deploying or eliminating the paid and free versions of an app over its lifetime? To answer these questions, the research team investigated app publishers’ decisions to offer the free, paid, or both versions of an app, taking into account the dynamic interplay between the free and paid versions. The findings offer valuable insights for app publishers on how to manage the versioning decision over an app’s lifetime.
    First, the researchers demonstrate how the free and paid versions influence each other’s current demand, future demand, and in-app revenues. They find that either version’s cumulative user base stimulates future demand for both versions via social influence, yet simultaneously offering both versions reduces demand for each in the current period. Also, the presence of a paid version reduces the in-app purchase rate and active user base, and therefore the in-app purchase and advertising revenues, of a free app, whereas the presence of a free version appears to have little negative impact on the paid version. App publishers should therefore be mindful of the paid version’s negative impact on the free version. In general, simultaneously offering both versions helps a publisher achieve cost savings via economies of scale, but it reduces revenues from each version compared to when either version is offered alone.
    Second, analyses show that the most common optimal launch strategy is to offer the paid version first. Paid apps can generate download revenues from the first day of sales, while in-app revenues from either version rely on a sizeable user base which takes time to build. So, publishers can rely on paid apps to generate operating capital and recuperate development and launch costs much more quickly. Nonetheless, there are variations across app categories, which are related to differences in apps’ abilities to monetize from different revenue sources. For example, the percentage of utility apps that should launch a paid app is particularly high because they have a lower ability to monetize the free app through in-app purchase items and advertising. In contrast, entertainment apps should mostly launch a free version because they have high availability of in-app ad networks and in-app purchase items.
    Third, the optimal versioning decisions and their evolutionary patterns change as an app ages and vary by app category. The evolutionary patterns show that, for most apps, the relative profitability of the free version tends to increase with app age while that of the paid version tends to decline. The profitability of simultaneously offering both versions therefore tends to increase with app age up to a certain point, after which the free-only option takes over as the most common optimal versioning decision; for the (relatively more successful) apps in the data, this happens about 1.5 years after launch on average. There is also substantial cross-category variation in these evolution patterns. For example, unlike in the other categories examined, the optimal versioning decision for most utility apps in the data is to stay with the paid-only option throughout an app’s lifetime.
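    As a purely illustrative sketch (not the authors’ model), the toy Python simulation below shows how the most profitable versioning choice can flip from paid-first toward free-only as an app ages. Every curve and parameter is a made-up assumption chosen only to mimic the qualitative pattern described above.

```python
"""Toy illustration only (not the authors' model): how the best versioning
choice can flip from paid-first toward free-only as an app ages.
All curves and parameters below are made-up assumptions."""

import math

def toy_profit(age_months: float, version: str) -> float:
    user_base = 1 - math.exp(-age_months / 6)        # saturating adoption of the free app
    paid_downloads = 40 * math.exp(-age_months / 8)  # download revenue: strong early, then decays
    free_inapp_ads = 60 * user_base                  # in-app purchase/ad revenue grows with the base

    if version == "paid":
        return paid_downloads
    if version == "free":
        return free_inapp_ads
    if version == "both":
        # Offering both versions cannibalizes some demand from each (per the
        # study's qualitative finding), modeled here as a flat 25% haircut.
        return 0.75 * (paid_downloads + free_inapp_ads)
    raise ValueError(f"unknown version: {version}")

for age in (1, 6, 12, 18, 24, 36):
    best = max(("paid", "free", "both"), key=lambda v: toy_profit(age, v))
    print(f"month {age:2d}: most profitable option = {best}")
```

    With these assumed parameters the printout moves from paid-only in the first months, to offering both versions, to free-only later on, mirroring the qualitative evolution the study documents with real data.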
    This research reveals the dynamic interplay between free and paid versions of an app over its lifetime and suggests a possible remedy for the failure of apps. As the researchers explain, “Many apps that start out with a free version fail because they cannot generate enough revenue to sustain early-stage operations. We urge app publishers to pay close attention to the interplay between free and paid app versions and to improve the profitability of free apps by strategically deploying or eliminating their paid version counterparts over an app’s lifetime.”
    Story Source:
    Materials provided by American Marketing Association. Original written by Matt Weingarden. Note: Content may be edited for style and length.

  • Helping doctors manage COVID-19

    Artificial intelligence (AI) technology developed by researchers at the University of Waterloo is capable of assessing the severity of COVID-19 cases with a promising degree of accuracy.
    A study, which is part of the COVID-Net open-source initiative launched more than a year ago, involved researchers from Waterloo and spin-off start-up company DarwinAI, as well as radiologists at the Stony Brook School of Medicine and the Montefiore Medical Center in New York.
    Deep-learning AI was trained to analyze the extent and opacity of infection in the lungs of COVID-19 patients based on chest x-rays. Its scores were then compared to assessments of the same x-rays by expert radiologists.
    For both extent and opacity, important indicators of the severity of infections, predictions made by the AI software were in good alignment with scores provided by the human experts.
    Alexander Wong, a systems design engineering professor and co-founder of DarwinAI, said the technology could give doctors an important tool to help them manage cases.
    “Assessing the severity of a patient with COVID-19 is a critical step in the clinical workflow for determining the best course of action for treatment and care, be it admitting the patient to ICU, giving a patient oxygen therapy, or putting a patient on a mechanical ventilator,” Wong said.
    “The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world.”
    A paper on the research, “Towards computer-aided severity assessment via deep neural networks for geographic and opacity extent scoring of SARS-CoV-2 chest X-rays,” appears in the journal Scientific Reports.
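    As a rough, hypothetical illustration of how the “good alignment” between AI and radiologist scores might be quantified (this is not the study’s code or data), the snippet below computes a correlation and a mean absolute error between two sets of made-up severity scores; the 0-8 extent scale is an assumption.

```python
"""Hypothetical illustration only (not the study's code or data): quantifying
agreement between AI-predicted and radiologist severity scores for chest
X-rays. The 0-8 extent scale and all numbers below are assumptions."""

import numpy as np

radiologist_extent = np.array([2.0, 5.0, 7.0, 1.0, 4.0, 6.0])  # expert scores
model_extent       = np.array([2.5, 4.5, 6.5, 1.5, 4.0, 5.5])  # AI predictions

r = np.corrcoef(radiologist_extent, model_extent)[0, 1]   # Pearson correlation
mae = np.mean(np.abs(radiologist_extent - model_extent))  # mean absolute error
print(f"Pearson r = {r:.2f}, mean absolute error = {mae:.2f} points")
```

    The actual study evaluates predictions of both geographic extent and opacity against expert radiologist scores on real chest X-rays.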
    Story Source:
    Materials provided by University of Waterloo. Original written by Brian Caldwell. Note: Content may be edited for style and length.

  • Driving in the snow is a team effort for AI sensors

    Nobody likes driving in a blizzard, including autonomous vehicles. To make self-driving cars safer on snowy roads, engineers look at the problem from the car’s point of view.
    A major challenge for fully autonomous vehicles is navigating bad weather. Snow especially confounds crucial sensor data that helps a vehicle gauge depth, find obstacles and keep on the correct side of the yellow line, assuming it is visible. Averaging more than 200 inches of snow every winter, Michigan’s Keweenaw Peninsula is the perfect place to push autonomous vehicle tech to its limits. In two papers presented at SPIE Defense + Commercial Sensing 2021, researchers from Michigan Technological University discuss solutions for snowy driving scenarios that could help bring self-driving options to snowy cities like Chicago, Detroit, Minneapolis and Toronto.
    Just like the weather at times, autonomy is not a sunny or snowy yes-no designation. Autonomous vehicles cover a spectrum of levels, from cars already on the market with blind spot warnings or braking assistance, to vehicles that can switch in and out of self-driving modes, to others that can navigate entirely on their own. Major automakers and research universities are still tweaking self-driving technology and algorithms. Occasionally accidents occur, either due to a misjudgment by the car’s artificial intelligence (AI) or a human driver’s misuse of self-driving features.
    Humans have sensors, too: our scanning eyes, our sense of balance and movement, and the processing power of our brain help us understand our environment. These seemingly basic inputs allow us to drive in virtually every scenario, even if it is new to us, because human brains are good at generalizing novel experiences. In autonomous vehicles, two cameras mounted on gimbals scan and perceive depth using stereo vision to mimic human vision, while balance and motion can be gauged using an inertial measurement unit. But, computers can only react to scenarios they have encountered before or been programmed to recognize.
    Since artificial brains aren’t around yet, task-specific artificial intelligence (AI) algorithms must take the wheel — which means autonomous vehicles must rely on multiple sensors. Fisheye cameras widen the view while other cameras act much like the human eye. Infrared picks up heat signatures. Radar can see through the fog and rain. Light detection and ranging (lidar) pierces through the dark and weaves a neon tapestry of laser beam threads.
    “Every sensor has limitations, and every sensor covers another one’s back,” said Nathir Rawashdeh, assistant professor of computing in Michigan Tech’s College of Computing and one of the study’s lead researchers. He works on bringing the sensors’ data together through an AI process called sensor fusion.
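    As a minimal, hypothetical sketch of the sensor fusion idea (not the Michigan Tech pipeline; every reading and variance below is made up), the following Python snippet combines obstacle-distance estimates from several sensors by inverse-variance weighting, so a snow-degraded sensor simply counts for less.

```python
"""Minimal, hypothetical sketch of sensor fusion (not the Michigan Tech
pipeline): combine obstacle-distance estimates from several sensors by
inverse-variance weighting. All readings and variances below are made up."""

def fuse_estimates(estimates):
    """estimates: list of (distance_m, variance) pairs, one per sensor.
    Returns the inverse-variance weighted mean and its variance."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

readings = [
    (22.0, 9.0),  # stereo camera: noisy in falling snow
    (19.5, 4.0),  # lidar: partially degraded by snowflake returns
    (20.2, 1.0),  # radar: still sees through the precipitation
]
distance, variance = fuse_estimates(readings)
print(f"fused obstacle distance = {distance:.1f} m (variance {variance:.2f})")
```

    More sophisticated fusion, such as Kalman filtering or learned fusion networks, follows the same principle of weighting each sensor by how much it can currently be trusted.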

  • Mass gatherings during Malaysian election directly and indirectly boosted COVID-19 spread, study suggests

    New estimates suggest that mass gatherings during an election in the Malaysian state of Sabah directly caused 70 percent of COVID-19 cases detected in Sabah after the election, and indirectly caused 64.4 percent of cases elsewhere in Malaysia. Jue Tao Lim of the National University of Singapore, Kenwin Maung of the University of Rochester, New York, and colleagues present these findings in the open-access journal PLOS Computational Biology.
    Mass gatherings of people pose high risks of spreading COVID-19. However, it is difficult to accurately estimate the direct and indirect effects of such events on increased case counts.
    To address this difficulty, Lim, Maung, and colleagues developed a new computational method for estimating both direct and spill-over effects of mass gatherings. Departing from traditional epidemiological approaches, they employed a statistical strategy known as a synthetic control method, which enabled comparison between the aftermath of mass gatherings and what might have happened if the gatherings had not occurred.
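    To make the synthetic control idea concrete, here is a deliberately simplified sketch (not the authors’ implementation; the case counts and the plain non-negative least-squares fit are assumptions): control regions are weighted so that their pre-event case curves track the treated region, and the same weights are then reused after the event to construct the “no gathering” counterfactual.

```python
"""Deliberately simplified sketch of a synthetic control (not the authors'
implementation): weight the control regions so their pre-event case counts
track the treated region, then reuse those weights after the event as the
'no mass gathering' counterfactual. All numbers are assumptions."""

import numpy as np
from scipy.optimize import nnls

# Hypothetical daily case counts: rows = days, columns = control regions.
pre_controls = np.array([[10., 14., 8.],
                         [12., 15., 9.],
                         [11., 16., 10.],
                         [13., 18., 11.]])
pre_treated = np.array([11., 13., 12., 14.])   # treated region, before the event

post_controls = np.array([[12., 17., 10.],
                          [13., 19., 12.]])
post_treated = np.array([90., 120.])           # observed surge after the event

weights, _ = nnls(pre_controls, pre_treated)   # non-negative fit to the pre-period
counterfactual = post_controls @ weights       # synthetic "no event" trajectory
excess = post_treated - counterfactual
print("estimated excess cases attributable to the event:", excess.round(1))
```

    The published method adds further constraints and uncertainty quantification, but the logic of building a counterfactual from weighted control regions is the same.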
    The researchers then applied this method to the Sabah state election. This election involved mandated in-person voting and political rallies, both of which resulted in a significant increase in inter-state travel and in-person gatherings by voters, politicians, and campaign workers. Prior to the election, Malaysia had experienced an average of about 16 newly diagnosed COVID-19 cases per day for nearly four months. After the election, that number jumped to 190 cases per day for 17 days until lockdown policies were reinstated.
    Using their novel method, the researchers estimated that mass gatherings during the election directly caused 70 percent of COVID-19 cases in Sabah during the 17 days after the election, amounting to a total of 2,979 cases. Meanwhile, 64.4 percent of post-election cases elsewhere in Malaysia — 1,741 cases total — were indirectly attributed to the election.
    “Our work underscores the serious risk that mass gatherings in a single region could spill over into other regions and cause a national-scale outbreak,” Lim says.
    Lim and colleagues say that the same synthetic control framework could be applied to death rates and genetic data to deepen understanding of the impact of the Sabah election.
    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • The robot smiled back

    While our facial expressions play a huge role in building trust, most robots still sport the blank and static visage of a professional poker player. With the increasing use of robots in locations where robots and humans need to work closely together, from nursing homes to warehouses and factories, the need for a more responsive, facially realistic robot is growing more urgent.
    Long interested in the interactions between robots and humans, researchers in the Creative Machines Lab at Columbia Engineering have been working for five years to create EVA, a new autonomous robot with a soft and expressive face that responds to match the expressions of nearby humans. The research will be presented at the ICRA conference on May 30, 2021, and the robot blueprints are open-sourced on HardwareX (April 2021).
    “The idea for EVA took shape a few years ago, when my students and I began to notice that the robots in our lab were staring back at us through plastic, googly eyes,” said Hod Lipson, James and Sally Scapa Professor of Innovation (Mechanical Engineering) and director of the Creative Machines Lab.
    Lipson observed a similar trend in the grocery store, where he encountered restocking robots wearing name badges, and in one case, decked out in a cozy, hand-knit cap. “People seemed to be humanizing their robotic colleagues by giving them eyes, an identity, or a name,” he said. “This made us wonder, if eyes and clothing work, why not make a robot that has a super-expressive and responsive human face?”
    While this sounds simple, creating a convincing robotic face has been a formidable challenge for roboticists. For decades, robotic body parts have been made of metal or hard plastic, materials that were too stiff to flow and move the way human tissue does. Robotic hardware has been similarly crude and difficult to work with — circuits, sensors, and motors are heavy, power-intensive, and bulky.
    The first phase of the project began in Lipson’s lab several years ago when undergraduate student Zanwar Faraj led a team of students in building the robot’s physical “machinery.” They constructed EVA as a disembodied bust that bears a strong resemblance to the silent but facially animated performers of the Blue Man Group. EVA can express the six basic emotions of anger, disgust, fear, joy, sadness, and surprise, as well as an array of more nuanced emotions, by using artificial “muscles” (i.e. cables and motors) that pull on specific points on EVA’s face, mimicking the movements of the more than 42 tiny muscles attached at various points to the skin and bones of human faces.
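    Purely as a hypothetical sketch of how cable-driven expressions can be represented in software (this is not EVA’s control code; the actuator count and every preset value are assumptions), one can treat an expression as a vector of motor activations and ramp smoothly between expressions:

```python
"""Purely hypothetical sketch (not EVA's control software): treating a facial
expression as a vector of cable-motor activations, one per artificial
'muscle', and ramping smoothly from one expression to another."""

import numpy as np

N_ACTUATORS = 12  # assumed actuator count; the real robot differs

NEUTRAL = np.zeros(N_ACTUATORS)
JOY = np.array([0.8, 0.6, 0.0, 0.3, 0.9, 0.1, 0.0, 0.4, 0.7, 0.2, 0.0, 0.5])

def blend(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linearly interpolate between two activation vectors, 0 <= t <= 1."""
    return (1 - t) * a + t * b

# Ramp from a neutral face toward 'joy' over a few control steps.
for t in np.linspace(0.0, 1.0, 5):
    command = blend(NEUTRAL, JOY, t)
    print(np.round(command, 2))  # would be sent to the motor controllers
```

    The real robot mirrors the expressions of nearby humans, which requires far more than this fixed linear ramp.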

  • Artificial neurons recognize biosignals in real time

    Current neural network algorithms produce impressive results that help solve an incredible number of problems. However, the electronic devices used to run these algorithms still require too much processing power. These artificial intelligence (AI) systems simply cannot compete with an actual brain when it comes to processing sensory information or interacting with the environment in real time.
    Neuromorphic chip detects high-frequency oscillations
    Neuromorphic engineering is a promising new approach that bridges the gap between artificial and natural intelligence. An interdisciplinary research team at the University of Zurich, ETH Zurich, and the University Hospital Zurich has used this approach to develop a chip based on neuromorphic technology that reliably and accurately recognizes complex biosignals. The scientists were able to use this technology to successfully detect previously recorded high-frequency oscillations (HFOs). These specific waves, measured using an intracranial electroencephalogram (iEEG), have proven to be promising biomarkers for identifying the brain tissue that causes epileptic seizures.
    Complex, compact and energy efficient
    The researchers first designed an algorithm that detects HFOs by simulating the brain’s natural neural network: a tiny so-called spiking neural network (SNN). The second step involved implementing the SNN in a fingernail-sized piece of hardware that receives neural signals by means of electrodes and which, unlike conventional computers, is massively energy efficient. This makes calculations with a very high temporal resolution possible, without relying on the internet or cloud computing. “Our design allows us to recognize spatiotemporal patterns in biological signals in real time,” says Giacomo Indiveri, professor at the Institute for Neuroinformatics of UZH and ETH Zurich.
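    As a purely illustrative sketch of the spiking idea (not the Zurich group’s SNN or chip design; the sampling rate, neuron parameters, and toy signal are all assumptions), the Python snippet below drives a single leaky integrate-and-fire neuron with the rectified power of a synthetic signal: it stays silent on background noise and spikes during a brief 250 Hz burst of the kind discussed above.

```python
"""Illustrative sketch only (not the Zurich group's SNN or chip design): a
single leaky integrate-and-fire neuron driven by the rectified power of a toy
signal, spiking during an HFO-like 250 Hz burst. All parameters are assumed."""

import numpy as np

rng = np.random.default_rng(0)
fs = 2000                                   # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
signal = 0.1 * rng.standard_normal(t.size)  # background activity
burst = (t > 0.50) & (t < 0.55)
signal[burst] += 0.8 * np.sin(2 * np.pi * 250 * t[burst])  # HFO-like burst

v, tau, gain, threshold = 0.0, 0.01, 300.0, 0.5
spike_times = []
for i, x in enumerate(signal):
    v += (-(v / tau) + gain * x**2) / fs    # leaky integration of input power
    if v >= threshold:                      # fire and reset
        spike_times.append(t[i])
        v = 0.0

if spike_times:
    print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.3f} s")
else:
    print("no spikes detected")
```

    On neuromorphic hardware, comparable integrate-and-fire dynamics are realized directly in compact, low-power circuits rather than simulated step by step, which is where the energy savings come from.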
    Measuring HFOs in operating theaters and outside of hospitals
    The researchers are now planning to use their findings to create an electronic system that reliably recognizes and monitors HFOs in real time. When used as an additional diagnostic tool in operating theaters, the system could improve the outcome of neurosurgical interventions.
    However, this is not the only field where HFO recognition can play an important role. The team’s long-term target is to develop a device for monitoring epilepsy that could be used outside of the hospital and that would make it possible to analyze signals from a large number of electrodes over several weeks or months. “We want to integrate low-energy, wireless data communications in the design — to connect it to a cellphone, for example,” says Indiveri. Johannes Sarnthein, a neurophysiologist at the University Hospital Zurich, elaborates: “A portable or implantable chip such as this could identify periods with a higher or lower rate of incidence of seizures, which would enable us to deliver personalized medicine.” This research on epilepsy is being conducted at the Zurich Center of Epileptology and Epilepsy Surgery, which is run as part of a partnership between the University Hospital Zurich, the Swiss Epilepsy Clinic and the University Children’s Hospital Zurich.
    Story Source:
    Materials provided by University of Zurich. Note: Content may be edited for style and length.