More stories

  • Fingertip sensitivity for robots

    In a paper published on February 23, 2022, in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) introduces a robust soft haptic sensor named “Insight” that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and how large the applied forces are. The research project is a significant step toward robots being able to feel their environment as accurately as humans and animals. Like its natural counterpart, the fingertip sensor is very sensitive and robust, and it offers high resolution.
    The thumb-shaped sensor is made of a soft shell built around a lightweight, stiff skeleton. This skeleton holds up the structure much like bones stabilize soft finger tissue. The shell is made from an elastomer mixed with dark but reflective aluminum flakes, resulting in an opaque greyish color that prevents any external light from finding its way in. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera that records colorful images illuminated by a ring of LEDs.
    When an object touches the sensor’s shell, the appearance of the color pattern inside the sensor changes. The camera records images many times per second and feeds a deep neural network with this data. The algorithm detects even the smallest change in light in each pixel. Within a fraction of a second, the trained machine-learning model can map out exactly where the finger is contacting an object, determine how strong the forces are, and indicate the force direction. The model infers what scientists call a force map: it provides a force vector for every point on the three-dimensional fingertip.
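    To make the image-to-force mapping concrete, here is a minimal sketch of such a network in PyTorch. The layer sizes, the number of surface points, and all names are illustrative assumptions, not the architecture from the Nature Machine Intelligence paper.

    ```python
    # Minimal sketch of an image-to-force-map network in the spirit of Insight.
    # All layer sizes and names are illustrative assumptions, not the paper's design.
    import torch
    import torch.nn as nn

    class ForceMapNet(nn.Module):
        def __init__(self, n_points=1024):
            super().__init__()
            # Convolutional encoder: compress the internal fisheye camera frame.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
            # Head: one 3-D force vector (Fx, Fy, Fz) per sampled surface point.
            self.head = nn.Linear(64 * 4 * 4, n_points * 3)
            self.n_points = n_points

        def forward(self, frame):  # frame: (batch, 3, H, W)
            return self.head(self.encoder(frame)).view(-1, self.n_points, 3)

    model = ForceMapNet()
    force_map = model(torch.rand(1, 3, 128, 128))  # -> (1, 1024, 3) force vectors
    ```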
    “We achieved this excellent sensing performance through the innovative mechanical design of the shell, the tailored imaging system inside, automatic data collection, and cutting-edge deep learning,” says Georg Martius, Max Planck Research Group Leader at MPI-IS, where he heads the Autonomous Learning Group. His Ph.D. student Huanbo Sun adds: “Our unique hybrid structure of a soft shell enclosing a stiff skeleton ensures high sensitivity and robustness. Our camera can detect even the slightest deformations of the surface from one single image.” Indeed, while testing the sensor, the researchers realized it was sensitive enough to feel its own orientation relative to gravity.
    The third member of the team is Katherine J. Kuchenbecker, the Director of the Haptic Intelligence Department at MPI-IS. She confirms that the new sensor will be useful: “Previous soft haptic sensors had only small sensing areas, were delicate and difficult to make, and often could not feel forces parallel to the skin, which are essential for robotic manipulation like holding a glass of water or sliding a coin along a table,” says Kuchenbecker.
    But how does such a sensor learn? Huanbo Sun designed a testbed to generate the training data needed for the machine-learning model to understand the correlation between changes in the raw image pixels and the forces applied. The testbed probes the sensor all around its surface and records the true contact force vector together with the camera image inside the sensor. In this way, about 200,000 measurements were generated. It took nearly three weeks to collect the data and another day to train the machine-learning model. Surviving this long experiment with so many different contact forces helped prove the robustness of Insight’s mechanical design, and tests with a larger probe showed how well the sensing system generalizes.
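    The shape of the resulting training set is easy to sketch: every record pairs a camera frame with the ground-truth force at the probed location. The helpers below are hypothetical stand-ins for the testbed’s hardware interfaces, shown only to illustrate the pairing.

    ```python
    # Sketch of the paired (image, probe location, force) records the testbed
    # produces. All three helpers are hypothetical stand-ins, not the real rig.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_point_on_fingertip():
        """Hypothetical: pick a probe location on the sensor shell."""
        return rng.uniform(-1.0, 1.0, size=3)

    def grab_camera_frame():
        """Hypothetical stand-in for the internal fisheye camera."""
        return rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)

    def read_force_sensor():
        """Hypothetical stand-in for the testbed's force/torque sensor."""
        return rng.normal(0.0, 1.0, size=3)  # (Fx, Fy, Fz)

    # The real dataset holds ~200,000 such records; three are enough to show it.
    dataset = []
    for _ in range(3):
        point = sample_point_on_fingertip()
        dataset.append((grab_camera_frame(), point, read_force_sensor()))
    ```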
    Another special feature of the thumb-shaped sensor is that it possesses a nail-shaped zone with a thinner elastomer layer. This tactile fovea is designed to detect even tiny forces and detailed object shapes. For this super-sensitive zone, the scientists chose an elastomer thickness of 1.2 mm rather than the 4 mm used on the rest of the finger sensor.
    “The hardware and software design we present in our work can be transferred to a wide variety of robot parts with different shapes and precision requirements. The machine-learning architecture, training, and inference process are all general and can be applied to many other sensor designs,” Huanbo Sun concludes.
    Video: https://youtu.be/lTAJwcZopAA
    Story Source:
    Materials provided by Max Planck Institute for Intelligent Systems. Note: Content may be edited for style and length.

  • Inorganic borophene liquid crystals: A superior new material for optoelectronic devices

    Liquid crystals derived from borophene have risen in popularity, owing to their immense applicability in optoelectronic and photonic devices. However, their development requires a very narrow temperature range, which hinders their large-scale application. Now, Tokyo Tech researchers investigated a liquid-state borophene oxide, discovering that it exhibited high thermal stability and optical switching behavior even at low voltages. These findings highlight the strong potential of borophene oxide-derived liquid crystals for use in widespread applications.
    Two-dimensional (2D) atomic layered materials, such as the carbon-based graphene and the boron-based borophene, are highly sought after for their applications in a variety of optoelectronic devices, owing to their desirable electronic properties. The monolayer structure of borophene, with its network of boron bonds, endows it with high flexibility, which can be beneficial for the generation of a liquid state at low temperatures. Thus, it is not surprising that liquid crystals derived from 2D networked structures are in high demand. However, the poor stability of borophene in particular makes it difficult for the material to undergo a phase transition to the liquid state.
    In contrast, borophene oxide — a derivative of borophene — can improve the stability of the internal boron network, in turn stabilizing the entire structure. This property of borophene oxide is different from that of other 2D materials, which are unable to yield liquid crystals without the use of solvents.
    To compensate for the lack of suitable liquid crystals, a team of researchers from Japan, including Assistant Professor Tetsuya Kambe and Professor Kimihisa Yamamoto from Tokyo Institute of Technology, investigated the properties of a borophene oxide analogue as a fully inorganic liquid with a layered structure. Their study was recently published in Nature Communications.
    Initially, the team used previously tested methods to generate borophene oxide layers (BoL) as crystals (BoL-C). They then converted BoL-C to liquid crystals (BoL-LC) by heating them to 105-200°C. They observed that the resultant dehydration weakened the interlayer interactions of BoL-C, which is desirable for flexibility.
    The team then analyzed the structural properties of BoL-LC using polarized optical microscopy, finding that the BoL-LC sheets stack parallel to the surface of the liquid drop in a slightly curved form. This spherulite orientation of the borophene sheets was confirmed using scanning electron microscopy.
    An analysis of the phase transition features revealed that phase transition (P-ii/P-i) occurred at around 100°C for BoL-LC. In fact, both transition phases exhibited high thermal stability at extreme temperatures. The team also observed a highly ordered orientation of the P-ii phase.
    To test its optical switching behavior, the team created a dynamic scattering device using BoL-LC, and found that unlike other organic liquid crystals, the BoL-based device responded well to voltages as low as 1V. These findings highlight the feasibility of inorganic liquid devices in harsh environments.
    “Although a liquid crystal device using graphene oxide has been reported previously, it was a lyotropic liquid crystal, with a strong dependence on the solution concentration. Therefore, the previously reported material is different from the liquid borophene created in this study, without the use of any solvents,” says Dr. Kambe, while discussing the advantages of BoL-LC over other 2D liquid crystals.
    What’s more, they found that even upon exposure to direct fire, BoL-LC was noncombustible! This confirms that BoL-LC in a liquid state with an ordered layer structure can exist over a wide range of temperatures — a property which has not been observed so far for other organic materials.
    When asked about the implications of these findings, Dr. Kambe and Dr. Yamamoto stated, “BoL-LC exhibits strong potential for use in widespread applications that are unavailable to conventional organic liquid crystals or inorganic materials.”
    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • Automation is fueling increasing mortality among U.S. adults, study finds

    The automation of U.S. manufacturing — robots replacing people on factory floors — is fueling a rising mortality rate among America’s working-age adults, according to a new study by researchers at Yale and the University of Pennsylvania.
    The study, published Feb. 23 in the journal Demography, found evidence of a causal link between automation and increasing mortality, driven largely by increased “deaths of despair,” such as suicides and drug overdoses. This is particularly true for males and females aged 45 to 54, according to the study. But researchers also found evidence of increased mortality across multiple age and sex groups from causes as varied as cancer and heart disease.
    Public policy, including strong social-safety net programs, higher minimum wages, and limiting the supply of prescription opioids, can blunt automation’s effects on a community’s health, the researchers concluded.
    “For decades, manufacturers in the United States have turned to automation to remain competitive in a global marketplace, but this technological innovation has reduced the number of quality jobs available to adults without a college degree — a group that has faced increased mortality in recent years,” said lead author Rourke O’Brien, assistant professor of sociology in Yale’s Faculty of Arts and Sciences. “Our analysis shows that automation exacts a toll on the health of individuals both directly — by reducing employment, wages, and access to healthcare — as well as indirectly, by reducing the economic vitality of the broader community.”
    Since 1980, mortality rates in the United States have diverged from those in other high-income countries. Today, Americans on average die three years sooner than their counterparts in other wealthy nations.
    Automation, along with other factors such as competition with manufacturers in countries with lower labor costs, including China and Mexico, is a major source of the decline in U.S. manufacturing jobs. Previous research has shown that the adoption of industrial robots caused the loss of an estimated 420,000 to 750,000 jobs during the 1990s and 2000s, the majority of which were in manufacturing.
    To understand the role of automation in rising mortality, O’Brien and co-authors Elizabeth F. Blair and Atheendar Venkataramani, both of the University of Pennsylvania, used newly available measures that chart the adoption of automation across U.S. industries and localities between 1993 and 2007. They combined these measures with U.S. death-certificate data over the same period to estimate the causal effect of automation on the mortality of working-age adults at the county level and for specific types of deaths.
    According to the study, each new robot per 1,000 workers led to about eight additional deaths per 100,000 males aged 45 to 54 and nearly four additional deaths per 100,000 females in the same age group. The analysis showed that automation caused a substantial increase in suicides among middle-aged men and drug overdose deaths among men of all ages and women aged 20 to 29. Overall, automation could be linked to 12% of the increase in drug overdose mortality among all working-age adults during the study period. The researchers also discovered evidence associating the lost jobs and reduced wages caused by automation with increased homicide, cancer, and cardiovascular disease within specific age-sex groups.
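    A back-of-the-envelope calculation makes those effect sizes concrete. The per-robot rates below are the ones quoted from the study; the county population and adoption figures are invented purely for illustration.

    ```python
    # Illustrative arithmetic only: the per-robot effects come from the study,
    # but the county population and robot-adoption numbers are invented.
    deaths_per_robot_men_45_54 = 8 / 100_000    # per robot per 1,000 workers
    deaths_per_robot_women_45_54 = 4 / 100_000  # "nearly four"

    robots_per_1000_workers = 2.5   # hypothetical county-level adoption
    men_45_54 = 40_000              # hypothetical county population
    women_45_54 = 42_000

    extra_deaths = robots_per_1000_workers * (
        deaths_per_robot_men_45_54 * men_45_54
        + deaths_per_robot_women_45_54 * women_45_54
    )
    print(f"~{extra_deaths:.1f} additional deaths among 45-to-54-year-olds")
    ```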
    The researchers examined policy areas that could mitigate automation’s harmful effects. They found that robust social safety net programs, such as Medicaid and unemployment benefits, at the state level moderated the effects of automation among middle-aged males, particularly suicide and drug overdose deaths. Labor market policies also soften automation’s effects on middle-aged men: The effects of automation were more pronounced in states with “right to work” laws, which contribute to lower rates of unionization, and states with lower minimum wages, according to the study.
    The study found suggestive evidence that the effect of automation on drug overdose deaths might be higher in areas with higher per capita supplies of prescription opioids.
    “Our findings underscore the importance of public policy in supporting the individuals and communities who have lost their jobs or seen their wages cut due to automation,” said Venkataramani, co-author of the study. “A strong social safety net and labor market policies that improve the quality of jobs available to workers without a college degree may help reduce deaths of despair and strengthen the general health of communities, particularly those in our nation’s industrial heartland.”
    The study’s authors are members of Opportunity for Health, a research group that explores how economic opportunity affects the health of individuals and communities. The study was supported by the U.S. Social Security Administration.
    Story Source:
    Materials provided by Yale University. Original written by Mike Cummings. Note: Content may be edited for style and length.

  • Navigation tools could be pointing drivers to the shortest route — but not the safest

    Time for a road trip. You punch the destination into your GPS and choose the suggested route. But is this shortest route the safest? Not necessarily, according to new findings from Texas A&M University researchers.
    Dominique Lord and Soheil Sohrabi, with funding from the A.P. and Florence Wiley Faculty Fellow at Texas A&M, designed a study to examine the safety of navigational tools. Comparing the safest and shortest routes between five metropolitan areas in Texas — Dallas-Fort Worth, Waco, Austin, Houston and Bryan-College Station — including more than 29,000 road segments, they found that taking a route with an 8% reduction in travel time could increase the risk of being in a crash by 23%.
    “As route guidance systems aim to find the shortest path between a beginning and ending point, they can misguide drivers to take routes that may minimize travel time, but concurrently, carry a greater risk of crashes,” said Lord, professor in the Zachry Department of Civil and Environmental Engineering.
    The researchers collected and combined road and traffic characteristics (including geometric design, number of lanes, lane width, lighting, and average daily traffic), weather conditions, and historical crash data to develop statistical models for predicting the risk of being involved in a crash.
    The study revealed inconsistencies between the shortest and safest routes. In clear weather conditions, taking the shortest route instead of the safest between Dallas-Fort Worth and Bryan-College Station reduces travel time by 8% but increases the probability of a crash by 20%. Conversely, the analysis suggests that taking the longer route between Austin and Houston, with an 11% increase in travel time, results in a 1% decrease in the daily probability of crashes.
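    The trade-off can be illustrated with a toy routing example: the same graph search that minimizes travel time can instead minimize predicted crash risk, and the two objectives pick different roads. Everything below (the network, the minutes, the risk numbers) is invented for illustration and is not the study’s Texas data.

    ```python
    # Toy illustration of shortest-vs-safest routing. Edge weights are
    # (minutes, expected crash risk); all numbers are invented.
    import heapq

    def best_path(graph, start, goal, weight):
        """Dijkstra over a dict graph; weight=0 minimizes time, weight=1 risk."""
        queue, seen = [(0.0, start, [start])], set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (cost + w[weight], nxt, path + [nxt]))
        return float("inf"), []

    roads = {
        "A": [("B", (30, 0.004)), ("C", (45, 0.001))],
        "B": [("D", (35, 0.005))],
        "C": [("D", (40, 0.001))],
    }
    print(best_path(roads, "A", "D", weight=0))  # fastest: A-B-D, 65 minutes
    print(best_path(roads, "A", "D", weight=1))  # safest:  A-C-D, risk 0.002
    ```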
    Overall, local roads with a higher risk of crashes tend to have poor geometric designs, drainage problems, and a lack of lighting, and they carry a higher risk of wildlife-vehicle collisions.

  • New artificial intelligence tool detects often overlooked heart diseases

    Physician-scientists in the Smidt Heart Institute at Cedars-Sinai have created an artificial intelligence (AI) tool that can effectively identify and distinguish between two life-threatening heart conditions that are often easy to miss: hypertrophic cardiomyopathy and cardiac amyloidosis. The new findings were published in JAMA Cardiology.
    “These two heart conditions are challenging for even expert cardiologists to accurately identify, and so patients often go on for years to decades before receiving a correct diagnosis,” said David Ouyang, MD, a cardiologist in the Smidt Heart Institute and senior author of the study. “Our AI algorithm can pinpoint disease patterns that can’t be seen by the naked eye, and then use these patterns to predict the right diagnosis.”
    The two-step, novel algorithm was used on over 34,000 cardiac ultrasound videos from Cedars-Sinai and Stanford Healthcare’s echocardiography laboratories. When applied to these clinical images, the algorithm identified specific features — related to the thickness of heart walls and the size of heart chambers — to efficiently flag certain patients as suspicious for having the potentially unrecognized cardiac diseases.
    “The algorithm identified high-risk patients with more accuracy than the well-trained eye of a clinical expert,” said Ouyang. “This is because the algorithm picks up subtle cues on ultrasound videos that distinguish between heart conditions that can often look very similar to more benign conditions, as well as to each other, on initial review.”
    Without comprehensive testing, cardiologists find it challenging to distinguish between similar-appearing diseases and changes in heart shape and size that can sometimes be thought of as part of normal aging. This algorithm accurately distinguishes not only abnormal from normal, but also between which underlying potentially life-threatening cardiac conditions may be present — with warning signals that are now detectable well before the disease progresses clinically to the point where it can impact health outcomes. An earlier diagnosis enables patients to begin effective treatments sooner, prevent adverse clinical events, and improve their quality of life.
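    Schematically, the second step of such a pipeline reduces to flagging studies whose measurements look suspicious. In the sketch below, the feature names and thresholds are invented placeholders; the published model learns its patterns directly from the videos rather than from hand-set rules.

    ```python
    # Schematic of the two-step screening idea: step 1 measures features from
    # the ultrasound video (stubbed out here); step 2 flags suspicious studies.
    # Every threshold is an invented placeholder, not a value from the paper.
    from dataclasses import dataclass

    @dataclass
    class EchoFeatures:
        wall_thickness_mm: float   # step 1 output, measured from the video
        chamber_volume_ml: float

    def screen(f: EchoFeatures) -> str:
        """Step 2: flag measurements suggesting thick walls and small chambers."""
        if f.wall_thickness_mm > 15 and f.chamber_volume_ml < 55:
            return "flag: suspicious for hypertrophic cardiomyopathy or amyloidosis"
        return "no flag"

    print(screen(EchoFeatures(wall_thickness_mm=17, chamber_volume_ml=48)))
    ```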
    Cardiac amyloidosis, often called “stiff heart syndrome,” is a disorder caused by deposits of an abnormal protein (amyloid) in the heart tissue. As amyloid builds up, it takes the place of healthy heart muscle, making it difficult for the heart to work properly. Cardiac amyloidosis often goes undetected because patients might not have any symptoms, or they might experience symptoms only sporadically.

  • Pioneering simulations focus on HIV-1 virus

    For the HIV-1 virus, a double layer of fatty molecules called lipids not only serves as its container, but also plays a key role in the virus’s replication and infectivity. Scientists have used supercomputers to complete the first-ever biologically authentic computer model of the HIV-1 virus liposome, its complete spherical lipid bilayer.
    What’s more, this study comes on the heels of a new atomistic model of the HIV-1 capsid, which contains the virus’s genetic material. The scientists are hopeful this basic research into viral envelopes can help efforts to develop new HIV-1 therapeutics, as well as lay a foundation for the study of other enveloped viruses such as the novel coronavirus, SARS-CoV-2.
    “This work represents an investigation of the HIV-1 liposome at full-scale, and with an unprecedented level of chemical complexity,” said Alex Bryer, a PhD student in the Perilla Laboratory, Department of Chemistry and Biochemistry, University of Delaware. Bryer is the lead author of the liposome-modeling research, published January 2022 in the journal PLOS Computational Biology.
    The science team developed a complex chemical model of the HIV-1 liposome that revealed key characteristics of the liposome’s asymmetry. Most such models assume a geometrically uniform structure and don’t capture the asymmetry inherent in such biological containers.
    Lipid Flip-Flop
    Bryer and his co-authors investigated a mechanism known colloquially as “lipid flip-flop,” in which lipids in one leaflet of the bilayer are moved or transported to the other leaflet. The leaflets exchange lipids in this way for various purposes, such as maintaining a dynamic equilibrium.
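    The bookkeeping behind flip-flop can be captured in a toy two-state rate model: lipids hop between leaflets at different rates until the populations balance. The rates and counts below are invented for illustration and are far simpler than the atomistic simulations described here.

    ```python
    # Toy rate model of lipid flip-flop between the two leaflets of a bilayer.
    # Rate constants and initial counts are invented for illustration only.
    k_in_to_out, k_out_to_in = 0.02, 0.01   # flip rates per timestep
    inner, outer = 8000.0, 2000.0           # initial lipid counts per leaflet

    for _ in range(500):
        flux = k_in_to_out * inner - k_out_to_in * outer  # net inner -> outer
        inner -= flux
        outer += flux

    # At equilibrium k_in_to_out * inner == k_out_to_in * outer,
    # so inner/outer -> k_out_to_in / k_in_to_out = 0.5 here.
    print(f"inner={inner:.0f}, outer={outer:.0f}, ratio={inner / outer:.2f}")
    ```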

  • A Minecraft build can be used to teach almost any subject

    For all its massive popularity, Minecraft — the highest-selling video game of all time — is not highly regarded among the gaming world’s snob class. The graphics are blocky, and there isn’t much of a point to it. It’s for kids.
    But according to many millions of users, including some Concordia faculty and students, Minecraft’s malleability is its strength. Free from constraints and easily modifiable, the game can be used in countless ways, including as a game-based teaching method. In a period when classrooms have had to pivot online with little warning or prep time, the realm of Minecraft has provided educators with a massive sandbox in which to play, experiment and teach.
    A new paper published in the journal Gamevironments by Darren Wershler, professor of English, and Bart Simon, associate professor of sociology and director of Concordia’s Milieux Institute for Arts, Culture and Technology, describes how Wershler used Minecraft to teach a class on the history and culture of modernity. The course was based entirely within the game server, with instructions, in-class communication and course work almost exclusively carried out within the Minecraft world and over the messaging app Discord. This new pedagogical framework presented the researchers with the opportunity to see how the students used the game to achieve academic goals.
    “The course is not a video game studies course, and it is not a gamified version of a course on modernity,” explains Wershler, a Tier 2 Concordia University Research Chair in Media and Contemporary Literature. “It’s this other thing that sits in an uncomfortable middle and brushes up against both. The learning comes out of trying to think about those two things simultaneously.”
    Familiar concepts, new learning
    The students lost little time adapting to their unique classroom and new learning environment. Some took time to teach peers who were unfamiliar with the game, providing them with instructions on how to mine resources, build homes, plant food and survive waves of attacks by hostile zombies and skeletons. Others, who did not usually identify themselves as natural-born leaders, found themselves answering questions and providing guidance because of their gaming proficiency.
    The students eventually decided on group projects that would be created in the Minecraft world and touched on the issues of modernity addressed in Wershler’s half-hour podcast lectures and readings. One group tried to recreate Moshe Safdie’s futuristic Habitat 67, which, Wershler notes, fits right into the Minecraft aesthetic. Another group built an entire working city (populated by Minecraft villagers) on the model of the Nakagin Capsule Tower Building in Tokyo.
    Rather than using the Creative mode that many educators favour, the game was set in the more difficult Survival mode, so students were often killed by marauding foes. The researchers downloaded fan-made modifications to enhance the game as they chose, but the mods also made the gameplay wonkier and the game more liable to crash at inopportune times.
    “It was important that the game remained a game and that while the students were working on their projects, there were all these horrible things coming out of the wilderness to kill them,” Wershler says. “This makes them think about the fact that what they are doing requires effort and that the possibility of failure is very real.”
    An adaptable build
    He admits to being happily surprised with how well his students adapted to the parameters of the course he co-designed along with a dozen other interdisciplinary researchers at Concordia. Wershler has been using Minecraft in his course since 2014, but he realized this approach created a scaffold for a new style of teaching.
    “Educators at the high school, college and university levels can use these principles and tools to teach a whole variety of subjects within the game,” he says. “There is no reason why we could not do this with architecture, design, engineering, computer science as well as history, cultural studies or sociology. There are countless ways to structure this to make it work.”
    Story Source:
    Materials provided by Concordia University. Original written by Patrick Lejtenyi. Note: Content may be edited for style and length.

  • Risks of using AI to grow our food are substantial and must not be ignored, warn researchers

    Imagine a field of wheat that extends to the horizon, being grown for flour that will be made into bread to feed cities’ worth of people. Imagine that all authority for tilling, planting, fertilising, monitoring and harvesting this field has been delegated to artificial intelligence: algorithms that control drip-irrigation systems, self-driving tractors and combine harvesters, clever enough to respond to the weather and the exact needs of the crop. Then imagine a hacker messes things up.
    A new risk analysis, published today in the journal Nature Machine Intelligence, warns that the future use of artificial intelligence in agriculture comes with substantial potential risks for farms, farmers and food security that are poorly understood and under-appreciated.
    “The idea of intelligent machines running farms is not science fiction. Large companies are already pioneering the next generation of autonomous ag-bots and decision support systems that will replace humans in the field,” said Dr Asaf Tzachor of the University of Cambridge’s Centre for the Study of Existential Risk (CSER), first author of the paper.
    “But so far no-one seems to have asked the question ‘are there any risks associated with a rapid deployment of agricultural AI?'” he added.
    Despite the huge promise of AI for improving crop management and agricultural productivity, potential risks must be addressed responsibly and new technologies properly tested in experimental settings to ensure they are safe and secure against accidental failures, unintended consequences, and cyber-attacks, the authors say.
    In their research, the authors have come up with a catalogue of risks that must be considered in the responsible development of AI for agriculture — and ways to address them. In it, they raise the alarm about cyber-attackers potentially causing disruption to commercial farms using AI, by poisoning datasets or by shutting down sprayers, autonomous drones, and robotic harvesters. To guard against this they suggest that ‘white hat hackers’ help companies uncover any security failings during the development phase, so that systems can be safeguarded against real hackers.
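    One concrete instance of that defensive mindset is validating incoming sensor data before it can poison a training set or steer an autonomous system. The sketch below quarantines readings that deviate wildly from recent history; the window size and threshold are invented placeholders, not a method from the paper.

    ```python
    # Simple anomaly gate for incoming sensor readings: reject values that sit
    # far outside recent history before they reach a crop-management model.
    # Window size and sigma threshold are invented placeholders.
    from collections import deque
    from statistics import mean, stdev

    window = deque(maxlen=48)   # e.g. the last 48 soil-moisture readings

    def accept(reading: float, max_sigma: float = 4.0) -> bool:
        """Quarantine readings more than max_sigma deviations from the window."""
        if len(window) >= 8 and stdev(window) > 0:
            if abs(reading - mean(window)) > max_sigma * stdev(window):
                return False    # hold for human review instead of training
        window.append(reading)
        return True

    for r in [0.31, 0.30, 0.33, 0.29, 0.32, 0.31, 0.30, 0.32, 9.9]:
        print(r, accept(r))   # the 9.9 spike is rejected
    ```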