More stories

  • New law of physics helps humans and robots grasp the friction of touch

    Although robotic devices are used in everything from assembly lines to medicine, engineers have a hard time accounting for the friction that occurs when those robots grip objects — particularly in wet environments. Researchers have now discovered a new law of physics that accounts for this type of friction, which should advance a wide range of robotic technologies.
    “Our work here opens the door to creating more reliable and functional haptic and robotic devices in applications such as telesurgery and manufacturing,” says Lilian Hsiao, an assistant professor of chemical and biomolecular engineering at North Carolina State University and corresponding author of a paper on the work.
    At issue is something called elastohydrodynamic lubrication (EHL) friction, which is the friction that occurs when two solid surfaces come into contact with a thin layer of fluid between them. This would include the friction that occurs when you rub your fingertips together, with the fluid being the thin layer of naturally occurring oil on your skin. But it could also apply to a robotic claw lifting an object that has been coated with oil, or to a surgical device that is being used inside the human body.
    One reason friction is important is that it helps us hold things without dropping them.
    “Understanding friction is intuitive for humans — even when we’re handling soapy dishes,” Hsiao says. “But it is extremely difficult to account for EHL friction when developing materials that control grasping capabilities in robots.”
    To develop materials that help control EHL friction, engineers would need a framework that can be applied uniformly to a wide variety of patterns, materials and dynamic operating conditions. And that is exactly what the researchers have discovered.
    “This law can be used to account for EHL friction, and can be applied to many different soft systems — as long as the surfaces of the objects are patterned,” Hsiao says.
    In this context, surface patterns could be anything from the slightly raised surfaces on the tips of our fingers to grooves in the surface of a robotic tool.
    The new physical principle, developed jointly by Hsiao and her graduate student Yunhu Peng, makes use of four equations to account for all of the physical forces at play in understanding EHL friction. In the paper, the research team demonstrated the law in three systems: human fingers; a bio-inspired robotic fingertip; and a tool called a tribo-rheometer, which is used to measure frictional forces. Peng is first author of the paper.
    “These results are very useful in robotic hands that have more nuanced controls for reliably handling manufacturing processes,” Hsiao says. “And it has obvious applications in the realm of telesurgery, in which surgeons remotely control robotic devices to perform surgical procedures. We view this as a fundamental advancement for understanding touch and for controlling touch in synthetic systems.”
    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.

  • Finding the optimal way to repay student debt

    The burden of student loans in the U.S. continues to grow unabated, currently accounting for a total of $1.7 trillion in household debt among nearly 45 million borrowers. “The introduction of income-based repayment over the past decade has made student loans rather complicated products,” Paolo Guasoni of Dublin City University said. As borrowers navigate this complex process, they face long-term consequences; people with student debt are less likely to own homes or become entrepreneurs, and generally postpone their enrollment in graduate or professional studies. Though legislative reform is necessary to combat this problem on a grand scale, individual borrowers can take steps to repay their loans with minimal long-term costs.
    In a paper published in April in the SIAM Journal on Financial Mathematics, Guasoni — along with Yu-Jui Huang and Saeed Khalili (both of the University of Colorado, Boulder) — developed a strategy for minimizing the overall cost of repaying student loans. “In the literature, we found mostly empirical studies discussing what borrowers are doing,” Huang said. “But what we wanted to know was rather, how should a borrower repay to minimize debt burden?”
    Students become responsible for repaying their loans a few months after they graduate or unenroll, and must contend with the loan growing at a national fixed interest rate. One option for borrowers is to repay their balances in full by a fixed maturity — the date at which a loan’s final payment is due. Another is to enroll in an income-based scheme, in which monthly payments are only due if the borrower has an income above a certain subsistence threshold. If payments are required, they are proportional to the amount the borrower makes above that threshold. After roughly 20 to 25 years, any remaining balance is forgiven but taxed as ordinary income. “The tension is between postponing payments until forgiveness and letting interest swell the loan balance over time,” Guasoni said. The tax cost of delaying payments increases exponentially with longer timeframes until forgiveness, potentially offsetting the supposed savings.
    The intuitive approach for many borrowers may be to pay off small loans as quickly as possible, since even minimum payments would extinguish the balance by the end of its term, making forgiveness irrelevant. Similarly, one may wish to minimize the payments for a large loan through an income-based scheme, especially if the loan will be forgiven in a few years anyway. However, the situation is not always as simple as it seems. “The counterintuitive part is that, if your loan is large and forgiveness is far away, it may be better to maximize payments over the first few years to keep the loan balance from exploding,” Huang said. “Then you can switch to income-based repayment and take advantage of forgiveness.”
    To investigate what is truly the optimal way to pay back a student loan, the authors created a mathematical model of a borrower who took out a federal student loan — the most common type of student loan — with a constant interest rate. The model assumes that the borrower is able to repay the loan under its original term and even possibly make additional payments; otherwise, they would have no choice but to enroll in an income-based scheme. Quickly paying off the loan leads to lower costs from compounding interest. However, the borrower’s motivation to do so is contradicted by the possibility of the remaining balance being forgiven and taxed in the future, which encourages them to delay payment until the forgiveness date.
    The mathematical model revealed several possible approaches for a borrower who wishes to minimize the overall cost of their loan. “The optimal strategy is to either (i) repay the loan as quickly as possible [if the initial balance is sufficiently low], or (ii) maximize payments up to a critical horizon (possibly now) and then minimize them through income-based repayment,” Guasoni said. The critical horizon occurs when the benefits of forgiveness begin to outweigh the compounding costs of interest on the loan balance. For large loans with a high interest rate — which are common for professional degrees — the savings from high initial payments followed by enrollment in an income-based scheme can be substantial for those who are able to afford such a plan.
    The authors provided an example of a dental school graduate with a balance of $300,000 in Direct PLUS loans that carry an interest rate of 7.08 percent (according to the American Dental Education Association, 83 percent of dental school graduates have student loan debt, with an average balance of $292,169). This graduate has a starting salary of $100,000 that will grow four percent annually, and is able to repay at most 30 percent of the income that they make above the subsistence level. If they kept up such maximal payments, they would repay the loan in less than 20 years with a total cost of $512,000.
    The example graduate could also immediately enroll in income-based repayment, paying only 10 percent of the income that they make above subsistence. After 25 years, their balance would equal $1,053,000 due to compounding interest. This balance would be forgiven and taxed as income at a 40 percent rate, yielding a total cost of $524,000. Alternatively, the graduate could use the authors’ suggested strategy and repay 30 percent of their income above subsistence for around nine years, then switch to the income-based repayment scheme. The remaining balance to be forgiven after 25 years would then be $462,000, leading to a total cost from payments and tax of $490,000 — the lowest of all the strategies. The reduction in the balance through multiple years of high payments curbs the balance’s ensuing growth during the period of minimum payments.
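    The comparison in this example can be reproduced in spirit with a short annual simulation. The subsistence threshold below ($20,000) is an assumption, and the paper's exact parameters, payment frequency and tax treatment differ, so the dollar figures will not match the article's; what carries over is the comparison across strategies, including a search over the year at which to switch from maximal to income-based payments.

```python
def loan_cost(balance, rate, salary, growth, subsistence,
              hi_frac, lo_frac, switch_year, horizon, tax):
    """Pay hi_frac of income above subsistence until switch_year, then
    lo_frac until horizon; any remaining balance is forgiven but taxed.
    Returns total cost (payments plus forgiveness tax)."""
    paid = 0.0
    for year in range(horizon):
        balance *= 1 + rate                              # interest accrues
        frac = hi_frac if year < switch_year else lo_frac
        pay = min(balance, frac * max(salary - subsistence, 0.0))
        balance -= pay
        paid += pay
        salary *= 1 + growth
        if balance <= 0:
            return paid                                  # loan fully repaid
    return paid + tax * balance                          # forgiveness tax

# Parameters loosely following the article's dental-school example;
# the $20,000 subsistence level is an assumption.
params = dict(balance=300_000, rate=0.0708, salary=100_000, growth=0.04,
              subsistence=20_000, horizon=25, tax=0.40)

income_based = loan_cost(hi_frac=0.10, lo_frac=0.10, switch_year=0, **params)
costs = {k: loan_cost(hi_frac=0.30, lo_frac=0.10, switch_year=k, **params)
         for k in range(26)}
best_switch = min(costs, key=costs.get)
print(f"pure income-based repayment: ${income_based:,.0f}")
print(f"best: pay 30% for {best_switch} years, total ${costs[best_switch]:,.0f}")
```

The switch-year search covers the whole strategy family the authors describe: switch year 0 is pure income-based repayment, switch year 25 is maximal payments throughout, and the minimizer is the critical horizon for these assumed parameters.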
    Future research could further explore the more complicated factors of student debt repayment. The authors’ model is deterministic: it does not account for the possibility that interest rates will change in the future. Such changes may compel borrowers to refinance or delay payments, and further work is necessary to determine their influence on optimal debt repayment.
    This research illuminated the way in which borrowers’ choices in their loan repayments can have a sizable impact on overall costs, especially given compounding interest. “If you have student loans, you should consider your specific options carefully and see what the total cost would be under different strategies,” Guasoni said. Huang agreed, noting that their proposed strategy may be especially beneficial for the large loans that are often held by law and dental school graduates. “Each loan is slightly different,” he said. “Our model does not capture every possible detail, but it helps to focus the attention on two possibilities: quickest full repayment or enrollment in an income-based scheme, possibly after a period of high payments.” A careful, mathematical consideration of the approach to loan repayment can help borrowers make decisions that will benefit them in the years to come.

  • Small generator captures heat given off by skin to power wearable devices

    Scientists in China have developed a small, flexible device that can convert heat emitted from human skin to electrical power. In their research, presented April 29 in the journal Cell Reports Physical Science, the team showed that the device could power an LED light in real time when worn on a wristband. The findings suggest that body temperature could someday power wearable electronics such as fitness trackers.
    The device is a thermoelectric generator (TEG) that uses temperature gradients to generate power. In this design, researchers use the difference between the warmer body temperature and the relatively cooler ambient environment to generate power.
    “This is a field with great potential,” says corresponding author Qian Zhang of Harbin Institute of Technology, Shenzhen. “TEGs can recover energy that’s lost as waste heat and thus improve the rate of power utilization.”
    Unlike traditional generators that use the energy of motion to produce power, thermoelectric generators have no moving parts, making them essentially maintenance free. These generators are installed on machines located in remote areas and on board space probes to supply energy.
    Zhang and her colleagues have been working on designing thermoelectric generators for years. With wearable devices becoming increasingly popular in recent years, the team wanted to explore whether these reliable generators could replace traditional batteries in such devices, including fitness trackers, smart watches, and biosensors.
    “Don’t underestimate the temperature difference between our body and the environment — it’s small, but our experiment shows it can still generate power,” she says.
    Conventional TEGs are usually rigid and can typically withstand fewer than 200 bending cycles. Although flexible versions can meet the bending requirement, their performance tends to be inadequate. To overcome this limitation and make the device more adaptable to wearables, the researchers attached the core electrical components to a stretchable, more adhesive polyurethane material. Tests showed that the device survived at least 10,000 repeated bending cycles without significant changes in performance.
    In addition, commercially available TEGs rely heavily on the rare metal bismuth, which does not occur naturally in large quantities. The new design partially replaces it with a magnesium-based material, which can substantially lower costs in large-scale production.
    The researchers designed a prototype of a self-powered electronic system. They connected an LED to a TEG band measuring 4.5 inches long and 1.1 inches wide, then wrapped the band around the wrist of a person whose skin temperature measured 92.9°F under ambient conditions. With that temperature difference, the generator harvested heat given off by the skin and successfully lit up the LED.
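    The wristband demonstration can be sanity-checked with textbook thermoelectric formulas. Every device parameter below (couple count, Seebeck coefficient, internal resistance) is an illustrative assumption typical of bismuth-telluride modules, not a value from the paper:

```python
def teg_output(t_skin_f, t_ambient_c, n_couples, seebeck_v_per_k, r_internal_ohm):
    """Estimate open-circuit voltage and maximum power of a TEG driven by
    the skin/ambient temperature difference."""
    t_skin_c = (t_skin_f - 32) * 5 / 9           # 92.9 °F ≈ 33.8 °C
    dT = t_skin_c - t_ambient_c                  # temperature difference, K
    v_oc = n_couples * seebeck_v_per_k * dT      # open-circuit voltage
    p_max = v_oc**2 / (4 * r_internal_ohm)       # max power into a matched load
    return v_oc, p_max

# Assumed module: 100 couples, 200 µV/K per couple, 10 Ω internal resistance
v, p = teg_output(t_skin_f=92.9, t_ambient_c=25.0,
                  n_couples=100, seebeck_v_per_k=200e-6, r_internal_ohm=10.0)
print(f"open-circuit voltage ≈ {v*1000:.0f} mV, max power ≈ {p*1e6:.0f} µW")
```

Even under these rough assumptions the output lands in the sub-milliwatt range, which is why a voltage converter is needed before such a band can drive a smart watch or pulse sensor.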
    “Our prototype already has good performance if it’s introduced to the market,” says corresponding author Feng Cao of Harbin Institute of Technology, Shenzhen. He adds that with the proper voltage converter, the system can power electronics such as smart watches and pulse sensors.
    Looking forward, the team plans to further improve the design so the device can absorb heat more efficiently.
    “There’s an increasing demand for greener energy, and TEGs fit right in, for they can turn wasted heat into power,” Cao says. “While, for example, solar energy can only be generated when there’s sun, TEGs can produce power in many scenarios — as long as there’s a temperature difference.”
    Story Source:
    Materials provided by Cell Press.

  • Silicon chip will drive next generation communications

    Researchers from Osaka University, Japan and the University of Adelaide, Australia have worked together to produce a new multiplexer made from pure silicon for terahertz-range communications in the 300-GHz band.
    “In order to control the great spectral bandwidth of terahertz waves, a multiplexer, which is used to split and join signals, is critical for dividing the information into manageable chunks that can be more easily processed and so can be transmitted faster from one device to another,” said Associate Professor Withawat Withayachumnankul from the University of Adelaide’s School of Electrical and Electronic Engineering.
    “Up until now compact and practical multiplexers have not been developed for the terahertz range. The new terahertz multiplexers, which are economical to manufacture, will be extremely useful for ultra-broadband wireless communications.
    “The shape of the chips we have developed is the key to combining and splitting channels so that more data can be processed more rapidly. Simplicity is its beauty.”
    People around the world are increasingly using mobile devices to access the internet and the number of connected devices is multiplying exponentially. Soon machines will be communicating with each other in the Internet of Things which will require even more powerful wireless networks able to transfer large volumes of data fast.
    Terahertz waves are a portion of the electromagnetic spectrum that has a raw spectral bandwidth far broader than that of conventional wireless communications, which is based upon microwaves. The team has developed ultra-compact and efficient terahertz multiplexers, thanks to a novel optical tunnelling process.

  • Blueprint for a robust quantum future

    Claiming that something has a defect normally suggests an undesirable feature. That’s not the case in solid-state systems, such as the semiconductors at the heart of modern classical electronic devices. They work because of defects introduced into the rigidly ordered arrangement of atoms in crystalline materials like silicon. Surprisingly, in the quantum world, defects also play an important role.
    Researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, the University of Chicago and scientific institutes and universities in Japan, Korea and Hungary have established guidelines that will be an invaluable resource for the discovery of new defect-based quantum systems. The international team published these guidelines in Nature Reviews Materials.
    Such systems have possible applications in quantum communications, sensing and computing, and thereby could have a transformative effect on society. Quantum communications could distribute quantum information robustly and securely over long distances, making a quantum internet possible. Quantum sensing could achieve unprecedented sensitivities for measurements of biological, astronomical, technological and military interest. Quantum computing could reliably simulate the behavior of matter down to the atomic level, and could help simulate and discover new drugs.
    The team derived their design guidelines based on an extensive review of the vast body of knowledge acquired over the last several decades on spin defects in solid-state materials.
    “The defects that interest us here are isolated distortions in the orderly arrangement of atoms in a crystal,” explained Joseph Heremans, a scientist in Argonne’s Center for Molecular Engineering and Materials Science division, as well as the University of Chicago Pritzker School of Molecular Engineering.
    Such distortions might include holes or vacancies created by the removal of atoms or impurities added as dopants. These distortions, in turn, can trap electrons within the crystal. These electrons have a property called spin, which acts as an isolated quantum system.

  • New computer model helps bring the sun into the laboratory

    Every day, the sun ejects large amounts of a hot particle soup known as plasma toward Earth where it can disrupt telecommunications satellites and damage electrical grids. Now, scientists at the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University’s Department of Astrophysical Sciences have made a discovery that could lead to better predictions of this space weather and help safeguard sensitive infrastructure.
    The discovery comes from a new computer model that predicts the behavior of the plasma in the region above the surface of the sun known as the solar corona. The model was originally inspired by a similar model that describes the behavior of the plasma that fuels fusion reactions in doughnut-shaped fusion facilities known as tokamaks.
    Fusion, the power that drives the sun and stars, combines light elements in the form of plasma — the hot, charged state of matter composed of free electrons and atomic nuclei — that generates massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
    The Princeton scientists made their findings while studying roped-together magnetic fields that loop into and out of the sun. Under certain conditions, the loops can cause hot particles to erupt from the sun’s surface in enormous burps known as coronal mass ejections. Those particles can eventually hit the magnetic field surrounding Earth and cause auroras, as well as interfere with electrical and communications systems.
    “We need to understand the causes of these eruptions to predict space weather,” said Andrew Alt, a graduate student in the Princeton Program in Plasma Physics at PPPL and lead author of the paper reporting the results in the Astrophysical Journal.
    The model relies on a new mathematical method that incorporates a novel insight from Alt and collaborators into what causes the instability. The scientists found that a type of jiggling known as the “torus instability” could cause roped magnetic fields to untether from the sun’s surface, triggering a flood of plasma.
    The torus instability loosens some of the forces keeping the ropes tied down. Once those forces weaken, another force causes the ropes to expand and lift further off the solar surface. “Our model’s ability to accurately predict the behavior of magnetic ropes indicates that our method could ultimately be used to improve space weather prediction,” Alt said.
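    The torus instability named above is standard solar-physics background: a flux rope is commonly said to become torus-unstable when the decay index n = -d ln B / d ln R of the overlying field B(R) exceeds a critical value, roughly 1.5 in idealized models. A minimal numerical sketch with a toy field profile (not anything from the paper):

```python
import numpy as np

def decay_index(R, B):
    """n(R) = -d ln B / d ln R, computed by finite differences."""
    lnR, lnB = np.log(R), np.log(B)
    return -np.gradient(lnB, lnR)

R = np.linspace(1.0, 3.0, 200)   # height above the surface (arbitrary units)
B = 1.0 / (1.0 + R**2)           # toy profile of the overlying field
n = decay_index(R, B)
unstable = R[n > 1.5]            # heights where a rope would be torus-unstable
print(f"instability sets in near R ≈ {unstable[0]:.2f}")
```

For this profile the decay index grows with height, so the rope is tied down at low altitude and loses that restraint once it is lifted past the threshold, which is the qualitative picture the experiments probe.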
    The scientists have also developed a way to more accurately translate laboratory results to conditions on the sun. Past models have relied on assumptions that made calculations easier but did not always simulate plasma precisely. The new technique relies only on raw data. “The assumptions built into previous models remove important physical effects that we want to consider,” Alt said. “Without these assumptions, we can make more accurate predictions.”
    To conduct their research, the scientists created magnetic flux ropes inside PPPL’s Magnetic Reconnection Experiment (MRX), a barrel-shaped machine designed to study the coming together and explosive breaking apart of the magnetic field lines in plasma. But flux ropes created in the lab behave differently than ropes on the sun, since, for example, the flux ropes in the lab have to be contained by a metal vessel.
    The researchers made alterations to their mathematical tools to account for these differences, ensuring that results from MRX could be translated to the sun. “There are conditions on the sun that we cannot mimic in the laboratory,” said PPPL physicist Hantao Ji, a Princeton University professor who advises Alt and contributed to the research. “So, we adjust our equations to account for the absence or presence of certain physical properties. We have to make sure our research compares apples to apples so our results will be accurate.”
    Discovery of the jiggling plasma behavior could also lead to more efficient generation of fusion-powered electricity. Magnetic reconnection and related plasma behavior occur in tokamaks as well as on the sun, so any insight into these processes could help scientists control them in the future.
    Support for this research came from the DOE, the National Aeronautics and Space Administration, and the German Research Foundation. Research partners include Princeton University, Sandia National Laboratories, the University of Potsdam, the Harvard-Smithsonian Center for Astrophysics, and the Bulgarian Academy of Sciences.
    Story Source:
    Materials provided by DOE/Princeton Plasma Physics Laboratory. Original written by Raphael Rosen.

  • Mapping the electronic states in an exotic superconductor

    Scientists characterized how the electronic states in a compound containing iron, tellurium, and selenium depend on local chemical concentrations. They discovered that superconductivity (conducting electricity without resistance), along with distinct magnetic correlations, appears when the local concentration of iron is sufficiently low; a coexisting electronic state existing only at the surface (topological surface state) arises when the concentration of tellurium is sufficiently high. Reported in Nature Materials, their findings point to the composition range necessary for topological superconductivity. Topological superconductivity could enable more robust quantum computing, which promises to deliver exponential increases in processing power.
    “Quantum computing is still in its infancy, and one of the key challenges is reducing the error rate of the computations,” said first author Yangmu Li, a postdoc in the Neutron Scattering Group of the Condensed Matter Physics and Materials Science (CMPMS) Division at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory. “Errors arise as qubits, or quantum information bits, interact with their environment. However, unlike trapped ions or solid-state qubits such as point defects in diamond, topological superconducting qubits are intrinsically protected from part of the noise. Therefore, they could support computation less prone to errors. The question is, where can we find topological superconductivity?”
    In this study, the scientists narrowed the search to one compound, a member of the iron-based superconductor family known to host topological surface states. In this compound, topological and superconducting states are not distributed uniformly across the surface. Understanding what’s behind these variations in electronic states, and how to control them, is key to enabling practical applications like topologically protected quantum computing.
    From previous research, the team knew modifying the amount of iron could switch the material from a superconducting to nonsuperconducting state. For this study, physicist Gendu Gu of the CMPMS Division grew two types of large single crystals, one with slightly more iron relative to the other. The sample with the higher iron content is nonsuperconducting; the other sample is superconducting.
    To understand whether the arrangement of electrons in the bulk of the material varied between the superconducting and nonsuperconducting samples, the team turned to spin-polarized neutron scattering. The Spallation Neutron Source (SNS), located at DOE’s Oak Ridge National Laboratory, is home to a one-of-a-kind instrument for performing this technique.
    “Neutron scattering can tell us the magnetic moments, or spins, of electrons and the atomic structure of a material,” explained corresponding author, Igor Zaliznyak, a physicist in the CMPMS Division Neutron Scattering Group who led the Brookhaven team that helped design and install the instrument with collaborators at Oak Ridge. “In order to single out the magnetic properties of electrons, we polarize the neutrons using a mirror that reflects only one specific spin direction.”
    To their surprise, the scientists observed drastically different patterns of electron magnetic moments in the two samples. Therefore, the slight alteration in the amount of iron caused a change in electronic state.

  • Driving behaviors harbor early signals of dementia

    Using naturalistic driving data and machine learning techniques, researchers at Columbia University Mailman School of Public Health and Columbia’s Fu Foundation School of Engineering and Applied Science have developed highly accurate algorithms for detecting mild cognitive impairment and dementia in older drivers. Naturalistic driving data refer to data captured through in-vehicle recording devices or other technologies in the real-world setting. These data could be processed to measure driving exposure, space and performance in great detail. The findings are published in the journal Geriatrics.
    The researchers developed random forest models, a statistical technique widely used in AI for classifying disease status, that performed exceptionally well. “Based on variables derived from the naturalistic driving data and basic demographic characteristics, such as age, sex, race/ethnicity and education level, we could predict mild cognitive impairment and dementia with 88 percent accuracy,” said Sharon Di, associate professor of civil engineering and engineering mechanics at Columbia Engineering and the study’s lead author.
    The investigators constructed 29 variables using the naturalistic driving data captured by in-vehicle recording devices from 2977 participants of the Longitudinal Research on Aging Drivers (LongROAD) project, a multisite cohort study sponsored by the AAA Foundation for Traffic Safety. At the time of enrollment, the participants were active drivers aged 65-79 years and had no significant cognitive impairment or degenerative medical conditions. Data used in this study spanned the time period from August 2015 through March 2019.
    Among the 2977 participants whose cars were instrumented with the in-vehicle recording devices, 33 were newly diagnosed with mild cognitive impairment and 31 with dementia by April 2019. The researchers trained a series of machine learning models for detecting mild cognitive impairment/dementia and found that the model based on driving variables and demographic characteristics was 88 percent accurate, much better than models based on demographic characteristics only (29 percent) and driving variables only (66 percent). Further analysis revealed that age was most predictive of mild cognitive impairment and dementia, followed by the percentage of trips traveled within 15 miles of home, race/ethnicity, minutes per trip chain (i.e., length of trips starting and ending at home), minutes per trip, and number of hard braking events with deceleration rates ≥ 0.35 g.
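    The comparison of feature sets can be sketched as follows. This is not the authors' code: the data are synthetic stand-ins, so the accuracies will not match the reported 88, 66 and 29 percent; the point is the workflow of training random forests on demographics alone, driving variables alone, and both combined.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
demo = rng.normal(size=(n, 4))      # stand-ins for age, sex, ethnicity, education
driving = rng.normal(size=(n, 29))  # stand-ins for the 29 driving variables
# Synthetic diagnosis label weakly linked to both feature groups
y = (0.8 * demo[:, 0] + driving[:, :3].sum(axis=1)
     + rng.normal(size=n) > 0).astype(int)

accs = {}
for name, X in [("demographics only", demo),
                ("driving only", driving),
                ("combined", np.hstack([demo, driving]))]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    accs[name] = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{name}: {accs[name]:.2f}")
```

In the study itself the classes were highly imbalanced (64 diagnoses among 2977 drivers), so a real replication would also need to handle imbalance, for example via class weights or resampling, which this sketch omits.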
    “Driving is a complex task involving dynamic cognitive processes and requiring essential cognitive functions and perceptual motor skills. Our study indicates that naturalistic driving behaviors can be used as comprehensive and reliable markers for mild cognitive impairment and dementia,” said Guohua Li, MD, DrPH, professor of epidemiology and anesthesiology at Columbia Mailman School of Public Health and Vagelos College of Physicians and Surgeons, and senior author. “If validated, the algorithms developed in this study could provide a novel, unobtrusive screening tool for early detection and management of mild cognitive impairment and dementia in older drivers.”
    Story Source:
    Materials provided by Columbia University’s Mailman School of Public Health.