More stories

  •

    Artificial intelligence predicts genetics of cancerous brain tumors in under 90 seconds

    Using artificial intelligence, researchers have discovered how to screen for genetic mutations in cancerous brain tumors in under 90 seconds — and possibly streamline the diagnosis and treatment of gliomas, a study suggests.
    A team of neurosurgeons and engineers at Michigan Medicine, in collaboration with investigators from New York University, University of California, San Francisco and others, developed an AI-based diagnostic screening system called DeepGlioma that uses rapid imaging to analyze tumor specimens taken during an operation and detect genetic mutations more rapidly.
    In a study of more than 150 patients with diffuse glioma, the most common and deadly primary brain tumor, the newly developed system identified mutations used by the World Health Organization to define molecular subgroups of the condition with an average accuracy over 90%. The results are published in Nature Medicine.
    “This AI-based tool has the potential to improve the access and speed of diagnosis and care of patients with deadly brain tumors,” said lead author and creator of DeepGlioma Todd Hollon, M.D., a neurosurgeon at University of Michigan Health and assistant professor of neurosurgery at U-M Medical School.
    Molecular classification is increasingly central to the diagnosis and treatment of gliomas, as the benefits and risks of surgery vary among brain tumor patients depending on their genetic makeup. In fact, patients with a specific type of diffuse glioma called astrocytomas can gain an average of five years with complete tumor removal compared to other diffuse glioma subtypes.
    However, access to molecular testing for diffuse glioma is limited and not uniformly available at centers that treat patients with brain tumors. When it is available, Hollon says, the turnaround time for results can take days, even weeks.

    “Barriers to molecular diagnosis can result in suboptimal care for patients with brain tumors, complicating surgical decision-making and selection of chemoradiation regimens,” Hollon said.
    Prior to DeepGlioma, surgeons did not have a method to differentiate diffuse gliomas during surgery. An idea that started in 2019, the system combines deep neural networks with an optical imaging method known as stimulated Raman histology, which was also developed at U-M, to image brain tumor tissue in real time.
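The WHO molecular subgroups the article references follow a simple decision logic. As an illustrative sketch only, here is how per-mutation probabilities from an imaging classifier might map to a diffuse-glioma subgroup, assuming the classifier outputs probabilities for IDH mutation and 1p/19q codeletion (part of the panel the WHO criteria use); the function, names, and threshold are hypothetical, not DeepGlioma's actual code.

```python
# Map hypothetical per-mutation probabilities to a WHO diffuse-glioma
# subgroup. The threshold and function are illustrative assumptions.

def subgroup(p_idh: float, p_1p19q: float, threshold: float = 0.5) -> str:
    """Return a WHO diffuse-glioma subgroup from mutation probabilities."""
    idh_mutant = p_idh >= threshold       # IDH-1/2 mutation predicted?
    codeleted = p_1p19q >= threshold      # 1p/19q codeletion predicted?
    if not idh_mutant:
        return "glioblastoma, IDH-wildtype"
    if codeleted:
        return "oligodendroglioma, IDH-mutant and 1p/19q-codeleted"
    return "astrocytoma, IDH-mutant"

print(subgroup(0.92, 0.10))  # astrocytoma, IDH-mutant
```

The astrocytoma branch is the subtype the article singles out as gaining an average of five years from complete tumor removal.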
    “DeepGlioma creates an avenue for accurate and more timely identification that would give providers a better chance to define treatments and predict patient prognosis,” Hollon said.
    Even with optimal standard-of-care treatment, patients with diffuse glioma face limited treatment options. The median survival time for patients with malignant diffuse gliomas is only 18 months.
    While the development of medications to treat the tumors is essential, fewer than 10% of patients with glioma are enrolled in clinical trials, which often limit participation by molecular subgroups. Researchers hope that DeepGlioma can be a catalyst for early trial enrollment.
    “Progress in the treatment of the most deadly brain tumors has been limited in the past decades, in part because it has been hard to identify the patients who would benefit most from targeted therapies,” said senior author Daniel Orringer, M.D., an associate professor of neurosurgery and pathology at NYU Grossman School of Medicine, who developed stimulated Raman histology. “Rapid methods for molecular classification hold great promise for rethinking clinical trial design and bringing new therapies to patients.”
    Additional authors include Cheng Jiang, Asadur Chowdury, Akhil Kondepudi, Arjun Adapa, Wajd Al-Holou, Jason Heth, Oren Sagher, Maria Castro, Sandra Camelo-Piragua, Honglak Lee, all of University of Michigan, Mustafa Nasir-Moin, John Golfinos, Matija Snuderl, all of New York University, Alexander Aabedi, Pedro Lowenstein, Mitchel Berger, Shawn Hervey-Jumper, all of University of California, San Francisco, Lisa Irina Wadiura, Georg Widhalm, both of Medical University Vienna, Volker Neuschmelting, David Reinecke, Niklas von Spreckelsen, all of University Hospital Cologne, and Christian Freudiger, Invenio Imaging, Inc.
    This work was supported by the National Institutes of Health, Cook Family Brain Tumor Research Fund, the Mark Trauner Brain Research Fund, the Zenkel Family Foundation, Ian’s Friends Foundation and the UM Precision Health Investigators Awards grant program.

  •

    New in-home AI tool monitors the health of elderly residents

    Engineers are harnessing artificial intelligence (AI) and wireless technology to unobtrusively monitor elderly people in their living spaces and provide early detection of emerging health problems.
    The new system, built by researchers at the University of Waterloo, follows an individual’s activities accurately and continuously as it gathers vital information without the need for a wearable device and alerts medical experts to the need to step in and provide help.
    “After more than five years of working on this technology, we’ve demonstrated that very low-power, millimetre-wave radio systems enabled by machine learning and artificial intelligence can be reliably used in homes, hospitals and long-term care facilities,” said Dr. George Shaker, an adjunct associate professor of electrical and computer engineering.
    “An added bonus is that the system can alert healthcare workers to sudden falls, without the need for privacy-intrusive devices such as cameras.”
    The work by Shaker and his colleagues comes as overburdened public healthcare systems struggle to meet the urgent needs of rapidly growing elderly populations.
    While a senior’s physical or mental condition can change rapidly, it’s almost impossible to track their movements and discover problems 24/7 — even if they live in long-term care. In addition, other existing systems for monitoring gait — how a person walks — are expensive, difficult to operate, impractical for clinics and unsuitable for homes.

    The new system represents a major step forward and works this way: first, a wireless transmitter sends low-power waveforms across an interior space, such as a long-term care room, apartment or home.
    As the waveforms bounce off different objects and the people being monitored, they’re captured and processed by a receiver. That information goes into an AI engine which deciphers the processed waves for detection and monitoring applications.
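The transmit, reflect, receive, decipher pipeline described above can be sketched with a toy FMCW-style example: a reflector at some range shows up as a peak at a particular beat frequency in the spectrum of the received signal. All parameters here (sample rate, beat frequency, noise level) are illustrative assumptions; the Waterloo system's actual waveforms and models are not described in the article.

```python
import numpy as np

np.random.seed(0)       # deterministic toy example

fs = 1_000_000          # sample rate in Hz (assumed)
n = 1024                # samples per chirp (assumed)
f_beat = 50_000         # beat frequency a reflector at some range would produce

t = np.arange(n) / fs
rx = np.cos(2 * np.pi * f_beat * t) + 0.1 * np.random.randn(n)  # echo + noise

# Range processing: the beat frequency (hence the target's range) appears
# as a peak in the FFT of the received signal.
spectrum = np.abs(np.fft.rfft(rx))
peak_bin = int(np.argmax(spectrum[1:])) + 1   # skip the DC bin
peak_hz = peak_bin * fs / n

print(f"detected beat frequency: {peak_hz:.0f} Hz")
```

A downstream machine-learning model would consume features extracted this way (range, Doppler, micro-motion) to classify activities such as walking, sleeping, or a fall.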
    The system, which employs extremely low-power radar technology, can be mounted simply on a ceiling or by a wall and doesn’t suffer the drawbacks of wearable monitoring devices, which can be uncomfortable and require frequent battery charging.
    “Using our wireless technology in homes and long-term care homes can effectively monitor various activities such as sleeping, watching TV, eating and the frequency of bathroom use,” Shaker said.
    “Currently, the system can alert care workers to a general decline in mobility, increased likelihood of falls, possibility of a urinary tract infection, and the onset of several other medical conditions.”
    Waterloo researchers have partnered with a Canadian company, Gold Sentintel, to commercialize the technology, which has already been installed in several long-term care homes.
    A paper on the work, “AI-Powered Non-Contact In-Home Gait Monitoring and Activity Recognition System Based on mm-Wave FMCW Radar and Cloud Computing,” appears in the IEEE Internet of Things Journal.
    Doctoral student Hajar Abedi was the lead author, with contributions from Ahmad Ansariyan, Dr. Plinio Morita, Dr. Jen Boger and Dr. Alexander Wong.

  •

    Paper written using ChatGPT demonstrates opportunities and challenges of AI in academia

    ChatGPT has the potential to create increasing and exciting opportunities — but also poses significant challenges — for the academic community, according to an innovative study written in large part using the software.
    Launched in November 2022, ChatGPT is the latest chatbot and artificial intelligence (AI) platform touted as having the potential to revolutionise research and education.
    However, as it becomes ever more advanced, the technology has also prompted concerns across the education sector about academic honesty and plagiarism.
    To address some of these, the new study directly uses ChatGPT to demonstrate how sophisticated large language models (LLMs) have become, but also the steps that can be taken to ensure their influence remains a positive one.
    Published in the peer-reviewed journal Innovations in Education and Teaching International, the research was conceived by academics from Plymouth Marjon University and the University of Plymouth.
    For the majority of the paper, they used a series of prompts and questions to encourage ChatGPT to produce content in an academic style. These included: “Write an original academic paper, with references, describing the implications of GPT-3 for assessment in higher education”; “How can academics prevent students plagiarising using GPT-3?”; “Are there any technologies which will check if work has been written by a chatbot?”; and “Produce several witty and intelligent titles for an academic research paper on the challenges universities face in ChatGPT and plagiarism.”
    Once the text was generated, they copied and pasted the output into the manuscript, ordered it broadly following the structure suggested by ChatGPT, and then inserted genuine references throughout.

    This process was only revealed to readers in the paper’s Discussion section, which was written directly by the researchers without the software’s input.
    In that section, the study’s authors highlight that the text produced by ChatGPT — while much more sophisticated than previous innovations in this area — can be relatively formulaic, and that a number of existing AI-detection tools would pick up on that.
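One simple signal detectors can use to flag relatively formulaic text, in the spirit of the tools the authors mention, is how much sentence length varies (sometimes called "burstiness"); LLM output is often more uniform than human prose. The metric below is a toy illustration of that one idea, not the code of any real detection tool.

```python
import statistics

def burstiness(text: str) -> float:
    """Population std deviation of sentence lengths in words.

    Low values indicate uniform, formulaic sentences; higher values
    indicate the length variation typical of human writing.
    """
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

uniform = "This is a sentence. Here is another one. Now a third follows."
varied = ("Short. This one, by contrast, rambles on for quite a few more "
          "words than necessary. Done.")
print(burstiness(uniform) < burstiness(varied))  # True
```

Real detectors combine many such statistical signals, which is why formulaic chatbot output tends to get flagged.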
    However, they say their findings should serve as a wake-up call to university staff to think very carefully about the design of their assessments and ways to ensure that academic dishonesty is clearly explained to students and minimised.
    Professor Debby Cotton, Director of Academic Practice and Professor of Higher Education at Plymouth Marjon University, is the study’s lead author. She said: “This latest AI development obviously brings huge challenges for universities, not least in testing student knowledge and teaching writing skills — but looking positively it is an opportunity for us to rethink what we want students to learn and why. I’d like to think that AI would enable us to automate some of the more administrative tasks academics do, allowing more time to be spent working with students.”
    Corresponding author Dr Peter Cotton, Associate Professor in Ecology at the University of Plymouth, added: “Banning ChatGPT, as was done within New York schools, can only be a short-term solution while we think how to address the issues. AI is already widely accessible to students outside their institutions, and companies like Microsoft and Google are rapidly incorporating it into search engines and Office suites. The chat (sic) is already out of the bag, and the challenge for universities will be to adapt to a paradigm where the use of AI is the expected norm.”
    Dr Reuben Shipway, Lecturer in Marine Biology at the University of Plymouth, said: “With any new revolutionary technology — and this is a revolutionary technology — there will be winners and losers. The losers will be those that fail to adapt to a rapidly changing landscape. The winners will take a pragmatic approach and leverage this technology to their advantage.”

  •

    Optical switching at record speeds opens door for ultrafast, light-based electronics and computers

    Imagine a home computer operating 1 million times faster than the most expensive hardware on the market. Now, imagine that level of computing power as the industry standard. University of Arizona researchers hope to pave the way for that reality using light-based optical computing, a marked improvement from the semiconductor-based transistors that currently run the world.
    “Semiconductor-based transistors are in all of the electronics that we use today,” said Mohammed Hassan, assistant professor of physics and optical sciences. “They’re part of every industry — from kids’ toys to rockets — and are the main building blocks of electronics.”
    Hassan led an international team of researchers that published the research article “Ultrafast optical switching and data encoding on synthesized light fields” in Science Advances in February. UArizona physics postdoctoral research associate Dandan Hui and physics graduate student Husain Alqattan also contributed to the article, in addition to researchers from Ohio State University and the Ludwig Maximilian University of Munich.
    Semiconductors in electronics rely on electrical signals transmitted via microwaves to switch — either allow or prevent — the flow of electricity and data, represented as either “on” or “off.” Hassan said the future of electronics will be based instead on using laser light to control electrical signals, opening the door for the establishment of “optical transistors” and the development of ultrafast optical electronics.
    Since the invention of semiconductor transistors in the 1940s, technological advancement has centered on increasing the speed at which electric signals can be generated — measured in hertz. According to Hassan, the fastest semiconductor transistors in the world can operate at a speed of more than 800 gigahertz. Data transfer at that frequency is measured at a scale of picoseconds, or one trillionth of a second.
    Computer processing power has increased steadily since the introduction of the semiconductor transistor, though Hassan said one of the primary concerns in developing faster technology is that the heat generated by continuing to add transistors to a microchip would eventually require more energy to cool than can pass through the chip.
    In their article, Hassan and his collaborators discuss using all-optical switching of a light signal on and off to reach data transfer speeds exceeding a petahertz, measured at the attosecond time scale. An attosecond is one quintillionth of a second, meaning data could be transferred 1 million times faster than with the fastest semiconductor transistors.
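The time scales quoted above can be checked with simple arithmetic: a signal's period is the reciprocal of its frequency, and the gap between picoseconds and attoseconds is the factor of one million the article cites.

```python
# One cycle at 800 GHz lasts 1/(800e9 Hz) = 1.25 picoseconds, matching the
# article's picosecond scale for the fastest semiconductor transistors.
GIGA = 1e9
period_800ghz = 1 / (800 * GIGA)          # seconds per cycle
print(f"{period_800ghz * 1e12:.2f} ps")   # 1.25 ps

# Attoseconds (1e-18 s) are a million times shorter than picoseconds
# (1e-12 s), which is the speedup factor the article quotes.
picosecond, attosecond = 1e-12, 1e-18
print(f"ratio: {picosecond / attosecond:.0f}")
```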
    While optical switches were already shown to achieve information processing speeds faster than that of semiconductor transistor-based technology, Hassan and his co-authors were able to register the on and off signals from a light source happening at the scale of billionths of a second. This was accomplished by taking advantage of a characteristic of fused silica, a glass often used in optics. Fused silica can instantaneously change its reflectivity, and by using ultrafast lasers, Hassan and his team were able to register changes in a light’s signal at the attosecond time scale. The work also demonstrated the possibility of sending data in the form of “one” and “zero” representing on and off via light at previously impossible speeds.
    “This new advancement would also allow the encoding of data on ultrafast laser pulses, which would increase the data transfer speed and could be used in long-distance communications from Earth into deep space,” Hassan said. “This promises to increase the limiting speed of data processing and information encoding and open a new realm of information technology.”
    The project was funded by a $1.4 million grant awarded to Hassan in 2018 by the Gordon and Betty Moore Foundation, an organization that aims “to create positive outcomes for future generations” by supporting research into scientific discovery, environmental conservation and patient care. The article was also based on work supported by the United States Air Force Office of Scientific Research’s Young Investigator Research Program.

  •

    Robot caterpillar demonstrates new approach to locomotion for soft robotics

    Researchers at North Carolina State University have demonstrated a caterpillar-like soft robot that can move forward, backward and dip under narrow spaces. The caterpillar-bot’s movement is driven by a novel pattern of silver nanowires that use heat to control the way the robot bends, allowing users to steer the robot in either direction.
    “A caterpillar’s movement is controlled by local curvature of its body — its body curves differently when it pulls itself forward than it does when it pushes itself backward,” says Yong Zhu, corresponding author of a paper on the work and the Andrew A. Adams Distinguished Professor of Mechanical and Aerospace Engineering at NC State. “We’ve drawn inspiration from the caterpillar’s biomechanics to mimic that local curvature, and use nanowire heaters to control similar curvature and movement in the caterpillar-bot.
    “Engineering soft robots that can move in two different directions is a significant challenge in soft robotics,” Zhu says. “The embedded nanowire heaters allow us to control the movement of the robot in two ways. We can control which sections of the robot bend by controlling the pattern of heating in the soft robot. And we can control the extent to which those sections bend by controlling the amount of heat being applied.”
    The caterpillar-bot consists of two layers of polymer, which respond differently when exposed to heat. The bottom layer shrinks, or contracts, when exposed to heat. The top layer expands when exposed to heat. A pattern of silver nanowires is embedded in the expanding layer of polymer. The pattern includes multiple lead points where researchers can apply an electric current. The researchers can control which sections of the nanowire pattern heat up by applying an electric current to different lead points, and can control the amount of heat by applying more or less current.
    “We demonstrated that the caterpillar-bot is capable of pulling itself forward and pushing itself backward,” says Shuang Wu, first author of the paper and a postdoctoral researcher at NC State. “In general, the more current we applied, the faster it would move in either direction. However, we found that there was an optimal cycle, which gave the polymer time to cool — effectively allowing the ‘muscle’ to relax before contracting again. If we tried to cycle the caterpillar-bot too quickly, the body did not have time to ‘relax’ before contracting again, which impaired its movement.”
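The control scheme described above (choose which lead points receive current, then choose how much) can be sketched with Joule heating, P = I²R, as the driver of curvature. The resistance, currents, and bend-per-watt coefficient below are invented for illustration and are not values from the paper.

```python
SECTION_RESISTANCE = 5.0   # ohms per nanowire heater section (assumed)
BEND_PER_WATT = 12.0       # degrees of curvature per watt of heating (assumed)

def bend_angles(currents: dict[str, float]) -> dict[str, float]:
    """Map per-section drive currents (A) to bend angles (degrees).

    Joule heating P = I^2 * R sets how much each section bends; which
    sections receive current sets where the body curves.
    """
    return {section: BEND_PER_WATT * SECTION_RESISTANCE * amps ** 2
            for section, amps in currents.items()}

# Heat the front section to pull forward; leave the rear unpowered so it
# can cool ("relax") before the next contraction, per the optimal cycle
# described above.
print(bend_angles({"front": 0.5, "rear": 0.0}))
```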
    The researchers also demonstrated that the caterpillar-bot’s movement could be controlled to the point where users were able to steer it under a very low gap — similar to guiding the robot to slip under a door. In essence, the researchers could control both forward and backward motion as well as how high the robot bent upwards at any point in that process.
    “This approach to driving motion in a soft robot is highly energy efficient, and we’re interested in exploring ways that we could make this process even more efficient,” Zhu says. “Additional next steps include integrating this approach to soft robot locomotion with sensors or other technologies for use in various applications — such as search-and-rescue devices.”
    The work was done with support from the National Science Foundation, under grants 2122841, 2005374 and 2126072; and from the National Institutes of Health, under grant number 1R01HD108473.

  •

    Biodegradable artificial muscles: Going green in the field of soft robotics

    Artificial muscles are a progressing technology that could one day enable robots to function like living organisms. Such muscles open up new possibilities for how robots can shape the world around us: from assistive wearable devices that can redefine our physical abilities in old age, to rescue robots that can navigate rubble in search of the missing. But just because artificial muscles can have a strong societal impact during use doesn’t mean they have to leave a strong environmental impact after use.
    The topic of sustainability in soft robotics has now been brought into focus by an international team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart, Germany, the Johannes Kepler University (JKU) in Linz, Austria, and the University of Colorado Boulder (CU Boulder) in the USA. The scientists collaborated to design a fully biodegradable, high-performance artificial muscle based on gelatin, oil, and bioplastics. They show the potential of this biodegradable technology by using it to animate a robotic gripper, which could be especially useful in single-use deployments such as waste collection. At the end of life, these artificial muscles can be disposed of in municipal compost bins; under monitored conditions, they fully biodegrade within six months.
    “We see an urgent need for sustainable materials in the accelerating field of soft robotics. Biodegradable parts could offer a sustainable solution especially for single-use applications, like for medical operations, search-and-rescue missions, and manipulation of hazardous substances. Instead of accumulating in landfills at the end of product life, the robots of the future could become compost for future plant growth,” says Ellen Rumley, a visiting scientist from CU Boulder working in the Robotic Materials Department at MPI-IS. Rumley is co-first author of the paper “Biodegradable electrohydraulic actuators for sustainable soft robots” which will be published in Science Advances on March 22, 2023.
    Specifically, the team of researchers built an electrically driven artificial muscle called HASEL. In essence, HASELs are oil-filled plastic pouches that are partially covered by a pair of electrical conductors called electrodes. Applying a high voltage across the electrode pair causes opposing charges to build on them, generating a force between them that pushes oil to an electrode-free region of the pouch. This oil migration causes the pouch to contract, much like a real muscle. The key requirement for HASELs to deform is that the materials making up the plastic pouch and oil are electrical insulators, which can sustain the high electrical stresses generated by the charged electrodes.
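The force that squeezes the oil is the Maxwell (electrostatic) pressure across the dielectric, roughly p ≈ ½·ε0·εr·E² with field E = V/d. The snippet below estimates its order of magnitude; the voltage, film thickness, and relative permittivity are assumed illustrative values, not figures from the paper.

```python
# Back-of-envelope Maxwell pressure for an electrohydraulic (HASEL) pouch.
EPS0 = 8.854e-12    # vacuum permittivity, F/m
eps_r = 3.0         # relative permittivity of the pouch film (assumed)
voltage = 8_000.0   # applied voltage, V ("a few thousand volts", assumed)
thickness = 40e-6   # combined dielectric thickness, m (assumed)

field = voltage / thickness                  # electric field, V/m
pressure = 0.5 * EPS0 * eps_r * field ** 2   # electrostatic pressure, Pa

print(f"E = {field:.1e} V/m, p = {pressure / 1e3:.0f} kPa")
```

Pressures of this order, acting over the electrode area, are what displace the oil and contract the pouch like a muscle.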
    One of the challenges for this project was to develop a conductive, soft, and fully biodegradable electrode. Researchers at Johannes Kepler University created a recipe based on a mixture of biopolymer gelatin and salts that can be directly cast onto HASEL actuators. “It was important for us to make electrodes suitable for these high-performance applications, but with readily available components and an accessible fabrication strategy. Since our presented formulation can be easily integrated in various types of electrically driven systems, it serves as a building block for future biodegradable applications,” states David Preninger, co-first author for this project and a scientist at the Soft Matter Physics Division at JKU.
    The next step was finding suitable biodegradable plastics. Engineers who develop such materials are mainly concerned with properties like degradation rate and mechanical strength, not with electrical insulation, which is a requirement for HASELs that operate at a few thousand volts. Nonetheless, some bioplastics showed good material compatibility with gelatin electrodes and sufficient electrical insulation. HASELs made from one specific material combination were even able to withstand 100,000 actuation cycles at several thousand volts without signs of electrical failure or loss in performance. These biodegradable artificial muscles are electromechanically competitive with their non-biodegradable counterparts, an exciting result for promoting sustainability in artificial muscle technology.
    “By showing the outstanding performance of this new materials system, we are giving an incentive for the robotics community to consider biodegradable materials as a viable material option for building robots,” Ellen Rumley continues. “The fact that we achieved such great results with bio-plastics hopefully also motivates other material scientists to create new materials with optimized electrical performance in mind.”
    With green technology becoming ever more present, the team’s research project is an important step towards a paradigm shift in soft robotics. Using biodegradable materials for building artificial muscles is just one step towards paving a future for sustainable robotic technology.

  •

    Simulated terrible drivers cut the time and cost of AV testing by a factor of one thousand

    The push toward truly autonomous vehicles has been hindered by the cost and time associated with safety testing, but a new system developed at the University of Michigan shows that artificial intelligence can reduce the testing miles required by 99.99%.
    It could kick off a paradigm shift that enables manufacturers to more quickly verify whether their autonomous vehicle technology can save lives and reduce crashes. In a simulated environment, vehicles trained by artificial intelligence perform perilous maneuvers, forcing the AV to make decisions that confront drivers only rarely on the road but are needed to better train the vehicles.
    To repeatedly encounter those kinds of situations for data collection, real world test vehicles need to drive for hundreds of millions to hundreds of billions of miles.
    “The safety critical events — the accidents, or the near misses — are very rare in the real world, and oftentimes AVs have difficulty handling them,” said Henry Liu, U-M professor of civil engineering and director of both Mcity and the Center for Connected and Automated Transportation, a regional transportation research center funded by the U.S. Department of Transportation.
    U-M researchers refer to the problem as the “curse of rarity,” and they’re tackling it by learning from real-world traffic data that contains rare safety-critical events. Testing conducted on test tracks mimicking urban as well as highway driving showed that the AI-trained virtual vehicles can accelerate the testing process by thousands of times. The study appears on the cover of Nature.
    “The AV test vehicles we’re using are real, but we’ve created a mixed reality testing environment. The background vehicles are virtual, which allows us to train them to create challenging scenarios that only happen rarely on the road,” Liu said.

    U-M’s team used an approach to train the background vehicles that strips away nonsafety-critical information from the driving data used in the simulation. Basically, it gets rid of the long spans when other drivers and pedestrians behave in responsible, expected ways — but preserves dangerous moments that demand action, such as another driver running a red light.
    By using only safety-critical data to train the neural networks that make maneuver decisions, test vehicles can encounter more of those rare events in a shorter amount of time, making testing much cheaper.
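The curation step described above can be sketched as a filter over a driving log that keeps only frames flagged as safety-critical, here using a simple time-to-collision threshold. The log format, threshold, and criticality metric are hypothetical stand-ins, not U-M's actual pipeline.

```python
TTC_CRITICAL_S = 2.0  # keep frames with time-to-collision under 2 s (assumed)

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if the gap keeps closing at the current rate."""
    return gap_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

log = [
    {"gap_m": 80.0, "closing_speed_mps": 2.0},   # routine following
    {"gap_m": 12.0, "closing_speed_mps": 10.0},  # cut-in, TTC = 1.2 s
    {"gap_m": 50.0, "closing_speed_mps": 0.0},   # no closing at all
    {"gap_m": 6.0,  "closing_speed_mps": 8.0},   # near miss, TTC = 0.75 s
]

# Discard the long uneventful stretches; keep only the dangerous moments
# that are worth training the background vehicles on.
critical = [f for f in log
            if time_to_collision(f["gap_m"], f["closing_speed_mps"]) < TTC_CRITICAL_S]
print(f"kept {len(critical)} of {len(log)} frames for training")  # kept 2 of 4
```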
    “Dense reinforcement learning will unlock the potential of AI for validating the intelligence of safety-critical autonomous systems such as AVs, medical robotics and aerospace systems,” said Shuo Feng, assistant professor in the Department of Automation at Tsinghua University and former assistant research scientist at the U-M Transportation Research Institute.
    “It also opens the door for accelerated training of safety-critical autonomous systems by leveraging AI-based testing agents, which may create a symbiotic relationship between testing and training, accelerating both fields.”
    And it’s clear that training, along with the time and expense involved, is an impediment. An October Bloomberg article stated that although robotaxi leader Waymo’s vehicles had driven 20 million miles over the previous decade, far more data was needed.
    “That means,” the author wrote, “its cars would have to drive an additional 25 times their total before we’d be able to say, with even a vague sense of certainty, that they cause fewer deaths than bus drivers.”
    Testing was conducted at Mcity’s urban environment in Ann Arbor, as well as the highway test track at the American Center for Mobility in Ypsilanti.
    Launched in 2015, Mcity was the world’s first purpose-built test environment for connected and autonomous vehicles. With new support from the National Science Foundation, outside researchers will soon be able to run remote, mixed reality tests using both the simulation and physical test track, similar to those reported in this study.
    Real-world data sets that support Mcity simulations are collected from smart intersections in Ann Arbor and Detroit, with more intersections to be equipped. Each intersection is fitted with privacy-preserving sensors to capture and categorize each road user, identifying its speed and direction. The research was funded by the Center for Connected and Automated Transportation and the National Science Foundation.

  •

    Semiconductor lattice marries electrons and magnetic moments

    A model system created by stacking a pair of monolayer semiconductors is giving physicists a simpler way to study confounding quantum behavior, from heavy fermions to exotic quantum phase transitions.
    The group’s paper, “Gate-Tunable Heavy Fermions in a Moiré Kondo Lattice,” published March 15 in Nature. The lead author is postdoctoral fellow Wenjin Zhao in the Kavli Institute at Cornell.
    The project was led by Kin Fai Mak, professor of physics in the College of Arts and Sciences, and Jie Shan, professor of applied and engineering physics in Cornell Engineering and in A&S, the paper’s co-senior authors. Both researchers are members of the Kavli Institute; they came to Cornell through the provost’s Nanoscale Science and Microsystems Engineering (NEXT Nano) initiative.
    The team set out to address what is known as the Kondo effect, named after Japanese theoretical physicist Jun Kondo. About six decades ago, experimental physicists discovered that by taking a metal and substituting even a small number of atoms with magnetic impurities, they could scatter the material’s conduction electrons and radically alter its resistivity.
    That phenomenon puzzled physicists, but Kondo explained it with a model that showed how conduction electrons can “screen” the magnetic impurities, such that the electron spin pairs with the spin of a magnetic impurity in opposite directions, forming a singlet.
    While the Kondo impurity problem is now well understood, the Kondo lattice problem — one with a regular lattice of magnetic moments instead of random magnetic impurities — is much more complicated and continues to stump physicists. Experimental studies of the Kondo lattice problem usually involve intermetallic compounds of rare earth elements, but these materials have their own limitations.

    “When you move all the way down to the bottom of the Periodic Table, you end up with something like 70 electrons in an atom,” Mak said. “The electronic structure of the material becomes so complicated. It is very difficult to describe what’s going on even without Kondo interactions.”
    The researchers simulated the Kondo lattice by stacking ultrathin monolayers of two semiconductors: molybdenum ditelluride, tuned to a Mott insulating state, and tungsten diselenide, which was doped with itinerant conduction electrons. These materials are much simpler than bulky intermetallic compounds, and they are stacked with a clever twist. By rotating the layers at a 180-degree angle, their overlap results in a moiré lattice pattern that traps individual electrons in tiny slots, similar to eggs in an egg carton.
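The moiré pattern's scale can be estimated from the lattice mismatch of the two crystals: for a small fractional mismatch δ between aligned (or 180-degree-rotated) layers, the moiré period is roughly a/δ. The calculation below uses textbook lattice constants for the two materials; it is a back-of-envelope figure, not a value from the paper.

```python
# Moiré period estimate for a MoTe2/WSe2 heterobilayer.
a_mote2 = 0.352  # MoTe2 lattice constant, nm (textbook value)
a_wse2 = 0.328   # WSe2 lattice constant, nm (textbook value)

mismatch = abs(a_mote2 - a_wse2) / a_wse2  # fractional lattice mismatch
a_moire = a_mote2 / mismatch               # approximate moiré period, nm

print(f"mismatch = {mismatch:.1%}, moire period = {a_moire:.1f} nm")
```

A period of roughly 5 nm means each "slot" of the egg-carton potential is far larger than an atomic spacing, which is what lets it trap one electron per site.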
    This configuration avoids the complication of dozens of electrons jumbling together in the rare earth elements. And instead of requiring chemistry to prepare the regular array of magnetic moments in the intermetallic compounds, the simplified Kondo lattice only needs a battery. When a voltage is applied just right, the material is ordered into forming a lattice of spins, and when one dials to a different voltage, the spins are quenched, producing a continuously tunable system.
    “Everything becomes much simpler and much more controllable,” Mak said.
    The researchers were able to continuously tune the electron mass and density of the spins, which cannot be done in a conventional material, and in the process they observed that the electrons dressed with the spin lattice can become 10 to 20 times heavier than the “bare” electrons, depending on the voltage applied.
    The tunability can also induce quantum phase transitions whereby heavy electrons turn into light electrons with, in between, the possible emergence of a “strange” metal phase, in which electrical resistance increases linearly with temperature. The realization of this type of transition could be particularly useful for understanding the high-temperature superconducting phenomenology in copper oxides.
    “Our results could provide a laboratory benchmark for theorists,” Mak said. “In condensed matter physics, theorists are trying to deal with the complicated problem of a trillion interacting electrons. It would be great if they don’t have to worry about other complications, such as chemistry and material science, in real materials. So they often study these materials with a ‘spherical cow’ Kondo lattice model. In the real world you cannot create a spherical cow, but in our material now we’ve created one for the Kondo lattice.”
    Co-authors include doctoral students Bowen Shen and Zui Tao; postdoctoral researchers Kaifei Kang and Zhongdong Han; and researchers from the National Institute for Materials Science in Tsukuba, Japan.
    The research was primarily supported by the Air Force Office of Scientific Research, the National Science Foundation, the U.S. Department of Energy and the Gordon and Betty Moore Foundation.