More stories

  • No 'second law of entanglement' after all

    When two microscopic systems are entangled, their properties are linked to each other irrespective of the physical distance between the two. Manipulating this uniquely quantum phenomenon is what allows for quantum cryptography, communication, and computation. While parallels have been drawn between quantum entanglement and the classical physics of heat, new research demonstrates the limits of this comparison. Entanglement is even richer than we have given it credit for.
    The power of the second law
    The second law of thermodynamics is often considered to be one of only a few physical laws that are absolutely and unquestionably true. The law states that the amount of ‘entropy’ — a physical property — of any closed system can never decrease. It adds an ‘arrow of time’ to everyday occurrences, determining which processes are reversible and which are not. It explains why an ice cube placed on a hot stove will always melt, and why compressed gas will always fly out of its container (and never back in) when a valve is opened to the atmosphere.
    Only states of equal entropy and energy can be reversibly converted from one to the other. This reversibility condition led to the discovery of thermodynamic processes such as the (idealised) Carnot cycle, which sets an upper limit on how efficiently one can convert heat into work, or the other way around, by cycling a closed system through different temperatures and pressures. Our understanding of this process underpinned the rapid economic development during the Western Industrial Revolution.
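    As a textbook illustration of this bound (standard material, not taken from the research described here), a Carnot engine operating between a hot reservoir at temperature \(T_h\) and a cold reservoir at \(T_c\) has efficiency

    \[ \eta_{\mathrm{Carnot}} = \frac{W}{Q_h} = 1 - \frac{T_c}{T_h}, \]

    so an engine cycling between 500 K and 300 K can convert at most 40% of the heat it absorbs into work.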
    Quantum entropy
    The beauty of the second law of thermodynamics is its applicability to any macroscopic system, regardless of the microscopic details. In quantum systems, one of these details may be entanglement: a quantum connection that makes separated components of the system share properties. Intriguingly, quantum entanglement shares many profound similarities with thermodynamics, even though quantum systems are mostly studied in the microscopic regime. Scientists have uncovered a notion of ‘entanglement entropy’ that precisely mimics the role of the thermodynamic entropy, at least for idealised quantum systems that are perfectly isolated from their surroundings.
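    For an idealised (pure) state \(|\psi\rangle_{AB}\) shared between parts A and B, this entanglement entropy is the von Neumann entropy of either part's reduced state — a standard definition, spelled out here for concreteness:

    \[ S(\rho_A) = -\operatorname{Tr}\left(\rho_A \log \rho_A\right), \qquad \rho_A = \operatorname{Tr}_B\, |\psi\rangle\langle\psi|_{AB}. \]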

    “Quantum entanglement is a key resource that underlies much of the power of future quantum computers. To make effective use of it, we need to learn how to manipulate it,” says quantum information researcher Ludovico Lami. A fundamental question became whether entanglement can always be reversibly manipulated, in direct analogy to the Carnot cycle. Crucially, this reversibility would need to hold, at least in theory, even for noisy (‘mixed’) quantum systems that have not been kept perfectly isolated from their environment.
    It was conjectured that a ‘second law of entanglement’ could be established, embodied in a single function that would generalise the entanglement entropy and govern all entanglement manipulation protocols. This conjecture featured in a famous list of open problems in quantum information theory.
    No second law of entanglement
    Resolving this long-standing open question, research carried out by Lami (previously at the University of Ulm and currently at QuSoft and the University of Amsterdam) and Bartosz Regula (University of Tokyo) demonstrates that manipulation of entanglement is fundamentally irreversible, putting to rest any hopes of establishing a second law of entanglement. This new result relies on the construction of a particular quantum state which is very ‘expensive’ to create using pure entanglement. Creating this state always results in the loss of some of that entanglement, as the invested entanglement cannot be fully recovered. As a result, it is inherently impossible to transform this state into another and back again. The existence of such states was previously unknown.
    Because the approach used here does not presuppose what exact transformation protocols are used, it rules out the reversibility of entanglement in all possible settings. It applies to all protocols, assuming they don’t generate new entanglement themselves. Lami explains: “Using entangling operations would be like running a distillery in which alcohol from elsewhere is secretly added to the beverage.”
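    In the standard terminology of entanglement theory (a gloss on the result, not the authors' own notation), reversibility would require the entanglement cost \(E_C\) — the rate of pure entanglement needed to create a state — to equal the distillable entanglement \(E_D\), the rate that can be extracted back out. The state constructed by Lami and Regula instead satisfies

    \[ E_D(\rho) < E_C(\rho), \]

    even when every protocol that generates no entanglement of its own is permitted, which is exactly the irreversibility described above.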
    Lami: “We can conclude that no single quantity, such as the entanglement entropy, can tell us everything there is to know about the allowed transformations of entangled physical systems. The theory of entanglement and thermodynamics are thus governed by fundamentally different and incompatible sets of laws.”
    This may mean that describing quantum entanglement is not as simple as scientists had hoped. Rather than being a drawback, however, the vastly greater complexity of the theory of entanglement compared to the classical laws of thermodynamics may allow us to use entanglement to achieve feats that would otherwise be completely inconceivable. “For now, what we know for certain is that entanglement hides an even richer and more complicated structure than we had given it credit for,” concludes Lami.

  • Click beetle-inspired robots jump using elastic energy

    Researchers have made a significant leap forward in developing insect-sized jumping robots capable of performing tasks in the small spaces often found in mechanical, agricultural and search-and-rescue settings.
    A new study led by mechanical sciences and engineering professor Sameh Tawfick demonstrates a series of click beetle-sized robots small enough to fit into tight spaces, powerful enough to maneuver over obstacles and fast enough to match an insect’s rapid escape time.
    The findings are published in the Proceedings of the National Academy of Sciences.
    Researchers at the U. of I. and Princeton University have studied click beetle anatomy, mechanics and evolution over the past decade. A 2020 study found that the snap buckling — the rapid release of elastic energy — of a coiled muscle within a click beetle’s thorax propels the beetle into the air many times its body length, as a means of righting itself when flipped onto its back.
    “One of the grand challenges of small-scale robotics is finding a design that is small, yet powerful enough to move around obstacles or quickly escape dangerous settings,” Tawfick said.
    In the new study, Tawfick and his team used tiny coiled actuators — analogous to animal muscles — that pull on a beam-shaped mechanism, causing it to slowly buckle and store elastic energy until it is spontaneously released and amplified, propelling the robots upward.

    “This process, called a dynamic buckling cascade, is simple compared to the anatomy of a click beetle,” Tawfick said. “However, simple is good in this case because it allows us to work and fabricate parts at this small scale.”
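    A back-of-the-envelope calculation shows why stored elastic energy yields impressive jumps at insect scale. The following minimal Python sketch uses made-up, order-of-magnitude parameters, not values from the study:

    ```python
    # Rough jump-height estimate from stored elastic energy.
    # All numbers are illustrative assumptions, not values from the paper.
    G = 9.81                 # gravitational acceleration, m/s^2
    mass = 1.0e-3            # robot mass: about 1 gram, in kg
    stored_energy = 2.0e-3   # elastic energy stored in the buckled beam, in joules
    efficiency = 0.5         # assumed fraction converted to vertical kinetic energy

    # Energy balance: efficiency * U = m * g * h
    jump_height = efficiency * stored_energy / (mass * G)
    print(f"Estimated jump height: {jump_height * 100:.1f} cm")  # ~10 cm
    ```

    Even with only half the stored energy recovered, a gram-scale robot reaches a height of many times its own body length, which is the effect the buckling cascade exploits.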
    Guided by biological evolution and mathematical models, the team built and tested four device variations, landing on two configurations that can successfully jump without manual intervention.
    “Moving forward, we do not have a set approach on the exact design of the next generation of these robots, but this study plants a seed in the evolution of this technology — a process similar to biologic evolution,” Tawfick said.
    The team envisions these robots accessing tight spaces to help perform maintenance on large machines like turbines and jet engines, for example, by taking pictures to identify problems.
    “We also imagine insect-scale robots being useful in modern agriculture,” Tawfick said. “Scientists and farmers currently use drones and rovers to monitor crops, but sometimes researchers need a sensor to touch a plant or to capture a photograph of a very small-scale feature. Insect-scale robots can do that.”
    Researchers from the University of Birmingham, UK; Oxford University; and the University of Texas at Dallas also participated in this research.
    The Defense Advanced Research Projects Agency, the Toyota Research Institute North America, the National Science Foundation and The Royal Society supported this study.

  • Getting kids outdoors can reduce the negative effects of screen time

    If you have young children, you’re likely worried about how much time they spend staring at a screen, be it a tablet, phone, computer, or television. You probably also want to know how screen time affects your child’s development and wonder whether there’s anything you can do to balance out any negative effects. New research from Japan indicates that more screen time at age 2 is associated with poorer communication and daily living skills at age 4 — but when kids also play outdoors, some of the negative effects of screen time are reduced.
    In the study, which will be published in March in JAMA Pediatrics, the researchers followed 885 children from 18 months to 4 years of age. They looked at the relationship between three key features: average amount of screen time per day at age 2, amount of outdoor play at age 2 years 8 months, and neurodevelopmental outcomes — specifically, communication, daily living skills, and socialization scores according to a standardized assessment tool called Vineland Adaptive Behavior Scale-II — at age 4.
    “Although both communication and daily living skills were worse in 4-year-old children who had had more screen time at age 2, outdoor play time had very different effects on these two neurodevelopmental outcomes,” explains Kenji J. Tsuchiya, Professor at Osaka University and lead author of the study. “We were surprised to find that outdoor play didn’t really alter the negative effects of screen time on communication — but it did have an effect on daily living skills.”
    Specifically, almost one-fifth of the effects of screen time on daily living skills were mediated by outdoor play, meaning that increasing outdoor play time could reduce the negative effects of screen time on daily living skills by almost 20%. The researchers also found that, although it was not linked to screen time, socialization was better in 4-year-olds who had spent more time playing outside at 2 years 8 months of age.
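    To make the "one-fifth mediated" figure concrete, the following is a minimal sketch of a product-of-coefficients mediation analysis on simulated data. It is illustrative only — the study used Vineland scores and more careful statistics — and every number below is an assumption:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 885  # same sample size as the study, but the data here is simulated

    screen = rng.normal(2.0, 1.0, n)                 # screen hours/day at age 2
    outdoor = -0.3 * screen + rng.normal(0, 1.0, n)  # outdoor play (the mediator)
    skills = -0.5 * screen + 0.4 * outdoor + rng.normal(0, 1.0, n)  # daily living skills

    def slope(x, y):
        """OLS slope of y on x (with an intercept term)."""
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    c = slope(screen, skills)    # total effect of screen time on skills
    a = slope(screen, outdoor)   # effect of screen time on the mediator
    # Direct effect c' and mediator effect b come from a joint regression.
    X = np.column_stack([np.ones(n), screen, outdoor])
    _, c_prime, b = np.linalg.lstsq(X, skills, rcond=None)[0]

    print(f"proportion mediated: {a * b / c:.2f}")  # indirect / total effect, ~0.19
    ```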
    “Taken together, our findings indicate that optimizing screen time in young children is really important for appropriate neurodevelopment,” says Tomoko Nishimura, senior author of the study. “We also found that screen time is not related to social outcomes, and that even if screen time is relatively high, encouraging more outdoor play time might help to keep kids healthy and developing appropriately.”
    These results are particularly important given the recent COVID-19-related lockdowns around the world, which have generally led to more screen time and less outdoor time for children. Because the use of digital devices is difficult to avoid even in very young children, further research looking at how to balance the risks and benefits of screen time in young children is eagerly awaited.

  • First computational reconstruction of a virus in its biological entirety

    An Aston University researcher has created the first ever computer reconstruction of a virus, including its complete native genome.
    Although other researchers have created similar reconstructions, this is the first to replicate the exact chemical and 3D structure of a ‘live’ virus.
    The breakthrough could lead the way to research into an alternative to antibiotics, reducing the threat of antibiotic resistance.
    The research, “Reconstruction and validation of entire virus model with complete genome from mixed resolution cryo-EM density”, by Dr Dmitry Nerukh, from the Department of Mathematics in the College of Engineering and Physical Sciences at Aston University, is published in the journal Faraday Discussions.
    The research was conducted using existing data of virus structures measured via cryo-Electron Microscopy (cryo-EM), and computational modelling which took almost three years despite using supercomputers in the UK and Japan.
    The breakthrough will open the way for biologists to investigate biological processes which can’t currently be fully examined because the genome is missing in the virus model.

    This includes finding out how a bacteriophage, which is a type of virus that infects bacteria, kills a specific disease-causing bacterium.
    At the moment it is not known how this happens, but this new method of creating more accurate models will open up further research into using bacteriophage to kill specific life-threatening bacteria.
    This could lead to more targeted treatment of illnesses which are currently treated by antibiotics, and therefore help to tackle the increasing threat to humans of antibiotic resistance.
    Dr Nerukh said: “Up till now no one else had been able to build a native genome model of an entire virus at such detailed (atomistic) level.
    “The ability to study the genome within a virus more clearly is incredibly important. Without the genome it has been impossible to know exactly how a bacteriophage infects a bacterium.
    “This development will now help virologists answer questions which they previously couldn’t answer.
    “This could lead to targeted treatments to kill bacteria which are dangerous to humans, and to reduce the global problem of antibiotic-resistant bacteria, which is becoming more and more serious over time.”
    The team’s approach to the modelling has many other potential applications. One of these is creating computational reconstructions to assist cryo-Electron Microscopy — a technique used to examine life-forms cooled to an extreme temperature.

  • 'Smart' walking stick could help visually impaired with groceries, finding a seat

    Engineers at the University of Colorado Boulder are tapping into advances in artificial intelligence to develop a new kind of walking stick for people who are blind or visually impaired.
    Think of it as assistive technology meets Silicon Valley.
    The researchers say that their “smart” walking stick could one day help blind people navigate tasks in a world designed for sighted people — from shopping for a box of cereal at the grocery store to picking a private place to sit in a crowded cafeteria.
    “I really enjoy grocery shopping and spend a significant amount of time in the store,” said Shivendra Agrawal, a doctoral student in the Department of Computer Science. “A lot of people can’t do that, however, and it can be really restrictive. We think this is a solvable problem.”
    In a study published in October, Agrawal and his colleagues in the Collaborative Artificial Intelligence and Robotics Lab got one step closer to solving it.
    The team’s walking stick resembles the white-and-red canes that you can buy at Walmart. But it also includes a few add-ons: Using a camera and computer vision technology, the walking stick maps and catalogs the world around it. It then guides users by using vibrations in the handle and with spoken directions, such as “reach a little bit to your right.”
    The device isn’t supposed to be a substitute for designing places like grocery stores to be more accessible, Agrawal said. But he hopes his team’s prototype will show that, in some cases, AI can help millions of Americans become more independent.

    “AI and computer vision are improving, and people are using them to build self-driving cars and similar inventions,” Agrawal said. “But these technologies also have the potential to improve quality of life for many people.”
    Take a seat
    Agrawal and his colleagues first explored that potential by tackling a familiar problem: Where do I sit?
    “Imagine you’re in a café,” he said. “You don’t want to sit just anywhere. You usually take a seat close to the walls to preserve your privacy, and you usually don’t like to sit face-to-face with a stranger.”
    Previous research has suggested that making these kinds of decisions is a priority for people who are blind or visually impaired. To see if their smart walking stick could help, the researchers set up a café of sorts in their lab — complete with several chairs, patrons and a few obstacles.

    Study subjects strapped on a backpack with a laptop in it and picked up the smart walking stick. They swiveled to survey the room with a camera attached near the cane handle. Like a self-driving car, algorithms running inside the laptop identified the various features in the room and then calculated the route to an ideal seat.
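    A minimal sketch of the kind of seat-scoring logic described above (the criteria, weights, and data below are hypothetical stand-ins for the camera and vision pipeline):

    ```python
    import math

    # Candidate chairs: position (x, y) in metres plus detected context.
    chairs = [
        {"pos": (0.5, 3.0), "near_wall": True,  "faces_stranger": False},
        {"pos": (2.0, 2.0), "near_wall": False, "faces_stranger": True},
        {"pos": (4.0, 0.5), "near_wall": True,  "faces_stranger": True},
    ]
    user_pos = (0.0, 0.0)

    def score(chair):
        """Higher is better: prefer wall-adjacent, non-confrontational, nearby seats."""
        s = 2.0 if chair["near_wall"] else 0.0        # privacy preference
        s -= 3.0 if chair["faces_stranger"] else 0.0  # avoid facing strangers
        s -= 0.5 * math.dist(user_pos, chair["pos"])  # shorter route preferred
        return s

    best = max(chairs, key=score)
    print("Guide user to chair at", best["pos"])
    ```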
    The team reported its findings this fall at the International Conference on Intelligent Robots and Systems in Kyoto, Japan. Researchers on the study included Bradley Hayes, assistant professor of computer science, and doctoral student Mary Etta West.
    The study showed promising results: Subjects were able to find the right chair in 10 out of 12 trials with varying levels of difficulty. So far, the subjects have all been sighted people wearing blindfolds. But the researchers plan to evaluate and improve their device by working with people who are blind or visually impaired once the technology is more dependable.
    “Shivendra’s work is the perfect combination of technical innovation and impactful application, going beyond navigation to bring advancements in underexplored areas, such as assisting people with visual impairment with social convention adherence or finding and grasping objects,” Hayes said.
    Let’s go shopping
    Next up for the group: grocery shopping.
    In new research, which the team hasn’t yet published, Agrawal and his colleagues adapted their device for a task that can be daunting for anyone: finding and grasping products in aisles filled with dozens of similar-looking and similar-feeling choices.
    Again, the team set up a makeshift environment in their lab: this time, a grocery shelf stocked with several different kinds of cereal. The researchers loaded a database of product photos, such as boxes of Honey Nut Cheerios or Apple Jacks, into their software. Study subjects then used the walking stick to scan the shelf, searching for the product they wanted.
    “It assigns a score to the objects present, selecting what is the most likely product,” Agrawal said. “Then the system issues commands like ‘move a little bit to your left.'”
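    The scoring-and-guidance step might look something like this minimal sketch, where hypothetical feature vectors stand in for the team's actual image-matching model:

    ```python
    import numpy as np

    # Hypothetical embedding vectors for database products and shelf detections.
    database = {
        "Honey Nut Cheerios": np.array([0.9, 0.1, 0.3]),
        "Apple Jacks":        np.array([0.2, 0.8, 0.4]),
    }
    detections = [  # (object id, feature vector, horizontal offset in metres)
        ("object_0", np.array([0.85, 0.15, 0.35]), -0.2),
        ("object_1", np.array([0.25, 0.75, 0.40]),  0.4),
    ]

    def cosine(u, v):
        """Cosine similarity between two feature vectors."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    target = database["Honey Nut Cheerios"]
    # Score every detected object against the requested product; keep the best match.
    obj_id, _, offset = max(detections, key=lambda d: cosine(d[1], target))

    direction = "left" if offset < 0 else "right"
    print(f"{obj_id}: move a little bit to your {direction}")
    ```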
    He added that it will be a while before the team’s walking stick makes it into the hands of real shoppers. The group, for example, wants to make the system more compact, designing it so that it can run off a standard smartphone attached to a cane.
    But the human-robot interaction researchers also hope that their preliminary results will inspire other engineers to rethink what robotics and AI are capable of.
    “Our aim is to make this technology mature but also attract other researchers into this field of assistive robotics,” Agrawal said. “We think assistive robotics has the potential to change the world.”

  • New nanoparticles deliver therapy brain-wide, edit Alzheimer's gene in mice

    Gene therapies have the potential to treat neurological disorders like Alzheimer’s and Parkinson’s diseases, but they face a common barrier — the blood-brain barrier. Now, researchers at the University of Wisconsin-Madison have developed a way to move therapies across the brain’s protective membrane to deliver brain-wide therapy with a range of biological medications and treatments.
    “There is no cure yet for many devastating brain disorders,” says Shaoqin “Sarah” Gong, UW-Madison professor of ophthalmology and visual sciences and biomedical engineering and researcher at the Wisconsin Institute for Discovery. “Innovative brain-targeted delivery strategies may change that by enabling noninvasive, safe and efficient delivery of CRISPR genome editors that could, in turn, lead to genome-editing therapies for these diseases.”
    CRISPR is a molecular toolkit for editing genes (for example, to correct mutations that may cause disease), but the toolkit is only useful if it can get through security to the job site. The blood-brain barrier is a membrane that selectively controls access to the brain, screening out toxins and pathogens that may be present in the bloodstream. Unfortunately, the barrier bars some beneficial treatments, like certain vaccines and gene therapy packages, from reaching their targets because it lumps them in with hostile invaders.
    Injecting treatments directly into the brain is one way to get around the blood-brain barrier, but it’s an invasive procedure that provides access only to nearby brain tissue.
    “The promise of brain gene therapy and genome-editing therapy relies on the safe and efficient delivery of nucleic acids and genome editors to the whole brain,” Gong says.
    In a study recently published in the journal Advanced Materials, Gong and her lab members, including postdoctoral researcher and first author of the study Yuyuan Wang, describe a new family of nano-scale capsules made of silica that can carry genome-editing tools into many organs around the body and then harmlessly dissolve.
    By modifying the surfaces of the silica nanocapsules with glucose and an amino acid fragment derived from the rabies virus, the researchers found the nanocapsules could efficiently pass through the blood-brain barrier to achieve brain-wide gene editing in mice. In their study, the researchers demonstrated the capability of the silica nanocapsule’s CRISPR cargo to successfully edit genes in the brains of mice, such as a gene related to Alzheimer’s disease, the amyloid precursor protein gene.
    Because the nanocapsules can be administered repeatedly and intravenously, they can achieve higher therapeutic efficacy without the risks of more localized and invasive delivery methods.
    The researchers plan to further optimize the silica nanocapsules’ brain-targeting capabilities and evaluate their usefulness for the treatment of various brain disorders. This unique technology is also being investigated for the delivery of biologics to the eyes, liver and lungs, which can lead to new gene therapies for other types of disorders.

  • Novel method for assigning workplaces in synthetic populations unveiled

    Synthetic populations are computer-generated groups of people that are designed to look like real populations. They are built using public census information about people’s characteristics, such as their age, gender, and job, alongside statistical algorithms that help put it all together. Their main application is for conducting so-called social simulations to assess different possible solutions to social problems, such as transportation, health issues, and housing. During the COVID-19 pandemic, for example, scientists in many places around the world conducted social simulations to estimate the number of cases in each country.
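    As a toy illustration of the idea (not the project's actual algorithm, which fits joint distributions to detailed census tables), one can draw synthetic individuals whose attribute frequencies match published marginals:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Published census marginals for a hypothetical district.
    age_groups = ["0-19", "20-39", "40-64", "65+"]
    age_probs  = [0.18, 0.25, 0.33, 0.24]
    jobs       = ["employed", "student", "retired", "other"]
    job_probs  = [0.48, 0.15, 0.22, 0.15]

    # Draw a synthetic population; no real individual's record is ever used.
    # (A real pipeline would also fit correlations between attributes rather
    # than sampling them independently, e.g. via iterative proportional fitting.)
    population = [
        {"age_group": rng.choice(age_groups, p=age_probs),
         "job": rng.choice(jobs, p=job_probs)}
        for _ in range(10_000)
    ]
    print(population[0])  # e.g. {'age_group': '40-64', 'job': 'employed'}
    ```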
    In Japan, researchers have been carrying out such simulations using supercomputers under the COVID-19 AI & Simulation Project, led by the Cabinet Secretariat of the Japanese government, since 2020. The results were given significant consideration when deciding various policy measures, such as PCR testing policies, immigration limits, domestic tourism support, vaccination programs, and so on. These simulations were made possible by a synthetic population that has been prepared and updated under the Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) project since 2017.
    However, this Japanese synthetic population had a significant limitation — even though a home address was one of the attributes assigned to each individual, their workplace location was not. As a result, this synthetic population was more accurate at representing the night-time distribution of people, but not their day-time distribution, or the relationship between both.
    To tackle this problem, a trio of Japanese researchers including Assistant Professor Takuya Harada of Shibaura Institute of Technology, as well as Dr. Tadahiko Murata and Mr. Daiki Iwase of the Faculty of Informatics at Kansai University, recently devised a method to assign a workplace attribute to each worker in synthetic populations. Their study was published in IEEE Transactions on Computational Social Systems and was supported by both JHPCN and the Japan Science and Technology Agency (JST).
    The main challenge the researchers had to overcome was the lack of statistical information linking people’s home and workplace locations. In Japan, only local governments whose area has over 200,000 residents release complete origin-destination-industry (ODI) statistics, which provide details about the movement of workers as well as their industry type (like retail, construction, or manufacturing). For cities, towns, or villages with fewer than 200,000 residents, the available ODI data is less specific, and only indicates whether a person works in the same city, in another city within the same prefecture, or in another city in a different prefecture. Unfortunately, approximately 48% of workers in Japan reside in cities with fewer than 200,000 residents.
    Thus, the research team combined available ODI data with origin-destination (OD) data and developed an innovative workplace assignment method that works for all cities, towns, and villages in Japan. To test whether their method was designed properly, they used it to assign workplaces to people in cities with more than 200,000 residents and compared the results with the available complete ODI data. For the city of Takatsuki in the Osaka prefecture, which the researchers showcased as an example in their paper, the proposed method could assign the correct cities as workplaces for 88.2% of workers.
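    A minimal sketch of the assignment step (the probabilities below are hypothetical, and the actual method combines ODI and OD tables far more carefully):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Large cities publish full ODI data: P(workplace city | home city, industry).
    odi = {
        ("Takatsuki", "retail"): {"Takatsuki": 0.55, "Osaka": 0.35, "Kyoto": 0.10},
    }
    # Small municipalities publish only coarse origin-destination categories.
    od_coarse = {
        "SmallTown": {"same city": 0.6, "same prefecture": 0.3, "other prefecture": 0.1},
    }

    def assign_workplace(home, industry):
        """Sample a workplace for one synthetic worker."""
        if (home, industry) in odi:        # detailed ODI statistics available
            dist = odi[(home, industry)]
        else:                              # fall back to the coarse OD categories
            dist = od_coarse[home]
        places, probs = zip(*dist.items())
        return rng.choice(places, p=probs)

    print(assign_workplace("Takatsuki", "retail"))   # e.g. 'Takatsuki'
    print(assign_workplace("SmallTown", "farming"))  # e.g. 'same city'
    ```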
    The possible applications for detailed social simulations using synthetic populations are manifold, as Professor Murata of Kansai University remarks: “Real-scale social simulations can be used for estimating the efficiency of urban developments, including housing and transportation projects, as well as the influence of social programs conducted by national or local governments. They can also be employed for rescue and relief programs when facing disasters such as earthquakes, tsunamis, floods, typhoons, and pandemics.” Put simply, social simulations can help decision-makers accurately envision various possible futures.
    Another important aspect of synthetic populations is that they are free from data privacy concerns. “Synthetic populations are a secure technology because no private information is used,” explains Assistant Professor Harada. “Because we synthesize multiple sets of populations that have the same statistical characteristics, third parties cannot identify whether real information is included or not.” Notably, this study marks the world’s first publicly released synthetic populations with workplace information, available to engineers and researchers.
    The research team is already working on using their newly developed workplace assignment method to estimate the day-time population distribution throughout all of Japan.