More stories

  • Photonic crystals bend light as though it were under the influence of gravity

    A collaborative group of researchers has manipulated the behavior of light as if it were under the influence of gravity. The findings, which were published in the journal Physical Review A on September 28, 2023, have far-reaching implications for the world of optics and materials science, and bear significance for the development of 6G communications.
    Albert Einstein’s theory of relativity has long established that the trajectory of electromagnetic waves — including light and terahertz electromagnetic waves — can be deflected by gravitational fields.
    Scientists have recently theoretically predicted that replicating the effects of gravity — i.e., pseudogravity — is possible by deforming crystals in the lower normalized energy (or frequency) region.
    “We set out to explore whether lattice distortion in photonic crystals can produce pseudogravity effects,” said Professor Kyoko Kitamura from Tohoku University’s Graduate School of Engineering.
    Photonic crystals possess unique properties that enable scientists to manipulate and control the behavior of light, serving as ‘traffic controllers’ for light within crystals. They are constructed by periodically arranging two or more different materials with varying abilities to interact with and slow down light in a regular, repeating pattern. Furthermore, pseudogravity effects due to adiabatic changes have been observed in photonic crystals.
    Kitamura and her colleagues modified photonic crystals by introducing lattice distortion: a gradual deformation of the regular spacing of elements that disrupted the crystals’ grid-like pattern. This manipulated the photonic band structure of the crystals, resulting in a curved beam trajectory within the medium, just like a light ray passing by a massive celestial body such as a black hole.
    Specifically, they employed a distorted photonic crystal made of silicon, with a lattice constant of 200 micrometers, and terahertz waves. Experiments successfully demonstrated the deflection of these waves.
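    As a rough intuition for this effect, ray optics offers a close analogy: a spatial gradient in refractive index curves a light ray much as gravity curves a particle’s trajectory, and a distorted photonic crystal engineers a comparable gradient into its band structure. The short Python sketch below traces a paraxial ray through a hypothetical linear index gradient; the function and all numbers are illustrative assumptions, not the authors’ model.
    ```python
    # Toy ray trace: a linear refractive-index gradient n(x) = n0 + dndx * x
    # bends an initially axis-parallel ray, loosely mimicking how pseudogravity
    # deflects a beam inside a distorted photonic crystal. Numbers are made up.
    import numpy as np

    def trace_ray(n0=1.5, dndx=-0.02, z_max=50.0, dz=0.1):
        """Paraxial ray trace using d2x/dz2 ~ (dn/dx) / n, Euler-integrated."""
        x, slope = 0.0, 0.0              # start on axis, travelling along z
        path = []
        for _ in np.arange(0.0, z_max, dz):
            n = n0 + dndx * x            # local refractive index
            slope += (dndx / n) * dz     # the index gradient curves the ray
            x += slope * dz
            path.append(x)
        return np.array(path)

    print(f"transverse deflection: {trace_ray()[-1]:.3f} (arbitrary units)")
    ```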
    “Much like gravity bends the trajectory of objects, we came up with a means to bend light within certain materials,” added Kitamura. “Such in-plane beam steering within the terahertz range could be harnessed in 6G communication. Academically, the findings show that photonic crystals could harness gravitational effects, opening new pathways within the field of graviton physics,” said Associate Professor Masayuki Fujita from Osaka University.

  • Researchers measure global consensus over the ethical use of AI

    To examine the global state of AI ethics, a team of researchers from Brazil performed a systematic review and meta-analysis of global guidelines for AI use. In the study, published October 13 in the journal Patterns, the researchers found that, while most of the guidelines valued privacy, transparency, and accountability, very few valued truthfulness, intellectual property, or children’s rights. Additionally, most of the guidelines described ethical principles and values without proposing practical methods for implementing them and without pushing for legally binding regulation.
    “Establishing clear ethical guidelines and governance structures for the deployment of AI around the world is the first step to promoting trust and confidence, mitigating its risks, and ensuring that its benefits are fairly distributed,” says social scientist and co-author James William Santos of the Pontifical Catholic University of Rio Grande do Sul.
    “Previous work predominantly centered around North American and European documents, which prompted us to actively seek and include perspectives from regions such as Asia, Latin America, Africa, and beyond,” says lead author Nicholas Kluge Corrêa of the Pontifical Catholic University of Rio Grande do Sul and the University of Bonn.
    To determine whether a global consensus exists regarding the ethical development and use of AI, and to help guide such a consensus, the researchers conducted a systematic review of policy and ethical guidelines published between 2014 and 2022. From this, they identified 200 documents related to AI ethics and governance, spanning 37 countries and six continents and written in or translated into five languages (English, Portuguese, French, German, and Spanish). These documents included recommendations, practical guides, policy frameworks, legal landmarks, and codes of conduct.
    Then, the team conducted a meta-analysis of these documents to identify the most common ethical principles, examine their global distribution, and assess biases in terms of the type of organizations or people producing these documents.
    The researchers found that the most common principles were transparency, security, justice, privacy, and accountability, which appeared in 82.5%, 78%, 75.5%, 68.5%, and 67% of the documents, respectively. The least common principles were labor rights, truthfulness, intellectual property, and children/adolescent rights, which appeared in 19.5%, 8.5%, 7%, and 6% of the documents, and the authors emphasize that these principles deserve more attention. For example, truthfulness — the idea that AI should provide truthful information — is becoming increasingly relevant with the release of generative AI technologies like ChatGPT. And since AI has the potential to displace workers and change the way we work, practical measures are needed to avoid mass unemployment or monopolies.
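    For readers curious how such figures are produced, the brief Python sketch below reproduces the counting step behind numbers like “transparency appeared in 82.5% of documents”: each guideline is coded as the set of principles it mentions, and frequencies are tallied across the corpus. The four documents here are invented placeholders; the actual study hand-coded 200 guidelines.
    ```python
    # Minimal sketch of the meta-analysis tally. Each document is represented
    # by the set of ethical principles it mentions; we count how often each
    # principle appears. The document contents below are invented examples.
    from collections import Counter

    documents = [
        {"transparency", "privacy", "accountability"},
        {"transparency", "security", "justice"},
        {"privacy", "justice", "children/adolescent rights"},
        {"transparency", "security", "accountability", "truthfulness"},
    ]

    counts = Counter(p for doc in documents for p in doc)
    for principle, n in counts.most_common():
        print(f"{principle}: {100 * n / len(documents):.1f}% of documents")
    ```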
    Most (96%) of the guidelines were “normative” — describing ethical values that should be considered during AI development and use — while only 2% recommended practical methods of implementing AI ethics, and only 4.5% proposed legally binding forms of AI regulation.

  • Physicists demonstrate powerful physics phenomenon

    In a new breakthrough, researchers have used a novel technique to confirm a previously undetected physics phenomenon that could be used to improve data storage in the next generation of computer devices.
    Spintronic memories, like those used in some high-tech computers and satellites, use magnetic states generated by an electron’s intrinsic angular momentum to store and read information. Depending on its physical motion, an electron’s spin can produce a magnetic current. Known as the “spin Hall effect,” this has key applications for magnetic materials across many different fields, ranging from low-power electronics to fundamental quantum mechanics.
    More recently, scientists have found that electrons are also capable of generating electricity through a second kind of movement: orbital angular momentum, similar to how Earth revolves around the sun. This is known as the “orbital Hall effect,” said Roland Kawakami, co-author of the study and a professor in physics at The Ohio State University.
    Theorists predicted that by using light transition metals — materials that have weak spin Hall currents — magnetic currents generated by the orbital Hall effect would be easier to spot flowing alongside them. Until now, directly detecting the effect has been a challenge, but the study, led by Igor Lyalin, a graduate student in physics, and published today in the journal Physical Review Letters, showed a method to observe it.
    “Over the decades, there’s been a continuous discovery of various Hall effects,” said Kawakami. “But the idea of these orbital currents is really a brand new one. The difficulty is that they are mixed with spin currents in typical heavy metals and it’s difficult to tell them apart.”
    Instead, Kawakami’s team demonstrated the orbital Hall effect by reflecting polarized light, in this case a laser, off various thin films of the light metal chromium to probe the metal’s atoms for a potential build-up of orbital angular momentum. After nearly a year of painstaking measurements, the researchers were able to detect a clear magneto-optical signal showing that electrons gathered at one end of the film exhibited strong orbital Hall effect characteristics.
    This successful detection could have huge consequences for future spintronics applications, said Kawakami.

  • Immune system aging can be revealed by CT scan

    The thymus, a small and relatively little-known organ, may play a bigger role in the adult immune system than previously believed. With age, the glandular tissue in the thymus is replaced by fat but, according to a new study from Linköping University (LiU) in Sweden, the rate at which this happens is linked to sex, age and lifestyle factors. The findings also indicate that the appearance of the thymus reflects the ageing of the immune system.
    “We doctors can assess the appearance of the thymus in virtually all chest CT scans, but we tend not to see this as very important. Now it turns out that the appearance of the thymus can actually provide a lot of valuable information that we could benefit from and learn more about,” says Mårten Sandstedt, MD, PhD, at the Department of Radiology in Linköping and the Department of Health, Medicine and Caring Sciences, Faculty of Medicine and Health Sciences, Linköping University.
    The thymus is a gland located in the upper part of the chest. It has long been known that this small organ is important for the development of immune defence in children. After puberty, the thymus decreases in size and is eventually replaced by fat, in a process known as fatty degeneration. This has been taken to mean that it loses its function, which is why the thymus was long considered unimportant in adult life. That view has, however, been challenged by a few small studies, mainly in animals, indicating that an active thymus in adulthood may be an advantage, providing increased resilience against infectious disease and cancer. So far, very few studies have examined the thymus in adults.
    In the present study, published in Immunity & Ageing, the researchers have examined thymus appearance in chest CT scans of more than 1,000 Swedish individuals aged 50 to 64, who participated in the large SCAPIS study (Swedish cardiopulmonary bioimage study). SCAPIS includes both extensive imaging and comprehensive health assessments including lifestyle factors, such as dietary habits and physical activity. In their sub-study of SCAPIS, the researchers also analysed immune cells in the blood.
    “We saw a huge variation in thymus appearance. Six out of ten participants had complete fatty degeneration of thymus, which was much more common in men than in women, and in people with abdominal obesity. Lifestyle also mattered. Low intake of fibres in particular was associated with fatty degeneration of thymus,” says Mårten Sandstedt.
    The Linköping researchers’ study provides new knowledge by associating thymus appearance with lifestyle and health factors, and with the immune system. In the development of the immune system, the thymus acts as a school for a type of immune cell known as the T-cell (where the T stands for “thymus”). This is where T-cells learn to recognise bacteria, viruses and other things that are alien to the body. They also learn to be tolerant and not attack anything that is part of the person’s own body, which could otherwise lead to various autoimmune diseases.
    In their study, the LiU researchers saw that individuals with fatty degeneration of the thymus showed lower T-cell regeneration.
    “This association with T-cell regeneration is interesting. It indicates that what we see in CT scans is not only an image, it actually also reflects the functionality of the thymus. You can’t do anything about your age and your sex, but lifestyle-related factors can be influenced. It might be possible to influence immune system ageing,” says Lena Jonasson, professor at the Department of Cardiology in Linköping and Department of Health, Medicine and Caring Sciences, Faculty of Medicine and Health Sciences, Linköping University.
    But more research is needed before it will be possible to know whether thymus appearance, and thereby immune defence ageing, will have any implications for our health. The researchers are now moving on to follow-up studies of the thymus of all 5,000 participants in SCAPIS Linköping to see whether CT scan thymus images can provide information on future risk of disease.
    This research was funded by the Heart-Lung Foundation, the Swedish Research Council, the Swedish Grandlodge of Freemasonry, and Region Östergötland and Linköping University through ALF Grants. Mårten Sandstedt is also affiliated with the Center for Medical Image Science and Visualization, CMIV, in Linköping.

  • New organ-on-a-chip model of human synovium could accelerate development of treatments for arthritis

    The synovium is a membrane-like tissue that lines joints such as the knee and helps keep them healthy, mainly by producing and maintaining synovial fluid. Inflammation of this tissue is implicated in the onset and progression of arthritic diseases such as rheumatoid arthritis and osteoarthritis, making treatments that target the synovium a promising approach; finding and testing such treatments, however, requires better laboratory models.
    To address this need, researchers at Queen Mary University of London have developed a new organ-on-a-chip model of the human synovium and its associated vasculature. The model, published in the journal Biomedical Materials, could help researchers to better understand the mechanisms of arthritis and to develop new treatments for this group of debilitating diseases.
    In the UK, more than 10 million people live with a form of arthritis, which affects the joints and can cause pain, stiffness, and swelling. There is currently no cure for arthritis and the search for new therapeutics is limited by a lack of accurate models.
    The new synovium-on-a-chip model is a three-dimensional microfluidic device that contains human synovial cells and blood vessel cells. The device is subjected to mechanical loading, which mimics the forces applied to the synovium during joint movement.
    The developed synovium-on-a-chip model was able to mimic the behaviour of native human synovium, producing key synovial fluid components and responding to inflammation. This suggests that the new platform has immense potential to help researchers understand disease mechanisms and identify and test new therapies for arthritic diseases.
    “Our model is the first human, vascularised, synovium-on-a-chip model with applied mechanical loading and successfully replicates a number of key features of native synovium biology,” said Dr Timothy Hopkins, Versus Arthritis Foundation Fellow, joint lead author of the study. “The model was developed upon a commercially available platform (Emulate Inc.), that allows for widespread adoption without the need for specialist knowledge of chip fabrication. The vascularised synovium-on-a-chip can act as a foundational model for academic research, with which fundamental questions can be addressed, and complexity (further cell and tissue types) can be added. In addition, we envisage that our model could eventually form part of the drug discovery pipeline in an industrial setting. Some of these conversations have already commenced.”
    The researchers are currently using the synovium-on-a-chip model to study the disease mechanisms of arthritis and to develop stratified and personalised organ-on-a-chip models of human synovium and associated tissues.
    “We believe that our synovium-on-a-chip model, and related models of human joints currently under development in our lab, have the potential to transform pre-clinical testing, streamlining delivery of new therapeutics for treatment of arthritis,” said Prof. Martin Knight, Professor of Mechanobiology. “We are excited to share this model with the scientific community and to work with industry partners to bring new treatments to patients as quickly as possible.”

  • Self-correcting quantum computers within reach?

    Quantum computers promise to reach speeds and efficiencies impossible for even the fastest supercomputers of today. Yet the technology hasn’t seen much scale-up and commercialization largely due to its inability to self-correct. Quantum computers, unlike classical ones, cannot correct errors by copying encoded data over and over. Scientists had to find another way.
    Now, a new paper in Nature illustrates a Harvard quantum computing platform’s potential to solve the longstanding problem known as quantum error correction.
    Leading the Harvard team is quantum optics expert Mikhail Lukin, the Joshua and Beth Friedman University Professor in physics and co-director of the Harvard Quantum Initiative. The work reported in Nature was a collaboration among Harvard, MIT, and Boston-based QuEra Computing. Also involved was the group of Markus Greiner, the George Vasmer Leverett Professor of Physics.
    An effort spanning the last several years, the Harvard platform is built on an array of very cold, laser-trapped rubidium atoms. Each atom acts as a bit — or a “qubit” as it’s called in the quantum world — which can perform extremely fast calculations.
    The team’s chief innovation is configuring their “neutral atom array” to be able to dynamically change its layout by moving and connecting atoms — this is called “entangling” in physics parlance — mid-computation. Operations that entangle pairs of atoms, called two-qubit logic gates, are units of computing power.
    Running a complicated algorithm on a quantum computer requires many gates. However, these gate operations are notoriously error-prone, and a buildup of errors renders the algorithm useless.
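    A back-of-the-envelope model shows why per-gate error rates matter so much: if each two-qubit gate succeeds independently with probability 1 - p, a circuit of N gates runs cleanly with probability roughly (1 - p)^N. The Python sketch below compares a 5 percent error rate with the sub-0.5 percent rate reported here; treating errors as independent is a simplification of real device noise.
    ```python
    # Rough error-accumulation estimate: with independent per-gate error
    # rate p, an N-gate circuit runs error-free with probability (1 - p)**N.
    # Real noise is correlated, so this is only an order-of-magnitude guide.
    for p in (0.05, 0.005):                  # 5% vs the reported sub-0.5%
        for n_gates in (10, 100, 1000):
            fidelity = (1 - p) ** n_gates
            print(f"p = {p:.3f}, gates = {n_gates:4d}: "
                  f"success probability ~ {fidelity:.3f}")
    ```
    At 0.5 percent per gate, roughly 60 percent of 100-gate circuits complete without error; at 5 percent, fewer than 1 percent do, which is why sub-percent gate errors are a practical prerequisite for error correction.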
    In the new paper, the team reports near-flawless performance of its two-qubit entangling gates with extremely low error rates. For the first time, they demonstrated the ability to entangle atoms with error rates below 0.5 percent. In terms of operation quality, this puts their technology’s performance on par with other leading types of quantum computing platforms, like superconducting qubits and trapped-ion qubits.

  • New study unveils stretchable high-resolution user-interactive synesthesia displays for visual–acoustic encryption

    The future of human-machine interfaces is on the cusp of a revolution with the unveiling of a groundbreaking technology — a stretchable high-resolution multicolor synesthesia display that generates synchronized sound and light as input/output sources. A research team, led by Professor Moon Kee Choi in the Department of Materials Science and Engineering at UNIST, has succeeded in developing this cutting-edge display using transfer-printing techniques, propelling the field of multifunctional displays into new realms of possibility.
    Traditionally, multifunctional displays have been confined to converting mechanical and electrical signals into light. However, this pioneering stretchable synesthesia display shatters preconceived boundaries by offering unparalleled optical performance and precise sound pressure levels. Its inherent stretchability ensures seamless operation under both static and dynamic deformation, preserving the integrity of the sound relative to the input waveform.
    A key advantage of this groundbreaking technology is its potential to revolutionize wearable devices, mobile devices, and the Internet of Things (IoT) as the next generation of displays. By seamlessly generating sound and light simultaneously, the stretchable display delivers a distinctive user experience and unlocks untapped potential for advanced encryption and authentication.
    To demonstrate the capabilities of this synesthesia display, the research team presented two innovative applications. Firstly, they showcased visual-acoustic encryption, an advanced encryption method that combines visual and auditory cues. This breakthrough sets the stage for reinforced authentication systems that leverage the power of both sight and sound, elevating security to new heights.
    Secondly, the team introduced a multiplex quick response code that bridges multiple domains with a single device. This remarkable technology empowers users to interact with the display, ushering in a new era of seamless integration and user-friendly experiences.
    Professor Choi enthused, “The demand for next-generation displays is skyrocketing, and this stretchable high-resolution display that generates sound and light simultaneously overcomes the limitations of previous light-emitting devices. Our novel light-emission layer transfer technology, achieved through surface energy control, enables us to achieve remarkable patterns and maintain stability even under deformation.”
    The manufactured device boasts exceptional brightness and sound characteristics, with its circular shape maintained at over 95% across more than 5,000 deformation experiments. This durability and versatility render the stretchable display ideal for a wide range of applications, including wearable speakers, double-encryption devices, and multiplex quick response code implementations.
    According to the research team, this remarkable advancement in display technology propels us one step closer to a future where multifunctional displays seamlessly integrate with our daily lives. As the demand for advanced human-machine interfaces continues to surge, the stretchable high-resolution multicolor synesthesia display offers a tantalizing glimpse into the limitless possibilities of tomorrow.

  • AI just got 100-fold more energy efficient

    Forget the cloud.
    Northwestern University engineers have developed a new nanoelectronic device that can perform accurate machine-learning classification tasks in the most energy-efficient manner yet. Using 100-fold less energy than current technologies, the device can crunch large amounts of data and perform artificial intelligence (AI) tasks in real time without beaming data to the cloud for analysis.
    With its tiny footprint, ultra-low power consumption and lack of lag time to receive analyses, the device is ideal for direct incorporation into wearable electronics (like smart watches and fitness trackers) for real-time data processing and near-instant diagnostics.
    To test the concept, the engineers used the device to classify large amounts of information from publicly available electrocardiogram (ECG) datasets. Not only could the device efficiently and correctly identify an irregular heartbeat, it was also able to determine the arrhythmia subtype from among six different categories with nearly 95% accuracy.
    The research will be published on Oct. 12 in the journal Nature Electronics.
    “Today, most sensors collect data and then send it to the cloud, where the analysis occurs on energy-hungry servers before the results are finally sent back to the user,” said Northwestern’s Mark C. Hersam, the study’s senior author. “This approach is incredibly expensive, consumes significant energy and adds a time delay. Our device is so energy efficient that it can be deployed directly in wearable electronics for real-time detection and data processing, enabling more rapid intervention for health emergencies.”
    A nanotechnology expert, Hersam is the Walter P. Murphy Professor of Materials Science and Engineering at Northwestern’s McCormick School of Engineering. He is also chair of the Department of Materials Science and Engineering, director of the Materials Research Science and Engineering Center and a member of the International Institute of Nanotechnology. Hersam co-led the research with Han Wang, a professor at the University of Southern California, and Vinod Sangwan, a research assistant professor at Northwestern.