More stories


    Researchers find the missing photonic link to enable an all-silicon quantum internet

    Researchers at Simon Fraser University have made a crucial breakthrough in the development of quantum technology.
    Their research, published in Nature today, describes their observations of over 150,000 silicon ‘T centre’ photon-spin qubits, an important milestone that unlocks immediate opportunities to construct massively scalable quantum computers and the quantum internet that will connect them.
    Quantum computing has enormous potential to provide computing power well beyond the capabilities of today’s supercomputers, which could enable advances in many other fields, including chemistry, materials science, medicine and cybersecurity.
    In order to make this a reality, it is necessary to produce both stable, long-lived qubits that provide processing power and the communications technology that enables these qubits to link together at scale.
    Past research has indicated that silicon can produce some of the most stable and long-lived qubits in the industry. Now the research published by Daniel Higginbottom, Alex Kurkjian, and co-authors provides proof of principle that T centres, a specific luminescent defect in silicon, can provide a ‘photonic link’ between qubits. The work comes out of the Silicon Quantum Technology Lab in SFU’s Department of Physics, co-led by Stephanie Simmons, Canada Research Chair in Silicon Quantum Technologies, and Michael Thewalt, Professor Emeritus.
    “This work is the first measurement of single T centres in isolation, and actually, the first measurement of any single spin in silicon to be performed with only optical measurements,” says Stephanie Simmons.
    “An emitter like the T centre that combines high-performance spin qubits and optical photon generation is ideal to make scalable, distributed, quantum computers, because they can handle the processing and the communications together, rather than needing to interface two different quantum technologies, one for processing and one for communications,” Simmons says.
    In addition, T centres have the advantage of emitting light at the same wavelength that today’s metropolitan fibre communications and telecom networking equipment use.
    “With T centres, you can build quantum processors that inherently communicate with other processors,” Simmons says. “When your silicon qubit can communicate by emitting photons (light) in the same band used in data centres and fiber networks, you get these same benefits for connecting the millions of qubits needed for quantum computing.”
    Developing quantum technology using silicon provides opportunities to rapidly scale quantum computing. The global semiconductor industry is already able to inexpensively manufacture silicon computer chips at scale, with a staggering degree of precision. This technology forms the backbone of modern computing and networking, from smartphones to the world’s most powerful supercomputers.
    “By finding a way to create quantum computing processors in silicon, you can take advantage of all of the years of development, knowledge, and infrastructure used to manufacture conventional computers, rather than creating a whole new industry for quantum manufacturing,” Simmons says. “This represents an almost insurmountable competitive advantage in the international race for a quantum computer.”
    Story Source:
    Materials provided by Simon Fraser University. Original written by Erin Brown-John. Note: Content may be edited for style and length.


    Gender bias in search algorithms has effect on users, new study finds

    Even gender-neutral internet searches yield male-dominated results, finds a new study by a team of psychology researchers. Moreover, these search results affect users by promoting gender bias and potentially influencing hiring decisions.
    The work, which appears in the journal Proceedings of the National Academy of Sciences (PNAS), is among the latest to uncover how artificial intelligence (AI) can alter our perceptions and actions.
    “There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s Department of Psychology and the paper’s lead author. “As a consequence, their use by humans may result in the propagation, rather than reduction, of existing disparities.”
    “These findings call for a model of ethical AI that combines human psychology with computational and sociological approaches to illuminate the formation, operation, and mitigation of algorithmic bias,” adds author David Amodio, a professor in NYU’s Department of Psychology and the University of Amsterdam.
    “Certain 1950s ideas about gender are actually still embedded in our database systems,” Meredith Broussard, author of Artificial Unintelligence: How Computers Misunderstand the World and a professor at NYU’s Arthur L. Carter Journalism Institute, told The Markup earlier this year.


    A proof of odd-parity superconductivity

    Superconductivity is a fascinating state of matter in which an electrical current can flow without any resistance. It usually exists in one of two forms: an “even parity” state, which is easily destroyed by a magnetic field and has a wave function that is point-symmetric with respect to an inversion point, and an “odd parity” state, which remains stable in magnetic fields applied along certain directions and has an antisymmetric wave function.
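    The parity distinction above can be written compactly in terms of how the pair wave function ψ behaves under inversion of the spatial coordinate about the inversion point:

```latex
\psi(-\mathbf{r}) = +\psi(\mathbf{r}) \quad \text{(even parity)}, \qquad
\psi(-\mathbf{r}) = -\psi(\mathbf{r}) \quad \text{(odd parity)}
```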
    Consequently, the latter should present a characteristic angle dependence of the critical field at which superconductivity disappears. But odd-parity superconductivity is rare in nature; only a few materials support this state, and in none of them has the expected angle dependence been observed. In a new publication in PRX, the group of Elena Hassinger and collaborators shows that the angle dependence in the superconductor CeRh2As2 is exactly that expected of an odd-parity state.
    CeRh2As2 was recently found to exhibit two superconducting states: a low-field state changes into a high-field state at 4 T when a magnetic field is applied along one axis. For varying field directions, the researchers measured the specific heat, magnetic susceptibility, and magnetic torque of this material to obtain the angle dependence of the critical fields. They found that the high-field state quickly disappears when the magnetic field is turned away from the initial axis. These results are in excellent agreement with their model identifying the two states with even- and odd-parity states.
    CeRh2As2 presents an extraordinary opportunity to investigate odd-parity superconductivity further. It also allows for testing mechanisms for a transition between two superconducting states, and especially their relation to spin-orbit coupling, multiband physics, and additional ordered states occurring in this material.
    Story Source:
    Materials provided by Max Planck Institute for Chemical Physics of Solids. Note: Content may be edited for style and length.


    A machine learning model to predict immunotherapy response in cancer patients

    Immunotherapy is a new cancer treatment that activates the body’s immune system to fight against cancer cells without using chemotherapy or radiotherapy. It has fewer side effects than conventional anticancer drugs because it attacks only cancer cells using the body’s immune system. In addition, because it uses the memory and adaptability of the immune system, patients who have benefited from its therapeutic effects experience sustained anticancer effects.
    Recently developed immune checkpoint inhibitors have considerably improved the survival rate of patients with cancer. However, only approximately 30% of cancer patients benefit from the treatment, and current diagnostic techniques cannot accurately predict a patient’s response to it.
    Against this backdrop, the research team led by Professor Sanguk Kim (Department of Life Sciences) at POSTECH is gaining attention for improving the accuracy of predicting patient response to immune checkpoint inhibitors (ICIs) using network-based machine learning. The team discovered new network-based biomarkers by analyzing the clinical results of more than 700 patients with three different cancers (melanoma, gastric cancer, and bladder cancer) together with transcriptome data from the patients’ cancer tissues. Using these biomarkers, the team developed an artificial intelligence model that predicts the response to anticancer treatment, and further showed that its predictions outperform those based on conventional anticancer treatment biomarkers, including immunotherapy targets and tumor microenvironment markers.
    In their previous study, the research team had developed machine learning that could predict drug responses to chemotherapy in patients with gastric or bladder cancer. This study has shown that artificial intelligence using the interactions between genes in a biological network could successfully predict the patient response to not only chemotherapy, but also immunotherapy in multiple cancer types.
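    As a rough illustration of the kind of pipeline described above, the sketch below builds “network-based” features by averaging expression over toy gene neighborhoods and cross-validates a classifier on synthetic responder labels. Every name and number here is invented for illustration; the team’s actual biomarkers and model are not reproduced.

```python
# Sketch of network-based response prediction on synthetic data.
# All genes, "neighborhoods," and responder labels are invented stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_genes = 200, 50
expression = rng.normal(size=(n_patients, n_genes))          # toy transcriptomes
response = (expression[:, :5].mean(axis=1) > 0).astype(int)  # toy responder labels

# Stand-in for network-based aggregation: each feature is the mean
# expression of a small gene "neighborhood."
neighborhoods = [range(i, i + 5) for i in range(0, n_genes, 5)]
features = np.column_stack([expression[:, nb].mean(axis=1) for nb in neighborhoods])

model = LogisticRegression()
scores = cross_val_score(model, features, response, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {scores.mean():.2f}")
```

    In the study itself, the features come from interactions between genes in a biological network rather than from random groupings, and the labels are real clinical responses.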
    This study makes it possible to identify in advance the patients who will respond to immunotherapy and to establish their treatment plans accordingly, a step toward customized precision medicine in which more patients benefit from cancer treatment. Supported by the POSTECH Medical Device Innovation Center, the Graduate School of Artificial Intelligence, and ImmunoBiome Inc., the study was recently published in Nature Communications, an international peer-reviewed journal.
    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.


    Researchers remeasure gravitational constant

    Researchers at ETH Zurich have redetermined the gravitational constant G using a new measurement technique. Although there is still a large degree of uncertainty regarding this value, the new method offers great potential for testing one of the most fundamental laws of nature.
    The gravitational constant G determines the strength of gravity — the force that makes apples fall to the ground or pulls the Earth in its orbit around the sun. It is part of Isaac Newton’s law of universal gravitation, which he first formulated more than 300 years ago. The constant cannot be derived mathematically; it has to be determined through experiment.
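    The law in question relates the attractive force between two masses separated by a distance r; the currently recommended (CODATA) value of the constant is shown for reference:

```latex
F = G \, \frac{m_1 m_2}{r^2}, \qquad
G \approx 6.674 \times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}
```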
    Over the centuries, scientists have conducted numerous experiments to determine the value of G, but the scientific community isn’t satisfied with the current figure. It is still less precise than the values of all the other fundamental natural constants — for example, the speed of light in a vacuum.
    One reason gravity is extremely difficult to quantify is that it is a very weak force and cannot be isolated: when you measure the gravity between two bodies, you also measure the effect of all other bodies in the world.
    “The only option for resolving this situation is to measure the gravitational constant with as many different methods as possible,” explains Jürg Dual, a professor in the Department of Mechanical and Process Engineering at ETH Zurich. He and his colleagues conducted a new experiment to redetermine the gravitational constant and have now presented their work in the scientific journal Nature Physics.
    A novel experiment in an old fortress
    To rule out sources of interference as far as possible, Dual’s team set up their measuring equipment in what used to be the Furggels fortress, located near Pfäfers above Bad Ragaz, Switzerland. The experimental setup consists of two beams suspended in vacuum chambers. After the researchers set one vibrating, gravitational coupling caused the second beam to also exhibit minimal movement (in the picometre range — i.e., one trillionth of a metre). Using laser devices, the team measured the motion of the two beams, and the measurement of this dynamic effect allowed them to infer the magnitude of the gravitational constant.


    Could a computer diagnose Alzheimer's disease and dementia?

    It takes a lot of time — and money — to diagnose Alzheimer’s disease. After running lengthy in-person neuropsychological exams, clinicians have to transcribe, review, and analyze every response in detail. But researchers at Boston University have developed a new tool that could automate the process and eventually allow it to move online. Their machine learning-powered computational model can detect cognitive impairment from audio recordings of neuropsychological tests — no in-person appointment needed. Their findings were published in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.
    “This approach brings us one step closer to early intervention,” says Ioannis Paschalidis, a coauthor on the paper and a BU College of Engineering Distinguished Professor of Engineering. He says faster and earlier detection of Alzheimer’s could drive larger clinical trials that focus on individuals in early stages of the disease and potentially enable clinical interventions that slow cognitive decline: “It can form the basis of an online tool that could reach everyone and could increase the number of people who get screened early.”
    The research team trained their model using audio recordings of neuropsychological interviews from over 1,000 individuals in the Framingham Heart Study, a long-running BU-led project looking at cardiovascular disease and other physiological conditions. Using automated online speech recognition tools — think, “Hey, Google!” — and a machine learning technique called natural language processing that helps computers understand text, they had their program transcribe the interviews, then encode them into numbers. A final model was trained to assess the likelihood and severity of an individual’s cognitive impairment using demographic data, the text encodings, and real diagnoses from neurologists and neuropsychologists.
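    The text-analysis half of the pipeline described above can be sketched as follows. The transcripts, labels, and test sentence are invented stand-ins, and TF-IDF plus logistic regression stands in for the study’s actual encodings and model:

```python
# Sketch: encode interview transcripts as numeric vectors and train a
# classifier against clinician diagnoses. All data here are toy stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "patient names the picture quickly and recalls all items",
    "patient pauses often and cannot recall the word for the picture",
    "fluent responses with accurate naming throughout the test",
    "frequent word finding difficulty and incomplete answers",
]
labels = [0, 1, 0, 1]  # 0 = healthy, 1 = impaired (toy diagnoses)

# TF-IDF turns each transcript into a weighted word-count vector; a linear
# model then learns which word patterns track impairment.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(transcripts, labels)
print(model.predict(["cannot recall the word and pauses often"]))
```

    The real model additionally folds in demographic data and predicts the severity of impairment, not just a binary label.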
    Paschalidis says the model was not only able to accurately distinguish between healthy individuals and those with dementia, but also detect differences between those with mild cognitive impairment and dementia. And, it turned out, the quality of the recordings and how people spoke — whether their speech breezed along or consistently faltered — were less important than the content of what they were saying.
    “It surprised us that speech flow or other audio features are not that critical; you can automatically transcribe interviews reasonably well, and rely on text analysis through AI to assess cognitive impairment,” says Paschalidis, who’s also the new director of BU’s Rafik B. Hariri Institute for Computing and Computational Science & Engineering. Though the team still needs to validate its results against other sources of data, the findings suggest their tool could support clinicians in diagnosing cognitive impairment using audio recordings, including those from virtual or telehealth appointments.
    Screening before Symptom Onset
    The model also provides insight into what parts of the neuropsychological exam might be more important than others in determining whether an individual has impaired cognition. The researchers’ model splits the exam transcripts into different sections based on the clinical tests performed. They discovered, for instance, that the Boston Naming Test — during which clinicians ask individuals to label a picture using one word — is most informative for an accurate dementia diagnosis. “This might enable clinicians to allocate resources in a way that allows them to do more screening, even before symptom onset,” says Paschalidis.
    Early diagnosis of dementia is not only important for patients and their caregivers to be able to create an effective plan for treatment and support, but it’s also crucial for researchers working on therapies to slow and prevent Alzheimer’s disease progression. “Our models can help clinicians assess patients in terms of their chances of cognitive decline,” says Paschalidis, “and then best tailor resources to them by doing further testing on those that have a higher likelihood of dementia.”
    Want to Join the Research Effort?
    The research team is looking for volunteers to take an online survey and submit an anonymous cognitive test — results will be used to provide personalized cognitive assessments and will also help the team refine their AI model.
    Story Source:
    Materials provided by Boston University. Original written by Gina Mantica. Note: Content may be edited for style and length.


    Video game players show enhanced brain activity and decision-making skills, study finds

    Frequent players of video games show superior sensorimotor decision-making skills and enhanced activity in key regions of the brain as compared to non-players, according to a recent study by Georgia State University researchers.
    The authors, who used functional magnetic resonance imaging (fMRI) in the study, said the findings suggest that video games could be a useful tool for training in perceptual decision-making.
    “Video games are played by the overwhelming majority of our youth more than three hours every week, but the beneficial effects on decision-making abilities and the brain are not exactly known,” said lead researcher Mukesh Dhamala, associate professor in Georgia State’s Department of Physics and Astronomy and the university’s Neuroscience Institute.
    “Our work provides some answers on that,” Dhamala said. “Video game playing can effectively be used for training — for example, decision-making efficiency training and therapeutic interventions — once the relevant brain networks are identified.”
    Dhamala was the adviser for Tim Jordan, the lead author of the paper, who offered a personal example of how such research could inform the use of video games for training the brain.
    Jordan, who received a Ph.D. in physics and astronomy from Georgia State in 2021, had weak vision in one eye as a child. As part of a research study when he was about 5, he was asked to cover his good eye and play video games as a way to strengthen the vision in the weak one. Jordan credits video game training with helping him go from legally blind in one eye to building strong capacity for visual processing, allowing him to eventually play lacrosse and paintball. He is now a postdoctoral researcher at UCLA.


    A 'wise counsel' for synthetic biology

    Machine learning is transforming all areas of biological science and industry, but its use is typically limited to a few users and scenarios. A team of researchers at the Max Planck Institute for Terrestrial Microbiology led by Tobias Erb has now developed METIS, a modular software system for optimizing biological systems. The team demonstrates its usability and versatility with a variety of biological examples.
    While the engineering of biological systems is indispensable in biotechnology and synthetic biology, machine learning has become useful across all fields of biology. Applying and improving the underlying algorithms, however, is not easily accessible: users are limited not only by programming skills but often also by insufficient experimentally labeled data. At the intersection of computational and experimental work, efficient approaches are needed to bridge the gap between machine learning algorithms and their applications to biological systems.
    Now the team at the Max Planck Institute for Terrestrial Microbiology led by Tobias Erb has succeeded in democratizing machine learning. In a recent publication in Nature Communications, the team, together with collaboration partners from the INRAe Institute in Paris, presented their tool METIS. The application is built with such a versatile and modular architecture that it requires no computational skills and can be applied to different biological systems and with different lab equipment. METIS is short for Machine-learning guided Experimental Trials for Improvement of Systems; it is also named after Μῆτις, the ancient Greek goddess of wisdom and crafts, lit. “wise counsel.”
    Less data required
    Active learning, also known as optimal experimental design, uses machine learning algorithms that are trained on previous results to interactively suggest the next set of experiments. It is a valuable approach for wet-lab scientists, especially those working with limited experimentally labeled data. One of the main bottlenecks, however, is that the amount of labeled data generated in the lab is not always large enough to train machine learning models. “While active learning already reduces the need for experimental data, we went further and examined various machine learning algorithms. Encouragingly, we found a model that is even less dependent on data,” says Amir Pandi, one of the lead authors of the study.
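    A generic pool-based active-learning loop of the kind described above can be sketched as follows. The synthetic “oracle” stands in for running a real wet-lab experiment; nothing here reproduces METIS itself:

```python
# Pool-based active learning with uncertainty sampling on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
pool = rng.uniform(size=(500, 4))  # candidate experimental conditions

def oracle(X):
    """Stand-in 'experiment': labels each condition good (1) or bad (0)."""
    return (X.sum(axis=1) > 2).astype(int)

# Seed the labeled set with one clear example of each class plus a few extras.
sums = pool.sum(axis=1)
labeled = [int(np.argmin(sums)), int(np.argmax(sums))] + list(range(8))

for _ in range(5):
    model = RandomForestClassifier(random_state=0)
    model.fit(pool[labeled], oracle(pool[labeled]))
    proba = model.predict_proba(pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)  # 0 = least certain
    ranked = [int(i) for i in np.argsort(uncertainty) if int(i) not in labeled]
    labeled += ranked[:10]             # "run" the 10 most informative experiments

accuracy = (model.predict(pool) == oracle(pool)).mean()
print(f"pool accuracy after active learning: {accuracy:.2f}")
```

    Querying the most uncertain conditions, rather than random ones, is what lets the model improve with far fewer labeled experiments.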
    To show the versatility of METIS, the team used it for a variety of applications, including the optimization of protein production, genetic constructs, combinatorial engineering of enzyme activity, and a complex CO2-fixation metabolic cycle named CETCH. For the CETCH cycle, they explored a combinatorial space of 10^25 conditions with only 1,000 experiments and reported the most efficient CO2-fixation cascade described to date.
    Optimizing biological systems
    In application, the study provides novel tools to democratize and advance current efforts in biotechnology, synthetic biology, genetic circuit design, and metabolic engineering. “METIS allows researchers to either optimize their already discovered or synthesized biological systems,” says Christoph Diehl, Co-lead author of the study. “But it is also a combinatorial guide for understanding complex interactions and hypothesis-driven optimization. And what is probably the most exciting benefit: it can be a very helpful system for prototyping new-to-nature systems.”
    METIS is a modular tool running as Google Colab Python notebooks and can be used via a personal copy of the notebook on a web browser, without installation, registration, or the need for local computational power. The materials provided in this work can guide users to customize METIS for their applications.
    Story Source:
    Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.