More stories

  • Adaptive optical neural network connects thousands of artificial neurons

    Modern computer models — for example for complex, potent AI applications — push traditional digital computer processes to their limits. New types of computing architecture, which emulate the working principles of biological neural networks, hold the promise of faster, more energy-efficient data processing. A team of researchers has now developed a so-called event-based architecture, using photonic processors with which data are transported and processed by means of light. In a similar way to the brain, this makes possible the continuous adaptation of the connections within the neural network. These changeable connections are the basis for learning processes. For the purposes of the study, a team working at Collaborative Research Centre 1459 (“Intelligent Matter”) — headed by physicists Prof. Wolfram Pernice and Prof. Martin Salinga and computer specialist Prof. Benjamin Risse, all from the University of Münster, Germany — joined forces with researchers from the Universities of Exeter and Oxford in the UK. The study has been published in the journal “Science Advances.”
    A neural network in machine learning needs artificial neurons which are activated by external excitatory signals and which have connections to other neurons. The connections between these artificial neurons are called synapses — just like the biological original. For their study, the team of researchers in Münster used a network consisting of almost 8,400 optical neurons made of waveguide-coupled phase-change material, and showed that the connection between any two of these neurons can indeed become stronger or weaker (synaptic plasticity), and that new connections can be formed, or existing ones eliminated (structural plasticity). In contrast to other similar studies, the synapses were not hardware elements but were encoded in the properties of the optical pulses — in other words, in the respective wavelength and intensity of each pulse. This made it possible to integrate several thousand neurons on one single chip and connect them optically.
    In comparison with traditional electronic processors, light-based processors offer a significantly higher bandwidth, making it possible to carry out complex computing tasks with lower energy consumption. The new approach is still basic research. “Our aim is to develop an optical computing architecture which in the long term will make it possible to compute AI applications in a rapid and energy-efficient way,” says Frank Brückerhoff-Plückelmann, one of the lead authors.
    Methodology: The non-volatile phase-change material can be switched between an amorphous structure and a crystalline structure with a highly ordered atomic lattice. This feature allows permanent data storage even without an energy supply. The researchers tested the performance of the neural network by using an evolutionary algorithm to train it to distinguish between German and English texts. The recognition parameter they used was the number of vowels in the text.
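    The training setup described above can be sketched as a toy evolutionary search. All data and parameters below are invented for illustration — the study's actual network and algorithm are far more elaborate — and this toy uses the vowel fraction (a length-normalized count) as the feature:

```python
import random

VOWELS = set("aeiouäöü")

def vowel_fraction(text):
    """Fraction of letters that are vowels."""
    letters = [c for c in text.lower() if c.isalpha()]
    return sum(c in VOWELS for c in letters) / max(len(letters), 1)

def fitness(threshold, samples):
    # samples: list of (text, label), label 1 = English, 0 = German.
    correct = sum((vowel_fraction(t) > threshold) == bool(y) for t, y in samples)
    return correct / len(samples)

def evolve(samples, pop_size=20, generations=50, seed=0):
    """Evolve a decision threshold by selection and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda th: fitness(th, samples), reverse=True)
        parents = pop[: pop_size // 2]                        # selection
        children = [min(max(p + rng.gauss(0, 0.02), 0.0), 1.0)
                    for p in parents]                         # mutation
        pop = parents + children
    return max(pop, key=lambda th: fitness(th, samples))

samples = [
    ("the quick brown fox jumps over the lazy dog", 1),
    ("machine learning is reshaping modern science", 1),
    ("der schnelle braune fuchs springt über den faulen hund", 0),
    ("maschinelles lernen verändert die moderne wissenschaft", 0),
]
best = evolve(samples)
print(f"best threshold: {best:.3f}, accuracy: {fitness(best, samples):.2f}")
```

    Evolutionary search suits hardware like this because it needs only forward evaluations of the network, not gradients through the optical components.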

  • To excel at engineering design, generative AI must learn to innovate, study finds

    ChatGPT and other deep generative models are proving to be uncanny mimics. These AI supermodels can churn out poems, finish symphonies, and create new videos and images by automatically learning from millions of examples of previous works. These enormously powerful and versatile tools excel at generating new content that resembles everything they’ve seen before.
    But as MIT engineers say in a new study, similarity isn’t enough if you want to truly innovate in engineering tasks.
    “Deep generative models (DGMs) are very promising, but also inherently flawed,” says study author Lyle Regenwetter, a mechanical engineering graduate student at MIT. “The objective of these models is to mimic a dataset. But as engineers and designers, we often don’t want to create a design that’s already out there.”
    He and his colleagues make the case that if mechanical engineers want help from AI to generate novel ideas and designs, they will have to first refocus those models beyond “statistical similarity.”
    “The performance of a lot of these models is explicitly tied to how statistically similar a generated sample is to what the model has already seen,” says co-author Faez Ahmed, assistant professor of mechanical engineering at MIT. “But in design, being different could be important if you want to innovate.”
    In their study, Ahmed and Regenwetter reveal the pitfalls of deep generative models when they are tasked with solving engineering design problems. In a case study of bicycle frame design, the team shows that these models end up generating new frames that mimic previous designs but falter on engineering performance and requirements.
    When the researchers presented the same bicycle frame problem to DGMs that they specifically designed with engineering-focused objectives, rather than only statistical similarity, these models produced more innovative, higher-performing frames.
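    The contrast the study draws — scoring generated designs by statistical similarity alone versus similarity plus an engineering objective — can be illustrated with a toy selection example. Everything below (the "frame" parameters, the performance proxy, the dataset) is invented and is not the study's models or metrics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each hypothetical design: [tube_diameter_mm, wall_thickness_mm].
dataset = rng.normal(loc=[30.0, 2.0], scale=[2.0, 0.2], size=(100, 2))

def similarity(design, data):
    # Negative mean distance to the dataset: higher = more typical.
    return -np.mean(np.linalg.norm(data - design, axis=1))

def performance(design):
    # Made-up stiffness-to-weight proxy: wider tubes are stiffer per
    # unit weight in this toy.
    d, t = design
    stiffness = d**3 * t
    weight = d * t
    return stiffness / weight / 1000.0

candidates = rng.normal(loc=[30.0, 2.0], scale=[6.0, 0.6], size=(200, 2))

# Pick the best candidate under each objective.
best_similar = max(candidates, key=lambda c: similarity(c, dataset))
best_hybrid = max(candidates,
                  key=lambda c: similarity(c, dataset) + performance(c))

print("similarity-only pick:      ", np.round(best_similar, 1))
print("similarity+performance pick:", np.round(best_hybrid, 1))
```

    The similarity-only objective rewards designs near the dataset mean; adding the performance term trades some typicality for a better-performing design, which is the paper's point in miniature.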

  • Keeping a human in the loop: Managing the ethics of AI in medicine

    Artificial intelligence (AI) — of ChatGPT fame — is increasingly used in medicine to improve diagnosis and treatment of diseases, and to avoid unnecessary screening for patients. But AI medical devices could also harm patients and worsen health inequities if they are not designed, tested, and used with care, according to an international task force that included a University of Rochester Medical Center bioethicist.
    Jonathan Herington, PhD, was a member of the AI Task Force of the Society for Nuclear Medicine and Medical Imaging, which laid out recommendations on how to ethically develop and use AI medical devices in two papers published in the Journal of Nuclear Medicine. In short, the task force called for increased transparency about the accuracy and limits of AI and outlined ways to ensure all people have access to AI medical devices that work for them — regardless of their race, ethnicity, gender, or wealth.
    While the burden of proper design and testing falls to AI developers, health care providers are ultimately responsible for properly using AI and shouldn’t rely too heavily on AI predictions when making patient care decisions.
    “There should always be a human in the loop,” said Herington, who is assistant professor of Health Humanities and Bioethics at URMC and was one of three bioethicists added to the task force in 2021. “Clinicians should use AI as an input into their own decision making, rather than replacing their decision making.”
    This requires that doctors truly understand how a given AI medical device is intended to be used, how well it performs at that task, and any limitations — and they must pass that knowledge on to their patients. Doctors must weigh the relative risks of false positives versus false negatives for a given situation, all while taking structural inequities into account.
    When using an AI system to identify probable tumors in PET scans, for example, health care providers must know how well the system performs at identifying this specific type of tumor in patients of the same sex, race, ethnicity, etc., as the patient in question.
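    The subgroup check described above amounts to comparing error rates per group. A minimal sketch with hypothetical validation counts (the numbers and group labels are invented):

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)  # fraction of real tumors the model catches

def specificity(tn, fp):
    return tn / (tn + fp)  # fraction of healthy scans correctly cleared

# Hypothetical validation counts per subgroup: (TP, FN, TN, FP).
subgroups = {
    "group A": (90, 10, 180, 20),
    "group B": (70, 30, 190, 10),
}

for name, (tp, fn, tn, fp) in subgroups.items():
    print(f"{name}: sensitivity={sensitivity(tp, fn):.2f}, "
          f"specificity={specificity(tn, fp):.2f}")
```

    In this made-up example group B's lower sensitivity means more missed tumors for those patients — exactly the kind of gap a clinician would need disclosed before weighing false negatives against false positives.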
    “What that means for the developers of these systems is that they need to be very transparent,” said Herington.

  • Ensuring fairness of AI in healthcare requires cross-disciplinary collaboration

    Pursuing fair artificial intelligence (AI) for healthcare requires collaboration between experts across disciplines, says a global team of scientists led by Duke-NUS Medical School in a new perspective published in npj Digital Medicine.
    While AI has demonstrated potential for healthcare insights, concerns around bias remain. “A fair model is expected to perform equally well across subgroups like age, gender and race. However, differences in performance may have underlying clinical reasons and may not necessarily indicate unfairness,” explained first author Ms Liu Mingxuan, a PhD candidate in the Quantitative Biology and Medicine (Biostatistics & Health Data Science) Programme and Centre for Quantitative Medicine (CQM) at Duke-NUS.
    “Focusing on equity — that is, recognising factors like race, gender, etc., and adjusting the AI algorithm or its application to make sure more vulnerable groups get the care they need — rather than complete equality, is likely a more reasonable approach for clinical AI,” said Dr Ning Yilin, Research Fellow with CQM and a co-first-author of the paper. “Patient preferences and prognosis are also crucial considerations, as equal treatment does not always mean fair treatment. An example of this is age, which frequently factors into treatment decisions and outcomes.”
    The paper highlights key misalignments between AI fairness research and clinical needs. “Various metrics exist to measure model fairness, but choosing suitable ones for healthcare is difficult as they can conflict. Trade-offs are often inevitable,” commented Associate Professor Liu Nan also from Duke-NUS’ CQM, senior and corresponding author of the paper.
    He added, “Differences detected between groups are frequently treated as biases to be mitigated in AI research. However, in the medical context, we must discern between meaningful differences and true biases requiring correction.”
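    One concrete way fairness metrics conflict, using invented numbers: if disease prevalence genuinely differs between groups, a model satisfying equalized odds (equal true and false positive rates) cannot also satisfy demographic parity (equal flag rates) — a group difference that need not be a bias:

```python
def positive_rate(prevalence, tpr, fpr):
    # Overall fraction of a group flagged positive.
    return prevalence * tpr + (1 - prevalence) * fpr

tpr, fpr = 0.9, 0.1          # identical in both groups: equalized odds holds
prev_a, prev_b = 0.30, 0.10  # but prevalence differs between the groups

rate_a = positive_rate(prev_a, tpr, fpr)
rate_b = positive_rate(prev_b, tpr, fpr)
print(f"group A flagged: {rate_a:.2f}, group B flagged: {rate_b:.2f}")
# Demographic parity would require these rates to be equal; they are not.
```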
    The authors emphasise the need to evaluate which attributes are considered ‘sensitive’ for each application. They say that actively engaging clinicians is vital for developing useful and fair AI models.
    “Variables like race and ethnicity need careful handling as they may represent systemic biases or biological differences,” said Assoc Prof Liu. “Clinicians can provide context, determine if differences are justified, and guide models towards equitable decisions.”
    Overall, the authors argue that pursuing fair AI for healthcare requires collaboration between experts in AI, medicine, ethics and beyond.

  • Eyes may be the window to your soul, but the tongue mirrors your health

    A 2000-year-old practice by Chinese herbalists — examining the human tongue for signs of disease — is now being embraced by computer scientists using machine learning and artificial intelligence.
    Tongue diagnostic systems are fast gaining traction due to an increase in remote health monitoring worldwide, and a study by Iraqi and Australian researchers provides more evidence of the increasing accuracy of this technology to detect disease.
    Engineers from Middle Technical University (MTU) in Baghdad and the University of South Australia (UniSA) used a USB web camera and computer to capture tongue images from 50 patients with diabetes, renal failure and anaemia, comparing colours with a database of 9,000 tongue images.
    Using image processing techniques, they correctly diagnosed the diseases in 94 per cent of cases, compared to laboratory results. A message specifying the tongue colour and disease was also sent via text to the patient or a nominated health provider.
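    The matching step can be sketched as nearest-colour classification. The reference colours and labels below are invented placeholders, far cruder than the study's 9,000-image database:

```python
import math

# Hypothetical mean RGB colours per condition.
reference = {
    "healthy":       (200, 120, 130),
    "diabetes":      (190, 150, 60),   # yellowish coating
    "anaemia":       (230, 190, 190),  # pale
    "renal failure": (140, 80, 110),   # purplish
}

def mean_colour(pixels):
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def diagnose(pixels):
    # Label of the reference colour nearest to the image's mean colour.
    c = mean_colour(pixels)
    return min(reference, key=lambda k: math.dist(c, reference[k]))

# A mock "image": a few pale pixels.
image = [(228, 192, 188), (232, 188, 192), (230, 190, 190)]
print(diagnose(image))  # → anaemia
```

    A real pipeline would segment the tongue region and correct for lighting before comparing colours; this sketch skips both.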
    MTU and UniSA Adjunct Associate Professor Ali Al-Naji and his colleagues have reviewed the worldwide advances in computer-aided disease diagnosis, based on tongue colour, in a new paper in AIP Conference Proceedings.
    “Thousands of years ago, Chinese medicine pioneered the practice of examining the tongue to detect illness,” Assoc Prof Al-Naji says.
    “Conventional medicine has long endorsed this method, demonstrating that the colour, shape, and thickness of the tongue can reveal signs of diabetes, liver issues, circulatory and digestive problems, as well as blood and heart diseases.”

  • International team develops novel DNA nano engine

    An international team of scientists has recently developed a novel type of nano engine made of DNA. It is driven by a clever mechanism and can perform pulsing movements. The researchers are now planning to fit it with a coupling and install it as a drive in complex nano machines. Their results were published today in the journal Nature Nanotechnology.
    Petr Šulc, an assistant professor at Arizona State University’s School of Molecular Sciences and the Biodesign Center for Molecular Design and Biomimetics, has collaborated with professor Famulok (project lead) from the University of Bonn, Germany and professor Walter from the University of Michigan on this project.
    Šulc has used his group’s computer modeling tools to gain insights into the design and operation of this leaf-spring nano engine. The structure comprises almost 14,000 nucleotides, the basic structural units of DNA.
    “Being able to simulate motion in such a large nanostructure would be impossible without oxDNA, the computer model that our group uses for the design and simulation of DNA nanostructures,” explains Šulc. “It is the first time that a chemically powered DNA nanotechnology motor has been successfully engineered. We are very excited that our research methods could help with studying it, and are looking forward to building even more complex nanodevices in the future.”
    This novel type of engine is similar to a hand grip strength trainer that strengthens your grip when used regularly. However, the motor is around one million times smaller. Two handles are connected by a spring in a V-shaped structure.
    In a hand grip strength trainer, you squeeze the handles together against the resistance of the spring. Once you release your grip, the spring pushes the handles back to their original position. “Our motor uses a very similar principle,” says professor Michael Famulok from the Life and Medical Sciences (LIMES) Institute at the University of Bonn. “But the handles are not pressed together but rather pulled together.”
    The researchers have repurposed a mechanism without which there would be no plants or animals on Earth. Every cell is equipped with a sort of library. It contains the blueprints for all types of proteins that each cell needs to perform its function. If the cell wants to produce a certain type of protein, it orders a copy from the respective blueprint. This transcript is produced by enzymes called RNA polymerases.

  • Physical theory improves protein folding prediction

    Proteins are important molecules that perform a variety of functions essential to life. To function properly, many proteins must fold into specific structures. However, the way proteins fold into specific structures is still largely unknown. Researchers from the University of Tokyo developed a novel physical theory that can accurately predict how proteins fold. Their model can predict things previous models cannot. Improved knowledge of protein folding could offer huge benefits to medical research, as well as to various industrial processes.
    You are literally made of proteins. These chainlike molecules, made from tens to thousands of smaller molecules called amino acids, form things like hair, bones, muscles, enzymes for digestion, antibodies to fight diseases, and more. Proteins make these things by folding into various structures that in turn build up these larger tissues and biological components. And by knowing more about this folding process, researchers can better understand more about the processes that constitute life itself. Such knowledge is also essential to medicine, not only for the development of new treatments and industrial processes to produce medicines, but also for knowledge of how certain diseases work, as some are examples of protein folding gone wrong. So, to say proteins are important is putting it mildly. Proteins are the stuff of life.
    Encouraged by the importance of protein folding, Project Assistant Professor Koji Ooka from the College of Arts and Sciences and Professor Munehito Arai from the Department of Life Sciences and Department of Physics embarked on the hard task of improving upon prediction methods for protein folding. This task is formidable for many reasons. In particular, the computational requirements to simulate the dynamics of molecules necessitate a powerful supercomputer. Recently, the artificial intelligence-based program AlphaFold 2 has accurately predicted the structures resulting from given amino acid sequences, but it cannot give details of the way proteins fold, making it a black box. This is problematic, as the forms and behaviors of proteins vary such that two similar ones may fold in radically different ways. So, instead of AI, the duo needed a different approach: statistical mechanics, a branch of physical theory.
    “For over 20 years, a theory called the Wako-Saitô-Muñoz-Eaton (WSME) model has successfully predicted the folding processes for proteins comprising around 100 amino acids or fewer, based on the native protein structures,” said Arai. “However, WSME can only evaluate small sections of proteins at a time, missing potential connections between sections farther apart. To overcome this issue, we produced a new model, WSME-L, where the L stands for ‘linker.’ Our linkers correspond to these nonlocal interactions and allow WSME-L to elucidate the folding process without the limitations of protein size and shape, which AlphaFold 2 cannot.”
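    The core WSME idea — native contacts that only contribute energy when the residues between them are ordered, balanced against an entropy cost for ordering — can be sketched on a toy chain. The chain length, contact map and parameters below are invented; this is an illustration of the idea, not the authors' WSME-L:

```python
import math
from itertools import product

N = 10
contacts = [(i, i + 3) for i in range(N - 3)]  # hypothetical native contacts
eps, ds, kT = 1.5, 1.0, 1.0   # contact energy, entropy cost, temperature

def energy(state):
    """WSME-style energy: a contact (i, j) counts only if every
    residue from i to j is folded; each folded residue pays an
    entropy penalty."""
    e = 0.0
    for i, j in contacts:
        if all(state[k] for k in range(i, j + 1)):
            e -= eps
    return e + ds * kT * sum(state)

# Enumerate all 2^N folded/unfolded states and bin the partition
# function by the number of folded residues.
Z = [0.0] * (N + 1)
for state in product((0, 1), repeat=N):
    Z[sum(state)] += math.exp(-energy(state) / kT)

# Free-energy profile along the folding coordinate n_folded.
profile = [-kT * math.log(z) for z in Z]
for n, f in enumerate(profile):
    print(f"n_folded={n:2d}  F={f:5.2f} kT")
```

    Brute-force enumeration only works for tiny chains; the appeal of WSME-type models is that transfer-matrix tricks make the same sum tractable for real proteins.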
    But it doesn’t end there. There are other limitations of existing protein folding models that Ooka and Arai set their sights on. Proteins can exist inside or outside of living cells; those within are in some ways protected by the cell, but those outside cells, such as antibodies, require additional bonds during folding, called disulfide bonds, which help to stabilize them. Conventional models cannot factor in these bonds, but an extension to WSME-L called WSME-L(SS), where each S stands for sulfide, can. To further complicate things, some proteins have disulfide bonds before folding starts, so the researchers made a further enhancement called WSME-L(SSintact), which factors in that situation at the expense of extra computation time.
    “Our theory allows us to draw a kind of map of protein folding pathways in a relatively short time; mere seconds on a desktop computer for short proteins, and about an hour on a supercomputer for large proteins, assuming the native protein structure is available by experiments or AlphaFold 2 prediction,” said Arai. “The resulting landscape allows a comprehensive understanding of multiple potential folding pathways a long protein might take. And crucially, we can scrutinize structures of transient states. This might be helpful for those researching diseases like Alzheimer’s and Parkinson’s — both are caused by proteins which fail to fold correctly. Also, our method may be useful for designing novel proteins and enzymes which can efficiently fold into stable functional structures, for medical and industrial use.”
    While the models produced here accurately reflect experimental observations, Ooka and Arai hope they can be used to elucidate the folding processes of many proteins that have not yet been studied experimentally. Humans have about 20,000 different proteins, but only around 100 have had their folding processes thoroughly studied.

  • Electrical control of quantum phenomenon could improve future electronic devices

    A new electrical method to conveniently change the direction of electron flow in some quantum materials could have implications for the development of next-generation electronic devices and quantum computers. A team of researchers from Penn State developed and demonstrated the method in materials that exhibit the quantum anomalous Hall (QAH) effect — a phenomenon in which the flow of electrons along the edge of a material does not lose energy. The team described the work in a paper appearing today (Oct. 19) in the journal Nature Materials.
    “As electronic devices get smaller and computational demands get larger, it is increasingly important to find ways to improve the efficiency of information transfer, which includes the control of electron flow,” said Cui-Zu Chang, Henry W. Knerr Early Career Professor and associate professor of physics at Penn State and co-corresponding author of the paper. “The QAH effect is promising because there is no energy loss as electrons flow along the edges of materials.”
    In 2013, Chang was the first to experimentally demonstrate this quantum phenomenon. Materials exhibiting this effect are referred to as QAH insulators, which are a type of topological insulator — a thin layer of film only a couple dozen atoms thick — that have been made magnetic so that they only conduct current on their edges. Because the electrons travel cleanly in one direction, the effect is referred to as dissipationless, meaning no energy is lost in the form of heat.
    “In a QAH insulator, electrons on one side of the material travel in one direction, while those on the other side travel in the opposite direction, like a two-lane highway,” Chang said. “Our earlier work demonstrated how to scale up the QAH effect, essentially creating a multilane highway for faster electron transport. In this study, we develop a new electrical method to control the transport direction of the electron highway and provide a way for those electrons to make an immediate U-turn.”
    The researchers fabricated a QAH insulator with specific, optimized properties. They found that applying a 5-millisecond current pulse to the QAH insulator impacts the internal magnetism of the material and causes the electrons to change directions. The ability to change direction is critical for optimizing information transfer, storage, and retrieval in quantum technologies. Unlike current electronics, where data is stored in a binary state as on or off — as one or zero — quantum data can be stored simultaneously in a range of possible states. Changing the flow of electrons is an important step in writing and reading these quantum states.
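    The switching behaviour can be caricatured in a few lines: the sign of the quantized Hall conductance follows the internal magnetization, so a strong enough current pulse that reverses the magnetization makes the edge electrons make their "U-turn". Every number except the conductance quantum e²/h is invented, and this is a cartoon, not a model of the device:

```python
E2_OVER_H = 3.874e-5  # conductance quantum e^2/h in siemens

class ToyQAH:
    def __init__(self):
        self.magnetization = +1   # +1 or -1

    def apply_pulse(self, current_ma, threshold_ma=1.0):
        # A pulse above threshold reverses the internal magnetization;
        # weaker pulses leave it unchanged.
        if abs(current_ma) >= threshold_ma:
            self.magnetization = 1 if current_ma > 0 else -1

    @property
    def hall_conductance(self):
        # Sign of the quantized conductance tracks the magnetization.
        return self.magnetization * E2_OVER_H

dev = ToyQAH()
before = dev.hall_conductance
dev.apply_pulse(-2.0)             # reverse-direction pulse
after = dev.hall_conductance
print(before, after)
```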
    “The previous method to switch the direction of electron flow relied on an external magnet to alter the material’s magnetism, but using magnets in electronic devices is not ideal,” said Chao-Xing Liu, professor of physics at Penn State and co-corresponding author of the paper. “Bulky magnets are not practical for small devices like smartphones, and an electronic switch is typically much faster than a magnetic switch. In this work, we found a convenient electronic method to change the direction of electron flow.”
    The researchers previously optimized the QAH insulator so that they could take advantage of a physical mechanism in the system to control its internal magnetism.