More stories

  • Claims AI can boost workplace diversity are 'spurious and dangerous'

    New research highlights a growing market in AI-powered recruitment tools, used to process high volumes of job applicants, that claim to bypass human bias and remove discrimination from hiring. These AI tools reduce race and gender to trivial data points and often rely on personality analysis that is “automated pseudoscience,” according to Cambridge researchers. The academics have also teamed up with computing students to debunk the use of AI in recruitment by building a version of the kind of software increasingly used by HR teams. It demonstrates how random changes in clothing or lighting give radically different personality readings that could prove make-or-break for a generation of job seekers.
    Recent years have seen the emergence of AI tools marketed as an answer to lack of diversity in the workforce, from use of chatbots and CV scrapers to line up prospective candidates, through to analysis software for video interviews.
    Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns and even facial micro-expressions to assess huge pools of job applicants for the right personality type and “culture fit.”
    However, in a new report published in Philosophy and Technology, researchers from Cambridge’s Centre for Gender Studies argue these claims make some uses of AI in hiring little better than an “automated pseudoscience” reminiscent of physiognomy or phrenology: the discredited beliefs that personality can be deduced from facial features and skull shape.
    They say it is a dangerous example of “technosolutionism”: turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.
    In fact, the researchers have worked with a team of Cambridge computer science undergraduates to debunk these new hiring techniques by building an AI tool modelled on the technology, available at: https://personal-ambiguator-frontend.vercel.app/
    The ‘Personality Machine’ demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give radically different personality readings — and so could make the difference between rejection and progression for a generation of job seekers vying for graduate positions.
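    To make that failure mode concrete, here is a purely hypothetical sketch (not the Cambridge team's tool; the "model" below is random stand-in weights) of how a pixel-based personality scorer reacts to a lighting change that has nothing to do with the person being assessed:

    ```python
    # Hypothetical illustration: a toy "personality scorer" that maps raw image
    # pixels to Big Five trait scores with a fixed linear model. Changing only
    # the lighting (a uniform brightness shift) moves the scores, mirroring the
    # Personality Machine's point that such readings react to irrelevant features.
    import numpy as np

    rng = np.random.default_rng(0)
    TRAITS = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]

    # Stand-in for a trained model: random weights over a 64x64 grayscale image.
    weights = rng.normal(size=(len(TRAITS), 64 * 64))

    def personality_scores(image: np.ndarray) -> dict:
        """Return pseudo 'trait scores' in [0, 1] for a 64x64 grayscale image."""
        logits = weights @ image.ravel()
        scores = 1.0 / (1.0 + np.exp(-logits / 30.0))   # squash to [0, 1]
        return dict(zip(TRAITS, np.round(scores, 3)))

    face = rng.uniform(0.0, 1.0, size=(64, 64))          # stand-in "interview frame"
    brighter = np.clip(face + 0.1, 0.0, 1.0)             # same face, brighter lighting

    print("original lighting:", personality_scores(face))
    print("brighter lighting:", personality_scores(brighter))
    # The two score sets differ even though nothing about the person changed.
    ```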

  • Self-teaching AI uses pathology images to find similar cases, diagnose rare diseases

    Rare diseases are often difficult to diagnose and predicting the best course of treatment can be challenging for clinicians. Investigators from the Mahmood Lab at Brigham and Women’s Hospital, a founding member of the Mass General Brigham healthcare system, have developed a deep learning algorithm that can teach itself to learn features which can then be used to find similar cases in large pathology image repositories. Known as SISH (Self-Supervised Image search for Histology), the new tool acts like a search engine for pathology images and has many potential applications, including identifying rare diseases and helping clinicians determine which patients are likely to respond to similar therapies. A paper introducing the self-teaching algorithm is published in Nature Biomedical Engineering.
    “We show that our system can assist with the diagnosis of rare diseases and find cases with similar morphologic patterns without the need for manual annotations and large datasets for supervised training,” said senior author Faisal Mahmood, PhD, in the Brigham’s Department of Pathology. “This system has the potential to improve pathology training, disease subtyping, tumor identification, and rare morphology identification.”
    Modern electronic databases can store an immense number of digital records and reference images, particularly in pathology through whole slide images (WSIs). However, the gigapixel size of each individual WSI and the ever-increasing number of images in large repositories mean that search and retrieval of WSIs can be slow and complicated. As a result, scalability remains a pertinent roadblock for efficient use.
    To solve this issue, researchers at the Brigham developed SISH, which teaches itself to learn feature representations that can be used to find cases with analogous features in pathology at a constant speed, regardless of the size of the database.
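    A minimal sketch of the general retrieval idea, using assumed embeddings and a locality-sensitive-hash index rather than the authors' actual pipeline, shows why lookup cost can stay nearly flat as the repository grows:

    ```python
    # Illustrative sketch of search-by-embedding with a hashed index. The toy
    # encoder, embedding size, and hash length below are assumptions, not SISH.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(1)
    DIM, BITS = 128, 8

    def encode(slide_features: np.ndarray) -> np.ndarray:
        """Stand-in for a self-supervised encoder: summarize a slide as a unit vector."""
        return slide_features / (np.linalg.norm(slide_features) + 1e-9)

    projection = rng.normal(size=(BITS, DIM))            # random hyperplanes (LSH-style)

    def hash_code(embedding: np.ndarray) -> int:
        bits = (projection @ embedding) > 0               # sign pattern -> BITS-bit code
        return int("".join("1" if b else "0" for b in bits), 2)

    # Build index: hash code -> list of (case_id, embedding)
    index = defaultdict(list)
    database = {f"case_{i}": encode(rng.normal(size=DIM)) for i in range(10_000)}
    for case_id, emb in database.items():
        index[hash_code(emb)].append((case_id, emb))

    def retrieve(query_features: np.ndarray, k: int = 5):
        """Look up only the query's hash bucket, then rank by cosine similarity."""
        q = encode(query_features)
        bucket = index[hash_code(q)]
        ranked = sorted(bucket, key=lambda item: -float(q @ item[1]))
        return [case_id for case_id, _ in ranked[:k]]

    print(retrieve(rng.normal(size=DIM)))
    ```

    Because a query inspects only its own hash bucket and ranks the handful of cases inside it, the time per query depends on bucket size rather than on the total number of slides, which is the kind of behaviour the constant-speed claim describes.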
    In their study, the researchers tested the speed and ability of SISH to retrieve interpretable disease subtype information for common and rare cancers. The algorithm successfully retrieved images with speed and accuracy from a database of tens of thousands of whole slide images from over 22,000 patient cases, with over 50 different disease types and over a dozen anatomical sites. The speed of retrieval outperformed other methods in many scenarios, including disease subtype retrieval, particularly as the image database size scaled into the thousands of images. Even while the repositories expanded in size, SISH was still able to maintain a constant search speed.
    The algorithm, however, has some limitations, including a large memory requirement, limited context awareness within large tissue slides, and the fact that it is limited to a single imaging modality.
    Overall, the algorithm demonstrated the ability to efficiently retrieve images independent of repository size and in diverse datasets. It also demonstrated proficiency in diagnosis of rare disease types and the ability to serve as a search engine to recognize certain regions of images that may be relevant for diagnosis. This work may greatly inform future disease diagnosis, prognosis, and analysis.
    “As the sizes of image databases continue to grow, we hope that SISH will be useful in making identification of diseases easier,” said Mahmood. “We believe one important future direction in this area is multimodal case retrieval which involves jointly using pathology, radiology, genomic and electronic medical record data to find similar patient cases.”
    Story Source:
    Materials provided by Brigham and Women’s Hospital.

  • Novel navigation strategies for microscopic swimmers

    Autonomous optimal navigation of microswimmers is in fact possible, as researchers from the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) have shown. In contrast to the targeted navigation of boats, the motion of swimmers at the microscale is strongly disturbed by fluctuations. The researchers have now described a navigation strategy for microswimmers that does not need an external interpreter. Their findings may contribute to the understanding of transport mechanisms in the microcosm as well as to applications such as targeted drug delivery.
    Although the shortest way between two points is a straight line, it might not be the most efficient path to follow. Complex currents often affect the motion of microswimmers and make it difficult for them to reach their destination. At the same time, making use of these currents to navigate as fast as possible confers a clear evolutionary advantage. Whereas such strategies allow biological microswimmers to better access food or escape a predator, microrobots could in this way be directed to perform specific tasks.
    The optimal path in a given current can readily be determined mathematically, yet fluctuations perturb the motion of microswimmers and push them off the optimal route. Thus, they have to readjust their motion in order to account for environmental changes. This typically requires the help of an external interpreter and takes away their autonomy.
    “Thanks to evolution, some microorganisms have developed autonomous strategies that enable directed motion towards larger concentration of nutrients or light,” explains Lorenzo Piro, first author of the study. Inspired by this idea, the researchers from the Department of Living Matter Physics at the MPI-DS designed strategies that allow microswimmers to navigate optimally in a nearly autonomous way.
    Light as a guide for autonomous navigation
    When an external interpreter defines the navigation pattern, microswimmers on average follow a well-defined path. Thus, a good approach is to guide the microswimmer along that path within the current. This can be achieved autonomously via external stimuli, despite the presence of fluctuations. The principle could be applied to swimmers that respond to variations in light, such as certain algae, in which case the optimal path can simply be illuminated. Remarkably, the resulting performance is comparable to externally supervised navigation. “These new strategies can moreover conveniently be applied to more complex scenarios such as navigation on curved surfaces or in the presence of random currents,” concludes Ramin Golestanian, director at MPI-DS.
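    As a toy illustration of this principle (a minimal 2D sketch with assumed parameters, not the MPI-DS model), a noisy self-propelled swimmer that simply turns its heading up the local light gradient ends up tracking an illuminated path without any step-by-step external correction:

    ```python
    # Minimal sketch: the "optimal path" is taken to be the line y = 0, lit with a
    # Gaussian intensity profile. The swimmer's heading rotates toward brighter
    # light (phototaxis) while rotational noise constantly perturbs it.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, steps = 0.01, 2000
    speed, D_rot, coupling = 1.0, 0.2, 5.0     # self-propulsion, rotational noise, phototactic gain

    def light_gradient_y(y: float) -> float:
        """Gradient of a Gaussian light profile centred on the illuminated path y = 0."""
        return -y * np.exp(-y**2)

    x, y, theta = 0.0, 1.5, 0.0                # start away from the illuminated path
    for _ in range(steps):
        # Torque that rotates the heading toward increasing light intensity.
        torque = coupling * light_gradient_y(y) * np.cos(theta)
        theta += torque * dt + np.sqrt(2 * D_rot * dt) * rng.normal()
        x += speed * np.cos(theta) * dt
        y += speed * np.sin(theta) * dt

    print(f"final position: x = {x:.2f}, y = {y:.2f}  (y typically stays near the lit path)")
    ```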
    Possible applications of the study thus range from targeted drug delivery at the microscale to the optimal design of autonomous micromachines.
    Story Source:
    Materials provided by Max Planck Institute for Dynamics and Self-Organization.

  • Optical foundations illuminated by quantum light

    Optics, the study of light, is one of the oldest fields in physics and has never ceased to surprise researchers. Although the classical description of light as a wave phenomenon is rarely questioned, the physical origins of some optical effects are. A team of researchers at Tampere University has brought the discussion of one fundamental wave effect, the anomalous behaviour of focused light waves, into the quantum domain.
    The researchers have been able to show that quantum waves behave significantly differently from their classical counterparts and can be used to increase the precision of distance measurements. Their findings also add to the discussion on the physical origin of the anomalous focusing behaviour. The results are now published in the journal Nature Photonics.
    “Interestingly, we started with an idea based on our earlier results and set out to structure quantum light for enhanced measurement precision. However, we then realised that the underlying physics of this application also contributes to the long debate about the origins of the Gouy phase anomaly of focused light fields,” explains Robert Fickler, group leader of the Experimental Quantum Optics group at Tampere University.
    Quantum waves behave differently but point to the same origin
    Over the last decades, methods for structuring light fields down to the single-photon level have vastly matured and led to a myriad of novel findings. In addition, a better understanding of the foundations of optics has been achieved. However, the physical origin of why light behaves in such an unexpected way when going through a focus, the so-called Gouy phase anomaly, is still often debated. This is despite its widespread use and importance in optical systems. The novelty of the current study is to take this effect into the quantum domain.
    “When developing the theory to describe our experimental results, we realised (after a long debate) that the Gouy phase for quantum light is not only different than the standard one, but its origin can be linked to another quantum effect. This is just like what was speculated in an earlier work,” adds Doctoral researcher Markus Hiekkamäki, leading author of the study.
    In the quantum domain, the anomalous behaviour is sped up compared to classical light. As the Gouy phase behaviour can be used to determine the distance a beam of light has propagated, the speed-up of the quantum Gouy phase could allow for an improvement in the precision of measuring distances.
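    As a rough illustration of that claim (a simplified sketch, not the paper's derivation), take the textbook Gouy phase of a fundamental Gaussian beam and assume, as the quantum speed-up suggests, that an N-photon state accumulates that phase roughly N times over:

    ```latex
    % Simplified sketch, not the derivation from the Nature Photonics paper.
    % Classical Gouy phase of a fundamental Gaussian beam with Rayleigh range z_R:
    \[
      \psi_G(z) = \arctan\!\left(\frac{z}{z_R}\right)
    \]
    % Assumption for illustration: an N-photon state picks up this phase N times,
    \[
      \psi_G^{(N)}(z) \approx N \arctan\!\left(\frac{z}{z_R}\right),
      \qquad
      \frac{d\psi_G^{(N)}}{dz} = \frac{N\, z_R}{z^2 + z_R^2},
    \]
    % so the phase changes N times faster with propagation distance; a steeper
    % phase-versus-distance slope is what translates into finer distance resolution.
    ```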
    With this new understanding at hand, the researchers are planning to develop novel techniques to enhance their measurement abilities such that it will be possible to measure more complex beams of structured photons. The team expects that this will help them push forward the application of the observed effect, and potentially bring to light more differences between quantum and classical light fields.
    Story Source:
    Materials provided by Tampere University.

  • Sleep mode makes Energy Internet more energy efficient

    A group of scientists at Nagoya University, Japan, has developed a possible solution to one of the biggest problems of the Internet of Energy: energy efficiency. They did so by creating a controller that has a sleep mode and procures energy only when needed.
    Widespread generation of electricity from renewable energy has become necessary to combat the climate crisis. One solution to meeting society’s electrification needs is the Internet of Energy, which would operate like the information Internet, except that it would distribute energy through smart power generation, smart power consumption, smart interconnection, and cloud sharing.
    When information is sent over the Internet, it is divided into transmittable units called ‘packets’, which are tagged with their destination. The energy Internet is based on a similar concept. Information tags are added to power pulses to create units called ‘power packets’. On the basis of requests from terminals, these are then distributed over networks to where they are needed. However, one problem is that since the packets are sent sporadically, the energy supply is intermittent. Current solutions, such as storage batteries or capacitors, complicate the system and reduce its efficiency.
    An alternative solution is what is known as ‘sparse control’, where the terminal’s actuators are active for part of the time and are in sleep mode for the rest. In sleep mode, they do not consume fuel or electricity, which saves energy and reduces environmental and noise pollution. Although sparse control has been used with a single actuator, it does not necessarily perform well when multiple actuators are used. The problem of determining how to coordinate sleep and wake periods across multiple actuators is called the ‘maximum turn-off control problem’.
    Now, a Nagoya University research group, led by Professor Shun-ichi Azuma and Doctoral student Takumi Iwata of the Graduate School of Engineering, has developed a model control scheme for multiple actuators. The model has an awake mode, during which it procures and controls the necessary power packets for when they are needed, and a sleep mode. The research was published in the International Journal of Robust and Nonlinear Control.
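    The article does not give the controller’s equations, so the following is only a schematic stand-in for the sleep/awake idea (plant model, gains and thresholds are assumptions): each actuator draws power packets only while its tracking error sits outside a deadband, and sleeps otherwise.

    ```python
    # Toy sleep/awake controller for several actuators, not the Nagoya scheme.
    import numpy as np

    rng = np.random.default_rng(3)
    n_actuators, steps, dt = 3, 500, 0.01
    gain, deadband = 5.0, 0.05

    state = rng.uniform(-1.0, 1.0, size=n_actuators)   # plant states to drive to zero
    packets_requested = 0

    for _ in range(steps):
        error = state                                    # reference is 0
        awake = np.abs(error) > deadband                 # wake only where needed
        u = np.where(awake, -gain * error, 0.0)          # sleeping actuators apply no input
        packets_requested += int(awake.sum())            # each awake step consumes one packet
        state = state + u * dt + 0.01 * np.sqrt(dt) * rng.normal(size=n_actuators)

    print(f"final states: {np.round(state, 3)}")
    print(f"power packets requested: {packets_requested} of {steps * n_actuators} possible")
    ```

    In this toy run most steps are spent asleep, which is the kind of intermittent, on-demand energy use the power-packet scheme targets.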
    “We can see our research being useful in the motor control of production equipment,” explains Professor Azuma. “This research provides a control system configuration method based on the assumption that the energy supply is intermittent. It has the advantage of eliminating the need for storage batteries and capacitors. It is expected to accelerate the practical application of the power packet type energy Internet.”
    This research was supported by Japan Science and Technology Agency Emergent Research Support Program and Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology of Japan.
    Story Source:
    Materials provided by Nagoya University.

  • Superconducting hardware could scale up brain-inspired computing

    Scientists have long looked to the brain as an inspiration for designing computing systems. Some researchers have recently gone even further by making computer hardware with a brainlike structure. These “neuromorphic chips” have already shown great promise, but they have used conventional digital electronics, limiting their complexity and speed. As the chips become larger and more complex, the signals between their individual components become backed up like cars on a gridlocked highway and reduce computation to a crawl.
    Now, a team at the National Institute of Standards and Technology (NIST) has demonstrated a solution to these communication challenges that may someday allow artificial neural systems to operate 100,000 times faster than the human brain.
    The human brain is a network of about 86 billion cells called neurons, each of which can have thousands of connections (known as synapses) with its neighbors. The neurons communicate with each other using short electrical pulses called spikes to create rich, time-varying activity patterns that form the basis of cognition. In neuromorphic chips, electronic components act as artificial neurons, routing spiking signals through a brainlike network.
    Doing away with conventional electronic communication infrastructure, researchers have designed networks with tiny light sources at each neuron that broadcast optical signals to thousands of connections. This scheme can be especially energy-efficient if superconducting devices are used to detect single particles of light known as photons — the smallest possible optical signal that could be used to represent a spike.
    In a new Nature Electronics paper, NIST researchers have achieved for the first time a circuit that behaves much like a biological synapse yet uses just single photons to transmit and receive signals. Such a feat is possible using superconducting single-photon detectors. The computation in the NIST circuit occurs where a single-photon detector meets a superconducting circuit element called a Josephson junction. A Josephson junction is a sandwich of superconducting materials separated by a thin insulating film. If the current through the sandwich exceeds a certain threshold value, the Josephson junction begins to produce small voltage pulses called fluxons. Upon detecting a photon, the single-photon detector pushes the Josephson junction over this threshold and fluxons are accumulated as current in a superconducting loop. Researchers can tune the amount of current added to the loop per photon by applying a bias (an external current source powering the circuits) to one of the junctions. This is called the synaptic weight.
    This behavior is similar to that of biological synapses. The stored current serves as a form of short-term memory, as it provides a record of how many times the neuron produced a spike in the near past. The duration of this memory is set by the time it takes for the electric current to decay in the superconducting loops, which the NIST team demonstrated can vary from hundreds of nanoseconds to milliseconds, and likely beyond. This means the hardware could be matched to problems occurring at many different time scales — from high-speed industrial control systems to more leisurely conversations with humans. The ability to set different weights by changing the bias to the Josephson junctions permits a longer-term memory that can be used to make the networks programmable so that the same network could solve many different problems.
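    A rough behavioral sketch of that description (illustrative numbers, not NIST’s circuit parameters): each detected photon adds a bias-controlled increment of current to a storage loop, and the stored current decays with a chosen time constant, giving the short-term memory described above.

    ```python
    # Behavioral model, not a circuit simulation, of the single-photon synapse.
    import numpy as np

    def loop_current(photon_times_ns, weight, tau_ns, t_end_ns, dt_ns=1.0):
        """Integrate-and-decay model of the superconducting storage loop."""
        times = np.arange(0.0, t_end_ns, dt_ns)
        current = np.zeros_like(times)
        arrivals = set(np.round(np.asarray(photon_times_ns) / dt_ns).astype(int))
        i = 0.0
        for k in range(len(times)):
            if k in arrivals:
                i += weight                    # photon detected: junction fires, fluxons add current
            i *= np.exp(-dt_ns / tau_ns)       # leak: the short-term memory fades
            current[k] = i
        return times, current

    # Same photon train, two different bias settings (synaptic weights):
    photons = [50, 60, 70, 400]
    for w in (0.2, 1.0):
        t, i = loop_current(photons, weight=w, tau_ns=200.0, t_end_ns=600.0)
        print(f"weight {w}: peak stored current {i.max():.2f}, value at 600 ns {i[-1]:.3f}")
    ```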
    Synapses are a crucial computational component of the brain, so this demonstration of superconducting single-photon synapses is an important milestone on the path to realizing the team’s full vision of superconducting optoelectronic networks. Yet the pursuit is far from complete. The team’s next milestone will be to combine these synapses with on-chip sources of light to demonstrate full superconducting optoelectronic neurons.
    “We could use what we’ve demonstrated here to solve computational problems, but the scale would be limited,” NIST project leader Jeff Shainline said. “Our next goal is to combine this advance in superconducting electronics with semiconductor light sources. That will allow us to achieve communication between many more elements and solve large, consequential problems.”
    The team has already demonstrated light sources that could be used in a full system, but further work is required to integrate all the components on a single chip. The synapses themselves could be improved by using detector materials that operate at higher temperatures than the present system, and the team is also exploring techniques to implement synaptic weighting in larger-scale neuromorphic chips.
    The work was funded in part by the Defense Advanced Research Projects Agency.
    Story Source:
    Materials provided by National Institute of Standards and Technology (NIST).

  • Repurposing existing drugs to fight new COVID-19 variants

    MSU researchers are using big data and AI to identify existing drugs that could be used to treat new COVID-19 variants.
    Finding new ways to treat the novel coronavirus and its ever-changing variants has been a challenge for researchers, especially when the traditional drug development and discovery process can take years. A Michigan State University researcher and his team are taking a high-tech approach to determine whether drugs already on the market can pull double duty in treating new COVID-19 variants.
    “The COVID-19 virus is a challenge because it continues to evolve,” said Bin Chen, an associate professor in the College of Human Medicine. “By using artificial intelligence and really large data sets, we can repurpose old drugs for new uses.”
    Chen built an international team of researchers with expertise on topics ranging from biology to computer science to tackle this challenge. First, Chen and his team turned to publicly available databases to mine for the unique coronavirus gene expression signatures from 1,700 host transcriptomic profiles that came from patient tissues, cell cultures and mouse models. These signatures revealed the biology shared by COVID-19 and its variants.
    With the virus’s signature and knowing which genes need to be suppressed and which genes need to be activated, the team was able to use a computer program to screen a drug library consisting of FDA-approved or investigational drugs to find candidates that could correct the expression of signature genes and further inhibit the coronavirus from replicating. Chen and his team discovered one novel candidate, IMD-0354, a drug that passed phase I clinical trials for the treatment of atopic dermatitis. A group in Korea later observed that it was 90-fold more effective against six COVID-19 variants than remdesivir, the first drug approved to treat COVID-19. The team further found that IMD-0354 inhibited the virus from copying itself by boosting the immune response pathways in the host cells. Based on the information learned, the researchers studied a prodrug of IMD-0354 called IMD-1041. A prodrug is an inactive substance that is metabolized within the body to create an active drug.
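    The screen described above looks for compounds whose expression changes run opposite to the disease signature. A minimal, hypothetical sketch of that kind of reversal scoring (gene names, profiles and the scoring rule below are illustrative, not the MSU pipeline):

    ```python
    # Toy signature-reversal screen: drugs whose expression changes run opposite
    # to the disease signature score best.
    import numpy as np

    rng = np.random.default_rng(4)
    genes = [f"gene_{i}" for i in range(200)]

    # Disease signature: +1 for genes up-regulated by infection, -1 for down-regulated.
    disease_signature = rng.choice([-1.0, 1.0], size=len(genes))

    # Drug library: per-gene expression change induced by each compound.
    drug_profiles = {f"drug_{j}": rng.normal(size=len(genes)) for j in range(50)}
    # Plant one strongly reversing compound so the ranking has a clear winner.
    drug_profiles["drug_reverser"] = -disease_signature + 0.2 * rng.normal(size=len(genes))

    def reversal_score(profile: np.ndarray) -> float:
        """More negative correlation with the disease signature = stronger reversal."""
        return float(np.corrcoef(profile, disease_signature)[0, 1])

    ranked = sorted(drug_profiles, key=lambda d: reversal_score(drug_profiles[d]))
    print("top reversal candidates:", ranked[:3])
    ```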
    “IMD-1041 is even more promising as it is orally available and has been investigated for chronic obstructive pulmonary disease, a group of lung diseases that block airflow and make it difficult to breathe,” Chen said. “Because the structure of IMD-1041 is undisclosed, we are developing a new artificial intelligence platform to design novel compounds that hopefully could be tested and evaluated in more advanced animal models.”
    The research was published in the journal iScience.
    This project was led by two senior postdoctoral scholars in the Chen lab: Jing Xing, who recently became a young investigator at the Chinese Academy of Sciences, and Rama Shankar, with support from researchers at Institut Pasteur Korea, the Shanghai Institute of Materia Medica, the University of Texas Medical Branch, Spectrum Health in Grand Rapids and Stanford University.
    Story Source:
    Materials provided by Michigan State University. Original written by Emilie Lorditch.

  • Zooming in on the signals of cancer

    This year, about 240,000 people in the U.S. will discover they have lung cancer. Some 200,000 of them will be diagnosed with non-small-cell lung cancer; cancer overall is the second leading cause of death in the U.S. after cardiovascular disease.
    Georgia Tech researcher Ahmet Coskun is working to improve the odds for these patients in two recently published studies that are essentially focused on understanding why and how patients respond differently to disease and treatments.
    “What we have learned is connectivity and communication between molecules and between cells is what really controls everything, regarding whether or not patients get healthy, or how they will respond to drugs,” said Coskun, an assistant professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University.
    Published in the journals npj Precision Oncology and iScience, the studies detail the development of tools and techniques to deeply explore the tumor microenvironment at the subcellular level, utilizing the Coskun lab’s expertise in combining multiplex cellular imaging methods with artificial intelligence.
    “We are developing a better grasp of cellular signaling and decision making, and how it is coordinated in the tumor microenvironment, which can lead to better personalized, precision treatments for these patients,” said Coskun, who is keenly interested in why some patients respond to groundbreaking immunotherapy drugs, and some don’t.
    With that in mind, his team developed SpatialVizScore, a new method described in npj Precision Oncology for deeply studying tumor immunology in cancer tissues and helping to identify which patients are more likely to respond to an immunotherapy. It is a significant upgrade to Immunoscore, the current standard methodology used by cancer physicians and researchers.
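    The excerpt does not spell out how SpatialVizScore is computed, so the following is only a generic, hypothetical stand-in for a spatial immune-infiltration score derived from cell coordinates in a tissue image:

    ```python
    # Hypothetical spatial score: the fraction of tumor cells that have at least
    # one immune cell within a fixed radius. Coordinates and radius are illustrative.
    import numpy as np

    rng = np.random.default_rng(5)
    tumor_xy = rng.uniform(0, 1000, size=(500, 2))     # cell positions in microns
    immune_xy = rng.uniform(0, 1000, size=(200, 2))

    def infiltration_score(tumor_xy, immune_xy, radius_um=50.0) -> float:
        """Fraction of tumor cells with an immune neighbour within radius_um."""
        d = np.linalg.norm(tumor_xy[:, None, :] - immune_xy[None, :, :], axis=-1)
        has_neighbour = (d <= radius_um).any(axis=1)
        return float(has_neighbour.mean())

    print(f"toy infiltration score: {infiltration_score(tumor_xy, immune_xy):.2f}")
    ```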