More stories

  • Modified virtual reality tech can measure brain activity

    Researchers have modified a commercial virtual reality headset, giving it the ability to measure brain activity and examine how we react to hints, stressors and other outside forces.
    The research team at The University of Texas at Austin created a noninvasive electroencephalography (EEG) sensor and installed it in a Meta VR headset that can be worn comfortably for long periods. The EEG measures the brain’s electrical activity during immersive VR interactions.
    The device could be used in many ways, from helping people with anxiety, to measuring the attention or mental stress of aviators using a flight simulator, to giving a human the chance to see through the eyes of a robot.
    “Virtual reality is so much more immersive than just doing something on a big screen,” said Nanshu Lu, a professor in the Cockrell School of Engineering’s Department of Aerospace Engineering and Engineering Mechanics who led the research. “It gives the user a more realistic experience, and our technology enables us to get better measurements of how the brain is reacting to that environment.”
    The research is published in Soft Science.
    The pairing of VR and EEG sensors has made its way into the commercial sphere already. However, the devices that exist today are costly, and the researchers say their electrodes are more comfortable for the user, extending the potential wearing time and opening up additional applications.
    The best EEG devices today consist of a cap covered in electrodes, but a cap does not work well under a VR headset. Individual electrodes also struggle to get a strong reading because hair blocks them from making contact with the scalp. The most popular electrodes are rigid and comb-shaped, pushing through the hair to reach the skin, an uncomfortable experience for the user.

  • How good is that AI-penned radiology report?

    AI tools that quickly and accurately create detailed narrative reports of a patient’s CT scan or X-ray can greatly ease the workload of busy radiologists.
    Instead of merely identifying the presence or absence of abnormalities on an image, these AI reports convey complex diagnostic information, detailed descriptions, nuanced findings, and appropriate degrees of uncertainty. In short, they mirror how human radiologists describe what they see on a scan.
    Several AI models capable of generating detailed narrative reports have begun to appear on the scene. With them have come automated scoring systems that periodically assess these tools to help inform their development and augment their performance.
    So how well do the current systems gauge an AI model’s radiology performance?
    The answer is good but not great, according to a new study by researchers at Harvard Medical School published Aug. 3 in the journal Patterns.
    Ensuring that scoring systems are reliable is critical for AI tools to continue to improve and for clinicians to trust them, the researchers said, but the metrics tested in the study failed to reliably identify clinical errors in the AI reports, some of them significant. The finding, the researchers said, highlights an urgent need for improvement and the importance of designing high-fidelity scoring systems that faithfully and accurately monitor tool performance.
    The team tested various scoring metrics on AI-generated narrative reports. The researchers also asked six human radiologists to read the AI-generated reports.
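    To make the idea of automated scoring concrete, here is a minimal sketch of a token-overlap metric in the general style of such systems; it is a toy example for illustration, not one of the specific metrics evaluated in the study, and the sample reports are invented.

        # Toy token-overlap score between an AI-generated report and a reference
        # report written by a radiologist. Real evaluation metrics are more
        # sophisticated; this sketch only illustrates why surface similarity can
        # overlook a clinically significant error.
        from collections import Counter

        def token_overlap_f1(generated: str, reference: str) -> float:
            gen = Counter(generated.lower().split())
            ref = Counter(reference.lower().split())
            overlap = sum((gen & ref).values())  # tokens the two reports share
            if overlap == 0:
                return 0.0
            precision = overlap / sum(gen.values())
            recall = overlap / sum(ref.values())
            return 2 * precision * recall / (precision + recall)

        # The generated report flips a critical negation yet still scores highly.
        reference = "No evidence of pneumothorax. Mild cardiomegaly is present."
        generated = "There is evidence of pneumothorax. Mild cardiomegaly is present."
        print(round(token_overlap_f1(generated, reference), 3))  # ~0.82 despite the error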

  • Deep learning for new protein design

    The key to understanding proteins — such as those that govern cancer, COVID-19, and other diseases — is quite simple. Identify their chemical structure and find which other proteins can bind to them. But there’s a catch.
    “The search space for proteins is enormous,” said Brian Coventry, a research scientist with the Institute for Protein Design, University of Washington and The Howard Hughes Medical Institute.
    A protein studied by his lab is typically made of 65 amino acids, and with 20 different amino acid choices at each position, there are 20 to the 65th power possible sequences, a number larger than the estimated number of atoms in the universe.
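    For scale, the arithmetic is straightforward to check (an illustrative back-of-the-envelope calculation, not code from the study):

        # 20 amino acid choices at each of 65 positions, compared with a commonly
        # cited order-of-magnitude estimate for the number of atoms in the universe.
        sequence_space = 20 ** 65
        atoms_in_universe = 10 ** 80
        print(f"{sequence_space:.2e}")              # ~3.69e+84 possible sequences
        print(sequence_space > atoms_in_universe)   # True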
    Coventry is the co-author of a study published in May 2023 in the journal Nature Communications.
    In it, his team used deep learning methods to augment existing energy-based physical models in ‘de novo’, or from-scratch, computational protein design, resulting in a 10-fold increase in lab-verified success rates for binding a designed protein to its target protein.
    “We showed that you can have a significantly improved pipeline by incorporating deep learning methods to evaluate the quality of the interfaces where hydrogen bonds form or from hydrophobic interactions,” said study co-author Nathaniel Bennett, a post-doctoral scholar at the Institute for Protein Design, University of Washington.
    “This is as opposed to trying to exactly enumerate all of these energies by themselves,” he added.

  • Denial of service threats detected thanks to asymmetric behavior in network traffic

    Scientists have developed a better way to recognize a common internet attack, improving detection by 90 percent compared to current methods.
    The new technique developed by computer scientists at the Department of Energy’s Pacific Northwest National Laboratory works by keeping a watchful eye over ever-changing traffic patterns on the internet. The findings were presented on August 2 by PNNL scientist Omer Subasi at the IEEE International Conference on Cyber Security and Resilience, where the manuscript was recognized as the best research paper presented at the meeting.
    The scientists modified the playbook most commonly used to detect denial-of-service attacks, where perpetrators try to shut down a website by bombarding it with requests. Motives vary: Attackers might hold a website for ransom, or their aim might be to disrupt businesses or users.
    Many systems try to detect such attacks by relying on a raw number called a threshold. If the number of users trying to access a site rises above that number, an attack is considered likely, and defensive measures are triggered. But relying on a threshold can leave systems vulnerable.
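    To make that concrete, the sketch below contrasts a naive fixed-threshold check with a simple traffic-asymmetry check. The request counts, cutoff values, and packet data are hypothetical, and the asymmetry ratio is only an illustrative stand-in for the idea in the headline, not the detection method developed at PNNL.

        # A naive fixed threshold (as described above) versus an illustrative
        # asymmetry check. Neither is the PNNL algorithm; both are toy examples.
        THRESHOLD = 10_000  # hypothetical requests-per-minute cutoff

        def threshold_alert(requests_per_minute):
            # Flags any surge, whether it is an attack or a Super Bowl-sized crowd.
            return requests_per_minute > THRESHOLD

        def asymmetry_alert(packets, max_ratio=50.0):
            # packets: (source_ip, destination_ip) pairs seen in one time window.
            # A legitimate surge tends to involve many destinations and two-way
            # exchanges; a flood aimed at one target skews toward a single destination.
            sources = {src for src, _ in packets}
            destinations = {dst for _, dst in packets}
            ratio = len(sources) / max(len(destinations), 1)
            return ratio > max_ratio

        # 5,000 packets from many distinct sources, all aimed at one server:
        window = [(f"10.0.{i % 256}.{i % 100}", "192.0.2.1") for i in range(5000)]
        print(threshold_alert(len(window)))  # False -- the raw count stays under the threshold
        print(asymmetry_alert(window))       # True  -- the traffic is heavily one-sided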
    “A threshold just doesn’t offer much insight or information about what is really going on in your system,” said Subasi. “A simple threshold can easily miss actual attacks, with serious consequences, and the defender may not even be aware of what’s happening.”
    A threshold can also create false alarms that have serious consequences themselves. False positives can force defenders to take a site offline and bring legitimate traffic to a standstill — effectively doing what a real denial-of-service attack, also known as a DoS attack, aims to do.
    “It’s not enough to detect high-volume traffic. You need to understand that traffic, which is constantly evolving over time,” said Subasi. “Your network needs to be able to differentiate between an attack and a harmless event where traffic suddenly surges, like the Super Bowl. The behavior is almost identical.”
    As principal investigator Kevin Barker said: “You don’t want to throttle the network yourself when there isn’t an attack underway.”

  • Current takes a surprising path in quantum material

    Cornell researchers used magnetic imaging to obtain the first direct visualization of how electrons flow in a special type of insulator, and in doing so discovered that the transport current moves through the interior of the material rather than along its edges, as scientists had long assumed.
    The finding provides new insights into the electron behavior in so-called quantum anomalous Hall insulators and should help settle a decades-long debate about how current flows in more general quantum Hall insulators. These insights will inform the development of topological materials for next-generation quantum devices.
    The team’s paper, “Direct Visualization of Electronic Transport in a Quantum Anomalous Hall Insulator,” published Aug. 3 in Nature Materials. The lead author is Matt Ferguson, Ph.D. ’22, currently a postdoctoral researcher at the Max Planck Institute for Chemical Physics of Solids in Germany.
    The project, led by Katja Nowack, assistant professor of physics in the College of Arts and Sciences and the paper’s senior author, has its origins in what’s known as the quantum Hall effect. First discovered in 1980, this effect results when a magnetic field is applied to a specific material to trigger an unusual phenomenon: The interior of the bulk sample becomes an insulator while an electrical current moves in a single direction along the outer edge. The Hall resistance is quantized, or restricted, to values defined by fundamental universal constants, while the longitudinal resistance drops to zero.
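    In standard notation, the hallmark of the effect is a quantized Hall resistance together with a vanishing longitudinal resistance,

        R_{xy} = \frac{h}{\nu e^{2}}, \qquad R_{xx} = 0,

    where h is Planck’s constant, e the electron charge, and \nu an integer (the filling factor).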
    A quantum anomalous Hall insulator, first discovered in 2013, achieves the same effect by using a material that is magnetized. Quantization still occurs and longitudinal resistance vanishes, and the electrons speed along the edge without dissipating energy, somewhat like a superconductor.
    At least that is the popular conception.
    “The picture where the current flows along the edges can really nicely explain how you get that quantization. But it turns out, it’s not the only picture that can explain quantization,” Nowack said. “This edge picture has really been the dominant one since the spectacular rise of topological insulators starting in the early 2000s. The intricacies of the local voltages and local currents have largely been forgotten. In reality, these can be much more complicated than the edge picture suggests.”
    Only a handful of materials are known to be quantum anomalous Hall insulators. For their new work, Nowack’s group focused on chromium-doped bismuth antimony telluride — the same compound in which the quantum anomalous Hall effect was first observed a decade ago.

  • Social media algorithms exploit how humans learn from their peers

    In prehistoric societies, humans tended to learn from members of their ingroup or from more prestigious individuals, as this information was more likely to be reliable and result in group success. However, with the advent of diverse and complex modern communities — and especially on social media — these biases become less effective. For example, a person we are connected to online might not necessarily be trustworthy, and people can easily feign prestige on social media. In a review published in the journal Trends in Cognitive Sciences on August 3, a group of social scientists describe how the functions of social media algorithms are misaligned with human social instincts meant to foster cooperation, which can lead to large-scale polarization and misinformation.
    “Several user surveys now both on Twitter and Facebook suggest most users are exhausted by the political content they see. A lot of users are unhappy, and there’s a lot of reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation,” says first author William Brady, a social psychologist in the Kellogg School of Management at Northwestern.
    “We wanted to put out a systematic review that’s trying to help understand how human psychology and algorithms interact in ways that can have these consequences,” says Brady. “One of the things that this review brings to the table is a social learning perspective. As social psychologists, we’re constantly studying how we can learn from others. This framework is fundamentally important if we want to understand how algorithms influence our social interactions.”
    Humans are biased to learn from others in a way that typically promotes cooperation and collective problem-solving, which is why they tend to learn more from individuals they perceive as a part of their ingroup and those they perceive to be prestigious. In addition, when learning biases were first evolving, morally and emotionally charged information was important to prioritize, as this information would be more likely to be relevant to enforcing group norms and ensuring collective survival.
    In contrast, algorithms are usually selecting information that boosts user engagement in order to increase advertising revenue. This means algorithms amplify the very information humans are biased to learn from, and they can oversaturate social media feeds with what the researchers call Prestigious, Ingroup, Moral, and Emotional (PRIME) information, regardless of the content’s accuracy or representativeness of a group’s opinions. As a result, extreme political content or controversial topics are more likely to be amplified, and if users are not exposed to outside opinions, they might find themselves with a false understanding of the majority opinion of different groups.
    “It’s not that the algorithm is designed to disrupt cooperation,” says Brady. “It’s just that its goals are different. And in practice, when you put those functions together, you end up with some of these potentially negative effects.”
    To address this problem, the research group first proposes that social media users need to be more aware of how algorithms work and why certain content shows up on their feed. Social media companies don’t typically disclose the full details of how their algorithms select for content, but one start might be offering explainers for why a user is being shown a particular post. For example, is it because the user’s friends are engaging with the content or because the content is generally popular? Outside of social media companies, the research team is developing their own interventions to teach people how to be more conscious consumers of social media.
    In addition, the researchers propose that social media companies could take steps to change their algorithms, so they are more effective at fostering community. Instead of solely favoring PRIME information, algorithms could set a limit on how much PRIME information they amplify and prioritize presenting users with a diverse set of content. These changes could continue to amplify engaging information while preventing more polarizing or politically extreme content from becoming overrepresented in feeds.
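    As a rough sketch of that kind of adjustment, the snippet below re-ranks a hypothetical feed while capping how many PRIME-tagged items it admits. The item fields, engagement scores, and cap value are invented for illustration and are not taken from the review.

        # Re-rank a candidate feed by predicted engagement, but admit at most
        # `prime_cap` PRIME-tagged items (prestigious, ingroup, moral, emotional).
        # All field names and the cap value are hypothetical.
        def rank_with_prime_cap(items, feed_size=10, prime_cap=3):
            # items: dicts with keys "engagement" (float) and "is_prime" (bool)
            ranked = sorted(items, key=lambda it: it["engagement"], reverse=True)
            feed, prime_used = [], 0
            for item in ranked:
                if item["is_prime"]:
                    if prime_used >= prime_cap:
                        continue  # cap reached: skip further PRIME items
                    prime_used += 1
                feed.append(item)
                if len(feed) == feed_size:
                    break
            return feed

        # Example: a feed dominated by high-engagement PRIME posts gets diversified.
        candidates = [{"engagement": 0.9 - 0.01 * i, "is_prime": i % 2 == 0} for i in range(20)]
        feed = rank_with_prime_cap(candidates)
        print(sum(item["is_prime"] for item in feed), "of", len(feed), "items are PRIME")  # 3 of 10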
    “As researchers we understand the tension that companies face when it comes to making these changes and their bottom line. That’s why we actually think these changes could theoretically still maintain engagement while also disallowing this overrepresentation of PRIME information,” says Brady. “User experience might actually improve by doing some of this.”

  • Sensing and controlling microscopic spin density in materials

    Electronic devices typically use the charge of electrons, but spin — their other degree of freedom — is starting to be exploited. Spin defects make crystalline materials highly useful for quantum-based devices such as ultrasensitive quantum sensors, quantum memory devices, or systems for simulating the physics of quantum effects. Varying the spin density in semiconductors can lead to new properties in a material — something researchers have long wanted to explore — but this density is usually fleeting and elusive, thus hard to measure and control locally.
    Now, a team of researchers at MIT and elsewhere has found a way to tune the spin density in diamond, changing it by a factor of two, by applying an external laser or microwave beam. The finding, reported this week in the journal PNAS, could open up many new possibilities for advanced quantum devices, the authors say. The paper is a collaboration between current and former students of professors Paola Cappellaro and Ju Li at MIT, and collaborators at Politecnico di Milano. The first author of the paper, Guoqing Wang PhD ’23, worked on his PhD thesis in Cappellaro’s lab and is now a postdoc at MIT.
    A specific type of spin defect known as a nitrogen vacancy (NV) center in diamond is one of the most widely studied systems for its potential use in a wide variety of quantum applications. The spin of NV centers is sensitive to any physical, electrical, or optical disturbance, making them potentially highly sensitive detectors. “Solid-state spin defects are one of the most promising quantum platforms,” Wang says, partly because they can work under ambient, room-temperature conditions. Many other quantum systems require ultracold or other specialized environments.
    “The nanoscale sensing capabilities of NV centers makes them promising for probing the dynamics in their spin environment, manifesting rich quantum many body physics yet to be understood,” Wang adds. “A major spin defect in the environment, called P1 center, can usually be 10 to 100 times more populous than the NV center and thus can have stronger interactions, making them ideal for studying many-body physics.”
    But to tune their interactions, scientists need to be able to change the spin density, something that had previously seldom been achieved. With this new approach, Wang says, “We can tune the spin density so it provides a potential knob to actually tune such a system. That’s the key novelty of our work.”
    Such a tunable system could provide more flexible ways of studying quantum hydrodynamics, Wang says. More immediately, the new process can be applied to some existing nanoscale quantum-sensing devices as a way to improve their sensitivity.
    Li, who holds a joint appointment in MIT’s departments of Nuclear Science and Engineering and Materials Science and Engineering, explains that today’s computers and information processing systems are all based on the control and detection of electrical charges, but some innovative devices are beginning to make use of the property called spin. The semiconductor company Intel, for example, has been experimenting with new kinds of transistors that couple spin and charge, potentially opening a path to devices based on spintronics.

  • Robots cause company profits to fall — at least at first

    Researchers have found that robots can have a ‘U-shaped’ effect on profits: margins fall at first, before eventually rising again.
    The researchers, from the University of Cambridge, studied industry data from the UK and 24 other European countries between 1995 and 2017, and found that at low levels of adoption, robots have a negative effect on profit margins. But at higher levels of adoption, robots can help increase profits.
    According to the researchers, this U-shaped phenomenon is due to the relationship between reducing costs, developing new processes and innovating new products. While many companies first adopt robotic technologies to decrease costs, this ‘process innovation’ can be easily copied by competitors, so at low levels of robot adoption, companies are focused on their competitors rather than on developing new products. However, as levels of adoption increase and robots are fully integrated into a company’s processes, the technologies can be used to increase revenue by innovating new products.
    In other words, firms using robots are likely to focus initially on streamlining their processes before shifting their emphasis to product innovation, which gives them greater market power via the ability to differentiate from their competitors. The results are reported in the journal IEEE Transactions on Engineering Management.
    Robots have been widely used in industry since the 1980s, especially in sectors where they can carry out physically demanding, repetitive tasks, such as automotive assembly. In the decades since, the rate of robot adoption has increased dramatically and consistently worldwide, and the development of precise, electrically controlled robots makes them particularly useful for high-value manufacturing applications requiring greater precision, such as electronics.
    While robots have been shown to reliably raise labour productivity at an industry or country level, what has been less studied is how robots affect profit margins at a similar macro scale.
    “If you look at how the introduction of computers affected productivity, you actually see a slowdown in productivity growth in the 1970s and early 1980s, before productivity starts to rise again, which it did until the financial crisis of 2008,” said co-author Professor Chander Velu from Cambridge’s Institute for Manufacturing. “It’s interesting that a tool meant to increase productivity had the opposite effect, at least at first. We wanted to know whether there is a similar pattern with robotics.”
    “We wanted to know whether companies were using robots to improve processes within the firm, rather than improve the whole business model,” said co-author Dr Philip Chen. “Profit margin can be a useful way to analyse this.”