More stories

  • Cybersecurity education varies widely in US

    Cybersecurity programs vary dramatically across the country, a review has found. The authors argue that program leaders should work with professional societies to make sure graduates are well trained to meet industry needs in a fast-changing field.
    In the review, published in the Proceedings of the Association for Computing Machinery’s Technical Symposium on Computer Science Education, a Washington State University-led research team found a shortage of research in evaluating the instructional approaches being used to teach cybersecurity. The authors also contend that programs could benefit from increasing their use of educational and instructional tools and theories.
    “There is a huge variation from school to school on how much cybersecurity content is required for students to take,” said co-author Assefaw Gebremedhin, associate professor in the WSU School of Electrical Engineering and Computer Science and leader of the U.S. Department of Defense-funded VICEROY Northwest Institute for Cybersecurity Education and Research (CySER). “We found that programs could benefit from using ideas from other fields, such as educational psychology, in which there would be a little more rigorous evaluation.”
    Cybersecurity is an increasingly important field of study because compromised data or network infrastructure can directly impact people’s privacy, livelihoods and safety. The researchers also noted that adversaries change their tactics frequently, and cybersecurity professionals must be able to respond effectively.
    As part of the study, the researchers analyzed programs at 100 institutions throughout the U.S. that are designated by the National Security Agency (NSA) as National Centers of Academic Excellence in Cybersecurity. To earn the designation, programs must meet the NSA’s requirements for educational content and quality.
    The researchers assessed factors such as the number and type of programs offered, the number of credits focused on cybersecurity courses, listed learning outcomes and lists of professional jobs available for graduates.
    They found that while the NSA designation provides requirements for the amount of cybersecurity content included in curricula, the center of excellence institutions vary widely in the types of programs they offer and how many cybersecurity-specific courses they provide. Half of the programs offered bachelor’s degrees, while other programs offered certificates, associate degrees, minors or concentration tracks.

    The most common type of program offered was a certificate, and most of the programs were housed within engineering, computer science, or technology schools or departments. The researchers also found a mismatch between the skill levels industry professionals expect and those that program graduates actually have.
    The researchers hope the work will serve as a benchmark to compare programs across the U.S. and as a roadmap toward better meeting industry needs.
    With funding from the state of Washington, WSU began offering a cybersecurity degree last year. The oldest cybersecurity programs are only about 25 years old, said Gebremedhin, and have traditionally trained students to become information technology professionals or system administrators.
    “In terms of maturity, in being a discipline as a separate degree program, cybersecurity is relatively new, even for computer science,” said Gebremedhin.
    The field is also constantly changing.
    “In cyber operations, you want to be on offense,” he said. “If you are to defend, then you need to stay ahead of your attacker, and if they keep changing, you have to be changing at a faster rate.”

  • Caterbot? Robatapillar? It crawls with ease through loops and bends

    Engineers at Princeton and North Carolina State University have combined the ancient art of paper folding with modern materials science to create a soft robot that bends and twists through mazes with ease.
    Soft robots can be challenging to guide because steering equipment often increases the robot’s rigidity and cuts its flexibility. The new design overcomes those problems by building the steering system directly into the robot’s body, said Tuo Zhao, a postdoctoral researcher at Princeton.
    In an article published May 6 in the journal PNAS, the researchers describe how they created the robot out of modular, cylindrical segments. The segments, which can operate independently or join to form a longer unit, all contribute to the robot’s ability to move and steer. The new system allows the flexible robot to crawl forward and reverse, pick up cargo and assemble into longer formations.
    “The concept of modular soft robots can provide insight into future soft robots that can grow, repair, and develop new functions,” the authors write in their article.
    Zhao said the robot’s ability to assemble and split up on the move allows the system to work as a single robot or a swarm.
    “Each segment can be an individual unit, and they can communicate with each other and assemble on command,” he said. “They can separate easily, and we use magnets to connect them.”
    Zhao works in Glaucio Paulino’s lab in the Department of Civil and Environmental Engineering and the Princeton Materials Institute. Paulino, the Margareta Engman Augustine Professor of Engineering, has created a body of research that applies origami to a wide array of engineering applications from medical devices to aerospace and construction.

    “We have created a bio-inspired plug-and-play soft modular origami robot enabled by electrothermal actuation with highly bendable and adaptable heaters,” Paulino said. “This is a very promising technology with potential translation to robots that can grow, heal, and adapt on demand.”
    In this case, the researchers began by building their robot out of cylindrical segments featuring an origami form called a Kresling pattern. The pattern allows each segment to twist into a flattened disk and expand back into a cylinder. This twisting, expanding motion is the basis for the robot’s ability to crawl and change direction. By partially folding a section of the cylinder, the researchers can introduce a lateral bend in a robot segment. By combining small bends, the robot changes direction as it moves forward.
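    As a rough illustration of the twist-to-length coupling described above, the crawl can be sketched as an inchworm gait: a segment shortens as it twists, and anchoring alternates between the two ends. This is our own toy model, not the paper's kinematics; all dimensions, the linear length interpolation, and the friction assumptions are invented:

```python
# Toy kinematic sketch of a Kresling-style crawler (illustrative only).
def segment_length(twist_deg, l_max=30.0, l_min=5.0, twist_max_deg=70.0):
    """Axial length (mm) of one segment as it twists from cylinder to disk.
    Linear interpolation is a simplification of the real fold kinematics."""
    t = min(max(twist_deg / twist_max_deg, 0.0), 1.0)
    return l_max - t * (l_max - l_min)

def crawl_displacement(cycles, stroke_fraction=0.8):
    """Net forward travel, assuming friction anchors the rear end during
    expansion and the front end during contraction (inchworm gait)."""
    stroke = segment_length(0) - segment_length(70)   # full stroke per cycle
    return cycles * stroke * stroke_fraction          # discounted for slip

print(crawl_displacement(10))  # net travel in mm after 10 cycles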
    One of the most challenging aspects of the work involved developing a mechanism to control the bending and folding motions used to drive and steer the robot. Researchers at North Carolina State University developed the solution. They used two materials that shrink or expand differently when heated (liquid crystal elastomer and polyimide) and combined them into thin strips along the creases of the Kresling pattern. The researchers also installed a thin stretchable heater made of silver nanowire network along each fold. Running current through the nanowire heater heats the control strips, and the two materials’ different expansion introduces a fold in the strip. By calibrating the current and the material used in the control strips, the researchers can precisely control the folding and bending to drive the robot’s movement and steering.
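    The mismatched-expansion mechanism can be sanity-checked with the classic bimetal-strip formula attributed to Timoshenko, which predicts how strongly a heated two-layer strip curls. The material constants below are placeholder values chosen for illustration, not the actual properties of the robot's strips:

```python
# Bimetal-strip curvature estimate (Timoshenko's formula), illustrating why
# two bonded layers with different thermal expansion fold when heated.
def bilayer_curvature(alpha1, alpha2, E1, E2, t1, t2, dT):
    """Curvature (1/m) of a heated two-layer strip.
    alpha: thermal expansion coefficients (1/K); E: elastic moduli (Pa);
    t: layer thicknesses (m); dT: temperature rise (K)."""
    m = t1 / t2                      # thickness ratio
    n = E1 / E2                      # modulus ratio
    h = t1 + t2                      # total thickness
    num = 6.0 * (alpha2 - alpha1) * dT * (1 + m) ** 2
    den = h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1.0 / (m * n)))
    return num / den

# Placeholder numbers: a soft layer that contracts on heating (negative
# effective alpha, like a liquid crystal elastomer) bonded to a stiff,
# slightly expanding layer (like polyimide).
kappa = bilayer_curvature(alpha1=-2e-4, alpha2=3e-5, E1=1e6, E2=2.5e9,
                          t1=100e-6, t2=25e-6, dT=60)
print(f"curvature ~ {kappa:.1f} 1/m (bend radius ~ {1000 / kappa:.1f} mm)")
```

    Because curvature grows linearly with the temperature rise, modulating the heater current gives proportional control over the fold angle, which is consistent with the calibration approach described above.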
    “Silver nanowire is an excellent material to fabricate stretchable conductors. Stretchable conductors are building blocks for a variety of stretchable electronic devices including stretchable heaters. Here we used the stretchable heater as the actuation mechanism for the bending and folding motions,” said Yong Zhu, the Andrew A. Adams Distinguished Professor in the Department of Mechanical and Aerospace Engineering at N.C. State and one of the lead researchers.
    Shuang Wu, a postdoctoral researcher in Zhu’s lab, said the lab’s previous work used the stretchable heater for continuously bending a bilayer structure. “In this work we achieved localized, sharp folding to actuate the origami pattern. This effective actuation method can be generally applied to origami structures (with creases) for soft robotics,” Wu said.
    The researchers said that the current version of the robot has limited speed, and they are working to increase the locomotion in later generations.
    Zhao said the researchers also plan to experiment with different shapes, patterns, and instabilities to improve both the speed and the steering. Support for the research was provided in part by the National Science Foundation and the National Institutes of Health.

  • Simulated chemistry: New AI platform designs tomorrow’s cancer drugs

    Scientists at UC San Diego have developed a machine learning algorithm to simulate the time-consuming chemistry involved in the earliest phases of drug discovery, which could significantly streamline the process and open doors for never-before-seen treatments. Identifying candidate drugs for further optimization typically involves thousands of individual experiments, but the new artificial intelligence (AI) platform could potentially give the same results in a fraction of the time. The researchers used the new tool, described in Nature Communications, to synthesize 32 new drug candidates for cancer.
    The technology is part of a new but growing trend in pharmaceutical science of using AI to improve drug discovery and development.
    “A few years ago, AI was a dirty word in the pharmaceutical industry, but now the trend is definitely the opposite, with biotech startups finding it difficult to raise funds without addressing AI in their business plan,” said senior author Trey Ideker, professor in the Department of Medicine at UC San Diego School of Medicine and adjunct professor of bioengineering and computer science at the UC San Diego Jacobs School of Engineering. “AI-guided drug discovery has become a very active area in industry, but unlike the methods being developed in companies, we’re making our technology open source and accessible to anybody who wants to use it.”
    The new platform, called POLYGON, is unique among AI tools for drug discovery in that it can identify molecules with multiple targets, whereas existing drug discovery protocols prioritize single-target therapies. Multi-target drugs are of major interest to doctors and scientists because of their potential to deliver the same benefits as combination therapy, in which several different drugs are used together to treat cancer, but with fewer side effects.
    “It takes many years and millions of dollars to find and develop a new drug, especially if we’re talking about one with multiple targets,” said Ideker. “The rare few multi-target drugs we do have were discovered largely by chance, but this new technology could help take chance out of the equation and kickstart a new generation of precision medicine.”
    The researchers trained POLYGON on a database of over a million known bioactive molecules containing detailed information about their chemical properties and known interactions with protein targets. By learning from patterns found in the database, POLYGON is able to generate original chemical formulas for new candidate drugs that are likely to have certain properties, such as the ability to inhibit specific proteins.
    “Just like AI is now very good at generating original drawings and pictures, such as creating pictures of human faces based off desired properties like age or sex, POLYGON is able to generate original molecular compounds based off of desired chemical properties,” said Ideker. “In this case, instead of telling the AI how old we want our face to look, we’re telling it how we want our future drug to interact with disease proteins.”
    To put POLYGON to the test, the researchers used it to generate hundreds of candidate drugs that target various pairs of cancer-related proteins. Of these, the researchers synthesized 32 molecules that had the strongest predicted interactions with the MEK1 and mTOR proteins, a pair of cellular signaling proteins that are a promising target for cancer combination therapy. These two proteins are what scientists call synthetically lethal, which means that inhibiting both together is enough to kill cancer cells even if inhibiting one alone is not.
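    A dual-target selection step of this kind can be sketched as a maximin ranking: keep the molecules whose weaker predicted score is still strong, so every pick plausibly hits both proteins. This is our illustration of the idea, not POLYGON's actual code, and the molecule names and scores below are invented:

```python
# Hypothetical predicted binding scores (0-1) for candidate molecules
# against the two target proteins named in the study.
candidates = {
    "mol_A": {"MEK1": 0.91, "mTOR": 0.88},
    "mol_B": {"MEK1": 0.95, "mTOR": 0.42},   # strong on one target only
    "mol_C": {"MEK1": 0.77, "mTOR": 0.81},
    "mol_D": {"MEK1": 0.30, "mTOR": 0.96},
}

def dual_target_rank(preds, top_k=2):
    """Rank molecules by the minimum of their two scores (maximin),
    rewarding balanced dual-target activity over one-sided strength."""
    return sorted(preds, key=lambda m: min(preds[m].values()), reverse=True)[:top_k]

print(dual_target_rank(candidates))  # ['mol_A', 'mol_C']
```

    Note that mol_B, despite the single highest score, is rejected because its weak mTOR score makes it a poor dual-target candidate.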

    The researchers found that the drugs they synthesized had significant activity against MEK1 and mTOR, but had few off-target reactions with other proteins. This suggests that one or more of the drugs identified by POLYGON could be able to target both proteins as a cancer treatment, providing a list of choices for fine-tuning by human chemists.
    “Once you have the candidate drugs, you still need to do all the other chemistry it takes to refine those options into a single, effective treatment,” said Ideker. “We can’t and shouldn’t try to eliminate human expertise from the drug discovery pipeline, but what we can do is shorten a few steps of the process.”
    Despite this caution, the researchers are optimistic that the possibilities of AI for drug discovery are only just being explored.
    “Seeing how this concept plays out over the next decade, both in academia and in the private sector, is going to be very exciting,” said Ideker. “The possibilities are virtually endless.”
    This study was funded, in part, by the National Institutes of Health (Grants CA274502, GM103504, ES014811, CA243885, CA212456).

  • Experiment opens door for millions of qubits on one chip

    Researchers from the University of Basel and the NCCR SPIN have achieved the first controllable interaction between two hole spin qubits in a conventional silicon transistor. The breakthrough opens up the possibility of integrating millions of these qubits on a single chip using mature manufacturing processes.
    The race to build a practical quantum computer is well underway. Researchers around the world are working on a huge variety of qubit technologies. So far, there is no consensus on what type of qubit is most suitable for maximizing the potential of quantum information science.
    Qubits are the foundation of a quantum computer: they handle the processing, transfer and storage of data. To work correctly, they have to both reliably store and rapidly process information. The basis for rapid information processing is stable and fast interactions between a large number of qubits whose states can be reliably controlled from the outside.
    For a quantum computer to be practical, millions of qubits must be accommodated on a single chip. The most advanced quantum computers today have only a few hundred qubits, meaning they can only perform calculations that are already possible (and often faster) on conventional computers.
    Electrons and holes
    To solve the problem of arranging and linking thousands of qubits, researchers at the University of Basel and the NCCR SPIN rely on a type of qubit that uses the spin (intrinsic angular momentum) of an electron or a hole. A hole is essentially a missing electron in a semiconductor. Both holes and electrons possess spin, which can adopt one of two states: up or down, analogous to 0 and 1 in classical bits. Compared to an electron spin, a hole spin has the advantage that it can be entirely electrically controlled without needing additional components like micromagnets on the chip.
    As early as 2022, Basel physicists were able to show that the hole spins in an existing electronic device can be trapped and used as qubits. These “FinFETs” (fin field-effect transistors) are built into modern smartphones and are produced in widespread industrial processes. Now, a team led by Dr. Andreas Kuhlmann has succeeded for the first time in achieving a controllable interaction between two qubits within this setup.

    Fast and precise controlled spin-flip
    A quantum computer needs “quantum gates” to perform calculations. These represent operations that manipulate the qubits and couple them to each other. As the researchers report in the journal Nature Physics, they were able to couple two qubits and bring about a controlled flip of one of their spins, depending on the state of the other’s spin — known as a controlled spin-flip. “Hole spins allow us to create two-qubit gates that are both fast and high-fidelity. This principle now also makes it possible to couple a larger number of qubit pairs,” says Kuhlmann.
    The coupling of two spin qubits is based on their exchange interaction, which occurs between two indistinguishable particles that interact with each other electrostatically. Surprisingly, the exchange energy of holes is not only electrically controllable, but strongly anisotropic. This is a consequence of spin-orbit coupling, which means that the spin state of a hole is influenced by its motion through space.
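    Conceptually, the controlled spin-flip belongs to the CNOT family of two-qubit gates: the target spin flips only when the control spin is up. A minimal state-vector sketch (an abstract gate model, not a simulation of the actual device physics) shows this action:

```python
import numpy as np

# Controlled spin-flip as a 4x4 unitary on two spin qubits.
# Basis ordering: |00>, |01>, |10>, |11> (control spin written first).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def basis(i):
    """Computational basis state |i> of the two-qubit system."""
    v = np.zeros(4, dtype=complex)
    v[i] = 1.0
    return v

# Control down: target untouched. Control up: target flips.
assert np.allclose(CNOT @ basis(1), basis(1))   # |01> -> |01>
assert np.allclose(CNOT @ basis(2), basis(3))   # |10> -> |11>

# Applied to a superposed control, the gate entangles the two spins:
plus = (basis(0) + basis(2)) / np.sqrt(2)       # (|00> + |10>)/sqrt(2)
print(np.round(CNOT @ plus, 3))                 # (|00> + |11>)/sqrt(2)
```

    In the experiment this gate is realized physically through the electrically tunable exchange interaction rather than by an abstract matrix, but the logical effect on the qubit pair is the same.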
    To describe this observation in a model, experimental and theoretical physicists at the University of Basel and the NCCR SPIN combined forces. “The anisotropy makes two-qubit gates possible without the usual trade-off between speed and fidelity,” Dr. Kuhlmann says in summary.
    “Qubits based on hole spins not only leverage the tried-and-tested fabrication of silicon chips, they are also highly scalable and have proven to be fast and robust in experiments.” The study underscores that this approach has a strong chance in the race to develop a large-scale quantum computer.

  • VR may pose privacy risks for kids: A new study finds parents aren’t as worried as they should be

    New research finds that, while an increasing number of minors are using virtual reality (VR) apps, not many parents recognize the extent of the security and privacy risks that are specific to VR technologies. The study also found that few parents are taking active steps to address those security and privacy issues, such as using parental controls built into the apps.
    “In recent years we have seen an increase in the number of minors using VR apps that have social interaction elements, which increases security and privacy risks — such as unintended self-disclosures of sensitive personal information and surveillance of a user’s biometric data,” says Abhinaya S B, co-author of a paper on the work and a Ph.D. student at NC State.
    “We wanted to see how much parents know about security and privacy risks associated with these VR apps, and what they are currently doing to address those risks,” Abhinaya says. “These findings will help us identify areas where parents, technology designers, and policymakers could do more to enhance children’s security and privacy.”
    For the study, researchers conducted in-depth interviews with 20 parents who have children under the age of 18 at home who use VR apps. The interviews were designed to capture what sort of risks parents perceived regarding VR apps, what strategies the parents used to protect their children’s security and privacy in regard to VR apps, and which VR stakeholders the parents felt were responsible for protecting children who use the apps.
    “We found that parents were primarily worried about physiological development issues,” Abhinaya says. “For example, some parents were worried about VR damaging children’s eyesight or children injuring themselves while using the apps.”
    “There were also concerns that children would interact with people online who would be a bad influence on them,” says Anupam Das, co-author of the paper and an assistant professor of computer science at NC State. “In terms of privacy, there were concerns that children might reveal too much information about themselves to strangers online.”
    “We found that parents did not seem too worried about data surveillance or data collection by the VR companies and app developers; they were more worried about risks of self-disclosure in social VR apps,” Abhinaya says.

    “VR technologies capture a tremendous amount of data on user movement, which can be used to infer information ranging from a user’s height to medical conditions,” Das says.
    “VR technologies also capture a user’s voice, and there are some concerns that voice recordings could be misused,” Das says. “For example, it’s possible that voice recordings might be manipulated with generative AI tools to create fake recordings. Only one parent was concerned about potential misuse of voice recordings.”
    “To be clear, most parents were aware of the possibility of data surveillance, but the vast majority were not concerned about it,” Abhinaya says.
    When it came to risk management strategies, the study found parents were having conversations with their children about being safe and not sharing personal information online. Many parents were also sharing VR accounts with their children, so that they could monitor their children’s VR app use.
    However, very few parents were making use of parental controls that were built into the VR technologies.
    “Most parents were aware that the controls existed, they just weren’t activating them,” Abhinaya says. “In some cases, parents felt their children were more tech-savvy than themselves, and wanted to give their kids autonomy regarding VR usage. This was particularly the case for teens. But in some cases, parents didn’t make use of the controls due to technical challenges.”
    “In other words, some parents didn’t know how to properly activate the controls,” Das says. “There was also a desire for parental controls to incorporate additional features, such as a summary of what a child did while using a given app, who they interacted with, and so on.”

    The study found that parents felt they had the primary responsibility for protecting their children against risks associated with VR use. However, the parents also felt that VR companies should incorporate usable parental controls to help parents reduce risks. In addition, parents felt policymakers should stay abreast of emerging technologies to create or modify laws and regulations that protect children online. Lastly, parents felt that schools have a role to play in teaching children how to navigate these new technologies safely.
    “It is essential for parents to experience and understand VR before they let their children use it, to get a sense of the security and privacy risks VR may pose,” Das says. “However, while parents serve as the first line of defense for protecting children against these risks in VR, it is imperative for other stakeholders such as educators, developers, and policymakers to take proactive steps to ensure the comprehensive protection of children in VR environments.”
    This work was supported in part by an award from Meta Research.

  • Researchers develop new AI tool for fast and precise tissue analysis to support drug discovery and diagnostics

    A team of scientists from A*STAR’s Genome Institute of Singapore (GIS) and Bioinformatics Institute (BII) has developed a new AI software tool called “BANKSY” that automatically recognises the cell types present in a tissue, such as muscle cells, nerve cells and fat cells. Going a step beyond conventional AI tools which can group cells together into clusters if they contain similar molecules, BANKSY also considers how similar the cells’ surroundings in the tissue are. With BANKSY, researchers would be able to improve their understanding of tissue processes in diverse diseases quicker and more accurately, which can support the development of more effective diagnostics and treatments for cancer, neurological disorders and other diseases. This breakthrough research was published in the article “BANKSY unifies cell typing and tissue domain segmentation for scalable spatial omics data analysis” in Nature Genetics on 27 February 2024.
    BANKSY is adept at identifying subtly distinct cell groups in spatial molecular profiles generated from tissue samples. Moreover, BANKSY addresses the distinct but related problem of demarcating functionally distinct anatomical regions in tissue sections. For instance, it can distinguish layered structures in the human forebrain.
    Spatial molecular profiling (Spatial Omics) technologies are powerful microscopes that allow scientists to study tissues in great detail, by revealing the exact locations of individual biological molecules in cells, as well as the arrangement of cells in tissues. This helps them understand how cells come together in tissues to perform their normal physiological functions, and also how they behave (or misbehave) in diseases such as cancer, autism or infectious diseases such as COVID-19. This understanding is essential for more accurate diagnosis and tailored treatment of patients, as well as the discovery of new drugs.
    BANKSY can help biologists interpret and extract insights from the latest Spatial Omics technologies that have emerged over the past few years. Versatile, accurate, fast and scalable, BANKSY stands out from existing methods in its ability to analyse both RNA- and protein-based Spatial Omics data. Capable of handling large datasets of over two million cells, BANKSY is 10 to 1,000 times faster than the competing methods tested, and two to 60 times more scalable. As a result, the method can also be applied to other key data-processing steps, such as detecting and removing poor-quality areas of the sample, and merging samples taken from different patients for combined analysis.
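    The core idea of considering each cell's surroundings can be sketched simply: augment every cell's own expression vector with a weighted average of its spatial neighbors' expression before clustering. This is a heavy simplification of the published algorithm, and the toy data and weighting below are purely illustrative:

```python
import numpy as np

# Simplified sketch of BANKSY-style feature augmentation.
def banksy_features(expr, coords, lam=0.8, k=2):
    """expr: (cells, genes) expression matrix; coords: (cells, 2) positions.
    Returns each cell's own profile concatenated with its k-nearest-neighbor
    mean profile, weighted by the mixing parameter lambda."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self from neighbors
    nbrs = np.argsort(d, axis=1)[:, :k]         # k nearest neighbors per cell
    nbr_mean = expr[nbrs].mean(axis=1)          # neighborhood expression profile
    return np.hstack([np.sqrt(1 - lam) * expr, np.sqrt(lam) * nbr_mean])

# Four toy cells: two expression types, two spatial regions.
expr = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
coords = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
feats = banksy_features(expr, coords)

# Cells 0 and 1 share both expression and neighborhood, so their augmented
# features match; a cell with identical expression in a different tissue
# region would get a different augmented profile and cluster separately.
print(np.allclose(feats[0], feats[1]))  # True
```

    Any standard clustering method can then be run on the augmented matrix, which is what lets neighborhood context separate cells that look identical in isolation.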
    BANKSY has been benchmarked by two independent studies and found to be the best-performing algorithm for spatial omics data. One study concluded that BANKSY can be a powerful solution for the identification of domains; the other tested six algorithms and selected BANKSY as the most accurate for its data analysis.
    Dr Shyam Prabhakar, Senior Group Leader, Laboratory of Systems Biology and Data Analytics and Associate Director, Spatial and Single Cell Systems at A*STAR’s GIS, said, “We anticipate that BANKSY will be a game-changing tool that helps to unlock the potential of emerging Spatial Omics technologies. This will hopefully improve our understanding of tissue processes in diverse diseases, allowing us to develop more effective treatments for cancers, neurological disorders and many other pathologies.”
    Professor Liu Jian Jun, Acting Executive Director at A*STAR’s GIS, said, “The work on BANKSY advances our strategy of combining high-throughput technologies with scalable, robust AI software for problem-solving and identifying the clues to what can make a difference in the lives of patients.”
    Dr Iain Tan, Senior Consultant, Division of Medical Oncology at National Cancer Centre Singapore and Senior Clinician Scientist at A*STAR’s GIS Laboratory of Applied Cancer Genomics, said, “We are using BANKSY to identify the cells that help tumours grow and spread to other parts of the body — drugs targeting such cells could be a promising direction for cancer treatment.”

  • Biomechanical dataset for badminton performance analysis

    In sports training, practice is the key, but being able to emulate the techniques of professional athletes can take a player’s performance to the next level. AI-based personalized sports coaching assistants can make this a reality by utilizing published datasets. With cameras and sensors strategically placed on the athlete’s body, these systems can track everything, including joint movement patterns, muscle activation levels, and gaze movements.
    Using this data, personalized feedback is provided on player technique, along with improvement recommendations. Athletes can access this feedback anytime, and anywhere, making these systems versatile for athletes at all levels.
    Now, in a study published in the journal Scientific Data on April 5, 2024, researchers led by Associate Professor SeungJun Kim from the Gwangju Institute of Science and Technology (GIST), South Korea, in collaboration with researchers from Massachusetts Institute of Technology (MIT), CSAIL, USA, have developed a MultiSenseBadminton dataset for AI-driven badminton training.
    “Badminton could benefit from these various sensors, but there is a scarcity of comprehensive badminton action datasets for analysis and training feedback,” says Ph.D. candidate Minwoo Seong, the first author of the study.
    Supported by the 2024 GIST-MIT project, this study took inspiration from MIT’s ActionSense project, which used wearable sensors to track everyday kitchen tasks such as peeling, slicing vegetables, and opening jars. Seong collaborated with MIT’s team, including MIT CSAIL postdoctoral researcher Joseph DelPreto, MIT CSAIL Director and MIT EECS Professor Daniela Rus, and Professor Wojciech Matusik. Together, they developed the MultiSenseBadminton dataset, capturing movements and physiological responses of badminton players. This dataset, shaped with insights from professional badminton coaches, aims to enhance the quality of forehand clear and backhand drive strokes. For this, the researchers collected 23 hours of swing motion data from 25 players with varying levels of training experience.
    During the study, players were tasked with repeatedly executing forehand clear and backhand drive shots while sensors recorded their movements and responses. These included inertial measurement units (IMU) sensors to track joint movements, electromyography (EMG) sensors to monitor muscle signals, insole sensors for foot pressure, and a camera to record both body movements and shuttlecock positions. With a total of 7,763 data points collected, each swing was meticulously labeled based on stroke type, player’s skill level, shuttlecock landing position, impact location relative to the player, and sound upon impact. The dataset was then validated using a machine learning model, ensuring its suitability for training AI models to evaluate stroke quality and offer feedback.
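    A labeled swing of this kind might be represented as a record like the following. The field names and values here are our own sketch of the label scheme described above, not the dataset's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical record for one labeled badminton swing.
@dataclass
class SwingSample:
    stroke_type: str            # "forehand_clear" or "backhand_drive"
    skill_level: str            # player's training experience level
    landing_position: tuple     # shuttlecock landing (x, y) on the court
    impact_location: str        # impact point relative to the player
    impact_sound: str           # audible quality of the hit
    imu: list = field(default_factory=list)     # joint-movement time series
    emg: list = field(default_factory=list)     # muscle-signal time series
    insole: list = field(default_factory=list)  # foot-pressure time series

sample = SwingSample("forehand_clear", "expert", (2.5, 5.1),
                     "high_front", "clean")
print(sample.stroke_type, sample.skill_level)
```

    Keeping the sensor streams and labels together in one record like this is what makes supervised training straightforward: the labels serve as targets while the sensor streams serve as model inputs.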
    “The MultiSenseBadminton dataset can be used to build AI-based education and training systems for racket sports players. By analyzing the disparities in motion and sensor data among different levels of players and creating AI-generated action trajectories, the dataset can be applied to personalized motion guides for each level of players,” says Seong.
    The gathered data can enhance training through haptic vibration or electrical muscle stimulation, promoting better motion and refining swing techniques. Additionally, player tracking data, like that in the MultiSenseBadminton dataset, could fuel virtual reality games or training simulations, making sports training more accessible and affordable, potentially transforming how people exercise.
    In the long run, the researchers speculate that this dataset could make sports training more accessible and affordable for a broader audience, promote overall well-being, and foster a healthier population.

  • As the Arctic tundra warms, soil microbes likely will ramp up CO2 production

    Climate change is warming the Arctic tundra about four times faster than the rest of the planet. Now, a study suggests that rising temperatures will spur underground microbes there to produce more carbon dioxide — potentially creating a feedback loop that worsens climate change.
    The tundra is “a sleepy biome,” says Sybryn Maes, an environmental scientist at Umeå University in Sweden. This ecosystem is populated by small shrubs, grasses and lichen growing in cold soils rich with stored organic carbon. Scientists have long suspected that warming will wake this sleeping giant, prompting soil microbes to release more of the greenhouse gas CO2 (SN: 8/11/22). But it’s been difficult to demonstrate in field studies.

    Maes’ team included about 70 scientists performing measurements in 28 tundra regions across the planet’s Arctic and alpine zones. During the summer growing season, the researchers placed clear, open-topped plastic chambers, each about a meter in diameter, over patches of tundra. These chambers let in light and precipitation but blocked the wind, warming the air inside by an average of 1.4 degrees Celsius. The researchers monitored how much CO2 microbes in the soil released into the air, a process called respiration, and compared that data with measurements from nearby exposed patches.
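    For intuition about why even modest warming matters, soil respiration's temperature response is often summarized by a Q10 factor, the multiplier applied for every 10 degrees Celsius of warming. The back-of-the-envelope calculation below uses an assumed Q10 of 2, not a number from the study:

```python
# Illustrative Q10 calculation (our sketch, not the study's analysis).
def respiration_increase(q10, delta_t):
    """Fractional increase in CO2 respiration for a warming of delta_t
    degrees C, under the standard Q10 exponential response model."""
    return q10 ** (delta_t / 10.0) - 1.0

# With an assumed Q10 of 2, the chambers' average 1.4 C of warming would
# raise respiration by roughly 10 percent.
print(f"{respiration_increase(2.0, 1.4):.1%}")
```

    The exponential form is why small temperature increases compound: a warming of 10 C under the same assumption would double respiration outright.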