More stories

  • How do people prefer coronavirus contact tracing to be carried out?

    People prefer coronavirus contact tracing to be carried out by a combination of apps and humans, a new study shows.
    The research shows people are more concerned about who runs the process than the risks of others having unauthorised access to their private information, or their data being stolen.
    Most people who took part in the research were in favour of the NHS processing personal data rather than the Government or even a decentralised system that stores only minimal personal data.
    A total of 41 per cent of those questioned wanted a mixture of an app and human contact during the tracing process, compared to 22 per cent who wanted it purely to be run via contact with another person and 37 per cent who wanted the process to only be digital.
    The research was conducted by Laszlo Horvath, Susan Banducci and Oliver James from the University of Exeter during May and is published in the Journal of Experimental Political Science.
    They ran an experiment with 1,504 people who were shown information about two apps through a series of five pairings, with the apps’ properties relating to privacy and data security displayed randomly, and were asked which of each pair they would prefer to use. In a second study, the academics also surveyed 809 people about their preferences for how apps should be run and designed.
    The decentralised system of contact tracing, currently being trialled in the UK, was chosen by participants with a 50 per cent probability, meaning this particular design didn’t influence people’s choice. However, the probability of people choosing the app designed to work as part of an NHS-led centralised system was 57 per cent, making it the more popular option, while apps described as storing data on servers belonging to the UK government were chosen only 43 per cent of the time, making them less popular.
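    These choice probabilities come from a paired (conjoint) design: when app properties are assigned at random, an attribute that has no effect on choices is picked about half the time. As a minimal sketch of how such marginal choice probabilities can be computed (the column names and data below are illustrative, not the study’s dataset), consider:

        # Minimal sketch, not the authors' analysis code; data are illustrative only.
        import pandas as pd

        # Each row is one app profile shown in one pairing; `chosen` is 1 if the
        # respondent picked that profile over its paired alternative, 0 otherwise.
        df = pd.DataFrame({
            "data_storage": ["NHS", "UK government", "decentralised",
                             "NHS", "UK government", "decentralised"],
            "chosen":       [1, 0, 1, 1, 0, 0],
        })

        # Marginal probability that a profile with a given attribute level is chosen.
        # With random assignment, 0.5 means the level has no effect on choice;
        # values above or below 0.5 mark levels that make an app more or less popular.
        print(df.groupby("data_storage")["chosen"].mean())
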
    A randomly selected group of people were also informed about the risk of data breach issues, but this didn’t have an impact on people’s preferences.
    Dr Horvath said: “We had thought people would prefer apps which were less intrusive and protected their privacy, for example not needing as much information about their location, but this wasn’t the case. Our research shows people are supportive of taking part in the contact tracing process if needed. They are less concerned about the possibility of data breach problems than who their app is run by, and privacy didn’t affect their preferences when they had a choice of apps.”
    Professor Banducci said: “Our research shows people are supportive of the NHS storing and using their personal information. Faith and trust in the NHS is high at the moment so it may motivate people to take part in the process if the Government involves the health service in its development and deployment. Trust in the provider of contact tracing will be crucial if it is to be used successfully to reduce the spread of infection.”
    Professor James said: “People who took part in this research preferred a balanced — human plus digital — approach to contact tracing. Privacy concerns were not as influential as we expected. Trust in the provider of the app is currently more important, something for the Government to remember as work on the UK’s contact tracing system continues.”

    Story Source:
    Materials provided by University of Exeter. Note: Content may be edited for style and length.

  • Study confirms widespread literacy in biblical-period kingdom of Judah

    Researchers at Tel Aviv University (TAU) have analyzed 18 ancient texts dating back to around 600 BCE from the Tel Arad military post using state-of-the-art image processing, machine learning technologies, and the expertise of a senior handwriting examiner. They have concluded that the texts were written by no fewer than 12 authors, suggesting that many of the inhabitants of the kingdom of Judah during that period were able to read and write, and that literacy was not the exclusive domain of a handful of royal scribes.
    The special interdisciplinary study was conducted by TAU’s Dr. Arie Shaus, Ms. Shira Faigenbaum-Golovin, and Dr. Barak Sober of the Department of Applied Mathematics; Prof. Eli Piasetzky of the Raymond and Beverly Sackler School of Physics and Astronomy; and Prof. Israel Finkelstein of the Jacob M. Alkow Department of Archeology and Ancient Near Eastern Civilizations. The forensic handwriting specialist, Ms. Yana Gerber, is a senior expert who served for 27 years in the Questioned Documents Laboratory of the Israel Police Division of Identification and Forensic Science and its International Crime Investigations Unit.
    The results were published in PLOS ONE on September 9, 2020.
    “There is a lively debate among experts as to whether the books of Deuteronomy, Joshua, Judges, Samuel, and Kings were compiled in the last days of the kingdom of Judah or after the destruction of the First Temple by the Babylonians,” Dr. Shaus explains. “One way to try to get to the bottom of this question is to ask when there was the potential for the writing of such complex historical works.
    “For the period following the destruction of the First Temple in 586 BCE, there is very scant archaeological evidence of Hebrew writing in Jerusalem and its surroundings, but an abundance of written documents has been found for the period preceding the destruction of the Temple. But who wrote these documents? Was this a society with widespread literacy, or was there just a handful of literate people?”
    To answer this question, the researchers examined the ostraca (fragments of pottery vessels bearing ink inscriptions) discovered at the Tel Arad site in the 1960s. Tel Arad was a small military post on the southern border of the kingdom of Judah; its built-up area was about 20,000 square feet and it housed between 20 and 30 soldiers.

    “We examined the question of literacy empirically, from different directions of image processing and machine learning,” says Ms. Faigenbaum-Golovin. “Among other things, these areas help us today with the identification, recognition, and analysis of handwriting, signatures, and so on. The big challenge was to adapt modern technologies to 2,600-year-old ostraca. With a lot of effort, we were able to produce two algorithms that could compare letters and answer the question of whether two given ostraca were written by two different people.”
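    As a rough illustration of what such a comparison can look like (a toy sketch under assumed features, not the two algorithms developed by the team), each ostracon can be summarised by per-letter shape measurements, and two ostraca flagged as the work of different writers only when those measurements are clearly separated:

        # Toy sketch only; the paper's algorithms are not reproduced here.
        import numpy as np

        def mean_features(letters: np.ndarray) -> np.ndarray:
            """letters: (n_letters, n_features) shape measurements (e.g. stroke
            width, aspect ratio) for one ostracon; returns the mean feature vector."""
            return letters.mean(axis=0)

        def likely_different_writers(a: np.ndarray, b: np.ndarray,
                                     threshold: float = 2.0) -> bool:
            """Conservative test: claim 'different writers' only when the gap between
            the mean feature vectors is large relative to within-ostracon variation."""
            pooled_std = np.sqrt((a.std(axis=0) ** 2 + b.std(axis=0) ** 2) / 2) + 1e-9
            distance = np.abs(mean_features(a) - mean_features(b)) / pooled_std
            return bool(distance.mean() > threshold)

    A test of this kind stays silent unless the evidence for separate hands is strong, mirroring the “cautious” behaviour of the algorithms that Dr. Shaus describes below.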
    In 2016, the researchers theorized that 18 of the Tel Arad inscriptions were written by at least four different authors. Combined with additional textual evidence, the researchers concluded that there were in fact at least six different writers. The study aroused great interest around the world.
    The TAU researchers then decided to compare the algorithmic methods, which have since been refined, to the forensic approach. To this end, Ms. Gerber joined the team. After an in-depth examination of the ancient inscriptions, she concluded, with varying degrees of certainty, that the 18 texts were written by at least 12 distinct writers. She examined the original Tel Arad ostraca at the Israel Museum, the Eretz Israel Museum, the Sonia and Marco Nedler Institute of Archaeology of Tel Aviv University, and the Israel Antiquities Authority’s warehouses at Beit Shemesh.
    Ms. Gerber explained:
    “This study was very exciting, perhaps the most exciting in my professional career. These are ancient Hebrew inscriptions written in ink on shards of pottery, utilizing an alphabet that was previously unfamiliar to me. I studied the characteristics of the writing in order to analyze and compare the inscriptions, while benefiting from the skills and knowledge I acquired during my bachelor’s degree studies in classical archaeology and ancient Greek at Tel Aviv University. I delved into the microscopic details of these inscriptions written by people from the First Temple period, from routine issues such as orders concerning the movement of soldiers and the supply of wine, oil, and flour, through correspondence with neighboring fortresses, to orders that reached the Tel Arad fortress from the high ranks of the Judahite military system. I had the feeling that time had stood still and there was no gap of 2,600 years between the writers of the ostraca and ourselves.

    “Handwriting is made up of unconscious habit patterns. Handwriting identification is based on the principle that these writing patterns are unique to each person and no two people write exactly alike. It is also assumed that repetitions of the same text or characters by the same writer are never exactly identical, so a range of natural handwriting variation can be defined for each person. Thus, forensic handwriting analysis aims at tracking features corresponding to specific individuals and determining whether the given documents were written by a single author or by different authors.
    “The examination process is divided into three steps: analysis, comparison, and evaluation. The analysis includes a detailed examination of every single inscription, according to various features, such as the spacing between letters, their proportions, slant, and so on. The comparison is based upon the aforementioned features across the various handwritings. In addition, consistent patterns, such as the same combinations of letters, words, and punctuation, are identified. Finally, an evaluation of the identicalness or distinctiveness of the writers is made. It should be noted that, according to an Israel Supreme Court ruling, a person can be convicted of a crime based on the opinion of a forensic handwriting expert.”
    Dr. Shaus further elaborated:
    “We were in for a big surprise: Yana identified more authors than our algorithms did. It must be understood that our current algorithms are of a “cautious” nature — they know how to identify cases in which the texts were written by people with significantly different writing; in other cases they refrain from definite conclusions. In contrast, an expert in handwriting analysis knows not only how to spot the differences between writers more accurately, but in some cases may also arrive at the conclusion that several texts were actually written by a single person. Naturally, in terms of consequences, it is very interesting to see who the authors are. Thanks to the findings, we were able to construct an entire flowchart of the correspondence concerning the military fortress — who wrote to whom and regarding what matter. This reflects the chain of command within the Judahite army.
    “For example, in the area of Arad, close to the border between the kingdoms of Judah and Edom, there was a military force whose soldiers are referred to as “Kittiyim” in the inscriptions, most likely Greek mercenaries. Someone, probably their Judahite commander or liaison officer, requested provisions for the Kittiyim unit. He writes to the quartermaster of the fortress in Arad “give the Kittiyim flour, bread, wine” and so on. Now, thanks to the identification of the handwriting, we can say with high probability that there was not only one Judahite commander writing, but at least four different commanders. It is conceivable that each time another officer was sent to join the patrol, they took turns.”
    According to the researchers, the findings shed new light on Judahite society on the eve of the destruction of the First Temple — and on the setting of the compilation of biblical texts. Dr. Sober explains:
    “It should be remembered that this was a small outpost, one of a series of outposts on the southern border of the kingdom of Judah. Since we found at least 12 different authors out of 18 texts in total, we can conclude that there was a high level of literacy throughout the entire kingdom. The commanding ranks and liaison officers at the outpost, and even the quartermaster Eliashib and his deputy, Nahum, were literate. Someone had to teach them how to read and write, so we must assume the existence of an appropriate educational system in Judah at the end of the First Temple period. This, of course, does not mean that there was almost universal literacy as there is today, but it seems that significant portions of the residents of the kingdom of Judah were literate. This is important to the discussion on the composition of biblical texts. If there were only two or three people in the whole kingdom who could read and write, then it is unlikely that complex texts would have been composed.”
    Prof. Finkelstein concludes:
    “Whoever wrote the biblical works did not do so for us, so that we could read them after 2,600 years. They did so in order to promote the ideological messages of the time. There are different opinions regarding the date of the composition of biblical texts. Some scholars suggest that many of the historical texts in the Bible, from Joshua to II Kings, were written at the end of the 7th century BCE, very close to the period of the Arad ostraca. It is important to ask who these texts were written for. According to one view, there were events in which the few people who could read and write stood before the illiterate public and read texts out to them. A high literacy rate in Judah puts things into a different light.
    “Until now, the discussion of literacy in the kingdom of Judah has been based on circular arguments, on what is written within the Bible itself, for example on scribes in the kingdom. We have shifted the discussion to an empirical perspective. If in a remote place like Tel Arad there was, over a short period of time, a minimum of 12 authors of 18 inscriptions, out of the population of Judah which is estimated to have been no more than 120,000 people, it means that literacy was not the exclusive domain of a handful of royal scribes in Jerusalem. The quartermaster from the Tel Arad outpost also had the ability to read and appreciate them.”

  • Superconductors are super resilient to magnetic fields

    A researcher at the University of Tsukuba has offered a new explanation for how superconductors exposed to a magnetic field can recover — without loss of energy — to their previous state after the field is removed. This work may lead to a new theory of superconductivity and a more eco-friendly electrical distribution system.
    Superconductors are a class of materials with the amazing property of being able to conduct electricity with zero resistance. In fact, an electrical current can circle around a loop of superconducting wire indefinitely. The catch is that these materials must be kept very cold, and even so, a strong magnetic field can cause a superconductor to revert back to normal.
    It was once assumed that the superconducting-to-normal transition caused by a magnetic field could not be reversed easily, since the energy would be dissipated by the usual process of Joule heating. This mechanism, by which the resistance in normal wires converts electrical energy into heat, is what allows us to use an electric stovetop or space heater.
    “Joule heating is usually considered negatively, because it wastes energy and can even cause overloaded wires to melt,” explains Professor Hiroyasu Koizumi of the Division of Quantum Condensed Matter Physics, the Center for Computational Sciences at the University of Tsukuba. “However, it has been known for a long time from experiments that, if you remove the magnetic field, a current-carrying superconductor can, in fact, be returned to its previous state without loss of energy.”
    Now, Professor Koizumi has proposed a new explanation for this phenomenon. In the superconducting state, electrons pair up and move in sync, but the true cause of this synchronized motion is the presence of a so-called “Berry connection,” characterized by a topological quantum number. This number is an integer, and when it is nonzero, a current flows. Thus, the supercurrent can be switched off abruptly, without Joule heating, by changing this number to zero.
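    As a point of reference (an illustrative textbook example, not necessarily the specific quantity in Professor Koizumi’s theory), the simplest such integer is the winding number of a phase around a closed loop:

        % Illustrative winding number: the simplest integer topological quantum number.
        % n counts how many times the phase \theta winds along the closed loop C.
        \[
          n = \frac{1}{2\pi} \oint_{C} \nabla\theta \cdot \mathrm{d}\boldsymbol{\ell} \in \mathbb{Z}
        \]

    Because an integer like this can only change in whole steps, a circulating current associated with a nonzero value can be switched off by driving the number to zero, rather than by gradually dissipating the current as heat.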
    The founder of modern electromagnetic theory, James Clerk Maxwell, once postulated a similar molecular vortex model that imagined space being filled with the rotation of currents in tiny circles. Since everything was spinning the same way, it reminded Maxwell of “idle wheels,” which were gears used in machines for this purpose.
    “The surprising thing is that a model from the early days of electromagnetism, like Maxwell’s idle wheels, can help us resolve questions arising today,” Professor Koizumi says. “This research may help lead to a future in which energy can be delivered from power plants to homes with perfect efficiency.”

    Story Source:
    Materials provided by University of Tsukuba. Note: Content may be edited for style and length.

  • Positive results for ReWalk ReStore exosuit in stroke rehabilitation

    A team of U.S. researchers published the results of a multi-center, single-arm trial of the ReWalk ReStore™ for gait training in individuals undergoing post-stroke rehabilitation. They found the device safe and reliable during treadmill and overground walking under the supervision of physical therapists. The article, “The ReWalk ReStore soft robotic exosuit: a multisite clinical trial of the safety, reliability, and feasibility of exosuit-augmented post-stroke gait rehabilitation,” was published open access in the Journal of NeuroEngineering and Rehabilitation on June 18, 2020.
    The authors are the principal investigators of each of the five testing sites: Louis N. Awad, PT, DPT, PhD, of Spaulding Rehabilitation Hospital, Boston, MA; Alberto Esquenazi, MD, of MossRehab Stroke and Neurological Disease Center, Elkins Park, PA; Gerard E. Francisco, MD, of TIRR Memorial Hermann, Houston, TX; Karen J. Nolan, PhD, of Kessler Foundation, West Orange, NJ; and lead investigator Arun Jayaraman, PT, PhD, of the Shirley Ryan AbilityLab, Chicago, IL.
    The ReStore™ exosuit (ReWalk Robotics, Ltd) is the first soft robotic exosuit cleared by the FDA for use in stroke survivors with mobility deficits. The device is indicated for individuals with hemiplegia undergoing stroke rehabilitation under the care of licensed physical therapists. Hemiplegia causes weakness of the ankle, limiting the ability to clear the ground during stepping and hindering forward movement. This leads to compensatory walking patterns that increase effort and decrease stability.
    ReStore is designed to augment ankle plantarflexion and dorsiflexion, allowing a more normal gait pattern. Motors mounted on a waist belt transmit power through cables to attachment points on an insole and the patient’s calf. Sensors clipped to the patient’s shoes transmit data to a handheld smartphone controller used by a trained therapist to adjust levels of assistance and monitor and record key metrics of gait training.
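    A highly simplified control sketch may help picture how such a device operates; the gait phases, parameter names and logic below are assumptions for illustration, not ReWalk’s actual controller or interface.

        # Hypothetical sketch only; not ReWalk's controller, sensor API or clinical
        # settings. Gait-phase names and assistance levels are illustrative.
        from dataclasses import dataclass

        @dataclass
        class GaitEvent:
            phase: str          # e.g. "pre_swing", "swing", "stance"
            paretic_side: bool  # True when the event concerns the affected leg

        def cable_command(event: GaitEvent, pf_level: float, df_level: float) -> float:
            """Return a cable-tension command (arbitrary units) for the paretic ankle.
            pf_level / df_level stand in for the therapist-set plantarflexion and
            dorsiflexion assistance levels on the handheld controller."""
            if not event.paretic_side:
                return 0.0
            if event.phase == "pre_swing":   # assist push-off (plantarflexion)
                return pf_level
            if event.phase == "swing":       # assist ground clearance (dorsiflexion)
                return df_level
            return 0.0                        # no active assistance during stance
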
    The trial enrolled 44 participants with post-stroke hemiparesis who were able to walk unassisted for 5 feet. The protocol consisted of 5 days of 20-minute sessions of treadmill and overground training under the supervision of licensed physical therapists. To assess the therapeutic potential of ReStore in rehabilitation, the researchers also explored the effects of the device on maximum walking speed, measuring participants’ walking speed in and out of the device using the 10-meter walk test, before and after the five training visits. For safety purposes, some participants were allowed to use an ankle-foot orthosis (AFO) or cane during walking sessions.
    The trial determined the safety, reliability, and feasibility of the device in this stroke population. “We found that the ReStore provided targeted assistance for plantarflexion and dorsiflexion of the paretic ankle, improving the gait pattern,” explained Dr. Nolan, senior research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation. “This is an important first step toward expanding options for rehabilitative care for the millions of individuals with mobility impairments caused by ischemic and hemorrhagic stroke.”
    The trial’s exploratory data indicated positive effects of the training on the walking speed of participants during exosuit-assisted walking and unassisted walking (walking without the device). More than one third of participants achieved a significant increase in unassisted walking speed, indicating that further research is warranted.
    Dr. Nolan emphasized that the trial was not designed to measure the device’s efficacy: “Controlled trials are needed to determine the efficacy of ReStore for improving mobility outcomes of stroke rehabilitation.”

    Story Source:
    Materials provided by Kessler Foundation. Note: Content may be edited for style and length.

  • Vibration device makes homes 'smart' by tracking appliances

    To boost efficiency in typical households — where people forget to take wet clothes out of washing machines, retrieve hot food from microwaves and turn off dripping faucets — Cornell University researchers have developed a single device that can track 17 types of appliances using vibrations.
    The device, called VibroSense, uses lasers to capture subtle vibrations in walls, ceilings and floors, as well as a deep learning network that models the vibrometer’s data to create different signatures for each appliance — bringing researchers closer to a more efficient and integrated smart home.
    “Recognizing home activities can help computers better understand human behaviors and needs, with the hope of developing a better human-machine interface,” said Cheng Zhang, assistant professor of information science and senior author of “VibroSense: Recognizing Home Activities by Deep Learning Subtle Vibrations on an Interior Surface of a House from a Single Point Using Laser Doppler Vibrometry.” The paper was published in Proceedings of the Association for Computing Machinery on Interactive, Mobile, Wearable and Ubiquitous Technologies and will be presented at the ACM International Joint Conference on Pervasive and Ubiquitous Computing, which will be held virtually Sept. 12-17.
    “In order to have a smart home at this point, you’d need each device to be smart, which is not realistic; or you’d need to install separate sensors on each device or in each area,” said Zhang, who directs Cornell’s SciFi Lab. “Our system is the first that can monitor devices across different floors, in different rooms, using one single device.”
    In order to detect usage across an entire house, the researchers’ task was twofold: detect tiny vibrations using a laser Doppler vibrometer; and differentiate similar vibrations created by multiple devices by identifying the paths traveled by the vibrations from room to room.
    The deep learning network was trained to distinguish different activities, partly by learning path signatures — the distinctive path vibrations followed through the house — as well as their distinct noises.
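    As a rough sketch of the kind of model this implies (not the authors’ VibroSense network; the architecture, input length and sampling rate below are assumptions), a small 1-D convolutional classifier can map a fixed-length vibrometer signal to one of the 17 activity classes:

        # Illustrative model only, not the network described in the paper.
        import torch
        import torch.nn as nn

        class ToyVibrationClassifier(nn.Module):
            def __init__(self, n_classes: int = 17):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=64, stride=4), nn.ReLU(),
                    nn.Conv1d(16, 32, kernel_size=16, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, 1, samples) raw or band-passed vibrometer signal
                return self.classifier(self.features(x).squeeze(-1))

        model = ToyVibrationClassifier()
        logits = model(torch.randn(2, 1, 48_000))  # two one-second clips at an assumed 48 kHz
        print(logits.shape)                        # torch.Size([2, 17])
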
    The device showed nearly 96% accuracy in identifying 17 different activities — including dripping faucets, an exhaust fan, an electric kettle, a refrigerator and a range hood — across five houses over two days, according to the paper. VibroSense could also distinguish five different stages of appliance usage with an average accuracy of more than 97%.
    In single-story houses, the laser was pointed at an interior wall at the center of the home. It was pointed at the ceiling in two-story homes.
    The device is primarily useful in single-family houses, Zhang said, because in buildings it could pick up activities in neighboring apartments, presenting a potential privacy risk.
    “It would definitely require collaboration between researchers, industry practitioners and government to make sure this was used for the right purposes,” Zhang said.
    Among other uses, the system could help homes monitor energy usage and potentially help reduce consumption.
    “Since our system can detect both the occurrence of an indoor event, as well as the time of an event, it could be used to estimate electricity and water-usage rates, and provide energy-saving advice for homeowners,” Zhang said. “It could also prevent water and electrical waste, as well as electrical failures such as short circuits in home appliances.”

    Story Source:
    Materials provided by Cornell University. Original written by Melanie Lefkowitz. Note: Content may be edited for style and length.

  • Designed antiviral proteins inhibit SARS-CoV-2 in the lab

    Computer-designed small proteins have now been shown to protect lab-grown human cells from SARS-CoV-2, the coronavirus that causes COVID-19.
    The findings are reported today, Sept. 9, in Science.
    In the experiments, the lead antiviral candidate, named LCB1, rivaled the best-known SARS-CoV-2 neutralizing antibodies in its protective actions. LCB1 is currently being evaluated in rodents.
    Coronaviruses are studded with so-called Spike proteins. These latch onto human cells to enable the virus to break in and infect them. The development of drugs that interfere with this entry mechanism could lead to the treatment, or even the prevention, of infection.
    Institute for Protein Design researchers at the University of Washington School of Medicine used computers to originate new proteins that bind tightly to SARS-CoV-2 Spike protein and obstruct it from infecting cells.
    Beginning in January, more than two million candidate Spike-binding proteins were designed on the computer. Over 118,000 were then produced and tested in the lab.

    “Although extensive clinical testing is still needed, we believe the best of these computer-generated antivirals are quite promising,” said lead author Longxing Cao, a postdoctoral scholar at the Institute for Protein Design.
    “They appear to block SARS-CoV-2 infection at least as well as monoclonal antibodies, but are much easier to produce and far more stable, potentially eliminating the need for refrigeration,” he added.
    The researchers created antiviral proteins through two approaches. First, a segment of the ACE2 receptor, which SARS-CoV-2 naturally binds to on the surface of human cells, was incorporated into a series of small protein scaffolds.
    Second, completely synthetic proteins were designed from scratch. The latter method produced the most potent antivirals, including LCB1, which is roughly six times more potent on a per mass basis than the most effective monoclonal antibodies reported thus far.
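    The “per mass” comparison reflects the fact that a designed minibinder is far smaller than an antibody, so equal molar potency already translates into a much lower required mass. The numbers below are assumed round figures used only to illustrate the unit conversion, not measurements from the paper.

        # Illustration of molar vs. per-mass potency; all numbers are assumed.
        minibinder_mw_kda, antibody_mw_kda = 7.0, 150.0   # assumed molecular weights
        minibinder_ic50_nm, antibody_ic50_nm = 1.0, 1.0   # assumed equal molar potency

        def ic50_ng_per_ml(ic50_nm: float, mw_kda: float) -> float:
            """Convert a molar IC50 (nM) into a mass concentration (ng/mL):
            1 nM of a 1 kDa protein corresponds to 1 ng/mL."""
            return ic50_nm * mw_kda

        print(ic50_ng_per_ml(minibinder_ic50_nm, minibinder_mw_kda))  # ~7 ng/mL
        print(ic50_ng_per_ml(antibody_ic50_nm, antibody_mw_kda))      # ~150 ng/mL
        # Even at equal molar potency, the smaller protein needs ~20x less mass.
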
    Scientists from the University of Washington School of Medicine in Seattle and Washington University School of Medicine in St. Louis collaborated on this work.
    “Our success in designing high-affinity antiviral proteins from scratch is further proof that computational protein design can be used to create promising drug candidates,” said senior author and Howard Hughes Medical Institute Investigator David Baker, professor of biochemistry at the UW School of Medicine and head of the Institute for Protein Design. In 2019, Baker gave a TED talk on how protein design might be used to stop viruses.
    To confirm that the new antiviral proteins attached to the coronavirus Spike protein as intended, the team collected snapshots of the two molecules interacting by using cryo-electron microscopy. These experiments were performed by researchers in the laboratories of David Veesler, assistant professor of biochemistry at the UW School of Medicine, and Michael S. Diamond, the Herbert S. Gasser Professor in the Division of Infectious Diseases at Washington University School of Medicine in St. Louis.
    “The hyperstable minibinders provide promising starting points for new SARS-CoV-2 therapeutics,” the antiviral research team wrote in their study pre-print, “and illustrate the power of computational protein design for rapidly generating potential therapeutic candidates against pandemic threats.”

    Story Source:
    Materials provided by University of Washington Health Sciences/UW Medicine. Original written by Ian Haydon, Institute for Protein Design. Note: Content may be edited for style and length.

  • Seeing objects through clouds and fog

    Like a comic book come to life, researchers at Stanford University have developed a kind of X-ray vision — only without the X-rays. Working with hardware similar to what enables autonomous cars to “see” the world around them, the researchers enhanced their system with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes based on the movement of individual particles of light, or photons. In tests, detailed in a paper published Sept. 9 in Nature Communications, their system successfully reconstructed shapes obscured by 1-inch-thick foam. To the human eye, it’s like seeing through walls.
    “A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”
    This technique complements other vision systems that can see through barriers on the microscopic scale — for applications in medicine — because it’s more focused on large-scale situations, such as navigating self-driving cars in fog or heavy rain and satellite imaging of the surface of Earth and other planets through hazy atmospheres.
    Supersight from scattered light
    In order to see through environments that scatter light every which way, the system pairs a laser with a super-sensitive photon detector that records every bit of laser light that hits it. As the laser scans an obstruction like a wall of foam, an occasional photon will manage to pass through the foam, hit the objects hidden behind it and pass back through the foam to reach the detector. The algorithm-supported software then uses those few photons — and information about where and when they hit the detector — to reconstruct the hidden objects in 3D.
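    A toy sketch of the underlying idea (not the reconstruction algorithm in the paper; the scan geometry, timing gate and units below are assumptions) is to backproject each photon’s arrival time onto a voxel grid and accumulate evidence wherever the timing is consistent:

        # Toy backprojection sketch; assumed geometry and timing, not the paper's method.
        import numpy as np

        C = 3e8  # speed of light, m/s

        def backproject(scan_xy, arrival_times, grid):
            """scan_xy: (N, 2) laser/detector positions on the scan plane (z = 0);
            arrival_times: list of N arrays of photon round-trip times (s);
            grid: (M, 3) candidate voxel centres. Returns a per-voxel score."""
            score = np.zeros(len(grid))
            for (x, y), times in zip(scan_xy, arrival_times):
                # round-trip distance from this scan point to every voxel and back
                d = 2 * np.linalg.norm(grid - np.array([x, y, 0.0]), axis=1)
                expected_t = d / C
                for t in times:
                    # credit voxels whose expected arrival time matches this photon
                    score += np.abs(expected_t - t) < 50e-12  # assumed 50 ps gate
            return score

    Voxels consistent with the arrival times of many photons accumulate a high score, marking where a hidden surface is most likely to be; unlike this toy version, the system described here does not assume photons travel straight through the foam, but makes use of the scattered photons as well.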
    This is not the first system with the ability to reveal hidden objects through scattering environments, but it circumvents limitations associated with other techniques. For example, some require knowledge of how far away the object of interest is. It is also common for these systems to use only information from ballistic photons, which are photons that travel to and from the hidden object through the scattering field without actually scattering along the way.

    “We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image,” said David Lindell, a graduate student in electrical engineering and lead author of the paper. “This makes our system especially useful for large-scale applications, where there would be very few ballistic photons.”
    In order to make their algorithm amenable to the complexities of scattering, the researchers had to closely co-design their hardware and software, although the hardware components they used are only slightly more advanced than what is currently found in autonomous cars. Depending on the brightness of the hidden objects, scanning in their tests took anywhere from one minute to one hour, but the algorithm reconstructed the obscured scene in real-time and could be run on a laptop.
    “You couldn’t see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don’t see anything,” said Lindell. “But, with just a handful of photons, the reconstruction algorithm can expose these objects — and you can see not only what they look like, but where they are in 3D space.”
    Space and fog
    Someday, a descendant of this system could be sent through space to other planets and moons to help see through icy clouds to deeper layers and surfaces. In the nearer term, the researchers would like to experiment with different scattering environments to simulate other circumstances where this technology could be useful.
    “We’re excited to push this further with other types of scattering geometries,” said Lindell. “So, not just objects hidden behind a thick slab of material but objects that are embedded in densely scattering material, which would be like seeing an object that’s surrounded by fog.”
    Lindell and Wetzstein are also enthusiastic about how this work represents a deeply interdisciplinary intersection of science and engineering.
    “These sensing systems are devices with lasers, detectors and advanced algorithms, which puts them in an interdisciplinary research area between hardware and physics and applied math,” said Wetzstein. “All of those are critical, core fields in this work and that’s what’s the most exciting for me.”

    Story Source:
    Materials provided by Stanford University. Original written by Taylor Kubota. Note: Content may be edited for style and length.

  • As collegiate esports become more professional, women are being left out

    A new study from North Carolina State University reports that the rapidly growing field of collegiate esports is effectively becoming a two-tiered system, with club-level programs that are often supportive of gender diversity being clearly distinct from well-funded varsity programs that are dominated by men.
    “Five years ago, we thought collegiate esports might be an opportunity to create a welcoming, diverse competitive arena, which was a big deal given how male-dominated the professional esports scene was,” says Nick Taylor, co-author of the study and an associate professor of communication at NC State. “Rapid growth of collegiate esports over the past five years has led to it becoming more professional, with many universities having paid esports positions, recruiting players, and so on. We wanted to see how that professionalization has affected collegiate esports and what that means for gender diversity. The findings did not give us reason to be optimistic.”
    For this qualitative study, the researchers conducted in-depth interviews with 21 collegiate esports leaders from the U.S. and Canada. Eight of the study participants were involved in varsity-level esports, such as coaches or administrators, while the remaining 13 participants were presidents of collegiate esports clubs. Six of the participants identified as women; 15 identified as men.
    “Essentially, we found that women are effectively pushed out of esports at many colleges when they start investing financial resources in esports programs,” says Bryce Stout, co-author of the study and a Ph.D. student at NC State. “We thought collegiate esports might help to address the disenfranchisement of women in esports and in gaming more generally; instead, it seems to simply be an extension of that disenfranchisement.”
    “Higher education has been spending increasing amounts of time, money and effort on professionalizing esports programs,” Taylor says. “With some key exceptions, these institutions are clearly not putting as much effort into encouraging diversity in these programs. That effectively cuts out women and minorities.
    “Some leaders stress that they will welcome any player onto their team, as long as the player has a certain skill level,” Taylor says. “But this ignores the systemic problems that effectively drive most women out of gaming — such as harassment. There needs to be a focus on cultivating skill and developing players, rather than focusing exclusively on recruitment.”

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.