More stories

  • Drugs aren't typically tested on women — artificial intelligence could correct that bias

    Researchers at Columbia University have developed AwareDX — Analysing Women At Risk for Experiencing Drug toXicity — a machine learning algorithm that identifies and predicts differences in adverse drug effects between men and women by analyzing 50 years’ worth of reports in an FDA database. The algorithm, described September 22 in the journal Patterns, automatically corrects for the biases in these data that stem from an overrepresentation of male subjects in clinical research trials.
    Though men and women can have different responses to medications — the sleep aid Ambien, for example, is metabolized more slowly by women, causing next-day grogginess — even doctors may not know about these differences because the clinical trial data themselves are biased toward men. This trickles down to affect prescribing guidelines, drug marketing and, ultimately, patients’ health.
    “Pharma has a history of ignoring complex problems. Traditionally, clinical trials have not even included women in their studies. The old-fashioned way used to be to get a group of healthy guys together to give them the drug, make sure it didn’t kill them, and you’re off to the races. As a result, we have a lot less information about how women respond to drugs than men,” says Nicholas Tatonetti (@nicktatonetti), an associate professor of biomedical informatics at Columbia University and a co-author on the paper. “We haven’t had the ability to evaluate these differences before, or even to quantify them.”
    Tatonetti teamed up with one of his students — Payal Chandak, a senior biomedical informatics major at Columbia University and the other co-author on the paper. Together they developed AwareDX. Because it is a machine learning algorithm, AwareDX can automatically adjust for sex-based biases in a way that would take concerted effort to do manually.
    “Machine learning is definitely a buzzword, but essentially the idea is to correct for these biases before you do any other statistical analysis by building a balanced subset of patients with equal parts men and women for each drug,” says Chandak.
    The algorithm uses data from the FDA Adverse Event Reporting System (FAERS), which contains reports of adverse drug effects from consumers, healthcare providers, and manufacturers all the way back to 1968. AwareDX groups the data into sex-balanced subsets before looking for patterns and trends. To improve the results, the algorithm then repeats the whole process 25 times.
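    The press release does not include code, but the balancing-and-repeat step it describes can be sketched in a few lines. The snippet below is only an illustration of the idea, not the authors' implementation; the column names, the toy rate comparison and the FAERS-like table it assumes are all hypothetical.

```python
# Illustrative sketch (not the authors' code) of the sex-balanced resampling
# idea described above: for each drug, draw equal numbers of male and female
# reports, tally an adverse effect by sex, and repeat the procedure 25 times.
import pandas as pd

def balanced_effect_rates(reports: pd.DataFrame, drug: str, effect: str,
                          n_repeats: int = 25, seed: int = 0) -> pd.DataFrame:
    """`reports` is assumed to have 'drug', 'sex' and 'effect' columns."""
    subset = reports[reports["drug"] == drug]
    males = subset[subset["sex"] == "M"]
    females = subset[subset["sex"] == "F"]
    n = min(len(males), len(females))          # equal parts men and women
    rows = []
    for i in range(n_repeats):                 # repeat the whole process 25 times
        m = males.sample(n, random_state=seed + i)
        f = females.sample(n, random_state=seed + i)
        rows.append({
            "repeat": i,
            "male_rate": (m["effect"] == effect).mean(),
            "female_rate": (f["effect"] == effect).mean(),
        })
    return pd.DataFrame(rows)

# Example (hypothetical data): average the 25 balanced estimates for one pair.
# rates = balanced_effect_rates(faers_reports, "simvastatin", "myalgia")
# print(rates[["male_rate", "female_rate"]].mean())
```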
    The researchers compiled the results into a bank of over 20,000 potential sex-specific drug effects, which can then be verified either by looking back at older data or by conducting new studies down the line. Though there is a lot of work left to do, the researchers have already had success verifying the results for several drugs based on previous genetic research.
    For example, the ABCB1 gene, which affects how much of a drug is usable by the body and for how long, is known to be more active in men than women. Because of this, the researchers expected to see a greater risk of muscle aches for men taking simvastatin — a cholesterol medication — and a greater risk of slowing heart rate for women taking risperidone — an antipsychotic. AwareDX successfully predicted both of these effects.
    “The most exciting thing to me is that not only do we have a database of adverse events that we’ve developed from this FDA resource, but we’ve shown that for some of these events, there is preexisting knowledge of genetic differences between men and women,” says Chandak. “Using that knowledge, we can actually predict different responses that men and women should have and validate our method against those. That gives us a lot of confidence in the method itself.”
    By continuing to verify their results, the researchers hope that the insights from AwareDX will help doctors make more informed choices when prescribing drugs, especially to women. “Doctors actually look at adverse effect information specific to the drug they prescribe. So once this information is studied further and corroborated, it’s actually going to impact drug prescriptions and people’s health,” says Tatonetti.
    This work was supported by the National Institutes of Health.

    Story Source:
    Materials provided by Cell Press.

  • Screen time can change visual perception — and that's not necessarily bad

    The coronavirus pandemic has shifted many of our interactions online, with Zoom video calls replacing in-person classes, work meetings, conferences and other events. Will all that screen time damage our vision?
    Maybe not. It turns out that our visual perception is highly adaptable, according to research from Psychology Professor and Cognitive and Brain Sciences Coordinator Peter Gerhardstein’s lab at Binghamton University.
    Gerhardstein, Daniel Hipp and Sara Olsen — his former doctoral students — will publish “Mind-Craft: Exploring the Effect of Digital Visual Experience on Changes in Orientation Sensitivity in Visual Contour Perception,” in an upcoming issue of the academic journal Perception. Hipp, the lead author and main originator of the research, is now at the VA Eastern Colorado Health Care System’s Laboratory for Clinical and Translational Research. Olsen, who designed stimuli for the research and aided in the analysis of the results, is now at the University of Minnesota’s Department of Psychiatry.
    “The finding in the work is that the human perceptual system rapidly adjusts to a substantive alteration in the statistics of the visual world, which, as we show, is what happens when someone is playing video games,” Gerhardstein said.
    The experiments
    The research focuses on a basic element of vision: our perception of orientation in the environment.

    Take a walk through the Binghamton University Nature Preserve and look around. Stimuli — trees, branches, bushes, the path — are oriented at many different angles. According to an analysis by Hipp, there is a slight predominance of horizontal and then vertical planes — think of the ground and the trees — but no shortage of oblique angles.
    Then consider the “carpentered world” of a cityscape — downtown Binghamton, perhaps. The percentage of horizontal and vertical orientations increases dramatically, while the obliques fall away. Buildings, roofs, streets, lampposts: The cityscape is a world of sharp angles, like the corner of a rectangle. The digital world ramps up the predominance of the horizontal and vertical planes, Gerhardstein explained.
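    The article summarizes Hipp's analysis rather than describing his method, but scene orientation statistics of this kind are commonly measured from image gradients. The sketch below shows one conventional way to do that for a single grayscale photo; it is an illustrative assumption, not the study's actual pipeline.

```python
# Rough illustration of measuring orientation statistics in a scene image:
# histogram the local gradient orientations and compare cardinal (horizontal/
# vertical) angles against oblique ones. Not the study's actual analysis.
import numpy as np

def orientation_histogram(gray: np.ndarray, n_bins: int = 36):
    """gray: 2-D array of pixel intensities (e.g. a photo converted to grayscale)."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0       # orientation, 0-180 degrees
    hist, edges = np.histogram(angle, bins=n_bins, range=(0, 180), weights=magnitude)
    return hist / hist.sum(), edges

def cardinal_fraction(hist, edges, tol=10.0):
    """Share of gradient energy within +/- tol degrees of horizontal or vertical."""
    centers = (edges[:-1] + edges[1:]) / 2
    cardinal = (np.minimum(centers, 180 - centers) < tol) | (np.abs(centers - 90) < tol)
    return hist[cardinal].sum()

# A "carpentered" cityscape photo would typically give a higher cardinal
# fraction than a photo taken in the Nature Preserve.
```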
    Research shows that we tend to pay more attention to horizontal and vertical orientations, at least in the lab; in real-world environments, these differences probably aren’t noticeable, although they likely still drive behavior. Painters, for example, tend to exaggerate these distinctions in their work, a tendency that a different research group has studied.
    Orientation is a fundamental aspect of how our brain and eyes work together to build the visual world. Interestingly, it’s not fixed; our visual system can adapt to changes swiftly, as the group’s two experiments show.
    The first experiment established a method of eye tracking that doesn’t require an overt response, such as touching a screen. The second had college students play four hours of Minecraft — one of the most popular computer games in the world — with visual stimuli presented before and after the gaming session. Researchers then measured subjects’ ability to perceive contours at oblique and at vertical/horizontal orientations, using the eye-tracking method from the first experiment.

    A single session produced a clearly detectable change. While the screen-less control group showed no changes in their perception, the game-players detected horizontal and vertical orientations more easily. Neither group changed their perception in oblique orientations.
    We still don’t know how temporary these changes are, although Gerhardstein speculates that the vision of the game-playing research subjects likely returned to normal quickly.
    “So, the immediate takeaway is the impressive extent to which the young adult visual system can rapidly adapt to changes in the statistics of the visual environment,” he said.
    In the next phase of research, Gerhardstein’s lab will track the visual development of two groups of children, one assigned to regularly play video games and the other to avoid screen-time, including television. If the current experiment is any indication, there may be no significant differences, at least when it comes to orientation sensitivity. The pandemic has put in-person testing plans on hold, although researchers have given a survey about children’s playing habits to local parents and will use the results to design a study.
    Adaptive vision
    Other research groups who have examined the effects of digital exposure on other aspects of visual perception have concluded that long-term changes do take place, at least some of which are seen as helpful.
    Helpful? Like other organisms, humans tend to adapt fully to the environment they experience. The first iPhone came out in 2007 and the first iPad in 2010. Children who are around 10 to 12 years old have grown up with these devices, and will live and operate in a digital world as adults, Gerhardstein pointed out.
    “Is it adaptive for them to develop a visual system that is highly sensitive to this particular environment? Many would argue that it is,” he said. “I would instead suggest that a highly flexible system that can shift from one perceptual ‘set’ to another rapidly, so that observers are responding appropriately to the statistics of a digital environment while interacting with digital media, and then shifting to respond appropriately to the statistics of a natural scene or a cityscape, would be most adaptive.”

  • New detector breakthrough pushes boundaries of quantum computing

    Physicists at Aalto University and VTT Technical Research Centre of Finland have developed a new detector for measuring energy quanta at unprecedented resolution. This discovery could help bring quantum computing out of the laboratory and into real-world applications. The results have been published today in Nature.
    The type of detector the team works on is called a bolometer, which measures the energy of incoming radiation by measuring how much it heats up the detector. Professor Mikko Möttönen’s Quantum Computing and Devices group at Aalto has been developing its expertise in bolometers for quantum computing over the past decade and has now developed a device that can match current state-of-the-art detectors used in quantum computers.
    ‘It is amazing how we have been able to improve the specs of our bolometer year after year, and now we embark on an exciting journey into the world of quantum devices,’ says Möttönen.
    Measuring the energy of qubits is at the heart of how quantum computers operate. Most quantum computers currently measure a qubit’s energy state by measuring the voltage induced by the qubit. However, there are three problems with voltage measurements: firstly, measuring the voltage requires extensive amplification circuitry, which may limit the scalability of the quantum computer; secondly, this circuitry consumes a lot of power; and thirdly, the voltage measurements carry quantum noise which introduces errors in the qubit readout. Quantum computer researchers hope that by using bolometers to measure qubit energy, they can overcome all of these complications, and now Professor Möttönen’s team have developed one that is fast enough and sensitive enough for the job.
    ‘Bolometers are now entering the field of quantum technology and perhaps their first application could be in reading out the quantum information from qubits. The bolometer speed and accuracy seems now right for it,’ says Professor Möttönen.
    The team had previously produced a bolometer made of a gold-palladium alloy with unparalleled low noise levels in its measurements, but it was still too slow to measure qubits in quantum computers. The breakthrough in this new work was achieved by swapping from making the bolometer out of gold-palladium alloys to making them out of graphene. To do this, they collaborated with Professor Pertti Hakonen’s NANO group — also at Aalto University — who have expertise in fabricating graphene-based devices. Graphene has a very low heat capacity, which means that it is possible to detect very small changes in its energy quickly. It is this speed in detecting the energy differences that makes it perfect for a bolometer with applications in measuring qubits and other experimental quantum systems. By swapping to graphene, the researchers have produced a bolometer that can make measurements in well below a microsecond, as fast as the technology currently used to measure qubits.
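    The article gives no figures, but the qualitative claim follows from the textbook bolometer relations: the temperature rise per absorbed energy scales as one over the heat capacity, and the thermal response time is the heat capacity divided by the thermal conductance to the bath. The numbers in the toy calculation below are arbitrary placeholders, not values from the Aalto device.

```python
# Toy illustration of why a low heat capacity helps a bolometer: for a fixed
# thermal link G, the response time tau = C / G and the temperature rise per
# absorbed energy dT = E / C both improve as the heat capacity C shrinks.
# All numbers are arbitrary placeholders, not measurements from the paper.
def bolometer_response(heat_capacity_j_per_k: float,
                       thermal_conductance_w_per_k: float,
                       absorbed_energy_j: float):
    tau = heat_capacity_j_per_k / thermal_conductance_w_per_k   # seconds
    delta_t = absorbed_energy_j / heat_capacity_j_per_k         # kelvin
    return tau, delta_t

for c in (1e-18, 1e-20):    # shrinking the heat capacity 100x ...
    tau, dt = bolometer_response(c, 1e-12, 1e-24)
    print(f"C={c:.0e} J/K -> tau={tau:.0e} s, dT={dt:.0e} K")
# ... makes this toy detector 100x faster and 100x more responsive per photon.
```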
    ‘Changing to graphene increased the detector speed by 100 times, while the noise level remained the same. After these initial results, there is still a lot of optimisation we can do to make the device even better,’ says Professor Hakonen.
    Now that the new bolometers can compete when it comes to speed, the hope is to utilise the other advantages bolometers have in quantum technology. While the bolometers reported in the current work perform on par with the current state-of-the-art voltage measurements, future bolometers have the potential to outperform them. Current technology is limited by Heisenberg’s uncertainty principle: voltage measurements will always have quantum noise, but bolometers do not. This higher theoretical accuracy, combined with the lower energy demands and smaller size — the graphene flake could fit comfortably inside a single bacterium — means that bolometers are an exciting new device concept for quantum computing.
    The next steps for the research are to resolve the smallest energy packets ever observed using bolometers in real time, and to use the bolometer to measure the quantum properties of microwave photons, which have exciting applications not only in quantum technologies such as computing and communications, but also in the fundamental understanding of quantum physics.
    Many of the scientists involved in the research also work at IQM, a spin-out of Aalto University developing technology for quantum computers. “IQM is constantly looking for new ways to enhance its quantum-computer technology and this new bolometer certainly fits the bill,” explains Dr Kuan Yen Tan, Co-Founder of IQM, who was also involved in the research.

    Story Source:
    Materials provided by Aalto University.

  • By 2100, Greenland will be losing ice at its fastest rate in 12,000 years

    By 2100, Greenland will be shedding ice faster than at any time in the past 12,000 years, scientists report October 1 in Nature.
    Since the 1990s, Greenland has shed its ice at an increasing rate (SN: 8/2/19). Meltwater from the island’s ice sheet now contributes about 0.7 millimeters per year to global sea level rise (SN: 9/25/19). But how does this rapid loss stack up against the ice sheet’s recent history, including during a 3,000-year-long warm period?
    Glacial geologist Jason Briner of the University at Buffalo in New York and colleagues created a master timeline of ice sheet changes spanning nearly 12,000 years, from the dawn of the Holocene Epoch 11,700 years ago and projected out to 2100.
    The researchers combined climate and ice physics simulations with observations of the extent of past ice sheets, marked by moraines. Those rocky deposits denote the edges of ancient, bulldozing glaciers. New fine-tuned climate simulations that include spatial variations in temperature and precipitation across the island also improved on past temperature reconstructions.
    During the past warm episode from about 10,000 to 7,000 years ago, Greenland lost ice at a rate of about 6,000 billion metric tons each century, the team estimates. That rate remained unmatched until the past two decades: From 2000 to 2018, the average rate of ice loss was similar, at about 6,100 billion tons per century.
    Over the next century, that pace will accelerate, the team says. How much depends on future greenhouse gas emissions: Under a lower-emissions scenario, ice loss is projected to average around 8,800 billion tons per century by 2100. With higher emissions, the rate of loss could ramp up to 35,900 billion tons per century.
    Lower emissions could slow the loss, but “no matter what humanity does, the ice will melt this century at a faster clip than it did during that warm period,” Briner says.

  • 'Liking' an article online may mean less time spent reading it

    When people have the option to click “like” on a media article they encounter online, they spend less time actually reading the text, a new study suggests.
    In a lab experiment, researchers found that people spent about 7 percent less time reading articles on controversial topics when they had the opportunity to upvote or downvote them than if there was no interactive element.
    The finding was strongest when an article agreed with the reader’s point of view.
    The results suggest that the ability to interact with online content may change how we consume it, said Daniel Sude, who led the work while earning a doctoral degree in communication at The Ohio State University.
    “When people are voting whether they like or dislike an article, they’re expressing themselves. They are focused on their own thoughts and less on the content in the article,” Sude said.
    “It is like the old phrase, ‘If you’re talking, you’re not listening.’ People were talking back to the articles without listening to what they had to say.”
    In another finding, people’s existing views on controversial topics like gun control or abortion became stronger after voting on articles that agreed with their views, even when they spent less time reading them.

    “Just having the ability to like an article you agreed with was enough to amplify your attitude,” said study co-author Silvia Knobloch-Westerwick, professor of communication at Ohio State.
    “You didn’t need to read the article carefully, you didn’t have to learn anything new, but you are more committed to what you already believed.”
    The study, also co-authored by former Ohio State doctoral student George Pearson, was published online recently in the journal Computers in Human Behavior and will appear in the January 2021 print edition.
    The study involved 235 college students. Before the study, the researchers measured their views on four controversial topics used in the experiment: abortion, welfare benefits, gun control and affirmative action.
    Participants were then shown four versions of an online news website created for the study, each on one of the controversial topics. Each webpage showed headlines and first paragraphs for four articles, two with a conservative slant and two with a liberal slant. Participants could click on the headlines to read the full stories.

    Two versions of the websites had a banner that said, “Voting currently enabled for this topic,” and each article had an up arrow or down arrow that participants could click on to express their opinion.
    The other two websites had a banner that said, “Voting currently disabled for this topic.”
    Participants were given three minutes to browse each website as they wished, although they were not told about the time limit. The researchers measured the time participants spent on each story and whether they voted if they had the opportunity.
    As expected, for each website, participants spent more time reading articles that agreed with their views (about 1.5 minutes) than opposing views (less than a minute).
    But they spent about 12 seconds less time reading the articles they agreed with if they could vote.
    In addition, people voted on about 12 percent of articles that they didn’t select to read, the study showed.
    “Rather than increasing engagement with website content, having the ability to interact may actually distract from it,” Sude said.
    The researchers measured the participants’ views on the four topics again after they read the websites to see if their attitudes had changed at all.
    Results showed that when participants were not able to vote, time spent reading articles that agreed with their original views strengthened these views. The more time they spent reading, the stronger their views became.
    When participants were able to vote, their voting behavior was as influential as their reading time. Even if they stopped reading and upvoted an article, their attitudes still became stronger.
    “It is important that people’s views still became stronger by just having the opportunity to vote,” Knobloch-Westerwick said.
    “When they had the opportunity to vote on the articles, their attitudes were getting more extreme with limited or no input from the articles themselves. They were in an echo chamber of one.”
    Sude said there is a better way to interact with online news.
    “Don’t just click the like button. Read the article and leave thoughtful comments that are more than just a positive or negative rating,” he said.
    “Say why you liked or disliked the article. The way we express ourselves is important and can influence the way we think about an issue.”

  • The secretive networks used to move money offshore

    In 2016, the world’s largest-ever data leak, dubbed “The Panama Papers,” exposed a vast global network of people, including celebrities and world leaders, who used offshore tax havens, anonymous transactions through intermediaries and shell corporations to hide their wealth, grow their fortunes and avoid taxes.
    Researchers at the USC Viterbi School of Engineering have now conducted a deep analysis of the entities, and the interrelationships among them, originally revealed in the 11.5 million files leaked to the International Consortium of Investigative Journalists. They found that the network is uniquely fragmented, vastly different from more traditional social or organizational networks, which helps explain why these systems of transactions and associations are so robust and difficult to infiltrate or take down. The work has been published in Applied Network Science.
    Lead author Mayank Kejriwal is an assistant professor in the Daniel J. Epstein Department of Industrial and Systems Engineering and at USC’s Information Sciences Institute who studies complex (typically social) systems, such as online trafficking markets, using computational methods and network science. He said the research team’s aim was to study the Panama Papers network as a whole, in the same way you might study a social network like Facebook, to try to understand what the network behavior can tell us about how money can be moved.
    “In general, in any social network like LinkedIn or Facebook, there is something called ‘Small World Phenomenon’, which means that you’re only ever around six people away from anyone in the world,” Kejriwal said.
    “For instance, if you want to get from yourself to Bill Gates, on average you would be around six connections away,” he said.
    However, the team discovered that the Panama Papers network was about as far removed from this traditional social or organizational network behavior as it could possibly be. Instead of a network of highly integrated connections, the researchers discovered a series of secretive, disconnected fragments, with entities, intermediaries and individuals involved in transactions and corporations exhibiting very few connections with other entities in the system.

    “It was really unusual. The degree of fragmentation is something I have never seen before,” said Kejriwal. “I’m not aware of any other network that has this kind of fragmentation.”
    “So (without any documentation or leak), if you wanted to find the chain between one organization and another organization, you would not be able to find it, because the chances are that there is no chain — it’s completely disconnected,” Kejriwal said.
    Most social, friendship or organizational networks contain a series of triangular structures, in a pattern known as the ‘friend of a friend’ phenomenon.
    “The simple notion is that a friend of a friend is also a friend,” Kejriwal said. “And we can measure that by counting the number of triangles in the network.”
    However, the team discovered that this triangular structure was not a feature of the Panama Papers network.

    “It turns out that not only is it not prevalent, but it’s far less prevalent than even in a random network,” Kejriwal said. “If you literally randomly connect things, in a haphazard fashion, and then you count the triangles in that network, this network is even sparser than that.” He added, “Compared to a random network, in this type of network, links between financial entities are scrambled until they are essentially meaningless (so that anyone can be transacting with anyone else).”
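    The article reports these comparisons only qualitatively. A minimal sketch of how they could be reproduced with the networkx library is shown below; the edge-list file name is a placeholder for however the leaked entity relationships are loaded, and this is not the authors' code.

```python
# Sketch of the two comparisons described above, using networkx:
# (1) how much "friend of a friend" triangle structure the graph has, and
# (2) how fragmented it is, versus a random graph of the same size and density.
# 'panama_edges.csv' is a placeholder for the leaked relationship data.
import networkx as nx

G = nx.read_edgelist("panama_edges.csv", delimiter=",")

n, m = G.number_of_nodes(), G.number_of_edges()
random_graph = nx.gnm_random_graph(n, m, seed=42)

print("average clustering (real):  ", nx.average_clustering(G))
print("average clustering (random):", nx.average_clustering(random_graph))

components = list(nx.connected_components(G))
largest = max(components, key=len)
print("connected components:", len(components))
print("largest component covers", len(largest) / n, "of the nodes")
# A highly fragmented network shows many components, a small largest
# component, and less triangle structure than even the random baseline.
```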
    It is precisely this disconnectedness that makes the system of secret global financial dealings so robust. Because there was no way to trace relationships between entities, the network could not be easily compromised.
    “So what this suggests is that secrecy is built into the system and you cannot penetrate it,” Kejriwal said.
    “In an interconnected world, we don’t expect anyone to be impenetrable. Everyone has a weak link,” Kejriwal said. “But not in this network. The fact it is so fragmented actually protects them.”
    Kejriwal said the network behavior demonstrates that those involved in the Panama Papers network of offshore entities and transactions were very sophisticated, knowing exactly how to move money around in a way that it becomes untraceable and they are not vulnerable through their connections to others in the system. Because it is a global network, there are few options for national or international bodies to intervene in order to recoup taxes and investigate corruption and money laundering.
    “I don’t know how anyone would try to bring this down, and I’m not sure that they would be able to. The system seems unattackable,” Kejriwal said.

  • App analyzes coronavirus genome on a smartphone

    A new mobile app has made it possible to analyse the genome of the SARS-CoV-2 virus on a smartphone in less than half an hour.
    Cutting-edge nanopore devices have enabled scientists to read, or ‘sequence’, the genetic material in a biological sample outside a laboratory. However, analysing the raw data has still required access to high-end computing power — until now.
    The app Genopo, developed by the Garvan Institute of Medical Research, in collaboration with the University of Peradeniya in Sri Lanka, makes genomics more accessible to remote or under-resourced regions, as well as the hospital bedside.
    “Not everyone has access to the high-power computing resources that are required for DNA and RNA analysis, but most people have access to a smartphone,” says co-senior author Dr Ira Deveson, who heads the Genomic Technologies Group at Garvan’s Kinghorn Centre for Clinical Genomics.
    “Fast, real-time genomic analysis is more crucial today than ever, as a central method for tracking the spread of coronavirus. Our app makes genomic analysis more accessible, literally placing the technology into the pockets of scientists around the world.”
    The researchers report the app Genopo in the journal Communications Biology.

    Taking genome analysis off-line
    Genomic sequencing no longer requires a sophisticated lab setup.
    Portable devices the size of a USB stick, such as the Oxford Nanopore Technologies MinION sequencer, can rapidly generate genomic sequences from a sample in the field or the clinic. The technology has been used for Ebola surveillance in West Africa, to profile microbial communities in the Arctic and to track coronavirus evolution during the current pandemic.
    However, analysing genome sequencing data requires powerful computation. Scientists need to piece the many strings of genetic letters from the raw data into a single sequence and pinpoint the instances of genetic variation that shed light on how a virus evolves.
    “Until now, genomic analysis has required the processing power of high-end server computers or cloud services. We set out to change that,” explains co-senior author Hasindu Gamaarachchi, Genomics Computing Systems Engineer at the Garvan Institute.

    “To enable in situ genomic sequencing and analysis, in real time and without major laboratory infrastructure, we developed an app that could execute bioinformatics workflows on nanopore sequencing datasets that are downloaded to a smartphone. The reengineering process, spearheaded by first author Hiruna Samarakoon, required overcoming a number of technical challenges due to various resource constraints in smartphones. The app Genopo combines a number of available bioinformatics tools into a single Android application, ‘miniaturised’ to work on the processing power of a consumer Android device.”
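    Genopo bundles its tools for Android, but the workflow it performs, aligning nanopore reads to a reference genome and calling the differences, resembles a standard command-line pipeline. The sketch below uses minimap2, samtools and bcftools as stand-ins; the file names are placeholders, and this is not necessarily the exact tool chain the app ships.

```python
# Rough sketch of the kind of nanopore workflow described above, run with
# standard command-line tools (minimap2, samtools, bcftools). File names are
# placeholders; this is not necessarily the exact tool chain Genopo bundles.
import subprocess

REF = "sars_cov_2_reference.fasta"   # e.g. a SARS-CoV-2 reference genome
READS = "patient_reads.fastq"        # basecalled MinION reads

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Align the reads to the reference and sort the alignments.
run(f"minimap2 -ax map-ont {REF} {READS} | samtools sort -o aligned.bam")
run("samtools index aligned.bam")

# 2. Pile up the aligned bases and call variants against the reference.
run(f"bcftools mpileup -f {REF} aligned.bam | bcftools call -mv -Oz -o variants.vcf.gz")

# 3. Build a consensus genome sequence that incorporates the called variants.
run("bcftools index variants.vcf.gz")
run(f"bcftools consensus -f {REF} variants.vcf.gz > consensus.fasta")
```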
    Coronavirus testing
    The researchers tested Genopo on the raw sequencing data of virus samples isolated from nine Sydney patients infected with SARS-CoV-2, which involved extracting and amplifying the virus RNA from a swab sample, sequencing the amplified DNA with a MinION device and analysing the data on a smartphone. The researchers tested their app on different Android devices, including models from Nokia, Huawei, LG and Sony.
    The Genopo app took an average of 27 minutes to determine the complete SARS-CoV-2 genome sequence from the raw data, which the researchers say opens the possibility of doing genomic analysis at the point of care, in real time. The researchers also showed that Genopo can be used to profile DNA methylation — a modification which changes gene activity — in a sample of the human genome.
    “This illustrates a flexible, efficient architecture that is suitable to run many popular bioinformatics tools and accommodate small or large genomes,” says Dr Deveson. “We hope this will make genomics much more accessible to researchers to unlock the information in DNA or RNA to the benefit of human health, including in the current pandemic.”
    Genopo is a free, open-source application available through the Google Play store (https://play.google.com/store/apps/details?id=com.mobilegenomics.genopo&hl=en).
    This project was supported by a Medical Research Future Fund grant (APP1173594), a Cancer Institute NSW Early Career Fellowship and The Kinghorn Foundation. Garvan is affiliated with St Vincent’s Hospital Sydney and UNSW Sydney.

  • Driving behavior less 'robotic' thanks to new model

    Researchers from TU Delft have now developed a new model that describes driving behaviour on the basis of one underlying ‘human’ principle: managing the risk below a threshold level. This model can accurately predict human behaviour during a wide range of driving tasks. In time, the model could be used in intelligent cars, to make them feel less ‘robotic’. The research conducted by doctoral candidate Sarvesh Kolekar and his supervisors Joost de Winter and David Abbink will be published in Nature Communications on Tuesday 29 September 2020.
    Risk threshold
    Driving behaviour is usually described using models that predict an optimum path. But this is not how people actually drive. ‘You don’t always adapt your driving behaviour to stick to one optimum path,’ says researcher Sarvesh Kolekar from the Department of Cognitive Robotics. ‘People don’t drive continuously in the middle of their lane, for example: as long as they are within the acceptable lane limits, they are fine with it.’
    Models that predict an optimum path are not only popular in research, but also in vehicle applications. ‘The current generation of intelligent cars drive very neatly. They continuously search for the safest path: i.e. one path at the appropriate speed. This leads to a “robotic” style of driving,’ continues Kolekar. ‘To get a better understanding of human driving behaviour, we tried to develop a new model that used the human risk threshold as the underlying principle.’
    Driver’s Risk Field
    To get to grips with this concept, Kolekar introduced the so-called Driver’s Risk Field (DRF). This is an ever-changing two-dimensional field around the car that indicates how high the driver considers the risk to be at each point. Kolekar devised these risk assessments in previous research. The gravity of the consequences of the risk in question is then taken into account in the DRF. For example, having a cliff on one side of the road boundary is much more dangerous than having grass. ‘The DRF was inspired by a concept from psychology, put forward a long time ago (in 1938) by Gibson and Crooks. These authors claimed that car drivers “feel” the risk field around them, as it were, and base their traffic manoeuvres on these perceptions.’ Kolekar managed to turn this theory into a computer algorithm.
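    The article describes the DRF only in words. The toy sketch below illustrates the general idea: a perceived-probability field that widens ahead of the car is weighted by a map of consequences (grass versus cliff) and summed into a single perceived risk. The shapes and numbers are illustrative assumptions, not the parametrisation used in the paper.

```python
# Toy illustration of a Driver's Risk Field: a 2-D "probability" field that
# spreads out ahead of the car, multiplied by a map of how bad it would be to
# end up at each point (grass vs. cliff), summed into one perceived risk.
# Shapes and numbers are illustrative, not the parametrisation in the paper.
import numpy as np

def risk_field(lateral_offsets, distances_ahead, width_at_car=0.5, spread=0.05):
    """Perceived-probability weight at each (distance ahead, lateral offset)."""
    width = width_at_car + spread * distances_ahead[:, None]   # widens with distance
    return np.exp(-0.5 * (lateral_offsets[None, :] / width) ** 2)

lateral = np.linspace(-5.0, 5.0, 101)    # metres left/right of the car's heading
ahead = np.linspace(0.0, 50.0, 51)       # metres ahead of the car

# Consequence map: cheap to drift onto grass on the left, very costly to go
# over a cliff edge on the right.
consequence = np.ones((ahead.size, lateral.size))
consequence[:, lateral > 3.0] = 100.0    # cliff beyond 3 m to the right
consequence[:, lateral < -3.0] = 5.0     # grass beyond 3 m to the left

perceived_risk = float((risk_field(lateral, ahead) * consequence).sum())
print("perceived risk for this lane position:", perceived_risk)
# A driver model of this kind would only adjust steering or speed when this
# value exceeds the driver's risk threshold.
```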
    Predictions
    Kolekar then tested the model in seven scenarios, including overtaking and avoiding an obstacle. ‘We compared the predictions made by the model with experimental data on human driving behaviour taken from the literature. Luckily, a lot of information is already available. It turned out that our model only needs a small amount of data to “get” the underlying human driving behaviour and could even predict reasonable human behaviour in previously unseen scenarios. Thus, driving behaviour rolls out more or less automatically; it is “emergent”.’
    Elegant
    This elegant description of human driving behaviour has huge predictive and generalising value. Apart from the academic value, the model can also be used in intelligent cars. ‘If intelligent cars were to take real human driving habits into account, they would have a better chance of being accepted. The car would behave less like a robot.’

    Story Source:
    Materials provided by Delft University of Technology.