More stories

  • Can AI predict how you’ll vote in the next election?

    Artificial intelligence technologies like ChatGPT are seemingly doing everything these days: writing code, composing music, and even creating images so realistic you’ll think they were taken by professional photographers. Add thinking and responding like a human to the conga line of capabilities. A recent study from BYU shows that artificial intelligence can respond to complex survey questions just like a real human.
    To assess whether artificial intelligence could substitute for human respondents in survey-style research, a team of political science and computer science professors and graduate students at BYU tested the accuracy of a GPT-3 language model — a model that mimics the complicated relationships among human ideas, attitudes, and the sociocultural contexts of subpopulations.
    In one experiment, the researchers created artificial personas by assigning the AI characteristics such as race, age, ideology, and religiosity, and then tested whether those personas would vote the same way humans did in the 2012, 2016, and 2020 U.S. presidential elections. Using the American National Election Studies (ANES) as their comparative human database, they found a high correspondence between how the AI and the humans voted.
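    The BYU team’s own code is not shown here, but the “artificial persona” recipe it describes can be sketched in a few lines of Python. The helper names, the attribute list, and the query_model placeholder below are illustrative assumptions rather than the study’s implementation: the idea is simply to turn a bundle of demographic attributes into a first-person prompt, ask a GPT-3-style model to complete the vote choice, and compare the completions with ANES respondents who share the same profile.

      # Hedged sketch of persona-conditioned survey prompting (not the study's code).
      def build_persona_prompt(persona: dict, election_year: int) -> str:
          """Turn a dictionary of attributes into a first-person backstory plus a vote question."""
          backstory = (
              f"I am a {persona['age']}-year-old {persona['race']} {persona['gender']} "
              f"from {persona['state']}. Ideologically I consider myself {persona['ideology']}, "
              f"and religion is {persona['religiosity']} in my life."
          )
          return f"{backstory} In the {election_year} U.S. presidential election, I voted for"

      def query_model(prompt: str) -> str:
          """Placeholder for whatever language-model completion API is available."""
          raise NotImplementedError

      persona = {
          "age": 54, "race": "white", "gender": "woman", "state": "Ohio",
          "ideology": "moderately conservative", "religiosity": "very important",
      }
      prompt = build_persona_prompt(persona, 2016)
      # completion = query_model(prompt)  # e.g. a candidate name
      # Aggregating completions over many personas and comparing the resulting vote
      # shares with matched ANES respondents gives the correspondence measure.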
    “I was absolutely surprised to see how accurately it matched up,” said David Wingate, a BYU computer science professor and co-author of the study. “It’s especially interesting because the model wasn’t trained to do political science — it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted.”
    In another experiment, they conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found high similarity between nuanced patterns in human and AI responses.
    This innovation holds exciting prospects for researchers, marketers, and pollsters. Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative, and even to simulate populations that are difficult to reach. It could also be used to test surveys, slogans, and taglines as a precursor to focus groups.
    “We’re learning that AI can help us understand people better,” said BYU political science professor Ethan Busby. “It’s not replacing humans, but it is helping us more effectively study people. It’s about augmenting our ability rather than replacing it. It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging.”
    And while the expansive possibilities of large language models are intriguing, the rise of artificial intelligence poses a host of questions — how much does AI really know? Which populations will benefit from this technology and which will be negatively impacted? And how can we protect ourselves from scammers and fraudsters who will manipulate AI to create more sophisticated phishing scams?
    While much of that is still to be determined, the study lays out a set of criteria that future researchers can use to determine how accurate an AI model is for different subject areas.
    “We’re going to see positive benefits because it’s going to unlock new capabilities,” said Wingate, noting that AI can help people in many different jobs be more efficient. “We’re also going to see negative things happen because sometimes computer models are inaccurate and sometimes they’re biased. It will continue to churn society.”
    Busby says surveying artificial personas shouldn’t replace the need to survey real people and that academics and other experts need to come together to define the ethical boundaries of artificial intelligence surveying in research related to social science.

  • New chip design to provide greatest precision in memory to date

    Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).
    For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware has become an increasingly severe problem, and one for which few have patience.
    Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices. Yang’s work falls into the middle — focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation.
    Their new paper in Nature focuses on the fundamental physics behind a drastic increase in the memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT, and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. The demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors (Miao Hu, Qiangfei Xia, and Glenn Ge) to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) of any known memory technology to date. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips are not just memory but also processors, and millions of them working in parallel in a small chip to rapidly run AI tasks could require only a small battery.
    The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors to create powerful but low-energy devices. The technique uses the positions of atoms to represent information, rather than the number of electrons (the basis of computation on today’s chips). The positions of the atoms offer a compact and stable way to store more information in an analog, rather than digital, fashion. Moreover, the information can be processed where it is stored instead of being sent to one of a few dedicated ‘processors,’ eliminating the so-called ‘von Neumann bottleneck’ of current computing systems. In this way, says Yang, computing for AI is “more energy efficient with a higher throughput.”
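    To make the idea concrete, the toy model below mimics what an analog in-memory chip does during an AI workload: weights live in the devices as one of 2^11 = 2048 conductance levels (11 bits per device), and a matrix-vector product is computed in place by summing current contributions along each column. This is a hedged numerical illustration of the general principle, not TetraMem’s design; the function names and the simple quantization scheme are assumptions, and real arrays encode signed weights with pairs of devices, a detail the sketch skips.

      # Toy model of analog in-memory matrix-vector multiplication on a memristor crossbar.
      import numpy as np

      BITS_PER_DEVICE = 11
      LEVELS = 2 ** BITS_PER_DEVICE          # 2048 distinguishable conductance states

      def program_crossbar(weights: np.ndarray) -> np.ndarray:
          """Quantize ideal weights onto the finite set of device conductance levels."""
          w_min, w_max = weights.min(), weights.max()
          step = (w_max - w_min) / (LEVELS - 1)
          return w_min + np.round((weights - w_min) / step) * step

      def analog_matvec(conductances: np.ndarray, voltages: np.ndarray) -> np.ndarray:
          """Each column current is the sum over rows of voltage * conductance (Ohm + Kirchhoff)."""
          return conductances.T @ voltages

      rng = np.random.default_rng(0)
      weights = rng.normal(size=(128, 64))    # one layer of an AI model
      crossbar = program_crossbar(weights)    # stored in the devices themselves
      x = rng.normal(size=128)                # input activations applied as voltages
      y = analog_matvec(crossbar, x)          # computed where the data lives, no shuttling
      print("max quantization error:", np.abs(crossbar - weights).max())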
    How it works
    Yang explains that the electrons manipulated in traditional chips are “light,” and this lightness makes them prone to moving around and more volatile. Instead of storing memory through electrons, Yang and collaborators store it in whole atoms. Here is why this memory matters: normally, says Yang, when one turns off a computer, the information in memory is gone — if a new computation needs that information again, you have lost both time and energy reloading it. The new method, which relies on atoms rather than electrons, does not require battery power to maintain stored information. Similar scenarios arise in AI computations, where a stable memory with high information density is crucial. Yang imagines this technology enabling powerful AI in edge devices such as Google Glass, which he says previously suffered from frequent recharging.
    Further, chips that rely on atoms rather than electrons can be made smaller. Yang adds that the new method offers more computing capacity at a smaller scale, and that it could provide “many more levels of memory to help increase information density.”
    To put it in context, ChatGPT currently runs in the cloud. The new innovation, followed by some further development, could put the power of a mini version of ChatGPT in everyone’s personal device, making such high-powered technology more affordable and accessible for all sorts of applications.

  • AI could set a new bar for designing hurricane-resistant buildings

    Being able to withstand hurricane-force winds is the key to a long life for many buildings on the Eastern Seaboard and Gulf Coast of the U.S. Determining the right level of winds to design for is tricky business, but support from artificial intelligence may offer a simple solution.
    Equipped with 100 years of hurricane data and modern AI techniques, researchers at the National Institute of Standards and Technology (NIST) have devised a new method of digitally simulating hurricanes. The results of a study published today in Artificial Intelligence for the Earth Systems demonstrate that the simulations can accurately represent the trajectory and wind speeds of a collection of actual storms. The authors suggest that simulating numerous realistic hurricanes with the new approach can help to develop improved guidelines for the design of buildings in hurricane-prone regions.
    State and local laws that regulate building design and construction — more commonly known as building codes — point designers to standardized maps. On these maps, engineers can find the level of wind their structure must handle based on its location and its relative importance (i.e., the bar is higher for a hospital than for a self-storage facility). The wind speeds in the maps are derived from scores of hypothetical hurricanes simulated by computer models, which are themselves based on real-life hurricane records.
    “Imagine you had a second Earth, or a thousand Earths, where you could observe hurricanes for 100 years and see where they hit on the coast, how intense they are. Those simulated storms, if they behave like real hurricanes, can be used to create the data in the maps almost directly,” said NIST mathematical statistician Adam Pintar, a study co-author.
    The researchers who developed the latest maps did so by simulating the complex inner workings of hurricanes, which are influenced by physical parameters such as sea surface temperatures and the Earth’s surface roughness. However, the requisite data on these specific factors is not always readily available.
    More than a decade after those maps were developed, advances in AI-based tools and years of additional hurricane records have made an unprecedented approach possible, one that could result in more realistic hurricane wind maps down the road.

    NIST postdoctoral researcher Rikhi Bose, together with Pintar and NIST Fellow Emil Simiu, used these new techniques and resources to tackle the issue from a different angle. Rather than having their model mathematically build a storm from the ground up, the authors of the new study taught it to mimic actual hurricane data with machine learning, Pintar said.
    Studying for a physics exam by only looking at the questions and answers of previous assignments may not play out in a student’s favor, but for powerful AI-based techniques, this type of approach could be worthwhile.
    With enough quality information to study, machine-learning algorithms can construct models based on patterns they uncover within datasets that other methods may miss. Those models can then simulate specific behaviors, such as the wind strength and movement of a hurricane.
    In the new research, the study material came in the form of the National Hurricane Center’s Atlantic Hurricane Database (HURDAT2), which contains information about hurricanes going back more than 100 years, such as the coordinates of their paths and their wind speeds.
    The researchers split data on more than 1,500 storms into sets for training and testing their model. When challenged with concurrently simulating the trajectory and wind of historical storms it had not seen before, the model scored highly.
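    The paper’s actual model is more sophisticated, but the general workflow it describes can be sketched as follows: parse HURDAT2 into 6-hourly fixes, learn how position and wind change from one fix to the next, and then roll that learned step forward to simulate a track. Everything below is a hedged illustration under assumptions: the file name, column names, and the choice of gradient-boosted trees are placeholders, not NIST’s method.

      # Sketch: learn 6-hourly storm evolution from historical fixes, then simulate forward.
      import pandas as pd
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      # One row per 6-hour fix: storm_id, lat, lon, max_wind (hypothetical pre-parsed file).
      fixes = pd.read_csv("hurdat2_fixes.csv")

      # Target = change over the next 6 hours, computed within each storm.
      deltas = fixes.groupby("storm_id")[["lat", "lon", "max_wind"]].diff().shift(-1)
      fixes["dlat"] = deltas["lat"]
      fixes["dlon"] = deltas["lon"]
      fixes["dwind"] = deltas["max_wind"]
      data = fixes.dropna()

      X = data[["lat", "lon", "max_wind"]]
      y = data[["dlat", "dlon", "dwind"]]
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

      # One regressor per output: track increments and intensity increment.
      models = {col: GradientBoostingRegressor().fit(X_train, y_train[col]) for col in y.columns}

      # Roll the learned step forward from a held-out starting point (20 steps = 5 days).
      state = X_test.iloc[0].copy()
      track = [state.copy()]
      for _ in range(20):
          step = {c: m.predict(state.to_frame().T)[0] for c, m in models.items()}
          state["lat"] += step["dlat"]
          state["lon"] += step["dlon"]
          state["max_wind"] = max(state["max_wind"] + step["dwind"], 0.0)
          track.append(state.copy())
      print(pd.DataFrame(track).head())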

    “It performs very well. Depending on where you’re looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one, honestly,” Pintar said.
    They also used the model to generate sets of 100 years’ worth of hypothetical storms. It produced the simulations in a matter of seconds, and the authors saw a large degree of overlap with the general behavior of the HURDAT2 storms, suggesting that their model could rapidly produce collections of realistic storms.
    However, there were some discrepancies, such as in the Northeastern coastal states. In these regions, HURDAT2 data was sparse, and thus, the model generated less realistic storms.
    “Hurricanes are not as frequent in, say, Boston as in Miami, for example. The less data you have, the larger the uncertainty of your predictions,” Simiu said.
    As a next step, the team plans to use simulated hurricanes to develop coastal maps of extreme wind speeds as well as quantify uncertainty in those estimated speeds.
    Since the model’s understanding of storms is limited to historical data for now, it cannot simulate the effects that climate change will have on storms of the future. The traditional approach of simulating storms from the ground up is better suited to that task. However, in the short term, the authors are confident that wind maps based on their model — which is less reliant on elusive physical parameters than other models are — would better reflect reality.
    Within the next several years, they aim to produce and propose new maps for inclusion in building standards and codes.

  • Machine learning model helps forecasters improve confidence in storm prediction

    When severe weather is brewing and life-threatening hazards like heavy rain, hail or tornadoes are possible, advance warning and accurate predictions are of utmost importance. Colorado State University weather researchers have given storm forecasters a powerful new tool to improve confidence in their forecasts and potentially save lives.
    Over the last several years, Russ Schumacher, professor in the Department of Atmospheric Science and Colorado State Climatologist, has led a team developing a sophisticated machine learning model for advancing skillful prediction of hazardous weather across the continental United States. First trained on historical records of excessive rainfall, the model is now smart enough to make accurate predictions of events like tornadoes and hail four to eight days in advance — the crucial sweet spot for forecasters to get information out to the public so they can prepare. The model is called CSU-MLP, or Colorado State University-Machine Learning Probabilities.
    Led by research scientist Aaron Hill, who has worked on refining the model for the last two-plus years, the team recently published a paper on the model’s medium-range (four-to-eight-day) forecasting skill in the American Meteorological Society journal Weather and Forecasting.
    Working with Storm Prediction Center forecasters
    The researchers have now teamed with forecasters at the Storm Prediction Center in Norman, Oklahoma, to test the model and refine it based on practical considerations from working weather forecasters. The tool is not a stand-in for the invaluable skill of human forecasters, but rather provides an agnostic, confidence-boosting measure to help forecasters decide whether to issue public warnings about potentially hazardous weather.
    “Our statistical models can benefit operational forecasters as a guidance product, not as a replacement,” Hill said.

    Israel Jirak, M.S. ’02, Ph.D. ’05, is science and operations officer at the Storm Prediction Center and co-author of the paper. He called the collaboration with the CSU team “a very successful research-to-operations project.”
    “They have developed probabilistic machine learning-based severe weather guidance that is statistically reliable and skillful while also being practically useful for forecasters,” Jirak said. The forecasters in Oklahoma are using the CSU guidance product daily, particularly when they need to issue medium-range severe weather outlooks.
    Nine years of historical weather data
    The model is trained on a very large dataset containing about nine years of detailed historical weather observations over the continental U.S. These data are combined with meteorological retrospective forecasts, which are model “re-forecasts” created from outcomes of past weather events. The CSU researchers pulled the environmental factors from those model forecasts and associated them with past events of severe weather like tornadoes and hail. The result is a model that can run in real time with current weather events and produce a probability of those types of hazards with a four- to eight-day lead time, based on current environmental factors like temperature and wind.
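    The general recipe described here (pair reforecast environments with records of what actually happened, then train a model to output hazard probabilities) can be sketched briefly. The sketch below is an illustration under assumptions, not the CSU-MLP code: the file, the feature list, and the use of a random forest are placeholders standing in for the team’s own predictors and learning algorithm.

      # Sketch: probabilistic severe-weather guidance from reforecast environments.
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import brier_score_loss
      from sklearn.model_selection import train_test_split

      # One row per grid point and valid date: reforecast environment + observed outcome.
      data = pd.read_csv("reforecast_severe_reports.csv")   # hypothetical training table
      features = ["cape", "wind_shear", "temperature", "dewpoint", "precip_water"]
      X, y = data[features], data["severe_report"]          # y = 1 if tornado/hail was reported

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
      model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

      # The guidance is a probability, not a yes/no answer, for each location and lead time.
      probs = model.predict_proba(X_test)[:, 1]
      print("Brier score:", brier_score_loss(y_test, probs))

      # Which inputs matter most -- the kind of attribution question Mazurek is studying.
      for name, importance in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
          print(f"{name}: {importance:.3f}")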
    Ph.D. student Allie Mazurek is working on the project and is seeking to understand which atmospheric data inputs are the most important to the model’s predictive capabilities. “If we can better decompose how the model is making its predictions, we can hopefully better diagnose why the model’s predictions are good or bad during certain weather setups,” she said.
    Hill and Mazurek are working to make the model not only more accurate, but also more understandable and transparent for the forecasters using it.
    For Hill, it’s most gratifying to know that years of work refining the machine learning tool are now making a difference in a public, operational setting.
    “I love fundamental research. I love understanding new things about our atmosphere. But having a system that is providing improved warnings and improved messaging around the threat of severe weather is extremely rewarding,” Hill said.

  • Can a solid be a superfluid? Engineering a novel supersolid state from layered 2D materials

    A collaboration of Australian and European physicists predicts that layered electronic 2D semiconductors can host a curious quantum phase of matter called the supersolid.
    The supersolid is a very counterintuitive phase indeed. It is made up of particles that form a rigid crystal and yet, at the same time, flow without friction, since all the particles belong to the same single quantum state.
    A solid becomes ‘super’ when its quantum properties match the well-known quantum properties of superconductors. A supersolid simultaneously has two orders, solid and super: solid because of the spatially repeating pattern of particles, super because the particles can flow without resistance. “Although a supersolid is rigid, it can flow like a liquid without resistance,” explains lead author Dr Sara Conti (University of Antwerp).
    The study was conducted at UNSW (Australia), University of Antwerp (Belgium) and University of Camerino (Italy).
    A 50-Year Journey Towards the Exotic Supersolid
    Geoffrey Chester, a professor at Cornell University, predicted in 1970 that solid helium-4 under pressure should, at low temperatures, display two orders at once: crystalline solid order, with each helium atom at a specific point in a regularly ordered lattice; and, at the same time, Bose-Einstein condensation of the atoms, with every atom in the same single quantum state, so they flow without resistance.

    However, in the five decades since, the Chester supersolid has not been unambiguously detected.
    Alternative approaches have produced supersolid-like phases in cold-atom systems in optical lattices. These are either clusters of condensates or condensates with varying density determined by the trapping geometries. Such supersolid-like phases should be distinguished from the original Chester supersolid, in which each single particle is localised in its place in the crystal lattice purely by the forces acting between the particles.
    The new Australia-Europe study predicts that such a state could instead be engineered in two-dimensional (2D) electronic materials in a semiconductor structure, fabricated with two conducting layers separated by an insulating barrier of thickness d.
    One layer is doped with negatively-charged electrons and the other with positively-charged holes.
    The particles forming the supersolid are interlayer excitons, bound states of an electron and hole tied together by their strong electrical attraction. The insulating barrier prevents fast self-annihilation of the exciton bound pairs. Voltages applied to top and bottom metal ‘gates’ tune the average separation r0 between excitons.

    The research team predicts that excitons in this structure will form a supersolid over a wide range of layer separations and average separations between the excitons. The electrical repulsion between the excitons can constrain them into a fixed crystalline lattice.
    “A key novelty is that a supersolid phase with Bose-Einstein quantum coherence appears at layer separations much smaller than the separation predicted for the non-super exciton solid that is driven by the same electrical repulsion between excitons,” says co-corresponding author Prof David Neilson (University of Antwerp).
    “In this way, the supersolid pre-empts the non-super exciton solid. At still larger separations, the non-super exciton solid eventually wins, and the quantum coherence collapses.”
    “This is an extremely robust state, readily achievable in experimental setups,” adds co-corresponding author Prof Alex Hamilton (UNSW). “Ironically, the layer separations are relatively large and are easier to fabricate than the extremely small layer separations in such systems that have been the focus of recent experiments aimed at maximising the interlayer exciton binding energies.”
    As for detection, it is well known that a superfluid cannot be rotated until it can host a quantum vortex, analogous to a whirlpool. Forming this vortex requires a finite amount of energy, and hence a sufficiently strong rotational force; up to that point, the measured rotational moment of inertia (the extent to which an object resists rotational acceleration) remains zero. In the same way, a supersolid can be identified by detecting such an anomaly in its rotational moment of inertia.
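    A standard way to quantify this anomaly in the supersolid literature (a textbook definition, not a formula quoted from this particular study) is the non-classical rotational inertia fraction,

      f_{\mathrm{NCRI}} \;=\; \frac{I_{\mathrm{classical}} - I_{\mathrm{measured}}}{I_{\mathrm{classical}}},

    which is zero for a normal solid and grows toward one as more of the sample decouples from slow rotation because of its superfluid component.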
    The research team has reported the complete phase diagram of this system at low temperatures.
    “By changing the layer separation relative to the average exciton spacing, the strength of the exciton-exciton interactions can be tuned to stabilise either the superfluid, or the supersolid, or the normal solid,” says Dr Sara Conti.
    “The existence of a triple point is also particularly intriguing. At this point, the boundaries of supersolid and normal-solid melting, and the supersolid to normal-solid transition, all cross. There should be exciting physics coming from the exotic interfaces separating these domains, for example, Josephson tunnelling between supersolid puddles embedded in a normal background.”

  • Magnon-based computation could signal computing paradigm shift

    Like electronics or photonics, magnonics is an engineering subfield that aims to advance information technologies when it comes to speed, device architecture, and energy consumption. A magnon corresponds to the specific amount of energy required to change the magnetization of a material via a collective excitation called a spin wave.
    Because they interact with magnetic fields, magnons can be used to encode and transport data without electron flows, which involve energy loss through heating (known as Joule heating) of the conductor used. As Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics (LMGN) in the School of Engineering, explains, energy losses are an increasingly serious barrier to electronics as data speeds and storage demands soar.
    “With the advent of AI, the use of computing technology has increased so much that energy consumption threatens its development,” Grundler says. “A major issue is traditional computing architecture, which separates processors and memory. The signal conversions involved in moving data between different components slow down computation and waste energy.”
    This inefficiency, known as the memory wall or Von Neumann bottleneck, has had researchers searching for new computing architectures that can better support the demands of big data. And now, Grundler believes his lab might have stumbled on such a “holy grail.”
    While doing other experiments on a commercial wafer of the ferrimagnetic insulator yttrium iron garnet (YIG) with nanomagnetic strips on its surface, LMGN PhD student Korbinian Baumgaertl was inspired to develop precisely engineered YIG-nanomagnet devices. With the Center of MicroNanoTechnology’s support, Baumgaertl was able to excite spin waves in the YIG at specific gigahertz frequencies using radiofrequency signals, and — crucially — to reverse the magnetization of the surface nanomagnets.
    “The two possible orientations of these nanomagnets represent magnetic states 0 and 1, which allows digital information to be encoded and stored,” Grundler explains.

    A route to in-memory computation
    The scientists made their discovery using a conventional vector network analyzer, which sent a spin wave through the YIG-nanomagnet device. Nanomagnet reversal happened only when the spin wave hit a certain amplitude, and could then be used to write and read data.
    “We can now show that the same waves we use for data processing can be used to switch the magnetic nanostructures so that we also have nonvolatile magnetic storage within the very same system,” Grundler explains, adding that “nonvolatile” refers to the stable storage of data over long time periods without additional energy consumption.
    It’s this ability to process and store data in the same place that gives the technique its potential to change the current computing architecture paradigm by putting an end to the energy-inefficient separation of processors and memory storage, and achieving what is known as in-memory computation.
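    As a purely conceptual illustration of the write mechanism described above (a toy model, not a physical simulation of the YIG-nanomagnet device), the behaviour reported in the paper can be caricatured as an amplitude threshold: a spin-wave pulse strong enough to exceed the threshold reverses a nanomagnet and writes a bit, while a weaker pulse leaves the stored state untouched. The threshold value and class names below are arbitrary assumptions.

      # Toy amplitude-threshold model of spin-wave writing (conceptual only).
      SWITCHING_AMPLITUDE = 1.0          # arbitrary units

      class Nanomagnet:
          def __init__(self, state: int = 0):
              self.state = state         # magnetization orientation encodes 0 or 1

          def apply_spin_wave(self, amplitude: float) -> int:
              """A pulse above threshold reverses the magnet (writes a 1); weaker pulses do not."""
              if amplitude >= SWITCHING_AMPLITUDE:
                  self.state = 1         # one-way switch; toggling back is the team's next step
              return self.state

      bit = Nanomagnet()
      bit.apply_spin_wave(0.4)           # below threshold: state unchanged
      bit.apply_spin_wave(1.3)           # above threshold: nanomagnet reversed
      print(bit.state)                   # -> 1, now stored without further energy input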
    Optimization on the horizon
    Baumgaertl and Grundler have published the groundbreaking results in the journal Nature Communications, and the LMGN team is already working on optimizing their approach.
    “Now that we have shown that spin waves write data by switching the nanomagnets from states 0 to 1, we need to work on a process to switch them back again — this is known as toggle switching,” Grundler says.
    He also notes that theoretically, the magnonics approach could process data in the terahertz range of the electromagnetic spectrum (for comparison, current computers function in the slower gigahertz range). However, they still need to demonstrate this experimentally.
    “The promise of this technology for more sustainable computing is huge. With this publication, we are hoping to reinforce interest in wave-based computation, and attract more young researchers to the growing field of magnonics.”

  • Could changes in Fed’s interest rates affect pollution and the environment?

    Can monetary policy, such as the United States Federal Reserve raising interest rates, affect the environment? According to a new study by Florida Atlantic University’s College of Business, it can.
    Using a stylized dynamic aggregate demand-aggregate supply (AD-AS) model, researchers explored the consequences of traditional monetary tools — namely, changes in the short-term interest rate — for the environment. Specifically, they looked at how monetary policy impacts CO2 emissions in the short and long run. The AD-AS model conveys several interlocking relationships among the four macroeconomic goals of growth, unemployment, inflation, and a sustainable balance of trade.
    For the study, researchers also used the Global Vector AutoRegressive (GVAR) methodology, which interconnects regions using an explicit economic integration variable, in this case, bilateral trade, allowing for spillover effects.
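    The paper’s exact specification is not reproduced here, but in a standard GVAR setup each region gets its own small vector autoregression augmented with “foreign” variables built as trade-weighted averages of the other regions’ variables, which is what lets a shock in one region spill over to the rest. A first-order version looks like:

      x_{i,t} = a_i + \Phi_i x_{i,t-1} + \Lambda_{i,0}\, x^{*}_{i,t} + \Lambda_{i,1}\, x^{*}_{i,t-1} + \varepsilon_{i,t},
      \qquad x^{*}_{i,t} = \sum_{j \neq i} w_{ij}\, x_{j,t},

    where x_{i,t} stacks region i’s variables (for example, the short-term interest rate and CO2 emissions), the weights w_{ij} are bilateral trade shares, and the foreign variables x^{*}_{i,t} carry the cross-region spillovers.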
    Joao Ricardo Faria, Ph.D., co-author and a professor in the Economics Department within FAU’s College of Business, and collaborators from the Federal University of Ouro Preto and the University of São Paulo in Brazil examined four regions for the study: the U.S., the United Kingdom, Japan, and the Eurozone (the European Union countries that use the euro as their currency).
    In addition, they used data from eight other countries to characterize the international economy. Their method explicitly models the interplay among these economies to assess not only the domestic impact of a policy shift but also its repercussions for other economies.
    Results of the study, published in the journal Energy Economics, suggest that the impact of monetary policy on pollution is mostly domestic: a monetary contraction in one region reduces that region’s emissions, but the effect does not seem to spread to other economies. However, the findings do not imply that the international economy is irrelevant to determining a region’s emissions level.
    “The actions of a country, like the U.S., are not restricted to its borders. For example, a positive shock in the Federal Reserve’s monetary policy may cause adjustments in the whole system, including the carbon emissions of the other regions,” said Faria.
    The approach used in this study considered the U.S.’s own dynamics as well as the responses of other economies. Moreover, analysis of four distinct regions allowed researchers to verify and compare how domestic markets react to the same policy.
    The study also identified important differences across regions. For example, monetary policy does not seem to reduce short-run emissions in the U.K., or long-run emissions in the Eurozone. Moreover, the cointegration coefficient for Japan is much larger than those of the other regions, suggesting strong effects of monetary policy on CO2 emissions. Furthermore, cointegration analysis suggests a relationship between interest rates and emissions in the long run.
    Statistical analyses also suggest that external factors are relevant to understanding each region’s fluctuations in emissions. A large fraction of the fluctuations in domestic CO2 emissions come from external sources.
    “Findings from our study suggest efforts to reduce emissions can benefit from internationally coordinated policies,” said Faria. “Thus, the main policy prescription is to increase international coordination and efforts to reduce CO2 emissions. We realize that achieving coordination is not an easy endeavor despite international efforts to reduce carbon emissions, such as the Paris Agreement. Our paper highlights the payoffs of coordinated policies. We hope it motivates future research on how to achieve successful coordination.”

  • Preschoolers prefer to learn from a competent robot than an incompetent human

    Who do children prefer to learn from? Previous research has shown that even infants can identify the best informant. But would preschoolers prefer learning from a competent robot over an incompetent human?
    According to a new paper by Concordia researchers published in the Journal of Cognition and Development, the answer largely depends on age.
    The study compared two groups of preschoolers: one of three-year-olds, the other of five-year-olds. The children participated in Zoom meetings featuring a video of a young woman and a small humanoid robot called Nao (with a head, face, torso, arms and legs) sitting side by side. Between them were familiar objects that the robot would label correctly while the human would label them incorrectly, e.g., referring to a car as a book, a ball as a shoe and a cup as a dog.
    Next, the two groups of children were presented with unfamiliar items: the top of a turkey baster, a roll of twine and a silicone muffin container. Both the robot and the human used different nonsense terms like “mido,” “toma,” “fep” and “dax” to label the objects. The children were then asked what the object was called, endorsing either the label offered by the robot or by the human.
    While the three-year-olds showed no preference for one word over another, the five-year-olds were much more likely to state the term provided by the robot than the human.
    “We can see that by age five, children are choosing to learn from a competent teacher over someone who is more familiar to them — even if the competent teacher is a robot,” says the paper’s lead author, PhD candidate Anna-Elisabeth Baumann. Horizon Postdoctoral Fellow Elizabeth Goldman and undergraduate research assistant Alexandra Meltzer also contributed to the study. Professor and Concordia University Chair of Developmental Cybernetics Diane Poulin-Dubois in the Department of Psychology supervised the study.

    The researchers repeated the experiments with new groups of three- and five-year-olds, replacing the humanoid Nao with a small truck-shaped robot called Cozmo. The results resembled those observed with the human-like robot, suggesting that the robot’s morphology does not affect the children’s selective trust strategies.
    Baumann adds that, along with the labelling task, the researchers administered a naive biology task. The children were asked if biological organs or mechanical gears formed the internal parts of unfamiliar animals and robots. The three-year-olds appeared confused, assigning both biological and mechanical internal parts to the robots. However, the five-year-olds were much more likely to indicate that only mechanical parts belonged inside the robots.
    “This data tells us that the children will choose to learn from a robot even though they know it is not like them. They know that the robot is mechanical,” says Baumann.
    Being right is better than being human
    While there has been a substantial amount of literature on the benefits of using robots as teaching aides for children, the researchers note that most studies focus on a single robot informant or two robots pitted against each other. This study, they write, is the first to use both a human speaker and a robot to see if children deem social affiliation and similarity more important than competency when choosing which source to trust and learn from.
    Poulin-Dubois points out that this study builds on a previous paper she co-wrote with Goldman and Baumann. That paper shows that by age five, children treat robots similarly to how adults do, i.e., as depictions of social agents.
    “Older preschoolers know that robots have mechanical insides, but they still anthropomorphize them. Like adults, these children attribute certain human-like qualities to robots, such as the ability to talk, think and feel,” she says.
    “It is important to emphasize that we see robots as tools to study how children can learn from both human and non-human agents,” concludes Goldman. “As technology use increases, and as children interact with technological devices more, it is important for us to understand how technology can be a tool to help facilitate their learning.”