More stories

  • Vikings may have fled Greenland to escape rising seas

    In 1721, a Norwegian missionary set sail for Greenland in the hopes of converting the Viking descendants living there to Protestantism. When he arrived, the only traces he found of the Nordic society were ruins of settlements that had been abandoned 300 years earlier.

    There is no written record to explain why the Vikings left or died out. But a new simulation of Greenland’s coastline reveals that as the ice sheet covering most of the island started to expand around that time, sea levels rose drastically, researchers report December 15 at the American Geophysical Union’s fall meeting in New Orleans.

    These shifting coastlines would have inundated grazing areas and farmland, and could have helped bring about the end of the Nordic way of life in Greenland, says Marisa Borreggine, a geophysicist at Harvard University.

    Vikings first colonized Greenland in 985, when a group of settlers arrived in 14 ships led by Erik the Red, who had been banished from neighboring Iceland for manslaughter. Erik and his followers settled across southern Greenland, where they and their descendants hunted seals, grazed livestock, built churches and traded walrus ivory with European mainlanders.

    The settlers arrived during what’s known as the Medieval Warm Period, when conditions across Europe and Greenland were temperate for a handful of centuries (SN: 7/24/19). But by 1350, the climate had started taking a turn for the worse with the beginning of the Little Ice Age, a period of regional cooling that lasted well into the 19th century.

    Researchers have long speculated that a rapidly changing climate could have dealt a blow to Greenland’s Norse society. The island probably became much colder in the last 100 years of Norse occupation, says paleoclimatologist Boyang Zhao at Brown University in Providence, R.I., who was not involved in the new research. Lower temperatures could have made farming and raising livestock more difficult, he says.

    These lower temperatures would have had another impact on Greenland: the steady expansion of the island’s ice sheet, Borreggine and colleagues say.

    Though rising sea levels usually go hand in hand with ice melting from ice sheets, oceans do not rise and fall uniformly in every place, Borreggine says. Around Greenland, sea level tends to rise when the ice sheet there grows.

    This is for two main reasons: First, ice is heavy. The sheer weight of the ice sheet pushes the land it rests on down, meaning that as the ice sheet grows, more land is submerged. Second is gravity. Being massive, ice sheets exert some gravitational pull on nearby water. This makes the seawater around Greenland tilt upward toward the ice, meaning that water closer to the coast is higher than water in the open ocean. As the ice sheet grows, that pull becomes even stronger, and sea level close to the coast rises further.

    Simulating the impact of the weight of the ice and its tug on Greenland’s waters, Borreggine and their colleagues found that sea level rose enough to flood the coast, in some areas pushing the shoreline hundreds of meters inland. Between the time the Vikings arrived and when they left, there was “pretty intense coastal flooding, such that certain pieces of land that were connected to each other were no longer connected,” they say.

    Today, some Viking sites are being inundated as a result of the overall rise in global sea level from climate change, which is being only marginally offset around Greenland by its melting ice sheet. Something similar could have happened back in the 14th and 15th centuries, destroying land that the Norse relied on for farming and grazing, Borreggine says.

    “Previous theories about why Vikings left have really focused on the idea that they all died because it got really cold, and they were too dumb to adapt,” Borreggine says. But they say that archaeological digs have revealed a far more nuanced story, showing that Greenland’s Norse people did change their lifestyle by increasingly relying on seafood in the last century of their occupation.

    But learning to adapt may have been too difficult in the face of an increasingly harsh landscape. The idea that rising sea levels posed one such challenge has merit, Zhao says, noting that the reasons the Vikings disappeared from Greenland are nuanced.

    As the climate changed, for example, these people may have also found themselves increasingly cut off from trade routes as the season for thick sea ice lengthened. And by the mid-14th century, the Black Death was tearing through Europe, cutting into the Vikings’ biggest market for walrus ivory.

    “Norse people came and left,” Zhao says. “But there are still a lot of unsolved questions,” including why exactly they left, he says.

    The last written record of this society is a letter describing a wedding in 1408. A few years later, that couple moved to Iceland and started farming. Why the pair chose to leave is lost to history, but, as the new research suggests, sea level rise may have been part of the equation.

  • Using sparse data to predict lab earthquakes

    A machine-learning approach developed for sparse data reliably predicts fault slip in laboratory earthquakes and could be key to predicting fault slip and potentially earthquakes in the field. The research by a Los Alamos National Laboratory team builds on their previous success using data-driven approaches that worked for slow-slip events in earth but came up short on large-scale stick-slip faults that generate relatively little data — but big quakes.
    “The very long timescale between major earthquakes limits the data sets, since major faults may slip only once in 50 to 100 years or longer, meaning seismologists have had little opportunity to collect the vast amounts of observational data needed for machine learning,” said Paul Johnson, a geophysicist at Los Alamos and a co-author on a new paper, “Predicting Fault Slip via Transfer Learning,” in Nature Communications.
    To compensate for limited data, Johnson said, the team trained a convolutional neural network on the output of numerical simulations of laboratory quakes as well as on a small set of data from lab experiments. Then they were able to predict fault slips in the remaining unseen lab data.
    This research was the first application of transfer learning to numerical simulations for predicting fault slip in lab experiments, Johnson said, and no one has applied it to earth observations.
    With transfer learning, researchers can generalize from one model to another as a way of overcoming data sparsity. The approach allowed the Laboratory team to build on their earlier data-driven machine learning experiments successfully predicting slip in laboratory quakes and apply it to sparse data from the simulations. Specifically, in this case, transfer learning refers to training the neural network on one type of data — simulation output — and applying it to another — experimental data — with the additional step of training on a small subset of experimental data, as well.
    “Our aha moment came when I realized we can take this approach to earth,” Johnson said. “We can simulate a seismogenic fault in earth, then incorporate data from the actual fault during a portion of the slip cycle through the same kind of cross training.” The aim would be to predict fault movement in a seismogenic fault such as the San Andreas, where data is limited by infrequent earthquakes.
    The team first ran numerical simulations of the lab quakes. These simulations involve building a mathematical grid and plugging in values, some of them just good guesses, to simulate fault behavior.
    For this paper, the convolutional neural network comprised an encoder that boils down the output of the simulation to its key features, which are encoded in the model’s hidden, or latent, space between the encoder and decoder. Those features are the essence of the input data that can predict fault-slip behavior.
    The neural network decoded the simplified features to estimate the friction on the fault at any given time. In a further refinement of this method, the model’s latent space was additionally trained on a small slice of experimental data. Armed with this “cross-training,” the neural network predicted fault-slip events accurately when fed unseen data from a different experiment.
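    The cross-training recipe described above (pre-train on plentiful simulation output, then fine-tune on a small slice of experimental data before predicting on unseen experiments) can be sketched in a few lines. The sketch below is illustrative only: ridge regression stands in for the paper’s convolutional network, and the data generator, feature count and sim-to-lab offset are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_runs(n, noise, offset=0.0):
    """Hypothetical stand-in for simulation or lab output: windows of
    8 summary features x predicting a 'friction' target y."""
    x = rng.uniform(0, 1, size=(n, 8))
    w_true = np.linspace(1.0, -1.0, 8)
    y = x @ w_true + rng.normal(0, noise, n) + offset
    return x, y

# Step 1: pre-train on abundant simulated data (ridge regression here,
# standing in for the paper's convolutional network).
X_sim, y_sim = make_runs(5000, noise=0.05)
w = np.linalg.solve(X_sim.T @ X_sim + 1e-3 * np.eye(8), X_sim.T @ y_sim)

# Step 2: fine-tune on a small slice of "experimental" data whose
# statistics differ from the simulation by a systematic offset.
X_lab, y_lab = make_runs(50, noise=0.2, offset=0.3)
bias = (y_lab - X_lab @ w).mean()   # learn only a correction term

# Step 3: predict "friction" on unseen experimental data.
X_test, y_test = make_runs(200, noise=0.2, offset=0.3)
rmse_transfer = np.sqrt(np.mean((X_test @ w + bias - y_test) ** 2))
rmse_no_transfer = np.sqrt(np.mean((X_test @ w - y_test) ** 2))
```

    Even this crude correction shows the payoff: the fine-tuned predictor tracks the shifted experimental data better than the simulation-only model, which is the essence of transfer learning under data sparsity.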
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory. Note: Content may be edited for style and length.

  • Using ergonomics to reduce pain from technology use

    The use of smartphones, tablets and laptops has become commonplace throughout the world and is especially prevalent among college students. Recent studies have found that college students have higher levels of screen time and use multiple devices at higher rates than previous generations.
    With the increased use of these devices, especially smartphones, students tend to work in less traditional spaces, such as a couch or a chair with no desk, leading to an increase in musculoskeletal disorders in that age group. A team of Texas A&M researchers led by Mark E. Benden conducted a study looking at the technology students use, the postures they adopt when they use their devices, and the amount of pain the students were currently experiencing.
    Benden and his co-authors found that smartphones have become the most common link to educational materials though they have the least favorable control and display scenario from an ergonomic perspective. Additionally, the team concluded that regardless of device, ergonomic interventions focused on improving posture and facilitating stress management may reduce the likelihood of pain.
    The results of the team’s study were published recently in the open-access, peer-reviewed journal BMC Public Health.
    “When we started this study a few years ago it was because we had determined that college students were the heavy users of smartphones,” Benden said. “Now those same levels we were concerned about in college students are seen in 40-year-olds and college students have increased to new levels.”
    Benden, professor and head of the Department of Environmental and Occupational Health (EOH) at the Texas A&M University School of Public Health and director of the Ergo Center, co-authored the study with EOH associate professors Adam Pickens, S. Camille Peres and Matthew Lee Smith; Ranjana Mehta, associate professor in the Wm Michael Barnes ’64 Department of Industrial & Systems Engineering; Brett Harp, a recent EOH graduate; and Samuel Towne Jr., adjunct assistant professor at the School of Public Health.
    The research team used a 35-minute online survey that asked participants about their technology use, posture when using the technology, current level of pain or discomfort, and their activity and stress levels.
    Among the respondents, 64 percent indicated that their smartphone was the electronic device they used most frequently, followed by laptops, tablets and desktop computers. On average, the students used their smartphone 4.4 hours per day, and they indicated that when doing so, they were more likely to do so on the couch or at a chair with no desk.
    “It is amazing to consider how quickly smartphones have become the dominant tech device in our daily lives with little research into how that level of use would impact our health,” Benden said.
    The researchers found that posture components and stress more consistently contributed to the pain reported by the students, not the variables associated with the devices they were using.
    Still, the researchers point out that in our increasingly technology-focused society, efforts are needed to ensure that pain is deferred or delayed until an individual’s later years to preserve the productivity of the workforce.
    “Now that we are moving toward hybrid and/or remote workspaces for our jobs, college students are taking habits formed in dorm and apartment rooms during college into young adulthood as employees in home offices,” Benden said. “We need to get this right or it could have adverse impacts on an entire generation.”
    Story Source:
    Materials provided by Texas A&M University. Original written by Tim Schnettler.

  • Magnetic ‘hedgehogs’ could store big data in a small space

    Atomic-scale magnetic patterns resembling a hedgehog’s spikes could result in hard disks with massively larger capacities than today’s devices, a new study suggests. The finding could help data centers keep up with the exponentially increasing demand for video and cloud data storage.
    In a study published today in the journal Science, researchers at The Ohio State University used a magnetic microscope to visualize the patterns, formed in thin films of an unusual magnetic material, manganese germanide. Unlike familiar magnets such as iron, the magnetism in this material follows helices, similar to the structure of DNA. This leads to a new zoo of magnetic patterns with names such as hedgehogs, anti-hedgehogs, skyrmions and merons that can be much smaller than today’s magnetic bits.
    “These new magnetic patterns could be used for next-generation data storage,” said Jay Gupta, senior author of the study and a professor of physics at Ohio State. “The density of storage in hard disks is approaching its limits, related to how small you can make the magnetic bits that allow for that storage. And that’s motivated us to look for new materials, where we might be able to make the magnetic bits much smaller.”
    To visualize the magnetic patterns, Gupta and his team used a scanning tunneling microscope in his lab, modified with special tips. This microscope provides pictures of the magnetic patterns with atomic resolution. Their images revealed that in certain parts of the sample, the magnetism at the surface was twisted into a pattern resembling the spikes of a hedgehog. However, in this case the “body” of the hedgehog is only 10 nanometers wide, which is much smaller than today’s magnetic bits (about 50 nanometers), and nearly impossible to visualize. By comparison, a single human hair is about 80,000 nanometers thick.
    The research team also found that the hedgehog patterns could be shifted on the surface with electric currents, or inverted with magnetic fields. This foreshadows the reading and writing of magnetic data, potentially using much less energy than currently possible.
    “There is enormous potential for these magnetic patterns to allow data storage to be more energy efficient,” Gupta said, though he cautions that there is more research to do before the material could be put into use on a data storage site. “We have a huge amount of fundamental science still to do about understanding these magnetic patterns and improving how we control them. But this is a very exciting step.”
    This research was funded by the Defense Advanced Research Projects Agency, a research division of the U.S. Department of Defense. Other Ohio State researchers who co-authored this study include Jacob Repicky, Po-Kuwan Wu, Tao Liu, Joseph Corbett, Tiancong Zhu, Shuyu Cheng, Adam Ahmed, Mohit Randeria and Roland Kawakami.
    Story Source:
    Materials provided by Ohio State University. Original written by Laura Arenschield.

  • Redrawing the lines: Growing inexpensive, high-quality iron-based superconductors

    Superconducting materials show zero electrical resistance at low temperatures, which allows them to conduct “supercurrents” without dissipation. Recently, a group of scientists led by Dr. Kazumasa Iida from Nagoya University, Japan, developed an inexpensive, scalable way to produce high-temperature superconductors using “grain boundary engineering” techniques. The new method could help develop stronger, less expensive superconductors with higher operating temperatures and impactful technological applications.
    Key to the dissipation-free conduction of currents in superconductors in the presence of a magnetic field is a property called “pinning potential.” Pinning describes how defects in the superconducting matrix pin vortices against the Lorentz force. Controlling the micro-structure of the material allows for careful introduction of defects into the material to form “artificial pinning centers” (APCs), which can then improve its properties. The most common approach to introducing such defects into superconductors is “ion irradiation.” However, ion irradiation is both complicated and expensive.
    In their study published in NPG Asia Materials, Professor Iida and his research team successfully grew a thin film superconductor that has a surprisingly high pinning efficiency without APCs. “Crystalline materials are made up of different regions with different crystalline orientations called ‘grains.’ When the angle between the boundaries of different grains in the material is less than their critical angle, θc, we call it a ‘low-angle grain boundary (LAGB).’ LAGBs contribute to magnetic flux pinning, which enhances the properties of the superconductor,” explains Dr. Iida.
    Iron (Fe)-based superconductors (FBS) are considered to be the next-generation superconductor technology. In their study, Professor Iida and team grew an FBS called “potassium (K)-doped BaFe2As2 (Ba122)” using a technique called “molecular beam epitaxy,” in which the superconductor is grown on a substrate. “The difficulties involved in controlling volatile potassium made the realization of epitaxial K-doped Ba122 challenging, but we succeeded in growing the thin films on fluoride substrates,” says Dr. Iida.
    The team then characterized the FBS using transmission electron microscopy and found that the film was composed of columnar grains approximately 30-60 nm wide. These grains were rotated around the crystallographic principal axes by angles well within θc for K-doped Ba122 and formed LAGB networks.
    The researchers then performed measurements of the thin film’s electrical resistivity and magnetic properties. They observed that the thin films had a surprisingly high critical current (the maximum current a superconductor can carry before it transitions to a dissipative state). The LAGB networks further ensured a strong pinning efficiency in the material. “The in-field properties obtained in our study are comparable to that of ion-irradiated K-doped Ba122. Moreover, grain boundary engineering is a simple technique and can be scaled up for industrial applications,” comments Dr. Iida.
    The findings of this study could accelerate the development of strong magnets using superconductors, leading to advances in magnetic resonance imaging (MRI). The widespread application of MRI is currently limited by the high investment and operational cost of the MRI machines due to the cooling costs of the superconductors within. But with simple and inexpensive techniques such as grain boundary engineering for fabricating superconductors, MRIs could become more accessible to patients, improving our quality of life.
    Story Source:
    Materials provided by Nagoya University.

  • Rollercoaster of emotions: Exploring emotions with virtual reality

    To the left and right, the landscape drifts idly by; the track stretches out in front of you. Suddenly, a fire. The tension builds. The ride reaches its highest point. Only one thing lies ahead: the abyss. Plummeting down into the depths of the earth. These are scenes of a rollercoaster ride as experienced by participants in a recent study at the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig. However, not in real life, but virtually, with the help of virtual reality (VR) glasses. The aim of the research was to find out what happens in participants’ brains while they experience emotionally engaging situations.
    Until now, researchers have used highly simplified experiments to study how the human brain processes emotions: they would show participants photos of emotional scenes and record their brain activity. The studies took place under controlled laboratory conditions, so that the results could be easily compared. However, the simulated situations were usually not particularly emotionally arousing and were far removed from the experiences we normally have. This is because emotions are continuously created through an interplay of past experiences and the various external influences with which we interact. When studying emotions, it is therefore particularly important to create situations that feel as real as possible. Only then can we assume that the measured brain activation comes close to what occurs in real life outside the laboratory. VR glasses provide a remedy here: through them, participants can immerse themselves dynamically and interactively in situations and experience them close to reality. Emotions are thus evoked in a more natural way.
    The results of the current study showed that the degree to which a person is emotionally aroused can be seen in a specific form of rhythmic brain activity, the so-called alpha oscillations. Accordingly, the lower the strength of this oscillation in the measured EEG signal, the higher the arousal. “The findings thus confirm earlier investigations from classical experiments and prove that the signals also occur under conditions that are closer to everyday life,” says Simon M. Hofmann, one of the authors of the underlying study, which has now appeared in the scientific journal eLife. “Using alpha oscillations, we were able to predict how strongly a person experiences a situation emotionally. Our models learned which brain areas are particularly important for this prediction. Roughly speaking, the less alpha activity measured here, the more aroused the person is,” explains author Felix Klotzsche.
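    The quantity behind these predictions, alpha-band power, is easy to illustrate. In the sketch below, everything is an assumption for illustration (synthetic signals, invented amplitudes, a typical 250 Hz sampling rate): spectral power in the 8-12 Hz alpha band is computed with a Fourier transform, and the “aroused” trace, whose alpha rhythm is suppressed, yields the lower band power.

```python
import numpy as np

fs = 250                     # sampling rate in Hz (assumed, typical for EEG)
t = np.arange(0, 4, 1 / fs)  # one 4-second analysis window
rng = np.random.default_rng(1)

def alpha_power(signal, fs):
    """Mean spectral power in the 8-12 Hz alpha band."""
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = (freqs >= 8) & (freqs <= 12)
    return power[band].mean()

# Two synthetic 'EEG' traces: a calm one with a strong 10 Hz alpha rhythm
# and an 'aroused' one in which that rhythm is suppressed (amplitudes invented).
calm = 3.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
aroused = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

# Per the study's finding, the suppressed-alpha trace is the more aroused one.
lower_alpha_means_aroused = alpha_power(aroused, fs) < alpha_power(calm, fs)
```

    In the study itself this relationship runs through trained machine-learning models rather than a single comparison; the sketch only shows the raw quantity those models consume.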
    “In the future, it could be possible to apply these findings and methods to practical applications beyond basic research,” adds author Alberto Mariola. VR glasses, for example, are increasingly being used in psychological therapy. Neurophysiological information about the emotional state of patients could lead to an improvement in treatment. Therapists could, for instance, directly gain an insight into the current emotional feeling during an exposure situation without having to ask the patient directly and thus interrupt the situation.
    The scientists investigated these relationships using electroencephalography (EEG), which allowed them to record participants’ brain waves during the virtual rollercoaster ride and determine what happens in the brain as it unfolds. Afterward, while watching a video of the experience, the subjects rated how aroused they had been over the course of the ride. In this way, the researchers wanted to find out whether the subjective sensations during the ride correlate with the measured brain activity. Since people differ in how much they like to ride rollercoasters, it was irrelevant whether the situation was perceived as positive or negative; what mattered was the strength of the sensation.
    For the evaluation, the researchers used three different machine learning models to predict the subjective sensations as accurately as possible from the EEG data. The authors thereby showed that with the help of these approaches, the connection between EEG signals and emotional feelings can also be confirmed under naturalistic conditions.
    Story Source:
    Materials provided by Max Planck Institute for Human Cognitive and Brain Sciences.

  • Mind-controlled robots now one step closer

    Two EPFL research groups teamed up to develop a machine-learning program that can be connected to a human brain and used to command a robot. The program adjusts the robot’s movements based on electrical signals from the brain. The hope is that with this invention, tetraplegic patients will be able to carry out more day-to-day activities on their own.
    Tetraplegic patients are prisoners of their own bodies, unable to speak or perform the slightest movement. Researchers have been working for years to develop systems that can help these patients carry out some tasks on their own. “People with a spinal cord injury often experience permanent neurological deficits and severe motor disabilities that prevent them from performing even the simplest tasks, such as grasping an object,” says Prof. Aude Billard, the head of EPFL’s Learning Algorithms and Systems Laboratory. “Assistance from robots could help these people recover some of their lost dexterity, since the robot can execute tasks in their place.”
    Prof. Billard carried out a study with Prof. José del R. Millán, who at the time was the head of EPFL’s Brain-Machine Interface laboratory but has since moved to the University of Texas. The two research groups have developed a computer program that can control a robot using electrical signals emitted by a patient’s brain. No voice control or touch function is needed; patients can move the robot simply with their thoughts. The study has been published in Communications Biology, an open-access journal from Nature Portfolio.
    Avoiding obstacles
    To develop their system, the researchers started with a robotic arm that had been developed several years ago. This arm can move back and forth from right to left, reposition objects in front of it and get around objects in its path. “In our study we programmed a robot to avoid obstacles, but we could have selected any other kind of task, like filling a glass of water or pushing or pulling an object,” says Prof. Billard.
    The engineers began by improving the robot’s mechanism for avoiding obstacles so that it would be more precise. “At first, the robot would choose a path that was too wide for some obstacles, taking it too far away, and not wide enough for others, keeping it too close,” says Carolina Gaspar Pinto Ramos Correia, a PhD student at Prof. Billard’s lab. “Since the goal of our robot was to help paralyzed patients, we had to find a way for users to be able to communicate with it that didn’t require speaking or moving.”
    An algorithm that can learn from thoughts
    This entailed developing an algorithm that could adjust the robot’s movements based only on a patient’s thoughts. The algorithm was connected to a headcap equipped with electrodes for running electroencephalogram (EEG) scans of a patient’s brain activity. To use the system, all the patient needs to do is look at the robot. If the robot makes an incorrect move, the patient’s brain will emit an “error message” through a clearly identifiable signal, as if the patient is saying “No, not like that.” The robot will then understand that what it’s doing is wrong — but at first it won’t know exactly why. For instance, did it get too close to, or too far away from, the object?
    To help the robot find the right answer, the error message is fed into the algorithm, which uses an inverse reinforcement learning approach to work out what the patient wants and what actions the robot needs to take. This is done through a trial-and-error process whereby the robot tries out different movements to see which one is correct. The process goes pretty quickly — only three to five attempts are usually needed for the robot to figure out the right response and execute the patient’s wishes.
    “The robot’s AI program can learn rapidly, but you have to tell it when it makes a mistake so that it can correct its behavior,” says Prof. Millán. “Developing the detection technology for error signals was one of the biggest technical challenges we faced.” Iason Batzianoulis, the study’s lead author, adds: “What was particularly difficult in our study was linking a patient’s brain activity to the robot’s control system — or in other words, ‘translating’ a patient’s brain signals into actions performed by the robot. We did that by using machine learning to link a given brain signal to a specific task. Then we associated the tasks with individual robot controls so that the robot does what the patient has in mind.”
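    A toy version of that trial-and-error loop can make the idea concrete. Everything below is hypothetical (the preferred clearance, the tolerance band and the candidate sweep are invented for the example): the robot receives only a binary error signal, never the reason for the error, and keeps trying movements until the signal stops.

```python
PREFERRED = 0.30  # patient's desired obstacle clearance in metres (unknown to the robot)
TOLERANCE = 0.05  # attempts within this band draw no error signal

def error_signal(clearance):
    """Stand-in for the EEG error-potential detector: it says only
    'wrong', never whether the robot was too close or too far."""
    return abs(clearance - PREFERRED) > TOLERANCE

def adapt_clearance(candidates):
    """Try candidate movements until one draws no complaint."""
    for attempts, clearance in enumerate(candidates, start=1):
        if not error_signal(clearance):
            return clearance, attempts
    return None, len(candidates)

# The robot sweeps a coarse set of clearances, narrowing in on what works.
clearance, attempts = adapt_clearance([0.9, 0.6, 0.1, 0.45, 0.3])
```

    Here an acceptable movement is found on the fifth attempt, in line with the three to five attempts reported for the real system; the actual work replaces this fixed sweep with inverse reinforcement learning over the decoded error signals.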
    Next step: a mind-controlled wheelchair
    The researchers hope to eventually use their algorithm to control wheelchairs. “For now there are still a lot of engineering hurdles to overcome,” says Prof. Billard. “And wheelchairs pose an entirely new set of challenges, since both the patient and the robot are in motion.” The team also plans to use their algorithm with a robot that can read several different kinds of signals and coordinate data received from the brain with those from visual motor functions.
    Story Source:
    Materials provided by Ecole Polytechnique Fédérale de Lausanne. Original written by Valérie Geneux.

  • Giving bug-like bots a boost

    When it comes to robots, bigger isn’t always better. Someday, a swarm of insect-sized robots might pollinate a field of crops or search for survivors amid the rubble of a collapsed building.
    MIT researchers have demonstrated diminutive drones that can zip around with bug-like agility and resilience, and that could eventually perform these tasks. The soft actuators that propel these microrobots are very durable, but they require much higher voltages than similarly sized rigid actuators. The featherweight robots can’t carry the necessary power electronics that would allow them to fly on their own.
    Now, these researchers have pioneered a fabrication technique that enables them to build soft actuators that operate with 75 percent lower voltage than current versions while carrying 80 percent more payload. These soft actuators are like artificial muscles that rapidly flap the robot’s wings.
    This new fabrication technique produces artificial muscles with fewer defects, which dramatically extends the lifespan of the components and increases the robot’s performance and payload.
    “This opens up a lot of opportunity in the future for us to transition to putting power electronics on the microrobot. People tend to think that soft robots are not as capable as rigid robots. We demonstrate that this robot, weighing less than a gram, flies for the longest time with the smallest error during a hovering flight. The take-home message is that soft robots can exceed the performance of rigid robots,” says Kevin Chen, who is the D. Reid Weedon, Jr. ’41 assistant professor in the Department of Electrical Engineering and Computer Science, the head of the Soft and Micro Robotics Laboratory in the Research Laboratory of Electronics (RLE), and the senior author of the paper.
    Chen’s coauthors include Zhijian Ren and Suhan Kim, co-lead authors and EECS graduate students; Xiang Ji, a research scientist in EECS; Weikun Zhu, a chemical engineering graduate student; Farnaz Niroui, an assistant professor in EECS; and Jing Kong, a professor in EECS and principal investigator in RLE. The research has been accepted for publication in Advanced Materials and is included in the journal’s Rising Stars series, which recognizes outstanding works from early-career researchers.