More stories

  • Novel magnetic spray transforms objects into millirobots for biomedical applications

    Researchers have developed an easy way to make millirobots by coating objects with a glue-like magnetic spray, in joint research led by a scientist from City University of Hong Kong (CityU). Driven by a magnetic field, the coated objects can crawl, walk or roll on different surfaces. Because the magnetic coating is biocompatible and can be disintegrated into powder when needed, the technology shows potential for biomedical applications, including catheter navigation and drug delivery.
    The research team is led by Dr Shen Yajing, Associate Professor of the Department of Biomedical Engineering (BME) at CityU in collaboration with the Shenzhen Institutes of Advanced Technology (SIAT), Chinese Academy of Sciences (CAS). The research findings have been published in the scientific journal Science Robotics, titled “An agglutinate magnetic spray transforms inanimate objects into millirobots for biomedical applications.”
    Transforming objects into millirobots with a “magnetic coat”
    Scientists have been developing millirobots or insect-scale robots that can adapt to different environments for exploration and biomedical applications.
    Dr Shen’s research team came up with a simple approach to construct millirobots by coating objects with a composited glue-like magnetic spray, called M-spray. “Our idea is that by putting on this ‘magnetic coat’, we can turn any objects into a robot and control their locomotion. The M-spray we developed can stick on the targeted object and ‘activate’ the object when driven by a magnetic field,” explained Dr Shen.
    Composed of polyvinyl alcohol (PVA), gluten and iron particles, M-spray can adhere instantly, stably and firmly to the rough or smooth surfaces of one-dimensional (1D), two-dimensional (2D) or three-dimensional (3D) objects. The film it forms on the surface is only about 0.1 to 0.25 mm thick, thin enough to preserve the original size, form and structure of the object.
    After coating an object with M-spray, the researchers magnetised it with single or multiple magnetisation directions, which determine how the object moves under a magnetic field. They then heated the object until the coating solidified.
    In this way, when driven by a magnetic field, the coated objects become millirobots with different locomotion modes, such as crawling, flipping, walking and rolling, on surfaces ranging from glass, skin and wood to sand. The team demonstrated this by converting a cotton thread (1D), origami (2D flat plane), a polydimethylsiloxane (PDMS) film (2D curved/soft surface) and a plastic pipe (3D round object) into a soft reptile robot, a multi-foot robot, a walking robot and a rolling robot, respectively.
    On-demand reprogramming to change locomotion mode
    What makes this approach special is that the team can reprogramme the millirobot’s locomotion mode on demand.
    Mr Yang Xiong, co-first author of the paper, explained that a robot’s structure is conventionally fixed once it is constructed, which constrains its versatility of motion. However, by fully wetting the solidified M-spray coating so that it becomes adhesive like glue, and then applying a strong magnetic field, the distribution and alignment direction of the magnetic particles in the coating (the easy magnetisation axis) can be changed.
    Their experiments showed that the same millirobot could switch between different locomotion modes, such as from a faster 3D caterpillar movement in a spacious environment to a slower 2D concertina movement for passing through a narrow gap.
    Navigating ability and disintegrable property
    This reprogrammable actuation feature is also helpful for navigating towards targets. To explore its potential in biomedical applications, the team carried out experiments with a catheter, a device widely inserted into the body to treat disease or perform surgical procedures. They demonstrated that an M-spray-coated catheter could perform sharp or smooth turns, and that the impact of blood/liquid flow on the coated catheter’s motion and stability was limited.
    By reprogramming the M-spray coating on different sections of a cotton thread according to the delivery task and environment, they further showed that the thread could steer quickly and pass smoothly through an irregular, narrow structure. Dr Shen pointed out that, from the viewpoint of clinical application, this can prevent unexpected plunging into the throat wall during insertion. “Task-based reprogramming offers promising potential for catheter manipulation in the complex esophagus, vessel and urethra where navigation is always required,” he said.
    Another important feature of this technology is that the M-spray coating can be disintegrated into powder on demand by manipulating a magnetic field. “All the raw materials of M-spray, namely PVA, gluten and iron particles, are biocompatible. The disintegrated coating could be absorbed or excreted by the human body,” said Dr Shen, stressing that the side effects of M-spray’s disintegration are negligible.
    Successful drug delivery in rabbit stomach
    To further verify the feasibility and effectiveness of M-spray-enabled millirobots for drug delivery, the team conducted an in vivo test with rabbits and a capsule coated with M-spray. During the delivery process, the rabbits were anaesthetised and the position of the capsule in the stomach was tracked by radiographic imaging. When the capsule reached the targeted region, the researchers disintegrated the coating by applying an oscillating magnetic field. “The controllable disintegration property of M-spray enables the drug to be released in a targeted location rather than scattering in the organ,” Dr Shen added.
    Though the M-spray coating starts to disintegrate in about eight minutes in a strongly acidic environment (pH 1), the team showed that an additional PVA layer on the surface of the coating could prolong this to about 15 minutes. If the iron particles were replaced with nickel particles, the coating remained stable in a strongly acidic environment even after 30 minutes.
    “Our experimental results indicated that different millirobots could be constructed with the M-spray, adapted to various environments, surface conditions and obstacles. We hope this construction strategy can contribute to the development and application of millirobots in different fields, such as active transportation, movable sensors and devices, particularly for tasks in limited space,” said Dr Shen.
    The research was supported by the National Science Foundation of China and the Research Grants Council of Hong Kong.

  • Curved origami provides new range of stiffness-to-flexibility in robots

    New research that employs curved origami structures has dramatic implications for the development of robotics, providing tunable flexibility — the ability to adjust stiffness based on function — that historically has been difficult to achieve with simple designs.
    “The incorporation of curved origami structures into robotic design provides a remarkable possibility in tunable flexibility, or stiffness as its complementary concept,” explained Hanqing Jiang, a mechanical engineering professor at Arizona State University. “High flexibility, or low stiffness, is comparable to the soft landing navigated by a cat. Low flexibility, or high stiffness, is similar to executing a hard jump in a pair of stiff boots,” he said.
    Jiang is the lead author of a paper, “In Situ Stiffness Manipulation Using Elegant Curved Origami,” published this week in Science Advances. “Curved origami can add both strength and cat-like flexibility to robotic actions,” he said.
    Jiang also compared employing curved origami to the operational differences between sporty cars sought by drivers who want to feel the rigidity of the road and vehicles desired by those who seek a comfortable ride that alleviates jarring movements. “Similar to switching between a sporty car mode to a comfortable ride mode, these curved origami structures will simultaneously offer a capability to on-demand switch between soft and hard modes depending on how the robots interact with the environment,” he said.
    Robotics requires a variety of stiffness modes: high rigidity is necessary for lifting weights; high flexibility is needed for impact absorption; and negative stiffness, the ability to quickly release stored energy like a spring, is needed for sprinting.
    Traditionally, the mechanisms for accommodating variations in rigidity can be bulky and offer only a nominal range, whereas curved origami can compactly support an expanded stiffness range with on-demand flexibility. The structures covered in Jiang and his team’s research combine the folding energy at the origami creases with the bending of the panel, tuned by switching among multiple curved creases between two points.
    Curved origami enables a single robot to accomplish a variety of movements. A pneumatic swimming robot developed by the team can perform nine different movements, including fast, medium, slow, linear and rotational motions, simply by adjusting which creases are used.
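    Conceptually, this crease-based mode switching behaves like a lookup from a desired movement to a set of engaged creases. The toy Python sketch below uses entirely hypothetical crease identifiers and mode names (not the authors' designs) to illustrate treating crease selection as a small configuration table:

```python
from enum import Enum

class Mode(Enum):
    """Illustrative movement modes; the paper reports nine in total."""
    FAST_LINEAR = "fast linear"
    SLOW_LINEAR = "slow linear"
    ROTATIONAL = "rotational"

# Hypothetical lookup: each movement mode is realised by engaging a
# particular subset of curved creases. Crease IDs are placeholders.
CREASE_PATTERNS = {
    Mode.FAST_LINEAR: {"crease_A"},
    Mode.SLOW_LINEAR: {"crease_B"},
    Mode.ROTATIONAL: {"crease_A", "crease_C"},
}

def creases_for(mode: Mode) -> set:
    """Return which curved creases to engage for the requested movement."""
    return CREASE_PATTERNS[mode]

# Example: configure the swimmer for a rotational movement.
print(creases_for(Mode.ROTATIONAL))
```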
    In addition to applications for robotics, the curved origami research principles are also relevant for the design of mechanical metamaterials in the fields of electromagnetics, automobile and aerospace components, and biomedical devices. “The beauty of this work is that the design of curved origami is very similar, just by changing the straight creases to curved creases, and each curved crease corresponds to a particular flexibility,” Jiang said.
    The research was funded by the Mechanics of Materials and Structures program of the National Science Foundation. Authors contributing to the paper are Hanqing Jiang, Zirui Zhai and Lingling Wu from the School for Engineering, Matter, Transport and Energy at Arizona State University, and Yong Wang and Ken Lin from the Department of Engineering Mechanics at Zhejiang University, China.

    Story Source:
    Materials provided by Arizona State University.

  • Deep learning helps robots grasp and move objects with ease

    In the past year, lockdowns and other COVID-19 safety measures have made online shopping more popular than ever, but the skyrocketing demand is leaving many retailers struggling to fulfill orders while ensuring the safety of their warehouse employees.
    Researchers at the University of California, Berkeley, have created new artificial intelligence software that gives robots the speed and skill to grasp and smoothly move objects, making it feasible for them to soon assist humans in warehouse environments. The technology is described in a paper published online today (Wednesday, Nov. 18) in the journal Science Robotics.
    Automating warehouse tasks can be challenging because many actions that come naturally to humans — like deciding where and how to pick up different types of objects and then coordinating the shoulder, arm and wrist movements needed to move each object from one location to another — are actually quite difficult for robots. Robotic motion also tends to be jerky, which can increase the risk of damaging both the products and the robots.
    “Warehouses are still operated primarily by humans, because it’s still very hard for robots to reliably grasp many different objects,” said Ken Goldberg, William S. Floyd Jr. Distinguished Chair in Engineering at UC Berkeley and senior author of the study. “In an automobile assembly line, the same motion is repeated over and over again, so that it can be automated. But in a warehouse, every order is different.”
    In earlier work, Goldberg and UC Berkeley postdoctoral researcher Jeffrey Ichnowski created a Grasp-Optimized Motion Planner that could compute both how a robot should pick up an object and how it should move to transfer the object from one location to another.
    However, the motions generated by this planner were jerky. While the parameters of the software could be tweaked to generate smoother motions, these calculations took an average of about half a minute to compute.
    In the new study, Goldberg and Ichnowski, in collaboration with UC Berkeley graduate student Yahav Avigal and undergraduate student Vishal Satish, dramatically sped up the computing time of the motion planner by integrating a deep learning neural network.
    Neural networks allow a robot to learn from examples. Later, the robot can often generalize to similar objects and motions.
    However, these approximations aren’t always accurate enough. Goldberg and Ichnowski found that the approximation generated by the neural network could then be optimized using the motion planner.
    “The neural network takes only a few milliseconds to compute an approximate motion. It’s very fast, but it’s inaccurate,” Ichnowski said. “However, if we then feed that approximation into the motion planner, the motion planner only needs a few iterations to compute the final motion.”
    By combining the neural network with the motion planner, the team cut average computation time from 29 seconds to 80 milliseconds, or less than one-tenth of a second.
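    The pattern described here, a fast learned approximation handed to a slower optimizer for refinement, is a general warm-starting idea. A minimal Python sketch of that pattern, with hypothetical `policy_net` and `refine_trajectory` stand-ins rather than the Berkeley team's actual software, might look like this:

```python
def plan_motion(grasp_pose, place_pose, policy_net, refine_trajectory,
                max_iters=5):
    """Warm-start pattern: a learned model proposes a rough trajectory in
    milliseconds, then a conventional motion planner polishes it.

    `policy_net` and `refine_trajectory` are hypothetical stand-ins for a
    trained neural approximator and a grasp-optimized motion planner.
    """
    # Fast but approximate: one forward pass through the network.
    trajectory = policy_net(grasp_pose, place_pose)

    # Slow but exact: the optimizer converges in a few iterations because
    # it starts near a good solution instead of from scratch.
    for _ in range(max_iters):
        trajectory, converged = refine_trajectory(trajectory,
                                                  grasp_pose, place_pose)
        if converged:
            break
    return trajectory
```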
    Goldberg predicts that, with this and other advances in robotic technology, robots could be assisting in warehouse environments in the next few years.
    “Shopping for groceries, pharmaceuticals, clothing and many other things has changed as a result of COVID-19, and people are probably going to continue shopping this way even after the pandemic is over,” Goldberg said. “This is an exciting new opportunity for robots to support human workers.”

    Story Source:
    Materials provided by University of California – Berkeley. Original written by Kara Manke.

  • Versatile building blocks make structures with surprising mechanical properties

    Researchers at MIT’s Center for Bits and Atoms have created tiny building blocks that exhibit a variety of unique mechanical properties, such as the ability to produce a twisting motion when squeezed. These subunits could potentially be assembled by tiny robots into a nearly limitless variety of objects with built-in functionality, including vehicles, large industrial parts, or specialized robots that can be repeatedly reassembled in different forms.
    The researchers created four different types of these subunits, called voxels (a 3D variation on the pixels of a 2D image). Each voxel type exhibits special properties not found in typical natural materials, and in combination they can be used to make devices that respond to environmental stimuli in predictable ways. Examples might include airplane wings or turbine blades that respond to changes in air pressure or wind speed by changing their overall shape.
    The findings, which detail the creation of a family of discrete “mechanical metamaterials,” are described in a paper published today in the journal Science Advances, authored by recent MIT doctoral graduate Benjamin Jenett PhD ’20, Professor Neil Gershenfeld, and four others.
    Metamaterials get their name because their large-scale properties are different from the microlevel properties of their component materials. They are used in electromagnetics and as “architected” materials, which are designed at the level of their microstructure. “But there hasn’t been much done on creating macroscopic mechanical properties as a metamaterial,” Gershenfeld says.
    With this approach, engineers should be able to build structures incorporating a wide range of material properties — and produce them all using the same shared production and assembly processes, Gershenfeld says.
    The voxels are assembled from flat frame pieces of injection-molded polymers, then combined into three-dimensional shapes that can be joined into larger functional structures. They are mostly open space and thus provide an extremely lightweight but rigid framework when assembled. Besides the basic rigid unit, which provides an exceptional combination of strength and light weight, there are three other variations of these voxels, each with a different unusual property.
    The “auxetic” voxels have a strange property: when a cube of the material is compressed, instead of bulging out at the sides it actually bulges inward. This is the first demonstration of such a material produced through conventional and inexpensive manufacturing methods.
    There are also “compliant” voxels, with a Poisson’s ratio of zero. This is somewhat similar to the auxetic property, but in this case, when the material is compressed, the sides do not change shape at all. Few known materials exhibit this property, which can now be produced through this new approach.
    Finally, “chiral” voxels respond to axial compression or stretching with a twisting motion. Again, this is an uncommon property; research that produced one such material through complex fabrication techniques was hailed last year as a significant finding. This work makes this property easily accessible at macroscopic scales.
    “Each type of material property we’re showing has previously been its own field,” Gershenfeld says. “People would write papers on just that one property. This is the first thing that shows all of them in one single system.”
    To demonstrate the real-world potential of large objects constructed in a LEGO-like manner out of these mass-produced voxels, the team, working in collaboration with engineers at Toyota, produced a functional super-mileage race car, which they demonstrated in the streets during an international robotics conference earlier this year.
    They were able to assemble the lightweight, high-performance structure in just a month, Jenett says, whereas building a comparable structure using conventional fiberglass construction methods had previously taken a year.
    During the demonstration, the streets were slick from rain, and the race car ended up crashing into a barrier. To the surprise of everyone involved, the car’s lattice-like internal structure deformed and then bounced back, absorbing the shock with little damage. A conventionally built car, Jenett says, would likely have been severely dented if it were made of metal, or shattered if it were made of composite.
    The car provided a vivid demonstration of the fact that these tiny parts can indeed be used to make functional devices at human-sized scales. And, Gershenfeld points out, in the structure of the car, “these aren’t parts connected to something else. The whole thing is made out of nothing but these parts,” except for the motors and power supply.
    Because the voxels are uniform in size and composition, they can be combined in any way needed to provide different functions for the resulting device. “We can span a wide range of material properties that before now have been considered very specialized,” Gershenfeld says. “The point is that you don’t have to pick one property. You can make, for example, robots that bend in one direction and are stiff in another direction and move only in certain ways. And so, the big change over our earlier work is this ability to span multiple mechanical material properties, that before now have been considered in isolation.”
    Jenett, who carried out much of this work as the basis for his doctoral thesis, says “these parts are low-cost, easily produced, and very fast to assemble, and you get this range of properties all in one system. They’re all compatible with each other, so there’s all these different types of exotic properties, but they all play well with each other in the same scalable, inexpensive system.”
    “Think about all the rigid parts and moving parts in cars and robots and boats and planes,” Gershenfeld says. “And we can span all of that with this one system.”
    A key factor is that a structure made up of one type of these voxels will behave exactly the same way as the subunit itself, Jenett says. “We were able to demonstrate that the joints effectively disappear when you assemble the parts together. It behaves as a continuum, monolithic material.”
    Whereas robotics research has tended to be divided between hard and soft robots, “this is very much neither,” Gershenfeld says, because of its potential to mix and match these properties within a single device.
    One possible early application of this technology, Jenett says, could be building the blades of wind turbines. As these structures become ever larger, transporting the blades to their operating site becomes a serious logistical issue, whereas if they are assembled on site from thousands of tiny subunits, that transportation issue disappears. Similarly, the disposal of used turbine blades is already becoming a serious problem because of their large size and lack of recyclability; but blades made up of tiny voxels could be disassembled on site, and the voxels reused to make something else.
    In addition, the blades themselves could be more efficient, because a mix of mechanical properties designed into the structure would allow them to respond dynamically and passively to changes in wind strength, he says.
    Overall, Jenett says, “Now we have this low-cost, scalable system, so we can design whatever we want to. We can do quadrupeds, we can do swimming robots, we can do flying robots. That flexibility is one of the key benefits of the system.”
    The research team included Filippos Tourlomousis, Alfonso Parra Rubio, and Megan Ochalek at MIT, and Christopher Cameron at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory and the Center for Bits and Atoms Consortia.

  • Fostering creativity in researchers: How automation can revolutionize materials research

    At the heart of many past scientific breakthroughs lies the discovery of novel materials. However, the cycle of synthesizing, testing, and optimizing new materials routinely takes scientists long hours of hard work, and many potentially useful materials with exotic properties remain undiscovered as a result. But what if we could automate the entire development process for novel materials using robotics and artificial intelligence, making it much faster?
    In a recent study published in APL Materials, scientists from Tokyo Institute of Technology (Tokyo Tech), Japan, led by Associate Professor Ryota Shimizu and Professor Taro Hitosugi, devised a strategy that could make fully autonomous materials research a reality. Their work is centered on the idea of laboratory equipment that is 'CASH' (Connected, Autonomous, Shared, High-throughput). With a CASH setup in a materials laboratory, researchers need only decide which material properties they want to optimize and feed the system the necessary ingredients; the automated system then takes control and repeatedly prepares and tests new compounds until the best one is found. Using machine-learning algorithms, the system draws on knowledge from previous cycles to decide how the synthesis conditions should be changed to approach the desired outcome.
    To demonstrate that CASH is a feasible strategy in solid-state materials research, Associate Prof Shimizu and his team created a proof-of-concept system comprising a robotic arm surrounded by several modules. Their setup was geared towards minimizing the electrical resistance of a titanium dioxide thin film by adjusting the deposition conditions, so the modules were a sputter-deposition apparatus and a device for measuring resistance. The robotic arm transferred samples from module to module as needed, and the system autonomously predicted the synthesis parameters for the next iteration based on previous data, using a Bayesian optimization algorithm for the prediction.
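    As a rough illustration of the closed loop just described (a sketch of the general approach, not the Tokyo Tech group's code), a Bayesian-optimization cycle over deposition parameters can be written with a Gaussian-process surrogate; `deposit_and_measure` below is a toy stand-in for the robotic deposition and resistance measurement:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def deposit_and_measure(params):
    """Toy stand-in for one robotic cycle (deposit a TiO2 film with the
    given sputtering parameters, then measure its resistance). Here it
    just returns a synthetic resistance landscape with a little noise."""
    return float(np.sum((params - 0.3) ** 2)) + 0.01 * rng.standard_normal()

def cash_loop(bounds, n_init=5, n_cycles=20, n_candidates=1000):
    """Minimal Bayesian-optimization loop: fit a Gaussian-process surrogate
    to past (parameters, resistance) data, then pick the next synthesis
    condition by minimizing a lower confidence bound."""
    lo, hi = np.array(bounds).T
    dim = len(bounds)

    # Seed the surrogate with a few random initial experiments.
    X = rng.uniform(lo, hi, size=(n_init, dim))
    y = np.array([deposit_and_measure(x) for x in X])

    for _ in range(n_cycles):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

        # Score random candidate conditions; mean - 2*std favours low
        # predicted resistance while still exploring uncertain regions.
        candidates = rng.uniform(lo, hi, size=(n_candidates, dim))
        mean, std = gp.predict(candidates, return_std=True)
        next_x = candidates[np.argmin(mean - 2.0 * std)]

        # Run the next automated deposition + measurement, update the data.
        X = np.vstack([X, next_x])
        y = np.append(y, deposit_and_measure(next_x))

    best = np.argmin(y)
    return X[best], y[best]

# Example: optimize two normalized deposition parameters on [0, 1].
best_params, best_resistance = cash_loop(bounds=[(0.0, 1.0), (0.0, 1.0)])
print(best_params, best_resistance)
```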
    Amazingly, their CASH setup managed to produce and test about twelve samples per day, a tenfold increase in throughput compared to what scientists can manually achieve in a conventional laboratory. In addition to this significant increase in speed, one of the main advantages of the CASH strategy is the possibility of creating huge shared databases describing how material properties vary according to synthesis conditions. In this regard, Prof Hitosugi remarks: “Today, databases of substances and their properties remain incomplete. With the CASH approach, we could easily complete them and then discover hidden material properties, leading to the discovery of new laws of physics and resulting in insights through statistical analysis.”
    The research team believes that the CASH approach will bring about a revolution in materials science. Databases generated quickly and effortlessly by CASH systems will be combined into big data, and scientists will use advanced algorithms to process them and extract human-understandable expressions. However, as Prof Hitosugi notes, machine learning and robotics alone cannot find insights or discover concepts in physics and chemistry. “The training of future materials scientists must evolve; they will need to understand what machine learning can solve and set the problem accordingly. The strength of human researchers lies in creating concepts or identifying problems in society. Combining those strengths with machine learning and robotics is very important,” he says.
    Overall, this perspective article highlights the tremendous benefits that automation could bring to materials science. If the weight of repetitive tasks is lifted off the shoulders of researchers, they will be able to focus more on uncovering the secrets of the material world for the benefit of humanity.

    Story Source:
    Materials provided by Tokyo Institute of Technology.

  • Researchers establish proof of principle in superconductor study

    Three physicists in the Department of Physics and Astronomy at the University of Tennessee, Knoxville, together with their colleagues from the Southern University of Science and Technology and Sun Yat-sen University in China, have successfully modified a semiconductor to create a superconductor.
    Professor and Department Head Hanno Weitering, Associate Professor Steve Johnston, and PhD candidate Tyler Smith were part of the team that made the breakthrough in fundamental research, which may lead to unforeseen advancements in technology.
    Semiconductors behave as electrical insulators under some conditions but conduct electrical current under others. They are an essential component in many of the electronic circuits used in everyday items, including mobile phones, digital cameras, televisions, and computers.
    As technology has progressed, so has the development of semiconductors, allowing the fabrication of electronic devices that are smaller, faster, and more reliable.
    Superconductors, first discovered in 1911, allow electrical charges to move without resistance, so current flows without any energy loss. Although scientists are still exploring practical applications, superconductors are currently used most widely in MRI machines.
    Using a silicon semiconductor platform — which is the standard for nearly all electronic devices — Weitering and his colleagues used tin to create the superconductor.
    “When you have a superconductor and you integrate it with a semiconductor, there are also new types of electronic devices that you can make,” Weitering stated.
    Superconductors are typically discovered by accident; the development of this novel superconductor is the first example of an atomically thin superconductor being intentionally created on a conventional semiconductor template, exploiting the knowledge base of high-temperature superconductivity in doped ‘Mott insulating’ copper oxide materials.
    “The entire approach — doping a Mott insulator, the tin on silicon — was a deliberate strategy. Then came proving we’re seeing the properties of a doped Mott insulator as opposed to anything else and ruling out other interpretations. The next logical step was demonstrating superconductivity, and lo and behold, it worked,” Weitering said.
    “Discovery of new knowledge is a core mission of UT,” Weitering stated. “Although we don’t have an immediate application for our superconductor, we have established a proof of principle, which may lead to future practical applications.”

    Story Source:
    Materials provided by University of Tennessee at Knoxville.

  • Parental restrictions on tech use have little lasting effect into adulthood

    “Put your phone away!” “No more video games!” “Ten more minutes of YouTube and you’re done!”
    Kids growing up in the mobile internet era have heard them all, often uttered by well-meaning parents fearing long-term problems from overuse.
    But new University of Colorado Boulder research suggests such restrictions have little effect on technology use later in life, and that fears of widespread and long-lasting “tech addiction” may be overblown.
    “Are lots of people getting addicted to tech as teenagers and staying addicted as young adults? The answer from our research is ‘no’,” said lead author Stefanie Mollborn, a professor of sociology at the Institute of Behavioral Science. “We found that there is only a weak relationship between early technology use and later technology use, and what we do as parents matters less than most of us believe it will.”
    The study, which analyzes a survey of nearly 1,200 young adults plus extensive interviews with another 56, is the first to use such data to examine how digital technology use evolves from childhood to adulthood.
    The data were gathered prior to the pandemic, which has resulted in dramatic increases in the use of technology as millions of students have been forced to attend school and socialize online. But the authors say the findings should come as some comfort to parents worried about all that extra screen time.
    “This research addresses the moral panic about technology that we so often see,” said Joshua Goode, a doctoral student in sociology and co-author of the paper. “Many of those fears were anecdotal, but now that we have some data, they aren’t bearing out.”
    Published in Advances in Life Course Research, the paper is part of a 4-year National Science Foundation-funded project aimed at exploring how the mobile internet age truly is shaping America’s youth.
    Since 1997, time spent with digital technology has risen 32% among 2- to 5-year-olds and 23% among 6- to 11-year-olds, the team’s previous papers found. Even before the pandemic, adolescents spent 33 hours per week using digital technology outside of school.
    For the latest study, the research team shed light on young adults ages 18 to 30, interviewing dozens of people about their current technology use, their tech use as teens and how their parents or guardians restricted or encouraged it. The researchers also analyzed survey data from a nationally representative sample of nearly 1,200 participants, following the same people from adolescence to young adulthood.
    Surprisingly, parenting practices like setting time limits or prohibiting kids from watching shows during mealtimes had no effect on how much the study subjects used technology as young adults, researchers found.
    Those study subjects who grew up with fewer devices in the home or spent less time using technology as kids tended to spend slightly less time with tech in young adulthood — but statistically, the relationship was weak.
    What does shape how much time young adults spend on technology? Life in young adulthood, the research suggests.
    Young adults who hang out with a lot of people who are parents spend more time with tech (perhaps as a means of sharing parenting advice). Those whose friends are single tend towards higher use than the married crowd. College students, meanwhile, tend to believe they spend more time with technology than they ever have before or ever plan to again, the study found.
    “They feel like they are using tech a lot because they have to, they have it under control and they see a future when they can use less of it,” said Mollborn.
    From the dawn of comic books and silent movies to the birth of radio and TV, technological innovation has bred moral panic among older generations, the authors note.
    “We see that everyone is drawn to it, we get scared and we assume it is going to ruin today’s youth,” said Mollborn.
    In some cases, excess can have downsides. For instance, the researchers found that adolescents who play a lot of video games tend to get less physical activity.
    But digital technology use does not appear to crowd out sleep among teens, as some had feared, and use of social media or online videos doesn’t squeeze out exercise.
    In many ways, Goode notes, teens today are just swapping one form of tech for another, streaming YouTube instead of watching TV, or texting instead of talking on the phone.
    That is not to say that no one ever gets addicted, or that parents should never set limits or talk to their kids about technology’s pros and cons, Mollborn stresses.
    “What these data suggest is that the majority of American teens are not becoming irrevocably addicted to technology. It is a message of hope.”
    She recently launched a new study, interviewing teens and parents in the age of COVID-19. Interestingly, she said, parents seem less worried about their kids’ tech use during the pandemic than they were in the past.
    “They realize that kids need social interaction and the only way to get that right now is through screens. Many of them are saying, ‘Where would we be right now without technology?’”

  • For neural research, wireless chip shines light on the brain

    Researchers have developed a chip that is powered wirelessly and can be surgically implanted to read neural signals and stimulate the brain with both light and electrical current. The technology has been demonstrated successfully in rats and is designed for use as a research tool.
    “Our goal was to create a research tool that can be used to help us better understand the behavior of different regions of the brain, particularly in response to various forms of neural stimulation,” says Yaoyao Jia, corresponding author of a paper on the work and an assistant professor of electrical and computer engineering at North Carolina State University. “This tool will help us answer fundamental questions that could then pave the way for advances in addressing neurological disorders such as Alzheimer’s or Parkinson’s disease.”
    The new technology has two features that set it apart from the previous state of the art.
    First, it is fully wireless. Researchers can power the 5 mm × 3 mm chip, which has an integrated power-receiver coil, by applying an electromagnetic field. For example, in testing the researchers did with lab rats, the electromagnetic field surrounded each rat’s cage — so the device was fully powered regardless of what the rat was doing. The chip is also capable of sending and receiving information wirelessly.
    The second feature is that the chip is trimodal, meaning that it can perform three tasks.
    Current state-of-the-art neural interface chips of this kind can do two things: they can read neural signals in targeted regions of the brain by detecting electrical changes in those regions; and they can stimulate the brain by introducing a small electrical current into the brain tissue.
    The new chip can do both of those things, but it can also shine light onto the brain tissue — a function called optical stimulation. For optical stimulation to work, however, the targeted neurons must first be genetically modified to make them respond to specific wavelengths of light.
    “When you use electrical stimulation, you have little control over where the electrical current goes,” Jia says. “But with optical stimulation, you can be far more precise, because you have only modified those neurons that you want to target in order to make them sensitive to light. This is an active field of research in neuroscience, but the field has lacked the electronic tools it needs to move forward. That’s where this work comes in.”
    In other words, by helping researchers (literally) shine a light on neural tissue, the new chip will help them (figuratively) shine a light on how the brain works.
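    To make the trimodal operation concrete, a hypothetical command structure for such a chip might look like the Python sketch below; the field names and values are illustrative assumptions, not the actual device protocol described in the paper:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Mode(Enum):
    RECORD = auto()           # read neural signals (detect electrical changes)
    ELECTRICAL_STIM = auto()  # inject a small current into brain tissue
    OPTICAL_STIM = auto()     # shine light on genetically targeted neurons

@dataclass
class ChipCommand:
    """Hypothetical wireless command packet for a trimodal neural chip.
    Field names are illustrative only, not the device's real protocol."""
    mode: Mode
    channel: int                           # target electrode or light site
    amplitude_uA: Optional[float] = None   # current for electrical stimulation
    wavelength_nm: Optional[float] = None  # light colour for optical stimulation
    duration_ms: float = 1.0

# Example: optical stimulation of light-sensitized neurons on channel 3.
cmd = ChipCommand(mode=Mode.OPTICAL_STIM, channel=3,
                  wavelength_nm=470.0, duration_ms=5.0)
print(cmd)
```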

    Story Source:
    Materials provided by North Carolina State University.