More stories

  • Diamonds are a quantum scientist's best friend

    Diamonds have a firm foothold in our lexicon. Their many properties often serve as superlatives for quality, clarity and hardness. Aside from the popularity of this rare material in ornamental and decorative use, these precious stones are also highly valued in industry, where they are used to cut and polish other hard materials and to build radiation detectors.
    More than a decade ago, a new property was uncovered in diamond: when high concentrations of boron are introduced, it becomes superconducting. Superconductivity occurs when two electrons with opposite spin form a pair (called a Cooper pair), reducing the electrical resistance of the material to zero. This means a large supercurrent can flow through the material, bringing with it the potential for advanced technological applications. Yet little work has been done since to investigate and characterise the nature of diamond's superconductivity and, therefore, its potential applications.
    New research led by Professor Somnath Bhattacharyya in the Nano-Scale Transport Physics Laboratory (NSTPL) in the School of Physics at the University of the Witwatersrand in Johannesburg, South Africa, details the phenomenon of what is called “triplet superconductivity” in diamond. Triplet superconductivity occurs when electrons move in a composite spin state rather than as a single pair. This is an extremely rare, yet efficient form of superconductivity that until now has only been known to occur in one or two other materials, and only theoretically in diamonds.
    “In a conventional superconducting material such as aluminium, superconductivity is destroyed by magnetic fields and magnetic impurities, however triplet superconductivity in a diamond can exist even when combined with magnetic materials. This leads to more efficient and multifunctional operation of the material,” explains Bhattacharyya.
    The team’s work has recently been published in the New Journal of Physics, in an article titled “Effects of Rashba-spin-orbit coupling on superconducting boron-doped nanocrystalline diamond films: evidence of interfacial triplet superconductivity.” This research was done in collaboration with Oxford University (UK) and Diamond Light Source (UK). Through these collaborations, the team was able to visualise atomic arrangements of diamond crystals and interfaces that had never been seen before, supporting the first claims of ‘triplet’ superconductivity.
    Practical proof of triplet superconductivity in diamonds came with much excitement for Bhattacharyya and his team. “We were even working on Christmas day, we were so excited,” says Davie Mtsuko. “This is something that has never before been claimed in diamond,” adds Christopher Coleman. Both Mtsuko and Coleman are co-authors of the paper.
    Despite diamonds’ reputation as a highly rare and expensive resource, they can be manufactured in a laboratory using a specialised piece of equipment called a vapour deposition chamber. The Wits NSTPL has developed its own plasma deposition chamber, which allows the team to grow diamonds of higher than normal quality, making them ideal for this kind of advanced research.
    This finding expands the potential uses of diamond, which is already well-regarded as a quantum material. “All conventional technology is based on semiconductors associated with electron charge. Thus far, we have a decent understanding of how they interact, and how to control them. But when we have control over quantum states such as superconductivity and entanglement, there is a lot more physics to the charge and spin of electrons, and this also comes with new properties,” says Bhattacharyya. “With the new surge of superconducting materials such as diamond, traditional silicon technology can be replaced by cost effective and low power consumption solutions.”
    The induction of triplet superconductivity in diamond is important for more than just its potential applications. It speaks to our fundamental understanding of physics. “Thus far, triplet superconductivity exists mostly in theory, and our study gives us an opportunity to test these models in a practical way,” says Bhattacharyya.

    Story Source:
    Materials provided by University of the Witwatersrand.

  • Faster COVID-19 testing with simple algebraic equations

    A mathematician from Cardiff University has developed a new method for processing large volumes of COVID-19 tests which he believes could lead to significantly more tests being performed at once and results being returned much quicker.
    Dr Usama Kadri, from the University’s School of Mathematics, believes the new technique could allow many more patients to be tested using the same number of test tubes, with a lower chance of false negatives occurring.
    Dr Kadri’s technique, which has been published in the journal Health Systems, uses simple algebraic equations to identify positive samples in tests and takes advantage of a testing technique known as ‘pooling’.
    Pooling involves grouping a large number of samples from different patients into one test tube and performing a single test on that tube.
    If a tube comes back negative, then everyone in that group is known to be free of the virus.
    Pooling allows laboratories to test more samples in a shorter space of time, and works well when the overall infection rate in a population is expected to be low. If a tube comes back positive, however, each person within that group needs to be tested again, this time individually, to determine who has the virus.

    In this instance, and particularly when it is known that infection rates in the population are high, the savings from the pooling technique in terms of time and cost become less significant.
    However, Dr Kadri’s new technique removes the need to perform a second round of tests once a batch is returned positive and can identify the individuals who have the virus using simple equations.
    The technique works with a fixed number of individuals and test tubes, for example 200 individuals and 10 test tubes, and starts by taking a fixed number of samples from a single individual, for example 5, and distributing these into 5 of the 10 test tubes.
    Another 5 samples are taken from the second individual and these are distributed into a different combination of 5 of the 10 tubes.
    This is then repeated for each of the 200 individuals in the group so that no individual shares the same combination of tubes.

    Each of the 10 test tubes is then sent for testing and any tube that returns negative indicates that all patients that have samples in that tube must be negative.
    If only one individual has the virus, then the combination of tubes that return positive, which is unique to that individual, directly identifies them.
    However, if the number of positive tubes is larger than the number of tubes each individual's samples were spread across (5 in this example), then there must be at least two individuals with the virus.
    All individuals whose assigned tubes have returned positive are then selected as candidates.
    The method assumes that each positive individual contributes the same quantity of virus to every tube containing their sample, and that each positive individual carries a viral quantity distinct from the others.
    From this, the method first assumes that there are exactly two individuals with the virus and, for every pair of candidates, a computer checks whether some combination of viral quantities would reproduce the overall quantities actually measured in the tubes.
    If the right combination is found, then those two individuals must be positive and no one else. Otherwise, the procedure is repeated with an additional suspected individual, and so on until the right combination is found.
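    The combinatorial assignment at the heart of the method is straightforward to sketch. The snippet below is a minimal illustration, written for this article rather than taken from Dr Kadri's published work, using the example figures above (200 individuals, 10 tubes, 5 samples per person); the quantitative step that separates multiple positives using the measured viral loads is not shown.

    ```python
    from itertools import combinations

    N_TUBES = 10    # total test tubes
    K = 5           # tubes receiving a sample from each individual
    N_PEOPLE = 200  # C(10, 5) = 252 >= 200, so each person gets a unique combination

    # Assign a distinct combination of K tubes to each individual.
    assignments = list(combinations(range(N_TUBES), K))[:N_PEOPLE]

    def positive_tubes(infected):
        """Return the set of tubes that would test positive for the given infected individuals."""
        tubes = set()
        for person in infected:
            tubes.update(assignments[person])
        return tubes

    # Example: individual 42 is the only positive case.
    observed = positive_tubes({42})

    # With a single positive, exactly one assignment matches the observed pattern.
    suspects = [i for i, combo in enumerate(assignments) if set(combo) == observed]
    print(suspects)  # [42]
    ```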
    “Applying the proposed method allows testing many more patients using the same number of testing tubes, where all positives are identified with no false negatives, and no need for a second round of independent testing, with the effective testing time reduced drastically,” Dr Kadri said.
    So far, the method has been assessed using simulations of testing scenarios and Dr Kadri acknowledges that lab testing will need to be carried out to increase confidence in the proposed method.
    Moreover, for clinical use, additional factors need to be considered including sample types, viral load, prevalence, and inhibitor substances.

  • Applying artificial intelligence to science education

    A new review published in the Journal of Research in Science Teaching highlights the potential of machine learning — a subset of artificial intelligence — in science education. Although the authors initiated their review before the COVID-19 outbreak, the pandemic highlights the need to examine cutting-edge digital technologies as we re-think the future of teaching and learning.
    Based on a review of 47 studies, investigators developed a framework to conceptualize machine learning applications in science assessment. The article aims to examine how machine learning has revolutionized the capacity of science assessment in terms of tapping into complex constructs, improving assessment functionality, and facilitating scoring automaticity.
    Based on their investigation, the researchers identified various ways in which machine learning has transformed traditional science assessment, as well as anticipated impacts that it will likely have in the future (such as providing personalized science learning and changing the process of educational decision-making).
    “Machine learning is increasingly impacting every aspect of our lives, including education,” said lead author Xiaoming Zhai, an assistant professor in the Department of Mathematics and Science Education at the University of Georgia’s Mary Frances Early College of Education. “It is anticipated that the cutting-edge technology may be able to redefine science assessment practices and significantly change education in the future.”

    Story Source:
    Materials provided by Wiley.

  • Deep learning takes on synthetic biology

    DNA and RNA have been compared to “instruction manuals” containing the information needed for living “machines” to operate. But while electronic machines like computers and robots are designed from the ground up to serve a specific purpose, biological organisms are governed by a much messier, more complex set of functions that lack the predictability of binary code. Inventing new solutions to biological problems requires teasing apart seemingly intractable variables — a task that is daunting to even the most intrepid human brains.
    Two teams of scientists from the Wyss Institute at Harvard University and the Massachusetts Institute of Technology have devised pathways around this roadblock by going beyond human brains; they developed a set of machine learning algorithms that can analyze reams of RNA-based “toehold” sequences and predict which ones will be most effective at sensing and responding to a desired target sequence. As reported in two papers published concurrently today in Nature Communications, the algorithms could be generalizable to other problems in synthetic biology as well, and could accelerate the development of biotechnology tools to improve science and medicine and help save lives.
    “These achievements are exciting because they mark the starting point of our ability to ask better questions about the fundamental principles of RNA folding, which we need to know in order to achieve meaningful discoveries and build useful biological technologies,” said Luis Soenksen, Ph.D., a Postdoctoral Fellow at the Wyss Institute and Venture Builder at MIT’s Jameel Clinic who is a co-first author of the first of the two papers.
    Getting ahold of toehold switches
    The collaboration between data scientists from the Wyss Institute’s Predictive BioAnalytics Initiative and synthetic biologists in Wyss Core Faculty member Jim Collins’ lab at MIT was created to apply the computational power of machine learning, neural networks, and other algorithmic architectures to complex problems in biology that have so far defied resolution. As a proving ground for their approach, the two teams focused on a specific class of engineered RNA molecules: toehold switches, which are folded into a hairpin-like shape in their “off” state. When a complementary RNA strand binds to a “trigger” sequence trailing from one end of the hairpin, the toehold switch unfolds into its “on” state and exposes sequences that were previously hidden within the hairpin, allowing ribosomes to bind to and translate a downstream gene into protein molecules. This precise control over the expression of genes in response to the presence of a given molecule makes toehold switches very powerful components for sensing substances in the environment, detecting disease, and other purposes.
    However, many toehold switches do not work very well when tested experimentally, even though they have been engineered to produce a desired output in response to a given input based on known RNA folding rules. Recognizing this problem, the teams decided to use machine learning to analyze a large volume of toehold switch sequences and use insights from that analysis to more accurately predict which toeholds reliably perform their intended tasks, which would allow researchers to quickly identify high-quality toeholds for various experiments.

    The first hurdle they faced was that there was no dataset of toehold switch sequences large enough for deep learning techniques to analyze effectively. The authors took it upon themselves to generate a dataset that would be useful to train such models. “We designed and synthesized a massive library of toehold switches, nearly 100,000 in total, by systematically sampling short trigger regions along the entire genomes of 23 viruses and 906 human transcription factors,” said Alex Garruss, a Harvard graduate student working at the Wyss Institute who is a co-first author of the first paper. “The unprecedented scale of this dataset enables the use of advanced machine learning techniques for identifying and understanding useful switches for immediate downstream applications and future design.”
    Armed with enough data, the teams first employed tools traditionally used for analyzing synthetic RNA molecules to see if they could accurately predict the behavior of toehold switches now that there were manifold more examples available. However, none of the methods they tried — including mechanistic modeling based on thermodynamics and physical features — were able to predict with sufficient accuracy which toeholds functioned better.
    A picture is worth a thousand base pairs
    The researchers then explored various machine learning techniques to see if they could create models with better predictive abilities. The authors of the first paper decided to analyze toehold switches not as sequences of bases, but rather as two-dimensional “images” of base-pair possibilities. “We know the baseline rules for how an RNA molecule’s base pairs bond with each other, but molecules are wiggly — they never have a single perfect shape, but rather a probability of different shapes they could be in,” said Nicolaas Angenent-Mari, an MIT graduate student working at the Wyss Institute and co-first author of the first paper. “Computer vision algorithms have become very good at analyzing images, so we created a picture-like representation of all the possible folding states of each toehold switch, and trained a machine learning algorithm on those pictures so it could recognize the subtle patterns indicating whether a given picture would be a good or a bad toehold.”
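    As a rough, hypothetical illustration of this "image" representation (not the authors' actual pipeline, which uses predicted folding probabilities), one can build a simple two-dimensional map of which positions in a sequence could pair with which others:

    ```python
    import numpy as np

    # Watson-Crick and wobble pairs for RNA.
    PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

    def pairing_image(seq: str) -> np.ndarray:
        """Return an L x L matrix with 1 where bases i and j could pair, else 0."""
        L = len(seq)
        img = np.zeros((L, L), dtype=np.float32)
        for i in range(L):
            for j in range(L):
                if (seq[i], seq[j]) in PAIRS:
                    img[i, j] = 1.0
        return img

    img = pairing_image("GGGAUACCAGCCGAAAGGCCCUUGGCAGCUCC")  # made-up 32-nt sequence
    print(img.shape)  # (32, 32) -- a single-channel "picture" a vision model can be trained on
    ```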
    Another benefit of their visually-based approach is that the team was able to “see” which parts of a toehold switch sequence the algorithm “paid attention” to the most when determining whether a given sequence was “good” or “bad.” They named this interpretation approach Visualizing Secondary Structure Saliency Maps, or VIS4Map, and applied it to their entire toehold switch dataset. VIS4Map successfully identified physical elements of the toehold switches that influenced their performance, and allowed the researchers to conclude that toeholds with more potentially competing internal structures were “leakier” and thus of lower quality than those with fewer such structures, providing insight into RNA folding mechanisms that had not been discovered using traditional analysis techniques.

    “Being able to understand and explain why certain tools work or don’t work has been a secondary goal within the artificial intelligence community for some time, but interpretability needs to be at the forefront of our concerns when studying biology because the underlying reasons for those systems’ behaviors often cannot be intuited,” said Jim Collins, Ph.D., the senior author of the first paper. “Meaningful discoveries and disruptions are the result of deep understanding of how nature works, and this project demonstrates that machine learning, when properly designed and applied, can greatly enhance our ability to gain important insights about biological systems.” Collins is also the Termeer Professor of Medical Engineering and Science at MIT.
    Now you’re speaking my language
    While the first team analyzed toehold switch sequences as 2D images to predict their quality, the second team created two different deep learning architectures that approached the challenge using orthogonal techniques. They then went beyond predicting toehold quality and used their models to optimize and redesign poorly performing toehold switches for different purposes, which they report in the second paper.
    The first model, based on a convolutional neural network (CNN) and multi-layer perceptron (MLP), treats toehold sequences as 1D images, or lines of nucleotide bases, and identifies patterns of bases and potential interactions between those bases to predict good and bad toeholds. The team used this model to create an optimization method called STORM (Sequence-based Toehold Optimization and Redesign Model), which allows for complete redesign of a toehold sequence from the ground up. This “blank slate” tool is optimal for generating novel toehold switches to perform a specific function as part of a synthetic genetic circuit, enabling the creation of complex biological tools.
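    A minimal sketch of this kind of architecture is shown below. It is an assumption-laden illustration, not the published STORM model: the sequence length, layer sizes and the name ToeholdScorer are invented here for the example.

    ```python
    import torch
    import torch.nn as nn

    BASES = "ACGU"

    def one_hot(seq: str) -> torch.Tensor:
        """Encode an RNA sequence as a 4 x L one-hot tensor."""
        t = torch.zeros(4, len(seq))
        for i, base in enumerate(seq):
            t[BASES.index(base), i] = 1.0
        return t

    class ToeholdScorer(nn.Module):
        """1D CNN feature extractor followed by an MLP that scores a toehold sequence."""
        def __init__(self, seq_len: int = 59):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv1d(4, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            )
            self.mlp = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * seq_len, 64), nn.ReLU(),
                nn.Linear(64, 1),  # predicted switch quality (e.g. ON/OFF ratio)
            )

        def forward(self, x):
            return self.mlp(self.conv(x))

    seq = "AUGC" * 14 + "AUG"                        # 59-nt placeholder sequence
    score = ToeholdScorer()(one_hot(seq).unsqueeze(0))
    print(score.shape)                               # torch.Size([1, 1])
    ```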
    “The really cool part about STORM and the model underlying it is that after seeding it with input data from the first paper, we were able to fine-tune the model with only 168 samples and use the improved model to optimize toehold switches. That calls into question the prevailing assumption that you need to generate massive datasets every time you want to apply a machine learning algorithm to a new problem, and suggests that deep learning is potentially more applicable for synthetic biologists than we thought,” said co-first author Jackie Valeri, a graduate student at MIT and the Wyss Institute.
    The second model is based on natural language processing (NLP), and treats each toehold sequence as a “phrase” consisting of patterns of “words,” eventually learning how certain words are put together to make a coherent phrase. “I like to think of each toehold switch as a haiku poem: like a haiku, it’s a very specific arrangement of phrases within its parent language — in this case, RNA. We are essentially training this model to learn how to write a good haiku by feeding it lots and lots of examples,” said co-first author Pradeep Ramesh, Ph.D., a Visiting Postdoctoral Fellow at the Wyss Institute and Machine Learning Scientist at Sherlock Biosciences.
    Ramesh and his co-authors integrated this NLP-based model with the CNN-based model to create NuSpeak (Nucleic Acid Speech), an optimization approach that allowed them to redesign the last 9 nucleotides of a given toehold switch while keeping the remaining 21 nucleotides intact. This technique allows for the creation of toeholds that are designed to detect the presence of specific pathogenic RNA sequences, and could be used to develop new diagnostic tests.
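    In spirit, this constrained redesign can be pictured as searching over the variable tail of a sequence while the rest stays fixed. The toy sketch below is purely illustrative: the scoring function is a placeholder, not the trained NuSpeak or STORM model, and the 30-nucleotide region is hypothetical.

    ```python
    import random

    BASES = "ACGU"

    def score(seq: str) -> float:
        """Placeholder objective standing in for a trained predictor: reward balanced GC content."""
        gc = sum(base in "GC" for base in seq) / len(seq)
        return 1.0 - abs(gc - 0.5)

    def redesign_tail(seq: str, n_fixed: int = 21, trials: int = 1000, seed: int = 0) -> str:
        """Keep the first n_fixed nucleotides, randomly resample the tail, keep the best-scoring variant."""
        rng = random.Random(seed)
        best, best_score = seq, score(seq)
        for _ in range(trials):
            tail = "".join(rng.choice(BASES) for _ in range(len(seq) - n_fixed))
            candidate = seq[:n_fixed] + tail
            if score(candidate) > best_score:
                best, best_score = candidate, score(candidate)
        return best

    original = "AUGCAUGCAUGCAUGCAUGCAGGGAUCCGA"  # hypothetical 30-nt region: 21 fixed + 9 redesignable
    print(redesign_tail(original))
    ```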
    The team experimentally validated both of these platforms by optimizing toehold switches designed to sense fragments from the SARS-CoV-2 viral genome. NuSpeak improved the sensors’ performances by an average of 160%, while STORM created better versions of four “bad” SARS-CoV-2 viral RNA sensors whose performances improved by up to 28 times.
    “A real benefit of the STORM and NuSpeak platforms is that they enable you to rapidly design and optimize synthetic biology components, as we showed with the development of toehold sensors for a COVID-19 diagnostic,” said co-first author Katie Collins, an undergraduate MIT student at the Wyss Institute who worked with MIT Associate Professor Timothy Lu, M.D., Ph.D., a corresponding author of the second paper.
    “The data-driven approaches enabled by machine learning open the door to really valuable synergies between computer science and synthetic biology, and we’re just beginning to scratch the surface,” said Diogo Camacho, Ph.D., a corresponding author of the second paper who is a Senior Bioinformatics Scientist and co-lead of the Predictive BioAnalytics Initiative at the Wyss Institute. “Perhaps the most important aspect of the tools we developed in these papers is that they are generalizable to other types of RNA-based sequences such as inducible promoters and naturally occurring riboswitches, and therefore can be applied to a wide range of problems and opportunities in biotechnology and medicine.”
    Additional authors of the papers include Wyss Core Faculty member and Professor of Genetics at HMS George Church, Ph.D.; and Wyss and MIT Graduate Students Miguel Alcantar and Bianca Lepe.
    “Artificial intelligence is a wave that is just beginning to impact science and industry, and has incredible potential for helping to solve intractable problems. The breakthroughs described in these studies demonstrate the power of melding computation with synthetic biology at the bench to develop new and more powerful bioinspired technologies, in addition to leading to new insights into fundamental mechanisms of biological control,” said Don Ingber, M.D., Ph.D., the Wyss Institute’s Founding Director. Ingber is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences.
    This work was supported by the DARPA Synergistic Discovery and Design program, the Defense Threat Reduction Agency, the Paul G. Allen Frontiers Group, the Wyss Institute for Biologically Inspired Engineering, Harvard University, the Institute for Medical Engineering and Science, the Massachusetts Institute of Technology, the National Science Foundation, the National Human Genome Research Institute, the Department of Energy, the National Institutes of Health, and a CONACyT grant.

  • This 'squidbot' jets around and takes pics of coral and fish

    Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration.
    The researchers detail their work in a recent issue of Bioinspiration and Biomimetics.
    “Essentially, we recreated all the key features that squids use for high-speed swimming,” said Michael T. Tolley, one of the paper’s senior authors and a professor in the Department of Mechanical and Aerospace Engineering at UC San Diego. “This is the first untethered robot that can generate jet pulses for rapid locomotion like the squid and can achieve these jet pulses by changing its body shape, which improves swimming efficiency.”
    This squid robot is made mostly from soft materials such as acrylic polymer, with a few rigid, 3D printed and laser cut parts. Using soft robots in underwater exploration is important to protect fish and coral, which could be damaged by rigid robots. But soft robots tend to move slowly and have difficulty maneuvering.
    The research team, which includes roboticists and experts in computer simulations as well as experimental fluid dynamics, turned to cephalopods as a good model to solve some of these issues. Squid, for example, can reach the fastest speeds of any aquatic invertebrates thanks to a jet propulsion mechanism.
    Their robot takes a volume of water into its body while storing elastic energy in its skin and flexible ribs. It then releases this energy by compressing its body and generates a jet of water to propel itself.
    At rest, the squid robot is shaped roughly like a paper lantern, and has flexible ribs, which act like springs, along its sides. The ribs are connected to two circular plates at each end of the robot. One of them is connected to a nozzle that both takes in water and ejects it when the robot’s body contracts. The other plate can carry a water-proof camera or a different type of sensor.
    Engineers first tested the robot in a water testbed in the lab of Professor Geno Pawlak, in the UC San Diego Department of Mechanical and Aerospace Engineering. Then they took it out for a swim in one of the tanks at the UC San Diego Birch Aquarium at the Scripps Institution of Oceanography.
    They demonstrated that the robot could steer by adjusting the direction of the nozzle. As with any underwater robot, waterproofing was a key concern for electrical components such as the battery and camera. They clocked the robot’s speed at about 18 to 32 centimeters per second (roughly half a mile per hour), which is faster than most other soft robots.
    “After we were able to optimize the design of the robot so that it would swim in a tank in the lab, it was especially exciting to see that the robot was able to successfully swim in a large aquarium among coral and fish, demonstrating its feasibility for real-world applications,” said Caleb Christianson, who led the study as part of his Ph.D. work in Tolley’s research group. He is now a senior medical devices engineer at San Diego-based Dexcom.
    Researchers conducted several experiments to find the optimal size and shape for the nozzle that would propel the robot. This in turn helped them increase the robot’s efficiency and its ability to maneuver and go faster. This was done mostly by simulating this kind of jet propulsion, work that was led by Professor Qiang Zhu and his team in the Department of Structural Engineering at UC San Diego. The team also learned more about how energy can be stored in the elastic component of the robot’s body and skin, which is later released to generate a jet.
    Video: https://www.youtube.com/watch?v=v-UMDnSB8k0&feature=emb_logo

    Story Source:
    Materials provided by University of California – San Diego.

  • Underwater robots to autonomously dock mid-mission to recharge and transfer data

    Robots can be amazing tools for search-and-rescue missions and environmental studies, but eventually they must return to a base to recharge their batteries and upload their data. That can be a challenge if your robot is an autonomous underwater vehicle (AUV) exploring deep ocean waters.
    Now, a Purdue University team has created a mobile docking system for AUVs, enabling them to perform longer tasks without the need for human intervention.
    The team has also published papers on ways to adapt this docking system for AUVs that will explore extraterrestrial lakes, such as those on the moons of Jupiter and Saturn.
    “My research focuses on persistent operation of robots in challenging environments,” said Nina Mahmoudian, an associate professor of mechanical engineering. “And there’s no more challenging environment than underwater.”
    Once a marine robot submerges in water, it loses the ability to transmit and receive radio signals, including GPS data. Some may use acoustic communication, but this method can be difficult and unreliable, especially for long-range transmissions. Because of this, underwater robots currently have a limited range of operation.
    “Typically these robots perform a pre-planned itinerary underwater,” Mahmoudian said. “Then they come to the surface and send out a signal to be retrieved. Humans have to go out, retrieve the robot, get the data, recharge the battery and then send it back out. That’s very expensive, and it limits the amount of time these robots can be performing their tasks.”
    Mahmoudian’s solution is to create a mobile docking station that underwater robots could return to on their own.

    “And what if we had multiple docks, which were also mobile and autonomous?” she said. “The robots and the docks could coordinate with each other, so that they could recharge and upload their data, and then go back out to continue exploring, without the need for human intervention. We’ve developed the algorithms to maximize these trajectories, so we get the optimum use of these robots.”
    A paper on the mission planning system that Mahmoudian and her team developed has been published in IEEE Robotics and Automation Letters. The researchers validated the method by testing the system on a short mission in Lake Superior.
    “What’s key is that the docking station is portable,” Mahmoudian said. “It can be deployed in a stationary location, but it can also be deployed on autonomous surface vehicles or even on other autonomous underwater vehicles. And it’s designed to be platform-agnostic, so it can be utilized with any AUV. The hardware and software work hand-in-hand.”
    Mahmoudian points out that systems like this already exist in your living room. “An autonomous vacuum, like a Roomba, does its vacuum cleaning, and when it runs out of battery, it autonomously returns to its dock to get recharged,” she said. “That’s exactly what we are doing here, but the environment is much more challenging.”
    If her system can successfully function in a challenging underwater environment, then Mahmoudian sees even greater horizons for this technology.
    “This system can be used anywhere,” she said. “Robots on land, air or sea will be able to operate indefinitely. Search-and-rescue robots will be able to explore much wider areas. They will go into the Arctic and explore the effects of climate change. They will even go into space.”
    Video: https://www.youtube.com/watch?v=_kS0_-qc_r0&_ga=2.99992349.282287155.1601990769-129101217.1578788059
    A patent on this mobile underwater docking station design has been issued. The patent was filed through the Secretary of the U.S. Navy. This work is funded by the National Science Foundation (grant 19078610) and the Office of Naval Research (grant N00014-20-1-2085).

    Story Source:
    Materials provided by Purdue University. Original written by Jared Pike.

  • Could megatesla magnetic fields be realized on Earth?

    Magnetic fields are used in various areas of modern physics and engineering, with practical applications ranging from doorbells to maglev trains. Since Nikola Tesla’s discoveries in the 19th century, researchers have strived to realize strong magnetic fields in laboratories for fundamental studies and diverse applications, but the magnetic strength of familiar examples is relatively weak. Earth’s magnetic field is 0.3 to 0.5 gauss (G), and the magnetic resonance imaging (MRI) scanners used in hospitals produce about 1 tesla (T = 10^4 G). By contrast, future magnetic fusion devices and maglev trains will require magnetic fields on the kilotesla (kT = 10^7 G) order. To date, the highest magnetic fields experimentally observed are on the kT order.
    Recently, scientists at Osaka University discovered a novel mechanism called a “microtube implosion,” and demonstrated the generation of megatesla (MT = 10^10 G) order magnetic fields via particle simulations using a supercomputer. Astonishingly, this is three orders of magnitude higher than what has ever been achieved in a laboratory. Such high magnetic fields are expected only in celestial bodies like neutron stars and black holes.
    Irradiating a tiny plastic microtube, one-tenth the thickness of a human hair, with ultraintense laser pulses produces hot electrons with temperatures of tens of billions of degrees. These hot electrons, along with cold ions, expand into the microtube cavity at velocities approaching the speed of light. Pre-seeding the target with a kT-order magnetic field causes the imploding charged particles to be slightly twisted by the Lorentz force. This unique cylindrical flow collectively produces unprecedentedly high spin currents of about 10^15 amperes/cm^2 on the target axis and, consequently, generates ultrahigh magnetic fields on the MT order.
    The study conducted by Masakatsu Murakami and colleagues has confirmed that current laser technology can realize MT-order magnetic fields based on the concept. The present concept for generating MT-order magnetic fields will lead to pioneering fundamental research in numerous areas, including materials science, quantum electrodynamics (QED), and astrophysics, as well as other cutting-edge practical applications.

    Story Source:
    Materials provided by Osaka University.

  • How mobile apps grab our attention

    As part of an international collaboration, Aalto University researchers have shown that our common understanding of what attracts visual attention to screens, in fact, does not transfer to mobile applications. Despite the widespread use of mobile phones and tablets in our everyday lives, this is the first study to empirically test how users’ eyes follow commonly used mobile app elements.
    Previous work on what attracts visual attention, or visual saliency, has centered on desktop and web-interfaces.
    ‘Apps appear differently on a phone than on a desktop computer or browser: they’re on a smaller screen which simply fits fewer elements and, instead of a horizontal view, mobile devices typically use a vertical layout. Until now it was unclear how these factors would affect how apps actually attract our eyes,’ explains Aalto University Professor Antti Oulasvirta.
    In the study, the research team used a large set of representative mobile interfaces and eye tracking to see how users look at screenshots of mobile apps, for both Android and Apple iOS devices.
    According to previous thinking, our eyes should not only jump to bigger or brighter elements, but also stay there longer. Previous studies have also concluded that when we look at certain kinds of images, our attention is drawn to the centre of screens and also spread horizontally across the screen, rather than vertically. The researchers found these principles to have little effect on mobile interfaces.
    ‘It actually came as a surprise that bright colours didn’t affect how people fixate on app details. One possible reason is that the mobile interface itself is full of glossy and colourful elements, so everything on the screen can potentially catch your attention — it’s just how they’re designed. It seems that when everything is made to stand out, nothing pops out in the end,’ says lead author and Post-doctoral Researcher Luis Leiva.
    The study also confirms that some other design principles hold true for mobile apps. Gaze, for example, drifts to the top-left corner, an indication of exploration or scanning. Text also plays an important part, likely because of its role in relaying information; on first use, users therefore tend to focus on the text elements of a mobile app, whether they appear in icons, labels or logos.
    Image elements drew visual attention more frequently than expected for the area they cover, though the average length of time users spent looking at images was similar to that for other app elements. Faces, too, attracted concentrated attention, though when a face was accompanied by text, eyes wandered much closer to the location of the text.
    ‘Various factors influence where our visual attention goes. For photos, these factors include colour, edges, texture and motion. But when it comes to generated visual content, such as graphical user interfaces, design composition is a critical factor to consider,’ says Dr Hamed Tavakoli, who was also part of the Aalto University research team.
    The study was completed with international collaborators including IIT Goa (India), Yildiz Technical University (Turkey) and Huawei Technologies (China). The team will present the findings on 6 October 2020 at MobileHCI’20, the flagship conference on Human-Computer Interaction with mobile devices and services.

    Story Source:
    Materials provided by Aalto University.