More stories

  • Autonomous robot plays with NanoLEGO

    Molecules are the building blocks of everyday life. Many materials are composed of them, much as a LEGO model is built from a multitude of different bricks. But while individual LEGO bricks can be simply shifted or removed, this is not so easy in the nanoworld. Atoms and molecules behave in a completely different way to macroscopic objects, and each brick requires its own “instruction manual.” Scientists from Jülich and Berlin have now developed an artificial intelligence system that autonomously learns how to grip and move individual molecules using a scanning tunnelling microscope. The method, which has been published in Science Advances, is not only relevant for research but also for novel production technologies such as molecular 3D printing.
    Rapid prototyping, the fast and cost-effective production of prototypes or models — better known as 3D printing — has long since established itself as an important tool for industry. “If this concept could be transferred to the nanoscale to allow individual molecules to be specifically put together or separated again just like LEGO bricks, the possibilities would be almost endless, given that there are around 10⁶⁰ conceivable types of molecule,” explains Dr. Christian Wagner, head of the ERC working group on molecular manipulation at Forschungszentrum Jülich.
    There is one problem, however. Although the scanning tunnelling microscope is a useful tool for shifting individual molecules back and forth, a custom “recipe” is always required in order to guide the tip of the microscope to arrange molecules spatially in a targeted manner. This recipe can be neither calculated nor deduced by intuition — the mechanics on the nanoscale are simply too variable and complex. After all, the tip of the microscope is ultimately not a flexible gripper, but rather a rigid cone. The molecules merely adhere lightly to the microscope tip and can only be put in the right place through sophisticated movement patterns.
    “To date, such targeted movement of molecules has only been possible by hand, through trial and error. But with the help of a self-learning, autonomous software control system, we have now succeeded for the first time in finding a solution for this diversity and variability on the nanoscale, and in automating this process,” says a delighted Prof. Dr. Stefan Tautz, head of Jülich’s Quantum Nanoscience institute.
    The key to this development lies in so-called reinforcement learning, a special variant of machine learning. “We do not prescribe a solution pathway for the software agent, but rather reward success and penalize failure,” explains Prof. Dr. Klaus-Robert Müller, head of the Machine Learning department at TU Berlin. The algorithm repeatedly tries to solve the task at hand and learns from its experiences. The general public first became aware of reinforcement learning a few years ago through AlphaGo Zero. This artificial intelligence system autonomously developed strategies for winning the highly complex game of Go without studying human players — and after just a few days, it was able to beat professional Go players.
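    As a rough illustration of the reward-and-penalty idea, the sketch below runs tabular Q-learning on a toy grid task. The states, actions, reward values, and the 5% "bond break" chance are invented for illustration only; they are not the Jülich/Berlin agent's actual state space or control software.

```python
import random
from collections import defaultdict

# Toy grid: the "molecule" must be steered from START to GOAL.
ACTIONS = ["up", "down", "left", "right"]
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
START, GOAL = (0, 0), (3, 3)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> learned value

def step(state, action):
    dx, dy = MOVES[action]
    nxt = (min(max(state[0] + dx, 0), 3), min(max(state[1] + dy, 0), 3))
    if nxt == GOAL:
        return nxt, 1.0, True        # success is rewarded
    if random.random() < 0.05:
        return nxt, -1.0, True       # failure (the bond breaks) is penalised
    return nxt, -0.01, False         # a small cost keeps trajectories short

def choose(state):
    if random.random() < EPSILON:    # occasional random exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _ in range(2000):                # learn purely from trial and error
    state, done = START, False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
```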
    “In our case, the agent was given the task of removing individual molecules from a layer in which they are held by a complex network of chemical bonds. To be precise, these were perylene molecules, such as those used in dyes and organic light-emitting diodes,” explains Dr. Christian Wagner. The special challenge here is that the force required to move them must never exceed the strength of the bond with which the tip of the scanning tunnelling microscope attracts the molecule, since this bond would otherwise break. “The microscope tip therefore has to execute a special movement pattern, which we previously had to discover by hand, quite literally,” Wagner adds. While the software agent initially performs completely random movement actions that break the bond between the tip of the microscope and the molecule, over time it develops rules as to which movement is the most promising for success in which situation and therefore gets better with each cycle.
    However, the use of reinforcement learning in the nanoscopic range brings with it additional challenges. The metal atoms that make up the tip of the scanning tunnelling microscope can end up shifting slightly, which alters the bond strength to the molecule each time. “Every new attempt increases the risk of such a change, and thus of the bond between tip and molecule breaking. The software agent is therefore forced to learn particularly quickly, since its experiences can become obsolete at any time,” Prof. Dr. Stefan Tautz explains. “It’s a little as if the road network, traffic laws, bodywork, and rules for operating the vehicle are constantly changing while driving autonomously.” The researchers have overcome this challenge by making the software learn a simple model of the environment in which the manipulation takes place in parallel with the initial cycles. The agent then simultaneously trains both in reality and in its own model, which has the effect of significantly accelerating the learning process.
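    The trick of learning a simple environment model alongside the real experiment and training on both is closely related to what the reinforcement-learning literature calls Dyna-style planning. The fragment below is a minimal sketch of that idea under that reading; the tabular model and the names dyna_step and PLANNING_STEPS are hypothetical, not the published control software.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, PLANNING_STEPS = 0.5, 0.9, 20
Q = defaultdict(float)   # value estimates, as in plain reinforcement learning
model = {}               # learned model: (state, action) -> (reward, next_state)

def q_update(state, action, reward, nxt, actions):
    best_next = max(Q[(nxt, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def dyna_step(state, action, reward, nxt, actions):
    # 1) Learn from the real (slow, drifting) experiment.
    q_update(state, action, reward, nxt, actions)
    # 2) Remember what the environment just did.
    model[(state, action)] = (reward, nxt)
    # 3) Replay many imagined transitions from the learned model; this cheap
    #    "training in the model" is what speeds up learning before the tip
    #    changes and the stored experience goes stale.
    for _ in range(PLANNING_STEPS):
        (s, a), (r, n) = random.choice(list(model.items()))
        q_update(s, a, r, n, actions)
```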
    “This is the first time ever that we have succeeded in bringing together artificial intelligence and nanotechnology,” emphasizes Klaus-Robert Müller. “Up until now, this has only been a ‘proof of principle’,” Tautz adds. “However, we are confident that our work will pave the way for the robot-assisted automated construction of functional supramolecular structures, such as molecular transistors, memory cells, or qubits — with a speed, precision, and reliability far in excess of what is currently possible.”

  • Heavy electronic media use in late childhood linked to lower academic performance

    A new study of 8- to 11-year-olds reveals an association between heavy television use and poorer reading performance, as well as between heavy computer use and poorer numeracy — the ability to work with numbers. Lisa Mundy of the Murdoch Children’s Research Institute in Melbourne, Australia, and colleagues present these findings in the open-access journal PLOS ONE on September 2, 2020.
    Previous studies of children and adolescents have found links between use of electronic media — such as television, computers, and videogames — and obesity, poor sleep, and other physical health risks. Electronic media use is also associated with better access to information, tech skills, and social connection. However, comparatively less is known about links with academic performance.
    To help clarify these links, Mundy and colleagues studied 1,239 8- to 9-year-olds in Melbourne, Australia. They used national achievement test data to measure the children’s academic performance at baseline and again after two years. They also asked the children’s parents to report on their kids’ use of electronic media.
    The researchers found that watching two or more hours of television per day at the age of 8 or 9 was associated with lower reading performance compared to peers two years later; the difference was equivalent to losing four months of learning. Using a computer for more than one hour per day was linked to a similar degree of lost numeracy. The analysis showed no links between use of videogames and academic performance.
    By accounting for baseline academic performance and potentially influential factors such as mental health difficulties and body mass index (BMI), and by controlling for prior media use, the researchers were able to pinpoint both cumulative and short-term television and computer use as associated with poorer academic performance.
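    As a hedged sketch of what this kind of covariate adjustment looks like in practice, the snippet below regresses a simulated follow-up score on television hours while holding a baseline score and one confounder fixed. The variable names and the simulated data are illustrative and are not the study's dataset or statistical model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
baseline = rng.normal(500, 50, n)      # baseline achievement score (hypothetical scale)
tv_hours = rng.uniform(0, 4, n)        # daily television hours (parent-reported)
bmi = rng.normal(17, 2, n)             # one example confounder
followup = baseline - 8.0 * tv_hours + 0.5 * bmi + rng.normal(0, 20, n)

# Regress the follow-up score on TV use while holding baseline and BMI fixed;
# the TV coefficient is then the adjusted association, not the raw one.
X = np.column_stack([np.ones(n), tv_hours, baseline, bmi])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print("adjusted change in follow-up score per extra TV hour:", round(coef[1], 2))
```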
    These findings could help parents, teachers, and clinicians refine plans and recommendations for electronic media use in late childhood. Future research could build on these results by examining continued associations in later secondary school.
    The authors add: “The debate about the effects of modern media on children’s learning has never been more important given the effects of today’s pandemic on children’s use of time. This is the first large, longitudinal study of electronic media use and learning in primary school children, and results showed heavier users of television and computers had significant declines in reading and numeracy two years later compared with light users.”

    Story Source:
    Materials provided by PLOS.

  • Revolutionary quantum breakthrough paves way for safer online communication

    The world is one step closer to having a totally secure internet and an answer to the growing threat of cyber-attacks, thanks to a team of international scientists who have created a unique prototype which could transform how we communicate online.
    The invention, led by the University of Bristol and revealed today in the journal Science Advances, has the potential to serve millions of users, is understood to be the largest-ever quantum network of its kind, and could be used to secure people’s online communication, particularly in these internet-led times accelerated by the COVID-19 pandemic.
    By deploying a new technique, harnessing the simple laws of physics, it can make messages completely safe from interception while also overcoming major challenges which have previously limited advances in this little used but much-hyped technology.
    Lead author Dr Siddarth Joshi, who headed the project at the university’s Quantum Engineering Technology (QET) Labs, said: “This represents a massive breakthrough and makes the quantum internet a much more realistic proposition. Until now, building a quantum network has entailed huge cost, time, and resource, as well as often compromising on its security which defeats the whole purpose.”
    “Our solution is scalable, relatively cheap and, most important of all, impregnable. That means it’s an exciting game changer and paves the way for much more rapid development and widespread rollout of this technology.”
    The current internet relies on complex codes to protect information, but hackers are increasingly adept at outsmarting such systems, leading to cyber-attacks across the world that cause major privacy breaches and fraud running into trillions of pounds annually. With such costs projected to rise dramatically, the case for finding an alternative is even more compelling, and quantum has for decades been hailed as the revolutionary replacement for standard encryption techniques.

    So far physicists have developed a form of secure encryption, known as quantum key distribution, in which particles of light, called photons, are transmitted. The process allows two parties to share, without risk of interception, a secret key used to encrypt and decrypt information. But to date this technique has only been effective between two users.
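    To make the role of the shared key concrete, here is a minimal sketch of how a key agreed over such a link could be used as a one-time pad. The generate_shared_key function merely stands in for the quantum key exchange itself, which happens in hardware, not in software.

```python
import secrets

def generate_shared_key(n_bytes: int) -> bytes:
    # Stand-in for the quantum link: in the real system the shared key bits
    # come from measuring photons, not from a software random generator.
    return secrets.token_bytes(n_bytes)

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = generate_shared_key(len(message))   # agreed between the two parties
ciphertext = xor_bytes(message, key)      # sender encrypts
recovered = xor_bytes(ciphertext, key)    # receiver decrypts with the same key
assert recovered == message
```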
    “Until now efforts to expand the network have involved vast infrastructure and a system which requires the creation of another transmitter and receiver for every additional user. Sharing messages in this way, known as trusted nodes, is just not good enough because it uses so much extra hardware which could leak and would no longer be totally secure,” Dr Joshi said.
    The team’s quantum technique applies a seemingly magical principle, called entanglement, which Albert Einstein described as ‘spooky action at a distance.’ It exploits the power of two different particles placed in separate locations, potentially thousands of miles apart, to simultaneously mimic each other. This process presents far greater opportunities for quantum computers, sensors, and information processing.
    “Instead of having to replicate the whole communication system, this latest methodology, called multiplexing, splits the light particles, emitted by a single system, so they can be received by multiple users efficiently,” Dr Joshi said.
    The team created a network for eight users with just eight receiver boxes, whereas the former method would require separate hardware for every pair of users — in this case, 56 boxes. As user numbers grow, the logistics become increasingly unviable: 100 users, for instance, would take 9,900 receiver boxes.
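    The scaling quoted here is consistent with every pair of users needing its own transmitter/receiver hardware under the old approach, i.e. n × (n − 1) boxes, versus one box per user when multiplexed. The helper names below are ours, not the paper's.

```python
def trusted_node_boxes(n_users: int) -> int:
    # Under the old approach every pair of users needs its own
    # transmitter/receiver pairing: n * (n - 1) pieces of hardware.
    return n_users * (n_users - 1)

def multiplexed_boxes(n_users: int) -> int:
    # With multiplexed entanglement, one receiver box per user suffices.
    return n_users

for n in (8, 100):
    print(n, "users:", trusted_node_boxes(n), "boxes vs", multiplexed_boxes(n))
# 8 users:   56 boxes vs 8    (the figures quoted above)
# 100 users: 9900 boxes vs 100
```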

    To demonstrate its functionality across distance, the receiver boxes were connected to optical fibres at different locations across Bristol, and the ability to transmit messages via quantum communication was tested using the city’s existing optical fibre network.
    “Besides being completely secure, the beauty of this new technique is its streamlined agility, which requires minimal hardware because it integrates with existing technology,” Dr Joshi said.
    The team’s unique system also features traffic management, delivering better network control which allows, for instance, certain users to be prioritised with a faster connection.
    Whereas previous quantum systems have taken years to build, at a cost of millions or even billions of pounds, this network was created within months for less than £300,000. The financial advantages grow as the network expands, so while 100 users on previous quantum systems might cost in the region of £5 billion, Dr Joshi believes multiplexing technology could slash that to around £4.5 million, less than 1 per cent.
    In recent years quantum cryptography has been successfully used to protect transactions between banking centres in China and secure votes at a Swiss election. Yet its wider application has been held back by the sheer scale of resources and costs involved.
    “With these economies of scale, the prospect of a quantum internet for universal usage is much less far-fetched. We have proved the concept and by further refining our multiplexing methods to optimise and share resources in the network, we could be looking at serving not just hundreds or thousands, but potentially millions of users in the not too distant future,” Dr Joshi said.
    “The ramifications of the COVID-19 pandemic have not only shown the importance and potential of the internet, and our growing dependence on it, but also how its absolute security is paramount. Multiplexing entanglement could hold the vital key to making this security a much-needed reality.”

  • Predictive placentas: Using artificial intelligence to protect mothers' future pregnancies

    After a baby is born, doctors sometimes examine the placenta — the organ that links the mother to the baby — for features that indicate health risks in any future pregnancies. Unfortunately, this is a time-consuming process that must be performed by a specialist, so most placentas go unexamined after the birth. In the American Journal of Pathology, published by Elsevier, a team of researchers from Carnegie Mellon University (CMU) and the University of Pittsburgh Medical Center (UPMC) report the development of a machine learning approach to examining placenta slides, so that more women can be informed of their health risks.
    One reason placentas are examined is to look for a type of blood vessel lesion called decidual vasculopathy (DV). These lesions indicate that the mother is at risk for preeclampsia — a complication that can be fatal to the mother and baby — in any future pregnancies. Once detected, preeclampsia can be treated, so there is considerable benefit in identifying at-risk mothers before symptoms appear. The challenge, however, is that a single slide contains hundreds of blood vessels, and just one diseased vessel among them is enough to indicate risk.
    “Pathologists train for years to be able to find disease in these images, but there are so many pregnancies going through the hospital system that they don’t have time to inspect every placenta,” said Daniel Clymer, PhD, alumnus, Department of Mechanical Engineering, CMU, Pittsburgh, PA, USA. “Our algorithm helps pathologists know which images they should focus on by scanning an image, locating blood vessels, and finding patterns of the blood vessels that identify DV.”
    Machine learning works by “training” the computer to recognize certain features in data files. In this case, the data file is an image of a thin slice of a placenta sample. Researchers show the computer various images and indicate whether the placenta is diseased or healthy. After sufficient training, the computer is able to identify diseased lesions on its own.
    It is quite difficult for a computer to simply look at a large picture and classify it, so the team introduced a novel approach through which the computer follows a series of steps to make the task more manageable. First, the computer detects all blood vessels in an image. Each blood vessel can then be considered individually, creating smaller data packets for analysis. The computer then assesses each blood vessel and determines whether it should be deemed diseased or healthy. At this stage, the algorithm also considers features of the pregnancy, such as gestational age, birth weight, and any conditions the mother might have. If there are any diseased blood vessels, then the picture — and therefore the placenta — is marked as diseased. The UPMC team provided the de-identified placenta images for training the algorithm.
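    Schematically, the multi-step pipeline described above might be organised as in the sketch below. detect_vessels, classify_vessel, and the listed clinical features are hypothetical placeholders rather than the authors' published code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PregnancyFeatures:
    gestational_age_weeks: float
    birth_weight_g: float
    maternal_conditions: List[str] = field(default_factory=list)

def detect_vessels(slide_image) -> list:
    """Step 1: locate every blood-vessel region in the slide image."""
    raise NotImplementedError   # e.g. a segmentation model

def classify_vessel(vessel_region, features: PregnancyFeatures) -> bool:
    """Step 2: judge a single vessel, taking clinical features into account."""
    raise NotImplementedError   # e.g. a classifier over image + tabular features

def classify_placenta(slide_image, features: PregnancyFeatures) -> bool:
    # Step 3: a single diseased vessel is enough to flag the whole placenta.
    return any(classify_vessel(v, features) for v in detect_vessels(slide_image))
```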
    “This algorithm isn’t going to replace a pathologist anytime soon,” Dr. Clymer explained. “The goal here is that this type of algorithm might be able to help speed up the process by flagging regions of the image where the pathologist should take a closer look.”
    “This is a beautiful collaboration between engineering and medicine as each brings expertise to the table that, when combined, creates novel findings that can help so many individuals,” added lead investigators Jonathan Cagan, PhD, and Philip LeDuc, PhD, professors of mechanical engineering at CMU, Pittsburgh, PA, USA.
    “As healthcare increasingly embraces the role of artificial intelligence, it is important that doctors partner early on with computer scientists and engineers so that we can design and develop the right tools for the job to positively impact patient outcomes,” noted co-author Liron Pantanowitz, MBBCh, formerly vice chair for pathology informatics at UPMC, Pittsburgh, PA, USA. “This partnership between CMU and UPMC is a perfect example of what can be accomplished when this happens.”

    Story Source:
    Materials provided by Elsevier.

  • A molecular approach to quantum computing

    The technology behind the quantum computers of the future is fast developing, with several different approaches in progress. Many of the strategies, or “blueprints,” for quantum computers rely on atoms or artificial atom-like electrical circuits. In a new theoretical study in the journal Physical Review X, a group of physicists at Caltech demonstrates the benefits of a lesser-studied approach that relies not on atoms but on molecules.
    “In the quantum world, we have several blueprints on the table and we are simultaneously improving all of them,” says lead author Victor Albert, the Lee A. DuBridge Postdoctoral Scholar in Theoretical Physics. “People have been thinking about using molecules to encode information since 2001, but now we are showing how molecules, which are more complex than atoms, could lead to fewer errors in quantum computing.”
    At the heart of quantum computers are what are known as qubits. These are similar to the bits in classical computers, but unlike classical bits they can experience a bizarre phenomenon known as superposition, in which they exist in two or more states at once. Like the famous Schrödinger’s cat thought experiment, which describes a cat that is both dead and alive at the same time, particles can exist in multiple states at once. The phenomenon of superposition is at the heart of quantum computing: the fact that qubits can take on many forms simultaneously means that they have exponentially more computing power than classical bits.
    But the state of superposition is a delicate one, as qubits are prone to collapsing out of their desired states, and this leads to computing errors.
    “In classical computing, you have to worry about the bits flipping, in which a ‘1’ bit goes to a ‘0’ or vice versa, which causes errors,” says Albert. “This is like flipping a coin, and it is hard to do. But in quantum computing, the information is stored in fragile superpositions, and even the quantum equivalent of a gust of wind can lead to errors.”
    However, if a quantum computer platform uses qubits made of molecules, the researchers say, these types of errors are more likely to be prevented than in other quantum platforms. One concept behind the new research comes from work performed nearly 20 years ago by Caltech researchers John Preskill, Richard P. Feynman Professor of Theoretical Physics and director of the Institute of Quantum Information and Matter (IQIM), and Alexei Kitaev, the Ronald and Maxine Linde Professor of Theoretical Physics and Mathematics at Caltech, along with their colleague Daniel Gottesman (PhD ’97) of the Perimeter Institute in Ontario, Canada. Back then, the scientists proposed a loophole that would provide a way around a phenomenon called Heisenberg’s uncertainty principle, which was introduced in 1927 by German physicist Werner Heisenberg. The principle states that one cannot simultaneously know with very high precision both where a particle is and where it is going.

    “There is a joke where Heisenberg gets pulled over by a police officer who says he knows Heisenberg’s speed was 90 miles per hour, and Heisenberg replies, ‘Now I have no idea where I am,’” says Albert.
    The uncertainty principle is a challenge for quantum computers because it implies that the quantum states of the qubits cannot be known well enough to determine whether or not errors have occurred. However, Gottesman, Kitaev, and Preskill figured out that while the exact position and momentum of a particle could not be measured, it was possible to detect very tiny shifts to its position and momentum. These shifts could reveal that an error has occurred, making it possible to push the system back to the correct state. This error-correcting scheme, known as GKP after its discoverers, has recently been implemented in superconducting circuit devices.
    “Errors are okay but only if we know they happen,” says Preskill, a co-author on the Physical Review X paper and also the scientific coordinator for a new Department of Energy-funded science center called the Quantum Systems Accelerator. “The whole point of error correction is to maximize the amount of knowledge we have about potential errors.”
    In the new paper, this concept is applied to rotating molecules in superposition. If the orientation or angular momentum of the molecule shifts by a small amount, those shifts can be simultaneously corrected.
    “We want to track the quantum information as it’s evolving under the noise,” says Albert. “The noise is kicking us around a little bit. But if we have a carefully chosen superposition of the molecules’ states, we can measure both orientation and angular momentum as long as they are small enough. And then we can kick the system back to compensate.”
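    As a toy numerical illustration of the "measure the small shift, then kick back" idea, the snippet below uses the standard GKP grid spacing of √π for a single continuous variable. The molecular code in the paper works with orientation and angular-momentum states, so this is an analogy under that assumption, not the paper's construction.

```python
import math

SPACING = math.sqrt(math.pi)    # standard GKP grid spacing for one variable

def measured_shift(value: float) -> float:
    """What the error check reveals: the displacement from the nearest grid
    point, without revealing which grid point (so the encoded data survives)."""
    return (value + SPACING / 2) % SPACING - SPACING / 2

def correct(value: float) -> float:
    """Kick the system back by the measured shift, assuming the error was small."""
    return value - measured_shift(value)

noisy = 3 * SPACING + 0.07      # a codeword nudged by a small error
print(round(correct(noisy) / SPACING, 6))   # -> 3.0, back on the grid
```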
    Jacob Covey, a co-author on the paper and former Caltech postdoctoral scholar who recently joined the faculty at the University of Illinois, says that it might be possible to eventually individually control molecules for use in quantum information systems such as these. He and his team have made strides in using optical laser beams, or “tweezers,” to control single neutral atoms (neutral atoms are another promising platform for quantum-information systems).
    “The appeal of molecules is that they are very complex structures that can be very densely packed,” says Covey. “If we can figure out how to utilize molecules in quantum computing, we can robustly encode information and improve the efficiency with which qubits are packed.”
    Albert says that the trio of himself, Preskill, and Covey provided the perfect combination of theoretical and experimental expertise to achieve the latest results. He and Preskill are both theorists while Covey is an experimentalist. “It was really nice to have somebody like John to help me with the framework for all this theory of error-correcting codes, and Jake gave us crucial guidance on what is happening in labs.”
    Says Preskill, “This is a paper that no one of the three of us could have written on our own. What’s really fun about the field of quantum information is that it’s encouraging us to interact across some of these divides, and Caltech, with its small size, is the perfect place to get this done.”

  • A surprising opportunity for telehealth in shaping the future of medicine

    Expanded telehealth services at UT Southwestern have proved effective at safely delivering patient care during the pandemic, leading to an increase in patients even in specialties such as plastic surgery, according to a new study.
    The study, published in the Aesthetic Surgery Journal, illuminates the unexpected benefits that telehealth has had during the pandemic and provides insight into what this may mean for the future of medicine in the United States.
    “Prior to COVID-19, it was not clear if telehealth would meet the standard of care in highly specialized clinical practices. Out of necessity, we were forced to innovate quickly. What we found is that it is actually a really good fit,” says Alan Kramer, M.P.H., assistant vice president of health system emerging strategies at UTSW and co-author of the study.
    UT Southwestern was already equipped with telehealth technology when COVID-19 hit — but only as a small pilot program. Through incredible team efforts, telehealth was expanded across the institution within days, bringing with it several unanticipated benefits for both the medical center and patients.
    “The conversion rate to telehealth is higher than in person,” says Bardia Amirlak, M.D., FACS, associate professor of plastic surgery and the study’s senior corresponding author. The study found 25,197 of 34,706 telehealth appointments across the institution were completed in April 2020 — a 72.6 percent completion rate — compared with a 65.8 percent completion rate of in-person visits from April 2019.
    The study notes the significant increases in the volume of new patients seen by telehealth beginning in March 2020. This resulted from a combination of relaxed regulations and an increasing comfort level with telehealth visits among physicians and patients. UTSW saw the percentage of new patients seen through telehealth visits increase from 0.77 percent in February to 14.2 percent and 16.7 percent in March and April, respectively.
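    For readers who want to reproduce the headline percentages, the quick check below recomputes them from the figures quoted above; the month labels are ours.

```python
completed, scheduled = 25_197, 34_706            # April 2020 telehealth visits
print(f"telehealth completion rate: {completed / scheduled:.1%}")   # ~72.6%

in_person_rate = 0.658                           # April 2019 in-person completion rate
print(f"gap vs in person: {completed / scheduled - in_person_rate:+.1%}")

new_patient_share = {"February": 0.0077, "March": 0.142, "April": 0.167}
for month, share in new_patient_share.items():
    print(f"{month}: {share:.1%} of telehealth visits were new patients")
```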

    Even within a niche field like plastic surgery, the implementation of telehealth has been incredibly successful, demonstrating the adaptability of telehealth to a wide range of practices. From April to mid-May, plastic surgery completed 340 telehealth visits in areas such as breast cancer reconstruction, hand surgery, and wound care, with completion rates similar to those of UTSW as a whole. Likewise, plastic surgery also saw a large number of new patients, who comprised 41 percent of its telehealth visits.
    “The fear was that the platform wouldn’t be able to handle it: the privacy issues, insurance issues, malpractice issues … but it came together well and we were able to ramp up into the thousands, and were able to not only decrease patient anxiety, but also increase many beneficial factors, such as patient access,” says Amirlak.
    The study reported several boons for telehealth patients, including reductions in stress, missed work, the number of hospital visits, travel time, and exposure to pathogens, in addition to improving access to care with the option for out-of-state consultations. Indeed, patients from 43 states and Puerto Rico have participated in telehealth visits at UTSW facilities since March.
    Even as COVID-19 restrictions have eased in Texas, telehealth is still proving to be a major part of UT Southwestern’s clinical practice. “The feedback from patients has been very positive,” says Kramer. “We’re now sustaining 25 percent of our practice being done virtually, a major win for our patients. It’s changed the way we think about care.”
    Whether this trend continues into the post-COVID-19 world remains to be seen, he says. But either way, Kramer says, it is clear that telehealth will be a useful tool.
    The numerous benefits that telehealth has to offer are accompanied by several challenges, however, such as the practicality and risks of remote diagnostic medicine. Though technology is starting to address some issues with the development of tools such as electronic stethoscopes and consumer-facing apps that can measure blood oxygen levels and perform electrocardiograms, for example, some argue the value of the in-person physical exam cannot be replaced. Moving forward, Amirlak says, “it will be our responsibility as physicians and scientists to recognize the potential dangers of taking telehealth to the extreme right now and missing a clinical diagnosis.”
    Aside from patient-facing issues, other challenges need to be included in discussions of the future of telehealth, including federal, state, and local laws; privacy concerns; and Health Insurance Portability and Accountability Act (HIPAA) regulations. Many statutes and restrictions have been loosened during the pandemic, allowing institutions like UTSW to implement telehealth rapidly and effectively. But the future of telehealth will necessitate the development of long-term regulations.
    “Based on the trends, it seems that telehealth is here to stay. So it’s important to think about the concerns, and based on this information, the issues that we have and how we can resolve them going forward,” says Christine Wamsley, a UTSW research fellow and first author of the study. With the ramp-up of telehealth and related restrictions amid the COVID-19 pandemic, now may be the best opportunity for health care providers and governmental agencies to address these challenges and set out guidelines for the practice of telehealth.

  • Miniature antenna enables robotic teaming in complex environments

    A new, miniature, low-frequency antenna with enhanced bandwidth will enable robust networking among compact, mobile robots in complex environments.
    In a collaborative effort between the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Michigan, researchers developed a novel design approach that improves upon limitations of conventional antennas operating at low frequencies — demonstrating smaller antennas that maintain performance.
    Impedance matching is a key aspect of antenna design, ensuring that the radio transmits power through the antenna with minimal reflections while in transmit mode — and that when the antenna is in receive mode, it captures power to efficiently couple to the radio over all frequencies within the operational bandwidth.
    “Conventional impedance matching techniques with passive components — such as resistors, inductors and capacitors — have a fundamental limit, known as the Chu-Wheeler limit, which defines a bound for the maximum achievable bandwidth-efficiency product for a given antenna size,” said Army researcher Dr. Fikadu Dagefu. “In general, low-frequency antennas are physically large, or their miniaturized counterparts have very limited bandwidth and efficiency, resulting in higher power requirement.”
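    As a back-of-the-envelope illustration of why that limit bites at low frequencies, the sketch below evaluates a commonly used form of the Chu bound, Q ≳ 1/(ka)³ + 1/(ka), for a lossless antenna that fits in a sphere of radius a, and treats the achievable fractional bandwidth as roughly 1/Q. The numbers are illustrative, not this work's measured values.

```python
import math

def chu_min_q(radius_m: float, wavelength_m: float) -> float:
    # Chu bound (McLean's form) for a lossless antenna enclosed in a sphere
    # of radius a: Q >= 1/(ka)^3 + 1/(ka), with k = 2*pi/wavelength.
    ka = 2 * math.pi * radius_m / wavelength_m
    return 1 / ka**3 + 1 / ka

radius, wavelength = 0.075, 7.5      # e.g. a 15 cm antenna at a 7.5 m wavelength
q_min = chu_min_q(radius, wavelength)
print(f"Q_min ≈ {q_min:.0f}, so max fractional bandwidth ≈ {1 / q_min:.3%}")
```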
    With those challenges in mind, the researchers developed a novel approach that improves bandwidth and efficiency without increasing size or changing the topology of the antenna.
    “The proposed impedance matching approach applies a modular active circuit to a highly miniaturized, efficient, lightweight antenna — overcoming the aforementioned Chu-Wheeler performance limit,” said Army postdoctoral researcher Dr. Jihun Choi. “This miniature, actively matched antenna enables the integration of power-efficient, low-frequency radio systems on compact mobile agents such as unmanned ground and aerial vehicles.”
    The researchers said this approach could create new opportunities for networking in the Army.

    The ability to integrate low-frequency radio systems with low size, weight, and power — or SWAP — opens the door for the exploitation of this underutilized and underexplored frequency band as part of the heterogeneous autonomous networking paradigm. In this paradigm, agents equipped with complementary communications modalities must adapt their approaches based on challenges in the environment for that specific mission. Specifically, the lower frequencies are suitable for reliable communications in complex propagation environments and terrain due to their improved penetration and reduced multipath.
    “We integrated the developed antenna on small, unmanned ground vehicles and demonstrated reliable, real-time digital video streaming between UGVs, which has not been done before with such compact low-frequency radio systems,” Dagefu said. “By exploiting this technology, the robotic agents could coordinate and form teams, enabling unique capabilities such as distributed on-demand beamforming for directional and secure battlefield networking.”
    With more than 80 percent of the world’s population expected to live in dense urban environments by 2050, innovative Army networking capabilities are necessary to create and maintain transformational overmatch, the researchers said. Lack of fixed infrastructure coupled with the increasing need for a competitive advantage over near-peer adversaries imposes further challenges on Army networks, a top modernization priority for multi-domain operations.
    While previous experimental studies demonstrated bandwidth enhancement with active matching applied to a small non-resonant antenna (e.g., a short metallic wire), no previous work had simultaneously ensured bandwidth and radiation-efficiency enhancement compared to small, resonant antennas with performance near the Chu-Wheeler limit.
    The Army-led active matching design approach addresses these key challenges stemming from the trade-off among bandwidth, efficiency and stability. The researchers built a 15-centimeter prototype (2 percent of the operating wavelength) and demonstrated that the new design achieves more than threefold bandwidth enhancement compared to the same antenna without active matching, while also improving transmission efficiency tenfold compared to state-of-the-art actively matched antennas of the same size.
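    A quick consistency check on the quoted size: 15 cm at 2 percent of the operating wavelength implies a wavelength of 7.5 m, i.e. roughly 40 MHz, which sits in the low-VHF band named in the paper's title.

```python
antenna_length_m = 0.15                   # the 15 cm prototype
wavelength_m = antenna_length_m / 0.02    # 2 % of the operating wavelength -> 7.5 m
frequency_mhz = 3e8 / wavelength_m / 1e6
print(f"wavelength ≈ {wavelength_m} m, frequency ≈ {frequency_mhz:.0f} MHz")  # low VHF
```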
    “In the design, a highly accurate model captures sharp impedance variation of the highly miniaturized resonant antenna,” Choi said. “Based on the model, we develop an active matching circuit that enhances bandwidth and efficiency simultaneously while ensuring the circuit is fully stable.”
    The team published their research, “A Miniature Actively Matched Antenna for Power-Efficient and Bandwidth-Enhanced Operation at Low VHF,” authored by Drs. Jihun Choi, Fikadu Dagefu, and Brian Sadler and Prof. Kamal Sarabandi, in the peer-reviewed journal IEEE Transactions on Antennas and Propagation.
    “This technology is ripe for future development and transition to our various partners within the Army,” Dagefu said. “We are optimistic that with the integration of aspects of our heterogeneous networking research, this technology will further develop and will be integrated into future Army communications systems.”

  • A multinational study overturns a 130-year-old assumption about seawater chemistry

    There’s more to seawater than salt. Ocean chemistry is a complex mixture of particles, ions and nutrients. And for over a century, scientists believed that certain ion ratios held relatively constant over space and time.
    But now, following a decade of research, a multinational study has refuted this assumption. Debora Iglesias-Rodriguez, professor and vice chair of UC Santa Barbara’s Department of Ecology, Evolution, and Marine Biology, and her colleagues discovered that the seawater ratios of three key elements vary across the ocean, which means scientists will have to re-examine many of their hypotheses and models. The results appear in the Proceedings of the National Academy of Sciences.
    Calcium, magnesium and strontium (Ca, Mg and Sr) are important elements in ocean chemistry, involved in a number of biologic and geologic processes. For instance, a host of different animals and microbes use calcium to build their skeletons and shells. These elements enter the ocean via rivers and tectonic features, such as hydrothermal vents. They’re taken up by organisms like coral and plankton, as well as by ocean sediment.
    The first approximation of modern seawater composition took place over 130 years ago. The scientists who conducted the study concluded that, despite minor variations from place to place, the ratios between the major ions in the waters of the open ocean are nearly constant.
    Researchers have generally accepted this idea from then on, and it made a lot of sense. Based on the slow turnover of these elements in the ocean — on the order of millions of years — scientists long thought the ratios of these ions would remain relatively stable over extended periods of time.
    “The main message of this paper is that we have to revisit these ratios,” said Iglesias-Rodriguez. “We cannot just continue to make the assumptions we have made in the past essentially based on the residency time of these elements.”
    Back in 2010, Iglesias-Rodriguez was participating in a research expedition over the Porcupine Abyssal Plain, a region of North Atlantic seafloor west of Europe. She had invited a former student of hers, this paper’s lead author Mario Lebrato, who was pursuing his doctorate at the time.

    Their study analyzed the chemical composition of water at various depths. Lebrato found that the Ca, Mg and Sr ratios from their samples deviated significantly from what they had expected. The finding was intriguing, but the data was from only one location.
    Over the next nine years, Lebrato put together a global survey of these element ratios. Scientists including Iglesias-Rodriguez collected over 1,100 water samples on 79 cruises ranging from the ocean’s surface to 6,000 meters down. The data came from 14 ecosystems across 10 countries. And to maintain consistency, all the samples were processed by a single person in one lab.
    The project’s results overturned the field’s 130-year-old assumption about seawater chemistry, revealing that the ratio of these ions varies considerably across the ocean.
    Scientists have long used these ratios to reconstruct past ocean conditions, like temperature. “The main implication is that the paleo-reconstructions we have been conducting have to be revisited,” Iglesias-Rodriguez explained, “because environmental conditions have a substantial impact on these ratios, which have been overlooked.”
    Oceanographers can no longer assume that data they have on past ocean chemistry represent the whole ocean. It has become clear they can extrapolate only regional conditions from this information.
    This revelation also has implications for modern marine science. Seawater ratios of Mg to Ca affect the composition of animal shells. For example, a higher magnesium content tends to make shells more vulnerable to dissolution, which is an ongoing issue as increasing carbon dioxide levels gradually make the ocean more acidic. “Biologically speaking, it is important to figure out these ratios with some degree of certainty,” said Iglesias-Rodriguez.
    Iglesias-Rodriguez’s latest project focuses on the application of rock dissolution as a method to fight ocean acidification. She’s looking at lowering the acidity of seawater using pulverized stones like olivine and carbonate rock. This intervention will likely change the balance of ions in the water, which is something worth considering. As climate change continues unabated, this intervention could help keep acidity in check in small areas, like coral reefs.