More stories

  • Revolutionary quantum breakthrough paves way for safer online communication

    The world is one step closer to having a totally secure internet and an answer to the growing threat of cyber-attacks, thanks to a team of international scientists who have created a unique prototype which could transform how we communicate online.
    The invention, led by the University of Bristol and revealed today in the journal Science Advances, is understood to be the largest-ever quantum network of its kind. It has the potential to serve millions of users and could be used to secure people’s online communication, a need made all the more pressing as the COVID-19 pandemic pushes ever more of daily life online.
    By deploying a new technique that harnesses the simple laws of physics, it can make messages completely safe from interception while also overcoming major challenges that have previously limited advances in this little-used but much-hyped technology.
    Lead author Dr Siddarth Joshi, who headed the project at the university’s Quantum Engineering Technology (QET) Labs, said: “This represents a massive breakthrough and makes the quantum internet a much more realistic proposition. Until now, building a quantum network has entailed huge cost, time, and resource, as well as often compromising on its security which defeats the whole purpose.”
    “Our solution is scalable, relatively cheap and, most important of all, impregnable. That means it’s an exciting game changer and paves the way for much more rapid development and widespread rollout of this technology.”
    The current internet relies on complex codes to protect information, but hackers are increasingly adept at outsmarting such systems, leading to cyber-attacks across the world that cause major privacy breaches and fraud running into trillions of pounds annually. With such costs projected to rise dramatically, the case for finding an alternative is even more compelling, and quantum has for decades been hailed as the revolutionary replacement for standard encryption techniques.

    So far physicists have developed a form of secure encryption, known as quantum key distribution, in which particles of light, called photons, are transmitted. The process allows two parties to share, without risk of interception, a secret key used to encrypt and decrypt information. But to date this technique has only been effective between two users.
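    To make the role of that shared key concrete, the sketch below (in Python, purely as an illustration and not as part of the Bristol system) shows how a key agreed via quantum key distribution could be used as a one-time pad: XORing a message with the key encrypts it, and XORing the result with the same key recovers it.

      # Illustrative only: how a secret key delivered by quantum key
      # distribution might be used. The key is assumed to be random bytes
      # at least as long as the message (one-time-pad style).
      import os

      def xor_bytes(data: bytes, key: bytes) -> bytes:
          """XOR each message byte with the corresponding key byte."""
          return bytes(d ^ k for d, k in zip(data, key))

      message = b"meet at dawn"
      shared_key = os.urandom(len(message))          # stand-in for a QKD-agreed key

      ciphertext = xor_bytes(message, shared_key)    # sender encrypts
      recovered = xor_bytes(ciphertext, shared_key)  # receiver decrypts with the same key
      assert recovered == message
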
    “Until now efforts to expand the network have involved vast infrastructure and a system which requires the creation of another transmitter and receiver for every additional user. Sharing messages in this way, known as trusted nodes, is just not good enough because it uses so much extra hardware which could leak and would no longer be totally secure,” Dr Joshi said.
    The team’s quantum technique applies a seemingly magical principle, called entanglement, which Albert Einstein described as ‘spooky action at a distance.’ It exploits the power of two different particles placed in separate locations, potentially thousands of miles apart, to simultaneously mimic each other. This process presents far greater opportunities for quantum computers, sensors, and information processing.
    “Instead of having to replicate the whole communication system, this latest methodology, called multiplexing, splits the light particles, emitted by a single system, so they can be received by multiple users efficiently,” Dr Joshi said.
    The team created a network for eight users using just eight receiver boxes, whereas the former method would need the number of users multiplied many times — in this case, amounting to 56 boxes. As the user numbers grow, the logistics become increasingly unviable — for instance 100 users would take 9,900 receiver boxes.
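    Those figures follow a simple scaling rule (reconstructed here from the numbers quoted above, as a quick illustration): a pairwise, trusted-node scheme needs a separate receiver for every ordered pair of users, i.e. n × (n − 1) boxes, whereas the multiplexed scheme needs only one box per user.

      # Scaling arithmetic implied by the article's figures (illustrative).
      def receivers_pairwise(n_users: int) -> int:
          """Trusted-node style: one receiver per user per communication partner."""
          return n_users * (n_users - 1)

      def receivers_multiplexed(n_users: int) -> int:
          """Multiplexed scheme: one receiver box per user."""
          return n_users

      for n in (8, 100):
          print(n, receivers_pairwise(n), receivers_multiplexed(n))
      # 8   -> 56 vs 8      (the 56 boxes quoted for eight users)
      # 100 -> 9900 vs 100  (the 9,900 boxes quoted for 100 users)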

    To demonstrate its functionality across distance, the receiver boxes were connected to optical fibres at different locations across Bristol, and the ability to transmit messages via quantum communication was tested using the city’s existing optical fibre network.
    “Besides being completely secure, the beauty of this new technique is its streamlined agility, which requires minimal hardware because it integrates with existing technology,” Dr Joshi said.
    The team’s unique system also features traffic management, delivering better network control which allows, for instance, certain users to be prioritised with a faster connection.
    Whereas previous quantum systems have taken years to build, at a cost of millions or even billions of pounds, this network was created within months for less than £300,000. The financial advantages grow as the network expands, so while 100 users on previous quantum systems might cost in the region of £5 billion, Dr Joshi believes multiplexing technology could slash that to around £4.5 million, less than 1 per cent.
    In recent years quantum cryptography has been successfully used to protect transactions between banking centres in China and secure votes at a Swiss election. Yet its wider application has been held back by the sheer scale of resources and costs involved.
    “With these economies of scale, the prospect of a quantum internet for universal usage is much less far-fetched. We have proved the concept and by further refining our multiplexing methods to optimise and share resources in the network, we could be looking at serving not just hundreds or thousands, but potentially millions of users in the not too distant future,” Dr Joshi said.
    “The ramifications of the COVID-19 pandemic have not only shown the importance and potential of the internet, and our growing dependence on it, but also how its absolute security is paramount. Multiplexing entanglement could hold the vital key to making this security a much-needed reality.”

  • Predictive placentas: Using artificial intelligence to protect mothers' future pregnancies

    After a baby is born, doctors sometimes examine the placenta — the organ that links the mother to the baby — for features that indicate health risks in any future pregnancies. Unfortunately, this is a time-consuming process that must be performed by a specialist, so most placentas go unexamined after the birth. In the American Journal of Pathology, published by Elsevier, a team of researchers from Carnegie Mellon University (CMU) and the University of Pittsburgh Medical Center (UPMC) reports the development of a machine learning approach to examining placenta slides, so that more women can be informed of their health risks.
    One reason placentas are examined is to look for a type of blood vessel lesions called decidual vasculopathy (DV). These indicate the mother is at risk for preeclampsia — a complication that can be fatal to the mother and baby — in any future pregnancies. Once detected, preeclampsia can be treated, so there is considerable benefit from identifying at-risk mothers before symptoms appear. However, although there are hundreds of blood vessels in a single slide, only one diseased vessel is needed to indicate risk.
    “Pathologists train for years to be able to find disease in these images, but there are so many pregnancies going through the hospital system that they don’t have time to inspect every placenta,” said Daniel Clymer, PhD, alumnus, Department of Mechanical Engineering, CMU, Pittsburgh, PA, USA. “Our algorithm helps pathologists know which images they should focus on by scanning an image, locating blood vessels, and finding patterns of the blood vessels that identify DV.”
    Machine learning works by “training” the computer to recognize certain features in data files. In this case, the data file is an image of a thin slice of a placenta sample. Researchers show the computer various images and indicate whether the placenta is diseased or healthy. After sufficient training, the computer is able to identify diseased lesions on its own.
    It is quite difficult for a computer to simply look at a large picture and classify it, so the team introduced a novel approach in which the computer follows a series of steps to make the task more manageable. First, the computer detects all blood vessels in an image. Each blood vessel can then be considered individually, creating smaller data packets for analysis. The computer then assesses each blood vessel and determines whether it should be deemed diseased or healthy. At this stage, the algorithm also considers features of the pregnancy, such as gestational age, birth weight, and any conditions the mother might have. If any blood vessel is diseased, then the picture — and therefore the placenta — is marked as diseased. The UPMC team provided the de-identified placenta images used to train the algorithm.
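    The overall flow can be sketched roughly as follows (hypothetical Python, with stub functions standing in for the trained models; the names and signatures are illustrative and are not taken from the CMU/UPMC code):

      # Hypothetical sketch of the slide-triage pipeline described above.
      from dataclasses import dataclass, field
      from typing import Any, List

      @dataclass
      class PregnancyFeatures:
          gestational_age_weeks: float
          birth_weight_grams: float
          maternal_conditions: List[str] = field(default_factory=list)

      def detect_vessels(slide_image: Any) -> List[Any]:
          """Step 1: locate every blood vessel and crop it into its own patch."""
          return []  # stub: a trained detector would return vessel patches

      def vessel_is_diseased(patch: Any, features: PregnancyFeatures) -> bool:
          """Step 2: classify one vessel, using clinical features alongside the image."""
          return False  # stub: a trained classifier would flag decidual vasculopathy

      def slide_is_diseased(slide_image: Any, features: PregnancyFeatures) -> bool:
          """Step 3: flag the slide (and placenta) if any single vessel is flagged."""
          return any(vessel_is_diseased(p, features)
                     for p in detect_vessels(slide_image))
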
    “This algorithm isn’t going to replace a pathologist anytime soon,” Dr. Clymer explained. “The goal here is that this type of algorithm might be able to help speed up the process by flagging regions of the image where the pathologist should take a closer look.”
    “This is a beautiful collaboration between engineering and medicine as each brings expertise to the table that, when combined, creates novel findings that can help so many individuals,” added lead investigators Jonathan Cagan, PhD, and Philip LeDuc, PhD, professors of mechanical engineering at CMU, Pittsburgh, PA, USA.
    “As healthcare increasingly embraces the role of artificial intelligence, it is important that doctors partner early on with computer scientists and engineers so that we can design and develop the right tools for the job to positively impact patient outcomes,” noted co-author Liron Pantanowitz, MBBCh, formerly vice chair for pathology informatics at UPMC, Pittsburgh, PA, USA. “This partnership between CMU and UPMC is a perfect example of what can be accomplished when this happens.”

    Story Source:
    Materials provided by Elsevier. Note: Content may be edited for style and length.

  • A molecular approach to quantum computing

    The technology behind the quantum computers of the future is fast developing, with several different approaches in progress. Many of the strategies, or “blueprints,” for quantum computers rely on atoms or artificial atom-like electrical circuits. In a new theoretical study in the journal Physical Review X, a group of physicists at Caltech demonstrates the benefits of a lesser-studied approach that relies not on atoms but molecules.
    “In the quantum world, we have several blueprints on the table and we are simultaneously improving all of them,” says lead author Victor Albert, the Lee A. DuBridge Postdoctoral Scholar in Theoretical Physics. “People have been thinking about using molecules to encode information since 2001, but now we are showing how molecules, which are more complex than atoms, could lead to fewer errors in quantum computing.”
    At the heart of quantum computers are what are known as qubits. These are similar to the bits in classical computers, but unlike classical bits they can experience a bizarre phenomenon known as superposition in which they exist in two states or more at once. Like the famous Schrödinger’s cat thought experiment, which describes a cat that is both dead and alive at the same time, particles can exist in multiple states at once. The phenomenon of superposition is at the heart of quantum computing: the fact that qubits can take on many forms simultaneously means that they have exponentially more computing power than classical bits.
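    In standard notation (textbook background, not specific to this study), a single qubit in superposition is written as

      |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

    so a register of n such qubits can sit in a superposition over all 2^n classical bit strings at once, which is the sense in which qubits offer exponentially more room to compute than classical bits.
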
    But the state of superposition is a delicate one, as qubits are prone to collapsing out of their desired states, and this leads to computing errors.
    “In classical computing, you have to worry about the bits flipping, in which a ‘1’ bit goes to a ‘0’ or vice versa, which causes errors,” says Albert. “This is like flipping a coin, and it is hard to do. But in quantum computing, the information is stored in fragile superpositions, and even the quantum equivalent of a gust of wind can lead to errors.”
    However, if a quantum computer platform uses qubits made of molecules, the researchers say, these types of errors are more likely to be prevented than in other quantum platforms. One concept behind the new research comes from work performed nearly 20 years ago by Caltech researchers John Preskill, Richard P. Feynman Professor of Theoretical Physics and director of the Institute of Quantum Information and Matter (IQIM), and Alexei Kitaev, the Ronald and Maxine Linde Professor of Theoretical Physics and Mathematics at Caltech, along with their colleague Daniel Gottesman (PhD ’97) of the Perimeter Institute in Ontario, Canada. Back then, the scientists proposed a loophole that would provide a way around a phenomenon called Heisenberg’s uncertainty principle, which was introduced in 1927 by German physicist Werner Heisenberg. The principle states that one cannot simultaneously know with very high precision both where a particle is and where it is going.

    “There is a joke where Heisenberg gets pulled over by a police officer who says he knows Heisenberg’s speed was 90 miles per hour, and Heisenberg replies, ‘Now I have no idea where I am,'” says Albert.
    The uncertainty principle is a challenge for quantum computers because it implies that the quantum states of the qubits cannot be known well enough to determine whether or not errors have occurred. However, Gottesman, Kitaev, and Preskill figured out that while the exact position and momentum of a particle could not be measured, it was possible to detect very tiny shifts to its position and momentum. These shifts could reveal that an error has occurred, making it possible to push the system back to the correct state. This error-correcting scheme, known as GKP after its discoverers, has recently been implemented in superconducting circuit devices.
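    For readers who want the idea in symbols, the square-lattice GKP code is conventionally defined (in units where ħ = 1; this is the standard textbook form, not a formula taken from the new paper) by two commuting stabilizers,

      \hat S_q = e^{\,2i\sqrt{\pi}\,\hat q}, \qquad \hat S_p = e^{-2i\sqrt{\pi}\,\hat p},

    and because [\hat q, \hat p] = i these can be measured together on a code state: the measured phases reveal any small shift of position or momentum modulo \sqrt{\pi}, which can then be undone before it grows into a logical error.
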
    “Errors are okay but only if we know they happen,” says Preskill, a co-author on the Physical Review X paper and also the scientific coordinator for a new Department of Energy-funded science center called the Quantum Systems Accelerator. “The whole point of error correction is to maximize the amount of knowledge we have about potential errors.”
    In the new paper, this concept is applied to rotating molecules in superposition. If the orientation or angular momentum of the molecule shifts by a small amount, those shifts can be simultaneously corrected.
    “We want to track the quantum information as it’s evolving under the noise,” says Albert. “The noise is kicking us around a little bit. But if we have a carefully chosen superposition of the molecules’ states, we can measure both orientation and angular momentum as long as they are small enough. And then we can kick the system back to compensate.”
    Jacob Covey, a co-author on the paper and former Caltech postdoctoral scholar who recently joined the faculty at the University of Illinois, says that it might be possible to eventually individually control molecules for use in quantum information systems such as these. He and his team have made strides in using optical laser beams, or “tweezers,” to control single neutral atoms (neutral atoms are another promising platform for quantum-information systems).
    “The appeal of molecules is that they are very complex structures that can be very densely packed,” says Covey. “If we can figure out how to utilize molecules in quantum computing, we can robustly encode information and improve the efficiency in which qubits are packed.”
    Albert says that the trio of himself, Preskill, and Covey provided the perfect combination of theoretical and experimental expertise to achieve the latest results. He and Preskill are both theorists while Covey is an experimentalist. “It was really nice to have somebody like John to help me with the framework for all this theory of error-correcting codes, and Jake gave us crucial guidance on what is happening in labs.”
    Says Preskill, “This is a paper that no one of the three of us could have written on our own. What’s really fun about the field of quantum information is that it’s encouraging us to interact across some of these divides, and Caltech, with its small size, is the perfect place to get this done.”

  • A surprising opportunity for telehealth in shaping the future of medicine

    Expanded telehealth services at UT Southwestern have proved effective at safely delivering patient care during the pandemic, leading to an increase in patients even in specialties such as plastic surgery, according to a new study.
    The study, published in the Aesthetic Surgery Journal, illuminates the unexpected benefits that telehealth has had during the pandemic and provides insight into what this may mean for the future of medicine in the United States.
    “Prior to COVID-19, it was not clear if telehealth would meet the standard of care in highly specialized clinical practices. Out of necessity, we were forced to innovate quickly. What we found is that it is actually a really good fit,” says Alan Kramer, M.P.H., assistant vice president of health system emerging strategies at UTSW and co-author of the study.
    UT Southwestern was already equipped with telehealth technology when COVID-19 hit — but only as a small pilot program. Through incredible team efforts, telehealth was expanded across the institution within days, bringing with it several unanticipated benefits for both the medical center and patients.
    “The conversion rate to telehealth is higher than in person,” says Bardia Amirlak, M.D., FACS, associate professor of plastic surgery and the study’s senior corresponding author. The study found that 25,197 of 34,706 telehealth appointments across the institution were completed in April 2020 — a 72.6 percent completion rate — compared with a 65.8 percent completion rate for in-person visits in April 2019.
    The study notes the significant increases in the volume of new patients seen by telehealth beginning in March 2020. This resulted from a combination of relaxed regulations and an increasing comfort level with telehealth visits among physicians and patients. UTSW saw the percentage of new patients seen through telehealth visits increase from 0.77 percent in February to 14.2 percent and 16.7 percent in March and April, respectively.

    Even within a niche field like plastic surgery, the implementation of telehealth has been incredibly successful, demonstrating the adaptability of telehealth to a wide range of practices. From April to mid-May, plastic surgery completed 340 telehealth visits in areas such as breast cancer reconstruction, hand surgery, and wound care, with completion rates similar to those of UTSW as a whole. Likewise, plastic surgery also saw a large number of new patients, who comprised 41 percent of its telehealth visits.
    “The fear was that the platform wouldn’t be able to handle it: the privacy issues, insurance issues, malpractice issues … but it came together well and we were able to ramp up into the thousands, and were able to not only decrease patient anxiety, but also increase many beneficial factors, such as patient access,” says Amirlak.
    The study reported several boons for telehealth patients, including reductions in stress, missed work, hospital visits, travel time, and exposure to pathogens, as well as improved access to care through the option of out-of-state consultations. Indeed, patients from 43 states and Puerto Rico have participated in telehealth visits at UTSW facilities since March.
    Even as COVID-19 restrictions have eased in Texas, telehealth is still proving to be a major part of UT Southwestern’s clinical practice. “The feedback from patients has been very positive,” says Kramer. “We’re now sustaining 25 percent of our practice being done virtually, a major win for our patients. It’s changed the way we think about care.”
    Whether this trend continues into the post-COVID-19 world remains to be seen, he says. But either way, Kramer says, it is clear that telehealth will be a useful tool.
    The numerous benefits that telehealth has to offer are accompanied by several challenges, however, such as the practicality and risks of remote diagnostic medicine. Though technology is starting to address some of these issues with tools such as electronic stethoscopes and consumer-facing apps that can measure blood oxygen levels and perform electrocardiograms, some argue the value of the in-person physical exam cannot be replaced. Moving forward, Amirlak says, “it will be our responsibility as physicians and scientists to recognize the potential dangers of taking telehealth to the extreme right now and missing a clinical diagnosis.”
    Aside from patient-facing issues, other challenges need to be included in discussions of the future of telehealth, including federal, state, and local laws; privacy concerns; and Health Insurance Portability and Accountability Act (HIPAA) regulations. Many statutes and restrictions have been loosened during the pandemic, allowing institutions like UTSW to implement telehealth rapidly and effectively. But the future of telehealth will necessitate the development of long-term regulations.
    “Based on the trends, it seems that telehealth is here to stay. So it’s important to think about the concerns, and based on this information, the issues that we have and how we can resolve them going forward,” says Christine Wamsley, a UTSW research fellow and first author of the study. With the ramp-up of telehealth and the related easing of restrictions amid the COVID-19 pandemic, now may be the best opportunity for health care providers and governmental agencies to address these challenges and set out guidelines for the practice of telehealth.

  • Miniature antenna enables robotic teaming in complex environments

    A new, miniature, low-frequency antenna with enhanced bandwidth will enable robust networking among compact, mobile robots in complex environments.
    In a collaborative effort between the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory and the University of Michigan, researchers developed a novel design approach that improves upon limitations of conventional antennas operating at low frequencies — demonstrating smaller antennas that maintain performance.
    Impedance matching is a key aspect of antenna design, ensuring that the radio transmits power through the antenna with minimal reflections while in transmit mode — and that when the antenna is in receive mode, it captures power to efficiently couple to the radio over all frequencies within the operational bandwidth.
    “Conventional impedance matching techniques with passive components — such as resistors, inductors and capacitors — have a fundamental limit, known as the Chu-Wheeler limit, which defines a bound for the maximum achievable bandwidth-efficiency product for a given antenna size,” said Army researcher Dr. Fikadu Dagefu. “In general, low-frequency antennas are physically large, or their miniaturized counterparts have very limited bandwidth and efficiency, resulting in higher power requirement.”
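    For context, one common statement of that bound (for a lossless, linearly polarized antenna that fits inside a sphere of radius a; quoted here as general background rather than from the paper) is

      Q \ge \frac{1}{(ka)^3} + \frac{1}{ka}, \qquad k = \frac{2\pi}{\lambda},

    and since the usable fractional bandwidth scales roughly as the inverse of Q, weighted by efficiency, shrinking the antenna relative to the wavelength λ rapidly squeezes the achievable bandwidth-efficiency product. This is the ceiling that the active-matching approach described below is designed to get around.
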
    With those challenges in mind, the researchers developed a novel approach that improves bandwidth and efficiency without increasing size or changing the topology of the antenna.
    “The proposed impedance matching approach applies a modular active circuit to a highly miniaturized, efficient, lightweight antenna — overcoming the aforementioned Chu-Wheeler performance limit,” said Army postdoctoral researcher Dr. Jihun Choi. “This miniature, actively matched antenna enables the integration of power-efficient, low-frequency radio systems on compact mobile agents such as unmanned ground and aerial vehicles.”
    The researchers said this approach could create new opportunities for networking in the Army.

    The ability to integrate low-frequency radio systems with low size, weight, and power — or SWAP — opens the door for the exploitation of this underutilized and underexplored frequency band as part of the heterogeneous autonomous networking paradigm. In this paradigm, agents equipped with complementary communications modalities must adapt their approaches based on challenges in the environment for that specific mission. Specifically, the lower frequencies are suitable for reliable communications in complex propagation environments and terrain due to their improved penetration and reduced multipath.
    “We integrated the developed antenna on small, unmanned ground vehicles and demonstrated reliable, real-time digital video streaming between UGVs, which has not been done before with such compact low-frequency radio systems,” Dagefu said. “By exploiting this technology, the robotic agents could coordinate and form teams, enabling unique capabilities such as distributed on-demand beamforming for directional and secure battlefield networking.”
    With more than 80 percent of the world’s population expected to live in dense urban environments by 2050, innovative Army networking capabilities are necessary to create and maintain transformational overmatch, the researchers said. Lack of fixed infrastructure coupled with the increasing need for a competitive advantage over near-peer adversaries imposes further challenges on Army networks, a top modernization priority for multi-domain operations.
    While previous experimental studies demonstrated bandwidth enhancement with active matching applied to a small non-resonant antenna (e.g., a short metallic wire), no previous work has simultaneously achieved bandwidth and radiation-efficiency enhancement compared to small, resonant antennas whose performance approaches the Chu-Wheeler limit.
    The Army-led active matching design approach addresses these key challenges stemming from the trade-off among bandwidth, efficiency and stability. The researchers built a 15-centimeter prototype (2 percent of the operating wavelength) and demonstrated that the new design achieves more than threefold bandwidth enhancement compared to the same antenna without applying active matching, while also improving the transmission efficiency 10 times compared to the state-of-the-art actively matched antennas with the same size.
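    As a quick consistency check on those figures (an illustrative calculation, not one drawn from the paper), a 15-centimeter antenna spanning 2 percent of the operating wavelength implies a 7.5-meter wavelength and hence an operating frequency of roughly 40 MHz, squarely in the low-VHF band named in the paper’s title.

      # Back-of-the-envelope check on the quoted prototype size (illustrative).
      C = 299_792_458.0                  # speed of light, m/s

      antenna_length_m = 0.15            # "15-centimeter prototype"
      fraction_of_wavelength = 0.02      # "2 percent of the operating wavelength"

      wavelength_m = antenna_length_m / fraction_of_wavelength   # 7.5 m
      frequency_hz = C / wavelength_m                            # ~40 MHz

      print(f"{wavelength_m:.1f} m wavelength -> {frequency_hz / 1e6:.0f} MHz (low VHF)")
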
    “In the design, a highly accurate model captures the sharp impedance variation of the highly miniaturized resonant antenna,” Choi said. “Based on the model, we develop an active matching circuit that enhances bandwidth and efficiency simultaneously while ensuring the circuit is fully stable.”
    The team published their research, “A Miniature Actively Matched Antenna for Power-Efficient and Bandwidth-Enhanced Operation at Low VHF,” authored by Drs. Jihun Choi, Fikadu Dagefu, and Brian Sadler and Prof. Kamal Sarabandi, in the peer-reviewed journal IEEE Transactions on Antennas and Propagation.
    “This technology is ripe for future development and transition to our various partners within the Army,” Dagefu said. “We are optimistic that with the integration of aspects of our heterogeneous networking research, this technology will further develop and will be integrated into future Army communications systems.”

  • A multinational study overturns a 130-year-old assumption about seawater chemistry

    There’s more to seawater than salt. Ocean chemistry is a complex mixture of particles, ions and nutrients. And for over a century, scientists believed that certain ion ratios held relatively constant over space and time.
    But now, following a decade of research, a multinational study has refuted this assumption. Debora Iglesias-Rodriguez, professor and vice chair of UC Santa Barbara’s Department of Ecology, Evolution, and Marine Biology, and her colleagues discovered that the seawater ratios of three key elements vary across the ocean, which means scientists will have to re-examine many of their hypotheses and models. The results appear in the Proceedings of the National Academy of Sciences.
    Calcium, magnesium and strontium (Ca, Mg and Sr) are important elements in ocean chemistry, involved in a number of biologic and geologic processes. For instance, a host of different animals and microbes use calcium to build their skeletons and shells. These elements enter the ocean via rivers and tectonic features, such as hydrothermal vents. They’re taken up by organisms like coral and plankton, as well as by ocean sediment.
    The first approximation of modern seawater composition took place over 130 years ago. The scientists who conducted the study concluded that, despite minor variations from place to place, the ratios between the major ions in the waters of the open ocean are nearly constant.
    Researchers have generally accepted this idea from then on, and it made a lot of sense. Based on the slow turnover of these elements in the ocean — on the order of millions of years — scientists long thought the ratios of these ions would remain relatively stable over extended periods of time.
    “The main message of this paper is that we have to revisit these ratios,” said Iglesias-Rodriguez. “We cannot just continue to make the assumptions we have made in the past essentially based on the residency time of these elements.”
    Back in 2010, Iglesias-Rodriguez was participating in a research expedition over the Porcupine Abyssal Plain, a region of North Atlantic seafloor west of Europe. She had invited a former student of hers, this paper’s lead author Mario Lebrato, who was pursuing his doctorate at the time.

    Their study analyzed the chemical composition of water at various depths. Lebrato found that the Ca, Mg and Sr ratios from their samples deviated significantly from what they had expected. The finding was intriguing, but the data was from only one location.
    Over the next nine years, Lebrato put together a global survey of these element ratios. Scientists including Iglesias-Rodriguez collected over 1,100 water samples on 79 cruises ranging from the ocean’s surface to 6,000 meters down. The data came from 14 ecosystems across 10 countries. And to maintain consistency, all the samples were processed by a single person in one lab.
    The project’s results overturned the field’s 130-year-old assumption about seawater chemistry, revealing that the ratios of these ions vary considerably across the ocean.
    Scientists have long used these ratios to reconstruct past ocean conditions, like temperature. “The main implication is that the paleo-reconstructions we have been conducting have to be revisited,” Iglesias-Rodriguez explained, “because environmental conditions have a substantial impact on these ratios, which have been overlooked.”
    Oceanographers can no longer assume that data they have on past ocean chemistry represent the whole ocean. It has become clear they can extrapolate only regional conditions from this information.
    This revelation also has implications for modern marine science. Seawater ratios of Mg to Ca affect the composition of animal shells. For example, a higher magnesium content tends to make shells more vulnerable to dissolution, which is an ongoing issue as increasing carbon dioxide levels gradually make the ocean more acidic. “Biologically speaking, it is important to figure out these ratios with some degree of certainty,” said Iglesias-Rodriguez.
    Iglesias-Rodriguez’s latest project focuses on the application of rock dissolution as a method to fight ocean acidification. She’s looking at lowering the acidity of seawater using pulverized stones like olivine and carbonate rock. This intervention will likely change the balance of ions in the water, which is something worth considering. As climate change continues unabated, this intervention could help keep acidity in check in small areas, like coral reefs.

  • An embedded ethics approach for AI development

    The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.
    Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …
    Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July.
    What exactly is meant by the “embedded ethics approach”?
    Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.
    Is there an example of this concept in practice?
    Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units — with few doors or individual rooms — would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.
    The approach sounds promising. But how can we keep “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?
    Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language — and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Expert Group on Artificial Intelligence. In his research, he is guided by the concept of human-centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical.” Ultimately, that will require laws, codes of conduct and possibly state incentives.

    Story Source:
    Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.

  • Managing data flow boosts cyber-physical system performance

    Researchers from North Carolina State University have developed a suite of algorithms to improve the performance of cyber-physical systems — from autonomous vehicles to smart power grids — by balancing each component’s need for data with how fast that data can be sent and received.
    “Cyber-physical systems integrate sensors, devices, and communications tools, allowing all of the elements of a system to share information and coordinate their activities in order to accomplish goals,” says Aranya Chakrabortty, co-author of a paper on the new algorithms and a professor of electrical and computer engineering at NC State. “These systems have tremendous potential — the National Science Foundation refers to them as ‘enabling a smart and connected world’ — but these systems also pose challenges.
    “Specifically, the physical agents in a system — the devices — need a lot of communication links in order to function effectively. This leads to large volumes of data flowing through the communication network, which causes routing and queuing delays. These delays can cause long waiting times for the agents to take action, thereby degrading the quality of the system. In other words, there’s so much data, being passed through so many links, that a system may not be able to accomplish its established goals — the lag time is just too long.”
    This creates a dilemma. Reducing communication hurts the quality of the system’s performance, because each element of the system operates with less information. On the other hand, reducing communication means that the information each element does receive arrives more quickly.
    “So, it’s all a trade-off,” Chakrabortty says. “The right balance needs to be struck between all three variables — namely, the right amount of communication sparsity, the optimal delay, and the best achievable performance of the agents. Striking this fine balance to carry out the mission in the best possible way while also ensuring safe and stable operation of every agent is not easy. This is where our algorithms come in.”
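    As a toy illustration of that trade-off (an invented cost model for intuition only, not the NC State algorithms), one can imagine scoring each candidate number of communication links by an information penalty that falls as links are added and a delay penalty that grows with them, then picking the sweet spot:

      # Toy illustration of the sparsity / delay / performance trade-off.
      # The cost model is invented for illustration; it is not the NC State method.
      def information_penalty(links: int) -> float:
          """Performance loss from agents acting on less information (fewer links)."""
          return 100.0 / links

      def delay_penalty(links: int) -> float:
          """Routing and queuing delay grows as more data flows over more links."""
          return 0.5 * links

      def total_cost(links: int) -> float:
          return information_penalty(links) + delay_penalty(links)

      best = min(range(1, 101), key=total_cost)
      print(best, round(total_cost(best), 2))  # the balance point between the two penalties
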
    Chakrabortty and graduate student Nandini Negi developed three algorithms that, taken together, reduce the overall number of data requests from each node in a system, but ensure that each node receives enough information, quickly enough, to achieve system goals.
    “There is no one-size-fits-all solution that will apply to every cyber-physical system,” Negi says. “But our algorithms allow users to identify the optimal communications solution for any system.”

    Story Source:
    Materials provided by North Carolina State University. Note: Content may be edited for style and length.