More stories

  • Robot, can you say ‘cheese’?

    What would you do if you walked up to a robot with a human-like head and it smiled at you first? You’d likely smile back and perhaps feel the two of you were genuinely interacting. But how does a robot know how to do this? Or, a better question: how does it know how to get you to smile back?
    While we’re getting accustomed to robots that are adept at verbal communication, thanks in part to advancements in large language models like ChatGPT, their nonverbal communication skills, especially facial expressions, have lagged far behind. Designing a robot that can not only make a wide range of facial expressions but also know when to use them has been a daunting task.
    Tackling the challenge
    The Creative Machines Lab at Columbia Engineering has been working on this challenge for more than five years. In a new study published today in Science Robotics, the group unveils Emo, a robot that anticipates facial expressions and executes them simultaneously with a human. It has even learned to predict a forthcoming smile about 840 milliseconds before the person smiles, and to co-express the smile simultaneously with the person.
    The team, led by Hod Lipson, a leading researcher in the fields of artificial intelligence (AI) and robotics, faced two challenges: how to mechanically design an expressively versatile robotic face, which involves complex hardware and actuation mechanisms, and how to know which expression to generate so that it appears natural, timely, and genuine. The team proposed training a robot to anticipate future facial expressions in humans and execute them simultaneously with a person. The timing of these expressions was critical — delayed facial mimicry looks disingenuous, whereas facial co-expression feels more genuine, since it requires correctly inferring the human’s emotional state in time to act on it.
    How Emo connects with you
    Emo is a human-like head with a face that is equipped with 26 actuators that enable a broad range of nuanced facial expressions. The head is covered with a soft silicone skin with a magnetic attachment system, allowing for easy customization and quick maintenance. For more lifelike interactions, the researchers integrated high-resolution cameras within the pupil of each eye, enabling Emo to make eye contact, crucial for nonverbal communication.

    The team developed two AI models: one that predicts human facial expressions by analyzing subtle changes in the target face, and another that generates the motor commands needed to produce the corresponding facial expressions.
    To teach the robot how to make facial expressions, the researchers put Emo in front of a camera and let it make random movements. After a few hours, the robot had learned the relationship between its facial expressions and the motor commands — much the way humans practice facial expressions by looking in the mirror. This is what the team calls “self-modeling” — similar to our human ability to imagine what we look like when we make certain expressions.
    Then the team played videos of human facial expressions for Emo to observe frame by frame. After training, which took a few hours, Emo could predict people’s facial expressions by observing tiny changes in their faces as they begin to form an intent to smile.
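    As a rough illustration of how these two pieces fit together, the sketch below wires a hypothetical expression predictor into an inverse “self-model” that turns a target expression into actuator commands. It is a minimal Python/PyTorch sketch of the pipeline as described above, not the authors’ code: the landmark-based face representation, the window size and the network shapes are assumptions; only the 26-actuator output comes from the article.

    ```python
    import torch
    import torch.nn as nn

    N_LANDMARKS = 2 * 68   # (x, y) facial landmarks per frame -- an assumed face representation
    N_MOTORS = 26          # Emo's face is driven by 26 actuators (from the article)
    WINDOW = 10            # number of recent frames used to anticipate the next expression

    # Model 1: anticipate the person's upcoming expression from subtle changes
    # across a short window of recent frames.
    expression_predictor = nn.Sequential(
        nn.Flatten(),
        nn.Linear(WINDOW * N_LANDMARKS, 256), nn.ReLU(),
        nn.Linear(256, N_LANDMARKS),             # predicted future landmark layout
    )

    # Model 2: the inverse "self-model" learned from random facial movements in
    # front of a camera: given a target expression, output motor commands.
    self_model = nn.Sequential(
        nn.Linear(N_LANDMARKS, 128), nn.ReLU(),
        nn.Linear(128, N_MOTORS), nn.Sigmoid(),  # normalized actuator positions
    )

    def co_express(recent_frames: torch.Tensor) -> torch.Tensor:
        """recent_frames: (batch, WINDOW, N_LANDMARKS) -> (batch, N_MOTORS)."""
        predicted_expression = expression_predictor(recent_frames)
        return self_model(predicted_expression)

    # Untrained example call with dummy data
    motors = co_express(torch.randn(1, WINDOW, N_LANDMARKS))
    print(motors.shape)   # torch.Size([1, 26])
    ```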
    “I think predicting human facial expressions accurately is a revolution in HRI. Traditionally, robots have not been designed to consider humans’ expressions during interactions. Now, the robot can integrate human facial expressions as feedback,” said the study’s lead author Yuhang Hu, who is a PhD student at Columbia Engineering in Lipson’s lab. “When a robot makes co-expressions with people in real-time, it not only improves the interaction quality but also helps in building trust between humans and robots. In the future, when interacting with a robot, it will observe and interpret your facial expressions, just like a real person.”
    What’s next
    The researchers are now working to integrate verbal communication into Emo, using a large language model such as ChatGPT. As robots become more capable of behaving like humans, Lipson is well aware of the ethical considerations associated with this new technology.
    “Although this capability heralds a plethora of positive applications, ranging from home assistants to educational aids, it is incumbent upon developers and users to exercise prudence and ethical considerations,” says Lipson, James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering at Columbia Engineering, co-director of the Makerspace at Columbia, and a member of the Data Science Institute. “But it’s also very exciting — by advancing robots that can interpret and mimic human expressions accurately, we’re moving closer to a future where robots can seamlessly integrate into our daily lives, offering companionship, assistance, and even empathy. Imagine a world where interacting with a robot feels as natural and comfortable as talking to a friend.”

  • More efficient TVs, screens and lighting

    New multidisciplinary research from the University of St Andrews could lead to more efficient televisions, computer screens and lighting.
    Researchers at the Organic Semiconductor Centre in the School of Physics and Astronomy, and the School of Chemistry have proposed a new approach to designing efficient light-emitting materials in a paper published this week in Nature (27 March).
    Light-emitting materials are used in organic light-emitting diodes (OLEDs) that are now found in the majority of mobile phone displays and smartwatches, and some televisions and automotive lighting.
    The latest generation of emitter materials under development produce OLEDs that have high efficiency at low brightness, but suffer reduced efficiency as the brightness is increased to the levels required for lighting and outdoor applications. This problem is known as ‘efficiency roll-off’.
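    To make the notion of roll-off concrete, the short sketch below evaluates a phenomenological roll-off curve widely used in the OLED literature, in which the external quantum efficiency falls to half its low-brightness value at a critical current density J0. This is a generic illustration of how roll-off is typically quantified, not the model proposed in the St Andrews paper, and the numbers are invented.

    ```python
    import numpy as np

    def tta_rolloff(J, J0):
        """Relative efficiency EQE(J)/EQE(0) under a common triplet-triplet
        annihilation roll-off model; J0 is the current density at which the
        efficiency has dropped to one half."""
        J = np.asarray(J, dtype=float)
        return (J0 / (4 * J)) * (np.sqrt(1 + 8 * J / J0) - 1)

    J0 = 10.0  # mA/cm^2, illustrative value only
    for J in (J0, 10 * J0, 100 * J0):
        print(f"J = {J:6.1f} mA/cm^2 -> EQE/EQE0 = {tta_rolloff(J, J0):.2f}")
    ```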
    The researchers have identified the combination of material properties required to overcome this problem. Guidelines developed by the team, led by Professor Ifor Samuel and Professor Eli Zysman-Colman, will help OLED researchers develop materials that maintain high efficiency at high brightness, enabling the latest materials to be used for applications in displays, lighting and medicine.
    Commenting on the research, Professor Zysman-Colman explained that the findings “provide clearer insight into the link between the properties of the emitter material and the performance of the OLED.”
    Professor Samuel said, “Our new approach to this problem will help to develop bright, efficient and colourful OLEDs that use less power.”

  • New software enables blind and low-vision users to create interactive, accessible charts

    A growing number of tools enable users to make online data representations, like charts, that are accessible for people who are blind or have low vision. However, most tools require an existing visual chart that can then be converted into an accessible format.
    This creates barriers that prevent blind and low-vision users from building their own custom data representations, and it can limit their ability to explore and analyze important information.
    A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.
    They created a software system called Umwelt (which means “environment” in German) that can enable blind and low-vision users to build customized, multimodal data representations without needing an initial visual chart.
    Umwelt, an authoring environment designed for screen-reader users, incorporates an editor that allows someone to upload a dataset and create a customized representation, such as a scatterplot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into nonspeech audio.
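    To give a flavour of what sonification means in practice, the sketch below maps a small data series to a sequence of tones whose pitch rises with the value and writes the result to a WAV file. It is a generic, pitch-mapped example of the idea, assuming nothing about Umwelt’s actual audio design (which, as described below, can also use tone length).

    ```python
    import numpy as np
    import wave

    def sonify(values, out_path="sonification.wav", rate=44100, tone_s=0.3,
               f_lo=220.0, f_hi=880.0):
        """Render each data value as a pure tone whose pitch rises with the value,
        played one after another (sonification is inherently linear in time)."""
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        t = np.linspace(0, tone_s, int(rate * tone_s), endpoint=False)
        tones = [0.3 * np.sin(2 * np.pi * (f_lo + (v - lo) / span * (f_hi - f_lo)) * t)
                 for v in values]
        pcm = (np.concatenate(tones) * 32767).astype(np.int16)
        with wave.open(out_path, "wb") as w:
            w.setnchannels(1)      # mono
            w.setsampwidth(2)      # 16-bit samples
            w.setframerate(rate)
            w.writeframes(pcm.tobytes())

    # Example: a short series of values becomes five tones of rising and falling pitch
    sonify([101.2, 99.8, 103.5, 107.1, 104.0])
    ```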
    The system, which can represent a variety of data types, includes a viewer that enables a blind or low-vision user to interactively explore a data representation, seamlessly switching between each modality to interact with data in a different way.
    The researchers conducted a study with five expert screen-reader users who found Umwelt to be useful and easy to learn. In addition to offering an interface that empowered them to create data representations — something they said was sorely lacking — the users said Umwelt could facilitate communication between people who rely on different senses.

    “We have to remember that blind and low-vision people aren’t isolated. They exist in these contexts where they want to talk to other people about data,” says Jonathan Zong, an electrical engineering and computer science (EECS) graduate student and lead author of a paper introducing Umwelt. “I am hopeful that Umwelt helps shift the way that researchers think about accessible data analysis. Enabling the full participation of blind and low-vision people in data analysis involves seeing visualization as just one piece of this bigger, multisensory puzzle.”
    Joining Zong on the paper are fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu “Katie” Chen; Daniel Hajas, a UCL researcher who works with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory. The paper will be presented at the ACM Conference on Human Factors in Computing Systems.
    De-centering visualization
    The researchers previously developed interactive interfaces that provide a richer experience for screen reader users as they explore accessible data representations. Through that work, they realized most tools for creating such representations involve converting existing visual charts.
    Aiming to decenter visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began co-designing Umwelt more than a year ago.
    At the outset, they realized they would need to rethink how to represent the same data using visual, auditory, and textual forms.

    “We had to put a common denominator behind the three modalities. By creating this new language for representations, and making the output and input accessible, the whole is greater than the sum of its parts,” says Hajas.
    To build Umwelt, they first considered what is unique about the way people use each sense.
    For instance, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to a sonification, the experience is linear since data are converted into tones that must be played back one at a time.
    “If you are only thinking about directly translating visual features into nonvisual features, then you miss out on the unique strengths and weaknesses of each modality,” Zong adds.
    They designed Umwelt to offer flexibility, enabling a user to switch between modalities easily when one would better suit their task at a given time.
    To use the editor, one uploads a dataset to Umwelt, which employs heuristics to automatically create default representations in each modality.
    If the dataset contains stock prices for companies, Umwelt might generate a multiseries line chart, a textual structure that groups data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.
    The default heuristics are intended to help the user get started.
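    A toy version of such a heuristic is sketched below: given the field types of a dataset, it emits a default spec for all three modalities. The field names, the spec structure and the rules are purely illustrative assumptions in the spirit of the stock-price example above, not Umwelt’s actual grammar.

    ```python
    def default_spec(fields):
        """fields: {name: 'quantitative' | 'temporal' | 'nominal'} ->
        a default multimodal spec (structure is illustrative only)."""
        quant = [f for f, k in fields.items() if k == "quantitative"]
        temporal = [f for f, k in fields.items() if k == "temporal"]
        nominal = [f for f, k in fields.items() if k == "nominal"]

        y = quant[0] if quant else None
        x = temporal[0] if temporal else (quant[1] if len(quant) > 1 else None)
        group = nominal[0] if nominal else None

        return {
            "visualization": {"mark": "line" if temporal else "point",
                              "x": x, "y": y, "color": group},
            "text": {"group_by": [g for g in (group, x) if g], "describe": y},
            "sonification": {"tone_length": y, "order_by": x, "series": group},
        }

    # Example: stock prices by company and date
    spec = default_spec({"price": "quantitative", "date": "temporal", "symbol": "nominal"})
    print(spec["visualization"])
    # {'mark': 'line', 'x': 'date', 'y': 'price', 'color': 'symbol'}
    ```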
    “In any kind of creative tool, you have a blank-slate effect where it is hard to know how to begin. That is compounded in a multimodal tool because you have to specify things in three different representations,” Zong says.
    The editor links interactions across modalities, so if a user changes the textual description, that information is adjusted in the corresponding sonification. Someone could utilize the editor to build a multimodal representation, switch to the viewer for an initial exploration, then return to the editor to make adjustments.
    Helping users communicate about data
    To test Umwelt, they created a diverse set of multimodal representations, from scatterplots to multiview charts, to ensure the system could effectively represent different data types. Then they put the tool in the hands of five expert screen reader users.
    Study participants mostly found Umwelt to be useful for creating, exploring, and discussing data representations. One user said Umwelt was like an “enabler” that decreased the time it took them to analyze data. The users agreed that Umwelt could help them communicate about data more easily with sighted colleagues.
    Moving forward, the researchers plan to create an open-source version of Umwelt that others can build upon. They also want to integrate tactile sensing into the software system as an additional modality, enabling the use of tools like refreshable tactile graphics displays.
    “In addition to its impact on end users, I am hoping that Umwelt can be a platform for asking scientific questions around how people use and perceive multimodal representations, and how we can improve the design beyond this initial step,” says Zong.
    This work was supported, in part, by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.

  • A new type of cooling for quantum simulators

    Quantum experiments always have to deal with the same problem, regardless of whether they involve quantum computers, quantum teleportation or new types of quantum sensors: quantum effects break down very easily. They are extremely sensitive to external disturbances — for example, to fluctuations caused simply by the surrounding temperature. It is therefore important to be able to cool down quantum experiments as effectively as possible.
    At TU Wien (Vienna), it has now been shown that this type of cooling can be achieved in an interesting new way: a Bose-Einstein condensate is split into two parts, neither abruptly nor particularly slowly, but with a very specific temporal dynamic that suppresses random fluctuations as far as possible. In this way, the relevant temperature in the already extremely cold Bose-Einstein condensate can be significantly reduced. This is important for quantum simulators, which are used at TU Wien to gain insights into quantum effects that could not be investigated with previous methods.
    Quantum simulators
    “We work with quantum simulators in our research,” says Maximilian Prüfer, who is researching new methods at TU Wien’s Atomic Institute with the help of an Esprit Grant from the FWF. “Quantum simulators are systems whose behavior is determined by quantum mechanical effects and which can be controlled and monitored particularly well. These systems can therefore be used to study fundamental phenomena of quantum physics that also occur in other quantum systems, which cannot be studied so easily.”
    This means that a physical system is used to actually learn something about other systems. This idea is not entirely new in physics: for example, you can also carry out experiments with water waves in order to learn something about sound waves — but water waves are easier to observe.
    “In quantum physics, quantum simulators have become an extremely useful and versatile tool in recent years,” says Maximilian Prüfer. “Among the most important tools for realizing interesting model systems are clouds of extremely cold atoms, such as those we study in our laboratory.” In the current paper published in Physical Review X, the scientists led by Jörg Schmiedmayer and Maximilian Prüfer investigated how quantum entanglement evolves over time and how this can be used to achieve an even colder temperature equilibrium than before. Quantum simulation is also a central topic in the recently launched QuantA Cluster of Excellence, in which various quantum systems are being investigated.
    The colder, the better
    The decisive factor that usually limits the suitability of such quantum simulators at present is their temperature: “The better we cool down the interesting degrees of freedom of the condensate, the better we can work with it and the more we can learn from it,” says Maximilian Prüfer.

    There are different ways to cool something down: for example, you can cool a gas by slowly increasing its volume. With extremely cold Bose-Einstein condensates, other tricks are typically used: the most energetic atoms are quickly removed until only atoms with a fairly uniformly low energy remain, so the remaining cloud is cooler.
    “But we use a completely different technique,” says Tiantian Zhang, first author of the study, who investigated this topic as part of her doctoral thesis at the Doctoral College of the Vienna Center for Quantum Science and Technology. “We create a Bose-Einstein condensate and then split it into two parts by creating a barrier in the middle.” The number of particles which end up on the right side and on the left side of the barrier is undetermined. Due to the laws of quantum physics, there is a certain amount of uncertainty here. One could say that both sides are in a quantum-physical superposition of different particle number states.
    “On average, exactly 50% of the particles are on the left and 50% on the right,” says Maximilian Prüfer. “But quantum physics says that there are always certain fluctuations. The fluctuations, i.e. the deviations from the expected value, are closely related to the temperature.”
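    The connection between splitting, fluctuations and an effective temperature can be made concrete with a small numerical sketch. An abrupt, “coin-flip” split gives binomial (shot-noise) fluctuations of the particle-number difference; a well-chosen splitting dynamic suppresses them, and smaller fluctuations correspond to a lower temperature of the relevant degree of freedom. The atom number and the suppression factor below are invented for illustration, and the exact fluctuation-to-temperature mapping depends on the experiment.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 5000           # atoms in the condensate (illustrative)
    trials = 100_000

    # Abrupt splitting: each atom independently ends up left or right, so the
    # left-side count is binomial and the half-difference has std sqrt(N)/2.
    n_left = rng.binomial(N, 0.5, size=trials)
    half_diff = n_left - N / 2                  # equals (N_left - N_right) / 2
    print("shot-noise std:", half_diff.std(), "expected:", np.sqrt(N) / 2)

    # A tailored splitting dynamic suppresses these fluctuations ("number
    # squeezing"); here a factor of 0.3 stands in for the optimized protocol.
    xi = 0.3
    print("squeezed std:", xi * half_diff.std())
    ```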
    Cooling by controlling the fluctuations
    The research team at TU Wien was able to show that neither an extremely abrupt nor an extremely slow splitting of the Bose-Einstein condensate is optimal. A compromise must be found: a cleverly tailored way to dynamically split the condensate so that the quantum fluctuations are controlled as well as possible. This optimum cannot simply be calculated; the problem is beyond the reach of conventional computers. Experimentally, however, the team was able to show that suitable splitting dynamics suppress the fluctuations in the number of particles, and that this in turn translates into a reduction of the temperature one wants to minimize.
    “Different temperature scales exist simultaneously in this system, and we lower a very specific one of them,” explains Maximilian Prüfer. “So you can’t think of it like a mini-fridge that gets noticeably colder overall. But that’s not what we’re talking about: suppressing the fluctuations is exactly what we need to be able to use our system as a quantum simulator even better than before. We can now use it to answer questions from fundamental quantum physics that were previously inaccessible.”

  • Hidden geometry of learning: Neural networks think alike

    Penn Engineers have uncovered an unexpected pattern in how neural networks — the systems leading today’s AI revolution — learn, suggesting an answer to one of the most important unanswered questions in AI: why these methods work so well.
    Inspired by biological neurons, neural networks are computer programs that take in data and train themselves by repeatedly making small modifications to the weights or parameters that govern their output, much like neurons adjusting their connections to one another. The final result is a model that allows the network to make predictions on data it has not seen before. Neural networks are being used today in essentially all fields of science and engineering, from medicine to cosmology, identifying potentially diseased cells and discovering new galaxies.
    In a new paper published in the Proceedings of the National Academy of Sciences (PNAS), Pratik Chaudhari, Assistant Professor in Electrical and Systems Engineering (ESE) and core faculty at the General Robotics, Automation, Sensing and Perception (GRASP) Lab, and co-author James Sethna, James Gilbert White Professor of Physical Sciences at Cornell University, show that neural networks, no matter their design, size or training recipe, follow the same route from ignorance to truth when presented with images to classify.
    Jialin Mao, a doctoral student in Applied Mathematics and Computational Science at the University of Pennsylvania School of Arts & Sciences, is the paper’s lead author.
    “Suppose the task is to identify pictures of cats and dogs,” says Chaudhari. “You might use the whiskers to classify them, while another person might use the shape of the ears — you would presume that different networks would use the pixels in the images in different ways, and some networks certainly achieve better results than others, but there is a very strong commonality in how they all learn. This is what makes the result so surprising.”
    The result not only illuminates the inner workings of neural networks, but gestures toward the possibility of developing hyper-efficient algorithms that could classify images in a fraction of the time, at a fraction of the cost. Indeed, one of the highest costs associated with AI is the immense computational power required to develop neural networks. “These results suggest that there may exist new ways to train them,” says Chaudhari.
    To illustrate the potential of this new method, Chaudhari suggests imagining the networks as trying to chart a course on a map. “Let us imagine two points,” he says. “Ignorance, where the network does not know anything about the correct labels, and Truth, where it can correctly classify all images. Training a network corresponds to charting a path between Ignorance and Truth in probability space — in billions of dimensions. But it turns out that different networks take the same path, and this path is more like three-, four-, or five-dimensional.”
    In other words, despite the staggering complexity of neural networks, classifying images — one of the foundational tasks for AI systems — requires only a small fraction of that complexity. “This is actually evidence that the details of the network design, size or training recipes matter less than we think,” says Chaudhari.

    To arrive at these insights, Chaudhari and Sethna borrowed tools from information geometry, a field that brings together geometry and statistics. By treating each network as a distribution of probabilities, the researchers were able to make a true apples-to-apples comparison among the networks, revealing their unexpected, underlying similarities. “Because of the peculiarities of high-dimensional spaces, all points are far away from one another,” says Chaudhari. “We developed more sophisticated tools that give us a cleaner picture of the networks’ differences.”
    Using a wide variety of techniques, the team trained hundreds of thousands of networks, of many different varieties, including multi-layer perceptrons, convolutional and residual networks, and the transformers that are at the heart of systems like ChatGPT. “Then, this beautiful picture emerged,” says Chaudhari. “The output probabilities of these networks were neatly clustered together on these thin manifolds in gigantic spaces.” In other words, the paths that represented the networks’ learning aligned with one another, showing that they learned to classify images the same way.
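    One generic way to make the kind of apples-to-apples comparison described here is to treat each training checkpoint as a probability distribution over predictions, measure pairwise distances between distributions, and embed the result in a few dimensions. The sketch below does this with the Bhattacharyya distance and classical multidimensional scaling on synthetic data; it illustrates the idea only and is not the authors’ actual procedure or data.

    ```python
    import numpy as np

    def bhattacharyya(p, q):
        """Average Bhattacharyya distance between two sets of discrete
        distributions (rows = samples, columns = class probabilities)."""
        bc = np.sum(np.sqrt(p * q), axis=-1)
        return -np.mean(np.log(np.clip(bc, 1e-12, 1.0)))

    def mds(dist, dim=3):
        """Classical multidimensional scaling of a distance matrix."""
        n = dist.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (dist ** 2) @ J
        w, v = np.linalg.eigh(B)
        top = np.argsort(w)[::-1][:dim]
        return v[:, top] * np.sqrt(np.clip(w[top], 0, None))

    # Toy data: two "networks" whose checkpoint predictions drift toward the truth
    rng = np.random.default_rng(1)
    truth = rng.dirichlet(np.ones(10), size=50)        # per-sample label distributions
    checkpoints = []
    for seed in (2, 3):
        start = np.random.default_rng(seed).dirichlet(np.ones(10), size=50)
        for t in np.linspace(0, 1, 5):                 # five points along training
            checkpoints.append((1 - t) * start + t * truth)

    D = np.array([[bhattacharyya(a, b) for b in checkpoints] for a in checkpoints])
    paths = mds(D)          # low-dimensional picture of both training paths
    print(paths.shape)      # (10, 3): 2 networks x 5 checkpoints, embedded in 3-D
    ```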
    Chaudhari offers two potential explanations for this surprising phenomenon: first, neural networks are never trained on random assortments of pixels. “Imagine salt and pepper noise,” says Chaudhari. “That is clearly an image, but not a very interesting one — images of actual objects like people and animals are a tiny, tiny subset of the space of all possible images.” Put differently, asking a neural network to classify images that matter to humans is easier than it seems, because there are many possible images the network never has to consider.
    Second, the labels neural networks use are somewhat special. Humans group objects into broad categories, like dogs and cats, and do not have separate words for every particular member of every breed of animals. “If the networks had to use all the pixels to make predictions,” says Chaudhari, “then the networks would have figured out many, many different ways.” But the features that distinguish, say, cats and dogs are themselves low-dimensional. “We believe these networks are finding the same relevant features,” adds Chaudhari, likely by identifying commonalities like ears, eyes, markings and so on.
    Discovering an algorithm that will consistently find the path needed to train a neural network to classify images using just a handful of inputs is an unresolved challenge. “This is the billion-dollar question,” says Chaudhari. “Can we train neural networks cheaply? This paper gives evidence that we might be able to. We just don’t know how.”
    This study was conducted at the University of Pennsylvania School of Engineering and Applied Science and Cornell University. It was supported by grants from the National Science Foundation, National Institutes of Health, the Office of Naval Research, Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship and cloud computing credits from Amazon Web Services.
    Other co-authors include Rahul Ramesh at Penn Engineering; Rubing Yang at the University of Pennsylvania School of Arts & Sciences; Itay Griniasty and Han Kheng Teoh at Cornell University; and Mark K. Transtrum at Brigham Young University.

  • Memory self-test via smartphone can identify early signs of Alzheimer’s disease

    Dedicated memory tests on smartphones enable the detection of “mild cognitive impairment,” a condition that may indicate Alzheimer’s disease, with high accuracy. Researchers from DZNE, the Otto-von-Guericke University Magdeburg and the University of Wisconsin-Madison in the United States who collaborated with the Magdeburg-based company “neotiv” report these findings in the scientific journal npj Digital Medicine. Their study is based on data from 199 older adults. The results underline the potential of mobile apps for Alzheimer’s disease research, clinical trials and routine medical care. The app that has been evaluated is now being offered to medical doctors to support the early detection of memory problems.
    Memory problems are a key symptom of Alzheimer’s disease. Not surprisingly, their severity and progression play a central role in the diagnosis of Alzheimer’s disease and also in Alzheimer’s research. In current clinical practice, memory assessment is performed under the guidance of a medical professional. The individuals being tested have to complete standardized tasks in writing or in conversation: for example, remembering and repeating words, spontaneously naming as many terms as possible on a certain topic or drawing geometric figures according to instructions. All these tests necessarily require professional supervision; otherwise, the results are not conclusive. Thus, these tests cannot be completed alone, for example at home.
    Prof. Emrah Düzel, a senior neuroscientist at DZNE’s Magdeburg site and at the University of Magdeburg, as well as an entrepreneur in medical technology, advocates a new approach: “It has advantages if you can carry out such tests on your own and only have to visit the doctor’s office to evaluate the results. Just as we know it from a long-term ECG, for example. Unsupervised testing would help to detect clinically relevant memory impairment at an earlier stage and track disease progression more closely than is currently possible. In view of recent developments in Alzheimer’s therapy and new treatment options, early diagnosis is becoming increasingly important.”
    Comparison between remote at-home and supervised in-clinic testing
    In addition to his involvement in dementia research, Düzel is also “Chief Medical Officer” of “neotiv,” a Magdeburg-based start-up with which the DZNE has been cooperating for several years. The company has developed an app with which memory tests can be carried out autonomously with no need for professional supervision. The software runs on smartphones and tablets, and has been scientifically validated; it is used in Alzheimer’s disease research and is now also offered as a digital tool for medical doctors to support the detection of mild cognitive impairment (MCI). Although MCI has little impact on the affected individuals’ daily living, they nevertheless have an increased risk of developing Alzheimer’s dementia within a few years.
    Dr. David Berron, research group leader at DZNE and also co-founder of neotiv, explains: “As part of the validation process, we applied these novel remote and unsupervised assessments as well as an established in-clinic neuropsychological test battery. We found that the novel method is comparable to in-clinic assessments and detects mild cognitive impairment, also known as MCI, with high accuracy. This technology has enormous potential to provide clinicians with information that they cannot obtain during a patient visit to the clinic.” These findings have now been published in the scientific journal npj Digital Medicine.
    Participants from Germany and the USA
    A total of 199 women and men over the age of 60 participated in the current study. They were located either in Germany or the USA and were each involved in one of two long-term observational studies, both of which address Alzheimer’s — the most common dementia: DZNE’s DELCODE study (Longitudinal Cognitive Impairment and Dementia Study) and the WRAP (Wisconsin Registry for Alzheimer’s Prevention) study of the University of Wisconsin-Madison. The study sample reflected varying cognitive conditions as they occur in a real-world situation: It included individuals who were cognitively healthy, patients with MCI and others with subjectively perceived but not measurable memory problems. The diagnosis was based on established assessments that included, e.g., memory and language tasks. In addition, all participants completed multiple memory assessments with the neotiv app over a period of at least six weeks, using their own smartphones or tablets — and wherever it was convenient for them. “We found that a majority of our WRAP participants were able to complete the unsupervised digital tasks remotely and they were satisfied with the tasks and the digital platform,” says Lindsay Clark, PhD, neuropsychologist and lead investigator of the Assessing Memory with Mobile Devices study at the University of Wisconsin-Madison.

    Remembering images and detecting differences
    “Assessments with the neotiv app are interactive and comprise three types of memory tasks. These address different areas of the brain that can be affected by Alzheimer’s disease in different disease stages. Many years of research have gone into this,” Düzel explains. Essentially, these tests involve remembering images or recognizing differences between images that are presented by the app. Using a specially developed score, the German-US research team was able to compare the results of the app with the findings of the established in-clinic assessments. “Our study shows that memory complaints can be meaningfully assessed using this digital, remote and unsupervised approach,” says Düzel. “If the results from the digital assessment indicate that there is memory impairment typical of MCI, this paves the way for further clinical examinations. If test results indicate that memory is within the age-specific normal range, individuals can be given an all-clear signal for the time being. And for Alzheimer’s disease research, this approach provides a digital cognitive assessment tool that can be used in clinical studies — as is already being done in Germany, the USA, Sweden and other countries.”
    Outlook
    Further studies are in preparation or already underway. The novel memory assessment is to be tested on even larger study groups, and the researchers also intend to investigate whether it can be used to track the progression of Alzheimer’s disease over a longer period of time. Berron: “Information about how quickly memory declines over time is important for medical doctors and patients. It is also important for clinical trials as new treatments aim to slow the rate of cognitive decline.” The cognitive neuroscientist describes the challenges: “To advance such self-tests, a patient’s clinical data must be linked to self-tests outside the clinic, in the real world. This is no easy task, but as our current study shows, we are making progress as a field.”

  • Optimizing electronic health records: Study reveals improvements in departmental productivity

    In a study published in the Annals of Family Medicine, researchers at the Marshall University Joan C. Edwards School of Medicine identify transformative effects of electronic health record (EHR) optimization on departmental productivity. With the universal implementation of EHR systems, the study sheds light on the importance of collaborative efforts between clinicians and information technology (IT) experts in maximizing the potential of these digital tools.
    The study, led by a team of health care professionals in a family medicine department, embarked on a department-wide EHR optimization initiative in collaboration with IT specialists over a four-month period. Unlike previous efforts that primarily focused on institutional-level successes, this study delved deep into the intricacies of EHR interface development and its impact on clinical workflow.
    “There has been a longstanding disconnect between EHR developers and end-users, resulting in interfaces that often fail to capture the intricacies of clinical workflows,” said Adam M. Franks, M.D., interim chair of family and community health at the Joan C. Edwards School of Medicine and lead researcher on the study. “Our study aimed to bridge this gap and demonstrate the tangible benefits of collaborative optimization efforts.”
    The methodology involved an intensive quality improvement process engaging clinicians and clinical staff at all levels. Four categories of optimizations emerged: accommodations (adjustments made by the department to fit EHR workflows); creations (novel workflows developed by IT); discoveries (previously unnoticed workflows within the EHR); and modifications (changes made by IT to existing workflows).
    Key findings from the study showed significant improvements in departmental productivity: monthly charge ratios increased from 0.74 to 1.28, while payment ratios rose from 0.83 to 1.58. Monthly visit ratios also increased, from 0.65 to 0.98, although that change was not statistically significant.
    The study also revealed that a significant number of solutions to EHR usability issues were already embedded within the system, emphasizing the need for thorough exploration and understanding of existing workflows.
    Finally, accommodation optimizations underscored the necessity for better collaboration between EHR developers and end-users before implementation, highlighting the potential for more user-centric design approaches.
    “Our study not only demonstrates the efficacy of departmental collaboration with IT for EHR optimization but also underscores the importance of detailed workflow analysis in enhancing productivity,” Franks said.
    The research provides valuable insights for health care institutions aiming to maximize the potential of their EHR systems, with implications for improving patient care, efficiency and overall organizational performance.

  • Bullseye! Accurately centering quantum dots within photonic chips

    Traceable microscopy could improve the reliability of quantum information technologies, biological imaging, and more.
    Devices that capture the brilliant light from millions of quantum dots, including chip-scale lasers and optical amplifiers, have made the transition from laboratory experiments to commercial products. But newer types of quantum-dot devices have been slower to come to market because they require extraordinarily accurate alignment between individual dots and the miniature optics that extract and guide the emitted radiation.
    Researchers at the National Institute of Standards and Technology (NIST) and their colleagues have now developed standards and calibrations for optical microscopes that allow quantum dots to be aligned with the center of a photonic component to within an error of 10 to 20 nanometers (about one-thousandth the thickness of a sheet of paper). Such alignment is critical for chip-scale devices that employ the radiation emitted by quantum dots to store and transmit quantum information.
    For the first time, the NIST researchers achieved this level of accuracy across the entire image from an optical microscope, enabling them to correct the positions of many individual quantum dots. A model developed by the researchers predicts that if microscopes are calibrated using the new standards, then the number of high-performance devices could increase by as much as a hundred-fold.
    That new ability could enable quantum information technologies that are slowly emerging from research laboratories to be more reliably studied and efficiently developed into commercial products.
    In developing their method, Craig Copeland, Samuel Stavis, and their collaborators, including colleagues from the Joint Quantum Institute (JQI), a research partnership between NIST and the University of Maryland, created standards and calibrations that were traceable to the International System of Units (SI) for optical microscopes used to guide the alignment of quantum dots.
    “The seemingly simple idea of finding a quantum dot and placing a photonic component on it turns out to be a tricky measurement problem,” Copeland said.

    In a typical measurement, errors begin to accumulate as researchers use an optical microscope to find the location of individual quantum dots, which reside at random locations on the surface of a semiconductor material. If researchers ignore the shrinkage of semiconductor materials at the ultracold temperatures at which quantum dots operate, the errors grow larger. Further complicating matters, these measurement errors are compounded by inaccuracies in the fabrication process that researchers use to make their calibration standards, which also affects the placement of the photonic components.
    The NIST method, which the researchers described in an article posted online in Optica Quantum on March 18, identifies and corrects such errors, which were previously overlooked.
    The NIST team created two types of traceable standards to calibrate optical microscopes — first at room temperature to analyze the fabrication process, and then at cryogenic temperatures to measure the location of quantum dots. The room-temperature standard, which builds on the team’s previous work, consisted of an array of nanoscale holes spaced a set distance apart in a metal film.
    The researchers then measured the actual positions of the holes with an atomic force microscope, ensuring that the positions were traceable to the SI. By comparing the apparent positions of the holes as viewed by the optical microscope with the actual positions, the researchers assessed errors from magnification calibration and image distortion of the optical microscope. The calibrated optical microscope could then be used to rapidly measure other standards that the researchers fabricated, enabling a statistical analysis of the accuracy and variability of the process.
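    In generic terms, such a calibration amounts to fitting a mapping from the positions the optical microscope reports to the traceable positions measured with the atomic force microscope, and then applying that mapping to new measurements. The short sketch below fits a second-order polynomial model, which can absorb magnification error and low-order image distortion; it is a simplified stand-in for the NIST analysis, with invented numbers.

    ```python
    import numpy as np

    def design(pts):
        """Second-order polynomial terms of (x, y) positions."""
        x, y = pts[:, 0], pts[:, 1]
        return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

    def fit_correction(apparent, actual):
        """Least-squares map from apparent (optical) to traceable (AFM) positions."""
        coeffs, *_ = np.linalg.lstsq(design(apparent), actual, rcond=None)
        return coeffs                                   # shape (6, 2)

    # Synthetic calibration grid (micrometres): a 2% magnification error plus
    # a small quadratic distortion and measurement noise.
    rng = np.random.default_rng(0)
    actual = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
    apparent = 1.02 * actual + 0.0008 * actual**2 + rng.normal(0, 0.002, actual.shape)

    coeffs = fit_correction(apparent, actual)
    dot = np.array([[4.37, 6.12]])                      # a quantum dot's apparent position
    print(design(dot) @ coeffs)                         # corrected, traceable position
    ```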
    “Good statistics are essential to every link in a traceability chain,” said NIST researcher Adam Pintar, a coauthor of the article.
    Extending their method to low temperatures, the research team calibrated an ultracold optical microscope for imaging quantum dots. To perform this calibration, the team created a new microscopy standard — an array of pillars fabricated on a silicon wafer. The scientists worked with silicon because the shrinkage of the material at low temperatures has been accurately measured.

    The researchers discovered several pitfalls in calibrating the magnification of cryogenic optical microscopes, which tend to have worse image distortion than microscopes operating at room temperature. These optical imperfections bend the images of straight lines into gnarled curves that the calibration effectively straightens out. If uncorrected, the image distortion causes large errors in determining the position of quantum dots and in aligning the dots within targets, waveguides, or other light-controlling devices.
    “These errors have likely prevented researchers from fabricating devices that perform as predicted,” said NIST researcher Marcelo Davanco, a coauthor of the article.
    The researchers developed a detailed model of the measurement and fabrication errors in integrating quantum dots with chip-scale photonic components. They studied how these errors limit the ability of quantum-dot devices to perform as designed, finding the potential for a hundred-fold improvement.
    “A researcher might be happy if one out of a hundred devices works for their first experiment, but a manufacturer might need ninety-nine out of a hundred devices to work,” Stavis noted. “Our work is a leap ahead in this lab-to-fab transition.”
    Beyond quantum-dot devices, traceable standards and calibrations under development at NIST may improve accuracy and reliability in other demanding applications of optical microscopy, such as imaging brain cells and mapping neural connections. For these endeavors, researchers also seek to determine accurate positions of the objects under study across an entire microscope image. In addition, scientists may need to coordinate position data from different instruments at different temperatures, as is true for quantum-dot devices.