More stories

  • Cloud services without servers: What’s behind it

    In cloud computing, commercial providers make computing resources available on demand to their customers over the Internet. This service is partly offered “serverless,” that is, without servers. How can that work? Computing resources without a server, isn’t that like a restaurant without a kitchen?
    “The term is misleading,” says computer science Professor Samuel Kounev from Julius-Maximilians-Universität (JMU) Würzburg in Bavaria, Germany. Even serverless cloud services don’t get by without servers.
    In classical cloud computing, for example, a web shop rents computing resources from a cloud provider in the form of virtual machines (VMs). However, the shop itself remains responsible for the management of “its” servers, that is, the VMs. It has to take care of security aspects as well as the avoidance of overload situations or the recovery from system failures.
    The situation is different with serverless computing. Here, the cloud provider takes over responsibility for the complete server management. The cloud users can no longer even access the server, it remains hidden from them — hence the term “serverless.”
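    From the user’s point of view, serverless computing means deploying only application code and letting the provider run it on servers the user never sees. A minimal sketch, in the style of a function-as-a-service handler (the signature and event shape here are illustrative assumptions, not tied to any provider named in the article):

```python
# Minimal sketch of a function-as-a-service handler (illustrative, not
# any specific provider's API). The cloud provider, not the web shop,
# provisions, secures, and scales the servers that execute this code.
def handler(event, context=None):
    """Compute an order total for a hypothetical web-shop checkout."""
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"statusCode": 200, "body": {"total": total}}

# Locally, the function is just called with a JSON-like event:
result = handler({"items": [{"price": 9.99, "qty": 2}]})
```

    The key contrast with the VM model above: the shop writes only this function, and overload handling and failure recovery happen on the provider’s side.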
    Research article in ACM’s “Communications of the ACM” magazine
    “The basic idea of serverless computing has been around since the beginning of cloud computing. However, it has not become widely accepted,” explains Samuel Kounev, who heads the JMU Chair of Computer Science II (Software Engineering). But a shift can currently be observed in both industry and science: the focus is increasingly moving towards serverless computing.
    A recent article in the Communications of the ACM magazine of the Association for Computing Machinery (ACM) deals with the history, status and potential of serverless computing. Among the authors are Samuel Kounev and Dr. Nikolas Herbst, who heads the JMU research group “Data Analytics Clouds.” ACM has also produced a video with Professor Samuel Kounev to accompany the publication: https://vimeo.com/849237573
    Experts define serverless computing inconsistently

  • Novel organic light-emitting diode with ultralow turn-on voltage for blue emission

    An upconversion organic light-emitting diode (OLED) based on a typical blue-fluorescence emitter achieves emission at an ultralow turn-on voltage of 1.47 V, as demonstrated by researchers from Tokyo Tech. Their technology circumvents the traditional high voltage requirement for blue OLEDs, leading to potential advancements in commercial smartphone and large screen displays.
    Blue light is vital for light-emitting devices, lighting applications, as well as smartphone screens and large screen displays. However, it is challenging to develop efficient blue organic light-emitting diodes (OLEDs) owing to the high applied voltage required for their function. Conventional blue OLEDs typically require around 4 V for a luminance of 100 cd/m²; this is higher than the industrial target of 3.7 V — the voltage of lithium-ion batteries commonly used in smartphones. Therefore, there is an urgent need to develop novel blue OLEDs that can operate at lower voltages.
    In this regard, Associate Professor Seiichiro Izawa from Tokyo Institute of Technology and Osaka University, in collaboration with researchers from the University of Toyama, Shizuoka University, and the Institute for Molecular Science, has recently presented a novel OLED device with a remarkable ultralow turn-on voltage of 1.47 V for blue emission and a peak wavelength at 462 nm (2.68 eV). Their work will be published in Nature Communications.
    The choice of materials used in this OLED significantly influences its turn-on voltage. The device utilizes NDI-HF (2,7-di(9H-fluoren-2-yl)benzo[lmn][3,8]-phenanthroline-1,3,6,8(2H,7H)-tetraone) as the acceptor, 1,2-ADN (9-(naphthalen-1-yl)-10-(naphthalen-2-yl)anthracene) as the donor, and TbPe (2,5,8,11-tetra-tert-butylperylene) as the fluorescent dopant. This OLED operates via a mechanism called upconversion (UC). Herein, holes and electrons are injected into donor (emitter) and acceptor (electron transport) layers, respectively. They recombine at the donor/acceptor (D/A) interface to form a charge transfer (CT) state. Dr. Izawa points out: “The intermolecular interactions at the D/A interface play a significant role in CT state formation, with stronger interactions yielding superior results.”
    Subsequently, the energy of the CT state is selectively transferred to the low-energy first triplet excited states of the emitter, which results in blue light emission through the formation of a high-energy first singlet excited state by triplet-triplet annihilation (TTA). “As the energy of the CT state is much lower than the emitter’s bandgap energy, the UC mechanism with TTA significantly decreases the applied voltage required for exciting the emitter. As a result, this UC-OLED reaches a luminance of 100 cd/m², equivalent to that of a commercial display, at just 1.97 V,” explains Dr. Izawa.
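    The numbers above can be sanity-checked with a standard photon-energy calculation. A 462 nm photon carries about 2.68 eV, well above the 1.47 V turn-on voltage; TTA makes this possible because two low-energy triplets pool into one emissive singlet (constants below are CODATA values, not from the article):

```python
# Photon energy E = h*c / lambda, converted to electronvolts.
H = 6.62607015e-34          # Planck constant, J*s
C = 2.99792458e8            # speed of light, m/s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / E_CHARGE

e_blue = photon_energy_ev(462)  # ~2.68 eV, matching the article's figure
# The 1.47 V turn-on sits far below the photon energy in eV, which is
# only feasible because TTA combines the energy of two charge-transfer
# excitations into one blue photon.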
    In effect, this study demonstrates a novel OLED with blue light emission at an ultralow turn-on voltage, using a typical fluorescent emitter widely utilized in commercial displays, marking a significant step toward meeting the commercial requirements for blue OLEDs. It emphasizes the importance of optimizing the design of the D/A interface for controlling excitonic processes and holds promise not only for OLEDs but also for organic photovoltaics and other organic electronic devices.

  • Machine learning models can produce reliable results even with limited training data

    Researchers have determined how to build reliable machine learning models that can understand complex equations in real-world situations while using far less training data than is normally expected.
    The researchers, from the University of Cambridge and Cornell University, found that for partial differential equations — a class of physics equations that describe how things in the natural world evolve in space and time — machine learning models can produce reliable results even when they are provided with limited data.
    Their results, reported in the Proceedings of the National Academy of Sciences, could be useful for constructing more time- and cost-efficient machine learning models for applications such as engineering and climate modelling.
    Most machine learning models require large amounts of training data before they can begin returning accurate results. Traditionally, a human will annotate a large volume of data — such as a set of images, for example — to train the model.
    “Using humans to train machine learning models is effective, but it’s also time-consuming and expensive,” said first author Dr Nicolas Boullé, from the Isaac Newton Institute for Mathematical Sciences. “We’re interested to know exactly how little data we actually need to train these models and still get reliable results.”
    Other researchers have been able to train machine learning models with a small amount of data and get excellent results, but how this was achieved has not been well-explained. For their study, Boullé and his co-authors, Diana Halikias and Alex Townsend from Cornell University, focused on partial differential equations (PDEs).
    “PDEs are like the building blocks of physics: they can help explain the physical laws of nature, such as how the steady state is held in a melting block of ice,” said Boullé, who is an INI-Simons Foundation Postdoctoral Fellow. “Since they are relatively simple models, we might be able to use them to make some generalisations about why these AI techniques have been so successful in physics.”
    The researchers found that PDEs that model diffusion have a structure that is useful for designing AI models. “Using a simple model, you might be able to enforce some of the physics that you already know into the training data set to get better accuracy and performance,” said Boullé.
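    As a minimal sketch (not the authors’ code), the kind of diffusion PDE the study analyses can be simulated with a simple finite-difference step; solving it for many initial conditions is one way to generate the input/output pairs a learned model trains on:

```python
import numpy as np

# Explicit finite-difference step for the 1D heat equation
# u_t = alpha * u_xx with periodic boundaries -- a prototypical
# diffusion PDE. Each (initial condition, evolved state) pair is a
# training example for a learned solution operator.
def heat_step(u, alpha=1.0, dx=0.1, dt=0.001):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # discrete Laplacian
    return u + dt * alpha * lap

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)                        # smooth initial condition
for _ in range(100):
    u = heat_step(u, dx=x[1] - x[0])
# Diffusion only damps the profile: the amplitude shrinks, never grows.
```

    The smoothing behaviour visible here is exactly the structure the researchers exploit: diffusion rapidly suppresses fine detail, so comparatively few training samples pin down the model.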

  • Combustion powers bug-sized robots to leap, lift and race

    Cornell researchers combined soft microactuators with high-energy-density chemical fuel to create an insect-scale quadrupedal robot that is powered by combustion and can outrace, outlift, outflex and outleap its electric-driven competitors.
    The group’s paper, “Powerful, Soft Combustion Actuators for Insect-Scale Robots,” was published Sept. 14 in Science. The lead author is postdoctoral researcher Cameron Aubin, Ph.D. ’23.
    The project was led by Rob Shepherd, associate professor of mechanical and aerospace engineering in Cornell Engineering, whose Organic Robotics Lab has previously used combustion to create a braille display for electronics.
    As anyone who has witnessed an ant carry off food from a picnic knows, insects are far stronger than their puny size suggests. However, robots at that scale have yet to reach their full potential. One of the challenges is “motors and engines and pumps don’t really work when you shrink them down to this size,” Aubin said, so researchers have tried to compensate by creating bespoke mechanisms to perform such functions. So far, the majority of these robots have been tethered to their power sources — which usually means electricity.
    “We thought using a high-energy-density chemical fuel, just like we would put in an automobile, would be one way that we could increase the onboard power and performance of these robots,” he said. “We’re not necessarily advocating for the return of fossil fuels on a large scale, obviously. But in this case, with these tiny, tiny robots, where a milliliter of fuel could lead to an hour of operation, instead of a battery that is too heavy for the robot to even lift, that’s kind of a no brainer.”
    While the team has yet to create a fully untethered model — Aubin says they are halfway there — the current iteration “absolutely throttles the competition, in terms of their force output.”
    The four-legged robot, which is just over an inch long and weighs the equivalent of one and a half paperclips, is 3D-printed with a flame-resistant resin. The body contains a pair of separated combustion chambers that lead to the four actuators, which serve as the feet. Each actuator/foot is a hollow cylinder capped with a piece of silicone rubber, like a drum skin, on the bottom. When offboard electronics are used to create a spark in the combustion chambers, premixed methane and oxygen are ignited, the combustion reaction inflates the drum skin, and the robot pops up into the air.
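    The energy-density argument behind the fuel choice is easy to check with textbook figures (the numbers below are common reference values, assumed here rather than taken from the paper):

```python
# Back-of-the-envelope comparison: specific energy of a hydrocarbon
# fuel vs. a lithium-ion battery, the trade-off the researchers cite
# for insect-scale robots. Figures are typical textbook values.
FUEL_MJ_PER_KG = 55.5     # methane, lower heating value
BATTERY_MJ_PER_KG = 0.9   # ~250 Wh/kg lithium-ion cell

advantage = FUEL_MJ_PER_KG / BATTERY_MJ_PER_KG
# Roughly a 60x advantage per kilogram -- why a milliliter of fuel can
# outlast a battery too heavy for the robot to lift.
```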

  • Researchers unveil new flexible adhesive with exceptional recovery and adhesion properties for electronic devices

    The rapid advancements in flexible electronic technology have led to the emergence of innovative devices such as foldable displays, wearables, e-skin, and medical devices. These breakthroughs have created a growing demand for flexible adhesives that can quickly recover their shape while effectively connecting various components in these devices. However, conventional pressure-sensitive adhesives (PSAs) often face challenges in achieving a balance between recovery capabilities and adhesive strength. In an extraordinary study conducted at UNIST, researchers have successfully synthesized new types of urethane-based crosslinkers that address this critical challenge.
    Led by Professor Dong Woog Lee from the School of Energy and Chemical Engineering at UNIST, the research team developed novel crosslinkers utilizing m-xylylene diisocyanate (XDI) or 1,3-bis(isocyanatomethyl)cyclohexane (H6XDI) as hard segments along with poly(ethylene glycol) (PEG) groups serving as soft segments. By incorporating these newly synthesized materials into pressure-sensitive adhesives, they achieved significantly improved recoverability compared to traditional methods.
    The PSA formulated with H6XDI-PEG diacrylate (HPD) demonstrated exceptional recovery properties while maintaining high adhesion strength (~25.5 N per 25 mm). Through extensive folding tests totaling 100,000 folds and multi-directional stretching tests spanning 10,000 cycles, the PSA crosslinked with HPD exhibited remarkable stability under repeated deformation — showcasing its potential for applications requiring both flexibility and recoverability.
    Furthermore, even after subjecting the adhesive to strains of up to 20%, it displayed high optical transmittance (>90%), making it suitable for fields such as foldable displays that demand not only flexibility but also optical clarity.
    “This breakthrough in adhesive technology offers promising possibilities for electronic products that require both high flexibility and rapid recovery characteristics,” said Professor Lee. “Our research addresses the long-standing challenge of balancing adhesion strength and resilience, opening up new avenues for the development of flexible electronic devices.”
    Hyunok Park, a researcher involved in the study, emphasized the significance of this research by stating, “The introduction of this new crosslinking structure has led to an adhesive with exceptional adhesion and recovery properties. We believe it will drive future advancements in adhesive research while contributing to further developments in flexible electronics.”
    The study findings were published online in Advanced Functional Materials on July 12, 2023, ahead of official publication. This work was supported through the 2023 Research Fund at UNIST and received additional support from organizations including the National Research Foundation (NRF) of Korea, the Defense Acquisition Program Administration and the Ministry of Trade.

  • Engineers grow full wafers of high-performing 2D semiconductor that integrates with state-of-the-art chips

    The semiconductor industry today is working to respond to a threefold mandate: increasing computing power, decreasing chip sizes and managing power in densely packed circuits.
    To meet these demands, the industry must look beyond silicon. While unlikely to abandon the workhorse material anytime soon, the technology sector will require creative enhancements in chip materials and architectures to produce devices appropriate for the growing role of computing.
    One of the biggest shortcomings of silicon is that it can only be made so thin because its material properties are fundamentally limited to three dimensions [3D]. For this reason, two-dimensional [2D] semiconductors — so thin as to have almost no height — have become an object of interest to scientists, engineers and microelectronics manufacturers.
    Thinner chip components would provide greater control and precision over the flow of electricity in a device, while lowering the amount of energy required to power it. A 2D semiconductor would also contribute to keeping the surface area of a chip to a minimum, lying in a thin film atop a supporting silicon device.
    But until recently, attempts to create such a material have been unsuccessful.
    Certain 2D semiconductors have performed well on their own, but required such high deposition temperatures that they destroyed the underlying silicon chip. Others could be deposited at silicon-compatible temperatures, but their electronic properties — energy usage, speed, precision — were lacking. Some fit the bill for temperature and performance but could not be grown to the requisite purity at industry-standard sizes.

  • Scientists develop method to detect deadly infectious diseases

    Rutgers researchers have developed a way of detecting the early onset of deadly infectious diseases using a test so ultrasensitive that it could someday revolutionize medical approaches to epidemics.
    The test, described in Science Advances, is an electronic sensor contained within a computer chip. It employs nanoballs — microscopic spherical clumps made of tinier particles of genetic material, each of those with diameters 1,000 times smaller than the width of a human hair — and combines that technology with advanced electronics.
    “During the COVID pandemic, one of the things that didn’t exist but could have stemmed the spread of the virus was a low-cost diagnostic that could flag people known as the ‘quiet infected’ — patients who don’t know they are infected because they are not exhibiting symptoms,” said Mehdi Javanmard, a professor in the Department of Electrical and Computer Engineering in the Rutgers School of Engineering and an author of the study. “In a pandemic, pinpointing an infection early with accuracy is the Holy Grail. Because once a person is showing symptoms — sneezing and coughing — it’s too late. That person has probably infected 20 people.”
    For the past 20 years, Javanmard has been developing biosensors — devices that monitor and transmit information about a life process. During the COVID-19 pandemic, he became disheartened about the extent of infections and the extreme loss of life. He believed there had to be a way of using biosensors as a test to detect illness earlier.
    Working with Muhammad Tayyab, a Rutgers doctoral student and co-author of the study, Javanmard and research colleagues at the Karolinska Institute in Sweden and Stanford and Yale universities started brainstorming.
    “We thought: How is there a way where we can leverage our individual expertise to build something new?” Javanmard said.
    The biosensor developed by the team works through a series of steps. First, it zeroes in on a virus’ characteristic sequence of nucleic acids — naturally occurring chemical compounds that serve as the primary information-carrying molecules in a cell. Next, it amplifies any matching nucleic acid sequence found in the sample, making many more copies — as many as 10,000. Then, it clumps those thousands of specks of nucleic acids into nanoballs that are “large” enough to be detected.
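    In software terms, the first two steps above can be sketched very roughly — an illustrative analogy only, not the Rutgers assay chemistry: recognizing a characteristic sequence is substring matching, and amplification is replicating whatever matched.

```python
# Illustrative analogy for the sensor's first two steps (sequences and
# counts below are made up for the example, except the 10,000 figure
# from the article).
def find_signature(sample_seq, signature):
    """Return True if the virus's characteristic sequence is present."""
    return signature in sample_seq

def amplify(fragment, copies=10_000):
    """Stand-in for amplification: replicate the matched fragment."""
    return [fragment] * copies

sample = "AGGCTTACGGATTCAGGTAC"       # hypothetical sample read
if find_signature(sample, "CGGATT"):  # hypothetical viral signature
    amplicons = amplify("CGGATT")     # up to 10,000 copies, per the article
```

    The real device then aggregates those copies into nanoballs large enough for the on-chip electronic sensor to register.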

  • Assessing unintended consequences in AI-based neurosurgical training

    Virtual reality simulators can help learners improve their technical skills faster and with no risk to patients. In the field of neurosurgery, they allow medical students to practice complex operations before using a scalpel on a real patient. When combined with artificial intelligence, these tutoring systems can offer tailored feedback like a human instructor, identifying areas where the students need to improve and making suggestions on how to achieve expert performance.
    A new study from the Neurosurgical Simulation and Artificial Intelligence Learning Centre at The Neuro (Montreal Neurological Institute-Hospital) of McGill University, however, shows that human instruction is still necessary to detect and compensate for unintended, and sometimes negative, changes in neurosurgeon behaviour after virtual reality AI training.
    In the study, 46 medical students performed a tumour removal procedure on a virtual reality simulator. Half of them were randomly selected to receive instruction from an AI-powered intelligent tutor called the Virtual Operative Assistant (VOA), which uses a machine learning algorithm to teach surgical techniques and provide personalized feedback. The other half served as a control group by receiving no feedback. The students’ work was then compared to performance benchmarks selected by a team of established neurosurgeons.
    Comparing the results, AI-tutored students caused 55 per cent less damage to healthy tissues than the control group. AI-tutored students also showed a 59 per cent reduction in average distance between instruments in each hand and 46 per cent less maximum force applied, both important safety measures.
    However, AI-tutored students also showed some negative outcomes. For example, their dominant hand movements had 50 per cent lower velocity and 45 per cent lower acceleration than the control group, making their operations less efficient. The speed at which they removed tumour tissue was also 29 per cent lower in the AI-tutored group than the control group.
    These unintended outcomes underline the importance of human instructors in the learning process, to promote both safety and efficiency in students.
    “AI systems are not perfect,” says Ali Fazlollahi, a medical student researcher at the Neurosurgical Simulation and Artificial Intelligence Learning Centre and the study’s first author. “Achieving mastery will still require some level of apprenticeship from an expert. Programs adopting AI will enable learners to monitor their competency and focus their intraoperative learning time with instructors more efficiently and on their individual tailored learning goals. We’re currently working towards finding an optimal hybrid mode of instruction in a crossover trial.”
    Fazlollahi says his findings have implications beyond neurosurgery because many of the same principles apply in other fields of skills training.
    “This includes surgical education, not just neurosurgery, and also a range of other fields from aviation to military training and construction,” he says. “Using AI alone to design and run a technical skills curriculum can lead to unintended outcomes that will require oversight from human experts to ensure excellence in training and patient care.”
    “Intelligent tutors powered by AI are becoming a valuable tool in the evaluation and training of the next generation of neurosurgeons,” says Dr. Rolando Del Maestro, the study’s senior author. “However, it is essential that surgical educators are an integral part of the development, application, and monitoring of these AI systems to maximize their ability to increase the mastery of neurosurgical skills and improve patient outcomes.”