More stories

  • New recipe for restaurant, app contracts

    A novel contract proposed by a University of Texas at Dallas researcher and his colleagues could help alleviate key sources of conflict between restaurants and food-delivery platforms.
    In a study published online March 28 in the INFORMS journal Management Science, Dr. Andrew Frazelle, assistant professor of operations management in the Naveen Jindal School of Management, and co-authors Dr. Pnina Feldman of Boston University and Dr. Robert Swinney of Duke University examined how to best structure relationships between food-delivery platforms and the restaurants with which they partner.
    Other platforms in the sharing economy, such as ride-hailing and vacation-rental services, allow people to sell access to resources that would otherwise be generating no revenue for them, Frazelle said. The interests of the resource owner and the platform are reasonably well aligned in that more transactions are good for both.
    “However, restaurant delivery is different,” Frazelle said. “Delivery orders represent incremental business on top of the restaurant’s existing dine-in operation. More business sounds good, but it comes at the cost of a commission charged by the delivery platform.”
    Platforms such as Grubhub, DoorDash and Uber Eats collect customer orders online, transmit them to restaurants and deliver the orders to customers. While this service helps restaurants expand their markets, the study found the relationship has inherent flaws.
    The most common contractual relationship between platforms and restaurants, in which the platform takes a commission, or a percentage cut, of each delivery order, has two key issues, according to the study.

  • High school students measure Earth's magnetic field from ISS

    A group of high school students used a tiny, inexpensive computer to try to measure Earth’s magnetic field from the International Space Station, showing a way to affordably explore and understand our planet.
    In the American Journal of Physics, published on behalf of the American Association of Physics Teachers by AIP Publishing, three high school students from Portugal, along with their faculty mentor, report the results of their project. The students programmed an add-on board for the Raspberry Pi computer to take measurements of Earth’s magnetic field in orbit. This add-on component, known as the Sense Hat, contained a magnetometer, gyroscope, accelerometer, and sensors for temperature, pressure, and humidity.
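    As an illustration of this kind of on-board code, here is a minimal Python sketch of Sense Hat magnetometer logging of the sort the students describe. The sense_hat library and its get_compass_raw() call are part of the standard Raspberry Pi Sense HAT API, but the sampling interval, duration, and file name here are illustrative assumptions, not the team's actual program.

    ```python
    # Minimal sketch: log Sense HAT magnetometer readings (microtesla) with timestamps.
    # Assumes a Raspberry Pi with the Sense HAT attached and the sense_hat package installed;
    # the sampling interval, duration, and CSV file name are illustrative choices.
    import csv
    import time

    from sense_hat import SenseHat

    sense = SenseHat()

    with open("magnetometer_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["unix_time", "mag_x_uT", "mag_y_uT", "mag_z_uT"])
        for _ in range(180):                      # ~3 minutes at one sample per second
            mag = sense.get_compass_raw()         # dict with 'x', 'y', 'z' in microtesla
            writer.writerow([time.time(), mag["x"], mag["y"], mag["z"]])
            time.sleep(1)
    ```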
    The European Space Agency teamed up with the U.K.’s Raspberry Pi Foundation to hold a contest for high school students. The contest, known as the Astro Pi Challenge, required the students to program a Raspberry Pi computer with code to be run aboard the space station.
    The students used the data acquired from the space station to map out Earth’s magnetic field and compared their results to data provided by the International Geomagnetic Reference Field (IGRF), which uses measurements from observatories and satellites to compute Earth’s magnetic field.
    “I saw the Astro Pi challenge as an opportunity to broaden my knowledge and skill set, and it ended up introducing me to the complex but exciting reality of the practical world,” Lourenço Faria, co-author and one of the students involved in the project, said.
    The IGRF data is updated every five years, so the students compared their measurements, taken in April 2021, with the latest IGRF data from 2020. They found their data differed from the IGRF results by a significant, but fixed, amount. This difference could be due to a static magnetic field inside the space station.
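    As a rough illustration of the comparison described above, the sketch below estimates a constant per-axis offset between on-orbit readings and reference values by averaging their differences. The numbers are hypothetical placeholders, not the students' data, and no actual IGRF computation is performed.

    ```python
    # Illustrative sketch: estimate a fixed offset between on-orbit magnetometer readings
    # and reference field values (e.g., from the IGRF model). The arrays here are
    # hypothetical placeholders, not the students' data or an actual IGRF computation.
    import numpy as np

    measured = np.array([[21.3, -4.2, 38.9],     # microtesla, per-sample (x, y, z)
                         [22.1, -3.8, 40.2],
                         [20.7, -4.5, 37.6]])
    reference = np.array([[24.0, -1.1, 36.0],    # reference values at the same positions/times
                          [24.9, -0.8, 37.1],
                          [23.5, -1.3, 34.9]])

    # If the discrepancy comes from a static field inside the station, the per-axis
    # difference should be roughly constant; its mean is a simple estimate of that offset.
    offset = (measured - reference).mean(axis=0)
    residual = (measured - reference) - offset
    print("estimated static offset (uT):", offset)
    print("residual spread (uT):", residual.std(axis=0))
    ```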
    The students repeated their analysis using another 15 orbits’ worth of ISS data and found a slight improvement in the results. They found it surprising that the main features of Earth’s magnetic field could be reconstructed with only three hours’ worth of measurements from their low-cost magnetometer aboard the space station.
    Although this project was carried out aboard the space station, it could easily be adapted to ground-based measurements using laboratory equipment or magnetometer apps for smartphones.
    “Taking measurements around the globe and sharing data via the internet or social media would make for an interesting science project that could connect students in different countries,” said Nuno Barros e Sá, co-author and faculty mentor for the students.
    The article “Modeling the Earth’s magnetic field” is authored by Nuno Barros e Sá, Lourenço Faria, Bernardo Alves, and Miguel Cymbron. The article will appear in American Journal of Physics on May 23, 2022.
    Story Source:
    Materials provided by American Institute of Physics.

  • Spinning is key for line-dancing electrons in iron selenide

    Rice University quantum physicists are part of an international team that has answered a puzzling question at the forefront of research into iron-based superconductors: Why do electrons in iron selenide dance to a different tune when they move right and left rather than forward and back?
    A research team led by Xingye Lu at Beijing Normal University, Pengcheng Dai at Rice and Thorsten Schmitt at the Paul Scherrer Institute (PSI) in Switzerland used resonant inelastic X-ray scattering (RIXS) to measure the behavior of electron spins in iron selenide at high energy levels.
    Spin is the property of electrons related to magnetism, and the researchers discovered spins in iron selenide begin behaving in a directionally dependent way at the same time the material begins exhibiting directionally dependent electronic behavior, or nematicity. The team’s results were published online in Nature Physics.
    Electronic nematicity is believed to be an important ingredient for bringing about superconductivity in iron selenide and similar iron-based materials. First discovered in 2008, these iron-based superconductors number in the dozens. All become superconductors at very cold temperatures, and most exhibit nematicity before they reach the critical temperature where superconductivity begins.
    Whether nematicity helps or hinders the onset of superconductivity is unclear. But the results of the high-energy spin experiments at PSI’s Swiss Light Source are a surprise because iron selenide is the only iron-based superconductor in which nematicity occurs in the absence of a long-range magnetic ordering of electron spins.
    “Iron selenide has something special going for it,” said Rice study co-author Qimiao Si, who, like Dai, is a member of the Rice Quantum Initiative. “Being nematic without long-range magnetic order provides an extra knob to access the physics of the iron-based superconductors. In this work, the experiment uncovered something truly striking, namely that high-energy spin excitations are dispersive and undamped, meaning they have a well-defined energy-versus-momentum relationship.”
    In all iron-based superconductors, iron atoms are arranged in 2D sheets that are sandwiched between top and bottom sheets of other elements, selenium in the case of iron selenide. The atoms in the 2D iron sheets are spaced in checkerboard fashion, exactly the same distance from one another in both the left-right and forward-back directions. But as the materials are cooled near the point of superconductivity, the iron sheets undergo a slight structural shift. Instead of exact squares, the atoms form oblong rhombuses like baseball diamonds, where the distance from home plate to second base is shorter than the distance from first to third base. Electronic nematicity occurs along with this shift, taking the form of increased or decreased electrical resistance or conductivity only in the direction of home-to-second or first-to-third.

  • Haptics device creates realistic virtual textures

    Technology has allowed us to immerse ourselves in a world of sights and sounds from the comfort of our home, but there’s something missing: touch.
    Tactile sensation is an incredibly important part of how humans perceive their reality. Haptics, devices that can produce extremely specific vibrations to mimic the sensation of touch, are a way to bring that third sense to life. However, as far as haptics have come, humans are incredibly particular about whether or not something feels “right,” and virtual textures don’t always hit the mark.
    Now, researchers at the USC Viterbi School of Engineering have developed a new method for computers to achieve that true texture — with the help of human beings.
    Called a preference-driven model, the framework uses our ability to distinguish between the details of certain textures as a tool in order to give these virtual counterparts a tune-up.
    The research was published in IEEE Transactions on Haptics by three USC Viterbi Ph.D. students in computer science, Shihan Lu, Mianlun Zheng and Matthew Fontaine, as well as Stefanos Nikolaidis, USC Viterbi assistant professor in computer science, and Heather Culbertson, USC Viterbi WiSE Gabilan Assistant Professor in Computer Science.
    “We ask users to compare their feeling between the real texture and the virtual texture,” Lu, the first author, explained. “The model then iteratively updates a virtual texture so that the virtual texture can match the real one in the end.”
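    The toy sketch below illustrates the general idea of such a preference-driven update loop: propose a perturbed set of virtual-texture parameters, keep whichever candidate the user prefers, and repeat. The parameters, step size, and the simulated “user” are illustrative assumptions and do not reproduce the authors’ actual model.

    ```python
    # Toy sketch of a preference-driven update loop: propose a perturbed texture parameter
    # vector, ask which of the two candidates feels closer to the real texture, and keep
    # the preferred one. The parameters, step size, and the simulated "user" oracle are
    # illustrative assumptions, not the authors' actual model from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    target = np.array([0.8, 0.3, 0.5])            # hidden "real texture" parameters
    current = rng.uniform(0.0, 1.0, size=3)       # initial virtual texture parameters

    def prefers(a, b):
        """Simulated user: prefers the candidate closer to the real texture."""
        return np.linalg.norm(a - target) < np.linalg.norm(b - target)

    for step in range(200):
        candidate = np.clip(current + rng.normal(0.0, 0.05, size=3), 0.0, 1.0)
        if prefers(candidate, current):           # keep whichever candidate is preferred
            current = candidate

    print("tuned parameters:", np.round(current, 3))
    ```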
    According to Fontaine, the idea first emerged when they shared Culbertson’s Haptic Interfaces and Virtual Environments class in the fall of 2019. They drew inspiration from the art application Picbreeder, which generates images based on a user’s preferences, iterating until it reaches the desired result.

  • Long-hypothesized 'next generation wonder material' created

    For over a decade, scientists have attempted to synthesize a new form of carbon called graphyne with limited success. That endeavor is now at an end, though, thanks to new research from the University of Colorado Boulder.
    Graphyne has long been of interest to scientists because of its similarities to the “wonder material” graphene, another form of carbon that is highly valued by industry and whose study was awarded the Nobel Prize in Physics in 2010. However, despite decades of work and theorizing, only a few fragments of graphyne had ever been created before now.
    This research, announced last week in Nature Synthesis, fills a longstanding gap in carbon material science, potentially opening brand-new possibilities for electronics, optics and semiconducting material research.
    “The whole audience, the whole field, is really excited that this long-standing problem, or this imaginary material, is finally getting realized,” said Yiming Hu, lead author on the paper and 2022 doctoral graduate in chemistry.
    Scientists have long been interested in the construction of new or novel carbon allotropes, or forms of carbon, because of carbon’s usefulness to industry, as well as its versatility.
    Carbon allotropes can be constructed in different ways depending on how sp2-, sp3- and sp-hybridized carbon (the different ways carbon atoms can bond to neighboring atoms), and the corresponding bonds, are utilized. The most well-known carbon allotropes are graphite (used in tools like pencils and in batteries) and diamond, which are made of sp2 carbon and sp3 carbon, respectively.

  • Organic crystals can serve as energy converters for emerging technologies

    New research from the NYU Abu Dhabi (NYUAD) Smart Materials Lab, published today in the journal Nature Communications, demonstrates that organic crystals, a new class of smart engineering materials, can serve as efficient and sustainable energy conversion materials for advanced technologies such as robotics and electronics.
    While organic crystals were previously thought to be fragile, the NYUAD researchers have discovered that some organic crystals are mechanically very robust. They developed a material that establishes a new world record for its ability to switch between different shapes by expansion or contraction over half of its length, without losing its perfectly-ordered structure.
    In the study, titled “Exceptionally High Work Density of a Ferroelectric Dynamic Organic Crystal around Room Temperature,” the team, led by NYUAD Professor of Chemistry Panče Naumov, describes observations of how the organic crystalline material reacts to different temperatures. The researchers found that the organic crystals were able to reversibly change shape in a similar manner to plastics and rubber. Specifically, this material could expand and contract over half of its length (51 percent) repeatedly, over thousands of cycles, without any deterioration. It was also able to both expand and contract at room temperature, as opposed to other materials that require a higher temperature to transform, creating higher energy costs for operation.
    Unlike traditional materials that are silicon- or silica-based, and inevitably stiff, heavy and brittle, the materials that will be used for future electronics will be soft and organic in nature. These advanced technologies require materials that are lightweight, resilient to damage, efficient in performance, and also have added qualities such as mechanical flexibility and ability to operate sustainably, with minimal consumption of energy. The results of this study have demonstrated, for the first time, that certain organic crystalline materials meet the needs of these technologies, and can be used in applications such as soft robotics, artificial muscles, organic optics, and organic electronics (electronics created solely from organic materials).
    “This latest discovery from the Smart Materials Lab at NYUAD builds on a series of our previous discoveries about the untapped potential of this new class of materials, which includes adaptive crystals, self-healing crystals, and organic crystalline materials with shape memory,” said Naumov. “Our work has shown that organic crystals can not only meet the needs of the emerging technologies, but in some cases can also surpass the levels of efficiency and sustainability of other, more common materials.”
    Story Source:
    Materials provided by New York University.

  • Mixing laser- and x-ray-beams

    Unlike fictional laser swords, real laser beams do not interact with each other when they cross — unless the beams meet within a suitable material allowing for nonlinear light-matter interaction. In such a case, wave mixing can give rise to beams with changed colors and directions.
    Wave-mixing processes between different light beams are one cornerstone of the field of nonlinear optics, which has been firmly established since lasers became widely available. Within a suitable material, such as particular crystals, two laser beams can “feel each other’s presence.” In this process, energy and momentum can be exchanged, giving rise to additional laser beams emerging from the interaction zone in different directions and with different frequencies, seen in the visible spectral range as different colors. These effects are commonly used to design and realize new laser light sources. Just as important, the analysis of the emerging light beams in wave-mixing phenomena provides insights into the nature of the material in which the wave-mixing process occurs. Such wave-mixing-based spectroscopy allows researchers to understand intricacies of the electronic structure of a specimen and how light can excite and interact with the material. So far, however, these approaches have hardly been used outside the visible or infrared spectral range.
    A team of researchers from Max Born Institute (MBI), Berlin, and DESY, Hamburg, has now observed a new kind of such wave-mixing process involving soft x-rays. Overlapping ultrashort pulses of soft x-rays and infrared radiation in a single crystal of lithium fluoride (LiF), they observed how energy from two infrared photons is transferred to or from the x-ray photon, changing the x-ray “color” in a so-called third-order nonlinear process. Not only did they observe this particular process with x-rays for the first time, they were also able to map out its efficiency when changing the color of the incoming x-rays. It turns out that the mixing signals are only detectable when the process involves an inner-shell electron from a lithium atom being promoted into a state where this electron is tightly bound to the vacancy it left behind, a state known as an exciton. Furthermore, comparison with theory shows that an otherwise “optically forbidden” transition of an inner-shell electron contributes to the wave-mixing process.
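    The energy bookkeeping behind this color change can be stated compactly. The relation below simply restates the description above, that the outgoing soft-x-ray photon is shifted up or down by the energy of two infrared photons; it is a generic statement of energy conservation in third-order mixing, not a formula quoted from the paper.

    ```latex
    % Energy conservation in the described four-wave mixing process:
    % the outgoing soft-x-ray photon is shifted by twice the infrared photon energy.
    \[
      \hbar\omega_{\mathrm{out}} \;=\; \hbar\omega_{\mathrm{x}} \,\pm\, 2\,\hbar\omega_{\mathrm{IR}}
    \]
    ```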
    Via analysis of this resonant four-wave mixing process, the researchers get a detailed picture of where the optically excited electron travels in its very short lifetime. “Only if the excited electron is localized in the immediate vicinity of the hole it has left behind do we observe the four-wave mixing signal,” says Robin Engel, a PhD student involved in the work, “and because we have used a specific color of x-rays, we know that this hole is very close to the atomic nucleus of the lithium atom.” Due to the ability of x-rays to excite inner shell electrons selectively at the different atomic species in a material, the demonstrated approach allows researchers to track electrons moving around in molecules or solids after they have been stimulated by an ultrafast laser pulse. Exactly such processes — electrons moving towards different atoms after having been excited by light — are crucial steps in photochemical reactions or applications such as light harvesting, e.g., via photovoltaics or direct solar fuel generation. “As our wave-mixing spectroscopy approach can be scaled to much higher photon energies at x-ray lasers, many different atoms of the periodic table can be selectively excited. In this way we expect that it will be possible to track the transient presence of electrons at many different atoms of a more complex material, giving new insight into these important processes,” explains Daniel Schick, researcher at MBI.
    Story Source:
    Materials provided by Max Born Institute for Nonlinear Optics and Short Pulse Spectroscopy (MBI).

  • Neuromorphic memory device simulates neurons and synapses

    Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward completing the goal of neuromorphic computing designed to rigorously mimic the human brain with semiconductor devices.
    Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of neurons and synapses that make up the human brain. Inspired by the cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses still remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of humans by introducing the neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.
    Similar to commercial graphics cards, the artificial synaptic devices previously studied were often used to accelerate parallel computations, which differs clearly from the operational mechanisms of the human brain. The research team implemented the synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.
    The human brain consists of a complex network of 100 billion neurons and 100 trillion synapses. The functions and structures of neurons and synapses can flexibly change according to the external stimuli, adapting to the surrounding environment. The research team developed a neuromorphic device in which short-term and long-term memories coexist using volatile and non-volatile memory devices that mimic the characteristics of neurons and synapses, respectively. A threshold switch device is used as volatile memory and phase-change memory is used as a non-volatile device. Two thin-film devices are integrated without intermediate electrodes, implementing the functional adaptability of neurons and synapses in the neuromorphic memory.
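    As a purely conceptual software analogy for this division of labor, the sketch below pairs a leaky integrate-and-fire “neuron” (a volatile state that resets after firing) with a simple “synapse” whose weight persists and strengthens with each spike. It illustrates the neuron-synapse interplay described above and is not a model of the KAIST device itself.

    ```python
    # Conceptual analogy only: a leaky integrate-and-fire "neuron" (volatile state that
    # resets after firing) driving a simple "synapse" whose weight strengthens a little
    # with each spike (non-volatile state that persists). This illustrates the
    # neuron/synapse division of labor described above, not the KAIST device physics.
    import numpy as np

    rng = np.random.default_rng(1)

    leak, threshold = 0.9, 1.0        # neuron: leaky membrane with a firing threshold
    membrane, weight = 0.0, 0.1       # volatile membrane potential, non-volatile weight
    spikes = 0

    for t in range(1000):
        membrane = leak * membrane + weight * rng.random()   # integrate weighted input
        if membrane >= threshold:                            # neuron fires...
            spikes += 1
            membrane = 0.0                                   # ...and its state resets (volatile)
            weight = min(1.0, weight + 0.01)                 # synapse potentiates (non-volatile)

    print(f"spikes: {spikes}, final synaptic weight: {weight:.2f}")
    ```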
    Professor Keon Jae Lee explained, “Neurons and synapses interact with each other to establish cognitive functions such as memory and learning, so simulating both is an essential element for brain-inspired artificial intelligence. The developed neuromorphic memory device also mimics the retraining effect that allows quick learning of the forgotten information by implementing a positive feedback effect between neurons and synapses.”
    Story Source:
    Materials provided by The Korea Advanced Institute of Science and Technology (KAIST).