More stories

  • in

    DIY digital archaeology: New methods for visualizing small objects and artifacts

    The ability to visually represent artefacts, whether inorganics like stone, ceramic and metal, or organics such as bone and plant material, has always been of great importance to the fields of anthropology and archaeology. For researchers, educators, students and the public, the ability to see the past, not only read about it, offers invaluable insights into the production of cultural materials and the populations who made and used them.
    Digital photography is the most commonly used method of visual representation, but despite its speed and efficiency, it often fails to faithfully represent the artefact being studied. In recent years, 3-D scanning has emerged as an alternative source of high-quality visualizations, but the cost of the equipment and the time needed to produce a model are often prohibitive.
    Now, a paper published in PLOS ONE presents two new methods for producing high-resolution visualizations of small artefacts, each achievable with basic software and equipment. Using expertise from fields which include archaeological science, computer graphics and video game development, the methods are designed to allow anyone to produce high-quality images and models with minimal effort and cost.
    The first method, Small Object and Artefact Photography or SOAP, deals with the photographic application of modern digital techniques. The protocol guides users through small object and artefact photography, from the initial setup of the equipment to best practices for camera handling and functionality and the application of post-processing software.
    The second method, High Resolution Photogrammetry or HRP, covers the photographic capture, digital reconstruction and three-dimensional modelling of small objects. It aims to give a comprehensive guide to the development of high-resolution 3D models, merging well-known techniques from the academic and computer graphics fields and allowing anyone to independently produce high-resolution, quantifiable models.
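    To make the reconstruction stage more concrete, the sketch below outlines the generic two-view photogrammetry steps (feature detection, matching, pose estimation, triangulation) in Python with OpenCV. It is only an illustrative sketch under assumed inputs — the image file names and camera matrix are placeholders — and it is not the HRP protocol itself, which is built around RealityCapture.
    ```python
    # Illustrative two-view photogrammetry sketch (not the published HRP protocol).
    # Requires: pip install opencv-python numpy
    import cv2
    import numpy as np

    # Placeholder inputs: two overlapping photos and an assumed camera matrix.
    img1 = cv2.imread("artefact_view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("artefact_view2.jpg", cv2.IMREAD_GRAYSCALE)
    K = np.array([[3000.0, 0.0, 2000.0],   # focal length and principal point (assumed)
                  [0.0, 3000.0, 1500.0],
                  [0.0, 0.0, 1.0]])

    # 1. Detect and describe local features in each photograph.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # 2. Match features between the two views (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
    pts1 = np.float64([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in good])

    # 3. Estimate the relative camera pose from the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 4. Triangulate matched points into a sparse 3-D point cloud (up to scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    cloud = (pts4d[:3] / pts4d[3]).T
    print(f"Reconstructed {len(cloud)} sparse 3-D points")
    ```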
    “These new protocols combine detailed, concise, and user-friendly workflows covering photographic acquisition and processing, thereby contributing to the replicability and reproducibility of high-quality visualizations,” says Jacopo Niccolò Cerasoni, lead author of the paper. “By clearly explaining every step of the process, including theoretical and practical considerations, these methods will allow users to produce high-quality, publishable two- and three-dimensional visualisations of their archaeological artefacts independently.”
    The SOAP and HRP protocols were developed using Adobe Camera Raw, Adobe Photoshop, RawDigger, DxO Photolab, and RealityCapture and take advantage of native functions and tools that make image capture and processing easier and faster. Although most of these programs are readily available in academic environments, SOAP and HRP can be applied to any other non-subscription-based software with similar features. This also enables researchers to use free or open-access software, albeit with minor changes to some of the presented steps.
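    As an example of what such a substitution might look like, the following is a minimal sketch of batch RAW development using the open-source Python libraries rawpy and imageio in place of a commercial RAW processor. The folder layout, file extension and processing settings are assumptions for illustration, not steps taken from the published protocols.
    ```python
    # Minimal batch RAW-to-TIFF development sketch using open-source libraries
    # (rawpy, imageio); a stand-in for the commercial RAW processors named above,
    # not part of the published SOAP/HRP protocols.
    # Requires: pip install rawpy imageio
    from pathlib import Path
    import rawpy
    import imageio.v3 as iio

    raw_dir = Path("raw_captures")      # hypothetical folder of camera RAW files
    out_dir = Path("developed_tiffs")
    out_dir.mkdir(exist_ok=True)

    for raw_path in sorted(raw_dir.glob("*.NEF")):   # adjust extension to your camera
        with rawpy.imread(str(raw_path)) as raw:
            # Demosaic with camera white balance, no auto-brightening, 16-bit output.
            rgb = raw.postprocess(use_camera_wb=True,
                                  no_auto_bright=True,
                                  output_bps=16)
        iio.imwrite(out_dir / f"{raw_path.stem}.tiff", rgb)
        print(f"Developed {raw_path.name}")
    ```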
    Both the SOAP protocol and the HRP protocol are published openly on protocols.io.
    “Because visual communication is so important to understanding past behavior, technology and culture, the ability to faithfully represent artefacts is vital for the field of archaeology,” says co-author Felipe do Nascimento Rodrigues, from the University of Exeter.
    Even as new technologies revolutionize the field of archaeology, practical instruction on archaeological photography and three-dimensional reconstruction is lacking. The authors of the new paper hope to fill this gap, providing researchers, educators and enthusiasts with step-by-step instructions for creating high-quality visualizations of artefacts. More

  • in

    A novel computing approach to recognizing chaos

    Chaos isn’t always harmful to technology; in fact, it can have several useful applications if it can be detected and identified.
    Chaotic dynamics are prevalent throughout nature as well as in manufactured devices and technology. Though chaos is usually considered a negative, something to be removed from systems to ensure their optimal operation, there are circumstances in which it can be a benefit and can even have important applications. Hence there is growing interest in the detection and classification of chaos in systems.
    A new paper published in EPJ B authored by Dagobert Wenkack Liedji and Jimmi Hervé Talla Mbé of the Research unit of Condensed Matter, Electronics and Signal Processing, Department of Physics, University of Dschang, Cameroon, and Godpromesse Kenné, from Laboratoire d’ Automatique et d’Informatique Appliquée, Department of Electrical Engineering, IUT-FV Bandjoun, University of Dschang, Cameroon, proposes using the single nonlinear node delay-based reservoir computer to identify chaotic dynamics.
    In the paper, the authors show that the classification capabilities of this system are robust, with an accuracy of over 99 per cent. Examining the effect of the length of the time series on the performance of the method, they found that higher accuracy was achieved when the single nonlinear node delay-based reservoir computer was used with short time series.
    Several quantifiers have been developed in the past to distinguish chaotic dynamics, most prominently the largest Lyapunov exponent (LLE), which is highly reliable and yields numerical values that help determine the dynamical state of a system.
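    To illustrate the idea behind the LLE (this example is not from the paper), the exponent of a simple system such as the logistic map can be estimated by averaging the logarithm of the local stretching rate along a trajectory; a positive value indicates chaos:
    ```python
    # Illustrative estimate of the largest Lyapunov exponent (LLE) for the
    # logistic map x_{n+1} = r * x_n * (1 - x_n); not taken from the paper.
    import numpy as np

    def logistic_lle(r, x0=0.4, n_transient=1000, n_steps=100_000):
        """Average log|f'(x)| along the trajectory, with f'(x) = r*(1 - 2x)."""
        x = x0
        for _ in range(n_transient):          # discard transient behaviour
            x = r * x * (1 - x)
        acc = 0.0
        for _ in range(n_steps):
            x = r * x * (1 - x)
            acc += np.log(abs(r * (1 - 2 * x)))
        return acc / n_steps

    print(logistic_lle(3.5))   # periodic regime: negative LLE
    print(logistic_lle(3.9))   # chaotic regime: positive LLE
    ```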
    The team overcame issues with the LLE, such as its computational expense, its need for a mathematical model of the system, and its long processing times, by studying several deep learning models, finding that these models generally obtained poor classification rates. The exception was a large kernel size convolutional neural network (LKCNN), which could classify chaotic and nonchaotic time series with high accuracy.
    Using the Mackey-Glass (MG) delay-based reservoir computer to classify nonchaotic and chaotic dynamical behaviours, the authors thus showed that the system can act as an efficient and robust quantifier for distinguishing the two kinds of signal.
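    The paper should be consulted for the actual architecture and parameters; purely as an illustration of the general idea, the toy sketch below drives a single nonlinear node with a Mackey-Glass-type nonlinearity through a delay line of virtual nodes and trains a linear readout to separate periodic from chaotic logistic-map series. Every parameter value, the input data and the least-squares readout are invented for this example.
    ```python
    # Toy single-node, delay-based reservoir classifier (illustrative only; the
    # parameters, input data and least-squares readout are invented, not the
    # authors' setup).
    import numpy as np

    rng = np.random.default_rng(0)
    N_NODES = 50                          # virtual nodes along the delay line
    MASK = rng.uniform(-1, 1, N_NODES)    # random input mask for time multiplexing

    def mackey_glass_nl(z, p=1.0):
        """Mackey-Glass-type saturating nonlinearity acting as the single node."""
        return z / (1.0 + np.abs(z) ** p)

    def reservoir_features(u, eta=0.4, gamma=0.8):
        """Drive the delay reservoir with series u; return time-averaged node states."""
        states = np.zeros(N_NODES)        # node states over one delay span
        avg = np.zeros(N_NODES)
        for sample in u:
            for i in range(N_NODES):
                # each virtual node mixes the masked input with its delayed state
                states[i] = mackey_glass_nl(eta * MASK[i] * sample + gamma * states[i])
            avg += states
        return avg / len(u)

    def logistic_series(r, n=500, x0=0.3):
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)
            out.append(x)
        return np.array(out)

    # Small labelled dataset: periodic regime (label -1) vs largely chaotic regime (+1).
    rs = np.concatenate([rng.uniform(3.4, 3.55, 40), rng.uniform(3.9, 4.0, 40)])
    labels = np.concatenate([-np.ones(40), np.ones(40)])
    X = np.array([reservoir_features(logistic_series(r)) for r in rs])

    # Linear readout trained by ordinary least squares.
    A = np.c_[X, np.ones(len(X))]
    W, *_ = np.linalg.lstsq(A, labels, rcond=None)
    print("training accuracy:", (np.sign(A @ W) == labels).mean())
    ```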
    They listed the advantages of the system they used: it does not require knowledge of the set of equations describing the dynamics of a system, only data from the system, and its neuromorphic implementation using an analogue reservoir computer enables the real-time detection of dynamical behaviours from a given oscillator.
    The team concludes that future research will be devoted to deep reservoir computers to explore their performance in classifying more complex dynamics.
    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length. More

  • in

    Exposure assessment for Deepwater Horizon oil spill: Health outcomes

    Nearly 12 years after the Deepwater Horizon oil spill, scientists are still examining the potential health effects on workers and volunteers who experienced oil-related exposures.
    To help shape future prevention efforts, one West Virginia University researcher — Caroline Groth, assistant professor in the School of Public Health’s Department of Epidemiology and Biostatistics — has developed novel statistical methods for assessing airborne exposure. Working with collaborators from multiple institutions, Groth has made it possible for researchers to characterize oil spill exposures in greater detail than has ever been done before.
    With very few Ph.D. biostatisticians working in occupational health, there were few appropriate statistical methodologies for assessing inhalation exposures in the GuLF STUDY, a study launched by the National Institute of Environmental Health Sciences shortly after the Deepwater Horizon oil spill. The purpose of the study, the largest ever conducted following an oil spill, was to examine the health of people involved in the response and clean-up efforts. Groth was part of the exposure assessment team, led by Patricia Stewart and Mark Stenzel, that was tasked with characterizing worker exposures.
    Groth’s statistical methods, which she began developing in 2012, laid the groundwork for a crucial step: determining whether there are associations between health outcomes and exposures from the oil spill and clean-up work, which involved over 9,000 vessels deployed in Gulf of Mexico waters across Alabama, Florida, Louisiana and Mississippi and tens of thousands of workers on the water and on land.
    The Deepwater Horizon oil spill is considered the largest marine oil spill in the history of the U.S.
    “Workers were exposed differently based on their activities, time of exposure, etc., and our research team’s goal was to develop exposure estimates for each of those scenarios and then link them to the participants’ work history through an ‘exposure matrix,'” Groth said. More
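    To show the general shape of such a linkage (with entirely hypothetical column names and values, not the GuLF STUDY data or Groth’s methods), here is a minimal pandas sketch that joins an exposure matrix to worker histories and accumulates exposure per participant:
    ```python
    # Illustrative join of an exposure matrix to worker histories (hypothetical
    # column names and values; not the GuLF STUDY data or the actual methods).
    import pandas as pd

    # Exposure matrix: one estimated airborne exposure level per scenario.
    exposure_matrix = pd.DataFrame({
        "activity":     ["skimming", "booming", "decontamination"],
        "period":       ["2010-05", "2010-05", "2010-06"],
        "est_exposure": [0.82, 0.35, 0.12],   # placeholder units and values
    })

    # Work histories: which scenario each participant worked in, and for how long.
    work_history = pd.DataFrame({
        "participant_id": [101, 101, 202],
        "activity":       ["skimming", "decontamination", "booming"],
        "period":         ["2010-05", "2010-06", "2010-05"],
        "days_worked":    [20, 10, 35],
    })

    # Link histories to exposure estimates, then summarize cumulative exposure.
    linked = work_history.merge(exposure_matrix, on=["activity", "period"], how="left")
    linked["cumulative"] = linked["est_exposure"] * linked["days_worked"]
    print(linked.groupby("participant_id")["cumulative"].sum())
    ```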

  • in

    Predicting the most stable boron nitride structure with quantum simulations

    Boron nitride (BN) is a versatile material with applications in a variety of engineering and scientific fields. This is largely due to an interesting property of BN called “polymorphism,” characterized by the ability to crystallize into more than one type of structure. This generally occurs as a response to changes in temperature, pressure, or both. Furthermore, the different structures, called “polymorphs,” differ remarkably in their physical properties despite having the same chemical formula. As a result, polymorphs play an important role in material design, and a knowledge of how to selectively favor the formation of the desired polymorph is crucial in this regard.
    However, BN polymorphs pose a particular problem. Despite several experiments assessing the relative stabilities of BN polymorphs, no consensus has emerged on this topic. While computational methods are often the go-to approach for such problems, BN polymorphs have posed serious challenges to standard computational techniques because of the weak “van der Waals (vdW) interactions” between their layers, which are not accounted for in these computations. Moreover, the four stable BN polymorphs, namely rhombohedral (rBN), hexagonal (hBN), wurtzite (wBN), and zinc-blende (cBN), lie within a narrow energy range, making the capture of small energy differences together with vdW interactions even more challenging.
    Fortunately, an international research team led by Assistant Professor Kousuke Nakano from Japan Advanced Institute of Science and Technology (JAIST) has now provided evidence to settle the debate. In their study, they addressed the issue with a state-of-the-art first-principles calculation framework, namely fixed-node diffusion Monte Carlo (FNDMC) simulations. FNDMC is a stage in the popular quantum Monte Carlo approach, in which a parametrized many-body quantum “wavefunction” is first optimized toward the ground state and then supplied to the FNDMC calculation.
    Additionally, the team computed the Gibbs energy (the useful work obtainable from a system at constant pressure and temperature) of the BN polymorphs for different temperatures and pressures using density functional theory (DFT) and phonon calculations. The paper was made available online on March 24, 2022, and published in The Journal of Physical Chemistry C.
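    In rough terms, that comparison amounts to evaluating, for each polymorph, a Gibbs energy of the form G(T, p) ≈ E_elec + F_vib(T) + pV from the DFT total energy, the phonon (vibrational) free energy and the cell volume, and ranking the results. The sketch below is only a schematic of that bookkeeping under those assumptions, not the authors’ workflow, and its inputs would have to come from real first-principles calculations:
    ```python
    # Schematic ranking of polymorphs by Gibbs energy G(T, p) ~ E_elec + F_vib(T) + p*V.
    # Generic bookkeeping only; the electronic energies, vibrational free energies
    # and volumes must come from actual DFT and phonon calculations.
    def gibbs_energy(e_elec, f_vib, volume, pressure):
        """All quantities per formula unit, in mutually consistent units."""
        return e_elec + f_vib + pressure * volume

    def rank_polymorphs(polymorphs, pressure):
        """polymorphs: dict mapping name -> (e_elec, f_vib_at_T, volume).
        Returns the names ordered from most to least stable (lowest G first)."""
        g = {name: gibbs_energy(*vals, pressure) for name, vals in polymorphs.items()}
        return sorted(g, key=g.get)

    # Usage sketch (values would be filled in from first-principles results):
    # order = rank_polymorphs({"hBN": (...), "rBN": (...), "cBN": (...), "wBN": (...)}, pressure=0.0)
    ```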
    According to the FNDMC results, hBN was the most stable structure, followed by rBN, cBN, and wBN. These results were consistent at both 0 K and 300 K (room temperature). However, the DFT estimations yielded conflicting results for two different approximations. Dr. Nakano explains these contradictory findings: “Our results reveal that the estimation of relative stabilities is greatly influenced by the exchange correlational functional, or the approximation used in the DFT calculation. As a result, a quantitative conclusion cannot be reached using DFT findings, and a more accurate approach, such as FNDMC, is required.”
    Notably, the FNDMC results were in agreement with those generated by other refined computational methods, such as “coupled cluster,” suggesting that FNDMC is an effective tool for dealing with polymorphs, especially those governed by vdW forces. The team also showed that it can provide other important information, such as reliable reference energies, when experimental data are unavailable.
    Dr. Nakano is excited about the future prospects of the method in the area of materials science. “Our study demonstrates the ability of FNDMC to detect tiny energy changes involving vdW forces, which will stimulate the use of this method for other van der Waals materials,” he says. “Moreover, molecular simulations based on this accurate and reliable method could empower material designs, enabling the development of medicines and catalysts.” More

  • in

    Hybrid quantum bit based on topological insulators

    With their superior properties, topological qubits could help achieve a breakthrough in the development of a quantum computer designed for universal applications. So far, no one has yet succeeded in unambiguously demonstrating a quantum bit, or qubit for short, of this kind in a lab. However, scientists from Forschungszentrum Jülich have now gone some way to making this a reality. For the first time, they succeeded in integrating a topological insulator into a conventional superconducting qubit. Just in time for “World Quantum Day” on 14 April, their novel hybrid qubit made it to the cover of the latest issue of the journal Nano Letters.
    Quantum computers are regarded as the computers of the future. Using quantum effects, they promise to deliver solutions for highly complex problems that cannot be processed by conventional computers in a realistic time frame. However, the widespread use of such computers is still a long way off. Current quantum computers generally contain only a small number of qubits. The main problem is that they are highly prone to error. The bigger the system, the more difficult it is to fully isolate it from its environment.
    Many hopes are therefore pinned on a new type of quantum bit — the topological qubit. This approach is being pursued by several research groups as well as companies such as Microsoft. This type of qubit exhibits the special feature that it is topologically protected; the particular geometric structure of the superconductors as well as their special electronic material properties ensure that quantum information is retained. Topological qubits are therefore considered to be particularly robust and largely immune to external sources of decoherence. They also appear to enable fast switching times comparable to those achieved by the conventional superconducting qubits used by Google and IBM in current quantum processors.
    However, it is not yet clear whether topological qubits can actually be produced, because a suitable material basis is still lacking for generating, beyond any doubt, the special quasiparticles required for them in experiments. These quasiparticles are also known as Majorana states. Until now, they have been unambiguously demonstrated only in theory, not in experiments. Hybrid qubits, as now constructed for the first time by the research group led by Dr. Peter Schüffelgen at the Peter Grünberg Institute (PGI-9) of Forschungszentrum Jülich, are opening up new possibilities in this area. They already contain topological materials at crucial points. This novel type of hybrid qubit therefore provides researchers with a new experimental platform to test the behaviour of topological materials in highly sensitive quantum circuits.
    Story Source:
    Materials provided by Forschungszentrum Juelich. Note: Content may be edited for style and length. More

  • in

    Graphene-hBN breakthrough to spur new LEDs, quantum computing

    In a discovery that could speed research into next-generation electronics and LED devices, a University of Michigan research team has developed the first reliable, scalable method for growing single layers of hexagonal boron nitride on graphene.
    The process, which can produce large sheets of high-quality hBN with the widely used molecular-beam epitaxy process, is detailed in a study in Advanced Materials.
    Graphene-hBN structures can power LEDs that generate deep-UV light, which is impossible in today’s LEDs, said Zetian Mi, U-M professor of electrical engineering and computer science and a corresponding author of the study. Deep-UV LEDs could drive smaller size and greater efficiency in a variety of devices including lasers and air purifiers.
    “The technology used to generate deep-UV light today is mercury-xenon lamps, which are hot, bulky, inefficient and contain toxic materials,” Mi said. “If we can generate that light with LEDs, we could see an efficiency revolution in UV devices similar to what we saw when LED light bulbs replaced incandescents.”
    Hexagonal boron nitride is the world’s thinnest insulator while graphene is the thinnest of a class of materials called semimetals, which have highly malleable electrical properties and are important for their role in computers and other electronics.
    Bonding hBN and graphene together in smooth, single-atom-thick layers unleashes a treasure trove of exotic properties. In addition to deep-UV LEDs, graphene-hBN structures could enable quantum computing devices, smaller and more efficient electronics and optoelectronics and a variety of other applications. More

  • in

    Researchers create miniature wide-angle camera with flat metalenses

    Researchers have designed a new compact camera that acquires wide-angle images of high-quality using an array of metalenses — flat nanopatterned surfaces used to manipulate light. By eliminating the bulky and heavy lenses typically required for this type of imaging, the new approach could enable wide-angle cameras to be incorporated into smartphones and portable imaging devices for vehicles such as cars or drones.
    Tao Li and colleagues from Nanjing University in China report their new ultrathin camera in Optica, Optica Publishing Group’s journal for high-impact research. The new camera, which is just 0.3 centimeters thick, can produce clear images of a scene with a viewing angle of more than 120 degrees.
    Wide-angle imaging is useful for capturing large amounts of information that can create stunning, high-quality images. For machine vision applications such as autonomous driving and drone-based surveillance, wide-angle imaging can enhance performance and safety, for example by revealing an obstacle you couldn’t otherwise see while backing up in a vehicle.
    “To create an extremely compact wide-angle camera, we used an array of metalenses that each capture certain parts of the wide-angle scene,” said Li. “The images are then stitched together to create a wide-angle image without any degradation in image quality.”
    Miniaturizing the wide-angle lens
    Wide-angle imaging is usually accomplished with a fish-eye compound lens or another type of multilayer lens. Although researchers have previously tried to use metalenses to create wide-angle cameras, the resulting devices tend to suffer from poor image quality or other drawbacks.
    In the new work, the researchers used an array of metalenses that are each carefully designed to focus a different range of illumination angles. This allows each lens to clearly image part of a wide-angle object or scene. The clearest parts of each image can then be computationally stitched together to create the final image.
    “Thanks to the flexible design of the metasurfaces, the focusing and imaging performance of each lens can be optimized independently,” said Li. “This gives rise to a high quality final wide-angle image after a stitching process. What’s more, the array can be manufactured using just one layer of material, which helps keep cost down.”
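    As a rough illustration of the stitching idea (not the authors’ actual reconstruction pipeline), the sketch below assumes the registered sub-images from a metalens array are available and, tile by tile, keeps whichever sub-image is locally sharpest, using the variance of the Laplacian as a simple focus measure:
    ```python
    # Conceptual "keep the sharpest region" stitching sketch (not the authors'
    # reconstruction pipeline). Assumes the sub-images captured through the
    # metalens array have already been registered onto a common wide-angle grid.
    import cv2
    import numpy as np

    def sharpness(tile):
        """Variance of the Laplacian: a simple local focus/sharpness measure."""
        return cv2.Laplacian(tile, cv2.CV_64F).var()

    def stitch_sharpest(sub_images, tile=32):
        """sub_images: list of same-sized grayscale arrays, each sharp over part of the field."""
        h, w = sub_images[0].shape
        out = np.zeros((h, w), dtype=sub_images[0].dtype)
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                # keep, for this tile, the sub-image in which it appears sharpest
                best = max(sub_images, key=lambda im: sharpness(im[y:y + tile, x:x + tile]))
                out[y:y + tile, x:x + tile] = best[y:y + tile, x:x + tile]
        return out
    ```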
    Seeing more with flat lenses
    To demonstrate the new approach, the researchers used nanofabrication to create a metalens array and mounted it directly to a CMOS sensor, creating a planar camera that measured about 1 cm × 1 cm × 0.3 cm. They then used this camera to image a wide-angle scene created by using two projectors to illuminate a curved screen surrounding the camera at a distance of 15 cm.
    They compared their new planar camera with one based on a single traditional metalens while imaging the words “Nanjing University” projected across the curved screen. The planar camera produced an image that showed every letter clearly and had a viewing angle larger than 120°, more than three times larger than that of the camera based on a traditional metalens.
    The researchers note that the planar camera demonstrated in this research used individual metalenses just 0.3 millimeters in diameter. They plan to enlarge these to about 1 to 5 millimeters to increase the camera’s imaging quality. After optimization, the array could be mass produced to reduce the cost of each device.
    Story Source:
    Materials provided by Optica. Note: Content may be edited for style and length. More

  • in

    Tear-free hair brushing? All you need is math

    As anyone who has ever had to brush long hair knows, knots are a nightmare. But with enough experience, most learn the tricks of detangling with the least amount of pain — start at the bottom, work your way up to the scalp with short, gentle brushes, and apply detangler when necessary.
    L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics, learned the mechanics of combing years ago while brushing his young daughter’s hair.
    “I recall that detangling spray seemed to work sometimes, but I still had to be careful to comb gently, by starting from the free ends,” said Mahadevan. “But I was soon fired from the job as I was not very patient.”
    While Mahadevan lost his role as hairdresser, he was still a scientist, and the topology, geometry and mechanics of detangling posed interesting mathematical questions relevant to a range of applications, including textile manufacturing and chemical processes such as polymer processing.
    In a new paper, published in the journal Soft Matter, Mahadevan and co-authors Thomas Plumb-Reyes and Nicholas Charles explore the mathematics of combing and explain why the brushing technique used by so many is the most effective method to detangle a bundle of fibers.
    To simplify the problem, the researchers simulated two helically entwined filaments, rather than a whole head of hair.
    “Using this minimal model, we study the detangling of the double helix via a single stiff tine that moves along it, leaving two untangled filaments in its wake,” said Plumb-Reyes, a graduate student at SEAS. “We measured the forces and deformations associated with combing and then simulated it numerically.”
    “Short strokes that start at the free end and move towards the clamped end remove tangles by creating a flow of a mathematical quantity called the ‘link density’ that characterizes the degree to which hair strands are braided with each other, consistent with simulations of the process,” said Nicholas Charles, a graduate student at SEAS.
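    The ‘link density’ in the quote is closely related to the classical linking number of two closed curves, which can be computed numerically from a discretized Gauss linking integral. Purely as an illustration of the kind of quantity involved (not the authors’ simulation code):
    ```python
    # Discretized Gauss linking integral for two closed curves, illustrating the
    # kind of topological quantity ("link") behind the link density; not the
    # authors' simulation code.
    import numpy as np

    def linking_number(curve1, curve2):
        """curve1, curve2: (N, 3) arrays of points sampled along each closed curve."""
        seg1 = np.roll(curve1, -1, axis=0) - curve1          # segment vectors dr1
        seg2 = np.roll(curve2, -1, axis=0) - curve2          # segment vectors dr2
        mid1 = curve1 + 0.5 * seg1                           # segment midpoints
        mid2 = curve2 + 0.5 * seg2
        diff = mid1[:, None, :] - mid2[None, :, :]           # r1 - r2 for all segment pairs
        dist3 = np.linalg.norm(diff, axis=-1) ** 3
        cross = np.cross(seg1[:, None, :], seg2[None, :, :]) # dr1 x dr2
        integrand = np.einsum("ijk,ijk->ij", diff, cross) / dist3
        return integrand.sum() / (4 * np.pi)

    # Example: two singly linked circles give a linking number close to +/-1.
    t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    circle1 = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
    circle2 = np.c_[1 + np.cos(t), np.zeros_like(t), np.sin(t)]
    print(round(linking_number(circle1, circle2), 3))
    ```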
    The researchers also identified the optimal minimum length for each stroke — any smaller and it would take forever to comb out all the tangles and any longer and it would be too painful.
    The mathematical principles of brushing developed by Plumb-Reyes, Charles and Mahadevan were recently used by Professor Daniela Rus and her team at MIT to design algorithms for brushing hair by a robot.
    Next, the team aims to study the mechanics of brushing curlier hair and how it responds to humidity and temperature, which may lead to a mathematical understanding of a fact every person with curly hair knows: never brush dry hair.
    This research was supported by funds from the US National Science Foundation and the Henri Seydoux Fund. More