More stories

  • Scientists propose new way to detect emotions using wireless signals

    A novel artificial intelligence (AI) approach based on wireless signals could help to reveal our inner emotions, according to new research from Queen Mary University of London.
    The study, published in the journal PLOS ONE, demonstrates the use of radio waves to measure heart rate and breathing signals and to predict how someone is feeling, even in the absence of any other visual cues such as facial expressions.
    Participants were initially asked to watch a video selected by researchers for its ability to evoke one of four basic emotion types: anger, sadness, joy and pleasure. Whilst the individual was watching the video, the researchers emitted harmless radio signals, like those transmitted by any wireless system including radar or WiFi, towards the individual and measured the signals that bounced back off them. By analysing changes to these signals caused by slight body movements, the researchers were able to reveal ‘hidden’ information about an individual’s heart and breathing rates.
    Previous research has used similar non-invasive or wireless methods of emotion detection; however, in these studies data analysis has depended on the use of classical machine learning approaches, whereby an algorithm is used to identify and classify emotional states within the data. For this study the scientists instead employed deep learning techniques, in which an artificial neural network learns its own features from time-dependent raw data, and showed that this approach could detect emotions more accurately than traditional machine learning methods.
    Achintha Avin Ihalage, a PhD student at Queen Mary, said: “Deep learning allows us to assess data in a similar way to how a human brain would work, looking at different layers of information and making connections between them. Most of the published literature that uses machine learning measures emotions in a subject-dependent way, recording a signal from a specific individual and using this to predict their emotion at a later stage.
    “With deep learning we’ve shown we can accurately measure emotions in a subject-independent way, where we can look at a whole collection of signals from different individuals and learn from this data and use it to predict the emotion of people outside of our training database.”
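    The subject-independent evaluation described above can be illustrated with a short, hedged sketch: recordings are split so that no participant appears in both the training and test sets, and a small network is trained on the remaining subjects. The feature matrix, labels and classifier below are placeholder assumptions for illustration, not the study’s actual radio-signal pipeline.

```python
# Sketch of subject-independent emotion classification (illustrative only).
# Assumed: windowed heart/breathing features per segment, four emotion labels,
# and an identifier of which participant each segment came from.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_segments, n_features = 400, 64
X = rng.normal(size=(n_segments, n_features))      # placeholder features
y = rng.integers(0, 4, size=n_segments)            # anger, sadness, joy, pleasure
subjects = rng.integers(0, 15, size=n_segments)    # participant ID per segment

# GroupShuffleSplit keeps each participant's data entirely in train OR test,
# so the model is evaluated on people it has never seen.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=subjects))

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X[train_idx], y[train_idx])
print("held-out-subject accuracy:", clf.score(X[test_idx], y[test_idx]))
```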
    Traditionally, emotion detection has relied on the assessment of visible signals such as facial expressions, speech, body gestures or eye movements. However, these methods can be unreliable as they do not effectively capture an individual’s internal emotions, so researchers are increasingly looking towards ‘invisible’ signals, such as the electrocardiogram (ECG), to understand emotions.

    ECG signals detect electrical activity in the heart, providing a link between the nervous system and heart rhythm. To date, the measurement of these signals has largely been performed using sensors placed on the body, but recently researchers have been looking towards non-invasive approaches that use radio waves to detect these signals.
    Methods to detect human emotions are often used by researchers involved in psychological or neuroscientific studies, but it is thought that these approaches could also have wider implications for the management of health and wellbeing.
    In the future, the research team plan to work with healthcare professionals and social scientists on public acceptance and ethical concerns around the use of this technology.
    Ahsan Noor Khan, a PhD student at Queen Mary and first author of the study, said: “Being able to detect emotions using wireless systems is a topic of increasing interest for researchers as it offers an alternative to bulky sensors and could be directly applicable in future ‘smart’ home and building environments. In this study, we’ve built on existing work using radio waves to detect emotions and show that the use of deep learning techniques can improve the accuracy of our results.”
    “We’re now looking to investigate how we could use low-cost existing systems, such as WiFi routers, to detect the emotions of large numbers of people gathered together, for instance in an office or work environment. This type of approach would enable us to classify people’s emotions on an individual basis while they perform routine activities. Moreover, we aim to improve the accuracy of emotion detection in a work environment using advanced deep learning techniques.”
    Professor Yang Hao, the project lead, added: “This research opens up many opportunities for practical applications, especially in areas such as human-robot interaction, healthcare and emotional wellbeing, which have become increasingly important during the current Covid-19 pandemic.”

  • Researchers create novel photonic chip

    Researchers at the George Washington University and the University of California, Los Angeles, have developed and demonstrated for the first time a photonic digital-to-analog converter that operates without leaving the optical domain. Such converters can advance next-generation data processing hardware, with high relevance for data centers, 6G networks, artificial intelligence and more.
    Current optical networks, through which most of the world’s data is transmitted, as well as many sensors, require digital-to-analog conversion to link digital systems to analog components.
    Using a silicon photonic chip platform, Volker J. Sorger, an associate professor of electrical and computer engineering at GW, and his colleagues have created a digital-to-analog converter that does not require the signal to be converted into the electrical domain. The device shows the potential to satisfy the demand for high data-processing capability while acting directly on optical data, interfacing with digital systems and performing in a compact footprint with both short signal delay and low power consumption.
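    As a rough illustration of the operation such a converter performs, the sketch below maps a digital word to a normalized analog level by summing binary-weighted contributions; in the photonic device the weighting and summation act on light rather than on electrical signals. The weighting scheme is a generic textbook construction, not the chip’s specific optical design.

```python
# Generic binary-weighted digital-to-analog conversion (conceptual sketch).
def dac(bits):
    """Map a list of bits (MSB first) to an analog level in the range 0..1."""
    n = len(bits)
    level = sum(b * 2 ** (n - 1 - i) for i, b in enumerate(bits))
    return level / (2 ** n - 1)  # normalize full-scale output to 1.0

for word in ([0, 0, 0, 0], [0, 1, 0, 1], [1, 1, 1, 1]):
    print(word, "->", round(dac(word), 3))
```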
    “We found a way to seamlessly bridge the gap that exists between these two worlds, analog and digital,” Sorger said. “This device is a key stepping stone for next-generation data processing hardware.”
    This work was funded by the Air Force Office of Scientific Research (FA9550-19-1-0277) and the Office of Naval Research (N00014-19-1-2595 of the Electronic Warfare Program).

    Story Source:
    Materials provided by George Washington University. Note: Content may be edited for style and length.

  • A new hands-off probe uses light to explore electron behavior in a topological insulator

    Topological insulators are among the most puzzling quantum materials — a class of materials whose electrons cooperate in surprising ways to produce unexpected properties. The edges of a topological insulator (TI) are electron superhighways where electrons flow with no loss, ignoring any impurities or other obstacles in their path, while the bulk of the material blocks electron flow.
    Scientists have studied these puzzling materials since their discovery just over a decade ago with an eye to harnessing them for things like quantum computing and information processing.
    Now researchers at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have invented a new, hands-off way to probe the fastest and most ephemeral phenomena within a TI and clearly distinguish what its electrons are doing on the superhighway edges from what they’re doing everywhere else.
    The technique takes advantage of a phenomenon called high harmonic generation, or HHG, which shifts laser light to higher energies and higher frequencies — much like pressing a guitar string produces a higher note — by shining it through a material. By varying the polarization of laser light going into a TI and analyzing the shifted light coming out, researchers got strong and separate signals that told them what was happening in each of the material’s two contrasting domains.
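    As a toy picture of the harmonic generation step described above (and not a model of the topological insulator itself), the sketch below drives a strongly nonlinear medium with an enveloped laser field and takes the spectrum of its response; peaks appear at odd multiples of the driving frequency. All parameters are illustrative assumptions.

```python
# Toy high harmonic generation: a saturating (odd) nonlinear response to a
# strong driving field re-emits light at odd multiples of the laser frequency.
import numpy as np

fs = 4096                 # samples per laser cycle (illustrative)
n_cycles = 16
t = np.linspace(0, n_cycles, n_cycles * fs, endpoint=False)  # time in laser cycles

envelope = np.sin(np.pi * t / n_cycles) ** 2
E_in = envelope * np.sin(2 * np.pi * t)        # driving laser field

response = np.tanh(3.0 * E_in)                 # strongly nonlinear medium response

spectrum = np.abs(np.fft.rfft(response)) ** 2
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])  # in units of the laser frequency

for h in (1, 3, 5, 7, 9):                      # odd harmonics dominate the output
    idx = np.argmin(np.abs(freqs - h))
    print(f"harmonic {h}: relative power {spectrum[idx] / spectrum.max():.2e}")
```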
    “What we found out is that the light coming out gives us information about the properties of the superhighway surfaces,” said Shambhu Ghimire, a principal investigator with the Stanford PULSE Institute at SLAC, where the work was carried out. “This signal is quite remarkable, and its dependence on the polarization of the laser light is dramatically different from what we see in conventional materials. We think we have a potentially novel approach for initiating and probing quantum behaviors that are supposed to be present in a broad range of quantum materials.”
    The research team reported the results in Physical Review A today.

    Light in, light out
    Starting in 2010, a series of experiments led by Ghimire and PULSE Director David Reis showed that HHG can be produced in ways that were previously thought unlikely or even impossible: by beaming laser light into a crystal, a frozen argon gas or an atomically thin semiconductor material. Another study described how shining a laser through ordinary glass can be used to generate attosecond laser pulses via HHG, which can in turn be used to observe and control the movements of electrons.
    In 2018, Denitsa Baykusheva, a Swiss National Science Foundation Fellow with a background in HHG research, joined the PULSE group as a postdoctoral researcher. Her goal was to study the potential for generating HHG in topological insulators — the first such study in a quantum material. “We wanted to see what happens to the intense laser pulse used to generate HHG,” she said. “No one had actually focused such a strong laser light on these materials before.”
    But midway through those experiments, the COVID-19 pandemic hit and the lab shut down in March 2020 for all but essential research. So the team had to think of other ways to make progress, Baykusheva said.
    “In a new area of research like this one, theory and experiment have to go hand in hand,” she explained. “Theory is essential for explaining experimental results and also predicting the most promising avenues for future experiments. So we all turned ourselves into theorists” — first working with pen and paper and then writing code and doing calculations to feed into computer models.

    An illuminating result
    To their surprise, the results predicted that circularly polarized laser light, whose waves spiral around the beam like a corkscrew, could be used to trigger HHG in topological insulators.
    “One of the interesting things we observed is that circularly polarized laser light is very efficient at generating harmonics from the superhighway surfaces of the topological insulator, but not from the rest of it,” Baykusheva said. “This is something very unique and specific to this type of material. It can be used to get information about electrons that travel the superhighways and those that don’t, and it can also be used to explore other types of materials that can’t be probed with linearly polarized light.”
    The results lay out a recipe for continuing to explore HHG in quantum materials, said Reis, who is a co-author of the study.
    “It’s remarkable that a technique that generates strong and potentially disruptive fields, which takes electrons in the material and jostles them around and uses them to probe the properties of the material itself, can give you such a clear and robust signal about the material’s topological states,” he said.
    “The fact that we can see anything at all is amazing, not to mention the fact that we could potentially use that same light to change the material’s topological properties.”
    Experiments at SLAC have resumed on a limited basis, Reis added, and the results of the theoretical work have given the team new confidence that they know exactly what they are looking for.
    Researchers from the Max Planck POSTECH/KOREA Research Initiative also contributed to this report. Major funding for the study came from the DOE Office of Science and the Swiss National Science Foundation.

  • Highly deformable piezoelectric nanotruss for tactile electronics

    With the importance of non-contact environments growing due to COVID-19, tactile electronic devices using haptic technology are gaining traction as new mediums of communication.
    Haptic technology is being applied in a wide array of fields such as robotics and interactive displays, and haptic gloves are being used for augmented information and communication technology. Efficient piezoelectric materials that can convert various mechanical stimuli into electrical signals, and vice versa, are a prerequisite for advancing high-performing haptic technology.
    A research team led by Professor Seungbum Hong confirmed the potential of tactile devices by developing ceramic piezoelectric materials that are three times more deformable than their bulk counterparts. For the fabrication of highly deformable nanomaterials, the research team built a zinc oxide hollow nanostructure using proximity-field nanopatterning and atomic layer deposition. The piezoelectric coefficient was measured to be approximately 9.2 pm/V, and the nanopillar compression test showed an elastic strain limit of approximately 10%, more than three times greater than that of bulk zinc oxide.
    Piezoelectric ceramics have a high piezoelectric coefficient but a low elastic strain limit, whereas the opposite is true for piezoelectric polymers. It has therefore been very challenging to obtain both a high piezoelectric coefficient and a high elastic strain limit in the same material. To break the elastic limit of piezoelectric ceramics, the research team introduced a 3D truss-like hollow nanostructure with nanometer-scale thin walls.
    According to the Griffith criterion, the fracture strength of a material is inversely proportional to the square root of the preexisting flaw size. However, a large flaw is less likely to occur in a small structure, which, in turn, enhances the strength of the material. Therefore, implementing the form of a 3D truss-like hollow nanostructure with nanometer-scale thin walls can extend the elastic limit of the material. Furthermore, a monolithic 3D structure can withstand large strains in all directions while simultaneously preventing the loss from the bottleneck. Previously, the fracture property of piezoelectric ceramic materials was difficult to control, owing to the large variance in crack sizes. However, the research team structurally limited the crack sizes to manage the fracture properties.
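    The Griffith scaling quoted above can be made concrete with a short worked example: fracture strength goes as fracture toughness divided by the square root of the largest flaw size, so confining flaws to nanometre-scale walls raises the stress (and strain) the structure can survive before breaking. The toughness and flaw sizes below are generic illustrative values, not measurements from the study.

```python
# Griffith-type scaling: strength ~ K_Ic / (Y * sqrt(pi * a)), with a the flaw size.
import math

K_Ic = 1.0e6   # fracture toughness in Pa*sqrt(m) (illustrative, ~1 MPa*sqrt(m))
Y = 1.0        # dimensionless geometry factor (illustrative)

def fracture_strength(flaw_size_m):
    return K_Ic / (Y * math.sqrt(math.pi * flaw_size_m))

for a in (1e-6, 1e-7, 1e-8):  # 1 um, 100 nm and 10 nm flaws
    print(f"flaw {a * 1e9:7.1f} nm -> strength {fracture_strength(a) / 1e9:5.2f} GPa")
```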
    Professor Hong’s results demonstrate the potential for developing highly deformable ceramic piezoelectric materials by improving the elastic limit using a 3D hollow nanostructure. Since zinc oxide has a relatively low piezoelectric coefficient compared to other piezoelectric ceramics, applying the proposed structure to those materials promises even better results in terms of piezoelectric activity.
    “With the advent of the non-contact era, the importance of emotional communication is increasing. Through the development of novel tactile interaction technologies, in addition to the current visual and auditory communication, humankind will enter a new era where they can communicate with anyone using all five senses regardless of location as if they are with them in person,” Professor Hong said.
    “While additional research must be conducted to realize the application of the proposed designs for haptic enhancement devices, this study holds high value in that it resolves one of the most challenging issues in the use of piezoelectric ceramics, specifically opening new possibilities for their application by overcoming their mechanical constraints.”

  • Beyond qubits: Next big step to scale up quantum computing

    Scientists and engineers at the University of Sydney and Microsoft Corporation have opened the next chapter in quantum technology with the invention of a single chip that can generate control signals for thousands of qubits, the building blocks of quantum computers.
    “To realise the potential of quantum computing, machines will need to operate thousands if not millions of qubits,” said Professor David Reilly, a designer of the chip who holds a joint position with Microsoft and the University of Sydney.
    “The world’s biggest quantum computers currently operate with just 50 or so qubits,” he said. “This small scale is partly because of limits to the physical architecture that control the qubits.”
    “Our new chip puts an end to those limits.”
    The results have been published in Nature Electronics.
    Most quantum systems require quantum bits, or qubits, to operate at temperatures close to absolute zero (-273.15 degrees Celsius). This is to prevent them losing their ‘quantumness’, the character of matter or light that quantum computers need to perform their specialised computations.

    In order for quantum devices to do anything useful, they need instructions. That means sending and receiving electronic signals to and from the qubits. With current quantum architecture, that involves a lot of wires.
    “Current machines create a beautiful array of wires to control the signals; they look like an inverted gilded bird’s nest or chandelier. They’re pretty, but fundamentally impractical. It means we can’t scale the machines up to perform useful calculations. There is a real input-output bottleneck,” said Professor Reilly, also a Chief Investigator at the ARC Centre for Engineered Quantum Systems (EQUS).
    Microsoft Senior Hardware Engineer, Dr Kushal Das, a joint inventor of the chip, said: “Our device does away with all those cables. With just two wires carrying information as input, it can generate control signals for thousands of qubits.
    “This changes everything for quantum computing.”
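    To put the input-output bottleneck in rough numbers, the sketch below compares direct wiring, assuming a couple of control lines per qubit (an illustrative figure, not one taken from the paper), with the two input wires the chip described here requires.

```python
# Back-of-the-envelope wire counts: direct wiring vs. the two-input control chip.
LINES_PER_QUBIT = 2  # assumed for illustration (e.g. one drive + one bias line each)

for n_qubits in (50, 1_000, 1_000_000):
    direct = n_qubits * LINES_PER_QUBIT
    print(f"{n_qubits:>9,} qubits: ~{direct:>9,} wires if wired directly vs 2 input wires")
```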
    The control chip was developed at the Microsoft Quantum Laboratories at the University of Sydney, a unique industry-academic partnership that is changing the way scientists tackle engineering challenges.

    “Building a quantum computer is perhaps the most challenging engineering task of the 21st century. This can’t be achieved working with a small team in a university laboratory in a single country but needs the scale afforded by a global tech giant like Microsoft,” Professor Reilly said.
    “Through our partnership with Microsoft, we haven’t just suggested a theoretical architecture to overcome the input-output bottleneck, we’ve built it.
    “We have demonstrated this by designing a custom silicon chip and coupling it to a quantum system,” he said. “I’m confident to say this is the most advanced integrated circuit ever built to operate at deep cryogenic temperatures.”
    Once realised at scale, quantum computers promise to revolutionise information technology by solving problems beyond the scope of classical computers in fields as diverse as cryptography, medicine, finance, artificial intelligence and logistics.
    POWER BUDGET
    Quantum computers are at a similar stage that classical computers were in the 1940s. Machines like ENIAC, the world’s first electronic computer, required rooms of control systems to achieve any useful function.
    It has taken decades to overcome the scientific and engineering challenges that now allow billions of transistors to fit into your mobile phone.
    “Our industry is facing perhaps even bigger challenges to take quantum computing beyond the ENIAC stage,” Professor Reilly said.
    “We need to engineer highly complex silicon chips that operate at 0.1 Kelvin,” he said. “That’s an environment 30 times colder than deep space.”
    Dr Sebastian Pauka’s doctoral research at the University of Sydney encompassed much of the work to interface quantum devices with the chip. He said: “Operating at such cold temperatures means we have an incredibly low power budget. If we try to put more power into the system, we overheat the whole thing.”
    In order to achieve their result, the scientists at Sydney and Microsoft built the most advanced integrated circuit to operate at cryogenic temperatures.
    “We have done this by engineering a system that operates in close proximity to the qubits without disturbing their operations,” Professor Reilly said.
    “Current control systems for qubits are removed metres away from the action, so to speak. They exist mostly at room temperature.
    “In our system we don’t have to come off the cryogenic platform. The chip is right there with the qubits. This means lower power and higher speeds. It’s a real control system for quantum technology.”
    YEARS OF ENGINEERING
    “Working out how to control these devices takes years of engineering development,” Professor Reilly said. “For this device we started four years ago when the University of Sydney started its partnership with Microsoft, which represents the single biggest investment in quantum technology in Australia.
    “We built lots of models and design libraries to capture the behaviour of transistors at deep cryogenic temperatures. Then we had to build devices, get them verified, characterised and finally connect them to qubits to see them work in practice.”
    Vice-Chancellor and Principal of the University of Sydney, Professor Stephen Garton, said: “The whole university community is proud of Professor Reilly’s success and we look forward to many years of continued partnership with Microsoft.”
    Professor Reilly said the field has now fundamentally changed. “It’s not just about ‘here is my qubit’. It’s about how you build all the layers and all the tech to build a real machine.
    “Our partnership with Microsoft allows us to work with academic rigour, with the benefit of seeing our results quickly put into practice.”
    The Deputy Vice-Chancellor (Research), Professor Duncan Ivison, said: “Our partnership with Microsoft has been about realising David Reilly’s inspired vision to enable quantum technology. It’s great to see that vision becoming a reality.”
    Professor Reilly said: “If we had remained solely in academia this chip would never have been built.”
    The Australian scientist said he isn’t stopping there.
    “We are just getting started on this new wave of quantum innovation,” he said. “The great thing about the partnership is we don’t just publish a paper and move on. We can now continue with the blueprint to realise quantum technology at the industrial scale.”

  • Desktop PCs run simulations of mammals' brains

    University of Sussex academics have established a method of turbocharging desktop PCs to give them the same capability as supercomputers worth tens of millions of pounds.
    Dr James Knight and Prof Thomas Nowotny from the University of Sussex’s School of Engineering and Informatics used the latest Graphics Processing Units (GPUs) to give a single desktop PC the capacity to simulate brain models of almost unlimited size.
    The researchers believe the innovation, detailed in Nature Computational Science, will make it possible for many more researchers around the world to carry out research on large-scale brain simulation, including the investigation of neurological disorders.
    Currently, the cost of supercomputers is so prohibitive they are only affordable to very large institutions and government agencies and so are not accessible for large numbers of researchers.
    As well as shaving tens of millions of pounds off the cost of a supercomputer, the simulations run on the desktop PC require approximately 10 times less energy, bringing a significant sustainability benefit too.
    Dr Knight, Research Fellow in Computer Science at the University of Sussex, said: “I think the main benefit of our research is one of accessibility. Outside of these very large organisations, academics typically have to apply to get even limited time on a supercomputer for a particular scientific purpose. This is quite a high barrier for entry which is potentially holding back a lot of significant research.

    “Our hope for our own research now is to apply these techniques to brain-inspired machine learning so that we can help solve problems that biological brains excel at but which are currently beyond simulations.
    “As well as the advances we have demonstrated in procedural connectivity in the context of GPU hardware, we also believe that there is also potential for developing new types of neuromorphic hardware built from the ground up for procedural connectivity. Key components could be implemented directly in hardware which could lead to even more truly significant compute time improvements.”
    The research builds on the work of US researcher Eugene Izhikevich, who pioneered a similar method for large-scale brain simulation in 2006.
    At the time, computers were too slow for the method to be widely applicable, meaning that simulating large-scale brain models has until now only been possible for the minority of researchers privileged to have access to supercomputer systems.
    The researchers applied Izhikevich’s technique to a modern GPU, with approximately 2,000 times the computing power available 15 years ago, to create a cutting-edge model of a macaque’s visual cortex (with 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses) which previously could only be simulated on a supercomputer.
    The researchers’ GPU accelerated spiking neural network simulator uses the large amount of computational power available on a GPU to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered — removing the need to store connectivity data in memory.
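    A minimal sketch of this procedural approach is shown below: a neuron’s outgoing connections and weights are regenerated deterministically from a seed whenever that neuron spikes, so no connectivity matrix has to be stored. The population size, connection probability and weight distribution are illustrative assumptions, not parameters of the authors’ simulator.

```python
# Procedural connectivity sketch: re-derive synapses on demand instead of storing them.
import numpy as np

N_POST = 100_000      # neurons in the target population (illustrative)
P_CONNECT = 0.01      # connection probability (illustrative)
GLOBAL_SEED = 1234

def outgoing_synapses(pre_neuron_id):
    """Regenerate this neuron's targets and weights from its own seed."""
    rng = np.random.default_rng(GLOBAL_SEED + pre_neuron_id)
    n_targets = rng.binomial(N_POST, P_CONNECT)
    targets = rng.choice(N_POST, size=n_targets, replace=False)
    weights = rng.normal(0.5, 0.1, size=n_targets)
    return targets, weights

# The same neuron always yields the same synapses, which is why none need storing.
t1, w1 = outgoing_synapses(42)
t2, w2 = outgoing_synapses(42)
assert np.array_equal(t1, t2) and np.array_equal(w1, w2)
print(f"neuron 42 projects to {t1.size} targets, regenerated rather than stored")
```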
    Initialization of the researchers’ model took six minutes, and simulation of each biological second took 7.7 min in the ground state and 8.4 min in the resting state, up to 35% less time than a previous supercomputer simulation. In that earlier 2018 simulation, run on one rack of an IBM Blue Gene/Q supercomputer, initialization of the model took around five minutes and simulating one second of biological time took approximately 12 minutes.
    Prof Nowotny, Professor of Informatics at the University of Sussex, said: “Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10¹² synaptic connections, meaning that simulations require several terabytes of data — an unrealistic memory requirement for a single desktop machine.
    “This research is a game-changer for computational neuroscience and AI researchers who can now simulate brain circuits on their local workstations, but it also allows people outside academia to turn their gaming PC into a supercomputer and run large neural networks.”

    Story Source:
    Materials provided by University of Sussex. Original written by Neil Vowles. Note: Content may be edited for style and length.

  • Team develops portable device that creates 3D images of skin in 10 minutes

    A team from Nanyang Technological University, Singapore (NTU Singapore) has developed a portable device that produces high-resolution 3D images of human skin within 10 minutes.
    The team said the portable skin mapping (imaging) device could be used to assess the severity of skin conditions, such as eczema and psoriasis.
    3D skin mapping could be useful to clinicians, as most equipment used to assess skin conditions only provides 2D images of the skin surface. As the device also maps the ridges and grooves of the skin to a depth of up to 2 mm, it could also help with monitoring wound healing.
    The device presses a specially devised film onto the subject’s skin to obtain an imprint of up to 5 by 5 centimetres, which is then subjected to an electric charge, generating a 3D image.
    The researchers designed and 3D printed a prototype of their device using polylactic acid (PLA), a biodegradable bioplastic. The battery-operated device, which measures 7 cm by 10 cm, weighs only 100 grams.
    The made-in-NTU prototype was developed at a fraction of the cost of devices with comparable technologies, such as optical coherence tomography (OCT) machines, which can cost thousands of dollars and weigh up to 30 kilogrammes.

    Assistant Professor Grzegorz Lisak from NTU’s School of Civil and Environmental Engineering, who led the research, said: “Our non-invasive, simple and inexpensive device could be used to complement current methods of diagnosing and treating skin diseases. In rural areas that do not have ready access to healthcare, non-medically trained personnel can make skin maps using the device and send them to physicians for assessment.”
    Providing an independent comment on how the device may be useful to clinicians, Dr Yew Yik Weng, a Consultant Dermatologist at the National Skin Centre and an Assistant Professor at NTU’s Lee Kong Chian School of Medicine, said: “The technology is an interesting way to map the surface texture of human skin. It could be a useful method to map skin texture and wound healing in a 3D manner, which is especially important in research and clinical trials. As the device is battery-operated and portable, there is a lot of potential in its development into a tool for point of care assessment in clinical settings.”
    Asst Prof Dr Yew added: “The device could be especially useful in studies involving wound healing, as we are currently lacking a tool that maps the length and the depth of skin ridges. Currently, we rely on photographs or measurements in our trials which could only provide a 2D assessment.”
    First author of the study, Mr Fu Xiaoxu, a PhD student from NTU’s School of Civil and Environmental Engineering, said: “The 3D skin mapping device is simple to operate. On top of that, a 1.5V dry battery is all that is necessary to run the device. It is an example of a basic, yet very effective application of electrochemistry, as no expensive electronic hardware is required.”
    Published in the scientific journal Analytica Chimica Acta this month, the technology was developed by Asst Prof Lisak, who is also Director of the Residues & Resource Reclamation Centre at the Nanyang Environment and Water Research Institute (NEWRI), and his PhD student, Mr Fu Xiaoxu.

    The ‘golden’ solution to 3D skin mapping
    The key component of the NTU device is a polymer called PEDOT:PSS, commonly used in solar panels to convert light into electricity. However, the team found a different use for its electrical conductivity — to reproduce skin patterns on gold-coated film. Gold is used as it has excellent electrical conductivity and flexibility.
    To use the device, a person pushes a button to press the gold-coated film onto the subject’s skin to obtain an imprint. This causes sebum, an oily substance produced by the skin, to be transferred onto the film, creating an imprint of the skin surface.
    Next, the imprint of the skin is transferred to the portable device where a set of electrodes is immersed in a solution. With another push of a button, the device triggers a flow of electric charge, causing PEDOT:PSS to be deposited on the surfaces of the gold-coated film in areas that are not covered with sebum. This results in a high-resolution 3D map of the skin, which reflects the ridges and grooves of the subject’s skin.
    Using pig skin as a model, the researchers demonstrated that the technology was able to map the pattern of various wounds such as punctures, lacerations, abrasions, and incisions.
    The team also showed that even the complex network of wrinkles on the back of a human hand could be captured on the film. The thin film is also flexible enough to map features on uneven skin areas, such as the creases of an elbow and fingerprints.
    Asst Prof Lisak added: “The device has also proven to be effective in lifting fingerprints and gives a high-resolution 3D image of their characteristics.”
    Commenting on the potential uses of the device, Asst Prof Dr Yew added: “The device may aid in fingerprint identification, which is commonly performed in forensic analysis. The device could offer a higher degree of accuracy when it comes to differentiating between similar prints, due to the 3D nature of its imagery.”
    To further validate its efficacy, the team is exploring conducting clinical trials later this year to test the feasibility of their device, as well as other potential therapeutic uses.

  • Say goodbye to the dots and dashes to enhance optical storage media

    Purdue University innovators have created technology aimed at replacing Morse code with colored “digital characters” to modernize optical storage. They are confident the advancement will help with the explosion of remote data storage during and after the COVID-19 pandemic.
    Morse code has been around since the 1830s. The familiar dots-and-dashes system may seem antiquated given the amount of information that needs to be acquired, digitally archived and rapidly accessed every day. But those same basic dots and dashes are still used in many optical media to aid in storage.
    A new technology developed at Purdue is aimed at modernizing optical digital storage. This advancement allows more data to be stored and allows that data to be read at a quicker rate. The research is published in Laser & Photonics Reviews.
    Rather than using the traditional dots and dashes common in these technologies, the Purdue innovators encode information in the angular position of tiny antennas, allowing them to store more data per unit area.
    “The storage capacity greatly increases because it is only defined by the resolution of the sensor by which you can determine the angular positions of antennas,” said Alexander Kildishev, an associate professor of electrical and computer engineering in Purdue’s College of Engineering. “We map the antenna angles into colors, and the colors are decoded.”
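    A rough way to see that capacity argument: if the readout optics can resolve D distinct angular positions of each antenna, every antenna carries log2(D) bits, compared with the single bit of a dot-or-dash mark. The resolutions in the sketch below are assumed values for illustration, not figures from the paper.

```python
# Bits stored per antenna as a function of how many orientations the sensor resolves.
import math

for resolvable_angles in (2, 16, 64, 256):
    bits = math.log2(resolvable_angles)
    print(f"{resolvable_angles:3d} resolvable angles -> {bits:4.1f} bits per antenna")
```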
    Advances like this have steadily increased the capacity of optical digital storage. Not all optical data storage media need to be laser-writable or rewritable.
    The majority of CDs, DVDs, and Blu-Ray discs are “stamped” and not recordable at all. This class of optical media is an essential part of disposable cold storage with a rapid access rate, long-lasting shelf life, and excellent archival capabilities.
    The making of a Blu-Ray disc is based on a pressing process, in which a silicon stamper replicates the same dot-and-dash format that the final disc will carry. A thin nickel coating is then added to obtain a negative stamp. Blu-Rays, as well as DVDs and CDs, are simply mass-produced this way.
    “Our metasurface-based ‘optical storage’ is just like that,” said Di Wang, a former Ph.D. student who fabricated the prototype structure. “Whereas in our demo prototype, the information is ‘burnt in’ by electron-beam lithography, it could be replicated by a more scalable manufacturing process in the final product.”
    This new development not only allows for more information to be stored but also increases the readout rate.
    “You can put four sensors nearby, and each sensor would read its own polarization of light,” Kildishev said. “This helps increase the speed of readout of information compared to the use of a single sensor with dots and dashes.”
    Future applications for this technology include security tagging and cryptography. To continue developing these capabilities, the team is looking to partner with interested parties in the industry.

    Story Source:
    Materials provided by Purdue University. Original written by Chris Adam. Note: Content may be edited for style and length.