More stories


    Researchers leverage shadows to model 3D scenes, including objects blocked from view

    Imagine driving through a tunnel in an autonomous vehicle, but unbeknownst to you, a crash has stopped traffic up ahead. Normally, you’d need to rely on the car in front of you to know you should start braking. But what if your vehicle could see around the car ahead and apply the brakes even sooner?
    Researchers from MIT and Meta have developed a computer vision technique that could someday enable an autonomous vehicle to do just that.
    They have introduced a method that creates physically accurate, 3D models of an entire scene, including areas blocked from view, using images from a single camera position. Their technique uses shadows to determine what lies in obstructed portions of the scene.
    They call their approach PlatoNeRF, based on Plato’s allegory of the cave, a passage from the Greek philosopher’s “Republic” in which prisoners chained in a cave discern the reality of the outside world based on shadows cast on the cave wall.
    By combining lidar (light detection and ranging) technology with machine learning, PlatoNeRF can generate more accurate reconstructions of 3D geometry than some existing AI techniques. Additionally, PlatoNeRF is better at smoothly reconstructing scenes where shadows are hard to see, such as those with high ambient light or dark backgrounds.
    In addition to improving the safety of autonomous vehicles, PlatoNeRF could make AR/VR headsets more efficient by enabling a user to model the geometry of a room without the need to walk around taking measurements. It could also help warehouse robots find items in cluttered environments faster.
    “Our key idea was taking these two things that have been done in different disciplines before and pulling them together — multibounce lidar and machine learning. It turns out that when you bring these two together, that is when you find a lot of new opportunities to explore and get the best of both worlds,” says Tzofi Klinghoffer, an MIT graduate student in media arts and sciences, affiliate of the MIT Media Lab, and lead author of a paper on PlatoNeRF.

    Klinghoffer wrote the paper with his advisor, Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; senior author Rakesh Ranjan, a director of AI research at Meta Reality Labs; as well as Siddharth Somasundaram at MIT, and Xiaoyu Xiang, Yuchen Fan, and Christian Richardt at Meta. The research will be presented at the Conference on Computer Vision and Pattern Recognition.
    Shedding light on the problem
    Reconstructing a full 3D scene from one camera viewpoint is a complex problem.
    Some machine-learning approaches employ generative AI models that try to guess what lies in the occluded regions, but these models can hallucinate objects that aren’t really there. Other approaches attempt to infer the shapes of hidden objects using shadows in a color image, but these methods can struggle when shadows are hard to see.
    For PlatoNeRF, the MIT researchers built off these approaches using a new sensing modality called single-photon lidar. Lidars map a 3D scene by emitting pulses of light and measuring the time it takes that light to bounce back to the sensor. Because single-photon lidars can detect individual photons, they provide higher-resolution data.
    The researchers use a single-photon lidar to illuminate a target point in the scene. Some light bounces off that point and returns directly to the sensor. However, most of the light scatters and bounces off other objects before returning to the sensor. PlatoNeRF relies on these second bounces of light.

    By calculating how long it takes light to bounce twice and then return to the lidar sensor, PlatoNeRF captures additional information about the scene, including depth. The second bounce of light also contains information about shadows.
    The system traces the secondary rays of light — those that bounce off the target point to other points in the scene — to determine which points lie in shadow (due to an absence of light). Based on the location of these shadows, PlatoNeRF can infer the geometry of hidden objects.
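The timing geometry behind this can be illustrated in a few lines. This is a rough sketch, not the paper's actual pipeline; the example time, distance, and the symmetric return-leg assumption are all invented for illustration:

```python
# Hypothetical sketch of the two-bounce time-of-flight geometry described
# above (illustrative values, not from the PlatoNeRF paper).
C = 299_792_458.0  # speed of light, m/s

def two_bounce_path_length(t_seconds):
    """Total distance travelled by light that bounces twice before
    returning to the co-located lidar emitter/sensor."""
    return C * t_seconds

def second_bounce_distance(t_seconds, d_first):
    """Given the known distance d_first from the sensor to the illuminated
    point, recover the leg from that point to the second surface.
    Path: sensor -> point (d_first) -> surface (d2) -> sensor (d3);
    we assume the simple symmetric case d2 == d3."""
    remaining = two_bounce_path_length(t_seconds) - d_first
    return remaining / 2.0

# Example: a photon returns after 40 ns; the illuminated point is 2 m away.
t = 40e-9
print(round(two_bounce_path_length(t), 3))  # ~11.992 m total path
print(round(second_bounce_distance(t, 2.0), 3))
```

Points whose expected two-bounce returns never arrive are the ones in shadow, which is the cue the system uses to carve out hidden geometry.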
    The lidar sequentially illuminates 16 points, capturing multiple images that are used to reconstruct the entire 3D scene.
    “Every time we illuminate a point in the scene, we are creating new shadows. Because we have all these different illumination sources, we have a lot of light rays shooting around, so we are carving out the region that is occluded and lies beyond the visible eye,” Klinghoffer says.
    A winning combination
    Key to PlatoNeRF is the combination of multibounce lidar with a special type of machine-learning model known as a neural radiance field (NeRF). A NeRF encodes the geometry of a scene into the weights of a neural network, which gives the model a strong ability to interpolate, or estimate, novel views of a scene.
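The core NeRF idea, a network whose weights implicitly store a scene and which is queried per point, can be sketched minimally. The weights below are random placeholders, so the "scene" is meaningless; only the interface is illustrative:

```python
# Minimal numpy sketch of the NeRF interface: a small MLP mapping a 3D
# point and viewing direction to colour and volume density. Random
# weights stand in for a trained scene.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 32))   # input: (x, y, z) + view direction
W2 = rng.normal(size=(32, 4))   # output: (r, g, b, density)

def query_field(point, view_dir):
    x = np.concatenate([point, view_dir])
    h = np.tanh(x @ W1)               # hidden layer
    out = h @ W2
    rgb = 1 / (1 + np.exp(-out[:3]))  # colours squashed to [0, 1]
    sigma = np.log1p(np.exp(out[3]))  # non-negative volume density
    return rgb, sigma

rgb, sigma = query_field(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0]))
print(rgb.shape, sigma >= 0)
```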
    This ability to interpolate also leads to highly accurate scene reconstructions when combined with multibounce lidar, Klinghoffer says.
    “The biggest challenge was figuring out how to combine these two things. We really had to think about the physics of how light is transporting with multibounce lidar and how to model that with machine learning,” he says.
    They compared PlatoNeRF to two common alternative methods, one that only uses lidar and the other that only uses a NeRF with a color image.
    They found that their method was able to outperform both techniques, especially when the lidar sensor had lower resolution. This would make their approach more practical to deploy in the real world, where lower resolution sensors are common in commercial devices.
    “About 15 years ago, our group invented the first camera to ‘see’ around corners, which works by exploiting multiple bounces of light, or ‘echoes of light.’ Those techniques used special lasers and sensors, and three bounces of light. Since then, lidar technology has become more mainstream, which led to our research on cameras that can see through fog. This new work uses only two bounces of light, which means the signal-to-noise ratio is very high, and 3D reconstruction quality is impressive,” Raskar says.
    In the future, the researchers want to try tracking more than two bounces of light to see how that could improve scene reconstructions. In addition, they are interested in applying more deep learning techniques and combining PlatoNeRF with color image measurements to capture texture information.
    Further information: https://openaccess.thecvf.com/content/CVPR2024/html/Klinghoffer_PlatoNeRF_3D_Reconstruction_in_Platos_Cave_via_Single-View_Two-Bounce_Lidar_CVPR_2024_paper.html


    Breakthrough may clear major hurdle for quantum computers

    The potential of quantum computers is currently thwarted by a trade-off problem. Quantum systems that can carry out complex operations are less tolerant to errors and noise, while systems that are more protected against noise are harder and slower to compute with. Now a research team from Chalmers University of Technology, in Sweden, has created a unique system that combats the dilemma, thus paving the way for longer computation time and more robust quantum computers.
    For the impact of quantum computers to be realised in society, quantum researchers first need to overcome some major obstacles. So far, errors and noise stemming from, for example, electromagnetic interference or magnetic fluctuations cause the sensitive qubits to lose their quantum states — and with them, their ability to continue the calculation. The amount of time a quantum computer can work on a problem is thus still limited. Additionally, for a quantum computer to tackle complex problems, researchers need a way to control its quantum states. Like a car without a steering wheel, quantum states are of little use without an efficient control system to manipulate them.
    However, the research field faces a trade-off: quantum systems that allow for efficient error correction and longer computation times tend to be deficient in their ability to control quantum states — and vice versa. Now a research team at Chalmers University of Technology has managed to find a way around this dilemma.
    “We have created a system that enables extremely complex operations on a multi-state quantum system, at an unprecedented speed,” says Simone Gasparinetti, leader of the 202Q-lab at Chalmers University of Technology and senior author of the study.
    Deviates from the two-quantum-state principle
    While the building blocks of a classical computer, bits, have either the value 1 or 0, the most common building blocks of quantum computers, qubits, can have the value 1 and 0 at the same time — in any combination. The phenomenon is called superposition and is one of the key ingredients that enable a quantum computer to perform simultaneous calculations, with enormous computing potential as a result. However, qubits encoded in physical systems are extremely sensitive to errors, which has led researchers in the field to search for ways to detect and correct these errors.
    The system created by the Chalmers researchers is based on so-called continuous-variable quantum computing and uses harmonic oscillators, a type of microscopic component, to encode information linearly. The oscillators used in the study consist of thin strips of superconducting material patterned on an insulating substrate to form microwave resonators, a technology fully compatible with the most advanced superconducting quantum computers. The method is previously known in the field and departs from the two-quantum-state principle by offering a much larger number of physical quantum states, making quantum computers significantly better equipped against errors and noise.
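The difference in available states can be made concrete with state vectors. This is an illustrative toy, not the Chalmers system; the truncation to 10 levels and the particular amplitudes are invented:

```python
# Toy comparison: a qubit's 2-dimensional state space versus the many
# levels of a harmonic-oscillator mode (truncated here to 10).
import numpy as np

# Qubit: equal superposition of |0> and |1>
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

# Oscillator mode: a toy state spread over its first four number states.
levels = 10
osc = np.zeros(levels)
osc[:4] = 0.5  # 4 * 0.5**2 = 1, so the state is normalised

print(qubit.size, osc.size)  # 2 vs 10 basis states available
print(np.isclose(np.sum(np.abs(qubit)**2), 1.0),
      np.isclose(np.sum(np.abs(osc)**2), 1.0))
```

The extra levels are what give oscillator-based encodings their redundancy against errors.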
    “Think of a qubit as a blue lamp that, quantum mechanically, can be both switched on and off simultaneously. In contrast, a continuous variable quantum system is like an infinite rainbow, offering a seamless gradient of colours. This illustrates its ability to access a vast number of states, providing far richer possibilities than the qubit’s two states,” says Axel Eriksson, researcher in quantum technology at Chalmers University of Technology and lead author of the study.

    Combats trade-off problem between operation complexity and fault tolerance
    Although continuous-variable quantum computing based on harmonic oscillators enables improved error correction, its linear nature does not allow for complex operations to be carried out. Attempts to combine harmonic oscillators with control systems such as superconducting quantum systems have been made, but have been hindered by the so-called Kerr effect, which scrambles the many quantum states offered by the oscillator, canceling the desired advantage.
    By putting a control system device inside the oscillator, the Chalmers researchers were able to circumvent the Kerr effect and combat the trade-off problem. The system preserves the advantages of the harmonic oscillators, such as a resource-efficient path towards fault tolerance, while enabling accurate control of quantum states at high speed. The system is described in an article published in Nature Communications and may pave the way for more robust quantum computers.
    “Our community has often tried to keep superconducting elements away from quantum oscillators, not to scramble the fragile quantum states. In this work, we have challenged this paradigm. By embedding a controlling device at the heart of the oscillator we were able to avoid scrambling the many quantum states while at the same time being able to control and manipulate them. As a result, we demonstrated a novel set of gate operations performed at very high speed,” says Simone Gasparinetti.


    Advanced artificial intelligence: A revolution for sustainable agriculture

    The rise of edge artificial intelligence (edge AI) could well mark the beginning of a new era for sustainable agriculture. A recent study proposes a roadmap for integrating this technology into farming practices. The aim? To improve the efficiency, quality and safety of agricultural production, while addressing a range of environmental, social and economic challenges.
    One of the main objectives of sustainable agricultural practices is to efficiently feed a growing world population. Digital technology, such as artificial intelligence (AI), can bring substantial benefits to agriculture by improving farming practices, increasing the efficiency, yield, quality and safety of agricultural production. Edge AI refers to the implementation of artificial intelligence in an edge computing environment. “This technology enables calculations to be carried out close to where the data is collected, rather than in a centralized cloud computing facility or off-site datacenter,” explains Moussa El Jarroudi, researcher in Crop Environment and Epidemiology at the University of Liège (Belgium). “This means devices can make smarter decisions faster, without connecting to the cloud or off-site datacenters.”
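The "decide where the data is collected" idea can be sketched with a toy rule running on the device itself, with no network round-trip. The sensor names and thresholds below are invented for illustration, not from the study:

```python
# Toy edge-AI sketch: the decision rule lives on the node attached to the
# sensor, so no cloud round-trip is needed. Thresholds are illustrative.
def irrigation_decision(soil_moisture_pct, rain_forecast_pct):
    """Runs locally on an edge node attached to a soil-moisture probe."""
    if soil_moisture_pct < 20 and rain_forecast_pct < 30:
        return "irrigate"
    return "hold"

print(irrigation_decision(15, 10))  # dry soil, no rain expected -> irrigate
print(irrigation_decision(45, 10))  # moist soil -> hold
```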
    In a new study published in the scientific journal Nature Sustainability, a scientific team led by Moussa El Jarroudi demonstrates how to overcome these challenges and how AI can be practically integrated into agricultural systems to meet the growing needs of sustainable food production. “Deploying AI in agriculture is not without its challenges. It requires innovative solutions and the right infrastructure. Experts like Professor Said Hamdioui of Delft University of Technology have developed low-energy systems capable of operating autonomously.” Although challenges remain, particularly in the context of climate change, the prospects opened up by these advances are promising.
    The University of Liège played a crucial role in this study, contributing cutting-edge resources and expertise in the fields of artificial intelligence and sustainable agriculture. ULiège researchers have developed innovative edge AI solutions and conducted in-depth analyses of their potential impact on agricultural practices.
    A new era for agriculture
    “The results of our study are part of a growing trend to integrate advanced technologies into agriculture to achieve sustainability goals,” continues Benoît Mercatoris, co-author of the study and agronomy researcher at ULiège. “The adoption of edge AI can transform agricultural practices by increasing resource efficiency, improving crop quality and reducing environmental impacts. This technology is positioning itself as an essential pillar for the future of sustainable agriculture.”
    The applications are vast: improving crop management with real-time data, optimizing the use of resources such as water and fertilizers, reducing post-harvest losses and increasing food safety, or enhancing monitoring and response capabilities to changing weather conditions. This study paves the way for smarter, more environmentally-friendly agriculture, thanks to edge AI. A technological revolution that could well transform the way we produce and consume.


    Towards wider 5G network coverage: Novel wirelessly powered relay transceiver

    A novel 256-element wirelessly powered transceiver array for non-line-of-sight 5G communication, featuring efficient wireless power transmission and high-power conversion efficiency, has been designed by scientists at Tokyo Tech. The innovative design can enhance the 5G network coverage even to places with link blockage, improving flexibility and coverage area, and potentially making high-speed, low-latency communication more accessible.
    Millimeter wave 5G communication, which uses extremely high-frequency radio signals (24 to 100 GHz), is a promising technology for next-generation wireless communication, exhibiting high speed, low latency, and large network capacity. However, current 5G networks face two key challenges. The first one is the low signal-to-noise ratio (SNR). A high SNR is crucial for good communication. Another challenge is link blockage, which refers to the disruption in signal between transmitter and receiver due to obstacles such as buildings.
    Beamforming is a key technique that improves SNR in long-distance millimeter-wave communication. It uses an array of antennas to focus radio signals into a narrow beam in a specific direction, akin to focusing a flashlight beam on a single point. However, it is limited to line-of-sight communication, where transmitter and receiver must be in a straight line, and the received signal can be degraded by obstacles. Furthermore, concrete and modern glass materials can cause high propagation losses. Hence, there is an urgent need for a non-line-of-sight (NLoS) relay system to extend 5G network coverage, especially indoors.
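The beam-steering idea can be sketched with the standard array factor of a uniform linear array: each element gets a progressive phase shift so that the signals add coherently only in the steering direction. The element count and spacing below are illustrative, not from the transceiver described here:

```python
# Beamforming sketch: array factor of a uniform linear array steered to
# a chosen angle (textbook formula; parameters are illustrative).
import numpy as np

def array_factor(n_elements, steer_deg, look_deg, spacing_wavelengths=0.5):
    """|sum of element phasors| when the array is steered to steer_deg
    and observed from look_deg (angles measured from broadside)."""
    k = 2 * np.pi * spacing_wavelengths
    n = np.arange(n_elements)
    phase = k * n * (np.sin(np.radians(look_deg)) - np.sin(np.radians(steer_deg)))
    return abs(np.exp(1j * phase).sum())

# Steered to 30 degrees: full coherent gain on-target, much less off-target.
on_target = array_factor(16, 30, 30)
off_target = array_factor(16, 30, 0)
print(on_target)  # 16.0 — all 16 elements add in phase
print(off_target < on_target)
```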
    To address these issues, a team of researchers led by Associate Professor Atsushi Shirane from the Laboratory for Future Interdisciplinary Research of Science and Technology at Tokyo Institute of Technology (Tokyo Tech) designed a novel wirelessly powered relay transceiver for 28 GHz millimeter-wave 5G communication. Their study has been published in the Proceedings of the 2024 IEEE MTT-S International Microwave Symposium.
    Explaining the motivation behind their study, Shirane says, “Previously, for NLoS communication, two types of 5G relays have been explored: an active type and a wirelessly powered type. While the active relay can maintain a good SNR even with few rectifier arrays, it has high power consumption. The wirelessly powered type does not require a dedicated power supply but needs many rectifier arrays to maintain SNR due to low conversion gain, and it uses CMOS diodes with power conversion efficiency below ten percent. Our design addresses their issues while using commercially available semiconductor integrated circuits (ICs).”
    The proposed transceiver consists of 256 rectifier arrays with 24 GHz wireless power transfer (WPT). The arrays are built from discrete ICs, including gallium arsenide diodes, baluns (which interface between balanced and unbalanced signal lines), DPDT switches, and digital ICs. Notably, the transceiver is capable of simultaneous data and power transmission, converting the 24 GHz WPT signal to direct current (DC) while facilitating bi-directional 28 GHz transmission and reception at the same time. The 24 GHz signal is received at each rectifier individually, while the 28 GHz signal is transmitted and received using beamforming. Both signals can arrive from the same or different directions, and the 28 GHz signal can be transmitted either by retro-reflection along the 24 GHz pilot signal or in any other direction.
    Testing revealed that the proposed transceiver achieves a power conversion efficiency of 54% and a conversion gain of -19 decibels, higher than conventional transceivers, while maintaining SNR over long distances. Additionally, it generates about 56 milliwatts of power, which can be increased further by adding more arrays; doing so also improves the resolution of the transmission and reception beams. “The proposed transceiver can contribute to the deployment of the millimeter-wave 5G network even to places where the link is blocked, improving installation flexibility and coverage area,” remarks Shirane about the benefits of their device.
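The quoted figures can be put in perspective with the standard decibel and efficiency formulas. The 54%, -19 dB, and 56 mW values are from the article; the ~104 mW of received RF power is an illustrative input chosen to make the efficiency come out at 54%:

```python
# Back-of-the-envelope helpers for the figures quoted above.
def db_to_ratio(db):
    """Convert a power ratio in decibels to a linear ratio."""
    return 10 ** (db / 10)

def power_conversion_efficiency(p_dc_out, p_rf_in):
    """DC output power divided by received RF power."""
    return p_dc_out / p_rf_in

# A -19 dB conversion gain means the relayed signal carries about 1.3%
# of the received power.
print(round(db_to_ratio(-19), 4))

# 54% efficiency: e.g. ~56 mW of DC from an assumed ~104 mW of 24 GHz power.
print(round(power_conversion_efficiency(56e-3, 104e-3), 2))
```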


    Researchers teach AI to spot what you’re sketching

    A new way to teach artificial intelligence (AI) to understand human line drawings — even from non-artists — has been developed by a team from the University of Surrey and Stanford University.
    The new model approaches human levels of performance in recognising scene sketches.
    Dr Yulia Gryaditskaya, Lecturer at Surrey’s Centre for Vision, Speech and Signal Processing (CVSSP) and Surrey Institute for People-Centred AI (PAI), said:
    “Sketching is a powerful language of visual communication. It is sometimes even more expressive and flexible than spoken language.
    “Developing tools for understanding sketches is a step towards more powerful human-computer interaction and more efficient design workflows. Examples include being able to search for or create images by sketching something.”
    People of all ages and backgrounds use drawings to explore new ideas and communicate. Yet, AI systems have historically struggled to understand sketches.
    AI has to be taught how to understand images. Usually, this involves a labour-intensive process of collecting labels for every pixel in the image. The AI then learns from these labels.

    Instead, the team taught the AI using a combination of sketches and written descriptions. It learned to group pixels, matching them against one of the categories in a description.
    The resulting AI displayed a much richer and more human-like understanding of these drawings than previous approaches. It correctly identified and labelled kites, trees, giraffes and other objects with 85% accuracy, outperforming models that relied on labelled pixels.
    As well as identifying objects in a complex scene, it could identify which pen strokes were intended to depict each object. The new method works well with informal sketches drawn by non-artists, as well as drawings of objects it was not explicitly trained on.
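The grouping-by-description idea can be sketched as matching a stroke's feature vector against text-category embeddings with cosine similarity. This is a conceptual toy, not the Surrey/Stanford model; the embeddings are hand-made:

```python
# Conceptual sketch: assign each stroke to the description category whose
# (toy) embedding is most similar to the stroke's feature vector.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy text embeddings for two categories mentioned in the article.
categories = {
    "tree": np.array([1.0, 0.1, 0.0]),
    "kite": np.array([0.0, 0.2, 1.0]),
}

def label_stroke(stroke_feature):
    """Pick the category with the most similar embedding."""
    return max(categories, key=lambda c: cosine(stroke_feature, categories[c]))

print(label_stroke(np.array([0.9, 0.2, 0.1])))  # closer to "tree"
print(label_stroke(np.array([0.1, 0.1, 0.8])))  # closer to "kite"
```

No per-pixel labels are needed: the text descriptions supply the categories, and similarity does the grouping.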
    Professor Judith Fan, Assistant Professor of Psychology at Stanford University, said:
    “Drawing and writing are among the most quintessentially human activities and have long been useful for capturing people’s observations and ideas.
    “This work represents exciting progress towards AI systems that understand the essence of the ideas people are trying to get across, regardless of whether they are using pictures or text.”
    The research forms part of Surrey’s Institute for People-Centred AI, and in particular its SketchX programme. Using AI, SketchX seeks to understand the way we see the world by the way we draw it.

    Professor Yi-Zhe Song, Co-director of the Institute for People-Centred AI, and SketchX lead, said:
    “This research is a prime example of how AI can enhance fundamental human activities like sketching. By understanding rough drawings with near-human accuracy, this technology has immense potential to empower people’s natural creativity, regardless of artistic ability.”
    The findings will be presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2024, taking place in Seattle from 17-21 June 2024.


    Wirelessly powered relay will help bring 5G technology to smart factories

    A recently developed wirelessly powered 5G relay could accelerate the development of smart factories, report scientists from Tokyo Tech. By adopting a lower operating frequency for wireless power transfer, the proposed relay design solves many of the current limitations, including range and efficiency. In turn, this allows for a more versatile and widespread arrangement of sensors and transceivers in industrial settings.
    One of the hallmarks of the Information Age is the transformation of industries towards a greater flow of information. This can be readily seen in high-tech factories and warehouses, where wireless sensors and transceivers are installed in robots, production machinery, and automatic vehicles. In many cases, 5G networks are used to orchestrate operations and communications between these devices.
    To avoid relying on cumbersome wired power sources, sensors and transceivers can be energized remotely via wireless power transfer (WPT). However, one problem with conventional WPT designs is that they operate at 24 GHz. At such high frequencies, transmission beams must be extremely narrow to avoid energy losses. Moreover, power can only be transmitted if there is a clear line of sight between the WPT system and the target device. Since 5G relays are often used to extend the range of 5G base stations, WPT needs to reach even further, which is yet another challenge for 24 GHz systems.
    To address the limitations of WPT, a research team from Tokyo Institute of Technology has come up with a clever solution. In a recent study, whose results were presented at the 2024 IEEE Symposium on VLSI Technology & Circuits, they developed a novel 5G relay that can be powered wirelessly at a lower frequency of 5.7 GHz. “By using 5.7 GHz as the WPT frequency, we can get wider coverage than conventional 24 GHz WPT systems, enabling a wider range of devices to operate simultaneously,” explains senior author and Associate Professor Atsushi Shirane.
    The proposed wirelessly powered relay is meant to act as an intermediary receiver and transmitter of 5G signals, which can originate from a 5G base station or wireless devices. The key innovation of this system is the use of a rectifier-type mixer, which performs 4th-order subharmonic mixing while also generating DC power.
    Notably, the mixer uses the received 5.7 GHz WPT signal as a local signal. With this local signal, together with multiplying circuits, phase shifters, and a power combiner, the mixer ‘down-converts’ a received 28 GHz signal into a 5.2 GHz signal. Then, this 5.2 GHz signal is internally amplified, up-converted to 28 GHz through the inverse process, and retransmitted to its intended destination.
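The frequency plan described above can be checked with plain arithmetic: 4th-order subharmonic mixing makes the 5.7 GHz WPT signal act as an effective 22.8 GHz local oscillator, and 28 GHz minus 22.8 GHz gives the 5.2 GHz intermediate signal (all values from the article):

```python
# Frequency plan of the 4th-order subharmonic mixer (frequencies in GHz).
wpt_local = 5.7               # received WPT signal, reused as local signal
effective_lo = 4 * wpt_local  # 4th-order subharmonic mixing -> 22.8 GHz
rf_in = 28.0

intermediate = rf_in - effective_lo
print(round(intermediate, 1))  # 5.2 GHz, the down-converted signal

# The inverse process up-converts it back for retransmission.
print(round(intermediate + effective_lo, 1))  # 28.0 GHz
```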
    To drive these internal amplifiers, the proposed system first rectifies the 5.7 GHz WPT signal to produce DC power, which is managed by a dedicated power management unit. This ingenious approach offers several advantages, as Shirane highlights: “Since the 5.7 GHz WPT signal has less path loss than the 24 GHz signal, more power can be obtained from a rectifier. In addition, the 5.7 GHz rectifier has a lower loss than 24 GHz rectifiers and can operate at a higher power conversion efficiency.” Finally, this proposed circuit design allows for selecting the transistor size, bias voltage, matching, cutoff frequency of the filter, and load to maximize conversion efficiency and conversion gain simultaneously.
    Through several experiments, the research team showcased the capabilities of their proposed relay. Fabricated in standard CMOS technology on a chip of only 1.5 mm by 0.77 mm, a single relay can output a high power of 6.45 mW at an input power of 10.7 dBm. Notably, multiple chips can be combined to achieve a higher power output. Considering its many advantages, the proposed 5.7 GHz WPT system could greatly contribute to the development of smart factories.


    Simplicity versus adaptability: Understanding the balance between habitual and goal-directed behaviors

    Both living creatures and AI-driven machines need to act quickly and adaptively in response to situations. In psychology and neuroscience, behavior can be categorized into two types — habitual (fast and simple but inflexible), and goal-directed (flexible but complex and slower). Daniel Kahneman, who won the Nobel Prize in Economic Sciences, distinguishes between these as System 1 and System 2. However, there is ongoing debate as to whether they are independent and conflicting entities or mutually supportive components.
    Scientists from the Okinawa Institute of Science and Technology (OIST) and Microsoft Research Asia in Shanghai have proposed a new AI method in which the habitual and goal-directed systems learn to help each other. In computer simulations that mimicked the exploration of a maze, the method adapted quickly to changing environments and also reproduced the behavior of humans and animals after they had become accustomed to a given environment over a long period.
    The study, published in Nature Communications, not only paves the way for the development of systems that adapt quickly and reliably in the burgeoning field of AI, but also provides clues to how we make decisions in the fields of neuroscience and psychology.
    Building on the theory of “active inference,” which has recently attracted much attention, the scientists derived a model that integrates habitual and goal-directed systems for learning behavior in AI agents that perform reinforcement learning, a method of learning based on rewards and punishments. In the paper, they created a computer simulation mimicking a task in which mice explore a maze based on visual cues and are rewarded with food when they reach the goal.
    They examined how these two systems adapt and integrate while interacting with the environment, showing that they can achieve adaptive behavior quickly. It was observed that the AI agent collected data and improved its own behavior through reinforcement learning.
    What our brains prefer
    After a long day at work, we usually head home on autopilot (habitual behavior). However, if you have just moved house and are not paying attention, you might find yourself driving back to your old place out of habit. When you catch yourself doing this, you switch gears (goal-directed behavior) and reroute to your new home. Traditionally, these two behaviors are considered to work independently, resulting in behavior being either habitual and fast but inflexible, or goal-directed and flexible but slow.

    “The automatic transition from goal-directed to habitual behavior during learning is a very famous finding in psychology. Our model and simulations can explain why this happens: The brain would prefer behavior with higher certainty. As learning progresses, habitual behavior becomes less random, thereby increasing certainty. Therefore, the brain prefers to rely on habitual behavior after significant training,” Dr. Dongqi Han, a former PhD student at OIST’s Cognitive Neurorobotics Research Unit and first author of the paper, explained.
    For a new goal that AI has not trained for, it uses an internal model of the environment to plan its actions. It does not need to consider all possible actions but uses a combination of its habitual behaviors, which makes planning more efficient. This challenges traditional AI approaches which require all possible goals to be explicitly included in training for them to be achieved. In this model each desired goal can be achieved without explicit training but by flexibly combining learned knowledge.
    “It’s important to achieve a kind of balance or trade-off between flexible and habitual behavior,” Prof. Jun Tani, head of the Cognitive Neurorobotics Research Unit stated. “There could be many possible ways to achieve a goal, but to consider all possible actions is very costly, therefore goal directed behavior is limited by habitual behavior to narrow down options.”
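The "habits narrow down planning" idea can be sketched as a cached habit policy that proposes a shortlist of actions, over which a slower world model then plans. This is a toy illustration, not the paper's active-inference model; states, actions, and values are invented:

```python
# Toy sketch: the habitual system shortlists actions; the goal-directed
# system evaluates only that shortlist with a world model.
def habit_policy(state):
    """Fast, cached preferences learned from experience (toy values)."""
    prefs = {"junction": ["left", "straight", "right"]}
    return prefs[state][:2]  # habit narrows three options to two

def world_model(state, action):
    """Slow, goal-directed evaluation: predicted distance to the goal."""
    predicted = {"left": 5, "straight": 2, "right": 9}
    return predicted[action]

def act(state):
    # Goal-directed planning runs only over the habitual shortlist.
    candidates = habit_policy(state)
    return min(candidates, key=lambda a: world_model(state, a))

print(act("junction"))  # picks "straight", the best shortlisted action
```

Planning cost scales with the shortlist, not with all possible actions, which is the trade-off the quote describes.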
    Building better AI
    Dr. Han got interested in neuroscience and the gap between artificial and human intelligence when he started working on AI algorithms. “I started thinking about how AI can behave more efficiently and adaptably, like humans. I wanted to understand the underlying mathematical principles and how we can use them to improve AI. That was the motivation for my PhD research.”
    Understanding the difference between habitual and goal-directed behaviors has important implications, especially in the field of neuroscience, because it can shed light on neurological disorders such as ADHD, OCD, and Parkinson’s disease.
    “We are exploring the computational principles by which multiple systems in the brain work together. We have also seen that neuromodulators such as dopamine and serotonin play a crucial role in this process,” Prof. Kenji Doya, head of the Neural Computation Unit explained. “AI systems developed with inspiration from the brain and proven capable of solving practical problems can serve as valuable tools in understanding what is happening in the brains of humans and animals.”
    Dr. Han would like to help build better AI that can adapt its behavior to achieve complex goals. “We are very interested in developing AI that have near human abilities when performing everyday tasks, so we want to address this human-AI gap. Our brains have two learning mechanisms, and we need to better understand how they work together to achieve our goal.”


    New material puts eco-friendly methanol conversion within reach

    Griffith University researchers have developed innovative, eco-friendly quantum materials that can drive the transformation of methanol into ethylene glycol.
    Ethylene glycol is an important chemical used to make polyester (including PET) and antifreeze agents, with global production exceeding 35 million tons annually and growing strongly.
    Currently, it’s mainly produced from petrochemicals through energy-intensive processes.
    Methanol (CH3OH) can be produced sustainably from CO2, agricultural biomass waste, and plastic waste through various methods such as hydrogenation, catalytic partial oxidation, and fermentation. As a fuel, methanol also serves as a circular hydrogen carrier and a precursor for numerous chemicals.
    Led by Professor Qin Li, the Griffith team’s method uses solar-driven photocatalysis to convert methanol into ethylene glycol under mild conditions.
    This process uses sunlight to drive chemical reactions, which minimises waste and maximises the use of renewable energy.
    While previous attempts at this conversion have faced challenges — such as the need for toxic or precious materials — Professor Li and the research team have identified a greener solution.

    “Climate change is a major challenge facing humanity today,” Professor Li said.
    “To tackle this, we need to focus on zero-emission power generation, low-emission manufacturing, and a circular economy. Methanol stands out as a crucial chemical that links these three strategies.
    “What we have created is a novel material that combines carbon quantum dots with zinc selenide quantum wells.”
    “This combination enhances the photocatalytic activity to more than four times that of carbon quantum dots alone, demonstrating the effectiveness of the new material,” lead author Dr Dechao Chen said.
    The approach has also shown high photocurrent, indicating efficient charge transfer within the material, crucial for driving the desired chemical reactions.
    Analyses confirmed the formation of ethylene glycol, showcasing the potential of this new method. It’s worth noting that the by-product of this reaction is green hydrogen.
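The hydrogen by-product follows from the stoichiometry implied above: coupling two methanol molecules into one ethylene glycol releases one H2. A quick mass balance (rounded atomic masses; the reaction form 2 CH3OH -> HOCH2CH2OH + H2 is inferred from the stated products, not quoted from the paper) confirms it closes:

```python
# Mass-balance check for 2 CH3OH -> HOCH2CH2OH + H2.
MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol, rounded

def molar_mass(formula_counts):
    return sum(MASS[el] * n for el, n in formula_counts.items())

methanol = molar_mass({"C": 1, "H": 4, "O": 1})         # CH3OH
ethylene_glycol = molar_mass({"C": 2, "H": 6, "O": 2})  # HOCH2CH2OH
hydrogen = molar_mass({"H": 2})                         # H2

lhs = 2 * methanol
rhs = ethylene_glycol + hydrogen
print(round(lhs, 3), round(rhs, 3))  # both 64.084: the masses balance
```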

    This discovery opens up new possibilities for using eco-friendly materials in photocatalysis, paving the way for sustainable chemical production.
    As a new quantum material, it also has the potential to lead to further advancements in photocatalysis, sensing, and optoelectronics.
    “Our research demonstrates a significant step towards green chemistry, showing how sustainable materials can be used to achieve important chemical transformations,” Professor Li said.
    “This could transform methanol conversion and contribute significantly to emissions reduction.”
    The findings, ‘Colloidal Synthesis of Carbon Dot-ZnSe Nanoplatelet Van der Waals Heterostructures for Boosting Photocatalytic Generation of Methanol-Storable Hydrogen’, have been published in the journal Small.