More stories

  •

    Quantum computer in reverse gear

    Today’s computers are based on microprocessors that execute so-called gates. A gate can, for example, be an AND operation, i.e. an operation that combines two bits into a single output bit. These gates, and thus computers, are irreversible: algorithms cannot simply run backwards. “If you take the multiplication 2*2=4, you cannot simply run this operation in reverse, because 4 could be 2*2, but likewise 1*4 or 4*1,” explains Wolfgang Lechner, professor of theoretical physics at the University of Innsbruck. If running such operations in reverse were possible, however, it would be feasible to factorize large numbers, i.e. divide them into their factors; the difficulty of factorization is an important pillar of modern cryptography.

    Martin Lanthaler, Ben Niehoff and Wolfgang Lechner from the Department of Theoretical Physics at the University of Innsbruck and the quantum spin-off ParityQC have now developed exactly this inversion of algorithms with the help of quantum computers. The starting point is a classical logic circuit, which multiplies two numbers. If two integers are entered as the input value, the circuit returns their product. Such a circuit is built from irreversible operations. “However, the logic of the circuit can be encoded within ground states of a quantum system,” explains Martin Lanthaler from Wolfgang Lechner’s team. “Thus, both multiplication and factorization can be understood as ground-state problems and solved using quantum optimization methods.”
    Superposition of all possible results
    “The core of our work is the encoding of the basic building blocks of the multiplier circuit, specifically AND gates, half and full adders with the parity architecture as the ground state problem on an ensemble of interacting spins,” says Martin Lanthaler. The coding allows the entire circuit to be built from repeating subsystems that can be arranged on a two-dimensional grid. By stringing several of these subsystems together, larger problem instances can be realized. Instead of the classical brute force method, where all possible factors are tested, quantum methods can speed up the search process: To find the ground state, and thus solve an optimization problem, it is not necessary to search the whole energy landscape, but deeper valleys can be reached by “tunneling.”
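    As a loose classical illustration of framing factorization as a ground-state problem (this is a sketch of the general idea, not the paper's parity encoding), one can define an energy that is zero exactly at valid factorizations and search the landscape for that minimum:

```python
def energy(n: int, p: int, q: int) -> int:
    """Energy is zero exactly when p * q == n, i.e. at a 'ground state'."""
    return (n - p * q) ** 2

def factor_by_ground_state(n: int):
    """Exhaustively search the energy landscape for a zero-energy state.
    This is the classical brute-force baseline; a quantum optimizer would
    explore the same landscape without enumerating every configuration."""
    for p in range(2, n):
        for q in range(p, n):
            if energy(n, p, q) == 0:
                return p, q
    return None

print(factor_by_ground_state(15))  # (3, 5)
```

The quantum advantage described above comes from searching this kind of energy landscape by tunneling between valleys rather than by enumeration.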
    The current research work provides a blueprint for a new type of quantum computer to solve the factorization problem, which is a cornerstone of modern cryptography. This blueprint is based on the parity architecture developed at the University of Innsbruck and can be implemented on all current quantum computing platforms.
    The results were recently published in Communications Physics. Financial support for the research was provided by the Austrian Science Fund FWF, the European Union and the Austrian Research Promotion Agency FFG, among others.

  •

    Researchers detect and classify multiple objects without images

    Researchers have developed a new high-speed way to detect the location, size and category of multiple objects without acquiring images or requiring complex scene reconstruction. Because the new approach greatly decreases the computing power necessary for object detection, it could be useful for identifying hazards while driving.
    “Our technique is based on a single-pixel detector, which enables efficient and robust multi-object detection directly from a small number of 2D measurements,” said research team leader Liheng Bian from the Beijing Institute of Technology in China. “This type of image-free sensing technology is expected to solve the problems of heavy communication load, high computing overhead and low perception rate of existing visual perception systems.”
    Today’s image-free perception methods can only achieve classification, single object recognition or tracking. To accomplish all three at once, the researchers developed a technique known as image-free single-pixel object detection (SPOD). In the Optica Publishing Group journal Optics Letters, they report that SPOD can achieve an object detection accuracy of just over 80%.
    The SPOD technique builds on the research group’s previous accomplishments in developing imaging-free sensing technology as efficient scene perception technology. Their prior work includes image-free classification, segmentation and character recognition based on a single-pixel detector.
    “For autonomous driving, SPOD could be used with lidar to help improve scene reconstruction speed and object detection accuracy,” said Bian. “We believe that it has a high enough detection rate and accuracy for autonomous driving while also reducing the transmission bandwidth and computing resource requirements needed for object detection.”
    Detection without images
    Automating advanced visual tasks — whether used to navigate a vehicle or track a moving plane — usually requires detailed images of a scene to extract the features necessary to identify an object. However, this requires either complex imaging hardware or complicated reconstruction algorithms, which leads to high computational cost, long running times and a heavy data transmission load. For this reason, the traditional “image first, perceive later” approach may not be best for object detection.

    Image-free sensing methods based on single-pixel detectors can cut down on the computational power needed for object detection. Instead of employing a pixelated detector such as a CMOS or CCD, single-pixel imaging illuminates the scene with a sequence of structured light patterns and then records the transmitted light intensity to acquire the spatial information of objects. This information is then used to computationally reconstruct the object or to calculate its properties.
    For SPOD, the researchers used a small but optimized structured light pattern to quickly scan the entire scene and obtain 2D measurements. These measurements are fed into a deep learning model known as a transformer-based encoder to extract the high-dimensional meaningful features in the scene. These features are then fed into a multi-scale attention network-based decoder, which outputs the class, location and size information of all targets in the scene simultaneously.
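    The acquisition step can be sketched in a few lines: each structured light pattern yields one scalar measurement, the total intensity of the scene modulated by that pattern. The pattern shapes and counts below are illustrative stand-ins, not the authors' optimized patterns:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 32x32 "scene" and a stack of random binary illumination patterns.
scene = rng.random((32, 32))
num_patterns = 51  # roughly a 5% sampling rate of 32*32 = 1024 pixels
patterns = rng.integers(0, 2, size=(num_patterns, 32, 32))

# Each single-pixel measurement is the total light transmitted through
# one pattern: a dot product between the pattern and the scene.
measurements = np.array([(p * scene).sum() for p in patterns])

print(measurements.shape)  # 51 scalars stand in for 1024 pixel values
```

In SPOD, a measurement vector like this is what the transformer-based encoder consumes instead of an image.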
    “Compared to the full-size pattern used by other single-pixel detection methods, the small, optimized pattern produces better image-free sensing performance,” said group member Lintao Peng. “Also, the multi-scale attention network in the SPOD decoder reinforces the network’s attention to the target area in the scene. This allows more efficient extraction of scene features, enabling state-of-the-art object detection performance.”
    Proof-of-concept demonstration
    To experimentally demonstrate SPOD, the researchers built a proof-of-concept setup. Images randomly selected from the Pascal VOC 2012 test dataset were printed on film and used as target scenes. When a sampling rate of 5% was used, the average time to complete spatial light modulation and image-free object detection per scene with SPOD was just 0.016 seconds. This is much faster than performing scene reconstruction first (0.05 seconds) and then object detection (0.018 seconds). SPOD showed an average detection accuracy of 82.2% for all the object classes included in the test dataset.
    “Currently, SPOD cannot detect every possible object category because the existing object detection dataset used to train the model only contains 80 categories,” said Peng. “However, when faced with a specific task, the pre-trained model can be fine-tuned to achieve image-free multi-object detection of new target classes for applications such as pedestrian, vehicle or boat detection.”
    Next, the researchers plan to extend the image-free perception technology to other kinds of detectors and computational acquisition systems to achieve reconstruction-free sensing technology.

  •

    Engineers tap into good vibrations to power the Internet of Things

    In a world hungry for clean energy, engineers have created a new material that converts the simple mechanical vibrations all around us into electricity to power sensors in everything from pacemakers to spacecraft.
    The first of its kind and the product of a decade of work by researchers at the University of Waterloo and the University of Toronto, the novel generating system is compact, reliable, low-cost and very, very green.
    “Our breakthrough will have a significant social and economic impact by reducing our reliance on non-renewable power sources,” said Asif Khan, a Waterloo researcher and co-author of a new study on the project. “We need these energy-generating materials more critically at this moment than at any other time in history.”
    The system Khan and his colleagues developed is based on the piezoelectric effect, which generates an electrical current by applying pressure — mechanical vibrations are one example — to an appropriate substance.
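    For intuition, the direct piezoelectric effect is often quantified by a charge coefficient: the charge generated is roughly the material's piezoelectric coefficient times the applied force. A back-of-the-envelope sketch (the d33 value below is a generic ballpark for quartz, not a measured property of the Waterloo material):

```python
def piezo_charge(d33_pc_per_newton: float, force_newtons: float) -> float:
    """Charge in picocoulombs from the direct piezoelectric effect, Q = d33 * F."""
    return d33_pc_per_newton * force_newtons

# Quartz has d33 on the order of ~2 pC/N; engineered piezoelectrics
# can be orders of magnitude higher.
print(piezo_charge(2.0, 10.0))  # 20.0 pC from a 10 N press
```

A higher d33, which is what the new material targets, means more harvested charge for the same vibration.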
    The effect was discovered in 1880, and since then, a limited number of piezoelectric materials, such as quartz and Rochelle salts, have been used in technologies ranging from sonar and ultrasonic imaging to microwave devices.
    The problem is that until now, traditional piezoelectric materials used in commercial devices have had limited capacity for generating electricity. They also often use lead, which Khan describes as “detrimental to the environment and human health.”
    The researchers solved both problems.
    They started by growing a large single crystal of a molecular metal-halide compound called edabco copper chloride using the Jahn-Teller effect, a well-known chemistry concept related to spontaneous geometrical distortion of a crystal field.
    Khan said that highly piezoelectric material was then used to fabricate nanogenerators “with a record power density that can harvest tiny mechanical vibrations in any dynamic circumstances, from human motion to automotive vehicles” in a process requiring neither lead nor non-renewable energy.
    The nanogenerator is tiny — 2.5 centimetres square and about the thickness of a business card — and could be conveniently used in countless situations. It has the potential to power sensors in a vast array of electronic devices, including billions needed for the Internet of Things — the burgeoning global network of objects embedded with sensors and software that connect and exchange data with other devices.
    Dr. Dayan Ban, a researcher at the Waterloo Institute for Nanotechnology, said that in future, an aircraft’s vibrations could power its sensory monitoring systems, or a person’s heartbeat could keep their battery-free pacemaker running.
    “Our new material has shown record-breaking performance,” said Ban, a professor of electrical and computer engineering. “It represents a new path forward in this field.”

  •

    ‘Raw’ data show AI signals mirror how the brain listens and learns

    New research from the University of California, Berkeley, shows that artificial intelligence (AI) systems can process signals in a way that is remarkably similar to how the brain interprets speech, a finding scientists say might help explain the black box of how AI systems operate.
    Using a system of electrodes placed on participants’ heads, scientists with the Berkeley Speech and Computation Lab measured brain waves as participants listened to a single syllable — “bah.” They then compared that brain activity to the signals produced by an AI system trained to learn English.
    “The shapes are remarkably similar,” said Gasper Begus, assistant professor of linguistics at UC Berkeley and lead author on the study published recently in the journal Scientific Reports. “That tells you similar things get encoded, that processing is similar.”
    A side-by-side comparison graph of the two signals shows that similarity strikingly.
    “There are no tweaks to the data,” Begus added. “This is raw.”
    AI systems have recently advanced by leaps and bounds. Since ChatGPT ricocheted around the world last year, these tools have been forecast to upend sectors of society and revolutionize how millions of people work. But despite these impressive advances, scientists have had a limited understanding of how exactly the tools they created operate between input and output.

    A question and answer in ChatGPT has been the benchmark to measure an AI system’s intelligence and biases. But what happens between those steps has been something of a black box. Knowing how and why these systems provide the information they do — how they learn — becomes essential as they become ingrained in daily life in fields spanning health care to education.
    Begus and his co-authors, Alan Zhou of Johns Hopkins University and T. Christina Zhao of the University of Washington, are among a cadre of scientists working to crack open that box.
    To do so, Begus turned to his training in linguistics.
    When we listen to spoken words, Begus said, the sound enters our ears and is converted into electrical signals. Those signals then travel through the brainstem and to the outer parts of our brain. With the electrode experiment, researchers traced that path in response to 3,000 repetitions of a single sound and found that the brain waves for speech closely followed the actual sounds of language.
    The researchers transmitted the same recording of the “bah” sound through an unsupervised neural network — an AI system — that could interpret sound. Using a technique developed in the Berkeley Speech and Computation Lab, they measured the coinciding waves and documented them as they occurred.
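    One generic way to quantify how alike two such waveforms are (a standard comparison, not the lab's specific technique) is the normalized correlation between the signals:

```python
import numpy as np

def waveform_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equal-length signals; 1.0 = identical shape."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

t = np.linspace(0, 1, 500)
brain_like = np.sin(2 * np.pi * 8 * t)        # stand-in "brain wave"
model_like = np.sin(2 * np.pi * 8 * t + 0.1)  # slightly shifted "model wave"
print(round(waveform_similarity(brain_like, model_like), 3))
```

A value near 1.0 indicates the two raw signals rise and fall together, the kind of side-by-side similarity the study reports.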

    Previous research required extra steps to compare waves from the brain and machines. Studying the waves in their raw form will help researchers understand and improve how these systems learn and increasingly come to mirror human cognition, Begus said.
    “I’m really interested as a scientist in the interpretability of these models,” Begus said. “They are so powerful. Everyone is talking about them. And everyone is using them. But much less is being done to try to understand them.”
    Begus believes that what happens between input and output doesn’t have to remain a black box. Understanding how those signals compare to the brain activity of human beings is an important benchmark in the race to build increasingly powerful systems. So is knowing what’s going on under the hood.
    For example, having that understanding could help put guardrails on increasingly powerful AI models. It could also improve our understanding of how errors and bias are baked into the learning processes.
    Begus said he and his colleagues are collaborating with other researchers using brain imaging techniques to measure how these signals might compare. They’re also studying how other languages, like Mandarin, are decoded in the brain differently and what that might indicate about knowledge.
    Many models are trained on visual cues, like colors or written text — both of which have thousands of variations at the granular level. Language, however, opens the door for a more solid understanding, Begus said.
    The English language, for example, has just a few dozen sounds.
    “If you want to understand these models, you have to start with simple things. And speech is way easier to understand,” Begus said. “I am very hopeful that speech is the thing that will help us understand how these models are learning.”
    In cognitive science, one of the primary goals is to build mathematical models that resemble humans as closely as possible. The newly documented similarities in brain waves and AI waves are a benchmark on how close researchers are to meeting that goal.
    “I’m not saying that we need to build things like humans,” Begus said. “I’m not saying that we don’t. But understanding how different architectures are similar or different from humans is important.”

  •

    Deep neural network provides robust detection of disease biomarkers in real time

    Sophisticated systems for the detection of biomarkers — molecules such as DNA or proteins that indicate the presence of a disease — are crucial for real-time diagnostic and disease-monitoring devices.
    Holger Schmidt, distinguished professor of electrical and computer engineering at UC Santa Cruz, and his group have long been focused on developing unique, highly sensitive devices called optofluidic chips to detect biomarkers.
    Schmidt’s graduate student Vahid Ganjalizadeh led an effort to use machine learning to enhance these systems by improving their ability to accurately classify biomarkers. The deep neural network he developed classifies particle signals with 99.8 percent accuracy in real time, on a system that is relatively cheap and portable for point-of-care applications, as shown in a new paper in Scientific Reports.
    When taking biomarker detectors into the field or a point-of-care setting such as a health clinic, the signals received by the sensors may not be as high quality as those in a lab or a controlled environment. This may be due to a variety of factors, such as the need to use cheaper chips to bring down costs, or environmental characteristics such as temperature and humidity.
    To address the challenges of a weak signal, Schmidt and his team developed a deep neural network that can identify the source of that weak signal with high confidence. The researchers trained the neural network with known training signals, teaching it to recognize potential variations it could see, so that it can recognize patterns and identify new signals with very high accuracy.
    First, a parallel cluster wavelet analysis (PCWA) approach designed in Schmidt’s lab detects that a signal is present. Then, the neural network processes the potentially weak or noisy signal, identifying its source. This system works in real time, so users are able to receive results in a fraction of a second.
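    The two-stage flow, detection first and then classification, can be sketched generically. This is an illustrative pipeline with made-up signal data, not the PCWA algorithm or the paper's network:

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_events(trace: np.ndarray, threshold: float) -> np.ndarray:
    """Stage 1 (stand-in for PCWA): flag samples whose smoothed energy
    crosses a threshold, marking candidate particle signals."""
    kernel = np.ones(5) / 5.0
    smoothed = np.convolve(trace**2, kernel, mode="same")
    return np.flatnonzero(smoothed > threshold)

def classify_event(segment: np.ndarray) -> str:
    """Stage 2 (stand-in for the deep network): label a detected segment."""
    return "biomarker_A" if segment.max() > 2.0 else "biomarker_B"

# Noisy trace with one injected spike around index 500.
trace = rng.normal(0, 0.3, 1000)
trace[500:505] += 3.0

hits = detect_events(trace, threshold=1.0)
if hits.size:
    label = classify_event(trace[hits.min():hits.max() + 1])
    print(hits.min(), label)
```

The real system replaces the threshold rule with PCWA and the one-line classifier with the trained network, but the division of labor is the same.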

    “It’s all about making the most of possibly low quality signals, and doing that really fast and efficiently,” Schmidt said.
    A smaller version of the neural network model can run on portable devices. In the paper, the researchers run the system on a Google Coral Dev Board, a relatively cheap edge device for accelerated execution of artificial intelligence algorithms. This means the system also requires less power to execute the processing compared to other techniques.
    “Unlike some research that requires running on supercomputers to do high-accuracy detection, we proved that even a compact, portable, relatively cheap device can do the job for us,” Ganjalizadeh said. “It makes it available, feasible, and portable for point-of-care applications.”
    The entire system is designed to be used completely locally, meaning the data processing can happen without internet access, unlike other systems that rely on cloud computing. This also provides a data security advantage, because results can be produced without the need to share data with a cloud server provider.
    It is also designed to be able to give results on a mobile device, eliminating the need to bring a laptop into the field.
    “You can build a more robust system that you could take out to under-resourced or less-developed regions, and it still works,” Schmidt said.
    This improved system will work for any other biomarkers Schmidt’s lab’s systems have been used to detect in the past, such as COVID-19, Ebola, flu, and cancer biomarkers. Although they are currently focused on medical applications, the system could potentially be adapted for the detection of any type of signal.
    To push the technology further, Schmidt and his lab members plan to add even more dynamic signal processing capabilities to their devices. This will simplify the system and combine the processing techniques needed to detect signals at both low and high concentrations of molecules. The team is also working to bring discrete parts of the setup into the integrated design of the optofluidic chip.

  •

    A touch-responsive fabric armband — for flexible keyboards, wearable sketchpads

    It’s time to roll up your sleeves for the next advance in wearable technology — a fabric armband that’s actually a touch pad. In ACS Nano, researchers say they have devised a way to make playing video games, sketching cartoons and signing documents easier. Their proof-of-concept silk armband turns a person’s forearm into a keyboard or sketchpad. The three-layer, touch-responsive material interprets what a user draws or types and converts it into images on a computer.
    Computer trackpads and electronic signature-capture devices seem to be everywhere, but they aren’t as widely used in wearables. Researchers have suggested making flexible touch-responsive panels from clear, electrically conductive hydrogels, but these substances are sticky, making them hard to write on and irritating to the skin. So, Xueji Zhang, Lijun Qu, Mingwei Tian and colleagues wanted to incorporate a similar hydrogel into a comfortable fabric sleeve for drawing or playing games on a computer.
    The researchers sandwiched a pressure-sensitive hydrogel between layers of knit silk. The top piece was coated in graphene nanosheets to make the fabric electrically conductive. Attaching the sensing panel to electrodes and a data collection system produced a pressure-responsive pad with real-time, rapid sensing when a finger slid over it, writing numbers and letters. The device was then incorporated into an arm-length silk sleeve with a touch-responsive area on the forearm. In experiments, a user controlled the direction of blocks in a computer game and sketched colorful cartoons in a computer drawing program from the armband. The researchers say that their proof-of-concept wearable touch panel could inspire the next generation of flexible keyboards and wearable sketchpads.
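    The basic localization step in any touch pad of this kind can be illustrated simply: given a grid of pressure readings from the sensing layer, estimate where the finger is as the pressure-weighted centroid. This is a generic sketch, not the authors' data-processing code:

```python
import numpy as np

def touch_location(pressure: np.ndarray):
    """Estimate a touch point as the pressure-weighted centroid of the grid;
    return None when nothing is pressed."""
    total = pressure.sum()
    if total == 0:
        return None
    ys, xs = np.indices(pressure.shape)
    return (float((ys * pressure).sum() / total),
            float((xs * pressure).sum() / total))

grid = np.zeros((8, 8))
grid[2, 5] = 1.0  # a single firm press at row 2, column 5
print(touch_location(grid))  # (2.0, 5.0)
```

Tracking this point over time is what lets a sliding finger be rendered as handwriting or game input on the computer.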

  •

    Joyful music could be a game changer for virtual reality headaches

    Listening to music could reduce the dizziness, nausea and headaches virtual reality users might experience after using digital devices, research suggests.
    Cybersickness — a type of motion sickness from virtual reality experiences such as computer games — significantly reduces when joyful music is part of the immersive experience, the study found.
    The intensity of the nausea-related symptoms of cybersickness was also found to substantially decrease with both joyful and calming music.
    Researchers from the University of Edinburgh assessed the effects of music in a virtual reality environment among 39 people aged between 22 and 36.
    They conducted a series of tests to assess the effect cybersickness had on a participant’s memory skills, reading speed and reaction times.
    Participants were immersed in a virtual environment, where they experienced three roller coaster rides aimed at inducing cybersickness.

    Two of the three rides were accompanied by instrumental electronic music of the kind people might hear from artists or streaming services; the tracks had been rated as calming or joyful in a previous study.
    One ride was completed in silence and the order of the rides was randomised across participants.
    After each ride, participants rated their cybersickness symptoms and performed some memory and reaction time tests.
    Eye-tracking tests were also conducted to measure their reading speed and pupil size.
    For comparison purposes, the participants had completed the same tests before the rides.

    The study found that joyful music significantly decreased the overall cybersickness intensity. Joyful and calming music substantially decreased the intensity of nausea-related symptoms.
    Cybersickness among the participants was associated with a temporary reduction in verbal working memory test scores, and a decrease in pupil size. It also significantly slowed reaction times and reading speed.
    The researchers also found higher levels of gaming experience were associated with lower cybersickness. There was no difference in the intensity of the cybersickness between female and male participants with comparable gaming experience.
    Researchers say the findings show the potential of music in lessening cybersickness, clarify how gaming experience is linked to cybersickness levels, and demonstrate the significant effects of cybersickness on thinking skills, reaction times, reading ability and pupil size.
    Dr Sarah E MacPherson, of the University of Edinburgh’s School of Philosophy, Psychology & Language Sciences, said: “Our study suggests calming or joyful music as a solution for cybersickness in immersive virtual reality. Virtual reality has been used in educational and clinical settings but the experience of cybersickness can temporarily impair someone’s thinking skills as well as slowing down their reaction times. The development of music as an intervention could encourage virtual reality to be used more extensively within educational and clinical settings.”
    The study was made possible through a collaboration between Psychology at the University of Edinburgh and the Inria Centre at the University of Rennes in France.

  •

    Self-folding origami machines powered by chemical reaction

    A Cornell-led collaboration harnessed chemical reactions to make microscale origami machines self-fold — freeing them from the liquids in which they usually function, so they can operate in dry environments and at room temperature.
    The approach could one day lead to the creation of a new fleet of tiny autonomous devices that can rapidly respond to their chemical environment.
    The group’s paper, “Gas-Phase Microactuation Using Kinetically Controlled Surface States of Ultrathin Catalytic Sheets,” published May 1 in Proceedings of the National Academy of Sciences. The paper’s co-lead authors are Nanqi Bao, Ph.D. ’22, and former postdoctoral researcher Qingkun Liu, Ph.D. ’22.
    The project was led by senior author Nicholas Abbott, a Tisch University Professor in the Robert F. Smith School of Chemical and Biomolecular Engineering in Cornell Engineering, along with Itai Cohen, professor of physics, and Paul McEuen, the John A. Newman Professor of Physical Science, both in the College of Arts and Sciences; and David Muller, the Samuel B. Eckert Professor of Engineering in Cornell Engineering.
    “There are quite good technologies for electrical to mechanical energy transduction, such as the electric motor, and the McEuen and Cohen groups have shown a strategy for doing that on the microscale, with their robots,” Abbott said. “But if you look for direct chemical to mechanical transductions, actually there are very few options.”
    Prior efforts depended on chemical reactions that could only occur in extreme conditions, such as at high temperatures of several hundred degrees Celsius, and the reactions were often tediously slow — sometimes taking as long as 10 minutes — making the approach impractical for everyday technological applications.

    However, Abbott’s group found a loophole of sorts while reviewing data from a catalysis experiment: a small section of the chemical reaction pathway contained both slow and fast steps.
    “If you look at the response of the chemical actuator, it’s not that it goes from one state directly to the other state. It actually goes through an excursion into a bent state, a curvature, which is more extreme than either of the two end states,” Abbott said. “If you understand the elementary reaction steps in a catalytic pathway, you can go in and sort of surgically extract out the rapid steps. You can operate your chemical actuator around those rapid steps, and just ignore the rest of it.”
    The researchers needed the right material platform to leverage that rapid kinetic moment, so they turned to McEuen and Cohen, who had worked with Muller to develop ultrathin platinum sheets capped with titanium.
    The group also collaborated with theorists, led by professor Manos Mavrikakis at the University of Wisconsin, Madison, who used electronic structure calculations to dissect the chemical reaction that occurs when hydrogen — adsorbed to the material — is exposed to oxygen.
    The researchers were then able to exploit the crucial moment that the oxygen quickly strips the hydrogen, causing the atomically thin material to deform and bend, like a hinge.

    The system actuates at 600 milliseconds per cycle and can operate at 20 degrees Celsius — i.e., room temperature — in dry environments.
    “The result is quite generalizable,” Abbott said. “There are a lot of catalytic reactions which have been developed based on all sorts of species. So carbon monoxide, nitrogen oxides, ammonia: they’re all candidates to use as fuels for chemically driven actuators.”
    The team anticipates applying the technique to other catalytic metals, such as palladium and palladium gold alloys. Eventually this work could lead to autonomous material systems in which the controlling circuitry and onboard computation are handled by the material’s response — for example, an autonomous chemical system that regulates flows based on chemical composition.
    “We are really excited because this work paves the way to microscale origami machines that work in gaseous environments,” Cohen said.
    Co-authors include postdoctoral researcher Michael Reynolds, M.S. ’17, Ph.D. ’21; doctoral student Wei Wang; Michael Cao ’14; and researchers at the University of Wisconsin, Madison.
    The research was supported by the Cornell Center for Materials Research, which is supported by the National Science Foundation’s MRSEC program, the Army Research Office, the NSF, the Air Force Office of Scientific Research and the Kavli Institute at Cornell for Nanoscale Science.
    The researchers made use of the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure, which is supported by the NSF; and of National Energy Research Scientific Computing Center (NERSC) resources, which are supported by the U.S. Department of Energy’s Office of Science.
    The project is part of the Nanoscale Science and Microsystems Engineering (NEXT Nano) program, which is designed to push nanoscale science and microsystems engineering to the next level of design, function and integration.