More stories

  • Comprehensive characterization of vascular structure in plants

    The leaf vasculature of plants plays a key role in transporting solutes from where they are made — for example from the plant cells driving photosynthesis — to where they are stored or used. Sugars and amino acids are transported from the leaves to the roots and the seeds via the conductive pathways of the phloem.
    Phloem is the part of the tissue in vascular plants that comprises the sieve elements — where actual translocation takes place — and the companion cells as well as the phloem parenchyma cells. The leaf veins consist of at least seven distinct cell types, with specific roles in transport, metabolism and signalling.
    Little is known about the vascular cells in leaves, in particular the phloem parenchyma. Two teams led by Alexander von Humboldt professorship holders from Düsseldorf and Tübingen, together with a colleague from Urbana-Champaign in Illinois, USA, and a chair of bioinformatics from Düsseldorf, have presented the first comprehensive analysis of the vascular cells in the leaves of thale cress (Arabidopsis thaliana) using single-cell sequencing.
    The team led by Alexander von Humboldt Professor Dr. Marja Timmermans from Tübingen University was the first to use single-cell sequencing in plants to characterise root cells. In collaboration with Prof. Timmermans’ group, researchers from the group of Alexander von Humboldt Professor Dr. Wolf Frommer in Düsseldorf succeeded for the first time in isolating plant cells to create an atlas of all RNA molecules (the transcriptome) of the leaf vasculature. They were able to define the role of the different cells by analysing the metabolic pathways.
    Among other things, the research team proved for the first time that the transcripts of sugar (SWEET) and amino acid (UmamiT) transporters are found in the phloem parenchyma cells, which transport these compounds from where they are produced to the vascular system. The compounds are subsequently actively imported into the sieve element-companion cell complex via a second group of transporters (SUT and AAP) and then exported from the source leaf.
    These extensive investigations involved close collaborations with HHU bioinformatics researchers in Prof. Dr. Martin Lercher’s working group. Together they were able to determine that phloem parenchyma and companion cells have complementary metabolic pathways and are therefore in a position to control the composition of the phloem sap.
    First author and work group leader Dr. Ji-Yun Kim from HHU explains: “Our analysis provides completely new insights into the leaf vasculature and the role and relationship of the individual leaf cell types.” Institute Head Prof. Frommer adds: “The cooperation between the four working groups made it possible to use new methods to gain insights for the first time into the important cells in plant transport pathways and thereby obtain a basis for a better understanding of plant metabolism.”

    Story Source:
    Materials provided by Heinrich-Heine University Duesseldorf. Original written by Arne Claussen. Note: Content may be edited for style and length.

  • Researchers report quantum-limit-approaching chemical sensing chip

    University at Buffalo researchers are reporting an advancement of a chemical sensing chip that could lead to handheld devices that detect trace chemicals — everything from illicit drugs to pollution — as quickly as a breathalyzer identifies alcohol.
    The chip, which also may have uses in food safety monitoring, anti-counterfeiting and other fields where trace chemicals are analyzed, is described in a study that appears on the cover of the Dec. 17 edition of the journal Advanced Optical Materials.
    “There is a great need for portable and cost-effective chemical sensors in many areas, especially drug abuse,” says the study’s lead author Qiaoqiang Gan, PhD, professor of electrical engineering in the UB School of Engineering and Applied Sciences.
    The work builds upon previous research Gan’s lab led that involved creating a chip that traps light at the edges of gold and silver nanoparticles.
    When biological or chemical molecules land on the chip’s surface, some of the captured light interacts with the molecules and is “scattered” into light of new energies. This effect occurs in recognizable patterns that act as fingerprints of chemical or biological molecules, revealing information about what compounds are present.
    Because all chemicals have unique light-scattering signatures, the technology could eventually be integrated into a handheld device for detecting drugs in blood, breath, urine and other biological samples. It could also be incorporated into other devices to identify chemicals in the air, in water, or on other surfaces.
    The sensing method is called surface-enhanced Raman spectroscopy (SERS).
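    As a rough illustration of how such scattering “fingerprints” can be used to identify a compound, the following sketch matches a measured spectrum against a small reference library by cosine similarity (this is not code from the study; the peak positions, noise level and matching rule are illustrative assumptions):

        import numpy as np

        # Wavenumber axis for a Raman spectrum (values in cm^-1; range chosen only for illustration).
        wavenumbers = np.linspace(200, 2000, 1800)

        def synthetic_spectrum(peaks):
            """Build a toy spectrum as a sum of Gaussian peaks at the given (position, height) pairs."""
            spectrum = np.zeros_like(wavenumbers)
            for center, height in peaks:
                spectrum += height * np.exp(-0.5 * ((wavenumbers - center) / 8.0) ** 2)
            return spectrum

        # Hypothetical reference "fingerprints"; a real library would hold measured spectra.
        library = {
            "compound_A": synthetic_spectrum([(520, 1.0), (1001, 0.8), (1600, 0.5)]),
            "compound_B": synthetic_spectrum([(730, 0.9), (1340, 1.0), (1580, 0.7)]),
        }

        def identify(measured):
            """Return the library entry whose fingerprint best matches the measured spectrum."""
            def cosine(a, b):
                return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
            return max(library, key=lambda name: cosine(measured, library[name]))

        # Simulate a noisy measurement of compound_A landing on the chip and identify it.
        noise = 0.05 * np.random.default_rng(0).normal(size=wavenumbers.size)
        print(identify(library["compound_A"] + noise))  # -> compound_A
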
    While effective, the chip the Gan group previously created wasn’t uniform in its design. Because the gold and silver nanoparticles were spaced unevenly, molecules could be difficult to identify from their scattered light, especially if they landed on different locations of the chip.
    Gan and a team of researchers — including members of his lab at UB, researchers from the University of Shanghai for Science and Technology in China, and researchers from King Abdullah University of Science and Technology in Saudi Arabia — have been working to remedy this shortcoming.
    The team used four molecules (BZT, 4-MBA, BPT, and TPT), each with a different length, in the fabrication process to control the size of the gaps between the gold and silver nanoparticles. The updated fabrication process is based upon two techniques, atomic layer deposition and self-assembled monolayers, as opposed to the more common and expensive method for SERS chips, electron-beam lithography.
    The result is a SERS chip with unprecedented uniformity that is relatively inexpensive to produce. More importantly, it approaches quantum-limit sensing capabilities, says Gan, something that has been a challenge for conventional SERS chips.
    “We think the chip will have many uses in addition to handheld drug detection devices,” says the first author of this work, Nan Zhang, PhD, a postdoctoral researcher in Gan’s lab. “For example, it could be used to assess air and water pollution or the safety of food. It could be useful in the security and defense sectors, and it has tremendous potential in health care.”

    Story Source:
    Materials provided by University at Buffalo. Original written by Cory Nealon. Note: Content may be edited for style and length.

  • 2D compound shows unique versatility

    An atypical two-dimensional sandwich has the tasty part on the outside for scientists and engineers developing multifunctional nanodevices.
    An atom-thin layer of the semiconductor antimony paired with ferroelectric indium selenide would display unique properties depending on the direction of its ferroelectric polarization, which can be switched by an external electric field.
    The field could be used to stabilize indium selenide’s polarization, a long-sought property that tends to be wrecked by internal fields in materials like perovskites but would be highly useful for solar energy applications.
    Calculations by Rice materials theorist Boris Yakobson, lead author and researcher Jun-Jie Zhang and graduate student Dongyang Zhu show that switching the material’s polarization with an external electric field makes it either a simple insulator with a band gap suitable for visible light absorption or a topological insulator, a material that only conducts electrons along its surface.
    Turning the field inward would make the material good for solar panels. Turning it outward could make it useful as a spintronic device for quantum computing.
    The lab’s study appears in the American Chemical Society journal Nano Letters.
    “The ability to switch at will the material’s electronic band structure is a very attractive knob,” Yakobson said. “The strong coupling between ferroelectric state and topological order can help: the applied voltage switches the topology through the ferroelectric polarization, which serves as an intermediary. This provides a new paradigm for device engineering and control.”
    Weakly bound by the van der Waals force, the layers change their physical configuration when exposed to an electric field. That changes the compound’s band gap, and the change is not trivial, Zhang said.
    “The central selenium atoms shift along with switching ferroelectric polarization,” he said. “This kind of switching in indium selenide has been observed in recent experiments.”
    Unlike other structures proposed and ultimately made by experimentalists — boron buckyballs are a good example — the switching material may be relatively simple to make, according to the researchers.
    “As opposed to typical bulk solids, easy exfoliation of van der Waals crystals along the low surface energy plane realistically allows their reassembly into heterobilayers, opening new possibilities like the one we discovered here,” Zhang said.

    Story Source:
    Materials provided by Rice University. Note: Content may be edited for style and length.

  • Using light to revolutionize artificial intelligence

    An international team of researchers, including Professor Roberto Morandotti of the Institut national de la recherche scientifique (INRS), has introduced a new photonic processor that could revolutionize artificial intelligence, as reported in the journal Nature.
    Artificial neural networks, layers of interconnected artificial neurons, are of great interest for machine learning tasks such as speech recognition and medical diagnosis. Meanwhile, electronic computing hardware is nearing the limit of its capabilities, yet the demand for greater computing power is constantly growing.
    Researchers have therefore turned to photons instead of electrons to carry information at the speed of light. Not only can photons process information much faster than electrons, but they are also the basis of the current Internet, where it is important to avoid the so-called electronic bottleneck (the conversion of an optical signal into an electronic signal, and vice versa).
    Increased Computing Speed
    The proposed optical neural network is capable of recognizing and processing large-scale data and images at ultra-high computing speeds, beyond ten trillion operations per second. Professor Morandotti, an expert in integrated photonics, explains how an optical frequency comb, a light source composed of many equally spaced frequency modes, was integrated into a computer chip and used as a power-efficient source for optical computing.
    This device performs a type of matrix-vector multiplication known as a convolution for image-processing applications. It shows promising results for real-time, massive-data machine learning tasks, such as identifying faces in camera footage or identifying pathologies in clinical scans. The approach is scalable and trainable to much more complex networks for demanding applications such as unmanned vehicles and real-time video recognition, allowing, in the not-so-distant future, full integration with the up-and-coming Internet of Things.
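    A convolution is indeed just a structured matrix-vector multiplication, which is the kind of operation the article says the device performs optically. A minimal NumPy sketch of that equivalence (the signal and kernel values are arbitrary examples, not data from the study):

        import numpy as np

        # A 1D convolution written as a matrix-vector product: the kernel is unrolled into a
        # banded (Toeplitz-like) matrix so each output element is one row dotted with the input.
        signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # arbitrary example input
        kernel = np.array([0.25, 0.5, 0.25])            # arbitrary example filter weights

        n_out = signal.size - kernel.size + 1           # "valid" output length
        conv_matrix = np.zeros((n_out, signal.size))
        for i in range(n_out):
            conv_matrix[i, i:i + kernel.size] = kernel  # sliding-window rows (correlation form)

        matrix_result = conv_matrix @ signal
        direct_result = np.correlate(signal, kernel, mode="valid")
        print(np.allclose(matrix_result, direct_result))  # True: both give the same output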

    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.

  • Computational model offers help for new hips

    Rice University engineers hope to make life better for those with replacement joints by modeling how artificial hips are likely to rub them the wrong way.
    The computational study by the Brown School of Engineering lab of mechanical engineer Fred Higgs simulates and tracks how hips evolve, uniquely incorporating fluid dynamics and roughness of the joint surfaces as well as factors clinicians typically use to predict how well implants will stand up over their expected 15-year lifetime.
    The team’s immediate goal is to advance the design of more robust prostheses.
    Ultimately, they say the model could help clinicians personalize hip joints for patients depending on gender, weight, age and gait variations.
    Higgs and co-lead authors Nia Christian, a Rice graduate student, and Gagan Srivastava, a mechanical engineering lecturer at Rice and now a research scientist at Dow Chemical, reported their results in Biotribology.
    The researchers saw a need to look beyond the limitations of earlier mechanical studies and standard clinical practices that use simple walking as a baseline to evaluate artificial hips without incorporating higher-impact activities.

    “When we talk to surgeons, they tell us a lot of their decisions are based on their wealth of experience,” Christian said. “But some have expressed a desire for better diagnostic tools to predict how long an implant is going to last.
    “Fifteen years sounds like a long time but if you need to put an artificial hip into someone who’s young and active, you want it to last longer so they don’t have multiple surgeries,” she said.
    Higgs’ Particle Flow and Tribology Lab was invited by Rice mechanical engineer and bioengineer B.J. Fregly to collaborate on his work to model human motion to improve life for patients with neurologic and orthopedic impairments.
    “He wanted to know if we could predict how long their best candidate hip joints would last,” said Higgs, Rice’s John and Ann Doerr Professor in Mechanical Engineering and a joint professor of Bioengineering, whose own father’s knee replacement partially inspired the study. “So our model uses walking motion of real patients.”
    Physical simulators need to run millions of cycles to predict wear and failure points, and can take months to get results. Higgs’ model seeks to speed up and simplify the process by analyzing real motion capture data like that produced by the Fregly lab along with data from “instrumented” hip implants studied by Georg Bergmann at the Free University of Berlin.

    The new study incorporates the four distinct modes of physics — contact mechanics, fluid dynamics, wear and particle dynamics — at play in hip motion. No previous studies considered all four simultaneously, according to the researchers.
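    The study couples all four of these physics modes; as a much simpler illustration of the wear component alone, the sketch below accumulates wear with an Archard-type law over years of gait cycles (this is not the authors’ model, and every parameter value is an assumption chosen only to show the idea):

        # Illustrative Archard-type wear estimate for a polyethylene hip cup. This is NOT the
        # coupled contact/fluid/wear/particle model from the study; all numbers are assumptions.
        wear_factor = 1.0e-6         # mm^3 of material removed per (N * m), rough assumed value
        joint_load_n = 2000.0        # representative hip joint load during walking (N), assumed
        slide_per_cycle_m = 0.02     # sliding distance at the bearing surface per gait cycle (m), assumed
        cycles_per_year = 1_000_000  # roughly one million steps per year, assumed

        def wear_volume_mm3(years: float) -> float:
            """Accumulated wear volume in mm^3 after the given number of years of walking."""
            sliding_distance_m = slide_per_cycle_m * cycles_per_year * years
            return wear_factor * joint_load_n * sliding_distance_m

        for years in (1, 5, 15):
            print(f"{years:>2} years: about {wear_volume_mm3(years):.0f} mm^3 of wear")
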
    One issue others didn’t consider was the changing makeup of the lubricant between bones. Natural joints contain synovial fluid, an extracellular liquid with a consistency similar to egg whites and secreted by the synovial membrane, connective tissue that lines the joint. When a hip is replaced, the membrane is preserved and continues to express the fluid.
    “In healthy natural joints, the fluid generates enough pressure so that you don’t have contact, so we all walk without pain,” Higgs said. “But an artificial hip joint generally undergoes partial contact, which increasingly wears and deteriorates your implanted joint over time. We call this kind of rubbing mixed lubrication.”
    That rubbing can lead to increased generation of wear debris, especially from the plastic material — an ultrahigh molecular weight polyethylene — commonly used as the socket (the acetabular cup) in artificial joints. These particles, estimated at up to 5 microns in size, mix with the synovial fluid and can sometimes escape the joint.
    “Eventually, they can loosen the implant or cause the surrounding tissue to break down,” Christian said. “And they often get carried to other parts of the body, where they can cause osteolysis. There’s a lot of debate over where they end up but you want to avoid having them irritate the rest of your body.”
    She noted the use of metal sockets rather than plastic is a topic of interest. “There’s been a strong push toward metal-on-metal hips because metal is durable,” Christian said. “But some of these cause metal shavings to break off. As they build up over time, they seem to be much more damaging than polyethylene particles.”
    Further inspiration for the new study came from two previous works by Higgs and colleagues that had nothing to do with bioengineering. The first looked at chemical mechanical polishing of semiconductor wafers used in integrated circuit manufacturing. The second pushed their predictive modeling from micro-scale to full wafer-scale interfaces.
    The researchers noted that future iterations of the model will incorporate more of the novel materials being used in joint replacement.

  • Researchers acquire 3D images with LED room lighting and a smartphone

    As LEDs replace traditional lighting systems, they bring more smart capabilities to everyday lighting. While you might use your smartphone to dim LED lighting at home, researchers have taken this further by tapping into dynamically controlled LEDs to create a simple illumination system for 3D imaging.
    “Current video surveillance systems such as the ones used for public transport rely on cameras that provide only 2D information,” said Emma Le Francois, a doctoral student in the research group led by Martin Dawson, Johannes Herrnsdorf and Michael Strain at the University of Strathclyde in the UK. “Our new approach could be used to illuminate different indoor areas to allow better surveillance with 3D images, create a smart work area in a factory, or to give robots a more complete sense of their environment.”
    In The Optical Society (OSA) journal Optics Express, the researchers demonstrate that 3D optical imaging can be performed with a cell phone and LEDs without requiring any complex manual processes to synchronize the camera with the lighting.
    “Deploying a smart-illumination system in an indoor area allows any camera in the room to use the light and retrieve the 3D information from the surrounding environment,” said Le Francois. “LEDs are being explored for a variety of different applications, such as optical communication, visible light positioning and imaging. One day the LED smart-lighting system used for lighting an indoor area might be used for all of these applications at the same time.”
    Illuminating from above
    Human vision relies on the brain to reconstruct depth information when we view a scene from two slightly different directions with our two eyes. Depth information can also be acquired using a method called photometric stereo imaging in which one detector, or camera, is combined with illumination that comes from multiple directions. This lighting setup allows images to be recorded with different shadowing, which can then be used to reconstruct a 3D image.
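    In the standard formulation of photometric stereo, each pixel’s brightness under several known light directions is stacked into a small least-squares problem whose solution gives the surface normal and albedo. A minimal sketch under a Lambertian (matte) surface assumption, with made-up light directions and a single simulated pixel:

        import numpy as np

        # Photometric stereo for one pixel under a Lambertian model: intensity = albedo * (light . normal).
        # Light directions and the simulated surface normal below are made-up illustrative values.
        light_dirs = np.array([
            [0.0,  0.0, 1.0],     # e.g. an overhead LED
            [0.5,  0.0, 0.866],
            [0.0,  0.5, 0.866],
            [-0.5, 0.0, 0.866],
        ])

        true_normal = np.array([0.2, -0.1, 0.97])
        true_normal /= np.linalg.norm(true_normal)
        albedo = 0.8
        intensities = albedo * light_dirs @ true_normal       # one measurement per light source

        # Solve light_dirs @ g = intensities for g = albedo * normal by least squares.
        g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
        print("albedo:", np.linalg.norm(g))                   # ~0.8
        print("normal:", g / np.linalg.norm(g))               # ~[0.2, -0.1, 0.97]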

    Photometric stereo imaging traditionally requires four light sources, such as LEDs, which are deployed symmetrically around the viewing axis of a camera. In the new work, the researchers show that 3D images can also be reconstructed when objects are illuminated from the top down but imaged from the side. This setup allows overhead room lighting to be used for illumination.
    In work supported under the UK’s EPSRC ‘Quantic’ research program, the researchers developed algorithms that modulate each LED in a unique way. This acts like a fingerprint that allows the camera to determine which LED generated which image to facilitate the 3D reconstruction. The new modulation approach also carries its own clock signal so that the image acquisition can be self-synchronized with the LEDs by simply using the camera to passively detect the LED clock signal.
    “We wanted to make photometric stereo imaging more easily deployable by removing the link between the light sources and the camera,” said Le Francois. “To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronized with the camera.”
    3D imaging with a smartphone
    To demonstrate this new approach, the researchers used their modulation scheme with a photometric stereo setup based on commercially available LEDs. A simple Arduino board provided the electronic control for the LEDs. Images were captured using the high-speed video mode of a smartphone. They imaged a 48-millimeter-tall figurine that they 3D printed with a matte material to avoid any shiny surfaces that might complicate imaging.
    After identifying the best position for the LEDs and the smartphone, the researchers achieved a reconstruction error of just 2.6 millimeters for the figurine when imaged from 42 centimeters away. This level of error shows that the quality of the reconstruction was comparable to that of other photometric stereo imaging approaches. They were also able to reconstruct images of a moving object and showed that the method is not affected by ambient light.
    In the current system, the image reconstruction takes a few minutes on a laptop. To make the system practical, the researchers are working to decrease the computational time to just a few seconds by incorporating a deep-learning neural network that would learn to reconstruct the shape of the object from the raw image data.

    Story Source:
    Materials provided by The Optical Society. Note: Content may be edited for style and length.

  • Computer scientists: We wouldn't be able to control super intelligent machines

    We are fascinated by machines that can control cars, compose symphonies, or defeat people at chess, Go, or Jeopardy! While more progress is being made all the time in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of an uncontrollable superintelligent AI. Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a superintelligent AI. The study was published in the Journal of Artificial Intelligence Research.
    Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently. Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?
    Computer scientists and philosophers have asked themselves whether we would even be able to control a superintelligent AI at all, to ensure it would not pose a threat to humanity. An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI.
    “A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development.
    Scientists have explored two different ideas for how a superintelligent AI could be controlled. On one hand, the capabilities of a superintelligent AI could be specifically limited, for example, by walling it off from the Internet and all other technical devices so it could have no contact with the outside world — yet this would render the superintelligent AI significantly less powerful, less able to answer humanity’s quests. On the other hand, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling super-intelligent AI have their limits.
    In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by simulating the behavior of the AI first and halting it if considered harmful. But careful analysis shows that in our current paradigm of computing, such an algorithm cannot be built.
    “If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” says Iyad Rahwan, Director of the Center for Humans and Machines.
    Based on these calculations, the containment problem is incomputable, i.e. no single algorithm can determine with certainty whether an AI would produce harm to the world. Furthermore, the researchers demonstrate that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans is in the same realm as the containment problem.
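    The underlying argument has the same flavor as Turing’s halting-problem proof: a program can be built that consults the hypothetical checker about itself and then does the opposite. The toy sketch below illustrates that diagonalization; it is a standard textbook construction, not the formal proof from the paper:

        # Toy illustration of the diagonalization behind such impossibility results. Assume a
        # hypothetical function is_harmful(program, data) that always correctly decides whether
        # running `program` on `data` leads to a harmful action. The program below then defeats it.

        def is_harmful(program, data):
            """Hypothetical perfect containment check; no such total, always-correct checker can exist."""
            raise NotImplementedError("placeholder for the assumed checker")

        def contrarian(program):
            # Ask the checker about this very situation and do the opposite of its verdict:
            # if it predicts harm, stay idle; if it predicts safety, perform the harmful action.
            if is_harmful(program, program):
                return "idle"
            return "harmful action"

        # Asking the checker about contrarian applied to itself forces a contradiction:
        # whichever answer is_harmful(contrarian, contrarian) gives, contrarian does the opposite,
        # so no always-correct containment checker can exist.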

    Story Source:
    Materials provided by Max Planck Institute for Human Development. Note: Content may be edited for style and length.

  • Electrically switchable qubit can tune between storage and fast calculation modes

    To perform calculations, quantum computers need qubits to act as elementary building blocks that process and store information. Now, physicists have produced a new type of qubit that can be switched from a stable idle mode to a fast calculation mode. The concept would also allow a large number of qubits to be combined into a powerful quantum computer, as researchers from the University of Basel and TU Eindhoven have reported in the journal Nature Nanotechnology.
    Compared with conventional bits, quantum bits (qubits) are much more fragile and can lose their information content very quickly. The challenge for quantum computing is therefore to keep the sensitive qubits stable over a prolonged period of time, while at the same time finding ways to perform rapid quantum operations. Now, physicists from the University of Basel and TU Eindhoven have developed a switchable qubit that should allow quantum computers to do both.
    The new type of qubit has a stable but slow state that is suitable for storing quantum information. However, the researchers were also able to switch the qubit into a much faster but less stable manipulation mode by applying an electrical voltage. In this state, the qubits can be used to process information quickly.
    Selective coupling of individual spins
    In their experiment, the researchers created the qubits in the form of “hole spins.” These are formed when an electron is deliberately removed from a semiconductor, and the resulting hole has a spin that can adopt two states, up and down — analogous to the values 0 and 1 in classical bits. In the new type of qubit, these spins can be selectively coupled — via a photon, for example — to other spins by tuning their resonant frequencies.
    This capability is vital, since the construction of a powerful quantum computer requires the ability to selectively control and interconnect many individual qubits. Scalability is particularly necessary to reduce the error rate in quantum calculations.
    Ultrafast spin manipulation
    The researchers were also able to use the electrical switch to manipulate the spin qubits at record speed. “The spin can be coherently flipped from up to down in as little as a nanosecond,” says project leader Professor Dominik Zumbühl from the Department of Physics at the University of Basel. “That would allow up to a billion switches per second. Spin qubit technology is therefore already approaching the clock speeds of today’s conventional computers.”
    For their experiments, the researchers used a semiconductor nanowire made of silicon and germanium. Produced at TU Eindhoven, the wire has a tiny diameter of about 20 nanometers. As the qubit is therefore also extremely small, it should in principle be possible to incorporate millions or even billions of these qubits onto a chip.

    Story Source:
    Materials provided by University of Basel. Note: Content may be edited for style and length. More