More stories

  • Why independent cultures think alike when it comes to categories: It's not in the brain

    Imagine you gave the exact same art pieces to two different groups of people and asked them to curate an art show. The art is radical and new. The groups never speak with one another, and they organize and plan all the installations independently. On opening night, imagine your surprise when the two art shows are nearly identical. How did these groups categorize and organize all the art the same way when they never spoke with one another?
    The dominant hypothesis is that people are born with categories already in their brains, but a study from the Network Dynamics Group (NDG) at the Annenberg School for Communication has discovered a novel explanation. In an experiment in which people were asked to categorize unfamiliar shapes, individuals and small groups created many different unique categorization systems while large groups created systems nearly identical to one another.
    “If people are all born seeing the world the same way, we would not observe so many differences in how individuals organize things,” says senior author Damon Centola, Professor of Communication, Sociology, and Engineering at the University of Pennsylvania. “But this raises a big scientific puzzle. If people are so different, why do anthropologists find the same categories, for instance for shapes, colors, and emotions, arising independently in many different cultures? Where do these categories come from and why is there so much similarity across independent populations?”
    To answer this question, the researchers assigned participants to groups of various sizes, ranging from 1 to 50 people, and then asked them to play an online game in which they were shown unfamiliar shapes that they then had to categorize in a meaningful way. All of the small groups invented wildly different ways of categorizing the shapes. Yet, when large groups were left to their own devices, each one independently invented a nearly identical category system.
    “If I assign an individual to a small group, they are much more likely to arrive at a category system that is very idiosyncratic and specific to them,” says lead author and Annenberg alum Douglas Guilbeault (Ph.D. ’20), now an Assistant Professor at the Haas School of Business at the University of California, Berkeley. “But if I assign that same individual to a large group, I can predict the category system that they will end up creating, regardless of whatever unique viewpoint that person happens to bring to the table.”
    “Even though we predicted it,” Centola adds, “I was nevertheless stunned to see it really happen. This result challenges many long-held ideas about culture and how it forms.”
    The explanation is connected to previous work conducted by the NDG on tipping points and how people interact within networks. As options are suggested within a network, certain ones begin to be reinforced as they are repeated through individuals’ interactions with one another, and eventually a particular idea has enough traction to take over and become dominant. This only applies to large enough networks, but according to Centola, even just 50 people is enough to see this phenomenon occur.
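    For readers who want to see the idea in code, here is a minimal Python sketch of that reinforcement dynamic. It is a toy model loosely inspired by naming-game dynamics, not the study's actual experiment; the candidate options, bias weights, group sizes, and interaction rule are all illustrative assumptions.

      import random
      from collections import Counter

      # Toy model (not the study's design): candidate category systems with
      # unequal population-level appeal. Option 0 is slightly more "natural"
      # on average, but individuals vary.
      OPTIONS = [0, 1, 2, 3]
      WEIGHTS = [0.4, 0.2, 0.2, 0.2]

      def run_group(size, rounds, rng):
          """Agents repeatedly pair up and suggest options; options that get
          repeated are reinforced until one dominates the group."""
          counts = [Counter() for _ in range(size)]
          for _ in range(rounds * size):
              a, b = rng.sample(range(size), 2)
              if counts[a]:
                  suggestion = counts[a].most_common(1)[0][0]    # repeat what has worked
              else:
                  suggestion = rng.choices(OPTIONS, WEIGHTS)[0]  # fall back on own bias
              counts[a][suggestion] += 1
              counts[b][suggestion] += 1
          winners = Counter(c.most_common(1)[0][0] for c in counts if c)
          return winners.most_common(1)[0][0]

      rng = random.Random(1)
      print("small groups:", [run_group(2, 20, rng) for _ in range(10)])   # typically a mix of systems
      print("large groups:", [run_group(50, 20, rng) for _ in range(10)])  # typically all settle on option 0

    In this toy dynamic, small groups lock in whatever their few members happen to prefer, while large groups reliably amplify the option with the strongest population-level pull, which is the qualitative pattern the study reports.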
    Centola and Guilbeault say they plan to build on their findings and apply them to a variety of real-world problems. One current study involves content moderation on Facebook and Twitter. Can the process of categorizing free speech versus hate speech (and thus what should be allowed versus removed) be improved if done in networks rather than by solitary individuals? Another current study is investigating how to use network interactions among physicians and other health care professionals to decrease the likelihood that patients will be incorrectly diagnosed or treated due to prejudice or bias, like racism or sexism. These topics are explored in Centola’s forthcoming book, CHANGE: How to Make Big Things Happen (Little, Brown & Co., 2021).
    “Many of the worst social problems reappear in every culture, which leads some to believe these problems are intrinsic to the human condition,” says Centola. “Our research shows that these problems are intrinsic to the social experiences humans have, not necessarily to humans themselves. If we can alter that social experience, we can change the way people organize things, and address some of the world’s greatest problems.”
    This study was partially funded by a Dissertation Award granted to Guilbeault by the Institute for Research on Innovation and Science at the University of Michigan.

    Story Source:
    Materials provided by University of Pennsylvania. Note: Content may be edited for style and length.

  • Tweaking AI software to function like a human brain improves computer's learning ability

    Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed such a model to mirror human visual learning.
    In the journal Frontiers in Computational Neuroscience, Maximilian Riesenhuber, PhD, professor of neuroscience at Georgetown University Medical Center, and Joshua Rule, PhD, a postdoctoral scholar at UC Berkeley, explain how the new approach vastly improves the ability of AI software to quickly learn new visual concepts.
    “Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” says Riesenhuber. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
    Humans can quickly and accurately learn new visual concepts from sparse data, sometimes just a single example. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to “see” many examples of the same object to know what it is, Riesenhuber explains.
    The big change needed was in designing software to identify relationships between entire visual categories, instead of trying the more standard approach of identifying an object using only low-level and intermediate information, such as shape and color, Riesenhuber says.
    “The computational power of the brain’s hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects,” he says.
    Riesenhuber and Rule found that artificial neural networks, which represent objects in terms of previously learned concepts, learned new visual concepts significantly faster.
    Rule explains, “Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts. It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter.”
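    A minimal Python sketch of that general idea, describing a new category by its similarity to previously learned concepts rather than by raw features, is shown below. It illustrates few-shot classification under these assumptions and is not the authors' published network; the random vectors merely stand in for a pretrained model's concept representations.

      import numpy as np

      def to_concept_space(x, known_prototypes):
          """Describe an input by its similarity to previously learned concepts
          rather than by its raw low-level features."""
          sims = known_prototypes @ x
          return sims / (np.linalg.norm(known_prototypes, axis=1) * np.linalg.norm(x) + 1e-9)

      def learn_new_concept(examples, known_prototypes):
          """Few-shot learning: a new concept is the average of its few examples,
          expressed in the space of known concepts."""
          return np.mean([to_concept_space(x, known_prototypes) for x in examples], axis=0)

      def classify(x, new_concepts, known_prototypes):
          q = to_concept_space(x, known_prototypes)
          return max(new_concepts, key=lambda name: q @ new_concepts[name])

      rng = np.random.default_rng(0)
      known = rng.normal(size=(10, 64))                # 10 previously learned concepts
      a, b = rng.normal(size=64), rng.normal(size=64)  # one example each of two new concepts
      new = {"zebra": learn_new_concept([a], known), "platypus": learn_new_concept([b], known)}
      print(classify(a + 0.1 * rng.normal(size=64), new, known))   # expected: "zebra"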
    The brain architecture underlying human visual concept learning builds on the neural networks involved in object recognition. The anterior temporal lobe of the brain is thought to contain “abstract” concept representations that go beyond shape. These complex neural hierarchies for visual recognition allow humans to learn new tasks and, crucially, leverage prior learning.
    “By reusing these concepts, you can more easily learn new concepts, new meaning, such as the fact that a zebra is simply a horse of a different stripe,” Riesenhuber says.
    Despite advances in AI, the human visual system is still the gold standard in terms of ability to generalize from few examples, robustly deal with image variations, and comprehend scenes, the scientists say.
    “Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber concludes.
    This work was supported in part by Lawrence Livermore National Laboratory and by National Science Foundation Graduate Research Fellowship grants (1026934 and 1232530).

  • Comprehensive characterization of vascular structure in plants

    The leaf vasculature of plants plays a key role in transporting solutes from where they are made — for example from the plant cells driving photosynthesis — to where they are stored or used. Sugars and amino acids are transported from the leaves to the roots and the seeds via the conductive pathways of the phloem.
    Phloem is the part of the tissue in vascular plants that comprises the sieve elements — where actual translocation takes place — and the companion cells as well as the phloem parenchyma cells. The leaf veins consist of at least seven distinct cell types, with specific roles in transport, metabolism and signalling.
    Little is known about the vascular cells in leaves, in particular the phloem parenchyma. Two Alexander von Humboldt professorship teams from Düsseldorf and Tübingen, a colleague from Urbana-Champaign in Illinois, USA, and a chair of bioinformatics from Düsseldorf have presented the first comprehensive analysis of the vascular cells in the leaves of thale cress (Arabidopsis thaliana) using single cell sequencing.
    The team led by Alexander von Humboldt Professor Dr. Marja Timmermans from Tübingen University was the first to use single cell sequencing in plants to characterise root cells. In collaboration with Prof. Timmermans’ group, researchers working with Alexander von Humboldt Professor Dr. Wolf Frommer in Düsseldorf succeeded for the first time in isolating plant cells to create an atlas of all regulatory RNA molecules (the transcriptome) of the leaf vasculature. They were able to define the roles of the different cells by analysing the metabolic pathways.
    Among other things, the research team proved for the first time that transcripts of sugar (SWEET) and amino acid (UmamiT) transporters are found in the phloem parenchyma cells, which transport these compounds from where they are produced to the vascular system. The compounds are subsequently actively imported into the sieve element companion cell complex via a second group of transporters (SUT or AAP) and then exported from the source leaf.
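    As an illustration of the kind of analysis involved, the sketch below uses the scanpy single-cell toolkit to cluster cells and check where transporter transcripts are enriched. The input file name and the marker gene list are hypothetical placeholders, and this is not the authors' actual pipeline.

      import scanpy as sc

      # Hypothetical input: a cells-by-genes matrix from a leaf-vasculature experiment.
      adata = sc.read_h5ad("leaf_vasculature.h5ad")

      # Standard single-cell preprocessing and clustering into candidate cell types.
      sc.pp.normalize_total(adata, target_sum=1e4)
      sc.pp.log1p(adata)
      sc.pp.highly_variable_genes(adata, n_top_genes=2000)
      sc.pp.pca(adata, n_comps=50)
      sc.pp.neighbors(adata)
      sc.tl.leiden(adata)

      # Which clusters are enriched for SWEET and UmamiT transporter transcripts?
      # (Marker names are illustrative placeholders for the genes of interest.)
      markers = ["SWEET11", "SWEET12", "UMAMIT18"]
      sc.pl.dotplot(adata, markers, groupby="leiden")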
    These extensive investigations involved close collaborations with HHU bioinformatics researchers in Prof. Dr. Martin Lercher’s working group. Together they were able to determine that phloem parenchyma and companion cells have complementary metabolic pathways and are therefore in a position to control the composition of the phloem sap.
    First author and leader of the work group Dr. Ji-Yun Kim from HHU explains: “Our analysis provides completely new insights into the leaf vasculature and the role and relationship of the individual leaf cell types.” Institute Head Prof. Frommer adds: “The cooperation between the four working groups made it possible to use new methods to gain insights for the first time into the important cells in plant pathways and to therefore obtain a basis for a better understanding of plant metabolism.”

    Story Source:
    Materials provided by Heinrich-Heine University Duesseldorf. Original written by Arne Claussen. Note: Content may be edited for style and length.

  • Researchers report quantum-limit-approaching chemical sensing chip

    University at Buffalo researchers are reporting an advancement of a chemical sensing chip that could lead to handheld devices that detect trace chemicals — everything from illicit drugs to pollution — as quickly as a breathalyzer identifies alcohol.
    The chip, which also may have uses in food safety monitoring, anti-counterfeiting and other fields where trace chemicals are analyzed, is described in a study that appears on the cover of the Dec. 17 edition of the journal Advanced Optical Materials.
    “There is a great need for portable and cost-effective chemical sensors in many areas, especially drug abuse,” says the study’s lead author Qiaoqiang Gan, PhD, professor of electrical engineering in the UB School of Engineering and Applied Sciences.
    The work builds upon previous research Gan’s lab led that involved creating a chip that traps light at the edges of gold and silver nanoparticles.
    When biological or chemical molecules land on the chip’s surface, some of the captured light interacts with the molecules and is “scattered” into light of new energies. This effect occurs in recognizable patterns that act as fingerprints of chemical or biological molecules, revealing information about what compounds are present.
    Because all chemicals have unique light-scattering signatures, the technology could eventually be integrated into a handheld device for detecting drugs in blood, breath, urine and other biological samples. It could also be incorporated into other devices to identify chemicals in the air or from water, as well as other surfaces.
    The sensing method is called surface-enhanced Raman spectroscopy (SERS).
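    As a rough illustration of how such fingerprints are used, the Python sketch below matches a measured spectrum against a small reference library by cosine similarity. The compounds, peak positions, and spectra are made-up placeholders, not data from the study.

      import numpy as np

      def identify(measured, library):
          """Match a measured spectrum against reference spectra (all sampled on the
          same wavenumber grid) by cosine similarity and return the best hit."""
          m = measured / (np.linalg.norm(measured) + 1e-12)
          scores = {name: float(m @ (ref / (np.linalg.norm(ref) + 1e-12)))
                    for name, ref in library.items()}
          best = max(scores, key=scores.get)
          return best, scores[best]

      # Toy library: synthetic spectra with characteristic peaks (illustrative only).
      grid = np.linspace(400, 1800, 700)            # Raman shift, cm^-1
      peak = lambda c, w=15: np.exp(-((grid - c) / w) ** 2)
      library = {"compound A": peak(520) + 0.6 * peak(1340),
                 "compound B": peak(610) + 0.8 * peak(1580)}
      noise = 0.05 * np.random.default_rng(0).normal(size=grid.size)
      print(identify(peak(520) + 0.6 * peak(1340) + noise, library))   # expected: compound A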
    While effective, the chip the Gan group previously created wasn’t uniform in its design. Because the gold and silver nanoparticles were spaced unevenly, molecules could be difficult to identify from their scattered light, especially if they landed on different locations of the chip.
    Gan and a team of researchers — featuring members of his lab at UB, and researchers from the University of Shanghai for Science and Technology in China, and King Abdullah University of Science and Technology in Saudi Arabia — have been working to remedy this shortcoming.
    The team used four molecules (BZT, 4-MBA, BPT, and TPT), each with different lengths, in the fabrication process to control the size of the gaps in between the gold and silver nanoparticles. The updated fabrication process is based upon two techniques, atomic layer deposition and self-assembled monolayers, as opposed to the more common and expensive method for SERS chips, electron-beam lithography.
    The result is a SERS chip with unprecedented uniformity that is relatively inexpensive to produce. More importantly, it approaches quantum-limit sensing capabilities, says Gan, which was a challenge for conventional SERS chips.
    “We think the chip will have many uses in addition to handheld drug detection devices,” says the first author of this work, Nan Zhang, PhD, a postdoctoral researcher in Gan’s lab. “For example, it could be used to assess air and water pollution or the safety of food. It could be useful in the security and defense sectors, and it has tremendous potential in health care.”

    Story Source:
    Materials provided by University at Buffalo. Original written by Cory Nealon. Note: Content may be edited for style and length.

  • 2D compound shows unique versatility

    An atypical two-dimensional sandwich has the tasty part on the outside for scientists and engineers developing multifunctional nanodevices.
    An atom-thin layer of the semiconductor antimony paired with ferroelectric indium selenide would display unique properties depending on which side is exposed and on its polarization by an external electric field.
    The field could be used to stabilize indium selenide’s polarization, a long-sought property that tends to be wrecked by internal fields in materials like perovskites but would be highly useful for solar energy applications.
    Calculations by Rice materials theorist Boris Yakobson, lead author and researcher Jun-Jie Zhang and graduate student Dongyang Zhu show that switching the material’s polarization with an external electric field makes it either a simple insulator with a band gap suitable for visible light absorption or a topological insulator, a material that only conducts electrons along its surface.
    Turning the field inward would make the material good for solar panels. Turning it outward could make it useful as a spintronic device for quantum computing.
    The lab’s study appears in the American Chemical Society journal Nano Letters.
    “The ability to switch at will the material’s electronic band structure is a very attractive knob,” Yakobson said. “The strong coupling between ferroelectric state and topological order can help: the applied voltage switches the topology through the ferroelectric polarization, which serves as an intermediary. This provides a new paradigm for device engineering and control.”
    Weakly bound by the van der Waals force, the layers change their physical configuration when exposed to an electric field. That changes the compound’s band gap, and the change is not trivial, Zhang said.
    “The central selenium atoms shift along with switching ferroelectric polarization,” he said. “This kind of switching in indium selenide has been observed in recent experiments.”
    Unlike other structures proposed and ultimately made by experimentalists — boron buckyballs are a good example — the switching material may be relatively simple to make, according to the researchers.
    “As opposed to typical bulk solids, easy exfoliation of van der Waals crystals along the low surface energy plane realistically allows their reassembly into heterobilayers, opening new possibilities like the one we discovered here,” Zhang said.

    Story Source:
    Materials provided by Rice University. Note: Content may be edited for style and length.

  • Using light to revolutionize artificial intelligence

    An international team of researchers, including Professor Roberto Morandotti of the Institut national de la recherche scientifique (INRS), has just introduced a new photonic processor that could revolutionize artificial intelligence, as reported in the journal Nature.
    Artificial neural networks, layers of interconnected artificial neurons, are of great interest for machine learning tasks such as speech recognition and medical diagnosis. Currently, electronic computing hardware is nearing the limit of its capabilities, yet the demand for greater computing power is constantly growing.
    Researchers have therefore turned to photons instead of electrons to carry information at the speed of light. Not only can photons process information much faster than electrons, they are also the basis of the current Internet, where it is important to avoid the so-called electronic bottleneck (the conversion of an optical signal into an electronic signal, and vice versa).
    Increased Computing Speed
    The proposed optical neural network is capable of recognizing and processing large-scale data and images at ultra-high computing speeds, beyond ten trillion operations per second. Professor Morandotti, an expert in integrated photonics, explains how an optical frequency comb, a light source comprised of many equally spaced frequency modes, was integrated into a computer chip and used as a power-efficient source for optical computing.
    This device performs a type of matrix-vector multiplication known as a convolution, used for image-processing applications. It shows promising results for real-time, massive-data machine learning tasks such as identifying faces in camera footage or identifying pathologies in clinical scans. The approach is scalable and can be trained for much more complex networks for demanding applications such as unmanned vehicles and real-time video recognition, allowing, in the near future, full integration with the emerging Internet of Things.
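    For readers unfamiliar with the terminology, the sketch below shows in plain NumPy how a convolution can be expressed as a matrix-vector product, the operation such hardware accelerates. It illustrates the math only and says nothing about the optical implementation itself.

      import numpy as np

      def conv2d_as_matvec(image, kernel):
          """Express a 2D convolution as a matrix-vector product: every output pixel
          is the dot product of the flattened kernel with one image patch (im2col)."""
          kh, kw = kernel.shape
          oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
          patches = np.empty((oh * ow, kh * kw))
          for i in range(oh):
              for j in range(ow):
                  patches[i * ow + j] = image[i:i + kh, j:j + kw].ravel()
          return (patches @ kernel.ravel()).reshape(oh, ow)

      image = np.arange(25, dtype=float).reshape(5, 5)
      edge = np.array([[1.0, 0.0, -1.0]] * 3)     # simple vertical-edge kernel
      print(conv2d_as_matvec(image, edge))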

    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.

  • Computational model offers help for new hips

    Rice University engineers hope to make life better for those with replacement joints by modeling how artificial hips are likely to rub them the wrong way.
    The computational study by the Brown School of Engineering lab of mechanical engineer Fred Higgs simulates and tracks how hips evolve, uniquely incorporating fluid dynamics and roughness of the joint surfaces as well as factors clinicians typically use to predict how well implants will stand up over their expected 15-year lifetime.
    The team’s immediate goal is to advance the design of more robust prostheses.
    Ultimately, they say the model could help clinicians personalize hip joints for patients depending on gender, weight, age and gait variations.
    Higgs and co-lead authors Nia Christian, a Rice graduate student, and Gagan Srivastava, a mechanical engineering lecturer at Rice and now a research scientist at Dow Chemical, reported their results in Biotribology.
    The researchers saw a need to look beyond the limitations of earlier mechanical studies and standard clinical practices that use simple walking as a baseline to evaluate artificial hips without incorporating higher-impact activities.

    “When we talk to surgeons, they tell us a lot of their decisions are based on their wealth of experience,” Christian said. “But some have expressed a desire for better diagnostic tools to predict how long an implant is going to last.
    “Fifteen years sounds like a long time but if you need to put an artificial hip into someone who’s young and active, you want it to last longer so they don’t have multiple surgeries,” she said.
    Higgs’ Particle Flow and Tribology Lab was invited by Rice mechanical and bioengineer B.J. Fregly to collaborate on his work to model human motion to improve life for patients with neurologic and orthopedic impairments.
    “He wanted to know if we could predict how long their best candidate hip joints would last,” said Higgs, Rice’s John and Ann Doerr Professor in Mechanical Engineering and a joint professor of Bioengineering, whose own father’s knee replacement partially inspired the study. “So our model uses walking motion of real patients.”
    Physical simulators need to run millions of cycles to predict wear and failure points, and can take months to get results. Higgs’ model seeks to speed up and simplify the process by analyzing real motion capture data like that produced by the Fregly lab along with data from “instrumented” hip implants studied by Georg Bergmann at the Free University of Berlin.
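    For intuition about what a cycle-by-cycle wear estimate looks like, here is a minimal Python sketch that accumulates wear with the classical Archard relation (worn volume proportional to load times sliding distance). The wear factor, load profile, and sliding increments are illustrative placeholders, and the published model is far richer, coupling contact mechanics, fluid dynamics, wear and particle dynamics.

      import numpy as np

      def wear_per_cycle(contact_force, sliding_increment, wear_factor=1.0e-6):
          """Archard-type estimate: worn volume = wear_factor * sum(F * ds),
          with wear_factor in mm^3 per N*m (a typical order of magnitude for
          polyethylene cups), F in newtons and ds in metres."""
          return wear_factor * np.sum(contact_force * sliding_increment)

      # Illustrative gait cycle (placeholder values, not measured patient data):
      # peak contact load of ~2.5 kN and ~30 mm of total sliding per step.
      t = np.linspace(0.0, 1.0, 100)
      force = 2500.0 * np.clip(np.sin(np.pi * t), 0.0, None)   # N
      sliding = np.full_like(t, 0.3e-3)                        # m per sample

      per_cycle = wear_per_cycle(force, sliding)
      steps_per_year = 1e6                                     # ~1 million steps per year
      print(f"rough wear estimate: {per_cycle * steps_per_year:.1f} mm^3 per year")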

    The new study incorporates the four distinct modes of physics — contact mechanics, fluid dynamics, wear and particle dynamics — at play in hip motion. No previous studies considered all four simultaneously, according to the researchers.
    One issue others didn’t consider was the changing makeup of the lubricant between bones. Natural joints contain synovial fluid, an extracellular liquid with a consistency similar to egg whites and secreted by the synovial membrane, connective tissue that lines the joint. When a hip is replaced, the membrane is preserved and continues to express the fluid.
    “In healthy natural joints, the fluid generates enough pressure so that you don’t have contact, so we all walk without pain,” Higgs said. “But an artificial hip joint generally undergoes partial contact, which increasingly wears and deteriorates your implanted joint over time. We call this kind of rubbing mixed lubrication.”
    That rubbing can lead to increased generation of wear debris, especially from the plastic material — an ultrahigh molecular weight polyethylene — commonly used as the socket (the acetabular cup) in artificial joints. These particles, estimated at up to 5 microns in size, mix with the synovial fluid and can sometimes escape the joint.
    “Eventually, they can loosen the implant or cause the surrounding tissue to break down,” Christian said. “And they often get carried to other parts of the body, where they can cause osteolysis. There’s a lot of debate over where they end up but you want to avoid having them irritate the rest of your body.”
    She noted the use of metal sockets rather than plastic is a topic of interest. “There’s been a strong push toward metal-on-metal hips because metal is durable,” Christian said. “But some of these cause metal shavings to break off. As they build up over time, they seem to be much more damaging than polyethylene particles.”
    Further inspiration for the new study came from two previous works by Higgs and colleagues that had nothing to do with bioengineering. The first looked at chemical mechanical polishing of semiconductor wafers used in integrated circuit manufacturing. The second pushed their predictive modeling from micro-scale to full wafer-scale interfaces.
    The researchers noted future iterations of the model will incorporate more novel materials being used in joint replacement.

  • Researchers acquire 3D images with LED room lighting and a smartphone

    As LEDs replace traditional lighting systems, they bring more smart capabilities to everyday lighting. While you might use your smartphone to dim LED lighting at home, researchers have taken this further by tapping into dynamically controlled LEDs to create a simple illumination system for 3D imaging.
    “Current video surveillance systems such as the ones used for public transport rely on cameras that provide only 2D information,” said Emma Le Francois, a doctoral student in the research group led by Martin Dawson, Johannes Herrnsdorf and Michael Strain at the University of Strathclyde in the UK. “Our new approach could be used to illuminate different indoor areas to allow better surveillance with 3D images, create a smart work area in a factory, or to give robots a more complete sense of their environment.”
    In The Optical Society (OSA) journal Optics Express, the researchers demonstrate that 3D optical imaging can be performed with a cell phone and LEDs without requiring any complex manual processes to synchronize the camera with the lighting.
    “Deploying a smart-illumination system in an indoor area allows any camera in the room to use the light and retrieve the 3D information from the surrounding environment,” said Le Francois. “LEDs are being explored for a variety of different applications, such as optical communication, visible light positioning and imaging. One day the LED smart-lighting system used for lighting an indoor area might be used for all of these applications at the same time.”
    Illuminating from above
    Human vision relies on the brain to reconstruct depth information when we view a scene from two slightly different directions with our two eyes. Depth information can also be acquired using a method called photometric stereo imaging in which one detector, or camera, is combined with illumination that comes from multiple directions. This lighting setup allows images to be recorded with different shadowing, which can then be used to reconstruct a 3D image.
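    The standard reconstruction behind this is a per-pixel least-squares fit. The Python sketch below shows a minimal version under a Lambertian (matte surface) assumption; it is the textbook photometric stereo formulation, not the specific code used in the study.

      import numpy as np

      def photometric_stereo(images, light_dirs):
          """Recover per-pixel surface normals and albedo from frames captured under
          known lighting directions, assuming a Lambertian model I = albedo * (n . l).
          images: (k, h, w) grayscale frames; light_dirs: (k, 3) unit vectors."""
          k, h, w = images.shape
          I = images.reshape(k, -1)
          # Solve light_dirs @ G ~= I in the least-squares sense; G has shape (3, h*w).
          G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)
          albedo = np.linalg.norm(G, axis=0)
          normals = G / (albedo + 1e-9)
          return normals.reshape(3, h, w), albedo.reshape(h, w)

      # Usage: stack one frame per light source with the matching unit light directions,
      # then integrate the recovered normal field to obtain a depth map.
      # normals, albedo = photometric_stereo(frames, lights)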

    Photometric stereo imaging traditionally requires four light sources, such as LEDs, which are deployed symmetrically around the viewing axis of a camera. In the new work, the researchers show that 3D images can also be reconstructed when objects are illuminated from the top down but imaged from the side. This setup allows overhead room lighting to be used for illumination.
    In work supported under the UK’s EPSRC ‘Quantic’ research program, the researchers developed algorithms that modulate each LED in a unique way. This acts like a fingerprint that allows the camera to determine which LED generated which image to facilitate the 3D reconstruction. The new modulation approach also carries its own clock signal so that the image acquisition can be self-synchronized with the LEDs by simply using the camera to passively detect the LED clock signal.
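    One simple way to realize such fingerprints, shown in the hypothetical sketch below, is to flicker each LED at its own frequency and separate the contributions per pixel with a Fourier transform over the frame sequence; the paper's actual modulation and clock-recovery scheme may differ.

      import numpy as np

      def separate_led_contributions(frames, fps, led_freqs):
          """Per-pixel demodulation: if each LED flickers at a distinct frequency,
          the magnitude of the matching FFT bin over time yields that LED's image.
          frames: (t, h, w) video; fps: frame rate; led_freqs: frequencies in Hz.
          This is an illustrative scheme, not necessarily the published one."""
          t = frames.shape[0]
          spectrum = np.fft.rfft(frames, axis=0)          # per-pixel FFT over time
          bins = np.fft.rfftfreq(t, d=1.0 / fps)
          images = []
          for f in led_freqs:
              k = int(np.argmin(np.abs(bins - f)))        # closest frequency bin
              images.append(np.abs(spectrum[k]))          # (h, w) image for this LED
          return images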
    “We wanted to make photometric stereo imaging more easily deployable by removing the link between the light sources and the camera,” said Le Francois. “To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronized with the camera.”
    3D imaging with a smartphone
    To demonstrate this new approach, the researchers used their modulation scheme with a photometric stereo setup based on commercially available LEDs. A simple Arduino board provided the electronic control for the LEDs. Images were captured using the high-speed video mode of a smartphone. They imaged a 48-millimeter-tall figurine that they 3D printed with a matte material to avoid any shiny surfaces that might complicate imaging.
    After identifying the best position for the LEDs and the smartphone, the researchers achieved a reconstruction error of just 2.6 millimeters for the figurine when imaged from 42 centimeters away. This error rate shows that the quality of the reconstruction was comparable to that of other photometric stereo imaging approaches. They were also able to reconstruct images of a moving object and showed that the method is not affected by ambient light.
    In the current system, the image reconstruction takes a few minutes on a laptop. To make the system practical, the researchers are working to decrease the computational time to just a few seconds by incorporating a deep-learning neural network that would learn to reconstruct the shape of the object from the raw image data.

    Story Source:
    Materials provided by The Optical Society. Note: Content may be edited for style and length.