More stories

  • Chumash Indians were using highly worked shell beads as currency 2,000 years ago

    As one of the most experienced archaeologists studying California’s Native Americans, Lynn Gamble knew the Chumash Indians had been using shell beads as money for at least 800 years.
    But an exhaustive review of the shell bead record led the UC Santa Barbara professor emerita of anthropology to an astonishing conclusion: the hunter-gatherers centered on the south-central coast around Santa Barbara were using highly worked shells as currency as long as 2,000 years ago.
    “If the Chumash were using beads as money 2,000 years ago,” Gamble said, “this changes our thinking of hunter-gatherers and sociopolitical and economic complexity. This may be the first example of the use of money anywhere in the Americas at this time.”
    Although Gamble has been studying California’s indigenous people since the late 1970s, the inspiration for her research on shell bead money came from far afield: the University of Tübingen in Germany. At a symposium there some years ago, most of the presenters discussed coins and other non-shell forms of money. Some, she said, were surprised by the assumptions of California archaeologists about what constituted money.
    Intrigued, she reviewed the definitions and identifications of money in California and questioned some of the long-held beliefs. Her research led to “The origin and use of shell bead money in California” in the Journal of Anthropological Archaeology.
    Gamble argues that archaeologists should use four criteria in assessing whether beads were used for currency versus adornment: Shell beads used as currency should be more labor-intensive than those for decorative purposes; highly standardized beads are likely currency; bigger, eye-catching beads were more likely used as decoration; and currency beads are widely distributed.
    “I then compared the shell beads that had been accepted as a money bead for over 40 years by California archaeologists to another type that was widely distributed,” she said. “For example, tens of thousands were found with just one individual up in the San Francisco Bay Area. This bead type, known as a saucer bead, was produced south of Point Conception and probably on the northern [Santa Barbara] Channel Islands, according to multiple sources of data, at least most, if not all of them.
    “These earlier beads were just as standardized, if not more so, than those that came 1,000 years later,” Gamble continued. “They also were traded throughout California and beyond. Through sleuthing, measurements and comparison of standardizations among the different bead types, it became clear that these were probably money beads and occurred much earlier than we previously thought.”
    As Gamble notes, shell beads have been used for over 10,000 years in California, and there is extensive evidence for the production of some of these beads, especially those common in the last 3,000 to 4,000 years, on the northern Channel Islands. The evidence includes shell bead-making tools, such as drills, and massive amounts of shell bits — detritus — that littered the surface of archaeological sites on the islands.
    In addition, specialists have noted that the isotopic signature of the shell beads found in the San Francisco Bay Area indicates that the shells are from south of Point Conception.
    “We know that right around early European contact,” Gamble said, “the California Indians were trading for many types of goods, including perishable foods. The use of shell beads no doubt greatly facilitated this wide network of exchange.”
    Gamble’s research not only resets the origins of money in the Americas, it calls into question what constitutes “sophisticated” societies in prehistory. Because the Chumash were non-agriculturists — hunter-gatherers — it was long held that they wouldn’t need money, even though early Spanish colonizers marveled at extensive Chumash trading networks and commerce.
    Recent research on money in Europe during the Bronze Age suggests it was used there some 3,500 years ago. For Gamble, that and the Chumash example are significant because they challenge a persistent perspective among economists and some archaeologists that so-called “primitive” societies could not have had “commercial” economies.
    “Both the terms ‘complex’ and ‘primitive’ are highly charged, but it is difficult to address this subject without avoiding those terms,” she said. “In the case of both the Chumash and the Bronze Age example, standardization is a key in terms of identifying money. My article on the origin of money in California is not only pushing the date for the use of money back 1,000 years in California, and possibly the Americas, it provides evidence that money was used by non-state level societies, commonly identified as ‘civilizations.’ ”

  • How is the brain programmed for computer programming?

    Countries around the world are seeing a surge in the number of computer science students. Enrollment in related university programs in the U.S. and Canada tripled between 2006 and 2016, and Europe too has seen rising numbers. At the same time, children are starting to code at ever younger ages as governments in many countries push K-12 computer science education. Despite the increasing popularity of computer programming, little is known about how our brains adapt to this relatively new activity. A new study by researchers in Japan examined the brain activity of thirty programmers of diverse levels of expertise, finding that seven regions of the frontal, parietal and temporal cortices in expert programmers’ brains are fine-tuned for programming. The finding suggests that higher programming skill is built on finely tuned activity across a network of multiple distributed brain regions.
    “Many studies have reported differences between expert and novice programmers in behavioural performance, knowledge structure and selective attention. What we don’t know is where in the brain these differences emerge,” says Takatomi Kubo, an associate professor at Nara Institute of Science and Technology, Japan, and one of the lead authors of the study.
    To answer this question, the researchers observed groups of novice, experienced, and expert programmers. The programmers were shown 72 different code snippets while undergoing functional MRI (fMRI) and were asked to place each snippet into one of four functional categories. As expected, programmers with higher skills were better at correctly categorizing the snippets. A subsequent searchlight analysis revealed that the amount of category information represented in seven brain regions increased with the skill level of the programmer: the bilateral inferior frontal gyrus pars triangularis (IFG Tri), left inferior parietal lobule (IPL), left supramarginal gyrus (SMG), left middle and inferior temporal gyri (MTG/ITG), and right middle frontal gyrus (MFG).
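    The flavor of this kind of analysis can be illustrated with a minimal decoding sketch: train a classifier to predict a snippet's functional category from voxel activity patterns and score it with cross-validation. The data below are simulated, and the array shapes and classifier choice are illustrative assumptions, not the study's actual pipeline.

    ```python
    # Simulated decoding example: classify snippet categories from voxel patterns.
    # Shapes, signal strength, and classifier are assumptions for illustration.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_categories = 72, 200, 4          # 72 snippets, 4 categories
    labels = np.repeat(np.arange(n_categories), n_trials // n_categories)

    # Noise plus a small category-specific signal; the signal strength stands in
    # for the "amount of information" carried by a brain region.
    signal = rng.normal(size=(n_categories, n_voxels))
    patterns = rng.normal(size=(n_trials, n_voxels)) + 0.3 * signal[labels]

    # Cross-validated accuracy; chance level is 0.25 for four categories.
    accuracy = cross_val_score(SVC(kernel="linear"), patterns, labels, cv=6).mean()
    print(f"decoding accuracy: {accuracy:.2f} (chance = 0.25)")
    ```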
    “Identifying these characteristics in expert programmers’ brains offers a good starting point for understanding the cognitive mechanisms behind programming expertise. Our findings illuminate the potential set of cognitive functions constituting programming expertise,” Kubo says.
    More specifically, the left IFG Tri and MTG are known to be associated with natural language processing and, in particular, semantic knowledge retrieval in a goal-oriented way. The left IPL and SMG are associated with episodic memory retrieval. The right MFG and IFG Tri are functionally related to stimulus-driven attention control.
    “Programming is a relatively new activity in human history and the mechanism is largely unknown. Connecting the activity to other well-known human cognitive functions will improve our understanding of programming expertise. If we get more comprehensive theory about programming expertise, it will lead to better methods for learning and teaching computer programming,” Kubo says.

    Story Source:
    Materials provided by Nara Institute of Science and Technology. Note: Content may be edited for style and length.

  • 'Liquid' machine-learning system adapts to changing conditions

    MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed “liquid” networks, change their underlying equations to continuously adapt to new data inputs. The advance could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.
    “This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” says Ramin Hasani, the study’s lead author. “The potential is really significant.”
    The research will be presented at February’s AAAI Conference on Artificial Intelligence. In addition to Hasani, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT co-authors include Daniela Rus, CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and PhD student Alexander Amini. Other co-authors include Mathias Lechner of the Institute of Science and Technology Austria and Radu Grosu of the Vienna University of Technology.
    Time series data are both ubiquitous and vital to our understanding of the world, according to Hasani. “The real world is all about sequences. Even our perception — you’re not perceiving images, you’re perceiving sequences of images,” he says. “So, time series data actually create our reality.”
    He points to video processing, financial data, and medical diagnostic applications as examples of time series that are central to society. The vicissitudes of these ever-changing data streams can be unpredictable. Yet analyzing these data in real time, and using them to anticipate future behavior, can boost the development of emerging technologies like self-driving cars. So Hasani built an algorithm fit for the task.
    Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of “training” examples. They’re often said to mimic the processing pathways of the brain — Hasani drew inspiration directly from the microscopic nematode, C. elegans. “It only has 302 neurons in its nervous system,” he says, “yet it can generate unexpectedly complex dynamics.”
    Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations.
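    The flavor of the idea can be sketched in a few lines: in a liquid time-constant style update, an input-dependent gate modulates both how fast each hidden state decays and what it relaxes toward, so the cell's effective time constant shifts with the data. The sizes, parameter values, and simple Euler solver below are illustrative assumptions rather than the authors' published implementation.

    ```python
    # Minimal liquid time-constant (LTC) style cell, integrated with Euler steps:
    #   dx/dt = -(1/tau + f) * x + f * A,   f = sigmoid(W x + U i + b)
    # The gate f depends on the input, so the effective time constant is "liquid".
    # All sizes and parameter values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_in, n_hidden = 3, 8
    W = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # recurrent weights
    U = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input weights
    b = np.zeros(n_hidden)
    A = rng.normal(size=n_hidden)                          # per-neuron target state
    tau, dt = 1.0, 0.05                                    # base time constant, step

    def ltc_step(x, i_t):
        f = 1.0 / (1.0 + np.exp(-(W @ x + U @ i_t + b)))   # input-dependent gate
        dx = -(1.0 / tau + f) * x + f * A
        return x + dt * dx

    x = np.zeros(n_hidden)
    for _ in range(100):                                   # drive with random input
        x = ltc_step(x, rng.normal(size=n_in))
    print("final hidden state:", np.round(x, 3))
    ```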
    This flexibility is key. Most neural networks’ behavior is fixed after the training phase, which means they’re bad at adjusting to changes in the incoming data stream. Hasani says the fluidity of his “liquid” network makes it more resilient to unexpected or noisy data, like if heavy rain obscures the view of a camera on a self-driving car. “So, it’s more robust,” he says.
    There’s another advantage of the network’s flexibility, he adds: “It’s more interpretable.”
    Hasani says his liquid network skirts the inscrutability common to other neural networks. “Just changing the representation of a neuron,” which Hasani did with the differential equations, “you can really explore some degrees of complexity you couldn’t explore otherwise.” Thanks to Hasani’s small number of highly expressive neurons, it’s easier to peer into the “black box” of the network’s decision making and diagnose why the network made a certain characterization.
    “The model itself is richer in terms of expressivity,” says Hasani. That could help engineers understand and improve the liquid network’s performance.
    Hasani’s network excelled in a battery of tests. It edged out other state-of-the-art time series algorithms by a few percentage points in accurately predicting future values in datasets, ranging from atmospheric chemistry to traffic patterns. “In many applications, we see the performance is reliably high,” he says. Plus, the network’s small size meant it completed the tests without a steep computing cost. “Everyone talks about scaling up their network,” says Hasani. “We want to scale down, to have fewer but richer nodes.”
    Hasani plans to keep improving the system and ready it for industrial application. “We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process,” he says. “The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.”
    This research was funded, in part, by Boeing, the National Science Foundation, the Austrian Science Fund, and Electronic Components and Systems for European Leadership.

  • A metalens for virtual and augmented reality

    Despite all the advances in consumer technology over the past decades, one component has remained frustratingly stagnant: the optical lens. Unlike electronic devices, which have gotten smaller and more efficient over the years, the design and underlying physics of today’s optical lenses haven’t changed much in about 3,000 years.
    This challenge has caused a bottleneck in the development of next-generation optical systems such as wearable displays for virtual reality, which require compact, lightweight, and cost-effective components.
    At the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), a team of researchers led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, has been developing the next generation of lenses that promise to open that bottleneck by replacing bulky curved lenses with a simple, flat surface that uses nanostructures to focus light.
    In 2018, Capasso’s team developed achromatic, aberration-free metalenses that work across the entire visible spectrum of light. But these lenses were only tens of microns in diameter, too small for practical use in VR and augmented reality systems.
    Now, the researchers have developed a two-millimeter achromatic metalens that can focus RGB (red, green, blue) colors without aberrations, along with a miniaturized display for virtual and augmented reality applications.
    The research is published in Science Advances.
    “This state-of-the-art lens opens a path to a new type of virtual reality platform and overcomes the bottleneck that has slowed the progress of new optical devices,” said Capasso, the senior author of the paper.
    “Using new physics and a new design principle, we have developed a flat lens to replace the bulky lenses of today’s optical devices,” said Zhaoyi Li, a postdoctoral fellow at SEAS and first author of the paper. “This is the largest RGB-achromatic metalens to date and is a proof of concept that these lenses can be scaled up to centimeter size, mass produced, and integrated in commercial platforms.”
    Like previous metalenses, this lens uses arrays of titanium dioxide nanofins to focus different wavelengths of light to the same point and eliminate chromatic aberration. By engineering the shape and pattern of these nanoarrays, the researchers could control the focal length for red, green and blue light; a generic flat-lens phase profile is sketched below. To incorporate the lens into a VR system, the team developed a near-eye display using a method called fiber scanning.
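    For background, the textbook target phase profile of a flat lens focusing wavelength λ at focal length f is φ(r) = -(2π/λ)(√(r² + f²) − f); an RGB-achromatic design must realize all three wavelength-specific profiles with a single nanostructure layout. The snippet below evaluates this standard profile with assumed numbers and is not the paper's actual design.

    ```python
    # Textbook flat-lens phase profile phi(r) = -(2*pi/lam) * (sqrt(r^2 + f^2) - f),
    # evaluated for red, green, and blue. Focal length and aperture are assumed
    # values for illustration, not the parameters of the published lens.
    import numpy as np

    f = 20e-3                                  # assumed focal length: 20 mm
    r = np.linspace(0, 1e-3, 5)                # radial samples across a 2 mm lens
    for name, lam in [("red", 650e-9), ("green", 532e-9), ("blue", 470e-9)]:
        phi = -(2 * np.pi / lam) * (np.sqrt(r**2 + f**2) - f)
        print(f"{name}: {np.round(phi, 1)} rad")
    ```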
    The display, inspired by fiber-scanning-based endoscopic bioimaging techniques, uses an optical fiber threaded through a piezoelectric tube. When a voltage is applied to the tube, the fiber tip scans left and right and up and down to display patterns, forming a miniaturized display. The display has high resolution, high brightness, high dynamic range, and a wide color gamut.
    In a VR or AR platform, the metalens would sit directly in front of the eye, and the display would sit within the focal plane of the metalens. The patterns scanned by the display are focused onto the retina, where the virtual image forms, with the help of the metalens. To the human eye, the image appears as part of the landscape in the AR mode, some distance from our actual eyes.
    “We have demonstrated how meta-optics platforms can help resolve the bottleneck of current VR technologies and potentially be used in our daily life,” said Li.
    Next, the team aims to scale up the lens even further, making it compatible with current large-scale fabrication techniques for mass production at a low cost.
    The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.
    The research was co-authored by Yao-Wei Huang, Joon-Suh Park, Wei Ting Chen, and Zhujun Shi from Harvard University, Peng Lin and Ji-Xin Cheng from Boston University, and Cheng-Wei Qiu from the National University of Singapore.
    The research was supported in part by the Defense Advanced Research Projects Agency under award no. HR00111810001, the National Science Foundation under award no. 1541959 and the SAMSUNG GRO research program under award no. A35924.

  • A NEAT reduction of complex neuronal models accelerates brain research

    Neurons, the fundamental units of the brain, are complex computers by themselves. They receive input signals on a tree-like structure — the dendrite. This structure does more than simply collect the input signals: it integrates and compares them to find those special combinations that are important for the neurons’ role in the brain. Moreover, the dendrites of neurons come in a variety of shapes and forms, indicating that distinct neurons may have separate roles in the brain.
    A simple yet faithful model
    In neuroscience, there has historically been a tradeoff between a model’s faithfulness to the underlying biological neuron and its complexity. Neuroscientists have constructed detailed computational models of many different types of dendrites. These models mimic the behavior of real dendrites to a high degree of accuracy. The tradeoff, however, is that such models are very complex. Thus, it is hard to exhaustively characterize all possible responses of such models and to simulate them on a computer. Even the most powerful computers can only simulate a small fraction of the neurons in any given brain area.
    Researchers from the Department of Physiology at the University of Bern have long sought to understand the role of dendrites in computations carried out by the brain. On the one hand, they have constructed detailed models of dendrites from experimental measurements, and on the other hand they have constructed neural network models with highly abstract dendrites to learn computations such as object recognition. A new study set out to find a computational method to make highly detailed models of neurons simpler, while retaining a high degree of faithfulness. This work emerged from the collaboration between experimental and computational neuroscientists from the research groups of Prof. Thomas Nevian and Prof. Walter Senn, and was led by Dr Willem Wybo. “We wanted the method to be flexible, so that it could be applied to all types of dendrites. We also wanted it to be accurate, so that it could faithfully capture the most important functions of any given dendrite. With these simpler models, neural responses can more easily be characterized and simulation of large networks of neurons with dendrites can be conducted,” Dr Wybo explains.
    This new approach exploits an elegant mathematical relation between the responses of detailed dendrite models and of simplified dendrite models. Due to this mathematical relation, the objective that is optimized is linear in the parameters of the simplified model. “This crucial observation allowed us to use the well-known linear least squares method to find the optimized parameters. This method is very efficient compared to methods that use non-linear parameter searches, but also achieves a high degree of accuracy,” says Prof. Senn.
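    A minimal sketch of that fitting step, under the assumption that the reduced model's response is linear in its parameters: the problem then reduces to ordinary linear least squares. The feature matrix and target vector below are synthetic stand-ins for quantities derived from a detailed dendrite model, not NEAT's actual internals.

    ```python
    # If the reduced model's response is linear in its parameters p (y ~ X @ p),
    # the optimal p follows from ordinary linear least squares. X and y are
    # synthetic stand-ins here, not quantities computed by NEAT itself.
    import numpy as np

    rng = np.random.default_rng(2)
    n_samples, n_params = 500, 6
    X = rng.normal(size=(n_samples, n_params))             # per-parameter responses
    true_p = np.array([1.0, 0.5, -0.3, 2.0, 0.0, 0.8])     # "detailed model" values
    y = X @ true_p + 0.01 * rng.normal(size=n_samples)     # noisy target responses

    p_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("recovered parameters:", np.round(p_fit, 2))
    ```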
    Tools available for AI applications
    The main result of the work is the methodology itself: a flexible yet accurate way to construct reduced neuron models from experimental data and morphological reconstructions. “Our methodology shatters the perceived tradeoff between faithfulness and complexity, by showing that extremely simplified models can still capture much of the important response properties of real biological neurons,” Prof. Senn explains. “Which also provides insight into ‘the essential dendrite’, the simplest possible dendrite model that still captures all possible responses of the real dendrite from which it is derived,” Dr Wybo adds.
    Thus, in specific situations, hard bounds can be established on how much a dendrite can be simplified while retaining its important response properties. “Furthermore, our methodology greatly simplifies deriving neuron models directly from experimental data,” highlights Prof. Senn, who is also a member of the steering committee of the Center for Artificial Intelligence in Medicine (CAIM) of the University of Bern. The methodology has been compiled into NEAT (NEural Analysis Toolkit), an open-source software toolbox that automates the simplification process. NEAT is publicly available on GitHub.
    The neurons used currently in AI applications are exceedingly simplistic compared to their biological counterparts, as they don’t include dendrites at all. Neuroscientists believe that including dendrite-like operations in artificial neural networks will lead to the next leap in AI technology. By enabling the inclusion of very simple, but very accurate dendrite models in neural networks, this new approach and toolkit provide an important step towards that goal.
    This work was supported by the Human Brain Project, by the Swiss National Science Foundation and by the European Research Council.

    Story Source:
    Materials provided by University of Bern. Note: Content may be edited for style and length.

  • Mira's last journey: Exploring the dark universe

    A team of physicists and computer scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory performed one of the five largest cosmological simulations ever. Data from the simulation will inform sky maps to aid leading large-scale cosmological experiments.
    The simulation, called the Last Journey, follows the distribution of mass across the universe over time — in other words, how gravity causes a mysterious invisible substance called “dark matter” to clump together to form larger-scale structures called halos, within which galaxies form and evolve.
    The scientists performed the simulation on Argonne’s supercomputer Mira. The same team of scientists ran a previous cosmological simulation called the Outer Rim in 2013, just days after Mira turned on. After running simulations on the machine throughout its seven-year lifetime, the team marked Mira’s retirement with the Last Journey simulation.
    The Last Journey demonstrates how far observational and computational technology has come in just seven years, and it will contribute data and insight to experiments such as the Stage-4 ground-based cosmic microwave background experiment (CMB-S4), the Legacy Survey of Space and Time (carried out by the Rubin Observatory in Chile), the Dark Energy Spectroscopic Instrument and two NASA missions, the Roman Space Telescope and SPHEREx.
    “We worked with a tremendous volume of the universe, and we were interested in large-scale structures, like regions of thousands or millions of galaxies, but we also considered dynamics at smaller scales,” said Katrin Heitmann, deputy division director for Argonne’s High Energy Physics (HEP) division.
    The code that constructed the cosmos
    The six-month span for the Last Journey simulation and major analysis tasks presented unique challenges for software development and workflow. The team adapted some of the same code used for the 2013 Outer Rim simulation with some significant updates to make efficient use of Mira, an IBM Blue Gene/Q system that was housed at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.
    Specifically, the scientists used the Hardware/Hybrid Accelerated Cosmology Code (HACC) and its analysis framework, CosmoTools, to enable incremental extraction of relevant information at the same time as the simulation was running.
    “Running the full machine is challenging because reading the massive amount of data produced by the simulation is computationally expensive, so you have to do a lot of analysis on the fly,” said Heitmann. “That’s daunting, because if you make a mistake with analysis settings, you don’t have time to redo it.”
    The team took an integrated approach to carrying out the workflow during the simulation. HACC would run the simulation forward in time, determining the effect of gravity on matter during large portions of the history of the universe. Once HACC determined the positions of trillions of computational particles representing the overall distribution of matter, CosmoTools would step in to record relevant information — such as finding the billions of halos that host galaxies — to use for analysis during post-processing.
    “When we know where the particles are at a certain point in time, we characterize the structures that have formed by using CosmoTools and store a subset of data to make further use down the line,” said Adrian Pope, physicist and core HACC and CosmoTools developer in Argonne’s Computational Science (CPS) division. “If we find a dense clump of particles, that indicates the location of a dark matter halo, and galaxies can form inside these dark matter halos.”
    The scientists repeated this interwoven process — where HACC moves particles and CosmoTools analyzes and records specific data — until the end of the simulation. The team then used features of CosmoTools to determine which clumps of particles were likely to host galaxies. For reference, around 100 to 1,000 particles represent single galaxies in the simulation.
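    The interleaved workflow can be sketched schematically: advance particles, and every few steps run an on-the-fly analysis pass that stores only a reduced data product. The random-walk step and grid-density "halo finder" below are toy stand-ins for illustration, not HACC or CosmoTools.

    ```python
    # Schematic "move particles, then analyze" loop. The random-walk step and the
    # grid-density halo finder are toy stand-ins, not HACC or CosmoTools.
    import numpy as np

    rng = np.random.default_rng(3)
    n_particles, box, n_steps, analysis_every = 20_000, 1.0, 50, 10
    pos = rng.random((n_particles, 3)) * box
    stored_catalogs = []                                   # reduced data products

    def toy_step(pos):
        """Stand-in for a gravity step: small displacements in a periodic box."""
        return (pos + rng.normal(scale=0.002, size=pos.shape)) % box

    def toy_halo_finder(pos, cells=16, threshold=2.0):
        """Flag grid cells holding more than `threshold` times the mean count."""
        idx = np.floor(pos / box * cells).astype(int).clip(0, cells - 1)
        counts = np.zeros((cells, cells, cells))
        np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        return np.argwhere(counts > threshold * counts.mean())

    for step in range(1, n_steps + 1):
        pos = toy_step(pos)                                # "HACC moves particles"
        if step % analysis_every == 0:                     # "CosmoTools records"
            halos = toy_halo_finder(pos)
            stored_catalogs.append((step, halos))
            print(f"step {step}: stored {len(halos)} overdense cells")
    ```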
    “We would move particles, do analysis, move particles, do analysis,” said Pope. “At the end, we would go back through the subsets of data that we had carefully chosen to store and run additional analysis to gain more insight into the dynamics of structure formation, such as which halos merged together and which ended up orbiting each other.”
    Using the optimized workflow with HACC and CosmoTools, the team ran the simulation in half the expected time.
    Community contribution
    The Last Journey simulation will provide data necessary for other major cosmological experiments to use when comparing observations or drawing conclusions about a host of topics. These insights could shed light on topics ranging from cosmological mysteries, such as the role of dark matter and dark energy in the evolution of the universe, to the astrophysics of galaxy formation across the universe.
    “This huge data set they are building will feed into many different efforts,” said Katherine Riley, director of science at the ALCF. “In the end, that’s our primary mission — to help high-impact science get done. When you’re able to not only do something cool, but to feed an entire community, that’s a huge contribution that will have an impact for many years.”
    The team’s simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing models and the development of new ones, impacting both ongoing and upcoming cosmological surveys.
    “We are not trying to match any specific structures in the actual universe,” said Pope. “Rather, we are making statistically equivalent structures, meaning that if we looked through our data, we could find locations where galaxies the size of the Milky Way would live. But we can also use a simulated universe as a comparison tool to find tensions between our current theoretical understanding of cosmology and what we’ve observed.”
    Looking to exascale
    “Thinking back to when we ran the Outer Rim simulation, you can really see how far these scientific applications have come,” said Heitmann, who performed Outer Rim in 2013 with the HACC team and Salman Habib, CPS division director and Argonne Distinguished Fellow. “It was awesome to run something substantially bigger and more complex that will bring so much to the community.”
    As Argonne works towards the arrival of Aurora, the ALCF’s upcoming exascale supercomputer, the scientists are preparing for even more extensive cosmological simulations. Exascale computing systems will be able to perform a billion billion calculations per second — 50 times faster than many of the most powerful supercomputers operating today.
    “We’ve learned and adapted a lot during the lifespan of Mira, and this is an interesting opportunity to look back and look forward at the same time,” said Pope. “When preparing for simulations on exascale machines and a new decade of progress, we are refining our code and analysis tools, and we get to ask ourselves what we weren’t doing because of the limitations we have had until now.”
    The Last Journey was a gravity-only simulation, meaning it did not consider interactions such as gas dynamics and the physics of star formation. Gravity is the major player in large-scale cosmology, but the scientists hope to incorporate other physics in future simulations to observe the differences they make in how matter moves and distributes itself through the universe over time.
    “More and more, we find tightly coupled relationships in the physical world, and to simulate these interactions, scientists have to develop creative workflows for processing and analyzing,” said Riley. “With these iterations, you’re able to arrive at your answers — and your breakthroughs — even faster.”

  • Smart algorithm cleans up images by searching for clues buried in noise

    To enter the world of the fantastically small, the main currency is either a ray of light or electrons.
    Strong beams, which yield clearer images, are damaging to specimens. On the other hand, weak beams can give noisy, low-resolution images.
    In a new study published in Nature Machine Intelligence, researchers at Texas A&M University describe a machine learning-based algorithm that can reduce graininess in low-resolution images and reveal new details that were otherwise buried within the noise.
    “Images taken with low-powered beams can be noisy, which can hide interesting and valuable visual details of biological specimens,” said Shuiwang Ji, associate professor in the Department of Computer Science and Engineering. “To solve this problem, we use a pure computational approach to create higher-resolution images, and we have shown in this study that we can improve the resolution up to an extent very similar to what you might obtain using a high beam.”
    Ji added that unlike other denoising algorithms that can only use information coming from a small patch of pixels within a low-resolution image, their smart algorithm can identify pixel patterns that may be spread across the entire noisy image, increasing its efficacy as a denoising tool.
    Instead of solely relying on microscope hardware to improve image resolution, a technique known as augmented microscopy uses a combination of software and hardware to enhance the quality of images. Here, a regular image taken on a microscope is superimposed on a computer-generated digital image. This image processing method holds promise not just to cut costs but also to automate medical image analysis and reveal details that the eye can sometimes miss.
    Currently, software based on a class of machine-learning methods called deep learning has been shown to be effective at removing blurriness or noise in images. These algorithms can be visualized as consisting of many interconnected layers or processing steps that take in a low-resolution input image and generate a high-resolution output image.
    In conventional deep-learning-based image processing techniques, the number of layers and the connections between them determine how many pixels in the input image contribute to the value of a single pixel in the output image. This quantity, technically called the receptive field, is fixed once the deep-learning algorithm has been trained and is ready to denoise new images. However, Ji said that fixing the receptive field limits the performance of the algorithm.
    “Imagine a piece of specimen having a repeating motif, like a honeycomb pattern. Most deep-learning algorithms only use local information to fill in the gaps in the image created by the noise,” Ji said. “But this is inefficient because the algorithm is, in essence, blind to the repeating pattern within the image since the receptive field is fixed. Instead, deep-learning algorithms need to have adaptive receptive fields that can capture the information in the overall image structure.”
    To overcome this hurdle, Ji and his students developed another deep-learning algorithm that can dynamically change the size of the receptive field. In other words, unlike earlier algorithms that can only aggregate information from a small number of pixels, their new algorithm, called global voxel transformer networks (GVTNets), can pool information from a larger area of the image if required.
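    The core idea of an image-wide, data-dependent receptive field can be illustrated with a single self-attention pass over all pixel positions, where each output pixel is a weighted mix of every input pixel. This is a generic attention sketch with made-up sizes, not the published GVTNets code.

    ```python
    # Generic global self-attention over all pixel positions: every output pixel
    # is a data-dependent weighted mix of every input pixel, i.e. the receptive
    # field spans the whole image. Sizes are made up; this is not GVTNets code.
    import numpy as np

    rng = np.random.default_rng(4)
    h, w, c = 16, 16, 8                        # small feature map
    x = rng.normal(size=(h * w, c))            # flatten spatial positions to tokens

    Wq, Wk, Wv = (rng.normal(scale=c**-0.5, size=(c, c)) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv

    scores = q @ k.T / np.sqrt(c)                          # (h*w, h*w) pair weights
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)                # softmax over positions

    out = (attn @ v).reshape(h, w, c)          # each pixel "sees" the whole image
    print("output feature map shape:", out.shape)
    ```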
    When they analyzed their algorithm’s performance against other deep-learning software, the researchers found that GVTNets required less training data and could denoise images better than other deep-learning algorithms. Furthermore, the high-resolution images obtained were comparable to those obtained using a high-energy light beam.
    The researchers noted that their new algorithm can easily be adapted to other applications in addition to denoising, such as label-free fluorescence imaging and 3D to 2D conversions for computer graphics.
    “Our research contributes to the emerging area of smart microscopy, where artificial intelligence is seamlessly integrated into the microscope,” Ji said. “Deep-learning algorithms such as ours will allow us to potentially transcend the physical limit posed by light that was not possible before. This can be extremely valuable for a myriad of applications, including clinical ones, like estimating the stage of cancer progression and distinguishing between cell types for disease prognosis.”
    This research is funded by the National Science Foundation, the National Institutes of Health and the Defense Advanced Research Projects Agency.

    Story Source:
    Materials provided by Texas A&M University. Original written by Vandana Suresh. Note: Content may be edited for style and length.

  • Pace of prehistoric human innovation could be revealed by 'linguistic thermometer'

    Multi-disciplinary researchers at The University of Manchester have helped develop a powerful physics-based tool to map the pace of language development and human innovation over thousands of years — even stretching into pre-history before records were kept.
    Tobias Galla, a professor in theoretical physics, and Dr Ricardo Bermúdez-Otero, a specialist in historical linguistics, from The University of Manchester, have come together as part of an international team, sharing their diverse expertise to develop the new model, revealed in a paper entitled ‘Geospatial distributions reflect temperatures of linguistic features’ authored by Henri Kauhanen, Deepthi Gopal, Tobias Galla and Ricardo Bermúdez-Otero and published in the journal Science Advances.
    Professor Galla has applied statistical physics — usually used to map atoms or nanoparticles — to help build a mathematically-based model that responds to the evolutionary dynamics of language. Essentially, the forces that drive language change can operate across thousands of years and leave a measurable “geospatial signature,” determining how languages of different types are distributed over the surface of the Earth.
    Dr Bermúdez-Otero explained: “In our model each language has a collection of properties or features and some of those features are what we describe as ‘hot’ or ‘cold’.
    “So, if a language puts the object before the verb, then it is relatively likely to get stuck with that order for a long period of time — so that’s a ‘cold’ feature. In contrast, markers like the English article ‘the’ come and go a lot faster: they may be here in one historical period, and be gone in the next. In that sense, definite articles are ‘hot’ features.
    “The striking thing is that languages with ‘cold’ properties tend to form big clumps, whereas languages with ‘hot’ properties tend to be more scattered geographically.”
    This method therefore works like a thermometer, enabling researchers to retrospectively tell whether one linguistic property is more prone to change in historical time than another. This modelling could also provide a similar benchmark for the pace of change in other social behaviours or practices over time and space.
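    A toy spatial analogue of the hot/cold intuition (not the authors' model): sites on a grid usually copy a neighbour's feature value but occasionally flip at random, with the flip rate playing the role of temperature. Low rates produce large spatial clumps; high rates keep the map scattered.

    ```python
    # Toy "hot vs cold" lattice: sites copy a random neighbour most of the time
    # and flip at random with probability `flip_rate` (the "temperature").
    # Cold features form large clumps; hot features stay scattered.
    import numpy as np

    rng = np.random.default_rng(5)

    def simulate(flip_rate, size=30, sweeps=150):
        grid = rng.integers(0, 2, size=(size, size))
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        for _ in range(sweeps * size * size):
            i, j = rng.integers(0, size, 2)
            if rng.random() < flip_rate:
                grid[i, j] = rng.integers(0, 2)            # spontaneous change
            else:
                di, dj = moves[rng.integers(4)]            # copy a neighbour
                grid[i, j] = grid[(i + di) % size, (j + dj) % size]
        # crude clumpiness measure: how often horizontal neighbours agree
        return (grid == np.roll(grid, 1, axis=1)).mean()

    for label, rate in [("cold", 0.001), ("hot", 0.2)]:
        print(f"{label} feature (flip rate {rate}): "
              f"neighbour agreement {simulate(rate):.2f}")
    ```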
    “For example, suppose that you have a map showing the spatial distribution of some variable cultural practice for which you don’t have any historical records — this could be anything, like different rules on marriage or on the inheritance of possessions,” added Dr Bermúdez-Otero.
    “Our method could, in principle, be used to ascertain whether one practice changes in the course of historical time faster than another, i.e. whether people are more innovative in one area than in another, just by looking at how the present-day variation is distributed in space.”
    The source data for the linguistic modelling comes from present-day languages, and the team relied on The World Atlas of Language Structures (WALS), which records information on 2,676 contemporary languages.
    Professor Galla explained: “We were interested in emergent phenomena, such as how large-scale effects, for example patterns in the distribution of language features, arise from relatively simple interactions. This is a common theme in complex systems research.
    “I was able to help with my expertise in the mathematical tools we used to analyse the language model and in simulation techniques. I also contributed to setting up the model in the first place, and by asking questions that a linguist would perhaps not ask in the same way.”

    Story Source:
    Materials provided by University of Manchester. Note: Content may be edited for style and length.