More stories

  • New machine learning-assisted method rapidly classifies quantum sources

    For quantum optical technologies to become more practical, there is a need for large-scale integration of quantum photonic circuits on chips.
    This integration requires scaling up a key building block of these circuits: sources of single particles of light (photons) produced by individual quantum optical emitters.
    Purdue University engineers created a new machine learning-assisted method that could make quantum photonic circuit development more efficient by rapidly preselecting these solid-state quantum emitters.
    The work is published in the journal Advanced Quantum Technologies.
    Researchers around the world have been exploring different ways to fabricate identical quantum sources by “transplanting” nanostructures containing single quantum optical emitters into conventional photonic chips.
    “With the growing interest in scalable realization and rapid prototyping of quantum devices that utilize large emitter arrays, high-speed, robust preselection of suitable emitters becomes necessary,” said Alexandra Boltasseva, Purdue’s Ron and Dotty Garvin Tonjes Professor of Electrical and Computer Engineering.

    Quantum emitters produce light with unique, non-classical properties that can be used in many quantum information protocols.
    The challenge is that interfacing most solid-state quantum emitters with existing scalable photonic platforms requires complex integration techniques. Before integrating, engineers first need to identify bright emitters that produce single photons rapidly, on demand, and with a specific optical frequency.
    Emitter preselection based on “single-photon purity” — which is the ability to produce only one photon at a time — typically takes several minutes for each emitter. Thousands of emitters may need to be analyzed before finding a high-quality candidate suitable for quantum chip integration.
    To speed up screening based on single-photon purity, Purdue researchers trained a machine to recognize promising patterns in single-photon emission within a split second.
    According to the researchers, rapidly finding the purest single-photon emitters within a set of thousands would be a key step toward practical and scalable assembly of large quantum photonic circuits.

    “Given a photon purity standard that emitters must meet, we have taught a machine to classify single-photon emitters as sufficiently or insufficiently ‘pure’ with 95% accuracy, based on minimal data acquired within only one second,” said Zhaxylyk Kudyshev, a Purdue postdoctoral researcher.
    The researchers found that the conventional photon purity measurement method used for the same task took 100 times longer to reach the same level of accuracy.
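    The paper's code is not reproduced here, but the general idea can be sketched. The minimal Python example below, an illustration rather than the authors' method, trains an off-the-shelf classifier on short-acquisition photon-correlation histograms (simulated here) labeled by whether the emitter's single-photon purity meets a chosen standard; the histogram generator, the 0.5 purity cutoff, and the random-forest model are all assumptions for illustration.

      # Minimal sketch (not the authors' code): classify emitters as meeting a
      # single-photon purity standard from short, noisy coincidence histograms.
      # Synthetic data stand in for real measurements; the 0.5 cutoff is assumed.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)

      def short_acquisition_histogram(g2_zero, n_bins=64, counts=200):
          """Simulate a sparse coincidence histogram for one emitter; the central
          (zero-delay) bin is suppressed according to the emitter's true purity."""
          ideal = np.ones(n_bins)
          ideal[n_bins // 2] = g2_zero
          probs = ideal / ideal.sum()
          return rng.multinomial(counts, probs) / counts

      true_g2 = rng.uniform(0.0, 1.0, size=2000)   # unknown "ground truth" purity
      X = np.array([short_acquisition_histogram(g) for g in true_g2])
      y = (true_g2 < 0.5).astype(int)              # 1 = sufficiently pure emitter

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
      print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

    In this toy setup the classifier learns from noisy, partial histograms rather than waiting for a fully converged correlation curve, which is the intuition behind the reported speedup over conventional fitting.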
    “The machine learning approach is such a versatile and efficient technique because it is capable of extracting the information from the dataset that the fitting procedure usually ignores,” Boltasseva said.
    The researchers believe that their approach has the potential to dramatically advance most quantum optical measurements that can be formulated as binary or multiclass classification problems.
    “Our technique could, for example, speed up super-resolution microscopy methods built on higher-order correlation measurements that are currently limited by long image acquisition times,” Kudyshev said.

    Story Source:
    Materials provided by Purdue University.

  • Quirky response to magnetism presents quantum physics mystery

    The search is on to discover new states of matter, and possibly new ways of encoding, manipulating, and transporting information. One goal is to harness materials’ quantum properties for communications that go beyond what’s possible with conventional electronics. Topological insulators — materials that act mostly as insulators but carry electric current across their surface — provide some tantalizing possibilities.
    “Exploring the complexity of topological materials — along with other intriguing emergent phenomena such as magnetism and superconductivity — is one of the most exciting and challenging areas of focus for the materials science community at the U.S. Department of Energy’s Brookhaven National Laboratory,” said Peter Johnson, a senior physicist in the Condensed Matter Physics & Materials Science Division at Brookhaven. “We’re trying to understand these topological insulators because they have lots of potential applications, particularly in quantum information science, an important new area for the division.”
    For example, materials with this split insulator/conductor personality exhibit a separation in the energy signatures of their surface electrons with opposite “spin.” This quantum property could potentially be harnessed in “spintronic” devices for encoding and transporting information. Going one step further, coupling these electrons with magnetism can lead to novel and exciting phenomena.
    “When you have magnetism near the surface you can have these other exotic states of matter that arise from the coupling of the topological insulator with the magnetism,” said Dan Nevola, a postdoctoral fellow working with Johnson. “If we can find topological insulators with their own intrinsic magnetism, we should be able to efficiently transport electrons of a particular spin in a particular direction.”
    In a new study just published and highlighted as an Editor’s Suggestion in Physical Review Letters, Nevola, Johnson, and their coauthors describe the quirky behavior of one such magnetic topological insulator. The paper includes experimental evidence that intrinsic magnetism in the bulk of manganese bismuth telluride (MnBi₂Te₄) also extends to the electrons on its electrically conductive surface. Previous studies had been inconclusive as to whether or not the surface magnetism existed.
    But when the physicists measured the surface electrons’ sensitivity to magnetism, only one of two observed electronic states behaved as expected. Another surface state, which was expected to have a larger response, acted as if the magnetism wasn’t there.

    “Is the magnetism different at the surface? Or is there something exotic that we just don’t understand?” Nevola said.
    Johnson leans toward the exotic physics explanation: “Dan did this very careful experiment, which enabled him to look at the activity in the surface region and identify two different electronic states on that surface, one that might exist on any metallic surface and one that reflected the topological properties of the material,” he said. “The former was sensitive to the magnetism, which proves that the magnetism does indeed exist in the surface. However, the other one that we expected to be more sensitive had no sensitivity at all. So, there must be some exotic physics going on!”
    The measurements
    The scientists studied the material using various types of photoemission spectroscopy, where light from an ultraviolet laser pulse knocks electrons loose from the surface of the material and into a detector for measurement.
    “For one of our experiments, we use an additional infrared laser pulse to give the sample a little kick to move some of the electrons around prior to doing the measurement,” Nevola explained. “It takes some of the electrons and kicks them [up in energy] to become conducting electrons. Then, in very, very short timescales — picoseconds — you do the measurement to look at how the electronic states have changed in response.”
    The map of the energy levels of the excited electrons shows two distinct surface bands that each display separate branches, electrons in each branch having opposite spin. Both bands, each representing one of the two electronic states, were expected to respond to the presence of magnetism.

    To test whether these surface electrons were indeed sensitive to magnetism, the scientists cooled the sample to 25 Kelvin, allowing its intrinsic magnetism to emerge. However, only in the non-topological electronic state did they observe a “gap” opening up in the anticipated part of the spectrum.
    “Within such gaps, electrons are prohibited from existing, and thus their disappearance from that part of the spectrum represents the signature of the gap,” Nevola said.
    The observation of a gap appearing in the regular surface state was definitive evidence of magnetic sensitivity — and evidence that the magnetism intrinsic in the bulk of this particular material extends to its surface electrons.
    However, the “topological” electronic state the scientists studied showed no such sensitivity to magnetism — no gap.
    “That throws in a bit of a question mark,” Johnson said.
    “These are properties we’d like to be able to understand and engineer, much like we engineer the properties of semiconductors for a variety of technologies,” Johnson continued.
    In spintronics, for example, the idea is to use different spin states to encode information in the way positive and negative electric charges are presently used in semiconductor devices to encode the “bits” — 1s and 0s — of computer code. But spin-coded quantum bits, or qubits, have many more possible states — not just two. This would greatly expand the potential to encode information in new and powerful ways.
    “Everything about magnetic topological insulators looks like they’re right for this kind of technological application, but this particular material doesn’t quite obey the rules,” Johnson said.
    So now, as the team continues their search for new states of matter and further insights into the quantum world, there’s a new urgency to explain this particular material’s quirky quantum behavior.

  • New genetic analysis method could advance personal genomics

    Geneticists could identify the causes of disorders that currently go undiagnosed if standard practices for collecting individual genetic information were expanded to capture more variants that researchers can now decipher, concludes new Johns Hopkins University research.
    The laboratory of Johns Hopkins biomedical engineering professor Alexis Battle has developed a technique to begin identifying potentially problematic rare genetic variants that exist in the genomes of all people, particularly if additional genetic sequencing information was included in standard collection methods. The team’s findings are published in the latest issue of Science and are part of the Genotype-Tissue Expression (GTEx) Program funded by the National Institutes of Health.
    “The implications of this could be quite large. Everyone has around 50,000 variants that are rare in the population and we have absolutely no idea what most of them are doing,” Battle said. “If you collect gene expression data, which shows which proteins are being produced in a patient’s cells at what levels, we’re going to be able to identify what’s going on at a much higher rate.”
    While approximately 8% of U.S. citizens, mostly children, suffer from genetic disorders, the genetic cause has not been found for about half of the cases. What’s even more frustrating, according to Battle, is that even more people are likely living with more subtle genetically-influenced health ailments that have not been identified.
    “We really don’t know how many people are out there walking around with a genetic aberration that is causing them health issues,” she said. “They go completely undiagnosed, meaning we cannot find the genetic cause of their problems.”
    The field of personalized genomics is unable to characterize these rare variants because most genetic variants, specifically those in “non-coding” parts of the genome that do not specify a protein, are not tested. Doing so would represent a major advance in a growing field that is focused on the sequencing and analysis of individuals’ genomes, she said.
    The Battle Lab developed a computational system called “Watershed” that can scour reams of genetic data along with gene expression to predict the functions of variants from individuals’ genomes. They validated those predictions in the lab and applied the findings to assess the rare variants captured in massive gene collections such as the UK Biobank, the Million Veterans Program and the Jackson Heart Study. The results have helped to show which rare variants may be impacting human traits.
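    Watershed itself is a probabilistic model and its code is not reproduced here; the toy Python sketch below only illustrates the general idea of combining genomic annotations with an expression-outlier signal to score rare variants. All feature names, numbers, and the logistic-regression choice are hypothetical, not drawn from the paper.

      # Illustrative sketch only (not the Watershed model itself): combine genomic
      # annotations for a rare variant with an expression-outlier signal from the
      # carrier's tissues to score how likely the variant is to be functional.
      # All features, thresholds, and the model choice are assumptions.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 5000

      # Hypothetical per-variant features: conservation score, distance to the
      # transcription start site (kb), and a regulatory-annotation flag.
      conservation = rng.uniform(0, 1, n)
      tss_distance_kb = rng.exponential(10, n)
      in_regulatory_region = rng.integers(0, 2, n)

      # Hypothetical expression signal: absolute outlier z-score of the nearby
      # gene in the variant carrier, relative to non-carriers.
      expression_abs_z = rng.exponential(1.0, n)

      # Synthetic "functional" labels for the sketch: conserved, regulatory
      # variants accompanied by expression outliers are more often functional.
      logit = (3 * conservation + 1.5 * in_regulatory_region
               + 1.2 * expression_abs_z - 0.05 * tss_distance_kb - 4.0)
      labels = rng.random(n) < 1 / (1 + np.exp(-logit))

      X = np.column_stack([conservation, tss_distance_kb,
                           in_regulatory_region, expression_abs_z])
      model = LogisticRegression(max_iter=1000).fit(X, labels)

      # Score a new rare variant (hypothetical numbers).
      new_variant = np.array([[0.9, 2.0, 1, 3.5]])
      print("P(functional):", model.predict_proba(new_variant)[0, 1])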
    “Any improvement we can make in this area has implications for public health,” Battle said. “Even pointing to what the genetic cause is gives parents and patients a huge sense of relief and understanding and can point to potential therapeutics.”
    Battle’s team worked in collaboration with researchers from Scripps Translational Science Institute, the New York Genome Center, the Massachusetts Institute of Technology and Stanford, Harvard and Columbia universities.
    “Looking at the cross-tissue transcriptional footprint of rare genetic variants across many human tissues in GTEx data also helps us better understand the gaps and the potential of these analyses for clinical diagnostics,” said Pejman Mohammadi, a co-author and professor of integrative structural and computational biology at Scripps Research.
    The research was supported by grants R01MH109905, 1R01HG010480, and R01HG008150, and by the Searle Scholar Program.

    Story Source:
    Materials provided by Johns Hopkins University.

  • Evidence of power: Phasing quantum annealers into experiments from nonequilibrium physics

    Scientists at Tokyo Institute of Technology (Tokyo Tech) use commercially available quantum annealers, a type of quantum computer, to experimentally probe the validity of an important mechanism from nonequilibrium physics in open quantum systems. The results not only shed light on the extent of applicability of this mechanism and an extension of it, but also showcase how quantum annealers can serve as effective platforms for quantum simulations.
    It is established that matter can transition between different phases when certain parameters, such as temperature, are changed. Although phase transitions are common (like water turning into ice in a freezer), the dynamics that govern these processes are highly complex and constitute a prominent problem in the field of nonequilibrium physics.
    When a system undergoes a phase transition, matter in the new phase has many possible energetically equal “configurations” to adopt. In these cases, different parts of the system adopt different configurations over regions called “domains.” The interfaces between these domains are known as topological defects and reducing the number of these defects formed can be immensely valuable in many applications.
    One common strategy to reduce defects is easing the system through the phase transition slowly. In fact, according to the “Kibble-Zurek” mechanism (KZM), the average number of defects is predicted to follow a universal power law as a function of the driving time of the phase transition. However, experimentally testing the KZM in a quantum system has remained a coveted goal.
    In a recent study published in Physical Review Research, a team of scientists led by Professor Emeritus Hidetoshi Nishimori from Tokyo Institute of Technology, Japan, probed the validity of the KZM in two commercially available quantum annealers, a type of quantum computer designed for solving complex optimization problems. These devices, known as D-Wave annealers, can recreate controllable quantum systems and control their evolution over time, providing a suitable experimental testbed for the KZM.
    First, the scientists checked whether the “power law” between the average number of defects and the annealing time (driving time of the phase transition) predicted by the KZM held for a quantum magnetic system called the “one-dimensional transverse-field Ising model.” This model represents the orientations (spins) of a long chain of “magnetic dipoles,” where homogeneous regions are separated by defects seen as neighboring spins pointing in incorrect directions.
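    The article does not write out the power law. For reference, the textbook form of the KZM scaling is sketched below (not quoted from the paper); for the one-dimensional transverse-field Ising model, the standard critical exponents give a predicted exponent of one half.

      % Textbook KZM scaling (for reference; not quoted from the paper):
      % average defect density n versus annealing time \tau_Q, with
      % correlation-length exponent \nu, dynamical exponent z, and dimension d.
      \[
        n \propto \tau_Q^{-d\nu/(1+z\nu)}
      \]
      % For the one-dimensional transverse-field Ising model, d = 1 and \nu = z = 1:
      \[
        n \propto \tau_Q^{-1/2}
      \]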
    The original KZM prediction for the average number of defects held in this system, but the scientists took it a step further and also tested a proposed extension of the KZM, which describes not just the average number of defects but their full statistical distribution. Although this extension was originally derived for a completely “isolated” quantum system unaffected by its environment, they found good agreement between its predictions and their experimental results even in the D-Wave annealers, which are “open” quantum systems.
    Excited by these results, Prof Nishimori remarks: “Our work provides the first experimental test of universal critical dynamics in a many-body open quantum system. It also constitutes the first test of certain physics beyond the original KZM, providing strong experimental evidence that the generalized theory holds beyond the regime of validity theoretically established.”
    This study showcases the potential of quantum annealers to perform simulations of quantum systems and also helps provide insight into other areas of physics. In this regard, Prof Nishimori states: “Our results leverage quantum annealing devices as platforms to test and explore the frontiers of nonequilibrium physics. We hope our work will motivate further research combining quantum annealing and other universal principles in nonequilibrium physics.” Hopefully, this study will also promote the use of quantum annealers in experimental physics. After all, who doesn’t love finding a new use for a tool?

    Story Source:
    Materials provided by Tokyo Institute of Technology.

  • Detailed picture of US bachelor's programs in computing

    ACM, the Association for Computing Machinery, recently released its eighth annual Study of Non-Doctoral Granting Departments in Computing (NDC study). With the aim of providing a comprehensive look at computing education, the study includes information on enrollments, degree completions, faculty demographics, and faculty salaries. For the first time, this year’s ACM NDC study includes enrollment and degree completion data from the National Student Clearinghouse Research Center (NSC).
    In previous years, ACM directly surveyed Computer Science departments and worked with a sample of approximately 18,000 students. By accessing the NSC’s data, the ACM NDC study now includes information on approximately 300,000 students across the United States, allowing for a more reliable understanding of the state of enrollment and graduation in Bachelor’s programs. Also for the first time, the ACM NDC study includes data from private, for-profit institutions, which are playing an increasingly important role in computing education.
    “By partnering with the NSC, we now have a much fuller picture of computing enrollment and degree production at the Bachelor’s level,” explained ACM NDC study co-author Stuart Zweben, Professor Emeritus, Ohio State University. “The NSC also gives us more specific data on the gender and ethnicity of students. This is an important tool, as increasing the participation of women and other underrepresented groups has been an important goal for leaders in academia and industry. For example, having a clear picture of the current landscape for underrepresented people is an essential first step toward developing approaches to increase diversity.”
    “The computing community has come to rely on the ACM NDC study to understand trends in undergraduate computing education,” added ACM NDC study co-author Jodi Tims, Professor, Northeastern University. “At the same time, using our previous data collection methods, we were only capturing about 15-20% of institutions offering Bachelor’s degrees in computing. The NSC data gives us a much broader sample, as well as more precise information about enrollment and graduation in specific computing disciplines — such as computer science, information systems, information technology, software engineering, computer engineering and cybersecurity. For example, we’ve seen a noticeable increase in cybersecurity program offerings between the 2017/2018 and 2018/2019 academic years, and we believe this trend will continue next year. Going forward, we also plan to begin collecting information on data science offerings in undergraduate education. Our overall goal will be to maintain the ACM NDC study as the most up-to-date and authoritative resource on this topic.”
    As with previous NDC studies, information on faculty salaries, retention, and demographics was collected by sending surveys to academic departments across the United States. Responses were received from 151 departments. The average number of full-time faculty members at the responding departments was 12.
    Important findings of the ACM NDC study include:
    -Between the 2017/2018 and the 2018/2019 academic years, there was a 4.7% increase in degree production across all computing disciplines. The greatest increases in degree production were in software engineering (9% increase) and computer science (7.5% increase).
    -The representation of women in information systems (24.5% of degree earners in the 2018/2019 academic year) and information technology (21.5% of degree earners in the 2018/2019 academic year) is much higher than in areas such as computer engineering (12.2% of degree earners in the 2018/2019 academic year).
    -Bachelor’s programs, as recorded by the ACM NDC study, had a stronger representation of African American and Hispanic students than PhD programs, as recorded by the Computer Research Association’s (CRA) Taulbee Survey. For example, during the 2018/2019 academic year, the ACM NDC records that 15.6% of enrollees in Bachelor’s programs were African American, whereas the CRA Taulbee survey records that 4.7% of enrollees in PhD programs were African American.
    -In some disciplines of computing, African Americans and Hispanics are actually over-represented, based on their percentage of the US population.
    -Based on aggregate salary data from 89 non-doctoral-granting computer science departments (including public and private institutions), the average median salary for a full professor was $109,424.
    -Of 40 non-doctoral-granting departments reporting over 56 faculty departures, only 10.7% of faculty departed for non-academic positions. Most departed due to retirement (46.4%) or other academic positions (26.9%).

    In addition to Stuart Zweben and Jodi Tims, the ACM NDC study was co-authored by Yan Timanovsky of the Association for Computing Machinery. By employing the NSC data in future ACM NDC studies, the co-authors are confident that an even fuller picture will emerge regarding student retention with respect to computing disciplines, gender and ethnicity.

  • Experiments reveal why human-like robots elicit uncanny feelings

    Androids, or robots with humanlike features, are often more appealing to people than those that resemble machines — but only up to a certain point. Many people experience an uneasy feeling in response to robots that are nearly lifelike, and yet somehow not quite “right.” The feeling of affinity can plunge into one of repulsion as a robot’s human likeness increases, a zone known as “the uncanny valley.”
    In the journal Perception, psychologists at Emory University published new insights into the cognitive mechanisms underlying this phenomenon.
    Since the uncanny valley was first described, a common hypothesis developed to explain it. Known as the mind-perception theory, it proposes that when people see a robot with human-like features, they automatically add a mind to it. A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory.
    “We found that the opposite is true,” says Wang Shensheng, first author of the new study, who did the work as a graduate student at Emory and recently received his PhD in psychology. “It’s not the first step of attributing a mind to an android but the next step of ‘dehumanizing’ it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it’s a dynamic one.”
    The findings have implications for both the design of robots and for understanding how we perceive one another as humans.
    “Robots are increasingly entering the social domain for everything from education to healthcare,” Wang says. “How we perceive them and relate to them is important both from the standpoint of engineers and psychologists.”
    “At the core of this research is the question of what we perceive when we look at a face,” adds Philippe Rochat, Emory professor of psychology and senior author of the study. “It’s probably one of the most important questions in psychology. The ability to perceive the minds of others is the foundation of human relationships.”

    The research may help in unraveling the mechanisms involved in mind-blindness — the inability to distinguish between humans and machines — such as in cases of extreme autism or some psychotic disorders, Rochat says.
    Co-authors of the study include Yuk Fai Cheong and Daniel Dilks, both associate professors of psychology at Emory.
    Anthropomorphizing, or projecting human qualities onto objects, is common. “We often see faces in a cloud for instance,” Wang says. “We also sometimes anthropomorphize machines that we’re trying to understand, like our cars or a computer.”
    Naming one’s car or imagining that a cloud is an animated being, however, is not normally associated with an uncanny feeling, Wang notes. That led him to hypothesize that something other than just anthropomorphizing may occur when viewing an android.
    To tease apart the potential roles of mind-perception and dehumanization in the uncanny valley phenomenon, the researchers conducted experiments focused on the temporal dynamics of the process. Participants were shown three types of images — human faces, mechanical-looking robot faces and android faces that closely resembled humans — and asked to rate each for perceived animacy or “aliveness.” The exposure times of the images were systematically manipulated, within milliseconds, as the participants rated their animacy.
    The results showed that perceived animacy decreased significantly as a function of exposure time for android faces but not for mechanical-looking robot or human faces. For android faces, the perceived animacy dropped between 100 and 500 milliseconds of viewing time. That timing is consistent with previous research showing that people begin to distinguish between human and artificial faces around 400 milliseconds after stimulus onset.
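    The study's own statistical analysis is not reproduced here; the Python sketch below only illustrates how such an exposure-by-face-type effect could be tested on simulated ratings. The column names, data, and ordinary-least-squares model are hypothetical.

      # Illustrative sketch only (not the study's analysis): test whether perceived
      # animacy declines with exposure time for android faces but not for human or
      # mechanical faces, using an interaction term in an OLS model.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n = 600
      face_type = rng.choice(["human", "mechanical", "android"], size=n)
      exposure_ms = rng.choice([100, 250, 500, 1000], size=n).astype(float)

      # Simulated ratings: only android faces lose perceived animacy with exposure.
      slope = np.where(face_type == "android", -0.002, 0.0)
      animacy = 5 + slope * exposure_ms + rng.normal(0, 0.5, size=n)

      df = pd.DataFrame({"animacy": animacy, "face_type": face_type,
                         "exposure_ms": exposure_ms})
      model = smf.ols("animacy ~ C(face_type) * exposure_ms", data=df).fit()
      # With "android" as the (alphabetical) reference level, the exposure_ms
      # coefficient is the android slope; the interaction terms show how the
      # human and mechanical slopes differ from it.
      print(model.summary())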
    A second set of experiments manipulated both the exposure time and the amount of detail in the images, ranging from a minimal sketch of the features to a fully blurred image. The results showed that removing details from the images of the android faces decreased the perceived animacy along with the perceived uncanniness.
    “The whole process is complicated but it happens within the blink of an eye,” Wang says. “Our results suggest that at first sight we anthropomorphize an android, but within milliseconds we detect deviations and dehumanize it. And that drop in perceived animacy likely contributes to the uncanny feeling.”

    Story Source:
    Materials provided by Emory Health Sciences. Original written by Carol Clark.

  • How do people prefer coronavirus contact tracing to be carried out?

    People prefer coronavirus contact tracing to be carried out by a combination of apps and humans, a new study shows.
    The research shows people are more concerned about who runs the process than the risks of others having unauthorised access to their private information, or their data being stolen.
    Most people who took part in the research were in favour of the NHS processing personal data rather than the Government or even a decentralised system that stores only minimal personal data.
    A total of 41 per cent of those questioned wanted a mixture of an app and human contact during the tracing process, compared to 22 per cent who wanted it to be run purely through contact with another person and 37 per cent who wanted the process to be entirely digital.
    The research was conducted by Laszlo Horvath, Susan Banducci and Oliver James from the University of Exeter during May and is published in the Journal of Experimental Political Science.
    They ran an experiment on 1,504 people who were given information about two apps through a series of five pairings, with their properties relating to privacy and data security displayed randomly, and asked which they would prefer to use. In a second study, the academics also surveyed 809 people about their preferences for how apps should be run and designed.
    The decentralised system of contact tracing, currently trialled in the UK, was chosen by participants with a 50 per cent probability, meaning this particular design didn’t influence people’s choice. However, the probability of people choosing the app designed to work as part of an NHS-led centralised system was 57 per cent, meaning it was more popular, while apps described as storing data on servers belonging to the UK government were chosen only 43 per cent of the time, making them less popular.
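    As a rough illustration of how such figures can be read, the Python sketch below computes attribute-level choice shares (often called marginal means) from paired-choice data; a share of 50 per cent means the attribute did not sway choices. The column names and toy data are hypothetical, not the study's dataset.

      # Minimal sketch (not the authors' analysis code): in a paired conjoint
      # design, the share of times profiles with a given attribute level are
      # chosen estimates that level's marginal mean choice probability.
      import pandas as pd

      # Each row is one app profile shown in a pairing; `chosen` is 1 if the
      # participant picked that profile over the alternative.
      data = pd.DataFrame({
          "data_storage": ["NHS", "UK government", "decentralised",
                           "NHS", "decentralised", "UK government"],
          "chosen":       [1, 0, 1, 1, 0, 0],
      })

      marginal_means = data.groupby("data_storage")["chosen"].mean()
      print(marginal_means)  # values above 0.5 indicate levels that attract choices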
    A randomly selected group of people were also informed about the risk of data breach issues, but this didn’t have an impact on people’s preferences.
    Dr Horvath said: “We had thought people would prefer apps which were less intrusive and protected their privacy, for example not needing as much information about their location, but this wasn’t the case. Our research shows people are supportive of taking part in the contact tracing process if needed. They are less concerned about the possibility of data breach problems than who their app is run by, and privacy didn’t affect their preferences when they had a choice of apps.”
    Professor Banducci said: “Our research shows people are supportive of the NHS storing and using their personal information. Faith and trust in the NHS is high at the moment so it may motivate people to take part in the process if the Government involves the health service in its development and deployment. Trust in the provider of contact tracing will be crucial if it is to be used successfully to reduce the spread of infection.”
    Professor James said: “People who took part in this research preferred a balanced — human plus digital — approach to contact tracing. Privacy concerns were not as influential as we expected. Trust in the provider of the app is currently more important, something for the Government to remember as work on the UK’s contact tracing system continues.”

    Story Source:
    Materials provided by University of Exeter.

  • Study confirms widespread literacy in biblical-period kingdom of Judah

    Researchers at Tel Aviv University (TAU) have analyzed 18 ancient texts dating back to around 600 BCE from the Tel Arad military post using state-of-the-art image processing, machine learning technologies, and the expertise of a senior handwriting examiner. They have concluded that the texts were written by no fewer than 12 authors, suggesting that many of the inhabitants of the kingdom of Judah during that period were able to read and write, and that literacy was not the exclusive domain of a handful of royal scribes.
    The special interdisciplinary study was conducted by TAU’s Dr. Arie Shaus, Ms. Shira Faigenbaum-Golovin, and Dr. Barak Sober of the Department of Applied Mathematics; Prof. Eli Piasetzky of the Raymond and Beverly Sackler School of Physics and Astronomy; and Prof. Israel Finkelstein of the Jacob M. Alkow Department of Archeology and Ancient Near Eastern Civilizations. The forensic handwriting specialist, Ms. Yana Gerber, is a senior expert who served for 27 years in the Questioned Documents Laboratory of the Israel Police Division of Identification and Forensic Science and its International Crime Investigations Unit.
    The results were published in PLOS ONE on September 9, 2020.
    “There is a lively debate among experts as to whether the books of Deuteronomy, Joshua, Judges, Samuel, and Kings were compiled in the last days of the kingdom of Judah or after the destruction of the First Temple by the Babylonians,” Dr. Shaus explains. “One way to try to get to the bottom of this question is to ask when there was the potential for the writing of such complex historical works.
    “For the period following the destruction of the First Temple in 586 BC, there is very scant archaeological evidence of Hebrew writing in Jerusalem and its surroundings, but an abundance of written documents has been found for the period preceding the destruction of the Temple. But who wrote these documents? Was this a society with widespread literacy, or was there just a handful of literate people?”
    To answer this question, the researchers examined the ostraca (ink inscriptions on fragments of pottery vessels) discovered at the Tel Arad site in the 1960s. Tel Arad was a small military post on the southern border of the kingdom of Judah; its built-up area was about 20,000 square feet and it housed between 20 and 30 soldiers.

    “We examined the question of literacy empirically, from different directions of image processing and machine learning,” says Ms. Faigenbaum-Golovin. “Among other things, these areas help us today with the identification, recognition, and analysis of handwriting, signatures, and so on. The big challenge was to adapt modern technologies to 2,600-year-old ostraca. With a lot of effort, we were able to produce two algorithms that could compare letters and answer the question of whether two given ostraca were written by two different people.”
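    The team's two algorithms are not published in this article; the Python sketch below only illustrates the general shape of such a comparison, concluding that two inscriptions come from different hands only when their letter-feature statistics are clearly separated. The features, distance measure, and threshold are all assumptions, and the real pipeline works on imaged ostraca rather than toy vectors.

      # Illustrative sketch only (not the TAU algorithms): decide whether two
      # inscriptions were written by different hands by comparing per-letter
      # shape features, with a deliberately cautious threshold.
      import numpy as np

      def writer_difference_score(features_a, features_b):
          """Distance between the mean letter-feature vectors of two inscriptions,
          scaled by their pooled spread (a crude separability measure)."""
          mean_a, mean_b = features_a.mean(axis=0), features_b.mean(axis=0)
          pooled_std = np.sqrt((features_a.var(axis=0) + features_b.var(axis=0)) / 2) + 1e-9
          return np.linalg.norm((mean_a - mean_b) / pooled_std)

      rng = np.random.default_rng(2)
      # Hypothetical per-letter features (e.g., stroke slant, width/height ratio, spacing).
      ostracon_1 = rng.normal([0.30, 1.10, 0.50], 0.05, size=(40, 3))
      ostracon_2 = rng.normal([0.45, 0.90, 0.65], 0.05, size=(35, 3))

      score = writer_difference_score(ostracon_1, ostracon_2)
      THRESHOLD = 3.0  # assumed: only a large, clear separation triggers "different writers"
      print("different writers" if score > THRESHOLD else "no definite conclusion")

    The "no definite conclusion" branch mirrors the cautious behavior the researchers describe below, where the algorithms refrain from a verdict unless the handwriting differs markedly.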
    In 2016, the researchers theorized that 18 of the Tel Arad inscriptions were written by at least four different authors. Combined with additional textual evidence, the researchers concluded that there were in fact at least six different writers. The study aroused great interest around the world.
    The TAU researchers then decided to compare the algorithmic methods, which have since been refined, to the forensic approach. To this end, Ms. Gerber joined the team. After an in-depth examination of the ancient inscriptions, she found that the 18 texts were written by at least 12 distinct writers with varying degrees of certainty. She examined the original Tel Arad ostraca at the Israel Museum, the Eretz Israel Museum, the Sonia and Marco Nedler Institute of Archaeology of Tel Aviv University, and the Israel Antiquities Authority’s warehouses at Beit Shemesh.
    Ms. Gerber explained:
    “This study was very exciting, perhaps the most exciting in my professional career. These are ancient Hebrew inscriptions written in ink on shards of pottery, utilizing an alphabet that was previously unfamiliar to me. I studied the characteristics of the writing in order to analyze and compare the inscriptions, while benefiting from the skills and knowledge I acquired during my bachelor’s degree studies in classical archaeology and ancient Greek at Tel Aviv University. I delved into the microscopic details of these inscriptions written by people from the First Temple period, from routine issues such as orders concerning the movement of soldiers and the supply of wine, oil, and flour, through correspondence with neighboring fortresses, to orders that reached the Tel Arad fortress from the high ranks of the Judahite military system. I had the feeling that time had stood still and there was no gap of 2,600 years between the writers of the ostraca and ourselves.

    “Handwriting is made up of unconscious habit patterns. Handwriting identification is based on the principle that these writing patterns are unique to each person and no two people write exactly alike. It is also assumed that repetitions of the same text or characters by the same writer are not exactly identical, and one can define a range of natural handwriting variations specific to each writer. Thus, forensic handwriting analysis aims at tracking features corresponding to specific individuals, and concluding whether the given documents were written by a single author or by different authors.
    “The examination process is divided into three steps: analysis, comparison, and evaluation. The analysis includes a detailed examination of every single inscription, according to various features, such as the spacing between letters, their proportions, slant, etc. The comparison is based upon the aforementioned features across various handwritings. In addition, consistent patterns, such as the same combinations of letters, words, and punctuation, are identified. Finally, an evaluation of identicalness or distinctiveness of the writers is made. It should be noted that, according to an Israel Supreme Court ruling, a person can be convicted of a crime based on the opinion of a forensic handwriting expert.”
    Dr. Shaus further elaborated:
    “We were in for a big surprise: Yana identified more authors than our algorithms did. It must be understood that our current algorithms are of a “cautious” nature — they know how to identify cases in which the texts were written by people with significantly different writing; in other cases they refrain from definite conclusions. In contrast, an expert in handwriting analysis knows not only how to spot the differences between writers more accurately, but in some cases may also arrive at the conclusion that several texts were actually written by a single person. Naturally, in terms of consequences, it is very interesting to see who the authors are. Thanks to the findings, we were able to construct an entire flowchart of the correspondence concerning the military fortress — who wrote to whom and regarding what matter. This reflects the chain of command within the Judahite army.
    “For example, in the area of Arad, close to the border between the kingdoms of Judah and Edom, there was a military force whose soldiers are referred to as “Kittiyim” in the inscriptions, most likely Greek mercenaries. Someone, probably their Judahite commander or liaison officer, requested provisions for the Kittiyim unit. He writes to the quartermaster of the fortress in Arad “give the Kittiyim flour, bread, wine” and so on. Now, thanks to the identification of the handwriting, we can say with high probability that there was not only one Judahite commander writing, but at least four different commanders. It is conceivable that each time another officer was sent to join the patrol, they took turns.”
    According to the researchers, the findings shed new light on Judahite society on the eve of the destruction of the First Temple — and on the setting of the compilation of biblical texts. Dr. Sober explains:
    “It should be remembered that this was a small outpost, one of a series of outposts on the southern border of the kingdom of Judah. Since we found at least 12 different authors out of 18 texts in total, we can conclude that there was a high level of literacy throughout the entire kingdom. The commanding ranks and liaison officers at the outpost, and even the quartermaster Eliashib and his deputy, Nahum, were literate. Someone had to teach them how to read and write, so we must assume the existence of an appropriate educational system in Judah at the end of the First Temple period. This, of course, does not mean that there was almost universal literacy as there is today, but it seems that significant portions of the residents of the kingdom of Judah were literate. This is important to the discussion on the composition of biblical texts. If there were only two or three people in the whole kingdom who could read and write, then it is unlikely that complex texts would have been composed.”
    Prof. Finkelstein concludes:
    “Whoever wrote the biblical works did not do so for us, so that we could read them after 2,600 years. They did so in order to promote the ideological messages of the time. There are different opinions regarding the date of the composition of biblical texts. Some scholars suggest that many of the historical texts in the Bible, from Joshua to II Kings, were written at the end of the 7th century BC, very close to the period of the Arad ostraca. It is important to ask who these texts were written for. According to one view, there were events in which the few people who could read and write stood before the illiterate public and read texts out to them. A high literacy rate in Judah puts things into a different light.
    “Until now, the discussion of literacy in the kingdom of Judah has been based on circular arguments, on what is written within the Bible itself, for example on scribes in the kingdom. We have shifted the discussion to an empirical perspective. If in a remote place like Tel Arad there was, over a short period of time, a minimum of 12 authors of 18 inscriptions, out of the population of Judah which is estimated to have been no more than 120,000 people, it means that literacy was not the exclusive domain of a handful of royal scribes in Jerusalem. The quartermaster from the Tel Arad outpost also had the ability to read and appreciate them.”