More stories

  • Pushed to the limit: A CMOS-based transceiver for beyond 5G applications at 300 GHz

    Scientists at Tokyo Institute of Technology (Tokyo Tech) and NTT Corporation (NTT) have developed a novel CMOS-based transceiver for wireless communications in the 300 GHz band, enabling future beyond-5G applications. Their design addresses the challenges of operating CMOS technology at its practical limit and represents the first wideband CMOS phased-array system to operate at such elevated frequencies.
    Communication at higher frequencies is a perpetually sought-after goal in electronics because it enables greater data rates and takes advantage of underutilized portions of the electromagnetic spectrum. Many beyond-5G applications, as well as the IEEE 802.15.3d standard for wireless communications, call for transmitters and receivers capable of operating close to or above 300 GHz.
    Unfortunately, our trusty CMOS technology is not entirely suitable for such elevated frequencies. Near 300 GHz, amplification becomes considerably more difficult. Although a few CMOS-based transceivers for 300 GHz have been proposed, they either lack sufficient output power, work only under direct line-of-sight conditions, or require a large circuit area.
    To address these issues, a team of scientists from Tokyo Tech, in collaboration with NTT, proposed an innovative design for a 300 GHz CMOS-based transceiver. Their work will be presented in the Digest of Technical Papers of the 2021 IEEE International Solid-State Circuits Conference (ISSCC), where the latest advances in solid-state and integrated circuits are showcased.
    One of the key features of the proposed design is that it is bidirectional: a large portion of the circuit, including the mixer, antennas, and local oscillator, is shared between the receiver and the transmitter. This means the overall circuit complexity and the total circuit area required are much lower than in unidirectional implementations.
    Another important aspect is the use of four antennas in a phased array configuration. Existing solutions for 300 GHz CMOS transmitters use a single radiating element, which limits the antenna gain and the system’s output power. An additional advantage is the beamforming capability of phased arrays, which allows the device to adjust the relative phases of the antenna signals to create a combined radiation pattern with custom directionality. The antennas used are stacked “Vivaldi antennas,” which can be etched directly onto PCBs, making them easy to fabricate.
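    To make the beamforming idea concrete, here is a minimal numerical sketch of how progressive phase shifts steer a four-element array. The element count matches the article, but the half-wavelength spacing, steering angle, and all other values are illustrative assumptions, not details of the Tokyo Tech design.

    import numpy as np

    # Toy model of a 4-element linear phased array (illustrative values only).
    c = 3e8                       # speed of light, m/s
    f = 300e9                     # carrier frequency, Hz (300 GHz band)
    lam = c / f                   # wavelength
    d = lam / 2                   # assumed half-wavelength element spacing
    n = np.arange(4)              # four antenna elements

    steer_deg = 20                # assumed desired beam direction
    # Progressive phase shift applied to each element to steer the beam.
    phase = -2 * np.pi * d / lam * n * np.sin(np.radians(steer_deg))

    angles = np.radians(np.linspace(-90, 90, 361))
    # Array factor: coherent sum of the element contributions in each direction.
    af = np.abs(np.exp(1j * (2 * np.pi * d / lam * np.outer(np.sin(angles), n) + phase)).sum(axis=1))

    print("peak response near", round(float(np.degrees(angles[np.argmax(af)])), 1), "degrees")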
    The proposed transceiver uses a subharmonic mixer, which is compatible with bidirectional operation and requires a local oscillator with a comparatively lower frequency. However, this type of mixing results in low output power, which led the team to resort to an old yet functional technique to boost it. Professor Kenichi Okada from Tokyo Tech, who led the study, explains: “Outphasing is a method generally used to improve the efficiency of power amplifiers by enabling their operation at output powers close to the point where they no longer behave linearly — that is, without distortion. In our work, we used this approach to increase the transmitted output power by operating the mixers at their saturated output power.” Another notable feature of the new transceiver is its excellent cancellation of local oscillator feedthrough (a “leakage” from the local oscillator through the mixer and onto the output) and of the image frequency (a common type of interference for the method of reception used).
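    The outphasing principle itself is easy to state: a varying-envelope signal is split into two constant-envelope components whose relative phase encodes the amplitude, so each branch can run at saturation while their sum restores the original signal. The toy calculation below illustrates only that decomposition; it is a sketch of the general technique, not of the actual mixer circuit, and all waveform parameters are invented.

    import numpy as np

    # Outphasing (LINC) decomposition sketch with illustrative values.
    t = np.linspace(0, 1e-9, 1000)
    envelope = 0.5 + 0.4 * np.sin(2 * np.pi * 2e9 * t)       # example amplitude modulation
    phi = 2 * np.pi * 10e9 * t                                # example carrier phase

    a_max = 1.0
    theta = np.arccos(np.clip(envelope / a_max, -1.0, 1.0))   # outphasing angle

    # Two constant-envelope branches; each can be driven at saturated output power.
    s1 = 0.5 * a_max * np.exp(1j * (phi + theta))
    s2 = 0.5 * a_max * np.exp(1j * (phi - theta))

    # Their sum reproduces the original varying-envelope signal.
    assert np.allclose(np.abs(s1 + s2), envelope)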
    The entire transceiver was implemented in an area as small as 4.17 mm². It achieved maximum rates of 26 Gbaud for transmission and 18 Gbaud for reception, outclassing most state-of-the-art solutions. Excited about the results, Okada remarks: “Our work demonstrates the first implementation of a wideband CMOS phased-array system that operates at frequencies higher than 200 GHz.” Let us hope this study helps us squeeze more juice out of CMOS technology for upcoming applications in wireless communications!

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  • 'Audeo' teaches artificial intelligence to play the piano

    Anyone who’s been to a concert knows that something magical happens between the performers and their instruments. It transforms music from being just “notes on a page” to a satisfying experience.
    A University of Washington team wondered if artificial intelligence could recreate that delight using only visual cues — a silent, top-down video of someone playing the piano. The researchers used machine learning to create a system, called Audeo, that creates audio from silent piano performances. When the group tested the music Audeo created with music-recognition apps, such as SoundHound, the apps correctly identified the piece Audeo played about 86% of the time. For comparison, these apps identified the piece in the audio tracks from the source videos 93% of the time.
    The researchers presented Audeo Dec. 8 at the NeurIPS 2020 conference.
    “To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” said senior author Eli Shlizerman, an assistant professor in both the applied mathematics and the electrical and computer engineering departments. “An algorithm needs to figure out the cues, or ‘features,’ in the video frames that are related to generating music, and it needs to ‘imagine’ the sound that’s happening in between the video frames. It requires a system that is both precise and imaginative. The fact that we achieved music that sounded pretty good was a surprise.”
    Audeo uses a series of steps to decode what’s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram over time. Then it needs to translate that diagram into something that a music synthesizer would actually recognize as a sound a piano would make. This second step cleans up the data and adds in more information, such as how strongly each key is pressed and for how long.
    “If we attempt to synthesize music from the first step alone, we would find the quality of the music to be unsatisfactory,” Shlizerman said. “The second step is like how a teacher goes over a student composer’s music and helps enhance it.”
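    The article does not give implementation details, but the two-step idea (a per-frame key-press diagram, essentially a piano roll, that is then cleaned up into note events with timing and intensity) can be sketched roughly as follows. The frame rate, fixed velocity, and data layout are assumptions made for illustration, not Audeo's actual pipeline.

    import numpy as np

    def piano_roll_to_notes(roll, fps=25, velocity=64):
        """Turn a binary piano roll (88 keys x frames) into note events.

        Each event is (midi_pitch, onset_seconds, duration_seconds, velocity).
        The fixed velocity stands in for the "how strongly each key is pressed"
        information that the second stage of a system like Audeo estimates.
        """
        notes = []
        for key in range(roll.shape[0]):
            pressed = np.flatnonzero(roll[key])
            if pressed.size == 0:
                continue
            # Split the pressed frames into contiguous runs (individual notes).
            runs = np.split(pressed, np.flatnonzero(np.diff(pressed) > 1) + 1)
            for run in runs:
                notes.append((21 + key, run[0] / fps, len(run) / fps, velocity))  # MIDI 21 = A0
        return notes

    # Tiny example: middle C (key index 39) held for frames 10-20 of a silent video.
    roll = np.zeros((88, 100), dtype=int)
    roll[39, 10:21] = 1
    print(piano_roll_to_notes(roll))   # [(60, 0.4, 0.44, 64)]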
    The researchers trained and tested the system using YouTube videos of the pianist Paul Barton. The training consisted of about 172,000 video frames of Barton playing music from well-known classical composers, such as Bach and Mozart. Then they tested Audeo with almost 19,000 frames of Barton playing different music from these composers and others, such as Scott Joplin.
    Once Audeo has generated a transcript of the music, it’s time to give it to a synthesizer that can translate it into sound. Every synthesizer will make the music sound a little different — this is similar to changing the “instrument” setting on an electric keyboard. For this study, the researchers used two different synthesizers.
    “Fluidsynth makes synthesizer piano sounds that we are familiar with. These are somewhat mechanical-sounding but pretty accurate,” Shlizerman said. “We also used PerfNet, a new AI synthesizer that generates richer and more expressive music. But it also generates more noise.”
    Audeo was trained and tested only on Paul Barton’s piano videos. Future research is needed to see how well it could transcribe music for any musician or piano, Shlizerman said.
    “The goal of this study was to see if artificial intelligence could generate music that was played by a pianist in a video recording — though we were not aiming to replicate Paul Barton because he is such a virtuoso,” Shlizerman said. “We hope that our study enables novel ways to interact with music. For example, one future application is that Audeo can be extended to a virtual piano with a camera recording just a person’s hands. Also, by placing a camera on top of a real piano, Audeo could potentially assist in new ways of teaching students how to play.”
    Kun Su and Xiulong Liu, both doctoral students in electrical and computer engineering, are co-authors on this paper. This research was funded by the Washington Research Foundation Innovation Fund as well as the applied mathematics and electrical and computer engineering departments.

    Story Source:
    Materials provided by University of Washington. Original written by Sarah McQuate. Note: Content may be edited for style and length.

  • Shopping online? Here's what you should know about user reviews

    If you’re about to buy something online and its only customer review is negative, you’d probably reconsider the purchase, right? It turns out a product’s first review can have an outsized effect on the item’s future — it can even cause the product to fail.
    Shoppers, retailers and manufacturers alike feel the effects of customer reviews. Researchers at the University of Florida’s Warrington College of Business looked at the influence of the first review after noticing the exact same products getting positive reviews on one retailer’s website but negative reviews on others, said Sungsik Park, Ph.D., who studied the phenomenon as a doctoral student at UF.
    “Why would a product receive a 4.7-star rating with 100 reviews on Amazon, but only four or five reviews with a two-star rating on Walmart or Best Buy?” Park wondered.
    To find out, Park — now an assistant professor at the Darla Moore School of Business at the University of South Carolina — teamed up with UF professors Jinhong Xie, Ph.D., and Woochoel Shin, Ph.D., to analyze what might cause the variation. By comparing identical vacuum cleaners, toasters and digital cameras on Amazon and Best Buy, they were able to isolate the first review as the key variable in how the product fared. They showed that the first review can affect a product’s overall reviews for up to three years, influencing both the number and the tone of later reviews.
    “The first review has the potential to sway the entire evolution path of online consumer reviews,” Shin said.
    How could one review have such a lasting impact? When the first review on a retailer’s site was positive, the product went on to garner a larger number of reviews overall, and they were more likely to be positive. When a product got a negative first review, fewer people were willing to take a chance on buying it, so it had fewer opportunities to receive positive reviews, creating a lingering impact from the first unhappy customer.
    “Once you think about how user reviews are generated, it makes sense,” Park said.
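    A toy simulation makes that feedback loop concrete; the purchase and review probabilities below are invented for illustration and are not estimates from the study.

    import random

    def simulate_reviews(first_review_positive, shoppers=2000, seed=0):
        """Toy model: shoppers buy more often when the average rating looks good,
        so a negative first review suppresses later purchases and later reviews."""
        random.seed(seed)
        ratings = [5 if first_review_positive else 1]
        for _ in range(shoppers):
            avg = sum(ratings) / len(ratings)
            p_buy = 0.05 + 0.10 * (avg / 5)                  # assumed purchase propensity
            if random.random() < p_buy and random.random() < 0.3:   # assume 30% of buyers review
                ratings.append(random.choice([3, 4, 5, 5, 5]))       # assumed product quality
        return len(ratings) - 1, round(sum(ratings) / len(ratings), 2)

    for first in (True, False):
        count, avg = simulate_reviews(first)
        label = "positive first review:" if first else "negative first review:"
        print(label, count, "later reviews, average rating", avg)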
    The findings, published in the journal Marketing Science, suggest that retailers and manufacturers should take steps to detect negative first reviews and mitigate their impact.
    Firms generally monitor their online reviews and evaluate their strategies accordingly, Xie explained. “However, they do so by focusing on average rating rather than a single rating, and after the product has sufficient time to be evaluated by consumers. Our research suggests that firms need to pay attention to a special single review (i.e., the first one) as soon as it is posted.”
    Consumers, on the other hand, might want to check multiple sites’ reviews before they rule out a product. If you’re looking at several sites to compare prices, Park suggests comparison shopping reviews, too. (For big ticket items, Park also checks third-party reviews like Consumer Reports.)
    Because shoppers consider user reviews more trustworthy than information from advertising, it’s important to understand the factors that could skew those ratings.
    “We want consumers to know that this information can be easily distorted,” Park said.

    Story Source:
    Materials provided by University of Florida. Original written by Alisson Clark. Note: Content may be edited for style and length.

  • Using Artificial Intelligence to prevent harm caused by immunotherapy

    Researchers at Case Western Reserve University, using artificial intelligence (AI) to analyze simple tissue scans, say they have discovered biomarkers that could tell doctors which lung cancer patients might actually get worse from immunotherapy.
    Until recently, researchers and oncologists had placed these lung cancer patients into two broad categories: those who would benefit from immunotherapy, and those who likely would not.
    But a third category — patients called hyper-progressors who would actually be harmed by immunotherapy, including a shortened lifespan after treatment — has begun to emerge, said Pranjal Vaidya, a PhD student in biomedical engineering and researcher at the university’s Center for Computational Imaging and Personalized Diagnostics (CCIPD).
    “This is a significant subset of patients who should potentially avoid immunotherapy entirely,” said Vaidya, first author on a 2020 paper announcing the findings in the Journal for Immunotherapy of Cancer. “Eventually, we would want this to be integrated into clinical settings, so that the doctors would have all the information needed to make the call for each individual patient.”
    Ongoing research into immunotherapy
    Currently, only about 20% of all cancer patients will actually benefit from immunotherapy, a treatment that differs from chemotherapy in that it uses drugs to help the immune system fight cancer, while chemotherapy uses drugs to directly kill cancer cells, according to the National Cancer Institute.

    The CCIPD, led by Anant Madabhushi, Donnell Institute Professor of Biomedical Engineering, has become a global leader in the detection, diagnosis and characterization of various cancers and other diseases by meshing medical imaging, machine learning and AI.
    This new work follows other recent research by CCIPD scientists which has demonstrated that AI and machine learning can be used to predict which lung cancer patients will benefit from immunotherapy.
    In this and previous research, scientists from Case Western Reserve and Cleveland Clinic essentially teach computers to seek and identify patterns in CT scans taken when lung cancer is first diagnosed, revealing information that would have been useful to know before treatment.
    And while many cancer patients have benefitted from immunotherapy, researchers are seeking a better way to identify who would most likely respond to those treatments.
    “This is an important finding because it shows that radiomic patterns from routine CT scans are able to discern three kinds of response in lung cancer patients undergoing immunotherapy treatment — responders, non-responders and the hyper-progressors,” said Madabhushi, senior author of the study.

    “There are currently no validated biomarkers to distinguish this subset of high risk patients that not only don’t benefit from immunotherapy but may in fact develop rapid acceleration of disease on treatment,” said Pradnya Patil, MD, FACP, associate staff at Taussig Cancer Institute, Cleveland Clinic, and study author.
    “Analysis of radiomic features on pre-treatment routinely performed scans could provide a non-invasive means to identify these patients,” Patil said. “This could prove to be an invaluable tool for treating clinicians while determining optimal systemic therapy for their patients with advanced non-small cell lung cancer.”
    Information outside the tumor
    As with previous cancer research at the CCIPD, scientists again found some of the most significant clues about which patients would be harmed by immunotherapy outside the tumor itself.
    “We noticed the radiomic features outside the tumor were more predictive than those inside the tumor, and changes in the blood vessels surrounding the nodule were also more predictive,” Vaidya said.
    This most recent research was conducted with data collected from 109 patients with non-small cell lung cancer being treated with immunotherapy, she said.

  • Machine-learning model helps determine protein structures

    Cryo-electron microscopy (cryo-EM) allows scientists to produce high-resolution, three-dimensional images of tiny molecules such as proteins. This technique works best for imaging proteins that exist in only one conformation, but MIT researchers have now developed a machine-learning algorithm that helps them identify multiple possible structures that a protein can take.
    Unlike AI techniques that aim to predict protein structure from sequence data alone, cryo-EM determines structures experimentally: the microscope produces hundreds of thousands, or even millions, of two-dimensional images of protein samples frozen in a thin layer of ice. Computer algorithms then piece together these images, taken from different angles, into a three-dimensional representation of the protein in a process termed reconstruction.
    In a Nature Methods paper, the MIT researchers report new AI-based software for reconstructing multiple structures and motions of the imaged protein — a major goal in the protein science community. Instead of using the traditional representation of protein structure as electron-scattering intensities on a 3D lattice, which is impractical for modeling multiple structures, the researchers introduced a new neural network architecture that can efficiently generate the full ensemble of structures in a single model.
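    As a rough sketch of the general idea behind such a representation, a coordinate-based ("implicit") network conditioned on a latent variable can answer density queries for many conformations from a single model. The layer sizes and all other details below are arbitrary assumptions, not the authors' published architecture.

    import torch
    import torch.nn as nn

    class ConformationDecoder(nn.Module):
        """Minimal sketch of an implicit (coordinate-based) volume model.

        Instead of storing densities on a fixed 3D lattice, the network maps a
        3D coordinate plus a latent "conformation" vector z to a density value,
        so one model can represent a whole ensemble of structures.
        """
        def __init__(self, z_dim=8, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + z_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, coords, z):
            # coords: (N, 3) query points; z: (z_dim,) latent conformation state.
            z = z.expand(coords.shape[0], -1)
            return self.net(torch.cat([coords, z], dim=-1)).squeeze(-1)

    decoder = ConformationDecoder()
    grid = torch.rand(10, 3)                        # a few query coordinates
    density_a = decoder(grid, torch.zeros(8))       # one conformation
    density_b = decoder(grid, torch.ones(8))        # a different conformation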
    “With the broad representation power of neural networks, we can extract structural information from noisy images and visualize detailed movements of macromolecular machines,” says Ellen Zhong, an MIT graduate student and the lead author of the paper.
    With their software, they discovered protein motions from imaging datasets where only a single static 3D structure was originally identified. They also visualized large-scale flexible motions of the spliceosome — a protein complex that coordinates the splicing of the protein coding sequences of transcribed RNA.
    “Our idea was to try to use machine-learning techniques to better capture the underlying structural heterogeneity, and to allow us to inspect the variety of structural states that are present in a sample,” says Joseph Davis, the Whitehead Career Development Assistant Professor in MIT’s Department of Biology.

    Davis and Bonnie Berger, the Simons Professor of Mathematics at MIT and head of the Computation and Biology group at the Computer Science and Artificial Intelligence Laboratory, are the senior authors of the study, which appears today in Nature Methods. MIT postdoc Tristan Bepler is also an author of the paper.
    Visualizing a multistep process
    The researchers demonstrated the utility of their new approach by analyzing structures that form during the process of assembling ribosomes — the cell organelles responsible for reading messenger RNA and translating it into proteins. Davis began studying the structure of ribosomes while a postdoc at the Scripps Research Institute. Ribosomes have two major subunits, each of which contains many individual proteins that are assembled in a multistep process.
    To study the steps of ribosome assembly in detail, Davis stalled the process at different points and then took electron microscope images of the resulting structures. At some points, blocking assembly resulted in accumulation of just a single structure, suggesting that there is only one way for that step to occur. However, blocking other points resulted in many different structures, suggesting that the assembly could occur in a variety of ways.
    Because some of these experiments generated so many different protein structures, traditional cryo-EM reconstruction tools did not work well to determine what those structures were.

    “In general, it’s an extremely challenging problem to try to figure out how many states you have when you have a mixture of particles,” Davis says.
    After starting his lab at MIT in 2017, he teamed up with Berger to use machine learning to develop a model that can use the two-dimensional images produced by cryo-EM to generate all of the three-dimensional structures found in the original sample.
    In the new Nature Methods study, the researchers demonstrated the power of the technique by using it to identify a new ribosomal state that hadn’t been seen before. Previous studies had suggested that as a ribosome is assembled, large structural elements, which are akin to the foundation for a building, form first. Only after this foundation is formed are the “active sites” of the ribosome, which read messenger RNA and synthesize proteins, added to the structure.
    In the new study, however, the researchers found that in a very small subset of ribosomes, about 1 percent, a structure that is normally added at the end actually appears before assembly of the foundation. To account for that, Davis hypothesizes that it might be too energetically expensive for cells to ensure that every single ribosome is assembled in the correct order.
    “The cells are likely evolved to find a balance between what they can tolerate, which is maybe a small percentage of these types of potentially deleterious structures, and what it would cost to completely remove them from the assembly pathway,” he says.
    Viral proteins
    The researchers are now using this technique to study the coronavirus spike protein, which is the viral protein that binds to receptors on human cells and allows the virus to enter them. The spike protein has three subunits, each with a receptor binding domain (RBD) that can point either up or down.
    “For me, watching the pandemic unfold over the past year has emphasized how important front-line antiviral drugs will be in battling similar viruses, which are likely to emerge in the future. As we start to think about how one might develop small molecule compounds to force all of the RBDs into the ‘down’ state so that they can’t interact with human cells, understanding exactly what the ‘up’ state looks like and how much conformational flexibility there is will be informative for drug design. We hope our new technique can reveal these sorts of structural details,” Davis says.
    The research was funded by the National Science Foundation Graduate Research Fellowship Program, the National Institutes of Health, and the MIT Jameel Clinic for Machine Learning and Health. This work was supported by the MIT Satori computation cluster hosted at the MGHPCC.

  • Researchers create 'whirling' nano-structures in anti-ferromagnets

    Today’s digital world generates vast amounts of data every second. Hence, there is a need for memory chips that can store more data in less space and can read and write that data faster while using less energy.
    Researchers from the National University of Singapore (NUS), working with collaborators from the University of Oxford, Diamond Light Source (the United Kingdom’s national synchrotron science facility) and University of Wisconsin Madison, have now developed an ultra-thin material with unique properties that could eventually achieve some of these goals. Their results were first published online in the journal Nature on 4 February 2021.
    Storing data in anti-ferromagnets
    In existing ferromagnetic memory devices such as hard drives, information is stored as specific patterns of atoms (called bits), within which all the little magnetic poles are oriented in the same direction. This arrangement makes them slow and susceptible to damage by stray magnetic fields. In contrast, a special class of materials called anti-ferromagnets, in which the magnetic poles on adjacent atoms are aligned oppositely, is emerging as important for future memory technology.
    In particular, there is a lot of interest in creating special magnetic nano-patterns in anti-ferromagnets that are shaped as whirls or vortices. In essence, each pattern consists of many little magnetic poles winding around a central core region in a clockwise or anti-clockwise manner, very much like air circulating inside a tornado or whirlwind. When realised experimentally, combinations of these anti-ferromagnetic whirls would be quite useful, as they are very stable structures and can potentially be moved along magnetic ‘race tracks’ at whirlwind speeds of a few kilometres per second!
    They could act as new types of information bits that not only store memory but also participate in computational operations. Hence, they would enable a new generation of chips that are significantly faster yet more energy efficient than today’s devices.
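    The geometry of such a whirl is easy to picture with a toy calculation: assign each lattice site a moment direction that winds around a central core, clockwise or anti-clockwise. The sketch below only illustrates that winding pattern; it is not a physical simulation of the iron-oxide films, and in a real anti-ferromagnet the moments on adjacent atoms would additionally alternate in sign.

    import numpy as np

    def vortex_texture(size=9, winding=+1):
        """Toy in-plane 'whirl': each site's moment circulates around the core.

        winding=+1 gives anti-clockwise circulation, winding=-1 clockwise.
        Purely illustrative; not a model of the material in the study.
        """
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        angle = np.arctan2(y, x) + winding * np.pi / 2   # moment direction at each site
        return np.cos(angle), np.sin(angle)              # (mx, my) components

    mx, my = vortex_texture()
    print(np.round(mx, 2))   # x-components of the moments, circulating about the centre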
    Experimental discovery of whirls
    To date, constructing and manipulating patterns in anti-ferromagnetic materials has been very challenging, as they appear almost non-magnetic from afar. “Standard approaches for control, such as using external fields, fail to work on these materials. Therefore, to realise these elusive anti-ferromagnetic whirls, we came up with a novel strategy that combined high-quality film synthesis from materials engineering, phase transitions from physics and topology from mathematics,” explained Dr Hariom Jani, who is the lead author of the paper and a Research Fellow from the NUS Department of Physics.
    To grow these materials, the researchers fired a laser at an extremely common and cheap material — iron-oxide, which is the main component of rust. By using ultra-short pulses of laser light, they created a hot vapour of atomic particles that formed a thin film of iron-oxide on a surface.
    Professor Thirumalai Venky Venkatesan, who led the NUS group and invented the pulsed laser deposition process for making the thin film, highlighted the versatility of the team’s approach. “The deposition process allows precise atom-level control during the growth, which is important for making high-quality materials. Our work points to a large class of anti-ferromagnetic material systems, containing phase transitions, in which one can study the formation and control of these whirls for eventual technological applications,” he said.
    Explaining the underlying mechanism, Professor Paolo Radaelli, leader of the Oxford group, shared, “We drew inspiration from a celebrated idea in cosmological physics, from nearly 50 years ago, which proposed that a phase transition in the early universe, during the expansion after the Big Bang, may have resulted in the formation of cosmic whirls. Accordingly, we investigated an analogous magnetic process occurring in high-quality iron-oxide, which allowed us to create at will a large family of anti-ferromagnetic whirls.”
    The team’s next step is to construct innovative circuits that can electrically control the whirls.

  • New quantum receiver the first to detect entire radio frequency spectrum

    A new quantum sensor can analyze the full radio-frequency spectrum and real-world signals, unleashing new potential for soldier communications, spectrum awareness and electronic warfare.
    Army researchers built the quantum sensor, which can sample the radio-frequency spectrum — from zero frequency up to 20 GHz — and detect AM and FM radio, Bluetooth, Wi-Fi and other communication signals.
    The Rydberg sensor uses laser beams to create highly excited Rydberg atoms directly above a microwave circuit, to boost and home in on the portion of the spectrum being measured. The Rydberg atoms are sensitive to the circuit’s voltage, enabling the device to be used as a sensitive probe for the wide range of signals in the RF spectrum.
    “All previous demonstrations of Rydberg atomic sensors have only been able to sense small and specific regions of the RF spectrum, but our sensor now operates continuously over a wide frequency range for the first time,” said Dr. Kevin Cox, a researcher at the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory. “This is a really important step toward proving that quantum sensors can provide a new, and dominant, set of capabilities for our Soldiers, who are operating in an increasingly complex electro-magnetic battlespace.”
    The Rydberg spectrum analyzer has the potential to surpass fundamental limitations of traditional electronics in sensitivity, bandwidth and frequency range. Because of this, the lab’s Rydberg spectrum analyzer and other quantum sensors have the potential to unlock a new frontier of Army sensors for spectrum awareness, electronic warfare, sensing and communications — part of the Army’s modernization strategy.
    “Devices that are based on quantum constituents are one of the Army’s top priorities to enable technical surprise in the competitive future battlespace,” said Army researcher Dr. David Meyer. “Quantum sensors in general, including the one demonstrated here, offer unparalleled sensitivity and accuracy to detect a wide range of mission-critical signals.”
    The peer-reviewed journal Physical Review Applied published the researchers’ findings, “Waveguide-coupled Rydberg spectrum analyzer from 0 to 20 GHz,” co-authored by Army researchers Drs. David Meyer, Paul Kunz, and Kevin Cox.
    The researchers plan additional development to improve the signal sensitivity of the Rydberg spectrum analyzer, aiming to outperform existing state-of-the-art technology.
    “Significant physics and engineering effort is still necessary before the Rydberg analyzer can integrate into a field-testable device,” Cox said. “One of the first steps will be understanding how to retain and improve the device’s performance as the sensor size is decreased. The Army has emerged as a leading developer of Rydberg sensors, and we expect more cutting-edge research to result as this futuristic technology concept quickly becomes a reality.”

    Story Source:
    Materials provided by U.S. Army Research Laboratory. Note: Content may be edited for style and length.

  • State-funded pre-K may enhance math achievement

    In the first longitudinal study to follow Georgia pre-K students through middle school, Stacey Neuharth-Pritchett, associate dean for academic programs and professor in UGA’s Mary Frances Early College of Education, found that participating in pre-K programs positively predicted mathematical achievement in students through seventh grade.
    “Students who participated in the study were twice as likely to meet the state standards in their mathematics achievement,” said Neuharth-Pritchett. “School becomes more challenging as one progresses through the grades, and so if in middle school, students are still twice as likely to meet the state standards, it’s clear that something that happened early on was influencing their trajectory.”
    The study found that, in fourth through seventh grades, the odds of a pre-K participant in the study meeting Georgia’s state academic standards on the statewide standardized test were 1.67-2.10 times greater than the odds for a nonparticipant, providing evidence of sustained benefits of state-funded pre-K programs.
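    For readers unfamiliar with odds ratios, the arithmetic is simple; the counts below are invented purely to illustrate how a ratio in the reported 1.67-2.10 range could arise, and are not figures from the study.

    # Illustrative odds-ratio arithmetic (invented counts, not study data).
    # Suppose 60 of 100 pre-K participants meet the standard and 42 of 100 nonparticipants do.
    odds_prek = 60 / 40          # odds of meeting the standard with pre-K
    odds_none = 42 / 58          # odds of meeting the standard without pre-K
    print(round(odds_prek / odds_none, 2))   # ~2.07, within the reported 1.67-2.10 range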
    “Pre-K is a critical space where children experience success, and it sets them on a trajectory for being successful as they make the transition to kindergarten,” she said. “The hope is that when children are successful early in school, they are more likely to be engaged as they progress and more likely to complete high school.”
    Although quality learning experiences during the early years of development have been shown to provide the skills and knowledge for later mathematics achievement, access and entry to high-quality preschool programs remain unequal across the nation.
    “Our study looked at a high-needs school district that enrolled children from vulnerable situations in terms of economics and access to early learning experiences,” said Neuharth-Pritchett. “A number of the children in the study had not had any other formative experiences before they went to kindergarten.”
    Educational experiences are seen as foundational to later school success, with some studies documenting other beneficial outcomes for students who attend pre-K, including a higher chance of completing high school, fewer mental health concerns, less reliance on the welfare system and more. However, students from low-income families often have more limited opportunities to learn at home as well as in pre-K programs.

    While some families are knowledgeable about providing their children with basic mathematical concepts and other foundational skills for a smooth home-to-school transition, other families might not be aware that children are expected to have mastered a number of these foundational skills before entering kindergarten.
    “Equal access to pre-K education has a long history that goes all the way back to the war on poverty. Part of the thinking during the 1960s was that such early learning opportunities would provide the high-quality preschool education that could level the educational playing field between those with economic resources and those without,” she said. “Our study indicated sustained benefits for children’s early learning experiences that persist into the elementary and middle school years.”
    Some implications of the study for policymakers to consider include ensuring more equitable access to pre-K programs and hiring highly skilled teachers to promote children’s learning and development. More than half of the pre-K teachers involved in the study held either a master’s or specialist degree, indicating the importance and influence of high-quality, experienced instructors on children’s academic success.
    Because of a change in program support for the Georgia Prekindergarten Program during Gov. Nathan Deal’s term, a high proportion of pre-K teachers are now very early in their teaching careers.
    Along with Jisu Han, an assistant professor at Kyung Hee University and co-author of the study, Neuharth-Pritchett plans to continue following the study’s participants as they progress through high school.
    “The state of Georgia invests substantial resources into this program, so it’s good that these outcomes can be cited for its efficacy,” said Neuharth-Pritchett. “The data from this study gives a much more longitudinal view of success and suggests these programs contribute to children’s education and success. Our results ultimately contribute to evidence supporting early learning and factors influencing long-term academic success for Georgia’s children.”

    Story Source:
    Materials provided by University of Georgia. Note: Content may be edited for style and length.