More stories

  • N-channel diamond field-effect transistor

    A NIMS research team has developed the world’s first n-channel diamond MOSFET (metal-oxide-semiconductor field-effect transistor). The developed n-channel diamond MOSFET provides a key step toward CMOS (complementary metal-oxide-semiconductor, one of the most widely used computer-chip technologies) integrated circuits for harsh-environment applications, as well as toward the development of diamond power electronics.
    Semiconductor diamond has outstanding physical properties, including an ultra-wide bandgap of 5.5 eV, high carrier mobilities and high thermal conductivity, which make it promising for high-performance, high-reliability applications under extreme environmental conditions such as high temperatures and high levels of radiation (e.g., in proximity to nuclear reactor cores). Diamond electronics not only ease the thermal-management demands placed on conventional semiconductors but are also more energy efficient, withstand much higher breakdown voltages and tolerate harsh environments. Moreover, as diamond growth technologies have matured and power electronics, spintronics and microelectromechanical system (MEMS) sensors capable of operating under high-temperature and strong-radiation conditions have been developed, demand has grown for peripheral circuitry based on diamond CMOS devices suitable for monolithic integration. Fabricating CMOS integrated circuits requires both p- and n-channel MOSFETs, just as in conventional silicon electronics. However, n-channel diamond MOSFETs had yet to be developed.
    This NIMS research team developed a technique to grow high-quality monocrystalline n-type diamond semiconductors with smooth and flat terraces at the atomic level by doping diamond with a low concentration of phosphorus. Using this technique, the team succeeded in fabricating an n-channel diamond MOSFET for the first time in the world. This MOSFET is composed mainly of an n-channel diamond semiconductor layer atop another diamond layer doped with a high concentration of phosphorus. The use of the latter diamond layer significantly reduced source and drain contact resistance. The team confirmed that the fabricated diamond MOSFET actually functioned as an n-channel transistor. In addition, the team verified the excellent high-temperature performance of the MOSFET as indicated by its field-effect mobility — an important transistor performance indicator — of approximately 150 cm²/(V·s) at 300°C.
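    For context, field-effect mobility in the linear operating regime is conventionally extracted from a transistor’s transfer characteristics; the relation below is a standard textbook expression, not a detail reported by NIMS:

      \mu_{FE} = \frac{L}{W \, C_{ox} \, V_{DS}} \cdot \frac{\partial I_D}{\partial V_{GS}}

    where L and W are the channel length and width, C_ox is the gate-insulator capacitance per unit area, V_DS is the drain-source voltage, and the derivative of drain current with respect to gate voltage is the transconductance.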
    These achievements are expected to facilitate the development of CMOS integrated circuits for energy-efficient power electronics, spintronic devices and MEMS sensors that operate in harsh environments.

  • AI can now detect COVID-19 in lung ultrasound images

    Artificial intelligence can spot COVID-19 in lung ultrasound images much like facial recognition software can spot a face in a crowd, new research shows.
    The findings boost AI-driven medical diagnostics and bring health care professionals closer to being able to quickly diagnose patients with COVID-19 and other pulmonary diseases with algorithms that comb through ultrasound images to identify signs of disease.
    The findings, newly published in Communications Medicine, culminate an effort that started early in the pandemic when clinicians needed tools to rapidly assess legions of patients in overwhelmed emergency rooms.
    “We developed this automated detection tool to help doctors in emergency settings with high caseloads of patients who need to be diagnosed quickly and accurately, such as in the earlier stages of the pandemic,” said senior author Muyinatu Bell, the John C. Malone Associate Professor of Electrical and Computer Engineering, Biomedical Engineering, and Computer Science at Johns Hopkins University. “Potentially, we want to have wireless devices that patients can use at home to monitor progression of COVID-19, too.”
    The tool also holds potential for developing wearables that track such illnesses as congestive heart failure, which can lead to fluid overload in patients’ lungs, not unlike COVID-19, said co-author Tiffany Fong, an assistant professor of emergency medicine at Johns Hopkins Medicine.
    “What we are doing here with AI tools is the next big frontier for point of care,” Fong said. “An ideal use case would be wearable ultrasound patches that monitor fluid buildup and let patients know when they need a medication adjustment or when they need to see a doctor.”
    The AI analyzes ultrasound lung images to spot features known as B-lines, which appear as bright, vertical abnormalities and indicate inflammation in patients with pulmonary complications. It combines computer-generated images with real ultrasounds of patients — including some who sought care at Johns Hopkins.

    “We had to model the physics of ultrasound and acoustic wave propagation well enough in order to get believable simulated images,” Bell said. “Then we had to take it a step further to train our computer models to use these simulated data to reliably interpret real scans from patients with affected lungs.”
    Early in the pandemic, scientists struggled to use artificial intelligence to assess COVID-19 indicators in lung ultrasound images because of a lack of patient data and because they were only beginning to understand how the disease manifests in the body, Bell said.
    Her team developed software that can learn from a mix of real and simulated data and then discern abnormalities in ultrasound scans that indicate a person has contracted COVID-19. The tool is a deep neural network, a type of AI designed to behave like the interconnected neurons that enable the brain to recognize patterns, understand speech, and achieve other complex tasks.
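    As a rough illustration of that training setup, the sketch below pools simulated and real B-mode frames into one labelled dataset and fits a small convolutional classifier to flag B-lines. It is a hypothetical minimal example, not the team’s released code (linked at the end of this story); the folder layout, image size and network are all illustrative assumptions.

      # Minimal sketch: train a small CNN to flag B-line artifacts in lung
      # ultrasound frames, mixing simulated and real images in one dataset.
      # Hypothetical file layout and labels; not the authors' code.
      import torch
      import torch.nn as nn
      from torch.utils.data import DataLoader, ConcatDataset
      from torchvision import datasets, transforms

      transform = transforms.Compose([
          transforms.Grayscale(),          # ultrasound frames are single-channel
          transforms.Resize((128, 128)),
          transforms.ToTensor(),
      ])

      # Two ImageFolder trees, each with "b_lines/" and "normal/" subfolders
      # (hypothetical paths): one holds simulated frames, the other real scans.
      simulated = datasets.ImageFolder("data/simulated", transform=transform)
      real = datasets.ImageFolder("data/real", transform=transform)
      loader = DataLoader(ConcatDataset([simulated, real]), batch_size=32, shuffle=True)

      model = nn.Sequential(               # small CNN binary classifier
          nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Flatten(),
          nn.Linear(32 * 32 * 32, 2),      # logits: normal vs. B-lines present
      )
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      for epoch in range(10):
          for frames, labels in loader:
              optimizer.zero_grad()
              loss = loss_fn(model(frames), labels)
              loss.backward()
              optimizer.step()
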
    “Early in the pandemic, we didn’t have enough ultrasound images of COVID-19 patients to develop and test our algorithms, and as a result our deep neural networks never reached peak performance,” said first author Lingyi Zhao, who developed the software while a postdoctoral fellow in Bell’s lab and is now working at Novateur Research Solutions. “Now, we are proving that with computer-generated datasets we still can achieve a high degree of accuracy in evaluating and detecting these COVID-19 features.”
    The team’s code and data are publicly available here: https://gitlab.com/pulselab/covid19

  • Verifying the work of quantum computers

    Quantum computers of the future may ultimately outperform their classical counterparts to solve intractable problems in computer science, medicine, business, chemistry, physics, and other fields. But the machines are not there yet: They are riddled with inherent errors, which researchers are actively working to reduce. One way to study these errors is to use classical computers to simulate the quantum systems and verify their accuracy. The only catch is that as quantum machines become increasingly complex, running simulations of them on traditional computers would take years or longer.
    Now, Caltech researchers have invented a new method by which classical computers can measure the error rates of quantum machines without having to fully simulate them. The team describes the method in a paper in the journal Nature.
    “In a perfect world, we want to reduce these errors. That’s the dream of our field,” says Adam Shaw, lead author of the study and a graduate student who works in the laboratory of Manuel Endres, professor of physics at Caltech. “But in the meantime, we need to better understand the errors facing our system, so we can work to mitigate them. That motivated us to come up with a new approach for estimating the success of our system.”
    In the new study, the team performed experiments using a type of simple quantum computer known as a quantum simulator. Quantum simulators are more limited in scope than current rudimentary quantum computers and are tailored for specific tasks. The group’s simulator is made up of individually controlled Rydberg atoms — atoms in highly excited states — which they manipulate using lasers.
    One key feature of the simulator, and of all quantum computers, is entanglement — a phenomenon in which certain atoms become connected to each other without actually touching. When quantum computers work on a problem, entanglement is naturally built up in the system, invisibly connecting the atoms. Last year, Endres, Shaw, and colleagues revealed that as entanglement grows, those connections spread out in a chaotic or random fashion, meaning that small perturbations lead to big changes in the same way that a butterfly’s flapping wings could theoretically affect global weather patterns.
    This increasing complexity is believed to be what gives quantum computers the power to solve certain types of problems much faster than classical computers, such as those in cryptography in which large numbers must be quickly factored.
    But once the machines reach a certain number of connected atoms, or qubits, they can no longer be simulated using classical computers. “When you get past 30 qubits, things get crazy,” Shaw says. “The more qubits and entanglement you have, the more complex the calculations are.”
    The quantum simulator in the new study has 60 qubits, which Shaw says puts it in a regime that is impossible to simulate exactly. “It becomes a catch-22. We want to study a regime that is hard for classical computers to work in, but still rely on those classical computers to tell if our quantum simulator is correct.” To meet the challenge, Shaw and colleagues took a new approach, running classical computer simulations that allow for different amounts of entanglement. Shaw likens this to painting with brushes of different size.

    “Let’s say our quantum computer is painting the Mona Lisa as an analogy,” he says. “The quantum computer can paint very efficiently and, in theory, perfectly, but it makes errors that smear out the paint in parts of the painting. It’s like the quantum computer has shaky hands. To quantify these errors, we want our classical computer to simulate what the quantum computer has done, but our Mona Lisa would be too complex for it. It’s as if the classical computers only have giant brushes or rollers and can’t capture the finer details.
    “Instead, we have many classical computers paint the same thing with progressively finer and finer brushes, and then we squint our eyes and estimate what it would have looked like if they were perfect. Then we use that to compare against the quantum computer and estimate its errors. With many cross-checks, we were able to show this ‘squinting’ is mathematically sound and gives the answer quite accurately.”
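    In spirit, the “squinting” amounts to running approximate classical simulations whose accuracy is limited by an adjustable entanglement budget and then extrapolating the benchmark to the limit of an unlimited budget. The sketch below is a generic curve-extrapolation illustration of that idea, with a matrix-product-state bond dimension standing in for brush size; the 1/chi fitting form is an assumption for illustration, not the procedure used in the Nature paper.

      # Illustrative sketch of the "finer brushes" idea: run classical
      # (approximate) simulations whose accuracy is limited by an entanglement
      # cap -- e.g. a matrix-product-state bond dimension chi -- and
      # extrapolate the estimated fidelity to the chi -> infinity limit.
      # This is a generic curve-extrapolation sketch, not the paper's method.
      import numpy as np

      def extrapolate_fidelity(bond_dims, fidelity_estimates):
          """Fit fidelity estimates against 1/chi and read off the intercept.

          bond_dims          : entanglement caps used in the classical runs
          fidelity_estimates : benchmark value obtained at each cap
          Assumes the estimates approach the true value linearly in 1/chi,
          which is an assumption made only for illustration.
          """
          x = 1.0 / np.asarray(bond_dims, dtype=float)
          y = np.asarray(fidelity_estimates, dtype=float)
          slope, intercept = np.polyfit(x, y, deg=1)
          return intercept  # extrapolated value at 1/chi -> 0

      # Usage: pass in the caps you simulated with and the benchmark value each
      # run produced, e.g. extrapolate_fidelity([64, 128, 256, 512], f_vals).
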
    The researchers estimated that their 60-qubit quantum simulator operates with an error rate of 91 percent (or an accuracy rate of 9 percent). That may sound low, but it is, in fact, relatively high for the state of the field. For reference, the 2019 Google experiment, in which the team claimed their quantum computer outperformed classical computers, had an accuracy of 0.3 percent (though it was a different type of system than the one in this study).
    Shaw says: “We now have a benchmark for analyzing the errors in quantum computing systems. That means that as we make improvements to the hardware, we can measure how well the improvements worked. Plus, with this new benchmark, we can also measure how much entanglement is involved in a quantum simulation, another metric of its success.”
    The Nature paper titled “Benchmarking highly entangled states on a 60-atom analog quantum simulator” was funded by the National Science Foundation (partially via Caltech’s Institute for Quantum Information and Matter, or IQIM), the Defense Advanced Research Projects Agency (DARPA), the Army Research Office, the U.S. Department of Energy’s Quantum Systems Accelerator, the Troesh postdoctoral fellowship, the German National Academy of Sciences Leopoldina, and Caltech’s Walter Burke Institute for Theoretical Physics. Other Caltech authors include former postdocs Joonhee Choi and Pascal Scholl; Ran Finkelstein, Troesh Postdoctoral Scholar Research Associate in Physics; and Andreas Elben, Sherman Fairchild Postdoctoral Scholar Research Associate in Theoretical Physics. Zhuo Chen, Daniel Mark, and Soonwon Choi (BS ’12) of MIT are also authors.

  • Universal controller could push robotic prostheses, exoskeletons into real-world use

    Robotic exoskeletons designed to help humans with walking or physically demanding work have been the stuff of sci-fi lore for decades. Remember Ellen Ripley in that Power Loader in Alien? Or the crazy mobile platform George McFly wore in 2015 in Back to the Future, Part II because he threw his back out?
    Researchers are working on real-life robotic assistance that could protect workers from painful injuries and help stroke patients regain their mobility. So far, however, such devices have required extensive calibration and context-specific tuning, which keeps them largely limited to research labs.
    Mechanical engineers at Georgia Tech may be on the verge of changing that, allowing exoskeleton technology to be deployed in homes, workplaces, and more.
    A team of researchers in Aaron Young’s lab has developed a universal approach to controlling robotic exoskeletons that requires no training, no calibration, and no adjustments to complicated algorithms. Instead, users can don the “exo” and go.
    Their system uses a kind of artificial intelligence called deep learning to autonomously adjust how the exoskeleton provides assistance, and they’ve shown it works seamlessly to support walking, standing, and climbing stairs or ramps. They described their “unified control framework” March 20 in Science Robotics.
    “The goal was not just to provide control across different activities, but to create a single unified system. You don’t have to press buttons to switch between modes or have some classifier algorithm that tries to predict that you’re climbing stairs or walking,” said Young, associate professor in the George W. Woodruff School of Mechanical Engineering.
    Machine Learning as Translator
    Most previous work in this area has focused on one activity at a time, like walking on level ground or up a set of stairs. The algorithms involved typically try to classify the environment to provide the right assistance to users.

    The Georgia Tech team threw that out the window. Instead of focusing on the environment, they focused on the human — what’s happening with muscles and joints — which meant the specific activity didn’t matter.
    “We stopped trying to bucket human movement into what we call discretized modes — like level ground walking or climbing stairs — because real movement is a lot messier,” said Dean Molinaro, lead author on the study and a recently graduated Ph.D. student in Young’s lab. “Instead, we based our controller on the user’s underlying physiology. What the body is doing at any point in time will tell us everything we need to know about the environment. Then we used machine learning essentially as the translator between what the sensors are measuring on the exoskeleton and what torques the muscles are generating.”
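    A rough sketch of what such a sensor-to-torque “translator” could look like is shown below: a small network maps a short window of wearable-sensor readings directly to an assistive hip torque, with no activity classifier in the loop. Window length, sensor count, network size and the assistance gain are illustrative assumptions, not the specifications of the Georgia Tech controller.

      # Minimal sketch of an activity-agnostic torque estimator: a neural
      # network maps a short window of wearable-sensor readings directly to
      # the hip torque to assist, with no mode classifier in the loop.
      # Input/output sizes and sensor choices are illustrative assumptions.
      import torch
      import torch.nn as nn

      WINDOW = 30       # time steps of sensor history fed to the model
      N_SENSORS = 12    # e.g. joint encoders + IMU channels (assumed)

      torque_net = nn.Sequential(
          nn.Flatten(),                       # (batch, WINDOW, N_SENSORS) -> vector
          nn.Linear(WINDOW * N_SENSORS, 128), nn.ReLU(),
          nn.Linear(128, 64), nn.ReLU(),
          nn.Linear(64, 1),                   # estimated biological hip torque
      )

      def control_step(sensor_window: torch.Tensor, gain: float = 0.3) -> float:
          """Return the assistive torque command for the current time step.

          sensor_window : tensor of shape (1, WINDOW, N_SENSORS)
          gain          : fraction of the estimated biological torque to supply
                          (partial assistance, as described in the article).
          """
          with torch.no_grad():
              estimated_torque = torque_net(sensor_window).item()
          return gain * estimated_torque
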
    With the controller delivering assistance through a hip exoskeleton developed by the team, they found they could reduce users’ metabolic and biomechanical effort: they expended less energy, and their joints didn’t have to work as hard compared to not wearing the device at all.
    In other words, wearing the exoskeleton was a benefit to users, even with the extra weight added by the device itself.
    “What’s so cool about this is that it adjusts to each person’s internal dynamics without any tuning or heuristic adjustments, which is a huge difference from a lot of work in the field,” Young said. “There’s no subject-specific tuning or changing parameters to make it work.”
    The control system in this study is designed for partial-assist devices. These exoskeletons support movement rather than completely replacing the effort.

    The team, which also included Molinaro and Inseung Kang, another former Ph.D. student now at Carnegie Mellon University, used an existing algorithm and trained it on mountains of force and motion-capture data they collected in Young’s lab. Subjects of different genders and body types wore the powered hip exoskeleton and walked at varying speeds on force plates, climbed height-adjustable stairs, walked up and down ramps, and transitioned between those movements.
    And like the motion-capture studios used to make movies, every movement was recorded and cataloged to understand what joints were doing for each activity.
    The Science Robotics study is “application agnostic,” as Young put it. Yet their controller offers the first bridge to real-world viability for robotic exoskeleton devices.
    Imagine how robotic assistance could benefit soldiers, airline baggage handlers, or any workers doing physically demanding jobs where musculoskeletal injury risk is high.

  • Quantum talk with magnetic disks

    Quantum computers promise to tackle some of the most challenging problems facing humanity today. While much attention has been directed towards the computation of quantum information, the transduction of information within quantum networks is equally crucial in materializing the potential of this new technology. Addressing this need, a research team at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) is now introducing a new approach for transducing quantum information: the team has manipulated quantum bits, so-called qubits, by harnessing the magnetic field of magnons — wave-like excitations in a magnetic material — that occur within microscopic magnetic disks. The researchers presented their results in the journal Science Advances.
    The construction of a programmable, universal quantum computer stands as one of the most challenging engineering and scientific endeavors of our time. The realization of such a computer holds great potential for diverse industry fields such as logistics, finance, and pharmaceutics. However, the construction of a practical quantum computer has been hindered by the intrinsic fragility of how the information is stored and processed in this technology. Quantum information is encoded in qubits, which are extremely susceptible to the noise in their environment. Tiny thermal fluctuations, a fraction of a degree, could entirely disrupt the computation.
    This has prompted researchers to distribute the functionalities of quantum computers among distinct separate building blocks, in an effort to reduce error rates, and harness complementary advantages from their constituents. “However, this poses the problem of transferring the quantum information between the modules in a way that the information doesn’t go missing,” says HZDR researcher Mauricio Bejarano, first author of the publication. “Our research lies precisely in this specific niche, transducing communication between distinct quantum modules.”
    The currently established method for transferring quantum information and addressing qubits is through microwave antennas. This is the approach used by Google and IBM in their superconducting chips, the technological platform standing at the forefront of this quantum race. “We, on the other hand, address the qubits with magnons,” says HZDR physicist Helmut Schultheiß, who supervised the work. “These can be thought of as magnetic excitation waves that pass through a magnetic material. The advantage here is that the wavelength of magnons lies in the micrometer range and is significantly shorter than the centimeter waves of conventional microwave technology. Consequently, the microwave footprint of magnons takes up less space on the chip.”
    Sophisticated frequency divider
    The HZDR group investigated the interaction of magnons and qubits formed by vacancies of silicon atoms in the crystal structure of silicon carbide, a material commonly used in high-power electronics. Such types of qubits are typically called spin qubits, given the quantum information is encoded in the spin state of the vacancy. But how can magnons be utilized to control these types of qubits? “Typically, magnons are generated with microwave antennas. This poses the problem that it is very difficult to separate the microwave drive coming from the antenna from the one coming from the magnons,” explains Bejarano.
    To isolate the microwaves from the magnons, the HZDR team used an exotic magnetic phenomenon observable in microscopic magnetic disks of a nickel-iron alloy. “Due to a nonlinear process, some magnons inside the disk possess a much lower frequency than the driving frequency of the antenna. We manipulate qubits only with these lower-frequency magnons.” The research team emphasizes that it has not performed any quantum calculations yet. However, the team showed that it is fundamentally feasible to address qubits exclusively with magnons.
    Leveraging magnon power
    “To date, the quantum engineering community has not yet realized that magnons can be used to control qubits,” stresses Schultheiß. “But our experiments demonstrate that these magnetic waves could indeed be useful.” In order to further develop their approach, the team is already preparing for their future plans: they want to try to control several closely spaced individual qubits in such a way that magnons mediate their entanglement process — a prerequisite for performing quantum computations.
    Their vision is that, in the long term, magnons could be excited by direct electrical currents with such precision that they specifically and exclusively address a single qubit in an array of qubits. This would make it possible to use magnons as a programmable quantum bus to address qubits in an extremely effective manner. While there is plenty of work ahead, the group’s research highlights that combining magnonic systems with quantum technologies could provide useful insights for the development of a practical quantum computer in the future.

  • Robotic metamaterial: An endless domino effect

    If it walks like a particle, and talks like a particle… it may still not be a particle. A topological soliton is a special type of wave or dislocation which behaves like a particle: it can move around but cannot spread out and disappear like you would expect from, say, a ripple on the surface of a pond. In a new study published in Nature, researchers from the University of Amsterdam demonstrate the atypical behaviour of topological solitons in a robotic metamaterial, something which in the future may be used to control how robots move, sense their surroundings and communicate.
    Topological solitons can be found in many places and at many different length scales. For example, they take the form of kinks in coiled telephone cords and large molecules such as proteins. At a very different scale, a black hole can be understood as a topological soliton in the fabric of spacetime. Solitons play an important role in biological systems, being relevant for protein folding and morphogenesis — the development of cells or organs.
    The unique features of topological solitons — that they can move around but always retain their shape and cannot suddenly disappear — are particularly interesting when combined with so-called non-reciprocal interactions. “In such an interaction, an agent A reacts to an agent B differently to the way agent B reacts to agent A,” explains Jonas Veenstra, a PhD student at the University of Amsterdam and first author of the new publication.
    Veenstra continues: “Non-reciprocal interactions are commonplace in society and complex living systems but have long been overlooked by most physicists because they can only exist in a system out of equilibrium. By introducing non-reciprocal interactions in materials, we hope to blur the boundary between materials and machines and to create animate or lifelike materials.”
    The Machine Materials Laboratory where Veenstra does his research specialises in designing metamaterials: artificial materials and robotic systems that interact with their environment in a programmable fashion. The research team decided to study the interplay between non-reciprocal interactions and topological solitons almost two years ago, when then-students Anahita Sarvi and Chris Ventura Meinersen decided to follow up on their research project for the MSc course ‘Academic Skills for Research’.
    Solitons moving like dominoes
    The soliton-hosting metamaterial developed by the researchers consists of a chain of rotating rods that are linked to each other by elastic bands. Each rod is mounted on a little motor which applies a small force to the rod, depending on how it is oriented with respect to its neighbours. Importantly, the force applied depends on which side the neighbour is on, making the interactions between neighbouring rods non-reciprocal. Finally, magnets on the rods are attracted by magnets placed next to the chain in such a way that each rod has two preferred positions, rotated either to the left or the right.

    Solitons in this metamaterial are the locations where left- and right-rotated sections of the chain meet. The complementary boundaries between right- and left-rotated chain sections are then so-called ‘anti-solitons’. This is analogous to kinks in an old-fashioned coiled telephone cord, where clockwise and anticlockwise-rotating sections of the cord meet.
    When the motors in the chain are turned off, the solitons and anti-solitons can be manually pushed around in either direction. However, once the motors — and thereby the non-reciprocal interactions — are turned on, the solitons and anti-solitons automatically slide along the chain. They both move in the same direction, with a speed set by the anti-reciprocity imposed by the motors.
    Veenstra: “A lot of research has focussed on moving topological solitons by applying external forces. In systems studied so far, solitons and anti-solitons were found to naturally travel in opposite directions. However, if you want to control the behaviour of (anti-)solitons, you might want to drive them in the same direction. We discovered that non-reciprocal interactions achieve exactly this. The non-reciprocal forces are proportional to the rotation caused by the soliton, such that each soliton generates its own driving force.”
    The movement of the solitons is similar to a chain of dominoes falling, each one toppling its neighbour. However, unlike dominoes, the non-reciprocal interactions ensure that the ‘toppling’ can only happen in one direction. And while dominoes can only fall down once, a soliton moving along the metamaterial simply sets up the chain for an anti-soliton to move through it in the same direction. In other words, any number of alternating solitons and anti-solitons can move through the chain without the need to ‘reset’.
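    The mechanics described above can be caricatured numerically. The toy simulation below evolves a chain of bistable rotor angles with an elastic (reciprocal) coupling plus a small antisymmetric (non-reciprocal) term; a domain wall between left- and right-rotated sections then drifts steadily in one direction, and kinks and anti-kinks drift the same way. The overdamped equation of motion and all parameter values are illustrative assumptions, not the fitted model from the Nature paper.

      # Toy simulation of a non-reciprocally coupled chain of bistable rotors,
      # in the spirit of the metamaterial described above. Each rotor angle
      # theta_i sits in a double-well potential (two preferred orientations)
      # and is coupled to its neighbours both elastically (reciprocal) and
      # antisymmetrically (non-reciprocal). Illustrative assumptions only.
      import numpy as np

      N = 200                      # number of rotors (periodic chain)
      k, eps, a = 1.0, 0.5, 1.0    # elastic coupling, non-reciprocity, well depth
      dt, steps = 0.01, 5000

      theta = np.ones(N)           # start in the "right-rotated" well at +1
      theta[: N // 2] = -1.0       # left half in the "-1" well -> domain walls

      for _ in range(steps):
          left = np.roll(theta, 1)
          right = np.roll(theta, -1)
          force = (
              a * (theta - theta**3)             # bistable on-site potential
              + k * (left + right - 2 * theta)   # reciprocal (elastic) coupling
              + eps * (right - left)             # non-reciprocal term from the motors
          )
          theta += dt * force                    # overdamped dynamics

      # The domain walls (soliton and anti-soliton) drift in the direction
      # set by the sign of eps, at a speed proportional to it.
      print("largest wall jump at site:", np.argmax(np.abs(np.diff(theta))))
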
    Motion control
    Understanding the role of non-reciprocal driving will not only shed light on the behaviour of topological solitons in living systems, but can also lead to technological advances. The mechanism that generates the self-driving, one-directional solitons uncovered in this study can be used to control the motion of different types of waves (known as waveguiding), or to endow a metamaterial with a basic information-processing capability such as filtering.
    Future robots can also use topological solitons for basic robotic functionalities such as movement, sending out signals and sensing their surroundings. These functionalities would then not be controlled from a central point, but rather emerge from the sum of the robot’s active parts.
    All in all, the domino effect of solitons in metamaterials, now an interesting observation in the lab, may soon start to play a role in different branches of engineering and design.

  • Metamaterials and AI converge, igniting innovative breakthroughs

    A research team, comprising Professor Junsuk Rho from the Department of Mechanical Engineering, the Department of Chemical Engineering, and the Department of Electrical Engineering, and PhD candidates Seokho Lee and Cherry Park from the Department of Mechanical Engineering at Pohang University of Science and Technology (POSTECH), has recently published a paper that highlights the next generation of research trends that combine metaphotonics research with artificial intelligence. The paper has been published in the international journal, Current Opinion in Solid State and Materials Science.
    Metalenses have sparked a revolution in optics, slimming lenses down to roughly one ten-thousandth the thickness of conventional lenses while maintaining control over the properties of light. Notably, the academic community has begun harnessing AI as a mapping tool to discern relationships between input and output data. In their paper, the research team outlines three key trends emerging from AI-fueled metaphotonics research.
    Previous research that relied on simulations to develop metamaterial-based devices was time-consuming. However, with the application of AI technology, researchers can now rapidly predict optical properties from input design data, significantly saving time and energy. Conversely, by feeding desired optical properties into AI systems, researchers can design optical devices that realize them.
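    The forward-prediction step can be pictured as training a surrogate model on a library of simulated designs. The sketch below is a hypothetical minimal example: the three design parameters, the 64-point transmission spectrum and the network itself are assumptions for illustration, not details from the POSTECH paper.

      # Minimal sketch of a forward "surrogate" model: a small network learns
      # the mapping from metasurface design parameters to an optical response,
      # replacing slow full-wave simulations once trained. The parameter set
      # (pillar width, height, lattice period) and the 64-point transmission
      # spectrum are illustrative assumptions.
      import torch
      import torch.nn as nn

      surrogate = nn.Sequential(
          nn.Linear(3, 64), nn.ReLU(),   # inputs: width, height, period
          nn.Linear(64, 64), nn.ReLU(),
          nn.Linear(64, 64),             # output: transmission at 64 wavelengths
      )
      optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      def train(designs: torch.Tensor, spectra: torch.Tensor, epochs: int = 200):
          """designs: (n, 3) simulated geometries; spectra: (n, 64) their responses."""
          for _ in range(epochs):
              optimizer.zero_grad()
              loss = loss_fn(surrogate(designs), spectra)
              loss.backward()
              optimizer.step()

      # Once trained on a library of simulated designs, surrogate(new_design)
      # predicts a spectrum almost instantly; inverting it (e.g. by gradient
      # descent on the design parameters) yields a device with a target response.
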
    In the realm of optical neural networks, a burgeoning field of optical computer technology is emerging, aiming to enable AI at the speed of light by using metamaterials to convert information into light. The research team, in particular, offers a fresh perspective on the synergy between AI and future metaphotonics research by classifying optical neural networks into encoders, responsible for compressing and abstracting information, and decoders, tasked with interpreting information.
    The team also highlighted metasensors based on metamaterials as a next-generation research trend. Metasensors, devices that encode measured data into light and concurrently amplify it, enable remarkably precise and swift data analysis when integrated with AI. These metasensors hold promise across various domains including diagnosis and treatment of patients, environmental monitoring, security, and beyond, facilitating the highly detailed detection and analysis of data.
    Professor Junsuk Rho expressed the team’s expectation by stating, “This paper presents the trajectory of metaphotonics research, encompassing past, present, and future endeavors, spanning from recent research to challenges and forthcoming trends.” He added, “We anticipate further creative and innovative research that capitalizes on the intrinsic attributes of AI and metamaterials.”
    The research was conducted with support from the STEAM Research Program, the RLRC Program, and the Nano Connect Program of the National Research Foundation of Korea and the Ministry of Science and ICT, the Alchemist Project of the Ministry of Trade, Industry and Energy and the Korea Planning & Evaluation Institute of Industrial Technology, and the N.EX.T Impact Project of POSCO Holdings.

  • ChatGPT is an effective tool for planning field work, school trips and even holidays

    Researchers exploring ways to utilise ChatGPT for work say it could save organisations and individuals a lot of time and money when it comes to planning trips.
    A new study, published in Innovations in Education and Teaching International (IETI), has tested whether ChatGPT can be used to design university field studies. It found that the free-to-use AI model is an effective tool not only for planning educational trips around the world, but also for similar planning tasks in other industries.
    The research, led by scientists from the University of Portsmouth and University of Plymouth, specifically focused on marine biology courses. It involved the creation of a brand new field course using ChatGPT, and the integration of the AI-planned activities into an existing university module.
    The team developed a comprehensive guide for using the chatbot, and successfully organised a single-day trip in the UK using the AI’s suggestion of a beach clean-up activity to raise awareness about marine pollution and its impact on marine ecosystems.
    They say the established workflow could also be easily adapted to support other projects and professions outside of education, including environmental impact studies, travel itineraries, and business trips.
    Dr Mark Tupper, from the University of Portsmouth’s School of Biological Sciences, said: “It’s well known that universities and schools across the UK are stretched thin when it comes to resources. We set out to find a way to utilise ChatGPT for planning field work, because of the considerable amount of effort that goes into organising these trips. There’s a lot to consider, including safety procedures, risks, and design logistics. This process can take several days, but we found ChatGPT effectively does most of the leg work in just a few hours. The simple framework we’ve created can be used across the whole education sector, not just by universities. With many facing budget constraints and staffing limitations, this could save a lot of time and money.”
    Chatbots like ChatGPT draw on large amounts of data and computational techniques to predict how to string words together in a meaningful way. They not only tap into a vast amount of vocabulary and information, but also understand words in context.

    Since OpenAI launched ChatGPT in November 2022, millions of users have used the technology to improve their personal lives and boost productivity. Some workers have used it to write papers, make music, develop code, and create lesson plans.
    “If you’re a school teacher and want to plan a class with 40 kids, our ChatGPT roadmap will be a game changer,” said Dr Reuben Shipway, Lecturer in Marine Biology at the University of Plymouth. “All a person needs to do is input some basic data, and the AI model will be able to design a course or trip based on their needs and requirements. It can competently handle various tasks, from setting learning objectives to outlining assessment criteria. For businesses, ChatGPT is like having a personal planning assistant at your fingertips. Imagine trips with itineraries that unfold effortlessly, or fieldwork logistics handled with the ease of conversation.”
    The paper says while the AI model is adaptable and user-friendly, there are limitations when it comes to field course planning, including risk assessments.
    Dr Ian Hendy, from the University of Portsmouth, explained: “We asked ChatGPT to identify the potential hazards of this course and assess the overall risk of this activity from low to high, and the results were mixed. In some instances, ChatGPT was able to identify hazards specific to the activity — like the increased risk of slipping on seaweed-covered rocks exposed at low tide — but in other instances, ChatGPT exaggerated threats. For example, we find the risk of students suffering from physical strain and fatigue from carrying bags of collected litter to be low. That’s why there still needs to be a human element in the planning stages, to iron out any issues. It’s also important that the individual sifting through the results understands the nuances of successful field courses so they can recognise these discrepancies.”
    The paper concludes with a series of recommendations for best practices in using ChatGPT for field course design, underscoring the need for thoughtful human input, logical prompt sequencing, critical evaluation, and adaptive management to refine course designs.
    Top tips to help potential users get the most out of ChatGPT:
      • Get the ball rolling with ChatGPT: Ask what details it thrives on for crafting the perfect assignment plan. By understanding the key information it needs, you’ll be well-equipped to structure your prompts effectively and ensure ChatGPT provides tailored and insightful assistance.
      • Time Management Made Easy: Share your preferred schedule, and let ChatGPT handle the logistics. Whether you’re a back-to-back meetings person or prefer a more relaxed pace, ChatGPT creates an itinerary that suits your working style.
      • Flexible Contingency Plans: Anticipate the unexpected. ChatGPT can help you create contingency plans in case of unforeseen events, ensuring that the trip remains adaptable to changing circumstances without compromising the educational goals.
      • Cultural Etiquette Guidance: Familiarise yourself with local cultural norms and business etiquette. ChatGPT can provide tips on appropriate greetings, gift-giving customs, and other cultural considerations, ensuring smooth interactions with local business partners.
      • Become a proficient Prompt Engineer: There are many quality, low-cost courses in the field of ChatGPT prompt engineering, available from online learning platforms such as Udemy, Coursera, and LinkedIn Learning. Poor input leads to poor ChatGPT output, so improving your prompt engineering will always lead to better results.
      • Use your unique experiences to improve ChatGPT output: Remember that AI knowledge cannot replace personal experience, but AI can learn from your experiences and use them to improve its recommendations.
      • Remember, planning is a two-way street! Engage in feedback with ChatGPT. Don’t hesitate to tweak and refine the itinerary until it feels just right. It’s your trip, after all.
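    For readers who prefer to script such a workflow rather than use the ChatGPT web interface (which is what the study itself used), the sketch below shows how the prompt-sequencing tips above could be chained through the OpenAI Python SDK. The model name and prompts are illustrative assumptions.

      # Minimal sketch of the prompt-sequencing idea scripted against the
      # OpenAI Python SDK (the study itself used the ChatGPT web interface).
      # The model name and prompts are illustrative assumptions.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      prompts = [
          "What details do you need from me to plan a one-day marine biology "
          "field trip for 40 undergraduates?",
          "Here are the details: a rocky-shore site near Portsmouth, low tide at "
          "11:00, learning objective: impact of marine litter. Draft an itinerary.",
          "List the main hazards of this itinerary and rate each risk low/medium/high.",
      ]

      messages = []
      for prompt in prompts:
          messages.append({"role": "user", "content": prompt})
          reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
          answer = reply.choices[0].message.content
          messages.append({"role": "assistant", "content": answer})
          print(answer, "\n" + "-" * 60)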