More stories

  • Researchers find little evidence of cheating with online, unsupervised exams

    When Iowa State University switched from in-person to remote learning halfway through the spring semester of 2020, psychology professor Jason Chan was worried. Would unsupervised, online exams unleash rampant cheating?
    His initial reaction flipped to surprise as test results rolled in. Individual student scores were slightly higher but consistent with their results from in-person, proctored exams. Those receiving B’s before the COVID-19 lockdown were still pulling in B’s when the tests were online and unsupervised. This pattern held true for students up and down the grading scale.
    “The fact that the student rankings stayed mostly the same regardless of whether they were taking in-person or online exams indicated that cheating was either not prevalent or that it was ineffective at significantly boosting scores,” says Chan.
    To know if this was happening at a broader level, Chan and Dahwi Ahn, a Ph.D. candidate in psychology, analyzed test score data from nearly 2,000 students across 18 classes during the spring 2020 semester. Their sample ranged from large, lecture-style courses with high enrollment, like introduction to statistics, to advanced courses in engineering and veterinary medicine.
    Across different academic disciplines, class sizes, course levels and test styles (i.e., predominantly multiple choice or short answer), the researchers found the same results. Unsupervised, online exams produced scores very similar to in-person, proctored exams, indicating they can provide a valid and reliable assessment of student learning.
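    The rank-stability argument can be checked with a standard statistic. Below is a minimal sketch (the scores are invented and the study's actual analysis was more involved): a Spearman rank correlation near 1 between in-person and online scores means the ordering of students was preserved.

```python
# Hypothetical illustration: if cheating were rampant and effective,
# students' rank order would change between proctored and unproctored exams.
# A rank correlation close to 1 suggests the ordering was preserved.

def ranks(xs):
    # assign 1-based ranks (no tie handling needed for this sketch)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# made-up scores: online scores slightly higher, but same student ordering
in_person = [72, 85, 91, 64, 78]
online    = [75, 88, 93, 66, 80]
print(spearman(in_person, online))  # 1.0: identical ranking
```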
    The research findings were recently published in Proceedings of the National Academy of Sciences.
    “Before conducting this research, I had doubts about online and unproctored exams, and I was quite hesitant to use them if there was an option to have them in-person. But after seeing the data, I feel more confident and hope other instructors will, as well,” says Ahn.

  • That’s funny — but AI models don’t get the joke

    Large neural networks, a form of artificial intelligence, can generate thousands of jokes along the lines of “Why did the chicken cross the road?” But do they understand why they’re funny?
    Using hundreds of entries from the New Yorker magazine’s Cartoon Caption Contest as a testbed, researchers challenged AI models and humans with three tasks: matching a joke to a cartoon; identifying a winning caption; and explaining why a winning caption is funny.
    In all tasks, humans performed demonstrably better than machines, even as AI advances such as ChatGPT have closed the performance gap. So are machines beginning to “understand” humor? In short, they’re making some progress, but aren’t quite there yet.
    “The way people challenge AI models for understanding is to build tests for them — multiple choice tests or other evaluations with an accuracy score,” said Jack Hessel, Ph.D. ’20, research scientist at the Allen Institute for AI (AI2). “And if a model eventually surpasses whatever humans get at this test, you think, ‘OK, does this mean it truly understands?’ It’s a defensible position to say that no machine can truly ‘understand’ because understanding is a human thing. But, whether the machine understands or not, it’s still impressive how well they do on these tasks.”
    Hessel is lead author of “Do Androids Laugh at Electric Sheep? Humor ‘Understanding’ Benchmarks from The New Yorker Caption Contest,” which won a best-paper award at the 61st annual meeting of the Association for Computational Linguistics, held July 9-14 in Toronto.
    Lillian Lee ’93, the Charles Roy Davis Professor in the Cornell Ann S. Bowers College of Computing and Information Science, and Yejin Choi, Ph.D. ’10, professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, and the senior director of common-sense intelligence research at AI2, are also co-authors on the paper.
    For their study, the researchers compiled 14 years’ worth of New Yorker caption contests — more than 700 in all. Each contest included: a captionless cartoon; that week’s entries; the three finalists selected by New Yorker editors; and, for some contests, crowd quality estimates for each submission.
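    The multiple-choice evaluations Hessel describes reduce to an accuracy score. A minimal sketch with invented items and model picks (the benchmark's real format and data differ):

```python
# Minimal sketch of an accuracy-scored multiple-choice evaluation.
# Each item offers 5 candidate captions; "gold" is the index of the
# caption actually written for that cartoon. All values are invented.
def accuracy(gold, predicted):
    assert len(gold) == len(predicted)
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

gold_answers  = [2, 0, 4, 1, 3, 2]
model_answers = [2, 0, 1, 1, 3, 0]  # hypothetical model picks
print(accuracy(gold_answers, model_answers))  # 4 of 6 correct
```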

  • GPT-3 can reason about as well as a college student, psychologists report

    People solve new problems readily without any special training or practice by comparing them to familiar problems and extending the solution to the new problem. That process, known as analogical reasoning, has long been thought to be a uniquely human ability.
    But now people might have to make room for a new kid on the block.
    Research by UCLA psychologists shows that, astonishingly, the artificial intelligence language model GPT-3 performs about as well as college undergraduates when asked to solve the sort of reasoning problems that typically appear on intelligence tests and standardized tests such as the SAT. The study is published in Nature Human Behaviour.
    But the paper’s authors write that the study raises a question: Is GPT-3 mimicking human reasoning as a byproduct of its massive language training dataset, or is it using a fundamentally new kind of cognitive process?
    Without access to GPT-3’s inner workings — which are guarded by OpenAI, the company that created it — the UCLA scientists can’t say for sure how its reasoning abilities work. They also write that although GPT-3 performs far better than they expected at some reasoning tasks, the popular AI tool still fails spectacularly at others.
    “No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Taylor Webb, a UCLA postdoctoral researcher in psychology and the study’s first author. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems — some of which children can solve quickly — the things it suggested were nonsensical.”
    Webb and his colleagues tested GPT-3’s ability to solve a set of problems inspired by a test known as Raven’s Progressive Matrices, which asks the subject to predict the next image in a complicated arrangement of shapes. To enable GPT-3 to “see” the shapes, Webb converted the images to a text format that GPT-3 could process; that approach also guaranteed that the AI would never have encountered the questions before.
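    The image-to-text conversion can be sketched as follows; this encoding is hypothetical and far simpler than the one used in the study, but it shows how a visual matrix puzzle becomes a prompt a language model can read.

```python
# Hypothetical text encoding of a Raven's-style matrix problem.
# Each cell lists its shapes as digits; the final cell is the one
# the model must predict (here: "[3 3 3]").
problem = [
    ["[1]", "[1 1]", "[1 1 1]"],
    ["[2]", "[2 2]", "[2 2 2]"],
    ["[3]", "[3 3]", "?"],
]

def render(problem):
    # one text line per matrix row, cells separated by spaces
    return "\n".join(" ".join(row) for row in problem)

prompt = render(problem) + "\nAnswer:"
print(prompt)
```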

  • When electrons slowly vanish during cooling

    Many substances change their properties when they are cooled below a certain critical temperature. Such a phase transition occurs, for example, when water freezes. However, in certain metals there are phase transitions that do not exist in the macrocosm. They arise because of the special laws of quantum mechanics that apply in the realm of nature’s smallest building blocks. It is thought that the concept of electrons as carriers of quantized electric charge no longer applies near these exotic phase transitions. Researchers at the University of Bonn and ETH Zurich have now found a way to prove this directly. Their findings allow new insights into the exotic world of quantum physics. The publication has now been released in the journal Nature Physics.
    If you cool water below zero degrees Celsius, it solidifies into ice. In the process, it abruptly changes its properties. As ice, for example, it has a much lower density than in a liquid state — which is why icebergs float. In physics, this is referred to as a phase transition.
    But there are also phase transitions in which characteristic features of a substance change gradually. If, for example, an iron magnet is heated up to 760 degrees Celsius, it loses its attraction to other pieces of metal — it is then no longer ferromagnetic, but paramagnetic. However, this does not happen abruptly, but continuously: The iron atoms behave like tiny magnets. At low temperatures, they are oriented parallel to each other. When heated, they fluctuate more and more around this rest position until they are completely randomly aligned, and the material loses its magnetism completely. So while the metal is being heated, it can be both somewhat ferromagnetic and somewhat paramagnetic.
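    This gradual loss of magnetization is captured by the textbook mean-field picture (a standard illustration, not taken from the paper): the magnetization m solves m = tanh(Tc m / T) and shrinks continuously to zero as T approaches the Curie temperature Tc.

```python
import math

# Textbook mean-field sketch (not from the paper): the spontaneous
# magnetization m is the fixed point of m = tanh(m * Tc / T).
# Below Tc a nonzero solution exists; above Tc only m = 0 remains.
def magnetization(t_over_tc, iters=10000):
    m = 1.0  # start fully magnetized and iterate to the fixed point
    for _ in range(iters):
        m = math.tanh(m / t_over_tc)
    return m

for t in (0.5, 0.9, 0.99, 1.1):
    print(f"T/Tc = {t:>4}: m = {magnetization(t):.4f}")
```

    As a side note, the fixed-point iteration itself needs more and more steps to converge as T nears Tc, a small numerical echo of critical slowing down.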
    Matter particles cannot be destroyed
    The phase transition thus takes place gradually, so to speak, until finally all the iron is paramagnetic. Along the way, the transition slows down more and more. This behavior is characteristic of all continuous phase transitions. “We call it ‘critical slowing down,’” explains Prof. Dr. Hans Kroha of the Bethe Center for Theoretical Physics at the University of Bonn. “The reason is that with continuous transitions, the two phases get energetically closer and closer together.” It is similar to placing a ball on a ramp: It then rolls downhill, but the smaller the difference in altitude, the more slowly it rolls. When iron is heated, the energy difference between the phases decreases more and more, in part because the magnetization disappears progressively during the transition.
    Such a “slowing down” is typical for phase transitions based on the excitation of bosons. Bosons are particles that “generate” interactions (on which, for example, magnetism is based). Matter, on the other hand, is not made up of bosons but of fermions. Electrons, for example, belong to the fermions.
    Phase transitions are based on the fact that particles (or also the phenomena triggered by them) disappear. This means that the magnetism in iron becomes smaller and smaller as fewer atoms are aligned in parallel. “Fermions, however, cannot be destroyed due to fundamental laws of nature and therefore cannot disappear,” Kroha explains. “That’s why normally they are never involved in phase transitions.”
    Electrons turn into quasi-particles

  • 3D display could soon bring touch to the digital world

    Imagine an iPad that’s more than just an iPad — with a surface that can morph and deform, allowing you to draw 3D designs, create haiku that jump out from the screen and even hold your partner’s hand from an ocean away.
    That’s the vision of a team of engineers from the University of Colorado Boulder. In a new study, they’ve created a one-of-a-kind shape-shifting display that fits on a card table. The device is made from a 10-by-10 grid of soft robotic “muscles” that can sense outside pressure and pop up to create patterns. It’s precise enough to generate scrolling text and fast enough to shake a chemistry beaker filled with fluid.
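    Conceptually, such a display is a grid of cells that are either raised or flat. This toy model (purely illustrative; the real device senses pressure and drives hydraulic actuators) renders a pattern as a height map:

```python
# Illustrative only: model the 10 x 10 actuator grid as a set of raised
# cells. The real display's HASEL actuators are driven hydraulically;
# this just tracks which cells are "up" to form a pattern.
GRID = 10

def render(raised_cells):
    """Return a text picture of the grid; '#' marks a raised actuator."""
    rows = []
    for r in range(GRID):
        rows.append("".join("#" if (r, c) in raised_cells else "."
                            for c in range(GRID)))
    return "\n".join(rows)

# raise a simple plus-sign pattern near the middle of the grid
plus = {(4, c) for c in range(2, 8)} | {(r, 4) for r in range(1, 8)}
print(render(plus))
```

    Scrolling text, as described above, would amount to shifting the set of raised cells sideways frame by frame.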
    It may also deliver something even rarer: the sense of touch in a digital age.
    “As technology has progressed, we started with sending text over long distances, then audio and now video,” said Brian Johnson, one of two lead authors of the new study who earned his doctorate in mechanical engineering at CU Boulder in 2022. “But we’re still missing touch.”
    Johnson and his colleagues described their shape display July 31 in the journal Nature Communications.
    The group’s innovation builds off a class of soft robots pioneered by a team led by Christoph Keplinger, formerly an assistant professor of mechanical engineering at CU Boulder. They’re called Hydraulically Amplified Self-Healing ELectrostatic (HASEL) actuators. The prototype display isn’t ready for the market yet. But the researchers envision that, one day, similar technologies could lead to sensory gloves for virtual gaming or a smart conveyor belt that can undulate to sort apples from bananas.
    “You could imagine arranging these sensing and actuating cells into any number of different shapes and combinations,” said Mantas Naris, co-lead author of the paper and a doctoral student in the Paul M. Rady Department of Mechanical Engineering. “There’s really no limit to what these technologies could, ultimately, lead to.”
    Playing the accordion

  • Way cool: ‘freeze ray’ technology

    You know that freeze-ray gun that “Batman” villain Mr. Freeze uses to “ice” his enemies? A University of Virginia professor thinks he may have figured out how to make one in real life.
    The discovery — which, unexpectedly, relies on heat-generating plasma — is not meant for weaponry, however. Mechanical and aerospace engineering professor Patrick Hopkins wants to create on-demand surface cooling for electronics inside spacecraft and high-altitude jets.
    “That’s the primary problem right now,” Hopkins said. “A lot of electronics on board heat up, but they have no way to cool down.”
    The U.S. Air Force likes the prospect of a freeze ray enough that it has granted the professor’s ExSiTE Lab (Experiments and Simulations in Thermal Engineering) $750,000 over three years to study how to maximize the technology.
    From there, the lab will partner with Hopkins’ UVA spinout company, Laser Thermal, for the fabrication of a prototype device.
    The professor explained that, on Earth — or in the air closer to it — the electronics in military craft can often be cooled by nature. The Navy, for example, uses ocean water as part of its liquid cooling systems. And closer to the ground, the air is dense enough to help keep aircraft components chilled.
    However, “With the Air Force and Space Force, you’re in space, which is a vacuum, or you’re in the upper atmosphere, where there’s very little air that can cool,” he said. “So what happens is your electronics keep getting hotter and hotter and hotter. And you can’t bring a payload of coolant onboard because that’s going to increase the weight, and you lose efficiency.”
    Hopkins believes he’s on track toward a lightweight solution. He and collaborators recently published a review article about this and other prospects for the technology in the journal ACS Nano.

  • Researchers successfully train a machine learning model in outer space for the first time

    For the first time, a project led by the University of Oxford has trained a machine learning model in outer space, on board a satellite. This achievement could revolutionise the capabilities of remote-sensing satellites by enabling real-time monitoring and decision making for a range of applications.
    Data collected by remote-sensing satellites is fundamental for many key activities, including aerial mapping, weather prediction, and monitoring deforestation. Currently, most satellites can only passively collect data, since they are not equipped to make decisions or detect changes. Instead, data has to be relayed to Earth to be processed, which typically takes several hours or even days. This limits the ability to identify and respond to rapidly emerging events, such as a natural disaster.
    To overcome these restrictions, a group of researchers led by DPhil student Vít Růžička (Department of Computer Science, University of Oxford), took on the challenge of training the first machine learning program in outer space. During 2022, the team successfully pitched their idea to the Dashing through the Stars mission, which had issued an open call for project proposals to be carried out on board the ION SCV004 satellite, launched in January 2022. During the autumn of 2022, the team uplinked the code for the program to the satellite already in orbit.
    The researchers trained a simple model to detect changes in cloud cover from aerial images directly onboard the satellite, in contrast to training on the ground. The model was based on an approach called few-shot learning, which enables a model to learn the most important features to look for when it has only a few samples to train from. A key advantage is that the data can be compressed into smaller representations, making the model faster and more efficient.
    Vít Růžička explained: ‘The model we developed, called RaVAEn, first compresses the large image files into vectors of 128 numbers. During the training phase, the model learns to keep only the informative values in this vector; the ones that relate to the change it is trying to detect (in this case, whether there is a cloud present or not). This results in extremely fast training due to having only a very small classification model to train.’
    Whilst the first part of the model, to compress the newly-seen images, was trained on the ground, the second part (which decided whether the image contained clouds or not) was trained directly on the satellite.
    Normally, developing a machine learning model would require several rounds of training, using the power of a cluster of linked computers. In contrast, the team’s tiny model completed the training phase (using over 1300 images) in around one and a half seconds.
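    The division of labor described above, a frozen encoder producing 128-number vectors plus a tiny classifier trained on board, can be sketched with synthetic latents and a nearest-centroid rule (a common few-shot baseline; RaVAEn's actual model differs):

```python
import random

# Purely illustrative: synthetic 128-number "latents" stand in for the
# compressed image vectors, and a nearest-centroid rule stands in for
# the small onboard classifier. All names and numbers are invented.
random.seed(0)
DIM = 128

def make_latent(cloudy):
    # cloudy latents are shifted along the first 8 dimensions
    v = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    if cloudy:
        for i in range(8):
            v[i] += 2.0
    return v

train = [(make_latent(label), label) for label in [0, 1] * 50]

def fit_centroids(data):
    # "training" is just averaging each class's vectors, which is why
    # such a classifier can be fit in a fraction of a second
    sums, counts = {0: [0.0] * DIM, 1: [0.0] * DIM}, {0: 0, 1: 0}
    for x, y in data:
        counts[y] += 1
        sums[y] = [s + xi for s, xi in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in (0, 1)}

def classify(centroids, x):
    def dist2(c):
        return sum((ci - xi) ** 2 for ci, xi in zip(c, x))
    return min((0, 1), key=lambda y: dist2(centroids[y]))

centroids = fit_centroids(train)
correct = sum(classify(centroids, x) == y for x, y in train)
print(f"training accuracy: {correct}/{len(train)}")
```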

  • Engineering team uses diamond microparticles to create high security anti-counterfeit labels

    Counterfeiting is a serious problem affecting a wide range of industries — from medicine to electronics, inflicting enormous economic losses, posing safety concerns and putting health at risk.
    Counterfeiters and anti-counterfeiters are now locked in a technological arms race. Despite anti-counterfeiting tools becoming more and more high-tech, including holograms, thermochromic ink and radio frequency identification tags, fake products are becoming harder and harder to tell apart from the genuine articles because counterfeiters are using increasingly advanced technology.
    Recently, a team of researchers led by Dr Zhiqin Chu of the Department of Electrical and Electronic Engineering of the University of Hong Kong (HKU), together with Professor Lei Shao of the School of Electronics and Information Technology of Sun Yat-sen University, and Professor Qi Wang from Dongguan Institute of Opto-Electronics of Peking University developed a pioneering technological solution that counterfeiters have no response to.
    Dr Chu’s team created diamond-based anti-counterfeiting labels that are unique and known in the industry as PUFs — Physically Unclonable Functions.
    The team made these labels by planting tiny artificial diamonds — known as diamond microparticles, on a silicon plate using a method called Chemical Vapour Deposition (CVD).
    The diamond microparticles, all different in shape and size, form a unique pattern when they are scattered across the silicon substrate. Such a pattern is impossible to replicate and therefore scatters light in a unique way. Put simply, it forms a unique “fingerprint” that can be scanned using a phone.
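    Conceptually, the label works like any physically unclonable function: read out the random layout, reduce it to a stable fingerprint, and compare it with a registered value. A toy sketch (the quantization step, hashing scheme and particle data are all invented; the actual readout is optical):

```python
import hashlib

# Illustrative sketch, not HKU's actual pipeline: quantize each
# particle's position and size to absorb small measurement noise,
# then hash the sorted list to get a compact label ID for lookup.
def fingerprint(particles, grid=0.01):
    """particles: iterable of (x, y, diameter) in arbitrary units."""
    quantized = sorted(
        (round(x / grid), round(y / grid), round(d / grid))
        for x, y, d in particles
    )
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

label_a = [(0.12, 0.88, 0.03), (0.40, 0.11, 0.02), (0.75, 0.64, 0.04)]
label_b = [(0.13, 0.88, 0.03), (0.40, 0.11, 0.02), (0.75, 0.64, 0.04)]

print(fingerprint(label_a) == fingerprint(label_a))  # same label matches
print(fingerprint(label_a) == fingerprint(label_b))  # different layout does not
```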
    The second level of uniqueness, and hence security, comes from the fact that these diamond microparticles have defects known as silicon-vacancy (SiV) centers.