More stories

  • Can’t find your phone? There’s a robot for that

    Engineers at the University of Waterloo have discovered a new way to program robots to help people with dementia locate medicine, glasses, phones and other objects they need but have lost.
    And while the initial focus is on assisting a specific group of people, the technology could someday be used by anyone who has searched high and low for something they’ve misplaced.
    “The long-term impact of this is really exciting,” said Dr. Ali Ayub, a post-doctoral fellow in electrical and computer engineering. “A user can be involved not just with a companion robot but a personalized companion robot that can give them more independence.”
    Ayub and three colleagues were struck by the rapidly rising number of people coping with dementia, a condition that restricts brain function, causing confusion, memory loss and disability. Many of these individuals repeatedly forget the location of everyday objects, which diminishes their quality of life and places additional burdens on caregivers.
    Engineers believed a companion robot with an episodic memory of its own could be a game-changer in such situations. And they succeeded in using artificial intelligence to create a new kind of artificial memory.
    The research team began with a Fetch mobile manipulator robot, which has a camera for perceiving the world around it.
    Next, using an object-detection algorithm, they programmed the robot to detect, track and keep a memory log of specific objects in its camera view through stored video. With the robot capable of distinguishing one object from another, it can record the time and date objects enter or leave its view.
    Researchers then developed a graphical interface to enable users to choose objects they want to be tracked and, after typing the objects’ names, search for them on a smartphone app or computer. Once that happens, the robot can indicate when and where it last observed the specific object.
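    A minimal sketch of how such an episodic memory log might work is below. Every name in it (the EpisodicMemory class, the Sighting record, the shape of the detector output) is an assumption made for illustration, not the Waterloo team’s actual code:
    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Sighting:
        """One episode: an object entering and later leaving the camera view."""
        label: str
        location: str            # e.g. a named region of the room
        first_seen: datetime
        last_seen: datetime

    class EpisodicMemory:
        """Log when tracked objects enter or leave the robot's camera view."""

        def __init__(self, tracked_labels):
            self.tracked = set(tracked_labels)
            self.log = []        # every Sighting ever recorded
            self.active = {}     # label -> Sighting currently in view

        def update(self, detections, location, now=None):
            """detections: set of object labels visible in the current frame."""
            now = now or datetime.now()
            visible = {d for d in detections if d in self.tracked}
            for label in visible:
                if label in self.active:
                    self.active[label].last_seen = now    # still in view
                else:                                     # just entered view
                    sighting = Sighting(label, location, now, now)
                    self.active[label] = sighting
                    self.log.append(sighting)
            for label in set(self.active) - visible:      # just left the view
                del self.active[label]

        def last_seen(self, label):
            """Answer 'where is my phone?' from the stored episodes."""
            hits = [s for s in self.log if s.label == label]
            return max(hits, key=lambda s: s.last_seen) if hits else None

    memory = EpisodicMemory({"phone", "glasses", "medicine"})
    memory.update({"phone", "glasses"}, location="kitchen table")
    memory.update({"glasses"}, location="kitchen table")   # phone left the view
    hit = memory.last_seen("phone")
    if hit:
        print(f"Last saw your {hit.label} at {hit.location} ({hit.last_seen:%H:%M})")
    ```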
    Tests have shown the system is highly accurate. And while some individuals with dementia might find the technology daunting, Ayub said caregivers could readily use it.
    Moving forward, researchers will conduct user studies with people without disabilities, then people with dementia.
    A paper on the project, “Where is my phone? Towards developing an episodic memory model for companion robots to track users’ salient objects,” was presented at the recent 2023 ACM/IEEE International Conference on Human-Robot Interaction.

  • Tetris reveals how people respond to unfair AI

    A Cornell University-led experiment in which two people play a modified version of Tetris revealed that players who got fewer turns perceived the other player as less likable, regardless of whether a person or an algorithm allocated the turns.
    Most studies on algorithmic fairness focus on the algorithm or the decision itself, but researchers sought to explore the relationships among the people affected by the decisions.
    “We are starting to see a lot of situations in which AI makes decisions on how resources should be distributed among people,” said Malte Jung, associate professor of information science, whose group conducted the study. “We want to understand how that influences the way people perceive one another and behave towards each other. We see more and more evidence that machines mess with the way we interact with each other.”
    In an earlier study, Jung’s group had a robot choose which person to give a block to and studied how each individual reacted to the machine’s allocation decisions.
    “We noticed that every time the robot seemed to prefer one person, the other one got upset,” said Jung. “We wanted to study this further, because we thought that, as machines making decisions becomes more a part of the world — whether it be a robot or an algorithm — how does that make a person feel?”
    Using open-source software, Houston Claure — the study’s first author and a postdoctoral researcher at Yale University — developed a two-player version of Tetris, in which players manipulate falling geometric blocks to stack them without leaving gaps before the blocks pile to the top of the screen. Claure’s version, Co-Tetris, allows two people, one at a time, to work together to complete each round.
    An “allocator” — either human or AI, which was conveyed to the players — determines which player takes each turn. Jung and Claure devised their experiment so that players would have either 90% of the turns (the “more” condition), 10% (“less”) or 50% (“equal”).
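    Read that way, each condition is simply a probability that one player receives any given turn. The sketch below is a hypothetical rendering of that setup; the study may equally have used fixed turn proportions rather than a per-turn weighted draw:
    ```python
    import random

    # Probability that player A is handed any given turn, per condition.
    CONDITIONS = {"more": 0.9, "equal": 0.5, "less": 0.1}

    def allocate_turns(condition, n_turns, seed=None):
        """Return a turn order like ['A', 'B', 'A', ...] for one Co-Tetris round."""
        rng = random.Random(seed)
        p_a = CONDITIONS[condition]
        return ["A" if rng.random() < p_a else "B" for _ in range(n_turns)]

    turns = allocate_turns("more", n_turns=100, seed=42)
    print(f"Player A took {turns.count('A')} of {len(turns)} turns")
    ```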
    The researchers found, predictably, that those who received fewer turns were acutely aware that their partner got significantly more. But they were surprised to find that feelings about it were largely the same regardless of whether a human or an AI was doing the allocating.
    The effect of these decisions is what the researchers have termed “machine allocation behavior” — similar to the established phenomenon of “resource allocation behavior,” the observable behavior people exhibit based on allocation decisions. Jung said machine allocation behavior is “the concept that there is this unique behavior that results from a machine making a decision about how something gets allocated.”
    The researchers also found that fairness didn’t automatically lead to better game play and performance. In fact, equal allocation of turns led, on average, to a worse score than unequal allocation.
    “If a strong player receives most of the blocks,” Claure said, “the team is going to do better. And if one person gets 90%, eventually they’ll get better at it than if two average players split the blocks.”

  • Students positive towards AI, but uncertain about what counts as cheating

    Students in Sweden are positive towards AI tools such as ChatGPT in education, but 62 percent believe that using chatbots during exams is cheating. However, where the boundary for cheating lies is highly unclear. This is shown in a survey from Chalmers University of Technology, which is the first large-scale study in Europe to investigate students’ attitudes towards artificial intelligence in higher education.
    “I am afraid of AI and what it could mean for the future.”
    “Don’t worry so much! Keep up with the development and adapt your teaching for the future.”
    “ChatGPT and similar tools will revolutionise how we learn, and we will be able to come up with amazing things.”
    These are three of nearly two thousand optional comments from the survey, in which almost 6,000 students in Sweden recently participated.
    “The students express strong, diverse, and in many cases emotionally charged opinions,” says Hans Malmström, Professor at the Department of Communication and Learning in Science at Chalmers University of Technology. He conducted the study together with his colleagues Christian Stöhr and Amy Wanyu Ou.

    More than a third use ChatGPT regularly
    A majority of the respondents believe that chatbots and AI language tools make them more efficient as students and argue that such tools improve their academic writing and overall language skills. Virtually all the responding students are familiar with ChatGPT, the majority use the tool, and 35 percent use the chatbot regularly.
    Lacking guidance, but opposed to a ban
    Despite their positive attitude towards AI, many students feel anxious and lack clear guidance on how to use AI in the learning environments they are in. It is simply difficult to know where the boundary for cheating lies.
    “Most students have no idea whether their educational institution has any rules or guidelines for using AI responsibly, and that is of course worrying. At the same time, an overwhelming majority is against a ban on AI in educational contexts,” says Hans Malmström.

    No replacement for critical thinking
    Many students perceive chatbots as a mentor or teacher that they can ask questions or get help from, for example, with explanations of concepts and summaries of ideas. The dominant attitude is that chatbots should be used as an aid, not replace students’ own critical thinking. Or as one student put it: “You should be able to do the same things as the AI, but it should help you do it. You should not use a calculator if you don’t know what the plus sign on it does.”
    An aid for students with disabilities
    Another important aspect that emerged in the survey was that AI serves as an effective aid for people with various disabilities. A student with ADD and dyslexia described how they had spent 20 minutes writing down their answer in the survey and then improved it by inputting the text into ChatGPT: “It’s like being color blind and suddenly being able to see all the beautiful colors.”
    Giving students a voice
    The researchers have now gathered a wealth of important information and compiled the results in an overview report.
    “We hope and believe that the answers from this survey will give students a voice and the results will thus be an important contribution to our collective understanding of AI and learning,” says Christian Stöhr, Associate Professor at the Department of Communication and Learning in Science at Chalmers.
    More about the study
    “Chatbots and other AI for learning: A survey on use and views among university students in Sweden” was conducted as follows: The researchers at Chalmers ran the survey between 5 April and 5 May 2023. Students at all universities in Sweden could participate. The survey was distributed through social media and targeted efforts from multiple universities and student organisations. In total, the survey was answered by 5,894 students.
    Summary of results: 95 percent of students are familiar with ChatGPT, while awareness of other chatbots is very low. 56 percent are positive about using chatbots in their studies; 35 percent use ChatGPT regularly. 60 percent are opposed to a ban on chatbots, and 77 percent are against a ban on other AI tools (such as Grammarly) in education. More than half of the students do not know if their institution has guidelines for how AI can be used in education; one in four explicitly says that their institution lacks such regulations. 62 percent believe that using chatbots during examinations is cheating. Students express some concern about AI development, and there is particular concern over the impact of chatbots on future education.

  • Better than humans: Artificial intelligence in intensive care units

    In the future, artificial intelligence will play an important role in medicine. In diagnostics, successful tests have already been performed: for example, the computer can learn to categorise images with great accuracy according to whether they show pathological changes or not. However, it is more difficult to train an artificial intelligence to examine the time-varying conditions of patients and to calculate treatment suggestions — this is precisely what has now been achieved at TU Wien in cooperation with the Medical University of Vienna.
    With the help of extensive data from intensive care units of various hospitals, an artificial intelligence was developed that provides suggestions for the treatment of people who require intensive care due to sepsis. Analyses show that artificial intelligence already surpasses the quality of human decisions. However, it is now important to also discuss the legal aspects of such methods.
    Making optimal use of existing data
    “In an intensive care unit, a lot of different data is collected around the clock. The patients are constantly monitored medically. We wanted to investigate whether these data could be used even better than before,” says Prof. Clemens Heitzinger from the Institute for Analysis and Scientific Computing at TU Wien (Vienna). He is also Co-Director of the cross-faculty “Center for Artificial Intelligence and Machine Learning” (CAIML) at TU Wien.
    Medical staff make their decisions on the basis of well-founded rules. Most of the time, they know very well which parameters they have to take into account in order to provide the best care. However, the computer can easily take many more parameters than a human into account — and in some cases this can lead to even better decisions.
    The computer as planning agent
    “In our project, we used a form of machine learning called reinforcement learning,” says Clemens Heitzinger. “This is not just about simple categorisation — for example, separating a large number of images into those that show a tumour and those that do not — but about a temporally changing progression, about the development that a certain patient is likely to go through. Mathematically, this is something quite different. There has been little research in this regard in the medical field.”

    The computer becomes an agent that makes its own decisions: if the patient is well, the computer is “rewarded.” If the condition deteriorates or death occurs, the computer is “punished.” The computer programme has the task of maximising its virtual “reward” by taking actions. In this way, extensive medical data can be used to automatically determine a strategy which achieves a particularly high probability of success.
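    A toy Q-learning loop shows that reward/punishment scheme in miniature. Everything below, from the three-state patient model to the transition probabilities, is invented for illustration and is vastly simpler than the model trained on real intensive-care data:
    ```python
    import random

    rng = random.Random(0)
    STATES = ["stable", "deteriorating", "critical"]
    ACTIONS = ["wait", "fluids", "vasopressor"]

    def step(state, action):
        """Toy transition model: some actions raise the chance of improvement."""
        p_improve = {"wait": 0.2, "fluids": 0.5, "vasopressor": 0.4}[action]
        idx = STATES.index(state)
        idx = max(idx - 1, 0) if rng.random() < p_improve else min(idx + 1, 2)
        next_state = STATES[idx]
        if next_state == "stable":
            reward = 1.0       # patient doing well: the agent is "rewarded"
        elif next_state == "critical":
            reward = -1.0      # deterioration: the agent is "punished"
        else:
            reward = 0.0
        return next_state, reward

    # Tabular Q-learning: learn the value of each action in each state.
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    for episode in range(5000):
        state = "deteriorating"
        for _ in range(20):
            if rng.random() < epsilon:                    # explore
                action = rng.choice(ACTIONS)
            else:                                         # exploit best known
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, r = step(state, action)
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
            state = nxt

    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
    ```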
    Already better than a human
    “Sepsis is one of the most common causes of death in intensive care medicine and poses an enormous challenge for doctors and hospitals, as early detection and treatment is crucial for patient survival,” says Prof. Oliver Kimberger from the Medical University of Vienna. “So far, there have been few medical breakthroughs in this field, which makes the search for new treatments and approaches all the more urgent. For this reason, it is particularly interesting to investigate the extent to which artificial intelligence can contribute to improving medical care here. Machine learning models and other AI technologies are an opportunity to improve the diagnosis and treatment of sepsis, ultimately increasing the chances of patient survival.”
    Analysis shows that AI capabilities are already outperforming humans: “Cure rates are now higher with an AI strategy than with purely human decisions. In one of our studies, the cure rate in terms of 90-day mortality was increased by about 3% to about 88%,” says Clemens Heitzinger.
    Of course, this does not mean that medical decisions in an intensive care unit should be left to the computer alone. But the artificial intelligence could run alongside as an additional tool at the bedside, which the medical staff can consult to compare their own assessment with its suggestions. Such artificial intelligences can also be highly useful in education.
    Discussion about legal issues is necessary
    “However, this raises important questions, especially legal ones,” says Clemens Heitzinger. “The first thought is probably the question of who will be held liable for any mistakes made by the artificial intelligence. But there is also the converse problem: what if the artificial intelligence had made the right decision, but the human chose a different treatment option and the patient suffered harm as a result?” Does the doctor then face the accusation that it would have been better to trust the artificial intelligence because it comes with a huge wealth of experience? Or should it be the human’s right to ignore the computer’s advice at all times?
    “The research project shows that artificial intelligence can already be used successfully in clinical practice with today’s technology — but a discussion about the social framework and clear legal rules is still urgently needed,” Clemens Heitzinger is convinced.

  • Robotic proxy brings remote users to life in real time

    Cornell University researchers have developed a robot, called ReMotion, that occupies physical space on a remote user’s behalf, automatically mirroring the user’s movements in real time and conveying key body language that is lost in standard virtual environments.
    “Pointing gestures, the perception of another’s gaze, intuitively knowing where someone’s attention is — in remote settings, we lose these nonverbal, implicit cues that are very important for carrying out design activities,” said Mose Sakashita, a doctoral student of information science.
    Sakashita is the lead author of “ReMotion: Supporting Remote Collaboration in Open Space with Automatic Robotic Embodiment,” which he presented at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems in Hamburg, Germany. “With ReMotion, we show that we can enable rapid, dynamic interactions through the help of a mobile, automated robot.”
    The lean, nearly six-foot-tall device is outfitted with a monitor for a head, omnidirectional wheels for feet and game-engine software for brains. It automatically mirrors the remote user’s movements — thanks to another Cornell-made device, NeckFace, which the remote user wears to track head and body movements. The motion data is then streamed to the ReMotion robot in real time.
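    Conceptually, the pipeline is a wearable streaming pose data to a robot that replays it immediately. The sketch below is a loose illustration only: the message format, the UDP link, and the robot methods (set_heading, drive_to) are all assumed names, since the article does not describe ReMotion’s actual interfaces:
    ```python
    import json
    import socket

    # Wearable side (hypothetical): stream each head/body pose as it is captured.
    def stream_poses(robot_addr, pose_source):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for pose in pose_source:        # e.g. {"yaw": 0.3, "x": 1.2, "y": 0.4}
            sock.sendto(json.dumps(pose).encode(), robot_addr)

    # Robot side (hypothetical): mirror every incoming pose, no manual steering.
    def mirror_loop(listen_port, robot):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", listen_port))
        while True:
            data, _ = sock.recvfrom(1024)
            pose = json.loads(data)
            robot.set_heading(pose["yaw"])        # turn the monitor "head"
            robot.drive_to(pose["x"], pose["y"])  # omnidirectional base follows
    ```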
    Telepresence robots are not new, but remote users generally need to steer them manually, distracting from the task at hand, researchers said. Other options such as virtual reality and mixed reality collaboration can also require an active role from the user, and headsets may limit peripheral awareness, researchers added.
    In a small study, nearly all participants reported having a better connection with their remote teammates when using ReMotion compared to an existing telerobotic system. Participants also reported significantly higher shared attention among remote collaborators.
    In its current form, ReMotion only works with two users in a one-on-one remote environment, and each user must occupy physical spaces of identical size and layout. In future work, ReMotion developers intend to explore asymmetrical scenarios, like a single remote team member collaborating virtually via ReMotion with multiple teammates in a larger room.
    With further development, Sakashita says ReMotion could be deployed in virtual collaborative environments as well as in classrooms and other educational settings.
    This research was funded in part by the National Science Foundation and the Nakajima Foundation.

  • Researcher uses artificial intelligence to discover new materials for advanced computing

    A team of researchers led by Rensselaer Polytechnic Institute’s Trevor David Rhone, assistant professor in the Department of Physics, Applied Physics, and Astronomy, has identified novel van der Waals (vdW) magnets using cutting-edge tools in artificial intelligence (AI). In particular, the team identified transition metal halide vdW materials with large magnetic moments that are predicted to be chemically stable using semi-supervised learning. These two-dimensional (2D) vdW magnets have potential applications in data storage, spintronics, and even quantum computing.
    Rhone specializes in harnessing materials informatics to discover new materials with unexpected properties that advance science and technology. Materials informatics is an emerging field of study at the intersection of AI and materials science. His team’s latest research was recently featured on the cover of Advanced Theory and Simulations.
    2D materials, which can be as thin as a single atom, were only discovered in 2004 and have been the subject of great scientific curiosity because of their unexpected properties. 2D magnets are significant because their long-range magnetic ordering persists when they are thinned down to one or a few layers. This is due to magnetic anisotropy. The interplay between this magnetic anisotropy and low dimensionality could give rise to exotic spin degrees of freedom, such as spin textures that can be used in the development of quantum computing architectures. 2D magnets also span the full range of electronic properties and can be used in high-performance and energy-efficient devices.
    Rhone and his team combined high-throughput density functional theory (DFT) calculations, used to determine the vdW materials’ properties, with a form of machine learning called semi-supervised learning. Semi-supervised learning uses a combination of labeled and unlabeled data to identify patterns and make predictions, which mitigates a major challenge in machine learning — the scarcity of labeled data.
    “Using AI saves time and money,” said Rhone. “The typical materials discovery process requires expensive simulations on a supercomputer that can take months. Lab experiments can take even longer and can be more expensive. An AI approach has the potential to speed up the materials discovery process.”
    Using an initial subset of 700 DFT calculations on a supercomputer, an AI model was trained that could predict the properties of many thousands of materials candidates in milliseconds on a laptop. The team then identified promising candidate vdW materials with large magnetic moments and low formation energy. Low formation energy is an indicator of chemical stability, which is an important requirement for synthesizing the material in a laboratory and subsequent industrial applications.
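    That workflow, a small expensive labeled set used to screen a large unlabeled pool, can be sketched with generic self-training. The features and labels below are synthetic stand-ins and the learner is an arbitrary choice; the study’s own descriptors, labels, and model will differ:
    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.semi_supervised import SelfTrainingClassifier

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for candidate-material descriptors.
    X = rng.normal(size=(5000, 8))
    is_promising = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy target

    # Pretend costly DFT runs labeled only 700 candidates; mark the rest
    # unlabeled with -1, the convention scikit-learn's self-training expects.
    y = np.full(len(X), -1)
    labeled = rng.choice(len(X), size=700, replace=False)
    y[labeled] = is_promising[labeled]

    # Fit on the labeled subset, then iteratively pseudo-label the unlabeled
    # pool wherever the model's predictions are confident.
    model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100))
    model.fit(X, y)

    # Screening every candidate is now a cheap prediction pass on a laptop.
    scores = model.predict_proba(X)[:, 1]
    print("Most promising candidate indices:", np.argsort(scores)[-5:])
    ```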
    “Our framework can easily be applied to explore materials with different crystal structures, as well,” said Rhone. “Mixed crystal structure prototypes, such as a data set of both transition metal halides and transition metal trichalcogenides, can also be explored with this framework.”
    “Dr. Rhone’s application of AI to the field of materials science continues to produce exciting results,” said Curt Breneman, dean of Rensselaer’s School of Science. “He has not only accelerated our understanding of 2D materials that have novel properties, but his findings and methods are likely to contribute to new quantum computing technologies.”
    Rhone was joined in the research by Romakanta Bhattarai and Haralambos Gavras of Rensselaer; Bethany Lusch and Misha Salim of Argonne National Laboratory; Marios Mattheakis, Daniel T. Larson, and Efthimios Kaxiras of Harvard University; and Yoshiharu Krockenberger of NTT Basic Research Laboratories.

  • Research shows mobile phone users do not understand what data they might be sharing

    Privacy and security features that aim to give consumers more control over the sharing of their data by smartphone apps are widely misunderstood, shows new research from the University of Bath’s School of Management.
    In the study, 43 per cent of phone users were confused or unclear about what app tracking means. People commonly mistook the purpose of tracking, thinking that it was intrinsic to the app’s function or that it would provide a better user experience.
    App tracking is used by companies to deliver targeted advertising to smartphone users.
    When iPhone users first open an app, a pop-up asks whether they want to allow the app company to track their activity across other apps. They can choose either ‘Ask App Not to Track’ or ‘Allow’, as introduced by Apple’s App Tracking Transparency framework in April 2021. Android users must access tracking consent via their phone settings.
    If people opt out of tracking, their use of apps and websites on their device can no longer be traced by the company, and the data can’t be used for targeted advertising, or shared with data brokers.
    The most common misapprehension (24 per cent) was that tracking refers to sharing the physical location of the device — rather than tracing the use of apps and websites. People thought they needed to accept tracking for food delivery and collection services, such as Deliveroo, or for health and fitness apps, because they believed their location was integral to the functioning of the app.

    While just over half of participants (51 per cent) said they were concerned about privacy or security — including security of their data after it had been collected — analysis showed no association between their concern for privacy in their daily life and a lower rate of tracking acceptance.
    “We asked people about their privacy concerns and expected to see people who are concerned about protecting their privacy allowing fewer apps to track their data, but this wasn’t the case,” said Hannah Hutton, postgraduate researcher from the University of Bath’s School of Management. “There were significant misunderstandings about what app tracking means. People commonly believed they needed to allow tracking for the app to function correctly.
    “Some of the confusion is likely to be due to lack of clarity in wording chosen by companies in the tracking prompts, which are easy to misinterpret. For example, when ASOS said ‘We’ll use your data to give you a more personalised ASOS experience and to make our app even more amazing’ it’s probably no surprise that people thought they were opting for additional functionality rather than just more relevant adverts.”
    Although the main text of the prompt for app tracking consent is standardised, app developers can include a sentence explaining why they are requesting tracking permission, and this can open the door to false or misleading information, either intentionally or unknowingly.
    Other misconceptions included believing that consenting to sharing for health apps (such as period tracking apps) would mean private data being shared, or that denying tracking would remove adverts from the app.
    The study, “Exploring User Motivations Behind iOS App Tracking Transparency Decisions,” is published in the proceedings of the ACM CHI Conference on Human Factors in Computing Systems and was presented at the CHI 2023 conference in Hamburg, Germany (23-28 April). It is thought to be the first academic analysis of the decisions people make when faced with tracking requests.
    The researchers collected data on the tracking decisions of 312 study participants (aged 18 to 75) and analysed reasons for allowing or rejecting tracking across a range of apps, including social media, shopping, health, and food delivery.
    David Ellis, a Professor of Behavioural Science and co-author, added: “This research further exposes how most consumers are not aware of how their digital data is being used. Every day, millions of us share information with tech companies, and while some of this data is essential for these services to function correctly, other data allows them to generate money from advertising revenue. For example, Meta predicted that they would lose $10 billion from people rejecting tracking.
    “While people are now familiar with the benefits of having PIN numbers and facial recognition to protect our devices, more work needs to be done so people can make transparent decisions about what other data is used for in the digital age.”

  • Extracting the best flavor from coffee

    Espresso coffee is brewed by first grinding roasted coffee beans into grains. Hot water then forces its way through a bed of coffee grains at high pressure, and the soluble content of the coffee grains dissolves into the water (extraction) to produce espresso.
    In 2020, researchers found that more finely ground coffee beans brew a weaker espresso. This counterintuitive experimental result makes sense if, for some reason, regions exist within the coffee bed where less or even no coffee is extracted. This uneven extraction becomes more pronounced when coffee is ground more finely.
    In Physics of Fluids, from AIP Publishing, University of Huddersfield researchers explored the role of uneven coffee extraction using a simple mathematical model. They split the coffee into two regions to examine whether uneven flow does in fact make weaker espresso.
    One of the regions in the model system hosted more tightly packed coffee than the other, which caused an initial disparity in flow resistance, because water flows more slowly through more tightly packed grains. The extraction of coffee decreased the flow resistance further, as coffee grains lose about 20% to 25% of their mass during the process.
    “Our model shows that flow and extraction widened the initial disparity in flow between the two regions due to a positive feedback loop, in which more flow leads to more extraction, which in turn reduces resistance and leads to more flow,” said co-author William Lee. “This effect appears to always be active, and it isn’t until one of the regions has all of its soluble coffee extracted that we see the experimentally observed decrease in extraction with decreasing grind size.”
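    A stripped-down simulation of that feedback loop takes only a few lines. The constants and linear relationships below are invented for illustration; they mimic the qualitative behaviour described above, not the published model:
    ```python
    import numpy as np

    dt, steps = 0.01, 3000
    pressure = 1.0
    soluble = np.array([1.0, 1.0])     # soluble coffee left in each region
    resistance = np.array([1.0, 1.2])  # region 2 starts more tightly packed

    for _ in range(steps):
        flow = pressure / resistance              # higher resistance, less flow
        extraction = 0.05 * flow * (soluble > 0)  # dissolve only what remains
        soluble = np.maximum(soluble - extraction * dt, 0.0)
        # Losing mass loosens the bed: resistance falls as coffee is extracted,
        # so the faster-flowing region gets faster still (positive feedback).
        resistance = np.maximum(resistance - 0.2 * extraction * dt, 0.3)

    print("flow ratio (loose / packed):", flow[0] / flow[1])
    print("soluble coffee left per region:", soluble)
    ```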
    The researchers were surprised to find the model always predicts uneven flow across different parts of the coffee bed.
    “This is important because the taste of the coffee depends on the level of extraction,” said Lee. “Too little extraction and the taste of the coffee is what experts call ‘underdeveloped,’ or as I describe it: smoky water. Too much extraction and the coffee tastes very bitter. These results suggest that even if it looks like the overall extraction is at the right level, it might be due to a mixture of underdeveloped and bitter coffee.”
    Understanding the origin of uneven extraction and avoiding or preventing it could enable better brews and substantial financial savings by using coffee more efficiently.
    “Our next step is to make the model more realistic to see if we can obtain more detailed insights into this confusing phenomenon,” said Lee. “Once this is achieved, we can start to think about whether it is possible to make changes to the way espresso coffee is brewed to reduce the amount of uneven extraction.”