More stories

  • Researchers have taught an algorithm to ‘taste’

    For non-connoisseurs, picking out a bottle of wine can be challenging when scanning an array of unfamiliar labels on the shop shelf. What does it taste like? What was the last one I bought that tasted so good?
    Here, wine apps like Vivino, Hello Vino, Wine Searcher and a host of others can help. Apps like these let wine buyers scan bottle labels to get information about a particular wine and read other users’ reviews. These apps are built on artificial intelligence algorithms.
    Now, scientists from the Technical University of Denmark (DTU), the University of Copenhagen and Caltech have shown that you can add a new parameter to the algorithms that makes it easier to find a precise match for your own taste buds: namely, people’s impressions of flavour.
    “We have demonstrated that, by feeding an algorithm with data consisting of people’s flavour impressions, the algorithm can make more accurate predictions of what kind of wine we individually prefer,” says Thoranna Bender, a graduate student at DTU who conducted the study under the auspices of the Pioneer Centre for AI at the University of Copenhagen.
    More accurate predictions of people’s favourite wines
    The researchers held wine tastings during which 256 participants were asked to arrange shot-sized cups of different wines on a piece of A3 paper based upon which wines they thought tasted most similarly. The greater the distance between the cups, the greater the difference in their flavour. The method is widely used in consumer tests. The researchers then digitized the points on the sheets of paper by photographing them.
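    To make the distance idea concrete, here is a minimal sketch, using made-up cup coordinates, of how the digitized positions on a sheet could be turned into a pairwise flavour-distance matrix (the study’s actual processing pipeline may differ):

    ```python
    # Minimal sketch: convert digitized cup positions on an A3 sheet into a
    # pairwise flavour-distance matrix. Coordinates and wine names are
    # hypothetical placeholders, not data from the study.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    # (x, y) position of each wine's cup on the sheet, in centimetres
    positions = {
        "wine_a": (3.0, 4.5),
        "wine_b": (21.0, 6.0),
        "wine_c": (19.5, 25.0),
    }

    names = list(positions)
    coords = np.array([positions[n] for n in names])

    # Larger distance between cups = more dissimilar flavour
    dist_matrix = squareform(pdist(coords))

    for i, a in enumerate(names):
        for j, b in enumerate(names):
            if i < j:
                print(f"{a} vs {b}: {dist_matrix[i, j]:.1f} cm apart")
    ```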
    The data collected from the wine tastings was then combined with hundreds of thousands of wine labels and user reviews provided to the researchers by Vivino, a global wine app and marketplace. Next, the researchers developed an algorithm based on the enormous data set.

    “The dimension of flavour that we created in the model provides us with information about which wines are similar in taste and which are not. So, for example, I can stand with my favourite bottle of wine and say: I would like to know which wine is most similar to it in taste — or both in taste and price,” says Thoranna Bender.
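    As an illustration of the kind of query Bender describes, the sketch below ranks wines by distance in a flavour-embedding space, optionally penalising price differences. The embeddings, prices and weighting are hypothetical placeholders rather than the published model:

    ```python
    # Hedged sketch of a "most similar wine" lookup in a learned flavour space,
    # optionally combined with price. All values below are illustrative.
    import numpy as np

    embeddings = {                      # hypothetical learned flavour vectors
        "wine_a": np.array([0.9, 0.1, 0.3]),
        "wine_b": np.array([0.8, 0.2, 0.4]),
        "wine_c": np.array([0.1, 0.9, 0.7]),
    }
    prices = {"wine_a": 12.0, "wine_b": 14.0, "wine_c": 30.0}

    def most_similar(query, price_weight=0.0):
        """Rank other wines by flavour distance to `query`, plus an optional price penalty."""
        q_vec, q_price = embeddings[query], prices[query]
        scores = {}
        for name in embeddings:
            if name == query:
                continue
            flavour_dist = np.linalg.norm(embeddings[name] - q_vec)
            price_dist = abs(prices[name] - q_price) / max(prices.values())
            scores[name] = flavour_dist + price_weight * price_dist
        return sorted(scores, key=scores.get)

    print(most_similar("wine_a"))                    # by taste only
    print(most_similar("wine_a", price_weight=1.0))  # by taste and price
    ```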
    Professor and co-author Serge Belongie from the Department of Computer Science, who heads the Pioneer Centre for AI at the University of Copenhagen, adds:
    “We can see that when the algorithm combines the data from wine labels and reviews with the data from the wine tastings, it makes more accurate predictions of people’s wine preferences than when it only uses the traditional types of data in the form of images and text. So, teaching machines to use human sensory experiences results in better algorithms that benefit the user.”
    Can also be used for beer and coffee
    According to Serge Belongie, there is a growing trend in machine learning of using so-called multimodal data, which usually consists of a combination of images, text and sound. Using taste or other sensory inputs as data sources is entirely new. And it has great potential — e.g., in the food sector. Belongie states:
    “Understanding taste is a key aspect of food science and essential for achieving healthy, sustainable food production. But the use of AI in this context remains very much in its infancy. This project shows the power of using human-based inputs in artificial intelligence, and I predict that the results will spur more research at the intersection of food science and AI.”
    Thoranna Bender points out that the researchers’ method can easily be transferred to other types of food and drink as well:

    “We’ve chosen wine as a case, but the same method can just as well be applied to beer and coffee. For example, the approach can be used to recommend products and perhaps even food recipes to people. And if we can better understand the taste similarities in food, we can also use it in the healthcare sector to put together meals that meet with the tastes and nutritional needs of patients. It might even be used to develop foods tailored to different taste profiles.”
    The researchers have published their data on an open server, where it can be used free of charge.
    “We hope that someone out there will want to build upon our data. I’ve already fielded requests from people who have additional data that they would like to include in our dataset. I think that’s really cool,” concludes Thoranna Bender.

  • Photonic chip that ‘fits together like Lego’ opens door to semiconductor industry

    Researchers at the University of Sydney Nano Institute have invented a compact silicon semiconductor chip that integrates electronics with photonic, or light, components. The new technology significantly expands radio-frequency (RF) bandwidth and the ability to accurately control information flowing through the unit.
    Expanded bandwidth means more information can flow through the chip and the inclusion of photonics allows for advanced filter controls, creating a versatile new semiconductor device.
    Researchers expect the chip will have applications in advanced radar, satellite systems, wireless networks and the roll-out of 6G and 7G telecommunications, and that it will open the door to advanced sovereign manufacturing. It could also assist in the creation of high-tech, value-add factories at places like Western Sydney’s Aerotropolis precinct.
    The chip is built using an emerging technology in silicon photonics that allows integration of diverse systems on semiconductors less than 5 millimetres wide. Pro-Vice-Chancellor (Research) Professor Ben Eggleton, who guides the research team, likened it to fitting together Lego building blocks, where new materials are integrated through advanced packaging of components, using electronic ‘chiplets’.
    The research for this invention has been published in Nature Communications.
    Dr Alvaro Casas Bedoya, Associate Director for Photonic Integration in the School of Physics, who led the chip design, said the unique method of heterogeneous materials integration has been 10 years in the making.
    “The combined use of overseas semiconductor foundries to make the basic chip wafer with local research infrastructure and manufacturing has been vital in developing this photonic integrated circuit,” he said.

    “This architecture means Australia could develop its own sovereign chip manufacturing without exclusively relying on international foundries for the value-add process.”
    Professor Eggleton highlighted the fact that most of the items on the Federal Government’s List of Critical Technologies in the National Interest depend upon semiconductors.
    He said the invention means the work at Sydney Nano fits well with initiatives like the Semiconductor Sector Service Bureau (S3B), sponsored by the NSW Government, which aims to develop the local semiconductor ecosystem.
    Dr Nadia Court, Director of S3B, said, “This work aligns with our mission to drive advancements in semiconductor technology, holding great promise for the future of semiconductor innovation in Australia. The result reinforces local strength in research and design at a pivotal time of increased global focus and investment in the sector.”
    Designed in collaboration with scientists at the Australian National University, the integrated circuit was built at the Core Research Facility cleanroom at the University of Sydney Nanoscience Hub, a purpose-built $150 million building with advanced lithography and deposition facilities.
    The photonic circuit gives the chip an impressive 15 gigahertz bandwidth of tunable frequencies, with spectral resolution down to just 37 megahertz, which is less than a quarter of one percent of the total bandwidth.
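    For reference, the quoted fraction follows directly from those two figures; a quick back-of-the-envelope check (in Python) confirms it:

    ```python
    # 37 MHz spectral resolution as a fraction of the 15 GHz tunable bandwidth
    resolution_hz = 37e6
    bandwidth_hz = 15e9
    print(f"{resolution_hz / bandwidth_hz:.4%}")  # ~0.2467%, under a quarter of one percent
    ```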

    Professor Eggleton said: “Led by our impressive PhD student Matthew Garrett, this invention is a significant advance for microwave photonics and integrated photonics research.
    “Microwave photonic filters play a crucial role in modern communication and radar applications, offering the flexibility to precisely filter different frequencies, reducing electromagnetic interference and enhancing signal quality.
    “Our innovative approach of integrating advanced functionalities into semiconductor chips, particularly the heterogeneous integration of chalcogenide glass with silicon, holds the potential to reshape the local semiconductor landscape.”
    Co-author and Senior Research Fellow Dr Moritz Merklein said: “This work paves the way for a new generation of compact, high-resolution RF photonic filters with wideband frequency tunability, particularly beneficial in air and spaceborne RF communication payloads, opening possibilities for enhanced communications and sensing capabilities.”

  • To help autonomous vehicles make moral decisions, researchers ditch the ‘trolley problem’

    Researchers have developed a new experiment to better understand what people view as moral and immoral decisions related to driving vehicles, with the goal of collecting data to train autonomous vehicles how to make “good” decisions. The work is designed to capture a more realistic array of moral challenges in traffic than the widely discussed life-and-death scenario inspired by the so-called “trolley problem.”
    “The trolley problem presents a situation in which someone has to decide whether to intentionally kill one person (which violates a moral norm) in order to avoid the death of multiple people,” says Dario Cecchini, first author of a paper on the work and a postdoctoral researcher at North Carolina State University.
    “In recent years, the trolley problem has been utilized as a paradigm for studying moral judgment in traffic,” Cecchini says. “The typical situation comprises a binary choice for a self-driving car between swerving left and hitting a lethal obstacle, or proceeding forward and hitting a pedestrian crossing the street. However, these trolley-like cases are unrealistic. Drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?”
    “Those mundane decisions are important because they can ultimately lead to life-or-death situations,” says Veljko Dubljevic, corresponding author of the paper and an associate professor in the Science, Technology & Society program at NC State.
    “For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.”
    To address that lack of data, the researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations. The researchers created seven different driving scenarios, such as a parent who has to decide whether to violate a traffic signal while trying to get their child to school on time. Each scenario is programmed into a virtual reality environment, so that study participants engaged in the experiment have audiovisual information about what drivers are doing when they make decisions, rather than simply reading about the scenario.
    For this work, the researchers built on something called the Agent Deed Consequence (ADC) model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that resulted from the deed.

    Researchers created eight different versions of each traffic scenario, varying the combinations of agent, deed and consequence. For example, in one version of the scenario where a parent is trying to get the child to school, the parent is caring, brakes at a yellow light, and gets the child to school on time. In a second version, the parent is abusive, runs a red light, and causes an accident. The other six versions alter the nature of the parent (the agent), their decision at the traffic signal (the deed), and/or the outcome of their decision (the consequence).
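    To make the combinatorics explicit, the short sketch below enumerates the eight versions of the school-run scenario: two options each for agent, deed and consequence give 2 x 2 x 2 = 8 combinations. The wording paraphrases the example above and is illustrative only, not the study’s exact stimuli:

    ```python
    # Enumerate the eight Agent-Deed-Consequence (ADC) variants of one scenario.
    from itertools import product

    agents = ["caring parent", "abusive parent"]                               # Agent
    deeds = ["brakes at the yellow light", "runs the red light"]               # Deed
    consequences = ["gets the child to school on time", "causes an accident"]  # Consequence

    for i, (agent, deed, consequence) in enumerate(product(agents, deeds, consequences), 1):
        print(f"Version {i}: the {agent} {deed} and {consequence}.")
    ```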
    “The goal here is to have study participants view one version of each scenario and determine how moral the behavior of the driver was in each scenario, on a scale from one to 10,” Cecchini says. “This will give us robust data on what we consider moral behavior in the context of driving a vehicle, which can then be used to develop AI algorithms for moral decision making in autonomous vehicles.”
    The researchers have done pilot testing to fine-tune the scenarios and ensure that they reflect believable and easily understood situations.
    “The next step is to engage in large-scale data collection, getting thousands of people to participate in the experiments,” says Dubljevic. “We can then use that data to develop more interactive experiments with the goal of further fine-tuning our understanding of moral decision making. All of this can then be used to create algorithms for use in autonomous vehicles. We’ll then need to engage in additional testing to see how those algorithms perform.”

  • Brainstorming with a bot

    A researcher has just finished writing a scientific paper. She knows her work could benefit from another perspective. Did she overlook something? Or perhaps there’s an application of her research she hadn’t thought of. A second set of eyes would be great, but even the friendliest of collaborators might not be able to spare the time to read all the required background publications to catch up.
    Kevin Yager — leader of the electronic nanomaterials group at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at DOE’s Brookhaven National Laboratory — has imagined how recent advances in artificial intelligence (AI) and machine learning (ML) could aid scientific brainstorming and ideation. To accomplish this, he has developed a chatbot with knowledge in the kinds of science he’s been engaged in.
    Rapid advances in AI and ML have given rise to programs that can generate creative text and useful software code. These general-purpose chatbots have recently captured the public imagination. Existing chatbots — based on large, diverse language models — lack detailed knowledge of scientific sub-domains. By leveraging a document-retrieval method, Yager’s bot is knowledgeable in areas of nanomaterial science that other bots are not. The details of this project and how other scientists can leverage this AI colleague for their own work have recently been published in Digital Discovery.
    Rise of the Robots
    “CFN has been looking into new ways to leverage AI/ML to accelerate nanomaterial discovery for a long time. Currently, it’s helping us quickly identify, catalog, and choose samples, automate experiments, control equipment, and discover new materials. Esther Tsai, a scientist in the electronic nanomaterials group at CFN, is developing an AI companion to help speed up materials research experiments at the National Synchrotron Light Source II (NSLS-II).” NSLS-II is another DOE Office of Science User Facility at Brookhaven Lab.
    At CFN, there has been a lot of work on AI/ML that can help drive experiments through the use of automation, controls, robotics, and analysis, but having a program that was adept with scientific text was something that researchers hadn’t explored as deeply. Being able to quickly document, understand, and convey information about an experiment can help in a number of ways — from breaking down language barriers to saving time by summarizing larger pieces of work.
    Watching Your Language
    To build a specialized chatbot, the program required domain-specific text — language taken from areas the bot is intended to focus on. In this case, the text is scientific publications. Domain-specific text helps the AI model understand new terminology and definitions and introduces it to frontier scientific concepts. Most importantly, this curated set of documents enables the AI model to ground its reasoning using trusted facts.

    To emulate natural human language, AI models are trained on existing text, enabling them to learn the structure of language, memorize various facts, and develop a primitive sort of reasoning. Rather than laboriously retrain the AI model on nanoscience text, Yager gave it the ability to look up relevant information in a curated set of publications. Providing it with a library of relevant data was only half of the battle. To use this text accurately and effectively, the bot would need a way to decipher the correct context.
    “A challenge that’s common with language models is that sometimes they ‘hallucinate’ plausible sounding but untrue things,” explained Yager. “This has been a core issue to resolve for a chatbot used in research as opposed to one doing something like writing poetry. We don’t want it to fabricate facts or citations. This needed to be addressed. The solution for this was something we call ’embedding,’ a way of categorizing and linking information quickly behind the scenes.”
    Embedding is a process that transforms words and phrases into numerical values. The resulting “embedding vector” quantifies the meaning of the text. When a user asks the chatbot a question, it’s also sent to the ML embedding model to calculate its vector value. This vector is used to search through a pre-computed database of text chunks from scientific papers that were similarly embedded. The bot then uses text snippets it finds that are semantically related to the question to get a more complete understanding of the context.
    The user’s query and the text snippets are combined into a “prompt” that is sent to a large language model, an expansive program that creates text modeled on natural human language, which generates the final response. The embedding ensures that the text being pulled is relevant in the context of the user’s question. By providing text chunks from the body of trusted documents, the chatbot generates answers that are factual and sourced.
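    A minimal sketch of that retrieval-and-prompting loop is shown below. The embed function, example text chunks and prompt wording are placeholders standing in for the bot’s actual embedding model and document store:

    ```python
    # Sketch of embedding-based retrieval: embed the question, find the most
    # similar pre-embedded text chunks, and assemble a grounded prompt.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder embedding; a real system would call an ML embedding model."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        vec = rng.normal(size=128)
        return vec / np.linalg.norm(vec)

    # Pre-computed database of text chunks from curated publications (hypothetical)
    chunks = [
        "Block copolymers self-assemble into periodic nanostructures ...",
        "Grazing-incidence X-ray scattering probes thin-film ordering ...",
        "Autonomous experiments use optimization to select the next measurement ...",
    ]
    chunk_vectors = np.stack([embed(c) for c in chunks])

    def retrieve(question: str, k: int = 2) -> list:
        """Return the k chunks whose embedding vectors are most similar to the question."""
        q = embed(question)
        similarity = chunk_vectors @ q              # cosine similarity (unit-length vectors)
        top = np.argsort(similarity)[::-1][:k]
        return [chunks[i] for i in top]

    question = "How can X-ray scattering monitor self-assembly?"
    prompt = (
        "Answer using only the sources below, and cite them.\n\n"
        + "\n".join(f"Source: {c}" for c in retrieve(question))
        + f"\n\nQuestion: {question}"
    )
    print(prompt)  # this prompt would then be sent to the large language model
    ```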
    “The program needs to be like a reference librarian,” said Yager. “It needs to heavily rely on the documents to provide sourced answers. It needs to be able to accurately interpret what people are asking and be able to effectively piece together the context of those questions to retrieve the most relevant information. While the responses may not be perfect yet, it’s already able to answer challenging questions and trigger some interesting thoughts while planning new projects and research.”
    Bots Empowering Humans
    CFN is developing AI/ML systems as tools that can liberate human researchers to work on more challenging and interesting problems and to get more out of their limited time while computers automate repetitive tasks in the background. There are still many unknowns about this new way of working, but these questions are the start of important discussions scientists are having right now to ensure AI/ML use is safe and ethical.
    “There are a number of tasks that a domain-specific chatbot like this could clear from a scientist’s workload. Classifying and organizing documents, summarizing publications, pointing out relevant info, and getting up to speed in a new topical area are just a few potential applications,” remarked Yager. “I’m excited to see where all of this will go, though. We never could have imagined where we are now three years ago, and I’m looking forward to where we’ll be three years from now.”

  • Scientists build tiny biological robots from human cells

    Researchers at Tufts University and Harvard University’s Wyss Institute have created tiny biological robots, which they call Anthrobots, from human tracheal cells. The bots can move across a surface and have been found to encourage the growth of neurons across a region of damage in a lab dish.
    The multicellular robots, ranging in size from the width of a human hair to the point of a sharpened pencil, were made to self-assemble and shown to have a remarkable healing effect on other cells. The discovery is a starting point for the researchers’ vision to use patient-derived biobots as new therapeutic tools for regeneration, healing, and treatment of disease.
    The work follows from earlier research in the laboratories of Michael Levin, Vannevar Bush Professor of Biology at Tufts University School of Arts & Sciences, and Josh Bongard at the University of Vermont in which they created multicellular biological robots from frog embryo cells called Xenobots, capable of navigating passageways, collecting material, recording information, healing themselves from injury, and even replicating for a few cycles on their own. At the time, researchers did not know if these capabilities were dependent on their being derived from an amphibian embryo, or if biobots could be constructed from cells of other species.
    In the current study, published in Advanced Science, Levin, along with PhD student Gizem Gumuskaya, discovered that bots can in fact be created from adult human cells without any genetic modification, and that they demonstrate some capabilities beyond what was observed with the Xenobots. The discovery starts to answer a broader question the lab has posed — what are the rules that govern how cells assemble and work together in the body, and can the cells be taken out of their natural context and recombined into different “body plans” to carry out other functions by design?
    In this case, researchers gave human cells, after decades of quiet life in the trachea, a chance to reboot and find ways of creating new structures and tasks. “We wanted to probe what cells can do besides create default features in the body,” said Gumuskaya, who earned a degree in architecture before coming into biology. “By reprogramming interactions between cells, new multicellular structures can be created, analogous to the way stone and brick can be arranged into different structural elements like walls, archways or columns.” The researchers found that not only could the cells create new multicellular shapes, but they could move in different ways over a surface of human neurons grown in a lab dish and encourage new growth to fill in gaps caused by scratching the layer of cells.
    Exactly how the Anthrobots encourage growth of neurons is not yet clear, but the researchers confirmed that neurons grew under the area covered by a clustered assembly of Anthrobots, which they called a “superbot.”
    “The cellular assemblies we construct in the lab can have capabilities that go beyond what they do in the body,” said Levin, who also serves as the director of the Allen Discovery Center at Tufts and is an associate faculty member of the Wyss Institute. “It is fascinating and completely unexpected that normal patient tracheal cells, without modifying their DNA, can move on their own and encourage neuron growth across a region of damage,” said Levin. “We’re now looking at how the healing mechanism works, and asking what else these constructs can do.”
    The advantages of using human cells include the ability to construct bots from a patient’s own cells to perform therapeutic work without the risk of triggering an immune response or requiring immunosuppressants. They only last a few weeks before breaking down, and so can easily be re-absorbed into the body after their work is done.

    In addition, outside of the body, Anthrobots can only survive in very specific laboratory conditions, and there is no risk of exposure or unintended spread outside the lab. Likewise, they do not reproduce, and they have no genetic edits, additions or deletions, so there is no risk of their evolving beyond existing safeguards.
    How Are Anthrobots Made?
    Each Anthrobot starts out as a single cell, derived from an adult donor. The cells come from the surface of the trachea and are covered with hairlike projections called cilia that wave back and forth. The cilia help the tracheal cells push out tiny particles that find their way into air passages of the lung. We all experience the work of ciliated cells when we take the final step of expelling the particles and excess fluid by coughing or clearing our throats. Earlier studies by others had shown that when the cells are grown in the lab, they spontaneously form tiny multicellular spheres called organoids.
    The researchers developed growth conditions that encouraged the cilia to face outward on organoids. Within a few days the organoids started moving around, driven by the cilia acting like oars. They noted different shapes and types of movement — the first important feature observed of the biorobotics platform. Levin says that if other features could be added to the Anthrobots (for example, contributed by different cells), they could be designed to respond to their environment, and travel to and perform functions in the body, or help build engineered tissues in the lab.
    The team, with the help of Simon Garnier at the New Jersey Institute of Technology, characterized the different types of Anthrobots that were produced. They observed that bots fell into a few discrete categories of shape and movement, ranging in size from 30 to 500 micrometers (from the thickness of a human hair to the point of a sharpened pencil), filling an important niche between nanotechnology and larger engineered devices.
    Some were spherical and fully covered in cilia, and some were irregular or football shaped with more patchy coverage of cilia, or just covered with cilia on one side. They traveled in straight lines, moved in tight circles, combined those movements, or just sat around and wiggled. The spherical ones fully covered with cilia tended to be wigglers. The Anthrobots with cilia distributed unevenly tended to move forward for longer stretches in straight or curved paths. They usually survived about 45-60 days in laboratory conditions before they naturally biodegraded.

    “Anthrobots self-assemble in the lab dish,” said Gumuskaya, who created the Anthrobots. “Unlike Xenobots, they don’t require tweezers or scalpels to give them shape, and we can use adult cells — even cells from elderly patients — instead of embryonic cells. It’s fully scalable — we can produce swarms of these bots in parallel, which is a good start for developing a therapeutic tool.”
    Little Healers
    Because Levin and Gumuskaya ultimately plan to make Anthrobots with therapeutic applications, they created a lab test to see how the bots might heal wounds. The model involved growing a two-dimensional layer of human neurons, and simply by scratching the layer with a thin metal rod, they created an open ‘wound’ devoid of cells.
    To ensure the gap would be exposed to a dense concentration of Anthrobots, they created “superbots,” a cluster that naturally forms when the Anthrobots are confined to a small space. The superbots were made up primarily of circlers and wigglers, so they would not wander too far away from the open wound.
    Although it might be expected that genetic modifications of Anthrobot cells would be needed to help the bots encourage neural growth, surprisingly the unmodified Anthrobots triggered substantial regrowth, creating a bridge of neurons as thick as the rest of the healthy cells on the plate. Neurons did not grow in the wound where Anthrobots were absent. At least in the simplified 2D world of the lab dish, the Anthrobot assemblies encouraged efficient healing of live neural tissue.
    According to the researchers, further development of the bots could lead to other applications, including clearing plaque buildup in the arteries of atherosclerosis patients, repairing spinal cord or retinal nerve damage, recognizing bacteria or cancer cells, or delivering drugs to targeted tissues. The Anthrobots could in theory assist in healing tissues, while also laying down pro-regenerative drugs.
    Making New Blueprints, Restoring Old Ones
    Gumuskaya explained that cells have the innate ability to self-assemble into larger structures in certain fundamental ways. “The cells can form layers, fold, make spheres, sort and separate themselves by type, fuse together, or even move,” Gumuskaya said. “Two important differences from inanimate bricks are that cells can communicate with each other and create these structures dynamically, and each cell is programmed with many functions, like movement, secretion of molecules, detection of signals and more. We are just figuring out how to combine these elements to create new biological body plans and functions — different than those found in nature.”
    Taking advantage of the inherently flexible rules of cellular assembly helps the scientists construct the bots, but it can also help them understand how natural body plans assemble, how the genome and environment work together to create tissues, organs, and limbs, and how to restore them with regenerative treatments.

  • Scientists use A.I.-generated images to map visual functions in the brain

    Researchers at Weill Cornell Medicine, Cornell Tech and Cornell’s Ithaca campus have demonstrated the use of AI-selected natural images and AI-generated synthetic images as neuroscientific tools for probing the visual processing areas of the brain. The goal is to apply a data-driven approach to understand how vision is organized while potentially removing biases that may arise when looking at responses to a more limited set of researcher-selected images.
    In the study, published Oct. 23 in Communications Biology, the researchers had volunteers look at images that had been selected or generated based on an AI model of the human visual system. The images were predicted to maximally activate several visual processing areas. Using functional magnetic resonance imaging (fMRI) to record the brain activity of the volunteers, the researchers found that the images did activate the target areas significantly better than control images.
    The researchers also showed that they could use this image-response data to tune their vision model for individual volunteers, so that images generated to be maximally activating for a particular individual worked better than images generated based on a general model.
    “We think this is a promising new approach to study the neuroscience of vision,” said study senior author Dr. Amy Kuceyeski, a professor of mathematics in radiology and of mathematics in neuroscience in the Feil Family Brain and Mind Research Institute at Weill Cornell Medicine.
    The study was a collaboration with the laboratory of Dr. Mert Sabuncu, a professor of electrical and computer engineering at Cornell Engineering and Cornell Tech, and of electrical engineering in radiology at Weill Cornell Medicine. The study’s first author was Dr. Zijin Gu, who was a doctoral student co-mentored by Dr. Sabuncu and Dr. Kuceyeski at the time of the study.
    Making an accurate model of the human visual system, in part by mapping brain responses to specific images, is one of the more ambitious goals of modern neuroscience. Researchers have found, for example, that one visual processing region may activate strongly in response to an image of a face whereas another may respond to a landscape. Scientists must rely mainly on non-invasive methods in pursuit of this goal, given the risk and difficulty of recording brain activity directly with implanted electrodes. The preferred non-invasive method is fMRI, which essentially records changes in blood flow in small vessels of the brain — an indirect measure of brain activity — as subjects are exposed to sensory stimuli or otherwise perform cognitive or physical tasks. An fMRI machine can read out these tiny changes in three dimensions across the brain, at a resolution on the order of cubic millimeters.
    For their own studies, Dr. Kuceyeski and Dr. Sabuncu and their teams used an existing dataset comprising tens of thousands of natural images, with corresponding fMRI responses from human subjects, to train an AI-type system called an artificial neural network (ANN) to model the human brain’s visual processing system. They then used this model to predict which images, across the dataset, should maximally activate several targeted vision areas of the brain. They also coupled the model with an AI-based image generator to generate synthetic images to accomplish the same task.
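    The selection step might look roughly like the sketch below, in which a placeholder encoder scores a pool of images and the highest-scoring ones are kept as predicted maximal activators, with near-average images as controls. The model and image pool here are illustrative, not the study’s actual code or data:

    ```python
    # Rank an image pool by an encoder's predicted response in a target visual area.
    import numpy as np

    rng = np.random.default_rng(0)
    image_pool = rng.random((2_000, 32, 32, 3))    # stand-in for the natural-image dataset

    def predict_activation(images: np.ndarray) -> np.ndarray:
        """Placeholder for the trained ANN encoder: one predicted fMRI response
        per image for the targeted region (e.g. FFA1)."""
        return images.mean(axis=(1, 2, 3)) + rng.normal(scale=0.01, size=len(images))

    scores = predict_activation(image_pool)
    top_k = 10
    max_activator_idx = np.argsort(scores)[::-1][:top_k]              # predicted maximal activators
    control_idx = np.argsort(np.abs(scores - scores.mean()))[:top_k]  # near-average controls

    print("Predicted maximal activators:", max_activator_idx)
    print("Average-activation controls:", control_idx)
    ```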

    “Our general idea here has been to map and model the visual system in a systematic, unbiased way, in principle even using images that a person normally wouldn’t encounter,” Dr. Kuceyeski said.
    The researchers enrolled six volunteers and recorded their fMRI responses to these images, focusing on the responses in several visual processing areas. The results showed that, for both the natural images and the synthetic images, the predicted maximal activator images, on average across the subjects, did activate the targeted brain regions significantly more than a set of images that were selected or generated to be only average activators. This supports the general validity of the team’s ANN-based model and suggests that even synthetic images may be useful as probes for testing and improving such models.
    In a follow-on experiment, the team used the image and fMRI-response data from the first session to create separate ANN-based visual system models for each of the six subjects. They then used these individualized models to select or generate predicted maximal-activator images for each subject. The fMRI responses to these images showed that, at least for the synthetic images, there was greater activation of the targeted visual region, a face-processing region called FFA1, compared to the responses to images based on the group model. This result suggests that AI and fMRI can be useful for individualized visual-system modeling, for example to study differences in visual system organization across populations.
    The researchers are now running similar experiments using a more advanced version of the image generator, called Stable Diffusion.
    The same general approach could be useful in studying other senses such as hearing, they noted.
    Dr. Kuceyeski also hopes ultimately to study the therapeutic potential of this approach.
    “In principle, we could alter the connectivity between two parts of the brain using specifically designed stimuli, for example to weaken a connection that causes excess anxiety,” she said.

  • 2D material reshapes 3D electronics for AI hardware

    Multifunctional computer chips have evolved to do more with integrated sensors, processors, memory and other specialized components. However, as chips have expanded, the time required to move information between functional components has also grown.
    “Think of it like building a house,” said Sang-Hoon Bae, an assistant professor of mechanical engineering and materials science at the McKelvey School of Engineering at Washington University in St. Louis. “You build out laterally and up vertically to get more function, more room to do more specialized activities, but then you have to spend more time moving or communicating between rooms.”
    To address this challenge, Bae and a team of international collaborators, including researchers from the Massachusetts Institute of Technology, Yonsei University, Inha University, Georgia Institute of Technology and the University of Notre Dame, demonstrated monolithic 3D integration of layered 2D material into novel processing hardware for artificial intelligence (AI) computing. They envision that their new approach will not only provide a material-level solution for fully integrating many functions into a single, small electronic chip, but also pave the way for advanced AI computing. Their work was published Nov. 27 in Nature Materials, where it was selected as a front cover article.
    The team’s monolithic 3D-integrated chip offers advantages over existing laterally integrated computer chips. The device contains six atomically thin 2D layers, each with its own function, and achieves significantly reduced processing time, power consumption, latency and footprint. This is accomplished through tightly packing the processing layers to ensure dense interlayer connectivity. As a result, the hardware offers unprecedented efficiency and performance in AI computing tasks.
    This discovery offers a novel solution to integrate electronics and also opens the door to a new era of multifunctional computing hardware. With ultimate parallelism at its core, this technology could dramatically expand the capabilities of AI systems, enabling them to handle complex tasks with lightning speed and exceptional accuracy, Bae said.
    “Monolithic 3D integration has the potential to reshape the entire electronics and computing industry by enabling the development of more compact, powerful and energy-efficient devices,” Bae said. “Atomically thin 2D materials are ideal for this, and my collaborators and I will continue improving this material until we can ultimately integrate all functional layers on a single chip.”
    Bae said these devices also are more flexible and functional, making them suitable for more applications.
    “From autonomous vehicles to medical diagnostics and data centers, the applications of this monolithic 3D integration technology are potentially boundless,” he said. “For example, in-sensor computing combines sensor and computer functions in one device, instead of a sensor obtaining information then transferring the data to a computer. That lets us obtain a signal and directly compute data resulting in faster processing, less energy consumption and enhanced security because data isn’t being transferred.”

  • Straining memory leads to new computing possibilities

    By strategically straining materials that are as thin as a single layer of atoms, University of Rochester scientists have developed a new form of computing memory that is at once fast, dense, and low-power. The researchers outline their new hybrid resistive switches in a study published in Nature Electronics.
    Developed in the lab of Stephen M. Wu, an assistant professor of electrical and computer engineering and of physics, the approach marries the best qualities of two existing forms of resistive switches used for memory: memristors and phase-change materials. Both forms have been explored for their advantages over today’s most prevalent forms of memory, including dynamic random access memory (DRAM) and flash memory, but have their drawbacks.
    Wu says that memristors, which operate by applying voltage to a thin filament between two electrodes, tend to suffer from a relative lack of reliability compared to other forms of memory. Meanwhile, phase-change materials, which involve selectively melting a material into either an amorphous state or a crystalline state, require too much power.
    “We’ve combined the idea of a memristor and a phase-change device in a way that can go beyond the limitations of either device,” says Wu. “We’re making a two-terminal memristor device, which drives one type of crystal to another type of crystal phase. Those two crystal phases have different resistance that you can then store as memory.”
    The key is leveraging 2D materials that can be strained to the point where they lie precariously between two different crystal phases and can be nudged in either direction with relatively little power.
    “We engineered it by essentially just stretching the material in one direction and compressing it in another,” says Wu. “By doing that, you enhance the performance by orders of magnitude. I see a path where this could end up in home computers as a form of memory that’s ultra-fast and ultra-efficient. That could have big implications for computing in general.”
    Wu and his team of graduate students conducted the experimental work and partnered with researchers from Rochester’s Department of Mechanical Engineering, including assistant professors Hesam Askari and Sobhit Singh, to identify where and how to strain the material. According to Wu, the biggest hurdle remaining to making the phase-change memristors is continuing to improve their overall reliability — but he is nonetheless encouraged by the team’s progress to date.