More stories

  • Algorithms and automation: Making new technology faster and cheaper

    Additive manufacturing (AM) machinery has advanced over time; however, the software needed to drive new machines often lags behind. To help close this gap, Penn State researchers designed automated process-planning software that saves money, time and design resources.
    Newer five-axis machines move linearly along the x-, y- and z-axes and rotate about two additional axes, allowing the machine to change an object’s orientation. They are an advance over traditional three-axis machines, which lack rotation capabilities and require support structures.
    Such a machine can potentially lead to large cost and time savings; however, five-axis AM lacks the same design planning and automation that three-axis machines have. This is where the creation of planning software becomes critical.
    “Five-axis AM is a young area, and the software isn’t there yet,” said Xinyi Xiao, a summer 2020 Penn State doctoral recipient in industrial engineering, now an assistant professor in mechanical and manufacturing engineering at Miami University in Ohio. “Essentially, we developed a methodology to automatically map designs from CAD — computer-aided design — software to AM to help cut unnecessary steps. You save money by taking less time to make the part and by also using less materials from three-axis support structures.”
    Xiao conducted this work as part of her doctoral program in the Penn State Harold and Inge Marcus Department of Industrial and Manufacturing Engineering under the supervision of Sanjay Joshi, professor of industrial engineering. Their research was published in the Journal of Additive Manufacturing.
    “We want to automate the decision process for manufacturing designs to get to ‘push button additive manufacturing,'” Joshi said. “The idea of the software is to make five-axis AM fully automated without the need for manual work or re-designs of a product. Xinyi came to me when she needed guidance or had questions, but ultimately, she held the key.”
    The software’s algorithm automatically determines a part’s sections and their orientations. From this, the software designates when each section will be printed, and in which orientation, within the printing sequence. A decomposition process breaks the part’s geometry into individual sections, each printable without support structures. As each piece is made in order, the machine rotates about its axes to reorient the part and continue printing. Xiao compared the approach to working with Lego building blocks.
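    To make the idea concrete, here is a minimal, hypothetical sketch of such a decomposition — an illustration of the general approach, not Xiao and Joshi’s published algorithm. It abstracts a part as surface facets with outward unit normals and greedily groups facets into sections by build directions under which each section prints support-free; the 45-degree overhang threshold, the six candidate orientations and all names below are assumptions.

        import numpy as np

        OVERHANG_LIMIT_DEG = 45.0  # assumed self-supporting overhang angle

        def printable(normal, build_dir, limit=OVERHANG_LIMIT_DEG):
            """True if a facet with this outward unit normal prints
            support-free along the unit vector build_dir."""
            # Downward-facing facets steeper than the limit would overhang.
            return np.dot(normal, build_dir) >= -np.sin(np.radians(limit))

        def decompose(normals, candidate_dirs):
            """Greedy section/orientation plan: each pass picks the build
            direction that covers the most remaining facets support-free."""
            remaining = set(range(len(normals)))
            plan = []  # ordered (facet indices, build direction) pairs
            while remaining:
                direction, covered = max(
                    ((d, {i for i in remaining if printable(normals[i], d)})
                     for d in candidate_dirs),
                    key=lambda pair: len(pair[1]))
                if not covered:  # no orientation helps; supports are needed
                    return None
                plan.append((sorted(covered), direction))
                remaining -= covered
            return plan

        # Toy part with facets facing +z, +x and -z; candidates are the
        # six axis-aligned build orientations.
        dirs = [np.array(v, float) for v in
                [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
        normals = [np.array(v, float) for v in [(0, 0, 1), (1, 0, 0), (0, 0, -1)]]
        print(decompose(normals, dirs))

    A real planner must also handle collision and reachability constraints, but even this toy version shows how a plan doubles as a feasibility check: a None result flags a part that cannot be printed support-free.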
    The algorithm can help inform a designer’s process plan to manufacture a part. It allows designers opportunities to make corrections or alter the design before printing, which can positively affect cost. The algorithm can also inform a designer how feasible a part may be to create using support-free manufacturing.
    “With an algorithm, you don’t really need the expertise from the user because it’s in the software,” Joshi said. “Automation can help with trying out a bunch of different scenarios very quickly before you create anything on the machine.”
    Xiao said she intends to continue this research as some of the major application areas of this technology are aerospace and automobiles.
    “Large metal components, using traditional additive manufacturing, can take days and waste lots of materials by using support structures,” Xiao said. “Additive manufacturing is very powerful, and it can make a lot of things due to its flexibility; however, it also has its disadvantages. There is still more work to do.”

    Story Source:
    Materials provided by Penn State. Original written by Miranda Buckheit. Note: Content may be edited for style and length.

  • Understanding COVID-19 infection and possible mutations

    The binding of a SARS-CoV-2 virus surface protein spike — a projection from the spherical virus particle — to the human cell surface protein ACE2 is the first step to infection that may lead to COVID-19 disease. Penn State researchers computationally assessed how changes to the virus spike makeup can affect binding with ACE2 and compared results to those of the original SARS-CoV virus (SARS).
    The researchers’ original manuscript preprint, made available online in March, was among the first to computationally investigate SARS-CoV-2’s high affinity, or tendency to bind, with human ACE2. The paper was published online on Sept. 18 in the Computational and Structural Biotechnology Journal. The work was conceived and led by Costas Maranas, Donald B. Broughton Professor in the Department of Chemical Engineering, and his former graduate student Ratul Chowdhury, who is currently a postdoctoral fellow at Harvard Medical School.
    “We were interested in answering two important questions,” said Veda Sheersh Boorla, doctoral student in chemical engineering and co-author on the paper. “We wanted to first discern key structural changes that give COVID-19 a higher affinity towards human ACE2 proteins when compared with SARS, and then assess its potential affinity to livestock or other animal ACE2 proteins.”
    The researchers computationally modeled the attachment of SARS-CoV-2 protein spike to ACE2, which is located in the upper respiratory tract and serves as the entry point for other coronaviruses, including SARS. The team used a molecular modeling approach to compute the binding strength and interactions of the viral protein’s attachment to ACE2.
    The team found that the SARS-CoV-2 spike protein is highly optimized to bind with human ACE2. Simulations of viral attachment to homologous ACE2 proteins of bats, cattle, chickens, horses, felines and canines showed the highest affinity for bats and human ACE2, with lower values of affinity for cats, horses, dogs, cattle and chickens, according to Chowdhury.
    “Beyond explaining the molecular mechanism of binding with ACE2, we also explored changes in the virus spike that could change its affinity with human ACE2,” said Chowdhury, who earned his doctorate in chemical engineering at Penn State in fall 2019.
    Understanding the binding behavior of the virus spike with ACE2 and the virus tolerance of these structural spike changes could inform future research on vaccine durability and the potential for the virus to spread to other species.
    “The computational workflow that we have established should be able to handle other receptor binding-mediated entry mechanisms for other viruses that may arise in the future,” Chowdhury said.
    The Department of Agriculture, the Department of Energy and the National Science Foundation supported this work.

    Story Source:
    Materials provided by Penn State. Original written by Gabrielle Stewart. Note: Content may be edited for style and length.

  • Breakthrough optical sensor mimics human eye, a key step toward better AI

    Researchers at Oregon State University are making key advances with a new type of optical sensor that more closely mimics the human eye’s ability to perceive changes in its visual field.
    The sensor is a major breakthrough for fields such as image recognition, robotics and artificial intelligence. Findings by OSU College of Engineering researcher John Labram and graduate student Cinthya Trujillo Herrera were published today in Applied Physics Letters.
    Previous attempts to build a human-eye type of device, called a retinomorphic sensor, have relied on software or complex hardware, said Labram, assistant professor of electrical engineering and computer science. But the new sensor’s operation is part of its fundamental design, using ultrathin layers of perovskite semiconductors — widely studied in recent years for their solar energy potential — that change from strong electrical insulators to strong conductors when placed in light.
    “You can think of it as a single pixel doing something that would currently require a microprocessor,” said Labram, who is leading the research effort with support from the National Science Foundation.
    The new sensor could be a perfect match for the neuromorphic computers that will power the next generation of artificial intelligence in applications like self-driving cars, robotics and advanced image recognition, Labram said. Unlike traditional computers, which process information sequentially as a series of instructions, neuromorphic computers are designed to emulate the human brain’s massively parallel networks.
    “People have tried to replicate this in hardware and have been reasonably successful,” Labram said. “However, even though the algorithms and architecture designed to process information are becoming more and more like a human brain, the information these systems receive is still decidedly designed for traditional computers.”
    In other words: To reach its full potential, a computer that “thinks” more like a human brain needs an image sensor that “sees” more like a human eye.

    A spectacularly complex organ, the eye contains around 100 million photoreceptors. However, the optic nerve has only about 1 million connections to the brain. This means that a significant amount of preprocessing and dynamic compression — roughly a hundredfold — must take place in the retina before the image can be transmitted.
    As it turns out, our sense of vision is particularly well adapted to detect moving objects and is comparatively “less interested” in static images, Labram said. Thus, our optical circuitry gives priority to signals from photoreceptors detecting a change in light intensity — you can demonstrate this yourself by staring at a fixed point until objects in your peripheral vision start to disappear, a phenomenon known as the Troxler effect.
    Conventional sensing technologies, like the chips found in digital cameras and smartphones, are better suited to sequential processing, Labram said. Images are scanned across a two-dimensional array of sensors, pixel by pixel, at a set frequency. Each sensor generates a signal with an amplitude that varies directly with the intensity of the light it receives, meaning a static image will result in a more or less constant output voltage from the sensor.
    By contrast, the retinomorphic sensor stays relatively quiet under static conditions. It registers a short, sharp signal when it senses a change in illumination, then quickly reverts to its baseline state. This behavior is owed to the unique photoelectric properties of a class of semiconductors known as perovskites, which have shown great promise as next-generation, low-cost solar cell materials.
    In Labram’s retinomorphic sensor, the perovskite is applied in ultrathin layers, just a few hundred nanometers thick, and functions essentially as a capacitor that varies its capacitance under illumination. A capacitor stores energy in an electrical field.

    “The way we test it is, basically, we leave it in the dark for a second, then we turn the lights on and just leave them on,” he said. “As soon as the light goes on, you get this big voltage spike, then the voltage quickly decays, even though the intensity of the light is constant. And that’s what we want.”
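    That switch-on test is easy to mimic numerically. Below is a toy sketch — an assumed first-order high-pass model used here for illustration, not the team’s published device physics — in which the output is driven by changes in illumination and relaxes back to zero under constant light.

        import numpy as np

        def retinomorphic_pixel(intensity, dt=1e-3, tau=0.05, gain=1.0):
            """Toy pixel: spikes when the light changes, decays to baseline."""
            v = np.zeros(len(intensity))
            for t in range(1, len(intensity)):
                d_i = intensity[t] - intensity[t - 1]  # change in illumination
                v[t] = v[t - 1] * (1 - dt / tau) + gain * d_i
            return v

        # One second of dark, then the light switches on and stays on.
        time = np.arange(0.0, 2.0, 1e-3)
        light = (time >= 1.0).astype(float)
        response = retinomorphic_pixel(light)
        print(f"peak: {response.max():.2f}, steady state: {response[-1]:.6f}")
        # -> a sharp voltage spike at switch-on, then decay toward zero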
    Although Labram’s lab currently can test only one sensor at a time, his team measured a number of devices and developed a numerical model to replicate their behavior, arriving at what Labram deems “a good match” between theory and experiment.
    This enabled the team to simulate an array of retinomorphic sensors to predict how a retinomorphic video camera would respond to input stimulus.
    “We can convert video to a set of light intensities and then put that into our simulation,” Labram said. “Regions where a higher-voltage output is predicted from the sensor light up, while the lower-voltage regions remain dark. If the camera is relatively static, you can clearly see all the things that are moving respond strongly. This stays reasonably true to the paradigm of optical sensing in mammals.”
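    As a rough illustration of that pipeline (the function name and parameters are again assumptions), the toy pixel model above can be applied to every pixel of a frame sequence:

        import numpy as np

        def retinomorphic_frames(frames, dt=1 / 30, tau=0.05, gain=1.0):
            """Apply the toy pixel model to a list of 2-D intensity arrays."""
            v = np.zeros_like(frames[0], dtype=float)
            out = []
            for prev, cur in zip(frames, frames[1:]):
                # Motion drives the voltage up; static regions decay to dark.
                v = v * (1 - dt / tau) + gain * (cur - prev)
                out.append(v.copy())
            return out  # bright where pixels changed, dark where static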
    A simulation using footage of a baseball practice demonstrates the expected results: Players in the infield show up as clearly visible, bright moving objects. Relatively static objects — the baseball diamond, the bleachers, even the outfielders — fade into darkness.
    An even more striking simulation shows a bird flying into view, then all but disappearing as it stops at an invisible bird feeder. The bird reappears as it takes off. The feeder, set swaying, becomes visible only as it starts to move.
    “The good thing is that, with this simulation, we can input any video into one of these arrays and process that information in essentially the same way the human eye would,” Labram said. “For example, you can imagine these sensors being used by a robot tracking the motion of objects. Anything static in its field of view would not elicit a response, but a moving object would register a high voltage. This would tell the robot immediately where the object was, without any complex image processing.”

  • New approach for more accurate epidemic modeling

    A new class of epidemiological models based on alternative thinking about how contagions propagate, particularly in the early phases of a pandemic, provides a blueprint for more accurate epidemic modeling and improved predictions of and responses to disease spread, according to a study published recently in Scientific Reports by researchers at the University of California, Irvine and other institutions.
    In the paper, the scientists said that standard epidemic models incorrectly assume that the rate at which an infectious disease spreads depends on a simple product of the numbers of infected and susceptible people. The authors instead suggest that transmission happens not through complete mingling of entire populations but at the boundary of sub-groups of infected individuals.
    “Standard epidemiological models rely on the presumption of strong mixing between infected and non-infected individuals, with widespread contact between members of those groups,” said co-author Tryphon Georgiou, UCI Distinguished Professor of mechanical & aerospace engineering. “We stress, rather, that transmission occurs in geographically concentrated cells. Therefore, in our view, the use of fractional exponents helps us more accurately predict rates of infection and disease spread.”
    The researchers proposed a “fractional power alternative” to customary models that takes into account susceptible, infected and recovered populations. The value of the exponents in these fractional (fSIR) models depends on factors such as the nature and extent of contact between infected and healthy sub-populations.
    The authors explained that during the initial phase of an epidemic, infection proceeds outward from contagion carriers into the general population. Since the number of susceptible people is much larger than the number of infected, the boundary of the infected cells scales as a fractional power, less than one, of the cells’ area.
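    In other words, the bilinear transmission term of the textbook SIR model is replaced by one raised to a fractional power. A minimal simulation sketch follows, under the assumption that the infected count enters as I**gamma — one plausible reading of the fSIR form; all parameter values are illustrative, not fit from the paper.

        def fsir(s0, i0, beta=0.3, gamma_exp=0.7, recovery=0.1,
                 days=30, dt=0.1):
            """Forward-Euler integration of an SIR variant with a
            fractional exponent on the infected population."""
            n = s0 + i0
            s, i, r = float(s0), float(i0), 0.0
            for _ in range(int(days / dt)):
                new_inf = beta * (s / n) * i ** gamma_exp * dt  # boundary-limited
                new_rec = recovery * i * dt
                s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            return s, i, r

        # Smaller exponents slow early growth relative to classical SIR:
        for g in (1.0, 0.8, 0.6):
            _, i30, _ = fsir(1_000_000, 10, gamma_exp=g)
            print(f"gamma = {g}: infected after 30 days ~ {i30:,.0f}")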
    The researchers tested their theory through a series of numerical simulations. They also fitted their fractional models to actual data from the Johns Hopkins University Center for Systems Science and Engineering covering the first few months of the COVID-19 pandemic in Italy, Germany, France and Spain. Through both processes they found the exponent to be in the range of 0.6 to 0.8.
    “The fractional exponent impacts in substantially different ways how the epidemic progresses during early and later phases, and as a result, identifying the correct exponent extends the duration over which reliable predictions can be made as compared to previous models,” Georgiou said.
    In the context of the current COVID-19 pandemic, better knowledge about propagation of infections could aid in decisions related to the institution of masking and social distancing mandates in communities.
    “Accurate epidemiological models can help policy makers choose the right course of action to help prevent further spread of infectious diseases,” Georgiou said.

    Story Source:
    Materials provided by University of California – Irvine. Note: Content may be edited for style and length.

  • In new step toward quantum tech, scientists synthesize 'bright' quantum bits

    Qubits (short for quantum bits) are often made of the same semiconducting materials as our everyday electronics. But now an interdisciplinary team of chemists and physicists has developed a new method to create tailor-made qubits: by chemically synthesizing molecules that encode quantum information into their magnetic, or ‘spin,’ states. This new bottom-up approach could ultimately lead to quantum systems that have extraordinary flexibility and control, helping pave the way for next-generation quantum technology.

  • Studying trust in autonomous products

    While a certain level of trust is needed for autonomous cars and smart technologies to reach their full potential, these technologies are not infallible — which is why we’re supposed to keep our hands on the wheel of self-driving cars and follow traffic laws even if they contradict our map app’s instructions. Recognizing the significance of trust in devices — and the dangers when there is too much of it — Erin MacDonald, assistant professor of mechanical engineering at Stanford University, researches whether products can be designed to encourage more appropriate levels of trust among consumers.
    In a paper published last month in The Journal of Mechanical Design, MacDonald and Ting Liao, her former graduate student, examined how altering people’s moods influenced their trust in a smart speaker. Their results were so surprising that they conducted the experiment a second time with more participants — but the results didn’t change.
    “We definitely thought that if people were sad, they would be more suspicious of the speaker and if people were happy, they would be more trusting,” said MacDonald, who is a senior author of the paper. “It wasn’t even close to that simple.”
    Overall, the experiments support the notion that a user’s opinion of how well technology performs is the biggest determining factor of whether or not they trust it. However, user trust also differed by age group, gender and education level. The most peculiar result was that, among the people who said the smart speaker met their expectations, participants trusted it more if the researchers had tried to put them in either a positive or a negative mood — participants in the neutral mood group did not exhibit this same elevation in trust.
    “An important takeaway from this research is that negative emotions are not always bad for forming trust. We want to keep this in mind because trust is not always good,” said Liao, who is now an assistant professor at the Stevens Institute of Technology in New Jersey and lead author of the paper.
    Manipulating trust
    Scientific models of interpersonal trustworthiness suggest that our trust in other people can rely on our perceptions of their abilities, but that we also consider whether or not they are caring, objective, fair and honest, among many other characteristics. Beyond the qualities of who you are interacting with, it’s also been shown that our own personal qualities affect our trust in any situation. But when studying how people interact with technology, most research concentrates on the influence of the technology’s performance, overlooking trust and user perception.

    In their new study, MacDonald and Liao decided to address this gap — by studying mood — because previous research has shown that emotional states can affect the perceptions that inform interpersonal trustworthiness, with negative moods generally reducing trust.
    Over the span of two identical experiments, the researchers analyzed the interactions of sixty-three participants with a simulated smart speaker that consisted of a mic and pre-recorded answers hidden under a deactivated smart speaker. Before participants used the speaker, they were surveyed about their feelings regarding their trust of machines and shown images that, according to previous research, would put them in a good or a bad mood, or not alter their mood at all.
    The participants asked the speaker 10 predetermined questions and received 10 prerecorded answers of varying accuracy or helpfulness. After each question, the participant would rate their satisfaction with the answer and report whether it met their expectations. At the end of the study, they described how much they trusted the speaker.
    If participants didn’t think the speaker delivered satisfactory answers, none of the variables measured or manipulated in the experiment — including age, gender, education and mood — changed their minds. However, among participants who said the speaker lived up to their expectations, men and people with less education were more likely to trust the speaker, while people over age 65 were significantly less likely to trust the device. The biggest surprise for the researchers was that, in the group whose expectations were met, mood priming led to increased trust regardless of whether the researchers tried to put them in a good mood or a bad mood. The researchers did not follow up on why this happened, but they noted that existing theory suggests that people may become more tolerant or empathetic with a product when they are emotional.
    “Product designers always try to make people happy when they’re interacting with a product,” said Liao. “This result is quite powerful because it suggests that we should not only focus on positive emotions but that there is a whole emotional spectrum that is worth studying.”
    Proceed with caution

    This research suggests there is a nuanced and complicated relationship between who we are and how we feel about technology. Parsing out the details will take further work, but the researchers emphasize that the issue of trust between humans and autonomous technologies deserves increased attention now more than ever.
    “It bothers me that engineers pretend that they’re neutral to affecting people’s emotions and their decisions and judgments, but everything they design says, ‘Trust me, I’ve got this totally under control. I’m the most high-tech thing you’ve ever used,’” said MacDonald. “We test cars for safety, so why should we not test autonomous cars to determine whether the driver has the right level of knowledge and trust in the vehicle to operate it?”
    As an example of a feature that might better regulate user trust, MacDonald recalled the more visible warnings she saw on navigation apps during the Glass Fire that burned north of Stanford in fall, which instructed people to drive cautiously since the fire may have altered road conditions. Following their findings, the researchers would also like to see design-based solutions that factor in the influence of users’ moods, both good and bad.
    “The ultimate goal is to see whether we can calibrate people’s emotions through design so that, if a product isn’t mature enough or if the environment is complicated, we can adjust their trust appropriately,” said Liao. “That is probably the future in five to 10 years.”
    This research was funded by the Hasso Plattner Design Thinking Research Program.