More stories

  •

    Computational model offers help for new hips

    Rice University engineers hope to make life better for those with replacement joints by modeling how artificial hips are likely to rub them the wrong way.
    The computational study, by the Brown School of Engineering lab of mechanical engineer Fred Higgs, simulates and tracks how artificial hips evolve over time, uniquely incorporating fluid dynamics and the roughness of the joint surfaces along with the factors clinicians typically use to predict how well implants will stand up over their expected 15-year lifetime.
    The team’s immediate goal is to advance the design of more robust prostheses.
    Ultimately, they say the model could help clinicians personalize hip joints for patients depending on gender, weight, age and gait variations.
    Higgs and co-lead authors Nia Christian, a Rice graduate student, and Gagan Srivastava, a mechanical engineering lecturer at Rice and now a research scientist at Dow Chemical, reported their results in Biotribology.
    The researchers saw a need to look beyond the limitations of earlier mechanical studies and standard clinical practices that use simple walking as a baseline to evaluate artificial hips without incorporating higher-impact activities.

    “When we talk to surgeons, they tell us a lot of their decisions are based on their wealth of experience,” Christian said. “But some have expressed a desire for better diagnostic tools to predict how long an implant is going to last.
    “Fifteen years sounds like a long time but if you need to put an artificial hip into someone who’s young and active, you want it to last longer so they don’t have multiple surgeries,” she said.
    Higgs’ Particle Flow and Tribology Lab was invited by Rice mechanical engineer and bioengineer B.J. Fregly to collaborate on his work modeling human motion to improve life for patients with neurologic and orthopedic impairments.
    “He wanted to know if we could predict how long their best candidate hip joints would last,” said Higgs, Rice’s John and Ann Doerr Professor in Mechanical Engineering and a joint professor of Bioengineering, whose own father’s knee replacement partially inspired the study. “So our model uses walking motion of real patients.”
    Physical simulators need to run millions of cycles to predict wear and failure points, and can take months to get results. Higgs’ model seeks to speed up and simplify the process by analyzing real motion capture data like that produced by the Fregly lab along with data from “instrumented” hip implants studied by Georg Bergmann at the Free University of Berlin.

    The new study incorporates the four distinct modes of physics — contact mechanics, fluid dynamics, wear and particle dynamics — at play in hip motion. No previous studies considered all four simultaneously, according to the researchers.
    One issue others didn’t consider was the changing makeup of the lubricant between bones. Natural joints contain synovial fluid, an extracellular liquid with a consistency similar to egg whites and secreted by the synovial membrane, connective tissue that lines the joint. When a hip is replaced, the membrane is preserved and continues to express the fluid.
    “In healthy natural joints, the fluid generates enough pressure so that you don’t have contact, so we all walk without pain,” Higgs said. “But an artificial hip joint generally undergoes partial contact, which increasingly wears and deteriorates your implanted joint over time. We call this kind of rubbing mixed lubrication.”
    That rubbing can lead to increased generation of wear debris, especially from the plastic material — an ultrahigh molecular weight polyethylene — commonly used as the socket (the acetabular cup) in artificial joints. These particles, estimated at up to 5 microns in size, mix with the synovial fluid and can sometimes escape the joint.
    “Eventually, they can loosen the implant or cause the surrounding tissue to break down,” Christian said. “And they often get carried to other parts of the body, where they can cause osteolysis. There’s a lot of debate over where they end up but you want to avoid having them irritate the rest of your body.”
    She noted the use of metal sockets rather than plastic is a topic of interest. “There’s been a strong push toward metal-on-metal hips because metal is durable,” Christian said. “But some of these cause metal shavings to break off. As they build up over time, they seem to be much more damaging than polyethylene particles.”
    Further inspiration for the new study came from two previous works by Higgs and colleagues that had nothing to do with bioengineering. The first looked at chemical mechanical polishing of semiconductor wafers used in integrated circuit manufacturing. The second pushed their predictive modeling from micro-scale to full wafer-scale interfaces.
    The researchers noted that future iterations of the model will incorporate additional novel materials being used in joint replacement.

  •

    Researchers acquire 3D images with LED room lighting and a smartphone

    As LEDs replace traditional lighting systems, they bring more smart capabilities to everyday lighting. While you might use your smartphone to dim LED lighting at home, researchers have taken this further by tapping into dynamically controlled LEDs to create a simple illumination system for 3D imaging.
    “Current video surveillance systems such as the ones used for public transport rely on cameras that provide only 2D information,” said Emma Le Francois, a doctoral student in the research group led by Martin Dawson, Johannes Herrnsdorf and Michael Strain at the University of Strathclyde in the UK. “Our new approach could be used to illuminate different indoor areas to allow better surveillance with 3D images, create a smart work area in a factory, or to give robots a more complete sense of their environment.”
    In The Optical Society (OSA) journal Optics Express, the researchers demonstrate that 3D optical imaging can be performed with a cell phone and LEDs without requiring any complex manual processes to synchronize the camera with the lighting.
    “Deploying a smart-illumination system in an indoor area allows any camera in the room to use the light and retrieve the 3D information from the surrounding environment,” said Le Francois. “LEDs are being explored for a variety of different applications, such as optical communication, visible light positioning and imaging. One day the LED smart-lighting system used for lighting an indoor area might be used for all of these applications at the same time.”
    Illuminating from above
    Human vision relies on the brain to reconstruct depth information when we view a scene from two slightly different directions with our two eyes. Depth information can also be acquired using a method called photometric stereo imaging in which one detector, or camera, is combined with illumination that comes from multiple directions. This lighting setup allows images to be recorded with different shadowing, which can then be used to reconstruct a 3D image.

    Photometric stereo imaging traditionally requires four light sources, such as LEDs, which are deployed symmetrically around the viewing axis of a camera. In the new work, the researchers show that 3D images can also be reconstructed when objects are illuminated from the top down but imaged from the side. This setup allows overhead room lighting to be used for illumination.
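    For intuition, here is a minimal sketch of the classical photometric stereo computation the article describes: given several images of the same scene under known light directions, per-pixel surface normals (and from them, depth) can be recovered with a least-squares fit. The function name and the Lambertian-surface assumption are illustrative only; this is not the Strathclyde group's code.

    ```python
    import numpy as np

    # Minimal photometric-stereo sketch (illustrative only, not the study's code).
    # Assumes a Lambertian surface and K grayscale images taken by a fixed camera
    # under K known light directions: intensity_k ~= albedo * dot(light_k, normal).

    def estimate_normals(images, light_dirs):
        """images: (K, H, W) image stack, one frame per light direction.
        light_dirs: (K, 3) unit vectors pointing from the surface toward each light.
        Returns per-pixel unit normals (H, W, 3) and albedo (H, W)."""
        K, H, W = images.shape
        I = images.reshape(K, -1)                            # (K, H*W) intensities
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # solve L @ G = I, with G = albedo * normal
        albedo = np.linalg.norm(G, axis=0)                   # (H*W,)
        normals = (G / np.maximum(albedo, 1e-8)).T           # (H*W, 3) unit normals
        return normals.reshape(H, W, 3), albedo.reshape(H, W)

    # Depth is then recovered by integrating the gradients implied by the normal
    # field, which is how images with different shadowing become a 3D shape.
    ```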
    In work supported under the UK’s EPSRC ‘Quantic’ research program, the researchers developed algorithms that modulate each LED in a unique way. This acts like a fingerprint that allows the camera to determine which LED generated which image to facilitate the 3D reconstruction. The new modulation approach also carries its own clock signal so that the image acquisition can be self-synchronized with the LEDs by simply using the camera to passively detect the LED clock signal.
    “We wanted to make photometric stereo imaging more easily deployable by removing the link between the light sources and the camera,” said Le Francois. “To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronized with the camera.”
    3D imaging with a smartphone
    To demonstrate this new approach, the researchers used their modulation scheme with a photometric stereo setup based on commercially available LEDs. A simple Arduino board provided the electronic control for the LEDs. Images were captured using the high-speed video mode of a smartphone. They imaged a 48-millimeter-tall figurine that they 3D printed with a matte material to avoid any shiny surfaces that might complicate imaging.
    After identifying the best position for the LEDs and the smartphone, the researchers achieved a reconstruction error of just 2.6 millimeters for the figurine when imaged from 42 centimeters away, showing that the quality of the reconstruction was comparable to that of other photometric stereo imaging approaches. They were also able to reconstruct images of a moving object and showed that the method is not affected by ambient light.
    In the current system, the image reconstruction takes a few minutes on a laptop. To make the system practical, the researchers are working to decrease the computational time to just a few seconds by incorporating a deep-learning neural network that would learn to reconstruct the shape of the object from the raw image data.

    Story Source:
    Materials provided by The Optical Society.

  •

    Computer scientists: We wouldn't be able to control superintelligent machines

    We are fascinated by machines that can control cars, compose symphonies, or defeat people at chess, Go, or Jeopardy! While more progress is being made all the time in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of an uncontrollable superintelligent AI. Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a superintelligent AI. The study was published in the Journal of Artificial Intelligence Research.
    Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently. Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?
    Computer scientists and philosophers have asked themselves whether we would even be able to control a superintelligent AI at all, to ensure it would not pose a threat to humanity. An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI.
    “A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity,” says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development.
    Scientists have explored two different ideas for how a superintelligent AI could be controlled. On the one hand, the capabilities of a superintelligent AI could be specifically limited, for example by walling it off from the Internet and all other technical devices so that it had no contact with the outside world — yet this would render the superintelligent AI significantly less powerful, less able to answer humanity’s quests. Alternatively, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling superintelligent AI have their limits.
    In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by first simulating the AI’s behavior and halting it if considered harmful. But careful analysis shows that, in our current paradigm of computing, such an algorithm cannot be built.
    “If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” says Iyad Rahwan, Director of the Center for Humans and Machines.
    Based on these calculations, the containment problem is incomputable, i.e. no single algorithm can determine whether an AI would cause harm to the world. Furthermore, the researchers demonstrate that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans falls in the same realm as the containment problem.
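    The underlying argument mirrors Turing's halting problem. The sketch below is an illustrative outline of that reduction using hypothetical names (would_harm, simulate, act_harmfully); it is not code from the study, only a compact way to see why a universal containment check cannot exist.

    ```python
    def would_harm(program: str, data: str) -> bool:
        """Hypothetical containment oracle: True iff running `program` on `data`
        would ever take a harmful action. The study argues no such total,
        always-correct procedure can exist."""
        raise NotImplementedError

    def halts(program: str, data: str) -> bool:
        # Reduction: wrap `program` so it behaves harmlessly while running on
        # `data` and performs one "harmful" marker action only after it finishes.
        wrapper = f"simulate({program!r}, {data!r}); act_harmfully()"
        # `would_harm(wrapper, data)` would be True exactly when `program` halts
        # on `data`. Turing showed no algorithm can decide that, so a general
        # `would_harm` cannot be built -- the containment result in miniature.
        return would_harm(wrapper, data)
    ```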

    Story Source:
    Materials provided by Max Planck Institute for Human Development.

  •

    Electrically switchable qubit can tune between storage and fast calculation modes

    To perform calculations, quantum computers need qubits to act as elementary building blocks that process and store information. Now, physicists have produced a new type of qubit that can be switched from a stable idle mode to a fast calculation mode. The concept would also allow a large number of qubits to be combined into a powerful quantum computer, as researchers from the University of Basel and TU Eindhoven have reported in the journal Nature Nanotechnology.
    Compared with conventional bits, quantum bits (qubits) are much more fragile and can lose their information content very quickly. The challenge for quantum computing is therefore to keep the sensitive qubits stable over a prolonged period of time, while at the same time finding ways to perform rapid quantum operations. Now, physicists from the University of Basel and TU Eindhoven have developed a switchable qubit that should allow quantum computers to do both.
    The new type of qubit has a stable but slow state that is suitable for storing quantum information. However, the researchers were also able to switch the qubit into a much faster but less stable manipulation mode by applying an electrical voltage. In this state, the qubits can be used to process information quickly.
    Selective coupling of individual spins
    In their experiment, the researchers created the qubits in the form of “hole spins.” These are formed when an electron is deliberately removed from a semiconductor, and the resulting hole has a spin that can adopt two states, up and down — analogous to the values 0 and 1 in classical bits. In the new type of qubit, these spins can be selectively coupled — via a photon, for example — to other spins by tuning their resonant frequencies.
    This capability is vital, since the construction of a powerful quantum computer requires the ability to selectively control and interconnect many individual qubits. Scalability is particularly necessary to reduce the error rate in quantum calculations.
    Ultrafast spin manipulation
    The researchers were also able to use the electrical switch to manipulate the spin qubits at record speed. “The spin can be coherently flipped from up to down in as little as a nanosecond,” says project leader Professor Dominik Zumbühl from the Department of Physics at the University of Basel. “That would allow up to a billion switches per second. Spin qubit technology is therefore already approaching the clock speeds of today’s conventional computers.”
    For their experiments, the researchers used a semiconductor nanowire made of silicon and germanium. Produced at TU Eindhoven, the wire has a tiny diameter of about 20 nanometers. As the qubit is therefore also extremely small, it should in principle be possible to incorporate millions or even billions of these qubits onto a chip.

    Story Source:
    Materials provided by University of Basel.

  •

    Engineers create hybrid chips with processors and memory to run AI on battery-powered devices

    Smartwatches and other battery-powered electronics would be even smarter if they could run AI algorithms. But efforts to build AI-capable chips for mobile devices have so far hit a wall — the so-called “memory wall” that separates data processing and memory chips that must work together to meet the massive and continually growing computational demands imposed by AI.
    “Transactions between processors and memory can consume 95 percent of the energy needed to do machine learning and AI, and that severely limits battery life,” said computer scientist Subhasish Mitra, senior author of a new study published in Nature Electronics.
    Now, a team that includes Stanford computer scientist Mary Wootters and electrical engineer H.-S. Philip Wong has designed a system that can run AI tasks faster, and with less energy, by harnessing eight hybrid chips, each with its own data processor built right next to its own memory storage.
    This paper builds on the team’s prior development of a new memory technology, called RRAM, that stores data even when power is switched off — like flash memory — only faster and more energy efficiently. Their RRAM advance enabled the Stanford researchers to develop an earlier generation of hybrid chips that worked alone. Their latest design incorporates a critical new element: algorithms that meld the eight, separate hybrid chips into one energy-efficient AI-processing engine.
    “If we could have built one massive, conventional chip with all the processing and memory needed, we’d have done so, but the amount of data it takes to solve AI problems makes that a dream,” Mitra said. “Instead, we trick the hybrids into thinking they’re one chip, which is why we call this the Illusion System.”
    The researchers developed Illusion as part of the Electronics Resurgence Initiative (ERI), a $1.5 billion program sponsored by the Defense Advanced Research Projects Agency. DARPA, which helped spawn the internet more than 50 years ago, is supporting research investigating workarounds to Moore’s Law, which has driven electronic advances by shrinking transistors. But transistors can’t keep shrinking forever.
    “To surpass the limits of conventional electronics, we’ll need new hardware technologies and new ideas about how to use them,” Wootters said.
    The Stanford-led team built and tested its prototype with help from collaborators at the French research institute CEA-Leti and at Nanyang Technological University in Singapore. The team’s eight-chip system is just the beginning. In simulations, the researchers showed how systems with 64 hybrid chips could run AI applications seven times faster than current processors, using one-seventh as much energy.
    Such capabilities could one day enable Illusion Systems to become the brains of augmented and virtual reality glasses that would use deep neural networks to learn by spotting objects and people in the environment, and provide wearers with contextual information — imagine an AR/VR system to help birdwatchers identify unknown specimens.
    Stanford graduate student Robert Radway, who is first author of the Nature Electronics study, said the team also developed new algorithms to recompile existing AI programs, written for today’s processors, to run on the new multi-chip systems. Collaborators from Facebook helped the team test AI programs that validated their efforts. Next steps include increasing the processing and memory capabilities of individual hybrid chips and demonstrating how to mass produce them cheaply.
    “The fact that our fabricated prototype is working as we expected suggests we’re on the right track,” said Wong, who believes Illusion Systems could be market-ready within three to five years.
    This research was supported by the Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, the Semiconductor Research Corporation, the Stanford SystemX Alliance and Intel Corporation.

    Story Source:
    Materials provided by Stanford School of Engineering. Original written by Tom Abate.

  •

    Robot displays a glimmer of empathy to a partner robot

    Like a longtime couple who can predict each other’s every move, a Columbia Engineering robot has learned to predict its partner robot’s future actions and goals based on just a few initial video frames.
    When we primates are cooped up together for a long time, we quickly learn to predict the near-term actions of our roommates, co-workers or family members. Our ability to anticipate the actions of others makes it easier for us to successfully live and work together. In contrast, even the most intelligent and advanced robots have remained notoriously inept at this sort of social communication. This may be about to change.
    The study, conducted at Columbia Engineering’s Creative Machines Lab led by Mechanical Engineering Professor Hod Lipson, is part of a broader effort to endow robots with the ability to understand and anticipate the goals of other robots, purely from visual observations.
    The researchers first built a robot and placed it in a playpen roughly 3×2 feet in size. They programmed the robot to seek and move towards any green circle it could see. But there was a catch: Sometimes the robot could see a green circle in its camera and move directly towards it. But other times, the green circle would be occluded by a tall red cardboard box, in which case the robot would move towards a different green circle, or not move at all.
    After observing its partner putter around for two hours, the observing robot began to anticipate its partner’s goal and path, eventually predicting them correctly 98 times out of 100 across varying situations — without being told explicitly about the partner’s visibility handicap.
    “Our initial results are very exciting,” says Boyuan Chen, lead author of the study, which was conducted in collaboration with Carl Vondrick, assistant professor of computer science, and published today by Nature Scientific Reports. “Our findings begin to demonstrate how robots can see the world from another robot’s perspective. The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy.”
    When they designed the experiment, the researchers expected that the Observer Robot would learn to make predictions about the Subject Robot’s near-term actions. What the researchers didn’t expect, however, was how accurately the Observer Robot could foresee its colleague’s future “moves” with only a few seconds of video as a cue.
    The researchers acknowledge that the behaviors exhibited by the robot in this study are far simpler than the behaviors and goals of humans. They believe, however, that this may be the beginning of endowing robots with what cognitive scientists call “Theory of Mind” (ToM). At about age three, children begin to understand that others may have different goals, needs and perspectives than they do. This can lead to playful activities such as hide and seek, as well as more sophisticated manipulations like lying. More broadly, ToM is recognized as a key distinguishing hallmark of human and primate cognition, and a factor that is essential for complex and adaptive social interactions such as cooperation, competition, empathy, and deception.
    In addition, humans are still better than robots at describing their predictions using verbal language. The researchers had the observing robot make its predictions in the form of images, rather than words, in order to avoid becoming entangled in the thorny challenges of human language. Yet, Lipson speculates, the ability of a robot to predict future actions visually is not unique: “We humans also think visually sometimes. We frequently imagine the future in our mind’s eyes, not in words.”
    Lipson acknowledges that there are many ethical questions. The technology will make robots more resilient and useful, but when robots can anticipate how humans think, they may also learn to manipulate those thoughts.
    “We recognize that robots aren’t going to remain passive instruction-following machines for long,” Lipson says. “Like other forms of advanced AI, we hope that policymakers can help keep this kind of technology in check, so that we can all benefit.”

  •

    Ocean acidification may make some species glow brighter

    A more acidic ocean could give some species a glow-up.
    As the pH of the ocean decreases as a result of climate change, some bioluminescent organisms might get brighter, while others see their lights dim, scientists report January 2 at the virtual annual meeting of the Society for Integrative and Comparative Biology.
    Bioluminescence is de rigueur in parts of the ocean (SN: 5/19/20). The ability to light the dark has evolved more than 90 times in different species. As a result, the chemical structures that create bioluminescence vary wildly — from single chains of atoms to massive ringed complexes.
    With such variability, changes in pH could have unpredictable effects on creatures’ ability to glow (SN: 7/6/10). If fossil fuel emissions continue as they are, average ocean pH is expected to drop from 8.1 to 7.7 by 2100. To find out how bioluminescence might be affected by that decrease, sensory biologist Tom Iwanicki and colleagues at the University of Hawaii at Manoa gathered 49 studies on bioluminescence across nine different phyla. The team then analyzed data from those studies to see how the brightness of the creatures’ bioluminescent compounds varied at pH levels from 8.1 to 7.7.

    As pH drops, the bioluminescent chemicals in some species, such as the sea pansy (Renilla reniformis), increase light production twofold, the data showed. Other compounds, such as those in the sea firefly (Vargula hilgendorfii), have modest increases of only about 20 percent. And some species, like the firefly squid (Watasenia scintillans), actually appear to have a 70 percent decrease in light production.
    For the sea firefly — which uses glowing trails to attract mates — a small increase could give it a sexy advantage. But for the firefly squid — which also uses luminescence for communication — low pH and less light might not be a good thing.
    Because the work was an analysis of previously published research, “I’m interpreting this as a first step, not a definitive result,” says Karen Chan, a marine biologist at Swarthmore College in Pennsylvania who wasn’t involved in the study. It “provides [a] testable hypothesis that we should … look into.”
    The next step is definitely testing, Iwanicki agrees. Most of the analyzed studies took the luminescing chemicals out of an organism to test them. Finding out how the compounds function in creatures in the ocean will be key. “Throughout our oceans, upward of 75 percent of visible critters are capable of bioluminescence,” Iwanicki says. “When we’re wholescale changing the conditions in which they can use that [ability] … that’ll have a world of impacts.”

  •

    New statistical method exponentially increases ability to discover genetic insights

    Pleiotropy analysis, which provides insight on how individual genes result in multiple characteristics, has become increasingly valuable as medicine continues to lean into mining genetics to inform disease treatments. Privacy stipulations, though, make it difficult to perform comprehensive pleiotropy analysis because individual patient data often can’t be easily and regularly shared between sites. However, a statistical method called Sum-Share, developed at Penn Medicine, can pull summary information from many different sites to generate significant insights. In a test of the method, published in Nature Communications, Sum-Share’s developers were able to detect more than 1,700 DNA-level variations that could be associated with five different cardiovascular conditions. If patient-specific information from just one site had been used, as is the norm now, only one variation would have been determined.
    “Full research of pleiotropy has been difficult to accomplish because of restrictions on merging patient data from electronic health records at different sites, but we were able to figure out a method that turns summary-level data into results that are exponentially greater than what we could accomplish with individual-level data currently available,” said one of the study’s senior authors, Jason Moore, PhD, director of the Institute for Biomedical Informatics and a professor of Biostatistics, Epidemiology and Informatics. “With Sum-Share, we greatly increase our abilities to unveil the genetic factors behind health conditions that range from those dealing with heart health, as was the case in this study, to mental health, with many different applications in between.”
    Sum-Share is powered by bio-banks that pool de-identified patient data, including genetic information, from electronic health records (EHRs) for research purposes. For their study, Moore, co-senior author Yong Chen, PhD, an associate professor of Biostatistics, lead author Ruowang Li, PhD, a post-doc fellow at Penn, and their colleagues used eMERGE to pull seven different sets of EHRs to run through Sum-Share in an attempt to detect the genetic effects between five cardiovascular-related conditions: obesity, hypothyroidism, type 2 diabetes, hypercholesterolemia, and hyperlipidemia.
    With Sum-Share, the researchers found 1,734 different single-nucleotide polymorphisms (SNPs, which are differences in the building blocks of DNA) that could be tied to the five conditions. Then, using results from just one site’s EHR, only one SNP was identified that could be tied to the conditions.
    Additionally, they determined that their findings were identical whether they used summary-level data or individual-level data in Sum-Share, making it a “lossless” system.
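    Sum-Share's exact lossless algorithm is detailed in the paper; to get a feel for how summary-level statistics can be pooled across sites without moving patient-level data, here is a generic fixed-effect (inverse-variance) combination sketch. The function name and the three example sites are hypothetical and only illustrate the general idea, not Sum-Share itself.

    ```python
    import numpy as np

    # Generic illustration of pooling per-site summary statistics for one SNP
    # (inverse-variance, fixed-effect combination). Shown for intuition only;
    # this is not Sum-Share's actual algorithm.

    def combine_sites(betas, standard_errors):
        """betas, standard_errors: per-site effect estimates and their standard errors.
        Each site computes these locally; only the summaries are shared."""
        betas = np.asarray(betas, dtype=float)
        weights = 1.0 / np.asarray(standard_errors, dtype=float) ** 2
        pooled_beta = np.sum(weights * betas) / np.sum(weights)
        pooled_se = np.sqrt(1.0 / np.sum(weights))
        z = pooled_beta / pooled_se          # test statistic for the pooled effect
        return pooled_beta, pooled_se, z

    # Three hypothetical sites contributing summaries for the same variant:
    print(combine_sites([0.12, 0.08, 0.15], [0.05, 0.07, 0.06]))
    ```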
    To determine the effectiveness of Sum-Share, the team then compared their method’s results with those of the previous leading method, PheWAS, which operates best when it pulls whatever individual-level data has been made available from different EHRs. When the two were put on a level playing field, with both using individual-level data, Sum-Share was statistically determined to be more powerful in its findings than PheWAS. And since Sum-Share’s summary-level findings have been shown to be as insightful as those it produces with individual-level data, it appears to be the best method for determining genetic characteristics.
    “This was notable because Sum-Share enables loss-less data integration, while PheWAS loses some information when integrating information from multiple sites,” Li explained. “Sum-Share can also reduce the multiple hypothesis testing penalties by jointly modeling different characteristics at once.”
    Currently, Sum-Share is mainly designed to be used as a research tool, but there are possibilities for using its insights to improve clinical operations. And, moving forward, there is a chance to use it for some of the most pressing needs facing health care today.
    “Sum-Share could be used for COVID-19 with research consortia, such as the Consortium for Clinical Characterization of COVID-19 by EHR (4CE),” Chen said. “These efforts use a federated approach where the data stay local to preserve privacy.”
    This study was supported by the National Institutes of Health (grant number NIH LM010098).
    Co-authors on the study include Rui Duan, Xinyuan Zhang, Thomas Lumley, Sarah Pendergrass, Christopher Bauer, Hakon Hakonarson, David S. Carrell, Jordan W. Smoller, Wei-Qi Wei, Robert Carroll, Digna R. Velez Edwards, Georgia Wiesner, Patrick Sleiman, Josh C. Denny, Jonathan D. Mosley, and Marylyn D. Ritchie.