More stories

  •

    Optical wiring for large quantum computers

    Hitting a specific point on a screen with a laser pointer during a presentation isn’t easy — even the tiniest nervous shaking of the hand becomes one big scrawl at a distance. Now imagine having to do that with several laser pointers at once. That is exactly the problem faced by physicists who try to build quantum computers using individual trapped atoms. They, too, need to aim laser beams — hundreds or even thousands of them in the same apparatus — precisely over several metres so as to hit regions only a few micrometres in size that contain the atoms. Any unwanted vibration will severely disturb the operation of the quantum computer.
    At ETH in Zurich, Jonathan Home and his co-workers at the Institute for Quantum Electronics have now demonstrated a new method that allows them to deliver multiple laser beams precisely to the right locations from within a chip in such a stable manner that even the most delicate quantum operations on the atoms can be carried out.
    Aiming for the quantum computer
    Building quantum computers has been an ambitious goal of physicists for more than thirty years. Electrically charged atoms — ions — trapped in electric fields have turned out to be ideal candidates for the quantum bits, or qubits, which quantum computers use for their calculations. So far, small quantum computers containing around a dozen qubits have been realized in this way. “However, if you want to build quantum computers with several thousand qubits, which will probably be necessary for practically relevant applications, current implementations present some major hurdles,” says Karan Mehta, a postdoc in Home’s laboratory and first author of the study recently published in the scientific journal “Nature.” Essentially, the problem is how to send laser beams over several metres from the laser source into a vacuum apparatus and eventually hit the bull’s eye inside a cryostat, in which the ion traps are cooled down to just a few degrees above absolute zero in order to minimize thermal disturbances.
    Optical setup as an obstacle
    “Already in current small-scale systems, conventional optics are a significant source of noise and errors — and that gets much harder to manage when trying to scale up,” Mehta explains. The more qubits one adds, the more complex the optics needed to control them becomes. “This is where our approach comes in,” adds Chi Zhang, a PhD student in Home’s group: “By integrating tiny waveguides into the chips that contain the electrodes for trapping the ions, we can send the light directly to those ions. In this way, vibrations of the cryostat or other parts of the apparatus produce far less disturbance.”
    The researchers commissioned a commercial foundry to produce chips which contain both gold electrodes for the ion traps and, in a deeper layer, waveguides for laser light. At one end of the chips, optical fibres feed the light into the waveguides, which are only 100 nanometres thick, effectively forming optical wiring within the chips. Each of those waveguides leads to a specific point on the chip, where the light is eventually deflected towards the trapped ions on the surface.
    Work from a few years ago (by some of the authors of the present study, together with researchers at MIT and MIT Lincoln Laboratory) had demonstrated that this approach works in principle. Now the ETH group has developed and refined the technique to the point where it is also possible to use it for implementing low-error quantum logic gates between different atoms, an important prerequisite for building quantum computers.
    High-fidelity logic gates
    In a conventional computer chip, logic gates are used to carry out logic operations such as AND or NOR. To build a quantum computer, one has to make sure that it can carry out such logic operations on the qubits. The problem with this is that logic gates acting on two or more qubits are particularly sensitive to disturbances. This is because they create fragile quantum mechanical states in which two ions are simultaneously in a superposition, also known as entangled states.
    In such a superposition, a measurement of one ion influences the result of a measurement on the other ion, without the two being in direct contact. How well the production of those superposition states works, and thus how good the logic gates are, is expressed by the so-called fidelity. “With the new chip we were able to carry out two-qubit logic gates and use them to produce entangled states with a fidelity that up to now could only be achieved in the very best conventional experiments,” says Maciej Malinowski, who was also involved in the experiment as a PhD student.
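    To make the idea of fidelity concrete, here is a minimal sketch (not the ETH team’s actual analysis) that builds the ideal two-qubit entangled Bell state in NumPy, mixes in a small amount of illustrative depolarizing noise, and computes the fidelity of the noisy state against the ideal one; the 1% noise level is an assumption chosen purely for the example.

    ```python
    import numpy as np

    # Ideal two-qubit Bell state |Phi+> = (|00> + |11>) / sqrt(2)
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    rho_ideal = np.outer(bell, bell.conj())

    # A noisy preparation: the ideal state mixed with a little white noise
    # (depolarizing error with probability p) -- illustrative numbers only.
    p = 0.01
    rho_noisy = (1 - p) * rho_ideal + p * np.eye(4) / 4

    # For a pure target state, the fidelity reduces to F = <psi|rho|psi>.
    fidelity = np.real(bell.conj() @ rho_noisy @ bell)
    print(f"Fidelity with the ideal Bell state: {fidelity:.4f}")  # ~0.9925
    ```

    A fidelity of 1 would mean the entangled state was produced perfectly; experiments are judged by how close they come to that value.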
    The researchers have thus shown that their approach is interesting for future ion trap quantum computers as it is not just extremely stable, but also scalable. They are currently working with different chips that are intended to control up to ten qubits at a time. Furthermore, they are pursuing new designs for fast and precise quantum operations that are made possible by the optical wiring.

    Story Source:
    Materials provided by ETH Zurich. Original written by Oliver Morsch. Note: Content may be edited for style and length.

  •

    Analyzing web searches can help experts predict, respond to COVID-19 hot spots

    Web-based analytics have demonstrated their value in predicting the spread of infectious disease, and a new study from Mayo Clinic indicates the value of analyzing Google web searches for keywords related to COVID-19.
    Strong correlations were found between keyword searches on the internet search engine Google Trends and COVID-19 outbreaks in parts of the U.S., according to a study published in Mayo Clinic Proceedings. These correlations were observed up to 16 days prior to the first reported cases in some states.
    “Our study demonstrates that there is information present in Google Trends that precedes outbreaks, and with predictive analysis, this data can be used for better allocating resources with regards to testing, personal protective equipment, medications and more,” says Mohamad Bydon, M.D., a Mayo Clinic neurosurgeon and principal investigator at Mayo’s Neuro-Informatics Laboratory.
    “The Neuro-Informatics team is focused on analytics for neural diseases and neuroscience. However, when the novel coronavirus emerged, my team and I directed resources toward better understanding and tracking the spread of the pandemic,” says Dr. Bydon, the study’s senior author. “Looking at Google Trends data, we found that we were able to identify predictors of hot spots, using keywords, that would emerge over a six-week timeline.”
    Several studies have noted the role of internet surveillance in early prediction of previous outbreaks such as H1N1 and Middle East respiratory syndrome. There are several benefits to using internet surveillance methods versus traditional methods, and this study says a combination of the two methods is likely the key to effective surveillance.
    The study tracked 10 keywords, chosen based on how commonly they were used and on emerging search patterns on the internet and in Google News at the time.


    The keywords were:
    COVID symptoms
    Coronavirus symptoms
    Sore throat+shortness of breath+fatigue+cough
    Coronavirus testing center
    Loss of smell
    Lysol
    Antibody
    Face mask
    Coronavirus vaccine
    COVID stimulus check
    Most of the keywords had moderate to strong correlations days before the first COVID-19 cases were reported in specific areas, with diminishing correlations following the first case.
    “Each of these keywords had varying strengths of correlation with case numbers,” says Dr. Bydon. “If we had looked at 100 keywords, we may have found even stronger correlations to cases. As the pandemic progresses, people will search for new and different information, so the search terms also need to evolve.”
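    The study’s full methodology is not reproduced here, but its core computation, correlating search interest with case counts reported some days later, can be sketched in a few lines of Python with pandas. The file names, column names and lag range below are illustrative assumptions rather than the study’s actual data.

    ```python
    import pandas as pd

    # Illustrative inputs: daily Google Trends interest for one keyword in one
    # state, and daily reported COVID-19 case counts for the same state.
    # The file and column names ("date", "interest", "cases") are assumptions.
    trends = pd.read_csv("covid_symptoms_trends.csv", parse_dates=["date"])
    cases = pd.read_csv("state_cases.csv", parse_dates=["date"])
    df = trends.merge(cases, on="date").set_index("date")

    # Correlate today's search interest with cases reported `lag` days later;
    # the study observed correlations up to about 16 days ahead of outbreaks.
    for lag in range(0, 17):
        r = df["interest"].corr(df["cases"].shift(-lag))
        print(f"lead time {lag:2d} days: Pearson r = {r:.2f}")
    ```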
    The use of web search surveillance data is important as an adjunct for data science teams who are attempting to predict outbreaks and new hot spots in a pandemic. “Any delay in information could lead to missed opportunities to improve preparedness for an outbreak in a certain location,” says Dr. Bydon.
    Traditional surveillance, including widespread testing and public health reporting, can lag behind the incidence of infectious disease. The need for more testing, and more rapid and accurate testing, is paramount. Delayed or incomplete reporting of results can lead to inaccuracies when data is released and public health decisions are being made.
    “If you wait for the hot spots to emerge in the news media coverage, it will be too late to respond effectively,” Dr. Bydon says. “In terms of national preparedness, this is a great way of helping to understand where future hot spots will emerge.”
    Mayo Clinic recently introduced an interactive COVID-19 tracking tool that reports the latest data for every county in all 50 states, and in Washington, D.C., with insight on how to assess risk and plan accordingly. “Adding variables such as Google Trends data from Dr. Bydon’s team, as well as other leading indicators, has greatly enhanced our ability to forecast surges, plateaus and declines of cases across regions of the country,” says Henry Ting, M.D., Mayo Clinic’s chief value officer.
    Dr. Ting worked with Mayo Clinic data scientists to develop content sources, validate information and correlate expertise for the tracking tool, which is in Mayo’s COVID-19 resource center on mayoclinic.org.
    The study was conducted in collaboration with the Mayo Clinic Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery. The authors report no conflicts of interest.

  •

    Novel method for measuring spatial dependencies turns less data into more data

    The identification of human migration driven by climate change, the spread of COVID-19, agricultural trends, and socioeconomic problems in neighboring regions depends on data — the more complex the model, the more data is required to understand such spatially distributed phenomena. However, reliable data is often expensive and difficult to obtain, or too sparse to allow for accurate predictions.
    Maurizio Porfiri, Institute Professor of mechanical and aerospace, biomedical, and civil and urban engineering and a member of the Center for Urban Science and Progress (CUSP) at the NYU Tandon School of Engineering, devised a novel solution based on network and information theory that makes “little data” act big by applying mathematical techniques normally used for time series to spatial processes.
    The study, “An information-theoretic approach to study spatial dependencies in small datasets,” featured on the cover of Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, describes how, from a small sample of attributes in a limited number of locations, observers can make robust inferences of influences, including interpolations to intermediate areas or even distant regions that share similar key attributes.
    “Most of the time the data sets are poor,” Porfiri explained. “Therefore, we took a very basic approach, applying information theory to explore whether influence in the temporal sense could be extended to space, which allows us to work with a very small data set, between 25 and 50 observations,” he said. “We are taking one snapshot of the data and drawing connections — not based on cause-and-effect, but on interaction between the individual points — to see if there is some form of underlying, collective response in the system.”
    The method, developed by Porfiri and collaborator Manuel Ruiz Marín of the Department of Quantitative Methods, Law and Modern Languages, Technical University of Cartagena, Spain, involved:
    Consolidating a given data set into a small range of admissible symbols, similar to the way a machine learning system can identify a face with limited pixel data: a chin, cheekbones, forehead, etc.
    Applying an information-theory principle to create a test that is non-parametric (one that assumes no underlying model for the interaction between locations) to draw associations between events and to discover whether uncertainty at a particular location is reduced if one has knowledge about the uncertainty in another location.
    Porfiri explained that since a non-parametric approach posits no underlying structure for the influences between nodes, it confers flexibility in how nodes can be associated, or even how the concept of a neighbor is defined.
    “Because we abstract this concept of a neighbor, we can define it in the context of any quality that you like, for example, ideology. Ideologically, California can be a neighbor of New York, though they are not geographically co-located. They may share similar values.”
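    The paper’s exact estimator is not spelled out in this article, so the Python sketch below only illustrates the two ingredients described above: collapsing a small set of observations into a coarse symbol alphabet, and using mutual information with a permutation test (a non-parametric check) to ask whether knowing the value at one location reduces uncertainty about the value at its designated “neighbor,” however that neighbor is defined. The sample size, number of symbols and toy dependence are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def symbolize(values, n_symbols=3):
        """Map continuous observations to a small alphabet (e.g. low/mid/high)."""
        cuts = np.quantile(values, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(values, cuts)

    def mutual_information(x, y):
        """Plug-in mutual information between two symbol sequences (in nats)."""
        joint = np.zeros((x.max() + 1, y.max() + 1))
        for a, b in zip(x, y):
            joint[a, b] += 1
        joint /= joint.sum()
        px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

    # ~50 locations, each with an observed value and the value at a designated
    # "neighbor" (geographic or ideological, as discussed above).
    values = rng.normal(size=50)
    neighbor_values = 0.7 * values + 0.3 * rng.normal(size=50)  # toy dependence

    x, y = symbolize(values), symbolize(neighbor_values)
    mi_obs = mutual_information(x, y)

    # Non-parametric permutation test: shuffle one side to break any dependence.
    null = [mutual_information(rng.permutation(x), y) for _ in range(999)]
    p_value = (np.sum(np.array(null) >= mi_obs) + 1) / 1000
    print(f"MI = {mi_obs:.3f} nats, permutation p-value = {p_value:.3f}")
    ```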
    The team validated the system against two case studies: population migrations in Bangladesh due to sea level rise and motor vehicle deaths in the U.S., to derive a statistically principled insight into the mechanisms of important socioeconomic problems.
    “In the first case, we wanted to see if migration between locations could be predicted by geographic distance or the severity of the inundation of that particular district — whether knowledge of which district is close to another district or knowledge of the level of flooding will help predict the size of migration,” said Ruiz Marín.
    For the second case, they looked at the spatial distribution of alcohol-related automobile accidents in 1980, 1994, and 2009, comparing states with a high degree of such accidents to adjacent states and to states with similar legislative ideologies about drinking and driving.
    “We discovered a stronger relationship between states sharing borders than between states sharing legislative ideologies pertaining to alcohol consumption and driving.”
    Next, Porfiri and Ruiz Marín are planning to extend their method to the analysis of spatio-temporal processes, such as gun violence in the U.S. — a major research project recently funded by the National Science Foundation’s LEAP HI program — or epileptic seizures in the brain. Their work could help researchers understand when and where gun violence may happen or seizures may begin.

  •

    Hidden states of the COVID-19 spike protein

    The virus wreaking havoc on our lives is an efficient infection machine. Composed of only 29 proteins (compared to our 400,000), with a genome 1/200,000 the size of ours, SARS-CoV-2 is expertly evolved to trick our cells into contributing their machinery to assist in its propagation.
    In the last few months, scientists have learned a great deal about the mechanics of this mindless enemy. But what we’ve learned still pales in comparison to what we don’t know.
    There are a number of ways scientists uncover the workings of a virus. Only by using these methods in tandem can we find and exploit the coronavirus’s weak spots, says Ahmet Yildiz, associate professor of Physics and Molecular Cell Biology at the University of California, Berkeley.
    Yildiz and his collaborator Mert Gur at Istanbul Technical University are combining supercomputer-powered molecular dynamics simulations with single molecule experiments to uncover the secrets of the virus. In particular, they are studying its spike (S) protein, the part of the virus that binds to human cells and begins the process of inserting viral RNA into the cell.
    “Many groups are attacking different stages of this process,” Gur said. “Our initial goal is to use molecular dynamics simulations to identify the processes that happen when the virus binds to the host cell.”
    There are three critical phases that allow the spike protein to break into the cell and begin replicating, Yildiz says.


    First, the spike protein needs to transform from a closed configuration to an open one. Second, the spike protein binds to its receptor on the outside of our cells. This binding triggers a conformational change within the spike protein and allows another human protein to cleave the spike. Finally, the newly exposed surface of the spike interacts with the host cell membrane and enables the viral RNA to enter and hijack the cell.
    In early February, electron microscope images revealed the structure of the spike protein. But the snapshots only showed the main configurations that the protein takes, not the transitional, in-between steps. “We only see snapshots of stable conformations,” Yildiz said. “Because we don’t know the timing of events that allow the protein to go from one stable conformation to the next one, we don’t yet know those intermediary conformations.”
    That’s where computer modeling comes in. The microscope images provide a useful starting point to create models of every atom in the protein, and its environment (water, ions, and the receptors of the cell). From there, Yildiz and Gur set the protein in motion and watched to see what happened.
    “We showed that the S protein visits an intermediate state before it can dock to the receptor protein on the host cell membrane,” Gur said. “This intermediate state can be useful for drug targeting to prevent the S protein from initiating viral infection.”
    Whereas many other groups around the world are probing the binding pocket of the virus, hoping to find a drug that can block the virus from latching onto human cells, Yildiz and Gur are taking a more nuanced approach.


    “The spike protein strongly binds to its receptor with a complex interaction network,” Yildiz explained. “We showed that if you just break one of those interactions, you still won’t be able to stop the binding. That’s why some of the basic drug development studies may not produce the desired outcomes.”
    But if it’s possible to prevent the spike protein from going from a closed to open state — or a third, in-between state that we’re not even aware of to the open state — that might lend itself to a treatment.
    Find, and Break, the Important Bonds
    The second use of computer simulations by Yildiz and Gur identified not just new states, but the specific amino acids that stabilize each state.
    “If we can determine the important linkages at the single amino acid level — which interactions stabilize and are critical for these conformations — it may be possible to target those states with small molecules,” Yildiz said.
    Simulating this behavior at the level of the atom or individual amino acid is incredibly computationally intensive. Yildiz and Gur were granted time on the Stampede2 supercomputer at the Texas Advanced Computing Center (TACC) — the second fastest supercomputer at a U.S. university and the 19th fastest overall — through the COVID-19 HPC Consortium. Simulating one microsecond of the virus and its interactions with human cells — roughly one million atoms in total — takes weeks on a supercomputer…and would take years without one.
    “It’s a computationally demanding process,” Yildiz said. “But the predictive power of this approach is very powerful.”
    Yildiz and Gur’s team, along with approximately 40 other research groups studying COVID-19, has been given priority access to TACC systems. “We’re not limited by the speed at which the simulations happen, so there’s a real-time race between our ability to run simulations and analyze the data.”
    With time of the essence, Gur and his collaborators have churned through calculations, re-enacting the atomic peregrinations of the spike protein as it approaches, binds to, and interacts with Angiotensin-converting enzyme 2 (ACE2) receptors — proteins that line the surface of many cell types.
    Their initial findings, which used all-atom molecular dynamics (MD) simulations to propose the existence of an intermediate semi-open state of the S protein compatible with receptor-binding domain (RBD)-ACE2 binding, were published in the Journal of Chemical Physics.
    Furthermore, by performing all-atom MD simulations, they identified an extended network of salt bridges, hydrophobic and electrostatic interactions, and hydrogen bonding between the receptor-binding domain of the spike protein and ACE2. These findings were released on bioRxiv.
    Mutating the residues on the receptor-binding domain was not sufficient to destabilize binding but reduced the average work to unbind the spike protein from ACE2. They propose that blocking this site via a neutralizing antibody or nanobody could prove an effective strategy to inhibit spike protein-ACE2 interactions.
    In order to confirm that the computer-derived insights are accurate, Yildiz’s team performed lab experiments using single molecule fluorescence resonance energy transfer (or smFRET) — a biophysical technique used to measure distances at the one to 10 nanometer scale in single molecules.
    “The technique allows us to see the conformational changes of the protein by measuring the energy transfer between two light emitting probes,” Yildiz said.
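    The distance sensitivity behind smFRET comes from the steep dependence of the transfer efficiency on the probe separation r, commonly written as E = 1 / (1 + (r/R0)^6), where R0 is the Förster radius of the dye pair (typically a few nanometres). The short sketch below, which assumes an illustrative R0 of 6 nm, shows how sharply the efficiency changes across the one to 10 nanometer range mentioned above.

    ```python
    import numpy as np

    def fret_efficiency(r_nm, r0_nm=6.0):
        """FRET efficiency E = 1 / (1 + (r/R0)^6); R0 = 6 nm is an assumed value."""
        return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

    for r in [2, 4, 6, 8, 10]:  # donor-acceptor separations in nanometres
        print(f"r = {r:2d} nm -> E = {fret_efficiency(r):.3f}")
    # Efficiency runs from ~0.999 at 2 nm down to ~0.045 at 10 nm, which is
    # why smFRET resolves conformational changes on this length scale.
    ```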
    Though scientists still don’t have a technique to see the atomic details of molecules in motion in real-time, the combination of electron microscopy, single molecule imaging, and computer simulations can provide researchers with a rich picture of the virus’ behavior, Yildiz says.
    “We can get atomic resolution snapshots of frozen molecules using electron microscopy. We can get atomic level simulations of the protein in motion using molecular dynamics on a short time scale. And using single-molecule techniques we can derive the dynamics that are missing from electron microscopy and the simulations,” Yildiz concluded. “Combining these methods gives us the full picture and dissects the mechanism of the virus entering the host cell.”

  •

    Mass screening method could slash COVID-19 testing costs, trial finds

    Using a new mathematical approach to screen large groups for Covid-19 could be around 20 times cheaper than individual testing, a study suggests.
    Applying a recently created algorithm to test multiple samples in one go reduces the total number of tests needed, lowering the cost of screening large populations for Covid-19, researchers say.
    This novel approach will make it easier to spot outbreaks early on. Initial research shows it is highly effective at identifying positive cases when most of the population is negative.
    A team of researchers, including a theoretical physicist from the University of Edinburgh, developed the method — called the hypercube algorithm — and conducted the first field trials in Africa.
    Tiny quantities taken from individual swabs were mixed to create combined samples and then tested. The team showed that a single positive case could still be detected even when mixed with 99 negative swab results.
    If this initial test highlighted that the mixed sample contained positive cases, then researchers used the algorithm to design a further series of tests. This enabled them to pinpoint individual positive swab results within the combined sample, making it easy to identify people who are infected.
    If the initial test results indicated that there were no positive cases in the mixed sample, then no follow-up action was needed.
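    The article does not spell out the hypercube construction itself, so the sketch below only illustrates the general pooled-screening principle it builds on: test mixed samples first and follow up only on pools that come back positive. The pool size, prevalence and the simple individual retesting stage are illustrative assumptions; the actual algorithm arranges samples on a hypercube lattice so that positives can be pinpointed with far fewer follow-up tests.

    ```python
    import random

    random.seed(1)

    def screen(population, pool_size):
        """Two-stage pooled screening: test pools, then retest members of
        positive pools individually. Returns (positives_found, tests_used)."""
        tests, positives = 0, []
        for start in range(0, len(population), pool_size):
            pool = population[start:start + pool_size]
            tests += 1                      # one test for the mixed sample
            if any(pool):                   # pool is positive, so follow up
                tests += len(pool)          # simple individual retests (illustrative)
                positives += [start + i for i, s in enumerate(pool) if s]
        return positives, tests

    # Illustrative numbers: 1,000 people, 0.2% prevalence, pools of 100 swabs
    population = [random.random() < 0.002 for _ in range(1000)]
    found, used = screen(population, pool_size=100)
    print(f"{len(found)} positives found with {used} tests "
          f"instead of {len(population)} individual tests")
    ```

    When most of the population is negative, most pools need no follow-up at all, which is where the large cost savings come from.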
    The new method is best suited to regular screening of a population — rather than testing individual patients — and may help to significantly lower testing costs, the team says.
    So far, the method has been trialled in Rwanda, where it is being used to screen air passengers, and in South Africa, where it is being used to test a leading rugby team regularly.
    The study, published in the journal Nature, also involved researchers from the African Institute for Mathematical Sciences (AIMS) and the University of Rwanda.
    Professor Neil Turok, who recently joined the University of Edinburgh’s School of Physics and Astronomy as the inaugural Higgs Chair of Theoretical Physics, said: “We hope our method will enable regular, cost-effective screening in multiple contexts. By doing so, it could be a game changer in helping us to overcome the Covid-19 pandemic.”

    Story Source:
    Materials provided by University of Edinburgh. Note: Content may be edited for style and length.

  •

    MonoEye: A human motion capture system using a single wearable camera

    Researchers at Tokyo Institute of Technology (Tokyo Tech) and Carnegie Mellon University have together developed a new human motion capture system that consists of a single ultra-wide fisheye camera mounted on the user’s chest. The simplicity of their system could be conducive to a wide range of applications in the sports, medical and entertainment fields.
    Computer vision-based technologies are advancing rapidly owing to recent developments in integrating deep learning. In particular, human motion capture is a highly active research area driving advances for example in robotics, computer generated animation and sports science.
    Conventional motion capture systems in specially equipped studios typically rely on having several synchronized cameras attached to the ceiling and walls that capture movements by a person wearing a body suit fitted with numerous sensors. Such systems are often very expensive and limited in terms of the space and environment in which the wearer can move.
    Now, a team of researchers led by Hideki Koike at Tokyo Tech present a new motion capture system that consists of a single ultra-wide fisheye camera mounted on the user’s chest. Their design not only overcomes the space constraints of existing systems but is also cost-effective.
    Named MonoEye, the system can capture the user’s body motion as well as the user’s perspective, or ‘viewport’. “Our ultra-wide fisheye lens has a 280-degree field-of-view and it can capture the user’s limbs, face, and the surrounding environment,” the researchers say.
    To achieve robust multimodal motion capture, the system has been designed with three deep neural networks capable of estimating 3D body pose, head pose and camera pose in real-time.
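    The article does not describe the networks’ architectures, but the overall design (one chest-camera image in; body, head and camera pose out) can be sketched as regression heads on a shared image encoder. Everything in the PyTorch snippet below, including the layer sizes, output dimensions and the choice of a single shared backbone rather than three separate networks, is an illustrative assumption and not MonoEye’s published model.

    ```python
    import torch
    import torch.nn as nn

    class PoseEstimator(nn.Module):
        """Illustrative sketch: a shared encoder for the fisheye image feeding
        three regression heads for body, head and camera pose."""
        def __init__(self, num_joints=21):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.body_pose = nn.Linear(64, num_joints * 3)  # 3D joint positions
            self.head_pose = nn.Linear(64, 3)               # head orientation
            self.camera_pose = nn.Linear(64, 6)              # camera rotation + translation

        def forward(self, image):
            features = self.encoder(image)
            return self.body_pose(features), self.head_pose(features), self.camera_pose(features)

    model = PoseEstimator()
    frame = torch.randn(1, 3, 256, 256)          # one dummy chest-camera frame
    body, head, camera = model(frame)
    print(body.shape, head.shape, camera.shape)  # [1, 63], [1, 3], [1, 6]
    ```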
    Already, the researchers have trained these neural networks with an extensive synthetic dataset consisting of 680,000 renderings of people with a range of body shapes, clothing, actions, background and lighting conditions, as well as 16,000 frames of photo-realistic images.
    Some challenges remain, however, due to the inevitable domain gap between synthetic and real-world datasets. The researchers plan to keep expanding their dataset with more photo-realistic images to help minimize this gap and improve accuracy.
    The researchers envision that the chest-mounted camera could go on to be transformed into an everyday accessory such as a tie clip, brooch or sports gear in future.
    The team’s work will be presented at the 33rd ACM Symposium on User Interface Software and Technology (UIST), a leading forum for innovations in human-computer interfaces, to be held virtually on 20-23 October 2020.

    Story Source:
    Materials provided by Tokyo Institute of Technology. Note: Content may be edited for style and length.

  •

    Kitchen temperature supercurrents from stacked 2D materials

    Could a stack of 2D materials allow for supercurrents at ground-breakingly warm temperatures, easily achievable in the household kitchen?
    An international study published in August opens a new route to high-temperature supercurrents at temperatures as ‘warm’ as inside a kitchen fridge.
    The ultimate aim is to achieve superconductivity (ie, electrical current without any energy loss to resistance) at a reasonable temperature.
    TOWARDS ROOM-TEMPERATURE SUPERCONDUCTIVITY
    Previously, superconductivity has only been possible at impractically low temperatures, below -170°C — even the Antarctic would be far too warm!
    For this reason, the cooling costs of superconductors have been high, requiring expensive and energy-intensive cooling systems.


    Superconductivity at everyday temperatures is the ultimate goal of researchers in the field.
    This new semiconductor superlattice device could form the basis of a radically new class of ultra-low energy electronics with vastly lower energy consumption per computation than conventional, silicon-based (CMOS) electronics.
    Such electronics, based on new types of conduction in which solid-state transistors switch between zero and one (ie, binary switching) without resistance at room temperature, is the aim of the FLEET Centre of Excellence.
    EXCITON SUPERCURRENTS IN ENERGY-EFFICIENT ELECTRONICS
    Because oppositely-charged electrons and holes in semiconductors are strongly attracted to each other electrically, they can form tightly-bound pairs. These composite particles are called excitons, and they open up new paths towards conduction without resistance at room temperature.


    Excitons can in principle form a quantum, ‘superfluid’ state, in which they move together without resistance. With such tightly bound excitons, the superfluidity should exist at high temperatures — even as high as room temperature.
    But unfortunately, because the electron and hole are so close together, in practice excitons have extremely short lifetimes — just a few nanoseconds, not enough time to form a superfluid.
    As a workaround, the electron and hole can be kept completely apart in two, separated atomically-thin conducting layers, creating so-called ‘spatially indirect’ excitons. The electrons and holes move along separate but very close conducting layers. This makes the excitons long-lived, and indeed superfluidity has recently been observed in such systems.
    Counterflow in the exciton superfluid, in which the oppositely charged electrons and holes move together in their separate layers, allows so-called ‘supercurrents’ (dissipationless electrical currents) to flow with zero resistance and zero wasted energy. As such, it is clearly an exciting prospect for future, ultra-low-energy electronics.
    STACKED LAYERS OVERCOME 2D LIMITATIONS
    Sara Conti, a co-author on the study, notes another problem, however: atomically thin conducting layers are two-dimensional, and in 2D systems there are rigid topological quantum restrictions, discovered by David Thouless and Michael Kosterlitz (2016 Nobel Prize), that eliminate superfluidity above about -170°C, confining it to very low temperatures.
    The key difference in the newly proposed system of stacked atomically thin layers of transition metal dichalcogenide (TMD) semiconducting materials is that it is three-dimensional.
    The topological limitations of 2D are overcome by using this 3D ‘superlattice’ of thin layers. Alternate layers are doped with excess electrons (n-doped) and excess holes (p-doped), and these form the 3D excitons.
    The study predicts exciton supercurrents will flow in this system at temperatures as warm as -3°C.
    David Neilson, who has worked for many years on exciton superfluidity and 2D systems, says “The proposed 3D superlattice breaks out from the topological limitations of 2D systems, allowing for supercurrents at -3°C. Because the electrons and holes are so strongly coupled, further design improvements should carry this right up to room temperature.”
    “Amazingly, it is becoming routine today to produce stacks of these atomically-thin layers, lining them up atomically, and holding them together with the weak van der Waals atomic attraction,” explains Prof Neilson. “And while our new study is a theoretical proposal, it is carefully designed to be feasible with present technology.”

  •

    Simple software creates complex wooden joints

    Wood is considered an attractive construction material for both aesthetic and environmental reasons. Construction of useful wood objects requires complicated structures and ways to connect components together. Researchers created a novel 3D design application to hugely simplify the design process and also provide milling machine instructions to efficiently produce the designed components. The designs do not require nails or glue, meaning items made with this system can be easily assembled, disassembled, reused, repaired or recycled.
    Carpentry is a practice as ancient as humanity itself. Equal parts art and engineering, it has figuratively and literally shaped the world around us. Yet despite its ubiquity, carpentry is a difficult and time-consuming skill, leading to relatively high prices for hand-crafted wooden items like furniture. For this reason, much wooden furniture around us is often, at least to some degree, made by machines. Some machines can be highly automated and programmed with designs created on computers by human designers. This in itself can be a very technical and creative challenge, out of reach for many, until now.
    Researchers from the Department of Creative Informatics at the University of Tokyo have created a 3D design application to create structural wooden components quickly, easily and efficiently. They call it Tsugite, the Japanese word for joinery, and through a simple 3D interface, users with little or no prior experience in either woodworking or 3D design can create designs for functional wooden structures in minutes. These designs can then instruct milling machines to carve the structural components, which users can then piece together without the need for additional tools or adhesives, following on-screen instructions.
    “Our intention was to make the art of joinery available to people without specific experience. When we tested the interface in a user study, people new to 3D modeling not only designed some complex structures, but also enjoyed doing so,” said researcher Maria Larsson. “Tsugite is simple to use as it guides users through the process one step at a time, starting with a gallery of existing designs that can then be modified for different purposes. But more advanced users can jump straight to a manual editing mode for more freeform creativity.”
    Tsugite gives users a detailed view of wooden joints represented by what are known as voxels, essentially 3D pixels, in this case small cubes. These voxels can be moved around at one end of a component to be joined; this automatically adjusts the voxels at the end of the corresponding component such that they are guaranteed to fit together tightly without the need for nails or even glue. Two or more components can be joined, and the software algorithm will adjust them all accordingly. Different colors inform the user about properties of the joints, such as how easily they will slide together, or problems such as potential weaknesses.
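    The way editing one side of a joint automatically determines the other can be illustrated with a toy voxel model (this is not Tsugite’s actual code): if the joint region is a small boolean grid, the voxels assigned to one timber leave exactly the complementary voxels for its mate.

    ```python
    import numpy as np

    # Joint region as a 3x3x3 boolean grid: True = material kept on timber A.
    # The grid size and edits below are purely illustrative assumptions.
    joint = np.zeros((3, 3, 3), dtype=bool)
    joint[:, :, 0] = True          # start from a simple half-lap style split
    joint[1, 1, 1] = True          # the user "moves a voxel" on timber A's end

    timber_a = joint               # voxels carved to remain on piece A
    timber_b = ~joint              # piece B automatically gets the complement

    # The two pieces occupy every voxel exactly once, so they mate without gaps.
    assert np.all(timber_a ^ timber_b)
    print("A voxels:", timber_a.sum(), "| B voxels:", timber_b.sum())
    ```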
    Something that makes Tsugite unique is that it factors the fabrication process directly into the designs. This means that milling machines, which have physical limitations such as their degrees of freedom, tool size and so on, are only given designs they are able to create. Something that has plagued users of 3D printers, which share a common ancestry with milling machines, is that software for 3D printers cannot always be sure how the machine itself will behave, which can lead to failed prints.
    “There is some great research in the field of computer graphics on how to model a wide variety of joint geometries. But that approach often lacks the practical considerations of manufacturing and material properties,” said Larsson. “Conversely, research in the fields of structural engineering and architecture may be very thorough in this regard, but they might only be concerned with a few kinds of joints. We saw the potential to combine the strengths of these approaches to create Tsugite. It can explore a large variety of joints and yet keeps them within realistic physical limits.”
    Another advantage of incorporating fabrication limitations into the design process is that Tsugite’s underlying algorithms have an easier time navigating all the different possibilities they could present to users, as those that are physically impossible are simply not given as options. The researchers hope through further refinements and advancements that Tsugite can be scaled up to design not just furniture and small structures, but also entire buildings.
    “According to the U.N., the building and construction industry is responsible for almost 40% of worldwide carbon dioxide emissions. Wood is perhaps the only natural and renewable building material that we have, and efficient joinery can add further sustainability benefits,” said Larsson. “When connecting timbers with joinery, as opposed to metal fixings, for example, it reduces mixing materials. This is good for sorting and recycling. Also, unglued joints can be taken apart without destroying building components. This opens up the possibility for buildings to be disassembled and reassembled elsewhere. Or for defective parts to be replaced. This flexibility of reuse and repair adds sustainability benefits to wood.”
    This research is supported by JST ACT-I grant number JPMJPR17UT, JSPS KAKENHI grant number 17H00752, and JST CREST grant number JPMJCR17A1, Japan.

    Story Source:
    Materials provided by University of Tokyo. Note: Content may be edited for style and length.