965 results for Quantum computational complexity
Abstract:
Excepting the Peripheral and Central Nervous Systems, the Immune System is the most complex of somatic systems in higher animals. This complexity manifests itself at many levels, from the molecular to that of the whole organism. Much insight into this confounding complexity can be gained through computational simulation. Such simulations range in application from epitope prediction through to the modelling of vaccination strategies. In this review, we selectively evaluate various key applications relevant to computational vaccinology: these include techniques that operate at different scales, that is, from the molecular to the organism and even the population level.
Abstract:
Dedicated to Professor A.M. Mathai on the occasion of his 75th birthday. Mathematics Subject Classification 2010: 26A33, 44A10, 33C60, 35J10.
Abstract:
Dissociation of molecular hydrogen is an important step in a wide variety of chemical, biological, and physical processes. Due to the light mass of hydrogen, it is recognized that quantum effects are often important to its reactivity. However, understanding how quantum effects impact the reactivity of hydrogen is still in its infancy. Here, we examine this issue using a well-defined Pd/Cu(111) alloy that allows the activation of hydrogen and deuterium molecules to be examined at individual Pd atom surface sites over a wide range of temperatures. Experiments comparing the uptake of hydrogen and deuterium as a function of temperature reveal completely different behavior of the two species. The rate of hydrogen activation increases at lower sample temperature, whereas deuterium activation slows as the temperature is lowered. Density functional theory simulations in which quantum nuclear effects are accounted for reveal that tunneling through the dissociation barrier is prevalent for H2 up to ∼190 K and for D2 up to ∼140 K. Kinetic Monte Carlo simulations indicate that the effective barrier to H2 dissociation is so low that hydrogen uptake on the surface is limited merely by thermodynamics, whereas the D2 dissociation process is controlled by kinetics. These data illustrate the complexity and inherent quantum nature of this ubiquitous and seemingly simple chemical process. Examining these effects in other systems with a similar range of approaches may uncover temperature regimes where quantum effects can be harnessed, yielding greater control of bond-breaking processes at surfaces and uncovering useful chemistries such as selective bond activation or isotope separation.
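As a rough illustration of why the lighter isotope can dissociate faster at low temperature, the sketch below compares Arrhenius rates augmented with a first-order Wigner tunneling correction for H2 and D2. The barrier height, prefactor, and barrier frequencies are illustrative placeholders, not values from the study, and a perturbative Wigner correction cannot capture the deep-tunneling regime reported above.

```python
import numpy as np

KB = 8.617333e-5     # Boltzmann constant, eV/K
HBAR = 6.582120e-16  # reduced Planck constant, eV*s

def wigner_rate(temperature_k, barrier_ev, prefactor_hz, barrier_freq_hz):
    """Arrhenius rate with a first-order Wigner tunneling correction.

    kappa = 1 + (1/24) * (hbar * omega_b / (kB * T))**2 multiplies the
    classical rate; the lighter isotope has a higher barrier frequency and
    therefore a larger correction at low temperature.
    """
    beta = 1.0 / (KB * temperature_k)
    kappa = 1.0 + (HBAR * 2 * np.pi * barrier_freq_hz * beta) ** 2 / 24.0
    return kappa * prefactor_hz * np.exp(-barrier_ev * beta)

# Illustrative (not fitted) parameters: same barrier, isotope-scaled frequency.
barrier = 0.05                 # eV, placeholder for a low Pd-site barrier
nu_h2 = 3.0e13                 # Hz, assumed barrier frequency for H2
nu_d2 = nu_h2 / np.sqrt(2.0)   # D2 is twice as heavy, so nu scales by 1/sqrt(2)

for T in (100, 150, 200):
    r_h = wigner_rate(T, barrier, 1e13, nu_h2)
    r_d = wigner_rate(T, barrier, 1e13, nu_d2)
    print(f"T = {T:3d} K  k(H2)/k(D2) = {r_h / r_d:.2f}")
```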
Abstract:
X-ray computed tomography (CT) is a non-invasive medical imaging technique that generates cross-sectional images by acquiring attenuation-based projection measurements at multiple angles. Since its first introduction in the 1970s, substantial technical improvements have led to the expanding use of CT in clinical examinations. CT has become an indispensable imaging modality for the diagnosis of a wide array of diseases in both pediatric and adult populations [1, 2]. Currently, approximately 272 million CT examinations are performed annually worldwide, with nearly 85 million of these in the United States alone [3]. Although this trend has decelerated in recent years, CT usage is still expected to increase mainly due to advanced technologies such as multi-energy [4], photon counting [5], and cone-beam CT [6].
Despite the significant clinical benefits, concerns have been raised regarding the population-based radiation dose associated with CT examinations [7]. From 1980 to 2006, the effective dose from medical diagnostic procedures rose six-fold, with CT contributing to almost half of the total dose from medical exposure [8]. For each patient, the risk associated with a single CT examination is likely to be minimal. However, the relatively large population-based radiation level has led to enormous efforts among the community to manage and optimize the CT dose.
As promoted by the international campaigns Image Gently and Image Wisely, exposure to CT radiation should be appropriate and safe [9, 10]. It is thus a shared responsibility to optimize the radiation dose used in CT examinations. The key to dose optimization is to determine the minimum radiation dose that achieves the targeted image quality [11]. Based on this principle, dose optimization would benefit significantly from effective metrics to characterize radiation dose and image quality for a CT exam. Moreover, if accurate predictions of radiation dose and image quality were possible before the exam began, it would be feasible to personalize the exam by adjusting the scanning parameters to achieve a desired level of image quality. The purpose of this thesis is to design and validate models to prospectively quantify patient-specific radiation dose and task-based image quality. The dual aim of the study is to translate these theoretical models into clinical practice by developing an organ-based dose monitoring system and image-based noise addition software for protocol optimization.
More specifically, Chapter 3 aims to develop an organ dose-prediction method for CT examinations of the body under constant tube current conditions. The study modeled anatomical diversity and complexity using a large number of patient models with representative age, size, and gender distributions. The dependence of the organ dose coefficients on patient size and scanner model was further evaluated. Distinct from prior work, these studies use the largest number of patient models to date, with representative age, weight percentile, and body mass index (BMI) ranges.
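The chapter's fitted coefficients are not given here; as a minimal sketch of how a size-dependent organ dose coefficient is commonly applied in this literature (an exponential fit in patient size multiplying CTDIvol), with placeholder values for the hypothetical parameters alpha and beta:

```python
import math

def organ_dose_mgy(ctdi_vol_mgy, water_eq_diameter_cm, alpha=2.0, beta=0.04):
    """Estimate organ dose as CTDIvol times a size-dependent coefficient.

    The coefficient is modelled as h(d) = alpha * exp(-beta * d), an
    exponential fit against water-equivalent diameter d; alpha and beta
    are placeholder values, not the fitted coefficients from the
    thesis's phantom library.
    """
    h = alpha * math.exp(-beta * water_eq_diameter_cm)
    return h * ctdi_vol_mgy

# Example: a 30 cm water-equivalent-diameter patient scanned at CTDIvol = 10 mGy.
print(f"Estimated organ dose: {organ_dose_mgy(10.0, 30.0):.1f} mGy")
```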
With organ dose effectively quantified under constant tube current conditions, Chapter 4 aims to extend the organ dose prediction system to tube current modulated (TCM) CT examinations. The prediction, applied to chest and abdominopelvic exams, was achieved by combining a convolution-based estimation technique that quantifies the radiation field, a TCM scheme that emulates modulation profiles from major CT vendors, and a library of computational phantoms with representative sizes, ages, and genders. The prospective quantification model is validated by comparing the predicted organ dose with the dose estimated from Monte Carlo simulations in which the TCM function is explicitly modeled.
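A minimal sketch of the convolution idea under assumed inputs: a per-slice tube-current profile is convolved with a hypothetical longitudinal dose-spread kernel and scaled by a dose-per-mAs coefficient to estimate the dose over an organ's slice range. The profile, kernel, and coefficient below are illustrative, not the thesis's validated models.

```python
import numpy as np

def tcm_region_dose(mAs_profile, kernel, slice_idx, dose_per_mAs):
    """Convolve a tube-current-modulation profile with a longitudinal
    dose-spread kernel and read off the mean dose over an organ's slices.

    mAs_profile  : per-slice tube current-time product (mAs)
    kernel       : normalized dose-spread function along z (sums to 1)
    slice_idx    : slice indices covering the organ of interest
    dose_per_mAs : scanner- and size-specific dose coefficient (mGy/mAs)
    """
    dose_along_z = np.convolve(mAs_profile, kernel, mode="same") * dose_per_mAs
    return dose_along_z[slice_idx].mean()

# Toy example: sinusoidal modulation over 40 slices, Gaussian spread kernel.
z = np.arange(40)
profile = 200 + 80 * np.sin(2 * np.pi * z / 40)   # assumed mAs profile
kern = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kern /= kern.sum()
liver_slices = np.arange(10, 20)                  # assumed organ extent
print(f"{tcm_region_dose(profile, kern, liver_slices, 0.05):.1f} mGy")
```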
Chapter 5 aims to implement the organ dose-estimation framework in clinical practice by developing an organ dose-monitoring program based on commercial software (Dose Watch, GE Healthcare, Waukesha, WI). In the first phase of the study we focused on body CT examinations, so the patient’s major body landmark information was extracted from the patient scout image in order to match each clinical patient against a computational phantom in the library. The organ dose coefficients were estimated based on CT protocol and patient size as reported in Chapter 3. The exam CTDIvol, DLP, and TCM profiles were extracted and used to quantify the radiation field using the convolution technique proposed in Chapter 4.
With effective methods to predict and monitor organ dose in place, Chapter 6 aims to develop and validate improved measurement techniques for image quality assessment. It outlines the method developed to assess and predict quantum noise in clinical body CT images. Compared with previous phantom-based studies, this study accurately assessed the quantum noise in clinical images and further validated the correspondence between phantom-based measurements and the expected clinical image quality as a function of patient size and scanner attributes.
Chapter 7 aims to develop a practical strategy to generate hybrid CT images and assess the impact of dose reduction on diagnostic confidence for the diagnosis of acute pancreatitis. The general strategy is (1) to simulate synthetic CT images at multiple reduced-dose levels from clinical datasets using an image-based noise addition technique; (2) to develop quantitative and observer-based methods to validate the realism of simulated low-dose images; (3) to perform multi-reader observer studies on the low-dose image series to assess the impact of dose reduction on the diagnostic confidence for multiple diagnostic tasks; and (4) to determine the dose operating point for clinical CT examinations based on the minimum diagnostic performance to achieve protocol optimization.
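A hedged sketch of step (1), assuming the common image-domain convention that quantum noise variance scales inversely with dose, so the noise to add for a target dose fraction f is sigma_full * sqrt(1/f - 1); this ignores the spatially varying, correlated noise that a full noise-addition tool would model.

```python
import numpy as np

def simulate_reduced_dose(image_hu, sigma_full_hu, dose_fraction, rng=None):
    """Add zero-mean Gaussian noise so the simulated image has the noise
    level expected at a reduced dose.

    With quantum noise variance inversely proportional to dose, the noise
    added on top of a full-dose image with noise sigma_full is
    sigma_add = sigma_full * sqrt(1/dose_fraction - 1).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_add = sigma_full_hu * np.sqrt(1.0 / dose_fraction - 1.0)
    return image_hu + rng.normal(0.0, sigma_add, size=image_hu.shape)

# Example: simulate a 50% dose image from a full-dose image with 12 HU noise.
full_dose = np.zeros((64, 64))   # placeholder image
half_dose = simulate_reduced_dose(full_dose, sigma_full_hu=12.0, dose_fraction=0.5)
print(f"Added noise SD = {half_dose.std():.1f} HU")
```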
Chapter 8 concludes the thesis with a summary of accomplished work and a discussion about future research.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors in a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, so a static partition of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload of a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of load to processes must change dynamically at run-time in order to maintain reasonable efficiency. The issues of dynamic load balancing are examined in the context of PHYSICA, a three-dimensional unstructured-mesh multi-physics continuum mechanics computational modelling code.
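As a toy illustration of run-time load balancing (not the partitioning scheme actually used with PHYSICA), the sketch below measures per-cell costs from the previous step and greedily remaps cells when the imbalance exceeds a threshold; it ignores data locality and migration cost, which a real mesh partitioner must respect.

```python
def rebalance(cell_costs, owner, n_procs, threshold=1.1):
    """Greedy dynamic repartition: if the measured load imbalance exceeds
    `threshold`, reassign cells (heaviest first) to the currently
    least-loaded process.  `cell_costs` are per-cell run-time measurements
    from the previous step; `owner[i]` is the process owning cell i."""
    loads = [0.0] * n_procs
    for c, p in zip(cell_costs, owner):
        loads[p] += c
    imbalance = max(loads) / (sum(loads) / n_procs)
    if imbalance <= threshold:
        return owner, imbalance                  # keep current mapping
    new_owner = list(owner)
    loads = [0.0] * n_procs
    for i in sorted(range(len(cell_costs)), key=lambda i: -cell_costs[i]):
        p = loads.index(min(loads))              # least-loaded process
        new_owner[i] = p
        loads[p] += cell_costs[i]
    return new_owner, imbalance

costs = [5.0, 1.0, 1.0, 4.0, 2.0, 3.0]           # measured per-cell cost
mapping, imb = rebalance(costs, owner=[0, 0, 0, 1, 1, 1], n_procs=2)
print(mapping, f"imbalance before: {imb:.2f}")
```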
Abstract:
In this paper we extend recent results of Fiorini et al. on the extension complexity of the cut polytope and related polyhedra. We first describe a lifting argument to show exponential extension complexity for a number of NP-complete problems including subset-sum and three-dimensional matching. We then obtain a relationship between the extension complexity of the cut polytope of a graph and that of its graph minors. Using this, we are able to show exponential extension complexity for the cut polytope of a large number of graphs, including those used in quantum information and suspensions of cubic planar graphs.
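One natural form such a minor relationship could take is monotonicity of extension complexity under taking minors; the statement below is an assumed reading of the result, not a quotation of the paper's theorem.

```latex
% Assumed form of the minor relationship (not quoted from the paper):
% if H is a minor of G, then
\mathrm{xc}\big(\mathrm{CUT}(H)\big) \;\le\; \mathrm{xc}\big(\mathrm{CUT}(G)\big),
% so an exponential lower bound for CUT(H) transfers to every graph G
% that contains H as a minor.
```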
Abstract:
A detailed description of the low-energy dynamics of multipartite entanglement is provided for harmonic systems in a wide variety of dissipative scenarios. Without making any central approximation, this description rests mainly on a reasonable set of assumptions about the environment and the system-environment interaction, both consistent with a linear analysis of the dissipative dynamics. In the first part, an inseparability criterion is derived that is capable of detecting the k-partite entanglement of a broad class of Gaussian and non-Gaussian states in continuous-variable systems. This criterion is used to monitor the transient dynamics of entanglement, showing that non-Gaussian states can be as robust against dissipative effects as Gaussian ones. Special attention is devoted to the stationary entanglement dynamics of three oscillators interacting with the same environment or with different environments at different temperatures. This study helps to elucidate the role of quantum correlations in the behaviour of energy currents.
Abstract:
We investigate protocols for generating a state t-design by using a fixed separable initial state and a diagonal-unitary t-design in the computational basis, which is a t-design of an ensemble of diagonal unitary matrices with random phases as their eigenvalues. We first show that a diagonal-unitary t-design generates an O(1/2^N)-approximate state t-design, where N is the number of qubits. We then discuss a way of improving the degree of approximation by exploiting non-diagonal gates after applying a diagonal-unitary t-design. We also show that it is necessary and sufficient to use O(log_2 t)-qubit gates with random phases to generate a diagonal-unitary t-design by diagonal quantum circuits, and that each multi-qubit diagonal gate can be replaced by a sequence of multi-qubit controlled-phase-type gates with discrete-valued random phases. Finally, we analyze the number of gates for implementing a diagonal-unitary t-design by non-diagonal two- and one-qubit gates. Our results provide a concrete application of diagonal quantum circuits in quantum informational tasks.
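A small numerical sketch of the kind of circuit described: a random diagonal unitary assembled from single-qubit random phases and two-qubit controlled-phase-type gates with discrete-valued random phases. The gate set and phase granularity here are illustrative choices, not the paper's exact construction or its t-design guarantee.

```python
import itertools
import numpy as np

def random_diagonal_unitary(n_qubits, phase_levels=8, rng=None):
    """Return the diagonal of a random diagonal unitary built from
    single-qubit random phases plus two-qubit controlled-phase-type gates
    with discrete random phases (an illustrative construction)."""
    rng = np.random.default_rng() if rng is None else rng
    dim = 2 ** n_qubits
    basis = np.arange(dim)
    phases = np.zeros(dim)

    def discrete_phase():
        return 2 * np.pi * rng.integers(phase_levels) / phase_levels

    for q in range(n_qubits):                        # single-qubit phase gates
        phases += discrete_phase() * ((basis >> q) & 1)
    for a, b in itertools.combinations(range(n_qubits), 2):
        both_one = ((basis >> a) & 1) * ((basis >> b) & 1)
        phases += discrete_phase() * both_one        # controlled-phase gates
    return np.exp(1j * phases)                       # diagonal entries

diag = random_diagonal_unitary(3)
print(np.allclose(np.abs(diag), 1.0))                # unit-modulus diagonal: True
```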
Abstract:
This work is aimed at understanding and unifying information on epidemiological modelling methods and how those methods relate to public policy addressing human health, specifically in the context of infectious disease prevention, pandemic planning, and health behaviour change. This thesis employs multiple qualitative and quantitative methods, and is presented as a manuscript of several individual, data-driven projects combined in a narrative arc. The first chapter introduces the scope and complexity of this interdisciplinary undertaking, describing several topical intersections of importance. The second chapter begins the presentation of original data, describing in detail two exercises in computational epidemiological modelling pertinent to pandemic influenza planning and policy; the next chapter presents additional original data on how public confidence in modelling methodology may affect planned health behaviour change as recommended in public health policy. The thesis narrative continues in the final data-driven chapter, describing how health policymakers use modelling methods and scientific evidence to inform and construct health policies for the prevention of infectious diseases, and concludes with a narrative chapter that evaluates the breadth of these data and recommends strategies for the optimal use of modelling methodologies when informing public health policy in applied public health scenarios.
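The influenza modelling exercises themselves are not detailed in this abstract; as a generic, hedged illustration of the compartmental models such work typically builds on, here is a minimal SIR integration with arbitrary parameters.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR equations
    dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
    (population fractions; a deliberately minimal illustration)."""
    new_inf = beta * s * i * dt
    rec = gamma * i * dt
    return s - new_inf, i + new_inf - rec, r + rec

s, i, r = 0.999, 0.001, 0.0      # initial fractions (arbitrary)
beta, gamma = 0.4, 0.2           # illustrative transmission/recovery rates
peak = 0.0
for _ in range(3000):            # 300 days at dt = 0.1
    s, i, r = sir_step(s, i, r, beta, gamma, 0.1)
    peak = max(peak, i)
print(f"Peak infectious fraction = {peak:.2%}, final attack rate = {r:.2%}")
```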
Abstract:
Humans and robots have complementary strengths in performing assembly operations. Humans are very good at perception tasks in unstructured environments: they can recognize and locate a part in a box of miscellaneous parts, and they are very good at complex manipulation in tight spaces. Human sensory characteristics, motor abilities, knowledge, and skills give people the ability to react to unexpected situations and resolve problems quickly. In contrast, robots are very good at pick-and-place operations and are highly repeatable in placement tasks. Robots can perform tasks at high speed while maintaining precision, can operate for long periods of time, and are very good at applying high forces and torques. Typically, robots are used in mass production, while small-batch and custom production operations predominantly use manual labor. High labor cost is making it difficult for small and medium manufacturers, which are mainly involved in small-batch and custom production, to remain cost competitive in high-wage markets. They need to find a way to reduce the labor cost of assembly operations. Purely robotic cells will not be able to provide them the necessary flexibility. Creating hybrid cells where humans and robots can collaborate in close physical proximity is a potential solution. The underlying idea behind such cells is to decompose assembly operations into tasks such that humans and robots can collaborate by performing the sub-tasks that suit them best. Realizing hybrid cells that enable effective human-robot collaboration is challenging. This dissertation addresses the following three computational issues involved in developing and utilizing hybrid assembly cells:
- We should be able to automatically generate plans to operate hybrid assembly cells to ensure efficient cell operation. This requires generating feasible assembly sequences and instructions for robots and human operators, respectively. Automated planning poses two challenges. First, generating operation plans for complex assemblies is difficult: the complexity can come from the combinatorial explosion caused by the size of the assembly or from the complex paths needed to perform the assembly. Second, generating feasible plans requires accounting for robot and human motion constraints. The first objective of the dissertation is to develop the underlying computational foundations for automatically generating plans for the operation of hybrid cells, addressing both assembly complexity and motion constraint issues.
- The collaboration between humans and robots in the assembly cell will only be practical if human safety can be ensured during the assembly tasks that require collaboration. The second objective of the dissertation is to evaluate different options for real-time monitoring of the state of the human operator with respect to the robot and to develop strategies for taking appropriate measures to ensure human safety when a planned robot move may compromise it. In order to be competitive in the market, the developed solution will have to take cost into account without significantly compromising quality.
- In the envisioned hybrid cell, we will rely on human operators to bring parts into the cell. If the human operator makes an error in selecting a part or fails to place it correctly, the robot will be unable to correctly perform the task assigned to it. If the error goes undetected, it can lead to a defective product and inefficiencies in cell operation. The cause of human error can be either confusion due to poor-quality instructions or the operator not paying adequate attention to the instructions. In order to ensure smooth and error-free operation of the cell, we will need to monitor the state of the assembly operations in the cell. The third objective of the dissertation is to identify and track parts in the cell and automatically generate instructions for taking corrective actions if a human operator deviates from the selected plan (a sketch of such a monitoring loop follows below). Potential corrective actions may involve re-planning if it is possible to continue assembly from the current state, or issuing a warning and generating instructions to undo the current task.
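A minimal sketch of the kind of monitoring-and-correction loop the third objective describes, with hypothetical part names and a stand-in re-planner; a real system would use part identification and tracking rather than string labels.

```python
from dataclasses import dataclass

@dataclass
class Step:
    expected_part: str
    undoable: bool

def monitor_step(detected_part: str, step: Step, replan):
    """Decide on a corrective action when the detected part does not match
    the plan.  `replan` is a stand-in for an assembly re-planner that
    returns a new plan from the current state, or None if none exists."""
    if detected_part == step.expected_part:
        return "proceed"
    new_plan = replan(detected_part)
    if new_plan is not None:
        return f"warn operator; continue with re-planned sequence {new_plan}"
    if step.undoable:
        return "instruct operator to undo the current task and retry"
    return "halt cell and request supervisor intervention"

# Toy re-planner: only a swap of two interchangeable fasteners is recoverable.
toy_replan = lambda part: ["bolt-B", "bolt-A"] if part == "bolt-B" else None
print(monitor_step("bolt-B", Step("bolt-A", undoable=True), toy_replan))
print(monitor_step("bracket", Step("bolt-A", undoable=True), toy_replan))
```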
Abstract:
The performance, energy efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone, and must involve improvements to design tools and development of new disruptive technologies such as 3D integration. 3D integration presents potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously-integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment needed to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require development of new BEOL technologies. Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for improving this work in the future.
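As a hedged sketch of multi-domain design-space exploration in the co-design spirit (not the dissertation's framework), the snippet below sweeps hypothetical 3D CPU configurations through toy performance, power, and thermal models, applies a thermal limit, and keeps the Pareto-optimal set.

```python
import itertools

def evaluate(layers, freq_ghz, mf_cooling):
    """Toy, uncalibrated models: performance grows with stacking and
    frequency, power with frequency cubed, and peak temperature with power
    unless micro-fluidic cooling lowers the thermal resistance."""
    perf = layers * freq_ghz * 10.0                  # arbitrary throughput units
    power = layers * (freq_ghz ** 3) * 2.0           # W, toy dynamic-power model
    theta = 0.4 if mf_cooling else 1.5               # K/W, toy thermal resistance
    temp = 45.0 + theta * power
    return perf, power, temp

def pareto(points):
    """Keep configurations not dominated in (max perf, min power, min temp)."""
    keep = []
    for p in points:
        dominated = any(q[1] >= p[1] and q[2] <= p[2] and q[3] <= p[3] and q != p
                        for q in points)
        if not dominated:
            keep.append(p)
    return keep

space = itertools.product([1, 2, 4], [1.0, 2.0, 3.0], [False, True])
evals = [((l, f, mf), *evaluate(l, f, mf)) for l, f, mf in space]
feasible = [e for e in evals if e[3] <= 95.0]        # 95 C thermal limit
for cfg, perf, power, temp in pareto(feasible):
    print(cfg, f"perf={perf:.0f} power={power:.0f}W T={temp:.0f}C")
```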
Abstract:
While fault-tolerant quantum computation might still be years away, analog quantum simulators offer a way to leverage current quantum technologies to study classically intractable quantum systems. Cutting-edge quantum simulators, such as those utilizing ultracold atoms, are beginning to study physics that surpasses what is classically tractable. As the system sizes of these quantum simulators increase, there are also concurrent gains in the complexity and types of Hamiltonians that can be simulated. In this work, I describe advances toward the realization of an adaptable, tunable quantum simulator capable of surpassing classical computation. We simulate long-ranged Ising and XY spin models, which can have global arbitrary transverse and longitudinal fields in addition to individual transverse fields, using a linear chain of up to 24 171Yb+ ions confined in a linear rf Paul trap. Each qubit is encoded in the ground-state hyperfine levels of an ion. Spin-spin interactions are engineered by the application of spin-dependent forces from laser fields, coupling spin to motion. Each spin can be read out independently using state-dependent fluorescence. The results here add yet more tools to an ever-growing quantum simulation toolbox. One of many challenges has been the coherent manipulation of individual qubits. By using a surprisingly large fourth-order Stark shift in a clock-state qubit, we demonstrate an ability to individually manipulate spins and apply independent Hamiltonian terms, greatly increasing the range of quantum simulations that can be implemented. As quantum systems grow beyond the capability of classical numerics, a constant question is how to verify a quantum simulation. Here, I present measurements that may provide useful metrics for large system sizes and demonstrate them in a system of up to 24 ions during a classically intractable simulation. The observed values are consistent with extremely large entangled states, with as much as ~95% of the system entangled. Finally, we use many of these techniques to generate a spin Hamiltonian that fails to thermalize during experimental time scales due to a meta-stable state often called prethermal. The observed prethermal state is a new form of prethermalization that arises due to long-range interactions and open boundary conditions, even in the thermodynamic limit. This prethermalization is observed in a system of up to 22 spins. We expect that system sizes can be extended to 30 spins with only minor upgrades to the current apparatus. These results emphasize that, as the technology improves, the techniques and tools developed here can potentially be used to perform simulations that surpass the capability of even the most sophisticated classical techniques, enabling the study of a whole new regime of quantum many-body physics.
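The simulated spin Hamiltonian is conventionally written in the form below (an assumed standard form with power-law couplings, not quoted from this work); the site-resolved field terms correspond to the individually addressed Stark shifts mentioned above.

```latex
% Assumed standard form (conventions vary); Ising axis taken along x:
H \;=\; \sum_{i<j} J_{ij}\,\sigma^{x}_{i}\sigma^{x}_{j}
      \;+\; B_{x}\sum_{i}\sigma^{x}_{i}          % global longitudinal field
      \;+\; B_{y}\sum_{i}\sigma^{y}_{i}          % global transverse field
      \;+\; \sum_{i} b_{i}\,\sigma^{y}_{i},      % site-resolved transverse fields
\qquad J_{ij} \sim \frac{J_{0}}{|i-j|^{\alpha}}, \quad 0 < \alpha < 3 .
```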
Abstract:
Self-replication and compartmentalization are two central properties thought to be essential for minimal life, and understanding how such processes interact in the emergence of complex reaction networks is crucial to exploring the development of complexity in chemistry and biology. Autocatalysis can emerge from multiple different mechanisms such as formation of an initiator, template self-replication and physical autocatalysis (where micelles formed from the reaction product solubilize the reactants, leading to higher local concentrations and therefore higher rates). Amphiphiles are also used in artificial life studies to create protocell models such as micelles, vesicles and oil-in-water droplets, and can increase reaction rates by encapsulation of reactants. So far, no template self-replicator exists which is capable of compartmentalization, or transferring this molecular scale phenomenon to micro or macro-scale assemblies. Here a system is demonstrated where an amphiphilic imine catalyses its own formation by joining a non-polar alkyl tail group with a polar carboxylic acid head group to form a template, which was shown to form reverse micelles by Dynamic Light Scattering (DLS). The kinetics of this system were investigated by 1H NMR spectroscopy, showing clearly that a template self-replication mechanism operates, though there was no evidence that the reverse micelles participated in physical autocatalysis. Active oil droplets, composed from a mixture of insoluble organic compounds in an aqueous sub-phase, can undergo processes such as division, self-propulsion and chemotaxis, and are studied as models for minimal cells, or protocells. Although in most cases the Marangoni effect is responsible for the forces on the droplet, the behaviour of the droplet depends heavily on the exact composition. Though theoretical models are able to calculate the forces on a droplet, to model a mixture of oils on an aqueous surface where compounds from the oil phase are dissolving and diffusing through the aqueous phase is beyond current computational capability. The behaviour of a droplet in an aqueous phase can only be discovered through experiment, though it is determined by the droplet's composition. By using an evolutionary algorithm and a liquid handling robot to conduct droplet experiments and decide which compositions to test next, entirely autonomously, the composition of the droplet becomes a chemical genome capable of evolution. The selection is carried out according to a fitness function, which ranks the formulation based on how well it conforms to the chosen fitness criteria (e.g. movement or division). Over successive generations, significant increases in fitness are achieved, and this increase is higher with more components (i.e. greater complexity). Other chemical processes such as chemiluminescence and gelation were investigated in active oil droplets, demonstrating the possibility of controlling chemical reactions by selective droplet fusion. Potential future applications for this might include combinatorial chemistry, or additional fitness goals for the genetic algorithm. Combining the self-replication and the droplet protocells research, it was demonstrated that the presence of the amphiphilic replicator lowers the interfacial tension between droplets of a reaction mixture in organic solution and the alkaline aqueous phase, causing them to divide. 
Periodic sampling by a liquid handling robot revealed that the extent of droplet fission increased as the reaction progressed, producing more individual protocells with increased self-replication. This demonstrates coupling of the molecular scale phenomenon of template self-replication to a macroscale physicochemical effect.
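A minimal sketch of the evolutionary loop described above, assuming a composition vector over a fixed set of oils as the chemical genome; the fitness function is a placeholder standing in for the robot's video-based measurement of movement or division.

```python
import random

N_OILS, POP, GENS = 4, 10, 5

def normalize(comp):
    total = sum(comp)
    return [c / total for c in comp]

def random_composition():
    return normalize([random.random() for _ in range(N_OILS)])

def mutate(comp, sigma=0.1):
    return normalize([max(1e-3, c + random.gauss(0, sigma)) for c in comp])

def fitness(comp):
    """Placeholder for the robot experiment: in the real system this score
    would come from analysis of droplet movement or division behaviour."""
    return -sum((c - 1.0 / N_OILS) ** 2 for c in comp)   # toy objective

population = [random_composition() for _ in range(POP)]
for gen in range(GENS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: POP // 2]                          # selection
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP - len(parents))]
    print(f"gen {gen}: best fitness {fitness(ranked[0]):.4f}")
```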