878 results for Parallel processing (Electronic computers)
Abstract:
Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Signal processing techniques for mitigating intra-channel and inter-channel fiber nonlinearities are reviewed. More detailed descriptions of three specific examples highlight the diversity of the electronic and optical approaches that have been investigated.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
A scenario-based two-stage stochastic programming model for gas production network planning under uncertainty is usually a large-scale nonconvex mixed-integer nonlinear programme (MINLP), which can be efficiently solved to global optimality with nonconvex generalized Benders decomposition (NGBD). This paper is concerned with the parallelization of NGBD to exploit multiple available computing resources. Three parallelization strategies are proposed, namely, naive scenario parallelization, adaptive scenario parallelization, and adaptive scenario and bounding parallelization. A case study of two industrial natural gas production network planning problems shows that, while NGBD without parallelization is already faster than a state-of-the-art global optimization solver by an order of magnitude, parallelization can improve efficiency by several times on computers with multicore processors. The adaptive scenario and bounding parallelization achieves the best overall performance among the three proposed strategies.
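The naive scenario parallelization described above can be sketched as follows: the independent scenario subproblems are dispatched to workers all at once, and their bound contributions are summed. The `solve_scenario` function and its toy quadratic subproblem are illustrative assumptions, not the paper's MINLP model.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_scenario(scenario):
    # Hypothetical scenario subproblem: returns this scenario's
    # contribution to the decomposition bound (a toy quadratic here,
    # standing in for a real nonconvex subproblem solve).
    demand, prob = scenario
    return prob * demand ** 2

def naive_scenario_parallelization(scenarios, workers=4):
    # Naive strategy: all subproblems are independent, so dispatch
    # them to a worker pool in one shot and aggregate the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(solve_scenario, scenarios))

scenarios = [(10.0, 0.5), (20.0, 0.3), (30.0, 0.2)]  # (demand, probability)
bound = naive_scenario_parallelization(scenarios)
```

The adaptive strategies in the paper go further by re-dispatching work as subproblems finish, rather than waiting on a fixed batch.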
Abstract:
Structured parallel programming, and in particular programming models built on the algorithmic skeleton or parallel design pattern concepts, is increasingly considered to be the only viable means of supporting effective development of scalable and efficient parallel programs. Structured parallel programming models have been assessed in a number of works with respect to performance. In this paper we consider how the use of structured parallel programming models allows knowledge of the parallel patterns present to be harnessed to address both performance and energy consumption. We consider different features of structured parallel programming that may be leveraged to influence the performance/energy trade-off, and we discuss a preliminary set of experiments validating our claims.
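As one illustration of how a skeleton exposes a structural knob that a runtime could tune for the performance/energy trade-off, the sketch below shows a minimal task-farm skeleton whose degree of parallelism is an explicit parameter; the `farm` function is a hypothetical example, not an API from the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, inputs, degree):
    # Task-farm skeleton: apply 'worker' to each input in parallel.
    # 'degree' is the structural knob a skeleton runtime could tune,
    # e.g. lowering it to save energy when throughput demands allow.
    with ThreadPoolExecutor(max_workers=degree) as pool:
        return list(pool.map(worker, inputs))  # preserves input order

squares = farm(lambda x: x * x, range(5), degree=2)
```

Because the pattern (a farm) is known to the runtime, such knobs can be adjusted without touching the application code, which is the point the abstract makes about harnessing pattern knowledge.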
Abstract:
Graph analytics is an important and computationally demanding class of data analytics. Large-scale graph analytics must balance scalability, ease of use and high performance, so it is necessary to hide the complexity of parallelism, data distribution and memory locality behind an abstract interface. The aim of this work is to build a scalable, NUMA-aware graph analytics framework that does not demand significant parallel programming experience.
The realization of such a system faces two key problems:
(i) how to develop a scale-free parallel programming framework that scales efficiently across NUMA domains; (ii) how to efficiently apply graph partitioning in order to create separate and largely independent work items that can be distributed among threads.
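Problem (ii) can be illustrated with the simplest possible partitioner, which splits the vertex set into contiguous, nearly equal ranges, one per thread or NUMA domain, so that each work item touches mostly-local data. This is an illustrative sketch only, not the framework's actual partitioning scheme.

```python
def partition_vertices(num_vertices, num_parts):
    # Split the vertex set into contiguous ranges of nearly equal
    # size, one per thread / NUMA domain.  Contiguity keeps each
    # work item's vertex data in one memory region, which is the
    # property NUMA-aware placement relies on.
    base, extra = divmod(num_vertices, num_parts)
    parts, start = [], 0
    for i in range(num_parts):
        size = base + (1 if i < extra else 0)  # spread the remainder
        parts.append(range(start, start + size))
        start += size
    return parts

parts = partition_vertices(10, 3)
```

Real frameworks refine this by balancing edge counts rather than vertex counts, since skewed degree distributions make equal vertex ranges unequal in work.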
Abstract:
The literature suggests that REM sleep plays a role in the associative integration of emotional memory. Moreover, REM-sleep dreams, and in particular their bizarre and emotional nature, appear to reflect this associative and emotional function of REM sleep. The consequence of frequent nightmares for this process is unknown, although the awakening caused by a nightmare appears to interfere with the functions of REM sleep. The first objective of this thesis was to conceptually replicate earlier research demonstrating that REM sleep enables hyper-associative access to memory. Using a daytime nap allowed us to assess the effects of REM sleep, compared with slow-wave sleep and wakefulness, on participants' performance on a semantic task measuring associational breadth (AB). The results showed that only subjects awakened from REM sleep responded with atypical associations, suggesting that REM sleep is specific in its capacity to integrate emotional memory traces (Article 1). In addition, dream reports from REM sleep were more bizarre and more emotionally intense than those from slow-wave sleep; these attributes appear to reflect the associative and emotional nature of REM sleep (Article 2). The second objective of the thesis was to clarify whether and how emotional memory processing during REM sleep is altered in frequent nightmare disorder (NM). Using the same protocol, our results showed that NM participants scored higher before a nap, consistent with earlier observations that nightmare sufferers are more creative.
After REM sleep, both the NM and CTL groups showed similar changes in their associative access, with lower AB-negative and higher AB-positive scores. One week later, only the NM participants maintained this change in their semantic network (Article 3). These results suggest that, over time, nightmares may interfere with the integration of emotional memory during REM sleep. With respect to imagery, NM participants showed more bizarreness and more positive, but not negative, emotion in their daydreams (Article 4). These intensified attributes again suggest that NM participants are more imaginative and creative while awake. Overall, the results support the role of REM sleep in the associative integration of emotional memory. However, our findings concerning nightmare disorder are not entirely consistent with theories suggesting that nightmares are dysfunctional. The NM group showed more emotional associativity, as well as more positive and bizarre imagery while awake. We therefore propose a new theory of environmental sensitivity associated with nightmare disorder, suggesting that a heightened sensitivity to a range of environmental contexts underlies the unique symptoms and imaginative richness observed in frequent nightmare sufferers. Although more research is needed, it is possible that these individuals could benefit from supportive environments, and that they may have an adaptive advantage with respect to creative expression, which is particularly relevant when considering their prognosis and the different types of treatment.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Objective: The study was designed to validate use of electronic health records (EHRs) for diagnosing bipolar disorder and classifying control subjects. Method: EHR data were obtained from a health care system of more than 4.6 million patients spanning more than 20 years. Experienced clinicians reviewed charts to identify text features and coded data consistent or inconsistent with a diagnosis of bipolar disorder. Natural language processing was used to train a diagnostic algorithm with 95% specificity for classifying bipolar disorder. Filtered coded data were used to derive three additional classification rules for case subjects and one for control subjects. The positive predictive value (PPV) of EHR-based bipolar disorder and subphenotype diagnoses was calculated against diagnoses from direct semistructured interviews of 190 patients by trained clinicians blind to EHR diagnosis. Results: The PPV of bipolar disorder defined by natural language processing was 0.85. Coded classification based on strict filtering achieved a value of 0.79, but classifications based on less stringent criteria performed less well. No EHR-classified control subject received a diagnosis of bipolar disorder on the basis of direct interview (PPV=1.0). For most subphenotypes, values exceeded 0.80. The EHR-based classifications were used to accrue 4,500 bipolar disorder cases and 5,000 controls for genetic analyses. Conclusions: Semiautomated mining of EHRs can be used to ascertain bipolar disorder patients and control subjects with high specificity and predictive value compared with diagnostic interviews. EHRs provide a powerful resource for high-throughput phenotyping for genetic and clinical research.
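The PPV metric used throughout this validation is standard and can be sketched as follows; the example labels are invented for illustration, not taken from the study's data.

```python
def positive_predictive_value(predicted, gold):
    # PPV = true positives / all positive predictions, i.e. the
    # fraction of EHR-flagged cases confirmed by the gold standard
    # (here, the direct diagnostic interview).
    tp = sum(1 for p, g in zip(predicted, gold) if p and g)
    fp = sum(1 for p, g in zip(predicted, gold) if p and not g)
    return tp / (tp + fp)

# Toy example: 4 EHR-flagged cases, 3 confirmed on interview.
ppv = positive_predictive_value([1, 1, 1, 0, 1], [1, 1, 0, 0, 1])
```

Note that PPV depends on the prevalence of the condition in the validated sample, which is why the study reports it separately for each classification rule.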
Abstract:
We present Dithen, a novel computation-as-a-service (CaaS) cloud platform specifically tailored to the parallel execution of large-scale multimedia tasks. Dithen handles the upload/download of both multimedia data and executable items, the assignment of compute units to multimedia workloads, and the reactive control of the available compute units to minimize the cloud infrastructure cost under deadline-abiding execution. Dithen combines three key properties: (i) the reactive assignment of individual multimedia tasks to available computing units according to availability and predetermined time-to-completion constraints; (ii) optimal resource estimation based on Kalman-filter estimates; (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the congestion-control mechanism of the Transmission Control Protocol) for controlling the number of units servicing workloads. The deployment of Dithen over Amazon EC2 spot instances is shown to be capable of processing more than 80,000 video transcoding, face detection and image processing tasks (equivalent to the processing of more than 116 GB of compressed data) for less than $1 in billing cost from EC2. Moreover, the proposed AIMD-based control mechanism, in conjunction with the Kalman estimates, is shown to provide a more than 27% reduction in EC2 spot instance cost against methods based on reactive resource estimation. Finally, Dithen is shown to offer a 38% to 500% reduction of the billing cost against the current state-of-the-art in CaaS platforms on Amazon EC2 (Amazon Lambda and Amazon Autoscale). A baseline version of Dithen is currently available at dithen.com.
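The AIMD rule applied here to fleet sizing can be sketched as follows; the parameters and the boolean deadline-pressure signal are illustrative assumptions, not Dithen's actual controller interface.

```python
def aimd_step(units, deadline_pressure, add=2, beta=0.5, floor=1):
    # AIMD rule, as in TCP congestion control: additively grow the
    # fleet while deadlines are under pressure, multiplicatively
    # shrink it when capacity is slack, never below 'floor' units.
    if deadline_pressure:
        return units + add
    return max(floor, int(units * beta))

# Toy trace: pressure signal alternates as workload arrives and drains.
trace, units = [], 4
for pressure in [True, True, False, True, False]:
    units = aimd_step(units, pressure)
    trace.append(units)
```

The multiplicative decrease is what keeps over-provisioning (and hence billing cost) bounded, while the additive increase probes for just enough capacity to meet deadlines.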
Abstract:
In this paper, we develop a fast implementation of a hyperspectral coded aperture (HYCA) algorithm on different platforms using OpenCL, an open standard for parallel programming on heterogeneous systems, which encompasses a wide variety of devices, from dense multicore systems by major manufacturers such as Intel or ARM to accelerators such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), the Intel Xeon Phi and other custom devices. Our proposed implementation of HYCA significantly reduces its computational cost. Our experiments, conducted using simulated data, reveal considerable acceleration factors. Implementations of this kind, written in the same descriptive language for different architectures, are important for realistically gauging the potential of heterogeneous platforms for efficient hyperspectral image processing in real remote sensing missions.
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements which change as a function of time. When such a problem is solved on a message passing multiprocessor machine [5], the combination of these characteristics leads to system performance which deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
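A decision policy of the kind discussed, weighing the one-off cost of remapping against the expected benefit over the remaining run, can be sketched as follows; the linear cost model is an illustrative assumption, not the CAPTools strategy itself.

```python
def should_rebalance(step_time_imbalanced, step_time_balanced,
                     remaining_steps, remap_cost):
    # Generic remapping policy: trigger a global re-balance only when
    # the time saved over the remaining iterations outweighs the
    # one-off cost of redistributing the mesh among processors.
    gain = (step_time_imbalanced - step_time_balanced) * remaining_steps
    return gain > remap_cost

# Early in the run a remap pays for itself; near the end it does not.
early = should_rebalance(1.2, 1.0, remaining_steps=100, remap_cost=15.0)
late = should_rebalance(1.2, 1.0, remaining_steps=50, remap_cost=15.0)
```

In practice the per-step times and the remaining horizon must themselves be estimated at runtime, which is why the paper stresses that the policy should apply automatically across computations with varying load behaviour.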