823 results for Boolean Computations


Relevance:

10.00%

Publisher:

Abstract:

This thesis studies the amplitude amplification algorithm and its applications to property testing. We use amplitude amplification to propose the most efficient quantum algorithm to date for testing the linearity of Boolean functions, and we generalize our new algorithm to test whether a function between two finite abelian groups is a homomorphism. The best known quantum algorithm for testing the symmetry of Boolean functions is also improved, and we use this new algorithm to test the quasi-symmetry of Boolean functions. We then deepen the study of the number of black-box queries made by the amplitude amplification algorithm when the initial amplitude is unknown. A rigorous description of the random variable representing this number is presented, followed by the previously known upper bound on its expectation. New results on the variance of this variable follow. In particular, we show that in the general case the variance is infinite, but we also show that, for an appropriate choice of parameters, it becomes bounded above.
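
For context, the classical Blum-Luby-Rubinfeld (BLR) linearity test that the quantum algorithm above improves upon can be sketched in a few lines of Python; this is an illustrative classical baseline with our own naming, not the thesis's quantum algorithm:

    import random

    def blr_linearity_test(f, n, trials=100):
        """Classical BLR test: f : {0,1}^n -> {0,1} is linear iff
        f(x XOR y) == f(x) XOR f(y) for all x, y.  Checking random
        pairs rejects far-from-linear functions with high probability."""
        for _ in range(trials):
            x, y = random.getrandbits(n), random.getrandbits(n)
            if f(x ^ y) != f(x) ^ f(y):
                return False              # witness of non-linearity
        return True                       # no violation seen: probably linear

    # A genuinely linear function: parity of a masked subset of the bits.
    mask = 0b1011
    assert blr_linearity_test(lambda x: bin(x & mask).count("1") % 2, n=4)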

Relevance:

10.00%

Publisher:

Abstract:

Our R package PoweR aims to make it easy to obtain or verify empirical power studies for goodness-of-fit tests. As such, it can be viewed as a tool for reproducible research computing, since it makes it very easy to reproduce (or to detect errors in) simulation results already published in the literature. Using our package, it becomes easy to design new simulation studies. Critical values and powers of many test statistics under a wide variety of alternative distributions are obtained very quickly and accurately using a C/C++ and R environment. One can even rely on the R package snow for parallel computation on a multicore processor. Results can be displayed as LaTeX tables or specialized graphs, which can be incorporated directly into one's publications. This paper gives an overview of the main goals and design principles, as well as strategies for adaptation and extension.
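
PoweR itself is an R package, so the following Python sketch deliberately avoids its API; it only illustrates the kind of Monte Carlo computation behind such an empirical power study (the sampler, the test, and all parameter values are arbitrary choices of ours):

    import numpy as np
    from scipy import stats

    def empirical_power(alt_sampler, test_pvalue, n=50, alpha=0.05,
                        reps=2_000, rng=None):
        """Monte Carlo power estimate: the fraction of samples drawn
        from the alternative for which the test rejects at level alpha."""
        rng = rng or np.random.default_rng(0)
        rejections = sum(test_pvalue(alt_sampler(n, rng)) < alpha
                         for _ in range(reps))
        return rejections / reps

    # Example: power of the Shapiro-Wilk normality test against a
    # Student-t(3) alternative at sample size n = 50.
    power = empirical_power(
        alt_sampler=lambda n, rng: rng.standard_t(df=3, size=n),
        test_pvalue=lambda x: stats.shapiro(x).pvalue,
    )
    print(f"estimated power: {power:.3f}")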

Relevance:

10.00%

Publisher:

Abstract:

Although very hard to solve, the Boolean satisfiability problem (SAT) is frequently used when modelling industrial applications. Accordingly, the last two decades have seen spectacular progress in the tools designed to find solutions to this NP-complete problem. Two broad avenues have been explored to produce these tools, namely software and hardware approaches. To refine and improve these solvers, many techniques and heuristics have been proposed by the research community. The ultimate goal of these tools has been to solve industrial-scale problems, which software solvers have more or less accomplished. Initially, the goal of using reconfigurable hardware was to produce solvers that could find solutions faster than their software counterparts. However, the level of sophistication of the latter has increased to the point that they remain the best choice for solving SAT. Nevertheless, modern software solvers still fail to solve certain SAT instances efficiently. The main goal of this thesis is to explore SAT solving in the context of reconfigurable hardware, in order to characterize the ingredients of an efficient SAT solver that draws its computing power from the parallelism offered by an FPGA platform. The parallel prototype implemented in this work can compete, in terms of execution speed, with other solvers (hardware and software), and this without using any heuristics. We thus show that our hardware approach is a promising option for solving large industrial instances that are difficult to tackle with a software approach.
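
As a point of reference for the software avenue discussed above, a minimal heuristic-free DPLL solver, the classic software baseline, fits in a few lines; this is a didactic sketch, not the thesis's FPGA design:

    def dpll(clauses, assignment=None):
        """Minimal DPLL SAT solver for CNF clauses given as lists of
        non-zero ints (a negative literal -v means NOT v).  Returns a
        satisfying assignment as a dict, or None if unsatisfiable.
        No heuristics beyond unit propagation, mirroring the
        heuristic-free spirit of the hardware prototype."""
        assignment = assignment or {}
        changed = True
        while changed:                        # unit propagation to fixpoint
            changed = False
            simplified = []
            for clause in clauses:
                if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                    continue                  # clause already satisfied
                live = [l for l in clause if abs(l) not in assignment]
                if not live:
                    return None               # empty clause: conflict
                if len(live) == 1:
                    assignment[abs(live[0])] = live[0] > 0
                    changed = True
                simplified.append(live)
            clauses = simplified
        if not clauses:
            return assignment                 # every clause satisfied
        v = abs(clauses[0][0])                # branch on first unassigned var
        for value in (True, False):
            result = dpll(clauses, {**assignment, v: value})
            if result is not None:
                return result
        return None

    print(dpll([[1, 2], [-1, 2], [-2, 3]]))   # {1: True, 2: True, 3: True}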

Relevance:

10.00%

Publisher:

Abstract:

The source code of the library that was developed accompanies this deposit, in the state it was in at that time. A more up-to-date version can be found on GitHub (http://github.com/abergeron).

Relevance:

10.00%

Publisher:

Abstract:

Using the most recent data collected by the ATLAS detector in pp collisions at 7 and 8 TeV at the LHC, this thesis sets stringent constraints on a multitude of models going beyond the Standard Model (SM) of particle physics. In particular, two types of hypothetical particles, which exist in various theoretical models but are absent from the SM, are studied and probed. The first type studied is vector-like quarks (VLQ), produced in pp collisions through electroweak couplings with the light quarks u and d. These VLQs are searched for in their decays to a W or Z boson and a light quark. Theoretical arguments establish that, under certain reasonable conditions, single production would dominate pair production of VLQs. The particular topology of single-production VLQ events then allows efficient optimization techniques to be deployed to extract them from the electroweak backgrounds. The second type of particle searched for is those decaying to WZ, where these gauge bosons W and Z decay leptonically. The final states detected by ATLAS are therefore events with three leptons and missing transverse energy. The invariant mass distribution of these objects is then examined to determine whether new resonances are present, which would manifest themselves as a localized excess. Although at first sight these two new types of particles have very little in common, they are in fact both closely linked to electroweak symmetry breaking. In several theoretical models, the hypothetical existence of VLQs is proposed to cancel the contributions of the top quark to the radiative corrections to the mass of the SM Higgs. In parallel, other models predict WZ resonances while suggesting that the Higgs is a composite particle, thereby upending the entire Higgs sector of the SM. Thus, the two analyses presented in this thesis are fundamentally connected to the very nature of the Higgs, thereby broadening our knowledge of the origin of the intrinsic mass of particles. In the end, neither analysis observed a significant excess in its respective signal regions, which makes it possible to set limits on the production cross section as a function of the resonance mass.
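
The invariant-mass distribution mentioned above is built from reconstructed four-momenta via m^2 = (sum E)^2 - |sum p|^2; the following is a toy illustration of that arithmetic only, not ATLAS software:

    import math

    def invariant_mass(objects):
        """Invariant mass of a set of objects given as (E, px, py, pz)
        four-vectors in consistent units (e.g. GeV):
        m^2 = (sum E)^2 - |sum p|^2."""
        E  = sum(o[0] for o in objects)
        px = sum(o[1] for o in objects)
        py = sum(o[2] for o in objects)
        pz = sum(o[3] for o in objects)
        return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

    # Toy example: two back-to-back 45.6 GeV leptons give a Z-like mass.
    lep1 = (45.6,  45.6, 0.0, 0.0)
    lep2 = (45.6, -45.6, 0.0, 0.0)
    print(f"m_ll = {invariant_mass([lep1, lep2]):.1f} GeV")   # 91.2 GeV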

Relevance:

10.00%

Publisher:

Abstract:

The present research is aimed at studying the charnockites and associated rocks of the Madurai Granulite Block (MGB), especially in terms of their field settings, texture, mineralogy, and mineral chemistry, and at analyzing their petrogenesis with the help of thermobarometric studies and geochronological constraints. The mechanism of charnockitization by the influx of CO2-rich fluids, and its relation to graphite mineralization, remains a matter of discussion and study. The objectives of the present study are: to delineate the petrological and structural relationships of charnockites and associated gneissic rocks; to study the field and petrogenetic aspects of graphite mineralization in the MGB; to establish and re-evaluate the P-T conditions of formation of the rocks with the aid of thermobarometric computations and to compare them with earlier studies; to characterize the graphite with XRD, Raman spectroscopy and isotope studies, together with a search into its genesis and its relation to the high-grade metamorphism of the terrain; to evaluate the role of CO2-bearing fluids in the processes of charnockitization as well as in the genesis of graphite within the high-grade terrain; and to delineate the metamorphic geochronology of selected rocks using the 'monazite dating' technique with EPMA.

Relevance:

10.00%

Publisher:

Abstract:

We present a novel approach to computing the orientation moments and rheological properties of a dilute suspension of spheroids in a simple shear flow at arbitrary Péclet number, based on a generalised Langevin equation method. This method differs from the diffusion equation method commonly used to model similar systems in that the actual equations of motion for the orientations of the individual particles are used in the computations, instead of a solution of the diffusion equation of the system. It also differs from the method of 'Brownian dynamics simulations' in that the equations used for the simulations are deterministic differential equations even in the presence of noise, and not stochastic differential equations as in Brownian dynamics simulations. One advantage of the present approach over the Fokker-Planck equation formalism is that it employs a common strategy that can be applied across a wide range of shear and diffusion parameters. Also, since deterministic differential equations are easier to simulate than stochastic differential equations, the Langevin equation method presented in this work is more efficient and less computationally intensive than Brownian dynamics simulations. We derive the Langevin equations governing the orientations of the particles in the suspension and develop a procedure for obtaining the equation of motion for any orientation moment. A computational technique is described for simulating the orientation moments dynamically from a set of time-averaged Langevin equations, which can be used to obtain the moments when the governing equations are harder to solve analytically. The results obtained using this method are in good agreement with those available in the literature. The above computational method is also used to investigate the effect of rotational Brownian motion on the rheology of the suspension under the action of an external force field. The force field is assumed to be either constant or periodic. In the case of constant external fields, earlier results in the literature are reproduced, while for the case of periodic forcing certain parametric regimes corresponding to weak Brownian diffusion are identified where the rheological parameters evolve chaotically and settle onto a low-dimensional attractor. The response of the system to variations in the magnitude and orientation of the force field and in the strength of diffusion is also analyzed through numerical experiments. It is also demonstrated that the aperiodic behaviour exhibited by the system could not have been picked up by the diffusion equation approach as presently used in the literature. The main contributions of this work include the preparation of the basic framework for applying the Langevin method to standard flow problems, the quantification of rotary Brownian effects using the new method, the paired-moment scheme for computing the moments and its use in solving an otherwise intractable problem, especially in the limit of small Brownian motion where the problem becomes singular, and a demonstration of how systems governed by a Fokker-Planck equation can be explored for possible chaotic behaviour.
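
For contrast with the deterministic Langevin method described above, a conventional stochastic (Brownian dynamics) treatment of the in-plane Jeffery angle of a spheroid in simple shear looks like the following sketch; the drift is the standard in-plane Jeffery orbit with an added rotary-diffusion noise term, and all parameter values are arbitrary illustrations:

    import math, random

    def jeffery_brownian(r=5.0, gamma=1.0, Dr=0.01, dt=1e-3,
                         steps=100_000, seed=0):
        """Euler-Maruyama integration of the in-plane angle phi of a
        spheroid (aspect ratio r) in simple shear (rate gamma) with
        rotary diffusivity Dr.  Returns the time-averaged orientation
        moments <cos^2 phi> and <sin phi cos phi>."""
        rng = random.Random(seed)
        phi, c2, sc = 0.0, 0.0, 0.0
        for _ in range(steps):
            # Jeffery drift: dphi/dt = gamma/(r^2+1) * (r^2 cos^2 + sin^2)
            drift = gamma / (r**2 + 1) * (r**2 * math.cos(phi)**2
                                          + math.sin(phi)**2)
            phi += drift * dt + math.sqrt(2 * Dr * dt) * rng.gauss(0.0, 1.0)
            c2 += math.cos(phi)**2
            sc += math.sin(phi) * math.cos(phi)
        return c2 / steps, sc / steps

    print(jeffery_brownian())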

Relevance:

10.00%

Publisher:

Abstract:

Electromagnetic tomography has been applied to problems in nondestructive evaluation, ground-penetrating radar, synthetic aperture radar, target identification, electrical well logging, medical imaging, etc. The problem of electromagnetic tomography involves the estimation of the cross-sectional distribution of dielectric permittivity, conductivity, etc., based on measurements of the scattered fields. The inverse scattering problem of electromagnetic imaging is highly nonlinear and ill-posed, and is liable to get trapped in local minima. The iterative solution techniques employed for computing the inverse scattering problem of electromagnetic imaging are highly computation intensive. Thus the solution to the electromagnetic imaging problem is beset with convergence and computational issues. The aim of this thesis is to develop methods for improving the convergence and reducing the total computation for tomographic imaging of two-dimensional dielectric cylinders illuminated by TM polarized waves, where the scattering problem is defined using scalar equations. A multi-resolution frequency hopping approach is proposed, as opposed to the conventional frequency hopping approach employed to image large inhomogeneous scatterers. The strategy was tested on both synthetic and experimental data and gave results that were better localized, while also accelerating the iterative imaging procedure. A Degree of Symmetry formulation is introduced to locate the scatterer in the investigation domain when the scatterer cross section is circular. The investigation domain can thus be reduced, which reduces the degrees of freedom of the inverse scattering process, so that the entire measured scattered data is available for the optimization of a smaller number of pixels. This results in better and more robust reconstructions of the scatterer cross-sectional profile. The Degree of Symmetry formulation can also be applied to the practical problem of limited-angle tomography, as in the case of a buried pipeline, where the ill-posedness is much greater. The formulation was also tested using experimental data generated from an experimental setup designed for this purpose. The experimental results confirmed the practical applicability of the formulation.
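
The frequency-hopping idea above can be shown schematically: reconstruct at a low frequency first, then seed each higher-frequency inversion with the previous result to avoid local minima. The sketch below substitutes a linearised (Born-like) toy model with a Landweber inner solver for the real nonlinear TM-wave problem; every name, operator, and value here is an illustrative assumption of ours:

    import numpy as np

    def invert_at_frequency(A, data, eps_init, iters=200):
        """Toy inner solver: Landweber iterations on linearised model
        data ~= A @ eps, warm-started from eps_init.  Real inverse
        scattering is nonlinear; this only illustrates the outer loop."""
        eps = eps_init.copy()
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe Landweber step
        for _ in range(iters):
            eps -= step * A.T @ (A @ eps - data)
        return eps

    def frequency_hopping(operators, measurements, eps0):
        """Solve from low to high frequency, seeding each inversion
        with the previous (coarser) reconstruction."""
        eps = eps0
        for f in sorted(operators):
            eps = invert_at_frequency(operators[f], measurements[f], eps)
        return eps

    # Toy demo with random "scattering operators" at three frequencies.
    rng = np.random.default_rng(1)
    true_eps = rng.uniform(1.0, 2.0, size=32)
    ops = {f: rng.standard_normal((64, 32)) for f in (1.0, 2.0, 4.0)}
    meas = {f: ops[f] @ true_eps for f in ops}
    rec = frequency_hopping(ops, meas, eps0=np.ones(32))
    print(np.linalg.norm(rec - true_eps) / np.linalg.norm(true_eps))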

Relevance:

10.00%

Publisher:

Abstract:

This thesis, entitled 'Spectral theory of bounded self-adjoint operators: a linear algebraic approach', presents results that can be classified as three different approaches to spectral approximation problems. The truncation method and its perturbed versions are part of the classical linear algebraic approach to the subject. The use of block Toeplitz-Laurent operators and matrix-valued symbols is considered as a particular example where linear algebraic techniques are effective in simplifying problems in inverse spectral theory. The abstract approach to spectral approximation problems via preconditioners and Korovkin-type theorems is an attempt to make the computations involved well conditioned. In all these approaches, however, linear algebra is the central object. The objective of this study is to discuss linear algebraic techniques in the spectral theory of bounded self-adjoint operators on a separable Hilbert space. The use of the truncation method in approximating the bounds of the essential spectrum and the discrete spectral values outside these bounds is well known. The spectral gap prediction and related results are proved in the second chapter. The discrete versions of Borg-type theorems, proved in the third chapter, partly overlap with some known results in operator theory. The purely linear algebraic approach is the main novelty of the results proved here.
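
The truncation method mentioned above replaces the operator by its n x n leading principal submatrices and reads spectral information off their eigenvalues. A minimal numerical illustration on a textbook banded self-adjoint Toeplitz operator (not an example drawn from the thesis):

    import numpy as np

    def truncation_eigs(coeffs, n):
        """Eigenvalues of the n x n truncation of a banded self-adjoint
        Toeplitz operator.  coeffs maps a positive offset k to the real
        Fourier coefficient a_k of the symbol (diagonal taken as zero)."""
        T = sum(a * (np.eye(n, k=k) + np.eye(n, k=-k))
                for k, a in coeffs.items())
        return np.linalg.eigvalsh(T)

    # Symbol f(theta) = 2*cos(theta): the essential spectrum is [-2, 2],
    # and the truncation eigenvalues fill this interval as n grows.
    for n in (8, 64, 512):
        eigs = truncation_eigs({1: 1.0}, n)
        print(n, round(eigs.min(), 4), round(eigs.max(), 4))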

Relevance:

10.00%

Publisher:

Abstract:

This thesis, entitled 'Reliability Modelling and Analysis in Discrete Time', develops some concepts and models useful in the analysis of discrete lifetime data. The present study consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities in the component distributions, are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has been discussed already. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models in single populations naturally extends to the case of populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, the least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well known families. It is further demonstrated that partial moments are equally efficient and convenient compared to many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation, which could be the subject of future work in this area.
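
As a concrete instance of the mixture quantities discussed above (a standard fact rather than a result specific to the thesis), a mixture of two geometric lifetimes, each with constant failure rate, exhibits a decreasing failure rate; a short sketch:

    def geometric_mixture(w, p1, p2):
        """Two-component geometric mixture on x = 0, 1, 2, ...
        pmf      f(x) = w*p1*(1-p1)**x + (1-w)*p2*(1-p2)**x
        survival S(x) = P(X >= x) = w*(1-p1)**x + (1-w)*(1-p2)**x"""
        pmf  = lambda x: w * p1 * (1 - p1)**x + (1 - w) * p2 * (1 - p2)**x
        surv = lambda x: w * (1 - p1)**x + (1 - w) * (1 - p2)**x
        return pmf, surv

    def failure_rate(pmf, surv, x):
        """Discrete failure rate h(x) = P(X = x) / P(X >= x)."""
        return pmf(x) / surv(x)

    def mean_residual_life(surv, x, tail=10_000):
        """Discrete MRL r(x) = E[X - x | X >= x] = sum_{j > x} S(j) / S(x)."""
        return sum(surv(j) for j in range(x + 1, x + tail)) / surv(x)

    # h(x) falls from 0.185 toward the smaller component rate 0.05.
    pmf, surv = geometric_mixture(w=0.3, p1=0.5, p2=0.05)
    for x in (0, 5, 20):
        print(x, failure_rate(pmf, surv, x), mean_residual_life(surv, x))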

Relevance:

10.00%

Publisher:

Abstract:

Most commercial and financial data are stored in decimal form. Recently, support for decimal arithmetic has received increased attention due to its growing importance in financial analysis, banking, tax calculation, currency conversion, insurance, telephone billing and accounting. Performing decimal arithmetic on systems that do not support decimal computations may give a result with representation error, conversion error, and/or rounding error. In this world of precision, such errors are no longer tolerable. The errors can be eliminated, and better accuracy achieved, if decimal computations are done using Decimal Floating Point (DFP) units. But the floating-point arithmetic units in today's general-purpose microprocessors are based on the binary number system, and decimal computations are done using binary arithmetic. Only a few common decimal numbers can be exactly represented in Binary Floating Point (BFP). In many cases, the law requires that results generated from financial calculations performed on a computer exactly match manual calculations. Currently, many applications involving fractional decimal data perform decimal computations either in software or with a combination of software and hardware. The performance can be dramatically improved by complete hardware DFP units, and this leads to the design of processors that include DFP hardware. VLSI implementations using the same modular building blocks can decrease system design and manufacturing cost. A multiplexer realization is a natural choice from the viewpoint of cost and speed. This thesis focuses on the design and synthesis of efficient decimal MAC (Multiply ACcumulate) architectures for high-speed decimal processors based on the IEEE Standard for Floating-Point Arithmetic (IEEE 754-2008). The research goal is to design and synthesize decimal MAC architectures that achieve higher performance. Efficient design methods and architectures are developed for a high-performance DFP MAC unit as part of this research.
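
The representation-error point above is easy to demonstrate with Python's software decimal library, which serves here as a software stand-in for the hardware DFP units discussed (not the thesis's design):

    from decimal import Decimal, getcontext

    # Binary floating point cannot represent 0.1 exactly, so repeated
    # accumulation drifts -- exactly the class of error a DFP unit avoids.
    binary_sum = sum(0.1 for _ in range(1_000))
    print(binary_sum)                  # 99.9999999999986, not 100.0

    getcontext().prec = 28             # note: IEEE 754-2008 decimal128 carries 34 digits
    decimal_sum = sum(Decimal("0.1") for _ in range(1_000))
    print(decimal_sum)                 # exactly 100.0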

Relevance:

10.00%

Publisher:

Abstract:

Distortions in a family of conjugated polymers are studied using two complementary approaches: within a many-body valence bond approach using a transfer-matrix technique to treat the Heisenberg model of the systems, and also in terms of the tight-binding band-theoretic model with interactions limited to nearest neighbors. The computations indicate that both methods predict the presence or absence of the same distortions in most of the polymers studied.
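
The nearest-neighbour tight-binding picture mentioned above predicts bond-alternation (Peierls-type) distortions because dimerisation lowers the occupied band energy. A minimal numerical illustration with the standard SSH-type dispersion, a textbook model rather than the specific Hamiltonians treated in the paper:

    import numpy as np

    def electronic_energy(delta, t=1.0, nk=4096):
        """Lower-band energy per unit cell (per spin) of a half-filled
        dimerised chain with alternating hoppings t(1+delta), t(1-delta):
        E(k) = -sqrt(t1^2 + t2^2 + 2*t1*t2*cos k), averaged over the
        Brillouin zone."""
        t1, t2 = t * (1 + delta), t * (1 - delta)
        k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
        return -np.sqrt(t1**2 + t2**2 + 2 * t1 * t2 * np.cos(k)).mean()

    for d in (0.0, 0.05, 0.1):
        # The band energy falls as the chain dimerises; balanced against
        # an elastic cost, this is the Peierls distortion mechanism.
        print(d, electronic_energy(d))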

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates the potential use of zerocrossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zerocrossings. A simple linear interpolation technique is developed for this purpose. By using this method, the A/D converter can be avoided in a speech coder. The newly proposed zerocrossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zerocrossing rate and short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of these two methods, the first is simple and requires only very few computations compared to the other. This method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is considered as 'side information'. This enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which switches between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments, respectively, is presented. Simulation results are provided to show the improved performance of the coder.
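
The first classifier described above combines the short-time zerocrossing rate and short-time energy; the sketch below uses a crude threshold rule in that spirit (the thresholds and the decision rule are illustrative assumptions, not the thesis's distance measure):

    import numpy as np

    def zcr_energy(frame):
        """Short-time zerocrossing rate and energy of one speech frame."""
        signs = np.sign(frame)
        zcr = np.mean(np.abs(np.diff(signs)) > 0)
        energy = np.mean(frame.astype(float) ** 2)
        return zcr, energy

    def classify_voiced(frame, zcr_thresh=0.25, energy_thresh=1e-4):
        """Crude voiced/unvoiced decision: voiced speech has high energy
        and a low zerocrossing rate; unvoiced (fricative) speech the
        reverse.  Thresholds would be tuned on real data."""
        zcr, energy = zcr_energy(frame)
        return energy > energy_thresh and zcr < zcr_thresh

    # Toy frames: a 100 Hz sinusoid ("voiced") vs white noise ("unvoiced").
    t = np.arange(320) / 8000.0
    print(classify_voiced(np.sin(2 * np.pi * 100 * t)))                           # True
    print(classify_voiced(0.05 * np.random.default_rng(0).standard_normal(320)))  # False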

Relevance:

10.00%

Publisher:

Abstract:

Motion instability is an important issue that occurs during the operation of towed underwater vehicles (TUV), and it considerably affects the accuracy of the high-precision acoustic instrumentation housed inside them. Of the various parameters responsible for this, disturbances from the tow-ship are the most significant. The present study focuses on the motion dynamics of an underwater towing system with ship-induced disturbances as the input, and in particular on an innovative system called two-part towing. The methodology involves numerical modelling of the tow system, consisting of the modelling of the tow-cables and the vehicle formulation. A previous study in this direction used a segmental approach for modelling the cable. Even though that model was successful in predicting the heave response of the tow-body, instabilities were observed in the numerical solution. The present study devises a simple approach called the lumped mass spring model (LMSM) for the cable formulation. In this work, the traditional LMSM has been modified in two ways: first, by implementing advanced time integration procedures, and second, by using a modified beam model that employs only translational degrees of freedom for solving the beam equation. A number of time integration procedures, such as Euler, Houbolt, Newmark and HHT-α, were implemented in the traditional LMSM, and the strengths and weaknesses of each scheme were numerically estimated. In most of the previous studies, hydrodynamic forces acting on the tow system, such as drag and lift, are approximated as analytical expressions of the velocities. This approach restricts these models to simple cylindrically shaped towed bodies and may not be applicable to modern tow systems, which are diverse in shape and complexity. Hence, in this particular study, hydrodynamic parameters such as the drag and lift of the tow system are estimated using CFD techniques. To achieve this, a RANS-based CFD code has been developed. Further, a new convection interpolation scheme for CFD simulation, called BNCUS, which is a blend of cell-based and node-based formulations, is proposed in the study and numerically tested. To account for the considerable time the simulation takes in solving the fluid dynamic equations, a dedicated parallel computing setup has been developed. Two types of computational parallelism are explored in the current study, viz., a model for shared-memory processors and one for distributed-memory processors. In the present study, the shared-memory model was used for the structural dynamic analysis of the towing system, while the distributed-memory one was devised for solving the fluid dynamic equations.
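
The LMSM discretises the cable into point masses joined by springs and advances the resulting system with one of the cited integrators. Below is a minimal sketch of the Newmark scheme (average-acceleration variant, one of the schemes named above) on a toy mass-spring chain, with illustrative parameters rather than the thesis's cable model:

    import numpy as np

    def newmark_step(M, C, K, f, u, v, a, dt, beta=0.25, gamma=0.5):
        """One step of Newmark-beta for M u'' + C u' + K u = f.
        beta=1/4, gamma=1/2 is the unconditionally stable
        average-acceleration variant."""
        lhs = M + gamma * dt * C + beta * dt**2 * K
        u_pred = u + dt * v + (0.5 - beta) * dt**2 * a
        v_pred = v + (1 - gamma) * dt * a
        a_new = np.linalg.solve(lhs, f - C @ v_pred - K @ u_pred)
        u_new = u_pred + beta * dt**2 * a_new
        v_new = v_pred + gamma * dt * a_new
        return u_new, v_new, a_new

    # Toy 3-mass lumped mass-spring chain (not a tow-cable model).
    n, k, m = 3, 100.0, 1.0
    K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    M, C = m * np.eye(n), 0.1 * np.eye(n)
    u, v = np.zeros(n), np.zeros(n)
    f = np.array([0.0, 0.0, 1.0])                 # constant end load
    a = np.linalg.solve(M, f - C @ v - K @ u)     # consistent initial accel.
    for _ in range(5000):
        u, v, a = newmark_step(M, C, K, f, u, v, a, dt=0.01)
    print(u)    # settles toward the static solution np.linalg.solve(K, f)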

Relevance:

10.00%

Publisher:

Abstract:

Decimal multiplication is an integral part of financial, commercial, and internet-based computations. The basic building block of a decimal multiplier is a single-digit multiplier. It accepts two Binary Coded Decimal (BCD) inputs and gives a product in the range [0, 81], represented by two BCD digits. A novel design for single-digit decimal multiplication that reduces the critical path delay and area is proposed in this research. Out of the 256 possible combinations for the 8-bit input, only one hundred are valid BCD inputs. Of these hundred valid combinations, only four require a 4 x 4 multiplication, thirty-two need a 4 x 3 or 3 x 4 multiplication, and the remaining sixty-four use a 3 x 3 multiplication. The proposed design makes use of this property. This design leads to a more regular VLSI implementation and does not require special registers for storing easy multiples. It is a fully parallel multiplier utilizing only combinational logic, and is extended to a Hex/Decimal multiplier that gives either a decimal output or a binary output. The accumulation of partial products generated using single-digit multipliers is done by an array of multi-operand BCD adders for an (n-digit x n-digit) multiplication.
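
The bit-width property the design exploits is easy to check behaviourally in Python (a numerical sanity check; the actual design is combinational hardware, which this does not model):

    from collections import Counter

    def bits_needed(d):
        """Bit width of a BCD digit: 0-7 fit in 3 bits, 8-9 need 4."""
        return 4 if d >= 8 else 3

    # Verify the combination counts quoted above over the 100 valid BCD
    # pairs (the other 156 of the 256 8-bit inputs are invalid BCD).
    counts = Counter((bits_needed(a), bits_needed(b))
                     for a in range(10) for b in range(10))
    print(counts)   # (3,3): 64, (3,4): 16, (4,3): 16, (4,4): 4

    def bcd_digit_mul(a, b):
        """Single-digit multiply, result as two BCD digits (tens, units)."""
        assert 0 <= a <= 9 and 0 <= b <= 9, "valid BCD digits only"
        p = a * b                     # in [0, 81]
        return p // 10, p % 10

    print(bcd_digit_mul(9, 9))        # (8, 1)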