955 results for Boolean Computations
Abstract:
Electromagnetic tomography has been applied to problems in nondestructive evaluation, ground-penetrating radar, synthetic aperture radar, target identification, electrical well logging, medical imaging, etc. The problem of electromagnetic tomography involves the estimation of the cross-sectional distribution of dielectric permittivity, conductivity, etc., based on measurements of the scattered fields. The inverse scattering problem of electromagnetic imaging is highly nonlinear and ill-posed, and is liable to get trapped in local minima. The iterative solution techniques employed for computing the inverse scattering problem of electromagnetic imaging are highly computation intensive. Thus the solution of the electromagnetic imaging problem is beset with convergence and computational issues. This thesis attempts to develop methods for improving the convergence and reducing the total computations for tomographic imaging of two-dimensional dielectric cylinders illuminated by TM-polarized waves, where the scattering problem is defined using scalar equations. A multi-resolution frequency-hopping approach was proposed, as opposed to the conventional frequency-hopping approach employed to image large inhomogeneous scatterers. The strategy was tested on both synthetic and experimental data and gave results that were better localized, and it also accelerated the iterative procedure employed for the imaging. A Degree of Symmetry formulation was introduced to locate the scatterer in the investigation domain when the scatterer cross section is circular. The investigation domain could thus be reduced, which reduced the degrees of freedom of the inverse scattering process. The entire measured scattered data was thus available for the optimization of a smaller number of pixels. This resulted in better and more robust reconstructions of the scatterer's cross-sectional profile.
The Degree of Symmetry formulation could also be applied to the practical problem of limited-angle tomography, as in the case of a buried pipeline, where the ill-posedness is much larger. The formulation was also tested using experimental data generated from an experimental setup designed for this purpose. The experimental results confirmed the practical applicability of the formulation.
Abstract:
This thesis, entitled 'Spectral Theory of Bounded Self-adjoint Operators: A Linear Algebraic Approach', presents results that can be classified as three different approaches to spectral approximation problems. The truncation method and its perturbed versions are part of the classical linear algebraic approach to the subject. The usage of block Toeplitz-Laurent operators and matrix-valued symbols is considered as a particular example where linear algebraic techniques are effective in simplifying problems in inverse spectral theory. The abstract approach to spectral approximation problems via preconditioners and Korovkin-type theorems is an attempt to make the computations involved well conditioned. In all these approaches, however, linear algebra is the central object. The objective of this study is to discuss linear algebraic techniques in the spectral theory of bounded self-adjoint operators on a separable Hilbert space. The usage of the truncation method in approximating the bounds of the essential spectrum and the discrete spectral values outside these bounds is well known. Spectral gap prediction and related results are proved in the second chapter. The discrete versions of Borg-type theorems, proved in the third chapter, partly overlap with some known results in operator theory. The purely linear algebraic approach is the main novelty of the results proved here.
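The truncation method mentioned above can be illustrated on the simplest self-adjoint example, a tridiagonal (Jacobi) Toeplitz operator, whose finite truncations have eigenvalues known in closed form. The sketch below is illustrative only and not taken from the thesis:

```python
import math

def truncation_eigenvalues(a, b, n):
    """Eigenvalues of the n x n truncation of the self-adjoint tridiagonal
    Toeplitz operator with diagonal entry a and off-diagonal entry b.
    The closed form is a + 2*b*cos(k*pi/(n+1)), k = 1..n."""
    return [a + 2 * b * math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]

# The symbol of the corresponding Laurent operator is f(theta) = a + 2b*cos(theta),
# so its essential spectrum is the interval [a - 2|b|, a + 2|b|].
# The eigenvalues of the truncations fill this interval as n grows:
for n in (5, 50, 500):
    ev = truncation_eigenvalues(0.0, 1.0, n)
    print(n, round(min(ev), 4), round(max(ev), 4))
```

The extreme eigenvalues of the truncations approach the bounds -2 and +2 of the essential spectrum, which is the kind of spectral approximation statement the truncation method makes precise.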
Abstract:
This thesis, entitled 'Reliability Modelling and Analysis in Discrete Time: Some Concepts and Models Useful in the Analysis of Discrete Lifetime Data', consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities of the component distributions, are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has been discussed already. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models to single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of parameters in a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments.
Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are as efficient and convenient as many of the conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation and that could be the subject of future work in this area.
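As a small illustration of the discrete reliability quantities discussed above (a sketch, not the thesis's derivations): for a geometric lifetime on {0, 1, 2, ...} the failure rate h(x) = f(x)/S(x) is constant, while a two-component geometric mixture has a failure rate that must be computed from the mixed mass and survival functions:

```python
def geom_pmf(p, x):
    """P(X = x) for a geometric lifetime on {0, 1, 2, ...}."""
    return p * (1 - p) ** x

def geom_sf(p, x):
    """Survival function S(x) = P(X >= x)."""
    return (1 - p) ** x

def mixture_failure_rate(w, p1, p2, x):
    """Failure rate h(x) = f(x)/S(x) of a two-component geometric mixture
    with mixing weight w on the first component."""
    f = w * geom_pmf(p1, x) + (1 - w) * geom_pmf(p2, x)
    s = w * geom_sf(p1, x) + (1 - w) * geom_sf(p2, x)
    return f / s

# Each component alone has constant failure rate (p1 or p2), but the
# mixture's failure rate starts at the weighted average w*p1 + (1-w)*p2
# and decreases toward min(p1, p2) as the more reliable sub-population
# comes to dominate the survivors.
```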
Abstract:
Most commercial and financial data are stored in decimal form. Recently, support for decimal arithmetic has received increased attention due to its growing importance in financial analysis, banking, tax calculation, currency conversion, insurance, telephone billing and accounting. Performing decimal arithmetic on systems that do not support decimal computations may give results with representation error, conversion error, and/or rounding error. In this world of precision, such errors are no longer tolerable. The errors can be eliminated, and better accuracy achieved, if decimal computations are done using Decimal Floating Point (DFP) units. But the floating-point arithmetic units in today's general-purpose microprocessors are based on the binary number system, and decimal computations are done using binary arithmetic. Only a few common decimal numbers can be exactly represented in Binary Floating Point (BFP). In many cases, the law requires that results generated from financial calculations performed on a computer exactly match manual calculations. Currently, many applications involving fractional decimal data perform decimal computations either in software or with a combination of software and hardware. The performance can be dramatically improved by complete hardware DFP units, and this leads to the design of processors that include DFP hardware. VLSI implementations using the same modular building blocks can decrease system design and manufacturing cost. A multiplexer realization is a natural choice from the viewpoint of cost and speed. This thesis focuses on the design and synthesis of an efficient decimal MAC (Multiply ACcumulate) architecture for high-speed decimal processors based on the IEEE Standard for Floating-Point Arithmetic (IEEE 754-2008).
The research goal is to design and synthesize decimal MAC architectures to achieve higher performance. Efficient design methods and architectures are developed for a high-performance DFP MAC unit as part of this research.
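The representation-error problem described above is easy to demonstrate. Python's decimal module, a software implementation of decimal floating-point arithmetic in the spirit of IEEE 754-2008, serves here as a stand-in for a hardware DFP unit:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so even a single
# decimal addition carries representation error:
binary_sum = 0.1 + 0.2
print(binary_sum == 0.3)              # False (binary_sum is 0.30000000000000004)

# A decimal floating-point type keeps decimal fractions exact, so the
# computed result matches a manual (pencil-and-paper) calculation:
decimal_sum = Decimal("0.1") + Decimal("0.2")
print(decimal_sum == Decimal("0.3"))  # True
```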
Abstract:
Distortions in a family of conjugated polymers are studied using two complementary approaches: within a many-body valence bond approach using a transfer-matrix technique to treat the Heisenberg model of the systems, and also in terms of the tight-binding band-theoretic model with interactions limited to nearest neighbors. The computations indicate that both methods predict the presence or absence of the same distortions in most of the polymers studied.
Abstract:
This thesis investigates the potential use of zero-crossing information for speech sample estimation. It provides a new method to estimate speech samples using composite zero-crossings. A simple linear interpolation technique is developed for this purpose. By using this method the A/D converter can be avoided in a speech coder. The newly proposed zero-crossing sampling theory is supported with results of computer simulations using real speech data. The thesis also presents two methods for voiced/unvoiced classification. One of these methods is based on a distance measure which is a function of the short-time zero-crossing rate and the short-time energy of the signal. The other is based on the attractor dimension and entropy of the signal. Of these two methods, the first is simple and requires very few computations compared to the other. This method is used in a later chapter to design an enhanced Adaptive Transform Coder. The later part of the thesis addresses a few problems in Adaptive Transform Coding and presents an improved ATC. The transform coefficient with maximum amplitude is considered as 'side information'. This enables more accurate bit assignment and step-size computation. A new bit reassignment scheme is also introduced in this work. Finally, an ATC which switches between the Discrete Cosine Transform and the Discrete Walsh-Hadamard Transform for voiced and unvoiced speech segments, respectively, is presented. Simulation results are provided to show the improved performance of the coder.
Abstract:
Motion instability is an important issue that occurs during the operation of towed underwater vehicles (TUV), and it considerably affects the accuracy of the high-precision acoustic instrumentation housed inside them. Of the various parameters responsible for this, disturbances from the tow-ship are the most significant. The present study focuses on the motion dynamics of an underwater towing system with ship-induced disturbances as the input, in particular on an innovative system called two-part towing. The methodology involves numerical modeling of the tow system, which consists of modeling the tow-cables and the vehicle formulation. A previous study in this direction used a segmental approach for modeling the cable. Even though that model was successful in predicting the heave response of the tow-body, instabilities were observed in the numerical solution. The present study devises a simple approach called the lumped mass spring model (LMSM) for the cable formulation. In this work, the traditional LMSM has been modified in two ways: first, by implementing advanced time integration procedures, and secondly, by using a modified beam model with only translational degrees of freedom for solving the beam equation. A number of time integration procedures, such as Euler, Houbolt, Newmark and HHT-α, were implemented in the traditional LMSM, and the strengths and weaknesses of each scheme were numerically estimated. In most of the previous studies, hydrodynamic forces acting on the tow-system, such as drag and lift, are approximated as analytical expressions of the velocities. This approach restricts the models to simple cylindrically shaped towed bodies and may not be applicable to modern tow systems, which are diverse in shape and complexity. Hence, in this study, hydrodynamic parameters such as drag and lift of the tow-system are estimated using CFD techniques. To achieve this, a RANS-based CFD code has been developed.
Further, a new convection interpolation scheme for CFD simulation, called BNCUS, which is a blend of cell-based and node-based formulations, was proposed in the study and numerically tested. To account for the considerable time spent solving the fluid dynamic equations, a dedicated parallel computing setup has been developed. Two types of computational parallelism are explored in the current study: a model for shared memory processors and one for distributed memory processors. The shared memory model was used for the structural dynamic analysis of the towing system, while the distributed memory model was devised for solving the fluid dynamic equations.
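The trade-offs among the time-integration schemes named above can be seen even on a single-degree-of-freedom mass-spring system (a minimal sketch, not the cable model of the study): explicit Euler injects spurious energy, while the Newmark average-acceleration scheme (beta = 1/4, gamma = 1/2) conserves it for this linear, undamped problem.

```python
def explicit_euler(m, k, x, v, dt, steps):
    """Explicit Euler for m*x'' + k*x = 0: cheap, but adds spurious energy."""
    for _ in range(steps):
        a = -k * x / m
        x, v = x + dt * v, v + dt * a
    return x, v

def newmark(m, k, x, v, dt, steps, beta=0.25, gamma=0.5):
    """Newmark scheme for m*x'' + k*x = 0; with beta=1/4, gamma=1/2
    (average acceleration) it is unconditionally stable and, for this
    linear undamped problem, energy-conserving."""
    a = -k * x / m
    for _ in range(steps):
        x_pred = x + dt * v + dt * dt * (0.5 - beta) * a
        v_pred = v + dt * (1 - gamma) * a
        a_new = -k * x_pred / (m + k * beta * dt * dt)  # solve m*a + k*x = 0
        x = x_pred + dt * dt * beta * a_new
        v = v_pred + dt * gamma * a_new
        a = a_new
    return x, v

def energy(m, k, x, v):
    """Total mechanical energy of the oscillator."""
    return 0.5 * m * v * v + 0.5 * k * x * x
```

Running both from the same initial state shows Euler's energy growing without bound while Newmark's stays at its initial value, which is one way the strengths and weaknesses of such schemes are estimated numerically.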
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computations. The basic building block of a decimal multiplier is a single digit multiplier. It accepts two Binary Coded Decimal (BCD) inputs and gives a product in the range [0, 81] represented by two BCD digits. A novel design for single digit decimal multiplication that reduces the critical path delay and area is proposed in this research. Of the 256 possible combinations for the 8-bit input, only one hundred are valid BCD inputs, and among these only four combinations require a full 4 x 4 multiplication; the remaining combinations can be handled with smaller multiplications. The proposed design makes use of this property. The design leads to a more regular VLSI implementation, and does not require special registers for storing easy multiples. It is a fully parallel multiplier utilizing only combinational logic, and is extended to a Hex/Decimal multiplier that gives either a decimal or a binary output. The accumulation of partial products generated using single digit multipliers is done by an array of multi-operand BCD adders for an (n-digit x n-digit) multiplication.
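A behavioral sketch of the single digit multiplier described above (a functional model for illustration, not the proposed hardware design):

```python
def bcd_digit_mul(a, b):
    """Multiply two BCD digits (0-9) and return the product in [0, 81]
    as a (tens, units) pair of BCD digits."""
    if not (0 <= a <= 9 and 0 <= b <= 9):
        raise ValueError("inputs must be valid BCD digits")
    p = a * b
    return p // 10, p % 10

def significant_bits(d):
    """Digits 0-7 fit in 3 bits; only 8 and 9 need all 4 bits."""
    return 3 if d <= 7 else 4

# Of the 256 possible 8-bit input patterns only 10 * 10 = 100 are valid
# BCD pairs, and only four of those need a full 4 x 4 bit multiplication:
four_by_four = sum(1 for a in range(10) for b in range(10)
                   if significant_bits(a) == 4 and significant_bits(b) == 4)
print(four_by_four)  # 4: (8,8), (8,9), (9,8), (9,9)
```

The count confirms the reduced-input-space property the hardware design exploits.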
Abstract:
Reversibility plays a fundamental role when computations with minimal energy dissipation are considered. Conventional logic gates such as AND, OR, and XOR are not reversible; hence these gates dissipate heat and may reduce the life of the circuit, so reversible logic is in demand in power-aware circuits. In recent years, reversible logic has emerged as one of the most important approaches for power optimization, with applications in low-power CMOS and quantum computing. A reversible conventional BCD adder was proposed using conventional reversible gates.
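The reversibility property can be illustrated with the Toffoli (CCNOT) gate, a standard reversible gate (used here as a generic example, not necessarily one of the gates of the cited design):

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips c iff a and b are both 1.
    Unlike AND/OR/XOR it is reversible; in fact it is its own inverse."""
    return a, b, c ^ (a & b)

# Reversibility check: the gate is a bijection on the 8 input triples,
# and applying it twice restores the input (no information is erased).
triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert sorted(toffoli(*t) for t in triples) == sorted(triples)   # bijective
assert all(toffoli(*toffoli(*t)) == t for t in triples)          # involution

# With c = 0 the third output equals a AND b, so the Toffoli gate can
# emulate AND while remaining reversible.
```

Irreversible gates erase information (two input states map to one output state), which by Landauer's principle forces heat dissipation; a bijective gate table avoids that erasure.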
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computations. This paper presents a novel double digit decimal multiplication (DDDM) technique that performs two digit multiplications simultaneously in one clock cycle. This design offers low latency and high throughput. When multiplying two n-digit operands to produce a 2n-digit product, the design has a latency of (n/2) + 1 cycles. The paper presents area and delay comparisons for 7-digit, 16-digit and 34-digit double digit decimal multipliers on different families of Xilinx, Altera, Actel and Quick Logic FPGAs. The multipliers presented can be extended to support decimal floating-point multiplication for the IEEE P754 standard.
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computations. This paper presents a novel double digit decimal multiplication (DDDM) technique that offers low latency and high throughput. The design performs two digit multiplications simultaneously in one clock cycle. Double digit fixed point decimal multipliers for 7-digit, 16-digit and 34-digit operands are simulated using Leonardo Spectrum from Mentor Graphics Corporation with an ASIC library. The paper also presents area and delay comparisons for these fixed point multipliers on Xilinx, Altera, Actel and Quick Logic FPGAs. This multiplier design can be extended to support decimal floating-point multiplication for the IEEE 754-2008 standard.
Abstract:
Decimal multiplication is an integral part of financial, commercial, and internet-based computations. A novel design for single digit decimal multiplication that reduces the critical path delay and area for an iterative multiplier is proposed in this research. The partial products are generated using single digit multipliers, and are accumulated based on a novel RPS algorithm. The design uses n single digit multipliers for an n × n multiplication. The latency for the multiplication of two n-digit Binary Coded Decimal (BCD) operands is (n + 1) cycles, and a new multiplication can begin every n cycles. The accumulation of the final partial products and the first iteration of partial product generation for the next set of inputs are done simultaneously. This iterative decimal multiplier offers low latency and high throughput, and can be extended for decimal floating-point multiplication.
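A functional sketch of iterative digit-serial decimal multiplication, where each iteration ("cycle") consumes one multiplier digit and uses n single-digit multiplications. This is plain schoolbook accumulation for illustration, not the RPS algorithm of the paper:

```python
def iterative_decimal_multiply(x_digits, y_digits):
    """Schoolbook n x n digit multiplication, one multiplier digit per
    iteration; digits are stored least-significant first.
    Illustrative only -- not the RPS accumulation of the paper."""
    n = len(x_digits)
    acc = [0] * (2 * n)                      # 2n-digit accumulator
    for i, yd in enumerate(y_digits):        # one "cycle" per digit of y
        carry = 0
        for j, xd in enumerate(x_digits):    # n single-digit multiplies
            s = acc[i + j] + xd * yd + carry
            acc[i + j], carry = s % 10, s // 10
        acc[i + n] += carry
    return acc                               # product, least-significant first
```

For example, multiplying 12 by 34 (digits passed least-significant first) yields the digits of 408, and the structure makes clear why a straightforward iterative multiplier needs on the order of n cycles per operand pair.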
Abstract:
Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. Numeric data sets are usually first converted to categorical types and then classified using information gain concepts. Information gain is a popular and useful concept which indicates whether any benefit, in terms of information content, results from splitting on a given attribute. But this process is computationally intensive for large data sets. Also, popular decision tree algorithms like ID3 cannot handle numeric data sets. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point, for completely numerical data sets. The new algorithm has been shown to be competitive with its information gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, using the ANOVA test from statistics. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, and it avoids the conversion of huge numeric data sets to categorical data, which is also a time-consuming task. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information gain computations. It also blends the two closely related fields of statistics and data mining.
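The proposed variance-based splitting can be sketched as follows (an illustrative reading of the abstract, with hypothetical function names): split a numeric attribute at its mean and score the split by the drop in size-weighted variance of the target, instead of computing information gain:

```python
def variance(values):
    """Population variance of a list of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def variance_reduction_at_mean(xs, ys):
    """Split numeric attribute xs at its mean and measure how much the
    size-weighted variance of the target ys drops -- a cheaper stand-in
    for information gain on fully numeric data."""
    mean_x = sum(xs) / len(xs)
    left = [y for x, y in zip(xs, ys) if x <= mean_x]
    right = [y for x, y in zip(xs, ys) if x > mean_x]
    if not left or not right:
        return 0.0                # degenerate split: no benefit
    n = len(ys)
    child = (len(left) / n) * variance(left) + (len(right) / n) * variance(right)
    return variance(ys) - child   # larger is a better split
```

An attribute whose mean split cleanly separates the target gets the maximum score, while an attribute carrying no separation scores near zero, mirroring how information gain ranks candidate splits but with only sums and squares.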
Abstract:
In natural languages with a high degree of word-order freedom, syntactic phenomena like dependencies (subordinations) or valencies do not depend on the word order (i.e., on the positions of the individual words). This means that some permutations of sentences of these languages are, in an important sense, syntactically equivalent. Here we study this phenomenon in a formal way. Various types of j-monotonicity for restarting automata can serve as parameters for the degree of word-order freedom and for the complexity of word order in sentences (languages). Here we combine two types of parameters on computations of restarting automata: 1. the degree of j-monotonicity, and 2. the number of rewrites per cycle. We study these notions formally in order to obtain an adequate tool for modelling and comparing formal descriptions of (natural) languages with different degrees of word-order freedom and word-order complexity.
Abstract:
The many-electron aspect is taken into account in single-particle-like formulations, either in the Hartree-Fock approximation or by including electron-electron correlations through density functional theory. Since the physics of electronic systems (atoms, molecules, clusters, condensed matter, plasmas) is relativistic, I have employed the relativistic 4-spinor Dirac theory from the outset; recently, however, I have also implemented a fully relativistic 2-spinor theory based on the so-called minimax principle, and this will become the main advance in the relativistic description achieved by my doctoral work. The following is a brief description of my dissertation: A substantial gain in efficiency in the relativistic 4-spinor Dirac calculations was achieved through novel singular coordinate transformations, so that even for the superheavy Th2 179+ the highest solution accuracies were obtained with moderate computational effort, leading to two further interesting publications (see list of publications). Although this already made the relativistic computation of molecules and clusters much more efficient, these calculations remained orders of magnitude more expensive than the corresponding non-relativistic ones. The latter treat the actual (relativistic) behavior of electronic systems only approximately, though increasingly well the lighter the atoms involved (small nuclear charge Z). I therefore searched for a new formalism that exploits this as far as possible while still describing the physics correctly relativistically. This is achieved by a 2-spinor-based minimax principle: systems with light atoms can now be described fully relativistically almost as efficiently as non-relativistically, which naturally raises great hopes for accurate (i.e., relativistic) calculations. A first fundamental publication resulted (see list of publications).
The accuracy for strongly relativistic systems such as Th2 179+ is similar to, or slightly better than, that of the 4-spinor Dirac formulation. The advantages of the new formulation, however, go decisively further: A. The new minimax formulation of the Dirac equation is free of spurious states and has no positronic contaminations. B. The computational effort is greatly reduced, since only one third of the matrix elements need to be computed compared with the 4-spinor case, and all matrix dimensions are smaller by a factor of 2. C. Numerically, the new formulation behaves about as well as the non-relativistic Schrödinger equation (although it is an exact formulation of the Dirac equation, not an approximation) and therefore has better convergence properties than the 4-spinor approach. In particular, the error weighting (singular and smooth parts) is different in the 2-spinor formulation, and it shows the good extrapolation properties familiar from the non-relativistic Schrödinger equation. The extension of the range of application of the (relativistic) 2-spinor approach has already been carried out successfully in FEM Dirac-Fock-Slater calculations, with two examples, CO and N2. Further extensions are close at hand; see the minimax LCAO approximation.