956 results for finite difference methods
Abstract:
What is the computational power of a quantum computer? We show that determining the output of a quantum computation is equivalent to counting the number of solutions to an easily computed set of polynomials defined over the finite field Z_2. This connection allows simple proofs to be given for two known relationships between quantum and classical complexity classes, namely BQP ⊆ P^#P and BQP ⊆ PP.
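The reduction above counts common roots of polynomials over Z_2. A minimal brute-force sketch of that counting problem is given below; the representation (a polynomial as an XOR-sum of monomials, each monomial a tuple of variable indices) and the function name are illustrative choices, not the paper's construction:

```python
from itertools import product

def count_gf2_solutions(polys, n_vars):
    """Brute-force count of the simultaneous roots of a set of
    polynomials over Z_2. Each polynomial is a list of monomials;
    each monomial is a tuple of variable indices, and a polynomial
    evaluates to the XOR (sum mod 2) of its monomials' products."""
    count = 0
    for assignment in product((0, 1), repeat=n_vars):
        if all(sum(all(assignment[i] for i in mono) for mono in poly) % 2 == 0
               for poly in polys):
            count += 1
    return count

# x0*x1 + x2 = 0 over Z_2: satisfied exactly when x2 = x0*x1,
# giving 4 of the 8 assignments of three variables.
print(count_gf2_solutions([[(0, 1), (2,)]], 3))  # 4
```

The exhaustive loop makes the #P flavour of the problem concrete: the count is easy to verify per assignment but there are 2^n assignments.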
Abstract:
Introduction: In the World Health Organization (WHO) MONICA (multinational MONItoring of trends and determinants in CArdiovascular disease) Project, considerable effort was made to obtain basic data on non-respondents to community-based surveys of cardiovascular risk factors. The first purpose of this paper is to examine differences in socio-economic and health profiles between respondents and non-respondents. The second purpose is to investigate the effect of non-response on estimates of trends. Methods: The socio-economic and health profiles of respondents and non-respondents in the WHO MONICA Project final survey were compared. The potential effect of non-response on the trend estimates between the initial survey and the final survey approximately ten years later was investigated using both MONICA data and hypothetical data. Results: In most of the populations, non-respondents were more likely to be single and less well educated, and had poorer lifestyles and health profiles, than respondents. As an example of the consequences, temporal trends in the prevalence of daily smokers are shown to be overestimated in most populations if they were based only on data from respondents. Conclusions: The socio-economic and health profiles of respondents and non-respondents differed fairly consistently across 27 populations. Hence, estimators of population trends based on respondent data alone are likely to be biased. Declining response rates therefore pose a threat to the accuracy of estimates of risk factor trends in many countries.
Abstract:
We consider the problem of robust performance analysis of linear discrete time-varying systems on a bounded time interval. The system is represented in state-space form. It is driven by a random input disturbance with imprecisely known probability distribution; this distributional uncertainty is described in terms of entropy. The worst-case performance of the system is quantified by its a-anisotropic norm. Computing the anisotropic norm is reduced to solving a set of difference Riccati and Lyapunov equations together with an equation of special form.
Abstract:
We are developing a telemedicine application which offers automated diagnosis of facial (Bell's) palsy through a Web service. We used a test data set of 43 images of facial palsy patients and 44 normal people to develop the automatic recognition algorithm. Three different image pre-processing methods were used. Machine learning techniques (a support vector machine, SVM) were used to examine the difference between the two halves of the face. If there was a sufficient difference, then the SVM recognized facial palsy; otherwise, if the halves were roughly symmetrical, the SVM classified the image as normal. It was found that the facial palsy images had a greater Hamming distance than the normal images, indicating greater asymmetry. The median distance in the normal group was 331 (interquartile range 277-435) and the median distance in the facial palsy group was 509 (interquartile range 334-703). This difference was significant (P
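The half-face comparison described can be sketched as follows, assuming the face images have already been binarized and roughly aligned; the function name and array layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def asymmetry_hamming(binary_face):
    """Hamming distance between the left half of a binarized face
    image and the mirror image of its right half; a larger value
    indicates greater left-right asymmetry."""
    h, w = binary_face.shape
    half = w // 2
    left = binary_face[:, :half]
    right_mirrored = binary_face[:, w - half:][:, ::-1]
    return int(np.sum(left != right_mirrored))

# A perfectly symmetric toy "face" scores zero.
face = np.array([[0, 1, 1, 0],
                 [1, 0, 0, 1]])
print(asymmetry_hamming(face))  # 0
```

A classifier such as an SVM would then threshold or combine features like this distance to separate palsy from normal cases.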
Abstract:
Titanium-containing wormhole-like mesoporous silicas, denoted Ti-HMS, synthesized both via the hydrothermal synthesis route and via the post-synthesis grafting technique known as molecular designed dispersion, have been successfully applied in the gas-phase oxidation of toluene to CO and CO2. Selectivity towards CO2 for all catalysts, at temperatures between 400 and 600 °C, was above 80%. Benzene and benzaldehyde were observed at temperatures above 450 °C, but in very low concentrations. The conversion of toluene was shown to increase significantly when the V-TEX/N-MESO ratio was increased from 0.07 to 0.84. No significant difference in catalytic activity was observed between catalysts prepared via the different synthesis techniques. The catalytic activity depends on the concentration of tetrahedrally coordinated titanium atoms rather than on the total concentration of titanium in the catalyst.
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To show that the formalism is of more general interest than the test cases they consider, the technique is applied in this paper to the subset sum problem, a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single-spin-flip dynamics. It is a problem which exhibits an interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide, and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
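The cumulants tracked by such a formalism are ordinary statistics of the population's energy distribution. A minimal sketch of the first three (the paper's difference equations evolve quantities like these; the function here only measures them):

```python
import numpy as np

def first_three_cumulants(energies):
    """First three cumulants of a population's energy (fitness)
    distribution: the mean, the variance, and the third central
    moment, which coincides with the third cumulant."""
    e = np.asarray(energies, dtype=float)
    mu = e.mean()
    k2 = ((e - mu) ** 2).mean()
    k3 = ((e - mu) ** 3).mean()
    return float(mu), float(k2), float(k3)

# A symmetric population has vanishing third cumulant (no skew).
print(first_three_cumulants([1.0, 2.0, 2.0, 3.0]))  # (2.0, 0.5, 0.0)
```

Tracking a few cumulants instead of the full population is what reduces the dynamics to a small set of difference equations.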
Abstract:
The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n^3), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
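The posterior mean under a Gaussian prior has a standard closed form, sketched below for one-dimensional inputs with an assumed RBF covariance (kernel choice and noise level are illustrative, not from the paper):

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Gaussian (RBF) covariance between two 1-D input sets."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

def posterior_mean(X_train, y_train, X_test, kernel, noise_var=0.1):
    """Posterior mean of Gaussian regression; solving the n-by-n
    kernel system below is the O(n^3) step noted in the abstract."""
    K = kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    return kernel(X_test, X_train) @ np.linalg.solve(K, y_train)

x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
print(posterior_mean(x, y, np.array([0.5, 1.5]), rbf))
```

With negligible observation noise the posterior mean interpolates the training targets, which is the regularization-free limit of the smoothing view.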
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
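The encoding step described (each check a product of K message bits, in the ±1 spin representation) can be sketched as follows; the random construction, seed, and function name are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sourlas_encode(spins, K, n_checks):
    """Sourlas-type encoding: each transmitted symbol is the product
    of K message spins (values in {-1, +1}) at randomly chosen
    sites. Decoding would infer the spins back from noisy symbols."""
    idx = np.array([rng.choice(len(spins), size=K, replace=False)
                    for _ in range(n_checks)])
    return np.prod(spins[idx], axis=1), idx

message = np.array([1, -1, 1, -1, 1])
codeword, idx = sourlas_encode(message, K=3, n_checks=10)
print(codeword)
```

In the spin-glass picture each transmitted symbol becomes a K-spin coupling, which is what makes the replica analysis of the decoding problem possible.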
Abstract:
We determine the critical noise level for decoding low density parity check error correcting codes based on the magnetization enumerator, rather than on the weight enumerator employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes such as typical pairs decoding, MAP, and finite temperature decoding (MPM) becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.
Abstract:
The aim of this letter is to demonstrate that complete removal of spectral aliasing occurring due to finite numerical bandwidth used in the split-step Fourier simulations of nonlinear interactions of optical waves can be achieved by enlarging each dimension of the spectral domain by a factor (n+1)/2, where n is the number of interacting waves. Alternatively, when using low-pass filtering for dealiasing this amounts to the need for filtering a 2/(n+1) fraction of each spectral dimension.
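The stated enlargement factor (n+1)/2 and the equivalent filtered fraction 2/(n+1) are simple to compute; the helper below only packages the abstract's formulas (the function name is an illustrative choice):

```python
def dealias_padding(n_waves, grid_size):
    """Enlargement factor (n+1)/2 for each spectral dimension when n
    waves interact, the resulting padded grid size, and the fraction
    2/(n+1) kept by the equivalent low-pass filter."""
    factor = (n_waves + 1) / 2
    return factor, int(grid_size * factor), 2 / (n_waves + 1)

# Two interacting waves (a quadratic nonlinearity) recover the
# familiar 3/2 padding rule.
factor, padded, kept = dealias_padding(2, 256)
print(factor, padded)  # 1.5 384
```

For higher-order interactions the required padding grows linearly with the number of interacting waves, which is the letter's main point.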
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised, a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problem of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with the minimum variance. The user must provide estimates for the variance of the theoretical parameter values and the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent. This will generally be the case during the first iteration but will not be the case subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters. A method related to a weighted equation error algorithm is used to update the parameters. After each iteration the weighting changes so that on convergence the output error is minimised. The suggested methods are extensively tested using simulated data. An H-frame is then used to demonstrate the algorithms on a physical structure.
Abstract:
This chapter illustrates extratextual and intratextual aspects of ideology as related to translation with a case study, a policy document by Tony Blair and Gerhard Schröder, jointly published in English and German in June 1999. Textual features of the two language versions are compared and linked to the social contexts. Concepts and methods of critical discourse analysis and of descriptive and functionalist approaches to translation are applied for this purpose. In particular, reactions to the German text in Germany are explained with reference to the socio-political and ideological conditions of the text production, which was a case of parallel text production combined with translation. It is illustrated that decisions at the linguistic micro-level have had effects for a political party, reflected for example in the German Social Democratic Party debating its identity due to the textual treatment of ideological keywords. The subtle differences revealed in a comparative analysis of the two texts indicate the text producers' awareness of ideological phenomena in the respective cultures. Both texts thus serve as windows onto ideologies and political power relations in the contemporary world.
The transformational implementation of JSD process specifications via finite automata representation
Abstract:
Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering, and is contrasted with other well documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described. They include a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto statement; and a new in-the-large implementation strategy.
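The correspondence between structure diagrams and finite automata can be made concrete with a hand-built deterministic automaton for a Jackson-style iteration "open (record)* close"; the table-driven simulator and state names below are an illustrative sketch, not the thesis's transformation algorithm:

```python
def run_dfa(transitions, start, accepting, events):
    """Simulate a deterministic finite automaton over a sequence of
    named events; returns True when the run ends in an accepting
    state, False on any undefined transition."""
    state = start
    for event in events:
        key = (state, event)
        if key not in transitions:
            return False
        state = transitions[key]
    return state in accepting

# DFA for the structure "open (record)* close" -- a sequence
# containing an iteration, as a Jackson structure diagram would draw it.
transitions = {
    ("S", "open"): "BODY",
    ("BODY", "record"): "BODY",
    ("BODY", "close"): "DONE",
}
print(run_dfa(transitions, "S", {"DONE"},
              ["open", "record", "record", "close"]))  # True
```

An automatic translation would derive such a table from the diagram itself, in the way regular expressions are compiled to automata in compiler theory.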
Abstract:
Methods of dynamic modelling and analysis of structures, for example the finite element method, are well developed. However, it is generally agreed that accurate modelling of complex structures is difficult, and for critical applications it is necessary to validate or update the theoretical models using data measured from actual structures. The techniques of identifying the parameters of linear dynamic models using vibration test data have attracted considerable interest recently. However, no method has received general acceptance, owing to a number of difficulties, mainly: (i) the incomplete number of vibration modes that can be excited and measured; (ii) the incomplete number of coordinates that can be measured; (iii) inaccuracy in the experimental data; and (iv) inaccuracy in the model structure. This thesis reports on a new approach to updating the parameters of a finite element model, as well as a lumped parameter model with a diagonal mass matrix. The structure and its theoretical model are equally perturbed by adding mass or stiffness, and the incomplete set of eigen-data is measured. The parameters are then identified by iterative updating of the initial estimates, by sensitivity analysis, using either the eigenvalues or both the eigenvalues and eigenvectors of the structure before and after perturbation. It is shown that, with a suitable choice of the perturbing coordinates, exact parameters can be identified if the data and the model structure are exact. The theoretical basis of the technique is presented. To cope with measurement errors and possible inaccuracies in the model structure, a well-known Bayesian approach is used to minimize the least-squares difference between the updated and the initial parameters. The eigen-data of the structure with added mass or stiffness are also determined using the frequency response data of the unmodified structure by a structural modification technique. Thus, mass or stiffness does not have to be added physically.
The mass-stiffness addition technique is demonstrated by simulation examples and Laboratory experiments on beams and an H-frame.
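The sensitivity-based iterative updating described can be illustrated on a deliberately tiny problem: a 2-DOF chain with unit masses and a single unknown stiffness value, updated via the standard eigenvalue sensitivity d(lambda)/dk = phi^T (dK/dk) phi for a mass-normalised mode phi. This is a sketch of the general idea only, not the thesis's multi-parameter Bayesian method:

```python
import numpy as np

D = np.array([[2.0, -1.0], [-1.0, 1.0]])  # dK/dk for this 2-DOF chain

def update_stiffness(k_init, lam_measured, iters=30):
    """Newton-type sensitivity updating of a single stiffness value k
    (unit masses, K = k*D) so that the model's lowest eigenvalue
    matches a 'measured' one."""
    k = k_init
    for _ in range(iters):
        lam, vecs = np.linalg.eigh(k * D)
        phi = vecs[:, 0]                  # first mode, unit-mass norm
        sens = phi @ D @ phi              # eigenvalue sensitivity
        k += (lam_measured[0] - lam[0]) / sens
    return k

# "Measure" eigenvalues from a true stiffness of 5.0 and recover it
# starting from a poor initial estimate.
lam_true = np.linalg.eigvalsh(5.0 * D)
print(round(update_stiffness(2.0, lam_true), 6))  # 5.0
```

With exact data the exact parameter is recovered, mirroring the thesis's observation that exact identification is possible when the data and model structure are exact.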
Abstract:
The present dissertation is concerned with the determination of the magnetic field distribution in magnetic electron lenses by means of the finite element method. In the differential form of this method a Poisson-type equation is solved by numerical methods over a finite boundary. Previous methods of adapting this procedure to the requirements of digital computers have restricted its use to computers of extremely large core size. It is shown that by reformulating the boundary conditions, a considerable reduction in core store can be achieved for a given accuracy of field distribution. The magnetic field distribution of a lens may also be calculated by the integral form of the finite element method. This eliminates the boundary problems mentioned but introduces other difficulties. After a careful analysis of both methods it has proved possible to combine the advantages of both in a new approach to the problem, which may be called the 'differential-integral' finite element method. The application of this method to the determination of the magnetic field distribution of some new types of magnetic lenses is described. In the course of the work considerable re-programming of standard programs was necessary in order to reduce the core store requirements to a minimum.