965 results for Non-stationary
Abstract:
BACKGROUND: Complete mitochondrial genome sequences have become important tools for the study of genome architecture, phylogeny, and molecular evolution. Despite the rapid increase in available mitogenomes, the taxonomic sampling often reflects phylogenetic diversity poorly and is also biased toward representing deeper (family-level) evolutionary relationships. RESULTS: We present the first fully sequenced ant (Hymenoptera: Formicidae) mitochondrial genomes. We sampled four mitogenomes from three species of fire ants, genus Solenopsis, which represent various evolutionary depths. Overall, ant mitogenomes appear to be typical of hymenopteran mitogenomes, displaying a general A+T bias. The Solenopsis mitogenomes are slightly more compact than other hymenopteran mitogenomes (~15.5 kb), retaining all protein-coding genes and ribosomal and transfer RNAs. We also present evidence of recombination between the two conspecific Solenopsis mitogenomes. Finally, we discuss potential ways to improve the estimation of phylogenies using complete mitochondrial genome sequences. CONCLUSIONS: The ant mitogenome presents an important addition to the continued efforts in studying hymenopteran mitogenome architecture, evolution, and phylogenetics. We provide further evidence that sampling across many taxonomic levels (including conspecifics and congeners) is useful and important for gaining detailed insights into mitogenome evolution. We also discuss ways that may help improve the use of mitogenomes in phylogenetic analyses by accounting for non-stationary and non-homogeneous evolution among branches.
Abstract:
Precession electron diffraction (PED) is a hollow-cone, non-stationary illumination technique for collecting electron diffraction patterns under quasi-kinematical conditions (as in X-ray diffraction), which enables ab-initio solving of the crystalline structures of nanocrystals. The PED technique has recently been used in TEM instruments operating at voltages of 100 to 300 kV to turn them into true electron diffractometers, thus enabling electron crystallography. When combined with fast electron diffraction acquisition and pattern-matching software techniques, PED may also be used for high-magnification, ultra-fast mapping of variable crystal orientations and phases, similar to what is achieved with the Electron Backscatter Diffraction (EBSD) technique in Scanning Electron Microscopes (SEM) at lower magnifications and longer acquisition times.
Abstract:
Background: The ratio of the rates of non-synonymous and synonymous substitution (dN/dS) is commonly used to estimate selection in coding sequences. It is often suggested that, all else being equal, dN/dS should be lower in populations with large effective size (Ne) due to the increased efficacy of purifying selection. As Ne is difficult to measure directly, life history traits such as body mass, which is typically negatively associated with population size, have commonly been used as proxies in empirical tests of this hypothesis. However, evidence for the expected positive correlation between body mass and dN/dS is conflicting. Results: Employing whole genome sequence data from 48 avian species, we assess the relationship between rates of molecular evolution and life history in birds. We find a negative correlation between dN/dS and body mass, contrary to the nearly neutral expectation. This raises the question of whether the correlation might be a methodological artefact. We therefore consider, in turn, non-stationary base composition, divergence time and saturation as possible explanations, but find no clear patterns. However, in striking contrast to dN/dS, the ratio of radical to conservative amino acid substitutions (Kr/Kc) correlates positively with body mass. Conclusions: Our results in principle accord with the notion that non-synonymous substitutions causing radical amino acid changes are more efficiently removed by selection in large populations, consistent with nearly neutral theory. These findings have implications for the use of dN/dS and suggest that caution is warranted when drawing conclusions about lineage-specific modes of protein evolution using this metric.
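For reference, the two statistics compared in this abstract are rate ratios; in standard population genetics notation (assumed here, not quoted from the paper),

\[
\omega = \frac{d_N}{d_S}, \qquad \frac{K_r}{K_c} = \frac{\text{radical amino acid substitutions per radical site}}{\text{conservative amino acid substitutions per conservative site}},
\]

and the nearly neutral expectation follows from the approximate fixation probability of a new mutation with selection coefficient s,

\[
u(s) \approx \frac{1 - e^{-2s}}{1 - e^{-4 N_e s}},
\]

which vanishes rapidly for deleterious mutations as \(N_e |s|\) grows, so larger populations should purge more of them and show lower \(\omega\).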
Abstract:
This work consists of three essays investigating the ability of structural macroeconomic models to price zero-coupon U.S. government bonds. 1. A small-scale three-factor DSGE model implying a constant term premium is able to provide a reasonable fit for the term structure only at the expense of the persistence parameters of the structural shocks. A test of the structural model against one with constant but unrestricted prices-of-risk parameters shows that the exogenous prices-of-risk model is only weakly preferred. We provide an MLE-based variance-covariance matrix for the Metropolis proposal density that improves convergence speeds in MCMC chains. 2. A prices-of-risk specification that is affine in observable macro-variables is excessively flexible and provides term-structure fit without significantly altering the structural parameters. The exogenous component of the SDF separates the macro part of the model from the term structure, and the good term-structure fit is driven by an extremely volatile SDF and an implied average short rate that is inexplicable. We conclude that the no-arbitrage restrictions do not suffice to temper the SDF, so more restrictions are needed. We introduce a penalty-function methodology that proves useful in showing that affine prices-of-risk specifications can reconcile stable macro-dynamics with a good term-structure fit and a plausible SDF. 3. The level factor is reproduced most importantly by the preference shock, to which it is strongly and positively related, but technology and monetary shocks, with negative loadings, also contribute to its replication. The slope factor is related only to the monetary policy shocks and is poorly explained. We find that there are gains in in-sample and out-of-sample forecasts of consumption and inflation if term-structure information is used in a time-varying hybrid prices-of-risk setting. In-sample yield forecasts are better in models with non-stationary shocks for the period 1982-1988. After this period, time-varying market price of risk models provide better in-sample forecasts. For the period 2005-2008, out-of-sample forecasts of consumption and inflation are better if term-structure information is incorporated into the DSGE model, but yields are better forecast by a pure macro DSGE model.
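The no-arbitrage pricing referred to here is, in standard notation (assumed; the thesis may use different conventions), the recursion in which the price of an n-period zero-coupon bond is the conditional expectation of its discounted next-period price under the stochastic discount factor M:

\[
P_t^{(n)} = \mathbb{E}_t\!\left[ M_{t+1}\, P_{t+1}^{(n-1)} \right], \qquad P_t^{(0)} = 1, \qquad y_t^{(n)} = -\tfrac{1}{n} \log P_t^{(n)}.
\]

The tension described in the second essay arises because a sufficiently flexible SDF can satisfy this recursion for almost any yield data, which is why additional restrictions (such as the penalty-function approach) are needed to keep the SDF plausible.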
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics are concentrated on the decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms by analyzing the quality and quantity of the spatially structured information they extract from the data. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study on the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process.
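A minimal sketch of the hybrid idea follows (synthetic data and hypothetical variable names; a real MLRSS implementation would use conditional sequential Gaussian simulation for the residuals rather than the simple unconditional draws shown here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic spatial data: coordinates X (n x 2) and measurements z (n,)
rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(500, 2))
z = np.sin(X[:, 0] / 15) + 0.01 * X[:, 1] + rng.normal(0, 0.2, 500)

# Step 1: model the long-range spatial trend with an ML algorithm (MLP).
trend_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                           random_state=0).fit(X, z)
trend = trend_model.predict(X)

# Step 2: the residuals should be closer to stationary; in MLRSS they
# are characterized geostatistically (variography) and then simulated.
residuals = z - trend

# Step 3 (placeholder): draw residual realizations; a real implementation
# would condition the sequential simulation on the observed residuals.
n_realizations = 100
sims = trend[:, None] + rng.normal(0, residuals.std(),
                                   size=(len(z), n_realizations))

# Probability mapping: e.g. probability that the value exceeds a threshold.
p_exceed = (sims > 1.0).mean(axis=1)
print(p_exceed[:5])
```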
Abstract:
This research deals with the dynamic modeling of gas-lubricated tilting pad journal bearings with spring-supported pads, including experimental verification of the computation. On the basis of a mathematical model of a film bearing, a computer program has been developed that can be used for the time-dependent simulation of a special type of tilting pad gas journal bearing supported by a rotary spring under different loading conditions (transient running conditions due to externally imposed geometry variations in time). On the basis of the literature, different transformations have been used in the model to simplify the calculation. The numerical simulation is used to solve the non-stationary case of a gas film. The simulation results were compared with literature results for a stationary case (steady running conditions) and were found to agree. In addition, comparisons were made with a number of stationary and non-stationary bearing tests performed at Lappeenranta University of Technology (LUT) using bearings designed with the simulation program. Numerical simulation and the literature were also used to establish the influence of the different bearing parameters on the stability of the bearing. Comparisons were made with the literature on tilting pad gas bearings; this bearing type is rarely used, and only one literature reference has studied the same bearing type as that used at LUT. A new design of tilting pad gas bearing is introduced, based on a stainless steel body and electron-beam welding of the bearing parts. It has good operating characteristics and is easier to tune and faster to manufacture than traditional constructions. It is also suitable for large-scale serial production.
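The governing equation behind such a gas-film model is typically the compressible (isothermal) Reynolds equation; a standard non-stationary form, given here for reference rather than taken from the thesis, is

\[
\frac{\partial}{\partial x}\!\left( p h^{3} \frac{\partial p}{\partial x} \right)
+ \frac{\partial}{\partial z}\!\left( p h^{3} \frac{\partial p}{\partial z} \right)
= 6 \mu U \frac{\partial (p h)}{\partial x} + 12 \mu \frac{\partial (p h)}{\partial t},
\]

where p is the film pressure, h the film thickness, μ the gas viscosity and U the surface speed; the time-derivative (squeeze-film) term is what distinguishes the non-stationary case from the steady one.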
Abstract:
Changes in the frequency of occurrence of extreme weather events have been pointed out as a likely impact of global warming. In this context, this study aimed to detect climate change in series of extreme minimum and maximum air temperature at Pelotas, State of Rio Grande do Sul, Brazil (1896-2011), and its influence on the probability of occurrence of these variables. We used the generalized extreme value (GEV) distribution in its stationary and non-stationary forms; in the latter case, the GEV parameters vary over time. On the basis of goodness-of-fit tests and the maximum likelihood method, the GEV model in which the location parameter increases over time presents the best fit for the daily minimum air temperature series. This result describes a significant increase in the mean values of this variable, which indicates a potential reduction in the frequency of frosts. The daily maximum air temperature series is also described by a non-stationary model, whose location parameter decreases over time and whose scale parameter, related to the sample variance, rises between the beginning and end of the series. This result indicates a drop in the mean of the daily maximum air temperature values and an increased dispersion of the sample data.
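In standard notation (assumed here), the GEV distribution function and the non-stationary variant with a linear trend in the location parameter are

\[
F(x;\mu,\sigma,\xi) = \exp\!\left\{ -\left[ 1 + \xi\, \frac{x - \mu}{\sigma} \right]^{-1/\xi} \right\},
\qquad
\mu(t) = \mu_0 + \mu_1 t,
\]

valid for \(1 + \xi (x - \mu)/\sigma > 0\); a significant \(\mu_1 > 0\) fitted to the minimum-temperature series corresponds to the upward shift in location reported above, while \(\mu_1 < 0\) together with a growing scale parameter corresponds to the behavior found for the maximum-temperature series.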
Abstract:
The desire to create a statistical or mathematical model that would allow predicting future changes in stock prices was born many years ago. Economists and mathematicians have been trying to solve this task by applying statistical analysis and physical laws, but there are still no satisfactory results. The main reason for this is that a stock exchange is a non-stationary, unstable and complex system influenced by many factors. In this thesis, the New York Stock Exchange (NYSE) was considered as the system to be explored. A topological analysis, basic statistical tools and singular value decomposition were used to understand the behavior of the market. Two methods for normalizing the initial daily closing prices from the Dow Jones and S&P 500 indices were introduced and applied for further analysis. As a result, some unexpected features were identified, such as the shape of the distribution of the correlation matrix entries, the bulk of which is shifted to the right-hand side with respect to zero. The non-ergodicity of the NYSE was also confirmed graphically, and it was shown that the singular vectors differ from each other by a constant factor. This work yields no definitive conclusions, but it creates a good basis for further analysis of market topology.
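A minimal sketch of this kind of analysis (synthetic data; the thesis's two normalization methods are not reproduced here, and standardized log-returns are shown as one common choice):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily closing prices for 50 stocks over 1000 days.
prices = np.cumprod(1 + 0.01 * rng.standard_normal((1000, 50)), axis=0) * 100

# One common normalization: standardized log-returns.
log_ret = np.diff(np.log(prices), axis=0)
norm_ret = (log_ret - log_ret.mean(axis=0)) / log_ret.std(axis=0)

# Correlation matrix across stocks and its off-diagonal entry distribution.
C = np.corrcoef(norm_ret, rowvar=False)
off_diag = C[~np.eye(C.shape[0], dtype=bool)]
print("mean off-diagonal correlation:", off_diag.mean())  # bulk location

# Singular value decomposition of the normalized return matrix.
U, s, Vt = np.linalg.svd(norm_ret, full_matrices=False)
print("leading singular values:", s[:5])
```

For real market data, the bulk of the off-diagonal correlations sits to the right of zero because stocks share a common market mode, which is the kind of shifted distribution the abstract describes.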
Abstract:
The objects of study of this thesis are systems of first-order quasilinear equations. In the first part, an analysis of a model of ideal plasticity is carried out from the point of view of the classical Lie group of point symmetries. Planar flows are studied in both the stationary and non-stationary cases. Two new vector fields were obtained, completing the Lie algebra of the stationary case, whose subalgebras are classified into conjugacy classes under the action of the group. In the non-stationary case, a classification of the admissible Lie algebras according to the chosen force is performed. For each type of force, the vector fields are presented. The algebra of the highest possible dimension was obtained by considering monogenic forces, and it was classified into conjugacy classes. The symmetry reduction method is applied to obtain explicit and implicit solutions of several types, some of which are expressed in terms of one or two arbitrary functions of one variable and others in terms of Jacobi elliptic functions. Several solutions are interpreted physically in order to deduce the shape of feasible extrusion dies. In the second part, we are interested in solutions expressed in terms of Riemann invariants for first-order quasilinear systems. The method of generalized characteristics, as well as a method based on conditional symmetries for Riemann invariants, are extended so as to be applicable to systems in their elliptic regions. Their applicability is demonstrated on examples from non-stationary ideal plasticity for an irrotational flow, as well as the equations of fluid mechanics. A new approach based on the introduction of rotation matrices satisfying certain algebraic conditions is developed. It is directly applicable to non-homogeneous and non-autonomous systems without the need for prior transformations. Its effectiveness is illustrated by examples including a system governing the nonlinear interaction of waves and particles. The general solution is constructed explicitly.
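For orientation, a first-order quasilinear system in p independent variables and q dependent variables can be written (standard notation, assumed here) as

\[
\sum_{i=1}^{p} A^{i}(u)\, \frac{\partial u}{\partial x^{i}} = b(x, u), \qquad u = u(x) \in \mathbb{R}^{q},
\]

and, in the homogeneous case, one common rank-k ansatz for solutions in terms of Riemann invariants (following the conventions of the Riemann-invariant literature) takes the form

\[
u = f\!\left( r^{1}, \ldots, r^{k} \right), \qquad r^{A}(x, u) = \lambda^{A}_{i}(u)\, x^{i},
\]

where the \(\lambda^{A}\) are suitably chosen wave covectors of the system.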
Abstract:
During the 1990s the Wavelet Transform emerged as an important signal processing tool with potential applications in time-frequency analysis and non-stationary signal processing. Wavelets have gained popularity in a broad range of disciplines such as signal/image compression, medical diagnostics, boundary value problems, geophysical signal processing, statistical signal processing, pattern recognition, underwater acoustics, etc. In 1993, G. Evangelista introduced the Pitch-Synchronous Wavelet Transform, which is particularly suited to pseudo-periodic signal processing. The work presented in this thesis mainly concentrates on two interrelated topics in signal processing, viz. Wavelet Transform based signal compression and the computation of the Discrete Wavelet Transform. A new compression scheme is described in which the Pitch-Synchronous Wavelet Transform technique is combined with the popular Linear Predictive Coding method for pseudo-periodic signal processing. Subsequently, a novel Parallel Multiple Subsequence structure is presented for the efficient computation of the Wavelet Transform. Case studies are also presented to highlight potential applications.
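As a minimal illustration of the kind of DWT computation discussed (one level of the Haar analysis filter bank; this is not the thesis's Parallel Multiple Subsequence structure):

```python
import numpy as np

def haar_dwt_level(x: np.ndarray):
    """One level of the Haar DWT: pairwise averages (approximation)
    and pairwise differences (detail), each scaled by 1/sqrt(2)."""
    x = x[: len(x) // 2 * 2]            # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# Example: decompose one level, then perfectly reconstruct.
x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt_level(x)
x_rec = np.empty_like(x)
x_rec[0::2] = (a + d) / np.sqrt(2)
x_rec[1::2] = (a - d) / np.sqrt(2)
assert np.allclose(x, x_rec)
print(a, d)
```

Applying the same step recursively to the approximation coefficients yields the full multi-level DWT, and it is this cascade that parallel structures aim to compute efficiently.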
Abstract:
Sonar signal processing comprises a large number of signal processing algorithms for implementing functions such as target detection, localization, classification, tracking and parameter estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, which are primarily meant for stationary signals. Interestingly, the signals received by sonar sensors are often non-stationary, and hence processing methods capable of handling the non-stationarity will fare better than Fourier transform based methods. Time-frequency methods (TFMs) are known as among the best DSP tools for non-stationary signal processing, with which one can analyze signals in the time and frequency domains simultaneously. However, other than the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in the fields of speech and image processing and biomedical applications, but not many in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each of the applications.
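A minimal sketch of the simplest TFM mentioned, the STFT, applied to a synthetic non-stationary (chirp) signal of the kind sonar must handle (illustrative only; sonar-specific processing is beyond this sketch):

```python
import numpy as np
from scipy.signal import chirp, stft

fs = 8000                               # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
# Non-stationary test signal: frequency sweeps from 100 Hz to 2 kHz.
x = chirp(t, f0=100, f1=2000, t1=1.0, method="linear")

# Short-Time Fourier Transform: a time-frequency representation.
f, tau, Zxx = stft(x, fs=fs, nperseg=256, noverlap=192)
power = np.abs(Zxx) ** 2

# The ridge of maximal power tracks the instantaneous frequency,
# something a single global Fourier spectrum cannot reveal.
ridge = f[np.argmax(power, axis=0)]
print(ridge[:5], ridge[-5:])            # rises from ~100 Hz toward ~2 kHz
```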
Abstract:
This study is concerned with Autoregressive Moving Average (ARMA) models of time series. ARMA models form a subclass of the class of general linear models which represent stationary time series, a phenomenon encountered most often in practice by engineers, scientists and economists. It is always desirable to employ models which use parameters parsimoniously, and parsimony is achieved by ARMA models because they have only a finite number of parameters. Even though the discussion is primarily concerned with stationary time series, we later take up the case of homogeneous non-stationary time series, which can be transformed into stationary time series. Time series models, obtained with the help of present and past data, are used for forecasting future values. The physical sciences as well as the social sciences benefit from forecasting models. The role of forecasting cuts across all fields of management (finance, marketing, production, business economics), as well as signal processing, communication engineering, chemical processes, electronics, etc. This high applicability of time series is the motivation for this study.
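In standard notation, an ARMA(p, q) model for a stationary series \(x_t\) driven by white noise \(\varepsilon_t\) is

\[
x_t = \sum_{i=1}^{p} \phi_i\, x_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j\, \varepsilon_{t-j},
\]

and a homogeneous non-stationary series is handled by differencing it d times, \(w_t = (1 - B)^{d} x_t\) with the backshift operator \(B x_t = x_{t-1}\), until \(w_t\) is stationary; fitting an ARMA(p, q) model to \(w_t\) gives the ARIMA(p, d, q) construction.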
Abstract:
Speech is the most natural means of communication among human beings, and speech processing and recognition have been intensive areas of research for the last five decades. Since speech recognition is a pattern recognition problem, classification is an important part of any speech recognition system. In this work, a speech recognition system is developed for recognizing speaker-independent spoken digits in Malayalam. Voice signals are sampled directly from the microphone. The proposed method is implemented for 1000 speakers uttering 10 digits each. Since the speech signals are affected by background noise, the signals are cleaned by removing the noise using a wavelet denoising method based on soft thresholding. The features are then extracted from the signals using the Discrete Wavelet Transform (DWT), which is well suited to processing non-stationary signals like speech owing to its multi-resolution, multi-scale analysis characteristics. Speech recognition is a multiclass classification problem, so the feature vector set obtained is classified using three classifiers capable of handling multiple classes: Artificial Neural Networks (ANN), Support Vector Machines (SVM) and Naive Bayes. During the classification stage, the classifiers are trained using information relating to known patterns and then tested on the test data set. The performance of all these classifiers is evaluated based on recognition accuracy. All three methods produced good recognition accuracy: DWT with ANN produced a recognition accuracy of 89%, the SVM and DWT combination produced 86.6%, and the Naive Bayes and DWT combination produced 83.5%. ANN is found to be the best among the three methods.
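A minimal sketch of the wavelet denoising and DWT feature extraction steps (using the PyWavelets library; the wavelet choice, decomposition level and energy features are illustrative assumptions, not the thesis's exact settings):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
# Stand-in for a spoken-digit recording: a tone mixture plus background noise.
clean = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 800 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# Wavelet denoising by soft thresholding of the detail coefficients.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD noise estimate
thresh = sigma * np.sqrt(2 * np.log(noisy.size))    # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")

# Simple DWT features: energy of each sub-band of the denoised signal.
feat_coeffs = pywt.wavedec(denoised, "db4", level=5)
features = np.array([np.sum(c ** 2) for c in feat_coeffs])
print(features)
```

The resulting feature vector (one energy value per sub-band) is the kind of input that would then be fed to an ANN, SVM or Naive Bayes classifier.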
Abstract:
Speech is the primary, most prominent and convenient means of communication in audible language. Through speech, people can express their thoughts, feelings or perceptions by the articulation of words. Human speech is a complex signal which is non-stationary in nature. It carries immensely rich information about the words spoken, as well as the accent, attitude, expression, intention, sex, emotion and style of the speaker. The main objective of Automatic Speech Recognition (ASR) is to identify whatever people speak by means of computer algorithms, enabling people to communicate with a computer in natural spoken language. Automatic recognition of speech by machines has been one of the most exciting, significant and challenging areas of research in the field of signal processing over the past five to six decades. Despite the developments and intensive research done in this area, the performance of ASR is still lower than that of speech recognition by humans and has yet to achieve a completely reliable performance level. The main objective of this thesis is to develop an efficient speech recognition system for recognizing speaker-independent isolated words in Malayalam.
Abstract:
The motion of a viscous incompressible fluid in a bounded domain with a smooth boundary can be described by the nonlinear Navier-Stokes equations. This description corresponds to the so-called Eulerian approach. We develop a new approximation method for the Navier-Stokes equations, in both the stationary and the non-stationary case, by a suitable coupling of the Eulerian and the Lagrangian representations of the flow, where the latter is defined by the trajectories of the particles of the fluid. The method leads to a sequence of uniquely determined approximate solutions with a high degree of regularity, containing a subsequence that converges to a limit function v which is a weak solution of the Navier-Stokes equations.
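For reference, the incompressible Navier-Stokes system in a bounded domain \(\Omega\) reads, in a standard form (notation assumed here),

\[
\partial_t v - \nu \Delta v + (v \cdot \nabla) v + \nabla p = f, \qquad \nabla \cdot v = 0 \quad \text{in } \Omega,
\]

with the no-slip condition \(v = 0\) on \(\partial\Omega\); in the stationary case the term \(\partial_t v\) is dropped. The Lagrangian representation coupled to this Eulerian description is given by the particle trajectories \(X(t, x)\) solving \(\dot{X} = v(t, X)\) with \(X(0, x) = x\).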