877 results for Non-stationary iterative method
Abstract:
BACKGROUND: Complete mitochondrial genome sequences have become important tools for the study of genome architecture, phylogeny, and molecular evolution. Despite the rapid increase in available mitogenomes, the taxonomic sampling often poorly reflects phylogenetic diversity and is also often biased to represent deeper (family-level) evolutionary relationships. RESULTS: We present the first fully sequenced ant (Hymenoptera: Formicidae) mitochondrial genomes. We sampled four mitogenomes from three species of fire ants, genus Solenopsis, which represent various evolutionary depths. Overall, ant mitogenomes appear to be typical of hymenopteran mitogenomes, displaying a general A+T-bias. The Solenopsis mitogenomes are slightly more compact than other hymenopteran mitogenomes (~15.5 kb), retaining all protein coding genes, ribosomal RNAs, and transfer RNAs. We also present evidence of recombination between the two conspecific Solenopsis mitogenomes. Finally, we discuss potential ways to improve the estimation of phylogenies using complete mitochondrial genome sequences. CONCLUSIONS: The ant mitogenome presents an important addition to the continued efforts in studying hymenopteran mitogenome architecture, evolution, and phylogenetics. We provide further evidence that sampling across many taxonomic levels (including conspecifics and congeners) is useful and important to gain detailed insights into mitogenome evolution. We also discuss ways that may help improve the use of mitogenomes in phylogenetic analyses by accounting for non-stationary and non-homogeneous evolution among branches.
Abstract:
Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since, in general, the regressors are related non-linearly to the response, their selection with the usual F test is not possible. In this work we propose a solution to this predictor selection problem by defining generalized test statistics and adapting a non-parametric bootstrap method to estimate their p-values. We include a numerical example with automobile insurance data.
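The two-step procedure (latent variables from distances, then OLS on them) together with a bootstrap p-value can be sketched in Python. This is an illustrative reconstruction only: Euclidean distances, synthetic data, and the residual-bootstrap fit-improvement statistic are all assumptions, not the paper's exact dissimilarity or test.

```python
import numpy as np

def latent_regressors(X, k):
    """Classical MDS: turn pairwise squared distances into k latent coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
    B = -0.5 * J @ D2 @ J                                # Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                        # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

Z = latent_regressors(X, k=3)
Z1 = np.column_stack([np.ones(len(Z)), Z])               # full model design
beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)
resid = y - Z1 @ beta

# Bootstrap p-value for dropping the last latent regressor: refit the reduced
# model, then resample its residuals to build the null distribution of the
# fit-improvement statistic (a stand-in for the paper's generalized statistic).
Z0 = Z1[:, :-1]
b0, *_ = np.linalg.lstsq(Z0, y, rcond=None)
r0 = y - Z0 @ b0
stat = r0 @ r0 - resid @ resid
null = []
for _ in range(500):
    yb = Z0 @ b0 + rng.choice(r0, size=len(y), replace=True)
    bb, *_ = np.linalg.lstsq(Z0, yb, rcond=None)
    rb0 = yb - Z0 @ bb
    bf, *_ = np.linalg.lstsq(Z1, yb, rcond=None)
    rb1 = yb - Z1 @ bf
    null.append(rb0 @ rb0 - rb1 @ rb1)
pval = float(np.mean(np.array(null) >= stat))
```

Because the latent coordinates come from the spectral decomposition rather than from the raw predictors, the bootstrap works on the reduced model's residuals instead of relying on the classical F distribution.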
Abstract:
Precession electron diffraction (PED) is a hollow-cone non-stationary illumination technique for electron diffraction pattern collection under quasi-kinematical conditions (as in X-ray diffraction), which enables "ab-initio" solving of the crystalline structures of nanocrystals. The PED technique has recently been used in TEM instruments at voltages of 100 to 300 kV to turn them into true electron diffractometers, thus enabling electron crystallography. The PED technique, when combined with fast electron diffraction acquisition and pattern-matching software techniques, may also be used for high-magnification ultra-fast mapping of variable crystal orientations and phases, similar to what is achieved with the Electron Backscatter Diffraction (EBSD) technique in Scanning Electron Microscopes (SEM) at lower magnifications and longer acquisition times.
Abstract:
This work consists of three essays investigating the ability of structural macroeconomic models to price zero-coupon U.S. government bonds. 1. A small-scale three-factor DSGE model implying a constant term premium is able to provide a reasonable fit for the term structure only at the expense of the persistence parameters of the structural shocks. A test of the structural model against one with constant but unrestricted prices-of-risk parameters shows that the exogenous prices-of-risk model is only weakly preferred. We provide an MLE-based variance-covariance matrix for the Metropolis proposal density that improves convergence speed in MCMC chains. 2. A prices-of-risk specification that is affine in observable macro-variables is excessively flexible and provides term-structure fit without significantly altering the structural parameters. The exogenous component of the SDF separates the macro part of the model from the term structure, and the good term-structure fit is driven by an extremely volatile SDF and an implausible implied average short rate. We conclude that the no-arbitrage restrictions do not suffice to temper the SDF, so more restrictions are needed. We introduce a penalty-function methodology that proves useful in showing that affine prices-of-risk specifications can reconcile stable macro-dynamics with a good term-structure fit and a plausible SDF. 3. The level factor is reproduced most importantly by the preference shock, to which it is strongly and positively related, but technology and monetary shocks, with negative loadings, also contribute to its replication. The slope factor is related only to the monetary policy shocks and is poorly explained. We find that there are gains in in- and out-of-sample forecasts of consumption and inflation if term-structure information is used in a time-varying hybrid prices-of-risk setting.
In-sample yield forecasts are better in models with non-stationary shocks for the period 1982-1988. After this period, time-varying market-price-of-risk models provide better in-sample forecasts. For the period 2005-2008, out-of-sample forecasts of consumption and inflation are better if term-structure information is incorporated in the DSGE model, but yields are better forecast by a pure macro DSGE model.
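A random-walk Metropolis sampler whose proposal density takes a supplied variance-covariance matrix (here standing in for the MLE-based matrix the first essay describes) can be sketched as follows. The Gaussian target and all numerical values are illustrative, not the essay's model.

```python
import numpy as np

def metropolis(logpost, theta0, prop_cov, n_iter=5000, seed=1):
    """Random-walk Metropolis; prop_cov shapes the Gaussian proposal density."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(np.asarray(prop_cov))     # proposal Cholesky factor
    theta = np.asarray(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((n_iter, theta.size))
    accepted = 0
    for i in range(n_iter):
        prop = theta + L @ rng.standard_normal(theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject step
            theta, lp = prop, lp_prop
            accepted += 1
        chain[i] = theta
    return chain, accepted / n_iter

# Toy posterior: a correlated bivariate Gaussian; scaling the proposal by the
# target's own covariance mimics tuning the proposal to the posterior shape.
target_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(target_cov)
logpost = lambda th: -0.5 * th @ prec @ th
chain, rate = metropolis(logpost, [3.0, -3.0], 0.7 * target_cov)
```

A proposal covariance matched to the posterior's curvature (as an MLE-based Hessian estimate provides) raises the acceptance rate and shortens the burn-in relative to an isotropic proposal.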
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on the decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residual sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, followed by sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of the ML algorithms, by analyzing the quality and quantity of the spatially structured information extracted from data with them. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process.
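The two-stage MLRSS idea (an ML model for the long-range trend, then geostatistical treatment of the residuals) can be sketched roughly as below. The synthetic field, the MLP configuration, and the crude empirical variogram are illustrative stand-ins for the paper's models, not its actual data or algorithms.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic spatial field on a 10 x 10 domain: smooth non-stationary trend + noise
xy = rng.uniform(0, 10, size=(400, 2))
trend = np.sin(xy[:, 0] / 3) + 0.05 * xy[:, 1]
z = trend + 0.1 * rng.normal(size=400)

# Stage 1: an MLP captures the long-range (non-stationary) trend
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
mlp.fit(xy, z)
resid = z - mlp.predict(xy)

# Stage 2: the residuals are closer to stationarity; their empirical variogram
# is what would drive the sequential simulation step
def empirical_variogram(coords, r, lags):
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gamma.append(0.5 * np.mean((r[:, None] - r[None, :])[mask] ** 2))
    return np.array(gamma)

gam = empirical_variogram(xy, resid, np.linspace(0, 5, 6))
```

The point of the split is visible in the numbers: the residual variance is much smaller than the raw field's, so the stationarity assumptions of variography are far less strained after the trend is removed.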
Abstract:
The aim of this thesis was to examine how the materials procurement of a frame-element factory is organized and controlled at present. The study sought to identify the bottlenecks restricting the material process and to find development measures for the problem areas from a process-thinking perspective. The object of examination was the company's operative material process, from the ordering of items to warehousing. A qualitative research method was used, and the data for the empirical part were obtained through interviews and from the quality documentation. The company's current state was modeled with process charts, and the information and material flows of the process, as well as the most important functions in the material chain, were identified. Based on the process analysis and interviews, development proposals were defined to improve process performance. After mapping the current state, the biggest problems in the material process were found to relate to managing the timing of orders, the impact of changes on the process, and the lack of responsibilities and overall control. These problems stem mainly from the project-like nature of the construction industry. Another development target was more efficient information management, especially automating process steps by making use of information systems. Process thinking proved to be a suitable method for developing the operations. The study produced development proposals, on the basis of which a new operating model for materials control was formed. The most important element of the model is the use of advance information to support order planning. Preliminary material quantities are also forwarded as advance information to suppliers, who can then better plan their own production capacity. Order planning proceeds with increasing precision, and the final material quantity and time of need are communicated with the call-off. The model also includes developing material reception and warehousing, and managing changes by making better use of the information system. The most critical aspects of the material process will be information management and the related questions of responsibility.
Abstract:
This research deals with the dynamic modeling of gas-lubricated tilting-pad journal bearings provided with spring-supported pads, including experimental verification of the computation. On the basis of a mathematical model of a film bearing, a computer program has been developed that can be used for the time-dependent simulation of a special type of tilting-pad gas journal bearing supported by a rotary spring under different loading conditions (transient running conditions due to externally imposed geometry variations in time). On the basis of the literature, different transformations have been used in the model to simplify calculation. The numerical simulation is used to solve the non-stationary case of a gas film. The simulation results were compared with literature results in a stationary case (steady running conditions) and were found to be equal. In addition, comparisons were made with a number of stationary and non-stationary bearing tests performed at Lappeenranta University of Technology using bearings designed with the simulation program. A study was also made, using numerical simulation and the literature, of the influence of the different bearing parameters on the stability of the bearing. Comparisons were made with the literature on tilting-pad gas bearings; this bearing type is rarely used, and only one literature reference has studied the same bearing type as that used at LUT. A new design of tilting-pad gas bearing is introduced. It is based on a stainless steel body and electron-beam welding of the bearing parts. It has good operating characteristics and is easier to tune and faster to manufacture than traditional constructions. It is also suitable for large-scale serial production.
Abstract:
Glycerol, a co-product of biodiesel production, was used as a carbon source for the kinetic studies and production of biosurfactants by P. aeruginosa MSIC02. The highest fermentative parameters (Y P/X = 3.04 g g⁻¹; Y P/S = 0.189 g g⁻¹; P B = 31.94 mg L⁻¹ h⁻¹ and P X = 10.5 mg L⁻¹ h⁻¹) were obtained at concentrations of 0.4% (w/v) NaNO3 and 2% (w/v) glycerol. The rhamnolipid exhibited 80% emulsification of kerosene, a surface tension of 32.5 mN m⁻¹, CMC = 28.2 mg L⁻¹, C20 (the bulk surfactant concentration that reduces the surface tension of the solvent by 20 dyn cm⁻¹) = 0.99 mg L⁻¹, Γm (surface excess concentration) = 2.4 × 10⁻²⁶ mol Å⁻² and S (surface area) = 70.4 Ų molecule⁻¹ with solutions containing 10% NaCl. A mathematical model based on the logistic equation was used to represent the process. Model parameters were estimated by a non-linear regression method. This approach was able to give a good description of the process.
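A minimal sketch of the modeling step described above: fitting a logistic growth law by non-linear regression. The data here are synthetic and every parameter value is illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, Xm, X0, mu):
    """Logistic growth: biomass X(t) with capacity Xm, inoculum X0, rate mu."""
    return Xm / (1 + (Xm / X0 - 1) * np.exp(-mu * t))

# Synthetic 'observed' biomass (g/L) over 48 h with measurement noise
t = np.linspace(0, 48, 13)
rng = np.random.default_rng(1)
X_obs = logistic(t, 10.5, 0.4, 0.25) + 0.1 * rng.normal(size=t.size)

# Non-linear least squares recovers the three kinetic parameters
popt, pcov = curve_fit(logistic, t, X_obs, p0=[8.0, 0.5, 0.2])
Xm_hat, X0_hat, mu_hat = popt
```

The same fit applies to product or substrate curves via the Luedeking-Piret-style yield relations once the biomass parameters are in hand; only the observed series changes.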
Abstract:
The major aim of this study was to characterize a soluble Plasmodium falciparum antigen from the plasma of malaria-infected humans and Plasmodium falciparum culture supernatants, using immunoabsorbent techniques and Western blotting. An Mr 60-kDa protein was isolated from the plasma of patients with Plasmodium falciparum malaria by affinity chromatography using rabbit anti-Proteus spp GDH(NADP+) serum as ligand. This protein, present in plasma of patients with acute Plasmodium falciparum infection, in Plasmodium falciparum culture supernatants, and in immune complexes, was tested with Plasmodium falciparum malaria hyperimmune serum from patients living in hyperendemic areas and rabbit anti-Proteus spp GDH(NADP+) serum prepared in the laboratory. In this report, we describe the results of a study showing that parasite GDH(NADP+) can be used to detect the presence of Plasmodium falciparum. It appears that this technique permits the chromatographic detection of a Plasmodium falciparum excretion antigen that may be used in the production of monoclonal antibodies to improve immunodiagnostic assays for the detection of antigenemia, and opens the possibility of its use as a non-microscopic screening method.
Abstract:
Alzheimer’s disease (AD) is the most common form of dementia. Characteristic changes in an AD brain are the formation of β-amyloid protein (Aβ) plaques and neurofibrillary tangles, though other alterations in the brain have also been connected to AD. No cure is available for AD, and it is one of the leading causes of death among the elderly in developed countries. Liposomes are biocompatible and biodegradable spherical phospholipid bilayer vesicles that can enclose various compounds. Several functional groups can be attached to the surface of liposomes in order to achieve long-circulating target-specific liposomes. Liposomes can be utilized as drug carriers and as vehicles for imaging agents. Positron emission tomography (PET) is a non-invasive imaging method for studying biological processes in living organisms. In this study, various nucleophilic 18F-labeling synthesis approaches and leaving groups were developed for novel PET imaging tracers targeting AD pathology in the brain. The tracers were the thioflavin derivative [18F]flutemetamol, the curcumin derivative [18F]treg-curcumin, and functionalized [18F]nanoliposomes, which all target Aβ in the AD brain. These tracers were evaluated using transgenic AD mouse models. In addition, an 18F-labeling synthesis was developed for a tracer targeting the S1P3 receptor. The chosen 18F-fluorination strategy had an effect on the radiochemical yield and specific activity of the tracers. [18F]Treg-curcumin and functionalized [18F]nanoliposomes had low uptake in AD mouse brain, whereas [18F]flutemetamol exhibited the appropriate properties for preclinical Aβ-imaging. All of these tracers can be utilized in studies of the pathology and treatment of AD and related diseases.
Abstract:
The desire to create a statistical or mathematical model that would allow predicting future changes in stock prices was born many years ago. Economists and mathematicians have tried to solve this task by applying statistical analysis and physical laws, but there are still no satisfactory results. The main reason is that a stock exchange is a non-stationary, unstable and complex system influenced by many factors. In this thesis the New York Stock Exchange was considered as the system to be explored. A topological analysis, basic statistical tools and singular value decomposition were used to understand the behavior of the market. Two methods for normalizing the initial daily closure prices of the Dow Jones and S&P500 were introduced and applied for further analysis. As a result, some unexpected features were identified, such as the shape of the distribution of the correlation matrix, the bulk of which is shifted to the right-hand side with respect to zero. The non-ergodicity of the NYSE was also confirmed graphically. It was shown that the singular vectors differ from each other by a constant factor. This work draws no definitive conclusions, but it creates a good basis for further analysis of market topology.
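The pipeline described above (normalize daily closures to returns, build the correlation matrix, inspect its spectrum) can be sketched as follows. The synthetic one-factor "market" is an assumption for illustration, not NYSE data, but it reproduces the qualitative feature mentioned in the abstract: an off-diagonal correlation bulk shifted to the right of zero.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic daily closures: a shared market mode plus idiosyncratic noise
n_days, n_stocks = 500, 30
market = rng.normal(0.0, 0.01, size=n_days)
rets = market[:, None] + rng.normal(0.0, 0.02, size=(n_days, n_stocks))
prices = 100 * np.exp(np.cumsum(rets, axis=0))

# Normalization: log-returns, standardized per stock
logret = np.diff(np.log(prices), axis=0)
norm = (logret - logret.mean(axis=0)) / logret.std(axis=0)

# Correlation matrix and its eigenvalue (singular value) spectrum:
# one large "market" eigenvalue separates from the noise bulk
C = norm.T @ norm / len(norm)
w = np.linalg.eigvalsh(C)
offdiag = C[~np.eye(n_stocks, dtype=bool)]
```

The mean of `offdiag` is positive because every stock shares the market mode, which is exactly the kind of rightward shift of the correlation distribution the thesis reports.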
Abstract:
Worldwide, infanticide is an important cause of child mortality. In this thesis, infanticides are analyzed according to parental status, mode of death, and the age of the child. The first hypothesis of this thesis proposes that non-biological parents are overrepresented in cases of infanticide among children under twelve, relative to the base rates of the population. Hypothesis 2 predicts that infanticides by biological parents should be more lethal in character (use of firearms, poisoning, etc.) than those by non-biological parents, which should be characterized mainly by maltreatment and neglect. Other hypotheses are examined with respect to suicide rates and the sex of the aggressor. The present study covers cases of infanticide of children twelve years old and under in the territory of Quebec, drawn from the archives of the coroner's office for the period between 1990 and 2007 (n=182). The results partially support hypothesis 1 and confirm hypothesis 2. In this sense, the results of this study support the evolutionary hypotheses that posit an influence of parental status on infanticidal behavior. Overall, these results highlight the qualitative differences that exist between biological and non-biological parents in cases of infanticide. The implications of the results are discussed.
Abstract:
Sonar signal processing comprises a large number of signal processing algorithms implementing functions such as target detection, localisation, classification, tracking and parameter estimation. Current implementations of these functions rely on conventional techniques largely based on Fourier methods, primarily meant for stationary signals. Interestingly, the signals received by sonar sensors are often non-stationary, and hence processing methods capable of handling the non-stationarity will definitely fare better than Fourier-transform-based methods. Time-frequency methods (TFMs) are known as among the best DSP tools for non-stationary signal processing, with which one can analyze signals in the time and frequency domains simultaneously. But, other than the STFT, TFMs have been largely limited to academic research because of the complexity of the algorithms and the limitations of computing power. With the availability of fast processors, many applications of TFMs have been reported in the fields of speech and image processing and biomedical applications, but not many in sonar processing. A structured effort to fill this lacuna by exploring the potential of TFMs in sonar applications is the net outcome of this thesis. To this end, four TFMs have been explored in detail, viz. the Wavelet Transform, the Fractional Fourier Transform, the Wigner-Ville Distribution and the Ambiguity Function, and their potential in implementing five major sonar functions has been demonstrated with very promising results. What has been conclusively brought out in this thesis is that there is no "one best TFM" for all applications, but there is "one best TFM" for each application. Accordingly, the TFM has to be adapted and tailored in many ways in order to develop specific algorithms for each application.
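Of the TFMs discussed, the STFT is the simplest to demonstrate. The short sketch below, on a synthetic linear chirp (a standard non-stationary test signal, not sonar data), shows what a plain Fourier transform cannot: the dominant frequency localized as a function of time.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                             # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
# Linear chirp: instantaneous frequency f(t) = 50 + 100 t Hz (50 -> 250 Hz)
x = np.cos(2 * np.pi * (50 * t + 50 * t ** 2))

# STFT: short windows trade frequency resolution for time localization
f, tt, Z = stft(x, fs=fs, nperseg=256)

# Dominant frequency in each time slice tracks the chirp law
peak = f[np.abs(Z).argmax(axis=0)]
```

Halfway through the signal (t ≈ 1 s) the peak sits near 150 Hz, as the chirp law predicts, whereas the magnitude of a single full-length FFT of `x` would only show energy smeared across the whole 50-250 Hz band.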
Abstract:
Active microwave imaging is explored as an imaging modality for early detection of breast cancer. When exposed to microwaves, breast tumors exhibit electrical properties that are significantly different from those of healthy breast tissue. The two approaches of active microwave imaging, the confocal microwave technique using measured reflected signals and microwave tomographic imaging using measured scattered signals, are addressed here. Normal and malignant breast tissue samples from the same person were studied within 30 minutes of mastectomy. Corn syrup is used as the coupling medium, as its dielectric parameters show a good match with those of the normal breast tissue samples. As the bandwidth of the transmitter is an important aspect of the time-domain confocal microwave imaging approach, a wideband bowtie antenna having a 2:1 VSWR bandwidth of 46% was designed for the transmission and reception of microwave signals. The same antenna is also used for microwave tomographic imaging at a frequency of 3000 MHz. Experimentally obtained time-domain results are substantiated by finite-difference time-domain (FDTD) analysis. 2-D tomographic images are reconstructed from the collected scattered data using the distorted Born iterative method. Variations of dielectric permittivity in the breast samples are distinguishable in the obtained permittivity profiles.
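As a toy illustration of the FDTD analysis mentioned above, a normalized 1-D Yee update in free space can be sketched as follows. Grid size, Courant number, probe location, and source shape are arbitrary choices for demonstration, not the paper's 2-D simulation setup.

```python
import numpy as np

# 1-D FDTD (Yee scheme), normalized units: E and H live on staggered grids
nz, nt = 400, 900
S = 0.5                                   # Courant number (stable for S <= 1 in 1-D)
ez = np.zeros(nz)
hy = np.zeros(nz)
src = 100                                 # soft-source cell
probe = np.empty(nt)                      # field recorded 200 cells away

for n in range(nt):
    hy[:-1] += S * (ez[1:] - ez[:-1])     # update H from the curl of E
    ez[1:] += S * (hy[1:] - hy[:-1])      # update E from the curl of H
    ez[src] += np.exp(-((n - 40) / 12) ** 2)  # Gaussian pulse injection
    probe[n] = ez[300]
```

Leapfrogging the two update lines in time is the whole method; the Gaussian pulse launched at the source propagates down the grid and registers at the probe a few hundred steps later, which is exactly the time-domain picture confocal imaging exploits.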