882 results for "the least squares distance method"
Abstract:
1. Genome-wide association studies (GWAS) enable detailed dissection of the genetic basis for organisms' ability to adapt to a changing environment. In long-term studies of natural populations, individuals are often marked at one point in their life and then repeatedly recaptured. It is therefore essential that a method for GWAS accounts for this repeated sampling. In a GWAS, the effects of thousands of single-nucleotide polymorphisms (SNPs) need to be fitted, and any model development is constrained by the computational requirements. A method is therefore required that can fit a highly hierarchical model while remaining computationally fast enough to be useful. 2. Our method fits fixed SNP effects in a linear mixed model that can include both random polygenic effects and permanent environmental effects. In this way, the model can correct for population structure and model repeated measures. The covariance structure of the linear mixed model is estimated first and subsequently used in a generalized least squares setting to fit the SNP effects. The method was evaluated in a simulation study based on observed genotypes from a long-term study of collared flycatchers in Sweden. 3. The method presented here successfully estimated permanent environmental effects from simulated repeated-measures data. Additionally, we found that, especially for phenotypes with large between-year variation, the repeated-measurements model gives a substantial increase in power compared to a model using average phenotypes as the response. 4. The method is available in the R package RepeatABEL. It increases power in GWAS with repeated measures, especially for long-term studies of natural populations, and the R implementation is expected to facilitate modelling of longitudinal data for studies of both animal and human populations.
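The two-stage strategy described in this abstract — estimate the covariance structure once, then fit each SNP by generalized least squares — can be sketched as follows. This is a generic GLS sketch on synthetic data, not the RepeatABEL implementation; the covariance matrix V is assumed to have been estimated already, and the genotype coding and effect sizes are invented.

```python
import numpy as np

def gls_fit(X, y, V):
    """Generalized least squares: beta = (X' V^-1 X)^-1 X' V^-1 y."""
    Vinv = np.linalg.inv(V)
    XtVi = X.T @ Vinv
    return np.linalg.solve(XtVi @ X, XtVi @ y)

# Toy example: intercept + one SNP coded 0/1/2; identity covariance
# stands in for the estimated polygenic + permanent-environment structure.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=100)
X = np.column_stack([np.ones(100), geno])
y = 2.0 + 0.5 * geno + rng.normal(0.0, 0.1, 100)
beta = gls_fit(X, y, np.eye(100))
```

With V = I the fit reduces to ordinary least squares; a non-diagonal V is what lets the model absorb relatedness and repeated measures on the same individual.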
Abstract:
A new method for the evaluation of the efficiency of parabolic trough collectors, called the Rapid Test Method, is investigated at the Solar Institut Jülich. The basic concept is to carry out measurements under stagnation conditions, which makes the process fast and inexpensive because no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is assumed to be the stagnation temperature and hence the average temperature inside the collector. This leads to a systematic error, which can be rectified through the introduction of a correction factor. A model of the collector is simulated with COMSOL Multiphysics to study the size of the correction factor depending on collector geometry and working conditions. The resulting values are compared with experimental data obtained at a test rig at the Solar Institut Jülich. These results do not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver's inner wall temperature where it receives the least amount of solar radiation. The last two chapters analyse the specific heat capacity as a function of pressure and temperature and present some considerations about the uncertainties in the efficiency curve obtained with the Rapid Test Method.
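Collector efficiency curves of the kind discussed in the final chapter are commonly written in the steady-state form eta = eta0 - a1*dT/G - a2*dT^2/G (the EN 12975 form is an assumption here; the abstract does not state which parameterization is used). A minimal sketch with illustrative, not measured, coefficients:

```python
def collector_efficiency(eta0, a1, a2, t_mean, t_ambient, irradiance):
    """Steady-state efficiency curve eta = eta0 - a1*dT/G - a2*dT^2/G.
    Coefficients here are illustrative placeholders, not test-rig values."""
    dT = t_mean - t_ambient
    return eta0 - a1 * dT / irradiance - a2 * dT**2 / irradiance

# Illustrative values: eta0 = 0.75, a1 = 0.5 W/(m^2 K), a2 = 0.01 W/(m^2 K^2)
eta = collector_efficiency(0.75, 0.5, 0.01, 150.0, 25.0, 900.0)
```

Uncertainty in the measured mean temperature (the systematic error the correction factor addresses) propagates directly into the dT terms of such a curve.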
Abstract:
One of the most disputed matters in the theory of finance has been the theory of capital structure. The seminal contributions of Modigliani and Miller (1958, 1963) gave rise to a multitude of studies and debates. Since that initial spark, the financial literature has offered two competing theories of the financing decision: the trade-off theory and the pecking order theory. The trade-off theory suggests that firms have an optimal capital structure balancing the benefits and costs of debt. The pecking order theory approaches the firm's capital structure from an information-asymmetry perspective and assumes a hierarchy of financing, with firms using internal funds first, followed by debt and, as a last resort, equity. This thesis analyses the trade-off and pecking order theories and their predictions on a panel of 78 Finnish firms listed on the OMX Helsinki stock exchange. Estimations are performed for the period 2003–2012. The data are collected from the Datastream system and consist of financial statement data. A number of capital structure determinants are identified: firm size, profitability, growth opportunities, risk, asset tangibility, taxes, speed of adjustment, and financial deficit. Regression analysis is used to examine the effects of these firm characteristics on capital structure, with models formed on the basis of the relevant theories. The general capital structure model is estimated with a fixed-effects estimator. Dynamic models play an important role in several areas of corporate finance, but the combination of fixed effects and lagged dependent variables complicates estimation; a dynamic partial adjustment model is therefore estimated using the Arellano and Bond (1991) first-differenced generalized method of moments, ordinary least squares, and fixed-effects estimators. The results for Finnish listed firms support the predicted effects of profitability, firm size, and non-debt tax shields.
No conclusive support for the pecking order theory is found; however, its effect cannot be fully ignored, and it is concluded that, rather than being substitutes, the trade-off and pecking order theories appear to complement each other. For the partial adjustment model, the results show that Finnish listed firms adjust towards their target capital structure at a speed of 29% a year using the book debt ratio.
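The partial adjustment mechanism behind a reported adjustment speed can be illustrated on synthetic data: debt moves a fraction lambda of the way toward its target each year, and lambda is recovered by least squares. The target ratio, noise level, and constant-target assumption below are invented for illustration; the thesis estimates a firm-specific target with panel methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Partial adjustment: D_t - D_{t-1} = lam * (D* - D_{t-1}) + e_t.
# Target, speed, and noise are illustrative, not estimates from the thesis.
target, lam, T = 0.4, 0.29, 200
d = np.empty(T)
d[0] = 0.1
for t in range(1, T):
    d[t] = d[t - 1] + lam * (target - d[t - 1]) + rng.normal(0.0, 0.01)

# Regressing the yearly change on the gap to target recovers the speed
gap = target - d[:-1]
delta = np.diff(d)
lam_hat = (gap @ delta) / (gap @ gap)
```

In panel settings this regression is biased by fixed effects and the lagged dependent variable, which is exactly why the thesis turns to the Arellano-Bond GMM estimator.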
Abstract:
Three sediment records of sea surface temperature (SST) are analyzed that originate from distant locations in the North Atlantic, have centennial-to-multicentennial resolution, are based on the same reconstruction method and chronological assumptions, and span the past 15 000 yr. Using recursive least squares techniques, an estimate of the time-dependent North Atlantic SST field over the last 15 kyr is sought that is consistent with both the SST records and a surface ocean circulation model, given estimates of their respective error (co)variances. Under the authors' assumptions about data and model errors, it is found that the 10 degrees C mixed layer isotherm, which approximately traces the modern Subpolar Front, would have moved by ~15 degrees of latitude southward (northward) in the eastern North Atlantic at the onset (termination) of the Younger Dryas cold interval (YD), a result significant at the level of two standard deviations in the isotherm position. In contrast, meridional movements of the isotherm in the Newfoundland basin are estimated to be small and not significant. Thus, the isotherm would have pivoted twice around a region southeast of the Grand Banks, with a southwest-northeast orientation during the warm intervals of the Bølling-Allerød and the Holocene and a more zonal orientation and southerly position during the cold interval of the YD. This study provides an assessment of the significance of similar previous inferences and illustrates the potential of recursive least squares in paleoceanography.
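Recursive least squares, as used here for the time-dependent SST field estimate, updates a parameter estimate one observation at a time instead of refitting on the full record. A generic scalar-output sketch (not the paper's state-space implementation; the streaming regression below is synthetic):

```python
import numpy as np

def rls_update(theta, P, x, y):
    """One recursive-least-squares step for y ~ x' theta (forgetting factor 1).
    theta: current estimate; P: current inverse-information matrix."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)          # gain vector
    theta = theta + k * (y - x @ theta)
    P = P - np.outer(k, x) @ P       # P <- (I - k x') P
    return theta, P

# Recover a line y = 1 + 2x from streaming noisy observations
rng = np.random.default_rng(2)
theta, P = np.zeros(2), 1e6 * np.eye(2)   # diffuse prior
for _ in range(500):
    xi = rng.normal()
    phi = np.array([1.0, xi])
    yi = 1.0 + 2.0 * xi + rng.normal(0.0, 0.05)
    theta, P = rls_update(theta, P, phi, yi)
```

The recursion gives the same answer as batch least squares but lets each new core sample refine the running estimate, which is what makes it attractive for sequential paleo-reconstructions.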
Abstract:
In this work, the relationship between diameter at breast height (d) and total height (h) of individual trees was modeled with the aim of establishing provisional height-diameter (h-d) equations for maritime pine (Pinus pinaster Ait.) stands in the Lomba ZIF, Northeast Portugal. Using data collected locally, several local and generalized h-d equations from the literature were tested, and adaptations were also considered. Model fitting was conducted using standard nonlinear least squares (nls) methods. The best local and generalized models selected were also tested as mixed models, applying a first-order conditional expectation (FOCE) approximation procedure and maximum likelihood methods to estimate fixed and random effects. For the calibration of the mixed models, and in order to be consistent with the fitting procedure, the FOCE method was also used to test different sampling designs. The results showed that the local h-d equations with two parameters performed better than the analogous models with three parameters. However, a unique set of parameter values for the local model cannot be applied to all maritime pine stands in the Lomba ZIF; thus, a generalized model including stand-level covariates, in addition to d, was necessary to obtain adequate predictive performance. No evident superiority of the generalized mixed model over the generalized model with nonlinear least squares parameter estimates was observed. On the other hand, in the case of the local model, the predictive performance greatly improved when random effects were included. The results showed that the mixed model based on the selected local h-d equation is a viable alternative for estimating h when stand variables are not available. Moreover, an adequate calibrated response can be obtained using only 2 to 5 additional h-d measurements on quantile (or random) trees from the distribution of d in the plot (stand).
Balancing sampling effort, accuracy, and straightforwardness in practical applications, the generalized model from the nls fit is recommended. Examples of applications of the selected generalized equation to forest management are presented, namely using it to complete missing information in forest inventories and showing how such an equation can be incorporated in a stand-level decision support system that aims to optimize forest management for the maximization of wood volume production in Lomba ZIF maritime pine stands.
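As a toy illustration of h-d model fitting, the sketch below log-linearises a hypothetical allometric equation h = 1.3 + a * d^b (1.3 m being breast height) and fits it by ordinary least squares on synthetic, noise-free data. The study itself used nonlinear least squares and mixed models, and this particular equation and its parameter values are assumptions for the example only.

```python
import numpy as np

# Synthetic diameters (cm) and heights (m) generated from an assumed
# allometric model h = 1.3 + a * d**b with a = 2.1, b = 0.55.
d = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
h = 1.3 + 2.1 * d**0.55

# Log-linearisation: ln(h - 1.3) = ln(a) + b * ln(d), fitted with polyfit
b, ln_a = np.polyfit(np.log(d), np.log(h - 1.3), 1)
a = np.exp(ln_a)
h_pred = 1.3 + a * d**b
```

On real data the log transform distorts the error structure, which is one reason the study fits the untransformed model with nls instead.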
Abstract:
In this work, we propose an inexpensive laboratory practice for an introductory physics course laboratory in any science or engineering degree. The practice, which was very well received by our students, uses a smartphone (iOS, Android, or Windows) together with small magnets (similar to those used on refrigerator doors), a 20 cm school ruler, a sheet of paper, and a free application (app), downloaded and installed beforehand, that measures magnetic fields using the smartphone's magnetic field sensor or magnetometer. The apps we have used are Magnetometer (iOS), Magnetometer Metal Detector, and Physics Toolbox Magnetometer (Android). Nothing else is needed, so the practice is free of cost. The main purpose of the practice is for students to determine how the x-component of the magnetic field produced by different magnets (including ring magnets and sphere magnets) depends on distance. We obtained a dependence of the magnetic field on distance of the form x^-3, in total agreement with the theoretical analysis. The secondary objective is to apply the technique of least squares fitting to obtain this exponent and the magnetic moment of the magnets, with the corresponding absolute error.
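The least-squares analysis the students perform can be sketched: fitting log B against log x recovers the dipole exponent -3 and the prefactor (proportional to the magnetic moment). The readings below are synthetic and noise-free, and the prefactor value is illustrative, not a measured moment.

```python
import numpy as np

# Synthetic magnetometer readings following the dipole far field B = m / x**3
# (axial component); m_true is an illustrative prefactor in T*m^3.
x = np.array([0.05, 0.06, 0.08, 0.10, 0.12, 0.15])  # distance, m
m_true = 2.5e-7
B = m_true / x**3

# Log-log least-squares fit: ln B = ln(m) - 3 ln(x)
slope, intercept = np.polyfit(np.log(x), np.log(B), 1)
exponent = slope                # expected: -3
prefactor = np.exp(intercept)   # expected: m_true
```

With real app data the points scatter around the line, and the standard error of the slope gives the absolute error on the exponent that the practice asks for.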
Abstract:
This study focuses on multiple linear regression models relating six climate indices (temperature-humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLI new), and respiratory rate predictor RRP) to three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC) are applied to select the best model for the milk predictands with the smallest number of climate predictors. Uncertainty is estimated by bootstrap resampling, and cross-validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA MERRA global atmospheric reanalysis. Milk data for the months April to September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R2 values of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the strongest relationship (p-value < 0.001, R2 = 0.69). For fat and protein the results are only marginal. This method is suggested for studies of the impact of climate variability/change in agriculture and food science when short time series or data with large uncertainty are available.
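The bootstrap uncertainty estimation mentioned above can be sketched generically: refit the regression on resampled rows and read coefficient uncertainty off the spread of the refitted coefficients. The predictors and coefficients below are invented stand-ins, not the study's climate indices or milk data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the milk-yield regression: y depends on two invented
# "climate index" predictors; coefficients and noise level are illustrative.
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([5.0, 1.2, -0.8]) + rng.normal(0.0, 0.5, n)

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Bootstrap: refit on resampled rows; the standard deviation of the
# refitted coefficients estimates their standard error.
boot = np.array([ols(X[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(1000))])
se = boot.std(axis=0)
```

Percentiles of the same bootstrap distribution give confidence intervals directly, which is convenient for the short, uncertain series the study targets.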
Abstract:
The thesis deals with the problem of Model Selection (MS), motivated by information and prediction theory, focusing on parametric time series (TS) models. The main contribution of the thesis is the extension to the multivariate case of the Misspecification-Resistant Information Criterion (MRIC), a recently introduced criterion that solves Akaike's original research problem, posed 50 years ago, which led to the definition of the AIC. The importance of MS is attested by the huge amount of literature devoted to it, published in scientific journals of many different disciplines. Despite such widespread treatment, the contributions that adopt a mathematically rigorous approach are not numerous, and one of the aims of this project is to review and assess them. Chapter 2 discusses methodological aspects of MS from information theory. Information criteria (IC) for the i.i.d. setting are surveyed along with their asymptotic properties, as are the cases of small samples, misspecification, and further estimators. Chapter 3 surveys criteria for TS: IC and prediction criteria are considered for univariate models (AR, ARMA) in the time and frequency domains, parametric multivariate models (VARMA, VAR), nonparametric nonlinear models (NAR), and high-dimensional models. The MRIC answers Akaike's original question on efficient criteria for possibly-misspecified (PM) univariate TS models in multi-step prediction with high-dimensional data and nonlinear models. Chapter 4 extends the MRIC to PM multivariate TS models for multi-step prediction, introducing the Vectorial MRIC (VMRIC). We show that the VMRIC is asymptotically efficient by proving the decomposition of the MSPE matrix and the consistency of its Method-of-Moments Estimator (MoME), for Least Squares multi-step prediction with a univariate regressor. Chapter 5 extends the VMRIC to the general multiple-regressor case by showing that the MSPE matrix decomposition holds, obtaining consistency for its MoME, and proving its efficiency.
The chapter concludes with a digression on the conditions for PM VARX models.
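For context, the criterion that Akaike's question led to can be applied to AR order selection as follows — a textbook AIC sketch on simulated data, not the MRIC/VMRIC machinery developed in the thesis.

```python
import numpy as np

def aic_ar(y, max_order):
    """AIC for AR(p) models fitted by least squares: n*log(RSS/n) + 2p.
    Returns the minimising order and the score for each candidate order."""
    n = len(y)
    scores = {}
    for p in range(1, max_order + 1):
        # Column j holds the lag-(j+1) regressor y[t-j-1] for t = p..n-1
        X = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
        target = y[p:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        rss = np.sum((target - X @ beta) ** 2)
        scores[p] = len(target) * np.log(rss / len(target)) + 2 * p
    return min(scores, key=scores.get), scores

# Simulate an AR(2) process y_t = 0.6 y_{t-1} - 0.3 y_{t-2} + e_t
rng = np.random.default_rng(4)
y = np.zeros(1000)
for t in range(2, 1000):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
best, scores = aic_ar(y, 6)
```

AIC aims at prediction efficiency rather than consistent order recovery, which is exactly the trade-off the MRIC literature revisits under misspecification.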
Abstract:
The present paper describes a novel, simple, and reliable differential pulse voltammetric method for determining amitriptyline (AMT) in pharmaceutical formulations. Many authors have described this antidepressant as electrochemically inactive at carbon electrodes. However, the procedure proposed herein consists of electrochemically oxidizing AMT at an unmodified carbon nanotube paste electrode in the presence of 0.1 mol L(-1) sulfuric acid as the electrolyte. At this concentration, the acid facilitated AMT electroxidation through a one-electron transfer at 1.33 V vs. Ag/AgCl, as evidenced by the increase in peak current. Under optimized conditions (modulation time 5 ms, scan rate 90 mV s(-1), and pulse amplitude 120 mV), a linear calibration curve was constructed in the range of 0.0-30.0 μmol L(-1), with a correlation coefficient of 0.9991 and a limit of detection of 1.61 μmol L(-1). The procedure was successfully validated for intra- and inter-day precision and accuracy. Moreover, its feasibility was assessed through the analysis of commercial pharmaceutical formulations, and the results were compared to those of the UV-vis spectrophotometric method recommended as the standard analytical technique by the Brazilian Pharmacopoeia.
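The generic recipe behind figures like the reported correlation coefficient and detection limit is a linear least-squares calibration with an LOD of the common 3.3*sigma/slope form. The current-vs-concentration data below are invented for illustration; they are not the paper's measurements.

```python
import numpy as np

# Illustrative calibration data (peak current vs. AMT concentration);
# all numbers are invented, chosen only to be roughly linear.
conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])        # umol/L
current = np.array([0.02, 1.05, 2.01, 3.10, 3.98, 5.03, 6.01])   # uA

slope, intercept = np.polyfit(conc, current, 1)
residuals = current - (slope * conc + intercept)
s_y = residuals.std(ddof=2)        # residual standard error (n - 2 dof)
lod = 3.3 * s_y / slope            # common 3.3*sigma/slope LOD estimate
r = np.corrcoef(conc, current)[0, 1]
```

The same fit also yields the limit of quantification (10*sigma/slope) if needed for validation.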
Abstract:
Split-plot design (SPD) and near-infrared chemical imaging were used to study the homogeneity of the drug paracetamol loaded in films prepared from mixtures of the biocompatible polymers hydroxypropyl methylcellulose, polyvinylpyrrolidone, and polyethylene glycol. The study comprised two parts: first, a partial least-squares (PLS) model was developed for pixel-to-pixel quantification of the drug loaded into the films. Afterwards, an SPD was developed to study the influence of the polymeric composition of the films and two process conditions related to their preparation (percentage of the drug in the formulations and curing temperature) on the homogeneity of the drug dispersed in the polymeric matrix. Chemical images of each formulation of the SPD were obtained by pixel-to-pixel predictions of the drug using the PLS model from the first part, and macropixel analyses were performed on each image to obtain the y-responses (homogeneity parameter). The design was modeled using PLS regression, allowing only the most relevant factors to remain in the final model. The interpretation of the SPD was enhanced by using the orthogonal PLS algorithm, in which the y-orthogonal variations in the design were separated from the y-correlated variation.
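A minimal NIPALS PLS1 regression — the core of a pixel-to-pixel quantification step — can be sketched as follows. This is a generic implementation run on synthetic data, not the study's chemical-imaging pipeline; with as many components as predictors it coincides with ordinary least squares, which the test exploits.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal NIPALS PLS1 for a single response; a generic sketch."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    Xr, yr = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score vector
        tt = t @ t
        p = Xr.T @ t / tt               # X loading
        q = (yr @ t) / tt               # y loading
        Xr = Xr - np.outer(t, p)        # deflate X
        yr = yr - q * t                 # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # Regression vector in the original (centred) X space
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 4))            # synthetic "spectra", 4 channels
beta_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ beta_true                       # noiseless synthetic response
B = pls1(X, y, 4)                       # full rank: PLS equals least squares
```

In practice far fewer components than predictors are kept, chosen by cross-validation, which is where PLS gains over OLS on collinear spectral data.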
Abstract:
X-ray fluorescence (XRF) is a fast, low-cost, nondestructive, and truly multielement analytical technique. The objectives of this study are to quantify the amounts of Na(+) and K(+) in samples of table salt (refined, marine, and light) and to compare three different quantification methodologies using XRF. A fundamental parameter method revealed difficulties in accurately quantifying lighter elements (Z < 22). A univariate methodology based on peak area calibration is an attractive alternative, even though the additional data manipulation steps take some time. Quantifications were performed with good correlations for both Na (r = 0.974) and K (r = 0.992). A partial least-squares (PLS) regression method with five latent variables was very fast; Na(+) quantifications yielded calibration errors below 16% and a correlation of 0.995. Of great concern was the observation of high Na(+) levels in low-sodium salts. The presented application may be performed in a fast and multielement fashion, in accordance with Green Chemistry specifications.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
The objective of this study was to evaluate children's respiratory patterns in the mixed dentition by means of acoustic rhinometry and their relation to upper arch width development. Fifty patients were examined, 25 females and 25 males, with a mean age of eight years and seven months. All underwent acoustic rhinometry, and impressions of the upper and lower arches were taken to obtain plaster models. The upper arch analysis was accomplished by measuring the transverse interdental distance of the upper teeth: deciduous canines (measurement 1), deciduous first molars (measurement 2), deciduous second molars (measurement 3), and first molars (measurement 4). The results showed that an increased left nasal cavity area was associated in females with increased interdental distances of the deciduous first and second molars, and in males with increased interdental distances of the deciduous canines and deciduous first and second molars. It was concluded that there is a correlation between nasal cavity area and upper arch transverse distance in the anterior and mid maxillary regions for both genders.
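The reported association is a correlation between paired measurements (nasal cavity area vs. an interdental transverse distance). On invented numbers, chosen only to show a positive association, the computation is simply:

```python
import numpy as np

# Hypothetical paired measurements: left nasal cavity area (cm^2) and
# intercanine transverse distance (mm); all values are invented.
area = np.array([1.8, 2.1, 2.4, 2.0, 2.6, 2.9, 2.2, 2.7])
width = np.array([29.5, 30.1, 31.0, 29.8, 31.4, 32.0, 30.3, 31.6])

r = np.corrcoef(area, width)[0, 1]   # Pearson correlation coefficient
```

A significance test on r (e.g. a t-test with n - 2 degrees of freedom) would normally accompany such a conclusion.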
Abstract:
The Cerrado region still receives relatively little ornithological attention, although it is the only tropical savanna in the world considered a biodiversity hotspot. Cerradão is one of the least known and most deforested Cerrado physiognomies, and few recent bird surveys have been conducted in these forests. In order to recover bird records and complement the few existing inventories of this under-studied forest type in the state of São Paulo, we searched for published papers on the birds of cerradão. Additionally, we surveyed birds at a 314-ha cerradão remnant located in central São Paulo, Brazil, from September 2005 to December 2006, using unlimited-distance transect counts. Out of 95 investigations involving cerradão bird studies, only 17 (18%) separated bird species recorded inside cerradão from those recorded in other Cerrado physiognomies. Except for one study, no survey found more than 64 species in this type of forest, a result shared across many regions of Brazil and Bolivia. Differences in species richness do not seem to be related to the level of landscape disturbance or to fragment size. Considering all species recorded in cerradão in Brazil and Bolivia, a compilation of the data yielded 250 species in 36 families and 15 orders. In recent surveys in central São Paulo, we recorded 48 species in 20 families, including the Pale-bellied Tyrant-Manakin Neopelma pallescens, threatened in São Paulo, and the Helmeted Manakin Antilophia galeata, near threatened in the state and endemic to the Cerrado region. None of the most abundant species inside this fragment was considered threatened or endemic.
Abstract:
Natural products have widespread biological activities, including inhibition of mitochondrial enzyme systems. Some of these activities, for example cytotoxicity, may result from alteration of cellular bioenergetics. Based on previous computer-aided drug design (CADD) studies, and considering reported structure-activity relationship (SAR) data, a proposed mechanism of action of natural products against parasitic infections involves NADH-oxidase inhibition. In this study, chemometric tools such as Principal Component Analysis (PCA), Consensus PCA (CPCA), and partial least squares regression (PLS) were applied to a set of forty natural compounds acting as NADH-oxidase inhibitors. The calculations were performed using the VolSurf+ program. The formalisms employed produced good exploratory and predictive results. The independent variables (descriptors) with a hydrophobic profile were strongly correlated with the biological data.
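A generic PCA via eigendecomposition of the covariance matrix — one of the chemometric tools named above — can be sketched as follows; the descriptor matrix here is synthetic, not VolSurf+ output.

```python
import numpy as np

def pca(X, n_components):
    """PCA via eigendecomposition of the covariance matrix; a generic sketch."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components] # largest first
    components = vecs[:, order]
    scores = Xc @ components
    explained = vals[order] / vals.sum()          # explained variance ratio
    return scores, components, explained

rng = np.random.default_rng(6)
# Synthetic descriptors: the second column closely tracks the first,
# so most variance collapses onto one principal component.
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 0.1 * rng.normal(size=200),
                     rng.normal(size=200)])
scores, comps, explained = pca(X, 2)
```

Score plots of the first two components are the usual exploratory view of how compounds cluster in descriptor space.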