586 results for "Bootstrap paramétrique" (parametric bootstrap)
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
(Morphological cladistic analysis of Pseudobombax Dugand (Malvaceae, Bombacoideae) and allied genera). Pseudobombax Dugand belongs to the family Malvaceae, subfamily Bombacoideae, and comprises 29 species restricted to the Neotropics. A morphological cladistic analysis of Pseudobombax and allied genera was carried out to test the monophyly of the genus and to provide hypotheses on its phylogeny. Parsimony analyses were based on 40 morphological characters and 28 species, 14 belonging to Pseudobombax and 14 to other species of Bombacoideae, Matisieae (Malvoideae) and Ochromeae. Nine most parsimonious trees (144 steps, ci 0.40, ri 0.67) were produced when 10 multistate characters were treated as ordered, while only two most parsimonious trees (139 steps, ci 0.41, ri 0.67) were obtained when all characters were treated as unordered. The monophyly of Pseudobombax had moderate bootstrap support, the genus appearing as sister either to a clade composed of the genera Bombacopsis Pittier and Pachira Aubl. or to the genus Bombax L., depending on the analysis. The petiole widened at the apex and the leaflets not jointed to the petiole are probably synapomorphies of Pseudobombax. Three main clades were found in the genus: one characterised by petiolulate leaflets and 5-angled fruits, a second by pubescent leaves and calyx, and a third by a reduced number of leaflets. The latter includes species endemic to the Brazilian semi-arid region, also characterised by the absence of phalanges in the androecium. Interspecific affinities in Pseudobombax as well as morphological evolution in Bombacoideae are discussed.
Abstract:
This thesis examines the suitability of VaR for foreign exchange rate risk management from the perspective of a European investor. Four different VaR models are evaluated to assess whether VaR is a valuable tool for managing foreign exchange rate risk: the historical method, the historical bootstrap method, the variance-covariance method and Monte Carlo simulation. The data are divided into emerging and developed market currencies to allow a richer analysis. The foreign exchange rate data in this thesis cover 31 January 2000 to 30 April 2014. The results show that none of these VaR models should be relied on as the sole tool for foreign exchange rate risk management. The variance-covariance method and Monte Carlo simulation perform poorest in both currency portfolios. Both historical methods performed better, but should also be treated as complementary tools to be used alongside other, more sophisticated analysis tools. A comparative study of VaR estimates and forward prices is also included in the thesis. It reveals that, despite the expensive hedging cost of emerging market currencies, the risk captured by VaR is more costly still, and FX forward hedging is therefore recommended.
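As a rough illustration of one of the models compared above, the sketch below (not taken from the thesis) estimates a historical-bootstrap VaR for a currency return series; the simulated returns, confidence level and horizon are assumptions for illustration only.

```python
# Hypothetical sketch of a historical-bootstrap Value-at-Risk estimate.
# The return series, horizon and confidence level are illustrative assumptions,
# not values taken from the thesis.
import numpy as np

def historical_var(returns, alpha=0.99):
    """Plain historical VaR: the (1 - alpha) quantile of the return distribution, sign-flipped."""
    return -np.quantile(returns, 1.0 - alpha)

def bootstrap_var(returns, alpha=0.99, n_boot=5000, seed=0):
    """Historical-bootstrap VaR: resample the returns with replacement and
    average the historical VaR over the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(returns, size=returns.size, replace=True)
        estimates[b] = historical_var(sample, alpha)
    # Point estimate plus a percentile interval for the VaR itself.
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

if __name__ == "__main__":
    # Simulated daily FX log-returns standing in for a real currency series.
    rng = np.random.default_rng(1)
    fx_returns = rng.standard_t(df=5, size=2500) * 0.006
    var_point, var_ci = bootstrap_var(fx_returns, alpha=0.99)
    print(f"99% 1-day VaR: {var_point:.4%} (95% CI {var_ci[0]:.4%} - {var_ci[1]:.4%})")
```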
Abstract:
Avidins (Avds) are homotetrameric or homodimeric glycoproteins with typically fewer than 130 amino acid residues per monomer. They form a highly stable, non-covalent complex with biotin (vitamin H), with Kd ≈ 10⁻¹⁵ M (for chicken Avd). The best-studied Avds are the chicken Avd from Gallus gallus and streptavidin from Streptomyces avidinii, although other Avd studies have also included Avds from various origins, e.g., from frogs, fishes, mushrooms and many different bacteria. Several engineered Avds have been reported as well, e.g., dual-chain Avds (dcAvds) and single-chain Avds (scAvds), circular permutants with up to four simultaneously modifiable ligand-binding sites. These engineered Avds, along with the many native Avds, have the potential to be used in various nanobiotechnological applications. In this study, we made a structure-based alignment representing all currently available sequences of Avds and studied the evolutionary relationship of Avds using phylogenetic analysis. First, we created an initial multiple sequence alignment of Avds using 42 closely related sequences, guided by the known Avd crystal structures. Next, we searched for non-redundant Avd sequences from various online databases, including the National Center for Biotechnology Information and the Universal Protein Resource; the identified sequences were added to the initial alignment to expand it to a final alignment of 242 Avd sequences. The MEGA software package was used to create distance matrices and a phylogenetic tree. Bootstrap reproducibility of the tree was poor at multiple nodes and may reflect several possible issues with the data: the sequence length compared is relatively short and, whereas some positions are highly conserved and functional, others can vary without impinging on the structure or the function, so there are few informative sites; it may be that periods of rapid duplication have led to paralogs and that the differences among them are within the error limit of the data; and there may be other yet unknown reasons. Principal component analysis applied to alternative distance data did segregate the major groups, a success likely due to the multivariate consideration of all the information. Furthermore, based on our extensive alignment and phylogenetic analysis, we expressed two novel Avds, lacavidin from Latrodectus hesperus, the western black widow spider, and hoefavidin from Hoeflea phototrophica, an aerobic marine bacterium, the ultimate aim being to determine their X-ray structures. These Avds were selected because of their unique sequences: lacavidin has an N-terminal Avd-like domain but a long C-terminal overhang, whereas hoefavidin was thought to be a dimeric Avd. Both these Avds could be used as novel scaffolds in biotechnological applications.
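As a toy illustration of the bootstrap-reproducibility idea discussed above (and not the MEGA/structure-guided workflow of the study), the sketch below resamples alignment columns with replacement, recomputes a p-distance matrix and measures how often a chosen pair of sequences remain mutual nearest neighbours; the sequences and the grouping are invented placeholders, and mutual nearest-neighbourhood is only a crude stand-in for clade support.

```python
# Toy illustration (not the MEGA workflow from the study) of bootstrap support:
# resample alignment columns with replacement, recompute a p-distance matrix,
# and count how often two sequences of interest remain each other's closest
# neighbours. The sequences below are invented placeholders.
import numpy as np

def p_distance_matrix(alignment):
    """Proportion of differing positions between every pair of aligned sequences."""
    seqs = np.array([list(s) for s in alignment])
    n = len(seqs)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.mean(seqs[i] != seqs[j])
    return d

def bootstrap_support(alignment, pair, n_boot=1000, seed=0):
    """Fraction of column-resampled replicates in which the two sequences in
    `pair` are mutual nearest neighbours (a crude stand-in for clade support)."""
    rng = np.random.default_rng(seed)
    cols = np.array([list(s) for s in alignment]).T      # alignment columns
    i, j = pair
    hits = 0
    for _ in range(n_boot):
        resampled = cols[rng.integers(0, len(cols), size=len(cols))].T
        d = p_distance_matrix(["".join(row) for row in resampled])
        np.fill_diagonal(d, np.inf)
        if d[i].argmin() == j and d[j].argmin() == i:
            hits += 1
    return hits / n_boot

if __name__ == "__main__":
    toy_alignment = ["MKTAYIAKQR", "MKTAYIAKQL", "MKSVYLAQQR", "MRSVYLGQKR"]
    print("support for grouping (0, 1):", bootstrap_support(toy_alignment, (0, 1)))
```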
Abstract:
The aim of this study was to contribute to current knowledge-based theory by addressing a research gap in the empirical determination of the simultaneous but distinguishable effects of intellectual capital (IC) assets and knowledge management (KM) practices on organisational performance (OP). The analysis was built on past research and on theorised interactions between latent constructs specified using survey-based items measured from a sample of Finnish companies for IC and KM, and a dependent construct for OP determined using information available from financial databases. Two widely used and commonly recommended measures in the management science literature, the return on total assets (ROA) and the return on equity (ROE), were calculated for OP. The investigation of the relationship between IC and KM impacting OP in relation to the hypotheses formulated could thus be conducted using objectively derived performance indicators. Using financial OP measures also strengthened the dynamic features of the data needed in analysing simultaneous and causal dependencies between the modelled constructs specified using structural path models. The parameters of the structural path models were estimated using a partial least squares-based regression estimator. Results showed that the path dependencies between IC and OP, or KM and OP, were always insignificant when analysed separately from any other interactions or indirect effects caused by simultaneous modelling, regardless of whether ROA or ROE was used as the OP measure. The dependency between the constructs for KM and IC appeared to be very strong and was always significant when modelled simultaneously with other possible interactions between the constructs, using either ROA or ROE to define OP. This study, however, did not find statistically unambiguous evidence for the hypothesised causal mediation effects, which would suggest, for instance, that the effects of KM practices on OP are mediated by IC assets. Because some indication of fluctuating causal effects was nevertheless observed, it was concluded that further studies are needed to verify the fundamental and likely hidden causal effects between the constructs of interest. It was therefore also recommended that complementary modelling and data-processing steps be conducted to elucidate whether mediation effects occur between IC, KM and OP; their verification requires further investigation of the measured items and can be built on the findings of this study.
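For readers unfamiliar with bootstrap tests of mediation, the sketch below illustrates a percentile-bootstrap test of an indirect effect of the KM → IC → OP kind discussed above; it uses ordinary least squares on simulated data rather than the study's PLS path estimator, so all names and numbers are assumptions.

```python
# Hedged sketch of the percentile-bootstrap test for an indirect (mediation)
# effect of the kind discussed above (KM -> IC -> OP). Ordinary least squares
# is used here in place of the PLS path estimator of the study, and the data
# are simulated placeholders.
import numpy as np

def indirect_effect(km, ic, op):
    """a*b from the two regressions IC ~ KM (slope a) and OP ~ IC + KM (slope b)."""
    a = np.polyfit(km, ic, 1)[0]
    X = np.column_stack([np.ones_like(km), ic, km])
    b = np.linalg.lstsq(X, op, rcond=None)[0][1]
    return a * b

def bootstrap_mediation(km, ic, op, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(km)
    effects = np.empty(n_boot)
    for r in range(n_boot):
        idx = rng.integers(0, n, size=n)               # resample cases with replacement
        effects[r] = indirect_effect(km[idx], ic[idx], op[idx])
    ci = np.percentile(effects, [2.5, 97.5])
    return indirect_effect(km, ic, op), ci

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    km = rng.normal(size=200)
    ic = 0.6 * km + rng.normal(scale=0.8, size=200)     # KM feeds IC
    op = 0.4 * ic + rng.normal(scale=0.8, size=200)     # IC feeds OP
    est, ci = bootstrap_mediation(km, ic, op)
    print(f"indirect effect {est:.3f}, 95% percentile CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```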
Abstract:
Phascolomyces articulosus genomic DNA was isolated from 48-h-old hyphae and used for amplification of a chitin synthase fragment by the polymerase chain reaction. The primers used in the amplification corresponded to two widely conserved amino acid regions found in the chitin synthases of many fungi. Amplification resulted in four bands (approximately 820, 900, 1000 and 1500 bp) as visualized on a 1.2% agarose gel. The lowest band (820 bp) was selected as a candidate for chitin synthase because most amplified regions from other fungi so far have exhibited similar sizes (600-750 bp). The selected fragment was extracted from the gel and cloned into the HincII site of pUC19. The derived plasmid and insert were designated pUC19-PaCHS and PaCHS, respectively. The plasmid pUC19-PaCHS was digested with several restriction enzymes and was found to contain BamHI and HincII sites. Sequencing of PaCHS revealed two intron sequences and a total open reading frame of 200 amino acids. The derived polypeptide was compared with other related sequences from the EMBL database (Heidelberg, Germany) and matched 36 other fully or partially sequenced fungal chitin synthase genes. The closest resemblance was with two genes (74.5% and 73.1% identity) from Rhizopus oligosporus. Southern hybridization of the PCR products, with the cloned fragment as a probe, showed a strong signal at the fragment selected for cloning and weaker signals at the other two fragments. Southern hybridization with partially digested Phascolomyces articulosus genomic DNA showed a single band. The amino acid sequence was compared with sequences from other chitin synthase gene classes using the CLUSTALW program. The chitin synthase fragment from Phascolomyces articulosus was initially grouped in class II along with chitin synthase fragments from Rhizopus oligosporus and Phycomyces blakesleeanus, fungi that belong to the same class, Zygomycetes. Bootstrap analysis using the neighbor-joining method available in CLUSTALW verified this classification. Comparison of PaCHS revealed conservation of intron positions that are characteristic of chitin synthase gene fragments of zygomycetous fungi.
Abstract:
Euclidean distance matrix analysis (EDMA) methods are used to distinguish whether or not a significant difference exists between conformational samples of antibody complementarity determining region (CDR) loops, namely the isolated L1 loop and L1 in a three-loop assembly (L1, L3 and H3), obtained from Monte Carlo simulation. Once a significant difference is detected, the specific inter-Cα distance that contributes to the difference is identified using EDMA. The estimated and improved mean forms of the conformational samples of the isolated L1 loop and the L1 loop in the three-loop assembly, CDR loops of the antibody binding site, are described using EDMA and distance geometry (DGEOM). To the best of our knowledge, this is the first time the EDMA methods have been used to analyze conformational samples of molecules obtained from Monte Carlo simulations. Therefore, validations of the EDMA methods using both positive-control and negative-control tests for the conformational samples of the isolated L1 loop and L1 in the three-loop assembly must be done. The EDMA-I bootstrap null hypothesis tests showed false positive results for the comparison of six samples of the isolated L1 loop and true positive results for the comparison of conformational samples of the isolated L1 loop and L1 in the three-loop assembly. The bootstrap confidence interval tests revealed true negative results for comparisons of six samples of the isolated L1 loop, and false negative results for the conformational comparisons between the isolated L1 loop and L1 in the three-loop assembly. Different conformational sample sizes are further explored, either by combining the samples of the isolated L1 loop to increase the sample size, or by clustering the samples using a self-organizing map (SOM) to narrow the conformational distribution of the samples being compared. However, no improvement was obtained for either the bootstrap null hypothesis or the confidence interval tests. These results show that more work is required before EDMA methods can be used reliably for the comparison of samples obtained by Monte Carlo simulations.
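A simplified, hedged sketch of an EDMA-I-style bootstrap null-hypothesis test is given below: the statistic is the max/min ratio of corresponding mean inter-point distances in two samples, and its null distribution is approximated by resampling from the pooled sample. The random coordinates stand in for CDR-L1 conformations and the procedure is a simplification of EDMA-I, not the exact method used in the thesis.

```python
# Simplified, hedged sketch of an EDMA-I-style bootstrap null-hypothesis test.
# Coordinates are random placeholders, not CDR-L1 conformations from the study.
import numpy as np
from scipy.spatial.distance import pdist

def mean_form(sample):
    """Mean of the condensed inter-point distance vectors over a sample of
    conformations, each given as an (n_points, 3) coordinate array."""
    return np.mean([pdist(conf) for conf in sample], axis=0)

def edma_T(sample_a, sample_b):
    """Max/min ratio of corresponding mean distances (form-difference statistic)."""
    ratio = mean_form(sample_a) / mean_form(sample_b)
    return ratio.max() / ratio.min()

def bootstrap_edma_test(sample_a, sample_b, n_boot=500, seed=0):
    rng = np.random.default_rng(seed)
    observed = edma_T(sample_a, sample_b)
    pooled = np.concatenate([sample_a, sample_b])
    na, nb = len(sample_a), len(sample_b)
    null = np.empty(n_boot)
    for r in range(n_boot):                    # resample both groups from the pool
        a = pooled[rng.integers(0, len(pooled), size=na)]
        b = pooled[rng.integers(0, len(pooled), size=nb)]
        null[r] = edma_T(a, b)
    p_value = np.mean(null >= observed)
    return observed, p_value

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    base = rng.normal(size=(8, 3))                              # 8 pseudo C-alpha atoms
    sample_a = base + rng.normal(scale=0.3, size=(50, 8, 3))    # two conformational samples
    sample_b = base + rng.normal(scale=0.3, size=(50, 8, 3))
    T, p = bootstrap_edma_test(sample_a, sample_b)
    print(f"EDMA-style T = {T:.3f}, bootstrap p = {p:.3f}")
```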
Abstract:
It is generally accepted among scholars that individual learning and team learning contribute to the concept we refer to as organizational learning. However, the small number of quantitative and qualitative studies that have investigated their relationship reported contradictory results. This thesis investigated the relationship between individual learning, team learning, and organizational learning. A survey instrument was used to collect information on individual learning, team learning, and organizational learning. The study sample comprised supervisors from the clinical laboratories in teaching hospitals and community hospitals in Ontario. The analyses used linear regression to investigate the relationship between individual and team learning. The relationships between individual and organizational learning, and between team and organizational learning, were investigated simultaneously with canonical correlation and set correlation. A t-test and multivariate analysis of variance were used to compare the differences in learning scores of respondents employed by laboratories in teaching hospitals and those employed by community hospitals. The study validated its test results with 1,000 bootstrap replications. Results from this study suggest that there are moderate correlations between individual learning and team learning. The correlations between individual learning and organizational learning, and between team learning and organizational learning, appeared to be weak. The scores of the three learning levels showed statistically significant differences between respondents from laboratories in teaching hospitals and respondents from community hospitals.
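The following minimal sketch shows the kind of validation by 1,000 bootstrap replications mentioned above, applied to a single correlation coefficient; the simulated learning scores and the choice of a percentile interval are illustrative assumptions.

```python
# Minimal sketch of validating a correlation with 1,000 bootstrap replications.
# The learning scores are simulated stand-ins for the survey data.
import numpy as np

def bootstrap_correlation_ci(x, y, n_boot=1000, seed=0):
    """Percentile bootstrap confidence interval for Pearson's r."""
    rng = np.random.default_rng(seed)
    n = len(x)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return np.corrcoef(x, y)[0, 1], np.percentile(reps, [2.5, 97.5])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    individual = rng.normal(size=120)
    team = 0.5 * individual + rng.normal(scale=0.9, size=120)  # moderate correlation
    r, ci = bootstrap_correlation_ci(individual, team)
    print(f"r = {r:.2f}, 95% bootstrap CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```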
Abstract:
This thesis describes an ancillary project to the Early Diagnosis of Mesothelioma and Lung Cancer in Prior Asbestos Workers study, conducted to determine the effects of asbestos exposure, pulmonary function and cigarette smoking in the prediction of pulmonary fibrosis. 613 workers who had been occupationally exposed to asbestos for an average of 25.9 (SD=14.69) years were sampled from Sarnia, Ontario. A structured questionnaire was administered during a face-to-face interview, along with a low-dose computed tomography (LDCT) scan of the thorax. Of these, 65 workers (10.7%, 95% CI 8.12-12.24) had LDCT-detected pulmonary fibrosis. The model predicting fibrosis included the variables age, smoking (dichotomized), post-FVC% splines and post-FEV1% splines. This model had a receiver operating characteristic area under the curve of 0.738. The calibration of the model was evaluated with the R statistical program, and the bootstrap optimism-corrected calibration slope was 0.692. Thus, our model demonstrated moderate predictive performance.
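The sketch below illustrates a Harrell-style bootstrap optimism correction of the calibration slope, in the spirit of the validation reported above; plain logistic regression on simulated data stands in for the thesis's spline-based model, so the predictors and sample are assumptions.

```python
# Hedged sketch of a bootstrap optimism-corrected calibration slope
# (Harrell-style optimism correction). Predictors and the fibrosis outcome are
# simulated placeholders; plain logistic regression stands in for the
# spline-based model of the thesis.
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibration_slope(y, linear_predictor):
    """Slope from refitting the outcome on the model's linear predictor."""
    lr = LogisticRegression(C=1e6)                     # effectively unpenalised
    lr.fit(linear_predictor.reshape(-1, 1), y)
    return lr.coef_[0, 0]

def optimism_corrected_slope(X, y, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    full = LogisticRegression(C=1e6).fit(X, y)
    apparent = calibration_slope(y, full.decision_function(X))   # close to 1 by construction
    optimism = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))
        boot = LogisticRegression(C=1e6).fit(X[idx], y[idx])
        slope_boot = calibration_slope(y[idx], boot.decision_function(X[idx]))
        slope_orig = calibration_slope(y, boot.decision_function(X))
        optimism[b] = slope_boot - slope_orig          # how much the slope shrinks out of sample
    return apparent - optimism.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.normal(size=(600, 4))                      # age, smoking, FVC%, FEV1% stand-ins
    y = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.5, 0.4, -0.6, -0.3] - 2))))
    print(f"optimism-corrected calibration slope: {optimism_corrected_slope(X, y):.3f}")
```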
Abstract:
Emerging markets have received wide attention from investors around the globe because of their return potential and risk diversification. This research examines the selection and timing performance of Canadian mutual funds which invest in fixed-income and equity securities in emerging markets. We use (un)conditional two- and five-factor benchmark models that accommodate the dynamics of returns in emerging markets. We also adopt the cross-sectional bootstrap methodology to distinguish between 'skill' and 'luck' for individual funds. All the tests are conducted using a comprehensive data set of emerging-market bond and equity funds over the 1989-2011 period. The risk-adjusted measures of performance are estimated using the least squares method with the Newey-West adjustment for standard errors, which are robust to conditional heteroskedasticity and autocorrelation. The performance statistics of the emerging funds before (after) management-related costs are insignificantly positive (significantly negative). They are sensitive to the chosen benchmark model, and conditional information improves selection performance. The timing statistics are largely insignificant throughout the sample period and are not sensitive to the benchmark model. Evidence of timing and selection ability is obtained for a small number of funds, and this result is not sensitive to the fee structure. We also find evidence that a majority of individual funds provide zero (and very few provide positive) abnormal returns before fees and significantly negative returns after fees. At the negative tail of the performance distribution, our resampling tests fail to reject the role of bad luck in the poor performance of funds, and we conclude that most of them are merely 'unlucky'.
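The sketch below illustrates the cross-sectional ('skill versus luck') bootstrap idea: alphas are set to zero, each fund's residuals are resampled to build a luck-only distribution of t(alpha), and the actual cross-section of t-statistics is compared with it. The factor model, fund returns and number of replications are simulated assumptions, not the paper's data or exact procedure.

```python
# Hedged sketch of a cross-sectional ('skill versus luck') bootstrap for fund
# performance. Factor and fund returns are simulated placeholders.
import numpy as np

def alpha_tstat(excess_returns, factors):
    """OLS alpha and its t-statistic for one fund against a factor benchmark."""
    X = np.column_stack([np.ones(len(factors)), factors])
    beta, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    resid = excess_returns - X @ beta
    sigma2 = resid @ resid / (len(excess_returns) - X.shape[1])
    se_alpha = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0], beta[0] / se_alpha

def luck_distribution(returns, factors, n_boot=500, seed=0):
    """Zero-alpha bootstrap: resample each fund's residuals under an imposed
    zero alpha and record the resulting cross-section of t(alpha)."""
    rng = np.random.default_rng(seed)
    T, n_funds = returns.shape
    X = np.column_stack([np.ones(T), factors])
    sims = np.empty((n_boot, n_funds))
    for f in range(n_funds):
        beta, *_ = np.linalg.lstsq(X, returns[:, f], rcond=None)
        resid = returns[:, f] - X @ beta
        null_fit = X @ np.concatenate([[0.0], beta[1:]])    # impose zero alpha
        for b in range(n_boot):
            boot_ret = null_fit + resid[rng.integers(0, T, size=T)]
            sims[b, f] = alpha_tstat(boot_ret, factors)[1]
    return sims

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    factors = rng.normal(scale=0.04, size=(240, 2))              # two-factor benchmark
    betas = rng.normal(1.0, 0.3, size=(2, 30))
    returns = factors @ betas + rng.normal(scale=0.02, size=(240, 30))  # true alphas are zero
    actual_t = np.array([alpha_tstat(returns[:, f], factors)[1] for f in range(30)])
    luck_t = luck_distribution(returns, factors)
    print("largest actual t(alpha):", round(float(np.max(actual_t)), 2))
    print("95th percentile of luck-only largest t(alpha):",
          round(float(np.percentile(luck_t.max(axis=1), 95)), 2))
```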
Abstract:
Objective: To investigate the impact of maternity insurance and maternal residence on birth outcomes in a Chinese population. Methods: Secondary data were analyzed from a perinatal cohort study conducted in the Beichen District of the city of Tianjin, China. A total of 2364 pregnant women participated in this study at approximately 12 weeks' gestation, upon registration for prenatal care services. After accounting for missing information for relevant variables, a total of 2309 women with single births were included in this analysis. Results: A total of 1190 (51.5%) women reported having maternity insurance, and 629 (27.2%) were rural residents. The abnormal birth outcomes were small for gestational age (SGA, n=217, 9.4%), large for gestational age (LGA, n=248, 10.7%) and birth defects (n=48, 2.1%), including congenital heart defects (n=32, 1.4%). In urban areas, having maternity insurance increased the odds of SGA infants (1.32, 95% CI (0.85, 2.04), NS) but decreased the odds of LGA infants (0.92, 95% CI (0.62, 1.36), NS), birth defects (0.93, 95% CI (0.37, 2.33), NS) and congenital heart defects (0.65, 95% CI (0.21, 1.99), NS) after adjustment for covariates. In contrast, in rural areas, having maternity insurance reduced the odds of SGA infants (0.60, 95% CI (0.13, 2.73), NS) but increased the odds of LGA infants (2.16, 95% CI (0.92, 5.04), NS), birth defects (2.48, 95% CI (0.70, 8.80), NS) and congenital heart defects (2.18, 95% CI (0.48, 10.00), NS) after adjustment for the same covariates. Similar results were obtained from bootstrap methods, except that the odds ratio of LGA infants associated with maternity insurance in rural areas was significant (95% CI (1.13, 4.37)), and urban residence was significantly related to lower odds of birth defects (95% CI (0.23, 0.89)) and congenital heart defects (95% CI (0.19, 0.91)). Conclusions: Having maternity insurance did have an impact on perinatal outcomes, but the impact differed between women with urban residence and women with rural residence status. However, the reasons for the observed differences are not clear, and more studies are needed.
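As an illustration of the bootstrap methods referred to above, the sketch below computes a percentile-bootstrap confidence interval for an adjusted odds ratio from a logistic model; the insurance indicator, covariate and outcome are simulated placeholders rather than the study's variables.

```python
# Hedged sketch of a percentile-bootstrap confidence interval for an adjusted
# odds ratio. The insurance indicator, covariate and outcome are simulated
# placeholders; plain logistic regression stands in for the study's models.
import numpy as np
from sklearn.linear_model import LogisticRegression

def adjusted_odds_ratio(X, y, coef_index=0):
    """Odds ratio for one predictor from a logistic model with covariates."""
    model = LogisticRegression(C=1e6).fit(X, y)        # effectively unpenalised
    return np.exp(model.coef_[0, coef_index])

def bootstrap_or_ci(X, y, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    ors = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        ors[b] = adjusted_odds_ratio(X[idx], y[idx])
    return adjusted_odds_ratio(X, y), np.percentile(ors, [2.5, 97.5])

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    insured = rng.binomial(1, 0.5, size=1500)           # maternity-insurance indicator
    age = rng.normal(28, 4, size=1500)                  # one adjustment covariate
    logit = -2.2 + 0.4 * insured + 0.02 * (age - 28)
    outcome = rng.binomial(1, 1 / (1 + np.exp(-logit))) # e.g. LGA yes/no
    X = np.column_stack([insured, age])
    orr, ci = bootstrap_or_ci(X, outcome)
    print(f"adjusted OR {orr:.2f}, 95% bootstrap CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```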
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
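The core of the Monte Carlo (MC) test technique of Dwass (1957) and Barnard (1963) can be illustrated compactly: when a statistic is pivotal under the null, simulating it a fixed number of times yields an exact p-value by the rank formula. The sketch below applies this to an LM-type test of zero contemporaneous correlation between the OLS residuals of two equations sharing the same regressors; the data and the two-equation setup are illustrative assumptions, not the paper's experiments.

```python
# Hedged sketch of the Monte Carlo (MC) test idea applied to an LM-type test of
# zero contemporaneous correlation in a two-equation system with common
# regressors. Data are simulated placeholders.
import numpy as np

def lm_zero_corr(y1, y2, X):
    """LM statistic T*r^2, where r is the correlation of the two OLS residual vectors."""
    H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix (fine for small samples)
    e1, e2 = y1 - H @ y1, y2 - H @ y2
    r = (e1 @ e2) / np.sqrt((e1 @ e1) * (e2 @ e2))
    return len(y1) * r**2

def mc_pvalue(stat_obs, X, n_rep=999, seed=0):
    """Exact MC p-value via the Dwass/Barnard rank formula. Because OLS residuals
    equal (I-H)u, simulating pure Gaussian noise reproduces the null distribution
    whatever the regression coefficients and error scales are."""
    rng = np.random.default_rng(seed)
    T = X.shape[0]
    sims = np.array([lm_zero_corr(rng.normal(size=T), rng.normal(size=T), X)
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    T = 60
    X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])   # common regressors
    u = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=T)
    y1 = X @ [1.0, 0.5, -0.3] + u[:, 0]
    y2 = X @ [0.2, -0.4, 0.8] + u[:, 1]
    stat = lm_zero_corr(y1, y2, X)
    print(f"LM = {stat:.2f}, MC p-value = {mc_pvalue(stat, X):.3f}")
```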
Abstract:
In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
Abstract:
A wide range of tests for heteroskedasticity have been proposed in the econometric and statistics literature. Although a few exact homoskedasticity tests are available, the commonly employed procedures are quite generally based on asymptotic approximations which may not provide good size control in finite samples. There have been a number of recent studies that seek to improve the reliability of common heteroskedasticity tests using Edgeworth, Bartlett, jackknife and bootstrap methods. Yet the latter remain approximate. In this paper, we describe a solution to the problem of controlling the size of homoskedasticity tests in linear regression contexts. We study procedures based on the standard test statistics [e.g., the Goldfeld-Quandt, Glejser, Bartlett, Cochran, Hartley, Breusch-Pagan-Godfrey, White and Szroeter criteria] as well as tests for autoregressive conditional heteroskedasticity (ARCH-type models). We also suggest several extensions of the existing procedures (sup-type and combined test statistics) to allow for unknown breakpoints in the error variance. We exploit the technique of Monte Carlo tests to obtain provably exact p-values, for both the standard and the new tests suggested. We show that the MC test procedure conveniently solves the intractable null distribution problem, in particular the problems raised by the sup-type and combined test statistics as well as (when relevant) unidentified nuisance parameter problems under the null hypothesis. The method proposed works in exactly the same way with both Gaussian and non-Gaussian disturbance distributions [such as heavy-tailed or stable distributions]. The performance of the procedures is examined by simulation. The Monte Carlo experiments conducted focus on: (1) ARCH, GARCH, and ARCH-in-mean alternatives; (2) the case where the variance increases monotonically with (i) one exogenous variable, and (ii) the mean of the dependent variable; (3) grouped heteroskedasticity; (4) breaks in variance at unknown points. We find that the proposed tests achieve perfect size control and have good power.
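To make the MC approach concrete for one of the standard criteria listed above, the sketch below simulates an exact p-value for a Goldfeld-Quandt-type statistic under a homoskedastic Gaussian null with fixed regressors; the regression design, split point and variance break are illustrative assumptions.

```python
# Hedged sketch of a Monte Carlo exact p-value for a Goldfeld-Quandt-type
# heteroskedasticity statistic. Regressors and errors are simulated placeholders.
import numpy as np

def goldfeld_quandt(y, X, split):
    """Ratio of residual variances from OLS fits on the two subsamples."""
    def rvar(yy, XX):
        beta, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        resid = yy - XX @ beta
        return resid @ resid / (len(yy) - XX.shape[1])
    return rvar(y[split:], X[split:]) / rvar(y[:split], X[:split])

def mc_pvalue(stat_obs, X, split, n_rep=999, seed=0):
    """Dwass/Barnard Monte Carlo p-value: the statistic is pivotal under a
    homoskedastic Gaussian null with fixed regressors, so pure-noise draws
    reproduce its null distribution exactly."""
    rng = np.random.default_rng(seed)
    sims = np.array([goldfeld_quandt(rng.normal(size=len(X)), X, split)
                     for _ in range(n_rep)])
    return (1 + np.sum(sims >= stat_obs)) / (n_rep + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    n, split = 100, 50
    X = np.column_stack([np.ones(n), np.sort(rng.normal(size=n))])
    sigma = np.where(np.arange(n) < split, 1.0, 2.0)     # variance break at the split
    y = X @ [1.0, 0.5] + rng.normal(scale=sigma)
    stat = goldfeld_quandt(y, X, split)
    print(f"GQ = {stat:.2f}, MC p-value = {mc_pvalue(stat, X, split):.3f}")
```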
Abstract:
In this text, we analyze recent developments in econometrics in the light of the theory of statistical tests. We first review some basic principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of test theory as a formalization of the falsification principle for probabilistic models, and the logical justification of the basic notions of test theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze some specific cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for nonparametric hypotheses, including the construction of procedures robust to heteroskedasticity, non-normality or dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of results from asymptotic distributional theory. Finally, we stress the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be demonstrated in finite samples.