954 results for Parametric bootstrap


Relevância:

10.00%

Publicador:

Resumo:

The main result of this work is a parametric description of the spectral surfaces of a class of periodic 5-diagonal matrices, related to the strong moment problem. This class is a self-adjoint twin of the class of CMV matrices. Jointly they form the simplest possible classes of 5-diagonal matrices.

Relevância:

10.00%

Publicador:

Resumo:

Purpose - The authors sought to explain why and how a protean career attitude might positively influence the experiences of self-initiated expatriates (SIEs). A mediation model of cultural adjustment was proposed and empirically evaluated. Design/methodology/approach - Data from 132 SIEs in Germany, containing measures of protean career attitude, cultural adjustment, career satisfaction, life satisfaction, and intention to stay in the host country, were analysed using path analysis with a bootstrap method. Findings - Empirical results support the proposed model: the positive relations between protean career attitude and the three expatriation outcomes (career satisfaction, life satisfaction and intention to stay in the host country) were mediated by positive cross-cultural adjustment of SIEs. Research limitations/implications - All data were cross-sectional and from a single source. The sample size was small and included a large proportion of Chinese participants. The study should be replicated with samples in other destination countries, and longitudinal research is suggested. Practical implications - By fostering both a protean career attitude in skilled SIE employees and their cultural adjustment, corporations and receiving countries may be better able to retain this international workforce in times of talent shortage. Originality/value - This study contributes to the scarce research on the conceptual relatedness of protean career attitude and SIEs, and acknowledges the cultural diversity of the SIE population.
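As a rough illustration of the bootstrap mediation analysis described above, the sketch below estimates the indirect effect (a·b) in a single-mediator model and builds a percentile bootstrap confidence interval. It is not the authors' path model: the data are synthetic, there is only one mediator and one outcome, and all variable names are hypothetical stand-ins for the study's constructs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the constructs (not the study's data):
# x = protean career attitude, m = cross-cultural adjustment, y = career satisfaction
n = 132
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.8, size=n)             # mediator depends on x
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.8, size=n)   # outcome depends mostly on m

def indirect_effect(x, m, y):
    """a*b estimate from two OLS fits: m ~ x and y ~ x + m."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]         # slope of x in m ~ x
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]         # slope of m in y ~ x + m
    return a * b

# Non-parametric bootstrap: resample cases, re-estimate the indirect effect
boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, "
      f"95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

If the bootstrap interval excludes zero, mediation is supported; in the study each of the three expatriation outcomes would get its own indirect-effect estimate of this kind.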

Relevância:

10.00%

Publicador:

Resumo:

As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zero is usually understood as "a trace too small to measure", it seems reasonable to replace it by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values on compositional data sets is introduced.
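A minimal sketch of the multiplicative replacement described above is shown below. It assumes a single composition closed to a constant and a vector of small replacement values delta (e.g. a fraction of each part's detection limit); both the example numbers and the delta values are purely illustrative, not taken from the cited papers.

```python
import numpy as np

def multiplicative_replacement(x, delta):
    """
    Multiplicative replacement of rounded zeros in a composition
    (in the spirit of Martín-Fernández et al., 2003).

    x     : 1-D array, the composition (parts summing to a constant c)
    delta : 1-D array of small replacement values, one per part
            (e.g. a fraction of each detection limit); used only where x == 0
    """
    x = np.asarray(x, dtype=float)
    delta = np.asarray(delta, dtype=float)
    c = x.sum()                                  # closure constant (e.g. 1, 100, 1e6)
    zero = (x == 0)
    # zeros get delta; non-zero parts are shrunk by a common multiplicative factor
    return np.where(zero, delta, x * (1.0 - delta[zero].sum() / c))

# Example: a 4-part composition closed to 100 with one rounded zero
x = np.array([62.0, 25.0, 13.0, 0.0])
delta = np.array([0.0, 0.0, 0.0, 0.05])          # hypothetical replacement value
r = multiplicative_replacement(x, delta)
print(r)          # non-zero parts scaled by 0.9995, zero replaced by 0.05
print(r.sum())    # still sums to 100
```

Because the non-zero parts are all multiplied by the same factor, their ratios (and hence the covariance structure of zero-free subcompositions) are preserved, which is the property emphasized above.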

Relevância:

10.00%

Publicador:

Resumo:

There is almost no case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for the interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero for the value of 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is a solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the limit of detection of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we are proposing takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum ones, but while copper will always be above the limit of detection, many of the molybdenum values will be "rounded zeros". So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values, as sketched below. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
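The sketch below illustrates the kind of correlation-based imputation described, without claiming to reproduce the authors' exact implementation: synthetic Cu and Mo assays are generated, the regression is fitted on the lower quartile of the measured Mo values, and each rounded zero is then predicted from its own Cu value. The log-log regression is an assumption motivated by the lognormal behaviour mentioned above; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical assay data (ppm): Cu is always measured, Mo has rounded zeros
cu = np.exp(rng.normal(np.log(3000), 0.5, size=200))              # copper values
mo_true = np.exp(0.9 * np.log(cu) - 4.0 + rng.normal(0, 0.3, 200))
det_limit = 5.0
mo = np.where(mo_true < det_limit, 0.0, mo_true)                  # rounded zeros

measured = mo > 0
# Fit on the lower quartile of the *measured* Mo values, i.e. the low-grade
# end closest to the detection limit
q1 = np.quantile(mo[measured], 0.25)
low = measured & (mo <= q1)

# Log-log regression Mo ~ Cu (log transform assumed because the data are lognormal)
slope, intercept = np.polyfit(np.log(cu[low]), np.log(mo[low]), deg=1)

# Impute each rounded zero from its own Cu value (no single fixed replacement)
mo_imputed = mo.copy()
mo_imputed[~measured] = np.exp(intercept + slope * np.log(cu[~measured]))

print(f"{(~measured).sum()} rounded zeros imputed; "
      f"range {mo_imputed[~measured].min():.2f}-{mo_imputed[~measured].max():.2f} ppm")
```

Because each imputed value is driven by the corresponding Cu assay, the replacements vary from sample to sample instead of collapsing onto a single constant, which is the advantage highlighted in the abstract.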

Relevância:

10.00%

Publicador:

Resumo:

I use a multi-layer feedforward perceptron, with backpropagation learning implemented via stochastic gradient descent, to extrapolate the volatility smile of Euribor derivatives over low strikes by training the network on parametric prices.
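As a minimal sketch of the approach, the code below trains a tiny one-hidden-layer perceptron by backpropagation with mini-batch SGD on synthetic (strike, implied volatility) pairs, then evaluates it below the training range. The architecture, the synthetic quadratic smile, and all numbers are assumptions standing in for the thesis's parametric prices and network, not its actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data standing in for a parametric smile (illustrative only)
k = rng.uniform(-0.02, 0.06, size=(500, 1))              # strikes (as rates)
vol = 0.25 + 4.0 * (k - 0.02) ** 2 + rng.normal(0, 0.002, size=k.shape)

# One-hidden-layer MLP: 1 -> 16 -> 1, tanh activation, squared-error loss
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for epoch in range(2000):
    idx = rng.integers(0, len(k), size=32)                # stochastic mini-batch
    x, y = k[idx], vol[idx]
    h = np.tanh(x @ W1 + b1)                              # forward pass
    yhat = h @ W2 + b2
    err = yhat - y                                        # gradient of 0.5*(yhat-y)^2
    # backpropagation
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g                                       # SGD update

# Extrapolate the smile below the lowest training strike
k_low = np.array([[-0.04], [-0.03]])
print(np.tanh(k_low @ W1 + b1) @ W2 + b2)
```

In the thesis the targets would come from the chosen parametric pricing model rather than from the synthetic smile used here.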

Relevância:

10.00%

Publicador:

Resumo:

Initial topography and inherited structural discontinuities are known to play a dominant role in rock slope stability. Previous 2-D physical modeling results demonstrated that, even if few preexisting fractures are activated or propagated during gravitational failure, all of those heterogeneities have a great influence on the mobilized volume and its kinematics. The question we address in the present study is whether such a result is also observed in 3-D. As in the previous 2-D models, we examine a geologically stable model configuration based upon the well-documented landslide at Randa, Switzerland. The 3-D models consisted of a homogeneous material in which several fracture zones were introduced in order to study simplified but realistic configurations of discontinuities (i.e. based on a natural example rather than a parametric study). Results showed that the type of gravitational failure (deep-seated landslide or sequential failure) and the resulting slope morphology evolution arise from the interplay of initial topography and inherited preexisting fractures (orientation and density). The three main results are: i) the initial topography exerts a strong control on gravitational slope failure; indeed, in each tested configuration (even the isotropic one without fractures) the model is affected by a rock slide; ii) the number of simulated fracture sets greatly influences the volume mobilized and its kinematics; and iii) the failure zone involved in the 1991 event is smaller than that produced by the analog modeling. This discrepancy may indicate that the zone mobilized in 1991 is potentially only a part of a larger deep-seated landslide and/or of a wider deep-seated gravitational slope deformation.

Relevância:

10.00%

Publicador:

Resumo:

Copula theory was used to analyze contagion among the BRIC (Brazil, Russia, India and China) and European Union stock markets and the U.S. equity market. The market indexes used for the period between January 01, 2005 and February 27, 2010 are MXBRIC (BRIC), MXEU (European Union) and MXUS (United States). This article evaluated the adequacy of the main copulas found in the financial literature using the log-likelihood, Akaike information and Bayesian information criteria. The article provides a groundbreaking study in the area of contagion due to the use of conditional copulas, which allow the increase in correlation between indexes to be calculated with a non-parametric approach. The conditional Symmetrized Joe-Clayton copula was the one that fitted the considered pairs of returns best. Results indicate evidence of a contagion effect in both markets, European Union and BRIC members, at a 5% significance level. Furthermore, there is also evidence that the contagion of the U.S. financial crisis was more pronounced in the European Union than in the BRIC markets, at a 5% significance level. Therefore, stock portfolios formed by equities from the BRIC countries were able to offer greater protection during the subprime crisis. The results are aligned with recent papers that report an increase in correlation between stock markets, especially in bear markets.
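To illustrate the copula-selection step (fit by maximum likelihood, then compare log-likelihood, AIC and BIC), the sketch below fits a plain Clayton copula to pseudo-observations of two synthetic return series. It deliberately uses a simpler family than the Symmetrized Joe-Clayton copula of the article, and the data are made up; only the fitting and information-criterion mechanics are the point.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

rng = np.random.default_rng(3)

# Synthetic "returns" for two indexes (illustrative only)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
r1, r2 = z[:, 0], z[:, 1]

# Pseudo-observations: ranks scaled to (0, 1)
u = rankdata(r1) / (len(r1) + 1)
v = rankdata(r2) / (len(r2) + 1)

def clayton_neg_loglik(theta):
    """Negative log-likelihood of the Clayton copula density, theta > 0."""
    if theta <= 0:
        return np.inf
    t = u ** (-theta) + v ** (-theta) - 1.0
    logc = (np.log(1 + theta)
            - (theta + 1) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(t))
    return -logc.sum()

res = minimize_scalar(clayton_neg_loglik, bounds=(1e-4, 20), method="bounded")
theta_hat, nll = res.x, res.fun
n, k = len(u), 1                          # sample size, number of copula parameters
aic = 2 * k + 2 * nll
bic = k * np.log(n) + 2 * nll
print(f"theta = {theta_hat:.3f}, logL = {-nll:.1f}, AIC = {aic:.1f}, BIC = {bic:.1f}")
```

In the article the same comparison would be repeated for each candidate copula family and each pair of indexes, with the lowest AIC/BIC (or highest log-likelihood) indicating the best-fitting copula.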

Relevância:

10.00%

Publicador:

Resumo:

Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WML based on each of them is studied. The result is that in a great variety of situations the WML substantially improves the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Besides, the performance of the WML is rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two examples of application of the WML to real data are considered. In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
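A toy version of the two-phase idea (robust initial fit, then downweighting and weighted maximum likelihood) is sketched below for a Poisson model. It swaps in a minimum Hellinger distance fit for the initial MDE and hard 0/1 weights with an arbitrary plausibility cutoff in place of the adaptive weighting scheme of the thesis, so it should be read as an illustration of the mechanism rather than the proposed estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(4)

# Poisson(3) sample contaminated by a few gross outliers
x = np.concatenate([rng.poisson(3, size=95), np.full(5, 25)])
values, counts = np.unique(x, return_counts=True)
freq = counts / len(x)                                   # empirical frequencies f_n(k)

def hellinger_distance(lam):
    """Squared Hellinger distance between f_n and Poisson(lam) on the observed support."""
    p = poisson.pmf(values, lam)
    return np.sum((np.sqrt(freq) - np.sqrt(p)) ** 2)

# Phase 1: robust initial estimate (minimum Hellinger distance)
lam0 = minimize_scalar(hellinger_distance, bounds=(0.1, 30), method="bounded").x

# Phase 2: downweight observations the initial fit declares implausible,
# then compute a weighted maximum likelihood estimate (weighted mean for Poisson)
p_obs = poisson.pmf(x, lam0)
cutoff = 1e-3                                            # arbitrary plausibility threshold
w = np.where(p_obs < cutoff, 0.0, 1.0)                   # hard 0/1 weights for simplicity
lam_wml = np.sum(w * x) / np.sum(w)

print(f"plain MLE (mean)   : {x.mean():.2f}")            # pulled up by the outliers
print(f"initial MHD fit    : {lam0:.2f}")
print(f"weighted ML (WML)  : {lam_wml:.2f}")
```

The gain mirrors the summary above: the plain MLE is corrupted by the five outliers, while the downweighted fit stays close to the uncontaminated parameter.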

Relevância:

10.00%

Publicador:

Resumo:

INTRODUCTION: We aimed to investigate the characteristics and outcome of patients suffering early major worsening (EMW) after acute ischemic stroke (AIS) and assess the parameters associated with it. METHODS: All consecutive patients with AIS in the ASTRAL registry until 10/2010 were included. EMW was defined as an NIHSS increase of ≥8 points within the first 24 h after admission. The bootstrap version of the Kolmogorov-Smirnov test and the χ²-test were used for the comparison of continuous and categorical covariates, respectively, between patients with and without EMW. Multiple logistic regression analysis was performed to identify independent predictors of EMW. RESULTS: Among 2155 patients, 43 (2.0 %) had an EMW. EMW was independently associated with hemorrhagic transformation (OR 22.6, 95 % CI 9.4-54.2), cervical artery dissection (OR 9.5, 95 % CI 4.4-20.6), initial dysarthria (OR 3.7, 95 % CI 1.7-8.0), and intravenous thrombolysis (OR 2.1, 95 % CI 1.1-4.3), whereas a negative association was identified with initial eye deviation (OR 0.4, 95 % CI 0.2-0.9). Favorable outcome at 3 and 12 months was less frequent in patients with EMW compared to patients without (11.6 vs. 55.3 % and 16.3 vs. 50.7 %, respectively), and case fatality was higher (53.5 vs. 12.9 % and 55.8 vs. 16.8 %, respectively). Stroke recurrence within 3 months in surviving patients was similar between patients with and without EMW (9.3 vs. 9.0 %, respectively). CONCLUSIONS: Worsening of ≥8 points in the NIHSS score during the first 24 h in AIS patients is related to cervical artery dissection and hemorrhagic transformation. It justifies urgent repeat parenchymal and arterial imaging. Both conditions may be influenced by targeted interventions in the acute phase of stroke.

Relevância:

10.00%

Publicador:

Resumo:

In recent years, telemetry systems for medical applications have grown significantly in diagnosis and in the monitoring of, for example, glucose, blood pressure, temperature and heart rate. Implanted devices broaden the applications of medicine and bring an improvement in quality of life for the user. For this reason, this project studies two of the most common antennas, the dipole and the patch, the latter being especially used in implanted applications. In the analysis of these antennas, characteristics related to the application environment, as well as to the antenna itself, have been parameterized, explaining the behavior that, unlike in free space, the antennas exhibit when these parameters change. At the same time, a setup for measuring implanted antennas based on a one-layer model of the human body has been implemented. Compared with the results of simulations performed with the FEKO software, good agreement has been obtained in the empirical measurement of the matching and gain of the microstrip antennas. Thanks to the parametric analysis, this project also presents several antenna designs that optimize the realizable gain with the aim of achieving the best possible communication with the external device or base station.

Relevância:

10.00%

Publicador:

Resumo:

Web application for entering and managing incidents, developed with CodeIgniter (PHP) and Bootstrap.

Relevância:

10.00%

Publicador:

Resumo:

The populations of parasites and infectious agents are most of the time structured in complex hierarchies that lie beyond the classical nested design described by Wright's F-statistics (F(IS), F(ST) and F(IT)). In this note we propose a user-friendly, step-by-step guide to recent software (HierFstat) that computes and tests fixation indices for any hierarchical structure. We add some tricks and tips for special kinds of data (haploid, single locus), for additional procedures (bootstrap over loci), and for handling crossed factors.
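HierFstat itself is an R package, so the snippet below is not its workflow; it is only a minimal Python illustration of the classical (non-hierarchical) indices F(IS), F(ST) and F(IT) that the nested design mentioned above generalizes, using simple heterozygosity-based (Nei-style) estimates without sample-size corrections. The genotype counts are hypothetical.

```python
import numpy as np

# Genotype counts per subpopulation at one biallelic locus (A/a):
# columns = counts of AA, Aa, aa (hypothetical data)
geno = np.array([
    [30, 15, 5],
    [10, 20, 20],
    [25, 10, 15],
])

n = geno.sum(axis=1)                          # individuals per subpopulation
p = (2 * geno[:, 0] + geno[:, 1]) / (2 * n)   # allele frequency of A per subpopulation

H_I = np.average(geno[:, 1] / n, weights=n)   # mean observed heterozygosity
H_S = np.average(2 * p * (1 - p), weights=n)  # mean within-subpopulation expected het.
p_bar = np.average(p, weights=n)
H_T = 2 * p_bar * (1 - p_bar)                 # total expected heterozygosity

F_IS = 1 - H_I / H_S                          # inbreeding within subpopulations
F_ST = 1 - H_S / H_T                          # differentiation among subpopulations
F_IT = 1 - H_I / H_T                          # overall inbreeding
print(f"F_IS = {F_IS:.3f}, F_ST = {F_ST:.3f}, F_IT = {F_IT:.3f}")
```

The hierarchical extension handled by HierFstat adds further levels (e.g. hosts within farms within regions), each with its own fixation index, together with permutation tests and the bootstrap over loci mentioned in the note.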

Relevância:

10.00%

Publicador:

Resumo:

This paper investigates a simple procedure for robustly estimating the mean of an asymmetric distribution. The procedure removes the observations which are larger or smaller than certain limits and takes the arithmetic mean of the remaining observations, the limits being determined with the help of a parametric model, e.g., the Gamma, the Weibull or the Lognormal distribution. The breakdown point, the influence function, the (asymptotic) variance, and the contamination bias of this estimator are explored and compared numerically with those of competing estimates.
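A rough sketch of this type of procedure is given below: a Gamma model is fitted to the sample, the trimming limits are set at its lower and upper quantiles, and the arithmetic mean of the observations inside the limits is returned. The choice of alpha and the plain maximum likelihood fit used to obtain the limits are simplifications of my own, not the paper's rule; with heavy contamination a non-robust fit would itself be affected.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Asymmetric data with a bit of contamination (illustrative only)
x = np.concatenate([rng.gamma(shape=2.0, scale=3.0, size=480),
                    rng.uniform(80, 120, size=20)])        # gross outliers

def parametric_trimmed_mean(x, alpha=0.02):
    """Trim outside the [alpha, 1-alpha] quantiles of a fitted Gamma, then average."""
    shape, loc, scale = stats.gamma.fit(x, floc=0)          # parametric model for the limits
    lo, hi = stats.gamma.ppf([alpha, 1 - alpha], shape, loc=loc, scale=scale)
    kept = x[(x >= lo) & (x <= hi)]
    return kept.mean(), (lo, hi), len(x) - len(kept)

est, limits, n_removed = parametric_trimmed_mean(x)
print(f"sample mean        : {x.mean():.2f}")               # inflated by the outliers
print(f"parametric trimmed : {est:.2f}  "
      f"(limits {limits[0]:.2f}-{limits[1]:.2f}, {n_removed} observations removed)")
```

Unlike a symmetric trimmed mean, the limits derived from the fitted asymmetric model cut more from the long right tail than from the left, which is the point of using a parametric model for an asymmetric distribution.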

Relevância:

10.00%

Publicador:

Resumo:

Time scale parametric spike train distances like the Victor and the van Rossum distances are often applied to study the neural code based on neural stimuli discrimination. Different neural coding hypotheses, such as rate or coincidence coding, can be assessed by combining a time scale parametric spike train distance with a classifier in order to obtain the optimal discrimination performance. The time scale for which the responses to different stimuli are distinguished best is assumed to be the discriminative precision of the neural code. The relevance of temporal coding is evaluated by comparing the optimal discrimination performance with the one achieved when assuming a rate code. We here characterize the measures quantifying the discrimination performance, the discriminative precision, and the relevance of temporal coding. Furthermore, we evaluate the information these quantities provide about the neural code. We show that the discriminative precision is too unspecific to be interpreted in terms of the time scales relevant for encoding. Accordingly, the time scale parametric nature of the distances is mainly an advantage because it allows maximizing the discrimination performance across a whole set of measures with different sensitivities determined by the time scale parameter, but not due to the possibility to examine the temporal properties of the neural code.
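As a concrete example of one of the distances named above, the sketch below computes the van Rossum distance from its closed form for exponential (causal) kernels, with the kernel time constant tau playing the role of the time scale parameter. The spike times are made up, and the 1/tau normalization is one common convention; the paper's exact analysis pipeline is not reproduced here.

```python
import numpy as np

def van_rossum_distance(t1, t2, tau):
    """
    Van Rossum spike train distance with an exponential (causal) kernel,
    computed from its closed form (no explicit convolution grid needed).
    """
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    def cross(a, b):
        return np.exp(-np.abs(a[:, None] - b[None, :]) / tau).sum()
    d2 = 0.5 * (cross(t1, t1) + cross(t2, t2) - 2.0 * cross(t1, t2))
    return np.sqrt(max(d2, 0.0))

# Two hypothetical spike trains (seconds)
a = [0.010, 0.052, 0.130, 0.200]
b = [0.012, 0.060, 0.180]

# Sweep the time scale parameter: small tau ~ coincidence detector, large tau ~ rate code
for tau in (0.001, 0.010, 0.100):
    print(f"tau = {tau:.3f} s -> distance = {van_rossum_distance(a, b, tau):.3f}")
```

In a discrimination analysis like the one discussed, distances of this kind between all pairs of responses feed a classifier, and tau is swept to find the value giving the best stimulus discrimination.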

Relevância:

10.00%

Publicador:

Resumo:

This paper describes a maximum likelihood method using historical weather data to estimate a parametric model of daily precipitation and maximum and minimum air temperatures. Parameter estimates are reported for Brookings, SD, and Boone, IA, to illustrate the procedure. The use of this parametric model to generate stochastic time series of daily weather is then summarized. A soil temperature model is described that determines daily average, maximum, and minimum soil temperatures based on air temperatures and precipitation, following a lagged process due to soil heat storage and other factors.
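The abstract does not spell out the model form, so the sketch below shows one common parametric choice for the precipitation component of such a daily weather generator: a first-order Markov chain for wet/dry occurrence plus a Gamma distribution for wet-day amounts, both fitted by maximum likelihood and then used to simulate a synthetic daily series. The historical record and all parameter values are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical historical daily precipitation record (mm); zeros are dry days
precip = np.where(rng.random(3650) < 0.7, 0.0, rng.gamma(0.8, 8.0, 3650))

# --- Maximum likelihood fit of a simple parametric precipitation model ---
wet = precip > 0
# First-order Markov chain occurrence: conditional relative frequencies are the ML estimates
p_wet_after_dry = np.mean(wet[1:][~wet[:-1]])
p_wet_after_wet = np.mean(wet[1:][wet[:-1]])
# Gamma distribution for wet-day amounts (ML fit, location fixed at zero)
shape, _, scale = stats.gamma.fit(precip[wet], floc=0)

# --- Stochastic generation of a synthetic daily series from the fitted model ---
def simulate(n_days):
    sim, was_wet = np.zeros(n_days), False
    for d in range(n_days):
        p = p_wet_after_wet if was_wet else p_wet_after_dry
        was_wet = rng.random() < p
        if was_wet:
            sim[d] = rng.gamma(shape, scale)
    return sim

synthetic = simulate(365)
print(f"fitted: P(wet|dry)={p_wet_after_dry:.2f}, P(wet|wet)={p_wet_after_wet:.2f}, "
      f"gamma shape={shape:.2f}, scale={scale:.2f}")
print(f"synthetic year: {np.count_nonzero(synthetic)} wet days, total {synthetic.sum():.0f} mm")
```

A full generator of the kind described would additionally fit maximum and minimum air temperature models conditional on the wet/dry state and drive the lagged soil temperature model from the simulated air temperatures and precipitation; the sketch covers only the precipitation part.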