957 results for "data dependence"
Resumo:
The behaviour of syntactic foam is strongly dependent on temperature and strain rate. This research focuses on the behaviour of syntactic foam made of epoxy and glass microballoons in the glassy, transition and rubbery regions. Both epoxy and epoxy foam are investigated separately under tension and shear loadings in order to study the strain rate and temperature effects. The results indicate that the strength and strain-to-failure data can be collapsed onto master curves depending on the temperature-reduced strain rate. The highest strain to failure occurs in the transition zone. The presence of glass microballoons reduces the strain to failure over the entire range considered, an effect that is particularly significant under tensile loading. However, as the microballoons increase the elastic modulus significantly in the rubbery zone but reduce it somewhat in the glassy zone, the effect on the strength is more complicated. Different failure mechanisms are identified over the temperature-frequency range considered. As the temperature-reduced strain rate is decreased, the failure mechanism changes from microballoon fracture to matrix fracture and debonding between the matrix and microballoons. © IMechE 2012.
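Master-curve collapse rests on a temperature-dependent shift factor applied to the strain rate. A WLF-type shift is a common choice for polymers near the glass transition; it is used below purely as an illustrative assumption (the abstract does not specify the shift model), with the "universal" WLF constants as defaults:

```python
import math

def wlf_log_shift(T, T_ref, c1=17.44, c2=51.6):
    """WLF shift factor log10(a_T); c1 and c2 are the 'universal'
    constants, used here only as illustrative defaults."""
    return -c1 * (T - T_ref) / (c2 + (T - T_ref))

def reduced_strain_rate(rate, T, T_ref):
    """Temperature-reduced strain rate rate * a_T: each (rate, T) test
    condition maps to a single abscissa on the master curve at T_ref."""
    return rate * 10 ** wlf_log_shift(T, T_ref)

# At the reference temperature the shift factor is 1, so the rate is unchanged;
# hotter tests shift to lower reduced rates.
r = reduced_strain_rate(1e-3, T=300.0, T_ref=300.0)
print(r)
```

Plotting strength or strain to failure against this reduced rate, rather than the raw rate, is what lets data from different temperatures fall on one curve.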
Resumo:
We have developed an instrument to study the behavior of the critical current density (Jc) in superconducting wires and tapes as a function of field (µ0H), temperature (T), and axial applied strain (εa). The apparatus is an improvement of similar devices that have been successfully used in our institute for over a decade. It encompasses specific advantages such as a simple sample layout, a well defined and homogeneous strain application, the possibility of investigating large compressive strains and the option of simple temperature variation, while improving on the main drawback of our previous systems by increasing the investigated sample length by approximately a factor of 10. The increase in length is achieved via a design change from a straight beam section to an initially curved beam, placed perpendicular to the applied field axis in the limited diameter of a high field magnet bore. This article describes in detail the mechanical design of the device and its calibrations. Additionally, initial Jc(εa) data, measured at liquid helium temperature, are presented for a bronze-processed and for a powder-in-tube Nb3Sn superconducting wire. Comparisons are made with earlier characterizations, indicating consistent behavior of the instrument. The improved voltage resolution, resulting from the increased sample length, enables Jc determinations at an electric field criterion Ec = 10 µV/m, substantially lower than the 100 µV/m criterion possible in our previous systems. © 2004 American Institute of Physics.
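Determining Jc at an electric-field criterion amounts to finding where the measured V-I curve crosses the criterion. A minimal sketch, interpolating in log-log space (i.e. assuming the usual local power-law transition); the data and n-value below are synthetic, not measurements from this instrument:

```python
import math

def jc_at_criterion(currents, efields, ec=10e-6):
    """Current at which the electric field crosses the criterion ec (V/m),
    interpolating linearly in log-log space, i.e. assuming a local power
    law E = ec * (I/Ic)**n across the bracketing points."""
    pairs = list(zip(currents, efields))
    for (i0, e0), (i1, e1) in zip(pairs, pairs[1:]):
        if e0 <= ec <= e1:
            t = (math.log(ec) - math.log(e0)) / (math.log(e1) - math.log(e0))
            return math.exp(math.log(i0) + t * (math.log(i1) - math.log(i0)))
    return None  # criterion not bracketed by the measured V-I curve

# Synthetic power-law V-I data with Ic = 100 A and n-value 30.
ic, n = 100.0, 30
currents = [80.0, 90.0, 95.0, 100.0, 105.0, 110.0]
efields = [10e-6 * (i / ic) ** n for i in currents]
print(round(jc_at_criterion(currents, efields), 6))  # -> 100.0
```

A lower criterion (10 rather than 100 µV/m) probes the shallower part of this transition, which is why it demands the better voltage resolution the longer sample provides.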
Resumo:
This paper proposes the use of an improved covariate unit root test which exploits the cross-sectional dependence information when the panel data null hypothesis of a unit root is rejected. More explicitly, to increase the power of the test, we suggest the utilization of more than one covariate and offer several ways to select the ‘best’ covariates from the set of potential covariates represented by the individuals in the panel. Employing our methods, we investigate the Prebisch-Singer hypothesis for nine commodity prices. Our results show that this hypothesis holds for all but the price of petroleum.
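A covariate unit root test augments the usual Dickey-Fuller regression with stationary covariates. The sketch below estimates only the coefficient on the lagged level (no critical values, and with simulated rather than real data), as a minimal illustration of the regression form such tests build on:

```python
import random

def ols(y, X):
    """OLS coefficients via normal equations and naive Gaussian elimination."""
    n, k = len(y), len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):                      # forward elimination
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def cadf_rho(y, x):
    """Coefficient rho in dy[t] = rho*y[t-1] + b*x[t] + c + e[t].
    Under the unit root null, rho is near zero; a clearly negative rho
    (judged via its t-statistic in the full test) rejects the unit root."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    X = [[y[t - 1], x[t], 1.0] for t in range(1, len(y))]
    return ols(dy, X)[0]

random.seed(0)
y = [0.0]
for _ in range(500):                          # a pure random walk
    y.append(y[-1] + random.gauss(0, 1))
x = [random.gauss(0, 1) for _ in range(501)]  # a stationary covariate
rho = cadf_rho(y, x)
print(rho)
```

The paper's contribution concerns how to pick the covariates x from the other panel members; the regression skeleton is the same once they are chosen.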
Resumo:
Why did banking compliance fail so badly in the recent financial crisis and why, according to many, does it continue to do so? Rather than point to the lack of oversight of individuals in bank compliance roles, as many commentators do, in this paper I examine in depth the organizational context that surrounded people in such roles. I focus on those compliance personnel who did speak out about risky practices in their banks, who were forced to escalate the problem and 'whistle-blow' to external parties, and who were punished for doing so. Drawing on recent empirical data from a wider study, I argue that the concept of dependence corruption is useful in this setting, and that it can be extended to encompass interpersonal attachments. This, in turn, problematises the concept of dependence corruption because interpersonal attachments in organisational settings are inevitable. The paper engages with recent debates on whether institutional corruption is an appropriate lens for studying private-sector organisations by arguing for a focus on roles, rather than remaining at the level of institutional fields or individual organisations. Finally, the paper contributes to studies on banking compliance in the context of the recent crisis; without a deeper understanding of those who were forced to extremes to simply do their jobs, reform of the banking sector will prove difficult.
Resumo:
Diagnostic test sensitivity and specificity are probabilistic estimates with far reaching implications for disease control, management and genetic studies. In the absence of 'gold standard' tests, traditional Bayesian latent class models may be used to assess diagnostic test accuracies through the comparison of two or more tests performed on the same groups of individuals. The aim of this study was to extend such models to estimate diagnostic test parameters and true cohort-specific prevalence, using disease surveillance data. The traditional Hui-Walter latent class methodology was extended to allow for features seen in such data, including (i) unrecorded data (i.e. data for a second test available only on a subset of the sampled population) and (ii) cohort-specific sensitivities and specificities. The model was applied with and without the modelling of conditional dependence between tests. The utility of the extended model was demonstrated through application to bovine tuberculosis surveillance data from Northern Ireland and the Republic of Ireland. Simulation, coupled with re-sampling techniques, demonstrated that the extended model has good predictive power to estimate the diagnostic parameters and true herd-level prevalence from surveillance data. Our methodology can aid in the interpretation of disease surveillance data, and the results can potentially refine disease control strategies.
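The conditional-independence version of the Hui-Walter model builds its likelihood from the expected cross-classification probabilities of the two tests, given prevalence and the test accuracies. A minimal sketch of that building block, with illustrative parameter values (not estimates from the surveillance data):

```python
def cell_probs(prev, se1, sp1, se2, sp2):
    """Expected 2x2 cross-classification probabilities for two conditionally
    independent tests in one population (the basic Hui-Walter building block).
    Order: (+,+), (+,-), (-,+), (-,-)."""
    return (
        prev * se1 * se2 + (1 - prev) * (1 - sp1) * (1 - sp2),
        prev * se1 * (1 - se2) + (1 - prev) * (1 - sp1) * sp2,
        prev * (1 - se1) * se2 + (1 - prev) * sp1 * (1 - sp2),
        prev * (1 - se1) * (1 - se2) + (1 - prev) * sp1 * sp2,
    )

# Illustrative values only: 10% true prevalence, two imperfect tests.
p = cell_probs(prev=0.10, se1=0.80, sp1=0.95, se2=0.90, sp2=0.99)
print(round(p[0], 5), round(sum(p), 5))  # -> 0.07245 1.0
```

The extensions described in the abstract replace the single prevalence and accuracy pair with cohort-specific values and marginalize over cells where the second test is unrecorded.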
Resumo:
his paper considers a problem of identification for a high dimensional nonlinear non-parametric system when only a limited data set is available. The algorithms are proposed for this purpose which exploit the relationship between the input variables and the output and further the inter-dependence of input variables so that the importance of the input variables can be established. A key to these algorithms is the non-parametric two stage input selection algorithm.
Resumo:
We present a study on the gender balance, in speakers and attendees, at the recent major astronomical conference, the American Astronomical Society meeting 223, in Washington, DC. We conducted an informal survey, yielding over 300 responses by volunteers at the meeting. Each response included gender data about a single talk given at the meeting, recording the gender of the speaker and all question-askers. In total, 225 individual AAS talks were sampled. We analyze basic statistical properties of this sample. We find that the gender ratio of the speakers closely matched the gender ratio of the conference attendees. The audience asked an average of 2.8 questions per talk. Talks given by women had a slightly higher number of questions asked (3.2±0.2) than talks given by men (2.6±0.1). The most significant result from this study is that while the gender ratio of speakers very closely mirrors that of conference attendees, women are under-represented in the question-asker category. We interpret this to be an age-effect, as senior scientists may be more likely to ask questions, and are more commonly men. A strong dependence on the gender of session chairs is found, whereby women ask disproportionately fewer questions in sessions chaired by men. While our results point to laudable progress in gender-balanced speaker selection, we believe future surveys of this kind would help ensure that collaboration at such meetings is as inclusive as possible.
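Statistics of the form "3.2±0.2 questions per talk" are group means with standard errors. A minimal sketch of that computation, on hypothetical per-talk question counts (not the survey's actual data):

```python
import math

def mean_and_se(xs):
    """Sample mean and standard error of the mean."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)  # sample variance
    return m, math.sqrt(var / n)

# Hypothetical per-talk question counts for two speaker groups.
questions_women = [4, 3, 2, 5, 3, 3, 4, 2]
questions_men = [2, 3, 1, 3, 2, 4, 2, 3]
mw, sw = mean_and_se(questions_women)
mm, sm = mean_and_se(questions_men)
print(round(mw, 2), round(mm, 2))  # -> 3.25 2.5
```

With the survey's 225 sampled talks the standard errors shrink accordingly, which is what makes the reported 3.2 vs 2.6 gap interpretable.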
Resumo:
Dependence clusters are (maximal) collections of mutually dependent source code entities according to some dependence relation. Their presence in software complicates many maintenance activities including testing, refactoring, and feature extraction. Despite several studies finding them common in production code, their formation, identification, and overall structure are not well understood, partly because of challenges in approximating true dependences between program entities. Previous research has considered two approximate dependence relations: a fine-grained statement-level relation using control and data dependences from a program’s System Dependence Graph and a coarser relation based on function-level control-flow reachability. In principle, the first is more expensive and more precise than the second. Using a collection of twenty programs, we present an empirical investigation of the clusters identified by these two approaches. In support of the analysis, we consider a hybrid cluster type that works at the coarser function level but is based on the higher-precision statement-level dependences. The three types of clusters are compared based on their slice sets using two clustering metrics. We also perform extensive analysis of the programs to identify linchpin functions – functions primarily responsible for holding a cluster together. Results include evidence that the less expensive, coarser approaches can often be used as effective proxies for the more expensive, finer-grained approaches. Finally, the linchpin analysis shows that linchpin functions can be effectively and automatically identified.
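Under a reachability-based dependence relation, a maximal set of mutually dependent functions is exactly a strongly connected component of the dependence graph. A minimal sketch over a hypothetical call graph (Kosaraju's two-pass algorithm):

```python
from collections import defaultdict

def dependence_clusters(edges):
    """Maximal sets of mutually dependent nodes = strongly connected
    components of the dependence graph (simple Kosaraju sketch)."""
    graph, rgraph, nodes = defaultdict(list), defaultdict(list), set()
    for a, b in edges:
        graph[a].append(b)
        rgraph[b].append(a)
        nodes |= {a, b}

    order, seen = [], set()
    def dfs1(u):                       # first pass: record finish order
        seen.add(u)
        for v in graph[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, root):                 # second pass: on the reversed graph
        comp[u] = root
        for v in rgraph[u]:
            if v not in comp:
                dfs2(v, root)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)

    clusters = defaultdict(set)
    for u, r in comp.items():
        clusters[r].add(u)
    return [c for c in clusters.values() if len(c) > 1]  # non-trivial only

# 'f', 'g', 'h' reach each other in a cycle -> one dependence cluster.
edges = [("f", "g"), ("g", "h"), ("h", "f"), ("f", "util"), ("main", "f")]
clusters = dependence_clusters(edges)
print(sorted(clusters[0]))  # -> ['f', 'g', 'h']
```

A linchpin function in this picture is one whose removal (deleting its edges) splits such a component apart, which is why the paper searches for them automatically.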
Resumo:
This study was conducted to measure the degree of adherence by public health care providers to a policy that requires them to implement minimal contact intervention for tobacco cessation with their clients. This study also described what components of the intervention may have contributed to adherence to the policy and how health care providers felt about adhering to it. The intervention consisted of a policy for implementation of minimal contact intervention, changes to documentation, a trained health care provider mentor, a training session for health care providers, and ongoing paper and people supports for implementation. Data for this study were collected through a health care provider questionnaire, focus group interviews, and a compliance protocol including a chart audit. The findings of this study showed a high degree of adherence to the policy, that health care providers thought minimal contact intervention was important to conduct with their clients, and that health care providers felt supported to implement the intervention. No statistically significant difference was found between new and experienced health care providers on 17 of the 18 questions on the health care provider questionnaire. However, there was a statistically significant difference between new and experienced health care providers with respect to their perception that “clients often feel like they have to accept tobacco cessation information from me.” Changes could be made to the minimal contact intervention and to documentation of the intervention. Implications for future research include implementation within other programs within Hamilton Public Health Services and implementation of this model within other public health units and other types of health care providers within Ontario.
Resumo:
The attached file was created with Scientific WorkPlace LaTeX.
Resumo:
The goal of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and thus to carry out richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields more precise inference than classical asymptotic theory, or makes inference possible where nuisance parameters would otherwise prevent it. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data, but without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. The three chapters analyze its validity and application. The first chapter posits a simple model with a single parameter and tackles the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual and the temporal dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity; resampling only in the temporal dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error components model, the ordinary least squares estimator, and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the temporal dimension alone is valid only for the sub-vector of parameters associated with regressors that vary only over time. Double resampling, in turn, is valid for inference on the whole parameter vector. The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to serious size distortions when evaluating the impact of public policies using panel data. One of the recommended solutions is to use the bootstrap. The double resampling method developed in this thesis corrects the test size problem and thus allows the impact of public policies to be evaluated correctly.
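The double resampling scheme central to this thesis (drawing bootstrap samples in both the individual and the temporal dimension) can be sketched as follows; the panel layout and values are illustrative only:

```python
import random

def double_resample(panel):
    """One bootstrap draw that resamples BOTH dimensions with replacement:
    individuals (rows) and time periods (columns)."""
    n, t = len(panel), len(panel[0])
    rows = [random.randrange(n) for _ in range(n)]  # resampled individuals
    cols = [random.randrange(t) for _ in range(t)]  # resampled periods
    return [[panel[i][j] for j in cols] for i in rows]

random.seed(1)
panel = [[10 * i + j for j in range(4)] for i in range(3)]  # 3 units x 4 periods
boot = double_resample(panel)
print(len(boot), len(boot[0]))  # -> 3 4
```

Resampling only rows, or only columns, gives the single-dimension schemes that the first chapter shows to fail under temporal or individual heterogeneity respectively.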
Resumo:
In this thesis we develop bootstrap methods for high-frequency financial data. The first two essays focus on bootstrap methods applied to the "pre-averaging" approach, robust to the presence of microstructure noise. Pre-averaging reduces the influence of microstructure effects before realized volatility is computed. Building on this approach to estimating integrated volatility in the presence of microstructure noise, we develop several bootstrap methods that preserve the dependence structure and the heterogeneity in the mean of the original data. The third essay develops a bootstrap method under the assumption of local Gaussianity of high-frequency financial data. The first chapter is entitled "Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns". In this chapter we propose bootstrap methods robust to the presence of microstructure noise. In particular, we focus on realized volatility computed from the pre-averaged returns proposed by Podolskij and Vetter (2009), where the pre-averaged returns are constructed on non-overlapping blocks of consecutive high-frequency returns. The non-overlapping blocks make the pre-averaged returns asymptotically independent, but possibly heteroskedastic, which motivates the application of the wild bootstrap in this context. We show the theoretical validity of the bootstrap for constructing percentile and percentile-t intervals.
Monte Carlo simulations show that the bootstrap can improve the finite-sample properties of the integrated volatility estimator relative to asymptotic results, provided the external variable is chosen appropriately. We illustrate these methods using real financial data. The second chapter is entitled "Bootstrapping pre-averaged realized volatility under market microstructure noise". In this chapter we develop a block bootstrap method based on the pre-averaging approach of Jacod et al. (2009), where the pre-averaged returns are constructed on overlapping blocks of consecutive high-frequency returns. The overlap induces strong dependence in the structure of the pre-averaged returns: they are m-dependent, with m growing more slowly than the sample size n. This motivates the application of a specific block bootstrap. We show that the block bootstrap suggested by Bühlmann and Künsch (1995) is valid only when volatility is constant, because of the heterogeneity in the mean of the squared pre-averaged returns when volatility is stochastic. We therefore propose a new bootstrap procedure that combines the wild bootstrap and the block bootstrap, such that the serial dependence of the pre-averaged returns is preserved within blocks and the homogeneity condition needed for the validity of the bootstrap is satisfied. Under conditions on the block size, we show that this method is consistent. Monte Carlo simulations show that the bootstrap improves the finite-sample properties of the integrated volatility estimator relative to asymptotic results. We illustrate this method using real financial data.
The third chapter is entitled "Bootstrapping realized covolatility measures under local Gaussianity assumption". In this chapter we show how, and to what extent, the distributions of estimators of covolatility measures can be approximated under the assumption of local Gaussianity of returns. In particular, we propose a new bootstrap method under these assumptions, focusing on realized volatility and realized beta. We show that the new bootstrap method applied to realized beta replicates the cumulants to second order, while it delivers a third-order improvement when applied to realized volatility. These results improve on existing results in this literature, notably those of Gonçalves and Meddahi (2009) and of Dovonon, Gonçalves and Meddahi (2013). Monte Carlo simulations show that the bootstrap improves the finite-sample properties of the integrated volatility estimator relative to asymptotic results and to existing bootstrap results. We illustrate this method using real financial data.
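The wild bootstrap of pre-averaged realized volatility from the first chapter can be sketched roughly as follows. The block size, data, and external variable (a two-point distribution with mean 1) are illustrative assumptions, not the thesis's exact choices:

```python
import random

def preaverage(returns, k):
    """Pre-averaged returns over non-overlapping blocks of k observations,
    damping microstructure noise before realized volatility is computed."""
    return [sum(returns[i:i + k]) / k for i in range(0, len(returns) - k + 1, k)]

def wild_bootstrap_rv(pre_returns, draws=200):
    """Wild bootstrap of pre-averaged realized volatility: each squared
    pre-averaged return is rescaled by an external variable with mean 1
    (a two-point distribution here, purely as an illustration)."""
    sq = [r * r for r in pre_returns]
    return [sum(s * random.choice((0.5, 1.5)) for s in sq) for _ in range(draws)]

random.seed(2)
rets = [random.gauss(0, 0.01) for _ in range(600)]  # synthetic intraday returns
pre = preaverage(rets, k=10)
rv = sum(r * r for r in pre)           # pre-averaged realized volatility
boot = wild_bootstrap_rv(pre)          # bootstrap distribution of the statistic
print(len(pre), len(boot))  # -> 60 200
```

Because the non-overlapping pre-averaged returns are asymptotically independent but heteroskedastic, rescaling each one independently (rather than resampling blocks) is exactly the situation the wild bootstrap is designed for; the choice of external variable governs the finite-sample accuracy, as the Monte Carlo results indicate.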
Resumo:
This paper introduces a framework for analysis of cross-sectional dependence in the idiosyncratic volatilities of assets using high frequency data. We first consider the estimation of standard measures of dependence in the idiosyncratic volatilities such as covariances and correlations. Next, we study an idiosyncratic volatility factor model, in which we decompose the co-movements in idiosyncratic volatilities into two parts: those related to factors such as the market volatility, and the residual co-movements. When using high frequency data, naive estimators of all of the above measures are biased due to the estimation errors in idiosyncratic volatility. We provide bias-corrected estimators and establish their asymptotic properties. We apply our estimators to high-frequency data on 27 individual stocks from nine different sectors, and document strong cross-sectional dependence in their idiosyncratic volatilities. We also find that on average 74% of this dependence can be explained by the market volatility.
Resumo:
Multivariate lifetime data arise in various forms, including recurrent event data when individuals are followed to observe the sequence of occurrences of a certain type of event, and correlated lifetimes when an individual is followed for the occurrence of two or more types of events, or when distinct individuals have dependent event times. In most studies there are covariates such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations, using the marginal hazard functions employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduced a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of parameter vectors. We then introduced a bivariate proportional hazards model for gap times of recurrent events in Chapter 3. The model incorporates both marginal and joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a superior concept to the hazard function. Motivated by this, in Chapter 4 we considered a new semi-parametric model, the bivariate proportional mean residual life model, to assess the relationship between mean residual life and covariates for gap times of recurrent events.
The counting process approach is used for the inference procedures for the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of censoring time. In Chapter 5, we introduced a proportional hazards model for duration times and developed inference procedures under dependent (informative) censoring. In Chapter 6, we introduced a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters were studied. The proposed models were applied to various real-life situations.
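The partial likelihood used for estimation can be illustrated in the simplest univariate Cox setting (not the bivariate vector-hazard model itself); the data and covariate values below are hypothetical:

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood with one covariate: each observed failure i
    contributes beta*x[i] minus the log of sum(exp(beta*x[j])) over the
    risk set {j : times[j] >= times[i]} (no tied failure times assumed)."""
    ll = 0.0
    for i, (ti, di) in enumerate(zip(times, events)):
        if not di:
            continue  # censored observations enter only through risk sets
        risk = sum(math.exp(beta * xj) for tj, xj in zip(times, x) if tj >= ti)
        ll += beta * x[i] - math.log(risk)
    return ll

# Tiny hypothetical sample: event indicator 1 = failure, 0 = censored.
times = [2.0, 3.0, 5.0, 7.0, 11.0]
events = [1, 1, 0, 1, 1]
x = [1.0, 0.0, 1.0, 0.0, 1.0]
print(round(cox_partial_loglik(0.0, times, events, x), 4))  # -> -3.6889
```

The baseline hazard cancels out of each ratio, which is what lets the covariate effects be estimated semi-parametrically; the bivariate models in the thesis extend this idea to a vector of hazard components.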
Resumo:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether teetotal will clearly depend on the household level variables, so we need to be able to model this dependence. The other tricky question is that with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be, for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually be for situations where any non-zero expenditure is not small. 
While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol) or missing because they occur only in random regions which may be missed in a sample (similar to the durables).