939 results for Zero-inflated models, Statistical models, Poisson, Negative binomial, Statistical methods
Abstract:
This study examines health care utilization of immigrants relative to the native-born populations aged 50 years and older in eleven European countries. Methods: We analyzed data from the Survey of Health, Ageing and Retirement in Europe (SHARE) from 2004 for a sample of 27,444 individuals in 11 European countries. Negative binomial regression was conducted to examine the difference in the number of doctor visits, visits to General Practitioners (GPs), and hospital stays between immigrants and native-born individuals. Results: We find evidence that immigrants above age 50 use health services on average more than native-born populations with the same characteristics. Our models show immigrants have between 6% and 27% more expected visits to the doctor or GP, or hospital stays, when compared to native-born populations in a number of European countries. Discussion: Elderly immigrant populations might be using health services more intensively due to cultural reasons.
Abstract:
The purpose of this comparative study is to profile second language learners by exploring the factors which have an impact on their learning. The subjects come from two different countries: one group comes from Milwaukee, US, and the other from Turku, Finland. The subjects have attended bilingual classes from elementary school to senior high school in their respective countries. In the United States, the subjects (N = 57) started in one elementary school, from where they moved on to two high schools in the district. The Finnish subjects (N = 39) attended the same school from elementary to high school. The longitudinal study was conducted during 1994-2004 and combines both qualitative and quantitative research methods. A pilot study carried out in 1990-1991 preceded the two subsequent studies that form the core material of this research. The theoretical part of the study focuses first on language policies in the United States and Finland: special emphasis is given to the history, development and current state of bilingual education, and the factors that have affected policy-making in the provision of language instruction. Current language learning theories and models form the theoretical foundation of the research, and underpin the empirical studies. Cognitively-labeled theories are at the forefront, but sociocultural theory and the ecological approach are also accounted for. The research methods consist of questionnaires, compositions and interviews. A combination of statistical methods and content analysis was used in the analysis. The attitude of the bilingual learners toward L1 and L2 was generally positive: the subjects enjoyed learning through two languages and were motivated to learn both. The knowledge of L1 and parental support, along with early literacy in L1, facilitated the learning of L2. This was particularly evident in the American subject group.
The American subjects’ L2 learning was affected by the attitudes of the learners to the L1 culture and its speakers. Furthermore, the negative attitudes taken by L1 speakers toward L2 speakers and the lack of opportunities to engage in activities in the L1 culture affected the American subjects’ learning of L2, English. The research showed that many American L2 learners were isolated from the L1 culture and were even afraid to use English in everyday communication situations. In light of the research results, a politically neutral linguistic environment, which the Finnish subjects inhabited, was seen to be more favorable for learning. The Finnish subjects were learning L2, English, in a neutral zone where their own attitudes and motivation dictated their learning. The role of L2 as a means of international communication in Finland, as opposed to a means of exercising linguistic power, provided a neutral atmosphere for learning English. In both the American and Finnish groups, the learning of other languages was facilitated when the learner had a good foundation in their L1, and the learning of L1 and L2 were in balance. Learning was also fostered when the learners drew positive experiences from their surroundings and were provided with opportunities to engage in activities where L2 was used.
Abstract:
Teaching blood pressure measurement to nursing and public health nursing students. The purpose of this two-phase study was to develop the teaching of blood pressure measurement within the nursing degree programmes of the Universities of Applied Sciences. The first, survey phase described what was taught about blood pressure measurement within nursing degree programmes and how. The second, intervention phase (2004-2005) evaluated first-year nursing and public health nursing students' knowledge and skills in blood pressure measurement, and additionally the effect of the Taitoviikko teaching method on the experimental group students' level of blood pressure measurement knowledge and skills. A further objective was to construct models for an instrument (RRmittTest) to evaluate nursing students' blood pressure measurement (2003-2009). The research data for the survey phase were collected from teachers (total sampling, N=107, response rate 77%) using a specially developed RRmittopetus questionnaire. The quasi-experimental data for the RRmittTest instrument were collected from students (purposive sampling: experimental group, n=29; control group, n=44). The RRmittTest consisted of a knowledge test (Tietotesti) and simulation-based skills tests (TaitoSimkäsi and Taitovideo). Measurements were made immediately after the teaching and again during clinical practice. Statistical methods were used to analyse the results, and responses to open-ended questions were organised and classified. Owing to the small sample sizes and the results of distribution tests on the variables, mainly non-parametric analytic methods were used. The knowledge and skills teaching given to the experimental and control groups was alike and based on the results of the national survey-phase (RRmittopetus) questionnaire; the experimental group's teaching additionally included the supervised Taitoviikko teaching method.
During Taitoviikko, students studied blood pressure measurement at a municipal hospital in a real nursing environment, guided by a teacher and a clinical nursing professional. In order to evaluate both learning and teaching, the processes and components of blood pressure measurement were clearly defined as follows: the reliability of measurement instruments, activities preceding blood pressure measurement, technical execution of the measurement, recording, lifestyle guidance, and measurement at home (self-monitoring). According to the survey study, blood pressure measurement is most often taught at Universities of Applied Sciences separately as knowledge (teaching of theory, 2 hours) and skills (classroom practice, 4 hours). The teaching was implemented largely in a classroom and was based mainly on a textbook. In the intervention phase the students had good knowledge of blood pressure measurement; however, their blood pressure measurement skills were deficient, and those of the control group students in particular were highly deficient. Following clinical practice, both the experimental and control group students' knowledge of blood pressure measurement recording improved, while the experimental group's knowledge of lifestyle guidance declined. Skills did not improve in any of the components analysed: the control group's skills declined statistically significantly on the whole, whereas the experimental group showed a significant decline in only one measured component. The results describe the learning outcomes of first-year students, and no parallel conclusions should be drawn about the learning outcomes of graduating students. The results support the use and further development of the Taitoviikko teaching method. The RRmittTest developed for the study should be assessed critically; this evaluation tool needs to be developed further and retested.
Abstract:
This paper proposes finite-sample procedures for testing the SURE specification in multi-equation regression models, i.e. whether the disturbances in different equations are contemporaneously uncorrelated or not. We apply the technique of Monte Carlo (MC) tests [Dwass (1957), Barnard (1963)] to obtain exact tests based on standard LR and LM zero correlation tests. We also suggest a MC quasi-LR (QLR) test based on feasible generalized least squares (FGLS). We show that the latter statistics are pivotal under the null, which provides the justification for applying MC tests. Furthermore, we extend the exact independence test proposed by Harvey and Phillips (1982) to the multi-equation framework. Specifically, we introduce several induced tests based on a set of simultaneous Harvey/Phillips-type tests and suggest a simulation-based solution to the associated combination problem. The properties of the proposed tests are studied in a Monte Carlo experiment which shows that standard asymptotic tests exhibit important size distortions, while MC tests achieve complete size control and display good power. Moreover, MC-QLR tests performed best in terms of power, a result of interest from the point of view of simulation-based tests. The power of the MC induced tests improves appreciably in comparison to standard Bonferroni tests and, in certain cases, outperforms the likelihood-based MC tests. The tests are applied to data used by Fischer (1993) to analyze the macroeconomic determinants of growth.
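The Monte Carlo test logic referred to above (Dwass 1957; Barnard 1963) can be sketched generically: simulate the pivotal statistic under the null and rank the observed value among the replicates. The helper below is an illustration, not the authors' code; the chi-square toy statistic merely stands in for an LM-type zero-correlation statistic.

```python
import numpy as np

def mc_pvalue(stat_obs, simulate_stat, n_rep=99, seed=0):
    """Monte Carlo test p-value: rank the observed statistic among n_rep
    replicates drawn under the null. For a pivotal statistic the test is
    exact whenever alpha * (n_rep + 1) is an integer."""
    rng = np.random.default_rng(seed)
    sims = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return (1 + int(np.sum(sims >= stat_obs))) / (n_rep + 1)

# toy stand-in: an LM-type zero-correlation statistic n*r^2 is
# chi-square(1) under the null of uncorrelated disturbances
p = mc_pvalue(30.0, lambda rng: rng.standard_normal() ** 2)
```

With 99 replicates the attainable p-values are multiples of 1/100, so a 5% test is exact when the statistic is pivotal, which is the property the paper establishes for its LR, LM, and QLR statistics.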
Abstract:
Recent work shows that a low correlation between the instruments and the included variables leads to serious inference problems. We extend the local-to-zero analysis of models with weak instruments to models with estimated instruments and regressors and with higher-order dependence between instruments and disturbances. This makes this framework applicable to linear models with expectation variables that are estimated non-parametrically. Two examples of such models are the risk-return trade-off in finance and the impact of inflation uncertainty on real economic activity. Results show that inference based on Lagrange Multiplier (LM) tests is more robust to weak instruments than Wald-based inference. Using LM confidence intervals leads us to conclude that no statistically significant risk premium is present in returns on the S&P 500 index, excess holding yields between 6-month and 3-month Treasury bills, or in yen-dollar spot returns.
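A minimal simulation, assuming a just-identified linear IV model with an invented weak-instrument design (first-stage coefficient pi near zero, strong endogeneity rho), illustrates the contrast drawn above: Wald-type inference on the 2SLS estimate loses size control, while an Anderson-Rubin/LM-style test of the hypothesized coefficient does not depend on instrument strength.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 200, 2000
pi, rho = 0.02, 0.95          # nearly irrelevant instrument, strong endogeneity
rej_wald = rej_ar = 0
for _ in range(reps):
    z = rng.standard_normal(n)                       # instrument
    u = rng.standard_normal(n)                       # structural error
    v = rho * u + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    x = pi * z + v                                   # endogenous regressor, weak first stage
    y = u                                            # true beta = 0
    # just-identified IV (2SLS) estimate and its conventional Wald t-test
    b = (z @ y) / (z @ x)
    e = y - b * x
    se = np.sqrt((e @ e) / (n - 1)) * np.sqrt(z @ z) / abs(z @ x)
    rej_wald += abs(b / se) > 1.96
    # Anderson-Rubin-type test of beta = 0: regress y - 0*x on z and test
    # the slope; its null distribution is free of the first-stage strength
    g = (z @ y) / (z @ z)
    r = y - g * z
    se_g = np.sqrt((r @ r) / (n - 1)) / np.sqrt(z @ z)
    rej_ar += abs(g / se_g) > 1.96
size_wald, size_ar = rej_wald / reps, rej_ar / reps
# size_ar stays near the nominal 5%; size_wald is badly distorted
```

Inverting the robust test, rather than forming a Wald interval, is what yields the valid confidence intervals of the kind used in the abstract's empirical applications.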
Abstract:
We discuss statistical inference problems associated with identification and testability in econometrics, and we emphasize the common nature of the two issues. After reviewing the relevant statistical notions, we consider in turn inference in nonparametric models and recent developments on weakly identified models (or weak instruments). We point out that many hypotheses, for which test procedures are commonly proposed, are not testable at all, while some frequently used econometric methods are fundamentally inappropriate for the models considered. Such situations lead to ill-defined statistical problems and are often associated with a misguided use of asymptotic distributional results. Concerning nonparametric hypotheses, we discuss three basic problems for which such difficulties occur: (1) testing a mean (or a moment) under (too) weak distributional assumptions; (2) inference under heteroskedasticity of unknown form; (3) inference in dynamic models with an unlimited number of parameters. Concerning weakly identified models, we stress that valid inference should be based on proper pivotal functions —a condition not satisfied by standard Wald-type methods based on standard errors — and we discuss recent developments in this field, mainly from the viewpoint of building valid tests and confidence sets. The techniques discussed include alternative proposed statistics, bounds, projection, split-sampling, conditioning, Monte Carlo tests. The possibility of deriving a finite-sample distributional theory, robustness to the presence of weak instruments, and robustness to the specification of a model for endogenous explanatory variables are stressed as important criteria assessing alternative procedures.
Abstract:
This thesis presents methods for handling count data in particular and discrete data in general. It is part of an NSERC strategic project, named CC-Bio, whose objective is to assess the impact of climate change on the distribution of animal and plant species. After a brief introduction to the notions of biogeography and to generalized linear mixed models in chapters 1 and 2 respectively, the thesis is organized around three major ideas. First, in chapter 3 we introduce a new family of distributions whose components have Poisson or Skellam marginal distributions. This new specification makes it possible to incorporate relevant information about the nature of the correlations among all the components, and we present some properties of the distribution. Unlike the multivariate Poisson distribution it generalizes, it can handle variables with positive and/or negative correlations. A simulation illustrates the estimation methods in the bivariate case. The results obtained with Bayesian Markov chain Monte Carlo (MCMC) methods indicate a fairly small relative bias, below 5%, for the regression coefficients of the means, in contrast to those of the covariance term, which appear somewhat more volatile. Second, chapter 4 presents an extension of multivariate Poisson regression with gamma-distributed random effects. Aware that species abundance data exhibit strong overdispersion, which would make the resulting estimators and standard deviations misleading, we favour an approach based on Monte Carlo integration via importance sampling. The approach remains the same as in the previous chapter: the idea is to simulate independent latent variables so as to return to the setting of a conventional generalized linear mixed model (GLMM) with gamma-distributed random effects. Even though the assumption of a priori knowledge of the dispersion parameters may seem too strong, a sensitivity analysis based on goodness of fit demonstrates the robustness of our method. Third, in the last chapter, we address the definition and construction of a concordance, and hence correlation, measure for zero-inflated data via Gaussian copula modelling. Unlike Kendall's tau, whose values lie in an interval whose bounds vary with the frequency of ties between pairs, this measure has the advantage of taking its values on (-1, 1). Initially introduced to model correlations between continuous variables, its extension to the discrete case entails certain restrictions: the new measure can be interpreted as the correlation between the continuous random variables whose discretization constitutes our non-negative discrete observations. Two estimation methods for zero-inflated models are presented in the frequentist and Bayesian frameworks, based respectively on maximum likelihood and Gauss-Hermite integration. Finally, a simulation study shows the robustness and the limits of our approach.
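The classical multivariate Poisson construction that the thesis generalizes can be sketched via trivariate reduction: a shared Poisson shock induces dependence while the marginals stay Poisson, but the resulting correlation can only be non-negative, which is exactly the limitation the Poisson/Skellam specification removes. The rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.poisson(1.0, n)          # shared Poisson shock
x = rng.poisson(2.0, n) + z      # marginal Poisson with mean 3
y = rng.poisson(1.5, n) + z      # marginal Poisson with mean 2.5
# cov(x, y) = var(z) = 1, so the correlation is 1/sqrt(3 * 2.5) ≈ 0.365
corr = float(np.corrcoef(x, y)[0, 1])
```

Replacing a Poisson margin by a Skellam one (a difference of two Poissons) is what opens the door to negative correlations in the thesis's construction.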
Abstract:
La surveillance de l’influenza s’appuie sur un large spectre de données, dont les données de surveillance syndromique provenant des salles d’urgences. De plus en plus de variables sont enregistrées dans les dossiers électroniques des urgences et mises à la disposition des équipes de surveillance. L’objectif principal de ce mémoire est d’évaluer l’utilité potentielle de l’âge, de la catégorie de triage et de l’orientation au départ de l’urgence pour améliorer la surveillance de la morbidité liée aux cas sévères d’influenza. Les données d’un sous-ensemble des hôpitaux de Montréal ont été utilisées, d’avril 2006 à janvier 2011. Les hospitalisations avec diagnostic de pneumonie ou influenza ont été utilisées comme mesure de la morbidité liée aux cas sévères d’influenza, et ont été modélisées par régression binomiale négative, en tenant compte des tendances séculaires et saisonnières. En comparaison avec les visites avec syndrome d’allure grippale (SAG) totales, les visites avec SAG stratifiées par âge, par catégorie de triage et par orientation de départ ont amélioré le modèle prédictif des hospitalisations avec pneumonie ou influenza. Avant d’intégrer ces variables dans le système de surveillance de Montréal, des étapes additionnelles sont suggérées, incluant l’optimisation de la définition du syndrome d’allure grippale à utiliser, la confirmation de la valeur de ces prédicteurs avec de nouvelles données et l’évaluation de leur utilité pratique.
Abstract:
In everyday life, different flows of customers arriving to avail themselves of some service at a service station are experienced. In some of these situations, congestion of items arriving for service is unavoidable because an item cannot be serviced immediately on arrival. A queuing system can be described as customers arriving for service, waiting for service if it is not immediate, and, having waited for service, leaving the system after being served. Examples include shoppers waiting in front of checkout stands in a supermarket, programs waiting to be processed by a digital computer, ships in the harbor waiting to be unloaded, and persons waiting at a railway booking office. A queuing system is specified completely by the following characteristics: input or arrival pattern, service pattern, number of service channels, system capacity, queue discipline, and number of service stages. The ultimate objective of solving queuing models is to determine the characteristics that measure the performance of the system.
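For the simplest single-channel case (M/M/1: Poisson arrivals at rate lam, exponential service at rate mu, one server, infinite capacity, first-come first-served), the performance measures mentioned above reduce to closed-form expressions:

```python
def mm1_measures(lam, mu):
    """Steady-state M/M/1 measures; requires lam < mu (utilisation rho < 1)."""
    rho = lam / mu                 # server utilisation
    return {
        "rho": rho,
        "L": rho / (1 - rho),      # mean number in system (Little's law: L = lam * W)
        "W": 1 / (mu - lam),       # mean time in system
        "Lq": rho**2 / (1 - rho),  # mean number waiting in queue
        "Wq": rho / (mu - lam),    # mean waiting time in queue
    }
```

For example, arrivals at 2 per hour served at 4 per hour give utilisation 0.5, one customer in the system on average, and a mean wait of a quarter hour before service begins.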
Abstract:
In this article we compare regression models obtained to predict PhD students’ academic performance in the universities of Girona (Spain) and Slovenia. Explanatory variables are characteristics of the PhD student’s research group understood as an egocentered social network, background and attitudinal characteristics of the PhD students, and some characteristics of the supervisors. Academic performance was measured by the weighted number of publications. Two web questionnaires were designed, one for PhD students and one for their supervisors and other research group members. Most of the variables were easily comparable across universities due to the careful translation procedure and pre-tests. When direct comparison was not possible we created comparable indicators. We used a regression model in which the country was introduced as a dummy-coded variable including all possible interaction effects. The optimal transformations of the main and interaction variables are discussed. Some differences between the Slovenian and Girona universities emerge. Some variables, like the supervisor’s performance and motivation for autonomy prior to starting the PhD, have the same positive effect on the PhD student’s performance in both countries. On the other hand, variables like too close supervision by the supervisor and having children have a negative influence in both countries. However, we find differences between countries when we observe the motivation for research prior to starting the PhD, which increases performance in Slovenia but not in Girona. As regards network variables, frequency of supervisor advice increases performance in Slovenia and decreases it in Girona. The negative effect in Girona could be explained by the fact that additional contacts of the PhD student with his/her supervisor might indicate a higher workload in addition to, or instead of, better advice about the dissertation.
The number of the student’s external advice relationships and the mean contact intensity of social support are not significant in Girona, but they have a negative effect in Slovenia. We might explain the negative effect of external advice relationships in Slovenia by saying that a lot of external advice may actually result from a lack of the more relevant internal advice.
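The country-dummy-with-interactions specification described above can be sketched with a formula interface. The variable names and effect sizes below are invented for illustration; the point is only that interacting the country factor with every predictor yields a separate slope per country.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "country": rng.choice(["Girona", "Slovenia"], n),
    "advice_freq": rng.uniform(0, 10, n),     # hypothetical network variable
})
# opposite slopes by country, as in the abstract's supervisor-advice finding
df["pubs"] = (1.0
              + np.where(df["country"] == "Slovenia", 0.3, -0.2) * df["advice_freq"]
              + rng.normal(0, 1, n))
# country as a dummy-coded factor with a full interaction
fit = smf.ols("pubs ~ C(country) * advice_freq", data=df).fit()
```

A significant interaction coefficient is what formalizes statements such as "frequency of supervisor advice increases performance in Slovenia and decreases it in Girona."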
Abstract:
This article analyses the determinants of the presence of unwanted children in Colombia. It uses information from the National Demographic and Health Survey (ENDS, 2005), specifically for women aged 40 or older. Given the special characteristics of the variable analysed, count models are used to verify whether certain socioeconomic characteristics, such as education or economic stratum, explain the presence of unwanted children. We find that the woman's education and area of residence are significant determinants of unplanned births. Moreover, the negative relationship between the number of unwanted children and the woman's education has key implications for social policy.
Abstract:
The goal of this study is to evaluate the effect of mass lumping on the dispersion properties of four finite-element velocity/surface-elevation pairs that are used to approximate the linear shallow-water equations. For each pair, the dispersion relation, obtained using the mass lumping technique, is computed and analysed for both gravity and Rossby waves. The dispersion relations are compared with those obtained for the consistent schemes (without lumping) and the continuous case. The P0-P1, RT0 and P-P1 pairs are shown to preserve good dispersive properties when the mass matrix is lumped. Test problems to simulate fast gravity and slow Rossby waves are in good agreement with the analytical results.
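For context (standard results, not restated in the abstract), the continuous dispersion relations of the linearized rotating shallow-water equations with mean depth H, against which the lumped and consistent discrete relations are compared, are, for inertia-gravity and planetary Rossby waves respectively:

```latex
% Inertia-gravity waves (f-plane), horizontal wavenumbers (k, l):
\omega^2 = f^2 + gH\,(k^2 + l^2)

% Rossby waves (beta-plane, quasi-geostrophic limit):
\omega = -\frac{\beta k}{\,k^2 + l^2 + f^2/(gH)\,}
```

A finite-element pair preserves "good dispersive properties" when its discrete relation tracks these continuous curves without spurious branches across the resolved wavenumbers.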
Abstract:
In conventional phylogeographic studies, historical demographic processes are elucidated from the geographical distribution of individuals represented on an inferred gene tree. However, the interpretation of gene trees in this context can be difficult as the same demographic/geographical process can randomly lead to multiple different genealogies. Likewise, the same gene trees can arise under different demographic models. This problem has led to the emergence of many statistical methods for making phylogeographic inferences. A popular phylogeographic approach based on nested clade analysis is challenged by the fact that a certain amount of the interpretation of the data is left to the subjective choices of the user, and it has been argued that the method performs poorly in simulation studies. More rigorous statistical methods based on coalescence theory have been developed. However, these methods may also be challenged by computational problems or poor model choice. In this review, we will describe the development of statistical methods in phylogeographic analysis, and discuss some of the challenges facing these methods.
Abstract:
1. Closed Ecological Systems (CES) are small, man-made ecosystems which do not have any material exchange with the surrounding environment. Recent ecological and technological advances enable successful establishment and maintenance of CES, making them a suitable tool for detecting and measuring subtle feedbacks and mechanisms. 2. As part of an analogue (physical) C cycle modelling experiment, we developed a non-intrusive methodology to control the internal environment and to monitor atmospheric CO2 concentration inside 16 replicated CES. Whilst maintaining an air-tight seal of all CES, this approach allowed access to the CO2 measuring equipment for periodic re-calibration and repairs. 3. To ensure reliable cross-comparison of CO2 observations between individual CES units and to minimise the cost of the system, only one CO2 sampling unit was used. An ADC BioScientific OP-2 (open-path) analyser mounted on a swinging arm passed over a set of 16 measuring cells, each connected to an individual CES with air continuously circulating between them. 4. Using this setup, we were able to continuously measure several environmental variables and the CO2 concentration within each closed system, allowing us to study minute effects of changing temperature on C fluxes within each CES. The CES and the measuring cells showed minimal air leakage during an experimental run lasting, on average, 3 months. The CO2 analyser assembly performed reliably for over 2 years; however, an early iteration of the present design proved to be sensitive to positioning errors. 5. We indicate how the methodology can be further improved and suggest possible avenues where future CES-based research could be applied.