946 results for Linear analysis
Abstract:
BACKGROUND In contrast to objective structured clinical examinations (OSCEs), mini-clinical evaluation exercises (mini-CEXs) take place at the clinical workplace. As both mini-CEXs and OSCEs assess clinical skills, but within different contexts, this study aims to analyze the degree to which students' mini-CEX scores can be predicted by their recent OSCE scores and/or context characteristics. METHODS Medical students participated in an end-of-Year-3 OSCE and in 11 mini-CEXs during 5 different clerkships of Year 4. The students' mean scores across 9 clinical skills OSCE stations and their mean 'overall' and 'domain' mini-CEX scores, averaged over all mini-CEXs of each student, were computed. Linear regression analyses including random effects were used to predict mini-CEX scores from OSCE performance and characteristics of clinics, trainers, students and assessments. RESULTS A total of 512 trainers in 45 clinics provided 1783 mini-CEX ratings for 165 students; OSCE results were available for 144 students (87 %). Most influential for the prediction of 'overall' mini-CEX scores was the trainers' clinical position, with a regression coefficient of 0.55 (95 %-CI: 0.26-0.84; p < .001) for residents compared to heads of department. Highly complex tasks and assessments taking place in large clinics also significantly increased 'overall' mini-CEX scores. In contrast, high OSCE performance did not significantly increase 'overall' mini-CEX scores. CONCLUSION In our study, mini-CEX scores depended more on context characteristics than on students' clinical skills as demonstrated in an OSCE. Approaches are discussed that focus either on enhancing the scores' validity or on using narrative comments only.
Abstract:
The general goal of this thesis is correlating observable properties of organic and metal-organic materials with their ground-state electron density distribution. In a long-term view, we expect to develop empirical or semi-empirical approaches to predict materials properties from the electron density of their building blocks, thus allowing molecular materials to be rationally engineered from their constituent subunits, such as their functional groups. In particular, we have focused on linear optical properties of naturally occurring amino acids and their organic and metal-organic derivatives, and on magnetic properties of metal-organic frameworks. For analysing the optical properties and the magnetic behaviour of the molecular or sub-molecular building blocks in materials, we mostly used the traditional QTAIM partitioning scheme of the molecular or crystalline electron densities; however, we have also investigated a new approach, namely X-ray Constrained Extremely Localized Molecular Orbitals (XC-ELMO), which can be used in the future to extract the electron densities of crystal subunits. With the purpose of rationally engineering linear optical materials, we have calculated atomic and functional group polarizabilities of amino acid molecules, their hydrogen-bonded aggregates and their metal-organic frameworks. This has enabled the identification of the most efficient functional groups, able to build up larger electric susceptibilities in crystals, as well as the quantification of the role played by intermolecular interactions and coordinative bonds in modifying the polarizability of the isolated building blocks. Furthermore, we analysed the dependence of the polarizabilities on the one-electron basis set and the many-electron Hamiltonian. This is useful for selecting the most efficient level of theory to estimate susceptibilities of molecular-based materials.
With the purpose of rationally designing molecular magnetic materials, we have investigated the electron density distributions and the magnetism of two copper(II) pyrazine nitrate metal-organic polymers. High-resolution X-ray diffraction and DFT calculations were used to characterize the magnetic exchange pathways and to establish relationships between the electron densities and the exchange-coupling constants. Moreover, molecular orbital and spin-density analyses were employed to understand the role of different magnetic exchange mechanisms in determining the bulk magnetic behaviour of these materials. As anticipated, we have finally investigated a modified version of the X-ray constrained wavefunction technique, XC-ELMOs, which is not only a useful tool for determination and analysis of experimental electron densities, but also enables one to derive transferable molecular orbitals strictly localized on atoms, bonds or functional groups. In the future, we expect to use XC-ELMOs to predict materials properties of large systems that are currently challenging to calculate from first principles, such as macromolecules or polymers. Here, we point out advantages, needs and pitfalls of the technique. This work fulfils, at least partially, the prerequisites for understanding materials properties of organic and metal-organic materials from the perspective of the electron density distribution of their building blocks. Empirical or semi-empirical evaluation of optical or magnetic properties from a preconceived assembly of building blocks could be extremely important for rationally designing new materials, a field where accurate but expensive first-principles calculations are generally not used. This research could impact the community in the fields of crystal engineering, supramolecular chemistry and, of course, electron density analysis.
Abstract:
BACKGROUND The aim of this study was to evaluate the accuracy of linear measurements on three imaging modalities: lateral cephalograms from a cephalometric machine with a 3 m source-to-mid-sagittal-plane distance (SMD), from a machine with 1.5 m SMD and 3D models from cone-beam computed tomography (CBCT) data. METHODS Twenty-one dry human skulls were used. Lateral cephalograms were taken, using two cephalometric devices: one with a 3 m SMD and one with a 1.5 m SMD. CBCT scans were taken by 3D Accuitomo® 170, and 3D surface models were created in Maxilim® software. Thirteen linear measurements were completed twice by two observers with a 4 week interval. Direct physical measurements by a digital calliper were defined as the gold standard. Statistical analysis was performed. RESULTS Nasion-Point A was significantly different from the gold standard in all methods. More statistically significant differences were found in the measurements of the 3 m SMD cephalograms in comparison to the other methods. Intra- and inter-observer agreement based on 3D measurements was slightly better than the others. LIMITATIONS Dry human skulls without soft tissues were used. Therefore, the results have to be interpreted with caution, as they do not fully represent clinical conditions. CONCLUSIONS 3D measurements resulted in better observer agreement. The accuracy of measurements based on CBCT and 1.5 m SMD cephalograms was better than that of 3 m SMD cephalograms. These findings demonstrate the accuracy and reliability of linear measurements based on CBCT data when compared to 2D techniques. Future studies should focus on the implementation of 3D cephalometry in clinical practice.
Abstract:
BACKGROUND Non-steroidal anti-inflammatory drugs (NSAIDs) are the backbone of osteoarthritis pain management. We aimed to assess the effectiveness of different preparations and doses of NSAIDs on osteoarthritis pain in a network meta-analysis. METHODS For this network meta-analysis, we considered randomised trials comparing any of the following interventions: NSAIDs, paracetamol, or placebo, for the treatment of osteoarthritis pain. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) and the reference lists of relevant articles for trials published between Jan 1, 1980, and Feb 24, 2015, with at least 100 patients per group. The prespecified primary and secondary outcomes were pain and physical function, and were extracted in duplicate for up to seven timepoints after the start of treatment. We used an extension of multivariable Bayesian random effects models for mixed multiple treatment comparisons with a random effect at the level of trials. For the primary analysis, a random walk of first order was used to account for multiple follow-up outcome data within a trial. Preparations that used different total daily doses were considered separately in the analysis. To assess a potential dose-response relation, we used preparation-specific covariates assuming linearity on log relative dose. FINDINGS We identified 8973 manuscripts from our search, of which 74 randomised trials with a total of 58 556 patients were included in this analysis. 23 nodes concerning seven different NSAIDs or paracetamol with specific daily doses of administration, or placebo, were considered. All preparations, irrespective of dose, improved point estimates of pain symptoms when compared with placebo.
For six interventions (diclofenac 150 mg/day, etoricoxib 30 mg/day, 60 mg/day, and 90 mg/day, and rofecoxib 25 mg/day and 50 mg/day), the probability that the difference from placebo is at or below a prespecified minimum clinically important effect for pain reduction (effect size [ES] -0·37) was at least 95%. Among maximally approved daily doses, diclofenac 150 mg/day (ES -0·57, 95% credibility interval [CrI] -0·69 to -0·46) and etoricoxib 60 mg/day (ES -0·58, -0·73 to -0·43) had the highest probability of being the best intervention, both with 100% probability of reaching the minimum clinically important difference. Treatment effects increased as drug dose increased, but corresponding tests for a linear dose effect were significant only for celecoxib (p=0·030), diclofenac (p=0·031), and naproxen (p=0·026). We found no evidence that treatment effects varied over the duration of treatment. Model fit was good, and between-trial heterogeneity and inconsistency were low in all analyses. All trials were deemed to have a low risk of bias for blinding of patients. Effect estimates did not change in sensitivity analyses with two additional statistical models and accounting for methodological quality criteria in meta-regression analysis. INTERPRETATION On the basis of the available data, we see no role for single-agent paracetamol for the treatment of patients with osteoarthritis, irrespective of dose. We provide sound evidence that diclofenac 150 mg/day is the most effective NSAID available at present, in terms of improving both pain and function. Nevertheless, in view of the safety profile of these drugs, physicians need to consider our results together with all known safety information when selecting the preparation and dose for individual patients. FUNDING Swiss National Science Foundation (grant number 405340-104762) and Arco Foundation, Switzerland.
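The prespecified dose-response model, effect size linear in log relative dose, can be sketched with a simple least-squares fit. The effect sizes and doses below are illustrative stand-ins, not values from the meta-analysis.

```python
import math

# Hypothetical effect sizes (vs placebo) at three daily doses of one drug.
# The model assumes ES is linear in log relative dose (dose / max dose).
doses   = [30.0, 60.0, 90.0]        # mg/day (illustrative)
effects = [-0.40, -0.50, -0.56]     # effect sizes; more negative = more pain relief

x = [math.log(d / max(doses)) for d in doses]   # log relative dose
n = len(x)
xbar = sum(x) / n
ybar = sum(effects) / n
# Ordinary least-squares slope and intercept.
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, effects)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar     # predicted ES at the maximum dose
```

A negative slope here means the effect strengthens (grows more negative) as dose increases, which is the pattern the linear-dose tests in the paper assess.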
Abstract:
Although evidence suggests that the benefits of psychodynamic treatments are sustained over time, it is presently unclear whether these sustained benefits are superior to those of non-psychodynamic treatments. Additionally, the extant literature comparing the sustained benefits of psychodynamic treatments to those of alternative treatments is limited by methodological shortcomings. The purpose of the current study was to conduct a rigorous test of the growth of the benefits of psychodynamic treatments relative to alternative treatments across distinct domains of change (i.e., all outcome measures, targeted outcome measures, non-targeted outcome measures, and personality outcome measures). To do so, the study employed strict inclusion criteria to identify randomized clinical trials that directly compared at least one bona fide psychodynamic treatment and one bona fide non-psychodynamic treatment. Hierarchical linear modeling (Raudenbush, Bryk, Cheong, Congdon, & du Toit, 2011) was used to longitudinally model the impact of psychodynamic treatments compared to non-psychodynamic treatments at post-treatment and to compare the growth (i.e., slope) of effects beyond treatment completion. Findings from the present meta-analysis indicated that psychodynamic treatments and non-psychodynamic treatments were equally efficacious at post-treatment and at follow-up for combined outcomes (k=20), targeted outcomes (k=19), non-targeted outcomes (k=17), and personality outcomes (k=6). Clinical implications, directions for future research, and limitations are discussed.
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlation and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Secondly, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a Generalized Linear Model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg that spanned the period January 1, 2004 to August 8, 2006. The empirical examination of the regime-switching tendencies provided quantitative support to the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula function. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
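The assumed Markov dynamics of credit/liquidity regimes can be sketched with a two-state chain. The transition probabilities below are hypothetical stand-ins for values that would be estimated from the CDS data.

```python
import random

# Two-state regime-switching sketch:
# state 0 = good credit / high liquidity, state 1 = deteriorated / illiquid.
# P[i][j] = probability of moving from state i to state j (made-up values).
P = [[0.95, 0.05],
     [0.10, 0.90]]

def simulate(n_steps, start=0, seed=42):
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(n_steps - 1):
        state = 0 if rng.random() < P[state][0] else 1
        path.append(state)
    return path

path = simulate(1000)
frac_stressed = sum(path) / len(path)   # time spent in the stressed regime
```

With these probabilities the stationary fraction of time in the stressed state is 0.05 / (0.05 + 0.10) = 1/3, so a long simulated path spends roughly a third of its time there.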
Abstract:
BACKGROUND Diets that restrict carbohydrate (CHO) have proven to be a successful dietary treatment of obesity for many people, but the degree of weight loss varies across individuals. The extent to which genetic factors associate with the magnitude of weight loss induced by CHO restriction is unknown. We examined associations among polymorphisms in candidate genes and weight loss in order to understand the physiological factors influencing body weight responses to CHO restriction. METHODS We screened for genetic associations with weight loss in 86 healthy adults who were instructed to restrict CHO to a level that induced a small level of ketosis (CHO ~10% of total energy). A total of 27 single nucleotide polymorphisms (SNPs) were selected from 15 candidate genes involved in fat digestion/metabolism, intracellular glucose metabolism, lipoprotein remodeling, and appetite regulation. Multiple linear regression was used to rank the SNPs according to probability of association, and the most significant associations were analyzed in greater detail. RESULTS Mean weight loss was 6.4 kg. SNPs in the gastric lipase (LIPF), hepatic glycogen synthase (GYS2), cholesteryl ester transfer protein (CETP) and galanin (GAL) genes were significantly associated with weight loss. CONCLUSION A strong association was detected between weight loss induced by dietary CHO restriction and variability in genes regulating fat digestion, hepatic glucose metabolism, intravascular lipoprotein remodeling, and appetite. These discoveries could provide clues to important physiologic adaptations underlying the body mass response to CHO restriction.
Abstract:
Obesity prevalence among children and adolescents is rising. It is one of the most attributable causes of hospitalization and death. Overweight and obese children are more likely to suffer from associated conditions such as hypertension, dyslipidemia, chronic inflammation, increased blood clotting tendency, endothelial dysfunction, hyperinsulinemia, and asthma. These children and adolescents are also more likely to be overweight and obese in adulthood. Interestingly, rates of obesity and overweight are not evenly distributed across racial and ethnic groups. Mexican American youth have higher rates of obesity and are at higher risk of becoming obese than non-Hispanic black and non-Hispanic white children. Methods. This cross-sectional study describes the association between rates of obesity and physical activity in a sample of 1313 inner-city Mexican American children and adolescents (5-19 years of age) in Houston, Texas. This study is important because it will contribute to our understanding of childhood and adolescent obesity in this at-risk population. Data from the Mexican American Feasibility Cohort using the Mano a Mano questionnaire are used to describe this population's status of obesity and physical activity. An initial sample taken from 5000 households in inner-city Houston, Texas was used as the baseline for this prospective cohort. The questionnaire was given in person to the participants to complete (or to parents for younger children) at a home visit by two specially trained bilingual interviewers. Analysis comprised prevalence estimates of obesity represented as percentile rank (<85% = normal weight, >85% = at risk, >95% = obese) by age and gender. The association between light, moderate, and strenuous activity and obesity was also examined using linear regression. Results. Overall, 46% of this Mexican American Feasibility cohort is overweight or obese.
The prevalence for children in the 6-11 age range (53.2%) was significantly greater than that reported from NHANES 1999–2002 data (39.4%). Although the percentage of overweight and obese among the 12-19 year olds differed from that reported in NHANES (38.5% versus 38.6%), this difference was not statistically significant. A significant association between BMI and sit time and moderate physical activity (both p < 0.05) was found in this sample. For males, this association was significant for moderate physical activity (p < 0.01). For females, this association was significant for BMI and sit time (p < 0.05). These results need to be interpreted in the light of design and measurement limitations. Conclusion. This study supports observations that the inner-city Houston, Texas Mexican American child and adolescent population is more overweight and obese than nationally reported figures, and that there are positive relationships between BMI, activity levels, and sit time in this population. This study supports the need for public health initiatives within the Houston Hispanic community.
Abstract:
Background. The purpose of this study was to describe the risk factors and demographics of persons with salmonellosis and shigellosis and to investigate both seasonal and spatial variations in the occurrence of these infections in Texas from 2000 to 2004, utilizing time series analyses and geographic information system digital mapping methods. Methods. Spatial analysis: MapInfo software was used to map the distribution of age-adjusted rates of reported shigellosis and salmonellosis in Texas from 2000–2004 by zip code. Census data on above or below poverty level, household income, highest level of educational attainment, race, ethnicity, and urban/rural community status were obtained from the 2000 Decennial Census for each zip code. The zip codes in the upper 10% and lower 10% of rates were compared using t-tests and logistic regression to identify potential risk factors. Temporal analysis. Seasonal patterns in the prevalence of infections in Texas from 2000 to 2003 were determined by performing time-series analysis on the numbers of cases of salmonellosis and shigellosis. A linear regression was also performed to assess trends in the incidence of each disease, along with auto-correlation and multi-component cosinor analysis. Results. Spatial analysis: Analysis by general linear model showed a significant association between infection rates and age, with young children aged less than 5 years and those aged 5–9 years having increased risk of infection for both disease conditions. The data demonstrated that populations with high percentages of people who attained more than a high school education were less likely to be represented in zip codes with high rates of shigellosis. However, for salmonellosis, logistic regression models indicated that when compared to populations with high percentages of non-high school graduates, having a high school diploma or equivalent increased the odds of having a high rate of infection.
Temporal analysis. For shigellosis, multi-component cosinor analyses were used to determine an approximating cosine curve that provided a statistically significant representation of the time series data for all age groups by sex. The shigellosis results show 2 peaks, with a major peak occurring in June and a secondary peak appearing around October. Salmonellosis results showed a single peak and trough in all age groups, with the peak occurring in August and the trough occurring in February. Conclusion. The results from this study can be used by public health agencies to determine the timing of public health awareness programs and interventions in order to prevent salmonellosis and shigellosis from occurring. Because young children depend on adults for their meals, it is important to increase the awareness of day-care workers and new parents about modes of transmission and hygienic methods of food preparation and storage.
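Cosinor analysis fits a cosine curve to seasonal counts by rewriting it as a linear model in cos and sin terms. The sketch below uses a synthetic monthly series (not the Texas data) with its peak placed at month index 7 (August), mirroring the salmonellosis pattern.

```python
import math
import numpy as np

# Single-component cosinor: y = M + A*cos(2*pi*t/12 + phi), fitted linearly
# via y = M + b1*cos(theta) + b2*sin(theta) with theta = 2*pi*t/12.
t = np.arange(12)                                  # months, Jan = 0
y = 100 + 30 * np.cos(2 * np.pi * (t - 7) / 12)    # synthetic counts, peak at t = 7

theta = 2 * np.pi * t / 12
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

amplitude = math.hypot(b1, b2)          # A
acrophase = math.atan2(-b2, b1)         # phi in y = M + A*cos(theta + phi)
peak_month = (-acrophase * 12 / (2 * math.pi)) % 12
```

Because the model is linear in (mesor, b1, b2), ordinary least squares recovers the mesor (rhythm-adjusted mean), amplitude, and acrophase exactly for this noiseless series; the peak month comes back as 7.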
Abstract:
Introduction and objective. A number of prognostic factors have been reported for predicting survival in patients with renal cell carcinoma. Yet few studies have analyzed the effects of those factors at different stages of the disease process. In this study, different stages of disease progression, from nephrectomy to metastasis, from metastasis to death, and from evaluation to death, were evaluated. Methods. In this retrospective follow-up study, records of 97 deceased renal cell carcinoma (RCC) patients were reviewed between September 2006 and October 2006. Patients with TNM Stage IV disease before nephrectomy or with cancer diagnoses other than RCC were excluded, leaving 64 records for analysis. Patient TNM staging, Fuhrman grade, age, tumor size, tumor volume, histology and patient gender were analyzed in relation to time to metastasis. Time from nephrectomy to metastasis, TNM staging, Fuhrman grade, age, tumor size, tumor volume, histology and patient gender were tested for significance in relation to time from metastasis to death. Finally, laboratory values at time of evaluation, Eastern Cooperative Oncology Group (ECOG) performance status, UCLA Integrated Staging System (UISS), time from nephrectomy to metastasis, TNM staging, Fuhrman grade, age, tumor size, tumor volume, histology and patient gender were tested for significance in relation to time from evaluation to death. Linear regression and Cox proportional hazards models (univariate and multivariate) were used to test significance. The Kaplan-Meier log-rank test was used to detect any significant differences between groups at various endpoints. Results. Compared to negative lymph nodes at time of nephrectomy, a single positive lymph node was associated with significantly shorter time to metastasis (p<0.0001). Compared to other histological types, clear cell histology was significantly associated with metastasis-free survival (p=0.003).
Clear cell histology compared to other types (p=0.0002 univariate, p=0.038 multivariate) and time to metastasis with log conversion (p=0.028) significantly affected time from metastasis to death. Metastasis-free intervals of greater than one year and greater than two years were associated with a statistically significant survival benefit compared with metastasis occurring before one and two years, respectively (p=0.004 and p=0.0318). Time from evaluation to death was affected by a greater than one year metastasis-free interval (p=0.0459), alcohol consumption (p=0.044), LDH (p=0.006), ECOG performance status (p<0.001), and hemoglobin level (p=0.0092). The UISS risk-stratified the patient population in a statistically significant manner for survival (p=0.001). No other factors were found to be significant. Conclusion. Clear cell histology is predictive of both time to metastasis and time from metastasis to death. Nodal status at time of nephrectomy may predict risk of metastasis. The time interval to metastasis significantly predicts time from metastasis to death and time from evaluation to death. ECOG performance status and hemoglobin level predict survival outcome at evaluation. Finally, UISS appropriately stratifies risk in our population.
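The Kaplan-Meier survival estimates behind such log-rank comparisons can be sketched in a few lines of pure Python. The times (months from metastasis to death) and censoring flags below are invented for illustration.

```python
# Minimal Kaplan-Meier sketch: event = 1 is a death, event = 0 is censored.
def kaplan_meier(times, events):
    """Return [(time, survival probability)] at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        ties = sum(1 for tt, e in data if tt == t)
        if deaths:
            surv *= (n_at_risk - deaths) / n_at_risk   # product-limit step
            curve.append((t, surv))
        n_at_risk -= ties   # everyone at this time leaves the risk set
        i += ties
    return curve

times  = [3, 5, 5, 8, 12, 16, 16, 20]   # illustrative months
events = [1, 1, 0, 1,  0,  1,  1,  0]
curve = kaplan_meier(times, events)
```

For this toy data the curve steps down at months 3, 5, 8 and 16, with censored observations reducing the risk set without moving the curve.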
Abstract:
In the United States, “binge” drinking among college students is an emerging public health concern due to the significant physical and psychological effects on young adults. The focus is on identifying interventions that can help decrease high-risk drinking behavior among this group of drinkers. One such intervention is motivational interviewing (MI), a client-centered therapy that aims to resolve client ambivalence by developing discrepancy and engaging the client in change talk. Of late, there is growing interest in determining the active ingredients that influence the alliance between the therapist and the client. This study is a secondary analysis of the data obtained from the Southern Methodist Alcohol Research Trial (SMART) project, a dismantling trial of MI and feedback among heavy-drinking college students. The present project examines the relationship between therapist and client language in MI sessions in a sample of “binge” drinking college students. Of the 126 SMART tapes, 30 tapes (‘MI with feedback’ group = 15, ‘MI only’ group = 15) were randomly selected for this study. MISC 2.1, a mutually exclusive and exhaustive coding system, was used to code the audio/videotaped MI sessions. Therapist and client language were analyzed for communication characteristics. Overall, therapists adopted an MI-consistent style and clients were found to engage in change talk. Counselor acceptance, empathy, spirit, and complex reflections were all significantly related to client change talk (p-values ranged from 0.001 to 0.047). Additionally, therapist ‘advice without permission’ and MI-inconsistent therapist behaviors were strongly correlated with client sustain talk (p-values ranged from 0.006 to 0.048).
Simple linear regression models showed a significant correlation between MI-consistent (MICO) therapist language (independent variable) and change talk (dependent variable), and between MI-inconsistent (MIIN) therapist language (independent variable) and sustain talk (dependent variable). The study has several limitations, such as small sample size, self-selection bias, poor inter-rater reliability for the global scales, and the lack of a temporal measure of therapist and client language. Future studies might consider a larger sample size to obtain more statistical power. In addition, the correlation between therapist language, client language, and drinking outcomes needs to be explored.
Abstract:
Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions in the entire genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited for detecting the association of common variants, but are less suitable for rare variants. This raises great challenges for sequence-based genetic studies of complex diseases. This research dissertation utilized the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, for developing novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, finally leading to a shift of the paradigm of association analysis from the current locus-by-locus analysis to collectively analyzing genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data reduction techniques were used to develop novel and powerful methods for testing the associations of the entire spectrum of genetic variation within a segment of genome or a gene, regardless of whether the variants are common or rare. Classical quantitative genetics methods suffer from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their applications, the functional linear models were applied to five quantitative traits in the Framingham Heart Study.
This project proposed a novel concept of gene-gene co-association, in which a gene or a genomic region is taken as a unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, which led to the discovery of networks significantly associated with psoriasis.
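The dimension-reduction step behind the functional principal component approach can be sketched with an ordinary PCA of a genotype matrix. The data are random toy counts; a real FPC analysis would first smooth each individual's variant profile into a function over the genomic region.

```python
import numpy as np

# Toy sketch: 50 individuals, 12 variant sites; entries are 0/1/2 allele
# counts drawn at random (illustrative only).
rng = np.random.default_rng(1)
genotypes = rng.integers(0, 3, size=(50, 12)).astype(float)

# Center per site, then take the SVD; right singular vectors play the role
# of principal component loadings across the region.
centered = genotypes - genotypes.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

scores = U[:, :2] * s[:2]            # per-individual scores on the top 2 PCs
explained = (s ** 2) / (s ** 2).sum()  # variance explained by each component
```

The low-dimensional scores, rather than the raw per-variant genotypes, become the covariates tested for association, which is what lets common and rare variants in a region be analyzed collectively.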
Abstract:
Interaction effects are an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is through a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been utilized to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and statistical validation has focused only on the general method, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on the mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists, because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when data come from a trivariate normal distribution.
The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
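The cross-product construction criticized and extended above, adding an age×weight column to the design matrix, can be sketched on synthetic noiseless data with a known interaction coefficient of 0.5 (all values below are invented).

```python
import numpy as np

# Synthetic covariates roughly in the ranges of the stroke-trial example.
rng = np.random.default_rng(0)
age = rng.uniform(40, 80, 200)
weight = rng.uniform(50, 110, 200)
# True model: y = 2 + 0.3*age - 0.1*weight + 0.5*age*weight (no noise).
y = 2.0 + 0.3 * age - 0.1 * weight + 0.5 * age * weight

# Design matrix with the cross-product (interaction) column.
X = np.column_stack([np.ones_like(age), age, weight, age * weight])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction = coef[3]
```

Because the data are noiseless, least squares recovers the interaction coefficient exactly; with real data, it is precisely the distributional assumptions behind this fit that the dissertation argues can fail.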
A descriptive and exploratory analysis of occupational injuries at a chemical manufacturing facility
Abstract:
A retrospective study of 1353 occupational injuries occurring at a chemical manufacturing facility in Houston, Texas from January 1982 through May 1988 was performed to investigate the etiology of the occupational injury process. Injury incidence rates were calculated for various sub-populations of workers to determine differences in the risk of injury for various groups. Linear modeling techniques were used to determine the association between certain collected independent variables and the severity of an injury event. Finally, two sub-groups of the worker population, shiftworkers and injury recidivists, were examined. An injury recidivist is defined as any worker experiencing one or more injuries per year. Overall, female shiftworkers evidenced the highest average injury incidence rate compared to all other worker groups analyzed. Although the female shiftworkers were younger and less experienced, the etiology of their increased risk of injury remains unclear, although the rigors of performing shiftwork itself or ergonomic factors are suspect. In general, females were injured more frequently than males, but they did not incur more severe injuries. For all workers, many injuries were caused by erroneous or forgone training and by risk-taking behaviors. Injuries of these types are avoidable. The distribution of injuries by severity level was bimodal; injuries were of either minor or major severity, with only a small number of cases falling in between. Of the variables collected, only the type of injury incurred and the worker's titlecode were statistically significantly associated with injury severity. Shiftworkers did not sustain more severe injuries than other worker groups. Injury to shiftworkers followed a 24-hour pattern; the greatest number occurred between 1200-1230 hours (p = 0.002) by cosinor analysis. Recidivists made up 3.3% of the population (23 males and 10 females), yet suffered 17.8% of the injuries.
Although past research suggests that injury recidivism is a random statistical event, analysis of the data by logistic regression implicates gender, area worked, age and job titlecode as being statistically significantly related to injury recidivism at this facility.
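The Cosinor analysis cited above fits a single cosine with a 24-hour period to time-stamped counts and tests for a significant rhythm. A minimal sketch of the fit itself, via the standard linearization y = M + b·cos(ωt) + g·sin(ωt); the hourly counts below are synthetic (the study's data are not reproduced), and the significance test is omitted:

```python
import numpy as np

def cosinor_fit(hours, counts, period=24.0):
    """Least-squares fit of counts ~ M + A*cos(2*pi*t/period + phi),
    linearized as M + b*cos(w*t) + g*sin(w*t)."""
    t = np.asarray(hours, dtype=float)
    y = np.asarray(counts, dtype=float)
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    (mesor, b, g), *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(b, g)          # A = sqrt(b^2 + g^2)
    acrophase = np.arctan2(-g, b)       # phi, phase of the fitted cosine
    return mesor, amplitude, acrophase

# Synthetic hourly injury counts peaking around noon
rng = np.random.default_rng(0)
hours = np.arange(24.0)
counts = 5 + 2 * np.cos(2 * np.pi * (hours - 12) / 24) + rng.normal(0, 0.3, 24)
mesor, amp, phi = cosinor_fit(hours, counts)
peak_hour = (-phi / (2 * np.pi) * 24) % 24  # hour at which the fitted cosine peaks
```

With a true peak at hour 12 built into the synthetic series, `peak_hour` recovers a value near 12; in a real application the fitted amplitude would be tested against zero (e.g. by an F-test) before interpreting the rhythm.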
Resumo:
A discussion of nonlinear dynamics, illustrated by the familiar automobile, is followed by the development of a systematic method for analyzing a possibly nonlinear time series using difference equations in the general state-space format. This format allows recursive state-dependent parameter estimation after each observation, thereby revealing the dynamics inherent in the system in combination with random external perturbations.

The one-step-ahead prediction errors at each time period, transformed to have constant variance, and the estimated parameter sequences provide the information to (1) formally test whether the time series observations y_t are some linear function of random errors e_s, for some t and s, or whether the series would more appropriately be described by a nonlinear model such as a bilinear, exponential, or threshold model; (2) formally test whether a statistically significant change in structure or level has occurred, either historically or as it occurs; (3) forecast a nonlinear system with a new and innovative (but very old numerical) technique that uses rational functions to extrapolate individual parameters as smooth functions of time, which are then combined to obtain the forecast of y; and (4) suggest a measure of resilience, i.e. how much perturbation, internal or external to the system, a structure or level can tolerate and remain statistically unchanged. Although similar to one-step control, this provides a less rigid way to think about changes affecting social systems.

Applications consisting of the analysis of some familiar and some simulated series demonstrate the procedure. Empirical results suggest that this state-space, or modified augmented Kalman filter, approach may provide interesting ways to identify particular kinds of nonlinearities as they occur in structural change via the state trajectory.

A computational flow-chart detailing the computations and software input and output is provided in the body of the text.
IBM Advanced BASIC program listings to accomplish most of the analysis are provided in the appendix.
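The recursive state-dependent parameter estimation described above can be sketched, under simplifying assumptions, as a scalar Kalman filter tracking a time-varying AR(1) coefficient: the state is the coefficient itself, modeled as a random walk, and the lagged observation plays the role of the observation matrix. The model, the noise variances q and r, and the simulated drift are illustrative choices, not the thesis's actual specification:

```python
import numpy as np

def recursive_ar1(y, q=1e-3, r=1.0):
    """Track a time-varying AR(1) coefficient a_t with a scalar Kalman filter.
    State equation: a_t = a_{t-1} + w_t (variance q).
    Observation:    y_t = a_t * y_{t-1} + v_t (variance r).
    Returns the coefficient path and standardized one-step-ahead prediction errors."""
    a, P = 0.0, 1.0                 # initial state estimate and its variance
    coeffs, errors = [], []
    for t in range(1, len(y)):
        P = P + q                   # time update for the random-walk state
        h = y[t - 1]                # observation "matrix" is the lagged value
        S = h * P * h + r           # innovation variance
        e = y[t] - h * a            # one-step-ahead prediction error
        K = P * h / S               # Kalman gain
        a = a + K * e               # measurement update
        P = (1 - K * h) * P
        coeffs.append(a)
        errors.append(e / np.sqrt(S))   # transformed to constant (unit) variance
    return np.array(coeffs), np.array(errors)

# Simulated AR(1) whose coefficient drifts from 0.3 to 0.8
rng = np.random.default_rng(1)
n = 500
a_true = np.linspace(0.3, 0.8, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true[t] * y[t - 1] + rng.normal()
coeffs, errs = recursive_ar1(y)
```

The rising coefficient path illustrates point (2) of the abstract, a change in structure revealed as it occurs, while the standardized errors are the constant-variance prediction errors used in the formal tests of points (1) and (2).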