Abstract:
Carbon dioxide (CO2) is a natural by-product of cellular metabolism, the third most abundant substance in the blood, and an important vasoactive agent. With even the slightest variation in blood CO2 content, cerebrovascular resistance and cerebral tissue perfusion undergo global changes. Although the exact mechanisms underlying this effect remain to be elucidated, the phenomenon has been widely exploited in studies of cerebrovascular reactivity (CVR). A promising avenue for assessing cerebrovascular function is non-invasive CVR mapping using functional Magnetic Resonance Imaging (fMRI). Quantitative, non-invasive CVR measurements can be obtained with various techniques, such as manipulation of arterial CO2 content (PaCO2) combined with Arterial Spin Labeling (ASL), which measures the changes in cerebral perfusion elicited by vascular stimuli. However, concerns about the sensitivity and reliability of CVR measurements currently limit the wider adoption of these modern fMRI methods. I considered that a thorough analysis and improvement of the available methods could make a valuable contribution to the field of biomedical engineering and help advance the development of new diagnostic imaging tools. In this thesis I present a series of studies in which I examine the impact of alternative vascular stimulation/imaging methods on CVR measurements, along with ways to improve the sensitivity and reliability of such methods.
I have also included in this thesis a theoretical manuscript in which I examine the possible contribution of an overlooked factor in the CVR phenomenon: variations in blood osmotic pressure induced by the products of CO2 dissolution. Besides the general introduction (Chapter 1) and the conclusions (Chapter 6), this thesis comprises four other chapters, across which five different studies are presented in the form of scientific articles accepted for publication in various scientific journals. Each chapter opens with its own introduction, consisting of a more detailed description of the context motivating the associated manuscript(s) and a brief summary of the reported results. A detailed account of the methods and results can be found in the manuscript(s) themselves. In the study that makes up Chapter 2, I compare the sensitivity of two state-of-the-art ASL techniques and demonstrate that the latest implementation of continuous ASL, pCASL, provides more robust CVR measurements than older pulsed methods. In Chapter 3, I compare CVR measurements obtained with pCASL using four different respiratory methods of manipulating arterial CO2 (PaCO2) and show that the results can vary significantly when the manipulations are not designed to operate within the linear range of the CO2 dose-response curve. Chapter 4 comprises two complementary studies aimed at determining the level of reproducibility achievable with newer CVR measurement methods. The first study led to the technical development of a device that enables simple, safe, and robust respiratory CO2 manipulations.
The improved respiratory method was used in the second, neuroimaging, study, in which the sensitivity and reproducibility of CVR measured with pCASL were examined. The pCASL imaging technique detected CO2-induced perfusion responses in roughly 90% of the human cerebral cortex, and the reproducibility of these measurements was comparable to that of other hemodynamic measures already adopted in clinical practice. Finally, in Chapter 5, I present a mathematical model that describes CVR in terms of blood-osmolarity-related changes in PaCO2. The responses predicted by this model closely match the hemodynamic changes measured with pCASL, suggesting an additional, CO2-related contribution to cerebrovascular reactivity.
Abstract:
This paper presents a method based on articulated models for the registration of spine data extracted from multimodal medical images of patients with scoliosis. With the ultimate aim being the development of a complete geometrical model of the torso of a scoliotic patient, this work presents a method for the registration of vertebral column data using 3D magnetic resonance images (MRI) acquired in prone position and X-ray data acquired in standing position for five patients with scoliosis. The 3D shape of the vertebrae is estimated from both image modalities for each patient, and an articulated model is used to calculate the intervertebral transformations required to align the vertebrae between both postures. Euclidean distances between anatomical landmarks are calculated in order to assess multimodal registration error. Results show a decrease in the Euclidean distance using the proposed method compared to rigid registration, and more physically realistic vertebral deformations compared to thin-plate-spline (TPS) registration, thus improving alignment.
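The registration error described above is the mean Euclidean distance between corresponding anatomical landmarks in the two modalities. A minimal sketch of that computation (the landmark coordinates below are hypothetical, not from the study):

```python
import math

def mean_landmark_error(landmarks_a, landmarks_b):
    """Mean Euclidean distance between corresponding 3D landmarks."""
    assert len(landmarks_a) == len(landmarks_b)
    return sum(math.dist(p, q) for p, q in zip(landmarks_a, landmarks_b)) / len(landmarks_a)

# Hypothetical vertebral landmarks from MRI (prone) and X-ray (standing), in mm
mri = [(10.0, 20.0, 30.0), (12.0, 22.0, 35.0), (14.0, 24.0, 40.0)]
xray = [(10.5, 20.5, 30.0), (12.0, 21.0, 35.5), (13.0, 24.0, 41.0)]

print(round(mean_landmark_error(mri, xray), 3))
```

Lower values after applying the articulated transformations would indicate better multimodal alignment.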
Abstract:
One of the major concerns of scoliosis patients undergoing surgical treatment is the aesthetic aspect of the surgical outcome. It would be useful to predict the postoperative appearance of the patient's trunk during surgery planning in order to take the patient's expectations into account. In this paper, we propose to use least squares support vector regression for the prediction of the postoperative 3D trunk shape after spine surgery for adolescent idiopathic scoliosis. Five dimensionality reduction techniques used in conjunction with the support vector machine are compared. The methods are evaluated in terms of their accuracy, based on leave-one-out cross-validation performed on a database of 141 cases. The results indicate that 3D shape predictions using a dimensionality reduction obtained by simultaneous decomposition of the predictor and response variables have the best accuracy.
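The leave-one-out scheme used for the evaluation can be sketched as follows. This is only an illustration of the cross-validation loop: a one-nearest-neighbour regressor stands in for the least squares support vector machine, and the features and targets are made up:

```python
import math

def one_nn_predict(train_x, train_y, query):
    """Predict with the single nearest training neighbour (stand-in for LS-SVR)."""
    best = min(range(len(train_x)), key=lambda i: math.dist(train_x[i], query))
    return train_y[best]

def loocv_error(xs, ys, predict):
    """Leave-one-out cross-validation: hold out each case, train on the rest."""
    errors = []
    for i in range(len(xs)):
        train_x = xs[:i] + xs[i + 1:]
        train_y = ys[:i] + ys[i + 1:]
        errors.append(abs(predict(train_x, train_y, xs[i]) - ys[i]))
    return sum(errors) / len(errors)

# Hypothetical preoperative features -> postoperative shape score
xs = [(0.0, 1.0), (1.0, 1.0), (2.0, 2.0), (3.0, 2.5), (4.0, 4.0)]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]

print(round(loocv_error(xs, ys, one_nn_predict), 3))
```

With 141 cases, the same loop runs 141 times, each time training on 140 cases and testing on the held-out one.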
Abstract:
This research was undertaken with the primary objective of explaining differences in consumption of personal care products using personality variables. Several streams of research were reviewed and a conceptual model was developed. Theories on the relationship between self-concept and behaviour were reviewed, and the need to use individual difference variables to conceptualize and measure the salient dimensions of the self was emphasized. Theories relating to social comparison, eating disorders, the role of idealized media images in shaping the self-concept, and evidence on cosmetic surgery and persuasibility were also reviewed. These came from diverse fields such as social psychology, cosmetics use, women's studies, media studies, the self-concept literature in psychology and consumer research, and marketing. From the review, three basic dimensions, namely self-evaluation, self-awareness and persuasibility, were identified and posited to be related to consumption. Several personality variables from these conceptual domains were identified, and factor analysis confirmed the expected structure fitting the basic theoretical dimensions. Demographic variables such as gender and income were also considered. It was found that self-awareness, measured by the variable public self-consciousness, explains differences in consumption of personal care products. The relationship between public self-consciousness and consumption was most conspicuous in cases of poor self-evaluation, measured by self-esteem. Susceptibility to advertising was also found to explain differences in consumption. From the research, it may be concluded that personality variables are useful for explaining consumption and that they must be used together to explain and understand the process. There may not be obvious and conspicuous links between individual measures and behaviour in marketing.
However, when used in proper combination and with the help of theoretical models, personality offers considerable explanatory power, as illustrated by the seventy-five percent prediction accuracy obtained in binary logistic regression.
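An accuracy figure like the one above comes from comparing a logistic model's 0/1 predictions against observed behaviour. A hedged sketch of that idea, using a hand-rolled gradient-descent fit on made-up, perfectly separable data rather than the thesis's model or data:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w += lr * (y - p) * x                     # gradient step on weight
            b += lr * (y - p)                         # gradient step on intercept
    return w, b

def accuracy(xs, ys, w, b):
    """Fraction of cases where the 0.5-threshold prediction matches the label."""
    preds = [1 if (w * x + b) > 0 else 0 for x in xs]
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Hypothetical personality score -> consumes the product (1) or not (0)
xs = [0.2, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
ys = [0, 0, 0, 0, 1, 1, 1, 1]

w, b = fit_logistic(xs, ys)
print(accuracy(xs, ys, w, b))
```

In practice a held-out sample or cross-validation, not training accuracy, should be reported.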
Abstract:
Occupational stress is becoming a major issue on both corporate and social agendas. In industrialized countries, there have been quite dramatic changes in working conditions during the last decade, caused by economic, social and technical development. As a consequence, people at work today are exposed to high quantitative and qualitative demands as well as hard competition driven by the global economy. A recent report says that ailments due to work-related stress are likely to cost India's exchequer around 72,000 crores between 2009 and 2015. Though India is a fast-developing country, it is yet to create facilities to mitigate the adverse effects of work stress; moreover, only little effort has been made to assess work-related stress. In the absence of well-defined standards for assessing work-related stress in India, an attempt is made in this direction to develop factors for the evaluation of work stress. Accordingly, with the help of the existing literature and in consultation with safety experts, seven factors for the evaluation of work stress were developed. An instrument (questionnaire) was developed using these seven factors. The validity and unidimensionality of the questionnaire were ensured by confirmatory factor analysis, and its reliability was ensured before administration. While analyzing the relationships between the variables, it was noted that no relationship exists between them, and hence the above factors are treated as independent factors/variables for the purposes of the research. Initially, five profit-making manufacturing industries in the public sector in the state of Kerala were selected for the study. The influence of the factors responsible for work stress was analyzed in these industries.
These industries were classified into two types, namely chemical and heavy engineering, based on the product manufactured and the work environment, and the analysis was carried out further for these two categories. The variation of work stress with the age, designation and experience of the employees was analyzed by means of one-way ANOVA. Further, three different types of modelling of work stress, namely factor modelling, structural equation modelling and multinomial logistic regression modelling, were carried out to analyze the association of the factors responsible for work stress. All these models were found equally good at predicting work stress. The present study indicates that work stress exists among the employees of public sector industries in Kerala. Employees in the 40-45 year age group and the 15-20 year experience group had relatively higher work demand, low job control, and low support at work. Low job control was noted at lower designation levels, particularly at the worker level, in these industries. Hence the instrument developed using the seven factors, namely demand, control, manager support, peer support, relationship, role and change, can be effectively used for the evaluation of work stress in industries.
Abstract:
The immunostimulatory effect of an alkali-insoluble glucan extracted from the marine yeast isolate Candida sake S165 was tested in Fenneropenaeus indicus. Post-larvae (PL) of F. indicus fed a glucan-incorporated diet at varying concentrations (0.05, 0.1, 0.2, 0.3, 0.4 g glucan/100 g feed) for 21 days were challenged orally with white spot syndrome virus (WSSV). Maximum survival was observed in PL fed the 0.2% glucan-incorporated diet. Subsequently, the feed incorporating 0.2% glucan was fed to F. indicus post-larvae at different feeding intervals, i.e. daily, once every two days, once every five days, once every seven days and once every ten days. After 40 days, the prawns were challenged orally with WSSV and post-challenge survival was recorded. Shrimp feed containing 0.2% glucan administered once every seven days gave maximum survival. This was supported by haematological data obtained from adult F. indicus, i.e. total haemocyte count, phenoloxidase activity and nitroblue tetrazolium (NBT) reduction. The present observations confirm the importance of the dose and frequency of administration of immunostimulants in shrimp health management.
Abstract:
Anti-lipopolysaccharide factors (ALFs) are small proteins that bind and neutralize lipopolysaccharide and exhibit potent antimicrobial activities. This study presents the molecular characterization and phylogenetic analysis of the first ALF isoform (Pp-ALF1; JQ745295) identified from the hemocytes of Portunus pelagicus. The full-length cDNA of Pp-ALF1 consists of 880 base pairs encoding 293 amino acids, with an ORF of 123 amino acids, and contains a putative signal peptide of 24 amino acids. Pp-ALF1 has a predicted molecular weight (MW) of 13.86 kDa and a theoretical isoelectric point (pI) of 8.49. Two highly conserved cysteine residues and a putative LPS-binding domain were observed in Pp-ALF1. The peptide model of Pp-ALF1 consists of two α-helices crowded against a four-strand β-sheet. Comparison of amino acid sequences and a neighbor-joining tree showed that Pp-ALF1 has maximum similarity (46%) to the ALF of Portunus trituberculatus, followed by 39% similarity to the ALF of Eriocheir sinensis and 38% similarity to the ALFs of Scylla paramamosain and Scylla serrata. Pp-ALF1 was found to be a new isoform of the ALF family, and its characteristic similarity to other known ALFs signifies its role in protection against invading pathogens.
Abstract:
The 21st century has brought new challenges for forest management at a time when globalization in world trade is increasing and global climate change is becoming increasingly apparent. In addition to providing humans with various goods and services such as food, feed, timber and biofuels, forest ecosystems are a large store of terrestrial carbon and account for a major part of the carbon exchange between the atmosphere and the land surface. Depending on the stage of the ecosystem and/or the management regime, forests can be either sinks or sources of carbon. At the global scale, rapid economic development and a growing world population have raised much concern over the use of natural resources, especially forest resources. The challenging question is how the global demands for forest commodities can be satisfied in an increasingly globalised economy, and where they could potentially be produced. For this purpose, wood demand estimates need to be integrated into a framework that can adequately handle the competition for land between major land-use options such as residential or agricultural land. This thesis is organised around the requirements of integrating the simulation of forest changes based on wood extraction into an existing framework for global land-use modelling called LandSHIFT. Accordingly, the following focal points for research have been identified: (1) a review of existing global-scale economic forest sector models, (2) simulation of global wood production under selected scenarios, (3) simulation of global vegetation carbon yields, and (4) the implementation of a land-use allocation procedure to simulate the impact of wood extraction on forest land cover.
Modelling the spatial dynamics of forests on the global scale requires two important inputs: (1) simulated long-term wood demand data to determine future roundwood harvests in each country, and (2) the changes in the spatial distribution of woody biomass stocks to determine how much of the resource is available to satisfy the simulated wood demands. First, three global timber market models are reviewed and compared in order to select a suitable economic model to generate wood demand scenario data for the forest sector in LandSHIFT. The comparison indicates that the 'Global Forest Products Model' (GFPM) is most suitable for obtaining projections of future roundwood harvests for further study with the LandSHIFT forest sector. Accordingly, the GFPM is adapted and applied to simulate wood demands for the global forestry sector until 2050, conditional on selected scenarios from the Millennium Ecosystem Assessment and the Global Environment Outlook. Secondly, the Lund-Potsdam-Jena (LPJ) dynamic global vegetation model is used to simulate the change in potential vegetation carbon stocks for the forested locations in LandSHIFT. The LPJ data are used in combination with spatially explicit forest inventory data on aboveground biomass to allocate the demands for raw forest products and to identify locations of deforestation. Using the previous results as input, a methodology to simulate the spatial dynamics of forests based on wood extraction is developed within the LandSHIFT framework. The land-use allocation procedure specified in the module translates the country-level demands for forest products into woody biomass requirements for forest areas and allocates these on a five-arc-minute grid. In a first version, the model assumes current conditions throughout the study period and does not explicitly address forest age structure.
Although the module is at a very preliminary stage of development, it already captures the effects of important drivers of land-use change such as cropland and urban expansion. As a first plausibility test, the module's performance is examined under three forest management scenarios, and it responds to changing inputs in an expected and consistent manner. The entire methodology is applied in an exemplary scenario analysis for India. Several future research priorities need to be addressed, in particular the incorporation of plantation establishment, the dynamics of age structure, and the implementation of a new technology-change factor in the GFPM that would allow the substitution of raw wood products (especially fuelwood) by other non-wood products.
Abstract:
Driven by increased retail demand for organic carrots, the area under organically grown carrots has risen markedly over the past ten years. Production is concentrated in certain regions and has thus increasingly taken place on large fields in close spatial and temporal succession. With the growing presence of host plants, infestation pressure from the carrot fly also rises. While the pest is controlled with insecticides in conventional production, organic farming so far has no direct control measures at its disposal. The aim of the investigations was to identify farm-level and supra-regional patterns of the risk factors involved in infestation under the practical conditions of organic carrot production, and thereby to point out possibilities for improved prevention and control. Over a period of three years, extensive field data were collected on five farms in Lower Saxony and Hesse and analysed using GIS software and the simulation model SWAT. The factors investigated comprised (1) the distance to the previous year's carrot fields, (2) the temporal carrot cropping period, (3) vegetation elements, and (4) the experimental use of trap crops to suppress fly development. Taking into account clear differences between individual farms, the key results of the study are as follows: (1) On farms with infestation in the preceding cropping year, the distance to the previous year's carrot fields proved to be the most important risk factor. The dispersal behaviour of the first generation of carrot flies also proved adaptable to the situation at hand. Fly counts and infestation were each highest in the field closest to the previous year's fields, while carrot fields lying further away showed correspondingly lower fly numbers and infestation.
From these results, a primary dispersal range of the first generation of carrot flies within 1000 m is derived. (2) Farms with a continuous carrot cropping period (roughly April to October), which over the long term supported the development of both the first and the second fly generation, recorded greater fly problems. For improved prevention, it is recommended to avoid a build-up between the generations through a strict spatial separation of early and late sowings. (3) The influence of vegetation was less clear to interpret. Farm-level indications that small woody elements (hedges and trees) within the radius between the current and the previous year's carrot field increase the probability of infestation could not be confirmed with a calculated overall measure of the regional woody vegetation. Large-scale woody vegetation is therefore attributed little importance in the infestation process compared with field-edge vegetation. (4) Carrot trap strips three metres (four ridges) wide on the previous year's carrot fields are suitable from the cotyledon stage onward for binding a substantial infestation potential. Mechanical removal of the trap plants (cultivating), together with the infestation potential, achieved 100% suppression of carrot-fly development in 2008, but at most 41% in 2009. As a possible synthesis of the results on carrot-fly dispersal in spring and its temporal coincidence with carrot development, it is recommended to use adapted field selection to bind fly dispersal spatially to early sowings, thereby creating correspondingly low-infestation regions for more distant, late (more sensitive) carrot sowings.
Abstract:
To study the behaviour of composite beam-to-column connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research, a generic finite element model for composite beam-to-column joints with welded connections is developed using current state-of-the-art local modelling. Applying a mechanically consistent scaling method, it can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows for the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit-state checks. So that the macro element for the scaling method can be used in a practical manner, a generic geometry program, proposed as a new idea in this study, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The proposed finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples of applying this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element. Under certain assumptions for a typical composite frame, the constitutive relationships can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. In the second example, a scaling concept that combines sophisticated local models with a frame analysis using a macro element approach is presented as a practical application of this numerical model.
Abstract:
In an earlier investigation (Burger et al., 2000), five sediment cores near the Rodrigues Triple Junction in the Indian Ocean were studied by applying classical statistical methods (fuzzy c-means clustering, a linear mixing model, principal component analysis) for the extraction of endmembers and the evaluation of the spatial and temporal variation of geochemical signals. Three main factors of sedimentation were expected by the marine geologists: a volcanogenic, a hydrothermal and an ultrabasic factor. The display of fuzzy membership values and/or factor scores versus depth provided consistent results for only two factors; the ultrabasic component could not be identified. The reason for this may be that only traditional statistical methods were applied, i.e. the untransformed components were used, with the cosine-theta coefficient as the similarity measure. During the last decade, considerable progress in compositional data analysis was made, and many case studies were published using new tools for exploratory analysis of such data. Therefore it makes sense to check whether the application of suitable data transformations, reduction of the D-part simplex to two or three factors, and visual interpretation of the factor scores would lead to a revision of the earlier results and to answers to the open questions. In this paper we follow the lines of a paper by R. Tolosana-Delgado et al. (2005), starting with a problem-oriented interpretation of the biplot scattergram, extracting compositional factors, ilr-transforming the components, and visualizing the factor scores in a spatial context: the compositional factors are plotted versus the depth (time) of the core samples in order to facilitate the identification of the expected sources of the sedimentary process. Key words: compositional data analysis, biplot, deep-sea sediments
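The ilr transformation mentioned above maps a D-part composition onto D-1 orthonormal log-ratio coordinates. A minimal sketch using one common pivot-coordinate convention (the composition values are hypothetical, not from the cores):

```python
import math

def closure(parts):
    """Rescale a composition so its parts sum to 1."""
    total = sum(parts)
    return [p / total for p in parts]

def ilr(parts):
    """Isometric log-ratio (pivot) coordinates of a D-part composition."""
    x = closure(parts)
    d = len(x)
    coords = []
    for i in range(d - 1):
        rest = x[i + 1:]  # parts after the pivot
        gmean = math.exp(sum(math.log(r) for r in rest) / len(rest))
        coords.append(math.sqrt((d - i - 1) / (d - i)) * math.log(x[i] / gmean))
    return coords

# Hypothetical 3-part composition (e.g. volcanogenic / hydrothermal / ultrabasic shares)
sample = [60.0, 30.0, 10.0]
print([round(c, 4) for c in ilr(sample)])
```

Because the coordinates depend only on ratios, the result is invariant to the closure constant, which is what makes them suitable for factor extraction on the simplex.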
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthen, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data from another study (Kogovšek et al., 2002) carried out on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data to be hierarchical; therefore a multilevel analysis is required. We use Muthen's partial maximum likelihood approach, called the pseudobalanced solution (Muthen, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Maas, 2001). Several analyses are carried out in order to compare this multilevel analysis with classic methods of analysis such as the ones in Kogovšek et al. (2002), which analysed the data only at the group (ego) level, considering averages over all alters within each ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within- and between-ego reliabilities and validities, and other related quality measures, are defined, computed and interpreted.
Abstract:
Bleeding after cardiac surgery is a major complication given its associated high morbidity and mortality. Blood transfusion is the mandatory therapy, but massive administration of blood products is itself an independent risk factor for morbidity and mortality. Recombinant factor VII (rFVIIa) has been proposed to reduce transfusions and control bleeding. The purpose of this study was to determine whether rFVIIa is a useful tool for reducing the consumption of blood products in postoperative bleeding after cardiac surgery without increasing the risk of thromboembolic complications or acute renal failure. This is a retrospective cohort study carried out over two years, comparing blood-product consumption and postoperative complications between the cohort that received factor VIIa and the one that did not. Sampling was performed by matched-means comparison; qualitative variables were described using frequency and percentage distributions, and quantitative variables with measures of central tendency such as the mean and median and measures of dispersion such as the standard deviation. Normality was assessed with the Kolmogorov-Smirnov test at a significance level of α = 10%; depending on whether normality held, Student's t-test or the Mann-Whitney U test was applied at a significance level of α = 5%. Fifty-four patients were included, of whom 14 received rFVIIa. On average, five units of red blood cells, nine of plasma, six of platelets and four of cryoprecipitate were transfused, with no significant differences between the groups. A reduction in bleeding during the first 24 postoperative hours, correction of coagulation times, and lower mortality were observed. Differences in thromboembolic complications and renal failure were not statistically significant.
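The Mann-Whitney U statistic used in such group comparisons can be computed from midranks of the pooled sample. A minimal sketch with hypothetical transfusion counts (not the study's data; a real analysis would also derive a p-value, e.g. via the normal approximation):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistics for two samples, using midranks for ties."""
    pooled = sorted(a + b)

    def midrank(v):
        lo = pooled.index(v)           # first position of v in the pooled sort
        hi = lo + pooled.count(v) - 1  # last position of v
        return (lo + hi) / 2 + 1       # average 1-based rank over the tie block

    rank_sum_a = sum(midrank(v) for v in a)
    u_a = rank_sum_a - len(a) * (len(a) + 1) / 2
    return u_a, len(a) * len(b) - u_a  # (U for group a, U for group b)

# Hypothetical red-cell units transfused, with vs without factor VIIa
with_f7 = [3, 4, 4, 5, 6]
without_f7 = [5, 6, 6, 7, 8]

print(mann_whitney_u(with_f7, without_f7))
```

A U value near zero for one group indicates that its values tend to be uniformly smaller than the other group's.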
Abstract:
The objective of this study was to evaluate anthropometric variables of the hand (palm length, shape index, wrist circumference, circumference 1 cm distal to the wrist circumference, and wrist index), adjusted for gender, age, occupation and time on the job, as independent risk factors for Carpal Tunnel Syndrome. A case-control study was conducted with 63 electrophysiologically diagnosed cases, of whom 58 were women and 5 men, against 63 asymptomatic controls, of whom 52 were women and 11 men. The variables were evaluated through a bivariate analysis and a multivariate analysis (logistic regression), to which a goodness-of-fit test (analysis of variance, ANOVA) was applied. Stratification of each variable by gender was not possible because of the small number of men. The bivariate analysis showed that age over 40 years and palm length under 105.5 mm have a significant risk effect, and that the shape index, wrist circumference, wrist index, body mass index and the circumference 1 cm distal to the wrist circumference were significantly greater in the case group than in the control group. The logistic regression analysis showed that age over 40 years, BMI over 24.9 kg/m2, 5 to 10 years on the job, and palm length under 105.5 mm have a significant risk effect for Carpal Tunnel Syndrome. In the goodness-of-fit test of the logistic regression model (analysis of variance, ANOVA), the variables with a significant risk effect were: occupation 1 (manual operative work), 5 to 10 years on the job, age over 40 years, BMI over 24.9 kg/m2, and palm length under 105.5 mm. In conclusion, of the anthropometric measures evaluated, the only one significantly associated with carpal tunnel syndrome was palm length under 105.5 mm.
Among the individual and occupation-related variables, a significant risk effect was shown by occupations involving manual operative work, 5 to 10 years on the job, age over 40 years, and a body mass index within the overweight and obesity ranges.
Abstract:
Contains photographs and diagrams. Abstract taken from the author.