Abstract:
Introduction: Ankle arthropathy is associated with decreased motion of the ankle-hindfoot during ambulation. Ankle arthrodesis has been shown to result in degeneration of the neighbouring joints of the foot. Conversely, total ankle arthroplasty conceptually preserves the adjacent joints because of the residual mobility of the ankle, but this has not yet been demonstrated in vivo. It has also been reported that degenerative ankle disease, and even arthrodesis, does not result in alteration of the knee and hip joints. We present the preliminary results of a new approach to this problem based on ambulatory gait analysis. Patients and Methods: Motion analysis of the lower limbs was performed using a Physilog® system (BioAGM, CH), consisting of a three-dimensional (3D) accelerometer and gyroscope, coupled to a magnetic system (Liberty©, Polhemus, USA). Both systems have been validated. Three groups of two patients were included in this pilot study and compared to healthy subjects (controls) during level walking: patients with ankle osteoarthritis (group 1), patients treated by ankle arthrodesis (group 2) and patients treated by total ankle prosthesis (group 3). Results: Motion patterns of all analysed joints were highly repeatable over more than 20 gait cycles in each subject. Motion amplitude of the ankle-hindfoot in controls was similar to recently reported results. Ankle arthrodesis limited the motion of the ankle-hindfoot in the sagittal and horizontal planes. The prosthetic ankle allowed a more physiologic movement in the sagittal plane only. Ankle arthritis and its treatments did not influence the range of motion of the knee and hip joints during the stance phase, except for a slight decrease of hip flexion in groups 1 and 2. Conclusion: The reliability of the system was shown by the repeatability of consecutive measurements. The results of this preliminary study were similar to those obtained through laboratory gait analysis.
However, our system has the advantage of allowing ambulatory analysis of the 3D kinematics of the lower limbs outside a gait laboratory and under real-life conditions. To our knowledge, this is a new concept in the analysis of ankle arthropathy and its treatments. There is therefore potential to address specific questions such as the difficult comparison of the benefits of ankle arthroplasty versus arthrodesis. The encouraging results of this pilot study open the perspective of analysing the consequences of ankle arthropathy and its treatments on the biomechanics of the lower limbs in an ambulatory setting, in vivo and under daily-life conditions.
Abstract:
AIMS: To determine whether parental factors earlier in life (parenting, single-parent family, parental substance use problem) are associated with patterns of alcohol consumption among young men in Switzerland. METHODS: This analysis of a population-based sample from the Cohort Study on Substance Use Risk Factors (C-SURF) included 5,990 young men (mean age 19.51 years), all attending a mandatory recruitment process for the army. These conscripts reported on parental monitoring and rule-setting, parental behaviour and family structure. The alcohol use pattern was assessed through abstention, risky single-occasion drinking (RSOD), volume drinking and dependence. Furthermore, the impact of age, family socio-economic status, educational level of the parents, language region and civil status was analysed. RESULTS: A parental substance use problem was positively associated with volume drinking and alcohol dependence in young Swiss men. Active parenting was negatively associated with RSOD, volume drinking and alcohol dependence. A single-parent family was not associated with a different alcohol consumption pattern compared with a standard family. CONCLUSION: Parental influences earlier in life, such as active parenting (monitoring, rule-setting and knowing the whereabouts) and a perceived parental substance use problem, are associated with alcohol drinking behaviour in young male adults. Health professionals should therefore stress the importance of active parenting and parental substance use prevention in alcohol prevention strategies.
Abstract:
Practical guidelines for monitoring and measuring compounds such as jasmonates, ketols, ketodi(tri)enes and hydroxy-fatty acids as well as detecting the presence of novel oxylipins are presented. Additionally, a protocol for the penetrant analysis of non-enzymatic lipid oxidation is described. Each of the methods, which employ gas chromatography/mass spectrometry, can be applied without specialist knowledge or recourse to the latest analytical instrumentation. Additional information on oxylipin quantification and novel protocols for preparing oxygen isotope-labelled internal standards are provided. Four developing areas of research are identified: (i) profiling of the unbound cellular pools of oxylipins; (ii) profiling of esterified oxylipins and/or monitoring of their release from parent lipids; (iii) monitoring of non-enzymatic lipid oxidation; (iv) analysis of unstable and reactive oxylipins. The methods and protocols presented herein are designed to give technical insights into the first three areas and to provide a platform from which to enter the fourth area.
Abstract:
To study the different temporal components of cancer mortality (age, period and cohort), methods of graphic representation were applied to Swiss mortality data from 1950 to 1984. Maps using continuous slopes ("contour maps"), based on eight tones of grey according to the absolute distribution of rates, were used to represent the surfaces defined by the matrix of age-specific rates. Further, progressively more complex regression surface equations were defined, on the basis of two independent variables (age/cohort) and a dependent one (each age-specific mortality rate). General patterns of trends in cancer mortality were thus identified, permitting the definition of important cohort effects (e.g., upwards for lung and other tobacco-related neoplasms, or downwards for stomach) or period effects (e.g., downwards for intestinal or thyroid cancers), besides the major underlying age component. For most cancer sites, even the lower-order (1st to 3rd) models utilised provided excellent fits, allowing immediate identification of the residuals (e.g., high or low mortality points) as well as estimates of first-order interactions between the three factors, although the parameters of the main effects remained undetermined. The method should thus be used essentially as a summary guide to illustrate and understand the general patterns of age, period and cohort effects in (cancer) mortality, although it cannot conceptually solve the inherent problem of identifiability of the three components.
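The regression-surface idea can be sketched numerically. The snippet below fits a second-order polynomial surface to a matrix of log rates indexed by age and cohort, then reports the fit quality; all data (age range, cohort range, coefficients, noise level) are synthetic illustration values, not the Swiss mortality figures analysed in the study.

```python
import numpy as np

# Hypothetical sketch: fit a low-order polynomial surface to a matrix of
# age-specific rates, with age and cohort as the two independent variables.
rng = np.random.default_rng(0)
ages = np.arange(40, 80, 5)            # mid-points of age groups (invented)
cohorts = np.arange(1890, 1940, 5)     # birth cohorts (invented)
A, C = np.meshgrid(ages, cohorts, indexing="ij")
# Synthetic log-rates: strong age effect, mild cohort trend, small noise
log_rate = 0.08 * A + 0.01 * (C - 1890) + rng.normal(0, 0.05, A.shape)

# Design matrix for a 2nd-order surface: 1, a, c, a^2, c^2, a*c
a, c = A.ravel().astype(float), (C - 1890).ravel().astype(float)
X = np.column_stack([np.ones_like(a), a, c, a**2, c**2, a * c])
coef, *_ = np.linalg.lstsq(X, log_rate.ravel(), rcond=None)

fitted = X @ coef
residuals = log_rate.ravel() - fitted   # large residuals flag unusual cells
r2 = 1 - residuals.var() / log_rate.ravel().var()
print(f"R^2 = {r2:.3f}")
```

As in the abstract, the residuals of such a surface immediately expose unusually high or low mortality points, while the surface itself summarises the joint age/cohort trend.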
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, preventing the synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, even though methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that simultaneously tests for, and if necessary controls for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to accurately estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
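The protection against population stratification that comes from conditioning on parental genotype can be illustrated with the classic transmission disequilibrium test (TDT), a McNemar-type chi-square on transmitted versus untransmitted alleles. This is the textbook statistic, not the extended family/case-control method the abstract describes, and the counts below are invented for illustration.

```python
def tdt_chi_square(b: int, c: int) -> float:
    """Classic TDT statistic with 1 degree of freedom.

    b: number of times heterozygous parents transmitted the test allele
    c: number of times they transmitted the alternative allele
    Conditioning on heterozygous parents makes the 50:50 null hold
    regardless of population structure.
    """
    if b + c == 0:
        return 0.0
    return (b - c) ** 2 / (b + c)

# Invented example: 100 informative transmissions from heterozygous parents
b, c = 62, 38
chi2 = tdt_chi_square(b, c)
print(f"TDT chi-square = {chi2:.2f}")  # prints "TDT chi-square = 5.76"
```

Compared against the 3.84 critical value of a 1-df chi-square, these hypothetical counts would suggest association at the 5% level; extending this framework to binary traits and case-control data is the contribution the abstract describes.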
Abstract:
The proportion of the population living in or around cities is higher than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste or noise, together with health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. In order to gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers quite a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
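As a rough illustration of the self-organising map technique used for the high-dimensional socio-economic analysis, the sketch below trains a minimal online SOM on synthetic data. The grid size, learning-rate and neighbourhood schedules, and the data itself are arbitrary assumptions, not the configuration used in the thesis.

```python
import numpy as np

# Minimal online self-organising map on synthetic "socio-economic" vectors.
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 5))            # 200 spatial units, 5 indicators
grid_w, grid_h = 6, 6
weights = rng.normal(size=(grid_w * grid_h, 5))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

n_iter = 1000
for t in range(n_iter):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
    lr = 0.5 * (1 - t / n_iter)                        # decaying learning rate
    sigma = 3.0 * (1 - t / n_iter) + 0.5               # shrinking neighbourhood
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))                 # neighbourhood kernel
    weights += lr * h[:, None] * (x - weights)         # pull units towards x

# Quantisation error: mean distance from each datum to its nearest prototype
qe = np.mean([np.sqrt(((weights - x) ** 2).sum(axis=1).min()) for x in data])
print(f"quantisation error: {qe:.3f}")
```

The trained grid gives each high-dimensional observation a position on a 2-D map, which is what makes SOMs useful for visual exploration of multivariate urban data at different scales.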
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
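The contrast between a single-value regression prediction and stochastic simulation can be sketched as follows: instead of one estimate per location, many realizations are drawn and summarised as an exceedance-probability map. The field below is purely synthetic (no fallout data), the units and decision threshold are hypothetical, and independent Gaussian draws stand in for proper spatially correlated simulations.

```python
import numpy as np

# Sketch of probabilistic mapping via multiple realizations.
rng = np.random.default_rng(2)
n_locations, n_realizations = 50, 500
mean_map = rng.uniform(1.0, 5.0, n_locations)   # regression-style single estimate
local_sd = rng.uniform(0.5, 1.5, n_locations)   # local uncertainty (invented)

# Each row is one realization of the whole map
realizations = rng.normal(mean_map, local_sd, size=(n_realizations, n_locations))

threshold = 4.0                                  # hypothetical decision level
p_exceed = (realizations > threshold).mean(axis=0)  # probability per location

# A regression model alone yields only `mean_map`; the realizations also
# give a full distribution, hence risk quantification, at every location.
print(f"locations with P(exceed) > 0.5: {(p_exceed > 0.5).sum()}")
```

This is the decision-oriented output the abstract contrasts with single-value prediction: a map of probabilities that a contamination threshold is exceeded, rather than one number per site.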
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that must be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but did not prove to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate levels, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partitioning was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for reproducing the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to achieve efficient indoor radon decision-making.
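As an illustration of KNN used as an exploratory spatial tool, combined with the cross-validation idea behind filters like the CVMF, the sketch below selects the number of neighbours k by leave-one-out RMSE. The coordinates, values and candidate k range are all invented; this is not the dataset or the exact procedure from the thesis.

```python
import numpy as np

# KNN spatial interpolation with leave-one-out cross-validation (synthetic data).
rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, size=(150, 2))                  # sample locations
z = np.sin(xy[:, 0]) + np.cos(xy[:, 1]) + rng.normal(0, 0.1, 150)

def knn_predict(xy_train, z_train, xy_query, k):
    """Predict at query points as the mean of the k nearest training values."""
    d = np.linalg.norm(xy_train[None, :, :] - xy_query[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]                  # k nearest neighbours
    return z_train[idx].mean(axis=1)

def loo_rmse(k):
    """Leave-one-out RMSE for a given k: each sample predicted without itself."""
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i                   # hold sample i out
        pred = knn_predict(xy[mask], z[mask], xy[i:i + 1], k)[0]
        errs.append((pred - z[i]) ** 2)
    return np.sqrt(np.mean(errs))

best_k = min(range(1, 11), key=loo_rmse)
print(f"best k by leave-one-out RMSE: {best_k}")
```

The same held-out errors could also drive a filtering step: samples whose cross-validation error is extreme are candidates for local noise, which is the spirit of the cross-validation filter described above.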
Abstract:
A collaborative study on Raman spectroscopy and microspectrophotometry (MSP) was carried out by members of the ENFSI (European Network of Forensic Science Institutes) European Fibres Group (EFG) on different dyed cotton fabrics. The detection limits of the two methods were tested on two cotton sets with dye concentrations ranging from 0.5 to 0.005% (w/w). This survey shows that it is possible to detect the presence of dye in fibres at concentrations below those detectable by the traditional methods of light microscopy and MSP. The MSP detection limit for the dyes used in this study was found to be a concentration of 0.5% (w/w); at this concentration, the fibres appear colourless under light microscopy. Raman spectroscopy clearly shows a higher potential, detecting dye concentrations as low as 0.05% for the yellow dye RY145 and 0.005% for the blue dye RB221. This detection limit was found to depend both on the chemical composition of the dye itself and on the analytical conditions, particularly the laser wavelength. Furthermore, analysis of binary mixtures of dyes showed that while the minor dye was detected at 1.5% (w/w) (30% of the total dye concentration) using MSP, it was detected at a level as low as 0.05% (w/w) (10% of the total dye concentration) using Raman spectroscopy. This work also highlights the importance of a flexible Raman instrument equipped with several lasers at different wavelengths for the analysis of dyed fibres. The operator and the set-up of the analytical conditions are also of prime importance in order to obtain high-quality spectra, and changing the laser wavelength is important for detecting different dyes in a mixture.