928 results for Risk Assessment Methods
Abstract:
INTRODUCTION: Two important risk factors for abnormal neurodevelopment are preterm birth and neonatal hypoxic ischemic encephalopathy. The new revisions of the Griffiths Mental Development Scale (Griffiths-II, [1996]) and the Bayley Scales of Infant Development (BSID-II, [1993]) are two of the most frequently used developmental diagnostic tests. The Griffiths-II is divided into five subscales and a global development quotient (QD), and the BSID-II is divided into two scales, the Mental scale (MDI) and the Psychomotor scale (PDI). The main objective of this research was to establish the extent to which developmental diagnoses obtained using the new revisions of these two tests are comparable for a given child. MATERIAL AND METHODS: Retrospective study of 18-month-old high-risk children examined with both tests in the follow-up unit of the Clinic of Neonatology of our tertiary care university hospital between 2011 and 2012. To determine the concurrent validity of the two tests, paired t-tests and Pearson product-moment correlation coefficients were computed. Using the BSID-II as a gold standard, the performance of the Griffiths-II was analyzed with receiver operating characteristic (ROC) curves. RESULTS: 61 patients (80.3% preterm, 14.7% neonatal asphyxia) were examined. For the BSID-II, the MDI mean was 96.21 (range 67-133) and the PDI mean was 87.72 (range 49-114). For the Griffiths-II, the QD mean was 96.95 (range 60-124) and the locomotor subscale mean was 92.57 (range 49-119). The score of the Griffiths locomotor subscale was significantly higher than the PDI (p<0.001). No significant difference was found between the Griffiths-II QD and the BSID-II MDI, and the area under the curve was 0.93, showing good validity. All correlations were high and significant, with Pearson product-moment correlation coefficients >0.8. CONCLUSIONS: The meaning of the results for a given child was the same for the two tests. Two scores, the Griffiths-II QD and the BSID-II MDI, were interchangeable.
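The concurrent-validity workflow described above (paired t-test, Pearson correlation, and a ROC analysis treating the BSID-II as the gold standard) can be sketched as follows. The data are synthetic and the delay cutoff (MDI < 85) is an illustrative assumption, not a criterion taken from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired scores for n = 61 children (synthetic, for illustration)
mdi = rng.normal(96, 13, 61)         # BSID-II Mental scale
qd = mdi + rng.normal(0, 5, 61)      # Griffiths-II QD, strongly correlated

# Paired t-test: is the mean difference between the two tests zero?
t, p = stats.ttest_rel(qd, mdi)

# Pearson product-moment correlation between the two scores
r, _ = stats.pearsonr(qd, mdi)

# ROC AUC with the BSID-II as gold standard (delay = MDI < 85, an assumed cutoff).
# AUC equals P(QD_delayed < QD_normal), obtainable from the Mann-Whitney U statistic.
delayed = mdi < 85
u = stats.mannwhitneyu(qd[delayed], qd[~delayed]).statistic
auc = 1.0 - u / (delayed.sum() * (~delayed).sum())
print(round(r, 2), round(auc, 2))
```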
Abstract:
PURPOSE: Iterative algorithms introduce new challenges in the field of image quality assessment. The purpose of this study is to use a mathematical model to objectively evaluate low-contrast detectability in CT. MATERIALS AND METHODS: A QRM 401 phantom containing 5 and 8 mm diameter spheres with contrast levels of 10 and 20 HU was used. The images were acquired at 120 kV with CTDIvol equal to 5, 10, 15 and 20 mGy and reconstructed using the filtered back-projection (FBP), adaptive statistical iterative reconstruction 50% (ASIR 50%) and model-based iterative reconstruction (MBIR) algorithms. The model observer used is the Channelized Hotelling Observer (CHO) with dense difference-of-Gaussians (D-DOG) channels. The CHO performances were compared to the outcomes of six human observers who performed four-alternative forced choice (4-AFC) tests. RESULTS: For the same CTDIvol level, and according to the CHO model, the MBIR algorithm gives the highest detectability index. The outcomes of the human observers and the results of the CHO are highly correlated whatever the dose levels, signals and algorithms considered, when some noise is added to the CHO model. The Pearson coefficient between the human observers and the CHO is 0.93 for FBP and 0.98 for MBIR. CONCLUSION: The human observers' performances can be predicted by the CHO model. This opens the way to proposing, alongside the standard dose report, the expected level of low-contrast detectability. The introduction of iterative reconstruction requires such an approach to ensure that dose reduction does not impair diagnostic performance.
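As a rough illustration of how a CHO detectability index is computed, the sketch below builds a small difference-of-Gaussians channel bank, channelizes signal-present and signal-absent patches, and evaluates d′ in channel space. The channel parameters, signal shape and noise level are hypothetical stand-ins, not the study's acquisition settings:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                                               # patch size in pixels
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]

def dog_channel(sigma):
    """One radial difference-of-Gaussians channel (stand-in for the D-DOG bank)."""
    c = np.exp(-(x**2 + y**2) / (2 * (1.67 * sigma) ** 2)) \
        - np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return (c / np.linalg.norm(c)).ravel()

channels = np.stack([dog_channel(s) for s in (1.0, 2.0, 4.0, 8.0)])   # 4 x N^2

signal = 10.0 * np.exp(-(x**2 + y**2) / 18.0)        # low-contrast blob-like signal

def channelized(img):
    return channels @ img.ravel()

# 200 signal-absent and 200 signal-present noisy patches (white noise, sd 20)
absent = np.array([channelized(rng.normal(0, 20, (N, N))) for _ in range(200)])
present = np.array([channelized(signal + rng.normal(0, 20, (N, N)))
                    for _ in range(200)])

# Hotelling observer in channel space: d'^2 = dv^T S^-1 dv, with pooled covariance S
dv = present.mean(0) - absent.mean(0)
S = 0.5 * (np.cov(present.T) + np.cov(absent.T))
d_prime = float(np.sqrt(dv @ np.linalg.solve(S, dv)))
print(round(d_prime, 2))
```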
Post-partum persistence of abnormal circadian pattern of blood pressure after preeclampsia [109-POS]
Abstract:
OBJECTIVES: Blunted nocturnal dip of blood pressure (BP) and reversed circadian rhythm have been described in preeclampsia (PE). Non-dipper status and preeclampsia are both associated with an increased risk of cardiovascular disease later in life. Complete recovery of BP in PE is reported to occur over a variable period of time. Twenty-four-hour ambulatory blood pressure measurement (ABPM) in the post-partum follow-up after PE has not been described. The aim of this study was to assess the 24h ambulatory blood pressure pattern after PE and to determine the prevalence of non-dipper status, nocturnal hypertension, white coat hypertension and masked hypertension. METHODS: This is an observational, prospective study of women who suffered from preeclampsia. A 24h ABPM was done 6 weeks post-partum at the Hypertension Unit of the University Hospitals of Geneva, concomitantly with a clinical and biological evaluation. RESULTS: Forty-five women were included in a preliminary analysis. Mean age was 33±6 years, 57.3% were Caucasian, and mean BMI before pregnancy was 24±5 kg/m². Office and ambulatory BP are shown in Table 1. The prevalence of nocturnal hypertension was high, and half of the women had no nocturnal dipping. The diagnosis of hypertension based on office BP was discordant with the diagnosis based on ABPM in 25% of women. CONCLUSIONS: The prevalence of increased nighttime BP and abnormal BP pattern is high at 6 weeks post-partum in preeclamptic women. Early assessment of BP with ABPM after preeclampsia allows early identification of women with persistent circadian abnormalities who might be at increased risk. It also provides a more accurate assessment than office BP. DISCLOSURES: A. Ditisheim: None. B. Ponte: None. G. Wuerzner: None. M. Burnier: None. M. Boulvain: None. A. Pechère-Bertschi: None.
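The dipping classification used in ABPM studies can be computed directly from day and night mean BP. A minimal sketch, assuming the conventional definitions (dip < 10% = non-dipper, negative dip = reverse dipper); the thresholds are the usual convention, not values taken from this abstract:

```python
def dipping_status(day_sbp, night_sbp):
    """Classify circadian BP pattern from mean day and night systolic BP (mmHg)."""
    dip = 100.0 * (day_sbp - night_sbp) / day_sbp   # nocturnal dip in percent
    if dip < 0:
        return "reverse dipper"                      # BP rises at night
    if dip < 10:
        return "non-dipper"                          # blunted nocturnal dip
    return "dipper"                                  # normal >= 10% dip

print(dipping_status(130, 128))  # → non-dipper (dip of only ~1.5%)
```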
Abstract:
INTRODUCTION: This article is part of a research study on the organization of primary health care (PHC) for mental health in two of Quebec's remote regions. It introduces a methodological approach, based on information found in health records, for assessing the quality of PHC offered to people suffering from depression or anxiety disorders. METHODS: Quality indicators were identified from evidence, and case studies were reconstructed using data collected in health records over a 2-year observation period. Data collection was developed using a three-step iterative process: (1) feasibility analysis, (2) development of a data collection tool, and (3) application of the data collection method. The adaptation of quality-of-care indicators to remote regions was appraised according to their relevance, measurability and construct validity in this context. RESULTS: As a result of this process, 18 quality indicators were shown to be relevant, measurable and valid for establishing a critical quality appraisal of four recommended dimensions of PHC clinical processes: recognition, assessment, treatment and follow-up. CONCLUSIONS: Health records are not only of interest for assessing the quality of PHC for mental health in remote regions; the rigorous and meticulous methodological approach developed in this study also has scientific value. From the perspective of stakeholders in the PHC system of care in remote areas, the quality indicators are credible and offer potential for transfer to other contexts. This study provides information that can help identify gaps and implement solutions adapted to the context.
Abstract:
OBJECTIVE: To determine the number of punctures in fine-needle aspiration biopsies required for a safe cytological analysis of thyroid nodules. MATERIALS AND METHODS: Cross-sectional study with a focus on diagnosis. The study population included 94 patients. RESULTS: The mean age of the patients participating in the study was 52 years (standard deviation = 13.7) and 90.4% of them were women. Considering each puncture as an independent event, the first puncture showed conclusive results in 78.7% of cases, the second in 81.6%, and the third in 71.8% of cases. Considering the cumulative chance of a conclusive diagnosis at each new puncture, two punctures showed conclusive results in 89.5% of cases, and three punctures in 90.6% of cases with at least one conclusive result. CONCLUSION: Two punctures in fine-needle aspiration biopsies of thyroid nodules led to a diagnosis in 89.5% of cases in the study sample, suggesting that there is no need for multiple punctures to safely obtain the diagnosis of thyroid nodules.
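Under the abstract's working assumption that each puncture is an independent event, the chance of at least one conclusive result in n punctures is 1 − (1 − p)ⁿ. With the observed single-puncture rate of 78.7%, independence would predict about 95.5% for two punctures, above the observed 89.5%, which suggests the punctures are not fully independent:

```python
p1 = 0.787  # conclusive rate of a single puncture, as reported in the study

def p_conclusive(n, p=p1):
    """P(at least one conclusive result in n punctures), assuming independence."""
    return 1 - (1 - p) ** n

# Independence predicts ~95.5% for two punctures vs. the observed 89.5%
print(round(p_conclusive(2), 3))  # → 0.955
```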
Abstract:
Objective To investigate superior mesenteric artery flow measurement by Doppler ultrasonography as a means of characterizing inflammatory activity in Crohn's disease. Materials and Methods Forty patients were examined and divided into two groups – disease activity and remission – according to their Crohn's disease activity index score. Mean superior mesenteric artery flow volume was calculated for each group and correlated with the Crohn's disease activity index score. Results The mean superior mesenteric artery flow volume was significantly greater in the patients with active disease (626 ± 236 mL/min vs. 376 ± 190 mL/min; p = 0.001). When a cutoff of 500 mL/min was used, the superior mesenteric artery flow volume demonstrated a sensitivity of 83% and a specificity of 82% for the diagnosis of Crohn's disease activity. Conclusion The present results suggest that patients with active Crohn's disease have increased superior mesenteric artery flow volume as compared with patients in remission. Superior mesenteric artery flow measurement performed well in the assessment of disease activity in this study sample.
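Sensitivity and specificity at a flow cutoff such as the 500 mL/min used above can be computed from labeled flow values in a few lines; the flow data below are hypothetical, for illustration only:

```python
def sens_spec(flows, active, cutoff=500.0):
    """Sensitivity and specificity of the rule 'flow > cutoff' for active disease."""
    tp = sum(1 for f, a in zip(flows, active) if f > cutoff and a)
    fn = sum(1 for f, a in zip(flows, active) if f <= cutoff and a)
    fp = sum(1 for f, a in zip(flows, active) if f > cutoff and not a)
    tn = sum(1 for f, a in zip(flows, active) if f <= cutoff and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical flows (mL/min) with activity labels (True = active disease)
flows = [700, 610, 450, 300, 520, 480]
active = [True, True, True, False, False, False]
sens, spec = sens_spec(flows, active)
print(round(sens, 2), round(spec, 2))  # → 0.67 0.67
```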
Abstract:
Objective To compare automatic and manual measurements of the intima-media complex (IMC) in the common carotid, common femoral and right subclavian arteries of HIV-infected patients in relation to a control group, taking into consideration the classical risk factors for atherosclerosis. Materials and Methods The study sample comprised 70 HIV-infected patients and 70 non-HIV-infected controls matched according to sex and age. Automatic (gold standard) and manual measurements of the IMC were performed in the carotid arteries. Manual measurements were also performed in the common femoral and right subclavian arteries. Bland-Altman plots were used for the comparison, and the adopted significance level was 5%. Results Intima-media complex alterations were not observed in any of the individuals when the mean automatic measurement in the right common carotid (RCC) artery was considered the gold standard. When the gold standard was compared with the manual measurements (mean, maximum and minimum), no clinically significant alteration was observed. When the gold standard was compared with other sites, the difference was statistically and clinically significant at the origin of the right subclavian artery (RCC: 0.51 mm vs. 0.91 mm) (p < 0.001). Conclusion HIV-infected individuals are not at higher risk for atherosclerosis than the control population.
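A minimal sketch of the Bland-Altman comparison used here: the bias is the mean difference between methods, and the 95% limits of agreement are bias ± 1.96 SD of the differences. The paired measurements below are hypothetical:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired IMC measurements (mm): automatic vs. manual
auto = [0.51, 0.55, 0.48, 0.60, 0.52]
manual = [0.53, 0.54, 0.50, 0.62, 0.51]
bias, lower, upper = bland_altman(auto, manual)
print(round(bias, 3), round(lower, 3), round(upper, 3))
```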
Abstract:
Nanogenotoxicity is a crucial endpoint in safety testing of nanomaterials as it addresses potential mutagenicity, which has implications for risks of both genetic disease and carcinogenesis. Within the NanoTEST project, we investigated the genotoxic potential of well-characterised nanoparticles (NPs): titanium dioxide (TiO2) NPs of nominal size 20 nm, iron oxide NPs (8 nm) both uncoated (U-Fe3O4) and oleic acid coated (OC-Fe3O4), rhodamine-labelled amorphous silica of 25 nm (Fl-25 SiO2) and 50 nm (Fl-50 SiO2), and polylactic glycolic acid polyethylene oxide polymeric NPs - as well as Endorem® as a negative control - for detection of strand breaks and oxidised DNA lesions with the alkaline comet assay. Using primary cells and cell lines derived from blood (human lymphocytes and lymphoblastoid TK6 cells), vascular/central nervous system (human cerebral endothelial cells), liver (rat hepatocytes and Kupffer cells), kidney (monkey Cos-1 and human HEK293 cells), lung (human bronchial 16HBE14o cells) and placenta (human BeWo b30), we were interested in which in vitro cell model is sufficient to detect positive (genotoxic) and negative (non-genotoxic) responses. All in vitro studies were harmonized, i.e. NPs from the same batch, and identical dispersion protocols (for TiO2 NPs, two dispersions were used), exposure times, concentration ranges, culture conditions and time-courses were used. The results of the statistical evaluation show that OC-Fe3O4 and TiO2 NPs are genotoxic under the experimental conditions used. When all NPs were included in the analysis, no differences were seen among cell lines - demonstrating the usefulness of the assay in all cells for identifying genotoxic and non-genotoxic NPs. The TK6 cells, human lymphocytes, BeWo b30 and kidney cells seem to be the most reliable for detecting a dose-response.
Abstract:
Objective: To define the distal femur rotation pattern in a Brazilian population, correlating such pattern with the one suggested by the arthroplasty instruments, and analyzing the variability of each anatomic parameter. Materials and Methods: A series of 101 magnetic resonance imaging studies was evaluated in the period between April and June 2012. The epidemiological data collection was performed with the aid of the institution's computed imaging system, and the sample included 52 male and 49 female patients. The measurements were made in the axial plane, with subsequent correlation and triangulation with the other planes. The posterior condylar line was used as a reference for angle measurements. Subsequently, the anatomical and surgical transepicondylar axes and the anteroposterior trochlear line were specified. The angles between the reference line and the studied lines were calculated with the aid of the institution's software. Results: The mean angle between the anatomical transepicondylar axis and the posterior condylar line was 6.89°, ranging from 0.25° to 12°. For the surgical transepicondylar axis, the mean value was 2.89°, ranging from –2.23° (internal rotation) to 7.86°, and for the axis perpendicular to the anteroposterior trochlear line, the mean value was 4.77°, ranging from –2.09° to 12.2°. Conclusion: The anatomical transepicondylar angle showed mean values corresponding to the measurement observed in the Caucasian population. The utilized instruments are appropriate, but no anatomical parameter proved to be steady enough to be used in isolation.
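The angle between the posterior condylar reference line and a studied axis can be computed from two points per line; a signed result distinguishes external (positive) from internal (negative) rotation. A sketch with hypothetical coordinates:

```python
import math

def angle_between(line_a, line_b):
    """Signed angle (degrees) from line_a to line_b, each as ((x1, y1), (x2, y2))."""
    def direction(p, q):
        return (q[0] - p[0], q[1] - p[1])
    ax, ay = direction(*line_a)
    bx, by = direction(*line_b)
    dot = ax * bx + ay * by
    cross = ax * by - ay * bx
    return math.degrees(math.atan2(cross, dot))

# Posterior condylar line vs. an axis rotated ~6.89° externally (hypothetical points)
pcl = ((0, 0), (10, 0))
tea = ((0, 0), (10, math.tan(math.radians(6.89)) * 10))
print(round(angle_between(pcl, tea), 2))  # → 6.89
```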
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Normally, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. Thereby, the space-time pattern characterisation represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita Index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider complex spatial constraints, high variability and the multivariate nature of the events.
Therefore, we proposed a statistical framework that takes into account the complexities of the geographical space where phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and one for predicting fire ignition susceptibility. In this regard, the main objective of this Thesis was to carry out basic statistical/geospatial research with a strong application part, to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this Thesis provides a response to the increasing demand for both environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
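One of the global clustering measures mentioned above, Ripley's K-function, can be sketched as follows. This naive estimator ignores edge corrections (which bias it low near boundaries) and the Validity Domain refinement developed in the thesis:

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction) for 2-D points.

    Under complete spatial randomness, K(r) is approximately pi * r**2.
    """
    pts = np.asarray(points, float)
    n = len(pts)
    # Pairwise distance matrix; count ordered pairs closer than r, excluding self
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    pairs = (d < r).sum() - n
    lam = n / area                      # point intensity
    return pairs / (lam * n)

rng = np.random.default_rng(2)
csr = rng.uniform(0, 1, (500, 2))       # complete spatial randomness in unit square
k = ripley_k(csr, 0.1, 1.0)
print(round(k, 3))                      # close to pi * 0.1**2, biased low by edges
```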
Abstract:
Purpose: To assess the composition and compliance with legislation of multivitamin/multimineral (MVM) supplements in Switzerland. Methods: Information on the composition of vitamin/mineral supplements was obtained from the Swiss drug compendium, the Internet, pharmacies, parapharmacies and supermarkets. An MVM was defined as containing at least 5 vitamins and/or minerals. Results: 95 MVMs were considered. The most frequent vitamins were B6 (73.7%), C (71.6%), B2 (69.5%) and B1 (67.4%); the least frequent were K (17.9%), biotin (51.6%), pantothenic acid (55.8%) and E (56.8%). Around half of the MVMs provided >150% of the ADI for vitamins. The most frequent minerals were zinc (66.3%), calcium (55.8%), magnesium (54.7%) and copper (48.4%); the least frequent were fluoride (3.2%), phosphorus (17.9%) and chromium (22.1%). Only 25% of MVMs contained iodine. More than two thirds of MVMs provided between 15 and 150% of the ADI for minerals, and few MVMs provided >150% of the ADI. While few MVMs provided <15% of the ADI for vitamins, a considerable fraction did so for minerals (32.7% for magnesium, 26.1% for copper and 22.6% for calcium). Conclusion: There is great variability in the composition and amounts of MVMs in Switzerland. Several MVMs do not comply with Swiss legislation.
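The ADI bands used in the survey (<15%, 15-150%, >150%) are easy to encode; the helper below is a hypothetical illustration, not part of the study:

```python
def adi_band(amount, adi):
    """Classify a nutrient dose relative to its acceptable daily intake (ADI)."""
    pct = 100.0 * amount / adi
    if pct < 15:
        return "<15% ADI"        # likely too little to matter
    if pct <= 150:
        return "15-150% ADI"     # the band most supplements should fall in
    return ">150% ADI"           # above the compliance threshold

# e.g. 10 mg of a nutrient whose ADI is 100 mg falls in the lowest band
print(adi_band(10, 100))  # → <15% ADI
```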
Abstract:
Construction of multiple sequence alignments is a fundamental task in bioinformatics. Multiple sequence alignments are used as a prerequisite in many bioinformatics methods, and consequently the quality of such methods can depend critically on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always provide biologically relevant alignments. Therefore, there is a need for an objective approach to evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In the profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position. This can increase the dimension of the parameter space and cause an overfitting problem. These two research problems are both related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the correct alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. The comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model was dramatically decreased, while the same level of accuracy was maintained.
To conclude, we have shown that conservation can be successfully used in the statistical model for alignment quality assessment and in the estimation of emission probabilities in the profile hidden Markov models.
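A common way to compute a positional conservation score, one of the two measure types discussed above, is one minus the normalized Shannon entropy of an alignment column. The sketch below uses a toy alignment and a 20-letter amino-acid alphabet; it illustrates the general idea rather than the thesis's exact formula:

```python
import math
from collections import Counter

def conservation(column):
    """Positional conservation: 1 - normalized Shannon entropy of a column."""
    counts = Counter(column)
    n = len(column)
    h = -sum(c / n * math.log2(c / n) for c in counts.values())
    h_max = math.log2(20)               # maximum entropy for a 20-letter alphabet
    return 1 - h / h_max                # 1.0 = fully conserved column

# Toy multiple sequence alignment, one sequence per row
alignment = ["MKLV", "MKIV", "MRLV", "MKLV"]
scores = [conservation(col) for col in zip(*alignment)]
print([round(s, 2) for s in scores])
```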
Abstract:
This Master's thesis deals with systematizing event investigation at the Loviisa power plant. At present, events at the Loviisa plant are investigated in several different ways, and the findings of the reports cannot be fully exploited. The thesis reviews different event investigation methods and introduces root cause analysis methods. A question list was developed for investigating low-level events, which makes it easy to identify the problem areas of an event. The question list also enabled a broad classification of events, which helps to recognize common problem areas that emerge across different events. Event investigation also involves risk assessment. In this thesis, a risk matrix was developed to support the assessment of an event's severity and the need for further investigation. Of the root cause analysis methods, the AcciMap method proved usable, and it is recommended for trial in the next root cause analysis concerning human performance.
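A risk matrix like the one developed here maps an event's severity and likelihood to a follow-up decision. The categories, scoring and thresholds below are hypothetical placeholders, not the matrix from the thesis:

```python
# Hypothetical 3x3 risk matrix: severity x likelihood -> follow-up decision
SEVERITY = {"minor": 0, "significant": 1, "serious": 2}
LIKELIHOOD = {"rare": 0, "occasional": 1, "frequent": 2}

def follow_up(severity, likelihood):
    """Decide the follow-up level for an event from the simple risk matrix above."""
    score = SEVERITY[severity] + LIKELIHOOD[likelihood]
    if score >= 3:
        return "root cause analysis"     # severe and/or likely: investigate fully
    if score == 2:
        return "extended review"         # moderate risk: look deeper
    return "record only"                 # low risk: log for trend statistics

print(follow_up("serious", "frequent"))  # → root cause analysis
```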
Abstract:
There is growing concern that flooding is becoming more frequent and severe in Europe. A better understanding of flood regime changes and their drivers is therefore needed. The paper reviews the current knowledge on flood regime changes in European rivers, which has traditionally been obtained through two alternative research approaches. The first approach is the data-based detection of changes in observed flood events. Current methods are reviewed together with their challenges and opportunities, for example observation biases, the merging of different data sources, and accounting for nonlinear drivers and responses. The second approach consists of modelled scenarios of future floods. Challenges and opportunities associated with flood change scenarios are discussed, such as fully accounting for uncertainties in the modelling cascade and feedbacks. To make progress in flood change research, we suggest that a synthesis of these two approaches is needed. This can be achieved by focusing on long-duration records and flood-rich and flood-poor periods rather than on short-duration flood trends only, by formally attributing causes of observed flood changes, by validating scenarios against observed flood regime dynamics, and by developing low-dimensional models of flood changes and feedbacks. The paper finishes with a call for a joint European flood change research network.
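Data-based detection of flood regime changes often starts with a rank-based trend statistic such as the Mann-Kendall S, which counts concordant minus discordant pairs in a flood-peak series. A minimal sketch with hypothetical annual peaks:

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: concordant minus discordant pairs in a series.

    Positive S suggests an upward monotonic trend, negative S a downward one.
    """
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (series[j] > series[i]) - (series[j] < series[i])
    return s

# Hypothetical annual flood peaks (m^3/s) for a single gauge
peaks = [310, 295, 340, 360, 355, 400, 420]
print(mann_kendall_s(peaks))  # → 17 (out of 21 pairs: a clear upward tendency)
```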
Abstract:
Fatal and permanently disabling accidents form only one per cent of all occupational accidents, but in many branches of industry they account for more than half of the accident costs. Furthermore, the human suffering of the victim and his family is greater in severe accidents than in slight ones. For both human and economic reasons, severe accident risks should be identified before injuries occur. It is for this purpose that different safety analysis methods have been developed. This study shows two new possible approaches to the problem. The first is the hypothesis that it is possible to estimate the potential severity of accidents independently of their actual severity. The second is the hypothesis that when workers are also asked to report near accidents, they are particularly prone to report potentially severe near accidents on the basis of their own subjective risk assessment. A field study was carried out in a steel factory. The results supported both hypotheses. The reliability and validity of post-incident estimates of an accident's potential severity were reasonable. About 10% of accidents were estimated to be potentially critical; they could have led to death or very severe permanent disability. Reported near accidents were significantly more severe: about 60% of them were estimated to be critical. Furthermore, the validity of workers' subjective risk assessment, manifested in the near accident reports, proved to be reasonable. The new methods studied require further development and testing. They could be used both routinely in workplaces and in research for identifying and prioritizing accident risks.