974 results for 3D quantitative findings
Abstract:
Introduction: Brugada syndrome, described in 1992 by Pedro and Josep Brugada, is a cardiac syndrome characterized by a particular ST-segment elevation associated with an atypical right bundle branch block in ECG leads V1 to V3. The ECG alterations of Brugada syndrome are classified into three types, of which only type 1 is diagnostic. The exact pathophysiological mechanisms of this syndrome are still controversial. Several hypotheses have been proposed in the literature, two of which have attracted particular attention: 1) the repolarization-disorder model postulates action potentials reduced in duration and amplitude, linked to a change in the distribution of potassium channels; 2) the depolarization-disorder model posits a conduction delay resulting in delayed depolarization. In STEMI, an ST elevation resembling that of Brugada syndrome is explained by two theories: 1) the diastolic injury current theory suggests an elevation of the diastolic potential that is artificially transformed into ST elevation by the filters used in all ECG devices; 2) the systolic injury current theory. Objective: To recreate the ECG manifestations of Brugada syndrome by applying the modifications of the cardiomyocyte action potential reported in the literature. Method: For this work we used "ECGsim", a realistic computer ECG simulator freely available at www.ecgsim.org. This program is based on a reconstruction of the surface ECG from 1500 nodes, each representing the action potentials of the right and left ventricles, epicardial and endocardial. The simulated ECG can therefore be viewed as the integration of all these action potentials, taking into account the conductive properties of the tissues lying between the surface electrodes and the heart. In this program we defined three zones of different sizes encompassing the right ventricular outflow tract. For each zone, we reproduced the action potential modifications described in the repolarization- and depolarization-disorder models and in the systolic and diastolic injury current theories. In addition to the usual twelve leads, we used an electrode positioned at V2IC3 (i.e., third intercostal space) on the virtual thorax of the ECGsim program. Results: For technical reasons, the repolarization-disorder model could not be fully implemented in this work. The depolarization-disorder model does not reproduce Brugada-type alterations but rather a more or less complete right bundle branch block. The diastolic injury current yields ST elevation when the epicardial diastolic potential of the cardiomyocytes of the right ventricular outflow tract is raised. A T-wave inversion appears when the action potential duration is prolonged. The amplitude of the ST elevation depends on the value of the diastolic potential, the size of the lesion and its epicardial or transmural location. The systolic injury current does not produce ST elevation but accentuates the amplitude of the T wave. Discussion and conclusion: In this work, elevation of the diastolic potential combined with prolongation of the action potential duration is the combination that best reproduces the ECG alterations of Brugada syndrome.
Persistence of nodal-type cells in the right ventricular outflow tract could explain these particular modifications of the action potential. The arrhythmic risk in Brugada syndrome could also be explained by abnormal automaticity of nodal-type cells. Thus, alterations of the cellular mechanisms involved in maintaining the diastolic potential could be present in Brugada syndrome, which, to our knowledge, has never been reported in the literature.
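The simulator described above treats a surface lead as the integration of many nodal action potentials weighted by tissue conductivity. The Python sketch below illustrates that idea only conceptually; it is not ECGsim's algorithm, and the node count, action potential shapes, weights and the "RVOT" subset are invented for illustration.

```python
# Conceptual sketch only (not ECGsim): a surface-lead potential approximated as
# a weighted sum of nodal action potentials, phi(t) = sum_n a_n * Vm_n(t).
# The weights a_n stand in for geometry/conductivity transfer coefficients;
# node count, AP shapes and all numbers are illustrative assumptions.
import numpy as np

def action_potential(t, onset_ms, apd_ms, rest_mv=-85.0, plateau_mv=15.0):
    """Crude rectangular action potential: rest -> plateau -> back to rest."""
    vm = np.full_like(t, rest_mv)
    vm[(t >= onset_ms) & (t < onset_ms + apd_ms)] = plateau_mv
    return vm

t = np.arange(0.0, 500.0, 1.0)                      # time axis in ms
n_nodes = 100                                       # stand-in for ECGsim's 1500 nodes
rng = np.random.default_rng(0)
onsets = rng.uniform(0.0, 40.0, n_nodes)            # activation times (ms)
weights = rng.uniform(0.5, 1.5, n_nodes) / n_nodes  # toy transfer coefficients

def pseudo_ecg(rvot_nodes, rvot_rest_mv=-85.0, rvot_apd_ms=300.0):
    """Sum all nodal APs; nodes in 'rvot_nodes' use the modified parameters."""
    lead = np.zeros_like(t)
    for n in range(n_nodes):
        if n in rvot_nodes:
            vm = action_potential(t, onsets[n], rvot_apd_ms, rest_mv=rvot_rest_mv)
        else:
            vm = action_potential(t, onsets[n], 300.0)
        lead += weights[n] * vm
    return lead

baseline = pseudo_ecg(rvot_nodes=set())
# Raised diastolic potential and prolonged APD in a small "outflow tract" zone:
lesion = pseudo_ecg(rvot_nodes=set(range(10)), rvot_rest_mv=-60.0, rvot_apd_ms=350.0)
print("max lead difference, lesion vs baseline (a.u.):", float(np.max(lesion - baseline)))
```

In such a toy model, raising the diastolic (resting) potential of the outflow-tract nodes lifts the summed signal during diastole; an AC-coupled, filtered ECG would render this baseline shift as an apparent ST-segment elevation, in line with the diastolic injury current mechanism discussed above.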
Abstract:
Introduction: Rotenone is a botanical pesticide derived from extracts of Derris roots, traditionally used as a piscicide but also as an industrial insecticide for home gardens. Its mechanism of action is potent inhibition of the mitochondrial respiratory chain, uncoupling oxidative phosphorylation by blocking electron transport at complex I. Despite its classification as mildly to moderately toxic to humans (estimated LD50 300-500 mg/kg), the acute toxicity of rotenone varies strikingly depending on the formulation (solvents). Human fatalities with rotenone-containing insecticides have rarely been reported, and rapid deterioration within a few hours of ingestion has previously been described in one case. Case report: A 49-year-old Tamil man with a history of asthma ingested 250 mL of an insecticide containing 1.24% rotenone (3.125 g, 52.1-62.5 mg/kg) in a suicide attempt at home. The product was not labeled as toxic. One hour later, he vomited repeatedly and emergency services were alerted. He was found unconscious with irregular respiration and was intubated. On arrival at the emergency department, he was comatose (GCS 3) with fixed, dilated pupils and absent corneal reflexes. Physical examination revealed hemodynamic instability with hypotension (55/30 mmHg) and bradycardia (52 bpm). Significant laboratory findings were lactic acidosis (pH 6.97, lactate 17 mmol/L) and hypokalemia (2 mmol/L). Cranial computed tomography (CT) showed early cerebral edema. A single dose of activated charcoal was given. Intravenous hydration, ephedrine, repeated boluses of dobutamine, and a norepinephrine infusion at 90 micrograms/h stabilized blood pressure temporarily. Atropine had a minimal effect on heart rate (58 bpm). Intravenous lipid emulsion was considered (log Pow 4.1), but there was rapid deterioration with refractory hypotension and acute circulatory failure. The patient died 5 h after ingestion of the insecticide. No autopsy was performed. Quantitative analysis of serum by liquid chromatography coupled with high-resolution/accurate-mass mass spectrometry (LC-HR/AM-MS) revealed a rotenone concentration of 560 ng/mL. Other substances were excluded by gas chromatography-mass spectrometry (GC-MS) and liquid chromatography-mass spectrometry (LC-MS/MS). Conclusion: The clinical course was characterized by early severe symptoms and a rapidly fatal evolution, compatible with inhibition of mitochondrial energy supply. Although rotenone is classified as mildly to moderately toxic, physicians must be aware that suicidal ingestion of emulsified concentrates may be rapidly fatal.
Abstract:
(n=3): stridor, cyanosis, cough (one each). Local swelling after chewing or swallowing soap developed at the earliest after 20 minutes and persisted beyond 24 hours in some cases. Treatment with antihistamines and/or steroids relieved the symptoms in 9 cases. Conclusion: Bar soap ingestion by seniors carries a risk of severe local reactions. Half the patients developed symptoms, predominantly swelling of the tongue and/or lips (38%). Cognitive impairment, particularly in the cases of dementia (37%), may increase the risk of unintentional ingestion. Chewing and intraoral retention of soap lead to prolonged contact with the mucosal membranes. Age-associated physiological changes of the oral mucosa probably promote the irritant effects of the surfactants. Medical treatment with antihistamines and corticosteroids usually leads to rapid resolution of symptoms. Without treatment, there may be a risk of airway obstruction.
Abstract:
Elucidating the evolution of Phlebotominae is important not only to revise their taxonomy, but also to help understand the origin of the genus Leishmania and its relationship with humans. Our study is a phenetic portrayal of this history based on the genetic relationships among some New World and Old World taxa. We used both multilocus enzyme electrophoresis and morphometry on 24 male specimens of the Old World genus Phlebotomus (with three of its subgenera: Phlebotomus, Spelaeophlebotomus and Australophlebotomus), and on 67 male specimens of the three New World genera, Warileya, Brumptomyia and Lutzomyia (with three subgenera of Lutzomyia: Lutzomyia, Oligodontomyia and Psychodopygus). Phenetic trees derived from both techniques were similar, but disclosed relationships that disagree with the present classification of sand flies. The need for a true evolutionary approach is stressed.
Abstract:
Morphological variation among geographic populations of the New World sand fly Lutzomyia quinquefer (Diptera, Phlebotominae) was analyzed, and patterns probably associated with species emergence were detected. This was achieved by examining the relationships of the size and shape components of morphological attributes, and their correlation with geographic parameters. Quantitative and qualitative morphological characters are described, showing, in both sexes, differences among local populations from four Departments of Bolivia. Four arguments are then developed to reject the hypothesis of environment as the sole source of morphological variation: (1) the persistence of differences after removing the allometric consequences of size variation, (2) the association of local metric properties with meristic and qualitative attributes, rather than with altitude, (3) the positive and significant correlation between metric and geographic distances, and (4) the absence of a significant correlation between altitude and general size of the insects.
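Argument (3) rests on a correlation between morphometric and geographic distance matrices. The abstract does not state how this correlation was tested; a Mantel-type permutation test is one standard way to assess it, and the sketch below is a generic illustration with invented matrices, not the study's data or procedure.

```python
# Generic Mantel-type permutation test between two distance matrices.
# All values below are invented placeholders; the study's data are not shown.
import numpy as np

def mantel_test(d_a, d_b, n_perm=999, seed=0):
    """Pearson r of the upper triangles plus a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d_a, k=1)
    r_obs = np.corrcoef(d_a[iu], d_b[iu])[0, 1]
    n = d_a.shape[0]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(n)                               # relabel populations
        r = np.corrcoef(d_a[np.ix_(p, p)][iu], d_b[iu])[0, 1]
        if r >= r_obs:
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)

# Six hypothetical local populations: pairwise geographic distances (km) and a
# morphometric distance matrix that loosely tracks them, plus noise.
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 500.0, (6, 2))
d_geo = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
d_met = d_geo / 100.0 + np.abs(rng.normal(0.0, 0.5, d_geo.shape))
d_met = (d_met + d_met.T) / 2.0
np.fill_diagonal(d_met, 0.0)
print(mantel_test(d_met, d_geo))
```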
Abstract:
Aim: In Western Europe, HIV/AIDS prevention has been based on the provision of information intended to lead the public to voluntarily adapt their behaviour so as to avoid the risk of virus transmission. Whether conveyed in written or oral form, prevention messages are essentially verbal. Sociolinguistic research confirms that, even within a given culture, the meaning attributed to lexical items varies. It was hypothesised that understandings of the terms used in HIV/AIDS prevention in French-speaking Switzerland would vary, and research was undertaken to identify the level and nature of this variation both between and among those who transmit (prevention providers) and those who receive (the public) the messages. Method/issue: All HIV/AIDS prevention material available in French-speaking Switzerland in 2004 was assembled and a corpus of 50 key documents identified. Two series of lexical items were generated from this corpus: one composed of technical terms potentially difficult to understand, and the other of terms used in everyday language with implicit, and therefore potentially variable, meaning. The two lists of terms were investigated in qualitative interviews with stratified purposive samples of the general public (n=60) and prevention providers (n=30), using standard sociolinguistic methodology. A further quantitative study (CATI) in the general population (17-49 yrs; n=500) investigated understandings of 15 key prevention terms found in the qualitative research to be associated with high levels of disagreement. Results/comments: Selected aspects of the results will be presented. As an illustration: the meanings attributed to the different terms varied among both the public and the providers. For example, when a relationship is described as "stable", this may be understood as implying exclusive sexual relations or long duration, with an interaction between the two traits; the term "sexual intercourse" may or may not be used to refer to oral sex; "making love" may or may not necessarily include an act of penetration; the pre-ejaculate is qualified by some as sperm, and by others not... Understanding of frequently used "technical" terms in prevention was far from universal; for example, only around half of respondents understood the meaning of "safer sex". Degree of understanding of these terms was linked to education, whereas variability in meaning in everyday language was not linked to socio-economic variables. Discussion: The findings indicate the need for greater awareness of the heterogeneity of meaning around the terms regularly used in prevention. Greater attention should be paid to the formulation of prevention messages, and providers should take precautions to ensure that the meanings they wish to convey are those perceived by the receivers of their messages. Wherever possible, terms used should be defined and meanings rendered explicit.
Abstract:
OBJECTIVE: The purpose of this study was to compare the use of different variables to measure the clinical wear of two denture tooth materials in two analysis centers. METHODS: Twelve edentulous patients were provided with full dentures. Two different denture tooth materials (experimental material and control) were placed randomly in accordance with the split-mouth design. For wear measurements, impressions were made after an adjustment phase of 1-2 weeks and after 6, 12, 18, and 24 months. The occlusal wear of the posterior denture teeth of 11 subjects was assessed in two study centers by use of plaster replicas and 3D laser-scanning methods. In both centers sequential scans of the occlusal surfaces were digitized and superimposed. Wear was described by use of four different variables. Statistical analysis was performed after log-transformation of the wear data by use of the Pearson and Lin correlations and a mixed linear model. RESULTS: Mean occlusal vertical wear of the denture teeth after 24 months was between 120 μm and 212 μm, depending on wear variable and material. For three of the four variables, wear of the experimental material was statistically significantly less than that of the control. Comparison of the two study centers, however, revealed that correlation of the wear variables was only moderate, whereas strong correlation was observed among the different wear variables evaluated within each center. SIGNIFICANCE: Moderate correlation was observed for clinical wear measurements by optical 3D laser scanning in two different study centers. For the two denture tooth materials, wear measurements limited to the attrition zones led to the same qualitative assessment.
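For reference, the sketch below shows a generic computation of the two agreement statistics named above, Pearson's r and Lin's concordance correlation coefficient, on log-transformed wear values. The function reflects the standard formulas as an assumption, and the example numbers are invented, not the study's data.

```python
# Generic illustration of Pearson's r and Lin's concordance correlation
# coefficient (CCC) on log-transformed wear values. Example data are invented.
import numpy as np

def pearson_and_lin_ccc(x, y):
    """Return Pearson's r and Lin's CCC for paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    sx2, sy2 = x.var(), y.var()                       # population variances
    sxy = ((x - x.mean()) * (y - y.mean())).mean()    # population covariance
    ccc = 2.0 * sxy / (sx2 + sy2 + (x.mean() - y.mean()) ** 2)
    return r, ccc

# Hypothetical vertical-wear values (micrometres) from two centers,
# log-transformed before analysis as described in the abstract.
center_a = np.log([130.0, 150.0, 170.0, 190.0, 205.0, 212.0])
center_b = np.log([120.0, 160.0, 155.0, 200.0, 190.0, 230.0])
print(pearson_and_lin_ccc(center_a, center_b))
```

Unlike Pearson's r, the CCC penalizes systematic shifts and scale differences between the two centers, which is why it is the more demanding measure of between-center agreement.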
Abstract:
This summary report follows on from the publication of the Northern Ireland physical activity strategy in 1996 and the subsequent publication of the strategy action plan in 1998. Within this strategy action plan a recommendation was made for the health sector that research should be carried out to evaluate and compare the cost of investing in physical activity programmes against the cost of treating preventable illness. To help in the development of this key area, the Department of Health, Social Services and Public Safety's Economics Branch agreed to develop a model that would seek to establish the extent of avoidable deaths from physical inactivity and, as a consequence, the avoidable economic and healthcare costs for Northern Ireland.
Abstract:
The feasibility of three-dimensional (3D) whole-heart imaging of the coronary venous (CV) system was investigated. The hypothesis that coronary magnetic resonance venography (CMRV) can be improved by using an intravascular contrast agent (CA) was tested. A simplified model of the contrast in T2-prepared steady-state free precession (SSFP) imaging was applied to calculate optimal T2-preparation durations for the various deoxygenation levels expected in venous blood. Non-contrast-agent (nCA)- and CA-enhanced images were compared for the delineation of the coronary sinus (CS) and its main tributaries. A quantitative analysis of the resulting contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) in both approaches was performed. Precontrast visualization of the CV system was limited by the poor CNR between large portions of the venous blood and the surrounding tissue. Postcontrast, a significant increase in CNR between the venous blood and the myocardium (Myo) resulted in a clear delineation of the target vessels. The CNR improvement was 347% (P < 0.05) for the CS, 260% (P < 0.01) for the mid cardiac vein (MCV), and 430% (P < 0.05) for the great cardiac vein (GCV). The improvement in SNR was on average 155%, but was not statistically significant for the CS and the MCV. The signal of the Myo could be significantly reduced to about 25% (P < 0.001).
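For context on the quoted CNR and SNR figures, the conventional definitions and a relative-improvement calculation are sketched below. The symbols are generic assumptions rather than notation from the study, and the percentage improvement is assumed to be reported as the relative change from pre- to postcontrast.

```latex
% Generic definitions assumed for the quoted figures: S denotes the mean signal
% in a region of interest (venous blood or myocardium) and \sigma_n the noise SD.
\[
  \mathrm{SNR}_{\mathrm{vein}} = \frac{S_{\mathrm{vein}}}{\sigma_n},
  \qquad
  \mathrm{CNR}_{\mathrm{vein\text{-}myo}} = \frac{S_{\mathrm{vein}} - S_{\mathrm{myo}}}{\sigma_n},
  \qquad
  \Delta\mathrm{CNR}\,[\%] =
  \frac{\mathrm{CNR}_{\mathrm{post}} - \mathrm{CNR}_{\mathrm{pre}}}{\mathrm{CNR}_{\mathrm{pre}}}\times 100 .
\]
```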
Abstract:
Independent research jointly commissioned by the Department of Health, Social Services and Public Safety (DHSSPS) and the HSC R&D Division.
Abstract:
On 27 January 2011 the Department of Health, Social Services and Public Safety (DHSSPS) launched a three-month public consultation on a new draft Physical and Sensory Disability Strategy and Action Plan (2011-2015). The aim of the consultation was to provide the opportunity for a range of different stakeholders (public authorities and organisations, individuals including persons with disabilities, and community and voluntary organisations) from across Northern Ireland to give feedback on the suggested priorities and challenges detailed in the document. The Department recognised the need for a new Disability Strategy and Action Plan, not least to address new and developing challenges and opportunities. These include:
• Obligations taken on by the UK and NI in signing and ratifying the UN Convention on the Rights of Persons with Disabilities;
• New innovations and models of care, support and treatment available within health and social care;
• The current demographic trends and financial constraints being faced by everyone.
Abstract:
A factor limiting preliminary rockfall hazard mapping at the regional scale is often the lack of knowledge of potential source areas. Nowadays, high-resolution topographic data (LiDAR) can account for realistic landscape details even at large scale. With such fine-scale morphological variability, quantitative geomorphometric analyses become a relevant approach for delineating potential rockfall instabilities. Using the digital elevation model (DEM)-based 'slope families' concept over areas of similar lithology, together with the cliff and scree zones available from the 1:25,000 topographic map, a rockfall hazard susceptibility map was drawn up for the canton of Vaud, Switzerland, in order to provide a relevant hazard overview. Slope surfaces above morphometrically defined threshold angles were considered as rockfall source zones. 3D modelling (CONEFALL) was then applied to each of the estimated source zones in order to assess the maximum runout length. Comparisons with known events and other rockfall hazard assessments show good agreement, demonstrating that it is possible to assess rockfall activity over large areas from DEM-based parameters and topographic elements.
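A hedged sketch of the kind of DEM-based geomorphometric step described above (not the study's workflow or the CONEFALL model): compute slope angles from a gridded elevation model and flag cells above a threshold angle as candidate rockfall source zones. The toy DEM, cell size and 45° threshold are illustrative assumptions; the study's actual slope-family thresholds are lithology-dependent and not reproduced here.

```python
# Illustrative slope-thresholding on a toy DEM; all values are assumptions.
import numpy as np

def slope_deg(dem, cell_size):
    """Slope angle (degrees) from a 2-D elevation array via finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def source_mask(dem, cell_size, threshold_deg=45.0):
    """Boolean mask of cells whose slope exceeds the threshold angle."""
    return slope_deg(dem, cell_size) >= threshold_deg

# Toy 1 m DEM: a gentle regional slope with a steep artificial scarp in the middle.
x = np.arange(100.0)
dem = np.tile(0.1 * x, (100, 1))       # 10% regional gradient
dem[:, 50:] += 30.0                    # abrupt 30 m step creates steep cells
mask = source_mask(dem, cell_size=1.0, threshold_deg=45.0)
print("potential source cells:", int(mask.sum()))
```

In a workflow like the one summarized above, the resulting source cells would then feed a runout model to estimate the maximum propagation distance.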
Abstract:
This article describes the composition of fingermark residue as a complex system with numerous compounds coming from different sources and evolving over time from the initial composition (the composition right after deposition) to the aged composition (the evolution of the initial composition over time). This complex system additionally varies under the effect of numerous influence factors grouped into five classes: donor characteristics, deposition conditions, substrate nature, environmental conditions and the applied enhancement techniques. The initial and aged compositions as well as the influence factors are thus considered in this article to provide a qualitative and quantitative review of all compounds identified in fingermark residue to date. The analytical techniques used to obtain these data are also enumerated. This review highlights that, despite the numerous analytical processes already proposed and tested to elucidate fingermark composition, in-depth knowledge is still lacking. Thus, there is a real need for future research on the composition of fingermark residue, focusing particularly on quantitative measurements, aging kinetics and the effects of influence factors. The results of such research are particularly important for advances in fingermark enhancement and dating techniques.
Abstract:
BACKGROUND: To report the clinical, histopathological and immunohistochemical findings associated with two novel mutations within the TGFBI gene. METHODS: The genotype of 41 affected members of 16 families and nine sporadic cases was investigated by direct sequencing of the TGFBI gene. Clinical, histological and immunohistochemical characteristics of the corneal opacification were recorded and compared with the coding-region changes in the TGFBI gene. RESULTS: A novel mutation, Leu509Pro, was detected in one family with a geographic pattern-like clinical phenotype. Histopathologically, we found amyloid together with non-amyloid deposits and immunohistochemical staining with the keratoepithelin (KE) antibodies KE2 and KE15. In two families and one sporadic case the novel mutation Gly623Arg, associated with a late-onset, map-like corneal dystrophy, was identified; here amyloid deposits and immunohistochemical staining with only the KE2 antibody were found. Furthermore, five already known mutations are reported: Arg124Cys, Arg555Trp, Arg124His, His626Arg and Ala546Asp, in 13 families and five sporadic cases of German origin. The underlying gene defect within the TGFBI gene was not identified in any of the four probands with Thiel-Behnke corneal dystrophy. CONCLUSIONS: The two novel mutations within the TGFBI gene add another two phenotypes with atypical immunohistochemical and histopathological features to those reported so far.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
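A hedged sketch of the kind of gradual-deformation proposal described above (not the thesis implementation): the current model is combined with an independent prior realization as m' = m cos(theta) + z sin(theta), which leaves the Gaussian prior invariant so that the Metropolis acceptance reduces to the likelihood ratio, and theta tunes the perturbation strength. The prior covariance, likelihood and tuning values below are placeholders.

```python
# Hedged sketch (not the thesis code) of a gradual-deformation proposal inside
# a Metropolis sampler: m_prop = m*cos(theta) + z*sin(theta), with z an
# independent draw from the Gaussian prior. This preserves the prior, so the
# acceptance ratio reduces to the likelihood ratio.
import numpy as np

rng = np.random.default_rng(42)
n = 200                                             # number of model parameters
idx = np.arange(n)
cov = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 20.0)  # toy exponential covariance
L = np.linalg.cholesky(cov + 1e-8 * np.eye(n))

def prior_draw():
    return L @ rng.standard_normal(n)

def log_likelihood(m, data, sigma=0.1):
    """Placeholder likelihood: noisy observations of every 10th model cell."""
    return -0.5 * np.sum((m[::10] - data) ** 2) / sigma ** 2

data = prior_draw()[::10] + 0.1 * rng.standard_normal(n // 10)
theta = 0.2                                         # perturbation strength (radians)

m = prior_draw()
ll = log_likelihood(m, data)
accepted = 0
for _ in range(5000):
    z = prior_draw()                                # independent prior realization
    m_prop = m * np.cos(theta) + z * np.sin(theta)  # gradual deformation
    ll_prop = log_likelihood(m_prop, data)
    if np.log(rng.uniform()) < ll_prop - ll:        # prior-preserving acceptance
        m, ll = m_prop, ll_prop
        accepted += 1
print("acceptance rate:", accepted / 5000)
```

Smaller values of theta give gentler perturbations and higher acceptance at the cost of slower mixing; this trade-off governs how many iterations are needed to obtain effectively independent posterior samples, which is the efficiency criterion discussed above.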