882 results for surface based methods
Abstract:
Introduction Toxoplasmosis may be life-threatening in fetuses and in immune-deficient patients. Conventional laboratory diagnosis of toxoplasmosis is based on the presence of IgM and IgG anti-Toxoplasma gondii antibodies; however, molecular techniques have emerged as alternative tools due to their increased sensitivity. The aim of this study was to compare the performance of 4 PCR-based methods for the laboratory diagnosis of toxoplasmosis. One hundred pregnant women who seroconverted during pregnancy were included in the study. The definition of cases was based on a 12-month follow-up of the infants. Methods Amniotic fluid samples were subjected to DNA extraction and amplification by the following 4 Toxoplasma techniques performed with parasite B1 gene primers: conventional PCR, nested-PCR, multiplex-nested-PCR, and real-time PCR. Seven parameters were analyzed: sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), positive likelihood ratio (PLR), negative likelihood ratio (NLR) and efficiency (Ef). Results Fifty-nine of the 100 infants had toxoplasmosis; 42 (71.2%) had IgM antibodies at birth but were asymptomatic, and the remaining 17 cases had non-detectable IgM antibodies but high IgG antibody titers that were associated with retinochoroiditis in 8 (13.5%) cases, abnormal cranial ultrasound in 5 (8.5%) cases, and signs/symptoms suggestive of infection in 4 (6.8%) cases. The conventional PCR assay detected 50 cases (9 false-negatives), nested-PCR detected 58 cases (1 false-negative and 4 false-positives), multiplex-nested-PCR detected 57 cases (2 false-negatives), and real-time PCR detected 58 cases (1 false-negative). Conclusions The real-time PCR assay was the best-performing technique based on the parameters of Se (98.3%), Sp (100%), PPV (100%), NPV (97.6%), PLR (∞), NLR (0.017), and Ef (99%).
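As a worked check of the reported parameters, the counts implied by the abstract (59 infected infants and 41 uninfected; real-time PCR: 58 true positives, 1 false negative, no false positives) reproduce the stated figures. A minimal sketch of the standard 2×2-table formulas:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Standard diagnostic-test parameters from a 2x2 contingency table."""
    se = tp / (tp + fn)                               # sensitivity
    sp = tn / (tn + fp)                               # specificity
    ppv = tp / (tp + fp)                              # positive predictive value
    npv = tn / (tn + fn)                              # negative predictive value
    plr = se / (1 - sp) if sp < 1 else float("inf")   # positive likelihood ratio
    nlr = (1 - se) / sp                               # negative likelihood ratio
    ef = (tp + tn) / (tp + fp + fn + tn)              # efficiency (overall accuracy)
    return se, sp, ppv, npv, plr, nlr, ef

# Real-time PCR counts from the abstract: 59 infected (58 detected,
# 1 false-negative) and 41 uninfected (no false-positives).
se, sp, ppv, npv, plr, nlr, ef = diagnostic_performance(tp=58, fp=0, fn=1, tn=41)
print(round(100 * se, 1), round(100 * sp), round(nlr, 3), round(100 * ef))
# → 98.3 100 0.017 99
```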
Abstract:
This paper presents part of a study aimed at finding a suitable, yet cost-effective, surface finish for a steel structure exposed to the car-washing environment and corrosive chemicals. The initial cost, life-cycle cost and average equivalent annual cost (AEAC) of surface-finishing methods were calculated for a steel structure using the LCCC algorithm developed by the American Galvanizers Association (AGA). The cost study covered 45 common surface-finish systems, including hot-dip galvanization (HDG), metallization, acrylic, alkyd and epoxy coatings, as well as duplex coatings such as epoxy zinc and inorganic zinc (IOZ). The results show that the initial, life-cycle and AEAC costs of hot-dip galvanization are the lowest among all the methods, followed by coal tar epoxy painting. The average annual cost of HDG for this structure was estimated at about €0.22/m², while the other cost-effective alternatives were IOZ, polyurea, waterborne epoxy and the IOZ/epoxy duplex coating.
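The AGA's LCCC algorithm itself is not reproduced in the abstract. As a generic illustration of how a life-cycle (present-value) cost is converted into an average equivalent annual cost, here is a sketch using the standard capital-recovery factor; the interest rate, service life and cost below are made-up numbers, not values from the study:

```python
def capital_recovery_factor(rate, years):
    """Factor that spreads a present value evenly over `years` at interest `rate`."""
    f = (1 + rate) ** years
    return rate * f / (f - 1)

def average_equivalent_annual_cost(life_cycle_cost, rate, years):
    """AEAC: the life-cycle (present-value) cost expressed as a constant
    annual payment over the service life."""
    return life_cycle_cost * capital_recovery_factor(rate, years)

# Hypothetical numbers: a finish with a life-cycle cost of 5 EUR/m2
# over a 40-year service life at a 3% real interest rate.
print(round(average_equivalent_annual_cost(5.0, 0.03, 40), 3))  # → 0.216
```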
Abstract:
Purpose: To determine the relationship of goblet cell density (GCD) with tear function and ocular surface physiology. Methods: This was a cross-sectional study of 35 asymptomatic subjects with a mean age of 23.8±3.6 years. Tear film assessment and examination of the conjunctiva and cornea were performed in each subject. Conjunctival impression cytology was performed by applying a Nitrocellulose Millipore MF™ membrane filter over the superior bulbar conjunctiva. The filter paper was then fixed with 96% ethanol and stained with Periodic Acid Schiff and Hematoxylin and Eosin. GCD was determined by optical microscopy. The relationship between GCD and Schirmer score, tear break-up time (TBUT), bulbar redness, limbal redness and corneal staining was determined. Results: The mean GCD was 151±122 cells/mm². GCD was higher in eyes with higher Schirmer scores, but the difference was not significant (p = 0.75). There was a significant relationship of GCD with TBUT (p = 0.042). GCD was not correlated with bulbar redness (p = 0.126), limbal redness (p = 0.054) or corneal staining (p = 0.079). No relationship of GCD with the age or gender of the subjects (p > 0.05) was observed. Conclusion: GCD correlated with TBUT, but no significant correlation was found with the aqueous portion of the tear film, limbal or bulbar redness, or corneal staining.
Abstract:
A new electrical method is proposed for determining the apparent resistivity of multiple earth layers located underwater. The method is based on direct-current geoelectric sounding principles. A layered earth model is used to simulate the stratigraphic target. The measurement array is of pole-pole type; it is located underwater and oriented vertically. This particular electrode configuration is very useful when conventional electrical methods cannot be used, especially when the water depth becomes very large. The calculated apparent resistivity shows a substantial increase in the quality of the signal measured from underwater targets, which produce little or no response when conventional surface-electrode methods are used. In practice, however, factors such as water stratification, underwater currents and meteorological conditions complicate the interpretation of field results. A case study is presented in which field surveys carried out on Lake Geneva were interpreted using the calculated apparent-resistivity master curves.
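As background to the apparent-resistivity calculation: a submerged pole-pole array far from any boundary sees a full space rather than a half space, so its geometric factor is K = 4πa instead of the familiar surface-array value 2πa. A minimal sketch (the function name and numerical values are illustrative, not from the paper):

```python
import math

def pole_pole_apparent_resistivity(delta_v, current, spacing, full_space=True):
    """Apparent resistivity for a pole-pole array with electrode spacing `a`.

    Submerged electrodes far from boundaries: K = 4*pi*a (full space).
    Conventional surface array: K = 2*pi*a (half space).
    """
    k = (4.0 if full_space else 2.0) * math.pi * spacing
    return k * delta_v / current

# Consistency check: in a homogeneous full space of resistivity rho, a current
# I injected at a pole produces V = rho*I/(4*pi*a) at distance a, so the
# formula recovers rho exactly.
rho = 100.0   # ohm-m (assumed)
a = 2.0       # m electrode spacing (assumed)
I = 0.5       # A injected current (assumed)
v = rho * I / (4.0 * math.pi * a)
print(round(pole_pole_apparent_resistivity(v, I, a), 6))  # → 100.0
```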
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
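A minimal numpy sketch of the gradual-deformation idea underlying the new perturbation approach (heavily simplified relative to its use inside an MCMC sampler; field sizes and the deformation angle are arbitrary choices): combining two independent standard-Gaussian fields with cos/sin weights yields another standard-Gaussian field, so the prior statistics are preserved while the angle controls the perturbation strength.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradual_deformation(m_current, m_proposal, theta):
    """Combine two independent standard-Gaussian fields.

    cos(theta)*m1 + sin(theta)*m2 is again standard Gaussian because
    cos^2 + sin^2 = 1, so prior statistics are preserved; theta controls
    the perturbation strength (theta = 0 keeps the current model,
    theta = pi/2 replaces it entirely).
    """
    return np.cos(theta) * m_current + np.sin(theta) * m_proposal

m1 = rng.standard_normal(10_000)   # current model realization
m2 = rng.standard_normal(10_000)   # independent proposal realization
m_new = gradual_deformation(m1, m2, theta=0.15)
print(round(m_new.std(), 1))  # → 1.0 (prior variance preserved)
```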
Abstract:
Anti-basal ganglia antibodies (ABGAs) have been suggested to be a hallmark of autoimmunity in Gilles de la Tourette's syndrome (GTS), possibly related to prior exposure to streptococcal infection. To detect whether the presence of ABGAs is associated with subtle structural changes in GTS, whole-brain analyses using independent sets of T1-weighted and diffusion tensor imaging MRI-based methods were performed on 22 adults with GTS with (n = 9) and without (n = 13) detectable ABGAs in the serum. Voxel-based morphometry analysis failed to detect any significant difference in grey matter density between the ABGA-positive and ABGA-negative groups in the caudate nuclei, putamina, thalami and frontal lobes. These results suggest that ABGA synthesis is not related to structural changes in grey and white matter (detectable with these methods) within frontostriatal circuits.
Abstract:
To assess the preferred methods to quit smoking among current smokers. Cross-sectional, population-based study conducted in Lausanne between 2003 and 2006, including 988 current smokers. Preference was assessed by questionnaire. Evidence-based (EB) methods were nicotine replacement, bupropion, and physician or group consultations; non-EB methods were acupuncture, hypnosis and autogenic training. EB methods were frequently (physician consultation: 48%, 95% confidence interval (45-51); nicotine replacement therapy: 35% (32-38)) or rarely (bupropion and group consultations: 13% (11-15)) preferred by the participants. Non-EB methods were preferred by a third (acupuncture: 33% (30-36)), a quarter (hypnosis: 26% (23-29)) or a seventh (autogenic training: 13% (11-15)) of respondents. On multivariate analysis, women preferred both EB and non-EB methods more frequently than men (odds ratio and 95% confidence interval: 1.46 (1.10-1.93) and 2.26 (1.72-2.96) for any EB and non-EB method, respectively). Preference for non-EB methods was higher among highly educated participants, while no such relationship was found for EB methods. Many smokers are unaware of the full variety of methods to quit smoking. Better information regarding these methods is necessary.
Abstract:
We assessed fluconazole susceptibility in 52 Candida tropicalis clinical strains using seven antifungal susceptibility methods, including broth microdilution (BMD) [standard M27-A3 (at neutral and acid pH), ATB Fungus 3, the Vitek 2 system and flow cytometric analysis] and agar-based methods (disk diffusion and E-test). Trailing growth, detection of cell-associated secreted aspartic proteases (Saps) and the morphological and ultrastructural traits of these clinical strains were also examined. The ranges of fluconazole 24-h minimum inhibitory concentration (MIC) values were similar among all methods. The essential agreement among the methods used for MIC determination was excellent, and all methods categorised all strains as susceptible, except for one strain that showed a minor error. The presence of the trailing effect was assessed by six methods. Trailing positivity was observed for 86.5-100% of the strains; the exception was the BMD-Ac method, in which trailing growth was not observed. Morphological and ultrastructural alterations were detected in C. tropicalis trailing cells, including mitochondrial swelling and cell walls with irregular shapes. We tested the production of Saps in 13 C. tropicalis strains expressing trailing growth by flow cytometry. Our results showed that all of the C. tropicalis strains up-regulated surface Sap expression after 24 h or 48 h of exposure to fluconazole, which was not observed in untreated yeast strains. We conclude that C. tropicalis strains expressing trailing growth present particular features at both the biological and ultrastructural levels.
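Essential agreement between two susceptibility-testing methods is conventionally defined as the percentage of strains whose MICs agree within ±2 two-fold dilution steps. A small illustration with hypothetical MIC values (not data from this study):

```python
import math

def essential_agreement(mics_ref, mics_test, dilutions=2):
    """Percent of strains whose test-method MIC lies within +/- `dilutions`
    two-fold dilution steps of the reference-method MIC."""
    within = sum(
        abs(math.log2(t) - math.log2(r)) <= dilutions
        for r, t in zip(mics_ref, mics_test)
    )
    return 100.0 * within / len(mics_ref)

# Hypothetical fluconazole MICs (ug/mL) for 5 strains by two methods;
# the last strain differs by 3 dilution steps and so falls outside agreement.
ref = [0.25, 0.5, 1.0, 0.5, 2.0]
test = [0.5, 0.5, 4.0, 0.25, 16.0]
print(essential_agreement(ref, test))  # → 80.0
```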
Abstract:
BACKGROUND Most textbooks contain messages relating to health. This profuse information requires analysis with regard to its quality. The objective was to identify the scientific evidence on which the health messages in textbooks are based. METHODS The degree of evidence on which such messages are based was identified, and the messages were subsequently classified into three categories: messages with a high, medium or low level of evidence; messages with an unknown level of evidence; and messages with no known evidence. RESULTS 844 messages were studied. Of this total, 61% were classified as messages with an unknown level of evidence. Less than 15% fell into the category where the level of evidence was known, and less than 6% were classified as possessing a high level of evidence. More than 70% of the messages relating to "Balanced Diets and Malnutrition", "Food Hygiene", "Tobacco", "Sexual behaviour and AIDS" and "Rest and ergonomics" are based on an unknown level of evidence. "Oral health" registered the highest percentage of messages based on a high level of evidence (37.5%), followed by "Pregnancy and newly born infants" (35%). Of the total, 24.6% were not based on any known evidence. Two of the messages appeared to contravene known evidence. CONCLUSION Many of the messages included in school textbooks are not based on scientific evidence. Standards must be established to facilitate the production of texts that include messages based on the best available evidence and which can improve children's health more effectively.
Abstract:
High-resolution tomographic imaging of the shallow subsurface is becoming increasingly important for a wide range of environmental, hydrological and engineering applications. Because of their superior resolution power, their sensitivity to pertinent petrophysical parameters, and their far-reaching complementarity, both seismic and georadar crosshole imaging are of particular importance. To date, corresponding approaches have largely relied on asymptotic, ray-based methods, which account for only a very small part of the observed wavefields, inherently suffer from limited resolution, and may prove inadequate in complex environments. These problems can potentially be alleviated through waveform inversion. We have developed an acoustic waveform inversion approach for crosshole seismic data whose kernel is based on a finite-difference time-domain (FDTD) solution of the 2-D acoustic wave equations. This algorithm is tested on and applied to synthetic data from seismic velocity models of increasing complexity and realism, and the results are compared to those obtained using state-of-the-art ray-based traveltime tomography. Regardless of the heterogeneity of the underlying models, the waveform inversion approach has the potential to reliably resolve both the geometry and the acoustic properties of features smaller than half a dominant wavelength. Our results do, however, also indicate that, within their inherent resolution limits, ray-based approaches provide an effective and efficient means of obtaining satisfactory tomographic reconstructions of the seismic velocity structure in the presence of mild to moderate heterogeneity and in the absence of strong scattering. Conversely, the excess effort of waveform inversion provides the greatest benefits in the most heterogeneous, and arguably most realistic, environments, where multiple scattering effects tend to be prevalent and ray-based methods lose most of their effectiveness.
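The inversion kernel is an FDTD solver for the 2-D acoustic wave equation. Below is a heavily simplified forward-modeling sketch of the second-order update such a kernel is built on (constant velocity, periodic boundaries via np.roll, an impulsive point source, no absorbing layers or source-wavelet shaping; all parameter values are assumptions for illustration):

```python
import numpy as np

# Grid and medium (all values assumed for illustration)
nx = nz = 100
dx = 5.0                 # m grid spacing
c = 2000.0               # m/s acoustic velocity
dt = 0.4 * dx / c        # time step well inside the 2-D CFL limit dx/(c*sqrt(2))

p_prev = np.zeros((nz, nx))
p = np.zeros((nz, nx))
p[nz // 2, nx // 2] = 1.0   # impulsive point source at the grid centre

for _ in range(100):
    # Five-point Laplacian with periodic wrap-around via np.roll
    lap = (
        np.roll(p, 1, 0) + np.roll(p, -1, 0)
        + np.roll(p, 1, 1) + np.roll(p, -1, 1)
        - 4.0 * p
    )
    # Second-order-in-time update of the acoustic pressure field
    p_next = 2.0 * p - p_prev + (c * dt / dx) ** 2 * lap
    p_prev, p = p, p_next

print(np.isfinite(p).all(), abs(p).max() > 0)  # → True True (stable, nonzero field)
```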
Abstract:
OBJECTIVES: (1) To evaluate the changes in surface roughness and gloss after simulated toothbrushing of 9 composite materials and 2 ceramic materials in relation to brushing time and load in vitro; (2) to assess the relationship between surface gloss and surface roughness. METHODS: Eight flat specimens each of the composite materials (microfilled: Adoro, Filtek Supreme, Heliomolar; microhybrid: Four Seasons, Tetric EvoCeram; hybrid: Compoglass F, Targis, Tetric Ceram; macrohybrid: Grandio) and two ceramic materials (IPS d.SIGN and IPS Empress, polished) were fabricated according to the manufacturers' instructions and optimally polished with up to 4000-grit SiC. The specimens were subjected to a toothbrushing (TB) simulation device (Willytec) with rotating movements, toothpaste slurry and three different loads (100 g/250 g/350 g). At hourly intervals from 1 h to 10 h of TB, the mean surface roughness Ra was measured with an optical sensor and the surface gloss (Gl) with a glossmeter. Statistical analysis was performed on log-transformed Ra data, applying two-way ANOVA to evaluate the interactions between load and material and between load and brushing time. RESULTS: There was a significant interaction between material and load as well as between load and brushing time (p<0.0001). The microhybrid and hybrid materials showed more surface deterioration with higher loads, whereas the microfilled resins Heliomolar and Adoro showed the opposite. For the ceramic materials, little or no deterioration was observed over time, independent of the load. The ceramic materials and 3 of the composite materials (roughness) showed no further deterioration after 5 h of toothbrushing. Mean surface gloss was the parameter that discriminated best between the materials, followed by mean surface roughness Ra. There was a strong correlation between surface gloss and surface roughness for all the materials except the ceramics.
The evaluation of the deterioration curves of individual specimens revealed a more or less synchronous course, hinting at specific external conditions rather than revealing the true variability in relation to the tested material. SIGNIFICANCE: The surface roughness and gloss of dental materials change with brushing time and load and thus result in different material rankings. Apart from Grandio, the hybrid composite resins were more prone to surface changes than the microfilled composites. The deterioration potential of a composite material can be quickly assessed by measuring surface gloss. For this purpose, a brushing time of 10 h (= 72,000 strokes) is needed. In further comparative studies, specimens of different materials should be tested in one series to estimate the true variability.
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate individual heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the Canadian NPHS of 1994. Our strategy for the estimation of heterogeneous responses is based on the quantile regression model. The results suggest that there is an important degree of heterogeneity in the association of health with explanatory variables which, in turn, accounts for a substantial percentage of inequality in observed health. A particularly interesting finding is that the marginal response of health to income is zero for healthy individuals but positive and significant for unhealthy individuals. The heterogeneity in the income response reduces both overall health inequality and income-related health inequality.
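The quantile-regression strategy can be illustrated on synthetic data in which, by construction, the slope of the outcome on the explanatory variable grows with the quantile, mirroring the finding that the income response differs between healthy and unhealthy individuals. A sketch using the standard linear-programming formulation of quantile regression (all data and parameter values are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, q):
    """Linear quantile regression via its LP formulation:
    minimise sum(q*u + (1-q)*v) subject to y - X @ beta = u - v, u, v >= 0."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), q * np.ones(n), (1 - q) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

# Synthetic data with a heterogeneous response: y = 1 + x*e, e ~ U(0, 2),
# so the conditional q-quantile of y given x has slope 2q (near zero at the
# bottom of the distribution, large at the top).
rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 1, n)
y = 1.0 + x * rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
b_lo = quantile_regression(X, y, 0.1)
b_hi = quantile_regression(X, y, 0.9)
print(b_lo[1] < b_hi[1])  # → True: the slope increases with the quantile
```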
Abstract:
Vertical electrical soundings, 2D resistivity imaging and several logging measurements were performed at the Kappelen test site to identify the various geoelectric facies, which allowed the tabular, horizontal structure of the aquifer to be determined. The surface-based geoelectric methods allowed for a reliable characterization of the overall structure and geometry of the aquifer, while the geophysical logging methods allowed detailed hydrogeophysical characteristics to be inferred, such as the electrical resistivity, total porosity, bulk and matrix density, and hydraulic conductivity. The synoptic interpretation and integration of this broad and diverse database constrains the key hydrological characteristics and hence forms the basis for detailed hydraulic modelling of flow and transport processes.
Abstract:
Surface-based ground penetrating radar (GPR) and electrical resistivity tomography (ERT) are common tools for aquifer characterization, because both methods provide data that are sensitive to hydrogeologically relevant quantities. To retrieve bulk subsurface properties at high resolution, we suggest incorporating structural information derived from GPR reflection data when inverting surface ERT data. This reduces resolution limitations that might otherwise hinder quantitative interpretation. Surface-based GPR reflection and ERT data were recorded on an exposed gravel bar within a restored section of a previously channelized river in northeastern Switzerland to characterize an underlying gravel aquifer. The GPR reflection data, acquired over an area of 240×40 m, map the aquifer's thickness and two internal sub-horizontal regions with different depositional patterns. The interface between these two regions and the boundary of the aquifer with the underlying clay are incorporated in an unstructured ERT mesh. Subsequent inversions are performed without applying smoothness constraints across these boundaries. Inversion models obtained using these structural constraints contain subtle resistivity variations within the aquifer that are hardly visible in standard inversion models as a result of strong vertical smearing in the latter. In the upper aquifer region, with high GPR coherency and horizontal layering, the resistivity is moderately high (>300 Ωm). We suggest that this region consists of sediments that were rearranged during more than a century of channelized flow. In the lower, low-coherency region, the GPR image reveals fluvial features (e.g., foresets) and generally more heterogeneous deposits. In this region, the resistivity is lower (~200 Ωm), which we attribute to increased amounts of fines in some of the well-sorted fluvial deposits. We also find elongated conductive anomalies that correspond to the location of river embankments that were removed in 2002.
Abstract:
Due to advances in sensor networks and remote-sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and the real-time monitoring and short-term forecasting of weather. In this thesis, a new framework for spatial estimation is proposed that takes advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography. With the advent of high-resolution digital elevation models, the field of spatial prediction has met new horizons. In fact, by exploiting image-processing tools along with physical heuristics, a large number of terrain features accounting for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold-air pools are known to be correlated with particular terrain forms, e.g.
convex/concave surfaces and upwind sides of mountain slopes. Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed as measured at the weather stations or the occurrence of orographic rainfall patterns as extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial auto-correlation in the original space, making the use of classical geostatistics cumbersome. The challenges explored during the thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and reduce the impact of noisy measurements. Secondly, a multiple-kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions. The resulting maps of average wind speed find applications within renewable-resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
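As a simplified stand-in for the SVM regression used in the thesis, the sketch below maps synthetic "terrain features" to a synthetic "wind speed" with RBF kernel ridge regression, a closely related kernel method; the features, target function and hyperparameters are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic "stations": columns stand for (elevation, slope, curvature),
# already scaled to [0, 1]; the target is an invented wind-speed response.
X = rng.uniform(0, 1, (80, 3))
wind = 2.0 + 3.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.standard_normal(80)

gamma, lam = 1.0, 1e-2
K = rbf_kernel(X, X, gamma)
alpha = np.linalg.solve(K + lam * np.eye(80), wind)   # dual weights

# Predict at unsampled "grid" locations:
X_grid = rng.uniform(0, 1, (5, 3))
pred = rbf_kernel(X_grid, X, gamma) @ alpha
print(np.round(pred, 2))
```

The same dual-weight structure carries over to support vector regression; the SVM variant additionally yields sparse weights via the epsilon-insensitive loss.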