69 results for Compact-range measurements
at Université de Lausanne, Switzerland
Abstract:
We report on advanced dual-wavelength digital holographic microscopy (DHM) methods, enabling single-acquisition real-time micron-range measurements while maintaining single-wavelength interferometric resolution in the nanometer regime. On top of the unique real-time capability of our technique, it is shown that axial resolution can be further increased compared to single-wavelength operation thanks to the uncorrelated nature of the two recorded wavefronts. It is experimentally demonstrated that DHM topographic investigation over a measurement range spanning three decades can be achieved with our arrangement, opening new application possibilities for this interferometric technique. © 2008 SPIE
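The abstract does not state the underlying relation, but dual-wavelength interferometry conventionally extends the unambiguous axial range via the synthetic (beat) wavelength Λ = λ₁λ₂/|λ₁ − λ₂|. A minimal sketch follows; the two laser wavelengths are illustrative assumptions, not values from the paper.

```python
# Synthetic-wavelength calculation for dual-wavelength interferometry.
# The two laser wavelengths below are illustrative assumptions, not the
# values used in the paper.

def synthetic_wavelength(lam1_nm: float, lam2_nm: float) -> float:
    """Beat wavelength Lambda = lam1*lam2 / |lam1 - lam2| (in nm)."""
    return lam1_nm * lam2_nm / abs(lam1_nm - lam2_nm)

lam1, lam2 = 760.0, 680.0  # hypothetical wavelengths (nm)
lam_synth = synthetic_wavelength(lam1, lam2)

# Phase measurements are unambiguous over half a wavelength in reflection,
# so the single- and dual-wavelength axial ranges differ by Lambda/lambda.
print(f"synthetic wavelength: {lam_synth / 1000:.2f} um")          # ~6.46 um
print(f"range extension vs single wavelength: x{lam_synth / lam1:.1f}")
```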
Abstract:
On 4 December 2007, a 3 Mm³ (3 × 10⁶ m³) landslide occurred along the northwestern shore of Chehalis Lake. The initiation zone is located at the intersection of the main valley slope and the northern sidewall of a prominent gully. The slope failure caused a displacement wave that ran up to 38 m on the opposite shore of the lake. The landslide is temporally associated with a rain-on-snow meteorological event that is thought to have triggered it. This paper describes the Chehalis Lake landslide and compares discontinuity orientation datasets obtained using three techniques (field measurements, terrestrial photogrammetric 3D models and an airborne LiDAR digital elevation model) in order to describe the orientation and characteristics of the five discontinuity sets present. The discontinuity orientation data are used to perform kinematic, surface wedge limit equilibrium and three-dimensional distinct element analyses. The kinematic and surface wedge analyses suggest that the location of the slope failure (at the intersection of the valley slope and a gully wall) facilitated the development of the unstable rock mass, which initiated as a planar sliding failure. Results from the three-dimensional distinct element analyses suggest that the presence, orientation and high persistence of a discontinuity set dipping obliquely to the slope were critical to the development of the landslide and led to a failure mechanism dominated by planar sliding. The three-dimensional distinct element modelling also suggests that the presence of a steeply dipping discontinuity set striking perpendicular to the slope and associated with a fault exerted a significant control on the volume and extent of the failed rock mass, but not on the overall stability of the slope.
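The abstract reports kinematic analyses for planar sliding. As a point of reference, a minimal sketch of the standard Markland-type kinematic feasibility test for planar sliding (daylighting, friction and lateral-limit conditions) is given below; the slope and joint values are hypothetical, not data from the Chehalis Lake study.

```python
# Minimal kinematic feasibility check for planar sliding (Markland-type test).
# A discontinuity can fail by planar sliding when (i) it daylights in the
# slope face (dip < slope angle), (ii) it dips more steeply than the friction
# angle, and (iii) its dip direction lies within a tolerance of the slope's
# dip direction. All input values below are hypothetical.

def planar_sliding_feasible(slope_dip: float, slope_dip_dir: float,
                            joint_dip: float, joint_dip_dir: float,
                            friction_angle: float,
                            lateral_limit: float = 20.0) -> bool:
    daylights = joint_dip < slope_dip
    exceeds_friction = joint_dip > friction_angle
    # smallest angular difference between the two dip directions
    delta = abs((joint_dip_dir - slope_dip_dir + 180.0) % 360.0 - 180.0)
    aligned = delta <= lateral_limit
    return daylights and exceeds_friction and aligned

# Example: a 60-degree slope facing 270, a joint set dipping 45 toward 260,
# friction angle 35 degrees -> planar sliding is kinematically feasible.
print(planar_sliding_feasible(60, 270, 45, 260, 35))  # True
```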
Abstract:
The toxicity of heavy metals in natural waters depends strongly on the local chemical environment, and assessing the bioavailability of radionuclides allows their toxic effects on aquatic biota to be predicted. The technique of diffusive gradients in thin films (DGT) is widely exploited for bioavailability measurements of trace metals in waters; however, it has not yet been applied to plutonium speciation measurements. This study investigates the use of the DGT technique for plutonium bioavailability measurements in chemically different environments. We used a diffusion cell to determine the diffusion coefficients (D) of plutonium in polyacrylamide (PAM) gel and found D in the range of 2.06-2.29 × 10⁻⁶ cm² s⁻¹. It ranged between 1.10 and 2.03 × 10⁻⁶ cm² s⁻¹ in the presence of fulvic acid and in natural waters with low DOM. In the presence of 20 ppm of humic acid from an organic-rich soil, plutonium diffusion was hindered by a factor of 5, with a diffusion coefficient of 0.50 × 10⁻⁶ cm² s⁻¹. We also tested commercially available DGT devices with Chelex resin for plutonium bioavailability measurements under laboratory conditions, and the diffusion coefficients agreed with those from the diffusion cell experiments. These findings show that the DGT methodology can be used to investigate the bioaccumulation of the labile plutonium fraction in aquatic biota.
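The abstract does not spell out how a DGT deployment yields a concentration; the standard DGT relation computes the labile concentration from the mass accumulated on the resin, C = MΔg/(DtA). A sketch follows in which D is taken from the reported PAM-gel range, while the deployment parameters and accumulated mass are assumptions.

```python
# Standard DGT relation: the labile concentration in solution follows from
# the mass M accumulated on the resin, C = M * dg / (D * t * A).
# The deployment parameters below are hypothetical; D is taken from the
# range reported for plutonium in PAM gel.

def dgt_concentration(M_ng: float, dg_cm: float, D_cm2_s: float,
                      t_s: float, A_cm2: float) -> float:
    """Labile concentration (ng/mL) from DGT accumulation."""
    return M_ng * dg_cm / (D_cm2_s * t_s * A_cm2)

D = 2.2e-6          # cm^2/s, within the reported PAM-gel range
dg = 0.094          # cm, typical diffusive layer thickness (assumed)
A = 3.14            # cm^2, typical exposure window (assumed)
t = 24 * 3600.0     # s, one-day deployment (assumed)
M = 5.0             # ng of Pu accumulated on the resin (assumed)

print(f"C_DGT = {dgt_concentration(M, dg, D, t, A):.3f} ng/mL")
```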
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proved to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
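The gradual deformation move referenced above is not detailed in the abstract; a minimal sketch of the classical gradual deformation step (after Hu, 2000), which combines two independent Gaussian realizations so that the result remains a realization of the same Gaussian model, is given below. The grid size and perturbation strength are illustrative assumptions.

```python
# Gradual deformation of Gaussian realizations (after Hu, 2000): for two
# independent realizations z1, z2 of the same zero-mean Gaussian model,
# z(theta) = z1*cos(theta) + z2*sin(theta) is again a realization of that
# model, so theta tunes the perturbation strength continuously.
# Grid size and theta below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def gradual_deformation(z1: np.ndarray, z2: np.ndarray,
                        theta: float) -> np.ndarray:
    """Combine two Gaussian realizations; theta=0 returns z1 unchanged."""
    return z1 * np.cos(theta) + z2 * np.sin(theta)

z1 = rng.standard_normal((50, 50))   # current model proposal
z2 = rng.standard_normal((50, 50))   # independent companion realization

z_new = gradual_deformation(z1, z2, theta=0.1)  # small MCMC perturbation

# The marginal variance is preserved: cos^2 + sin^2 = 1.
print(np.var(z1).round(2), np.var(z_new).round(2))
```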
Abstract:
The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Thus, ionization chambers are calibrated in absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation at the level of the secondary particle range. Therefore, the measured quantity is close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium, D̄(w,Q,V₁)^low, averaged over a scoring volume V₁ for a geometry in which V₁ is filled with the low-density medium, to the absorbed dose to water, D̄(w,Q,V₂)^low, averaged over a volume V₂ for a geometry in which V₂ is filled with water. In the Monte Carlo simulations, D̄(w,Q,V₂)^low is obtained by replacing the volume of the ionization chamber with an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behavior of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
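A minimal numerical sketch of the correction-factor definition above: the factor is simply the ratio of two volume-averaged dose scores from paired Monte Carlo geometries. The dose arrays below stand in for simulation tallies and are invented for illustration.

```python
# Correction factor as the ratio of two volume-averaged Monte Carlo dose
# scores, following the definition in the abstract: one geometry scores
# dose with V1 filled by the low-density medium, the other with V2 replaced
# by water. The "dose" arrays below are placeholders for MC tallies.
import numpy as np

def volume_averaged_dose(dose_per_voxel: np.ndarray,
                         voxel_volumes: np.ndarray) -> float:
    """Volume-weighted mean dose over the scoring volume."""
    return float(np.sum(dose_per_voxel * voxel_volumes)
                 / np.sum(voxel_volumes))

# Hypothetical tallies on a small scoring grid (Gy per source particle).
vol = np.ones(100)                        # equal voxel volumes
dose_low_medium = np.full(100, 1.32e-14)  # V1 filled with low-density medium
dose_water      = np.full(100, 1.05e-14)  # V2 replaced by water

correction = (volume_averaged_dose(dose_low_medium, vol)
              / volume_averaged_dose(dose_water, vol))
print(f"correction factor = {correction:.3f}")  # ~1.257 here
```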
Abstract:
We present a compact portable biosensor for measuring As(III) concentrations in water using Escherichia coli bioreporter cells. The cells express green fluorescent protein with a linear dependence on the arsenic concentration between 0 and 100 μg/L. The device accommodates a small polydimethylsiloxane microfluidic chip that holds the agarose-encapsulated bacteria, together with a complete optical illumination/collection/detection system for automated quantitative fluorescence measurements. The device is capable of sampling water autonomously, controlling the whole measurement, and storing and transmitting data over GSM networks. We demonstrate highly reproducible measurements of arsenic in drinking water at 10 and 50 μg/L within 100 and 80 min, respectively.
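Since the reporter response is linear in As(III) concentration over 0-100 μg/L, quantification reduces to a straight-line calibration; a sketch follows in which all fluorescence values are invented for illustration.

```python
# Straight-line calibration for a linearly responding bioreporter:
# fit fluorescence vs. As(III) standards, then invert the fit to
# quantify an unknown sample. All fluorescence values are invented.
import numpy as np

std_conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0])    # ug/L standards
std_fluo = np.array([120., 340., 660., 1210., 2320.])  # a.u. (hypothetical)

slope, intercept = np.polyfit(std_conc, std_fluo, 1)

def as3_concentration(fluorescence: float) -> float:
    """Invert the linear calibration to get As(III) in ug/L."""
    return (fluorescence - intercept) / slope

sample_fluo = 900.0  # unknown water sample (hypothetical reading)
print(f"estimated As(III): {as3_concentration(sample_fluo):.1f} ug/L")
```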
Abstract:
Doxorubicin is an antineoplastic agent active against sarcoma pulmonary metastasis, but its clinical use is hampered by its myelotoxicity and its cumulative cardiotoxicity when administered systemically. This limitation may be circumvented using the isolated lung perfusion (ILP) approach, wherein a therapeutic agent is infused locoregionally after vascular isolation of the lung. The influence of the mode of infusion (anterograde (AG): through the pulmonary artery (PA); retrograde (RG): through the pulmonary vein (PV)) on doxorubicin pharmacokinetics and lung distribution was unknown. Therefore, a simple, rapid and sensitive high-performance liquid chromatography (HPLC) method has been developed to quantify doxorubicin in four different biological matrices (infusion effluent, serum, and tissues with low or high levels of doxorubicin). The related compound daunorubicin was used as internal standard (I.S.). Following a single-step protein precipitation of 500 µL samples with 250 µL acetone and 50 µL of 70% aqueous zinc sulfate solution, the supernatant was evaporated to dryness at 60 °C for exactly 45 min under a stream of nitrogen and the solid residue was solubilized in 200 µL of purified water. A 100 µL volume was subjected to HPLC analysis on a Nucleosil 100-5 µm C18 AB column equipped with a guard column (Nucleosil 100-5 µm C₆H₅ (phenyl), end-capped), using gradient elution of acetonitrile and 0.2% 1-heptanesulfonic acid at pH 4: 15/85 at 0 min → 50/50 at 20 min → 100/0 at 22 min → 15/85 at 24 min → 15/85 at 26 min, delivered at 1 mL/min. The analytes were detected by fluorescence with excitation and emission wavelengths set at 480 and 550 nm, respectively. The calibration curves were linear over the range of 2-1000 ng/mL for effluent and plasma matrices, and 0.1-750 µg/g for tissue matrices. The method is precise, with inter-day and intra-day relative standard deviations between 0.5 and 6.7%, and accurate, with inter-day and intra-day deviations between -5.4 and +7.7%. The in vitro stability in all matrices and in processed samples has been studied at -80 °C for 1 month and at 4 °C for 48 h, respectively. During initial studies, heparin used as an anticoagulant was found to profoundly influence the measurements of doxorubicin in effluents collected from animals under ILP. Moreover, the strong matrix effect observed with tissue samples indicates that it is mandatory to prepare doxorubicin calibration standards in biological matrices that reflect as closely as possible the composition of the samples to be analyzed. This method was successfully applied in animal studies for the analysis of effluent, serum and tissue samples collected from pigs and rats undergoing ILP.
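The gradient program above is piecewise linear in time; a small helper that encodes it and interpolates the mobile-phase composition at any time point is sketched below. The breakpoints are taken from the abstract; the function names and the interpolation convention (linear ramps between breakpoints, as is usual for LC gradient pumps) are mine.

```python
# The HPLC gradient program from the abstract, encoded as (time_min, %ACN)
# breakpoints; composition between breakpoints is linearly interpolated.
import numpy as np

# (time in min, % acetonitrile); the % heptanesulfonic-acid buffer is the
# complement to 100.
GRADIENT = [(0.0, 15.0), (20.0, 50.0), (22.0, 100.0),
            (24.0, 15.0), (26.0, 15.0)]

def percent_acn(t_min: float) -> float:
    """Mobile-phase %ACN at time t, by linear interpolation."""
    times, acn = zip(*GRADIENT)
    return float(np.interp(t_min, times, acn))

for t in (0, 10, 20, 21, 23, 26):
    print(f"t = {t:>2} min: {percent_acn(t):5.1f}% ACN "
          f"/ {100 - percent_acn(t):5.1f}% buffer")
```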
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed pattern (FP) components and assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA) and noise de-trending. Finally, spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting, the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
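The second-order polynomial decomposition referenced above models pixel variance as σ²(K) = a + bK + cK², with the constant, linear and quadratic terms conventionally attributed to electronic, quantum and fixed-pattern noise, respectively. A weighted-fit sketch under that standard model follows; the variance data are invented.

```python
# Second-order polynomial noise decomposition: pixel variance vs. detector
# air kerma K is modeled as var(K) = a + b*K + c*K**2, where a is electronic
# noise, b*K quantum noise and c*K**2 fixed-pattern noise. Weights ~1/var
# keep the low-exposure points from being swamped in the fit.
# The variance data below are invented for illustration.
import numpy as np

K = np.array([6.25, 12.5, 25, 50, 100, 200, 400, 800, 1600])  # uGy
rng = np.random.default_rng(1)
true = 4.0 + 2.5 * K + 0.001 * K**2        # underlying noise model
var = true + rng.normal(0, 0.02 * true)    # invented "measured" variances

# Weighted least squares; np.polyfit applies w to the residuals, so 1/var
# approximates relative-error weighting.
c, b, a = np.polyfit(K, var, deg=2, w=1.0 / var)

for kerma in (10.0, 1000.0):
    total = a + b * kerma + c * kerma**2
    print(f"K = {kerma:6.0f} uGy: electronic {a/total:.0%}, "
          f"quantum {b*kerma/total:.0%}, fixed-pattern {c*kerma**2/total:.0%}")
```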
Abstract:
BACKGROUND: Hyperoxaluria is a major risk factor for kidney stone formation. Although urinary oxalate measurement is part of all basic stone risk assessments, there is no standardized method for this measurement. METHODS: Urine samples from 24-h urine collections covering a broad range of oxalate concentrations were aliquoted and sent, in duplicate, to six blinded international laboratories for oxalate, sodium and creatinine measurement. In a second set of experiments, ten pairs of native urine and urine spiked with 10 mg/L of oxalate were sent for oxalate measurement. Three laboratories used a commercially available oxalate oxidase kit, two laboratories used a high-performance liquid chromatography (HPLC)-based method, and one laboratory used both methods. RESULTS: Intra-laboratory reliability for oxalate measurement, expressed as the intraclass correlation coefficient (ICC), varied between 0.808 [95% confidence interval (CI): 0.427-0.948] and 0.998 (95% CI: 0.994-1.000), with lower values for HPLC-based methods. Acidification of urine samples prior to analysis led to significantly higher oxalate concentrations. The ICC for inter-laboratory reliability varied between 0.745 (95% CI: 0.468-0.890) and 0.986 (95% CI: 0.967-0.995). Recovery of the 10 mg/L oxalate-spiked samples varied between 8.7 ± 2.3 and 10.7 ± 0.5 mg/L. Overall, HPLC-based methods showed more variability than the oxalate oxidase kit-based methods. CONCLUSIONS: Significant variability was noted in the quantification of urinary oxalate concentration by different laboratories, which may partially explain the differences in hyperoxaluria prevalence reported in the literature. Our data stress the need for standardization of oxalate measurement methods.
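The intraclass correlation coefficient used above for inter-laboratory reliability can be computed from a subjects-by-laboratories table via two-way ANOVA mean squares; a compact sketch of the Shrout and Fleiss ICC(2,1) (two-way random effects, absolute agreement, single measurement) with invented oxalate data follows. Note the paper does not state which ICC form was used; ICC(2,1) is an assumption.

```python
# ICC(2,1) (two-way random effects, absolute agreement, single measurement)
# from a subjects x raters table, after Shrout & Fleiss. The oxalate values
# below (mg/L, 5 urine samples x 3 laboratories) are invented.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

oxalate = np.array([[12.1, 11.8, 12.9],
                    [25.3, 24.6, 26.0],
                    [33.0, 31.9, 34.1],
                    [18.4, 18.0, 19.1],
                    [40.2, 39.0, 41.5]])

print(f"ICC(2,1) = {icc_2_1(oxalate):.3f}")  # high inter-lab agreement
```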
Abstract:
PURPOSE: To determine whether motion preservation following oblique cervical corpectomy (OCC) for cervical spondylotic myelopathy (CSM) persists with serial follow-up. METHODS: We included 28 patients who underwent OCC for CSM and had preoperative and at least two serial follow-up neutral and dynamic cervical spine radiographs. Patients with an ossified posterior longitudinal ligament (OPLL) were excluded. Changes in sagittal curvature and in segmental and whole-spine range of motion (ROM) were measured. Anterior osteophyte formation was graded using Nathan's system. Neurological function was measured by Nurick's grade and modified Japanese Orthopedic Association (JOA) scores. RESULTS: The majority (23 patients) had a single- or two-level corpectomy. The average duration of follow-up was 45 months. Nurick's grades and JOA scores showed statistically significant improvements after surgery (p < 0.001). Seventeen percent of patients with preoperatively lordotic spines had a loss of lordosis at last follow-up, but with no clinical worsening. 77% of the whole-spine ROM and 62% of segmental ROM were preserved at last follow-up. The whole-spine and segmental ROM decreased by 11.2° and 10.9°, respectively (p ≤ 0.001). Patients with a greater range of segmental movement preoperatively had a statistically greater range of movement at follow-up. The analysis of serial radiographs indicated that the ranges of movement of the whole spine and at the segmental spine levels reduced significantly during the follow-up period. Nathan's grade showed an increase in osteophytosis in more than two-thirds of the patients (p ≤ 0.01). The whole-spine range of movement at follow-up correlated significantly with Nathan's grade. CONCLUSIONS: Although OCC preserves segmental and whole-spine ROM, serial measurements show a progressive decrease in ROM, albeit without clinical worsening. The reduction in ROM is probably related to degenerative ossification of the spinal ligaments.
Abstract:
The activity of radiopharmaceuticals in nuclear medicine is measured before patient injection with radionuclide calibrators. In Switzerland, the general requirements for quality control are defined in a federal ordinance and a directive of the Federal Office of Metrology (METAS), which require each instrument to be verified. A set of three gamma sources (⁵⁷Co, ¹³⁷Cs and ⁶⁰Co) is used to verify the response of radionuclide calibrators over the gamma energy range of their use. A beta source, a mixture of ⁹⁰Sr and ⁹⁰Y in secular equilibrium, is used as well. Manufacturers are responsible for the calibration factors. The main goal of the study was to monitor the validity of the calibration factors by using two sources: a ⁹⁰Sr/⁹⁰Y source and an ¹⁸F source. The three types of commercial radionuclide calibrators tested do not have a calibration factor for the mixture, but only for ⁹⁰Y. Activity measurements of a ⁹⁰Sr/⁹⁰Y source made with the ⁹⁰Y calibration factor must therefore be corrected for the extra contribution of ⁹⁰Sr. The value of the correction factor was found to be 1.113, whereas Monte Carlo simulations of the radionuclide calibrators estimate it to be 1.117. Measurements with ¹⁸F sources in a specific geometry were also performed. Since this radionuclide is widely used in Swiss hospitals equipped with PET and PET-CT, the metrology of ¹⁸F is very important. The ¹⁸F response normalized to the ¹³⁷Cs response shows that the difference from a reference value does not exceed 3% for the three types of radionuclide calibrators.
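A minimal sketch of the correction arithmetic implied above, under the assumption that a reading taken with the ⁹⁰Y calibration factor over-responds by the correction factor because of the ⁹⁰Sr contribution, so the indicated activity is divided by that factor. The correction factors are those reported in the abstract; the raw reading and the interpretation of the correction direction are assumptions.

```python
# Correcting a radionuclide-calibrator reading of a 90Sr/90Y source taken
# with the 90Y calibration factor: the 90Sr parent adds an extra
# contribution, so (assumed here) the indicated activity is divided by the
# correction factor to recover the 90Y activity. Factors are those reported
# (measured 1.113, Monte Carlo 1.117); the raw reading is invented.

CF_MEASURED = 1.113
CF_MONTE_CARLO = 1.117

def corrected_activity(indicated_mbq: float,
                       correction_factor: float = CF_MEASURED) -> float:
    """90Y activity from a reading made with the 90Y calibration factor."""
    return indicated_mbq / correction_factor

reading = 250.0  # MBq indicated by the calibrator (hypothetical)
print(f"with measured CF:    {corrected_activity(reading):.1f} MBq")
print(f"with Monte Carlo CF: {corrected_activity(reading, CF_MONTE_CARLO):.1f} MBq")
```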
Abstract:
OBJECTIVE: The measurement of cardiac output (CO) is a key element in the assessment of cardiac function. Recently, a pulse contour analysis-based device that requires no calibration became available (FloTrac/Vigileo, Edwards Lifesciences, Irvine, CA). This study was conducted to determine whether the arterial catheter site has an impact and to investigate the accuracy of this system compared with the pulmonary artery catheter using the bolus thermodilution technique (PAC). DESIGN: Prospective study. SETTING: The operating room of one university hospital. PARTICIPANTS: Twenty patients undergoing cardiac surgery. INTERVENTIONS: CO was determined in parallel with FloTrac/Vigileo systems in the radial and femoral positions (CO_rad and CO_fem) and by PAC as the reference method. Data triplets were recorded at defined time points. The primary endpoint was the comparison of CO_rad and CO_fem; the secondary endpoint was the comparison with the PAC. MEASUREMENTS AND MAIN RESULTS: Seventy-eight simultaneous data recordings were obtained. The Bland-Altman analysis for CO_fem and CO_rad showed a bias of 0.46 L/min, a precision of 0.85 L/min, and a percentage error of 34%. The Bland-Altman analysis for CO_rad and PAC showed a bias of -0.35 L/min, a precision of 1.88 L/min, and a percentage error of 76%. The Bland-Altman analysis for CO_fem and PAC showed a bias of 0.11 L/min, a precision of 1.8 L/min, and a percentage error of 69%. CONCLUSION: The FloTrac/Vigileo system did not produce exactly the same CO data when used in the radial and femoral arteries, even though the percentage error was close to the clinically acceptable range. Thus, the impact of the introduction site of the arterial catheter is not negligible. The agreement with thermodilution was low.
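The Bland-Altman statistics quoted above reduce to the mean and standard deviation of the paired differences, with the percentage error computed against the mean CO (Critchley criterion). A compact sketch follows; conventions for "precision" vary, and the SD of the differences is assumed here. The paired CO values are invented.

```python
# Bland-Altman agreement statistics for two cardiac-output methods:
# bias = mean difference, precision = SD of the differences (one common
# convention), percentage error = 1.96*SD / mean CO (Critchley criterion,
# often deemed acceptable below ~30%). The paired CO values are invented.
import numpy as np

co_a = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8])  # method A, L/min
co_b = np.array([4.6, 5.0, 4.3, 6.4, 4.7, 6.1, 4.9, 5.6])  # method B, L/min

diff = co_a - co_b
bias = diff.mean()
precision = diff.std(ddof=1)
pct_error = 1.96 * precision / np.concatenate([co_a, co_b]).mean() * 100

print(f"bias             = {bias:+.2f} L/min")
print(f"precision (SD)   = {precision:.2f} L/min")
print(f"limits of agreement: {bias - 1.96*precision:+.2f} "
      f"to {bias + 1.96*precision:+.2f} L/min")
print(f"percentage error = {pct_error:.0f}%")
```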
Abstract:
A novel laboratory technique is proposed to investigate wave-induced fluid flow at the mesoscopic scale as a mechanism for seismic attenuation in partially saturated rocks. This technique combines measurements of seismic attenuation in the frequency range from 1 to 100 Hz with measurements of transient fluid pressure in response to a step stress applied on top of the sample. We used a Berea sandstone sample partially saturated with water. The laboratory results suggest that wave-induced fluid flow at the mesoscopic scale is dominant in partially saturated samples. A 3-D numerical model representing the sample was used to verify the experimental results. Biot's equations of consolidation were solved with the finite-element method. Wave-induced fluid flow at the mesoscopic scale was the only attenuation mechanism accounted for in the numerical solution. The numerically calculated transient fluid pressure reproduced the laboratory data. Moreover, the numerically calculated attenuation, superposed on the frequency-independent matrix anelasticity, reproduced the attenuation measured in the laboratory on the partially saturated sample. This experimental-numerical fit demonstrates that wave-induced fluid flow at the mesoscopic scale and matrix anelasticity are the dominant mechanisms for seismic attenuation in partially saturated Berea sandstone.
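In forced-oscillation experiments of this kind, the attenuation 1/Q is conventionally obtained from the phase lag φ between the applied sinusoidal stress and the measured strain, 1/Q = tan(φ); the abstract does not detail the estimator, so the sketch below (phase extracted from the Fourier component at the drive frequency, synthetic records) is an assumption about the standard procedure, not the authors' code.

```python
# Attenuation from a forced-oscillation test: for a linear viscoelastic
# sample, 1/Q = tan(phi), where phi is the phase by which strain lags the
# applied sinusoidal stress. Here phi is extracted via the Fourier
# component at the drive frequency. The records below are synthetic.
import numpy as np

f0 = 10.0                      # drive frequency (Hz), in the 1-100 Hz band
t = np.arange(0, 2.0, 1e-3)    # 2 s record at 1 kHz sampling
phi_true = 0.05                # imposed phase lag (rad) -> 1/Q = 0.05

stress = np.sin(2 * np.pi * f0 * t)
strain = 0.8 * np.sin(2 * np.pi * f0 * t - phi_true)

def phase_at(signal: np.ndarray, t: np.ndarray, f: float) -> float:
    """Phase of the Fourier component of `signal` at frequency f."""
    z = np.sum(signal * np.exp(-2j * np.pi * f * t))
    return float(np.angle(z))

phi = phase_at(stress, t, f0) - phase_at(strain, t, f0)
print(f"1/Q = tan(phi) = {np.tan(phi):.4f}")   # ~0.05
```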
Abstract:
Molar heat capacities of the binary compounds NiAl, NiIn, NiSi, NiGe, NiBi, NiSb, CoSb and FeSb were determined every 10 K by differential scanning calorimetry in the temperature range 310-1080 K. The experimental results have been fitted versus temperature according to Cp = a + b·T + c·T² + d·T⁻². Results are given, discussed and compared to estimates found in the literature. Two compounds, NiBi and FeSb, undergo transformations between 460 and 500 K. © 1999 Elsevier Science Ltd. All rights reserved.
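The four-term polynomial above (reconstructed here as Cp = a + bT + cT² + dT⁻², a common calorimetric form; the garbled original leaves the exponents ambiguous) is linear in its coefficients, so the fit reduces to a linear least-squares problem on the basis functions (1, T, T², T⁻²). A sketch with invented heat-capacity data follows.

```python
# Linear least-squares fit of Cp(T) = a + b*T + c*T**2 + d*T**-2: the model
# is linear in (a, b, c, d), so it reduces to one lstsq call on the design
# matrix [1, T, T^2, T^-2]. The Cp "measurements" below are synthetic.
import numpy as np

T = np.arange(310.0, 1090.0, 10.0)           # K, every 10 K as in the study
rng = np.random.default_rng(3)
cp_obs = (45.0 + 8e-3 * T + 2e-6 * T**2 - 3e5 / T**2
          + rng.normal(0, 0.3, T.size))      # J/(mol K), invented

A = np.column_stack([np.ones_like(T), T, T**2, T**-2])
(a, b, c, d), *_ = np.linalg.lstsq(A, cp_obs, rcond=None)

print(f"a = {a:.2f}, b = {b:.2e}, c = {c:.2e}, d = {d:.2e}")
print(f"Cp(600 K) = {a + b*600 + c*600**2 + d/600**2:.2f} J/(mol K)")
```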
Abstract:
One of the key challenges in the field of nanoparticle (NP) analysis is producing reliable and reproducible characterisation data for nanomaterials. This study examines reproducibility using a relatively new but rapidly adopted technique, Nanoparticle Tracking Analysis (NTA), applied to a range of particle sizes and materials in several different media. It describes the protocol development and presents both the data and the analysis of results obtained from 12 laboratories, mostly based in Europe, which are primarily QualityNano members. QualityNano is an EU FP7-funded Research Infrastructure that integrates 28 European analytical and experimental facilities in nanotechnology, medicine and natural sciences, with the goal of developing and implementing best practice and quality in all aspects of nanosafety assessment. The study covers both the development of the protocol and how it leads to highly reproducible results among participants. The parameter being measured is the modal particle size.