222 results for Direct solid sampling
Abstract:
Pneumocystis jirovecii pneumonia (PCP) is a common opportunistic infection. Microscopic diagnosis, including diagnosis using the Merifluor-Pneumocystis direct fluorescent antigen (MP-DFA) test, has limitations. Real-time PCR may assist in diagnosis, but no commercially validated real-time PCR assay has been available to date. MycAssay Pneumocystis is a commercial assay that targets the P. jirovecii mitochondrial large subunit (analytical detection limit, ≤3.5 copies/μl of sample). A multicenter trial recruited 110 subjects: 54 with transplants (40 with lung transplants), 32 with nonmalignant conditions, 13 with leukemia, and 11 with solid tumors; 9 were HIV positive. A total of 110 respiratory samples (92% of which were bronchoalveolar lavage [BAL] specimens) were analyzed by PCR. Performance was characterized relative to investigator-determined clinical diagnosis of PCP (including local diagnostic tests), and PCR results were compared with MP-DFA test results for 83 subjects. Thirteen of 14 subjects with PCP and 9/96 without PCP (including 5 undergoing BAL surveillance after lung transplantation) had positive PCR results; sensitivity, specificity, and positive and negative predictive values (PPV and NPV, respectively) were 93%, 91%, 59%, and 99%, respectively. Fourteen of 83 subjects for whom PCR and MP-DFA test results were available had PCP; PCR sensitivity, specificity, PPV, and NPV were 93%, 90%, 65%, and 98%, respectively, and MP-DFA test sensitivity, specificity, PPV, and NPV were 93%, 100%, 100%, and 98%. Of the 9 PCR-positive subjects without PCP, 1 later developed PCP. The PCR diagnostic assay compares well with clinical diagnosis using nonmolecular methods. Additional positive results compared with the MP-DFA test may reflect low-level infection or colonization.
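The reported performance figures follow directly from a standard 2x2 contingency table: 13 of 14 PCP subjects were PCR-positive (TP = 13, FN = 1) and 9 of 96 non-PCP subjects were PCR-positive (FP = 9, TN = 87). A minimal Python sketch of the textbook definitions, using only the counts given in the abstract:

    def diagnostic_metrics(tp, fn, fp, tn):
        """Standard 2x2 contingency-table metrics for a diagnostic test."""
        return {
            "sensitivity": tp / (tp + fn),  # true-positive rate
            "specificity": tn / (tn + fp),  # true-negative rate
            "PPV": tp / (tp + fp),          # positive predictive value
            "NPV": tn / (tn + fn),          # negative predictive value
        }

    # Counts from the abstract: 13/14 subjects with PCP PCR-positive,
    # 9/96 subjects without PCP PCR-positive.
    print(diagnostic_metrics(tp=13, fn=1, fp=9, tn=87))
    # -> sensitivity 0.93, specificity 0.91, PPV 0.59, NPV 0.99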
Abstract:
There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes the predominant attenuation mechanism at seismic frequencies. As a consequence, centimeter-scale perturbations of the subsurface physical properties should be taken into account in seismic modeling whenever detailed and accurate responses of the target structures are desired. This is, however, computationally prohibitive, since extremely small grid spacings would be necessary. A convenient way to circumvent this problem is to use an upscaling procedure to replace the heterogeneous porous media by equivalent visco-elastic solids. In this work, we solve Biot's equations of motion to perform numerical simulations of seismic wave propagation through porous media containing mesoscopic heterogeneities. We then use an upscaling procedure to replace the heterogeneous poro-elastic regions by homogeneous equivalent visco-elastic solids and repeat the simulations using visco-elastic equations of motion. We find that, despite the equivalent attenuation behavior of the heterogeneous poro-elastic medium and the equivalent visco-elastic solid, the seismograms may differ due to diverging boundary conditions at fluid-solid interfaces, where the poro-elastic case admits additional options. In particular, we observe that the seismograms agree for closed-pore boundary conditions but differ significantly for open-pore boundary conditions. This is an interesting result with potentially important implications for wave-equation-based algorithms in exploration geophysics that involve fluid-solid interfaces, such as wave-field decomposition.
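For context, the poro-elastic simulations referred to above are commonly formulated, at low (seismic) frequencies, as follows; the notation is the generic textbook one (Biot 1962), not necessarily that of the paper, with u the solid displacement, w the relative fluid displacement, tau the total stress, p the pore pressure, rho and rho_f the bulk and fluid densities, m the mass-coupling coefficient, eta the fluid viscosity and kappa the permeability:

    % Low-frequency Biot equations of motion (standard form):
    \nabla\cdot\boldsymbol{\tau} = \rho\,\ddot{\mathbf{u}} + \rho_f\,\ddot{\mathbf{w}},
    \qquad
    -\nabla p = \rho_f\,\ddot{\mathbf{u}} + m\,\ddot{\mathbf{w}} + \frac{\eta}{\kappa}\,\dot{\mathbf{w}}.

At a fluid-solid interface, the open-pore condition permits free fluid exchange (continuity of pressure), whereas the closed-pore (sealed) condition imposes \dot{\mathbf{w}}\cdot\mathbf{n} = 0, i.e. no relative fluid flow across the interface; this extra choice is exactly the additional option that has no counterpart in the equivalent visco-elastic solid.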
Abstract:
Designing an efficient sampling strategy is of crucial importance for habitat suitability modelling. This paper compares four such strategies, namely 'random', 'regular', 'proportional-stratified' and 'equal-stratified', to investigate (1) how they affect prediction accuracy and (2) how sensitive they are to sample size. In order to compare them, a virtual species approach (Ecol. Model. 145 (2001) 111) in a real landscape, based on reliable data, was chosen. The distribution of the virtual species was sampled 300 times using each of the four strategies at four sample sizes. The sampled data were then fed into a generalized linear model (GLM) to make two types of prediction: (1) habitat suitability and (2) presence/absence. Comparing the predictions to the known distribution of the virtual species allows model accuracy to be assessed. Habitat suitability predictions were assessed by Pearson's correlation coefficient and presence/absence predictions by Cohen's kappa agreement coefficient. The results show the 'regular' and 'equal-stratified' sampling strategies to be the most accurate and most robust. We propose the following characteristics to improve sample design: (1) increase sample size, (2) prefer systematic to random sampling and (3) include environmental information in the design.
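A minimal Python sketch of such a comparison loop, covering only the 'random' and 'regular' designs on a hypothetical toy landscape (all names and data here are illustrative stand-ins, not the study's datasets; logistic GLM via scikit-learn, agreement via Cohen's kappa):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)

    # Toy "virtual species": presence probability driven by a single
    # environmental gradient on a 100x100 landscape.
    env = rng.random((100, 100))
    p_true = 1 / (1 + np.exp(-10 * (env - 0.5)))
    presence = rng.random(env.shape) < p_true

    def sample(strategy, n):
        """Flat cell indices for a 'random' or 'regular' (systematic) design."""
        if strategy == "random":
            return rng.choice(env.size, size=n, replace=False)
        step = env.size // n
        return np.arange(0, env.size, step)[:n]

    for strategy in ("random", "regular"):
        idx = sample(strategy, 400)
        X, y = env.ravel()[idx, None], presence.ravel()[idx]
        glm = LogisticRegression().fit(X, y)          # binomial GLM
        pred = glm.predict(env.ravel()[:, None])      # presence/absence map
        print(strategy, round(cohen_kappa_score(presence.ravel(), pred), 3))

In the study itself this loop would be repeated 300 times per strategy and sample size, and habitat suitability (predicted probabilities) would additionally be scored with Pearson's correlation coefficient.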
Abstract:
Although important progress has been achieved in the therapeutic management of transplant recipients, acute and chronic rejection remain the leading causes of premature graft loss after solid organ transplantation. This, together with the undesirable side effects of immunosuppressive drugs, has significant implications for the long-term outcome of transplant recipients. Thus, a better understanding of the immunological events occurring after transplantation is essential. The immune system plays an ambivalent role in the outcome of a graft. On one hand, some T lymphocytes with effector functions (called alloreactive) can mediate a cascade of events eventually resulting in the rejection, either acute or chronic, of the grafted organ; on the other hand, a small subset of T lymphocytes, called regulatory T cells, has been shown to be implicated in the control of these harmful rejection responses, among other things. Thus, we focused our interest on the study of the balance between circulating effector (alloreactive) and regulatory T lymphocytes, which seems to play an important role in the outcome of allografts, in the context of kidney transplantation. The results were correlated with various variables such as the clinical status of the patients, the immunosuppressive drugs used as induction or maintenance agents, and past or current episodes of rejection. We observed that the percentage of the alloreactive T lymphocyte population correlated with the clinical status of the kidney transplant recipients. Indeed, the highest percentage was found in patients suffering from chronic humoral rejection, whilst patients on no or only minimal immunosuppressive treatment, or on sirolimus-based immunosuppression, displayed a percentage comparable to healthy non-transplanted individuals. During the first year after renal transplantation, the balance between effector and regulatory T lymphocytes was tipped towards the detrimental effector immune response with both induction agents studied (thymoglobulin and basiliximab). Overall, these results indicate that monitoring these immunological parameters may be very useful for the clinical follow-up of transplant recipients; these tests may help identify patients who are more likely to develop rejection or, on the contrary, who tolerate their graft well, in order to adapt the immunosuppressive treatment on an individual basis.
Abstract:
In recent years there has been explosive growth in the development of adaptive and data-driven methods. One efficient data-driven approach is based on statistical learning theory (SLT) (Vapnik 1998). The theory is based on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and thus the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and nonlinear data models with excellent generalisation abilities, which is very important both for monitoring and forecasting. SVM are extremely good when the input space is high-dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only support vectors to derive decision boundaries. This opens a way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
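A minimal Python illustration of the support-vector property mentioned above: only a subset of the training points (the support vectors) determines the decision boundary. The toy data below are an illustrative stand-in for spatially distributed measurements, not the cited work's datasets:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    # Two noisy point clouds as a two-class classification problem.
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.repeat([0, 1], 50)

    clf = SVC(kernel="rbf", C=1.0).fit(X, y)

    # The decision boundary depends only on the support vectors; removing
    # all other training points would leave it unchanged.
    print("support vectors:", clf.support_.size, "of", len(X))

The fraction of points retained as support vectors is itself informative: it bounds the leave-one-out error and indicates data redundancy, which is the basis of the sampling-optimisation use mentioned above.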
Abstract:
We perform direct numerical simulations of drainage by solving the Navier-Stokes equations in the pore space and employing the Volume of Fluid (VOF) method to track the evolution of the fluid-fluid interface. After demonstrating that the method is able to deal with large viscosity contrasts and to model the transition from stable flow to viscous fingering, we focus on the definition of macroscopic capillary pressure. When the fluids are at rest, the difference between inlet and outlet pressures and the difference between the intrinsic phase-averaged pressures coincide with the capillary pressure. However, when the fluids are in motion these quantities are dominated by viscous forces. In this case, only a definition based on the variation of the interfacial energy provides an accurate measure of the macroscopic capillary pressure and allows the viscous component to be separated from the capillary one.
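One common energy-based definition, in the spirit of the approach described (generic form; the paper's exact expression may differ): equating the work done against capillary forces during drainage with the change in total interfacial free energy F gives

    p_c = \frac{\mathrm{d}F}{\mathrm{d}V_{n}},
    \qquad
    F = \sigma_{nw} A_{nw} + \left(\sigma_{ns} - \sigma_{ws}\right) A_{ns},

where V_n is the volume of the non-wetting phase, A_{nw} and A_{ns} are the fluid-fluid and non-wetting-fluid/solid interfacial areas, and the sigmas are the corresponding interfacial tensions, related through Young's equation \sigma_{ns} - \sigma_{ws} = \sigma_{nw}\cos\theta. Unlike pressure differences measured across a moving front, this quantity is insensitive to viscous dissipation.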
Abstract:
BACKGROUND: MYCN oncogene amplification has been defined as the most important prognostic factor for neuroblastoma (NB), the most common solid extracranial neoplasm in children. High copy numbers are strongly associated with rapid tumor progression and poor outcome, independently of tumor stage or patient age, and this has become an important factor in treatment stratification. PROCEDURE: By real-time quantitative PCR analysis, we evaluated the clinical relevance of circulating MYCN DNA in 267 children less than 18 months of age with locoregional or metastatic NB. RESULTS: For patients in this age group with INSS stage 4 or 4S NB and for stage 3 patients, serum-based determination of MYCN DNA sequences had good sensitivity (85%, 83%, and 75%, respectively) and high specificity (100%) when compared to direct tumor gene determination. In contrast, the approach showed low sensitivity in patients with stage 1 and 2 disease. CONCLUSION: Our results show that the sensitivity of serum-based MYCN DNA sequence determination depends on the stage of the disease. However, this simple, reproducible assay may represent a reasonably sensitive and very specific tool for assessing tumor MYCN status in cases with stage 3 and metastatic disease, for whom a wait-and-see strategy is often recommended.
Abstract:
The urinary steroid profile is constituted by anabolic androgenic steroids, including testosterone and its relatives, which are extensively metabolized into phase II sulfated or glucuronidated steroids. Liquid chromatography coupled to mass spectrometry (LC-MS) is well suited to the direct analysis of conjugated steroids, which can be used as urinary markers of exogenous steroid administration in doping analysis, without hydrolysis of the conjugated moiety. In this study, a sensitive and selective ultra-high-pressure liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS) method was developed to simultaneously quantify the major urinary metabolites after testosterone intake. Sample preparation of the urine (1 mL) was performed by solid-phase extraction on Oasis HLB sorbent using a 96-well plate format. The conjugated steroids were analyzed by UHPLC-QTOF-MS(E) with a single gradient elution of 36 min (including re-equilibration time) in negative electrospray ionization mode. MS(E) analysis involved parallel alternating acquisitions of both low- and high-collision-energy functions. The method was validated and applied to samples collected from a clinical study performed with a group of healthy human volunteers who had taken testosterone, which were compared with samples from a placebo group. Quantitative results were also compared to GC-MS and LC-MS/MS measurements, and the correlations between the data were found to be satisfactory. The acquisition of full mass spectra over the entire mass range with QTOF mass analyzers promises to extend the steroid profile to a larger number of conjugated steroids.
Abstract:
PURPOSE: Quality of care and its measurement represent a considerable challenge for pediatric smaller-scale comprehensive cancer centers (pSSCC) providing surgical oncology services. It remains unclear whether center size and/or yearly case-flow numbers influence the quality of care, and therefore impact outcomes, for this population of patients. PATIENTS AND METHODS: We performed a 14-year, retrospective, single-center analysis, assessing adherence to treatment protocols and surgical adverse events as quality indicators in abdominal and thoracic pediatric solid tumor surgery. RESULTS: Forty-eight patients, enrolled in a research-associated treatment protocol, underwent 51 cancer-oriented surgical procedures. All the protocols contain precise technical criteria, indications, and instructions for tumor surgery. Overall, compliance with such items was very high, with 997/1,035 items (95 %) meeting protocol requirements. There was no surgical mortality. Twenty-one patients (43 %) had one or more complications, for a total of 34 complications (66 % of procedures). Overall, 85 % of complications were grade 1 or 2 according to the Clavien-Dindo classification, requiring observation or minor medical treatment only. Case-sample and outcome/effectiveness data were comparable to published series. Overall, our data suggest that even with the modest caseload of a pSSCC within a Swiss tertiary academic hospital, compliance with international standards can be very high and the incidence of adverse events can be kept minimal. CONCLUSION: Open and objective data sharing and discussion between pSSCCs will ultimately benefit our patient populations. Our study is an initial step towards the enhancement of critical self-review and quality-of-care measurement in this setting.
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations requiring the quantification of grain sizes in a rock's microstructure. The primary data for grain size estimates are either 1D (i.e., line-intercept methods), 2D (area analysis) or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode or skewness of a crystal size distribution. The resulting average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain-size-sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, even though the primary calculations were obtained in different ways. In order to present an average grain size, we propose to use the area-weighted and volume-weighted mean in the case of unimodal grain size distributions, for 2D and 3D measurements respectively. The shape of the crystal size distribution is important for studies of nucleation and growth of minerals. The shape of the crystal size distribution of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography. The comparison of directly measured 3D data, stereological data and directly presented 2D data shows problems with the quality of the smallest grain sizes and the overestimation of small grain sizes by stereological tools, depending on the type of CSD.
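As an illustration of the proposed 2D average, the area-weighted mean weights each grain's diameter by its sectional area, so large grains dominate the result. A short Python sketch under the common assumption of equivalent-circle diameters (the input areas below are hypothetical):

    import numpy as np

    def area_weighted_mean_diameter(areas):
        """Area-weighted mean grain size from 2D sectional areas.

        Each grain's equivalent-circle diameter is weighted by its
        sectional area, as suggested for unimodal distributions in
        2D measurements.
        """
        areas = np.asarray(areas, dtype=float)
        d = 2.0 * np.sqrt(areas / np.pi)   # equivalent-circle diameters
        return np.sum(areas * d) / np.sum(areas)

    # Hypothetical sectional areas (same units squared) from an area analysis:
    print(area_weighted_mean_diameter([12.0, 30.0, 5.5, 90.0, 41.0]))

The volume-weighted mean for 3D data follows the same pattern with grain volumes as weights and equivalent-sphere diameters.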
Abstract:
According to most political scientists and commentators, direct democracy seems to weaken political parties. Our empirical analysis in the 26 Swiss cantons shows that this thesis in its general form cannot be maintained. Political parties in cantons with extensive use of referendums and initiatives are not in all respects weaker than parties in cantons with little use of direct democratic means of participation. On the contrary, direct democracy goes together with more professional and formalized party organizations. Use of direct democracy is associated with more fragmented and volatile party systems, and with greater support for small parties, but causal interpretations of these relationships are difficult.
Abstract:
We present an open-source ITK implementation of a direct Fourier method for tomographic reconstruction, applicable to parallel-beam x-ray images. Direct Fourier reconstruction makes use of the central-slice theorem to build a polar 2D Fourier space from the 1D transformed projections of the scanned object, which is then resampled onto a Cartesian grid. An inverse 2D Fourier transform eventually yields the reconstructed image. Additionally, we provide a complex wrapper to the BSplineInterpolateImageFunction to overcome ITK's current lack of image interpolators dealing with complex data types. A sample application is presented and extensively illustrated on the Shepp-Logan head phantom. We show that appropriate input zero-padding and 2D-DFT oversampling rates, together with radial cubic b-spline interpolation, improve 2D-DFT interpolation quality and are efficient remedies to reduce reconstruction artifacts.
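A compact numpy/scipy sketch of the same pipeline, for orientation only: it uses linear gridding in place of the radial cubic B-spline interpolation of the ITK implementation, and omits the zero-padding and oversampling refinements described above:

    import numpy as np
    from scipy.interpolate import griddata

    def direct_fourier_reconstruct(sinogram, angles):
        """Direct Fourier reconstruction of parallel-beam projections.

        sinogram: (n_angles, n_det) array, one row per projection angle.
        angles:   projection angles in radians.
        """
        n_angles, n_det = sinogram.shape
        # Central-slice theorem: the 1D FFT of each projection is a
        # radial slice of the object's 2D Fourier spectrum.
        slices = np.fft.fftshift(
            np.fft.fft(np.fft.ifftshift(sinogram, axes=1), axis=1), axes=1)
        freqs = np.fft.fftshift(np.fft.fftfreq(n_det))
        # Polar coordinates of every sample in Fourier space.
        kx = np.outer(np.cos(angles), freqs).ravel()
        ky = np.outer(np.sin(angles), freqs).ravel()
        # Resample the polar samples onto a Cartesian frequency grid.
        gx, gy = np.meshgrid(freqs, freqs)
        spectrum = griddata((kx, ky), slices.ravel(), (gx, gy),
                            method="linear", fill_value=0)
        # Inverse 2D FFT yields the reconstructed image.
        img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(spectrum)))
        return np.real(img)

    # Usage (hypothetical data): 180 projections, one per degree.
    # img = direct_fourier_reconstruct(sino, np.deg2rad(np.arange(180)))

The interpolation step is where reconstruction quality is decided, which is why the paper's zero-padding, oversampling and cubic B-spline choices matter.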