962 results for step-down method
Abstract:
The observational method in tunnel engineering allows real-time evaluation of the actual ground conditions and the adoption of countermeasures if the ground's behavior deviates considerably from predictions. However, it lacks a consistent and structured methodology for using the monitoring data to adapt the support system in real time. Limit criteria above which adaptation is required are not defined, and complex inverse analysis procedures (Rechea et al. 2008, Levasseur et al. 2010, Zentar et al. 2001, Lecampion et al. 2002, Finno and Calvello 2005, Goh 1999, Cui and Pan 2012, Deng et al. 2010, Mathew and Lehane 2013, Sharifzadeh et al. 2012, 2013) may be needed to analyze the problem consistently. In this paper a methodology for the real-time adaptation of support systems during tunneling is presented. In a first step, limit criteria for displacements and stresses are proposed. The methodology uses graphics constructed during the design stage, based on parametric calculations, to assist in the process; when these graphics are not available, since it is not possible to predict every possible scenario, inverse analysis calculations are carried out. The methodology is applied to the “Bois de Peu” tunnel, which is composed of two tubes, each over 500 m long. High levels of uncertainty existed concerning the heterogeneity of the soil and, consequently, the geomechanical design parameters. The methodology was applied in four sections, and the results focus on two of them. It is shown that the methodology has the potential to be applied in real cases, contributing to a consistent approach to real-time adaptation of the support system, and the results highlight the importance of good-quality, specific monitoring data for improving the inverse analysis procedure.
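The adaptation step described above hinges on comparing monitored quantities against predefined limit criteria. A minimal sketch of such a check (the threshold values and the two-level warning/alarm structure are illustrative assumptions, not taken from the paper):

```python
# Hypothetical limit-criteria check for a monitored displacement.
# Threshold values are illustrative, not from the paper.

def check_limit_criteria(measured_mm, warning_mm=20.0, alarm_mm=35.0):
    """Classify a monitored convergence displacement against two-level limits."""
    if measured_mm >= alarm_mm:
        return "alarm: adapt support system"
    if measured_mm >= warning_mm:
        return "warning: increase monitoring frequency"
    return "ok: behaviour within predictions"

print(check_limit_criteria(12.0))
print(check_limit_criteria(28.0))
print(check_limit_criteria(40.0))
```

In practice each instrumented section would carry its own thresholds, derived from the parametric calculations made at the design stage.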
Abstract:
Well-dispersed loads of finely powdered metals, metal oxides, several carbon allotropes or nanoclays are incorporated into highly porous polyamide 6 microcapsules in controllable amounts via an original one-step in situ fabrication technique. It is based on activated anionic polymerization (AAP) of ε-caprolactam in a hydrocarbon solvent performed in the presence of the respective micro- or nanosized loads. The resulting microcapsules, with typical diameters of 25-50 µm, entrap up to 40 wt% of load. Their melt processing produces hybrid thermoplastic composites. Mechanical, electrical conductivity and magnetic response measurements show that transforming in situ loaded microcapsules into composites by melt processing (MP) is a facile and rapid method to fabricate materials with high mechanical resistance and electromagnetic characteristics sufficient for many industrial applications. This novel concept requires low polymerization temperatures, needs no functionalization or compatibilization of the loads, and is easy to scale up to industrial production levels.
Abstract:
A quantitative method for determining viral pollution in large volumes of water using ferric hydroxide gel impregnated on the surface of a glass-fibre cartridge filter. The gel-impregnated filter enabled us to recover 62.5% of virus (Poliomyelitis type I, LSc strain) exogenously added to 400 liters of tap water. The virus concentrator system consists of four cartridge filters; the first three are clarifiers, where contaminants are removed physically without significant virus loss, and the last is impregnated with ferric hydroxide gel, where the virus is adsorbed. After the required volume of water has been processed, the last filter is removed from the system and the viruses are recovered from the gel using 1 liter of glycine/NaOH buffer at pH 11. The eluate is immediately clarified through a series of cellulose acetate membranes mounted in a 142 mm Millipore filter. For the second concentration step, 1 N HCl is added slowly to the eluate to reach pH 3.5-4. MgCl2 is added to a final concentration of 0.05 M, and the viruses are readsorbed on a 0.45 µm porosity (HA) cellulose acetate membrane mounted in a 90 mm Millipore filter. The viruses are recovered using the same eluent plus 10% fetal calf serum, to a final volume of 3 ml. In this way, it was possible to concentrate virus from 400 liters of tap water into 1 liter in the first stage of concentration, and to a final volume of just 3 ml in the second step. The efficiency, simplicity and low operational cost provided by the method make it feasible to study viral pollution of recreational and tap-water sources.
Abstract:
Activation of the mitogen-activated protein (MAP) kinase cascade by progesterone in Xenopus oocytes leads to a marked down-regulation of activity of the amiloride-sensitive epithelial sodium channel (ENaC). Here we have studied the signaling pathways involved in the progesterone effect on ENaC activity. We demonstrate that: (i) the truncation of the C termini of the alphabetagammaENaC subunits results in the loss of the progesterone effect on ENaC; (ii) the effect of progesterone was also suppressed by mutating conserved tyrosine residues in the Pro-X-X-Tyr (PY) motif of the C termini of the beta and gamma ENaC subunits (beta(Y618A) and gamma(Y628A)); (iii) the down-regulation of ENaC activity by progesterone was also suppressed by co-expression of ENaC subunits with a catalytically inactive mutant of Nedd4-2, a ubiquitin ligase that has been previously demonstrated to decrease ENaC cell-surface expression via a ubiquitin-dependent internalization/degradation mechanism; (iv) the effect of progesterone was significantly reduced by suppression of consensus sites (beta(T613A) and gamma(T623A)) for ENaC phosphorylation by the extracellular-regulated kinase (ERK), a MAP kinase previously shown to facilitate the binding of Nedd4 ubiquitin ligases to ENaC; (v) the quantification of cell-surface-expressed ENaC subunits revealed that progesterone decreases ENaC open probability (whole-cell P(o), wcP(o)) and not its cell-surface expression. Collectively, these results demonstrate that the binding of active Nedd4-2 to ENaC is a crucial step in the mechanism of ENaC inhibition by progesterone. Upon activation of ERK, the effect of Nedd4-2 on ENaC open probability can become more important than its effect on ENaC cell-surface expression.
Abstract:
Simian rotavirus SA-11, experimentally seeded, was recovered from raw domestic sewage by a two-step concentration procedure using filtration through a positively charged microporous filter (Zeta Plus 60 S) followed by ultracentrifugation, effecting an 8,000-fold concentration. By this method, a mean recovery of 81% ± 7.5% of the SA-11 virus was achieved.
A New Method for ECG Tracking of Persistent Atrial Fibrillation Termination during Stepwise Ablation
Abstract:
Stepwise radiofrequency catheter ablation (step-CA) has become the treatment of choice for the restoration of sinus rhythm (SR) in patients with long-standing persistent atrial fibrillation (pers-AF). Its success rate appears limited, as the amount of ablation needed to achieve long-term SR is unknown. Multiple organization indexes (OIs) have been developed previously to track the organization of AF during step-CA, however with limited success. We report an adaptive method for tracking AF termination (AF-term) based on OIs characterizing the relationship between harmonic components of atrial activity on the surface ECG. By computing their relative evolution during the last two steps preceding AF-term, we found that the performance of our OIs was superior to that of classical indices in tracking the efficiency of step-CA "en route" to AF-term. Our preliminary results suggest that the gradual synchronization between the fundamental and the first harmonic of AF activity is a promising parameter for predicting AF-term during step-CA.
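The organization indexes described above relate harmonic components of the atrial activity. A rough illustration of the idea (not the paper's exact index; the AF frequency band and the power-ratio definition are assumptions for the sketch):

```python
import numpy as np

# Illustrative organization index: locate the AF fundamental in the
# spectrum of an atrial-activity signal, then relate the power at the
# fundamental and its first harmonic to total spectral power.
# The 4-9 Hz search band and the ratio definition are assumptions.

def organization_index(x, fs, band=(4.0, 9.0)):
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    f0 = freqs[in_band][np.argmax(spec[in_band])]      # fundamental frequency
    p0 = spec[np.argmin(np.abs(freqs - f0))]           # power at f0
    p1 = spec[np.argmin(np.abs(freqs - 2 * f0))]       # power at first harmonic
    return (p0 + p1) / spec.sum(), f0

# Synthetic "atrial activity": 6 Hz fundamental plus its first harmonic.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
oi, f0 = organization_index(x, fs)
```

Tracking the relative evolution of such an index over successive ablation steps is the kind of signal the authors exploit.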
Abstract:
RATIONALE: The aim of this work was to develop and validate a method for the quantification of vitamin D metabolites in serum using ultra-high-pressure liquid chromatography coupled to mass spectrometry (LC/MS), and to validate a high-resolution mass spectrometry (LC/HRMS) approach against a tandem mass spectrometry (LC/MS/MS) approach using a large clinical sample set. METHODS: A fast, accurate and reliable method for the quantification of the vitamin D metabolites 25-hydroxyvitamin D2 (25OH-D2) and 25-hydroxyvitamin D3 (25OH-D3) in human serum was developed and validated. The C3 epimer of 25OH-D3 (3-epi-25OH-D3) was also separated from 25OH-D3. The samples were prepared rapidly via a protein precipitation step followed by solid-phase extraction (SPE) using an HLB µElution plate. Quantification was performed on both LC/MS/MS and LC/HRMS systems. RESULTS: Recovery, matrix effect, and inter- and intra-day reproducibility were assessed. Lower limits of quantification (LLOQs) were determined for 25OH-D2 and 25OH-D3 for the LC/MS/MS approach (6.2 and 3.4 µg/L, respectively) and the LC/HRMS approach (2.1 and 1.7 µg/L, respectively). A Passing & Bablok fit was determined between the two approaches for 25OH-D3 on 662 clinical samples (1.11 + 1.06x). It was also shown that results can be affected by the inclusion of the isomer 3-epi-25OH-D3. CONCLUSIONS: A method for quantification of the relevant vitamin D metabolites was successfully developed and validated. It was shown that LC/HRMS is an accurate, powerful and easy-to-use approach for quantification within clinical laboratories. Finally, the results suggest that it is important to separate 3-epi-25OH-D3 from 25OH-D3.
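The reported Passing & Bablok fit (1.11 + 1.06x) can be applied directly to map values from one method's scale onto the other's; which approach plays the role of x is not stated in the abstract and is assumed here:

```python
# Applying the reported Passing & Bablok fit y = 1.11 + 1.06x for
# 25OH-D3 between the two LC/MS approaches. Which method is "x" is
# an assumption for this sketch.

def map_between_methods(x_ug_per_L, intercept=1.11, slope=1.06):
    """Convert a 25OH-D3 value (µg/L) from one method's scale to the other's."""
    return intercept + slope * x_ug_per_L

print(round(map_between_methods(20.0), 2))  # -> 22.31
```

A slope close to 1 and a small intercept are what make the two approaches clinically interchangeable on this sample set.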
Abstract:
Title: Are general clinical criteria suitable for defining hypothyroidism in people with Down syndrome? Studies on the prevalence of thyroid disorders in people with Down syndrome (DS) show a wide dispersion of results; however, most of these studies agree in indicating a greater frequency than in the general population. These differences may depend on the method of sample selection. In this work we studied a healthy population of adolescents with DS from the Association of Málaga, selected randomly and regardless of medical care. The mean of the TSH distribution, used here as a tool to define the biochemical thyroid function of the studied DS population, was two standard deviations higher than the mean for the general population. These data show that, in terms of TSH, the DS population is distinct from the general population. This clearly indicates that it is necessary to identify and define new criteria to establish what is normal, subclinical, borderline or pathological hypothyroidism, and to propose new treatment guidelines.
Abstract:
Chromatin immunoprecipitation followed by deep sequencing (ChIP-seq) experiments are widely used to determine, within entire genomes, the occupancy sites of any protein of interest, including, for example, transcription factors, RNA polymerases, or histones with or without various modifications. In addition to allowing the determination of occupancy sites within one cell type and under one condition, this method allows, in principle, the establishment and comparison of occupancy maps in various cell types, tissues, and conditions. Such comparisons require, however, that samples be normalized. Widely used normalization methods that include a quantile normalization step perform well when factor occupancy varies at a subset of sites, but may miss uniform genome-wide increases or decreases in site occupancy. We describe a spike adjustment procedure (SAP) that, unlike commonly used normalization methods intervening at the analysis stage, entails an experimental step prior to immunoprecipitation. A constant, low amount from a single batch of chromatin of a foreign genome is added to the experimental chromatin. This "spike" chromatin then serves as an internal control to which the experimental signals can be adjusted. We show that the method improves similarity between replicates and reveals biological differences including global and largely uniform changes.
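The spike-adjustment idea lends itself to a very small sketch: reads mapping to the foreign genome serve as an internal control, and each sample's experimental signal is rescaled so that the spike signals agree across samples. The numbers and the choice of reference level below are illustrative:

```python
# Sketch of spike adjustment: each sample carries (experimental_signal,
# spike_signal); rescaling so that spike signals match a common reference
# removes sample-to-sample technical differences (e.g. sequencing depth).
# Values and the min-based reference are illustrative assumptions.

def spike_adjust(samples):
    """samples: dict name -> (experimental_signal, spike_signal).
    Returns dict name -> adjusted experimental signal."""
    ref = min(spike for _, spike in samples.values())  # common reference level
    return {name: exp * (ref / spike)
            for name, (exp, spike) in samples.items()}

adj = spike_adjust({
    "wt":  (1000.0, 50.0),
    "mut": (1500.0, 100.0),  # deeper sequencing inflates both signals
})
```

After adjustment, a genuinely uniform genome-wide change in occupancy survives, whereas quantile-style normalization would flatten it away.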
Abstract:
Realistic rendering of animation is known to be an expensive processing task when physically-based global illumination methods are used to improve illumination details. This paper presents an acceleration technique for computing animations in radiosity environments. The technique is based on an interpolation approach that exploits temporal coherence in radiosity. A fast global Monte Carlo pre-processing step is introduced into the computation of the animated sequence to select important frames. These are fully computed and used as a base for the interpolation of the whole sequence. The approach is completely view-independent: once the illumination is computed, it can be visualized by any animated camera. Results show significant speed-ups, indicating that the technique could be an interesting alternative to deterministic methods for computing non-interactive radiosity animations for moderately complex scenarios.
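The keyframe scheme can be illustrated with a minimal sketch: radiosity is fully computed only at selected key frames, and in-between frames are obtained by interpolating per-patch radiosity values (plain linear interpolation is assumed here for illustration; the paper's interpolation may differ):

```python
# Per-patch interpolation between two fully computed radiosity key
# frames. Linear blending is an assumption for this sketch.

def interpolate_frame(key_a, key_b, t):
    """Radiosity per patch at parameter t in [0, 1] between two key frames."""
    return [(1.0 - t) * a + t * b for a, b in zip(key_a, key_b)]

# Three patches, halfway between two key frames.
frame = interpolate_frame([0.2, 0.8, 0.5], [0.4, 0.6, 0.5], 0.5)
```

Because radiosity is view-independent, the interpolated per-patch values can be rendered from any camera path afterwards.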
Abstract:
Platelet-rich plasma (PRP) is a volume of the plasma fraction of autologous blood with a platelet concentration above baseline whole-blood values due to processing and concentration. PRP is used in various surgical fields to enhance soft-tissue and bone healing by delivering supra-physiological concentrations of autologous platelets at the site of tissue damage. These preparations may provide a good cellular source of various growth factors and cytokines, and modulate the tissue response to injury. Common clinically available materials for blood preparation, combined with a two-step centrifugation protocol at 280 g each to ensure cellular component integrity, provided platelet preparations concentrated 2-3 fold over whole-blood values. Costs were shown to be lower than those of other methods, which require specific equipment and high-cost disposables, while safety and traceability can be increased. PRP can be used for the treatment of wounds of all types, including burns, and of split-thickness skin graft donor sites, which are frequently used in burn management. The procedure can be standardized and is easy to adapt in clinical settings with minimal infrastructure, thus enabling large numbers of patients to benefit from a form of cellular therapy.
Abstract:
Background: Microarray data are frequently used to characterize the expression profile of a whole genome and to compare the characteristics of that genome under several conditions. Geneset analysis methods have been described previously to analyze the expression values of several genes related by known biological criteria (metabolic pathway, pathology signature, co-regulation by a common factor, etc.) at the same time, and these methods make it possible to use more values to help discover the underlying biological mechanisms. Results: As several methods assume different null hypotheses, we propose to reformulate the main question that biologists seek to answer. To determine which genesets are associated with expression values that differ between two experiments, we focused on three ad hoc criteria: expression levels, the direction of individual gene expression changes (up- or down-regulation), and correlations between genes. We introduce the FAERI methodology, tailored from a two-way ANOVA to examine these criteria. The significance of the results was evaluated under the self-contained null hypothesis, using label sampling or by inferring the null distribution from normally distributed random data. Evaluations performed on simulated data revealed that FAERI outperforms currently available methods for each type of set tested. We then applied the FAERI method to analyze three real-world datasets on the hypoxia response. FAERI was able to detect more genesets than other methodologies, and the genesets selected were coherent with current knowledge of the cellular response to hypoxia. Moreover, the genesets selected by FAERI were confirmed when the analysis was repeated on two additional related datasets. Conclusions: The expression values of genesets are associated with several biological effects. The underlying mathematical structure of the genesets allows for analysis of data from several genes at the same time. Focusing on expression levels, the direction of the expression changes, and correlations, we showed that a two-step data reduction significantly improved the performance of geneset analysis using a modified two-way ANOVA procedure and detected genesets that current methods fail to detect.
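The label-sampling evaluation mentioned above is a permutation test: sample labels are shuffled and the statistic recomputed to build a null distribution under the self-contained null hypothesis. A minimal sketch with a stand-in score function (the absolute mean group difference, not FAERI itself):

```python
import random

# Label-sampling significance evaluation: permute sample labels and
# recompute the statistic to estimate a p-value. The score function
# here is a simple stand-in, not the FAERI statistic.

def geneset_score(values, labels):
    a = [v for v, l in zip(values, labels) if l == 0]
    b = [v for v, l in zip(values, labels) if l == 1]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def label_sampling_pvalue(values, labels, n_perm=2000, seed=0):
    rng = random.Random(seed)
    observed = geneset_score(values, labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)                 # break the label/value link
        if geneset_score(values, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)          # add-one to avoid p = 0

values = [5.1, 4.8, 5.3, 8.9, 9.2, 9.0]       # one gene across six samples
labels = [0, 0, 0, 1, 1, 1]
p = label_sampling_pvalue(values, labels)
```

With only six samples there are few distinct label assignments, which caps how small the p-value can get; real geneset analyses pool many genes for power.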
Abstract:
A select-divide-and-conquer variational method to approximate configuration interaction (CI) is presented. Given an orthonormal set made up of occupied orbitals (Hartree-Fock or similar) and suitable correlation orbitals (natural or localized orbitals), a large N-electron target space S is split into subspaces S0, S1, S2, ..., SR. S0, of dimension d0, contains all configurations K with attributes (energy contributions, etc.) above thresholds T0 = {T0^egy, T0^etc}; the CI coefficients in S0 remain free to vary throughout. S1 accommodates Ks with attributes above T1 ≤ T0. An eigenproblem of dimension d0+d1 for S0+S1 is solved first, after which the last d1 rows and columns are contracted into a single row and column, thus freezing the last d1 CI coefficients hereinafter. The process is repeated with successive Sj (j ≥ 2) chosen so that the corresponding CI matrices fit in random access memory (RAM). Davidson's eigensolver is used R times. The final energy eigenvalue (lowest or excited) is always above the corresponding exact eigenvalue in S. Threshold values {Tj; j = 0, 1, 2, ..., R} regulate accuracy; for large-dimensional S, high accuracy requires S0+S1 to be solved outside RAM. From there on, however, usually only a few Davidson iterations in RAM are needed for each step, so that Hamiltonian matrix-element evaluation becomes rate-determining. One µhartree accuracy is achieved for an eigenproblem of order 24 × 10^6, involving 1.2 × 10^12 nonzero matrix elements and 8.4 × 10^9 Slater determinants.
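The solve-then-contract loop can be illustrated on a toy dense matrix; here np.linalg.eigh stands in for Davidson's eigensolver, the block sizes are arbitrary, and the contraction is written as a Rayleigh-Ritz projection onto the S0 basis vectors plus the single frozen S1 direction:

```python
import numpy as np

# Toy sketch of select-divide-and-conquer CI on a small symmetric
# "Hamiltonian": solve the S0+S1 eigenproblem, freeze the S1 part of the
# ground state, contract S1 into one basis vector, then continue with S2.
# Block sizes and the matrix are illustrative.

rng = np.random.default_rng(1)
n, d0, d1 = 12, 4, 4                       # remaining d2 = n - d0 - d1
A = rng.standard_normal((n, n))
H = (A + A.T) / 2 + np.diag(np.arange(n, dtype=float))

S0 = np.arange(d0)
S1 = np.arange(d0, d0 + d1)
S2 = np.arange(d0 + d1, n)

# Step 1: solve the S0+S1 block; freeze and normalize its S1 coefficients.
idx01 = np.concatenate([S0, S1])
w, V = np.linalg.eigh(H[np.ix_(idx01, idx01)])
v1 = V[d0:, 0]
v1 /= np.linalg.norm(v1)

# Step 2: contract S1 into a single basis vector, add S2, and solve again.
basis = np.zeros((n, d0 + 1 + len(S2)))
basis[S0, :d0] = np.eye(d0)                # S0 coefficients stay free
basis[S1, d0] = v1                         # frozen S1 direction
basis[S2, d0 + 1:] = np.eye(len(S2))       # S2 coefficients free
M = basis.T @ H @ basis                    # Rayleigh-Ritz reduced matrix

approx = np.linalg.eigh(M)[0][0]
exact = np.linalg.eigh(H)[0][0]
# Variational bound from the abstract: approx is never below exact.
```

Because the reduced basis is orthonormal, the contracted eigenvalue obeys the variational bound stated in the abstract (approx ≥ exact), approaching equality as the thresholds tighten.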
Abstract:
Identification of post-translational modifications of proteins in biological samples often requires access to pre-analytical purification and concentration methods. In the purification step, high or low molecular weight substances can be removed by size-exclusion filters, and highly abundant proteins can be removed, or low-abundance proteins enriched, by specific capturing tools. This paper describes the experience and results obtained with a recently emerged and easy-to-use affinity purification kit for enrichment of the low amounts of EPO found in urine and plasma specimens. The kit can be used as a pre-step in the EPO doping control procedure, as an alternative to the commonly used ultrafiltration, for detecting aberrantly glycosylated isoforms. The commercially available affinity purification kit contains small disposable anti-EPO monolith columns (6 µL volume, Ø7 mm, length 0.15 mm) together with all required buffers. A 24-channel vacuum manifold was used for simultaneous processing of samples. The column concentrated EPO from 20 mL of urine down to a 55 µL eluate, a concentration factor of 240 times, while roughly 99.7% of non-relevant urine proteins were removed. The recoveries of Neorecormon (epoetin beta) and the EPO analogues Aranesp and Mircera applied to buffer were high: 76%, 67% and 57%, respectively. The recovery of endogenous EPO from human urine was 65%. High recoveries were also obtained when purifying human, mouse and equine EPO from serum, and human EPO from cerebrospinal fluid. Evaluation with the accredited EPO doping control method based on isoelectric focusing (IEF) showed that the affinity purification procedure did not change the isoform distribution for rhEPO, Aranesp, Mircera or endogenous EPO. The kit should be particularly useful for applications in which it is essential to avoid carry-over effects, a problem commonly encountered with conventional particle-based affinity columns. The encouraging results with EPO suggest that similar affinity monoliths, with the appropriate antibodies, should constitute useful tools for general applications in sample preparation: not only for doping control of EPO and other hormones such as growth hormone and insulin, but also for the study of post-translational modifications of other low-abundance proteins in biological and clinical research, and for sample preparation prior to in vitro diagnostics.
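As a quick consistency check on the figures reported above, the effective concentration factor is the volume ratio scaled by the recovery, which lands close to the stated ~240-fold:

```python
# Consistency check: 20 mL urine reduced to a 55 µL eluate, with about
# 65% recovery of endogenous EPO. The effective concentration factor is
# the volume ratio times the recovery.

v_in_uL, v_out_uL, recovery = 20_000.0, 55.0, 0.65
factor = recovery * (v_in_uL / v_out_uL)
print(round(factor))  # ≈ 236, in line with the reported ~240-fold
```

The raw volume ratio alone (about 364) overstates the enrichment; folding in recovery gives the figure that matters for downstream detection limits.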
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in 2D polar coordinates. An important application of this method and its extensions will be the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh, which can be arbitrarily heterogeneous, consisting of two or more concentric rings representing the fluid in the center and the surrounding porous medium. The spatial discretization is based on a Chebyshev expansion in the radial direction and a Fourier expansion in the azimuthal direction, with a Runge-Kutta integration scheme for the time evolution. A domain decomposition method is used to match the fluid-solid boundary conditions based on the method of characteristics. This multi-domain approach allows for significant reductions of the number of grid points in the azimuthal direction for the inner grid domain, and thus for corresponding increases of the time step and enhancements of computational efficiency. The viability and accuracy of the proposed method have been rigorously tested and verified through comparisons with analytical solutions as well as with results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. Finally, the proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is adequately handled.
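The radial building block of such schemes is a Chebyshev differentiation matrix on the Gauss-Lobatto points; the sketch below uses the standard textbook construction and is not code from the paper:

```python
import numpy as np

# Standard Chebyshev differentiation matrix on Gauss-Lobatto points
# (the usual spectral-methods construction), the kind of operator used
# for the radial direction in Chebyshev-Fourier schemes.

def cheb(N):
    """Differentiation matrix D and Gauss-Lobatto points x on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                # negative-sum trick for diagonal
    return D, x

D, x = cheb(16)
# Differentiating u(x) = x**3 should give 3*x**2 to machine precision,
# since the matrix is exact for polynomials up to degree N.
err = np.max(np.abs(D @ x**3 - 3 * x**2))
```

Pairing such a matrix radially with FFT-based derivatives azimuthally gives the tensor-product discretization the abstract describes, with time stepping handled separately (here, Runge-Kutta).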