1000 results for PLIC method
Abstract:
The high cost and long time required to determine a retention curve by the conventional Richards Chamber and Haines Funnel methods limit their use; alternative methods that simplify this routine are therefore needed. The filter paper method for determining the soil water retention curve was evaluated and compared to the conventional method. Undisturbed samples were collected from five different soils. Using a Haines Funnel and a Richards Chamber, moisture content was obtained at tensions of 2, 4, 6, 8, 10, 33, 100, 300, 700, and 1,500 kPa. In the filter paper test, the soil matric potential was obtained from the filter-paper calibration equation, and the moisture was subsequently determined from the gravimetric difference. The van Genuchten model was fitted to the observed data of soil matric potential versus moisture. Moisture values from the conventional and filter paper methods, estimated by the van Genuchten model, were compared. The filter paper method, with an R² of 0.99, can be used to determine water retention curves of agricultural soils as an alternative to the conventional method.
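The van Genuchten retention model mentioned above has a standard closed form, θ(h) = θr + (θs − θr)/[1 + (αh)ⁿ]^(1−1/n). As a minimal sketch of the fitting step, here is an illustrative SciPy fit to synthetic tension–moisture data at the tensions listed; the parameter values and data are invented, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Volumetric moisture as a function of tension h (kPa)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Illustrative tension (kPa) / moisture pairs -- synthetic, not study data.
h_obs = np.array([2, 4, 6, 8, 10, 33, 100, 300, 700, 1500], dtype=float)
theta_obs = van_genuchten(h_obs, 0.10, 0.45, 0.05, 1.6)

# Fit the four parameters (theta_r, theta_s, alpha, n) with bounds.
popt, _ = curve_fit(van_genuchten, h_obs, theta_obs,
                    p0=[0.05, 0.40, 0.01, 1.3],
                    bounds=([0, 0, 1e-4, 1.01], [0.3, 0.6, 1.0, 3.0]))
theta_r, theta_s, alpha, n = popt
```

With noiseless synthetic data the fit recovers the generating parameters; with laboratory data, sensible bounds and starting values matter considerably more.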
Abstract:
Particle density, gravimetric and volumetric water contents, and porosity are important basic concepts for characterizing porous systems such as soils. This paper presents a proposal for an experimental method to measure these physical properties, applicable in experimental physics classes, in porous media samples consisting of spheres of the same diameter (monodisperse medium) and of different diameters (polydisperse medium). Soil samples are not used, given the difficulty of working with this porous medium in laboratories dedicated to teaching basic experimental physics. The paper describes the method to be followed and the results of two case studies, one in a monodisperse medium and the other in a polydisperse medium. The particle density results were very close to theoretical values, with a relative deviation (RD) of -2.9% for the lead spheres and +0.1% for the iron spheres. The RD of porosity was also low: -3.6% for the lead spheres and -1.2% for the iron spheres, comparing both procedures (using particle and porous-medium densities versus the saturated volumetric water content) and both monodisperse and polydisperse media.
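The quantities being compared follow from a few definitions: porosity can be obtained either as φ = 1 − ρb/ρp (bulk density over particle density) or as the saturated volumetric water content. A minimal sketch with illustrative numbers; the bulk density and reference porosity below are assumptions, not the paper's measurements:

```python
def porosity_from_densities(bulk_density, particle_density):
    """Porosity = pore volume fraction = 1 - rho_b / rho_p."""
    return 1.0 - bulk_density / particle_density

def relative_deviation(measured, reference):
    """Signed relative deviation (RD) in percent."""
    return 100.0 * (measured - reference) / reference

# Illustrative monodisperse packing: random close packing of equal
# spheres has porosity of roughly 0.36 (used here as the reference).
rho_p = 11.34   # g/cm3, particle density of lead
rho_b = 7.26    # g/cm3, hypothetical measured bulk density
phi = porosity_from_densities(rho_b, rho_p)
rd = relative_deviation(phi, 0.36)
```

The same two helper functions cover both procedures mentioned in the abstract: from the density pair, or by treating the saturated volumetric water content directly as the measured porosity.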
Abstract:
Accurate modeling of flow instabilities requires computational tools able to deal with several interacting scales, from the scale at which fingers are triggered up to the scale at which their effects need to be described. The Multiscale Finite Volume (MsFV) method offers a framework to couple fine- and coarse-scale features by solving a set of localized problems which are used both to define a coarse-scale problem and to reconstruct the fine-scale details of the flow. The MsFV method can be seen as an upscaling-downscaling technique, which is computationally more efficient than standard discretization schemes and more accurate than traditional upscaling techniques. We show that, although the method has proven accurate in modeling density-driven flow under stable conditions, the accuracy of the MsFV method deteriorates in the case of unstable flow, and an iterative scheme is required to control the localization error. To avoid the large computational overhead of the iterative scheme, we suggest several adaptive strategies for both flow and transport. In particular, the concentration gradient is used to identify a front region where instabilities are triggered and an accurate (iteratively improved) solution is required. Outside the front region the problem is upscaled and both flow and transport are solved only at the coarse scale. This adaptive strategy leads to very accurate solutions at roughly the same computational cost as the non-iterative MsFV method. In many circumstances, however, an accurate description of flow instabilities requires a refinement of the computational grid rather than a coarsening. For these problems, we propose a modified iterative MsFV, which can be used as a downscaling method (DMsFV). Compared to other grid refinement techniques, the DMsFV clearly separates the computational domain into refined and non-refined regions, which can be treated separately and matched later. This gives great flexibility to employ different physical descriptions in different regions, where different equations could be solved, offering an excellent framework to construct hybrid methods.
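The adaptive criterion described above (iterate only where the concentration gradient flags a front, keep the coarse solution elsewhere) can be sketched as a mask computation; the grid, concentration field, threshold, and halo width below are illustrative, not the paper's:

```python
import numpy as np

def front_region_mask(concentration, threshold, halo=2):
    """Flag cells whose concentration-gradient magnitude exceeds a
    threshold, then dilate the flagged set by `halo` cells so the
    iteratively improved (fine-scale) solve covers a band around the
    front; elsewhere the coarse-scale solution would be kept."""
    gy, gx = np.gradient(concentration)
    mask = np.hypot(gx, gy) > threshold
    # Simple dilation without SciPy: shift-and-OR in each direction.
    for _ in range(halo):
        m = mask.copy()
        m[1:, :] |= mask[:-1, :]; m[:-1, :] |= mask[1:, :]
        m[:, 1:] |= mask[:, :-1]; m[:, :-1] |= mask[:, 1:]
        mask = m
    return mask

# Illustrative sharp front at x = 0.5 on a 50 x 50 unit-square grid.
x = np.linspace(0.0, 1.0, 50)
c = np.tile(1.0 / (1.0 + np.exp(-(x - 0.5) * 80.0)), (50, 1))
mask = front_region_mask(c, threshold=0.05)
```

In an adaptive scheme, `mask` would select the cells where the localization error is iteratively reduced, while the remaining cells stay at the coarse scale.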
Abstract:
We present a heuristic method for learning error-correcting output code (ECOC) matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, optimal codeword separation is sacrificed in favor of maximum class discrimination in the partitions. The hierarchical partition set is created using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated on the UCI database and applied to a real problem, the classification of traffic sign images.
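A code matrix derived from a hierarchical binary partition of the class space can be sketched as follows: each internal node of the tree contributes one column, with +1/-1 for the two sides of the split and 0 for classes outside that node. This is a generic tree-to-ECOC construction, not the paper's discriminative partitioning criterion:

```python
def ecoc_from_tree(node, n_classes):
    """Build an ECOC matrix (n_classes x n_columns) from a nested-tuple
    binary tree whose leaves are lists of class indices, e.g.
    (([0], [1]), [2]). One column per internal node."""
    columns = []

    def leaves(t):
        return t if isinstance(t, list) else leaves(t[0]) + leaves(t[1])

    def visit(t):
        if isinstance(t, list):          # leaf: a group of classes
            return
        left, right = t
        col = [0] * n_classes
        for c in leaves(left):
            col[c] = +1                  # left side of this split
        for c in leaves(right):
            col[c] = -1                  # right side of this split
        columns.append(col)
        visit(left)
        visit(right)

    visit(node)
    # Transpose: rows are classes, columns are binary problems.
    return [list(row) for row in zip(*columns)]

tree = (([0], [1]), ([2], [3]))          # illustrative 4-class hierarchy
M = ecoc_from_tree(tree, 4)
```

For N classes a binary tree yields N - 1 columns, which is why the resulting matrix is compact compared with exhaustive ECOC designs.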
Abstract:
We develop an abstract extrapolation theory for the real interpolation method that covers and improves the most recent versions of the celebrated theorems of Yano and Zygmund. As a consequence of our method, we give new endpoint estimates for the Sobolev embedding theorem on an arbitrary domain Ω.
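For context, the classical embedding that such endpoint estimates refine can be stated as follows (the Gagliardo-Nirenberg-Sobolev form for W₀^{1,p}, which holds on an arbitrary open domain Ω ⊂ ℝⁿ):

```latex
W_0^{1,p}(\Omega) \hookrightarrow L^{p^*}(\Omega), \qquad
p^* = \frac{np}{n-p}, \quad 1 \le p < n, \qquad
\|u\|_{L^{p^*}(\Omega)} \le C(n,p)\,\|\nabla u\|_{L^{p}(\Omega)}.
```

As p approaches n the constant C(n, p) blows up, and it is at such endpoints that extrapolation theorems in the spirit of Yano supply replacement estimates.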
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
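The idea of a quantitative, position-wise binding model can be illustrated with a plain position weight matrix, a simplified stand-in for the HMM-trained generalized profile described in the paper; the aligned sites and scanned sequence below are invented:

```python
import math

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=1.0):
    """Log-odds position weight matrix from aligned binding sites,
    assuming a uniform background (0.25 per base)."""
    pwm = []
    for i in range(len(sites[0])):
        counts = {b: pseudocount for b in BASES}
        for s in sites:
            counts[s[i]] += 1
        total = sum(counts.values())
        pwm.append({b: math.log2((counts[b] / total) / 0.25) for b in BASES})
    return pwm

def best_hit(pwm, seq):
    """Highest-scoring window and its start position in seq."""
    w = len(pwm)
    scores = [(sum(col[seq[p + i]] for i, col in enumerate(pwm)), p)
              for p in range(len(seq) - w + 1)]
    return max(scores)

sites = ["TTGGC", "TTGGC", "TTGGA", "TTGGC"]   # invented aligned ligands
pwm = pwm_from_sites(sites)
score, pos = best_hit(pwm, "ACGTTTGGCAA")
```

A trained profile plays the same role as `pwm` here (a score per base per position), but the HMM training additionally estimates the parameters from thousands of low- to medium-affinity ligands rather than a handful of aligned sites.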
Abstract:
Repeated passaging in conventional cell culture reduces pluripotency and proliferation capacity of human mesenchymal stem cells (MSC). We introduce an innovative cell culture method whereby the culture surface is dynamically enlarged during cell proliferation. This approach maintains constantly high cell density while preventing contact inhibition of growth. A highly elastic culture surface was enlarged in steps of 5% over the course of a 20-day culture period to 800% of the initial surface area. Nine weeks of dynamic expansion culture produced 10-fold more MSC compared with conventional culture, with one-third the number of trypsin passages. After 9 weeks, MSC continued to proliferate under dynamic expansion but ceased to grow in conventional culture. Dynamic expansion culture fully retained the multipotent character of MSC, which could be induced to differentiate into adipogenic, chondrogenic, osteogenic, and myogenic lineages. Development of an undesired fibrogenic myofibroblast phenotype was suppressed. Hence, our novel method can rapidly provide the high number of autologous, multipotent, and nonfibrogenic MSC needed for successful regenerative medicine.
Abstract:
The Multiscale Finite Volume (MsFV) method has been developed to efficiently solve reservoir-scale problems while conserving fine-scale details. The method employs two grid levels: a fine grid and a coarse grid. The latter is used to calculate a coarse solution to the original problem, which is interpolated to the fine mesh. The coarse system is constructed from the fine-scale problem using restriction and prolongation operators that are obtained by introducing appropriate localization assumptions. Through a successive reconstruction step, the MsFV method is able to provide an approximate, but fully conservative, fine-scale velocity field. For very large problems (e.g., a one-billion-cell model), a two-level algorithm can remain computationally expensive. Depending on the upscaling factor, the computational expense comes either from the cost of solving the coarse problem or from the construction of the local interpolators (basis functions). To ensure numerical efficiency in the former case, the MsFV concept can be reapplied to the coarse problem, leading to a new, coarser level of discretization. One challenge in the use of a multilevel MsFV technique is to find an efficient reconstruction step to obtain a conservative fine-scale velocity field. In this work, we introduce a three-level Multiscale Finite Volume method (MlMsFV) and give a detailed description of the reconstruction step. Complexity analyses of the original MsFV method and the new MlMsFV method are discussed, and their performances in terms of accuracy and efficiency are compared.
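The restriction/prolongation structure described above can be illustrated on a 1D Poisson problem: a coarse operator is assembled as R·A·P and its solution is prolongated back to the fine grid. This is a generic Galerkin two-grid sketch with linear interpolation, not the MsFV basis functions or its conservative reconstruction step:

```python
import numpy as np

m = 4                                    # upscaling factor (illustrative)
n_c = 15                                 # interior coarse nodes
n_f = m * (n_c + 1) - 1                  # 63 interior fine nodes
h = 1.0 / (n_f + 1)

# Fine-scale operator for -u'' = f on (0, 1) with u(0) = u(1) = 0.
A = (2.0 * np.eye(n_f) - np.eye(n_f, k=1) - np.eye(n_f, k=-1)) / h**2

# Prolongation P: piecewise-linear (hat-function) interpolation from
# the coarse nodes (every m-th fine node) to all fine nodes.
P = np.zeros((n_f, n_c))
for g in range(1, n_f + 1):              # fine node at x = g * h
    q, r = divmod(g, m)
    w = r / m
    if q >= 1:
        P[g - 1, q - 1] = 1.0 - w        # left coarse neighbour (or itself)
    if r and q + 1 <= n_c:
        P[g - 1, q] = w                  # right coarse neighbour

R = P.T                                  # restriction operator (Galerkin)
A_c = R @ A @ P                          # coarse-scale operator

f = np.ones(n_f)                         # f = 1; exact solution x(1-x)/2
u_ms = P @ np.linalg.solve(A_c, R @ f)   # coarse solve, then prolongation
u_ref = np.linalg.solve(A, f)            # fully resolved fine-scale solve
err = float(np.max(np.abs(u_ms - u_ref)))
```

For this particular problem the coarse solve matches the fine solution exactly at the coarse nodes, and only interpolation error remains in between; the MsFV method improves on such plain interpolation by building the prolongation from localized flow problems and adding a conservative reconstruction.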
Abstract:
To study the stress-induced effects caused by wounding from a new perspective, a metabolomic strategy based on HPLC-MS was devised for the model plant Arabidopsis thaliana. To detect induced metabolites and precisely localise these compounds among the numerous constitutive metabolites, HPLC-MS analyses were performed in a two-step strategy. In the first step, rapid direct TOF-MS measurements of the crude leaf extract were performed with a ballistic gradient on a short LC column. The HPLC-MS data were investigated by multivariate analysis as total mass spectra (TMS). Principal component analysis (PCA) and hierarchical cluster analysis (HCA) on principal coordinates were combined for data treatment. PCA and HCA demonstrated clear clustering of the plant specimens when the most discriminating ions given by the complete data analysis were selected, leading to the specific detection of discrete induced ions (m/z values). Furthermore, pools of plants with homogeneous behaviour were constituted for confirmatory analysis. In this second step, long high-resolution LC profilings on a UPLC-TOF-MS system were run on the pooled samples. This allowed the putative biological markers induced by wounding to be precisely localised through targeted extraction of the accurate m/z values detected in the screening procedure from the TMS spectra.
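The screening step (multivariate analysis of total mass spectra to pick out the most discriminating ions) can be sketched with a PCA computed via SVD; the intensity matrix below is synthetic, with two channels made artificially "wound-induced" in half of the specimens:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic TMS-like intensity table: 12 specimens x 50 m/z channels.
# Channels 10 and 30 are 'wound-induced' in the last 6 specimens.
X = rng.normal(1.0, 0.1, size=(12, 50))
X[6:, 10] += 2.0
X[6:, 30] += 1.5

# PCA via SVD of the mean-centred matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]               # specimen coordinates (PC1, PC2)

# Channels with the largest |loading| on PC1 = candidate discriminating
# ions (m/z values) to carry forward to confirmatory analysis.
top = np.argsort(np.abs(Vt[0]))[::-1][:2]
```

The specimen `scores` play the role of the PCA clustering, and the loading inspection mirrors how discriminating m/z values can be shortlisted before the confirmatory high-resolution profiling.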
Abstract:
False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features was built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types seized in two countries, whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variations were evaluated through the computation of more than 7,500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated into a forensic intelligence model.
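The two evaluation approaches named above, binary classification at a score threshold and likelihood ratios from the intra-/inter-source score distributions, can be sketched as follows; the score distributions, threshold, and normal-density approximation are all illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic similarity scores: same-source pairs score high,
# different-source pairs low (stand-ins for the profiling metrics).
intra = rng.normal(0.80, 0.05, 4000)    # common-source comparisons
inter = rng.normal(0.40, 0.08, 4000)    # different-source comparisons

threshold = 0.6
type_I  = np.mean(intra <  threshold)   # falsely called 'different source'
type_II = np.mean(inter >= threshold)   # falsely called 'same source'

def likelihood_ratio(score, intra, inter):
    """LR = p(score | same source) / p(score | different source),
    with both densities approximated by normal fits to the data."""
    def pdf(x, sample):
        mu, sd = sample.mean(), sample.std()
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return pdf(score, intra) / pdf(score, inter)

lr_high = likelihood_ratio(0.85, intra, inter)   # supports common source
lr_low = likelihood_ratio(0.35, intra, inter)    # supports different sources
```

The binary approach summarizes performance in two error rates at a fixed threshold, while the likelihood ratio grades the strength of evidence continuously, which is why the two views are complementary.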
Abstract:
Establishing CD8(+) T cell cultures has been empirical, and published methods have largely been laboratory-specific. In this study, we optimized culturing conditions and show that IL-2 concentration is the most critical factor for successfully establishing CD8(+) T cell cultures. High IL-2 concentrations encouraged T cells to proliferate non-specifically, express a B cell marker (B220), and undergo apoptosis. These cells also lost the typical irregular T cell morphology and were incapable of sustaining long-term cultures. Using tetramer and intracellular cytokine assessments, we further demonstrated that many antigen-specific T cells were rendered nonfunctional when expanded under high IL-2 concentrations. When IL-2 was used in the correct range, B220-mediated cell depletion greatly enhanced the success rate of such T cell cultures.