983 results for Gradient Method
Abstract:
Traditionally, the analysis of gene regulatory regions was limited by its restriction to artificial contexts (e.g., reporter constructs of limited size). With the advent of the BAC recombineering technique, genomic constructs can now be generated to test regulatory elements in their endogenous environment. The expression of the transcriptional repressor brinker (brk) is negatively regulated by Dpp signaling. Repression is mediated by small sequence motifs, the silencer elements (SEs), which are present in multiple copies in the regulatory region of brk. In this work, we manipulated the SEs in the brk locus. We precisely quantified the effects of the individual SEs on the Brk gradient in the wing disc by employing a 1D data extraction method followed by quantification of the data against an internal control. We found that mutating the SEs results in an expansion of the brk expression domain. However, even after mutating all predicted SEs, repression could still be observed in regions of maximal Dpp levels. Thus, our data point to the presence of additional, low-affinity binding sites in the brk locus.
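The 1D extraction and internal-control normalization described above can be sketched in a few lines; the image, profile row, and control region below are hypothetical stand-ins for illustration, not the paper's actual pipeline:

```python
import numpy as np

def normalized_profile(image, row, control_region):
    """Extract a 1D intensity profile along one image row and normalize it
    to the mean intensity of an internal control region, so that gradients
    from different discs become comparable."""
    profile = image[row, :].astype(float)          # 1D data extraction
    control = image[control_region].astype(float)  # internal control patch
    return profile / control.mean()                # profile in control units

# Toy example: a linear intensity gradient with a dim control patch
img = np.tile(np.linspace(0.0, 10.0, 50), (20, 1))
prof = normalized_profile(img, row=10,
                          control_region=(slice(0, 5), slice(0, 5)))
```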
Abstract:
A simple and sensitive LC-MS method was developed and validated for the simultaneous quantification of aripiprazole (ARI), atomoxetine (ATO), duloxetine (DUL), clozapine (CLO), olanzapine (OLA), sertindole (STN), venlafaxine (VEN) and their active metabolites dehydroaripiprazole (DARI), norclozapine (NCLO), dehydrosertindole (DSTN) and O-desmethylvenlafaxine (OVEN) in human plasma. The above-mentioned compounds and the internal standard (remoxipride) were extracted from 0.5 mL plasma by solid-phase extraction (mixed-mode support). The analytical separation was carried out by reversed-phase liquid chromatography at basic pH (pH 8.1) in gradient mode. All analytes were monitored by MS detection in single ion monitoring mode, and the method was validated covering the corresponding therapeutic ranges: 2-200 ng/mL for DUL, OLA, and STN; 4-200 ng/mL for DSTN; 5-1000 ng/mL for ARI and DARI; and 2-1000 ng/mL for ATO, CLO, NCLO, VEN, and OVEN. For all investigated compounds, good performance was obtained in terms of recovery, selectivity, stability, repeatability, intermediate precision, trueness, and accuracy. Real patient plasma samples were then successfully analysed.
Abstract:
Despite the considerable environmental importance of mercury (Hg), given its high toxicity and ability to contaminate large areas via atmospheric deposition, little is known about its activity in soils, especially tropical soils, in comparison with other heavy metals. This lack of information about Hg arises because analytical methods for its determination are more laborious and expensive than those for other heavy metals. The situation is even more precarious regarding speciation of Hg in soils, since sequential extraction methods are also inefficient for this metal. The aim of this paper is to present a technique of thermal desorption associated with atomic absorption spectrometry, TDAAS, as an efficient tool for quantitative determination of Hg in soils. The method consists of the release of Hg by heating, followed by its quantification by atomic absorption spectrometry. It was developed by constructing calibration curves in different soil samples based on increasing volumes of standard Hg²⁺ solutions. Performance parameters (accuracy, precision, and the limits of quantification and detection) were evaluated. No matrix interference was detected. Certified reference samples and comparison with a Direct Mercury Analyzer, DMA (another highly recognized technique), were used to validate the method, which proved to be accurate and precise.
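As an illustration of the calibration-curve construction, the sketch below fits a straight line to invented signal-versus-Hg-mass data and derives detection and quantification limits from the common 3.3σ/slope and 10σ/slope conventions; the data and criteria are assumptions, not the paper's:

```python
import numpy as np

# Hypothetical calibration data: spiked Hg mass (ng) vs. integrated
# absorbance signal, mimicking the standard-addition curves described above.
mass_ng = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
signal  = np.array([0.002, 0.051, 0.099, 0.201, 0.398])

slope, intercept = np.polyfit(mass_ng, signal, 1)   # linear calibration fit
residuals = signal - (slope * mass_ng + intercept)
sigma = residuals.std(ddof=2)                       # std. error of the fit

# Common IUPAC-style estimates of detection and quantification limits
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

# Interpolating an unknown sample from its measured signal
unknown_signal = 0.150
hg_mass = (unknown_signal - intercept) / slope
```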
Abstract:
Knowledge of the terms (or processes) of the soil water balance equation, that is, the components of the soil water balance over the cycle of an agricultural crop, is essential for soil and water management. Thus, the aim of this study was to analyze these components in a Cambissolo Háplico (Haplocambids) growing muskmelon (Cucumis melo L.) under drip irrigation, with covered and uncovered soil, in the municipality of Baraúna, State of Rio Grande do Norte, Brazil (05° 04’ 48” S, 37° 37’ 00” W). Muskmelon, variety AF-646, was cultivated in a flat experimental area (20 × 50 m). The crop was spaced at 2.00 m between rows and 0.35 m between plants, in a total of ten 50-m-long plant rows. At points corresponding to ⅓ and ⅔ of each plant row, four tensiometers (0.1 m apart from each other) were set up at depths of 0.1, 0.2, 0.3, and 0.4 m, adjacent to the irrigation line (0.1 m from the plant row), between two selected plants. Five random plant rows were mulched with dry banana (Musa sp.) leaves along the drip line, forming a 0.5-m-wide strip that covered an area of 25 m² per plant row with covered soil. In the other five rows, there was no covering. Thus, the experiment consisted of two treatments, with 10 replicates, over four phenological stages: initial (7-22 DAS, days after sowing), growing (22-40 DAS), fruiting (40-58 DAS), and maturation (58-70 DAS). Rainfall was measured with a rain gauge, and water storage was estimated by the trapezoidal method, based on tensiometer readings and soil water retention curves. For soil water flux densities at 0.3 m, the tensiometers at depths of 0.2, 0.3, and 0.4 m were considered; the tensiometer at 0.3 m was used to estimate soil water content from the soil water retention curve at this depth, and the other two were used to calculate the total potential gradient.
Flux densities were calculated using the Darcy-Buckingham equation, with hydraulic conductivity determined by the instantaneous profile method. Actual crop evapotranspiration was calculated as the unknown of the soil water balance equation. The soil water balance method proved effective in estimating the actual evapotranspiration of irrigated muskmelon; there was no significant effect of soil coverage on capillary rise, internal drainage, actual crop evapotranspiration, or muskmelon yield compared with the uncovered soil; the transport of water caused by evaporation in the uncovered soil was controlled by the break in capillarity at the soil-atmosphere interface, which led to similar water dynamics under both management practices.
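A minimal sketch of the flux and balance computations described above; the sign convention, hydraulic conductivity, potentials, and water-balance terms below are illustrative assumptions, not the field data:

```python
def darcy_buckingham_flux(H_upper, H_lower, dz, K):
    """Flux density (mm/day) from the total-potential gradient between two
    tensiometer depths; with this convention, positive q means upward flow
    (capillary rise) into the root zone."""
    grad = (H_upper - H_lower) / dz   # total potential gradient (m/m)
    return -K * grad

def actual_et(precip, irrigation, delta_storage, q_bottom):
    """Actual evapotranspiration as the unknown of the water balance:
    ET = P + I + q_bottom - change in storage (all in mm per period)."""
    return precip + irrigation + q_bottom - delta_storage

# Illustrative values: total potentials (m) at the 0.2 m and 0.4 m
# tensiometers, K (mm/day) from an instantaneous-profile fit
q = darcy_buckingham_flux(H_upper=-1.50, H_lower=-1.20, dz=0.2, K=2.0)
et = actual_et(precip=0.0, irrigation=35.0, delta_storage=-3.0, q_bottom=q)
```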
Abstract:
The high cost and long time required to determine a retention curve by the conventional Richards chamber and Haines funnel methods limit its use; therefore, alternative methods to facilitate this routine are needed. The filter paper method for determining the soil water retention curve was evaluated and compared to the conventional method. Undisturbed samples were collected from five different soils. Using a Haines funnel and a Richards chamber, moisture content was obtained at tensions of 2, 4, 6, 8, 10, 33, 100, 300, 700, and 1,500 kPa. In the filter paper test, the soil matric potential was obtained from the filter-paper calibration equation, and the moisture content was subsequently determined from the gravimetric difference. The van Genuchten model was fitted to the observed data of soil matric potential versus moisture. Moisture values from the conventional and filter paper methods, estimated by the van Genuchten model, were compared. The filter paper method, with an R² of 0.99, can be used to determine water retention curves of agricultural soils as an alternative to the conventional method.
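The van Genuchten retention model used above has the closed form θ(h) = θr + (θs − θr)/[1 + (αh)^n]^m with m = 1 − 1/n; the sketch below evaluates it at the tensions listed, with illustrative parameters rather than values fitted to the five soils:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Soil water retention: volumetric moisture as a function of tension
    h (kPa), with the Mualem restriction m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Tensions applied in the Haines funnel and Richards chamber (kPa)
tensions = np.array([2, 4, 6, 8, 10, 33, 100, 300, 700, 1500], dtype=float)

# Illustrative parameters (not fitted to the paper's soils)
theta = van_genuchten(tensions, theta_r=0.10, theta_s=0.45,
                      alpha=0.05, n=1.8)
```

Moisture decreases monotonically from near saturation (θs) toward the residual content (θr) as tension grows, which is the shape both methods must reproduce.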
Abstract:
Particle density, gravimetric and volumetric water contents, and porosity are basic concepts important for characterizing porous systems such as soils. This paper proposes an experimental method to measure these physical properties, applicable in experimental physics classes, in porous media samples consisting of spheres of the same diameter (monodisperse medium) and of different diameters (polydisperse medium). Soil samples are not used, given the difficulty of working with this porous medium in laboratories dedicated to teaching basic experimental physics. The paper describes the method to be followed and the results of two case studies, one in a monodisperse medium and the other in a polydisperse medium. The particle density results were very close to the theoretical values, with a relative deviation (RD) of -2.9 % for the lead spheres and +0.1 % for the iron spheres. The RD of porosity was also low: -3.6 % for the lead spheres and -1.2 % for the iron spheres, comparing the two procedures (one based on particle and porous-medium densities, the other on saturated volumetric water content) in both monodisperse and polydisperse media.
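The basic relations behind the method (particle density from mass and displaced volume, porosity from the density ratio) can be written out directly; the numbers below are invented to mimic the iron-sphere case and are not the paper's measurements:

```python
def particle_density(dry_mass_g, displaced_volume_cm3):
    """Particle density (g/cm3) from the mass of the spheres and the
    water volume they displace."""
    return dry_mass_g / displaced_volume_cm3

def porosity_from_densities(bulk_density, particle_density):
    """Total porosity of the packing: phi = 1 - rho_b / rho_p."""
    return 1.0 - bulk_density / particle_density

def relative_deviation(measured, reference):
    """Relative deviation (%) of a measured value from its reference."""
    return 100.0 * (measured - reference) / reference

# Illustrative numbers for a packing of iron spheres
# (reference density of iron: 7.87 g/cm3)
rho_p = particle_density(dry_mass_g=393.5, displaced_volume_cm3=50.0)
phi = porosity_from_densities(bulk_density=5.04, particle_density=rho_p)
rd = relative_deviation(rho_p, 7.87)
```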
Abstract:
This work concerns the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
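As a rough sketch of clustering by stochastic gradient descent, the code below trains the simplest possible functional model, an online k-means whose centroid matrix stands in for the thesis's neural network; the data, initialization, and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian blobs
X = np.vstack([rng.normal(-3.0, 0.5, (200, 2)),
               rng.normal(+3.0, 0.5, (200, 2))])

# One centroid seeded near each blob so convergence is deterministic
centroids = np.array([[-1.0, -1.0], [1.0, 1.0]])

# Online k-means: each sample triggers one stochastic gradient step on the
# quantization error, so the model scales to streams and huge databases
lr, epochs = 0.05, 10
for _ in range(epochs):
    for i in rng.permutation(len(X)):
        j = np.argmin(((centroids - X[i]) ** 2).sum(axis=1))  # assignment
        centroids[j] += lr * (X[i] - centroids[j])            # SGD update

labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
```

Because the model is updated one sample at a time, no pairwise distance matrix is ever built, which is what makes the approach applicable to very large databases and avoids the out-of-sample problem: new points are labeled by a single evaluation of the trained model.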
Abstract:
We present a heuristic method for learning error correcting output codes matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, the optimal codeword separation is sacrificed in favor of a maximum class discrimination in the partitions. The creation of the hierarchical partition set is performed using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
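The construction can be sketched as follows: each internal node of a binary tree over the class set yields one column of the coding matrix (+1 for one group, −1 for the other, 0 for classes not involved), giving a compact matrix of N − 1 columns for N classes. The split below is a plain halving; the method described above instead chooses the partition that maximizes a discriminative criterion:

```python
import numpy as np

def tree_ecoc(classes):
    """Build an ECOC coding matrix from a recursive binary partition of
    the class set: one column per internal node of the binary tree."""
    index = {c: i for i, c in enumerate(classes)}
    columns = []

    def split(group):
        if len(group) < 2:
            return
        mid = len(group) // 2                 # naive halving split
        left, right = group[:mid], group[mid:]
        col = np.zeros(len(classes), dtype=int)
        for c in left:
            col[index[c]] = 1
        for c in right:
            col[index[c]] = -1
        columns.append(col)
        split(left)
        split(right)

    split(list(classes))
    return np.column_stack(columns)  # one row per class, one column per split

# Hypothetical traffic-sign classes, echoing the application above
M = tree_ecoc(["stop", "yield", "speed30", "speed50"])
```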
Abstract:
We develop an abstract extrapolation theory for the real interpolation method that covers and improves the most recent versions of the celebrated theorems of Yano and Zygmund. As a consequence of our method, we give new endpoint estimates for the Sobolev embedding theorem on an arbitrary domain Ω.
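For orientation, one standard formulation of Yano's classical extrapolation theorem (on a finite measure space, stated here only as background, not as the paper's result) reads:

```latex
% Yano's extrapolation theorem (classical form, finite measure space):
% an operator whose L^p norm blows up like (p-1)^{-alpha} as p -> 1
% extends boundedly from L log^alpha L into L^1.
\|Tf\|_{L^{p}} \le \frac{C}{(p-1)^{\alpha}}\,\|f\|_{L^{p}}
\quad (1 < p \le p_{0})
\;\Longrightarrow\;
\|Tf\|_{L^{1}(\Omega)} \le C' \Bigl( 1 + \int_{\Omega} |f|\,
\log^{\alpha}\!\bigl(e + |f|\bigr)\, d\mu \Bigr).
```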
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
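A drastically simplified version of such a binding-site model is a position weight matrix estimated from the selected ligands; the sketch below builds log-odds scores from a handful of invented sites (not real CTF/NFI ligands) and scores candidate sequences additively, without the profile/HMM machinery or the covariance analysis described above:

```python
import numpy as np

BASES = "ACGT"

def pwm_from_sites(sites, pseudocount=1.0):
    """Position weight matrix (log-odds vs. a uniform background) built
    from aligned, SELEX-selected binding sites."""
    L = len(sites[0])
    counts = np.full((L, 4), pseudocount)    # pseudocounts avoid log(0)
    for s in sites:
        for i, b in enumerate(s):
            counts[i, BASES.index(b)] += 1
    freqs = counts / counts.sum(axis=1, keepdims=True)
    return np.log2(freqs / 0.25)             # log-odds vs. uniform bases

def score(pwm, seq):
    """Additive log-odds score of one candidate site; positions are
    treated as independent, unlike the covariance model in the paper."""
    return sum(pwm[i, BASES.index(b)] for i, b in enumerate(seq))

# Toy selected ligands (hypothetical sequences)
sites = ["TTGGCA", "TTGGCA", "TTGGCT", "TTGGAA"]
pwm = pwm_from_sites(sites)
```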
Abstract:
Chronic hepatitis C is a major healthcare problem. The response to antiviral therapy for patients with chronic hepatitis C has previously been defined biochemically and by PCR. However, changes in the hepatic venous pressure gradient (HVPG) may be considered as an adjunctive end point for the therapeutic evaluation of antiviral therapy in chronic hepatitis C. It is a validated technique which is safe, well tolerated, well established, and reproducible. Serial HVPG measurements may be the best way to evaluate response to therapy in chronic hepatitis C.
Abstract:
Repeated passaging in conventional cell culture reduces pluripotency and proliferation capacity of human mesenchymal stem cells (MSC). We introduce an innovative cell culture method whereby the culture surface is dynamically enlarged during cell proliferation. This approach maintains constantly high cell density while preventing contact inhibition of growth. A highly elastic culture surface was enlarged in steps of 5% over the course of a 20-day culture period to 800% of the initial surface area. Nine weeks of dynamic expansion culture produced 10-fold more MSC compared with conventional culture, with one-third the number of trypsin passages. After 9 weeks, MSC continued to proliferate under dynamic expansion but ceased to grow in conventional culture. Dynamic expansion culture fully retained the multipotent character of MSC, which could be induced to differentiate into adipogenic, chondrogenic, osteogenic, and myogenic lineages. Development of an undesired fibrogenic myofibroblast phenotype was suppressed. Hence, our novel method can rapidly provide the high number of autologous, multipotent, and nonfibrogenic MSC needed for successful regenerative medicine.
Abstract:
The Multiscale Finite Volume (MsFV) method has been developed to efficiently solve reservoir-scale problems while conserving fine-scale details. The method employs two grid levels: a fine grid and a coarse grid. The latter is used to calculate a coarse solution to the original problem, which is interpolated to the fine mesh. The coarse system is constructed from the fine-scale problem using restriction and prolongation operators that are obtained by introducing appropriate localization assumptions. Through a successive reconstruction step, the MsFV method is able to provide an approximate, but fully conservative, fine-scale velocity field. For very large problems (e.g., a one-billion-cell model), a two-level algorithm can remain computationally expensive. Depending on the upscaling factor, the computational expense comes either from the costs associated with the solution of the coarse problem or from the construction of the local interpolators (basis functions). To ensure numerical efficiency in the former case, the MsFV concept can be reapplied to the coarse problem, leading to a new, coarser level of discretization. One challenge in the use of a multilevel MsFV technique is to find an efficient reconstruction step that yields a conservative fine-scale velocity field. In this work, we introduce a three-level Multiscale Finite Volume method (MlMsFV) and give a detailed description of the reconstruction step. Complexity analyses of the original MsFV method and the new MlMsFV method are discussed, and their performances in terms of accuracy and efficiency are compared.
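The restriction/prolongation construction at the heart of the method can be illustrated algebraically on a toy 1D system. Note that this is a bare two-grid sketch with the crudest possible (piecewise-constant) operators: it only reproduces the coarse shape of the solution and omits the localized basis functions and the conservative reconstruction step of the actual MsFV method:

```python
import numpy as np

n_fine, ratio = 16, 4
n_coarse = n_fine // ratio

# Fine-scale operator: 1D Laplacian with Dirichlet ends, and a unit source
A = (np.diag(np.full(n_fine, 2.0))
     - np.diag(np.ones(n_fine - 1), 1)
     - np.diag(np.ones(n_fine - 1), -1))
b = np.ones(n_fine)

# Averaging restriction R over blocks of `ratio` cells; piecewise-constant
# prolongation P back to the fine grid (the simplest operator pair)
R = np.zeros((n_coarse, n_fine))
for J in range(n_coarse):
    R[J, J * ratio:(J + 1) * ratio] = 1.0 / ratio
P = R.T * ratio

# Coarse system (Galerkin triple product), coarse solve, interpolation
A_c = R @ A @ P
x_coarse = np.linalg.solve(A_c, R @ b)
x_approx = P @ x_coarse                 # approximate fine-scale solution
```

The real MsFV basis functions come from localized fine-scale solves, so the interpolated field is far more accurate than this piecewise-constant surrogate, and the subsequent reconstruction step restores fine-scale conservation.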