993 results for demerit point loss
Abstract:
The assessment of spatial uncertainty in the prediction of nutrient losses by erosion associated with landscape models is an important tool for soil conservation planning. The purpose of this study was to evaluate the spatial and local uncertainty in predicting depletion rates of soil nutrients (P, K, Ca, and Mg) by soil erosion under green and burnt sugarcane harvesting scenarios, using sequential Gaussian simulation (SGS). A regular grid with equidistant intervals of 50 m (626 points) was established in the 200-ha study area in Tabapuã, São Paulo, Brazil. The rate of soil depletion (SD) was calculated from the ratio between the nutrient concentration in the sediments and the chemical properties of the original soil at all grid points. The data were subjected to descriptive statistical and geostatistical analysis. The mean SD rate for all nutrients was higher under the slash-and-burn scenario than under green cane harvesting (Student's t-test, p < 0.05). In both scenarios, nutrient loss followed the order Ca > Mg > K > P. The SD rate was highest in areas with greater slope. Lower uncertainties were associated with areas of higher SD and steeper slopes, while spatial uncertainties were highest in areas of transition between concave and convex landforms.
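As a minimal sketch of the soil-depletion (SD) rate described above — assuming SD is simply the ratio of a nutrient's concentration in the eroded sediment to its concentration in the original soil at each grid point (the function and variable names here are illustrative, not from the study):

```python
def depletion_rate(sediment_conc, soil_conc):
    """Return SD ratios for paired sediment/soil nutrient measurements."""
    return [s / o for s, o in zip(sediment_conc, soil_conc)]

# Hypothetical Ca concentrations (mg/kg) at three grid points.
sediment_ca = [420.0, 515.0, 300.0]
soil_ca = [400.0, 500.0, 600.0]
print(depletion_rate(sediment_ca, soil_ca))  # ratio > 1 means the sediment is enriched relative to the soil
```

Applied over all 626 grid points, such ratios would form the input field that SGS realizations then draw on to map spatial uncertainty.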
Abstract:
The description of the fate of fertilizer-derived nitrogen (N) in agricultural systems is an essential tool for improving management practices that maximize nutrient use by crops and minimize losses. Soil erosion causes the loss of nutrients such as N, with negative effects on surface and ground water quality, in addition to losses in agricultural productivity through soil depletion. Studies correlating the percentage of fertilizer-derived N (FDN) with soil erosion rates and the factors involved in this process are scarce. The losses of soil and fertilizer-derived N by water erosion under conventional tillage and no tillage were quantified for different rainfall intensities, and the factors that increase these losses were identified. The experiment was carried out on plots (3.5 × 11 m) with two treatments and three replications, under simulated rainfall. The treatments consisted of tilled and untilled soil. Three successive rainfalls were applied at 24-h intervals, at intensities of 30, 30, and 70 mm/h. The applied N fertilizer was isotopically labeled (15N) and incorporated into the soil in a line perpendicular to the plot length. The absence of tillage resulted in higher soil losses and higher total nitrogen (TN) losses from rainfall-induced erosion. FDN losses followed a different pattern, since FDN contributions were highest from the tilled plots even when soil and TN losses were lowest, i.e., the smaller the amount of eroded sediment, the greater the percentage of FDN associated with it. Rainfall intensity did not affect FDN loss, and losses were greatest after the less intense rainfalls in both treatments.
Abstract:
The front form and the point form of dynamics are studied in the framework of predictive relativistic mechanics. The non-interaction theorem is proved when a Poincaré-invariant Hamiltonian formulation with canonical position coordinates is required.
Abstract:
Either 200 or 400 syngeneic islets were transplanted under the kidney capsule of normal or streptozocin-induced diabetic B6/AF1 mice. The diabetic mice with 400 islets became normoglycemic, but those with 200 islets, an insufficient number, were still diabetic after the transplantation (Tx). Two weeks after Tx, GLUT2 expression in the islet grafts was evaluated by immunofluorescence and Western blots, and graft function was examined by perfusion of the graft-bearing kidney. Immunofluorescence for GLUT2 was dramatically reduced in the beta-cells of grafts with 200 islets exposed to hyperglycemia. However, it was plentiful in grafts with 400 islets in a normoglycemic environment. Densitometric analysis of Western blots on graft homogenates demonstrated that GLUT2 protein levels in the islets, when exposed to chronic hyperglycemia for 2 weeks, were decreased to 16% of those of normal recipients. Moreover, these grafts had defective glucose-induced insulin secretion, while the effects of arginine were preserved. We conclude that GLUT2 expression in normal beta-cells is promptly down-regulated during exposure to hyperglycemia and may contribute to the loss of glucose-induced insulin secretion in diabetes.
Abstract:
The liquid-liquid critical point scenario of water hypothesizes the existence of two metastable liquid phases, low-density liquid (LDL) and high-density liquid (HDL), deep within the supercooled region. The hypothesis originates from computer simulations of the ST2 water model, but the stability of the LDL phase with respect to the crystal is still being debated. We simulate supercooled ST2 water at constant pressure, constant temperature, and constant number of molecules N for N ≤ 729 and times up to 1 μs. We observe clear differences between the two liquids, both structural and dynamical. Using several methods, including finite-size scaling, we confirm the presence of a liquid-liquid phase transition ending in a critical point. We find that the LDL is stable with respect to the crystal in 98% of our runs (we perform 372 runs for LDL or LDL-like states), and in 100% of our runs for the two largest system sizes (N = 512 and 729, for which we perform 136 runs for LDL or LDL-like states). In all these runs, tiny crystallites grow and then melt within 1 μs. Only for N ≤ 343 do we observe six events (over 236 runs for LDL or LDL-like states) of spontaneous crystallization after crystallites reach an estimated critical size of about 70 ± 10 molecules.
Abstract:
Abstract: This work is concerned with the development and application of novel unsupervised learning methods, with two target applications in mind: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering, which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
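The idea of training a clustering model by stochastic gradient descent can be illustrated with a minimal analogue: online k-means, where each incoming sample nudges its nearest centroid by a learning-rate-scaled step. This is only a sketch of the general SGD-for-clustering principle, not the thesis's actual functional model; all names and parameters below are illustrative.

```python
import random

def online_kmeans(points, k, lr=0.1, epochs=20, seed=0):
    """Cluster points by SGD on the quantization loss (online k-means)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(epochs):
        for p in points:
            # Assign the sample to its nearest centroid (squared Euclidean distance).
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(centroids[i], p)))
            # SGD step: move the winning centroid a fraction lr toward the sample.
            centroids[j] = [c + lr * (x - c) for c, x in zip(centroids[j], p)]
    return centroids

data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
print(online_kmeans(data, k=2))
```

Because each update touches only one sample, such methods scale to very large databases and naturally handle out-of-sample points: a new sample is simply assigned to its nearest learned centroid.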
Abstract:
Children under 10 passively smoking 14 cigarettes! From April 2010 to April 2011, the exposure of 148 children (81 boys and 67 girls) was tested: 10 children under one year of age, 25 aged 1 to 5, 19 aged 5 to 10, 30 aged 10 to 15, and 64 aged 15 to 18. Ten of them are smokers, and the youngest, aged 14, smokes 10 cigarettes a day. Their parents, or sometimes the young people themselves, voluntarily ordered a free MoNIC badge via the websites of CIPRET Valais, Vaud, and Geneva. The results on these children's exposure are striking and deserve attention. For all the children, the mean nicotine concentration in their indoor environment, measured with the MoNIC devices, was 0.5 mg/m3, with maxima of up to 21 mg/m3. For the group of children under 10 (26 boys and 28 girls; all non-smokers), the nicotine concentration was not negligible (mean 0.069 mg/m3, min 0, max 0.583 mg/m3). Converting this result into the equivalent number of passively inhaled cigarettes yields figures ranging from 0 to 14 cigarettes per day,* with a mean of 1.6 cig/day. Even more surprising, children under one year of age (4 boys and 6 girls) passively inhale an average of 1 cigarette (min 0, max 2.2) in the family setting. For the two other groups, 10-15 and 15-18 years, the maximum values approach 22 cigarettes. Note, however, that this result is influenced, unlike in the younger children, by the fact that these adolescents are themselves sometimes active smokers. *When the exposure duration exceeded one day (8 hours), the number of hours was always divided by 8; the result gives the equivalent number of cigarettes passively smoked over eight hours. It is therefore an average, meaning that during this period the children may have been exposed irregularly to values above or below this mean. [Authors]
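The footnote's normalization — dividing the number of exposure hours by 8 when exposure exceeds one 8-hour "day" — can be sketched as simple arithmetic (the function name and figures are illustrative, not taken from the study's data):

```python
def cigarettes_per_8h(cig_equivalents, exposure_hours):
    """Normalize passively inhaled cigarette equivalents to an 8-hour day."""
    if exposure_hours <= 8:
        return cig_equivalents
    # Exposure longer than one "day": scale down by the number of 8-hour periods.
    return cig_equivalents / (exposure_hours / 8)

# Hypothetical example: 6 cigarette equivalents measured over 24 h of exposure.
print(cigarettes_per_8h(6.0, 24.0))  # 2.0 cigarette equivalents per 8-hour day
```

As the authors note, this yields an average, so exposure within any given 8-hour window may have been higher or lower.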
Abstract:
Chronic hepatitis C is a major healthcare problem. The response to antiviral therapy for patients with chronic hepatitis C has previously been defined biochemically and by PCR. However, changes in the hepatic venous pressure gradient (HVPG) may be considered as an adjunctive end point for the therapeutic evaluation of antiviral therapy in chronic hepatitis C. It is a validated technique which is safe, well tolerated, well established, and reproducible. Serial HVPG measurements may be the best way to evaluate response to therapy in chronic hepatitis C.
Abstract:
Vascular access-related infections are one of the leading causes of nosocomial infections. They encompass colonization of the device by microorganisms, insertion-site infections, and the bacteremias and fungemias attributed to them. On average, bacteremia complicates 3 to 5 of every 100 venous lines, or represents 2 to 14 episodes per 1,000 catheter-days. This proportion is only the visible part of the iceberg, since most episodes of clinical sepsis with no apparent associated focus of infection are currently considered secondary to vascular access. Therapeutic principles are presented after a brief review of the pathophysiology of these infections. Several preventive approaches are then discussed, including recent evidence on the use of catheters impregnated with disinfectants or antibiotics.
Abstract:
The epithelial Na+ channel (ENaC) belongs to a new class of channel proteins called the ENaC/DEG superfamily involved in epithelial Na+ transport, mechanotransduction, and neurotransmission. The role of ENaC in Na+ homeostasis and in the control of blood pressure has been demonstrated recently by the identification of mutations in ENaC beta and gamma subunits causing hypertension. The function of ENaC in Na+ reabsorption depends critically on its ability to discriminate between Na+ and other ions like K+ or Ca2+. ENaC is virtually impermeant to K+ ions, and the molecular basis for its high ionic selectivity is largely unknown. We have identified a conserved Ser residue in the second transmembrane domain of the ENaC alpha subunit (alphaS589), which when mutated allows larger ions such as K+, Rb+, Cs+, and divalent cations to pass through the channel. The relative ion permeability of each of the alphaS589 mutants is related inversely to the ionic radius of the permeant ion, indicating that alphaS589 mutations increase the molecular cutoff of the channel by modifying the pore geometry at the selectivity filter. Proper geometry of the pore is required to tightly accommodate Na+ and Li+ ions and to exclude larger cations. We provide evidence that ENaC discriminates between cations mainly on the basis of their size and the energy of dehydration.
Abstract:
Point defects of opposite signs can alternately nucleate on the -1/2 disclination line that forms near the free surface of a confined nematic liquid crystal. We show the existence of metastable configurations consisting of periodic repetitions of such defects. These configurations are characterized by a minimal interdefect spacing that is seen to depend on sample thickness and on an applied electric field. The time evolution of the defect distribution suggests that the defects attract at small distances and repel at large distances.