951 results for relative chlorophyll index
Abstract:
Cotton management strategies such as row spacing and the use of growth regulators are efficient when the crop's production potential and nutrient requirements are known. The objective of this study was to evaluate the nutritional status of the cotton cultivar DeltaOpal through leaf macronutrient analysis, chlorophyll index and yield in relation to plant arrangement and growth regulator management. The treatments consisted of plant arrangements: arrangement 1, 88,900 plants ha(-1) with row spacing of 0.9 m; arrangement 2, 114,300 plants ha(-1) with row spacing of 0.7 m; arrangement 3, 178,000 plants ha(-1) with row spacing of 0.45 m; and managements of the growth regulator (mepiquat chloride), applied at a dose of 1.0 L ha(-1) and a concentration of 50 g L(-1): (a) no regulator application; (b) a single application at 70 days after emergence (d.a.e.); (c) application split over four stages (35, 45, 55 and 65 d.a.e.). SPAD chlorophyll readings, leaf macronutrient analyses and cotton yield were obtained in three agricultural years (2006/07, 2007/08 and 2008/09) under a randomized complete block design in a 3x3 factorial scheme, totaling nine treatments with four replications. Reducing row spacing and increasing plant density decreased the absorption of potassium and sulfur by the cotton crop. A single application of mepiquat chloride increased leaf calcium content. The split application of mepiquat chloride provided a higher SPAD reading index, higher leaf magnesium concentration and the highest seed cotton yield.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Graduate Program in Agronomy (Horticulture) - FCA
Abstract:
Graduate Program in Agronomy (Horticulture) - FCA
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The wide spectrum of candidiasis and its clinical importance encourage research aimed at clarifying the mechanisms of pathogenicity and identifying virulence factors of Candida sp. Therefore, the aim of this study was to verify the adhesion capacity, protease activity and genotypic diversity of oral C. albicans and C. tropicalis isolates. The ability to adhere to the extracellular matrix glycoproteins laminin and fibronectin was evaluated using the ELISA technique. Protease activity was screened on agar plates containing bovine albumin and measured by a quantitative method in a buffer solution containing haemoglobin. Intra- and interspecies polymorphism was assessed by the random amplified polymorphic DNA (RAPD) technique. All C. albicans and C. tropicalis isolates bound to immobilised laminin and fibronectin. Isolates Ca33 and Ct13 had relative adhesion indices significantly higher than the other isolates for both glycoproteins (P < 0.001). Protease activity was observed in all C. albicans isolates using either the semi-quantitative or the quantitative assay. The protease activity of C. tropicalis was better detected by the quantitative assay. RAPD analysis revealed a genotypically heterogeneous population in both species. Nevertheless, C. tropicalis presented higher genetic variability than the C. albicans strains.
Abstract:
Lung stereology has a long and successful tradition. From mice to men, the application of new stereological methods at several levels (alveoli, parenchymal cells, organelles, proteins) has led to new insights into normal lung architecture, parenchymal remodelling in emphysema-like pathology, alveolar type II cell hyperplasia and hypertrophy and intracellular surfactant alterations as well as distribution of surfactant proteins. The Euler number of the network of alveolar openings, estimated using physical disectors at the light microscopic level, is an unbiased and direct estimate of alveolar number. Surfactant-producing alveolar type II cells can be counted and sampled for local size estimation with physical disectors at a high magnification light microscopic level. The number of their surfactant storage organelles, lamellar bodies, can be estimated using physical disectors at the EM level. By immunoelectron microscopy, surfactant protein distribution can be analysed with the relative labelling index. Together with the well-established classical stereological methods, these design-based methods now allow for a complete quantitative phenotype analysis in lung development and disease, including the structural characterization of gene-manipulated mice, at the light and electron microscopic level.
Abstract:
The penetration, translocation, and distribution of ultrafine and nanoparticles in tissues and cells are challenging issues in aerosol research. This article describes a set of novel quantitative microscopic methods for evaluating particle distributions within sectional images of tissues and cells by addressing the following questions: (1) Is the observed distribution of particles between spatial compartments random? (2) Which compartments are preferentially targeted by particles? (3) Does the observed particle distribution shift between different experimental groups? Each of these questions can be addressed by testing an appropriate null hypothesis. The methods all require observed particle distributions to be estimated by counting the number of particles associated with each defined compartment. For studying preferential labeling of compartments, the size of each of the compartments must also be estimated by counting the number of points of a randomly superimposed test grid that hit the different compartments. The latter provides information about the particle distribution that would be expected if the particles were randomly distributed, that is, the expected number of particles. From these data, we can calculate a relative deposition index (RDI) by dividing the observed number of particles by the expected number of particles. The RDI indicates whether the observed number of particles corresponds to that predicted solely by compartment size (for which RDI = 1). Within one group, the observed and expected particle distributions are compared by chi-squared analysis. The total chi-squared value indicates whether an observed distribution is random. If not, the partial chi-squared values help to identify those compartments that are preferential targets of the particles (RDI > 1). Particle distributions between different groups can be compared in a similar way by contingency table analysis. We first describe the preconditions and the way to implement these methods, then provide three worked examples, and finally discuss the advantages, pitfalls, and limitations of this method.
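For readers who want to reproduce this bookkeeping, the following minimal Python sketch (with entirely hypothetical counts and compartment names) computes the RDI and the total and partial chi-squared values described above; it illustrates the arithmetic only and is not the authors' software.

```python
# Hypothetical example: relative deposition index (RDI) and chi-squared test
# of whether particles are randomly distributed across compartments.
from scipy.stats import chi2

observed = {"cytoplasm": 120, "nucleus": 15, "vesicles": 45}   # particle counts per compartment
points   = {"cytoplasm": 300, "nucleus": 150, "vesicles": 50}  # test-grid hits (compartment size)

n_particles = sum(observed.values())
n_points = sum(points.values())

chi2_total = 0.0
for comp in observed:
    expected = n_particles * points[comp] / n_points  # particles expected from compartment size alone
    rdi = observed[comp] / expected                   # RDI = 1 means distribution explained by size
    partial = (observed[comp] - expected) ** 2 / expected
    chi2_total += partial
    print(f"{comp}: RDI={rdi:.2f}, partial chi2={partial:.2f}")

dof = len(observed) - 1
p_value = chi2.sf(chi2_total, dof)
print(f"total chi2={chi2_total:.2f}, df={dof}, p={p_value:.4f}")
```

A small total chi-squared (large p) would be consistent with a random distribution; compartments with large partial values and RDI > 1 are the preferential targets.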
Abstract:
Nutrient leaching studies are expensive and require expertise in water collection and analyses. Less expensive or easier methods that estimate leaching losses would be desirable. The objective of this study was to determine if anion-exchange membranes (AEMs) and reflectance meters could predict nitrate (NO3-N) leaching losses from a cool-season lawn turf. A two-year field study used an established 90% Kentucky bluegrass (Poa pratensis L.)-10% creeping red fescue (Festuca rubra L.) turf that received 0 to 98 kg N ha-1 month-1 from May through November. Soil monolith lysimeters collected leachate that was analyzed for NO3-N concentration. Soil NO3-N was estimated with AEMs. Spectral reflectance measurements of the turf were obtained with chlorophyll and chroma meters. No significant (p > 0.05) increase in percolate flow-weighted NO3-N concentration (FWC) or mass loss occurred when AEM-desorbed soil NO3-N was below 0.84 µg cm-2 d-1. A linear increase in FWC and mass loss (p < 0.0001) occurred, however, when AEM soil NO3-N was above this value. The maximum contaminant level (MCL) for drinking water (10 mg L-1 NO3-N) was reached at an AEM soil NO3-N value of 1.6 µg cm-2 d-1. Maximum meter readings were obtained when AEM soil NO3-N reached or exceeded 2.3 µg cm-2 d-1. As chlorophyll index and hue angle (greenness) increased, there was an increased probability of exceeding the NO3-N MCL. These data suggest that AEMs and reflectance meters can serve as tools to predict NO3-N leaching losses from cool-season lawn turf and provide objective guides for N fertilization.
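As an illustration of the breakpoint behaviour described above, the short Python sketch below implements a plateau-plus-linear ("broken stick") response of flow-weighted NO3-N concentration to AEM soil NO3-N. Only the 0.84 µg cm-2 d-1 breakpoint comes from the abstract; the slope, baseline and example values are assumptions.

```python
# Plateau-plus-linear response sketch; slope and baseline are assumed, not fitted values.
def fwc_no3(aem_no3, breakpoint=0.84, slope=7.5, baseline=1.0):
    """Flow-weighted NO3-N (mg L-1) as a function of AEM soil NO3-N (ug cm-2 d-1)."""
    if aem_no3 <= breakpoint:
        return baseline                                     # no detectable increase below the breakpoint
    return baseline + slope * (aem_no3 - breakpoint)        # linear increase above it

MCL = 10.0  # drinking-water maximum contaminant level, mg L-1
for x in (0.5, 0.84, 1.6, 2.3):
    y = fwc_no3(x)
    print(f"AEM={x:4.2f} ug/cm2/d -> FWC={y:5.2f} mg/L, exceeds MCL: {y > MCL}")
```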
Abstract:
Ideal nitrogen (N) management for turfgrass supplies sufficient N for high-quality turf without increasing N leaching losses. A greenhouse study was conducted during two 27-week periods to determine if in situ anion-exchange membranes (AEMs) could predict nitrate (NO3-N) leaching from a Kentucky bluegrass (Poa pratensis) turf grown on intact soil columns. Treatments consisted of 16 rates of N fertilizer application, from 0 to 98 kg N ha-1 mo-1. Percolate water was collected weekly and analysed for NO3-N. Mean flow-weighted NO3-N concentration and cumulative mass in percolate were exponentially related (pseudo-R2 = 0.995 and 0.994, respectively) to AEM-desorbed soil NO3-N, with a percolate concentration below 10 mg NO3-N L-1 corresponding to an AEM soil NO3-N value of 2.9 µg cm-2 d-1. Apparent N recovery by turf ranged from 28 to 40% of applied N, with a maximum corresponding to 4.7 µg cm-2 d-1 AEM soil NO3-N. Turf colour, growth, and chlorophyll index increased with increasing AEM soil NO3-N, but these increases occurred at the expense of increases in NO3-N leaching losses. These results suggest that AEMs might serve as a tool for predicting NO3-N leaching losses from turf.
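A hedged sketch of the kind of exponential calibration described above: it fits y = a·exp(b·x) between AEM-desorbed soil NO3-N (x) and flow-weighted percolate NO3-N concentration (y), then inverts the fit at the 10 mg L-1 limit. The data points and the exact functional form are assumptions, not the study's values.

```python
# Assumed exponential calibration between AEM soil NO3-N and percolate concentration.
import numpy as np
from scipy.optimize import curve_fit

aem = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])        # hypothetical AEM values, ug cm-2 d-1
fwc = np.array([1.2, 1.9, 2.9, 4.4, 6.7, 10.2, 15.7, 24.0])     # hypothetical concentrations, mg L-1

def model(x, a, b):
    return a * np.exp(b * x)

(a, b), _ = curve_fit(model, aem, fwc, p0=(1.0, 0.5))
aem_at_limit = np.log(10.0 / a) / b   # invert y = a*exp(b*x) at y = 10 mg L-1
print(f"a={a:.3f}, b={b:.3f}, AEM value at 10 mg/L: {aem_at_limit:.2f} ug cm-2 d-1")
```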
Abstract:
This dataset presents results from the DFG-funded Arctic-Turbulence-Experiment (ARCTEX-2006) performed by the University of Bayreuth on the island of Svalbard, Norway, during the winter/spring transition of 2006. From May 5 to May 19, 2006, turbulent flux and meteorological measurements were performed on the monitoring field near Ny-Ålesund, at 78°55'24'' N, 11°55'15'' E, Kongsfjord, Svalbard (Spitsbergen), Norway. The ARCTEX-2006 campaign site was located about 200 m southeast of the settlement on flat, snow-covered tundra, 11 m to 14 m above sea level. The permanent sites used for this study consisted of the 10 m meteorological tower of the Alfred Wegener Institute for Polar and Marine Research (AWI), the internationally standardized radiation measurement site of the Baseline Surface Radiation Network (BSRN), the radiosonde launch site and the AWI tethered balloon launch sites. The temporary sites, set up by the University of Bayreuth, were a 6 m meteorological gradient tower, an eddy-flux measurement complex (EF), and a laser-scintillometer section (SLS). A quality assessment and data correction was applied to detect and eliminate specific measurement errors common in a high-arctic landscape. In addition, the quality-checked sensible heat flux measurements are compared with bulk aerodynamic formulas that are widely used in atmosphere-ocean/land-ice models for polar regions, as described in Ebert and Curry (1993, doi:10.1029/93JC00656) and Launiainen and Cheng (1995). These parameterization approaches easily allow estimation of the turbulent surface fluxes from routine meteorological measurements. The data show: the role of the intermittency of the turbulent atmospheric fluctuations of momentum and scalars; the existence of a disturbed vertical temperature profile (sharp inversion layer) close to the surface; the relevance of possible free-convection events for snow or ice melt in the Arctic spring at Svalbard; and the relevance of meso-scale atmospheric circulation patterns and air-mass advection for the near-surface turbulent heat exchange in the Arctic spring at Svalbard. Recommendations and improvements regarding the interpretation of eddy-flux and laser-scintillometer data, as well as the arrangement of the instrumentation under the distinct exchange conditions and (extreme) weather situations of polar regions, could be derived.
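For orientation, the following minimal Python sketch shows a bulk aerodynamic estimate of the turbulent sensible heat flux from routine meteorological data. A constant transfer coefficient is assumed here for simplicity, whereas the cited schemes (Ebert and Curry 1993; Launiainen and Cheng 1995) make the coefficient depend on stability and surface roughness; all numbers are illustrative.

```python
# Simplified bulk aerodynamic sensible heat flux; constants are assumed, illustrative values.
RHO_AIR = 1.3      # air density over snow at low temperature, kg m-3 (assumed)
CP_AIR = 1005.0    # specific heat of air at constant pressure, J kg-1 K-1
C_H = 1.3e-3       # bulk transfer coefficient for heat (assumed constant)

def sensible_heat_flux(wind_speed, t_surface, t_air):
    """Sensible heat flux (W m-2), positive when directed from the surface to the air."""
    return RHO_AIR * CP_AIR * C_H * wind_speed * (t_surface - t_air)

# Example: weak wind over a snow surface with warmer air above (inversion) -> downward flux
print(sensible_heat_flux(wind_speed=3.0, t_surface=-2.0, t_air=1.5))
```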
Abstract:
The first data set contains the mean and coefficient of variation (standard deviation divided by mean) of a multi-frequency indicator I derived from ER60 acoustic information collected at five frequencies (18, 38, 70, 120, and 200 kHz) in the Bay of Biscay in May of the years 2006, 2008, 2009 and 2010 (Pelgas surveys). The multi-frequency indicator was first calculated per voxel (20 m long × 5 m deep sampling unit) and then averaged on a spatial grid (approx. 20 nm × 20 nm) for five 5-m depth layers in the surface waters (10-15 m, 15-20 m, 20-25 m, 25-30 m below the sea surface); there are missing values, in particular in the shallowest layer. The second data set provides, for each grid cell and depth layer, the proportion of voxels for which the multi-frequency indicator I was indicative of a certain group of organisms. For this, the following interpretation was used: I < 0.39, swim-bladder fish or large gas bubbles; I = 0.39-0.58, small resonant bubbles present in gas-bearing organisms such as larval fish and phytoplankton; I = 0.7-0.8, fluid-like zooplankton such as copepods and euphausiids; and I > 0.8, mackerel. These proportions can be interpreted as a relative abundance index for each of the four organism groups.
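A minimal Python sketch of how the quoted interpretation thresholds could be applied per voxel and turned into per-cell proportions; the function names and example values are illustrative and not part of the original processing chain.

```python
# Classify voxels by the multi-frequency indicator I and compute per-group proportions.
def classify_voxel(i):
    """Map the multi-frequency indicator I of one voxel to an organism group."""
    if i < 0.39:
        return "swim-bladder fish / large gas bubbles"
    if i <= 0.58:
        return "small resonant bubbles (larval fish, phytoplankton)"
    if 0.7 <= i <= 0.8:
        return "fluid-like zooplankton (copepods, euphausiids)"
    if i > 0.8:
        return "mackerel"
    return "unclassified"  # values between 0.58 and 0.7 fall outside the quoted ranges

def group_proportions(indicator_values):
    """Relative abundance index: proportion of voxels per group in one grid cell and layer."""
    counts = {}
    for i in indicator_values:
        group = classify_voxel(i)
        counts[group] = counts.get(group, 0) + 1
    n = len(indicator_values)
    return {group: c / n for group, c in counts.items()}

print(group_proportions([0.2, 0.45, 0.75, 0.9, 0.65]))  # hypothetical voxel values
```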
Abstract:
Electricity market price forecasting is a challenging yet very important task for electricity market managers and participants. Due to the complexity and uncertainties in the power grid, electricity prices are highly volatile and normally carry spikes, which may be tens or even hundreds of times higher than the normal price. Such electricity spikes are very difficult to predict. So far, most of the research on electricity price forecasting has been based on normal-range electricity prices. This paper proposes a data mining based electricity price forecast framework, which can predict the normal price as well as the price spikes. The normal price can be predicted by a previously proposed wavelet and neural network based forecast model, while the spikes are forecasted based on a data mining approach. This paper focuses on the spike prediction and explores the reasons for price spikes based on the measurement of a proposed composite supply-demand balance index (SDI) and relative demand index (RDI). These indices are able to reflect the relationship among electricity demand, electricity supply and electricity reserve capacity. The proposed model is based on a mining database including market clearing price, trading hour, electricity demand, electricity supply and reserve. Bayesian classification and similarity searching techniques are used to mine the database to find the internal relationships between electricity price spikes and these proposed indices. The mining results are used to form the price spike forecast model. The proposed model is able to generate the forecasted price spike, the level of the spike and an associated forecast confidence level. The model is tested with Queensland electricity market data with promising results. Crown Copyright (C) 2004 Published by Elsevier B.V. All rights reserved.
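The abstract does not give the formulas for the SDI and RDI, so the sketch below only illustrates how such indices could be built from demand, supply and reserve data and used to flag hours at risk of a spike; the definitions, thresholds and numbers are assumptions, not the paper's.

```python
# Assumed index definitions for illustration only; not the paper's SDI/RDI formulas.
def relative_demand_index(demand, supply):
    """Assumed RDI: demand as a fraction of available supply."""
    return demand / supply

def supply_demand_balance_index(demand, supply, reserve):
    """Assumed SDI: spare capacity (supply + reserve - demand) relative to demand."""
    return (supply + reserve - demand) / demand

hours = [
    {"hour": 17, "demand": 8200.0, "supply": 8400.0, "reserve": 300.0},  # hypothetical MW values
    {"hour": 18, "demand": 8900.0, "supply": 9000.0, "reserve": 150.0},
]
for h in hours:
    rdi = relative_demand_index(h["demand"], h["supply"])
    sdi = supply_demand_balance_index(h["demand"], h["supply"], h["reserve"])
    at_risk = rdi > 0.97 and sdi < 0.05   # arbitrary thresholds for illustration
    print(f"hour {h['hour']}: RDI={rdi:.3f}, SDI={sdi:.3f}, spike risk flag: {at_risk}")
```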
Abstract:
Manipulation of micrometer-sized particles with optical tweezers can be precisely modeled with electrodynamic theory using Mie's solution for spherical particles or the T-matrix method for more complex objects. We model optical tweezers for a wide range of parameters including size, relative refractive index and objective numerical aperture. We present the resulting landscapes of the trap stiffness and maximum applicable trapping force in the parameter space. These landscapes give a detailed insight into the requirements and possibilities of optical trapping and provide detailed information on trapping of nanometer-sized particles or trapping of high-index particles such as diamond.
Abstract:
Whole life costing (WLC) has become best practice in construction procurement, and accurately predicting the whole life costs of a construction project is likely to be a major issue. However, different expectations from different organizations throughout a project's life and the lack of data, monitoring targets, and long-term interest for many key players are obstacles to be overcome if WLC is to be implemented. A questionnaire survey was undertaken to investigate a set of ten common factors and 188 individual factors. These were grouped into eight critical categories (project scope, time, cost, quality, contract/administration, human resource, risk, and health and safety) by project phase, as perceived by the clients, contractors and subcontractors, in order to identify critical success factors for whole life performance assessment (WLPA). Using a relative importance index, the top ten critical factors for each category, from the perspective of project participants, were analyzed and ranked. Their agreement on those categories and factors was analyzed using Spearman's rank correlation. All participants identified “Type of Project” as the most common critical factor in the eight categories for WLPA. Using the relative index ranking technique and weighted average methods, the most critical individual factors in each category were found to be: “clarity of contract” (scope); “fixed construction period” (time); “precise project budget estimate” (cost); “material quality” (quality); “mutual/trusting relationships” (contract/administration); “leadership/team management” (human resource); and “management of work safety on site” (health and safety). There was a relatively high agreement on these categories among all participants. With 80 critical factors of WLPA, there is a stronger positive relationship between client and contractor than between contractor and subcontractor or between client and subcontractor. Putting these critical factors into a criteria matrix can facilitate an initial framework of WLPA to aid decision making in the public sector in South Korea for the evaluation/selection process of a construction project at the bid stage.
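As a worked illustration of the ranking and agreement analysis described above, the Python sketch below computes a relative importance index (RII) for a few factors from Likert-scale responses and compares two groups' rankings with Spearman's rank correlation. The RII formula (sum of weights divided by the highest weight times the number of respondents) and the data are assumptions, not the survey's values.

```python
# Hypothetical survey data: RII per factor and Spearman agreement between two groups.
from scipy.stats import spearmanr

def rii(responses, max_scale=5):
    """Relative importance index for one factor from 1..max_scale ratings."""
    return sum(responses) / (max_scale * len(responses))

factors = {
    "clarity of contract":       {"clients": [5, 4, 5, 5], "contractors": [4, 4, 5, 4]},
    "fixed construction period": {"clients": [4, 3, 4, 4], "contractors": [5, 4, 4, 5]},
    "material quality":          {"clients": [3, 4, 3, 4], "contractors": [3, 3, 4, 3]},
}

client_rii = [rii(v["clients"]) for v in factors.values()]
contractor_rii = [rii(v["contractors"]) for v in factors.values()]

rho, p = spearmanr(client_rii, contractor_rii)   # agreement between the two rankings
for name, c, k in zip(factors, client_rii, contractor_rii):
    print(f"{name}: client RII={c:.2f}, contractor RII={k:.2f}")
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```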