975 results for Global Solar-radiation


Relevance:

80.00%

Publisher:

Abstract:

Systems approaches can help to evaluate and improve the agronomic and economic viability of nitrogen application in frequently water-limited environments. This requires a sound understanding of crop physiological processes and well-tested simulation models. This experiment therefore aimed to better quantify water × nitrogen effects on spring wheat by deriving key crop physiological parameters that have proven useful in simulating crop growth. For spring wheat grown in Northern Australia under four levels of nitrogen (0 to 360 kg N ha⁻¹), either entirely on stored soil moisture or under full irrigation, kernel yields ranged from 343 to 719 g m⁻². Yield increases were strongly associated with increases in kernel number (9,150-19,950 kernels m⁻²), indicating the sensitivity of this parameter to water and N availability. Total water extraction under a rain shelter was 240 mm, with a maximum extraction depth of 1.5 m. A substantial amount of mineral nitrogen available deep in the profile (below 0.9 m) was taken up by the crop. This was the source of nitrogen uptake observed after anthesis. Under dry conditions this late uptake accounted for approximately 50% of total nitrogen uptake and resulted in high (>2%) kernel nitrogen percentages even when no nitrogen was applied. Anthesis LAI values were reduced by 63% under sub-optimal water supply and by 50% under sub-optimal nitrogen supply. Radiation use efficiency (RUE) based on total incident short-wave radiation was 1.34 g MJ⁻¹ and did not differ among treatments. The conservative nature of RUE was the result of the crop reducing leaf area rather than leaf nitrogen content (which would have affected photosynthetic activity) under these moderate levels of nitrogen limitation. The transpiration efficiency coefficient was also conservative and averaged 4.7 Pa in the dry treatments. Kernel nitrogen percentage varied from 2.08 to 2.42%. The study provides a data set and a basis for considering ways to improve simulation capabilities of water and nitrogen effects on spring wheat. (C) 1997 Elsevier Science B.V.
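
For readers unfamiliar with the two coefficients reported above, the following is a minimal sketch of how radiation use efficiency and a Tanner-Sinclair-style transpiration efficiency coefficient are conventionally computed. The inputs are illustrative placeholders chosen to fall in the same range as the reported figures; they are not data from the experiment.

```python
# Illustrative calculation of the two crop coefficients discussed above.
# All input values are hypothetical placeholders, not measurements from the study.

biomass = 1100.0            # above-ground dry matter, g m^-2
incident_radiation = 820.0  # cumulative incident short-wave radiation, MJ m^-2
water_transpired = 350.0    # seasonal transpiration, mm (1 mm over 1 m^2 = 1 kg of water)
mean_vpd = 1.5              # mean daytime vapour pressure deficit, kPa

# Radiation use efficiency: dry matter produced per unit of incident radiation.
rue = biomass / incident_radiation          # g MJ^-1

# Transpiration efficiency coefficient: biomass per unit of water transpired,
# scaled by vapour pressure deficit; (g/kg) * kPa is numerically equal to Pa.
te_coefficient = (biomass / water_transpired) * mean_vpd   # Pa

print(f"RUE ~ {rue:.2f} g MJ^-1")
print(f"Transpiration efficiency coefficient ~ {te_coefficient:.1f} Pa")
```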

Relevance:

80.00%

Publisher:

Abstract:

Background: Prolonged exposure of the lip to sunlight may cause actinic cheilitis (AC) and squamous cell carcinoma (SCC). Maspin is a serpin with tumor suppressor functions. This work analyzed the presence and distribution of maspin in AC and lip SCC. Methods: Sections from 36 cases diagnosed as AC (18 cases with mild epithelial dysplasia, 11 with moderate and 7 with severe), 18 cases diagnosed as lip SCC and 7 specimens containing normal lip vermillion epithelium were submitted for immunohistochemical analysis to detect maspin. Results: All AC cases with mild dysplasia and two cases with moderate dysplasia were scored 3. The remaining nine cases with moderate dysplasia were identified as score 2, whereas all cases with severe dysplasia were scored 1. Positive staining for maspin decreased from the basal layer to the surface. Among the 18 lip SCCs studied, 15 cases showed abundant staining for maspin. Epithelium adjacent to the SCCs also showed intense positive staining in all cells. Conclusions: Our results suggest that the loss of maspin expression occurs from the basal layer to the surface. Lip SCCs related to solar radiation show an intense presence of maspin protein in almost all tumor cells as well as in the neighboring epithelium. Fontes A, Sousa SM, Santos E, Martins MT. The severity of epithelial dysplasia is associated with loss of maspin expression in actinic cheilitis.

Relevance:

80.00%

Publisher:

Abstract:

Hypovitaminosis D is a candidate risk-modifying factor for a diverse range of disorders apart from rickets and osteoporosis. Based on epidemiology and on in vitro and animal experiments, vitamin D has been linked to multiple sclerosis, certain cancers (prostate, breast and colorectal), insulin-dependent diabetes mellitus and schizophrenia. I hypothesise that low pre- and perinatal vitamin D levels imprint on the functional characteristics of various tissues throughout the body, leaving the affected individual at increased risk of developing a range of adult-onset disorders. The hypothesis draws on recent advances in our understanding of the early origins of adult disease and proposes a 'critical window' during which vitamin D levels may have a persisting impact on adult health outcomes. Methods to test the hypothesis are outlined. If correct, the hypothesis has important implications for public health. Careful attention to maternal vitamin D status could translate into diverse improvements in health outcomes for the following generation. (C) 2001 Harcourt Publishers Ltd.

Relevance:

80.00%

Publisher:

Abstract:

Sun exposure is the main environmental risk factor for melanoma, but the timing of exposure during life that confers increased risk is controversial. Here we provide the first report of the association between lifetime and age-specific cumulative ultraviolet exposure and cutaneous melanoma in Queensland, Australia, an area of high solar radiation, and examine the association separately for families at high, intermediate and low familial melanoma risk. Subjects were a population-based sample of melanoma cases diagnosed and registered in Queensland between 1982 and 1990 and their relatives. The analysis included 1,263 cases and relatives with confirmed cutaneous melanoma and 3,111 first-degree relatives without melanoma as controls. Data on lifetime residence and sun exposure, family history and other melanoma risk factors were collected by a mailed questionnaire. Using conditional multiple logistic regression with stratification by family, cumulative sun exposure in childhood and in adulthood after age 20 was significantly associated with melanoma, with estimated relative risks of 1.15 per 5,000 minimal erythemal doses (MEDs) from age 5 to 12 years and 1.52 per 5 MEDs/day from age 20. There was no association with sun exposure in families at high familial melanoma risk. History of nonmelanoma skin cancer (relative risk [RR] = 1.26) and multiple sunburns (RR = 1.31) were significant risk factors. These findings indicate that sun exposure in childhood and in adulthood are important determinants of melanoma, but not in those rare families with high melanoma susceptibility, in which genetic factors are likely to be more important. (C) 2002 Wiley-Liss, Inc.

Relevance:

80.00%

Publisher:

Abstract:

The incidence of melanoma increases markedly in the second decade of life, but almost nothing is known of the causes of melanoma in this age group. We report on the first population-based case-control study of risk factors for melanoma in adolescents (15-19 years). Data were collected through personal interviews with cases, controls and parents. A single examiner conducted full-body nevus counts, and blood samples were collected from cases for analysis of the CDKN2A melanoma predisposition gene. A total of 201 (80%) of the 250 adolescents with melanoma diagnosed between 1987 and 1994 and registered with the Queensland Cancer Registry and 205 (79%) of 258 age-, gender- and location-matched controls who were contacted agreed to participate. The strongest risk factor associated with melanoma in adolescents in a multivariate model was the presence of more than 100 nevi 2 mm or more in diameter (odds ratio [OR] = 46.5, 95% confidence interval [CI] = 11.4-190.8). Other risk factors were red hair (OR = 5.4, 95% CI = 1.0-28.4); blue eyes (OR = 4.5, 95% CI = 1.5-13.6); inability to tan after prolonged sun exposure (OR = 4.7, 95% CI = 0.9-24.6); heavy facial freckling (OR = 3.2, 95% CI = 0.9-12.3); and family history of melanoma (OR = 4.0, 95% CI = 0.8-18.9). Only 2 of 147 cases tested had germline variants or mutations in CDKN2A. There was no association with sunscreen use overall; however, never/rare use of sunscreen at home under the age of 5 years was associated with increased risk (OR = 2.2, 95% CI = 0.7-7.1). There was no difference between cases and controls in cumulative sun exposure in this high-exposure environment. Factors indicating genetic susceptibility to melanoma, in particular the propensity to develop nevi and freckles, red hair, blue eyes, inability to tan and a family history of the disease, are the primary determinants of melanoma among adolescents in this high solar radiation environment. Lack of association with reported sun exposure is consistent with the high genetic susceptibility in this group. (C) 2002 Wiley-Liss, Inc.

Relevance:

80.00%

Publisher:

Abstract:

Background. Based on the well-described excess of schizophrenia births in winter and spring, we hypothesised that individuals with schizophrenia (a) would be more likely to be born during periods of decreased perinatal sunshine, and (b) that those born during periods of less sunshine would have an earlier age of first registration. Methods. We undertook an ecological analysis of long-term trends in perinatal sunshine duration and schizophrenia birth rates based on two mental health registers (Queensland, Australia, n = 6630; The Netherlands, n = 24,474). For each of the 480 months between 1931 and 1970, the agreement between the slopes of the trends in the psychosis and long-term sunshine duration series was assessed. Age at first registration was assessed by quartiles of long-term trends in perinatal sunshine duration. Males and females were assessed separately. Results. Both the Dutch and Australian data showed a statistically significant association between falling long-term trends in sunshine duration around the time of birth and rising schizophrenia birth rates for males only. In both the Dutch and Australian data there were significant associations between earlier age of first registration and reduced long-term trends in sunshine duration around the time of birth for both males and females. Conclusions. A measure of long-term trends in perinatal sunshine duration was associated with two epidemiological features of schizophrenia in two separate data sets. Exposures related to sunshine duration warrant further consideration in schizophrenia research. (C) 2002 Elsevier Science B.V. All rights reserved.

Relevance:

80.00%

Publisher:

Abstract:

Low temperature during panicle development in rice increases spikelet sterility. This effect is exacerbated by high rates of nitrogen (N) application in the field. Spikelet sterility induced by low temperature and N fertilisation was examined in glasshouse experiments to clarify the mechanisms involved. In two glasshouse experiments, 12-h periods of low (18/13°C) and high (28/23°C) day/night temperatures were imposed over periods of 5-7 days during panicle development to determine the effects of low temperature and N fertilisation on spikelet sterility. In one experiment, 50% sunlight was imposed together with low temperature to investigate the additive effects of reduced solar radiation and low temperature. The effect of increased tillering due to N fertilisation was examined by a tiller removal treatment in the same experiment. Pollen grain number and spikelet sterility were recorded at heading and harvest, respectively. Although there was no significant effect of low temperature on spikelet sterility in the absence of applied N, low temperature greatly increased spikelet sterility in the presence of applied N as a result of a reduction in the number of engorged pollen grains per anther. Spikelet sterility was strongly correlated with the number of engorged pollen grains per anther. Low temperature during very early (late stage of spikelet differentiation to pollen mother cell stage) and peak (second meiotic division stage to early stage of extine formation) microspore development caused a severe reduction in engorged pollen production, mainly as a result of reduced total pollen production. Unlike low temperature, the effect of shading was rather small. The increased tillering due to application of high rates of N increased both spikelet number per plant and spikelet sterility under low temperature conditions. The removal of tillers as they appeared reduced the total number of spikelets per plant and maintained a large number of engorged pollen grains per anther, which, in turn, reduced spikelet sterility. The number of engorged pollen grains per anther determined the numbers of intercepted and germinated pollen grains on the stigma. It is concluded that N increased tillering and spikelet number per plant and this, in turn, reduced the number of engorged pollen grains per anther, leading to increased spikelet sterility under low temperature conditions.

Relevance:

80.00%

Publisher:

Abstract:

Master's degree in Chemical Engineering

Relevance:

80.00%

Publisher:

Abstract:

This study aimed to characterize air pollution and the associated carcinogenic risks of polycyclic aromatic hydrocarbons (PAHs) at an urban site, to identify possible emission sources of PAHs using several statistical methodologies, and to analyze the influence of other air pollutants and meteorological variables on PAH concentrations. The air quality and meteorological data were collected in Oporto, the second largest city of Portugal. Eighteen PAHs (the 16 PAHs considered by the United States Environmental Protection Agency (USEPA) as priority pollutants, dibenzo[a,l]pyrene, and benzo[j]fluoranthene) were collected daily for 24 h in air (gas phase and particles) during 40 consecutive days in November and December 2008 by constant low-flow samplers, using polytetrafluoroethylene (PTFE) membrane filters for particulate (PM10- and PM2.5-bound) PAHs and pre-cleaned polyurethane foam plugs for gaseous compounds. The other monitored air pollutants were SO2, PM10, NO2, CO, and O3; the meteorological variables were temperature, relative humidity, wind speed, total precipitation, and solar radiation. Benzo[a]pyrene reached a mean concentration of 2.02 ng m⁻³, surpassing the EU annual limit value. The target carcinogenic risks were comparable to the health-based guideline level set by USEPA (10⁻⁶) at the studied site, with the cancer risks of eight PAHs reaching 9.98×10⁻⁷ in PM10 and 1.06×10⁻⁶ in air. The applied statistical methods (correlation matrix, cluster analysis, and principal component analysis) were in agreement in the grouping of the PAHs. The groups were formed according to chemical structure (number of rings), phase distribution, and emission sources. PAH diagnostic ratios were also calculated to evaluate the main emission sources. Diesel vehicular emissions were the major source of PAHs at the studied site. Besides that source, emissions from residential heating and an oil refinery were also found to contribute to PAH levels in the study area. Additionally, principal component regression indicated that SO2, NO2, PM10, CO, and solar radiation were positively correlated with PAH concentrations, while O3, temperature, relative humidity, and wind speed were negatively correlated.
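
To make the diagnostic-ratio step mentioned above concrete, the sketch below computes three ratios commonly used for PAH source apportionment. The concentrations are invented placeholders, and the interpretation ranges in the comments are conventional literature thresholds, not values derived from this study.

```python
# Hypothetical PAH concentrations in ng/m^3 (placeholders, not study data).
conc = {
    "fluoranthene": 1.8,
    "pyrene": 2.1,
    "benzo[a]anthracene": 0.9,
    "chrysene": 1.1,
    "indeno[1,2,3-cd]pyrene": 0.7,
    "benzo[ghi]perylene": 0.8,
}

# Flt/(Flt+Pyr): <0.4 petrogenic, 0.4-0.5 fuel combustion, >0.5 biomass/coal burning.
flt_pyr = conc["fluoranthene"] / (conc["fluoranthene"] + conc["pyrene"])

# BaA/(BaA+Chr): <0.2 petrogenic, >0.35 combustion.
baa_chr = conc["benzo[a]anthracene"] / (conc["benzo[a]anthracene"] + conc["chrysene"])

# IcdP/(IcdP+BghiP): 0.2-0.5 typically associated with liquid fossil fuel (traffic) combustion.
icdp_bghip = conc["indeno[1,2,3-cd]pyrene"] / (
    conc["indeno[1,2,3-cd]pyrene"] + conc["benzo[ghi]perylene"]
)

print(f"Flt/(Flt+Pyr)     = {flt_pyr:.2f}")
print(f"BaA/(BaA+Chr)     = {baa_chr:.2f}")
print(f"IcdP/(IcdP+BghiP) = {icdp_bghip:.2f}")
```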

Relevance:

80.00%

Publisher:

Abstract:

Dissertation submitted to obtain the degree of Master in Electrical Engineering, Energy branch

Relevance:

80.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies

Relevance:

80.00%

Publisher:

Abstract:

The aim of this work was to assess ultrafine particle (UFP) number concentrations in different microenvironments of Portuguese preschools and to estimate the respective UFP exposure doses for 3–5-year-old children (in comparison with adults). UFP were sampled both indoors and outdoors at two urban (US1, US2) and one rural (RS1) preschool located in the north of Portugal for 31 days. Total levels of indoor UFP were significantly higher at the urban preschools (means of 1.82×10⁴ and 1.32×10⁴ particles/cm³ at US1 and US2, respectively) than at the rural one (1.15×10⁴ particles/cm³). Canteens were the indoor microenvironment with the highest UFP levels (means of 5.17×10⁴, 3.28×10⁴, and 4.09×10⁴ particles/cm³ at US1, US2, and RS1), whereas the lowest concentrations were observed in classrooms (9.31×10³, 11.3×10³, and 7.14×10³ particles/cm³ at US1, US2, and RS1). Mean indoor/outdoor (I/O) ratios of UFP at the three preschools were lower than 1 (0.54–0.93), indicating that outdoor emissions contributed significantly to indoor UFP. Significant correlations were obtained between temperature, wind speed, relative humidity, solar radiation, and ambient UFP number concentrations. The estimated exposure doses were higher for children attending urban preschools; 3–5-year-old children were exposed to 4–6 times higher UFP doses than adults with similar daily schedules.
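
The dose estimate described above can be sketched as concentration × inhalation rate × time, summed over microenvironments. The concentrations below echo the ranges reported in the abstract, while the inhalation rates, body weights, and time budget are assumed illustrative values rather than the parameters actually used in the study.

```python
# Rough inhaled-dose estimate for UFP:
#   dose (particles) = concentration (particles/cm^3) * 1e6 (cm^3 per m^3)
#                      * inhalation rate (m^3/h) * time spent (h)
# Concentrations are in the range reported above; everything else is an assumption.

microenvironments = {
    # name: (UFP concentration in particles/cm^3, hours spent per school day)
    "classroom": (9.3e3, 5.0),
    "canteen":   (5.2e4, 1.0),
}

people = {
    "child_3_5y": {"inhalation_m3_per_h": 0.5, "body_weight_kg": 17.0},
    "adult":      {"inhalation_m3_per_h": 0.8, "body_weight_kg": 70.0},
}

for name, p in people.items():
    dose = sum(c * 1e6 * p["inhalation_m3_per_h"] * hours
               for c, hours in microenvironments.values())
    print(f"{name}: ~{dose:.2e} particles/day, "
          f"~{dose / p['body_weight_kg']:.2e} particles/(kg day)")
```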

Relevance:

80.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
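
Two of the points above can be made concrete with small simulations. First, the ICA limitation: if abundance fractions are generated on the simplex (so they sum to one and are therefore statistically dependent) and mixed with random endmember signatures, an off-the-shelf ICA algorithm recovers them only approximately. The sketch below uses scikit-learn's FastICA as a stand-in for the ICA algorithms cited in the chapter; the data are simulated and the setup is illustrative, not the chapter's own experiment.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_pixels, n_bands, p = 2000, 60, 3

A = rng.dirichlet(np.full(p, 0.8), size=n_pixels)   # dependent sources (rows sum to 1)
M = rng.uniform(0.1, 1.0, size=(n_bands, p))        # random endmember signatures
X = A @ M.T + rng.normal(0.0, 0.005, size=(n_pixels, n_bands))  # noisy linear mixtures

# ICA looks for p mutually independent sources; the sum-to-one constraint on the
# abundances violates that assumption, so recovery is only approximate.
ica = FastICA(n_components=p, random_state=0, max_iter=1000)
S_hat = ica.fit_transform(X)

# Correlate each true abundance with its best-matching estimated source.
corr = np.abs(np.corrcoef(A.T, S_hat.T)[:p, p:])
print("best |correlation| per endmember:", np.round(corr.max(axis=1), 3))
```

Second, the generative model sketched at the end of the abstract: abundance vectors drawn from a mixture of Dirichlet densities automatically satisfy the positivity and full-additivity constraints. The snippet below only illustrates this generative step, with assumed mixture weights and concentration parameters; it does not implement the EM-type inference of the mixing matrix described in the chapter.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, p = 1000, 3

weights = np.array([0.7, 0.3])              # assumed mixture weights
alphas = np.array([[8.0, 2.0, 2.0],         # assumed Dirichlet concentration
                   [2.0, 2.0, 8.0]])        # parameters, one row per component

components = rng.choice(len(weights), size=n_pixels, p=weights)
abundances = np.vstack([rng.dirichlet(alphas[c]) for c in components])

# Every abundance vector is nonnegative and sums to one (full additivity).
assert np.all(abundances >= 0)
assert np.allclose(abundances.sum(axis=1), 1.0)
print("mean abundances:", np.round(abundances.mean(axis=0), 3))
```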

Relevance:

80.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
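
The two pure-pixel searches discussed above lend themselves to short sketches. The first block is a bare-bones version of the PPI projection step (random skewers, extreme counting) run on simulated mixtures with planted pure pixels; it omits the MNF preprocessing and is not the cited implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pixels, n_bands, p, n_skewers = 1000, 50, 3, 500

# Simulated linear mixtures that include one pure pixel per endmember (indices 0..p-1).
M = rng.uniform(0.1, 1.0, size=(n_bands, p))
A = rng.dirichlet(np.ones(p), size=n_pixels)
A[:p] = np.eye(p)
X = A @ M.T + rng.normal(0.0, 0.001, size=(n_pixels, n_bands))

# Project every spectrum onto random skewers and count how often each pixel is an extreme.
skewers = rng.normal(size=(n_skewers, n_bands))
proj = X @ skewers.T                       # (pixels x skewers)
scores = np.zeros(n_pixels, dtype=int)
np.add.at(scores, proj.argmax(axis=0), 1)
np.add.at(scores, proj.argmin(axis=0), 1)

print("highest-scoring pixels:", np.argsort(scores)[-p:])  # expected: the planted pure pixels
```

The second block sketches the core iteration attributed to VCA above: project the data onto a direction orthogonal to the subspace spanned by the endmembers already found, and take the extreme of that projection as the next endmember. This is a simplified illustration under the pure-pixel assumption, with no signal-subspace estimation or SNR-dependent projection, not the authors' full algorithm.

```python
import numpy as np

def iterative_orthogonal_extraction(X, p, seed=0):
    """Pick p candidate endmember pixels by repeated orthogonal projections.

    X: (pixels x bands) data matrix; p: number of endmembers.
    Simplified sketch of the VCA idea; assumes at least one pure pixel per endmember.
    """
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    indices, E = [], np.zeros((n_bands, 0))
    for _ in range(p):
        direction = rng.normal(size=n_bands)
        if E.shape[1] > 0:
            # Remove the component lying in the span of the endmembers found so far.
            Q, _ = np.linalg.qr(E)
            direction -= Q @ (Q.T @ direction)
        proj = X @ direction
        idx = int(np.argmax(np.abs(proj)))   # extreme of the projection
        indices.append(idx)
        E = np.column_stack([E, X[idx]])
    return indices

# Tiny synthetic test with pure pixels planted at indices 0, 1, 2.
rng = np.random.default_rng(5)
M = rng.uniform(0.1, 1.0, size=(40, 3))
A = rng.dirichlet(np.ones(3), size=800)
A[:3] = np.eye(3)
X = A @ M.T + rng.normal(0.0, 0.001, size=(800, 40))
print("selected pixel indices:", sorted(iterative_orthogonal_extraction(X, 3)))
```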

Relevance:

80.00%

Publisher:

Abstract:

Agriculture is one of humankind's oldest activities and is of great importance for obtaining both food and goods for other purposes. From the outset, however, crops were affected by pests and diseases that led to harvest losses, which gave rise to the need to apply substances to protect the crops. Pesticides are natural or synthetic substances applied to protect plants by eliminating pests and diseases. Beyond the potential toxicity of these substances, in some cases their degradation in the environment by microorganisms, hydrolysis, solar radiation, etc., gives rise to degradation products as toxic as, or more toxic than, the pesticides themselves. The use of such substances brings problems, since they are applied in amounts intended to compensate for losses that occur through degradation, leaching, and other processes. This type of application leads to contamination of the environment by pesticides, putting at risk both human health and other living organisms. Encapsulating these compounds in cyclodextrins aims to increase the stability of the compound and to promote its controlled release. The present work carries out a comparative study of the photodegradation of the herbicide terbuthylazine and the fungicide pyrimethanil, both free and encapsulated in 2-hydroxypropyl-β-cyclodextrin. Reversed-phase HPLC was used to quantify the pesticides throughout the study. The results showed that terbuthylazine is photochemically stable under the conditions applied, since after 75 days the free-pesticide solutions in deionized water and in river water still contained 98% of the initial pesticide, as did the encapsulated-pesticide solutions in deionized water and in river water. In this particular case it was not possible, within the time interval considered, to evaluate the influence of encapsulation on the photodegradation of terbuthylazine. Given the low photodegradation observed, hydrogen peroxide was added to the control and 35 mM HP-β-CD solutions, and acetone to the 0 mM and 17.5 mM HP-β-CD solutions, in an attempt to promote degradation of the pesticide. The results showed that, particularly for the solutions to which acetone was added, the degradation rate increased, although degradation still proceeded slowly and very similarly for both the free and the encapsulated pesticide. Regarding the photodegradation of pyrimethanil, after 4 days of irradiation the free-pesticide solutions already showed some degradation, and since the irradiation period lasted 53 days it was possible to determine the kinetic parameters for some of the solutions. The deionized-water and river-water solutions with free pyrimethanil both showed degradation of the pesticide, following first-order kinetics with rate constants of 0.0018 day⁻¹ and 0.0060 day⁻¹, respectively. No degradation was detected for the solution of encapsulated pyrimethanil in deionized water, whereas the solution of encapsulated pyrimethanil in river water showed degradation corresponding to first-order kinetics with a rate constant of 0.0013 day⁻¹.
From the results obtained it can be concluded that encapsulating pyrimethanil in 2-hydroxypropyl-β-cyclodextrin is advantageous, since it reduces the amount of pesticide used and increases the effectiveness of pest control.
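
For reference, first-order rate constants translate directly into half-lives via t1/2 = ln 2 / k; the short sketch below applies that relation to the constants quoted above.

```python
import math

# First-order photodegradation rate constants reported above (day^-1).
rate_constants = {
    "free pyrimethanil, deionized water": 0.0018,
    "free pyrimethanil, river water": 0.0060,
    "encapsulated pyrimethanil, river water": 0.0013,
}

# For first-order kinetics C(t) = C0 * exp(-k t), the half-life is ln(2) / k.
for label, k in rate_constants.items():
    print(f"{label}: k = {k} day^-1, t1/2 ~ {math.log(2) / k:.0f} days")
```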