994 results for Sub-pixel techniques
Abstract:
In this work, nickel sulfate crystals doped with manganese ions (NSH:Mn) and nickel sulfate crystals doped with magnesium ions (NMgSH) were grown and subsequently characterized by X-ray diffraction and Raman spectroscopy. The results showed that the doped crystals have a structure very similar to that of the pure nickel sulfate crystal (NSH), with an anisotropic deformation of the unit-cell dimensions relative to the pure crystal. The aim of the present study was to grow two new single crystals of good optical quality for use as band-pass optical filters. Nickel sulfate hexahydrate (NSH) crystals are known for their optical transmission spectra, which have attracted much attention because they exhibit two regions of high transmission efficiency, approximately 80%: the first region between 200 and 350 nm and the second between 400 and 600 nm, together with high absorption efficiency in the other regions of the UV-VIS spectrum. A light transmission spectrum with these characteristics resembles that of an optical filter. Thermogravimetric analyses (TGA) were performed on pure and doped crystals. The decomposition temperature obtained for NSH was 73 °C, whereas the NSH:Mn and NMgSH crystals showed values of 82 °C and 86 °C, respectively. As can readily be seen, the thermal stability of crystals with Mn or Mg ions in their structures is significantly higher. The transmission band between 200 and 350 nm in the optical spectrum of NSH showed a significant reduction in width in the transmission spectra of the doped crystals, thus restricting it to the region of the spectrum known as UVA.
Abstract:
Hydrotalcite-like compounds, also known as layered double hydroxides (LDHs), of the (Zn-Ni-Cu/Fe-Al)-SO4 system were obtained by co-precipitation at variable (increasing) pH using red mud (RM) as the starting material because of its high percentage of Fe3+ and Al3+. For this study, the RM, previously characterized by XRF and XRD, was subjected to acid digestion with concentrated HCl and 2 N H2SO4. For the LDHs obtained, the influence of the divalent cation type, of the synthesis pH (pH 5, 7, 8, 9, 10, and 12), and of the theoretical molar ratio r = MII/MIII (2, 3, and 4) on the crystal structure was evaluated by the following characterization techniques: XRD, FT-IR, SEM/EDS, and TG/DTA. The XRF results revealed that the RM is composed mainly of Fe2O3 (32.80%), Al2O3 (19.83%), SiO2 (18.14%), and Na2O (11.55%). XRD corroborates the chemical analysis, since the following minerals were identified: hematite, goethite, gibbsite, sodalite, calcite, anatase, and quartz. The Zn-LDHs showed that increasing the synthesis pH contributes to better crystalline ordering of the material, since the peaks become better defined, culminating in the best experimental condition at pH 9 and r = 3, whose LDH was identified as the mineral natroglaucocerinite (d ≈ 11 Å). At these pH values, the incorporation of SO42- into the interlayer spacing was favored despite competition with the CO2 present in the atmosphere at the time of synthesis. FT-IR also indicates the presence of sulfate. SEM analyses reveal very fine, small hexagonal crystals, < 2 μm, whose EDS analysis indicated the elements Na, Zn, Fe, Al, S, C, and O in their composition. TG/DTA showed four mass-loss steps for the LDHs with the best crystalline ordering: dehydration, dehydroxylation, deoxygenation, and desulfation. For the less crystalline materials, the first two steps occur simultaneously.
The Ni-LDHs showed three peaks at positions close to those of the mineral carrboydite from pH 7 onward. However, from pH 9 onward, hematite appears as an accessory phase. There is also competition between the SO42- and CO32- anions in the interlayer space, since the basal spacing d decreases (from approximately 9.5 to 7.8 Å). This was also observed by FT-IR. SEM analyses showed agglomerates of anhedral minerals smaller than 2 μm whose EDS indicated a composition of Ni, Fe, Al, S, C, and O. The TG/DTA analyses showed the same behavior as the previous system, evidencing the dehydration, dehydroxylation, deoxygenation, and desulfation steps; for the less crystalline materials, the first two steps likewise occur simultaneously. The Cu-LDHs, at pH values between 7 and 10, did not crystallize the LDH phase as observed for the zinc- or nickel-containing systems. Copper distorts the octahedral structure, causing the so-called Jahn-Teller effect: a tetragonal distortion of the octahedral environment. The FT-IR analyses showed the same behavior as the previous systems, although the material is amorphous by XRD. SEM also reveals amorphous agglomerates whose EDS indicated the elements Cu, Fe, Al, S, C, and O. The TG/DTA analyses behaved like those of the less crystalline materials of the two previous systems, for which the dehydration and dehydroxylation steps occurred simultaneously. The Mt-LDHs (a mixture of Zn2+, Ni2+, and Cu2+) behaved similarly to the nickel LDHs, with four peaks at positions close to those of carrboydite from pH 7 onward. The competition between sulfate and carbonate is also repeated, since the basal spacing d decreases (from approximately 9.5 to 7.9 Å), which can also be noted in the FT-IR spectra.
SEM of these samples also showed agglomerates smaller than 2 μm whose EDS indicated the elements Zn, Cu, Ni, Fe, Al, S, C, and O. Here, too, the TG/DTA curves behaved like those of the poorly crystalline materials obtained previously.
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
This paper presents a comparative analysis of the results produced by two techniques for detecting and segmenting moving bodies captured in an image sequence, namely: 1) a technique based on the temporal average of the values of each pixel recorded over N consecutive image frames, and 2) a technique based on the history of values associated with pixels recorded in different frames of an image sequence.
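As a concrete illustration of the first technique, a minimal running-average background model can be sketched in pure Python; the `alpha` and `thresh` parameters below are illustrative choices, not values from the paper:

```python
def detect_motion(frames, alpha=0.1, thresh=30):
    """Temporal-average background subtraction: maintain a running-average
    background per pixel and flag pixels that deviate by more than `thresh`.
    frames: list of 2-D grids (lists of lists) of grayscale values."""
    h, w = len(frames[0]), len(frames[0][0])
    # initialize the background with the first frame
    bg = [[float(frames[0][y][x]) for x in range(w)] for y in range(h)]
    masks = []
    for f in frames:
        # a pixel is "in motion" when it departs from the temporal average
        mask = [[abs(f[y][x] - bg[y][x]) > thresh for x in range(w)]
                for y in range(h)]
        # blend the current frame into the running average
        for y in range(h):
            for x in range(w):
                bg[y][x] = (1 - alpha) * bg[y][x] + alpha * f[y][x]
        masks.append(mask)
    return masks
```

The second technique would additionally keep a per-pixel history across frames rather than a single averaged value.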
Abstract:
Fish bioassays are valuable tools that can be used to elucidate the toxicological potential of numerous substances present in the aquatic environment. In this study, we assessed the antagonistic action of selenium (Se) against the toxicity of mercury (Hg) in fish (Oreochromis niloticus). Six experimental groups with six fish each were defined as follows: (1) control, (2) mercury (HgCl2), (3) sodium selenite (Na2SeO3), (4) sodium selenate (Na2SeO4), (5) mercury + sodium selenite (HgCl2 + Na2SeO3), and (6) mercury + sodium selenate (HgCl2 + Na2SeO4). Hematological parameters [red blood cells (RBC), white blood cells (WBC), and erythroblasts (ERB)] in combination with cytogenotoxicity biomarkers [nuclear abnormalities (NAs) and micronuclei (MN)] were examined after three, seven, ten, and fourteen days. After 7 days of exposure, cytogenotoxic effects and increased erythroblasts caused by mercury, leukocytosis triggered by mercury + sodium selenite, leukopenia associated with sodium selenate, and anemia triggered by mercury + sodium selenate were observed. Positive correlations that were independent of time were observed between WBC and RBC, ERB and MN, and NA and MN. The results suggest that short-term exposure to chemical contaminants elicited changes in blood parameters and produced cytogenotoxic effects. Moreover, NAs are the primary manifestations of MN formation and should be included in a class characterized as NA only. Lastly, the staining techniques used can be applied to both hematological characterization and the measurement of cytogenotoxicity biomarkers.
Abstract:
Land use conservation planning requires knowledge of soil characteristics, natural susceptibility to erosion, and the soil loss limit. In this context, the objectives of this study were to perform a detailed soil survey of the Ribeirão das Perobas watershed, located in Santa Cruz do Rio Pardo, São Paulo State, and to determine and map the erodibility and soil loss tolerance of the soil classes found in the survey. The following techniques were used to perform the detailed soil survey: photopedology, field sampling, physical analysis, chemical analysis, and morphological description of the soil samples and profiles. The erodibility was determined by the methods described by Denardin (1990) and Mannigel et al. (2002), and the determination of soil loss tolerance followed the methodology of Mannigel et al. (2002). The erodibility results determined by the methodology of Denardin (1990) were not discrepant and did not distinguish soils that are known to have different susceptibility to erosion, whereas, using the methodology of Mannigel et al. (2002), very high or very low erodibility values were observed in soils with extreme contents of sand, silt, or clay. The most influential variable in the soil loss tolerance results was the correction factor for the textural gradient of clay between soil horizons.
Abstract:
This work aimed to study the environmental conditions of the Permanent Preservation Areas (PPAs) in the sub-basin of Marimbondo Stream in the municipality of Jales (SP), using remote sensing techniques and taking as its basis Brazilian environmental legislation, the Forest Code (Federal Law No. 12,651/2012). Permanent Preservation Areas are intrinsically linked to improved conditions in the quantity and quality of water in spring areas and along water bodies on rural properties. Therefore, we also compared the new Forest Code with the old Federal Law No. 4,771/1965, observed its application in the sub-basin area, and carried out the corresponding simulation. The results of this research made it possible to verify the need for direct implementation of monitoring measures under the Forest Code, a water management tool in Brazil that will help ensure sustainable practices of land use and land cover, with direct benefits to water production, mainly for public supply. There is also a need for greater involvement by public actors, namely the São José dos Dourados Basin Committee and the municipal government of Jales, in order to act truly toward the protection and maintenance of the Marimbondo Stream.
Abstract:
Monitoring the condition of the entire extent of Brazilian highways is a costly and time-consuming task. This work addresses new techniques that allow the surface condition of road pavements to be surveyed quickly using hyperspectral images from an airborne digital sensor. In recent years, a growing number of high spatial resolution images have appeared on the world market with the emergence of new remote sensing satellites and airborne sensors. A methodology is proposed for identifying asphalt pavements and classifying the main occurrences of pavement surface defects. The first step of the methodology is the identification of the asphalt surface in the image, using a hybrid classification based initially on pixels and then refined by objects. The second step is the identification and classification of the main defects in flexible pavements that are observable in high spatial resolution images. This last step makes intensive use of new object-based image classification techniques. The final result is the generation of pavement surface condition indices from the images that can be compared with the current pavement surface condition indicators already standardized by the competent agencies in the country.
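The pixel-then-object idea behind the hybrid classification can be sketched as follows; the spectral thresholds and minimum object size are hypothetical placeholders, not the classifiers actually used in the work:

```python
def classify_asphalt(img, low, high, min_size):
    """Hybrid classification sketch: per-pixel spectral thresholding followed
    by an object-based refinement that discards connected components smaller
    than `min_size` pixels. img: 2-D grid of band values."""
    h, w = len(img), len(img[0])
    # step 1: pixel-based classification by spectral range
    mask = [[low <= img[y][x] <= high for x in range(w)] for y in range(h)]
    # step 2: object-based refinement via 4-connected components
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_size:  # too small to be a road object
                    for cy, cx in comp:
                        mask[cy][cx] = False
    return mask
```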
Abstract:
XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
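The tree-edit-distance machinery such frameworks build on can be illustrated with a deliberately simplified top-down variant (not the full ordered-tree edit distance used in the literature, which also allows deleting a node while promoting its children):

```python
def tree_size(t):
    """Number of nodes in a (label, [children]) tree."""
    label, children = t
    return 1 + sum(tree_size(c) for c in children)

def tree_dist(t1, t2):
    """Simplified top-down edit distance between ordered labeled trees:
    relabeling a node costs 1; inserting or deleting a whole subtree
    costs its size. Child sequences are aligned with classic edit distance."""
    label1, kids1 = t1
    label2, kids2 = t2
    cost = 0 if label1 == label2 else 1
    n, m = len(kids1), len(kids2)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i-1][0] + tree_size(kids1[i-1])
    for j in range(1, m + 1):
        d[0][j] = d[0][j-1] + tree_size(kids2[j-1])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i-1][j] + tree_size(kids1[i-1]),               # delete subtree
                d[i][j-1] + tree_size(kids2[j-1]),               # insert subtree
                d[i-1][j-1] + tree_dist(kids1[i-1], kids2[j-1]),  # match recursively
            )
    return cost + d[n][m]
```

An XML document modeled as an Ordered Labeled Tree maps directly onto the `(label, [children])` representation used here.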
Abstract:
Purpose: To evaluate biomechanical changes measured with the ORA (Ocular Response Analyzer®; Reichert Ophthalmic Instruments, Buffalo, New York, USA) after LASIK with the Moria One Use Plus, and to compare the biomechanical changes after myopic and hyperopic ablations. Methods: Fourteen eyes treated for hyperopia (H) and 19 eyes treated for myopia (M) were evaluated with the ORA preoperatively and 1 month after LASIK with a thin flap (100 microns) using SBK-OUP (Sub-Bowman Keratomileusis-One Use Plus, Moria®). CH (corneal hysteresis), CRF (corneal resistance factor), IOPg (Goldmann-correlated intraocular pressure), IOPcc (corneal-compensated intraocular pressure), and 38 additional variables derived from the corneal biomechanical response signal of the ORA were analyzed. The Wilcoxon test was used to assess differences between the variables before and after surgery for each group, and the pre- to postoperative (1 month) differences in myopic eyes were compared with those obtained in hyperopic eyes using the Mann-Whitney test. Results: There was a significant difference before and after LASIK in myopic and hyperopic eyes in IOPg (Wilcoxon, p<0.05), but not in IOPcc. Only myopic eyes showed a significant difference in CH and CRF measurements before and after LASIK, as well as in 9 other biomechanical parameters (aspect1, h1, dive1, path1, p1area1, W11, H11, w2, and path11; Wilcoxon, p<0.05), 8 of them related to the signal of the first applanation. Five parameters related to the signal of the second applanation showed significant variation only in eyes before and after hyperopic LASIK (aspect2, h2, dive2, mslew2, and H21; Wilcoxon, p<0.05). There was a difference in both myopic and hyperopic eyes in three parameters related to the applanation signal areas (p1area, p2area, and p2area1; Wilcoxon, p<0.05). The differences in IOPg and p1area before and after surgery were significantly greater in myopic eyes than in hyperopic eyes (Mann-Whitney, p<0.05).
Conclusion: There are several significant differences in biomechanical parameters after LASIK with the Moria OUP-SBK. Overall, the impact of myopic LASIK on corneal biomechanics is greater than that of hyperopic LASIK. The parameters derived from the first applanation signal of the ORA are more affected in myopic LASIK, whereas parameters derived from the second applanation are more affected in hyperopic LASIK.
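For readers unfamiliar with the paired test used in this study, the Wilcoxon signed-rank statistic can be computed as in this minimal pure-Python sketch (the data below are illustrative, not the study's measurements):

```python
def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed-rank statistic for paired samples: drop zero
    differences, rank the absolute differences (average ranks for ties),
    and return W = min(W+, W-)."""
    diffs = [b - a for a, b in zip(pre, post) if b - a != 0]
    absd = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(absd):
        j = i
        while j < len(absd) and absd[j][0] == absd[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0  # average of ranks i+1 .. j (1-based)
        for k in range(i, j):
            ranks[absd[k][1]] = avg
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)
```

The statistic is then compared against critical values (or a normal approximation for larger samples) to obtain the p-values reported in the abstract.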
Abstract:
Ground-based Earth troposphere calibration systems play an important role in planetary exploration, especially in radio science experiments aimed at estimating planetary gravity fields. In these experiments, the main observable is the spacecraft (S/C) range rate, measured from the Doppler shift of an electromagnetic wave transmitted from the ground, received by the spacecraft, and coherently retransmitted back to the ground. Once solar corona and interplanetary plasma noise has been removed from the Doppler data, the Earth troposphere remains one of the main error sources in tracking observables. Current Earth media calibration systems at NASA's Deep Space Network (DSN) stations are based on a combination of weather data and multidirectional, dual-frequency GPS measurements acquired at each station complex. In order to support Cassini's cruise radio science experiments, a new generation of media calibration systems was developed, driven by the goal of an end-to-end Allan deviation of the radio link on the order of 3×10^-15 at 1000 s integration time. ESA's future BepiColombo mission to Mercury carries scientific instrumentation for radio science experiments (a Ka-band transponder and a three-axis accelerometer) which, in combination with the S/C telecommunication system (an X/X/Ka transponder), will provide the most advanced tracking system ever flown on an interplanetary probe. The current error budget for MORE (Mercury Orbiter Radioscience Experiment) allows the residual uncalibrated troposphere to contribute a value of 8×10^-15 to the two-way Allan deviation at 1000 s integration time. The current standard ESA/ESTRACK calibration system is based on a combination of surface meteorological measurements and mathematical algorithms capable of reconstructing the Earth troposphere path delay, leaving an uncalibrated component of about 1-2% of the total delay.
In order to satisfy the stringent MORE requirements, the short time-scale variations of the Earth troposphere water vapor content must be calibrated at ESA deep space antennas (DSA) with more precise and stable instruments (microwave radiometers). In parallel with these high-performance instruments, ESA ground stations should be upgraded with media calibration systems at least capable of calibrating both troposphere path delay components (dry and wet) at the sub-centimetre level, in order to reduce S/C navigation uncertainties. The natural choice is to provide continuous troposphere calibration by processing GNSS data acquired at each complex by the dual-frequency receivers already installed for station location purposes. The work presented here outlines the troposphere calibration technique to support both deep space probe navigation and radio science experiments. After an introduction to deep space tracking techniques, observables, and error sources, Chapter 2 investigates the troposphere path delay in depth, reporting the estimation techniques and the state of the art of ESA and NASA troposphere calibrations. Chapter 3 analyzes the status and performance of the NASA Advanced Media Calibration (AMC) system with reference to the Cassini data analysis. Chapter 4 describes the current release of a GNSS software (S/W) package developed to estimate the troposphere calibration for ESA S/C navigation purposes. During the development phase of the S/W, a test campaign was undertaken to evaluate its performance. A description of the campaign and the main results are reported in Chapter 5. Chapter 6 presents a preliminary analysis of microwave radiometers to be used to support radio science experiments. The analysis was carried out considering radiometric measurements of the ESA/ESTEC instruments installed in Cabauw (NL) and compared with the requirements of MORE.
Finally, Chapter 7 summarizes the results obtained and defines some key technical aspects to be evaluated and taken into account for the development phase of future instrumentation.
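The Allan deviation figures quoted above can be computed from fractional-frequency residuals with a short routine; this is the standard non-overlapping estimator, shown here only as a sketch:

```python
def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y
    at averaging factor m (i.e., averaging time tau = m * tau0, where tau0
    is the sample interval). Requires at least 2*m samples."""
    n = len(y) // m
    # average y over consecutive non-overlapping blocks of m samples
    avg = [sum(y[i*m:(i+1)*m]) / m for i in range(n)]
    # Allan variance: half the mean-squared first difference of the averages
    s = sum((avg[i+1] - avg[i]) ** 2 for i in range(n - 1))
    return (s / (2 * (n - 1))) ** 0.5
```

Evaluating this at m corresponding to tau = 1000 s is how link stability is compared against requirements such as 3×10^-15.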
Abstract:
This research activity studied how uncertainties arise and are interrelated in the multi-model approach, since this appears to be the biggest challenge of ocean and weather forecasting. Moreover, we tried to reduce model error through the superensemble approach. To this end, we created different datasets and, by means of suitable algorithms, obtained the superensemble estimate. We studied the sensitivity of this algorithm as a function of its characteristic parameters. Clearly, it is not possible to obtain a reasonable estimate of the error while neglecting the grid size of the ocean model, because of the large number of sub-grid phenomena embedded in the spatial discretization that can only be roughly parametrized rather than explicitly resolved. For this reason we also developed a high-resolution model, in order to quantify for the first time the impact of grid resolution on model error.
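A minimal superensemble training step, assuming a plain least-squares fit of two model forecasts to observations (real superensemble schemes typically also remove climatological biases before fitting), could look like:

```python
def superensemble_weights(preds, obs):
    """Least-squares weights for combining two model forecasts (training
    phase of a multi-model superensemble). preds: list of (m1_t, m2_t)
    forecast pairs; obs: list of observed values at the same times."""
    # build the 2x2 normal equations A w = b
    a11 = sum(p[0] * p[0] for p in preds)
    a12 = sum(p[0] * p[1] for p in preds)
    a22 = sum(p[1] * p[1] for p in preds)
    b1 = sum(p[0] * o for p, o in zip(preds, obs))
    b2 = sum(p[1] * o for p, o in zip(preds, obs))
    det = a11 * a22 - a12 * a12
    # solve by Cramer's rule (assumes the forecasts are not collinear)
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2
```

During the forecast phase, the combined estimate at each time is then w1·m1 + w2·m2.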
Abstract:
In the present thesis, we discuss the main notions of an axiomatic approach for an invariant Harnack inequality. This procedure, originated from techniques for fully nonlinear elliptic operators, has been developed by Di Fazio, Gutiérrez, and Lanconelli in the general settings of doubling Hölder quasi-metric spaces. The main tools of the approach are the so-called double ball property and critical density property: the validity of these properties implies an invariant Harnack inequality. We are mainly interested in the horizontally elliptic operators, i.e. some second order linear degenerate-elliptic operators which are elliptic with respect to the horizontal directions of a Carnot group. An invariant Harnack inequality of Krylov-Safonov type is still an open problem in this context. In the thesis we show how the double ball property is related to the solvability of a kind of exterior Dirichlet problem for these operators. More precisely, it is a consequence of the existence of some suitable interior barrier functions of Bouligand-type. By following these ideas, we prove the double ball property for a generic step two Carnot group. Regarding the critical density, we generalize to the setting of H-type groups some arguments by Gutiérrez and Tournier for the Heisenberg group. We recognize that the critical density holds true in these peculiar contexts by assuming a Cordes-Landis type condition for the coefficient matrix of the operator. By the axiomatic approach, we thus prove an invariant Harnack inequality in H-type groups which is uniform in the class of the coefficient matrices with prescribed bounds for the eigenvalues and satisfying such a Cordes-Landis condition.
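For orientation, the invariant Harnack inequality at stake has the following generic form, stated here informally for nonnegative solutions; the constant C must be independent of the solution and of the ball:

```latex
% Invariant Harnack inequality (generic form): for every nonnegative
% solution u of Lu = 0 in the quasi-metric ball B(x_0, r),
%   sup over the concentric smaller ball is controlled by the inf,
% with C depending only on structural constants (not on u, x_0, or r):
\sup_{B(x_0,\, r/2)} u \;\le\; C \, \inf_{B(x_0,\, r/2)} u
```

In the axiomatic approach described above, the double ball property and the critical density property together imply an inequality of this type.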
Abstract:
Multi-input multi-output (MIMO) technology is an emerging solution for high data rate wireless communications. We develop soft-decision based equalization techniques for frequency selective MIMO channels in the quest for low-complexity equalizers with BER performance competitive to that of ML sequence detection. We first propose soft decision equalization (SDE), and demonstrate that decision feedback equalization (DFE) based on soft-decisions, expressed via the posterior probabilities associated with feedback symbols, is able to outperform hard-decision DFE, with a low computational cost that is polynomial in the number of symbols to be recovered, and linear in the signal constellation size. Building upon the probabilistic data association (PDA) multiuser detector, we present two new MIMO equalization solutions to handle the distinctive channel memory. With their low complexity, simple implementations, and impressive near-optimum performance offered by iterative soft-decision processing, the proposed SDE methods are attractive candidates to deliver efficient reception solutions to practical high-capacity MIMO systems. Motivated by the need for low-complexity receiver processing, we further present an alternative low-complexity soft-decision equalization approach for frequency selective MIMO communication systems. With the help of iterative processing, two detection and estimation schemes based on second-order statistics are harmoniously put together to yield a two-part receiver structure: local multiuser detection (MUD) using soft-decision Probabilistic Data Association (PDA) detection, and dynamic noise-interference tracking using Kalman filtering. The proposed Kalman-PDA detector performs local MUD within a sub-block of the received data instead of over the entire data set, to reduce the computational load. 
At the same time, all the interference affecting the local sub-block, including both multiple-access and inter-symbol interference, is properly modeled as the state vector of a linear system and dynamically tracked by Kalman filtering. Two types of Kalman filters are designed, both of which are able to track a finite impulse response (FIR) MIMO channel of any memory length. The overall algorithms enjoy low complexity that is only polynomial in the number of information-bearing bits to be detected, regardless of the data block size. Furthermore, we introduce two optional performance-enhancing techniques: cross-layer automatic repeat request (ARQ) for uncoded systems and a code-aided method for coded systems. We take Kalman-PDA as an example, and show via simulations that both techniques can render error performance that is better than Kalman-PDA alone and competitive with sphere decoding. Finally, we consider the case in which channel state information (CSI) is not perfectly known at the receiver, and present an iterative channel estimation algorithm. Simulations show that the performance of SDE with channel estimation approaches that of SDE with perfect CSI.
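A scalar analogue of the noise-interference tracking step, assuming a one-dimensional state with known dynamics coefficient and noise variances (the full receiver tracks a vector state with matrix dynamics), can be sketched as:

```python
def kalman_1d(zs, a, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter: state x_{t+1} = a*x_t + w with Var(w) = q,
    measurement z_t = x_t + v with Var(v) = r. Returns the filtered
    state estimates for each measurement in zs."""
    x, p = x0, p0
    out = []
    for z in zs:
        # predict: propagate state and error variance through the dynamics
        x, p = a * x, a * a * p + q
        # update: blend prediction and measurement by the Kalman gain
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return out
```

With small measurement noise the estimates lock onto the observations; with large measurement noise they lean on the predicted dynamics, which is the behavior the Kalman-PDA receiver exploits when tracking interference.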