97 results for Ordered weighted average
in Repositório Institucional UNESP - Universidade Estadual Paulista "Julio de Mesquita Filho"
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Application of multicriteria analysis to determine priority areas for forest recovery
Abstract:
Using a Geographic Information System (GIS) and its capability to analyze spatial data, a database of updated spatial data for the sub-basin of the Descalvado stream, Botucatu, SP, was developed to provide an evaluation and diagnosis of the area with respect to land use and the degradation processes occurring there. Within the GIS, priority areas for forest recovery were defined by Multicriteria Evaluation using the Ordered Weighted Average method. The latter allows the decision maker, when resources are limited, to choose the area to be recovered from one of the proposed scenarios or to recover it in stages. The study showed accelerated erosion processes at the headwaters of the water bodies, fragmentation of native vegetation, especially in hillside areas, and little native vegetation in riparian areas. The application of multicriteria analysis with the Ordered Weighted Average was important because it systematized and discriminated priority scenarios for forest recovery.
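The abstract does not give the exact decision rule, but the general shape of an Ordered Weighted Average combination of standardized criterion scores can be sketched as follows; the criterion values and order weights are hypothetical, chosen only to illustrate how the order weights move the result between "AND-like" and "OR-like" behaviour.

```python
# Minimal sketch of an Ordered Weighted Average (OWA) combination of criterion
# scores, as used in GIS multicriteria evaluation. The scores and weights below
# are hypothetical, not taken from the study.

def owa(scores, order_weights):
    """Sort the scores in descending order and weight them by rank position."""
    assert abs(sum(order_weights) - 1.0) < 1e-9
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(order_weights, ranked))

# Hypothetical standardized criteria (0-1) for one map cell:
# distance to stream, slope, erosion risk.
cell_scores = [0.9, 0.4, 0.7]

print(owa(cell_scores, [1/3, 1/3, 1/3]))  # neutral: ordinary average
print(owa(cell_scores, [0.0, 0.0, 1.0]))  # fully "AND-like": minimum score
print(owa(cell_scores, [1.0, 0.0, 0.0]))  # fully "OR-like": maximum score
```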
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In the fields of Machine Vision and Photogrammetry, straight lines extracted from digital images can be used either as vector elements of a digital representation or as control entities that allow the determination of the camera interior and exterior orientation parameters. Applications related to image orientation require feature extraction with subpixel precision to guarantee the reliability of the estimated parameters. This paper presents three approaches for straight line extraction with subpixel precision. The first approach performs subpixel refinement based on the weighted average of subpixel positions calculated along the direction perpendicular to the segmented straight line. In the second approach, a parabolic function is fitted to the grey level profile of neighboring pixels in the direction perpendicular to the segmented line, and this model is then interpolated to estimate the subpixel coordinates of the line center. In the third approach, subpixel refinement is performed by fitting a parabolic surface to the grey level values of neighboring pixels around the segmented line. The intersection of this surface with a plane normal to the line direction yields a parabolic equation from which the subpixel coordinates of the point on the straight line are estimated, assuming that this point is the critical point of the function. Three experiments with real images were carried out, and the approach based on the parabolic surface fit presented the best results.
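As a sketch of the second approach, a parabola can be fitted to the grey-level profile sampled perpendicular to the segmented line and its vertex taken as the subpixel centre; the profile values below are hypothetical and the fitting details of the paper are not reproduced.

```python
# Minimal sketch of subpixel line positioning from a cross-profile: fit a
# parabola to grey levels sampled perpendicular to the segmented line and take
# its vertex as the subpixel centre. Sample values are hypothetical.

import numpy as np

def subpixel_center(offsets, grey_levels):
    """Fit g(x) = a*x^2 + b*x + c to the cross-profile and return the offset
    of the vertex (-b / 2a), i.e. the critical point of the fitted function."""
    a, b, c = np.polyfit(offsets, grey_levels, 2)
    return -b / (2.0 * a)

# Hypothetical grey-level profile across a bright line, sampled at integer
# offsets from the pixel flagged by the segmentation.
offsets = np.array([-2, -1, 0, 1, 2], dtype=float)
profile = np.array([40.0, 120.0, 200.0, 150.0, 60.0])

print(subpixel_center(offsets, profile))  # roughly 0.1 pixel to the right
```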
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Zirconia-based ceramics that retain their metastable tetragonal phase at room temperature are widely studied due to their excellent mechanical and electrical properties. When these materials are prepared from precursor nanopowders with high specific surface areas, this phase is retained in dense ceramic bodies. In this work, we present a morphological study of nanocrystalline ZrO2-2.8 mol% Y2O3 powders synthesized by the gel-combustion method using different organic fuels - alanine, glycine, lysine and citric acid - and calcined at temperatures ranging from 873 to 1173 K. The nanopore structures were investigated by small-angle X-ray scattering. The experimental results indicate that nanopores in samples prepared with alanine, glycine and lysine have an essentially single-mode volume distribution for calcination temperatures up to 1073 K, while those calcined at 1173 K exhibit a more complex and wider volume distribution. The volume-weighted average of the nanopore radii increases monotonically with increasing calcination temperature. The samples prepared with citric acid exhibit a much wider size distribution than the others. The Brunauer-Emmett-Teller technique was used to determine the specific surface area, and X-ray diffraction, environmental scanning electron microscopy and transmission electron microscopy were also employed for a complete characterization of the samples.
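The volume-weighted average radius reported here is a plain weighted mean; a minimal sketch with hypothetical radii and pore-volume fractions (not data from the study) is:

```python
# Minimal sketch of a volume-weighted average pore radius, the quantity the
# abstract reports as increasing with calcination temperature. The radii and
# volume fractions below are hypothetical.

radii_nm = [2.0, 3.5, 5.0]          # pore radii (nm)
volume_fraction = [0.5, 0.3, 0.2]   # volume fraction of pores at each radius

r_vw = sum(r * v for r, v in zip(radii_nm, volume_fraction)) / sum(volume_fraction)
print(f"volume-weighted average radius: {r_vw:.2f} nm")
```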
Abstract:
Groundwater samples from the Guarani aquifer, which spreads over about 1 million km² across four countries in South America, were analysed for Rn-222, Ra-226 and Ra-228, and their activity concentrations were found to be lognormally distributed. The population-weighted average activity concentration of these radionuclides allowed a total effective dose to be estimated that is either slightly higher (0.13 mSv/year) than 0.1 mSv or twice this limit (0.21 mSv/year), depending on the choice of dose conversion factor. This calculation adds useful information for the appropriate management of this transboundary aquifer, which is socially and economically very important to about 15 million inhabitants of Brazil, Argentina, Uruguay and Paraguay. (C) 2004 Elsevier Ltd. All rights reserved.
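The population-weighted average used here is a weighted mean of regional concentrations with population as the weight; the sketch below uses hypothetical concentrations and population figures only to show the construction.

```python
# Minimal sketch of a population-weighted average activity concentration,
# the quantity the abstract uses to estimate dose. All numbers are hypothetical.

concentrations_bq_per_l = [10.0, 25.0, 5.0]   # e.g. mean Rn-222 by region
population = [4_000_000, 1_500_000, 9_500_000]

weighted_avg = (
    sum(c * p for c, p in zip(concentrations_bq_per_l, population))
    / sum(population)
)
print(f"population-weighted average: {weighted_avg:.1f} Bq/L")
```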
Abstract:
Traditional methods of submerged aquatic vegetation (SAV) survey are time-consuming and therefore costly. Optical remote sensing is an alternative, but it has limitations in the aquatic environment, whereas echosounder techniques are efficient at detecting submerged targets. Therefore, the aim of this study is to evaluate different interpolation approaches applied to SAV sample data collected by echosounder. The case study was performed in a region of the Uberaba River, Brazil. The interpolation methods evaluated in this work are Nearest Neighbor, Weighted Average, Triangular Irregular Network (TIN) and ordinary kriging. The best results were obtained with kriging; thus, the use of geostatistics is recommended for spatial inference of SAV from sample data surveyed with echosounder techniques. © 2012 IEEE.
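The "Weighted Average" interpolator compared here is commonly an inverse-distance weighting scheme; the sketch below assumes that form (the power parameter and sample values are hypothetical), since the abstract does not specify the weighting.

```python
# Minimal sketch of a weighted-average (inverse-distance weighting) interpolator,
# one of the methods compared in the abstract. The power parameter is an
# assumption; the echosounder samples are hypothetical.

import math

def idw(samples, x, y, power=2.0):
    """Interpolate a value at (x, y) from (xi, yi, zi) samples using
    inverse-distance weights; returns the sample value exactly at a sample point."""
    num = den = 0.0
    for xi, yi, zi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return zi
        w = 1.0 / d ** power
        num += w * zi
        den += w
    return num / den

# Hypothetical echosounder samples: (easting, northing, SAV height in m)
samples = [(0.0, 0.0, 0.4), (10.0, 0.0, 0.9), (0.0, 10.0, 0.2), (10.0, 10.0, 0.6)]
print(idw(samples, 5.0, 5.0))  # interpolated SAV height at a grid node
```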
Abstract:
The shifts in the four-body recombination peaks, due to an effective range correction to the zero-range model close to the unitary limit, are obtained and used to extract the corresponding effective range of a given atomic system. The approach is applied to an ultracold gas of cesium atoms close to broad Feshbach resonances, where deviations of experimental values from universal model predictions are associated with effective range corrections. The effective range correction is extracted with a weighted average given by (3.9 ± 0.8) RvdW, where RvdW is the van der Waals length scale, which is consistent with the van der Waals potential tail for the Cs2 system. The method can be generally applied to other cold atom experimental setups to determine the contribution of the effective range to the tetramer dissociation position. © 2013 American Physical Society.
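The abstract does not state how the weighted average over peaks is formed; a standard choice is inverse-variance weighting, sketched below with hypothetical per-peak values only to illustrate the idea.

```python
# Minimal sketch of an inverse-variance weighted average, a common way to
# combine several estimates into a single value with an uncertainty. Whether
# the study used exactly this weighting is an assumption; the values are
# hypothetical.

import math

def weighted_mean(values, sigmas):
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma_mean = math.sqrt(1.0 / sum(weights))
    return mean, sigma_mean

# Hypothetical effective-range estimates (in units of RvdW) from different
# recombination peaks, each with its own uncertainty.
values = [3.6, 4.1, 4.0]
sigmas = [1.2, 1.0, 1.5]
print(weighted_mean(values, sigmas))  # combined value and its uncertainty
```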
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Graduate program in Chemistry - IQ
Abstract:
Graduate program in Agronomy (Soil Science) - FCAV
Abstract:
The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a 'blind injection challenge' similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data was analysed by GW detection algorithms and 6 of the waveforms were recovered with false alarm rates smaller than 1 in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We find that the strong degeneracy between the mass ratio and the BHs' angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10 M☉ + 10 M☉ (50 M☉ + 50 M☉) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ~15% for 50 M☉ + 50 M☉ BBH coalescences with almost maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.
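One common way to quote a volume-weighted average sensitive distance from a Monte Carlo of injections is to integrate the detection efficiency over volume and report the radius of a sphere with the same sensitive volume; the sketch below assumes that construction and uses a hypothetical efficiency curve, not the NINJA-2 results.

```python
# Minimal sketch of a volume-weighted average sensitive distance estimated from
# an injection Monte Carlo. The detection-efficiency curve is hypothetical.

import numpy as np

def sensitive_distance(distances_mpc, efficiency):
    """Integrate detection efficiency over volume (simple Riemann sum) and
    return the radius of a sphere with the same sensitive volume."""
    dr = distances_mpc[1] - distances_mpc[0]
    v_sens = np.sum(efficiency * 4.0 * np.pi * distances_mpc**2) * dr
    return (3.0 * v_sens / (4.0 * np.pi)) ** (1.0 / 3.0)

# Hypothetical efficiency falling off around 400 Mpc.
d = np.linspace(0.0, 1500.0, 301)
eff = 1.0 / (1.0 + np.exp((d - 400.0) / 80.0))
print(f"volume-weighted sensitive distance ~ {sensitive_distance(d, eff):.0f} Mpc")
```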
Abstract:
We consider model selection uncertainty in linear regression. We study theoretically and by simulation the approach of Buckland and co-workers, who proposed estimating a parameter common to all models under study by taking a weighted average over the models, using weights obtained from information criteria or the bootstrap. This approach is compared with the usual approach in which the 'best' model is used, and with Bayesian model averaging. The weighted predictor behaves similarly to model averaging, with generally more realistic mean-squared errors than the usual model-selection-based estimator.
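As an illustration of the weighting idea rather than the authors' exact estimator, a common construction forms Akaike weights proportional to exp(-ΔAIC/2) over the candidate linear models and averages the coefficient of interest; the data and candidate models below are synthetic.

```python
# Minimal sketch of model averaging with information-criterion weights over
# candidate linear regressions. Data, models, and the convention of setting a
# coefficient to zero in models that exclude it are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 100
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + 0.3 * x2 + rng.normal(scale=1.0, size=n)

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients and a Gaussian AIC."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = X.shape[1] + 1                      # parameters incl. error variance
    aic = n * np.log(rss / n) + 2 * k
    return beta, aic

# Candidate models: x1 only, x2 only, x1 + x2 (all with an intercept).
designs = {
    "x1": np.column_stack([np.ones(n), x1]),
    "x2": np.column_stack([np.ones(n), x2]),
    "x1+x2": np.column_stack([np.ones(n), x1, x2]),
}

aics, estimates = {}, {}
for name, X in designs.items():
    beta, aic = fit_ols(X, y)
    aics[name] = aic
    # Coefficient of x1 in this model (taken as 0 when x1 is absent).
    estimates[name] = beta[1] if "x1" in name.split("+") else 0.0

# Akaike weights: w_i proportional to exp(-delta_AIC_i / 2).
delta = {m: aics[m] - min(aics.values()) for m in aics}
raw = {m: np.exp(-d / 2.0) for m, d in delta.items()}
weights = {m: w / sum(raw.values()) for m, w in raw.items()}

beta1_avg = sum(weights[m] * estimates[m] for m in weights)
print(weights)
print("model-averaged estimate of the x1 coefficient:", beta1_avg)
```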