967 results for Spatial Budgetary Evaluation


Relevance: 100.00%

Publisher:

Abstract:

This paper presents the first detailed investigation of residual organochlorine insecticide (OCI) concentrations in Cochin estuarine sediment. It aims to elucidate their distribution and ecological impact on the aquatic system. Concentrations of persistent organochlorine compounds (OCs) were determined for 17 surface sediment samples collected from specific sites of the Cochin Estuarine System (CES) between November 2009 and November 2011. The contaminant levels in the CES were compared with those of other ecosystems worldwide. Sites bearing high concentrations of organochlorine compounds were associated with complex, low-energy environments. Evaluation of ecotoxicological factors suggests that adverse biological effects are to be expected in certain areas of the CES.

Relevance: 100.00%

Publisher: Cochin University of Science And Technology

Relevance: 90.00%

Publisher:

Abstract:

With movement toward kilometer-scale ensembles, new techniques are needed for their characterization. A new methodology is presented for detailed spatial ensemble characterization using the fractions skill score (FSS). To evaluate spatial forecast differences, the average and standard deviation are taken of the FSS calculated over all ensemble member-member pairs at different scales and lead times. These methods were found to give important information about ensemble behavior, allowing the identification of useful spatial scales, spinup times for the model, and the upscale growth of errors and forecast differences. The ensemble spread was found to be highly dependent on the spatial scales considered and the threshold applied to the field. High thresholds picked out localized and intense values that gave large temporal variability in ensemble spread: local processes and undersampling dominate for these thresholds. For lower thresholds the ensemble spread increases with time as differences between the ensemble members upscale. Two convective cases were investigated based on the Met Office Unified Model run at 2.2-km resolution. Different ensemble types were considered: ensembles produced using the Met Office Global and Regional Ensemble Prediction System (MOGREPS) and an ensemble produced using different model physics configurations. Comparison of the MOGREPS and multiphysics ensembles demonstrated the utility of spatial ensemble evaluation techniques for assessing the impact of different perturbation strategies, and the need for assessing spread at different, believable spatial scales.
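The pairwise evaluation described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: `fractions` computes neighbourhood exceedance fractions, `fss` the fractions skill score for one member pair, and `ensemble_spread` the mean and standard deviation over all member-member pairs. Function names and the tiny grids are hypothetical.

```python
from itertools import combinations
from statistics import mean, pstdev

def fractions(field, threshold, n):
    """Fraction of grid points exceeding threshold in an n x n neighbourhood."""
    rows, cols = len(field), len(field[0])
    binary = [[1 if v >= threshold else 0 for v in row] for row in field]
    half = n // 2
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            total, count = 0, 0
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        total += binary[ii][jj]
                        count += 1
            row.append(total / count)
        out.append(row)
    return out

def fss(a, b, threshold, n):
    """Fractions skill score between two fields at neighbourhood scale n."""
    fa = fractions(a, threshold, n)
    fb = fractions(b, threshold, n)
    cells = [(i, j) for i in range(len(fa)) for j in range(len(fa[0]))]
    num = sum((fa[i][j] - fb[i][j]) ** 2 for i, j in cells)
    den = sum(fa[i][j] ** 2 + fb[i][j] ** 2 for i, j in cells)
    return 1.0 - num / den if den else 1.0

def ensemble_spread(members, threshold, n):
    """Mean and standard deviation of FSS over all member-member pairs."""
    scores = [fss(a, b, threshold, n) for a, b in combinations(members, 2)]
    return mean(scores), pstdev(scores)
```

Repeating the spread calculation over a range of `n` values and thresholds reproduces the scale- and threshold-dependence the abstract describes.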

Relevance: 90.00%

Publisher:

Abstract:

Recent treatment planning studies have demonstrated the use of physiologic images in radiation therapy treatment planning to identify regions for functional avoidance. This image-guided radiotherapy (IGRT) strategy may reduce injury and/or functional loss following thoracic radiotherapy. 4D computed tomography (4D CT), developed for radiotherapy treatment planning, is a relatively new imaging technique that allows the acquisition of a time-varying sequence of 3D CT images of the patient's lungs through the respiratory cycle. Guerrero et al. developed a method to calculate ventilation images from 4D CT, which is potentially better suited and more broadly available for IGRT than the current standard imaging methods. The key to extracting functional information from 4D CT is the construction of a volumetric deformation field that accurately tracks the motion of the patient's lungs during the respiratory cycle. The spatial accuracy of the displacement field directly impacts the ventilation images: higher spatial registration accuracy results in fewer ventilation image artifacts and physiologic inaccuracies. Presently, a consistent methodology for evaluating the spatial accuracy of the deformable image registration (DIR) transformation is lacking. Evaluation of the 4D CT-derived ventilation images will be performed to assess correlation with global measurements of lung ventilation, as well as regional correlation of the distribution of ventilation with the current clinical standard, SPECT. This requires a novel framework both for the detailed assessment of an image registration algorithm's performance characteristics and for quality assurance of spatial accuracy in routine application. Finally, we hypothesize that hypo-ventilated regions, identified on 4D CT ventilation images, will correlate with hypo-perfused regions in lung cancer patients who have obstructive lesions.
A prospective imaging trial of patients with locally advanced non-small-cell lung cancer will allow this hypothesis to be tested. These advances are intended to contribute to the validation and clinical implementation of CT-based ventilation imaging in prospective clinical trials, in which the impact of this imaging method on patient outcomes may be tested.
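A common way to turn a deformation field into a ventilation estimate is the Jacobian determinant: local volume change is det(I + ∇u), so the specific volume change is det(I + ∇u) − 1. The sketch below is a 2D finite-difference illustration of that idea only, not the method described above (which operates on volumetric 4D CT registrations); the function name and grids are hypothetical.

```python
def jacobian_ventilation(ux, uy):
    """Estimate specific volume change det(I + grad u) - 1 on a 2D grid,
    using central differences at interior points (boundary left at zero)."""
    rows, cols = len(ux), len(ux[0])
    vent = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # Central differences of the displacement components.
            dux_dx = (ux[i][j + 1] - ux[i][j - 1]) / 2.0
            dux_dy = (ux[i + 1][j] - ux[i - 1][j]) / 2.0
            duy_dx = (uy[i][j + 1] - uy[i][j - 1]) / 2.0
            duy_dy = (uy[i + 1][j] - uy[i - 1][j]) / 2.0
            # Determinant of the deformation-gradient matrix I + grad(u).
            det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
            vent[i][j] = det - 1.0
    return vent
```

For a uniform 10% expansion in both directions, the local volume change is 1.1 × 1.1 = 1.21, i.e. a specific volume change of 0.21, which the sketch reproduces.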

Relevance: 80.00%

Publisher:

Abstract:

With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of all types of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industry are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, are still worthwhile investigations. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction. A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept in the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
To evaluate the proposed methods, a series of experiments was conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
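Phase 3 — candidate labels from a classifier, then contextual pruning — can be illustrated with a toy greedy sketch. This is not the thesis implementation: the SVM is replaced by precomputed (label, score) candidates, and ontology context by a simple co-occurrence set; all names and data here are hypothetical.

```python
def disambiguate(candidates, cooccur):
    """For each object, pick the highest-scoring candidate label that is
    contextually consistent (co-occurs) with the labels already chosen.

    candidates: list per object of (label, score) pairs, e.g. SVM outputs.
    cooccur:    set of frozenset label pairs allowed to co-occur.
    """
    chosen = []
    for cands in candidates:
        best = None
        for label, score in sorted(cands, key=lambda t: -t[1]):
            # Keep the label only if it is consistent with every prior choice.
            if all(frozenset((label, c)) in cooccur or label == c for c in chosen):
                best = label
                break
        # Fall back to the raw top-scoring label if nothing is consistent.
        chosen.append(best if best is not None else max(cands, key=lambda t: t[1])[0])
    return chosen
```

In the second assertion below, "grass" scores higher than "sand" but is rejected because it does not co-occur with "sea" in the toy context, mirroring how the ontology removes ambiguous concepts.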

Relevance: 80.00%

Publisher:

Abstract:

Fusion of multi-sensor imaging data enables a synergetic interpretation of complementary information obtained by sensors of different spectral ranges. Multi-sensor data of diverse spectral, spatial and temporal resolutions require advanced numerical techniques for analysis and interpretation. This paper reviews ten advanced pixel-based image fusion techniques: component substitution (COS), local mean and variance matching, modified IHS (Intensity Hue Saturation), fast Fourier transform-enhanced IHS, Laplacian pyramid, local regression, smoothing filter (SF), Sparkle, SVHC and synthetic variable ratio. The techniques were tested on IKONOS data (panchromatic band at 1 m spatial resolution and four multispectral bands at 4 m spatial resolution). Evaluation of the fused results through various accuracy measures revealed that the SF and COS methods produce images closest to what the corresponding multispectral sensor would observe at the highest resolution level (1 m).
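Component substitution is the simplest of the listed techniques: replace the intensity component of the (upsampled) multispectral bands with the high-resolution panchromatic band. A naive additive sketch under simplifying assumptions — real implementations first resample the multispectral bands to the pan grid and histogram-match the pan band; the data here are hypothetical:

```python
def component_substitution(ms_bands, pan):
    """Naive component-substitution fusion: treat the per-pixel mean of the
    multispectral bands as the intensity component and swap in the pan band,
    i.e. fused = band + (pan - intensity)."""
    rows, cols = len(pan), len(pan[0])
    fused = []
    for band in ms_bands:
        out = []
        for i in range(rows):
            row = []
            for j in range(cols):
                intensity = sum(b[i][j] for b in ms_bands) / len(ms_bands)
                row.append(band[i][j] + (pan[i][j] - intensity))
            out.append(row)
        fused.append(out)
    return fused
```

Note that the between-band differences (the spectral information) are preserved exactly, while the intensity is taken from the sharper pan image — the trade-off all component-substitution methods make.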

Relevance: 80.00%

Publisher:

Abstract:

Designing a network for monitoring and controlling air quality requires knowledge of the areas where air pollutants, emitted by stationary and mobile sources, tend to concentrate, and of their dispersion phenomena. The definition of air-pollution monitoring areas in the Rio de Janeiro Metropolitan Region (RMRJ) has been discussed since the early 1980s, when air basins were established from topographic charts. This project consists of applied research on establishing the spatial configuration and mapping of the air basins from digital data. The effort is justified by its reach, benefiting both those directly affected and society in general through knowledge of air-quality conditions and their behaviour over time. The study focuses on the Rio de Janeiro Metropolitan Region, based on the data needed to evaluate the dynamics of air masses in the study area and their characteristics, in order to define the new air basins with the support of a Geographic Information System (GIS). Based on digital cartographic data and on the cadastral data of the monitoring stations, a GIS was designed and implemented to meet the requirements of digital mapping of the air basins, of the spatial distribution of the air-quality monitoring stations, of the main pollutant emission sources, and of the main vehicle circulation routes; regions with similar characteristics were identified and mapped for several scenarios of potential GIS use. A previously modelled georeferenced database was created, offering spatial queries suited to environmental management needs. Using the GIS, areas with deficient monitoring coverage and critical air-pollution areas were identified, and new air basins delimited from the digital data were proposed.
The GIS proved to be an efficient tool for the environmental management of air quality in the RMRJ, since it allowed the desk-based representation of the elements needed to evaluate the spatial configuration of the air basins, and provided a dynamic visualization of the spatial distribution of the monitoring stations in the proposed air basins.

Relevance: 80.00%

Publisher:

Abstract:

This study has concentrated on the development of an impact simulation model for use at the sub-national level. The necessity for this model was demonstrated by the growth of local economic initiatives during the 1970s and the lack of monitoring and evaluation exercises to assess their success and cost-effectiveness. The first stage of research confirmed that the potential for micro-economic and spatial initiatives existed, by identifying the existence of involuntary structural unemployment. The second stage examined the range of employment policy options from the macro-economic, micro-economic and spatial perspectives, and focused on the need for evaluation of those policies. The need for spatial impact evaluation exercises in respect of other exogenous shocks and structural changes was also recognised. The final stage involved the investigation of current techniques of evaluation and their adaptation for the purpose in hand. This led to the recognition of a gap in the armoury of techniques. The employment-dependency model has been developed to fill that gap: a low-budget model, capable of implementation at the small-area level, that generates a vast array of industrially disaggregated data in terms of employment, employment-income, profits, value-added and gross income, related to levels of United Kingdom final demand, thus providing scope for a variety of impact simulation exercises.
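Relating industrially disaggregated employment to levels of final demand is, at its core, input-output accounting: gross output x solves x = Ax + d for a technical-coefficients matrix A and final demand d, and employment follows by applying industry employment coefficients. A toy sketch of that accounting under assumed coefficients — the abstract does not specify the model's internals, so this is an illustrative reading, not the study's method:

```python
def leontief_output(A, demand, tol=1e-12, max_iter=10000):
    """Solve x = A x + d by fixed-point iteration, which converges for a
    productive technical-coefficients matrix A (spectral radius < 1)."""
    n = len(demand)
    x = list(demand)
    for _ in range(max_iter):
        new = [demand[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(new[i] - x[i]) for i in range(n)) < tol:
            return new
        x = new
    return x

def employment_dependency(A, demand, emp_per_output):
    """Employment by industry implied by a given level of final demand."""
    x = leontief_output(A, demand)
    return [emp_per_output[i] * x[i] for i in range(len(x))]
```

Re-running `employment_dependency` with a shocked `demand` vector and differencing the results is the kind of impact simulation exercise the abstract describes.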

Relevance: 40.00%

Publisher:

Abstract:

Crop simulation models have the potential to assess the risk associated with the selection of a specific N fertilizer rate by integrating the effects of soil-crop interactions on crop growth under different pedo-climatic and management conditions. The objective of this study was to simulate the environmental and economic impact (nitrate leaching and N2O emissions) of spatially variable N fertilizer application in an irrigated maize field in Italy. The validated SALUS model was run with five nitrogen rate scenarios (50, 100, 150, 200 and 250 kg N ha−1), the last being the N fertilization rate adopted by the farmer. The long-term (25-year) simulations were performed on two previously identified spatially and temporally stable zones, a high-yielding and a low-yielding zone. The simulation results showed that the N fertilizer rate can be reduced without affecting yield and net return. The marginal net return was on average higher for the high-yield zone, with values ranging from 1550 to 2650 € ha−1 for the 200 N and 1485 to 2875 € ha−1 for the 250 N. N leaching varied between 16.4 and 19.3 kg N ha−1 for the 200 N and the 250 N in the high-yield zone. In the low-yield zone, the 250 N had significantly higher N leaching. N2O emissions varied from 0.28 kg N2O ha−1 at the 50 kg N ha−1 rate to a maximum of 1.41 kg N2O ha−1 at the 250 kg N ha−1 rate.
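The economic comparison across N rates reduces to revenue minus fertiliser cost. A sketch with hypothetical yields and prices — none of these numbers come from the study — showing how a lower rate can match or beat the farmer's 250 kg N ha−1 rate once the fertiliser cost is counted:

```python
def net_return(yield_t_ha, price_per_t, n_rate, n_price):
    """Net return per hectare: crop revenue minus N fertiliser cost
    (all other costs assumed equal across rates and omitted)."""
    return yield_t_ha * price_per_t - n_rate * n_price

def best_rate(scenarios, price_per_t, n_price):
    """Return the N rate (kg N/ha) with the highest net return among
    (rate, simulated yield t/ha) scenarios."""
    return max(scenarios, key=lambda s: net_return(s[1], price_per_t, s[0], n_price))[0]

# Hypothetical simulated yields per rate: the response flattens above 150 N.
scenarios = [(50, 9.0), (100, 11.5), (150, 12.8), (200, 13.0), (250, 13.05)]
```

With a yield plateau above 150 N, the extra fertiliser cost outweighs the marginal revenue, so the optimum sits below the farmer's rate — the qualitative pattern the simulations reported.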

Relevance: 40.00%

Publisher:

Abstract:

Systematic studies that evaluate the quality of decision-making processes are relatively rare. Using the literature on decision quality, this research develops a framework to assess the quality of decision-making processes for resolving boundary conflicts in the Philippines. The evaluation framework breaks the decision-making process down into three components (the decision procedure, the decision method, and the decision unit) and is applied to two ex-post cases (one resolved and one unresolved) and one ex-ante case. The evaluation results from the resolved and the unresolved cases show that the choice of decision method plays a minor role in resolving boundary conflicts, whereas the choice of decision procedure is more influential. In the end, a decision unit can choose a simple method to resolve the conflict. The ex-ante case presents a follow-up intended to resolve the unresolved case through a changed decision-making process, in which the associated decision unit plans to apply the spatial multi-criteria evaluation (SMCE) tool as the decision method. The evaluation results from the ex-ante case confirm that SMCE has the potential to enhance decision quality because: a) it provides high quality as a decision method in this changed process, and b) the weaknesses associated with the decision unit and the decision procedure of the unresolved case were found to be eliminated in this process.

Relevance: 40.00%

Publisher:

Abstract:

An early molecular response to DNA double-strand breaks (DSBs) is phosphorylation of the Ser-139 residue within the terminal SQEY motif of the histone H2AX [1,2]. This phosphorylation of H2AX is mediated by the phosphatidylinositol 3-kinase (PI3K) family of proteins: ataxia telangiectasia mutated (ATM), DNA-dependent protein kinase catalytic subunit, and ATM and RAD3-related (ATR) [3]. The phosphorylated form of H2AX, referred to as γH2AX, spreads to adjacent regions of chromatin from the site of the DSB, forming discrete foci, which are easily visualized by immunofluorescence microscopy [3]. Analysis and quantitation of γH2AX foci have been widely used to evaluate DSB formation and repair, particularly in response to ionizing radiation and for evaluating the efficacy of radiation-modifying and cytotoxic compounds. Given the exquisite specificity and sensitivity of this de novo marker of DSBs, it has provided new insights into the processes of DNA damage and repair in the context of chromatin. For example, in radiation biology the central paradigm is that the nuclear DNA is the critical target with respect to radiation sensitivity. Indeed, the general consensus in the field has largely been to view chromatin as a homogeneous template for DNA damage and repair. However, with the use of γH2AX as a molecular marker of DSBs, a disparity in γ-irradiation-induced γH2AX foci formation in euchromatin and heterochromatin has been observed [5-7]. Recently, we used a panel of antibodies to mono-, di- or tri-methylated histone H3 at lysine 9 (H3K9me1, H3K9me2, H3K9me3), which are epigenetic imprints of constitutive heterochromatin and transcriptional silencing, and at lysine 4 (H3K4me1, H3K4me2, H3K4me3), which are tightly correlated with actively transcribing euchromatic regions, to investigate the spatial distribution of γH2AX following ionizing radiation [8].
In accordance with the prevailing ideas regarding chromatin biology, our findings indicated a close correlation between γH2AX formation and active transcription [9]. Here we demonstrate our immunofluorescence method for the detection and quantitation of γH2AX foci in non-adherent cells, with a particular focus on co-localization with other epigenetic markers, image analysis and 3D modeling.
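At its simplest, the foci-quantitation step of such an image-analysis pipeline is thresholding followed by connected-component counting. A stdlib-only sketch under that simplification — real pipelines add background subtraction, size filters, watershed splitting of touching foci, and 3D stacks; the toy image below is hypothetical:

```python
from collections import deque

def count_foci(image, threshold):
    """Count connected bright regions (4-connectivity) at or above an
    intensity threshold -- a minimal stand-in for gamma-H2AX focus counting."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    foci = 0
    for i in range(rows):
        for j in range(cols):
            if image[i][j] >= threshold and not seen[i][j]:
                foci += 1
                # Breadth-first flood fill marks the whole focus as visited.
                q = deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return foci
```

Running the same count on a co-stained channel (e.g. an H3K9me3 image) and intersecting the masks gives a crude co-localization measure of the kind discussed above.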

Relevance: 40.00%

Publisher:

Abstract:

Background: Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences.

Methods: We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions.

Results: The choice of imputation method depends upon the application and is not necessarily the most complex method. Mean imputation was selected as the most accurate method in this application.

Conclusions: Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease, with more confidence in the results to inform public policy decision-making.
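The selection logic — hide known values, impute them, and score the error — can be sketched for mean imputation alone; the multivariate normal and conditional autoregressive alternatives compared in the study would plug in as other `impute` callables. All names and data below are illustrative, not the study's:

```python
import random

def mean_impute(values):
    """Replace each missing value (None) with the mean of the observed values."""
    known = [v for v in values if v is not None]
    m = sum(known) / len(known)
    return [m if v is None else v for v in values]

def cv_impute_error(values, impute, n_folds=5, seed=0):
    """Cross-validation score for an imputation method: repeatedly hide one
    fold of the observed values, impute them, and return the mean absolute
    error against the hidden truth (lower is better)."""
    rng = random.Random(seed)
    observed = [i for i, v in enumerate(values) if v is not None]
    rng.shuffle(observed)
    folds = [observed[k::n_folds] for k in range(n_folds)]
    errors = []
    for fold in folds:
        if not fold:
            continue
        masked = [None if i in fold else v for i, v in enumerate(values)]
        filled = impute(masked)
        errors.extend(abs(filled[i] - values[i]) for i in fold)
    return sum(errors) / len(errors)
```

Computing `cv_impute_error` for each candidate method on the same data and picking the smallest score is the selection step; as the Results note, the simplest method can win.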

Relevance: 40.00%

Publisher:

Abstract:

Reduced economic circumstances have moved management goals towards higher profit, rather than maximum sustainable yield, in several Australian fisheries. The eastern king prawn is one such fishery, for which we have developed new methodology for stock dynamics, calculation of model-based and data-based reference points, and management strategy evaluation. The fishery is notable for the northward movement of prawns in eastern Australian waters, from the State jurisdiction of New South Wales to that of Queensland, as they grow to spawning size, so that vessels fishing in the northern, deeper waters harvest more large prawns. Bioeconomic fishing data were standardized for calibrating a length-structured spatial operating model. Model simulations identified that reduced boat numbers and fishing effort could improve profitability while retaining viable fishing in each jurisdiction. Simulations also identified catch rate levels that were effective for monitoring in simple within-year effort-control rules. However, favourable performance of catch rate indicators was achieved only when a meaningful upper limit was placed on total allowed fishing effort. The methods and findings will allow improved measures for monitoring fisheries and inform decision makers on the uncertainty and assumptions affecting economic indicators.
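A within-year effort-control rule of the kind evaluated can be sketched as: scale allowed effort down when the monitored catch rate falls below a trigger level, while always enforcing an upper limit on total effort (the condition the simulations found necessary for the indicator to perform well). The functional form and all parameters below are hypothetical, not the paper's rule:

```python
def within_year_effort_rule(catch_rate, trigger, effort_cap, base_effort):
    """Allowed fishing effort for the rest of the year: proportional
    reduction below the catch-rate trigger, capped at effort_cap."""
    if catch_rate >= trigger:
        allowed = base_effort
    else:
        # Scale effort down in proportion to the catch-rate shortfall.
        allowed = base_effort * catch_rate / trigger
    return min(allowed, effort_cap)
```

In a management strategy evaluation, this rule would be applied inside the operating-model simulation loop each year, and its performance judged on profit and stock indicators across many simulated trajectories.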