799 results for context-based retrieval
Abstract:
This paper presents a robust Content-Based Video Retrieval (CBVR) system that retrieves similar videos based on a local feature descriptor, SURF (Speeded Up Robust Features). The high dimensionality of SURF-like feature descriptors causes substantial storage consumption when indexing video information. To reduce the dimensionality of the SURF feature descriptor, the system employs a stochastic dimensionality-reduction method and thus provides compact model data for the videos. On retrieval, the model data of the test clip is matched to its similar videos using a minimum-distance classifier. The performance of the system is evaluated using two different minimum-distance classifiers during the retrieval stage. Experimental analyses show that the system achieves a retrieval performance of 78%. The system also analyses the efficiency of the low-dimensional SURF descriptor.
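As an illustration of the pipeline this abstract describes, the following minimal Python sketch indexes videos by stochastically reduced SURF descriptors and retrieves with a minimum-distance rule. The abstract does not name the specific reduction method; Gaussian random projection is used here as a stand-in, and the 64-D descriptor size, 16-D target size and mean-pooling step are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(64, 16)) / np.sqrt(16)   # shared stochastic projection: 64-D SURF -> 16-D

def video_model(surf_descriptors):
    # surf_descriptors: (n_keypoints, 64) array of SURF descriptors for one video.
    # Reduce each descriptor with the shared random projection, then summarise
    # the clip by the mean reduced descriptor to obtain compact "model data".
    reduced = surf_descriptors @ R
    return reduced.mean(axis=0)

def retrieve(test_clip_model, database_models):
    # Minimum-distance classification: assign the test clip to the database
    # video whose model vector is nearest in Euclidean distance.
    dists = np.linalg.norm(np.asarray(database_models) - test_clip_model, axis=1)
    return int(np.argmin(dists))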
Abstract:
Learning content adaptation has been a subject of interest in research on adaptive hypermedia systems. Defining which variables and which standards can be used to model adaptive content-delivery processes is one of the main challenges in pedagogical design for e-learning environments. This paper describes specifications, architectures and technologies that can be used in content-adaptation processes that take characteristics of the context into account, and presents a proposal to integrate some of these characteristics into the design of units of learning, using adaptation conditions within an IMS-Learning Design (IMS-LD) structure. The key contribution of this work is the generation of instructional designs that consider the context, which can be used in Learning Management Systems (LMSs) and on diverse mobile devices.
Abstract:
Circuit testing is a phase of the production process that becomes ever more important when a new product is developed. Test and diagnosis techniques for digital circuits have been successfully developed and automated, whereas this is not yet the case for analog circuits. Among all the methods proposed for diagnosing analog circuits, the most widely used are fault dictionaries. This thesis describes several of them, analysing their advantages and drawbacks. In recent years, Artificial Intelligence techniques have become one of the most important research fields for fault diagnosis. This thesis develops two such techniques in order to cover some of the shortcomings of fault dictionaries. The first proposal is based on building a fuzzy system as an identification tool. The results obtained are quite good, since the fault is located in a high percentage of cases. On the other hand, the success rate is not good enough when the parameter deviation must also be determined. Since fault dictionaries can be seen as a simplified approximation to Case-Based Reasoning (CBR), the second proposal extends fault dictionaries towards a CBR system. The aim is not to give a general solution to the problem but to contribute a new methodology, which improves fault-dictionary diagnosis by adding and adapting new cases so that the dictionary becomes a Case-Based Reasoning system. The structure of the case base is described, as well as the retrieval, reuse, revision and retention tasks, with emphasis on the learning process. Throughout the text, several circuits are used to illustrate the test methods described; in particular, the biquadratic filter is used to evaluate the proposed methodologies, since it is one of the benchmarks proposed in the analog-circuit context. The faults considered are parametric, permanent, independent and single, although the methodology can easily be extrapolated to the diagnosis of multiple and catastrophic faults. The method focuses on testing passive components, although it could also be extended to faults in active ones.
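To make the fault-dictionary-to-CBR idea concrete, here is a minimal Python sketch: diagnosis is a nearest-signature lookup in a fault dictionary, and the CBR extension retains confirmed new cases so the dictionary grows into a case base. The fault labels and signature values are invented for illustration and are not taken from the thesis.

import numpy as np

# Hypothetical fault dictionary for an analog filter: each entry maps a fault
# label to the measurement signature (e.g., gains at several test frequencies)
# obtained by simulating that single parametric fault.
fault_dictionary = {
    "nominal":  np.array([1.00, 0.71, 0.10]),
    "R1 +50%":  np.array([0.80, 0.55, 0.08]),
    "C2 -50%":  np.array([1.10, 0.90, 0.25]),
}

def diagnose(measured):
    # Nearest-signature lookup: the classical fault-dictionary decision rule.
    return min(fault_dictionary, key=lambda f: np.linalg.norm(measured - fault_dictionary[f]))

def retain(measured, confirmed_fault):
    # CBR-style extension: once a diagnosis has been confirmed (revise step),
    # retain the new case so the dictionary grows into a case base over time.
    fault_dictionary[f"{confirmed_fault} (case {len(fault_dictionary)})"] = measured.copy()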
Abstract:
There are still major challenges in the automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually provide no knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
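As a rough illustration of retrieval over a topic map, the sketch below models topics, associations and occurrences as plain Python structures; the field names and the film segment data are assumptions for illustration, not the DREAM project's actual schema.

# Minimal, illustrative topic-map structures: topics carry semantic labels,
# associations relate topics, and occurrences link topics back to segments
# of the multimedia content.
topics = {"t1": "car chase", "t2": "night scene"}
associations = [("t1", "occurs-during", "t2")]
occurrences = {"t1": [("film.mp4", 512.0, 540.5)]}   # (file, start s, end s)

def retrieve_segments(query_topic_name):
    # Retrieval by semantic content: return the media segments attached to
    # topics whose label matches the query.
    ids = [tid for tid, name in topics.items() if name == query_topic_name]
    return [seg for tid in ids for seg in occurrences.get(tid, [])]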
Abstract:
The high complexity of the cloud parameterizations now used in models puts more pressure on observational studies to provide useful means of evaluating them. One approach to the problem put forth in the modelling community is to evaluate under what atmospheric conditions the parameterizations fail to simulate the cloud properties and under what conditions they do a good job. It is the ambition of this paper to characterize the variability of the statistical properties of tropical ice clouds in different tropical "regimes" recently identified in the literature, in order to aid the development of better process-oriented parameterizations in models. For this purpose, the statistical properties of non-precipitating tropical ice clouds over Darwin, Australia, are characterized using ground-based radar-lidar observations from the Atmospheric Radiation Measurement (ARM) Program. The ice cloud properties analysed are the frequency of ice cloud occurrence, the morphological properties (cloud top height and thickness), and the microphysical and radiative properties (ice water content, visible extinction, effective radius, and total concentration). The variability of these tropical ice cloud properties is then studied as a function of the large-scale cloud regimes derived from the International Satellite Cloud Climatology Project (ISCCP), the amplitude and phase of the Madden-Julian Oscillation (MJO), and the large-scale atmospheric regime as derived from a long-term record of radiosonde observations over Darwin. The vertical variability of ice cloud occurrence and microphysical properties is largest in all regimes (typically 1.5 orders of magnitude for ice water content and extinction, a factor of 3 in effective radius, and three orders of magnitude in concentration). 98% of ice clouds in our dataset are characterized by either a small cloud fraction (smaller than 0.3) or a very large cloud fraction (larger than 0.9). In the ice part of the troposphere, three distinct layers characterized by different statistically dominant microphysical processes are identified. The variability of the ice cloud properties as a function of the large-scale atmospheric regime, cloud regime, and MJO phase is large, producing mean differences of up to a factor of 8 in the frequency of ice cloud occurrence between large-scale atmospheric regimes, and mean differences of typically a factor of 2 in all microphysical properties. Finally, the diurnal cycle of the frequency of occurrence of ice clouds is also very different between regimes and MJO phases, with diurnal amplitudes of the vertically-integrated frequency of ice cloud occurrence ranging from as low as 0.2 (weak diurnal amplitude) to values in excess of 2.0 (very large diurnal amplitude). Modellers should now use these results to check whether their cloud parameterizations are capable of translating a given atmospheric forcing into the correct statistical ice cloud properties.
Abstract:
Anaerobic digestion (AD) technologies convert organic wastes and crops into methane-rich biogas for heating, electricity generation and vehicle fuel. Farm-based AD has proliferated in some EU countries, driven by favourable policies promoting sustainable energy generation and GHG mitigation. Despite increased state support, there are still few AD plants on UK farms, leading to a lack of normative data on the viability of AD in the whole-farm context. Farmers and lenders are therefore reluctant to fund AD projects, and policy makers are hampered in their attempts to design policies that adequately support the industry. Existing AD studies and modelling tools do not adequately capture the farm context within which AD operates. This paper demonstrates a whole-farm optimisation modelling approach to assess the viability of AD in a more holistic way, accounting for such issues as AD scale, synergies and conflicts with other farm enterprises, choice of feedstocks, digestate use, and impact on farm Net Margin. This modelling approach demonstrates, for example, that AD is complementary to dairy enterprises but competes with arable enterprises for farm resources. Reduced nutrient purchases significantly improve Net Margin on arable farms, but AD scale is constrained by the capacity of farmland to absorb nutrients in AD digestate.
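A toy version of such a whole-farm optimisation can be expressed as a linear programme. The sketch below, with entirely invented coefficients, maximises Net Margin over wheat area, maize-for-feedstock area and AD output, subject to land and digestate nutrient-absorption constraints; it is a minimal illustration of the modelling approach, not the paper's model.

from scipy.optimize import linprog

# Variables: x = [wheat_ha, maize_ha, ad_MWh] (all coefficients invented).
c = [-600.0, -150.0, -80.0]        # negated per-unit margins (linprog minimises)
A_ub = [[1.0, 1.0, 0.0],           # land: wheat_ha + maize_ha <= 200 ha
        [-5.0, -5.0, 0.4]]         # digestate N (0.4 units/MWh) <= land absorption (5 units/ha)
b_ub = [200.0, 0.0]
A_eq = [[0.0, -12.0, 1.0]]         # AD output tied to feedstock: 12 MWh per maize ha
b_eq = [0.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x, -res.fun)             # optimal cropping/AD plan and its Net Margin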
Abstract:
The retrieval (estimation) of sea surface temperatures (SSTs) from space-based infrared observations is increasingly performed using retrieval coefficients derived from radiative transfer simulations of top-of-atmosphere brightness temperatures (BTs). Typically, an estimate of SST is formed from a weighted combination of BTs at a few wavelengths, plus an offset. This paper addresses two questions about the radiative transfer modeling approach to deriving these weighting and offset coefficients. How precisely specified do the coefficients need to be in order to obtain the required SST accuracy (e.g., scatter <0.3 K in week-average SST, bias <0.1 K)? And how precisely is it actually possible to specify them using current forward models? The conclusions are that weighting coefficients can be obtained with adequate precision, while the offset coefficient will often require an empirical adjustment of the order of a few tenths of a kelvin against validation data. Thus, a rational approach to defining retrieval coefficients is one of radiative transfer modeling followed by offset adjustment. The need for this approach is illustrated from experience in defining SST retrieval schemes for operational meteorological satellites. A strategy is described for obtaining the required offset adjustment, and the paper highlights some of the subtler aspects involved with reference to the example of SST retrievals from the imager on the geostationary satellite GOES-8.
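The retrieval form described here, an offset plus a weighted combination of BTs, suggests the following minimal Python sketch: fit the weights and offset by least squares to radiative-transfer simulations, then empirically adjust the offset against validation data. Array shapes and function names are assumptions for illustration.

import numpy as np

def fit_retrieval_coefficients(bts_sim, sst_sim):
    # bts_sim: (n, k) simulated top-of-atmosphere BTs at k channels;
    # sst_sim: (n,) SSTs used in the radiative transfer simulations.
    # Least-squares fit of SST ~ a0 + sum_i a_i * BT_i.
    A = np.column_stack([np.ones(len(sst_sim)), bts_sim])
    coeffs, *_ = np.linalg.lstsq(A, sst_sim, rcond=None)
    return coeffs                  # [offset, w1, ..., wk]

def apply_retrieval(coeffs, bts):
    return coeffs[0] + bts @ coeffs[1:]

def adjust_offset(coeffs, bts_val, sst_val):
    # Empirical offset adjustment (typically a few tenths of a kelvin):
    # shift a0 so that the mean bias against validation data vanishes.
    bias = np.mean(apply_retrieval(coeffs, bts_val) - sst_val)
    adjusted = coeffs.copy()
    adjusted[0] -= bias
    return adjusted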
Abstract:
We establish a methodology for calculating uncertainties in sea surface temperature (SST) estimates from coefficient-based satellite retrievals. The uncertainty estimates are derived independently of in-situ data, which enables validation of both the retrieved SSTs and their uncertainty estimates against in-situ data records. The total uncertainty budget comprises a number of components, arising from uncorrelated (e.g., noise), locally systematic (e.g., atmospheric), large-scale systematic and sampling effects (for gridded products). Distinguishing these components matters when propagating uncertainty across spatio-temporal scales. We apply the method to SST data retrieved from the Advanced Along-Track Scanning Radiometer (AATSR) and validate the results for two different SST retrieval algorithms, both at the per-pixel level and for gridded data. We find good agreement between our estimated uncertainties and validation data. This approach to calculating uncertainties in SST retrievals is also applicable to data from other instruments and to the retrieval of other geophysical variables.
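A minimal sketch of the kind of propagation described here, assuming independent components that add in quadrature: per pixel, all components combine directly; in a grid-cell average, uncorrelated effects reduce as 1/sqrt(n), locally systematic effects reduce only over the effectively independent samples n_eff, and large-scale systematic effects do not reduce at all. This is illustrative, not the paper's exact budget.

import numpy as np

def pixel_uncertainty(u_random, u_local_sys, u_large_sys):
    # Total per-pixel uncertainty: independent components add in quadrature.
    return np.sqrt(u_random**2 + u_local_sys**2 + u_large_sys**2)

def gridded_uncertainty(u_random, u_local_sys, u_large_sys, n, n_eff):
    # Propagation to an n-pixel grid-cell average: uncorrelated (noise) effects
    # average down as 1/sqrt(n); locally systematic (e.g., atmospheric) effects
    # are correlated over synoptic scales, so they average down only over the
    # n_eff effectively independent samples; large-scale systematic effects do
    # not average down at all.
    return np.sqrt(u_random**2 / n + u_local_sys**2 / n_eff + u_large_sys**2)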
Abstract:
Successful classification, information retrieval and image analysis tools are intimately related to the quality of the features employed in the process. Pixel intensities, color, texture and shape are, in general, the basis from which most features are computed and used in these fields. This paper presents a novel shape-based feature extraction approach in which an image is decomposed into multiple contours, which are then characterized by Fourier descriptors. Unlike traditional approaches, we make use of topological knowledge to generate well-defined closed contours, which are efficient signatures for image retrieval. The method has been evaluated in the contexts of CBIR and image analysis. The results show that the multi-contour decomposition, as opposed to a single shape representation, introduces a significant improvement in discrimination power.
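A minimal Python sketch of Fourier descriptors for one closed contour, and of the resulting multi-contour signature, is given below; the normalisation choices (dropping the DC term, dividing by the first harmonic, keeping magnitudes only) are the standard ones, and the coefficient count is an assumption.

import numpy as np

def fourier_descriptors(contour, n_coeffs=16):
    # contour: (n, 2) array of (x, y) points along one closed contour,
    # with n > n_coeffs. Encode the boundary as a complex signal and take
    # its FFT. Dropping the DC term gives translation invariance, dividing
    # by the first harmonic's magnitude gives scale invariance, and keeping
    # only magnitudes discards the starting point and rotation phase.
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    return np.abs(F[1:n_coeffs + 1]) / (np.abs(F[1]) + 1e-12)

def contour_set_signature(contours, n_coeffs=16):
    # Multi-contour decomposition: the image signature is the collection of
    # per-contour descriptors rather than a single global shape vector.
    return np.stack([fourier_descriptors(c, n_coeffs) for c in contours])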
Abstract:
Texture is one of the most important visual attributes used in image analysis. It is used in many content-based image retrieval systems, where it allows the identification of a larger number of images from distinct origins. This paper presents a novel approach to image analysis and retrieval based on complexity analysis. The approach consists of a texture-segmentation step, performed through complexity analysis via the box-counting fractal dimension, followed by the estimation of the complexity of each computed region via the multiscale fractal dimension. Experiments have been performed on an MRI database in both pattern recognition and image retrieval contexts. The results show the accuracy of the method and also indicate how performance changes as the texture-segmentation process is altered.
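For reference, a minimal box-counting estimate of fractal dimension on a binary image can be sketched as follows: count the boxes that contain foreground at a sequence of halving box sizes, then fit the slope of log N(s) against log(1/s). This is the generic estimator, not necessarily the paper's exact implementation.

import numpy as np

def box_counting_dimension(binary_image):
    # binary_image: 2-D boolean array with at least one foreground pixel.
    sizes, counts = [], []
    s = min(binary_image.shape) // 2
    while s >= 1:
        # Count boxes of side s that contain any foreground pixel.
        count = 0
        for i in range(0, binary_image.shape[0], s):
            for j in range(0, binary_image.shape[1], s):
                if binary_image[i:i + s, j:j + s].any():
                    count += 1
        sizes.append(s)
        counts.append(count)
        s //= 2
    # Fractal dimension is the slope of log N(s) versus log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope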
Abstract:
Background: In the Neonatal health – Knowledge into Practice (NeoKIP) trial in Vietnam, local stakeholder groups, supported by trained laywomen acting as facilitators, promoted knowledge translation (KT), resulting in decreased neonatal mortality. In general, as well as in the community-based NeoKIP trial, there is a need to further understand how context influences KT interventions in low- and middle-income countries (LMICs). Thus, the objective of this study was to explore the influence of context on the facilitation process in the NeoKIP intervention. Methods: A secondary content analysis was performed on 16 focus group discussions with facilitators and participants of the stakeholder groups, applying an inductive approach to the content on context through naïve understanding and structured analysis. Results: The three main categories of context found to influence the facilitation process in the NeoKIP intervention were: (1) support and collaboration of local authorities and other communal stakeholders; (2) incentives to, and motivation of, participants; and (3) low healthcare coverage and utilization. In particular, the role of local authorities in a KT intervention was recognized as important. Also, while project participants expected financial incentives, non-financial benefits such as individual learning were considered to balance the lack of reimbursement in the NeoKIP intervention. Further, project participants recognized the need to acknowledge the needs of disadvantaged groups. Conclusions: This study provides insight for further understanding of the influence of contextual aspects on the effects of a KT intervention in Vietnam. We suggest that future KT interventions should apply strategies to improve local authorities' engagement, to identify and communicate non-financial incentives, and to make disadvantaged groups a priority. Further studies to evaluate the contextual aspects of KT interventions in LMICs are also needed.
Abstract:
Background: The gap between what is known and what is practiced results in health service users not benefitting from advances in healthcare, and in unnecessary costs. A supportive context is considered a key element for successful implementation of evidence-based practices (EBPs). No tools were available for systematically mapping the aspects of organizational context that influence the implementation of EBPs in low- and middle-income countries (LMICs). Thus, this project aimed to develop and psychometrically validate a tool for this purpose. Methods: The development of the Context Assessment for Community Health (COACH) tool was premised on the context dimension of the Promoting Action on Research Implementation in Health Services framework, and the tool is a derivative product of the Alberta Context Tool. Its development was undertaken in Bangladesh, Vietnam, Uganda, South Africa and Nicaragua in six phases: (1) defining dimensions and developing a draft tool, (2) content validation by in-country expert panels, (3) content validation by international experts, (4) response-process validation, (5) translation and (6) evaluation of psychometric properties among 690 health workers in the five countries. Results: The tool was validated for use among physicians, nurses/midwives and community health workers. The six phases of development resulted in a good fit between the theoretical dimensions of the COACH tool and its psychometric properties. The tool has 49 items measuring eight aspects of context: Resources, Community engagement, Commitment to work, Informal payment, Leadership, Work culture, Monitoring services for action and Sources of knowledge. Conclusions: Aspects of organizational context identified as influencing the implementation of EBPs in high-income settings were also found to be relevant in LMICs. However, there were additional aspects of context of relevance in LMICs, specifically Resources, Community engagement, Commitment to work and Informal payment. Use of the COACH tool will allow systematic description of the local healthcare context prior to implementing healthcare interventions, so that implementation strategies can be tailored, or as part of the evaluation of implemented healthcare interventions, thus allowing deeper insights into the process of implementing EBPs in LMICs.
Abstract:
Recent work has begun exploring the characterization and utilization of provenance in systems based on the Service Oriented Architecture (such as Web Services and Grid-based environments). One of the salient issues related to the use of provenance within any given system is its security. In a broad sense, security requirements arise within any data archival and retrieval system; however, provenance presents unique requirements of its own. These requirements also depend on the architectural and environmental context in which a provenance system operates. We seek to analyze the security considerations pertaining to a provenance system based on a Service Oriented Architecture. Towards this end, we describe the components of such a system and illustrate the security considerations that arise within it. Concurrently, we outline possible approaches to addressing them.
Abstract:
Microwave remote sensing has high potential for soil moisture retrieval. However, efficient retrieval of soil moisture depends on an optimal choice of the retrieval parameters. In this study, an initial evaluation of the SMOS L2 product is first performed, and then four approaches to soil moisture retrieval from SMOS brightness temperature are reported. The tau-omega model, based on the radiative transfer equation, is used for the soil moisture retrievals. The single-channel algorithm (SCA) using H polarisation is implemented with modifications, which include effective temperatures simulated from ECMWF (downscaled using the WRF-NOAH Land Surface Model (LSM)) and MODIS. The retrieved soil moisture is then used for soil moisture deficit (SMD) estimation through empirical relationships, with the Probability Distributed Model based SMD as a benchmark. The squared correlation during calibration is R2 = 0.359 for approach 4 (WRF-NOAH LSM based LST with optimized roughness parameters), followed by approach 2 (optimized roughness parameters and MODIS-based LST; R2 = 0.293), approach 3 (WRF-NOAH LSM based LST with no optimization; R2 = 0.267) and approach 1 (MODIS-based LST with no optimization; R2 = 0.163). During validation, the highest performance is likewise achieved by approach 4, with the other approaches following a trend similar to calibration. All performances are depicted in a Taylor diagram, which indicates that H polarisation using ECMWF-based LST gives better performance for SMD estimation than the original SMOS L2 products at the catchment scale.
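For reference, the tau-omega model mentioned here is commonly written as follows (a standard zeroth-order form; the paper's exact parameterisation may differ):

\[
T_{B,p} = T_s\,e_p\,\gamma \;+\; T_c\,(1-\omega)\,(1-\gamma)\,(1 + r_p\,\gamma),
\qquad \gamma = \exp\!\left(-\tau/\cos\theta\right), \quad r_p = 1 - e_p
\]

where \(T_s\) and \(T_c\) are the effective soil and canopy temperatures, \(e_p\) the soil emissivity at polarisation \(p\), \(r_p\) the soil reflectivity, \(\tau\) the vegetation optical depth, \(\omega\) the single-scattering albedo and \(\theta\) the incidence angle. Soil moisture enters through \(e_p\) via a soil dielectric model, which is why the effective temperature and roughness choices compared in this study matter for the retrieval.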