864 results for consistency in indexing
Abstract:
Perfect information is seldom available to man or machine, due to the uncertainties inherent in real-world problems. Uncertainties in geographic information systems (GIS) stem from vague/ambiguous or imprecise/inaccurate/incomplete information, and GIS therefore needs tools and techniques to manage them. There is widespread agreement in the GIS community that, although GIS has the potential to support a wide range of spatial data analysis problems, this potential is often hindered by a lack of consistency and uniformity. Uncertainties come in many shapes and forms, and processing uncertain spatial data requires a practical taxonomy to aid decision makers in choosing the most suitable data modeling and analysis method. In this paper, we: (1) review important developments in handling uncertainties when working with spatial data and GIS applications; (2) propose a taxonomy of models for dealing with uncertainties in GIS; and (3) identify current challenges and future research directions in spatial data analysis and GIS for managing uncertainties.
Abstract:
Quantifying the similarity between two trajectories is a fundamental operation in the analysis of spatio-temporal databases. While a number of distance functions exist, the recent shift in the dynamics of the trajectory generation procedure violates one of their core assumptions: a consistent and uniform sampling rate. In this paper, we formulate a robust distance function called Edit Distance with Projections (EDwP) to match trajectories under inconsistent and variable sampling rates through dynamic interpolation. This is achieved by deploying the idea of projections, which goes beyond matching only the sampled points while aligning trajectories. To enable efficient trajectory retrievals using EDwP, we design an index structure called TrajTree. TrajTree derives its pruning power from a unique combination of bounding boxes and Lipschitz embedding. Extensive experiments on real trajectory databases demonstrate EDwP to be up to 5 times more accurate than state-of-the-art distance functions. Additionally, TrajTree increases the efficiency of trajectory retrievals by up to an order of magnitude over existing techniques.
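As a rough, hypothetical illustration of the projection idea (a simplification of ours, not the authors' EDwP algorithm), the sketch below aligns each sampled point of one trajectory with its orthogonal projection onto the nearest segment of the other, so that matching is not restricted to the sampled points themselves; all function names are invented for the example.

    import numpy as np

    def project_to_segment(p, a, b):
        # Orthogonal projection of point p onto the segment [a, b].
        ab = b - a
        denom = float(ab @ ab)
        t = 0.0 if denom == 0.0 else float(np.clip((p - a) @ ab / denom, 0.0, 1.0))
        return a + t * ab

    def projection_distance(traj_a, traj_b):
        # Mean distance from each point of traj_a to its nearest projection
        # on the polyline traj_b (illustrative only, not EDwP itself).
        traj_a, traj_b = np.asarray(traj_a, float), np.asarray(traj_b, float)
        dists = []
        for p in traj_a:
            best = min(np.linalg.norm(p - project_to_segment(p, traj_b[i], traj_b[i + 1]))
                       for i in range(len(traj_b) - 1))
            dists.append(best)
        return float(np.mean(dists))

    def symmetric_projection_distance(traj_a, traj_b):
        # Symmetrise so the measure does not depend on the argument order.
        return 0.5 * (projection_distance(traj_a, traj_b) +
                      projection_distance(traj_b, traj_a))

    # Two samplings of roughly the same route at very different rates.
    dense = [(0, 0), (1, 0.1), (2, 0.0), (3, 0.1), (4, 0)]
    sparse = [(0, 0), (4, 0)]
    print(symmetric_projection_distance(dense, sparse))

Unlike this sketch, EDwP is an edit distance with dynamic interpolation and associated edit costs, but projecting onto segments rather than matching only sampled points is what decouples the comparison from the sampling rate.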
Abstract:
We propose and advocate basic principles for the fusion of incomplete or uncertain information items, which should apply regardless of the formalism adopted for representing pieces of information coming from several sources. This formalism can be based on sets, logic, partial orders, possibility theory, belief functions or imprecise probabilities. We propose a general notion of information item representing incomplete or uncertain information about the values of an entity of interest. It is supposed to rank such values in terms of relative plausibility and to explicitly point out impossible values. Basic issues affecting the results of the fusion process, such as the relative information content and consistency of information items, as well as their mutual consistency, are discussed. For each representation setting, we present fusion rules that obey our principles and compare them with postulates specific to the representation proposed in the past. In the crudest (Boolean) representation setting (using a set of possible values), we show that whether the set is understood as the most plausible values or as the non-impossible ones matters for choosing a relevant fusion rule. In particular, in the latter case our principles justify the method of maximal consistent subsets, while the former is related to the fusion of logical bases. Then we consider several formal settings for incomplete or uncertain information items in which our postulates are instantiated: plausibility orderings, qualitative and quantitative possibility distributions, belief functions and convex sets of probabilities. The aim of this paper is to provide a unified picture of fusion rules across various uncertainty representation settings.
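As a concrete, minimal sketch of the method of maximal consistent subsets in the Boolean (set-based) setting (our own toy example, not code from the paper): each source reports a set of non-impossible values, a group of sources is consistent when their sets intersect, and the fused result is the union of the intersections of the maximal consistent groups.

    from itertools import combinations

    def maximal_consistent_subsets(sources):
        # Indices of maximal groups of sources whose sets intersect.
        n = len(sources)
        maximal = []
        for r in range(n, 0, -1):                      # largest groups first
            for group in combinations(range(n), r):
                inter = set.intersection(*(sources[i] for i in group))
                if inter and not any(set(group) <= set(g) for g in maximal):
                    maximal.append(group)
        return maximal

    def fuse_by_mcs(sources):
        # Union of the intersections of the maximal consistent subsets.
        fused = set()
        for group in maximal_consistent_subsets(sources):
            fused |= set.intersection(*(sources[i] for i in group))
        return fused

    # Three sources reporting non-impossible values for the same entity.
    sources = [{1, 2, 3}, {2, 3, 4}, {5, 6}]
    print(maximal_consistent_subsets(sources))   # [(0, 1), (2,)]
    print(fuse_by_mcs(sources))                  # {2, 3, 5, 6}

The brute-force enumeration is only intended for a handful of sources; it merely illustrates the fusion rule the abstract refers to.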
Abstract:
A number of studies have recently investigated personality traits in non-human species, with the dog gaining popularity as a subject species for research in this area. Recent research has shown the consistency of personality traits across both context and time for adult dogs, both when using questionnaire-based methods of investigation and when using behavioural analyses of the dogs' behaviour. However, only a few studies have assessed the correspondence between these two methods, with results varying considerably across studies. Furthermore, most studies have focused on adult dogs, despite the fact that an understanding of personality traits in young puppies may be important for research focusing on the genetic basis of personality traits. In the current study, we sought to evaluate the correspondence between a questionnaire-based method and in-depth analyses of the behaviour of 2-month-old puppies in an open-field test in which a number of both social and non-social stimuli were presented to the subjects. We further evaluated consistency of traits over time by re-testing a subset of puppies. The correspondence between methods was high and test-retest consistency (for the main trait) was also good using both evaluation methods. Results showed clear factors referring to the two main personality traits 'extroversion' (i.e., the enthusiastic, exuberant approach to the stimuli) and 'neuroticism' (i.e., the more cautious and fearful approach to the stimuli), potentially similar to the shyness-boldness dimension found in previous studies. Furthermore, both methods identified an 'amicability' dimension, expressing the positive interactions the pups directed at the human stranger, and a 'reservedness' dimension which identified pups who largely chose not to interact with the stimuli and were defined as quiet and not nosey in the questionnaire.
Abstract:
This investigation focused on the development, testing and validation of methodologies for mercury fractionation and speciation in soil and sediment. After an exhaustive review of the literature, several methods were chosen and tested on well-characterised soil and sediment samples. Sequential extraction procedures that divide mercury into fractions according to their mobility and potential availability in the environment were investigated. The efficiency of different solvents for the fractionation of mercury was evaluated, as well as the adequacy of different analytical instruments for the quantification of mercury in the extracts. Kinetic experiments to establish the equilibrium time for mercury release from soil or sediment were also performed. It was found that in the studied areas only a very small percentage of mercury is present as mobile species, that mobility is associated with higher aluminium and manganese contents, and that high contents of organic matter and sulfur result in mercury tightly bound to the matrix. Sandy soils tend to release mercury faster than clayey soils; therefore, the texture of the soil or sediment has a strong influence on the mobility of mercury. It also became clear that analytical techniques for the quantification of mercury need to be further developed, with lower quantification limits, particularly for the less concentrated fractions: water-soluble and exchangeable. Although the results provided a better understanding of the distribution of mercury in the sample, the complexity of the procedure limits its applicability and robustness. A proficiency-testing scheme targeting total mercury determination in soil, sediment, fish and human hair was organised in order to evaluate the consistency of results obtained by different laboratories applying their routine methods to the same test samples. Additionally, single extractions with 1 mol L-1 ammonium acetate solution, 0.1 mol L-1 HCl and 0.1 mol L-1 CaCl2, as well as extraction of the organometallic fraction, were proposed for soil; the last was also suggested for sediment and fish. This study was important to update the knowledge on analytical techniques being used for mercury quantification and the associated problems and sources of error, to improve and standardize mercury extraction techniques, and to implement effective strategies for quality control in mercury determination. A different, “non chemical-like” method for mercury species identification was developed, optimised and validated, based on the thermo-desorption of the different mercury species. Compared to conventional extraction procedures, this method has advantages: it requires little to no sample treatment; a complete identification of the species present is obtained in less than two hours; mercury losses are almost negligible; it can be considered “clean”, as no residues are produced; and worldwide comparison of results is easier and more reliable, an important step towards the validation of the method. Therefore, the main deliverables of this PhD thesis are improved knowledge of analytical procedures for the identification and quantification of mercury species in soils and sediments, as well as a better understanding of the factors controlling the behaviour of mercury in these matrices.
Abstract:
This paper describes the development of a generic tool for dynamic cost indexing (DCI), which encompasses the ability to manage flight delay costs on a dynamic basis, trading accelerated fuel burn against the 'cost of time'. Many airlines face significant barriers to identifying which costs should be included in 'cost of time' calculations and how to quantify them. The need is highlighted to integrate historical passenger delay and policy data with real-time passenger connections data. The absence of industry standards for defining and interfacing the necessary tools is recognised. Delay recovery decision windows and ATC cooperation are key constraints. DCI tools could also be used in the pre-departure phase and may offer environmental decision-support functionality, which could serve as a differentiating technology required for access to designated, future 'green' airspace. Short-term opportunities for saving fuel and/or reducing emissions are also identified.
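As a purely hypothetical illustration of the underlying trade-off (the numbers, names and cost model below are invented, not taken from the tool described here), a dynamic cost indexing decision can be reduced to comparing the marginal fuel cost of recovering a minute of delay with the estimated 'cost of time' of that minute:

    # Hypothetical DCI-style trade-off: recover delay only while the marginal
    # cost of the extra fuel burned to fly faster stays below the marginal
    # 'cost of time' (passenger connections, crew, maintenance, ...).

    def minutes_worth_recovering(delay_min, cost_of_time_per_min,
                                 extra_fuel_kg_per_min, fuel_price_per_kg,
                                 max_recoverable_min):
        # All inputs are illustrative assumptions; a real DCI tool would draw
        # them from airline cost models and real-time connections data.
        marginal_fuel_cost = extra_fuel_kg_per_min * fuel_price_per_kg
        if marginal_fuel_cost >= cost_of_time_per_min:
            return 0                  # speeding up costs more than it saves
        return min(delay_min, max_recoverable_min)

    # Invented example: 18 min late, 60 EUR/min cost of time, 25 kg extra fuel
    # per recovered minute at 0.85 EUR/kg, at most 10 min recoverable en route.
    print(minutes_worth_recovering(18, 60.0, 25.0, 0.85, 10))   # -> 10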
Abstract:
The design of a decision-support prototype tool for managing flight delay costs in the pre-departure and airborne phases of a flight is described. The tool trades accelerated fuel burn and emissions charges against the 'cost of time'. Costs for all major 'cost of time' components are derived for three cost scenarios, twelve aircraft types and a range of delay magnitudes. Short-term opportunities for saving fuel and/or reducing environmental impacts are identified. A shift in ATM from managing delay minutes to managing delay cost is also supported.
Abstract:
Senior thesis written for Oceanography 445
Abstract:
Thesis (Master's)--University of Washington, 2015
Abstract:
The law regulating the availability of abortion is problematic both legally and morally. It is dogmatic in its requirements of women and doctors, and ignorant of would-be fathers. In practice its application is liberal: with s1(1)(a) of the Abortion Act 1967 treated as a 'catch-all' ground, it allows abortion on demand. Yet this is not reflected in the 'law'. Against this outdated legislation I propose a model of autonomy which seeks to tether our moral concerns to a new legal approach to abortion. I do so by maintaining that a legal conception of autonomy is derivable from the categorical imperative resulting from Gewirth's argument to the Principle of Generic Consistency: Act in accordance with the generic rights of your recipients as well as of yourself. This model of Gewirthian Rational Autonomy, I suggest, provides a guide for both public and private notions of autonomy and for how our autonomous interests can be balanced across social structures in order to legitimately empower choice. I claim, ultimately, that the relevant rights in the context of abortion are derivable from this model.
Abstract:
The paper concerns the moral status of persons for the purposes of rights-holding and duty-bearing. Developing from Gewirth's argument to the Principle of Generic Consistency (PGC) and Beyleveld et al.'s Principle of Precautionary Reasoning, I argue in favour of a capacity-based assessment of the task competencies required for choice-rights and certain duties (within the Hohfeldian analytic). Unlike other, more traditional theories of rights, I claim that precautionary reasoning as to agentic status provides the base justification for rights-holding. If this is the basis for generic legal rights, then the contingent argument must be used to explain communities of rights. Much in the same way as two 'normal' adult agents may not have equal rights to be an aeroplane pilot, not all adults hold the same task competencies in relation to the exercise of the generic rights to freedom derived from the PGC. In this paper, I set out to consider the rights held by children, persons suffering from mental illness, and generic 'full' agents. In mapping the developing 'portfolio' of rights and duties that a person carries during their life, we might better understand the legal relations of those who do not ostensibly fulfil the criteria of a 'full' agent.
Abstract:
In the Sparse Point Representation (SPR) method, the principle is to retain the function data indicated by significant interpolatory wavelet coefficients, which are defined as interpolation errors by means of an interpolating subdivision scheme. Typically, an SPR grid is coarse in smooth regions and refined close to irregularities. Furthermore, the computation of partial derivatives of a function from its SPR content is performed in two steps. The first is a refinement procedure that extends the SPR by including new interpolated point values in a security zone. Then, for points in the refined grid, the derivatives are approximated by uniform finite differences, using a step size proportional to each point's local scale. If required neighboring stencils are not present in the grid, the corresponding missing point values are approximated from coarser scales using the interpolating subdivision scheme. Using the cubic interpolating subdivision scheme, we demonstrate that such adaptive finite differences can be formulated in terms of a collocation scheme based on the wavelet expansion associated with the SPR. For this purpose, we prove some results concerning the local behavior of such wavelet reconstruction operators, which hold for SPR grids having appropriate structures. This statement implies that the adaptive finite difference scheme and the one using the step size of the finest level produce the same result at SPR grid points. Consequently, in addition to the refinement strategy, our analysis indicates that some care must be taken concerning the grid structure in order to keep the truncation error under a certain accuracy limit. Illustrative results are presented for numerical solutions of the 2D Maxwell equations.
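As a minimal sketch of the first ingredient described above (interpolatory wavelet coefficients defined as interpolation errors of a cubic interpolating subdivision scheme), the code below computes the detail coefficients between two dyadic levels and thresholds them to mark where an SPR grid would be refined. It is a one-dimensional toy with a simplified treatment of the boundary stencils, not the authors' implementation.

    import numpy as np

    # Cubic (4-point Deslauriers-Dubuc) prediction of a midpoint value from the
    # four nearest coarse-grid samples.
    W = np.array([-1.0, 9.0, 9.0, -1.0]) / 16.0

    def interpolatory_wavelet_coeffs(f_fine):
        # Interpolation errors at the odd (midpoint) positions of a dyadic grid:
        # d_k = f(x_odd) - cubic prediction from the even (coarse) samples.
        f_coarse = f_fine[::2]
        n = len(f_coarse)
        details = np.empty(n - 1)
        for k in range(n - 1):
            # Clamp the stencil at the boundary (a real scheme would switch to
            # one-sided cubic stencils there).
            idx = np.clip([k - 1, k, k + 1, k + 2], 0, n - 1)
            details[k] = f_fine[2 * k + 1] - W @ f_coarse[idx]
        return details

    # A function that is smooth except for a sharp transition near x = 0.5:
    # significant coefficients (hence SPR refinement) concentrate there.
    x = np.linspace(0.0, 1.0, 2**8 + 1)
    f = np.tanh(50.0 * (x - 0.5))
    d = interpolatory_wavelet_coeffs(f)
    significant = np.abs(d) > 1e-3
    print(significant.sum(), "of", d.size, "midpoints retained")

Derivative approximation on the retained points, with step sizes tied to each point's local scale, would then follow the two-step procedure the abstract describes.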
Abstract:
The premise of this paper is that a model for communicating the national value system must start from a strategy aimed at identifying, cultivating and communicating the values that give consistency to that system. The analysis concentrates on the elements of such strategies and on the implications of applying a value communication program for the identity architecture of the community. The paper also discusses the role of the national value system in the context of the emerging global culture, where the individual has the power to create his/her own hybrid cultural model.
Abstract:
In this paper, we present some of the fault tolerance management mechanisms being implemented in the Multi-μ architecture, namely its support for replica non-determinism. In this architecture, fault tolerance is achieved by node active replication, with software-based replica management and transparent fault tolerance algorithms. A software layer implemented between the application and the real-time kernel, the Fault Tolerance Manager (FTManager), is responsible for the transparent incorporation of the fault tolerance mechanisms. The active replication model can be implemented either by imposing replica determinism or by keeping replica consistency at critical points, by means of interactive agreement mechanisms. One of the Multi-μ architecture's goals is to identify such critical points, relieving the underlying system from performing the interactive agreement at every Ada dispatching point.
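As a loose illustration of keeping replica consistency at critical points (a Python toy of ours, not the Multi-μ FTManager, which targets Ada applications on a real-time kernel), replicas exchange a potentially non-deterministic local value, such as a clock reading, and all adopt the same deterministic function of the exchanged values before it can influence their state:

    # Simplified interactive-agreement step: each replica proposes its locally
    # observed, potentially non-deterministic value (e.g. a timestamp); all
    # replicas then adopt the same deterministic function of the proposals,
    # so their states stay consistent without forcing determinism everywhere.

    def agree(proposals):
        # Deterministic choice over the replicas' proposals (here: the median).
        ordered = sorted(proposals)
        return ordered[len(ordered) // 2]

    class Replica:
        def __init__(self, name):
            self.name = name
            self.state = None

        def critical_point(self, local_value, exchanged_values):
            # At a critical point the replica replaces its local (possibly
            # divergent) value with the agreed one before it affects the state.
            self.state = agree(exchanged_values + [local_value])
            return self.state

    # Three replicas read slightly different clock values; after the exchange
    # they all continue with the same agreed value.
    readings = {"r1": 1002, "r2": 1005, "r3": 1001}
    replicas = [Replica(n) for n in readings]
    for r in replicas:
        others = [v for n, v in readings.items() if n != r.name]
        print(r.name, r.critical_point(readings[r.name], others))

Restricting such agreement to identified critical points, rather than to every dispatching point, is the saving the abstract attributes to the architecture.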