901 results for distinguishability metrics
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range-sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the number of required iterations, or by reducing the complexity of the most expensive phase: the nearest-neighbor search. Although they decrease its complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems among those described above can be addressed. To that end, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature. In this analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to assess the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been experimentally analyzed and validated as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial configurations.
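As a rough illustration of the idea described above, the following Python sketch compares the Euclidean distance with two cheaper point-to-point metrics (Manhattan/L1 and Chebyshev/L-infinity) inside a brute-force closest-neighbour step of the kind ICP performs; the function names and the brute-force search are illustrative assumptions, not the implementation studied in the work.

import numpy as np

# Candidate point-to-point metrics: the cheaper L1 and L-infinity distances
# avoid the square root (and, for L-infinity, most of the multiplications)
# of the Euclidean metric used by the standard ICP formulation.
def euclidean(p, q):
    return np.sqrt(np.sum((p - q) ** 2, axis=-1))

def manhattan(p, q):          # L1
    return np.sum(np.abs(p - q), axis=-1)

def chebyshev(p, q):          # L-infinity
    return np.max(np.abs(p - q), axis=-1)

def closest_points(source, target, metric):
    """Brute-force closest-neighbour step of ICP under a chosen metric.

    Returns, for every source point, the index of its closest target point.
    (Quadratic in the number of points, matching the non-optimized variant.)
    """
    # distances[i, j] = metric(source[i], target[j]) via broadcasting
    d = metric(source[:, None, :], target[None, :, :])
    return np.argmin(d, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.random((500, 3))
    tgt = rng.random((500, 3))
    for m in (euclidean, manhattan, chebyshev):
        idx = closest_points(src, tgt, m)
        print(m.__name__, idx[:5])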
Abstract:
Development of desalination projects requires simple methodologies and tools for cost-effective and environmentally sensitive management. Sentinel taxa and biotic indices are easily interpreted from an environmental management perspective. Echinoderms are a potential sentinel taxon for gauging the impact produced by brine discharge, and the BOPA index is considered an effective tool for monitoring different types of impact. The salinity increase due to desalination brine discharge was evaluated in terms of these two indicators. They reflected the environmental impact and the recovery after implementation of a mitigation measure. Echinoderms disappeared at the station closest to the discharge during the years with the highest salinity and then recovered their abundance after installation of a diffuser reduced the salinity increase. In the same period, BOPA responded to the decrease in sensitive amphipods and the increase in tolerant polychaete families when salinities rose. Although salinity changes explained most of the observed variability in both indicators, other abiotic parameters were also significant in explaining this variability.
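For context, a commonly cited formulation of the BOPA index combines the frequencies of opportunistic polychaetes and sensitive amphipods as BOPA = log10(fP/(fA + 1) + 1); the sketch below assumes that formulation and uses illustrative abundances, not the monitoring data of this study.

import math

def bopa(opportunistic_polychaetes, amphipods, total_abundance):
    """BOPA index in the commonly cited form:
    BOPA = log10( fP / (fA + 1) + 1 )
    where fP and fA are the frequencies (proportions of total abundance)
    of opportunistic polychaetes and of sensitive amphipods.
    Higher values indicate a more impacted station.
    """
    f_p = opportunistic_polychaetes / total_abundance
    f_a = amphipods / total_abundance
    return math.log10(f_p / (f_a + 1.0) + 1.0)

# Illustrative values only (not data from the study):
print(bopa(opportunistic_polychaetes=120, amphipods=5, total_abundance=400))
print(bopa(opportunistic_polychaetes=10, amphipods=150, total_abundance=400))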
Abstract:
We compute the Dirac indices of the two spin structures κ0 and κ1 for Eguchi-Hanson metrics with nonzero total mass. The indices do not vanish in general, so an axial anomaly exists. When the metric has zero total mass, the Dirac index vanishes for the spin structure κ0, and no axial anomaly exists in this case.
Abstract:
The basis of this work was to investigate the relative environmental impacts of various power generators, knowing that the plants are located in entirely different environments and that different receptors will experience different impacts. Based on the IChemE sustainability metrics paradigm, we calculated potential environmental indicators (P-EI) that represent the environmental burden of the masses of potential pollutants discharged into different receiving media. However, a P-EI may not be significant, as it may not be expressed at all under different conditions, so to include some measure of receiver significance we developed a methodology that takes into account specific environmental indicators (S-EI) referring to the environmental attributes of a specific site. In this context, we acquired site-specific environmental data for the airsheds and water catchment areas in different locations for a limited number of environmental indicators, such as human health (carcinogenic) effects, atmospheric acidification, photochemical (ozone) smog and eutrophication. The S-EI results from this analysis show that atmospheric acidification has the highest impact value, while health risks due to fly-ash emissions are not considered as significant, because many coal power plants in Australia are located in airsheds with low population density. The contributions of coal power plants to photochemical (ozone) smog and eutrophication were not significant. In this study, we considered emission-related data trends to reflect technology performance (e.g., the P-EI indicators), while a real sustainability metric can be associated only with the specific environmental conditions of the relevant sites (e.g., the S-EI indicators).
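As a sketch of what a potential environmental indicator of this kind can look like (not the authors' calculation), the snippet below weights emitted masses by potency factors relative to a reference substance for one impact category; the potency values and emission figures are placeholders for illustration only.

# Potency factors in kg SO2-equivalent per kg emitted (illustrative placeholders).
acidification_potency = {
    "SO2": 1.0,
    "NOx": 0.7,
    "NH3": 1.88,
}

def potential_environmental_indicator(emissions_kg, potency):
    """P-EI-style figure for one impact category: sum of mass x potency factor."""
    return sum(mass * potency.get(substance, 0.0)
               for substance, mass in emissions_kg.items())

plant_emissions = {"SO2": 1.2e6, "NOx": 2.5e6, "NH3": 3.0e4}  # kg/year, illustrative
print(potential_environmental_indicator(plant_emissions, acidification_potency))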
Abstract:
Representing knowledge using domain ontologies has been shown to be a useful mechanism and format for managing and exchanging information. Due to the difficulty and cost of building ontologies, a number of ontology libraries and search engines are coming into existence to facilitate reusing such knowledge structures. The need for ontology ranking techniques is becoming crucial as the number of ontologies available for reuse continues to grow. In this paper we present AKTiveRank, a prototype system for ranking ontologies based on the analysis of their structures. We describe the metrics used in the ranking system and present an experiment on ranking the ontologies returned by a popular search engine for an example query.
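A minimal sketch, under the assumption that ranking reduces to a weighted combination of normalised structural scores per ontology; the metric names and weights below are illustrative and are not AKTiveRank's exact measures.

from typing import Dict, List

def rank_ontologies(scores: List[Dict[str, float]],
                    weights: Dict[str, float]) -> List[int]:
    """Return ontology indices sorted by a weighted sum of normalised metrics."""
    # Normalise each metric to [0, 1] across the candidate ontologies.
    normalised = []
    for name in weights:
        values = [s[name] for s in scores]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        normalised.append([(v - lo) / span for v in values])

    totals = [
        sum(w * normalised[i][j] for i, w in enumerate(weights.values()))
        for j in range(len(scores))
    ]
    return sorted(range(len(scores)), key=lambda j: totals[j], reverse=True)

# Illustrative structural scores per candidate ontology (higher is better).
candidates = [
    {"class_match": 0.9, "density": 0.4, "centrality": 0.6},
    {"class_match": 0.5, "density": 0.8, "centrality": 0.7},
]
print(rank_ontologies(candidates,
                      {"class_match": 0.5, "density": 0.25, "centrality": 0.25}))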
Abstract:
PURPOSE: To provide a consistent standard for the evaluation of different types of presbyopic correction. SETTING: Eye Clinic, School of Life and Health Sciences, Aston University, Birmingham, United Kingdom. METHODS: Presbyopic corrections examined were accommodating intraocular lenses (IOLs), simultaneous multifocal and monovision contact lenses, and varifocal spectacles. Binocular near visual acuity measured with different optotypes (uppercase letters, lowercase letters, and words) and reading metrics assessed with the Minnesota Near Reading chart (reading acuity, critical print size [CPS], CPS reading speed) were intercorrelated (Pearson product moment correlations) and assessed for concordance (intraclass correlation coefficients [ICC]) and agreement (Bland-Altman analysis) for indication of clinical usefulness. RESULTS: Nineteen accommodating IOL cases, 40 simultaneous contact lens cases, and 38 varifocal spectacle cases were evaluated. Other than CPS reading speed, all near visual acuity and reading metrics correlated well with each other (r>0.70, P<.001). Near visual acuity measured with uppercase letters was highly concordant (ICC, 0.78) and in close agreement with lowercase letters (+/- 0.17 logMAR). Near word acuity agreed well with reading acuity (+/- 0.16 logMAR), which in turn agreed well with near visual acuity measured with uppercase letters (+/- 0.16 logMAR). Concordance (ICC, 0.18 to 0.46) and agreement (+/- 0.24 to 0.30 logMAR) of CPS with the other near metrics was moderate. CONCLUSION: Measurement of near visual ability in presbyopia should be standardized to include assessment of near visual acuity with logMAR uppercase-letter optotypes, smallest logMAR print size that maintains maximum reading speed (CPS), and reading speed. J Cataract Refract Surg 2009; 35:1401-1409 (C) 2009 ASCRS and ESCRS
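For readers unfamiliar with the agreement statistics used here, the sketch below shows how a Pearson correlation and Bland-Altman bias with 95% limits of agreement can be computed for two acuity measures; the numbers are illustrative, not study data.

import numpy as np
from scipy import stats

def bland_altman_limits(a, b):
    """Bland-Altman analysis: mean difference (bias) and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

# Illustrative logMAR acuities (not study data): uppercase vs lowercase optotypes.
upper = np.array([0.10, 0.22, 0.35, 0.18, 0.40, 0.05, 0.30])
lower = np.array([0.20, 0.35, 0.50, 0.30, 0.55, 0.15, 0.42])

r, p = stats.pearsonr(upper, lower)
bias, (low, high) = bland_altman_limits(upper, lower)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"bias = {bias:.2f} logMAR, 95% limits of agreement = [{low:.2f}, {high:.2f}]")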
Abstract:
The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge-based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in estimating these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge-based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the 'classic' metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey of large UK companies confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not already using these tools. By characterising companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful, since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of 'classic' program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data-flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By redefining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
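As a rough illustration of applying 'classic' size and structure counts to Prolog source (the thesis redefines these counts far more carefully), the sketch below counts non-comment lines, clauses and a McCabe-style number of decision points; the counting rules are assumptions for illustration only.

import re

def prolog_size_metrics(source: str):
    """Crude size/structure counts for a Prolog source file.

    These counting rules are illustrative only; the thesis redefines the
    classic metrics for Prolog in its own, more careful way.
    """
    code_lines = [ln for ln in source.splitlines()
                  if ln.strip() and not ln.strip().startswith('%')]
    # Rules/directives contain ':-'; facts are heads followed directly by '.'.
    clauses = source.count(':-') + len(re.findall(r'^\w+\([^)]*\)\s*\.', source, re.M))
    # A McCabe-style count: one path per clause plus one per explicit
    # disjunction (';') or if-then ('->') in clause bodies.
    decisions = source.count(';') + source.count('->')
    return {
        "lines_of_code": len(code_lines),
        "clauses": clauses,
        "cyclomatic_like": clauses + decisions,
    }

example = """
% length of a list
len([], 0).
len([_|T], N) :- len(T, M), N is M + 1.
"""
print(prolog_size_metrics(example))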
Abstract:
Energy consumption and energy efficiency have become very important issues both in optimizing current telecommunications networks and in designing future ones. Energy and power metrics are being introduced in order to enable assessment and comparison of the energy consumption and power efficiency of telecommunications networks and other transmission equipment. The standardization of energy and power metrics is a significant ongoing activity aiming to define baseline energy and power metrics for telecommunications systems. This article provides an up-to-date overview of the energy and power metrics proposed by the various standardization bodies and subsequently adopted worldwide by equipment manufacturers and network operators. © Institut Télécom and Springer-Verlag 2012.
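One family of such figures relates input power to useful throughput, for example a watts-per-Gbit/s rating and its reciprocal, throughput per watt; the sketch below computes both from illustrative values and is not tied to any specific standard's measurement procedure.

def watts_per_gbps(power_w: float, throughput_gbps: float) -> float:
    """Power drawn per unit of useful throughput (lower is better)."""
    return power_w / throughput_gbps

def gbps_per_watt(power_w: float, throughput_gbps: float) -> float:
    """Useful throughput delivered per watt of input power (higher is better)."""
    return throughput_gbps / power_w

router_power_w = 3500.0         # measured input power (illustrative)
router_throughput_gbps = 640.0  # effective forwarding throughput (illustrative)

print(f"{watts_per_gbps(router_power_w, router_throughput_gbps):.2f} W/Gbps")
print(f"{gbps_per_watt(router_power_w, router_throughput_gbps):.3f} Gbps/W")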
Abstract:
To what extent does competitive entry create a structural change in key marketing metrics? New players may just be a temporary nuisance to incumbents, but could also fundamentally change the latter's performance evolution, or induce them to permanently alter their spending levels and/or pricing decisions. Similarly, the addition of a new marketing channel could permanently shift shopping preferences, or could just create a short-lived migration from existing channels. The steady-state impact of a given entry or channel addition on various marketing metrics is intrinsically an empirical issue for which we need an appropriate testing procedure. In this study, we introduce a testing sequence that allows for the endogenous determination of potential change (break) locations, thereby accounting for lead and/or lagged effects of the introduction of interest. By not restricting the number of potential breaks to one (as is commonly done in the marketing literature), we quantify the impact of the new entrant(s) while controlling for other events that may have taken place in the market. We illustrate the methodology in the context of the Dutch television advertising market, which was characterized by the entry of several late movers. We find that the steady-state growth of private incumbents' revenues was slowed by the quasi-simultaneous entry of three new players. Contrary to industry observers' expectations, such a slowdown was not experienced in the related markets of print and radio advertising.
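To illustrate the idea of letting the data pick the break location endogenously (the paper's procedure handles multiple breaks and richer dynamics), the sketch below locates a single candidate break in the mean of a series by minimizing the sum of squared errors of a piecewise-constant fit; the series is simulated, not the Dutch advertising data.

import numpy as np

def best_single_break(y, min_seg=8):
    """Endogenously locate one candidate break in the mean of a series by
    minimizing the total sum of squared errors of a piecewise-constant fit.
    """
    y = np.asarray(y, float)
    n = len(y)
    best = (np.inf, None)
    for b in range(min_seg, n - min_seg):
        sse = ((y[:b] - y[:b].mean()) ** 2).sum() + ((y[b:] - y[b:].mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, b)
    return best[1]

# Illustrative revenue series with a level shift after period 60 (not real data).
rng = np.random.default_rng(1)
t = np.arange(120)
revenue = np.where(t < 60, 100.0, 88.0) + rng.normal(0, 2, t.size)
print("estimated break at period", best_single_break(revenue))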
Abstract:
Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study investigated the effect of the interaction between landscape spatial pattern and geospatial processing methods statistically, unlike most papers, which consider the effect of each factor only in isolation. This is important since data used to calculate landscape metrics typically undergo a series of data abstraction processing tasks which are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared against two components of landscape pattern: spatial heterogeneity and the proportion of landcover class area. The interactions and their effect on the final landcover map were described using landscape metrics to measure landscape pattern and classification accuracy (the response variables). All landscape metrics and classification accuracy were shown to be affected both by landscape pattern and by processing methods. Large variability in the response of those variables and interactions between the explanatory variables were observed. However, even though interactions occurred, they only affected the magnitude of the difference in landscape metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of the difference between landscapes may change, and therefore absolute values of landscape metrics may need to be interpreted with caution. The explanatory variables with the largest effects were spatial heterogeneity and pixel size; these tended to produce large main effects and large interactions. The high variability in the response variables and the interaction of the explanatory variables indicate that it would be difficult to make generalisations about the impact of processing on landscape pattern, as only two processing methods were tested and it is likely that untested processing methods would result in even greater spatial uncertainty. © 2013 Elsevier B.V.
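As an illustration of two elements discussed above, the sketch below computes the landscape metric "number of patches" via connected-component labelling and aggregates a land cover map to a coarser pixel size with a majority rule; the map is simulated, and the choices (4-connectivity, modal aggregation) are assumptions for illustration only.

import numpy as np
from scipy import ndimage

def number_of_patches(landcover, cls):
    """Landscape metric 'number of patches': count of 4-connected regions of cls."""
    _, n = ndimage.label(landcover == cls)
    return n

def majority_aggregate(landcover, factor):
    """Aggregate to a coarser pixel size with the majority (modal) rule."""
    h, w = landcover.shape
    h, w = h - h % factor, w - w % factor          # trim to a multiple of factor
    blocks = landcover[:h, :w].reshape(h // factor, factor, w // factor, factor)
    blocks = blocks.swapaxes(1, 2).reshape(-1, factor * factor)
    modal = np.array([np.bincount(b).argmax() for b in blocks])
    return modal.reshape(h // factor, w // factor)

rng = np.random.default_rng(0)
fine = (rng.random((120, 120)) < 0.3).astype(int)   # fragmented binary map (illustrative)
coarse = majority_aggregate(fine, 4)
print("patches at fine resolution:  ", number_of_patches(fine, 1))
print("patches after 4x aggregation:", number_of_patches(coarse, 1))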
Abstract:
This paper is a survey of results obtained by the authors on the geometry of connections with totally skew-symmetric torsion on the following manifolds: almost complex manifolds with Norden metric, almost contact manifolds with B-metric and almost hypercomplex manifolds with Hermitian and anti-Hermitian metric.
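For reference, the compatibility condition that distinguishes a Norden (B-) metric from a Hermitian one, and the sense in which a connection has totally skew-symmetric torsion, can be written as follows (standard definitions, recalled here only as a reminder):

g(JX, JY) = -\,g(X, Y) \quad \text{(Norden / anti-Hermitian, B-metric)}
\qquad\text{vs.}\qquad
g(JX, JY) = g(X, Y) \quad \text{(Hermitian)},

T(X, Y) = \nabla_X Y - \nabla_Y X - [X, Y], \qquad
T(X, Y, Z) := g(T(X, Y), Z) \ \text{is totally skew-symmetric (a 3-form)}.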
Abstract:
Healthy brain functioning depends on efficient communication of information between brain regions, forming complex networks. By quantifying synchronisation between brain regions, a functionally connected brain network can be articulated. In neurodevelopmental disorders, where diagnosis is based on measures of behaviour and tasks, a measure of the underlying biological mechanisms holds promise as a potential clinical tool. Graph theory provides a tool for investigating the neural correlates of neuropsychiatric disorders, where there is disruption of efficient communication within and between brain networks. This research aimed to use recent conceptualisations of graph theory, along with measures of behaviour and cognitive functioning, to increase understanding of the neurobiological risk factors of atypical development. Using magnetoencephalography (MEG) to investigate frequency-specific temporal dynamics at rest, the research aimed to identify potential biological markers derived from sensor-level whole-brain functional connectivity. Whilst graph theory has proved valuable for insight into network efficiency, its application is hampered by two limitations. First, its measures have hardly been validated in MEG studies, and second, graph measures have been shown to depend on methodological assumptions that restrict direct network comparisons. The first experimental study (Chapter 3) addressed the first limitation by examining the reproducibility of graph-based functional connectivity and network parameters in healthy adult volunteers. Subsequent chapters addressed the second limitation through an adapted minimum spanning tree analysis (a network approach that allows for unbiased group comparisons), along with the graph network tools shown in Chapter 3 to be highly reproducible. Network topologies were modelled in healthy development (Chapter 4) and atypical neurodevelopment (Chapters 5 and 6). The results support the proposition that measures of network organisation derived from sensor-space MEG data offer insights that help to unravel the biological basis of typical brain maturation and neurodevelopmental conditions, with the possibility of future clinical utility.
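A minimal sketch of the kind of unbiased backbone extraction mentioned above, assuming a generic synchrony measure (absolute correlation) on simulated sensor data rather than the thesis's MEG pipeline: the minimum spanning tree of the connectivity graph is computed with networkx and summarised with simple tree measures.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_sensors, n_samples = 32, 2000
data = rng.standard_normal((n_sensors, n_samples))   # stand-in for band-limited MEG signals

connectivity = np.abs(np.corrcoef(data))              # illustrative synchrony measure
np.fill_diagonal(connectivity, 0.0)

G = nx.Graph()
for i in range(n_sensors):
    for j in range(i + 1, n_sensors):
        # Stronger coupling -> smaller "distance", so the MST keeps the backbone
        # of the most strongly synchronised sensor pairs.
        G.add_edge(i, j, weight=1.0 - connectivity[i, j])

mst = nx.minimum_spanning_tree(G, weight="weight")
degrees = [d for _, d in mst.degree()]
print("MST edges:", mst.number_of_edges())             # always n_sensors - 1
print("max degree:", max(degrees),
      "leaf fraction:", sum(d == 1 for d in degrees) / n_sensors)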