19 results for Probability Metrics

in the Aston University Research Archive


Relevance:

20.00%

Publisher:

Abstract:

Representing knowledge using domain ontologies has been shown to be a useful mechanism and format for managing and exchanging information. Due to the difficulty and cost of building ontologies, a number of ontology libraries and search engines are coming into existence to facilitate reusing such knowledge structures. The need for ontology ranking techniques is becoming crucial as the number of ontologies available for reuse continues to grow. In this paper we present AKTiveRank, a prototype system for ranking ontologies based on the analysis of their structures. We describe the metrics used in the ranking system and present an experiment on ranking ontologies returned by a popular search engine for an example query.
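
As an illustration of the general idea only (the metric names, weights and normalisation below are placeholders, not AKTiveRank's actual measures), a structure-based ranker can be sketched as a weighted sum of normalised structural metrics:

```python
# Minimal sketch of structure-based ontology ranking: each candidate ontology
# is scored as a weighted sum of normalised structural metrics. Metric names,
# values and weights are illustrative placeholders.

def rank_ontologies(candidates, weights):
    """candidates: {ontology_id: {metric_name: raw_value}}."""
    metrics = {m for scores in candidates.values() for m in scores}
    # Normalise each metric by its maximum across candidates (avoid divide-by-zero).
    max_per_metric = {
        m: max(scores.get(m, 0.0) for scores in candidates.values()) or 1.0
        for m in metrics
    }
    ranked = [
        (onto_id, sum(weights.get(m, 0.0) * scores.get(m, 0.0) / max_per_metric[m]
                      for m in metrics))
        for onto_id, scores in candidates.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

candidates = {
    "onto-A": {"class_match": 4, "density": 2.5, "centrality": 0.6},
    "onto-B": {"class_match": 2, "density": 3.0, "centrality": 0.9},
}
weights = {"class_match": 0.4, "density": 0.3, "centrality": 0.3}
print(rank_ontologies(candidates, weights))
```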

Relevance:

20.00%

Publisher:

Abstract:

Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce two novel techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.
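
A common way to build such a density (shown here only as an illustrative sketch under the assumption of a von Mises mixture; the paper's own techniques may differ) is to mix von Mises (circular normal) kernels whose parameters depend on the conditioning input:

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of order zero

# Sketch: a conditional density over a periodic variable theta, modelled as a
# mixture of von Mises kernels. The parameter functions of the conditioning
# input x are hand-written placeholders standing in for a fitted model.

def von_mises_pdf(theta, mu, kappa):
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * i0(kappa))

def conditional_density(theta, x):
    weights = np.array([0.6, 0.4])               # mixing coefficients
    mus = np.array([0.5 * x, 0.5 * x + np.pi])   # two preferred directions
    kappas = np.array([4.0, 2.0])                # concentration parameters
    return sum(w * von_mises_pdf(theta, m, k)
               for w, m, k in zip(weights, mus, kappas))

theta = np.linspace(-np.pi, np.pi, 360, endpoint=False)
density = conditional_density(theta, x=1.0)
print(density.mean() * 2.0 * np.pi)  # Riemann sum over one period, close to 1
```

Because every kernel is itself periodic, the mixture integrates to one over any full period, which is exactly the property that ordinary (non-periodic) density models lack.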

Relevance:

20.00%

Publisher:

Abstract:

Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we apply two novel techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.

Relevance:

20.00%

Publisher:

Abstract:

Most conventional techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce three related techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.

Relevance:

20.00%

Publisher:

Abstract:

Most of the common techniques for estimating conditional probability densities are inappropriate for applications involving periodic variables. In this paper we introduce three novel techniques for tackling such problems, and investigate their performance using synthetic data. We then apply these techniques to the problem of extracting the distribution of wind vector directions from radar scatterometer data gathered by a remote-sensing satellite.

Relevance:

20.00%

Publisher:

Abstract:

We consider the direct adaptive inverse control of nonlinear multivariable systems with different delays between every input-output pair. In direct adaptive inverse control, the inverse mapping is learned from examples of input-output pairs. This makes the obtained controller suboptimal, since the network may have to learn the response of the plant over a larger operational range than necessary. Moreover, in certain applications, the control problem can be redundant, implying that the inverse problem is ill-posed. In this paper we propose a new algorithm which allows estimating and exploiting uncertainty in nonlinear multivariable control systems. This approach allows us to model strongly non-Gaussian distributions of control signals as well as processes with hysteresis. The proposed algorithm circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider.
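
The final idea, using predictive uncertainty to narrow the set of control solutions, can be caricatured as follows; the candidate generation, surrogate forward model and variance function below are placeholders, not the algorithm proposed in the paper:

```python
import numpy as np

# Schematic only: among candidate control inputs whose predicted plant output
# is close to the target, prefer the one the model is most certain about.
# The surrogate forward model and variance function stand in for a trained
# neural network with predictive uncertainty.

rng = np.random.default_rng(0)
target, tolerance = 1.0, 0.1
candidates = rng.uniform(-2.0, 2.0, size=200)        # candidate control inputs
predicted_output = np.tanh(candidates)               # surrogate forward model
predicted_variance = 0.05 + 0.1 * candidates ** 2    # surrogate uncertainty

feasible = np.abs(predicted_output - target) < tolerance
if feasible.any():
    best = candidates[feasible][np.argmin(predicted_variance[feasible])]
    print(f"selected control input: {best:.3f}")
else:
    print("no candidate reaches the target within tolerance")
```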

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To provide a consistent standard for the evaluation of different types of presbyopic correction. SETTING: Eye Clinic, School of Life and Health Sciences, Aston University, Birmingham, United Kingdom. METHODS: Presbyopic corrections examined were accommodating intraocular lenses (IOLs), simultaneous multifocal and monovision contact lenses, and varifocal spectacles. Binocular near visual acuity measured with different optotypes (uppercase letters, lowercase letters, and words) and reading metrics assessed with the Minnesota Near Reading chart (reading acuity, critical print size [CPS], CPS reading speed) were intercorrelated (Pearson product moment correlations) and assessed for concordance (intraclass correlation coefficients [ICC]) and agreement (Bland-Altman analysis) for indication of clinical usefulness. RESULTS: Nineteen accommodating IOL cases, 40 simultaneous contact lens cases, and 38 varifocal spectacle cases were evaluated. Other than CPS reading speed, all near visual acuity and reading metrics correlated well with each other (r>0.70, P<.001). Near visual acuity measured with uppercase letters was highly concordant (ICC, 0.78) and in close agreement with lowercase letters (+/- 0.17 logMAR). Near word acuity agreed well with reading acuity (+/- 0.16 logMAR), which in turn agreed well with near visual acuity measured with uppercase letters (+/- 0.16 logMAR). Concordance (ICC, 0.18 to 0.46) and agreement (+/- 0.24 to 0.30 logMAR) of CPS with the other near metrics were moderate. CONCLUSION: Measurement of near visual ability in presbyopia should be standardized to include assessment of near visual acuity with logMAR uppercase-letter optotypes, smallest logMAR print size that maintains maximum reading speed (CPS), and reading speed. J Cataract Refract Surg 2009; 35:1401-1409. © 2009 ASCRS and ESCRS
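
For readers unfamiliar with the statistics involved, the following sketch computes a Pearson correlation and Bland-Altman bias and limits of agreement between two acuity measures; the logMAR values are hypothetical and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical logMAR values for two near-acuity measures (e.g. uppercase- vs
# lowercase-letter acuity); the numbers are made up for illustration only.
uppercase = np.array([0.20, 0.35, 0.10, 0.42, 0.28, 0.15, 0.31, 0.25])
lowercase = np.array([0.28, 0.44, 0.21, 0.50, 0.33, 0.26, 0.40, 0.30])

r, p = stats.pearsonr(uppercase, lowercase)   # correlation between the measures
diff = uppercase - lowercase
bias = diff.mean()                            # Bland-Altman mean difference
loa = 1.96 * diff.std(ddof=1)                 # half-width of 95% limits of agreement

print(f"r = {r:.2f} (p = {p:.3g})")
print(f"bias = {bias:+.3f} logMAR, limits of agreement = +/- {loa:.3f} logMAR")
```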

Relevance:

20.00%

Publisher:

Abstract:

The research described here concerns the development of metrics and models to support the development of hybrid (conventional/knowledge based) integrated systems. The thesis argues from the point that, although it is well known that estimating the cost, duration and quality of information systems is a difficult task, it is far from clear what sorts of tools and techniques would adequately support a project manager in the estimation of these properties. A literature review shows that metrics (measurements) and estimating tools have been developed for conventional systems since the 1960s, while there has been very little research on metrics for knowledge based systems (KBSs). Furthermore, although there are a number of theoretical problems with many of the "classic" metrics developed for conventional systems, it also appears that the tools which such metrics can be used to develop are not widely used by project managers. A survey was carried out of large UK companies which confirmed this continuing state of affairs. Before any useful tools could be developed, therefore, it was important to find out why project managers were not using these tools already. By characterising those companies that use software cost estimating (SCE) tools against those which could but do not, it was possible to recognise the involvement of the client/customer in the process of estimation. Pursuing this point, a model of the early estimating and planning stages (the EEPS model) was developed to test exactly where estimating takes place. The EEPS model suggests that estimating could take place either before a fully-developed plan has been produced, or while this plan is being produced. If it were the former, then SCE tools would be particularly useful since there is very little other data available from which to produce an estimate. A second survey, however, indicated that project managers see estimating as being essentially the latter, at which point project management tools are available to support the process. It would seem, therefore, that SCE tools are not being used because project management tools are being used instead. The issue here is not with the method of developing an estimating model or tool, but with the way in which "an estimate" is intimately tied to an understanding of what tasks are being planned. Current SCE tools are perceived by project managers as targeting the wrong point of estimation. A model (called TABATHA) is then presented which describes how an estimating tool based on an analysis of tasks would fit into the planning stage. The issue of whether metrics can be usefully developed for hybrid systems (which also contain KBS components) is tested by extending a number of "classic" program size and structure metrics to a KBS language, Prolog. Measurements of lines of code, Halstead's operators/operands, McCabe's cyclomatic complexity, Henry & Kafura's data flow fan-in/out and post-release reported errors were taken for a set of 80 commercially developed LPA Prolog programs. By re-defining the metric counts for Prolog, it was found that estimates of program size and error-proneness comparable to the best conventional studies are possible. This suggests that metrics can be usefully applied to KBS languages such as Prolog, and thus that the development of metrics and models to support the development of hybrid information systems is both feasible and useful.
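
As a flavour of what "re-defining the metric counts" for a declarative language involves, here is a toy sketch that computes naive size counts for Prolog source text; the counting rules are illustrative only and are not the definitions used in the thesis:

```python
import re

# Toy size counts for Prolog source text, using deliberately naive rules
# (non-blank, non-comment lines; clause terminators; distinct head functors).

def prolog_size_metrics(source):
    lines = [ln for ln in source.splitlines()
             if ln.strip() and not ln.lstrip().startswith("%")]
    clauses = source.count(".")                      # crude clause count
    heads = re.findall(r"^([a-z]\w*)\s*(?:\(|:-|\.)", source, flags=re.M)
    return {"loc": len(lines), "clauses": clauses, "predicates": len(set(heads))}

example = """\
% list membership
member(X, [X|_]).
member(X, [_|T]) :- member(X, T).
"""
print(prolog_size_metrics(example))
```

Counts such as these can then be regressed against post-release error data in the same way as conventional size metrics.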

Relevance:

20.00%

Publisher:

Abstract:

Energy consumption and energy efficiency have become very important issues in optimizing current telecommunications networks as well as in designing future ones. Energy and power metrics are being introduced in order to enable assessment and comparison of the energy consumption and power efficiency of telecommunications networks and other transmission equipment. The standardization of energy and power metrics is a significant ongoing activity aiming to define baseline energy and power metrics for telecommunications systems. This article provides an up-to-date overview of the energy and power metrics being proposed by the various standardization bodies and subsequently adopted worldwide by equipment manufacturers and network operators. © Institut Télécom and Springer-Verlag 2012.

Relevance:

20.00%

Publisher:

Abstract:

Orthogonal frequency division multiplexing (OFDM) is becoming a fundamental technology in future generation wireless communications. Call admission control is an effective mechanism to guarantee resilient, efficient, and quality-of-service (QoS) services in wireless mobile networks. In this paper, we present several call admission control algorithms for OFDM-based wireless multiservice networks. Call connection requests are differentiated into narrow-band calls and wide-band calls. For either class of calls, the traffic process is characterized as batch arrival, since each call may request multiple subcarriers to satisfy its QoS requirement. The batch size is a random variable following a probability mass function (PMF) with a realistic maximum value. In addition, the service times for wide-band and narrow-band calls are different. We then perform a tele-traffic queueing analysis for OFDM-based wireless multiservice networks. Formulae for the key performance metrics, call blocking probability and bandwidth utilization, are developed. Numerical investigations are presented to demonstrate the interaction between key parameters and performance metrics. The performance tradeoff among different call admission control algorithms is discussed. Moreover, the analytical model has been validated by simulation. The methodology, as well as the results, provides an efficient tool for planning next-generation OFDM-based broadband wireless access systems.
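
To make the setting concrete, the following sketch simulates a simple complete-sharing admission policy with batch subcarrier requests and estimates blocking probability and utilisation; it is an illustrative Monte Carlo stand-in, not the paper's analytical queueing model, and all parameter values are made up:

```python
import heapq
import random

# Illustrative discrete-event simulation of a complete-sharing admission policy:
# narrow-band and wide-band calls each request a random batch of subcarriers
# and are blocked if not enough subcarriers are free.

random.seed(1)
TOTAL_SUBCARRIERS = 64
ARRIVAL_RATE = {"narrow": 2.0, "wide": 0.5}      # Poisson arrival rates
MEAN_HOLDING = {"narrow": 1.0, "wide": 2.0}      # mean exponential holding times
BATCH_MAX = {"narrow": 2, "wide": 8}             # max subcarriers per call
SIM_TIME = 20000.0

free = TOTAL_SUBCARRIERS
departures = []                                  # heap of (departure_time, batch)
offered = {"narrow": 0, "wide": 0}
blocked = {"narrow": 0, "wide": 0}
carried_subcarrier_time = 0.0
t = 0.0

while t < SIM_TIME:
    t += random.expovariate(sum(ARRIVAL_RATE.values()))   # next arrival epoch
    while departures and departures[0][0] <= t:           # release finished calls
        _, released = heapq.heappop(departures)
        free += released
    cls = random.choices(list(ARRIVAL_RATE), weights=list(ARRIVAL_RATE.values()))[0]
    offered[cls] += 1
    batch = random.randint(1, BATCH_MAX[cls])              # batch-size PMF: uniform here
    if batch <= free:
        holding = random.expovariate(1.0 / MEAN_HOLDING[cls])
        free -= batch
        heapq.heappush(departures, (t + holding, batch))
        carried_subcarrier_time += batch * holding
    else:
        blocked[cls] += 1

for cls in offered:
    print(f"{cls}-band blocking probability ~ {blocked[cls] / offered[cls]:.3f}")
print(f"subcarrier utilisation ~ {carried_subcarrier_time / (t * TOTAL_SUBCARRIERS):.3f}")
```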

Relevance:

20.00%

Publisher:

Abstract:

We obtain the exact asymptotic result for the disorder-averaged probability distribution function for a random walk in a biased Sinai model and show that it is characterized by a creeping behavior of the displacement moments with time, ⟨x^n⟩ ∼ t^(μn), where μ < 1 is the dimensionless mean drift. We employ a method originating in quantum diffusion which is based on the exact mapping of the problem to an imaginary-time Schrödinger equation. For nonzero drift such an equation has an isolated lowest eigenvalue separated by a gap from quasicontinuous excited states, and the eigenstate corresponding to the former governs the long-time asymptotic behavior.
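
Schematically, the mapping described above can be summarised as follows (the notation is illustrative; H, E_0 and psi_0 are generic symbols for the effective Hamiltonian, its isolated lowest eigenvalue and the corresponding eigenstate, not necessarily the paper's notation):

```latex
% Generic form of the mapping: the disorder-averaged propagator is recast as an
% imaginary-time Schroedinger problem, and for nonzero drift the long-time
% behaviour is dominated by the isolated ground state,
\[
  -\partial_{\tau}\Psi \;=\; \hat{H}\,\Psi
  \qquad\Longrightarrow\qquad
  \Psi(\tau) \;\sim\; e^{-E_{0}\tau}\,\psi_{0} \quad (\tau \to \infty),
\]
% where E_0 is the lowest eigenvalue, separated by a gap from the
% quasicontinuous excited states, and \psi_0 the corresponding eigenstate.
```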

Relevance:

20.00%

Publisher:

Abstract:

To what extent does competitive entry create a structural change in key marketing metrics? New players may just be a temporary nuisance to incumbents, but could also fundamentally change the latter's performance evolution, or induce them to permanently alter their spending levels and/or pricing decisions. Similarly, the addition of a new marketing channel could permanently shift shopping preferences, or could just create a short-lived migration from existing channels. The steady-state impact of a given entry or channel addition on various marketing metrics is intrinsically an empirical issue for which we need an appropriate testing procedure. In this study, we introduce a testing sequence that allows for the endogenous determination of potential change (break) locations, thereby accounting for lead and/or lagged effects of the introduction of interest. By not restricting the number of potential breaks to one (as is commonly done in the marketing literature), we quantify the impact of the new entrant(s) while controlling for other events that may have taken place in the market. We illustrate the methodology in the context of the Dutch television advertising market, which was characterized by the entry of several late movers. We find that the steady-state growth of private incumbents' revenues was slowed by the quasi-simultaneous entry of three new players. Contrary to industry observers' expectations, such a slowdown was not experienced in the related markets of print and radio advertising.
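
As a toy illustration of endogenously locating a break (far simpler than the multiple-break testing sequence used in the study), the date of a single mean shift can be estimated by minimising the total sum of squared residuals over candidate split points; the simulated series below is purely hypothetical:

```python
import numpy as np

# Toy example: estimate a single structural break in the mean of a series by
# choosing the split point that minimises the total sum of squared residuals.

rng = np.random.default_rng(42)
T, true_break = 200, 120
y = np.concatenate([10 + rng.normal(0, 1, true_break),        # pre-entry level
                    7 + rng.normal(0, 1, T - true_break)])    # post-entry level

def ssr_for_split(series, k):
    pre, post = series[:k], series[k:]
    return ((pre - pre.mean()) ** 2).sum() + ((post - post.mean()) ** 2).sum()

trim = 20                                  # keep breaks away from the sample ends
best_k = min(range(trim, T - trim), key=lambda k: ssr_for_split(y, k))
print(f"estimated break date: {best_k} (true break at {true_break})")
```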

Relevance:

20.00%

Publisher:

Abstract:

Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study statistically investigated the effect of the interaction between landscape spatial pattern and geospatial processing methods, unlike most papers, which consider the effect of each factor in isolation only. This is important since the data used to calculate landscape metrics typically undergo a series of data abstraction processing tasks, which are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared to two components of landscape pattern, spatial heterogeneity and the proportion of landcover class area. The interactions and their effect on the final landcover map were described using landscape metrics to measure landscape pattern and classification accuracy (response variables). All landscape metrics and classification accuracy were shown to be affected by both landscape pattern and processing methods. Large variability in the response of those variables and interactions between the explanatory variables were observed. However, even though interactions occurred, they only affected the magnitude of the difference in landscape metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of the difference between landscapes may change, and therefore absolute values of landscape metrics may need to be interpreted with caution. The explanatory variables which had the largest effects were spatial heterogeneity and pixel size. These explanatory variables tended to result in large main effects and large interactions. The high variability in the response variables and the interaction of the explanatory variables indicate that it would be difficult to make generalisations about the impact of processing on landscape pattern, as only two processing methods were tested and untested processing methods may well result in even greater spatial uncertainty. © 2013 Elsevier B.V.
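
To illustrate the kind of interaction being tested (this is only a sketch, not the study's processing chain), the snippet below computes the landscape metric "number of patches" for a synthetic binary land-cover map before and after majority-rule aggregation to a coarser pixel size:

```python
import numpy as np
from scipy import ndimage

# Sketch: how the "number of patches" landscape metric responds to aggregating
# a binary land-cover map to a coarser pixel size with a majority rule.

rng = np.random.default_rng(0)
landcover = (rng.random((120, 120)) < 0.3).astype(int)    # synthetic binary map

def number_of_patches(grid):
    _, n = ndimage.label(grid)                             # 4-connected patches
    return n

def aggregate_majority(grid, factor):
    h, w = grid.shape
    blocks = grid[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return (blocks.mean(axis=(1, 3)) >= 0.5).astype(int)   # majority of class 1

print("patches at native pixel size:  ", number_of_patches(landcover))
print("patches after 4x4 aggregation: ", number_of_patches(aggregate_majority(landcover, 4)))
```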

Relevance:

20.00%

Publisher:

Abstract:

We analyze theoretically the interplay between optical return-to-zero signal degradation due to timing jitter and additive amplified-spontaneous-emission noise. The impact of these two factors on the performance of a square-law direct detection receiver is also investigated. We derive an analytical expression for the bit-error probability and quantitatively determine the conditions under which the contributions of timing jitter and additive noise to the bit error rate can be treated separately. An analysis of patterning effects is also presented. © 2007 IEEE.
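
For orientation only, the familiar Gaussian (Q-factor) approximation for a direct-detection receiver is reproduced below; the paper's derived analytical expression, which accounts for timing jitter and patterning, is not this baseline formula:

```latex
% Standard Gaussian (Q-factor) approximation for direct detection:
\[
  \mathrm{BER} \;\approx\; \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{Q}{\sqrt{2}}\right),
  \qquad
  Q \;=\; \frac{\langle I_{1}\rangle - \langle I_{0}\rangle}{\sigma_{1} + \sigma_{0}},
\]
% where <I_1>, <I_0> are the mean detected currents in the mark and space slots
% and \sigma_1, \sigma_0 the corresponding standard deviations.
```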

Relevance:

20.00%

Publisher:

Abstract:

We find the probability distribution of the fluctuating parameters of a soliton propagating through a medium with additive noise. Our method is a modification of the instanton formalism (method of optimal fluctuation) based on a saddle-point approximation in the path integral. We first consistently solve a fundamental problem of soliton propagation within the framework of the noisy nonlinear Schrödinger equation. We then consider model modifications due to in-line (filtering, amplitude and phase modulation) control. We examine how control elements change the error probability in optical soliton transmission. Even though the noise is weak, we are interested here in the probabilities of error-causing large fluctuations, which are beyond perturbation theory. We describe in detail a new phenomenon of soliton collapse that occurs under the combined action of noise, filtering and amplitude modulation. © 2004 Elsevier B.V. All rights reserved.
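
Schematically, and only as a generic sketch of the setting (D below is an assumed symbol for the noise intensity and S_opt for the action of the optimal noise realisation, not necessarily the paper's notation), the problem and the saddle-point estimate take the form:

```latex
% Soliton propagation with additive noise n(z,t) in the nonlinear Schroedinger
% equation, and the optimal-fluctuation (saddle-point) estimate of the
% probability of a large parameter fluctuation:
\[
  i\,\partial_{z}\psi + \tfrac{1}{2}\,\partial_{t}^{2}\psi + |\psi|^{2}\psi = n(z,t),
  \qquad
  P \;\propto\; \exp\!\left(-\,S_{\mathrm{opt}}/D\right).
\]
```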