977 results for metrics


Relevance:

20.00%

Publisher:

Abstract:

The role of marketing channels is to implement marketing strategy. The difficulty of channel strategy is compounded by the emergence of e-channels and the need to integrate e-channels into traditional or “bricks and mortar” channels (Rowley 2002). As a result, managing performance across a greater number of channels with diverse characteristics is more difficult.

Organizational and marketing performance are to some degree a function of the quality of channel implementation, and particularly of channel performance measurement. The channels literature suggests a "channel performance metric paradox": approaches to channel performance metrics have been mutually orthogonal or even negatively correlated (Jeuland & Shugan 1983; Lewis & Lambert 1991; Larson & Lusch 1992). This paradox implies that it is impossible to maximize all channel performance metrics simultaneously and that tradeoffs exist.

This paper proposes a research model and propositions that extend previous research and attempt to reconcile this "channel performance metric paradox". The model posits that testing the relationship between the Miles and Snow strategy types and a comprehensive range of channel performance metrics may explain the paradox. Previous implementation performance research has focused more on the Porter strategies than on the Miles and Snow strategy types.

Relevance:

20.00%

Publisher:

Abstract:

Image fusion quality metrics have evolved from image processing quality metrics. They measure the quality of fused images by estimating how much localized information has been transferred from the source images into the fused image. However, this approach assumes that it is actually possible to fuse two images into one without any loss; in practice, some features in both source images must be sacrificed or relaxed. Relaxed features may be very important ones, such as edges, gradients and texture elements, and the importance of a given feature is application dependent. This paper presents a new method for image fusion quality assessment that instead estimates how much valuable information has not been transferred.
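A minimal sketch of the idea of measuring untransferred information, assuming gradient magnitude as a stand-in for "valuable information" (the paper's exact formulation is not reproduced in this abstract; all names and arrays below are illustrative):

```python
import numpy as np

def gradient_energy(img):
    """Approximate local feature strength by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def untransferred_information(src_a, src_b, fused):
    """Fraction of the sources' gradient energy NOT preserved in the fused image.

    0.0 means every local feature survived fusion; values near 1.0 mean most
    features were relaxed (lost).
    """
    ga, gb, gf = map(gradient_energy, (src_a, src_b, fused))
    available = np.maximum(ga, gb)           # strongest source feature per pixel
    lost = np.clip(available - gf, 0, None)  # feature strength missing in fusion
    return float(lost.sum() / (available.sum() + 1e-12))

# Toy demo: one source has a vertical edge, the other is flat.
src_edge = np.zeros((8, 8)); src_edge[:, 4:] = 1.0
src_flat = np.zeros((8, 8))
loss_perfect = untransferred_information(src_edge, src_flat, src_edge.copy())
loss_empty = untransferred_information(src_edge, src_flat, np.zeros((8, 8)))
```

A fused image that keeps the edge scores a loss near 0, while an empty fused image scores near 1, matching the intuition that the metric penalizes features that never made it across.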

Relevance:

20.00%

Publisher:

Abstract:

Flash crowds and DDoS (Distributed Denial-of-Service) attacks have very similar properties in terms of Internet traffic; however, flash crowds are legitimate flows while DDoS attacks are illegitimate flows, and DDoS attacks have been a serious threat to Internet security and stability. In this paper we propose a set of novel methods using probability metrics to distinguish DDoS attacks from flash crowds effectively, and our simulations show that the proposed methods work well. In particular, these methods not only distinguish DDoS attacks from flash crowds clearly, but also effectively distinguish whether an anomalous flow is a DDoS attack flow or a flash-crowd flow, as opposed to normal network flow. Furthermore, we show that our proposed hybrid probability metrics can greatly reduce both false-positive and false-negative rates in detection.
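The abstract does not specify which probability metrics are used, but the core idea (coordinated bot traffic yields near-identical per-source distributions, while flash-crowd traffic does not) can be sketched with one common probability metric, the symmetric Kullback-Leibler (Jeffrey) distance; the traffic counts below are made up:

```python
import math

def normalize(counts):
    """Convert raw per-source packet counts into a probability distribution."""
    total = sum(counts)
    return [c / total for c in counts]

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p||q); eps avoids log(0)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def jeffrey_distance(p, q):
    """Symmetric probability metric used to compare two traffic snapshots."""
    return kl_divergence(p, q) + kl_divergence(q, p)

# Bots run the same attack program, so successive snapshots look alike;
# flash-crowd users act independently, so snapshots diverge more.
attack_a = normalize([100, 101, 99, 100])   # hypothetical bot traffic
attack_b = normalize([101, 100, 100, 99])
crowd_a = normalize([10, 250, 40, 100])     # hypothetical flash-crowd traffic
crowd_b = normalize([200, 30, 90, 80])

d_attack = jeffrey_distance(attack_a, attack_b)
d_crowd = jeffrey_distance(crowd_a, crowd_b)
# A small distance suggests coordinated (bot) traffic; a large one, a flash crowd.
```

A detector would compare such distances against a tuned threshold; the hybrid metrics in the paper combine several such measures.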

Relevance:

20.00%

Publisher:

Abstract:

Purpose:  The purpose of the study was to obtain anterior segment biometry for 40 normal eyes and to measure variables that may be useful in designing large-diameter gas permeable (GP) contact lenses that sit outside the region normally viewed by corneal topographers. The distribution of these variables in the normal eye, and how well they correlated with each other, was also determined.

Methods:  This is a cross-sectional study, in which data were collected at a single study visit. Corneal topography and imaging of the anterior segment of the eye were performed using the Orbscan II and Visante OCT. The variables that were collected were horizontal K reading, central corneal/scleral sagittal depth at 15 mm chord, and nasal and temporal angles at the 15 mm chord using the built-in software measurement tools.

Results:  The central horizontal K readings for the 40 eyes were 43 ± 1.73 D (7.85 ± 0.31 mm), with a ± 95% confidence interval (CI) of 38.7 D (8.7 mm) to 46.6 D (7.24 mm). The mean corneal/scleral sagittal depth at the 15 mm chord was 3.74 ± 0.19 mm (range 3.14 to 4.04 mm). The average nasal angle at the 15 mm chord (which was not different from the temporal angle) was 39.32 ± 3.07 degrees, with a ± 95% CI of 33.7 to 45.5 degrees. The correlation between the K reading and the corneal/scleral sagittal depth was the strongest (r = 0.58, p < 0.001). The corneal/scleral sagittal depth at 15 mm correlated less strongly with the nasal angle (r = 0.44, p = 0.004), and the weakest correlation was between the nasal angle at 15 mm and the horizontal K readings (r = 0.32, p = 0.046).
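The reported correlations are Pearson coefficients; as a quick illustration of how such a coefficient is computed (the paired measurements below are made up, not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired measurements of K reading (D) and corneal/scleral
# sagittal depth (mm), constructed so that steeper corneas have deeper sag.
k_readings = [41.0, 42.5, 43.0, 43.5, 44.0, 45.5]   # made-up values
sag_depths = [3.55, 3.68, 3.74, 3.72, 3.80, 3.95]   # made-up values
r = pearson_r(k_readings, sag_depths)
```

With real clinical data the scatter is larger, which is why the study's strongest observed correlation is a moderate r = 0.58 rather than a value near 1.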

Conclusion:  The Visante OCT is a valuable tool for imaging the anterior segment of the eye. The Visante OCT is especially effective in providing the biometry of the peripheral cornea and sclera and may help in fitting GP lenses with a higher percentage of initial lens success, when the corneal sag and lens sag are better matched.

Relevance:

20.00%

Publisher:

Abstract:

The image fusion process merges two images into a single, more informative image. Objective image fusion performance metrics rely primarily on measuring the amount of information transferred from each source image into the fused image, and have evolved from image processing dissimilarity metrics. Additionally, researchers have developed many extensions to image dissimilarity metrics in order to better weight the locally fusion-worthy features in the source images. This paper studies the evolution of objective image fusion performance metrics and their subjective and objective validation. It describes how a fusion performance metric evolves: starting from an image dissimilarity metric, its adaptation to the image fusion context, its localized weighting factors, and the validation process.
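As an illustration of the localized-weighting idea the paper surveys (a generic sketch, not any specific published metric), a window-wise quality score can weight each window toward the locally more salient source:

```python
import numpy as np

def windows(img, size):
    """Yield non-overlapping size x size tiles of a 2-D image."""
    h, w = img.shape
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            yield img[i:i + size, j:j + size]

def similarity(a, b):
    """A basic dissimilarity metric (MSE) turned into a score in (0, 1]."""
    return 1.0 / (1.0 + np.mean((a - b) ** 2))

def weighted_fusion_quality(src_a, src_b, fused, size=4):
    """Each window's score compares the fused image against both sources,
    weighted by local saliency (here, window variance)."""
    score, n = 0.0, 0
    for wa, wb, wf in zip(windows(src_a, size), windows(src_b, size),
                          windows(fused, size)):
        va, vb = wa.var(), wb.var()
        lam = va / (va + vb) if va + vb > 0 else 0.5
        score += lam * similarity(wa, wf) + (1 - lam) * similarity(wb, wf)
        n += 1
    return score / n

# Toy demo with a deterministic image.
img = np.arange(64, dtype=float).reshape(8, 8)
q_perfect = weighted_fusion_quality(img, img, img)
q_degraded = weighted_fusion_quality(img, img, img + 5.0)
```

Published metrics such as those surveyed in the paper replace MSE with edge- or structure-based dissimilarity measures, but the weighting skeleton is the same.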

Relevance:

20.00%

Publisher:

Abstract:

DDoS attacks are one of the major threats to Internet services. Sophisticated hackers are mimicking the features of legitimate network events, such as flash crowds, to fly under the radar. This poses great challenges for detecting DDoS attacks. In this paper, we propose an attack-feature-independent DDoS flooding attack detection method for local area networks. We employ flow entropy on local area network routers to supervise the network traffic and raise potential DDoS flooding attack alarms when the flow entropy drops significantly in a short period of time. Furthermore, information distance is employed to differentiate DDoS attacks from flash crowds. In general, the attack traffic of one DDoS flooding attack session is generated by many bots from one botnet, all executing the same attack program. As a result, the similarity among attack traffic should be higher than that among flash crowds, which are generated by many independent users. Mathematical models have been established for the proposed detection strategies. Analysis based on the models indicates that the proposed methods can raise the alarm for potential DDoS flooding attacks and can differentiate DDoS flooding attacks from flash crowds under certain conditions. Extensive experiments and simulations confirmed the effectiveness of our proposed detection strategies.
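A minimal sketch of the entropy-based alarm described above, assuming Shannon entropy over per-flow packet counts; the threshold and traffic traces are invented for illustration:

```python
import math
from collections import Counter

def flow_entropy(packets):
    """Shannon entropy of the per-flow packet distribution.

    `packets` is a list of flow identifiers (e.g. (src, dst) tuples),
    one entry per observed packet.
    """
    counts = Counter(packets)
    total = len(packets)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Normal traffic: packets spread across many flows -> high entropy.
normal = ["f1", "f2", "f3", "f4"] * 25
# Flooding attack: traffic concentrated on the victim flow -> entropy drops.
attack = ["victim"] * 90 + ["f1", "f2"] * 5

h_normal = flow_entropy(normal)
h_attack = flow_entropy(attack)

THRESHOLD_DROP = 0.5  # hypothetical fraction; a deployment would tune this
alarm = h_attack < h_normal * (1 - THRESHOLD_DROP)
```

Once the alarm fires, the paper's second stage compares information distance between suspect flows to decide whether the drop came from a botnet or a flash crowd, in the spirit of the probability-metric comparison above.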

Relevance:

20.00%

Publisher:

Abstract:

1. Quantitative tools to describe biological communities are important for conservation and ecological management. The analysis of trophic structure can be used to describe communities quantitatively. Stable isotope analysis is useful for describing trophic organization, but statistical models that allow the identification of general patterns and comparisons between systems or sampling periods have only recently been developed.

2. Here, stable isotope-based Bayesian community-wide metrics are used to investigate patterns in trophic structure in five estuaries that differ in size, sediment yield and catchment vegetation cover (C3/C4): the Zambezi in Mozambique, the Tana in Kenya, and the Rianila, the Betsiboka and Pangalanes Canal (sampled at Ambila) in Madagascar.

3. Primary producers, invertebrates and fish of different trophic ecologies were sampled at each estuary before and after the 2010–2011 wet season. Trophic length, estimated from δ15N, varied between 3.6 (Ambila) and 4.7 levels (Zambezi) and did not vary seasonally for any estuary. Trophic structure differed the most at Ambila, where trophic diversity and trophic redundancy were lower than at the other estuaries. Among the four open estuaries, the Betsiboka and Tana (C4-dominated) had lower trophic diversity than the Zambezi and Rianila (C3-dominated), probably due to the high loads of suspended sediment, which limited the availability of aquatic sources.

4. There was seasonality in trophic structure at Ambila and Betsiboka, as trophic diversity increased and trophic redundancy decreased from the pre-wet to the post-wet season. For Ambila, this probably resulted from the higher variability and availability of sources after the wet season, which allowed diets to diversify. For the Betsiboka, where aquatic productivity is low, this was likely due to a greater input of terrestrial material during the wet season.

5. The comparative analysis of community-wide metrics was useful for detecting patterns in trophic structure and identifying differences and similarities in trophic organization related to environmental conditions. However, more widespread application of these approaches across different faunal communities in contrasting ecosystems is required to allow identification of robust large-scale patterns in trophic structure. The approach used here may also find application in comparing food web organization before and after impacts, or in monitoring ecological recovery after rehabilitation.
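The Bayesian machinery is beyond a short sketch, but point-estimate analogues of two community-wide metrics used here (trophic position from δ15N, and a Layman-style convex hull area as trophic diversity) can be illustrated as follows; the enrichment factor is a widely used assumption and the isotope values are invented, not the paper's data:

```python
DELTA_N = 3.4  # assumed per-trophic-level 15N enrichment (per mil)

def trophic_position(d15n_consumer, d15n_base, base_level=2.0):
    """Estimate trophic position from nitrogen stable-isotope ratios."""
    return base_level + (d15n_consumer - d15n_base) / DELTA_N

def convex_hull_area(points):
    """Layman-style total area (TA): area of the community's convex hull in
    d13C-d15N space, via Andrew's monotone chain + the shoelace formula."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half_hull(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    hull = half_hull(pts) + half_hull(list(reversed(pts)))
    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical community: five taxa as (d13C, d15N) signatures.
community = [(-28.0, 4.0), (-24.5, 7.5), (-22.0, 11.0), (-26.0, 9.0), (-23.5, 13.5)]
ta = convex_hull_area(community)
top_position = trophic_position(13.5, 4.0)
```

A larger hull area indicates greater trophic diversity; comparing areas between estuaries or seasons mirrors the comparative analysis described in the abstract, while the Bayesian versions add credible intervals around these point estimates.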

Relevance:

20.00%

Publisher:

Abstract:

Complexity is increasingly the hallmark in environmental management practices of sandy shorelines. This arises primarily from meeting growing public demands (e.g., real estate, recreation) whilst reconciling economic demands with expectations of coastal users who have modern conservation ethics. Ideally, shoreline management is underpinned by empirical data, but selecting ecologically-meaningful metrics to accurately measure the condition of systems, and the ecological effects of human activities, is a complex task. Here we construct a framework for metric selection, considering six categories of issues that authorities commonly address: erosion; habitat loss; recreation; fishing; pollution (litter and chemical contaminants); and wildlife conservation. Possible metrics were scored in terms of their ability to reflect environmental change, and against criteria that are widely used for judging the performance of ecological indicators (i.e., sensitivity, practicability, costs, and public appeal). From this analysis, four types of broadly applicable metrics that also performed very well against the indicator criteria emerged: 1.) traits of bird populations and assemblages (e.g., abundance, diversity, distributions, habitat use); 2.) breeding/reproductive performance sensu lato (especially relevant for birds and turtles nesting on beaches and in dunes, but equally applicable to invertebrates and plants); 3.) population parameters and distributions of vertebrates associated primarily with dunes and the supralittoral beach zone (traditionally focused on birds and turtles, but expandable to mammals); 4.) compound measurements of the abundance/cover/biomass of biota (plants, invertebrates, vertebrates) at both the population and assemblage level. Local constraints (i.e., the absence of birds in highly degraded urban settings or lack of dunes on bluff-backed beaches) and particular issues may require alternatives. 
Metrics, if selected and applied correctly, provide empirical evidence of environmental condition and change, but they often do not reflect deeper environmental values per se. Yet values remain poorly articulated for many beach systems; this calls for a comprehensive identification of environmental values and the development of targeted programs to conserve them on sandy shorelines globally.
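The scoring exercise described above can be sketched as a simple weighted matrix; the criteria names come from the text, while the weights, ratings and metric names below are purely illustrative:

```python
# Candidate metrics rated 1-5 against the indicator criteria from the text;
# weights and ratings are hypothetical, not taken from the paper.
CRITERIA = ["sensitivity", "practicability", "cost", "public_appeal"]
WEIGHTS = {"sensitivity": 0.4, "practicability": 0.3,
           "cost": 0.2, "public_appeal": 0.1}

candidates = {
    "bird_assemblage_traits": {"sensitivity": 5, "practicability": 4,
                               "cost": 3, "public_appeal": 5},
    "sediment_grain_size": {"sensitivity": 2, "practicability": 5,
                            "cost": 4, "public_appeal": 1},
}

def score(ratings):
    """Weighted sum of criterion ratings; higher = better overall indicator."""
    return sum(WEIGHTS[c] * ratings[c] for c in CRITERIA)

ranked = sorted(candidates, key=lambda m: score(candidates[m]), reverse=True)
```

Under these invented weights, bird assemblage traits outrank the alternative, consistent with the paper's finding that bird-based metrics performed well against the indicator criteria.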

Relevance:

20.00%

Publisher:

Abstract:

Recommendation systems help users and developers of various computer and software systems to overcome information overload, perform information discovery tasks, and approximate computation, among other things. They have recently become popular and have been applied in a wide variety of scenarios ranging from business process modeling to source code manipulation. Because of this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures, as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty and coverage, and we review them according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions.
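Two of the dimensions mentioned (correctness via precision/recall at rank k, and coverage) reduce to short computations; a minimal sketch with invented items:

```python
def precision_at_k(recommended, relevant, k):
    """Correctness: fraction of the top-k recommendations that are relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

def recall_at_k(recommended, relevant, k):
    """Correctness: fraction of all relevant items retrieved in the top k."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / len(relevant)

def catalog_coverage(all_recommendations, catalog):
    """Coverage: share of the catalog the system ever recommends."""
    recommended = set().union(*all_recommendations)
    return len(recommended & set(catalog)) / len(catalog)

# Invented ranked list and ground truth for one user.
recommended = ["a", "b", "c", "d", "e"]
relevant = {"b", "d", "f"}
p5 = precision_at_k(recommended, relevant, 5)
r5 = recall_at_k(recommended, relevant, 5)
cov = catalog_coverage([["a", "b"], ["c"]], ["a", "b", "c", "d"])
```

Dimensions such as novelty or serendipity need extra signals (item popularity, user history), which is why the chapter's sixteen-dimension grouping spans far more than these ranking measures.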