98 results for Metric


Relevance: 10.00%

Publisher:

Abstract:

This paper presents a new study on the application of the framework of Computational Media Aesthetics to the problem of automated understanding of film. Leveraging Film Grammar as the means to closing the "semantic gap" in media analysis, we examine film rhythm, a powerful narrative concept used to endow structure and form to the film compositionally and enhance its lyrical quality experientially. The novelty of this paper lies in the specification and investigation of the rhythmic elements that are present in two cinematic devices; namely motion and editing patterns, and their potential usefulness to automated content annotation and management systems. In our rhythm model, motion behavior is classified as being either nonexistent, fluid or staccato for a given shot. Shot neighborhoods in movies are then grouped by proportional makeup of these motion behavioral classes to yield seven high-level rhythmic arrangements that prove to be adept at indicating likely scene content (e.g. dialogue or chase sequence) in our experiments. The second part of our investigation presents a computational model to detect editing patterns as either metric, accelerated, decelerated or free. Details of the algorithm for the extraction of these classes are presented, along with experimental results on real movie data. We show with an investigation of combined rhythmic patterns that, while detailed content identification via rhythm types alone is not possible by virtue of the fact that film is not codified to this level in terms of rhythmic elements, analysis of the combined motion/editing rhythms can allow us to determine that the content has changed and hypothesize as to why this is so. We present three such categories of change and demonstrate their efficacy for capturing useful film elements (e.g. scene change precipitated by plot event), by providing data support from five motion pictures.
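
The four editing-rhythm classes lend themselves to a simple shot-length heuristic. The sketch below uses illustrative thresholds that are not taken from the paper: it classifies a run of shot lengths as metric (near-constant), accelerated (monotonically shortening), decelerated (monotonically lengthening) or free.

```python
def classify_editing_rhythm(shot_lengths, tol=0.15):
    """Classify a run of shot lengths (in seconds) as 'metric',
    'accelerated', 'decelerated', or 'free'.
    Thresholds are illustrative assumptions, not the paper's values."""
    if len(shot_lengths) < 3:
        return "free"
    mean = sum(shot_lengths) / len(shot_lengths)
    # metric: all shots close to the mean length
    if all(abs(l - mean) <= tol * mean for l in shot_lengths):
        return "metric"
    diffs = [b - a for a, b in zip(shot_lengths, shot_lengths[1:])]
    if all(d < 0 for d in diffs):
        return "accelerated"   # shots get shorter -> pace speeds up
    if all(d > 0 for d in diffs):
        return "decelerated"   # shots get longer -> pace slows down
    return "free"
```

A real implementation would operate on shot boundaries detected from the video stream and tolerate small deviations from strict monotonicity.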

Relevance: 10.00%

Publisher:

Abstract:

Speculative prefetching has been proposed to improve the response time of network access. Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modeling. We analyze the performance of a prefetcher that has uncertain knowledge about future accesses. Our performance metric is the improvement in access time, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). We develop a prefetch algorithm to maximize the improvement in access time. The algorithm is based on finding the best solution to a stretch knapsack problem, using theoretically proven apparatus to reduce the search space. An integration between speculative prefetching and caching is also investigated.
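
The shape of such a performance metric can be sketched as follows. The formula here is an illustrative simplification built from the paper's parameter families (time available, time required, access probabilities), not the paper's exact derivation:

```python
def expected_improvement(p, t_required, t_available):
    """Expected reduction in access time from prefetching one item:
    at most t_available seconds of the t_required fetch time can be
    hidden in the idle period, weighted by the access probability p.
    (Illustrative model, not the paper's formula.)"""
    return p * min(t_required, t_available)

def choose_prefetch(candidates, t_available):
    """Pick the single item with the largest expected improvement.
    candidates: list of (name, probability, time_required)."""
    return max(candidates,
               key=lambda c: expected_improvement(c[1], c[2], t_available))[0]
```

Under this model a low-probability item with a long fetch time can still be the best candidate, because more of its latency can be hidden.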

Relevance: 10.00%

Publisher:

Abstract:

This paper presents a model for space in which an autonomous agent acquires information about its environment. The agent uses a predefined exploration strategy to build a map allowing it to navigate and deduce relationships between points in space. The shapes of objects in the environment are represented qualitatively. This shape information is deduced from the agent's motion. Normally, in a qualitative model, directional information degrades under transitive deduction. By reasoning about the shape of the environment, the agent can match visual events to points on the objects. This strengthens the model by allowing further relationships to be deduced. In particular, points that are separated by long distances, or complex surfaces, can be related by line-of-sight. These relationships are deduced without incorporating any metric information into the model. Examples are given to demonstrate the use of the model.

Relevance: 10.00%

Publisher:

Abstract:

Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). The performance maximization problem is expressed as a stretch knapsack problem. We develop an algorithm to maximize the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
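
The stretch knapsack formulation is specific to the paper, but the flavour of the optimisation can be sketched as an ordinary 0/1 knapsack over prefetch candidates: choose the subset whose total fetch time fits the idle-time budget and whose expected access-time saving is maximal. The additive objective and the names below are illustrative assumptions:

```python
def best_prefetch_set(items, time_budget):
    """items: list of (name, probability, fetch_time) with integer
    fetch times. Returns (best expected saving, chosen names) via a
    0/1 knapsack DP, where an item's saving is probability * fetch_time.
    (A simplified stand-in for the paper's stretch knapsack.)"""
    # dp[t] = (best saving, chosen names) using at most t time units
    dp = [(0.0, [])] * (time_budget + 1)
    for name, p, t in items:
        new_dp = dp[:]
        for budget in range(t, time_budget + 1):
            saving, chosen = dp[budget - t]
            cand = saving + p * t
            if cand > new_dp[budget][0]:
                new_dp[budget] = (cand, chosen + [name])
        dp = new_dp
    return dp[time_budget]
```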

Relevance: 10.00%

Publisher:

Abstract:

Optimum subwindow search for object detection aims to find a subwindow such that the contained subimage is most similar to the query object. This problem can be formulated as a four-dimensional (4D) maximum entry search problem wherein each entry corresponds to the quality score of the subimage contained in a subwindow. For n × n images, a naive exhaustive search requires O(n^4) sequential computations of the quality scores for all subwindows. To reduce the time complexity, we prove that, for some typical similarity functions such as the Euclidean metric and the χ² metric on image histograms, the associated 4D array carries some Monge structures, and we utilise these properties to speed up the optimum subwindow search, reducing the time complexity to O(n^3). Furthermore, we propose a locally optimal alternating column and row search method with typical quadratic time complexity O(n^2). Experiments on PASCAL VOC 2006 demonstrate that the alternating method is significantly faster than the well-known efficient subwindow search (ESS) method whilst the performance loss due to the local maxima problem is negligible.
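
Under an additive per-pixel score map (a simplifying assumption; the paper's histogram similarity functions are more general), the alternating column and row search can be sketched with a 1-D maximum-segment (Kadane) subroutine:

```python
def kadane(xs):
    """Best contiguous segment of xs: returns (start, end, sum),
    with start/end inclusive."""
    best_sum, best = float("-inf"), (0, 0)
    cur_sum, cur_start = 0.0, 0
    for i, x in enumerate(xs):
        if cur_sum <= 0:
            cur_sum, cur_start = x, i
        else:
            cur_sum += x
        if cur_sum > best_sum:
            best_sum, best = cur_sum, (cur_start, i)
    return best[0], best[1], best_sum

def alternating_subwindow_search(score, iters=10):
    """Locally optimal subwindow for an additive per-pixel score map:
    alternately fix the row range and optimise the column range (and
    vice versa) with a 1-D maximum-segment search. Returns
    (top, bottom, left, right), all inclusive."""
    n_rows, n_cols = len(score), len(score[0])
    top, bottom, left, right = 0, n_rows - 1, 0, n_cols - 1
    for _ in range(iters):
        col_sums = [sum(score[r][c] for r in range(top, bottom + 1))
                    for c in range(n_cols)]
        left, right, _ = kadane(col_sums)
        row_sums = [sum(score[r][c] for c in range(left, right + 1))
                    for r in range(n_rows)]
        top, bottom, _ = kadane(row_sums)
    return top, bottom, left, right
```

Each alternation is linear in the image side length, which is the source of the method's typical quadratic overall cost; like the paper's method, it can stop in a local maximum.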

Relevance: 10.00%

Publisher:

Abstract:

One of the fundamental issues in building autonomous agents is to be able to sense, represent and react to the world. Some of the earlier work [Mor83, Elf90, AyF89] has aimed towards a reconstructionist approach, where a number of sensors are used to obtain input that is used to construct a model of the world that mirrors the real world. Sensing and sensor fusion were thus important aspects of such work. Such approaches have had limited success, and some of the main problems were the issues of uncertainty arising from sensor error and errors that accumulated in metric, quantitative models. Recent research has therefore looked at different ways of examining the problems. Instead of attempting to get the most accurate and correct model of the world, these approaches look at qualitative models to represent the world, which maintain relative and significant aspects of the environment rather than all aspects of the world. The relevant aspects of the world that are retained are determined by the task at hand, which in turn determines how to sense. That is, task-directed or purposive sensing is used to build a qualitative model of the world, which, though inaccurate and incomplete, is sufficient to solve the problem at hand. This paper examines the issues of building up a hierarchical knowledge representation of the environment with limited sensor input that can be actively acquired by an agent capable of interacting with the environment. Different tasks require different aspects of the environment to be abstracted out. For example, low level tasks such as navigation require aspects of the environment that are related to layout and obstacle placement. For the agent to be able to reposition itself in an environment, significant features of spatial situations and their relative placement need to be kept.
For the agent to reason about objects in space, for example to determine the position of one object relative to another, the representation needs to retain information on the relative locations of the start and finish of the objects, that is, the endpoints of objects on a grid. For the agent to be able to do high level planning, the agent may need only the relative position of the starting point and destination, and not the low level details of endpoints, visual clues and so on. This indicates that a hierarchical approach would be suitable, such that each level in the hierarchy is at a different level of abstraction, and thus suitable for a different task. At the lowest level, the representation contains low level details of the agent's motion and visual clues to allow the agent to navigate and reposition itself. At the next level of abstraction the aspects of the representation allow the agent to perform spatial reasoning, and finally the highest level of abstraction in the representation can be used by the agent for high level planning.

Relevance: 10.00%

Publisher:

Abstract:

This paper forms a continuation of our work focused on exploiting film grammar for the task of automated film understanding. We examine film rhythm, a powerful narrative concept used to endow structure and form to the film compositionally and to enhance its lyrical quality experientially. Of the many, often complex, cinematic devices contributing to film rhythm, this paper investigates the rhythmic elements that are present in edited sequences of shots, and presents a novel computational model to detect shot structural rhythm as either metric, accelerated, decelerated, or free. Details of the algorithm for the extraction of these editing rhythm classes are presented, along with experimental results on real movie data. Following this we study the usefulness of combining the rhythmic patterns induced through both motion and editing in film. We show that, whilst detailed content identification via rhythm types alone is not possible by virtue of the fact that film is not codified to this level in terms of rhythmic elements, analysis of the combined motion/shot rhythm can allow us to determine that the content has changed and hypothesize as to why this is so. We present 3 such categories of change and demonstrate their efficacy for capturing useful film elements (e.g., scene change precipitated by plot event), by providing data support from 5 motion pictures.

Relevance: 10.00%

Publisher:

Abstract:

This report provides an overview of results from the Australian Burden of Disease and Injury Study undertaken by the AIHW during 1998 and 1999. The Study uses the methods developed for the Global Burden of Disease Study, adapted to the Australian context and drawing extensively on Australian sources of population health data. It provides a comprehensive assessment of the amount of ill health and disability, the ‘burden of disease’ in Australia in 1996.

Mortality, disability, impairment, illness and injury arising from 176 diseases, injuries and risk factors are measured using a common metric, the Disability-Adjusted Life Year or DALY. One DALY is a lost year of ‘healthy’ life and is calculated as a combination of years of life lost due to premature mortality (YLL) and equivalent ‘healthy’ years of life lost due to disability (YLD). This report provides estimates of the contribution of fatal and non-fatal health outcomes to the total burden of disease and injury measured in DALYs in Australia in 1996.
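
The DALY arithmetic itself is straightforward. The sketch below follows the YLL + YLD definition stated above, but omits the discounting and age-weighting used in the full burden-of-disease methodology:

```python
def yll(deaths, life_expectancy_at_death):
    """Years of life lost to premature mortality."""
    return deaths * life_expectancy_at_death

def yld(cases, duration_years, disability_weight):
    """Equivalent 'healthy' years lost to disability; the disability
    weight runs from 0 (full health) to 1 (equivalent to death)."""
    return cases * duration_years * disability_weight

def daly(yll_value, yld_value):
    """DALY = YLL + YLD."""
    return yll_value + yld_value
```

For example, 100 deaths at an average of 30 years of remaining life expectancy, plus 1000 cases of a condition lasting 2 years with disability weight 0.25, total 3500 DALYs.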

Relevance: 10.00%

Publisher:

Abstract:

Distributed Denial of Service (DDoS) attacks are a critical threat to the Internet, and botnets are usually the engines behind them. Sophisticated botmasters attempt to disable detectors by mimicking the traffic patterns of flash crowds. This poses a critical challenge to those who defend against DDoS attacks. In our in-depth study of the size and organization of current botnets, we found that current attack flows are usually more similar to each other than the flows of flash crowds. Based on this, we proposed a discrimination algorithm using the flow correlation coefficient as a similarity metric among suspicious flows. We formulated the problem and presented theoretical proofs of the feasibility of the proposed discrimination method. Our extensive experiments confirmed the theoretical analysis and demonstrated the effectiveness of the proposed method in practice.
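
A minimal sketch of the idea, assuming flows are represented as aligned packet-rate time series and using the Pearson correlation coefficient as the flow correlation metric (the threshold is illustrative, not the paper's value):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def looks_like_botnet(flows, threshold=0.9):
    """Flag traffic as a likely DDoS attack when every pair of
    suspicious flows is highly correlated (illustrative rule:
    flash-crowd flows tend to be far less similar to each other)."""
    pairs = [(i, j) for i in range(len(flows))
             for j in range(i + 1, len(flows))]
    return all(pearson(flows[i], flows[j]) >= threshold for i, j in pairs)
```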

Relevance: 10.00%

Publisher:

Abstract:

Anonymous communication has become a hot research topic in order to meet the increasing demand for web privacy protection. However, few such systems can provide high-level anonymity for web browsing. The reason is the currently dominant dummy packet padding method for anonymization against traffic analysis attacks, which incurs huge delay and bandwidth waste and so inhibits its use for web browsing. In this paper, we propose a predicted packet padding strategy to replace the dummy packet padding method in anonymous web browsing systems. The proposed strategy mitigates delay and bandwidth waste significantly on average. We formulated the traffic analysis attack and defense problem, and defined a metric, the cost coefficient of anonymization (CCA), to measure the performance of anonymization. We thoroughly analyzed the problem with the characteristics of web browsing and concluded that the proposed strategy is better than the current dummy packet padding strategy in theory. We have conducted extensive experiments on two real-world data sets, and the results confirmed the advantage of the proposed method.
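
The bandwidth side of the trade-off can be illustrated as follows. This is only a sketch of one cost component under assumed padding policies (fixed-size padding versus padding to a per-packet predicted size), not the paper's CCA metric:

```python
def dummy_padding_waste(packet_sizes, pad_to):
    """Bytes wasted when every packet is padded to one fixed size."""
    return sum(pad_to - s for s in packet_sizes)

def predicted_padding_waste(packet_sizes, predicted_sizes):
    """Bytes wasted when each packet is padded only up to its
    predicted size; an accurate predictor pads far less than
    fixed-size padding."""
    return sum(max(p - s, 0) for s, p in zip(packet_sizes, predicted_sizes))
```

With packets of 100, 400 and 900 bytes, padding everything to 1500 bytes wastes 3100 bytes, while padding to reasonably accurate predictions wastes only a small fraction of that.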

Relevance: 10.00%

Publisher:

Abstract:

Networking of computing devices has been going through rapid evolution and thus continues to be an ever-expanding area of importance. New technologies, protocols, services and usage patterns have contributed to the major research interests in this area of computer science. The current special issue is an effort to bring forward some of these interesting developments that are being pursued by researchers at present in different parts of the globe. Our objective is to provide the readership with some insight into the latest innovations in computer networking. This Special Issue presents selected papers from the thirteenth conference of the series (ICCIT 2010), held during December 23-25, 2010 at the Ahsanullah University of Science and Technology. The first ICCIT was held in Dhaka, Bangladesh, in 1998. Since then the conference has grown to be one of the largest computer and IT related research conferences in the South Asian region, with participation of academics and researchers from many countries around the world. Starting in 2008, the proceedings of ICCIT have been included in IEEE Xplore. In 2010, a total of 410 full papers were submitted to the conference, of which 136 were accepted after reviews conducted by an international program committee comprising 81 members from 16 countries. This corresponds to an acceptance rate of 33%. From these 136 papers, 14 highly ranked manuscripts were invited for this Special Issue. The authors were advised to enhance their papers significantly and submit them to undergo review for suitability of inclusion in this publication. Of those, eight papers survived the review process and have been selected for inclusion in this Special Issue. The authors of these papers represent academic and/or research institutions from Australia, Bangladesh, Japan, Korea and the USA.
These papers address issues concerning different domains of networks, namely optical fiber communication, wireless and interconnection networks, networking hardware and software, and network mobility. The paper titled “Virtualization in Wireless Sensor Network: Challenges and Opportunities” argues in favor of bringing different heterogeneous sensors under a common virtual framework so that issues like flexibility, diversity, management and security can be handled practically. The authors, Md. Motaharul Islam and Eui-Num Huh, propose an architecture for sensor virtualization. They also present the current status and the challenges and opportunities for further research on the topic. The manuscript “Effect of Polarization Mode Dispersion on the BER Performance of Optical CDMA” deals with the impact of polarization mode dispersion on the bit error rate performance of direct sequence optical code division multiple access. The authors, Md. Jahedul Islam and Md. Rafiqul Islam, present an analytical approach to determining the impact of different performance parameters. The authors show that the bit error rate performance improves significantly more with third order polarization mode dispersion than with its first or second order counterparts. The authors Md. Shohrab Hossain, Mohammed Atiquzzaman and William Ivancic, in the paper “Cost and Efficiency Analysis of NEMO Protocol Entities”, present an analytical model for estimating the cost incurred by the major mobility entities of a NEMO. The authors define a new metric for cost calculation in the process. Both the newly developed metric and the analytical model are likely to be useful to network engineers in estimating resource requirements at the key entities while designing such a network. The article titled “A Highly Flexible LDPC Decoder using Hierarchical Quasi-Cyclic Matrix with Layered Permutation” deals with Low Density Parity Check decoders.
The authors, Vikram Arkalgud Chandrasetty and Syed Mahfuzul Aziz, propose a novel multi-level structured hierarchical matrix approach for flexibly generating codes of different lengths depending upon the requirements of the application. The manuscript “Analysis of Performance Limitations in Fiber Bragg Grating Based Optical Add-Drop Multiplexer due to Crosstalk” has been contributed by M. Mahiuddin and M. S. Islam. The paper proposes a new method of handling crosstalk in a fiber Bragg grating based optical add-drop multiplexer (OADM). With an analytical model, the authors show that different parameters improve using their proposed OADM. The paper “High Performance Hierarchical Torus Network Under Adverse Traffic Patterns” addresses issues related to the hierarchical torus network (HTN) under adverse traffic patterns. The authors, M.M. Hafizur Rahman, Yukinori Sato, and Yasushi Inoguchi, observe that the dynamic communication performance of an HTN under adverse traffic conditions has not yet been addressed. The authors evaluate the performance of the HTN in comparison with some other relevant networks. It is interesting to see that the HTN outperforms these counterparts in terms of throughput and data transfer under adverse traffic. The manuscript titled “Dynamic Communication Performance Enhancement in Hierarchical Torus Network by Selection Algorithm” has been contributed by M.M. Hafizur Rahman, Yukinori Sato, and Yasushi Inoguchi. The authors introduce three simple adaptive routing algorithms for efficient use of physical links and virtual channels in the hierarchical torus network, and show that their approaches yield better performance for such networks. The final title, “An Optimization Technique for Improved VoIP Performance over Wireless LAN”, has been contributed by five authors, namely Tamal Chakraborty, Atri Mukhopadhyay, Suman Bhunia, Iti Saha Misra and Salil K. Sanyal. The authors propose an optimization technique for configuring the parameters of the access points.
In addition, they come up with an optimization mechanism to tune the threshold of the active queue management system appropriately. Put together, these mechanisms improve VoIP performance significantly under congestion. Finally, the Guest Editors would like to express their sincere gratitude to the 15 reviewers besides the guest editors themselves (Khalid M. Awan, Mukaddim Pathan, Ben Townsend, Morshed Chowdhury, Iftekhar Ahmad, Gour Karmakar, Shivali Goel, Hairulnizam Mahdin, Abdullah A Yusuf, Kashif Sattar, A.K.M. Azad, F. Rahman, Bahman Javadi, Abdelrahman Desoky, Lenin Mehedy) from several countries (Australia, Bangladesh, Japan, Pakistan, UK and USA) who have contributed immensely to this process. They responded to the Guest Editors in the shortest possible time and dedicated their valuable time to ensuring that the Special Issue contains high-quality papers with significant novelty and contributions.

Relevance: 10.00%

Publisher:

Abstract:

The knowledge embedded in an online data stream is likely to change over time due to the dynamic evolution of the stream. Consequently, in frequent episode mining over an online stream, frequent episodes should be adaptively extracted from recently generated stream segments instead of the whole stream. However, almost all existing frequent episode mining approaches find episodes that occur frequently over the whole sequence. This paper proposes and investigates a new problem: online mining of recently frequent episodes over data streams. In order to meet the strict requirements of stream mining, such as one-scan processing, adaptive result update and instant result return, we choose a novel frequency metric and define a highly condensed set called the base of recently frequent episodes. We then introduce a one-pass method for mining bases of recently frequent episodes. Experimental results show that the proposed method finds bases of recently frequent episodes quickly and adaptively. The proposed method outperforms previous approaches with the advantages of one-pass processing, instant result update and return, more condensed result sets and lower space usage.
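
A much-simplified stand-in for the recently-frequent idea is a one-pass, time-decayed counter: each new event multiplies existing counts by a decay factor, so old occurrences fade geometrically and only recently frequent items stay above threshold. The decay factor and thresholds below are illustrative, and this counter tracks single items rather than the paper's episode bases:

```python
from collections import defaultdict

class RecentFrequency:
    """One-pass, time-decayed frequency counter (illustrative sketch,
    not the paper's algorithm)."""
    def __init__(self, decay=0.9):
        self.decay = decay
        self.counts = defaultdict(float)

    def observe(self, item):
        # age every existing count, then credit the new arrival
        for k in self.counts:
            self.counts[k] *= self.decay
        self.counts[item] += 1.0

    def frequent(self, min_count=1.0):
        """Items whose decayed count is still above the threshold."""
        return {k for k, v in self.counts.items() if v >= min_count}
```

A production stream miner would decay lazily per key (storing a timestamp with each count) rather than touching every counter on each event.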

Relevance: 10.00%

Publisher:

Abstract:

Various solutions have been proposed for managing trust relationships between trading partners in eCommerce environments. Determining the reliability of trust management systems in eCommerce is a most difficult issue due to the highly dynamic nature of eCommerce environments. As trust management systems depend on the feedback ratings provided by the trading partners, they are vulnerable to attacks that strategically manipulate the feedback ratings. This paper addresses the challenges of trust management systems, and the requirements of a reliable trust management system are also discussed. In particular, we introduce an adaptive credibility model that distinguishes between credible feedback ratings and malicious feedback ratings by considering transaction size, frequency of ratings and majority vote to form a feedback ratings verification metric. The approach has been validated by simulation results.
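
The three signals named above can be combined into a single weight per rating. The weights, scales and linear combination below are assumptions for illustration, not the paper's model:

```python
def feedback_credibility(transaction_value, rater_num_ratings,
                         agrees_with_majority,
                         value_scale=1000.0, ratings_scale=50.0):
    """Illustrative credibility weight in [0, 1] for one feedback
    rating, combining transaction size, the rater's rating frequency,
    and agreement with the majority vote. All coefficients are
    assumed values, not taken from the paper."""
    size_factor = min(transaction_value / value_scale, 1.0)
    frequency_factor = min(rater_num_ratings / ratings_scale, 1.0)
    majority_factor = 1.0 if agrees_with_majority else 0.0
    return 0.4 * size_factor + 0.3 * frequency_factor + 0.3 * majority_factor
```

Ratings below some credibility cut-off would then be excluded from (or down-weighted in) the aggregated trust score.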

Relevance: 10.00%

Publisher:

Abstract:

There are a growing number of large-scale freshwater ecological restoration projects worldwide. Assessments of the benefits and costs of restoration often exclude an analysis of uncertainty in the modelled outcomes. To address this shortcoming, we explicitly model the uncertainties associated with measures of ecosystem health in the estuary of the Murray–Darling Basin, Australia, and how those measures may change with the implementation of a Basin-wide Plan to recover water to improve ecosystem health. Specifically, we compare two metrics – one simple and one more complex – to manage end-of-system flow requirements for one ecosystem asset in the Basin, the internationally important Coorong saline wetlands. Our risk assessment confirms that the ecological conditions in the Coorong are likely to improve with implementation of the Basin Plan; however, there are risks of a Type III error (where the correct answer is found for the wrong question) associated with using the simple metric for adaptive management.

Relevance: 10.00%

Publisher:

Abstract:

Image fusion merges two images into a single, more informative image. Objective image fusion performance metrics rely primarily on measuring the amount of information transferred from each source image into the fused image, and have evolved from image processing dissimilarity metrics. Additionally, researchers have developed many extensions to image dissimilarity metrics in order to better weight the locally fusion-worthy features in source images. This paper studies the evolution of objective image fusion performance metrics and their subjective and objective validation. It describes how a fusion performance metric evolves: starting from image dissimilarity metrics, through its adaptation to image fusion contexts and its localized weighting factors, to the validation process.
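
A classic example of such a metric is the mutual-information fusion measure: the information the fused image shares with each source image, estimated from joint grey-level histograms. A minimal sketch follows; the bin count and the simple additive combination of the two MI terms are illustrative choices:

```python
from math import log2

def mutual_information(img_a, img_b, bins=8, levels=256):
    """Mutual information (in bits) between two equally sized
    grey-scale images, given as flat lists of 0..levels-1 values,
    estimated from a joint histogram with `bins` bins per image."""
    n = len(img_a)
    joint = [[0] * bins for _ in range(bins)]
    for a, b in zip(img_a, img_b):
        joint[a * bins // levels][b * bins // levels] += 1
    pa = [sum(row) / n for row in joint]
    pb = [sum(joint[i][j] for i in range(bins)) / n for j in range(bins)]
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pij = joint[i][j] / n
            if pij > 0:
                mi += pij * log2(pij / (pa[i] * pb[j]))
    return mi

def fusion_quality(src1, src2, fused):
    """Additive MI-based fusion metric: information the fused image
    shares with each source, summed (one simple variant of the
    information-transfer metrics discussed above)."""
    return mutual_information(src1, fused) + mutual_information(src2, fused)
```

The localized refinements surveyed in the paper replace such global histograms with per-window statistics weighted by local saliency.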