920 results for Information Models
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models based on the Markov property, which can be classified into black box and white box models according to the modelling approach used. White box models are simple to understand, transparent, and attribute a physical meaning to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple, classic continuous-time Markov models based on a white box approach to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the importance of second-order density statistics, revealing that, in contrast to first-order density statistics, they carry much more unique information about traffic sources and their behaviour. The thesis then exploits Gaussian Markov models to capture these unique features and finally shows how simple classic Markov models, coupled with second-order density statistics, provide an excellent tool for capturing maximum traffic detail, which is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multilingual database of over 100 hours' worth of VoIP call recordings. The impact of the speaker's language, prosodic structure and speech rate on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
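As a rough illustration of the kind of source model described in this abstract, the sketch below simulates an ON-OFF source with exponentially distributed holding times (the classic two-state Markov case) alongside a variant with log-normal ON/OFF periods; all means and log-normal parameters are invented for the example and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_on_off(n_cycles, on_sampler, off_sampler):
    """Draw alternating ON/OFF holding times for an ON-OFF source."""
    return on_sampler(n_cycles), off_sampler(n_cycles)

# Classic two-state (continuous-time Markov) source: exponential holding times.
# The mean ON/OFF durations below are illustrative, not values from the thesis.
exp_on, exp_off = simulate_on_off(
    10_000,
    lambda n: rng.exponential(scale=0.35, size=n),   # assumed mean ON period (s)
    lambda n: rng.exponential(scale=0.65, size=n))   # assumed mean OFF period (s)

# Variant with log-normal ON/OFF periods, as proposed for VoIP sources.
ln_on, ln_off = simulate_on_off(
    10_000,
    lambda n: rng.lognormal(mean=-1.2, sigma=0.8, size=n),
    lambda n: rng.lognormal(mean=-0.6, sigma=0.9, size=n))

# Compare the resulting activity factors (fraction of time spent ON).
for name, on, off in [("exponential", exp_on, exp_off),
                      ("log-normal", ln_on, ln_off)]:
    print(f"{name:12s} activity factor: {on.sum() / (on.sum() + off.sum()):.3f}")
```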
Abstract:
This thesis proposes a novel graphical model for inference called the Affinity Network, which displays the closeness between pairs of variables and is an alternative to Bayesian Networks and Dependency Networks. The Affinity Network shares some similarities with these two models but avoids their heuristic and stochastic graph construction algorithms by using a message passing scheme. A comparison with the above two instances of graphical models is given for sparse discrete and continuous medical data and for data taken from the UCI machine learning repository. The experimental study reveals that the Affinity Network graphs tend to be more accurate, on the basis of an exhaustive search, for the small datasets. Moreover, the graph construction algorithm is faster than the other two methods on very large datasets. The Affinity Network is also applied to data produced by a synchronised system. A detailed analysis and numerical investigation of this dynamical system is provided, and it is shown that the Affinity Network can be used to characterise its emergent behaviour even in the presence of noise.
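The abstract does not spell out the message passing scheme, so the sketch below only illustrates the general idea of scoring the closeness of variable pairs (here with mutual information) and keeping the strongest links; it is not the Affinity Network construction algorithm, and the threshold and toy data are invented.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

def affinity_graph(data, names, threshold=0.05):
    """Score every pair of discrete variables by mutual information and keep
    the pairs whose affinity exceeds the (arbitrary) threshold."""
    edges = []
    for a, b in combinations(range(len(names)), 2):
        mi = mutual_info_score(data[:, a], data[:, b])
        if mi > threshold:
            edges.append((names[a], names[b], round(mi, 3)))
    return sorted(edges, key=lambda e: -e[2])

# Toy categorical data: B and C depend on A, D is independent noise.
rng = np.random.default_rng(1)
a = rng.integers(0, 3, size=500)
data = np.column_stack([a,
                        (a + rng.integers(0, 2, 500)) % 3,
                        a // 2,
                        rng.integers(0, 3, 500)])
print(affinity_graph(data, ["A", "B", "C", "D"]))
```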
Abstract:
This empirical study employs a different methodology to examine the change in wealth associated with mergers and acquisitions (M&As) for US firms. Specifically, we employ the standard CAPM, the Fama-French three-factor model and the Carhart four-factor model within the OLS and GJR-GARCH estimation methods to test the behaviour of the cumulative abnormal returns (CARs). Whilst the standard CAPM captures the variability of stock returns with the overall market, the Fama-French factors capture the risk factors that are important to investors. Additionally, augmenting the Fama-French three-factor model with the Carhart momentum factor to generate the four-factor model captures additional pricing elements that may affect stock returns. Traditionally, estimates of abnormal returns (ARs) in M&A situations rely on the standard OLS estimation method. However, standard OLS will provide inefficient estimates of the ARs if the data contain ARCH and asymmetric effects. To minimise this problem of estimation efficiency, we re-estimated the ARs using the GJR-GARCH estimation method. We find that there is variation in the results with respect to both the choice of model and the estimation method. Besides these variations, we also tested whether the ARs are affected by the degree of liquidity of the stocks and the size of the firm. We document significant positive post-announcement CARs for target firm shareholders under both the OLS and GJR-GARCH methods across all three models. However, post-event CARs for acquiring firm shareholders were insignificant for both estimation methods under the three models. The GJR-GARCH method seems to generate larger CARs than the OLS method. Using both market capitalization and trading volume as measures of liquidity and firm size, we observed strong return continuations in medium-sized firms relative to small and large firms for target shareholders. We consistently observed market efficiency in small and large firms. This implies that small and large target firms overreact to new information, resulting in a more efficient market. For acquirer firms, our measure of liquidity captures strong return continuations for small firms under the OLS estimates for both the CAPM and the Fama-French three-factor model, whilst under the GJR-GARCH estimates only for the Carhart model. Post-announcement bootstrap-simulated CARs confirmed our earlier results.
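For reference, the standard forms of the models named above are shown below in generic notation (the paper's exact event-window and estimation conventions may differ); the CAPM and the Fama-French three-factor model are nested cases obtained by dropping the momentum factor (and SMB/HML).

\[
R_{i,t}-R_{f,t}=\alpha_i+\beta_i\,(R_{m,t}-R_{f,t})+s_i\,\mathrm{SMB}_t+h_i\,\mathrm{HML}_t+m_i\,\mathrm{MOM}_t+\varepsilon_{i,t}
\]
\[
AR_{i,t}=R_{i,t}-\hat{R}_{i,t},\qquad CAR_i(\tau_1,\tau_2)=\sum_{t=\tau_1}^{\tau_2}AR_{i,t}
\]
\[
\sigma_t^2=\omega+\alpha\,\varepsilon_{t-1}^2+\gamma\,I(\varepsilon_{t-1}<0)\,\varepsilon_{t-1}^2+\beta\,\sigma_{t-1}^2\quad\text{(GJR-GARCH(1,1) conditional variance)}
\]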
Abstract:
Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions, respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics: low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows: 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the high beta (20-30 Hz) frequency band but in different time windows: 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in the time and frequency domains. © 2012 McNab, Hillebrand, Swithenby and Rippon.
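As a hedged illustration of the time-frequency measures described (not the authors' beamformer pipeline), the sketch below estimates band-limited power in a pre-selected time window from a single virtual-sensor time series; the sampling rate, filter order and synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(ts, fs, band, window):
    """Mean envelope power of `ts` in a frequency band (Hz) over a time window (s)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, ts))) ** 2
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    return envelope[i0:i1].mean()

# Synthetic 1-s virtual-sensor trace at an assumed 600 Hz sampling rate,
# with a burst of 25 Hz (high beta) activity between 350 and 550 ms.
fs = 600
t = np.arange(0, 1, 1 / fs)
ts = np.random.default_rng(2).normal(0, 1, t.size)
burst = slice(int(0.35 * fs), int(0.55 * fs))
ts[burst] += 3 * np.sin(2 * np.pi * 25 * t[burst])

print("high beta, 350-550 ms:", band_power(ts, fs, (20, 30), (0.35, 0.55)))
print("high beta, 500-700 ms:", band_power(ts, fs, (20, 30), (0.50, 0.70)))
```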
Abstract:
Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising, given that these models are built upon money demand functions, and traditional money demand functions appear to have broken down in many developed countries. In this paper we investigate whether using a more stable underlying money demand function results in improved forecasts from monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of the US monetary aggregate M1, which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.
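A standard flexible-price monetary specification of the kind such forecasting comparisons build on is shown below for reference; the paper's exact specification, lag structure and sign conventions may differ.

\[
s_t=(m_t-m_t^*)-\phi\,(y_t-y_t^*)+\lambda\,(i_t-i_t^*)+u_t,
\]
where \(s_t\) is the log exchange rate, \(m_t\) the log money supply (here the sweep-adjusted M1 measure), \(y_t\) log real income, \(i_t\) the interest rate, and asterisks denote foreign-country variables; out-of-sample forecasts are then compared against the driftless random walk \(s_{t+1}=s_t+\varepsilon_{t+1}\).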
Abstract:
Continuing advances in digital image capture and storage are resulting in a proliferation of imagery and associated problems of information overload in image domains. In this work we present a framework that supports image management using an interactive approach that captures and reuses task-based contextual information. Our framework models the relationship between images and the domain tasks they support by monitoring the interactive manipulation and annotation of task-relevant imagery. During image analysis, interactions are captured and a task context is dynamically constructed so that human expertise, proficiency and knowledge can be leveraged to support other users in carrying out similar domain tasks using case-based reasoning techniques. In this article we present our framework for capturing task context and describe how we have implemented it as two image retrieval applications in the geo-spatial and medical domains. We present an evaluation that tests the efficiency of our algorithms for retrieving image context information and the effectiveness of the framework for carrying out goal-directed image tasks. © 2010 Springer Science+Business Media, LLC.
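A minimal sketch of the case-based reasoning retrieval step described above; the framework's actual context representation and similarity measure are not given in the abstract, so the feature vectors, task labels and cosine similarity below are assumptions.

```python
import numpy as np

def retrieve_similar_contexts(query_vec, case_base, k=3):
    """Rank stored task contexts by cosine similarity to the current
    interaction/annotation feature vector (generic CBR retrieval step)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scored = [(cos(query_vec, vec), task) for task, vec in case_base]
    return sorted(scored, reverse=True)[:k]

# Hypothetical case base: task label -> bag-of-interactions feature vector.
case_base = [
    ("flood-extent mapping", np.array([3., 0., 5., 1.])),
    ("tumour boundary annotation", np.array([0., 4., 1., 6.])),
    ("land-use change review", np.array([2., 1., 4., 0.])),
]
print(retrieve_similar_contexts(np.array([2., 0., 4., 1.]), case_base))
```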
Abstract:
Models at runtime can be defined as abstract representations of a system, including its structure and behaviour, which exist in tandem with the given system during the actual execution time of that system. Furthermore, these models should be causally connected to the system being modelled, offering a reflective capability. Significant advances have been made in recent years in applying this concept, most notably in adaptive systems. In this paper we argue that a similar approach can also be used to support the dynamic generation of software artefacts at execution time. An important area where this is relevant is the generation of software mediators to tackle the crucial problem of interoperability in distributed systems. We refer to this approach as emergent middleware, representing a fundamentally new approach to resolving interoperability problems in the complex distributed systems of today. In this context, the runtime models are used to capture meta-information about the underlying networked systems that need to interoperate, including their interfaces and additional knowledge about their associated behaviour. This is supplemented by ontological information to enable semantic reasoning. This paper focuses on this novel use of models at runtime, examining in detail the nature of such runtime models together with the supporting algorithms and tools that extract this knowledge and use it to synthesise the appropriate emergent middleware.
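A minimal sketch of what a runtime model and a synthesised mediator might look like, assuming a simple mapping of interface operations through shared ontological concepts; the class, operation names and matching rule are invented for illustration and are not the paper's algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class RuntimeModel:
    """Minimal runtime model of a networked system: its interface plus
    lightweight behavioural/ontological annotations."""
    system: str
    operations: dict                                   # operation name -> semantic concept
    behaviour: list = field(default_factory=list)      # ordered message exchange

def synthesise_mediator(client: RuntimeModel, service: RuntimeModel):
    """Map client operations to service operations that share a concept;
    a stand-in for the semantic matching an emergent middleware would do."""
    by_concept = {concept: op for op, concept in service.operations.items()}
    return {op: by_concept[concept]
            for op, concept in client.operations.items() if concept in by_concept}

client = RuntimeModel("PhotoApp", {"uploadPicture": "store-image"})
service = RuntimeModel("MediaStore", {"putObject": "store-image"})
print(synthesise_mediator(client, service))   # {'uploadPicture': 'putObject'}
```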
Abstract:
Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been tied to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes in response to variations in the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts focus away from implementation details to the whole view of the system and its runtime change, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
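A minimal sketch, under invented variation points and context keys, of resolving a dynamic variability model against the sensed environment and deriving the reconfiguration steps that generative techniques would then turn into artefacts.

```python
# Illustrative-only variability model: variation point -> {context value: variant}.
variability_model = {
    "network": {"low_bandwidth": "compressed-codec", "high_bandwidth": "raw-codec"},
    "battery": {"low": "local-cache", "ok": "cloud-sync"},
}

def resolve(context):
    """Pick one variant per variation point from the sensed environment context."""
    return {vp: variants[context[vp]] for vp, variants in variability_model.items()}

def reconfigure(current, target):
    """Emit the reconfiguration steps a generator would realise as artefacts."""
    return [f"rebind {vp}: {current[vp]} -> {variant}"
            for vp, variant in target.items() if current.get(vp) != variant]

current = resolve({"network": "high_bandwidth", "battery": "ok"})
target = resolve({"network": "low_bandwidth", "battery": "ok"})   # environment changed
print(reconfigure(current, target))   # ['rebind network: raw-codec -> compressed-codec']
```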
Abstract:
Requirements-aware systems address the need to reason about uncertainty at runtime in order to support adaptation decisions, by representing quality of service (QoS) requirements for service-based systems (SBS) with precise values in a run-time queryable model specification. However, current approaches do not support updating the specification to reflect changes in the service market, such as newly available services or improved QoS of existing ones. Thus, even if the specification models reflect requirements that were acceptable at design time, they may become obsolete and miss opportunities for system improvement by self-adaptation. This article proposes to distinguish "abstract" and "concrete" specification models: the former consists of linguistic variables (e.g. "fast") agreed upon at design time, and the latter consists of precise numeric values (e.g. "2 ms") that are dynamically calculated at run-time, thus incorporating up-to-date QoS information. If and when freshly calculated concrete specifications are no longer satisfied by the current service configuration, an adaptation is triggered. The approach was validated using four simulated SBS that use services from a previously published, real-world dataset; in all cases, the system was able to detect unsatisfied requirements at run-time and trigger suitable adaptations. Ongoing work focuses on policies for determining when specifications should be recalculated. This approach will allow engineers to build SBS that are protected against market-caused obsolescence of their requirements specifications. © 2012 IEEE.
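A minimal sketch of the abstract-to-concrete recalculation idea, assuming a simple percentile rule over currently advertised response times (the paper's actual calculation policy is the subject of its ongoing work and may differ).

```python
import numpy as np

def concretise(linguistic, market_response_times):
    """Recompute the concrete threshold for an abstract term from the QoS
    currently advertised in the service market (percentile rule assumed here)."""
    pct = {"fast": 25, "medium": 50, "slow": 75}[linguistic]
    return float(np.percentile(market_response_times, pct))

def needs_adaptation(current_service_ms, linguistic, market_response_times):
    """Trigger adaptation when the running service no longer satisfies the
    freshly calculated concrete specification."""
    return current_service_ms > concretise(linguistic, market_response_times)

market = [1.8, 2.0, 2.4, 3.1, 4.0, 5.5]           # ms, advertised by candidate services
print(concretise("fast", market))                  # fresh concrete meaning of "fast"
print(needs_adaptation(2.9, "fast", market))       # True -> trigger adaptation
```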
Abstract:
Methodologies for understanding business processes and their information systems (IS) are often criticized, either for being too imprecise and philosophical (a criticism often levied at softer methodologies) or too hierarchical and mechanistic (levied at harder methodologies). The process-oriented holonic modelling methodology combines aspects of softer and harder approaches to aid modellers in designing business processes and associated IS. The methodology uses holistic thinking and a construct known as the holon to build process descriptions into a set of models known as a holarchy. This paper describes the methodology through an action research case study based in a large design and manufacturing organization. The scientific contribution is a methodology for analysing business processes in environments that are characterized by high complexity, low volume and high variety where there are minimal repeated learning opportunities, such as large IS development projects. The practical deliverables from the project gave IS and business process improvements for the case study company.
Abstract:
This article investigates (1) whether cross-functional integration within a firm and the use of information systems (IS) that support information sharing with external parties can enhance integration across the supply chain and wider networks, and (2) whether collaboration with customers, suppliers and other external parties leads to increased supply chain performance in terms of new product development and the introduction of new processes. Data from a high-quality survey carried out in Taiwan in 2009 were used, and appropriate econometric models were applied. Results show that the adoption of IS that enhance information sharing is vital not only for effective communication with suppliers and with wider network members, but also has a direct effect on a firm's innovative effort. Cross-functional integration appears to matter only for the introduction of an innovative process. Collaboration with customers and suppliers affected a product's design and its overall features and functionality, respectively. © 2013 Copyright Taylor and Francis Group, LLC.
Abstract:
This book contains 13 papers from the 7th Workshop on Global Sourcing, held in Val d'Isere, France, during March 11-14, 2013, which were carefully reviewed and selected from 40 submissions. They are based on a vast empirical base brought together by leading researchers in information systems, strategic management, and operations. This volume is intended for students, academics, and practitioners interested in research results and experiences on outsourcing and offshoring of information technology and business processes. The topics discussed represent both client and supplier perspectives on sourcing of global services, combine theoretical and practical insights regarding challenges that both clients and vendors face, and include case studies from client and vendor organizations.
Abstract:
Adaptive information filtering is a challenging research problem. It requires the adaptation of a representation of a user's multiple interests to various changes in them. We investigate the application of an immune-inspired approach to this problem. Nootropia is a user profiling model that has many properties in common with computational models of the immune system based on Francisco Varela's work. In this paper we concentrate on Nootropia's evaluation. We define an evaluation methodology that uses virtual users to simulate various interest changes. The results show that Nootropia exhibits the desirable adaptive behaviour.
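Nootropia's actual profile is a self-organising term network, which the abstract does not detail; the sketch below only mimics the evaluated behaviour, adapting a flat weighted-term profile to a simulated interest change through reinforcement and decay (all terms and weights are invented).

```python
def update_profile(profile, liked_doc_terms, reinforce=0.2, decay=0.05):
    """Reinforce terms from documents the virtual user 'likes' and decay the rest."""
    for term in profile:
        profile[term] = max(0.0, profile[term] - decay)        # forget gradually
    for term in liked_doc_terms:
        profile[term] = profile.get(term, 0.0) + reinforce     # reinforce new interest
    return profile

profile = {"markov": 0.6, "traffic": 0.5}
# Simulated interest change: the virtual user starts liking immune-system papers.
for _ in range(5):
    profile = update_profile(profile, ["immune", "profiling"])
print(sorted(profile.items(), key=lambda kv: -kv[1]))
```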