6 results for Variability Models
in Aston University Research Archive
Abstract:
Thermal effects in uncontrolled factory environments are often the largest source of uncertainty in large volume dimensional metrology. As the standard temperature for metrology of 20°C cannot be achieved practically or economically in many manufacturing facilities, the characterisation and modelling of temperature offers a solution for improving the uncertainty of dimensional measurement and quantifying thermal variability in large assemblies. Technologies that currently exist for temperature measurement in the range 0-50°C are presented, alongside a discussion of their usefulness for monitoring temperatures in a manufacturing context. Particular aspects of production where each technology could play a role are highlighted, as well as practical considerations for deployment. Contact sensors such as platinum resistance thermometers come closest to the desired accuracy, calculated to be ∼0.02°C under the most challenging measurement conditions. Non-contact solutions would be most practical in the light controlled factory (LCF), and semi-invasive solutions appear least useful, but all technologies can play some role during the initial development of thermal variability models.
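To make the scale of that accuracy requirement concrete, here is a small illustrative sketch (not from the paper; the material, part length and sensor uncertainties are assumed values) showing how a temperature-measurement uncertainty propagates into dimensional uncertainty through the linear expansion model:

    # Hypothetical sketch: propagating temperature uncertainty into a
    # dimensional measurement corrected to the 20 degC reference temperature.
    # All numeric values are illustrative, not taken from the paper.

    ALPHA_ALU = 23e-6   # linear expansion coefficient of aluminium, 1/degC

    def correct_to_20c(length_m, temp_c, alpha=ALPHA_ALU):
        """Scale a length measured at temp_c back to the 20 degC standard."""
        return length_m / (1.0 + alpha * (temp_c - 20.0))

    def length_uncertainty_m(length_m, temp_uncertainty_c, alpha=ALPHA_ALU):
        """First-order length uncertainty due to dL/dT = alpha * L."""
        return alpha * length_m * temp_uncertainty_c

    if __name__ == "__main__":
        L, T = 5.0, 27.3          # a 5 m part measured at 27.3 degC
        print(correct_to_20c(L, T))                       # corrected length
        print(length_uncertainty_m(L, 0.02) * 1e6, "um")  # ~2.3 um at 0.02 degC
        print(length_uncertainty_m(L, 0.5) * 1e6, "um")   # ~57.5 um at 0.5 degC

On a 5 m aluminium assembly, a 0.02°C sensor keeps the thermal contribution to a few micrometres, while a 0.5°C sensor contributes tens of micrometres, which is why the abstract treats ∼0.02°C as the desired accuracy.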
Abstract:
Deformable models are an attractive approach to recognizing objects that have considerable within-class variability, such as handwritten characters. However, there are severe search problems associated with fitting the models to data, which could be reduced if a better starting point for the search were available. We show that by training a neural network to predict how a deformable model should be instantiated from an input image, such improved starting points can be obtained. This method has been implemented for a system that recognizes handwritten digits using deformable models, and the results show that the search time can be significantly reduced without compromising recognition performance. © 1997 Academic Press.
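A minimal sketch of the idea follows, assuming a hypothetical parameterisation of the digit model by 8 control points, an arbitrary small network, and a placeholder fitting energy; the paper's actual architecture and elastic-matching search are not reproduced here:

    # Sketch: a network maps an input image directly to deformable-model
    # parameters, giving the fitting search a warm start. The 8-control-point
    # parameterisation and layer sizes are assumptions for illustration.
    import torch
    import torch.nn as nn

    N_PARAMS = 16  # (x, y) for 8 spline control points -- hypothetical

    predictor = nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Linear(128, N_PARAMS),   # predicted initial instantiation
    )

    def fitting_energy(params, image):
        # Placeholder: the real system would measure how well the deformed
        # template explains the ink in the image.
        return (params ** 2).sum()

    def fit_with_warm_start(image, refine_steps=10):
        """Use predicted parameters as the starting point for model fitting."""
        params = predictor(image.unsqueeze(0)).squeeze(0).detach().clone()
        params.requires_grad_(True)
        opt = torch.optim.SGD([params], lr=0.01)
        for _ in range(refine_steps):   # stand-in for the fitting search
            opt.zero_grad()
            loss = fitting_energy(params, image)
            loss.backward()
            opt.step()
        return params

    img = torch.rand(28, 28)
    print(fit_with_warm_start(img).shape)

The benefit claimed in the abstract comes from the warm start: fewer refinement steps are needed than when the search begins from a generic initialisation.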
Abstract:
This empirical study employs a different methodology to examine the change in wealth associated with mergers and acquisitions (M&As) for US firms. Specifically, we employ the standard CAPM, the Fama-French three-factor model and the Carhart four-factor model within the OLS and GJR-GARCH estimation methods to test the behaviour of the cumulative abnormal returns (CARs). Whilst the standard CAPM captures the variability of stock returns with the overall market, the Fama-French factors capture the risk factors that are important to investors. Additionally, augmenting the Fama-French three-factor model with the Carhart momentum factor to generate the four-factor model captures additional pricing elements that may affect stock returns. Traditionally, estimates of abnormal returns (ARs) in M&A situations rely on the standard OLS estimation method. However, standard OLS will provide inefficient estimates of the ARs if the data contain ARCH and asymmetric effects. To minimise this problem of estimation efficiency, we re-estimated the ARs using the GJR-GARCH estimation method. We find that the results vary with both the choice of model and the estimation method. Beyond these variations, we also tested whether the ARs are affected by the degree of liquidity of the stocks and the size of the firm. We document significant positive post-announcement CARs for target firm shareholders under both the OLS and GJR-GARCH methods across all three models. However, post-event CARs for acquiring firm shareholders were insignificant under both estimation methods across the three models. The GJR-GARCH method seems to generate larger CARs than the OLS method. Using both market capitalization and trading volume as measures of liquidity and firm size, we observed strong return continuations in medium firms relative to small and large firms for target shareholders. We consistently observed market efficiency in small and large firms; this implies that small and large target firms overreact to new information, resulting in a more efficient market. For acquirer firms, our measure of liquidity captures strong return continuations for small firms under the OLS estimates for both the CAPM and the Fama-French three-factor model, whilst under the GJR-GARCH estimates only for the Carhart model. Bootstrap simulations of post-announcement CARs confirmed our earlier results.
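The core event-study logic can be sketched as follows for the OLS route, using simulated data; the window lengths, factor loadings and factor construction are assumptions, and the GJR-GARCH re-estimation step is omitted for brevity:

    # Hedged sketch: estimate the Carhart four-factor model over a pre-event
    # window by OLS, then cumulate abnormal returns over the event window.
    # All data below are simulated, not taken from the study.
    import numpy as np

    rng = np.random.default_rng(0)
    T_est, T_evt = 250, 21                 # estimation / event windows (assumed)
    # Columns stand in for MKT-RF, SMB, HML, MOM factor returns (simulated).
    factors = rng.normal(0, 0.01, (T_est + T_evt, 4))
    loadings = np.array([1.0, 0.5, 0.3, 0.2])          # illustrative betas
    excess_ret = factors @ loadings + rng.normal(0, 0.01, T_est + T_evt)

    # OLS on the estimation window: r_t = alpha + b'f_t + e_t
    X = np.column_stack([np.ones(T_est), factors[:T_est]])
    beta, *_ = np.linalg.lstsq(X, excess_ret[:T_est], rcond=None)

    # Abnormal returns in the event window: AR_t = r_t - (alpha + b'f_t)
    X_evt = np.column_stack([np.ones(T_evt), factors[T_est:]])
    ar = excess_ret[T_est:] - X_evt @ beta
    car = ar.cumsum()                      # cumulative abnormal returns
    print("CAR over event window:", car[-1])

One common implementation route for the GJR-GARCH re-estimation the abstract describes is a GJR-GARCH(1,1) specification for the residuals, e.g. via the arch package's arch_model with an asymmetry term (o=1), though the study does not specify its tooling.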
Abstract:
Constructing and executing distributed systems that can adapt to their operating context in order to sustain provided services and service qualities are complex tasks. Managing the adaptation of multiple, interacting services is particularly difficult, since these services tend to be distributed across the system, interdependent and sometimes tangled with other services. Furthermore, the exponential growth of the number of potential system configurations, derived from the variabilities of each service, needs to be handled. Current practices of writing low-level reconfiguration scripts as part of the system code to handle runtime adaptation are both error-prone and time-consuming, and make adaptive systems difficult to validate and evolve. In this paper, we propose to combine model-driven and aspect-oriented techniques to better cope with the complexities of constructing and executing adaptive systems, and to handle the exponential growth of the number of possible configurations. Combining these techniques allows us to use high-level domain abstractions, simplify the representation of variants and limit the combinatorial explosion of possible configurations. In our approach we also use models at runtime to generate the adaptation logic, by comparing the current configuration of the system to a composed model representing the configuration we want to reach. © 2008 Springer-Verlag Berlin Heidelberg.
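As an illustration of the models-at-runtime comparison described above (a hypothetical sketch, not the paper's aspect-oriented implementation), the adaptation logic can be derived by diffing the running configuration against the target composed model:

    # Illustrative sketch: represent the running configuration and the target
    # composed model as sets of active variants, and derive reconfiguration
    # actions from their difference. Variant names are invented.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        kind: str      # "deploy" or "undeploy"
        variant: str

    def plan_adaptation(current: set[str], target: set[str]) -> list[Action]:
        """Compare models at runtime and emit the script that bridges them."""
        undeploy = [Action("undeploy", v) for v in sorted(current - target)]
        deploy = [Action("deploy", v) for v in sorted(target - current)]
        return undeploy + deploy   # remove obsolete variants before adding new

    # Example: adapting a logging service when the device enters low-power mode.
    current = {"log:verbose", "net:wifi"}
    target = {"log:minimal", "net:wifi"}   # composed from aspect models (assumed)
    for a in plan_adaptation(current, target):
        print(a.kind, a.variant)

Generating the reconfiguration script from the model diff, rather than hand-writing it, is what lets the approach sidestep the error-prone low-level scripts the abstract criticises.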
Abstract:
Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been related to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes according to variations of the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts the focus away from implementation details to the whole view of the system and its runtime change, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
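A hypothetical sketch of dynamic variability resolution in this spirit follows; the variation points, context properties and selection rules are all invented for illustration:

    # Sketch: context properties drive the selection of architectural variants,
    # from which a concrete configuration is generated and checked before use.

    VARIATION_POINTS = {
        "persistence": {"in_memory", "database"},
        "transport":   {"http", "mqtt"},
    }

    def resolve(context: dict) -> dict:
        """Map environment fluctuations to a full, valid configuration."""
        config = {
            "persistence": "database" if context.get("storage_ok", True)
                           else "in_memory",
            "transport":   "mqtt" if context.get("bandwidth_kbps", 1000) < 100
                           else "http",
        }
        # Validating the generated configuration against the variability model
        # mirrors the abstract's point that generation from models enables
        # safer execution.
        assert all(config[vp] in allowed
                   for vp, allowed in VARIATION_POINTS.items())
        return config

    print(resolve({"bandwidth_kbps": 50}))
    # -> {'persistence': 'database', 'transport': 'mqtt'}
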
Abstract:
Purpose – The purpose of the paper was to conduct an empirical investigation to explore the impact of project management maturity models (PMMMs) on improving project performance.
Design/methodology/approach – The investigation used a cross-case analysis involving over 90 individuals in seven organisations.
Findings – The findings of the empirical investigation indicate that PMMMs demonstrate very high levels of variability in individuals' assessments of project management maturity. Furthermore, at higher levels of maturity, the type of performance improvement adopted following their application is related to the type of PMMM used in the assessment. The paradox of the unreliability of PMMMs and their widespread acceptance is resolved by calling upon the "wisdom of crowds" phenomenon, which has implications for the use of maturity model assessments in other arenas.
Research limitations/implications – The investigation does have the usual issues associated with case research, but the steps taken in the cross-case construction and analysis have improved the overall robustness and extendibility of the findings.
Practical implications – The tendency for PMMMs to shape improvements based on their own inherent structure needs further understanding.
Originality/value – The use of empirical methods to investigate the link between project maturity models and extant changes in project management performance is highly novel, and the findings that result have added resonance.