38 results for two-factor models
in Aston University Research Archive
Abstract:
Signal integration determines cell fate on the cellular level, affects cognitive processes and affective responses on the behavioural level, and is likely to be involved in psychoneurobiological processes underlying mood disorders. Interactions between stimuli may be subject to time effects. Time-dependencies of interactions between stimuli typically lead to complex cell responses and complex responses on the behavioural level. We show that both three-factor models and time series models can be used to uncover such time-dependencies. However, we argue that for short longitudinal data the three-factor modelling approach is more suitable. In order to illustrate both approaches, we re-analysed previously published short longitudinal data sets. We found that in human embryonic kidney 293 (HEK293) cells the interaction effect in the regulation of extracellular signal-regulated kinase (ERK) 1 signalling activation by insulin and epidermal growth factor is subject to a time effect and dramatically decays at peak values of ERK activation. In contrast, we found that the interaction effect induced by hypoxia and tumour necrosis factor-alpha for the transcriptional activity of the human cyclo-oxygenase-2 promoter in HEK293 cells is time invariant, at least in the first 12-h time window after stimulation. Furthermore, we applied the three-factor model to previously reported animal studies. In these studies, memory storage was found to be subject to an interaction effect of the beta-adrenoceptor agonist clenbuterol and certain antagonists acting on the alpha-1-adrenoceptor/glucocorticoid-receptor system. Our model-based analysis suggests that the interaction effect is relevant only if the antagonist drug is administered within a critical time window.
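To make the three-factor approach concrete, the sketch below fits a linear model whose stimulus-by-stimulus-by-time interaction term tests whether an interaction decays over time; the data, variable names and effect sizes are illustrative assumptions, not the re-analysed data sets.

```python
# Sketch of a three-factor analysis: two stimuli (A, B) and time, with the
# A:B:time coefficient testing whether the A:B interaction is time-dependent.
# Simulated data and effect sizes are illustrative assumptions only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "A":    rng.integers(0, 2, n),       # stimulus A absent/present
    "B":    rng.integers(0, 2, n),       # stimulus B absent/present
    "time": rng.choice([5, 30, 60], n),  # minutes after stimulation
})
# Simulated response with an A:B interaction that decays over time.
df["y"] = (df.A + df.B + df.A * df.B * np.exp(-df.time / 30)
           + rng.normal(0, 0.2, n))

fit = smf.ols("y ~ A * B * time", data=df).fit()
print(fit.params)  # a non-zero A:B:time term flags a time-dependent interaction
```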
Abstract:
I model the forward premium in the U.K. gilt-edged market over the period 1982–96 using a two-factor general equilibrium model of the term structure of interest rates. The model permits the decomposition of the forward premium into separate components representing interest rate expectations, the risk premia associated with each of the underlying factors, and terms capturing the direct impact of the variances of the factors on the shape of the forward curve.
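As a sketch of this decomposition in generic two-factor affine notation (not necessarily the paper's exact specification), the instantaneous forward rate can be written as

$$ f(t,T) \;=\; \underbrace{\mathbb{E}_t[r_T]}_{\text{rate expectations}} \;+\; \underbrace{\phi_1(t,T) + \phi_2(t,T)}_{\text{factor risk premia}} \;-\; \underbrace{\tfrac{1}{2}\big[v_1(t,T) + v_2(t,T)\big]}_{\text{variance (convexity) terms}}, $$

so the forward premium $f(t,T) - \mathbb{E}_t[r_T]$ separates into the risk premia attached to the two factors less the convexity effects of their variances on the shape of the forward curve.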
Abstract:
Experiments combining different groups or factors are a powerful method of investigation in applied microbiology. ANOVA enables not only the effect of individual factors to be estimated but also their interactions: information which cannot be obtained readily when factors are investigated separately. In addition, combining different treatments or factors in a single experiment is more efficient and often reduces the number of replications required to estimate treatment effects adequately. Because of the treatment combinations used in a factorial experiment, the degrees of freedom (DF) of the error term in the ANOVA are a more important indicator of the ‘power’ of the experiment than simply the number of replicates. A good method is to ensure, where possible, that sufficient replication is present to achieve 15 DF for each error term of the ANOVA. Finally, in a factorial experiment, it is important to define the design of the experiment in detail because this determines the appropriate type of ANOVA. We will discuss some of the common variations of factorial ANOVA in future statnotes. If there is doubt about which ANOVA to use, the researcher should seek advice from a statistician with experience of research in applied microbiology.
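A quick numerical illustration of the 15 DF guideline (the 3 x 2 design is an assumed example, not one from the statnote):

```python
# For a fully replicated two-factor factorial with a and b factor levels and
# n replicates per treatment combination, the error term has a*b*(n-1) DF.
def error_df(a_levels: int, b_levels: int, n_reps: int) -> int:
    return a_levels * b_levels * (n_reps - 1)

# Smallest number of replicates giving at least 15 error DF in a 3 x 2 design.
a, b, n = 3, 2, 2
while error_df(a, b, n) < 15:
    n += 1
print(n, error_df(a, b, n))  # -> 4 replicates give 18 error DF
```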
Abstract:
Presentation of an abstract
Abstract:
Investment in capacity expansion remains one of the most critical decisions for a manufacturing organisation with global production facilities. Multiple factors need to be considered, making the decision process very complex. The purpose of this paper is to establish the state-of-the-art in multi-factor models for capacity expansion of manufacturing plants within a corporation. The research programme, consisting of an extensive literature review and a structured assessment of the strengths and weaknesses of the current research, is presented. The study found that there is a wealth of mathematical multi-factor models for evaluating capacity expansion decisions; however, no single contribution captures all the different facets of the problem.
Abstract:
Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (eg a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle deg⁻¹ sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984, Vision Research 24, 385-394; Meese et al, 2004, Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2). Only the two-stage model predicted the observed pattern of slope variation, so providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]
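The sketch below gives a minimal implementation in the spirit of the two-stage gain-control model described above; the exact functional form and the parameter values are illustrative assumptions, not the fitted model.

```python
# Sketch of a two-stage binocular contrast gain-control model. Stage 1 applies
# a nearly linear transducer with monocular plus interocular suppression;
# stage 2 sums the eyes and applies steeply accelerating transduction.
# Parameter values are illustrative assumptions, not fitted values.
def two_stage_response(c_left, c_right,
                       m=1.3, s1=1.0,          # stage 1: exponent, saturation
                       p=8.0, q=6.5, s2=0.2):  # stage 2: steep transduction
    denom = s1 + c_left + c_right              # suppression from both eyes
    stage1 = c_left ** m / denom + c_right ** m / denom
    return stage1 ** p / (s2 + stage1 ** q)    # binocular sum + gain control

# Discrimination threshold ~ the contrast increment that raises the response
# by a fixed amount; dichoptic masking emerges from interocular suppression.
print(two_stage_response(0.1, 0.1), two_stage_response(0.1, 0.0))
```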
Abstract:
Foley [J. Opt. Soc. Am. A 11 (1994) 1710] has proposed an influential psychophysical model of masking in which mask components in a contrast gain pool are raised to an exponent before summation and divisive inhibition. We tested this summation rule in experiments in which contrast detection thresholds were measured for a vertical 1 c/deg (or 2 c/deg) sine-wave component in the presence of a 3 c/deg (or 6 c/deg) mask that had either a single component oriented at -45° or a pair of components oriented at ±45°. Contrary to the predictions of Foley's model 3, we found that for masks of moderate contrast and above, threshold elevation was predicted by linear summation of the mask components in the inhibitory stage of the contrast gain pool. We built this feature into two new models, referred to as the early adaptation model and the hybrid model. In the early adaptation model, contrast adaptation controls a threshold-like nonlinearity on the output of otherwise linear pathways that provide the excitatory and inhibitory inputs to a gain control stage. The hybrid model involves nonlinear and nonadaptable routes to excitatory and inhibitory stages as well as an adaptable linear route. With only six free parameters, both models provide excellent fits to the masking and adaptation data of Foley and Chen [Vision Res. 37 (1997) 2779] but, unlike Foley and Chen's model, are able to do so with only one adaptation parameter. However, only the hybrid model is able to capture the features of Foley's (1994) pedestal plus orthogonal fixed mask data. We conclude (1) that linear summation of inhibitory components is a feature of contrast masking, and (2) that the main aftereffect of spatial adaptation on contrast increment thresholds can be assigned to a single site.
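Schematically, with generic weights and exponents (symbols here are generic, not the paper's fitted values), Foley-style gain control predicts a test response of the form

$$ R \;=\; \frac{C_t^{\,p}}{Z + \sum_i w_i C_i^{\,q}}, $$

whereas the result above supports linear summation of the mask components in the inhibitory pool, i.e. $q = 1$:

$$ R \;=\; \frac{C_t^{\,p}}{Z + \sum_i w_i C_i}. $$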
Abstract:
This empirical study employs a different methodology to examine the change in wealth associated with mergers and acquisitions (M&As) for US firms. Specifically, we employ the standard CAPM, the Fama-French three-factor model and the Carhart four-factor model within the OLS and GJR-GARCH estimation methods to test the behaviour of the cumulative abnormal returns (CARs). Whilst the standard CAPM captures the variability of stock returns with the overall market, the Fama-French factors capture the risk factors that are important to investors. Additionally, augmenting the Fama-French three-factor model with the Carhart momentum factor to generate the four-factor model captures additional pricing elements that may affect stock returns. Traditionally, estimates of abnormal returns (ARs) in M&A situations rely on the standard OLS estimation method. However, the standard OLS will provide inefficient estimates of the ARs if the data contain ARCH and asymmetric effects. To minimise this problem of estimation efficiency, we re-estimated the ARs using the GJR-GARCH estimation method. We find that there is variation in the results as regards both the choice of models and the estimation methods. Besides these variations in the estimated models and the choice of estimation methods, we also tested whether the ARs are affected by the degree of liquidity of the stocks and the size of the firm. We document significant positive post-announcement cumulative ARs (CARs) for target firm shareholders under both the OLS and GJR-GARCH methods across all three methodologies. However, post-event CARs for acquiring firm shareholders were insignificant for both sets of estimation methods under the three methodologies. The GJR-GARCH method seems to generate larger CARs than those of the OLS method. Using both market capitalization and trading volume as measures of liquidity and firm size, we observed strong return continuations in medium firms relative to small and large firms for target shareholders. We consistently observed market efficiency in small and large firms. This implies that small and large target firms react rapidly to new information, resulting in a more efficient market. For acquirer firms, our measure of liquidity captures strong return continuations for small firms under the OLS estimates for both the CAPM and Fama-French three-factor models, whilst under the GJR-GARCH estimates only for the Carhart model. Post-announcement bootstrapped simulated CARs confirmed our earlier results.
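For reference, the two standard specifications involved are, in generic notation, the Carhart four-factor model

$$ R_{it} - R_{ft} \;=\; \alpha_i + \beta_i (R_{mt} - R_{ft}) + s_i\,\mathrm{SMB}_t + h_i\,\mathrm{HML}_t + m_i\,\mathrm{MOM}_t + \varepsilon_{it} $$

(dropping the momentum term gives the Fama-French three-factor model, and dropping SMB and HML as well gives the CAPM), and the GJR-GARCH(1,1) conditional variance

$$ \sigma_t^2 \;=\; \omega + \alpha\,\varepsilon_{t-1}^2 + \gamma\,\varepsilon_{t-1}^2\, I(\varepsilon_{t-1} < 0) + \beta\,\sigma_{t-1}^2, $$

where the indicator term $I(\cdot)$ captures the asymmetric response of volatility to bad news that plain OLS ignores.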
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All the data in conventional DEA with input and/or output ratios assume the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios to demonstrate the applicability and efficacy of the proposed models where the traditional indicators are mostly financial ratios.
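For orientation, the sketch below solves the conventional crisp input-oriented CCR model as a linear programme with scipy; the interval-ratio models proposed in the paper are not reproduced here, and the data are invented for illustration.

```python
# Crisp CCR DEA (multiplier form) as a linear programme: for each DMU o,
# maximise u.y_o subject to v.x_o = 1 and u.y_j - v.x_j <= 0 for all j.
# Illustrative data; the paper's interval-ratio extension is not shown.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])  # inputs, one row per DMU
Y = np.array([[1.0], [1.0], [1.5]])                 # outputs, one row per DMU

def ccr_efficiency(o: int) -> float:
    n_dmu, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximise u.y_o
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]  # v.x_o = 1
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n_dmu),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (s + m))
    return -res.fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```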
Abstract:
Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework to train statistical models without using expensive fully annotated data. In particular, the input of our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework shows superior performance to two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15% being achieved in F-measure.
Abstract:
Heme-oxygenases (HOs) catalyze the conversion of heme into carbon monoxide and biliverdin. HO-1 is induced during hypoxia, ischemia/reperfusion, and inflammation, providing cytoprotection and inhibiting leukocyte migration to inflammatory sites. Although in vitro studies have suggested an additional role for HO-1 in angiogenesis, the relevance of this in vivo remains unknown. We investigated the involvement of HO-1 in angiogenesis in vitro and in vivo. Vascular endothelial growth factor (VEGF) induced prolonged HO-1 expression and activity in human endothelial cells and HO-1 inhibition abrogated VEGF-driven angiogenesis. Two murine models of angiogenesis were used: (1) angiogenesis initiated by addition of VEGF to Matrigel and (2) a lipopolysaccharide (LPS)-induced model of inflammatory angiogenesis in which angiogenesis is secondary to leukocyte invasion. Pharmacologic inhibition of HO-1 induced marked leukocytic infiltration that enhanced VEGF-induced angiogenesis. However, in the presence of an anti-CD18 monoclonal antibody (mAb) to block leukocyte migration, VEGF-induced angiogenesis was significantly inhibited by HO-1 antagonists. Furthermore, in the LPS-induced model of inflammatory angiogenesis, induction of HO-1 with cobalt protoporphyrin significantly inhibited leukocyte invasion into LPS-conditioned Matrigel and thus prevented the subsequent angiogenesis. We therefore propose that during chronic inflammation HO-1 has 2 roles: first, an anti-inflammatory action inhibiting leukocyte infiltration; and second, promotion of VEGF-driven noninflammatory angiogenesis that facilitates tissue repair.
Abstract:
This paper discusses critical findings from a two-year EU-funded research project involving four European countries: Austria, England, Slovenia and Romania. The project had two primary aims. The first of these was to develop a systematic procedure for assessing the balance between learning outcomes acquired in education and the specific needs of the labour market. The second aim was to develop and test a set of meta-level quality indicators aimed at evaluating the linkages between education and employment. The project was distinctive in that it combined different partners from Higher Education, Vocational Training, Industry and Quality Assurance. One of the key emergent themes identified in exploratory interviews was that employers and recent business graduates in all four countries want a well-rounded education which delivers a broad foundation of key business knowledge across the various disciplines. Both groups also identified the need for personal development in critical skills and competencies. Following the exploratory study, a questionnaire was designed to address five functional business areas, as well as a cluster of 8 business competencies. Within the survey, questions relating to the meta-level quality indicators assessed the impact of these learning outcomes on the workplace, in terms of the following: 1) value, 2) relevance and 3) graduate ability. This paper provides an overview of the study findings from a sample of 900 business graduates and employers. Two theoretical models are proposed as tools for predicting satisfaction with work performance and satisfaction with business education. The implications of the study findings for education, employment and European public policy are discussed.
Abstract:
It is well known that one of the obstacles to effective forecasting of exchange rates is heteroscedasticity (non-stationary conditional variance). The autoregressive conditional heteroscedastic (ARCH) model and its variants have been used to estimate a time-dependent variance for many financial time series. However, such models are essentially linear in form, and we can ask whether a non-linear model for variance can improve results just as non-linear models (such as neural networks) for the mean have done. In this paper we consider two neural network models for variance estimation. Mixture Density Networks (Bishop 1994, Nix and Weigend 1994) combine a Multi-Layer Perceptron (MLP) and a mixture model to estimate the conditional data density. They are trained using a maximum likelihood approach. However, it is known that maximum likelihood estimates are biased and lead to a systematic underestimate of variance. More recently, a Bayesian approach to parameter estimation has been developed (Bishop and Qazaz 1996) that shows promise in removing the maximum likelihood bias. However, up to now, this model has not been used for time series prediction. Here we compare these algorithms with two other models to provide benchmark results: a linear model (from the ARIMA family), and a conventional neural network trained with a sum-of-squares error function (which estimates the conditional mean of the time series with a constant variance noise model). This comparison is carried out on daily exchange rate data for five currencies.
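The core idea behind variance-estimating networks can be sketched with a single-Gaussian simplification of a Mixture Density Network: the network outputs a conditional mean and log-variance and is trained by maximum likelihood. Everything below (architecture, data, hyperparameters) is an illustrative assumption.

```python
# Minimal sketch: a network estimating a conditional mean and variance by
# maximum likelihood (single-Gaussian simplification of an MDN). Note this
# plain ML fit retains the variance under-estimation bias discussed above.
import torch
import torch.nn as nn

class HeteroscedasticNet(nn.Module):
    def __init__(self, n_in: int, n_hidden: int = 16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.mean_head = nn.Linear(n_hidden, 1)
        self.logvar_head = nn.Linear(n_hidden, 1)  # log-variance for stability

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(y, mean, logvar):
    # Negative log-likelihood of y under N(mean, exp(logvar)), up to a constant.
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

# Toy heteroscedastic data standing in for lagged exchange-rate returns.
x = torch.randn(512, 5)
y = 0.1 * x[:, :1] + torch.randn(512, 1) * (0.5 + x[:, 1:2].abs())
model = HeteroscedasticNet(n_in=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    mean, logvar = model(x)
    loss = gaussian_nll(y, mean, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
```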
Abstract:
The procedure for successful scale-up of batchwise emulsion polymerisation has been studied. The relevant literature on liquid-liquid dispersion, on scale-up and on emulsion polymerisation has been critically reviewed. Batchwise emulsion polymerisation of styrene in a specially built 3 litre, unbaffled reactor confirmed that impeller speed had a direct effect on the latex particle size and on the reaction rate. This was noted to be more significant at low soap concentrations, and the phenomenon was related to the depletion of micelle-forming soap by soap adsorption onto the monomer emulsion surface. The scale-up procedure necessary to maintain constant monomer emulsion surface area in an unbaffled batch reactor was therefore investigated. Three geometrically similar vessels of 152, 229 and 305 mm internal diameter, and a range of impeller speeds (190 to 960 r.p.m.), were employed. The droplet sizes were measured either through photomicroscopy or via a Coulter Counter. The power input to the impeller was also measured. A scale-up procedure was proposed based on the governing relationship between droplet diameter, impeller speed and impeller diameter. The relationships between impeller speed, soap concentration, latex particle size and reaction rate were investigated in a series of polymerisations employing an amended commercial recipe for polystyrene. The particle size was determined via a light transmission technique. Two computer models, based on the Smith and Ewart approach but taking into account the adsorption/desorption of soap at the monomer surface, were successful in predicting the particle size and the progress of the reaction up to the end of stage II, i.e. to the end of the period of constant reaction rate.
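For context, the liquid-liquid dispersion literature reviewed typically correlates mean droplet size with an impeller Weber number; a standard form for turbulent stirred vessels (shown here for orientation, not necessarily the exact relationship derived in this work) is

$$ \frac{d_{32}}{D} \;\propto\; We^{-0.6}, \qquad We = \frac{\rho N^2 D^3}{\sigma}, $$

which gives $d_{32} \propto N^{-1.2} D^{-0.8}$; holding droplet size constant across geometrically similar vessels then requires $N^3 D^2$ (proportional to power per unit volume) to be kept constant, i.e. $N \propto D^{-2/3}$.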
Abstract:
A two-factor no-arbitrage model is used to provide a theoretical link between stock and bond market volatility. While this model suggests that short-term interest rate volatility may, at least in part, drive both stock and bond market volatility, the empirical evidence suggests that past bond market volatility affects both markets and feeds back into short-term yield volatility. The empirical modelling goes on to examine the (time-varying) correlation structure between volatility in the stock and bond markets and finds that the sign of this correlation has reversed over the last 20 years. This has important implications for portfolio selection in financial markets.
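A rolling-window sketch of how such a time-varying volatility correlation can be examined (illustrative only; the paper's empirical estimator may differ, and the data here are simulated):

```python
# Rolling correlation between stock and bond market volatility; simulated
# daily returns stand in for the actual market data used in the paper.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.bdate_range("1985-01-01", periods=2500)
returns = pd.DataFrame(rng.normal(0, 0.01, (2500, 2)),
                       index=idx, columns=["stock", "bond"])

vol = returns.rolling(21).std() * np.sqrt(252)      # annualised 1-month vol
corr = vol["stock"].rolling(252).corr(vol["bond"])  # 1-year rolling correlation
print(corr.dropna().tail())  # sign changes reveal correlation reversals
```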