17 results for Random parameter Logit Model
in Aston University Research Archive
Abstract:
We study the persistence phenomenon in a socio-econo dynamics model using computer simulations at a finite temperature on hypercubic lattices in dimensions up to five. The model includes a "social" local field which contains the magnetization at time t. The nearest-neighbour quenched interactions are drawn from a binary distribution which is a function of the bond concentration, p. The decay of the persistence probability in the model depends on both the spatial dimension and p. We find no evidence of "blocking" in this model. We also discuss the implications of our results for possible applications in the social and economic fields. It is suggested that the absence, or otherwise, of blocking could be used as a criterion to decide on the validity of a given model in different scenarios.
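As a rough illustration of how persistence is measured in simulations of this kind, the sketch below tracks the fraction of spins that have never flipped in a two-dimensional toy version of such a model. The heat-bath update rule, lattice size, temperature, and the way the magnetization enters the local field are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, p, sweeps = 32, 1.0, 0.8, 100   # lattice size, temperature, bond concentration, sweeps

spins = rng.choice([-1, 1], size=(L, L))
# Quenched binary couplings: +1 with probability p, -1 otherwise
Jx = rng.choice([1, -1], p=[p, 1 - p], size=(L, L))  # bonds to the right neighbour
Jy = rng.choice([1, -1], p=[p, 1 - p], size=(L, L))  # bonds to the neighbour below
s0 = spins.copy()
never_flipped = np.ones((L, L), dtype=bool)
persistence = []

for t in range(sweeps):
    for _ in range(L * L):                # one Monte Carlo sweep
        i, j = rng.integers(L, size=2)
        # quenched neighbour couplings plus a "social" term carrying the magnetization
        h = (Jx[i, j] * spins[i, (j + 1) % L] + Jx[i, j - 1] * spins[i, j - 1]
             + Jy[i, j] * spins[(i + 1) % L, j] + Jy[i - 1, j] * spins[i - 1, j]
             + spins.mean())
        # heat-bath update at temperature T
        spins[i, j] = 1 if rng.random() < 1 / (1 + np.exp(-2 * h / T)) else -1
    never_flipped &= (spins == s0)
    persistence.append(never_flipped.mean())  # P(t): fraction of spins never flipped
```

A power-law decay of P(t) towards zero, rather than saturation at a finite value, would correspond to the absence of blocking reported above.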
Abstract:
In this paper we present a novel method for emulating a stochastic, or random output, computer model and show its application to a complex rabies model. The method is evaluated both in terms of accuracy and computational efficiency on synthetic data and the rabies model. We address the issue of experimental design and provide empirical evidence on the effectiveness of utilizing replicate model evaluations compared to a space-filling design. We employ the Mahalanobis error measure to validate the heteroscedastic Gaussian process based emulator predictions for both the mean and (co)variance. The emulator allows efficient screening to identify important model inputs and better understanding of the complex behaviour of the rabies model.
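For readers unfamiliar with the validation measure mentioned here, a minimal sketch of the Mahalanobis error is given below; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def mahalanobis_error(y_valid, pred_mean, pred_cov):
    """Mahalanobis distance between held-out simulator outputs and the
    emulator's joint predictive distribution. For a well-calibrated
    emulator it should be close to the number of validation points."""
    resid = np.asarray(y_valid) - np.asarray(pred_mean)
    return float(resid @ np.linalg.solve(pred_cov, resid))
```

Because it uses the full predictive covariance, this measure penalises both over- and under-confident emulator variances, which is what makes it suitable for validating a heteroscedastic emulator.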
Abstract:
This paper reports preliminary progress on a principled approach to modelling nonstationary phenomena using neural networks. We are concerned with both parameter and model order complexity estimation. The basic methodology assumes a Bayesian foundation. However, to allow the construction of pragmatic models, successive approximations have to be made to permit computational tractability. The lowest order corresponds to the (Extended) Kalman filter approach to parameter estimation which has already been applied to neural networks. We illustrate some of the deficiencies of the existing approaches and discuss our preliminary generalisations by considering the application to nonstationary time series.
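A minimal sketch of the Kalman-filter idea referred to above, treating model parameters as a slowly drifting state so that nonstationarity can be tracked; the toy one-unit "network", drift variance q, and noise variance r are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def ekf_param_step(theta, P, x, y, model, jac, q=1e-4, r=0.1):
    """One Extended Kalman filter step over the parameters theta:
    a random-walk drift of variance q lets the estimates track
    nonstationary data; r is the observation noise variance."""
    P = P + q * np.eye(len(theta))           # predict: parameters drift
    H = jac(theta, x)[None, :]               # linearise the model around theta
    S = H @ P @ H.T + r                      # innovation variance
    K = (P @ H.T) / S                        # Kalman gain
    theta = theta + (K * (y - model(theta, x))).ravel()
    P = P - K @ H @ P
    return theta, P

model = lambda th, x: th[0] * np.tanh(th[1] * x)   # toy one-unit "network"
jac = lambda th, x: np.array([np.tanh(th[1] * x),
                              th[0] * x / np.cosh(th[1] * x) ** 2])
theta, P = np.array([0.5, 0.5]), np.eye(2)
for x, y in [(1.0, 0.8), (2.0, 0.95), (3.0, 0.99)]:
    theta, P = ekf_param_step(theta, P, x, y, model, jac)
```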
Abstract:
Increasing mail-survey response rates using monetary incentives is a proven method, but it is not cost-effective in every population. This paper tackles the questions of whether it is worth using monetary incentives, and of how large the inducement should be, by testing a logit model of the impact of prepaid monetary incentives on response rates in consumer and organizational mail surveys. The results support their use and show that the inducement value has a significant impact on the effect size. Importantly, no significant differences were found between consumer and organizational populations. A cost-benefit model is developed to estimate the optimum incentive when attempting to minimize overall survey costs for a given sample size. © 2006 Operational Research Society Ltd. All rights reserved.
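The cost-benefit logic can be sketched in a few lines: given a logit response-rate curve, the optimum incentive minimises the total cost of reaching a target number of responses. All coefficients and costs below are made-up illustrations, not the paper's estimates.

```python
import numpy as np

def response_rate(incentive, b0=-1.5, b1=0.35):
    """Logit model: response rate rises with the prepaid incentive."""
    return 1 / (1 + np.exp(-(b0 + b1 * incentive)))

def total_cost(incentive, target_n=400, mail_cost=1.2):
    """Mail-outs needed to achieve target_n responses, times unit cost."""
    mailed = target_n / response_rate(incentive)
    return mailed * (mail_cost + incentive)

incentives = np.linspace(0, 10, 101)
best = incentives[np.argmin([total_cost(v) for v in incentives])]
print(best)   # incentive that minimises overall survey cost
```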
Abstract:
How does a firm choose a proper mode of foreign direct investment (FDI) for entering a foreign market? Which mode of entry performs better? What are the performance implications of joint venture (JV) ownership structure? These important questions face a multinational enterprise (MNE) that decides to enter a foreign market. However, few studies have been conducted on such issues, and no consistent or conclusive findings have been generated, especially with respect to China. This thesis is composed of five chapters, providing corresponding answers to the questions given above. Specifically, Chapter One is an overall introductory chapter. Chapter Two is about the choice of entry mode of FDI in China. Chapter Three examines the relationship between four main entry modes and performance. Chapter Four explores the performance implications of JV ownership structure. Chapter Five is an overall concluding chapter. These empirical studies are based on recent, rich data that had not been explored in previous studies: information on 11,765 foreign-invested enterprises in China in seven manufacturing industries in 2000, 10,757 in 1999, and 10,666 in 1998. The four FDI entry modes examined include wholly-owned enterprises (WOEs), equity joint ventures (EJVs), contractual joint ventures (CJVs), and joint stock companies (JSCs). In Chapter Two, a multinomial logit model is established, and techniques of multiple linear regression analysis are employed in Chapters Three and Four. It was found that MNEs, under the conditions of a good investment environment, large capital commitment and small cultural distance, prefer the WOE strategy. If these conditions are not met, the EJV mode would be of greater use. The relative propensity to pursue the CJV mode increases with a good investment environment, small capital commitment, and small cultural distance. JSCs are not favoured by MNEs when the investment environment improves and when affiliates are located in the coastal areas. MNEs have been found to have a greater preference for an EJV as a mode of entry into the Chinese market in all industries. It is also found that in terms of return on assets (ROA) and asset turnover, WOEs perform the best, followed by EJVs, CJVs, and JSCs. Finally, minority-owned EJVs or JSCs are found to outperform their majority-owned counterparts in terms of ROA and asset turnover.
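As a sketch of the Chapter Two estimation strategy, the snippet below fits a multinomial logit of entry mode on firm-level covariates using statsmodels; the data are randomly generated placeholders, not the thesis dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({
    "invest_env": rng.normal(size=n),   # investment environment
    "capital": rng.normal(size=n),      # capital commitment
    "cult_dist": rng.normal(size=n),    # cultural distance
})
mode = rng.integers(0, 4, size=n)       # 0=WOE, 1=EJV, 2=CJV, 3=JSC (placeholder)

fit = sm.MNLogit(mode, sm.add_constant(X)).fit(disp=0)
print(fit.params)   # one coefficient column per non-base entry mode
```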
Abstract:
This paper compares the UK/US exchange rate forecasting performance of linear and nonlinear models based on monetary fundamentals with that of a random walk (RW) model. Structural breaks are identified and taken into account. The exchange rate forecasting framework is also used for assessing the relative merits of the official Simple Sum and the weighted Divisia measures of money. Overall, there are four main findings. First, the majority of the models with fundamentals are able to beat the RW model in forecasting the UK/US exchange rate. Second, the most accurate forecasts of the UK/US exchange rate are obtained with a nonlinear model. Third, taking into account structural breaks reveals that the Divisia aggregate performs better than its Simple Sum counterpart. Finally, Divisia-based models provide more accurate forecasts than Simple Sum-based models provided they are constructed within a nonlinear framework.
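The benchmark in such exercises is simple enough to state in code: the random walk forecasts "no change", and a fundamentals model beats it when its out-of-sample RMSE is lower. The simulated series below is purely illustrative.

```python
import numpy as np

def rmse(forecast, actual):
    return np.sqrt(np.mean((np.asarray(forecast) - np.asarray(actual)) ** 2))

rate = 1.5 + np.cumsum(np.random.default_rng(2).normal(scale=0.01, size=200))
actual, rw_forecast = rate[1:], rate[:-1]   # RW forecast: next rate = current rate
benchmark = rmse(rw_forecast, actual)
# A fundamentals model "beats the RW" when
#   rmse(model_forecast, actual) / benchmark < 1
print(benchmark)
```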
Abstract:
In this article, we examine the issue of high dropout rates in India which has adverse implications for human capital formation and hence for the country's long-term growth potential. Using the 2004–2005 National Sample Survey (NSS) employment–unemployment data, we estimate transition probabilities of moving from a number of different educational levels to higher educational levels using a sequential logit model. Our results suggest that the overall probability of reaching tertiary education is very low. Further, even by the woeful overall standards, women are significantly worse off, particularly in rural areas.
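The sequential logit structure explains why the overall probability of reaching tertiary education can be very low even when each individual transition looks moderate: it is a product of stage-wise continuation probabilities. The coefficients below are illustrative, not the NSS estimates.

```python
import numpy as np

def prob_reach_top(x, stage_coefs):
    """Sequential logit: overall probability of reaching the highest
    educational level is the product of binary-logit continuation
    probabilities, one per transition."""
    p = 1.0
    for b0, b1 in stage_coefs:
        p *= 1 / (1 + np.exp(-(b0 + b1 * x)))   # P(continue | reached stage)
    return p

# e.g. primary -> secondary -> higher secondary -> tertiary
print(prob_reach_top(x=0.0, stage_coefs=[(1.0, 0.5), (0.2, 0.6), (-0.8, 0.7)]))
```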
Abstract:
Improving the performance of private sector small and medium sized enterprises (SMEs) in a cost effective manner is a major concern for government. Governments have saved costs by moving information online rather than through more expensive face-to-face exchanges between advisers and clients. Building on previous work that distinguished between types of advice, this article evaluates whether these changes to delivery mechanisms affect the type of advice received. Using a multinomial logit model of 1334 cases of business advice to small firms collected in England, the study found that advice to improve capabilities was taken by smaller firms who were less likely to have limited liability or undertake business planning. SMEs sought word-of-mouth referrals before taking internal, capability-enhancing advice. This was also the case when that advice was part of a wider package of assistance involving both internal and external aspects. Only when firms took advice that used extant capabilities did they rely on the Internet. Therefore, when the Internet is privileged over face-to-face advice, the changes made by each recipient of advice are likely to diminish, reducing the impact of advice within the economy. This implies that fewer firms will adopt the sorts of management practices that would improve their productivity. © 2014 Taylor & Francis.
Abstract:
In this paper, the start-up process is split conceptually into four stages: considering entrepreneurship, intending to start a new business in the next 3 years, nascent entrepreneurship and owner-managing a newly established business. We investigate the determinants of all of these jointly, using a multinomial logit model, which allows the effects of resources and capabilities to vary across these stages. We employ the Global Entrepreneurship Monitor database for the years 2006–2009, containing 8269 usable observations from respondents drawn from the Lower Layer Super Output Areas in the East Midlands (UK), so that individual observations are linked to space. Our results show that the role of education, experience, and availability of ‘entrepreneurial capital’ in the local neighbourhood varies along the different stages of the entrepreneurial process. In the early stages, the negative (opportunity cost) effect of resource endowment dominates, yet it tends to reverse in the advanced stages, where the positive effect of resources becomes stronger.
Abstract:
Computer models, or simulators, are widely used in a range of scientific fields to aid understanding of the processes involved and make predictions. Such simulators are often computationally demanding and are thus not amenable to statistical analysis. Emulators provide a statistical approximation, or surrogate, for the simulators accounting for the additional approximation uncertainty. This thesis develops a novel sequential screening method to reduce the set of simulator variables considered during emulation. This screening method is shown to require fewer simulator evaluations than existing approaches. Utilising the lower dimensional active variable set simplifies subsequent emulation analysis. For random output, or stochastic, simulators the output dispersion, and thus variance, is typically a function of the inputs. This work extends the emulator framework to account for such heteroscedasticity by constructing two new heteroscedastic Gaussian process representations and proposes an experimental design technique to optimally learn the model parameters. The design criterion is an extension of Fisher information to heteroscedastic variance models. Replicated observations are efficiently handled in both the design and model inference stages. Through a series of simulation experiments on both synthetic and real world simulators, the emulators inferred on optimal designs with replicated observations are shown to outperform equivalent models inferred on space-filling replicate-free designs in terms of both model parameter uncertainty and predictive variance.
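Screening in general can be illustrated with one-at-a-time elementary effects, where inputs with consistently negligible effects are dropped before emulation; note this is a generic sketch, not the sequential method the thesis develops.

```python
import numpy as np

def elementary_effects(f, x0, delta=0.1):
    """One-at-a-time elementary effects of each simulator input at x0.
    Inputs whose effects are consistently near zero across the input
    space are candidates for removal from the active variable set."""
    base = f(x0)
    effects = []
    for k in range(len(x0)):
        x = x0.copy()
        x[k] += delta
        effects.append((f(x) - base) / delta)
    return np.array(effects)

f = lambda x: x[0] ** 2 + 3 * x[1]          # toy simulator: the third input is inert
print(elementary_effects(f, np.array([1.0, 1.0, 1.0])))   # last effect ~ 0
```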
Abstract:
Molecular transport in phase space is crucial for chemical reactions because it defines how pre-reactive molecular configurations are found during the time evolution of the system. Using Molecular Dynamics (MD) simulated atomistic trajectories, we test the assumption of normal diffusion in the phase space for bulk water at ambient conditions by checking the equivalence of the transport to the random walk model. Contrary to common expectations, we have found that some statistical features of the transport in the phase space differ from those of the normal diffusion models. This implies a non-random character of the path search process by the reacting complexes in water solutions. Our further numerical experiments show that a significant long period of non-stationarity in the transition probabilities of the segments of molecular trajectories can account for the observed non-uniform filling of the phase space. Surprisingly, the characteristic periods of this non-stationarity amount to hundreds of nanoseconds, a much longer time scale than the typical lifetime of known liquid water molecular structures (several picoseconds).
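A standard way to test the random-walk (normal diffusion) assumption on a trajectory is to fit the scaling exponent of the mean-squared displacement; an exponent close to 1 indicates normal diffusion. The sketch below applies this to a synthetic walk and is only illustrative of the kind of check involved.

```python
import numpy as np

def msd_exponent(traj, max_lag=100):
    """Fit MSD(lag) ~ lag**alpha on log-log axes; alpha near 1 means
    normal diffusion, deviations point to anomalous transport."""
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean((traj[lag:] - traj[:-lag]) ** 2) for lag in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

walk = np.cumsum(np.random.default_rng(3).normal(size=10000))
print(msd_exponent(walk))   # ~1.0 for an ordinary random walk
```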
Abstract:
We propose a simple model that captures the salient properties of distribution networks, and study the possible occurrence of blackouts, i.e., sudden failures of large portions of such networks. The model is defined on a random graph of finite connectivity. The nodes of the graph represent hubs of the network, while the edges of the graph represent the links of the distribution network. Both the nodes and the edges carry dynamical two-state variables representing the functioning or dysfunctional state of the node or link in question. We describe a dynamical process in which the breakdown of a link or node is triggered when the level of maintenance it receives falls below a given threshold. This form of dynamics can lead to catastrophic breakdown if levels of maintenance are themselves dependent on the functioning of the net, once maintenance levels locally fall below a critical threshold due to fluctuations. We formulate conditions under which such systems can be analyzed in terms of thermodynamic equilibrium techniques, and under these conditions derive a phase diagram characterizing the collective behavior of the system, given its model parameters. The phase diagram is confirmed qualitatively and quantitatively by simulations on explicit realizations of the graph, thus confirming the validity of our approach. © 2007 The American Physical Society.
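A crude cascade of this kind can be simulated directly: below, a node fails when the fraction of functioning neighbours supporting it (a stand-in for the maintenance it receives) drops below a threshold. The graph construction, threshold rule, and parameter values are illustrative assumptions, not the model analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N, c, theta = 1000, 4.0, 0.5   # nodes, mean connectivity, maintenance threshold

adj = np.triu(rng.random((N, N)) < c / N, 1)
adj = (adj | adj.T).astype(int)          # symmetric random graph, no self-loops
deg = np.maximum(adj.sum(1), 1)
alive = rng.random(N) > 0.05             # small initial fluctuation of failures

changed = True
while changed:                           # iterate the breakdown to a fixed point
    support = (adj @ alive.astype(int)) / deg   # fraction of functioning neighbours
    new_alive = alive & (support >= theta)
    changed = bool((new_alive != alive).any())
    alive = new_alive
print(alive.mean())   # surviving fraction; near zero signals a blackout
```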
Abstract:
When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases, it will be possible to make observations, or occasionally use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either not available or not even possible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system - both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interacting between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision making process.
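To make the continuous-variable case concrete, the snippet below fits a distribution to elicited quantile judgements, which is the core numerical step in SHELF-style elicitation; the normal family, the quartile values, and the SciPy-based interface are assumptions for illustration.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Expert's elicited quartiles for an uncertain model input (made-up values)
probs = np.array([0.25, 0.5, 0.75])
values = np.array([12.0, 15.0, 19.0])

def loss(params):
    """Squared distance between the candidate normal's quantiles
    and the expert's judgements."""
    mu, sigma = params
    return np.sum((stats.norm.ppf(probs, mu, abs(sigma)) - values) ** 2)

mu, sigma = minimize(loss, x0=[15.0, 5.0]).x
print(mu, abs(sigma))   # elicited distribution, ready to encode (e.g. in UncertML)
```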
Abstract:
There is an alternative model of the one-way ANOVA called the 'random effects' model or 'nested' design, in which the objective is not to test specific effects but to estimate the degree of variation of a particular measurement and to compare different sources of variation that influence the measurement in space and/or time. The most important statistics from a random effects model are the components of variance, which estimate the variance associated with each of the sources of variation influencing a measurement. The nested design is particularly useful in preliminary experiments designed to estimate different sources of variation and in the planning of appropriate sampling strategies.
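For a balanced design, the components of variance follow directly from the ANOVA mean squares; a minimal sketch (with made-up numbers) is given below.

```python
import numpy as np

def variance_components(groups):
    """Balanced one-way random effects ANOVA:
    within-group component  = MS_within,
    between-group component = (MS_between - MS_within) / n."""
    data = np.asarray(groups, dtype=float)
    a, n = data.shape                     # number of groups, replicates per group
    ms_between = n * np.sum((data.mean(1) - data.mean()) ** 2) / (a - 1)
    ms_within = np.sum((data - data.mean(1, keepdims=True)) ** 2) / (a * (n - 1))
    return ms_within, max((ms_between - ms_within) / n, 0.0)

print(variance_components([[4.1, 3.9, 4.3], [5.0, 5.2, 4.8], [3.2, 3.5, 3.3]]))
```

Negative estimates of the between-group component are conventionally truncated at zero, as in the sketch.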
Abstract:
This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6, but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of logarithmic critical areas for the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor in which the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local spatially windowed matched filter. The model was extended to include complex stimuli, and its applicability to the data was found to be successful. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2. The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change their strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced to the task. The model described the data accurately.
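The two observer strategies contrasted in the model can be written down compactly: a cross-correlation (matched filter) detector assumes the template is known, while an auto-correlation detector does not. The toy grating and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
template = np.sin(np.linspace(0, 4 * np.pi, 64))   # known grating pattern

def matched_filter(stimulus, template):
    """Cross-correlation detector: observer knows the stimulus
    parameters exactly (non-randomised condition)."""
    return float(stimulus @ template)

def autocorrelation(stimulus):
    """Auto-correlation detector: fallback strategy when orientation or
    phase randomisation removes the observer's template."""
    return float(stimulus @ stimulus)

noisy = template + rng.normal(scale=1.0, size=template.size)
print(matched_filter(noisy, template), autocorrelation(noisy))
```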