110 results for Fundamentals of computing theory
Abstract:
Quantile forecasts are central to risk management decisions because of the widespread use of Value-at-Risk. A quantile forecast is the product of two factors: the model used to forecast volatility, and the method of computing quantiles from the volatility forecasts. In this paper we calculate and evaluate quantile forecasts of the daily exchange rate returns of five currencies. The forecasting models that have been used in recent analyses of the predictability of daily realized volatility permit a comparison of the predictive power of different measures of intraday variation and intraday returns in forecasting exchange rate variability. The methods of computing quantile forecasts include making distributional assumptions for future daily returns as well as using the empirical distribution of predicted standardized returns with both rolling and recursive samples. Our main findings are that the Heterogeneous Autoregressive model provides more accurate volatility and quantile forecasts for currencies which experience shifts in volatility, such as the Canadian dollar, and that the use of the empirical distribution to calculate quantiles can improve forecasts when there are shifts in volatility.
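As a rough illustration of the second factor above (turning a volatility forecast into a quantile forecast), the sketch below contrasts a Gaussian distributional assumption with the empirical distribution of standardized returns. It is not the paper's HAR specification or its currency data; the function names, window and numbers are hypothetical.

```python
# Illustrative sketch only: two ways of converting a one-step-ahead volatility
# forecast into a Value-at-Risk quantile forecast.
import numpy as np
from scipy.stats import norm

def var_gaussian(sigma_forecast, alpha=0.01, mu=0.0):
    """Quantile forecast assuming conditionally Gaussian daily returns."""
    return mu + sigma_forecast * norm.ppf(alpha)

def var_empirical(sigma_forecast, past_returns, past_sigmas, alpha=0.01):
    """Quantile forecast from the empirical distribution of standardized
    returns z_t = r_t / sigma_t over a rolling (or recursive) sample."""
    z = np.asarray(past_returns) / np.asarray(past_sigmas)
    return sigma_forecast * np.quantile(z, alpha)

# Example with made-up numbers: a 1.2% volatility forecast and a window of
# 500 historical returns and volatilities.
rng = np.random.default_rng(0)
sigmas = 0.01 * np.exp(0.1 * rng.standard_normal(500))
returns = sigmas * rng.standard_normal(500)
print(var_gaussian(0.012), var_empirical(0.012, returns, sigmas))
```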
Abstract:
Computational formalisms have been pushing the boundaries of the field of computing for the last 80 years and much debate has surrounded what computing entails: what it is, and what it is not. This paper seeks to explore the boundaries of the ideas of computation and provide a framework for enabling a constructive discussion of computational ideas. First, a review of computing is given, ranging from Turing Machines to interactive computing. Then, a variety of natural physical systems are considered for their computational qualities. From this exploration, a framework is presented under which all dynamical systems can be considered as instances of the class of abstract computational platforms. An abstract computational platform is defined by both its intrinsic dynamics and how it allows computation that is meaningful to an external agent through the configuration of constraints upon those dynamics. It is asserted that a platform’s computational expressiveness is directly related to the freedom with which constraints can be placed. Finally, the requirements for a formal constraint description language are considered and it is proposed that Abstract State Machines may provide a reasonable basis for such a language.
Abstract:
This paper provides an overview of interpolation of Banach and Hilbert spaces, with a focus on establishing when equivalence of norms is in fact equality of norms in the key results of the theory. (In brief, our conclusion for the Hilbert space case is that, with the right normalisations, all the key results hold with equality of norms.) In the final section we apply the Hilbert space results to the Sobolev spaces H^s(Ω) and H̃^s(Ω), for s ∈ ℝ and an open Ω ⊂ ℝ^n. We exhibit examples in one and two dimensions of sets Ω for which these scales of Sobolev spaces are not interpolation scales. In the cases when they are interpolation scales (in particular, if Ω is Lipschitz) we exhibit examples that show that, in general, the interpolation norm does not coincide with the intrinsic Sobolev norm and, in fact, the ratio of these two norms can be arbitrarily large.
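For orientation, the block below records one standard normalisation of the real (K-method) interpolation norm for a compatible couple (X_0, X_1); whether such a norm is merely equivalent to, or actually equals, an intrinsic norm is the kind of question the paper addresses. The normalisation shown is a common textbook one and not necessarily the paper's.

```latex
% One standard normalisation of real (K-method) interpolation between a
% compatible couple (X_0, X_1); the paper's exact normalisation may differ.
\[
  K(t,u) \;=\; \inf_{u = u_0 + u_1}\bigl(\|u_0\|_{X_0} + t\,\|u_1\|_{X_1}\bigr),
  \qquad t > 0,
\]
\[
  \|u\|_{(X_0,X_1)_{\theta,2}}
  \;=\;
  \Bigl(\int_0^\infty \bigl(t^{-\theta}K(t,u)\bigr)^2 \,\frac{dt}{t}\Bigr)^{1/2},
  \qquad 0 < \theta < 1 .
\]
```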
Abstract:
Casson and Wadeson (International Journal of the Economics of Business, 1998, 5, pp. 5-27) have modelled the dialogue, or conversation, which customers have with their suppliers in order to convey their requirements, while taking production implications into account. They showed that this has important implications for the positioning of the boundaries of the firm. Unfortunately, their model has the restriction that communication is only costly in the direction of customer to supplier. This paper extends their model by introducing two-way communication costs. It shows that the level of communication cost in the direction of supplier to customer is a key additional factor in determining the nature of the dialogue that takes place. It also shows that this has important additional implications for the positioning of the boundaries of the firm. Custom computer software development is used as an example of an application of the theory.
Abstract:
The magnetization properties of aggregated ferrofluids are calculated by combining the chain formation model developed by Zubarev with the modified mean-field theory. Using moderate assumptions for the inter- and intrachain interactions we obtain expressions for the magnetization and initial susceptibility. When comparing the results of our theory to molecular dynamics simulations of the same model we find that at large dipolar couplings (λ > 3) the chain formation model appears to give better predictions than other analytical approaches. This supports the idea that chain formation is an important structural ingredient of strongly interacting dipolar particles.
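For readers outside the field, the block below records the standard Langevin magnetization and the first-order modified mean-field closure that such calculations build on; the chain-formation corrections themselves are not reproduced, and the notation is generic rather than the paper's.

```latex
% Standard Langevin magnetization and first-order modified mean-field (MMF1)
% closure; n is the particle number density and m the particle magnetic moment.
\[
  M_L(H) = n\, m\, L(\xi), \qquad
  \xi = \frac{\mu_0 m H}{k_B T}, \qquad
  L(x) = \coth x - \frac{1}{x},
\]
\[
  M(H) \;\approx\; M_L\!\Bigl(H + \tfrac{1}{3} M_L(H)\Bigr).
\]
% The chain formation model modifies these expressions through the inter- and
% intrachain interaction terms referred to in the abstract.
```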
Abstract:
The study of the mechanical energy budget of the oceans using Lorenz available potential energy (APE) theory is based on knowledge of the adiabatically re-arranged Lorenz reference state of minimum potential energy. The compressible and nonlinear character of the equation of state for seawater has been thought to cause the reference state to be ill-defined, casting doubt on the usefulness of APE theory for investigating ocean energetics under realistic conditions. Using a method based on the volume frequency distribution of parcels as a function of temperature and salinity in the context of the seawater Boussinesq approximation, which we illustrate using climatological data, we show that compressibility effects are in fact minor. The reference state can be regarded as a well-defined one-dimensional function of depth, which forms a surface in temperature, salinity and density space between the surface and the bottom of the ocean. For a very small proportion of water masses, this surface can be multivalued and water parcels can have up to two statically stable levels in the reference density profile, of which the shallowest is energetically more accessible. Classifying parcels from the surface to the bottom gives a different reference density profile than classifying in the opposite direction; however, this difference is negligible. We show that the reference state obtained by standard sorting methods is equivalent to, though computationally more expensive than, the volume frequency distribution approach. The approach we present can be applied systematically and in a computationally efficient manner to investigate the APE budget of the ocean circulation using models or climatological data.
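A minimal sketch of the standard sorting construction of a reference profile, which the abstract reports to be equivalent to (but more expensive than) the volume-frequency-distribution approach, is given below. It ignores the equation-of-state subtleties discussed above, and all arrays and numbers are hypothetical.

```python
# Minimal sketch of the "sorting" construction of a Lorenz reference profile:
# parcels are ranked by density and stacked by cumulative volume.
import numpy as np

def reference_profile(density, volume, total_area):
    """Return a reference depth for each parcel (lightest at the top)."""
    order = np.argsort(density)            # lightest first
    cum_vol = np.cumsum(volume[order])     # volume above (and including) each sorted parcel
    z_ref = -cum_vol / total_area          # depth reached by that cumulative volume
    out = np.empty_like(z_ref)
    out[order] = z_ref                     # map back to the original parcel order
    return out

density = np.array([1027.5, 1025.1, 1026.3, 1028.0])   # kg m^-3 (made up)
volume = np.array([2.0, 1.0, 1.5, 2.5]) * 1e14         # m^3 (made up)
print(reference_profile(density, volume, total_area=3.6e14))
```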
Abstract:
The assessment of chess players is an increasingly attractive opportunity and an unfortunate necessity. The chess community needs to limit potential reputational damage by inhibiting cheating and unjustified accusations of cheating: there has been a recent rise in both. A number of counter-intuitive discoveries have been made by benchmarking the intrinsic merit of players’ moves: these call for further investigation. Is Capablanca actually, objectively the most accurate World Champion? Has Elo rating inflation not taken place? Stimulated by FIDE/ACP, we revisit the fundamentals of the subject to advance a framework suitable for improved standards of computational experiment and more precise results. Other domains look to chess as the demonstrator of good practice, including the rating of professionals making high-value decisions under pressure, personnel evaluation by Multichoice Assessment and the organization of crowd-sourcing in citizen science projects. The ‘3P’ themes of performance, prediction and profiling pervade all these domains.
Abstract:
Representation error arises from the inability of the forecast model to accurately simulate the climatology of the truth. We present a rigorous framework for understanding this kind of error of representation. This framework shows that the lack of an inverse in the relationship between the true climatology (true attractor) and the forecast climatology (forecast attractor) leads to the error of representation. A new gain matrix for the data assimilation problem is derived that illustrates the proper approaches one may take to perform Bayesian data assimilation when the observations are of states on one attractor but the forecast model resides on another. This new data assimilation algorithm is the optimal scheme for the situation where the distributions on the true and forecast attractors are separately Gaussian and there exists a linear map between them. The results of this theory are illustrated in a simple Gaussian multivariate model.
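As a point of reference, the block below states the familiar Gaussian analysis update with the standard Kalman gain; the gain derived in the paper modifies this baseline to account for the linear map between the true and forecast attractors.

```latex
% Baseline Gaussian analysis update (standard Kalman gain), included only as
% the familiar special case that the paper's new gain generalizes.
\[
  \mathbf{x}^a = \mathbf{x}^f + \mathbf{K}\bigl(\mathbf{y} - \mathbf{H}\mathbf{x}^f\bigr),
  \qquad
  \mathbf{K} = \mathbf{P}^f \mathbf{H}^{\mathsf T}
  \bigl(\mathbf{H}\mathbf{P}^f\mathbf{H}^{\mathsf T} + \mathbf{R}\bigr)^{-1}.
\]
```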
Abstract:
Thrift [2008. Non-representational theory: space, politics, affect, 65. Abingdon: Routledge] has identified disenchantment as “[o]ne of the most damaging ideas” within social scientific and humanities research. As we have argued elsewhere, “[m]etanarratives of disenchantment and their concomitant preoccupation with destructive power go some way toward accounting for the overwhelmingly ‘critical’ character of geographical theory over the last 40 years” [Woodyer, T. and Geoghegan, H., 2013. (Re)enchanting geography? The nature of being critical and the character of critique in human geography. Progress in Human Geography, 37 (2), 195–214]. Through its experimentation with different ways of working and writing, cultural geography plays an important role in challenging extant habits of critical thinking. In this paper, we use the concept of “enchantment” to make sense of the deep and powerful affinities exposed in our research experiences and how these might be used to pursue a critical, yet more cheerful way of engaging with the geographies of the world.
Abstract:
Implicit dynamic-algebraic equations, known in control theory as descriptor systems, arise naturally in many applications. Such systems may not be regular (often referred to as singular). In that case the equations may not have unique solutions for consistent initial conditions and arbitrary inputs and the system may not be controllable or observable. Many control systems can be regularized by proportional and/or derivative feedback. We present an overview of mathematical theory and numerical techniques for regularizing descriptor systems using feedback controls. The aim is to provide stable numerical techniques for analyzing and constructing regular control and state estimation systems and for ensuring that these systems are robust. State and output feedback designs for regularizing linear time-invariant systems are described, including methods for disturbance decoupling and mixed output problems. Extensions of these techniques to time-varying linear and nonlinear systems are discussed in the final section.
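A minimal sketch of the basic regularization idea for the linear time-invariant case is given below, using generic notation that is not necessarily the paper's.

```latex
% Linear time-invariant descriptor system:
\[
  E\dot{x} = Ax + Bu, \qquad y = Cx .
\]
% Proportional-plus-derivative output feedback u = F y - G \dot{y} + v gives
\[
  (E + BGC)\,\dot{x} = (A + BFC)\,x + Bv ,
\]
% and F, G are chosen so that the closed-loop pencil
% \lambda(E + BGC) - (A + BFC) is regular (for example, E + BGC nonsingular),
% restoring unique solvability for consistent initial conditions and
% arbitrary inputs.
```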
Abstract:
This paper reviews the literature on the practice of using Online Analytical Processing (OLAP) systems to recall information stored by Online Transactional Processing (OLTP) systems. The review provides a basis for discussing the need for information recalled through OLAP systems to preserve the transactional contexts of the data captured by the respective OLTP system. The paper observes an industry trend in which OLTP systems process information into data that are then stored in databases without the business rules that were used to process them. This necessitates a practice whereby sets of business rules are used to extract, cleanse, transform and load data from disparate OLTP systems into OLAP databases to support complex reporting and analytics; these rules are usually not the same as the business rules used to capture the data in the originating OLTP systems. The paper argues that differences between the business rules used to interpret the same data sets risk gaps in semantics between the information captured by OLTP systems and the information recalled through OLAP systems. Literature on modelling business transaction information as facts with context, as part of the modelling of information systems, was reviewed to identify design trends that contribute to the design quality of OLTP and OLAP systems. The paper then argues that the quality of OLTP and OLAP system design depends critically on capturing facts with their associated context, encoding facts with context into data together with business rules, storing and sourcing data together with those rules, decoding data with business rules back into facts with context, and recalling facts with their associated contexts. The paper proposes UBIRQ, a design model to aid the co-design of data and business-rule storage for OLTP and OLAP purposes. The proposed model enables the implementation and use of multi-purpose databases and business-rule stores for OLTP and OLAP systems. Such implementations would allow OLTP systems to record and store data together with the executions of business rules, so that both OLTP and OLAP systems can query data alongside the business rules used to capture them, ensuring that information recalled via OLAP systems preserves the transactional contexts of the data captured by the respective OLTP system.
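Purely as a hypothetical illustration (the abstract does not give UBIRQ's schema), the sketch below stores each transactional fact together with the identifier and version of the business rule used to capture it, so that later recall can retain the fact's original context.

```python
# Hypothetical sketch only: a fact record that carries the business rule
# (and its version) under which it was captured, so OLAP-side recall can
# decode the fact in its original transactional context.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class BusinessRule:
    rule_id: str
    version: str
    description: str

@dataclass(frozen=True)
class Fact:
    fact_id: str
    payload: dict                      # the captured transaction data
    captured_by: BusinessRule          # rule that encoded the fact
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

rule = BusinessRule("discount-eligibility", "2.1", "Discount applied when order total > 100")
fact = Fact("order-42", {"total": 120.0, "discount": 0.1}, rule)
print(fact.captured_by.rule_id, fact.payload)
```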
Abstract:
Increasing prominence of the psychological ownership (PO) construct in management studies raises questions about how PO manifests at the level of the individual. In this article, we unpack the mechanism by which individuals use PO to express aspects of their identity and explore how PO manifestations can display congruence as well as incongruence between layers of self. As a conceptual foundation, we develop a dynamic model of individual identity that differentiates between four layers of self, namely, the “core self,” “learned self,” “lived self,” and “perceived self.” We then bring identity and PO literatures together to suggest a framework of PO manifestation and expression viewed through the lens of the four presented layers of self. In exploring our framework, we develop a number of propositions that lay the foundation for future empirical and conceptual work and discuss implications for theory and practice.
Abstract:
The UK government is mandating the use of building information modelling (BIM) in large public projects by 2016. As a result, engineering firms are faced with challenges related to embedding new technologies and associated working practices for the digital delivery of major infrastructure projects. Diffusion of innovations theory is used to investigate how digital innovations diffuse across complex firms. A contextualist approach is employed through an in-depth case study of a large, international engineering project-based firm. The analysis of the empirical data, which was collected over a four-year period of close interaction with the firm, reveals parallel paths of diffusion occurring across the firm, where both the innovation and the firm context were continually changing. The diffusion process is traced over three phases: centralization of technology management, standardization of digital working practices, and globalization of digital resources. The findings describe the diffusion of a digital innovation as multiple and partial within a complex social system during times of change and organizational uncertainty, thereby contributing to diffusion of innovations studies in construction by showing a range of activities and dynamics of a non-linear diffusion process.
Abstract:
The notions of resolution and discrimination of probability forecasts are revisited. It is argued that the common concept underlying both resolution and discrimination is the dependence (in the sense of probability theory) between forecasts and observations. More specifically, a forecast has no resolution if and only if it has no discrimination if and only if forecast and observation are stochastically independent. A statistical test for independence is thus also a test for no resolution and, at the same time, for no discrimination. The resolution term in the decomposition of the logarithmic scoring rule and the area under the Receiver Operating Characteristic will be investigated in this light.
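A minimal sketch of the independence-test idea, assuming a binary event and probability forecasts grouped into bins, is given below; the contingency table is made up.

```python
# A chi-squared test of independence on a forecast/observation contingency
# table: rejecting independence indicates nonzero resolution (equivalently,
# nonzero discrimination). The counts below are made up.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: forecast probability bins; columns: event did not occur / occurred.
table = np.array([
    [80, 5],    # low-probability forecasts
    [40, 20],   # medium-probability forecasts
    [10, 45],   # high-probability forecasts
])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value rejects independence of forecast and observation.
```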