754 results for Knowledge-Capital Model
Abstract:
Inference on the basis of recognition alone is assumed to occur prior to accessing further information (Pachur & Hertwig, 2006). A counterintuitive result of this is the “less-is-more” effect: a drop in the accuracy of judging which of two or more items scores highest on a given criterion as more items are learned (Frosch, Beaman & McCloy, 2007; Goldstein & Gigerenzer, 2002). In this paper, we show that less-is-more effects are not unique to recognition-based inference but can also be observed with a knowledge-based strategy, provided two assumptions, limited information and differential access, are met. The LINDA model, which embodies these assumptions, is presented. Analysis of the less-is-more effects predicted by LINDA and by recognition-driven inference shows that these occur for similar reasons, casting doubt upon the “special” nature of recognition-based inference. Suggestions are made for empirical tests to compare knowledge-based and recognition-based less-is-more effects.
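To make the less-is-more prediction concrete, the sketch below computes expected choice accuracy as a function of the number of recognised items, following the accuracy decomposition associated with the recognition heuristic (Goldstein & Gigerenzer, 2002). The validity values alpha and beta are illustrative assumptions, not parameters from the paper or from the LINDA model.

```python
# Sketch of the recognition-heuristic accuracy curve behind the "less-is-more"
# effect (after Goldstein & Gigerenzer, 2002). alpha = recognition validity,
# beta = knowledge validity; the values below are illustrative assumptions.

def expected_accuracy(n, N, alpha, beta):
    """Expected proportion correct when n of N items are recognised."""
    pair_norm = N * (N - 1)                       # common denominator, 2 * C(N, 2)
    p_one = 2 * n * (N - n) / pair_norm           # exactly one recognised -> use recognition
    p_both = n * (n - 1) / pair_norm              # both recognised -> use further knowledge
    p_none = (N - n) * (N - n - 1) / pair_norm    # neither recognised -> guess
    return p_one * alpha + p_both * beta + p_none * 0.5

N = 100
alpha, beta = 0.8, 0.6                            # recognition more valid than knowledge
accuracy = [expected_accuracy(n, N, alpha, beta) for n in range(N + 1)]
best_n = max(range(N + 1), key=accuracy.__getitem__)
print(f"accuracy peaks at n = {best_n} recognised items, not at n = {N}")
```

With alpha greater than beta, the curve peaks before all items are recognised, which is the less-is-more pattern the abstract describes.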
Abstract:
There are a number of challenges associated with managing knowledge and information in construction organizations delivering major capital assets. These include the ever-increasing volumes of information, losing people to retirement or to competitors, the continuously changing nature of information, the lack of methods for eliciting useful knowledge, the development of new information technologies, and changes in management and innovation practices. Existing tools and methodologies for valuing intangible assets in fields such as engineering, project management and financial accounting do not fully address the issues associated with the valuation of information and knowledge. Information is rarely recorded in a way that allows a document to be valued, either when it is produced or when it is subsequently retrieved and re-used. In addition, there is a wealth of tacit personal knowledge which, if codified into documentary information, may prove to be very valuable to operators of the finished asset or to future designers. This paper addresses the problem of information overload and identifies the differences between data, information and knowledge. An exploratory study was conducted with a leading construction consultant, examining three perspectives (business, project management and document management) through structured interviews, and specifically how to value information in practical terms. Major challenges in information management are identified. A through-life Information Evaluation Methodology (IEM) is presented to reduce information overload and to make information more valuable in the future.
Abstract:
This paper introduces perspex algebra which is being developed as a common representation of geometrical knowledge. A perspex can currently be interpreted in one of four ways. First, the algebraic perspex is a generalization of matrices; it provides the most general representation for all of the interpretations of a perspex. The algebraic perspex can be used to describe arbitrary sets of coordinates. The remaining three interpretations of the perspex are all related to square matrices and operate in a Euclidean model of projective space-time, called perspex space. Perspex space differs from the usual Euclidean model of projective space in that it contains the point at nullity. It is argued that the point at nullity is necessary for a consistent account of perspective in top-down vision. Second, the geometric perspex is a simplex in perspex space. It can be used as a primitive building block for shapes, or as a way of recording landmarks on shapes. Third, the transformational perspex describes linear transformations in perspex space that provide the affine and perspective transformations in space-time. It can be used to match a prototype shape to an image, even in so-called 'accidental' views where the depth of an object disappears from view, or an object stays in the same place across time. Fourth, the parametric perspex describes the geometric and transformational perspexes in terms of parameters that are related to everyday English descriptions. The parametric perspex can be used to obtain both continuous and categorical perception of objects. The paper ends with a discussion of issues related to using a perspex to describe logic.
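As a rough illustration of the geometric and transformational interpretations, the sketch below uses ordinary homogeneous coordinates and NumPy: a 4x4 matrix serves as a simplex (its columns are vertices) and another 4x4 matrix applies a perspective projection. It does not implement perspex space, transreal arithmetic or the point at nullity, although the all-zero column it produces for the origin is exactly the kind of degenerate case that standard projective geometry cannot name and that the point at nullity is argued to address.

```python
# Illustrative only: a plain homogeneous-coordinates treatment of a 4x4 matrix
# acting both as a shape (columns as simplex vertices) and as a perspective
# transformation. This does NOT implement perspex space or the point at nullity.
import numpy as np

# Columns are four simplex vertices in homogeneous coordinates (x, y, z, w):
# the origin and three points lying at z >= 2.
simplex = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 2.0, 2.0, 3.0],
    [1.0, 1.0, 1.0, 1.0],
])

# Perspective projection onto the plane z = d with the eye at the origin.
d = 2.0
perspective = np.array([
    [1.0, 0.0, 0.0,     0.0],
    [0.0, 1.0, 0.0,     0.0],
    [0.0, 0.0, 1.0,     0.0],
    [0.0, 0.0, 1.0 / d, 0.0],
])

transformed = perspective @ simplex
w = transformed[3]
safe_w = np.where(w == 0.0, 1.0, w)   # the origin maps to the all-zero column,
projected = transformed / safe_w      # which ordinary projective geometry leaves undefined
print(projected)
```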
Abstract:
The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
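For readers unfamiliar with the technique, the following sketch shows the simplest, steady-state, linear form of data reconciliation on a single stream splitter: noisy flow measurements are adjusted by weighted least squares so that they satisfy a mass balance. The dynamic, MPC-coupled scheme in the abstract is considerably richer, and all numbers here are illustrative.

```python
# Minimal sketch of linear (steady-state) data reconciliation, a simpler cousin
# of the dynamic scheme described in the abstract. Measurements y are adjusted
# to satisfy a mass balance A x = 0 while minimising the weighted squared
# adjustment (y - x)' V^{-1} (y - x). All numbers are illustrative.
import numpy as np

# Flowsheet: stream 1 splits into streams 2 and 3, so x1 - x2 - x3 = 0.
A = np.array([[1.0, -1.0, -1.0]])          # linear balance constraint A x = 0
y = np.array([10.3, 6.2, 3.9])             # noisy measurements of (x1, x2, x3)
V = np.diag([0.2**2, 0.15**2, 0.15**2])    # measurement error covariance

# Classical weighted least-squares solution:
#   x_hat = y - V A' (A V A')^{-1} A y
AVAt = A @ V @ A.T
x_hat = y - V @ A.T @ np.linalg.solve(AVAt, A @ y)

print("raw measurements :", y)
print("reconciled values:", x_hat)
print("balance residual :", (A @ x_hat).item())   # ~0 after reconciliation
```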
Abstract:
In this study we quantify the relationship between the aerosol optical depth increase from a volcanic eruption and the severity of the subsequent surface temperature decrease. This investigation is made by simulating 10 different sizes of eruption in a global circulation model (GCM) by changing stratospheric sulfate aerosol optical depth at each time step. The sizes of the simulated eruptions range from Pinatubo‐sized up to the magnitude of supervolcanic eruptions around 100 times the size of Pinatubo. From these simulations we find that there is a smooth monotonic relationship between the global mean maximum aerosol optical depth anomaly and the global mean temperature anomaly and we derive a simple mathematical expression which fits this relationship well. We also construct similar relationships between global mean aerosol optical depth and the temperature anomaly at every individual model grid box to produce global maps of best‐fit coefficients and fit residuals. These maps are used with caution to find the eruption size at which a local temperature anomaly is clearly distinct from the local natural variability and to approximate the temperature anomalies which the model may simulate following a Tambora‐sized eruption. To our knowledge, this is the first study which quantifies the relationship between aerosol optical depth and resulting temperature anomalies in a simple way, using the wealth of data that is available from GCM simulations.
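The curve-fitting step can be illustrated in a few lines: given peak aerosol optical depth anomalies and the corresponding temperature anomalies from a set of simulated eruptions, fit a simple parametric expression and report the residuals. The saturating functional form and the synthetic data below are assumptions chosen for illustration only; they are not the expression or the values derived in the paper.

```python
# Sketch of the fitting step described in the abstract: relate peak aerosol
# optical depth (AOD) anomaly to peak global-mean temperature anomaly.
# Functional form and numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def temp_anomaly(aod, a, b):
    """Assumed saturating response: cooling levels off for very large eruptions."""
    return -a * (1.0 - np.exp(-aod / b))

# Synthetic stand-in for GCM output: 10 eruption sizes from ~Pinatubo-sized upwards.
aod = np.linspace(0.15, 15.0, 10)
rng = np.random.default_rng(0)
dT = temp_anomaly(aod, a=8.0, b=6.0) + rng.normal(0.0, 0.2, aod.size)

(a_fit, b_fit), _ = curve_fit(temp_anomaly, aod, dT, p0=(5.0, 5.0))
print(f"fitted a = {a_fit:.2f} K, b = {b_fit:.2f} (AOD units)")

residuals = dT - temp_anomaly(aod, a_fit, b_fit)
print("rms residual:", float(np.sqrt(np.mean(residuals**2))), "K")
```

Repeating the same fit grid box by grid box, as the abstract describes, would yield maps of best-fit coefficients and residuals.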
Abstract:
Internal risk management models of the kind popularized by J. P. Morgan are now used widely by the world’s most sophisticated financial institutions as a means of measuring risk. Using the returns on three of the most popular futures contracts on the London International Financial Futures Exchange, in this paper we investigate the possibility of using multivariate generalized autoregressive conditional heteroscedasticity (GARCH) models for the calculation of minimum capital risk requirements (MCRRs). We propose a method for the estimation of the value at risk of a portfolio based on a multivariate GARCH model. We find that the consideration of the correlation between the contracts can lead to more accurate, and therefore more appropriate, MCRRs compared with the values obtained from a univariate approach to the problem.
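As a much-simplified illustration of the modelling idea, not the paper's multivariate specification or its MCRR simulation procedure, the sketch below runs a univariate GARCH(1,1) conditional-variance recursion on synthetic returns and converts the one-step-ahead volatility forecast into a Gaussian value-at-risk figure. Parameter values, the position size and the normality assumption are all illustrative.

```python
# Highly simplified sketch of GARCH-based value at risk: a univariate
# GARCH(1,1) variance recursion plus a one-step-ahead Gaussian VaR.
import numpy as np
from scipy.stats import norm

def garch_11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion: s2_t = omega + alpha*r_{t-1}^2 + beta*s2_{t-1}."""
    s2 = np.empty(len(returns) + 1)
    s2[0] = np.var(returns)                      # initialise at the sample variance
    for t, r in enumerate(returns):
        s2[t + 1] = omega + alpha * r**2 + beta * s2[t]
    return s2                                    # s2[-1] is the one-step-ahead forecast

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, 1000)            # synthetic daily futures returns

# Illustrative (not estimated) GARCH(1,1) parameters.
s2 = garch_11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90)
sigma_forecast = np.sqrt(s2[-1])

position = 1_000_000                             # notional long position
var_99 = -norm.ppf(0.01) * sigma_forecast * position
print(f"1-day 99% VaR ~ {var_99:,.0f} (same units as the position)")
```

In the multivariate setting the paper studies, the conditional covariance matrix of the contracts replaces the single variance, which is how correlation between contracts feeds into the capital requirement.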
Abstract:
This paper investigates the frequency of extreme events for three LIFFE futures contracts for the calculation of minimum capital risk requirements (MCRRs). We propose a semiparametric approach where the tails are modelled by the Generalized Pareto Distribution and smaller risks are captured by the empirical distribution function. We compare the capital requirements from this approach with those calculated from the unconditional density and from a conditional density, a GARCH(1,1) model. Our primary finding is that, both in-sample and for a hold-out sample, our extreme value approach yields superior results to either of the other two models, which do not explicitly model the tails of the return distribution. Since the use of these internal models will be permitted under the EC-CAD II, they could be widely adopted in the near future for determining capital adequacy. Hence, close scrutiny of competing models is required to avoid a potentially costly misallocation of capital resources while at the same time ensuring the safety of the financial system.
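A minimal sketch of the semiparametric tail idea, under illustrative assumptions (synthetic Student-t losses, a 95th-percentile threshold): exceedances over the threshold are fitted with a Generalized Pareto Distribution via scipy, quantiles below the threshold come from the empirical distribution, and the standard peaks-over-threshold formula gives VaR at high confidence levels.

```python
# Sketch of the semiparametric approach in the abstract: GPD for the tail,
# empirical distribution for the body. Data and threshold are illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
losses = rng.standard_t(df=4, size=5000) * 0.01   # synthetic heavy-tailed daily losses

u = np.quantile(losses, 0.95)                     # threshold: empirical 95th percentile
exceedances = losses[losses > u] - u
zeta_u = exceedances.size / losses.size           # probability of exceeding u

# Fit the GPD to the exceedances (location fixed at zero).
xi, _, sigma = genpareto.fit(exceedances, floc=0.0)

def var_semiparametric(p):
    """VaR at confidence level p: empirical quantile below u, GPD tail above u."""
    if p < 1.0 - zeta_u:
        return np.quantile(losses, p)             # body: empirical quantile
    return u + (sigma / xi) * (((1.0 - p) / zeta_u) ** (-xi) - 1.0)

print(f"99%   VaR: {var_semiparametric(0.99):.4f}")
print(f"99.9% VaR: {var_semiparametric(0.999):.4f}")
```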
Abstract:
The relevance of regional policy for less favoured regions (LFRs) reveals itself when policy-makers must reconcile competitiveness with social cohesion through the adaptation of competition or innovation policies. The vast literature in this area generally builds on an overarching concept of ‘social capital’ as the necessary relational infrastructure for collective action diversification and policy integration, in a context much influenced by a dynamic of industrial change and a necessary balance between the creation and diffusion of ‘knowledge’ through learning. This relational infrastructure or ‘social capital’ is centred on people’s willingness to cooperate and ‘envision’ futures as a result of “social organization, such as networks, norms and trust that facilitate action and cooperation for mutual benefit” (Putnam, 1993: 35). Advocates of this interpretation of ‘social capital’ have adopted the ‘new growth’ thinking behind ‘systems of innovation’ and ‘competence building’, arguing that networks have the potential to make both public administration and markets more effective as well as ‘learning’ trajectories more inclusive of the development of society as a whole. This essay aims to better understand the role of ‘social capital’ in the production and reproduction of uneven regional development patterns, and to critically assess the limits of a ‘systems concept’ and an institution-centred approach to comparative studies of regional innovation. These aims are discussed in light of the following two assertions: i) learning behaviour, from an economic point of view, has its determinants, and ii) the positive economic outcomes of ‘social capital’ cannot be taken as a given. It is suggested that an agent-centred approach to comparative research best addresses the ‘learning’ determinants and the consequences of social networks on regional development patterns. A brief discussion of the current debate on innovation surveys has been provided to illustrate this point.
Abstract:
The theoretical understanding of online shopping behavior has received much attention. Less focus has been given to the formation of the customer experience (CE) that results from online shopper interactions with e-retailers. This study develops and empirically tests a model of the relationship between antecedents and outcomes of online customer experience (OCE) within Internet shopping websites using an international sample. The study identifies and provides operational measures of these variables plus the cognitive and affective components of OCE. The paper makes contributions towards new knowledge and understanding of how e-
Abstract:
A successful innovation diffusion process may well take the form of a knowledge transfer process. Therefore, the primary objectives of this paper are, first, to evaluate the interrelations between transfer of knowledge and diffusion of innovation and, second, to develop a model that establishes a connection between the two. This has been achieved using a four-step approach. The first step of the approach is to assess and discuss the theories relating to knowledge transfer (KT) and innovation diffusion (ID). The second step focuses on developing basic models for KT and ID, based on the key theories surrounding these areas. A considerable amount of literature has been written on the association between knowledge management and innovation, the respective fields of KT and ID. The next step, therefore, explores the relationship between innovation and knowledge management in order to identify the connections between the latter, i.e. KT and ID. Finally, step four proposes and develops an integrated model for KT and ID. As the developed model suggests, the sub-processes of knowledge transfer can be connected to the innovation diffusion process in several instances, as discussed and illustrated in the paper.
Abstract:
Small knowledge-intensive professional service firms (SKIPSFs) are becoming increasingly important agents of innovation in construction. There is thus an urgent need to better understand the nature and process of innovation in such firms. First, this paper presents a review of the relevant literature. It is concluded that this literature is often not appropriate for SKIPSFs, as it neglects the critical role of knowledge and knowledge workers in innovation within SKIPSFs. Second, a knowledge-based innovation model is presented as a holistic, system-oriented framework to better investigate how SKIPSFs create, manage and exploit innovation. This model is to be tested with case study research.
Abstract:
With the rapid growth of information and technology, knowledge is a valuable asset in organisations and has become significant as a strategic resource. Many studies have focused on managing knowledge in organisations. In particular, knowledge transfer has become a significant issue concerned with the movement of knowledge across organisational boundaries. It enables other organisations to exploit and apply existing knowledge, reducing the time needed to create knowledge and minimising the cost of organisational learning. One way to capture knowledge in a transferable form is through practice. In this paper, we discuss how organisations can transfer knowledge through practice effectively and propose a model for a semiotic approach to practice-oriented knowledge transfer. In this model, practice is treated as a sign that represents knowledge, and its localisation is analysed as a semiotic process.