36 results for Information Models
Abstract:
The problem of model selection for a univariate long memory time series is investigated once a semiparametric estimator for the long memory parameter has been used. Standard information criteria are not consistent in this case. A Modified Information Criterion (MIC) that overcomes these difficulties is introduced, and proofs of its asymptotic validity are provided. The results are general and cover a wide range of short memory processes. Simulation evidence compares the new and existing methodologies, and empirical applications to monthly inflation and daily realized volatility are presented.
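As a rough illustration of the setting described above, the sketch below implements a standard log-periodogram (GPH-type) semiparametric estimator of the long memory parameter d. The bandwidth rule m = sqrt(n) and the estimator choice are generic assumptions for illustration, not the authors' procedure; the MIC penalty itself is defined in the paper and is not reproduced here.

```python
import numpy as np

def gph_estimate(x, m=None):
    """Semiparametric log-periodogram (GPH-type) estimate of the
    long-memory parameter d; m is the number of low frequencies
    used, defaulting to the common sqrt(n) rule (an assumption)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = m or int(np.sqrt(n))
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n           # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    X = np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope = np.polyfit(X, np.log(I), 1)[0]                # log I ~ c - d * X
    return -slope                                         # estimate of d
```

Plugging such an estimate of d into a standard criterion (AIC/BIC) to select the short memory part is exactly the step the abstract reports to be inconsistent, which is what the MIC's modified penalty is designed to repair.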
Abstract:
Our objective was to study whether “compensatory” models provide better descriptions of clinical judgment than fast and frugal models, according to expertise and experience. Fifty practitioners appraised 60 vignettes describing a child with an exacerbation of asthma and rated their propensities to admit the child. Linear logistic (LL) models of their judgments were compared with a matching heuristic (MH) model that searched available cues in order of importance for a critical value indicating an admission decision. There was a small difference between the 2 models in the proportion of patients allocated correctly (admit or not-admit decisions), 91.2% and 87.8%, respectively. The proportion allocated correctly by the LL model was lower for consultants than juniors, whereas the MH model performed equally well for both. In this vignette study, neither model provided any better description of judgments made by consultants or by pediatricians compared to other grades and specialties.
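For concreteness, here is a minimal sketch of the matching heuristic (MH) idea described above: cues are searched in order of importance, and the first cue found at its critical value indicates an admission decision. The cue names and critical values below are hypothetical, not those elicited in the study.

```python
def matching_heuristic(case, cue_order, critical, default=False):
    """Search cues in order of importance; the first cue found at its
    critical value indicates an 'admit' decision, otherwise default."""
    for cue in cue_order:
        if case.get(cue) == critical[cue]:
            return True           # admit
    return default                # no critical cue matched: do not admit

# Hypothetical example: wheeze severity is checked before oxygen saturation.
case = {"wheeze": "severe", "sao2": "normal"}
admit = matching_heuristic(case, ["wheeze", "sao2"],
                           {"wheeze": "severe", "sao2": "low"})   # -> True
```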
Abstract:
This study examines the relation between selection power and selection labor for information retrieval (IR). It is the first part of the development of a labor theoretic approach to IR. Existing models for the evaluation of IR systems are reviewed, and the distinction between operational and experimental systems is partly dissolved. The often covert, but powerful, influence of technology on practice and theory is rendered explicit. Selection power is understood as the human ability to make informed choices between objects or representations of objects and is adopted as the primary value for IR. Selection power is conceived as a property of human consciousness, which can be assisted or frustrated by system design. The concept of selection power is further elucidated, and its value supported, by an example of the discrimination enabled by index descriptions, by the discovery of analogous concepts in partly independent scholarly and wider public discourses, and by its embodiment in the design and use of systems. Selection power is regarded as produced by selection labor, whose nature changes with different historical conditions and concurrent information technologies. Selection labor can itself be decomposed into description and search labor; this decomposition will be treated in a subsequent article, in a further development of a labor theoretic approach to information retrieval.
Abstract:
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA), prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned, as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm, developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
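A minimal sketch of the PCA-then-autoassociative-network idea, assuming generic data and scikit-learn; the paper's specific T2T architecture, training scheme and monitoring statistics are not reproduced, so the component counts and layer sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Hypothetical process data: rows = samples, columns = recorded variables.
X = np.random.default_rng(0).normal(size=(500, 12))

# Step 1: linear PCA removes redundant information and yields a smaller,
# better-conditioned set of scores than the raw process variables.
pca = PCA(n_components=6).fit(X)
T = pca.transform(X)

# Step 2: an autoassociative net maps the scores back onto themselves
# through a bottleneck, acting as the nonlinear PCA (NLPCA) model.
aan = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=5000,
                   random_state=0).fit(T, T)
residual = T - aan.predict(T)   # basis for a monitoring statistic (e.g. SPE)
```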
Abstract:
A problem with the use of the geostatistical Kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation. This is because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models, and the coefficients were used in Kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models. Optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
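As a sketch of how a fitted variogram drives the choice of sample spacing, the function below computes the ordinary-kriging variance at the centre of a square sampling grid from a spherical variogram model, using only the four surrounding samples for simplicity; the principle (widen the spacing until the maximum kriging error reaches a tolerance) is the same with a fuller neighbourhood. All parameter values are hypothetical.

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """Spherical variogram model with range a; gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h == 0.0, 0.0, np.where(h >= a, sill, g))

def ok_variance(spacing, nugget, sill, a):
    """Ordinary-kriging variance at the centre of a square grid cell,
    using the four surrounding sample points (a deliberate simplification)."""
    pts = np.array([[0.0, 0.0], [spacing, 0.0],
                    [0.0, spacing], [spacing, spacing]])
    x0 = np.array([spacing / 2.0, spacing / 2.0])
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d, nugget, sill, a)
    A[n, n] = 0.0                      # Lagrange row/column for unbiasedness
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(pts - x0, axis=1), nugget, sill, a)
    lam = np.linalg.solve(A, b)
    return float(lam @ b)              # sum(lambda_i * gamma_i0) + multiplier

# Choose the widest spacing whose kriging error stays below a tolerance.
feasible = [s for s in range(50, 1001, 50)
            if ok_variance(s, nugget=0.1, sill=1.0, a=800.0) < 0.5]
```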
Abstract:
Models of currency competition focus on the 5% of trading attributable to balance-of-payments flows. We introduce an information approach that focuses on the other 95%. Important departures from traditional models arise when transactions convey information. First, prices reveal different information depending on whether trades are direct or through vehicle currencies. Second, missing markets arise due to insufficiently symmetric information, rather than insufficient transaction scale. Third, the indeterminacy of equilibrium that arises in traditional models is resolved: currency trade patterns no longer concentrate arbitrarily on market size. Empirically, we provide a first analysis of transactions across a full market triangle: the euro, yen and US dollar. The estimated transaction effects on prices support the information approach.
Abstract:
People tend to attribute more regret to a character who has decided to take action and experienced a negative outcome than to one who has decided not to act and experienced a negative outcome. For some decisions, however, this finding is not observed in a between-participants design and thus appears to rely on comparisons between people's representations of action and their representations of inaction. In this article, we outline a mental models account that explains findings from studies that have used within- and between-participants designs, and we suggest that, for decisions with uncertain counterfactual outcomes, information about the consequences of a decision to act causes people to flesh out their representation of the counterfactual states of affairs for inaction. In three experiments, we confirm our predictions about participants' fleshing out of representations, demonstrating that an action effect occurs only when information about the consequences of action is available to participants as they rate the nonactor and when this information about action is informative with respect to judgments about inaction. It is important to note that the action effect always occurs when the decision scenario specifies certain counterfactual outcomes. These results suggest that people sometimes base their attributions of regret on comparisons among different sets of mental models.
Abstract:
We study the effects of amplitude and phase damping decoherence in d-dimensional one-way quantum computation. We focus our attention on low dimensions and elementary unidimensional cluster state resources. Our investigation shows how information transfer and entangling gate simulations are affected for d >= 2. To understand motivations for extending the one-way model to higher dimensions, we describe how basic qudit cluster states deteriorate under environmental noise of experimental interest. In order to protect quantum information from the environment, we consider encoding logical qubits into qudits and compare entangled pairs of linear qubit-cluster states to single qudit clusters of equal length and total dimension. A significant reduction in the performance of cluster state resources for d > 2 is found when Markovian-type decoherence models are present.
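For reference, the single-qubit (d = 2) forms of the two noise channels studied are standard; the qudit generalizations used in the paper are not reproduced here. In Kraus form, with decay probability γ (amplitude damping) and dephasing parameter λ (phase damping), a state ρ evolves as ρ ↦ Σ_k K_k ρ K_k†, with:

```latex
\text{amplitude damping:}\quad
K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma} \end{pmatrix},\qquad
K_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix}

\text{phase damping:}\quad
K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\lambda} \end{pmatrix},\qquad
K_1 = \begin{pmatrix} 0 & 0 \\ 0 & \sqrt{\lambda} \end{pmatrix}
```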
Abstract:
Despite the simultaneous progress of traffic modelling on both the macroscopic and microscopic fronts, recent works [E. Bourrel, J.B. Lesort, Mixing micro and macro representation of traffic flow: a hybrid model based on the LWR theory, Transport. Res. Rec. 1852 (2003) 193–200; D. Helbing, M. Treiber, Critical discussion of “synchronized flow”, Coop. Transport. Dyn. 1 (2002) 2.1–2.24; A. Hennecke, M. Treiber, D. Helbing, Macroscopic simulations of open systems and micro–macro link, in: D. Helbing, H.J. Herrmann, M. Schreckenberg, D.E. Wolf (Eds.), Traffic and Granular Flow ’99, Springer, Berlin, 2000, pp. 383–388] highlighted that one of the most promising ways to simulate traffic flow efficiently on large road networks is a clever combination of both traffic representations: hybrid modelling. Our focus in this paper is to propose two hybrid models in which the macroscopic (resp. mesoscopic) part is based on a class of second order models [A. Aw, M. Rascle, Resurrection of second order models of traffic flow?, SIAM J. Appl. Math. 60 (2000) 916–938] whereas the microscopic part is a Follow-the-Leader type model [D.C. Gazis, R. Herman, R.W. Rothery, Nonlinear follow-the-leader models of traffic flow, Oper. Res. 9 (1961) 545–567; R. Herman, I. Prigogine, Kinetic Theory of Vehicular Traffic, American Elsevier, New York, 1971]. For the first hybrid model, we define precisely the translation of boundary conditions at the interfaces, and for the second we explain the synchronization processes. Furthermore, through numerical simulations we show that wave propagation is not disturbed and mass is accurately conserved when passing from one traffic representation to the other.
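To make the two ingredients concrete: the cited second order model of Aw and Rascle evolves density ρ and velocity v with a pressure-like function p(ρ), while the Gazis–Herman–Rothery follow-the-leader model is a delayed ODE for individual vehicle positions x_n (the constants c, l, m and the reaction time τ are model choices):

```latex
\partial_t \rho + \partial_x(\rho v) = 0, \qquad
\partial_t\bigl(v + p(\rho)\bigr) + v\,\partial_x\bigl(v + p(\rho)\bigr) = 0

\ddot{x}_n(t+\tau) \;=\; c\,
\frac{\dot{x}_n(t+\tau)^{\,m}}{\bigl(x_{n+1}(t) - x_n(t)\bigr)^{l}}
\,\bigl(\dot{x}_{n+1}(t) - \dot{x}_n(t)\bigr)
```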
Abstract:
Many of the challenges faced in health care delivery can be informed through building models. In particular, Discrete Conditional Survival (DCS) models, recently under development, can provide policymakers with a flexible tool to assess time-to-event data. The DCS model can model the survival curve using various underlying distribution types and can cluster or group observations (based on other covariate information) external to the distribution fits. The model's flexibility comes from the choice of data mining techniques available for ascertaining the different subsets and from the choice of distribution types available for modelling these informed subsets. This paper presents an illustrated example in which a Discrete Conditional Survival model is deployed to represent ambulance response times as a fully parameterised model. This model is contrasted with a parametric accelerated failure-time model, illustrating the strength and usefulness of Discrete Conditional Survival models.
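A minimal sketch of the cluster-then-fit idea behind a DCS-style model, assuming uncensored response times, a generic clustering step and a small set of candidate distributions; the actual DCS methodology (its data mining step and its handling of censoring) is richer than this illustration.

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

# Hypothetical data: covariates Z and uncensored response times t.
rng = np.random.default_rng(0)
Z = rng.normal(size=(300, 3))
t = rng.weibull(1.5, 300) * 8.0

# Step 1: a data mining step groups cases on covariate information,
# external to the distribution fits (KMeans is a stand-in choice).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)

# Step 2: fit a (possibly different) parametric distribution per subset,
# keeping the candidate with the highest log-likelihood.
fits = {}
for k in np.unique(labels):
    tk = t[labels == k]
    candidates = (stats.weibull_min, stats.lognorm, stats.gamma)
    fits[k] = max(((d, d.fit(tk, floc=0)) for d in candidates),
                  key=lambda df: np.sum(df[0].logpdf(tk, *df[1])))
```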
Abstract:
The validation of variable-density flow models simulating seawater intrusion in coastal aquifers requires information about the concentration distribution in groundwater. Electrical resistivity tomography (ERT) provides relevant data for this purpose. However, inverse modeling alone is not accurate because of the non-uniqueness of its solutions. Such difficulties in evaluating seawater intrusion can be overcome by coupling geophysical data and groundwater modeling. First, the resistivity distribution obtained by inverse geo-electrical modeling is established. Second, a 3-D variable-density flow hydrogeological model is developed. Third, using Archie's law, the electrical resistivity model deduced from the simulated salt concentrations is compared to the previously interpreted electrical model. Finally, beyond this usual comparison-validation, the theoretical geophysical response of the concentrations simulated with the groundwater model can be compared to field-measured resistivity data. This constitutes a cross-validation of both the inverse geo-electrical model and the groundwater model.
[Comte, J.-C., and O. Banton (2007), Cross-validation of geo-electrical and hydrogeological models to evaluate seawater intrusion in coastal aquifers, Geophys. Res. Lett., 34, L10402, doi:10.1029/2007GL029981.]
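The step linking the two models is Archie's law. A minimal sketch, with the formation coefficients a, m, n treated as site-specific assumptions (the defaults below are only illustrative), converts pore-water resistivity, obtained from the simulated salt concentration, into the bulk resistivity comparable with the ERT model:

```python
def archie_bulk_resistivity(rho_w, porosity, a=1.0, m=2.0, S_w=1.0, n=2.0):
    """Archie's law: rho_bulk = a * porosity**(-m) * S_w**(-n) * rho_w.
    a, m, n are formation-specific coefficients (illustrative defaults);
    S_w = 1 corresponds to a fully saturated aquifer."""
    return a * porosity ** (-m) * S_w ** (-n) * rho_w

# e.g. seawater-intruded zone: pore water ~0.2 ohm.m, porosity 0.3
rho_bulk = archie_bulk_resistivity(0.2, 0.3)   # ~2.2 ohm.m
```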