921 results for Artificial Information Models
Abstract:
This paper provides algorithms that use an information-theoretic analysis to learn Bayesian network structures from data. Based on our three-phase learning framework, we develop efficient algorithms that can effectively learn Bayesian networks, requiring only a polynomial number of conditional independence (CI) tests in typical cases. We provide precise conditions under which these algorithms are guaranteed to be correct, as well as empirical evidence (from real-world applications and simulation tests) demonstrating that these systems work efficiently and reliably in practice.
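A minimal Python sketch (not the authors' implementation) of the kind of information-theoretic conditional-independence test such a framework relies on: two discrete variables are declared conditionally independent given a conditioning set when their estimated conditional mutual information falls below a threshold. The data layout, column names, and threshold are illustrative assumptions.

```python
# Hypothetical CI test via conditional mutual information (CMI), estimated from
# discrete data in a pandas DataFrame; the threshold and column names are illustrative.
import numpy as np
import pandas as pd

def conditional_mutual_information(df, x, y, z):
    """Estimate I(X; Y | Z) in bits from discrete columns x, y and a list of columns z."""
    n = len(df)
    cmi = 0.0
    groups = df.groupby(z) if z else [((), df)]
    for _, g in groups:
        p_z = len(g) / n
        p_xy = g.groupby([x, y]).size() / len(g)
        p_x = g[x].value_counts(normalize=True)
        p_y = g[y].value_counts(normalize=True)
        for (xv, yv), pxy in p_xy.items():
            cmi += p_z * pxy * np.log2(pxy / (p_x[xv] * p_y[yv]))
    return cmi

def ci_test(df, x, y, z, threshold=0.01):
    """Accept the hypothesis that X and Y are independent given Z when the CMI estimate is small."""
    return conditional_mutual_information(df, x, y, z) < threshold

# Example call on a DataFrame of discrete columns: ci_test(df, "X", "Y", ["Z1", "Z2"])
```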
Abstract:
Purpose
– Information science has been conceptualized as a partly unreflexive response to developments in information and computer technology and, most powerfully, as part of the gestalt of the computer. The computer was viewed as an historical accident in the original formulation of the gestalt. An alternative, and timely, approach to understanding, and then dissolving, the gestalt would be to address the motivating technology directly, fully recognizing it as a radical human construction. This paper aims to address these issues.
Design/methodology/approach
– The paper adopts a social epistemological perspective and is concerned with collective, rather than primarily individual, ways of knowing.
Findings
– In the language of discussions in information science, information technology tends to be received as objectively given, autonomously developing, and as causing but not itself caused. It has also been characterized as artificial, in the sense of unnatural, and sometimes as threatening. Attitudes to technology are implied rather than explicit, and can appear weak when articulated, corresponding to collective repression.
Research limitations/implications
– Receiving technology as objectively given is analogous to the Platonist view of mathematical propositions as discovered, in its exclusion of human activity; this opens up the possibility of a comparable critique that insists on human agency.
Originality/value
– Apprehensions of information technology have been raised to consciousness, exposing their limitations.
Abstract:
Recently, several belief negotiation models have been introduced to deal with the problem of belief merging. A negotiation model usually consists of two functions: a negotiation function and a weakening function. The negotiation function chooses the weakest sources, and these sources then weaken their point of view using the weakening function. However, the currently available belief negotiation models are based on classical logic, which makes it difficult to define weakening functions. In this paper, we define a prioritized belief negotiation model in the framework of possibilistic logic. The priority between formulae provides important information for deciding which beliefs should be discarded. The problem of merging uncertain information from different sources is then solved in two steps. First, beliefs in the original knowledge bases are weakened to resolve inconsistencies among them; this step is based on the prioritized belief negotiation model. Second, the knowledge bases obtained in the first step are combined using a conjunctive operator, which may have a reinforcement effect in possibilistic logic.
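As an illustration of the second (combination) step only, the sketch below applies a product-based conjunctive operator, one of the operators with a reinforcement effect in possibilistic logic, to possibility distributions representing the already-weakened sources. The renormalisation to height 1 and the example distributions are assumptions made for this toy, not definitions from the paper.

```python
# Hypothetical combination step: pointwise product of possibility distributions
# defined over the same set of interpretations, renormalised to height 1.
from functools import reduce
from operator import mul

def conjunctive_combination(distributions):
    worlds = distributions[0].keys()
    combined = {w: reduce(mul, (d[w] for d in distributions)) for w in worlds}
    height = max(combined.values())
    if height == 0:
        raise ValueError("sources remain fully inconsistent after weakening")
    return {w: v / height for w, v in combined.items()}

# Example with two (already weakened) sources over interpretations w1, w2, w3:
pi1 = {"w1": 1.0, "w2": 0.6, "w3": 0.3}
pi2 = {"w1": 0.8, "w2": 1.0, "w3": 0.2}
print(conjunctive_combination([pi1, pi2]))  # the product penalises worlds that any source doubts
```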
Abstract:
A problem with the use of the geostatistical Kriging error for optimal sampling design is that the design does not adapt locally to the character of spatial variation. This is because a stationary variogram or covariance function is a parameter of the geostatistical model. The objective of this paper was to investigate the utility of non-stationary geostatistics for optimal sampling design. First, a contour data set of Wiltshire was split into 25 equal sub-regions and a local variogram was predicted for each. These variograms were fitted with models, and the coefficients were used in Kriging to select optimal sample spacings for each sub-region. Large differences existed between the designs for the whole region (based on the global variogram) and for the sub-regions (based on the local variograms). Second, a segmentation approach was used to divide a digital terrain model into separate segments. Segment-based variograms were predicted and fitted with models. Optimal sample spacings were then determined for the whole region and for the sub-regions. It was demonstrated that the global design was inadequate, grossly over-sampling some segments while under-sampling others.
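A hedged Python sketch of how a locally fitted variogram can drive sample-spacing selection: for each candidate square-grid spacing, the ordinary kriging variance at the centre of a grid cell is computed from the fitted model, and the largest spacing whose variance stays within a tolerance is retained. The spherical model, its coefficients, the four-point neighbourhood, and the tolerance are illustrative assumptions, not values from the Wiltshire study.

```python
import numpy as np

def spherical_variogram(h, nugget, sill, rng):
    """Spherical semivariogram model; gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0, 0.0, g)

def ok_variance_at_centre(spacing, gamma):
    """Ordinary kriging variance at a cell centre, using the four surrounding grid nodes."""
    pts = spacing * np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
    target = spacing * np.array([0.5, 0.5])
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(pts - target, axis=1)), 1.0)
    w = np.linalg.solve(A, b)          # [kriging weights, Lagrange multiplier]
    return float(w @ b)                # sigma^2 = sum(lambda_i * gamma_i0) + mu

def optimal_spacing(gamma, tolerance, candidates):
    ok = [s for s in candidates if ok_variance_at_centre(s, gamma) <= tolerance]
    return max(ok) if ok else min(candidates)

# Illustrative local variogram model and tolerance:
gamma = lambda h: spherical_variogram(h, nugget=0.1, sill=1.0, rng=500.0)
print(optimal_spacing(gamma, tolerance=0.6, candidates=np.arange(50, 501, 50)))
```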
Abstract:
Models of currency competition focus on the 5% of trading attributable to balance-of-payments flows. We introduce an information approach that focuses on the other 95%. Important departures from traditional models arise when transactions convey information. First, prices reveal different information depending on whether trades are direct or through vehicle currencies. Second, missing markets arise due to insufficiently symmetric information, rather than insufficient transaction scale. Third, the indeterminacy of equilibrium that arises in traditional models is resolved: currency trade patterns no longer concentrate arbitrarily on market size. Empirically, we provide a first analysis of transactions across a full market triangle: the euro, yen, and US dollar. The estimated transaction effects on prices support the information approach.
Abstract:
The eng-genes concept involves the use of fundamental known system functions as activation functions in a neural model to create a 'grey-box' neural network. One of the main issues in eng-genes modelling is to produce a parsimonious model given a model construction criterion. The challenges are that (1) the eng-genes model is in most cases a heterogeneous network consisting of more than one type of nonlinear basis function, and each basis function may have a different set of parameters to be optimised; and (2) the number of hidden nodes has to be chosen based on a model selection criterion. This is a hard mixed-integer problem, and this paper investigates the use of a forward selection algorithm to optimise both the network structure and the parameters of the system-derived activation functions. Results are included from case studies performed on a simulated continuously stirred tank reactor process and on actual data from a pH neutralisation plant. The resulting eng-genes networks demonstrate superior simulation performance and transparency over a range of network sizes when compared to conventional neural models.
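A minimal sketch (an illustrative toy, not the paper's algorithm) of forward selection over a heterogeneous pool of candidate basis functions: at each step the candidate that most improves a BIC-style construction criterion is added, with the output-layer weights refitted by least squares. The criterion, the candidate pool, and the stopping rule are assumptions for the example.

```python
import numpy as np

def forward_select(X, y, candidates, max_nodes=10):
    """candidates: callables mapping X (n x d) to one hidden-node output per sample."""
    n = len(y)
    selected, columns = [], [np.ones((n, 1))]       # start with a bias column
    best_bic = np.inf
    for _ in range(max_nodes):
        best = None
        for j, phi in enumerate(candidates):
            if j in selected:
                continue
            H = np.hstack(columns + [phi(X).reshape(n, 1)])
            w, *_ = np.linalg.lstsq(H, y, rcond=None)
            rss = float(np.sum((y - H @ w) ** 2))
            bic = n * np.log(rss / n) + H.shape[1] * np.log(n)
            if bic < best_bic:
                best, best_bic, best_col = j, bic, H[:, -1:]
        if best is None:                            # no remaining candidate improves the criterion
            break
        selected.append(best)
        columns.append(best_col)
    return selected

# Example pool mixing a sigmoidal node with hypothetical "system-derived" terms:
# candidates = [lambda X: np.tanh(X[:, 0]), lambda X: np.exp(-X[:, 1]), lambda X: X[:, 0] * X[:, 1]]
```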
Abstract:
People tend to attribute more regret to a character who has decided to take action and experienced a negative outcome than to one who has decided not to act and experienced a negative outcome. For some decisions, however, this finding is not observed in a between-participants design and thus appears to rely on comparisons between people's representations of action and their representations of inaction. In this article, we outline a mental models account that explains findings from studies that have used within- and between-participants designs, and we suggest that, for decisions with uncertain counterfactual outcomes, information about the consequences of a decision to act causes people to flesh out their representation of the counterfactual states of affairs for inaction. In three experiments, we confirm our predictions about participants' fleshing out of representations, demonstrating that an action effect occurs only when information about the consequences of action is available to participants as they rate the nonactor and when this information about action is informative with respect to judgments about inaction. It is important to note that the action effect always occurs when the decision scenario specifies certain counterfactual outcomes. These results suggest that people sometimes base their attributions of regret on comparisons among different sets of mental models.
Abstract:
We study the effects of amplitude and phase damping decoherence in d-dimensional one-way quantum computation. We focus our attention on low dimensions and elementary unidimensional cluster state resources. Our investigation shows how information transfer and entangling gate simulations are affected for d >= 2. To understand motivations for extending the one-way model to higher dimensions, we describe how basic qudit cluster states deteriorate under environmental noise of experimental interest. In order to protect quantum information from the environment, we consider encoding logical qubits into qudits and compare entangled pairs of linear qubit-cluster states to single qudit clusters of equal length and total dimension. A significant reduction in the performance of cluster state resources for d > 2 is found when Markovian-type decoherence models are present.
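For orientation, the familiar d = 2 forms of the two channels named above can be written with the standard Kraus operators below (the qudit generalisations studied in the paper are not reproduced here):

\[
\text{amplitude damping:}\quad
E_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma} \end{pmatrix},\qquad
E_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix};
\]
\[
\text{phase damping:}\quad
E_0 = \sqrt{1-\lambda}\, I,\qquad
E_1 = \sqrt{\lambda} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\qquad
E_2 = \sqrt{\lambda} \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},
\]
with each channel acting as \(\rho \mapsto \sum_k E_k \rho E_k^{\dagger}\).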
Abstract:
Despite the simultaneous progress of traffic modelling on both the macroscopic and microscopic fronts, recent works [E. Bourrel, J.B. Lessort, Mixing micro and macro representation of traffic flow: a hybrid model based on the LWR theory, Transport. Res. Rec. 1852 (2003) 193–200; D. Helbing, M. Treiber, Critical discussion of “synchronized flow”, Coop. Transport. Dyn. 1 (2002) 2.1–2.24; A. Hennecke, M. Treiber, D. Helbing, Macroscopic simulations of open systems and micro–macro link, in: D. Helbing, H.J. Herrmann, M. Schreckenberg, D.E. Wolf (Eds.), Traffic and Granular Flow ’99, Springer, Berlin, 2000, pp. 383–388] highlighted that one of the most promising ways to efficiently simulate traffic flow on large road networks is a clever combination of both traffic representations: hybrid modelling. Our focus in this paper is to propose two hybrid models in which the macroscopic (resp. mesoscopic) part is based on a class of second order models [A. Aw, M. Rascle, Resurrection of second order models of traffic flow?, SIAM J. Appl. Math. 60 (2000) 916–938] whereas the microscopic part is a Follow-the-Leader type model [D.C. Gazis, R. Herman, R.W. Rothery, Nonlinear follow-the-leader models of traffic flow, Oper. Res. 9 (1961) 545–567; R. Herman, I. Prigogine, Kinetic Theory of Vehicular Traffic, American Elsevier, New York, 1971]. For the first hybrid model, we define precisely the translation of boundary conditions at interfaces, and for the second one we explain the synchronization processes. Furthermore, through some numerical simulations we show that wave propagation is not disturbed and mass is accurately conserved when passing from one traffic representation to another.
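For reference, the second-order macroscopic class cited above (Aw and Rascle) and the follow-the-leader family (Gazis, Herman and Rothery) take the following standard forms; the notation and the pressure-like term \(p(\rho) = \rho^{\gamma}\), \(\gamma > 0\), follow common usage rather than this paper's exact conventions:

\[
\partial_t \rho + \partial_x(\rho v) = 0, \qquad
\partial_t\big(v + p(\rho)\big) + v\, \partial_x\big(v + p(\rho)\big) = 0,
\]
\[
\ddot{x}_i(t+T) = \alpha\, \big[\dot{x}_i(t+T)\big]^{m}\,
\frac{\dot{x}_{i+1}(t) - \dot{x}_i(t)}{\big[x_{i+1}(t) - x_i(t)\big]^{l}},
\]
where vehicle \(i+1\) is the leader of vehicle \(i\) and \(T\) is a reaction time.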
Abstract:
Many of the challenges faced in health care delivery can be informed by building models. In particular, Discrete Conditional Survival (DCS) models, recently under development, can provide policymakers with a flexible tool to assess time-to-event data. The DCS model is capable of modelling the survival curve based on various underlying distribution types and of clustering or grouping observations (based on other covariate information) external to the distribution fits. The flexibility of the model comes through the choice of data mining techniques available for ascertaining the different subsets and also through the choice of distribution types available for modelling these informed subsets. This paper presents an illustrated example of the Discrete Conditional Survival model being deployed to represent ambulance response times with a fully parameterised model. This model is contrasted with the use of a parametric accelerated failure-time model, illustrating the strength and usefulness of Discrete Conditional Survival models.
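A minimal sketch (illustrative only, not the authors' implementation) of the DCS idea: observations are first partitioned on covariates by a data-mining step, here a shallow decision tree, and a parametric survival distribution is then chosen for each resulting subset by AIC. The covariate handling, the candidate distributions, and the omission of censoring are simplifying assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeRegressor

def fit_dcs(covariates, times, candidates=(stats.weibull_min, stats.lognorm, stats.gamma)):
    """covariates: (n, d) array; times: (n,) array of response times (censoring ignored)."""
    tree = DecisionTreeRegressor(max_depth=2).fit(covariates, times)  # data-mining split
    leaves = tree.apply(covariates)
    models = {}
    for leaf in np.unique(leaves):
        t = times[leaves == leaf]
        best = None
        for dist in candidates:
            params = dist.fit(t, floc=0)                 # location fixed at zero
            ll = float(np.sum(dist.logpdf(t, *params)))
            aic = 2 * len(params) - 2 * ll
            if best is None or aic < best[0]:
                best = (aic, dist.name, params)
        models[int(leaf)] = best                         # (AIC, distribution name, parameters)
    return tree, models
```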
Abstract:
This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing for better adaptation of watermark distortions to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread transform embedding algorithm is devised; this combines the robustness of spread spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits on watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm against commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
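A minimal sketch of generic spread-transform dither modulation, the embedding principle described above: the host coefficients are projected onto a pseudo-random spreading vector and the scalar projection is quantised with a bit-dependent dithered quantiser. The step size, the seed, and the minimum-distance detector are assumptions for the toy, not the paper's complex-wavelet system.

```python
import numpy as np

def st_dm_embed(host, bit, delta=4.0, seed=0):
    """Embed one bit into an array of host coefficients along a spreading direction."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(host.shape)
    u /= np.linalg.norm(u)                        # unit-norm spreading vector
    proj = float(host.ravel() @ u.ravel())        # scalar spread transform
    dither = 0.0 if bit == 0 else delta / 2.0     # bit-dependent dither
    q = delta * np.round((proj - dither) / delta) + dither
    return host + (q - proj) * u                  # move the host only along u

def st_dm_detect(marked, delta=4.0, seed=0):
    """Recover the bit by finding the nearer of the two dithered quantiser cosets."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(marked.shape)
    u /= np.linalg.norm(u)
    proj = float(marked.ravel() @ u.ravel())
    d0 = abs(proj - delta * np.round(proj / delta))
    d1 = abs(proj - (delta * np.round((proj - delta / 2) / delta) + delta / 2))
    return 0 if d0 <= d1 else 1
```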
Abstract:
The validation of variable-density flow models simulating seawater intrusion in coastal aquifers requires information about the concentration distribution in groundwater. Electrical resistivity tomography (ERT) provides relevant data for this purpose. However, inverse modeling is not accurate because of the non-uniqueness of solutions. Such difficulties in evaluating seawater intrusion can be overcome by coupling geophysical data and groundwater modeling. First, the resistivity distribution obtained by inverse geo-electrical modeling is established. Second, a 3-D variable-density flow hydrogeological model is developed. Third, using Archie's Law (recalled below), the electrical resistivity model deduced from salt concentration is compared to the previously interpreted electrical model. Finally, aside from that usual comparison-validation, the theoretical geophysical response of the concentrations simulated with the groundwater model can be compared to field-measured resistivity data. This constitutes a cross-validation of both the inverse geo-electrical model and the groundwater model.
[Comte, J.-C., and O. Banton (2007), Cross-validation of geo-electrical and hydrogeological models to evaluate seawater intrusion in coastal aquifers, Geophys. Res. Lett., 34, L10402, doi:10.1029/2007GL029981.]
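For reference, Archie's law in its usual empirical form relates bulk resistivity \(\rho_b\) to pore-water resistivity \(\rho_w\) through porosity \(\phi\) and water saturation \(S_w\) (with \(S_w = 1\) in a saturated aquifer):

\[
\rho_b = a\, \phi^{-m}\, S_w^{-n}\, \rho_w,
\]

where \(a\), \(m\) and \(n\) are site-specific coefficients and \(\rho_w\) is obtained here from the simulated salt concentration.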