29 results for Robust Probabilistic Model, Dyslexic Users, Rewriting, Question-Answering
Abstract:
The University of Queensland, Australia, has developed Fez, a world-leading user-interface and management system for Fedora-based institutional repositories, which bridges the gap between a repository and its users. Christiaan Kortekaas, Andrew Bennett and Keith Webster will review this open-source software, which gives institutions the power to create a comprehensive repository solution without the hassle.
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models, incorporating improved physics and rheology, are needed to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléan lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme against existing analytical and experimental models of lava dome growth that assume a constant Newtonian viscosity. We then compare our model, using an effective viscosity, against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and at Mount St. Helens, USA in October 1980. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Furthermore, modeling the extruded lava with a constant pressure head naturally produces a drop in extrusion rate with increasing dome height, which explains lava dome growth observables more appropriately than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
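As a rough illustration of the level-set idea referred to in this abstract (and not the authors' FEM implementation), the following sketch advects a signed-distance function phi on a fixed Eulerian grid with a prescribed outward velocity field; the zero contour of phi plays the role of the dome's free surface. Grid size, velocity field and time step are arbitrary assumptions.

```python
# Minimal level-set sketch: move an interface (phi = 0) on a fixed grid
# without remeshing, by solving phi_t + u . grad(phi) = 0 with upwinding.
import numpy as np

n, L = 101, 2.0                          # grid points and domain half-width (assumed)
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.5         # signed distance to an initial circular margin

u, v = 0.2 * X, 0.2 * Y                  # toy outward velocity field (stand-in for the flow solution)
dx = x[1] - x[0]
dt = 0.4 * dx / (0.2 * L)                # CFL-limited time step

for _ in range(50):
    # first-order upwind gradients of phi (periodic boundaries for simplicity)
    dpx = np.where(u > 0, phi - np.roll(phi, 1, 0), np.roll(phi, -1, 0) - phi) / dx
    dpy = np.where(v > 0, phi - np.roll(phi, 1, 1), np.roll(phi, -1, 1) - phi) / dx
    phi -= dt * (u * dpx + v * dpy)      # transport of the level-set function

# locate the zero level along the positive x-axis to see how far the interface moved
r = x[n // 2:][np.argmin(np.abs(phi[n // 2, n // 2:]))]
print("approximate interface radius after 50 steps:", round(float(r), 3))
```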
Abstract:
RWMODEL II simulates the Rescorla-Wagner model of Pavlovian conditioning. It is written in Delphi and runs under Windows 3.1 and Windows 95. The program was designed for novice and expert users and can be employed in teaching as well as in research. It is user-friendly and requires a minimal level of computer literacy, but is sufficiently flexible to permit a wide range of simulations. It allows the display of empirical data against which predictions from the model can be validated.
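For readers unfamiliar with the model being simulated, the sketch below implements the standard Rescorla-Wagner update rule, dV = alpha * beta * (lambda - V_total), applied to every conditioned stimulus present on a trial. It is an independent illustration, not RWMODEL II code, and all parameter values are assumptions.

```python
# Rescorla-Wagner update: each present CS shares the same prediction error.
def rescorla_wagner(trials, alpha, beta=0.5, lam_reinforced=1.0):
    """trials: list of (stimuli_present, reinforced) tuples."""
    V = {cs: 0.0 for cs in alpha}              # associative strengths
    history = []
    for stimuli, reinforced in trials:
        lam = lam_reinforced if reinforced else 0.0
        v_total = sum(V[cs] for cs in stimuli) # total prediction on this trial
        for cs in stimuli:
            V[cs] += alpha[cs] * beta * (lam - v_total)
        history.append(dict(V))
    return history

# Simple acquisition: a light (L) paired with food for 20 trials.
hist = rescorla_wagner([(("L",), True)] * 20, alpha={"L": 0.3})
print(round(hist[-1]["L"], 3))                 # approaches the asymptote lambda = 1.0
```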
Abstract:
Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data containing a group or groups of observations with longer than normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we consider a more robust approach by modelling the data by a mixture of t distributions. The use of the ECM algorithm to fit this t mixture model is described and examples of its use are given in the context of clustering multivariate data in the presence of atypical observations in the form of background noise.
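A minimal univariate sketch of the idea, not the paper's multivariate ECM procedure: a two-component mixture of t distributions fitted by EM with the degrees of freedom held fixed (estimating them is what the additional CM-step addresses). The data, starting values and the fixed degrees of freedom below are illustrative assumptions.

```python
# Two-component t-mixture fitted by EM; heavy tails downweight atypical points.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 150),
                    rng.normal(3, 20, 30)])        # two clusters plus background noise

nu = 4.0                                            # fixed degrees of freedom (assumption)
pi = np.array([0.5, 0.5]); mu = np.array([-1.0, 5.0]); sig = np.array([2.0, 2.0])

for _ in range(200):
    # E-step: responsibilities tau and latent precision weights u
    dens = np.stack([stats.t.pdf(y, nu, loc=m, scale=s) for m, s in zip(mu, sig)], axis=1)
    tau = pi * dens
    tau /= tau.sum(axis=1, keepdims=True)
    d2 = ((y[:, None] - mu) / sig) ** 2
    u = (nu + 1.0) / (nu + d2)                      # outlying points get small weights
    # CM-steps: mixing proportions, locations, scales
    pi = tau.mean(axis=0)
    w = tau * u
    mu = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)
    sig = np.sqrt((w * (y[:, None] - mu) ** 2).sum(axis=0) / tau.sum(axis=0))

print(np.round(mu, 2), np.round(sig, 2), np.round(pi, 2))
```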
Abstract:
Almost all leprosy cases reported in industrialized countries occur amongst immigrants or refugees from developing countries, where leprosy continues to be an important health issue. Screening for leprosy is therefore an important question for governments in countries with immigration and refugee programmes. A decision analysis framework is used to evaluate leprosy screening. The analysis uses a set of criteria and parameters regarding leprosy screening, and available data, to estimate the number of cases that would be detected by a leprosy screening programme for immigrants from countries with different leprosy prevalences, compared with a policy of waiting for immigrants who develop symptomatic clinical disease to present for health care. In a cohort of 100,000 immigrants from high leprosy prevalence regions (3.6/10,000), screening would detect 32 of the 42 cases which would arise in the destination country over the 14 years after migration; from medium prevalence areas (0.7/10,000), 6.3 of the total 8.1 cases would be detected, and from low prevalence regions (0.2/10,000), 1.8 of 2.3 cases. Using Australian data, the migrant mix would produce 74 leprosy cases from 10 years' intake; screening would detect 54, and 19 would be diagnosed subsequently after migration. Screening would produce a significant case yield only amongst immigrants from regions or social groups with high leprosy prevalence. Since the number of immigrants to Australia from countries of higher endemicity is not large, routine leprosy screening would have a small impact on case incidence.
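The case-yield arithmetic quoted above can be reproduced approximately as follows. The detection fraction is taken from the high-prevalence figures (32 of 42 cases); applying the same fraction to the other prevalence strata is an illustrative simplification, not the paper's full decision model.

```python
# Back-of-the-envelope reproduction of the screening case-yield figures.
cohort = 100_000
strata = {                      # prevalence per 10,000 and expected cases over 14 years
    "high":   (3.6, 42.0),
    "medium": (0.7, 8.1),
    "low":    (0.2, 2.3),
}
detection_fraction = 32.0 / 42.0   # fraction of cases picked up by screening (high stratum)

for name, (prev, cases) in strata.items():
    detected = cases * detection_fraction
    print(f"{name:6s} prevalence {prev}/10,000: "
          f"~{detected:.1f} of {cases} cases detected by screening")
```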
Abstract:
A robust semi-implicit central partial difference algorithm for the numerical solution of coupled stochastic parabolic partial differential equations (PDEs) is described. This can be used for calculating correlation functions of systems of interacting stochastic fields. Such field equations can arise in the description of Hamiltonian and open systems in the physics of nonlinear processes, and may include multiplicative noise sources. The algorithm can be used for studying the properties of nonlinear quantum or classical field theories. The general approach is outlined and applied to a specific example, namely the quantum statistical fluctuations of ultra-short optical pulses in χ(2) parametric waveguides. This example uses a non-diagonal coherent-state representation, and correctly predicts the sub-shot-noise level spectral fluctuations observed in homodyne detection measurements. It is expected that the methods used will be applicable to higher-order correlation functions and other physical problems as well. A stochastic differencing technique for reducing sampling errors is also introduced. This involves solving nonlinear stochastic parabolic PDEs in combination with a reference process, which uses the Wigner representation in the example presented here. A computer implementation on MIMD parallel architectures is discussed. (C) 1997 Academic Press.
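As a hedged sketch of the general semi-implicit approach, rather than the paper's algorithm or its χ(2) waveguide example, the snippet below integrates a toy 1D stochastic parabolic PDE with multiplicative noise: the stiff diffusion term is treated implicitly in Fourier space while the noise is added explicitly. All parameters are assumptions.

```python
# Toy problem: d(phi)/dt = D d2(phi)/dx2 + g * phi * xi(x, t) on a periodic grid.
import numpy as np

rng = np.random.default_rng(1)
n, L, D, dt, steps = 256, 10.0, 0.1, 1e-3, 1000
g = 0.3                                   # noise strength (assumption)
dx = L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
x = np.linspace(0, L, n, endpoint=False)
phi = np.exp(-((x - L / 2) ** 2))         # initial pulse

for _ in range(steps):
    # explicit multiplicative-noise increment (Ito sense; cell-averaged white noise)
    xi = g * rng.normal(0.0, np.sqrt(dt / dx), n)
    phi = phi + phi * xi
    # implicit (backward-Euler) diffusion step, solved exactly in Fourier space
    phi = np.fft.ifft(np.fft.fft(phi) / (1.0 + D * k**2 * dt)).real

print("spatially averaged field after integration:", phi.mean())
```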
Abstract:
Numerical optimisation methods are being more commonly applied to agricultural systems models, to identify the most profitable management strategies. The available optimisation algorithms are reviewed and compared, with literature and our studies identifying evolutionary algorithms (including genetic algorithms) as superior in this regard to simulated annealing, tabu search, hill-climbing, and direct-search methods. Results of a complex beef property optimisation, using a real-value genetic algorithm, are presented. The relative contributions of the range of operational options and parameters of this method are discussed, and general recommendations listed to assist practitioners applying evolutionary algorithms to the solution of agricultural systems. (C) 2001 Elsevier Science Ltd. All rights reserved.
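The following is a minimal real-valued genetic algorithm of the general kind discussed above; the two "management" variables, the toy profit function and all GA settings are invented stand-ins for an agricultural systems model, not the beef property optimisation reported in the paper.

```python
# Real-valued GA: tournament selection, blend crossover, Gaussian mutation.
import numpy as np

rng = np.random.default_rng(42)

def profit(x):                      # toy objective: peak at stocking rate 2.0, supplement 0.5
    return -(x[0] - 2.0) ** 2 - (x[1] - 0.5) ** 2

bounds = np.array([[0.0, 5.0], [0.0, 2.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))

for generation in range(100):
    fitness = np.array([profit(ind) for ind in pop])
    # tournament selection between random pairs
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # blend crossover between consecutive parents
    alpha = rng.uniform(0, 1, size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    # Gaussian mutation, clipped to the variable bounds
    children += rng.normal(0, 0.1, children.shape)
    pop = np.clip(children, bounds[:, 0], bounds[:, 1])

best = pop[np.argmax([profit(ind) for ind in pop])]
print("best management settings found:", np.round(best, 2))
```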
Abstract:
The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models are not readily available to a corrosion engineer. Therefore, the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise demonstrates that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases where the understanding of the underlying processes is poor. The second problem arises in cases when not all the required inputs for a model are known, or when they can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
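A brief sketch of the Monte Carlo idea described in the abstract: each uncertain input is specified as a range or distribution rather than a single value and propagated through the predictive model. The corrosion_rate() function below is a made-up placeholder response surface, not the paper's neural network, and all input ranges are assumptions.

```python
# Propagate uncertain inputs through a predictive model to get an output distribution.
import numpy as np

rng = np.random.default_rng(7)

def corrosion_rate(temperature_c, ph, co2_partial_pressure_bar):
    # placeholder response surface with plausible trends, for illustration only
    return 0.1 * co2_partial_pressure_bar * np.exp(0.03 * temperature_c) * (7.0 - ph)

n = 10_000
temperature = rng.uniform(40.0, 60.0, n)     # inputs known only as ranges/distributions
ph = rng.normal(5.5, 0.2, n)
pco2 = rng.uniform(0.5, 2.0, n)

rates = corrosion_rate(temperature, ph, pco2)
print(f"predicted corrosion rate: median {np.median(rates):.2f}, "
      f"5-95% interval [{np.percentile(rates, 5):.2f}, {np.percentile(rates, 95):.2f}] "
      "(arbitrary units)")
```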
Abstract:
Many business-oriented software applications are subject to frequent changes in requirements. This paper shows that, ceteris paribus, increases in the volatility of system requirements decrease the reliability of software. Further, systems that exhibit high volatility during the development phase are likely to have lower reliability during their operational phase. In addition to the typically higher volatility of requirements, end-users who specify the requirements of business-oriented systems are usually less technically oriented than people who specify the requirements of compilers, radar tracking systems or medical equipment. Hence, the characteristics of software reliability problems for business-oriented systems are likely to differ significantly from those of more technically oriented systems.
Abstract:
Management are keen to maximize the life span of an information system because of the high cost, organizational disruption, and risk of failure associated with the re-development or replacement of an information system. This research investigates the effects that various factors have on an information system's life span by understanding how the factors affect an information system's stability. The research builds on a previously developed two-stage model of information system change, whereby an information system is either in a stable state of evolution, in which the information system's functionality is evolving, or in a state of revolution, in which the information system is being replaced because it is not providing the functionality expected by its users. A case study surveyed a number of systems within one organization. The aim was to test whether a relationship existed between the base value of the volatility index (a measure of the stability of an information system) and certain system characteristics. Data relating to some 3000 user change requests covering 40 systems over a 10-year period were obtained. The following factors were hypothesized to have significant associations with the base value of the volatility index: language level (generation of language of construction), system size, system age, and the timing of changes applied to a system. Significant associations were found in the hypothesized directions, except that the timing of user changes was not associated with any change in the value of the volatility index. Copyright (C) 2002 John Wiley & Sons, Ltd.
Abstract:
Fixed-point roundoff noise in digital implementation of linear systems arises due to overflow, quantization of coefficients and input signals, and arithmetical errors. In uniform white-noise models, the last two types of roundoff errors are regarded as uniformly distributed independent random vectors on cubes of suitable size. For input signal quantization errors, the heuristic model is justified by a quantization theorem, which cannot be directly applied to arithmetical errors due to the complicated input-dependence of errors. The complete uniform white-noise model is shown to be valid in the sense of weak convergence of probability measures as the lattice step tends to zero, provided the matrices of the state-space realization of the system satisfy certain nonresonance conditions and the finite-dimensional distributions of the input signal are absolutely continuous.
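A small numerical illustration of the uniform white-noise model (the signal and filter below are assumptions, not from the paper): per-step roundoff errors in a quantized one-pole recursion are compared with the uniform-distribution prediction of mean 0 and standard deviation q/sqrt(12).

```python
# Measure per-step rounding errors in a quantized recursion and compare with
# the uniform white-noise prediction.
import numpy as np

rng = np.random.default_rng(3)
q = 2.0 ** -10                                   # quantization (lattice) step
x = np.sin(0.1 * np.arange(20_000)) + 0.3 * rng.standard_normal(20_000)

def quantize(v):
    return q * np.round(v / q)

# one-pole recursion y[n] = a*y[n-1] + x[n], with rounding after each update
a, y = 0.95, 0.0
roundoff = []
for xn in quantize(x):
    exact_update = a * y + xn                    # exact arithmetic for this step
    y = quantize(exact_update)                   # rounding committed at this step
    roundoff.append(y - exact_update)

roundoff = np.asarray(roundoff)
print("mean/std of per-step roundoff:", roundoff.mean(), roundoff.std(),
      "| uniform white-noise model predicts 0 and", q / np.sqrt(12))
```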
Abstract:
The purpose of this study was threefold: first, the study was designed to illustrate the use of data and information collected in food safety surveys in a quantitative risk assessment. In this case, the focus was on the food service industry; however, similar data from other parts of the food chain could be similarly incorporated. The second objective was to quantitatively describe and better understand the role that the food service industry plays in the safety of food. The third objective was to illustrate the additional decision-making information that is available when uncertainty and variability are incorporated into the modelling of systems. (C) 2002 Elsevier Science B.V. All rights reserved.
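As a generic illustration of what incorporating both uncertainty and variability can look like in such an assessment (a two-dimensional Monte Carlo, with all distributions and numbers invented rather than taken from the study):

```python
# Outer loop samples uncertain parameters; inner loop samples variability.
import numpy as np

rng = np.random.default_rng(11)
n_uncertainty, n_variability = 200, 5_000

risk_estimates = []
for _ in range(n_uncertainty):                       # uncertainty: imprecisely known parameters
    prevalence = rng.beta(2, 50)                     # uncertain contamination prevalence
    median_dose = rng.lognormal(0.0, 0.5)            # uncertain typical dose
    doses = rng.lognormal(np.log(median_dose), 1.0, n_variability)  # variability across servings
    p_ill = 1.0 - np.exp(-0.01 * doses)              # simple exponential dose-response
    risk_estimates.append(prevalence * p_ill.mean())

risk_estimates = np.array(risk_estimates)
print(f"risk per serving: median {np.median(risk_estimates):.2e}, "
      f"95% uncertainty interval [{np.percentile(risk_estimates, 2.5):.2e}, "
      f"{np.percentile(risk_estimates, 97.5):.2e}]")
```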
Abstract:
This article is concerned primarily with an examination and comparison of select aspects of the model international consumer protection laws proposed by the United Nations (UN), the European Union (EU), and the Organisation for Economic Co-operation and Development (OECD), using the Trade Practices Act 1974 (Australia) as a basis for examination and comparison. As a secondary consideration, it also broadly examines the content of, and differences between, the model laws. The motive for this article is that any future enforceable international consumer protection regime (possibly in the form of an international treaty or convention) would need to take into account the UN, EU and OECD guidelines. A cross-comparison of those model laws, and a comparison of them with the consumer protection provisions of a well established national consumer protection law, should provide a useful starting point for the development of such a regime. The 'select aspects' of the model laws in question are the various provisions of those laws which could relate to situations involving the wrong delivery or non-delivery of goods.