62 results for Artificial Information Models


Relevance:

30.00%

Publisher:

Abstract:

Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents BAGEL, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that BAGEL can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data. © 2010 Association for Computational Linguistics.
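A minimal sketch of the certainty-based selection step behind this kind of active learning, assuming a hypothetical pool of unlabelled inputs, each carrying a model predictive distribution (names and data are illustrative, not BAGEL's actual implementation):

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(pool):
    """Certainty-based active learning: pick the unlabelled input whose
    prediction has the highest entropy, i.e. the lowest model certainty."""
    return max(pool, key=lambda item: entropy(item["predictive"]))

# Hypothetical pool of unlabelled inputs with model predictive distributions.
pool = [
    {"id": "a", "predictive": [0.90, 0.05, 0.05]},  # confident
    {"id": "b", "predictive": [0.34, 0.33, 0.33]},  # very uncertain
    {"id": "c", "predictive": [0.60, 0.30, 0.10]},
]
print(select_most_uncertain(pool)["id"])
```

Labelling the selected input first is what lets ratings approach the gold standard with a fraction of the data.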

Relevance:

30.00%

Publisher:

Abstract:

Several studies have highlighted the importance of information and information quality in organisations, and information is thus regarded as a key determinant of success and organisational performance. In this paper, we review selected contributions and introduce a model that shows how IS/IT resources and capabilities could be interlinked with IS/IT utilization, organizational performance and business value. Complementing other models and frameworks, we explicitly consider information from a management maturity, quality and risk perspective and show how the new framework can be operationalized with existing assessment approaches, using empirical data from four industrial case studies. © 2012 Springer-Verlag.

Relevance:

30.00%

Publisher:

Abstract:

The on-site inspection process is largely concerned with finding the right design and specification information needed to inspect each newly constructed segment or element. While inspecting steel erection, for example, inspectors need to locate the right drawings for each member and the corresponding specification sections that describe, among other requirements, the allowable deviations in placement. These information-seeking tasks are highly monotonous, time-consuming and error-prone, owing to the high similarity of drawings and constructed elements and to the sheer volume of information involved, which can confuse the inspector. To address this problem, this paper presents the first steps of research investigating the requirements of an automated computer vision-based approach that identifies “as-built” information and uses it to retrieve “as-designed” project information for field construction, inspection, and maintenance tasks. Under this approach, a visual pattern recognition model was developed that aims to allow automatic identification of the construction entities and materials visible in the camera’s field of view at a given time and location, and automatic retrieval of the relevant design and specification information.

Relevance:

30.00%

Publisher:

Abstract:

Several studies have highlighted the importance of information and information quality in organisations, and information is thus regarded as a key determinant of success and organisational performance. At the same time, there are numerous studies, frameworks and case studies examining the impact of information technology and systems on business value. Recently, several maturity models for information management capabilities have been proposed in the literature, which claim that higher maturity results in higher organizational performance. Although these studies provide valuable information about the underlying relations, most are limited in specifying the relationship in more detail. Furthermore, most prominent approaches do not, or at least not explicitly, consider information as an important influencing factor for organisational performance. In this paper, we review selected contributions and introduce a model that shows how IS/IT resources and capabilities could be interlinked with IS/IT utilization, organizational performance and business value. Complementing other models and frameworks, we explicitly consider information from a management maturity, quality and risk perspective. Moreover, the paper discusses how each part of the model can be assessed in order to validate the model in future studies.

Relevance:

30.00%

Publisher:

Abstract:

A numerical model is developed to analyse the interaction of artificial cilia with the surrounding fluid in a three-dimensional setting in the limit of vanishing fluid inertia forces. The cilia are modelled using finite shell elements and the fluid is modelled using a boundary element approach. The coupling between both models is performed by imposing no-slip boundary conditions on the surface of the cilia. The performance of the model is verified using various reference problems available in the literature. The model is used to simulate the fluid flow due to magnetically actuated artificial cilia. The results show that narrow and closely spaced cilia create the largest flow, that metachronal waves along the width of the cilia create a significant flow in the direction of the cilia width and that the recovery stroke in the case of the out-of-plane actuation of the cilia strongly depends on the cilia width. © 2012 Cambridge University Press.

Relevance:

30.00%

Publisher:

Abstract:

Mandarin Chinese is based on characters which are syllabic in nature and morphological in meaning. All spoken languages have syllabotactic rules which govern the construction of syllables and their allowed sequences. These constraints are not as restrictive as those learned from word sequences, but they can provide additional useful linguistic information. Hence, it is possible to improve speech recognition performance by appropriately combining these two types of constraints. For the Chinese language considered in this paper, character-level language models (LMs) can be used as a first-level approximation to the allowed syllable sequences. To test this idea, word- and character-level n-gram LMs were trained on 2.8 billion words (equivalent to 4.3 billion characters) of text from a wide collection of sources. Both hypothesis- and model-based combination techniques were investigated to combine word- and character-level LMs. Significant character error rate reductions of up to 7.3% relative were obtained on a state-of-the-art Mandarin Chinese broadcast audio recognition task using an adapted history-dependent multi-level LM that performs a log-linear combination of character- and word-level LMs. This supports the hypothesis that character or syllable sequence models are useful for improving Mandarin speech recognition performance.
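The log-linear model combination can be sketched as a weighted sum of log-probabilities used to rescore recognition hypotheses; the hypothesis structure, weights and scores below are illustrative assumptions, not the paper's actual system:

```python
def log_linear_combine(word_logp, char_logp, weights=(0.7, 0.3)):
    """Log-linear combination of word- and character-level LM scores:
    combined log-score = w1 * log p_word(h) + w2 * log p_char(h)."""
    return weights[0] * word_logp + weights[1] * char_logp

def rescore(hypotheses, weights=(0.7, 0.3)):
    """Rescore hypotheses carrying log-probabilities from both LMs and
    return them best-first under the combined score."""
    return sorted(
        hypotheses,
        key=lambda h: log_linear_combine(h["word_lp"], h["char_lp"], weights),
        reverse=True,
    )

# Hypothetical recognition hypotheses with LM log-probabilities.
hyps = [
    {"text": "hyp1", "word_lp": -12.0, "char_lp": -30.0},
    {"text": "hyp2", "word_lp": -13.0, "char_lp": -20.0},
]
print(rescore(hyps)[0]["text"])
```

The interpolation weights would normally be tuned on held-out data rather than fixed by hand.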

Relevance:

30.00%

Publisher:

Abstract:

Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular, the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
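The compactness property can be illustrated on a correlated bivariate Gaussian, where the optimal mean-field factorisation matches the conditional rather than the marginal variance (a standard textbook example, not one of the paper's time-series models):

```python
import numpy as np

# Mean-field variational inference is "compact": for a correlated
# bivariate Gaussian, the optimal factorised q(x1)q(x2) has variance
# 1/Lambda_ii (the conditional variance, 1 - r**2), which is smaller
# than the true marginal variance (1.0).
r = 0.9
cov = np.array([[1.0, r], [r, 1.0]])
precision = np.linalg.inv(cov)

true_marginal_var = cov[0, 0]           # 1.0
mean_field_var = 1.0 / precision[0, 0]  # approx 1 - r**2

print(true_marginal_var, mean_field_var)
```

The stronger the correlation, the more the factorised approximation understates the uncertainty, which is exactly the failure to propagate uncertainty in time that the abstract describes.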

Relevance:

30.00%

Publisher:

Abstract:

Conventional Hidden Markov Models (HMMs) generally consist of a Markov chain observed through a linear map corrupted by additive noise. This general class of model has enjoyed a huge and diverse range of applications, for example, speech processing, biomedical signal processing and, more recently, quantitative finance. However, a lesser-known extension of this general class is the so-called Factorial Hidden Markov Model (FHMM). FHMMs also have diverse applications, notably in machine learning, artificial intelligence and speech recognition [13, 17]. FHMMs extend the usual class of HMMs by supposing the partially observed state process is a finite collection of distinct Markov chains, either statistically independent or dependent. There is also considerable current activity in applying collections of partially observed Markov chains to complex action recognition problems, see, for example, [6]. In this article we consider the Maximum Likelihood (ML) parameter estimation problem for FHMMs. Much of the extant literature concerning this problem presents parameter estimation schemes based on full-data log-likelihood EM algorithms. This approach can be slow to converge and often imposes heavy demands on computer memory. The latter point is particularly relevant for the class of FHMMs where state space dimensions are relatively large. The contribution of this article is to develop new recursive formulae for a filter-based EM algorithm that can be implemented online. Our new formulae are equivalent ML estimators; however, they are purely recursive and so significantly reduce numerical complexity and memory requirements. A computer simulation is included to demonstrate the performance of our results. © Taylor & Francis Group, LLC.
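The flavour of a purely recursive, online filter with memory independent of sequence length can be sketched for an ordinary HMM (the FHMM formulae in the article are more involved; all numbers below are illustrative):

```python
import numpy as np

def forward_filter(A, B, pi, observations):
    """Recursive (online) HMM filter: at each step, propagate the state
    posterior through the transition matrix A, reweight by the emission
    likelihood B[:, y], and renormalise. Memory is O(n_states),
    independent of the length of the observation sequence."""
    alpha = pi.copy()
    for y in observations:
        alpha = A.T @ alpha          # predict: sum_i A[i, j] * alpha[i]
        alpha = alpha * B[:, y]      # update with observation likelihood
        alpha = alpha / alpha.sum()  # normalise to a distribution
    return alpha

# Hypothetical two-state chain with binary observations.
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities
pi = np.array([0.5, 0.5])

posterior = forward_filter(A, B, pi, [0, 0, 1])
print(posterior)
```

A filter-based EM algorithm augments this recursion with statistics needed for the M-step, but keeps the same fixed per-step memory footprint.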

Relevance:

30.00%

Publisher:

Abstract:

The trapped magnetic field is examined in bulk high-temperature superconductors that are artificially drilled along their c-axis. The influence of the hole pattern on the magnetization is studied and compared by means of numerical models and Hall probe mapping techniques. To this aim, we consider two bulk YBCO samples with a rectangular cross-section, each drilled with six holes arranged either on a rectangular lattice (sample I) or on a centered rectangular lattice (sample II). For the numerical analysis, three different models are considered for calculating the trapped flux: (i) a two-dimensional (2D) Bean model neglecting demagnetizing effects and flux creep; (ii) a 2D finite-element model neglecting demagnetizing effects but incorporating magnetic relaxation in the form of an E-J power law; and (iii) a 3D finite-element analysis that takes into account both the finite height of the sample and flux creep effects. For the experimental analysis, the trapped magnetic flux density is measured above the sample surface by Hall probe mapping performed before and after the drilling process. The maximum trapped flux density in the drilled samples is found to be smaller than that in the plain samples. The smallest magnetization drop is found for sample II, with the centered rectangular lattice. This result is confirmed by the numerical models. In each sample, the relative drops calculated independently with the three different models are in good agreement. As observed experimentally, the magnetization drop calculated for sample II is the smallest one and its relative value is comparable to the measured one. By contrast, the measured magnetization drop in sample I is much larger than that predicted by the simulations, most likely because of a change of the microstructure during the drilling process.

Relevance:

30.00%

Publisher:

Abstract:

The paper describes a new approach to artificial intelligence (AI) and its role in design. This approach argues that AI can be seen as 'text', or in other words as a medium for the communication of design knowledge and information between designers. This paper will apply these ideas to reinterpreting an existing knowledge-based system (KBS) design tool, namely CADET, a product design evaluation tool. The paper will discuss the authorial issues, amongst others, involved in the development of AI and KBS design tools by adopting this new approach. Consequently, the designers' rights and responsibilities will be better understood as the knowledge medium, through its concern with authorship, returns control to users rather than attributing the system with agent status. © 1998 Elsevier Science Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents ongoing work on data collection and collation from a large number of laboratory cement-stabilization projects worldwide. The aim is to employ Artificial Neural Networks (ANNs) to establish relationships between the variables that define the properties of cement-stabilized soils and the two parameters determined by the Unconfined Compression Test: the Unconfined Compressive Strength (UCS) and the stiffness E50, calculated from the UCS results. Bayesian predictive neural network models are developed to predict the UCS values of cement-stabilized inorganic clays/silts, as well as sands, as a function of selected soil mix variables, such as grain size distribution, water content, cement content and curing time. A model which can predict the stiffness values of cement-stabilized clays/silts is also developed and compared to the UCS model. The UCS model results emulate known trends better and provide more accurate estimates than the results from the E50 stiffness model. © 2013 American Society of Civil Engineers.
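A minimal sketch of the kind of feed-forward regression model described here, trained on synthetic data with hypothetical mix features (this is a plain gradient-descent network on made-up data, not the paper's Bayesian predictive model or dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vector per mix: [water content, cement content,
# log curing time, fines fraction]; target: a UCS-like quantity.
# All data here are synthetic, for illustration only.
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = 2.0 * X[:, 1] + 0.5 * X[:, 2] - 1.0 * X[:, 0] + 0.05 * rng.normal(size=200)

# One-hidden-layer network trained by batch gradient descent.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8,));   b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # scalar prediction per sample
    err = pred - y
    gW2 = h.T @ err / len(y); gb2 = err.mean()      # output-layer grads
    gh = np.outer(err, W2) * (1 - h ** 2)           # backprop through tanh
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)  # hidden-layer grads
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 4))
```

A Bayesian variant would additionally place priors on the weights and report predictive uncertainty, which is what makes the paper's estimates more than point predictions.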

Relevance:

30.00%

Publisher:

Abstract:

This paper discusses the factors that affect the engineering properties, such as strength, of cement-stabilized soils, using data collated from a number of international deep mixing projects. The selected factors are initial soil water content, grain size distribution, organic matter content, binder dosage, age and curing temperature. Some correlations resulting from these data are discussed and presented. The concept of Artificial Neural Networks and their applicability in developing predictive models for deep-mixed soils is presented and discussed using a subset of the collated data. The results from the neural network model were found to emulate the known trends, and reasonable estimates of strength as a function of the selected variables were obtained. © 2012 American Society of Civil Engineers.

Relevance:

30.00%

Publisher:

Abstract:

Geographical Information Systems (GIS) and Digital Elevation Models (DEMs) can be used to perform many geospatial and hydrological modelling tasks, including drainage and watershed delineation, flood prediction and physical development studies of urban and rural settlements. This paper explores the use of contour data and planimetric features extracted from topographic maps to derive digital elevation models (DEMs) for watershed delineation and flood impact analysis (for emergency preparedness) of part of Accra, Ghana, in a GIS environment. In the study, two categories of DEM were developed from 5 m contour and planimetric topographic data: a bare-earth DEM and a built-environment DEM. These derived DEMs were used as terrain inputs for performing spatial analysis and obtaining derivative products. The generated DEMs were used to delineate drainage patterns and the watershed of the study area using ArcGIS Desktop and its ArcHydro extension tool from the Environmental Systems Research Institute (ESRI). A vector-based approach was used to derive inundation areas at various flood levels. The DEM of built-up areas was used as input for determining the properties that would be inundated in a flood event and subsequently for generating flood inundation maps. The resulting inundation maps show that about 80% of the areas that have perennially experienced extensive flooding in the city fall within the predicted flood extent. This approach can therefore provide a simplified means of predicting the extent of inundation during flood events for emergency action, especially in less developed economies where sophisticated technologies and expertise are hard to come by. © 2009 Springer Netherlands.
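The inundation step can be approximated by a simple "bathtub" threshold on a DEM grid; the elevations and flood level below are illustrative, and a real analysis would also account for hydraulic connectivity rather than flagging every low cell:

```python
import numpy as np

def inundation_mask(dem, flood_level):
    """Flag cells of the elevation model at or below the flood level as
    inundated (a naive 'bathtub' threshold, ignoring connectivity)."""
    return dem <= flood_level

# Hypothetical 4x4 bare-earth DEM, elevations in metres.
dem = np.array([
    [12.0, 11.5, 11.0, 10.5],
    [11.0, 10.0,  9.5,  9.0],
    [10.0,  9.0,  8.5,  8.0],
    [ 9.5,  8.5,  8.0,  7.5],
])
mask = inundation_mask(dem, flood_level=9.0)
print(int(mask.sum()))  # number of inundated cells
```

Overlaying such a mask on the built-environment DEM is what identifies the properties at risk at each flood level.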

Relevance:

30.00%

Publisher:

Abstract:

Vibration and acoustic analysis at higher frequencies faces two challenges: computing the response without using an excessive number of degrees of freedom, and quantifying its uncertainty due to small spatial variations in geometry, material properties and boundary conditions. Efficient models make use of the observation that when the response of a decoupled vibro-acoustic subsystem is sufficiently sensitive to uncertainty in such spatial variations, the local statistics of its natural frequencies and mode shapes saturate to universal probability distributions. This holds irrespective of the causes that underlie these spatial variations and thus leads to a nonparametric description of uncertainty. This work deals with the identification of uncertain parameters in such models by using experimental data. One of the difficulties is that both experimental errors and modeling errors, due to the nonparametric uncertainty that is inherent to the model type, are present. This is tackled by employing a Bayesian inference strategy. The prior probability distribution of the uncertain parameters is constructed using the maximum entropy principle. The likelihood function that is subsequently computed takes the experimental information, the experimental errors and the modeling errors into account. The posterior probability distribution, which is computed with the Markov Chain Monte Carlo method, provides a full uncertainty quantification of the identified parameters, and indicates how well their uncertainty is reduced, with respect to the prior information, by the experimental data. © 2013 Taylor & Francis Group, London.
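The posterior computation via Markov Chain Monte Carlo can be sketched with a random-walk Metropolis sampler on a one-dimensional stand-in posterior (the target log-density here is illustrative, not the paper's vibro-acoustic likelihood):

```python
import math
import random

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D posterior specified by
    its unnormalised log-density."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)       # propose a local move
        lpp = log_post(xp)
        # accept with probability min(1, p(xp) / p(x))
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Hypothetical posterior: standard normal log-density, up to a constant.
samples = metropolis(lambda t: -0.5 * t * t, x0=3.0)
mean = sum(samples[1000:]) / len(samples[1000:])
print(round(mean, 2))
```

Discarding the first block of samples as burn-in, the chain's empirical moments estimate the posterior's, which is how the full uncertainty quantification of the identified parameters is obtained.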

Relevance:

30.00%

Publisher:

Abstract:

Copyright © 2014, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. This paper presents the beginnings of an automatic statistician, focusing on regression problems. Our system explores an open-ended space of statistical models to discover a good explanation of a data set, and then produces a detailed report with figures and natural-language text. Our approach treats unknown regression functions nonparametrically using Gaussian processes, which has two important consequences. First, Gaussian processes can model functions in terms of high-level properties (e.g. smoothness, trends, periodicity, changepoints). Taken together with the compositional structure of our language of models, this allows us to automatically describe functions in simple terms. Second, the use of flexible nonparametric models and a rich language for composing them in an open-ended manner also results in state-of-the-art extrapolation performance evaluated over 13 real time series data sets from various domains.
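The compositional language of models can be sketched with a few base kernels and their sums and products, which remain valid Gaussian-process covariance functions (the kernel choices and hyperparameters below are illustrative, not the system's actual search space):

```python
import numpy as np

def se_kernel(x1, x2, length=1.0):
    """Squared-exponential kernel: smooth functions."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / length ** 2)

def periodic_kernel(x1, x2, period=1.0, length=1.0):
    """Periodic kernel: repeating structure."""
    d = np.pi * np.abs(x1[:, None] - x2[None, :]) / period
    return np.exp(-2.0 * np.sin(d) ** 2 / length ** 2)

def linear_kernel(x1, x2):
    """Linear kernel: trends."""
    return x1[:, None] * x2[None, :]

# Composition: sums and products of valid kernels are valid kernels,
# e.g. "linear trend plus a smooth periodic component".
x = np.linspace(0.0, 4.0, 20)
K = linear_kernel(x, x) + se_kernel(x, x) * periodic_kernel(x, x, period=1.0)

# A valid covariance matrix is symmetric positive semi-definite.
eigvals = np.linalg.eigvalsh(K)
print(bool(np.allclose(K, K.T)), float(eigvals.min()))
```

Searching over such compositions, and then naming each component ("a linear trend", "a yearly periodic component"), is what lets the system produce readable natural-language descriptions of a fitted function.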