877 results for Model-based geostatistics
Abstract:
AC loss can be a significant problem for any application that utilizes or produces an AC current or magnetic field, such as an electric machine. The authors are currently investigating the electromagnetic properties of high-temperature superconductors, with a particular focus on the AC loss in coils made from YBCO superconductors. In this paper, a 2D finite element model based on the H formulation is introduced. The model is then used to calculate the transport AC loss of a racetrack-shaped coil, both with a bulk approximation and by modelling the individual turns. The coil model is based on the superconducting stator coils used in the University of Cambridge EPEC Superconductivity Group's superconducting permanent magnet synchronous motor design. The transport AC loss of a stator coil is measured using an electrical method based on inductive compensation with a variable mutual inductance. The simulated results are compared with the experimental results, verifying the validity of the model, and ways to improve its accuracy are discussed. © 2010 IEEE.
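As general background (not taken from the paper itself), the H formulation typically couples Faraday's and Ampère's laws with a power-law resistivity for the superconductor, and the transport AC loss is obtained by integrating E·J over the conductor cross-section and one cycle. The equations below are a standard illustrative form, with E0, n and Jc treated as assumed material parameters.

```latex
% Illustrative H-formulation equations (standard form; parameter values are assumptions)
\nabla \times \mathbf{E} = -\mu_0 \mu_r \frac{\partial \mathbf{H}}{\partial t},
\qquad \mathbf{J} = \nabla \times \mathbf{H},
\qquad \mathbf{E} = E_0 \left( \frac{|\mathbf{J}|}{J_c} \right)^{n} \frac{\mathbf{J}}{|\mathbf{J}|},
\qquad Q_{\mathrm{AC}} = \int_{\text{cycle}} \int_{\Omega} \mathbf{E} \cdot \mathbf{J} \, \mathrm{d}\Omega \, \mathrm{d}t .
```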
Abstract:
Model-based compensation schemes are a powerful approach for noise-robust speech recognition. Recently there have been a number of investigations into adaptive training and into estimating the noise models used for model adaptation. This paper examines the use of EM-based schemes for estimating both the canonical models and the noise model, including discriminative adaptive training. One issue that arises when estimating the noise model is a mismatch between the noise estimation approximation and the final model compensation scheme. This paper proposes FA-style compensation, in which this mismatch is eliminated, though at the expense of sensitivity to the initial noise estimates. EM-based discriminative adaptive training is evaluated on in-car and Aurora4 tasks. FA-style compensation is then evaluated in an incremental mode on the in-car task. © 2011 IEEE.
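For context, model-based (VTS-style) compensation usually starts from a mismatch function relating clean speech, additive noise and a convolutional term in the cepstral domain. The form below is the standard one from the VTS literature, shown only as background rather than as this paper's FA-style scheme; C denotes the DCT matrix, and x, n and h are the clean-speech, noise and channel cepstra.

```latex
% Standard cepstral-domain mismatch function used in VTS-style compensation (background)
\mathbf{y} = \mathbf{x} + \mathbf{h}
  + \mathbf{C} \log\!\left( \mathbf{1} + \exp\!\left( \mathbf{C}^{-1} (\mathbf{n} - \mathbf{x} - \mathbf{h}) \right) \right)
```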
Abstract:
Growing environmental concerns caused by natural resource depletion and pollution need to be addressed. One approach to these problems is Sustainable Development, a key concept for our society to meet present as well as future needs worldwide. Manufacturing clearly has a major role to play in the move towards a more sustainable society. However, it appears that basic principles of environmental sustainability are not systematically applied, with practice tending to focus on local improvements. The aim of the work presented in this paper is to adopt a more holistic view of the factory unit to reveal opportunities for wider improvement. This research analyses environmental principles and industrial practice to develop a conceptual manufacturing ecosystem model as a foundation for improving environmental performance. The model focuses on material, energy and waste flows to better understand the interactions between manufacturing operations, supporting facilities and surrounding buildings. The research was conducted in three steps: (1) existing concepts and models for industrial sustainability were reviewed, and environmental practices in manufacturing were collected and analysed; (2) gaps in knowledge and practice were identified; (3) the outcome is a manufacturing ecosystem model based on industrial ecology (IE). This conceptual model has novelty in detailing IE application at the factory level and integrating all resource flows. The work is a base on which to build quantitative modelling tools to seek integrated solutions for lower resource input, higher resource productivity, fewer wastes and emissions, and lower operating cost within the boundary of a factory unit. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
Vector Taylor Series (VTS) model-based compensation is a powerful approach for noise-robust speech recognition. An important extension to this approach is VTS adaptive training (VAT), which allows canonical models to be estimated on diverse noise-degraded training data. These canonical models can be estimated using EM-based approaches, allowing simple extensions to discriminative VAT (DVAT). However, to ensure a diagonal corrupted-speech covariance matrix, the Jacobian (loading matrix) relating the noise and clean speech is diagonalised. In this work, an approach is proposed for obtaining optimal diagonal loading matrices by minimising the expected KL divergence between the distributions obtained with the diagonal loading matrix and the "correct" distributions. The performance of DVAT using the standard and the optimal diagonalisation was evaluated on both in-car collected data and the Aurora4 task. © 2012 IEEE.
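As background on why the diagonalisation matters (a standard VTS result rather than this paper's derivation), the corrupted-speech covariance is built from the clean and noise covariances through the Jacobian A = ∂y/∂x and is then forced diagonal for efficient decoding; the paper's contribution lies in choosing that diagonal form optimally in a KL sense.

```latex
% Standard VTS covariance compensation followed by diagonalisation (background form)
\boldsymbol{\Sigma}_y \approx \mathbf{A}\,\boldsymbol{\Sigma}_x\,\mathbf{A}^{\top}
  + (\mathbf{I} - \mathbf{A})\,\boldsymbol{\Sigma}_n\,(\mathbf{I} - \mathbf{A})^{\top},
\qquad
\boldsymbol{\Sigma}_y^{\mathrm{diag}} = \operatorname{diag}\!\left( \boldsymbol{\Sigma}_y \right)
```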
Abstract:
Magnetic shielding efficiency was measured on high-Tc superconducting hollow cylinders subjected to either an axial or a transverse magnetic field over a large range of field sweep rates, dBapp/dt. The behaviour of the superconductor was modelled in order to reproduce the main features of the field penetration curves using a minimum number of free parameters suitable for both magnetic field orientations. The field penetration measurements were carried out on Pb-doped Bi-2223 tubes at 77 K by applying linearly increasing magnetic fields with a constant sweep rate ranging between 10 μT s−1 and 10 mT s−1 for both directions of the applied magnetic field. The experimental curves of the internal field versus the applied field, Bin(Bapp), show that, at a given sweep rate, the magnetic field at which penetration occurs, Blim, is lower for the transverse configuration than for the axial configuration. A power-law dependence with a large exponent, n′, is found between Blim and dBapp/dt. The values of n′ are nearly the same for both configurations. We show that the main features of the Bin(Bapp) curves can be reproduced using a simple 2D model, based on the method of Brandt, involving an E(J) power law with an n-exponent and a field-dependent critical current density, Jc(B), following the Kim model: Jc = Jc0(1 + B/B1)−1. In particular, a linear relationship between the measured n′-exponents and the n-exponent of the E(J) power law is suggested by taking into account the field dependence of the critical current density. Differences between the axial and the transverse shielding properties can be simply attributed to demagnetizing fields. © 2009 IOP Publishing Ltd.
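The two constitutive relations quoted in the abstract can be written explicitly as follows (Ec is the usual electric-field criterion, here an assumed normalisation):

```latex
% E(J) power law and Kim-model critical current density used in the 2D Brandt-type model
\mathbf{E} = E_c \left( \frac{J}{J_c(B)} \right)^{n},
\qquad
J_c(B) = \frac{J_{c0}}{1 + B/B_1}
```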
Abstract:
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying perturbations, makes a model-based solution difficult and, in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence, which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual-tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion. This is capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. © 1992-2012 IEEE.
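A minimal sketch of the kind of quality-based ROI selection and (naive) fusion described above, assuming OpenCV and NumPy; the sharpness metric, the keep fraction and the averaging step are illustrative stand-ins, not the authors' learning-based metric or DT-CWT fusion.

```python
import cv2
import numpy as np

def sharpness(gray_roi: np.ndarray) -> float:
    """Variance of the Laplacian: a simple proxy for ROI sharpness/quality."""
    return cv2.Laplacian(gray_roi, cv2.CV_64F).var()

def select_informative_rois(frames, roi, keep_fraction=0.5):
    """Keep the sharpest ROIs across a frame sequence (illustrative selection rule).

    frames: iterable of BGR frames; roi: (x, y, w, h) region of interest.
    """
    x, y, w, h = roi
    scored = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        crop = gray[y:y + h, x:x + w]
        scored.append((sharpness(crop), crop))
    scored.sort(key=lambda s: s[0], reverse=True)
    n_keep = max(1, int(len(scored) * keep_fraction))
    return [crop for _, crop in scored[:n_keep]]

def fuse_rois(rois):
    """Naive temporal average of the selected ROIs; the paper's region-level
    fusion uses the dual-tree complex wavelet transform instead."""
    return np.mean(np.stack(rois).astype(np.float64), axis=0).astype(np.uint8)
```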
Abstract:
In order to account for the interfacial friction of composite materials, an analytical model based on contact geometry and local friction is proposed. A contact area includes several types of microcontacts depending on the reinforcement materials and their shape. The proportion between these areas is defined by the in-plane contact geometry. The model applied to a fibre-reinforced composite results in a dependence of friction on the surface fibre fraction and the local friction coefficients. To validate this analytical model, an experimental study on carbon fibre-reinforced epoxy composites under low normal pressure was performed. The effects of fibre volume fraction and fibre orientation were studied, discussed and compared with the analytical model results. © Springer Science+Business Media, LLC 2012.
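A simplified, area-weighted reading of the model (an assumption consistent with the description above, not the paper's exact formulation) expresses the effective friction coefficient in terms of the surface fibre fraction a_f and the local friction coefficients of fibre and matrix microcontacts:

```latex
% Illustrative area-weighted friction law (simplified; the paper may distinguish more contact types)
\mu_{\mathrm{eff}} \approx a_f\,\mu_{\mathrm{fibre}} + (1 - a_f)\,\mu_{\mathrm{matrix}}
```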
Abstract:
To investigate the seasonal and interannual variations in biological productivity in the South China Sea (SCS), a Pacific basin-wide physical-biogeochemical model has been developed and used to estimate the biological productivity and export flux in the SCS. The Pacific circulation model, based on the Regional Ocean Modeling System (ROMS), is forced with daily air-sea fluxes derived from the NCEP (National Centers for Environmental Prediction) reanalysis between 1990 and 2004. The biogeochemical processes are simulated with a carbon, Si(OH)4 and nitrogen ecosystem (CoSiNE) model consisting of silicate, nitrate, ammonium, two phytoplankton groups (small phytoplankton and large phytoplankton), two zooplankton grazers (small micrograzers and large mesozooplankton), and two detritus pools. The ROMS-CoSiNE model favourably reproduces many of the observed features, such as Chl a, nutrients, and primary production (PP) in the SCS. The modelled depth-integrated PP over the euphotic zone (0-125 m) varies seasonally, with the highest value of 386 mg C m−2 d−1 during winter and the lowest value of 156 mg C m−2 d−1 during early summer. The annual mean value is 196 mg C m−2 d−1. The model-integrated annual mean new production (uptake of nitrate), in carbon units, is 64.4 mg C m−2 d−1, which yields an f-ratio of 0.33 for the entire SCS. The modelled export ratio (e-ratio: the ratio of export to PP) is 0.24 for the basin-wide SCS. The year-to-year variation of biological productivity in the SCS is weaker than the seasonal variation. The large phytoplankton group tends to dominate over the smaller phytoplankton group and likely plays an important role in determining the interannual variability of primary and new production.
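The quoted f-ratio follows directly from the reported annual means:

```latex
f = \frac{\text{new production}}{\text{primary production}}
  = \frac{64.4\ \mathrm{mg\,C\,m^{-2}\,d^{-1}}}{196\ \mathrm{mg\,C\,m^{-2}\,d^{-1}}} \approx 0.33
```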
Abstract:
This paper consists of two major parts. First, we present the outline of a simple approach to a very-low-bandwidth video-conferencing system relying on an example-based hierarchical image compression scheme. In particular, we discuss the use of example images as a model, the number of required examples, faces as a class of semi-rigid objects, a hierarchical model based on decomposition into different time-scales, and the decomposition of face images into patches of interest. In the second part, we present several algorithms for image processing and animation as well as experimental evaluations. Among the original contributions of this paper is an automatic algorithm for pose estimation and normalization. We also review and compare different algorithms for finding the nearest neighbors in a database for a new input, as well as a generalized algorithm for blending patches of interest in order to synthesize new images. Finally, we outline the possible integration of several algorithms to illustrate a simple model-based video-conference system.
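A minimal sketch of the two generic building blocks mentioned, nearest-neighbour lookup in an example database and patch blending, assuming NumPy; the L2 distance and linear cross-fade are illustrative choices, not the specific algorithms compared in the paper.

```python
import numpy as np

def nearest_example(patch: np.ndarray, example_patches: np.ndarray) -> int:
    """Return the index of the stored example patch closest to `patch` (L2 distance).

    patch: (h, w) array; example_patches: (n, h, w) array of stored examples.
    """
    diffs = example_patches.astype(np.float64) - patch.astype(np.float64)
    dists = np.sum(diffs * diffs, axis=(1, 2))
    return int(np.argmin(dists))

def blend_patches(a: np.ndarray, b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Linear cross-fade between two patches of interest (a crude stand-in for
    the generalized blending algorithm discussed in the paper); returns floats."""
    return alpha * a.astype(np.float64) + (1.0 - alpha) * b.astype(np.float64)
```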
Abstract:
There has been much interest in the area of model-based reasoning within the Artificial Intelligence community, particularly in its application to diagnosis and troubleshooting. The core issue in this thesis, simply put, is: model-based reasoning is fine, but whence the model? Where do the models come from? How do we know we have the right models? What does "the right model" mean, anyway? Our work has three major components. The first component deals with how we determine whether a piece of information is relevant to solving a problem. We have three ways of determining relevance: derivational, situational, and an order-of-magnitude reasoning process. The second component deals with defining and building the models used to solve problems. We identify these models, determine what we need to know about them and, importantly, determine when they are appropriate. Currently, the system has a collection of four basic models and two hybrid models. This collection of models has been successfully tested on a set of fifteen simple kinematics problems. The third major component of our work deals with how the models are selected.
Abstract:
Joern Fischer, David B. Lindenmayer, and Ioan Fazey (2004). Appreciating Ecological Complexity: Habitat Contours as a Conceptual Landscape Model. Conservation Biology, 18(5), pp. 1245-1253.
Abstract:
An improved method for deformable shape-based image segmentation is described. Image regions are merged together and/or split apart, based on their agreement with an a priori distribution on the global deformation parameters for a shape template. The quality of a candidate region merging is evaluated by a cost measure that includes the homogeneity of image properties within the combined region, the degree of overlap with a deformed shape model, and a deformation likelihood term. Perceptually motivated criteria are used to determine where and how to split regions, based on the local shape properties of the region group's bounding contour. A globally consistent interpretation is determined in part by the minimum description length principle. Experiments show that the model-based splitting strategy yields a significant improvement in segmentation over a method that uses merging alone.
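A minimal sketch of a merge cost combining the three terms named above, assuming NumPy; the weights, the variance-based homogeneity term and the overlap penalty are illustrative assumptions rather than the paper's exact measure.

```python
import numpy as np

def merge_cost(region_pixels, overlap_ratio, deformation_nll,
               w_homogeneity=1.0, w_overlap=1.0, w_deform=1.0):
    """Illustrative cost for a candidate region merge.

    region_pixels: 1-D array of intensities in the combined region.
    overlap_ratio: fraction of the combined region covered by the deformed template (0..1).
    deformation_nll: negative log-likelihood of the template deformation parameters.
    """
    homogeneity_term = float(np.var(region_pixels))   # low variance = homogeneous region
    overlap_term = 1.0 - float(overlap_ratio)         # penalise poor template overlap
    return (w_homogeneity * homogeneity_term
            + w_overlap * overlap_term
            + w_deform * float(deformation_nll))
```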
Abstract:
Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems and product configurators. The system the user interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanation fail to provide sufficient information. We define new forms of explanation that aim to be more informative. Even though explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity and therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations, a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow. We present algorithms to compute such relaxations in times compatible with interactivity, achieved by making use of different types of compiled representations interchangeably. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
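A toy, brute-force illustration of the "most soluble relaxations" idea on a hypothetical product catalogue; the thesis relies on compiled representations rather than enumeration, and all names and data below are made up.

```python
from itertools import combinations

# Hypothetical product catalogue and user requirements (purely illustrative).
PRODUCTS = [
    {"colour": "red", "price": 900, "engine": "diesel"},
    {"colour": "blue", "price": 1200, "engine": "petrol"},
    {"colour": "red", "price": 1500, "engine": "petrol"},
]
REQUIREMENTS = {
    "cheap": lambda p: p["price"] <= 1000,
    "red": lambda p: p["colour"] == "red",
    "petrol": lambda p: p["engine"] == "petrol",
}

def solutions(subset):
    """Products satisfying every requirement in `subset` (a relaxation of the full set)."""
    return [p for p in PRODUCTS if all(REQUIREMENTS[r](p) for r in subset)]

def most_soluble_relaxations(size):
    """Among relaxations of a given size, return those allowing the most products."""
    best, best_count = [], -1
    for subset in combinations(REQUIREMENTS, size):
        count = len(solutions(subset))
        if count > best_count:
            best, best_count = [subset], count
        elif count == best_count:
            best.append(subset)
    return best, best_count

# The full requirement set above is unsatisfiable; size-2 relaxations show the user
# which requirements can be kept together.
print(most_soluble_relaxations(2))
```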
Abstract:
BACKGROUND: Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context. METHODOLOGY: The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and the funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. PRINCIPAL FINDINGS: The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. CONCLUSIONS: Based on our empirical observations and the resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration efforts among research communities distributed around the globe.
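A deliberately small stock-and-flow sketch of the kind of System Dynamics model described, written in Python; the growth rates and policy boosts are hypothetical placeholders chosen only to mirror the qualitative finding that metadata sharing outperforms data sharing and that combining policies performs best.

```python
def simulate_publications(years=10, base_rate=10.0,
                          metadata_sharing=False, data_sharing=False):
    """Return cumulative publications per year under a given policy mix.

    All parameter values and the functional form are hypothetical illustrations,
    not the calibrated model from the paper.
    """
    boost = 1.0
    if metadata_sharing:
        boost += 0.5   # assumed relative gain from metadata sharing
    if data_sharing:
        boost += 0.3   # assumed (smaller) relative gain from raw-data sharing
    publications = 0.0
    trajectory = []
    for _ in range(years):
        publications += base_rate * boost   # annual inflow to the "publications" stock
        trajectory.append(publications)
    return trajectory

# Combined policies accumulate the most publications in this toy setting.
print(simulate_publications(metadata_sharing=True, data_sharing=True)[-1])
```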
Abstract:
Software-based control of life-critical embedded systems has become increasingly complex and, to a large extent, has come to determine human safety. For example, implantable cardiac pacemakers have over 80,000 lines of code which are responsible for maintaining the heart within safe operating limits. As firmware-related recalls accounted for over 41% of the 600,000 devices recalled in the last decade, there is a need for rigorous model-driven design tools to generate verified code from verified software models. To this effect, we have developed the UPP2SF model-translation tool, which facilitates automatic conversion of verified models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the translation rules that ensure correct model conversion, applicable to a large class of models. We demonstrate how UPP2SF is used in the model-driven design of a pacemaker whose model is (a) designed and verified in UPPAAL (using timed automata), (b) automatically translated to Stateflow for simulation-based testing, and then (c) automatically generated into modular code for hardware-level integration testing of timing-related errors. In addition, we show how UPP2SF may be used for worst-case execution time estimation early in the design stage. Using UPP2SF, we demonstrate the value of an integrated end-to-end modeling, verification, code-generation and testing process for complex software-controlled embedded systems. © 2014 ACM.
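An illustrative sketch of the kind of structural mapping a tool like UPP2SF performs, expressed with hypothetical Python dictionaries; the location names, guards and the rule below are assumptions for illustration, not UPP2SF's actual formats or translation rules.

```python
# Hypothetical UPPAAL-style timed-automaton template (pacemaker-flavoured names are made up).
uppaal_template = {
    "locations": ["LRI", "AVI"],
    "initial": "LRI",
    "edges": [
        {"src": "LRI", "dst": "AVI", "guard": "t >= TLRI - TAVI", "reset": ["t"]},
        {"src": "AVI", "dst": "LRI", "guard": "t >= TAVI", "reset": ["t"]},
    ],
}

def to_stateflow(template):
    """Map locations to chart states and guarded edges to transitions
    (an illustrative rule, not the tool's verified translation)."""
    chart = {"states": [], "transitions": []}
    for loc in template["locations"]:
        chart["states"].append({"name": loc,
                                "is_initial": loc == template["initial"]})
    for edge in template["edges"]:
        chart["transitions"].append({
            "source": edge["src"],
            "destination": edge["dst"],
            "condition": edge["guard"],
            "condition_action": "; ".join(f"{c} = 0" for c in edge["reset"]),
        })
    return chart

print(to_stateflow(uppaal_template))
```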