104 results for Dimensionless Parameter
Abstract:
We describe a method for verifying seismic modelling parameters. It is equivalent to performing several iterations of unconstrained least-squares migration (LSM). The approach allows the comparison of modelling/imaging parameter configurations with greater confidence than simply viewing the migrated images. The method is best suited to determining discrete parameters but can be used for continuous parameters albeit with greater computational expense.
Abstract:
This paper focuses on the PSpice model of the SiC-JFET element inside a SiCED cascode device. The device model parameters are extracted from I-V and C-V characterization curves. To validate the model, an inductive test-rig circuit is designed and tested. The switching loss is estimated using both an oscilloscope and a calorimeter, and these measurements are found to be in good agreement with the simulated results.
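As a rough, self-contained illustration of the parameter-extraction step described above (not the paper's actual PSpice flow; the square-law JFET model and all device values below are assumptions), a threshold voltage and transconductance parameter can be recovered from a saturation-region I-V curve by a linear fit of √Id against Vgs:

```python
import numpy as np

def extract_jfet_params(vgs, id_sat):
    """Fit the square-law model Id = beta*(Vgs - Vto)^2 via a linear
    fit of sqrt(Id) against Vgs: sqrt(Id) = sqrt(beta)*(Vgs - Vto)."""
    slope, intercept = np.polyfit(vgs, np.sqrt(id_sat), 1)
    beta = slope ** 2          # transconductance parameter (A/V^2)
    vto = -intercept / slope   # threshold (pinch-off) voltage (V)
    return beta, vto

# Synthetic saturation-region I-V data for a hypothetical normally-on
# device (beta = 0.4 A/V^2, Vto = -6 V; illustrative values only).
vgs = np.linspace(-4.0, 0.0, 20)
id_sat = 0.4 * (vgs - (-6.0)) ** 2

beta, vto = extract_jfet_params(vgs, id_sat)
```

On noise-free data the fit recovers the generating parameters exactly; with measured curves one would restrict the fit to the clearly saturated region first.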
Abstract:
Reinforcement learning techniques have been used successfully to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estimate the parameters of a dialogue policy which selects the system's responses based on the inferred dialogue state. However, the inference of the dialogue state itself depends on a dialogue model which describes the expected behaviour of a user when interacting with the system. Ideally, the parameters of this dialogue model should also be optimised to maximise the expected cumulative reward. This article presents two novel reinforcement learning algorithms for learning the parameters of a dialogue model. First, the Natural Belief Critic algorithm is designed to optimise the model parameters while the policy is kept fixed. This algorithm is suitable, for example, in systems using a handcrafted policy, perhaps prescribed by other design considerations. Second, the Natural Actor and Belief Critic algorithm jointly optimises both the model and the policy parameters. The algorithms are evaluated on a statistical dialogue system modelled as a Partially Observable Markov Decision Process in a tourist information domain. The evaluation is performed with a user simulator and with real users. The experiments indicate that model parameters estimated to maximise the expected reward function provide improved performance compared to the baseline handcrafted parameters. © 2011 Elsevier Ltd. All rights reserved.
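The underlying idea of estimating policy parameters by reinforcement learning can be sketched on a toy single-turn "dialogue" with two candidate responses. This is a plain REINFORCE update, not the paper's Natural Belief Critic or NABC algorithms (which use natural gradients over a POMDP belief state), and the reward probabilities are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()

# Toy setup: response 0 yields reward 1 with probability 0.8,
# response 1 with probability 0.2 (illustrative values only).
SUCCESS_PROB = np.array([0.8, 0.2])

theta = np.zeros(2)   # policy parameters
lr = 0.1
for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)                       # sample a response
    r = float(rng.random() < SUCCESS_PROB[a])    # sampled reward
    grad = -p                                    # grad log pi(a) = e_a - p
    grad[a] += 1.0
    theta += lr * r * grad                       # REINFORCE update

p_final = softmax(theta)
```

After training, the policy strongly prefers the response with the higher expected reward.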
Abstract:
This paper presents an agenda-based user simulator which has been extended to be trainable on real data, with the aim of more closely modelling the complex rational behaviour exhibited by real users. The trainable part is formed by a set of random decision points that may be encountered during the process of receiving a system act and responding with a user act. A sample-based method is presented for using real user data to estimate the parameters that control these decisions. Evaluation results are given both in terms of the statistics of generated user behaviour and the quality of policies trained with different simulators. Compared to a handcrafted simulator, the trained system provides a much better fit to corpus data, and evaluations suggest that this better fit should result in improved dialogue performance. © 2010 Association for Computational Linguistics.
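The sample-based estimation of binary random decision points can be sketched as a maximum-likelihood count over annotated user turns; the decision-point names and the tiny corpus below are invented for illustration:

```python
from collections import Counter

# Hypothetical corpus: for each observed user turn we record which
# branch was taken at each named random decision point.
corpus = [
    {"repeat_request": True,  "add_constraint": False},
    {"repeat_request": False, "add_constraint": True},
    {"repeat_request": True,  "add_constraint": True},
    {"repeat_request": True,  "add_constraint": False},
]

def estimate_decision_probs(corpus):
    """Maximum-likelihood estimate: the fraction of turns on which
    each decision point fired."""
    counts, totals = Counter(), Counter()
    for turn in corpus:
        for point, fired in turn.items():
            totals[point] += 1
            counts[point] += int(fired)
    return {p: counts[p] / totals[p] for p in totals}

probs = estimate_decision_probs(corpus)
```

The estimated probabilities then parameterise the simulator's random decisions when it generates user acts.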
Abstract:
Several research studies have recently been initiated to investigate the use of construction site images for automated infrastructure inspection, progress monitoring, etc. In these studies, it is always necessary to extract material regions (concrete or steel) from the images. Existing methods make use of a material's characteristic color/texture ranges for material information retrieval, but they do not sufficiently discuss how to find appropriate ranges. As a result, users have to define suitable ranges themselves, which is difficult for those without sufficient image-processing background. This paper presents a novel method of identifying concrete material regions using machine learning techniques. In this method, each construction site image is first divided into regions through image segmentation. Then, the visual features of each region are calculated and classified with a pre-trained classifier. The output value determines whether the region is composed of concrete or not. The method was implemented in C++ and tested on hundreds of construction site images. The results were compared with manual classifications to demonstrate the method's validity.
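A minimal sketch of the classify-regions-by-visual-features step (the paper's implementation is in C++ and its features and classifier are not specified; the synthetic feature vectors and the nearest-centroid classifier below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-region visual features (e.g. mean hue, saturation,
# texture energy) -- invented stand-ins for the paper's features.
concrete = rng.normal([0.1, 0.15, 0.3], 0.05, size=(100, 3))
other    = rng.normal([0.6, 0.70, 0.8], 0.05, size=(100, 3))

X = np.vstack([concrete, other])
y = np.array([1] * 100 + [0] * 100)   # 1 = concrete region

def train_nearest_centroid(X, y):
    # One centroid per class in feature space.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    # Assign the region to the class with the nearest centroid.
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

model = train_nearest_centroid(X, y)
accuracy = np.mean([predict(model, x) == c for x, c in zip(X, y)])
```

Any pre-trained classifier could replace the nearest-centroid rule; the pipeline shape (segment, featurise, classify per region) is what matters.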
Abstract:
A systematic study of the parameter space of graphene chemical vapor deposition (CVD) on polycrystalline Cu foils is presented, aiming at a more fundamental process rationale, in particular regarding the choice of carbon precursor and the mitigation of Cu sublimation. CH4 as precursor requires H2 dilution and temperatures ≥1000 °C to keep the Cu surface reduced and yield a high-quality, complete monolayer graphene coverage. The H2 atmosphere etches as-grown graphene; hence, maintaining a balanced CH4/H2 ratio is critical. Such a balance is more easily achieved at low-pressure conditions, at which, however, Cu sublimation reaches deleterious levels. In contrast, C6H6 as precursor requires no reactive diluent and consistently gives similar graphene quality at 100-150 °C lower temperatures. The lower process temperature and more robust processing conditions allow the problem of Cu sublimation to be effectively addressed. Graphene formation is not inherently self-limited to a monolayer for any of the precursors. Rather, the higher the supplied carbon chemical potential, the higher the likelihood of film inhomogeneity and primary and secondary multilayer graphene nucleation. For the latter, domain boundaries of the inherently polycrystalline CVD graphene offer pathways for a continued carbon supply to the catalyst. Graphene formation is significantly affected by the Cu crystallography; i.e., the evolution of microstructure and texture of the catalyst template forms an integral part of the CVD process. © 2012 American Chemical Society.
Abstract:
We present an alternative method of producing density stratifications in the laboratory based on the 'double-tank' method proposed by Oster (Sci Am 213:70-76, 1965). We refer to Oster's method as the 'forced-drain' approach, as the volume flow rates between the connecting tanks are controlled by mechanical pumps. We first determine the range of density profiles that may be established with the forced-drain approach beyond the linear stratification predicted by Oster. The dimensionless density stratification is expressed analytically as a function of three ratios: the volume flow rate ratio n, the ratio of the initial liquid volumes λ and the ratio of the initial densities ψ. We then propose a method which does not require pumps to control the volume flow rates but instead allows the connecting tanks to drain freely under gravity. This is referred to as the 'free-drain' approach. We derive an expression for the density stratification produced and compare our predictions with saline stratifications established in the laboratory using the 'free-drain' extension of Oster's method. To assist in the practical application of our results, we plot the regions of parameter space that yield concave/convex or linear density profiles for both the forced-drain and free-drain approaches. The free-drain approach allows the experimentalist to produce a broad range of density profiles by varying the initial liquid depths, cross-sectional areas and drain opening areas of the tanks. One advantage over the original Oster approach is that density profiles with an inflexion point can now be established. © 2008 Springer-Verlag.
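Oster's forced-drain method in its classic linear configuration (fresh water pumped into a stirred salty mixing tank at rate Q, with the mixing tank drained at 2Q) can be checked numerically from the stirred-tank mass balance; the parameter values below are arbitrary:

```python
import numpy as np

Q = 1.0          # pump rate out of the fresh tank (arbitrary units)
V0 = 100.0       # initial volume of the stirred mixing tank
rho_A = 1.000    # fresh-water density
rho_B = 1.100    # initial density in the mixing tank

dt = 1e-3
t, rho, volume = 0.0, rho_B, V0
outflow = []     # (time, density) of fluid delivered to the test tank
while volume > 1.0:
    outflow.append((t, rho))
    # Mass balance d(rho*V)/dt = Q*rho_A - 2Q*rho with dV/dt = -Q
    # expands to V*drho/dt = Q*(rho_A - rho):
    rho += Q * (rho_A - rho) / volume * dt
    volume -= Q * dt
    t += dt

times = np.array([tt for tt, _ in outflow])
densities = np.array([r for _, r in outflow])
```

The outflow density decreases linearly in time, reproducing Oster's linear stratification; non-unit flow-rate ratios n and free-drain tanks generalise this to the curved profiles discussed above.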
Abstract:
We examine the time taken to flush pollutants from a naturally ventilated room. A simple theoretical model is developed to predict the time taken for neutrally buoyant pollutants to be removed from a room by a flow driven by localised heat inputs; both line and point heat sources are considered. We show that the rate of flushing is a function of the room volume (V), the vent areas (A) and the distribution, number (n) and strength (B) of the heat sources. We also show that the entire problem can be reduced to a single parameter (μ) that is a measure of the vent areas, and a dimensionless time (τ) that is a function of B, V and μ. Small-scale salt-bath experiments were conducted to measure flushing rates in order to validate our modelling assumptions and predictions. The predicted flushing times show good agreement with the experiments over a wide range of μ. We apply our model to a typical open-plan office and a lecture theatre and discuss some of the implications of our results. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
In this paper a study of the air flow pattern created by a two-dimensional Aaberg exhaust hood local ventilation system is presented. A mathematical model of the flow, in terms of the stream function ψ, is derived analytically for both laminar and turbulent injections of fluid. Streamlines and lines of constant speed deduced from the model are examined for various values of the governing dimensionless operating parameter and predictions are given as to the area in front of the hood from which the air can be sampled. The effect of the injection of fluid on the centre-line velocity of the flow is examined and a comparison of the results with the available experimental data is given. © 1992.
Abstract:
This paper studies the dynamical response of a rotary drilling system with a drag bit, using a lumped parameter model that takes into consideration the axial and torsional vibration modes of the bit. These vibrations are coupled through a bit-rock interaction law. At the bit-rock interface, the cutting process introduces a state-dependent delay, while the frictional process is responsible for discontinuous right-hand sides in the equations governing the motion of the bit. This complex system is characterized by fast axial dynamics compared with the slow torsional dynamics. A dimensionless formulation exhibits a large parameter in the axial equation, enabling a two-time-scales analysis that uses a combination of averaging methods and a singular perturbation approach. An approximate model of the decoupled axial dynamics permits us to derive a pseudoanalytical expression of the solution of the axial equation. Its averaged behavior influences the slow torsional dynamics by generating an apparent velocity-weakening friction law that has been proposed empirically in earlier work. The analytical expression of the solution of the axial dynamics is used to derive an approximate analytical expression of the velocity-weakening friction law related to the physical parameters of the system. This expression can be used to provide recommendations on the operating parameters and the drillstring or bit design in order to reduce the amplitude of the torsional vibrations. Moreover, it is an appropriate candidate model to replace empirical friction laws encountered in torsional models used for control. © 2009 Society for Industrial and Applied Mathematics.
Abstract:
A multivariate, robust, rational interpolation method for propagating uncertainties in several dimensions is presented. The algorithm for selecting numerator and denominator polynomial orders is based on recent work that uses a singular value decomposition approach. In this paper we extend this algorithm to higher dimensions and demonstrate its efficacy, in terms of convergence and accuracy, both as a method for response surface generation and as an interpolation method. To obtain stable approximants for continuous functions, we use an L2 error norm indicator to rank optimal numerator and denominator solutions. For discontinuous functions, a second criterion setting an upper limit on the approximant value is employed. Analytical examples demonstrate that, for the same stencil, rational methods can yield more rapid convergence than pseudospectral or collocation approaches for certain problems. © 2012 AIAA.
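The SVD-based selection of rational-approximant coefficients can be sketched in one dimension (the paper's multivariate order-selection and L2 ranking are not reproduced here): the linearized conditions p(x_i) − f_i·q(x_i) = 0 form a homogeneous system whose near-nullspace is the right singular vector of the smallest singular value.

```python
import numpy as np

def fit_rational(x, f, m, n):
    """Least-squares rational fit p(x)/q(x), deg p = m, deg q = n,
    from the linearized conditions p(x_i) - f_i * q(x_i) = 0."""
    P = np.vander(x, m + 1, increasing=True)      # numerator basis
    Q = np.vander(x, n + 1, increasing=True)      # denominator basis
    M = np.hstack([P, -f[:, None] * Q])
    _, _, Vt = np.linalg.svd(M)
    coeffs = Vt[-1]                               # near-nullspace vector
    return coeffs[: m + 1], coeffs[m + 1:]

def eval_rational(p, q, x):
    # Coefficients are stored lowest-degree first; polyval wants highest first.
    return np.polyval(p[::-1], x) / np.polyval(q[::-1], x)

# f(x) = 1/(1 + x) is recovered exactly with orders (m, n) = (0, 1).
x = np.linspace(0.0, 2.0, 10)
f = 1.0 / (1.0 + x)
p, q = fit_rational(x, f, 0, 1)
approx = eval_rational(p, q, np.array([0.5]))
```

The overall sign of the singular vector is arbitrary, but it cancels in the ratio p/q.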
Abstract:
We examine theoretically the transient displacement flow and density stratification that develops within a ventilated box after two localized floor-level heat sources of unequal strengths are activated. The heat input is represented by two non-interacting turbulent axisymmetric plumes of constant buoyancy fluxes B1 and B2 > B1. The box connects to an unbounded quiescent external environment of uniform density via openings at the top and base. A theoretical model is developed to predict the time evolution of the dimensionless depths λj and mean buoyancies δj of the 'intermediate' (j = 1) and 'top' (j = 2) layers leading to steady state. The flow behaviour is classified in terms of a stratification parameter S, a dimensionless measure of the relative forcing strengths of the two buoyant layers that drive the flow. We find that dδ1/dτ ∝ 1/λ1 and dδ2/dτ ∝ 1/λ2, where τ is a dimensionless time. When S 1, the intermediate layer is shallow (small λ1), whereas the top layer is relatively deep (large λ2) and, in this limit, δ1 and δ2 evolve on two characteristically different time scales. This produces a time lag and gives rise to a 'thermal overshoot', during which δ1 exceeds its steady value and attains a maximum during the transients; a flow feature we refer to, in the context of a ventilated room, as 'localized overheating'. For a given source strength ratio ψ = B1/B2, we show that thermal overshoots are realized for dimensionless opening areas A < Aoh and are strongly dependent on the time history of the flow. We establish the region of {A, ψ} space where rapid development of δ1 results in δ1 > δ2, giving rise to a bulk overturning of the buoyant layers. Finally, some implications of these results, specifically to the ventilation of a room, are discussed. © Cambridge University Press 2013.
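The two-time-scale behaviour implied by dδj/dτ ∝ 1/λj can be illustrated with a toy relaxation model (this is not the paper's coupled model: the layer depths are held fixed and the forcing is normalised so both layers relax toward δ = 1):

```python
import numpy as np

lam1, lam2 = 0.1, 0.9      # shallow intermediate layer, deep top layer
dtau = 1e-3
tau = np.arange(0.0, 2.0, dtau)

d1 = np.empty_like(tau)
d2 = np.empty_like(tau)
d1[0] = d2[0] = 0.0
for i in range(1, tau.size):
    # Toy analogue of d(delta_j)/d(tau) proportional to 1/lambda_j:
    d1[i] = d1[i - 1] + dtau * (1.0 - d1[i - 1]) / lam1
    d2[i] = d2[i - 1] + dtau * (1.0 - d2[i - 1]) / lam2

# e-folding times scale with lambda_j, so the shallow layer adjusts
# roughly lam2/lam1 = 9 times faster, producing the time lag between
# the layers described above.
t1 = tau[np.searchsorted(d1, 1.0 - np.exp(-1.0))]
t2 = tau[np.searchsorted(d2, 1.0 - np.exp(-1.0))]
```

The overshoot and overturning phenomena require the full coupled evolution of λj and δj, which this sketch deliberately omits.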
Abstract:
Vibration and acoustic analysis at higher frequencies faces two challenges: computing the response without using an excessive number of degrees of freedom, and quantifying its uncertainty due to small spatial variations in geometry, material properties and boundary conditions. Efficient models make use of the observation that when the response of a decoupled vibro-acoustic subsystem is sufficiently sensitive to uncertainty in such spatial variations, the local statistics of its natural frequencies and mode shapes saturate to universal probability distributions. This holds irrespective of the causes that underlie these spatial variations and thus leads to a nonparametric description of uncertainty. This work deals with the identification of uncertain parameters in such models by using experimental data. One of the difficulties is that both experimental errors and modeling errors, due to the nonparametric uncertainty that is inherent to the model type, are present. This is tackled by employing a Bayesian inference strategy. The prior probability distribution of the uncertain parameters is constructed using the maximum entropy principle. The likelihood function that is subsequently computed takes the experimental information, the experimental errors and the modeling errors into account. The posterior probability distribution, which is computed with the Markov Chain Monte Carlo method, provides a full uncertainty quantification of the identified parameters, and indicates how well their uncertainty is reduced, with respect to the prior information, by the experimental data. © 2013 Taylor & Francis Group, London.
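A generic random-walk Metropolis sketch of the posterior computation (a single uncertain parameter, Gaussian likelihood, flat prior, synthetic data; not the paper's vibro-acoustic model or its maximum-entropy prior):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "measurements" of one uncertain parameter (true value 2.0)
# with known unit noise; with a flat prior the posterior is just
# proportional to the Gaussian likelihood.
data = rng.normal(2.0, 1.0, size=50)

def log_posterior(theta):
    return -0.5 * np.sum((data - theta) ** 2)

# Random-walk Metropolis sampler.
theta, lp = 0.0, log_posterior(0.0)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)          # symmetric proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:      # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[1000:])             # discard burn-in
```

The spread of the retained samples is the posterior uncertainty of the identified parameter; comparing it with the prior spread quantifies how much the experimental data reduce the uncertainty.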