813 results for Constraint based modelling


Relevance: 50.00%

Abstract:

In this text, we present two stereo-based head tracking techniques along with a fast 3D model acquisition system. The first tracking technique is a robust implementation of stereo-based head tracking designed for interactive environments with uncontrolled lighting. We integrate fast face detection and drift reduction algorithms with a gradient-based stereo rigid motion tracking technique. Our system can automatically segment and track a user's head under large rotation and illumination variations. Precision and usability of this approach are compared with previous tracking methods for cursor control and target selection in both desktop and interactive room environments. The second tracking technique is designed to improve the robustness of head pose tracking for fast movements. Our iterative hybrid tracker combines constraints from the ICP (Iterative Closest Point) algorithm and normal flow constraint. This new technique is more precise for small movements and noisy depth than ICP alone, and more robust for large movements than the normal flow constraint alone. We present experiments which test the accuracy of our approach on sequences of real and synthetic stereo images. The 3D model acquisition system we present quickly aligns intensity and depth images, and reconstructs a textured 3D mesh. 3D views are registered with shape alignment based on our iterative hybrid tracker. We reconstruct the 3D model using a new Cubic Ray Projection merging algorithm which takes advantage of a novel data structure: the linked voxel space. We present experiments to test the accuracy of our approach on 3D face modelling using real-time stereo images.
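
For illustration, the following minimal Python sketch shows one ICP-style rigid alignment step on 3-D point clouds (nearest-neighbour correspondences followed by an SVD/Kabsch update), the standard building block that the hybrid tracker combines with the normal flow constraint. It is a generic sketch, not the authors' implementation, and it omits the normal flow term, face detection and drift reduction.

```python
# Generic ICP-style alignment step (illustrative sketch, not the paper's tracker).
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One iteration: match each point in src (N x 3) to its nearest neighbour
    in dst (M x 3), then compute the least-squares rigid transform (R, t)."""
    tree = cKDTree(dst)
    _, idx = tree.query(src)                        # nearest-neighbour correspondences
    matched = dst[idx]

    src_mean, dst_mean = src.mean(axis=0), matched.mean(axis=0)
    H = (src - src_mean).T @ (matched - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)                     # Kabsch/SVD solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # fix reflections
    R = Vt.T @ D @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Usage: repeatedly apply icp_step, updating src = (R @ src.T).T + t,
# until the change in (R, t) falls below a tolerance.
```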

Relevance: 50.00%

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, different from the corresponding manual calibration strategy and resulting in lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
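
As a rough illustration of the dynamic constraint idea (not the identified models from the paper), the sketch below passes a commanded parameter trace through a generic second-order linear system so that the value fed to the emission and torque models lags the command rather than tracking it quasi-statically. The natural frequency and damping values are assumed placeholders.

```python
# Illustrative second-order "dynamic constraint": commanded -> achieved parameter.
import numpy as np

def second_order_response(u, dt, wn=2.0, zeta=0.9):
    """Simulate y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u with explicit Euler.
    u is the commanded trace sampled every dt seconds; wn and zeta are assumed."""
    y = np.zeros(len(u))
    y[0], ydot = u[0], 0.0
    for k in range(1, len(u)):
        yddot = wn**2 * (u[k-1] - y[k-1]) - 2.0 * zeta * wn * ydot
        ydot += dt * yddot
        y[k] = y[k-1] + dt * ydot
    return y

# Usage: achieved = second_order_response(commanded_parameter, dt=0.1)
# 'achieved' would then feed the transient emission and torque models.
```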

Relevance: 40.00%

Abstract:

The objective of this investigation was to examine in a systematic manner the influence of plasma protein binding on in vivo pharmacodynamics. Comparative pharmacokinetic-pharmacodynamic studies with four beta blockers were performed in conscious rats, using heart rate under isoprenaline-induced tachycardia as a pharmacodynamic endpoint. A recently proposed mechanism-based agonist-antagonist interaction model was used to obtain in vivo estimates of receptor affinities (K_B,vivo). These values were compared with in vitro affinities (K_B,vitro) on the basis of both total and free drug concentrations. For the total drug concentrations, the K_B,vivo estimates were 26, 13, 6.5 and 0.89 nM for S(-)-atenolol, S(-)-propranolol, S(-)-metoprolol and timolol. The K_B,vivo estimates on the basis of the free concentrations were 25, 2.0, 5.2 and 0.56 nM, respectively. The K_B,vivo-K_B,vitro correlation for total drug concentrations clearly deviated from the line of identity, especially for the most highly bound drug S(-)-propranolol (ratio K_B,vivo/K_B,vitro ~ 6.8). For the free drug, the correlation approximated the line of identity. Using this model, for beta-blockers the free plasma concentration appears to be the best predictor of in vivo pharmacodynamics. (C) 2008 Wiley-Liss, Inc. and the American Pharmacists Association. J Pharm Sci 98:3816-3828, 2009.
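
A small worked check of the reported numbers: if the free-concentration estimate equals the total-concentration estimate scaled by the unbound fraction, the two sets of K_B,vivo values imply an approximate unbound fraction for each drug. This is illustrative arithmetic on the abstract's figures, not part of the original analysis.

```python
# Implied unbound fraction fu = K_B(free) / K_B(total), using the reported estimates (nM).
kb_total = {"S(-)-atenolol": 26, "S(-)-propranolol": 13,
            "S(-)-metoprolol": 6.5, "timolol": 0.89}
kb_free  = {"S(-)-atenolol": 25, "S(-)-propranolol": 2.0,
            "S(-)-metoprolol": 5.2, "timolol": 0.56}

for drug in kb_total:
    fu = kb_free[drug] / kb_total[drug]
    print(f"{drug}: implied unbound fraction ~ {fu:.2f}")

# S(-)-propranolol comes out lowest (~0.15), consistent with it being the most
# highly protein-bound drug in the set.
```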

Relevance: 40.00%

Abstract:

This paper presents an agent-based approach to modelling individual driver behaviour under the influence of real-time traffic information. The driver behaviour models developed in this study are based on a behavioural survey of drivers which was conducted on a congested commuting corridor in Brisbane, Australia. Commuters' responses to travel information were analysed and a number of discrete choice models were developed to determine the factors influencing drivers' behaviour and their propensity to change route and adjust travel patterns. Based on the results obtained from the behavioural survey, the agent behaviour parameters which define driver characteristics, knowledge and preferences were identified and their values determined. A case study implementing a simple agent-based route choice decision model within a microscopic traffic simulation tool is also presented. Driver-vehicle units (DVUs) were modelled as autonomous software components that can each be assigned a set of goals to achieve and a database of knowledge comprising certain beliefs, intentions and preferences concerning the driving task. Each DVU provided route choice decision-making capabilities, based on perception of its environment, that were similar to the described intentions of the driver it represented. The case study clearly demonstrated the feasibility of the approach and the potential to develop more complex driver behavioural dynamics based on the belief-desire-intention agent architecture. (C) 2002 Elsevier Science Ltd. All rights reserved.
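
A minimal Python analogue of the driver-vehicle unit concept is sketched below. The attribute names, threshold and switching rule are assumptions for illustration; they are not the discrete choice models estimated from the Brisbane survey, nor the simulation tool used in the case study.

```python
# Toy driver-vehicle unit (DVU): beliefs about route travel times plus a simple
# information-triggered route-switching rule. All names and values are assumed.
from dataclasses import dataclass, field

@dataclass
class DriverVehicleUnit:
    current_route: str
    switch_threshold: float = 0.15                 # min. relative saving to change route
    beliefs: dict = field(default_factory=dict)    # route -> believed travel time (min)

    def update_beliefs(self, broadcast_times):
        """Merge real-time traffic information into the agent's beliefs."""
        self.beliefs.update(broadcast_times)

    def choose_route(self):
        """Switch only if an alternative is believed to be sufficiently faster."""
        current = self.beliefs.get(self.current_route, float("inf"))
        best_route, best_time = min(self.beliefs.items(), key=lambda kv: kv[1])
        if best_time < current * (1.0 - self.switch_threshold):
            self.current_route = best_route
        return self.current_route

# Usage:
dvu = DriverVehicleUnit(current_route="A", beliefs={"A": 30.0, "B": 32.0})
dvu.update_beliefs({"B": 22.0})      # real-time information: route B now faster
print(dvu.choose_route())            # -> "B"
```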

Relevance: 40.00%

Abstract:

A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
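
The sketch below illustrates the EM idea for a two-component Weibull mixture in a deliberately simplified setting (complete, uncensored data and no random hospital effects), so it is not the estimator developed in the paper.

```python
# Simplified EM for a two-component Weibull mixture (illustrative sketch only).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weighted_weibull_fit(t, w, init=(1.0, 1.0)):
    """Weighted Weibull MLE: maximise sum_i w_i * log f(t_i; shape, scale)."""
    def nll(params):
        shape, scale = np.exp(params)          # log scale keeps parameters positive
        return -np.sum(w * weibull_min.logpdf(t, shape, scale=scale))
    res = minimize(nll, np.log(init), method="Nelder-Mead")
    return np.exp(res.x)

def em_weibull_mixture(t, n_iter=50):
    pi = 0.5
    comp = [(1.0, np.median(t) * 0.5), (1.0, np.median(t) * 2.0)]  # (shape, scale)
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        f1 = weibull_min.pdf(t, comp[0][0], scale=comp[0][1])
        f2 = weibull_min.pdf(t, comp[1][0], scale=comp[1][1])
        r = pi * f1 / (pi * f1 + (1 - pi) * f2)
        # M-step: update mixing proportion and component parameters
        pi = r.mean()
        comp[0] = tuple(weighted_weibull_fit(t, r, init=comp[0]))
        comp[1] = tuple(weighted_weibull_fit(t, 1 - r, init=comp[1]))
    return pi, comp

# Usage: pi_hat, components = em_weibull_mixture(np.asarray(survival_times))
```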

Relevance: 40.00%

Abstract:

It was previously published by the authors that granules can coalesce through either Type I coalescence (granules coalesce by viscous dissipation in the surface liquid layer before their surfaces touch) or Type II coalescence (granules are slowed to a halt during rebound, after their surfaces have made contact) (AIChE J. 46 (3) (2000) 529). Based on this coalescence mechanism, a new coalescence kernel for population balance modelling of granule growth is presented. The kernel is constant such that only collisions satisfying the conditions for one of the two coalescence types are successful: one constant rate is assigned to each type of coalescence, and a rate of zero to the case of rebound. As the conditions for Types I and II coalescence depend on granule and binder properties, the coalescence kernel is physically based. Simulation results for a variety of binder and granule materials show good agreement with experimental data. (C) 2002 Elsevier Science Ltd. All rights reserved.
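
A minimal sketch of such a kernel is given below; the Type I and Type II predicates are placeholders for the property-dependent coalescence criteria from the cited work, and k1 and k2 are the two constant rates assigned to successful collisions.

```python
# Illustrative cut-off coalescence kernel for population balance modelling.
def coalescence_kernel(granule_i, granule_j, k1, k2, is_type_I, is_type_II):
    """Return the coalescence rate constant for a pair of colliding granules.
    is_type_I / is_type_II are placeholder predicates encoding the
    granule- and binder-property-dependent coalescence conditions."""
    if is_type_I(granule_i, granule_j):
        return k1      # surfaces merge through the liquid layer before contact
    if is_type_II(granule_i, granule_j):
        return k2      # granules halted during rebound after surface contact
    return 0.0         # rebound: no coalescence
```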

Relevance: 40.00%

Abstract:

Molecular dynamics simulations were employed to analyze the mechanical properties of polymer-based nanocomposites with varying nanofiber network parameters. The study was focused on nanofiber aspect ratio, concentration and initial orientation. The reinforcing phase affects the behavior of the polymeric nanocomposite. Simulations have shown that the fiber concentration has a significant effect on the properties, with higher loadings resulting in higher stress levels and higher stiffness, matching the general behavior from experimental knowledge in this field. The results also indicate that, within the studied range, the observed effect of the aspect ratio and initial orientation is smaller than that of the concentration, and that these two parameters are interrelated.

Relevance: 40.00%

Abstract:

Due to usage conditions, hazardous environments or intentional causes, physical and virtual systems are subject to faults in their components, which may affect their overall behaviour. In a ‘black-box’ agent modelled by a set of propositional logic rules, in which just a subset of components is externally visible, such faults may only be recognised by examining some output function of the agent. A (fault-free) model of the agent’s system provides the expected output given some input. If the real output differs from that predicted output, then the system is faulty. However, some faults may only become apparent in the system output when appropriate inputs are given. A number of problems regarding both testing and diagnosis thus arise, such as testing a fault, testing the whole system, finding possible faults and differentiating them to locate the correct one. The corresponding optimisation problems of finding solutions that require minimum resources are also very relevant in industry, as is minimal diagnosis. In this dissertation we use a well-established set of benchmark circuits to address such diagnosis-related problems and propose and develop models with different logics that we formalise and generalise as much as possible. We also prove that all techniques generalise to agents and to multiple faults. The developed multi-valued logics extend the usual Boolean logic (suitable for fault-free models) by encoding values with some dependency (usually on faults). Such logics thus allow modelling an arbitrary number of diagnostic theories. Each problem is subsequently solved with CLP solvers that we implement and discuss, together with a new efficient search technique that we present. We compare our results with other approaches such as SAT (which require substantial duplication of circuits), showing the effectiveness of constraints over multi-valued logics, and also the adequacy of a general set constraint solver (with special inferences over set functions such as cardinality) on other problems. In addition, for an optimisation problem, we integrate local search with a constructive approach (branch-and-bound) using a variety of logics to improve an existing efficient tool based on SAT and ILP.
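
As a toy analogue of the diagnosis problems described above (not the dissertation's constraint models or logics), the sketch below attaches a health variable to each gate of a two-gate circuit and enumerates the fault assignments that can explain an observed output.

```python
# Toy model-based diagnosis by enumeration: each gate has a health variable;
# an unhealthy gate's output is unconstrained (here, stuck at either value).
from itertools import product

def circuit(a, b, c, healthy, stuck):
    """Two-gate circuit: x = AND(a, b); out = OR(x, c).
    An unhealthy gate outputs the corresponding 'stuck' value instead."""
    x   = (a and b) if healthy[0] else stuck[0]
    out = (x or c)  if healthy[1] else stuck[1]
    return out

def diagnoses(inputs, observed):
    """Return candidate fault sets that can explain the observation,
    smallest candidates first."""
    found = []
    for healthy in product([True, False], repeat=2):
        # an unhealthy gate may take either output value
        if any(circuit(*inputs, healthy, stuck) == observed
               for stuck in product([False, True], repeat=2)):
            faulty = [i for i, h in enumerate(healthy) if not h]
            if faulty:
                found.append(faulty)
    return sorted(found, key=len)

# Example: inputs a=1, b=1, c=0 should give out=1; observing out=0 implicates
# at least one of the two gates.
print(diagnoses((True, True, False), observed=False))
```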

Relevance: 40.00%

Abstract:

Architectural (bad) smells are design decisions found in software architectures that degrade the ability of systems to evolve. This paper presents an approach to verify that a software architecture is smell-free using the Archery architectural description language. The language provides a core for modelling software architectures and an extension for specifying constraints. The approach consists of precisely specifying architectural smells as constraints, and then verifying that software architectures do not satisfy any of them. The constraint language is based on a propositional modal logic with recursion that includes: a converse operator for relations among architectural concepts, graded modalities for describing the cardinality of such relations, and nominals referencing architectural elements. Four architectural smells illustrate the approach.
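
The following is only a loose Python analogue of checking a cardinality-style constraint over an architecture, in the spirit of the graded modalities mentioned above; it is not Archery, nor its modal constraint language.

```python
# Toy cardinality check over an architecture description: flag components
# attached to more connectors than a given bound (an assumed example "smell").
def cardinality_smells(attachments, max_links=3):
    """attachments: dict mapping component name -> list of connector names."""
    return {c: links for c, links in attachments.items() if len(links) > max_links}

architecture = {
    "OrderService":   ["http", "queue", "db", "cache", "log"],
    "BillingService": ["http", "db"],
}
print(cardinality_smells(architecture))   # flags OrderService
```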

Relevance: 40.00%

Abstract:

This paper reviews three different approaches to modelling the cost-effectiveness of schistosomiasis control. Although these approaches vary in their assessment of costs, the major focus of the paper is on the evaluation of effectiveness. The first model presented is a static economic model which assesses effectiveness in terms of the proportion of cases cured. This model is important in highlighting that the optimal choice of chemotherapy regime depends critically on the level of budget constraint, the unit costs of screening and treatment, the rates of compliance with screening and chemotherapy, and the prevalence of infection. The limitation of this approach is that it models the cost-effectiveness of only one cycle of treatment, and effectiveness reflects only the immediate impact of treatment. The second model presented is a prevalence-based dynamic model which links prevalence rates from one year to the next, and assesses effectiveness as the proportion of cases prevented. This model was important as it introduced the concept of measuring the long-term impact of control by using a transmission model which can assess reduction in infection through time, but it is limited to assessing the impact only on the prevalence of infection. The third approach presented is a theoretical framework which describes the dynamic relationships between infection and morbidity, and which assesses effectiveness in terms of case-years prevented of infection and morbidity. The use of this model in assessing the cost-effectiveness of age-targeted treatment in controlling Schistosoma mansoni is explored in detail, with respect to varying frequencies of treatment and the interaction between drug price and drug efficacy.
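
For a concrete feel for the prevalence-based dynamic approach, the sketch below uses a generic prevalence recursion with an annual treatment round and measures effectiveness as cases prevented relative to a no-treatment baseline. The dynamics and parameter values are illustrative assumptions, not the reviewed models.

```python
# Generic prevalence recursion with annual treatment (illustrative sketch only).
def prevalence_path(p0, years, transmission=0.3, recovery=0.1, cure_fraction=0.0):
    """Prevalence next year = this year's prevalence + new infections - recoveries,
    then reduced by the fraction of cases cured in the annual treatment round."""
    p, path = p0, [p0]
    for _ in range(years):
        p = p + transmission * p * (1 - p) - recovery * p   # untreated dynamics
        p *= (1 - cure_fraction)                            # annual treatment round
        p = min(max(p, 0.0), 1.0)
        path.append(p)
    return path

baseline = prevalence_path(0.4, 10)
treated  = prevalence_path(0.4, 10, cure_fraction=0.6)
cases_prevented = sum(b - t for b, t in zip(baseline, treated))   # per capita case-years
print(f"Case-years prevented per person over 10 years: {cases_prevented:.2f}")
```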