944 results for Model-based bootstrap


Relevance:

90.00%

Abstract:

There is increasing use of the discrete element method (DEM) to study cemented (e.g. concrete and rocks) and sintered particulate materials. The chief advantage of the DEM over continuum-based techniques is that it does not make assumptions about how cracking and fragmentation initiate and propagate, since a DEM system is naturally discontinuous. The ability of the DEM to produce a realistic representation of a cemented granular material depends largely on the implementation of an inter-particle bonded contact model. This paper presents a new bonded contact model based on Timoshenko beam theory, which accounts for the axial, shear and bending behaviour of the bond. The bond model was first verified by simulating both the bending and the dynamic response of a simply supported beam. The loading response of a concrete cylinder was then investigated and compared with the Eurocode equation prediction. The results show significant potential for the new model to produce satisfactory predictions for cementitious materials. A unique feature of this model is that it can also be used to accurately represent many deformable structures such as frames and shells, so that both particles and structures or deformable boundaries can be described within the same DEM framework.
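To make the bond kinematics concrete, the sketch below shows how a Timoshenko-beam bond of circular cross-section might convert relative displacement and rotation increments between two bonded particles into force and moment increments. It is a minimal illustration using the standard Timoshenko element stiffnesses; the function name, the incremental form and the use of the full cross-section as shear area are assumptions of this sketch, not the contact law implemented in the paper.

    import numpy as np

    def timoshenko_bond_increments(E, G, radius, length, du_axial, du_shear, dtheta_bend):
        # Minimal sketch (not the paper's implementation): incremental axial force,
        # shear force and bending moment carried by a bond of circular cross-section,
        # from relative displacement/rotation increments in the bond's local frame.
        A = np.pi * radius**2                      # cross-sectional area
        I = np.pi * radius**4 / 4.0                # second moment of area
        phi = 12.0 * E * I / (G * A * length**2)   # Timoshenko shear-deformation factor
                                                   # (shear area taken as A for simplicity)
        dF_axial = (E * A / length) * du_axial
        dF_shear = (12.0 * E * I / ((1.0 + phi) * length**3)) * du_shear
        dM_bend = ((4.0 + phi) * E * I / ((1.0 + phi) * length)) * dtheta_bend
        return dF_axial, dF_shear, dM_bend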

Relevance:

90.00%

Abstract:

BACKGROUND: Head and neck (H&N) cancers are a heterogeneous group of malignancies affecting various sites, with different prognoses. The aims of this study are to analyse survival for patients with H&N cancers in relation to tumour location, to compare survival between European countries, and to investigate whether survival improved over time.
METHODS: We analysed about 250,000 H&N cancer cases from 86 cancer registries (CRs). Relative survival (RS) was estimated by sex, age, country and stage. We described survival time trends over 1999-2007 using the period approach. Model-based estimates of relative excess risks (RERs) of death were also provided by country, after adjusting for sex, age and sub-site.
RESULTS: Five-year RS was poorest for hypopharynx (25%) and highest for larynx (59%). Outcome was significantly better in female than in male patients. In Europe, age-standardised 5-year survival remained stable from 1999-2001 to 2005-2007 for laryngeal cancer, while it increased for all the other H&N cancers. Five-year age-standardised RS was low in Eastern European countries (47% for larynx and 28% for all the other H&N cancers combined) and high in Ireland and the United Kingdom (UK) and in Northern Europe (62% and 46%, respectively). Adjustment for sub-site narrowed the differences between countries. Fifty-four percent of patients were diagnosed at an advanced stage (regional or metastatic). Five-year RS for localised cases ranged between 42% (hypopharynx) and 74% (larynx).
CONCLUSIONS: This study shows that survival improved during the study period. However, slightly more than half of patients had regional or metastatic disease at diagnosis. Early diagnosis and timely start of treatment are crucial to narrow the gap between European countries and to further improve H&N cancer outcomes.
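As background for the METHODS above, relative survival is the standard net-survival measure used in registry studies: the observed survival of the cancer cohort divided by the survival expected in a comparable group of the general population (matched on sex, age, country and calendar period). A minimal statement of the definition, not a formula quoted from this study:

    RS(t) = S_observed(t) / S_expected(t)

A 5-year RS of 59% for larynx, for example, means the cohort's observed 5-year survival was 59% of that expected in the matched general population.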

Relevance:

90.00%

Abstract:

BACKGROUND: The transtheoretical model has been successful in promoting health behavior change in general and clinical populations. However, there is little knowledge about the application of the transtheoretical model to explain physical activity behavior in individuals with non-cystic fibrosis bronchiectasis. The aim was to examine patterns of (1) physical activity and (2) mediators of behavior change (self-efficacy, decisional balance, and processes of change) across stages of change in individuals with non-cystic fibrosis bronchiectasis.

METHODS: Fifty-five subjects with non-cystic fibrosis bronchiectasis (mean age ± SD = 63 ± 10 y) had physical activity assessed over 7 d using an accelerometer. Each component of the transtheoretical model was assessed using validated questionnaires. Subjects were divided into groups depending on stage of change: Group 1 (pre-contemplation and contemplation; n = 10), Group 2 (preparation; n = 20), and Group 3 (action and maintenance; n = 25). Statistical analyses included one-way analysis of variance and Tukey-Kramer post hoc tests.

RESULTS: Physical activity variables were significantly (P < .05) higher in Group 3 (action and maintenance) than in Group 2 (preparation) and Group 1 (pre-contemplation and contemplation). For self-efficacy, there were no significant differences in mean scores between groups (P = .14). Decisional balance cons (barriers to being physically active) were significantly lower in Group 3 versus Group 2 (P = .032). For processes of change, substituting alternatives (replacing inactive options with active ones) was significantly higher in Group 3 versus Group 1 (P = .01), and enlisting social support (seeking out social support to increase and maintain physical activity) was significantly lower in Group 3 versus Group 2 (P = .038).

CONCLUSIONS: The pattern of physical activity across stages of change is consistent with the theoretical predictions of the transtheoretical model. Constructs of the transtheoretical model that appear to be important at different stages of change include decisional balance cons, substituting alternatives, and enlisting social support. This study provides support to explore transtheoretical model-based physical activity interventions in individuals with non-cystic fibrosis bronchiectasis.
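The group comparison described in METHODS (one-way analysis of variance followed by Tukey-Kramer post hoc tests across the three stage-of-change groups) can be sketched as below. The data are invented placeholders used purely to show the analysis pattern; they are not the study's measurements.

    import numpy as np
    from scipy.stats import f_oneway
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    # Hypothetical daily step counts for the three stage-of-change groups (n = 10, 20, 25).
    g1 = rng.normal(4000, 900, 10)   # pre-contemplation / contemplation
    g2 = rng.normal(5000, 900, 20)   # preparation
    g3 = rng.normal(7500, 900, 25)   # action / maintenance

    F, p = f_oneway(g1, g2, g3)      # one-way ANOVA across the three groups
    steps = np.concatenate([g1, g2, g3])
    group = ["G1"] * 10 + ["G2"] * 20 + ["G3"] * 25
    print(pairwise_tukeyhsd(steps, group))   # Tukey(-Kramer) pairwise comparisons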

Relevance:

90.00%

Abstract:

Next-generation ATM systems cannot be implemented in a technological vacuum. The further ahead we look, the greater the likely impact of societal factors on such changes and on how they are prioritised and promoted. The equitable sustainability of travel behaviour is rising up the political agenda in Europe in an unprecedented manner. This paper examines pilot and controller attitudes towards Continuous Descent Approaches (CDAs), with the aim of promoting a better understanding of acceptance of change in ATM. The focus is on the psychosocial context and the relationships between perceived societal and system benefits. Behavioural change appeared more strongly correlated with such benefit perceptions in the case of the pilots. For the first time in the study of ATM implementation and acceptance of change, this paper incorporates the Seven Stages of Change model, based on the constructs of the Theory of Planned Behaviour. It employs a principal components (factor) analysis and further explores the intercorrelations of benefit perceptions, known in psychology as the 'halo effect'. It appears that disbenefit perceptions may break down this effect. For implementers of change, this evidence suggests an approach of reinforcing the dominant benefit(s) perceived, for sub-groups within which a halo effect is evident. In the absence of such an effect, perceived disbenefits, such as those concerning workload and capacity, should be offset against specific perceived benefits of the change, as far as possible. This methodology could equally be applied to other stakeholders, from strategic planners to the public. The set of three case studies will be extended beyond CDA trials. A set of concise guidelines will be published, with a strong focus on practical advice, in addition to continued work towards a better understanding of the expected, increasing psychosocial contributions to successful and unsuccessful efforts at ATM innovation and change.
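A halo effect of the kind described, in which ratings of distinct benefits move together, shows up as a broadly positive item correlation matrix with a dominant first principal component. The sketch below illustrates one way such survey data could be examined; the item responses are invented and the code is not the analysis pipeline used in the paper.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(42)
    # Hypothetical benefit-perception ratings: rows = respondents, columns = benefit items.
    halo = rng.normal(0.0, 1.0, (120, 1))                # shared 'overall favourability'
    items = 3.0 + halo + rng.normal(0.0, 0.5, (120, 8))  # eight benefit items riding on it

    corr = np.corrcoef(items, rowvar=False)       # broadly positive intercorrelations = halo signature
    pca = PCA().fit(items)
    first_pc_share = pca.explained_variance_ratio_[0]    # variance carried by the dominant component
    loadings = pca.components_[0]                        # item loadings on the 'halo' factor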

Relevance:

90.00%

Abstract:

This paper presents a project consisting of the development of an Intelligent Tutoring System for training and support in the development of electrical installation projects, to be used by electrical engineers, technicians and students. One of the major goals of this project is to devise a teaching model based on Intelligent Tutoring techniques that considers not only academic knowledge but also other, more empirical types of knowledge, and is able to successfully support training in electrical installation design.

Relevance:

90.00%

Abstract:

It is well accepted that structural studies with model membranes are of considerable value in understanding the structure of biological membranes. Many studies with models of pure phospholipids have been done, but the effects of divalent cations and protein on these models would make such studies more applicable to the intact membrane. The present study, performed with the above in view, is a structural analysis of divalent ion-cardiolipin complexes using the technique of X-ray diffraction. Cardiolipin, precipitated from dilute solution by the divalent ions calcium, magnesium and barium, contains little water, and the structure formed is similar to that of pure cardiolipin with low water content. The calcium-cardiolipin complex forms a pure hexagonal type II phase that exists from 4 to 40 °C. The molar ratio of calcium to cardiolipin in the complex is 1:1. Cardiolipin precipitated with magnesium and barium forms two co-existing phases, lamellar and hexagonal, the relative quantity of the two phases being dependent on temperature. The hexagonal type II phase, consisting of water-filled channels and formed by adding calcium to cardiolipin, may confer a remarkable permeability property on the intact membrane. Pure cardiolipin and insulin at pH 3.0 and 4.0 precipitate but form no organised structure. Lecithin/cardiolipin and insulin precipitated at pH 3.0 give a pure lamellar phase. As the lecithin/cardiolipin molar ratio changes from 93/7 to 50/50, (a) the repeat distance of the lamellar phase changes from 72.8 Å to 68.2 Å; (b) the amount of protein bound increases in such a way that the cardiolipin/insulin molar ratio in the complex reaches a maximum constant value at a lecithin/cardiolipin molar ratio of 70/30. A structural model based on these data shows that the molecular arrangement of lipid and protein is a lipid bilayer coated with protein molecules. The lipid-protein interaction is chiefly electrostatic, and little, if any, hydrophobic bonding occurs in this particular system. The proposed model is therefore essentially the same as the Davson-Danielli model of the biological membrane.

Relevance:

90.00%

Abstract:

Affiliation: Département de Biochimie, Université de Montréal

Relevance:

90.00%

Abstract:

In transportation studies, route choice models describe how a traveller selects a path from an origin to a destination. More precisely, the problem is to find, in a network composed of arcs and nodes, the sequence of arcs connecting two nodes according to given criteria. In the present work we apply dynamic programming to represent the choice process, treating the choice of a path as a sequence of arc choices. In addition, we use approximation techniques from dynamic programming to represent imperfect knowledge of the network state, in particular for arcs far from the current location. More precisely, each time a traveller reaches an intersection, he or she considers the utility of a limited number of future arcs, and the remainder of the path to the destination is approximated by an estimate. The route choice model is implemented within a discrete-event traffic simulation model. The resulting model is tested on a model of a real road network in order to study its performance.
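A minimal sketch of the limited-lookahead idea described above: the next arc is chosen by evaluating explicit arc costs over a short horizon and replacing everything beyond the horizon with a heuristic estimate of the remaining cost to the destination. The function names, the cost-minimisation framing and the deterministic choice are illustrative assumptions; the work itself formulates this as a route choice model embedded in a traffic simulator.

    def choose_next_arc(node, graph, dest, horizon, estimate):
        # Rough approximate-dynamic-programming sketch (not the author's code).
        # graph[n] is a list of (next_node, arc_cost) pairs; 'estimate(n, dest)' is any
        # heuristic for the remaining cost, used in place of exact values beyond the horizon.
        def cost_to_go(n, depth):
            if n == dest:
                return 0.0
            if depth == 0 or not graph[n]:
                return estimate(n, dest)            # value approximation beyond the horizon
            return min(c + cost_to_go(m, depth - 1) for m, c in graph[n])

        # pick the outgoing arc with the smallest lookahead cost plus estimated cost-to-go
        return min(graph[node], key=lambda arc: arc[1] + cost_to_go(arc[0], horizon - 1))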

Relevance:

90.00%

Abstract:

This thesis comprises three articles, one of which is published and two of which are in preparation. The central topic of the thesis is the treatment of representative outliers in two important aspects of surveys: small area estimation and imputation in the presence of item non-response. Concerning small areas, robust estimators under unit-level models have been studied. Sinha & Rao (2009) propose a robust version of the empirical best linear unbiased predictor (EBLUP) of the small area means. Their robust estimator is of the 'plug-in' type and, in light of the work of Chambers (1986), it can be biased in certain situations. Chambers et al. (2014) propose a bias-corrected estimator. In addition, estimators of the mean squared error have been associated with these point estimators. Sinha & Rao (2009) propose a parametric bootstrap procedure for estimating the mean squared error, while analytical methods are proposed in Chambers et al. (2014). However, the theoretical validity of these methods has not been established and their empirical performance is not fully satisfactory. Here, we examine two new approaches for obtaining a robust version of the empirical best linear unbiased predictor: the first builds on the work of Chambers (1986), and the second is based on the concept of conditional bias as a measure of the influence of a population unit. These two classes of robust small area estimators also include a bias-correction term. However, both use the information available in all domains, unlike the estimator of Chambers et al. (2014), which uses only the information available in the domain of interest. In some situations a non-negligible bias is possible for the Sinha & Rao (2009) estimator, whereas the proposed estimators exhibit a small bias for an appropriate choice of the influence function and of the robustness tuning constant. Monte Carlo simulations are carried out, and the proposed estimators are compared with those of Sinha & Rao (2009) and Chambers et al. (2014). The results show that the estimators of Sinha & Rao (2009) and Chambers et al. (2014) can have a substantial bias, whereas the proposed estimators perform better in terms of bias and mean squared error. In addition, we propose a new bootstrap procedure for estimating the mean squared error of robust small area estimators. Unlike existing procedures, we formally establish the asymptotic validity of the proposed bootstrap method. Moreover, the proposed method is semi-parametric, that is, it does not rely on an assumption about the distributions of the errors or of the random effects; it is therefore particularly attractive and more widely applicable. We examine the performance of our bootstrap procedure through Monte Carlo simulations. The results show that our procedure performs well and, in particular, outperforms all the competitors studied. An application of the proposed method is illustrated with the real data of Battese, Harter & Fuller (1988), which contain outliers. Regarding imputation in the presence of item non-response, several forms of single imputation have been studied.
Deterministic regression imputation within classes, which includes ratio imputation and mean imputation, is often used in surveys. These imputation methods can lead to biased imputed estimators if the imputation model or the non-response model is misspecified. Doubly robust estimators have been developed in recent years; such estimators are unbiased if at least one of the imputation and non-response models is correctly specified. However, in the presence of outliers, doubly robust imputed estimators can be very unstable. Using the concept of conditional bias, we propose an outlier-robust version of the doubly robust estimator. The results of simulation studies show that the proposed estimator performs well for an appropriate choice of the robustness tuning constant.
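For readers unfamiliar with bootstrap MSE estimation for small areas, the sketch below illustrates the general idea: fit a unit-level model, regenerate data by resampling the estimated area effects and unit errors (rather than drawing them from an assumed distribution), and compare the re-estimates with the bootstrap 'true' area means. It is a rough, generic illustration under a simple nested-error model; the function, its crude residual decomposition and its targets are assumptions of this sketch, not the semi-parametric procedure proposed and proved valid in the thesis.

    import numpy as np

    def bootstrap_mse(y, X, areas, estimator, B=200, seed=0):
        # Generic residual-resampling bootstrap for the MSE of small-area mean estimators
        # under a unit-level model (illustration only).  'estimator(y, X, areas)' must
        # return one estimate per area, ordered by sorted area identifier.
        rng = np.random.default_rng(seed)
        ids = np.unique(areas)
        idx = np.searchsorted(ids, areas)

        beta = np.linalg.lstsq(X, y, rcond=None)[0]                       # fixed effects
        resid = y - X @ beta
        u = np.array([resid[idx == k].mean() for k in range(len(ids))])   # crude area effects
        e = resid - u[idx]                                                # unit-level errors
        Xbar = np.vstack([X[idx == k].mean(axis=0) for k in range(len(ids))])

        err = np.empty((B, len(ids)))
        for b in range(B):
            u_b = rng.choice(u, len(ids), replace=True)                   # resample area effects
            e_b = rng.choice(e, len(y), replace=True)                     # resample unit errors
            y_b = X @ beta + u_b[idx] + e_b
            theta_b = Xbar @ beta + u_b                                   # bootstrap 'true' area means
            err[b] = estimator(y_b, X, areas) - theta_b
        return (err ** 2).mean(axis=0)                                    # MSE estimate per area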

Relevance:

90.00%

Abstract:

Software systems are progressively being deployed in many facets of human life, and the failure of such systems can have a wide range of impacts on their users. The fundamental aspect that underpins a software system is a focus on quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time, and it is used to measure quality objectively. Evaluating the reliability of a computing system involves computing both hardware and software reliability. Most earlier work focused on software reliability with no consideration of the hardware parts, or vice versa. However, a complete estimation of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves collecting failure data for hardware and software components and building a model on these data to predict reliability. To develop such a model, the focus is on systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on modelling and measuring the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and an integrated model for predicting the reliability of a computing system. The developed model has been compared with existing models and its usefulness is discussed.
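One simple way to combine the two elements, shown below, is to treat hardware and software as a series system: the mission succeeds only if both survive. The hardware part uses a constant-failure-rate (exponential) model and the software part a Goel-Okumoto reliability-growth model; these are common textbook choices used here for illustration, not necessarily the models selected in the work.

    import math

    def hw_reliability(x, lam):
        # Exponential (constant failure rate) hardware reliability over a mission of length x.
        return math.exp(-lam * x)

    def sw_reliability(x, t, a, b):
        # Goel-Okumoto reliability growth: mean value function m(t) = a * (1 - exp(-b t));
        # reliability over a mission of length x after t hours of prior exposure.
        m = lambda s: a * (1.0 - math.exp(-b * s))
        return math.exp(-(m(t + x) - m(t)))

    def system_reliability(x, t, lam, a, b):
        # Series combination: the computing system survives only if hardware and software both do.
        return hw_reliability(x, lam) * sw_reliability(x, t, a, b)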

Relevance:

90.00%

Abstract:

Non-resonant light interacting with diatomics via the polarizability anisotropy couples different rotational states and may lead to strong hybridization of the motion. The modification of shape resonances and low-energy scattering states due to this interaction can be fully captured by an asymptotic model based on the long-range properties of the scattering (Crubellier et al 2015 New J. Phys. 17 045020). Remarkably, the properties of the field-dressed shape resonances in this asymptotic multi-channel description are found to be approximately linear in the field intensity up to fairly large intensities. This suggests that a perturbative single-channel approach is sufficient to study the control of such resonances by the non-resonant field. The multi-channel results furthermore indicate that the dependence on field intensity presents, at least approximately, universal characteristics. Here we combine the nodal line technique for solving the asymptotic Schrödinger equation with perturbation theory. Comparing our single-channel results to those obtained with the full interaction potential, we find that nodal lines depending only on the field-free scattering length of the diatom yield an approximate but universal description of the field-dressed molecule, confirming the universal behaviour.

Relevance:

90.00%

Abstract:

This thesis describes the development of a model-based vision system that exploits hierarchies of both object structure and object scale. The focus of the research is to use these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to recognize parameterized instances of non-rigid model objects contained in a large knowledge base despite the presence of noise and occlusion. Robustness is achieved by developing a system that can recognize viewed objects that are scaled or mirror-image instances of the known models, or that contain component sub-parts with different relative scaling, rotation, or translation than in the models. The approach taken in this thesis is to develop an object shape representation that incorporates a component sub-part hierarchy (to allow efficient and correct indexing into an automatically generated model library, as well as relative parameterization among sub-parts) and a scale hierarchy (to allow a general-to-specific recognition procedure). After an analysis of the issues and inherent trade-offs in the recognition process, a system is implemented using a representation based on significant contour curvature changes and a recognition engine based on geometric constraints on feature properties. Examples of the system's performance are given, followed by an analysis of the results. In conclusion, the system's benefits and limitations are presented.

Relevance:

90.00%

Abstract:

In model-based vision, there are a huge number of possible ways to match model features to image features. In addition to model shape constraints, there are important match-independent constraints that can efficiently reduce the search without the combinatorics of matching. I demonstrate two specific modules in the context of a complete recognition system, Reggie. The first is a region-based grouping mechanism to find groups of image features that are likely to come from a single object. The second is an interpretive matching scheme to make explicit hypotheses about occlusion and instabilities in the image features.

Relevance:

90.00%

Abstract:

This paper presents a control strategy for blood glucose (BG) level regulation in type 1 diabetic patients. To design the controller, a model-based predictive control scheme has been applied to a newly developed diabetic patient model. The controller is provided with a feedforward loop to improve meal compensation, a gain-scheduling scheme to account for different BG levels, and an asymmetric cost function to reduce hypoglycemic risk. A simulation environment that has been approved for testing artificial pancreas control algorithms has been used to test the controller. The simulation results show good controller performance in fasting conditions and in meal disturbance rejection, and robustness against model-patient mismatch and errors in meal estimation.
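The asymmetric cost function mentioned above can be illustrated with a simple stage cost in which predicted excursions below the target (toward hypoglycemia) are weighted much more heavily than excursions above it, so the optimizer prefers conservatively high glucose over risky lows. The target value and weights below are placeholders for illustration, not the tuning used in the paper.

    import numpy as np

    def asymmetric_stage_cost(bg_predicted, target=110.0, w_hypo=10.0, w_hyper=1.0):
        # Penalize predicted blood-glucose deviations from the target asymmetrically:
        # deviations below target (hypoglycemia side) get weight w_hypo on the squared error,
        # deviations above target (hyperglycemia side) get weight w_hyper.
        err = np.asarray(bg_predicted, dtype=float) - target
        weights = np.where(err < 0.0, w_hypo, w_hyper)
        return float(np.sum(weights * err ** 2))

    # Example: a 30 mg/dL low excursion is penalized 10x more than an equal high excursion.
    low_cost = asymmetric_stage_cost([80.0])     # 10 * 30^2 = 9000
    high_cost = asymmetric_stage_cost([140.0])   #  1 * 30^2 = 900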

Relevance:

90.00%

Abstract:

In this paper, we employ techniques from artificial intelligence, such as reinforcement learning and agent-based modeling, as building blocks of a computational model for an economy based on conventions. First we model the interaction among firms in the private sector. These firms operate in an information environment based on conventions, meaning that a firm is likely to behave like its neighbors if it observes that their actions lead to a good payoff. In addition, we propose the use of reinforcement learning as a computational model for the role of the government in the economy, as the agent that determines fiscal policy and whose objective is to maximize the growth of the economy. We present the implementation of a simulator of the proposed model based on SWARM, which employs the SARSA(λ) algorithm combined with a multilayer perceptron as the function approximator for the action-value function.
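For reference, the learning rule named above is summarized in the sketch below in its simpler tabular form with accumulating eligibility traces; the paper itself approximates the action-value function with a multilayer perceptron rather than a table, and the step-size, discount and trace-decay values here are placeholders.

    import numpy as np

    def sarsa_lambda_update(Q, E, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95, lam=0.8):
        # One tabular SARSA(lambda) step.  Q and E are (n_states, n_actions) arrays:
        # Q holds action values, E holds accumulating eligibility traces.
        delta = r + gamma * Q[s_next, a_next] - Q[s, a]   # on-policy TD error
        E[s, a] += 1.0                                    # accumulate trace for the visited pair
        Q += alpha * delta * E                            # credit all recently visited pairs
        E *= gamma * lam                                  # decay traces toward zero
        return Q, E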