822 results for Theoretical models


Relevance: 40.00%

Abstract:

PLANNER is a formalism for proving theorems and manipulating models in a robot. The formalism is built out of a number of problem-solving primitives together with a hierarchical multiprocess backtrack control structure. Statements can be asserted and perhaps later withdrawn as the state of the world changes. Under the backtrack control structure, the hierarchy of activations of functions previously executed is maintained, so that it is possible to revert to any previous state. Thus programs can easily manipulate elaborate hypothetical, tentative states. In addition, PLANNER uses multiprocessing so that there can be multiple loci of change in state. Goals can be established and dismissed when they are satisfied. The deductive system of PLANNER is subordinate to the hierarchical control structure in order to maintain the desired degree of control. The use of a general-purpose matching language as the basis of the deductive system increases the flexibility of the system. Instead of explicitly naming procedures in calls, procedures can be invoked implicitly by patterns describing what the procedure is supposed to accomplish. The language is being applied to solve problems faced by a robot, to write special-purpose routines from goal-oriented language, to express and prove properties of procedures, to abstract procedures from protocols of their actions, and as a semantic base for English.
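Two mechanisms in this abstract, pattern-directed invocation and backtracking over tentative states, can be illustrated with a toy goal solver. The sketch below is Python, not PLANNER syntax; every name in it is hypothetical, and Python generators stand in for PLANNER's backtrack control:

```python
# Toy sketch of pattern-directed invocation with backtracking.
# Illustrative only; this is not PLANNER syntax.

facts = {("on", "a", "b"), ("on", "b", "table")}

def match(pattern, fact, bindings):
    """Unify a goal pattern such as ("on", "?x", "table") with a fact."""
    if len(pattern) != len(fact):
        return None
    env = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if p in env and env[p] != f:
                return None          # conflicting binding: fail, backtrack
            env[p] = f
        elif p != f:
            return None
    return env

def achieve(goal, bindings=None):
    """Yield every set of bindings satisfying the goal.

    Each yield is a tentative success; rejecting it resumes the
    generator, i.e. backtracks to the next alternative.
    """
    bindings = bindings or {}
    for fact in sorted(facts):
        env = match(goal, fact, bindings)
        if env is not None:
            yield env

# Procedure invoked by the *pattern* of what it accomplishes:
# ("above", x, y) holds if x is on y, or x is on z and z is above y.
def above(goal, bindings):
    _, x, y = goal
    yield from achieve(("on", x, y), bindings)
    for env in achieve(("on", x, "?z"), bindings):
        yield from above(("above", env["?z"], y), env)

for env in above(("above", "?w", "table"), {}):
    print(env)   # every block transitively above the table
```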

Relevance: 40.00%

Abstract:

Natural disasters are events that cause general and widespread destruction of the built environment and are becoming increasingly recurrent. They are a product of vulnerability and community exposure to natural hazards, generating a multitude of social, economic and cultural issues, of which the loss of housing and the subsequent need for shelter is among the major consequences. Numerous factors contribute to increased vulnerability and exposure to natural hazards, notably climate change, whose impacts are felt across the globe and which is currently seen as a worldwide threat to the built environment. The abandonment of disaster-affected areas can also push populations into regions where natural hazards are felt more severely. Although several actors in the post-disaster scenario provide for shelter needs and recovery programs, housing is often inadequate and unable to resist the effects of future natural hazards. Resilient housing is commonly not addressed because of the urgency of sheltering affected populations; however, by neglecting exposure risks in construction, houses become vulnerable and are likely to be damaged or destroyed in future natural hazard events. It is therefore fundamental to include resilience criteria in housing, so that new houses can withstand the passage of time and natural disasters as safely as possible. This master's thesis provides guiding principles for housing recovery after natural disasters, particularly in the form of flood-resilient construction, since floods are responsible for the largest number of natural disasters. To this purpose, the main structures that house affected populations were identified and analyzed in depth. After assessing the risks and damages that flood events can cause to housing, a methodology was proposed for flood-resilient housing models, identifying key criteria that housing should meet. The methodology is based on the US Federal Emergency Management Agency requirements and recommendations for specific flood zones. Finally, a case study in the Maldives, one of the countries most vulnerable to sea-level rise resulting from climate change, was analyzed in light of housing recovery in a post-disaster scenario. The analysis applied the proposed methodology to assess the flood resilience of the housing built in the aftermath of the 2004 Indian Ocean Tsunami.

Relevance: 40.00%

Abstract:

Decision making is a fundamental computational process in many aspects of animal behavior. The model most often encountered in studies of decision making is called the diffusion model; it has long explained a wide variety of behavioral and neurophysiological data in this field. However, another model, the urgency model, explains the same data equally well, and does so more parsimoniously and with firmer theoretical grounding. In this work, we first review the origins and development of the diffusion model and show how it became established as the framework for interpreting most experimental data related to decision making. In doing so, we note its strengths in order to compare it objectively and rigorously with alternative models. We then re-examine a number of implicit and explicit assumptions made by this model and highlight some of its shortcomings. This analysis serves as a framework for our introduction and discussion of the urgency model. Finally, we present an experiment whose methodology makes it possible to dissociate the two models, and whose results illustrate the empirical and theoretical limits of the diffusion model while clearly demonstrating the validity of the urgency model. We conclude by discussing the potential contribution of the urgency model to the study of certain brain pathologies, with an emphasis on new research perspectives.
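For readers unfamiliar with the diffusion model, the sketch below simulates its core mechanism, the accumulation of noisy evidence to a fixed bound; all parameter names and values are illustrative and are not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_trial(drift=0.5, noise=1.0, bound=2.0, dt=0.01, max_t=10.0):
    """One drift-diffusion trial: integrate noisy evidence to a fixed bound.

    Returns (choice, reaction_time); choice is +1/-1 for the upper/lower
    bound, or 0 if no bound is reached before max_t. Parameters illustrative.
    """
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t
    return 0, max_t

trials = [diffusion_trial() for _ in range(2000)]
rts = [t for c, t in trials if c == 1]
print(f"P(correct) = {sum(c == 1 for c, _ in trials) / len(trials):.2f}, "
      f"mean RT = {np.mean(rts):.2f} s")
```

In the urgency-gating account discussed in the thesis, accumulated evidence is instead amplified by a growing urgency signal, so that commitment occurs even when the evidence remains weak.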

Relevance: 40.00%

Abstract:

A theoretical framework for the joint conservation of energy and momentum in the parameterization of subgrid-scale processes in climate models is presented. The framework couples a hydrostatic resolved (planetary-scale) flow to a nonhydrostatic subgrid-scale (mesoscale) flow. The temporal and horizontal spatial scale separation between the planetary scale and the mesoscale is imposed using multiple-scale asymptotics. Energy and momentum are exchanged through subgrid-scale flux convergences of heat, pressure, and momentum. The generation and dissipation of subgrid-scale energy and momentum are understood using wave-activity conservation laws, derived by exploiting the (mesoscale) temporal and horizontal spatial homogeneities of the planetary-scale flow. The relations between these conservation laws and the planetary-scale dynamics represent generalized nonacceleration theorems. A derived relationship between the wave-activity fluxes, which generalizes the second Eliassen-Palm theorem, is key to ensuring consistency between energy and momentum conservation. The framework includes a consistent formulation of heating and entropy production due to kinetic energy dissipation.
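The wave-activity conservation laws invoked here have a standard generic form, sketched below; this is a schematic from the general literature, not the paper's specific derivation, whose densities and fluxes follow from its multiple-scale expansion:

```latex
% Generic wave-activity conservation law: density A, flux \mathbf{F},
% nonconservative source/sink \mathcal{D}.
\frac{\partial A}{\partial t} + \nabla \cdot \mathbf{F} = \mathcal{D}
% Nonacceleration: for steady (\partial A/\partial t = 0) and conservative
% (\mathcal{D} = 0) waves, \nabla \cdot \mathbf{F} = 0, and the mesoscale
% exerts no net force on the planetary-scale flow.
```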

Relevance: 40.00%

Abstract:

The peritrophic membrane (PM) is an anatomical structure surrounding the food bolus in most insects. Rejecting the idea that the PM evolved from coating mucus merely to play the same protective role, novel functions were proposed and experimentally tested. The theoretical principles underlying the digestive enzyme recycling mechanism were described and used to develop an algorithm for calculating enzyme distributions along the midgut and for inferring secretory and absorptive sites. The activity of a Spodoptera frugiperda microvillar aminopeptidase decreases by 50% in the presence of midgut contents. S. frugiperda trypsin preparations placed into dialysis bags in stirred and unstirred media have activities of 210% and 160%, respectively, relative to samples in a test tube. The ectoperitrophic fluid (EF) present in the midgut caeca of Rhynchosciara americana can be collected; if the enzymes restricted to this fluid are assayed in the presence of PM contents (PMC), their activities decrease by at least 58%. The lack of a PM caused by calcofluor feeding impairs growth owing to an increase in the metabolic cost of converting food into body mass, probably resulting from increased digestive enzyme excretion and a futile homeostatic attempt to re-establish the destroyed midgut gradients. The experimental models support the view that the PM enhances digestive efficiency by: (a) preventing non-specific binding of undigested material onto the cell surface; (b) preventing enzyme excretion by allowing enzyme recycling powered by an ectoperitrophic counterflux of fluid; (c) removing from inside the PM the oligomeric molecules that may inhibit the enzymes involved in initial digestion; (d) restricting oligomer hydrolases to the ectoperitrophic space (ECS) to avoid probable partial inhibition by non-dispersed undigested food. Finally, PM functions are discussed for insects feeding on any diet.

Relevance: 40.00%

Abstract:

Density functional calculations at the B3LYP level were employed to study surface oxygen vacancies and the doping of Co, Cu and Zn on SnO2 (110) surface models. Large clusters, based on (SnO2)15 models, were selected to simulate the oxidized (Sn15O30), half-reduced (Sn15O29) and reduced (Sn15O28) surfaces. The doping process was considered on the reduced surfaces: Sn13Co2O28, Sn13Cu2O28 and Sn13Zn2O28. The results are analyzed and discussed on the basis of the energy levels calculated along the bulk band-gap region, determined by projecting the one-electron level structure onto the atomic basis set and by the density of states. This procedure makes it possible to distinguish states arising from the bulk, from the oxygen vacancies and from the doping process. On passing from an oxidized to a reduced surface, missing bridging oxygen atoms generate electronic levels in the band-gap region, associated with the 5s/5p states of four- and five-fold-coordinated Sn and the 2p states of in-plane O centers on the exposed surface, in agreement with previous theoretical and experimental investigations. The formation energies of one and two oxygen vacancies are 3.0 and 3.9 eV, respectively.
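For reference, a common convention for the vacancy formation energies quoted above references the removed oxygen to molecular O2; the reservoir choice is an assumption here, as the paper may define it differently:

```latex
% n-vacancy formation energy referenced to molecular O2 (assumed reservoir).
E_{\mathrm{f}}(n) = E\!\left(\mathrm{Sn_{15}O_{30-n}}\right)
                  + \tfrac{n}{2}\,E\!\left(\mathrm{O_2}\right)
                  - E\!\left(\mathrm{Sn_{15}O_{30}}\right)
```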

Relevance: 40.00%

Abstract:

B3LYP/6-31++G** and MP2/6-31++G** calculations have been carried out to study six tautomers of the nucleic acid base cytosine in aqueous media. Solvent effects have been analyzed using self-consistent reaction field theory with two continuum methods. Relative stabilities and optimized geometries have been calculated for the tautomers and compared with experimental data. The results show the importance of electrostatic solvent effects in determining observable properties of the cytosine tautomers. The amino-oxo form (C1) is the most abundant tautomer in aqueous media, while the other amino-oxo form (C4) is the most energetically favored when solvent effects are included. These results can be explained by the large dipole moments of both the C1 and C4 tautomers. Theoretical and experimental harmonic vibrational frequencies and rotational constants show good agreement.

Relevance: 40.00%

Abstract:

This voluminous book, which draws on almost 1000 references, provides an important theoretical base for practice. After an informative introduction about models, maps and metaphors, Forte provides an impressive presentation of several perspectives for use in practice: applied ecological theory, applied system theory, applied biology, applied cognitive science, applied psychodynamic theory, applied behaviourism, applied symbolic interactionism, applied social role theory, applied economic theory, and applied critical theory. Finally, he completes the book with a chapter on "Multi theory practice and routes to integration."

Relevance: 40.00%

Abstract:

Formative measurement has seen increasing acceptance in organizational research since the turn of the 21st century. More recently, however, a number of criticisms of the formative approach have appeared, arguing that formatively measured constructs are empirically ambiguous and thus flawed in a theory-testing context. The aim of the present paper is to examine the underpinnings of formative measurement theory in light of theories of causality and ontology in measurement in general. In doing so, a thesis is advanced that draws a distinction between reflective, formative, and causal theories of latent variables. This distinction is advantageous in that it clarifies the ontological status of each type of latent variable and thus provides advice on appropriate conceptualization and application. The distinction also partly reconciles recent supportive and critical perspectives on formative measurement. In light of this, advice is given on how most appropriately to model formative composites in theory-testing applications, placing the onus on the researcher to make clear their conceptualization and operationalization.
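As background, the reflective/formative contrast the paper builds on is conventionally written as follows; this is standard latent-variable notation, not the paper's own equations:

```latex
% Reflective: each indicator x_i is an effect of the latent variable \eta.
x_i = \lambda_i \eta + \varepsilon_i
% Formative: the composite \eta is defined by (caused by) its indicators.
\eta = \textstyle\sum_i \gamma_i x_i + \zeta
```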

Relevance: 30.00%

Abstract:

Modern Engineering Asset Management (EAM) requires accurate assessment of the current asset health condition and prediction of its future condition. Appropriate mathematical models that can estimate times to failure and the probability of failure in the future are therefore essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors, termed covariates. Hazard prediction with covariates is an elemental notion in reliability theory: it estimates the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has survived up to the current time. A number of statistical covariate-based hazard models have been developed; however, none of them explicitly incorporates both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model to address this concern, named the Explicit Hazard Model (EHM). Both the semi-parametric and non-parametric forms of the model are presented. The major purpose of this paper is to illustrate the theoretical development of EHM. Due to page limitations, a case study with reliability field data is presented in the applications part of this study.
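For reference, the hazard function underlying such models is defined below, together with the widely used proportional-hazards form for a covariate vector z; this is standard reliability notation, and EHM's own formulation is given in the paper, not reproduced here:

```latex
% Hazard: instantaneous failure tendency at time t, given survival to t.
h(t) = \lim_{\Delta t \to 0}
       \frac{\Pr(t \le T < t + \Delta t \mid T \ge t)}{\Delta t}
% A widely used covariate-based form (proportional hazards) with baseline
% hazard h_0 and covariate vector z, shown for orientation only.
h(t \mid \mathbf{z}) = h_0(t)\, \exp\!\left(\boldsymbol{\beta}^{\top}\mathbf{z}\right)
```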

Relevance: 30.00%

Abstract:

This paper first presents an extended ambiguity resolution model that deals with an ill-posed problem and with constraints among the estimated parameters. In the extended model, a regularization criterion is used instead of traditional least squares in order to estimate the float ambiguities better; the existing models can be derived from this general model. Secondly, the paper examines existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping; and integer least-squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical aspects.
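A regularization criterion of the kind mentioned replaces the least-squares objective with a penalized one; the Tikhonov form below is a representative example, where the regularization matrix R and weight alpha are illustrative choices, not the paper's exact criterion:

```latex
% Float solution by ordinary least squares (design matrix A, observations y,
% observation covariance Q):
\hat{x}_{\mathrm{LS}} = \arg\min_{x} \, (y - Ax)^{\top} Q^{-1} (y - Ax)
% Regularized (Tikhonov-type) criterion stabilizing the ill-posed problem;
% R and \alpha are illustrative choices.
\hat{x}_{\mathrm{reg}} = \arg\min_{x} \, (y - Ax)^{\top} Q^{-1} (y - Ax)
                       + \alpha\, x^{\top} R\, x
```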

Relevance: 30.00%

Abstract:

In this paper, the problems of three-carrier phase ambiguity resolution (TCAR) and position estimation (PE) are generalized as real-time GNSS data processing problems for a continuously observing network on a large scale. To describe these problems, a general linear equation system is presented that unifies the various geometry-free, geometry-based and geometry-constrained TCAR models, along with the state-transition equations between observation times. With this general formulation, generalized TCAR solutions are given to cover different real-time GNSS data processing scenarios, together with various simplified integer solutions, such as geometry-free rounding and geometry-based LAMBDA solutions with single- and multiple-epoch measurements. In fact, the various ambiguity resolution (AR) solutions differ in their float ambiguity estimation and integer ambiguity search processes, but they remain theoretically equivalent under the same observational models and statistical assumptions. TCAR performance benefits reported in data analyses in the recent literature are reviewed, showing profound implications for future GNSS development from both technology and application perspectives.
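Two of the simplified integer solutions mentioned, rounding and bootstrapping, differ in whether they exploit the correlations of the float ambiguities. The sketch below uses illustrative numbers, not GNSS measurements, and hypothetical function names:

```python
import numpy as np

def integer_rounding(a_float):
    """Component-wise rounding: ignores correlations between ambiguities."""
    return np.round(a_float).astype(int)

def integer_bootstrapping(a_float, Q):
    """Sequential rounding, conditioning on previously fixed ambiguities.

    Uses the Cholesky factor of the float covariance Q for the conditional
    adjustment; an illustrative implementation, not production GNSS code.
    """
    n = len(a_float)
    a = np.array(a_float, dtype=float)
    z = np.zeros(n, dtype=int)
    L = np.linalg.cholesky(Q)       # Q = L L^T, L lower triangular
    for i in range(n):
        z[i] = int(round(float(a[i])))
        # Condition the remaining float ambiguities on the fix just made.
        for j in range(i + 1, n):
            a[j] -= (L[j, i] / L[i, i]) * (a[i] - z[i])
    return z

a_float = np.array([1.3, -0.6, 2.2])
Q = np.array([[0.5, 0.3, 0.1],
              [0.3, 0.6, 0.2],
              [0.1, 0.2, 0.4]])
print(integer_rounding(a_float), integer_bootstrapping(a_float, Q))
```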

Relevance: 30.00%

Abstract:

This thesis addresses computational challenges arising from Bayesian analysis of complex real-world problems. Many of the models and algorithms designed for such analysis are 'hybrid' in nature, in that they are compositions of components whose individual properties may be easily described but whose performance as a whole is less well understood. The aim of this research project is to offer a better understanding of the performance of hybrid models and algorithms; the goal of this thesis is to analyse their computational aspects in the Bayesian context.

The first objective focuses on computational aspects of hybrid models, notably a continuous finite mixture of t-distributions. In the mixture model, an inference of interest is the number of components, as this may relate both to the quality of the model's fit to data and to the computational workload. The analysis of t-mixtures using Markov chain Monte Carlo (MCMC) is described, and the model is compared to the Normal case on the basis of goodness of fit. Simulation studies demonstrate that the t-mixture model can be more flexible and more parsimonious in the number of components, particularly for skewed and heavy-tailed data. The study also reveals important computational issues associated with the use of t-mixtures, which have not been adequately considered in the literature.

The second objective focuses on computational aspects of hybrid algorithms for Bayesian analysis, considered through two approaches: a formal comparison of the performance of a range of hybrid algorithms, and a theoretical investigation of the performance of one of these algorithms in high dimensions. For the first approach, the delayed rejection algorithm, the pinball sampler, the Metropolis-adjusted Langevin algorithm, and the hybrid version of the population Monte Carlo (PMC) algorithm are selected as examples of hybrid algorithms. The statistical literature often treats statistical efficiency as the only criterion for an efficient algorithm; in this thesis the algorithms are also considered and compared from a more practical perspective. This extends to studying how individual components contribute to the overall efficiency of a hybrid algorithm, and highlights weaknesses that may be introduced when these components are combined in a single algorithm. The second approach investigates the performance of PMC in high dimensions. It is well known that as a model becomes more complex, computation may become increasingly difficult in real time; in particular, importance-sampling-based algorithms, including PMC, are known to be unstable in high dimensions. This thesis examines the PMC algorithm in a simplified setting, a single step of the general sampler, and explores a fundamental problem that occurs when importance sampling is applied to a high-dimensional problem. The precision of the computed estimate in this setting is measured by its asymptotic variance, under conditions on the importance function. The exponential growth of the asymptotic variance with dimension is demonstrated, and it is shown that the optimal covariance matrix for the importance function can be estimated in a special case.
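The high-dimensional instability of importance sampling that motivates the PMC analysis can be demonstrated in a few lines: with a Gaussian target and a mildly mismatched Gaussian proposal, the effective sample size decays exponentially with dimension. All parameters below are illustrative, not the thesis's setting:

```python
import numpy as np

rng = np.random.default_rng(1)

def effective_sample_size(dim, n=100_000, sigma_q=1.5):
    """ESS fraction of importance sampling with a mismatched proposal.

    Target: N(0, I_dim); proposal: N(0, sigma_q^2 I_dim). The specific
    densities are illustrative: any persistent per-coordinate mismatch
    produces the same exponential decay.
    """
    x = rng.normal(scale=sigma_q, size=(n, dim))
    # log weight = log target - log proposal, up to constants that
    # cancel after normalization below.
    log_w = -0.5 * (x ** 2).sum(axis=1) + 0.5 * ((x / sigma_q) ** 2).sum(axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / (n * (w ** 2).sum())   # fraction of samples effectively used

for d in (1, 5, 10, 20, 50):
    print(f"dim={d:3d}  ESS fraction={effective_sample_size(d):.4f}")
```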