923 results for Generalised Linear Models
Abstract:
Research project carried out during a stay at the Laboratory of Archaeometry of the National Centre of Scientific Research "Demokritos" in Athens, Greece, between June and September 2006. This study forms part of a broader investigation of the technological change documented in the production of Roman-type amphorae during the 1st century BC and the 1st century AD in the coastal territories of Catalonia. One part of the study concerns the calculation of the mechanical properties of these amphorae and their evaluation as a function of amphora typology, using Finite Element Analysis (FEA). FEA is a numerical approach that originated in the engineering sciences and has been used to estimate the mechanical behaviour of a model in terms of, for example, deformation and stress. An object, or rather its model, is divided into sub-domains called finite elements, to which the mechanical properties of the material under study are assigned. These finite elements are connected to form a mesh whose constraints can be defined. When a given force is applied to the model, the behaviour of the object can be estimated from the set of linear equations that describes the response of the finite elements, providing a good approximation of the structural deformation. This computer simulation is therefore an important tool for understanding the functionality of archaeological ceramics. The procedure provides a quantitative model for predicting the failure of a ceramic object subjected to different loading conditions. The model has been applied to several amphora typologies. Preliminary results show significant differences between the pre-Roman typology and the Roman typologies, as well as among the Roman amphora designs themselves, with important archaeological implications.
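As a minimal illustration of the finite element procedure described above (divide the body into elements, assemble a system of linear equations, apply a force, solve for the deformation), the following Python sketch assembles and solves a one-dimensional elastic bar. The geometry, material constants and load are hypothetical placeholders, not values from the amphora study.

```python
import numpy as np

# Hypothetical 1D bar: length L, Young's modulus E, cross-section A,
# discretised into n linear finite elements, loaded axially at the free end.
L, E, A, n = 1.0, 10e9, 1e-4, 10       # illustrative values only
F = 500.0                              # applied axial force [N]

le = L / n                             # element length
k = E * A / le                         # element stiffness
K = np.zeros((n + 1, n + 1))           # global stiffness matrix

# Assemble each element's 2x2 stiffness contribution into the global matrix
for e in range(n):
    K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n + 1)
f[-1] = F                              # point load at the last node

# Constrain the first node (fixed end) and solve K u = f for displacements
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

strain = np.diff(u) / le               # element strains
stress = E * strain                    # element stresses
print("tip displacement:", u[-1], "max stress:", stress.max())
```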
Abstract:
Studies evaluating the mechanical behavior of the trabecular microstructure play an important role in understanding pathologies such as osteoporosis and in improving our understanding of bone fracture and bone adaptation. Understanding such behavior in bone is important for predicting fractures and providing early treatment. The objective of this study is to present a numerical model for studying the initiation and accumulation of trabecular bone microdamage in both the pre- and post-yield regions. A sub-region of human vertebral trabecular bone was analyzed using a uniformly loaded, anatomically accurate, microstructural three-dimensional finite element model. The evolution of trabecular bone microdamage was governed using a non-linear, modulus reduction, perfect damage approach derived from a generalized plasticity stress-strain law. The model introduced in this paper establishes a history of microdamage evolution in both the pre- and post-yield regions.
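The "modulus reduction, perfect damage" idea can be pictured, in a deliberately simplified scalar form, as capping the stress of an element once its strain passes a yield threshold and reducing its effective modulus accordingly. The sketch below is only a schematic of that idea with invented parameters; it is not the paper's actual constitutive law.

```python
# Schematic scalar version of a modulus-reduction ("perfect damage") rule:
# once the strain of a trabecular element exceeds a yield strain, its
# effective (secant) modulus is reduced so that stress no longer increases.
# All numbers are illustrative, not the paper's calibrated values.
E0 = 3000.0          # undamaged tissue modulus [MPa], hypothetical
eps_yield = 0.008    # yield strain, hypothetical

def damaged_state(strain, E=E0, ey=eps_yield):
    """Return (stress, damage) for a monotonically loaded element."""
    if strain <= ey:
        return E * strain, 0.0                 # linear elastic, no damage
    stress = E * ey                            # perfectly damaged: stress capped
    E_eff = stress / strain                    # secant (reduced) modulus
    damage = 1.0 - E_eff / E                   # scalar damage variable in [0, 1)
    return stress, damage

for eps in (0.002, 0.008, 0.02):
    print(eps, damaged_state(eps))
```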
Abstract:
Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose an active appearance models (AAM) based method and use linear support vector machines (SVM) to select a subset of relevant features obtained using the AAM. This feature selection step proves to be essential, as it drastically improves the performance of the classification, which is carried out using an SVM with an RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and provide a key element of an automatic difficult intubation assessment system.
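A rough sketch of the pipeline the abstract describes (a linear SVM used to select relevant AAM features, followed by an RBF-kernel SVM evaluated with leave-one-out cross-validation) might look as follows with scikit-learn. The feature matrix X and labels y below are random placeholders standing in for AAM-derived parameters and Mallampati classes, and the majority-voting step over multiple images per patient is omitted.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: rows = patients, columns = AAM-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))
y = rng.integers(0, 2, size=100)      # e.g. easy vs. difficult intubation class

# A linear SVM ranks features by |weight|; an RBF SVM classifies the kept subset.
clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LinearSVC(C=0.1, dual=False, max_iter=10000)),
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)

# Leave-one-out cross-validation, as used in the study.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("LOO accuracy:", scores.mean())
```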
Abstract:
Summary: Lipophilicity plays an important role in the determination and comprehension of the pharmacokinetic behavior of drugs. It is usually expressed by the partition coefficient (log P) in the n-octanol/water system. The use of an additional solvent system (1,2-dichloroethane/water) is necessary to obtain complementary information, as the log Poct values alone are not sufficient to explain all biological properties. The aim of this thesis is to develop tools for predicting the lipophilicity of new drugs and for analyzing the information yielded by those log P values. Part I presents the development of theoretical models used to predict lipophilicity. Chapter 2 shows the need to extend the existing solvatochromic analyses in order to correctly predict the lipophilicity of new and complex neutral compounds. In Chapter 3, solvatochromic analyses are used to develop a model for the prediction of the lipophilicity of ions. A global model was obtained that allows the lipophilicity of neutral, anionic and cationic solutes to be estimated. Part II presents the detailed study of two physicochemical filters. Chapter 4 shows that the Discovery RP Amide C16 stationary phase allows the lipophilicity of the neutral form of basic and acidic solutes to be estimated, except for lipophilic acidic solutes, which present additional interactions with this particular stationary phase. In Chapter 5, four different IAM stationary phases are investigated. For neutral solutes, linear data are obtained whatever the IAM column used. For ionized solutes, retention is due to a balance of electrostatic and hydrophobic interactions; thus no discrimination is observed between different series of solutes bearing the same charge from one column to another. Part III presents two examples illustrating the information obtained through Structure-Property Relationships (SPR). Graphically comparing lipophilicity values obtained in two different solvent systems reveals the presence of intramolecular effects such as internal H-bonds (Chapter 6). SPR is also used to study the partitioning of ionizable groups encountered in Medicinal Chemistry (Chapter 7).
Lay summary: To exert its therapeutic effect, a drug must reach its site of action in sufficient quantity. The effective amount of drug reaching the site of action depends on the interactions between the drug and many constituents of the organism, such as metabolic enzymes or biological membranes. The passage of the drug across these membranes, called permeation, is an important parameter to optimize in order to develop more potent drugs. Lipophilicity plays a key role in understanding the passive permeation of drugs. Lipophilicity is usually expressed by the partition coefficient (log P) in the (immiscible) n-octanol/water solvent system. The log Poct values alone have proved insufficient to explain permeation across all the different biological membranes of the human body. The use of an additional solvent system (1,2-dichloroethane/water) provides the complementary information needed for a good understanding of the permeation process. A large number of experimental and theoretical tools are available for studying lipophilicity. This thesis focuses mainly on developing or improving some of these tools so that they can be applied to a broader range of compounds. Two of these tools are briefly described here: 1) Factorizing lipophilicity in terms of structural properties of the compounds (such as volume) makes it possible to develop theoretical models for predicting the lipophilicity of new compounds or drug candidates; this approach is applied to the analysis of the lipophilicity of both neutral and charged compounds. 2) Reversed-phase high-pressure liquid chromatography (RP-HPLC) is a method commonly used for the experimental determination of log Poct values.
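Solvatochromic (LSER-type) analyses of the kind used in Part I express log P as a linear combination of solute descriptors. The sketch below fits a generic equation of that form by least squares; all descriptor values, log P values and the resulting coefficients are invented for illustration, and the descriptor set is only an assumption about what such an analysis might use.

```python
import numpy as np

# Hypothetical training set: per-solute descriptors (e.g. volume V,
# dipolarity/polarizability S, H-bond acidity A, H-bond basicity B)
# and measured log P values.  All numbers are invented.
V    = np.array([0.72, 0.99, 1.29, 0.87, 1.45, 1.06])
S    = np.array([0.52, 0.88, 0.60, 1.11, 0.95, 0.75])
A    = np.array([0.00, 0.26, 0.00, 0.57, 0.30, 0.12])
B    = np.array([0.14, 0.45, 0.51, 0.67, 0.60, 0.35])
logP = np.array([2.13, 1.46, 2.73, 0.65, 1.75, 2.05])

# Solvatochromic-style equation: logP = c + v*V + s*S + a*A + b*B
X = np.column_stack([np.ones_like(V), V, S, A, B])
coef, *_ = np.linalg.lstsq(X, logP, rcond=None)
print(dict(zip("c v s a b".split(), coef.round(2))))

# Predict log P for a new (hypothetical) solute
new = np.array([1.0, 1.10, 0.70, 0.10, 0.40])
print("predicted logP:", float(new @ coef))
```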
Abstract:
Nonlinear Noisy Leaky Integrate and Fire (NNLIF) models for networks of neurons can be written as Fokker-Planck-Kolmogorov equations on the probability density of neurons, the main parameters in the model being the connectivity of the network and the noise. We analyse several aspects of the NNLIF model: the number of steady states, a priori estimates, blow-up issues and convergence toward equilibrium in the linear case. In particular, for excitatory networks, blow-up always occurs for initial data concentrated close to the firing potential. These results show how critical the balance between noise and excitatory/inhibitory interactions is with respect to the connectivity parameter.
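For reference, the Fokker-Planck form of the NNLIF model is commonly written as below, where p(v,t) is the density of neurons at membrane potential v, N(t) the firing rate, b the connectivity parameter, a(N) the noise, and V_R, V_F the reset and firing potentials. This is the standard formulation found in this literature; notational details may differ slightly from the paper.

```latex
\[
\partial_t p(v,t) + \partial_v\big[(-v + b\,N(t))\,p(v,t)\big]
  - a(N(t))\,\partial_{vv} p(v,t) = \delta(v - V_R)\,N(t), \qquad v \le V_F,
\]
\[
N(t) = -\,a(N(t))\,\partial_v p(V_F,t) \;\ge\; 0, \qquad p(V_F,t) = 0 .
\]
```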
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been solved through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers which are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
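The core objects in a SAM multiplier model are the matrix of average expenditure propensities A of the endogenous accounts and the accounting multiplier matrix M = (I - A)^{-1}; linear aggregation replaces groups of accounts by their sums via a grouping matrix. The sketch below illustrates these operations and a naive comparison of "aggregate the multipliers" versus "multipliers of the aggregate" on an invented four-account SAM; it is not the paper's formal treatment of aggregation bias.

```python
import numpy as np

# Invented SAM flows among 4 endogenous accounts (rows receive, columns pay).
Z = np.array([[ 0., 10.,  5., 20.],
              [15.,  0., 10.,  5.],
              [ 5., 20.,  0., 10.],
              [20., 10., 25.,  0.]])
totals = np.array([60., 55., 50., 45.])  # account totals incl. exogenous payments

A = Z / totals                           # average expenditure propensities
M = np.linalg.inv(np.eye(4) - A)         # SAM accounting multipliers M = (I - A)^{-1}

# Grouping matrix S: aggregate accounts {0,1} and {2,3} into two macro-accounts.
S = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])

Z_agg = S @ Z @ S.T                      # aggregated SAM flows
A_agg = Z_agg / (S @ totals)             # propensities of the aggregated SAM
M_agg = np.linalg.inv(np.eye(2) - A_agg)

# Aggregating the detailed response vs. responding with the aggregated model:
inj = np.array([1., 0., 0., 0.])         # unit exogenous injection into account 0
print(S @ (M @ inj))                     # aggregated response of the detailed model
print(M_agg @ (S @ inj))                 # response of the aggregated model
```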
Abstract:
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'under fit' models, having insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'over fit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
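One concrete way to vary the flexibility of an occurrence-environment relationship in a GLM-type SDM is the polynomial order of the environmental predictors. The sketch below compares linear, quadratic and high-order logistic regressions on simulated presence/absence data along a single hypothetical gradient; it only illustrates the under-/over-fitting trade-off discussed above, not any specific recommendation from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Simulated data: one environmental gradient (e.g. temperature) with a
# hump-shaped true occurrence probability, so a purely linear term underfits.
rng = np.random.default_rng(1)
temp = rng.uniform(-2, 2, size=400).reshape(-1, 1)
p_true = 1 / (1 + np.exp(-(1.5 - 2.0 * temp[:, 0] ** 2)))
y = rng.binomial(1, p_true)

for degree in (1, 2, 6):
    model = make_pipeline(PolynomialFeatures(degree),
                          LogisticRegression(max_iter=1000))
    score = cross_val_score(model, temp, y, cv=5).mean()
    print(f"degree {degree}: cross-validated accuracy = {score:.3f}")
```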
Abstract:
Using numerical simulations we investigate shapes of random equilateral open and closed chains, one of the simplest models of freely fluctuating polymers in a solution. We are interested in the 3D density distribution of the modeled polymers where the polymers have been aligned with respect to their three principal axes of inertia. This type of approach was pioneered by Theodorou and Suter in 1985. While individual configurations of the modeled polymers are almost always nonsymmetric, the approach of Theodorou and Suter results in cumulative shapes that are highly symmetric. By taking advantage of asymmetries within the individual configurations, we modify the procedure of aligning independent configurations in a way that shows their asymmetry. This approach reveals, for example, that the 3D density distribution for linear polymers has a bean shape predicted theoretically by Kuhn. The symmetry-breaking approach reveals complementary information to the traditional, symmetrical, 3D density distributions originally introduced by Theodorou and Suter.
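Aligning each configuration with respect to its principal axes of inertia amounts to diagonalising the gyration tensor of the chain and rotating the coordinates into the eigenvector frame. The sketch below does this for a single random equilateral open chain; it is a generic illustration of the alignment step, not the authors' symmetry-breaking variant of the procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_equilateral_chain(n_segments):
    """Open chain of unit-length segments with uniformly random directions."""
    steps = rng.normal(size=(n_segments, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

chain = random_equilateral_chain(100)
centered = chain - chain.mean(axis=0)

# Gyration tensor and its principal axes (eigenvectors of a symmetric matrix).
gyration = centered.T @ centered / len(centered)
eigvals, eigvecs = np.linalg.eigh(gyration)

# Rotate the chain into the principal-axis frame (largest axis in last column).
aligned = centered @ eigvecs
print("principal radii^2:", eigvals)
print("variances along rotated axes:", aligned.var(axis=0))
```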
Abstract:
Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. From the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.
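Moderated regression, the baseline mentioned in the abstract, estimates an interaction as the coefficient of a product term. The sketch below shows that baseline on simulated data (it does not include the measurement-error correction that the structural equation specification adds); the variable names and effect sizes are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: does "budget_use" moderate the effect of "innovation"
# on "performance"?  Names and effect sizes are hypothetical.
rng = np.random.default_rng(3)
n = 300
budget_use = rng.normal(size=n)
innovation = rng.normal(size=n)
performance = (0.4 * budget_use + 0.3 * innovation
               + 0.5 * budget_use * innovation + rng.normal(scale=1.0, size=n))
df = pd.DataFrame(dict(budget_use=budget_use, innovation=innovation,
                       performance=performance))

# Moderated regression: the product term carries the interaction effect.
fit = smf.ols("performance ~ budget_use * innovation", data=df).fit()
print(fit.params)        # includes the budget_use:innovation coefficient
```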
Abstract:
In this work we develop a viscoelastic bar element that can handle multiple rheological laws with non-linear elastic and non-linear viscous material models. The bar element is built by joining in series an elastic and a viscous bar, constraining the middle node position to the bar axis with a reduction method, and statically condensing the internal degrees of freedom. We apply the methodology to the modelling of reversible softening with stiffness recovery both in 2D and 3D, a phenomenology also experimentally observed during stretching cycles on epithelial lung cell monolayers.
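An elastic and a viscous bar joined in series is, in its simplest scalar form, a Maxwell element. The sketch below integrates such an element under a prescribed strain history, which conveys the series construction described above; the actual element also handles non-linear laws, the constrained middle node and the static condensation, and all parameter values here are made up.

```python
import numpy as np

# Scalar Maxwell element: spring (stiffness E) in series with a dashpot
# (viscosity eta).  Stress obeys  d(sigma)/dt = E*d(eps)/dt - (E/eta)*sigma.
E, eta = 100.0, 50.0          # illustrative material parameters
dt, t_end = 0.01, 5.0
times = np.arange(0.0, t_end, dt)

def strain(t):
    """Prescribed total strain history: a slow loading-unloading cycle."""
    return 0.05 * np.sin(0.5 * np.pi * t)

sigma = 0.0
history = []
for t in times:
    deps = strain(t + dt) - strain(t)
    # Backward-Euler update of the Maxwell stress
    sigma = (sigma + E * deps) / (1.0 + E * dt / eta)
    history.append(sigma)

print("peak stress:", max(history), "final stress:", history[-1])
```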
Abstract:
An active strain formulation for orthotropic constitutive laws arising in cardiac mechanics modeling is introduced and studied. The passive mechanical properties of the tissue are described by the Holzapfel-Ogden relation. In the active strain formulation, the Euler-Lagrange equations for minimizing the total energy are written in terms of active and passive deformation factors, where the active part is assumed to depend, at the cell level, on the electrodynamics and on the specific orientation of the cardiac cells. The well-posedness of the linear system derived from a generic Newton iteration of the original problem is analyzed and different mechanical activation functions are considered. In addition, the active strain formulation is compared with the classical active stress formulation from both numerical and modeling perspectives. Taylor-Hood and MINI finite elements are employed to discretize the mechanical problem. The results of several numerical experiments show that the proposed formulation is mathematically consistent and is able to represent the main key features of the phenomenon, while allowing savings in computational costs.
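In active strain formulations of cardiac mechanics, the deformation gradient is usually split multiplicatively into a passive (elastic) part and an active part prescribed along the fibre directions. A common form of this decomposition, written here in generic notation that may differ slightly from the paper's, is:

```latex
\[
F = F_E\,F_A, \qquad
F_A = I + \gamma_f\, f_0 \otimes f_0 + \gamma_s\, s_0 \otimes s_0 + \gamma_n\, n_0 \otimes n_0 ,
\]
```

where f_0, s_0, n_0 are the reference fibre, sheet and normal directions, the activation functions γ_f, γ_s, γ_n depend on the electrophysiological state at the cell level, and the passive response through F_E is governed here by the Holzapfel-Ogden energy.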
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending what would be expected from the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. Moreover, the topology is also a key factor in explaining their extraordinary flexibility and resilience to perturbations in transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities to the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations.
We believe they are one step closer to the biological reality.
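A minimal Kauffman-style random Boolean network, of the kind taken as the starting point in the second part, can be simulated in a few lines. The sketch below uses the classical synchronous update (the very assumption the work relaxes in favour of a more plausible cascading scheme) and random truth tables, with purely illustrative sizes.

```python
import numpy as np

rng = np.random.default_rng(4)

N, K = 12, 2                         # N genes, each regulated by K inputs
inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
# One random Boolean function (truth table over 2**K input patterns) per gene.
tables = rng.integers(0, 2, size=(N, 2 ** K))

def step(state):
    """Synchronous update: every gene reads its K regulators simultaneously."""
    idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
    return tables[np.arange(N), idx]

state = rng.integers(0, 2, size=N)
trajectory = [state]
for _ in range(20):
    state = step(state)
    trajectory.append(state)

print(np.array(trajectory))          # rows: time steps; columns: gene states
```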
Abstract:
The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in the model performance. A classification problem is posed to find the regions of good fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide if a forward model simulation is to be computed for a particular generated model. SVM is particularly designed to tackle classification problems in high-dimensional space in a non-parametric and non-linear way. SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost function classification, and, therefore, provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features highly variable response due to the parameter values' combination.
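The idea of classifying parameter combinations before spending a forward simulation can be sketched as follows: label an initial batch of simulated models by thresholding their cost, fit an SVM in parameter space, and only run the expensive forward model where the classifier predicts a good fit or is uncertain. The cost function, threshold and parameter ranges below are placeholders, not the paper's flow-simulation setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def forward_cost(theta):
    """Placeholder for an expensive forward simulation plus misfit evaluation."""
    return (theta[0] - 0.3) ** 2 + 2.0 * (theta[1] + 0.5) ** 2

# Initial exploration: run the forward model on a small random sample.
theta0 = rng.uniform(-1, 1, size=(60, 2))
cost0 = np.array([forward_cost(t) for t in theta0])
labels = (cost0 < np.quantile(cost0, 0.3)).astype(int)   # 1 = "good fitting"

svm = SVC(kernel="rbf", probability=True).fit(theta0, labels)

# New candidates: only simulate those predicted good or near the decision boundary.
candidates = rng.uniform(-1, 1, size=(500, 2))
p_good = svm.predict_proba(candidates)[:, 1]
to_simulate = candidates[p_good > 0.4]
print(f"forward runs avoided: {len(candidates) - len(to_simulate)} / {len(candidates)}")
```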
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression model with errors in variables, in the case where various data sets are merged into a single analysis and the observable variables may deviate from normality. The various samples to be merged can differ in the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, LISCOMP, among others. An illustration with Monte Carlo data is presented.
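For concreteness, a prototypical regression model with errors in variables of the kind covered by this theory can be written as follows, in generic notation that is not necessarily the paper's:

```latex
\[
y_i = \alpha + \beta^{\top}\xi_i + \varepsilon_i, \qquad
x_i = \xi_i + \delta_i ,
\]
```

where only (y_i, x_i) are observed, ξ_i is the true regressor (fixed across hypothetical replications in the functional case, random in the structural case), and ε_i, δ_i are the equation and measurement errors; in a merged analysis, different samples may observe different subsets of these variables.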
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing the optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require the computation of sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.