Abstract:
Global change is affecting marine ecosystems through a combination of stressors such as warming, ocean acidification and oxygen depletion. Very little is known about the interactions among these factors, especially with respect to gelatinous zooplankton. In this study we therefore investigated the direct effects of pH, temperature and oxygen availability on the moon jellyfish Aurelia aurita, concentrating on the ephyral life stage. Starved one-day-old ephyrae were exposed to a range of pCO2 levels (400-4000 ppm) and three dissolved oxygen levels (from saturated to hypoxic conditions) at two temperatures (5 and 15 °C) for 7 days. Carbon content and swimming activity were analysed at the end of the incubation period, and mortality was recorded. Generalized linear models were fitted to the data, with the best-fitting models including two- and three-way interactions between pCO2, temperature and oxygen concentration. The combined effect of the stressors was small but significant, with the clearest negative effect on growth occurring when all three stressors were present (high temperature, high CO2, low oxygen). We conclude that A. aurita ephyrae are robust and unlikely to suffer from these environmental stressors in the near future.
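A minimal sketch of the kind of model fit described above: a Gaussian GLM of carbon content against pCO2, temperature and oxygen with all two- and three-way interactions. The file name, column names and Gaussian family are assumptions for illustration, not details from the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical table with one row per ephyra: carbon content plus the
# three experimental factors (names assumed for illustration).
df = pd.read_csv("ephyrae.csv")

# In the formula API, '*' expands to main effects plus all two- and
# three-way interactions: pco2 + temp + o2 + pco2:temp + ... + pco2:temp:o2
model = smf.glm("carbon ~ pco2 * temp * o2", data=df).fit()
print(model.summary())
```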
Abstract:
The authors would like to thank the College of Life Sciences of Aberdeen University and Marine Scotland Science, which funded CP's PhD project. Skate tagging experiments were undertaken as part of Scottish Government project SP004. We thank Ian Burrett for help in catching the fish and the other fishermen and anglers who returned tags. We thank José Manuel Gonzalez-Irusta for extracting and making available the environmental layers used as covariates in the environmental suitability modelling procedure. We also thank Jason Matthiopoulos for insightful suggestions on habitat utilization metrics, as well as Stephen C.F. Palmer and three anonymous reviewers for useful suggestions that improved the clarity and quality of the manuscript.
Abstract:
An extremal quantile index is a quantile index that drifts to zero (or one) as the sample size increases. The three chapters of my dissertation consist of three applications of this concept to three distinct econometric problems. In Chapter 2, I use the concept of the extremal quantile index to derive new asymptotic properties and an inference method for quantile treatment effect estimators when the quantile index of interest is close to zero. In Chapter 3, I rely on the concept to achieve identification at infinity in sample selection models and propose a new inference method. Last, in Chapter 4, I use the concept to define an asymptotic trimming scheme that can be used to control the convergence rate of the estimator of the intercept in binary response models.
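One standard way to formalize this drifting index, stated here as an assumption rather than a quotation from the dissertation:

```latex
% The quantile index \tau_n is extremal when it drifts to the boundary:
\[
  \tau_n \to 0 \quad \text{as } n \to \infty,
  \qquad \text{e.g. } \tau_n = k_n/n \ \text{with } k_n \to \infty,\ k_n/n \to 0,
\]
% so the \tau_n-th quantile moves ever deeper into the tail while the
% effective number of tail observations k_n still grows.
```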
Abstract:
BACKGROUND: The role of the microbiome has become central to our understanding of human health and disease. Bile acids, as essential components of the microbiome, have gained sustained credibility as potential modulators of cancer progression in several disease models. At physiological concentrations, bile acids appear to influence cancer phenotypes, although conflicting data surround their precise mechanism of action. Previously, we demonstrated that bile acids destabilise the HIF-1α subunit of the Hypoxia-Inducible Factor-1 (HIF-1) transcription factor. HIF-1 overexpression is an early biomarker of tumour metastasis and is associated with tumour resistance to conventional therapies and poor prognosis in a range of cancers. METHODS: Here we investigated the effects of bile acids on the growth and migratory potential of cancer cell lines in which HIF-1α is known to be active under hypoxic conditions. HIF-1α status was investigated in A-549 lung, DU-145 prostate and MCF-7 breast cancer cell lines exposed to the bile acids CDCA and DCA. Cell adhesion, invasion and migration were assessed in DU-145 cells, while clonogenic growth was assessed in all cell lines. RESULTS: Intracellular HIF-1α was destabilised in the presence of bile acids in all cell lines tested. Bile acids were not cytotoxic, but cells from two of the three lines showed greatly reduced clonogenic potential after exposure. In the migratory prostate cancer cell line DU-145, bile acids impaired cell adhesion, migration and invasion. CDCA and DCA destabilised HIF-1α in all cells and significantly suppressed key phenotypes associated with cancer progression: clonogenic growth, and invasion and migration in DU-145 cells. CONCLUSIONS: These findings suggest previously unobserved roles for bile acids as physiologically relevant molecules targeting hypoxic tumour progression.
Abstract:
People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by large numbers of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems must be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models are expensive to estimate, and we address this challenge with innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into common optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
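A minimal sketch of the dynamic-programming step that makes such models costly: on a small acyclic network, the expected downstream utility V(s) satisfies a logsum recursion, and arc choice probabilities follow from it (the recursive-logit idea). The network and utilities below are invented for illustration.

```python
import numpy as np

# Toy acyclic network: arcs[s] lists (next_state, instantaneous utility).
arcs = {0: [(1, -1.0), (2, -1.5)], 1: [(3, -1.0)], 2: [(3, -0.5)], 3: []}
dest = 3

# Logsum value recursion: V(s) = log sum_a exp(u(s, a) + V(s_a)).
V = {s: 0.0 if s == dest else -np.inf for s in arcs}
for s in sorted(arcs, reverse=True):      # reverse order: one sweep suffices here
    if s != dest and arcs[s]:
        V[s] = np.log(sum(np.exp(u + V[t]) for t, u in arcs[s]))

# Logit arc probabilities out of state s: P(s -> t) = exp(u + V(t) - V(s)).
probs = {t: np.exp(u + V[t] - V[0]) for t, u in arcs[0]}
print(V, probs)   # probs sums to 1
```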
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, the acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which in turn are correlated with molecular descriptors. The primary goals of this thesis are: 1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; 2) to validate the models by using internal and external cross-validation techniques; and 3) to quantify the model uncertainties through Taylor series and Monte Carlo simulation methods. One important way to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: 1) Linear or Multi-Linear Regression (MLR); 2) Partial Least Squares (PLS); and 3) Principal Component Regression (PCR). In QSAR analysis, a critical step is model validation, after QSAR models are established and before they are applied to toxicity prediction. The DBPs studied cover five chemical classes, including chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups of DBP chemicals (i.e., chloro-alkanes and aromatic compounds with a nitro or cyano group) to three types of organisms (fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: 1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; 2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of QSAR models, but Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; 3) ELUMO correlates strongly with NCl for several classes of DBPs; and 4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models is contributed mostly by NCl for all DBP classes.
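A minimal sketch of the validation workflow described above, assuming a multi-linear regression of ELUMO on NCl, NC and EHOMO; the file name, column names and scoring choices are placeholders rather than details from the thesis.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

df = pd.read_csv("dbp_descriptors.csv")        # hypothetical descriptor table
X, y = df[["NCl", "NC", "EHOMO"]], df["ELUMO"]

model = LinearRegression()                     # the MLR variant
# Leave-One-Out: per-sample squared errors (R^2 is undefined on one point)
loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
# Leave-Many-Out / K-fold: predictive R^2 across held-out folds
kfold_r2 = cross_val_score(model, X, y,
                           cv=KFold(5, shuffle=True, random_state=0)).mean()
print(loo_mse, kfold_r2)
```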
Abstract:
Accurate age models are a tool of utmost importance in paleoclimatology. Constraining the rate and pace of past climate change is at the core of paleoclimate research, as such knowledge is crucial to our understanding of the climate system: it allows the various drivers of climate change to be disentangled. The scarcity of highly resolved sedimentary records from the middle Eocene (Lutetian - Bartonian stages; 47.8 - 37.8 Ma) has led to the existence of the "Eocene astronomical time scale gap" and hindered the establishment of a comprehensive astronomical time scale (ATS) for the entire Cenozoic. Sediments from the Newfoundland Ridge drilled during Integrated Ocean Drilling Program (IODP) Expedition 342 span the Eocene gap at an unprecedented stratigraphic resolution with carbonate-bearing sediments. Moreover, these sediments exhibit cyclic lithological changes that allow for an astronomical calibration of geologic time. In this study, we use the dominant obliquity imprint in XRF-derived calcium-iron ratio (Ca/Fe) series from three sites drilled during IODP Expedition 342 (U1408, U1409, U1410) to construct a floating astrochronology. We then anchor this chronology to numerical geological time by tuning 173-kyr cycles in the amplitude modulation pattern of obliquity to an astronomical solution. This study is one of the first to use the 173-kyr obliquity amplitude cycle for astrochronologic purposes; previous studies have primarily used the 405-kyr long eccentricity cycle as a tuning target to calibrate the Paleogene geologic time scale. We demonstrate that the 173-kyr cycles in obliquity's amplitude are stable between 40 and 50 Ma, which means that the 173-kyr cycle can be used for astrochronologic calibration in the Eocene. Our tuning provides new age estimates for magnetochron reversals C18n.1n - C21r and a stratigraphic framework for key Expedition 342 sites in the Eocene. Some disagreements emerge when we compare our tuning for the interval between C19r and C20r with previous tuning attempts from the South Atlantic. We therefore present a revision of the original astronomical interpretations for the latter records, so that the various astrochronologic age models for the middle Eocene in the North and South Atlantic are consistent.
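A minimal sketch of the amplitude-demodulation step implied above: isolate the ~41-kyr obliquity band in a Ca/Fe series and take its Hilbert envelope, whose ~173-kyr beat serves as the tuning target. The series below is synthetic and the band edges are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

dt = 1.0                                 # sample spacing after age modelling, kyr
t = np.arange(0, 4000, dt)
# Synthetic Ca/Fe stand-in: a 41-kyr cycle amplitude-modulated at 173 kyr
ca_fe = (1 + 0.4 * np.sin(2 * np.pi * t / 173)) * np.sin(2 * np.pi * t / 41)

# Band-pass around the obliquity band (assumed edges: 33-55 kyr periods)
b, a = butter(4, [1 / 55, 1 / 33], btype="bandpass", fs=1 / dt)
obliquity_band = filtfilt(b, a, ca_fe)
envelope = np.abs(hilbert(obliquity_band))   # carries the ~173-kyr modulation
```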
Abstract:
We consider how three firms compete in a Salop location model and how cooperation in location choice by two of these firms affects the outcomes. We consider the classical case of linear transportation costs as a two-stage game, in which the firms first select a location on a unit circle along which consumers are dispersed evenly, followed by the competitive selection of a price. Standard analysis restricts itself to purely competitive selection of location; instead, we focus on the situation in which two firms collectively decide about location, but price their products competitively after the location choice has been effectuated. We show that such partial coordination of location is beneficial to all firms, since it reduces the number of equilibria significantly and, thereby, the resulting coordination problem. Subsequently, we show that the case of quadratic transportation costs changes the main conclusions only marginally.
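For reference, the delivered-price comparison that drives demand in this setup, written in standard Salop notation (the notation is ours, not quoted from the paper): a consumer at point x buys from the firm minimizing

```latex
\[
  \underbrace{p_i + t\,d(x,\ell_i)}_{\text{linear costs}}
  \qquad\text{or}\qquad
  \underbrace{p_i + t\,d(x,\ell_i)^2}_{\text{quadratic costs}},
\]
```

where \ell_i is firm i's location on the unit circle, p_i its price, t the transport cost rate, and d(x, \ell_i) the arc distance between consumer and firm.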
Abstract:
When transporting timber from the forest to the mills, many unforeseen events can occur that disrupt the planned trips (for example, because of weather conditions, forest fires, the arrival of new loads, etc.). When such events only become known during a trip, the truck making that trip must be diverted to an alternative route. Without information about such a route, the driver is liable to choose an alternative route that is needlessly long or, worse, one that is itself "closed" because of an unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggestions of alternative routes when a planned road turns out to be impassable. The recourse options available when the unexpected occurs depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transport management policy. We present three articles addressing different application contexts, together with models and solution methods adapted to each context. In the first article, truck drivers have the entire weekly plan for the current week. In this context, every effort must be made to minimize changes to the initial plan. Although the truck fleet is homogeneous, there is a priority order among drivers: those with the highest priority receive the largest workloads, and minimizing changes to their plans is also a priority. Since the consequences of unforeseen events on the transportation plan are essentially cancellations and/or delays of certain trips, the proposed approach first handles the cancellation and delay of a single trip and is then generalized to handle more complex events. In this approach, we try to reschedule the affected trips within the same week so that a loader is free when the truck arrives both at the forest site and at the mill. In this way, the trips of the other trucks are not modified. This approach provides dispatchers with alternative plans within seconds. Better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan. In the second article, we consider a context where trips are communicated to drivers one at a time: the dispatcher waits until the driver finishes a trip before revealing the next one. This context is more flexible and offers more recourse options when the unexpected occurs. Moreover, the weekly problem can be divided into daily problems, since demand is daily and the mills are open for limited periods during the day. We use a mathematical programming model based on a space-time network to react to disruptions. Although disruptions can affect the initial transportation plan in different ways, a key feature of the proposed model is that it remains valid for handling any unforeseen event, whatever its nature: the impact of these events is captured in the space-time network and in the input parameters rather than in the model itself. The model is solved for the current day each time an unforeseen event is revealed.
In the last article, the truck fleet is heterogeneous and includes trucks with on-board loaders. The route structure of these trucks differs from that of regular trucks, because they do not have to be synchronized with the loaders. We use a mathematical model in which columns can be easily and naturally interpreted as truck routes, and we solve this model using column generation. First, we relax the integrality of the decision variables and consider only a subset of the feasible routes; routes with the potential to improve the current solution are added to the model iteratively. A space-time network is used both to represent the impacts of unforeseen events and to generate these routes. The solution obtained is generally fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forest industry, and numerical results are presented for all three contexts.
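An illustrative sketch (not the thesis model) of the space-time network idea used in the second and third articles: nodes are (location, time) pairs, a disruption is captured by deleting arcs from the network rather than changing the model, and a cheapest remaining path gives the recourse trip. Locations, horizons and costs are invented.

```python
import networkx as nx

G = nx.DiGraph()
for t in range(8):
    G.add_edge(("forest", t), ("mill", t + 2), weight=5)    # main road
    G.add_edge(("forest", t), ("mill", t + 3), weight=7)    # longer detour
    G.add_edge(("forest", t), ("forest", t + 1), weight=1)  # wait at the site

# Unforeseen event: the main road is closed for departures at t = 2..5;
# the disruption lives in the network data, not in the routing model.
for t in range(2, 6):
    G.remove_edge(("forest", t), ("mill", t + 2))

recourse = nx.shortest_path(G, ("forest", 2), ("mill", 7), weight="weight")
print(recourse)   # wait, then take the detour
```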
Abstract:
Influencing more environmentally friendly and sustainable behaviour is a current focus of many projects, ranging from government social marketing campaigns, education and tax structures to designers’ work on interactive products, services and environments. There is a wide variety of techniques and methods used, intended to work via different sets of cognitive and environmental principles. These approaches make different assumptions about ‘what people are like’: how users will respond to behavioural interventions, and why, and in the process reveal some of the assumptions that designers and other stakeholders, such as clients commissioning a project, make about human nature. This paper discusses three simple models of user behaviour – the pinball, the shortcut and the thoughtful – which emerge from user experience designers’ statements about users while focused on designing for behaviour change. The models are characterised using systems terminology and the application of each model to design for sustainable behaviour is examined via a series of examples.
Abstract:
The highly dynamic nature of some sandy shores, with continuous morphological changes, requires the development of efficient and accurate methodological strategies for coastal hazard assessment and morphodynamic characterisation. During the past decades, the general methodological approach for establishing coastal monitoring programmes was based on photogrammetry or classical geodetic techniques. With the advent of new space-based and airborne geodetic techniques, new methodologies were introduced into coastal monitoring programmes. This paper describes the development of a monitoring prototype based on the global positioning system (GPS). The prototype has a multi-antenna GPS mounted on a fast surveying platform, a land vehicle appropriate for driving on sand (a four-wheel quad). The system was conceived to survey a network of shore profiles along stretches of sandy shore (subaerial beach) extending for several kilometres, from which high-precision digital elevation models can be generated. An analysis of the accuracy and precision of several differential GPS kinematic methodologies is presented. The development of an adequate survey methodology is the first step in morphodynamic shore characterisation or in coastal hazard assessment. The sampling method and the computational interpolation procedures are important steps in producing reliable three-dimensional surface maps that are as true to reality as possible. The quality of several interpolation methods used to generate grids was tested in areas with data gaps. The results obtained allow us to conclude that, with the developed survey methodology, it is possible to survey stretches of sandy shore at spatial scales of kilometres with a vertical accuracy better than 0.10 m in the final digital elevation models.
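A minimal sketch of the kind of interpolation comparison described above: several gridding methods evaluated on points withheld to mimic data gaps. The synthetic points stand in for the GPS shore profiles; the surface and the methods compared are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, (500, 2))                         # x, y in metres
z = 2.0 + 0.01 * pts[:, 0] + 0.5 * np.sin(pts[:, 1] / 50.0)  # toy elevations

train_pts, test_pts = pts[:450], pts[450:]                   # withhold 50 points
for method in ("nearest", "linear", "cubic"):
    z_hat = griddata(train_pts, z[:450], test_pts, method=method)
    err = np.nanmean(np.abs(z_hat - z[450:]))   # NaN outside the convex hull
    print(method, float(err))
```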
Abstract:
One of the important problems in machine learning is determining the complexity of the model to be learned. Too much complexity leads to overfitting, which amounts to finding structures that do not actually exist in the data, while too little complexity leads to underfitting, meaning the model is not expressive enough to capture all the structure present in the data. For some probabilistic models, model complexity takes the form of one or more latent variables whose role is to explain the generative process of the data. Various approaches exist for identifying the appropriate number of latent variables in a model. This thesis focuses on Bayesian nonparametric methods for determining the number of latent variables to use as well as their dimensionality. The popularization of Bayesian nonparametric statistics within the machine learning community is fairly recent. Their main appeal is that they provide highly flexible models whose complexity adjusts in proportion to the amount of available data. In recent years, research on Bayesian nonparametric learning methods has focused on three main aspects: the construction of new models, the development of inference algorithms, and applications. This thesis presents our contributions to these three research topics in the context of learning latent variable models. First, we introduce the Pitman-Yor process mixture of Gaussians, a model for learning infinite mixtures of Gaussians. We also present an inference algorithm for discovering the hidden components of the model, which we evaluate on two concrete robotics applications. Our results show that the proposed approach outperforms classical learning approaches in both performance and flexibility. Second, we propose the extended cascading Indian buffet process, a model serving as a prior probability distribution over the space of directed acyclic graphs. In the context of Bayesian networks, this prior makes it possible to identify both the presence of latent variables and the network structure among them. A Markov chain Monte Carlo inference algorithm is used for evaluation on structure identification and density estimation problems. Finally, we propose the Indian chefs process, a model more general than the extended cascading Indian buffet process for learning graphs and orders. The advantage of the new model is that it allows connections among observable variables and takes variable order into account. We present a reversible-jump Markov chain Monte Carlo inference algorithm for jointly learning graphs and orders. Evaluation is carried out on density estimation and independence testing problems. This model is the first Bayesian nonparametric model capable of learning Bayesian networks with a completely arbitrary structure.
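A minimal sketch of the Pitman-Yor "restaurant" rule underlying such infinite mixtures, with illustrative parameters (not the thesis settings): the (i+1)-th observation joins existing component k with probability proportional to n_k - d and opens a new component with probability proportional to a + dK, so the number of components grows with the data.

```python
import random

def pitman_yor_assignments(n, a=1.0, d=0.5, seed=0):
    """Simulate component sizes under a Pitman-Yor process prior."""
    rng = random.Random(seed)
    counts = []                      # customers per table (per component)
    for i in range(n):
        new_p = (a + d * len(counts)) / (i + a)
        r = rng.random()
        if r < new_p or not counts:
            counts.append(1)         # open a new component
        else:
            # choose an existing table with probability (n_k - d) / (i + a)
            r -= new_p
            acc = 0.0
            for k, nk in enumerate(counts):
                acc += (nk - d) / (i + a)
                if r < acc:
                    counts[k] += 1
                    break
            else:
                counts[-1] += 1      # numerical fallback
    return counts

print(pitman_yor_assignments(1000))  # heavy-tailed component sizes
```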
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Too often, validation of computer models is treated as a "once and forget" task. In this paper a systematic and graduated approach to evacuation model validation is suggested. This involves (i) component testing, (ii) functional validation, (iii) qualitative validation and (iv) quantitative validation. Viewed in this manner, validation is an ongoing activity and an integral part of the life cycle of the software. While the first three components of the validation protocol pose little or no significant difficulty, quantitative validation poses a number of challenges, the most significant being a shortage of suitable experimental data. Finally, the validation protocol used in the development of the EXODUS suite of evacuation models is examined.
Abstract:
We tested the prediction that, if hoverflies are Batesian mimics, this may extend to behavioral mimicry such that their numerical abundance at each hour of the day (the daily activity pattern) is related to the numbers of their hymenopteran models. After accounting for site, season, microclimatic responses and general hoverfly abundance at three sites in north-west England, the residual numbers of mimics were significantly positively correlated with the numbers of their models in 9 of 17 cases, and 16 of the 17 relationships were positive, itself a highly significant non-random pattern. Several eristaline flies showed significant relationships with honeybees even though some of them mimic wasps or bumblebees, perhaps reflecting an ancestral resemblance to honeybees. There was no evidence that good and poor mimics differed in their daily activity pattern relationships with models. However, common mimics showed significant activity-pattern relationships with their models, whereas rarer mimics did not. We conclude that many hoverflies show behavioral mimicry of their hymenopteran models.