866 results for pattern-mixture model
Abstract:
We develop a new autoregressive conditional process to capture both the changes and the persistence of the intraday seasonal (U-shape) pattern of volatility in essay 1. Unlike other procedures, this approach allows the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests show that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses confirm the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into an information arrival component and a noise factor component. This decomposition methodology differs from previous studies in that both the informational variance and the noise variance are time-varying. Furthermore, the covariance of the informational component and the noise component is no longer restricted to be zero. The resulting measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, based on the procedure developed in the first essay. The resulting seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
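As a minimal numerical sketch of the informativeness measure defined above (not the essays' estimation procedure; it assumes returns are the sum of an informational and a noise component, so the total variance includes the unrestricted covariance term, and all simulated paths below are illustrative):

    # Minimal sketch: informativeness = informational variance / total return variance,
    # with time-varying components and an unrestricted covariance term.
    # All inputs below are simulated placeholders, not estimates from the essays.
    import numpy as np

    T = 390                                            # e.g. one-minute returns in a day (illustrative)
    t = np.linspace(0.0, np.pi, T)
    var_info = 0.8 + 0.4 * np.sin(t)                   # time-varying informational variance
    var_noise = 0.5 + 0.3 * np.cos(t)                  # time-varying noise variance
    cov_in = 0.05 * np.sin(4.0 * t)                    # covariance not restricted to zero

    var_total = var_info + var_noise + 2.0 * cov_in    # Var(r) = s_info^2 + s_noise^2 + 2*cov
    informativeness = var_info / var_total
    print(round(float(informativeness.mean()), 3))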
Abstract:
Conceptual database design is an unusually difficult and error-prone task for novice designers. This study examined how two training approaches, rule-based and pattern-based, might improve performance on database design tasks. A rule-based approach prescribes a sequence of rules for modeling conceptual constructs and the actions to be taken at various stages while developing a conceptual model. A pattern-based approach presents data modeling structures that occur frequently in practice and prescribes guidelines on how to recognize and use these structures. This study describes the conceptual framework, experimental design, and results of a laboratory experiment that employed novice designers to compare the effectiveness of the two training approaches (between-subjects) at three levels of task complexity (within-subjects). Results indicate an interaction effect between treatment and task complexity. The rule-based approach was significantly better in the low-complexity and high-complexity cases; there was no statistical difference in the medium-complexity case. Designer performance fell significantly as complexity increased. Overall, though the rule-based approach was not significantly superior to the pattern-based approach in all instances, it outperformed the pattern-based approach at two of three complexity levels. The primary contributions of the study are (1) the operationalization of the complexity construct to a degree not addressed in previous studies; (2) the development of a pattern-based instructional approach to database design; and (3) the finding that the effectiveness of a particular training approach may depend on the complexity of the task.
Abstract:
Hydrogeologic variables controlling groundwater exchange with inflow and flow-through lakes were simulated using a three-dimensional numerical model (MODFLOW) to investigate and quantify spatial patterns of lake bed seepage and hydraulic head distributions in the porous medium surrounding the lakes. The total annual inflow and outflow were also calculated as a percentage of lake volume for the flow-through lake simulations. The general exponential decline of seepage rates with distance offshore was best demonstrated at lower anisotropy ratios (Kh/Kv = 1 and 10), with increasing deviation from the exponential pattern as anisotropy was increased to 100 and 1000. Two-dimensional vertical section models constructed for comparison with the 3-D models showed that groundwater heads and seepage rates were higher in the 3-D simulations. Adding low-conductivity lake sediments decreased seepage rates nearshore and increased seepage rates offshore in inflow lakes, and increased the area of groundwater inseepage on the beds of flow-through lakes. Introducing heterogeneity into the medium lowered the water table and decreased seepage rates nearshore, and increased seepage rates offshore in inflow lakes. A laterally restricted aquifer located on the downgradient side of the flow-through lake increased the area of outseepage. Recharge rate, lake depth, and lake bed slope had relatively little effect on the spatial patterns of seepage rates and groundwater exchange with lakes.
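The study's model configuration is not given here. Purely as an illustration of where the anisotropy ratio (Kh/Kv) and recharge enter such a simulation, below is a skeletal MODFLOW-2005 setup written with the FloPy package; all grid dimensions and parameter values are placeholders, not the values used in the study.

    # Skeletal MODFLOW-2005 model via FloPy (illustrative values only): the anisotropy
    # ratio enters through horizontal (hk) and vertical (vka) hydraulic conductivity.
    import numpy as np
    import flopy

    kh = 10.0                      # horizontal hydraulic conductivity (placeholder)
    anisotropy_ratio = 100.0       # Kh/Kv, one of the ratios explored above
    kv = kh / anisotropy_ratio

    mf = flopy.modflow.Modflow("lake_sketch", exe_name="mf2005")
    dis = flopy.modflow.ModflowDis(mf, nlay=10, nrow=50, ncol=50,
                                   delr=50.0, delc=50.0, top=0.0,
                                   botm=np.linspace(-5.0, -50.0, 10))
    bas = flopy.modflow.ModflowBas(mf, ibound=1, strt=0.0)
    lpf = flopy.modflow.ModflowLpf(mf, hk=kh, vka=kv)      # vka read as vertical K by default
    rch = flopy.modflow.ModflowRch(mf, rech=5e-4)          # recharge rate (placeholder)
    pcg = flopy.modflow.ModflowPcg(mf)
    oc = flopy.modflow.ModflowOc(mf)

    mf.write_input()               # running requires a MODFLOW-2005 executable on the path

Lake cells and low-conductivity bed sediments would be added on top of this skeleton, which is where the nearshore/offshore seepage patterns discussed above arise.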
Abstract:
El Niño and the Southern Oscillation (ENSO) is a cycle that is initiated in the equatorial Pacific Ocean and is recognized on interannual timescales by oscillating patterns in tropical Pacific sea surface temperatures (SSTs) and atmospheric circulation. Using correlation and regression analysis of datasets that include SSTs and other interdependent variables, including precipitation, surface winds, and sea level pressure, this research seeks to quantify recent changes in ENSO behavior. Specifically, the amplitude, frequency of occurrence, and spatial characteristics (i.e., events with maximum amplitude in the Central Pacific versus the Eastern Pacific) are investigated. The research is based on the question: “Are the statistics of ENSO changing due to increasing greenhouse gas concentrations?” Our hypothesis is that present-day changes in the amplitude, frequency, and spatial characteristics of ENSO are determined by the natural variability of the ocean-atmosphere climate system, not by the observed changes in radiative forcing due to changing greenhouse gas concentrations. Statistical analysis, including correlation and regression analysis, is performed on observational ocean and atmospheric datasets available from the National Oceanic and Atmospheric Administration (NOAA) and the National Center for Atmospheric Research (NCAR), and on coupled model simulations from the Coupled Model Intercomparison Project, phase 5 (CMIP5). Datasets are analyzed with a particular focus on ENSO over the last thirty years. Understanding the observed changes in the ENSO phenomenon over recent decades has worldwide significance. ENSO is the largest climate signal on timescales of 2-7 years and affects billions of people via atmospheric teleconnections that originate in the tropical Pacific. These teleconnections explain why changes in ENSO can lead to climate variations in areas including North and South America, Asia, and Australia. For the United States, El Niño events are linked to a decreased number of hurricanes in the Atlantic basin, reduced precipitation in the Pacific Northwest, and increased precipitation throughout the southern United States during winter months. Understanding variability in the amplitude, frequency, and spatial characteristics of ENSO is crucial for decision makers who must adapt where regional ecology and agriculture are affected by ENSO.
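As a rough sketch of the index-based correlation/regression analysis described (a synthetic SST-anomaly array stands in for the NOAA, NCAR, or CMIP5 data; the grid, the Niño-3.4-like box, and all values are illustrative):

    # Rough sketch: regress gridded SST anomalies onto an ENSO index to obtain a
    # spatial regression/correlation pattern. The data below are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    ntime, nlat, nlon = 360, 30, 120                  # 30 years of monthly fields (synthetic)
    sst_anom = rng.standard_normal((ntime, nlat, nlon))

    # ENSO index: area-average anomaly over a Nino-3.4-like box (indices illustrative)
    nino34 = sst_anom[:, 10:20, 40:80].mean(axis=(1, 2))
    idx = (nino34 - nino34.mean()) / nino34.std()

    # Regression map (slope per unit index) and correlation map at every grid point
    reg_map = np.tensordot(idx, sst_anom, axes=(0, 0)) / ntime
    corr_map = reg_map / sst_anom.std(axis=0)
    print(reg_map.shape, float(np.abs(corr_map).max()))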
Abstract:
Physical activity improves health, yet only 4.8% of Canadians reach the recommended level. Socioeconomic position is one of the most important determinants of physical activity; it is cross-sectionally associated with physical activity in adolescence and adulthood. This thesis sought to determine whether there is a long-term association between socioeconomic position early in the life course and physical activity in adulthood and, if so, to determine which theoretical model from life-course epidemiology best describes its form. The thesis comprises three articles: a systematic review and two original studies. In the systematic review, Medline and EMBASE were searched for studies that measured socioeconomic position before age 18 and physical activity at age 18 or older. In the two original studies, structural equation modeling was used to compare three alternative life-course models: accumulation of risk with additive effects, accumulation of risk with a trigger effect, and critical period. These models were compared in two nationally representative prospective cohorts: the 1970 British birth cohort (n=16,571; first study) and the National Longitudinal Survey of Children and Youth (n=16,903; second study). In the systematic review, 10,619 articles were screened by two independent reviewers and 42 were retained. For the outcome "physical activity" (all types and measures combined), a significant association with childhood socioeconomic position was found in 26/42 studies (61.9%). When only leisure-time physical activity was considered, a significant association was found in 21/31 studies (67.7%). In a subsample of 21 methodologically stronger studies, the proportions of studies finding an association were higher: 15/21 (71.4%) for all types and measures of physical activity and 12/15 (80%) for leisure-time physical activity only. In our first original study, using data from the British birth cohort, we found that for social class the accumulation-of-risk model with additive effects fit best among both men and women for leisure-time, occupational, and transport-related physical activity. In our second original study, using the Canadian data on leisure-time physical activity, we found that among men the critical-period model fit the data best for education and income, whereas among women the accumulation-of-risk model with additive effects fit best for income, and education did not fit any of the models tested. In conclusion, our systematic review indicates that socioeconomic position early in the life course is associated with physical activity in adulthood. The results of our two original studies suggest a pattern of associations best represented by the accumulation-of-risk model with additive effects.
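For orientation, the competing life-course models are often written as nested specifications relating socioeconomic position (SEP) measured at successive life stages to adult physical activity (PA); the exact parameterisation used in the thesis's structural equation models may differ from this sketch:

    % Illustrative nested specifications (SEP_1: childhood, SEP_2: adolescence, SEP_3: adulthood)
    \text{Accumulation, additive effects:}\quad PA = \beta_0 + \beta_1\,SEP_1 + \beta_2\,SEP_2 + \beta_3\,SEP_3 + \varepsilon
    \text{Critical period (early exposure only):}\quad PA = \beta_0 + \beta_1\,SEP_1 + \varepsilon
    \text{Accumulation with trigger effect:}\quad \text{the additive model plus interaction terms, e.g. } \beta_4\,(SEP_1 \times SEP_3)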
Abstract:
Diffuse intrinsic pontine glioma (DIPG) is a rare and incurable brain tumor that arises predominantly in children and involves the pons, a structure that, along with the midbrain and medulla, makes up the brainstem. We previously developed genetically engineered mouse models of brainstem glioma using the RCAS/Tv-a system by targeting PDGF-B overexpression, p53 loss, and the H3.3K27M mutation to Nestin-expressing brainstem progenitor cells of the neonatal mouse. Here we describe a novel mouse model targeting these same genetic alterations to Pax3-expressing cells, which in the neonatal mouse pons consist of a Pax3+/Nestin+/Sox2+ population lining the fourth ventricle and a Pax3+/NeuN+ parenchymal population. Injection of RCAS-PDGF-B into the brainstem of Pax3-Tv-a mice at postnatal day 3 results in 40% of mice developing asymptomatic low-grade glioma. A mixture of low- and high-grade gliomas results from injection of Pax3-Tv-a;p53(fl/fl) mice with RCAS-PDGF-B and RCAS-Cre, with or without RCAS-H3.3K27M. These tumors are Ki67+, Nestin+, Olig2+, and largely GFAP-, and can arise anywhere within the brainstem, including the classic DIPG location of the ventral pons. Expression of the H3.3K27M mutation reduces overall H3K27me3 compared with tumors without the mutation, similar to what has previously been shown in human and mouse tumors. Thus, we have generated a novel genetically engineered mouse model of DIPG that faithfully recapitulates the human disease and represents a novel platform with which to study the biology and treatment of this deadly disease.
Abstract:
Continuous high-resolution mass accumulation rates (MAR) and X-ray fluorescence (XRF) measurements from marine sediment records in the Bay of Biscay (NE Atlantic) have allowed the determination of the timing and amplitude of the 'Fleuve Manche' (Channel River) discharges during glacial stages MIS 10, MIS 8, MIS 6 and MIS 4-2. These results have yielded detailed insight into the Middle and Late Pleistocene glaciations in Europe and the drainage network of the western and central European rivers over the last 350 kyr. This study provides clear evidence that the 'Fleuve Manche' connected the southern North Sea basin with the Bay of Biscay during each glacial period and reveals that 'Fleuve Manche' activity during glaciations MIS 10 and MIS 8 was significantly less than during MIS 6 and MIS 2. We correlate the significant 'Fleuve Manche' activity detected during MIS 6 and MIS 2 with the extensive Saalian (Drenthe Substage) and Weichselian glaciations, respectively, confirming that the major Elsterian glaciation precedes glacial stage MIS 10. In detail, massive 'Fleuve Manche' discharges occurred at ca 155 ka (mid-MIS 6) and during Termination I, while no significant discharges are found during Termination II. It is assumed that a substantial retreat of the European ice sheet at ca 155 ka, followed by ice-free conditions between the British Isles and Scandinavia until Termination II, allowed meltwater to flow northwards through the North Sea basin during the second part of MIS 6. We assume that this glacial pattern corresponds to the Warthe Substage glacial maximum, indicating that the data presented here equate to the Drenthe and Warthe glacial advances at ca 175-160 ka and ca 150-140 ka, respectively. Finally, the correlation of our records with ODP site 980 reveals that massive 'Fleuve Manche' discharges, related to partial or complete melting of the European ice masses, were synchronous with strong decreases in both the rate of deep-water formation and the strength of the Atlantic thermohaline circulation. 'Fleuve Manche' discharges over the last 350 kyr probably contributed, along with other meltwater sources, to the collapse of the thermohaline circulation by freshening the northern Atlantic surface waters.
Abstract:
The Canary Basin lies in a region of strong interaction between the atmospheric and oceanic circulation systems: trade winds drive seasonal coastal upwelling, and dust storm outbreaks from the neighbouring Sahara desert are the major source of terrigenous sediment. To investigate the forcing mechanisms for dust input and wind strength in the North Canary Basin, the temporal pattern of variability of sedimentological and geochemical proxy records has been analysed in two sediment cores located between latitudes 30°30'N and 31°40'N. Spectral analysis of the dust proxy records indicates that insolation changes related to eccentricity and precession account for the main periods of temporal variation in the record. Si/Al ratios and the grain size of the terrigenous fraction increase at glacial-interglacial transitions, while Al concentration and the Fe/Al ratio are both in phase with minima in the precessional index. Hence, the results show that wind strength intensified at Terminations. At times of maxima in Northern Hemisphere seasonal insolation, when the African monsoon was enhanced, the North Canary Basin also received higher dust input. This result suggests that the moisture brought by the monsoon may have increased the availability of dust in the source region.
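As a hedged sketch of the spectral analysis step (a synthetic, evenly sampled record stands in for the core data, which in practice would first be interpolated onto a uniform age scale; amplitudes and noise level are arbitrary):

    # Sketch: recover orbital periodicities (eccentricity ~100 kyr, precession ~23 kyr)
    # from an evenly sampled proxy record with a periodogram. The record is synthetic.
    import numpy as np
    from scipy.signal import periodogram

    dt = 1.0                                          # 1 kyr sampling step (illustrative)
    age = np.arange(0.0, 350.0, dt)                   # 0-350 ka age model
    rng = np.random.default_rng(2)
    proxy = (np.cos(2 * np.pi * age / 100.0)          # eccentricity-like component
             + 0.5 * np.cos(2 * np.pi * age / 23.0)   # precession-like component
             + 0.3 * rng.standard_normal(age.size))   # noise

    freq, power = periodogram(proxy, fs=1.0 / dt)
    periods = 1.0 / freq[1:]                          # drop the zero frequency
    strongest = periods[np.argsort(power[1:])[::-1][:3]]
    print("strongest periods (kyr):", np.round(strongest, 1))   # near ~100 and ~23 kyr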
Abstract:
Acquired resistance to selective FLT3 inhibitors is an emerging clinical problem in the treatment of FLT3-ITD(+) acute myeloid leukaemia (AML). The paucity of valid pre-clinical models has restricted investigations into the mechanisms of acquired therapeutic resistance, thereby limiting the development of effective treatments. We generated selective FLT3 inhibitor-resistant cells by treating the FLT3-ITD(+) human AML cell line MOLM-13 in vitro with the FLT3-selective inhibitor MLN518, and validated the resistant phenotype in vivo and in vitro. The resistant cells, MOLM-13-RES, harboured a new D835Y tyrosine kinase domain (TKD) mutation on the FLT3-ITD(+) allele. Acquired TKD mutations, including D835Y, have recently been identified in FLT3-ITD(+) patients relapsing after treatment with the novel FLT3 inhibitor AC220. Consistent with this clinical pattern of resistance, MOLM-13-RES cells displayed high relative resistance to AC220 and sorafenib. Furthermore, treatment of MOLM-13-RES cells with AC220 led to loss of the FLT3 wild-type allele and duplication of the FLT3-ITD-D835Y allele. Our FLT3-Aurora kinase inhibitor, CCT137690, successfully inhibited growth of FLT3-ITD-D835Y cells in vitro and in vivo, suggesting that dual FLT3-Aurora inhibition may overcome selective FLT3 inhibitor resistance, in part through inhibition of Aurora kinase, and may benefit patients with FLT3-mutated AML.
Abstract:
A large eddy simulation is performed to study the deflagration-to-detonation transition phenomenon in an obstructed channel containing a premixed stoichiometric hydrogen–air mixture. Two-dimensional filtered reactive Navier–Stokes equations are solved utilizing the artificially thickened flame (ATF) approach for modeling sub-grid scale combustion. To include the effect of induction time, a 27-step detailed mechanism is utilized along with an in situ adaptive tabulation (ISAT) method to reduce the computational cost of the detailed chemistry. The results show that in the slow flame propagation regime, flame–vortex interaction and the resulting flame folding and wrinkling are the main mechanisms for the increase of the flame surface and, consequently, the acceleration of the flame. At high speed, the major mechanisms responsible for flame propagation are repeated reflected shock–flame interactions and the resulting baroclinic vorticity. These interactions intensify the rate of heat release and maintain the turbulence and flame speed at a high level. During the flame acceleration, the turbulent flame enters the 'thickened reaction zones' regime. It is therefore necessary to utilize a chemistry-based combustion model with detailed chemical kinetics to properly capture the salient features of fast deflagration propagation.
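For reference, the scaling behind the artificially thickened flame approach (written here in its basic single-step form; the sub-grid efficiency function and the 27-step chemistry used above are omitted) is:

    S_L \propto \sqrt{D\,\dot{\omega}}, \qquad \delta_L \propto \frac{D}{S_L};
    \qquad D \to F\,D,\quad \dot{\omega} \to \frac{\dot{\omega}}{F}
    \;\Longrightarrow\; S_L \to S_L, \quad \delta_L \to F\,\delta_L .

Multiplying the diffusivity by the thickening factor F while dividing the reaction rate by F leaves the laminar flame speed unchanged but thickens the flame so it can be resolved on the LES grid.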
Abstract:
One of the simplest models of adaptation to a new environment is Fisher's Geometric Model (FGM), in which populations move on a multidimensional landscape defined by the traits under selection. The predictions of this model have been found to be consistent with current observations of patterns of fitness increase in experimentally evolved populations. Recent studies investigated the dynamics of allele frequency change along adaptation of microbes to simple laboratory conditions and unveiled a dramatic pattern of competition between cohorts of mutations, i.e., multiple mutations simultaneously segregating and ultimately reaching fixation. Here, using simulations, we study the dynamics of phenotypic and genetic change as asexual populations under clonal interference climb a Fisherian landscape, and ask about the conditions under which FGM can display the simultaneous increase and fixation of multiple mutations (mutation cohorts) along the adaptive walk. We find that FGM under clonal interference, and with varying levels of pleiotropy, can reproduce the experimentally observed competition between different cohorts of mutations, some of which have a high probability of fixation along the adaptive walk. Overall, our results show that the surprising dynamics of mutation cohorts recently observed during experimental adaptation of microbial populations can be expected under one of the oldest and simplest theoretical models of adaptation, FGM.
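A stripped-down sketch of the kind of simulation described, assuming a Wright-Fisher asexual population of genotypes located in an n-dimensional trait space with Gaussian stabilising selection around the optimum; population size, mutation parameters, and dimensionality are arbitrary, and the bookkeeping needed to track individual mutation cohorts is omitted:

    # Sketch of adaptation on Fisher's Geometric Model: genotypes are points in trait
    # space, fitness declines with distance from the optimum at the origin, and a
    # Wright-Fisher step applies selection and drift each generation. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    n_traits, pop_size, mut_rate, mut_sd = 5, 1000, 0.001, 0.1
    generations = 500

    genotypes = {0: np.full(n_traits, 1.0)}   # genotype id -> phenotype (maladapted start)
    counts = {0: pop_size}
    next_id = 1

    def fitness(z):
        return np.exp(-0.5 * np.dot(z, z))    # Gaussian fitness, optimum at the origin

    for gen in range(generations):
        ids = list(counts)
        w = np.array([fitness(genotypes[i]) for i in ids])
        freqs = np.array([counts[i] for i in ids], dtype=float)
        probs = freqs * w
        probs /= probs.sum()
        offspring = rng.multinomial(pop_size, probs)        # selection + drift
        counts = {i: int(c) for i, c in zip(ids, offspring) if c > 0}

        for i in list(counts):                              # mutation in random directions
            n_mut = rng.binomial(counts[i], mut_rate)
            for _ in range(n_mut):
                genotypes[next_id] = genotypes[i] + mut_sd * rng.standard_normal(n_traits)
                counts[i] -= 1
                counts[next_id] = 1
                next_id += 1
            if counts[i] == 0:
                del counts[i]

    best = max(counts, key=lambda i: fitness(genotypes[i]))
    print("segregating genotypes:", len(counts),
          "max fitness:", round(float(fitness(genotypes[best])), 3))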
Abstract:
Adult anchovies in the Bay of Biscay perform a north-to-south migration from late winter to early summer for spawning. However, what triggers and drives the geographic shift of the population remains unclear and poorly understood. An individual-based fish model has been implemented to explore the potential mechanisms that control anchovy movement routes toward its spawning habitats. To achieve this goal, two fish movement behaviors, gradient detection through restricted-area search and kinesis, simulated fish response to its dynamic environment. A bioenergetics model was used to represent individual growth and reproduction along the fish trajectory. The environmental forcing (food, temperature) of the model was provided by a coupled physical–biogeochemical model. We followed a hypothesis-testing strategy to carry out a series of simulations using different cues and computational assumptions. The gradient detection behavior was found to be the most suitable mechanism to recreate the observed shift of anchovy distribution under the combined effect of sea-surface temperature and zooplankton. In addition, our results suggested that southward movement occurred most actively from early April to mid-May, closely following the spatio-temporal evolution of zooplankton and temperature. In terms of fish bioenergetics, individuals that ended up in the southern part of the bay were in better condition based on energy content, suggesting the resulting energy gain as an ecological explanation for this migration. The kinesis approach showed moderate performance, producing the distribution pattern with the highest spread. Finally, model performance was not significantly affected by changes in the starting date, initial fish distribution, or number of particles used in the simulations, whereas it was strongly influenced by the adopted cues.
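A hedged sketch of the gradient-detection (restricted-area search) movement rule, with a static synthetic suitability field standing in for the dynamic temperature and zooplankton forcing and with the bioenergetics module omitted; all sizes and parameters are illustrative:

    # Sketch of gradient detection via restricted-area search: at each step the fish
    # samples the suitability field in a small window around its position and moves
    # toward the best cell (plus a small random component). Synthetic, static field.
    import numpy as np

    rng = np.random.default_rng(4)
    ny, nx = 100, 100
    row, col = np.mgrid[0:ny, 0:nx]
    suitability = np.exp(-((col - 20) ** 2 + (row - 80) ** 2) / (2 * 25.0 ** 2))  # "spawning ground"

    pos = np.array([10, 10])             # start far from the suitability peak (row, col)
    search_radius = 3
    track = [pos.copy()]

    for _ in range(200):
        r0, r1 = max(pos[0] - search_radius, 0), min(pos[0] + search_radius + 1, ny)
        c0, c1 = max(pos[1] - search_radius, 0), min(pos[1] + search_radius + 1, nx)
        window = suitability[r0:r1, c0:c1]
        best = np.unravel_index(np.argmax(window), window.shape)
        step = np.array([r0 + best[0], c0 + best[1]]) - pos
        pos = np.clip(pos + np.sign(step) + rng.integers(-1, 2, size=2), 0, [ny - 1, nx - 1])
        track.append(pos.copy())

    print("start:", track[0], "end (near the peak at [80, 20]):", track[-1])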
Abstract:
The model-driven approach to service evolution in clouds focuses on exploiting reusable evolution patterns to solve evolution problems. In this process, evolution patterns are driven by MDA models and transformed into aspects; weaving these aspects into the service-based process at runtime, using an aspect-oriented extension of a BPEL engine, provides the dynamic character of the evolution.
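The abstract gives no implementation detail. Purely as an analogy (in Python rather than BPEL) for weaving an evolution concern into a running service process, a decorator can wrap a service operation with advice supplied at runtime; the aspect-oriented BPEL engine referred to above performs the corresponding weaving on BPEL activities, and all names below are hypothetical:

    # Hypothetical analogy only: an "aspect" woven around a service invocation at
    # runtime, standing in for the aspect-oriented BPEL weaving described above.
    from functools import wraps

    def weave(aspect):
        """Wrap a service operation with before/after advice supplied at runtime."""
        def decorator(service_op):
            @wraps(service_op)
            def woven(*args, **kwargs):
                aspect.before(service_op.__name__, args, kwargs)
                result = service_op(*args, **kwargs)
                return aspect.after(service_op.__name__, result)
            return woven
        return decorator

    class EvolutionAspect:
        """A reusable 'evolution pattern' rendered as advice (illustrative)."""
        def before(self, op, args, kwargs):
            print(f"[evolution] adapting inputs of {op}")
        def after(self, op, result):
            print(f"[evolution] post-processing result of {op}")
            return result

    @weave(EvolutionAspect())
    def place_order(item, qty):
        return {"item": item, "qty": qty, "status": "accepted"}

    print(place_order("widget", 3))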