Abstract:
With the flow of the Mara River becoming increasingly erratic, especially in the upper reaches, attention has been directed to land use change as the major cause of this problem. The semi-distributed hydrological model Soil and Water Assessment Tool (SWAT) and Landsat imagery were utilized in the upper Mara River Basin in order to: 1) map existing field-scale land use practices and determine their impact; 2) determine the impacts of land use change on water flux; and 3) determine the impacts of rainfall (0%, ±10% and ±20%) and air temperature variations (0% and +5%), based on the Intergovernmental Panel on Climate Change projections, on the water flux of the upper Mara River. This study found that the different scenarios affected the water balance components differently. Land use changes resulted in a slightly more erratic discharge, while rainfall and air temperature changes had a more predictable impact on the discharge and water balance components. These findings demonstrate that the modeled flow was more sensitive to rainfall changes than to land use changes. It was also shown that land use changes can reduce dry season flow, which is the most important problem in the basin. The model also shows that deforestation in the Mau Forest increased peak flows, which can in turn lead to high sediment loading in the Mara River. Assessing the effect of the land use and climate change scenarios on the sediment and water quality of the river requires a thorough understanding of sediment transport processes, in addition to observed sediment and water quality data for validation of modeling results.
Abstract:
Shipboard whole-core squeezing was used to measure pore water concentration vs depth profiles of [NO3]-, O2 and SiO2 at 12 stations in the equatorial Pacific along a transect from 15°S to 11°N at 135°W. The [NO3]- and SiO2 profiles were combined with fine-scale resistivity and porosity measurements to calculate benthic fluxes. After using O2 profiles, coupled with the [NO3]- profiles, to constrain the C:N of the degrading organic matter, the [NO3]- fluxes were converted to benthic organic carbon degradation rates. The range in benthic organic carbon degradation rates is 7-30 µmol cm**-2 y**-1, with maximum values at the equator and minimum values at the southern end of the transect. The zonal trend of benthic degradation rates, with its equatorial maximum and with elevated values skewed to the north of the equator, is similar to the pattern of primary production observed in the region. Benthic organic carbon degradation is 1-2% of primary production. The range of benthic biogenic silica dissolution rates is 6.9-20 µmol cm**-2 y**-1, representing 2.5-5% of silicon fixation in the surface ocean of the region. Its zonal pattern is distinctly different from that of organic carbon degradation: the range in the ratio of silica dissolution to carbon degradation along the transect is 0.44-1.7 mol Si mol C**-1, with maximum values occurring between 12°S and 2°S, and with fairly constant values of 0.5-0.7 north of the equator. A box model calculation of the average lifetime of the organic carbon in the upper 1 cm of the sediments, where 80 ± 11% of benthic organic carbon degradation occurs, indicates that it is short: from 3.1 years at high flux stations to 11 years at low flux stations. The reactive component of the organic matter must have a shorter lifetime than this average value. In contrast, the average lifetime of biogenic silica in the upper centimeter of these sediments is 55 ± 28 years, and shows no systematic variations with benthic flux.
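For readers unfamiliar with the flux calculation summarized above, the sketch below shows the generic Fick's-first-law estimate of a diffusive benthic flux from a near-interface pore-water gradient. All numbers (profile values, porosity, diffusivity, and the phi-squared tortuosity correction) are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical near-interface pore-water NO3- profile; depths in cm,
# concentrations in umol/cm^3. Values are illustrative only.
depth = np.array([0.0, 0.25, 0.5, 1.0, 2.0])
no3 = np.array([0.040, 0.043, 0.045, 0.048, 0.050])

porosity = 0.85              # assumed, e.g. from resistivity/porosity data
d_mol = 350.0                # molecular diffusivity of NO3-, cm^2/yr (ballpark)
d_sed = d_mol * porosity**2  # one common tortuosity correction, Ds ~ D0 * phi^2

# Fick's first law at the sediment-water interface:
#   J = -phi * Ds * dC/dz   (negative J means an upward flux out of the
#                            sediment when concentration increases downward)
dcdz = (no3[1] - no3[0]) / (depth[1] - depth[0])
flux = -porosity * d_sed * dcdz
print(f"diffusive NO3- flux: {flux:.3f} umol cm^-2 yr^-1")
```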
Abstract:
Assessing frequency and extent of mass movement at continental margins is crucial to evaluate risks for offshore constructions and coastal areas. A multidisciplinary approach including geophysical, sedimentological, geotechnical, and geochemical methods was applied to investigate multistage mass transport deposits (MTDs) off Uruguay, on top of which no surficial hemipelagic drape was detected based on echosounder data. Non-steady-state pore water conditions are evidenced by a distinct gradient change in the sulfate (SO4**2-) profile at 2.8 m depth. A sharp sedimentological contact at 2.43 m coincides with an abrupt downward increase in shear strength from approx. 10 to >20 kPa. This boundary is interpreted as a paleosurface (and top of an older MTD) that has recently been covered by a sediment package during a younger landslide event. This youngest MTD supposedly originated from an upslope position and carried its initial pore water signature downward. The kink in the SO4**2- profile approx. 35 cm below the sedimentological and geotechnical contact indicates that bioirrigation affected the paleosurface before deposition of the youngest MTD. Based on modeling of the diffusive re-equilibration of SO4**2-, the age of the most recent MTD is estimated to be <30 years. The mass movement was possibly related to an earthquake in 1988 (approx. 70 km southwest of the core location). Probabilistic slope stability back analysis of general landslide structures in the study area reveals that slope failure initiation requires additional ground accelerations. Therefore, we consider the earthquake as a reasonable trigger if additional weakening processes (e.g., erosion by previous retrogressive failure events or excess pore pressures) preconditioned the slope for failure. Our study reveals the necessity of multidisciplinary approaches to accurately recognize and date recent slope failures in complex settings such as the investigated area.
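The age estimate rests on the diffusive smoothing of an initially sharp pore-water contrast. A minimal sketch of that idea, fitting the error-function solution for a relaxing concentration step to invented sulfate data (depths, concentrations, and the diffusivity are all placeholders, not values from the study), might look like:

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Invented pore-water sulfate values around the kink (depth in m, SO4 in mM).
z = np.array([2.0, 2.4, 2.8, 3.2, 3.6, 4.0])
so4 = np.array([14.0, 12.5, 10.0, 7.6, 6.2, 5.5])

D = 0.018  # m^2/yr, ballpark diffusivity of sulfate in sediment (assumed)

def relaxing_step(z, c_lo, c_hi, z0, t):
    """Diffusive smoothing of an initial concentration step at depth z0,
    evaluated after elapsed time t (the quantity being estimated)."""
    return c_lo + 0.5 * (c_hi - c_lo) * (1.0 - erf((z - z0) / np.sqrt(4.0 * D * t)))

popt, _ = curve_fit(relaxing_step, z, so4, p0=[5.0, 14.0, 2.8, 10.0])
print(f"estimated time since emplacement: {popt[3]:.0f} yr")
```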
Abstract:
High-resolution sedimentary records of major and minor elements (Al, Ba, Ca, Sr, Ti), total organic carbon (TOC), and profiles of pore water constituents (SO4**2-, CH4, Ca2+, Ba2+, Mg2+, alkalinity) were obtained for two gravity cores (core 755, 501 m water depth and core 214, 1686 m water depth) from the northwestern Black Sea. The records were examined in order to gain insight into the cycling of Ba in anoxic marine sediments characterized by a shallow sulfate-methane transition (SMT) as well as the applicability of barite as a primary productivity proxy in such a setting. The Ba records are strongly overprinted by diagenetic barite (BaSO4) precipitation and remobilization; authigenic Ba enrichments were found at both sites at and slightly above the current SMT. Transport reaction modeling was applied to simulate the migration of the SMT during the changing geochemical conditions after the Holocene seawater intrusion into the Black Sea. Based on this, sediment intervals affected by diagenetic Ba redistribution were identified. Results reveal that the intense overprint of Ba and Baxs (Ba excess above detrital average) strongly limits its correlation to primary productivity. These findings have implications for other modern and ancient anoxic basins, such as sections covering the Oceanic Anoxic Events for which Ba is frequently used as a primary productivity indicator. Our study also demonstrates the limitations concerning the use of Baxs as a tracer for downward migrations of the SMT: due to high sedimentation rates at the investigated sites, diagenetic barite fronts are buried below the SMT within a relatively short period. Thus, 'relict' barite fronts would only be preserved for a few thousand years, if at all.
Abstract:
The surface sediments in the Black Sea are underlain by extensive deposits of iron (Fe) oxide-rich lake sediments that were deposited prior to the inflow of marine Mediterranean Sea waters ca. 9000 years ago. The subsequent downward diffusion of marine sulfate into the methane-bearing lake sediments has led to a multitude of diagenetic reactions in the sulfate-methane transition zone (SMTZ), including anaerobic oxidation of methane (AOM) with sulfate. While the sedimentary cycles of sulfur (S), methane and Fe in the SMTZ have been extensively studied, relatively little is known about the diagenetic alterations of the sediment record occurring below the SMTZ. Here we combine detailed geochemical analyses of the sediment and pore water with multicomponent diagenetic modeling to study the diagenetic alterations below the SMTZ at two sites in the western Black Sea. We focus on the dynamics of Fe, S and phosphorus (P) and demonstrate that diagenesis has strongly overprinted the sedimentary burial records of these elements. Our results show that sulfate-mediated AOM substantially enhances the downward diffusive flux of sulfide into the deep limnic deposits. During this downward sulfidization, Fe oxides, Fe carbonates and Fe phosphates (e.g. vivianite) are converted to sulfide phases, leading to an enrichment in solid phase S and the release of phosphate to the pore water. Below the sulfidization front, high concentrations of dissolved ferrous Fe (Fe2+) lead to sequestration of downward diffusing phosphate as authigenic vivianite, resulting in a transient accumulation of total P directly below the sulfidization front. Our model results further demonstrate that downward migrating sulfide becomes partly re-oxidized to sulfate due to reactions with oxidized Fe minerals, fueling a cryptic S cycle and thus stimulating slow rates of sulfate-driven AOM (~ 1-100 pmol/cm**3/d) in the sulfate-depleted limnic deposits. However, this process is unlikely to explain the observed release of dissolved Fe2+ below the SMTZ. Instead, we suggest that besides organoclastic Fe oxide reduction, AOM coupled to the reduction of Fe oxides may also provide a possible mechanism for the high concentrations of Fe2+ in the pore water at depth. Our results reveal that methane plays a key role in the diagenetic alterations of Fe, S and P records in Black Sea sediments. The downward sulfidization into the limnic deposits is enhanced through sulfate-driven AOM, and AOM with Fe oxides may provide a deep source of dissolved Fe2+ that drives the sequestration of P in vivianite below the sulfidization front.
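As a toy illustration of the transport-reaction style of modeling invoked here, the sketch below solves a steady-state diffusion-consumption balance for pore-water sulfate on a 1-D grid. It is a single-species stand-in for the multicomponent diagenetic model actually used; every parameter is an assumed placeholder.

```python
import numpy as np

n, L = 200, 10.0     # grid points, domain depth (m)
z = np.linspace(0, L, n)
dz = z[1] - z[0]
D = 0.018            # sulfate diffusivity in sediment, m^2/yr (assumed)
k = 0.05             # first-order consumption rate, 1/yr (assumed)
c_top = 17.0         # bottom-water sulfate concentration, mM (assumed)

# Discretize D*C'' - k*C = 0 with C(0) = c_top and a no-flux bottom boundary.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0
b[0] = c_top
for i in range(1, n - 1):
    A[i, i - 1] = D / dz**2
    A[i, i] = -2.0 * D / dz**2 - k
    A[i, i + 1] = D / dz**2
A[-1, -1] = 1.0      # C'(L) = 0, enforced as C[n-1] = C[n-2]
A[-1, -2] = -1.0
c = np.linalg.solve(A, b)
print(f"sulfate at {L} m depth: {c[-1]:.2f} mM")
```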
Abstract:
This paper presents a theoretical model for the vibration analysis of micro-scale fluid-loaded rectangular isotropic plates, based on Lamb's assumption of fluid-structure interaction and the Rayleigh-Ritz energy method. An analytical solution for this model is proposed, which can be applied to most cases of boundary conditions. Dynamical experimental data for a series of microfabricated silicon plates are obtained using a base-excitation dynamic testing facility. The natural frequencies and mode shapes in the experimental results are in good agreement with the theoretical simulations for the lower-order modes. The presented theoretical and experimental investigations of the vibration characteristics of micro-scale plates are of particular interest in the design of microplate-based biosensing devices.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in the value of n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
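To make the latent structure concrete: a PARAFAC-type latent class model expresses the joint pmf of categorical variables as a mixture of product kernels, so the resulting probability tensor has nonnegative rank at most the number of classes. A minimal sketch, with invented dimensions and randomly drawn parameters:

```python
import numpy as np

# A PARAFAC-style latent class factorization of a 3-way categorical pmf:
#   P(x1, x2, x3) = sum_h lambda_h * prod_j psi_j[h, x_j]
rng = np.random.default_rng(0)
k, d = 3, (4, 5, 2)                      # latent classes, category counts

lam = rng.dirichlet(np.ones(k))          # class weights
psi = [rng.dirichlet(np.ones(dj), size=k) for dj in d]  # per-class marginals

# Assemble the full probability tensor (feasible only for tiny tables).
P = np.einsum('h,ha,hb,hc->abc', lam, psi[0], psi[1], psi[2])
assert np.isclose(P.sum(), 1.0)          # a valid joint pmf
print("nonnegative rank of P is at most", k)
```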
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
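As context for what a Gaussian approximation to a posterior looks like, the familiar Laplace approximation, N(mode, inverse Hessian), is sketched below for a toy Poisson log-linear posterior. The chapter's KL-optimal Gaussian is a different construction; this stand-in only illustrates the general idea, and the data, design, and prior precision are all simulated placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))                        # toy design matrix
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3, 0.2])))
tau = 1.0                                           # Gaussian prior precision (assumed)

def neg_log_post(theta):
    # Poisson log-likelihood plus a simple Gaussian prior penalty
    eta = X @ theta
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * tau * theta @ theta

res = minimize(neg_log_post, np.zeros(3), method="BFGS")
mean, cov = res.x, res.hess_inv   # Gaussian mean and covariance (BFGS estimate)
print("approximate posterior mean:", np.round(mean, 3))
```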
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
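A minimal version of the probit data augmentation sampler in question (the Albert-Chib truncated-normal scheme) is sketched below on simulated rare-event data; with a strongly negative intercept, the chain for beta should exhibit exactly the high autocorrelation described. The prior, dimensions, and true coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(2)
n, p = 2000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])        # negative intercept => rare successes
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

V = np.linalg.inv(X.T @ X + np.eye(p))   # posterior covariance, N(0, I) prior
beta = np.zeros(p)
draws = []
for _ in range(500):
    mu = X @ beta
    # Latent z_i ~ N(mu_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # beta | z has a Gaussian full conditional
    beta = rng.multivariate_normal(V @ (X.T @ z), V)
    draws.append(beta.copy())
# Inspecting np.array(draws) should reveal slow mixing in the intercept.
```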
Abstract:
RNA viruses are an important cause of global morbidity and mortality. The rapid evolutionary rates of RNA virus pathogens, caused by high replication rates and error-prone polymerases, can make the pathogens difficult to control. RNA viruses can undergo immune escape within their hosts and develop resistance to the treatment and vaccines we design to fight them. Understanding the spread and evolution of RNA pathogens is essential for reducing human suffering. In this dissertation, I make use of the rapid evolutionary rate of viral pathogens to answer several questions about how RNA viruses spread and evolve. To address each of the questions, I link mathematical techniques for modeling viral population dynamics with phylogenetic and coalescent techniques for analyzing and modeling viral genetic sequences and evolution. The first project uses multi-scale mechanistic modeling to show that decreases in viral substitution rates over the course of an acute infection, combined with the timing of infectious hosts transmitting new infections to susceptible individuals, can account for discrepancies in viral substitution rates in different host populations. The second project combines coalescent models with within-host mathematical models to identify driving evolutionary forces in chronic hepatitis C virus infection. The third project compares the effects of intrinsic and extrinsic viral transmission rate variation on viral phylogenies.
Abstract:
Purpose: To build a model that predicts the survival time of patients treated with stereotactic radiosurgery for brain metastases, using support vector machine (SVM) regression.
Methods and Materials: This study utilized data from 481 patients, which were randomly divided into equal training and validation datasets. The SVM model used a Gaussian RBF kernel, along with parameters such as the size of the epsilon-insensitive region and the cost parameter (C) that control the amount of error tolerated by the model. The data for each patient consisted of the actual survival time together with the predictor variables: the number of brain metastases, the graded prognostic assessment (GPA) and Karnofsky Performance Scale (KPS) scores, prescription dose, and the largest planning target volume (PTV). The response of the model is the survival time of the patient. The resulting survival time predictions were analyzed against the actual survival times by single-parameter classification and two-parameter classification. The predicted mean survival times within each classification were compared with the actual values to obtain the confidence interval associated with the model's predictions. In addition to visualizing the data on plots using the means and error bars, the correlation coefficients between the actual and predicted means of the survival times were calculated during each step of the classification.
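A hedged sketch of this setup using scikit-learn's epsilon-SVR with an RBF kernel appears below. The feature ranges, C, and epsilon are placeholders standing in for the tuned values in the study, and the data are randomly generated.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 240                                   # roughly half of 481 patients (training)
X = np.column_stack([
    rng.integers(1, 6, n),                # number of brain metastases
    rng.uniform(0, 4, n),                 # GPA score
    rng.integers(4, 11, n) * 10,          # KPS score
    rng.uniform(12, 24, n),               # prescription dose (Gy)
    rng.uniform(0.1, 30, n),              # largest PTV (cm^3)
])
y = rng.gamma(2.0, 6.0, n)                # placeholder survival times (months)

# Epsilon-SVR with Gaussian RBF kernel; C and epsilon control tolerated error.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X, y)
print("predicted survival (months):", model.predict(X[:3]).round(1))
```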
Results: The number of metastases and the KPS score were consistently shown to be the strongest predictors in the single-parameter classification, and were subsequently used as first classifiers in the two-parameter classification. When the survival times were analyzed with the number of metastases as the first classifier, the best correlation was obtained for patients with 3 metastases, while patients with 4 or 5 metastases had significantly worse results. When the KPS score was used as the first classifier, patients with KPS scores of 60 and 90/100 had similarly strong correlation results. These mixed results are likely due to the limited data available for patients with more than 3 metastases or KPS scores of 60 or less.
Conclusions: The number of metastases and the KPS score both proved to be strong predictors of patient survival time. The model was less accurate for patients with more metastases and certain KPS scores due to the lack of training data.
Abstract:
Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, the progress in improving model predictability, data interpretation or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances can be further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use managements) under current and future climate regimes.

With the goal to improve the predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rate of gases and particles is evaluated with detailed descriptions of their limitations. Mechanistic-based modeling approaches along with empirical studies across different scales are employed to refine the mathematical descriptions of surface conductance responsible for gas and particle exchanges as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size and leaf microroughness impact the aerodynamic attributes and thereby the ultrafine particle collection efficiency at the leaf/branch scale is explored using wind tunnel experiments with interpretations by a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model combined with particle fluxes and concentration measurements within and above a forest is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto canopy medium and forest floor. For gases, a modeling framework accounting for the leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed by a porous media model that accommodates the transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and soil-root interaction model. It should be noted that tackling all aspects of potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities based on the foundation derived from this dissertation are also briefly discussed.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them a heightened conservation of fluid volume and the representation of subgrid structures.
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation for simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
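To illustrate the link between dynamic programming and such route choice models: in a recursive-logit-style formulation, the expected-utility value functions solve a single linear system rather than requiring value iteration, which is what makes estimation on large networks tractable. A toy sketch on an invented four-node network (all utilities are placeholders, and the specific model variants in the thesis differ):

```python
import numpy as np

# One-step utilities v[k, a] for moving from node k to node a; -inf marks
# missing arcs. Node 3 is the (absorbing) destination.
NEG = -np.inf
v = np.array([
    [NEG, -1.0, -2.0, NEG],
    [NEG, NEG, -1.0, -1.5],
    [NEG, NEG, NEG, -1.0],
    [NEG, NEG, NEG, NEG],
])
M = np.where(np.isfinite(v), np.exp(v), 0.0)
b = np.array([0.0, 0.0, 0.0, 1.0])      # destination indicator

# z = M z + b  =>  one linear solve; z_k = exp(expected utility to destination)
z = np.linalg.solve(np.eye(4) - M, b)
V = np.log(z)                            # value functions
# Logit link-choice probabilities: P(a|k) = exp(v(a|k)) * z_a / z_k
P = M * z[None, :] / z[:, None]
print("value functions:", np.round(V, 3))
print("choice probabilities from node 0:", np.round(P[0], 3))
```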