878 results for Scale models.
Abstract:
Urban systems consist of several interlinked sub-systems - social, economic, institutional and environmental - each representing a complex system of its own and affecting all the others at various structural and functional levels. An urban system is represented by a number of "human" agents, such as individuals and households, and "non-human" agents, such as buildings, establishments, transport, vehicles and infrastructure. These two categories of agents interact with each other and simultaneously affect the system in which they operate. Understanding the types of interaction and their spatial and temporal localisation well enough to allow very detailed simulation through models is a considerable effort, and it is the topic this research addresses. An analysis of urban system complexity is presented here, together with a state-of-the-art review of the field of urban models. Finally, six international models - MATSim, MobiSim, ANTONIN, TRANSIMS, UrbanSim, ILUTE - are illustrated and compared.
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Furthermore, two important SGS models are presented: one is the Dynamic eddy-viscosity model (DEVM), developed by \cite{germano1991dynamic}, while the other is the Explicit Algebraic SGS model (EASSM), by \cite{marstorp2009explicit}. In addition, some details about the implementation of the EASSM in a Pseudo-Spectral Navier-Stokes code \cite{chevalier2007simson} are presented. The performance of the two aforementioned models will be investigated in the following chapters, by means of LES of a channel flow at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation will be compared to baseline DNS data. Results have shown that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. The EASSM therefore combines accuracy and computational efficiency, implying a clear potential for industrial CFD usage.
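For reference, the dynamic eddy-viscosity closure of \cite{germano1991dynamic} is commonly written as follows (standard notation, not necessarily the thesis's exact formulation):

```latex
% Dynamic eddy-viscosity closure (Germano et al. 1991, with Lilly's
% least-squares determination of the coefficient):
\tau_{ij} - \frac{\delta_{ij}}{3}\,\tau_{kk} = -2\,\nu_t\,\bar{S}_{ij},
\qquad
\nu_t = C\,\bar{\Delta}^{2}\,|\bar{S}|,
\qquad
C = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle}
```

Here $\bar{S}_{ij}$ is the resolved strain-rate tensor, $\bar{\Delta}$ the filter width, $L_{ij}$ the resolved (Leonard) stresses from the Germano identity, and $M_{ij}$ is built from grid- and test-filtered strain rates; the key point is that the coefficient $C$ is computed dynamically from the resolved field rather than fixed a priori.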
Abstract:
Asteroid 4 Vesta seems to be a major intact protoplanet, with a surface composition similar to that of the HED (howardite-eucrite-diogenite) meteorites. The southern hemisphere is dominated by a giant impact scar, but previous impact models have failed to reproduce the observed topography. The recent discovery that Vesta's southern hemisphere is dominated by two overlapping basins provides an opportunity to model Vesta's topography more accurately. Here we report three-dimensional simulations of Vesta's global evolution under two overlapping planet-scale collisions. We closely reproduce its observed shape, and provide maps of impact excavation and ejecta deposition. Spiral patterns observed in the younger basin Rheasilvia, about one billion years old, are attributed to Coriolis forces during crater collapse. Surface materials exposed in the north come from a depth of about 20 kilometres, according to our models, whereas materials exposed inside the southern double-excavation come from depths of about 60-100 kilometres. If Vesta began as a layered, completely differentiated protoplanet, then our model predicts large areas of pure diogenites and olivine-rich rocks. These are not seen, possibly implying that the outer 100 kilometres or so of Vesta is composed mainly of a basaltic crust (eucrites) with ultramafic intrusions (diogenites).
Abstract:
This paper shows how one can infer the nature of local returns to scale at the input- or output-oriented efficient projection of a technically inefficient input-output bundle, when the input- and output-oriented measures of efficiency differ.
Abstract:
Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the Random Forests method. Throughout this study we showed that Random Forests can deal with a large number of weak input variables without overfitting, can account for non-additive interactions between these input variables, and can be used for variable selection without being adversely affected by collinearities. Random Forests can handle large-scale data sets without rigorous data preprocessing and has a robust variable importance ranking measure. We propose a novel variable selection method in the context of Random Forests that uses the data noise level as the cut-off value to determine the subset of important predictors. This new approach enhances the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments. When the data set had a high variables-to-observations ratio, Random Forests complemented the established logistic regression. This study suggests that Random Forests is recommended for such high-dimensionality data: one can use Random Forests to select the important variables and then use logistic regression or Random Forests itself to estimate the effect size of the predictors and to classify new observations. We also found that mean decrease in accuracy is a more reliable variable ranking measure than mean decrease in Gini.
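The noise-level cutoff idea can be illustrated with a minimal sketch: append known pure-noise columns to the predictors and keep only variables whose importance exceeds the largest noise importance. This is a hypothetical re-implementation using scikit-learn's `RandomForestClassifier` and Gini importances, with invented data, not the study's actual code.

```python
# Sketch of noise-level-cutoff variable selection with Random Forests.
# Assumption: scikit-learn is available; data and sizes are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p_signal, p_noise = 300, 3, 20

# Three informative predictors plus pure-noise columns.
X_signal = rng.normal(size=(n, p_signal))
y = (X_signal.sum(axis=1) > 0).astype(int)
X = np.hstack([X_signal, rng.normal(size=(n, p_noise))])

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Largest importance among the known-noise columns serves as the cutoff;
# predictors scoring above it are retained as "important".
cutoff = rf.feature_importances_[p_signal:].max()
selected = np.where(rf.feature_importances_ > cutoff)[0]
print(selected)  # the informative columns should dominate
```

In practice the synthetic noise columns play the role of the abstract's "synthetic positive control": they calibrate what importance a variable can achieve by chance alone.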
Abstract:
In this work, we propose a new methodology for the large-scale optimization and process integration of complex chemical processes that have been simulated using modular chemical process simulators. Units with significant numerical noise or large CPU times are substituted by surrogate models based on Kriging interpolation. Using a degree of freedom analysis, some of those units can be aggregated into a single unit to reduce the complexity of the resulting model. As a result, we solve a hybrid simulation-optimization model formed by units in the original flowsheet, Kriging models, and explicit equations. We present a case study of the optimization of a sour water stripping plant in which we simultaneously consider economics, heat integration and environmental impact using the ReCiPe indicator, which incorporates the recent advances made in Life Cycle Assessment (LCA). The optimization strategy guarantees convergence to a local optimum within the tolerance of the numerical noise.
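A Kriging surrogate of the kind described can be sketched as plain Gaussian-process interpolation. The squared-exponential covariance, length scale, and the one-dimensional test response below are all assumptions for illustration, not the paper's actual surrogate.

```python
# Minimal Kriging (Gaussian-process) interpolation sketch: predict an
# "expensive" unit's response from a few sampled points.
import numpy as np

def kriging_fit_predict(x_train, y_train, x_new, length=1.0, nugget=1e-8):
    """Simple Kriging predictor: y*(x) = k(x, X) K^{-1} y."""
    def cov(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length) ** 2)
    # Small nugget on the diagonal keeps the covariance matrix invertible.
    K = cov(x_train, x_train) + nugget * np.eye(len(x_train))
    weights = np.linalg.solve(K, y_train)
    return cov(x_new, x_train) @ weights

# Stand-in for a noisy/expensive 1-D unit response.
x = np.linspace(0.0, 2.0 * np.pi, 12)
y = np.sin(x)
x_star = np.array([1.0, 2.5])
print(kriging_fit_predict(x, y, x_star))  # close to sin(1.0), sin(2.5)
```

In the flowsheet setting, each sampled point would come from a run of the modular simulator, and the cheap surrogate replaces the noisy unit inside the optimization loop.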
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Methods for understanding classical disordered spin systems with interactions conforming to some idealized graphical structure are well developed. The equilibrium properties of the Sherrington-Kirkpatrick model, which has a densely connected structure, have become well understood. Many features generalize to sparse Erdős-Rényi graph structures above the percolation threshold and to Bethe lattices when appropriate boundary conditions apply. In this paper, we consider spin states subject to a combination of sparse strong interactions with weak dense interactions, which we term a composite model. The equilibrium properties are examined through the replica method, with exact analysis of the high-temperature paramagnetic, spin-glass, and ferromagnetic phases by perturbative schemes. We present results of replica symmetric variational approximations, where perturbative approaches fail at lower temperature. Results demonstrate re-entrant behaviors from spin glass to ferromagnetic phases as temperature is lowered, including transitions from replica symmetry broken to replica symmetric phases. The nature of high-temperature transitions is found to be sensitive to the connectivity profile in the sparse subgraph: with regular connectivity, a discontinuous transition from the paramagnetic to the ferromagnetic phase is apparent.
Abstract:
We developed diatom-based prediction models of hydrology and periphyton abundance to inform assessment tools for a hydrologically managed wetland. Because hydrology is an important driver of ecosystem change, hydrologic alterations by restoration efforts could modify biological responses, such as periphyton characteristics. In karstic wetlands, diatoms are particularly important components of mat-forming calcareous periphyton assemblages that both respond and contribute to the structural organization and function of the periphyton matrix. We examined the distribution of diatoms across the Florida Everglades landscape and found hydroperiod and periphyton biovolume were strongly correlated with assemblage composition. We present species optima and tolerances for hydroperiod and periphyton biovolume, for use in interpreting the directionality of change in these important variables. Predictions of these variables were mapped to visualize landscape-scale spatial patterns in a dominant driver of change in this ecosystem (hydroperiod) and an ecosystem-level response metric of hydrologic change (periphyton biovolume). Specific diatom assemblages inhabiting periphyton mats of differing abundance can be used to infer past conditions and inform management decisions based on how assemblages are changing. This study captures diatom responses to wide gradients of hydrology and periphyton characteristics to inform ecosystem-scale bioassessment efforts in a large wetland.
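Species optima and tolerances along a gradient are commonly estimated by weighted averaging, with abundances as weights; the sketch below shows that standard calculation with invented hydroperiod and abundance values, not necessarily the exact method used in this study.

```python
# Weighted-averaging estimate of a species' optimum and tolerance
# along an environmental gradient (here: hydroperiod, in days).
import numpy as np

def wa_optimum_tolerance(gradient, abundance):
    """Optimum = abundance-weighted mean of the gradient;
    tolerance = abundance-weighted standard deviation."""
    w = abundance / abundance.sum()
    optimum = np.sum(w * gradient)
    tolerance = np.sqrt(np.sum(w * (gradient - optimum) ** 2))
    return optimum, tolerance

# Hydroperiod at 6 hypothetical sites and one species' abundances there.
hydroperiod = np.array([60.0, 120.0, 180.0, 240.0, 300.0, 360.0])
abundance = np.array([0.0, 2.0, 10.0, 25.0, 10.0, 3.0])
opt, tol = wa_optimum_tolerance(hydroperiod, abundance)
print(round(opt, 1), round(tol, 1))  # 242.4 53.6
```

The optimum locates the species' peak along the gradient and the tolerance its breadth, which is what allows assemblage shifts to be read as directional hydrologic change.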
Abstract:
My thesis examines fine-scale habitat use and movement patterns of age-1 Greenland cod (Gadus macrocephalus ogac) tracked using acoustic telemetry. Recent advances in tracking technologies such as GPS and acoustic telemetry have led to increasingly large and detailed datasets that present new opportunities for researchers to address fine-scale ecological questions regarding animal movement and spatial distribution. There is a growing demand for home range models that will not only work with massive quantities of autocorrelated data, but that can also exploit the added detail inherent in these high-resolution datasets. Most published home range studies use radio-telemetry or satellite data from terrestrial mammals or avian species, and most studies that evaluate the relative performance of home range models use simulated data. In Chapter 2, I used actual field-collected data from age-1 Greenland cod tracked with acoustic telemetry to evaluate the accuracy and precision of six home range models: minimum convex polygons, kernel densities with plug-in bandwidth selection and the reference bandwidth, adaptive local convex hulls, Brownian bridges, and dynamic Brownian bridges. I then applied the most appropriate model to two years (2010-2012) of tracking data collected from 82 tagged Greenland cod in Newman Sound, Newfoundland, Canada, to determine diel and seasonal differences in habitat use and movement patterns (Chapter 3). Little is known of juvenile cod ecology, so resolving these relationships will provide valuable insight into activity patterns, habitat use, and predator-prey dynamics, while filling a knowledge gap regarding the use of space by age-1 Greenland cod in a coastal nursery habitat. By doing so, my thesis demonstrates an appropriate technique for modelling the spatial use of fish from acoustic telemetry data that can be applied to high-resolution, high-frequency tracking datasets collected from mobile organisms in any environment.
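Of the six home range models compared, the minimum convex polygon is the simplest: the home range is the area of the convex hull of the relocation fixes. A self-contained sketch with hypothetical coordinates (monotone-chain hull plus the shoelace formula):

```python
# Minimum convex polygon (MCP) home-range estimate from telemetry fixes.
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    hull = convex_hull(points)
    # Shoelace formula over the hull vertices.
    return 0.5 * abs(sum(hull[i][0]*hull[(i+1) % len(hull)][1]
                         - hull[(i+1) % len(hull)][0]*hull[i][1]
                         for i in range(len(hull))))

# Hypothetical telemetry fixes (metres); interior fixes do not affect the MCP.
fixes = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 50), (20, 80)]
print(mcp_area(fixes))  # 10000.0 m^2
```

The MCP's insensitivity to interior points is exactly why utilization-distribution methods such as kernel densities and Brownian bridges are usually preferred for high-resolution data.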
Abstract:
People go through their life making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful to model time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
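The dynamic-programming step these models rely on, computing expected maximum utility ("logsum") values by backward recursion, can be sketched on a toy acyclic network. Node names and utilities below are invented; this is the generic recursive-logit idea, not the thesis's exact specification.

```python
# Recursive route choice sketch: node values are logsums of downstream
# link utilities, and link choice probabilities follow a logit form.
import math

# arcs[node] = list of (next_node, deterministic_utility); 'D' is the
# destination, where the value function is fixed at zero.
arcs = {
    "A": [("B", -1.0), ("C", -2.0)],
    "B": [("D", -2.0)],
    "C": [("D", -0.5)],
}
V = {"D": 0.0}

# Backward recursion over reverse topological order:
# V(k) = log sum_a exp(u(k,a) + V(a)).
for node in ("B", "C", "A"):
    V[node] = math.log(sum(math.exp(u + V[nxt]) for nxt, u in arcs[node]))

# Logit link-choice probabilities out of node A.
probs = {nxt: math.exp(u + V[nxt] - V["A"]) for nxt, u in arcs["A"]}
print(V["A"], probs)
```

On networks with cycles the recursion becomes a fixed-point system of equations, which is what makes estimation expensive and motivates the decomposition and reformulation methods described above.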