946 results for Dynamic Flow Estimation


Relevance: 30.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation, for example, their choices of where to live and where to work, how and when to travel and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply to make predictions because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation and nonlinear optimization algorithms. Five articles are related to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models become expensive to estimate, and we deal with this challenge by proposing innovative methods that reduce the estimation cost.
For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme is related to the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can be easily integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
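The dynamic-programming core of such models can be sketched as follows: on a network, the value function of each state satisfies a logsumexp fixed point, and logit choice probabilities follow directly from it. This is a minimal illustration under assumed utilities; the toy network, the `util` values and the function names are hypothetical, not taken from the thesis:

```python
import math

def value_functions(succ, util, dest, n_iter=100):
    """Solve V(s) = log sum_{a in succ(s)} exp(util(s,a) + V(a)) by iteration.

    succ : dict mapping node -> list of successor nodes
    util : dict mapping (node, next_node) -> deterministic utility (neg. cost)
    dest : absorbing destination node, fixed at V(dest) = 0
    """
    V = {s: 0.0 for s in succ}
    for _ in range(n_iter):
        for s in succ:
            if s == dest:
                continue
            V[s] = math.log(sum(math.exp(util[(s, a)] + V[a])
                                for a in succ[s]))
    return V

def choice_probs(succ, util, V, s):
    """Logit probabilities over the successors of state s."""
    return {a: math.exp(util[(s, a)] + V[a] - V[s]) for a in succ[s]}

# Toy network: origin "o" -> {"a", "b"} -> destination "d";
# link costs enter as negative utilities.
succ = {"o": ["a", "b"], "a": ["d"], "b": ["d"], "d": []}
util = {("o", "a"): -1.0, ("o", "b"): -2.0,
        ("a", "d"): -1.0, ("b", "d"): -1.0}
V = value_functions(succ, util, dest="d")
P = choice_probs(succ, util, V, "o")
```

Once the value functions have converged, the probabilities at each state sum to one by construction, which is what makes estimation by dynamic programming tractable.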

Relevance: 30.00%

Abstract:

Classical regression analysis can be used to model time series. However, the assumption that model parameters are constant over time is not necessarily adapted to the data. In phytoplankton ecology, the relevance of time-varying parameter values has been shown using a dynamic linear regression model (DLRM). DLRMs, belonging to the class of Bayesian dynamic models, assume the existence of a non-observable time series of model parameters, which are estimated on-line, i.e. after each observation. The aim of this paper was to show how DLRM results could be used to explain variation of a time series of phytoplankton abundance. We applied DLRM to daily concentrations of Dinophysis cf. acuminata, determined in Antifer harbour (French coast of the English Channel), along with physical and chemical covariates (e.g. wind velocity, nutrient concentrations). A single model was built using 1989 and 1990 data, and then applied separately to each year. Equivalent static regression models were investigated for the purpose of comparison. Results showed that most of the Dinophysis cf. acuminata concentration variability was explained by the configuration of the sampling site, the wind regime and tide residual flow. Moreover, the relationships of these factors with the concentration of the microalga varied with time, a fact that could not be detected with static regression. Application of dynamic models to phytoplankton time series, especially in a monitoring context, is discussed.
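A DLRM is a linear regression whose coefficients follow a random walk and are updated on-line by Kalman-filter recursions after each observation. The sketch below illustrates the mechanism on synthetic data; the variable names and the noise variances `W` and `V` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dlrm_filter(y, X, W, V, m0, C0):
    """On-line estimation of time-varying regression coefficients beta_t.

    Model: y_t    = x_t' beta_t + v_t,       v_t ~ N(0, V)
           beta_t = beta_{t-1} + w_t,        w_t ~ N(0, W)  (random walk)
    Returns the filtered mean of beta_t after each observation.
    """
    m, C = m0.copy(), C0.copy()
    means = []
    for t in range(len(y)):
        x = X[t]
        R = C + W                      # prior covariance at time t
        f = x @ m                      # one-step forecast
        Q = x @ R @ x + V              # forecast variance
        A = R @ x / Q                  # Kalman gain
        m = m + A * (y[t] - f)         # posterior mean
        C = R - np.outer(A, x @ R)     # posterior covariance
        means.append(m.copy())
    return np.array(means)

# Toy example: intercept plus one covariate whose true coefficient drifts.
rng = np.random.default_rng(0)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta_true = np.column_stack([np.full(T, 1.0), np.linspace(0.0, 2.0, T)])
y = (X * beta_true).sum(axis=1) + 0.1 * rng.normal(size=T)
means = dlrm_filter(y, X, W=0.001 * np.eye(2), V=0.01,
                    m0=np.zeros(2), C0=np.eye(2))
```

The filtered trajectory of the second coefficient tracks the drift from 0 to 2, which is exactly the kind of time variation a static regression would average away.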

Relevance: 30.00%

Abstract:

The aim of this thesis is to test the ability of several correlative models, such as the Alpert correlations (proposed in 1972 and re-examined in 2011), the investigation of Heskestad and Delichatsios in 1978, and the correlations produced by Cooper in 1982, to define both the dynamic and thermal characteristics of a fire-induced ceiling-jet flow. The flow occurs when the fire plume impinges on the ceiling and develops in the radial direction from the fire axis. Both temperature and velocity predictions are decisive for sprinkler positioning, fire alarm and detector (heat, smoke) positions and activation times, and back-layering predictions. These correlative models are compared, in terms of temperature and velocity near the ceiling, with results from the fire simulation software CFAST. These results are also compared with a Computational Fluid Dynamics (CFD) analysis using ANSYS FLUENT.
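For reference, the Alpert (1972) correlations mentioned above give the maximum ceiling-jet excess temperature and gas velocity as functions of the heat release rate, ceiling height and radial distance. The sketch below uses the commonly quoted coefficients; verify them against the original source before any design use:

```python
def alpert_ceiling_jet(Q, H, r):
    """Alpert (1972) ceiling-jet correlations for an unconfined smooth ceiling.

    Q : total heat release rate of the fire [kW]
    H : ceiling height above the fire source [m]
    r : radial distance from the plume impingement point [m]
    Returns (excess temperature above ambient [degC], gas velocity [m/s]).
    """
    if r / H <= 0.18:   # turning (near-plume) region
        dT = 16.9 * Q ** (2.0 / 3.0) / H ** (5.0 / 3.0)
    else:               # ceiling-jet region
        dT = 5.38 * (Q / r) ** (2.0 / 3.0) / H
    if r / H <= 0.15:
        U = 0.96 * (Q / H) ** (1.0 / 3.0)
    else:
        U = 0.195 * Q ** (1.0 / 3.0) * H ** 0.5 / r ** (5.0 / 6.0)
    return dT, U

# Example: 1 MW fire, 5 m ceiling, detector 4 m from the fire axis.
dT, U = alpert_ceiling_jet(Q=1000.0, H=5.0, r=4.0)
# Near-plume point for comparison.
dT0, U0 = alpert_ceiling_jet(Q=1000.0, H=5.0, r=0.5)
```

Predictions of this kind are the inputs to detector and sprinkler activation-time estimates, which is why the thesis benchmarks them against CFAST and FLUENT.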

Relevance: 30.00%

Abstract:

In this article we consider the a posteriori error estimation and adaptive mesh refinement of discontinuous Galerkin finite element approximations of the bifurcation problem associated with the steady incompressible Navier-Stokes equations. Particular attention is given to the reliable error estimation of the critical Reynolds number at which a steady pitchfork or Hopf bifurcation occurs when the underlying physical system possesses reflectional or Z_2 symmetry. Computable a posteriori error bounds are derived by employing a generalization of the standard Dual-Weighted-Residual approach, originally developed for the estimation of target functionals of the solution, to bifurcation problems. Numerical experiments highlighting the practical performance of the proposed a posteriori error indicator on adaptively refined computational meshes are presented.
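The Dual-Weighted-Residual idea underlying such bounds can be stated compactly. With $u_h$ the discrete approximation, $J(\cdot)$ the target functional (here related to the critical Reynolds number), $z$ the solution of an associated dual (adjoint) problem and $z_h$ any discrete approximation of it, the standard error representation reads, up to higher-order terms and using Galerkin orthogonality,

```latex
J(u) - J(u_h) \;\approx\; \rho(u_h)(z - z_h)
            \;=\; \sum_{K \in \mathcal{T}_h} \rho_K(u_h)(z - z_h)
            \;\le\; \sum_{K \in \mathcal{T}_h} \eta_K ,
```

where $\rho(u_h)(\cdot)$ is the residual of the primal problem and the computable element indicators $\eta_K$ drive the adaptive refinement. This is the generic form of the approach; the article's contribution is its generalization to bifurcation problems, which modifies the dual problem accordingly.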

Relevance: 30.00%

Abstract:

We estimate a dynamic model of mortgage default for a cohort of Colombian debtors between 1997 and 2004. We use the estimated model to study the effects on default of a class of policies that affected the evolution of mortgage balances in Colombia during the 1990s. We propose a framework for estimating dynamic behavioral models that accounts for the presence of unobserved state variables that are correlated across individuals and across time periods. We extend the standard literature on the structural estimation of dynamic models by incorporating an unobserved common correlated shock that affects all individuals' static payoffs and the dynamic continuation payoffs associated with different decisions. Given a standard parametric specification of the dynamic problem, we show that the aggregate shocks are identified from the variation in the observed aggregate behavior. The shocks and their transition are separately identified, provided there is enough cross-sectional variation of the observed states.

Relevance: 30.00%

Abstract:

Three-dimensional direct numerical simulations (DNS) have been performed on a finite-size hemisphere-cylinder model at angle of attack AoA = 20° and Reynolds numbers Re = 350 and 1000. Under these conditions, massive separation exists on the nose and lee side of the cylinder, and at both Reynolds numbers the flow is found to be unsteady. Proper orthogonal decomposition (POD) and dynamic mode decomposition (DMD) are employed in order to study the primary instability that triggers unsteadiness at Re = 350. The dominant coherent flow structures identified at the lower Reynolds number are also found to exist at Re = 1000; the question is then posed whether the flow oscillations and structures found at the two Reynolds numbers are related. POD and DMD computations are performed using different subdomains of the DNS computational domain. Besides reducing the computational cost of the analyses, this also makes it possible to isolate spatially localized oscillatory structures from other, more energetic structures present in the flow. It is found that POD and DMD are in general sensitive to domain truncation, and uninformed choices of the subdomain may lead to inconsistent results. Analyses at Re = 350 show that the primary instability is related to the counter-rotating vortex pair forming the three-dimensional afterbody wake, and is characterized by the frequency St ≈ 0.11, in line with results in the literature. At Re = 1000, vortex shedding is present in the wake with an associated broadband spectrum centered around the same frequency. The horn/leeward vortices on the cylinder lee side, upstream of the cylinder base, also present finite-amplitude oscillations at the higher Reynolds number. The spatial structure of these oscillations, described by the POD modes, is easily differentiated from that of the wake oscillations. Additionally, the frequency spectra associated with the lee-side vortices present well-defined peaks, corresponding to St ≈ 0.11 and its first few harmonics, as opposed to the broadband spectrum found in the wake.
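The DMD computations referred to above follow the standard SVD-based algorithm: fit a best-approximating linear operator between successive snapshots and read oscillation frequencies off its eigenvalues. A minimal sketch on synthetic data (the traveling-wave test signal is illustrative, not DNS data):

```python
import numpy as np

def dmd(X, r=None):
    """Exact dynamic mode decomposition of a snapshot sequence.

    X : (n, m) array whose columns are snapshots at uniform time steps.
    r : optional SVD truncation rank (controls noise sensitivity).
    Returns (eigenvalues, modes) of the best-fit operator with X2 ~ A X1.
    """
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Project A onto the POD basis: Atilde = U* X2 V S^{-1}
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    # Exact DMD modes lifted back to the full space.
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy data: a traveling wave with phase increment 0.3 rad per snapshot.
n, m = 50, 40
x = np.arange(n)
t = np.arange(m)
data = np.cos(0.2 * x[:, None] - 0.3 * t[None, :])
eigvals, modes = dmd(data, r=2)
freqs = np.angle(eigvals) / (2.0 * np.pi)  # cycles per snapshot interval
```

For this noise-free signal, the two eigenvalues sit on the unit circle at phase ±0.3 rad, i.e. DMD recovers the oscillation frequency exactly; on truncated subdomains of real DNS data this identification becomes sensitive, as the abstract notes.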

Relevance: 30.00%

Abstract:

Three-dimensional Direct Numerical Simulations combined with Particle Image Velocimetry experiments have been performed on a hemisphere-cylinder at Reynolds number 1000 and angle of attack 20°. At these flow conditions, a pair of vortices, the so-called "horn" vortices, is found to be associated with flow separation. In order to understand the highly complex phenomena associated with this fully three-dimensional, massively separated flow, different structural analysis techniques have been employed: Proper Orthogonal and Dynamic Mode Decompositions (POD and DMD, respectively), as well as critical-point theory. A single dominant frequency associated with the von Kármán vortex shedding has been identified in both the experimental and the numerical results. POD and DMD modes associated with this frequency were recovered in the analysis. Flow separation was also found to be intrinsically linked to the observed modes. In addition, critical-point theory has been applied in order to highlight possible links between the topology patterns over the surface of the body and the computed modes. Critical points and separation lines on the body surface show in detail the presence of different flow patterns in the base flow: a three-dimensional separation bubble and two pairs of unsteady vortex systems, the horn vortices mentioned before and the so-called "leeward" vortices. The horn vortices emerge perpendicularly from the body surface at the separation region. The leeward vortices, on the other hand, originate downstream of the separation bubble as a result of the boundary layer separation. The frequencies associated with these vortical structures have been quantified.

Relevance: 30.00%

Abstract:

Mechanical conditioning has been shown to promote tissue formation in a wide variety of tissue engineering efforts. However, the underlying mechanisms by which external mechanical stimuli regulate cells and tissues are not known. This is particularly relevant in the area of heart valve tissue engineering (HVTE) owing to the intense hemodynamic environments that surround native valves. Some studies suggest that oscillatory shear stress (OSS) caused by steady flow and scaffold flexure plays a critical role in engineered tissue formation derived from bone marrow derived stem cells (BMSCs). In addition, scaffold flexure may enhance nutrient (e.g. oxygen, glucose) transport. In this study, we computationally quantified: (i) the magnitude of fluid-induced shear stresses; (ii) the extent of temporal fluid oscillations in the flow field, using the oscillatory shear index (OSI) parameter; and (iii) glucose and oxygen mass transport profiles. Noting that sample cyclic flexure induces a high degree of oscillatory shear stress (OSS), we incorporated moving-boundary computational fluid dynamics simulations of samples housed within a bioreactor to consider the effects of: 1) no flow, no flexure (control group), 2) steady flow alone, 3) cyclic flexure alone and 4) combined steady flow and cyclic flexure environments. We also coupled a diffusion and convection mass transport equation to the simulated system. We found that the coexistence of both OSS and appreciable shear stress magnitudes, described by the newly introduced parameter OSI-τ, explained the high levels of engineered collagen previously observed from combining cyclic flexure and steady flow states. On the other hand, each of these metrics on its own showed no association. This finding suggests that cyclic flexure and steady flow synergistically promote engineered heart valve tissue production via OSS, so long as the oscillations are accompanied by a critical magnitude of shear stress. In addition, our simulations showed that mass transport of glucose and oxygen is enhanced by sample movement at low sample porosities, but does not play a role in highly porous scaffolds. Preliminary in-house in vitro experiments showed that cell proliferation and phenotype are enhanced in OSI-τ environments.
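The oscillatory shear index used here is conventionally defined as OSI = ½(1 − |∫τ dt| / ∫|τ| dt) over a flow cycle; the combined OSI-τ metric introduced in the study additionally accounts for the shear magnitude. A sketch of the plain OSI computation (the sample signals are illustrative, not bioreactor data):

```python
import numpy as np

def _trapz(y, t):
    """Trapezoidal integral of samples y over times t."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def oscillatory_shear_index(tau, t):
    """OSI = 0.5 * (1 - |int tau dt| / int |tau| dt) over one cycle.

    0 -> fully unidirectional shear; 0.5 -> purely oscillatory shear.
    tau : wall shear stress samples [Pa]; t : sample times [s].
    """
    return 0.5 * (1.0 - abs(_trapz(tau, t)) / _trapz(np.abs(tau), t))

# A zero-mean oscillation gives OSI = 0.5; a dominant steady component gives 0.
t = np.linspace(0.0, 1.0, 2001)
osi_pure = oscillatory_shear_index(np.sin(2.0 * np.pi * t), t)
osi_steady = oscillatory_shear_index(1.0 + 0.2 * np.sin(2.0 * np.pi * t), t)
```

The second case illustrates the study's point: a signal can oscillate in time yet have OSI = 0 because the shear never reverses direction, which is why OSI and the shear magnitude had to be considered jointly.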

Relevance: 30.00%

Abstract:

Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among different approaches for predicting this demand, the four-step demand forecasting process is the most common. Managed lane demand is usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity. Therefore, capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring and operation. As a result, traditional modeling approaches, such as those used in static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated. These components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions.
With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model, which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.

Relevance: 30.00%

Abstract:

Tall buildings are wind-sensitive structures and can experience high wind-induced effects. Aerodynamic boundary layer wind tunnel testing has been the most commonly used method for estimating wind effects on tall buildings. Design wind effects on tall buildings are estimated through analytical processing of the data obtained from aerodynamic wind tunnel tests. Even though it is widely agreed that the data obtained from wind tunnel testing is fairly reliable, the post-test analytical procedures are still argued to have remarkable uncertainties. This research work attempted to assess in detail the uncertainties occurring at different stages of the post-test analytical procedures and to suggest improved techniques for reducing them. Results of the study showed that traditionally used simplifying approximations, particularly in the frequency domain approach, can cause significant uncertainties in estimating aerodynamic wind-induced responses. Based on the identified shortcomings, a more accurate dual aerodynamic data analysis framework which works in both the frequency and time domains was developed. The comprehensive analysis framework allows estimating modal, resultant and peak values of various wind-induced responses of a tall building more accurately. Estimating design wind effects on tall buildings also requires synthesizing the wind tunnel data with local climatological data of the study site. A novel copula-based approach was developed for accurately synthesizing aerodynamic and climatological data, upon investigating the causes of significant uncertainties in currently used synthesizing techniques. The improvement of the new approach over the existing techniques was also illustrated with a case study on a 50-story building. Finally, a practical dynamic optimization approach was suggested for tuning structural properties of tall buildings towards attaining optimum performance against wind loads with fewer design iterations.

Relevance: 30.00%

Abstract:

Lithium-ion (Li-ion) batteries have gained attention in recent decades because of their undisputable advantages over other types of batteries. They are used in many devices we need in our daily life, such as cell phones, laptop computers, cameras, and many other electronic devices. They are also used in smart grid technology, stand-alone wind and solar systems, Hybrid Electric Vehicles (HEV), and Plug-in Hybrid Electric Vehicles (PHEV). Despite the rapid increase in the use of Li-ion batteries, the lack of useful battery models remains a significant matter: existing models are either limited and inadequate, or are very complex models developed by chemists. A battery management system (BMS) aims to optimize the use of the battery, making the whole system more reliable, durable and cost-effective. Perhaps the most important function of the BMS is to provide an estimate of the State of Charge (SOC). SOC is the ratio of the available ampere-hours (Ah) in the battery to the total Ah of a fully charged battery. The Open Circuit Voltage (OCV) of a fully relaxed battery has an approximately one-to-one relationship with the SOC. Therefore, if this voltage is known, the SOC can be found. However, the relaxed OCV can only be measured when the battery is relaxed and the internal battery chemistry has reached equilibrium. This thesis focuses on Li-ion battery cell modelling and SOC estimation. In particular, the thesis introduces a simple but comprehensive model for the battery and a novel on-line, accurate and fast SOC estimation algorithm for the primary purpose of use in electric and hybrid-electric vehicles and in microgrid systems. The thesis aims to (i) form a baseline characterization for dynamic modeling; and (ii) provide a tool for use in state-of-charge estimation. The proposed modelling and SOC estimation schemes are validated through comprehensive simulation and experimental results.
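The two ideas named in the abstract, the Ah-ratio definition of SOC (tracked by coulomb counting) and the OCV-SOC relationship, can be sketched as follows. The OCV curve below is purely illustrative; real curves are chemistry-specific and measured, not assumed:

```python
import numpy as np

def coulomb_count(soc0, current, dt, capacity_ah):
    """Update SOC by coulomb counting: SOC_k = SOC_0 + sum(I*dt) / (3600*C).

    current     : array of cell currents [A], positive = charging
    dt          : sample period [s]
    capacity_ah : rated capacity [Ah]
    """
    soc = soc0 + np.cumsum(current) * dt / (3600.0 * capacity_ah)
    return np.clip(soc, 0.0, 1.0)

def soc_from_ocv(v_rest, ocv_table, soc_table):
    """Invert a (hypothetical, monotonic) OCV-SOC curve for a relaxed cell."""
    return float(np.interp(v_rest, ocv_table, soc_table))

# Hypothetical OCV-SOC lookup table (values are illustrative only).
soc_grid = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
ocv_grid = np.array([3.0, 3.4, 3.6, 3.75, 3.95, 4.2])

soc0 = soc_from_ocv(3.6, ocv_grid, soc_grid)   # rest voltage -> initial SOC
current = np.full(3600, 1.0)                   # 1 A charge for one hour
soc = coulomb_count(soc0, current, dt=1.0, capacity_ah=2.0)
```

In practice the two are combined: the OCV lookup anchors the estimate whenever the cell is relaxed, and coulomb counting carries it between those anchor points; more sophisticated on-line estimators (such as the one the thesis proposes) fuse them continuously.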

Relevance: 30.00%

Abstract:

Power system policies are broadly on track to escalate the use of renewable energy resources in electric power generation. Integration of dispersed generation into the utility network not only intensifies the benefits of renewable generation but also introduces further advantages, such as power quality enhancement and freedom of power generation for the consumers. However, the issues arising from the integration of distributed generators into the existing utility grid are as significant as its benefits, and they are aggravated as the number of grid-connected distributed generators increases. Therefore, power quality demands become stricter to ensure a safe and proper advancement towards the emerging smart grid. In this regard, system protection is the area that is most highly affected as the grid-connected distributed generation share in electricity generation increases. Islanding detection, amongst all protection issues, is the most important concern for a power system with high penetration of distributed sources. Islanding occurs when a portion of the distribution network which includes one or more distributed generation units and local loads is disconnected from the remaining portion of the grid. Upon formation of a power island, it remains energized due to the presence of one or more distributed sources. This thesis introduces a new islanding detection technique based on an enhanced multi-layer scheme that shows superior performance over the existing techniques. It provides improved solutions for the safety and protection of power systems and distributed sources that are capable of operating in grid-connected mode. The proposed active method offers a negligible non-detection zone. It is applicable to micro-grids with a number of distributed generation sources without sacrificing the dynamic response of the system. In addition, the information obtained from the proposed scheme allows for a smooth transition to stand-alone operation if required. The proposed technique paves the way towards a comprehensive protection solution for future power networks. The proposed method is converter-resident, and all power conversion systems that operate based on power electronics converters can benefit from it. The theoretical analysis is presented, and extensive simulation results confirm the validity of the analytical work.

Relevance: 30.00%

Abstract:

Recently, there has been great interest in studying the flow characteristics of suspensions in different environmental and industrial applications, such as snow avalanches, debris flows, hydrotransport systems, and material casting processes. Regarding rheological aspects, the majority of these suspensions, such as fresh concrete, behave mostly as non-Newtonian fluids. Concrete is the most widely used construction material in the world. Due to the limitations that exist in terms of workability and formwork filling abilities of normal concrete, a new class of concrete was developed that is able to flow under its own weight, especially through narrow gaps in the congested areas of the formwork. Accordingly, self-consolidating concrete (SCC) is a novel construction material that is gaining market acceptance in various applications. The higher fluidity of SCC enables it to be used in a number of special applications, such as densely reinforced sections. However, the higher flowability of SCC makes it more sensitive to segregation of coarse particles during flow (i.e., dynamic segregation) and thereafter at rest (i.e., static segregation). Dynamic segregation can increase when SCC flows over a long distance or in the presence of obstacles. Therefore, there is always a need to establish a trade-off between the flowability, passing ability, and stability properties of SCC suspensions. This should be taken into consideration when designing the casting process and the mixture proportioning of SCC, which is called the "workability design" of SCC. An efficient and inexpensive workability design approach consists of the prediction and optimization of the workability of the concrete mixtures for the selected construction processes, such as transportation, pumping, casting, compaction, and finishing. Indeed, the mixture proportioning of SCC should ensure the construction quality demands, such as the demanded levels of flowability, passing ability, filling ability, and stability (dynamic and static). It is therefore necessary to develop theoretical tools to assess under what conditions the construction quality demands are satisfied. Accordingly, this thesis is dedicated to carrying out analytical and numerical simulations to predict the flow performance of SCC under different casting processes, such as pumping and tremie applications, or casting using buckets. The L-Box and T-Box set-ups can evaluate the flow performance properties of SCC (e.g., flowability, passing ability, filling ability, shear-induced and gravitational dynamic segregation) in the casting process of wall and beam elements. The specific objective of the study consists of relating the numerical results of flow simulation of SCC in the L-Box and T-Box test set-ups, reported in this thesis, to the flow performance properties of SCC during casting. Accordingly, SCC is modeled as a heterogeneous material. Furthermore, an analytical model is proposed to predict the flow performance of SCC in the L-Box set-up using dam-break theory. On the other hand, the results of the numerical simulation of SCC casting in a reinforced beam are verified against experimental free-surface profiles. The results of numerical simulations of SCC casting (modeled as a single homogeneous fluid) are used to determine the critical zones corresponding to higher risks of segregation and blocking. The effects of rheological parameters, density, particle contents, distribution of reinforcing bars, and particle-bar interactions on the flow performance of SCC are evaluated using CFD simulations of SCC flow in the L-Box and T-Box test set-ups (modeled as a heterogeneous material). Finally, two new approaches are proposed to classify SCC mixtures based on their filling ability and performability properties, the latter combining the flowability, passing ability, and dynamic stability of SCC.
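On the rheological side, fresh concrete and SCC are commonly described by a Bingham (or Herschel-Bulkley) law, whose two parameters, yield stress and plastic viscosity, are the inputs to flow simulations of the kind described above. A minimal sketch of the Bingham relation with illustrative parameter values (not values from the thesis):

```python
def bingham_stress(gamma_dot, tau0, mu_p):
    """Bingham constitutive law, tau = tau0 + mu_p * gamma_dot.

    tau0      : yield stress [Pa] -- below it the material does not flow
    mu_p      : plastic viscosity [Pa.s]
    gamma_dot : shear rate [1/s]
    """
    return tau0 + mu_p * gamma_dot

def bingham_shear_rate(tau, tau0, mu_p):
    """Inverse relation: the material flows only once tau exceeds tau0."""
    return max(0.0, (tau - tau0) / mu_p)

# Illustrative SCC-like parameters (purely hypothetical values).
tau0, mu_p = 50.0, 50.0                           # Pa, Pa.s
stress = bingham_stress(2.0, tau0, mu_p)          # stress at 2 1/s
rate_below = bingham_shear_rate(30.0, tau0, mu_p) # below yield: no flow
rate_above = bingham_shear_rate(150.0, tau0, mu_p)
```

The yield stress is what lets SCC stop flowing and hold coarse particles at rest, while the plastic viscosity governs how fast it spreads, which is exactly the flowability/stability trade-off the abstract describes.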

Relevance: 30.00%

Abstract:

A prospective randomised controlled clinical trial of treatment decisions informed by invasive functional testing of coronary artery disease severity, compared with standard angiography-guided management, was implemented in 350 patients with a recent non-ST elevation myocardial infarction (NSTEMI) admitted to 6 hospitals in the National Health Service. The main aims of this study were to examine the utility of both invasive fractional flow reserve (FFR) and non-invasive cardiac magnetic resonance imaging (MRI) amongst patients with a recent diagnosis of NSTEMI. In summary, the findings of this thesis are: (1) the use of FFR combined with intravenous adenosine was feasible and safe amongst patients with NSTEMI and has clinical utility; (2) there was discordance between the visual, angiographic estimation of lesion significance and FFR; (3) the use of FFR led to changes in treatment strategy and an increase in prescription of medical therapy in the short term compared with an angiographically guided strategy; (4) the incidence of major adverse cardiac events (MACE) at 12 months follow-up was similar in the two groups. Cardiac MRI was used in a subset of patients enrolled in two hospitals in the West of Scotland. T1 and T2 mapping methods were used to delineate territories of acute myocardial injury. T1 and T2 mapping were superior to conventional T2-weighted dark blood imaging for estimation of the ischaemic area-at-risk (AAR), with less artifact, in NSTEMI. There was poor correlation between the angiographic AAR and MRI methods of AAR estimation in patients with NSTEMI. FFR had a high accuracy at predicting inducible perfusion defects demonstrated on stress perfusion MRI. This thesis describes the largest randomized trial published to date specifically looking at the clinical utility of FFR in the NSTEMI population. We have provided evidence of the diagnostic and clinical utility of FFR in this group of patients and provide evidence to inform larger studies. This thesis also describes the largest MRI cohort to date, including myocardial stress perfusion assessments, specifically looking at the NSTEMI population. We have demonstrated the diagnostic accuracy of FFR in predicting reversible ischaemia as referenced to a non-invasive gold standard with MRI. This thesis has also shown the futility of using dark blood oedema imaging amongst all-comer NSTEMI patients when compared to novel T1 and T2 mapping methods.
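For context, FFR itself is a simple pressure ratio measured at maximal hyperaemia; a sketch with illustrative pressures (not trial data):

```python
def fractional_flow_reserve(p_distal, p_aortic):
    """FFR = mean distal coronary pressure / mean aortic pressure, both
    measured at maximal hyperaemia (e.g. during intravenous adenosine).
    By convention, FFR <= 0.80 is treated as haemodynamically significant.
    """
    return p_distal / p_aortic

# Illustrative pressures [mmHg], purely hypothetical values.
ffr = fractional_flow_reserve(p_distal=68.0, p_aortic=95.0)
significant = ffr <= 0.80
```

Because the ratio is measured rather than visually estimated, it can disagree with the angiographic impression of a lesion, which is the discordance reported in finding (2).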

Relevance: 30.00%

Abstract:

Dynamic global vegetation models (DGVMs) simulate surface processes such as the transfer of energy, water, CO2, and momentum between the terrestrial surface and the atmosphere, biogeochemical cycles, carbon assimilation by vegetation, phenology, and land use change under scenarios of varying atmospheric CO2 concentrations. DGVMs increase the complexity and the Earth system representation when they are coupled with atmospheric global circulation models (AGCMs) or climate models. However, plant physiological processes are still a major source of uncertainty in DGVMs. The maximum velocity of carboxylation (Vcmax), for example, has a direct impact on productivity in the models. This parameter is often underestimated or imprecisely defined for the various plant functional types (PFTs) and ecosystems. Vcmax is directly related to photosynthesis acclimation (loss of response to elevated CO2), a widely known phenomenon that usually occurs when plants are subjected to elevated atmospheric CO2 and might affect productivity estimation in DGVMs. Despite this, current models have improved substantially compared to earlier models, which had a rudimentary and very simple representation of vegetation-atmosphere interactions. In this paper, we describe this evolution through generations of models and the main events that contributed to their improvements up to the current state-of-the-art class of models. We also describe some of the main challenges for further improvements to DGVMs.
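Vcmax enters these models through the Farquhar photosynthesis equations: the Rubisco-limited carboxylation rate scales linearly with Vcmax but saturates with intercellular CO2, which is why a mis-specified Vcmax directly biases simulated productivity and why acclimation (an effective downward drift of Vcmax under elevated CO2) matters. A sketch using commonly quoted 25 °C kinetic constants (illustrative values; DGVMs parameterize these per PFT):

```python
def rubisco_limited_rate(vcmax, ci, gamma_star=42.75, kc=404.9,
                         ko=278.4, o=210.0):
    """Rubisco-limited carboxylation rate from the Farquhar et al. (1980) model:

        Wc = Vcmax * (Ci - Gamma*) / (Ci + Kc * (1 + O / Ko))

    vcmax [umol m-2 s-1]; ci, gamma_star, kc in [umol mol-1];
    o, ko in [mmol mol-1]. Default constants are commonly cited 25 degC
    values and are illustrative, not taken from the paper.
    """
    return vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))

# Doubling Ci from 280 to 560 umol/mol raises Wc, but less than
# proportionally, because the response saturates.
w_ambient = rubisco_limited_rate(vcmax=60.0, ci=280.0)
w_elevated = rubisco_limited_rate(vcmax=60.0, ci=560.0)
```

The sub-proportional response to doubled CO2 is the mechanistic backdrop for the acclimation issue the paper highlights: if Vcmax also declines under elevated CO2, the realized productivity gain is smaller still.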