943 results for Travel time prediction
Abstract:
Space-time correlations, or Eulerian two-point two-time correlations of fluctuating velocities, are analytically and numerically investigated in turbulent shear flows. An elliptic model for the space-time correlations in the inertial range is developed from similarity assumptions on the isocorrelation contours: they share a uniform preference direction and a constant aspect ratio. The similarity assumptions are justified using the Kolmogorov similarity hypotheses and verified using direct numerical simulation (DNS) of turbulent channel flows. The model relates the space-time correlations to the space correlations via the convection and sweeping characteristic velocities. The analytical expressions for the convection and sweeping velocities are derived from the Navier-Stokes equations for homogeneous turbulent shear flows, where the convection velocity is represented by the mean velocity and the sweeping velocity is the sum of the random sweeping velocity and the shear-induced velocity. This suggests that, unlike Taylor’s model where the convection velocity is dominant and Kraichnan and Tennekes’ model where the random sweeping velocity is dominant, the decorrelation time scales of the space-time correlations in turbulent shear flows are determined by the convection velocity, the random sweeping velocity, and the shear-induced velocity. The model predicts a universal form of the space-time correlations with the two characteristic velocities. The DNS of turbulent channel flows supports this prediction: the correlation functions exhibit a fairly good collapse when plotted against the normalized space and time separations defined by the elliptic model.
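The elliptic construction described in this abstract can be summarized in a single relation. The symbols below (U for the convection velocity, V for the combined sweeping velocity) are a sketch of the model's form, not a transcription of the paper's own equations:

```latex
% Elliptic model sketch: the space-time correlation collapses onto the
% space correlation evaluated at an "elliptic" separation r_E built
% from the convection velocity U and the sweeping velocity V.
R(r, \tau) \approx R(r_E, 0), \qquad
r_E^{2} = (r - U\tau)^{2} + V^{2}\tau^{2}
```

Taylor's frozen-flow picture is recovered in the limit V = 0 and the random-sweeping picture in the limit U = 0, consistent with the two limiting models contrasted in the abstract.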
Abstract:
The application of large-eddy simulation (LES) to particle-laden turbulence raises a fundamental question: can an LES with a subgrid-scale (SGS) model correctly predict Lagrangian time correlations (LTCs)? Most currently existing SGS models are constructed from the energy budget equations. They are therefore able to correctly predict energy spectra, but they may not ensure a correct prediction of the LTCs. Previous research has investigated the effect of SGS modeling on the Eulerian time correlations. This paper is devoted to studying the LTCs in LES. A direct numerical simulation (DNS) and an LES with a spectral eddy viscosity model are performed for isotropic turbulence, and the LTCs are calculated using the passive vector method. Both a priori and a posteriori tests are carried out. It is observed that the subgrid-scale contributions to the LTCs cannot simply be ignored and that the LES overpredicts the LTCs relative to the DNS. It is concluded from the straining hypothesis that an accurate prediction of enstrophy spectra is most critical to the prediction of the LTCs.
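The Lagrangian time correlation itself is a standard quantity. As a point of reference, a minimal estimator from velocities sampled along particle trajectories might look as follows; the function name and input layout are assumptions, and the passive vector method used in the paper is not reproduced here:

```python
import numpy as np

def lagrangian_time_correlation(u, dt):
    """Normalized Lagrangian velocity autocorrelation R_L(tau).

    u  : array of shape (n_particles, n_times) holding one velocity
         component sampled along particle trajectories (hypothetical
         input layout; the paper obtains trajectories via the passive
         vector method, which is not reproduced here).
    dt : sampling interval along the trajectories.
    """
    u = u - u.mean(axis=1, keepdims=True)      # remove per-trajectory mean
    n_t = u.shape[1]
    corr = np.empty(n_t)
    for lag in range(n_t):                     # average over particles and time
        corr[lag] = np.mean(u[:, : n_t - lag] * u[:, lag:])
    taus = dt * np.arange(n_t)
    return taus, corr / corr[0]                # normalize so R_L(0) = 1
```

Comparing this quantity between DNS trajectories and LES trajectories is the kind of a posteriori test the abstract describes.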
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on the use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit (ADI) approach; 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy: previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary-layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth of the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations, an explicit solver would require truly prohibitive computing times.
As demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations further exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
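The alternating-direction idea at the heart of the implicit solver can be illustrated on a much simpler problem. The sketch below applies one classical Peaceman-Rachford ADI step to the 2D heat equation on a periodic grid; it is not the thesis's BDF-based compressible solver, only a toy showing why ADI steps remain stable far beyond the explicit CFL limit:

```python
import numpy as np

def adi_heat_step(u, nu, dt, h):
    """One Peaceman-Rachford ADI step for u_t = nu * (u_xx + u_yy) on a
    periodic n-by-n grid. Each half step is implicit in one coordinate
    direction and explicit in the other, so the work reduces to
    one-dimensional solves instead of a coupled 2D solve. (The thesis
    couples ADI with high-order BDF formulae for the compressible
    Navier-Stokes equations; none of that is attempted here.)"""
    n = u.shape[0]
    r = nu * dt / (2.0 * h * h)
    # Periodic 1D second-difference matrix, dense for clarity.
    D = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    D[0, -1] = D[-1, 0] = 1.0
    A = np.eye(n) - r * D                          # implicit factor
    B = np.eye(n) + r * D                          # explicit factor
    u_half = np.linalg.solve(A, u @ B)             # implicit in x, explicit in y
    return np.linalg.solve(A, (B @ u_half).T).T    # implicit in y, explicit in x
```

With dt chosen far above the explicit diffusion limit h²/(4ν), the amplification factor of every Fourier mode stays at or below one, which is the (linear, constant-coefficient) analogue of the quasi-unconditional stability discussed in the thesis.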
Abstract:
Current earthquake early warning systems usually make magnitude and location predictions and send out a warning to users based on those predictions. We describe an algorithm that assesses the validity of the predictions in real time. Our algorithm monitors the envelopes of horizontal and vertical acceleration, velocity, and displacement. We compare the observed envelopes with the ones predicted by Cua & Heaton's envelope ground motion prediction equations (Cua 2005). We define a "test function" as the logarithm of the ratio between observed and predicted envelopes at every second in real time. Once the envelopes deviate beyond an acceptable threshold, we declare a misfit. The kurtosis and skewness of the time-evolving test function are used to rapidly identify a misfit. Real-time kurtosis and skewness calculations are also inputs to both probabilistic (Logistic Regression and Bayesian Logistic Regression) and non-probabilistic (Least Squares and Linear Discriminant Analysis) models that ultimately decide whether there is an unacceptable level of misfit. The algorithm is designed to work across a wide range of amplitude scales. When tested with synthetic and actual seismic signals from past events, it works for both small and large events.
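The test function and its shape statistics are simple to compute. The sketch below follows the description in the abstract; the function name, array layout, and the epsilon floor guarding the logarithm are illustrative assumptions:

```python
import numpy as np

def misfit_features(observed, predicted, eps=1e-12):
    """Sketch of the real-time misfit test described above.

    The test function is the logarithm of the ratio of observed to
    predicted envelope amplitudes; its skewness and excess kurtosis,
    recomputed as the window grows second by second, are the features
    fed to the misfit classifiers.
    """
    z = np.log((np.asarray(observed, float) + eps) /
               (np.asarray(predicted, float) + eps))

    def skewness(x):
        d = x - x.mean()
        s = d.std()
        return 0.0 if s == 0 else float(np.mean(d ** 3) / s ** 3)

    def kurtosis(x):                       # excess kurtosis (0 for a Gaussian)
        d = x - x.mean()
        s = d.std()
        return 0.0 if s == 0 else float(np.mean(d ** 4) / s ** 4 - 3.0)

    skews = [skewness(z[:t]) for t in range(2, len(z) + 1)]
    kurts = [kurtosis(z[:t]) for t in range(2, len(z) + 1)]
    return z, np.array(skews), np.array(kurts)
```

A perfect prediction gives a test function that is identically zero, so any sustained excursion of the growing-window statistics signals a misfit.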
Abstract:
In the first section of this thesis, two-dimensional properties of the human eye movement control system were studied. The vertical-horizontal interaction was investigated using a two-dimensional target motion consisting of a sinusoid in one direction (vertical or horizontal) and low-pass filtered Gaussian random motion of variable bandwidth (and hence information content) in the orthogonal direction. It was found that the random motion reduced the efficiency of the sinusoidal tracking. However, the sinusoidal tracking was only slightly dependent on the bandwidth of the random motion. Thus the system should be thought of as consisting of two independent channels with a small amount of mutual cross-talk.
These target motions were then rotated to discover whether or not the system is capable of recognizing the two-component nature of the target motion. That is, the sinusoid was presented along an oblique line (neither vertical nor horizontal) with the random motion orthogonal to it. The system did not simply track the vertical and horizontal components of motion, but rotated its frame of reference so that its two tracking channels coincided with the directions of the two target motion components. This recognition occurred even when the two orthogonal motions were both random, but with different bandwidths.
In the second section, time delays, prediction and power spectra were examined. Time delays were calculated in response to various periodic signals, various bandwidths of narrow-band Gaussian random motions and sinusoids. It was demonstrated that prediction occurred only when the target motion was periodic, and only if the harmonic content was such that the signal was sufficiently narrow-band. It appears as if general periodic motions are split into predictive and non-predictive components.
For unpredictable motions, the relationship between the time delay and the average speed of the retinal image was linear. Based on this, I proposed a model explaining the time delays for both random and periodic motions. My experiments did not prove that the system is a sampled-data system, or that it is continuous. However, the model can be interpreted as representative of a sampled-data system whose sample interval is a function of the target motion.
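The linear relation reported here is the kind of model a least-squares fit recovers directly. The numbers below are made-up illustrative data, not measurements from the thesis:

```python
import numpy as np

# Illustration of the linear delay relation described above: for
# unpredictable target motions, tracking delay grows linearly with the
# average retinal image speed. Speeds and coefficients are hypothetical.
speeds = np.array([2.0, 5.0, 10.0, 20.0])   # retinal image speed, deg/s
delays = 0.150 + 0.004 * speeds             # tracking delay, s (exactly linear)
slope, intercept = np.polyfit(speeds, delays, 1)  # recover the coefficients
```

On exactly linear data the fit returns the generating slope and intercept, which is the sense in which the delay-versus-speed relation characterizes the non-predictive pathway.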
It was shown that increasing the bandwidth of the low-pass filtered Gaussian random motion resulted in an increase of the eye movement bandwidth. Some properties of the eyeball-muscle dynamics and the extraocular muscle "active state tension" were derived.
Abstract:
In a multi-target complex network, the links (Lij) represent the interactions between the drug (di) and the target (tj), characterized by different experimental measures (Ki, Km, IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (cj). In this work, we use Shannon entropy measures to develop a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the CHEMBL database. The model correctly predicts >8300 experimental outcomes with Accuracy, Specificity, and Sensitivity above 80%-90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols in relation to >150 molecular and cellular targets in 11 different organisms (including human). Furthermore, we report for the first time the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assay in the absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a large number of pharmacological tests not carried out experimentally.
Abstract:
Background: Little is known about how sitting time, alone or in combination with markers of physical activity (PA), influences mental well-being and work productivity. Given the need to develop workplace PA interventions that target employees' health-related efficiency outcomes, this study examined the associations between self-reported sitting time, PA, mental well-being and work productivity in office employees. Methods: Descriptive cross-sectional study. Spanish university office employees (n = 557) completed a survey measuring socio-demographics, total and domain-specific (work and travel) self-reported sitting time, PA (International Physical Activity Questionnaire, short version), mental well-being (Warwick-Edinburgh Mental Well-Being Scale) and work productivity (Work Limitations Questionnaire). Multivariate linear regression analyses determined associations between the main variables, adjusted for gender, age, body mass index and occupation. PA levels (low, moderate and high) were introduced into the model to examine interactive associations. Results: Higher volumes of PA were related to higher mental well-being, work productivity and spending less time sitting at work, throughout the working day and while travelling during the week, including the weekends (p < 0.05). Greater levels of sitting during weekends were associated with lower mental well-being (p < 0.05). Similarly, more sitting while travelling at weekends was linked to lower work productivity (p < 0.05). In highly active employees, higher sitting times on work days and occupational sitting were associated with decreased mental well-being (p < 0.05). Higher sitting time while travelling on weekend days was also linked to lower work productivity in the highly active (p < 0.05). No significant associations were observed in low-active employees. Conclusions: Employees' PA levels exert different influences on the associations between sitting time, mental well-being and work productivity.
The specific associations and the broad sweep of evidence in the current study suggest that workplace PA strategies to improve the mental well-being and productivity of all employees should focus on reducing sitting time alongside efforts to increase PA.
Abstract:
The sound emission from open turbulent flames is dictated by the two-point spatial correlation of the rate of change of the fluctuating heat release rate, and this correlation has not been investigated directly in past studies. Turbulent premixed flame data from DNS and laser diagnostics are analyzed to study this correlation function and the two-point spatial correlation of the fluctuating heat release rate. The analysis shows that the correlation functions have simple Gaussian forms, whose integral length scale is related to the laminar flame thickness and whose amplitude depends on the spatial distribution of the time-mean rate of heat release. These results and a RANS-CFD solution of open turbulent premixed flames are post-processed to obtain the far-field SPL, which agrees well with measured values. © 2010 by the American Institute of Aeronautics and Astronautics, Inc.
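The Gaussian correlation form reported above can be written compactly. The normalization of the length scale below is an assumption for illustration, not the paper's exact definition:

```latex
% Sketch of the reported Gaussian two-point correlation of the
% fluctuating heat release rate: integral length scale l set by the
% laminar flame thickness delta_L, amplitude A set by the spatial
% distribution of the time-mean heat release rate \bar{\dot{q}}.
R(\Delta x) = A \exp\!\left(-\frac{\Delta x^{2}}{l^{2}}\right),
\qquad l \propto \delta_L,
\qquad A = A\!\left(\bar{\dot{q}}\right)
```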
Abstract:
The primary objective of this study was to predict the distribution of mesophotic hard corals in the Au‘au Channel in the Main Hawaiian Islands (MHI). Mesophotic hard corals are light-dependent corals adapted to the low light conditions at approximately 30 to 150 m in depth. Several physical factors potentially influence their spatial distribution, including aragonite saturation, alkalinity, pH, currents, water temperature, hard substrate availability and the availability of light at depth. Mesophotic corals and mesophotic coral ecosystems (MCEs) have increasingly been the subject of scientific study because they are being threatened by a growing number of anthropogenic stressors. They are the focus of this spatial modeling effort because the Hawaiian Islands Humpback Whale National Marine Sanctuary (HIHWNMS) is exploring the expansion of its scope—beyond the protection of the North Pacific Humpback Whale (Megaptera novaeangliae)—to include the conservation and management of these ecosystem components. The present study helps to address this need by examining the distribution of mesophotic corals in the Au‘au Channel region. This area is located between the islands of Maui, Lanai, Molokai and Kahoolawe, and includes parts of the Kealaikahiki, Alalākeiki and Kalohi Channels. It is unique, not only in terms of its geology, but also in terms of its physical oceanography and local weather patterns. Several physical conditions make it an ideal place for mesophotic hard corals, including consistently good water quality and clarity because it is flushed by tidal currents semi-diurnally; it has low amounts of rainfall and sediment run-off from the nearby land; and it is largely protected from seasonally strong wind and wave energy. Combined, these oceanographic and weather conditions create patches of comparatively warm, calm, clear waters that remain relatively stable through time. 
Freely available Maximum Entropy modeling software (MaxEnt 3.3.3e) was used to create four separate maps of predicted habitat suitability for: (1) all mesophotic hard corals combined, (2) Leptoseris, (3) Montipora and (4) Porites genera. MaxEnt works by analyzing the distribution of environmental variables where species are present, so that it can find other areas that meet the same environmental constraints. Several steps (Figure 0.1) were required to produce and validate four ensemble predictive models (i.e., models with 10 replicates each). Approximately 2,000 georeferenced records containing information about mesophotic coral occurrence, together with 34 environmental predictors describing the seafloor’s depth, vertical structure, available light, surface temperature, currents and distance from shoreline at three spatial scales, were used to train MaxEnt. Fifty percent of the 1,989 records were randomly chosen and set aside to assess each model replicate’s performance using Receiver Operating Characteristic (ROC) Area Under the Curve (AUC) values. An additional 1,646 records were also randomly chosen and set aside to independently assess the predictive accuracy of the four ensemble models. Suitability thresholds for these models (denoting where corals were predicted to be present or absent) were chosen by finding where the maximum number of correctly predicted presence and absence records intersected on each ROC curve. Permutation importance and jackknife analysis were used to quantify the contribution of each environmental variable to the four ensemble models.
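The threshold rule described above (maximizing correctly predicted presences and absences on the ROC curve) amounts to a scan over candidate cutoffs. The sketch below illustrates that rule; the function name and data layout are assumptions, not MaxEnt internals:

```python
import numpy as np

def best_suitability_threshold(scores, present):
    """Pick the presence/absence cutoff on a ROC curve -- a sketch of
    the threshold choice described above: the score at which
    sensitivity + specificity (correctly predicted presences plus
    correctly predicted absences) is maximized."""
    scores = np.asarray(scores, float)
    present = np.asarray(present, bool)
    best_t, best_j = scores[0], -np.inf
    for t in np.unique(scores):                   # every observed score is a candidate
        tpr = np.mean(scores[present] >= t)       # sensitivity
        tnr = np.mean(scores[~present] < t)       # specificity
        if tpr + tnr > best_j:
            best_j, best_t = tpr + tnr, t
    return best_t
```

Applied to each of the ten model replicates, such a cutoff converts continuous suitability maps into the binary present/absent maps the study validates against its held-out records.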
Abstract:
A description is presented of a time-marching calculation of the unsteady flow generated by the interaction of upstream wakes with a moving blade row. The inviscid equations of motion are solved using a finite volume technique. Wake dissipation is modeled using an artificial viscosity. Predictions are presented for the rotor mid-span section of an axial turbine. Reasonable agreement is found between the predicted and measured unsteady blade surface static pressures and velocities. These and other results confirm that simple theories can be used to explain the phenomena of rotor-stator wake interactions.
Abstract:
Accurate predictions of combustor hot streak migration enable the turbine designer to identify high-temperature regions that can limit component life. It is therefore important that these predictions are achieved within the short time scales of a design process. This article compares temperature measurements of a circular hot streak through a turning duct and a research turbine with predictions using a three-dimensional Reynolds-averaged Navier-Stokes solver. It was found that the mixing length turbulence model did not predict the hot streak dissipation accurately. However, implementation of a very simple model of the free stream turbulence (FST) significantly improved the exit temperature predictions on both the duct and research turbine. One advantage of the simple FST model described over more complex alternatives is that no additional equations are solved. This makes the method attractive for design purposes, as it is not associated with any increase in computational time.
Abstract:
Thus far, most studies of the operational energy use of buildings fail to take a longitudinal view; in other words, they do not take into account how operational energy use changes during the lifetime of a building. However, such a view is important when predicting the impact of climate change, or for long-term energy accounting purposes. This article presents an approach to deliver a longitudinal prediction of operational energy use. The work is based on a review of deterioration in thermal performance, building maintenance effects, and future climate change. The key issues are estimating the service life expectancy and thermal performance degradation of building components while building maintenance and changing weather conditions are considered at the same time. Two examples are presented to demonstrate the application of the deterministic and stochastic approaches, respectively. The work concludes that longitudinal prediction of operational energy use is feasible, but the prediction will depend largely on the availability of extensive and reliable monitoring data. This premise is not met in most current buildings. © 2011 Elsevier Ltd.
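The deterministic flavor of such a longitudinal prediction can be sketched in a few lines: annual energy use drifts upward as thermal performance degrades, and maintenance periodically restores it. All rates and intervals below are illustrative assumptions, not values from the article:

```python
def longitudinal_energy(e0, years, degr_rate=0.01, maint_every=10):
    """Annual operational energy over a building's life -- a minimal
    deterministic sketch of the longitudinal idea described above.
    Performance loss accumulates linearly each year (degr_rate per
    year, hypothetical) and maintenance every maint_every years
    restores the original performance."""
    energy, loss = [], 0.0
    for y in range(years):
        if maint_every and y > 0 and y % maint_every == 0:
            loss = 0.0                    # maintenance restores performance
        energy.append(e0 * (1.0 + loss))  # this year's energy use
        loss += degr_rate                 # thermal performance degrades
    return energy
```

A stochastic variant would draw the degradation rate and maintenance timing from distributions, which is the second approach the article demonstrates.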
Abstract:
Climate change is becoming a serious issue for the construction industry, since the time scales at which climate change takes place can be expected to show a true impact on the thermal performance of buildings and HVAC systems. In predicting this future building performance by means of building simulation, the underlying assumptions regarding thermal comfort conditions and the related heating, ventilating and air conditioning (HVAC) control set points become important. This article studies the thermal performance of a reference office building with mixed-mode ventilation in the UK, using static and adaptive thermal comfort approaches, for a series of time horizons (2020, 2050 and 2080). The results demonstrate the importance of implementing adaptive thermal comfort models, and underpin the case for their use in climate change impact studies. Adaptive thermal comfort can also be used by building designers to make buildings more resilient to change. © 2010 International Building Performance Simulation Association (IBPSA).
Abstract:
To calculate the noise emanating from a turbulent flow using an acoustic analogy, knowledge concerning the unsteady characteristics of the turbulence is required. Specifically, the form of the turbulent correlation tensor together with various time- and length-scales is needed. However, if a Reynolds-Averaged Navier-Stokes calculation is used as the starting point, then one can only obtain steady characteristics of the flow, and it is necessary to model the unsteady behavior in some way. While considerable attention has been given to the correct way to model the form of the correlation tensor, less attention has been given to the underlying physics that dictate the proper choice of time-scale. In this paper, the authors recognize that there are several time-dependent processes occurring within a turbulent flow and propose a new way of obtaining the time-scale. Isothermal single-stream jets with Mach numbers 0.75 and 0.90 have been chosen for the present study. The Mani-Gliebe-Balsa-Khavaran method has been used to predict the noise at different angles, and there is good agreement between the noise predictions and observations. Furthermore, the new time-scale has an inherent frequency dependency that arises naturally from the underlying physics, thus avoiding the supplementary mathematical enhancements needed in previous modeling.