943 results for Travel time prediction


Relevance: 30.00%

Abstract:

In this paper, a refined classic noise prediction method based on the VISSIM traffic simulator and the FHWA noise prediction model is formulated to analyze the sound level contributed by traffic on the Nanjing Lukou airport connecting freeway before and after widening. The aims of this research are to (i) assess the traffic noise impact on the Nanjing University of Aeronautics and Astronautics (NUAA) campus before and after freeway widening, (ii) compare the prediction results with field data to test the accuracy of the method, and (iii) analyze the relationship between traffic characteristics and sound level. The results indicate that the mean difference between model predictions and field measurements is acceptable. The traffic composition analysis indicates that buses (including mid-sized trucks) and heavy goods vehicles contribute a significant proportion of the total noise power despite their low traffic volume. In addition, speed analysis offers an explanation for the minor differences in noise level across time periods. Future work will aim at reducing model error by focusing on noise barrier analysis using the FEM/BEM method and by modifying the vehicle noise emission equation through field experimentation.
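Since FHWA-style models combine per-class contributions on an energy basis rather than by adding decibel values, a minimal sketch of that combination step is shown below; the class levels are hypothetical illustrations, not values from the study.

```python
import math

# Per-class hourly Leq contributions at a receiver (dBA) -- hypothetical values
# for cars, buses/mid-size trucks and heavy goods vehicles respectively.
class_leq_dba = [62.0, 58.5, 60.2]

# Sound levels add on an energy basis: 10*log10 of the summed 10^(L/10) terms.
# This is why low-volume heavy vehicles can dominate the total noise power.
total_leq = 10 * math.log10(sum(10 ** (L / 10) for L in class_leq_dba))
print(f"Combined Leq: {total_leq:.1f} dBA")
```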

Relevance: 30.00%

Abstract:

Acidity, in terms of pH and titratable acidity, influences the texture and flavour of fermented dairy products such as Kefir. However, the standard methods for determining pH and titratable acidity (TA) are time-consuming. Near-infrared (NIR) spectroscopy is a non-destructive method that can predict multiple traits simultaneously from a single scan, and it can be used to predict pH and TA. The best pH NIR calibration model was obtained with no spectral pre-treatment, whereas smoothing was found to be the best pre-treatment for the TA calibration model. Under cross-validation, the prediction results were acceptable for both pH and TA. External validation gave similar results, and both models were found to be acceptable for screening purposes.
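As a rough illustration of this kind of calibration workflow (not the study's actual pipeline), the sketch below fits a PLS model to smoothed spectra under cross-validation; the data, component count and smoothing window are all assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical data: NIR spectra (samples x wavelengths) and reference
# titratable acidity (TA) values from the standard wet-chemistry method.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))            # stand-in for real Kefir spectra
y = rng.normal(loc=0.9, scale=0.1, size=60)

# Smoothing pre-treatment (Savitzky-Golay) before the TA calibration, since
# the abstract reports smoothing gave the best TA model; the pH model would
# use the raw spectra (no pre-treatment).
X_smooth = savgol_filter(X, window_length=11, polyorder=2, axis=1)

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X_smooth, y, cv=10)
rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
print(f"RMSECV: {rmsecv:.3f}")
```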

Relevance: 30.00%

Abstract:

Odour from meat chicken (broiler) farms is a normal part of broiler production, but it is an environmental issue affecting the sustainable development of the chicken meat industry. Odour plumes exhausted from broiler sheds interact with the environment, where dispersion and dilution of the odours vary constantly, especially diurnally. The potential for odour impacts is greatest when odour emission rates are high and/or when atmospheric dispersion and dilution of odour plumes are limited (i.e. during stable conditions). We continuously monitored ventilation rate, on-site weather conditions and atmospheric stability, and estimated odour concentration with an artificial olfaction system. Detailed inspection of odour emission rates at critical times (dawn, dusk and night) revealed that the daily and batch maxima in odour emission rate are not necessarily the cause of odour impacts; periods of lower odour emission rates on each day are more likely to correspond with odour impacts. Odour emission rates therefore need to be measured at the times when odour impacts are most likely to occur, which is likely to be at night. Additionally, high-resolution ventilation rate data should be sought to improve odour emission models, especially at critical times of the day. Consultants, regulators and researchers need to give more thought to odour emission rates from meat chicken farms to improve the prediction and management of odour impacts.
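The emission-rate bookkeeping behind this argument is simple: an odour emission rate is the product of odour concentration and shed ventilation rate. A minimal sketch with hypothetical readings (the periods and numbers are assumptions, not the monitored data):

```python
# Hypothetical shed data: odour concentration estimated by an artificial
# olfaction system (odour units per m^3) and measured ventilation rate (m^3/s).
readings = [
    ("dawn",  450.0,  8.0),
    ("noon",  250.0, 45.0),
    ("dusk",  500.0, 10.0),
    ("night", 600.0,  5.0),
]

# Odour emission rate = concentration x ventilation rate (ou/s). A low OER at
# night can still cause impacts when stable air limits dispersion and dilution.
for period, conc_ou_m3, vent_m3_s in readings:
    oer = conc_ou_m3 * vent_m3_s
    print(f"{period:>5}: OER = {oer:,.0f} ou/s")
```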

Relevance: 30.00%

Abstract:

Background: Sedentary behaviour is associated with several deleterious health consequences. Although device-based measures of sedentary time are available, they are costly and do not capture domain-specific sedentary time. High-quality self-report measures are necessary to accurately capture domain-specific sedentary time and to provide an alternative to devices when cost is an issue. In this study, the Past-day Adults' Sedentary Time (PAST) questionnaire, previously shown to have acceptable validity and reliability in a sample of breast cancer survivors, was modified for a university sample, and the validity of the modified questionnaire was examined against the activPAL monitor. Methods: Participants (n = 58, age 18-55 years, 48% female, 66% students) were recruited from the University of Queensland (students and staff). They answered the PAST questionnaire, which asked about time spent sitting or lying down for work, study, travel, television viewing, leisure-time computer use, reading, eating, socialising and other purposes during the previous day. Time reported for these questions was summed to provide a measure of total sedentary time. Participants also wore an activPAL device for the full day prior to completing the questionnaire and recorded their wake and sleep times in an activity log. Total waking sedentary time derived from the activPAL was used as the criterion measure. Correlation (Pearson's r) and agreement (Bland-Altman plots) between PAST and activPAL sedentary time were examined. Results: Participants were sedentary (activPAL-determined) for approximately 66% of waking hours. The correlation between PAST and activPAL sedentary time for the whole sample was r = 0.50 [95% confidence interval (CI) = 0.28-0.67], and was higher for non-students (r = 0.63, 95% CI = 0.26-0.84) than for students (r = 0.46, 95% CI = 0.16-0.68). Bland-Altman plots revealed that the mean difference between the two measures was 19 min, although the limits of agreement were wide (95% limits of agreement -4.1 to 4.7 h). Discussion: The PAST questionnaire provides an acceptable measure of sedentary time in this population, which included students and adults with high workplace sitting. These findings support earlier research indicating that questionnaires employing past-day recall of sedentary time provide a viable alternative to existing sedentary behaviour questionnaires.
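A minimal sketch of the two agreement statistics used here (Pearson's r, and Bland-Altman bias with 95% limits of agreement), computed on simulated paired data rather than the study's measurements:

```python
import numpy as np

# Hypothetical paired measurements of daily sedentary time (hours):
# PAST questionnaire vs activPAL-derived waking sedentary time.
rng = np.random.default_rng(1)
activpal = rng.normal(10.5, 1.5, size=58)
past = activpal + rng.normal(0.3, 2.2, size=58)

# Pearson correlation between the two measures.
r = np.corrcoef(past, activpal)[0, 1]

# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement
# (bias +/- 1.96 SD of the paired differences), as used in the study.
diff = past - activpal
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"r = {r:.2f}, bias = {bias:.2f} h, "
      f"LoA = [{bias - half_width:.1f}, {bias + half_width:.1f}] h")
```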

Relevance: 30.00%

Abstract:

BACKGROUND: In order to rapidly and efficiently screen potential biofuel feedstock candidates for desirable traits, robust high-throughput analytical techniques must be developed and honed. The traditional methods of measuring the lignin syringyl/guaiacyl (S/G) ratio can be laborious, involve hazardous reagents, and/or be destructive. Vibrational spectroscopy can furnish high-throughput instrumentation without the limitations of the traditional techniques. Spectral data from mid-infrared, near-infrared, and Raman spectroscopies were combined with S/G ratios, obtained by pyrolysis molecular beam mass spectrometry, from 245 eucalypt and Acacia trees across 17 species. Iterations of spectral processing allowed the assembly of robust predictive models using partial least squares (PLS). RESULTS: The PLS models were rigorously evaluated using three different randomly generated calibration and validation sets for each spectral processing approach. Root-mean-square errors of prediction (RMSEP) for the validation sets were lowest for models built from Raman (0.13 to 0.16) and mid-infrared (0.13 to 0.15) spectral data, while near-infrared spectroscopy led to more erroneous predictions (0.18 to 0.21). Correlation coefficients (r) for the validation sets followed a similar pattern: Raman (0.89 to 0.91), mid-infrared (0.87 to 0.91), and near-infrared (0.79 to 0.82). These statistics signify that Raman and mid-infrared spectroscopy led to the most accurate predictions of the S/G ratio in a diverse consortium of feedstocks. CONCLUSION: Eucalypts present an attractive option for biofuel and biochemical production. Given the assortment of over 900 different species of Eucalyptus and Corymbia, in addition to various species of Acacia, it is necessary to isolate those possessing ideal biofuel traits. This research demonstrates the validity of vibrational spectroscopy for efficiently partitioning potential biofuel feedstocks according to lignin S/G ratio, significantly reducing experiment and analysis time and expense while providing non-destructive, accurate, global predictive models encompassing a diverse array of feedstocks.
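A sketch of the evaluation protocol described (three random calibration/validation splits, each scored by RMSEP and r) is given below; the spectra, reference values, split fraction and PLS component count are stand-ins, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the 245 spectra (Raman/MIR/NIR) and the
# pyMBMS-measured lignin S/G ratios used as reference values.
rng = np.random.default_rng(2)
X = rng.normal(size=(245, 1024))
y = rng.uniform(1.5, 3.5, size=245)

# Three random calibration/validation splits, reporting RMSEP and r on each
# held-out validation set, mirroring the paper's evaluation scheme.
for seed in (0, 1, 2):
    X_cal, X_val, y_cal, y_val = train_test_split(
        X, y, test_size=0.25, random_state=seed)
    y_hat = PLSRegression(n_components=10).fit(X_cal, y_cal).predict(X_val).ravel()
    rmsep = np.sqrt(np.mean((y_val - y_hat) ** 2))
    r = np.corrcoef(y_val, y_hat)[0, 1]
    print(f"split {seed}: RMSEP = {rmsep:.2f}, r = {r:.2f}")
```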

Relevance: 30.00%

Abstract:

High-throughput techniques are necessary to efficiently screen potential lignocellulosic feedstocks for the production of renewable fuels, chemicals, and bio-based materials, thereby reducing experimental time and expense while supplanting tedious, destructive methods. The ratio of lignin syringyl (S) to guaiacyl (G) monomers is routinely quantified as a way to probe biomass recalcitrance. Mid-infrared and Raman spectroscopy have been shown to produce robust partial least squares (PLS) models for the prediction of lignin S/G ratios in a diverse group of Acacia and eucalypt trees. The most accurate Raman model has now been used to predict the S/G ratio of 269 unknown Acacia and eucalypt feedstocks. This study demonstrates the application of a PLS model, built from Raman spectral data and lignin S/G ratios measured by pyrolysis/molecular beam mass spectrometry (pyMBMS), to an unknown data set. The predicted S/G ratios were averaged by plant species, and the species means were not found to differ from the pyMBMS means when the mean values of each method were compared within the 95% confidence interval. Pairwise comparisons within each data set were used to assess statistical differences between biomass species. While some pairwise comparisons failed to differentiate between species, the Acacias in both data sets clearly display significant differences in S/G composition that distinguish them from the eucalypts. This research shows the power of Raman spectroscopy to supplant tedious, destructive methods for evaluating the lignin S/G ratio of diverse plant biomass materials.
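As an illustration of the species-level comparison described, the sketch below checks whether the 95% t-confidence intervals for the two methods' means overlap; the per-species values are hypothetical, and CI overlap is used here as a simple stand-in for the paper's statistical comparison.

```python
import numpy as np
from scipy import stats

def mean_ci95(values):
    """Mean and 95% t-confidence interval for one species' S/G ratios."""
    values = np.asarray(values, dtype=float)
    m = values.mean()
    half = stats.t.ppf(0.975, len(values) - 1) * stats.sem(values)
    return m, (m - half, m + half)

# Hypothetical per-tree ratios for one species: Raman-PLS predictions vs
# pyMBMS reference measurements.
predicted = [2.10, 2.25, 2.05, 2.18, 2.30]
measured = [2.15, 2.20, 2.35, 2.08, 2.22]

(mp, ci_p), (mm, ci_m) = mean_ci95(predicted), mean_ci95(measured)
overlap = ci_p[0] <= ci_m[1] and ci_m[0] <= ci_p[1]
print(f"predicted {mp:.2f} [{ci_p[0]:.2f}, {ci_p[1]:.2f}]; "
      f"measured {mm:.2f} [{ci_m[0]:.2f}, {ci_m[1]:.2f}]; overlap: {overlap}")
```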

Relevance: 30.00%

Abstract:

The outcome of the successfully resuscitated patient is mainly determined by the extent of hypoxic-ischemic cerebral injury, and hypothermia has multiple mechanisms of action in mitigating such injury. The present study was undertaken from 1997 to 2001 in Helsinki as part of the European multicenter study Hypothermia After Cardiac Arrest (HACA), which tested the neuroprotective effect of therapeutic hypothermia in patients resuscitated from out-of-hospital ventricular fibrillation (VF) cardiac arrest (CA). The aim of this substudy was to examine the neurological and cardiological outcome of these patients, and especially to study and develop methods for prediction of outcome in hypothermia-treated patients. A total of 275 patients were randomized to the HACA trial in Europe; in Helsinki, 70 patients were enrolled according to the inclusion criteria. Those randomized to hypothermia were actively cooled externally to a core temperature of 33 ± 1 °C for 24 hours with a cooling device. Serum markers of ischemic neuronal injury, NSE and S-100B, were sampled at 24, 36, and 48 hours after CA. Somatosensory and brainstem auditory evoked potentials (SEPs and BAEPs) were recorded 24 to 28 hours after CA; 24-hour ambulatory electrocardiography recordings were performed three times during the first two weeks, and arrhythmias and heart rate variability (HRV) were analyzed from the tapes. Clinical outcome was assessed 3 and 6 months after CA. Neuropsychological examinations were performed on the conscious survivors 3 months after CA, at which time quantitative electroencephalography (Q-EEG) and auditory P300 event-related potentials were also studied. Therapeutic hypothermia at 33 °C for 24 hours led to an increased chance of good neurological outcome and survival after out-of-hospital VF CA: in the HACA study, 55% of hypothermia-treated patients and 39% of normothermia-treated patients reached a good neurological outcome at 6 months after CA (p = 0.009). Use of therapeutic hypothermia was not associated with any increase in clinically significant arrhythmias. The levels of serum NSE, but not of S-100B, were lower in hypothermia- than in normothermia-treated patients, and a decrease in NSE values between 24 and 48 hours was associated with good outcome at 6 months after CA. Decreasing serum NSE, but not S-100B, over time may indicate selective attenuation of delayed neuronal death by therapeutic hypothermia, and the time course of serum NSE between 24 and 48 hours after CA may help in clinical decision-making. In SEP recordings, bilaterally absent N20 responses predicted permanent coma with a specificity of 100% in both treatment arms; recording of BAEPs provided no additional benefit in outcome prediction. Preserved 24- to 48-hour HRV may be a predictor of favorable outcome in CA patients treated with hypothermia. At 3 months after CA, no differences appeared in any cognitive functions between the two groups: 67% of patients in the hypothermia group and 44% of patients in the normothermia group were cognitively intact or had only very mild impairment. No significant differences emerged in any of the Q-EEG parameters between the two groups, while the amplitude of the P300 potential was significantly higher in the hypothermia-treated group. These results give further support to the use of therapeutic hypothermia in patients with sudden out-of-hospital CA.

Relevance: 30.00%

Abstract:

The present paper deals with the CAE-based study of the impact of jacketed projectiles on single- and multi-layered metal armour plates using LS-DYNA. The validation of the finite element modelling procedure is mainly based on a mesh convergence study using both shell and solid elements to represent single-layered mild steel target plates. It is shown that the proper choice of mesh density and strain rate-dependent material properties is essential for an accurate prediction of projectile residual velocity. The modelling requirements are initially arrived at by correlating against test residual velocities for single-layered mild steel plates of different depths at impact velocities in the range of approximately 800-870 m/s. The efficacy of correlation is adjudged in terms of a 'correlation index', defined in the paper, for which values close to unity are desirable. The experience gained for single-layered plates is next used in simulating projectile impacts on multi-layered mild steel target plates, and once again a high degree of correlation with experimental residual velocities is observed. The study is repeated for single- and multi-layered aluminium target plates with a similar level of success in test residual velocity prediction. To the authors' best knowledge, the present comprehensive study shows in particular for the first time that, with a proper modelling approach, LS-DYNA can be used with a great degree of confidence in designing perforation-resistant single- and multi-layered metallic armour plates.
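The abstract does not give the formula for the paper's 'correlation index', so the sketch below uses a hypothetical ratio-based stand-in to show the kind of predicted-versus-test residual-velocity comparison involved; it should not be read as the paper's definition.

```python
import numpy as np

# Hypothetical residual velocities (m/s) for several plate configurations:
# LS-DYNA predictions vs ballistic test measurements. The paper defines its
# own 'correlation index' (values near unity are desirable); the abstract does
# not give the formula, so the relative-error-based stand-in below is
# illustrative only.
v_test = np.array([512.0, 430.0, 610.0, 380.0])
v_pred = np.array([498.0, 445.0, 601.0, 395.0])

ci = 1.0 - np.mean(np.abs(v_pred - v_test) / v_test)
print(f"stand-in correlation index: {ci:.3f}")
```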

Relevance: 30.00%

Abstract:

A health-monitoring and life-estimation strategy for composite rotor blades is developed in this work. The cross-sectional stiffness reduction obtained from physics-based models is expressed as a function of the life of the structure using a recent phenomenological damage model. This stiffness reduction is then used to study the behavior of measurable system parameters such as blade deflections, loads, and strains of a composite rotor blade in static analysis and in forward flight. The simulated measurements are obtained from an aeroelastic analysis of the composite rotor blade based on finite elements in space and time, with physics-based damage models that are linked to the life consumption of the blade. The model-based measurements are contaminated with noise to simulate real data. Genetic fuzzy systems are developed for global online prediction of physical damage and life consumption using displacement- and force-based measurement deviations between damaged and undamaged conditions. Furthermore, local online prediction of physical damage and life consumption is performed using strains measured along the blade length. It is observed that life consumption in the matrix-cracking zone is about 12-15%, and in the debonding/delamination zone about 45-55%, of the total life of the blade. It is also observed that the success rate of the genetic fuzzy systems depends upon the number and type of measurements, the training, and the testing noise level. The genetic fuzzy systems work well with noisy data and are recommended for online structural health monitoring of composite helicopter rotor blades.

Relevance: 30.00%

Abstract:

Improved forecasting of urban rail patronage is essential for effective policy development and efficient planning of new rail infrastructure. Past modelling and forecasting of urban rail patronage has been based on legacy modelling approaches and has often been conducted at the general level of public transport demand rather than being specific to urban rail. This project canvassed current Australian practice and international best practice to develop and estimate time series and cross-sectional models of rail patronage for Australian mainland state capital cities. This involved a large online survey of rail riders and non-riders in each of the state capital cities, resulting in a comprehensive database of respondent socio-economic profiles, travel experience, attitudes to rail and other modes of travel, together with stated-preference responses to a wide range of urban travel scenarios. Estimation of the models demonstrated their ability to provide information on the major influences on the urban rail travel decision. Rail fares, congestion and rail service supply all have a strong influence on rail patronage, while less significant factors such as fuel price and access to a motor vehicle are also influential. Of note, too, is the relative homogeneity of rail user profiles across the state capitals: rail users tend to have higher incomes and education levels, and they are also younger and more likely to be in full-time employment than non-rail users. The analysis reported here represents only a small proportion of what could be accomplished with the survey database; more comprehensive investigation was beyond the scope of the project and has been left for future work.

Relevance: 30.00%

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and the resolution of the models. Present-day NWP systems operate with horizontal resolutions in the range of about 40 km to 10 km; recently, the aim has been to reach, operationally, scales of 1-4 km. This requires fewer approximations in the model equations, more complex treatment of physical processes and more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution mesoscale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size; however, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For clear-sky longwave radiation, the parameterization schemes used in NWP models provide much better results than simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes. Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent to both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested: for longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional mesoscale model experiments suggest that the observed formation of the afternoon low-level jet (LLJ) over the Gulf of Finland is due to an inertial oscillation mechanism when the large-scale flow is from the south-east or west; the LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment with a 7.7 km grid size is able to generate a similar LLJ flow structure to that suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense: in nested systems, the quality of the large-scale host model is very important, especially if the inner mesoscale model domain is small.
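As one example of the simple empirical clear-sky longwave schemes of the kind referred to (not necessarily one of those tested in the thesis), a Brunt-type effective-emissivity parameterization can be sketched as follows; the coefficients are the classic Brunt values, assumed here purely for illustration.

```python
# Brunt-type clear-sky downward longwave flux: effective atmospheric
# emissivity grows with the square root of screen-level vapour pressure.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def downward_longwave_clear(t_air_k, vapour_pressure_hpa):
    """Clear-sky downward longwave flux (W/m^2) at screen level."""
    emissivity = 0.52 + 0.065 * vapour_pressure_hpa ** 0.5
    return emissivity * SIGMA * t_air_k ** 4

# Example: 283 K air with 10 hPa vapour pressure gives roughly 264 W/m^2.
print(f"{downward_longwave_clear(283.0, 10.0):.0f} W/m^2")
```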

Relevance: 30.00%

Abstract:

In this article, a new flame extinction model based on the k-epsilon turbulence time scale concept is proposed to predict flame liftoff heights over a wide range of coflow temperatures and O2 mass fractions. The flame is assumed to be quenched when the fluid time scale is less than the chemical time scale (Da < 1). The chemical time scale is derived as a function of temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type. The extinction model has been tested for a variety of conditions: (a) ambient coflow conditions (1 atm and 300 K) for propane, methane and hydrogen jet flames, (b) highly preheated coflow, and (c) high-temperature, low-oxidizer-concentration coflow. Predicted flame liftoff heights of jet diffusion and partially premixed flames are in excellent agreement with the experimental data for all the simulated conditions and fuels. It is observed that flame stabilization occurs at a point near the stoichiometric mixture fraction surface, where the local flow velocity equals the local flame propagation speed. The present method is used to determine the chemical time scale for the conditions existing in the mild/flameless combustion burners investigated by the authors earlier. The model successfully predicted the initial premixing of the fuel with combustion products before the combustion reaction initiates. It is inferred from these numerical simulations that fuel injection is followed by intense premixing with hot combustion products in the primary zone, with the combustion reaction following further downstream. Reaction rate contours suggest that the reaction takes place over a large volume and that its magnitude is lower than in the conventional combustion mode. The appearance of attached flames in the mild combustion burners at low thermal inputs is also predicted, owing to the lower average jet velocity and larger residence times in the near-injection zone.
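The quench criterion itself reduces to a Damköhler-number test per computational cell. A minimal sketch, with illustrative numbers and the chemical time scale taken as a given input rather than evaluated from the model's full functional form:

```python
# Minimal sketch of the quench test at one CFD cell: the flame is assumed
# extinguished where the k-epsilon fluid time scale falls below the chemical
# time scale, i.e. Da = tau_fluid / tau_chem < 1. In the full model, tau_chem
# also depends on temperature, oxidizer mass fraction, fuel dilution, jet
# velocity and fuel type; here it is simply supplied as a number.
def is_quenched(k, epsilon, tau_chem):
    tau_fluid = k / epsilon          # turbulence time scale from k and epsilon
    damkohler = tau_fluid / tau_chem
    return damkohler < 1.0

print(is_quenched(k=0.5, epsilon=250.0, tau_chem=0.004))   # Da = 0.5 -> True
```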

Relevance: 30.00%

Abstract:

The significance of treating rainfall as a chaotic system instead of a stochastic system for a better understanding of the underlying dynamics has been taken up by various recent studies. However, an important limitation of all these approaches is their dependence on a single method for identifying the chaotic nature and the parameters involved, and many of them aim only at analyzing the chaotic nature, not at prediction. In the present study, an attempt is made to identify chaos using various techniques, and prediction is also carried out by generating ensembles in order to quantify the uncertainty involved. Daily rainfall data for the period 1955-2000 from three regions with contrasting characteristics (mainly in the spatial area covered), Malaprabha, Mahanadi and All-India, are used for the study. Auto-correlation and mutual information methods are used to determine the delay time for the phase space reconstruction. The optimum embedding dimension is determined using the correlation dimension, the false nearest neighbour algorithm and nonlinear prediction methods. The low embedding dimensions obtained from these methods indicate the existence of low-dimensional chaos in the three rainfall series. The correlation dimension method is applied to the phase-randomized and first-derivative versions of the data series to check whether the saturation of the dimension is due to the inherent linear correlation structure or to low-dimensional dynamics. The positive Lyapunov exponents obtained prove the exponential divergence of the trajectories and hence the limited predictability. A surrogate data test is also performed to further confirm the nonlinear structure of the rainfall series. A range of plausible parameters is used to generate an ensemble of rainfall predictions for each year of the period 1996-2000 separately, using the data up to the preceding year. To analyze the sensitivity to initial conditions, predictions are made from two different months in each year, viz. from the beginning of January and of June. The reasonably good predictions obtained indicate the efficiency of the nonlinear prediction method for predicting the rainfall series, and the rank probability skill score and rank histograms show that the ensembles generated are reliable with a good spread and skill. A comparison of the results for the three regions indicates that although all are chaotic in nature, spatial averaging over a large area can increase the dimension and improve the predictability, thus destroying the chaotic nature.
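The phase-space reconstruction underlying all of these analyses is a Takens delay embedding. A minimal sketch, using a toy series and assumed embedding parameters rather than the study's rainfall records:

```python
import numpy as np

def delay_embed(series, dim, delay):
    """Reconstruct a phase space from a scalar series (Takens embedding).

    Each row is a delay vector [x(t), x(t+delay), ..., x(t+(dim-1)*delay)];
    in the study, dim and delay come from the false-nearest-neighbour and
    mutual-information analyses respectively.
    """
    series = np.asarray(series)
    n = len(series) - (dim - 1) * delay
    return np.column_stack([series[i * delay: i * delay + n] for i in range(dim)])

# Toy daily-rainfall-like series; a real analysis would use the 1955-2000 data.
x = np.abs(np.sin(np.arange(500) * 0.3)) * 10
vectors = delay_embed(x, dim=4, delay=5)
print(vectors.shape)   # (485, 4)
```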

Relevance: 30.00%

Abstract:

Asian elephants (Elephas maximus), a prominent "flagship species", are listed as endangered (EN - A2c, ver. 3.1, IUCN Red List 2009), and there is a need for their conservation. This requires understanding the demographic and reproductive dynamics of the species. Monitoring the reproductive status of any species is traditionally carried out through invasive blood sampling, which is restrictive for large animals such as wild or semi-captive elephants for legal, ethical, and practical reasons. Hence, there is a need for a non-invasive technique to assess the reproductive cyclicity profiles of elephants, which will help in the species' conservation strategies. In this study, we developed an indirect competitive enzyme-linked immunosorbent assay (ELISA) to estimate the concentration of one of the progesterone metabolites, allopregnanolone (5α-P-3OH), in fecal samples of Asian elephants. We validated the assay, which had a sensitivity of 0.25 μM at 90% binding and an EC50 value of 1.37 μM. Using female elephants kept under semi-captive conditions in the forest camps of Mudumalai Wildlife Sanctuary, Tamil Nadu and Bandipur National Park, Karnataka, India, we measured fecal progesterone-metabolite (5α-P-3OH) concentrations in six animals and showed their clear correlation with serum progesterone concentrations measured by a standard radioimmunoassay. Statistical analyses using a linear mixed effects model showed a positive correlation (P < 0.1) between the profiles of fecal 5α-P-3OH (range: 0.5-10 μg/g) and serum progesterone (range: 0.1-1.8 ng/mL). Therefore, our studies show, for the first time, that the fecal progesterone-metabolite assay can be exploited to predict estrus cyclicity and to potentially assess the reproductive status of captive and free-ranging female Asian elephants, thereby helping to plan their breeding strategy.
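Competitive ELISA standard curves of this kind are commonly summarized by a four-parameter logistic fit whose midpoint is the EC50. A sketch with hypothetical standards (only the reported EC50 of 1.37 μM is from the study; the curve values and starting guesses are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic curve commonly used for competitive ELISAs."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Hypothetical standard curve: allopregnanolone standards (uM) vs % binding.
conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])
binding = np.array([98.0, 95.0, 90.0, 78.0, 58.0, 35.0, 20.0, 10.0])

params, _ = curve_fit(four_pl, conc, binding, p0=[5.0, 100.0, 1.4, 1.0])
print(f"fitted EC50: {params[2]:.2f} uM")   # abstract reports EC50 = 1.37 uM
```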

Relevance: 30.00%

Abstract:

One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, one of the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices, and later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared with using the normal distribution. This result holds for daily, weekly and monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. Essay 2, using 20 years of daily Standard and Poor's 500 index returns, finds that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness; allowing the kurtosis and skewness to be time-varying does not further improve the density forecasts but, on the contrary, makes them slightly worse. In Essay 3, a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed. The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness; however, the dynamic properties of the skewness are not completely captured by any of the models.
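A minimal sketch of the first essay's core ingredient, a GARCH(1,1) model with a leptokurtic (Student-t) error distribution, using the arch Python package on simulated returns in place of the actual S&P 500 series:

```python
import numpy as np
from arch import arch_model

# Hypothetical daily return series (in percent) standing in for the S&P 500
# index/futures returns; a real study would use the historical record.
rng = np.random.default_rng(3)
returns = rng.standard_t(df=6, size=2500) * 0.8

# GARCH(1,1) with Student-t errors -- the kind of leptokurtic specification
# the first essay finds improves variance forecasts over Gaussian errors.
model = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
res = model.fit(disp="off")

# One-step-ahead conditional variance forecast.
print(res.forecast(horizon=1).variance.iloc[-1, 0])
```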