850 results for Prediction model
Abstract:
In this paper, a refined classical noise prediction method based on the VISSIM traffic simulator and the FHWA noise prediction model is formulated to analyze the sound level contributed by traffic on the Nanjing Lukou airport connecting freeway before and after widening. The aims of this research are to (i) assess the traffic noise impact on the Nanjing University of Aeronautics and Astronautics (NUAA) campus before and after freeway widening, (ii) compare the prediction results with field data to test the accuracy of the method, and (iii) analyze the relationship between traffic characteristics and sound levels. The results indicate that the mean difference between model predictions and field measurements is acceptable. The traffic composition study indicates that buses (including mid-sized trucks) and heavy goods vehicles contribute a significant proportion of the total noise power despite their low traffic volume. In addition, the speed analysis offers an explanation for the minor differences in noise level across time periods. Future work will aim to reduce model error by focusing on noise barrier analysis using the FEM/BEM method and by modifying the vehicle noise emission equation through field experimentation.
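The core arithmetic behind this kind of traffic-noise aggregation is energy (logarithmic) addition of per-class emission levels. A minimal sketch of that step follows; the emission levels and hourly volumes are illustrative placeholders, not the calibrated FHWA coefficients, which are speed- and class-dependent.

```python
import math

# Illustrative reference emission levels in dB(A); the calibrated FHWA
# coefficients depend on speed and vehicle class and are not reproduced here.
emission_db = {"car": 72.0, "bus": 80.0, "heavy_truck": 86.0}
hourly_volume = {"car": 1800, "bus": 60, "heavy_truck": 120}

def energy_sum(levels_db):
    """Combine sound levels by energy (logarithmic) addition."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels_db))

# Each class contributes its emission level plus 10*log10(volume),
# i.e. volume-many incoherent sources of equal level.
class_levels = [emission_db[v] + 10.0 * math.log10(hourly_volume[v])
                for v in emission_db]
print(f"L_eq = {energy_sum(class_levels):.1f} dB(A)")
```

Even with these invented numbers the composition effect is visible: the heavy-truck term dominates the combined level despite trucks being a small fraction of the volume, which mirrors the abstract's finding.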
Abstract:
Aflatoxin is a potent carcinogen produced by Aspergillus flavus, which frequently contaminates maize (Zea mays L.) in the field between 40° north and 40° south latitudes. A mechanistic model to predict the risk of pre-harvest contamination could assist in the management of this very harmful mycotoxin. In this study we describe an aflatoxin risk prediction model that is integrated with the Agricultural Production Systems Simulator (APSIM) modelling framework. The model computes a temperature function for A. flavus growth and aflatoxin production using a set of three cardinal temperatures determined in the laboratory using culture medium and intact grains. These cardinal temperatures were 11.5 °C as base, 32.5 °C as optimum and 42.5 °C as maximum. The model used a low (≤0.2) crop water supply-to-demand ratio, an index of drought during the grain-filling stage, to simulate the maize crop's susceptibility to A. flavus growth and aflatoxin production. When this low threshold of the index was reached, the model converted the temperature function into an aflatoxin risk index (ARI) to represent the risk of aflatoxin contamination. The model was applied to simulate ARI for two commercial maize hybrids, H513 and H614D, grown in five multi-location field trials in Kenya using site-specific agronomy, weather and soil parameters. The observed mean aflatoxin contamination in these trials varied from <1 to 7143 ppb. ARI simulated by the model explained 99% of the variation (p ≤ 0.001) in a linear relationship with the mean observed aflatoxin contamination. The strong relationship between ARI and aflatoxin contamination suggests that the model could be applied to map risk-prone areas and to monitor in-season risk for genotypes and soils parameterized for APSIM.
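The abstract gives the three cardinal temperatures and the ≤0.2 supply-to-demand threshold but not the functional form. A minimal sketch under stated assumptions: a beta-type cardinal-temperature response (a common choice in crop modelling, not necessarily the exact APSIM implementation) accumulated into an ARI on drought days during grain filling.

```python
# T_BASE, T_OPT and T_MAX come from the abstract; the beta-type functional
# form and the daily accumulation rule are illustrative assumptions.
T_BASE, T_OPT, T_MAX = 11.5, 32.5, 42.5

def temperature_function(t_mean):
    """Beta-type response: 0 at the cardinal extremes, 1 at the optimum."""
    if t_mean <= T_BASE or t_mean >= T_MAX:
        return 0.0
    exponent = (T_MAX - T_OPT) / (T_OPT - T_BASE)
    return ((t_mean - T_BASE) / (T_OPT - T_BASE)) * \
           ((T_MAX - t_mean) / (T_MAX - T_OPT)) ** exponent

def aflatoxin_risk_index(daily_temp, daily_sd_ratio, sd_threshold=0.2):
    """Accumulate the temperature function over grain-filling days on which
    the crop water supply-to-demand ratio is at or below the threshold."""
    return sum(temperature_function(t)
               for t, sd in zip(daily_temp, daily_sd_ratio)
               if sd <= sd_threshold)

# Three example days: only days 1 and 3 are drought days and contribute.
print(aflatoxin_risk_index([30.0, 34.0, 38.0], [0.15, 0.25, 0.10]))
```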
Abstract:
Surface roughness noise is a potentially important contributor to airframe noise. In this paper, a noise assessment due to surface roughness is performed for the conceptual Silent Aircraft design SAX-40 by means of a prediction model developed in previous theoretical work and validated experimentally. Estimates for three idealized test cases show that surface roughness could produce a significant noise level above that due to the trailing edge at high frequencies. Roughness height and roughness density are the two most significant parameters influencing surface roughness noise, with roughness height having the dominant effect. The ratio of roughness height to boundary-layer thickness is the relevant non-dimensional parameter, and this decreases in the streamwise direction. A candidate surface roughness is selected for SAX-40 that meets an aggressive noise target while keeping surface roughness noise at a negligible level. Copyright © 2008 by Yu Liu and Ann P. Dowling.
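The claim that the height-to-thickness ratio falls in the streamwise direction can be illustrated with a standard flat-plate turbulent boundary-layer correlation; the flow values below are assumptions for illustration, not data from the paper.

```python
# Illustrative only: flat-plate turbulent boundary-layer growth,
# delta(x) ~ 0.37 x / Re_x^(1/5), so h/delta shrinks downstream for fixed h.
NU = 1.46e-5   # kinematic viscosity of air, m^2/s
U = 50.0       # freestream speed, m/s (assumed)
H = 0.5e-3     # fixed roughness height, m (assumed)

def delta(x):
    re_x = U * x / NU
    return 0.37 * x / re_x ** 0.2

for x in (0.5, 1.0, 2.0, 4.0):
    print(f"x = {x:4.1f} m  h/delta = {H / delta(x):.4f}")
```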
Abstract:
A density prediction model for juvenile brown shrimp (Farfantepenaeus aztecus) was developed by using three bottom types, five salinity zones, and four seasons to quantify patterns of habitat use in Galveston Bay, Texas. Sixteen years of quantitative density data were used. Bottom types were vegetated marsh edge, submerged aquatic vegetation, and shallow nonvegetated bottom. Multiple regression was used to develop density estimates, and the resultant formula was then coupled with a geographic information system (GIS) to provide a spatial mosaic (map) of predicted habitat use. Results indicated that juvenile brown shrimp (<100 mm) selected vegetated habitats in salinities of 15-25 ppt and that seagrasses were selected over marsh edge where the two co-occurred. Our results provide a spatially resolved estimate of high-density areas that will help designate essential fish habitat (EFH) in Galveston Bay. In addition, using this modeling technique, we were able to estimate the overall population of juvenile brown shrimp (<100 mm) in shallow-water habitats within the bay at approximately 1.3 billion. Furthermore, the geographic range of the model was assessed by plotting observed (actual) versus expected (model) brown shrimp densities in three other Texas bays. Similar habitat-use patterns were observed in all three bays, each having a coefficient of determination >0.50. These results indicate that the model may have broader geographic application and offers a plausible approach to refining current EFH designations for all Gulf of Mexico estuaries with similar geomorphological and hydrological characteristics.
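A minimal sketch of the regression step, assuming simple dummy coding of the three categorical factors; the factor levels and toy densities below are invented placeholders, and the GIS rasterization is only noted in a comment.

```python
# Predict shrimp density from categorical habitat factors via dummy-coded
# multiple regression. Toy data, not the paper's 16-year dataset.
import pandas as pd
from sklearn.linear_model import LinearRegression

data = pd.DataFrame({
    "bottom":   ["marsh_edge", "sav", "nonveg", "sav", "marsh_edge", "nonveg"],
    "salinity": ["15-25", "15-25", "5-15", "25-35", "5-15", "15-25"],
    "season":   ["spring", "spring", "summer", "fall", "winter", "summer"],
    "density":  [12.0, 18.5, 2.1, 9.7, 4.3, 1.2],  # shrimp per m^2 (invented)
})

X = pd.get_dummies(data[["bottom", "salinity", "season"]])
model = LinearRegression().fit(X, data["density"])

# Predicted density for one habitat cell; in the paper such predictions are
# rasterized in a GIS to map habitat use across the bay.
cell = pd.get_dummies(pd.DataFrame(
    {"bottom": ["sav"], "salinity": ["15-25"], "season": ["spring"]}))
cell = cell.reindex(columns=X.columns, fill_value=0)
print(model.predict(cell)[0])
```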
Abstract:
This paper describes the application of variable-horizon model predictive control to trajectory generation in surface excavation. A nonlinear dynamic model of a surface mining machine digging in oil sand is developed as a test platform. This model is then stabilised with an inner-loop controller before being linearised to generate a prediction model. The linear model is used to design a predictive controller for trajectory generation. A variable horizon formulation is augmented with extra terms in the cost function to allow more control over digging, whilst still preserving the guarantee of finite-time completion. Simulations show the generation of realistic trajectories, motivating new applications of variable horizon MPC for autonomy that go beyond the realm of vehicle path planning. ©2010 IEEE.
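A generic way to realise variable-horizon MPC is to enumerate candidate horizons, solve a terminally constrained finite-horizon problem for each, and keep the cheapest feasible one; the terminal constraint is what yields the finite-time completion guarantee. The sketch below uses toy double-integrator dynamics and an assumed per-step time penalty, not the paper's excavator model or its extra digging cost terms.

```python
# Generic variable-horizon MPC sketch: the horizon N is itself a decision,
# handled here by enumeration. Dynamics and weights are illustrative.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])
TIME_PENALTY = 0.5                        # assumed per-step cost term

best = None
for N in range(1, 31):
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = TIME_PENALTY * N
    # Terminal constraint => finite-time completion for any feasible N.
    constr = [x[:, 0] == x0, x[:, N] == 0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k]) + cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= 2.0]
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    if prob.status == cp.OPTIMAL and (best is None or prob.value < best[0]):
        best = (prob.value, N)

print("best horizon:", best[1], "cost:", round(best[0], 3))
```

Short horizons are infeasible here (the terminal constraint cannot be met under the input bound), so the enumeration naturally trades completion time against control effort.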
Abstract:
We introduce a conceptually novel structured prediction model, GPstruct, which is kernelized, non-parametric and Bayesian, by design. We motivate the model with respect to existing approaches, among others, conditional random fields (CRFs), maximum margin Markov networks (M3N), and structured support vector machines (SVMstruct), which embody only a subset of its properties. We present an inference procedure based on Markov Chain Monte Carlo. The framework can be instantiated for a wide range of structured objects such as linear chains, trees, grids, and other general graphs. As a proof of concept, the model is benchmarked on several natural language processing tasks and a video gesture segmentation task involving a linear chain structure. We show prediction accuracies for GPstruct which are comparable to or exceeding those of CRFs and SVMstruct.
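The abstract does not spell out the sampler, but a widely used MCMC move for latent vectors under a GP prior is elliptical slice sampling (Murray, Adams and MacKay, 2010); a GPstruct-style sampler would pair such a move with a structured log-likelihood such as a linear-chain CRF likelihood. A generic sketch with a toy Gaussian pseudo-likelihood standing in for the structured one:

```python
# One elliptical slice sampling transition for a latent vector f with a
# zero-mean GP prior N(0, K). Illustrative; not GPstruct's exact code.
import numpy as np

def ess_step(f, chol_K, log_lik, rng):
    nu = chol_K @ rng.standard_normal(f.shape)   # prior draw defining the ellipse
    log_y = log_lik(f) + np.log(rng.uniform())   # slice level
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        # shrink the bracket toward the current state and retry
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

rng = np.random.default_rng(0)
K = np.array([[1.0, 0.8], [0.8, 1.0]])
L = np.linalg.cholesky(K)
log_lik = lambda f: -0.5 * np.sum((f - 1.0) ** 2)  # toy pseudo-likelihood
f = np.zeros(2)
for _ in range(100):
    f = ess_step(f, L, log_lik, rng)
print(f)
```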
Abstract:
Computing has recently reached an inflection point with the introduction of multicore processors. On-chip thread-level parallelism is doubling approximately every other year. Concurrency lends itself naturally to allowing a program to trade performance for power savings by regulating the number of active cores; however, in several domains, users are unwilling to sacrifice performance to save power. We present a prediction model for identifying energy-efficient operating points of concurrency in well-tuned multithreaded scientific applications and a runtime system that uses live program analysis to optimize applications dynamically. We describe a dynamic phase-aware performance prediction model that combines multivariate regression techniques with runtime analysis of data collected from hardware event counters to locate optimal operating points of concurrency. Using our model, we develop a prediction-driven phase-aware runtime optimization scheme that throttles concurrency so that power consumption can be reduced and performance can be set at the knee of the scalability curve of each program phase. The use of prediction reduces the overhead of searching the optimization space while achieving near-optimal performance and power savings. A thorough evaluation of our approach shows a reduction in power consumption of 10.8 percent, simultaneous with an improvement in performance of 17.9 percent, resulting in energy savings of 26.7 percent.
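The throttling idea can be sketched in a few lines: fit a cheap regression of performance against concurrency from runtime samples, then raise the thread count only while the predicted marginal gain stays above a threshold (the knee). The samples, the quadratic-in-log2 model and the threshold are all illustrative stand-ins for the paper's counter-based multivariate regression.

```python
import numpy as np

threads = np.array([1, 2, 4, 8, 16, 32])
speedup = np.array([1.0, 1.9, 3.5, 5.8, 7.2, 7.6])   # runtime samples (invented)

# Quadratic fit in log2(threads): a cheap stand-in for the paper's
# multivariate, counter-based regression model.
model = np.poly1d(np.polyfit(np.log2(threads), speedup, 2))

def knee(min_gain=0.2):
    """Largest thread count whose predicted relative gain over the previous
    power-of-two level still exceeds min_gain."""
    best = 1
    for t in (2, 4, 8, 16, 32):
        prev = model(np.log2(t // 2))
        if (model(np.log2(t)) - prev) / prev < min_gain:
            break
        best = t
    return best

print("selected operating point:", knee(), "threads")  # 16 for these samples
```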
Abstract:
This paper presents a scalable, statistical ‘black-box’ model for predicting the performance of parallel programs on multi-core non-uniform memory access (NUMA) systems. We derive a model with low overhead, by reducing data collection and model training time. The model can accurately predict the behaviour of parallel applications in response to changes in their concurrency, thread layout on NUMA nodes, and core voltage and frequency. We present a framework that applies the model to achieve significant energy and energy-delay-square (ED2) savings (9% and 25%, respectively) along with performance improvement (10% mean) on an actual 16-core NUMA system running realistic application workloads. Our prediction model proves substantially more accurate than previous efforts.
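For reference, the ED2 metric weighs delay more heavily than energy: ED2 = energy * delay^2. A tiny example with invented numbers shows how simultaneous energy and performance improvements compound into a larger ED2 saving:

```python
# Energy-delay-square: ED2 = energy * delay^2, weighting performance twice
# as strongly as energy. Baseline/optimized numbers are illustrative.
def ed2(energy_j, delay_s):
    return energy_j * delay_s ** 2

baseline = ed2(energy_j=120.0, delay_s=10.0)
optimized = ed2(energy_j=109.0, delay_s=9.0)
print(f"ED2 saving: {100 * (1 - optimized / baseline):.1f}%")  # ~26.4%
```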
Abstract:
Simulations of the top-of-atmosphere radiative-energy budget from the Met Office global numerical weather-prediction model are evaluated using new data from the Geostationary Earth Radiation Budget (GERB) instrument on board the Meteosat-8 satellite. Systematic discrepancies between the model simulations and GERB measurements greater than 20 W m⁻² in outgoing long-wave radiation (OLR) and greater than 60 W m⁻² in reflected short-wave radiation (RSR) are identified over the period April-September 2006 using 12 UTC data. Convective cloud over equatorial Africa is spatially less organized and less reflective than in the GERB data. This bias depends strongly on convective-cloud cover, which is highly sensitive to changes in the model convective parametrization. Underestimates in model OLR over the Gulf of Guinea coincide with unrealistic southerly cloud outflow from convective centres to the north. Large overestimates in model RSR over the subtropical ocean, greater than 50 W m⁻² at 12 UTC, are explained by unrealistic radiative properties of low-level cloud relating to overestimation of cloud liquid water compared with independent satellite measurements. The results of this analysis contribute to the development and improvement of parametrizations in the global forecast model.
Abstract:
Dynamical downscaling is frequently used to investigate the dynamical variables of extra-tropical cyclones, for example precipitation, using very high-resolution models nested within coarser-resolution models to understand the processes that lead to intense precipitation. It is also used in climate change studies, using long time series to investigate trends in precipitation, or to examine small-scale dynamical processes in specific case studies. This study investigates some of the problems associated with dynamical downscaling and identifies the optimum configuration for matching the distribution and intensity of a precipitation field to observations. It uses the Met Office Unified Model run in limited-area mode with grid spacings of 12, 4 and 1.5 km, driven by boundary conditions provided by the ECMWF Operational Analysis, to produce high-resolution simulations of the summer 2007 UK flooding events. The numerical weather prediction model is initiated at varying times before the peak precipitation is observed, to test the importance of the initialisation and boundary conditions and to determine how long the simulation can usefully be run. The results are verified against rain-gauge data and show that the model intensities are most similar to observations when the model is initialised 12 hours before the peak precipitation is observed. It was also shown that using non-gridded datasets makes verification more difficult, with the density of observations also affecting the intensities observed. It is concluded that the simulations are able to produce realistic precipitation intensities when driven by the coarser-resolution data.
Abstract:
Arctic flaw polynyas are considered to be highly productive areas for the formation of sea ice throughout the winter season. Most estimates of sea-ice production are based on the surface energy balance equation and use global reanalyses as atmospheric forcing, which are too coarse to take into account the impact of polynyas on the atmosphere. Additional errors in the estimates of polynya ice production may result from the methods of calculating atmospheric energy fluxes and from the assumption of a thin-ice distribution within polynyas. The present study uses simulations with the mesoscale weather prediction model of the Consortium for Small-scale Modelling (COSMO), in which the polynya area is prescribed from satellite data. The polynya area is assumed either to be ice-free or to be covered with thin ice of 10 cm. Simulations have been performed for two winter periods (2007/08 and 2008/09). When a realistic thin-ice thickness of 10 cm is used, sea-ice production in Laptev polynyas amounts to 30 km³ and 73 km³ for the winters 2007/08 and 2008/09, respectively. The higher turbulent energy fluxes of open-water polynyas result in a 50-70% increase in sea-ice production (49 km³ in 2007/08 and 123 km³ in 2008/09). Our results suggest that previous studies have overestimated ice production in the Laptev Sea.
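The surface energy-balance approach converts the net upward heat flux over the polynya into an ice-growth rate via the latent heat of fusion. A back-of-envelope sketch with illustrative flux values (the contrast between the two cases mirrors the open-water versus 10 cm thin-ice difference, not the study's actual fluxes):

```python
# Ice growth from the surface energy balance: growth rate = Q_net / (rho_ice * L_f).
RHO_ICE = 910.0      # kg/m^3
L_FUSION = 3.34e5    # J/kg

def ice_growth_m_per_day(net_flux_w_m2):
    return net_flux_w_m2 * 86400.0 / (RHO_ICE * L_FUSION)

# Open water sustains larger turbulent fluxes than 10 cm thin ice, hence the
# higher production reported in the abstract. Flux values are illustrative.
for label, flux in (("thin ice (10 cm)", 200.0), ("open water", 320.0)):
    print(f"{label}: {ice_growth_m_per_day(flux):.3f} m/day")
```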
Abstract:
OBJECTIVE Algorithms to predict the future long-term risk of patients with stable coronary artery disease (CAD) are rare. The VIenna and Ludwigshafen CAD (VILCAD) risk score was one of the first scores specifically tailored for this clinically important patient population. The aim of this study was to refine risk prediction in stable CAD by creating a new prediction model encompassing various pathophysiological pathways. Therefore, we assessed the predictive power of 135 novel biomarkers for long-term mortality in patients with stable CAD. DESIGN, SETTING AND SUBJECTS We included 1275 patients with stable CAD from the LUdwigshafen RIsk and Cardiovascular health study with a median follow-up of 9.8 years to investigate whether the predictive power of the VILCAD score could be improved by the addition of novel biomarkers. Additional biomarkers were selected in a bootstrapping procedure based on Cox regression to determine the most informative predictors of mortality. RESULTS The final multivariable model encompassed nine clinical and biochemical markers: age, sex, left ventricular ejection fraction (LVEF), heart rate, N-terminal pro-brain natriuretic peptide, cystatin C, renin, 25OH-vitamin D3 and haemoglobin A1c. The extended VILCAD biomarker score achieved a significantly improved C-statistic (0.78 vs. 0.73; P = 0.035) and net reclassification index (14.9%; P < 0.001) compared to the original VILCAD score. Omitting LVEF, which might not be readily measurable in clinical practice, slightly reduced the accuracy of the new BIO-VILCAD score but still significantly improved risk classification (net reclassification improvement 12.5%; P < 0.001). CONCLUSION The VILCAD biomarker score, based on routine parameters complemented by novel biomarkers, outperforms previous risk algorithms and allows more accurate classification of patients with stable CAD, enabling physicians to choose more personalized treatment regimens for their patients.
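The C-statistic reported above is a concordance index: across usable patient pairs, the fraction in which the higher risk score belongs to the patient with the earlier event. A minimal sketch of that computation on toy data (not the LURIC cohort):

```python
# Harrell-style concordance: a pair is usable when the shorter follow-up
# ended in an event; it is concordant when that patient had the higher score.
import itertools

def c_statistic(risk, time, event):
    concordant, usable = 0.0, 0
    for i, j in itertools.combinations(range(len(risk)), 2):
        if time[j] < time[i]:
            i, j = j, i              # ensure i has the shorter follow-up
        if not event[i] or time[i] == time[j]:
            continue                 # censored-first or tied pairs skipped
        usable += 1
        if risk[i] > risk[j]:
            concordant += 1.0
        elif risk[i] == risk[j]:
            concordant += 0.5
    return concordant / usable

print(c_statistic(risk=[0.9, 0.4, 0.7, 0.2],
                  time=[2.0, 8.0, 3.0, 9.0],
                  event=[1, 0, 1, 0]))
```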
Abstract:
There are a number of factors that contribute to the success of dental implant operations; among them is the choice of the location where the prosthetic tooth is to be implanted. This project offers a new approach to analysing jaw tissue for the purpose of selecting suitable locations for tooth implant operations. The application developed takes as input a stack of jaw computed tomography (CT) slices and trims data outside the jaw area, which is the region of interest. It then reconstructs a three-dimensional model of the jaw, highlighting points of interest on the reconstructed model. In addition, data mining techniques have been utilised to construct a prediction model based on a dataset of previous dental implant operations with observed stability values. The goal is to find patterns within the dataset that would help predict the likelihood of implant success.
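As a sketch of the data-mining step, a small classifier trained on past implant records could serve as the stability prediction model; the feature names and records below are hypothetical placeholders, since the abstract does not specify the dataset's attributes.

```python
# Hypothetical sketch: features and labels are invented, not the project's data.
from sklearn.tree import DecisionTreeClassifier

# columns: bone_density (HU), cortical_thickness (mm), implant_length (mm)
X = [[650, 1.8, 10.0],
     [820, 2.3, 11.5],
     [410, 1.1, 8.5],
     [900, 2.6, 13.0],
     [380, 0.9, 10.0]]
y = [1, 1, 0, 1, 0]   # 1 = stable implant, 0 = failed

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[700, 2.0, 11.0]]))   # predicted outcome for a new site
```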