110 results for Propagation prediction models
Abstract:
Near-infrared spectroscopy (NIRS) calibrations were developed for the discrimination of Chinese hawthorn (Crataegus pinnatifida Bge. var. major) fruit from three geographical regions as well as for the estimation of total sugar, total acid, total phenolic content, and total antioxidant activity. Principal component analysis (PCA) was used for the discrimination of the fruit on the basis of their geographical origin. Three pattern recognition methods, linear discriminant analysis, partial least-squares discriminant analysis, and back-propagation artificial neural networks, were applied to classify and compare these samples. Furthermore, three multivariate calibration models based on first-derivative NIR spectra (partial least-squares regression, back-propagation artificial neural networks, and least-squares support vector machines) were constructed for quantitative analysis of the four analytes, total sugar, total acid, total phenolic content, and total antioxidant activity, and validated by prediction data sets.
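A minimal sketch of the two chemometric steps named above, PCA for unsupervised exploration and PLS-DA for supervised classification, assuming a hypothetical spectra matrix X and region labels y (random placeholders, not the hawthorn data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 200))       # placeholder NIR spectra (90 fruit, 200 wavelengths)
X_d1 = np.gradient(X, axis=1)        # first-derivative spectra, as in the abstract
y = np.repeat([0, 1, 2], 30)         # three geographical regions

# PCA for unsupervised exploration of regional clustering
scores = PCA(n_components=2).fit_transform(X_d1)
print("PCA scores shape:", scores.shape)

# PLS-DA: regress one-hot class membership on the spectra,
# then assign each sample to the class with the largest response
Y = np.eye(3)[y]
pls = PLSRegression(n_components=5).fit(X_d1, Y)
pred = pls.predict(X_d1).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```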
Abstract:
Quality of experience (QoE) measures the overall perceived quality of mobile video delivery from subjective user experience and objective system performance. Current QoE computing models have two main limitations: 1) insufficient consideration of the factors influencing QoE; and 2) limited studies on QoE models for acceptability prediction. In this paper, a set of novel acceptability-based QoE models, denoted as A-QoE, is proposed based on the results of comprehensive user studies on subjective quality acceptance assessments. The models are able to predict users’ acceptability and pleasantness in various mobile video usage scenarios. Statistical regression analysis has been used to build the models with a group of influencing factors as independent predictors, including encoding parameters and bitrate, video content characteristics, and mobile device display resolution. The performance of the proposed A-QoE models has been compared with three well-known objective Video Quality Assessment metrics: PSNR, SSIM and VQM. The proposed A-QoE models have high prediction accuracy and usage flexibility. Future user-centred mobile video delivery systems can benefit from applying the proposed QoE-based management to optimize video coding and quality delivery decisions.
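As an illustration of the regression-based approach described above, the sketch below fits a logistic model of binary acceptance to a few hypothetical predictors (bitrate, display height, a motion score); the predictors, coefficients and data are invented for the example and are not the paper's A-QoE models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 500
bitrate = rng.uniform(100, 4000, n)         # kbps (assumed range)
height = rng.choice([360, 720, 1080], n)    # display resolution (lines)
motion = rng.uniform(0, 1, n)               # content characteristic score

# synthetic acceptance votes from an invented ground-truth relation
logit = 0.002 * bitrate - 0.001 * height - 2.0 * motion
accept = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([bitrate, height, motion])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, accept)
print("P(accept) at 1.5 Mbps, 720p, medium motion:",
      model.predict_proba([[1500, 720, 0.5]])[0, 1])
```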
Abstract:
Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research. Knowledge of the dynamics and complexity of air pollutant behaviour has made artificial intelligence models a useful tool for more accurate prediction of pollutant concentrations. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS–SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS–SVM model is more accurate. In the analysis presented in this paper, statistical estimators including the relative mean error, root mean squared error and mean absolute relative error have been employed to compare the performances of the models. It was concluded that the errors decrease after size reduction, and that the coefficients of determination increase from 56–81% for the SVM model to 65–85% for the hybrid PLS–SVM model. It was also found that the hybrid PLS–SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS–SVM model.
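A hedged sketch of the hybrid idea, with PLS used as a dimension reduction step whose latent components feed an SVM regressor; the data here are synthetic placeholders, not the Tehran CO series:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 20))     # placeholder input variables (e.g. lagged readings)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=1000)

# plain SVM on the full input space
plain = SVR().fit(X, y)

# hybrid: PLS compresses inputs to a few latent components, SVM predicts from those
pls = PLSRegression(n_components=5).fit(X, y)
T = pls.transform(X)                # reduced latent components
hybrid = SVR().fit(T, y)

print("SVM R^2:     %.3f" % plain.score(X, y))
print("PLS-SVM R^2: %.3f" % hybrid.score(T, y))
```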
Abstract:
Recent research at the Queensland University of Technology has investigated the structural and thermal behaviour of load-bearing Light gauge Steel Frame (LSF) wall systems made of 1.15 mm G500 steel studs and varying plasterboard and insulation configurations (cavity and external insulation) using full scale fire tests. Suitable finite element models of LSF walls were then developed and validated by comparing with test results. In this study, the validated finite element models of LSF wall panels subject to standard fire conditions were used in a detailed parametric study to investigate the effects of important parameters such as steel grade and thickness, plasterboard screw spacing, plasterboard lateral restraint, insulation materials and load ratio on their performance under standard fire conditions. Suitable equations were proposed to predict the time–temperature profiles of LSF wall studs with eight different plasterboard-insulation configurations, and used in the finite element analyses. Finite element parametric studies produced extensive fire performance data for the LSF wall panels in the form of load ratio versus time and critical hot flange (failure) temperature curves for eight wall configurations. These data demonstrated the superior fire performance of externally insulated LSF wall panels made of different steel grades and thicknesses. They also led to the development of a set of equations to predict the important relationship between the load ratio and the critical hot flange temperature of LSF wall studs. Finally, this paper proposes a simplified method to predict the fire resistance rating of LSF walls based on the two proposed sets of equations for the load ratio–hot flange temperature and the time–temperature relationships.
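The simplified method described at the end of this abstract can be pictured as a two-step lookup: load ratio to critical hot flange temperature, then time–temperature profile to failure time. The sketch below uses placeholder relations and an invented profile (the paper's fitted equations are not reproduced here):

```python
import numpy as np

def critical_temperature(load_ratio):
    # placeholder linear relation; the real equations depend on
    # steel grade, thickness and wall configuration
    return 700.0 - 500.0 * load_ratio            # deg C

# hypothetical hot-flange time-temperature profile for one configuration
time_min = np.array([0, 15, 30, 60, 90, 120])
temp_c = np.array([20, 150, 320, 520, 640, 720])

def fire_resistance_rating(load_ratio):
    t_crit = critical_temperature(load_ratio)
    # invert the (monotonic) time-temperature curve by interpolation
    return np.interp(t_crit, temp_c, time_min)

print("FRR at load ratio 0.4: %.0f min" % fire_resistance_rating(0.4))
```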
Abstract:
The objective of this research was to develop a model to estimate future freeway pavement construction costs in Henan Province, China. A comprehensive set of factors contributing to the cost of freeway pavement construction were included in the model formulation. These factors comprehensively reflect the characteristics of region, topography and altitude variation; the cost of labour, materials and equipment; and time-related variables such as index numbers of labour prices, material prices and equipment prices. An Artificial Neural Network model using the Back-Propagation learning algorithm was developed to estimate the cost of freeway pavement construction. A total of 88 valid freeway cases were obtained from freeway construction projects let by the Henan Transportation Department during the period 1994−2007. Data from a random selection of 81 freeway cases were used to train the Neural Network model and the remaining data were used to test its performance. The tested model was used to predict freeway pavement construction costs in 2010 based on predictions of input values. In addition, this paper suggests a correction to the predicted values of future freeway pavement construction costs. Since future freeway pavement construction costs are affected by many factors, the predictions obtained by the proposed method, and therefore the model, will need to be tested once actual data are obtained.
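A minimal sketch of a back-propagation cost model of this kind, using scikit-learn's MLPRegressor (trained by back-propagation) on placeholder project features, with an 81/7 train–test split mirroring the one described; nothing below is the Henan dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(88, 6))         # placeholder features: terrain, price indices, ...
y = 10 + X @ rng.normal(size=6) + rng.normal(scale=0.2, size=88)

train, test = slice(0, 81), slice(81, 88)    # 81 training cases, 7 test cases
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,),
                                   max_iter=5000, random_state=0))
model.fit(X[train], y[train])
print("test R^2:", model.score(X[test], y[test]))
```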
Abstract:
Autonomous navigation and picture compilation tasks require robust feature descriptions or models. Given the non-Gaussian nature of sensor observations, it will be shown that Gaussian mixture models provide a general probabilistic representation allowing analytical solutions to the update and prediction operations in the general Bayesian filtering problem. Each operation in the Bayesian filter for Gaussian mixture models multiplicatively increases the number of parameters in the representation, leading to the need for a re-parameterisation step. A computationally efficient re-parameterisation step will be demonstrated, resulting in a compact and accurate estimate of the true distribution.
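The parameter growth and the need for re-parameterisation can be illustrated in one dimension: multiplying an M-component prior by an N-component likelihood yields M·N posterior components, so similar components are typically merged by moment matching. A toy sketch (the merge rule is standard moment matching; the numbers are arbitrary):

```python
import numpy as np

def merge(weights, means, variances):
    """Moment-matched single Gaussian for a set of mixture components."""
    w = np.asarray(weights) / np.sum(weights)
    mu = np.sum(w * means)
    var = np.sum(w * (variances + (means - mu) ** 2))
    return mu, var

# prior with M components, likelihood with N components
M, N = 4, 3
print("posterior components before reduction:", M * N)

# merging two nearby components into one
mu, var = merge([0.6, 0.4], np.array([1.0, 1.2]), np.array([0.5, 0.6]))
print("merged component: mean=%.3f var=%.3f" % (mu, var))
```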
Abstract:
The operation of the law rests on the selection of an account of the facts. Whether this involves prediction or postdiction, it is not possible to achieve certainty. Any attempt to model the operation of the law completely will therefore raise questions of how to model the process of proof. In the selection of a model a crucial question will be whether the model is to be used normatively or descriptively. Focussing on postdiction, this paper presents and contrasts the mathematical model with the story model. The former carries the normative stamp of scientific approval, whereas the latter has been developed by experimental psychologists to describe how humans reason. Neil Cohen's attempt to use a mathematical model descriptively provides an illustration of the dangers in not clearly setting this parameter of the modelling process. It should be kept in mind that the labels 'normative' and 'descriptive' are not eternal. The mathematical model has its normative limits, beyond which we may need to critically assess models with descriptive origins.
Abstract:
There is a wide variety of drivers for business process modelling initiatives, ranging from business evolution and process optimisation, through compliance checking and process certification, to process enactment. That, in turn, results in models that differ in content due to serving different purposes. In particular, processes are modelled on different abstraction levels and assume different perspectives. Vertical alignment of process models aims at handling these deviations. While the advantages of such an alignment for inter-model analysis and change propagation are beyond question, a number of challenges still have to be addressed. In this paper, we discuss three main challenges for vertical alignment in detail. Against this background, the potential application of techniques from the field of process integration is critically assessed. On this basis, we identify specific research questions that guide the design of a framework for model alignment.
Abstract:
This paper evaluates the performances of prediction intervals generated from alternative time series models, in the context of tourism forecasting. The forecasting methods considered include the autoregressive (AR) model, the AR model using the bias-corrected bootstrap, seasonal ARIMA models, innovations state space models for exponential smoothing, and Harvey’s structural time series models. We use thirteen monthly time series for the number of tourist arrivals to Hong Kong and Australia. The mean coverage rates and widths of the alternative prediction intervals are evaluated in an empirical setting. It is found that all models produce satisfactory prediction intervals, except for the autoregressive model. In particular, those based on the bias-corrected bootstrap perform best in general, providing tight intervals with accurate coverage rates, especially when the forecast horizon is long.
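The bootstrap interval idea is easy to sketch for an AR(1): fit the model, resample residuals to simulate future paths, and take percentiles as the interval. The bias-correction step used in the paper is omitted here, and the series is simulated, not tourist arrivals:

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):                  # simulated AR(1) series
    y[t] = 0.7 * y[t - 1] + rng.normal()

# least-squares AR(1) fit and residuals
phi = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])
resid = y[1:] - phi * y[:-1]

# bootstrap the h-step-ahead forecast distribution
h, B = 12, 2000
paths = np.empty((B, h))
for b in range(B):
    last = y[-1]
    for j in range(h):
        last = phi * last + rng.choice(resid)   # resample a residual
        paths[b, j] = last

lower, upper = np.percentile(paths, [2.5, 97.5], axis=0)
print("95%% interval at horizon 12: (%.2f, %.2f)" % (lower[-1], upper[-1]))
```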
Abstract:
Objectives Recent research has shown that machine learning techniques can accurately predict activity classes from accelerometer data in adolescents and adults. The purpose of this study is to develop and test machine learning models for predicting activity type in preschool-aged children. Design Participants completed 12 standardised activity trials (TV, reading, tablet game, quiet play, art, treasure hunt, cleaning up, active game, obstacle course, bicycle riding) over two laboratory visits. Methods Eleven children aged 3–6 years (mean age = 4.8 ± 0.87; 55% girls) completed the activity trials while wearing an ActiGraph GT3X+ accelerometer on the right hip. Activities were categorised into five activity classes: sedentary activities, light activities, moderate to vigorous activities, walking, and running. A standard feed-forward Artificial Neural Network and a Deep Learning Ensemble Network were trained on features in the accelerometer data used in previous investigations (10th, 25th, 50th, 75th and 90th percentiles and the lag-one autocorrelation). Results Overall recognition accuracy for the standard feed-forward Artificial Neural Network was 69.7%. Recognition accuracy for sedentary activities, light activities and games, moderate-to-vigorous activities, walking, and running was 82%, 79%, 64%, 36% and 46%, respectively. In comparison, overall recognition accuracy for the Deep Learning Ensemble Network was 82.6%. For sedentary activities, light activities and games, moderate-to-vigorous activities, walking, and running, recognition accuracy was 84%, 91%, 79%, 73% and 73%, respectively. Conclusions Ensemble machine learning approaches such as the Deep Learning Ensemble Network can accurately predict activity type from accelerometer data in preschool children.
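The hand-crafted features named above (the 10th–90th percentiles and the lag-one autocorrelation) are simple to compute per window; the sketch below uses a random placeholder signal and an assumed non-overlapping window length, not the ActiGraph data:

```python
import numpy as np

def lag_one_autocorr(x):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.dot(x[1:], x[:-1]) / denom if denom else 0.0

def window_features(window):
    """10th/25th/50th/75th/90th percentiles plus lag-1 autocorrelation."""
    feats = list(np.percentile(window, [10, 25, 50, 75, 90]))
    feats.append(lag_one_autocorr(window))
    return np.array(feats)

rng = np.random.default_rng(5)
signal = rng.normal(size=3000)         # placeholder single-axis signal
windows = signal.reshape(-1, 300)      # assumed non-overlapping 300-sample windows
X = np.vstack([window_features(w) for w in windows])
print(X.shape)                          # (10, 6) feature matrix for a classifier
```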
Abstract:
Rodent (mouse and rat) models have been crucial in developing our understanding of human neurogenesis and neural stem cell (NSC) biology. The study of neurogenesis in rodents has allowed us to begin to understand adult human neurogenesis and, in particular, protocols established for the isolation and in vitro propagation of rodent NSCs have successfully been applied to the expansion of human NSCs. Furthermore, rodent models have played a central role in studying NSC function in vivo and in the development of NSC transplantation strategies for cell therapy applications. Rodents and humans share many similarities in the process of neurogenesis and NSC biology; however, distinct species differences are important considerations for the development of more efficient human NSC therapeutic applications. Here we review the important contributions rodent studies have made to our understanding of human neurogenesis and to the development of in vitro and in vivo NSC research. Species differences will be discussed to identify key areas in need of further development for human NSC therapy applications.
Abstract:
Iterative computational models have been used to investigate the regulation of bone fracture healing by local mechanical conditions. Although their predictions replicate some mechanical responses and histological features, they do not typically reproduce the predominantly radial hard callus growth pattern observed in larger mammals. We hypothesised that this discrepancy results from an artefact of the models’ initial geometry. Using axisymmetric finite element models, we demonstrated that pre-defining a field of soft tissue in which callus may develop introduces high deviatoric strains in the periosteal region adjacent to the fracture. These bone-inhibiting strains are not present when the initial soft tissue is confined to a thin periosteal layer. As observed in previous healing models, tissue differentiation algorithms regulated by deviatoric strain predicted hard callus forming remotely and growing towards the fracture. While dilatational strain regulation allowed early bone formation closer to the fracture, hard callus still formed initially over a broad area, rather than expanding over time. Modelling callus growth from a thin periosteal layer successfully predicted the initiation of hard callus growth close to the fracture site. However, these models were still susceptible to elevated deviatoric strains in the soft tissues at the edge of the hard callus. Our study highlights the importance of the initial soft tissue geometry used for finite element models of fracture healing. If this cannot be defined accurately, alternative mechanisms for the prediction of early callus development should be investigated.
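As one concrete illustration of strain-regulated tissue differentiation of the kind these models use, the sketch below applies a simple threshold rule on deviatoric strain; the thresholds are illustrative placeholders, not the calibrated values of any published algorithm:

```python
def tissue_type(deviatoric_strain):
    if deviatoric_strain > 0.15:      # high distortion inhibits ossification
        return "fibrous tissue"
    if deviatoric_strain > 0.05:      # intermediate strain favours cartilage
        return "cartilage"
    return "bone"                      # low strain permits bone formation

for strain in (0.20, 0.08, 0.01):
    print(strain, "->", tissue_type(strain))
```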
Abstract:
Drying offers a significant increase in the shelf life of food materials, along with modification of their quality attributes due to simultaneous heat and mass transfer. Shrinkage and variations in porosity are the common macro- and microstructural changes that take place during the drying of most food materials. Although extensive research has been carried out on the prediction of shrinkage and porosity over the course of drying, no single model exists which considers both material properties and process conditions in the same model. In this study, an attempt has been made to develop and validate shrinkage and porosity models of food materials during drying, considering both process parameters and sample properties. The stored energy within the sample, elastic potential energy, glass transition temperature and physical properties of the sample such as initial porosity, particle density, bulk density and moisture content have been taken into consideration. Physical property measurements and validation have been carried out using a universal testing machine (Instron 2 kN), a profilometer (Nanovea) and a pycnometer. In addition, COMSOL Multiphysics 4.4 has been used to solve the heat and mass transfer physics. Results obtained from the shrinkage and porosity models are quite consistent with the experimental data. Successful implementation of these models would ensure the use of optimum energy in the course of drying and better quality retention of dried foods.
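Two of the basic quantities tracked in such models have simple defining relations: porosity from bulk and particle density, and volumetric shrinkage from the change in bulk volume. A sketch with illustrative values (not measurements from this study):

```python
def porosity(rho_bulk, rho_particle):
    # fraction of the sample volume occupied by pores
    return 1.0 - rho_bulk / rho_particle

def shrinkage(volume, volume_initial):
    # fractional reduction in bulk volume during drying
    return 1.0 - volume / volume_initial

print("porosity:  %.3f" % porosity(650.0, 1450.0))    # densities in kg/m^3
print("shrinkage: %.3f" % shrinkage(3.2e-6, 5.0e-6))  # volumes in m^3
```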
Abstract:
Considering ultrasound propagation through complex composite media as an array of parallel sonic rays, a comparison of computer-simulated predictions with experimental data has previously been reported for transmission mode (where one transducer serves as transmitter, the other as receiver) in a series of ten acrylic step-wedge samples, immersed in water, exhibiting varying degrees of transit time inhomogeneity. In this study, the same samples were used but in pulse-echo mode, where the same ultrasound transducer served as both transmitter and receiver, detecting both ‘primary’ (internal sample interface) and ‘secondary’ (external sample interface) echoes. A transit time spectrum (TTS) was derived, describing the proportion of sonic rays with a particular transit time. A computer simulation was performed to predict the transit time and amplitude of the various echoes created, and compared with experimental data. Applying an amplitude-tolerance analysis, 91.7±3.7% of the simulated data was within ±1 standard deviation (SD) of the experimentally measured amplitude–time data. Correlation of predicted and experimental transit time spectra provided coefficients of determination (R²) ranging from 96.8% to 100.0% for the various samples tested. The results acquired from this study provide good evidence for the concept of parallel sonic rays. Further, deconvolution of experimental input and output signals has been shown to provide an effective method to identify echoes otherwise lost due to phase cancellation. Potential applications of pulse-echo ultrasound transit time spectroscopy (PE-UTTS) include improvement of ultrasound image fidelity by improving spatial resolution and reducing phase interference artefacts.
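The deconvolution step mentioned at the end can be sketched as regularised frequency-domain division of the received signal by the input pulse; the signals, sampling rate and stabilising constant below are all assumed for illustration:

```python
import numpy as np

fs = 50e6                                  # assumed 50 MHz sampling rate
t = np.arange(1024) / fs
# synthetic input pulse: 5 MHz tone burst with a Gaussian envelope
pulse = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 2e-6) / 0.4e-6) ** 2)

# received signal: two overlapping echoes of the input pulse
received = 0.8 * np.roll(pulse, 120) + 0.4 * np.roll(pulse, 180)

# regularised (Wiener-style) deconvolution in the frequency domain
P, R = np.fft.rfft(pulse), np.fft.rfft(received)
eps = 1e-3 * np.max(np.abs(P)) ** 2        # hand-picked stabiliser
tts = np.fft.irfft(R * np.conj(P) / (np.abs(P) ** 2 + eps))

print("dominant transit delay:", int(np.argmax(tts)), "samples (expect ~120)")
```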
Abstract:
The acceptance of broadband ultrasound attenuation (BUA) for the assessment of osteoporosis suffers from a limited understanding of both ultrasound wave propagation through cancellous bone and its exact dependence upon the material and structural properties. It has recently been proposed that ultrasound wave propagation in cancellous bone may be described by a concept of parallel sonic rays, the transit time of each ray being defined by the proportion of bone and marrow propagated. A Transit Time Spectrum (TTS) describes the proportion of sonic rays having a particular transit time, effectively describing the lateral inhomogeneity of transit times over the surface aperture of the receiving ultrasound transducer. The aim of this study was to test the hypothesis that the solid volume fraction (SVF) of simplified bone:marrow replica models may be reliably estimated from the corresponding ultrasound transit time spectrum. Transit time spectra were derived via digital deconvolution of the experimentally measured input and output ultrasonic signals, and compared with predicted TTS based on the parallel sonic ray concept, demonstrating agreement in both position and amplitude of spectral peaks. Solid volume fraction was calculated from the TTS; agreement of the true (geometric calculation) values with the predicted (computer simulation) and experimentally derived values was R² = 99.9% and R² = 97.3% respectively. It is therefore envisaged that ultrasound transit time spectroscopy (UTTS) offers the potential to reliably estimate bone mineral density and hence the established T-score parameter for clinical osteoporosis assessment.
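A worked sketch of the parallel sonic ray relation implied here: a ray crossing sample thickness d spends a fraction f of its path in solid and 1 − f in marrow, so t = d(f/v_solid + (1 − f)/v_marrow), which inverts directly for f. The velocities below are typical assumed values, not those of the replica models:

```python
v_solid, v_marrow, d = 4000.0, 1500.0, 0.02   # m/s, m/s, 20 mm sample

def transit_time(f):
    # time for a ray propagating fraction f through solid, 1-f through marrow
    return d * (f / v_solid + (1.0 - f) / v_marrow)

def solid_fraction(t):
    # invert the linear relation above to recover f from a transit time
    return (t / d - 1.0 / v_marrow) / (1.0 / v_solid - 1.0 / v_marrow)

t = transit_time(0.3)
print("transit time: %.2f us" % (t * 1e6))
print("recovered solid volume fraction: %.2f" % solid_fraction(t))
```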