939 results for "Mean square error methods"


Relevance: 100.00%

Abstract:

The aim of this study was to determine the most informative sampling time(s) providing a precise prediction of tacrolimus area under the concentration-time curve (AUC). Fifty-four concentration-time profiles of tacrolimus from 31 adult liver transplant recipients were analyzed. Each profile contained 5 tacrolimus whole-blood concentrations (predose and 1, 2, 4, and 6 or 8 hours postdose), measured using liquid chromatography-tandem mass spectrometry. The concentration at 6 hours was interpolated for each profile, and 54 values of AUC(0-6) were calculated using the trapezoidal rule. The best sampling times were then determined using limited sampling strategies and sensitivity analysis. Linear mixed-effects modeling was performed to estimate regression coefficients of equations incorporating each concentration-time point (C0, C1, C2, C4, interpolated C5, and interpolated C6) as a predictor of AUC(0-6). Predictive performance was evaluated by assessment of the mean error (ME) and root mean square error (RMSE). Limited sampling strategy (LSS) equations with C2, C4, and C5 provided similar results for prediction of AUC(0-6) (R-2 = 0.869, 0.844, and 0.832, respectively). These 3 time points were superior to C0 in the prediction of AUC. The ME was similar for all time points; the RMSE was smallest for C2, C4, and C5. The highest sensitivity index was determined to be 4.9 hours postdose at steady state, suggesting that this time point provides the most information about the AUC(0-12). The results from limited sampling strategies and sensitivity analysis supported the use of a single blood sample at 5 hours postdose as a predictor of both AUC(0-6) and AUC(0-12). A jackknife procedure was used to evaluate the predictive performance of the model, and this demonstrated that collecting a sample at 5 hours after dosing could be considered as the optimal sampling time for predicting AUC(0-6).
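As a concrete illustration of the calculations described above, the sketch below applies the trapezoidal rule to a 5-point concentration-time profile and evaluates a single-point LSS equation with ME and RMSE. All concentrations and regression coefficients are invented for illustration; they are not the study's data or fitted equations.

```python
import numpy as np

# Hypothetical concentration-time profile (ng/mL at hours 0, 1, 2, 4, 6);
# values are illustrative, not from the study.
times = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
conc = np.array([5.2, 14.8, 11.3, 8.1, 6.4])

# AUC(0-6) by the trapezoidal rule, as used in the study.
auc_0_6 = np.sum((conc[1:] + conc[:-1]) / 2.0 * np.diff(times))

# An LSS equation takes the generic form AUC_pred = a + b * C_t, with a
# and b fitted by regression; the coefficients below are invented purely
# to demonstrate the prediction and error metrics.
a, b = 10.0, 4.0
auc_pred = a + b * conc[2]                # single sample at 2 h (C2)

# Predictive performance: mean error (ME) and root mean square error
# (RMSE); with one profile (the study used 54) they coincide in magnitude.
errors = np.array([auc_pred - auc_0_6])
me = errors.mean()
rmse = np.sqrt((errors ** 2).mean())
print(auc_0_6, auc_pred, me, rmse)
```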

Relevance: 100.00%

Abstract:

Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments such as phase noise remain key bottlenecks in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches rely on well-designed sum-product algorithms, which iteratively pass probabilistic messages between the channel statistical information and the decoding information. Channel statistical information generally entails high computational complexity because its probabilistic model may involve continuous random variables. The detailed knowledge of the channel statistics that these algorithms require makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform separate A Posteriori Probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm for phase noise estimation and does not depend on the modulation/coding scheme, as it exploits only the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches, at the cost of slight performance degradation.
To improve the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the received information from the channel by exchanging information iteratively to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI) followed by phase estimation on the polarization of interest. The receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where the M-PE corresponds to the polarization of interest. The operating principle of an M/S-PE block is to improve the phase tracking performance of both polarization branches: the M-PE block tracks the co-polar phase, and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
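A minimal sketch of the decision-directed idea behind the proposed receivers: align the received samples with soft symbol decisions to estimate the channel phase, with no prior statistics of the phase noise process. The BPSK toy channel, constant phase, and perfectly confident APPs are simplifying assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: BPSK symbols through a constant-phase channel with noise.
n = 200
true_phase = 0.4                      # rad, unknown to the receiver
bits = rng.integers(0, 2, n)
symbols = 2.0 * bits - 1.0            # BPSK: {-1, +1}
noise = 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
received = symbols * np.exp(1j * true_phase) + noise

# Soft symbol estimates from the APPs. Here we fake perfectly confident
# APPs; a real receiver would take E[s_k | APPs] from the decoder.
soft_symbols = symbols

# Decision-directed phase estimate: the angle that best aligns the
# received samples with the (soft) symbol decisions.
phase_hat = np.angle(np.sum(received * np.conj(soft_symbols)))
print(phase_hat)
```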

Relevance: 100.00%

Abstract:

This thesis considers two basic aspects of impact damage in composite materials, namely damage severity discrimination and impact damage location, using Acoustic Emission (AE) and Artificial Neural Networks (ANNs). The experimental work covers the application of AE as a non-destructive damage testing (NDT) technique and the evaluation of ANN modelling, with ANNs playing a central role in the implementation. In the first part of the study, different impact energies were used to produce different levels of damage in two composite materials (T300/914 and T800/5245). The impacts were detected via their acoustic emissions. The AE waveform signals were analysed and modelled using a Back Propagation (BP) neural network, and the Mean Square Error (MSE) of the output was then used as the damage indicator in the damage severity discrimination study. To evaluate the ANN model, the correlation coefficients of different parameters, such as MSE, AE energy, and AE counts, were compared; MSE gave the best correlation. In the second part, a new artificial neural network model was developed to locate impact damage on a quasi-isotropic composite panel. It was successfully trained to locate impact sites by correlating the arrival-time differences of AE signals at transducers mounted on the panel with the impact site coordinates. The performance of the ANN model, evaluated by the distance deviation between the model output and the true location coordinates, supports the application of ANNs as impact damage location identifiers. In the study, the accuracy of location prediction decreased towards the central area of the panel. Further investigation indicated that this is due to the small arrival-time differences there, which degrade the performance of the ANN prediction.
This research suggested increasing the number of processing neurons in the ANNs as a practical solution.
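The location task above inverts a simple physical relationship; the sketch below shows the forward mapping the ANN learns (impact coordinates to arrival-time differences) and why the differences vanish near the panel centre. The sensor layout, panel size, and wave speed are assumptions for illustration, not the experimental configuration.

```python
import numpy as np

# Transducers at the corners of a 0.3 m square panel; wave speed is an
# assumed value for a composite laminate.
sensors = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3], [0.3, 0.3]])  # m
wave_speed = 5000.0                                                   # m/s

def arrival_time_differences(impact_xy):
    """Arrival times relative to the first-hit sensor (the ANN's inputs)."""
    d = np.linalg.norm(sensors - np.asarray(impact_xy), axis=1)
    t = d / wave_speed
    return t - t.min()

# Near the panel centre all sensor distances are similar, so the time
# differences shrink -- which is why location accuracy degraded there.
centre = arrival_time_differences([0.15, 0.15])
corner = arrival_time_differences([0.05, 0.05])
print(centre.max(), corner.max())
```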

Relevance: 100.00%

Abstract:

In this paper, we present syllable-based duration modelling in the context of a prosody model for Standard Yorùbá (SY) text-to-speech (TTS) synthesis applications. Our prosody model is conceptualised around a modular holistic framework, implemented using Relational Tree (R-Tree) techniques. An important feature of our R-Tree framework is its flexibility: it facilitates the independent implementation of the different dimensions of prosody, i.e. duration, intonation, and intensity, using different techniques, and their subsequent integration. We applied the Fuzzy Decision Tree (FDT) technique to model the duration dimension. In order to evaluate the effectiveness of FDT in duration modelling, we also developed a Classification And Regression Tree (CART) based duration model using the same speech data. Each of these models was integrated into our R-Tree based prosody model. We performed both quantitative (i.e. Root Mean Square Error (RMSE) and Correlation (Corr)) and qualitative (i.e. intelligibility and naturalness) evaluations of the two duration models. The results show that CART models the training data more accurately than FDT. The FDT model, however, shows a better ability to extrapolate from the training data, since it achieved better accuracy on the test data set. Our qualitative evaluation results show that the FDT model produces synthesised speech that is perceived as more natural than that of the CART model. In addition, we observed that the expressiveness of FDT is much better than that of CART, because the representation in FDT is not restricted to a set of piecewise or discrete constant approximations. We therefore conclude that the FDT approach is a practical approach for duration modelling in SY TTS applications. © 2006 Elsevier Ltd. All rights reserved.
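As a minimal illustration of a CART-style duration model and the quantitative metrics named above, the one-split regression stump below predicts syllable duration from a single binary feature and reports RMSE and correlation. The feature and durations are synthetic stand-ins, not the SY speech data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a hypothetical binary "stress" feature shifting
# syllable duration (ms), plus noise.
stress = rng.integers(0, 2, 100)
duration = 120 + 40 * stress + rng.normal(0, 5, 100)

# One CART split on the binary feature: each leaf predicts its mean.
pred = np.where(stress == 1,
                duration[stress == 1].mean(),
                duration[stress == 0].mean())

# Quantitative evaluation as in the paper: RMSE and correlation.
rmse = np.sqrt(np.mean((pred - duration) ** 2))
corr = np.corrcoef(pred, duration)[0, 1]
print(rmse, corr)
```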

Relevance: 100.00%

Abstract:

The purpose was to advance research and clinical methodology for assessing psychopathology by testing the international generalizability of an 8-syndrome model derived from collateral ratings of adult behavioral, emotional, social, and thought problems. Collateral informants rated 8,582 18-59-year-old residents of 18 societies on the Adult Behavior Checklist (ABCL). Confirmatory factor analyses tested the fit of the 8-syndrome model to ratings from each society. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all societies, while secondary indices (Tucker Lewis Index, Comparative Fit Index) showed acceptable to good fit for 17 societies. Factor loadings were robust across societies and items. Of the 5,007 estimated parameters, 4 (0.08%) were outside the admissible parameter space, but 95% confidence intervals included the admissible space, indicating that the 4 deviant parameters could be due to sampling fluctuations. The findings are consistent with previous evidence for the generalizability of the 8-syndrome model in self-ratings from 29 societies, and support the 8-syndrome model for operationalizing phenotypes of adult psychopathology from multi-informant ratings in diverse societies. © 2014 Asociación Española de Psicología Conductual.
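The primary fit index used above, the Root Mean Square Error of Approximation, has a standard closed form in terms of the chi-square statistic, its degrees of freedom, and the sample size. A small sketch, with illustrative values rather than the ABCL statistics:

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from a chi-square fit
    statistic, its degrees of freedom, and the sample size (standard
    formula; values below .05 are conventionally read as good fit)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative values (not from the ABCL analyses): chi-square of 150
# on 100 df with N = 500.
value = rmsea(150.0, 100, 500)
print(round(value, 3))
```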

Relevance: 100.00%

Abstract:

In this work, different artificial neural networks (ANNs) are developed for the prediction of surface roughness (Ra) values in Al alloy 7075-T7351 after a face milling machining process. Radial basis (RBNN), feed-forward (FFNN), and generalized regression (GRNN) networks were selected, and the data used to train these networks were derived from experiments conducted on a high-speed milling machine. A Taguchi design of experiments was applied to reduce the time and cost of the experiments. The performance of each ANN used in this research was measured by the mean square error percentage, and FFNN achieved the best results. The Pearson correlation coefficient was also calculated to analyze the correlation between the five inputs (cutting speed, feed per tooth, axial depth of cut, chip width, and chip thickness) and the selected output (surface roughness). The results showed a strong correlation between chip thickness and surface roughness, followed by cutting speed. © ASM International.
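The Pearson screening step can be sketched as follows; the chip-thickness and roughness data are synthetic stand-ins constructed with a strong linear effect, not the milling measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic screening data: 27 runs (a typical Taguchi array size),
# roughness driven mostly by chip thickness plus noise. The ranges and
# coefficients are assumptions for illustration only.
chip_thickness = rng.uniform(0.05, 0.30, 27)                  # mm
roughness = 0.4 + 3.0 * chip_thickness + rng.normal(0, 0.05, 27)

# Pearson correlation between one candidate input and the output.
r = np.corrcoef(chip_thickness, roughness)[0, 1]
print(r)
```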

Relevance: 100.00%

Abstract:

Technology changes rapidly over the years, continuously providing more computing options and easing economic and other transactions. However, the introduction of new technology pushes old Information and Communication Technology (ICT) products into disuse. E-waste is defined as the quantity of ICT products no longer in use, and is a bivariate function of the quantities sold and the probability that a given quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries in each region in order to compute obsolete computer quantities. To provide robust forecasts, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend, (v) Level, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, choosing for each country the model with the smallest in-sample error indices (Mean Absolute Error and Mean Square Error). As new technology does not diffuse through all regions of the world at the same speed, owing to different socio-economic factors, the lifespan distribution, which gives the probability that a certain quantity of computers is obsolete, is not adequately modeled in the literature. The forecast horizon is 2014-2030, and the results show a very sharp increase in the USA and the United Kingdom, due to decreasing computer lifespans and increasing sales.
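The model-selection step can be sketched as follows: score candidate diffusion curves in-sample by MAE and MSE and keep the best. The sales series and all parameters are illustrative, and the "observed" data are generated from the logistic curve, so that model wins by construction.

```python
import numpy as np

# Synthetic cumulative sales following a logistic diffusion curve.
years = np.arange(2000, 2014)
sales = 100.0 / (1.0 + np.exp(-0.6 * (years - 2007)))

def logistic(t, m, k, t0):
    return m / (1.0 + np.exp(-k * (t - t0)))

def gompertz(t, m, b, c):
    return m * np.exp(-b * np.exp(-c * (t - 2000)))

# Two of the paper's seven candidate families, with assumed parameters.
candidates = {
    "Logistic": logistic(years, 100.0, 0.6, 2007.0),
    "Gompertz": gompertz(years, 100.0, 5.0, 0.3),
}

# In-sample error indices used for selection: MAE and MSE.
scores = {name: {"MAE": np.mean(np.abs(fit - sales)),
                 "MSE": np.mean((fit - sales) ** 2)}
          for name, fit in candidates.items()}
best = min(scores, key=lambda name: scores[name]["MSE"])
print(best)
```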

Relevance: 100.00%

Abstract:

Heat sinks are widely used for cooling electronic devices and systems. Their thermal performance is usually determined by the material, shape, and size of the heat sink. With the assistance of computational fluid dynamics (CFD) and surrogate-based optimization, heat sinks can be designed and optimized to achieve a high level of performance. In this paper, the design and optimization of a plate-fin heat sink cooled by an impingement jet is presented. The flow and thermal fields are simulated using CFD, and the thermal resistance of the heat sink is then estimated. A Kriging surrogate model is developed to approximate the objective function (thermal resistance) as a function of the design variables. Surrogate-based optimization is implemented by adaptively adding infill points based on an integrated strategy combining the minimum value, maximum mean square error, and expected improvement approaches. The results show the influence of the design variables on the thermal resistance and yield the heat sink with the lowest thermal resistance for the given jet impingement conditions.
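The expected improvement infill criterion mentioned above has a standard closed form given a Kriging (Gaussian process) prediction mean and standard deviation; a sketch for minimizing thermal resistance, with invented numbers rather than values from the paper:

```python
import math

def expected_improvement(mu, sigma, best):
    """EI for minimization: expected amount by which a candidate with
    Kriging prediction mean mu and standard deviation sigma improves on
    the best observed objective (here, thermal resistance, K/W)."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

# A candidate predicted slightly worse than the incumbent but with high
# model uncertainty still has positive EI -- this is what drives infill
# toward unexplored regions of the design space.
value = expected_improvement(mu=0.52, sigma=0.05, best=0.50)
print(value)
```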

Relevance: 100.00%

Abstract:

OBJECTIVE: To analyze differences in the variables associated with severity of suicidal intent and in the main factors associated with intent when comparing younger and older adults. DESIGN: Observational, descriptive cross-sectional study. SETTING: Four general hospitals in Madrid, Spain. PARTICIPANTS: Eight hundred seventy suicide attempts by 793 subjects split into two groups: 18-54-year-olds and subjects older than 55 years. MEASUREMENTS: The authors tested the factorial latent structure of suicidal intent through multigroup confirmatory factor analysis for categorical outcomes and performed statistical tests of invariance across age groups using the DIFFTEST procedure. They then tested a multiple indicators-multiple causes (MIMIC) model including different covariates regressed on the latent factor "intent" and performed two separate MIMIC models for younger and older adults to test for differential patterns. RESULTS: Older adults had higher suicidal intent than younger adults (z = 2.63, p = 0.009). The final model for the whole sample showed a relationship of intent with previous attempts, support, mood disorder, personality disorder, substance-related disorder, and schizophrenia and other psychotic disorders. The model showed an adequate fit (chi²[12] = 22.23, p = 0.035; comparative fit index = 0.986; Tucker-Lewis index = 0.980; root mean square error of approximation = 0.031; weighted root mean square residual = 0.727). All covariates had significant weights in the younger group, but in the older group, only previous attempts and mood disorders were significantly related to intent severity. CONCLUSIONS: The pattern of variables associated with suicidal intent varies with age. Recognition and treatment of geriatric depression may be the most effective measure to prevent suicidal behavior in older adults.

Relevance: 100.00%

Abstract:

Starting from the Operophtera brumata L. database collected between 1973 and 2000 by the Light Trap Network in Hungary, we introduce a simple theta-logistic population dynamical model based on endogenous and exogenous factors only. We create an indicator set from which we choose the elements that improve the fit most effectively, and then extend the basic model with additive climatic factors. The parameter optimization minimizes the root mean square error, and the best model is chosen according to the Akaike Information Criterion. Finally, we run the calibrated extended model with daily outputs of the regional climate model RegCM3.1, taking 1961-1990 as the reference period and 2021-2050 and 2071-2100 as future predictions. The results for the three time intervals are fitted with Beta distributions and compared statistically, and the expected changes are discussed.
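The modelling pipeline above (theta-logistic dynamics, an RMSE fitting criterion, and AIC-based model choice) can be sketched as follows, with synthetic counts in place of the light-trap data and purely illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def theta_logistic(n0, r, K, theta, steps):
    """Deterministic theta-logistic trajectory: n_{t+1} = n_t * exp(r * (1 - (n_t/K)**theta))."""
    n = [n0]
    for _ in range(steps - 1):
        n.append(n[-1] * np.exp(r * (1.0 - (n[-1] / K) ** theta)))
    return np.array(n)

# Synthetic "observed" series (28 years, matching the study's span):
# a theta-logistic trajectory plus observation noise.
obs = theta_logistic(10.0, 0.5, 100.0, 2.0, 28) + rng.normal(0.0, 1.0, 28)

def rmse(pred):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def aic(pred, k_params):
    # Gaussian-error AIC up to an additive constant: n * ln(MSE) + 2k.
    return len(obs) * np.log(np.mean((pred - obs) ** 2)) + 2 * k_params

simple = theta_logistic(10.0, 0.5, 100.0, 1.0, 28)    # plain logistic (theta = 1)
extended = theta_logistic(10.0, 0.5, 100.0, 2.0, 28)  # theta model, one extra parameter
print(rmse(simple), rmse(extended), aic(extended, 4) < aic(simple, 3))
```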

Relevance: 100.00%

Abstract:

Traffic incidents are a major source of traffic congestion on freeways. Freeway traffic diversion using pre-planned alternate routes has been used as a strategy to reduce traffic delays due to major traffic incidents. However, it is not always beneficial to divert traffic when an incident occurs: route diversion may adversely impact traffic on the alternate routes and may not result in an overall benefit. This dissertation research applies Artificial Neural Network (ANN) and Support Vector Regression (SVR) techniques to predict the percent delay reduction from route diversion, to help determine whether traffic should be diverted under given conditions. The DYNASMART-P mesoscopic traffic simulation model was applied to generate the simulated data used to develop the ANN and SVR models. A sample network that comes with the DYNASMART-P package was used as the base simulation network. Combinations of different levels of incident duration, capacity loss, percent of drivers diverted, VMS (variable message sign) messaging duration, and network congestion were simulated to represent different incident scenarios. The resulting percent delay reduction, average speed, and queue length for each scenario were extracted from the simulation output. The ANN and SVR models were then calibrated for percent delay reduction as a function of all of the simulated input and output variables. The results show that both calibrated models, when applied to the same location used to generate the calibration data, predicted delay reduction with relatively high accuracy in terms of mean square error (MSE) and regression correlation, and that the ANN model outperformed the SVR model. When the models were applied to a new location, only the ANN model produced comparatively good delay reduction predictions under high network congestion levels.
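The evaluation criteria named above (MSE and regression correlation between predicted and simulated percent delay reduction) can be sketched as follows; both "model" prediction series are synthetic placeholders, not ANN or SVR outputs:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic evaluation set: simulated percent delay reduction for 50
# incident scenarios, and two hypothetical models' predictions (one
# tighter, one looser, by construction).
actual = rng.uniform(0.0, 60.0, 50)
ann_pred = actual + rng.normal(0.0, 3.0, 50)
svr_pred = actual + rng.normal(0.0, 8.0, 50)

def mse(pred):
    return float(np.mean((pred - actual) ** 2))

def corr(pred):
    return float(np.corrcoef(pred, actual)[0, 1])

# The model with lower MSE and higher correlation is preferred.
print(mse(ann_pred), mse(svr_pred), corr(ann_pred), corr(svr_pred))
```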

Relevance: 100.00%

Abstract:

Interferometric synthetic aperture radar (InSAR) techniques can successfully detect phase variations related to water level changes in wetlands and produce spatially detailed, high-resolution maps of those changes. Despite this detail, the usefulness of wetland InSAR observations is rather limited, because hydrologists and water resources managers need information on absolute water levels, not relative water level changes. We present an InSAR technique called Small Temporal Baseline Subset (STBAS) for monitoring absolute water level time series using radar interferograms acquired successively over wetlands. The method uses stage (water level) observations to calibrate the relative InSAR observations and tie them to the stage's vertical datum. We tested the STBAS technique with two years of Radarsat-1 data acquired during 2006–2008 over Water Conservation Area 1 (WCA1) in the Everglades wetlands, south Florida (USA). The InSAR-derived water level data were calibrated using 13 stage stations located in the study area to generate 28 successive high-spatial-resolution maps (50 m pixel resolution) of absolute water levels. We evaluated the quality of the STBAS technique using a root mean square error (RMSE) criterion on the difference between InSAR observations and stage measurements. The average RMSE is 6.6 cm, which provides an uncertainty estimate for monitoring absolute water levels with the STBAS technique. About half of the uncertainty is attributed to the accuracy of the InSAR technique in detecting relative water levels; the other half reflects uncertainties from tying the relative levels to the stage stations' datum.
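The calibration idea behind STBAS can be sketched in a few lines: accumulate the relative InSAR-derived changes, tie them to a stage gauge's datum, and score the result with RMSE against the gauge series. All numbers below are illustrative, not the WCA1 measurements.

```python
import numpy as np

# Hypothetical relative water level changes between successive
# interferogram acquisitions (m).
insar_changes = np.array([0.03, -0.05, 0.02, 0.04])

# Cumulative relative levels start from an arbitrary zero...
relative = np.concatenate([[0.0], np.cumsum(insar_changes)])

# ...and the gauge reading at the first acquisition fixes the datum.
stage_obs = np.array([4.27, 4.30, 4.25, 4.28, 4.31])   # m above datum
absolute = relative + stage_obs[0]

# Quality criterion, as in the paper: RMSE against stage measurements.
rmse = np.sqrt(np.mean((absolute - stage_obs) ** 2))
print(absolute, rmse)
```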

Relevance: 100.00%

Abstract:

Florida Bay is a highly dynamic estuary that exhibits wide natural fluctuations in salinity due to changes in the balance of precipitation, evaporation and freshwater runoff from the mainland. Rapid and large-scale modification of freshwater flow and construction of transportation conduits throughout the Florida Keys during the late nineteenth and twentieth centuries reshaped water circulation and salinity patterns across the ecosystem. In order to determine long-term patterns in salinity variation across the Florida Bay estuary, we used a diatom-based salinity transfer function to infer salinity within 3.27 ppt root mean square error of prediction from diatom assemblages from four ~130 year old sediment records. Sites were distributed along a gradient of exposure to anthropogenic shifts in the watershed and salinity. Precipitation was found to be the primary driver influencing salinity fluctuations over the entire record, but watershed modifications on the mainland and in the Florida Keys during the late-1800s and 1900s were the most likely cause of significant shifts in baseline salinity. The timing of these shifts in the salinity baseline varies across the Bay: that of the northeastern coring location coincides with the construction of the Florida Overseas Railway (AD 1906–1916), while that of the east-central coring location coincides with the drainage of Lake Okeechobee (AD 1881–1894). Subsequent decreases occurring after the 1960s (east-central region) and early 1980s (southwestern region) correspond to increases in freshwater delivered through water control structures in the 1950s–1970s and again in the 1980s. Concomitant increases in salinity in the northeastern and south-central regions of the Bay in the mid-1960s correspond to an extensive drought period and the occurrence of three major hurricanes, while the drop in the early 1970s could not be related to any natural event. 
This paper provides information about major factors influencing salinity conditions in Florida Bay in the past and quantitative estimates of the pre- and post-South Florida watershed modification salinity levels in different regions of the Bay. This information should be useful for environmental managers in setting restoration goals for the marine ecosystems in South Florida, especially for Florida Bay.

Relevance: 100.00%

Abstract:

Colleges base their admission decisions on a number of factors to determine which applicants have the potential to succeed. This study utilized data for students who graduated from Florida International University between 2006 and 2012. Two models were developed (one using SAT and the other using ACT as the principal explanatory variable) to predict college success, measured by the student's college grade point average at graduation. Other factors used in these predictions included high school performance, socioeconomic status, major, gender, and ethnicity. The model using ACT had a higher R^2, but the model using SAT had a lower mean square error. African Americans had a significantly lower college grade point average than graduates of other ethnicities, and females had a significantly higher college grade point average than males.
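A minimal sketch of the single-predictor modelling and the R^2/MSE evaluation described above, with synthetic scores and coefficients rather than the FIU data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic sample: SAT scores and graduating GPAs with an assumed
# linear relationship plus noise.
n = 200
sat = rng.normal(1100, 150, n)
gpa = 1.0 + 0.002 * sat + rng.normal(0.0, 0.3, n)

def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns fitted values."""
    b, a = np.polyfit(x, y, 1)
    return a + b * x

pred = ols_fit(sat, gpa)
resid = gpa - pred

# The two comparison criteria used in the study.
mse = np.mean(resid ** 2)
r2 = 1.0 - resid.var() / gpa.var()
print(mse, r2)
```

The same fit would be repeated with ACT as the predictor, and the two models compared on R^2 and MSE as the abstract describes.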
