904 results for Test data


Relevance: 40.00%

Abstract:

The main purpose of this thesis project is the prediction of symptom severity and cause in data from a test battery for Parkinson's disease patients, based on data mining. The data were collected from a test battery administered on a hand-held computer. We use the Chi-Square method to check which variables are important and which are not. We then apply different data mining techniques to our normalized data and check which technique or method gives good results. The thesis is implemented in WEKA: we normalize our data and then apply different methods to it, namely Naïve Bayes, CART, and KNN. We use Bland-Altman plots and Spearman's correlation to check the final results and the quality of the predictions. The Bland-Altman plot tells us how well the predictions agree with the actual values at our confidence level, and Spearman's correlation tells us how strong the relationship is. On the basis of the results and analysis, we see that all three methods give nearly the same results, but CART (J48 decision tree) gives the best result, with under-predicted and over-predicted values lying between -2 and +2. The correlation between the actual and predicted values is 0.794 for CART. Cause gives a better percentage classification result than disability because it can use two classes.
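As an illustration of the workflow this abstract describes (normalization, Chi-Square feature selection, then classifier comparison), here is a minimal sketch. The thesis used WEKA; this sketch substitutes scikit-learn, and all data, feature counts, and parameter choices are synthetic placeholders rather than the thesis setup.

```python
# Hypothetical sketch of the thesis workflow (the original used WEKA);
# the data and feature counts here are synthetic stand-ins.
import numpy as np
from scipy.stats import spearmanr
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((200, 10))                 # 200 synthetic test-battery records
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # synthetic severity class

# 1) Normalize, then keep the variables Chi-Square ranks as important.
X_norm = MinMaxScaler().fit_transform(X)  # chi2 requires non-negative inputs
X_sel = SelectKBest(chi2, k=5).fit_transform(X_norm, y)

# 2) Compare the three classifier families named in the abstract.
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision tree (CART-style)", DecisionTreeClassifier()),
                  ("KNN", KNeighborsClassifier())]:
    pred = cross_val_predict(clf, X_sel, y, cv=10)
    rho, _ = spearmanr(y, pred)           # Spearman's rho, actual vs. predicted
    print(f"{name}: accuracy={np.mean(pred == y):.3f}, Spearman rho={rho:.3f}")
```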

Relevance: 40.00%

Abstract:

This work aims to compare the forecast efficiency of different types of methodologies applied to Brazilian consumer inflation (IPCA). We compare forecasting models using disaggregated and aggregated data over horizons of up to twelve months ahead. The disaggregated models were estimated by SARIMA at different levels of disaggregation. The aggregated models were estimated by time series techniques such as SARIMA, state-space structural models, and Markov-switching. The forecasting accuracy comparison is made with the model selection procedure known as the Model Confidence Set and with the Diebold-Mariano procedure. We find evidence of forecast accuracy gains in models using more disaggregated data.
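The Diebold-Mariano comparison mentioned above can be sketched as follows. This is a minimal illustration under squared-error loss, not the authors' implementation; the error series `e_agg` and `e_dis` are synthetic stand-ins for the aggregated- and disaggregated-model forecast errors.

```python
# Minimal sketch of the Diebold-Mariano test for equal predictive accuracy,
# assuming squared-error loss; e_agg and e_dis are placeholder error series.
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """DM statistic with a long-run variance using h-1 autocovariance lags."""
    d = e1**2 - e2**2                      # loss differential under squared error
    T = len(d)
    d_bar = d.mean()
    # Long-run variance: gamma_0 + 2 * sum of autocovariances up to lag h-1.
    gamma = [np.cov(d[k:], d[:T - k])[0, 1] if k else d.var(ddof=0)
             for k in range(h)]
    lrv = gamma[0] + 2 * sum(gamma[1:])
    dm = d_bar / np.sqrt(lrv / T)
    p = 2 * (1 - stats.norm.cdf(abs(dm)))  # two-sided normal p-value
    return dm, p

rng = np.random.default_rng(1)
e_agg = rng.normal(0, 1.2, 120)            # synthetic aggregated-model errors
e_dis = rng.normal(0, 1.0, 120)            # synthetic disaggregated-model errors
print(diebold_mariano(e_agg, e_dis, h=1))
```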

Relevance: 40.00%

Abstract:

This paper discusses the design and performance of the time measurement technique and of the synchronization systems of the CMS hadron calorimeter. Time measurement performance results are presented from test beam data taken in the years 2004 and 2006. For hadronic showers of energy greater than 100 GeV, the timing resolution is measured to be about 1.2 ns. Time synchronization and out-of-time background rejection results are presented from the Cosmic Run At Four Tesla and LHC beam runs taken in the Autumn of 2008. The inter-channel synchronization is measured to be within 2 ns.

Relevance: 40.00%

Abstract:

Objectives: Previous research conducted in the late 1980s suggested that vehicle impacts following an initial barrier collision increase severe occupant injury risk. Now over 25 years old, those data are no longer representative of currently installed barriers or the present US vehicle fleet. The purpose of this study is to provide a present-day assessment of secondary collisions and to determine whether current full-scale barrier crash testing criteria provide an indication of secondary collision risk for real-world barrier crashes. Methods: To characterize secondary collisions, 1,363 (596,331 weighted) real-world barrier midsection impacts selected from 13 years (1997-2009) of in-depth crash data available through the National Automotive Sampling System (NASS) / Crashworthiness Data System (CDS) were analyzed. Scene diagrams and available scene photographs were used to determine roadside- and barrier-specific variables unavailable in NASS/CDS. Binary logistic regression models were developed for second event occurrence and resulting driver injury. To investigate current secondary collision crash test criteria, 24 full-scale crash test reports were obtained for common non-proprietary US barriers, and the risk of secondary collisions was determined using the recommended evaluation criteria from National Cooperative Highway Research Program (NCHRP) Report 350. Results: Secondary collisions were found to occur in approximately two thirds of crashes where a barrier is the first object struck. Barrier lateral stiffness, post-impact vehicle trajectory, vehicle type, and pre-impact tracking conditions were found to be statistically significant contributors to secondary event occurrence. The presence of a second event was found to increase the likelihood of a serious driver injury by a factor of 7 compared to cases with no second event. The NCHRP Report 350 exit angle criterion was found to underestimate the risk of secondary collisions in real-world barrier crashes. Conclusions: Consistent with previous research, collisions following a barrier impact are not infrequent and substantially increase driver injury risk. The results suggest that using exit-angle-based crash test criteria alone is not sufficient to predict second collision occurrence for real-world barrier crashes.
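A hedged sketch of the kind of binary logistic regression model described in the Methods is shown below. The predictor names (`stiffness`, `vehicle_type`, `tracking`) and the data are hypothetical stand-ins, not the NASS/CDS variables or estimates from the study.

```python
# Sketch of a binary logistic regression for second-event occurrence;
# all variables and data below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "stiffness": rng.choice(["flexible", "semi-rigid", "rigid"], n),
    "vehicle_type": rng.choice(["car", "pickup", "suv"], n),
    "tracking": rng.integers(0, 2, n),   # 1 = vehicle tracking at impact
})
# Synthetic outcome: 1 if a second collision followed the barrier impact.
df["second_event"] = rng.integers(0, 2, n)

model = smf.logit("second_event ~ C(stiffness) + C(vehicle_type) + tracking",
                  data=df).fit(disp=False)
# Exponentiated coefficients are odds ratios; the study's injury-risk factor
# was derived from models of this general form.
print(np.exp(model.params))
```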

Relevance: 40.00%

Abstract:

The efficacy of durable polymer drug-eluting stents (DES) comes at the expense of delayed healing of the stented vessel. Biodegradable polymer DES aim to avoid this shortcoming and may potentially improve long-term clinical outcomes, with the benefit expected to accrue over time. We sought to compare long-term outcomes in patients treated with biodegradable polymer DES vs. durable polymer sirolimus-eluting stents (SES).

Relevance: 40.00%

Abstract:

A problem frequently encountered in Data Envelopment Analysis (DEA) is that the total number of inputs and outputs included tends to be too large relative to the sample size. One way to counter this problem is to combine several inputs (or outputs) into (meaningful) aggregate variables, thereby reducing the dimension of the input (or output) vector. A direct effect of input aggregation is to reduce the number of constraints. This, in turn, alters the optimal value of the objective function. In this paper, we show how a statistical test proposed by Banker (1993) may be applied to test the validity of a specific way of aggregating several inputs. An empirical application using data from Indian manufacturing for the year 2002-03 is included as an example of the proposed test.
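To make the setup concrete, here is a minimal sketch of input-oriented CCR DEA solved by linear programming, together with a Banker-style ratio test comparing inefficiency under the original and aggregated input specifications. The data are synthetic, the aggregation rule is arbitrary, and the exact form of Banker's (1993) statistic used in the paper may differ from this simplified version.

```python
# Sketch: input-oriented CCR DEA via LP, plus a simplified Banker-style
# F-ratio test of an input aggregation. All data are synthetic placeholders.
import numpy as np
from scipy.optimize import linprog
from scipy import stats

def dea_efficiency(X, Y):
    """CCR efficiency for each DMU. X: (n, m) inputs, Y: (n, s) outputs."""
    n = X.shape[0]
    eff = np.empty(n)
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                     # minimize theta
        A_in = np.c_[-X[o][:, None], X.T]               # sum(lam*x) <= theta*x_o
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # sum(lam*y) >= y_o
        b = np.r_[np.zeros(X.shape[1]), -Y[o]]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                      bounds=[(0, None)] * (n + 1))
        eff[o] = res.x[0]
    return eff

rng = np.random.default_rng(3)
n = 30
X = rng.uniform(1, 10, (n, 4))          # four inputs
Y = rng.uniform(1, 10, (n, 1))          # one output
X_agg = np.c_[X[:, :2].sum(axis=1), X[:, 2:].sum(axis=1)]  # aggregate to two

ineff_full = 1 / dea_efficiency(X, Y) - 1     # Farrell inefficiencies
ineff_agg = 1 / dea_efficiency(X_agg, Y) - 1
# Banker-style statistic: ratio of summed inefficiencies, referred to an
# F(2n, 2n) distribution under an exponential-inefficiency assumption.
F = ineff_agg.sum() / ineff_full.sum()
print(F, 1 - stats.f.cdf(F, 2 * n, 2 * n))
```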

Relevance: 40.00%

Abstract:

An interim analysis is usually applied in later phase II or phase III trials to find convincing evidence of a significant treatment difference that may lead to trial termination earlier than planned. This can save patient resources and shorten drug development and approval time. Ethics and economics are also reasons to stop a trial early. In clinical trials of eyes, ears, knees, arms, kidneys, lungs, and other clustered treatments, data may include distribution-free random variables with matched and unmatched subjects in one study. It is important to properly include both kinds of subjects in the interim and final analyses so that maximum efficiency of statistical and clinical inference can be obtained at different stages of the trial. So far, no publication has applied a statistical method for distribution-free data with matched and unmatched subjects in the interim analysis of clinical trials. In this simulation study, the hybrid statistic was used to estimate the empirical powers and empirical type I errors among simulated datasets with different sample sizes, effect sizes, correlation coefficients for matched pairs, and data distributions, in the interim and final analyses with four different group sequential methods. Empirical powers and empirical type I errors were also compared to those estimated using the meta-analysis t-test on the same simulated datasets. Results from this simulation study show that, compared to the meta-analysis t-test commonly used for normally distributed observations, the hybrid statistic has greater power for data observed from normally, log-normally, and multinomially distributed random variables with matched and unmatched subjects and with outliers. Power rose with increasing sample size, effect size, and correlation coefficient for the matched pairs. In addition, lower type I errors were observed with the hybrid statistic, which indicates that this test is also conservative for data with outliers in the interim analysis of clinical trials.
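The simulation design, though not the paper's hybrid statistic itself (its form is not given in the abstract), can be sketched as below. A paired-plus-unpaired t-test combination via Fisher's method stands in as a placeholder analysis, and group sequential boundaries are omitted.

```python
# Sketch of estimating empirical type I error and power by simulation for
# data with matched and unmatched subjects; the combined test here is a
# placeholder, not the paper's hybrid statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def one_trial(effect, n_matched=30, n_unmatched=30, rho=0.5):
    # Matched pairs with correlation rho, plus independent (unmatched) arms.
    cov = [[1, rho], [rho, 1]]
    pairs = rng.multivariate_normal([0, effect], cov, n_matched)
    x_un = rng.normal(0, 1, n_unmatched)
    y_un = rng.normal(effect, 1, n_unmatched)
    _, p_m = stats.ttest_rel(pairs[:, 0], pairs[:, 1])
    _, p_u = stats.ttest_ind(x_un, y_un)
    # Fisher's method to combine the two p-values (placeholder combination).
    _, p = stats.combine_pvalues([p_m, p_u], method="fisher")
    return p

def empirical_rate(effect, alpha=0.05, n_sim=2000):
    return np.mean([one_trial(effect) < alpha for _ in range(n_sim)])

print("empirical type I error:", empirical_rate(0.0))   # null: no effect
print("empirical power:       ", empirical_rate(0.5))   # alternative
```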

Relevance: 40.00%

Abstract:

A small Positron Emission Tomography demonstrator based on LYSO slabs and Silicon Photomultiplier matrices is under construction at the University and INFN of Pisa. In this paper we present the characterization results of the read-out electronics and of the detection system. Two SiPM matrices, each composed of 8 × 8 SiPM pixels with 1.5 mm pitch, have been coupled one-to-one to a LYSO crystal array. Custom front-end ASICs were used to read the 64 channels of each matrix. Data from each front-end were multiplexed and sent to a DAQ board for digital conversion; a motherboard collects the data and communicates with a host computer through a USB port. Specific tests were carried out on the system in order to assess its performance. Furthermore, we have measured some of the most important parameters of the system for PET applications.

Relevance: 40.00%

Abstract:

In this paper, we present a simple algorithm for assessing the validity of the RVoG model for PolInSAR-based inversion techniques. This approach makes use of two important features characterizing a homogeneous random volume over a ground surface: the independence of wave propagation through the volume on the polarization state, and the structure of the polarimetric interferometric coherency matrix. These two features have led to two different methods proposed in the literature for retrieving the topographic phase within natural covers: the well-known line-fitting procedure and the observation of the (1, 2) element of the polarimetric interferometric coherency matrix. We show that differences between the outputs of the two approaches can be interpreted in terms of PolInSAR modeling based on the Freeman-Durden concept, and that this leads to the definition of an RVoG/non-RVoG test. The algorithm is tested with both indoor and airborne data over agricultural and tropical forest areas.
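A minimal sketch of the line-fitting step referenced above: fit a total least-squares line through complex coherences from several polarization channels, then intersect it with the unit circle to estimate the topographic (ground) phase. The coherence values and the rule for choosing between the two intersections are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of line fitting in the complex coherence plane for RVoG-style
# topographic phase estimation; coherences below are synthetic placeholders.
import numpy as np

def ground_phase_from_line_fit(gammas):
    """TLS line through complex coherences, intersected with the unit circle."""
    pts = np.c_[gammas.real, gammas.imag]
    center = pts.mean(axis=0)
    # Principal direction of the coherence cloud = fitted line direction.
    _, _, vt = np.linalg.svd(pts - center)
    d = vt[0]
    # Solve |center + t*d|^2 = 1 for t (line / unit-circle intersection).
    b = 2 * center @ d
    c = center @ center - 1
    t = np.roots([1.0, b, c])            # two real roots if the line crosses
    cands = center[None, :] + t[:, None].real * d[None, :]
    # Heuristic (assumed, not from the paper): take the intersection nearer
    # the highest-magnitude coherence, treated as the most ground-dominated.
    ref = pts[np.argmax(np.abs(gammas))]
    z = cands[np.argmin(np.linalg.norm(cands - ref, axis=1))]
    return np.angle(z[0] + 1j * z[1])

# Synthetic coherences lying roughly along a line inside the unit circle.
gam = np.array([0.9 * np.exp(0.3j), 0.7 * np.exp(0.6j), 0.5 * np.exp(0.9j)])
print(ground_phase_from_line_fit(gam))
```

In the RVoG/non-RVoG test described in the abstract, a phase obtained this way would be compared against the phase read from the (1, 2) element of the polarimetric interferometric coherency matrix; a large discrepancy flags a departure from the RVoG model.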