935 results for test data generation
Abstract:
PFC3D (particle flow code), which models the movement and interaction of particles using DEM techniques, was employed to simulate particle movement and to calculate the velocity and energy distributions of collisions in two types of impact crusher: the Canica vertical shaft crusher and the BJD horizontal shaft swing hammer mill. The distribution of collision energies was then converted into a product size distribution for a particular ore type using JKMRC impact breakage test data. Experimental data from the Canica VSI crusher treating quarry and the BJD hammer mill treating coal were used to verify the DEM simulation results. Once the DEM procedures had been validated, a detailed simulation study was conducted to investigate the effects of machine design and operational conditions on the velocity and energy distributions of collisions inside the milling chamber and on particle breakage behaviour. (C) 2003 Elsevier Ltd. All rights reserved.
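The conversion from DEM collision energies to a product size distribution in studies of this kind is commonly based on the JKMRC breakage relation, in which the breakage index t10 grows with the mass-specific collision energy, t10 = A(1 - exp(-b*Ecs)). The following lines are a minimal Python sketch of that single step under assumed, made-up parameters (A, b and the collision energies are illustrative, not the values used by the authors):

    import numpy as np

    # Hypothetical ore-specific breakage parameters (normally fitted from impact breakage tests).
    A = 50.0   # maximum achievable t10 (%)
    b = 0.60   # ore "softness" parameter (units consistent with Ecs in kWh/t)

    def t10_from_collision_energy(ecs_kwh_per_t):
        """JKMRC-style breakage index t10 (%) from mass-specific collision energy Ecs."""
        return A * (1.0 - np.exp(-b * ecs_kwh_per_t))

    # Illustrative collision energies such as might be sampled from a DEM simulation.
    collision_energies = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])  # kWh/t
    print(np.round(t10_from_collision_energy(collision_energies), 1))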
Abstract:
A literature review highlighted the need to measure flotation froth rheology in order to fully characterise the role of the froth in the flotation process. An initial investigation using a coaxial cylinder viscometer for froth rheology measurement led to the development of a new device employing a vane measuring head. The modified rheometer was used in industrial-scale flotation tests at the Mt. Isa Copper Concentrator. The measured froth rheograms show that the flotation froths are non-Newtonian (pseudoplastic flow). This evidence of non-Newtonian flow calls into question the validity of applying the Laplace equation in froth motion modelling, as done by a number of researchers, since the assumption of irrotational flow is violated. Correlations were found between froth rheology and froth retention time, water hold-up in the froth and concentrate grades. These correlations are independent of air flow rate (test data at various air flow rates fall on a single trend line). This implies that froth rheology may be used as a lumped parameter for other operating variables in flotation modelling and scale-up. (C) 2003 Elsevier Science B.V. All rights reserved.
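Pseudoplastic rheograms such as those reported here are often summarised with a power-law (Ostwald-de Waele) model, shear stress = K * (shear rate)^n with n < 1. As an illustration only, the sketch below fits that model to invented shear-rate/shear-stress pairs; it is not the Mt. Isa data or the authors' analysis:

    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(shear_rate, K, n):
        """Ostwald-de Waele model: stress = K * shear_rate**n (n < 1 indicates pseudoplastic flow)."""
        return K * shear_rate**n

    # Illustrative froth rheometer readings (shear rate in 1/s, shear stress in Pa) - made up.
    shear_rate = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
    shear_stress = np.array([2.1, 2.9, 4.4, 5.9, 8.0, 12.3])

    (K, n), _ = curve_fit(power_law, shear_rate, shear_stress, p0=(1.0, 0.5))
    print(f"consistency K = {K:.2f} Pa.s^n, flow index n = {n:.2f}")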
Abstract:
The aim of this study was to compare accumulated oxygen deficit data derived using two different exercise protocols, with the goal of producing a less time-consuming test specifically for use with athletes. Six road and four track male endurance cyclists performed two series of cycle ergometer tests. The first series involved five 10 min sub-maximal cycle exercise bouts, a V̇O2peak test and a 115% V̇O2peak test. Data from these tests were used to estimate the accumulated oxygen deficit according to the calculations of Medbo et al. (1988). In the second series of tests, participants performed a 15 min incremental cycle ergometer test followed, 2 min later, by a 2 min variable resistance test in which they completed as much work as possible while pedalling at a constant rate. Analysis revealed that the accumulated oxygen deficit calculated from the first series of tests was higher (P < 0.02) than that calculated from the second series: 52.3 ± 11.7 and 43.9 ± 6.4 ml·kg⁻¹, respectively (mean ± s). Other significant differences between the two protocols were observed for V̇O2peak, total work and maximal heart rate; all were higher during the modified protocol (P
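The Medbo et al. (1988) accumulated oxygen deficit is obtained by regressing steady-state submaximal VO2 on power output, extrapolating that line to the supramaximal work rate to estimate the O2 demand, and subtracting the O2 actually taken up during the test. The sketch below reproduces only that arithmetic with invented numbers; it is not the authors' analysis and all values are illustrative assumptions:

    import numpy as np

    # Illustrative steady-state submaximal data: power (W) and VO2 (L/min) - made up.
    power = np.array([100, 150, 200, 250, 300])
    vo2 = np.array([1.6, 2.2, 2.8, 3.4, 4.0])

    # Linear VO2-power relationship, extrapolated to the supramaximal intensity.
    slope, intercept = np.polyfit(power, vo2, 1)

    supramax_power = 400.0       # W, e.g. the power corresponding to 115% VO2peak (assumed)
    test_duration_min = 2.5      # time to exhaustion in minutes (assumed)
    measured_o2_uptake = 9.5     # litres of O2 actually consumed during the bout (assumed)
    body_mass_kg = 70.0          # assumed

    o2_demand = (slope * supramax_power + intercept) * test_duration_min   # litres
    aod_litres = o2_demand - measured_o2_uptake
    print(f"accumulated O2 deficit ~ {aod_litres:.2f} L = {1000 * aod_litres / body_mass_kg:.1f} ml/kg")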
Abstract:
Analytical methods based on evaluation models of interactive systems were proposed as an alternative to user testing in the last stages of software development, due to its cost. However, the use of isolated behavioral models of the system limits the results of the analytical methods. An example of these limitations is that they are unable to identify implementation issues that will impact usability. With the introduction of model-based testing, we are able to test whether the implemented software meets the specified model. This paper presents a model-based approach for test case generation based on static analysis of source code.
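Model-based test case generation of this kind typically walks a behavioural model (for instance a finite state machine recovered from the source code) and emits one test case per transition or path. The fragment below is a minimal, hypothetical sketch of transition coverage over a toy state machine; it is not the implementation described in the paper and all states and events are invented:

    from collections import deque

    # Toy behavioural model: state -> list of (event, next_state).
    model = {
        "LoggedOut": [("login_ok", "LoggedIn"), ("login_fail", "LoggedOut")],
        "LoggedIn":  [("open_form", "Editing"), ("logout", "LoggedOut")],
        "Editing":   [("save", "LoggedIn"), ("cancel", "LoggedIn")],
    }

    def shortest_path(model, start, goal):
        """Shortest event sequence from start to goal (breadth-first search)."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if state == goal:
                return path
            for event, nxt in model.get(state, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [event]))
        return None

    def transition_coverage_tests(model, start):
        """One test case (event sequence) per transition in the model."""
        tests = []
        for src, transitions in model.items():
            prefix = shortest_path(model, start, src)
            if prefix is None:
                continue
            for event, _dst in transitions:
                tests.append(prefix + [event])
        return tests

    for case in transition_coverage_tests(model, "LoggedOut"):
        print(" -> ".join(case))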
Abstract:
OBJECTIVE To estimate rates of non-adherence to telemedicine strategies aimed at treating drug addiction. METHODS A systematic review was conducted of randomized controlled trials investigating different telemedicine treatment methods for drug addiction. The following databases were consulted between May 18, 2012 and June 21, 2012: PubMed, PsycINFO, SciELO, Wiley (The Cochrane Library), Embase, Clinical Trials and Google Scholar. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was used to evaluate the quality of the studies. The criteria evaluated were: appropriate sequence of data generation, allocation concealment, blinding, description of losses and exclusions, and analysis by intention to treat. There were 274 studies selected, of which 20 were analyzed. RESULTS Non-adherence rates varied between 15.0% and 70.0%. The interventions evaluated were of at least three months' duration and, although they all used telemedicine as support, treatment methods differed. The quality of the studies also varied, from very poor to high quality. High-quality studies showed better adherence rates, as did those using more than one intervention technique and a limited treatment time. Mono-user studies showed better adherence rates than poly-user studies. CONCLUSIONS Rates of non-adherence to treatment involving telemedicine on the part of users of psychoactive substances differed considerably, depending on the country, the intervention method, follow-up time and substances used. Using more than one intervention technique, a short duration of treatment and the type of substance used by patients appear to facilitate adherence.
Abstract:
Adhesively bonded repairs offer an attractive option for repairing aluminium structures, compared with more traditional methods such as fastening or welding. The single-strap (SS) and double-strap (DS) repairs are very straightforward to execute, but stresses in the adhesive layer peak at the overlap ends. The DS repair requires both sides of the damaged structure to be accessible for repair, which is often not possible. In strap repairs with the patches bonded at the outer surfaces, some limitations emerge, such as weight, aerodynamics and aesthetics. To minimize these effects, SS and DS repairs with embedded patches, in which the patches are flush with the adherends, were evaluated in this work. For this purpose, standard SS and DS repairs, as well as repairs with the patches embedded in the adherends, were tested under tension to allow the optimization of repair variables such as the overlap length (LO) and the type of adhesive, thus maximizing the repair strength. The effect of embedding the patch/patches on the fracture modes and failure loads was compared with a finite element (FE) analysis. The FE analysis was performed in ABAQUS® and cohesive zone modelling was used to simulate damage onset and growth in the adhesive layer. The comparison with the test data revealed accurate predictions for all kinds of joints and provided some guidelines regarding this technique.
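Cohesive zone modelling of an adhesive layer rests on a traction-separation law; a common choice is the triangular (bilinear) law defined by an initial stiffness, a cohesive strength and a critical fracture energy. The snippet below evaluates such a law in plain Python purely to illustrate the concept; it is not the ABAQUS® model used in the study and the property values are invented:

    def triangular_traction(separation, k0=1.0e6, t_max=20.0, g_c=0.5):
        """Bilinear (triangular) cohesive law.

        separation : opening displacement (mm)
        k0         : initial elastic stiffness (MPa/mm)
        t_max      : cohesive strength, i.e. peak traction (MPa)
        g_c        : critical fracture energy (N/mm), the area under the curve
        """
        d0 = t_max / k0          # separation at damage onset
        df = 2.0 * g_c / t_max   # separation at complete failure
        if separation <= d0:
            return k0 * separation                          # undamaged elastic branch
        if separation >= df:
            return 0.0                                      # fully debonded
        return t_max * (df - separation) / (df - d0)        # linear softening branch

    for d in (0.0, 1e-5, 2e-5, 0.01, 0.05):
        print(d, round(triangular_traction(d), 2))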
Abstract:
We describe the avidity maturation of IgGs in human toxoplasmosis using sequential serum samples from accidental and natural infections. In accidental cases, avidity increased continuously throughout infection, while naturally infected patients showed a different profile. Twenty-five percent of sera from chronic patients with specific IgM-positive results could be appropriately classified using the avidity test data alone. To take full advantage of this technique, the antigens recognized by IgG showing the steepest avidity maturation were identified by immunoblot with KSCN elution. Two clusters of antigens, in the ranges of 21-24 kDa and 30-33 kDa, were identified as the ones that fulfill the aforementioned avidity characteristics.
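IgG avidity in assays of this kind is usually expressed as an avidity index: the ratio of the ELISA signal obtained after a chaotrope wash (e.g. urea or KSCN) to the signal without it. The lines below only illustrate that ratio with made-up optical densities; neither the values nor any classification threshold are taken from the study:

    def avidity_index(od_with_chaotrope, od_without_chaotrope):
        """Avidity index (%) = chaotrope-treated signal / untreated signal x 100."""
        return 100.0 * od_with_chaotrope / od_without_chaotrope

    # Illustrative optical densities for one serum sample (invented values).
    ai = avidity_index(od_with_chaotrope=0.62, od_without_chaotrope=1.05)
    print(f"avidity index = {ai:.0f}%")  # higher values generally indicate a more mature response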
Abstract:
Dissertation submitted for the degree of Master in Computational Logic
Abstract:
Doctoral Thesis in Civil Engineering
Abstract:
A newly developed strain-rate-dependent anisotropic continuum model is proposed for impact and blast applications in masonry. The present model adopts the usual approach of considering different yield criteria in tension and compression. The analysis of unreinforced blockwork masonry walls subjected to impact is carried out to validate the capability of the model. Comparison of the numerical predictions and test data revealed good agreement. Next, a parametric study is conducted to evaluate the influence of the tensile strengths along the three orthogonal directions, and of the wall thickness, on the global behavior of masonry walls.
Abstract:
The present study proposes a dynamic constitutive material interface model that includes a non-associated flow rule and high strain rate effects, implemented in the finite element code ABAQUS as a user subroutine. First, the model's capability is validated with numerical simulations of unreinforced blockwork masonry walls subjected to low-velocity impact. The results obtained are compared with field test data and good agreement is found. Subsequently, a comprehensive parametric analysis is carried out with different joint tensile strengths, cohesion values and wall thicknesses to evaluate the effect of these parameter variations on the impact response of masonry walls.
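High strain rate effects in models of this type are often introduced by scaling the quasi-static strength with a dynamic increase factor (DIF) that grows with strain rate. The snippet below is a generic, hypothetical power-law DIF applied to a joint tensile strength; it is not the user-subroutine formulation implemented by the authors and all numbers are assumptions:

    def dynamic_increase_factor(strain_rate, ref_rate=1.0e-5, exponent=0.02):
        """Simple power-law DIF: equals 1 at the reference (quasi-static) rate and grows with rate."""
        return (strain_rate / ref_rate) ** exponent

    ft_static = 0.2  # quasi-static joint tensile strength in MPa (illustrative)
    for rate in (1.0e-5, 1.0e-2, 1.0, 100.0):
        ft_dyn = ft_static * dynamic_increase_factor(rate)
        print(f"strain rate {rate:8.0e} 1/s -> ft = {ft_dyn:.3f} MPa")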
Abstract:
This report presents the definition of my final degree project: an application for registering incoming and outgoing records in the public administration. I used free and open-source platform tools for the development of the project, with J2EE technology. A further objective is to use and try out latest-generation architectures such as Enterprise JavaBeans (EJB) 3.0 Preview 2 (5/11/04) for the business logic and Hibernate 3.0 alpha (beta 1.0 is currently available, published 20/12/04) as a
Abstract:
BACKGROUND: Anterior shoulder stabilization surgery with the arthroscopic Bankart procedure can have a high recurrence rate in certain patients. Identifying these patients to modify outcomes has become a focal point of research. PURPOSE: The Instability Shoulder Index Score (ISIS) was developed to predict the success of arthroscopic Bankart repair. Scores range from 0 to 10, with higher scores predicting a higher risk of recurrence after stabilization. The interobserver reliability of the score is not known. STUDY DESIGN: Cohort study (diagnosis); Level of evidence, 2. METHODS: This is a prospective multicenter (North America and Europe) study of patients suffering from shoulder instability and waiting for stabilization surgery. Five pairs of independent evaluators were asked to score patient instability severity with the ISIS. Patients also completed functional scores (Western Ontario Shoulder Instability Index [WOSI], Disabilities of the Arm, Shoulder and Hand-short version [QuickDASH], and Walch-Duplay test). Data on age, sex, number of dislocations, and type of surgery were collected. The test-retest method and intraclass correlation coefficient (ICC: >0.75 = good, >0.85 = very good, and >0.9 = excellent) were used for analysis. RESULTS: A total of 114 patients with anterior shoulder instability were included, of whom 89 (78%) were men. The mean age was 28 years. The ISIS was very reliable, with an ICC of 0.933. The mean number of dislocations per patient was higher in patients who had an ISIS of ≥6 (25 vs 14; P = .05). Patients who underwent more complex arthroscopic procedures such as Hill-Sachs remplissage or open Latarjet had higher preoperative ISIS outcomes, with a mean score of 4.8 versus 3.4, respectively (P = .002). There was no correlation between the ISIS and the quality-of-life questionnaires, with Pearson correlations all >0.05 (WOSI = 0.39; QuickDASH = 0.97; Walch-Duplay = 0.08). CONCLUSION: Our results show that the ISIS is reliable when used in a multicenter study with anterior traumatic instability populations. There was no correlation between the ISIS and the quality-of-life questionnaires, but surgical decisions reflected its increased use.
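The intraclass correlation coefficient reported here for interobserver agreement can be computed from a two-way ANOVA decomposition of the patients-by-raters score table. Below is a compact, generic ICC(2,1) sketch (two-way random effects, absolute agreement, single rating) with fabricated scores; it is not the study's statistical code and the ISIS values are invented:

    import numpy as np

    def icc2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.

        ratings : array of shape (n_subjects, n_raters)
        """
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)   # per-subject means
        col_means = ratings.mean(axis=0)   # per-rater means

        msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects mean square
        msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-raters mean square
        sse = np.sum((ratings - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
        mse = sse / ((n - 1) * (k - 1))                        # residual mean square

        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Fabricated ISIS scores (0-10) from two evaluators for six patients.
    scores = [[4, 5], [7, 7], [2, 3], [9, 8], [6, 6], [3, 3]]
    print(f"ICC(2,1) = {icc2_1(scores):.3f}")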
Abstract:
Introduction: Responses to external stimuli are typically investigated by averaging peri-stimulus electroencephalography (EEG) epochs in order to derive event-related potentials (ERPs) across the electrode montage, under the assumption that signals that are related to the external stimulus are fixed in time across trials. We demonstrate the applicability of a single-trial model based on patterns of scalp topographies (De Lucia et al, 2007) that can be used for ERP analysis at the single-subject level. The model is able to classify new trials (or groups of trials) with minimal a priori hypotheses, using information derived from a training dataset. The features used for the classification (the topography of responses and their latency) can be neurophysiologically interpreted, because a difference in scalp topography indicates a different configuration of brain generators. An above chance classification accuracy on test datasets implicitly demonstrates the suitability of this model for EEG data. Methods: The data analyzed in this study were acquired from two separate visual evoked potential (VEP) experiments. The first entailed passive presentation of checkerboard stimuli to each of the four visual quadrants (hereafter, "Checkerboard Experiment") (Plomp et al, submitted). The second entailed active discrimination of novel versus repeated line drawings of common objects (hereafter, "Priming Experiment") (Murray et al, 2004). Four subjects per experiment were analyzed, using approx. 200 trials per experimental condition. These trials were randomly separated in training (90%) and testing (10%) datasets in 10 independent shuffles. In order to perform the ERP analysis we estimated the statistical distribution of voltage topographies by a Mixture of Gaussians (MofGs), which reduces our original dataset to a small number of representative voltage topographies. We then evaluated statistically the degree of presence of these template maps across trials and whether and when this was different across experimental conditions. Based on these differences, single-trials or sets of a few single-trials were classified as belonging to one or the other experimental condition. Classification performance was assessed using the Receiver Operating Characteristic (ROC) curve. Results: For the Checkerboard Experiment contrasts entailed left vs. right visual field presentations for upper and lower quadrants, separately. The average posterior probabilities, indicating the presence of the computed template maps in time and across trials revealed significant differences starting at ~60-70 ms post-stimulus. The average ROC curve area across all four subjects was 0.80 and 0.85 for upper and lower quadrants, respectively and was in all cases significantly higher than chance (unpaired t-test, p<0.0001). In the Priming Experiment, we contrasted initial versus repeated presentations of visual object stimuli. Their posterior probabilities revealed significant differences, which started at 250ms post-stimulus onset. The classification accuracy rates with single-trial test data were at chance level. We therefore considered sub-averages based on five single trials. We found that for three out of four subjects' classification rates were significantly above chance level (unpaired t-test, p<0.0001). Conclusions: The main advantage of the present approach is that it is based on topographic features that are readily interpretable along neurophysiologic lines. 
As these maps were previously normalized by the overall strength of the field potential on the scalp, a change in their presence across trials and between conditions forcibly reflects a change in the underlying generator configurations. The temporal periods of statistical difference between conditions were estimated for each training dataset for ten shuffles of the data. Across the ten shuffles and in both experiments, we observed a high level of consistency in the temporal periods over which the two conditions differed. With this method we are able to analyze ERPs at the single-subject level providing a novel tool to compare normal electrophysiological responses versus single cases that cannot be considered part of any cohort of subjects. This aspect promises to have a strong impact on both basic and clinical research.
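The analysis described (summarising single-trial voltage topographies with a Mixture of Gaussians, estimating the per-trial presence of the resulting template maps, and scoring classification with the area under the ROC curve) can be mimicked with standard tools. The sketch below uses scikit-learn's GaussianMixture and roc_auc_score on synthetic data purely to illustrate the structure of such a pipeline; it is not the model of De Lucia et al. (2007), and the data, dimensions and scoring rule are invented for the example:

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Synthetic "topographies": n_trials x n_electrodes per condition, with a small mean shift.
    n_trials, n_electrodes = 200, 32
    cond_a = rng.normal(0.0, 1.0, (n_trials, n_electrodes))
    cond_b = rng.normal(0.3, 1.0, (n_trials, n_electrodes))
    X = np.vstack([cond_a, cond_b])
    y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

    # 90/10 split into training and test trials, as in the abstract.
    idx = rng.permutation(len(y))
    split = int(0.9 * len(y))
    train, test = idx[:split], idx[split:]

    # Summarise the topographies with a small set of Gaussian "template maps".
    gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(X[train])
    post_train = gmm.predict_proba(X[train])   # per-trial presence of each template map
    post_test = gmm.predict_proba(X[test])

    # Simple per-condition template profiles; score test trials by which profile they resemble.
    profile_a = post_train[y[train] == 0].mean(axis=0)
    profile_b = post_train[y[train] == 1].mean(axis=0)
    scores = post_test @ (profile_b - profile_a)

    print(f"ROC AUC on held-out trials: {roc_auc_score(y[test], scores):.2f}")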
Abstract:
The aim of this study was to describe the clinical and PSG characteristics of narcolepsy with cataplexy and its genetic predisposition, using the retrospective patient database of the European Narcolepsy Network (EU-NN). We analysed retrospective data from 1099 patients with narcolepsy diagnosed according to the International Classification of Sleep Disorders-2. Demographic and clinical characteristics, polysomnography and multiple sleep latency test data, hypocretin-1 levels, and genome-wide genotypes were available. We found a significantly lower age at sleepiness onset in women (men versus women: 23.74 ± 12.43 versus 21.49 ± 11.83 years, P = 0.003) and a longer diagnostic delay in women (men versus women: 13.82 ± 13.79 versus 15.62 ± 14.94 years, P = 0.044). The mean diagnostic delay was 14.63 ± 14.31 years, and a longer delay was associated with a higher body mass index. The best predictors of a short diagnostic delay were young age at diagnosis, cataplexy as the first symptom and a higher frequency of cataplexy attacks. The mean multiple sleep latency correlated negatively with the Epworth Sleepiness Scale (ESS) and with the number of sleep-onset rapid eye movement periods (SOREMPs), but none of the polysomnographic variables was associated with subjective or objective measures of sleepiness. Variant rs2859998 in the UBXN2B gene showed a strong association (P = 1.28E-07) with the age at onset of excessive daytime sleepiness, and rs12425451 near the transcription factor TEAD4 (P = 1.97E-07) with the age at onset of cataplexy. Altogether, our results indicate that the diagnostic delay remains extremely long, that age and gender substantially affect symptoms, and that a genetic predisposition affects the age at onset of symptoms.