904 results for Test data
Abstract:
A common interest in gene expression data analysis is to identify, from a large pool of candidate genes, the genes that present significant changes in expression levels between a treatment and a control biological condition. Usually, this is done using a test statistic and a cutoff value that separate the differentially from the non-differentially expressed genes. In this paper, we propose a Bayesian approach to identify differentially expressed genes by sequentially calculating credibility intervals from predictive densities, which are constructed using the sampled mean treatment effect from all genes under study, excluding the treatment effect of genes previously identified as showing statistical evidence of a difference. We compare our Bayesian approach with the standard ones based on the t-test and modified t-tests via a simulation study, using the small sample sizes that are common in gene expression data analysis. The results provide evidence that the proposed approach performs better than the standard ones, especially in cases with mean differences and with increases in treatment variance relative to control variance. We also apply the methodologies to a well-known publicly available data set on the bacterium Escherichia coli.
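As a rough illustration of the sequential scheme described in this abstract, the following Python snippet is a minimal sketch (not the authors' implementation): genes whose sampled mean treatment effect falls outside a credibility interval built from the remaining genes are flagged, the interval is recomputed without them, and the process repeats. The simulated effects, the 95% level and the gene counts are illustrative assumptions.

```python
# Hedged sketch: sequential selection of differentially expressed genes using
# credibility intervals built from the empirical distribution of per-gene mean
# treatment effects. All numbers below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_genes = 1000
effects = rng.normal(0.0, 1.0, n_genes)      # most genes: no differential expression
effects[:20] += rng.choice([-4.0, 4.0], 20)  # a few genes with a real treatment effect

selected = np.zeros(n_genes, dtype=bool)
while True:
    # Predictive density approximated by the effects of the remaining (unselected) genes.
    pool = effects[~selected]
    lo, hi = np.quantile(pool, [0.025, 0.975])   # 95% credibility interval (assumed level)
    newly = (~selected) & ((effects < lo) | (effects > hi))
    if not newly.any():
        break
    selected |= newly                            # exclude flagged genes and recompute

print(f"{selected.sum()} genes flagged as differentially expressed")
```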
Abstract:
Water pollution caused by toxic cyanobacteria is a worldwide problem that increases with eutrophication. Due to its biological significance, genotoxicity should be a focus of pollution biomonitoring, owing to the increasing complexity of the toxicological environment to which organisms are exposed. Cyanobacteria produce a large number of bioactive compounds, most of which lack toxicological data. Microcystins comprise a class of potent cyclic heptapeptide toxins produced mainly by Microcystis aeruginosa. Other natural products can also be synthesized by cyanobacteria, such as the protease inhibitor aeruginosin. The hepatotoxicity of microcystins has been well documented, but information on the genotoxic effects of aeruginosins is relatively scarce. In this study, the genotoxicity and ecotoxicity of methanolic extracts from two strains of M. aeruginosa were evaluated: NPLJ-4, containing high levels of microcystin, and NPCD-1, with high levels of aeruginosin. Four endpoints were assessed using plant assays in Allium cepa: rootlet growth inhibition, chromosomal aberrations, mitotic divisions, and micronuclei. The microcystin content of M. aeruginosa NPLJ-4 was confirmed through ELISA, while M. aeruginosa NPCD-1 did not produce microcystins. The extracts of M. aeruginosa NPLJ-4 were diluted to 0.01, 0.1, 1 and 10 ppb of microcystins; the same procedure was used to dilute M. aeruginosa NPCD-1, used as a parameter for comparison, and water was used as the control. The results demonstrated that both strains inhibited root growth and induced rootlet abnormalities. The strain rich in aeruginosin was more genotoxic, altering the cell cycle, while microcystins were more mitogenic. These findings indicate the need for future research on non-microcystin-producing cyanobacterial strains. Understanding the genotoxicity of M. aeruginosa extracts can help determine a possible link between contamination by aquatic cyanobacteria and the high risk of primary liver cancer found in some areas, as well as establish water-level limits for compounds not yet studied. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Abstract Background The aim of the present study was to investigate the relationship between speed during a maximum exercise test (ET) and oxygen consumption (VO2) in control and STZ-diabetic rats, in order to provide a useful method to determine exercise capacity and prescription in research involving STZ-diabetic rats. Methods Male Wistar rats were divided into two groups: control (CG, n = 10) and diabetic (DG, n = 8). The animals were submitted to an ET on a treadmill with simultaneous gas analysis through an open respirometry system. ET and VO2 were assessed 60 days after diabetes induction (STZ, 50 mg/kg). Results Maximum VO2 was reduced in STZ-diabetic rats (72.5 ± 1 mL/kg/min) compared to CG rats (81.1 ± 1 mL/kg/min). There were positive correlations between ET speed and VO2 (r = 0.87 for CG and r = 0.8 for DG), as well as between ET speed and VO2 reserve (r = 0.77 for CG and r = 0.7 for DG). Positive correlations were also obtained between measured VO2 and VO2 values predicted (r = 0.81 for CG and r = 0.75 for DG) by the linear regression equations for CG (VO2 = 1.54 * ET speed + 52.34) and DG (VO2 = 1.16 * ET speed + 51.99). Moreover, we observed that 60% of ET speed corresponded to 72 and 75% of VO2 reserve for CG and DG, respectively. The maximum ET speed was also correlated with maximum VO2 for both groups (CG: r = 0.7 and DG: r = 0.7). Conclusion These results suggest that: a) VO2 and VO2 reserve can be estimated using linear regression equations obtained from correlations with ET speed for each studied group; b) exercise training can be prescribed based on the ET in control and STZ-diabetic rats; c) physical capacity can be determined by the ET. Therefore, the ET, which involves a relatively simple methodology and low cost, can be used as an indicator of cardiorespiratory capacity in future studies that investigate the physiological effects of acute or chronic exercise in control and STZ-diabetic male rats.
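The two regression equations quoted in the abstract can be applied directly; the small sketch below simply wraps them in a function. The example speed value and the speed unit are assumptions, since the abstract does not state them explicitly.

```python
# Minimal sketch of the VO2 prediction equations reported in the abstract
# (VO2 in mL/kg/min; the exercise-test speed unit is an assumption).
def predicted_vo2(et_speed: float, group: str) -> float:
    """Predict VO2 from maximum exercise-test speed for control (CG) or diabetic (DG) rats."""
    if group == "CG":
        return 1.54 * et_speed + 52.34
    if group == "DG":
        return 1.16 * et_speed + 51.99
    raise ValueError("group must be 'CG' or 'DG'")

print(predicted_vo2(18.0, "CG"))  # illustrative speed value only
```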
Abstract:
Abstract Background Accurate malaria diagnosis is mandatory for the treatment and management of severe cases. Moreover, individuals with asymptomatic malaria are not usually screened by health care facilities, which further complicates disease control efforts. The present study compared the performances of a malaria rapid diagnostic test (RDT), the thick blood smear method and nested PCR for the diagnosis of symptomatic malaria in the Brazilian Amazon. In addition, an innovative computational approach was tested for the diagnosis of asymptomatic malaria. Methods The study was divided into two parts. For the first part, passive case detection was performed in 311 individuals with malaria-related symptoms from a recently urbanized community in the Brazilian Amazon. A cross-sectional investigation compared the diagnostic performance of the RDT Optimal-IT, nested PCR and light microscopy. The second part of the study involved active case detection of asymptomatic malaria in 380 individuals from riverine communities in Rondônia, Brazil. The performances of microscopy, nested PCR and an expert computational system based on artificial neural networks (MalDANN) using epidemiological data were compared. Results Nested PCR was shown to be the gold standard for the diagnosis of both symptomatic and asymptomatic malaria because it detected the largest number of cases and presented the maximum specificity. Surprisingly, the RDT was superior to microscopy in the diagnosis of cases with low parasitaemia. Nevertheless, the RDT could not discriminate the Plasmodium species in 12 cases of mixed infections (Plasmodium vivax + Plasmodium falciparum). Moreover, microscopy presented low performance in the detection of asymptomatic cases (61.25% of correct diagnoses). The MalDANN system using epidemiological data performed worse than light microscopy (56% of correct diagnoses). However, when information regarding plasma levels of interleukin-10 and interferon-gamma was included as input, the MalDANN performance increased considerably (80% of correct diagnoses). Conclusions An RDT for malaria diagnosis may find a promising use in the Brazilian Amazon as part of a rational diagnostic approach. Despite the low performance of the MalDANN test using solely epidemiological data, an approach based on neural networks may be feasible in cases where simpler methods for discriminating individuals below and above threshold cytokine levels are available.
Abstract:
Abstract Background In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method The cost-effectiveness of the OptiMal® test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon was from the start of fever until the diagnostic results were provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results In the base case scenario, considering 92% and 95% sensitivity for thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion Microscopy is more cost-effective than OptiMal® in these remote areas if the high accuracy of microscopy is maintained in the field. Decisions regarding the use of rapid tests for the diagnosis of malaria in these areas depend on current microscopy accuracy in the field.
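The incremental cost-effectiveness comparison described above reduces to a simple ratio; the sketch below shows that calculation. The cost and effectiveness figures are placeholder assumptions, not the study's data; only the US$549.9-per-case result is quoted from the abstract.

```python
# Hedged sketch of an incremental cost-effectiveness ratio (ICER) calculation,
# comparing microscopy with an RDT. All input numbers are illustrative assumptions.
def icer(cost_a: float, eff_a: float, cost_b: float, eff_b: float) -> float:
    """Incremental cost per additional adequately diagnosed case of strategy A over B."""
    return (cost_a - cost_b) / (eff_a - eff_b)

# Illustrative numbers: microscopy (A) vs. RDT (B), costs in 2006 US$ per patient,
# effectiveness as the proportion of adequately diagnosed cases.
print(icer(cost_a=12.0, eff_a=0.95, cost_b=8.0, eff_b=0.90))
```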
Abstract:
The objective of this study was to evaluate the push-out bond strength of fiberglass posts bonded with five ionomer cements. The interface between cement and dentin was also inspected by means of SEM. Fifty human canines were chosen after a rigorous screening process, endodontically treated and divided randomly into five groups (n = 3) according to the cement tested: Group I – Ionoseal (VOCO), Group II – Fuji I (GC), Group III – Fuji II Improved (GC), Group IV – Rely X Luting 2 (3M ESPE), Group V – Ketac Cem (3M ESPE). The post space was prepared to receive a fiberglass post, which was tried in before the cementation process. No dentin or post surface pretreatment was carried out. After post bonding, all roots were cross-sectioned to obtain three thin slices (1 mm) from three specific regions of the tooth (cervical, middle and apical). A universal testing machine was used to carry out the push-out test with the cross-head speed set to 0.5 mm/min. All failed specimens were observed under an optical microscope to identify the failure mode. Representative specimens from each group were inspected under SEM. The data were analyzed by the Kolmogorov-Smirnov and Levene's tests, two-way ANOVA, and Tukey's post hoc test at a significance level of 5%. The images obtained were compared to determine the most frequent failure types at the different levels. SEM inspection showed that all cements filled the space between post and dentin; however, some imperfections such as bubbles and voids were noticed in all groups to some extent. The push-out bond strength results showed that the Ketac Cem cement presented significantly higher values than Ionoseal (P = 0.02). There were no statistically significant differences among the other cements.
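The statistical workflow named in the abstract (normality and variance-homogeneity checks, two-way ANOVA, Tukey's post hoc test) can be sketched as below. The data frame, group labels and effect sizes are synthetic assumptions, not the study's measurements.

```python
# Hedged sketch of the analysis pipeline: Kolmogorov-Smirnov and Levene's tests,
# two-way ANOVA (cement x root level) and Tukey's post hoc comparison.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in data: one push-out value (MPa) per slice, labelled by cement and level.
rng = np.random.default_rng(0)
cements = ["Ionoseal", "Fuji I", "Fuji II", "RelyX", "KetacCem"]
levels = ["cervical", "medium", "apical"]
rows = [(c, l, rng.normal(8 + 2 * (c == "KetacCem"), 1.5))
        for c in cements for l in levels for _ in range(3)]
df = pd.DataFrame(rows, columns=["cement", "level", "strength"])

# Normality (KS against a fitted normal) and homogeneity of variances (Levene)
x = df["strength"]
print(stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))))
print(stats.levene(*[g["strength"].values for _, g in df.groupby("cement")]))

# Two-way ANOVA followed by Tukey's post hoc test on the cement factor
model = smf.ols("strength ~ C(cement) * C(level)", data=df).fit()
print(anova_lm(model, typ=2))
print(pairwise_tukeyhsd(df["strength"], df["cement"], alpha=0.05))
```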
Abstract:
Abstract Background Obstructive sleep apnea (OSA) is a respiratory disease characterized by the collapse of the extrathoracic airway and has important social implications related to accidents and cardiovascular risk. The main objective of the present study was to investigate whether the drop in expiratory flow and the volume expired in 0.2 s during the application of negative expiratory pressure (NEP) are associated with the presence and severity of OSA in a population of professional interstate bus drivers who travel medium and long distances. Methods/Design An observational, analytic study will be carried out involving adult male subjects of an interstate bus company. Those who agree to participate will undergo detailed history-taking and a physical examination involving determination of blood pressure, anthropometric data, circumference measurements (hips, waist and neck), tonsil assessment and the Mallampati index. Moreover, specific questionnaires addressing sleep apnea and excessive daytime sleepiness will be administered. Data acquisition will be completely anonymous. Following the medical examination, the participants will perform spirometry, the NEP test and standard overnight polysomnography. The NEP test is performed through the administration of negative pressure at the mouth during expiration. It is a practical test performed while awake and requires little cooperation from the subject. In the absence of expiratory flow limitation, the increase in the pressure gradient between the alveoli and the open upper airway caused by NEP results in an increase in expiratory flow. Discussion Despite the abundance of scientific evidence, OSA is still underdiagnosed in the general population. In addition, diagnostic procedures are expensive, and predictive criteria are still unsatisfactory. Because increased upper airway collapsibility is one of the main determinants of OSA, the response to the application of NEP could be a predictor of this disorder. With the enrollment of patients under this study protocol, the expectation is to identify predictive NEP values for different degrees of OSA in order to contribute toward an early diagnosis of this condition and reduce its impact and complications among commercial interstate bus drivers.
Abstract:
The transposition of the São Francisco River is considered one of the greatest engineering works ever undertaken in Brazil, since it will cross an extensive agricultural region of continental dimensions and involves environmental impacts, water, soil, irrigation, water pricing and other multidisciplinary themes. Taking into account its importance, this subject was incorporated into a course at UFSCar (Federal University of São Carlos, Brazil) named "Pollution and Environmental Impacts". A strong reaction against the project was noted, even before its presentation. To allow a critical analysis, the first objective was to compile the main technical data and environmental impacts. The second objective was to detect the three most important aspects causing this reaction, which led to the following conclusions: the assumption that the volume of water to be transferred was much greater than that actually proposed in the project; lack of knowledge about similar projects already carried out in Brazil; and the idea that the artificial canal to be built would be much broader than that proposed by the project. The participants' opinion about the "volume to be transferred" was surveyed quantitatively four times: twice with undergraduate students, once with graduate students and once with the outside community. The average estimate was 14 times larger than the volume proposed in the project, a difference that was significant according to a t-test, as illustrated in the sketch below. It was concluded that the reaction to the water transfer project is due in part to ignorance combined with preconceived ideas that tend to overestimate the magnitude of the environmental impacts.
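A minimal sketch of the kind of test mentioned in the abstract: a one-sample t-test on participants' estimates of the transferred volume, expressed as ratios to the volume proposed in the project, against the reference value of 1. The ratios below are made-up illustrative values, not the survey data.

```python
# Hedged sketch of a one-sample t-test on estimate/proposed-volume ratios.
import numpy as np
from scipy import stats

estimated_over_proposed = np.array([5.0, 20.0, 8.0, 30.0, 12.0, 9.0])  # hypothetical ratios
t_stat, p_value = stats.ttest_1samp(estimated_over_proposed, popmean=1.0)
print(f"mean ratio = {estimated_over_proposed.mean():.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```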
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)
Abstract:
In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the groups of individuals. Even though methods to analyse these data are now well developed and close to reaching a standard organization (through the effort of international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which there is no compelling statistical method to answer it. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array to the quality controls, ending with the preprocessing steps that will be used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing their main problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000 based on its recurrence as differentially expressed in the 1,000 lists. The performance of MultiSAM was compared to the performance of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4]. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role. In some cases similarities can give useful and sometimes even more important information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second one. In this work we show that the Relevance Vector Machine (RVM) [2] could be a possible solution to the limitations of standard supervised classification. In fact, RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumour-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal was to find a model able to separate G1 from G3, and then to evaluate the third class, G2, as a test set to obtain the probability that samples of G2 belong to class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
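The MultiSAM resampling-and-scoring idea described above can be sketched as follows. SAM itself is not reimplemented here; a Welch t-test stands in for the per-iteration differential-expression call, and all array shapes, thresholds and effect sizes are illustrative assumptions.

```python
# Hedged sketch of the MultiSAM scheme: compare the less populated class (LPC)
# with 1,000 equal-sized random subsamples of the more populated class (MPC)
# and score each probe by how often it is called differentially expressed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_probes = 500
lpc = rng.normal(0, 1, (n_probes, 8))     # less populated class: 8 samples
mpc = rng.normal(0, 1, (n_probes, 60))    # more populated class: 60 samples
mpc[:25] += 1.5                           # 25 probes truly differentially expressed

n_iter = 1000
score = np.zeros(n_probes, dtype=int)
for _ in range(n_iter):
    # draw a random subsample of the MPC with the same size as the LPC
    cols = rng.choice(mpc.shape[1], size=lpc.shape[1], replace=False)
    _, p = stats.ttest_ind(lpc, mpc[:, cols], axis=1, equal_var=False)
    score += (p < 0.01).astype(int)       # count how often each probe is called

print(f"{(score > 300).sum()} probes with score > 300 out of {n_iter} iterations")
```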
Abstract:
One of the problems in the analysis of nucleus-nucleus collisions is to obtain information on the value of the impact parameter b. This work consists in the application of pattern recognition techniques aimed at associating values of b with groups of events. To this end, a support vector machine (SVM) classifier is adopted to analyze multifragmentation reactions. This method allows the values of b to be traced back through a particular multidimensional analysis. The SVM classification consists of two main phases. In the first one, known as the training phase, the classifier learns to discriminate the events generated by two different models: Classical Molecular Dynamics (CMD) and Heavy-Ion Phase-Space Exploration (HIPSE), for the reaction 58Ni + 48Ca at 25 AMeV. In the second one, known as the test phase, what has been learned is tested on new events generated by the same models. These new results have been compared to the ones obtained through other techniques for backtracing the impact parameter. Our tests show that, following this approach, central and peripheral collisions for the CMD events are always better classified than with the other backtracing techniques. We have finally performed the SVM classification on the experimental data measured by the NUCL-EX collaboration with the CHIMERA apparatus for the same reaction.
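A minimal sketch of the train/test scheme described above, using scikit-learn's SVC as the classifier. The features, labels and class encoding are placeholder assumptions; the actual analysis uses multidimensional observables from CMD and HIPSE events.

```python
# Hedged sketch: SVM training phase on simulated events, test phase on held-out events.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per simulated event (e.g. multiplicities, transverse energies, ...);
# y: impact-parameter class (e.g. 0 = central, 1 = mid-peripheral, 2 = peripheral).
# Random placeholders stand in for model-generated events here.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = rng.integers(0, 3, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)                            # training phase
print("test accuracy:", clf.score(X_test, y_test))   # test phase on unseen events
```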
Abstract:
Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; however, even though it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is considered suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first one, known as the double difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain. We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimation of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For the two above-mentioned techniques, we have used the cross-correlation technique on digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests on the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, assumed as our reference) was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which we can assume a real closeness among the hypocenters, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of the cross-correlation did not substantially improve the precision of the manual picks. Probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the poor quality of the results given by the cross-correlation, it should be remarked that the events included in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not require long data-processing times, and therefore the user can immediately check the results. During a field survey, this feature makes possible a quasi-real-time check, allowing immediate optimization of the array geometry, if so suggested by the results at an early stage.
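The waveform cross-correlation step used to refine relative arrival times can be sketched as below: the time shift between two windowed traces is estimated from the peak of their cross-correlation. The sampling rate and the synthetic signals are illustrative assumptions, not the thesis data.

```python
# Hedged sketch: estimate the delay between two similar traces via cross-correlation.
import numpy as np

fs = 100.0                                  # sampling frequency (Hz), assumed
t = np.arange(0, 5, 1 / fs)
pulse = np.exp(-((t - 2.0) ** 2) / 0.01) * np.sin(2 * np.pi * 5 * t)

true_shift = 0.12                           # seconds; trace_b lags trace_a by this amount
trace_a = pulse + 0.05 * np.random.default_rng(0).normal(size=t.size)
trace_b = np.interp(t - true_shift, t, pulse) + 0.05 * np.random.default_rng(1).normal(size=t.size)

xcorr = np.correlate(trace_b - trace_b.mean(), trace_a - trace_a.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size) / fs  # lag axis in seconds
print(f"estimated delay: {lags[np.argmax(xcorr)]:.3f} s (true {true_shift} s)")
```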
Abstract:
The Time-Of-Flight (TOF) detector of ALICE is designed to identify charged particles produced in Pb-Pb collisions at the LHC in order to address the physics of strongly-interacting matter and the Quark-Gluon Plasma (QGP). The detector is based on the Multigap Resistive Plate Chamber (MRPC) technology, which guarantees the excellent performance required for a large time-of-flight array. The construction and installation of the apparatus at the experimental site have been completed and the detector is presently fully operational. All the steps leading to the construction of the TOF detector were accompanied by a set of quality assurance procedures to ensure high and uniform performance, and the detector was eventually commissioned with cosmic rays. This work aims at giving a detailed overview of the ALICE TOF detector, also focusing on the tests performed during the construction phase. The first data-taking experience and the first results obtained with cosmic rays during the commissioning phase are presented as well, and they confirm the readiness of the TOF detector for LHC collisions.
Abstract:
The Gaia space mission is a major project for the European astronomical community. Challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and eventually correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light-curve production and analysis.
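The basic aperture photometry step performed by such a pipeline can be sketched as follows: sum the flux inside a circular aperture and subtract the sky level estimated in a surrounding annulus. The aperture and annulus radii and the synthetic frame are illustrative assumptions, not the pipeline's actual parameters.

```python
# Hedged sketch of simple circular-aperture photometry with annulus sky subtraction.
import numpy as np

def aperture_photometry(image: np.ndarray, x0: float, y0: float,
                        r_ap: float = 5.0, r_in: float = 8.0, r_out: float = 12.0) -> float:
    """Background-subtracted flux of a source centred at (x0, y0), in counts."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(image[annulus])          # robust sky estimate
    return image[aperture].sum() - sky_per_pixel * aperture.sum()

# Tiny synthetic frame with a point source at (20, 20) on a flat sky of 100 counts/pixel
frame = np.full((40, 40), 100.0)
frame[20, 20] += 5000.0
print(aperture_photometry(frame, x0=20, y0=20))
```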
Abstract:
Background Decreased exercise capacity and reduced peak oxygen uptake are present in most patients affected by hypertrophic cardiomyopathy (HCM). In addition, an abnormal blood pressure response during a maximal exercise test has been associated with a high risk of sudden cardiac death in adult patients affected by HCM. Therefore the cardiopulmonary exercise test (CPET) has become an important part of the evaluation of HCM patients, but data on its role in patients with HCM in the pediatric age are quite limited. Methods and results Between 2004 and 2010, using CPET and echocardiography, we studied 68 children (mean age 13.9 ± 2 years) with HCM. The exercise test was completed by all the patients without adverse complications. The mean achieved VO2 max was 31.4 ± 8.3 mL/kg/min, which corresponded to 77.5 ± 16.9% of the predicted value. 51 patients (75%) reached a subnormal value of VO2 max. On univariate analysis, the achieved VO2 as a percentage of predicted and the peak exercise systolic blood pressure (BP) Z score were inversely associated with maximal left ventricular (LV) wall thickness and with the E/Ea ratio, and directly related to the Ea and Sa wave velocities. No association was found with the LV outflow tract gradient. During a mean follow-up of 2.16 ± 1.7 years, 9 patients reached the defined clinical end point of death, transplantation, implantable cardioverter defibrillator (ICD) shock, ICD implantation for secondary prevention, or myectomy. Patients with peak VO2 < 52% of predicted or with a peak systolic BP Z score < -5.8 had lower event-free survival at follow-up. Conclusions Exercise capacity is decreased in patients with HCM in the pediatric age, and global ventricular function seems to be the most important determinant of exercise capacity in these patients. CPET seems to play an important role in the prognostic stratification of children affected by HCM.