978 results for Prediction algorithms
Abstract:
The SEARCH-RIO study prospectively investigated electrocardiogram (ECG)-derived variables in chronic Chagas disease (CCD) as predictors of cardiac death and new-onset ventricular tachycardia (VT). Cardiac arrhythmia is a major cause of death in CCD, and electrical markers may play a significant role in risk stratification. One hundred clinically stable outpatients with CCD were enrolled in this study. They initially underwent a 12-lead resting ECG, signal-averaged ECG, and 24-h ambulatory ECG. Abnormal Q-waves, filtered QRS duration, intraventricular electrical transients (IVET), 24-h standard deviation of normal RR intervals (SDNN), and VT were assessed. Echocardiograms assessed left ventricular ejection fraction. Predictors of cardiac death and new-onset VT were identified with a Cox proportional hazards model. During a mean follow-up of 95.3 months, 36 patients had adverse events: 22 new-onset VT (mean±SD, 18.4±4‰/year) and 20 deaths (26.4±1.8‰/year). In multivariate analysis, only Q-wave (hazard ratio, HR=6.7; P<0.001), VT (HR=5.3; P<0.001), SDNN<100 ms (HR=4.0; P=0.006), and IVET+ (HR=3.0; P=0.04) were independent predictors of the composite endpoint of cardiac death and new-onset VT. A prognostic score was developed by assigning points proportional to the beta coefficients and summing them: Q-wave=2; VT=2; SDNN<100 ms=1; IVET+=1. Receiver operating characteristic (ROC) curve analysis identified an optimal cutoff of >1. In 10,000 bootstraps, the C-statistic of this novel score was non-inferior to that of a previously validated (Rassi) score (0.89±0.03 and 0.80±0.05, respectively; test for non-inferiority: P<0.001). In CCD, surface ECG-derived variables are predictors of cardiac death and new-onset VT.
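As a hedged illustration of how the reported point score could be applied, the following minimal Python sketch encodes the weights and the >1 cutoff stated in the abstract; the function names and input format are illustrative, not taken from the SEARCH-RIO study.

```python
# Minimal sketch of the point-based risk score described above.
# Weights and the >1 cutoff come from the abstract; the function names
# and input format are illustrative, not from the SEARCH-RIO study.

def search_rio_score(q_wave: bool, vt: bool, sdnn_below_100ms: bool, ivet_positive: bool) -> int:
    """Sum the weighted ECG markers: Q-wave=2, VT=2, SDNN<100 ms=1, IVET+=1."""
    return 2 * q_wave + 2 * vt + 1 * sdnn_below_100ms + 1 * ivet_positive

def high_risk(score: int) -> bool:
    """ROC-optimized cutoff reported in the abstract: score > 1."""
    return score > 1

if __name__ == "__main__":
    # Example patient: abnormal Q-wave present, no VT, SDNN < 100 ms, IVET negative.
    s = search_rio_score(q_wave=True, vt=False, sdnn_below_100ms=True, ivet_positive=False)
    print(s, high_risk(s))  # -> 3 True
```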
Abstract:
Our objective is to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All ECG tracings were retrospectively analyzed using three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) the V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%) and the largest area under the ROC curve (0.925). In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%) and the largest area under the ROC curve (0.892), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%). All three published ECG algorithms are effective in differentiating the origin of OTVAs, with the V2 transition ratio being the most sensitive and the V2 R wave duration and R/S wave amplitude indices the most specific. Among all patients, the V2 R wave duration and R/S wave amplitude algorithm had the largest area under the ROC curve, whereas in patients with LV rotation the V2 transition ratio algorithm did.
Abstract:
This work describes a method to predict the solubility of essential oils in supercritical carbon dioxide. The method is based on the formulation proposed in 1979 by Asselineau, Bogdanic and Vidal. The Peng-Robinson and Soave-Redlich-Kwong cubic equations of state were used with the van der Waals mixing rules with two interaction parameters. The method was validated by calculating the solubility of orange essential oil in pressurized carbon dioxide. The solubility of orange essential oil in carbon dioxide calculated at 308.15 K for pressures of 50 to 70 bar varied from 1.7±0.1 to 3.6±0.1 mg/g. For the same range of conditions, the experimental solubility varied from 1.7±0.1 to 3.6±0.1 mg/g. Predicted values were not very sensitive to the initial oil composition.
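For readers unfamiliar with the modeling ingredients named above, the following sketch shows the Peng-Robinson pure-component parameters and the two-parameter van der Waals mixing rules in Python; the critical properties of the oil pseudo-component and the interaction parameters are placeholders, and the full phase-equilibrium (fugacity) calculation that actually yields solubility is not reproduced here.

```python
# Sketch of Peng-Robinson pure-component parameters and the two-parameter
# van der Waals mixing rules mentioned above. Numerical values (oil critical
# properties, k_ij, l_ij, composition) are placeholders for illustration only.
import numpy as np

R = 8.314  # J/(mol K)

def pr_pure(Tc, Pc, omega, T):
    """Peng-Robinson a*alpha and b for one component."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    return a, b

def vdw_mixing(x, a, b, k_ij, l_ij):
    """van der Waals one-fluid mixing rules with two interaction parameters."""
    x, a, b = map(np.asarray, (x, a, b))
    a_ij = np.sqrt(np.outer(a, a)) * (1.0 - k_ij)
    b_ij = 0.5 * (b[:, None] + b[None, :]) * (1.0 - l_ij)
    return x @ a_ij @ x, x @ b_ij @ x

# CO2 plus a limonene-like pseudo-component (placeholder properties).
T = 308.15  # K
a1, b1 = pr_pure(Tc=304.13, Pc=7.377e6, omega=0.224, T=T)   # CO2
a2, b2 = pr_pure(Tc=660.0,  Pc=2.75e6,  omega=0.31,  T=T)   # placeholder oil component
k = np.array([[0.0, 0.10], [0.10, 0.0]])   # placeholder k_ij
l = np.array([[0.0, 0.02], [0.02, 0.0]])   # placeholder l_ij
a_mix, b_mix = vdw_mixing([0.99, 0.01], [a1, a2], [b1, b2], k, l)
print(a_mix, b_mix)
```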
Abstract:
Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that are scalable to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper, and embedded algorithms. The examined machine learning algorithms were demonstrated to be not only effective at predicting the disease phenotypes, but also efficient, through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was extended to run on parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
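The point about nested cross-validation can be made concrete with a generic scikit-learn sketch; the data, feature filter, and classifier below are placeholders, not the methods or data sets of the thesis.

```python
# Generic sketch of nested cross-validation for a feature-selection +
# classification pipeline, illustrating the validation point made above.
# Data, features, and model choices are placeholders, not those of the thesis.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=300, n_features=500, n_informative=10, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),           # filter-style feature selection
    ("clf", LogisticRegression(max_iter=1000)),
])
param_grid = {"select__k": [10, 50, 100]}

# The inner loop tunes the number of selected features; the outer loop
# estimates generalization performance on data never used for tuning.
inner = GridSearchCV(pipe, param_grid, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print("Nested CV accuracy: %.3f +/- %.3f" % (outer_scores.mean(), outer_scores.std()))
```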
Abstract:
Many industrial applications need object recognition and tracking capabilities. The algorithms developed for these purposes are computationally expensive. Yet, real-time performance, high accuracy, and low power consumption are essential measures of such systems. When all these requirements are combined, hardware acceleration of these algorithms becomes a feasible solution. The purpose of this study is to analyze the current state of these hardware acceleration solutions: which algorithms have been implemented in hardware, and what modifications have been made to adapt these algorithms to hardware.
Abstract:
Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification surveys.
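As an illustration of the family of algorithms being surveyed, the following sketch implements vertex clustering, one of the simplest mesh simplification schemes; the mesh layout (numpy vertex and face arrays) and the cell size are assumptions made for the example, not the paper's test data.

```python
# Minimal sketch of vertex-clustering simplification, one of the simplest
# mesh simplification schemes. Vertices are an (N, 3) float array, faces an
# (M, 3) index array; these conventions and the cell size are illustrative.
import numpy as np

def vertex_cluster_simplify(vertices, faces, cell_size):
    """Snap vertices to a uniform grid, merge vertices sharing a cell,
    and drop faces that become degenerate."""
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # Map each occupied cell to one representative vertex (the cell mean).
    _, first_idx, inverse = np.unique(cells, axis=0, return_index=True, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse)
    new_vertices = np.zeros((first_idx.size, 3))
    for dim in range(3):
        new_vertices[:, dim] = np.bincount(inverse, weights=vertices[:, dim]) / counts
    new_faces = inverse[faces]
    # Remove faces whose corners collapsed into fewer than three distinct vertices.
    keep = (new_faces[:, 0] != new_faces[:, 1]) & \
           (new_faces[:, 1] != new_faces[:, 2]) & \
           (new_faces[:, 0] != new_faces[:, 2])
    return new_vertices, new_faces[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    v = rng.random((1000, 3))                      # toy vertex cloud
    f = rng.integers(0, 1000, size=(2000, 3))      # toy face indices
    sv, sf = vertex_cluster_simplify(v, f, cell_size=0.2)
    print(len(v), "->", len(sv), "vertices;", len(f), "->", len(sf), "faces")
```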
Abstract:
Solid mixtures for refreshments are fully integrated into Brazilian consumers' daily routine because of their quick preparation, high yield, and reasonable price, considerably lower than that of ready-to-drink products or products for immediate consumption, which makes them economically more accessible to low-income populations. Within this context, the aim of this work was to evaluate the physicochemical and mineral composition, as well as the hygroscopic behavior, of four different brands of solid mixture for mango refreshment. The BET, GAB, Oswin, and Henderson mathematical models were fitted to the experimental adsorption isotherm data. Results from the physicochemical evaluation showed that the solid mixtures for refreshments are considerable sources of ascorbic acid and reducing sugar and, regarding minerals, significant sources of calcium, sodium, and potassium. It was also verified that the solid mixtures for refreshments of the four studied brands are highly hygroscopic.
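For concreteness, the GAB model named above can be fitted to adsorption data with a few lines of Python; the water-activity and moisture values below are placeholders, not the study's measurements.

```python
# Sketch of fitting the GAB sorption isotherm named above to
# moisture-adsorption data. The data points are placeholders, not the
# study's measurements; parameter names follow the usual GAB notation.
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Xm, C, K):
    """GAB model: equilibrium moisture content as a function of water activity."""
    return Xm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

aw = np.array([0.11, 0.33, 0.43, 0.57, 0.75, 0.84])   # water activity (placeholder)
X  = np.array([0.02, 0.05, 0.07, 0.10, 0.17, 0.25])   # g water / g solids (placeholder)

params, _ = curve_fit(gab, aw, X, p0=[0.05, 10.0, 0.8], maxfev=10000)
Xm, C, K = params
print(f"Xm={Xm:.3f}, C={C:.2f}, K={K:.3f}")
```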
Abstract:
In this study, the effects of hot-air drying conditions on the color, water holding capacity, and total phenolic content of dried apple were investigated using an artificial neural network as an intelligent modeling system. A genetic algorithm was then used to optimize the drying conditions. Apples were dried at three temperatures (40, 60, and 80 °C) and three air flow rates (0.5, 1, and 1.5 m/s). Using leave-one-out cross-validation, simulated and experimental data were in good agreement, with an error < 2.4%. Optimal values of the quality index were found at 62.9 °C and 1.0 m/s using the genetic algorithm.
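As a hedged sketch of the optimization step, the following real-valued genetic algorithm searches the reported temperature and airflow ranges; since the trained neural network is not available here, a made-up surrogate objective stands in for the ANN-predicted quality index.

```python
# Minimal sketch of a real-valued genetic algorithm searching drying temperature
# (40-80 degC) and air flow rate (0.5-1.5 m/s), as in the study design.
# `surrogate_quality` is a made-up stand-in objective, not the authors' ANN.
import numpy as np

rng = np.random.default_rng(1)
BOUNDS = np.array([[40.0, 80.0], [0.5, 1.5]])  # (temperature, air flow)

def surrogate_quality(x):
    """Placeholder for the ANN-predicted quality index (higher is better)."""
    t, v = x
    return -((t - 63.0) / 20.0) ** 2 - ((v - 1.0) / 0.5) ** 2

def genetic_algorithm(pop_size=30, generations=60, mutation_sd=0.1):
    span = BOUNDS[:, 1] - BOUNDS[:, 0]
    pop = BOUNDS[:, 0] + rng.random((pop_size, 2)) * span
    for _ in range(generations):
        fitness = np.array([surrogate_quality(ind) for ind in pop])
        # Tournament selection of parents.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] > fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Blend crossover between consecutive parents, then Gaussian mutation.
        partners = np.roll(parents, 1, axis=0)
        w = rng.random((pop_size, 1))
        children = w * parents + (1 - w) * partners
        children += rng.normal(0.0, mutation_sd * span, size=children.shape)
        pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])
    return pop[np.argmax([surrogate_quality(ind) for ind in pop])]

print(genetic_algorithm())  # converges near the surrogate's optimum (~63 degC, ~1.0 m/s)
```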
Abstract:
The objective of this study was to predict, by means of an artificial neural network (ANN), a multilayer perceptron, the texture attributes of light cheese curds perceived by trained judges based on instrumental texture measurements. Inputs to the network were the instrumental texture measurements of light cheese curd (imitative and fundamental parameters). Output variables were the sensory attributes consistency and spreadability. Nine light cheese curd formulations composed of different combinations of fat and water were evaluated. The measurements obtained by the instrumental and sensory analyses of these formulations constituted the data set used for training and validation of the network. Network training was performed using a back-propagation algorithm. The network architecture selected was composed of 8-3-9-2 neurons in its layers, and it quickly and accurately predicted the sensory texture attributes studied, showing a high correlation between the predicted and experimental values for the validation data set and excellent generalization ability, with a validation RMSE of 0.0506.
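A minimal sketch of an MLP with the stated 8-3-9-2 topology, using scikit-learn on random placeholder data (the cheese curd measurements are not available here), could look like this:

```python
# Sketch of a multilayer perceptron with the 8-3-9-2 topology described above
# (8 instrumental inputs, hidden layers of 3 and 9 neurons, 2 sensory outputs).
# The data here are random placeholders, not the cheese curd measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((90, 8))                       # 8 instrumental texture measurements
Y = np.column_stack([X[:, :4].mean(axis=1),   # stand-in "consistency"
                     X[:, 4:].mean(axis=1)])  # stand-in "spreadability"

mlp = MLPRegressor(hidden_layer_sizes=(3, 9), max_iter=5000, random_state=0)
mlp.fit(X[:72], Y[:72])                       # train on the first 72 samples
pred = mlp.predict(X[72:])                    # validate on the remaining 18
rmse = np.sqrt(np.mean((pred - Y[72:]) ** 2))
print(f"validation RMSE: {rmse:.4f}")
```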
Abstract:
Most applications of airborne laser scanner data to forestry require that the point cloud be normalized, i.e., that each point represent height above the ground instead of elevation. To normalize the point cloud, a digital terrain model (DTM), derived from the ground returns in the point cloud, is employed. Unfortunately, extracting accurate DTMs from airborne laser scanner data is a challenging task, especially in tropical forests where the canopy is normally very thick (partially closed), leading to a situation in which only a limited number of laser pulses reach the ground. Therefore, robust algorithms for extracting accurate DTMs in low-ground-point-density situations are needed in order to realize the full potential of airborne laser scanner data for forestry. The objective of this thesis is to develop algorithms for processing airborne laser scanner data in order to: (1) extract DTMs in demanding forest conditions (complex terrain and a low number of ground points) for applications in forestry; (2) estimate canopy base height (CBH) for forest fire behavior modeling; and (3) assess the robustness of LiDAR-based high-resolution biomass estimation models against different field plot designs. Here, the aim is to find out whether field plot data gathered by professional foresters can be combined with field plot data gathered by professionally trained community foresters and used in LiDAR-based high-resolution biomass estimation modeling without affecting prediction performance. The question of interest in this case is whether or not local forest communities can achieve the level of technical proficiency required for accurate forest monitoring. The algorithms for extracting DTMs from LiDAR point clouds presented in this thesis address the challenges of extracting DTMs in low-ground-point situations and in complex terrain, while the algorithm for CBH estimation addresses the challenge of variations in the distribution of points in the LiDAR point cloud caused by factors such as tree species and the season of data acquisition. These algorithms are adaptive (with respect to point cloud characteristics) and exhibit a high degree of tolerance to variations in the density and distribution of points in the LiDAR point cloud. Comparison with existing DTM extraction algorithms showed that the algorithms proposed in this thesis performed better with respect to the accuracy of estimating tree heights from airborne laser scanner data. On the other hand, the proposed DTM extraction algorithms, being mostly based on trend surface interpolation, cannot retain small terrain details (e.g., bumps, small hills, and depressions). Therefore, the DTMs generated by these algorithms are only suitable for forestry applications where the primary objective is to estimate tree heights from normalized airborne laser scanner data. The algorithm for estimating CBH proposed in this thesis is based on the idea of a moving voxel, in which gaps (openings in the canopy) that act as fuel breaks are located and their height is estimated. Test results showed a slight improvement in CBH estimation accuracy over existing CBH estimation methods, which are based on height percentiles in the airborne laser scanner data. Being based on the idea of a moving voxel, however, this algorithm has one main advantage over existing CBH estimation methods in the context of forest fire modeling: it has great potential for providing information about vertical fuel continuity. This information can be used to create vertical fuel continuity maps, which can provide more realistic information on the risk of crown fires than CBH alone.
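To make the normalization step concrete, the following sketch subtracts a DTM interpolated from ground returns from each point's elevation; the points and their ground/vegetation classification are random placeholders, and the thesis's DTM-extraction algorithms are not reproduced.

```python
# Minimal sketch of the normalization step described above: each point's
# elevation is converted to height above ground by subtracting a DTM value
# interpolated from classified ground returns. Points and classification are
# random placeholders; the thesis's DTM-extraction algorithms are not shown.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.random((5000, 2)) * 100.0                 # point coordinates (m)
ground_z = 0.05 * xy[:, 0] + 2.0                   # gently sloping "terrain"
canopy = rng.random(5000) < 0.9                    # 90 % of returns come from vegetation
z = ground_z + np.where(canopy, rng.random(5000) * 25.0, 0.0)

# DTM from the (sparse) ground returns, interpolated at every point location.
dtm_at_points = griddata(xy[~canopy], z[~canopy], xy, method="linear")
height_above_ground = z - dtm_at_points            # normalized point cloud

print(np.nanmax(height_above_ground))              # roughly the tallest canopy height
```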
Abstract:
The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem, the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure in different areas. In the second problem, the input consists of a set of sites where water quality observations have been made and the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may also be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted for serial operation or for a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
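The geometric core of the fetch length problem can be illustrated with a straightforward ray/segment intersection in Python; the shoreline segments below are placeholders, and none of the thesis's efficient or GPU-parallel algorithms are reproduced.

```python
# Minimal sketch of the core geometric step in the fetch length problem:
# cast a ray from a study point in a given direction and take the distance to the
# nearest intersected shoreline segment. The efficient/parallel algorithms of the
# thesis are not reproduced; the segments here are a placeholder shoreline.
import math

def fetch_distance(point, angle_rad, segments):
    """Distance from `point` along direction `angle_rad` to the nearest segment,
    or math.inf if no shoreline is hit. Segments are ((x1, y1), (x2, y2)) pairs."""
    px, py = point
    dx, dy = math.cos(angle_rad), math.sin(angle_rad)
    best = math.inf
    for (ax, ay), (bx, by) in segments:
        sx, sy = bx - ax, by - ay
        denom = dx * sy - dy * sx                  # cross(ray dir, segment dir)
        if abs(denom) < 1e-12:                     # parallel: no unique intersection
            continue
        qx, qy = ax - px, ay - py
        t = (qx * sy - qy * sx) / denom            # distance along the ray
        u = (qx * dy - qy * dx) / denom            # position along the segment
        if t >= 0.0 and 0.0 <= u <= 1.0:
            best = min(best, t)
    return best

shore = [((10, -5), (10, 5)), ((-3, 8), (12, 8))]  # placeholder shoreline segments
print(fetch_distance((0, 0), 0.0, shore))          # due east -> hits x = 10 at distance 10
```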
Abstract:
Very preterm birth is a risk factor for brain injury and abnormal neurodevelopment. While the incidence of cerebral palsy has decreased due to advances in perinatal and neonatal care, the rate of less severe neuromotor problems remains high in very prematurely born children. Neonatal brain imaging can aid in identifying children for closer follow-up and in providing parents with information on developmental risks. This thesis aimed to study the predictive value of structural brain magnetic resonance imaging (MRI) at term age, serial neonatal cranial ultrasound (cUS), and structured neurological examinations during longitudinal follow-up for the neurodevelopment of very preterm born children up to 11 years of age, as part of the PIPARI Study (The Development and Functioning of Very Low Birth Weight Infants from Infancy to School Age). A further aim was to describe the associations between regional brain volumes and the long-term neuromotor profile. The prospective follow-up comprised the assessment of neurosensory development at 2 years of corrected age, cognitive development at 5 years of chronological age, and neuromotor development at 11 years of age. Neonatal brain imaging and structured neurological examinations predicted neurodevelopment at all age points. The combination of neurological examination and brain MRI or cUS improved the predictive value of neonatal brain imaging alone. Decreased brain volumes were associated with neuromotor performance. At the age of 11 years, the majority of the very preterm born children had age-appropriate neuromotor development and after-school sporting activities. Long-term clinical follow-up is recommended at least for all very preterm infants with major brain pathologies.
Abstract:
This study examined the effect of explicitly instructing students to use a repertoire of reading comprehension strategies. Specifically, this study examined whether providing students with a "predictive story-frame", which combined the use of prediction and summarization strategies, improved their reading comprehension relative to providing students with generic instruction on prediction and summarization. Results were examined in terms of instructional condition and reading ability. Students from 2 grade 4 classes participated in this study. The reading component of the Canadian Achievement Tests, Second Edition (CAT/2) was used to identify students as either "average or above average" or "below average" readers. Students received either strategic prediction and summarization instruction (story-frame) or generic prediction and summarization instruction (notepad). Students were provided with new but comparable stories for each session. For both groups, the researcher modelled the strategic tools and provided guided practice, independent practice, and independent reading sessions. Comprehension was measured with an immediate and a 1-week delayed comprehension test for each of the 4 stories. In addition, students participated in a 1-week delayed interview, where they were asked to retell the story and to answer questions about the central elements (character, setting, problem, solution, beginning, middle, and ending events) of each story. There were significant differences, with medium to large effect sizes, in comprehension and recall scores as a function of both instructional condition and reading ability. Students in the story-frame condition outperformed students in the notepad condition, and average to above average readers performed better than below average readers. Students in the story-frame condition outperformed students in the notepad condition on the comprehension tests and on the oral retellings when teacher modelling and guidance were present. In the cued recall sessions, students in the story-frame instructional condition recalled more correct information and generated fewer errors than students in the notepad condition. Average to above average readers performed better than below average readers across comprehension and retelling measures. The majority of students in both instructional conditions reported that they would use their strategic tool again.
Abstract:
This research attempted to address the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through either an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic context facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings in relation to the procedures employed in the teaching of mathematics to students with learning problems are discussed in detail.
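For readers unfamiliar with the procedure being taught, the explicit regrouping ("borrowing") algorithm can be rendered compactly as code; this is an illustrative paraphrase of the standard column algorithm, not the flow-chart materials used in the study.

```python
# Illustrative rendering of the explicit regrouping ("borrowing") algorithm the
# study refers to, written as code rather than the flow chart used with students.
def subtract_with_regrouping(minuend: int, subtrahend: int) -> int:
    """Column-by-column subtraction, borrowing from the next column when the
    top digit is smaller than the bottom digit. Assumes minuend >= subtrahend."""
    assert minuend >= subtrahend
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), "0")]
    result = []
    for i in range(len(top) - 1, -1, -1):          # rightmost column first
        if top[i] < bottom[i]:                     # need to regroup?
            top[i] += 10                           # borrow ten into this column
            top[i - 1] -= 1                        # take one from the column to the left
        result.append(top[i] - bottom[i])
    return int("".join(str(d) for d in reversed(result)))

print(subtract_with_regrouping(503, 187))  # 316, with two regroupings
```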
Abstract:
Personality traits and personal values are two important domains of individual differences. Traits are enduring and distinguishable patterns of behaviour, whereas values are societally taught, stable, individual preferences that guide behaviour in order to reach a specific end state. The purpose of the present study was to investigate the relations between self and peer reports within the domains of personality traits and values, to examine the correlations between values and traits, and to explore the amount of incremental validity of traits and values in predicting behaviour. Two hundred and fifty-two men and women from a university setting completed self and peer reports on three questionnaires. To assess personality traits, the HEXACO-PI (Lee & Ashton, 2004) was used to identify levels of 6 major dimensions of personality in participants. To assess values, the Schwartz Value Survey (Schwartz, 1992) was used to identify the importance each participant placed on each of Schwartz's 10 value types. To measure behaviour, a Behavior Scale created by Bardi and Schwartz (2003), consisting of items designed to measure the frequency of value-expressive behaviour, was used. As expected, correlations between self and peer reports for the personality scales were high, indicating that personality traits are easily observable to other people. Correlations between self and peer reports for the values and behaviour scales were only moderate, suggesting that some goals, and behaviours expressive of those goals, may not always be observable to others. Consistent with previous research, there were many strong correlations between traits and values. In addition to the similarities with past research, the present study found that the personality factor Honesty-Humility correlated strongly with the values scales (with five correlations exceeding .25). In the prediction of behaviour, it was found that both personality and values were able to account for significant and similar amounts of variance. Personality out-predicted values for some behaviours, but the opposite was true for other behaviours. Each domain provided incremental validity beyond the other. The implications of these findings, along with limitations and possibilities for future research, are also discussed.