957 results for Computer Prediction Program
Abstract:
Resuscitation and stabilization are key issues in Intensive Care Burn Units, and early survival predictions help to decide the best clinical action during these phases. Current burn survival scores focus on clinical variables such as age or body surface area. However, the evolution of other parameters (e.g. diuresis or fluid balance) during the first days also carries valuable information. In this work we suggest a methodology and propose a Temporal Data Mining algorithm to estimate the survival condition from the patient's evolution. Experiments conducted on 480 patients show improved survival prediction.
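The abstract does not detail which temporal features are mined; purely as an illustrative sketch (variable names and values below are hypothetical, not from the paper), the kind of evolution feature such an algorithm could extract is a least-squares trend of a daily parameter:

```python
# Hypothetical sketch: a simple temporal trend feature (least-squares
# slope of daily fluid balance) of the kind a temporal data mining
# approach might feed to a survival classifier. The data are invented.

def trend_slope(values):
    """Least-squares slope of a series sampled at days 0, 1, 2, ..."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Daily fluid balance (mL) over the first five days for one patient.
fluid_balance = [2400, 1900, 1500, 1100, 800]
print(trend_slope(fluid_balance))  # negative slope: balance normalizing
```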
Abstract:
History has shown that projects move in and out of poor status over the life of the project. Predicting whether a project will finish on time based on its recent history in the contract status report could give project managers another tool for monitoring contract progress. In many instances, poor contract progress results in the loss of contract time and late completion of projects. This research evaluates combinations of work type, the point in time at which physical work begins, recent poor status, and contract bid amount as indicators of late project completion.
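As a hedged illustration of how one such indicator can be evaluated (the counts below are invented, not from this research), a relative-risk calculation compares late-completion rates for projects with and without recent poor status:

```python
def relative_risk(exposed_late, exposed_total, unexposed_late, unexposed_total):
    """Risk of late completion among projects with an indicator (e.g.
    recent poor status) relative to those without it."""
    return (exposed_late / exposed_total) / (unexposed_late / unexposed_total)

# Hypothetical counts: 30 of 60 recently-poor projects finished late,
# versus 20 of 140 projects without recent poor status.
print(round(relative_risk(30, 60, 20, 140), 2))  # → 3.5
```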
Abstract:
Currently, the SC Commission for the Blind offers no computer training for adults in the Older Blind Program. The Older Blind Program must look for outside partners to make this service viable again. This project proposes that the OB Program partner with regional senior and recreation centers to establish a community-based training program that is both effective and of minimal cost to the agency and to the partnering centers.
Abstract:
Modern scientific discoveries are driven by an insatiable demand for computational resources. High-Performance Computing (HPC) systems aggregate computing power to deliver considerably higher performance than a typical desktop computer, in order to solve large problems in science, engineering, or business. An HPC room in a datacenter is a complex, controlled environment hosting thousands of computing nodes that consume electrical power in the range of megawatts, virtually all of which is converted into heat. Although a datacenter contains sophisticated cooling systems, our studies provide quantitative evidence of thermal bottlenecks in real-life production workloads, showing significant spatial and temporal thermal and power heterogeneity. Minor thermal issues or anomalies can therefore start a chain of events that leads to an imbalance between the heat generated by the computing nodes and the heat removed by the cooling system, giving rise to thermal hazards. Although thermal anomalies are rare events, detecting or predicting them in time is vital to avoid damage to IT and facility equipment and outages of the datacenter, with severe societal and business losses. For this reason, automated approaches to detect thermal anomalies in datacenters have considerable potential. This thesis analyzes and characterizes the power and thermal behaviour of a Tier-0 datacenter (CINECA) during production and under abnormal thermal conditions. A Deep Learning (DL)-powered thermal hazard prediction framework is then proposed. The proposed models are validated against real thermal hazard events reported for the studied HPC cluster while in production. To the best of my knowledge, this thesis is the first empirical study of thermal anomaly detection and prediction techniques on a real large-scale HPC system.
For this thesis, I used a large-scale dataset: monitoring data from tens of thousands of sensors collected over around 24 months at a sampling interval of around 20 seconds.
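The thesis itself uses deep learning models; purely as a simpler illustrative baseline (readings and thresholds below are invented), a rolling z-score detector over a node temperature stream could look like:

```python
from collections import deque

def rolling_zscore_anomalies(samples, window=10, threshold=3.0):
    """Flag indices where a reading deviates from the trailing-window
    mean by more than `threshold` standard deviations.

    A simple statistical baseline for thermal anomaly detection; the
    thesis's DL models are not shown here.
    """
    buf = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(samples):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            std = var ** 0.5
            if std > 0 and abs(x - mean) / std > threshold:
                anomalies.append(i)
        buf.append(x)
    return anomalies

# Node inlet temperatures (°C) sampled every ~20 s; one sudden spike.
temps = [24.0, 24.1, 23.9, 24.2, 24.0, 24.1, 23.8, 24.0, 24.2, 24.1,
         24.0, 31.5, 24.1]
print(rolling_zscore_anomalies(temps))  # → [11]
```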
Abstract:
The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated as if they were natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among the many possible applications within the area of Big Code, the work presented in this research thesis focuses on two tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automating this task is desirable. To this end, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were found by manual inspection or by automatic static and dynamic analyzers. Now, this task can be automated using learning approaches that speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method level). The data and model architectures used are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to related work are discussed.
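As a toy illustration only (the thesis trains text- and image-based models, not this), counting language-specific tokens sketches the PLI task; the token lists here are assumptions:

```python
# Toy baseline for programming language identification: score a snippet
# by counting language-specific tokens. The hint lists and the scoring
# rule are illustrative only, not the thesis's learned models.

LANGUAGE_HINTS = {
    "python": ["def ", "import ", "self.", "elif "],
    "c": ["#include", "printf(", "->", "int main"],
    "java": ["public class", "System.out", "extends ", "import java"],
}

def identify_language(snippet):
    scores = {
        lang: sum(snippet.count(tok) for tok in toks)
        for lang, toks in LANGUAGE_HINTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(identify_language("def add(a, b):\n    return a + b"))  # → python
```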
Abstract:
New DNA-based predictive tests for physical characteristics and inference of ancestry are highly informative tools that are increasingly used in forensic genetic analysis. Two eye colour prediction models, a Bayesian classifier (Snipper) and a multinomial logistic regression (MLR) system for the Irisplex assay, have been described for the analysis of unadmixed European populations. Since multiple SNPs in combination contribute in varying degrees to eye colour predictability in Europeans, these predictive tests are likely to perform differently in admixed populations with European co-ancestry than in unadmixed Europeans. In this study we examined 99 individuals from two admixed South American populations, comparing eye colour with ancestry in order to reveal a direct correlation between light eye colour phenotypes and European co-ancestry in admixed individuals. Additionally, six eye colour prediction models, using varying numbers of SNPs and based on Snipper and MLR, were applied to the study populations. Furthermore, patterns of eye colour prediction were inferred for a set of publicly available admixed and globally distributed populations from the HGDP-CEPH panel and 1000 Genomes databases, with special emphasis on admixed American populations similar to the study samples.
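A minimal sketch of how an MLR-style predictor turns allele dosages into class probabilities; the coefficients, intercepts, and two-SNP setup below are invented for illustration and are not the published Irisplex parameters:

```python
import math

def predict_eye_colour(genotype_dosages, coef, intercepts):
    """Multinomial-logistic-style prediction: a linear score per colour
    class from allele dosages, then softmax to probabilities.
    All parameters here are hypothetical."""
    classes = list(intercepts)
    scores = {
        c: intercepts[c] + sum(b * x for b, x in zip(coef[c], genotype_dosages))
        for c in classes
    }
    m = max(scores.values())  # subtract max for numerical stability
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: exps[c] / z for c in classes}

# Made-up effect sizes for two SNP dosages (0, 1 or 2 minor alleles).
coef = {"blue": [1.8, 0.9], "intermediate": [0.4, 0.2], "brown": [0.0, 0.0]}
intercepts = {"blue": -2.0, "intermediate": -0.5, "brown": 0.0}

probs = predict_eye_colour([2, 1], coef, intercepts)
print(max(probs, key=probs.get))  # → blue
```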
Abstract:
Negative-ion mode electrospray ionization, ESI(-), Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was combined with Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. ESI(-)-FT-ICR mass spectra typically exhibit a resolving power of ca. 500,000 and a mass accuracy better than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, identified primarily as naphthenic acids, phenols, and carbazole-analog species. The TAN values for the samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate spectral interpretation, three variable selection methods were studied: variable importance in the projection (VIP), interval partial least squares (iPLS), and uninformative variable elimination (UVE). The UVE method proved the most appropriate for selecting important variables, reducing the number of variables to 183 and yielding a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables to their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
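As a crude stand-in for variable selection (deliberately not the UVE algorithm, which jackknifes PLS regression coefficients), ranking variables by absolute correlation with TAN conveys the general idea; the intensities below are made up:

```python
# Simplified variable ranking (NOT UVE): keep the m/z variables whose
# intensities correlate most strongly with TAN. Data are invented.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = sum((a - mx) ** 2 for a in x) ** 0.5
    dy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (dx * dy)

def select_variables(columns, y, keep=2):
    """columns: list of (name, intensities); return the `keep` names
    with the highest |correlation| against y."""
    ranked = sorted(columns, key=lambda kv: -abs(pearson(kv[1], y)))
    return [name for name, _ in ranked[:keep]]

# Intensities of three hypothetical [M - H]- species across 5 oils.
tan = [0.06, 0.5, 1.2, 2.4, 3.61]
cols = [
    ("naphthenic_acid_255", [1.0, 4.8, 11.9, 24.2, 36.0]),  # tracks TAN
    ("phenol_93",           [2.0, 2.2, 1.9, 2.1, 2.0]),     # flat
    ("carbazole_166",       [5.0, 4.0, 6.0, 5.5, 4.5]),     # noisy
]
print(select_variables(cols, tan, keep=1))
```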
Abstract:
To evaluate the correlation between neck circumference and insulin resistance and components of the metabolic syndrome in adolescents at different adiposity levels and pubertal stages, and to determine the usefulness of neck circumference for predicting insulin resistance in adolescents. Cross-sectional study of 388 adolescents of both genders, aged 10 to 19 years. The adolescents underwent anthropometric and body composition assessment, including neck and waist circumferences, and biochemical evaluation. Pubertal stage was obtained by self-assessment, and blood pressure by auscultation. Insulin resistance was evaluated by the Homeostasis Model Assessment-Insulin Resistance. Correlations between variables were evaluated by partial correlation coefficients adjusted for percentage body fat and pubertal stage. The performance of neck circumference in identifying insulin resistance was tested by the receiver operating characteristic (ROC) curve. After adjustment for percentage body fat and pubertal stage, neck circumference correlated with waist circumference, blood pressure, triglycerides, and markers of insulin resistance in both genders. The results showed that neck circumference is a useful tool for detecting insulin resistance and changes in indicators of the metabolic syndrome in adolescents. The ease of application and low cost of this measure may allow its use in public health services.
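The ROC analysis mentioned above can be sketched with the rank-sum identity for the area under the curve; the neck-circumference values and insulin-resistance labels below are hypothetical:

```python
def roc_auc(values, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: AUC = P(value of a positive case > value of a negative
    case), counting ties as 1/2."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical neck circumferences (cm) and insulin-resistance labels.
nc = [30.0, 31.5, 32.0, 33.5, 34.0, 35.5, 36.0, 37.5]
ir = [0,    0,    0,    0,    1,    0,    1,    1]
print(roc_auc(nc, ir))
```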
Abstract:
To investigate the effects of a specific undulatory physical resistance training protocol on maximal strength gains in elderly type 2 diabetics. The study included 48 subjects of both genders, aged between 60 and 85 years. They were divided into two groups: Untrained Diabetic Elderly (n=19), who did not undergo physical training, and Trained Diabetic Elderly (n=29), who underwent undulatory physical resistance training. Participants were evaluated on several types of resistance training equipment before and after the training protocol using the one-repetition maximum test. Subjects trained three times per week for a period of 16 weeks. The overload used in the undulatory resistance training alternated weekly between 50% and 70% of one repetition maximum. Statistical analysis revealed significant differences (p<0.05) between pre-test and post-test over the 16-week period. The average strength gains were 43.20% (knee extension), 65.00% (knee flexion), 27.80% (supine sitting machine), 31.00% (seated row), 43.90% (biceps pulley), and 21.10% (triceps pulley). Undulatory resistance training with different weekly overloads was effective in providing significant gains in maximal strength in elderly type 2 diabetic individuals.
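The reported percentage gains follow directly from pre- and post-training one-repetition-maximum loads; a worked example with an invented pre-training load that reproduces the knee-extension figure:

```python
def percent_gain(pre_1rm, post_1rm):
    """Relative strength gain between pre- and post-training 1RM loads."""
    return (post_1rm - pre_1rm) / pre_1rm * 100

# Hypothetical knee-extension 1RM loads (kg) before and after the
# 16-week protocol; the 43.2% gain reported above would correspond to:
print(round(percent_gain(50.0, 71.6), 1))  # → 43.2
```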
Abstract:
Primary X-ray spectra were measured in the range of 80-150 kV in order to validate a computer program based on a semiempirical model. The ratio between the characteristic and total air kerma was used to compare computed results with experimental data. The results show that the experimental spectra have a higher first half-value layer (HVL) and mean energy than the calculated ones. The ratios between the characteristic and total air kerma for the calculated spectra are in good agreement with the experimental results for all filtrations used.
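The first HVL for a polyenergetic beam can be computed by bisection on the kerma-weighted transmission; the two-component spectrum and attenuation coefficients below are illustrative, not the measured data:

```python
import math

def first_hvl(weights, mu, lo=0.0, hi=20.0, tol=1e-6):
    """First half-value layer (same length unit as 1/mu) of an absorber
    for a polyenergetic beam: the thickness x at which the kerma-weighted
    transmission sum(w_i * exp(-mu_i * x)) falls to half its x = 0 value,
    solved by bisection. Beam hardening is built in: the high-mu (soft)
    component is removed first, which is why a spectrum with a higher
    mean energy also shows a higher first HVL."""
    total = sum(weights)

    def transmission(x):
        return sum(w * math.exp(-m * x) for w, m in zip(weights, mu)) / total

    while hi - lo > tol:
        mid = (lo + hi) / 2
        if transmission(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative two-component beam: a soft component (mu = 1.2 /mm Al)
# and a hard component (mu = 0.35 /mm Al) with equal kerma weights.
# The result lies between the two monoenergetic HVLs (0.58 and 1.98 mm).
print(round(first_hvl([1.0, 1.0], [1.2, 0.35]), 3))
```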
Abstract:
The physical model was based on the Newton-Euler method and was developed using the scientific computing program Mathematica®. Several simulations were run, varying the forward speed (0.69, 1.12, 1.48, 1.82 and 2.12 m s-1), the soil profile (sinusoidal, ascending ramp and descending ramp) and the profile height (0.025 and 0.05 m), to obtain the normal soil reaction force. After the initial simulations, the mechanism was optimized using the scientific computing program Matlab®, with the minimization of the normal reaction force of the profile (FN) as the objective function. The design variables were the bar lengths (L1y, L2, L3 and L4), the operating height (L7), the initial spring length (Lmo) and the spring constant (kt). The mechanism's lack of robustness with respect to the operating height was mitigated by using a spring with low stiffness and large length. The results showed that the optimized mechanism had better flotation performance than the initial mechanism.
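Purely as a sketch of the optimization step (the actual work used Matlab® on the full Newton-Euler model), a brute-force search over two of the design variables against an invented surrogate objective:

```python
# Illustrative only: grid search over spring constant k and free spring
# length Lmo, minimizing a made-up surrogate for the peak normal
# reaction force FN. Neither the surrogate nor its optimum comes from
# the paper.

def surrogate_fn(k, lmo):
    """Hypothetical peak normal reaction force (N) as a function of the
    spring constant k (N/m) and free spring length lmo (m)."""
    return 50.0 + 0.002 * (k - 800.0) ** 2 + 400.0 * (lmo - 0.30) ** 2

def grid_search(f, ks, lmos):
    """Return (objective, k, lmo) at the grid minimum."""
    return min((f(k, l), k, l) for k in ks for l in lmos)

ks = [600 + 50 * i for i in range(13)]       # 600 .. 1200 N/m
lmos = [0.20 + 0.02 * i for i in range(11)]  # 0.20 .. 0.40 m
obj, k_best, l_best = grid_search(surrogate_fn, ks, lmos)
print(k_best, round(l_best, 2))  # minimum near k = 800, Lmo = 0.30
```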
Abstract:
In order to determine the energy needed to artificially dry an agricultural product, the latent heat of vaporization of the moisture in the product, H, must be known. Generally, the expressions for H reported in the literature are of the form H = h(T)f(M), where h(T) is the latent heat of vaporization of free water and f(M) is a function of the equilibrium moisture content, M, which is a simplification. In this article, a more general expression for the latent heat of vaporization, namely H = g(M,T), is used to determine H for cowpea, always-green variety. For this purpose, a computer program was developed that automatically fits about 500 functions, with one or two independent variables, embedded in its library to experimental data. The program uses nonlinear regression and ranks the best functions by the lowest reduced chi-squared. A set of statistical tests shows that the generalized expression for H used in this work produces better results for cowpea than other equations found in the literature.
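The ranking criterion can be made concrete: reduced chi-squared divides the residual sum of squares by the degrees of freedom, so extra parameters are penalized; the data and candidate-model predictions below are invented:

```python
def reduced_chi_squared(y_obs, y_fit, n_params):
    """Residual sum of squares over degrees of freedom (n - p), the
    criterion used to rank candidate fitted functions."""
    rss = sum((o - f) ** 2 for o, f in zip(y_obs, y_fit))
    return rss / (len(y_obs) - n_params)

# Hypothetical latent-heat data (kJ/kg) and the predictions of two
# candidate functions, one with 2 parameters and one with 3.
h_obs     = [2650.0, 2580.0, 2510.0, 2460.0, 2400.0]
h_model_a = [2660.0, 2570.0, 2520.0, 2450.0, 2410.0]  # 2 parameters
h_model_b = [2652.0, 2579.0, 2512.0, 2458.0, 2401.0]  # 3 parameters

chi_a = reduced_chi_squared(h_obs, h_model_a, 2)
chi_b = reduced_chi_squared(h_obs, h_model_b, 3)
print(chi_a, chi_b)  # the function with the lower value ranks first
```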
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física