973 results for score test information matrix artificial regression
Abstract:
Soil information is needed for managing the agricultural environment. The aim of this study was to apply artificial neural networks (ANNs) to the prediction of soil classes, using orbital remote sensing products, terrain attributes derived from a digital elevation model, and local geology information as data sources. This approach to digital soil mapping was evaluated in an area with a high degree of lithologic diversity in the Serra do Mar. The neural network simulator used in this study was JavaNNS, with the backpropagation learning algorithm. For soil class prediction, different combinations of the selected discriminant variables were tested: elevation, slope, aspect, curvature, plan curvature, profile curvature, topographic index, solar radiation, LS topographic factor, local geology information, and the clay mineral indices, iron oxide indices and normalized difference vegetation index (NDVI) derived from a Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image. Among the tested sets, the best results were obtained when all discriminant variables were associated with geological information (overall accuracy 93.2-95.6 %, Kappa index 0.924-0.951, for set 13). Excluding the variable profile curvature (set 12), overall accuracy ranged from 93.9 to 95.4 % and the Kappa index from 0.932 to 0.948. The maps based on the neural network classifier were consistent and similar to conventional soil maps drawn for the study area, although with greater spatial detail. The results show the potential of ANNs for soil class prediction in mountainous areas with lithological diversity.
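For readers who want to reproduce the reported agreement statistics, the sketch below shows how overall accuracy and the Kappa index are conventionally computed from a confusion matrix of reference versus predicted soil classes. It is not the authors' JavaNNS workflow; the 3-class matrix is purely hypothetical.

```python
import numpy as np

def accuracy_and_kappa(cm: np.ndarray) -> tuple[float, float]:
    """Overall accuracy and Cohen's Kappa from a square confusion matrix
    (rows = reference soil classes, columns = predicted classes)."""
    cm = cm.astype(float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# Hypothetical 3-class confusion matrix, for illustration only
cm = np.array([[50, 3, 2],
               [4, 45, 1],
               [2, 2, 41]])
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")
```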
Abstract:
Visible and near infrared (vis-NIR) spectroscopy is widely used to detect soil properties. The objective of this study was to evaluate the combined effect of moisture content (MC) and the modeling algorithm on the prediction of soil organic carbon (SOC) and pH. Partial least squares (PLS) regression and artificial neural networks (ANN) for modeling SOC and pH at different MC levels were compared in terms of prediction efficiency. A total of 270 soil samples were used. Before spectral measurement, dry soil samples were weighed to determine the amount of water to be added by weight to achieve the specified gravimetric MC levels of 5, 10, 15, 20, and 25 %. A fiber-optic vis-NIR spectrophotometer (350-2500 nm) was used to measure spectra of soil samples in diffuse reflectance mode. Spectra preprocessing and PLS regression were carried out using Unscrambler® software. Statistica® software was used for ANN modeling. The best prediction result for SOC was obtained using the ANN (RMSEP = 0.82 % and RPD = 4.23) for soil samples with 25 % MC. The best prediction results for pH were obtained with PLS for dry soil samples (RMSEP = 0.65 and RPD = 1.68) and soil samples with 10 % MC (RMSEP = 0.61 and RPD = 1.71). Whereas the ANN showed better performance for SOC prediction at all MC levels, PLS showed better predictive accuracy for pH at all MC levels except 25 % MC. Therefore, based on the data set used in the current study, the ANN is recommended for the analysis of SOC at all MC levels, whereas PLS is recommended for the analysis of pH at MC levels below 20 %.
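The two reported figures of merit can be computed as in the following sketch, assuming the usual definitions (RMSEP as the root mean squared prediction error on the validation set, RPD as the standard deviation of the observed values divided by RMSEP). The validation values shown are invented for illustration.

```python
import numpy as np

def rmsep(y_obs, y_pred):
    """Root mean square error of prediction."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_obs) ** 2))

def rpd(y_obs, y_pred):
    """Ratio of performance to deviation: SD of observed values / RMSEP."""
    return np.std(np.asarray(y_obs, float), ddof=1) / rmsep(y_obs, y_pred)

# Hypothetical SOC validation data (%), for illustration only
y_obs  = [1.2, 2.4, 0.9, 3.1, 1.8, 2.7]
y_pred = [1.4, 2.2, 1.0, 2.9, 1.9, 2.5]
print(f"RMSEP = {rmsep(y_obs, y_pred):.2f} %, RPD = {rpd(y_obs, y_pred):.2f}")
```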
Abstract:
Dual-energy X-ray absorptiometry (DXA) measurement of bone mineral density (BMD) is the reference standard for diagnosing osteoporosis but does not directly reflect deterioration in bone microarchitecture. The trabecular bone score (TBS), a novel grey-level texture measurement that can be extracted from DXA images, predicts osteoporotic fractures independently of BMD. Our aim was to identify clinical factors associated with baseline lumbar spine TBS. In total, 29,407 women ≥50 yr at the time of baseline hip and spine DXA were identified from a database containing all clinical results for the Province of Manitoba, Canada. Lumbar spine TBS was derived for each spine DXA examination blinded to clinical parameters and outcomes. Multiple linear regression and logistic regression (lowest vs highest tertile) were used to assess the sensitivity of TBS to other risk factors associated with osteoporosis. Only a small component of the TBS measurement (7-11%) could be explained by BMD measurements. In multiple linear regression and logistic regression models, reduced lumbar spine TBS was associated with recent glucocorticoid use, prior major fracture, rheumatoid arthritis, chronic obstructive pulmonary disease, high alcohol intake, and higher body mass index. In contrast, recent osteoporosis therapy was associated with a significantly lower likelihood of reduced TBS. Similar findings were seen after adjustment for lumbar spine or femoral neck BMD. In conclusion, lumbar spine TBS is strongly associated with many of the risk factors that are predictive of osteoporotic fractures. Further work is needed to determine whether lumbar spine TBS can replace some of the clinical risk factors currently used in fracture risk assessment.
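A minimal sketch of the tertile-based logistic regression step is given below, using statsmodels with entirely hypothetical column names and simulated data; it is not the Manitoba analysis itself, only the general form of estimating odds ratios for reduced TBS with a BMD adjustment covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical analysis table: one row per woman, binary outcome
# low_tbs = 1 if lumbar spine TBS is in the lowest tertile, 0 if in the highest.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "low_tbs": rng.integers(0, 2, n),
    "glucocorticoid_use": rng.integers(0, 2, n),
    "prior_fracture": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 5, n),
    "fn_bmd": rng.normal(0.8, 0.12, n),   # femoral neck BMD as an adjustment covariate
})

X = sm.add_constant(df[["glucocorticoid_use", "prior_fracture", "bmi", "fn_bmd"]])
fit = sm.Logit(df["low_tbs"], X).fit(disp=False)
odds_ratios = np.exp(fit.params)          # odds ratio per unit change in each covariate
print(pd.concat([odds_ratios, np.exp(fit.conf_int())], axis=1))
```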
Abstract:
Even 30 years after its first publication, the Glasgow Coma Scale (GCS) is still used worldwide to describe and assess coma. The GCS consists of three components, the ocular, motor and verbal responses to standardized stimulation, and is used as a severity-of-illness indicator for coma of various origins. The GCS facilitates information transfer and the monitoring of changes in coma. In addition, it is used as a triage tool in patients with traumatic brain injury. Its prognostic value regarding outcome after traumatic brain injury still lacks evidence. One of the main problems is the evaluation of the GCS in sedated, paralysed and/or intubated patients. A multitude of pseudoscores exists, but a universal definition has yet to be agreed upon.
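As a concrete reference, the sketch below encodes the standard GCS arithmetic: the total score is the sum of the eye (1-4), verbal (1-5) and motor (1-6) components, giving a range of 3-15.

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Total GCS as the sum of its three components:
    eye opening 1-4, verbal response 1-5, motor response 1-6 (total 3-15)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor

# Example: spontaneous eye opening (4), confused speech (4), obeys commands (6)
print(glasgow_coma_scale(4, 4, 6))   # 14
```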
Abstract:
Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which then become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since, in general, the regressors are nonlinearly related to the response, their selection with the usual F test is not possible. In this work we propose a solution to this predictor-selection problem by defining generalized test statistics and adapting a nonparametric bootstrap method for estimating their p-values. We include a numerical example with automobile insurance data.
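A minimal sketch of the general idea, not the authors' exact generalized tests: latent coordinates are obtained by classical multidimensional scaling of the dissimilarity matrix, an ordinary least squares model is fitted on them, and a nonparametric (residual) bootstrap approximates the p-value of an extra-sum-of-squares statistic for a candidate predictor.

```python
import numpy as np

def latent_coordinates(D: np.ndarray, k: int) -> np.ndarray:
    """Classical MDS: k latent variables from an n x n dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                 # doubly centred Gram matrix
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:k]               # k largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

def pseudo_F(y, X_full, X_reduced):
    """Extra-sum-of-squares statistic comparing two OLS fits (both with intercept)."""
    def rss(X):
        Z = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        return resid @ resid
    df_extra = X_full.shape[1] - X_reduced.shape[1]
    df_resid = len(y) - X_full.shape[1] - 1
    return ((rss(X_reduced) - rss(X_full)) / df_extra) / (rss(X_full) / df_resid)

def bootstrap_p_value(y, X_full, X_reduced, B=999, seed=0):
    """Nonparametric bootstrap p-value under the reduced model (residual resampling)."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([np.ones(len(y)), X_reduced])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    fitted, resid = Z @ beta, y - Z @ beta
    F_obs = pseudo_F(y, X_full, X_reduced)
    F_boot = np.array([pseudo_F(fitted + rng.choice(resid, len(y), replace=True),
                                X_full, X_reduced) for _ in range(B)])
    return (1 + np.sum(F_boot >= F_obs)) / (B + 1)

# Usage sketch: X_full appends the latent coordinates of a candidate predictor's
# dissimilarity matrix to X_reduced; a small p-value supports keeping that predictor.
```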
Abstract:
BACKGROUND: Little is known about how to most effectively deliver relevant information to patients scheduled for endoscopy. METHODS: To assess the effects of combined written and oral information, compared with oral information alone, on the quality of information before endoscopy and on the level of anxiety, we designed a prospective study in two Swiss teaching hospitals that enrolled consecutive patients scheduled for endoscopy over a three-month period. Patients were randomized either to receiving, along with the appointment notice, an explanatory leaflet about the upcoming examination, or to oral information delivered by each patient's doctor. Quality of information was rated on scales from 0 (none received) to 5 (excellent). The analysis of outcome variables was performed on an intention-to-treat basis. Multivariate analysis of predictors of information scores was performed by linear regression. RESULTS: Of 718 eligible patients, 577 (80%) returned their questionnaire. Patients who received written leaflets (N = 278) rated the quality of the information they received higher than those informed verbally (N = 299) on all 8 quality-of-information items. Differences were significant regarding information about the risks of the procedure (3.24 versus 2.26, p < 0.001), how to prepare for the procedure (3.56 versus 3.23, p = 0.036), what to expect after the procedure (2.99 versus 2.59, p < 0.001), and the overall score across the 8 quality-of-information items (3.35 versus 3.02, p = 0.002). The two groups reported similar levels of anxiety before the procedure (p = 0.66), pain during the procedure (p = 0.20), tolerability throughout the procedure (p = 0.76), problems after the procedure (p = 0.22), and overall rating of the procedure between poor and excellent (p = 0.82). CONCLUSION: Written information led to more favourable assessments of the quality of information and had no impact on patient anxiety or on the overall assessment of the endoscopy. Because structured and comprehensive written information is perceived as beneficial by patients, gastroenterologists should clearly explain to their patients the risks, benefits and alternatives of endoscopic procedures. Trial registration: Current Controlled Trials number ISRCTN34382782.
Abstract:
Ski resorts are deploying more and more artificial snow-making systems. These tools are necessary to sustain an important economic activity for high alpine valleys. However, artificial snow raises important environmental issues that can be reduced by optimizing its production. This paper presents a software prototype based on artificial intelligence to help ski resorts better manage their snowpack. It combines, on the one hand, a General Neural Network for the analysis of the snow cover and for spatial prediction with, on the other hand, a multi-agent simulation of skiers for the analysis of the spatial impact of ski practice. The prototype has been tested on the ski resort of Verbier (Switzerland).
Abstract:
This research describes the process followed to assemble a "Social Accounting Matrix" for Spain for the year 2000 (SAMSP00). As argued in the paper, this process attempts to reconcile ESA95 conventions with the requirements of applied general equilibrium modelling. In particular, problems related to the level of aggregation of net taxation data and to the valuation system used for expressing the monetary value of input-output transactions have received special attention. Since the adoption of ESA95 conventions, input-output transactions have preferably been valued at basic prices, which imposes additional difficulties on modellers interested in computing applied general equilibrium models. This paper addresses these difficulties by developing a procedure that allows SAM-builders to change the valuation system of input-output transactions conveniently. In addition, this procedure produces new data on net taxation.
Abstract:
Understanding and anticipating biological invasions can focus either on traits that favour species invasiveness or on features of the receiving communities, habitats or landscapes that promote their invasibility. Here, we address invasibility at the regional scale, testing whether some habitats and landscapes are more invasible than others by fitting models that relate alien plant species richness to various environmental predictors. We use a multi-model information-theoretic approach to assess invasibility by modelling spatial and ecological patterns of alien invasion in landscape mosaics and testing competing hypotheses of environmental factors that may control invasibility. Because invasibility may be mediated by particular characteristics of invasiveness, we classified alien species according to their C-S-R plant strategies. We illustrate this approach with a set of 86 alien species in Northern Portugal. We first focus on predictors influencing species richness and expressing invasibility and then evaluate whether distinct plant strategies respond to the same or different groups of environmental predictors. We confirmed climate as a primary determinant of alien invasions and as a primary environmental gradient determining landscape invasibility. The effects of secondary gradients were detected only when the area was sub-sampled according to predictions based on the primary gradient. Then, multiple predictor types influenced patterns of alien species richness, with some types (landscape composition, topography and fire regime) prevailing over others. Alien species richness responded most strongly to extreme land management regimes, suggesting that intermediate disturbance induces biotic resistance by favouring native species richness. Land-use intensification facilitated alien invasion, whereas conservation areas hosted few invaders, highlighting the importance of ecosystem stability in preventing invasions. Plants with different strategies exhibited different responses to environmental gradients, particularly when the variations of the primary gradient were narrowed by sub-sampling. Such differential responses of plant strategies suggest using distinct control and eradication approaches for different areas and alien plant groups.
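The information-theoretic step of ranking competing models can be sketched as below: AIC differences are converted into Akaike weights that express the relative support for each candidate model of alien species richness. The model names and AIC values are hypothetical, for illustration only.

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights from a set of candidate-model AIC values."""
    aic = np.asarray(aic_values, float)
    delta = aic - aic.min()              # AIC differences from the best model
    rel_lik = np.exp(-0.5 * delta)       # relative likelihoods
    return rel_lik / rel_lik.sum()

# Hypothetical candidate models of alien species richness (AIC values invented)
models = {"climate": 512.3, "climate+landuse": 505.1,
          "climate+topography": 508.7, "climate+fire": 509.9}
weights = akaike_weights(list(models.values()))
for name, w in zip(models, weights):
    print(f"{name:22s} w = {w:.3f}")
```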
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
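The key advantage of simulations over a single regression prediction, namely probabilistic mapping, can be illustrated with a short sketch: given an ensemble of realizations, the per-cell probability of exceeding a decision threshold is simply the fraction of realizations above it. The ensemble below is synthetic, not Chernobyl fallout data.

```python
import numpy as np

def exceedance_probability(realizations: np.ndarray, threshold: float) -> np.ndarray:
    """Per-cell probability that the simulated concentration exceeds a threshold.
    realizations: array of shape (n_realizations, ny, nx)."""
    return (realizations > threshold).mean(axis=0)

# Synthetic stand-in for an ensemble of stochastic simulation realizations
rng = np.random.default_rng(42)
ensemble = rng.lognormal(mean=1.0, sigma=0.5, size=(200, 50, 50))
p_map = exceedance_probability(ensemble, threshold=5.0)   # hypothetical decision threshold
print(p_map.shape, p_map.min(), p_map.max())
```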
Abstract:
Purpose: To assess the global cardiovascular (CV) risk of an individual, several scores have been developed. However, their accuracy and comparability need to be evaluated in populations other than those from which they were derived. The aim of this study was to compare the predictive accuracy of 4 CV risk scores using data from a large population-based cohort. Methods: Prospective cohort study including 4980 participants (2698 women, mean age ± SD: 52.7 ± 10.8 years) in Lausanne, Switzerland, followed for an average of 5.5 years (range 0.2-8.5). Two end points were assessed: 1) coronary heart disease (CHD), and 2) CV diseases (CVD). Four risk scores were compared: the original and recalibrated Framingham coronary heart disease scores (1998 and 2001); the original PROCAM score (2002) and its version recalibrated for Switzerland (IAS-AGLA); and the Reynolds risk score. Discrimination was assessed using Harrell's C statistic, model fitness using Akaike's information criterion (AIC) and calibration using a pseudo Hosmer-Lemeshow test. The sensitivity, specificity and corresponding 95% confidence intervals were assessed for each risk score using the highest risk category (≥20 % at 10 years) as the "positive" test. Results: The recalibrated and original 1998 and original 2001 Framingham scores show better discrimination (>0.720) and model fitness (low AIC) for CHD and CVD. All 4 scores are correctly calibrated (Chi2 < 20). The recalibrated Framingham 1998 score has the best sensitivities, 37.8% and 40.4%, for CHD and CVD, respectively. All scores present specificities >90%. The Framingham 1998, PROCAM and IAS-AGLA scores include the greatest number of subjects (>200) in the high-risk category, whereas the recalibrated Framingham 2001 and Reynolds scores include ≤44 subjects. Conclusion: In this cohort, we see variations in accuracy between risk scores, with the original Framingham 2001 score demonstrating the best compromise between accuracy and a limited selection of subjects in the highest risk category. We advocate that national guidelines, based on independently validated data, take into account CV risk scores calibrated for their respective countries.
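A sketch of the test-performance step, assuming the usual definitions: the "≥20 % at 10 years" category is treated as a positive test against observed events, and Wilson 95% confidence intervals are attached to sensitivity and specificity. The data below are simulated, not the Lausanne cohort.

```python
import numpy as np

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson 95% confidence interval for a proportion k/n."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def sens_spec(high_risk: np.ndarray, event: np.ndarray):
    """Sensitivity/specificity of the 'high risk' classification against observed events."""
    tp = int(np.sum(high_risk & event));   fn = int(np.sum(~high_risk & event))
    tn = int(np.sum(~high_risk & ~event)); fp = int(np.sum(high_risk & ~event))
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    return (sens, wilson_ci(tp, tp + fn)), (spec, wilson_ci(tn, tn + fp))

# Simulated example: high-risk classification vs. observed CHD events
rng = np.random.default_rng(1)
event = rng.random(4980) < 0.03
high_risk = (rng.random(4980) < 0.05) | (event & (rng.random(4980) < 0.35))
print(sens_spec(high_risk, event))
```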
Abstract:
OBJECTIVES: In a retrospective study, attempts were made to identify individual organ-dysfunction risk profiles influencing the outcome after surgery for ruptured abdominal aortic aneurysms. METHODS: Out of 235 patients undergoing graft replacement for abdominal aortic aneurysms, 57 (53 men, four women, mean age 72 years [s.d. 8.8]) were treated for ruptured aneurysms over a 3-year period. Forty-eight preoperative, 13 intraoperative and 34 postoperative variables were evaluated statistically. A simple multi-organ dysfunction (MOD) score was adopted. RESULTS: The perioperative mortality was 32%. Three patients died intraoperatively, four within 48 h and 11 died later. A significant influence of pre-existing risk factors was identified only for cardiovascular disease. Multiple linear-regression analysis indicated that haemoglobin <90 g/l, systolic blood pressure <80 mmHg and ECG signs of ischaemia at admission were highly significant risk factors. The cause of death in patients who died more than 48 h postoperatively was mainly MOD. All patients with a MOD score ≥4 died (n=7). These patients required 27% of the intensive-care unit (ICU) days of all patients and 72% of the ICU days of the non-survivors. CONCLUSION: Patients with ruptured aortic aneurysms should not be excluded from treatment. However, a physiological scoring system applied after 48 h appears justifiable in order to decide on the appropriateness of continued ICU support.
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, as it is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) was proposed as a procedure to evaluate data clustering as a function of the radon level (a minimal sketch of a quadrat-based Morisita index follows this abstract). The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase was accompanied by the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common spatial modeling methods such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multi-Gaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for modeling extreme values through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), in contrast, performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should form the basis, while the other methods can provide complementary information to support efficient decision making on indoor radon.
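As referenced above, a quadrat-based Morisita index of dispersion can be computed as in the sketch below; the exact QMI variant used in this work may differ, and the point pattern here is synthetic.

```python
import numpy as np

def morisita_index(x, y, n_cells: int) -> float:
    """Morisita index of dispersion for 2-D point data on an n_cells x n_cells grid
    of quadrats. Values near 1 indicate randomness, >1 clustering, <1 regularity."""
    counts, _, _ = np.histogram2d(x, y, bins=n_cells)
    n = counts.ravel()
    N, Q = n.sum(), n.size
    return Q * np.sum(n * (n - 1)) / (N * (N - 1))

# Synthetic measurement locations for illustration (clustered pattern)
rng = np.random.default_rng(3)
centres = rng.random((20, 2))
pts = centres[rng.integers(0, 20, 2000)] + rng.normal(0, 0.02, (2000, 2))
for q in (4, 8, 16, 32):                  # evaluate clustering across grid resolutions
    print(q, round(morisita_index(pts[:, 0], pts[:, 1], q), 2))
```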
Abstract:
Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of the various models and algorithms, since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performances in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neuron models.
Abstract:
We describe a device made of artificial muscle for the treatment of end-stage heart failure as an alternative to current heart assist devices. The key component is a matrix of nitinol wires and aramid fibers called Biometal muscle (BM). When heated electrically, it produces a motorless, smooth, and lifelike motion. The BM is connected to a carbon fiber scaffold that tightens around the heart, providing simultaneous assistance to the left and right ventricles. A pacemaker-like microprocessor drives the contraction of the BM. We tested the device in a dedicated bench model of a diseased heart. It generated a systolic pressure of 75 mm Hg and ejected a maximum of 330 ml/min, with an ejection fraction of 12%. The device required a power supply of 6 V, 250 mA. This could be the beginning of an era in which BMs integrate with or replace the mechanical function of natural muscles.