130 results for simple algorithms


Relevance:

20.00%

Publisher:

Abstract:

Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance, can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each family, the most recent advances in the remote sensing community are discussed and several heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
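As a minimal illustration of a posterior probability-based heuristic, the "breaking ties" criterion can be sketched in a few lines of NumPy. The class posteriors below are made-up values and the function name is ours, not from the paper:

```python
import numpy as np

def breaking_ties_ranking(proba):
    """Rank unlabeled pixels by the 'breaking ties' heuristic: the
    smaller the gap between the two largest class posteriors, the
    more uncertain the pixel, and the earlier it is queried."""
    top_two = np.sort(proba, axis=1)[:, -2:]
    margin = top_two[:, 1] - top_two[:, 0]   # top-1 minus top-2 posterior
    return np.argsort(margin)                # most uncertain pixels first

# toy posteriors for three unlabeled pixels over three classes
proba = np.array([
    [0.95, 0.03, 0.02],   # confident -> queried last
    [0.40, 0.35, 0.25],   # nearly tied -> queried first
    [0.70, 0.20, 0.10],
])
ranking = breaking_ties_ranking(proba)   # -> [1, 2, 0]
```

In a real loop, the top-ranked pixels would be labeled by the user, added to the training set, and the classifier retrained before the next ranking.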

Relevance:

20.00%

Publisher:

Abstract:

There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be an effective, practical and robust means of enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth.
Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
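As an illustrative sketch (not the authors' implementation), band-limited spectral whitening of a single trace can be written with NumPy's FFT routines. The passband limits, the stabilization constant `eps`, and the synthetic two-tone trace below are arbitrary choices of ours:

```python
import numpy as np

def spectral_whiten(trace, dt, f_lo, f_hi, eps=1e-3):
    """Band-limited spectral whitening: flatten the amplitude spectrum
    inside [f_lo, f_hi] while preserving the phase spectrum; frequencies
    outside the passband are muted."""
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, dt)
    amp = np.abs(spec)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    out = np.zeros_like(spec)
    # unit amplitude (stabilized) in the passband, original phase kept
    out[band] = spec[band] / (amp[band] + eps * amp.max())
    return np.fft.irfft(out, n)

# synthetic trace: strong 50 Hz and weak 120 Hz components
dt = 0.001
t = np.arange(1000) * dt
trace = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 120 * t)
flat = spectral_whiten(trace, dt, f_lo=40.0, f_hi=130.0)
```

Spectral blueing would differ only in that the passband is multiplied by a gentle ramp increasing with frequency instead of being flattened to unit amplitude.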

Relevance:

20.00%

Publisher:

Abstract:

Although glycogen (Glyc) is the main carbohydrate storage component, the role of Glyc in the brain during prolonged wakefulness is not clear. The aim of this study was to determine brain Glyc concentration ([Glyc]) and turnover time (tau) in euglycemic conscious and undisturbed rats, compared to rats maintained awake for 5h. To measure the metabolism of [1-(13)C]-labeled Glc into Glyc, 23 rats received a [1-(13)C]-labeled Glc solution as drink (10% weight per volume in tap water) ad libitum as their sole source of exogenous carbon for a "labeling period" of either 5h (n=13), 24h (n=5) or 48h (n=5). Six of the rats labeled for 5h were continuously maintained awake by acoustic, tactile and olfactory stimuli during the labeling period, which resulted in slightly elevated corticosterone levels. Brain [Glyc] measured biochemically after focused microwave fixation in the rats maintained awake (3.9+/-0.2 micromol/g, n=6) was not significantly different from that of the control group (4.0+/-0.1 micromol/g, n=7; t-test, P>0.5). To account for potential variations in plasma Glc isotopic enrichment (IE), Glyc IE was normalized by N-acetyl-aspartate (NAA) IE. A simple mathematical model was developed to derive a brain Glyc turnover time of 5.3h with a fit error of 3.2h and an NAA turnover time of 15.6h with a fit error of 6.5h in the control rats. A faster tau(Glyc) (2.9h with a fit error of 1.2h) was estimated in the rats maintained awake for 5h. In conclusion, 5h of prolonged wakefulness mainly activates glycogen metabolism, but has minimal effect on brain [Glyc].
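The abstract does not spell out the model, so as a stand-in illustration one can assume a single well-mixed pool whose label enrichment follows IE(t) = 1 − exp(−t/tau) and fit tau by least squares. The grid search and the synthetic, noise-free data below are our own construction, using the labeling periods and the control-group tau reported above:

```python
import numpy as np

def enrichment(t, tau):
    """Fraction of a single well-mixed pool carrying the 13C label
    after labeling time t, for turnover time constant tau (hours)."""
    return 1.0 - np.exp(-t / tau)

def fit_tau(times, ie, taus=np.linspace(0.5, 50.0, 4000)):
    """Least-squares grid search for the turnover time tau."""
    resid = ie[None, :] - enrichment(times[None, :], taus[:, None])
    return taus[np.argmin((resid ** 2).sum(axis=1))]

# synthetic measurements at the three labeling periods used in the study
times = np.array([5.0, 24.0, 48.0])
true_tau = 5.3                      # h, value reported for control rats
ie_obs = enrichment(times, true_tau)
tau_hat = fit_tau(times, ie_obs)    # recovers ~5.3 h
```

The real analysis additionally normalizes Glyc IE by NAA IE to correct for plasma enrichment, which this toy fit omits.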

Relevance:

20.00%

Publisher:

Abstract:

AIM: Although acute pain is frequently reported by patients admitted to the emergency room, it is often insufficiently evaluated by physicians and is thus undertreated. With the aim of improving the care of adult patients with acute pain, we developed and implemented abbreviated clinical practice guidelines (CG) for the nursing and physician staff of our hospital's emergency room. METHODS: Our algorithm is based upon the practices described in the international literature and uses a simultaneous approach of treating acute pain in a rapid and efficacious manner along with diagnostic and therapeutic procedures. RESULTS: Pain was assessed using either a visual analogue scale (VAS) or a numerical rating scale (NRS) at ER admission and again during the hospital stay. Patients were treated with paracetamol and/or NSAID (VAS/NRS < 4) or intravenous morphine (VAS/NRS >= 4). The algorithm also outlines a specific approach for patients with headaches to minimise the risks inherent in a non-specific treatment. In addition, our algorithm addresses the treatment of paroxysmal pain in patients with chronic pain as well as acute pain in drug addicts. It also outlines measures for pain prevention prior to minor diagnostic or therapeutic procedures. CONCLUSIONS: Based on published guidelines, an abbreviated clinical algorithm (AA) was developed, and its simple format permitted widespread implementation. In contrast to international guidelines, our algorithm favours giving nursing staff responsibility for decision-making aspects of pain assessment and treatment in emergency room patients.
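The score-based branch described above can be paraphrased as a small decision function. This is a hypothetical sketch for illustration only, not the hospital's actual protocol: dose titration, contraindications, and the details of the headache pathway are deliberately omitted:

```python
def analgesia_for_score(score, headache=False):
    """Toy sketch of the score-based branch of a pain-treatment
    algorithm (illustrative, not clinical guidance):
      VAS/NRS < 4   -> paracetamol and/or NSAID
      VAS/NRS >= 4  -> intravenous morphine
    Headache patients are routed to a dedicated pathway."""
    if headache:
        return "headache-specific pathway"
    if score < 4:
        return "paracetamol and/or NSAID"
    return "intravenous morphine"
```

Encoding the branch points this explicitly is essentially what the abbreviated printed algorithm does for the nursing staff on paper.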

Relevance:

20.00%

Publisher:

Abstract:

The paper presents an approach for mapping of precipitation data. The main goal is to perform spatial predictions and simulations of precipitation fields using geostatistical methods (ordinary kriging, kriging with external drift) as well as machine learning algorithms (neural networks). More practically, the objective is to reproduce simultaneously both the spatial patterns and the extreme values. This objective is best reached by models integrating geostatistics and machine learning algorithms. To demonstrate how such models work, two case studies have been considered: first, a 2-day accumulation of heavy precipitation and second, a 6-day accumulation of extreme orographic precipitation. The first example is used to compare the performance of two optimization algorithms (conjugate gradients and Levenberg-Marquardt) of a neural network for the reproduction of extreme values. Hybrid models, which combine geostatistical and machine learning algorithms, are also treated in this context. The second dataset is used to analyze the contribution of radar Doppler imagery when used as external drift or as input in the models (kriging with external drift and neural networks). Model assessment is carried out by comparing independent validation errors as well as analyzing data patterns.
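As a minimal sketch of the geostatistical side, ordinary kriging at a single location can be written in NumPy. The exponential variogram and its parameters are illustrative defaults of ours, not values fitted to the precipitation data, and the station coordinates are made up:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=50.0, nugget=0.0):
    """Exponential variogram model (illustrative default parameters)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, xy0, **vpar):
    """Ordinary kriging prediction at one location xy0: solve the
    (n+1)x(n+1) system with the unbiasedness (sum-of-weights = 1)
    constraint, then combine the data with the resulting weights."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, **vpar)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1), **vpar)
    w = np.linalg.solve(A, b)[:n]
    return w @ z

# three hypothetical rain gauges (km) and their accumulations (mm)
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
z = np.array([12.0, 20.0, 31.0])
z0 = ordinary_kriging(xy, z, np.array([0.0, 0.0]))  # exact at a gauge
```

Kriging with external drift extends the same system with extra rows/columns for the drift variable (e.g. the radar image), which is how the secondary information enters the prediction.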

Relevance:

20.00%

Publisher:

Abstract:

QUESTION UNDER STUDY: Hospitals transferring patients retain responsibility until admission to the new health care facility. We define safe transfer conditions, based on appropriate risk assessment, and evaluate the impact of this strategy as implemented at our institution. METHODS: An algorithm defining transfer categories according to destination, equipment, monitoring, and medication was developed and tested prospectively over 6 months. Conformity with algorithm criteria was assessed for every transfer and transfer category. After introduction of a transfer coordination centre with transfer nurses, the algorithm was implemented and the same survey was carried out over 1 year. RESULTS: Over the whole study period, the number of transfers increased by 40%, chiefly by ambulance from the emergency department to other hospitals and private clinics. Transfers to rehabilitation centres and nursing homes were reassigned to conventional vehicles. The percentage of patients requiring equipment during transfer, such as an intravenous line, decreased from 34% to 15%, while oxygen or i.v. drug requirements remained stable. The percentage of transfers considered below theoretical safety decreased from 6% to 4%, while 20% of transfers were considered safer than necessary. A substantial number of planned transfers could be "downgraded" by mutual agreement to a lower degree of supervision, and the system was stable on a short-term basis. CONCLUSION: A coordinated transfer system based on an algorithm determining transfer categories, developed on the basis of simple but valid medical and nursing criteria, reduced unnecessary ambulance transfers and treatment during transfer, and increased adequate supervision.
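The category logic can be sketched as a toy decision function. The category names and criteria below are hypothetical stand-ins, since the abstract does not list the algorithm's actual thresholds:

```python
def transfer_category(destination, needs_monitoring=False,
                      needs_iv_or_oxygen=False):
    """Toy sketch of a transfer-category decision (illustrative only):
    clinical needs dominate; otherwise the destination decides whether
    an ambulance is required at all."""
    if needs_monitoring or needs_iv_or_oxygen:
        return "ambulance with medical escort"
    if destination in ("rehabilitation centre", "nursing home"):
        return "conventional vehicle"
    return "ambulance without escort"
```

The point of such an explicit rule set is exactly what the study measured: transfers can be audited against it, and over- or under-provisioned transfers ("safer than necessary" or "below theoretical safety") become countable.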

Relevance:

20.00%

Publisher:

Abstract:

This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the incorporation of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazards risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
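As one self-contained example in the same kernel-method family as SVM regression, an RBF kernel ridge regressor over 2-D coordinates can be sketched as follows. The coordinates, target values, and hyperparameters are made up for illustration:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two coordinate sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-3, gamma=1.0):
    """Kernel ridge regression: closely related to SVR, but with a
    squared loss and a closed-form solution instead of a QP."""
    alpha = np.linalg.solve(rbf_kernel(X, X, gamma) + lam * np.eye(len(y)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

# three hypothetical monitoring sites (km) and their measured values
X = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
predict = fit_krr(X, y)   # predict(Xq) maps new coordinates to values
```

Like the SVM models discussed above, the predictor is a kernel expansion over the training sites; unlike kriging, it makes no explicit variogram assumption, which is one practical trade-off between the two families.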