56 results for rule-based algorithms
Abstract:
In recent years, simulation training has become widespread in many areas of medicine, driven by social expectations, political accountability and professional regulation. Different types of simulators help improve knowledge, skills, communication and team behavior. Simulation sessions have been shown to shorten the learning curve and allow education in a safe environment. Patients on dialysis are an expanding group. They often suffer from several comorbidities and need complex surgical procedures for their dialysis access. Education in evidence-based algorithms is therefore as important as the teaching of practical skills. In this chapter, we present an overview of available dialysis access training modalities. We are convinced that simulation will become more important in the near future and will have a substantial impact on strategies to improve patient safety. © 2015 S. Karger AG, Basel.
Abstract:
Activities of daily living (ADL) are important for quality of life. They are indicators of cognitive health status, and their assessment is a measure of independence in everyday living. ADL are difficult to assess reliably using questionnaires because of self-reporting biases. Various sensor-based (wearable, in-home, intrusive) systems have been proposed to recognize and quantify ADL without relying on self-reporting. New classifiers for such sensor data continue to emerge. We propose two ad-hoc classifiers that are based only on non-intrusive sensor data. METHODS: A wireless sensor system with ten sensor boxes was installed in the homes of ten healthy subjects to collect ambient data over 20 consecutive days. A handheld protocol device and a paper logbook were also provided to the subjects. Eight ADL were selected for recognition. We developed two ad-hoc ADL classifiers: the rule-based forward-chaining inference engine (RBI) classifier and the circadian activity rhythm (CAR) classifier. The RBI classifier finds facts in the data and matches them against rules. The CAR classifier automatically rates routine activities to detect regularly repeating patterns of behavior. For comparison, two state-of-the-art classifiers [Naïve Bayes (NB), Random Forest (RF)] were also used. All classifiers were validated on the collected data sets for classification and recognition of the eight ADL. RESULTS: Out of a total of 1,373 ADL, the RBI classifier correctly determined 1,264 while missing 109, and the CAR classifier correctly determined 1,305 while missing 68. The RBI and CAR classifiers recognized activities with average sensitivities of 91.27% and 94.36%, respectively, outperforming both RF and NB. CONCLUSIONS: The performance of the classifiers varied significantly, showing that the choice of classifier plays an important role in ADL recognition. Both the RBI and CAR classifiers performed better than the state-of-the-art classifiers (NB, RF) on all ADL. Of the two ad-hoc classifiers, the CAR classifier was more accurate and is likely better suited than the RBI for distinguishing and recognizing complex ADL.
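A minimal Python sketch of the forward-chaining idea behind an RBI-style classifier; the sensor facts, rules and activity labels below are illustrative assumptions, not the study's actual rule base:

```python
# Hedged sketch: forward-chaining rule matching over ambient-sensor facts.
def classify_adl(facts):
    """Return the first activity whose rule matches the sensor facts."""
    rules = [
        # (condition over facts, inferred activity) -- invented examples
        (lambda f: f.get("stove_on") and f.get("kitchen_motion"), "cooking"),
        (lambda f: f.get("bathroom_motion") and f.get("water_flow"), "showering"),
        (lambda f: f.get("bed_pressure") and f.get("lights_off"), "sleeping"),
    ]
    for condition, activity in rules:
        if condition(facts):
            return activity
    return "unknown"

print(classify_adl({"stove_on": True, "kitchen_motion": True}))  # -> cooking
```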
Abstract:
Land degradation and land conservation maps at a (sub-)national scale are critical for project planning for sustainable land management. It has long been recognized that online-accessible, low-cost raster data sets (e.g. Landsat imagery, SRTM DEMs) provide a readily available basis for land resource assessments in developing countries. However, the choice of spatial, temporal and spectral resolution of such data is often limited. Furthermore, while local expert knowledge on land degradation processes is abundant, difficulties are often encountered when linking existing knowledge with modern approaches such as GIS and remote sensing (RS). The aim of this study was to develop an easily applicable, standardized workflow for preliminary spatial assessments of land degradation and conservation that also allows the integration of existing expert knowledge. The core of the developed method is a workflow for rule-based land resource assessment. In a systematic way, this workflow leads from predefined land degradation and conservation classes to field indicators, to suitable spatial proxy data, and finally to a set of rules for classification of spatial datasets. Pre-conditions are used to narrow the area of interest. Decision tree models are used to integrate the different rules. We conclude that the presented workflow assists experts from different disciplines, in collaboration with GIS/RS specialists, in establishing a preliminary model for assessing land degradation and conservation in a spatially explicit manner. The workflow provides support when linking field indicators to spatial datasets and when selecting field indicators for ground truthing.
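To make the rule-integration step concrete, here is a hedged per-pixel sketch in Python; the proxies (NDVI, slope from an SRTM DEM), thresholds and class labels are invented for illustration and would in practice come from the expert-defined rule set:

```python
# Illustrative decision-tree rules combining spatial proxy data per pixel.
def classify_pixel(ndvi, slope_deg, in_cropland):
    # Pre-condition narrows the area of interest (here: cropland only).
    if not in_cropland:
        return "not assessed"
    # Rules linking field indicators to proxy thresholds (assumed values).
    if ndvi < 0.2 and slope_deg > 15:
        return "severe degradation (erosion risk)"
    if ndvi < 0.35:
        return "moderate degradation"
    if slope_deg > 15:
        return "conservation measures present"
    return "stable"

print(classify_pixel(ndvi=0.18, slope_deg=20.0, in_cropland=True))
```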
Abstract:
We present a model for a rule-based expert system that calculates the temporal variability of wet snow avalanche release, under the assumption of avalanche triggering without loading by new snow. The knowledge base of the model was created from investigations of the system behaviour of wet snow avalanches in the Italian Ortles Alps and is represented by a fuzzy logic rule base. Input parameters of the expert system are numerical and linguistic variables: measurable meteorological and topographical factors and observable characteristics of the snow cover. The output of the inference method is the quantified release disposition for wet snow avalanches. By combining topographical parameters with a spatial interpolation of the calculated release disposition, a hazard index map is generated dynamically. Furthermore, the spatial and temporal variability of the damage potential on roads exposed to wet snow avalanches can be quantified, expressed as the number of persons at risk. Applying the rule base to the available data in the study area produced plausible results. The study demonstrates the potential of expert systems and fuzzy logic in natural hazard monitoring and risk management.
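A toy Python sketch of the fuzzy inference step; the membership functions, input ranges and the single rule are invented for illustration, whereas the paper's actual rule base is derived from the field investigations:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def release_disposition(air_temp_c, snow_wetness):
    # Fuzzify the inputs (illustrative ranges).
    warm = tri(air_temp_c, 0.0, 5.0, 12.0)
    wet = tri(snow_wetness, 0.3, 0.7, 1.0)
    # Rule: IF air is warm AND snow is wet THEN release disposition is high.
    return min(warm, wet)  # degree of rule activation as a 0..1 index

print(f"release disposition: {release_disposition(6.0, 0.8):.2f}")
```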
Abstract:
Background: Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their windows, we determined their window periods. Methods: We classified Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤12 months according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e. the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined using the relationship 'Prevalence = Incidence × Duration' in four annual cohorts of HIV-1 notifications. Results were compared to performance-based IIR, also derived from Inno-Lia results but using the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. Results: Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R2 = 0.962; P < 0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 for the performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. Conclusions: IIR estimates from window- and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
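As a hedged, worked example of the window-based estimate via 'Prevalence = Incidence × Duration': with a window period w and a fraction p of notifications ruled incident, the incidence is estimated as p / w. The numbers below are illustrative, not the study's data:

```python
window_days = 130.1        # one algorithm's window period (from the paper)
p_incident = 0.16          # assumed fraction of notifications ruled incident

w_years = window_days / 365.25
iir = p_incident / w_years # illustrative incidence estimate per year
print(f"window-based IIR: {iir:.3f}")
```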
Abstract:
Cloud computing has evolved into an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host distributed enterprise information systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements defined in service level agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. In total, we compare three SLA-based VM-scaling algorithms (one using prediction mechanisms) on a real-world application scenario involving a large, variable number of users. Our results show that autoregressive, predictive SLA-driven scaling algorithms are preferable in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to purely reactive SLA-based VM-scaling algorithms.
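A minimal sketch of the reactive baseline that such predictive algorithms are compared against; the SLA threshold, headroom factor and scale-in rule are illustrative assumptions:

```python
def scale_decision(avg_response_ms, sla_ms, n_vms, headroom=0.8):
    """Reactive SLA-based scaling: adjust VM count from measured latency."""
    if avg_response_ms > sla_ms:                       # SLA violated: scale out
        return n_vms + 1
    if avg_response_ms < headroom * sla_ms and n_vms > 1:
        return n_vms - 1                               # well under SLA: scale in
    return n_vms

print(scale_decision(avg_response_ms=240, sla_ms=200, n_vms=4))  # -> 5
```

A predictive variant would feed an autoregressive forecast of the response time into the same decision instead of the last measurement.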
Abstract:
Long-term electrocardiogram (ECG) recordings often suffer from relevant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are designed for prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be weighed against the increase in signal-to-baseline ratio (SBR). Here we present a graphics processing unit (GPU) based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite impulse response, infinite impulse response, moving-mean, and moving-median filters. Individual filter parameters were optimized with respect to the SBR increase, based on ECGs from the Physionet database superimposed on real baseline wander modeled autoregressively. A Monte Carlo simulation showed that for low input SBR the moving-median filter outperforms all other methods but negatively affects ECG wave detection, whereas the infinite impulse response filter is preferred for high input SBR. However, the parallelized wavelet filter is processed 500 and 4 times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 megasamples filtered as a single unit, wavelet filtering of a 7-day high-resolution ECG is computed in less than 3 seconds. Given this high filtering speed, the GPU wavelet filter is the most efficient method for removing baseline wander from long-term ECGs and strongly reduces the computational burden.
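For illustration, a hedged Python/NumPy sketch of the moving-median approach: estimate the baseline with a wide median window and subtract it. The window length and sampling rate are illustrative choices, not the optimized parameters from the paper:

```python
import numpy as np

def remove_baseline_median(ecg, fs, window_s=0.6):
    """Subtract a moving-median baseline estimate from the ECG."""
    half = int(window_s * fs) // 2
    baseline = np.array([
        np.median(ecg[max(0, i - half):i + half + 1])
        for i in range(len(ecg))
    ])
    return ecg - baseline

fs = 250                                     # Hz, illustrative sampling rate
t = np.arange(0, 10, 1 / fs)                 # 10 s synthetic test signal
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.05 * t)
clean = remove_baseline_median(ecg, fs)      # slow baseline component removed
```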
Abstract:
Arterial pressure-based cardiac output monitors (APCOs) are increasingly used as alternatives to thermodilution. Validation of these evolving technologies in high-risk surgery is still ongoing. In liver transplantation, FloTrac-Vigileo (Edwards Lifesciences) has limited correlation with thermodilution, whereas LiDCO Plus (LiDCO Ltd.) has not been tested intraoperatively. Our goal was to directly compare the 2 proprietary APCO algorithms as alternatives to pulmonary artery catheter thermodilution in orthotopic liver transplantation (OLT). The cardiac index (CI) was measured simultaneously in 20 OLT patients at prospectively defined surgical landmarks with the LiDCO Plus monitor (CI(L)) and the FloTrac-Vigileo monitor (CI(V)). LiDCO Plus was calibrated according to the manufacturer's instructions. FloTrac-Vigileo did not require calibration. The reference CI was derived from pulmonary artery catheter intermittent thermodilution (CI(TD)). CI(V)-CI(TD) bias ranged from -1.38 (95% confidence interval = -2.02 to -0.75 L/minute/m(2), P = 0.02) to -2.51 L/minute/m(2) (95% confidence interval = -3.36 to -1.65 L/minute/m(2), P < 0.001), and CI(L)-CI(TD) bias ranged from -0.65 (95% confidence interval = -1.29 to -0.01 L/minute/m(2), P = 0.047) to -1.48 L/minute/m(2) (95% confidence interval = -2.37 to -0.60 L/minute/m(2), P < 0.01). For both APCOs, bias to CI(TD) was correlated with the systemic vascular resistance index, with a stronger dependence for FloTrac-Vigileo. The capability of the APCOs for tracking changes in CI(TD) was assessed with a 4-quadrant plot for directional changes and with receiver operating characteristic curves for specificity and sensitivity. The performance of both APCOs was poor in detecting increases and fair in detecting decreases in CI(TD). In conclusion, the calibrated and uncalibrated APCOs perform differently during OLT. Although the calibrated APCO is less influenced by changes in the systemic vascular resistance, neither device can be used interchangeably with thermodilution to monitor cardiac output during liver transplantation.
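A hedged sketch of the bias statistic reported above (mean APCO-thermodilution difference with a normal-approximation 95% confidence interval); the cardiac index values are invented:

```python
import numpy as np

ci_apco = np.array([2.8, 3.1, 2.5, 3.4, 2.9, 3.0])  # L/minute/m^2, invented
ci_td   = np.array([4.0, 4.5, 3.9, 4.8, 4.2, 4.4])  # thermodilution reference

diff = ci_apco - ci_td
bias = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(len(diff))          # standard error of the bias
lo, hi = bias - 1.96 * sem, bias + 1.96 * sem        # normal approximation
print(f"bias = {bias:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}) L/minute/m^2")
```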
Abstract:
This paper presents methods based on information filters for solving matching problems, with emphasis on real-time, or effectively real-time, applications. Both applications discussed in this work deal with ultrasound-based rigid registration in computer-assisted orthopedic surgery. In the first application, the usual workflow of rigid registration is reformulated so that registration algorithms iterate while the surgeon acquires ultrasound images of the anatomy to be operated on. Using this effectively real-time approach to registration, the surgeon receives feedback to better gauge the quality of the final registration outcome. The second application circumvents the need to attach physical markers to bones for anatomical referencing. Experiments using anatomical objects immersed in water were performed to evaluate and compare the different methods presented herein, using both 2D and real-time 3D ultrasound.
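A minimal information-filter sketch (the inverse-covariance form of the Kalman filter), the general machinery this class of methods builds on: measurement updates are additive in information space, which suits incremental registration as new ultrasound images arrive. The 2D-translation state and all matrices are illustrative:

```python
import numpy as np

# State: 2D translation; information matrix Y = P^-1 and vector y = Y @ x.
Y = np.eye(2) * 1e-3            # weak prior information
y = np.zeros(2)

def update(Y, y, H, R, z):
    """Additive information-filter measurement update."""
    Ri = np.linalg.inv(R)
    return Y + H.T @ Ri @ H, y + H.T @ Ri @ z

H = np.eye(2)                   # direct observation of the translation
R = np.eye(2) * 0.25            # measurement noise covariance
for z in [np.array([1.1, -0.4]), np.array([0.9, -0.6])]:
    Y, y = update(Y, y, H, R, z)

x_hat = np.linalg.solve(Y, y)   # recover the state estimate on demand
print(x_hat)                    # ~ the mean of the two measurements
```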
Abstract:
Learning by reinforcement is important in shaping animal behavior, and in particular in behavioral decision making. Such decision making is likely to involve the integration of many synaptic events in space and time. However, when a single reinforcement signal modulates synaptic plasticity, as suggested in classical reinforcement learning algorithms, a twofold problem arises: different synapses will have contributed differently to the behavioral decision, and even for one and the same synapse, releases at different times may have had different effects. Here we present a plasticity rule which solves this spatio-temporal credit assignment problem in a population of spiking neurons. The learning rule is spike-time dependent and maximizes the expected reward by following its stochastic gradient. Synaptic plasticity is modulated not only by the reward but also by a population feedback signal. While this additional signal solves the spatial component of the problem, the temporal component is solved by synaptic eligibility traces. In contrast to temporal-difference (TD) approaches to reinforcement learning, our rule is explicit about the assumed biophysical mechanisms: neurotransmitter concentrations determine plasticity, and learning occurs fully online. Further, it works even if the task to be learned is non-Markovian, i.e. when reinforcement is not determined by the current state of the system alone but may also depend on past events. The performance of the model is assessed on three non-Markovian tasks. In the first task, the reward is delayed beyond the last action, with unrelated stimuli and actions appearing in between. The second task involves an action sequence which is itself extended in time, with reward delivered only at the last action, as in any board game. The third task is the inspection game studied in neuroeconomics, where an inspector tries to prevent a worker from shirking. Applying our algorithm to this game yields learning behavior consistent with behavioral data from humans and monkeys, itself revealing properties of a mixed Nash equilibrium. The examples show that our neuronal implementation of reward-based learning copes with delayed and stochastic reward delivery and with the learning of mixed strategies in two-opponent games.
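A toy sketch of the core mechanism: spike-time-dependent eligibility traces tag recently active synapses, and weights change only when the delayed reward arrives, modulated by the population feedback signal. Constants and dynamics are invented, not the paper's exact rule:

```python
import numpy as np

n_syn, T = 5, 100
rng = np.random.default_rng(0)
w = np.zeros(n_syn)                      # synaptic weights
e = np.zeros(n_syn)                      # eligibility traces
tau_e, lr = 20.0, 0.1                    # trace time constant, learning rate

for t in range(T):
    coincident = rng.random(n_syn) < 0.1 # pre/post coincidences (toy model)
    e += coincident.astype(float)        # tag recently active synapses
    e *= np.exp(-1.0 / tau_e)            # traces decay between time steps
    if t == 80:                          # reward arrives well after the actions
        reward, pop_feedback = 1.0, 0.5
        w += lr * reward * pop_feedback * e  # credit assigned via the traces
print(w)
```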
Abstract:
The Pulmonary Embolism Rule-out Criteria (PERC) rule is a clinical diagnostic rule designed to exclude pulmonary embolism (PE) without further testing. We sought to externally validate the diagnostic performance of the PERC rule alone and combined with clinical probability assessment based on the revised Geneva score.
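For reference, a sketch of the PERC rule as commonly stated (eight criteria; PE is ruled out without further testing only if all are met in a low-probability patient); the field names are illustrative:

```python
def perc_negative(p):
    """True if all eight PERC criteria are met (as commonly stated)."""
    return all([
        p["age"] < 50,
        p["heart_rate"] < 100,
        p["sao2_room_air"] >= 95,                # % on room air
        not p["hemoptysis"],
        not p["estrogen_use"],
        not p["prior_dvt_pe"],
        not p["unilateral_leg_swelling"],
        not p["recent_surgery_or_trauma"],       # within ~4 weeks
    ])

patient = dict(age=42, heart_rate=88, sao2_room_air=97, hemoptysis=False,
               estrogen_use=False, prior_dvt_pe=False,
               unilateral_leg_swelling=False, recent_surgery_or_trauma=False)
print(perc_negative(patient))  # True -> PE ruled out under the rule
```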
Abstract:
Interest in automatic volume meshing for finite element analysis (FEA) has grown since the advent of microfocus CT (μCT), whose high resolution allows the assessment of mechanical behaviour at high precision. Nevertheless, the basic meshing approach of generating one hexahedron per voxel produces jagged edges. To prevent this effect, smoothing algorithms have been introduced to enhance the topology of the mesh. However, whether smoothing also improves the accuracy of voxel-based meshes in clinical applications remains an open question. There is a trade-off between smoothing and the quality of the elements in the mesh: excessive smoothing may produce distorted elements that reduce the accuracy of the mesh. In the present work, the influence of smoothing on the accuracy of voxel-based meshes in micro-FE was assessed. An accurate 3D model of a trabecular structure with known apparent mechanical properties was used as a reference model. Virtual CT scans of this reference model (at resolutions of 16, 32 and 64 μm) were then created and used to build voxel-based meshes of the microarchitecture. The effects of smoothing on the apparent mechanical properties of the voxel-based meshes, compared to the reference model, were evaluated. Apparent Young's moduli of the smoothed voxel-based meshes were significantly closer to those of the reference model at the 16 and 32 μm resolutions. Improvements were not significant at 64 μm, due to loss of trabecular connectivity in the model. This study shows that smoothing offers a real benefit to voxel-based meshes used in micro-FE. It might also broaden voxel-based meshing to other biomechanical domains where it was previously not used due to lack of accuracy. As an example, this work will be used in the framework of the European project ContraCancrum, which aims to provide oncologists with patient-specific simulations of tumour development in the brain and lungs. For this type of clinical application, such fast, automatic and accurate mesh generation is of great benefit.
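A hedged sketch of the basic one-hexahedron-per-voxel meshing step described above (before any smoothing): every foreground voxel of a segmented volume becomes one 8-node hexahedral element, with corner nodes shared between neighbours. Purely illustrative:

```python
import numpy as np

def voxel_mesh(mask, voxel_size=1.0):
    """Return (nodes, elements) for a binary 3D mask, one hex per voxel."""
    nodes, node_id, elements = [], {}, []

    def nid(ijk):
        # Deduplicate nodes so adjacent voxels share corner nodes.
        if ijk not in node_id:
            node_id[ijk] = len(nodes)
            nodes.append(np.array(ijk, dtype=float) * voxel_size)
        return node_id[ijk]

    for i, j, k in zip(*np.nonzero(mask)):
        corners = [(i + a, j + b, k + c)
                   for c in (0, 1) for b in (0, 1) for a in (0, 1)]
        elements.append([nid(c) for c in corners])
    return np.array(nodes), np.array(elements)

mask = np.zeros((2, 2, 2), dtype=bool)
mask[0, 0, 0] = mask[1, 0, 0] = True     # two adjacent voxels share 4 nodes
nodes, elems = voxel_mesh(mask, voxel_size=16e-6)     # e.g. 16 um voxels
print(len(nodes), "nodes,", len(elems), "hexahedra")  # 12 nodes, 2 elements
```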