942 results for Modal interval analysis


Relevance:

30.00%

Publisher:

Abstract:

Objective: To determine the efficacy of cholinesterase inhibitors (ChEIs) in improving the behavioral and psychological symptoms of dementia (BPSD) in patients with Alzheimer's disease (AD). Data sources: We searched MEDLINE, the Cochrane Registry, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) from 1966 to 2007. We limited our search to English-language, full-text, published articles and human studies. Data extraction: We included randomized, double-blind, placebo-controlled trials evaluating the efficacy of donepezil, rivastigmine, or galantamine in managing BPSD displayed by AD patients. Using the United States Preventive Services Task Force (USPSTF) guidelines, we critically appraised all studies and included only those with an attrition rate of less than 40%, concealed measurement of the outcomes, and intention-to-treat analysis of the collected data. All data were entered into pre-defined evidence-based tables and were pooled using the Review Manager 4.2.1 software for data synthesis. Results: We found 12 studies that met our inclusion criteria, but only nine of them provided sufficient data for the meta-analysis. Among patients with mild to severe AD, and in comparison to placebo, ChEIs as a class had a beneficial effect on reducing BPSD, with a standardized mean difference (SMD) of -0.10 (95% confidence interval [CI]: -0.18, -0.01) and a weighted mean difference (WMD) of -1.38 Neuropsychiatric Inventory points (95% CI: -2.30, -0.46). In studies with mild AD patients, the WMD was -1.92 (95% CI: -3.18, -0.66); and in studies with severe AD patients, the WMD was -0.06 (95% CI: -2.12, +0.57). Conclusion: Cholinesterase inhibitors lead to a statistically significant reduction in BPSD among patients with AD, yet the clinical relevance of this effect remains unclear.
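The pooling of per-study mean differences described above can be sketched with a standard inverse-variance fixed-effect calculation. The study numbers below are hypothetical, not the trial data from this abstract, and the original pooling was done in Review Manager:

```python
import math

def pool_fixed_effect(estimates):
    """Inverse-variance fixed-effect pooling.

    estimates: list of (mean_difference, standard_error) per study.
    Returns (pooled_estimate, ci_lower, ci_upper) at the 95% level.
    """
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * md for (md, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical per-study NPI mean differences and standard errors
studies = [(-1.5, 0.8), (-2.0, 1.0), (-0.5, 0.6)]
est, lo, hi = pool_fixed_effect(studies)
```

Studies with smaller standard errors receive larger weights, which is why a single large, precise trial can dominate the pooled WMD.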

Relevance:

30.00%

Publisher:

Abstract:

Background - Several antipsychotic agents are known to prolong the QT interval in a dose-dependent manner. A corrected QT interval (QTc) exceeding a threshold value of 450 ms may be associated with an increased risk of life-threatening arrhythmias. Antipsychotic agents are often given in combination with other psychotropic drugs, such as antidepressants, that may also contribute to QT prolongation. This observational study compares the effects on the QT interval of antipsychotic monotherapy and psychoactive polytherapy, which included an additional antidepressant or lithium treatment. Method - We examined two groups of hospitalized women with schizophrenia, bipolar disorder, or schizoaffective disorder in a naturalistic setting. Group 1 comprised nineteen hospitalized women treated with antipsychotic monotherapy (haloperidol, olanzapine, risperidone, or clozapine), and Group 2 comprised nineteen hospitalized women treated with an antipsychotic (haloperidol, olanzapine, risperidone, or quetiapine) plus an additional antidepressant (citalopram, escitalopram, sertraline, paroxetine, fluvoxamine, mirtazapine, venlafaxine, or clomipramine) or lithium. An electrocardiogram (ECG) was recorded for both groups before the beginning of treatment and again after four days of therapy at full dosage, when blood was also drawn for determination of serum levels of the antipsychotic. Statistical analysis included repeated-measures ANOVA, the Fisher exact test, and the independent t-test. Results - Mean QTc intervals increased significantly in Group 2 (24 ± 21 ms), whereas this was not the case in Group 1 (-1 ± 30 ms) (repeated-measures ANOVA, p < 0.01). Furthermore, we found a significant difference between the two groups in the number of patients who exceeded the borderline QTc threshold of 450 ms, with seven patients in Group 2 (38%) compared to one patient in Group 1 (7%) (Fisher exact test, p < 0.05).
Conclusions - No significant prolongation of the QT interval was found following monotherapy with an antipsychotic agent, while the combination of these drugs with antidepressants caused significant QT prolongation. Careful monitoring of the QT interval is suggested in patients taking a combined treatment of antipsychotic and antidepressant agents.
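The QTc values discussed above are obtained by applying a heart-rate correction to the measured QT interval; a common choice is Bazett's formula, though the abstract does not state which correction this study used:

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTc = QT / sqrt(RR), QT in ms, RR interval in s."""
    return qt_ms / (rr_s ** 0.5)

# Heart rate of 75 bpm gives RR = 60/75 = 0.8 s
qtc = qtc_bazett(400, 60 / 75)
exceeds_borderline = qtc > 450  # the 450 ms threshold used in the abstract
```

Bazett's correction is known to over-correct at high heart rates, which is one reason a fixed 450 ms threshold is treated as borderline rather than diagnostic.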

Relevance:

30.00%

Publisher:

Abstract:

The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. In conventional DEA, all input and/or output ratio data assume the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
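For context, the relative efficiency of a DMU in conventional (crisp) DEA can be computed with a small linear program. This is a minimal input-oriented CCR sketch using SciPy, not the interval-ratio models proposed in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (n_dmus, n_inputs) input matrix; Y: (n_dmus, n_outputs) output matrix.
    Solves: min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                             sum_j lam_j * y_j >= y_o,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimize theta
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])  # inputs scaled by theta
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # outputs at least y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Two DMUs, one input, one output: DMU 0 uses twice the input for the same output
X = np.array([[2.0], [1.0]])
Y = np.array([[1.0], [1.0]])
theta0 = ccr_efficiency(X, Y, 0)
theta1 = ccr_efficiency(X, Y, 1)
```

With interval data, each of X and Y would be replaced by lower/upper bound matrices and the LP solved for best- and worst-case efficiency, which is the idea the paper's models formalize.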

Relevance:

30.00%

Publisher:

Abstract:

Although crisp data are fundamentally indispensable for determining the profit Malmquist productivity index (MPI), the observed values in real-world problems are often imprecise or vague. These imprecise or vague data can be suitably characterized with fuzzy and interval methods. In this paper, we reformulate the conventional profit MPI problem as an imprecise data envelopment analysis (DEA) problem, and propose two novel methods for measuring the overall profit MPI when the inputs, outputs, and price vectors are fuzzy or vary in intervals. We develop a fuzzy version of the conventional MPI model by using a ranking method, and solve the model with a commercial off-the-shelf DEA software package. In addition, we define an interval for the overall profit MPI of each decision-making unit (DMU) and divide the DMUs into six groups according to the intervals obtained for their overall profit efficiency and MPIs. We also present two numerical examples to demonstrate the applicability of the two proposed models and exhibit the efficacy of the procedures and algorithms. © 2011 Elsevier Ltd.
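The conventional Malmquist productivity index underlying the profit variants above is the geometric mean of two cross-period efficiency ratios. A minimal sketch with illustrative efficiency scores (not the paper's numerical examples):

```python
import math

def malmquist_index(e_t_t, e_t_t1, e_t1_t, e_t1_t1):
    """Geometric-mean Malmquist index from four efficiency scores.

    e_a_b reads: efficiency of period-b data against the period-a frontier.
    MPI > 1 indicates productivity growth between periods t and t+1.
    """
    return math.sqrt((e_t_t1 / e_t_t) * (e_t1_t1 / e_t1_t))

# Illustrative scores for one DMU
mpi = malmquist_index(0.80, 0.90, 0.70, 0.85)
```

In the paper's interval setting, each of the four scores becomes an interval, so the MPI itself is bounded below and above, which is what permits grouping DMUs by their MPI intervals.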

Relevance:

30.00%

Publisher:

Abstract:

Objective: The purpose of this study was to examine the effectiveness of a new analysis method for mfVEP objective perimetry in the early detection of glaucomatous visual field defects, compared to the gold standard technique. Methods and patients: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey Field Analyzer 24-2 visual field tests and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P < 0.001, with 95% confidence intervals of 2.82-2.89 for the normal group, 2.25-2.29 for the glaucoma suspect group, and 1.67-1.73 for the glaucoma group). The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patient group (t-test, P < 0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test, P < 0.01), and in the normal group only 1/11 pairs was statistically significant (t-test, P < 0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma were 97% and 86%, respectively, and 89% and 79% in glaucoma suspects. These results showed that the new analysis protocol was able to confirm existing visual field defects detected by standard perimetry, was able to differentiate between the three study groups with a clear distinction between normal patients and those with suspected glaucoma, and was able to detect early visual field changes not detected by standard perimetry.
In addition, the distinction between normal and glaucoma patients was especially clear and significant using this analysis. Conclusion: The new hemifield sector analysis protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. This protocol provides information about focal visual field differences across the horizontal midline, which can be used to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes in glaucomatous visual field loss. The intersector analysis protocol can detect early field changes not detected by the standard Humphrey Field Analyzer test. © 2013 Mousa et al, publisher and licensee Dove Medical Press Ltd.
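The sensitivity and specificity figures quoted above follow the usual confusion-matrix definitions. The counts below are chosen for illustration to roughly reproduce the reported 97%/86%, and are not taken from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts for 36 glaucoma eyes and 38 normal eyes
sens, spec = sensitivity_specificity(35, 1, 33, 5)
```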

Relevance:

30.00%

Publisher:

Abstract:

Aim: To use previously validated image analysis techniques to determine the incremental nature of printed subjective anterior eye grading scales. Methods: A purpose-designed computer program was written to detect edges using a 3 × 3 kernel and to extract colour planes in the selected area of an image. Annunziato and Efron pictorial, and CCLRU and Vistakon-Synoptik photographic, grades of bulbar hyperaemia, palpebral hyperaemia, palpebral roughness, and corneal staining were analysed. Results: The increments of the grading scales were best described by a quadratic rather than a linear function. Edge detection and colour extraction image analysis for bulbar hyperaemia (r2 = 0.35-0.99), palpebral hyperaemia (r2 = 0.71-0.99), palpebral roughness (r2 = 0.30-0.94), and corneal staining (r2 = 0.57-0.99) correlated well with scale grades, although the increments varied in magnitude and direction between different scales. Repeated image analysis measures had a 95% confidence interval of between 0.02 (colour extraction) and 0.10 (edge detection) scale units (on a 0-4 scale). Conclusion: The printed grading scales were more sensitive for grading features of low severity, but grades were not comparable between grading scales. Palpebral hyperaemia and staining grading is complicated by the variable presentations possible. Image analysis techniques are 6-35 times more repeatable than subjective grading, with a sensitivity of 1.2-2.8% of the scale.
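Edge detection with a 3 × 3 kernel, as used by the purpose-designed program above, can be sketched as a small 2-D sliding-window operation. The Sobel-style kernel here is one common choice; the paper does not specify which kernel was used:

```python
import numpy as np

def convolve3x3(image, kernel):
    """Valid-mode 2-D correlation of a grayscale image with a 3x3 kernel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

# Sobel-style horizontal-gradient kernel (one common 3x3 edge detector)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
img = np.zeros((5, 5))
img[:, 3:] = 1.0            # a vertical step edge
edges = convolve3x3(img, sobel_x)
```

The response is zero over flat regions and peaks along the step edge, which is the property exploited when quantifying vessel edges in hyperaemia images.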

Relevance:

30.00%

Publisher:

Abstract:

Consideration of the influence of test technique and data analysis method is important for data comparison and design purposes. The paper highlights the effects of replication interval, crack growth rate averaging and curve-fitting procedures on crack growth rate results for a Ni-base alloy. It is shown that an upper bound crack growth rate line is not appropriate for use in fatigue design, and that the derivative of a quadratic fit to the a vs N data looks promising. However, this type of averaging, or curve fitting, is not useful in developing an understanding of microstructure/crack tip interactions. For this purpose, simple replica-to-replica growth rate calculations are preferable. © 1988.
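The "derivative of a quadratic fit to the a vs N data" approach mentioned above can be sketched as follows, with hypothetical crack-length data (a in mm, N in kilocycles), not the alloy data from the paper:

```python
import numpy as np

# Hypothetical crack length a (mm) vs cycles N (kilocycles)
N = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
a = np.array([1.0, 1.3, 1.8, 2.5, 3.4])

coeffs = np.polyfit(N, a, 2)                 # quadratic fit a(N)
da_dN = np.polyval(np.polyder(coeffs), N)    # growth rate da/dN at each N
```

The fitted derivative smooths replica-to-replica scatter, which is exactly why it suits design curves but hides the local crack-tip/microstructure interactions the paper cautions about.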

Relevance:

30.00%

Publisher:

Abstract:

Many studies have accounted for whole-body vibration effects in the fields of exercise physiology, sport, and rehabilitation medicine. Generally, surface EMG (sEMG) is used to assess muscular activity during the treatment; however, large motion artifacts appear superimposed on the raw signal, making sEMG recordings unsuitable without artifact filtering. Sharp notch filters, centered at the vibration frequency and at its higher harmonics, have been used in previous studies to remove the artifacts [6, 10]. However, removing those artifacts also discards some of the true EMG signal. The purpose of this study was to reproduce the effect of motor-unit (MU) synchronization on a simulated surface EMG during vibratory stimulation, and to evaluate the percentage of EMG power in the bands where motion-artifact components are typically located. Model characteristics were defined to take into account two main aspects: the discharge behavior of the muscle MUs and the triggering effects that appear during local vibratory stimulation [7]. The inter-pulse interval was characterized by a polymodal distribution related to the MU discharge frequency (IPI 55-80 ms, σ = 12 ms) and correlated with the vibration period within a range of ±2 ms due to the vibration stimulus [1, 7]. The signals were simulated using different stimulation frequencies from 30 to 70 Hz. The percentage of total simulated EMG power within narrow bands centered at the stimulation frequency and its higher harmonics (±1 Hz) was on average about 8% (±2.85) of the total EMG power. However, the artifact in those bands may contain more than 40% of the total signal power [6]. Our preliminary results suggest that analysis of muscular activity based on raw sEMG recordings and RMS evaluation, if not processed during vibratory stimulation, may lead to a serious overestimation of the muscular response.
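The power percentage within narrow bands around the stimulation frequency and its harmonics can be estimated from a spectrum as sketched below. White noise stands in for the simulated sEMG, and the sampling parameters are assumptions:

```python
import numpy as np

fs = 1000.0                    # sampling rate (Hz), assumed
f_stim = 50.0                  # stimulation frequency (Hz), within the 30-70 Hz range
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
emg = rng.standard_normal(t.size)   # white-noise stand-in for simulated sEMG

spectrum = np.abs(np.fft.rfft(emg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# power within +/- 1 Hz of f_stim and each of its higher harmonics
mask = np.zeros_like(freqs, dtype=bool)
for h in range(1, int(fs / 2 / f_stim) + 1):
    mask |= np.abs(freqs - h * f_stim) <= 1.0
fraction = spectrum[mask].sum() / spectrum.sum()
```

For a broadband signal only a few percent of the power falls in these narrow bands, so notch-filtering them removes little true EMG; the study's point is that the artifact concentrated there can dwarf that share.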

Relevance:

30.00%

Publisher:

Abstract:

A new LIBS quantitative analysis method based on adaptive analytical line selection and a relevance vector machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength, and width at half height. The analytical lines to be used as input variables of the regression model are determined adaptively according to the samples used for both training and testing. Second, an LIBS quantitative analysis method based on the RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments were carried out on 23 certified standard high-alloy steel samples. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks, and standard support vector machines.
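scikit-learn does not ship an RVM, but its ARDRegression is a related sparse Bayesian regressor that also yields predictive uncertainty, which is the property the abstract highlights. The sketch below uses synthetic stand-in data, not LIBS spectra:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(40, 3))   # stand-ins for analytical-line intensities
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.01, size=40)  # "concentration"

model = ARDRegression().fit(X, y)
mean, std = model.predict(X[:5], return_std=True)   # predictive uncertainty
lo, hi = mean - 1.96 * std, mean + 1.96 * std       # ~95% predictive interval
```

Reporting `(lo, hi)` rather than a point estimate mirrors the paper's confidence-interval output for predicted concentrations.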

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To assess the validity and repeatability of objective compared to subjective contact lens fit analysis. Methods: Thirty-five subjects (aged 22.0 ± 3.0 years) wore two different soft contact lens designs. Four lens fit variables: centration, horizontal lag, post-blink movement in up-gaze, and push-up recovery speed, were assessed subjectively (four observers) and objectively from captured slit-lamp biomicroscopy images and video. The analysis was repeated a week later. Results: The average of the four experienced observers was compared to objective measures, but centration, movement on blink, lag, and push-up recovery speed all varied significantly between them (p < 0.001). Horizontal lens centration was on average close to central as assessed both objectively and subjectively (p > 0.05). The 95% confidence interval of subjective repeatability was better than objective assessment (±0.128 mm versus ±0.168 mm, p = 0.417), but utilised only 78% of the objective range. Vertical centration assessed objectively showed a slight inferior decentration (0.371 ± 0.381 mm) with good inter- and intrasession repeatability (p > 0.05). Movement-on-blink was estimated lower subjectively than measured objectively (0.269 ± 0.179 mm versus 0.352 ± 0.355 mm; p = 0.035), but had better repeatability (±0.124 mm versus ±0.314 mm 95% confidence interval) unless correcting for the smaller range (47%). Horizontal lag was estimated lower subjectively (0.562 ± 0.259 mm) than measured objectively (0.708 ± 0.374 mm, p < 0.001), had poorer repeatability (±0.132 mm versus ±0.089 mm 95% confidence interval), and had a smaller range (63%). Subjective categorisation of push-up speed of recovery showed reasonable differentiation relative to objective measurement (p < 0.001).
Conclusions: Objective image analysis allows an accurate, reliable and repeatable assessment of soft contact lens fit characteristics, making it a useful tool for research and for optimisation of lens fit in clinical practice.
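The ±95% confidence intervals of repeatability quoted above correspond to 1.96 times the standard deviation of the test-retest differences. A minimal sketch with hypothetical centration measurements, not the study data:

```python
import numpy as np

def repeatability_ci95(first, second):
    """95% repeatability limits: 1.96 * SD of the test-retest differences."""
    d = np.asarray(first) - np.asarray(second)
    return 1.96 * d.std(ddof=1)

# Hypothetical repeated centration measurements in mm
session1 = np.array([0.10, 0.05, -0.02, 0.08, 0.00])
session2 = np.array([0.12, 0.02, 0.01, 0.05, -0.03])
ci95 = repeatability_ci95(session1, session2)
```

Because the statistic scales with the measurement range, the abstract's comparisons also normalise by the fraction of the scale each method actually used.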

Relevance:

30.00%

Publisher:

Abstract:

CONCLUSIONS: The new HSA protocol used in mfVEP testing can be applied to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes in glaucomatous field loss. PURPOSE: Multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard automated perimetry (SAP) visual field assessment, while others were not very informative and needed further adjustment and research. In this study we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey Field Analyzer (HFA) 24-2 tests and a single mfVEP test in one session. Analysis of the mfVEP results was done using the new analysis protocol, the hemifield sector analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the three groups in the mean signal-to-noise ratio (ANOVA test, p < 0.001 with a 95% confidence interval).
The differences between superior and inferior hemifields were statistically significant in all 11 sectors in the glaucoma patient group (t-test, p < 0.001), partially significant in the glaucoma suspect group (5/11 sectors, t-test, p < 0.01), and not significant in most sectors of the normal group (1/11 sectors significant, t-test, p < 0.9). The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86%, respectively, and for glaucoma suspect patients the values were 89% and 79%, respectively.

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 35Q02, 35Q05, 35Q10, 35B40.

Relevance:

30.00%

Publisher:

Abstract:

Federal transportation legislation in effect since 1991 was examined to determine outcomes in two areas: (1) the effect of organizational and fiscal structures on the implementation of multimodal transportation infrastructure, and (2) the effect of multimodal transportation infrastructure on sustainability. Triangulation of methods was employed through qualitative analysis (including key informant interviews, focus groups, and case studies) as well as quantitative analysis (including one-sample t-tests, regression analysis, and factor analysis). Four hypotheses were directly tested: (1) Regions with consolidated government structures will build more multimodal transportation miles: the results of the qualitative analysis do not lend support, while the quantitative findings support this hypothesis, possibly due to differences in the definitions of agencies/jurisdictions between the two methods. (2) Regions in which more locally dedicated or flexed funding is applied to the transportation system will build a greater number of multimodal transportation miles: both quantitative and qualitative research clearly support this hypothesis. (3) Cooperation and coordination, or, conversely, competition will determine the number of multimodal transportation miles: participants tended to agree that cooperation, coordination, and leadership are imperative to achieving transportation goals and objectives, including targeted multimodal miles, but also stressed the importance of political and financial elements in determining what ultimately will be funded and implemented. (4) The modal outcomes of transportation systems will affect the overall health of a region in terms of sustainability/quality-of-life indicators: both the qualitative and the quantitative analyses provide evidence that they do.
This study finds that federal legislation has had an effect on the modal outcomes of transportation infrastructure and that there are links between these modal outcomes and the sustainability of a region. It is recommended that agencies further consider consolidation and strengthen cooperation efforts, and that fiscal regulations be modified to reflect the problems cited in the qualitative analysis. Limitations of this legislation especially include the inability to measure sustainability; several measures are recommended.
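One of the quantitative methods named above, the one-sample t-test, can be sketched with SciPy on illustrative (invented) regional mileage data:

```python
from scipy import stats

# Hypothetical multimodal miles built per region (illustrative only)
miles = [12.1, 9.8, 15.3, 11.0, 13.7, 10.4]
# H0: the mean equals 10 miles
t_stat, p_value = stats.ttest_1samp(miles, popmean=10.0)
```

A p-value above 0.05 here would mean the sample mean is not distinguishable from the benchmark at the conventional significance level, illustrating why such tests are paired with regression and factor analysis in the study's triangulation.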

Relevance:

30.00%

Publisher:

Abstract:

Accurate knowledge of the time since death, or postmortem interval (PMI), has enormous legal, criminological, and psychological impact. In this study, an investigation was made to determine whether the relationship between the degradation of the human cardiac structural protein Cardiac Troponin T and PMI could be used as an indicator of time since death, thus providing a rapid, high-resolution, sensitive, and automated methodology for the determination of PMI. The use of Cardiac Troponin T (cTnT), a protein found in heart tissue, as a selective marker for cardiac muscle damage has shown great promise in the determination of PMI. An optimized conventional immunoassay method was developed to quantify intact and fragmented cTnT. A small sample of cardiac tissue, which is less affected than other tissues by external factors, was taken, homogenized, extracted with magnetic microparticles, separated by SDS-PAGE, and visualized by Western blot, probing with a monoclonal antibody against cTnT, followed by labeling and scanning with available scanners. This conventional immunoassay provides proper detection and quantitation of the cTnT protein in cardiac tissue as a complex matrix; however, it does not provide the analyst with immediate results. Therefore, a competitive separation method using capillary electrophoresis with laser-induced fluorescence (CE-LIF) was developed to study the interaction between the human cTnT protein and a monoclonal anti-Troponin T antibody. Analysis of the results revealed a linear relationship between the percentage of degraded cTnT and the log of the PMI, indicating that intact cTnT could be detected in human heart tissue up to 10 days postmortem at room temperature and beyond two weeks at 4 °C. The data presented demonstrate that this technique can provide an extended time range during which the PMI can be more accurately estimated compared with currently used methods.
The data demonstrate that this technique represents a major advance in time-of-death determination through a fast, reliable, semi-quantitative measurement of a biochemical marker from an organ protected from outside factors.
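The reported linear relationship between percent degraded cTnT and log(PMI) lends itself to a simple calibration-and-inversion sketch; the calibration points below are invented for illustration:

```python
import numpy as np

# Hypothetical calibration: percent degraded cTnT vs log10(PMI in hours)
log_pmi = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
pct_degraded = np.array([12.0, 30.0, 48.0, 66.0, 84.0])

slope, intercept = np.polyfit(log_pmi, pct_degraded, 1)

def estimate_pmi_hours(pct):
    """Invert the calibration line: PMI = 10 ** ((pct - intercept) / slope)."""
    return 10 ** ((pct - intercept) / slope)

pmi_48 = estimate_pmi_hours(48.0)   # estimated hours at 48% degradation
```

Because the fit is in log time, uncertainty in the degradation measurement translates into multiplicative, not additive, uncertainty in the estimated PMI.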

Relevance:

30.00%

Publisher:

Abstract:

This research focuses on the design and verification of inter-organizational controls. Instead of looking at a documentary procedure, which is the flow of documents and data among the parties, the research examines the underlying deontic purpose of the procedure, the so-called deontic process, and identifies control requirements to secure this purpose. The vision of the research is a formal theory for streamlining bureaucracy in business and government procedures. Underpinning most inter-organizational procedures are deontic relations, which concern the rights and obligations of the parties. When all parties trust each other, they are willing to fulfill their obligations and honor the counterparties' rights; thus controls may not be needed. The challenge lies in cases where trust cannot be assumed. In these cases, the parties need to rely on explicit controls to reduce their exposure to the risk of opportunism. However, at present there is no analytic approach or technique to determine which controls are needed for a given contracting or governance situation. The research proposes a formal method for deriving inter-organizational control requirements based on static analysis of deontic relations and dynamic analysis of deontic changes. The formal method takes a deontic process model of an inter-organizational transaction and certain domain knowledge as inputs and automatically generates the control requirements that a documentary procedure must satisfy in order to limit fraud potential. The deliverables of the research include a formal representation, namely Deontic Petri Nets, that combines multiple modal logics and Petri nets for modeling deontic processes; a set of control principles that represent an initial formal theory of the relationships between deontic processes and documentary procedures; and a working prototype that uses a model-checking technique to identify fraud potentials in a deontic process and generate control requirements to limit them.
Fourteen scenarios from two well-known international payment procedures, cash in advance and documentary credit, were used to test the prototype. The results showed that all control requirements stipulated in these procedures could be derived automatically.
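A deontic process over a Petri net can be sketched minimally as token flow between places. The places and the "payment creates an obligation to ship" rule below are invented for illustration and are not the paper's Deontic Petri Net formalism:

```python
def enabled(marking, pre):
    """A transition is enabled when every pre-place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume pre-tokens, produce post-tokens; returns the new marking."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m.get(p, 0) - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Invented deontic rule: the buyer's payment creates a seller obligation to ship
m0 = {"buyer_paid": 1}
pre = {"buyer_paid": 1}
post = {"seller_obliged_to_ship": 1}
m1 = fire(m0, pre, post) if enabled(m0, pre) else m0
```

Model checking over such nets explores all reachable markings and flags states where an obligation token can be discharged without the corresponding right being honored, which is the kind of fraud potential the prototype detects.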