767 results for cost utility analysis
Abstract:
This study provides a detailed evaluation and cost analysis of cranial contrast-enhanced MRI (c-ceMRI) in outpatients, inpatients, intensive care unit patients, and children under anesthesia.
Abstract:
The aim of this study was to investigate treatment failure (TF) in hospitalised community-acquired pneumonia (CAP) patients with regard to initial antibiotic treatment and economic impact. CAP patients were included in two open, prospective multicentre studies assessing the direct costs of in-patient treatment. Patients received either moxifloxacin (MFX) or a nonstandardised antibiotic therapy. Any change to a broader-spectrum antibiotic therapy after >72 h of treatment was considered TF. Overall, 1,236 patients (mean ± SD age 69.6 ± 16.8 yrs; 691 (55.9%) male) were included. TF occurred in 197 (15.9%) subjects and led to a longer hospital stay (15.4 ± 7.3 days versus 9.8 ± 4.2 days; p < 0.001) and increased median treatment costs (€2,206 versus €1,284; p < 0.001). In total, 596 (48.2%) patients received MFX and experienced TF less often (10.9% versus 20.6%; p < 0.001). After controlling for confounders in multivariate analysis, the adjusted risk of TF was clearly reduced with MFX compared with β-lactam monotherapy (adjusted OR for MFX 0.43, 95% CI 0.27-0.68) and was comparable with that of a β-lactam plus macrolide combination (BLM) (OR 0.68, 95% CI 0.38-1.21). In hospitalised CAP, TF is frequent and leads to prolonged hospital stay and increased treatment costs. Initial treatment with MFX or BLM is a possible strategy to prevent TF, and may thus reduce treatment costs.
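Adjusted odds ratios like those reported above are typically obtained from multivariate logistic regression, where OR = exp(β). A minimal sketch of that calculation; the file and column names (tf, mfx, age, male) are hypothetical, since the abstract does not list the confounders used:

```python
# Sketch: adjusted odds ratio for treatment failure via logistic regression.
# Column names are illustrative placeholders, not the study's actual data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cap_patients.csv")             # hypothetical input file

X = sm.add_constant(df[["mfx", "age", "male"]])  # treatment + confounders
model = sm.Logit(df["tf"], X).fit()              # tf: 1 = treatment failure

odds_ratios = np.exp(model.params)               # OR = exp(beta)
conf_int = np.exp(model.conf_int())              # 95% CI on the OR scale
print(odds_ratios["mfx"], conf_int.loc["mfx"].values)
```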
Abstract:
Several studies have shown that treatment with HMG-CoA reductase inhibitors (statins) can reduce coronary heart disease (CHD) rates. However, the cost effectiveness of statin treatment in the primary prevention of CHD has not been fully established.
Abstract:
The electric utility business is an inherently dangerous field, with employees exposed to many potential hazards daily. One such hazard is an arc flash: a rapid release of energy, referred to as incident energy, caused by an electric arc. Because an arc flash is random in nature and occurrence, one can only prepare for such a violent event and minimize the harm to oneself and other employees and the damage to equipment. Effective January 1, 2009, the National Electrical Safety Code (NESC) requires that companies whose employees work on or near energized equipment perform an arc-flash assessment to determine the potential exposure to an electric arc. To comply with the NESC requirement, Minnesota Power's (MP's) current short-circuit and relay-coordination software package, ASPEN OneLiner™, one of the first software packages to implement an arc-flash module, is used to conduct an arc-flash hazard analysis. At the same time, the package is benchmarked against the equations provided in IEEE Std. 1584-2002 and ultimately used to determine the incident energy levels on the MP transmission system. This report covers the history of arc-flash hazards; analysis methods, both software-based and empirically derived equations; issues of concern with the calculation methods; and the work conducted at MP. This work also produced two offline software products for conducting and verifying an offline arc-flash hazard analysis.
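For orientation, a sketch of the IEEE Std. 1584-2002 empirical incident-energy model as it is commonly cited. The coefficients and the distance exponent x should be verified against the standard itself before any compliance use; this is illustrative only, not MP's implementation:

```python
import math

def incident_energy_1584(i_bf_ka, v_kv, gap_mm, t_s, d_mm, x,
                         box=True, grounded=True):
    """Sketch of the IEEE Std. 1584-2002 empirical model (coefficients as
    commonly cited; verify against the standard before compliance use).

    i_bf_ka : bolted fault current (kA)
    v_kv    : system voltage (kV)
    gap_mm  : conductor gap (mm)
    t_s     : arc duration (s)
    d_mm    : working distance (mm)
    x       : distance exponent for the equipment class (from the standard)
    """
    # Predicted arcing current (kA); separate fit below 1 kV.
    if v_kv < 1.0:
        k = -0.097 if box else -0.153
        lg_ia = (k + 0.662 * math.log10(i_bf_ka) + 0.0966 * v_kv
                 + 0.000526 * gap_mm
                 + 0.5588 * v_kv * math.log10(i_bf_ka)
                 - 0.00304 * gap_mm * math.log10(i_bf_ka))
    else:
        lg_ia = 0.00402 + 0.983 * math.log10(i_bf_ka)
    i_a = 10 ** lg_ia

    # Incident energy normalized to a 0.2 s arc at 610 mm.
    k1 = -0.555 if box else -0.792
    k2 = -0.113 if grounded else 0.0
    e_n = 10 ** (k1 + k2 + 1.081 * math.log10(i_a) + 0.0011 * gap_mm)

    # Rescale to the actual arc duration and working distance (J/cm^2).
    c_f = 1.0 if v_kv > 1.0 else 1.5
    return 4.184 * c_f * e_n * (t_s / 0.2) * (610.0 ** x / d_mm ** x)
```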
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for subspace decomposition and polynomial rooting, tasks traditionally implemented with sequential algorithms. The proposed algorithms address the emerging need for real-time localization across a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms grows, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that offer considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottleneck to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), all of which offer a large number of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, bringing performance closer to real time. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable frequency of operation, and the system was developed with the objective of achieving high throughput. Various modern cores available in FPGAs were used to maximize performance, and these modules are described in detail. Finally, a parallel polynomial rooting technique based on Newton's method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the root-MUSIC polynomial's complex dynamics were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired root within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time the complex dynamics of the root-MUSIC polynomial have been analyzed to propose such an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
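The two bottleneck steps fit together as in standard root-MUSIC. A compact sequential NumPy reference version, useful as a baseline against which the thesis's parallel designs can be compared; the uniform linear array geometry and half-wavelength spacing are illustrative assumptions:

```python
import numpy as np

def root_music(snapshots, n_sources, d_over_lambda=0.5):
    """Reference (sequential) root-MUSIC for a uniform linear array.

    snapshots : (n_antennas, n_samples) complex array of received data
    Returns DOA estimates in degrees.
    """
    m, n = snapshots.shape
    r = snapshots @ snapshots.conj().T / n          # sample covariance

    # Subspace decomposition (the SVD/EVD bottleneck discussed above).
    _, vecs = np.linalg.eigh(r)                     # eigenvalues ascending
    en = vecs[:, : m - n_sources]                   # noise subspace

    # Root-MUSIC polynomial from C = En En^H: coefficients are sums of
    # the diagonals of C, ordered from highest to lowest power.
    c = en @ en.conj().T
    coeffs = np.array([np.trace(c, offset=k) for k in range(m - 1, -m, -1)])

    # Polynomial rooting (the second bottleneck); keep the roots inside
    # and closest to the unit circle, one per source.
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n_sources]

    # Root phase 2*pi*(d/lambda)*sin(theta) maps back to the DOA.
    return np.degrees(np.arcsin(np.angle(roots) / (2.0 * np.pi * d_over_lambda)))
```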
Abstract:
This afternoon you will be working on descriptive statistics, such as the total number of discharges in the state of Montana for a given Diagnosis Related Group (DRG), the average payment for a given DRG, and the range of payments for a given DRG. We will also formulate and answer a statistical question, such as whether there is a relationship between the size of a hospital and the average payment for a given DRG.
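A sketch of the exercise in pandas; the file name and column names are placeholders, not the actual Montana discharge dataset's schema:

```python
import pandas as pd

# File and column names are illustrative placeholders.
df = pd.read_csv("montana_inpatient_charges.csv")

drg = "470 - MAJOR JOINT REPLACEMENT"          # example DRG
subset = df[df["drg_definition"] == drg]

total_discharges = subset["total_discharges"].sum()      # total for the DRG
average_payment = subset["average_total_payments"].mean()
payment_range = (subset["average_total_payments"].max()
                 - subset["average_total_payments"].min())

# The statistical question: hospital size vs. average payment for the DRG,
# using discharge volume as a simple proxy for hospital size.
correlation = subset["total_discharges"].corr(subset["average_total_payments"])
print(total_discharges, average_payment, payment_range, correlation)
```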
Abstract:
OBJECTIVE: To compare the costs of function- and pain-centred inpatient treatment in patients with chronic low back pain over 3 years of follow-up. DESIGN: Cost analysis of a randomized controlled trial. PATIENTS: A total of 174 patients with chronic low back pain were randomized to function- or pain-centred inpatient treatment. METHODS: Data on direct and indirect costs were gathered by questionnaires sent to patients, health insurance providers, employers, and the Swiss Disability Insurance Company. RESULTS: There was a non-significant difference in total medical costs after 3 years of follow-up. Total costs were 77,305 Euros in the function-centred inpatient treatment group and 83,085 Euros in the pain-centred inpatient treatment group. Likewise, indirect costs from lost work days after 3 years were non-significantly lower in the function-centred inpatient treatment group (6,354 Euros; 95% confidence interval −20,892 to 8,392), and direct medical costs were non-significantly higher in the function-centred inpatient treatment group (574 Euros; 95% confidence interval −862 to 2,011). CONCLUSION: The total costs of function-centred and pain-centred inpatient treatment were similar over the whole 3-year follow-up.
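For readers unfamiliar with how such cost-difference confidence intervals arise, a minimal sketch using a Welch t-interval on two independent cost samples; the trial's actual estimator may well differ (skewed cost data are often bootstrapped instead):

```python
import numpy as np
from scipy import stats

def mean_diff_ci(costs_a, costs_b, level=0.95):
    """CI for the difference in mean costs (Welch approximation).
    A sketch only; cost analyses often bootstrap skewed cost data."""
    a = np.asarray(costs_a, dtype=float)
    b = np.asarray(costs_b, dtype=float)
    diff = a.mean() - b.mean()
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    # Welch-Satterthwaite degrees of freedom.
    dof = se**4 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    t = stats.t.ppf(0.5 + level / 2, dof)
    return diff - t * se, diff + t * se
```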
Abstract:
Reliable detection of JAK2-V617F is critical for accurate diagnosis of myeloproliferative neoplasms (MPNs); in addition, sensitive mutation-specific assays can be applied to monitor disease response. However, there has been no consistent approach to JAK2-V617F detection, with assays varying markedly in performance, affecting clinical utility. Therefore, we established a network of 12 laboratories from seven countries to systematically evaluate nine different DNA-based quantitative PCR (qPCR) assays, including those in widespread clinical use. Seven quality-control rounds involving over 21,500 qPCR reactions were undertaken using centrally distributed cell-line dilutions and plasmid controls. The two best-performing assays were tested on normal blood samples (n = 100) to evaluate assay specificity, followed by analysis of serial samples from 28 patients transplanted for JAK2-V617F-positive disease. The most sensitive assay, which performed consistently across a range of qPCR platforms, predicted outcome following transplant, with the mutant allele detected a median of 22 weeks (range 6-85 weeks) before relapse. Four of seven patients achieved molecular remission following donor lymphocyte infusion, indicative of a graft-versus-MPN effect. This study has established a robust, reliable assay for sensitive JAK2-V617F detection, suitable for assessing response in clinical trials, predicting outcome, and guiding management of patients undergoing allogeneic transplant.
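The abstract does not describe the assays' quantification math, but qPCR quantification of this kind conventionally rests on a standard curve of Ct against log input copies. A hedged sketch of that calculation; all dilution values and Ct numbers below are made up for illustration:

```python
import numpy as np

# Illustrative standard curve: Ct values measured on plasmid dilutions
# with known copy numbers (all numbers are made up).
log10_copies = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
ct = np.array([18.1, 21.5, 24.9, 28.2, 31.6])

slope, intercept = np.polyfit(log10_copies, ct, 1)   # Ct = m*log10(N) + b
efficiency = 10 ** (-1.0 / slope) - 1.0              # ~1.0 means 100%

def copies_from_ct(sample_ct):
    """Interpolate a sample's input copy number from its Ct value."""
    return 10 ** ((sample_ct - intercept) / slope)

# Mutant allele burden = mutant / (mutant + wild-type); in practice the
# two alleles are measured with separate mutation-specific reactions.
mutant = copies_from_ct(27.4)
wild_type = copies_from_ct(20.1)
burden_pct = 100.0 * mutant / (mutant + wild_type)
```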
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. Linkage disequilibrium methods can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve selecting parent-child trios, as in the transmission disequilibrium test (TDT). All nine tests were found to identify disequilibrium at the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of a given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests reached a maximum of less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 minus the heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, making the marker uninformative for disequilibrium.

The five tests using isolated unrelated individuals showed excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The TDT-based tests were not liable to any increase in error rates.

For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
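The TDT referred to above reduces to a McNemar-type comparison of how often heterozygous parents transmit a marker allele to affected offspring versus withhold it. A minimal sketch with illustrative counts:

```python
from scipy import stats

def tdt(transmitted, untransmitted):
    """Transmission disequilibrium test for one marker allele.

    transmitted   : heterozygous parents transmitting the allele
    untransmitted : heterozygous parents not transmitting it
    Under no linkage/association the two counts have equal expectation.
    """
    b, c = transmitted, untransmitted
    chi2 = (b - c) ** 2 / (b + c)        # McNemar-type statistic, 1 df
    p = stats.chi2.sf(chi2, df=1)
    return chi2, p

# Example: 60 transmissions vs 40 non-transmissions from 100 informative
# heterozygous parents (illustrative numbers).
print(tdt(60, 40))   # chi2 = 4.0, p ~= 0.0455
```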
Abstract:
This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate a stochastic-frontier panel-data model specifying a translog cost function, covering 1995 to 2003. The results disagree with previous research in that we find little evidence of scale economies and some evidence of scale diseconomies. Moreover, we generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, the results also show that self-management of a REIT is associated with more inefficiency when output is measured with assets; when output is measured with revenue, self-management is associated with less inefficiency. Also contrary to previous research, higher leverage is associated with more efficiency. The results further suggest that inefficiency increases over time in three of our four specifications.
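For reference, a translog stochastic cost frontier of the kind estimated here takes the following general form for one output y and input prices w_j (the paper's exact specification and controls are not given in the abstract):

```latex
\ln C_{it} = \alpha_0 + \alpha_y \ln y_{it}
  + \tfrac{1}{2}\,\gamma_{yy} (\ln y_{it})^2
  + \sum_j \beta_j \ln w_{jit}
  + \tfrac{1}{2}\sum_j \sum_k \beta_{jk} \ln w_{jit} \ln w_{kit}
  + \sum_j \rho_{jy} \ln w_{jit} \ln y_{it}
  + v_{it} + u_{it}
```

Here v is symmetric noise and u ≥ 0 captures cost inefficiency. Scale economies are then read off the cost elasticity ∂ln C/∂ln y: values below one indicate scale economies, values above one scale diseconomies.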
Abstract:
In this paper we use the 2004-05 Annual Survey of Industries data to estimate the levels of cost efficiency of Indian manufacturing firms in the various states and to derive state-level measures of industrial organization (IO) efficiency. The empirical results show the presence of considerable cost inefficiency in a majority of the states. Further, we find that, on average, Indian firms are too small; consolidating them to attain the optimal scale would further enhance efficiency and lower average cost.
Abstract:
Background. Childhood immunization programs have dramatically reduced the morbidity and mortality associated with vaccine-preventable diseases. Proper documentation of the immunizations that have been administered is essential to prevent duplicate immunization of children. To help improve documentation, immunization information systems (IISs) have been developed. IISs are comprehensive repositories of immunization information for children residing within a geographic region. The two models for participation in an IIS are voluntary inclusion, or "opt-in," and voluntary exclusion, or "opt-out." In an opt-in system, consent must be obtained for each participant; conversely, in an opt-out IIS, all children are included unless procedures to exclude the child are completed. Consent requirements for participation vary by state; the Texas IIS, ImmTrac, is an opt-in system.

Objectives. The specific objectives are to: (1) evaluate the variance in the time and costs associated with collecting ImmTrac consent at public and private birthing hospitals in the Greater Houston area; (2) estimate the total costs associated with collecting ImmTrac consent at selected public and private birthing hospitals in the Greater Houston area; and (3) describe the alternative opt-out process for collecting ImmTrac consent at birth and discuss the associated cost savings relative to an opt-in system.

Methods. Existing time-motion studies (n = 281) conducted between October 2006 and August 2007 at 8 birthing hospitals in the Greater Houston area were used to assess the time and costs associated with obtaining ImmTrac consent at birth. All data analyzed were deidentified and contained no personal information. Variations in time and costs at each location were assessed, and total costs per child and costs per year were estimated. The cost of an alternative opt-out system was also calculated.

Results. The median time required by birth registrars to complete consent procedures varied from 72 to 285 seconds per child. The annual costs associated with obtaining consent for 388,285 newborns in ImmTrac's opt-in consent process were estimated at $702,000. The corresponding costs of the proposed opt-out system were estimated at $194,000 per year.

Conclusions. Substantial variation in the time and costs associated with completing ImmTrac consent procedures was observed. Changing to an opt-out system for participation could represent significant cost savings.
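The headline figures imply straightforward per-child costs; a quick check of the arithmetic using only the numbers reported above:

```python
newborns = 388_285          # annual newborns covered (from the study)
opt_in_total = 702_000      # estimated annual cost of opt-in consent ($)
opt_out_total = 194_000     # estimated annual cost of opt-out system ($)

per_child_opt_in = opt_in_total / newborns     # ~$1.81 per child
per_child_opt_out = opt_out_total / newborns   # ~$0.50 per child
annual_savings = opt_in_total - opt_out_total  # ~$508,000 per year
print(per_child_opt_in, per_child_opt_out, annual_savings)
```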