894 results for Benefit-Cost Analysis


Relevance:

40.00%

Publisher:

Abstract:

Objective: To evaluate the effect of heparin on duration of catheter patency and on prevention of complications associated with use of peripheral venous and arterial catheters.

Relevance:

40.00%

Publisher:

Abstract:

Objective: To analyse the cost effectiveness of a national programme to screen blood donors for infection with the human T cell leukaemia/lymphoma virus.

Relevance:

40.00%

Publisher:

Abstract:

Objective: To determine how small differences in the efficacy and cost of two antibiotic regimens to eradicate Helicobacter pylori can affect the overall cost effectiveness of H pylori eradication in duodenal ulcer disease.

Relevance:

40.00%

Publisher:

Abstract:

Objectives: To examine the cost of providing hospital at home in place of some forms of inpatient hospital care.

Relevance:

40.00%

Publisher:

Abstract:

Objective: To compare, from the viewpoints of the NHS and social services and of patients, the costs associated with early discharge to a hospital at home scheme and those associated with continued care in an acute hospital.

Relevance:

40.00%

Publisher:

Abstract:

Objectives: To estimate the economic efficiency of tight blood pressure control, with angiotensin converting enzyme inhibitors or β blockers, compared with less tight control in hypertensive patients with type 2 diabetes.

Relevance:

40.00%

Publisher:

Abstract:

Objective: To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study.

Relevance:

40.00%

Publisher:

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range-sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, must be processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbour search. Although they decrease the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios.

The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean distance, which is the de facto standard in implementations of the algorithm in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to have a significant positive effect on the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analysed and validated experimentally as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios and initial configurations.
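To make the role of the distance metric concrete, the following is a minimal sketch of generic point-to-point ICP in Python (NumPy and SciPy) in which the neighbour-matching metric is a parameter. It is not the variant studied in this work; the brute-force neighbour search, tolerance values and function names are illustrative assumptions only.

```python
# Minimal point-to-point ICP sketch (illustrative, not the authors' method).
# The matching metric is a parameter, so a cheaper metric such as the L1
# ("cityblock") distance can be swapped in for the Euclidean one.
import numpy as np
from scipy.spatial.distance import cdist


def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t


def icp(source, target, metric="euclidean", max_iters=50, tol=1e-8):
    """Iterate: match closest points under `metric`, fit a rigid transform, apply it."""
    current = source.copy()
    prev_err = np.inf
    err = prev_err
    for _ in range(max_iters):
        # Closest-neighbour search: the most expensive phase (brute force here,
        # quadratic in the number of points, like the original non-optimized ICP).
        dists = cdist(current, target, metric=metric)
        nn = dists.argmin(axis=1)
        err = dists[np.arange(len(current)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_rigid_transform(current, target[nn])
        current = current @ R.T + t
    return current, err
```

Calling `icp(src, dst, metric="cityblock")` changes only the matching step; the rigid transform is still estimated in the usual least-squares sense, which illustrates the kind of trade-off between matching cost and registration quality that the abstract evaluates.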

Relevance:

40.00%

Publisher:

Abstract:

By switching the level of analysis and aggregating data from the micro-level of individual cases to the macro-level, quantitative data can be analysed within a more case-based approach. This paper presents such an approach in two steps. First, it discusses the combination of Social Network Analysis (SNA) and Qualitative Comparative Analysis (QCA) in a sequential mixed-methods research design. In such a design, quantitative social network data on individual cases and their relations at the micro-level are used to describe the structure of the network that these cases constitute at the macro-level; the resulting network structures can then be compared with QCA. This strategy adds an element of potential causal explanation to SNA, while SNA indicators provide a systematic description of the cases compared by QCA. Because mixing methods can be a promising but also a risky endeavour, the methodological part also discusses the possibility that the underlying assumptions of the two methods clash.

Second, the research design outlined above is applied to an empirical study of policy network structures in Swiss politics. Through a comparison of 11 policy networks, causal paths that lead to a conflictual or a consensual policy network structure are identified and discussed. The analysis reveals that different theoretical factors matter and that multiple conjunctural causation is at work. Based on both the methodological discussion and the empirical application, a combination of SNA and QCA appears to be a helpful methodological design for social science research and a way of using quantitative data within a more case-based approach.
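As an illustration of the aggregation step only, the sketch below (Python with networkx) computes case-level network indicators from micro-level tie data and dichotomises them into crisp-set conditions of the kind a QCA truth table requires. The tie labels, indicator choices and thresholds are hypothetical assumptions, not the indicators or calibrations used in the study.

```python
# Hedged illustration: aggregate micro-level ties into macro-level, case-based
# QCA conditions. Tie labels, indicators and thresholds are hypothetical.
import networkx as nx


def macro_indicators(g):
    """Describe one policy network (a case) by aggregate SNA indicators."""
    n_edges = g.number_of_edges()
    conflict = sum(1 for _, _, d in g.edges(data=True) if d.get("tie") == "conflict")
    degrees = [deg for _, deg in g.degree()]
    return {
        "density": nx.density(g),
        "conflict_share": conflict / n_edges if n_edges else 0.0,
        # crude proxy for centralization: share of ties held by the most active actor
        "max_degree_share": max(degrees) / (2 * n_edges) if n_edges else 0.0,
    }


def truth_table_row(indicators, thresholds):
    """Dichotomise indicators into crisp-set QCA conditions (1 = present)."""
    return {name: int(value >= thresholds[name]) for name, value in indicators.items()}


# Toy case; a real analysis would cover all 11 networks and pass the resulting
# rows to dedicated QCA software for truth-table minimisation.
g1 = nx.Graph()
g1.add_edge("A", "B", tie="conflict")
g1.add_edge("B", "C", tie="cooperation")

thresholds = {"density": 0.5, "conflict_share": 0.5, "max_degree_share": 0.6}
print(truth_table_row(macro_indicators(g1), thresholds))
```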