889 results for cost-effective design
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Background: Dementia is a progressive disease, and the number of people receiving a dementia diagnosis will increase drastically within a few decades. Healthcare services need to develop new non-pharmacological methods to manage this large increase in people with dementia. Aim: The aim was to describe whether, and in what way, music affects people with dementia. Method: A literature review with an inductive approach, compiling articles using both qualitative and quantitative methods. Article searches were performed in the Cinahl and PsycINFO databases. The qualitative articles were analysed using Friberg's five-step model, and the statistics from the quantitative articles were compiled in a table. Results: The quantitative results showed that music had a statistically significant effect on several of the variables examined. Agitation and worry/anxiety decreased, while positive engagement/participation increased. The qualitative results generated three themes: communication, mood, and indirect influence. Communication improved, people with dementia experienced joy, and staff were positively affected by the music, which in turn had an indirect effect on people with dementia. Conclusion: Music is a simple and cost-effective intervention for use with people with dementia. Different music interventions can be used in different situations to achieve the desired effect. It is also a simple way to come closer to people with dementia and to gain greater understanding of them.
Abstract:
Purpose The purpose of this paper was to review the effectiveness of telephone interviewing for capturing data and to consider in particular the challenges faced by telephone interviewers when capturing information about market segments. Design/methodology/approach The platform for this methodological critique was a market segment analysis commissioned by Sport Wales which involved a series of 85 telephone interviews completed during 2010. Two focus groups involving the six interviewers involved in the study were convened to reflect on the researchers’ experiences and the implications for business and management research. Findings There are three principal sets of findings. First, although telephone interviewing is generally a cost-effective data collection method, it is important to consider both the actual costs (i.e. time spent planning and conducting interviews) as well as the opportunity costs (i.e. missed appointments, “chasing participants”). Second, researchers need to be sensitised to and sensitive to the demographic characteristics of telephone interviewees (insofar as these are knowable) because responses are influenced by them. Third, the anonymity of telephone interviews may be more conducive for discussing sensitive issues than face-to-face interactions. Originality/value The present study adds to this modest body of literature on the implementation of telephone interviewing as a research technique of business and management. It provides valuable methodological background detail about the intricate, personal experiences of researchers undertaking this method “at a distance” and without visual cues, and makes explicit the challenges of telephone interviewing for the purposes of data capture.
Abstract:
Catalysis plays an essential role in many industrial applications, such as the petrochemical and biochemical industries, polymer production, and environmental protection. The design and fabrication of efficient, cost-effective catalysts is an important step towards solving a number of problems in new technologies for chemical conversion and energy storage. The objective of this thesis is to develop efficient and simple synthesis routes for high-performance catalysts based on non-noble metals, and to examine fundamental aspects of the relationship between structure/composition and catalytic performance, particularly in processes related to hydrogen production and storage. First, a series of nanostructured, porous mixed metal oxides (Cu/CeO2, CuFe/CeO2, CuCo/CeO2, CuFe2O4, NiFe2O4) was synthesised using an improved nanocasting method. The resulting Cu/CeO2 materials, whose composition and porous structure can be controlled, were then tested for the preferential oxidation of CO in a hydrogen stream, with the goal of obtaining high-purity hydrogen fuel. The synthesised catalysts exhibit high activity and selectivity in the selective oxidation of CO to CO2. Regarding hydrogen storage, a synthesis route was found for the mixed compound CuO-NiO, which demonstrates excellent catalytic performance, comparable to noble-metal catalysts, for hydrogen production from ammonia borane (also called borazane). The catalytic activity of this catalyst is strongly influenced by the nature of the metal precursors, the composition, and the heat-treatment temperature used during catalyst preparation.
Finally, Cu-Ni catalysts supported on colloidal silica or on carbon particles, with variable composition and particle size, were synthesised by a simple impregnation process. The carbon-supported catalysts are stable and highly active both in the hydrolysis of borazane and in the decomposition of aqueous hydrazine for hydrogen production. It was shown that an optimal catalyst can be obtained by controlling the bimetallic effect, the metal-support interaction, and the metal particle size.
Abstract:
Numerous applications within the mid- and long-wavelength infrared are driving the search for efficient and cost-effective detection technologies in this regime. Theoretical calculations have predicted high performance for InAs/GaSb type-II superlattice structures, which rely on mature III-V semiconductor growth and offer many degrees of freedom in design through band structure engineering. This work focuses on the fabrication and characterization of type-II superlattice infrared detectors. Standard UV-based photolithography was combined with chemical wet or dry etching techniques to fabricate antimony-based type-II superlattice infrared detectors. Subsequently, Fourier transform infrared spectroscopy and radiometric techniques were applied for optical characterization to obtain a detector's spectrum and response, as well as the overall detectivity in combination with electrical characterization. Temperature-dependent electrical characterization was used to extract information about the limiting dark current processes. This work resulted in the first demonstration of an InAs/GaSb type-II superlattice infrared photodetector grown by metalorganic chemical vapor deposition. A peak detectivity of 1.6x10^9 Jones at 78 K was achieved for this device, with an 11 micrometer zero cutoff wavelength. Furthermore, an interband tunneling detector designed for the mid-wavelength infrared regime was studied, with results similar to those previously published.
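The detectivity quoted above is the standard figure of merit D* (in Jones, i.e. cm·Hz^1/2/W), computed from responsivity, detector area, measurement bandwidth, and noise current. As a reminder of the arithmetic, a minimal sketch; the numbers below are illustrative placeholders chosen only to demonstrate the formula, not device parameters from this work:

```python
import math

def specific_detectivity(responsivity_a_per_w: float,
                         area_cm2: float,
                         bandwidth_hz: float,
                         noise_current_a: float) -> float:
    """Specific detectivity D* = R * sqrt(A * df) / i_n.

    With the area in cm^2, the result is in cm*Hz^0.5/W (Jones).
    All input values used below are hypothetical, for illustration only.
    """
    return responsivity_a_per_w * math.sqrt(area_cm2 * bandwidth_hz) / noise_current_a

# Hypothetical 100 um x 100 um mesa (1e-4 cm^2), 1 Hz bandwidth
d_star = specific_detectivity(responsivity_a_per_w=2.0,
                              area_cm2=1e-4,
                              bandwidth_hz=1.0,
                              noise_current_a=1.25e-11)  # -> 1.6e9 Jones
```

Normalizing by area and bandwidth is what makes D* comparable across detectors of different sizes, which is why abstracts report it rather than raw responsivity.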
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error reduction technique (proportional rectangular) produced load estimates 15% and 30% more accurate than the most accurate uncorrected load estimation method (the ratio estimator) for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans that achieve the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
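Of the three load estimation methods compared, the ratio estimator is the simplest to state: the mean of the sampled loads divided by the mean of the sampled flows, scaled by the flow of the full daily record. A minimal sketch with synthetic data (the function and variable names are mine, and this is the textbook form rather than the study's exact implementation):

```python
def ratio_estimator_annual_load(sampled_conc, sampled_flow, daily_flow):
    """Ratio-estimator annual load.

    (mean of sampled loads / mean of sampled flows) * total annual flow.
    With concentration in mg/L and flow in L/day, the load is in mg/year.
    """
    sampled_load = [c * q for c, q in zip(sampled_conc, sampled_flow)]
    ratio = (sum(sampled_load) / len(sampled_load)) / \
            (sum(sampled_flow) / len(sampled_flow))
    return ratio * sum(daily_flow)

# Synthetic year of daily flows; sampling every 14 days (two-week frequency)
flows = [100.0 + 10 * (i % 7) for i in range(365)]
sampled_days = list(range(0, 365, 14))
est = ratio_estimator_annual_load(
    [5.0 for _ in sampled_days],          # constant 5 mg/L concentration
    [flows[i] for i in sampled_days],
    flows)
true_load = 5.0 * sum(flows)              # exact load for constant concentration
```

With a constant concentration the estimator recovers the true load exactly; in practice concentration varies with flow, which is the error the correction techniques in the study aim to reduce.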
Abstract:
The problem: Around 300 million people worldwide have asthma and prevalence is increasing. Support for optimal self-management can be effective in improving a range of outcomes and is cost-effective, but is underutilised as a treatment strategy. Supporting optimum self-management using digital technology shows promise, but how best to do this is not clear. Aim: The purpose of this project was to explore the potential role of a digital intervention in promoting optimum self-management in adults with asthma. Methods: Following the MRC Guidance on the Development and Evaluation of Complex Interventions, which advocates using theory, evidence, user testing and appropriate modelling and piloting, this project had 3 phases. Phase 1: Examination of the literature to inform phases 2 and 3, using systematic review methods and focussed literature searching. Phase 2: Developing the Living Well with Asthma website. A prototype (paper-based) version of the website was developed iteratively with input from a multidisciplinary expert panel, empirical evidence from the literature (from phase 1), and potential end users via focus groups (adults with asthma and practice nurses). Implementation and behaviour change theories informed this process. The paper-based designs were converted to the website through an iterative user-centred process (think-aloud studies with adults with asthma). Participants considered contents, layout, and navigation. Development was agile, with feedback from the think-aloud sessions used immediately to inform the design and subsequent sessions. Phase 3: A pilot randomised controlled trial over 12 weeks to evaluate the feasibility of a Phase III trial of Living Well with Asthma to support self-management. Primary outcomes were 1) recruitment & retention; 2) website use; 3) Asthma Control Questionnaire (ACQ) score change from baseline; 4) Mini Asthma Quality of Life (AQLQ) score change from baseline. 
Secondary outcomes were patient activation, adherence, lung function, fractional exhaled nitric oxide (FeNO), generic quality of life (EQ-5D), medication use, prescribing and health services contacts. Results: Phase 1: Demonstrated that while digital interventions show promise, with some evidence of effectiveness for certain outcomes, participants were poorly characterised, telling us little about the reach of these interventions. The interventions themselves were poorly described, making it impossible to draw definitive conclusions about what worked and what did not. Phase 2: The literature indicated that important aspects to cover in any self-management intervention (digital or not) included: asthma action plans, regular health professional review, trigger avoidance, psychological functioning, self-monitoring, inhaler technique, and goal setting. The website asked users to aim to be symptom free. Key behaviours targeted to achieve this included: optimising medication use (including inhaler technique); attending primary care asthma reviews; using asthma action plans; increasing physical activity levels; and stopping smoking. The website had 11 sections, plus email reminders, which promoted these behaviours. Feedback during think-aloud studies was mainly positive, with most changes focussing on clarification of language, order of pages, and usability issues mainly relating to navigation difficulties. Phase 3: To achieve our recruitment target, 5,383 potential participants were invited, leading to 51 participants randomised (25 to the intervention group). Age range 16-78 years; 75% female; 28% from the most deprived quintile. Nineteen (76%) of the intervention group used the website, for an average of 23 minutes. Non-significant improvements in favour of the intervention group were observed in the ACQ score (-0.36; 95% confidence interval: -0.96, 0.23; p=0.225) and mini-AQLQ scores (0.38; -0.13, 0.89; p=0.136). 
A significant improvement was observed in the activity limitation domain of the mini-AQLQ (0.60; 0.05 to 1.15; p = 0.034). Secondary outcomes showed increased patient activation and reduced reliance on reliever medication. There was no significant difference in the remaining secondary outcomes. There were no adverse events. Conclusion: Living Well with Asthma has been shown to be acceptable to potential end users, and has potential for effectiveness. This intervention merits further development and subsequent evaluation in a full-scale Phase III RCT.
Abstract:
Spent hydroprocessing catalysts (HPCs) are solid wastes generated in refinery industries and typically contain various hazardous metals, such as Co, Ni, and Mo. These wastes cannot be discharged into the environment due to strict regulations and require proper treatment to remove the hazardous substances. Various options have been proposed and developed for spent catalyst treatment; among them, hydrometallurgical processes are considered efficient, cost-effective and environmentally-friendly methods of metal extraction, and have been widely employed for the uptake of different metals from aqueous leachates of secondary materials. Although there are a large number of studies on hazardous metal extraction from aqueous solutions of various spent catalysts, little information is available on Co, Ni, and Mo removal from spent NiMo hydroprocessing catalysts. In the current study, a solvent extraction process was applied to spent HPC specifically to remove Co, Ni, and Mo. The spent HPC was dissolved in an acid solution and the metals were then extracted using three different extractants, two of which were amine-based and one of which was a quaternary ammonium salt. The main aim of this study was to develop a hydrometallurgical method to remove, and ultimately be able to recover, Co, Ni, and Mo from the spent HPCs produced at the petrochemical plant in Come By Chance, Newfoundland and Labrador. 
The specific objectives of the study were: (1) characterization of the spent catalyst and the acidic leachate; (2) identification of the most efficient leaching agent to dissolve the metals from the spent catalyst; (3) development of a solvent extraction procedure using the amine-based extractants Alamine308 and Alamine336 and the quaternary ammonium salt Aliquat336 in toluene to remove Co, Ni, and Mo from the spent catalyst; (4) selection of the best reagent for Co, Ni, and Mo extraction based on the required contact time, required extractant concentration, and organic:aqueous ratio; and (5) evaluation of the extraction conditions and optimization of the metal extraction process using the Design Expert® software. For the present study, a Central Composite Design (CCD) method was applied to design the experiments, evaluate the effect of each parameter, provide a statistical model, and optimize the extraction process. Three parameters were considered the most significant factors affecting process efficiency: (i) extractant concentration, (ii) the organic:aqueous ratio, and (iii) contact time. Metal extraction efficiencies were calculated based on ICP analysis of the pre- and post-contact leachates, and process optimization was conducted with the aid of the Design Expert® software. The results showed that Alamine308 can be considered the most effective and suitable extractant for the spent HPC examined in this study, removing the largest amounts of all three metals. Aliquat336 was found to be less effective, especially for Ni extraction; however, it was able to separate all three metals within the first 10 min, unlike Alamine336, which required more than 35 min to do so. 
Based on the results of this study, a cost-effective and environmentally-friendly solvent extraction process was achieved that removes Co, Ni, and Mo from spent HPCs in a short amount of time and with a low required extractant concentration. This method could also be tested and implemented for other hazardous metals from other secondary materials. Further investigation may be required; however, the results of this study can serve as a guide for future research on similar metal extraction processes.
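The extraction efficiencies mentioned above follow the standard percent-extraction formula, computed from the aqueous-phase metal concentrations before and after contact with the organic phase (here measured by ICP). A minimal sketch with hypothetical concentrations (the numbers are illustrative, not results from this study):

```python
def extraction_efficiency(pre_mg_l: float, post_mg_l: float) -> float:
    """Percent of a metal transferred from the aqueous phase to the
    organic phase, from concentrations before and after contact.

    E(%) = 100 * (C_pre - C_post) / C_pre
    """
    if pre_mg_l <= 0:
        raise ValueError("pre-contact concentration must be positive")
    return 100.0 * (pre_mg_l - post_mg_l) / pre_mg_l

# Hypothetical leachate: 120 mg/L Mo before contact, 6 mg/L remaining after
mo_eff = extraction_efficiency(120.0, 6.0)   # -> 95.0 %
```

Computing this per metal and per extractant at each CCD design point is what feeds the statistical model used for optimization.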
Epidemiology and genetic architecture of blood pressure: a family based study of Generation Scotland
Abstract:
Hypertension is a major risk factor for cardiovascular disease and mortality, and a growing global public health concern, with up to one-third of the world’s population affected. Despite the vast amount of evidence for the benefits of blood pressure (BP) lowering accumulated to date, elevated BP is still the leading risk factor for disease and disability worldwide. It is well established that hypertension and BP are common complex traits, where multiple genetic and environmental factors contribute to BP variation. Furthermore, family and twin studies have confirmed the genetic component of BP, with heritability estimates in the range of 30-50%. Contemporary genomic tools enabling the genotyping of millions of genetic variants across the human genome in an efficient, reliable, and cost-effective manner have transformed hypertension genetics research. This has been accompanied by international consortia offering unprecedentedly large sample sizes for genome-wide association studies (GWASs). While GWASs for hypertension and BP have identified more than 60 loci, variants in these loci have modest effects on BP and in aggregate explain less than 3% of the variance in BP. The aims of this thesis are to study the genetic and environmental factors that influence BP and hypertension traits in the Scottish population, by performing several genetic epidemiological analyses. The first part of this thesis aims to study the burden of hypertension in the Scottish population, along with assessing the familial aggregation and heritability of BP and hypertension traits. The second part aims to validate the association of common SNPs reported in large GWASs and to estimate the variance explained by these variants. In this thesis, comprehensive genetic epidemiology analyses were performed on Generation Scotland: Scottish Family Health Study (GS:SFHS), one of the largest population-based family design studies. 
The availability of clinical and biological samples, self-reported information, and medical records for study participants has allowed several assessments to be performed to evaluate factors that influence BP variation in the Scottish population. Of the 20,753 subjects genotyped in the study, a total of 18,470 individuals (grouped into 7,025 extended families) passed the stringent quality control (QC) criteria and were available for all subsequent analysis. Based on the BP-lowering treatment exposure sources, subjects were further classified into two groups: first, subjects with both a self-reported medications (SRMs) history and electronic-prescription records (EPRs; n = 12,347); second, all subjects with at least one medication history source (n = 18,470). In the first group, the analysis showed good concordance between SRMs and EPRs (kappa = 71%), indicating that SRMs can be used as a surrogate to assess exposure to BP-lowering medication in GS:SFHS participants. Although both sources suffer from some limitations, SRMs can be considered the best available source to estimate drug exposure history in those without EPRs. The prevalence of hypertension was 40.8%, with higher prevalence in men (46.3%) than in women (35.8%). The prevalence of awareness, treatment and controlled hypertension, as defined by the study definition, was 25.3%, 31.2%, and 54.3%, respectively. These figures are lower than those reported by similar studies in other populations, with the exception of controlled hypertension prevalence, which compares favourably with other populations. Odds of hypertension were higher in men, obese or overweight individuals, people with a parental history of hypertension, and those living in the most deprived areas of Scotland. 
On the other hand, deprivation was associated with higher odds of treatment, awareness and controlled hypertension, suggesting that people living in the most deprived areas may have been receiving better quality of care, or have higher comorbidity levels requiring greater engagement with doctors. These findings highlight the need for further work to improve hypertension management in Scotland. The family design of GS:SFHS allowed family-based analysis to assess the familial aggregation and heritability of BP and hypertension traits. The familial correlation of BP traits ranged from 0.07 to 0.20 for parent-offspring pairs and from 0.18 to 0.34 for sibling pairs. A higher correlation of BP traits was observed among first-degree relatives than among other types of relative pairs. A variance-component model adjusted for sex, body mass index (BMI), age, and age-squared was used to estimate the heritability of BP traits, which ranged from 24% to 32%, with pulse pressure (PP) having the lowest estimates. The genetic correlations between BP traits were high among systolic (SBP), diastolic (DBP) and mean arterial pressure (MAP) (G: 81% to 94%), but lower with PP (G: 22% to 78%). The sibling recurrence risk ratios (λS) for hypertension and treated hypertension were calculated as 1.60 and 2.04, respectively. These findings confirm the genetic component of BP traits in GS:SFHS, and justify further work to investigate genetic determinants of BP. Genetic variants reported in recent large GWASs of BP traits were selected for genotyping in GS:SFHS using a custom-designed TaqMan® OpenArray®. The genotyping plate included 44 single nucleotide polymorphisms (SNPs) previously reported to be associated with BP or hypertension at genome-wide significance. A linear mixed model adjusted for age, age-squared, sex, and BMI was used to test for association between the genetic variants and BP traits. 
Of the 43 variants that passed QC, 11 showed statistically significant association with at least one BP trait. The phenotypic variance explained by these variants was 1.4%, 1.5%, 1.6%, and 0.8% for SBP, DBP, MAP, and PP, respectively. A genetic risk score (GRS) constructed from the selected variants showed a positive association with BP level and hypertension prevalence, with an average effect of a one mmHg increase for each 0.80-unit increase in the GRS across the different BP traits. The impact of BP-lowering medication on genetic association studies of BP traits is well established, with the typical practice being to add a fixed value (i.e. 15/10 mmHg) to the measured BP values to adjust for treatment. Using the subset of participants with both treatment exposure sources (i.e. SRMs and EPRs), the influence of using either source to justify the addition of these fixed values on SNP association signals was analysed. BP phenotypes derived from EPRs were considered the true phenotypes, and those derived from SRMs were considered less accurate, carrying some phenotypic noise. Comparing SNP association signals for the four BP traits between the two models derived from the different adjustments showed that MAP was the least affected by the phenotypic noise: the same significant SNPs overlapped between the two models for MAP, while the other BP traits showed some discrepancy between the two sources.
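A genetic risk score of the kind described, summing risk-allele counts across the genotyped SNPs, optionally weighted by per-SNP effect sizes, can be sketched as follows. This is a generic illustration, not the thesis' exact scoring scheme, and the dosages and effect sizes below are hypothetical:

```python
def genetic_risk_score(genotypes, weights=None):
    """Genetic risk score: sum of risk-allele counts (0/1/2 per SNP),
    optionally weighted by per-allele effect sizes.

    With weights in mmHg per allele, the weighted score is an expected
    BP shift; unweighted, it is a simple risk-allele count.
    """
    if weights is None:
        weights = [1.0] * len(genotypes)
    return sum(g * w for g, w in zip(genotypes, weights))

# Hypothetical individual genotyped at 4 BP-associated SNPs
dosages = [2, 1, 0, 1]            # risk-allele counts per SNP
effects = [0.5, 0.4, 0.3, 0.6]    # per-allele effects in mmHg (illustrative)

unweighted = genetic_risk_score(dosages)          # -> 4 risk alleles
weighted = genetic_risk_score(dosages, effects)   # -> 2.0 mmHg
```

Regressing measured BP on such a score across the cohort is what yields the "mmHg per unit of GRS" effect reported in the abstract.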
Abstract:
The international Argo program, consisting of a global array of more than 3000 free-drifting profiling floats, has now been monitoring the upper 2000 meters of the ocean for several years. One of its main proposed evolutions is the ability to reach the deeper ocean, in order to better observe and understand the key role of the deep ocean in the climate system. For this purpose, Ifremer has designed the new “Deep-Arvor” profiling float: it extends the current operational depth down to 4000 meters, and measures temperature and salinity for up to 150 cycles with the CTD pumping continuously, and 200 cycles in spot sampling mode. High-resolution profiles (up to 2000 points) can be transmitted, and data are delivered in near real time according to Argo requirements. Deep-Arvor can be deployed anywhere at sea without any pre-ballasting operation, and its light weight (~26 kg) makes launching easy. It was designed to be a cost-effective solution. Predefined spots have been allocated for an optional oxygen sensor and a connector for an extra sensor. Extensive laboratory tests were successful, and the first at-sea experiments showed that the operational prototypes reached the expected performance (i.e. up to 150 cycles). Meanwhile, the industrialization phase was completed in order to manufacture the Deep-Arvor float for the pilot experiment in 2015. In this paper, we detail all the steps of the development work and present the results from the at-sea experiments.
Abstract:
8th International Symposium on Project Approaches in Engineering Education (PAEE)
Abstract:
Background: Non-small cell lung cancer (NSCLC) imposes a substantial burden on patients, health care systems and society due to increasing incidence and poor survival rates. In recent years, advances in the treatment of metastatic NSCLC have resulted from the introduction of targeted therapies. However, the application of these new agents increases treatment costs considerably. The objective of this article is to review the economic evidence on targeted therapies in metastatic NSCLC. Methods: A systematic literature review was conducted to identify cost-effectiveness (CE) and cost-utility studies. Medline, Embase, SciSearch, Cochrane, and 9 other databases were searched from 2000 through April 2013 (including an update) for full-text publications. The quality of the studies was assessed with the validated Quality of Health Economic Studies (QHES) instrument. Results: Nineteen studies (including the update) involving the monoclonal antibody bevacizumab and the tyrosine kinase inhibitors erlotinib and gefitinib met all inclusion criteria. The majority of studies analyzed the CE of first-line maintenance and second-line treatment with erlotinib. Five studies dealt with bevacizumab in first-line regimens. Gefitinib and pharmacogenomic profiling were each covered by only two studies. Furthermore, the available evidence was of only fair quality. Conclusion: First-line maintenance treatment with erlotinib compared to Best Supportive Care (BSC) can be considered cost-effective. In comparison to docetaxel, erlotinib is likely to be cost-effective in subsequent treatment regimens as well. The findings for bevacizumab are mixed. There are findings that gefitinib is cost-effective in first- and second-line treatment; however, these are based on only two studies. The role of pharmacogenomic testing needs to be evaluated. Therefore, future research should improve the available evidence and consider pharmacogenomic profiling as specified by the European Medicines Agency. 
Upcoming agents like crizotinib and afatinib need to be analyzed as well. © Lange et al.
Abstract:
African American women account for a disproportionate burden of cervical cancer incidence and mortality rate when compared to non-Hispanic White women. Cervical cancer is one of the most preventable types of cancer, and women can be screened for it with a routine Pap test. Given that religion occupies an essential place in African American lives, framing health messages with important spiritual themes and delivering them through a popular communication delivery channel may allow for a more culturally-relevant and accessible technology-based approach to promoting cervical cancer educational content to African American women. Using community-engaged research as a framework, the purpose of this multiple methods study was to develop, pilot test, and evaluate the feasibility, acceptability, and initial efficacy of a spiritually-based SMS text messaging intervention to increase cervical cancer awareness and Pap test screening intention among African American women. The study recruited church-attending African American women ages 21-65 and was conducted in three phases. Phases 1 and 2 consisted of a series of focus group discussions (n=15), cognitive response interviews (n=8), and initial usability testing that were conducted to inform the intervention development and modifications. Phase 3 utilized a non-experimental one-group pretest-posttest design to pilot test the 16-day text messaging intervention (n=52). Of the individuals enrolled, forty-six completed the posttest (retention rate=88%). Findings provided evidence for the early feasibility, high acceptability, and some initial efficacy of the CervixCheck intervention. There were significant pre-post increases observed for knowledge about cervical cancer and the Pap test (p = .001) and subjective norms (p = .006). 
Additionally, results post-intervention revealed that 83% of participants reported being either “satisfied” or “very satisfied” with the program and 85% found the text messages either “useful” or “very useful”. 85% of the participants also indicated that they would “likely” or “very likely” share the information they learned from the intervention with the women around them, with 39% indicating that they had already shared some of the information they received with others they knew. A spiritually-based SMS text messaging intervention could be a culturally appropriate and cost-effective method of promoting cervical cancer early detection information to African American women.
Abstract:
Background: Rotavirus diarrhea is one of the most important causes of death among under-five children. Anti-rotavirus vaccination of these children may reduce the burden of the disease. Objectives: This study is intended to inform the country’s health policy-makers about optimal decisions and policy development in this area, by performing cost-effectiveness and cost-utility analyses of anti-rotavirus vaccination for under-5 children. Patients and Methods: A cost-effectiveness analysis was performed using a decision tree model to compare rotavirus vaccination with no vaccination, from the perspective of Iran’s Ministry of Health and over a 5-year time horizon. Epidemiological data were collected from published and unpublished sources. Four different assumptions were considered for the disease episode rate. To analyze costs, the costs of implementing the vaccination program were calculated assuming 98% coverage and a cost of USD 7 per dose. Medical and social costs of the disease were evaluated by sampling patients with rotavirus diarrhea, and sensitivity analysis was performed for different episode rates and vaccine prices per dose. Results: For the most optimistic assumption about the episode of illness (10.2 per year), the cost per DALY averted is USD 12,760 and USD 7,404 for the RotaTeq and Rotarix vaccines, respectively; assuming the episode of illness is 300%, these figures become USD 2,395 and USD 354, respectively, which is highly cost-effective. The number of life-years gained is 3,533. Conclusions: Assuming illness episodes of 100% and 300% for Rotarix, and 300% for RotaTeq, the cost per DALY averted is highly cost-effective based on the World Health Organization threshold (< 1 GDP per capita = USD 4,526). The implementation of a national rotavirus vaccination program is suggested.
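The figure of merit reported here, cost per DALY averted judged against the WHO threshold of one GDP per capita, is a simple incremental quotient. A minimal sketch; all monetary and DALY inputs below are illustrative placeholders, not the study's figures:

```python
def cost_per_daly_averted(cost_vaccination: float,
                          cost_no_vaccination: float,
                          dalys_averted: float) -> float:
    """Incremental cost-effectiveness ratio for vaccination vs. no
    vaccination: (incremental cost) / (DALYs averted).

    Inputs are totals over the analytic time horizon, in the same
    currency. The example values below are hypothetical.
    """
    return (cost_vaccination - cost_no_vaccination) / dalys_averted

ratio = cost_per_daly_averted(cost_vaccination=9_000_000.0,
                              cost_no_vaccination=2_000_000.0,
                              dalys_averted=2_000.0)   # -> 3500.0 USD per DALY

WHO_THRESHOLD = 4526.0   # 1 x GDP per capita (USD), as cited in the abstract
highly_cost_effective = ratio < WHO_THRESHOLD
```

Varying the episode rate and vaccine price in this quotient is exactly the sensitivity analysis the study describes.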
Abstract:
Water injection is the most widely used method for supplementary recovery in many oil fields, for several reasons: water is an effective displacing agent for low-viscosity oils, water injection projects are relatively simple to establish, and water is available at relatively low cost. Designing a water injection project requires reservoir studies to define the various parameters needed to increase the effectiveness of the method. Several mathematical models, falling into two general categories (analytical and numerical), can be used for this kind of study. The present work aims to perform a comparative analysis of the results produced by a streamline simulator and a conventional finite-difference simulator; both are numerical simulators, used here to model light-oil reservoirs subjected to water injection. Two reservoir models were defined: the first was a heterogeneous model whose petrophysical properties vary across the reservoir, and the second was created using average petrophysical properties obtained from the first. Comparisons were made with the two models always under the same operating conditions. Selected rock and fluid parameters were then changed in both models and the results compared again. From a factorial design, used for sensitivity analysis of the reservoir parameters, a few cases were chosen to study the effect of the water injection rate and the vertical position of well perforations on the production forecast. The results from the two simulators were quite similar in most cases; differences were found only in cases where the solution gas-oil ratio of the model increased. 
Thus, it was concluded that, for flow simulation of reservoirs analogous to those studied here, and especially when the solution gas-oil ratio is low, the conventional finite-difference simulator may be replaced by the streamline simulator: the production forecasts are compatible, and the computational processing time is lower.