762 results for Cost-benefit Analyses
Abstract:
Background: WHO's 2013 revisions to its Consolidated Guidelines on antiretroviral drugs recommend routine viral load monitoring, rather than clinical or immunological monitoring, as the preferred monitoring approach on the basis of clinical evidence. However, HIV programmes in resource-limited settings require guidance on the most cost-effective use of resources in view of other competing priorities such as expansion of antiretroviral therapy coverage. We assessed the cost-effectiveness of alternative patient monitoring strategies. Methods: We evaluated a range of monitoring strategies, including clinical, CD4 cell count, and viral load monitoring, alone and together, at different frequencies and with different criteria for switching to second-line therapies. We used three independently constructed and validated models simultaneously. We estimated costs on the basis of resource use projected in the models and associated unit costs; we quantified impact as disability-adjusted life years (DALYs) averted. We compared alternatives using incremental cost-effectiveness analysis. Findings: All models show that clinical monitoring delivers significant benefit compared with a hypothetical baseline scenario with no monitoring or switching. Regular CD4 cell count monitoring confers a benefit over clinical monitoring alone, at an incremental cost that makes it affordable in more settings than viral load monitoring, which is currently more expensive. Viral load monitoring without CD4 cell count every 6-12 months provides the greatest reductions in morbidity and mortality, but incurs a high cost per DALY averted, resulting in lost opportunities to generate health gains if implemented instead of increasing antiretroviral therapy coverage or expanding antiretroviral therapy eligibility.
Interpretation: The priority for HIV programmes should be to expand antiretroviral therapy coverage, first at a CD4 cell count lower than 350 cells per μL, and then at a CD4 cell count lower than 500 cells per μL, using lower-cost clinical or CD4 monitoring. At current costs, viral load monitoring should be considered only after high antiretroviral therapy coverage has been achieved. Point-of-care technologies and other factors reducing costs might make viral load monitoring more affordable in future. Funding: Bill & Melinda Gates Foundation, WHO.
Abstract:
QUESTION UNDER STUDY The aim of this study was to evaluate the cost-effectiveness of ticagrelor and generic clopidogrel as add-on therapy to acetylsalicylic acid (ASA) in patients with acute coronary syndrome (ACS), from a Swiss perspective. METHODS Based on the PLATelet inhibition and patient Outcomes (PLATO) trial, one-year mean healthcare costs per patient treated with ticagrelor or generic clopidogrel were analysed from a payer perspective in 2011. A two-part decision-analytic model estimated treatment costs, quality-adjusted life years (QALYs), life years, and the cost-effectiveness of ticagrelor and generic clopidogrel in patients with ACS up to a lifetime horizon, at a discount rate of 2.5% per annum. Sensitivity analyses were performed. RESULTS Over a patient's lifetime, treatment with ticagrelor generates an additional 0.1694 QALYs and 0.1999 life years at a cost of CHF 260 compared with generic clopidogrel. This results in an incremental cost-effectiveness ratio (ICER) of CHF 1,536 per QALY and CHF 1,301 per life year gained. Ticagrelor dominated generic clopidogrel over the five-year and one-year horizons, generating cost savings of CHF 224 and CHF 372 while gaining 0.0461 and 0.0051 QALYs and 0.0517 and 0.0062 life years, respectively. Univariate sensitivity analyses confirmed the dominance of ticagrelor in the first five years, and probabilistic sensitivity analyses showed a high probability of cost-effectiveness over a lifetime. CONCLUSION During the first five years after ACS, treatment with ticagrelor dominates generic clopidogrel in Switzerland. Over a patient's lifetime, ticagrelor is highly cost-effective compared with generic clopidogrel, with ICERs well below commonly accepted willingness-to-pay thresholds.
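The ICERs above follow directly from the incremental figures; a quick arithmetic check, using only the numbers reported in the abstract:

```python
# ICER = incremental cost / incremental effect. The inputs below are the
# lifetime results reported above: +CHF 260, +0.1694 QALYs, +0.1999 life years.
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: cost per unit of effect gained."""
    return delta_cost / delta_effect

cost_per_qaly = icer(260, 0.1694)       # ~CHF 1,535 (abstract: 1,536; input rounding)
cost_per_life_year = icer(260, 0.1999)  # ~CHF 1,301 per life year gained

print(round(cost_per_qaly), round(cost_per_life_year))
```

The tiny gap against the reported CHF 1,536 per QALY is consistent with the published inputs being rounded to four decimals.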
Abstract:
Introduction. In this era of high-tech medicine, it is becoming increasingly important to assess patient satisfaction. Several methods exist to do so, but they differ greatly in cost, time, labour, and external validity. The aim of this study is to describe and compare the structure and implementation of different methods used to assess patient satisfaction in an emergency department. Methods. The structure and implementation of the different methods to assess patient satisfaction were evaluated on the basis of a 90-minute standardised interview. Results. We identified a total of six different methods in six different hospitals. The average number of patients assessed was 5012, with a range from 230 (M5) to 20 000 patients (M2). In four methods (M1, M3, M5, and M6), the questionnaire was developed by a specialised external institute; in two methods (M2, M4), the questionnaire was created by the hospital itself. The median response rate was 58.4% (range 9-97.8%). With a reminder, the response rate increased by 60% (M3). Conclusion. The ideal method to assess patient satisfaction in the emergency department setting is a patient-based, in-emergency-department assessment, planned and guided by expert personnel.
Abstract:
OBJECTIVE: The presence of minority nonnucleoside reverse transcriptase inhibitor (NNRTI)-resistant HIV-1 variants prior to antiretroviral therapy (ART) has been linked to virologic failure in treatment-naive patients. DESIGN: We performed a large retrospective study to determine the number of treatment failures that could have been prevented by implementing minority drug-resistant HIV-1 variant analyses in ART-naive patients in whom no NNRTI resistance mutations were detected by routine resistance testing. METHODS: Of 1608 patients in the Swiss HIV Cohort Study who had initiated first-line ART with two nucleoside reverse transcriptase inhibitors (NRTIs) and one NNRTI before July 2008, 519 patients were eligible on the basis of HIV-1 subtype, viral load, and sample availability. The key NNRTI drug resistance mutations K103N and Y181C were measured by allele-specific PCR in 208 of 519 randomly chosen patients. RESULTS: Minority K103N and Y181C drug resistance mutations were detected in five out of 190 (2.6%) and 10 out of 201 (5%) patients, respectively. Among the 183 patients for whom virologic success or failure could be examined, virologic failure occurred in seven (3.8%); minority K103N and/or Y181C variants were present prior to ART initiation in only two of those patients. The NNRTI-containing first-line ART was effective in 10 patients with a preexisting minority NNRTI-resistant HIV-1 variant. CONCLUSION: As case-control studies have shown, minority NNRTI-resistant HIV-1 variants can have an impact on ART. However, implementing minority NNRTI-resistant HIV-1 variant analysis alone, in addition to genotypic resistance testing (GRT), cannot be recommended in routine clinical settings. Additional associated risk factors need to be discovered.
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART with different monitoring strategies, and focused on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from start of ART until death. We modeled 13 strategies (no 2nd-line, clinical, CD4 (with or without targeted VL), POC-VL, and laboratory-based VL monitoring, with different frequencies). We included a scenario with identical failure rates across strategies, and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs). We calculated incremental cost-effectiveness ratios (ICER). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing 2nd-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-US$5488/DALY averted and VL monitoring US$951-US$5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except for the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between 1st- and 2nd-line costs remained large, and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring essentially depends on the cost of 2nd-line ART. 
Our Excel tool is useful for determining optimal monitoring strategies for specific settings, with specific sex and age distributions and unit costs.
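The incremental comparison described above, in which strategies are ranked by cost and dominated or extendedly dominated options are discarded before ICERs are read off the cost-effectiveness frontier, can be sketched as follows. The strategy names and numbers are invented for illustration, not outputs of the model.

```python
# Illustrative cost-effectiveness frontier. Each strategy is a tuple
# (name, lifetime cost, DALYs averted); all values here are invented.
def frontier(strategies):
    s = sorted(strategies, key=lambda x: x[1])  # order by cost
    # Drop strongly dominated strategies: more costly but no more effective.
    nd = []
    for name, cost, eff in s:
        if not nd or eff > nd[-1][2]:
            nd.append((name, cost, eff))
    # Drop extendedly dominated strategies: ICERs along the frontier must rise.
    changed = True
    while changed:
        changed = False
        for i in range(1, len(nd) - 1):
            icer_in = (nd[i][1] - nd[i-1][1]) / (nd[i][2] - nd[i-1][2])
            icer_out = (nd[i+1][1] - nd[i][1]) / (nd[i+1][2] - nd[i][2])
            if icer_in >= icer_out:  # extendedly dominated
                del nd[i]
                changed = True
                break
    return nd

demo = [("no monitoring", 1000, 0.0), ("clinical", 1200, 1.0),
        ("CD4", 1900, 1.2), ("lab VL", 2600, 1.4)]
print(frontier(demo))  # "CD4" falls off the frontier in this invented example
```

With these made-up numbers, CD4 monitoring is extendedly dominated (its entry ICER equals the next step's), so the frontier runs from clinical monitoring straight to laboratory VL; with other unit costs the frontier changes, which is exactly why a tool that re-runs the comparison per setting is useful.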
Abstract:
OBJECTIVES To investigate the frequency of interim analyses, stopping rules, and data safety and monitoring boards (DSMBs) in protocols of randomized controlled trials (RCTs); to examine these features across different reasons for trial discontinuation; and to identify discrepancies in reporting between protocols and publications. STUDY DESIGN AND SETTING We used data from a cohort of RCT protocols approved between 2000 and 2003 by six research ethics committees in Switzerland, Germany, and Canada. RESULTS Of 894 RCT protocols, 289 (32.3%) prespecified interim analyses, 153 (17.1%) stopping rules, and 257 (28.7%) DSMBs. Overall, 249 of 894 RCTs (27.9%) were prematurely discontinued, mostly for poor recruitment, administrative reasons, or unexpected harm. Forty-six of 249 RCTs (18.4%) were discontinued for early benefit or futility; of those, 37 (80.4%) were stopped outside a formal interim analysis or stopping rule. Of 515 published RCTs, there were discrepancies between protocols and publications for interim analyses (21.1%), stopping rules (14.4%), and DSMBs (19.6%). CONCLUSION Two-thirds of RCT protocols did not consider interim analyses, stopping rules, or DSMBs. Most RCTs discontinued for early benefit or futility were stopped without a prespecified mechanism. When assessing trial manuscripts, journals should require access to the protocol.
Abstract:
In the demanding environment of healthcare reform, reduction of unwanted physician practice variation is promoted, often through evidence-based guidelines. Guidelines represent innovations that direct change in physician practice; however, compliance has been disappointing. Numerous studies have analyzed guideline development and dissemination, while few have evaluated the consequences of guideline adoption. The primary purpose of this study was to explore and analyze the relationship between physician adoption of the glycated hemoglobin test guideline for management of adult patients with diabetes and the cost of medical care. The study also examined six personal and organizational characteristics of physicians and their association with innovativeness, or adoption of the guideline. Cost was represented by approved charges from a managed care claims database. Total cost, and diabetes and related-complications cost, were first compared for all patients of adopter physicians with those of non-adopter physicians. Data were then analyzed controlling for disease severity, based on insulin dependency, and for high-cost cases. There was no statistically significant difference in any of the eight cost categories analyzed. This study covered a twelve-month period and did not reflect the cost of future complications known to result from inadequate management of glycemia. Guideline compliance did not increase annual cost, which, combined with the future benefit of glycemic control, lends support to the cost-effectiveness of the guideline in the long term. Physician adoption of the guideline was recommended to reduce the future personal and economic burden of this chronic disease. Only half of the physicians studied had adopted the glycated hemoglobin test guideline for at least 75% of their diabetic patients. No statistically significant relationship was found between any physician characteristic and guideline adoption. Instead, it was likely that the innovation-decision process and guideline dissemination methods were most influential. A multidisciplinary, multi-faceted approach, including interventions for each stage of the innovation-decision process, was proposed to diffuse practice guidelines more effectively. Further, it was recommended that Organized Delivery Systems expand existing administrative databases to include clinical information, decision support systems, and reminder mechanisms to promote and support physician compliance with this and other evidence-based guidelines.
Abstract:
Background. Screening for colorectal cancer (CRC) is considered cost-effective, but screening compliance in the US remains low. There have been very few economic analyses of screening promotion strategies for colorectal cancer. The main aim of the current study is to conduct a cost-effectiveness analysis (CEA), and to examine the uncertainty in its results, of a tailored intervention to promote screening for CRC among patients of a multispecialty clinic in Houston, TX. Methods. The two intervention arms received a PC-based tailored program and web-based educational information to promote CRC screening. The incremental cost of implementing the tailored PC-based program was compared with the web-based education and with the status quo of no intervention, for each unit of effect, after 12 months of delivering the intervention. Uncertainty in the point estimates of cost and effect was analyzed using nonparametric bootstrapping. Results. The cost of implementing the web-based educational intervention was $36.00 per person and the cost of the tailored PC-based interactive intervention was $43.00 per person. The additional cost per person screened for the web-based strategy was $2,374, and the effect of the tailored intervention was negative.
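The nonparametric bootstrap mentioned above resamples individuals with replacement within each arm and recomputes the incremental cost and effect each time. A minimal sketch with invented per-person data (only the $43 and $36 per-person costs loosely echo the abstract; every individual record is made up):

```python
import random

random.seed(1)

# Hypothetical per-person records: (cost in $, screened? 1/0) for the tailored
# PC-based arm and the web-based arm. All individual values are invented.
tailored = [(random.gauss(43, 5), int(random.random() < 0.12)) for _ in range(300)]
web      = [(random.gauss(36, 5), int(random.random() < 0.14)) for _ in range(300)]

def mean(xs):
    return sum(xs) / len(xs)

def incremental(a, b):
    """Incremental mean cost and mean screening rate of arm a over arm b."""
    d_cost = mean([c for c, _ in a]) - mean([c for c, _ in b])
    d_eff = mean([s for _, s in a]) - mean([s for _, s in b])
    return d_cost, d_eff

# Nonparametric bootstrap: resample with replacement within each arm, 1000 times.
boot = [incremental([random.choice(tailored) for _ in tailored],
                    [random.choice(web) for _ in web])
        for _ in range(1000)]

# When the incremental effect straddles zero, the ICER is unstable -- one
# reason a study like this reports uncertainty rather than a single ratio.
share_nonpositive = sum(1 for _, de in boot if de <= 0) / len(boot)
print(f"resamples with non-positive incremental effect: {share_nonpositive:.0%}")
```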
Abstract:
Background and Purpose. There is a growing consensus among health care researchers that quality of life (QoL) is an important outcome and, within the field of family caregiving, cost-effectiveness research is needed to determine which programs have the greatest benefit for family members. This study uses a multidimensional approach to measure the cost-effectiveness of a multicomponent intervention designed to improve the quality of life of spousal caregivers of stroke survivors. Methods. The CAReS study (Committed to Assisting with Recovery after Stroke) was a 5-year prospective, longitudinal intervention study of 159 stroke survivors and their spousal caregivers, beginning at the discharge of the stroke survivor from inpatient rehabilitation to home. CAReS cost data were analyzed to determine the incremental cost of the intervention per caregiver. The mean values of the quality-of-life predictor variables for the intervention group of caregivers were compared with the mean values of usual-care groups reported in the literature. Significant differences were then divided into the cost of the intervention per caregiver to calculate the incremental cost-effectiveness ratio for each predictor variable. Results. The cost of the intervention was approximately $2,500 per caregiver. Statistically significant differences were found between the mean scores for the Perceived Stress and Satisfaction with Life scales, but not for the Self-Reported Health Status, Mutuality, and Preparedness scales. Conclusions. This study provides a prototype cost-effectiveness analysis on which researchers can build. Using a multidimensional approach to measure QoL, as in this analysis, incorporates both the subjective and objective components of QoL. Some of the QoL predictor variable scores differed significantly between the intervention and comparison groups, indicating a significant impact of the intervention. The estimated cost of that impact was also examined. Future studies should use a scale that takes into account both the dimensions of QoL and the weighting each person places on them, to provide a single QoL score per participant. With participant-level cost and outcome data, the uncertainty around each cost-effectiveness ratio can be quantified using the bias-corrected percentile bootstrapping method and plotted as cost-effectiveness acceptability curves.
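A cost-effectiveness acceptability curve of the kind mentioned above can be computed from bootstrap replicates: at each willingness-to-pay threshold, it is the share of replicates whose net monetary benefit (threshold x incremental effect minus incremental cost) is positive. A minimal sketch with invented replicates (only the roughly $2,500 intervention cost comes from the abstract):

```python
import random

random.seed(2)

# Invented bootstrap replicates of (incremental cost, incremental effect);
# the cost is centred on the ~$2,500 per-caregiver figure noted above.
reps = [(random.gauss(2500, 300), random.gauss(0.05, 0.03)) for _ in range(2000)]

def ceac_point(reps, wtp):
    """Probability the intervention is cost-effective at willingness-to-pay
    `wtp`: the fraction of replicates with positive net monetary benefit."""
    return sum(1 for dc, de in reps if wtp * de - dc > 0) / len(reps)

# Sweeping the threshold traces out the acceptability curve.
for wtp in (20_000, 50_000, 100_000, 200_000):
    print(wtp, round(ceac_point(reps, wtp), 2))
```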
Abstract:
Several activities in service-oriented computing, such as automatic composition, monitoring, and adaptation, can benefit from knowing properties of a given service composition before executing it. Among these properties we focus on those related to execution cost and resource usage, in a wide sense, as they can be linked to QoS characteristics. To attain more accuracy, we formulate execution cost / resource usage as functions of input data (or appropriate abstractions thereof) and show how these functions can be used to make better, more informed decisions when performing composition, adaptation, and proactive monitoring. We present an approach to, on one hand, synthesizing these functions automatically from the definitions of the different orchestrations taking part in a system and, on the other hand, effectively using them to reduce the overall costs of non-trivial service-based systems featuring sensitivity to data and the possibility of failure. We validate our approach by means of simulations of scenarios needing runtime selection of services and adaptation due to service failure. A number of rebinding strategies, including the use of cost functions, are compared.
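The core idea, cost functions over (an abstraction of) the input data guiding runtime service selection, can be sketched as follows. The candidate services and their cost functions are invented, not taken from the paper:

```python
# Two hypothetical candidate services for the same task, each annotated with a
# cost function over the input size n (an abstraction of the input data).
def cost_a(n):
    return 5 + 2 * n       # low fixed cost, higher per-item cost

def cost_b(n):
    return 40 + 0.5 * n    # high fixed cost, cheaper per item

def bind(candidates, n):
    """Pick the candidate whose cost function predicts the least cost for n."""
    return min(candidates, key=lambda f: f(n))

print(bind([cost_a, cost_b], 10).__name__)   # small input: cost_a (25 vs 45)
print(bind([cost_a, cost_b], 100).__name__)  # large input: cost_b (205 vs 90)
```

Because the decision depends on the actual input, a single static binding cannot match it: the cheapest service flips as the input grows, which is the data sensitivity the abstract refers to.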
Abstract:
The research in this thesis is related to static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, and termination analysis aims at proving that the execution of a given program will eventually terminate. These analyses are strongly related; indeed, cost analysis techniques rely heavily on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision is related to the quality of the inferred results, scalability to the size of programs that can be analyzed, and applicability to the class of programs that can be handled by the analysis (independently of precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is the heart of most modern cost analyzers, and also where most of the precision and applicability limitations can be found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to smoothly handle both lower and upper bounds, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
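A toy instance of what a cost-relation solver does (hand-picked for illustration, not an example from the thesis): the relation C(0) = 1, C(n) = C(n-1) + n is solved into the closed form n(n+1)/2 + 1, which here is simultaneously an upper and a lower bound because the relation is exact.

```python
# The cost relation evaluated directly by recursion...
def cost_relation(n):
    return 1 if n == 0 else cost_relation(n - 1) + n

# ...and its closed form, the kind of bound a cost-relation solver produces.
def closed_form(n):
    return n * (n + 1) // 2 + 1

# The closed form matches the relation on a range of inputs and evaluates in
# O(1) instead of O(n); real cost relations are harder (non-determinism,
# multiple arguments, inexact bounds), which is where the thesis's tools come in.
assert all(cost_relation(n) == closed_form(n) for n in range(50))
print(closed_form(100))  # 5051
```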
Abstract:
A large number of reinforced concrete (RC) frame structures built in earthquake-prone areas such as Haiti are vulnerable to strong ground motions. Structures in developing countries need low-cost seismic retrofit solutions to reduce their vulnerability. This paper investigates the feasibility of using masonry infill walls to reduce deformations and damage caused by strong ground motions in brittle and weak RC frames designed only for gravity loads. A numerical experiment was conducted in which several idealized prototypes representing RC frame structures of school buildings damaged during the Port-au-Prince earthquake (Haiti, 2010) were strengthened by adding elements representing masonry infill walls arranged in different configurations. Each configuration was characterized by the ratio Rm of the area of walls in the direction of the ground motion (in plan) installed in each story to the total floor area. The numerical representations of these idealized RC frame structures with different values of Rm were (hypothetically) subjected to three major earthquakes with peak ground accelerations of approximately 0.5g. The results of the non-linear dynamic response analyses were summarized in tentative relationships between Rm and four parameters commonly used to characterize the seismic response of structures: interstory drift, Park and Ang indexes of damage, and total amount of energy dissipated by the main frame. It was found that Rm=4% is a reasonable minimum design value for seismic retrofitting purposes in cases in which available resources are not sufficient to afford conventional retrofit measures.
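The ratio Rm above is simply the in-plan area of infill walls in the direction of the ground motion, per story, divided by the total floor area. With invented areas:

```python
# Hypothetical retrofit check against the Rm = 4% minimum suggested above.
# Both areas below are invented for illustration.
wall_area_per_story = 24.0  # m^2 of added masonry infill walls, in-plan,
                            # in the direction of the ground motion
total_floor_area = 600.0    # m^2 total floor area
Rm = wall_area_per_story / total_floor_area
print(f"Rm = {Rm:.1%}")     # 4.0%, meeting the suggested minimum design value
```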
Abstract:
This doctoral thesis, "Aprovechamiento térmico de residuos estériles de carbón para generación eléctrica mediante tecnologías de combustión y gasificación eficientes y con mínimo impacto ambiental" ("Thermal utilisation of waste coal for electricity generation through efficient combustion and gasification technologies with minimal environmental impact"), develops the energy valorisation of waste coal, the residue produced during the extraction and washing of coal. The energy system is at a crossroads: the energy paradigm is shifting, particularly in the electricity generation sector, and a change in how electricity is generated and consumed is under way. Greater health awareness is forcing the containment and elimination of the pollutants generated by the way fossil fuels are currently used, and growing concern about climate change, and about holding the rise in global temperature to 2°C by the end of this century, is driving the development and definitive deployment of technologies to control and reduce CO2 emissions. Generating electricity sustainably is becoming an obligation: generating it while respecting the environment, using natural resources efficiently, and at a competitive cost, with the development of society and the benefit of people in mind. Coal is currently the main source of energy used to generate electricity, and it remains the cheapest form of energy for raising the standard of living of any group or society. Coal is also expected to retain a significant presence in the electricity generation mix and to continue to be extracted in large quantities. Coal production, however, generates a residue, waste coal (termed "culm" in Pennsylvania anthracite mining), produced during the mining and washing of the mineral. The possible uses of waste coal have been studied for decades; today it is used, to a limited extent, in the construction of roads, embankments, and fills, and in the production of some construction materials. This thesis addresses the energy valorisation of waste coal and analyses its potential use to generate electricity in a facility that integrates available technology to minimise environmental impact. It also seeks to exploit the significant sulphur content of waste coal to produce sulphuric acid (H2SO4) as a byproduct, a chemical compound in high demand in the fertiliser industry and with many applications in other markets. A characterisation analysis of waste coal was carried out, identifying the significant parameters and reference values for its use as a fuel, and its use as a fuel for electricity generation was found to be feasible. Although coal has been mined in Spain since the early eighteenth century, the availability of the resource in Spain was evaluated for a more recent period, together with the existing regulations that condition its use in the country. For the period evaluated, more than 68 million tonnes of waste coal were calculated to be available for energy valorisation. After an analysis of the available technology that could be considered for using waste coal as a fuel, four possible plant configurations are proposed, three based on a combustion process and one on a gasification process. After evaluating the four configurations for their technological, innovative, and economic interest, a conceptual analysis is developed for one of them, based on a combustion process. The proposed facility has a capacity of 65 MW and burns a 20/80 (by weight) blend of coal and waste coal. It integrates technology to remove 99.8% of the SO2 present in the flue gas and more than 99% of the particulates generated, and it incorporates an H2SO4 production unit capable of producing 18.5 t/h of product and a capture unit that removes 60% of the CO2 in the flue-gas stream, producing 48 tCO2/h. The net output of the plant is 49.7 MW, and the investment cost was calculated at a unit cost of 3,685 €/kW.
Abstract:
Young birds and mammals frequently solicit food by means of extravagant and apparently costly begging displays. Much attention has been devoted to the idea that these displays are honest signals of need, and that their apparent cost serves to maintain their honesty. Recent analyses, however, have shown that the cost needed to maintain a fully informative, honest signal may often be so great that both offspring (signaler) and parent (receiver) would do better to refrain from communication. This apparently calls into question the relevance of the costly signaling hypothesis. Here, I show that this argument overlooks the impact of sibling competition. When multiple signalers must compete for the attention of a receiver (as is commonly the case in parent–offspring interactions), I show that (all other things being equal) individual equilibrium signal costs will typically be lower. The greater the number of competitors, the smaller the mean cost, though the maximum level of signal intensity employed by very needy signalers may actually increase with the number of competitors. At the same time, costs become increasingly sensitive to relatedness among signalers as opposed to relatedness between signalers and receivers. As a result of these trends, signaling proves profitable for signalers under a much wider range of conditions when there is competition (though it is still likely to be unprofitable for receivers).
Abstract:
This study analyses the price determination of low-cost airlines in Europe and the effect that the Internet has on this strategy. The outcomes obtained reveal that both users and companies benefit from the use of ICTs in the purchase and sale of airline tickets: the Internet allows consumers to increase their bargaining power by comparing different airlines and choosing the most competitive flight, while companies can easily track the behaviour of users to adapt their pricing strategies using internal information. More than 2500 flights of the largest European low-cost airlines were used to carry out the study. The study revealed that the most significant variables for understanding pricing strategies were the number of rivals, the behaviour of demand, and the associated costs. The results indicated that consumers should buy their tickets at least 25 days before departure.
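The direction of the days-to-departure effect noted above can be illustrated with a simple least-squares fit; the fares below are invented, not drawn from the study's 2500-flight sample.

```python
# Invented (days-before-departure, fare in EUR) pairs mimicking the typical
# low-cost pattern of fares rising as departure approaches.
days = [60, 45, 30, 25, 20, 14, 7, 3, 1]
price = [35, 38, 42, 45, 55, 68, 90, 120, 150]

# Ordinary least squares for price = intercept + slope * days.
n = len(days)
mean_x = sum(days) / n
mean_y = sum(price) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(days, price))
         / sum((x - mean_x) ** 2 for x in days))
intercept = mean_y - slope * mean_x

# A negative slope reproduces the pattern behind the "buy at least 25 days
# ahead" advice: each day closer to departure raises the expected fare.
print(f"price ~ {intercept:.1f} + {slope:.2f} * days_to_departure")
```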