943 results for standard-setting organization
Abstract:
Includes bibliography
Abstract:
The dissertation consists of three chapters related to the low-price guarantee marketing strategy and to energy efficiency analysis. The low-price guarantee is a marketing strategy in which firms promise to charge consumers the lowest price among their competitors. Chapter 1 addresses the research question "Does a Low-Price Guarantee Induce Lower Prices?" by looking into the retail gasoline industry in Quebec, where a major branded firm started a low-price guarantee in 1996. Chapter 2 conducts a consumer welfare analysis of low-price guarantees to derive policy implications and offers a new explanation of firms' incentives to adopt a low-price guarantee. Chapter 3 develops energy performance indicators (EPIs) to measure the energy efficiency of manufacturing plants in the pulp, paper, and paperboard industry.
Chapter 1 revisits the traditional view that a low-price guarantee results in higher prices by facilitating collusion. Using accurate market definitions and station-level data from the retail gasoline industry in Quebec, I conduct a descriptive analysis based on stations and price zones to compare price and sales movements before and after the guarantee was adopted. I find that, contrary to the traditional view, the stores that offered the guarantee significantly decreased their prices and increased their sales. I also build a difference-in-differences model and quantify the decrease in the posted price of the stores that offered the guarantee at 0.7 cents per liter. While this change is significant, I do not find a significant response in competitors' prices. The sales of the stores that offered the guarantee increased significantly while the competitors' sales decreased significantly; however, the significance vanishes when I use station-clustered standard errors. Comparing my observations with the predictions of different theories of low-price guarantees, I conclude that the empirical evidence supports the view that the low-price guarantee is a simple commitment device and induces lower prices.
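The difference-in-differences logic described in this abstract can be sketched in a few lines. This is a minimal illustration on synthetic data, not the dissertation's actual data or specification: the station counts, price levels, common trend, and the recovered 0.7 cent effect are all illustrative assumptions.

```python
# Hedged sketch: a by-hand difference-in-differences estimate on synthetic
# station-level gasoline prices. All numbers here are illustrative.
import random

random.seed(0)

def did_estimate(prices):
    """prices maps (group, period) -> list of posted prices.
    DiD = (treated post - treated pre) - (control post - control pre)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(prices[("treated", "post")]) - mean(prices[("treated", "pre")])) \
         - (mean(prices[("control", "post")]) - mean(prices[("control", "pre")]))

# Synthetic data: treated stations cut prices by 0.7 cents/liter after adopting
# the guarantee; both groups share a common 0.2 cent upward trend.
prices = {
    ("treated", "pre"):  [90.0 + random.gauss(0, 0.05) for _ in range(50)],
    ("treated", "post"): [90.0 + 0.2 - 0.7 + random.gauss(0, 0.05) for _ in range(50)],
    ("control", "pre"):  [91.0 + random.gauss(0, 0.05) for _ in range(50)],
    ("control", "post"): [91.0 + 0.2 + random.gauss(0, 0.05) for _ in range(50)],
}

effect = did_estimate(prices)  # recovers roughly -0.7 by construction
```

The double difference removes both the fixed level gap between groups and the common time trend, which is why the estimate isolates the guarantee's effect under the parallel-trends assumption; the chapter's actual model would additionally use station-clustered standard errors.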
Chapter 2 conducts a consumer welfare analysis of low-price guarantees to address antitrust concerns and potential government regulation, and it explains firms' potential incentives to adopt a low-price guarantee. Using station-level data from the retail gasoline industry in Quebec, I estimate consumers' demand for gasoline with a structural model of spatial competition that incorporates the low-price guarantee as a commitment device, allowing firms to pre-commit to charging the lowest price among their competitors. A counterfactual analysis under Bertrand competition shows that the stores that offered the guarantee attracted substantially more consumers and decreased their posted price by 0.6 cents per liter. Although the matching stores suffered a decrease in profits from gasoline sales, they are incentivized to adopt the low-price guarantee to attract more consumers to the store, likely increasing profits at attached convenience stores. Firms have strong incentives to adopt a low-price guarantee on the product to which their consumers are most price-sensitive, while earning a profit from the products not covered by the guarantee. I estimate that consumers earn about 0.3% more surplus when the low-price guarantee is in place, which suggests that the authorities need not be concerned about, or regulate, low-price guarantees. In Appendix B, I also propose an empirical model to examine how low-price guarantees would change consumer search behavior and whether consumer search plays an important role in estimating consumer surplus accurately.
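The consumer-surplus comparison described above can be illustrated with the standard logit formula, where expected surplus per consumer is the log-sum-exp of utilities divided by the price coefficient. This is a toy sketch, not the chapter's estimated structural model: the price-sensitivity parameter, store utilities, and price levels are invented for illustration.

```python
# Hedged sketch of logit consumer surplus: u_j = delta_j - alpha * p_j,
# expected surplus = log(sum_j exp(u_j)) / alpha. All parameters illustrative.
import math

alpha = 2.0                  # price-sensitivity coefficient (assumed)
delta = [1.0, 1.0, 1.0]      # non-price utilities of three stores (assumed)

def surplus(prices):
    """Expected consumer surplus of a logit demand system (per consumer)."""
    return math.log(sum(math.exp(d - alpha * p)
                        for d, p in zip(delta, prices))) / alpha

base = surplus([1.00, 1.00, 1.00])
guar = surplus([0.994, 1.00, 1.00])  # store 0 commits to a 0.6-cent price cut
gain = guar - base                   # small but strictly positive surplus gain
```

Because the log-sum-exp is increasing in every utility, any committed price cut weakly raises consumer surplus in this setting, which is the qualitative direction of the chapter's 0.3% estimate.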
Chapter 3, joint with Gale Boyd, describes work with the pulp, paper, and paperboard (PP&PB) industry to provide a plant-level indicator of energy efficiency for facilities that produce various types of paper products in the United States. Organizations that implement strategic energy management programs undertake a set of activities that, if carried out properly, have the potential to deliver sustained energy savings. Energy performance benchmarking is a key activity of strategic energy management and one way to enable companies to set energy efficiency targets for manufacturing facilities. The opportunity to assess plant energy performance through a comparison with similar plants in the same industry is a highly desirable and strategic method of benchmarking for industrial energy managers. However, the energy performance data needed for such industry benchmarking are usually unavailable to most industrial energy managers. The U.S. Environmental Protection Agency (EPA), through its ENERGY STAR program, seeks to overcome this barrier through the development of manufacturing sector-based plant energy performance indicators (EPIs) that encourage U.S. industries to use energy more efficiently. In the development of the energy performance indicator tools, consideration is given to the role that performance-based indicators play in motivating change; the steps necessary for indicator development, from interacting with an industry to securing adequate data for the indicator; and the actual application and use of an indicator once complete. How indicators are employed in the EPA's efforts to encourage industries to voluntarily improve their use of energy is discussed as well. The chapter describes the data and statistical methods used to construct the EPI for plants within selected segments of the pulp, paper, and paperboard industry: specifically, pulp mills and integrated paper and paperboard mills.
The individual equations are presented, as are the instructions for using those equations as implemented in an associated Microsoft Excel-based spreadsheet tool.
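The benchmarking idea behind a plant-level EPI can be sketched as follows: regress plant energy use on output, then score each plant by where its residual falls in the industry distribution, so that plants using less energy than predicted earn higher scores. This is a simplified illustration of the approach, not the chapter's actual model; the functional form, the synthetic data, and the `epi_score` helper are all assumptions made for the sketch (the ENERGY STAR EPIs use richer statistical specifications).

```python
# Hedged sketch of an energy performance indicator (EPI) score:
# score = share of industry plants a given plant outperforms, scaled to 0-100.
# The linear model and synthetic plant data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic plants: energy use grows with output, plus plant-level noise.
output = rng.uniform(100, 1000, size=200)               # e.g. tons of product
energy = 5.0 + 0.02 * output + rng.normal(0, 1, 200)    # e.g. GJ, illustrative

# Fit expected energy use given output (ordinary least squares).
X = np.column_stack([np.ones_like(output), output])
beta, *_ = np.linalg.lstsq(X, energy, rcond=None)
residual = energy - X @ beta   # negative = uses less energy than predicted

def epi_score(r, residuals):
    """Percentile-style score: fraction of plants with a worse (larger) residual."""
    return 100.0 * np.mean(residuals >= r)

scores = np.array([epi_score(r, residual) for r in residual])
best = int(np.argmin(residual))   # the most efficient plant tops the ranking
```

In the actual tool, a plant manager enters production and energy data into the spreadsheet, and the implemented equations return the corresponding efficiency score relative to the industry distribution.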
Abstract:
The security of strong designated verifier (SDV) signature schemes has thus far been analyzed only in a two-user setting. We observe that security in a two-user setting does not necessarily imply the same in a multi-user setting for SDV signatures. Moreover, we show that existing security notions do not adequately model the security of SDV signatures even in a two-user setting. We then propose revised notions of security in a multi-user setting and show that no existing scheme satisfies these notions. A new SDV signature scheme is then presented and proven secure under the revised notions in the standard model. For the purpose of constructing the SDV signature scheme, we propose a one-pass key establishment protocol in the standard model, which is of independent interest in itself.
Abstract:
We consider one-round key exchange protocols secure in the standard model. The security analysis uses the powerful security model of Canetti and Krawczyk and a natural extension of it to the ID-based setting. It is shown how KEMs can be used in a generic way to obtain two different protocol designs with progressively stronger security guarantees. A detailed analysis of the performance of the protocols is included; surprisingly, when instantiated with specific KEM constructions, the resulting protocols are competitive with the best previous schemes that have proofs only in the random oracle model.
Abstract:
The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the participating trusted authorities. We also introduce a new security definition for chosen-ciphertext-secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen-ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. The second area is key encapsulation: we construct a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes: we leave the identity-based part of the proof intact and extend it only to cover the part introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of the generic constructions for certificateless key encapsulation in the standard model. The third area in which the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure when considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement.
We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.
Abstract:
An iterative strategy is proposed for finding the optimal rating and location of fixed and switched capacitors in distribution networks. The substation load tap changer tap is also set during this procedure. A modified discrete particle swarm optimization algorithm is employed in the proposed strategy. The objective function is composed of the distribution line loss cost and the capacitor investment cost. The line loss is calculated by approximating the load duration curve with multiple load levels. The constraints are the bus voltages and feeder currents, which must be maintained within their standard ranges. To validate the proposed method, two case studies are tested. The first is the semi-urban 37-bus distribution system connected at bus 2 of the Roy Billinton Test System, located on the secondary side of a 33/11 kV distribution substation. The second is a 33 kV distribution network based on a modification of the 18-bus IEEE distribution system. The results are compared with prior publications to illustrate the accuracy of the proposed strategy.
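The particle swarm mechanics underlying the strategy above can be sketched in a few lines: particles explore candidate solutions and are pulled toward both their own best position and the swarm's best. This is a generic continuous PSO on a toy objective, offered only as an illustration; the paper's actual variant is a modified discrete PSO, and its real objective evaluates line-loss cost over the load duration curve plus capacitor investment cost under voltage and current constraints. The toy cost function and all parameter values here are assumptions.

```python
# Hedged sketch of particle swarm optimization (continuous, textbook form).
# The placeholder objective stands in for "line-loss cost + capacitor cost";
# swarm size, inertia, and acceleration coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def objective(x):
    # Placeholder cost, minimized at x = (3, 7). A real implementation would
    # run a load flow per load level and price the losses and capacitors.
    return (x[0] - 3.0) ** 2 + (x[1] - 7.0) ** 2

n_particles, dim, iters = 20, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights

x = rng.uniform(-10, 10, (n_particles, dim))   # particle positions
v = np.zeros((n_particles, dim))               # particle velocities
pbest = x.copy()                               # per-particle best positions
pbest_val = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()     # swarm-wide best position

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved] = x[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

best_cost = objective(gbest)   # converges toward 0 near (3, 7)
```

A discrete variant, as used in the paper, would round or map positions onto candidate capacitor sizes and bus locations at each update instead of moving in a continuous space.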
Abstract:
Background: The high rates of comorbid depression and substance use in young people have been associated with a range of adverse outcomes, yet few treatment studies have been conducted with this population. Objective: To determine whether the addition of Motivational Interviewing and Cognitive Behaviour Therapy (MI/CBT) to standard alcohol and other drug (AOD) care improves the outcomes of young people with comorbid depression and substance use. Participants and Setting: Participants comprised 88 young people with comorbid depression (Kessler 10 score > 17) and substance use (mainly alcohol/cannabis) seeking treatment at two youth AOD services in Melbourne, Australia. Sixty young people received MI/CBT in addition to standard care (SC) and 28 received SC alone. Outcome Measures: Primary outcome measures were depressive symptoms and drug and alcohol use in the past month. Assessments were conducted at baseline and at 3- and 6-month follow-up. Results and Conclusions: The addition of MI/CBT to SC was associated with a significantly greater rate of change in depression, cannabis use, motivation to change substance use, and social contact in the first 3 months. However, those who received SC achieved similar improvements on these variables by the 6-month follow-up. All young people achieved significant improvements in functioning and quality-of-life variables over time, regardless of treatment group. No changes in alcohol or other drug use were found in either group. The delivery of MI/CBT in addition to standard AOD care may offer accelerated treatment gains in the short term.
Abstract:
Little published information exists about the issues involved in conducting complex intravenous medication therapy in patients' homes. An ethnographic study of a local hospital-in-the-home program in the Australian Capital Territory explored this phenomenon to identify the factors that had an impact on the use of medicines in the home environment. This article focuses on one of the three themes identified in the study: Clinical Practice. Within this theme, topics related to the organization and management of intravenous medications, the geography and diversity of the patient caseload, and communication in the practice setting are discussed. These findings have important implications for policy development and for establishing a research agenda for hospital-in-the-home services.
Abstract:
There is a need for public health interventions to be based on the best available evidence. Unfortunately, well-conducted studies from settings similar to the one in which an intervention is to be implemented are often not available. Health practitioners are therefore forced to judge whether interventions proven effective in one setting are likely to make a difference in their own. The framework of Wang et al. has been proposed to help with this process. This paper provides a case study of the application of the framework to a decision-making process regarding antenatal care in Aboriginal and Torres Strait Islander communities in Queensland. The method involved undertaking a systematic search of the currently available evidence, then conducting a second literature search to determine factors that may affect the applicability and transferability of these interventions to these communities. Finally, in consideration of these factors, clinical judgements were made on the applicability and transferability of the interventions. This method identified several interventions or strategies for which there was evidence of improved antenatal care or outcomes. Using the framework, we concluded that several of these effective interventions would be feasible in Aboriginal and Torres Strait Islander communities within Queensland.
Abstract:
Objective: To determine the burden of hospitalised, radiologically confirmed pneumonia (World Health Organization protocol) in Northern Territory (NT) Indigenous children. Design, Setting and Participants: Historical, observational study of all hospital admissions, for any diagnosis, of NT-resident Indigenous children aged ≥ 29 days and < 5 years, 1 April 1997 to 31 March 2005. Intervention: All chest radiographs taken during these admissions, regardless of diagnosis, were assessed for pneumonia in accordance with the WHO protocol. Main Outcome Measure: The primary outcome was endpoint consolidation (dense fluffy consolidation [alveolar infiltrate] of a portion of a lobe or the entire lung) present on a chest radiograph within 3 days of hospitalisation. Results: We analysed data on 24 115 hospitalised episodes of care for 9492 children and 13 683 chest radiographs. The average annual cumulative incidence of endpoint consolidation was 26.6 per 1000 population per year (95% CI, 25.3-27.9): 57.5 per 1000 per year in infants aged 1-11 months, 38.3 per 1000 per year in those aged 12-23 months, and 13.3 per 1000 per year in those aged 24-59 months. In all age groups, rates of endpoint consolidation in children in the arid southern region of the NT were about twice those of children in the tropical northern region. Conclusion: The rates of severe pneumonia in hospitalised NT Indigenous children are among the highest reported in the world. Reducing this unacceptable burden of disease should be a national health priority.
Abstract:
Purpose: This randomised trial was designed to investigate the activity and toxicity of continuous-infusion etoposide phosphate (EP), targeting a plasma etoposide concentration of either 3 μg/ml for five days (5d) or 1 μg/ml for 15 days (15d), in previously untreated SCLC patients with extensive disease. Patients and Methods: EP was used as a single agent. Plasma etoposide concentration was monitored on days 2 and 4 in patients receiving 5d EP and on days 2, 5, 8 and 11 in patients receiving 15d EP, with infusion modification to ensure target concentrations were achieved. Treatment was repeated every 21 days for up to six cycles, with a 25% reduction in target concentration in patients with toxicity. Results: The study was closed early after entry of 29 patients (14 with 5d EP, 15 with 15d EP). Objective responses were seen in seven of 12 (58%; confidence interval (CI): 27%-85%) evaluable patients after 5d EP, and in two of 14 (14%; CI: 4%-42%) evaluable patients after 15d EP (P = 0.038). Grade 3 or 4 neutropenia or leucopenia during the first cycle of treatment was observed in six of 12 patients after 5d EP and none of 14 patients after 15d EP (P = 0.004), with a median nadir WBC count of 2.6 × 10⁹/l after 5d EP and 5.0 × 10⁹/l after 15d EP (P = 0.017). Only one of 49 cycles of 15d EP was associated with grade 3 or worse haematological toxicity, compared with 14 of 61 cycles of 5d EP. Conclusions: Although the number of patients entered into this trial was small, the low activity seen at 1 μg/ml in the 15d arm suggests that this concentration is below the therapeutic window in this setting. Further concentration-controlled studies with prolonged EP infusions are required.
Abstract:
An encryption scheme is non-malleable if giving an encryption of a message to an adversary does not increase its chances of producing an encryption of a related message (under a given public key). Fischlin introduced a stronger notion, known as complete non-malleability, which requires attackers to have negligible advantage even if they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti later proposed a comparison-based definition of this security notion, which is more in line with the well-studied definitions proposed by Bellare et al. They also provided additional feasibility results with two constructions of completely non-malleable schemes: one in the common reference string model using non-interactive zero-knowledge (NIZK) proofs, and another using interactive encryption schemes. Thus, the only previously known completely non-malleable (and non-interactive) scheme in the standard model is quite inefficient, as it relies on the generic NIZK approach; the existence of efficient schemes in the common reference string model was left as an open problem. Recently, two efficient public-key encryption schemes have been proposed by Libert and Yung, and by Barbosa and Farshim, both based on pairing-based identity-based encryption. At ACISP 2011, Sepahi et al. proposed a method to achieve completely non-malleable encryption in the public-key setting using lattices, but no security proof was given for the proposed scheme. In this paper we review that scheme and provide its security proof in the standard model. Our study shows that Sepahi's scheme will remain secure even in a post-quantum world, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best known classical (i.e., non-quantum) algorithms.
Abstract:
Cancer can be defined as a deregulation or hyperactivity of the ongoing network of intracellular and extracellular signaling events. Reverse-phase protein microarray technology may offer a new opportunity to measure and profile these signaling pathways, providing data on post-translational phosphorylation events not obtainable by gene microarray analysis. Treatment of ovarian epithelial carcinoma almost always takes place in a metastatic setting, since unfortunately the disease is often not detected until its later stages. Thus, in addition to elucidating the molecular network within a tumor specimen, critical questions are to what extent signaling changes occur upon metastasis and whether there are common pathway elements that arise in the metastatic microenvironment. For individualized combinatorial therapy, ideal therapeutic selection based on proteomic mapping of phosphorylation end points may require evaluation of the patient's metastatic tissue. Extending these findings to the bedside will require the development of optimized protocols and reference standards. We have developed a reference standard based on a mixture of phosphorylated peptides to begin to address this challenge.