12 results for dynamic adverse selection
in Aston University Research Archive
Abstract:
The aim of our paper is to examine whether Exchange Traded Funds (ETFs) diversify away the private information of informed traders. We apply the spread decomposition models of Glosten and Harris (1988) and Madhavan, Richardson and Roomans (1997) to a sample of ETFs and their control securities. Our results indicate that ETFs have significantly lower adverse selection costs than their control securities. This suggests that private information is diversified away for these securities. Our results therefore offer one explanation for the rapid growth in the ETF market.
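For orientation, the two decomposition models named in this abstract are conventionally estimated from trade-by-trade regressions of the following general form. These are standard statements of the models, reproduced here for reference rather than taken from the paper itself:

Glosten and Harris (1988):
\[
\Delta P_t = c_0\,\Delta Q_t + c_1\,\Delta(Q_t V_t) + z_0\,Q_t + z_1\,Q_t V_t + \varepsilon_t ,
\]
where \(Q_t\) is the trade-direction indicator (+1 for buys, -1 for sells), \(V_t\) is trade volume, \(c_0 + c_1 V_t\) is the transitory (order-processing) cost, and \(z_0 + z_1 V_t\) is the adverse selection cost.

Madhavan, Richardson and Roomans (1997):
\[
\Delta P_t = (\phi + \theta)\,x_t - (\phi + \rho\theta)\,x_{t-1} + u_t ,
\]
where \(x_t\) is the signed trade indicator, \(\theta\) the adverse selection (permanent) component, \(\phi\) the transitory cost, and \(\rho\) the first-order autocorrelation of order flow.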
Abstract:
This thesis focuses on three main questions. The first uses Exchange-Traded Funds (ETFs) to evaluate estimates of adverse selection costs obtained from spread decomposition models. The second compares the Probability of Informed Trading (PIN) in Exchange-Traded Funds with that of control securities. The third examines intra-day ETF trading patterns. The spread decomposition models evaluated are Glosten and Harris (1988); George, Kaul, and Nimalendran (1991); Lin, Sanger, and Booth (1995); Madhavan, Richardson, and Roomans (1997); and Huang and Stoll (1997). Using the characteristics of ETFs, it is shown that only the Glosten and Harris (1988) and Madhavan et al. (1997) models provide theoretically consistent results. When the PIN measure is employed, ETFs are shown to have greater PINs than control securities. The investigation of intra-day trading patterns shows that return volatility and trading volume have a U-shaped intra-day pattern. A study of trading systems shows that ETFs on the American Stock Exchange (AMEX) have a U-shaped intra-day pattern of bid-ask spreads, while ETFs on NASDAQ do not. Specifically, ETFs on NASDAQ have higher bid-ask spreads at the market opening and the lowest bid-ask spreads in the middle of the day. At the close of the market, the bid-ask spread of ETFs on NASDAQ is slightly elevated compared with mid-day.
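For context, the PIN measure referred to in the second question is conventionally computed from the structural parameters of the Easley-O'Hara sequential trade framework; the formulation below is the standard one and is quoted for orientation rather than taken from the thesis:
\[
\mathrm{PIN} = \frac{\alpha \mu}{\alpha \mu + \varepsilon_b + \varepsilon_s} ,
\]
where \(\alpha\) is the probability that an information event occurs, \(\mu\) the arrival rate of informed traders, and \(\varepsilon_b, \varepsilon_s\) the arrival rates of uninformed buy and sell orders.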
Abstract:
The aim of this paper is to examine the short-term dynamics of foreign exchange rate spreads. Using a vector autoregressive (VAR) model, we show that most of the variation in the spread comes from the long-run dependencies between past and future spreads rather than being caused by changes in inventory, adverse selection, cost of carry, or order processing costs. We apply the Integrated Cumulative Sum of Squares (ICSS) algorithm of Inclan and Tiao (1994) to discover how often spread volatility changes. We find that spread volatility shifts are relatively uncommon and that shifts in one currency spread tend not to spill over to other currency spreads. © 2013.
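As a pointer to how the ICSS procedure works, the Inclan and Tiao (1994) test locates variance shifts from the centred cumulative sum of squares. The minimal Python sketch below shows the single-break statistic only; the function name and the use of a generic series as input are my own, and the full algorithm applies this test iteratively over sub-samples between detected break points (for the application above, the input would be changes in quoted spreads).

```python
import numpy as np

def icss_single_break(series, crit=1.358):
    """Single-break version of the Inclan-Tiao (1994) cumulative sum of
    squares test. Returns the candidate break index, the test statistic,
    and whether it exceeds the approximate 5% critical value (~1.358)."""
    x = np.asarray(series, dtype=float)
    T = len(x)
    C = np.cumsum(x ** 2)                  # C_k: cumulative sum of squares
    k = np.arange(1, T + 1)
    D = C / C[-1] - k / T                  # centred statistic D_k
    stat = np.sqrt(T / 2.0) * np.abs(D)
    k_star = int(np.argmax(stat))          # most likely variance-shift point
    return k_star, float(stat[k_star]), bool(stat[k_star] > crit)
```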
Abstract:
In this article we evaluate the most widely used spread decomposition models using Exchange Traded Funds (ETFs). These funds are an example of a basket security and allow the diversification of private information, causing these securities to have lower adverse selection costs than individual securities. We use this feature as a criterion for evaluating spread decomposition models. Comparisons of adverse selection costs for ETFs and control securities obtained from spread decomposition models show that only the Glosten-Harris (1988) and the Madhavan-Richardson-Roomans (1997) models provide estimates of the spread that are consistent with the diversification of private information in a basket security. Our results are robust even after controlling for the stock exchange. © 2011 Copyright Taylor and Francis Group, LLC.
Abstract:
Due to dynamic variability, identifying the specific conditions under which non-functional requirements (NFRs) are satisfied may only be possible at runtime. Therefore, it is necessary to consider the dynamic treatment of relevant information during requirements specification. The associated data can be gathered by monitoring the execution of the application and its underlying environment to support reasoning about how the current application configuration is fulfilling the established requirements. This paper presents a dynamic decision-making infrastructure to support both NFR representation and monitoring, and to reason about the degree of satisfaction of NFRs at runtime. The infrastructure is composed of: (i) an extended feature model aligned with a domain-specific language for representing NFRs to be monitored at runtime; (ii) a monitoring infrastructure to continuously assess NFRs at runtime; and (iii) a flexible decision-making process to select the best available configuration based on the satisfaction degree of the NFRs. The evaluation of the approach has shown that it is able to choose application configurations that fit user NFRs well based on runtime information. The evaluation also revealed that the proposed infrastructure provided consistent indicators regarding the best application configurations that fit user NFRs. Finally, a benefit of our approach is that it allows us to quantify the level of satisfaction with respect to the NFR specification.
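To make the selection of the "best available configuration" concrete, one can picture each candidate configuration being scored against the monitored satisfaction degree of each NFR. The sketch below is purely illustrative; the class, the weighting scheme, and all names are hypothetical and are not the infrastructure described in the paper.

```python
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    satisfaction: dict  # monitored satisfaction degree per NFR, in [0, 1]

def best_configuration(candidates, weights):
    """Return the configuration with the highest weighted NFR satisfaction.
    `weights` expresses the relative importance the user attaches to each NFR."""
    def score(cfg):
        return sum(weights.get(nfr, 0.0) * degree
                   for nfr, degree in cfg.satisfaction.items())
    return max(candidates, key=score)

# Example: satisfaction degrees as produced by a runtime monitor.
configs = [
    Configuration("low-power", {"response_time": 0.60, "availability": 0.90}),
    Configuration("high-performance", {"response_time": 0.95, "availability": 0.70}),
]
print(best_configuration(configs, {"response_time": 0.7, "availability": 0.3}).name)
```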
Abstract:
Ontologies have become a key component in the Semantic Web and knowledge management. One accepted goal is to construct ontologies from a domain-specific set of texts. An ontology reflects the background knowledge used in writing and reading a text. However, a text is an act of knowledge maintenance, in that it reinforces the background assumptions, alters links and associations in the ontology, and adds new concepts. This means that background knowledge is rarely expressed in a machine-interpretable manner. When it is, it is usually at the conceptual boundaries of the domain, e.g. in textbooks or when ideas are borrowed into other domains. We argue that a partial solution to this lies in searching external resources such as specialized glossaries and the internet. We show that a random selection of concept pairs from the Gene Ontology does not occur in a relevant corpus of texts from the journal Nature. In contrast, a significant proportion can be found on the internet. Thus, we conclude that sources external to the domain corpus are necessary for the automatic construction of ontologies.
Abstract:
Detection and interpretation of adverse signals during preclinical and clinical stages of drug development inform the benefit-risk assessment that determines suitability for use in real-world situations. This review considers some recent signals associated with diabetes therapies, illustrating the difficulties in ascribing causality and evaluating absolute risk, predictability, prevention, and containment. Individual clinical trials are necessarily restricted for patient selection, number, and duration; they can introduce allocation and ascertainment bias and they often rely on biomarkers to estimate long-term clinical outcomes. In diabetes, the risk perspective is inevitably confounded by emergent comorbid conditions and potential interactions that limit therapeutic choice, hence the need for new therapies and better use of existing therapies to address the consequences of protracted glucotoxicity. However, for some therapies, the adverse effects may take several years to emerge, and it is evident that faint initial signals under trial conditions cannot be expected to foretell all eventualities. Thus, as information and experience accumulate with time, it should be accepted that benefit-risk deliberations will be refined, and adjustments to prescribing indications may become appropriate. © 2013 by the American Diabetes Association.
Abstract:
How speech is separated perceptually from other speech remains poorly understood. Recent research indicates that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This study explored the effects of manipulating the depth and pattern of that variation. Three formants (F1+F2+F3) constituting synthetic analogues of natural sentences were distributed across the 2 ears, together with a competitor for F2 (F2C) that listeners must reject to optimize recognition (left = F1+F2C; right = F2+F3). The frequency contours of F1 − F3 were each scaled to 50% of their natural depth, with little effect on intelligibility. Competitors were created either by inverting the frequency contour of F2 about its geometric mean (a plausibly speech-like pattern) or using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Adding a competitor typically reduced intelligibility; this reduction depended on the depth of F2C variation, being greatest for 100%-depth, intermediate for 50%-depth, and least for 0%-depth (constant) F2Cs. This suggests that competitor impact depends on overall depth of frequency variation, not depth relative to that for the target formants. The absence of tuning (i.e., no minimum in intelligibility for the 50% case) suggests that the ability to reject an extraneous formant does not depend on similarity in the depth of formant-frequency variation. Furthermore, triangle-wave competitors were as effective as their more speech-like counterparts, suggesting that the selection of formants from the ensemble also does not depend on speech-specific constraints.
Abstract:
A segment selection method controlled by Quality of Experience (QoE) factors for Dynamic Adaptive Streaming over HTTP (DASH) is presented in this paper. Current rate adaptation algorithms aim to eliminate buffer underrun events by significantly reducing the code rate when experiencing pauses in replay. In reality, however, viewers may choose to accept a level of buffer underrun in order to achieve an improved level of picture fidelity, or to accept a degradation in picture fidelity in order to maintain service continuity. The rate adaptation scheme proposed in our work can maximize user QoE in terms of both continuity and fidelity (picture quality) in DASH applications. It is shown that this scheme achieves a high level of quality for streaming services, especially at low packet loss rates. Our scheme can also maintain the best trade-off between continuity-based quality and fidelity-based quality by determining proper threshold values for the level of quality intended by clients with different quality requirements. In addition, the integration of the rate adaptation mechanism with the scheduling process is investigated in the context of a mobile communication network, and the related performance is analyzed.
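To illustrate the continuity/fidelity trade-off described above, a buffer-threshold rate-adaptation rule of the same general flavour can be sketched as follows. The thresholds, bitrates, and function name are invented for illustration and are not the scheme proposed in the paper.

```python
def select_next_bitrate(buffer_s, bitrates_kbps, current_kbps,
                        low_s=5.0, high_s=15.0):
    """Toy QoE-driven segment selection for a DASH client.
    Below `low_s` seconds of buffered media, continuity is protected by
    dropping to the lowest representation; above `high_s`, fidelity is
    favoured by stepping up one representation; in between, the current
    rate is held to avoid oscillation."""
    rates = sorted(bitrates_kbps)
    if buffer_s < low_s:
        return rates[0]
    if buffer_s > high_s:
        higher = [r for r in rates if r > current_kbps]
        return higher[0] if higher else current_kbps
    return current_kbps

# Example: 8 s buffered, representations at 500/1000/2000/4000 kbit/s.
print(select_next_bitrate(8.0, [500, 1000, 2000, 4000], current_kbps=1000))
```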
Abstract:
One of the reasons for using variability in the software product line (SPL) approach (see Apel et al., 2006; Figueiredo et al., 2008; Kastner et al., 2007; Mezini & Ostermann, 2004) is to delay a design decision (Svahnberg et al., 2005). Instead of deciding on what system to develop in advance, with the SPL approach a set of components and a reference architecture are specified and implemented (during domain engineering, see Czarnecki & Eisenecker, 2000), out of which individual systems are composed at a later stage (during application engineering, see Czarnecki & Eisenecker, 2000). By postponing design decisions in this manner, it is possible to better fit the resultant system to its intended environment, for instance, to allow selection of the system interaction mode to be made after the customers have purchased particular hardware, such as a PDA vs. a laptop. Such variability is expressed through variation points, which are locations in a software-based system where choices are available for defining a specific instance of a system (Svahnberg et al., 2005). Until recently it had sufficed to postpone committing to a specific system instance until just before system runtime. However, in recent years the use and expectations of software systems in human society have undergone significant changes. Today's software systems need to be always available, highly interactive, and able to continuously adapt to varying environment conditions, user characteristics, and the characteristics of other systems that interact with them. Such systems, called adaptive systems, are expected to be long-lived and able to undertake adaptations with little or no human intervention (Cheng et al., 2009). Therefore, variability now needs to be present also at system runtime, which leads to the emergence of a new type of system: adaptive systems with dynamic variability.
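As a minimal illustration of a variation point resolved at runtime rather than at design time, consider the interaction-mode example mentioned above. The names and mapping below are hypothetical and not drawn from the chapter.

```python
# Illustrative runtime variation point: the interaction mode is bound only
# after the deployment hardware (e.g. PDA vs. laptop) is known.
INTERACTION_MODES = {
    "pda": "touch_ui",
    "laptop": "keyboard_ui",
}

def bind_interaction_mode(detected_hardware: str) -> str:
    """Resolve the 'interaction mode' variation point at runtime."""
    return INTERACTION_MODES.get(detected_hardware, "keyboard_ui")

print(bind_interaction_mode("pda"))  # -> touch_ui
```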
Abstract:
Introduction: The fluocinolone acetonide slow-release implant (Iluvien®) was approved in December 2013 in the UK for the treatment of pseudophakic eyes with DMO that is unresponsive to other available therapies. This approval was based on evidence from the FAME trials, which were conducted at a time when ranibizumab was not available. There is a paucity of data on the implementation of guidance on selecting patients for this treatment modality and on the real-world outcomes of fluocinolone therapy, especially in patients who have been unresponsive to ranibizumab therapy. Method: Consecutive patients treated with fluocinolone between January and August 2014 at three sites were included in a retrospective study to evaluate the selection criteria used, baseline characteristics, and clinical outcomes at the 3-month time point. Results: Twenty-two pseudophakic eyes of 22 consecutive patients were included. The majority of patients had prior therapy with multiple intravitreal anti-VEGF injections. Four eyes had controlled glaucoma. At baseline, mean VA and CRT were 50.7 letters and 631 μm, respectively. After 3 months, 18 patients had improved CRT, of whom 15 also had improved VA. No adverse effects were noted. One additional patient required IOP-lowering medication. Despite being unresponsive to multiple prior therapies, including laser and anti-VEGF injections, switching to fluocinolone achieved treatment benefit. Conclusion: The patient-level selection criteria proposed by the NICE guidance on fluocinolone appeared to be implemented. The data from this study provide new evidence on early outcomes following fluocinolone therapy in eyes with DMO that had not responded to laser and other intravitreal agents.