12 results for DYNAMIC PORTFOLIO SELECTION

in Aston University Research Archive


Relevance:

80.00%

Publisher:

Abstract:

A two-factor no-arbitrage model is used to provide a theoretical link between stock and bond market volatility. While this model suggests that short-term interest rate volatility may, at least in part, drive both stock and bond market volatility, the empirical evidence suggests that past bond market volatility affects both markets and feeds back into short-term yield volatility. The empirical modelling goes on to examine the (time-varying) correlation structure between volatility in the stock and bond markets and finds that the sign of this correlation has reversed over the last 20 years. This has important implications for portfolio selection in financial markets. © 2005 Elsevier B.V. All rights reserved.

Relevance:

40.00%

Publisher:

Abstract:

Due to dynamic variability, identifying the specific conditions under which non-functional requirements (NFRs) are satisfied may only be possible at runtime. Therefore, it is necessary to consider the dynamic treatment of relevant information during requirements specification. The associated data can be gathered by monitoring the execution of the application and its underlying environment, to support reasoning about how the current application configuration is fulfilling the established requirements. This paper presents a dynamic decision-making infrastructure to support both NFR representation and monitoring, and to reason about the degree of satisfaction of NFRs at runtime. The infrastructure is composed of: (i) an extended feature model aligned with a domain-specific language for representing NFRs to be monitored at runtime; (ii) a monitoring infrastructure to continuously assess NFRs at runtime; and (iii) a flexible decision-making process to select the best available configuration based on the satisfaction degree of the NFRs. The evaluation of the approach has shown that it is able to choose application configurations that fit user NFRs well, based on runtime information. The evaluation also revealed that the proposed infrastructure provided consistent indicators regarding the best application configurations that fit user NFRs. Finally, a benefit of our approach is that it allows us to quantify the level of satisfaction with respect to the NFR specification.
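The core of the decision-making step described above can be illustrated with a minimal sketch: score each candidate configuration by a weighted aggregate of its monitored per-NFR satisfaction degrees and pick the highest-scoring one. All names, weights, and measurements here are hypothetical, not the paper's actual model or data.

```python
# Hypothetical sketch of NFR-driven configuration selection.
# `nfr_weights` encodes user priorities; `measurements` holds the
# satisfaction degrees (in [0, 1]) observed by a runtime monitor.

def satisfaction(config, nfr_weights, measurements):
    """Weighted average of the per-NFR satisfaction degrees of a configuration."""
    total = sum(nfr_weights.values())
    return sum(w * measurements[config][nfr]
               for nfr, w in nfr_weights.items()) / total

def best_configuration(configs, nfr_weights, measurements):
    """Pick the configuration with the highest aggregate NFR satisfaction."""
    return max(configs, key=lambda c: satisfaction(c, nfr_weights, measurements))

weights = {"latency": 0.6, "energy": 0.4}            # illustrative priorities
monitored = {                                        # illustrative runtime data
    "config_A": {"latency": 0.9, "energy": 0.5},
    "config_B": {"latency": 0.7, "energy": 0.9},
}
print(best_configuration(["config_A", "config_B"], weights, monitored))
```

Because the degrees are re-read from the monitor on each decision, the selected configuration can change as the environment changes, which is the runtime-adaptation loop the paper describes.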

Relevance:

30.00%

Publisher:

Abstract:

Ontologies have become a key component in the Semantic Web and knowledge management. One accepted goal is to construct ontologies from a domain-specific set of texts. An ontology reflects the background knowledge used in writing and reading a text. However, a text is an act of knowledge maintenance, in that it re-enforces the background assumptions, alters links and associations in the ontology, and adds new concepts. This means that background knowledge is rarely expressed in a machine-interpretable manner. When it is, it is usually at the conceptual boundaries of the domain, e.g. in textbooks or when ideas are borrowed into other domains. We argue that a partial solution to this lies in searching external resources such as specialized glossaries and the internet. We show that a random selection of concept pairs from the Gene Ontology does not occur in a relevant corpus of texts from the journal Nature. In contrast, a significant proportion can be found on the internet. Thus, we conclude that sources external to the domain corpus are necessary for the automatic construction of ontologies.

Relevance:

30.00%

Publisher:

Abstract:

This paper demonstrates how the autocorrelation structure of UK portfolio returns is linked to dynamic interrelationships among the component securities of that portfolio. Moreover, portfolio return autocorrelation is shown to be an increasing function of the number of securities in the portfolio. Since the security interrelationships seemed to be more a product of their history of non-synchronous trading than of systematic industry-related phenomena, it should not be possible to exploit the high levels of return persistence using trading rules. We show that rules designed to exploit this portfolio autocorrelation structure do not produce economic profits.
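The mechanism the abstract describes, portfolio autocorrelation emerging from cross-serial dependence among component securities, can be reproduced in a small simulation. This is an illustrative sketch on synthetic data, not the paper's UK dataset: each stock reacts to a common factor with a random zero- or one-period delay, mimicking non-synchronous trading, so individual stocks show little autocorrelation while the equal-weighted portfolio does.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a return series."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

T, N = 5000, 20
common = rng.standard_normal(T + 1)          # common market factor
lags = rng.integers(0, 2, N)                 # per-stock reaction delay: 0 or 1
# Each stock = delayed common factor + idiosyncratic noise.
stocks = np.stack([common[1 - l : T + 1 - l] + rng.standard_normal(T)
                   for l in lags], axis=1)

portfolio = stocks.mean(axis=1)              # equal-weighted portfolio
mean_stock_ac = np.mean([lag1_autocorr(stocks[:, i]) for i in range(N)])
print(round(lag1_autocorr(portfolio), 3), round(mean_stock_ac, 3))
```

The portfolio return inherits both the immediate and the delayed response to the factor, so consecutive portfolio returns share a factor realisation even though each stock's own returns are (nearly) serially uncorrelated, which is consistent with the paper's point that the persistence is hard to exploit at the security level.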

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents research within empirical financial economics, with a focus on liquidity and portfolio optimisation in the stock market. The discussion of liquidity is focused on measurement issues, including TAQ data processing and the measurement of systematic liquidity factors, while portfolio optimisation is addressed through full-scale optimisation (FSO). Furthermore, a framework for treating the two topics in combination is provided. The liquidity part of the thesis gives a conceptual background to liquidity and discusses several different approaches to liquidity measurement. It contributes to liquidity measurement by providing detailed guidelines on the data processing needed for applying TAQ data to liquidity research. The main focus, however, is the derivation of systematic liquidity factors. The principal component approach to systematic liquidity measurement is refined by the introduction of moving and expanding estimation windows, allowing for time-varying liquidity covariances between stocks. Under several liquidity specifications, this improves the ability to explain stock liquidity and returns, as compared to static-window PCA and market-average approximations of systematic liquidity. The highest ability to explain stock returns is obtained when using inventory cost as the liquidity measure and a moving-window PCA as the systematic liquidity derivation technique. Systematic factors derived in this setting also have a strong ability to explain cross-sectional liquidity variation. Portfolio optimisation in the FSO framework is tested in two empirical studies. These contribute to the assessment of FSO by expanding its applicability to stock indexes and individual stocks, by considering a wide selection of utility function specifications, and by showing explicitly how the full-scale optimum can be identified using either grid search or the heuristic search algorithm of differential evolution.
The studies show that relative to mean-variance portfolios, FSO performs well in these settings and that the computational expense can be mitigated dramatically by application of differential evolution.
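The FSO idea, maximising in-sample expected utility directly on the empirical return distribution rather than on mean-variance summaries, can be sketched with a bare-bones differential evolution loop. The simulated returns, the CRRA utility, and all parameter values below are illustrative assumptions, not the thesis's data or settings.

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(0.001, 0.02, size=(500, 4))   # 500 periods of returns, 4 assets

def expected_utility(w, gamma=5.0):
    """Mean CRRA (power) utility of terminal wealth 1 + portfolio return."""
    wealth = 1.0 + R @ w
    return np.mean(wealth ** (1 - gamma) / (1 - gamma))

def normalise(w):
    """Project a weight vector onto the long-only simplex."""
    w = np.clip(w, 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1 / len(w))

def diff_evolution(obj, dim, pop=30, gens=200, F=0.6, CR=0.9):
    """Maximise `obj` with a minimal differential evolution loop."""
    P = np.array([normalise(rng.random(dim)) for _ in range(pop)])
    fit = np.array([obj(w) for w in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            # Binomial crossover between the mutant a + F(b - c) and P[i].
            trial = normalise(np.where(rng.random(dim) < CR,
                                       a + F * (b - c), P[i]))
            f = obj(trial)
            if f > fit[i]:
                P[i], fit[i] = trial, f
    return P[np.argmax(fit)]

w_star = diff_evolution(expected_utility, dim=4)
print(np.round(w_star, 3))
```

Because the objective is evaluated on the full empirical distribution, any utility specification can be swapped in without rederiving closed-form conditions, which is why a derivative-free heuristic like differential evolution fits the FSO setting.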

Relevance:

30.00%

Publisher:

Abstract:

Developing a strategy for online channels requires knowledge of the effects of customers' online use on their revenue and cost to serve, which ultimately influence customer profitability. The authors theoretically discuss and empirically examine these effects. An empirical study of retail banking customers reveals that online use improves customer profitability by increasing customer revenue and decreasing cost to serve. Moreover, the revenue effects of online use are substantially larger than the cost-to-serve effects, although the effects of online use on customer revenue and cost to serve vary by product portfolio. Self-selection effects also emerge and can be even greater than online use effects. Ignoring self-selection effects thus can lead to poor managerial decision-making.

Relevance:

30.00%

Publisher:

Abstract:

This paper explains how dynamic client portfolios can be a source of ambidexterity (i.e., exploration and exploitation) for knowledge-intensive firms (KIFs). Drawing from a unique qualitative dataset of firms in the global reinsurance market, we show how different types of client relationships underpin a dynamic client portfolio and become a source of ambidexterity for a KIF. We develop a process model to show how KIFs attain knowledge by segmenting their client portfolios, use that knowledge to explore and exploit within and across their client relationships, and dynamically adjust their client portfolios over time. Our study contributes to the literature on external sources of ambidexterity and dynamic management of client knowledge within KIFs.

Relevance:

30.00%

Publisher:

Abstract:

How speech is separated perceptually from other speech remains poorly understood. Recent research indicates that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This study explored the effects of manipulating the depth and pattern of that variation. Three formants (F1+F2+F3) constituting synthetic analogues of natural sentences were distributed across the 2 ears, together with a competitor for F2 (F2C) that listeners must reject to optimize recognition (left = F1+F2C; right = F2+F3). The frequency contours of F1 − F3 were each scaled to 50% of their natural depth, with little effect on intelligibility. Competitors were created either by inverting the frequency contour of F2 about its geometric mean (a plausibly speech-like pattern) or using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Adding a competitor typically reduced intelligibility; this reduction depended on the depth of F2C variation, being greatest for 100%-depth, intermediate for 50%-depth, and least for 0%-depth (constant) F2Cs. This suggests that competitor impact depends on overall depth of frequency variation, not depth relative to that for the target formants. The absence of tuning (i.e., no minimum in intelligibility for the 50% case) suggests that the ability to reject an extraneous formant does not depend on similarity in the depth of formant-frequency variation. Furthermore, triangle-wave competitors were as effective as their more speech-like counterparts, suggesting that the selection of formants from the ensemble also does not depend on speech-specific constraints.

Relevance:

30.00%

Publisher:

Abstract:

Kraljic's (1983) purchasing portfolio approach holds that different types of purchases need different sourcing strategies, underpinned by distinct sets of resources and practices. The approach is widely deployed in business and extensively researched, and yet little research has been conducted on how knowledge and skills vary across a portfolio of purchases. This study extends the body of knowledge on purchasing portfolio management, its application in the strategic development of purchasing in an organization, and human resource management in the purchasing function. A novel approach to profiling purchasing skills is proposed, which is well suited to dynamic environments that require flexibility. In a survey, experienced purchasing personnel described a specific purchase and profiled the skills required for effective performance in purchasing that item. Purchases were categorized according to their importance to the organization (internally-oriented evaluation of cost and production factors) and to the supply market (externally-oriented evaluation of commercial risk and uncertainty). Through cluster analysis three key types of purchase situations were identified. The skills required for effective purchasing vary significantly across the three clusters (for 22 skills, p<0.01). Prior research shows that global organizations use the purchasing portfolio approach to develop sourcing strategies, but also aggregate analyses to inform the design of purchasing arrangements (local vs global) and to develop their improvement plans. Such organizations would also benefit from profiling skills by purchase type. We demonstrate how the survey can be adapted to provide a management tool for global firms seeking to improve procurement capability, flexibility and performance.

Relevance:

30.00%

Publisher:

Abstract:

How speech is separated perceptually from other speech remains poorly understood. Recent research indicates that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This study explored the effects of manipulating the depth and pattern of that variation. Three formants (F1+F2+F3) constituting synthetic analogues of natural sentences were distributed across the 2 ears, together with a competitor for F2 (F2C) that listeners must reject to optimize recognition (left = F1+F2C; right = F2+F3). The frequency contours of F1 - F3 were each scaled to 50% of their natural depth, with little effect on intelligibility. Competitors were created either by inverting the frequency contour of F2 about its geometric mean (a plausibly speech-like pattern) or using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Adding a competitor typically reduced intelligibility; this reduction depended on the depth of F2C variation, being greatest for 100%-depth, intermediate for 50%-depth, and least for 0%-depth (constant) F2Cs. This suggests that competitor impact depends on overall depth of frequency variation, not depth relative to that for the target formants. The absence of tuning (i.e., no minimum in intelligibility for the 50% case) suggests that the ability to reject an extraneous formant does not depend on similarity in the depth of formant-frequency variation. Furthermore, triangle-wave competitors were as effective as their more speech-like counterparts, suggesting that the selection of formants from the ensemble also does not depend on speech-specific constraints. © 2014 The Author(s).

Relevance:

30.00%

Publisher:

Abstract:

A segment selection method controlled by Quality of Experience (QoE) factors for Dynamic Adaptive Streaming over HTTP (DASH) is presented in this paper. Current rate adaptation algorithms aim to eliminate buffer underrun events by significantly reducing the code rate when experiencing pauses in replay. In reality, however, viewers may choose to accept a level of buffer underrun in order to achieve an improved level of picture fidelity, or to accept a degradation in picture fidelity in order to maintain service continuity. The proposed rate adaptation scheme in our work can maximize the user QoE in terms of both continuity and fidelity (picture quality) in DASH applications. It is shown that using this scheme a high level of quality for streaming services, especially at low packet loss rates, can be achieved. Our scheme can also maintain the best trade-off between continuity-based quality and fidelity-based quality by determining proper threshold values for the level of quality intended by clients with different quality requirements. In addition, the integration of the rate adaptation mechanism with the scheduling process is investigated in the context of a mobile communication network, and the related performance is analyzed.
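The continuity/fidelity trade-off described above can be sketched as a threshold rule on the playback buffer: below a continuity threshold the client steps down to refill the buffer; above it, the client requests the highest representation its throughput estimate can sustain. This is an illustrative sketch with assumed names and values, not the paper's actual algorithm.

```python
# Hypothetical QoE-driven bitrate selection for one DASH segment.
def select_bitrate(bitrates, throughput_bps, buffer_sec, cont_threshold_sec=5.0):
    """Return the bitrate (bps) to request for the next segment."""
    sustainable = [b for b in sorted(bitrates) if b <= throughput_bps]
    if buffer_sec < cont_threshold_sec:
        # Continuity-first: lowest sustainable rate refills the buffer fastest.
        return sustainable[0] if sustainable else min(bitrates)
    # Fidelity-first: highest rate the throughput estimate supports.
    return sustainable[-1] if sustainable else min(bitrates)

ladder = [500_000, 1_000_000, 2_500_000, 5_000_000]   # illustrative bitrate ladder
print(select_bitrate(ladder, 3_000_000, buffer_sec=2.0))   # low buffer
print(select_bitrate(ladder, 3_000_000, buffer_sec=12.0))  # comfortable buffer
```

Tuning `cont_threshold_sec` per client is the lever the paper's scheme exposes: a viewer who tolerates brief stalls for sharper pictures gets a lower threshold, one who prioritises continuity gets a higher one.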

Relevance:

30.00%

Publisher:

Abstract:

One of the reasons for using variability in the software product line (SPL) approach (see Apel et al., 2006; Figueiredo et al., 2008; Kastner et al., 2007; Mezini & Ostermann, 2004) is to delay a design decision (Svahnberg et al., 2005). Instead of deciding in advance what system to develop, with the SPL approach a set of components and a reference architecture are specified and implemented (during domain engineering, see Czarnecki & Eisenecker, 2000), out of which individual systems are composed at a later stage (during application engineering, see Czarnecki & Eisenecker, 2000). By postponing design decisions in this manner, the resultant system can better fit its intended environment; for instance, the system interaction mode can be selected after customers have purchased particular hardware, such as a PDA vs. a laptop. Such variability is expressed through variation points, which are locations in a software-based system where choices are available for defining a specific instance of a system (Svahnberg et al., 2005).

Until recently it had sufficed to postpone committing to a specific system instance until before system runtime. In recent years, however, the use of and expectations placed on software systems in human society have undergone significant changes. Today's software systems need to be always available, highly interactive, and able to continuously adapt to varying environment conditions, user characteristics and the characteristics of other systems that interact with them. Such systems, called adaptive systems, are expected to be long-lived and able to undertake adaptations with little or no human intervention (Cheng et al., 2009). Therefore, variability now needs to be present at system runtime as well, which leads to the emergence of a new type of system: adaptive systems with dynamic variability.
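A variation point with runtime binding, as opposed to one resolved during application engineering, can be sketched as a late-bound choice among registered alternative implementations. The class and variant names below are hypothetical illustrations of the concept, not an API from the cited works.

```python
# Hypothetical sketch: a variation point whose binding is deferred to runtime
# and can be rebound later, i.e. dynamic variability.
class VariationPoint:
    def __init__(self):
        self.variants = {}
        self.active = None

    def register(self, name, impl):
        """Declare an alternative implementation (domain engineering)."""
        self.variants[name] = impl

    def bind(self, name):
        """Commit to one variant; may be called again to re-adapt."""
        self.active = self.variants[name]

    def __call__(self, *args):
        return self.active(*args)

render = VariationPoint()
render.register("pda", lambda text: f"[compact] {text}")
render.register("laptop", lambda text: f"[full] {text}")

render.bind("pda")        # decided after deployment, per the actual hardware
print(render("hello"))
render.bind("laptop")     # rebinding at runtime, with no redeployment
print(render("hello"))
```

In a classic SPL the `bind` call would happen once, before runtime; in an adaptive system with dynamic variability it is driven by monitoring and can recur throughout the system's life.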