956 results for Conditional CAPM


Relevance:

10.00%

Publisher:

Abstract:

This study contributes to the growth of design knowledge in China, where vehicle design for the local, older user is in its initial developmental stages. This research has therefore explored the travel needs of older Chinese vehicle users in order to help designers better understand users’ current and future needs. A triangulation method consisting of interviews, logbooks and co-discovery was used to collect multiple forms of data and so explore the research question. Grounded theory was employed to analyze the research data. This study found that users’ needs are reflected through various ‘meanings’ that they attach to vehicles – meanings that give a tangible expression to their experiences. This study identified six older-user need categories: (i) safety, (ii) utility, (iii) comfort, (iv) identity, (v) emotion and (vi) spirituality. The interrelationships among these six categories are seen as an interactive structure, rather than as a linear or hierarchical arrangement. Chinese cultural values, which are generated from a particular local context and users’ social practice, will play a dynamic role in linking and shaping the travel needs of older vehicle users in the future. Moreover, this study structures the older-user needs model into three levels of meaning, to guide vehicle design direction: (i) the practical meaning level, (ii) the social meaning level and (iii) the cultural meaning level. This study suggests that designers can reach a more comprehensive explanation of older users’ needs by identifying the vehicle meanings and properties associated with fulfilling those needs. However, these needs will vary, and must be related to particular technological, social, and cultural contexts. The significance of this study lies in its contributions to the body of knowledge in three areas: research methodology, theory and design. These theoretical contributions provide a series of methodological tools, models and approaches from a vehicle design perspective. 
These include a conditional/consequential matrix, a travel needs identification model, an older users’ travel-related needs framework, a user information structure model, and an Older-User-Need-Based vehicle design approach. These models suggest a basic framework for the new design process which might assist in the design of new vehicles to fulfil the needs of future, aging Chinese generations. The models have the potential to be transferred to other design domains and different cultural contexts.

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a novel relative entropy rate (RER) based approach for multiple HMM (MHMM) approximation of a class of discrete-time uncertain processes. Under different uncertainty assumptions, the model design problem is posed either as a min-max optimisation problem or as a stochastic minimisation problem on the RER between the joint laws describing the state and output processes (rather than the more usual RER between output processes). A suitable filter is proposed, for which performance bounds on conditional mean estimation are established, showing that estimation performance improves as the RER is reduced. These filter consistency and convergence bounds are the first results characterising multiple HMM approximation performance, and they suggest that joint RER concepts provide a useful model selection criterion. The proposed model design process and MHMM filter are demonstrated on an important dim-target detection problem in image processing.
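As a minimal numerical illustration of the underlying idea (not the paper's joint-law, min-max formulation), the sketch below picks, from two invented candidate models summarised by their output distributions, the one with the smallest relative entropy to a reference law; all numbers and model names are assumptions for illustration only:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# A reference output law and two candidate model approximations; all of the
# numbers and model names here are invented for illustration
p_true = np.array([0.5, 0.3, 0.2])
candidates = {
    "model_a": np.array([0.45, 0.35, 0.20]),
    "model_b": np.array([0.70, 0.20, 0.10]),
}

# Select the candidate closest to the reference law in relative entropy
best = min(candidates, key=lambda m: kl_divergence(p_true, candidates[m]))
print(best)  # model_a, the closer approximation
```

The paper's point is that measuring divergence between joint state-output laws, rather than output laws alone, yields filters with provable estimation bounds; the selection step itself is the same minimisation shown here.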

Relevance:

10.00%

Publisher:

Abstract:

This article explores two matrix methods for inducing the ‘shades of meaning’ (SoM) of a word. A matrix representation of a word is computed from a corpus of traces based on the given word. Non-negative Matrix Factorisation (NMF) and Singular Value Decomposition (SVD) each compute a set of vectors, with each vector corresponding to a potential shade of meaning. The two methods were evaluated based on loss of conditional entropy with respect to two sets of manually tagged data. One set reflects concepts generally appearing in text, and the second set comprises words used in investigations of word sense disambiguation. Results show that NMF consistently outperforms SVD for inducing both SoM of general concepts and word senses. The problem of inducing the shades of meaning of a word is more subtle than that of word sense induction, and hence is relevant to thematic analysis of opinion, where nuances of opinion can arise.
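A rough sketch of the two factorisations on an invented toy matrix may help. The data, rank and iteration count below are assumptions, and the article's conditional-entropy evaluation against tagged data is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy term-by-context count matrix for one target word (rows: contexts,
# columns: terms); the counts are invented purely for illustration
X = rng.poisson(2.0, size=(8, 6)).astype(float)

# SVD: each right-singular vector is a candidate "shade of meaning",
# but components may mix positive and negative entries
U, s, Vt = np.linalg.svd(X, full_matrices=False)
svd_shades = Vt[:2]

# NMF via multiplicative updates (Lee & Seung): non-negative, parts-based
# factors, often easier to read directly as senses/shades
k, eps = 2, 1e-9
W = rng.random((X.shape[0], k))
H = rng.random((k, X.shape[1]))
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(X - W @ H))  # rank-2 NMF reconstruction error
```

Each row of `H` (and each row of `svd_shades`) is one candidate shade of meaning over the term vocabulary; the non-negativity of `H` is what makes the NMF shades additively interpretable.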

Relevance:

10.00%

Publisher:

Abstract:

This study explored kindergarten students’ intuitive strategies and understandings of probability. The paper aims to provide in-depth insight into the levels of probability understanding across four constructs, as proposed by Jones (1997), for kindergarten students. Qualitative evidence from two students revealed that even before instruction pupils have a good capacity to predict most and least likely events, to distinguish fair probability situations from unfair ones, to compare the probability of an event in two sample spaces, and to recognize conditional probability events. These results contribute to the growing evidence on kindergarten students’ intuitive probabilistic reasoning. The potential of this study for improving the learning of probability, as well as suggestions for further research, are discussed.

Relevance:

10.00%

Publisher:

Abstract:

The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing the above-mentioned challenges. The proposed model consists of a feed-forward neural network, the training targets of which are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are presented as “training target” and “training input”, respectively, to the neural network. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input. 
Although the concept proposed may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdowns. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model with that of four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model but which neglect suspended histories; a time series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with a useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecasted. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds promise for increased asset availability, maintenance cost-effectiveness, operational safety and – ultimately – organisational competitiveness.
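The Kaplan-Meier idea of counting suspended units as still "at risk" (rather than discarding them) can be sketched on invented bearing histories. This is the standard estimator only; the thesis's adaptation and the neural-network training stage are not reproduced:

```python
import numpy as np

def kaplan_meier(times, failed):
    """Kaplan-Meier survival estimate. `failed` is False for suspended
    (right-censored) histories, which still count towards the at-risk set."""
    times = np.asarray(times, float)
    failed = np.asarray(failed, bool)
    surv, s = [], 1.0
    for t in np.unique(times[failed]):      # distinct failure times, ascending
        at_risk = np.sum(times >= t)        # failed AND suspended units
        deaths = np.sum((times == t) & failed)
        s *= 1.0 - deaths / at_risk
        surv.append((float(t), s))
    return surv

# Invented bearing histories: running hours, and whether the unit failed
# (True) or was suspended while still healthy (False)
hours  = [120, 150, 150, 200, 240, 300]
failed = [True, False, True, True, False, True]
for t, s in kaplan_meier(hours, failed):
    print(t, round(s, 3))
```

Dropping the two suspended units instead would shrink the at-risk counts and bias the survival curve downwards; retaining them is exactly the information the thesis argues existing prognostic models waste.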

Relevance:

10.00%

Publisher:

Abstract:

An extensive literature examines the dynamics of interest rates, with particular attention given to the positive relationship between interest-rate volatility and the level of interest rates – the so-called level effect. This paper examines the interaction between the estimated level effect and competing parameterisations of interest-rate volatility for the Australian yield curve. We adopt a new methodology that estimates elasticity in a multivariate setting, explicitly accommodating the correlations that exist between various yield factors. Results show that significant correlations exist between the residuals of yield factors and that such correlations do indeed affect model estimates. Within the multivariate setting, the level of the short rate is shown to be a crucial determinant of the conditional volatility of all three yield factors. Measures of model fit suggest that, in addition to the usual level effect, the incorporation of GARCH effects and possible regime shifts is important.
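As a hedged illustration of the level effect in a univariate setting (the paper's multivariate, correlated-factor estimation is considerably more involved), one can simulate a short rate whose volatility scales as r^γ and recover the elasticity by regression; all parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mean-reverting short rate whose volatility scales as r^gamma (the level
# effect); all parameter values are invented for illustration
gamma_true, sigma, kappa, theta, n = 0.5, 0.02, 0.1, 0.05, 5000
r = np.empty(n)
r[0] = theta
for t in range(1, n):
    shock = sigma * r[t - 1] ** gamma_true * rng.standard_normal()
    r[t] = abs(r[t - 1] + kappa * (theta - r[t - 1]) + shock)

# Recover the elasticity by regressing log |Delta r| on log r:
#   log|dr| ~ const + gamma * log(r)
# (the small mean-reversion drift is ignored; it is dominated by the
# diffusion term for these parameter values)
dr = np.diff(r)
slope, _ = np.polyfit(np.log(r[:-1]), np.log(np.abs(dr) + 1e-12), 1)
print(round(slope, 2))  # a rough estimate of gamma; the true value here is 0.5
```

The regression estimate is noisy in a single sample path; the paper's contribution is precisely that estimating such elasticities jointly across correlated yield factors changes the answers relative to factor-by-factor estimation.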

Relevance:

10.00%

Publisher:

Abstract:

We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
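A minimal sketch of the Ljung-Box statistic used by these tests, applied to simulated residual series (the generalized-residual and Rosenblatt-transformation constructions from the paper are not reproduced):

```python
import numpy as np

def ljung_box_q(resid, lags):
    """Ljung-Box portmanteau statistic; under the null of no autocorrelation
    it is approximately chi-squared with `lags` degrees of freedom."""
    e = np.asarray(resid, float) - np.mean(resid)
    n = len(e)
    denom = np.sum(e * e)
    q = sum(
        (np.sum(e[k:] * e[:-k]) / denom) ** 2 / (n - k)
        for k in range(1, lags + 1)
    )
    return n * (n + 2) * q

rng = np.random.default_rng(2)
white = rng.standard_normal(500)   # residuals from a well-specified model

ar = np.zeros(500)                 # residuals with neglected AR(1) dynamics
for t in range(1, 500):
    ar[t] = 0.6 * ar[t - 1] + rng.standard_normal()

qw, qa = ljung_box_q(white, 10), ljung_box_q(ar, 10)
print(qw < qa)  # remaining autocorrelation inflates the statistic
```

Applying the same statistic to squared residuals gives a crude check for neglected ARCH effects, which is the other misspecification the paper's tests target.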

Relevance:

10.00%

Publisher:

Abstract:

Biased estimation has the advantage of reducing the mean squared error (MSE) of an estimator. The question of interest is how biased estimation affects model selection. In this paper, we introduce biased estimation to a range of model selection criteria. Specifically, we analyze the performance of the minimum description length (MDL) criterion based on biased and unbiased estimation, and compare it against modern model selection criteria such as Kay's conditional model order estimator (CME), the bootstrap, and the more recently proposed hook-and-loop resampling-based model selection. The advantages and limitations of the considered techniques are discussed. The results indicate that, in some cases, biased estimators can slightly improve the selection of the correct model. We also give an example for which the CME with an unbiased estimator fails but regains its power when a biased estimator is used.
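A small illustration of MDL-based model order selection may help. This uses a common two-part MDL form on an invented polynomial example; the paper's biased-estimation variants and the CME are not reproduced:

```python
import numpy as np

def mdl(y, yhat, k):
    """Two-part MDL score: Gaussian fit term plus parameter-cost penalty."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.1 * rng.standard_normal(200)  # true order 2

# Score polynomial models of increasing order; MDL trades fit against
# complexity, so underfitting and overfitting both score worse
scores = [mdl(y, np.polyval(np.polyfit(x, y, d), x), d + 1) for d in range(6)]
best = int(np.argmin(scores))
print(best)  # typically the true polynomial order, 2
```

The paper's question is, in effect, what happens to the `rss` term in such a criterion when the fitted coefficients come from a biased (lower-MSE) estimator rather than least squares.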

Relevance:

10.00%

Publisher:

Abstract:

The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models) for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries, the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC), were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix affects the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modeling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual-level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to bring together the earlier improvements to the CAR model and, along with demographic projections, provide forecasts for birth defects at the SLA level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices and showed how smoothing the relative risk estimates according to similarity on an important covariate (i.e. maternal age) helped improve the model’s ability to recover the underlying risk, compared to the traditional adjacency (specifically the Queen) method of applying weights. 
Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also incorporated a shared-component model to improve the estimation of sparse counts by borrowing strength, through a shared component (e.g. latent risk factors), from a referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the SLAs chosen along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts, to provide 30-year annual estimates of birth defects at the Statistical Local Area (SLA) level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. 
By the end of the thesis, I will show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed: by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), by incorporating a ZIP component to model excess zeros in outcomes, and by borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy for sampling individual-level data, and sample size considerations for rare diseases, will also be presented. Finally, projections of birth defect categories at the SLA level will be made.
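The idea of weighting neighbours by covariate similarity rather than by plain Queen adjacency can be sketched on a toy grid of areas; the covariate values and the similarity function below are invented for illustration, not the thesis's specification:

```python
import numpy as np

# Toy 3x3 grid of areas (e.g. SLAs); the covariate (say, median maternal age
# per area) is invented purely for illustration
age = np.array([[28.0, 29.0, 33.0],
                [27.0, 30.0, 34.0],
                [26.0, 31.0, 35.0]])
n_rows, n_cols = age.shape
n = age.size

def queen_neighbours(i, j):
    """Areas sharing an edge or a corner with area (i, j)."""
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and 0 <= i + di < n_rows and 0 <= j + dj < n_cols:
                yield i + di, j + dj

# Plain binary Queen adjacency vs weights that also favour neighbours with a
# similar covariate value (closer ages -> larger weight)
W_adj = np.zeros((n, n))
W_cov = np.zeros((n, n))
for i in range(n_rows):
    for j in range(n_cols):
        a = i * n_cols + j
        for ni, nj in queen_neighbours(i, j):
            b = ni * n_cols + nj
            W_adj[a, b] = 1.0
            W_cov[a, b] = 1.0 / (1.0 + abs(age[i, j] - age[ni, nj]))

# Row-standardise, as is conventional before CAR smoothing
W_adj /= W_adj.sum(axis=1, keepdims=True)
W_cov /= W_cov.sum(axis=1, keepdims=True)
print(int((W_adj[4] > 0).sum()))  # the centre area has 8 Queen neighbours
```

Under `W_cov`, an area borrows more strength from demographically similar neighbours, which is the mechanism the thesis found improved recovery of the underlying risk for sparse counts.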

Relevance:

10.00%

Publisher:

Abstract:

An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole-plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements in which the effects of varying the initial reactant concentrations are investigated. Careful experiment design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of turbulent diffusivities for the conserved scalar mean and that for the r.m.s. was found to be approximately 1. 
Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence as has previously been found theoretically by Klimenko (1995). It is also found that conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment except where measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigations both of the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. 
Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
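The role of the covariance term in the mean reaction rate can be illustrated numerically. Note this shows only the naive mean-field closure that drops the covariance entirely, not the Toor closure itself, and the fluctuating concentrations are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000

# Synthetic fluctuating reactant concentrations: turbulent mixing makes the
# plume reactant (NO) and ambient reactant (O3) negatively correlated --
# where one is high, the other tends to be low
z = rng.random(n)                          # stand-in conserved mixing variable
a = 2.0 * z + 0.1 * rng.random(n)          # plume reactant concentration
b = 1.5 * (1.0 - z) + 0.1 * rng.random(n)  # ambient reactant concentration

k = 1.0                                    # rate constant, arbitrary units
true_rate = k * np.mean(a * b)             # <w> = k <ab> = k (<a><b> + cov)
mean_field = k * np.mean(a) * np.mean(b)   # drops the covariance term
cov = np.mean(a * b) - np.mean(a) * np.mean(b)

print(cov < 0, mean_field > true_rate)  # negative covariance -> overestimate
```

This is the decomposition behind the experimental finding: with a negative and significant covariance, any closure that underweights it will overestimate the mean reaction rate, as observed for the Toor closure here.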

Relevance:

10.00%

Publisher:

Abstract:

Principal Topic: In this study we investigate how strategic orientation moderates the impact of growth on profitability for a sample of Danish high-growth (Gazelle) firms. ---------- Firm growth has been an essential part of both management research and entrepreneurship research for decades (e.g. Penrose 1959, Birch 1987, Storey 1994). From a societal point of view, firm growth has been perceived as an economic generator and job creator. In entrepreneurship research, growth has been an important part of the field (Davidsson, Delmar and Wiklund 2006), and many have used growth as a measure of success. In strategic management, growth has been seen as an approach to achieving competitive advantage and a way of becoming increasingly profitable (e.g. Russo and Fouts 1997, Cho and Pucik 2005). However, although firm growth used to be perceived as a natural pathway to profitability, more skepticism has recently emerged, due to both new theoretical developments and new empirical insights. Empirically, studies show inconsistent and inconclusive evidence regarding the impact of growth on profitability. Our review reveals that some studies find a substantial positive relationship, some find a weak positive relationship, some find no relationship, and some find a negative relationship. Overall, two dominant yet divergent theoretical positions can be identified. The first position, mainly focusing on environmental fit, argues that firms are likely to become more profitable if they enter a market quickly and on a larger scale, due to first-mover advantages and economies of scale. The second position, mainly focusing on internal fit, argues that growth may lead to a range of internal challenges and difficulties, including rapid change in structure, reward systems, decision making, communication and management style. 
The inconsistent empirical results, together with the two divergent theoretical positions, call for further investigation into the circumstances under which growth generates profitability and those under which it does not. In this project, we investigate how strategic orientations influence the impact of growth on profitability by asking the following research question: how is the impact of growth on profitability moderated by strategic orientation? Based on a literature review of how growth impacts profitability in areas such as entrepreneurship, strategic management and strategic entrepreneurship, we develop three hypotheses regarding the growth-profitability relationship and strategic orientation as a potential moderator. ---------- Methodology/Key Propositions: The three hypotheses are tested on data collected in 2008. All firms in Denmark, both listed and non-listed (VAT-registered), that experienced 100% growth and had positive sales or gross profit over a four-year period (2004-2007) were surveyed. In total, 2,475 firms fulfilled the requirements. Among those, 1,107 firms returned usable questionnaires, giving a response rate of 45%. The financial data, together with data on number of employees, were obtained from D&B (previously Dun & Bradstreet). The remaining data were obtained through the survey. Hierarchical regression models with ROA (return on assets) as the dependent variable were used to test the hypotheses. In the first model, control variables including region, industry, firm age, CEO age, CEO gender, CEO education and number of employees were entered. In the second model, growth, measured as growth in employees, was entered. Then strategic orientation (differentiation, cost leadership, focus differentiation and focus cost leadership) and, finally, interaction effects of strategic orientation and growth were entered into the model. 
---------- Results and Implications: The results show a positive impact of firm growth on profitability and, further, that this impact is moderated by strategic orientation. Specifically, growth was found to have a larger impact on profitability when firms do not pursue a focus strategy (either focus differentiation or focus cost leadership). Our preliminary interpretation of the results suggests that the value of growth depends on the circumstances, and more specifically on 'how much is left to fight for'. Firms that target a narrow segment appear less likely to gain value from growth: the remaining market share these firms can fight for is not large enough to compensate for the cost of growing. Based on our findings, growth therefore seems to have a more positive relationship with profitability for firms that approach a broad market segment. Furthermore, we argue that firms pursuing a focus strategy will have more specialized assets, which decreases the possibilities for further profitable expansion. For firms, CEOs, boards of directors and so on, the study shows that high growth is not necessarily something worth aiming for. It is a trade-off between the cost of growing and the value of growing. For many firms, there might be better ways of generating profitability in the long run; it depends on the strategic orientation of the firm. For advisors and consultants, the conditional value of growth implies that in-depth knowledge of their clients' situation is necessary before any advice can be given. And finally, for policy makers, it means they have to be careful when initiating new policies to promote firm growth; they need to take firm strategy and industry conditions into consideration.
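The moderation logic of the hierarchical regression can be sketched with an interaction term on simulated data. The data-generating process and coefficients below are invented for illustration, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

growth = rng.normal(size=n)                       # firm growth (standardised)
focus = rng.integers(0, 2, size=n).astype(float)  # 1 = pursues a focus strategy

# Invented data-generating process: growth raises ROA, but less so for focus
# firms (a negative growth x strategy interaction) -- not the study's numbers
roa = 0.05 + 0.04 * growth - 0.03 * growth * focus + 0.02 * rng.normal(size=n)

# Final hierarchical step: main effects plus the interaction term
X = np.column_stack([np.ones(n), growth, focus, growth * focus])
beta, *_ = np.linalg.lstsq(X, roa, rcond=None)
print(np.round(beta, 3))  # the interaction coefficient is recovered near -0.03
```

A significant negative coefficient on the `growth * focus` column is exactly what "growth pays off less under a focus strategy" looks like in the regression.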

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist, and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, perhaps controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. 
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the DGP underlying the process and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness while implicitly assuming the existence of the third moment. However, the GLDs suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
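For comparison, the historical-simulation benchmark mentioned above can be sketched as a rolling-window quantile; the simulated returns below are invented, not the ASX 200, S&P 500 or FT 30 data, and the GLD fitting itself is not reproduced:

```python
import numpy as np

def historical_var(returns, alpha=0.01, window=250):
    """Rolling historical-simulation VaR: the alpha-quantile of the previous
    `window` returns, reported as a positive loss."""
    r = np.asarray(returns, float)
    out = np.full(r.shape, np.nan)
    for t in range(window, len(r)):
        out[t] = -np.quantile(r[t - window:t], alpha)
    return out

rng = np.random.default_rng(6)
# Simulated daily returns with slowly time-varying volatility (invented data)
vol = 0.01 * (1.0 + 0.5 * np.sin(np.arange(1500) / 100.0))
returns = vol * rng.standard_normal(1500)

var99 = historical_var(returns)
print(round(float(np.nanmean(var99)), 4))  # average one-day 99% VaR
```

A GLD-based VaR would replace the empirical `np.quantile` with the percentile of a GLD fitted to the same moving window, which is what lets the estimate track time-varying skewness as well as volatility.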

Relevance:

10.00%

Publisher: