924 results for failure time model
Abstract:
This paper investigates robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new and tighter integral inequality and constructing an appropriate Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Since neither model transformations nor free weighting matrices are employed in the theoretical derivation, the developed stability criteria significantly improve and simplify existing stability conditions. Moreover, the maximum allowable upper delay bound and the controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with those of the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
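The detrending step at the heart of (MF-)DFA, as used throughout the thesis summarised above, can be sketched in a few lines. This is a generic illustration of the q-th-order fluctuation function with order-1 (linear) detrending, not the thesis's own implementation; function names and the choice of two scales for the exponent estimate are assumptions.

```python
import math
import random

def dfa_fluctuation(x, scale, q=2.0):
    """q-th order detrended fluctuation F_q(s) for window size `scale`
    (order-1 detrending: a linear fit is removed from each window)."""
    mean = sum(x) / len(x)
    profile, acc = [], 0.0
    for v in x:  # profile: cumulative sum of the mean-removed series
        acc += v - mean
        profile.append(acc)
    n_win = len(profile) // scale
    fq = []
    for w in range(n_win):
        seg = profile[w * scale:(w + 1) * scale]
        t = range(scale)
        tm = (scale - 1) / 2.0
        sm = sum(seg) / scale
        # closed-form linear least-squares fit seg ~ a + b*t
        b = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg)) / \
            sum((ti - tm) ** 2 for ti in t)
        a = sm - b * tm
        var = sum((si - (a + b * ti)) ** 2 for ti, si in zip(t, seg)) / scale
        fq.append(var ** (q / 2.0))
    return (sum(fq) / len(fq)) ** (1.0 / q)

# Uncorrelated noise: the fluctuation exponent should be close to 0.5
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
h = math.log(dfa_fluctuation(noise, 64) / dfa_fluctuation(noise, 16)) / math.log(4.0)
```

In full MF-DFA, F_q(s) is computed over a range of scales and q values, and the generalised exponent h(q) is read off from the slope of log F_q(s) against log s; q = 2 recovers standard DFA.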
Abstract:
Concern regarding the health effects of indoor air quality has grown in recent years, due to the increased prevalence of many diseases, as well as the fact that many people now spend most of their time indoors. While numerous studies have reported on the dynamics of aerosols indoors, the dynamics of bioaerosols in indoor environments are still poorly understood and very few studies have focused on fungal spore dynamics in indoor environments. Consequently, this work investigated the dynamics of fungal spores in indoor air, including fungal spore release and deposition, as well as the mechanisms involved in the fungal spore fragmentation process. In relation to fungal spore dynamics, it was found that the deposition rates of the bioaerosols (fungal propagules) were in the same range as the deposition rates of non-biological particles, and that they were a function of their aerodynamic diameters. It was also found that fungal particle deposition rates increased with increasing ventilation rates. These results (which are reported for the first time) are important for developing an understanding of the dynamics of fungal spores in the air. In relation to the process of fungal spore fragmentation, important information was generated concerning the airborne dynamics of the spores, as well as the part(s) of the fungi which undergo fragmentation. The results obtained from these investigations into the dynamics of fungal propagules in indoor air significantly advance knowledge about the fate of fungal propagules in indoor air, as well as their deposition in the respiratory tract. The need to develop an advanced, real-time method for monitoring bioaerosols has become increasingly important in recent years, particularly as a result of the increased threat from biological weapons and bioterrorism.
However, to date, the Ultraviolet Aerodynamic Particle Sizer (UVAPS, Model 3312, TSI, St Paul, MN) is the only commercially available instrument capable of monitoring and measuring viable airborne micro-organisms in real-time. Therefore (for the first time), this work also investigated the ability of the UVAPS to measure and characterise fungal spores in indoor air. The UVAPS was found to be sufficiently sensitive for detecting and measuring fungal propagules. Based on fungal spore size distributions, together with fluorescent percentages and intensities, it was also found to be capable of discriminating between two fungal spore species, under controlled laboratory conditions. In the field, however, it would not be possible to use the UVAPS to differentiate between different fungal spore species because the different micro-organisms present in the air may not only vary in age, but may have also been subjected to different environmental conditions. In addition, while the real-time UVAPS was found to be a good tool for the investigation of fungal particles under controlled conditions, it was not found to be selective for bioaerosols only (as per design specifications). In conclusion, the UVAPS is not recommended for use in the direct measurement of airborne viable bioaerosols in the field, including fungal particles, and further investigations into the nature of the micro-organisms, the UVAPS itself and/or its use in conjunction with other conventional biosamplers, are necessary in order to obtain more realistic results. Overall, the results obtained from this work on airborne fungal particle dynamics will contribute towards improving the detection capabilities of the UVAPS, so that it is capable of selectively monitoring and measuring bioaerosols, for which it was originally designed. This work will assist in finding and/or improving other technologies capable of the real-time monitoring of bioaerosols. 
The knowledge obtained from this work will also be of benefit in various other bioaerosol applications, such as understanding the transport of bioaerosols indoors.
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
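The Ljung-Box portmanteau statistic used in the comparisons above can be sketched in a few lines. This is a generic pure-Python illustration applied to a raw series, not the authors' implementation; the generalized-residual and Rosenblatt-transformation constructions are omitted.

```python
import random

def ljung_box(x, lags):
    """Ljung-Box statistic Q = n(n+2) * sum_{k=1}^{h} rho_k^2 / (n-k)."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    c0 = sum(d * d for d in dev)  # unnormalised lag-0 autocovariance
    q = 0.0
    for k in range(1, lags + 1):
        ck = sum(dev[t] * dev[t - k] for t in range(k, n))
        rho = ck / c0  # sample autocorrelation at lag k
        q += rho * rho / (n - k)
    return n * (n + 2) * q

# A strongly autocorrelated AR(1) series: Q should far exceed the
# 5% chi-square critical value for 10 degrees of freedom (about 18.31)
random.seed(0)
x, prev = [], 0.0
for _ in range(500):
    prev = 0.8 * prev + random.gauss(0.0, 1.0)
    x.append(prev)
q_stat = ljung_box(x, 10)
```

Under the null of no autocorrelation, Q is approximately chi-square with `lags` degrees of freedom (fewer when applied to residuals of a fitted model), which is how the size and power experiments above calibrate rejection.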
Abstract:
Visiting modern shopping centers has become an integral part of contemporary life. The rapid growth of shopping centers, transportation systems, and modern vehicles has given consumers more choices in shopping. Although consumers visit shopping centers for many reasons, travel time and the size of the shopping center are important factors influencing the frequency of customer visits. A survey of the customers of three major shopping centers in Surabaya was conducted to evaluate Ellwood's model and Huff's model. New exponent values N of 0.48 and n of 0.50 were found for Ellwood's model, while a coefficient of 0.267 and an additive value of 0.245 were found for Huff's model.
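Huff's model referenced above assigns each customer a probability of visiting a given center proportional to the center's attractiveness (for example, floor area) divided by a power of travel time. A minimal sketch follows; the sizes, travel times and exponents are made up for illustration and are not the survey's fitted values.

```python
def huff_probabilities(sizes, times, alpha=1.0, beta=2.0):
    """P_j = (S_j^alpha / T_j^beta) / sum_k (S_k^alpha / T_k^beta)."""
    utils = [s ** alpha / t ** beta for s, t in zip(sizes, times)]
    total = sum(utils)
    return [u / total for u in utils]

# Three hypothetical shopping centers: floor area (1000 m^2) and travel time (min)
sizes = [40.0, 25.0, 10.0]
times = [20.0, 10.0, 5.0]
probs = huff_probabilities(sizes, times)
```

With these illustrative numbers the smallest but closest center wins: the travel-time exponent beta penalises distance faster than size compensates, which is exactly the trade-off the survey evaluates.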
Abstract:
Presentation provided to a PhD Colloquium between two Australian universities and one Malaysian university, providing the opportunity to inform and critique the progress of students concerning their selected topics. This presentation covers "The conceptualisation, sensitivity and measurement of holding costs and other selected elements impacting housing affordability", as provided by Gary Owen Garner of QUT, with the following research objectives: 1. To establish the nature and composition of holding costs over time, as related to residential property in Australia and internationally. 2. To examine the linkages that may exist between various planning instruments, the length of regulatory assessment periods, and housing affordability. 3. To develop a model that quantifies the impact of holding costs on housing affordability in Australia, with a particular focus on the consequences of extended assessment periods as a component of holding costs, thus providing clarification of the impact of holding costs on overall housing affordability.
Abstract:
Optimal operation and maintenance of engineering systems relies heavily on the accurate prediction of their failures. Most engineering systems, especially mechanical systems, are susceptible to failure interactions. These failure interactions can be estimated for repairable engineering systems when determining optimal maintenance strategies for these systems. An extended Split System Approach is developed in this paper. The technique is based on the Split System Approach and a model for interactive failures. The approach was applied to simulated data. The results indicate that failure interactions increase the hazard of newly repaired components, and that the intervals between preventive maintenance actions of a system with failure interactions become shorter compared with scenarios where failure interactions do not exist.
Abstract:
In this thesis, a new technique is developed for determining the composition of a collection of loads including induction motors. The application would be to provide a representation of the dynamic electrical load of Brisbane so that the ability of the power system to survive a given fault can be predicted. Most of the work on load modelling to date has been on post-disturbance analysis, not on continuous on-line models for loads. The post-disturbance methods are unsuitable for load modelling where the aim is to determine the control action or a safety margin for a specific disturbance. This thesis is based on on-line load models. Dr. Tania Parveen considers 10 induction motors with different power ratings, inertia and torque damping constants to validate the approach, and their composite models are developed with different percentage contributions for each motor. This thesis also shows how measurements of a composite load respond to normal power system variations, and how this information can be used to continuously decompose the load and to characterise it in terms of motor loads of different sizes and amounts.
Abstract:
Despite more than three decades of research, there is a limited understanding of the transactional processes of appraisal, stress and coping. This has led to calls for more focused research on the entire process that underlies these variables. To date, there remains a paucity of such research. The present study examined Lazarus and Folkman's (1984) transactional model of stress and coping. One hundred and twenty-nine Australian participants in full-time employment (i.e. nurses and administration employees) were recruited. There were 49 male (age mean = 34, SD = 10.51) and 80 female (age mean = 36, SD = 10.31) participants. The analysis of three path models indicated that in addition to the original paths found in Lazarus and Folkman's transactional model (primary appraisal-->secondary appraisal-->stress-->coping), there were also direct links between primary appraisal and stress level at time one, and between stress level at time one and stress level at time two. This study provides additional insights into the transactional process which will extend our understanding of how individuals appraise, cope with and experience occupational stress.
Abstract:
To reduce the damage of phishing and spyware attacks, banks, governments, and other security-sensitive industries are deploying one-time password systems, where users have many passwords and use each password only once. If a single password is compromised, it can only be used to impersonate the user once, limiting the damage caused. However, existing practical approaches to one-time passwords have been susceptible to sophisticated phishing attacks.

We give a formal security treatment of this important practical problem. We consider the use of one-time passwords in the context of password-authenticated key exchange (PAKE), which allows for mutual authentication, session key agreement, and resistance to phishing attacks. We describe a security model for the use of one-time passwords, explicitly considering the compromise of past (and future) one-time passwords, and show a general technique for building a secure one-time-PAKE protocol from any secure PAKE protocol. Our techniques also allow for the secure use of pseudorandomly generated and time-dependent passwords.
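For context on how a standard one-time password is generated in deployed systems, here is a stdlib-only sketch of HOTP (RFC 4226). This is the generic counter-based scheme, not the one-time-PAKE construction the paper develops, and it illustrates why a bare one-time password offers no mutual authentication on its own.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> int:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)            # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return code % (10 ** digits)

# RFC 4226 test secret; each counter value yields a fresh password
secret = b"12345678901234567890"
```

A phisher who captures one such password learns nothing about the next, but can still replay it immediately to the real server, which is the gap the one-time-PAKE treatment closes.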
Abstract:
In the quest for shorter time-to-market, higher quality and reduced cost, model-driven software development has emerged as a promising approach to software engineering. The central idea is to promote models to first-class citizens in the development process. Starting from a set of very abstract models in the early stage of the development, they are refined into more concrete models and finally, as a last step, into code. As early phases of development focus on different concepts compared to later stages, various modelling languages are employed to most accurately capture the concepts and relations under discussion. In light of this refinement process, translating between modelling languages becomes a time-consuming and error-prone necessity. This is remedied by model transformations providing support for reusing and automating recurring translation efforts. These transformations typically can only be used to translate a source model into a target model, but not vice versa. This poses a problem if the target model is subject to change. In this case the models get out of sync and therefore do not constitute a coherent description of the software system anymore, leading to erroneous results in later stages. This is a serious threat to the promised benefits of quality, cost-saving, and time-to-market. Therefore, providing a means to restore synchronisation after changes to models is crucial if the model-driven vision is to be realised. This process of reflecting changes made to a target model back to the source model is commonly known as Round-Trip Engineering (RTE). While there are a number of approaches to this problem, they impose restrictions on the nature of the model transformation. Typically, in order for a transformation to be reversed, for every change to the target model there must be exactly one change to the source model. 
While this makes synchronisation relatively “easy”, it is ill-suited for many practically relevant transformations as they do not have this one-to-one character. To overcome these issues and to provide a more general approach to RTE, this thesis puts forward an approach in two stages. First, a formal understanding of model synchronisation on the basis of non-injective transformations (where a number of different source models can correspond to the same target model) is established. Second, detailed techniques are devised that allow the implementation of this understanding of synchronisation. A formal underpinning for these techniques is drawn from abductive logic reasoning, which allows the inference of explanations from an observation in the context of a background theory. As non-injective transformations are the subject of this research, there might be a number of changes to the source model that all equally reflect a certain target model change. To help guide the procedure in finding “good” source changes, model metrics and heuristics are investigated. Combining abductive reasoning with best-first search and a “suitable” heuristic enables efficient computation of a number of “good” source changes. With this procedure Round-Trip Engineering of non-injective transformations can be supported.
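The difficulty with non-injective transformations can be shown with a toy example, far simpler than the abductive machinery the thesis develops and entirely hypothetical in its names: a transformation that forgets part of the source admits several source updates for a single target change, and a heuristic ranks the candidates.

```python
def transform(source):
    """A non-injective model transformation: the 'note' field is forgotten
    and the name is lower-cased, so many sources map to one target."""
    return {"name": source["name"].lower()}

def candidate_sources(old_source, new_target):
    """Abduce source updates consistent with the new target, ranked by a
    simple heuristic: prefer candidates that change the old source least."""
    name = new_target["name"]
    # Any capitalisation of the new name maps to the same target
    variants = sorted({name, name.capitalize(), name.upper()})
    candidates = [dict(old_source, name=v) for v in variants]
    def n_changes(cand):
        return sum(1 for k in cand if cand[k] != old_source.get(k))
    return sorted(candidates, key=n_changes)

old = {"name": "Invoice", "note": "business rule kept in the source only"}
cands = candidate_sources(old, {"name": "order"})
```

Every candidate is a valid explanation of the observed target change, and the information the transformation forgot (the 'note' field) survives in all of them; choosing among the explanations is exactly where metrics and best-first search enter in the thesis's approach.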
Abstract:
Principal Topic: Entrepreneurship is key to employment, innovation and growth (Acs & Mueller, 2008), and as such, has been the subject of tremendous research in both the economic and management literatures since Solow (1957), Schumpeter (1934, 1943), and Penrose (1959). The presence of entrepreneurs in the economy is a key factor in the success or failure of countries to grow (Audretsch and Thurik, 2001; Dejardin, 2001). Further studies focus on the conditions for the existence of entrepreneurship; the influential factors invoked are historical, cultural, social, institutional, or purely economic (North, 1997; Thurik 1996 & 1999). Of particular interest, beyond the reasons behind the existence of entrepreneurship, are entrepreneurial survival and good ''performance'' factors. Using cross-country firm data analysis, La Porta & Schleifer (2008) confirm that informal micro-businesses provide on average half of all economic activity in developing countries. They find that these are utterly unproductive compared to formal firms, and conclude that the informal sector serves as a social security net ''keep[ing] millions of people alive, but disappearing over time'' (abstract). Robison (1986) and Hill (1996, 1997) posit that the Indonesian government under Suharto always pointed to the lack of indigenous entrepreneurship, thereby motivating the nationalisation of all industries. Furthermore, the same literature also points to the fact that small businesses were mostly left out of development programmes because they were assumed to be less productive and to have less productivity potential than larger ones. Vial (2008) challenges this view and shows that small firms represent about 70% of firms and 12% of total output, but contribute 25% of total factor productivity growth on average over the period 1975-94 in the industrial sector (Table 10, p.316).

Methodology/Key Propositions: A review of the empirical literature points to several under-researched questions.
Firstly, we assess whether there is evidence of small family-business entrepreneurship in Indonesia. Secondly, we examine and present the characteristics of these enterprises, along with the size of the sector and its dynamics. Thirdly, we study whether these enterprises underperform compared to the larger-scale industrial sector, as is suggested in the literature. We reconsider performance measurements for micro family-owned businesses. We suggest that, besides productivity measures, performance could be appraised both by the survival probability of the firm and by the amount of household asset formation. We compare micro family-owned and larger industrial firms' survival probabilities after the 1997 crisis and their capital productivity, then compare the household assets of families involved in business with those of families who are not. Finally, we examine human and social capital as moderators of enterprises' performance. In particular, we assess whether a higher level of education and community participation have an effect on the likelihood of running a family business, and whether they have an impact on households' asset levels. We use the IFLS database compiled and published by RAND Corporation. The data is a rich community, household, and individual panel dataset in four waves: 1993, 1997, 2000, 2007. We focus on the 1997 and 2000 waves in order to investigate entrepreneurship behaviours in turbulent times, i.e. the 1997 Asian crisis. We use aggregate individual data, and focus on household data in order to study micro family-owned businesses. IFLS data covers roughly 7,600 households in 1997 and over 10,000 households in 2000, with about 95% of 1997 households re-interviewed in 2000. Households were interviewed in 13 of the 27 provinces as defined before 2001. Those 13 provinces were targeted because they account for 83% of the population. A full description of the data is provided in Frankenberg and Thomas (2000), and Strauss et al. (2004).
We deflate all monetary values in Rupiah using the World Development Indicators Consumer Price Index, base 100 in 2000.

Results and Implications: We find that in Indonesia, entrepreneurship is widespread and two thirds of households hold one or several family businesses. In rural areas, in 2000, 75% of households ran one or several businesses. The proportion of households holding both a farm and a non-farm business is higher in rural areas, underlining the reliance of rural households on self-employment, especially after the crisis. Those businesses come in various sizes, from very small to larger ones. The median business production value represents less than the annual national minimum wage. Figures show that at least 75% of farm businesses produce less than the annual minimum wage, with non-farm businesses more likely to produce the minimum wage. However, this is only one part of the story, as production is not the only ''output'' or effect of the business. We show that the survival rate of those businesses ranges between 70 and 82% after the 1997 crisis, which contrasts with the 67% survival rate for the formal industrial sector (Ter Wengel & Rodriguez, 2006). Micro family-owned businesses might be relatively small in terms of production, but they also provide stability in times of crisis. For those businesses that provide business asset figures, we show that capital productivity is fairly high, with rates that are ten times higher for non-farm businesses. Results show that households running a business have larger family assets, and households are better off in urban areas. We run a panel logit model in order to test the effect of human and social capital on the existence of businesses among households. We find that non-farm businesses are more likely to appear in households with higher human and social capital situated in urban areas.
Farm businesses are more likely to appear in lower-human-capital and rural contexts, while still being supported by community participation. The estimation of our panel data model confirms that households are more likely to have higher family assets if situated in urban areas; the higher the education level, the larger the assets; and running a business increases the likelihood of having larger assets. This is especially true for non-farm businesses, which have a clearly larger and more significant effect on assets than farm businesses. Finally, social capital in the form of community participation also has a positive effect on assets. These results confirm the existence of a strong entrepreneurship culture among Indonesian households. Investigating survival rates also shows that those businesses are quite stable, even in the face of a violent crisis such as that of 1997, and as a result can provide a safety net. Finally, considering household assets (the returns of the business to the household) rather than profit or productivity (the returns of the business to itself) shows that households running a business are better off. While we demonstrate that human and social capital are key to business existence, survival and performance, these results open avenues for further research regarding the factors that could hamper the growth of those businesses in terms of output and employment.
Abstract:
In this thesis we are interested in financial risk, and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for simultaneously modelling time-varying volatility (conditional variance) and skewness. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular, we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs.
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We use local linear regression to estimate semi-parametrically the conditional mean and conditional variance. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data-generating process underlying the data and helps in choosing a good technique to model them. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness, implicitly assuming the existence of the third moment. Moreover, the GLDs suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
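The percentile-based use of the GLD can be sketched via its quantile function. The FKML parameterisation below is one common choice; the thesis's exact parameterisation, and the parameter values shown, are not taken from the thesis and are purely illustrative.

```python
def gld_quantile(u, lam1, lam2, lam3, lam4):
    """FKML generalised lambda quantile function (lam3, lam4 nonzero):
    Q(u) = lam1 + ((u^lam3 - 1)/lam3 - ((1-u)^lam4 - 1)/lam4) / lam2"""
    return lam1 + ((u ** lam3 - 1.0) / lam3
                   - ((1.0 - u) ** lam4 - 1.0) / lam4) / lam2

def var_from_gld(alpha, *params):
    """VaR at level alpha: the negated lower alpha-percentile of returns."""
    return -gld_quantile(alpha, *params)

# Illustrative parameters: location 0, scale parameter 1, mild symmetric tails
params = (0.0, 1.0, 0.1, 0.1)
var_99 = var_from_gld(0.01, *params)  # 99% VaR
```

In the semi-parametric scheme described above, the four lambdas would be re-fitted on each moving window, so the VaR percentile tracks the time-varying shape of the conditional distribution.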
Abstract:
In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including another variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another one to test for another transition in the STCC-GARCH framework. In addition, other specification tests, with the aim of aiding the model building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to the stock and bond futures data, we discover that the correlation pattern between them has dramatically changed around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence for an increasing degree of integration in the capital markets.
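The smooth-transition idea behind the STCC and DSTCC correlation structures can be sketched as follows. The logistic transition function and the bilinear mixing of corner states are standard for this model class, but the parameter values are illustrative and the full multivariate covariance construction and estimation are omitted.

```python
import math

def logistic_transition(s, gamma, c):
    """G(s; gamma, c) in (0, 1): slope gamma, location c."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def stcc_correlation(s, rho_low, rho_high, gamma=4.0, c=0.0):
    """STCC: correlation moves smoothly between two constant-correlation
    states as the transition variable s crosses the location c."""
    g = logistic_transition(s, gamma, c)
    return (1.0 - g) * rho_low + g * rho_high

def dstcc_correlation(s1, s2, rho, gammas=(4.0, 4.0), cs=(0.0, 0.0)):
    """DSTCC: two transition variables mix four corner states rho[i][j]."""
    g1 = logistic_transition(s1, gammas[0], cs[0])
    g2 = logistic_transition(s2, gammas[1], cs[1])
    return ((1.0 - g1) * ((1.0 - g2) * rho[0][0] + g2 * rho[0][1])
            + g1 * ((1.0 - g2) * rho[1][0] + g2 * rho[1][1]))
```

Taking calendar time as one transition variable is what lets the model capture the gradual change in stock-bond correlation around the turn of the century described above; the second variable can be any observable state indicator.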
Abstract:
We study an overlapping-generations model in which agents' mortality risks, and consequently impatience, are endogenously determined by private and public investment in health care. Revenues allocated for public health care are determined by a voting process. We find that the degree of substitutability between public and private health expenditures matters for the macroeconomic outcomes of the model. Higher substitutability implies a "crowding-out" effect, which in turn impacts adversely on mortality risks and impatience, leading to lower public expenditures on health care in the political equilibrium. Consequently, higher substitutability is associated with greater polarization in wealth, and long-run distributions that are bimodal.