461 results for Accelerated failure time model
Abstract:
Modern Engineering Asset Management (EAM) requires the accurate assessment of current asset health and the prediction of future asset health condition. Appropriate mathematical models that are capable of estimating times to failure and the probability of future failures are essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors that are termed covariates. Hazard prediction with covariates is a fundamental notion in reliability theory: it estimates the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has already survived up to the current time. A number of statistical covariate-based hazard models have been developed. However, none of them explicitly incorporates both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model to address this concern, named the Explicit Hazard Model (EHM). Both the semi-parametric and non-parametric forms of this model are presented. The major purpose of this paper is to illustrate the theoretical development of EHM. Due to page limitations, a case study with reliability field data is presented in the applications part of this study.
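The abstract does not reproduce EHM's functional form, but the covariate-based hazard models it generalises share the proportional-hazards structure h(t | z) = h0(t)·exp(β·z). A minimal sketch of that structure follows; the Weibull baseline and all covariate values are illustrative, not taken from the paper:

```python
import math

# Covariate-based hazard in the proportional-hazards form
# h(t | z) = h0(t) * exp(beta . z), the structure that covariate
# models such as EHM build on.  All numbers are illustrative.
def hazard(t, covariates, beta, baseline):
    linear = sum(b * z for b, z in zip(beta, covariates))
    return baseline(t) * math.exp(linear)

def weibull_baseline(t, shape=2.0, scale=100.0):
    """Hypothetical Weibull baseline hazard h0(t)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# Two covariates: e.g. an external operating condition and an internal
# wear indicator (the distinction EHM makes explicit).
h = hazard(50.0, covariates=[1.0, 0.3], beta=[0.4, 0.8],
           baseline=weibull_baseline)
```

The covariate effect multiplies the baseline hazard, so a unit that has run under harsher conditions carries a proportionally higher instantaneous failure tendency at every age.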
Abstract:
Government figures put the current indigenous unemployment rate at around 23%, three times the unemployment rate for other Australians. This thesis aims to assess whether Australian indirect discrimination legislation can provide a remedy for one of the causes of indigenous unemployment - the systemic discrimination which can result from the mere operation of established procedures of recruitment and hiring. The impact of those practices on indigenous people is examined in the context of an analysis of anti-discrimination legislation and cases from all Australian jurisdictions from the time of the passing of the Racial Discrimination Act by the Commonwealth in 1975 to the present. The thesis finds a number of reasons why the legislation fails to provide equality of opportunity for indigenous people seeking to enter the workforce. In nearly all jurisdictions it is obscurely drafted, used mainly by educated middle-class white women, and provides remedies which tend to be compensatory damages rather than change to recruitment policy. White dominance of the legal process has produced legislative and judicial definitions of "race" and "Aboriginality" which focus on biology rather than cultural difference. In the commissions and tribunals, complaints of racial discrimination are often rejected on the grounds of being "vexatious" or "frivolous", not reaching the required standard of proof, or not showing a causal connection between race and the conduct complained of. In all jurisdictions the cornerstone of liability is whether a particular employment term, condition or practice is reasonable. The thesis evaluates the approaches taken by appellate courts, including the High Court, and concludes that there is a trend towards an interpretation of reasonableness which favours employer arguments such as economic rationalism, the maintenance of good industrial relations, managerial prerogative to hire and fire, and the protection of majority rights.
The thesis recommends that separate, clearly drafted legislation should be passed to address indigenous disadvantage and that indigenous people should be involved in all stages of the process.
Abstract:
The proliferation of innovative schemes to address climate change at international, national and local levels signals a fundamental shift in the priority and role of the natural environment to society, organizations and individuals. This shift in shared priorities invites academics and practitioners to consider the role of institutions in shaping and constraining responses to climate change at multiple levels of organizations and society. Institutional theory provides an approach to conceptualising and addressing climate change challenges by focusing on the central logics that guide society, organizations and individuals and their material and symbolic relationship to the environment. For example, framing a response to climate change in the form of an emission trading scheme evidences a practice informed by a capitalist market logic (Friedland and Alford 1991). However, not all responses need necessarily align with a market logic. Indeed, Thornton (2004) identifies six broad societal sectors, each with its own logic (markets, corporations, professions, states, families, religions). Hence, understanding the logics that underpin successful – and unsuccessful – climate change initiatives contributes to revealing how institutions shape and constrain practices, and provides valuable insights for policy makers and organizations. This paper develops models and propositions to consider the construction of, and challenges to, climate change initiatives based on institutional logics (Thornton and Ocasio 2008). We propose that the challenge of understanding and explaining how climate change initiatives are successfully adopted be examined in terms of their institutional logics, and how these logics evolve over time. To achieve this, a multi-level framework of analysis that encompasses society, organizations and individuals is necessary (Friedland and Alford 1991).
However, to date most extant studies of institutional logics have tended to emphasize one level over the others (Thornton and Ocasio 2008: 104). In addition, existing studies related to climate change initiatives have largely been descriptive (e.g. Braun 2008) or prescriptive (e.g. Boiral 2006) in terms of the suitability of particular practices. This paper contributes to the literature on logics in two ways. First, it examines multiple levels: the proliferation of the climate change agenda provides a site in which to study how institutional logics are played out across multiple, yet embedded, levels of society through the institutional forums in which change takes place. Second, the paper specifically examines how institutional logics provide society with organising principles – material practices and symbolic constructions – which enable and constrain actions and help define motives and identity. Based on this model, we develop a series of propositions about the conditions required for the successful introduction of climate change initiatives. The paper proceeds as follows. We present a review of the literature related to institutional logics and develop a generic model of the operation of institutional logics. We then consider how this model applies to key initiatives related to climate change. Finally, we develop a series of propositions which might guide insights into the successful implementation of climate change practices.
Abstract:
In condition-based maintenance (CBM), effective diagnostics and prognostics are essential tools for maintenance engineers to identify imminent faults and to predict the remaining useful life before components finally fail. This enables remedial actions to be taken in advance and production to be rescheduled if necessary. This paper presents a technique for accurate assessment of the remnant life of machines based on historical failure knowledge embedded in a closed-loop diagnostic and prognostic system. The technique uses a Support Vector Machine (SVM) classifier for both fault diagnosis and evaluation of the health stages of machine degradation. To validate the feasibility of the proposed model, data at five different severity levels for four typical faults from High Pressure Liquefied Natural Gas (HP-LNG) pumps were used for multi-class fault diagnosis. In addition, two sets of impeller-rub data were analysed and employed to predict the remnant life of a pump based on estimation of its health state. The results obtained were very encouraging and showed that the proposed prognosis system has the potential to be used as an estimation tool for machine remnant life prediction in real-life industrial applications.
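The HP-LNG pump data are not available here; as a minimal sketch of the multi-class SVM step, the fragment below trains scikit-learn's `SVC` on synthetic Gaussian clusters standing in for the feature vectors of four fault classes (all data and parameter values are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in for condition-monitoring features of four pump
# fault classes: each class is a tight Gaussian cluster in 2-D.
centers = np.array([[0, 0], [3, 0], [0, 3], [3, 3]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat(np.arange(4), 50)

# RBF-kernel SVM; scikit-learn handles multi-class via one-vs-one.
clf = SVC(kernel="rbf", C=10.0).fit(X, y)
acc = clf.score(X, y)
```

In practice the features would be extracted from vibration or pressure signals, and a held-out test set (not training accuracy) would measure diagnostic performance.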
Abstract:
The driving task requires sustained attention during prolonged periods, and can be performed in highly predictable or repetitive environments. Such conditions can induce drowsiness or hypovigilance and impair the ability to react to critical events. Identifying vigilance decrement in monotonous conditions has been a major subject of research, but no research to date has attempted to predict this vigilance decrement. This pilot study aims to show that vigilance decrements due to monotonous tasks can be predicted through mathematical modelling. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task, is used to assess participants' performance. This task models the driver's ability to cope with unpredicted events by performing the expected action. A Hidden Markov Model (HMM) is proposed to predict participants' hypovigilance. The driver's vigilance evolution is modelled as a hidden state and is correlated with an observable variable: the participant's reaction times. This experiment shows that the monotony of the task can lead to a significant vigilance decline in less than five minutes. This impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment showed that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to the development of an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, thus offering the potential to enhance road safety and prevent road crashes.
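The study's fitted HMM parameters are not given in the abstract; the sketch below shows the general mechanism with made-up transition and emission probabilities: a hidden vigilant/hypovigilant state is filtered from discretized reaction times via the forward algorithm.

```python
import numpy as np

# Hypothetical 2-state HMM: state 0 = vigilant, state 1 = hypovigilant.
# Observations are discretized reaction times: 0 = fast, 1 = slow/lapse.
A = np.array([[0.95, 0.05],    # state transition probabilities
              [0.10, 0.90]])
B = np.array([[0.9, 0.1],      # vigilant drivers react mostly fast
              [0.3, 0.7]])     # hypovigilant drivers react mostly slowly
pi = np.array([0.9, 0.1])      # initial state distribution

def filter_states(obs):
    """Forward filtering: returns P(state_t | obs_1..t)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# A run of slow reactions shifts the belief towards hypovigilance.
belief = filter_states([0, 0, 1, 1, 1, 1])
```

Extending the filtered belief forward through the transition matrix gives the kind of minutes-ahead prediction the study reports.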
Abstract:
This paper presents a model to estimate travel time using cumulative plots. Three cases are considered: i) case-Det, with detector data only; ii) case-DetSig, with detector data and signal controller data; and iii) case-DetSigSFR, with detector data, signal controller data and the saturation flow rate. The performance of the model for different detection intervals is evaluated. It is observed that the detection interval is not critical if signal timings are available: comparable accuracy can be obtained from a larger detection interval with signal timings or from a shorter detection interval without signal timings. The performance for case-DetSig and case-DetSigSFR is consistent, with accuracy generally above 95%, whereas case-Det is highly sensitive to the signal phases in the detection interval and its performance is uncertain if the detection interval is an integral multiple of signal cycles.
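The paper's exact formulation is not reproduced in the abstract, but the core idea of cumulative-plot travel-time estimation can be sketched: the travel time of the n-th vehicle is the horizontal distance between the upstream and downstream cumulative-count curves at count n. The detector data below are synthetic.

```python
import numpy as np

def travel_time(t, up_counts, down_counts, n):
    """Travel time of the n-th vehicle: horizontal distance between the
    upstream and downstream cumulative-count curves at count n."""
    t_up = np.interp(n, up_counts, t)      # when vehicle n passes upstream
    t_down = np.interp(n, down_counts, t)  # when it passes downstream
    return t_down - t_up

# Synthetic detector data: constant 0.2 veh/s flow, 30 s between detectors.
t = np.arange(0.0, 300.0, 10.0)
up = 0.2 * t
down = 0.2 * np.maximum(t - 30.0, 0.0)
tt = travel_time(t, up, down, n=20.0)
```

With real detectors the curves come from vehicle counts per detection interval, and signal timings help reconstruct the downstream curve during red phases.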
Abstract:
Heart disease is attributed as the highest cause of death in the world. Although this could be alleviated by heart transplantation, there is a chronic shortage of donor hearts and so mechanical solutions are being considered. Currently, many Ventricular Assist Devices (VADs) are being developed worldwide in an effort to increase life expectancy and quality of life for end stage heart failure patients. Current pre-clinical testing methods for VADs involve laboratory testing using Mock Circulation Loops (MCLs), and in vivo testing in animal models. The research and development of highly accurate MCLs is vital to the continuous improvement of VAD performance. The first objective of this study was to develop and validate a mathematical model of a MCL. This model could then be used in the design and construction of a variable compliance chamber to improve the performance of an existing MCL as well as form the basis for a new miniaturised MCL. An extensive review of literature was carried out on MCLs and mathematical modelling of their function. A mathematical model of a MCL was then created in the MATLAB/SIMULINK environment. This model included variable features such as resistance, fluid inertia and volumes (resulting from the pipe lengths and diameters); compliance of Windkessel chambers, atria and ventricles; density of both fluid and compressed air applied to the system; gravitational effects on vertical columns of fluid; and accurately modelled actuators controlling the ventricle contraction. This model was then validated using the physical properties and pressure and flow traces produced from a previously developed MCL. A variable compliance chamber was designed to reproduce parameters determined by the mathematical model. The function of the variability was achieved by controlling the transmural pressure across a diaphragm to alter the compliance of the system. 
An initial prototype was tested in a previously developed MCL, and a variable level of arterial compliance was successfully produced; however, the complete range of compliance values required for accurate physiological representation could not be produced with this initial design. The mathematical model was then used to design a smaller physical mock circulation loop, with the tubing sizes adjusted to produce accurate pressure and flow traces whilst having an appropriate frequency response characteristic. The development of the mathematical model greatly assisted the general design of an in vitro cardiovascular device test rig, while the variable compliance chamber allowed simple and real-time manipulation of MCL compliance, enabling accurate transition between a variety of physiological conditions. The newly developed MCL provided an accurate mechanical representation of the human circulatory system for in vitro cardiovascular device testing and education purposes. The continued improvement of VAD test rigs is essential if VAD design is to improve, and hence improve quality of life and life expectancy for heart failure patients.
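The thesis' full MATLAB/SIMULINK model is far richer (fluid inertia, gravity, actuated ventricles), but the role of a compliance chamber can be illustrated with the classic two-element Windkessel relation C·dP/dt = Q_in(t) − P/R. All parameter values below are illustrative, not those of the MCL model.

```python
import math

# Two-element Windkessel: C * dP/dt = Q_in(t) - P / R.
R = 1.0   # peripheral resistance (mmHg.s/mL), illustrative
C = 1.5   # arterial compliance (mL/mmHg), illustrative

def q_in(t, period=1.0, t_ej=0.3, q_peak=300.0):
    """Pulsatile inflow: half-sinusoid ejection at the start of each beat."""
    phase = t % period
    return q_peak * math.sin(math.pi * phase / t_ej) if phase < t_ej else 0.0

dt, P, trace = 1e-4, 80.0, []
for i in range(int(10.0 / dt)):            # simulate 10 s by forward Euler
    P += dt * (q_in(i * dt) - P / R) / C
    trace.append(P)

p_min, p_max = min(trace[-10000:]), max(trace[-10000:])   # last beat
```

Reducing C widens the pulse pressure p_max − p_min, which is exactly the behaviour a variable compliance chamber manipulates to move between physiological conditions.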
Abstract:
The ability to forecast machinery failure is vital to reducing maintenance costs, operation downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognostic models for forecasting machinery health based on condition data. Although these models have aided the advancement of the discipline, they have made only a limited contribution to developing an effective machinery health prognostic system. The literature review indicates that there is not yet a prognostic model that directly models and fully utilises suspended condition histories (which are very common in practice, since organisations rarely allow their assets to run to failure); that effectively integrates population characteristics into prognostics for longer-range prediction in a probabilistic sense; that deduces the non-linear relationship between measured condition data and actual asset health; and that involves minimal assumptions and requirements. This work presents a novel approach to addressing these challenges. The proposed model consists of a feed-forward neural network, the training targets of which are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density estimator. The adapted Kaplan-Meier estimator is able to model the actual survival status of individual failed units and estimate the survival probability of individual suspended units. The degradation-based failure probability density estimator, on the other hand, extracts population characteristics and computes conditional reliability from available condition histories instead of from reliability data. The estimated survival probability and the relevant condition histories are presented as the “training target” and “training input”, respectively, to the neural network. The trained network is capable of estimating the future survival curve of a unit when a series of condition indices is input.
Although the concept proposed may be applied to the prognosis of various machine components, rolling element bearings were chosen as the research object because rolling element bearing failure is one of the foremost causes of machinery breakdowns. Computer-simulated and industry case study data were used to compare the prognostic performance of the proposed model and four control models, namely: two feed-forward neural networks with the same training function and structure as the proposed model, but which neglect suspended histories; a time-series prediction recurrent neural network; and a traditional Weibull distribution model. The results support the assertion that the proposed model performs better than the other four models and that it produces adaptive prediction outputs with useful representation of survival probabilities. This work presents a compelling concept for non-parametric data-driven prognosis, and for utilising available asset condition information more fully and accurately. It demonstrates that machinery health can indeed be forecasted. The proposed prognostic technique, together with ongoing advances in sensors and data-fusion techniques, and increasingly comprehensive databases of asset condition data, holds the promise of increased asset availability, maintenance cost effectiveness, operational safety and – ultimately – organisation competitiveness.
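The adapted estimator itself is not reproduced in the abstract; as background, a minimal standard Kaplan-Meier sketch below shows how suspended (right-censored) histories enter the survival estimate without being counted as failures. The bearing lifetimes are made up.

```python
# Standard Kaplan-Meier estimator with suspensions, the estimator the
# proposed model adapts to produce its neural-network training targets.
def kaplan_meier(times, events):
    """times: observed lifetimes; events: 1 = failure, 0 = suspension.
    Returns (time, survival probability) pairs at each failure time."""
    # At tied times, process failures before suspensions (convention).
    data = sorted(zip(times, events), key=lambda te: (te[0], -te[1]))
    n_at_risk = len(data)
    s, curve = 1.0, []
    for t, failed in data:
        if failed:
            s *= (n_at_risk - 1) / n_at_risk
            curve.append((t, s))
        n_at_risk -= 1                 # suspensions leave the risk set
    return curve

# Hypothetical bearing histories: three failures, two suspensions.
curve = kaplan_meier([100, 150, 150, 200, 250], [1, 1, 0, 0, 1])
```

Note how the suspended units shrink the risk set without forcing the curve down, which is precisely the information a run-to-failure-only model would discard.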
Abstract:
Purpose – Several major infrastructure projects in the Hong Kong Special Administrative Region (HKSAR) have been delivered under the build-operate-transfer (BOT) model since the 1960s. Although the benefits of using BOT have been reported abundantly in the contemporary literature, some BOT projects have been less successful than others. This paper aims to find out why this is so and to explore whether BOT is the best financing model for procuring major infrastructure projects. Design/methodology/approach – The benefits of BOT are first reviewed. Some completed BOT projects in Hong Kong are examined to ascertain how far the perceived benefits of BOT have materialized in these projects. A high-profile project, the Hong Kong-Zhuhai-Macau Bridge, is then studied to explore the true value of the BOT financing model: the governments of the People's Republic of China, the Macau Special Administrative Region and the HKSAR had long promoted BOT as the preferred financing model for the bridge, but suddenly reverted to the traditional financing model, funded primarily by the three governments with public money. Findings – Six main reasons for this radical change are derived from the analysis: a shorter take-off time for the project; differences in legal systems causing difficulties in drafting BOT agreements; greater government control over tolls; private-sector disinterest due to an unattractive economic package; avoidance of allegations of collusion between business and the governments; and the comfortable financial reserves possessed by the host governments. Originality/value – The findings of this paper are believed to provide a better understanding of the real benefits of BOT and of the governments' main decision criteria in delivering major infrastructure projects.
Abstract:
The epidemic of obesity is impacting an increasing proportion of children, adolescents and adults with a common feature being low levels of physical activity (PA). Despite having more knowledge than ever before about the benefits of PA for health and the growth and development of youngsters, we are only paying lip-service to the development of motor skills in children. Fun, enjoyment and basic skills are the essential underpinnings of meaningful participation in PA. A concurrent problem is the reported increase in sitting time with the most common sedentary behaviors being TV viewing and other screen-based games. Limitations of time have contributed to a displacement of active behaviors with inactive pursuits, which has contributed to reductions in activity energy expenditure. To redress the energy imbalance in overweight and obese children, we urgently need out-of-the-box multisectoral solutions. There is little to be gained from a shame and blame mentality where individuals, their parents, teachers and other groups are singled out as causes of the problem. Such an approach does little more than shift attention from the main game of prevention and management of the condition, which requires a concerted, whole-of-government approach (in each country). The failure to support and encourage all young people to participate in regular PA will increase the chance that our children will live shorter and less healthy lives than their parents. In short, we need novel environmental approaches to foster a systematic increase in PA. This paper provides examples of opportunities and challenges for PA strategies to prevent obesity with a particular emphasis on the school and home settings.
Abstract:
This paper investigates robust H∞ control for Takagi-Sugeno (T-S) fuzzy systems with interval time-varying delay. By employing a new, tighter integral inequality and constructing an appropriate Lyapunov functional, delay-dependent stability criteria are derived for the control problem. Because neither model transformations nor free weighting matrices are employed in the theoretical derivation, the developed stability criteria significantly improve and simplify existing stability conditions. Moreover, the maximum allowable upper delay bound and the controller feedback gains can be obtained simultaneously from the developed approach by solving a constrained convex optimization problem. Numerical examples are given to demonstrate the effectiveness of the proposed methods.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debates. These difficulties present challenges with the problems of memory detection and modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA) which uses only the second moment; that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable-noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for the five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second order, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
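MF-DFA generalises standard DFA to arbitrary moment orders q. The sketch below implements the q = 2 special case (illustrative, not the thesis' implementation): integrate the series, detrend it window by window, and read the scaling exponent off the log-log slope of the fluctuation function. White noise should yield an exponent of roughly 0.5; long memory shows up as a larger value.

```python
import numpy as np

def dfa(x, scales, q=2.0):
    """q-th-order detrended fluctuation analysis (q = 2 is standard DFA):
    integrate the series, split the profile into windows of size s,
    remove a linear trend per window, and average residual variances."""
    y = np.cumsum(x - np.mean(x))                    # profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        var = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in segs]
        F.append(np.mean(np.power(var, q / 2.0)) ** (1.0 / q))
    return np.array(F)

rng = np.random.default_rng(1)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(rng.standard_normal(4096), scales)
# Scaling exponent: slope of log F(s) against log s.
alpha_hat = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Running the same estimator at several values of q, and with higher-order polynomial detrending, gives the multifractal picture MF-DFA uses to eliminate trends of different orders.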
Abstract:
Concern regarding the health effects of indoor air quality has grown in recent years, due to the increased prevalence of many diseases, as well as the fact that many people now spend most of their time indoors. While numerous studies have reported on the dynamics of aerosols indoors, the dynamics of bioaerosols in indoor environments are still poorly understood and very few studies have focused on fungal spore dynamics in indoor environments. Consequently, this work investigated the dynamics of fungal spores in indoor air, including fungal spore release and deposition, as well as investigating the mechanisms involved in the fungal spore fragmentation process. In relation to the investigation of fungal spore dynamics, it was found that the deposition rates of the bioaerosols (fungal propagules) were in the same range as the deposition rates of non-biological particles and that they were a function of their aerodynamic diameters. It was also found that fungal particle deposition rates increased with increasing ventilation rates. These results (which are reported for the first time) are important for developing an understanding of the dynamics of fungal spores in the air. In relation to the process of fungal spore fragmentation, important information was generated concerning the airborne dynamics of the spores, as well as the part(s) of the fungi that undergo fragmentation. The results obtained from these investigations into the dynamics of fungal propagules in indoor air significantly advance knowledge about the fate of fungal propagules in indoor air, as well as their deposition in the respiratory tract. The need to develop an advanced, real-time method for monitoring bioaerosols has become increasingly important in recent years, particularly as a result of the increased threat from biological weapons and bioterrorism.
However, to date, the Ultraviolet Aerodynamic Particle Sizer (UVAPS, Model 3312, TSI, St Paul, MN) is the only commercially available instrument capable of monitoring and measuring viable airborne micro-organisms in real-time. Therefore (for the first time), this work also investigated the ability of the UVAPS to measure and characterise fungal spores in indoor air. The UVAPS was found to be sufficiently sensitive for detecting and measuring fungal propagules. Based on fungal spore size distributions, together with fluorescent percentages and intensities, it was also found to be capable of discriminating between two fungal spore species, under controlled laboratory conditions. In the field, however, it would not be possible to use the UVAPS to differentiate between different fungal spore species because the different micro-organisms present in the air may not only vary in age, but may have also been subjected to different environmental conditions. In addition, while the real-time UVAPS was found to be a good tool for the investigation of fungal particles under controlled conditions, it was not found to be selective for bioaerosols only (as per design specifications). In conclusion, the UVAPS is not recommended for use in the direct measurement of airborne viable bioaerosols in the field, including fungal particles, and further investigations into the nature of the micro-organisms, the UVAPS itself and/or its use in conjunction with other conventional biosamplers, are necessary in order to obtain more realistic results. Overall, the results obtained from this work on airborne fungal particle dynamics will contribute towards improving the detection capabilities of the UVAPS, so that it is capable of selectively monitoring and measuring bioaerosols, for which it was originally designed. This work will assist in finding and/or improving other technologies capable of the real-time monitoring of bioaerosols. 
The knowledge obtained from this work will also be of benefit in various other bioaerosol applications, such as understanding the transport of bioaerosols indoors.
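The measured deposition rates enter indoor-air models as first-order loss terms alongside ventilation. A minimal sketch, with illustrative rate values (not the thesis' measurements): the airborne concentration decays as n(t) = n0·exp(−(k_dep + a_vent)·t). Note that the thesis additionally found deposition rates themselves rise with the ventilation rate, a coupling this simple sketch ignores.

```python
import math

# First-order loss sketch: airborne spores are removed in parallel by
# surface deposition (k_dep) and ventilation (a_vent), both in 1/h:
#   n(t) = n0 * exp(-(k_dep + a_vent) * t)
def concentration(t_hours, n0, k_dep=0.2, a_vent=1.0):
    return n0 * math.exp(-(k_dep + a_vent) * t_hours)

# Hypothetical comparison: higher air exchange clears spores faster.
low_vent = concentration(2.0, 100.0, a_vent=0.5)    # particles per cm^3
high_vent = concentration(2.0, 100.0, a_vent=2.0)
```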
Abstract:
We evaluate the performance of several specification tests for Markov regime-switching time-series models. We consider the Lagrange multiplier (LM) and dynamic specification tests of Hamilton (1996) and Ljung–Box tests based on both the generalized residual and a standard-normal residual constructed using the Rosenblatt transformation. The size and power of the tests are studied using Monte Carlo experiments. We find that the LM tests have the best size and power properties. The Ljung–Box tests exhibit slight size distortions, though tests based on the Rosenblatt transformation perform better than the generalized residual-based tests. The tests exhibit impressive power to detect both autocorrelation and autoregressive conditional heteroscedasticity (ARCH). The tests are illustrated with a Markov-switching generalized ARCH (GARCH) model fitted to the US dollar–British pound exchange rate, with the finding that both autocorrelation and GARCH effects are needed to adequately fit the data.
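The Ljung-Box statistic used by these tests has a closed form that is easy to sketch; the fragment below computes Q over the first m residual autocorrelations (purely illustrative, not the paper's code):

```python
import numpy as np

def ljung_box(resid, m):
    """Ljung-Box statistic Q = n(n+2) * sum_{k=1}^{m} r_k^2 / (n-k),
    approximately chi-squared with m degrees of freedom under the null
    of no autocorrelation in the residuals."""
    x = np.asarray(resid, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.sum(x ** 2)
    q = 0.0
    for k in range(1, m + 1):
        r_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k autocorrelation
        q += r_k ** 2 / (n - k)
    return n * (n + 2) * q

# Under the null (iid residuals) Q stays near its chi-squared mean of m.
rng = np.random.default_rng(2)
q_iid = ljung_box(rng.standard_normal(1000), m=10)
```

Applying the same statistic to squared residuals gives a test for ARCH effects, which is the distinction the paper exploits when fitting the switching GARCH model.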
Abstract:
Visiting modern shopping centers has become an important part of contemporary society. The rapid growth of shopping centers, transportation systems and modern vehicles has given consumers more choices in shopping. Although consumers visit shopping centers for many reasons, travel time and the size of the shopping center are important factors influencing the frequency of customer visits. A survey of the customers of three major shopping centers in Surabaya was conducted to evaluate Ellwood's model and Huff's model. New exponent values of N = 0.48 and n = 0.50 were found for Ellwood's model, while a coefficient of 0.267 and an additive value of 0.245 were found for Huff's model.
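Calibrated coefficients aside, Huff's model itself has a simple form: each centre's patronage probability is its size-to-travel-time attraction normalised over all competing centres. A sketch with illustrative exponents (not the calibrated Surabaya values above):

```python
# Huff's gravity model: attraction = size^alpha / travel_time^beta,
# normalised to a probability over all centres.  Exponents are
# illustrative, not the calibrated values from the survey.
def huff_probabilities(sizes, travel_times, alpha=1.0, beta=2.0):
    weights = [s ** alpha / t ** beta for s, t in zip(sizes, travel_times)]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical centres; the smallest one is much closer,
# so it captures the largest share of visits.
p = huff_probabilities(sizes=[50000, 30000, 20000],
                       travel_times=[10, 15, 5])
```

Because beta exceeds alpha here, proximity outweighs floor area, which matches the intuition that travel time dominates centre choice for frequent trips.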