794 results for Non Parametric Methodology
Abstract:
The refractive error of a human eye varies across the pupil and therefore may be treated as a random variable. The probability distribution of this random variable provides a means for assessing the main refractive properties of the eye without the necessity of a traditional functional representation of wavefront aberrations. To demonstrate this approach, the statistical properties of refractive error maps are investigated. Closed-form expressions are derived for the probability density function (PDF) and its statistical moments for the general case of rotationally symmetric aberrations. A closed-form expression for the PDF of a general non-rotationally symmetric wavefront aberration is difficult to derive. However, for specific cases, such as astigmatism, a closed-form expression of the PDF can be obtained. Further, an interpretation of the distribution of the refractive error map, as well as its moments, is provided for a range of wavefront aberrations measured in real eyes. These are evaluated using kernel density and sample moment estimators. It is concluded that the refractive error domain allows non-functional analysis of wavefront aberrations based on simple statistics in the form of sample moments. Clinicians may find this approach to wavefront analysis easier to interpret due to the clinical familiarity and intuitive appeal of refractive error maps.
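The kernel density and sample moment estimators mentioned above can be sketched in a few lines. The data here are synthetic refractive-error samples, not measurements from real eyes; the Gaussian kernel, the bandwidth, and the distribution parameters are illustrative assumptions.

```python
import numpy as np

def sample_moments(values):
    # First four sample moments: mean, variance, skewness, excess kurtosis
    m = values.mean()
    c = values - m
    var = (c ** 2).mean()
    return m, var, (c ** 3).mean() / var ** 1.5, (c ** 4).mean() / var ** 2 - 3.0

def gaussian_kde(values, grid, bandwidth):
    # Kernel density estimate of the refractive-error distribution
    u = (grid[:, None] - values[None, :]) / bandwidth
    return (np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)).mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
errors = rng.normal(-0.5, 0.25, size=2000)   # toy refractive errors (dioptres)
grid = np.linspace(-1.75, 0.75, 251)
pdf = gaussian_kde(errors, grid, bandwidth=0.05)
mean, var, skew, kurt = sample_moments(errors)
```

The estimated density integrates to one over the grid, and the sample moments recover the parameters of the toy distribution; on real maps the same two estimators summarise the refractive error without any functional (e.g. Zernike) fit.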
Abstract:
In this thesis we are interested in financial risk, and the instrument we use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data in order to choose the best model. The existing literature is very wide, at times controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach for modelling time-varying volatility (conditional variance) and skewness simultaneously. The new tools are modifications of the Generalised Lambda Distributions (GLDs). These are four-parameter distributions which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs.
Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the data-generating process (DGP) underlying the series and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness, implicitly assuming the existence of the third moment. The GLDs further suggest that the mean, variance, skewness and, in general, the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, the ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
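The moving-window scheme can be illustrated with a plain empirical-percentile VaR, the same windowing logic the thesis uses; the thesis fits a GLD inside each window and reads the percentile off the fitted distribution rather than off the empirical sample. The window length, confidence level, and heavy-tailed toy returns below are assumptions for illustration.

```python
import numpy as np

def rolling_var(returns, window=250, level=0.01):
    # Moving-window VaR: the loss threshold exceeded with probability `level`.
    # Empirical-percentile sketch; a GLD-based variant would fit the four GLD
    # parameters to each window and take the quantile of the fitted density.
    var = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        var[t] = -np.quantile(returns[t - window:t], level)
    return var

rng = np.random.default_rng(1)
rets = rng.standard_t(df=4, size=1000) * 0.01   # heavy-tailed toy returns
var99 = rolling_var(rets, window=250, level=0.01)
```

The first `window` entries are undefined by construction; thereafter the estimate adapts as the window rolls forward, which is what lets the percentile (and hence the implied skewness) vary over time.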
Abstract:
Today’s evolving networks experience a large number of different attacks, ranging from system break-ins and infection by automated attack tools such as worms, viruses and trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether those addresses are legitimately allocated. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviours. However, such an analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behaviour of network traffic targeting unused address spaces, to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window-based non-parametric cumulative sum (CUSUM) change detection technique for identifying changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behaviour and also to detect when a particular change has ended.
Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
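A minimal sketch of a sliding-window non-parametric CUSUM detector in the spirit of the technique described above; the dissertation's dynamic thresholds and specific traffic features are not reproduced, and the window size, drift and threshold here are illustrative assumptions.

```python
import numpy as np

def cusum_detect(x, window=50, drift=0.5, threshold=5.0):
    # Sliding-window non-parametric CUSUM: flag upward shifts in the mean.
    # Each point is standardised against a trailing window, so no fixed
    # distributional assumption is made about the traffic statistic.
    s = 0.0
    alarms = []
    for t in range(window, len(x)):
        ref = x[t - window:t]
        z = (x[t] - ref.mean()) / (ref.std() + 1e-9)
        s = max(0.0, s + z - drift)          # accumulate evidence of a shift
        if s > threshold:
            alarms.append(t)                 # abrupt change detected
            s = 0.0                          # reset after an alarm
    return alarms

# Toy traffic: packet counts per interval, with an attack onset at t = 300
rng = np.random.default_rng(2)
traffic = np.concatenate([rng.poisson(10, 300), rng.poisson(25, 100)]).astype(float)
alarms = cusum_detect(traffic)
```

The detector raises its first alarm within a few intervals of the simulated onset; a production version would also track the downward statistic to decide when the change has ended.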
Abstract:
Effective staff development remains a challenge in higher education. This paper examines the non-traditional methodology of arts-based staff development and its potential to foster transformational learning and the practice of professional artistry, through perceptions of program impact. Over a three-year period, eighty academics participated in one metropolitan Australian university’s arts-based academic development program. The methodology used one-on-one hermeneutic-based conversations with fifteen self-selected academics and a focus group with twenty other academics from all three years. The paper presents a learning model to engender academic professional artistry. The findings provide developers with support for using a non-traditional strategy of transformational learning.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion. This is significant, since radiometric distortion is a problem which commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
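The rank transform at the heart of this work is simple to state: each pixel is replaced by the number of neighbours darker than it. A small sketch, on synthetic images (the window radius and the gain/offset distortion are illustrative assumptions), showing the invariance to monotonic radiometric distortion that the abstract describes:

```python
import numpy as np

def rank_transform(img, r=1):
    # Rank transform: each pixel becomes the count of neighbours in the
    # (2r+1)x(2r+1) window whose intensity is below the centre pixel.
    h, w = img.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = int((win < img[y, x]).sum())
    return out

# Any monotonically increasing intensity change leaves the transform
# unchanged, which is why rank-based matching tolerates radiometric
# distortion between the two cameras.
rng = np.random.default_rng(3)
left = rng.integers(0, 255, size=(12, 12)).astype(float)
right = 0.5 * left + 40          # hypothetical gain/offset distortion
assert np.array_equal(rank_transform(left), rank_transform(right))
```

Matching then compares rank values (e.g. by SAD) between windows of the two transformed images; the rank constraint derived in the thesis operates on these same rank values.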
Abstract:
This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In part-I we have utilized sinusoidal DPLLs with non-uniform sampling scheme as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilizes Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such TDTL combines the two major approaches in DPLLs: the non-linear approach of sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on the arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered. 
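The key difference between the HT and a time delay can be seen numerically: a delay of tau shifts a tone of frequency f by omega*tau radians, a signal-dependent amount, whereas the HT always shifts by 90 degrees. A small demonstration on a synthetic tone (the sampling rate, tone frequency and delay are hypothetical, chosen so the delay spans an integer number of samples):

```python
import numpy as np

fs = 10_000.0
f = 250.0                                  # hypothetical input tone (Hz)
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * f * t)

tau = 1e-3                                 # hypothetical loop delay (s): 10 samples
x_delayed = np.roll(x, int(tau * fs))      # x(t - tau); exact here, since the
                                           # record spans whole periods of the tone

# Measure each signal's phase against a complex reference at the tone frequency
ref = np.exp(-1j * 2 * np.pi * f * t)
shift = (np.angle((x * ref).sum()) - np.angle((x_delayed * ref).sum())) % (2 * np.pi)

expected = 2 * np.pi * f * tau             # omega * tau: signal-DEPENDENT,
                                           # unlike the HT's fixed 90 degrees
```

Because the measured shift equals omega*tau, the TDTL's arctan phase detector sees a phase offset that depends on the input frequency, which is precisely the property the fixed-point analysis of the loop has to account for.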
This idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
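The peak-of-the-distribution idea behind time-frequency IF estimation can be sketched with a plain short-time Fourier spectrogram instead of the quadratic T-class TFDs the thesis proposes; the TFDs concentrate energy around the IF more sharply, but the peak-picking step is the same. The sampling rate, window and hop sizes, and the linear-FM test signal are illustrative assumptions.

```python
import numpy as np

def stft_if_estimate(sig, fs, nwin=128, hop=32):
    # Non-parametric IF estimate: the frequency of the peak of the
    # short-time spectrum at each time slice.
    win = np.hanning(nwin)
    freqs = np.fft.rfftfreq(nwin, d=1.0 / fs)
    est_t, est_f = [], []
    for start in range(0, len(sig) - nwin, hop):
        spec = np.abs(np.fft.rfft(sig[start:start + nwin] * win))
        est_t.append((start + nwin // 2) / fs)   # window-centre time
        est_f.append(freqs[spec.argmax()])       # peak frequency = IF estimate
    return np.array(est_t), np.array(est_f)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Linear FM (chirp): phase 2*pi*(50t + 50t^2), so the IF sweeps 50 -> 150 Hz
sig = np.cos(2 * np.pi * (50 * t + 50 * t ** 2))
times, ifreq = stft_if_estimate(sig, fs)
```

The estimate tracks the true IF (50 + 100t Hz) to within the bin spacing of the spectrogram; for multicomponent signals one would pick several peaks per slice, which is where the higher resolution of the T-class becomes important.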
Abstract:
Prognostics and asset life prediction is one of the key research areas in engineering asset health management. We previously developed the Explicit Hazard Model (EHM) to effectively and explicitly predict asset life using three types of information: population characteristics, condition indicators, and operating environment indicators. We have formerly studied the application of both the semi-parametric EHM and the non-parametric EHM to survival probability estimation in the reliability field. The survival time in these models is dependent not only upon the age of the monitored asset, but also upon the condition and operating environment information obtained. This paper is a further study of the semi-parametric and non-parametric EHMs applied to the hazard and residual life prediction of a set of resistance elements. The resistance elements were used as corrosion sensors for measuring the atmospheric corrosion rate in a laboratory experiment. In this paper, the estimated hazard of the resistance element using the semi-parametric EHM and the non-parametric EHM is compared to the traditional Weibull model and the Aalen Linear Regression Model (ALRM), respectively. Because the semi-parametric EHM assumes a Weibull distribution in its baseline hazard, its estimated hazard is compared to the traditional Weibull model. The estimated hazard using the non-parametric EHM is compared to the ALRM, which is a well-known non-parametric covariate-based hazard model. Finally, the predicted residual life of the resistance element using both EHMs is compared to the actual life data.
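The Weibull baseline and the covariate scaling that these comparisons rest on can be sketched as follows. This is the classical proportional-hazards form with a Weibull baseline, shown only as the comparison point the paper uses; it is not the EHM itself, and all parameter values are hypothetical.

```python
import numpy as np

def weibull_hazard(t, beta, eta):
    # Weibull hazard rate: h(t) = (beta/eta) * (t/eta)**(beta - 1)
    return (beta / eta) * (t / eta) ** (beta - 1)

def covariate_hazard(t, z, gamma, beta, eta):
    # PHM-style hazard: Weibull baseline scaled by covariates
    # (condition and operating-environment indicators).
    return weibull_hazard(t, beta, eta) * np.exp(np.dot(gamma, z))

t = np.linspace(0.1, 10, 100)
h0 = weibull_hazard(t, beta=2.0, eta=5.0)       # beta > 1: wear-out (rising) hazard
z = np.array([1.2, 0.3])                        # hypothetical condition + environment readings
h = covariate_hazard(t, z, gamma=np.array([0.5, 0.8]), beta=2.0, eta=5.0)
```

In the proportional form the covariates simply rescale the baseline by a constant factor exp(gamma . z); the EHM's distinguishing feature, per the abstract, is that the covariate information enters the model explicitly rather than only through this multiplicative term.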
Abstract:
In this sheep study, we investigated the influence of fixation stability on the temporal and spatial distribution of tissues in the fracture callus. As the initial mechanical conditions have been cited as being especially important for the healing outcome, it was hypothesized that differences in the path of healing would be seen as early as the initial phase of healing.

Sixty-four sheep underwent a mid-shaft tibial osteotomy that was treated with either a rigid or a semi-rigid external fixator. Animals were sacrificed at 2, 3, 6 and 9 weeks postoperatively and the fracture calluses were analyzed using radiological, biomechanical and histological techniques. Statistical comparison between the groups was performed using the Mann–Whitney U test for unpaired non-parametric data.

In the callus of the tibia treated with semi-rigid fixation, remnants of the fracture haematoma remained present for longer, although new periosteal bone formation during early healing was similar in both groups. The mechanical competence of the healing callus at 6 weeks was inferior compared to tibiae treated with rigid fixation. Semi-rigid fixation resulted in a larger cartilage component of the callus, which persisted longer. Remodeling processes were initiated earlier in the rigid group, while new bone formation continued throughout the entire investigated period in the semi-rigid group.

In this study, evidence is provided that less rigid fixation increased the time required for healing. The process of intramembranous ossification appeared during the initial stages of healing to be independent of mechanical stability. However, the delay in healing was related to a prolonged chondral phase.
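The group comparison above relies on the Mann–Whitney U test, which ranks the pooled observations instead of assuming normality. A minimal sketch of the U statistic on hypothetical stiffness scores (the numbers are invented for illustration, not study data):

```python
import numpy as np

def mann_whitney_u(a, b):
    # Mann-Whitney U statistic for two independent samples (ties count 0.5).
    a, b = np.asarray(a, float), np.asarray(b, float)
    gt = (a[:, None] > b[None, :]).sum()     # pairs where a beats b
    eq = (a[:, None] == b[None, :]).sum()    # tied pairs
    u_a = gt + 0.5 * eq
    return min(u_a, len(a) * len(b) - u_a)   # report the smaller U

# Hypothetical callus stiffness scores for the two fixation groups
rigid = [410, 520, 480, 530, 455, 500]
semi_rigid = [300, 350, 280, 390, 320, 310]
u = mann_whitney_u(rigid, semi_rigid)
```

Here every rigid-group score exceeds every semi-rigid score, so U = 0, the strongest possible separation; the p-value is then read from the exact U distribution or a normal approximation, as statistical packages do.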
Abstract:
Outdoor workers are exposed to high levels of ultraviolet radiation (UVR) and may thus be at greater risk of UVR-related health effects such as skin cancer, sunburn, and cataracts. A number of intervention trials (n=14) have aimed to improve outdoor workers’ work-related sun protection cognitions and behaviours. Only one study, however, has reported the use of UV-photography as part of a multi-component intervention. This study was performed in the USA and showed long-term (12 months) improvements in work-related sun protection behaviours. Intervention effects of the other studies have varied greatly, depending on the population studied, the intervention applied, and the measurement of effect. Previous studies have not assessed whether:
- interventions are similarly effective for workers in stringent and less stringent policy organisations;
- policy effect is translated into workers’ leisure time protection;
- implemented interventions are effective in the long term;
- the facial UV-photograph technique is effective in Australian male outdoor workers without a large additional intervention package, and;
- such interventions will also affect workers’ leisure time sun-related cognitions and behaviours.
Therefore, the present Protection of Outdoor Workers from Environmental Radiation [POWER]-study aimed to fill these gaps and had the objectives of: a) assessing outdoor workers’ sun-related cognitions and behaviours at work and during leisure time in stringent and less stringent sun protection policy environments; b) assessing the effect of an appearance-based intervention on workers’ risk perceptions, intentions and behaviours over time; c) assessing whether the intervention was equally effective within the two policy settings; and d) assessing the immediate post-intervention effect.
Effectiveness was described in terms of changes in sun-related risk perceptions and intentions (as these factors were shown to be the main precursors of behaviour change in many health promotion theories) and behaviour. The study purposefully selected and recruited two organisations with a large outdoor worker contingent in Queensland, Australia, within a 40 kilometre radius of Brisbane. The two organisations differed in the stringency of implementation and reinforcement of their organisational sun protection policy. Data were collected from 154 male, predominantly Australian-born outdoor workers with an average age of 37 years and predominantly medium to fair skin (83%). Sun-related cognitions and behaviours of workers were assessed using self-report questionnaires at baseline and six to twelve months later. Variation in follow-up time was due to a time difference in the recruitment of the two organisations. Participants within each organisation were assigned to an intervention or control group. The intervention group participants received a one-off personalised Skin Cancer Risk Assessment Tool [SCRAT]-letter and a facial UV-photograph with detailed verbal information. This was followed by an immediate post-intervention questionnaire within three months of the start of the study. The control group only received the baseline and follow-up questionnaires. Data were analysed using a variety of techniques including descriptive analyses, parametric and non-parametric tests, and generalised estimating equations. An observed proportional difference of 15% was deemed clinically significant, with reported statistical significance (p<0.05) added where applicable. Objective 1: Assess and compare the current sun-related risk perceptions, intentions, behaviours, and policy awareness of outdoor workers in stringent and less stringent sun protection policy settings.
Workers within the two organisations (stringent n=89 and less stringent n=65) were similar in their knowledge about skin cancer, self-efficacy, attitudes, and social norms regarding sun protection at work and during leisure time. Participants were predominantly in favour of sun protection. Results highlighted that, compared to working in a less stringent policy organisation, working for an organisation with stringent sun protection policies and practices resulted in more desirable sun protection intentions (less willing to tan, p=0.03); more desirable actual behaviours at work (sufficient use of upper and lower body protection, headgear, and sunglasses; p<0.001 for all comparisons); and greater policy awareness (awareness of repercussions if Personal Protective Equipment (PPE) was not used, p<0.001). However, the effect of the work-related sun protection policy was found not to extend to leisure time sun protection. Objective 2: Compare changes in sun-related risk perceptions, intentions, and behaviours between the intervention and control group. The effect of the intervention was minimal and mainly resulted in a clinically significant reduction in work-related self-perceived risk of developing skin cancer in the intervention group compared to the control group (16% and 32% of the intervention and control group, respectively, estimated their risk as higher compared to other outdoor workers; p=0.11). No other clinically significant effects were observed at 12 months follow-up. Objective 3: Assess whether the intervention was equally effective in the stringent sun protection policy organisation and the less stringent sun protection policy organisation. The appearance-based intervention resulted in a clinically significant improvement in the stringent policy intervention group participants’ intention to protect from the sun at work (workplace*time interaction, p=0.01).
There was also a reduction in their willingness to tan, both at work (will tan at baseline: 17% and 61%, p=0.06; at follow-up: 54% and 33%, p=0.07, for the stringent and less stringent policy intervention groups respectively; the workplace*time interaction was significant, p<0.001) and during leisure time (will tan at baseline: 42% and 78%, p=0.01; at follow-up: 50% and 63%, p=0.43, for the stringent and less stringent policy intervention groups respectively; the workplace*time interaction was significant, p=0.01) over the course of the study, compared to the less stringent policy intervention group. However, no changes in actual sun protection behaviours were found. Objective 4: Examine the effect of the intervention on the level of alarm and concern regarding the health of the skin, as well as sun protection behaviours, in both organisations. The immediate post-intervention results showed that the stringent policy organisation participants reported being less alarmed (p=0.04) and concerned (p<0.01) about the health of their skin and less likely to show the facial UV-photograph to others (family, p=0.03) compared to the less stringent policy participants. A clinically significantly larger proportion of participants from the stringent policy organisation reported that they worried more about skin cancer (65%) and skin freckling (43%) after seeing the UV-photograph, compared to those in the less stringent policy organisation (46% and 23%, respectively). In summary, the results of this study suggest that having a stringent work-related sun protection policy was significantly related to work-time sun protection practices, but the effect did not extend to leisure time sun protection. This could reflect the insufficient level of sun protection found in the general Australian population during leisure time. Alternatively, reactance caused by being restricted in personal decisions through work-time policy could have contributed to lower leisure time sun protection.
Finally, other factors could have also contributed to the less than optimal leisure time sun protection behaviours reported, such as unmeasured personal or cultural barriers. All these factors combined may have led to reduced willingness to take proper preventive action during leisure time exposure. The intervention did not result in any measurable difference between the intervention and control groups in sun protection behaviours in this population, potentially due to the long lag time between the implementation of the intervention and assessment at 12-month follow-up. In addition, high levels of sun protection behaviours were found at baseline (a ceiling effect), which left little room for improvement. Further, this study did not assess sunscreen use, which was the predominant behaviour assessed in previous effective appearance-based intervention trials. Additionally, previous trials were mainly conducted in female populations, whilst the POWER-study’s population was all male. The observed immediate post-intervention result could be due to more emphasis being placed on sun protection and the risks related to sun exposure in the stringent policy organisation. Therefore, participants from the stringent policy organisation could have been more aware of the harmful effects of UVR and hence, knowing that they usually protect themselves adequately, not be as alarmed or concerned as the participants from the less stringent policy organisation. In conclusion, a facial UV-photograph and SCRAT-letter information alone may not achieve large changes in sun-related cognitions and behaviour, especially if assessed 6-12 months after the intervention was implemented and in workers who are already quite well protected. Differences found between workers in the present study appear to be more attributable to organisational policy. However, against a background of organisational policy, this intervention may be a useful addition to sun-related workplace health and safety programs.
The study findings have been interpreted while respecting a number of limitations. These included non-random allocation due to the pre-organised assignment of participants to study groups in one organisation, and difficulty in separating participants from either study group. Due to the transient nature of the outdoor worker population, only 105 of the 154 workers available at baseline could be reached for follow-up (attrition rate 32%). In addition, the discrepancy in the time to follow-up assessment between the two organisations was a limitation of the current study. Given the caveats of this research, the following recommendations were made for future research:
- Consensus should be reached on defining "outdoor worker" in terms of time spent outside at work, as well as on the way sun protection behaviours are measured and reported.
- Future studies should implement and assess the value of facial UV-photographs in a wide range of outdoor worker organisations and countries.
- More timely and frequent follow-up assessments should be implemented in intervention studies to determine the intervention effect and to identify the best timing of booster sessions to optimise results.
- Future research should continue to target outdoor workers’ leisure time cognitions and behaviours and improve these if possible.
Overall, policy appears to be an important factor in workers’ compliance with work-time use of sun protection. Given the evidence generated by this research, organisations employing outdoor workers should consider stringent implementation and reinforcement of a sun protection policy. Finally, more research is needed to improve ways to generate desirable behaviour in this population during leisure time.
Abstract:
The rank transform is one non-parametric transform which has been applied to the stereo matching problem. The advantages of this transform include its invariance to radiometric distortion and its amenability to hardware implementation. This paper describes the derivation of the rank constraint for matching using the rank transform. Previous work has shown that this constraint was capable of resolving ambiguous matches, thereby improving match reliability, and a new matching algorithm incorporating this constraint was also proposed. This paper extends this previous work by proposing a matching algorithm which uses a three-dimensional match surface, in which the match score is computed for every possible template and match window combination. The principal advantage of this algorithm is that the use of the match surface enforces the left-right consistency and uniqueness constraints, thus improving the algorithm's ability to remove invalid matches. Experimental results for a number of test stereo pairs show that the new algorithm is capable of identifying and removing a large number of incorrect matches, particularly in the case of occlusions.
Abstract:
The rank transform is a non-parametric technique which has been recently proposed for the stereo matching problem. The motivation behind its application to the matching problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives an analytic expression for the process of matching using the rank transform, and then goes on to derive one constraint which must be satisfied for a correct match. This has been dubbed the rank order constraint or simply the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. This constraint was incorporated into a new algorithm for matching using the rank transform. This modified algorithm resulted in an increased proportion of correct matches, for all test imagery used.
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. The objective is to produce a stereo vision sensor suited to close-range scenes consisting primarily of rocks. This sensor should be able to produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this investigation. A number of area-based matching metrics have been implemented, including the SAD, SSD, NCC, and their zero-meaned versions. The NCC and the zero-meaned SAD and SSD were found to produce the disparity maps with the highest proportion of valid matches. The plain SAD and SSD were the least computationally expensive, since all their operations take place in integer arithmetic; however, they were extremely sensitive to radiometric distortion. Non-parametric techniques for matching, in particular the rank and the census transforms, have also been investigated. The rank and census transforms were found to be robust with respect to radiometric distortion, as well as being able to produce disparity maps with a high proportion of valid matches. An additional advantage of both the rank and the census transform is their amenability to fast hardware implementation.
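The census transform mentioned above encodes each pixel's neighbourhood as a bit string, and matching costs become Hamming distances between codes. A minimal sketch on synthetic images (window radius and the gain/offset distortion are illustrative assumptions):

```python
import numpy as np

def census_transform(img, r=1):
    # Census transform: encode each pixel as a bit string recording which
    # neighbours in the (2r+1)x(2r+1) window are darker than the centre.
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = bits
    return out

def hamming(a, b):
    # Matching cost between two census codes: number of differing bits
    return bin(int(a) ^ int(b)).count("1")

rng = np.random.default_rng(4)
left = rng.integers(0, 255, size=(10, 10)).astype(float)
right = 1.5 * left - 20          # hypothetical radiometric distortion
cl, cr = census_transform(left), census_transform(right)
```

Like the rank transform, the census codes are unchanged by the monotonic intensity distortion, and the XOR-and-popcount cost is exactly the kind of bitwise operation that maps cheaply onto hardware.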
Resumo:
Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalised Cross Correlation (NCC). Non-parametric matching algorithms such as the rank and census transforms instead rely on the relative ordering of pixel values, rather than the values themselves, as the similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms with respect to robustness to radiometric distortion and random noise. A generic implementation framework for the stereo matching problem is presented, and the relative hardware requirements of the various metrics are investigated.
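The census transform, the other non-parametric technique above, can be sketched as follows (an illustrative implementation, not the one assessed here): each pixel is encoded as a bit string recording which neighbours are darker than the centre, and windows are compared with the Hamming distance.

```python
import numpy as np

def census(img, win=3):
    """Encode each pixel as a bit string: one bit per neighbour
    (raster order, centre skipped), set when that neighbour's
    intensity is less than the centre pixel's."""
    r = win // 2
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(r, h - r):
        for x in range(r, w - r):
            code = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    code = (code << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = code
    return out

def hamming(a, b):
    """Dissimilarity of two census codes: number of differing bits."""
    return bin(int(a) ^ int(b)).count("1")
```

Like the rank transform, the census transform depends only on relative ordering, hence its robustness to radiometric distortion; comparing codes reduces to an XOR followed by a population count, which is what makes it attractive for hardware realisation.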
Resumo:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper evaluates a number of matching techniques for possible use in a stereo vision sensor for mining automation applications. Area-based techniques have been investigated because they have the potential to yield dense maps, are amenable to fast hardware implementation, and are suited to textured scenes. In addition, two non-parametric transforms, namely, the rank and census, have been investigated. Matching algorithms using these transforms were found to have a number of clear advantages, including reliability in the presence of radiometric distortion, low computational complexity, and amenability to hardware implementation.
Resumo:
The ability to estimate the asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure in future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is the modelling of condition indicators and operating environment indicators and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models were developed based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate the three types of asset health information (namely failure event data (i.e. 
observed and/or suspended), condition data, and operating environment data) into a single model to obtain more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response variables (or dependent variables), whereas operating environment indicators act as explanatory variables (or independent variables). However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach for addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Some examples of condition indicators are the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. 
Operating environment indicators in this model are failure accelerators and/or decelerators; they are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset remains operational. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model. 
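The abstract does not give EHM's exact functional form, but the ingredients it names (a Weibull baseline hazard reshaped by condition indicators, with operating environment covariates acting as multiplicative accelerators/decelerators) can be sketched in the following illustrative form. The parameter names and the way the condition indicator enters the baseline are assumptions made for illustration only:

```python
import math

def weibull_baseline(t, beta, eta):
    """Weibull hazard h0(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def hazard(t, cond, env, beta, eta, alpha, gamma):
    """Hypothetical EHM-style hazard: the Weibull baseline is
    modulated by a condition indicator `cond` (here via a log-linear
    term with assumed coefficient alpha), and an operating environment
    indicator `env` scales the result multiplicatively, PHM-style,
    with assumed coefficient gamma.  When env = 0 its effect is nil."""
    h0 = weibull_baseline(t, beta, eta) * math.exp(alpha * cond)
    return h0 * math.exp(gamma * env)

def reliability(t, cond, env, beta, eta, alpha, gamma, steps=1000):
    """R(t) = exp(-integral of the hazard from 0 to t), approximated
    by trapezoidal integration with covariates held constant."""
    dt = t / steps
    H = 0.0
    for i in range(steps):
        u0, u1 = i * dt, (i + 1) * dt
        H += 0.5 * (hazard(u0, cond, env, beta, eta, alpha, gamma)
                    + hazard(u1, cond, env, beta, eta, alpha, gamma)) * dt
    return math.exp(-H)
```

With both covariates at zero this reduces to the plain Weibull reliability function, while a positive environment coefficient raises the hazard above the baseline, matching the accelerator/decelerator role described above.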
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of the existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.