62 results for non-parametric technique


Relevance: 80.00%

Abstract:

Purpose: This study investigates the clinical utility of the melanopsin-expressing intrinsically photosensitive retinal ganglion cell (ipRGC) controlled post-illumination pupil response (PIPR) as a novel technique for documenting inner retinal function in patients with Type II diabetes without diabetic retinopathy. Methods: The PIPR was measured in seven patients with Type II diabetes, normal retinal nerve fiber thickness and no diabetic retinopathy. Stimuli of 488 nm and 610 nm, 7.15° in diameter, were presented in Maxwellian view to the right eye, and the left consensual pupil light reflex was recorded. Results: The group data for the blue (488 nm) PIPR identified a trend of reduced ipRGC function in patients with diabetes with no retinopathy. The transient pupil constriction was lower on average in the diabetic group. The relationship between duration of diabetes and the blue PIPR amplitude was linear, suggesting that ipRGC function decreases with increasing diabetes duration. Conclusion: This is the first report to show that the ipRGC-controlled post-illumination pupil response may have clinical applications as a non-invasive technique for determining the progression of inner neuroretinal changes in patients with diabetes before they are ophthalmoscopically or anatomically evident. The lower transient pupil constriction amplitude indicates that outer retinal photoreceptor inputs to the pupil light reflex may also be affected in diabetes.

Relevance: 80.00%

Abstract:

The rank transform is one non-parametric transform which has been applied to the stereo matching problem. The advantages of this transform include its invariance to radiometric distortion and its amenability to hardware implementation. This paper describes the derivation of the rank constraint for matching using the rank transform. Previous work has shown that this constraint was capable of resolving ambiguous matches, thereby improving match reliability, and a new matching algorithm incorporating this constraint was also proposed. This paper extends that previous work by proposing a matching algorithm which uses a three-dimensional match surface in which the match score is computed for every possible template and match window combination. The principal advantage of this algorithm is that the use of the match surface enforces the left–right consistency and uniqueness constraints, thus improving the algorithm's ability to remove invalid matches. Experimental results for a number of test stereo pairs show that the new algorithm is capable of identifying and removing a large number of incorrect matches, particularly in the case of occlusions.

Relevance: 80.00%

Abstract:

The mining environment, being complex, irregular and time-varying, presents a challenging prospect for stereo vision. The objective is to produce a stereo vision sensor suited to close-range scenes consisting primarily of rocks. This sensor should be able to produce a dense depth map within real-time constraints; speed and robustness are of foremost importance for this investigation. A number of area-based matching metrics have been implemented, including the SAD, SSD, NCC, and their zero-meaned versions. The NCC and the zero-meaned SAD and SSD were found to produce the disparity maps with the highest proportion of valid matches. The plain SAD and SSD were the least computationally expensive, as all of their operations take place in integer arithmetic; however, they were extremely sensitive to radiometric distortion. Non-parametric techniques for matching, in particular the rank and the census transforms, have also been investigated. The rank and census transforms were found to be robust with respect to radiometric distortion, as well as able to produce disparity maps with a high proportion of valid matches. An additional advantage of both the rank and the census transform is their amenability to fast hardware implementation.
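
The rank and census transforms named in this abstract are simple to sketch. The following pure-Python illustration (not the authors' implementation) replaces each pixel by a statistic of its neighbourhood that depends only on the ordering of intensities, which is what makes both transforms robust to radiometric distortion:

```python
def rank_transform(image, radius=1):
    """Replace each pixel by the count of neighbours darker than it."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            centre = image[y][x]
            out[y][x] = sum(
                1
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)
                if (dy, dx) != (0, 0) and image[y + dy][x + dx] < centre
            )
    return out


def census_transform(image, radius=1):
    """Encode each neighbourhood as a bit string of centre comparisons."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            centre = image[y][x]
            bits = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if (dy, dx) == (0, 0):
                        continue
                    # One bit per neighbour: 1 if darker than the centre.
                    bits = (bits << 1) | (1 if image[y + dy][x + dx] < centre else 0)
            out[y][x] = bits
    return out
```

Matching then proceeds by comparing transformed windows: SAD on rank-transformed images, or Hamming distance on census bit strings.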

Relevance: 80.00%

Abstract:

Traditional area-based matching techniques make use of similarity metrics such as the Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD) and Normalised Cross Correlation (NCC). Non-parametric matching algorithms such as the rank and census transforms instead rely on the relative ordering of pixel values, rather than the values themselves, as a similarity measure. Both traditional area-based and non-parametric stereo matching techniques have an algorithmic structure which is amenable to fast hardware realisation. This investigation undertakes a performance assessment of these two families of algorithms for robustness to radiometric distortion and random noise. A generic implementation framework is presented for the stereo matching problem, and the relative hardware requirements for the various metrics are investigated.
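
As a point of reference for the metrics named above, a minimal sketch of SAD, SSD and NCC over two equally sized windows (given as flat lists of intensities; illustrative only, not the paper's implementation) might look like:

```python
import math


def sad(a, b):
    """Sum of Absolute Differences: lower is more similar."""
    return sum(abs(x - y) for x, y in zip(a, b))


def ssd(a, b):
    """Sum of Squared Differences: lower is more similar."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def ncc(a, b):
    """Normalised Cross Correlation: 1.0 means identical up to gain/offset."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

Because NCC subtracts the window means and normalises by the spread of each window, it is invariant to linear gain and offset changes, whereas SAD and SSD are not — consistent with the radiometric-distortion sensitivity noted in the neighbouring abstracts.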

Relevance: 80.00%

Abstract:

The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper evaluates a number of matching techniques for possible use in a stereo vision sensor for mining automation applications. Area-based techniques have been investigated because they have the potential to yield dense maps, are amenable to fast hardware implementation, and are suited to textured scenes. In addition, two non-parametric transforms, namely, the rank and census, have been investigated. Matching algorithms using these transforms were found to have a number of clear advantages, including reliability in the presence of radiometric distortion, low computational complexity, and amenability to hardware implementation.

Relevance: 80.00%

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to attain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspended data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of assets, while operating environment indicators accelerate or decelerate the lifetime of assets. When these data are available, an alternative approach to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed. All of these existing covariate-based hazard models are based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have, to some extent, been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully utilise the three types of asset health information (failure event data (i.e. observed and/or suspended), condition data, and operating environment data) in a single model for more effective hazard and reliability predictions.
In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data. Condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. However, these non-homogeneous covariate data were modelled in the same way for hazard prediction in the existing covariate-based hazard models. The related and yet more imperative question is how both of these indicators should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach to addressing the aforementioned challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and also derives the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; therefore they update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component.
Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the value of the hazard relative to the baseline hazard. These indicators arise from the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be nil in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational and surviving. EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to effectively predict hazard and reliability. Another is that EHM explicitly investigates the relationship between condition and operating environment indicators and the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. According to the sample size of failure/suspension times, EHM is extended into two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications, due to sparse failure event data, the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is another merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models is appraised by comparing their estimates with those of other existing covariate-based hazard models. The comparison results demonstrate that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, regarding the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
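
As background to the covariate-based hazard models discussed above, the following sketch shows a generic PHM-style model — a Weibull baseline hazard scaled by an exponential covariate link — not the EHM itself; all parameter values are hypothetical:

```python
import math


def weibull_baseline_hazard(t, shape, scale):
    """h0(t) for a Weibull lifetime distribution."""
    return (shape / scale) * (t / scale) ** (shape - 1)


def hazard(t, covariates, coeffs, shape=2.0, scale=1000.0):
    """h(t, z) = h0(t) * exp(sum(coeff_i * z_i))  (PHM form)."""
    link = math.exp(sum(c * z for c, z in zip(coeffs, covariates)))
    return weibull_baseline_hazard(t, shape, scale) * link


def reliability(t, covariates, coeffs, shape=2.0, scale=1000.0, steps=1000):
    """R(t) = exp(-integral of h from 0 to t), via the trapezoidal rule."""
    dt = t / steps
    cum = 0.0
    for i in range(steps):
        a = hazard(i * dt + 1e-12, covariates, coeffs, shape, scale)
        b = hazard((i + 1) * dt, covariates, coeffs, shape, scale)
        cum += 0.5 * (a + b) * dt
    return math.exp(-cum)
```

The proportionality assumption criticised in the abstract is visible here: the covariate link multiplies the baseline hazard by the same factor at every t.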

Relevance: 80.00%

Abstract:

It has not yet been established whether the spatial variation of particle number concentration (PNC) within a microscale environment can affect exposure estimation results. In general, the degree of spatial variation within microscale environments remains unclear, since previous studies have only focused on spatial variation within macroscale environments. The aims of this study were to determine the spatial variation of PNC within microscale school environments, in order to assess the importance of the number of monitoring sites for exposure estimation. Furthermore, this paper aims to identify which parameters have the largest influence on spatial variation, as well as the relationship between those parameters and spatial variation. Air quality measurements were conducted for two consecutive weeks at each of 25 schools across Brisbane, Australia. PNC was measured at three sites within the grounds of each school, along with meteorological and several other air quality parameters. Traffic density was recorded for the busiest road adjacent to each school. Spatial variation at each school was quantified using the coefficient of variation (CV). The portion of CV associated with instrument uncertainty was found to be 0.3, and CV was therefore corrected so that only non-instrument uncertainty was analysed in the data. The median corrected CV (CVc) ranged from 0 to 0.35 across the schools, with 12 schools found to exhibit spatial variation. The study determined the number of monitoring sites required at schools with spatial variability and tested the deviation in exposure estimation arising from using only a single site. Nine schools required two measurement sites and three schools required three sites. Overall, the deviation in exposure estimation from using only one monitoring site was as much as one order of magnitude.
The study also tested the association of spatial variation with wind speed/direction and traffic density, using partial correlation coefficients to identify sources of variation and non-parametric function estimation to quantify the level of variability. Traffic density and road-to-school wind direction were found to have a positive effect on CVc, and therefore also on spatial variation. Wind speed was found to reduce spatial variation once it exceeded a threshold of 1.5 m/s, while it had no effect below this threshold. Traffic density had a positive effect on spatial variation, and its effect increased up to a density of 70 vehicles per five minutes, at which point it plateaued and did not increase further with increasing traffic density.
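
A minimal sketch of the coefficient-of-variation calculation used above to quantify spatial variation might look as follows; the 0.3 instrument-uncertainty figure comes from the text, the site values are hypothetical, and the quadrature subtraction is an assumption about how the correction was applied:

```python
import statistics


def coefficient_of_variation(values):
    """CV = sample standard deviation / mean."""
    return statistics.stdev(values) / statistics.mean(values)


def corrected_cv(values, instrument_cv=0.3):
    """Remove the instrument-uncertainty component, assuming the
    variance components add in quadrature (an assumption here)."""
    cv = coefficient_of_variation(values)
    return max(cv ** 2 - instrument_cv ** 2, 0.0) ** 0.5


# Hypothetical PNC readings (particles/cm^3) at three sites in one school.
pnc_sites = [8000.0, 15000.0, 25000.0]
print(round(corrected_cv(pnc_sites), 3))  # → 0.442
```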

Relevance: 80.00%

Abstract:

OBJECTIVE: The objective of this study was to describe the distribution of conjunctival ultraviolet autofluorescence (UVAF) in an adult population. METHODS: We conducted a cross-sectional, population-based study in the genetic isolate of Norfolk Island, South Pacific Ocean. In all, 641 people, aged 15 to 89 years, were recruited. UVAF and standard (control) photographs were taken of the nasal and temporal interpalpebral regions bilaterally. Differences between the groups for non-normally distributed continuous variables were assessed using the Wilcoxon-Mann-Whitney rank-sum test. Trends across categories were assessed using Cuzick's non-parametric test for trend or Kendall's rank correlation τ. RESULTS: Conjunctival UVAF is a non-parametric trait with a positively skewed distribution. The median amount of conjunctival UVAF per person (sum of four measurements: right nasal/temporal and left nasal/temporal) was 28.2 mm(2) (interquartile range 14.5-48.2). There was an inverse, linear relationship between UVAF and advancing age (P<0.001). Males had a higher sum of UVAF compared with females (34.4 mm(2) vs 23.2 mm(2), P<0.0001). There were no statistically significant differences in the area of UVAF between right and left eyes or between nasal and temporal regions. CONCLUSION: We have provided the first quantifiable estimates of conjunctival UVAF in an adult population. Further data are required to provide information about the natural history of UVAF and to characterise other potential disease associations with UVAF. Protective strategies against ultraviolet radiation (UVR) should be emphasised at an early age to prevent the long-term adverse health effects associated with excess UVR.
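
The Wilcoxon-Mann-Whitney rank-sum test used above reduces to a simple rank computation. A pure-Python sketch of the U statistic (p-values would normally come from a statistics package):

```python
def rank_sum_u(group_a, group_b):
    """Mann-Whitney U statistic for group_a; ties get average ranks."""
    combined = sorted(group_a + group_b)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    r_a = sum(ranks[x] for x in group_a)
    n_a = len(group_a)
    # U = rank sum of group_a minus its minimum possible rank sum.
    return r_a - n_a * (n_a + 1) / 2
```

U ranges from 0 (every value in group_a is below every value in group_b) to n_a*n_b (the reverse); values near the middle indicate overlapping distributions.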

Relevance: 80.00%

Abstract:

Migraine is a prevalent neurovascular disease with a significant genetic component. Linkage studies have so far identified migraine susceptibility loci on chromosomes 1, 4, 6, 11, 14, 19 and X. We performed a genome-wide scan of 92 Australian pedigrees phenotyped for migraine with and without aura and for a more heritable form of “severe” migraine. Multipoint non-parametric linkage analysis revealed suggestive linkage on chromosome 18p11 for the severe migraine phenotype (LOD*=2.32, P=0.0006) and chromosome 3q (LOD*=2.28, P=0.0006). Excess allele sharing was also observed at multiple different chromosomal regions, some of which overlap with, or are directly adjacent to, previously implicated migraine susceptibility regions. We have provided evidence for two loci involved in severe migraine susceptibility and conclude that dissection of the “migraine” phenotype may be helpful for identifying susceptibility genes that influence the more heritable clinical (symptom) profiles in affected pedigrees. Also, we concluded that the genetic aetiology of the common (International Headache Society) forms of the disease is probably comprised of a number of low to moderate effect susceptibility genes, perhaps acting synergistically, and this effect is not easily detected by traditional single-locus linkage analyses of large samples of affected pedigrees.

Relevance: 80.00%

Abstract:

5-Hydroxytryptamine (5HT), commonly known as serotonin, which predominantly serves as an inhibitory neurotransmitter in the brain, has long been implicated in migraine pathophysiology. This study tested an MspI polymorphism in the human 5HT2A receptor gene (HTR2A), and a closely linked microsatellite marker (D13S126), for linkage and association with common migraine. In the association analyses, no significant differences were found between the migraine and control populations for either the MspI polymorphism or the D13S126 microsatellite marker. The linkage studies, involving three families comprising 36 affected members, were analysed using both parametric (FASTLINK) and non-parametric (MFLINK and APM) techniques. Significant close linkage was indicated between the MspI polymorphism and the D13S126 microsatellite marker at a recombination fraction (θ) of zero (lod score=7.15). Linkage results for the MspI polymorphism were not very informative in the three families, producing maximum and minimum lod scores of only 0.35 and −0.39 at recombination fractions (θ) of 0.2 and 0.00, respectively. However, linkage analysis between the D13S126 marker and migraine indicated significant non-linkage (lod < −2) up to a recombination fraction (θ) of 0.028. Results from this study exclude the HTR2A gene, which has been localized to chromosome 13q14-q21, from involvement in common migraine.

Relevance: 80.00%

Abstract:

1. Essential hypertension occurs in people with an underlying genetic predisposition who subject themselves to adverse environmental influences. The number of genes involved is unknown, as is the extent to which each contributes to final blood pressure and the severity of the disease.
2. In the past, studies of potential candidate genes have been performed by association (case-control) analysis of unrelated individuals or linkage (pedigree or sib-pair) analysis of families. These studies have resulted in several positive findings but, as one may expect, also an enormous number of negative results.
3. In order to uncover the major genetic loci for essential hypertension, it is proposed that scanning the genome systematically in 100–200 affected sibships should prove successful.
4. This involves genotyping sets of hypertensive sibships to determine their complement of several hundred microsatellite polymorphisms. Markers that are highly informative, by having a high heterozygosity, are most suitable. The markers also need to be spaced sufficiently evenly across the genome to ensure adequate coverage.
5. Tests are performed to detect increased segregation of alleles of each marker with hypertension. The analytical tools involve specialized statistical programs that can detect such differences. Non-parametric multipoint analysis is an appropriate approach.
6. In this way, loci for essential hypertension are beginning to emerge.

Relevance: 80.00%

Abstract:

Purpose: To investigate the application of retinal nerve fibre layer (RNFL) thickness as a marker for severity of diabetic peripheral neuropathy (DPN) in people with Type 2 diabetes. Methods: This was a cross-sectional study in which 61 participants (mean age 61 years [range 41-75], mean duration of diabetes 14 years [range 1-40], 70% male) with Type 2 diabetes and DPN underwent optical coherence tomography (OCT) scans. Global and four-quadrant (TSNI: temporal, superior, nasal, inferior) RNFL thicknesses were measured at 3.45 mm around the optic nerve head of one eye. The neuropathy disability score (NDS) was used to assess the severity of DPN on a 0 to 10 scale. Participants were divided into three age-matched groups representing mild (NDS=3-5), moderate (NDS=6-8) and severe (NDS=9-10) neuropathy. Two regression models were fitted for statistical analysis: 1) NDS scores as a covariate for global and quadrant RNFL thicknesses; 2) NDS groups as a factor for global RNFL thickness only. Results: Mean (SD) RNFL thickness (µm) was 103(9) for mild neuropathy (n=34), 101(10) for moderate neuropathy (n=16) and 95(13) in the group with severe neuropathy (n=11). Global RNFL thickness and NDS scores were statistically significantly related (b=-1.20, p=0.048). When neuropathy was assessed across groups, a trend of thinner mean RNFL thickness was observed with increasing severity of neuropathy; however, this result was not statistically significant (F=2.86, p=0.065). TSNI quadrant analysis showed that mean RNFL thickness in the inferior quadrant was reduced by 2.55 µm per 1 unit increase in NDS score (p=0.005). However, the regression coefficients were not statistically significant for RNFL thickness in the superior (b=-1.0, p=0.271), temporal (b=-0.90, p=0.238) and nasal (b=-0.99, p=0.205) quadrants. Conclusions: RNFL thickness was reduced with increasing severity of DPN, and the effect was most evident in the inferior quadrant.
Measuring RNFL thickness using OCT may prove to be a useful, non-invasive technique for identifying the severity of DPN and may also provide additional insight into common mechanisms for peripheral neuropathy and RNFL damage.

Relevance: 80.00%

Abstract:

An important aspect of decision support systems involves applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging because they are 'doubly stochastic', i.e. obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches to the problem either use simple models that obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation, although they do use sophisticated 3D visualization for interpretation; or they employ sophisticated non-parametric Bayesian Cox process models but do not use visualization to render complex spatio-temporal forecasts interpretable. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model over a large discretized grid, and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.

Relevance: 80.00%

Abstract:

Aims and objectives. This study was undertaken to measure and analyse levels of acoustic noise in a General Surgical Ward. Method. Measurements were undertaken using the Norsonic 116 sound level meter (SLM), recording noise levels on the internationally agreed 'A' weighted scale. Noise level data, and observational data on the number of staff present, were obtained and recorded at 5-min intervals over three consecutive days. Results. Analysis indicated that the mean noise level within this clinical area was 42.28 dB(A), with acute spikes reaching 70 dB(A). The lowest noise level attained was 36 dB(A), during the period midnight to 7 a.m. Non-parametric testing, using Spearman's rho (two-tailed), found a positive relationship between the number of staff present and the level of noise recorded, indicating that the presence of hospital personnel strongly influences the level of noise within this area. Relevance to clinical practice. Whilst the results of this study may seem self-evident in many respects, the problems of excessive noise production, and of exposure to it for patients, hospital personnel and relatives alike, continue unabated. What must be of concern are the psychophysiological effects excessive noise exposure has on individuals, for example decreased wound healing, sleep deprivation and cardiovascular stimulation.
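
Spearman's rho, the test used above, correlates the ranks of the two variables rather than their raw values. A minimal sketch using the no-ties formula, with hypothetical staff counts and noise readings:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: rho = 1 - 6*sum(d^2)/(n*(n^2-1)).
    Assumes no tied values (ties would need average ranks)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


staff = [2, 3, 5, 6, 8]                  # hypothetical staff counts
noise = [38.0, 40.5, 43.0, 44.2, 47.1]   # hypothetical dB(A) readings
print(spearman_rho(staff, noise))  # → 1.0 (perfectly monotone example)
```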

Relevance: 80.00%

Abstract:

This small-scale study was undertaken to assess what knowledge nursing staff from a General Intensive Care Unit held with regard to noise exposure. To assess knowledge, a self-administered multiple-choice questionnaire was used; rigorous peer review ensured content validity. The study revealed poor knowledge among nurses of noise-related issues, in particular the psychophysiological effects of noise and the current legislation concerning safe exposure levels. Non-parametric testing, using the Kruskal–Wallis test, found no significant difference between nursing grades; however, descriptive analysis demonstrated that the staff nurse grades (D and E) performed better overall. Whilst the results of this study may seem self-evident in some respects, it is the problems of exposure to excessive noise levels, for both patients and hospital personnel, that are clearly not understood. The effects noise exposure has on individuals, for example decreased wound healing, sleep deprivation and cardiovascular stimulation, must be of concern, especially in terms of patient care, but more so for nursing staff, particularly regarding the effects noise levels can have on cognitive task performance.
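
The Kruskal–Wallis test used above generalises the rank-sum test to more than two groups. A pure-Python sketch of the H statistic, assuming no tied scores (real analyses would use a statistics package for the p-value):

```python
def kruskal_wallis_h(*groups):
    """H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), where R_i is the
    rank sum of group i and N the pooled sample size. No ties assumed."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # unique values only
    n_total = len(pooled)
    weighted = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n_total * (n_total + 1)) * weighted - 3 * (n_total + 1)
```

Large H means at least one group's ranks sit systematically higher or lower than the others; H near zero, as between the nursing grades above, means the groups' rank sums are close to what chance alone would give.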