62 results for non-parametric technique


Relevance: 90.00%

Abstract:

Proteoglycans (PGs) are crucial extracellular matrix (ECM) components that are present in all tissues and organs. Pathological remodeling of these macromolecules can lead to severe diseases such as osteoarthritis or rheumatoid arthritis. To date, PG-associated ECM alterations are routinely diagnosed by invasive analytical methods. Here, we employed Raman microspectroscopy, a laser-based, marker-free and non-destructive technique that allows the generation of spectra with peaks originating from molecular vibrations within a sample, to identify specific Raman bands that can be assigned to PGs within human and porcine cartilage samples and chondrocytes. Based on the non-invasively acquired Raman spectra, we further revealed that prolonged in vitro culture leads to phenotypic alterations of chondrocytes, resulting in a decreased PG synthesis rate and a loss of lipid content. Our results are the first to demonstrate the applicability of Raman microspectroscopy as an analytical and potential diagnostic tool for non-invasive cell and tissue state monitoring of cartilage in biomedical research.

Relevance: 90.00%

Abstract:

Purpose: To examine the relationship between hip abductor muscle (HABD) strength and the magnitude of pelvic drop (MPD) for patients with non-specific low back pain (NSLBP) and controls (CON) prior to and following a 3-week HABD strengthening protocol. At baseline, we hypothesized that NSLBP patients would exhibit reduced HABD strength and greater MPD compared to CON. Following the protocol, we hypothesized that strength would increase and MPD would decrease. Relevance: The Trendelenburg test (TT) is a common clinical test used to examine the ability of the HABD to maintain a horizontal pelvic position during single-limb stance. However, no study has specifically tested this theory. Moreover, no study has investigated the relationship between HABD strength and pelvic motion during walking or tested whether increased HABD strength would reduce the MPD. Methods: Quasi-experimental with a 3-week exercise intervention. Fifteen NSLBP patients (32.5 yrs, range 21-51 yrs; VAS baseline: 5.3 cm) and 10 CON (29.5 yrs, range 22-47 yrs) were recruited. Isometric HABD strength was measured using a force dynamometer, and the average of three maximal voluntary contractions was normalized to body mass (N/kg). Two-dimensional MPD (degrees) was measured using a 60 Hz camera and was derived from two retroreflective markers placed on the posterior superior iliac spines. MPD was measured while performing the static TT and while walking, averaged over 10 consecutive footfalls. NSLBP patients completed a 3-week HABD strengthening protocol consisting of two open-kinetic-chain exercises, after which all measures were repeated. Non-parametric analysis was used for group comparisons and correlation analysis. Results: At baseline, the NSLBP patients demonstrated 31% lower HABD strength (mean = 6.6 N/kg) compared to CON (mean = 9.5 N/kg; p = 0.03) and no significant differences in maximal pelvic frontal plane excursion while walking (NSLBP: mean = 8.1°, CON: mean = 7.1°; p = 0.72). No significant correlations were measured between left HABD strength and right MPD (r = -0.37, p = 0.11), or between right HABD strength and left MPD (r = -0.04, p = 0.84) while performing the static TT. Following the 3-week strengthening protocol, NSLBP patients demonstrated a 12% improvement in strength (post: mean = 7.4 N/kg; p = 0.02) and a reduction in pain (VAS follow-up: 2.8 cm), but no significant decrease in MPD while walking (p = 0.92). Conclusions: NSLBP patients demonstrated reduced HABD strength at baseline and were able to increase strength and reduce pain in a 3-week period. However, despite increases in HABD strength, the NSLBP group exhibited similar MPD during the static TT and while walking compared to baseline and controls. Implications: The results suggest that the HABD alone may not be primarily responsible for controlling a horizontal pelvic position during static and dynamic conditions. Increasing the strength of the hip abductors resulted in a reduction of pain in NSLBP patients, providing evidence for further research to identify the specific musculature responsible for controlling pelvic motion.
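
The non-parametric analysis referred to above is not named explicitly; a minimal sketch, assuming a Mann-Whitney U test for the group comparison and a Spearman correlation for the strength-MPD relationship (both common choices for small samples), with made-up values standing in for the study data:

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)

# Hypothetical normalized HABD strength (N/kg); the study reported means
# of 6.6 N/kg (NSLBP, n = 15) and 9.5 N/kg (CON, n = 10).
nslbp_strength = rng.normal(6.6, 1.5, size=15)
con_strength = rng.normal(9.5, 1.5, size=10)

# Non-parametric group comparison.
u_stat, p_group = mannwhitneyu(nslbp_strength, con_strength)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_group:.3f}")

# Non-parametric correlation between strength and pelvic drop (degrees).
pelvic_drop = rng.normal(8.0, 2.0, size=15)  # hypothetical MPD values
rho, p_corr = spearmanr(nslbp_strength, pelvic_drop)
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")
```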

Relevance: 90.00%

Abstract:

Hot spot identification (HSID) aims to identify potential sites—roadway segments, intersections, crosswalks, interchanges, ramps, etc.—with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of the available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, the difficulty of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than the population mean like most methods in practice, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitation of the traditional NB model in dealing with the preponderance-of-zeros problem and right-skewed datasets.
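
As a sketch of the core idea, the following uses a linear quantile regression (statsmodels' QuantReg) on simulated equivalent-PDO data; the variable names, covariates and the 90th-percentile threshold are illustrative assumptions, not the paper's specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

# Hypothetical road-segment data: traffic volume (AADT), segment length,
# and equivalent-PDO crashes (continuous after severity weighting).
df = pd.DataFrame({
    "aadt": rng.uniform(500, 20000, n),
    "length_km": rng.uniform(0.2, 5.0, n),
})
df["epdo"] = (0.0004 * df["aadt"] + 1.5 * df["length_km"]
              + rng.exponential(3.0, n))  # right-skewed noise

# Model a high conditional quantile of equivalent-PDO crashes rather than
# the conditional mean.
res = smf.quantreg("epdo ~ aadt + length_km", df).fit(q=0.90)
print(res.params)

# Sites whose observed EPDO exceeds the fitted 90th-percentile value are
# candidate hot spots.
df["hot_spot"] = df["epdo"] > res.predict(df)
print(df["hot_spot"].sum(), "candidate hot spots")
```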

Relevance: 90.00%

Abstract:

A monolithic stationary phase was prepared via free radical co-polymerization of ethylene glycol dimethacrylate (EDMA) and glycidyl methacrylate (GMA), with pore diameter tailored specifically for plasmid binding, retention and elution. The polymer was functionalized with 2-chloro-N,N-diethylethylamine hydrochloride (DEAE-Cl) for anion-exchange purification of plasmid DNA (pDNA) from clarified lysate obtained from E. coli DH5α-pUC19 culture in a ribonuclease-/protease-free environment. Characterization of the monolithic resin showed a porous material, with 68% of the pores in the matrix having diameters above 300 nm. The final product isolated from a single-stage 5 min anion-exchange purification was a pure and homogeneous supercoiled (SC) pDNA with no gDNA, RNA or protein contamination, as confirmed by ethidium bromide agarose gel electrophoresis (EtBr-AGE), enzyme restriction analysis and sodium dodecyl sulfate-polyacrylamide gel electrophoresis. This non-toxic technique is cGMP compatible and highly scalable for the production of pDNA at a commercial level.

Relevance: 80.00%

Abstract:

Principal Topic: A small firm is unlikely to possess internally the full range of knowledge and skills that it requires or could benefit from for the development of its business. The ability to acquire suitable external expertise - defined as knowledge or competence that is rare in the firm and acquired from the outside - when needed thus becomes a competitive factor in itself. Access to external expertise enables the firm to focus on its core competencies and removes the necessity to internalize every skill and competence. However, research on how small firms access external expertise is still scarce. The present study contributes to this under-developed discussion by analysing the role of trust and strong ties in the small firm's selection and evaluation of sources of external expertise (henceforth referred to as the 'business advisor' or 'advisor'). Granovetter (1973, 1361) defines the strength of a network tie as 'a (probably linear) combination of the amount of time, the emotional intensity, the intimacy (mutual confiding) and the reciprocal services which characterize the tie'. Strong ties in the context of the present investigation refer to sources of external expertise who are well known to the owner-manager, and who may be either informal (e.g., family, friends) or professional advisors (e.g., consultants, enterprise support officers, accountants or solicitors). Previous research has suggested that strong and weak ties have different strengths, and the choice of business advisors could thus be critical to business performance. While previous research results suggest that small businesses favour previously well-known business advisors, prior studies have also pointed out that an excessive reliance on a network of well-known actors might hamper business development, as the range of expertise available through strong ties is limited. But are owner-managers of small businesses aware of this limitation, and does it matter to them? Or does working with a well-known advisor compensate for it? Hence, our research model first examines the impact of the strength of tie on the business advisor's perceived performance. Next, we ask what encourages a small business owner-manager to seek advice from a strong tie. A recent exploratory study by Welter and Kautonen (2005) drew attention to the central role of trust in this context. However, while their study found support for the general proposition that trust plays an important role in the choice of advisors, how trust and its different dimensions actually affect this choice remained ambiguous. The present paper develops this discussion by considering the impact of the different dimensions of perceived trustworthiness, defined as benevolence, integrity and ability, on the strength of tie. Further, we suggest that the dimensions of perceived trustworthiness relevant in the choice of a strong tie vary between professional and informal advisors. Methodology/Key Propositions: Our propositions are examined empirically based on survey data comprising 153 Finnish small businesses. The data are analysed utilizing the partial least squares (PLS) approach to structural equation modelling with SmartPLS 2.0. Being non-parametric, the PLS algorithm is particularly well-suited to analysing small datasets with non-normally distributed variables. Results and Implications: The path model shows that the stronger the tie, the more positively the advisor's performance is perceived.
Hypothesis 1, that strong ties are associated with higher perceptions of performance, is clearly supported. Benevolence is clearly the most significant predictor of the choice of a strong tie for external expertise. While ability also reaches a moderate level of statistical significance, integrity does not have a statistically significant impact on the choice of a strong tie. Hence, we found support for two out of the three independent variables included in Hypothesis 2. Path coefficients differed between the professional and informal advisor subsamples. The results of the exploratory group comparison show that Hypothesis 3a, that ability is more pronouncedly associated with strong ties when choosing a professional advisor, was not supported. Hypothesis 3b, arguing that benevolence is more strongly associated with strong ties in the context of choosing an informal advisor, received some support, because the path coefficient in the informal advisor subsample was much larger than in the professional advisor subsample. Hypothesis 3c, postulating that integrity would be more strongly associated with strong ties in the choice of a professional advisor, was supported. Integrity is the most important dimension of trustworthiness in this context. However, integrity is of no concern, or even a negative, when using strong ties to choose an informal advisor. The findings of this study have practical relevance to the enterprise support community. First of all, given that the strength of tie has a significant positive impact on the advisor's perceived performance, small business owners evidently appreciate working with advisors in long-term relationships. Therefore, advisors are well advised to invest in relationship building and maintenance in their work with small firms. Secondly, the results show that, especially in the context of professional advisors, the advisor's perceived integrity and benevolence weigh more than ability. This again emphasizes the need to invest time and effort in building a personal relationship with the owner-manager, rather than merely maintaining a professional image and credentials. Finally, this study demonstrates that the dimensions of perceived trustworthiness are orthogonal, with different effects on the strength of tie and ultimately on perceived performance. This means that entrepreneurs and advisors should consider the specific dimensions of ability, benevolence and integrity, rather than rely on general perceptions of trustworthiness in their advice relationships.
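
The SmartPLS estimation itself cannot be reproduced here; as a simplified stand-in, the sketch below bootstraps a single standardized path coefficient (strength of tie to perceived performance) with ordinary least squares, illustrating why resampling-based inference suits small samples with non-normal variables. The data and effect size are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 153  # sample size reported in the study

# Hypothetical standardized scores for one path of the model.
tie_strength = rng.normal(size=n)
performance = 0.4 * tie_strength + rng.standard_t(df=4, size=n)  # heavy-tailed noise

def path_coef(x, y):
    # OLS slope: a single structural path, greatly simplified.
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Non-parametric bootstrap of the path coefficient.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(path_coef(tie_strength[idx], performance[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"path = {path_coef(tie_strength, performance):.2f}, "
      f"95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```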

Relevance: 80.00%

Abstract:

Modern Engineering Asset Management (EAM) requires the accurate assessment of current, and the prediction of future, asset health condition. Suitable mathematical models that are capable of predicting Time-to-Failure (TTF) and the probability of failure in future time are essential. In traditional reliability models, the lifetime of assets is estimated using failure time data. However, in most real-life situations and industry applications, the lifetime of assets is influenced by different risk factors, which are called covariates. The fundamental notion in reliability theory is the failure time of a system and its covariates. These covariates change stochastically and may influence and/or indicate the failure time. Many statistical models have been developed to estimate the hazard of assets or individuals with covariates, and an extensive literature on hazard models with covariates (also termed covariate models), including theory and practical applications, has emerged. This paper is a state-of-the-art review of the existing literature on these covariate models in both the reliability and biomedical fields. One major purpose of this expository paper is to synthesise these models from both the industrial reliability and biomedical fields and then contextually group them into non-parametric and semi-parametric models. Comments on their merits and limitations are also presented. Another main purpose of this paper is to comprehensively review and summarise the current research on the development of covariate models so as to facilitate the application of more covariate modelling techniques in prognostics and asset health management.
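
A concrete example of the semi-parametric family reviewed here is the Cox proportional hazards model. A minimal sketch using the lifelines library on simulated asset data (the covariates and effect sizes are invented for illustration):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 200

# Hypothetical asset histories: two covariates and right-censored lifetimes.
df = pd.DataFrame({
    "temperature": rng.normal(60, 10, n),   # operating temperature
    "load": rng.normal(0.7, 0.1, n),        # relative load
})
df["duration"] = (rng.exponential(1000, n)
                  * np.exp(-0.02 * (df["temperature"] - 60)))
df["event"] = rng.random(n) < 0.8           # ~20% censored

# Semi-parametric covariate model: h(t|z) = h0(t) * exp(beta'z),
# with the baseline hazard h0 left unspecified.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()
```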

Relevance: 80.00%

Abstract:

Modern Engineering Asset Management (EAM) requires the accurate assessment of current, and the prediction of future, asset health condition. Appropriate mathematical models that are capable of estimating times to failure and the probability of failure in the future are essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors that are termed covariates. Hazard prediction with covariates is an elemental notion in reliability theory: it estimates the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has survived up to the current time. A number of statistical covariate-based hazard models have been developed. However, none of them has explicitly incorporated both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model to address this concern. The model is named the Explicit Hazard Model (EHM). Both the semi-parametric and non-parametric forms of this model are presented in the paper. The major purpose of this paper is to illustrate the theoretical development of EHM. Due to page limitations, a case study with reliability field data is presented in the applications part of this study.
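
The EHM equations themselves are not reproduced in this abstract; for orientation, the generic form shared by the covariate-based hazard models it extends can be written, with a time-dependent covariate vector Z(t) and a link function psi (e.g. the exponential in the Cox model), as:

```latex
h\bigl(t \mid Z(t)\bigr) = h_0(t)\,\psi\bigl(\gamma' Z(t)\bigr),
\qquad \psi(x) = e^{x} \text{ in the Cox case.}
```

Such a model is semi-parametric when the baseline hazard h_0(t) is given a parametric form (e.g. Weibull) and non-parametric when h_0(t) is left unspecified.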

Relevance: 80.00%

Abstract:

The refractive error of a human eye varies across the pupil and therefore may be treated as a random variable. The probability distribution of this random variable provides a means for assessing the main refractive properties of the eye without the necessity of a traditional functional representation of wavefront aberrations. To demonstrate this approach, the statistical properties of refractive error maps are investigated. Closed-form expressions are derived for the probability density function (PDF) and its statistical moments for the general case of rotationally symmetric aberrations. A closed-form expression for the PDF of a general non-rotationally symmetric wavefront aberration is difficult to derive; however, for specific cases, such as astigmatism, a closed-form expression of the PDF can be obtained. Further, an interpretation of the distribution of the refractive error map, as well as its moments, is provided for a range of wavefront aberrations measured in real eyes. These are evaluated using a kernel density estimator and sample moment estimators. It is concluded that the refractive error domain allows non-functional analysis of wavefront aberrations based on simple statistics in the form of sample moments. Clinicians may find this approach to wavefront analysis easier to interpret due to the clinical familiarity and intuitive appeal of refractive error maps.
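
A minimal sketch of the empirical side of this approach: build a refractive error map for an assumed defocus-plus-astigmatism wavefront, then summarize it with a kernel density estimate and sample moments (the aberration values are invented):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Refractive error map over a unit-radius pupil for assumed defocus and
# astigmatism coefficients (dioptres); real maps come from aberrometry.
x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
inside = x**2 + y**2 <= 1.0
defocus, astig = -0.50, 0.75
refractive_error = (defocus * (x**2 + y**2) + astig * (x**2 - y**2))[inside]

# Kernel density estimate of the refractive-error PDF.
kde = gaussian_kde(refractive_error)
grid = np.linspace(refractive_error.min(), refractive_error.max(), 400)
pdf = kde(grid)

# Sample moments: simple statistics summarizing the map without any
# functional wavefront representation.
mean = refractive_error.mean()
var = refractive_error.var(ddof=1)
skew = ((refractive_error - mean) ** 3).mean() / var ** 1.5
print(f"mean = {mean:.3f} D, variance = {var:.3f} D^2, skewness = {skew:.3f}")
```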

Relevance: 80.00%

Abstract:

Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem: locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to improve the reliability of matching in the presence of radiometric distortion - significant since radiometric distortion is a problem which commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint. The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches and improve match accuracy.
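
Standard definitions of the two transforms compared in the thesis are easy to state in code. A minimal NumPy sketch (window edges are zeroed rather than handled carefully, and np.roll wraps at borders, so this is illustrative only):

```python
import numpy as np

def rank_transform(img, win=5):
    # Rank transform: each pixel becomes the number of pixels in its
    # window whose intensity is less than the centre pixel.
    r = win // 2
    out = np.zeros(img.shape, dtype=np.int32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out += (shifted < img)
    out[:r], out[-r:], out[:, :r], out[:, -r:] = 0, 0, 0, 0  # invalid borders
    return out

def census_transform(img, win=5):
    # Census transform: a bit string recording, for each neighbour, whether
    # it is darker than the centre pixel; matching uses Hamming distance.
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint64)  # 24 bits for a 5x5 window
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

img = np.random.default_rng(4).integers(0, 256, (32, 32)).astype(np.int32)
print(rank_transform(img)[8, 8], bin(int(census_transform(img)[8, 8])))
```

Both outputs depend only on the ordering of intensities within the window, which is why they tolerate radiometric (gain and offset) distortion between the two images.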

Relevance: 80.00%

Abstract:

This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (such as frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time-delay may be of interest in other signal processing systems; hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than time-lag (two-dimensional) kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
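
The adaptive T-class algorithm itself is beyond an abstract, but the basic IF-from-TFD idea it builds on (take the peak of a quadratic time-frequency distribution at each time instant) can be sketched with an ordinary spectrogram standing in for the optimized TFD:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)

# Synthetic linear-FM signal: IF sweeps from 50 Hz to 250 Hz.
true_if = 50 + 200 * t
sig = np.cos(2 * np.pi * np.cumsum(true_if) / fs)

# Spectrogram as a simple quadratic TFD; sharper kernels (such as the
# T-class) would concentrate the energy more tightly around the IF.
f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=128, noverlap=120)

# IF estimate: frequency of the energy peak in each time slice.
if_est = f[np.argmax(Sxx, axis=0)]
err = np.abs(if_est - np.interp(tt, t, true_if))
print(f"mean |IF error| = {err.mean():.1f} Hz")
```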

Relevance: 80.00%

Abstract:

Equilibrium Partitioning of an Ionic Contrast agent with micro-computed tomography (EPIC-μCT) is a non-invasive technique to quantify and visualize the three-dimensional distribution of glycosaminoglycans (GAGs) in fresh cartilage tissue. However, it is unclear whether this technique is applicable to already-fixed tissues. Therefore, this study aimed at investigating whether formalin fixation of bovine cartilage affects X-ray attenuation, and thus the interpretation of EPIC-μCT data. Design: Osteochondral samples (n = 24) were incubated with ioxaglate, an ionic contrast agent, for 22 h prior to μCT scanning. The samples were scanned in both formalin-fixed and fresh conditions. GAG content was measured using a biochemical assay and normalized to wet weight, dry weight, and water content to determine potential reasons for differences in X-ray attenuation. Results: The expected zonal distribution of contrast agent/GAGs was observed for both fixed and fresh cartilage specimens. However, despite no significant differences in GAG concentrations or physical properties between fixed and fresh samples, the average attenuation levels of formalin-fixed cartilage were 14.3% lower than in fresh samples. Conclusions: EPIC-μCT is useful for three-dimensional visualization of GAGs in formalin-fixed cartilage. However, a significant reduction in X-ray attenuation for fixed (compared to fresh) cartilage must be taken into account and adjusted for accordingly when quantifying GAG concentrations using EPIC-μCT.
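
If one assumes the reported 14.3% average reduction applies uniformly, the adjustment reduces to a rescaling; this factor is an illustration of the correction the authors call for, not a published calibration:

```python
# Hypothetical fixed-to-fresh attenuation correction, assuming the reported
# 14.3% average reduction applies uniformly across the sample.
REDUCTION = 0.143

def fresh_equivalent(attenuation_fixed):
    # Rescale attenuation measured in formalin-fixed cartilage to a
    # fresh-tissue equivalent before converting to GAG concentration.
    return attenuation_fixed / (1.0 - REDUCTION)

print(fresh_equivalent(0.60))  # e.g. 0.60 -> ~0.70
```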

Relevance: 80.00%

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent, due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude which is less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results of both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental, r² = 0.94 theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow, presented here, has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique, using the time decay constant for screening of aortic stenosis, provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
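
Extracting the time decay constant reduces to fitting an exponential to the deceleration-phase impedance. A minimal sketch with SciPy's curve_fit on simulated data (the baseline, amplitude and τ values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Hypothetical impedance samples (ohms) during one deceleration phase,
# decaying towards a baseline with time constant tau.
t = np.linspace(0, 300, 120)
tau_true, z0_true, dz_true = 90.0, 30.0, 1.5
z = z0_true + dz_true * np.exp(-t / tau_true) + rng.normal(0, 0.05, t.size)

def decay(t, z0, dz, tau):
    return z0 + dz * np.exp(-t / tau)

popt, _ = curve_fit(decay, t, z, p0=(29.0, 1.0, 50.0))
print(f"estimated tau = {popt[2]:.0f} s")  # the proposed screening metric
```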

Relevance: 80.00%

Abstract:

Prognostics and asset life prediction is one of the research potentials in engineering asset health management. We previously developed the Explicit Hazard Model (EHM) to effectively and explicitly predict asset life using three types of information: population characteristics, condition indicators, and operating environment indicators. We have formerly studied the application of both the semi-parametric EHM and the non-parametric EHM to survival probability estimation in the reliability field. The survival time in these models depends not only upon the age of the monitored asset, but also upon the condition and operating environment information obtained. This paper is a further study of the semi-parametric and non-parametric EHMs for the hazard and residual life prediction of a set of resistance elements. The resistance elements were used as corrosion sensors for measuring the atmospheric corrosion rate in a laboratory experiment. In this paper, the hazard of the resistance element estimated using the semi-parametric EHM and the non-parametric EHM is compared to the traditional Weibull model and the Aalen Linear Regression Model (ALRM), respectively. Because the semi-parametric EHM assumes a Weibull distribution for the baseline hazard, the hazard estimated using this model is compared to the traditional Weibull model, while the hazard estimated using the non-parametric EHM is compared to the ALRM, a well-known non-parametric covariate-based hazard model. Finally, the residual life of the resistance element predicted using both EHMs is compared to the actual life data.
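
The traditional Weibull benchmark in this comparison is straightforward to fit; a minimal sketch using the lifelines library on simulated element lifetimes (the shape, scale and censoring fraction are invented):

```python
import numpy as np
from lifelines import WeibullFitter

rng = np.random.default_rng(6)

# Hypothetical resistance-element lifetimes (hours), ~10% right-censored.
T = rng.weibull(1.8, 100) * 500.0
E = rng.random(100) < 0.9

# Traditional (covariate-free) Weibull model: the baseline against which
# the semi-parametric EHM's estimated hazard is compared.
wf = WeibullFitter()
wf.fit(T, event_observed=E)
print(f"scale = {wf.lambda_:.1f}, shape = {wf.rho_:.2f}")
print(wf.hazard_at_times(np.linspace(50, 500, 4)))
```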

Relevance: 80.00%

Abstract:

In this sheep study, we investigated the influence of fixation stability on the temporal and spatial distribution of tissues in the fracture callus. As the initial mechanical conditions have been cited as being especially important for the healing outcome, it was hypothesized that differences in the path of healing would be seen as early as the initial phase of healing.

Sixty-four sheep underwent a mid-shaft tibial osteotomy that was treated with either a rigid or a semi-rigid external fixator. Animals were sacrificed at 2, 3, 6 and 9 weeks postoperatively and the fracture calluses were analyzed using radiological, biomechanical and histological techniques. Statistical comparison between the groups was performed using the Mann–Whitney U test for unpaired non-parametric data.

In the callus of the tibiae treated with semi-rigid fixation, remnants of the fracture haematoma remained present for longer, although new periosteal bone formation during early healing was similar in both groups. The mechanical competence of the healing callus at 6 weeks was inferior compared to tibiae treated with rigid fixation. Semi-rigid fixation resulted in a larger cartilage component of the callus, which persisted longer. Remodeling processes were initiated earlier in the rigid group, while new bone formation continued throughout the entire investigated period in the semi-rigid group.

In this study, evidence is provided that less rigid fixation increased the time required for healing. The process of intramembranous ossification during the initial stages of healing appeared to be independent of mechanical stability. However, the delay in healing was related to a prolonged chondral phase.

Relevance: 80.00%

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and to use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth.
Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances increasing with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
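
The CAR layered model's spatial prior within one depth layer can be illustrated directly. A minimal sketch, assuming a proper CAR on a 10 × 10 grid with rook neighbours and invented tau and alpha (the thesis fits the full model in WinBUGS/pyMCMC rather than by direct sampling):

```python
import numpy as np

side = 10
n = side * side

# Rook-neighbour adjacency matrix W for a square grid (one depth layer).
W = np.zeros((n, n))
for i in range(side):
    for j in range(side):
        k = i * side + j
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < side and 0 <= jj < side:
                W[k, ii * side + jj] = 1.0

tau, alpha = 2.0, 0.9        # precision and spatial-dependence parameters
D = np.diag(W.sum(axis=1))   # neighbour counts
Q = tau * (D - alpha * W)    # proper-CAR precision, PD for |alpha| < 1

# One draw of the layer's spatial effects phi ~ N(0, Q^{-1}) via Cholesky:
# if Q = L L', then solving L' phi = z with z ~ N(0, I) gives the draw.
rng = np.random.default_rng(7)
L = np.linalg.cholesky(Q)
phi = np.linalg.solve(L.T, rng.standard_normal(n))
print(phi.reshape(side, side)[:3, :3])
```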