97 results for Layer dependent order parameters
Abstract:
Cardiovascular diseases refer to the class of diseases that involve the heart or blood vessels (arteries and veins). Examples of medical devices for treating cardiovascular diseases include ventricular assist devices (VADs), artificial heart valves and stents. Metallic biomaterials such as titanium and its alloys are commonly used for ventricular assist devices. However, titanium and its alloys show unacceptable thrombosis, which represents a major obstacle to be overcome. Polyurethane (PU) polymer has better blood compatibility and has been used widely in cardiovascular devices. Thus one aim of the project was to coat a PU polymer onto a titanium substrate by increasing the surface roughness and surface functionality. Since the endothelium of a blood vessel has the most ideal non-thrombogenic properties, it was the target of this research project to grow an endothelial cell layer as a biological coating based on the tissue engineering strategy. However, seeding endothelial cells on smooth PU coating surfaces is problematic due to the quick loss of seeded cells which do not adhere to the PU surface. Thus it was another aim of the project to create a porous PU top layer on the dense PU pre-layer-coated titanium substrate. The method of preparing the porous PU layer was based on solvent casting/particulate leaching (SCPL) modified with centrifugation. Without the step of centrifugation, the distribution of the salt particles was not uniform within the polymer solution, and the degree of interconnection between the salt particles was not well controlled. Using the centrifugal treatment, the pore distribution became uniform and the pore interconnectivity was improved even at a high polymer solution concentration (20%) when the maximum amount of salt was added to the polymer solution. The titanium surfaces were modified by alkali and heat treatment, followed by functionalisation using hydrogen peroxide. A silane coupling agent was coated before the application of the dense PU pre-layer and the porous PU top layer. The ability of the porous top layer to grow and retain the endothelial cells was also assessed through cell culture techniques. The bonding strengths of the PU coatings to the modified titanium substrates were measured and related to the surface morphologies. The outcome of the project is that it has laid a foundation for achieving the strategy of endothelialisation for the blood compatibility of medical devices. This thesis is divided into seven chapters. Chapter 2 describes the current state of the art in the field of surface modification in cardiovascular devices such as ventricular assist devices (VADs). It also analyses the pros and cons of the existing coatings, particularly in the context of this research. The surface coatings for VADs have evolved from early organic/inorganic (passive) coatings, to bioactive coatings (e.g. biomolecules), and to cell-based coatings. Based on the commercial applications and the potential of the coatings, the review is focused on the following six types of coatings: (1) titanium nitride (TiN) coatings, (2) diamond-like carbon (DLC) coatings, (3) 2-methacryloyloxyethyl phosphorylcholine (MPC) polymer coatings, (4) heparin coatings, (5) textured surfaces, and (6) endothelial cell lining. Chapter 3 reviews polymer scaffolds and one relevant fabrication method.
In tissue engineering, the function of a polymeric material is to provide a 3-dimensional architecture (scaffold) which is typically used to accommodate transplanted cells and to guide their growth and the regeneration of tissue. The success of these systems is dependent on the design of the tissue engineering scaffolds. Chapter 4 describes chemical surface treatments for titanium and titanium alloys to increase the bond strength to polymer by altering the substrate surface, for example, by increasing surface roughness or changing surface chemistry. The nature of the surface treatment prior to bonding is found to be a major factor controlling the bonding strength. By increasing surface roughness, an increase in surface area occurs, which allows the adhesive to flow in and around the irregularities on the surface to form a mechanical bond. Changing surface chemistry also results in the formation of a chemical bond. Chapter 5 shows that bond strengths between titanium and polyurethane could be significantly improved by surface treating the titanium prior to bonding. Alkaline heat treatment and H2O2 treatment were applied to change the surface roughness and the surface chemistry of titanium. Surface treatment increases the bond strength by altering the substrate surface in a number of ways, including increasing the surface roughness and changing the surface chemistry. Chapter 6 deals with the characterization of the polyurethane scaffolds, which were fabricated using an enhanced solvent casting/particulate (salt) leaching (SCPL) method developed for preparing three-dimensional porous scaffolds for cardiac tissue engineering. The enhanced method combines a conventional SCPL method with a step of centrifugation, the centrifugation being employed to improve the pore uniformity and interconnectivity of the scaffolds. It is shown that the enhanced SCPL method and a collagen coating resulted in a spatially uniform distribution of cells throughout the collagen-coated PU scaffolds. In Chapter 7, the enhanced SCPL method is used to form porous features on the polyurethane-coated titanium substrate. The cavities anchored the endothelial cells and helped them remain on the blood-contacting surfaces. It is shown that the surface porosities created by the enhanced SCPL method may be useful in forming a stable endothelial layer upon the blood-contacting surface. Chapter 8 finally summarises the entire work performed on the fabrication and analysis of the polymer-Ti bonding, the enhanced SCPL method and the PU microporous surface on the metallic substrate. It then outlines the possibilities for future work and research in this area.
Abstract:
The radiation chemistry and the grafting of a fluoropolymer, poly(tetrafluoroethylene-co-perfluoropropyl vinyl ether) (PFA), were investigated with the aim of developing a highly stable grafted support for use in solid phase organic chemistry (SPOC). A radiation-induced grafting method was used whereby the PFA was exposed to ionizing radiation to form free radicals capable of initiating graft copolymerization of styrene. To fully investigate this process, both the radiation chemistry of PFA and the grafting of styrene to PFA were examined. Radiation alone was found to have a detrimental effect on PFA when irradiated at 303 K. This was evident from the loss in mechanical properties due to chain scission reactions. This meant that when radiation was used for the grafting reactions, the total radiation dose needed to be kept as low as possible. The radicals produced when PFA was exposed to radiation were examined using electron spin resonance spectroscopy. Both main-chain (–CF2–•CF–CF2–) and end-chain (–CF2–•CF2) radicals were identified. The stability of the majority of the main-chain radicals when the polymer was heated above the glass transition temperature suggested that they were present mainly in the crystalline regions of the polymer, while the end-chain radicals were predominantly located in the amorphous regions. The radical yield at 77 K was lower than the radical yield at 303 K, suggesting that cage recombination at low temperatures inhibited free radicals from stabilizing. High-speed MAS 19F NMR was used to identify the non-volatile products after irradiation of PFA over a wide temperature range. The major products observed over the irradiation temperature range 303 to 633 K included new saturated chain ends, short fluoromethyl side chains in both the amorphous and crystalline regions, and long branch points. The proportion of the radiolytic products shifted from mainly chain scission products at low irradiation temperatures to extensive branching at higher irradiation temperatures. Calculations of G values revealed that net crosslinking only occurred when PFA was irradiated in the melt. Minor products after irradiation at elevated temperatures included internal and terminal double bonds and CF3 groups adjacent to double bonds. The volatile products after irradiation at 303 K included tetrafluoromethane (CF4) and oxygen-containing species from loss of the perfluoropropyl ether side chains of PFA, as identified by mass spectrometry and FTIR spectroscopy. The chemical changes induced by radiation exposure were accompanied by changes in the thermal properties of the polymer. Changes in the crystallinity and thermal stability of PFA after irradiation were examined using DSC and TGA techniques. The equilibrium melting temperature of untreated PFA was 599 K, as determined using a method of extrapolation of the melting temperatures of imperfectly formed crystals. After low temperature irradiation, radiation-induced crystallization was prevalent due to scission of strained tie molecules, loss of perfluoropropyl ether side chains, and lowering of the molecular weight, which promoted chain alignment and hence higher crystallinity. After irradiation at high temperatures, the presence of short and long branches hindered crystallization, lowering the overall crystallinity. The thermal stability of the PFA decreased with increasing radiation dose and temperature due to the introduction of defect groups.
Styrene was graft copolymerized to PFA using γ-radiation as the initiation source, with the aim of preparing a graft copolymer suitable as a support for SPOC. Various grafting conditions were studied, such as the total dose, dose rate, solvent effects and the addition of nitroxides to create “living” graft chains. The effect of dose rate was examined when grafting styrene vapour to PFA using the simultaneous grafting method. The initial rate of grafting was found to be independent of the dose rate, which implied that the reaction was diffusion controlled. When the styrene was dissolved in various solvents for the grafting reaction, the graft yield was strongly dependent on the type and concentration of the solvent used. The greatest graft yield was observed when the solvent swelled the grafted layers and the substrate. Microprobe Raman spectroscopy was used to map the penetration of the graft into the substrate. The grafted layer was found to contain both poly(styrene) (PS) and PFA and became thicker with increasing radiation dose and graft yield, which showed that grafting began at the surface and progressively penetrated the substrate as the grafted layer was swollen. The molecular weight of the grafted PS was estimated by measuring the molecular weight of the non-covalently bonded homopolymer formed in the grafted layers using SEC. The molecular weight of the occluded homopolymer was an order of magnitude greater than that of the free homopolymer formed in the surrounding solution, suggesting that the high viscosity in the grafted regions led to long PS grafts. When a nitroxide-mediated free radical polymerization was used, grafting occurred within the substrate and not on the surface, due to diffusion of styrene into the substrate at the high temperatures needed for the reaction to proceed. Loading tests were used to measure the capacity of the PS graft to be functionalized with aminomethyl groups and then further derivatized. These loading tests showed that samples grafted in a solution of styrene and methanol had superior loading capacity over samples grafted using other solvents, due to the shallow penetration and hence better accessibility of the graft when methanol was used as a solvent.
Abstract:
The fracture behavior of Cu-Ni laminate composites has been investigated by tensile testing. It was found that as the individual layer thickness decreases from 100 to 20 nm, the resultant fracture angle of the Cu-Ni laminate changes from 72 degrees to 50 degrees. Cross-sectional observations reveal that the fracture of the Ni layers transforms from opening to shear mode as the layer thickness decreases, while that of the Cu layers remains in shear mode. Competing mechanisms were proposed to understand the length-scale-dependent variation in fracture mode of the metallic laminate composites.
Abstract:
Axial shortening in the vertical load-bearing elements of reinforced concrete high-rise buildings is caused by the time-dependent effects of shrinkage, creep and elastic shortening of concrete under load. This phenomenon has to be predicted at the design stage and then updated during and after construction of the building in order to mitigate the adverse effects of differential axial shortening among the elements. Existing measuring methods for updating previous predictions of axial shortening pose problems. With this in mind, an innovative procedure based on a vibration-derived parameter called the axial shortening index is proposed to update the axial shortening of vertical elements from variations in the vibration characteristics of the building. This paper presents the development of the procedure and illustrates it through a numerical example of an unsymmetrical high-rise building with two outrigger and belt systems. Results indicate that the method can capture the influence of different tributary areas, the shear walls of outrigger and belt systems, as well as the geometric complexity of the building.
Abstract:
Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting Higher Order Spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false match and false non-match likelihoods using the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan’s Fourier-Mellin based hashing method, showing at least 1% EER improvement under noise, scaling and sharpening.
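The pipeline lends itself to a compact sketch. The following is a hedged illustration only, not the paper's implementation: it forms Radon-style projections by rotation and summation, takes the phases of a few bispectrum entries as a simplified stand-in for the Higher Order Spectral features, and uses the key solely to seed the (secret) choice of frequency pairs. All function names and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate  # rotation + column sum approximates a Radon projection

def radon_projection(img, angle):
    """Sum the rotated image along one axis to obtain a 1-D projection."""
    return rotate(img, angle, reshape=False, order=1).sum(axis=0)

def bispectral_feature(proj, n_pairs, rng):
    """Simplified higher-order-spectrum feature: phases of randomly chosen
    bispectrum entries B(f1, f2) = X(f1) X(f2) conj(X(f1 + f2))."""
    X = np.fft.rfft(proj - proj.mean())
    n = len(X)
    f1 = rng.integers(1, n // 2, size=n_pairs)
    f2 = rng.integers(1, n // 2, size=n_pairs)
    B = X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.angle(B)

def image_hash(img, key, n_angles=16, n_pairs=8):
    """Key-dependent binary hash: the key seeds the secret frequency-pair selection."""
    rng = np.random.default_rng(key)
    bits = []
    for angle in np.linspace(0.0, 180.0, n_angles, endpoint=False):
        proj = radon_projection(img.astype(float), angle)
        phases = bispectral_feature(proj, n_pairs, rng)
        bits.append(phases > 0.0)  # one bit per selected bispectrum phase
    return np.concatenate(bits).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    h1 = image_hash(img, key=1234)
    h2 = image_hash(img + 0.01 * rng.random((64, 64)), key=1234)  # mildly distorted copy
    print("hash length:", h1.size, "bit disagreement:", np.mean(h1 != h2))
```

A verification experiment in the style described above would compare the bit-disagreement rate between hashes of the same image under distortion (false non-match side) against that of unrelated images (false match side).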
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
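As a hedged sketch of the basic vector quantization step discussed above (not the thesis' PCVQ codebooks, training data or fast-search methods), the code below trains a small codebook with a k-means/LBG-style procedure and maps spectral parameter vectors to codebook indices; the vector dimension and codebook size are illustrative.

```python
import numpy as np

def train_codebook(vectors, n_codewords, n_iter=20, seed=0):
    """LBG/k-means-style codebook training on a set of spectral parameter vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)].copy()
    for _ in range(n_iter):
        # Assign every training vector to its nearest codeword (squared error).
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        # Move each codeword to the centroid of its assigned vectors.
        for k in range(n_codewords):
            members = vectors[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Return the codebook index (the transmitted symbol) for each input vector."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = rng.normal(size=(2000, 10))         # stand-in for 10-dimensional spectral vectors
    cb = train_codebook(train, n_codewords=64)  # 64 codewords -> 6 bits per vector
    idx = quantize(rng.normal(size=(5, 10)), cb)
    print("transmitted indices:", idx)
```

A product-code variant would split each vector into sub-vectors and quantize each with its own, smaller codebook, trading a little distortion for a much smaller search and storage cost; the index streams produced this way are what the lossless stage described above would then compress further.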
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (like frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL to FSK demodulation is also considered. This idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
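To make the Part-II idea concrete, the sketch below estimates the IF of a synthetic linear-FM signal as the frequency of the dominant energy peak at each time slice. It is a hedged, simplified illustration that uses a plain spectrogram as a stand-in for the quadratic T-class distributions developed in the thesis; the signal and window parameters are arbitrary.

```python
import numpy as np
from scipy.signal import spectrogram  # simple fixed-kernel TFD stand-in

fs = 1000.0                      # sampling frequency, Hz
t = np.arange(0, 2.0, 1.0 / fs)
f_inst = 100.0 + 50.0 * t        # true linear-FM instantaneous frequency
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs
x = np.cos(phase) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Time-frequency representation of the signal (spectrogram magnitude).
f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=120)

# IF estimate: frequency of the peak energy at each time instant.
if_est = f[S.argmax(axis=0)]

# Compare against the true IF at the spectrogram time instants.
err = if_est - np.interp(tt, t, f_inst)
print("mean absolute IF error (Hz):", np.abs(err).mean())
```

For a multicomponent signal, the single `argmax` per slice would be replaced by picking several local peaks, which is where the resolution and artifact-reduction properties of the chosen TFD matter.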
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent a zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time.
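One ingredient of the Bayesian hierarchy above can be sketched directly: drawing approximate samples from a three-parameter autologistic model (a binary MRF) with a single-site Gibbs sampler. The code below is a hedged illustration with arbitrary parameter values and lattice size, not the samplers or case-study data used in the dissertation.

```python
import numpy as np

def gibbs_autologistic(nrow, ncol, alpha, beta_h, beta_v, n_sweeps=300, seed=0):
    """Single-site Gibbs sampler for a three-parameter autologistic binary MRF:
    logit P(z_ij = 1 | rest) = alpha + beta_h * (horizontal neighbours)
                                     + beta_v * (vertical neighbours)."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 2, size=(nrow, ncol))
    for _ in range(n_sweeps):
        for i in range(nrow):
            for j in range(ncol):
                h = (z[i, j - 1] if j > 0 else 0) + (z[i, j + 1] if j < ncol - 1 else 0)
                v = (z[i - 1, j] if i > 0 else 0) + (z[i + 1, j] if i < nrow - 1 else 0)
                eta = alpha + beta_h * h + beta_v * v
                p = 1.0 / (1.0 + np.exp(-eta))   # full conditional probability of a 1
                z[i, j] = rng.random() < p
    return z

if __name__ == "__main__":
    field = gibbs_autologistic(30, 30, alpha=-0.5, beta_h=0.4, beta_v=0.4)
    print("proportion of ones:", field.mean())
```

In the full hierarchy this latent field sits beneath a logistic data layer with covariates, and the intractable normalization constant of the autologistic density is what the path-sampling (IMCS) machinery is needed for.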
Abstract:
In order to effect permanent closure in burns patients suffering from full thickness wounds, replacing their skin via split thickness autografting is essential. Dermal substitutes in conjunction with widely meshed split thickness autografts (+/- cultured keratinocytes) reduce scarring at the donor and recipient sites of burns patients by reducing the demand for autologous skin (both surface area and thickness), without compromising dermal delivery at the wound face. Tissue-engineered products such as Integra consist of a dermal template which is rapidly remodelled to form a neodermis, at which time the temporary silicone outer layer is removed and replaced with autologous split thickness skin. Whilst provision of a thick tissue-engineered dermis at full thickness burn sites reduces scarring, it is hampered by delays in vascularisation which result in clinical failure. The ultimate success of any skin graft product is dependent upon a number of basic factors including adherence and haemostasis and, in the case of viable tissue grafts, success is ultimately dependent upon restoration of a normal blood supply, and hence this study. Ultimately, the goal of this research is to improve the therapeutic properties of tissue replacements through impregnation with growth factors aimed at stimulating migration and proliferation of microvascular endothelial cells into the donor tissue post grafting. For the purpose of my masters, the aim was to evaluate the responsiveness of a dermal microvascular endothelial cell line to growth factors and haemostatic factors, in the presence of the glycoprotein vitronectin. Vitronectin formed the backbone of my hypothesis and research due to its association with both epithelial and, more specifically, endothelial migration and proliferation. Early work using a platform technology referred to as VitroGro (Tissue Therapies Ltd), which is comprised of vitronectin-bound BP5/IGF-1, aided keratinocyte proliferation. I hypothesised that this result would translate to another epithelium - endothelium. VitroGro had no effect on endothelial proliferation or migration. Vitronectin increases the presence of Fibroblast Growth Factor (FGF) and Vascular Endothelial Growth Factor (VEGF) receptors, enhancing cell responsiveness to their respective ligands. So, although Human Microvascular Endothelial Cell line 1 (HMEC-1) VEGF receptor expression is generally low, it was hypothesised that exposure to vitronectin would up-regulate this receptor. HMEC-1 migration, but not proliferation, was enhanced by vitronectin-bound VEGF, as well as by vitronectin-bound Epidermal Growth Factor (EGF), both of which could be used to stimulate microvascular endothelial cell migration for the purpose of transplantation. In addition to vitronectin's synergy with various growth factors, it has also been shown to play a role in haemostasis. Vitronectin binds thrombin-antithrombin III (TAT) to form a trimeric complex that takes on many of the attributes of vitronectin, such as heparin affinity, which results in its adherence to endothelium via heparan sulfate proteoglycans (HSP), followed by unaltered transcytosis through the endothelium, and ultimately its removal from the circulation. This has been documented as a mechanism designed to remove thrombin from the circulation. Equally, it could be argued that it is a mechanism for delivering vitronectin to the matrix.
My results show that matrix-bound vitronectin dramatically alters the effect that conformationally altered antithrombin III (cATIII) has on the proliferation of microvascular endothelial cells. cATIII stimulates HMEC-1 proliferation in the presence of matrix-bound vitronectin, as opposed to inhibiting proliferation in its absence. Binding vitronectin to tissues and organs prior to transplant, in the presence of cATIII, will have a profound effect on microvascular infiltration of the graft, by preventing occlusion of existing vessels whilst stimulating migration and proliferation of endothelium within the tissue.
Abstract:
The traditional searching method for model-order selection in linear regression is a nested full-parameters-set searching procedure over the desired orders, which we call full-model order selection. On the other hand, a method for model-selection searches for the best sub-model within each order. In this paper, we propose using the model-selection searching method for model-order selection, which we call partial-model order selection. We show by simulations that the proposed searching method gives better accuracies than the traditional one, especially for low signal-to-noise ratios over a wide range of model-order selection criteria (both information theoretic based and bootstrap-based). Also, we show that for some models the performance of the bootstrap-based criterion improves significantly by using the proposed partial-model selection searching method.

Index Terms— Model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information theoretic-based procedures include Akaike’s information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of the given technique is not only determined by the model structure (say a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes the comparison between the model-order selection algorithms difficult as within the same model with a given order one could find an example for which one of the methods performs favourably well or fails [6, 8]. Our aim is to improve the performance of the model order selection criteria in cases where the SNR is low by considering a model-selection searching procedure that takes into account not only the full-model order search but also a partial model order search within the given model order. Understandably, the improvement in the performance of the model order estimation is at the expense of additional computational complexity.
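The distinction between full-model and partial-model order selection can be sketched in a few lines. The code below is a hedged toy example, not the paper's simulation setup: it scores candidate polynomial models with a Gaussian AIC, where the full search considers only the nested all-coefficients model at each order, while the partial search also tries every coefficient subset within each order.

```python
import itertools
import numpy as np

def aic(y, yhat, k):
    """Gaussian AIC for a linear model with k estimated coefficients."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def fit_subset(X, y, cols):
    """Least-squares fit restricted to the chosen columns of the regressor matrix."""
    Xs = X[:, cols]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return Xs @ coef

def order_selection(X, y, max_order, partial=True):
    """Return (best AIC, order, subset). The full-model search uses all columns up to
    each order; the partial-model search also tries every subset within each order."""
    best = (np.inf, None, None)
    for order in range(1, max_order + 1):
        candidate_sets = (
            itertools.chain.from_iterable(
                itertools.combinations(range(order + 1), r) for r in range(1, order + 2))
            if partial else [tuple(range(order + 1))]
        )
        for cols in candidate_sets:
            score = aic(y, fit_subset(X, y, list(cols)), k=len(cols))
            if score < best[0]:
                best = (score, order, cols)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(-1, 1, 200)
    X = np.vander(t, 6, increasing=True)   # columns: 1, t, t^2, ..., t^5
    y = 2.0 * t + 0.5 * t**3 + rng.normal(scale=0.5, size=t.size)  # sparse-in-order truth
    print("full   :", order_selection(X, y, max_order=5, partial=False))
    print("partial:", order_selection(X, y, max_order=5, partial=True))
```

Swapping `aic` for MDL, AICc or a bootstrap-based criterion changes only the scoring function; the full-versus-partial search structure, which is the paper's subject, stays the same.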
Abstract:
Automobiles have deeply impacted the way in which we travel, but they have also contributed to many deaths and injuries due to crashes. A number of reasons for these crashes have been pointed out by researchers. Inexperience has been identified as a contributing factor to road crashes. A driver's driving abilities also play a vital role in judging the road environment and reacting in time to avoid any possible collision. Therefore a driver's perceptual and motor skills remain the key factors impacting on road safety. Our failure to understand what is really important for learners, in terms of competent driving, is one of the many challenges in building better training programs. Driver training is one of the interventions aimed at decreasing the number of crashes that involve young drivers. Currently, there is a need to develop a comprehensive driver evaluation system that benefits from the advances in Driver Assistance Systems. A multidisciplinary approach is necessary to explain how driving abilities evolve with on-road driving experience. To our knowledge, driver assistance systems have never been comprehensively used in a driver training context to assess the safety aspect of driving. The aim and novelty of this thesis is to develop and evaluate an Intelligent Driver Training System (IDTS) as an automated assessment tool that will help drivers and their trainers to comprehensively view complex driving manoeuvres and potentially provide effective feedback by post-processing the data recorded during driving. This system is designed to help driver trainers to accurately evaluate driver performance and has the potential to provide valuable feedback to the drivers. Since driving depends on fuzzy inputs from the driver (i.e. approximate estimation of the distance from other vehicles, approximate assumption of the other vehicle's speed), it is necessary that the evaluation system is based on criteria and rules that handle the uncertain and fuzzy characteristics of the driving tasks. Therefore, the proposed IDTS utilizes fuzzy set theory for the assessment of driver performance. The proposed research program focuses on integrating the multi-sensory information acquired from the vehicle, driver and environment to assess driving competencies. After information acquisition, the current research focuses on automated segmentation of the selected manoeuvres from the driving scenario. This leads to the creation of a model that determines a “competency” criterion through the driving performance protocol used by driver trainers (i.e. expert knowledge) to assess drivers. This is achieved by comprehensively evaluating and assessing the data stream acquired from multiple in-vehicle sensors using fuzzy rules and classifying the driving manoeuvres (i.e. overtake, lane change, T-crossing and turn) as low or high competency. The fuzzy rules use parameters such as following distance, gaze depth and scan area, distance with respect to lanes, and excessive acceleration or braking during the manoeuvres to assess competency. The rules that identify driving competency were initially designed with the help of experts' knowledge (i.e. driver trainers). In order to fine-tune these rules and the parameters that define them, a driving experiment was conducted to identify the empirical differences between novice and experienced drivers.
The results from the driving experiment indicated that significant differences existed between novice and experienced drivers in terms of their gaze pattern and duration, speed, stop time at the T-crossing, lane keeping and the time spent in lanes while performing the selected manoeuvres. These differences were used to refine the fuzzy membership functions and rules that govern the assessment of the driving tasks. Next, this research focused on providing an integrated visual assessment interface to both driver trainers and their trainees. By providing a rich set of interactive graphical interfaces displaying information about the driving tasks, the Intelligent Driver Training System (IDTS) visualisation module has the potential to give empirical feedback to its users. Lastly, the validation of the IDTS system's assessment was conducted by comparing the IDTS objective assessments for the driving experiment with the subjective assessments of the driver trainers for particular manoeuvres. Results show that not only was IDTS able to match the subjective assessments made by driver trainers during the driving experiment, but it also identified some additional driving manoeuvres performed at low competency that were not identified by the driver trainers, due to the increased mental workload of trainers when assessing the multiple variables that constitute driving. The validation of IDTS emphasized the need for an automated assessment tool that can segment the manoeuvres from the driving scenario, further investigate the variables within each manoeuvre to determine its competency, and provide integrated visualisation regarding the manoeuvre to its users (i.e. trainers and trainees). Through analysis and validation it was shown that IDTS is a useful assistance tool for driver trainers to empirically assess and potentially provide feedback regarding the manoeuvres undertaken by drivers.
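As a hedged toy version of the fuzzy assessment idea (the membership functions, thresholds and rule below are illustrative inventions, not the trainer-derived rules of the IDTS), the following grades a single manoeuvre from two of the cited parameters, following distance and peak braking.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def manoeuvre_competency(follow_dist_m, peak_brake_ms2):
    """Toy fuzzy assessment: a short following distance or harsh braking
    pushes the manoeuvre toward 'low competency'."""
    too_close = tri(follow_dist_m, 0.0, 5.0, 15.0)      # membership in 'too close'
    harsh_brake = tri(peak_brake_ms2, 3.0, 6.0, 10.0)   # membership in 'harsh braking'
    # Rule: IF too close OR harsh braking THEN low competency (max as fuzzy OR).
    low = max(too_close, harsh_brake)
    return {"low": low, "high": 1.0 - low}

if __name__ == "__main__":
    print(manoeuvre_competency(follow_dist_m=8.0, peak_brake_ms2=7.0))   # marginal manoeuvre
    print(manoeuvre_competency(follow_dist_m=30.0, peak_brake_ms2=1.5))  # comfortable manoeuvre
```

The actual system combines many more inputs (gaze depth and scan area, lane position, acceleration) per manoeuvre type, with membership functions tuned from the novice-versus-experienced driving experiment described above.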
Abstract:
The functional properties of cartilaginous tissues are determined predominantly by the content, distribution, and organization of proteoglycan and collagen in the extracellular matrix. Extracellular matrix accumulates in tissue-engineered cartilage constructs by metabolism and transport of matrix molecules, processes that are modulated by physical and chemical factors. Constructs incubated under free-swelling conditions with freely permeable or highly permeable membranes exhibit symmetric surface regions of soft tissue. The variation in tissue properties with depth from the surfaces suggests the hypothesis that the transport processes mediated by the boundary conditions govern the distribution of proteoglycan in such constructs. A continuum model (DiMicco and Sah in Transport Porous Med 50:57-73, 2003) was extended to test the effects of membrane permeability and perfusion on proteoglycan accumulation in tissue-engineered cartilage. The concentrations of soluble, bound, and degraded proteoglycan were analyzed as functions of time, space, and non-dimensional parameters for several experimental configurations. The results of the model suggest that the boundary condition at the membrane surface and the rate of perfusion, described by non-dimensional parameters, are important determinants of the pattern of proteoglycan accumulation. With perfusion, the proteoglycan profile is skewed and decreases or increases in magnitude depending on the level of flow-based stimulation. Utilization of a semi-permeable membrane with or without unidirectional flow may lead to tissues with depth-increasing proteoglycan content, resembling native articular cartilage.
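As a hedged indication of the general shape of such a continuum model (generic symbols and reaction terms assumed here, not the exact DiMicco and Sah formulation or its extension), the soluble, bound and degraded proteoglycan concentrations $c_s$, $c_b$ and $c_d$ might obey a one-dimensional balance of the form

$$\frac{\partial c_s}{\partial t} = D\,\frac{\partial^2 c_s}{\partial x^2} - v\,\frac{\partial c_s}{\partial x} + R_{\mathrm{syn}} - k_b\,c_s, \qquad \frac{\partial c_b}{\partial t} = k_b\,c_s - k_d\,c_b, \qquad \frac{\partial c_d}{\partial t} = k_d\,c_b,$$

with a membrane boundary condition such as $-D\,\partial c_s/\partial x = k_m\,(c_s - c_{\mathrm{bath}})$ at the construct surface. Quantities of this kind, the diffusivity $D$, perfusion velocity $v$, synthesis rate $R_{\mathrm{syn}}$, binding and degradation rates $k_b$, $k_d$, and membrane permeability $k_m$, are what the non-dimensional parameters varied in the study are built from: a nearly impermeable membrane (small $k_m$) traps soluble proteoglycan near the surface, while perfusion (nonzero $v$) skews the deposited profile toward one face.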
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for a lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study the tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or a bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis. In this area a metric of the TFSQ is calculated. Initially, two metrics, based on Gabor filter and Gaussian gradient-based techniques, were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to perform a comparison between HSV and two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation.
The LSI, meanwhile, appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, which was identified during the former clinical study, was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric ring pattern into an image of quasi-straight lines from which a block statistics value was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was proposed to fully comprehend the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has helped to provide some insight into the dynamics during this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series as well as to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
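The polar-transform metric can be illustrated with a short sketch. The code below is a hedged toy example on a synthetic ring pattern, not the thesis' routines: it resamples the image about its centre so concentric rings become quasi-straight lines, then summarises local variability with per-block standard deviations; the block size and image size are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=128, n_theta=256):
    """Resample an image from Cartesian to polar coordinates about its centre,
    so concentric rings become (approximately) straight horizontal lines."""
    cy, cx = (np.array(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cx, cy), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(img, coords, order=1)

def block_metric(polar_img, block=16):
    """Scalar TFSQ-style value: mean of per-block standard deviations; a disturbed
    ring pattern yields a larger value than a clean one."""
    h, w = polar_img.shape
    blocks = [polar_img[i:i + block, j:j + block].std()
              for i in range(0, h - block + 1, block)
              for j in range(0, w - block + 1, block)]
    return float(np.mean(blocks))

if __name__ == "__main__":
    y, x = np.mgrid[-100:101, -100:101]
    rings = 0.5 + 0.5 * np.cos(0.4 * np.hypot(x, y))               # clean concentric rings
    noisy = rings + 0.2 * np.random.default_rng(0).normal(size=rings.shape)
    print("clean pattern metric:", block_metric(to_polar(rings)))
    print("noisy pattern metric:", block_metric(to_polar(noisy)))
```

Evaluating such a value frame by frame yields the kind of TFSQ time series that the modeling guidelines at the end of the abstract are concerned with.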
Abstract:
Differential axial deformation between the column elements and the shear wall elements of cores increases with building height and geometric complexity. Adverse effects due to this differential axial deformation reduce building performance and lifetime serviceability. Quantifying axial deformations using ambient measurements from vibrating wire, external mechanical and electronic strain gauges, in order to acquire adequate provisions to mitigate the adverse effects, is a well-established method. However, these gauges have to be installed in or on the elements to acquire continuous measurements, and hence their use is uneconomical and inconvenient. This motivates the development of an alternative method to quantify the axial deformations. This paper proposes an innovative method based on modal parameters to quantify the axial deformations of shear wall elements in the cores of buildings. The capabilities of the method are presented through an illustrative example.
Abstract:
Planar magnetic elements are becoming a replacement for their conventional rivals. Among the reasons supporting their application is their smaller size. Taking up less bulk in the electronic package is a critical advantage from the manufacturing point of view. The planar structure consists of PCB copper tracks that generate the desired windings. The windings on each PCB layer can be connected in various ways to other winding layers to produce a series or parallel connection. These windings can be applied coreless or with a core, depending on the application in Switched Mode Power Supplies (SMPS). The planar shape of the tracks increases the effective conduction area in the windings, bringing about more inductance than conventional windings for a similar copper loss. The problem arising from the planar structure of magnetic inductors is the leakage current between the layers generated by a pulse width modulated voltage across the inductor. This current depends on the capacitive coupling between the layers, which in turn depends on the physical parameters of the planar scheme. In order to reduce the electrical power dissipation due to the leakage current and Electromagnetic Interference (EMI), reconsideration of the planar structure might be effective. The aim of this research is to address the problem of capacitive coupling in planar layers and to find a better structure for the planar inductor that offers less total capacitive coupling and thus less thermal dissipation from the leakage currents. Using finite element methods (FEM), several simulations have been carried out for various planar structures. Laboratory prototypes of these structures were built to the same specifications as the simulation cases. The capacitive couplings of the samples were determined with a spectrum analyser, and the test analysis verified the simulation results.
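As a hedged back-of-the-envelope companion to the FEM study (a bare parallel-plate approximation with assumed FR-4 values, not the simulated or measured structures of the thesis), the sketch below estimates the static layer-to-layer capacitance that drives the leakage-current path discussed above.

```python
# Parallel-plate estimate of the layer-to-layer capacitance of a planar winding.
EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def interlayer_capacitance(track_width_m, overlap_length_m, layer_spacing_m, eps_r=4.4):
    """C = eps_0 * eps_r * A / d, with A the overlapping copper area between layers.
    eps_r = 4.4 is a typical (assumed) FR-4 value; fringing fields are ignored."""
    area = track_width_m * overlap_length_m
    return EPS_0 * eps_r * area / layer_spacing_m

if __name__ == "__main__":
    # Illustrative numbers: 4 mm track width, 300 mm of overlapping spiral length,
    # 0.2 mm of prepreg between winding layers.
    c = interlayer_capacitance(4e-3, 300e-3, 0.2e-3)
    print(f"estimated interlayer capacitance: {c * 1e12:.1f} pF")
    # Larger layer spacing, or a layout that reduces the overlapping area between
    # layers at high dV/dt, lowers this coupling and hence the leakage current.
```

The FEM simulations and spectrum-analyser measurements described in the abstract are needed precisely because this simple formula ignores fringing, the winding geometry and the voltage distribution along the tracks.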