970 results for synchrotron-based techniques


Relevance:

90.00%

Publisher:

Abstract:

Obtaining wind vectors over the ocean is important for weather forecasting and ocean modelling. Several satellite systems used operationally by meteorological agencies utilise scatterometers to infer wind vectors over the oceans. In this paper we present the results of using novel neural-network-based techniques to estimate wind vectors from such data. The problem is partitioned into estimating wind speed and wind direction. Wind speed is modelled using a multi-layer perceptron (MLP) and a sum-of-squares error function. Wind direction is a periodic variable and a multi-valued function for a given set of inputs; a conventional MLP fails at this task, and so we model the full periodic probability density of direction conditioned on the satellite-derived inputs using a Mixture Density Network (MDN) with periodic kernel functions. A committee of the resulting MDNs is shown to improve the results.
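
As a concrete illustration of the direction model, the sketch below evaluates the negative log-likelihood of a mixture of von Mises (circular normal) kernels, one plausible choice of periodic kernel; the abstract does not fix the kernel, so this parameterisation is an assumption, with the mixture weights, centres and concentrations taken to be outputs of an MLP.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def vonmises_mixture_nll(theta, weights, mus, kappas):
    """Negative log-likelihood of angles `theta` (radians) under a
    mixture of von Mises (periodic) kernels.

    weights, mus, kappas: arrays of shape (n_samples, n_kernels),
    assumed to come from the output layer of an MLP (the MDN);
    weights must sum to 1 along the kernel axis.
    """
    # von Mises density per kernel: exp(kappa*cos(x - mu)) / (2*pi*I0(kappa))
    dens = np.exp(kappas * np.cos(theta[:, None] - mus)) / (2 * np.pi * i0(kappas))
    mix = np.sum(weights * dens, axis=1)   # mixture density per sample
    return -np.mean(np.log(mix + 1e-12))   # averaged negative log-likelihood
```

In a full MDN the weights would come from a softmax layer and the concentrations from a positive activation; a committee, as in the abstract, would average the predicted densities of several independently trained networks.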

Relevance:

90.00%

Publisher:

Abstract:

Within the framework of heritage preservation, 3D scanning and modeling for heritage documentation has increased significantly in recent years, mainly due to the evolution of laser and image-based techniques, modeling software, powerful computers and virtual reality. 3D laser acquisition constitutes a real development opportunity for 3D modeling, which was previously based on theoretical data. The representation of the object information relies on knowledge of its historic and theoretical frame to reconstitute, a posteriori, its previous states. This project proposes an approach that combines data extraction based on architectural knowledge with laser survey measurements, the whole leading to 3D reconstruction. The Khmer objects used in the experiments are exhibited at the Guimet Museum in Paris. This digital modeling meets the need for exploitable models for simulation projects, prototyping, exhibitions and the promotion of cultural tourism, and particularly for archiving against any likely disaster and as an aid in formulating the virtual museum concept.

Relevance:

90.00%

Publisher:

Abstract:

Calcitic belemnite rostra are commonly employed in paleoenvironmental studies based on geochemical data. However, several questions, such as their original porosity and microstructure, remain open, even though answering them is essential for making accurate interpretations based on geochemical analyses. This paper revisits and sheds light on some of these questions. Petrographic data demonstrate that calcite crystals of the rostrum solidum of belemnites grow from spherulites that successively develop along the apical line, resulting in a “regular spherulitic prismatic” microstructure. Radially arranged calcite crystals emerge and diverge from the spherulites: towards the apex, crystals grow until a new spherulite is formed; towards the external walls of the rostrum, the crystals become progressively larger and prismatic. Adjacent crystals vary slightly in their c-axis orientation, resulting in undulose extinction. Concentric growth layering develops at different scales and is superimposed and traversed by a radial pattern, which results in the micro-fibrous texture observed in the calcite crystals of the rostra. Petrographic data demonstrate that single calcite crystals in the rostra have a composite nature, which strongly suggests that the belemnite rostra were originally porous. Single crystals consistently comprise two distinct zones or sectors in optical continuity: 1) the inner zone is fluorescent, has relatively low optical relief under transmitted light (TL) microscopy, a dark-grey color under backscattered electron microscopy (BSEM), a commonly triangular shape, a “patchy” appearance and relatively high Mg and Na contents; 2) the outer sector is non-fluorescent, has relatively high optical relief under TL, a light-grey color under BSEM and low Mg and Na contents. The inner, fluorescent sectors are interpreted to have formed first, as a product of biologically controlled mineralization during belemnite skeletal growth, and the non-fluorescent outer sectors as overgrowths of the former, filling the intra- and inter-crystalline porosity. This question has important implications for paleoenvironmental and/or paleoclimatic interpretations based on geochemical analyses of belemnite rostra. Finally, the petrographic features of the composite calcite crystals in the rostra also point to the non-classical crystallization of belemnite rostra, as previously suggested by other authors.

Relevance:

90.00%

Publisher:

Abstract:

Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, which makes many researchers doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on the confinement of the output error induced by any reliability issue. Focusing on memory faults, the proposed method, rather than correcting every single error, exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method on the enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault-tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
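
The following toy sketch illustrates the substitution idea in software terms: faulty words are not corrected bit by bit but overwritten with a statistical estimate drawn from the application's data. The estimators shown (global mean, previous sample) are illustrative assumptions; the paper realises this logic in hardware through custom RISC instructions and functional units.

```python
import numpy as np

def confine_errors(samples, fault_mask, estimate="mean"):
    """Replace words flagged as erroneous with the best available
    statistical estimate of the data, instead of correcting them.

    samples:    1-D array of application data read from memory
    fault_mask: boolean array, True where a fault was detected
    """
    out = samples.copy()
    if estimate == "mean":
        # Estimate from the statistics of the fault-free data.
        out[fault_mask] = samples[~fault_mask].mean()
    elif estimate == "previous":
        # For smooth signals (e.g. multimedia), the previous
        # (already-confined) sample is often a better estimate.
        for i in np.flatnonzero(fault_mask):
            out[i] = out[i - 1] if i > 0 else samples[~fault_mask].mean()
    return out
```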

Relevance:

90.00%

Publisher:

Abstract:

Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially with the size of the model. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition, whereby precomputed subpaths are composed to compute whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation; many path-based techniques suffer from having to evaluate many (unimportant) paths, and evaluating the important ones helps to compute tight bounds efficiently and quickly.
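
For context, the sketch below implements plain uniformization for a transient reward measure, the baseline that the dissertation's path-based method extends. It is a minimal illustration, not the dissertation's algorithm, and the truncation rule is a simple assumed heuristic.

```python
import numpy as np
from scipy.stats import poisson

def transient_reward(Q, pi0, reward, t, eps=1e-10):
    """Expected instantaneous reward at time t for a CTMC with
    generator Q, via standard uniformization:

        E[r(t)] = sum_n Poisson(Lambda*t; n) * pi0 @ P^n @ reward,
        with P = I + Q/Lambda and Lambda >= max_i |Q[i, i]|.
    """
    Lam = np.max(-np.diag(Q))                 # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam          # uniformized DTMC
    total, v, n = 0.0, pi0.copy(), 0
    while True:
        w = poisson.pmf(n, Lam * t)           # Poisson weight of step n
        total += w * (v @ reward)
        if n > Lam * t and w < eps:           # Poisson tail negligible
            break
        v = v @ P                             # v = pi0 @ P^(n+1)
        n += 1
    return total
```

The dissertation's approach avoids these full vector-matrix products by exploring sets of reward-equivalent paths and bounding their contribution.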

Relevance:

90.00%

Publisher:

Abstract:

Terahertz (THz) technology has been generating a lot of interest because of the potential applications for systems working in this frequency range. However, to fully achieve this potential, effective and efficient ways of generating controlled signals in the terahertz range are required. Devices that exhibit negative differential resistance (NDR) in a region of their current-voltage (I-V) characteristics have been used in circuits for the generation of radio frequency signals. Of all these NDR devices, resonant tunneling diode (RTD) oscillators, with their ability to oscillate in the THz range, are considered one of the most promising solid-state sources for terahertz signal generation at room temperature. There are, however, limitations and challenges with these devices. Their inherent output power is low, usually in the microwatt (µW) range for RTD oscillators, when milliwatts (mW) are desired. At the device level, parasitic oscillations caused by the biasing-line inductance when the device is biased in the NDR region prevent accurate device characterisation, which in turn prevents device modelling for computer simulations. This thesis describes work on the I-V characterisation of tunnel diode (TD) and RTD (fabricated by Dr. Jue Wang) devices, and the radio frequency (RF) characterisation and small-signal modelling of RTDs. The thesis also describes the design and measurement of hybrid TD oscillators for higher output power, and the design and measurement of a planar Yagi antenna (fabricated by Khalid Alharbi) for THz applications. To enable oscillation-free current-voltage characterisation of tunnel diodes, a commonly employed method is to connect a suitable resistor across the device to make the total differential resistance in the NDR region positive. However, this approach is not without problems, as the value of the resistor has to satisfy certain conditions or else bias oscillations will still be present in the NDR region of the measured I-V characteristics. This method is difficult to use for RTDs fabricated on wafer, due to the discrepancies between the designed and actual resistance values of resistors fabricated using thin-film technology. In this work, pulsed DC rather than static DC measurements during device characterisation were shown to give accurate characteristics in the NDR region without the need for a stabilisation resistor. This approach allows direct, oscillation-free characterisation of devices. Experimental results show that I-V characterisation of tunnel diodes and RTD devices free of bias oscillations in the NDR region can be achieved. In this work, a new power-combining topology to address the limitation of low output power of TD and RTD oscillators is presented. The design employs two oscillators biased separately, with the combined output power from both collected at a single load. Compared to previous approaches, this method keeps the frequency of oscillation of the combined oscillators the same as for one of the oscillators. Experimental results with a hybrid circuit using two tunnel diode oscillators, compared with a single-oscillator design with similar values, show that the coupled oscillators produce double the output RF power of the single oscillator. This topology can be scaled to higher (up to terahertz) frequencies in the future by using RTD oscillators. Finally, a broadband Yagi antenna suitable for wireless communication at terahertz frequencies is presented in this thesis. The return loss of the antenna showed that its bandwidth is larger than the measured range (140-220 GHz). A new method was used to characterise the radiation pattern of the antenna in the E-plane. This was carried out on-wafer, and the measured radiation pattern showed good agreement with the simulated pattern. In summary, this work makes important contributions to the accurate characterisation and modelling of TDs and RTDs, to circuit-based techniques for power combining of high-frequency TD or RTD oscillators, and to antennas suitable for on-chip integration with high-frequency oscillators.
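
To illustrate the stabilisation condition mentioned above, the sketch below uses a classic empirical tunnel-diode I-V model (with illustrative parameter values, not the thesis's devices) to find the most negative differential conductance and hence the largest shunt resistor that keeps the total differential resistance positive throughout the NDR region.

```python
import numpy as np

def tunnel_diode_iv(v, Ip=1e-3, Vp=0.1, Iv=0.1e-3, Vv=0.5):
    """Simplified empirical tunnel-diode I-V: a tunnelling term with
    peak current Ip at Vp, plus a thermal diode term rising past the
    valley near Vv. Parameter values are illustrative only."""
    return Ip * (v / Vp) * np.exp(1 - v / Vp) + Iv * np.exp((v - Vv) / 0.026)

v = np.linspace(0.01, 0.6, 600)
g = np.gradient(tunnel_diode_iv(v), v)   # differential conductance dI/dV
g_min = g.min()                          # most negative point (NDR region)

# A shunt resistor R stabilises the DC bias only if the *total*
# differential conductance stays positive: 1/R + dI/dV > 0 for all V,
# i.e. R < 1 / |g_min|.
R_max = 1.0 / abs(g_min)
print(f"Shunt resistor must satisfy R < {R_max:.1f} ohm")
```

This is exactly the constraint that is hard to meet with on-wafer thin-film resistors, which motivates the pulsed-DC characterisation described in the abstract.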

Relevance:

80.00%

Publisher:

Abstract:

The increasing diversity of the Internet has created a vast number of multilingual resources on the Web. A huge number of these documents are written in languages other than English. Consequently, the demand for searching in non-English languages is growing exponentially, and it is desirable that a search engine can search for information over collections of documents in other languages. This research investigates techniques for developing high-quality Chinese information retrieval systems. A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between Chinese words. This feature makes Chinese information retrieval more difficult, since a retrieved document that contains the query term as a sequence of Chinese characters may not be really relevant to the query: the query term (as a sequence of Chinese characters) may not be a valid Chinese word in that document. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose two approaches to deal with these problems. In the first approach, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques with the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on their relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if only Chinese segmentation is incorporated with the traditional character-based approach. In the second approach, we propose a novel query expansion method that applies text mining techniques to find the most relevant words with which to extend the query. Unlike most existing query expansion methods, which generally select highly frequent indexing terms from the retrieved documents to expand the query, our approach utilizes text mining techniques to find patterns in the retrieved documents that highly correlate with the query term, and then uses the relevant words in those patterns to expand the original query. This research project develops and implements a Chinese information retrieval system for evaluating the proposed approaches. There are two stages in the experiments. The first stage investigates whether high-accuracy segmentation can improve Chinese information retrieval. In the second stage, the text-mining-based query expansion approach is implemented, and a further experiment compares its performance with that of the standard Rocchio approach. The NTCIR5 Chinese collections are used in the experiments. The experimental results show that by incorporating the text-mining-based query expansion with the hybrid model, significant improvement is achieved in both precision and recall assessments.
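
A minimal sketch of the hybrid scoring idea follows; the linear interpolation between the character-based and word-based scores is an assumption for illustration, since the abstract does not spell out the two proposed ranking methods.

```python
def hybrid_rank(docs, query, char_score, word_score, alpha=0.5):
    """Rank documents by a combined relevance score.

    char_score(doc, query): character-based relevance (e.g. n-gram match)
    word_score(doc, query): word-based relevance after segmentation
    alpha: interpolation weight (an illustrative assumption; the thesis
           proposes its own two combination methods)
    """
    scored = [(alpha * char_score(d, query) +
               (1 - alpha) * word_score(d, query), d) for d in docs]
    # highest combined score first
    return [d for s, d in sorted(scored, key=lambda x: -x[0])]
```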

Relevance:

80.00%

Publisher:

Abstract:

Applied Theatre is an umbrella term for a range of drama-based techniques, all of which align with a lineage of pedagogical theory and practice (e.g., Freire, Moreno, Heathcote). It encompasses methods and forms including Drama Education (O’Neill), Forum Theatre (Boal) and Process Drama (Haseman, O’Toole). Applied Theatre often occurs in non-theatrical settings (schools, hospitals, prisons) with the aim of helping participants address issues of local concern. Increasingly, Applied Theatre practices are utilised in the corporate environment. Applied Theatre adopts artistic principles in production, but posits a practical utility beyond simple entertainment.

Relevance:

80.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium has its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, a detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
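
The oscillation-counting idea behind the non-contact temperature measurement can be summarised in a few lines; the per-oscillation temperature step is a calibration constant of the crystal and wavelength, whose value is not given in the abstract.

```python
import numpy as np
from scipy.signal import find_peaks

def temperature_change(intensity, dT_per_cycle):
    """Estimate the temperature change of a birefringent crystal from
    the intensity trace of a probe beam recorded while the temperature
    drifts. Each full intensity oscillation corresponds to a fixed
    temperature step `dT_per_cycle` (an assumed calibration constant).
    """
    peaks, _ = find_peaks(intensity)   # one peak per oscillation
    return len(peaks) * dT_per_cycle
```

Applying the same count per pixel of an expanded beam, as described in the abstract, would map temperature change across the crystal and hence reveal temperature gradients.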

Relevance:

80.00%

Publisher:

Abstract:

AC motors are used in a wide range of modern systems, from household appliances to automated industry applications such as ventilation systems, fans, pumps, conveyors and machine tool drives. Inverters are widely used in industrial and commercial applications due to the growing need for speed control in ASD systems. Fast switching transients and the common mode voltage, in interaction with parasitic capacitive couplings, may cause many unwanted problems in ASD applications, including shaft voltage and leakage currents. One of the inherent characteristics of Pulse Width Modulation (PWM) techniques is the generation of the common mode voltage, which is defined as the voltage between the electrical neutral of the inverter output and the ground. Shaft voltage can cause bearing currents when it exceeds the breakdown voltage level of the thin lubricant film between the inner and outer rings of the bearing. This phenomenon is the main reason for early bearing failures. Rapid development in power switch technology has led to a drastic reduction in switching rise and fall times. Because there is considerable capacitance between the stator windings and the frame, there can be a significant capacitive current (ground current escaping to earth through stray capacitors inside a motor) if the common mode voltage has high-frequency components. This current leads to noise and electromagnetic interference (EMI) issues in motor drive systems. These problems have been dealt with using a variety of methods reported in the literature. However, cost and maintenance issues have prevented these methods from being widely accepted; extra cost or rating of the inverter switches is usually the price to pay for such approaches. Thus, the determination of cost-effective techniques for shaft and common mode voltage reduction in ASD systems, with the focus on the first step of the design process, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. Electrical power generation from renewable energy sources, such as wind energy systems, has become a crucial issue because of environmental problems and a predicted future shortage of traditional energy sources. Thus, Chapter 2 focuses on the shaft voltage analysis of stator-fed induction generators (IGs) and doubly fed induction generators (DFIGs) in wind turbine applications. This shaft voltage analysis includes topologies, high-frequency modelling, calculation and mitigation techniques. A back-to-back AC-DC-AC converter is investigated in terms of shaft voltage generation in a DFIG. Different topologies of LC filter placement are analysed in an effort to eliminate the shaft voltage. Different capacitive couplings exist in the motor/generator structure, and any change in design parameters affects these capacitive couplings. Thus, an appropriate design for AC motors should lead to the smallest possible shaft voltage. Calculation of the shaft voltage based on different capacitive couplings, and an investigation of the effects of different design parameters, are discussed in Chapter 3. This is achieved through 2-D and 3-D finite element simulation and experimental analysis. End-winding parameters of the motor are also effective factors in the calculation of the shaft voltage and have not been taken into account in previously reported studies.
Calculation of the end-winding capacitances is rather complex because of the diversity of end-winding shapes and the complexity of their geometry. A comprehensive analysis of these capacitances has been carried out with 3-D finite element simulations and experimental studies to determine their effective design parameters. These are documented in Chapter 4. The results of this analysis show that, by choosing appropriate design parameters, it is possible to decrease the shaft voltage and resultant bearing current in the primary stage of generator/motor design, without using any additional active or passive filter-based techniques. The common mode voltage is defined by the switching pattern and, by using an appropriate pattern, the common mode voltage level can be controlled. Therefore, any PWM pattern that eliminates or minimizes the common mode voltage is an effective shaft voltage reduction technique. Thus, common mode voltage reduction of a three-phase AC motor supplied with a single-phase diode rectifier is the focus of Chapter 5. The proposed strategy is mainly based on proper utilization of the zero vectors. Multilevel inverters are also used in ASD systems; they have more voltage levels and switching states, and can provide more possibilities to reduce the common mode voltage. The common mode voltage of multilevel inverters is investigated in Chapter 6. Chapter 7 investigates techniques for eliminating the shaft voltage in a DFIG, based on the methods presented in the literature, by the use of simulation results. It is shown that every solution for reducing the shaft voltage in DFIG systems has its own characteristics, and these have to be taken into account in determining the most effective strategy. Calculation of the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds, in symmetrical and asymmetrical shaft and ball positions, is discussed in Chapter 8. The analysis is carried out using finite element simulations to determine the conditions that will increase the probability of high rates of bearing failure due to current discharges through the balls and races.
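
The link between switching pattern and common mode voltage can be made concrete with a short enumeration: for a two-level, three-phase inverter, the common mode voltage is the average of the three pole voltages, so the zero vectors (000 and 111) produce the largest magnitude. This is a generic textbook illustration, not the thesis's specific strategy.

```python
import itertools

VDC = 1.0  # DC-link voltage (normalised)

# Each switching state connects each phase leg to +VDC/2 or -VDC/2.
# The common mode voltage is the average of the three pole voltages.
for state in itertools.product((0, 1), repeat=3):
    poles = [VDC / 2 if s else -VDC / 2 for s in state]
    v_cm = sum(poles) / 3
    print(state, f"v_cm = {v_cm:+.3f} * VDC")
```

Running it shows the zero vectors give v_cm = ±VDC/2 while the active vectors give ±VDC/6, which is why the choice and placement of zero vectors dominates the common mode voltage level.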

Relevance:

80.00%

Publisher:

Abstract:

Corneal-height data are typically measured with videokeratoscopes and modeled using a set of orthogonal Zernike polynomials. We address the estimation of the number of Zernike polynomials, which is formalized as a model-order selection problem in linear regression. Classical information-theoretic criteria tend to overestimate the order of the corneal surface model due to the weakness of their penalty functions, while bootstrap-based techniques tend to underestimate it or require extensive processing. In this paper, we propose the efficient detection criterion (EDC), which has the same general form as information-theoretic criteria, as an alternative for estimating the optimal number of Zernike polynomials. We first show, via simulations, that the EDC outperforms a large number of information-theoretic criteria and resampling-based techniques. We then illustrate that using the EDC for real corneas results in models that are in closer agreement with clinical expectations, and provides a means for distinguishing normal corneal surfaces from astigmatic and keratoconic surfaces.
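
A sketch of EDC-style order selection for the Zernike regression follows; the concrete penalty C_N used below is an illustrative choice satisfying the usual EDC conditions (C_N/N → 0 and C_N/ln ln N → ∞), not necessarily the paper's.

```python
import numpy as np

def select_order(Z_cols, y, C_N=None):
    """Pick the number of Zernike terms by minimising an EDC-style
    criterion for linear regression:

        EDC(k) = N * ln(RSS_k / N) + k * C_N

    Z_cols: list of regressor columns (Zernike polynomials evaluated
            at the measurement points); y: measured corneal heights.
    The default C_N = sqrt(N * ln N) is an assumed illustration.
    """
    N = len(y)
    if C_N is None:
        C_N = np.sqrt(N * np.log(N))
    best_k, best_val = 1, np.inf
    for k in range(1, len(Z_cols) + 1):
        Z = np.column_stack(Z_cols[:k])
        beta, rss, *_ = np.linalg.lstsq(Z, y, rcond=None)
        # lstsq returns an empty residual array in degenerate cases
        rss = rss[0] if rss.size else np.sum((y - Z @ beta) ** 2)
        val = N * np.log(rss / N) + k * C_N
        if val < best_val:
            best_k, best_val = k, val
    return best_k
```

The stronger penalty is what counteracts the overfitting that the paper attributes to classical information-theoretic criteria.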

Relevance:

80.00%

Publisher:

Abstract:

A distinctive feature of Chinese text is that a Chinese document is a sequence of Chinese characters with no space or boundary between Chinese words. This feature makes Chinese information retrieval more difficult, since a retrieved document that contains the query term as a sequence of Chinese characters may not be really relevant to the query: the query term (as a sequence of Chinese characters) may not be a valid Chinese word in that document. On the other hand, a document that is actually relevant may not be retrieved because it does not contain the query sequence but contains other relevant words. In this research, we propose a hybrid Chinese information retrieval model that incorporates word-based techniques with the traditional character-based techniques. The aim of this approach is to investigate the influence of Chinese segmentation on the performance of Chinese information retrieval. Two ranking methods are proposed to rank retrieved documents based on their relevancy to the query, calculated by combining character-based ranking and word-based ranking. Our experimental results show that Chinese segmentation can improve the performance of Chinese information retrieval, but the improvement is not significant if only Chinese segmentation is incorporated with the traditional character-based approach.

Relevance:

80.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess the tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routines extract from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on the Gabor filter and on Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a metric of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques: lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during the former clinical study, was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully comprehend the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has helped to provide some insight into the dynamics during this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
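
A minimal sketch of the polar-transform metric described above, assuming a simple per-block standard deviation as the block statistic (the abstract does not specify the statistic used):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def polar_block_metric(img, center, n_r=128, n_theta=256, block=16):
    """Remap the region of analysis to polar coordinates, so that the
    concentric Placido rings become quasi-straight lines, then
    summarise local pattern disturbance with a per-block statistic.
    """
    cy, cx = center
    r_max = min(cy, cx, img.shape[0] - cy, img.shape[1] - cx) - 1
    r = np.linspace(0, r_max, n_r)
    th = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, TH = np.meshgrid(r, th, indexing="ij")
    coords = np.array([cy + R * np.sin(TH), cx + R * np.cos(TH)])
    polar = map_coordinates(img, coords, order=1)   # shape (n_r, n_theta)
    # block statistics over the straightened ring pattern
    blocks = polar[: n_r // block * block, : n_theta // block * block]
    blocks = blocks.reshape(n_r // block, block, n_theta // block, block)
    return blocks.std(axis=(1, 3)).mean()
```

A smooth tear film yields straight, low-variance stripes in the polar image, so the statistic stays small; surface irregularities raise it, which is what gives the metric its sensitivity to low levels of pattern disturbance.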

Relevance:

80.00%

Publisher:

Abstract:

Cell-based therapies require cells capable of self-renewal and differentiation, and a prerequisite is the ability to prepare an effective dose of ex vivo expanded cells for autologous transplants. The in vivo identification of a source of physiologically relevant cell types suitable for cell therapies is therefore an integral part of tissue engineering. Bone marrow is the most easily accessible source of mesenchymal stem cells (MSCs), and harbours two distinct populations of adult stem cells, namely hematopoietic stem cells (HSCs) and bone mesenchymal stem cells (BMSCs). Unlike HSCs, there are as yet no rigorous criteria for characterizing BMSCs. A changing understanding of the pluripotency of BMSCs in recent studies has expanded their potential applications; however, the underlying molecular pathways which impart the features distinctive to BMSCs remain elusive. Furthermore, the sparse in vivo distribution of these cells imposes a clear limitation on their in vitro study. Also, when BMSCs are cultured in vitro, there is a loss of the in vivo microenvironment, which results in a progressive decline in proliferation potential and multipotentiality. This is further exacerbated with increased passage number, characterized by the onset of senescence-related changes. Accordingly, establishing protocols for generating large numbers of BMSCs without affecting their differentiation potential is necessary. The principal aims of this thesis were to identify potential molecular factors for characterizing BMSCs from osteoarthritic patients, and to attempt to establish culture protocols favourable for generating large numbers of BMSCs while retaining their proliferation and differentiation potential. Previously published studies concerning clonal cells have demonstrated that BMSCs are heterogeneous populations of cells at various stages of growth. Some cells are higher in the hierarchy and represent the progenitors, while other cells occupy a lower position in the hierarchy and are therefore more committed to a particular lineage. This feature of BMSCs was made evident by the work of Mareddy et al., which involved generating clonal populations of BMSCs from the bone marrow of osteoarthritic patients by a single-cell clonal culture method. Proliferation potential and differentiation capabilities were used to group cells into fast-growing and slow-growing clones. The study presented here is a continuation of the work of Mareddy et al. and employed immunological and array-based techniques to identify the primary molecular factors involved in regulating the phenotypic characteristics exhibited by the contrasting clonal populations. Subtractive immunization (SI) was used to generate novel antibodies against favourably expressed proteins in the fast-growing clonal cell population. The difference between the clonal populations at the transcriptional level was determined using a Stem Cell RT² Profiler™ PCR Array, which focuses on stem cell pathway gene expression. Monoclonal antibodies (mAbs) generated by SI were able to effectively highlight differentially expressed antigenic determinants, as was evident from Western blot analysis and confocal microscopy. Co-immunoprecipitation, followed by mass spectrometry analysis, identified one favourably expressed protein as the cytoskeletal protein vimentin. The stem cell gene array highlighted genes that were highly upregulated in the fast-growing clonal cell population.
Based on their functions, these genes were grouped into growth factors, cell fate determination, and maintenance of embryonic and neural stem cell renewal. Furthermore, on closer analysis it was established that the cytoskeletal protein vimentin and nine out of the ten genes identified by the gene array were associated with chondrogenesis or cartilage repair, consistent with the potential role played by BMSCs in defect repair and in maintaining tissue homeostasis by modulating the gene expression pattern to compensate for degenerated cartilage in osteoarthritic tissues. The gene array also presented transcripts for embryonic lineage markers such as FOXA2 and Sox2, both of which were significantly overexpressed in the fast-growing clonal populations. A recent groundbreaking study by Yamanaka et al. imparted embryonic stem cell (ESC)-like characteristics to somatic cells, in a process termed nuclear reprogramming, by the ectopic expression of the genes Sox2, cMyc and Oct4. The expression of embryonic lineage markers in adult stem cells may be a mechanism by which the favourable behaviour of fast-growing clonal cells is determined, and suggests a possible active phenomenon of spontaneous reprogramming in fast-growing clonal cells. The expression pattern of these critical molecular markers could be indicative of the competence of BMSCs. For this reason, the expression pattern of Sox2, Oct4 and cMyc at various passages in heterogeneous BMSC populations and tissue-derived cells (osteoblasts and chondrocytes) was investigated by real-time PCR and immunofluorescence staining. A strong nuclear staining was observed for Sox2, Oct4 and cMyc, which gradually weakened, accompanied by cytoplasmic translocation, after several passages. The mRNA and protein expression of Sox2, Oct4 and cMyc peaked at the third passage for osteoblasts, chondrocytes and BMSCs, and declined with each subsequent passage, pointing to a possible mechanism of spontaneous reprogramming. This study proposes that the progressive decline in proliferation potential and multipotentiality associated with increased passaging of BMSCs in vitro might be a consequence of the loss of these pro-pluripotency factors. We therefore hypothesise that the expression of these master genes is not an intrinsic cell function, but rather an outcome of the interaction of the cells with their microenvironment; this is evident from the fact that, when removed from their in vivo microenvironment, BMSCs undergo a rapid loss of stemness after only a few passages. One of the most interesting aspects of this study was the integration of factors into the culture conditions which, to some extent, mimicked the in vivo microenvironmental niche of the BMSCs. A number of studies have established that the cellular niche is not an inert tissue component but is of prime importance. The total sum of stimuli from the microenvironment underpins the complex interplay of regulatory mechanisms which control multiple functions in stem cells, most importantly stem cell renewal. Therefore, well-characterised factors which affect BMSC characteristics, such as fibronectin (FN) coating, and morphogens such as FGF2 and BMP4, were incorporated into the cell culture conditions. The experimental set-up was designed to provide insight into the expression pattern of the stem cell-related transcription factors Sox2, cMyc and Oct4 in BMSCs with respect to passaging and changes in culture conditions.
Induction of these pluripotency markers in somatic cells by retroviral transfection has been shown to confer pluripotency and an ESC-like state. Our study demonstrated that all treatments could transiently induce the expression of Sox2, cMyc and Oct4 and favourably affect the proliferation potential of BMSCs. The combined effect of these treatments was able to induce and retain the endogenous nuclear expression of stem cell transcription factors in BMSCs over an extended number of in vitro passages. Our results therefore suggest that the transient induction and manipulation of the endogenous expression of transcription factors critical for stemness can be achieved by modulating the culture conditions, the benefit of which is to circumvent the need for genetic manipulation. In summary, this study has explored the role of BMSCs in the diseased state of osteoarthritis by employing transcriptional profiling along with SI. In particular, this study pioneered the use of primary cells for generating novel antibodies by SI. We established that somatic cells and BMSCs have a basal level of expression of pluripotency markers. Furthermore, our study indicates that the intrinsic signalling mechanisms of BMSCs are intimately linked with extrinsic cues from the microenvironment, and that these signals appear to be critical for retaining the expression of genes that maintain cell stemness in long-term in vitro culture. This project provides a basis for developing the “artificial niche” required for reversion of commitment and maintenance of BMSCs in their uncommitted homeostatic state.