944 results for transient thermal distortion analysis


Relevance: 30.00%

Abstract:

The thermal stability and thermal decomposition pathways of synthetic iowaite have been determined using thermogravimetry in conjunction with evolved gas mass spectrometry. Chemical analysis showed the formula of the synthesised iowaite to be Mg6.27Fe1.73(Cl)1.07(OH)16(CO3)0.33·6.1H2O, and X-ray diffraction confirmed the layered structure. Dehydration of the iowaite occurred at 35 and 79°C, and dehydroxylation at 254 and 291°C; both steps were associated with the loss of CO2. Hydrogen chloride gas was evolved in two steps, at 368 and 434°C. The products of the thermal decomposition were MgO and the spinel MgFe2O4. Experimentally, it proved difficult to prevent CO2 from being incorporated into the interlayer during synthesis of the iowaite compound, and in this respect the synthesised iowaite resembles the natural mineral.

Relevance: 30.00%

Abstract:

Samples of a synthetic Fe–Mn glycerol alkoxide were subjected to controlled heating conditions and examined by IR absorption spectroscopy. The same material was also studied by infrared emission spectroscopy (IRES) while being heated in situ from 100 to 600°C. The spectral techniques employed in this contribution, especially IRES, show that the thermal treatments produce ferromagnetic oxides (manganese ferrite) between 350 and 400°C. Some further spectral changes are seen at higher temperatures.

Relevance: 30.00%

Abstract:

Thermal transformations of natural calcium oxalate dihydrate, known in mineralogy as weddellite, have been studied using a combination of Raman microscopy and infrared emission spectroscopy. The vibrational spectroscopic data were complemented by high-resolution thermogravimetric analysis combined with evolved gas mass spectrometry. TG–MS identified three mass loss steps, at 114, 422 and 592 °C. In the first mass loss step only water is evolved; in the second and third steps carbon dioxide is evolved. The combination of Raman microscopy and a thermal stage clearly identifies the changes in molecular structure with thermal treatment. Weddellite is the stable phase up to the pre-dehydration temperature of 97 °C. At this temperature whewellite (calcium oxalate monohydrate) forms, and above 114 °C the phase is the anhydrous calcium oxalate. Above 422 °C, calcium carbonate is formed. Infrared emission spectroscopy shows that this mineral decomposes at around 650 °C. Changes in the position and intensity of the C=O and C–C stretching vibrations in the Raman spectra indicate the temperature ranges at which these phase changes occur.

Relevance: 30.00%

Abstract:

High-resolution thermogravimetry has been used to evaluate the carbonaceous content of a commercial sample of single-walled carbon nanotubes (SWNTs). The SWNT content of the sample was found to be at least 77 mass %, a result supported by images obtained with scanning and transmission electron microscopy (SEM and TEM). Furthermore, the influence of SWNT addition on the thermal stability of graphite in SWNT/graphite mixtures of different proportions was investigated. The graphite stability decreased with increasing SWNT content over the whole composition range. This behavior could be due to the close contact between these carbonaceous species, as determined by SEM analysis.

Relevance: 30.00%

Abstract:

All relevant international standards for determining whether a metallic rod is flammable in oxygen utilize some form of “promoted ignition” test. In this test, for a given pressure, an overwhelming ignition source is coupled to the end of the test sample and the designation flammable or nonflammable is based upon the amount burned, that is, a burn criterion. It is documented that (1) the initial temperature of the test sample affects its burning, both (a) with regard to the pressure at which the sample will support burning (threshold pressure) and (b) the rate at which the sample is melted (regression rate of the melting interface); and (2) the igniter used affects the test sample by heating it adjacent to the igniter as ignition occurs. Together, these facts make it necessary to ensure, if a metallic material is to be considered flammable at the conditions tested, that the burn criterion excludes any region of the test sample that may have undergone preheating during the ignition process. A two-dimensional theoretical model was developed to describe the transient heat transfer occurring, and the resultant temperatures produced, within this system. Several metals (copper, aluminum, iron, and stainless steel) and ignition promoters (magnesium, aluminum, and Pyrofuze®) were evaluated for a range of oxygen pressures between 0.69 MPa (100 psia) and 34.5 MPa (5,000 psia). A MATLAB® program was utilized to solve the developed model, which was validated against (1) a published solution for a similar system and (2) experimental data obtained during actual tests at the National Aeronautics and Space Administration White Sands Test Facility. The validated model successfully predicts temperatures within the test samples, with agreement between model and experiment increasing as test pressure increases and/or distance from the promoter increases. Oxygen pressure and test sample thermal diffusivity were shown to have the largest effect on the results.
In all cases evaluated, there is no significant preheating (above about 38°C/100°F) occurring at distances greater than 30 mm (1.18 in.) during the time the ignition source is attached to the test sample. This validates a distance of 30 mm (1.18 in.) above the ignition promoter as a burn length upon which a definition of flammable can be based for inclusion in relevant international standards (that is, burning past this length will always be independent of the ignition event for the ignition promoters considered here). KEYWORDS: promoted ignition, metal combustion, heat conduction, thin fin, promoted combustion, burn length, burn criteria, flammability, igniter effects, heat affected zone.
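The preheating argument above lends itself to a simple numerical illustration. The sketch below is not the authors' two-dimensional MATLAB model: it is a minimal one-dimensional explicit finite-difference analogue, with all material values assumed for illustration, showing why preheating dies off rapidly with distance from the heated end in a low-diffusivity metal.

```python
# Hypothetical 1-D analogue of the transient "thin fin" preheating problem.
# Not the authors' 2-D model: an explicit finite-difference sketch with
# assumed, round-number material values, for illustration only.

def rod_temperatures(alpha, dx, dt, n_nodes, t_end, t_hot, t_ambient):
    """Nodal temperatures (deg C) along a rod after t_end seconds, with one
    end held at t_hot (the promoter end) and the far end at t_ambient."""
    r = alpha * dt / dx**2              # grid Fourier number
    assert r <= 0.5, "explicit scheme unstable"
    T = [t_ambient] * n_nodes
    T[0] = t_hot                        # end heated by the ignition promoter
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, n_nodes - 1):
            Tn[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1])
        T = Tn
    return T

# Stainless-steel-like diffusivity (~4e-6 m^2/s), 60 mm rod, 1 mm nodes,
# promoter end at an assumed 1400 deg C for a 10 s ignition event:
T = rod_temperatures(alpha=4e-6, dx=1e-3, dt=0.1, n_nodes=61,
                     t_end=10.0, t_hot=1400.0, t_ambient=20.0)
# T[30] is the temperature 30 mm from the promoter; for a low-diffusivity
# metal it stays close to ambient, consistent with a 30 mm burn length.
```

Raising `alpha` toward a copper-like value (with a correspondingly smaller `dt` for stability) extends the preheated zone along the rod, which is the thermal-diffusivity effect the abstract highlights.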

Relevance: 30.00%

Abstract:

The glass transition temperature of a spaghetti sample was measured by thermal and rheological methods as a function of water content.

Relevance: 30.00%

Abstract:

The transition of cubic indium hydroxide to cubic indium oxide has been studied by thermogravimetric analysis complemented by hot-stage Raman spectroscopy. Thermal analysis shows that the transition of In(OH)3 to In2O3 occurs at 219°C. The structure and morphology of In(OH)3, synthesised using a soft chemical route at low temperatures, were confirmed by X-ray diffraction and scanning electron microscopy. A topotactic relationship exists between the micro/nano-cubes of In(OH)3 and In2O3. The Raman spectrum of In(OH)3 is characterised by an intense sharp band at 309 cm-1 attributed to the ν1 In–O symmetric stretching mode, bands at 1137 and 1155 cm-1 attributed to In–OH δ deformation modes, and bands at 3083, 3215, 3123 and 3262 cm-1 assigned to OH stretching vibrations. Upon thermal treatment of In(OH)3, new Raman bands attributed to In2O3 are observed at 125, 295, 488 and 615 cm-1. Changes in the structure of In(OH)3 with thermal treatment are readily followed by hot-stage Raman spectroscopy.

Relevance: 30.00%

Abstract:

Bayer hydrotalcites prepared using the seawater neutralisation (SWN) process of Bayer liquors are characterised using X-ray diffraction and thermal analysis techniques. The Bayer hydrotalcites are synthesised at four different temperatures (0, 25, 55, 75 °C) to determine the effect on the thermal stability of the hydrotalcite structure, and to identify other precipitates that form at these temperatures. The interlayer distance increased with increasing synthesis temperature up to 55 °C, and then decreased by 0.14 Å for Bayer hydrotalcites prepared at 75 °C. The three mineralogical phases identified in this investigation are: (1) Bayer hydrotalcite, (2) calcium carbonate species, and (3) hydromagnesite. The DTG curve can be separated into four decomposition steps: (1) the removal of adsorbed water and free interlayer water in hydrotalcite (30–230 °C), (2) the dehydroxylation and decarbonation of hydrotalcite (250–400 °C), (3) the decarbonation of hydromagnesite (400–550 °C), and (4) the decarbonation of aragonite (550–650 °C).

Relevance: 30.00%

Abstract:

The manganese-based hydrotalcite known as charmarite, Mn4Al2(OH)12CO3•3H2O, has been synthesised with Mn/Al ratios from 4:1 to 2:1. Low concentrations of manganese oxide, rhodochrosite and bayerite impurities were also produced during the synthesis. The thermal stability of charmarite was investigated using thermogravimetry. The manganese hydrotalcite decomposed in stages, with mass loss steps at 211, 305 and 793°C. The product of the thermal decomposition was amorphous material mixed with manganese oxide. A comparison is made with the thermal decomposition of the Mg/Al hydrotalcite, from which it is concluded that synthetic charmarite is slightly less stable than hydrotalcite.

Relevance: 30.00%

Abstract:

The Queensland University of Technology (QUT) allows the presentation of theses for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of ten published/submitted papers and book chapters, of which nine have been published and one is under review. This project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of investigating multilevel topologies for high quality and high power applications, with specific emphasis on renewable energy systems. The rapid evolution of renewable energy within the last several years has resulted in the design of efficient power converters suitable for medium and high-power applications such as wind turbine and photovoltaic (PV) systems. Today, the industrial trend is moving away from heavy and bulky passive components to power converter systems that use more and more semiconductor elements controlled by powerful processor systems. However, it is hard to connect traditional converters to high and medium voltage grids, as a single power switch cannot withstand the high voltage. For these reasons, a new family of multilevel inverters has appeared as a solution for working with higher voltage levels. Besides this important feature, multilevel converters have the capability to generate stepped waveforms. Consequently, in comparison with conventional two-level inverters, they present lower switching losses, lower voltage stress across loads, lower electromagnetic interference (EMI) and higher quality output waveforms. These properties enable the connection of renewable energy sources directly to the grid without using expensive, bulky, heavy line transformers. Additionally, they minimize the size of the passive filter and increase the durability of electrical devices.
However, multilevel converters have so far been utilised only in very particular applications, mainly due to the structural limitations, high cost and complexity of the multilevel converter system and its control. New developments in the fields of power semiconductor switches and processors will favor multilevel converters for many other fields of application. The main application for the multilevel converter presented in this work is the front-end power converter in renewable energy systems. Diode-clamped and cascade converters are the most common types of multilevel converters, widely used in different renewable energy system applications. However, some drawbacks – such as capacitor voltage imbalance, component count, and complexity of the control system – still exist, and these are investigated in the framework of this thesis. Various simulations are undertaken using software simulation tools and used to study different cases. The feasibility of the developments is underlined with a series of experimental results. This thesis is divided into two main sections. The first section focuses on solving the capacitor voltage imbalance for a wide range of applications, and on decreasing the complexity of the control strategy on the inverter side. The idea of using sharing switches at the output structure of the DC-DC front-end converters is proposed to balance the series DC link capacitors. A new family of multioutput DC-DC converters is proposed for renewable energy systems connected to the DC link voltage of diode-clamped converters. The main objective of this type of converter is the sharing of the total output voltage into several series voltage levels using sharing switches. This solves the problems associated with capacitor voltage imbalance in diode-clamped multilevel converters.
These converters adjust the variable and unregulated DC voltage generated by renewable energy systems (such as PV) to the desirable series multiple voltage levels at the inverter DC side. A multi-output boost (MOB) converter, with one inductor and series output voltage, is presented. This converter is suitable for renewable energy systems based on diode-clamped converters because it boosts the low output voltage and provides the series capacitor at the output side. A simple control strategy using cross voltage control with internal current loop is presented to obtain the desired voltage levels at the output voltage. The proposed topology and control strategy are validated by simulation and hardware results. Using the idea of voltage sharing switches, the circuit structure of different topologies of multi-output DC-DC converters – or multi-output voltage sharing (MOVS) converters – have been proposed. In order to verify the feasibility of this topology and its application, steady state and dynamic analyses have been carried out. Simulation and experiments using the proposed control strategy have verified the mathematical analysis. The second part of this thesis addresses the second problem of multilevel converters: the need to improve their quality with minimum cost and complexity. This is related to utilising asymmetrical multilevel topologies instead of conventional multilevel converters; this can increase the quality of output waveforms with a minimum number of components. It also allows for a reduction in the cost and complexity of systems while maintaining the same output quality, or for an increase in the quality while maintaining the same cost and complexity. Therefore, the asymmetrical configuration for two common types of multilevel converters – diode-clamped and cascade converters – is investigated. 
Also, as well as addressing the maximisation of the output voltage resolution, some technical issues – such as adjacent switching vectors – should be taken into account in asymmetrical multilevel configurations to keep the total harmonic distortion (THD) and switching losses to a minimum. Thus, the asymmetrical diode-clamped converter is proposed. An appropriate asymmetrical DC link arrangement is presented for four-level diode-clamped converters by keeping adjacent switching vectors. In this way, five-level inverter performance is achieved for the same level of complexity of the four-level inverter. Dealing with the capacitor voltage imbalance problem in asymmetrical diode-clamped converters has inspired the proposal for two different DC-DC topologies with a suitable control strategy. A Triple-Output Boost (TOB) converter and a Boost 3-Output Voltage Sharing (Boost-3OVS) converter connected to the four-level diode-clamped converter are proposed to arrange the proposed asymmetrical DC link for the high modulation indices and unity power factor. Cascade converters have shown their abilities and strengths in medium and high power applications. Using asymmetrical H-bridge inverters, more voltage levels can be generated in output voltage with the same number of components as the symmetrical converters. The concept of cascading multilevel H-bridge cells is used to propose a fifteen-level cascade inverter using a four-level H-bridge symmetrical diode-clamped converter, cascaded with classical two-level H-bridge inverters. A DC voltage ratio of cells is presented to obtain maximum voltage levels on output voltage, with adjacent switching vectors between all possible voltage levels; this can minimize the switching losses. This structure can save five isolated DC sources and twelve switches in comparison to conventional cascade converters with series two-level H-bridge inverters.
To increase the quality of the presented hybrid topology with a minimum number of components, a new cascade inverter is verified by cascading an asymmetrical four-level H-bridge diode-clamped inverter. An inverter with nineteen-level performance was achieved. This synthesizes more voltage levels with lower voltage and current THD than a symmetrical diode-clamped inverter of the same configuration and equivalent number of power components. Two different predictive current control methods for the switching state selection are proposed to minimise either losses or voltage THD in hybrid converters. High voltage spikes at switching time in experimental results, and investigation of the diode-clamped inverter structure, raised another problem associated with high-level, high-voltage multilevel converters. Power switching components with fast switching, combined with hard-switched converters, produce high di/dt during turn-off time. Thus, stray inductance of interconnections becomes an important issue, raising overvoltage and EMI issues that scale with the number of components. A planar busbar is a good candidate to reduce interconnection inductance in high power inverters compared with cables. The effect of different transient current loops on the busbar physical structure of high-voltage, high-level diode-clamped converters is highlighted. Design considerations for a proper planar busbar are also presented to optimise the overall design of diode-clamped converters.
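The central claim above, that more output levels give lower THD with a stepped waveform, can be checked with a toy numerical sketch. This is not taken from the thesis: the waveform below is an idealised quantised sine, ignoring modulation strategy and switching effects.

```python
import math

def stepped_wave(levels, n=1024):
    """One period of a sine quantised to an odd number of equally spaced
    levels: an idealised multilevel-inverter output voltage (amplitude 1)."""
    step = 2.0 / (levels - 1)
    return [round(math.sin(2 * math.pi * k / n) / step) * step
            for k in range(n)]

def thd(samples, n_harmonics=50):
    """Total harmonic distortion from a naive DFT of one period."""
    n = len(samples)
    def mag(h):
        re = sum(s * math.cos(2 * math.pi * h * k / n) for k, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * h * k / n) for k, s in enumerate(samples))
        return math.hypot(re, im)
    fundamental = mag(1)
    harmonics = math.sqrt(sum(mag(h) ** 2 for h in range(2, n_harmonics)))
    return harmonics / fundamental

# A three-level output vs. a nine-level stepped waveform: the extra steps
# track the sine more closely, so the harmonic distortion drops.
thd3, thd9 = thd(stepped_wave(3)), thd(stepped_wave(9))
```

The same comparison motivates the asymmetrical configurations above: more distinct levels from the same component count means a lower-THD staircase.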

Relevance: 30.00%

Abstract:

Thermogravimetric analysis–mass spectrometry, X-ray diffraction and scanning electron microscopy (SEM) were used to characterize eight kaolinite samples from China. The results show that the thermal decomposition occurs in three main steps: (a) desorption of water below 100 °C, (b) dehydration at about 225 °C, and (c) well-defined dehydroxylation at around 450 °C. Decarbonization also took place at 710 °C due to the decomposition of calcite impurity in the kaolin. The dehydroxylation temperature of kaolinite is found to be influenced by the degree of disorder of the kaolinite structure, and the gases evolved during decomposition vary because of the differing amounts and kinds of impurities. The mass spectra show that CO2, from the interlayer carbonate of the calcite impurity and from organic carbon, is released at around 225, 350 and 710 °C in the kaolinite samples.

Relevance: 30.00%

Abstract:

Two kinds of coal-bearing kaolinite from China were analysed by X-ray diffraction (XRD), thermogravimetric analysis–mass spectrometry (TG-MS) and infrared emission spectroscopy (IES). Thermal decomposition occurs in a series of steps attributed to: (a) desorption of water, at 68 °C for the Datong coal-bearing strata kaolinite and 56 °C for the Xiaoxian kaolinite, with mass losses of 0.36% and 0.51%; (b) decarbonization, at 456 °C for the Datong kaolinite and 431 °C for the Xiaoxian kaolinite; and (c) dehydroxylation, which takes place in two steps, at 589 and 633 °C for the Datong kaolinite and at 507 and 579 °C for the Xiaoxian kaolinite. The minerals were further characterised by IES. Well-defined hydroxyl stretching bands at around 3695, 3679, 3652 and 3625 cm-1 are observed; at 650 °C all intensity in these bands is lost, in harmony with the thermal analysis results. Characteristic functional groups from coal are observed at 1918, 1724 and 1459 cm-1. The intensity of these bands decreases with thermal treatment and is lost by 700 °C.

Relevance: 30.00%

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For an ideally operating power system these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure system stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable, operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard.
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can be then cross-referenced with other detection methods to provide improved detection benchmarks.
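The rationale behind the Energy Based Detector can be sketched numerically. This is a toy model, not the thesis implementation: a single noiseless decaying-sinusoid mode, with assumed frequency and damping values, rather than the random disturbances and statistical threshold described above.

```python
import math

def ringdown(damping, freq=1.5, dur=20.0, fs=10.0):
    """Samples of exp(-d*t) * sin(2*pi*f*t): an idealised transient
    response ("ring-down") of a single oscillatory power-system mode."""
    return [math.exp(-damping * k / fs) * math.sin(2 * math.pi * freq * k / fs)
            for k in range(int(dur * fs))]

def disturbance_energy(samples):
    """Total energy of the observed modal disturbance."""
    return sum(s * s for s in samples)

# A well-damped mode rings down within a few seconds; a poorly damped one
# persists, so its disturbance energy is far larger. A threshold on this
# energy is what flags a sudden modal deterioration.
e_ok = disturbance_energy(ringdown(damping=0.5))    # healthy damping
e_bad = disturbance_energy(ringdown(damping=0.05))  # deteriorated damping
```

In the thesis the disturbances are random and the alarm threshold is derived from a statistical model of that energy; the deterministic example only shows why energy separates the two cases.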

Relevance: 30.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
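As a minimal illustration of how a wavelet transform reduces interpixel redundancy (a one-level 1-D Haar step, not the specific 2-D wavelet-packet structure designed in the thesis; the sample "scanline" values are invented):

```python
def haar_step(x):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail)."""
    avg = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

# On a typical image scanline, neighbouring pixels are similar, so the
# detail coefficients come out small and are cheap to quantise: this is
# the interpixel-redundancy reduction the compression relies on.
row = [10, 12, 14, 15, 60, 61, 59, 58]   # invented pixel values
approx, detail = haar_step(row)
```

Most of the signal energy concentrates in `approx`, while `detail` stays near zero; the thesis applies the same principle in 2-D with a fixed wavelet-packet decomposition tuned to fingerprint statistics.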

Relevance: 30.00%

Abstract:

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion - significant since radiometric distortion is a problem which commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
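The robustness of the census transform to radiometric distortion, which underlies the improved match reliability reported above, can be seen in a small sketch (a 3x3 census window in plain Python; the tiny image patches are invented, and this is illustrative only, not the thesis implementation):

```python
def census(img, r=1):
    """Census transform: each pixel becomes a bit string recording which
    neighbours in a (2r+1)x(2r+1) window are darker than the centre."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | (img[y + dy][x + dx] < img[y][x])
            out[y][x] = bits
    return out

def hamming(a, b):
    """Matching cost between two census codes."""
    return bin(a ^ b).count("1")

# A constant gain/offset (radiometric distortion) preserves the ordering
# of pixel intensities, so the census codes of corresponding pixels are
# identical and their Hamming distance is zero:
left = [[3, 7, 2], [9, 5, 1], [4, 8, 6]]
right = [[2 * v + 10 for v in row] for row in left]  # brighter, higher-contrast copy
cost = hamming(census(left)[1][1], census(right)[1][1])  # -> 0
```

Because the transform depends only on intensity orderings within the window, any monotonic brightness change leaves the codes, and hence the Hamming matching costs, unchanged; this is also what makes the method cheap to implement in hardware.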