276 results for Dip coating techniques
Abstract:
Transition metal oxides are functional materials with advanced applications in many areas, owing to their diverse properties (optical, electrical, magnetic, etc.), hardness, thermal stability and chemical resistance. Novel applications of the nanostructures of these oxides are attracting significant interest as new synthesis methods are developed and new structures are reported. Hydrothermal synthesis is an effective process for preparing various delicate metal oxide structures on scales from a few to tens of nanometres, in particular the highly dispersed intermediate structures that can hardly be obtained through pyro-synthesis. In this thesis, a range of new metal oxide (stable and metastable titanate and niobate) nanostructures, namely nanotubes and nanofibres, were synthesised via a hydrothermal process. Further structural modifications were conducted, and potential applications in catalysis, photocatalysis, adsorption and the construction of ceramic membranes were studied. The morphology evolution during the hydrothermal reaction between Nb2O5 particles and concentrated NaOH was monitored. The study demonstrates that by optimising the reaction parameters (temperature, amount of reactants), one can obtain a variety of nanostructured solids, from intermediate-phase niobate bars and fibres to stable-phase cubes. Trititanate (Na2Ti3O7) nanofibres and nanotubes were obtained by the hydrothermal reaction between TiO2 powders or a titanium compound (e.g. TiOSO4·xH2O) and concentrated NaOH solution, by controlling the reaction temperature and NaOH concentration. The trititanate possesses a layered structure, and the Na ions that exist between the negatively charged titanate layers are exchangeable with other metal ions or H+ ions. This ion exchange has a crucial influence on the phase transition of the exchanged products.
The exchange of the sodium ions in the titanate with H+ ions yields protonated titanate (H-titanate), and subsequent phase transformation of the H-titanate enables various TiO2 structures with retained morphology. H-titanate, as either nanofibres or nanotubes, can be converted to pure TiO2(B), pure anatase, or mixed TiO2(B) and anatase phases by controlled calcination or by a two-step process of acid treatment and subsequent calcination. Controlled calcination of the sodium titanate, in contrast, yields new titanate structures (metastable titanate of formula Na1.5H0.5Ti3O7, with retained fibril morphology) that can be used for the removal of radioactive ions and heavy metal ions from water. The structures and morphologies of the metal oxides were characterised by advanced techniques. Titania nanofibres of mixed anatase and TiO2(B) phases, pure anatase and pure TiO2(B) were obtained by calcining H-titanate nanofibres at different temperatures between 300 and 700 °C. The fibril morphology was retained after calcination, which makes the fibres suitable for transmission electron microscopy (TEM) analysis. TEM analysis revealed that in the mixed-phase structure the interfaces between the anatase and TiO2(B) phases are not random contacts between engaged crystals of the two phases, but form from well-matched lattice planes of the two phases. For instance, the (101) planes of anatase and the (101) planes of TiO2(B) have similar d-spacings (~0.18 nm), and they join together to form a stable interface. The interfaces between the two phases act as a one-way valve that permits the transfer of photogenerated charge from anatase to TiO2(B). This reduces the recombination of photogenerated electrons and holes in anatase, enhancing the activity for photocatalytic oxidation.
Therefore, the mixed-phase nanofibres exhibited higher photocatalytic activity for the degradation of sulforhodamine B (SRB) dye under ultraviolet (UV) light than nanofibres of either pure phase alone, or mechanical mixtures (which have no interfaces) of the two pure-phase nanofibres with a similar phase composition. This verifies the theory that the difference between the conduction band edges of the two phases can drive charge transfer from one phase to the other, which effectively separates the photogenerated charges and thus facilitates the redox reactions involving them. Such an interface structure facilitates charge transfer across the interfaces. The knowledge acquired in this study is important not only for the design of efficient TiO2 photocatalysts but also for understanding the photocatalysis process. Moreover, the fibril titania photocatalysts have a great advantage over nanoparticles of the same scale in that they can be separated from a liquid for reuse by filtration, sedimentation or centrifugation. The surface structure of TiO2 also plays a significant role in catalysis and photocatalysis. Four types of large-surface-area TiO2 nanotubes with different phase compositions (labelled NTA, NTBA, NTMA and NTM) were synthesised by calcination and acid treatment of the H-titanate nanotubes. Using in situ FTIR emission spectroscopy (IES), the desorption and re-adsorption of surface OH groups on the oxide surface can be tracked. In this work, the surface OH-group regeneration ability of the TiO2 nanotubes was investigated. The abilities of the four samples were distinctly different, following the order NTA > NTBA > NTMA > NTM.
The same order was observed for the catalytic activity when the samples served as photocatalysts for the decomposition of the synthetic dye SRB under UV light, as supports of gold (Au) catalysts (where gold particles were loaded by a colloid-based method) for the photodecomposition of formaldehyde under visible light, and for the catalytic oxidation of CO at low temperatures. Therefore, the ability of TiO2 nanotubes to generate surface OH groups is an indicator of their catalytic activity. The reason behind the correlation is that oxygen vacancies at bridging O2- sites of the TiO2 surface can generate surface OH groups, and these groups facilitate the adsorption and activation of O2 molecules, which is the key step of the oxidation reactions. A structure for the oxygen vacancies at bridging O2- sites is proposed. A new mechanism for photocatalytic formaldehyde decomposition with the Au-TiO2 catalysts is also proposed: the visible light absorbed by the gold nanoparticles, due to the surface plasmon resonance effect, induces transitions of the 6sp electrons of gold to high energy levels. These energetic electrons can migrate to the conduction band of TiO2, where they are seized by oxygen molecules. Meanwhile, the gold nanoparticles capture electrons from the formaldehyde molecules adsorbed on them because of gold's high electronegativity. O2 adsorbed on the TiO2 support surface is the major electron acceptor; the more O2 adsorbed, the higher the oxidation activity the photocatalyst will exhibit. The last part of this thesis demonstrates two innovative applications of the titanate nanostructures. Firstly, trititanate and metastable titanate (Na1.5H0.5Ti3O7) nanofibres are used as intelligent adsorbents for the removal of radioactive cations and heavy metal ions, utilising their ion-exchange ability, deformable layered structure and fibril morphology.
Environmental contamination with radioactive ions and heavy metal ions poses a serious threat to the health of a large part of the population. Treatment of the wastes is needed to produce a waste product suitable for long-term storage and disposal. The ion-exchange ability of the layered titanate structure permits the adsorption of divalent toxic cations (Sr2+, Ra2+, Pb2+) from aqueous solution. More importantly, the adsorption is irreversible: the deformation of the structure induced by the strong interaction between the adsorbed divalent cations and the negatively charged TiO6 octahedra results in permanent entrapment of the toxic cations in the fibres, so that the toxic ions can be safely deposited. Compared to conventional clay and zeolite sorbents, the fibril adsorbents have a great advantage in that they can be readily dispersed into, and separated from, a liquid. Secondly, new-generation membranes were constructed using large titanate and small γ-alumina nanofibres as intermediate and top layers, respectively, on a porous alumina substrate via a spin-coating process. Compared to conventional ceramic membranes constructed from spherical particles, a ceramic membrane constructed from fibres permits high flux because of the large porosity of its separation layers. The voids in the separation layer determine the selectivity and flux of a separation membrane. When the sizes of the voids are similar (which means a similar selectivity of the separation layer), the flux passing through the membrane increases with the volume of the voids, which are the filtration passages. For the ideal and simplest texture, a mesh constructed from nanofibres 10 nm thick with a uniform pore size of 60 nm, the porosity is greater than 73.5%. In contrast, the porosity of a separation layer with the same pore size but constructed from metal oxide spherical particles, as in conventional ceramic membranes, is 36% or less.
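The porosity comparison above can be checked with a back-of-envelope calculation. The thesis does not spell out its geometric model, so the sketch below assumes an idealised square mesh of cylindrical fibres (two perpendicular fibre layers per unit cell) for the fibre membrane and random close packing for the spherical-particle layer; the function names and the cell geometry are illustrative assumptions, not the author's method.

```python
import math

def mesh_porosity(fibre_d, pore):
    # Idealised square mesh: two perpendicular layers of parallel
    # cylindrical fibres; unit cell pitch p = pore + fibre_d,
    # cell thickness = 2 * fibre_d (one fibre per layer).
    p = pore + fibre_d
    r = fibre_d / 2.0
    solid = 2 * math.pi * r ** 2 * p      # two fibre segments of length p
    cell = p * p * 2 * fibre_d            # unit cell volume
    return 1.0 - solid / cell

def sphere_packing_porosity(solid_fraction=0.64):
    # Random close packing of equal spheres fills ~64% of space,
    # leaving ~36% voids, consistent with the figure quoted above.
    return 1.0 - solid_fraction
```

With the quoted values (10 nm fibres, 60 nm pores), this mesh model gives a porosity of roughly 89%, comfortably above the 73.5% bound stated in the text, while the sphere packing gives 36%.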
The membrane constructed from titanate nanofibres and a layer of randomly oriented alumina nanofibres was able to filter out 96.8% of latex spheres 60 nm in size, while maintaining a high flux of 600 to 900 L m⁻² h⁻¹, more than 15 times higher than the conventional membrane reported in the most recent study.
Abstract:
Cardiovascular diseases refer to the class of diseases that involve the heart or blood vessels (arteries and veins). Examples of medical devices for treating cardiovascular diseases include ventricular assist devices (VADs), artificial heart valves and stents. Metallic biomaterials such as titanium and its alloys are commonly used in ventricular assist devices. However, titanium and its alloys show unacceptable thrombosis, which represents a major obstacle to be overcome. Polyurethane (PU) polymer has better blood compatibility and has been widely used in cardiovascular devices. Thus, one aim of the project was to coat a PU polymer onto a titanium substrate after increasing the substrate's surface roughness and surface functionality. Since the endothelium of a blood vessel has the most ideal non-thrombogenic properties, a further target of this research project was to grow an endothelial cell layer as a biological coating, based on the tissue engineering strategy. However, seeding endothelial cells on smooth PU coating surfaces is problematic because seeded cells that do not adhere to the PU surface are quickly lost. Thus, another aim of the project was to create a porous PU top layer on the dense PU pre-layer-coated titanium substrate. The method of preparing the porous PU layer was based on solvent casting/particulate leaching (SCPL) modified with centrifugation. Without the centrifugation step, the distribution of the salt particles was not uniform within the polymer solution, and the degree of interconnection between the salt particles was not well controlled. With the centrifugal treatment, the pore distribution became uniform and the pore interconnectivity was improved, even at a high polymer solution concentration (20%) with the maximal salt weight added to the polymer solution. The titanium surfaces were modified by alkali and heat treatment, followed by functionalisation using hydrogen peroxide.
A silane coupling agent was applied before the dense PU pre-layer and the porous PU top layer. The ability of the porous top layer to grow and retain endothelial cells was assessed through cell culture techniques. The bonding strengths of the PU coatings to the modified titanium substrates were measured and related to the surface morphologies. The outcome of the project is that it has laid a foundation for achieving the strategy of endothelialisation for the blood compatibility of medical devices. This thesis is divided into eight chapters. Chapter 2 describes the current state of the art in surface modification for cardiovascular devices such as ventricular assist devices (VADs). It also analyses the pros and cons of existing coatings, particularly in the context of this research. The surface coatings for VADs have evolved from early organic/inorganic (passive) coatings, to bioactive coatings (e.g. biomolecules), to cell-based coatings. Based on the commercial applications and the potential of the coatings, the review focuses on the following six types of coatings: (1) titanium nitride (TiN) coatings, (2) diamond-like carbon (DLC) coatings, (3) 2-methacryloyloxyethyl phosphorylcholine (MPC) polymer coatings, (4) heparin coatings, (5) textured surfaces, and (6) endothelial cell linings. Chapter 3 reviews polymer scaffolds and one relevant fabrication method. In tissue engineering, the function of a polymeric material is to provide a three-dimensional architecture (scaffold), typically used to accommodate transplanted cells and to guide their growth and the regeneration of tissue. The success of these systems depends on the design of the tissue engineering scaffolds. Chapter 4 describes chemical surface treatments for titanium and titanium alloys that increase the bond strength to polymer by altering the substrate surface, for example by increasing surface roughness or changing surface chemistry.
The nature of the surface treatment prior to bonding is found to be a major factor controlling the bonding strength. Increasing the surface roughness increases the surface area, which allows the adhesive to flow in and around the irregularities on the surface to form a mechanical bond. Changing the surface chemistry also results in the formation of a chemical bond. Chapter 5 shows that bond strengths between titanium and polyurethane could be significantly improved by surface treating the titanium prior to bonding. Alkaline heat treatment and H2O2 treatment were applied to change the surface roughness and the surface chemistry of the titanium. Surface treatment increases the bond strength by altering the substrate surface in a number of ways, including increasing the surface roughness and changing the surface chemistry. Chapter 6 deals with the characterisation of the polyurethane scaffolds, which were fabricated using an enhanced solvent casting/particulate (salt) leaching (SCPL) method developed for preparing three-dimensional porous scaffolds for cardiac tissue engineering. The enhanced method combines a conventional SCPL method with a centrifugation step, the centrifugation being employed to improve the pore uniformity and interconnectivity of the scaffolds. It is shown that the enhanced SCPL method and a collagen coating resulted in a spatially uniform distribution of cells throughout the collagen-coated PU scaffolds. In Chapter 7, the enhanced SCPL method is used to form porous features on the polyurethane-coated titanium substrate. The cavities anchored the endothelial cells, allowing them to remain on the blood-contacting surfaces. It is shown that the surface porosities created by the enhanced SCPL may be useful in forming a stable endothelial layer on the blood-contacting surface.
Chapter 8 finally summarises the entire work performed on the fabrication and analysis of the polymer-Ti bonding, the enhanced SCPL method and the PU microporous surface on the metallic substrate. It then outlines the possibilities for future work and research in this area.
Abstract:
Many studies in the areas of project management and social networks have identified the significance of project knowledge transfer within and between projects. However, only a few studies have examined intra- and inter-project knowledge transfer activities. Knowledge in projects can be transferred via face-to-face interactions on the one hand, and via IT-based tools on the other. Although companies have allocated many resources to IT tools, it has been found that these are not always effectively utilised, and people prefer to look for knowledge through social, face-to-face interactions. This paper explores how to effectively leverage the two alternative knowledge transfer techniques, face-to-face interaction and IT-based tools, to facilitate knowledge transfer and enhance knowledge creation for intra- and inter-project knowledge transfer. The paper extends previous research on the relationships between and within teams by examining a project's external and internal knowledge networks concurrently. Qualitative social network analysis, applied to a case study within a small-to-medium enterprise, was used to examine the knowledge transfer activities within and between projects, and to investigate knowledge transfer techniques. This paper demonstrates the significance of overlapping employees, who work simultaneously on two or more projects, and their impact on facilitating knowledge transfer between projects within a small-to-medium organisation. This research is also crucial to gaining a better understanding of the different knowledge transfer techniques used for intra- and inter-project knowledge exchange. The research provides recommendations on how to achieve better knowledge transfer within and between projects in order to fully utilise a project's knowledge and achieve better project performance.
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and it enables recognition of a subject at a distance over a telephone channel, wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate the classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. The research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier.
Further, the domains for the modelling of session variation were contrasted, revealing a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques, owing to the similarities in how they achieve their objectives. The second theme saw the proposal of a novel model for session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all test utterances encountered during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. The selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of the impostor cohorts required in alternative techniques for speaker verification.
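The hybrid GMM mean supervector SVM classifier referred to above can be sketched in a few lines. The toy below is a simplification under stated assumptions: real ASV systems MAP-adapt each utterance's GMM from a universal background model so that component order stays consistent, whereas here a small GMM is fitted independently per utterance on synthetic features; the data, dimensions, component count and variable names are all illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def mean_supervector(features, n_components=4, seed=0):
    # Fit a small GMM to one utterance's feature frames and stack the
    # component means into one fixed-length "mean supervector".
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(features)
    return gmm.means_.flatten()

rng = np.random.default_rng(0)
# Synthetic stand-in data: target-speaker frames centred at +1,
# impostor frames centred at -1, 12-dimensional features.
target = [rng.normal(+1.0, 1.0, size=(200, 12)) for _ in range(5)]
impostor = [rng.normal(-1.0, 1.0, size=(200, 12)) for _ in range(5)]

X = np.array([mean_supervector(u) for u in target + impostor])
y = np.array([1] * 5 + [0] * 5)

# Discriminative speaker model: a margin-maximising hyperplane in
# supervector space, trained on positive (target) and negative
# (impostor) examples, as described in the abstract.
svm = SVC(kernel="linear").fit(X, y)
```

A new utterance is then verified by mapping it to its supervector and scoring it against the hyperplane; the per-utterance GMM fit here only works because the synthetic classes are well separated in every dimension.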
Abstract:
In this study, cell sheets comprising multilayered porcine bone marrow stromal cells (BMSCs) were assembled with fully interconnected scaffolds made from medical-grade polycaprolactone–calcium phosphate (mPCL–CaP) for the engineering of structural and functional bone grafts. The BMSC sheets were harvested from culture flasks and wrapped around pre-seeded composite scaffolds. The layered cell sheets integrated well with the scaffold/cell construct and remained viable, with mineralised nodules visible both inside and outside the scaffold for up to 8 weeks of culture. Cells within the constructs underwent classical in vitro osteogenic differentiation, with the associated elevation of alkaline phosphatase activity and bone-related protein expression. In vivo, two sets of cell-sheet-scaffold/cell constructs were transplanted under the skin of nude rats. The first set of constructs (5 × 5 × 4 mm) was assembled with BMSC sheets and cultured for 8 weeks before implantation. The second set of constructs (10 × 10 × 4 mm) was implanted immediately after assembly with BMSC sheets, with no further in vitro culture. For both groups, neocortical and well-vascularised cancellous bone formed within the constructs, with up to 40% bone volume. Histological and immunohistochemical examination revealed that the new bone tissue formed from the pool of seeded BMSCs, and that bone formation followed a predominantly endochondral pathway, with woven bone matrix subsequently maturing into fully mineralised compact bone exhibiting the histological markers of native bone. These findings demonstrate that large bone tissues similar to native bone can be regenerated by utilising BMSC sheet techniques in conjunction with composite scaffolds whose structures are optimised from mechanical, nutrient transport and vascularisation perspectives.
Abstract:
The seemingly exponential nature of technological change provides SMEs with a complex and challenging operational context. The development of infrastructures capable of supporting the wireless application protocol (WAP) and associated 'wireless' applications represents the latest generation of technological innovation, with potential appeal to SMEs and end-users alike. This paper aims to understand the mobile data technology needs of SMEs in a regional setting. The research was especially concerned with perceived needs across three market segments: non-adopters, partial adopters and full adopters of new technology. The research was exploratory in nature, as the phenomenon under scrutiny is relatively new and its uses unclear; focus groups were therefore conducted with each of the segments. The paper provides insights for business, industry and academics.
Abstract:
The process of compiling a studio vocal performance from many takes can often result in the performer producing a new complete performance once this new "best of" assemblage is heard back. This paper investigates the ways in which the physical process of recording can alter vocal performance techniques, in particular the establishment of a definitive melodic and rhythmic structure. Drawing on his many years of experience as a commercially successful producer, including the attainment of a Grammy award, the author analyses the process of producing a "credible" vocal performance in depth, with specific case studies and examples. The question of authenticity in rock and pop will also be discussed and, in this context, the uniqueness of the producer's role as critical arbiter: what gives the producer the authority to make such performance evaluations? Techniques for creating studio conditions that are conducive to vocal performance (the studio being in many ways a very unnatural performance environment) will be discussed, touching on areas such as the psycho-acoustic properties of headphone mixes, the avoidance of intimidatory practices, and a methodology for inducing the perception of a "familiar" acoustic environment.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some of those subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of the 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for the removal of outliers and the calculation of moving averages, as well as to data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to the derived physiological time-series variable sets alone, and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population; subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used for the evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to give slightly increased accuracy at markedly increased complexity. The use of the misclassification rate (MR) for model performance evaluation is influenced by the class distribution. This influence can be eliminated by considering the AUC or Kappa statistic, as well as by evaluating subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, the MR is 12.34. Feature selection reduces the MR to between 9.8 and 10.16, with the time-segmented summary data (dataset F) having an MR of 9.8 and the raw time-series summary data (dataset A) 9.92. However, for all datasets based on time-series data alone, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but the models derived from these subsets consist of one leaf only. The MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on variables derived from time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, the nearest neighbour method (NNge) and the support vector machine based method, SMO, have the highest MRs, of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables compared with the use of risk factors alone parallels recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used together as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
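The evaluation strategy described above (majority-class under-sampling for training, with Kappa and AUC complementing the class-sensitive misclassification rate) can be illustrated on synthetic data. The sketch below is not the thesis's pipeline: the dataset, the 9:1 class ratio and the logistic regression model are illustrative stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(1)

# Synthetic unbalanced dataset: 900 negatives, 100 positives,
# the two classes separated by one standard deviation per feature.
X_neg = rng.normal(0.0, 1.0, size=(900, 5))
X_pos = rng.normal(1.0, 1.0, size=(100, 5))
X = np.vstack([X_neg, X_pos])
y = np.array([0] * 900 + [1] * 100)

# Majority-class under-sampling: keep a random subset of negatives
# equal in size to the positive class, then train on the balanced set.
neg_idx = rng.choice(900, size=100, replace=False)
X_bal = np.vstack([X_neg[neg_idx], X_pos])
y_bal = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X_bal, y_bal)

# Evaluate on the full unbalanced set with metrics that are not
# dominated by the majority class, as the abstract advocates.
pred = model.predict(X)
kappa = cohen_kappa_score(y, pred)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
```

On such data a trivial "always negative" model would score 10% misclassification but zero Kappa, which is exactly why the class-insensitive metrics are needed.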
Abstract:
Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene taken from different perspectives. Practical applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem: locating corresponding points or features in two images. For this application, speed, reliability and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census transforms, were compared. Both the rank and the census transforms were found to improve the reliability of matching in the presence of radiometric distortion; this is significant, since radiometric distortion commonly arises in practice. They also have low computational complexity, making them amenable to fast hardware implementation. It was therefore decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation yielded a constraint which must be satisfied for a correct match; this was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to existing matching constraints, which have little theoretical basis. Experimental work with real and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs, and in all cases the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas, including the use of an image pyramid for match prediction and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results showed that the new algorithm is able to remove a large proportion of invalid matches and improve match accuracy.
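The two non-parametric transforms mentioned above can be sketched directly from their standard definitions: the rank transform replaces each pixel by the count of neighbours darker than it, and the census transform encodes those comparisons as a bit string (matched via Hamming distance). The window sizes below are illustrative, not the thesis's parameters.

```python
import numpy as np

def rank_transform(img, w=2):
    """Rank transform: each pixel becomes the count of neighbours in a
    (2w+1)x(2w+1) window whose intensity is below the centre pixel."""
    h, width = img.shape
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(w, h - w):
        for x in range(w, width - w):
            win = img[y - w:y + w + 1, x - w:x + w + 1]
            out[y, x] = np.count_nonzero(win < img[y, x])
    return out

def census_transform(img, w=1):
    """Census transform: encode each neighbour-vs-centre comparison as one
    bit, scanning the window in row-major order (centre pixel skipped)."""
    h, width = img.shape
    out = np.zeros((h, width), dtype=np.uint32)
    for y in range(w, h - w):
        for x in range(w, width - w):
            code = 0
            for dy in range(-w, w + 1):
                for dx in range(-w, w + 1):
                    if dy == 0 and dx == 0:
                        continue
                    code = (code << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = code
    return out

def hamming(a, b):
    """Matching cost between two census codes: number of differing bits."""
    return bin(int(a) ^ int(b)).count("1")
```

Because both transforms depend only on the ordering of intensities, not their absolute values, a gain or offset change between the two images (radiometric distortion) leaves the transformed images, and hence the matching costs, unchanged.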
Resumo:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals, a topic that plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered in Part I of this thesis. For FM signals, the approach of time-frequency analysis is considered in Part II. In Part I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) introduced significant advantages over other existing DPLLs, and in the last ten years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized only approximately, using a finite impulse response (FIR) digital filter; this realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme in place of the HT to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL to FSK demodulation is also considered.
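The algebraic idea behind replacing the Hilbert transformer with a time delay can be illustrated for a unit-amplitude sinusoid: the current sample and a sample delayed by a known lag suffice to recover the instantaneous phase through an arctan. This is only a sketch of the principle; the thesis's actual TDTL involves the full loop structure (sampler, digitally controlled oscillator and loop gains), not just this identity, and the phase lag omega*tau is signal-dependent, as noted above.

```python
import math

def phase_from_delay(s_now, s_delayed, omega_tau):
    """Recover the phase phi of a unit-amplitude sinusoid sin(phi) from the
    current sample s_now = sin(phi) and a delayed sample
    s_delayed = sin(phi - omega*tau), where omega*tau is the phase lag
    introduced by the delay tau at frequency omega.

    From sin(phi - wt) = sin(phi)cos(wt) - cos(phi)sin(wt):
        cos(phi) = (s_now*cos(wt) - s_delayed) / sin(wt)
    and phi follows from the four-quadrant arctan."""
    c = (s_now * math.cos(omega_tau) - s_delayed) / math.sin(omega_tau)
    return math.atan2(s_now, c)
```

Unlike the HT-based scheme, no FIR approximation is needed; the price is that the recovered phase depends on omega*tau, which is exactly the signal-dependent phase shift the TDTL analysis has to account for.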
This idea of replacing the HT by a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs in additive Gaussian noise has been characterized. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are important in both synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis, as it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain; multicomponent signals can be identified by multiple energy peaks in this domain. Many real-life and synthetic signals are of a multicomponent nature, yet there is little in the literature concerning IF estimation of such signals, which is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed, and a class of time-frequency distributions that are more suitable for this purpose has been proposed.
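The basic non-parametric recipe described above, transform to the time-frequency domain and read the IF off the energy peak at each time instant, can be sketched with a plain spectrogram. This is a baseline illustration only: the thesis studies quadratic TFDs with specially chosen kernels, not the spectrogram, and the window and hop lengths below are arbitrary illustrative values.

```python
import numpy as np

def spectrogram_if(x, fs, win_len=64, hop=16):
    """Estimate the IF of a mono-component signal as the peak frequency of a
    short-time Fourier transform magnitude at each frame (non-parametric
    peak-of-TFD estimator)."""
    win = np.hanning(win_len)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    est = []
    for start in range(0, len(x) - win_len + 1, hop):
        frame = x[start:start + win_len] * win
        spec = np.abs(np.fft.rfft(frame))
        est.append(freqs[np.argmax(spec)])  # frequency bin with most energy
    return np.array(est)
```

Applied to a linear FM chirp, the estimates track the rising frequency law to within the frequency resolution of the window. For multicomponent signals one would instead track several peaks per time instant, which is where the resolution and artifact behavior of the chosen TFD becomes critical.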
The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels of conventional quadratic TFDs; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.