995 results for projective techniques
Abstract:
The following paper presents an evaluation of airborne sensors for use in vegetation management in powerline corridors. Three integral stages in the management process are addressed: the detection of trees, relative positioning with respect to the nearest powerline, and vegetation height estimation. Image data, including multi-spectral and high-resolution imagery, are analyzed along with LiDAR data captured from fixed-wing aircraft. Ground truth data are then used to establish the accuracy and reliability of each sensor, thus providing a quantitative comparison of sensor options. Tree detection was achieved through crown delineation using a Pulse-Coupled Neural Network (PCNN) and morphological reconstruction applied to multi-spectral imagery. Testing showed a detection rate of 96%, while the accuracy in correctly segmenting groups of trees and single trees was 75%. Relative positioning using LiDAR achieved an RMSE of 1.4 m and 2.1 m for cross-track distance and along-track position respectively, while Direct Georeferencing achieved an RMSE of 3.1 m in both instances. The estimation of pole and tree heights measured with LiDAR had an RMSE of 0.4 m and 0.9 m respectively, while Stereo Matching achieved 1.5 m and 2.9 m. Overall, a small number of poles were missed, with detection rates of 98% and 95% for LiDAR and Stereo Matching respectively.
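Since each sensor option above is compared by its RMSE against surveyed ground truth, the following minimal sketch (Python with NumPy) shows how such a comparison might be computed; the function name and the sample distances are purely illustrative and are not values or code from the paper.

```python
import numpy as np

def rmse(estimates, ground_truth):
    """Root-mean-square error between estimated and surveyed values (same units, e.g. metres)."""
    estimates = np.asarray(estimates, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean((estimates - ground_truth) ** 2)))

# Illustrative only: cross-track distances (m) estimated from LiDAR vs. surveyed values.
lidar_cross_track = [3.2, 5.1, 2.8, 7.4]
surveyed_cross_track = [3.0, 4.9, 4.1, 6.2]
print(f"Cross-track RMSE: {rmse(lidar_cross_track, surveyed_cross_track):.2f} m")
```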
Abstract:
An investigation into the encapsulation of gold nanoparticles (AuNPs) by poly(methyl methacrylate) (PMMA) was undertaken. This was performed using three polymerisation techniques: grafting PMMA synthesised by reversible addition-fragmentation chain transfer (RAFT) polymerisation to AuNPs; grafting PMMA synthesised by atom transfer radical polymerisation (ATRP) from the surface of functionalised AuNPs; and encapsulating AuNPs within PMMA latexes produced through photo-initiated oil-in-water (o/w) miniemulsion polymerisation. The grafting of RAFT PMMA to AuNPs was performed by addition of the RAFT-functionalised PMMA to citrate-stabilised AuNPs. This was conducted with a range of PMMA of varying molecular weight distribution (MWD), bearing either dithioester or thiol end-group functionalities. The RAFT PMMA polymers were characterised by gel permeation chromatography (GPC), ultraviolet-visible (UV-vis), Fourier transform infrared-attenuated total reflectance (FTIR-ATR), Fourier transform Raman (FT-Raman) and proton nuclear magnetic resonance (1H NMR) spectroscopies. The attachment of PMMA to AuNPs showed a tendency for AuNPs to associate with the PMMA structures formed, though significant aggregation occurred. Interestingly, thiol end-group functionalised PMMA showed very little aggregation of AuNPs. The spherical polymer-AuNP structures did not vary in size with variations in PMMA MWD. The PMMA-AuNP structures were characterised using scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy dispersive X-ray analysis (EDAX) and UV-vis spectroscopy. The surface-confined ATRP grafting of PMMA from initiator-functionalised AuNPs was conducted in both homogeneous and heterogeneous media. 11,11'-Dithiobis[1-(2-bromo-2-methylpropionyloxy)undecane] (DSBr) was used as the surface-confined initiator and was synthesised in a three-step procedure from mercaptoundecanol (MUD). All compounds were characterised by 1H NMR, FTIR-ATR and Raman spectroscopies. The grafting in homogeneous media resulted in amorphous PMMA with significant AuNP aggregation. Individually grafted AuNPs were difficult to separate and characterise, though SEM, TEM, EDAX and UV-vis spectroscopy were used. The heterogeneous polymerisation did not produce grafted AuNPs, as determined by SEM and EDAX. The encapsulation of AuNPs within PMMA latexes through photo-initiated miniemulsion polymerisation was successfully achieved. Initially, photo-initiated miniemulsion polymerisation was investigated as a viable low-temperature method of miniemulsion initiation. This proved successful, producing a stable PMMA latex with good conversion efficiency and a narrow particle size distribution (PSD); this is the first report of such a system. The photo-initiated technique was further optimised and AuNPs were incorporated into the miniemulsion. AuNP encapsulation was very effective, producing reproducible AuNP-encapsulated PMMA latexes; again, this is the first reported case of such a system. The latexes were characterised by TEM, SEM, GPC, gravimetric analysis and dynamic light scattering (DLS).
Abstract:
Many studies in the area of project management and social networks have identified the significance of project knowledge transfer within and between projects. However, only a few studies have examined intra- and inter-project knowledge transfer activities. Knowledge in projects can be transferred via face-to-face interactions on the one hand, and via IT-based tools on the other. Although companies have allocated many resources to IT tools, it has been found that these are not always effectively utilised, and people prefer to look for knowledge through social, face-to-face interactions. This paper explores how to effectively leverage two alternative knowledge transfer techniques, face-to-face interactions and IT-based tools, to facilitate knowledge transfer and enhance knowledge creation within and between projects. The paper extends previous research on the relationships between and within teams by examining a project's external and internal knowledge networks concurrently. Qualitative social network analysis, applied to a case study within a small-to-medium enterprise, was used to examine the knowledge transfer activities within and between projects and to investigate knowledge transfer techniques. This paper demonstrates the significance of overlapping employees, who work simultaneously on two or more projects, and their impact on facilitating knowledge transfer between projects within a small-to-medium organisation. This research is also crucial to gaining a better understanding of the different knowledge transfer techniques used for intra- and inter-project knowledge exchange. The research provides recommendations on how to achieve better knowledge transfer within and between projects in order to fully utilise a project's knowledge and achieve better project performance.
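As a simple illustration of the network view taken above, the sketch below (Python, assuming the networkx library) flags "overlapping" employees who sit on two or more project teams and are therefore positioned to bridge inter-project knowledge transfer; the team names and memberships are invented for illustration and do not come from the case study.

```python
import networkx as nx

# Hypothetical project teams, invented purely to illustrate the idea.
project_teams = {
    "Project A": ["Ana", "Ben", "Cara", "Dev"],
    "Project B": ["Dev", "Eli", "Fay"],
}

# Build a bipartite employee-project graph.
G = nx.Graph()
for project, members in project_teams.items():
    for person in members:
        G.add_edge(person, project)

# Employees connected to two or more projects are the "overlapping" employees;
# their betweenness centrality indicates how strongly they bridge the projects.
centrality = nx.betweenness_centrality(G)
overlapping = [n for n in G if n not in project_teams and G.degree(n) > 1]
for person in overlapping:
    print(f"{person}: betweenness = {centrality[person]:.3f}")
```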
Abstract:
Automatic recognition of people is an active field of research with important forensic and security applications. In these applications, it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel, whether wired or wireless. A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection. The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability, an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted, revealing a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques due to the similarities in how they achieved their objectives. The second theme saw the proposal of a novel model for the purpose of session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further benefits in performance over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area. In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the use of the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of the impostor cohorts required in alternative techniques for speaker verification.
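A minimal sketch of the hybrid GMM mean supervector SVM classifier referred to above is given below (Python, assuming scikit-learn). It omits the universal background model, MAP adaptation and the session-variability compensation the thesis builds on, and the random arrays merely stand in for real cepstral features; it illustrates the supervector idea rather than the thesis's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def mean_supervector(features, n_components=8, seed=0):
    """Fit a GMM to a speaker's feature frames (frames x dims) and stack the
    component means into one fixed-length supervector."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed).fit(features)
    return gmm.means_.flatten()

rng = np.random.default_rng(0)

# Illustrative "features": one target speaker and a pool of impostor speakers.
target_sv = mean_supervector(rng.normal(size=(500, 12)))
impostor_svs = [mean_supervector(rng.normal(loc=0.5, size=(500, 12))) for _ in range(10)]

# Discriminative speaker model: target supervector vs. impostor background supervectors.
X = np.vstack([target_sv] + impostor_svs)
y = np.array([1] + [0] * len(impostor_svs))
speaker_svm = SVC(kernel="linear").fit(X, y)

# Verification score for an unseen test utterance (higher = more target-like).
test_sv = mean_supervector(rng.normal(size=(500, 12)))
print(f"Verification score: {speaker_svm.decision_function(test_sv.reshape(1, -1))[0]:.3f}")
```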
Abstract:
In this study, cell sheets comprising multilayered porcine bone marrow stromal cells (BMSC) were assembled with fully interconnected scaffolds made from medical-grade polycaprolactone–calcium phosphate (mPCL–CaP) for the engineering of structural and functional bone grafts. The BMSC sheets were harvested from culture flasks and wrapped around pre-seeded composite scaffolds. The layered cell sheets integrated well with the scaffold/cell construct and remained viable, with mineralized nodules visible both inside and outside the scaffold for up to 8 weeks of culture. Cells within the constructs underwent classical in vitro osteogenic differentiation with the associated elevation of alkaline phosphatase activity and bone-related protein expression. In vivo, two sets of cell-sheet-scaffold/cell constructs were transplanted under the skin of nude rats. The first set of constructs (5 × 5 × 4 mm) was assembled with BMSC sheets and cultured for 8 weeks before implantation. The second set of constructs (10 × 10 × 4 mm) was implanted immediately after assembly with BMSC sheets, with no further in vitro culture. For both groups, neo-cortical and well-vascularised cancellous bone formed within the constructs, with up to 40% bone volume. Histological and immunohistochemical examination revealed that the neo-bone tissue formed from the pool of seeded BMSC and that bone formation followed a predominantly endochondral pathway, with woven bone matrix subsequently maturing into fully mineralized compact bone exhibiting the histological markers of native bone. These findings demonstrate that large bone tissues similar to native bone can be regenerated utilizing BMSC sheet techniques in conjunction with composite scaffolds whose structures are optimized from a mechanical, nutrient transport and vascularization perspective.
Abstract:
The seemingly exponential nature of technological change provides SMEs with a complex and challenging operational context. The development of infrastructures capable of supporting the wireless application protocol (WAP) and associated 'wireless' applications represents the latest generation of technological innovation, with potential appeal to SMEs and end-users alike. This paper aims to understand the mobile data technology needs of SMEs in a regional setting. The research was especially concerned with perceived needs across three market segments: non-adopters, partial adopters and full adopters of new technology. The research was exploratory in nature, as the phenomenon under scrutiny is relatively new and its uses unclear; focus groups were therefore conducted with each of the segments. The paper provides insights for business, industry and academics.
Abstract:
The process of compiling a studio vocal performance from many takes can often result in the performer producing a new, complete performance once this "best of" assemblage is heard back. This paper investigates the ways in which the physical process of recording can alter vocal performance techniques, and in particular the establishing of a definitive melodic and rhythmic structure. Drawing on his many years of experience as a commercially successful producer, including the attainment of a Grammy award, the author will analyse the process of producing a “credible” vocal performance in depth, with specific case studies and examples. The question of authenticity in rock and pop will also be discussed and, in this context, the uniqueness of the producer’s role as critical arbiter: what gives the producer the authority to make such performance evaluations? Techniques for creating studio conditions that are conducive to vocal performance, the studio being in many ways a very unnatural performance environment, will be discussed, touching on areas such as the psycho-acoustic properties of headphone mixes, the avoidance of intimidatory practices, and a methodology for inducing the perception of a “familiar” acoustic environment.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains, and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to these data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for the removal of outliers, the calculation of moving averages, and data summarisation and data abstraction. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended towards slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the MR for time-segmented summary data (dataset F) being 9.8 and for raw time-series summary data (dataset A) being 9.92. However, for all datasets based on time-series variables only, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-only datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with that of dataset RF_A (raw time-series data and RF) being 8.85 and that of dataset RF_F (time-segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MRs of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend towards improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables, compared with the use of risk factors alone, is consistent with recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables were used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
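The modelling pipeline described above, i.e. majority-class under-sampling, filter-type feature selection, decision-tree learning and evaluation by misclassification rate, Kappa and AUC, might be sketched as follows in Python with scikit-learn. Cfs and J48 are WEKA components, so the sketch substitutes a univariate mutual-information filter and scikit-learn's decision tree, and the synthetic data merely stand in for the anaesthesia time-series summary and risk factor variables; it illustrates the workflow rather than reproducing the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for time-series summary variables plus risk factors,
# with an unbalanced class distribution (10% positive).
X, y = make_classification(n_samples=1000, n_features=30, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Under-sample the majority class so the training data are balanced.
rng = np.random.default_rng(0)
pos = np.where(y_tr == 1)[0]
neg = rng.choice(np.where(y_tr == 0)[0], size=len(pos), replace=False)
idx = np.concatenate([pos, neg])
X_bal, y_bal = X_tr[idx], y_tr[idx]

# Filter-type feature selection (a univariate stand-in for WEKA's Cfs),
# followed by a decision tree (analogous to J48).
selector = SelectKBest(mutual_info_classif, k=8).fit(X_bal, y_bal)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
    selector.transform(X_bal), y_bal)

# Evaluate with misclassification rate, Kappa and AUC on the held-out set.
pred = tree.predict(selector.transform(X_te))
prob = tree.predict_proba(selector.transform(X_te))[:, 1]
print(f"Misclassification rate: {100 * np.mean(pred != y_te):.2f}%")
print(f"Kappa: {cohen_kappa_score(y_te, pred):.3f}  AUC: {roc_auc_score(y_te, prob):.3f}")
```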
Abstract:
Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem, which involves locating corresponding points or features in two images. For this application, speed, reliability and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census transforms, were also compared. Both the rank and census transforms were found to improve the reliability of matching in the presence of radiometric distortion, which is significant since radiometric distortion is a problem that commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match; this was termed the rank constraint. The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint was proposed. This algorithm was tested using a number of stereo pairs, and in all cases the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that it is able to remove a large proportion of invalid matches and improve match accuracy.
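For readers unfamiliar with the non-parametric transforms discussed above, the sketch below (Python with NumPy) shows a basic census transform and a winner-takes-all disparity search using the Hamming distance. It is a simplified illustration rather than the thesis's algorithm: it matches on a single transformed pixel instead of aggregating costs over a correlation window, and it wraps at image borders for brevity.

```python
import numpy as np

def census_transform(img, window=5):
    """Census transform: encode each pixel as a bit string recording whether
    each neighbour in the window is darker than the centre pixel."""
    r = window // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wraps at borders
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def hamming(a, b):
    """Hamming distance between two census bit strings."""
    return bin(int(a) ^ int(b)).count("1")

def match_pixel(census_left, census_right, y, x, max_disparity=16):
    """Winner-takes-all search along the same scanline for the disparity with
    the smallest Hamming distance (no cost aggregation, for brevity)."""
    costs = [hamming(census_left[y, x], census_right[y, x - d])
             for d in range(min(max_disparity, x) + 1)]
    return int(np.argmin(costs))

# Illustrative use on random "images"; real inputs would be rectified stereo pairs.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(60, 80))
right = np.roll(left, -4, axis=1)  # a synthetic 4-pixel disparity
cl, cr = census_transform(left), census_transform(right)
print("Estimated disparity at (30, 40):", match_pixel(cl, cr, 30, 40))
```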