961 results for Laboratory techniques
Abstract:
DIRECTOR’S OVERVIEW by Professor Mark Pearcy. This report for 2009 is the first full-year report for MERF. The development of our activities in 2009 has been remarkable and is testament to the commitment of the staff to the vision of MERF as a premier training and research facility. From the beginnings in 2003, when a need was identified for the provision of specialist research and training facilities to enable close collaboration between researchers and clinicians, to the realisation of the vision in 2009, it has been an amazing journey. However, we have learnt that there is much more that can be achieved, and the emphasis will be on working with the university, government and external partners to realise the full potential of MERF by further development of the Facility. In 2009 we conducted 28 workshops in the Anatomical and Surgical Skills Laboratory, providing training for surgeons in the latest techniques. This was an excellent achievement for the first full year, as our reputation for delivering first-class facilities and support grows. The highlight, perhaps, was a course run via our video link by a surgeon in the USA directing the participants in MERF. In addition, we have continued to run a small number of workshops in the operating theatre, and this promises to be an avenue of growing interest. Final approval was granted for the QUT Body Bequest Program late in 2009 following the granting of an Anatomical Accepting Licence. This will enable us to expand our capabilities by providing better material for the workshops. The QUT Body Bequest Program will be launched early in 2010. The Biological Research Facility (BRF) conducted over 270 procedures in 2009. This is a wonderful achievement considering fewer than 40 were performed in 2008. The staff of the BRF worked very hard to improve the state of the old animal house, and this resulted in approval for expanded use by the ethics committees of both QUT and the University of Queensland.
An external agency conducted an Occupational Health and Safety Audit of MERF in 2009. While there were a number of small issues that required attention, the auditor congratulated the staff of MERF on achieving a good result, particularly for such an early stage in the development of MERF. The journey from commissioning of MERF in 2008 to the full implementation of its activities in 2009 has demonstrated the potential of this facility, and 2010 will be an exciting year as its activities are recognised and expanded and further building development is pursued.
Abstract:
This paper presents an Airborne Systems Laboratory for Automation Research. The Airborne Systems Laboratory (ASL) is a Cessna 172 aircraft that has been specially modified and equipped by ARCAA specifically for research in future aircraft automation technologies, including Unmanned Airborne Systems (UAS). This capability has been developed over a long period of time, initially through the hire of aircraft, and finally through the purchase and modification of a dedicated flight-testing capability. The ASL has been equipped with a payload system that includes the provision of secure mounting, power, aircraft state data, a flight management system and a real-time subsystem. Finally, this system has been deployed as a cost-effective platform allowing real-world flight testing on a range of projects.
Abstract:
The process of compiling a studio vocal performance from many takes can often result in the performer producing a new complete performance once this new “best of” assemblage is heard back. This paper investigates the ways that the physical process of recording can alter vocal performance techniques, and in particular, the establishment of a definitive melodic and rhythmic structure. Drawing on his many years of experience as a commercially successful producer, including the attainment of a Grammy award, the author will analyse the process of producing a “credible” vocal performance in depth, with specific case studies and examples. The question of authenticity in rock and pop will also be discussed and, in this context, the uniqueness of the producer’s role as critical arbiter: what gives the producer the authority to make such performance evaluations? Techniques for creating conditions in the studio that are conducive to vocal performances, in many ways a very unnatural performance environment, will be discussed, touching on areas such as the psycho-acoustic properties of headphone mixes, the avoidance of intimidatory practices, and a methodology for inducing the perception of a “familiar” acoustic environment.
Abstract:
Cleaning of sugar mill evaporators is an expensive exercise. Identifying the scale components assists in determining which chemical cleaning agents would result in effective evaporator cleaning. The current methods (based on x-ray diffraction techniques, ion exchange/high performance liquid chromatography and thermogravimetry/differential thermal analysis) used for scale characterisation are difficult, time consuming and expensive, and cannot be performed in a conventional analytical laboratory or by mill staff. The present study has examined the use of simple descriptor tests for the characterisation of Australian sugar mill evaporator scales. Scale samples were obtained from seven Australian sugar mill evaporators by mechanical means. The appearance, texture and colour of the scale were noted before the samples were characterised using x-ray fluorescence and x-ray powder diffraction to determine the compounds present. A number of commercial analytical test kits were used to determine the phosphate and calcium contents of scale samples. Dissolution experiments were carried out on the scale samples with selected cleaning agents to provide relevant information about the effect the cleaning agents have on different evaporator scales. Results have shown that by simply identifying the colour and appearance of the scale, determining its elemental composition, and knowing from which effect the scale originates, a prediction of the scale composition can be made. These descriptors and dissolution experiments on scale samples can be used to provide factory staff with a rapid on-site process to predict the most effective chemicals for chemical cleaning of the evaporators.
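The rapid on-site prediction process described in this abstract can be read as a simple rule lookup from observed descriptors to a predicted scale composition and a candidate cleaning agent. The sketch below is purely illustrative: the colours, effect numbers, compounds and cleaning agents in the table are hypothetical stand-ins, not data from the paper.

```python
# Hypothetical descriptor-to-prediction lookup; the rule entries are
# illustrative placeholders, not the study's actual findings.
RULES = [
    # (colour, evaporator effect, predicted dominant compound, suggested cleaning agent)
    ("white", 1, "calcium-phosphate-rich scale", "acid clean"),
    ("brown", 4, "silica-rich scale", "caustic clean"),
]

def predict(colour, effect):
    """Return (predicted compound, suggested agent) for the observed
    descriptors, or fall back to laboratory analysis."""
    for c, e, compound, agent in RULES:
        if c == colour and e == effect:
            return compound, agent
    return "unknown", "laboratory analysis required"
```

In a real deployment the rule table would be populated from the descriptor and dissolution experiments the study describes.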
Abstract:
The present paper reviews the reliability and validity of visual analogue scales (VAS) in terms of (1) their ability to predict feeding behaviour, (2) their sensitivity to experimental manipulations, and (3) their reproducibility. VAS correlate with, but do not reliably predict, energy intake to the extent that they could be used as a proxy of energy intake. They do predict meal initiation in subjects eating their normal diets in their normal environment. Under laboratory conditions, subjectively rated motivation to eat using VAS is sensitive to experimental manipulations and has been found to be reproducible in relation to those experimental regimens. Other work has found them not to be reproducible in relation to repeated protocols. On balance, it would appear, in as much as it is possible to quantify, that VAS exhibit a good degree of within-subject reliability and validity in that they predict, with reasonable certainty, meal initiation and amount eaten, and are sensitive to experimental manipulations. This reliability and validity appears more pronounced under the controlled (but more artificial) conditions of the laboratory, where the signal-to-noise ratio in experiments appears to be elevated relative to real life. It appears that VAS are best used in within-subject, repeated-measures designs where the effect of different treatments can be compared under similar circumstances. They are best used in conjunction with other measures (e.g. feeding behaviour, changes in plasma metabolites) rather than as proxies for these variables. New hand-held electronic appetite rating systems (EARS) have been developed to increase reliability of data capture and decrease investigator workload. Recent studies have compared these with traditional pen and paper (P&P) VAS. The EARS have been found to be sensitive to experimental manipulations and reproducible relative to P&P. However, subjects appear to exhibit a significantly more constrained use of the scale when using the EARS relative to the P&P. For this reason it is recommended that the two techniques not be used interchangeably.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10-year risk of CVD events and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads).
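The pre-processing steps mentioned above (outlier removal and moving averages) can be sketched in a few lines. The thresholds, window size, replacement strategy and sample values below are illustrative choices, not those used in the study.

```python
# Illustrative pre-processing sketch: robust (median/MAD) outlier removal
# followed by a centred moving average. Thresholds and windows are
# hypothetical, not the thesis's actual settings.

def remove_outliers(series, z_thresh=3.5):
    """Flag points with a large modified z-score (median/MAD based),
    then replace each flagged point by the mean of its neighbours."""
    s = list(series)
    med = sorted(s)[len(s) // 2]
    mad = sorted(abs(v - med) for v in s)[len(s) // 2]
    if mad == 0:
        return s
    cleaned = s[:]
    for i, v in enumerate(s):
        if 0.6745 * abs(v - med) / mad > z_thresh:
            left = s[i - 1] if i > 0 else s[i + 1]
            right = s[i + 1] if i < len(s) - 1 else s[i - 1]
            cleaned[i] = (left + right) / 2.0
    return cleaned

def moving_average(series, window=3):
    """Centred moving average; edge values are left unchanged."""
    r = window // 2
    out = list(series)
    for i in range(r, len(series) - r):
        out[i] = sum(series[i - r:i + r + 1]) / window
    return out

hr = [72, 74, 73, 250, 75, 76, 74, 73]   # 250 is a monitoring artefact
clean = remove_outliers(hr)               # 250 replaced by (73 + 75) / 2 = 74.0
smooth = moving_average(clean)
```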
Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with misclassification rate and area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended to slightly increase accuracy but markedly increase complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection results in reduction of MR to between 9.8 and 10.16, with time-segmented summary data (dataset F) MR being 9.8 and raw time-series summary data (dataset A) being 9.92. However, for all time-series-only based datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with class distribution in the subset folds evaluated in the n-fold cross-validation method. For models based on Cfs-selected time-series-derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time-segmented time-series variables and RF) being 9.09.
The models based on counts of outliers and counts of data points outside the normal range (dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (dataset RF_G), perform the least well, with MR of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MR of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MR of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The increase in predictive accuracy achieved by adding risk factor variables to models based on time-series variables is significant. The addition of time-series-derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input.
In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological values falling outside the accepted normal range, is associated with some improvement in model performance.
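The class-imbalance handling described in this abstract, majority-class under-sampling evaluated with the Kappa statistic, can be sketched as follows. The function names and toy data are illustrative only, not the thesis's implementation.

```python
# Illustrative sketch of Cohen's Kappa and majority-class under-sampling.
import random

def cohens_kappa(y_true, y_pred):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(y_true)
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    labels = set(y_true) | set(y_pred)
    p_e = sum((y_true.count(l) / n) * (y_pred.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

def undersample_majority(X, y, majority_label, seed=0):
    """Randomly discard majority-class examples until the classes balance."""
    rng = random.Random(seed)
    minority = [(x, l) for x, l in zip(X, y) if l != majority_label]
    majority = [(x, l) for x, l in zip(X, y) if l == majority_label]
    kept = minority + rng.sample(majority, len(minority))
    rng.shuffle(kept)
    xs, ys = zip(*kept)
    return list(xs), list(ys)
```

Kappa (unlike raw misclassification rate) is insensitive to a classifier that merely predicts the majority class, which is why the abstract pairs it with under-sampling and AUC.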
Abstract:
Scientific discoveries, developments in medicine and health issues are the constant focus of media attention, and the principles surrounding the creation of so-called ‘saviour siblings’ are no exception. Developments in the field of reproductive techniques have provided the ability to genetically analyse embryos created in the laboratory, enabling parents to implant selected embryos to create a tissue-matched child who may be able to cure an existing sick child. The research undertaken in this thesis examines the regulatory frameworks overseeing the delivery of assisted reproductive technologies (ART) in Australia and the United Kingdom and considers how those frameworks impact on the accessibility of in vitro fertilisation (IVF) procedures for the creation of ‘saviour siblings’. In some jurisdictions, the accessibility of such techniques is limited by statutory requirements. The limitations and restrictions imposed by the state in relation to the technology are analysed in order to establish whether such restrictions are justified. The analysis is conducted on the basis of a harm framework. The framework seeks to establish whether those affected by the use of the technology (including the child who will be created) are harmed. In order to undertake such an evaluation, the concept of harm is considered under the scope of John Stuart Mill’s liberal theory, and the Harm Principle is used as a normative tool to judge whether the level of harm that may result justifies state interference with the reproductive decision-making of parents in this context. The harm analysis conducted in this thesis seeks to determine an appropriate regulatory response in relation to the use of pre-implantation tissue-typing for the creation of ‘saviour siblings’. The proposals outlined in the last part of this thesis seek to address the concern that harm may result from the practice of pre-implantation tissue-typing.
The current regulatory frameworks in place are also analysed on the basis of the harm framework established in this thesis. The material referred to in this thesis reflects the law and policy in place in Australia and the UK at the time the thesis was submitted for examination (December 2009).
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to result in improved reliability of matching in the presence of radiometric distortion, which is significant since radiometric distortion is a problem that commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
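The census transform and Hamming-distance matching discussed in this abstract can be sketched compactly: each pixel is encoded as a bit string recording which neighbours are darker than it, and correspondence is found by minimising the Hamming distance between codes along the epipolar line. The window size, search range and test pattern below are illustrative choices, not the thesis's actual parameters.

```python
# Illustrative census transform and winner-take-all Hamming matching.

def census(img, r=1):
    """Census transform: each interior pixel becomes a bit string encoding
    whether each neighbour in a (2r+1)x(2r+1) window is darker than it."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | (img[y + dy][x + dx] < img[y][x])
            out[y][x] = bits
    return out

def hamming(a, b):
    """Number of differing bits between two census codes."""
    return bin(a ^ b).count("1")

def match_row(cl, cr, y, x, max_disp):
    """Disparity d minimising the Hamming distance between the census
    codes of left pixel (y, x) and right pixel (y, x - d)."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d < 0:
            break
        cost = hamming(cl[y][x], cr[y][x - d])
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Textured test image; the right view is the left view shifted by one pixel.
left = [[(3 * y + 7 * x) % 13 for x in range(8)] for y in range(5)]
right = [[left[y][min(x + 1, 7)] for x in range(8)] for y in range(5)]
cl, cr = census(left), census(right)
d = match_row(cl, cr, 2, 4, 2)   # recovers the disparity of 1
```

Because the census code depends only on the ordering of intensities, not their values, matching survives the radiometric distortion the abstract mentions.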
Abstract:
This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered.
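The core idea behind replacing the Hilbert transformer with a time-delay can be illustrated for a single tone: a delay of a quarter period reproduces the HT's 90-degree phase shift exactly, but the required delay depends on the tone's frequency, which is why the resulting phase shift is signal-dependent, as the abstract notes. The frequencies and sampling rate below are illustrative values.

```python
# For a single tone, a quarter-period delay equals the Hilbert transform.
# f and fs are illustrative; chosen so the quarter period is a whole
# number of samples.
import math

f = 50.0           # tone frequency, Hz
fs = 8000.0        # sampling rate, Hz
n_quarter = int(fs / (4 * f))      # samples in a quarter period (= 40 here)

x = [math.sin(2 * math.pi * f * n / fs) for n in range(1000)]
ht = [-math.cos(2 * math.pi * f * n / fs) for n in range(1000)]  # exact HT of sin
delayed = [0.0] * n_quarter + x[:-n_quarter]                     # x delayed by T/4

# once the delay line is filled, the delayed signal matches the HT output
err = max(abs(d - h) for d, h in zip(delayed[n_quarter:], ht[n_quarter:]))
```

A true Hilbert transformer produces this 90-degree shift for every frequency at once; the delay matches it only at frequencies where the delay spans a quarter period, which is the trade-off the TDTL analysis addresses.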
This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain. Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed.
The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
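The non-parametric, time-frequency peak approach to IF estimation can be sketched with a plain short-time DFT standing in for the quadratic TFDs the thesis proposes: compute a time-frequency representation and take the peak frequency of each frame. The signal and parameters below are illustrative, and the frequencies are chosen to fall on exact DFT bins.

```python
# Illustrative IF estimation: peak of each frame's short-time DFT
# magnitude (a simple stand-in for the thesis's quadratic TFDs).
import math

def stdft_peak(x, start, N, fs):
    """Return the frequency (Hz) with the largest short-time DFT
    magnitude over the frame x[start:start+N]."""
    best_k, best_mag = 0, -1.0
    for k in range(1, N // 2):     # skip the DC bin
        re = sum(x[start + n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
        im = sum(x[start + n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
        mag = re * re + im * im
        if mag > best_mag:
            best_mag, best_k = mag, k
    return best_k * fs / N

fs, N = 1000.0, 100
# stepped-frequency test signal: 130 Hz for the first half, 210 Hz after
x = [math.sin(2 * math.pi * (130 if n < 500 else 210) * n / fs) for n in range(1000)]
f1 = stdft_peak(x, 0, N, fs)      # frame inside the 130 Hz segment
f2 = stdft_peak(x, 600, N, fs)    # frame inside the 210 Hz segment
```

For multicomponent signals one would keep several local maxima per frame rather than a single global peak, which is where the resolution of the T-class distributions becomes important.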