930 results for STATIONARY SECTORIAL SAMPLER
Abstract:
Diarrhoea is one of the leading causes of morbidity and mortality in populations in developing countries and is a significant health issue throughout the world. Despite the frequency and severity of diarrhoeal disease, the mechanisms of pathogenesis for many of the causative agents have been poorly characterised. Although implicated in a number of intestinal and extra-intestinal infections in humans, Plesiomonas shigelloides has generally been dismissed as an enteropathogen due to the lack of clearly demonstrated virulence-associated properties such as production of cytotoxins and enterotoxins or invasive abilities. However, evidence from a number of sources has indicated that this species may be the cause of a number of clinical infections. The work described in this thesis seeks to resolve this discrepancy by investigating the pathogenic potential of P. shigelloides using in vitro cell models. The focus of this research centres on how this organism interacts with human host cells in an experimental model. Very little is known about the pathogenic potential of P. shigelloides and its mechanisms in human infections and disease. However, disease manifestations mimic those of other related microorganisms. Chapter 2 reviews microbial pathogenesis in general, with an emphasis on understanding the mechanisms resulting from infection with bacterial pathogens and the alterations in host cell biology. In addition, this review analyses the pathogenic status of a poorly defined enteropathogen, P. shigelloides. Key stages of pathogenicity must occur in order for a bacterial pathogen to cause disease. Such stages include bacterial adherence to host tissue, bacterial entry into host tissues (usually required), multiplication within host tissues, evasion of host defence mechanisms and the causation of damage. In this study, evidence for these key strategies in infection and disease was sought to help assess the pathogenic potential of P. shigelloides (Chapter 3). Twelve isolates of P. shigelloides, obtained from clinical cases of gastroenteritis, were used to infect monolayers of human intestinal epithelial cells in vitro. Ultrastructural analysis demonstrated that P. shigelloides was able to adhere to the microvilli at the apical surface of the epithelial cells and also to the plasma membranes of both apical and basal surfaces. Furthermore, it was demonstrated that these isolates were able to enter intestinal epithelial cells. Internalised bacteria were often confined within vacuoles surrounded by single or multiple membranes. Observation of bacteria within membrane-bound vacuoles suggests that uptake of P. shigelloides into intestinal epithelial cells occurs via a process morphologically comparable to phagocytosis. Bacterial cells were also observed free in the host cell cytoplasm, indicating that P. shigelloides is able to escape from the surrounding vacuolar membrane and exist within the cytosol of the host. Plesiomonas shigelloides has not only been implicated in gastrointestinal infections, but also in a range of non-intestinal infections such as cholecystitis, proctitis, septicaemia and meningitis. The mechanisms by which P. shigelloides causes these infections are not understood. Previous research was unable to ascertain the pathogenic potential of P. shigelloides using cells of non-intestinal origin (HEp-2 cells derived from a human larynx carcinoma and HeLa cells derived from a cervical carcinoma). However, with the recent findings (from this study) that P. shigelloides can adhere to and enter intestinal cells, it was hypothesised that P. shigelloides would be able to enter HeLa and HEp-2 cells. Six clinical isolates of P. shigelloides, which had previously been shown to be invasive to intestinally derived Caco-2 cells (Chapter 3), were used to study interactions with HeLa and HEp-2 cells (Chapter 4). These isolates were shown to adhere to and enter both non-intestinal host cell lines. Plesiomonas shigelloides was observed within vacuoles surrounded by single and multiple membranes, as well as free in the host cell cytosol, similar to P. shigelloides infection of Caco-2 cells. Comparisons of the number of bacteria adhered to and present intracellularly within HeLa, HEp-2 and Caco-2 cells revealed a preference of P. shigelloides for Caco-2 cells. This study conclusively showed for the first time that P. shigelloides is able to enter HEp-2 and HeLa cells, demonstrating the potential ability to cause an infection and/or disease of extra-intestinal sites in humans. Further high-resolution ultrastructural analysis of the mechanisms involved in P. shigelloides adherence to intestinal epithelial cells (Chapter 5) revealed numerous prominent surface features which appeared to be involved in the binding of P. shigelloides to host cells. These surface structures varied in morphology from small bumps across the bacterial cell surface to much longer filaments. Evidence that flagella might play a role in bacterial adherence was also found. The hypothesis that filamentous appendages are morphologically expressed when in contact with host cells was also tested. Observations of bacteria free in the host cell cytosol suggest that P. shigelloides is able to lyse free from the initial vacuolar compartment. The vacuoles containing P. shigelloides within host cells have not been characterised and the point at which P. shigelloides escapes from the surrounding vacuolar compartment has not been determined. A cytochemical detection assay for acid phosphatase, an enzymatic marker for lysosomes, was used to analyse the co-localisation of bacteria-containing vacuoles and acid phosphatase activity (Chapter 6). Acid phosphatase activity was not detected in these bacteria-containing vacuoles. However, the surface of many intracellular and extracellular bacteria demonstrated high levels of acid phosphatase activity, leading to the proposal of a new virulence factor for P. shigelloides. For many pathogens, the efficiency with which they adhere to and enter host cells is dependent upon the bacterial phase of growth. Such dependency reflects the timing of expression of particular virulence factors important for bacterial pathogenesis. In previous studies (Chapter 3 to Chapter 6), an overnight culture of P. shigelloides was used to investigate a number of interactions; however, it was unknown whether this allowed expression of bacterial factors to permit efficient P. shigelloides attachment and entry into human cells. In this study (Chapter 7), a number of clinical and environmental P. shigelloides isolates were investigated to determine whether adherence and entry into host cells in vitro were more efficient during exponential-phase or stationary-phase bacterial growth. An increase in the number of adherent and intracellular bacteria was demonstrated when bacteria were inoculated into host cell cultures in exponential phase. This was demonstrated clearly for 3 out of 4 isolates examined.
In addition, an increase in the morphological expression of filamentous appendages, a suggested virulence factor for P. shigelloides, was observed for bacteria in exponential growth phase. These observations suggest that virulence determinants of P. shigelloides may be more efficiently expressed when bacteria are in exponential growth phase. This study also demonstrated, for the first time, that environmental water isolates of P. shigelloides were able to adhere to and enter human intestinal cells in vitro. These isolates were seen to enter Caco-2 host cells through a process comparable to that of the clinical isolates examined. These findings support the hypothesis of a water transmission route for P. shigelloides infections. The results presented in this thesis contribute significantly to our understanding of the pathogenic mechanisms involved in P. shigelloides infections and disease. Several of the factors involved in P. shigelloides pathogenesis have homologues in other pathogens of the human intestine, namely Vibrio, Aeromonas, Salmonella and Shigella species, and diarrhoea-associated strains of Escherichia coli. This study emphasises the relevance of research into Plesiomonas as a means of furthering our understanding of bacterial virulence in general. It also provides tantalising clues about normal and pathogenic host cell mechanisms.
Abstract:
My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when the sentence structures and word inflections of different languages are contrasted (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977). Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating 'this' or 'that'. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as 'plate' or 'table', but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as 'dog' or 'ball'. This view leaves open the question of how blind children learn words for visible objects, and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
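To make the varying-dispersion idea concrete, a common specification in this literature (a generic sketch, not necessarily the exact parameterisation estimated in this study) writes the crash count y_i at site i as

    y_i \sim \mathrm{NegBin}(\mu_i, \phi_i), \qquad \mathrm{E}(y_i) = \mu_i, \qquad \mathrm{Var}(y_i) = \mu_i + \mu_i^2/\phi_i,
    \ln \mu_i = \beta_0 + \beta_1 \ln F_{1i} + \beta_2 \ln F_{2i} + \boldsymbol{\beta}'\mathbf{x}_i, \qquad \ln \phi_i = \gamma_0 + \boldsymbol{\gamma}'\mathbf{z}_i,

where F_{1i} and F_{2i} are the major and minor road flows and x_i, z_i are site covariates. A fixed dispersion parameter corresponds to \boldsymbol{\gamma} = \mathbf{0}, so testing whether elements of \boldsymbol{\gamma} are significant is exactly the specification check suggested by these findings.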
Abstract:
Statisticians, along with other scientists, have made significant computational advances that enable the estimation of formerly intractable statistical models. The Bayesian inference framework combined with Markov chain Monte Carlo estimation methods such as the Gibbs sampler enables the estimation of discrete choice models such as the multinomial logit (MNL) model. MNL models are frequently applied in transportation research to model choice outcomes such as mode, destination, or route choices, or to model categorical outcomes such as crash outcomes. Recent developments allow for the modification of the potentially limiting assumptions of MNL such as the independence from irrelevant alternatives (IIA) property. However, relatively little transportation-related research has focused on Bayesian MNL models, the tractability of which is of great value to researchers and practitioners alike. This paper addresses MNL model specification issues in the Bayesian framework, such as the value of including prior information on parameters, allowing for nonlinear covariate effects, and extensions to random parameter models, thereby relaxing the usual limiting IIA assumption. This paper also provides an example that demonstrates, using route-choice data, the considerable potential of the Bayesian MNL approach for many transportation applications. This paper then concludes with a discussion of the pros and cons of this Bayesian approach and identifies when its application is worthwhile.
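For reference, the MNL model assigns decision-maker n the following probability of choosing alternative i from choice set C (standard notation, independent of this paper's application):

    P_{ni} = \frac{\exp(V_{ni})}{\sum_{j \in C} \exp(V_{nj})}, \qquad V_{ni} = \boldsymbol{\beta}'\mathbf{x}_{ni}.

The odds ratio P_{ni}/P_{nj} = \exp(V_{ni} - V_{nj}) involves no alternative other than i and j; this is the IIA property, and allowing \boldsymbol{\beta} to vary randomly across decision-makers (the random parameter extension discussed above) is what relaxes it.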
Abstract:
We advance the proposition that dynamic stochastic general equilibrium (DSGE) models should not be estimated and evaluated only with full information methods, which require that the complete system of equations be specified properly. Some limited information analysis, which focuses upon specific equations, is therefore likely to be a useful complement to full system analysis. Two major problems occur when implementing limited information methods: the presence of forward-looking expectations in the system, and unobservable non-stationary variables. We present methods for dealing with both of these difficulties, and illustrate the interaction between full and limited information methods using a well-known model.
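As an illustration of the limited information setting (a generic example for exposition, not the particular system analysed in the paper), a single forward-looking structural equation such as

    y_t = \alpha \mathrm{E}_t[y_{t+1}] + \gamma y_{t-1} + \kappa x_t + \varepsilon_t

can be estimated on its own, with the unobservable expectation \mathrm{E}_t[y_{t+1}] handled by instrumental variables/GMM or replaced by a model-consistent projection, rather than being estimated jointly with every other equation of the DSGE system.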
Abstract:
Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, which minimises the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. The published literature investigates resistivity variations for constant blood flow; in this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude less than that for stationary blood. The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to assess the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively.
The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98, experimental; r² = 0.94, theoretical). The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance for the same velocity of flow. However, when the velocity was divided by the radius of the tube (labelled the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal. Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in the time decay constant between control and test subjects (P = 0.03).
The time decay constant calculated for test subjects (τ = 180-250 s) is consistently larger than that determined for control subjects (τ = 50-130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening of aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study. While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
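For context, conventional impedance cardiography converts the measured thoracic impedance change into stroke volume via the parallel conductor model, most familiarly through the Kubicek equation (standard literature formulation; symbols follow common usage rather than this thesis):

    SV = \rho_b \left(\frac{L}{Z_0}\right)^2 T_{\mathrm{LVET}} \left(\frac{dZ}{dt}\right)_{\max},

where \rho_b is the blood resistivity (treated as a constant in conventional practice, which is precisely the assumption this work challenges), L is the electrode spacing, Z_0 is the basal thoracic impedance and T_{\mathrm{LVET}} is the left ventricular ejection time. The deceleration-phase behaviour reported above corresponds to an exponential relaxation of the form \Delta Z(t) \propto e^{-t/\tau}, from which the time decay constant \tau is extracted.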
Abstract:
This study examined whether the conspicuity of road workers at night can be enhanced by distributing retroreflective strips across the body to present a pattern of biological motion (biomotion). Twenty visually normal drivers (mean age = 40.3 years) participated in an experiment conducted at two open-road work sites (one suburban and one freeway) at night-time. At each site, four road workers walked in place wearing a standard road worker night vest either (a) alone, (b) with additional retroreflective strips on thighs, (c) with additional retroreflective strips on ankles and knees, or (d) with additional retroreflective strips on eight moveable joints (full biomotion). Participants, seated in stationary vehicles at three different distances (80 m, 160 m, 240 m), rated the relative conspicuity of the four road workers. Road worker conspicuity was maximized by the full biomotion configuration at all distances and at both sites. The addition of ankle and knee markings also provided significant benefits relative to the standard vest alone. The effects of clothing configuration were more evident at the freeway site and at shorter distances. Overall, the full biomotion configuration was ranked to be most conspicuous and the vest least conspicuous. These data provide the first evidence that biomotion effectively enhances conspicuity of road workers at open-road work sites.
Abstract:
Aims: This study determined whether the visibility benefits of positioning retroreflective strips in biological motion configurations were evident at real-world road worker sites. Methods: 20 visually normal drivers (M = 40.3 years) participated in this study, which was conducted at two road work sites (one suburban and one freeway) on two separate nights. At each site, four road workers walked in place wearing one of four different clothing options: a) standard road worker night vest; b) standard night vest plus retroreflective strips on thighs; c) standard night vest plus retroreflective strips on ankles and knees; d) standard night vest plus retroreflective strips on eight moveable joints (full biomotion). Participants seated in stationary vehicles at three different distances (80 m, 160 m, 240 m) rated the relative conspicuity of the four road workers using a series of standardized visibility and ranking scales. Results: Adding retroreflective strips in the full biomotion configuration to the standard night vest significantly (p < 0.001) enhanced perceptions of road worker visibility compared to the standard vest alone, or in combination with thigh retroreflective markings. These visibility benefits were evident at all distances and at both sites. Retroreflective markings at the ankles and knees also provided visibility benefits compared to the standard vest; however, the full biomotion configuration was significantly better than all of the other configurations. Conclusions: These data provide the first evidence that the benefits of biomotion retroreflective markings previously demonstrated under laboratory and closed- and open-road conditions are also evident at real work sites.
Abstract:
For many decades, correlation and the power spectrum have been the primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence, which is sufficient for a complete statistical description of Gaussian signals with known means. However, there are practical situations where one needs to look beyond the autocorrelation of a signal to extract information regarding deviations from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in a signal. Most biomedical signals are non-linear, non-stationary and non-Gaussian in nature, and therefore it can be more advantageous to analyze them with HOS than with second order correlations and power spectra. In this paper we discuss the application of HOS to different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal, and applications to other signals are reviewed.
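As a concrete example, the lowest order polyspectrum, the bispectrum, can be written (in standard notation, not tied to this paper) as

    B(f_1, f_2) = \mathrm{E}\left[X(f_1)\, X(f_2)\, X^{*}(f_1 + f_2)\right],

where X(f) is the Fourier transform of the signal and * denotes complex conjugation. The bispectrum of a Gaussian signal is identically zero, and it is non-zero in the presence of quadratic phase coupling, which is why it can detect the departures from Gaussianity and linearity described above.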
Abstract:
The electrocardiogram (ECG) is an important bio-signal representing the sum of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) reflects the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. The HRV signal can be used as a base signal to observe the heart's functioning. These signals are non-linear and non-stationary in nature, so higher order spectral (HOS) analysis, which is more suitable for non-linear systems and is robust to noise, was used. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, we extracted seven features from the heart rate signals using HOS and fed them to a support vector machine (SVM) for classification. Our performance evaluation protocol uses 330 subjects consisting of five different kinds of cardiac disease conditions. We demonstrate a sensitivity of 90% for the classifier with a specificity of 87.93%. Our system is ready to run on larger data sets.
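The classification stage described above can be sketched in a few lines of Python; this is a minimal illustration only, with random placeholder data standing in for the seven HOS features and five cardiac classes (all names and values here are illustrative, not the authors' pipeline):

    # Minimal sketch: HOS-derived features fed to an SVM classifier.
    # X stands in for the 7 HOS features per subject, y for 5 cardiac classes.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    X = rng.normal(size=(330, 7))        # placeholder feature matrix (330 subjects x 7 features)
    y = rng.integers(0, 5, size=330)     # placeholder labels for 5 disease conditions

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)  # scale features before the RBF kernel
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(scaler.transform(X_tr), y_tr)
    print(classification_report(y_te, clf.predict(scaler.transform(X_te))))

In practice the placeholder X would be replaced by the computed bispectral features and the sensitivity/specificity figures quoted above would come from a protocol such as cross-validation rather than a single split.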
Abstract:
Markov chain Monte Carlo (MCMC) estimation provides a solution to the complex integration problems that are faced in the Bayesian analysis of statistical problems. The implementation of MCMC algorithms is, however, code intensive and time consuming. We have developed a Python package called PyMCMC that aids in the construction of MCMC samplers, substantially reduces the likelihood of coding errors, and minimises repetitive code. PyMCMC contains classes for Gibbs, Metropolis-Hastings, independent Metropolis-Hastings, random walk Metropolis-Hastings, orientational bias Monte Carlo and slice samplers, as well as specific modules for common models, such as a module for Bayesian regression analysis. PyMCMC is straightforward to optimise, taking advantage of the Python libraries NumPy and SciPy, as well as being readily extensible with C or Fortran.
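Independent of PyMCMC's own API (which is not reproduced here), the kind of sampler such a package automates can be sketched with NumPy alone; this is a generic random walk Metropolis-Hastings loop for a toy one-parameter posterior:

    # Generic random walk Metropolis-Hastings sketch (illustrative, not PyMCMC's API).
    import numpy as np

    def log_post(theta):
        return -0.5 * theta ** 2          # toy target: standard normal, up to a constant

    rng = np.random.default_rng(1)
    theta, step, draws = 0.0, 1.0, []
    for _ in range(10000):
        proposal = theta + step * rng.normal()
        # accept with probability min(1, post(proposal)/post(theta))
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal
        draws.append(theta)
    burned = draws[2000:]                 # discard burn-in
    print(np.mean(burned), np.std(burned))  # should be close to 0 and 1

Writing such loops by hand for every block of a multi-parameter model is exactly the repetitive, error-prone code that a sampler-construction package is designed to eliminate.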
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-Time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
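As a taste of the Part I material, the convolution theorem (the DFT turns circular convolution into pointwise multiplication) can be demonstrated in a few lines of NumPy; this example is for illustration only and is not taken from the book:

    # Circular convolution two ways: directly, and via the DFT.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])    # input signal
    h = np.array([0.5, 0.5, 0.0, 0.0])    # two-tap moving-average filter
    N = len(x)

    direct = np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])
    via_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
    print(direct, via_dft)                 # identical: the DFT diagonalises circular convolution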
Abstract:
INTRODUCTION: Breast milk fatty acids play a major role in infant development. However, no data have compared the breast milk composition of different ethnic groups living in the same environment. We aimed to (i) investigate the breast milk fatty acid composition of three ethnic groups in Singapore and (ii) determine dietary fatty acid patterns in these groups and any association with breast milk fatty acid composition. MATERIALS AND METHODS: This was a prospective study conducted at a tertiary hospital in Singapore. Healthy pregnant women with the intention to breastfeed were recruited. Diet profile was studied using a standard validated 3-day food diary. Breast milk was collected from mothers at 1 to 2 weeks and 6 to 8 weeks postnatally. An Agilent gas chromatograph (6870N) equipped with a mass spectrometer (5975) and an automatic liquid sampler (ALS) system with a split mode was used for analysis. RESULTS: Seventy-two breast milk samples were obtained from 52 subjects. Analysis showed that breast milk ETA (eicosatetraenoic acid) and the ETA:EA (eicosatrienoic acid) ratio were significantly different among the ethnic groups (P = 0.031 and P = 0.020), with ETA being highest among Indians and lowest among Malays. Docosahexaenoic acid was significantly higher among Chinese compared to Indians and Malays. No difference was demonstrated in n-3 and n-6 levels in the food diary analysis among the three ethnic groups. CONCLUSIONS: Differences exist in breast milk fatty acid composition between ethnic groups in the same region, although no difference was demonstrated in the diet analysis. Factors other than maternal diet may play a role in breast milk fatty acid composition.
Abstract:
A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms. A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods for testing path traversability, for losing excess altitude, and for the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds.
A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly. This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the onboard autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
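For reference, the baseline lateral guidance law of Park, Deyst, and How (2007) that the ENG algorithm builds upon commands a lateral acceleration toward a reference point selected a fixed distance L_1 ahead of the aircraft on the desired path (standard published formulation; the wind augmentation described above is not shown):

    a_{cmd} = \frac{2 V^2}{L_1} \sin \eta,

where V is the vehicle speed and \eta is the angle between the velocity vector and the line from the aircraft to the reference point. For small \eta the law linearises to a proportional-derivative controller on cross-track error, which is what makes it attractive for smooth path following.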