954 results for STATIONARY SPACETIMES


Relevance:

10.00%

Publisher:

Abstract:

Diarrhoea is one of the leading causes of morbidity and mortality in populations in developing countries and is a significant health issue throughout the world. Despite the frequency and the severity of diarrhoeal disease, the mechanisms of pathogenesis for many of the causative agents have been poorly characterised. Although implicated in a number of intestinal and extra-intestinal infections in humans, Plesiomonas shigelloides generally has been dismissed as an enteropathogen due to the lack of clearly demonstrated virulence-associated properties, such as production of cytotoxins and enterotoxins or invasive abilities. However, evidence from a number of sources has indicated that this species may be the cause of a number of clinical infections. The work described in this thesis seeks to resolve this discrepancy by investigating the pathogenic potential of P. shigelloides using in vitro cell models. The focus of this research centres on how this organism interacts with human host cells in an experimental model. Very little is known about the pathogenic potential of P. shigelloides and its mechanisms in human infections and disease; however, disease manifestations mimic those of other related microorganisms. Chapter 2 reviews microbial pathogenesis in general, with an emphasis on understanding the mechanisms resulting from infection with bacterial pathogens and the alterations in host cell biology. In addition, this review analyses the pathogenic status of a poorly defined enteropathogen, P. shigelloides. Key stages of pathogenicity must occur in order for a bacterial pathogen to cause disease. Such stages include bacterial adherence to host tissue, bacterial entry into host tissues (usually required), multiplication within host tissues, evasion of host defence mechanisms and the causation of damage. In this study, evidence for these key strategies in infection and disease was sought to help assess the pathogenic potential of P. shigelloides (Chapter 3). Twelve isolates of P.
shigelloides, obtained from clinical cases of gastroenteritis, were used to infect monolayers of human intestinal epithelial cells in vitro. Ultrastructural analysis demonstrated that P. shigelloides was able to adhere to the microvilli at the apical surface of the epithelial cells and also to the plasma membranes of both apical and basal surfaces. Furthermore, it was demonstrated that these isolates were able to enter intestinal epithelial cells. Internalised bacteria often were confined within vacuoles surrounded by single or multiple membranes. Observation of bacteria within membrane-bound vacuoles suggests that uptake of P. shigelloides into intestinal epithelial cells occurs via a process morphologically comparable to phagocytosis. Bacterial cells also were observed free in the host cell cytoplasm, indicating that P. shigelloides is able to escape from the surrounding vacuolar membrane and exist within the cytosol of the host. Plesiomonas shigelloides has not only been implicated in gastrointestinal infections, but also in a range of non-intestinal infections such as cholecystitis, proctitis, septicaemia and meningitis. The mechanisms by which P. shigelloides causes these infections are not understood. Previous research was unable to ascertain the pathogenic potential of P. shigelloides using cells of non-intestinal origin (HEp-2 cells derived from a human larynx carcinoma and HeLa cells derived from a cervical carcinoma). However, with the recent findings (from this study) that P. shigelloides can adhere to and enter intestinal cells, it was hypothesised that P. shigelloides would be able to enter HeLa and HEp-2 cells. Six clinical isolates of P. shigelloides, which previously have been shown to be invasive to intestinally derived Caco-2 cells (Chapter 3), were used to study interactions with HeLa and HEp-2 cells (Chapter 4). These isolates were shown to adhere to and enter both non-intestinal host cell lines.
Plesiomonas shigelloides were observed within vacuoles surrounded by single and multiple membranes, as well as free in the host cell cytosol, similar to P. shigelloides infection of Caco-2 cells. Comparisons of the number of bacteria adhered to and present intracellularly within HeLa, HEp-2 and Caco-2 cells revealed a preference of P. shigelloides for Caco-2 cells. This study conclusively showed for the first time that P. shigelloides is able to enter HEp-2 and HeLa cells, demonstrating the potential ability to cause an infection and/or disease of extra-intestinal sites in humans. Further high-resolution ultrastructural analysis of the mechanisms involved in P. shigelloides adherence to intestinal epithelial cells (Chapter 5) revealed numerous prominent surface features which appeared to be involved in the binding of P. shigelloides to host cells. These surface structures varied in morphology from small bumps across the bacterial cell surface to much longer filaments. Evidence that flagella might play a role in bacterial adherence also was found. The hypothesis that filamentous appendages are morphologically expressed when in contact with host cells also was tested. Observations of bacteria free in the host cell cytosol suggest that P. shigelloides is able to lyse free from the initial vacuolar compartment. The vacuoles containing P. shigelloides within host cells have not been characterised and the point at which P. shigelloides escapes from the surrounding vacuolar compartment has not been determined. A cytochemical detection assay for acid phosphatase, an enzymatic marker for lysosomes, was used to analyse the co-localisation of bacteria-containing vacuoles and acid phosphatase activity (Chapter 6). Acid phosphatase activity was not detected in these bacteria-containing vacuoles. However, the surface of many intracellular and extracellular bacteria demonstrated high levels of acid phosphatase activity, leading to the proposal of a new virulence factor for P.
shigelloides. For many pathogens, the efficiency with which they adhere to and enter host cells is dependent upon the bacterial phase of growth. Such dependency reflects the timing of expression of particular virulence factors important for bacterial pathogenesis. In previous studies (Chapter 3 to Chapter 6), an overnight culture of P. shigelloides was used to investigate a number of interactions; however, it was unknown whether this allowed expression of bacterial factors to permit efficient P. shigelloides attachment and entry into human cells. In this study (Chapter 7), a number of clinical and environmental P. shigelloides isolates were investigated to determine whether adherence and entry into host cells in vitro was more efficient during exponential-phase or stationary-phase bacterial growth. An increase in the number of adherent and intracellular bacteria was demonstrated when bacteria were inoculated into host cell cultures in exponential phase. This was demonstrated clearly for 3 out of 4 isolates examined. In addition, an increase in the morphological expression of filamentous appendages, a suggested virulence factor for P. shigelloides, was observed for bacteria in exponential growth phase. These observations suggest that virulence determinants of P. shigelloides may be more efficiently expressed when bacteria are in exponential growth phase. This study also demonstrated, for the first time, that environmental water isolates of P. shigelloides were able to adhere to and enter human intestinal cells in vitro. These isolates were seen to enter Caco-2 host cells through a process comparable to that of the clinical isolates examined. These findings support the hypothesis of a waterborne transmission route for P. shigelloides infections. The results presented in this thesis contribute significantly to our understanding of the pathogenic mechanisms involved in P. shigelloides infections and disease. Several of the factors involved in P.
shigelloides pathogenesis have homologues in other pathogens of the human intestine, namely Vibrio, Aeromonas, Salmonella and Shigella species and diarrhoea-associated strains of Escherichia coli. This study emphasises the relevance of research into Plesiomonas as a means of furthering our understanding of bacterial virulence in general. It also provides tantalising clues about normal and pathogenic host cell mechanisms.

Relevance:

10.00%

Publisher:

Abstract:

My research investigates why nouns are learned disproportionately more frequently than other kinds of words during early language acquisition (Gentner, 1982; Gleitman et al., 2004). This question must be considered in the context of cognitive development in general. Infants have two major streams of environmental information to make meaningful: perceptual and linguistic. Perceptual information flows in from the senses and is processed into symbolic representations by the primitive language of thought (Fodor, 1975). These symbolic representations are then linked to linguistic input to enable language comprehension and ultimately production. Yet how exactly does perceptual information become conceptualized? Although this question is difficult, there has been progress. One way that children might have an easier job is if they have structures that simplify the data. Thus, if particular sorts of perceptual information could be separated from the mass of input, then it would be easier for children to refer to those specific things when learning words (Spelke, 1990; Pylyshyn, 2003). It would be easier still if linguistic input were segmented in predictable ways (Gentner, 1982; Gleitman et al., 2004). Unfortunately, the frequency of patterns in lexical or grammatical input cannot explain the cross-cultural and cross-linguistic tendency to favor nouns over verbs and predicates. There are three examples of this failure: 1) a wide variety of nouns are uttered less frequently than a smaller number of verbs and yet are learnt far more easily (Gentner, 1982); 2) word order and morphological transparency offer no insight when the sentence structures and word inflections of different languages are contrasted (Slobin, 1973); and 3) particular language-teaching behaviors (e.g. pointing at objects and repeating names for them) have little impact on children's tendency to prefer concrete nouns in their first fifty words (Newport et al., 1977).
Although the linguistic solution appears problematic, there has been increasing evidence that the early visual system does indeed segment perceptual information in specific ways before the conscious mind begins to intervene (Pylyshyn, 2003). I argue that nouns are easier to learn because their referents directly connect with innate features of the perceptual faculty. This hypothesis stems from work done on visual indexes by Zenon Pylyshyn (2001, 2003). Pylyshyn argues that the early visual system (the architecture of the "vision module") segments perceptual data into pre-conceptual proto-objects called FINSTs. FINSTs typically correspond to physical things such as Spelke objects (Spelke, 1990). Hence, before conceptualization, visual objects are picked out by the perceptual system demonstratively, like a pointing finger indicating ‘this’ or ‘that’. I suggest that this primitive system of demonstration elaborates on Gareth Evans's (1982) theory of nonconceptual content. Nouns are learnt first because their referents attract demonstrative visual indexes. This theory also explains why infants less often name stationary objects such as ‘plate’ or ‘table’, but do name things that attract the focal attention of the early visual system, i.e., small objects that move, such as ‘dog’ or ‘ball’. This view leaves open the question of how blind children learn words for visible objects and why children learn category nouns (e.g. 'dog') rather than proper nouns (e.g. 'Fido') or higher taxonomic distinctions (e.g. 'animal').

Relevance:

10.00%

Publisher:

Abstract:

We advance the proposition that dynamic stochastic general equilibrium (DSGE) models should not only be estimated and evaluated with full information methods. These require that the complete system of equations be specified properly. Some limited information analysis, which focuses upon specific equations, is therefore likely to be a useful complement to full system analysis. Two major problems occur when implementing limited information methods. These are the presence of forward-looking expectations in the system as well as unobservable non-stationary variables. We present methods for dealing with both of these difficulties, and illustrate the interaction between full and limited information methods using a well-known model.
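As a toy numerical illustration of the point (not taken from the paper, and with the forward-looking-expectations complication set aside), the sketch below contrasts a full-sample OLS estimate with a limited-information, single-equation instrumental-variable estimate when a regressor is endogenous. The instrument z, the coefficients and all data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical single structural equation y = beta * x + u, where the
# regressor x is endogenous (correlated with the error u) and z is one
# valid instrument. Everything here is illustrative.
beta_true = 0.5
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogeneity enters via 0.5 * u
y = beta_true * x + u

beta_ols = (x @ y) / (x @ x)  # system-free OLS: biased upward here
beta_iv = (z @ y) / (z @ x)   # limited-information IV estimate: consistent
```

The IV estimate recovers the structural coefficient even though the equation is analysed in isolation, which is the sense in which limited-information methods complement full-system estimation.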

Relevance:

10.00%

Publisher:

Abstract:

Impedance cardiography is an application of bioimpedance analysis primarily used in a research setting to determine cardiac output. It is a non-invasive technique that measures the change in the impedance of the thorax which is attributed to the ejection of a volume of blood from the heart. The cardiac output is calculated from the measured impedance using the parallel conductor theory and a constant value for the resistivity of blood. However, the resistivity of blood has been shown to be velocity-dependent due to changes in the orientation of red blood cells induced by changing shear forces during flow. The overall goal of this thesis was to study the effect that flow deviations have on the electrical impedance of blood, both experimentally and theoretically, and to apply the results to a clinical setting. The resistivity of stationary blood is isotropic, as the red blood cells are randomly orientated due to Brownian motion. In the case of blood flowing through rigid tubes, the resistivity is anisotropic due to the biconcave discoidal shape and orientation of the cells. The generation of shear forces across the width of the tube during flow causes the cells to align with their minimal cross-sectional area facing the direction of flow, in order to minimise the shear stress experienced by the cells. This in turn results in a larger cross-sectional area of plasma and a reduction in the resistivity of the blood as the flow increases. Understanding the contribution of this effect to the thoracic impedance change is a vital step in achieving clinical acceptance of impedance cardiography. Published literature investigates the resistivity variations for constant blood flow. In this case, the shear forces are constant and the impedance remains constant during flow, at a magnitude less than that for stationary blood.
The research presented in this thesis, however, investigates the variations in resistivity of blood during pulsatile flow through rigid tubes and the relationship between impedance, velocity and acceleration. Using rigid tubes isolates the impedance change to variations associated with changes in cell orientation only. The implications of red blood cell orientation changes for clinical impedance cardiography were also explored. This was achieved through measurement and analysis of the experimental impedance of pulsatile blood flowing through rigid tubes in a mock circulatory system. A novel theoretical model including cell orientation dynamics was developed for the impedance of pulsatile blood through rigid tubes. The impedance of flowing blood was theoretically calculated using analytical methods for flow through straight tubes and the numerical Lattice Boltzmann method for flow through complex geometries such as aortic valve stenosis. The result of the analytical theoretical model was compared to the experimental impedance measurements through rigid tubes. The impedance calculated for flow through a stenosis using the Lattice Boltzmann method provides results for comparison with impedance cardiography measurements collected as part of a pilot clinical trial to assess the suitability of using bioimpedance techniques to detect the presence of aortic stenosis. The experimental and theoretical impedance of blood was shown to inversely follow the blood velocity during pulsatile flow, with correlations of -0.72 and -0.74 respectively. The results for both the experimental and theoretical investigations demonstrate that the acceleration of the blood is an important factor in determining the impedance, in addition to the velocity. During acceleration, the relationship between impedance and velocity is linear (r² = 0.98 experimental; r² = 0.94 theoretical).
The relationship between the impedance and velocity during the deceleration phase is characterised by a time decay constant, τ, ranging from 10 to 50 s. The high level of agreement between the experimental and theoretically modelled impedance demonstrates the accuracy of the model developed here. An increase in the haematocrit of the blood resulted in an increase in the magnitude of the impedance change due to changes in the orientation of red blood cells. The time decay constant was shown to decrease linearly with the haematocrit for both experimental and theoretical results, although the slope of this decrease was larger in the experimental case. The radius of the tube influences the experimental and theoretical impedance given the same velocity of flow. However, when the velocity was divided by the radius of the tube (termed the reduced average velocity), the impedance response was the same for two experimental tubes with equivalent reduced average velocity but different radii. The temperature of the blood was also shown to affect the impedance, with the impedance decreasing as the temperature increased. These results are the first published for the impedance of pulsatile blood. The experimental impedance change measured orthogonal to the direction of flow is in the opposite direction to that measured in the direction of flow. These results indicate that the impedance of blood flowing through rigid cylindrical tubes is axisymmetric along the radius. This has not previously been verified experimentally. Time-frequency analysis of the experimental results demonstrated that the measured impedance contains the same frequency components, occurring at the same time points in the cycle, as the velocity signal. This suggests that the impedance contains many of the fluctuations of the velocity signal.
Application of a theoretical steady flow model to pulsatile flow presented here has verified that the steady flow model is not adequate for calculating the impedance of pulsatile blood flow. The success of the new theoretical model over the steady flow model demonstrates that the velocity profile is important in determining the impedance of pulsatile blood. The clinical application of the impedance of blood flow through a stenosis was theoretically modelled using the Lattice Boltzmann method (LBM) for fluid flow through complex geometries. The impedance of blood exiting a narrow orifice was calculated for varying degrees of stenosis. Clinical impedance cardiography measurements were also recorded for both aortic valvular stenosis patients (n = 4) and control subjects (n = 4) with structurally normal hearts. This pilot trial was used to corroborate the results of the LBM. Results from both investigations showed that the decay time constant for impedance has potential in the assessment of aortic valve stenosis. In the theoretically modelled case (LBM results), the decay time constant increased with an increase in the degree of stenosis. The clinical results also showed a statistically significant difference in time decay constant between control and test subjects (P = 0.03). The time decay constant calculated for test subjects (τ = 180 - 250 s) is consistently larger than that determined for control subjects (τ = 50 - 130 s). This difference is thought to be due to differences in the orientation response of the cells as blood flows through the stenosis. Such a non-invasive technique using the time decay constant for screening for aortic stenosis provides additional information to that currently given by impedance cardiography techniques and improves the value of the device to practitioners. However, the results still need to be verified in a larger study.
While impedance cardiography has not been widely adopted clinically, it is research such as this that will enable future acceptance of the method.
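The deceleration-phase characterisation above can be sketched numerically. Assuming a simple exponential relaxation Z(t) = Z∞ + ΔZ·exp(−t/τ) — one plausible reading of the time-decay-constant description, not the thesis's actual model — τ can be recovered from measurements by a log-linear least-squares fit. The data below are synthetic:

```python
import numpy as np

def fit_decay_constant(t, z, z_inf):
    """Estimate the time decay constant tau by a log-linear least-squares
    fit, assuming z(t) = z_inf + dz * exp(-t / tau) during deceleration."""
    y = np.log(z - z_inf)           # linearise: y = log(dz) - t / tau
    slope, _ = np.polyfit(t, y, 1)  # fitted slope equals -1 / tau
    return -1.0 / slope

# Synthetic deceleration-phase impedance with tau = 30 s (illustrative only)
t = np.linspace(0.0, 100.0, 200)
z = 100.0 + 5.0 * np.exp(-t / 30.0)
tau = fit_decay_constant(t, z, z_inf=100.0)
```

On real signals the baseline Z∞ would itself have to be estimated and the fit restricted to the deceleration window, but the log-linear step is the same.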

Relevance:

10.00%

Publisher:

Abstract:

This study examined whether the conspicuity of road workers at night can be enhanced by distributing retroreflective strips across the body to present a pattern of biological motion (biomotion). Twenty visually normal drivers (mean age = 40.3 years) participated in an experiment conducted at two open-road work sites (one suburban and one freeway) at night-time. At each site, four road workers walked in place wearing a standard road worker night vest either (a) alone, (b) with additional retroreflective strips on thighs, (c) with additional retroreflective strips on ankles and knees, or (d) with additional retroreflective strips on eight moveable joints (full biomotion). Participants, seated in stationary vehicles at three different distances (80 m, 160 m, 240 m), rated the relative conspicuity of the four road workers. Road worker conspicuity was maximized by the full biomotion configuration at all distances and at both sites. The addition of ankle and knee markings also provided significant benefits relative to the standard vest alone. The effects of clothing configuration were more evident at the freeway site and at shorter distances. Overall, the full biomotion configuration was ranked to be most conspicuous and the vest least conspicuous. These data provide the first evidence that biomotion effectively enhances conspicuity of road workers at open-road work sites.

Relevance:

10.00%

Publisher:

Abstract:

Aims: This study determined whether the visibility benefits of positioning retroreflective strips in biological motion configurations were evident at real-world road work sites. ---------- Methods: 20 visually normal drivers (M = 40.3 years) participated in this study, which was conducted at two road work sites (one suburban and one freeway) on two separate nights. At each site, four road workers walked in place wearing one of four different clothing options: a) standard road worker night vest, b) standard night vest plus retroreflective strips on thighs, c) standard night vest plus retroreflective strips on ankles and knees, d) standard night vest plus retroreflective strips on eight moveable joints (full biomotion). Participants seated in stationary vehicles at three different distances (80 m, 160 m, 240 m) rated the relative conspicuity of the four road workers using a series of standardized visibility and ranking scales. ---------- Results: Adding retroreflective strips in the full biomotion configuration to the standard night vest significantly (p<0.001) enhanced perceptions of road worker visibility compared to the standard vest alone, or in combination with thigh retroreflective markings. These visibility benefits were evident at all distances and at both sites. Retroreflective markings at the ankles and knees also provided visibility benefits compared to the standard vest; however, the full biomotion configuration was significantly better than all of the other configurations. ---------- Conclusions: These data provide the first evidence that the benefits of biomotion retroreflective markings that have been previously demonstrated under laboratory and closed- and open-road conditions are also evident at real work sites.

Relevance:

10.00%

Publisher:

Abstract:

For many decades, correlation and the power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence, which is sufficient for a complete statistical description of Gaussian signals of known mean. However, there are practical situations where one needs to look beyond the autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most biomedical signals are non-linear, non-stationary and non-Gaussian in nature, and therefore it can be more advantageous to analyze them with HOS than with second order correlations and power spectra. In this paper we discuss the application of HOS to different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal, and applications to other signals are reviewed.
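The simplest member of the HOS family can be illustrated directly. The sketch below (illustrative only, not code from the paper) estimates the normalised third-order cumulant of a signal, which vanishes for Gaussian data and therefore flags deviation from Gaussianity:

```python
import numpy as np

def normalised_third_cumulant(x):
    """Normalised third-order cumulant (skewness) of a signal: close to
    zero for Gaussian data, non-zero when the signal is non-Gaussian."""
    xc = x - x.mean()
    return np.mean(xc ** 3) / np.mean(xc ** 2) ** 1.5

rng = np.random.default_rng(1)
gaussian = rng.normal(size=100_000)  # Gaussian surrogate signal
nonlinear = gaussian ** 2            # a non-linear transform yields non-Gaussianity

c3_gaussian = normalised_third_cumulant(gaussian)    # approximately 0
c3_nonlinear = normalised_third_cumulant(nonlinear)  # clearly non-zero
```

The full bispectrum extends this idea to pairs of frequencies and additionally exposes the phase relations mentioned above.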

Relevance:

10.00%

Publisher:

Abstract:

The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. The HRV signal can be used as a base signal to observe the heart's functioning. These signals are non-linear and non-stationary in nature, so higher order spectral (HOS) analysis, which is more suitable for non-linear systems and is robust to noise, was used. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, we extracted seven features from the heart rate signals using HOS and fed them to a support vector machine (SVM) for classification. Our performance evaluation protocol uses 330 subjects covering five different kinds of cardiac disease conditions. We demonstrate a sensitivity of 90% for the classifier, with a specificity of 87.93%. Our system is ready to run on larger data sets.
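The reported performance figures follow directly from a confusion matrix. A minimal sketch, using hypothetical confusion counts chosen only so that the rates reproduce the quoted 90% sensitivity and 87.93% specificity (they are not the study's actual subject counts):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts, chosen to match the abstract's rates.
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=51, fp=7)
```

For the multi-class setting described (five disease conditions), these rates would typically be computed one-vs-rest per class and then averaged.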

Relevance:

10.00%

Publisher:

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, with these signals being referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools to deal with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, like the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the Lecture Notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr.
Amin Z. Sadik (at QUT & RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, z, discrete-time Fourier, and discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
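Convolution, the first of the Part I core topics listed above, can be sketched in a few lines. This is an illustrative direct-form implementation, not code from the book:

```python
import numpy as np

def convolve(x, h):
    """Direct-form linear convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = np.zeros(len(x) + len(h) - 1)
    h = np.asarray(h, dtype=float)
    for k, xk in enumerate(x):
        y[k:k + len(h)] += xk * h  # shift-and-accumulate form of the sum
    return y

y = convolve([1.0, 2.0, 1.0], [1.0, 1.0])  # two-tap moving-sum smoother
```

The output length len(x) + len(h) - 1 is the standard full-convolution length; FFT-based methods compute the same result more efficiently for long signals.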

Relevance:

10.00%

Publisher:

Abstract:

A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned aircraft forced landing techniques to suit the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms.
A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods for testing path traversability, for losing excess altitude, and for the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds. A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly.
This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between the aircraft and the path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace and a wider appreciation of the benefits they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
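The baseline lateral guidance law of Park, Deyst, and How (2007) that the ENG algorithm builds on commands a lateral acceleration toward a reference point placed a fixed distance L1 ahead on the desired path. A minimal sketch of that baseline law, without the wind-aware enhancements described above, might look like the following; the function name and parameter values are illustrative.

```python
import math

def l1_lateral_accel(v_ms, l1_m, eta_rad):
    """Baseline nonlinear lateral guidance (Park, Deyst and How, 2007):
    a reference point is selected a distance l1_m ahead on the desired
    path, and the commanded lateral acceleration rotates the velocity
    vector toward it.  eta_rad is the angle between the velocity vector
    and the line from the aircraft to the reference point."""
    return 2.0 * v_ms ** 2 / l1_m * math.sin(eta_rad)

# 20 m/s airspeed, 50 m lookahead, 10 degrees off the reference line:
a_cmd = l1_lateral_accel(20.0, 50.0, math.radians(10))
```

Because the command scales with sin(eta), the law behaves like a proportional-derivative controller near the path yet remains well defined at large offsets, which is one reason it suits the post-failure geometry of a forced landing.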

Relevância:

10.00%

Publicador:

Resumo:

The purpose of this preliminary study was to determine the relevance of the categorization of load regime data for assessing the functional output and usage of the prosthesis of lower limb amputees. The objectives were a) to introduce a categorization of load regime, b) to present some descriptors of each activity, and c) to report the results for a case. The load applied on the osseointegrated fixation of one transfemoral amputee was recorded using a portable kinetic system for 5 hours. The periods of directional locomotion, localized locomotion, and stationary loading occurred for 44%, 34%, and 22% of the recording time and accounted for 51%, 38%, and 12% of the duration of the periods of activity, respectively. The absolute maximum force during directional locomotion, localized locomotion, and stationary loading was 19%, 15%, and 8% of the body weight on the anteroposterior axis, 20%, 19%, and 12% on the mediolateral axis, and 121%, 106%, and 99% on the long axis. A total of 2,783 gait cycles were recorded. Approximately 10% more gait cycles and 50% more of the total impulse were identified than with conventional analyses. The proposed categorization and apparatus have the potential to complement conventional instruments, particularly for difficult cases.
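Descriptors such as total impulse and gait-cycle counts can be computed directly from a raw force recording. The toy sketch below assumes a single-axis force trace sampled at known times; the threshold value and function name are assumptions for illustration and do not reflect the study's actual detection criteria.

```python
def summarise_load(times_s, forces_n):
    """Toy descriptors for a load recording: total impulse by trapezoidal
    integration, and a crude gait-cycle count from rising crossings of a
    force threshold.  Both the 100 N threshold and the single-axis input
    are illustrative, not the study's method."""
    impulse = 0.0
    for i in range(1, len(times_s)):
        impulse += 0.5 * (forces_n[i] + forces_n[i - 1]) * \
                   (times_s[i] - times_s[i - 1])
    threshold = 100.0  # N; each rising crossing counts as one loading cycle
    cycles = sum(1 for i in range(1, len(forces_n))
                 if forces_n[i - 1] < threshold <= forces_n[i])
    return impulse, cycles
```

Computing impulse over the whole 5-hour recording, rather than only over recognised gait events, is one way such an apparatus could capture the roughly 50% of total impulse that conventional gait-lab analyses miss.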

Relevância:

10.00%

Publicador:

Resumo:

We seek numerical methods for second‐order stochastic differential equations that reproduce the stationary density accurately for all values of damping. A complete analysis is possible for scalar linear second‐order equations (damped harmonic oscillators with additive noise), where the statistics are Gaussian and can be calculated exactly in the continuous‐time and discrete‐time cases. A matrix equation is given for the stationary variances and correlation for methods using one Gaussian random variable per timestep. The only Runge–Kutta method with a nonsingular tableau matrix that gives the exact steady state density for all values of damping is the implicit midpoint rule. Numerical experiments, comparing the implicit midpoint rule with Heun and leapfrog methods on nonlinear equations with additive or multiplicative noise, produce behavior similar to the linear case.
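The implicit midpoint rule can be checked against the exact stationary variances directly. For the linear test equation dx = v dt, dv = (-k x - c v) dt + sigma dW, the exact stationary variances are sigma^2/(2ck) for x and sigma^2/(2c) for v. The sketch below (function name and parameter values are assumptions) simulates this system with the implicit midpoint rule, which for a linear drift reduces to a 2x2 linear solve per step, and returns the sample variances:

```python
import math
import random

def midpoint_stationary_variance(k=1.0, c=1.0, sigma=1.0, h=0.5,
                                 n_steps=200_000, seed=42):
    """Implicit midpoint rule for dx = v dt, dv = (-k*x - c*v) dt + sigma dW.
    The step solves (I - h/2 A) y_{n+1} = (I + h/2 A) y_n + (0, sigma*sqrt(h)*xi)
    with A = [[0, 1], [-k, -c]], one Gaussian variable per timestep.
    Returns the sample variances of x and v."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    a11, a12 = 1.0, -h / 2            # rows of (I - h/2 A)
    a21, a22 = h * k / 2, 1.0 + h * c / 2
    det = a11 * a22 - a12 * a21
    sx = sv = sxx = svv = 0.0
    for _ in range(n_steps):
        xi = rng.gauss(0.0, 1.0)
        b1 = x + h / 2 * v                                  # (I + h/2 A) y_n
        b2 = v + h / 2 * (-k * x - c * v) + sigma * math.sqrt(h) * xi
        x = (b1 * a22 - b2 * a12) / det                     # Cramer's rule
        v = (a11 * b2 - a21 * b1) / det
        sx += x; sv += v; sxx += x * x; svv += v * v
    var_x = sxx / n_steps - (sx / n_steps) ** 2
    var_v = svv / n_steps - (sv / n_steps) ** 2
    return var_x, var_v
```

With the default parameters both exact variances equal 0.5, and the simulated values should agree to within sampling error even at the coarse step h = 0.5, illustrating the damping-independent accuracy claimed above.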

Relevância:

10.00%

Publicador:

Resumo:

A general procedure to determine the principal domain (i.e., the nonredundant region of computation) of any higher-order spectrum is presented, using the bispectrum as an example. The procedure is then applied to derive the principal domain of the trispectrum of a real-valued, stationary time series. These results are easily extended to compute the principal domains of other higher-order spectra.
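For a real, stationary series, the bispectrum's permutation and conjugation symmetries reduce the (f1, f2) plane to a small nonredundant region. The sketch below enumerates the commonly quoted inner-triangle bins for an N-point DFT; note that the general procedure described above can identify additional nonredundant regions in the discrete-time case, which this simple sketch does not capture, and the function name is an assumption.

```python
def bispectrum_principal_domain(n_fft):
    """Enumerate the inner-triangle nonredundant (k1, k2) bin pairs of the
    bispectrum of a real time series for an n_fft-point DFT.  Symmetries
    such as B(k1,k2) = B(k2,k1) and B(k1,k2) = conj(B(-k1,-k2)) reduce
    the full plane to 0 <= k2 <= k1 with k1 + k2 <= n_fft // 2."""
    return [(k1, k2)
            for k1 in range(n_fft // 2 + 1)
            for k2 in range(k1 + 1)
            if k1 + k2 <= n_fft // 2]
```

Restricting estimation to this region avoids recomputing statistically identical values, which matters because higher-order spectra are expensive: the trispectrum's redundancy grows with the extra frequency dimension in the same way.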

Relevância:

10.00%

Publicador:

Resumo:

An experimental laboratory investigation was carried out to assess the structural adequacy of a disused PHO Class Flat Bottom Rail Wagon (FRW) for a single-lane, low-volume road bridge application as per the design provisions of the Australian Bridge Design Standard AS 5100 (2004). The investigation also encompassed a review of the risk associated with pre-existing damage incurred by wagons during their service life on rail. The main objective of the laboratory testing of the FRW was to physically measure its performance under the same applied traffic loading it would be required to resist as a road bridge deck. To achieve this, a full-width (5.2 m), single-lane, single-span (approximately 10 m), simply supported bridge would need to be constructed and tested in a structural laboratory. However, the clear spacing between the columns of the laboratory's loading portal frame was insufficient to accommodate the 5.2 m wide bridge deck, even before allowing for the clearance normally required in structural testing. Therefore, only half of the full-scale bridge deck (a single FRW of width 2.6 m) could be accommodated and tested, with the continuity of the bridge deck in the lateral direction represented by boundary constraints applied along the full length of the FRW at six selected locations. To the best of the author's knowledge, this represents a novel approach to bridge deck testing not yet reported in the literature. The test was carried out under two loadings provided in AS 5100 (2004): a stationary W80 wheel load and a moving M1600 axle load. As the bridge investigated in this study is a single-lane, single-span, low-volume road bridge, the risk posed by pre-existing damage and the potential for high-cycle fatigue failure were assessed as minimal, and hence the bridge deck was not tested structurally for fatigue/fracture.
The investigation instead focused on the serviceability and ultimate limit state requirements under high axle loads. The adopted testing regime nevertheless involved extensive recording of strains and deflections at several critical locations on the FRW. Three W80 point-load locations and two M1600 axle-load locations were considered for the serviceability testing; the FRW was also tested under the ultimate load dictated by the M1600. The outcomes of the experimental investigation have demonstrated that the FRW is structurally adequate to resist the prescribed traffic loadings set out in AS 5100 (2004). As the loading was applied directly onto the FRW, the laboratory testing is assessed as significantly conservative. The FRW bridge deck in the field would only resist the load transferred by the running platform, where, depending on the design, composite action might exist; the share of the loading to be resisted by the FRW would therefore be smaller than that of the system tested in the laboratory. On this basis, a demonstration bridge is under construction at the time of writing this thesis, and future research will involve field testing in order to assess its performance.

Relevância:

10.00%

Publicador:

Resumo:

People interact with mobile computing devices everywhere: while sitting, walking, running or even driving. Adapting the interface to suit these contexts is important; this paper therefore proposes a simple human activity classification system. Our approach uses a vector magnitude recognition technique to detect and classify when a person is stationary (not walking), casually walking, or jogging, without any prior training. A user study confirmed the accuracy of the approach.
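A vector-magnitude classifier of the kind described can be sketched as follows. The idea is that over a short window of accelerometer samples, the average deviation of the acceleration magnitude from gravity grows with activity intensity. The two thresholds, the window form, and the function name are illustrative assumptions, not the paper's actual decision rule.

```python
import math

def classify_activity(accel_samples, g=9.81):
    """Classify a window of (ax, ay, az) accelerometer samples (m/s^2) as
    'stationary', 'walking' or 'jogging' from the mean deviation of the
    acceleration vector magnitude from gravity.  The 0.5 and 3.0 m/s^2
    thresholds are illustrative only."""
    dev = sum(abs(math.sqrt(ax * ax + ay * ay + az * az) - g)
              for ax, ay, az in accel_samples) / len(accel_samples)
    if dev < 0.5:
        return "stationary"
    if dev < 3.0:
        return "walking"
    return "jogging"
```

Because only the magnitude of the acceleration vector is used, the rule is independent of device orientation, which is what allows this style of classifier to work without per-user training.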