366 results for propagation-rate equations
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user's requirements. Miss-detected incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution, the miss-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking it up in the table. The method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and, in particular, the relationship between the ratio test threshold and the failure rate is examined.
Finally, the factors that influence the ratio test threshold under the fixed failure rate approach are discussed based on extensive data simulation. The results show that, given a proper stochastic model, the fixed failure rate approach is a more reasonable ambiguity validation method.
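The ratio test described above compares the weighted squared distances from the float solution to the best and second-best integer candidates, and accepts the best candidate only when their ratio exceeds a threshold. A minimal sketch, assuming the candidate vectors and the ambiguity variance-covariance matrix are already available (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def ratio_test(float_amb, candidates, Q_inv, threshold):
    """Ratio test for ambiguity validation (illustrative sketch).

    float_amb  : float ambiguity vector from the float GNSS solution
    candidates : integer candidate vectors, ordered best-first
    Q_inv      : inverse of the ambiguity variance-covariance matrix
    threshold  : critical value c -- under the fixed failure rate
                 approach this is looked up from a criteria table
                 rather than chosen empirically
    """
    def sq_norm(z):
        d = float_amb - z
        return d @ Q_inv @ d  # weighted squared distance to candidate

    r_best = sq_norm(candidates[0])
    r_second = sq_norm(candidates[1])
    ratio = r_second / r_best
    return ratio >= threshold, ratio
```

When the float solution lies close to its nearest integer vector the ratio is large and the fix is accepted; under a weak model the ratio approaches 1 and the integer solution is rejected in favour of the float solution.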
Abstract:
Introduction. In vitro spine biomechanical testing has been central to many advances in understanding the physiology and pathology of the human spine. Owing to the difficulty in obtaining sufficient numbers of human samples to conduct these studies, animal spines have been accepted as a substitute model. However, it is difficult to compare results from different studies, as they use different preparation, testing and data collection methods. The aim of this study was to identify the effect of repeated cyclic loading on bovine spine segment stiffness. It also aimed to quantify the effect of multiple freeze-thaw sequences, as many tests would be difficult to complete in a single session [1-3]. Materials and Methods. Thoracic spines from 6-8 week old calves were used. Each spine was dissected and divided into motion segments from levels T4-T11 (n=28), and these were divided into two equal groups. Each segment was potted in polymethylmethacrylate. An Instron biaxial materials testing machine with a custom-made jig was used for testing. The segments were tested in flexion/extension, lateral bending and axial rotation at 37°C and 100% humidity, using moment control to a maximum of ±1.75 Nm at a loading rate of 0.3 Nm per second. Group A was tested with continuous repeated cyclic loading for 500 cycles, with data recorded at cycles 3, 5, 10, 25, 100, 200, 300, 400 and 500. Group B was tested with 10 load cycles after each of 5 freeze-thaw sequences, with data collected from the tenth load cycle after each sequence. Statistical analysis of the data was performed using paired-samples t-tests, ANOVA and generalized estimating equations. Results. The data were confirmed as having a normal distribution. 1. There were significant reductions in mean stiffness in flexion/extension (-20%; p=0.001) and lateral bending (-17%; p=0.009) over the 500 load cycles; however, there was no statistically significant change in axial rotation (p=0.152). 2. There was no statistically significant difference in mean stiffness over the five freeze-thaw sequences in flexion/extension (p=0.879) or axial rotation (p=0.07); however, there was a significant reduction in stiffness in lateral bending (-26%; p=0.007). Conclusion. Biomechanical testing of immature bovine spine motion segments requires careful interpretation. The effect of the number of load cycles, as well as the number of freeze-thaw cycles, on the stiffness of the motion segments depends on the axis of main movement.
Abstract:
To date, the formation of deposits on heat exchanger surfaces is the least understood problem in the design of heat exchangers for processing industries. Dr East has related the structure of the deposits to solution composition and has developed predictive models for composite fouling of calcium oxalate and silica in sugar factory evaporators.
Abstract:
This paper presents a nonlinear gust-attenuation controller based on constrained neural-network (NN) theory. The controller aims to achieve sufficient stability and handling quality for a fixed-wing unmanned aerial system (UAS) in a gusty environment when control inputs are subject to constraints. Constrained inputs emulate situations where aircraft actuators fail, requiring the aircraft to be operated with fail-safe capability. The proposed controller provides gust attenuation and stabilizes the aircraft dynamics in a gusty environment. The flight controller is obtained by solving the Hamilton-Jacobi-Isaacs (HJI) equations using a policy iteration (PI) approach. Performance of the controller is evaluated using a high-fidelity six degree-of-freedom Shadow UAS model. Simulations show that the controller delivers substantial performance improvement in a gusty environment, especially in angle-of-attack (AOA), pitch and pitch rate. Comparative studies with proportional-integral-derivative (PID) controllers confirm the efficiency of the proposed controller and verify its suitability for integration into flight control systems for forced landing of UASs.
Abstract:
Three native freshwater crayfish Cherax species are farmed in Australia, namely redclaw (Cherax quadricarinatus), marron (C. tenuimanus) and yabby (C. destructor). The lack of appropriate data on the specific nutrient requirements of each species, however, has constrained the development of species-specific formulated diets, and the current use of over-formulated feeds or expensive marine shrimp feeds limits profitability. A number of studies have investigated nutritional requirements in redclaw, focusing on replacing expensive fish meal in formulated feeds with less expensive non-protein substitutes, including plant-based ingredients. Confirmation that freshwater crayfish possess endogenous cellulase genes suggests a potential ability to utilize complex carbohydrates such as cellulose as nutrient sources in their diet. To date, studies have been limited to C. quadricarinatus and C. destructor, and no studies have compared the relative ability of each species to utilize soluble cellulose in their diets. Individual feeding trials of late juveniles of each species were conducted separately in an automated recirculating culture system over 12-week cycles. Animals were fed either a test diet (TD) that contained 20% soluble cellulose or a reference diet (RD) substituted with the same amount of corn starch. Water temperature, conductivity and pH were maintained at constant, optimum levels for each species. Animals were fed at 3% of their body weight twice daily, and wet body weight was recorded bi-weekly. At the end of the experiment, all animals were harvested and measured, and midgut gland extracts were assayed for alpha-amylase, total protease and cellulase activity levels.
After the trial period, redclaw fed the RD showed a significantly higher (p<0.05) specific growth rate (SGR) compared with animals fed the TD, while the SGRs of marron and yabby fed the two diets were not significantly different (p>0.05). Cellulase expression levels in redclaw were not significantly different between diets. Marron and yabby showed significantly higher cellulase activity when fed the RD. Amylase and protease activity in all three species were significantly higher in animals fed the RD (Table 1). These results indicate that test animals of all species can utilize starch better than dietary soluble cellulose, and that inclusion of 20% soluble cellulose in the diet does not appear to have any significant negative effect on growth rate; survival, however, was impacted in C. quadricarinatus but not in C. tenuimanus or C. destructor.
Abstract:
The current study evaluated the effect of soluble dietary cellulose on growth, survival and digestive enzyme activity in three endemic Australian freshwater crayfish species (redclaw: Cherax quadricarinatus, marron: C. tenuimanus, yabby: C. destructor). Separate individual feeding trials were conducted for late-stage juveniles of each species in an automated recirculating freshwater culture system. Animals were fed either a test diet (TD) that contained 20% soluble cellulose or a reference diet (RD) substituted with the same amount of corn starch, over a 12-week period. Redclaw fed the RD showed significantly higher (p<0.05) specific growth rates (SGR) compared with animals fed the TD, while the SGRs of marron and yabby fed the two diets were not significantly different. Expressed cellulase activity levels in redclaw were not significantly different between diets. Marron and yabby showed significantly higher cellulase activity when fed the RD (p<0.05). Amylase and protease activity in all three species were significantly higher in animals fed the RD (p<0.05). These results indicate that test animals of all three species appear to utilize starch more efficiently than soluble dietary cellulose. The inclusion of 20% soluble cellulose in diets did not, however, appear to have a significant negative effect on growth rates.
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications such as mobile phone conversations and Internet traffic. They are ideal for encrypting these types of traffic, as they have the potential to encrypt quickly and securely and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure, so modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, its output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected in the final portfolio of a multi-year European project run by the ECRYPT network. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature which are faster than exhaustive keysearch. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update function and a linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations. This has implications for the period of keystreams produced by Trivium.
Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analyses with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive keysearch. We also show that the selection of stages used as input to the output function, and the size of the registers used in the construction of the system of equations, affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence because the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined; we show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers.
We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in the sequences these generators produce, and that the effect differs for the two constructions. In the case of NLFGs, the keystream sequences produced when the output function takes inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
Abstract:
The problem of estimating pseudobearing rate information of an airborne target based on measurements from a vision sensor is considered. Novel image speed and heading angle estimators are presented that exploit image morphology, hidden Markov model (HMM) filtering, and relative entropy rate (RER) concepts to allow pseudobearing rate information to be determined before (or whilst) the target track is being estimated from vision information.
Abstract:
The mean action time is the mean of a probability density function that can be interpreted as a critical time, which is a finite estimate of the time taken for the transient solution of a reaction-diffusion equation to effectively reach steady state. For high-variance distributions, the mean action time under-approximates the critical time since it neglects to account for the spread about the mean. We can improve our estimate of the critical time by calculating the higher moments of the probability density function, called the moments of action, which provide additional information regarding the spread about the mean. Existing methods for calculating the nth moment of action require the solution of n nonhomogeneous boundary value problems which can be difficult and tedious to solve exactly. Here we present a simplified approach using Laplace transforms which allows us to calculate the nth moment of action without solving this family of boundary value problems and also without solving for the transient solution of the underlying reaction-diffusion problem. We demonstrate the generality of our method by calculating exact expressions for the moments of action for three problems from the biophysics literature. While the first problem we consider can be solved using existing methods, the second problem, which is readily solved using our approach, is intractable using previous techniques. The third problem illustrates how the Laplace transform approach can be used to study coupled linear reaction-diffusion equations.
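In symbols (the notation here is ours, chosen to match the description above rather than taken from the paper): writing C(x,t) for the transient solution with steady state C_inf(x), the cumulative function, density, and moments of action can be set up as

```latex
% Cumulative function and probability density for the approach to steady state
F(t;x) = 1 - \frac{C(x,t) - C_\infty(x)}{C(x,0) - C_\infty(x)},
\qquad
f(t;x) = \frac{\partial F(t;x)}{\partial t}.
% nth moment of action; the mean action time is T(x) = M_1(x)
M_n(x) = \int_0^\infty t^n f(t;x)\,\mathrm{d}t
       = (-1)^n \left. \frac{\mathrm{d}^n \hat{f}(s;x)}{\mathrm{d}s^n} \right|_{s=0},
\qquad
\hat{f}(s;x) = \int_0^\infty e^{-st} f(t;x)\,\mathrm{d}t.
```

Each moment thus follows from differentiating the Laplace transform of f at s = 0, which is what allows the method to bypass both the family of boundary value problems and the transient solution itself.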
Abstract:
In this paper we explore the relationship between monthly random breath testing (RBT) rates (per 1000 licensed drivers) and alcohol-related traffic crash (ARTC) rates over time, across two Australian states: Queensland and Western Australia. We analyse the RBT, ARTC and licensed driver rates across 12 years; however, due to administrative restrictions, we model ARTC rates against RBT rates for the period July 2004 to June 2009. The Queensland data reveal that the monthly ARTC rate is almost flat over the five-year period, with an average of 5.5 ARTCs per 100,000 licensed drivers observed across the study period. For the same period, the monthly rate of RBTs per 1000 licensed drivers decreased across the study, with the analysis revealing no significant variations in the data. The comparison between Western Australia and Queensland shows that Queensland's ARTC monthly percent change (MPC) is 0.014, compared with an MPC of 0.47 for Western Australia: while Queensland maintains a relatively flat ARTC rate, the ARTC rate in Western Australia is increasing. Our analysis reveals an inverse relationship between ARTC and RBT rates: for every 10% increase in the ratio of RBTs to licensed drivers there is a 0.15 decrease in the rate of ARTCs per 100,000 licensed drivers. Moreover, in Western Australia, if the 2011 ratio of 1:2 (RBTs to annual number of licensed drivers) were to double to a ratio of 1:1, we estimate the number of monthly ARTCs would fall by approximately 15. Based on these findings, we believe that as the number of RBTs conducted increases, the number of drivers willing to risk being detected for drink driving decreases, because the perceived risk of detection is greater; this in turn reduces the number of ARTCs. The results of this study provide an important evidence base for policy decisions on RBT operations.
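The reported association can be read as a simple linear relationship. A minimal illustrative calculation using the figures quoted above (the function itself is ours, not the paper's fitted model):

```python
def predicted_artc_rate(base_rate, rbt_increase_pct, slope_per_10pct=-0.15):
    """Linear reading of the reported association: each 10% increase in
    the ratio of RBTs to licensed drivers is associated with a 0.15 drop
    in ARTCs per 100,000 licensed drivers. Illustrative only; the
    published estimate comes from a model fitted to state data."""
    return base_rate + slope_per_10pct * (rbt_increase_pct / 10.0)
```

For example, starting from the observed Queensland average of 5.5 ARTCs per 100,000 licensed drivers, a 20% rise in RBT coverage would correspond to a rate of about 5.2 under this linear reading.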
Abstract:
There has been considerable recent work on the development of energy-conserving one-step methods that are not symplectic. Here we extend these ideas to stochastic Hamiltonian problems with additive noise and show that there are classes of Runge-Kutta methods that are very effective in preserving the expectation of the Hamiltonian, but care has to be taken in how the Wiener increments are sampled at each timestep. Some numerical simulations illustrate the performance of these methods.
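As a minimal illustration of the kind of method in question (not the specific schemes of the paper), the sketch below applies the implicit midpoint rule, a one-stage Runge-Kutta method, to a harmonic oscillator with additive noise on the momentum. For the deterministic part this method conserves the quadratic Hamiltonian exactly, and how the Wiener increment enters each step is exactly the delicate choice the abstract mentions:

```python
import numpy as np

def implicit_midpoint_step(z, h, A, sigma, dW):
    """One step of the implicit midpoint rule for dz = A z dt + sigma e_p dW.
    For this linear (harmonic oscillator) drift the implicit stage can be
    solved directly. z = (q, p); the noise enters the momentum only."""
    I = np.eye(2)
    rhs = (I + 0.5 * h * A) @ z + sigma * dW * np.array([0.0, 1.0])
    return np.linalg.solve(I - 0.5 * h * A, rhs)

def hamiltonian(z):
    q, p = z
    return 0.5 * (q**2 + p**2)  # H for the unit harmonic oscillator
```

With sigma = 0 the one-step map is a Cayley transform of the skew-symmetric matrix A, hence orthogonal, so H is conserved to machine precision. With additive noise, Ito's formula gives dE[H] = (sigma^2 / 2) dt for this problem, and preserving that linear drift in expectation is what depends on how the increments dW are sampled.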
Abstract:
Purpose: The objectives of this study were to examine the effect of 4-week moderate- and high-intensity interval training (MIIT and HIIT) on fat oxidation and on the responses of blood lactate (BLa) and rating of perceived exertion (RPE). Methods: Ten overweight/obese men (age = 29 ± 3.7 years, BMI = 30.7 ± 3.4 kg/m2) participated in a cross-over study of 4-week MIIT and HIIT training. The MIIT sessions consisted of 5-min cycling stages at mechanical workloads 20% above and 20% below 45% VO2peak. The HIIT sessions consisted of intervals of 30-s work at 90% VO2peak and 30-s rest. Pre- and post-training assessments included VO2max using a graded exercise test (GXT) and fat oxidation using a 45-min constant-load test at 45% VO2max, during which BLa and RPE were also measured. Results: There were no significant changes in body composition with either intervention. There were significant increases in fat oxidation after MIIT and HIIT (p ≤ 0.01), with no effect of intensity. BLa during the constant-load exercise test decreased significantly after MIIT and HIIT (p ≤ 0.01), with no significant difference between MIIT and HIIT (p = 0.09). RPE decreased significantly more after HIIT than after MIIT (p ≤ 0.05). Conclusion: Interval training can increase fat oxidation with no effect of exercise intensity; BLa and RPE decreased after HIIT to a greater extent than after MIIT.
Abstract:
Recent developments in wearable ECG technology have seen renewed interest in the use of Heart Rate Variability (HRV) feedback for stress management, yet little is known about the efficacy of such interventions. Positive reappraisal is an emotion regulation strategy that involves changing the way a situation is construed in order to decrease its emotional impact. We sought to test the effectiveness of an intervention that used feedback on HRV data to prompt positive reappraisal during a stressful work task. Participants (N=122) completed two 20-minute trials of an inbox activity. Between the first and second trials, participants were assigned to a waitlist control condition, a positive reappraisal via psycho-education condition, or a positive reappraisal via HRV feedback condition. Results revealed that using HRV data to frame a positive reappraisal message is more effective than using psycho-education or no intervention, especially for increasing positive mood and reducing arousal.
Abstract:
Lean body mass (LBM) and muscle mass remain difficult to quantify in large epidemiological studies due to the non-availability of inexpensive methods. We therefore developed anthropometric prediction equations to estimate LBM and appendicular lean soft tissue (ALST) using dual-energy X-ray absorptiometry (DXA) as the reference method. Healthy volunteers (n = 2220; 36% female; age 18-79 y) representing a wide range of body mass index (14-44 kg/m2) participated in this study. Their LBM, including ALST, was assessed by DXA along with anthropometric measurements. The sample was divided into prediction (60%) and validation (40%) sets. In the prediction set, a number of prediction models were constructed using the DXA-measured LBM and ALST estimates as dependent variables and combinations of anthropometric indices as independent variables. These equations were cross-validated in the validation set. Simple equations using age, height and weight explained >90% of the variation in LBM and ALST in both men and women. Additional variables (hip and limb circumferences and the sum of skinfold thicknesses) increased the explained variation by 5-8% in the fully adjusted models predicting LBM and ALST. More complex equations using all of the above anthropometric variables predicted the DXA-measured LBM and ALST accurately, as indicated by a low standard error of the estimate (LBM: 1.47 kg and 1.63 kg for men and women, respectively) as well as good agreement in Bland-Altman analyses. These equations could be a valuable tool in large epidemiological studies assessing these body compartments in Indians and other population groups with similar body composition.
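The simple age-height-weight equations described above amount to an ordinary least-squares fit on the prediction set, followed by application to new measurements. A minimal sketch of that procedure (any coefficients it produces come from whatever data it is given; they are not the study's published equations):

```python
import numpy as np

def fit_lbm_equation(age, height_cm, weight_kg, lbm_kg):
    """Fit LBM = b0 + b1*age + b2*height + b3*weight by ordinary least
    squares, mirroring the simple three-predictor models described
    above. Inputs are 1-D arrays over the prediction-set subjects."""
    X = np.column_stack([np.ones_like(age), age, height_cm, weight_kg])
    coef, *_ = np.linalg.lstsq(X, lbm_kg, rcond=None)
    return coef

def predict_lbm(coef, age, height_cm, weight_kg):
    """Apply a fitted equation to new anthropometric measurements."""
    return coef[0] + coef[1] * age + coef[2] * height_cm + coef[3] * weight_kg
```

In the study's design the fit would use the 60% prediction set, with `predict_lbm` evaluated against DXA values in the held-out 40% validation set to obtain the standard error of the estimate and Bland-Altman agreement.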