114 results for holomorphic fourth- R polynomial
at Queensland University of Technology - ePrints Archive
Abstract:
There are several popular soil moisture measurement methods in use today, such as time domain reflectometry, electromagnetic (EM) wave, electrical and acoustic methods. Significant research has been dedicated to developing measurement methods based on these concepts, especially to achieve non-invasiveness. The EM wave method offers an advantage because it is non-invasive: it does not require probes to be inserted into or buried in the soil. However, some EM methods, such as satellite- or UAV (Unmanned Aerial Vehicle)-based sensors, are too complex, expensive and insufficiently portable for Wireless Sensor Network applications. This research proposes a method for detecting changes in soil moisture using soil-reflected electromagnetic (SREM) waves from Wireless Sensor Networks (WSNs). Studies have shown that different levels of soil moisture affect the soil's dielectric properties, such as relative permittivity and conductivity, and in turn change its reflection coefficients. The SREM wave method uses a transmitter adjacent to a WSN node whose sole purpose is to transmit wireless signals that are reflected by the soil. The strength of the reflected signal, which is determined by the soil's reflection coefficients, is used to differentiate levels of soil moisture. The novelty of this method lies in using WSN communication signals to estimate soil moisture without the need for external sensors or invasive equipment. This innovative method is non-invasive, low cost and simple to set up. Three locations in Brisbane, Australia were chosen as experiment sites. The soil at these locations contains 10–20% clay according to the Australian Soil Resource Information System. Six approximate levels of soil moisture (8, 10, 13, 15, 18 and 20%) were measured at each location, with each measurement consisting of 200 data points.
In total, 3600 measurements were completed in this research, which is sufficient to achieve the research objective of assessing and proving the concept of the SREM wave method. These results are compared with reference data from a similar soil type to validate the concept. A fourth-degree polynomial analysis is used to generate an equation that estimates soil moisture from the received signal strength recorded using the SREM wave method.
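The fourth-degree polynomial calibration described above can be sketched as a least-squares fit; the signal-strength and moisture values below are illustrative placeholders, not the thesis data:

```python
import numpy as np

# Hypothetical calibration pairs: received signal strength (dBm) and
# reference soil moisture (%). Values are illustrative only.
rssi = np.array([-62.0, -60.5, -58.0, -56.5, -54.0, -52.0])
moisture = np.array([8.0, 10.0, 13.0, 15.0, 18.0, 20.0])

# Centre the predictor for numerical stability, then fit the
# fourth-degree polynomial mapping signal strength to moisture.
x = rssi - rssi.mean()
coeffs = np.polyfit(x, moisture, deg=4)
estimate = np.polyval(coeffs, x)
print(np.round(estimate, 2))
```

In practice the coefficients would be fitted to the full set of field measurements and validated against the reference data rather than against the training points themselves.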
Abstract:
We present the first detailed application of Meadows's cost-based modelling framework to the analysis of JFK, an Internet key agreement protocol. The analysis identifies two denial of service attacks against the protocol that are possible when an attacker is willing to reveal its source IP address. The first attack was identified through direct application of the cost-based modelling framework, while the second was only identified after considering coordinated attackers. Finally, we demonstrate how the inclusion of client puzzles in the protocol can improve denial of service resistance against both identified attacks.
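Client puzzles force a client to spend computation before the server commits resources, rebalancing the costs in the attacker's favour no longer. A minimal hashcash-style sketch (a generic construction, not the specific puzzle scheme analysed in the paper):

```python
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a byte string."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # zero bits before the first 1
        break
    return bits

def solve_puzzle(server_nonce: bytes, difficulty: int) -> int:
    """Brute-force x so that SHA-256(nonce || x) has `difficulty`
    leading zero bits: costly to solve, cheap to verify."""
    x = 0
    while True:
        digest = hashlib.sha256(server_nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return x
        x += 1

def verify(server_nonce: bytes, x: int, difficulty: int) -> bool:
    digest = hashlib.sha256(server_nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

nonce = os.urandom(16)                           # issued by the server
solution = solve_puzzle(nonce, difficulty=12)    # ~2^12 hashes on average
print(verify(nonce, solution, 12))               # True
```

The server verifies with a single hash, so honest clients pay a small one-off cost while a flooding attacker must pay it per request.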
Abstract:
Contemporary debates on the role of journalism in society continue the tradition of downplaying the role of proactive journalism - generally situated under the catchphrase of the Fourth Estate - in public policy making. This paper puts the case for the retention of a proactive form of journalism, broadly described as "investigative", because it is important to the public policy process in modern democracies. It argues that critiques that downplay the potential of this form of journalism are flawed and overly deterministic. Finally, it seeks to illustrate how journalists can proactively inquire in ways that are relevant to the lives of people in a range of settings, and that question elite sources in the interests of those people.
Abstract:
Industrial employment growth has been one of the most dynamic areas of expansion in Asia; however, current trends in industrialised working environments have resulted in greater employee stress. Despite research showing that cultural values affect the way people cope with stress, there is a dearth of psychometrically established tools for measuring these constructs in non-Western countries. Studies of the Ways of Coping Checklist-Revised (WCCL-R) in the West suggest that it has good psychometric properties, but its applicability in the East remains understudied. A confirmatory factor analysis (CFA) was used to validate the WCCL-R constructs in an Asian population of 1,314 participants from Indonesia, Sri Lanka, Singapore, and Thailand. Initial analysis revealed that the original factor structure was not confirmed; however, a subsequent exploratory factor analysis (EFA) and CFA supported a 38-item, five-factor model. The revised WCCL-R was also found to have good reliability and sound construct and concurrent validity in the Asian sample. The 38-item WCCL-R has considerable potential for future occupational stress research in Asian countries.
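The exploratory step of such an analysis can be sketched with a maximum-likelihood factor model. The data below are synthetic stand-ins for item responses, and scikit-learn's `FactorAnalysis` substitutes for the psychometric software actually used:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for questionnaire responses: 1,314 respondents x
# 38 items generated from 5 latent factors (illustrative only).
n_resp, n_items, n_factors = 1314, 38, 5
loadings = rng.normal(size=(n_factors, n_items))
scores = rng.normal(size=(n_resp, n_factors))
items = scores @ loadings + 0.5 * rng.normal(size=(n_resp, n_items))

# Exploratory factor analysis recovering a 5-factor structure; the
# estimated loadings live in fa.components_ (factors x items).
fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(items)
print(fa.components_.shape)  # (5, 38)
```

A confirmatory analysis would instead fix which items load on which factor and test that constrained model's fit, which requires dedicated SEM software.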
Abstract:
The current study investigates non-linear relationships between the JD-R (Job Demands-Resources) model and work engagement. Previous research has identified linear relationships between these constructs; however, there are strong theoretical arguments for testing curvilinear relationships (e.g., Warr, 1987). Data were collected via a self-report online survey from officers of one Australian police service (N = 2,626). Results demonstrated curvilinear relationships between job demands, job resources, and engagement. Gender (as a control variable) was also a significant predictor of work engagement: male police officers experienced significantly higher job demands and colleague support than female officers, yet female officers reported significantly higher levels of work engagement. This study emphasises the need to test curvilinear relationships, as well as simple linear associations, when measuring psychological health.
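Testing for a curvilinear effect typically amounts to comparing a linear fit with one that adds a squared term. A minimal sketch on simulated scores (not the police-survey data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for survey scores: engagement rises with job
# resources but levels off -- an inverted-U component (illustrative).
resources = rng.uniform(1, 5, size=500)
engagement = (1.0 + 1.2 * resources - 0.15 * resources**2
              + 0.1 * rng.normal(size=500))

# Compare a purely linear fit with one adding a squared term; the
# quadratic coefficient captures the curvilinear effect.
lin = np.polyfit(resources, engagement, 1)
quad = np.polyfit(resources, engagement, 2)

resid_lin = engagement - np.polyval(lin, resources)
resid_quad = engagement - np.polyval(quad, resources)
print(quad[0])  # negative: inverted-U curvature recovered
```

In a hierarchical regression the significance of the squared term's coefficient (or the drop in residual variance) is what licenses the curvilinear interpretation.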
Abstract:
This paper reports results from a study in which we automatically classified the query reformulation patterns for 964,780 Web search sessions (comprising 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek system assistance early in the session or after a content change. Our evaluations show that the first- and second-order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. The implication is that the n-gram approach can be used to improve searching systems and searching assistance in real time.
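A first-order model of this kind reduces to bigram transition counts over reformulation states. A toy sketch with hypothetical session logs and state labels:

```python
from collections import Counter, defaultdict

# Toy session logs of query-reformulation states (illustrative labels,
# not the study's data).
sessions = [
    ["New", "Reformulation", "Assistance", "Reformulation"],
    ["New", "Specialization", "Reformulation", "Assistance"],
    ["New", "Reformulation", "Reformulation", "Assistance"],
]

# First-order (bigram) model: P(next state | current state) estimated
# from transition counts.
transitions = defaultdict(Counter)
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def predict_next(state: str) -> str:
    """Predict the most probable next reformulation state."""
    return transitions[state].most_common(1)[0][0]

print(predict_next("Reformulation"))  # Assistance
```

A second-order model would condition on the previous two states (trigram counts), trading more context for sparser estimates.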
Abstract:
Significant sums of money are invested in developing technological innovations that achieve low levels and rates of adoption. Several approaches have been put forward in an effort to improve adoption rates. This paper presents the results of a study that examined the innovation fit of key technological innovations in the beef industry. Findings indicate that by assessing innovation fit throughout the R&D process, researchers and end users can collaborate to improve both the innovation fit and the rate of adoption. The paper also puts forward a model that demonstrates the linkages between R&D, adoption and innovation fit.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
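The Fisher discriminant step can be sketched for the two-class, two-feature case; the feature vectors below are synthetic stand-ins for the MF-DFA and AR-model parameters, not the thesis estimates:

```python
import numpy as np

def fisher_discriminant(class_a, class_b):
    """Fisher's linear discriminant for two classes:
    w = Sw^{-1} (mu_a - mu_b), with Sw the within-class scatter."""
    mu_a, mu_b = class_a.mean(axis=0), class_b.mean(axis=0)
    sw = (np.cov(class_a.T) * (len(class_a) - 1)
          + np.cov(class_b.T) * (len(class_b) - 1))
    w = np.linalg.solve(sw, mu_a - mu_b)
    threshold = w @ (mu_a + mu_b) / 2   # midpoint decision boundary
    return w, threshold

rng = np.random.default_rng(3)
# Hypothetical 2-D feature vectors (e.g. a scaling exponent and an
# AR-model parameter) for two markets -- synthetic, illustrative only.
market_a = rng.normal([0.55, 1.0], 0.05, size=(100, 2))
market_b = rng.normal([0.45, 0.8], 0.05, size=(100, 2))

w, threshold = fisher_discriminant(market_a, market_b)
pred_a = market_a @ w > threshold
pred_b = market_b @ w > threshold
accuracy = (pred_a.sum() + (~pred_b).sum()) / 200
print(accuracy)
```

Cross-validation, as used in the thesis, would repeat this fit on held-out splits to estimate the discriminant accuracy honestly.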
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
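A sample path of a driftless fractional equation driven by stable noise can be sketched by convolving a discretised Riemann-Liouville kernel with alpha-stable increments; the parameter values here are illustrative, not estimates from the thesis:

```python
import numpy as np
from math import gamma
from scipy.stats import levy_stable

# Illustrative parameters (not thesis estimates): stability index of
# the driving noise and fractional order of the kernel.
alpha, beta = 1.7, 0.3
n, dt = 1000, 1.0 / 1000

# Increments of an alpha-stable Levy process over each time step;
# stable noise scales as dt^(1/alpha) rather than sqrt(dt).
noise = levy_stable.rvs(alpha, 0.0, size=n, random_state=5) * dt ** (1 / alpha)

# Riemann-Liouville kernel (t - s)^(beta - 1) / Gamma(beta), discretised
# on the grid t = dt, 2*dt, ..., so the singularity at 0 is avoided.
t = np.arange(1, n + 1) * dt
kernel = t ** (beta - 1) / gamma(beta)

# The discrete convolution approximates the stochastic integral,
# giving a heavy-tailed path with fractional (memory) dynamics.
path = np.convolve(kernel, noise)[:n]
print(path.shape)
```

The alpha < 2 noise gives the jumps and heavy tails; the power-law kernel gives the long-memory dynamics that a Brownian-driven model cannot combine with non-Gaussianity.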
Abstract:
This book is a thorough investigation of the relationship between land use planning and the railways in Britain, reviewing the factors affecting the two sectors and their integration during the period of public ownership. The book presents itself as a timely analysis of the dynamic relationship between town planning and railway management at a time when growing congestion on the road network is forcing people to look for alternative modes, and capacity is badly needed to accommodate the increased demand for travel. The book calls for a modal shift from road to rail for both passenger and freight traffic.
Abstract:
The 1:1 proton-transfer compound of the potent substituted-amphetamine hallucinogen (R)-1-(8-bromobenzo[1,2-b;4,5-b']difuran-4-yl)-2-aminopropane (trivial name 'bromodragonfly') with 3,5-dinitrosalicylic acid, 1-(8-bromobenzo[1,2-b;4,5-b']difuran-4-yl)-2-ammoniopropane 2-carboxy-4,6-dinitrophenolate, C13H13BrNO2+ C7H3N2O7-, forms hydrogen-bonded cation-anion chain substructures comprising undulating head-to-tail anion chains formed through C(8) carboxyl O-H...O(nitro) associations and incorporating the aminium groups of the cations. The intra-chain cation-anion hydrogen-bonding associations feature proximal cyclic R33(8) interactions involving both N+-H...O(phenolate) and carboxyl O-H...O(nitro) associations. Also present are aromatic pi-pi ring interactions [minimum ring centroid separation, 3.566(2) Å; inter-plane dihedral angle, 5.13(1)°]. A lateral hydrogen-bonding interaction between the third aminium proton and a carboxyl O acceptor links the chain substructures, giving a two-dimensional sheet structure. This determination is the first of any form of this compound and confirms that it has the (R) absolute configuration. The atypical crystal stability is attributed both to the hydrogen-bonded chain substructures provided by the anions, which accommodate the aminium proton-donor groups of the cations and give cross-linking, and to the presence of cation-anion aromatic ring pi-pi interactions.
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed, in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on the curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m).
Non-differential GPS demonstrated highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was highly predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds in the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root mean square error across subsections and gradients.
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall times. This suggests that for some runners the strategy of varying speed systematically to account for gradients and transitions may benefit race performance on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor accounting for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and improved performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
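The change-in-position speed estimate used in the first study can be sketched with the haversine formula over successive fixes; the coordinates below are hypothetical 1 Hz fixes, not the study data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius, metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical 1 Hz GPS fixes (lat, lon) for a runner -- illustrative only.
fixes = [(-27.47700, 153.02800),
         (-27.47703, 153.02803),
         (-27.47707, 153.02806)]
dt = 1.0  # seconds between fixes

# Speed (m/s) over each interval: distance between fixes / elapsed time.
speeds = [haversine_m(*fixes[i], *fixes[i + 1]) / dt
          for i in range(len(fixes) - 1)]
print([round(s, 2) for s in speeds])
```

The Doppler-shift method, which the study found more accurate, instead derives speed directly from the frequency shift of the satellite carrier signal and so avoids differencing noisy position estimates.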