89 results for Second-order decision analysis


Relevance:

100.00%

Abstract:

In the design of bridge structures, it is common to adopt a 100 year design life. However, analysis of a number of case study bridges in Australia has indicated that the actual design life can be significantly reduced by premature deterioration resulting from exposure to aggressive environments. A closer analysis of the cost of rehabilitating these structures has raised some interesting questions. What is the real service life of a bridge exposed to certain aggressive environments? What strategy should be adopted for bridge rehabilitation? And what are the life cycle costs associated with rehabilitation? A research project funded by the CRC for Construction Innovation in Australia is aimed at addressing these issues. This paper presents a concept map for assisting decision makers in choosing the best rehabilitation treatment for bridges affected by premature deterioration through exposure to aggressive environments in Australia. The decision analysis is based on a whole-of-life-cycle cost analysis that considers the relevant elements of bridge rehabilitation costs. In addition, the results of bridge inspections in Queensland are presented.
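A whole-of-life-cycle cost comparison of rehabilitation strategies reduces to discounting each strategy's cost stream to present value. A minimal sketch, assuming hypothetical cost streams and a 5% discount rate (none of these figures come from the paper):

```python
def life_cycle_cost(cash_flows, rate):
    """Present value of a stream of (year, cost) pairs at a given discount rate."""
    return sum(cost / (1 + rate) ** year for year, cost in cash_flows)

# hypothetical rehabilitation strategies: (year incurred, cost in $k)
patch_repairs = [(0, 50), (10, 120), (20, 120), (30, 400)]   # repeated minor works
full_rehab = [(0, 450), (25, 80)]                            # one major treatment early

for name, flows in (("patch repairs", patch_repairs), ("full rehabilitation", full_rehab)):
    print(f"{name}: NPV = ${life_cycle_cost(flows, rate=0.05):.0f}k")
```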

Relevance:

100.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics is described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence as established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order statistics, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
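MF-DFA is a published, well-specified procedure; the following minimal Python sketch (not the author's code) computes the q-th-order fluctuation function F_q(s), whose slope in a log-log fit against the scale s estimates the generalised Hurst exponent h(q). For q = 2 this reduces to standard DFA; h(2) near 0.5 suggests short memory and h(2) > 0.5 long-range dependence.

```python
import numpy as np

def mfdfa(x, scales, q=2.0, order=1):
    """Fluctuation function F_q(s) of MF-DFA with polynomial detrending."""
    profile = np.cumsum(x - np.mean(x))              # integrated, mean-subtracted profile
    fq = []
    for s in scales:
        n_seg = len(profile) // s
        rms2 = np.empty(n_seg)
        for v in range(n_seg):                       # non-overlapping segments
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)   # order-m detrending
            rms2[v] = np.mean((seg - trend) ** 2)
        fq.append(np.mean(rms2 ** (q / 2.0)) ** (1.0 / q))     # q-th-order average
    return np.asarray(fq)

# white noise should give h(2) close to 0.5 (no memory)
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
h2 = np.polyfit(np.log(scales), np.log(mfdfa(x, scales)), 1)[0]
print(f"h(2) = {h2:.2f}")
```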

Relevance:

100.00%

Abstract:

Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for the analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are nonlinear in nature. Higher-order spectral analysis (HOS) is known to be a good tool for the analysis of nonlinear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and seven classes of arrhythmia. We present some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. We also extracted features from the HOS and performed an analysis of variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding p-values < 0.02 in the ANOVA test.
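The bispectrum underlying such plots is the third-order spectrum B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]. A standard direct (FFT-based) estimator can be sketched as follows; this is a generic estimator, not the authors' specific pipeline, and the segment length and overlap are arbitrary choices:

```python
import numpy as np

def bispectrum(x, nfft=128, overlap=0.5):
    """Direct bispectrum estimate, averaged over Hann-windowed segments."""
    step = max(1, int(nfft * (1 - overlap)))
    window = np.hanning(nfft)
    f = np.arange(nfft)
    idx = (f[:, None] + f[None, :]) % nfft       # frequency index of f1 + f2
    B = np.zeros((nfft, nfft), dtype=complex)
    n_seg = 0
    for start in range(0, len(x) - nfft + 1, step):
        X = np.fft.fft(window * x[start:start + nfft])
        B += X[:, None] * X[None, :] * np.conj(X[idx])   # X(f1) X(f2) X*(f1+f2)
        n_seg += 1
    return B / n_seg

# phase-coupled tones at 20, 45 and 65 Hz produce a bispectral peak near (20, 45)
fs = 256
t = np.arange(0, 16, 1 / fs)
x = np.cos(2 * np.pi * 20 * t) + np.cos(2 * np.pi * 45 * t) + 0.5 * np.cos(2 * np.pi * 65 * t)
print(np.abs(bispectrum(x)).max())
```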

Relevance:

100.00%

Abstract:

A common optometric problem is to specify the eye's ocular aberrations in terms of Zernike coefficients and to reduce that specification to a prescription for the optimum sphero-cylindrical correcting lens. The typical approach is first to reconstruct wavefront phase errors from measurements of wavefront slopes obtained by a wavefront aberrometer. This paper applies a new method to this clinical problem that does not require wavefront reconstruction. Instead, we base our analysis on axial wavefront vergence inferred directly from wavefront slopes. The result is a wavefront vergence map that is similar to the axial power maps in corneal topography and hence has the potential to be favoured by clinicians. We use our new set of orthogonal Zernike slope polynomials to systematically analyse details of the vergence map, analogous to Zernike analysis of wavefront maps. The result is a vector of slope coefficients that describe fundamental aberration components. Three different methods for reducing slope coefficients to a sphero-cylindrical prescription in power-vector form are compared and contrasted. When the original wavefront contains only second-order aberrations, the vergence map is a function of meridian only and the power vectors from all three methods are identical. Differences between the methods begin to appear as higher-order aberrations are included, in which case the wavefront vergence map is more complicated. Finally, we discuss the advantages and limitations of the vergence map representation of ocular aberrations.
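The power-vector form mentioned above is the standard (M, J0, J45) decomposition of a sphero-cylindrical prescription: the spherical equivalent plus two Jackson cross-cylinder components. A small worked example using the standard conversion formulas (the prescription values are made up):

```python
import numpy as np

def power_vector(sphere, cyl, axis_deg):
    """Sphere/cylinder/axis -> power vector (M, J0, J45) in dioptres."""
    theta = np.deg2rad(axis_deg)
    M = sphere + cyl / 2.0                  # spherical equivalent
    J0 = -(cyl / 2.0) * np.cos(2 * theta)   # cross-cylinder at 0/90 degrees
    J45 = -(cyl / 2.0) * np.sin(2 * theta)  # cross-cylinder at 45/135 degrees
    return M, J0, J45

# hypothetical prescription: -2.00 DS / -1.50 DC x 30
M, J0, J45 = power_vector(-2.0, -1.5, 30)
print(f"M = {M:.2f} D, J0 = {J0:.2f} D, J45 = {J45:.2f} D")
```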

Relevance:

100.00%

Abstract:

Frontline employee behaviours are recognised as vital for achieving a competitive advantage for service organisations. The services marketing literature has comprehensively examined ways to improve frontline employee behaviours in service delivery and recovery. However, limited attention has been paid to frontline employee behaviours that favour customers in ways that go against organisational norms or rules. This study examines these behaviours by introducing a behavioural concept of Customer-Oriented Deviance (COD). COD is defined as, “frontline employees exhibiting extra-role behaviours that they perceive to defy existing expectations or prescribed rules of higher authority through service adaptation, communication and use of resources to benefit customers during interpersonal service encounters.” This thesis develops a COD measure and examines the key determinants of these behaviours from a frontline employee perspective. Existing research on similar behaviours, which originated in the positive deviance and pro-social behaviour domains, has limitations and is considered inadequate for examining COD in the services context. The absence of a well-developed body of knowledge on non-conforming service behaviours has implications for both theory and practice. The provision of ‘special favours’ increases customer satisfaction, but the over-servicing of customers is also counterproductive for service delivery and costly for the organisation. Despite these implications of non-conforming service behaviours, there is little understanding of the nature of these behaviours and their key drivers. This research addresses the inadequacies of prior research in the positive deviance, pro-social and pro-customer literatures to develop the theoretical foundation of COD. The concept of positive deviance, which has predominantly been used to study organisational behaviours, is applied within a services marketing setting. Further, the thesis addresses previous limitations of the pro-social and pro-customer behavioural literature, which has examined limited forms of behaviours with no clear understanding of their nature. Building upon these literature streams, this research adopts a holistic approach to the conceptualisation of COD. It addresses previous shortcomings in the literature by providing a well-bounded definition, a psychometrically sound measure of COD and a conceptually well-founded model of COD. The concept of COD was examined across three separate studies, grounded in the theoretical foundations of role theory and social identity theory. Study 1 was exploratory and based on in-depth interviews using the Critical Incident Technique (CIT). The aim of Study 1 was to understand the nature of COD and qualitatively identify its key drivers. Thematic analysis of the data revealed two potential dimensions of COD behaviours: Deviant Service Adaptation (DSA) and Deviant Service Communication (DSC). In addition, themes representing the potential influences on COD were broadly classified as individual factors, situational factors, and organisational factors. Study 2 was a scale development procedure that involved the generation and purification of items for the measure based on two student samples working in customer service roles (pilot sample, N=278; initial validation sample, N=231). The reliability results and Exploratory Factor Analysis (EFA) on the pilot sample suggested the scale had poor psychometric properties.
As a result, major revisions were made to the item wordings and new items were developed from the literature to reflect a new dimension, Deviant Use of Resources (DUR). The revised items were tested on the initial validation sample, with the EFA suggesting a four-factor structure of COD. The aim of Study 3 was to further purify the COD measure and test for nomological validity based on its theoretical relationships with key antecedents and similar constructs (key correlates). The theoretical model of COD, consisting of nine hypotheses, was tested on retail and hospitality samples of frontline employees (retail N=311; hospitality N=305) drawn from a market research panel using an online survey. The data were analysed using Structural Equation Modelling (SEM). The results provided support for a re-specified second-order, three-factor model of COD consisting of 11 items. Overall, the COD measure was found to be reliable and valid, demonstrating convergent validity, discriminant validity and marginal partial invariance for the factor loadings. The results supported nomological validity, although the antecedents had differing impacts on COD across samples. Specifically, empathy and perspective-taking, role conflict, and job autonomy significantly influenced COD in the retail sample, whereas empathy and perspective-taking, risk-taking propensity and role conflict were significant predictors in the hospitality sample. In addition, customer orientation-selling orientation, the altruistic dimension of organisational citizenship behaviours, workplace deviance, and social desirability responding were found to correlate with COD. This research makes several contributions to theory. First, the findings of this thesis extend the literature on positive deviance, pro-social and pro-customer behaviours. Second, the research provides an empirically tested model that describes the antecedents of COD. Third, this research contributes a reliable and valid measure of COD. Finally, the research investigates the differential effects of the key antecedents on COD in different service sectors. The research findings also contribute to services marketing practice. Based on the findings, service practitioners can better understand the phenomenon of COD and utilise the measurement tool to calibrate COD levels within their organisations. Knowledge of the key determinants of COD will help improve recruitment and training programs and drive internal initiatives within the firm.

Relevance:

100.00%

Abstract:

This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally modulated sinusoidal signals (like frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered; this is Part-II of the thesis. In Part-I we utilize sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. A Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
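The core idea of replacing the Hilbert transformer with a time delay can be seen for a single tone: a quarter-period delay turns a cosine into a quadrature sine, from which an arctan phase detector can work. A minimal sketch, not the thesis's loop implementation (the tone frequency and sampling rate are assumed values):

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 1000.0, 50.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * f0 * t + 0.3)

# Hilbert-transformer route: the analytic signal gives an exact,
# signal-independent 90-degree phase shift
phase_ht = np.unwrap(np.angle(hilbert(x)))

# time-delay route: x(t - T0/4) = sin(2*pi*f0*t + 0.3), a signal-dependent
# quadrature component that is exact only at the design frequency f0
d = int(round(fs / (4 * f0)))       # quarter-period delay in samples
i_part, q_part = x[d:], x[:-d]      # in-phase sample and its delayed partner
phase_td = np.unwrap(np.arctan2(q_part, i_part))
print(np.allclose(phase_td, 2 * np.pi * f0 * t[d:] + 0.3, atol=1e-6))
```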

Relevance:

100.00%

Abstract:

Since its initial proposal in 1998, alkaline hydrothermal processing has rapidly become an established technology for the production of titanate nanostructures, and this simple, highly reproducible process has gained a strong research following. However, complete understanding and elucidation of nanostructure phase and formation have not yet been achieved. Without fully understanding phase, formation, and other important competing effects of the synthesis parameters on the final structure, the maximum potential of these nanostructures cannot be obtained. This study therefore examined the influence of synthesis parameters on the formation of titanate nanostructures produced by alkaline hydrothermal treatment. The parameters included alkaline concentration, hydrothermal temperature, the precursor material's crystallite size and the phase of the titanium dioxide precursor (TiO2, or titania). The nanostructures' phase and morphology were analysed using X-ray diffraction (XRD), Raman spectroscopy and transmission electron microscopy. X-ray photoelectron spectroscopy (XPS), dynamic light scattering (non-invasive backscattering), nitrogen sorption, and Rietveld analysis were used for phase determination, particle sizing, surface area determination, and establishing phase concentrations, respectively. This project rigorously examined the effect of alkaline concentration and hydrothermal temperature on three commercially sourced and two self-prepared TiO2 powders. These precursors consisted of both pure- and mixed-phase anatase and rutile polymorphs, and were selected to cover a range of phase concentrations and crystallite sizes. Typically, these precursors were treated with 5–10 M sodium hydroxide (NaOH) solutions at temperatures between 100 and 220 °C. Both nanotube and nanoribbon morphologies could be produced depending on the combination of these hydrothermal conditions. Both titania and titanate phases are composed of TiO6 units assembled in different combinations, and the arrangement of these atoms affects the binding energy of the Ti–O bonds. Raman spectroscopy and XPS were therefore employed in a preliminary study of phase determination for these materials. The change from a titania to a titanate binding energy was investigated, and the transformation of the titania precursor into nanotubes and titanate nanoribbons was directly observed by these methods. Evaluation of the Raman and XPS results indicated a strengthening in the binding energies of both the Ti (2p3/2) and O (1s) bands, which correlated with an increase in strength and a decrease in resolution of the characteristic nanotube doublet observed between 320 and 220 cm−1 in the Raman spectra of these products. The effect of phase and crystallite size on nanotube formation was examined over a series of temperatures (100–200 °C in 20 °C increments) at a set alkaline concentration (7.5 M NaOH). These parameters were investigated by employing both pure- and mixed-phase precursors of anatase and rutile. This study indicated that both the crystallite size and phase affect nanotube formation, with rutile requiring a greater driving force (essentially 'harsher' hydrothermal conditions) than anatase to form nanotubes; larger crystallite forms of the precursor also appeared to impede nanotube formation slightly. These parameters were further examined in later studies.
The influence of alkaline concentration and hydrothermal temperature was systematically examined for the transformation of Degussa P25 into nanotubes and nanoribbons, and exact conditions for nanostructure synthesis were determined. Correlation of these data sets resulted in the construction of a morphological phase diagram, which serves as an effective reference, a 'recipe book', for the formation of titanate nanostructures. Morphological phase diagrams were also constructed for larger, near phase-pure anatase and rutile precursors, to further investigate the influence of hydrothermal reaction parameters on the formation of titanate nanotubes and nanoribbons. The effects of alkaline concentration, hydrothermal temperature, and crystallite phase and size are observed when the three morphological phase diagrams are compared. Through the analysis of these results it was determined that alkaline concentration and hydrothermal temperature affect nanotube and nanoribbon formation independently through a complex relationship, where nanotubes are primarily affected by temperature, whilst nanoribbons are strongly influenced by alkaline concentration. Crystallite size and phase also affected nanostructure formation. Smaller precursor crystallites formed nanostructures at reduced hydrothermal temperature, and rutile displayed a slower rate of precursor consumption than anatase, with incomplete conversion observed for most hydrothermal conditions. The incomplete conversion of rutile into nanotubes was examined in detail in the final study, which selectively examined the kinetics of precursor dissolution in order to understand why rutile converted incompletely. This was achieved by selecting a single hydrothermal condition (9 M NaOH, 160 °C) where nanotubes are known to form from both anatase and rutile, and quenching the synthesis after 2, 4, 8, 16 and 32 hours. The influence of precursor phase on nanostructure formation was explicitly determined to be due to different dissolution kinetics: anatase exhibited zero-order dissolution and rutile second-order. This difference in kinetic order cannot be explained simply by the variation in crystallite size, as the inherent surface areas of the two precursors were determined to have first-order relationships with time. Therefore, the crystallite size (and inherent surface area) does not affect the overall kinetic order of dissolution; rather, it determines the rate of reaction. Finally, nanostructure formation was found to be controlled by the availability of dissolved titanium (Ti4+) species in solution, which is mediated by the dissolution kinetics of the precursor.
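The kinetic orders quoted for precursor dissolution correspond to the standard integrated rate laws: zero-order decay is linear in time, C(t) = C0 - kt, while second-order decay is linear in reciprocal concentration, 1/C(t) = 1/C0 + kt. A small sketch of distinguishing the two from quenched time-point data (the concentrations and rate constants are invented for illustration; only the quench times match the study):

```python
import numpy as np

t = np.array([2.0, 4.0, 8.0, 16.0, 32.0])       # quench times [h], as in the study

# hypothetical remaining-precursor fractions (illustrative, not measured data)
rng = np.random.default_rng(1)
c_zero = 1.0 - 0.025 * t + rng.normal(0, 0.005, t.size)           # zero-order decay
c_second = 1.0 / (1.0 + 0.09 * t) + rng.normal(0, 0.005, t.size)  # second-order decay

def r_squared(x, y):
    """R^2 of a straight-line fit; the transform that linearises the data
    best indicates the kinetic order."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

print("zero-order candidate,  C vs t:   R^2 =", round(r_squared(t, c_zero), 4))
print("second-order candidate, 1/C vs t: R^2 =", round(r_squared(t, 1 / c_second), 4))
```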

Relevance:

100.00%

Abstract:

The human knee acts as a sophisticated shock absorber during landing movements. The ability of the knee to perform this function in the real world is remarkable, given that the context of the landing movement may vary widely between performances. For this reason, humans must be capable of rapidly adjusting the mechanical properties of the knee under impact load in order to satisfy many competing demands. However, the processes involved in regulating these properties in response to changing constraints remain poorly understood. In particular, the effects of muscle fatigue on knee function during step landing are yet to be fully explored. Fatigue of the knee muscles is significant for 2 reasons. First, it is thought to have detrimental effects on the ability of the knee to act as a shock absorber and is considered a risk factor for knee injury. Second, fatigue of the knee muscles provides a unique opportunity to examine the mechanisms by which healthy individuals alter knee function. A review of the literature revealed that the effect of fatigue on knee function during landing has been assessed by comparing pre- and post-fatigue measurements, with fatigue induced by a voluntary exercise protocol. The information is limited by inconsistent results, with key measures such as knee stiffness showing varying results following fatigue, including increased stiffness, decreased stiffness, or failure to detect any change in some experiments. Further consideration of the literature questions the validity of the models used to induce and measure fatigue, as well as the pre-post study design, which may explain the lack of consensus in the results. These limitations cast doubt on the usefulness of the available information and identify a need to investigate alternative approaches. Based on the results of this review, the aims of this thesis were to: • evaluate the methodological procedures used in validation of a fatigue model • investigate the adaptation and regulation of post-impact knee mechanics during repeated step landings • use this new information to test the effects of fatigue on knee function during a step-landing task. To address these aims, 3 related experiments were conducted that collected kinetic, kinematic and electromyographic data from 3 separate samples of healthy male participants. The methodologies involved optoelectronic motion capture (VICON), isokinetic dynamometry (System3 Pro, BIODEX) and wireless surface electromyography (Zerowire, Aurion, Italy). Fatigue indicators and knee function measures used in each experiment were derived from these data. Study 1 compared the validity and reliability of repetitive stepping and isokinetic contractions with respect to fatigue of the quadriceps and hamstrings. Fifteen participants performed 50 repetitions of each exercise twice in randomised order, over 4 sessions. Sessions were separated by a minimum of 1 week's rest to ensure full recovery. Validity and reliability depended on a complex interaction between the exercise protocol, the fatigue indicator, the individual and the muscle of interest. Nevertheless, differences between exercise protocols indicated that stepping was less effective in eliciting valid and reliable changes in peak power and spectral compression, compared with isokinetic exercise. A key finding was that fatigue progressed in a biphasic pattern during both exercises.
The point separating the 2 phases, known as the transition point, demonstrated superior between-test reliability during the isokinetic protocol, compared with stepping. However, a correction factor should be used to accurately apply this technique to the study of fatigue during landing. Study 2 examined alterations in knee function during repeated landings, with a different sample (N=12) performing 60 consecutive step-landing trials, each separated by a 1-minute rest period. The results provided new information in relation to the pre-post study design in the context of detecting adjustments in knee function during landing. First, participants significantly increased or decreased pre-impact muscle activity or post-impact mechanics despite environmental and task constraints remaining unchanged. This is the 1st study to demonstrate this effect in healthy individuals without external feedback on performance. Second, single-subject analysis was more effective than group-level analysis in detecting alterations in knee function. Finally, repeated landing trials did not reduce inter-trial variability of knee function in some participants, contrary to assumptions underpinning previous studies. The results of Studies 1 and 2 were used to modify the design of Study 3 relative to previous research. These alterations included a modified isokinetic fatigue protocol, multiple pre-fatigue measurements and single-subject analysis to detect fatigue-related changes in knee function. The study design incorporated new analytical approaches to investigate fatigue-related alterations in knee function during landing. Participants (N=16) were measured during multiple pre-fatigue baseline trial blocks prior to the fatigue model. A final block of landing trials was recorded once the participant met the operational fatigue definition identified in Study 1. The analysis revealed that the effects of fatigue in this context are heavily dependent on the compensatory response of the individual. A continuum of responses was observed within the sample for each knee function measure. Overall, pre-impact preparation and post-impact mechanics of the knee were altered in highly individualised patterns. Moreover, participants used a range of active or passive pre-impact strategies to adapt post-impact mechanics in response to quadriceps fatigue. The unique patterns identified in the data represented an optimisation of knee function based on the priorities of the individual. The findings of these studies explain the lack of consensus within the literature regarding the effects of fatigue on knee function during landing. First, functional fatigue protocols lack validity in inducing fatigue-related changes in mechanical output and spectral compression of surface electromyography (sEMG) signals, compared with isokinetic exercise. Second, fatigue-related changes in knee function during landing are confounded by inter-individual variation, which limits the sensitivity of group-level analysis. By addressing these limitations, the 3rd study demonstrated the efficacy of new experimental and analytical approaches for observing fatigue-related alterations in knee function during landing. Consequently, this thesis provides new perspectives on the effects of fatigue on knee function during landing.
In conclusion: • The effects of fatigue on knee function during landing depend on the response of the individual, with considerable variation present between study participants despite similar physical characteristics. • In healthy males, adaptation of pre-impact muscle activity and post-impact knee mechanics is unique to the individual and reflects their own optimisation of demands such as energy expenditure, joint stability, sensory information and loading of knee structures. • The results of these studies should guide future exploration of adaptations in knee function to fatigue. However, research in this area should continue with reduced emphasis on the directional response of the population and a greater focus on individual adaptations of knee function.
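The spectral compression used above as a fatigue indicator is conventionally tracked via the median frequency of the sEMG power spectrum, which drifts downward as a muscle fatigues. A minimal sketch of that computation, a generic approach rather than the thesis's exact processing chain (the sampling rate and synthetic signals are assumptions):

```python
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs):
    """Frequency splitting the Welch power spectrum into two equal-power halves."""
    f, pxx = welch(emg, fs=fs, nperseg=min(1024, len(emg)))
    cum = np.cumsum(pxx)
    return f[np.searchsorted(cum, cum[-1] / 2.0)]

# synthetic demonstration: the dominant frequency drops across "repetitions",
# mimicking the spectral compression seen with fatigue
rng = np.random.default_rng(0)
fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
for centre in (120.0, 100.0, 80.0):             # Hz, fresh -> fatigued
    emg = np.sin(2 * np.pi * centre * t) + 0.5 * rng.standard_normal(t.size)
    print(f"centre {centre:.0f} Hz -> median frequency {median_frequency(emg, fs):.0f} Hz")
```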

Relevance:

100.00%

Abstract:

A Simulink/Matlab control-system model of a heavy vehicle (HV) suspension has been developed. The aim of the exercise presented in this paper was to develop such a model for use in future research: a working computer model is easier and cheaper to re-configure than a HV axle group installed on a truck, presents less risk should something go wrong, and allows more scope for variation and sensitivity analysis before embarking on further "real-world" testing. Empirical data recorded as the input and output signals of a HV suspension were used to develop the parameters for computer simulation of a linear time-invariant system described by a second-order differential equation (i.e. a "2nd-order" system). Using the empirical data as an input to the computer model allowed its output to be validated against the empirical data. The errors ranged from less than 1% to approximately 3% for any parameter when comparing like-for-like inputs and outputs. The model is presented along with the results of the validation. This model will be used in future research in the QUT/Main Roads project Heavy vehicle suspensions – testing and analysis, particularly for a theoretical model of a multi-axle HV suspension with varying values of dynamic load sharing. Allowance will need to be made for the errors noted when using the computer models in this future work.
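The abstract does not reproduce the model's equation or parameters; as a generic sketch of a second-order LTI suspension model of the kind described, the following simulates a single sprung mass responding to a road step input (the mass, damping and stiffness values are invented for illustration, not the paper's identified parameters):

```python
import numpy as np
from scipy.integrate import solve_ivp

# hypothetical quarter-car parameters (illustrative only)
m, c, k = 4500.0, 2.0e4, 4.0e5   # sprung mass [kg], damping [N s/m], stiffness [N/m]

def road(t):
    """Road profile input: a 0.05 m step at t = 0.5 s (height, vertical speed)."""
    return (0.05 if t > 0.5 else 0.0), 0.0

def suspension(t, y):
    """m*x'' + c*(x' - r') + k*(x - r) = 0 in state-space form, y = [x, x']."""
    x, v = y
    r, rdot = road(t)
    return [v, (-c * (v - rdot) - k * (x - r)) / m]

sol = solve_ivp(suspension, (0.0, 3.0), [0.0, 0.0], max_step=1e-3)
print(f"final body height: {sol.y[0, -1]:.4f} m")   # settles toward the 0.05 m step
```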

Relevance:

100.00%

Abstract:

This thesis conceptualises Use for IS (Information Systems) success. While Use in this study describes the extent to which an IS is incorporated into the user's processes or tasks, the success of an IS is the measure of the degree to which the person using the system is better off. The conceptualisation of Use offers new perspectives on describing and measuring Use for IS success. We test the philosophies of the conceptualisation using empirical evidence in an Enterprise Systems (ES) context. Results from the empirical analysis contribute insights to the existing body of knowledge on the role of Use and demonstrate Use as an important factor and measure of IS success. System Use is a central theme in IS research; for instance, Use is regarded as an important dimension of IS success. Despite this recognition, the Use dimension of IS success reportedly suffers from an all-too-simplistic definition, misconception, poor specification of its complex nature, and inadequate measurement approaches (Bokhari 2005; DeLone and McLean 2003; Zigurs 1993). Given the above, Burton-Jones and Straub (2006) urge scholars to revisit the concept of system Use, consider a stronger theoretical treatment, and submit the construct to further validation in its intended nomological net. On these considerations, this study re-conceptualises Use for IS success. The new conceptualisation adopts a work-process, system-centric lens and draws upon the characteristics of modern system types, key user groups and their information needs, and the incorporation of IS in work processes. With these characteristics, the definition of Use and how it may be measured is systematically established. Use is conceptualised as a second-order measurement construct determined by three sub-dimensions: the attitude of its users, and the depth and amount of Use. The construct is positioned in a modified IS success research model in an attempt to demonstrate its central role in determining IS success in an ES setting. A two-stage mixed-methods research design, incorporating a sequential explanatory strategy, was adopted to collect empirical data and to test the research model. The first empirical investigation involved an experiment and a survey of ES end users at a leading tertiary education institute in Australia. The second, a qualitative investigation, involved a series of interviews with real-world operational managers in large Indian private-sector companies to canvass their day-to-day experiences with ES. The research strategy adopted has a stronger quantitative leaning. The survey analysis results demonstrate the aptness of Use as an antecedent and a consequence of IS success and, furthermore, as a mediator between the quality of IS and the impacts of IS on individuals. Qualitative data analysis, on the other hand, is used to derive a framework for classifying the diversity of ES Use behaviour. The qualitative results establish that workers Use IS in their context to orientate, negotiate, or innovate. The implications are twofold. For research, this study contributes to cumulative IS success knowledge an approach for defining, contextualising, measuring, and validating Use. For practice, the findings not only provide insights for educators when incorporating ES for higher education, but also demonstrate how operational managers incorporate ES into their work practices.
Research findings leave the way open for future, larger-scale research into how industry practitioners interact with an ES to complete their work in varied organisational environments.

Relevance:

100.00%

Abstract:

The aim of this study was to apply the principles of content, criterion, and construct validation to a new questionnaire specifically designed to measure foot-health status. One hundred eleven subjects completed two different questionnaires designed to measure foot health (the new Foot Health Status Questionnaire and the previously validated Foot Function Index) and underwent a clinical examination in order to provide data for a second-order confirmatory factor analysis. Presented herein is a psychometrically evaluated questionnaire that contains 13 items covering foot pain, foot function, footwear, and general foot health. The tool demonstrates a high degree of content, criterion, and construct validity and test-retest reliability.

Relevance:

100.00%

Abstract:

Polynomial models are shown to simulate accurately the quadratic and cubic nonlinear interactions (as quantified by higher-order spectra) of time series of voltages measured in Chua's circuit. For circuit parameters resulting in a spiral attractor, the bispectra and trispectra of the polynomial model are similar to those of the measured time series, suggesting that the individual interactions between triads and quartets of Fourier components that govern the process dynamics are modeled accurately. For parameters that produce the double-scroll attractor, both measured and modeled time series have small bispectra but nonzero trispectra, consistent with higher-than-second-order nonlinearities dominating the chaos.
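As an illustration of the model class involved, a generic polynomial autoregression in two lagged values, fitted by least squares (not the authors' exact formulation), captures quadratic and cubic interactions as follows:

```python
import numpy as np

def poly_basis(x1, x2):
    """Polynomial regressors in two lagged values, up to cubic order."""
    return np.column_stack([
        np.ones_like(x1), x1, x2,               # linear terms
        x1**2, x1 * x2, x2**2,                  # quadratic interactions
        x1**3, x1**2 * x2, x1 * x2**2, x2**3,   # cubic interactions
    ])

def fit_poly_ar(x):
    """Least-squares fit of x[n] = polynomial(x[n-1], x[n-2])."""
    B = poly_basis(x[1:-1], x[:-2])
    coeffs, *_ = np.linalg.lstsq(B, x[2:], rcond=None)
    return coeffs

# usage: a chaotic logistic-map series is fitted exactly, since the map is
# quadratic in the previous value
x = np.empty(500); x[0] = 0.3
for n in range(499):
    x[n + 1] = 3.9 * x[n] * (1.0 - x[n])
coeffs = fit_poly_ar(x)
resid = poly_basis(x[1:-1], x[:-2]) @ coeffs - x[2:]
print(f"max one-step prediction error: {np.max(np.abs(resid)):.2e}")
```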

Relevance:

100.00%

Abstract:

This study is conducted within the IS-Impact Research Track at Queensland University of Technology (QUT). The goal of the IS-Impact Track is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable et al., 2006). IS-Impact is defined as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (Gable, Sedera and Chan, 2008). Track efforts have yielded the bicameral IS-Impact measurement model; the "impact" half includes the Organizational-Impact and Individual-Impact dimensions, and the "quality" half includes the System-Quality and Information-Quality dimensions. The IS-Impact model, by design, is intended to be robust, simple and generalizable, yielding results that are comparable across time, stakeholders, different systems and system contexts. The model and measurement approach employ perceptual measures and an instrument that is relevant to key stakeholder groups, thereby enabling the combination or comparison of stakeholder perspectives. Such a validated and widely accepted IS-Impact measurement model has both academic and practical value. It facilitates systematic operationalization of a main dependent variable in research (IS-Impact), which can also serve as an important independent variable; for IS management practice it provides a means to benchmark and track the performance of information systems in use. The objective of this study is to develop a Mandarin version of the IS-Impact model, encompassing a list of China-specific IS-Impact measures and aiding a better understanding of the IS-Impact phenomenon in a Chinese organizational context. The IS-Impact model provides much-needed theoretical guidance for this investigation of ES and ES impacts in a Chinese context. The appropriateness and soundness of employing the IS-Impact model as a theoretical foundation are evident: the model originated from a sound theory of IS success (DeLone and McLean, 1992), was developed through rigorous validation, and was derived in the context of Enterprise Systems. Based on the IS-Impact model, this study investigates a number of research questions (RQs). Firstly, what essential impacts have been derived from ES by Chinese users and organizations [RQ1]? Secondly, which salient quality features of ES are perceived by Chinese users [RQ2]? Thirdly, are the quality and impact measures sufficient to assess ES success in general [RQ3]? Lastly, is the IS-Impact measurement model appropriate for Chinese organizations in terms of evaluating their ES [RQ4]? An open-ended, qualitative identification survey was employed in the study. A large body of short text data was gathered from 144 Chinese users, and 633 valid IS-Impact statements were generated from the data set. A general inductive approach was applied in the qualitative data analysis. Rigorous qualitative data coding resulted in 50 first-order categories organized under 6 second-order categories grounded in the context of Chinese organizations. The six second-order categories are: 1) System Quality; 2) Information Quality; 3) Individual Impacts; 4) Organizational Impacts; 5) User Quality; and 6) IS Support Quality.
The final research finding of the study is the contextualized Mandarin-version IS-Impact measurement model, which includes 38 measures organized into 4 dimensions: System Quality, Information Quality, Individual Impacts and Organizational Impacts. The study also proposes two conceptual models to harmonize the IS-Impact model with the two emergent constructs, User Quality and IS Support Quality, by drawing respectively on the prior IS effectiveness literature and on the Work System theory proposed by Alter (1999). The study is significant as it is the first effort to empirically and comprehensively investigate IS-Impact in China. Specifically, the research contributions can be classified into theoretical and practical contributions. From the theoretical perspective, through qualitative evidence, the study tests and consolidates the IS-Impact measurement model in terms of its robustness, completeness and generalizability. The research design is unconventional: the theoretical model is not used top-down, as an a priori framework seeking evidence of its own credibility; rather, the study allows a competing model to emerge bottom-up from the open-coding analysis. The study is also an example of extending and localizing a pre-existing theory, developed in a Western context, when it is introduced into a different context. From the practical perspective, this is the first time prominent research findings in the field of IS success have been introduced to Chinese academia and practitioners. The study provides a guideline for Chinese organizations to assess their Enterprise Systems and to leverage their IT investments in the future. As a research effort in the ITPS track, this study provides the research team with an alternative operationalization of the dependent variable. Future research can take the contextualized Mandarin-version IS-Impact framework as an a priori theoretical model, further testing its validity quantitatively and empirically.

Relevance:

100.00%

Abstract:

The adsorption of Congo Red (CR) by ball-milled sugarcane bagasse was evaluated in an aqueous batch system. CR adsorption capacity increased significantly with small changes in bagasse surface area. CR removal decreased as solution pH increased from 5.0 to 10.0. The maximum adsorption capacity was 38.2 mg/g bagasse at a CR concentration of 500 mg/L. The equilibrium isotherm fitted the Freundlich model and the adsorption kinetics obeyed a pseudo-second-order equation. CR adsorption followed the intra-particle diffusion model very well for bagasse surface areas in the range 0.58–0.66 m²/g, whereas it was controlled by multiple adsorption stages for bagasse surface areas in the range 1.31–1.82 m²/g. Thermodynamic analysis indicated that the adsorption process is exothermic and spontaneous. Fourier transform infrared analysis of bagasse containing adsorbed CR indicated interactions between the carboxyl and hydroxyl groups of bagasse and the functional groups of CR.
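The pseudo-second-order kinetic model referred to is conventionally written dq/dt = k(qe - qt)^2, whose linearised form t/qt = 1/(k qe^2) + t/qe lets the equilibrium capacity qe and rate constant k be read off a straight-line fit of t/qt against t. A small sketch of that fit (the uptake data below are invented for illustration, not the paper's measurements):

```python
import numpy as np

# hypothetical uptake q_t [mg/g] at contact time t [min] (illustrative data)
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])
q = np.array([12.1, 19.5, 26.8, 32.0, 34.5, 36.3, 37.2])

# linearised pseudo-second-order form: t/q = 1/(k*qe**2) + t/qe
slope, intercept = np.polyfit(t, t / q, 1)
qe = 1.0 / slope                   # equilibrium adsorption capacity [mg/g]
k = 1.0 / (intercept * qe**2)      # rate constant [g/(mg*min)]
print(f"qe = {qe:.1f} mg/g, k = {k:.4f} g/(mg min)")
```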