445 results for Measurement-While-Drilling


Relevance:

20.00%

Publisher:

Abstract:

Background: Chronic disease presents overwhelming challenges to elderly patients, their families, health care providers and the health care system. The aim of this study was to explore a theoretical model for effective management of chronic diseases, especially type 2 diabetes mellitus and/or cardiovascular disease. The assumed theoretical model considered the connections between physical function, mental health, social support and health behaviours. The study aimed to improve the quality of life of people with chronic diseases, especially type 2 diabetes and/or cardiovascular disease, and to reduce health costs. Methods: A cross-sectional postal questionnaire survey was conducted in early 2009 with a randomised sample of Australians aged 50 to 80 years. A total of 732 subjects were eligible for analysis. Firstly, factors influencing respondents’ quality of life were investigated through bivariate and multivariate regression analysis. Secondly, the Theory of Planned Behaviour (TPB) model for regular physical activity, healthy eating and medication adherence behaviours was tested for all relevant respondents using regression analysis. Thirdly, TPB variable differences between respondents with diabetes and/or cardiovascular disease and those without these diseases were compared. Finally, the TPB model for the three behaviours (regular physical activity, healthy eating and medication adherence) was tested in respondents with diabetes and/or cardiovascular disease using Structural Equation Modelling (SEM). Results: This was the first study to combine the three behaviours in a TPB model while also testing the influence of additional variables on the TPB model. The results of this study provided evidence that the ageing process was a cumulative effect of biological change, socio-economic environment and lifelong behaviours. 
Health behaviours, especially physical activity and healthy eating, were important modifiable factors influencing respondents’ quality of life. Since over 80% of the respondents had at least one chronic disease, it was important to support older people’s chronic disease self-management skills, such as healthy diet, regular physical activity and medication adherence, to improve their quality of life. Direct measurement of the TPB model was helpful in understanding respondents’ intention and behaviour toward physical activity, healthy eating and medication adherence. In respondents with diabetes and/or cardiovascular disease, the TPB model predicted different proportions of intention toward the three health behaviours, with 39% intending to engage in physical activity, 49% intending to engage in healthy eating and 47% intending to comply with medication adherence. Perceived behavioural control, which in this study was shown to be equivalent to self-efficacy as measured, played an important role in predicting intention towards the three health behaviours. Social norms also played a slightly more important role than attitude for physical activity and medication adherence, while attitude and social norms had similar effects on healthy eating in respondents with diabetes and/or cardiovascular disease. Both perceived behavioural control and intention directly predicted recent actual behaviours. Physical activity was more under volitional control than healthy eating and medication adherence. Step-by-step goal setting and motivation were more important for physical activity, while accessibility, resources and other social environmental factors were necessary for improving healthy eating and medication adherence. The additional variables of age, waist circumference, health-related quality of life and depression indirectly influenced intention towards the three behaviours, mediated mainly through attitude and perceived behavioural control. 
Depression was a serious health problem that reduced motivation for the three health behaviours, mediated through decreased self-efficacy and negative attitude. This research provided evidence that self-efficacy is similar to perceived behavioural control in the TPB model and that intention is a proximal goal toward a particular behaviour. Combining the four sources of information in the self-efficacy model with the TPB model would improve chronic disease patients’ self-management behaviour and achieve improved long-term treatment outcomes. Conclusion: Health intervention programs that target chronic disease management should focus on patients’ self-efficacy. A holistic approach which is patient-centred and involves a multidisciplinary collaboration strategy would be effective. Supporting the socio-economic and mental/emotional environment for older people needs to be considered within an integrated health care system.
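The multivariate regression step underlying results like these can be sketched briefly. The data and coefficients below are synthetic assumptions chosen for illustration, not the study's data or exact model; the sketch only shows the general shape of predicting intention from the three TPB constructs (attitude, subjective norm, perceived behavioural control) by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic TPB predictor scores (attitude, subjective norm, PBC).
attitude = rng.normal(3.5, 1.0, n)
norm = rng.normal(3.0, 1.0, n)
pbc = rng.normal(3.2, 1.0, n)

# Intention generated with known weights plus noise (assumed, for illustration).
intention = (0.3 * attitude + 0.2 * norm + 0.5 * pbc
             + rng.normal(0.0, 0.3, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), attitude, norm, pbc])
beta_hat, *_ = np.linalg.lstsq(X, intention, rcond=None)

# R^2: proportion of variance in intention explained by the predictors.
resid = intention - X @ beta_hat
r2 = 1.0 - resid.var() / intention.var()
```

With enough respondents, the fitted coefficients recover the generating weights, and r2 plays the role of the "proportion of intention predicted" reported for each behaviour.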


The stochastic simulation algorithm was introduced by Gillespie and, in a different form, by Kurtz. There have been many attempts at accelerating the algorithm without deviating from the behavior of the simulated system. The crux of the explicit τ-leaping procedure is the use of Poisson random variables to approximate the number of occurrences of each type of reaction event during a carefully selected time period, τ. This method is acceptable provided the leap condition, that no propensity function changes “significantly” during any time-step, is met. Using this method there is a possibility that species numbers can, artificially, become negative. Several recent papers have demonstrated methods that avoid this situation. One such method classifies as critical those reactions in danger of sending species populations negative. At most one of these critical reactions is allowed to occur in the next time-step. We argue that the criticality of a reactant species and its dependent reaction channels should be related to the probability of the species number becoming negative. This way, only reactions that, if fired, have a high probability of driving a reactant population negative are labeled critical. The number of firings of more reaction channels can then be approximated using Poisson random variables, thus speeding up the simulation while maintaining accuracy. In implementing this revised method of criticality selection we make use of the probability distribution from which the random variable describing the change in species number is drawn. We give several numerical examples to demonstrate the effectiveness of our new method.
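A minimal sketch of an explicit τ-leap step, for a toy decay reaction A → ∅ with propensity a(x) = c·x, may help fix ideas. The truncation guard below is a crude stand-in for the criticality machinery discussed above, and all parameter values are illustrative assumptions:

```python
import numpy as np

def tau_leap_decay(x0, c, tau, t_end, seed=0):
    """Explicit tau-leaping for the single decay reaction A -> 0.

    x0  : initial copy number of species A
    c   : stochastic rate constant (propensity a(x) = c * x)
    tau : fixed leap size (assumed small enough for the leap condition)
    """
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    while t < t_end and x > 0:
        a = c * x                 # propensity of the decay channel
        k = rng.poisson(a * tau)  # Poisson number of firings in [t, t+tau)
        k = min(k, x)             # crude guard against a negative population
        x -= k
        t += tau
    return x

# The mean over many runs should track the deterministic decay x0*exp(-c*t).
runs = [tau_leap_decay(1000, 1.0, 0.01, 1.0, seed=s) for s in range(200)]
mean_final = sum(runs) / len(runs)
```

The revised criticality selection described in the abstract would replace the blunt `min(k, x)` guard with a probabilistic test of how likely each channel is to drive a population negative.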


Background: To compare the intraocular pressure readings obtained with the iCare rebound tonometer and the 7CR non-contact tonometer with those measured by Goldmann applanation tonometry in treated glaucoma patients. Design: A prospective, cross-sectional study was conducted in a private tertiary glaucoma clinic. Participants: 109 patients (54 male, 55 female), including only eyes under medical treatment for glaucoma. Methods: Measurement by Goldmann applanation tonometry, iCare rebound tonometry and 7CR non-contact tonometry. Main Outcome Measures: Intraocular pressure. Results: There were strong correlations between the intraocular pressure measurements obtained with Goldmann and both the rebound and non-contact tonometers (Spearman r values ≥ 0.79, p < 0.001). However, there were small, statistically significant differences between the average readings for each tonometer. For the rebound tonometer, the mean intraocular pressure was slightly higher than that of the Goldmann applanation tonometer in the right eyes (p = 0.02) and similar in the left eyes (p = 0.93). The Goldmann-correlated measurements from the non-contact tonometer were lower than the average Goldmann reading for both right (p < 0.001) and left (p < 0.01) eyes. The corneal-compensated measurements from the non-contact tonometer were significantly higher than those of the other tonometers (p ≤ 0.001). Conclusions: The iCare rebound tonometer and the 7CR non-contact tonometer measure IOP in fundamentally different ways from the Goldmann applanation tonometer. The resulting IOP values vary between the instruments, and this needs to be considered when comparing clinical and home-acquired measurements.
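The Spearman correlation used in the Results is simply the Pearson correlation of the ranked readings, which can be computed without a statistics package. The paired readings below are hypothetical values for illustration, not data from the study:

```python
def _ranks(values):
    """Average ranks (ties share the mean rank), 1-based."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical paired IOP readings (mmHg): Goldmann vs rebound tonometer.
goldmann = [14, 16, 18, 20, 22, 25, 28]
rebound = [15, 16, 19, 21, 24, 26, 30]
rho = spearman(goldmann, rebound)
```

Because Spearman works on ranks, it captures the monotone agreement between instruments even when, as here, their absolute readings differ systematically.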


Damage detection in structures has become increasingly important in recent years. While a number of damage detection and localization methods have been proposed, few attempts have been made to explore structural damage using frequency response functions (FRFs). This paper illustrates the damage identification and condition assessment of a beam structure using a new FRF-based damage index and Artificial Neural Networks (ANNs). In practice, using all available FRF data as input to artificial neural networks makes training and convergence infeasible. Therefore, a data reduction technique, Principal Component Analysis (PCA), is introduced into the algorithm. In the proposed procedure, a large set of FRFs is divided into sub-sets in order to find the damage indices for different frequency points of different damage scenarios. The basic idea of this method is to establish features of the damaged structure using FRFs from different measurement points of different sub-sets of the intact structure. Using these features, damage indices for different damage cases of the structure are identified after reconstructing the available FRF data using PCA. The obtained damage indices corresponding to different damage locations and severities are used as input variables to the developed artificial neural networks. Finally, the effectiveness of the proposed method is illustrated and validated using a finite element model of a beam structure. The results show that the PCA-based damage index is suitable and effective for structural damage detection and condition assessment of building structures.
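The PCA reconstruction idea can be illustrated with a residual-based index: project a measured FRF vector onto the principal subspace learned from intact-structure data and take the norm of whatever is left unexplained. This is a generic sketch on synthetic data, not the paper's exact damage-index formulation:

```python
import numpy as np

def pca_damage_index(baseline, test_frf, n_components=3):
    """Norm of the part of test_frf not explained by the principal
    subspace of the intact (baseline) FRF data; larger suggests damage.

    baseline : (n_samples, n_freq) FRF magnitudes of the intact structure
    test_frf : (n_freq,) FRF magnitude vector from the current state
    """
    mean = baseline.mean(axis=0)
    _, _, vt = np.linalg.svd(baseline - mean, full_matrices=False)
    basis = vt[:n_components]                  # principal directions
    r = test_frf - mean
    residual = r - basis.T @ (basis @ r)       # remove the in-subspace part
    return float(np.linalg.norm(residual))

# Synthetic intact FRFs lying in a 3-dimensional subspace.
rng = np.random.default_rng(1)
loadings = rng.normal(size=(3, 50))
baseline = rng.normal(size=(20, 3)) @ loadings

intact = rng.normal(size=3) @ loadings             # same subspace: small index
damaged = intact + rng.normal(scale=0.5, size=50)  # perturbed: larger index
idx_intact = pca_damage_index(baseline, intact)
idx_damaged = pca_damage_index(baseline, damaged)
```

Indices computed this way for many damage locations and severities could then serve as the reduced-dimension inputs that make ANN training tractable.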


In many bridges, vertical displacements are among the most relevant parameters for structural health monitoring in both the short and long terms. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges. However, it is difficult to carry out such measurements. On the other hand, in recent years, with the advancement of fiber-optic technologies, fiber Bragg grating (FBG) sensors have become more commonly used in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, FBG sensors are used here to develop a simple, inexpensive and practical method to measure the vertical displacements of bridges. A curvature approach is proposed, in which vertical displacements are derived from curvature measurements. In addition, following the successful development of an FBG tilt sensor, an inclination approach based on inclination measurements is also proposed. A series of simulation tests of a full-scale bridge was conducted. The results show that both approaches can determine vertical displacements for bridges with various support conditions and varying stiffness (EI) along the spans, without any prior knowledge of the loading. These approaches can thus measure vertical displacements for most slab-on-girder and box-girder bridges. Moreover, with the advantages of FBG sensors, they can be implemented to monitor bridge behavior remotely and in real time. Recommendations for further development of these approaches are discussed at the end of the paper.
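At the core of a curvature approach is a double integration of w''(x) = κ(x), with the support conditions fixing the two integration constants. The sketch below assumes a simply supported span and the trapezoidal rule; it is illustrative only, not the paper's implementation:

```python
import numpy as np

def deflection_from_curvature(x, kappa):
    """Vertical deflection of a simply supported span from curvature.

    Small-deflection beam theory gives w''(x) = kappa(x). Integrating
    twice (trapezoidal rule) and enforcing w(0) = w(L) = 0 fixes the
    two integration constants. x must be monotonically increasing.
    """
    slope = np.concatenate(([0.0], np.cumsum(
        0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
    w = np.concatenate(([0.0], np.cumsum(
        0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
    # Linear correction so that w vanishes at both supports.
    w -= w[0] + (w[-1] - w[0]) * (x - x[0]) / (x[-1] - x[0])
    return w

# Constant curvature (pure bending): theory gives midspan w = -kappa*L^2/8
# with this sign convention.
L = 10.0
x = np.linspace(0.0, L, 201)
kappa = np.full_like(x, 0.002)
w = deflection_from_curvature(x, kappa)
mid = w[100]
```

In practice κ(x) would come from the FBG strain readings at discrete stations; the inclination approach replaces the first integration with measured slopes.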


Increasing global competition has forced manufacturing organizations to produce high-quality products more quickly and at a competitive cost. In order to reach these goals, they need good-quality components from suppliers at optimum price and lead time. This has forced companies to adopt improvement practices such as lean manufacturing, Just in Time (JIT) and effective supply chain management. Applying new improvement techniques and tools causes higher establishment costs and more Information Delay (ID). On the other hand, these new techniques may reduce the risk of stock-outs and improve supply chain flexibility, giving a better overall performance. However, practitioners are unable to measure the overall effects of these improvement techniques with a standard evaluation model. An effective overall supply chain performance evaluation model is therefore essential for suppliers as well as manufacturers to assess their companies under different supply chain strategies. Nevertheless, the literature on lean supply chain performance evaluation is comparatively limited. Moreover, most existing models assume random values for performance variables. The purpose of this paper is to propose an effective supply chain performance evaluation model using triangular linguistic fuzzy numbers and to recommend optimum ranges for performance variables for lean implementation. The model initially considers all the supply chain performance criteria (input, output and flexibility), converts the values to triangular linguistic fuzzy numbers and evaluates overall supply chain performance under different situations. Results show that with the proposed performance measurement model, the improvement area for each variable can be accurately identified.
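The triangular-fuzzy machinery involved can be sketched briefly. The linguistic scale and weights below are assumptions for illustration, not the paper's actual scale; the arithmetic (vertex-wise addition, scaling by crisp weights, centroid defuzzification) is the standard treatment of triangular fuzzy numbers:

```python
class TriangularFuzzy:
    """Triangular fuzzy number (a, b, c): support [a, c], peak at b."""

    def __init__(self, a, b, c):
        assert a <= b <= c
        self.a, self.b, self.c = a, b, c

    def __add__(self, other):
        # Fuzzy addition adds the corresponding vertices.
        return TriangularFuzzy(self.a + other.a,
                               self.b + other.b,
                               self.c + other.c)

    def scale(self, k):
        # Multiplication by a non-negative crisp weight.
        return TriangularFuzzy(k * self.a, k * self.b, k * self.c)

    def defuzzify(self):
        # Centroid of a triangle: the usual crisp score.
        return (self.a + self.b + self.c) / 3.0

# Hypothetical linguistic scale (assumed, not taken from the paper).
SCALE = {
    "poor": TriangularFuzzy(0.0, 0.0, 0.25),
    "fair": TriangularFuzzy(0.0, 0.25, 0.5),
    "good": TriangularFuzzy(0.25, 0.5, 0.75),
    "very good": TriangularFuzzy(0.5, 0.75, 1.0),
}

# Weighted aggregate of two performance criteria, weights summing to 1.
agg = SCALE["good"].scale(0.6) + SCALE["very good"].scale(0.4)
score = agg.defuzzify()
```

Aggregating criterion scores this way, then defuzzifying, is what lets linguistic ratings for input, output and flexibility criteria be combined into one overall performance value.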


In recent years, the effects of ions and ultrafine particles on ambient air quality and human health have been well documented; however, knowledge about their sources, concentrations and interactions within different types of urban environments remains limited. This thesis presents the results of numerous field studies aimed at quantifying variations in ion concentration with distance from the source, as well as identifying the dynamics of the particle ionisation processes which lead to the formation of charged particles in the air. In order to select the most appropriate measurement instruments and locations for the studies, a literature review was also conducted on studies that reported ion and ultrafine particle emissions from different sources in a typical urban environment. The initial study involved laboratory experiments on the attachment of ions to aerosols, so as to gain a better understanding of the interaction between ions and particles. This study determined the efficiency of corona ions at charging and removing particles from the air, as a function of different particle number and ion concentrations. The results showed that particle number loss was directly proportional to particle charge concentration, and that higher small ion concentrations led to higher particle deposition rates in all size ranges investigated. Nanoparticle numbers were also observed to decrease with increasing particle charge concentration, due to their higher Brownian mobility and subsequent attachment to charged particles. Given that corona discharge from high-voltage powerlines is considered one of the major ion sources in urban areas, a detailed study was then conducted under three parallel overhead powerlines, with a steady wind blowing in a direction perpendicular to the lines. 
The results showed that large sections of the lines did not produce any corona at all, while strong positive emissions were observed from discrete components, such as a particular set of spacers on one of the lines. Measurements were also conducted at eight upwind and downwind points perpendicular to the powerlines, spanning a total distance of about 160 m. The maximum positive small and large ion concentrations, and the maximum DC electric field, were observed at a point 20 m downwind from the lines, with median values of 4.4×10³ cm⁻³, 1.3×10³ cm⁻³ and 530 V m⁻¹, respectively. It was estimated that, at this point, less than 7% of the total number of particles was charged. The electrical parameters decreased steadily with increasing downwind distance from the lines but remained significantly higher than background levels at the limit of the measurements. Vehicles are also among the most prevalent ion- and particle-emitting sources in urban environments, and therefore experiments were conducted behind a motor vehicle exhaust pipe and near busy motorways, with the aim of quantifying small ion and particle charge concentrations, as well as their distribution as a function of distance from the source. The study found approximately equal numbers of positive and negative ions in the vehicle exhaust plume, as well as near motorways, of which heavy-duty vehicles were believed to be the main contributor. In addition, cluster ion concentration was observed to decrease rapidly within the first 10-15 m from the road, and ion-ion recombination and ion-aerosol attachment, rather than dilution and turbulence related processes, were the most likely causes of ion depletion. In addition to the above-mentioned dominant ion sources, other sources exist within urban environments where intensive human activities take place. 
In this part of the study, airborne concentrations of small ions and particles, and net particle charge, were measured at 32 different outdoor sites in and around Brisbane, Australia, classified into seven groups: park, woodland, city centre, residential, freeway, powerlines and power substation. Whilst the study confirmed that powerlines, power substations and freeways were the main ion sources in an urban environment, it also suggested that not all powerlines emitted ions, only those with discrete corona discharge points. In addition to the main ion sources, higher ion concentrations were also observed in environments affected by vehicle traffic and human activities, such as the city centre and residential areas. A considerable number of ions were also observed in a woodland area, and it is still unclear whether they were emitted directly from the trees or originated from some other local source. Overall, it was found that different types of environments had different types of ion sources, which could be classified as unipolar or bipolar particle sources, as well as ion sources that co-exist with particle sources. In general, fewer small ions were observed at sites with co-existing sources; however, particle charge was often higher due to the effect of ion-particle attachment. In summary, this study quantified ion concentrations in typical urban environments, identified major charge sources in urban areas, and determined the spatial dispersion of ions as a function of distance from the source, as well as their controlling factors. The study also presented ion-aerosol attachment efficiencies under high ion concentration conditions, both in the laboratory and in real outdoor environments. The outcomes of these studies addressed the aims of this work and advanced understanding of the charge status of aerosols in the urban environment.


The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed Generation (DG) has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high-quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak-source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices that inject high-frequency components in addition to the desired current. 
Noise and harmonic distortion can also impact the performance of the control strategies. To mitigate the negative effects of high-frequency, harmonic and noise distortion and achieve satisfactory performance of DG systems, new methods of signal parameter estimation are proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-known, advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The other novel techniques proposed in this thesis are compared with the PM technique studied comprehensively in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle in online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed by a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices. 
The initial settings for the modified Kalman filter are obtained through a trial-and-error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is also modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated and interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are passed to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level on the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to that of a basic Kalman filter, except that the initial settings are computed through an extensive mathematical derivation with regard to the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for better performance of the modified Kalman filter are calculated instead of being guessed through trial-and-error exercises. The simulation results for the estimated signal parameters are improved owing to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, some large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering. The proposed LES technique, however, is found to be much faster in tracking these changes. 
Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. This time, the ability of the proposed algorithm is also verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. The ability of this algorithm is also verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 for the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme can handle source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion in voltage at the point of common coupling in weak-source cases, balances the source currents, and brings the supply-side power factor to a desired value.
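The amplitude and phase estimation task that recurs through Chapters 3 to 7 can be illustrated with a basic linear Kalman filter at a known frequency: if the state holds the in-phase and quadrature components, the measurement model is linear. This is a generic sketch, not any of the thesis's modified algorithms, and the noise parameters are assumptions:

```python
import math
import numpy as np

def kalman_phasor(samples, omega, dt, q=1e-6, r=0.01):
    """Estimate amplitude and phase of y(t) ~ a*cos(w t) + b*sin(w t).

    State x = [a, b] is (nearly) constant; the measurement matrix
    H_k = [cos(w t_k), sin(w t_k)] makes the model linear, so a basic
    Kalman filter applies directly.
    """
    x = np.zeros(2)        # state estimate [a, b]
    P = np.eye(2)          # state covariance
    Q = q * np.eye(2)      # process noise (allows slow parameter drift)
    for k, y in enumerate(samples):
        t = k * dt
        H = np.array([math.cos(omega * t), math.sin(omega * t)])
        # Predict (identity dynamics), then update with the scalar sample.
        P = P + Q
        S = H @ P @ H + r
        K = P @ H / S
        x = x + K * (y - H @ x)
        P = P - np.outer(K, H @ P)
    amplitude = math.hypot(x[0], x[1])
    phase = math.atan2(-x[1], x[0])   # y = A*cos(w t + phase)
    return amplitude, phase

# Noiseless 50 Hz test signal sampled at 2 kHz.
w = 2 * math.pi * 50
dt = 1 / 2000
sig = [1.5 * math.cos(w * k * dt + 0.3) for k in range(400)]
amp, ph = kalman_phasor(sig, w, dt)
```

The thesis's contribution lies in what this sketch leaves out: the FIR pre-filtering of the samples, the interaction with a frequency estimation unit, and the principled choice of the initial settings for P, Q and r.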


Inter-Vehicular Communications (IVC) are considered a promising technological approach for enhancing transportation safety and improving highway efficiency. Previous theoretical work has demonstrated the benefits of IVC in vehicle strings. Simulations of partially IVC-equipped vehicle strings showed that only a small equipment ratio is sufficient to drastically reduce the number of head-on collisions. However, these results are based on the assumption that IVC exhibits lossless and instantaneous message transmission. This paper presents the research design of an empirical measurement of a vehicle string, with the goal of highlighting the constraints introduced by the actual characteristics of communication devices. A warning message diffusion system based on IEEE 802.11 wireless technology was developed for an emergency braking scenario. Preliminary results are presented as well, showing the latencies introduced by using 802.11a and discussing early findings and experimental limitations.


Objective: To determine the test-retest reliability of measurements of thickness, fascicle length (Lf) and pennation angle (θ) of the vastus lateralis (VL) and gastrocnemius medialis (GM) muscles in older adults. Participants: Twenty-one healthy older adults (11 men and 10 women; mean age 68.1 ± 5.2 years) participated in this study. Methods: Ultrasound images (probe frequency 10 MHz) of the VL at two sites (VL sites 1 and 2) were obtained with participants seated with the knee at 90° flexion. For GM measures, participants lay prone with the ankle fixed at 15° dorsiflexion. Measures were taken on two separate occasions, 7 days apart (T1 and T2). Results: The ICCs (95% CI) were: VL site 1 thickness = 0.96 (0.90–0.98); VL site 2 thickness = 0.96 (0.90–0.98); VL θ = 0.87 (0.68–0.95); VL Lf = 0.80 (0.50–0.92); GM thickness = 0.97 (0.92–0.99); GM θ = 0.85 (0.62–0.94); and GM Lf = 0.90 (0.75–0.96). The 95% ratio limits of agreement (LOAs) for all measures, calculated by multiplying the standard deviation of the ratio of the results between T1 and T2 by 1.96, ranged from 10.59 to 38.01%. Conclusion: The ability of these tests to detect a real change in VL and GM muscle architecture is good at the group level but problematic at the individual level, as the relatively large 95% ratio LOAs in the current study may encompass the changes in architecture observed in other training studies. Therefore, the current findings suggest that B-mode ultrasonography can be used with confidence by researchers when investigating changes in muscle architecture in groups of older adults, but its use is limited in showing changes in individuals over time.
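The 95% ratio LOA computation, as described above (1.96 times the standard deviation of the between-occasion ratios, expressed as a percentage), can be sketched with made-up paired measurements; the data below are hypothetical, not from this study:

```python
def ratio_loa(t1, t2):
    """95% ratio limits of agreement between two test occasions.

    Follows the description in the abstract: the standard deviation of
    the T2/T1 ratios, multiplied by 1.96, expressed as a percentage.
    """
    ratios = [b / a for a, b in zip(t1, t2)]
    n = len(ratios)
    mean_r = sum(ratios) / n
    sd = (sum((r - mean_r) ** 2 for r in ratios) / (n - 1)) ** 0.5
    return 1.96 * sd * 100.0   # as a percentage

# Hypothetical muscle thickness measurements (cm) on two occasions.
t1 = [2.10, 2.40, 1.95, 2.60, 2.25, 2.05]
t2 = [2.15, 2.35, 2.00, 2.55, 2.40, 2.00]
loa_percent = ratio_loa(t1, t2)
```

A training effect smaller than this percentage band cannot be distinguished from measurement noise in a single individual, which is the abstract's point about individual-level interpretation.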


Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly change (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two- to three-year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle changes in appearance over time remain open. The robotic approach to map building has been dominated by algorithms that optimise the geometry of the map based on measurements of distances to features. In a robotic approach, measurements of distance to features are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases estimates of depth from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo et al., 2003).


DeLone and McLean (1992, p. 16) argue that the concept of “system use” has suffered from a “too simplistic definition.” Despite decades of substantial research on system use, the concept has yet to receive strong theoretical scrutiny. Many measures of system use, and the development of those measures, have often been idiosyncratic and lack credibility or comparability. This paper reviews various attempts at conceptualizing and measuring system use and then proposes a re-conceptualization of it as “the level of incorporation of an information system within a user’s processes.” The definition is supported by the theory of work systems and by Key-User-Group considerations. We then develop the concept of a Functional-Interface-Point (FIP) and four dimensions of system usage: extent, the proportion of the FIPs used by the business process; frequency, the rate at which FIPs are used by the participants in the process; thoroughness, the level of use of the information/functionality provided by the system at an FIP; and attitude towards use, a set of measures that assess the level of comfort, degree of respect and the challenges set forth by the system. The paper argues that the automation level, that is, the proportion of the business process encoded by the information system, has a mediating impact on system use. The article concludes with a discussion of some implications of this re-conceptualization and areas for follow-on research.
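Two of the four dimensions, extent and frequency, lend themselves to a simple operationalisation from an event log. The function and data below are hypothetical illustrations of the definitions, not an instrument from the paper:

```python
from collections import Counter

def usage_dimensions(fips_in_process, usage_log):
    """Two of the four proposed dimensions, computed from an event log.

    fips_in_process : set of FIP identifiers the business process exposes
    usage_log       : list of (user, fip) events observed over some period
    Returns (extent, frequency): extent is the proportion of FIPs touched
    at least once; frequency is the mean number of events per used FIP.
    """
    used = Counter(fip for _, fip in usage_log if fip in fips_in_process)
    extent = len(used) / len(fips_in_process)
    frequency = sum(used.values()) / len(used) if used else 0.0
    return extent, frequency

# Hypothetical process with four FIPs and a small usage log.
fips = {"enter_order", "check_stock", "approve", "invoice"}
log = [("ann", "enter_order"), ("ann", "check_stock"),
       ("bob", "enter_order"), ("bob", "invoice"),
       ("ann", "enter_order")]
extent, freq = usage_dimensions(fips, log)
```

Thoroughness and attitude towards use, by contrast, require per-FIP functionality inventories and survey instruments, which is why they cannot be read off a log alone.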

Resumo:

This research is one of several ongoing studies conducted within the IT Professional Services (ITPS) research programme at Queensland University of Technology (QUT). In 2003, ITPS introduced the IS-Impact model, a measurement model for measuring information systems success from the viewpoint of multiple stakeholders. The model, along with its instrument, is robust, simple yet generalisable, and yields results that are comparable across time, stakeholders, different systems and system contexts. The IS-Impact model is defined as “a measure at a point in time, of the stream of net benefits from the Information System (IS), to date and anticipated, as perceived by all key-user-groups”. The model comprises four dimensions: ‘Individual Impact’, ‘Organizational Impact’, ‘Information Quality’ and ‘System Quality’. The two Impact dimensions measure the up-to-date impact of the evaluated system, while the two Quality dimensions act as proxies for probable future impacts (Gable, Sedera & Chan, 2008). To fulfil the goal of ITPS, “to develop the most widely employed model”, this research re-validates and extends the IS-Impact model in a new context. This method/context-extension research aims to test the generalisability of the model by addressing its known limitations. One such limitation concerns the model’s external validity. In order to gain wide acceptance, a model should be consistent and work well in different contexts. The IS-Impact model, however, was only validated in the Australian context, with packaged software as the IS under study. Thus, this study is concerned with whether the model can be applied in a different context. Aiming for a robust and standardised measurement model that can be used across different contexts, this research re-validates and extends the IS-Impact model and its instrument to public sector organisations in Malaysia. 
The overarching research question (managerial question) of this research is “How can public sector organisations in Malaysia measure the impact of information systems systematically and effectively?” With two main objectives, the managerial question is broken down into two specific research questions. The first research question addresses the applicability (relevance) of the dimensions and measures of the IS-Impact model in the Malaysian context, as well as the completeness of the model in the new context. Initially, this research assumes that the dimensions and measures of the IS-Impact model are sufficient for the new context. However, some IS researchers suggest that the selection of measures needs to be done purposely for different contextual settings (DeLone & McLean, 1992; Rai, Lang & Welker, 2002). Thus, the first research question is: “Is the IS-Impact model complete for measuring the impact of IS in Malaysian public sector organisations?” [RQ1]. The IS-Impact model is a multidimensional model that consists of four dimensions or constructs. Each dimension is represented by formative measures or indicators. Formative measures are known as composite variables because these measures make up, or form, the construct, in this case the dimension in the IS-Impact model. These formative measures define different aspects of the dimension; thus, a measurement model of this kind needs to be tested not just on the structural relationships between the constructs but also on the validity of each measure. In a previous study, the IS-Impact model was validated using formative validation techniques, as proposed in the literature (i.e., Diamantopoulos and Winklhofer, 2001; Diamantopoulos and Siguaw, 2006; Petter, Straub and Rai, 2007). However, there is potential for improving the validation testing of the model by adding more criterion or dependent variables. 
This includes identifying a consequence of the IS-Impact construct for the purpose of validation. Moreover, a different approach is employed in this research, whereby the validity of the model is tested using the Partial Least Squares (PLS) method, a component-based structural equation modelling (SEM) technique. Thus, the second research question addresses the construct validation of the IS-Impact model: “Is the IS-Impact model valid as a multidimensional formative construct?” [RQ2]. This study employs two rounds of surveys, each with a different and specific aim. The first is qualitative and exploratory, aiming to investigate the applicability and sufficiency of the IS-Impact dimensions and measures in the new context. This survey was conducted in a state government in Malaysia. A total of 77 valid responses were received, yielding 278 impact statements. The results from the qualitative analysis demonstrate the applicability of most of the IS-Impact measures. The analysis also revealed a significant new measure that emerged from the context; this new measure was added as one of the System Quality measures. The second survey is a quantitative survey that aims to operationalise the measures identified from the qualitative analysis and rigorously validate the model. This survey was conducted in four state governments (including the state government involved in the first survey). A total of 254 valid responses were used in the data analysis. Data were analysed using structural equation modelling techniques, following the guidelines for formative construct validation, to test the validity and reliability of the constructs in the model. This is the first study to extend the complete IS-Impact model in a new context that differs in nationality, language and the type of information system (IS). The main contribution of this research is a comprehensive, up-to-date IS-Impact model that has been validated in the new context. 
The study has accomplished its purpose of testing the generalisability of the IS-Impact model and continuing the IS evaluation research by extending it in the Malaysian context. A further contribution is a validated Malaysian language IS-Impact measurement instrument. It is hoped that the validated Malaysian IS-Impact instrument will encourage related IS research in Malaysia, and that the demonstrated model validity and generalisability will encourage a cumulative tradition of research previously not possible. The study entailed several methodological improvements on prior work, including: (1) new criterion measures for the overall IS-Impact construct, employed in ‘identification through measurement relations’; (2) a stronger, multi-item ‘Satisfaction’ construct, employed in ‘identification through structural relations’; (3) an alternative version of the main survey instrument in which items are randomized (rather than blocked), for comparison with the main survey data in response to possible common method variance (no significant differences between the two survey instruments were observed); (4) a demonstrated validation process for formative indexes of a multidimensional, second-order construct (existing examples mostly involved unidimensional constructs); (5) tests for suppressor effects that influence the significance of some measures and dimensions in the model; and (6) a demonstration of the effect of an imbalanced number of measures within a construct on the contribution of each dimension in a multidimensional model.
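The core idea behind validating a formative construct with a criterion variable can be sketched numerically: formative indicator weights are estimated by regressing an outcome on the indicators, and the composite is their weighted sum. This is a generic MIMIC-style illustration on synthetic data, not the study's PLS procedure; the weights, noise level and sample size (matched to the 254 responses only for flavour) are all assumptions.

```python
import numpy as np

# Synthetic formative-construct sketch: four indicators "form" a composite,
# identified through its relation to a criterion (dependent) variable.
rng = np.random.default_rng(0)
n = 254                                   # sample size, mirroring the survey
X = rng.normal(size=(n, 4))               # four formative indicators
true_w = np.array([0.5, 0.3, 0.15, 0.05]) # hypothetical indicator weights
y = X @ true_w + rng.normal(scale=0.1, size=n)   # criterion measure

# Estimate the indicator weights by ordinary least squares; in PLS these
# weights would be iterated jointly with the structural model.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
composite = X @ w_hat                     # formative composite scores
```

With an imbalanced number of indicators per dimension, weights of this kind can shift in relative magnitude, which is one reason the abstract flags that issue as a methodological finding.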

Resumo:

Identity is unique, multiple and dynamic. This paper explores common attributes of organisational identities and examines the role of performance management systems (PMSs) in revealing identity attributes. One of the most influential PMSs, the balanced scorecard, is used to illustrate the arguments. A case study of a public-sector organisation suggests that PMSs now place a value on the intangible aspects of organisational life as well as the financial ones, periodically revealing the distinctiveness, relativity, visibility, fluidity and manageability of public-sector identities that sustain their viability. This paper contributes a multi-disciplinary approach and its practical application, demonstrating an alternative pathway to identity-making using PMSs.

Resumo:

Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelations over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system that could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data. This paper serves to illustrate the usefulness of these models in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the ‘layered CAR model’, while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment-effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, treated depth (the basis for the regression model) as measured with error, and fitted CAR neighbourhood models by depth layer. The model is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and its fixed terms involve an errors-in-measurement model that treats depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
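The neighbourhood structure behind a CAR model on a lattice can be sketched directly: with first-order (rook) neighbours, the intrinsic CAR prior has precision matrix Q = tau * (D - W), where W is the adjacency matrix and D the diagonal of neighbour counts, so each site's conditional mean is the average of its neighbours. The grid size and precision parameter below are hypothetical, and the sketch shows only the prior's structure, not the paper's layered, spline-based model.

```python
import numpy as np

# Intrinsic CAR prior on a small rows-by-cols lattice with rook neighbours.
rows, cols, tau = 3, 4, 1.0
n = rows * cols

def neighbours(i):
    """First-order (rook) lattice neighbours of site i."""
    r, c = divmod(i, cols)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols:
            yield rr * cols + cc

# Adjacency matrix W and diagonal matrix D of neighbour counts
W = np.zeros((n, n))
for i in range(n):
    for j in neighbours(i):
        W[i, j] = 1.0
D = np.diag(W.sum(axis=1))

# Intrinsic CAR precision; it is singular (improper prior), with
# E[phi_i | phi_-i] equal to the mean of phi over i's neighbours.
Q = tau * (D - W)
```

In the layered CAR model of the abstract, a separate Q of this form (with its own variance component) would apply at each of the 15 depth layers; WinBUGS expresses the same structure through its car.normal distribution.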