Abstract:
This paper presents the results of a pilot study examining the factors that most affect the effective implementation of, and improvement to, Quality Management Systems (QMSs) amongst Indonesian construction companies. Nine critical factors were identified from an extensive literature review, and a survey was conducted of 23 respondents from three specific groups (Quality Managers, Project Managers, and Site Engineers) undertaking work in the Indonesian infrastructure construction sector. The data were analyzed initially using simple descriptive techniques. This study reveals that different groups within the sector hold different opinions of the factors, regardless of the degree of importance of each factor. However, the evaluation of construction project success and the incentive schemes for high-performance staff are the two factors that were considered very important by most of the respondents in all three groups. In terms of their assessment of tools for measuring contractors' performance, additional QMS guidelines and techniques related to QMS practice provided by the Government, and benchmarking, a clear majority in each group regarded their usefulness as ‘of some importance’.
Abstract:
The 27-item Intolerance of Uncertainty Scale (IUS) has become one of the most frequently used measures of intolerance of uncertainty. More recently, an abridged, 12-item version of the IUS has been developed. The current research used clinical (n = 50) and non-clinical (n = 56) samples to examine and compare the psychometric properties of both versions of the IUS. The two scales showed good internal consistency at both the total and subscale level and had satisfactory test-retest reliability. Both versions were correlated with worry and trait anxiety and had satisfactory concurrent validity. Significant differences between the scores of the clinical and non-clinical samples supported discriminant validity. Predictive validity was also supported for the two scales: total scores, in the case of the clinical sample, and a subscale, in the case of the non-clinical sample, significantly predicted pathological worry and trait anxiety. Overall, clinicians and researchers can use either version of the IUS with confidence, given the scales' sound psychometric properties.
Abstract:
Background: In response to the need for more comprehensive quality assessment within Australian residential aged care facilities, the Clinical Care Indicator (CCI) Tool was developed to collect outcome data as a means of making inferences about quality. A national trial of its effectiveness, and a Brisbane-based trial of its use within the quality improvement context, determined that the CCI Tool represented a potentially valuable addition to the Australian aged care system. This document describes the next phase in the CCI Tool's development, the aims of which were to establish the validity and reliability of the CCI Tool, and to develop quality indicator thresholds (benchmarks) for use in Australia. The CCI Tool is now known as the ResCareQA (Residential Care Quality Assessment). Methods: The study aims were achieved through a combination of quantitative data analysis and expert panel consultations using a modified Delphi process. The expert panel consisted of experienced aged care clinicians, managers, and academics; they were initially consulted to determine face and content validity of the ResCareQA, and later to develop thresholds of quality. To analyse its psychometric properties, ResCareQA forms were completed for all residents (N=498) of nine aged care facilities throughout Queensland. Kappa statistics were used to assess inter-rater and test-retest reliability, and Cronbach's alpha coefficient was calculated to determine internal consistency. For concurrent validity, equivalent items on the ResCareQA and the Resident Classification Scale (RCS) were compared using Spearman's rank order correlations, while discriminative validity was assessed using the known-groups technique, comparing ResCareQA results between groups with differing care needs, as well as between male and female residents.
Rank-ordered facility results for each clinical care indicator (CCI) were circulated to the panel; upper and lower thresholds for each CCI were nominated by panel members and refined through a Delphi process. These thresholds indicate excellent care at one extreme and questionable care at the other. Results: Minor modifications were made to the assessment, and it was renamed the ResCareQA. Agreement on its content was reached after two Delphi rounds; the final version contains 24 questions across four domains, enabling generation of 36 CCIs. Both test-retest and inter-rater reliability were sound, with median kappa values of 0.74 (test-retest) and 0.91 (inter-rater); internal consistency was not as strong, with a Cronbach's alpha of 0.46. Because the ResCareQA does not provide a single combined score, comparisons for concurrent validity were made with the RCS on an item-by-item basis, with most resultant correlations being quite low. Discriminative validity analyses, however, revealed highly significant differences in the total number of CCIs between high care and low care groups (t199=10.77, p<0.001), while the differences between male and female residents were not significant (t414=0.56, p=0.58). Clinical outcomes varied both within and between facilities; agreed upper and lower thresholds were finalised after three Delphi rounds. Conclusions: The ResCareQA provides a comprehensive, easily administered means of monitoring quality in residential aged care facilities that can be reliably used on multiple occasions. The relatively modest internal consistency score was likely due to the multi-factorial nature of quality and the absence of an aggregate result for the assessment.
Measurement of concurrent validity proved difficult in the absence of a gold standard, but the sound discriminative validity results suggest that the ResCareQA has acceptable validity and could be confidently used as an indication of care quality within Australian residential aged care facilities. The thresholds, while preliminary due to small sample size, enable users to make judgements about quality within and between facilities. Thus it is recommended the ResCareQA be adopted for wider use.
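The reliability analyses described above rest on two standard statistics: Cohen's kappa (inter-rater and test-retest agreement) and Cronbach's alpha (internal consistency). As a generic illustration only, not the study's actual analysis code, both can be computed in a few lines:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' categorical ratings of the same cases."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                        # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percentage agreement for the test-retest and inter-rater comparisons reported above.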
Abstract:
To analyse mechanotransduction resulting from tensile loading under defined conditions, various devices for in vitro cell stimulation have been developed. This work aimed to determine the strain distribution on the membrane of a commercially available device and its consistency with rising cycle numbers, as well as the amount of strain transferred to adherent cells. The strains and their behaviour within the stimulation device were determined using digital image correlation (DIC). The strain transferred to cells was measured on eGFP-transfected bone marrow-derived cells imaged with a fluorescence microscope. The analysis was performed by determining the coordinates of prominent positions on the cells, calculating vectors between the coordinates and their length changes with increasing applied tensile strain. The stimulation device was found to apply homogeneous (mean of standard deviations approx. 2% of mean strain) and reproducible strains in the central well area. However, on average, only half of the applied strain was transferred to the bone marrow-derived cells. Furthermore, the strain measured within the device increased significantly with an increasing number of cycles while the membrane's Young's modulus decreased, indicating permanent changes in the material during extended use. Thus, strain magnitudes do not match the system readout and results require careful interpretation, especially at high cycle numbers.
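The cell-strain measurement described above (track landmark coordinates on each cell, form vectors between them, and compare vector lengths before and after stretch) can be sketched as follows; the function name and the use of engineering strain are illustrative assumptions, not the study's actual code:

```python
import numpy as np

def point_strain(ref_pts, def_pts):
    """Engineering strain of each segment between consecutive tracked points.

    ref_pts, def_pts: (n, 2) arrays of landmark coordinates in the reference
    (unstretched) and deformed (stretched) images.
    Strain = (deformed length - reference length) / reference length.
    """
    ref = np.asarray(ref_pts, dtype=float)
    deformed = np.asarray(def_pts, dtype=float)
    l0 = np.linalg.norm(np.diff(ref, axis=0), axis=1)       # reference lengths
    l1 = np.linalg.norm(np.diff(deformed, axis=0), axis=1)  # deformed lengths
    return (l1 - l0) / l0
```

With this convention, a membrane strain of 10% that is only half transferred to the cell would show up as a segment strain of about 0.05 between landmarks on the cell body.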
Abstract:
This research report documents work conducted by the Center for Transportation Research (CTR) at The University of Texas at Austin in analyzing the Joint Analysis using the Combined Knowledge (J.A.C.K.) program. This program was developed by the Texas Department of Transportation (TxDOT) to make projections of revenues and expenditures. The research effort was to span from September 2008 to August 2009, but the bulk of the work was completed and presented by December 2008. J.A.C.K. was subsequently renamed TRENDS, but for consistency with the scope of work, the original name is used throughout this report.
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication, and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely, and lossless; however, because of random transmission delays and packet losses, the performance of a control system may be badly deteriorated, and the control system rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS; to achieve this, communication and control methodologies have to be designed together. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled using a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. The Markov chain model accurately captures the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
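As a hedged illustration of the kind of Markov chain channel modelling mentioned above (not the thesis's actual model, whose states and parameters are not given here), a toy two-state Gilbert-Elliott process for correlated packet loss might look like:

```python
import random

def simulate_losses(n, p_gb, p_bg, loss_good=0.01, loss_bad=0.3, seed=1):
    """Toy two-state (Gilbert-Elliott) Markov model of packet loss.

    p_gb: P(good -> bad) per packet; p_bg: P(bad -> good) per packet.
    Each state has its own loss probability, so losses arrive in bursts
    while the chain sits in the bad state. Returns a list of booleans
    (True means the packet was lost).
    """
    rng = random.Random(seed)
    good = True
    lost = []
    for _ in range(n):
        lost.append(rng.random() < (loss_good if good else loss_bad))
        # Markov state transition for the next packet
        good = (rng.random() >= p_gb) if good else (rng.random() < p_bg)
    return lost
```

Such a bursty loss trace is exactly the input a predictive compensator must cope with, since consecutive losses from sensor to actuator cannot be masked by a single retransmission.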
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, one of the most commonly reported eye health problems, is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticised for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; in contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximised area of analysis, within which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analysing tear break-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, and receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has also improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully understand the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modelling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modelling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and considerations to select the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques have been proposed to quantify different aspects of the tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
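The Cartesian-to-polar transformation described for the block-processing metric (mapping the concentric ring pattern to quasi-straight lines before computing block statistics) can be sketched as follows; the sampling resolution and nearest-neighbour interpolation are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    """Resample a square grayscale image onto an (r, theta) grid centred on
    the image centre, so concentric rings map to quasi-straight horizontal
    lines. Uses nearest-pixel lookup for simplicity."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]  # shape (n_r, n_theta); each ring becomes one row
```

After this transform, a smooth tear film yields rows of near-constant intensity, so simple per-block statistics (e.g., the variance within each block of rows) become sensitive to small pattern disturbances.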
Abstract:
Traffic oscillations are typical features of congested traffic flow that are characterized by recurring decelerations followed by accelerations (stop-and-go driving). The negative environmental impacts of these oscillations are widely accepted, but their impact on traffic safety has been debated. This paper describes the impact of freeway traffic oscillations on traffic safety. This study employs a matched case-control design using high-resolution traffic and crash data from a freeway segment. Traffic conditions prior to each crash were taken as cases, while traffic conditions during the same periods on days without crashes were taken as controls. These were also matched by presence of congestion, geometry and weather. A total of 82 cases and about 80,000 candidate controls were extracted from more than three years of data from 2004 to 2007. Conditional logistic regression models were developed based on the case-control samples. To verify consistency in the results, 20 different sets of controls were randomly extracted from the candidate pool for varying control-case ratios. The results reveal that the standard deviation of speed (thus, oscillations) is a significant variable, with an average odds ratio of about 1.08. This implies that the likelihood of a (rear-end) crash increases by about 8% with an additional unit increase in the standard deviation of speed. The average traffic states prior to crashes were less significant than the speed variations in congestion.
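The reported effect size follows directly from the conditional logistic regression coefficient: the odds ratio for a unit increase in a covariate is OR = exp(beta). A minimal sketch of this relationship (the coefficient below is back-derived from the reported odds ratio of about 1.08, not taken from the paper):

```python
import math

def odds_ratio(beta, delta=1.0):
    """Odds ratio for a `delta`-unit increase in a covariate whose logistic
    regression coefficient is `beta`: OR = exp(beta * delta)."""
    return math.exp(beta * delta)

# A coefficient of roughly log(1.08) ~ 0.077 on the standard deviation of
# speed corresponds to the ~1.08 odds ratio above: each extra unit of speed
# SD raises the crash odds by about 8%.
beta_speed_sd = math.log(1.08)  # hypothetical value for illustration
```

Note that an odds ratio of 1.08 per unit compounds multiplicatively: a 5-unit increase in speed SD would scale the odds by exp(5 * 0.077), roughly 1.47.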
Abstract:
This protocol represents an attempt to assist in the instruction of teamwork assessment for first-year students across QUT. We anticipate that teaching staff will view this protocol as a generic resource for teamwork instruction, processes, and evaluation. Teamwork has been acknowledged as a problematic practice at QUT, even as it remains prominent amongst the graduate capabilities expected of all students at this institution. This protocol is not an extensive document on the complexities and dynamics of teamwork processes; rather, it presents a set of best practice guidelines and recommendations to assist in team design, development, management, support, and assessment. It is recommended that this protocol be progressively implemented across QUT, not only to attain consistency in the teaching of teamwork, but also to address the misconceptions and conflict around the importance of the teamwork experience. The authors acknowledge the extensive input and contributions of a Teamwork Steering Committee drawn from academic staff and administrative members across the institution. We also welcome feedback and suggestions to fine-tune, and make more inclusive, those strategies that staff believe contribute to optimal teamwork outcomes.
Abstract:
The twists and turns in the ongoing development of the implied common law good faith obligation in the commercial contractual arena continue to prove fertile academic ground. Despite a lack of guidance from the High Court, the lower courts have been besieged by claims based, in part, on the implied obligation. Although lower court authority lacks consistency and the ‘decisions in which lower courts have recognised the legitimacy of implication of a term of good faith vary in their suggested rationales’, the implied obligation may provide some comfort to a party to ‘at least some commercial contracts’ faced with a contractual counterpart exhibiting symptoms of bad faith.
Abstract:
This thesis investigates the place of online moderation in supporting teachers to work in a system of standards-based assessment. The participants of the study were fifty middle school teachers who met online with the aim of developing consistency in their judgement decisions. Data were gathered through observation of the online meetings, interviews, surveys, and the collection of artefacts. The data were viewed and analysed through sociocultural theories of learning and sociocultural theories of technology, and the analysis demonstrates how utilising these theories can add depth to understanding the added complexity of developing shared meanings of standards in an online context. The findings contribute to current understanding of standards-based assessment by examining how the social moderation process acts to increase the reliability of judgements made within a standards framework. Specifically, the study investigates the opportunities afforded by conducting social moderation practices in a synchronous online context. The study explicates how the technology affects the negotiation of judgements and the development of shared meanings of assessment standards, while demonstrating how involvement in online moderation discussions can support teachers to become and belong within a practice of standards-based assessment. This research responds to a growing international interest in standards-based assessment and the use of social moderation to develop consistency in judgement decisions; online moderation is a new practice for addressing these concerns on a systemic basis.
Abstract:
Aim: This paper reports a study designed to assess the psychometric properties (validity and reliability) of a Turkish version of the Australian Parents’ Fever Management Scale (PFMS). Background: Little is known about childhood fever management among Turkish parents, and no scales to measure parents’ fever management practices in Turkey are available. Design: This is a methodological study. Methods: Eighty parents of febrile children aged six months to five years were randomly selected from the paediatric hospital and two community family health centers in Sakarya, Turkey. The PFMS was back-translated; language equivalence and content validity were validated. PFMS and socio-demographic data were collected in 2009. Means and standard deviations were calculated for interval-level data, and p values less than 0.05 were considered statistically significant. Unrotated principal component analysis was used to determine construct validity, and Cronbach’s coefficient alpha determined the internal consistency reliability. Results: The PFMS was psychometrically sound in this population. Construct validity, confirmed by factor analysis [KMO 0.812, Bartlett’s test of sphericity (χ² = 182.799, df=28, p < 0.001)], revealed the Turkish version to comprise the eight original PFMS items. The internal consistency reliability coefficient was 0.80, and the scale’s item-total correlation coefficients ranged from 0.15 to 0.66 and were significant (p<0.001). Interestingly, parents reported high scores on the PFMS, 34.52±4.60 (range 8-40, with 40 indicating a high burden of care for febrile children). Conclusion: The PFMS was as psychometrically robust in a Turkish population as in an Australian population and is, therefore, a useful tool for health professionals to identify parents’ practices and provide targeted education, thereby reducing the unnecessary burden of care parents place on themselves when caring for a febrile child. Relevance to clinical practice: Testing in different populations, cultures, and healthcare systems will further establish the usefulness of the PFMS in clinical practice and research.
Abstract:
In fast bowling, cricketers are expected to produce a range of delivery lines and lengths while maximising ball speed. From a coaching perspective, technique consistency has been typically associated with superior performance in these areas. However, although bowlers are required to bowl consistently, at the elite level they must also be able to vary line, length and speed to adapt to opposition batters’ strengths and weaknesses. The relationship between technique and performance variability (and consistency) has not been investigated in previous fast bowling research. Consequently, the aim of this study was to quantify both technique (bowling action and coordination) and performance variability in elite fast bowlers from Australian Junior and National Pace Squads. Technique variability was analysed to investigate whether it could be classified as functional or dysfunctional in relation to speed and accuracy.
Abstract:
This paper establishes practical stability results for an important range of approximate discrete-time filtering problems involving mismatch between the true system and the approximating filter model. Using a local consistency assumption, the practical stability established is in the sense of an asymptotic bound on the amount of bias introduced by the model approximation. Significantly, these practical stability results do not require the approximating model to be of the same model type as the true system. Our analysis applies to a wide range of estimation problems and justifies the common practice of approximating intractable infinite-dimensional nonlinear filters by simpler, computationally tractable filters.
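The practical stability result described above can be summarised schematically; the symbols below are illustrative, not the paper's notation. The asymptotic bias bound takes the form

```latex
\limsup_{k \to \infty} \, \mathbb{E}\,\bigl\| \hat{x}_k - \hat{x}_k^{\varepsilon} \bigr\| \;\le\; C \, \varepsilon ,
```

where, under the stated assumptions, $\hat{x}_k$ denotes the estimate of the (intractable) exact filter at time $k$, $\hat{x}_k^{\varepsilon}$ the estimate of the approximating filter, $\varepsilon$ a measure of the mismatch between the true system and the filter model (the local consistency parameter), and $C$ a constant independent of $k$. The bound guarantees that the bias introduced by the approximation remains proportional to the model mismatch rather than accumulating over time.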