444 results for Graphic consistency


Relevance:

10.00%

Publisher:

Abstract:

This research report documents work conducted by the Center for Transportation Research (CTR) at The University of Texas at Austin in analyzing the Joint Analysis using the Combined Knowledge (J.A.C.K.) program. This program was developed by the Texas Department of Transportation (TxDOT) to make projections of revenues and expenditures. The research effort was to span from September 2008 to August 2009, but the bulk of the work was completed and presented by December 2008. J.A.C.K. was subsequently renamed TRENDS, but for consistency with the scope of work, the original name is used throughout this report.

Relevance:

10.00%

Publisher:

Abstract:

A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless; because of random transmission delays and packet losses, however, control performance may be badly degraded and the control system rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS. To achieve this, both communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks, and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the networking perspective, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the control perspective, methods to compensate for network-induced delays and packet losses are important for NCS design. To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled with a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; this model accurately captures the trade-off between real-time performance and throughput. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to balance real-time and throughput performance in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
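The abstract stays at a high level of description, so the following is only a minimal sketch of the predictive-compensation idea, assuming a scalar linear plant, a fixed sensor-to-controller delay and a proportional feedback gain (all placeholders, not the design developed in the thesis): the last received measurement is rolled forward through the plant model over the inputs applied since it was taken, so the controller acts on a prediction of the current state rather than on stale or missing data.

```python
import random

# Illustrative sketch of model-based predictive compensation for
# network-induced delay and packet loss in a networked control loop.
# The scalar plant, gain, delay and loss rate are placeholder assumptions,
# not the design developed in the thesis.

a, b = 0.95, 0.5          # assumed plant model: x[k+1] = a*x[k] + b*u[k]
K = 0.8                   # proportional state-feedback gain
d = 3                     # fixed sensor-to-controller delay (samples)
loss_prob = 0.2           # assumed probability that a measurement packet is lost

x = 5.0                   # true plant state
pipeline = [x] * d        # measurements still travelling through the network
u_hist = []               # every control input applied so far
x_hold, hold_time = x, 0  # last received measurement and the time it was taken

for k in range(60):
    # The measurement taken d steps ago arrives now, unless its packet is lost.
    arriving = pipeline.pop(0)
    if random.random() > loss_prob:
        x_hold, hold_time = arriving, max(0, k - d)

    # Predict the current state by rolling the model forward from the last
    # received measurement over all inputs applied since it was taken.
    x_pred = x_hold
    for u_old in u_hist[hold_time:]:
        x_pred = a * x_pred + b * u_old

    u = -K * x_pred       # the controller acts on the prediction, not stale data

    pipeline.append(x)    # the sensor samples the true state and sends it
    x = a * x + b * u     # true plant update
    u_hist.append(u)

print(f"final state = {x:.4f} (close to 0 if the compensation is effective)")
```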

Relevance:

10.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome, which is caused by abnormalities in the properties of the tear film, is one of the most commonly reported eye health problems. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for assessing tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no "gold standard" test is currently available to assess tear film integrity, and improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea, and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The light is reflected by the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to scattering and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for evaluating all the dynamic phases of the tear film; however, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines was purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routine extracts a maximized area of analysis from each frame of the video recording, and a TFSQ metric is calculated within this area. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern's local orientation as a measure of TFSQ. These metrics helped to demonstrate the applicability of HSV to assess the tear film and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated, and receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was its lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into a quasi-straight-line image from which a block statistic is extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to better understand the HSV measurement and the instrument's potential limitations. Of special interest was the assessment of the instrument's sensitivity to subtle topographic changes. The theoretical simulations helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase offered insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter-extraction methods, so a set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to selecting an appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of high-speed videokeratoscopy to assess tear film surface quality. A set of novel image- and signal-processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
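The polar-transform metric is described only at a high level in the abstract, so the sketch below is a generic illustration of the idea rather than the thesis's actual routine: assuming OpenCV is available, a Placido-ring region of interest is unwrapped with cv2.warpPolar so the concentric rings become quasi-straight lines, and block statistics of the unwrapped image serve as a surrogate surface-quality score (the block size, the chosen statistic and the synthetic test pattern are all placeholder choices).

```python
import cv2
import numpy as np

def tfsq_polar_block_metric(roi: np.ndarray, block: int = 16) -> float:
    """Illustrative surface-quality score for a Placido-ring region of interest.

    The circular ring pattern is unwrapped to polar coordinates so the rings
    become quasi-straight lines, and local block statistics then summarise how
    disturbed the pattern is (higher score = more irregular tear film surface).
    """
    h, w = roi.shape
    polar = cv2.warpPolar(roi.astype(np.float32), (w, h),
                          (w / 2.0, h / 2.0), min(h, w) / 2.0,
                          cv2.WARP_POLAR_LINEAR)
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = polar[y:y + block, x:x + block]
            profile = patch.mean(axis=1)   # mean over the radial direction, per angle
            scores.append(float(np.std(np.diff(profile))))  # angular variation
    return float(np.mean(scores))

# Tiny synthetic check: concentric rings with and without added disturbance.
yy, xx = np.mgrid[:256, :256]
r = np.hypot(yy - 128, xx - 128)
rings = 0.5 + 0.5 * np.cos(r / 4.0)               # smooth ring pattern
noisy = rings + np.random.default_rng(0).normal(scale=0.15, size=rings.shape)
print(tfsq_polar_block_metric(rings), tfsq_polar_block_metric(noisy))
```

The disturbed pattern should yield a noticeably higher score than the smooth one, which is the qualitative behaviour a TFSQ metric needs.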

Relevance:

10.00%

Publisher:

Abstract:

Traffic oscillations are a typical feature of congested traffic flow, characterized by recurring decelerations followed by accelerations (stop-and-go driving). The negative environmental impacts of these oscillations are widely accepted, but their impact on traffic safety has been debated. This paper examines the impact of freeway traffic oscillations on traffic safety. The study employs a matched case-control design using high-resolution traffic and crash data from a freeway segment. Traffic conditions prior to each crash were taken as cases, while traffic conditions during the same periods on days without crashes were taken as controls; cases and controls were also matched by presence of congestion, geometry and weather. A total of 82 cases and about 80,000 candidate controls were extracted from more than three years of data (2004 to 2007). Conditional logistic regression models were developed based on the case-control samples. To verify consistency in the results, 20 different sets of controls were randomly extracted from the candidate pool for varying control-to-case ratios. The results reveal that the standard deviation of speed (and thus oscillation) is a significant variable, with an average odds ratio of about 1.08. This implies that the likelihood of a (rear-end) crash increases by about 8% with each additional unit increase in the standard deviation of speed. The average traffic states prior to crashes were less significant than the speed variations in congestion.
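To make the odds-ratio interpretation concrete, here is a small numeric sketch; the coefficient is simply back-derived from the reported average odds ratio of 1.08 and is not the paper's fitted model.

```python
import math

# How an odds ratio translates into "about 8% per unit of speed standard
# deviation". The coefficient below is back-derived from the reported odds
# ratio purely for illustration; it is not taken from the paper's model.

odds_ratio = 1.08                      # reported average odds ratio
beta = math.log(odds_ratio)            # implied conditional-logit coefficient

for delta_sd in (1.0, 2.0, 5.0):       # increase in speed standard deviation
    multiplier = math.exp(beta * delta_sd)
    print(f"+{delta_sd:.0f} units of speed SD -> crash odds x {multiplier:.3f} "
          f"({(multiplier - 1) * 100:.1f}% higher)")
```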

Relevance:

10.00%

Publisher:

Abstract:

This protocol represents an attempt to assist in the instruction of teamwork assessment for first-year students across QUT. We anticipate that teaching staff will view this protocol as a generic resource for teamwork instruction, processes and evaluation. Teamwork has been acknowledged as a problematic practice at QUT, even though it remains prominent among the graduate capabilities expected of all students at this institution. This protocol is not an extensive document on the complexities and dynamics of teamwork processes, but rather a set of best-practice guidelines and recommendations to assist in team design, development, management, support and assessment. It is recommended that this protocol be progressively implemented across QUT, not only to achieve consistency in the teaching of teamwork, but also to address the misconceptions and conflict surrounding the importance of the teamwork experience. The authors acknowledge the extensive input and contributions of a Teamwork Steering Committee drawn from academic staff and administrative members across the institution. We also welcome feedback and suggestions to fine-tune, and make inclusive, those strategies that staff believe contribute to optimal teamwork outcomes.

Relevance:

10.00%

Publisher:

Abstract:

The twists and turns in the ongoing development of the implied common law good faith obligation in the commercial contractual arena continue to prove fertile academic ground. Despite a lack of guidance from the High Court, the lower courts have been besieged by claims based, in part, on the implied obligation. Although lower court authority lacks consistency and the ‘decisions in which lower courts have recognised the legitimacy of implication of a term of good faith vary in their suggested rationales’, the implied obligation may provide some comfort to a party to ‘at least some commercial contracts’ faced with a contractual counterpart exhibiting symptoms of bad faith.

Relevance:

10.00%

Publisher:

Abstract:

This thesis develops, applies and analyses a collaborative design methodology for branding a tourism destination. The area between the Northern Tablelands and the Mid-North Coast of New South Wales, Australia, was used as a case study for this research. The study applies theoretical concepts of systems thinking and complexity to the real world, and tests the use of design as a social tool to engage multiple stakeholders in planning. In this research I acknowledge that places (and destinations) are socially constructed through people's interactions with their physical and social environments. This study explores a methodology that is explicit about the uncertainties of the destination's system and that helps to elicit knowledge and system trends. The collective design process used the creation of brand concepts, elements and strategies as instruments to directly engage stakeholders in reflecting on their places and on the issues related to tourism activity in the region. The methods applied included individual conversations and collaborative design sessions to elicit knowledge from local stakeholders. Concept maps were used to record and interpret information released throughout the process. An important aspect of the methodology was to bring different stakeholder groups together and translate the information into a common language understandable by all participants. This work helped release significant information about what kind of tourism activity local stakeholders are prepared to receive and support. It also helped the emergence of a more unified regional identity. The outcomes delivered by the project (brand, communication material and strategies) were of high quality and in line with the desires and expectations of the local hosts. The process also reinforced local senses of pride, belonging and conservation. Furthermore, interaction between participants from different parts of the region triggered some self-organising activity around the brand they created together. A major contribution of the present work is the articulation of an inclusive methodology to facilitate the involvement of locals in decision-making related to tourism planning. Of particular significance is the focus on the social construction of meaning in and through design, showing that design exercises can have significant social impact, not only on the final product, but also on the realities of the people involved in the creative process.

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates the place of online moderation in supporting teachers to work in a system of standards-based assessment. The participants of the study were fifty middle school teachers who met online with the aim of developing consistency in their judgement decisions. Data were gathered through observation of the online meetings, interviews, surveys and the collection of artefacts. The data were viewed and analysed through sociocultural theories of learning and of technology, and the analysis demonstrates how utilising these theories can add depth to understanding the added complexity of developing shared meanings of standards in an online context. The findings contribute to current understanding of standards-based assessment by examining the social moderation process as it acts to increase the reliability of judgements made within a standards framework. Specifically, the study investigates the opportunities afforded by conducting social moderation practices in a synchronous online context. The study explicates how the technology affects the negotiation of judgements and the development of shared meanings of assessment standards, while demonstrating how involvement in online moderation discussions can support teachers to become and belong within a practice of standards-based assessment. This research responds to a growing international interest in standards-based assessment and the use of social moderation to develop consistency in judgement decisions; online moderation is a new practice for addressing these concerns on a systemic basis.

Relevance:

10.00%

Publisher:

Abstract:

Aim: This paper reports a study designed to assess the psychometric properties (validity and reliability) of a Turkish version of the Australian Parents' Fever Management Scale (PFMS). Background: Little is known about childhood fever management among Turkish parents, and no scales to measure parents' fever management practices are available in Turkey. Design: This is a methodological study. Methods: Eighty parents of febrile children aged six months to five years were randomly selected from the paediatric hospital and two community family health centres in Sakarya, Turkey. The PFMS was back-translated, and language equivalence and content validity were confirmed. PFMS and socio-demographic data were collected in 2009. Means and standard deviations were calculated for interval-level data, and p values less than 0.05 were considered statistically significant. Unrotated principal component analysis was used to determine construct validity, and Cronbach's coefficient alpha determined internal consistency reliability. Results: The PFMS was psychometrically sound in this population. Construct validity, confirmed by factor analysis [KMO = 0.812; Bartlett's test of sphericity: χ² = 182.799, df = 28, p < 0.001], revealed the Turkish version to comprise the eight original PFMS items. The internal consistency reliability coefficient was 0.80, and the scale's item-total correlation coefficients ranged from 0.15 to 0.66 and were significant (p < 0.001). Interestingly, parents reported high scores on the PFMS: 34.52 ± 4.60 (range 8-40, with 40 indicating a high burden of care for febrile children). Conclusion: The PFMS was as psychometrically robust in a Turkish population as in an Australian population and is therefore a useful tool for health professionals to identify parents' practices and provide targeted education, thereby reducing the unnecessary burden of care parents place on themselves when caring for a febrile child. Relevance to clinical practice: Testing in different populations, cultures and healthcare systems will further assist in establishing the PFMS's usefulness in clinical practice and research.
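For readers unfamiliar with the internal-consistency statistic reported above, the sketch below computes Cronbach's coefficient alpha for an eight-item scale from a respondents-by-items score matrix; the synthetic responses are placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items (8 for the PFMS)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Synthetic example: 80 respondents answering 8 items on a 1-5 scale,
# driven by a shared latent trait so the items are correlated.
rng = np.random.default_rng(0)
latent = rng.normal(size=(80, 1))
responses = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(80, 8))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")   # expect a high alpha here
```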

Relevance:

10.00%

Publisher:

Abstract:

In fast bowling, cricketers are expected to produce a range of delivery lines and lengths while maximising ball speed. From a coaching perspective, technique consistency has typically been associated with superior performance in these areas. However, although bowlers are required to bowl consistently, at the elite level they must also be able to vary line, length and speed to adapt to opposition batters' strengths and weaknesses. The relationship between technique and performance variability (and consistency) has not been investigated in previous fast bowling research. Consequently, the aim of this study was to quantify both technique (bowling action and coordination) and performance variability in elite fast bowlers from Australian Junior and National Pace Squads. Technique variability was analysed to investigate whether it could be classified as functional or dysfunctional in relation to speed and accuracy.

Relevance:

10.00%

Publisher:

Abstract:

This study investigated the longitudinal performance of 583 students on six map items that were represented in various graphic forms. Specifically, this study compared the performance of 7-9-year-olds (across Grades 2 and 3) from metropolitan and non-metropolitan locations. The results of the study revealed significant performance differences in favour of metropolitan students on two of six map tasks. Implications include the need for teachers in non-metropolitan locations to ensure that their students do not overly fixate on landmarks represented on maps but rather consider the arrangement of all elements encompassed within the graphic.

Relevance:

10.00%

Publisher:

Abstract:

This paper establishes practical stability results for an important range of approximate discrete-time filtering problems involving mismatch between the true system and the approximating filter model. Under a local consistency assumption, the practical stability established is in the sense of an asymptotic bound on the amount of bias introduced by the model approximation. Significantly, these practical stability results do not require the approximating model to be of the same model type as the true system. Our analysis applies to a wide range of estimation problems and justifies the common practice of approximating intractable infinite-dimensional nonlinear filters by simpler, computationally tractable filters.
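As a loose illustration of the practice the paper justifies (approximating an intractable filter with a simpler, mismatched one), the sketch below runs a linear Kalman filter whose model ignores a nonlinear term of the true system and checks that the estimation error stays bounded; the system, noise levels and filter are placeholder choices, not the class of problems or the bounds analysed in the paper.

```python
import numpy as np

# Illustrative only: a mismatched linear Kalman filter tracking a mildly
# nonlinear true system, with the estimation error monitored for boundedness.

rng = np.random.default_rng(1)

# True system: x[k+1] = 0.9*x[k] + 0.1*sin(x[k]) + w,  y[k] = x[k] + v
Q_true, R_true = 0.05, 0.1

# Approximating filter model ignores the sin() term: x[k+1] ≈ 0.9*x[k] + w
a, Q, R = 0.9, 0.05, 0.1

x, x_hat, P = 2.0, 0.0, 1.0
errors = []
for k in range(500):
    # True system evolves and is measured.
    x = 0.9 * x + 0.1 * np.sin(x) + rng.normal(scale=np.sqrt(Q_true))
    y = x + rng.normal(scale=np.sqrt(R_true))

    # Kalman filter built on the simplified (mismatched) linear model.
    x_pred, P_pred = a * x_hat, a * a * P + Q                 # predict
    K = P_pred / (P_pred + R)                                 # Kalman gain
    x_hat, P = x_pred + K * (y - x_pred), (1 - K) * P_pred    # update

    errors.append(abs(x - x_hat))

print(f"mean |error| over last 100 steps: {np.mean(errors[-100:]):.3f}")
```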

Relevance:

10.00%

Publisher:

Abstract:

Many of the classification algorithms developed in the machine learning literature, including the support vector machine and boosting, can be viewed as minimum contrast methods that minimize a convex surrogate of the 0–1 loss function. The convexity makes these algorithms computationally efficient. The use of a surrogate, however, has statistical consequences that must be balanced against the computational virtues of convexity. To study these issues, we provide a general quantitative relationship between the risk as assessed using the 0–1 loss and the risk as assessed using any nonnegative surrogate loss function. We show that this relationship gives nontrivial upper bounds on excess risk under the weakest possible condition on the loss function—that it satisfies a pointwise form of Fisher consistency for classification. The relationship is based on a simple variational transformation of the loss function that is easy to compute in many applications. We also present a refined version of this result in the case of low noise, and show that in this case, strictly convex loss functions lead to faster rates of convergence of the risk than would be implied by standard uniform convergence arguments. Finally, we present applications of our results to the estimation of convergence rates in function classes that are scaled convex hulls of a finite-dimensional base class, with a variety of commonly used loss functions.
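As a concrete, hedged illustration (not code from the paper), the variational transformation can be evaluated numerically for a specific convex surrogate; the sketch below does so for the exponential loss used by boosting and compares the result with its known closed form ψ(θ) = 1 − √(1 − θ²).

```python
import numpy as np

# Numerical sketch of the variational psi-transform for the exponential
# surrogate loss phi(alpha) = exp(-alpha). For a convex, classification-
# calibrated loss, psi(theta) = H_minus((1+theta)/2) - H((1+theta)/2), where
# H(eta) is the minimal conditional phi-risk and H_minus(eta) the minimal
# conditional phi-risk over predictions with the "wrong" sign.

def phi(alpha):
    return np.exp(-alpha)

alphas = np.linspace(-10, 10, 20001)          # grid over prediction values

def conditional_risk(eta, alpha):
    return eta * phi(alpha) + (1 - eta) * phi(-alpha)

def H(eta):
    return conditional_risk(eta, alphas).min()

def H_minus(eta):
    wrong_sign = alphas * (2 * eta - 1) <= 0  # predictions disagreeing with the label
    return conditional_risk(eta, alphas[wrong_sign]).min()

for theta in (0.1, 0.3, 0.5, 0.9):
    eta = (1 + theta) / 2
    psi_numeric = H_minus(eta) - H(eta)
    psi_closed = 1 - np.sqrt(1 - theta**2)
    print(f"theta={theta:.1f}: numeric psi={psi_numeric:.4f}, "
          f"closed form={psi_closed:.4f}")
```

The paper's bound ψ(R(f) − R*) ≤ R_φ(f) − R*_φ then turns an excess surrogate risk into a guarantee on the excess 0-1 risk, which is exactly the relationship the numeric transform quantifies.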

Relevance:

10.00%

Publisher:

Abstract:

The risk, or probability of error, of the classifier produced by the AdaBoost algorithm is investigated. In particular, we consider the stopping strategy to be used in AdaBoost to achieve universal consistency. We show that, provided AdaBoost is stopped after n^{1-ε} iterations, for sample size n and ε ∈ (0,1), the sequence of risks of the classifiers it produces approaches the Bayes risk.
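A minimal sketch of the stopping rule, assuming scikit-learn's AdaBoostClassifier with decision stumps on a synthetic dataset; the dataset and the particular choice of ε are placeholders, and the only point illustrated is capping the number of boosting rounds at roughly n^{1-ε}.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Sketch of the early-stopping rule: run AdaBoost for roughly n^(1 - eps)
# rounds, where n is the training-sample size and eps lies in (0, 1).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

n = len(X_train)
eps = 0.4                                   # placeholder choice of eps in (0, 1)
n_rounds = max(1, int(n ** (1 - eps)))      # stop after ~n^(1-eps) iterations

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),   # decision stumps
    n_estimators=n_rounds,
    random_state=0,
)
clf.fit(X_train, y_train)
print(f"n = {n}, boosting rounds = {n_rounds}, "
      f"test accuracy = {clf.score(X_test, y_test):.3f}")
```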