990 results for Task constraints


Relevance: 20.00%

Abstract:

Purpose: Students often read for long periods, and prolonged reading practice may be important for developing reading skills. For students with low vision, reading at a close working distance imposes high demands on their near visual functions, which might make it difficult to sustain prolonged reading. The aim of this study was to investigate the performance of a prolonged reading task by students with low vision. Method: Forty students with low vision, aged eight to 20 years and without any intellectual, reading or learning disability, were recruited through the Paediatric Low Vision Clinic, Buranda, Queensland. Following a preliminary vision examination, reading performance measures, namely critical print size (CPS), maximum oral reading rate (MORR) and near text visual acuity, were recorded using the Bailey-Lovie text reading charts before and after a 30-minute prolonged reading task. Results: The mean age of the participants was 13.03 ± 3 years. The distance and near visual acuities ranged from -0.1 to 1.24 logMAR and from 0.0 to 1.60 logMAR, respectively. The mean working distance of the participants was 11.2 ± 5.8 cm. Most of the participants (65 per cent) in this study were able to complete the prolonged reading task. Overall, there was no significant change in CPS, MORR or near text visual acuity following the prolonged task (p > 0.05). MORR was significantly correlated with age and near text visual acuity (p < 0.05). Conclusions: In this study, students with low vision were able to maintain their reading performance over a 30-minute prolonged reading task. Overall, there was no significant increase or decrease in reading performance following a prolonged reading task performed at their habitual close working distances, but there were wide individual variations within the group.

Relevance: 20.00%

Abstract:

This paper first presents an extended ambiguity resolution model that deals with an ill-posed problem and constraints among the estimated parameters. In the extended model, the regularization criterion is used instead of traditional least squares in order to estimate the float ambiguities better. The existing models can be derived from this general model. Second, the paper examines existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping; and integer least-squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical aspects.
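As a concrete illustration of two of the searching methods named above, the sketch below implements plain integer rounding and sequential (bootstrapped) rounding of a float ambiguity vector given its variance-covariance matrix. It is a minimal sketch assuming NumPy; the function names and the unit-triangular factorisation route are illustrative and are not taken from the paper.

```python
import numpy as np

def integer_rounding(a_float):
    """Component-wise rounding of the float ambiguity vector."""
    return np.rint(a_float).astype(int)

def integer_bootstrapping(a_float, Q):
    """Sequential conditional rounding (bootstrapping).

    Q is the variance-covariance matrix of the float ambiguities.
    Uses Q = L D L^T with L unit lower triangular; ambiguities are fixed
    from the last component to the first, each conditioned on those
    already fixed.
    """
    a_cond = np.asarray(a_float, dtype=float).copy()
    C = np.linalg.cholesky(Q)        # Q = C C^T
    L = C / np.diag(C)               # unit lower triangular factor
    n = a_cond.size
    z = np.zeros(n, dtype=int)
    for i in range(n - 1, -1, -1):
        z[i] = int(np.rint(a_cond[i]))
        # condition the not-yet-fixed ambiguities on the value just fixed
        a_cond[:i] -= L[i, :i] * (a_cond[i] - z[i])
    return z
```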

Relevance: 20.00%

Abstract:

The ready availability of suitably zoned and serviced land is one of the key factors in the timely and cost-effective provision of new land for development. Unfortunately, in many high population growth areas, land that may be available does not have ready access to infrastructure, or the appropriate designation/s (zoning) in place. The corresponding lag in supply frequently bears the blame for the resultant disequilibrium in the market and affordability pressures on the end product. Government has the capacity to respond to the issue of land supply in a number of ways. Proactive measures define longer-term goals and set the ground rules going forward. Reactive policy decisions are made in an often hostile environment where stakeholder interests conflict. With a trend to increased regulation, government risks further constraining the viability of land development in high growth areas without full consideration of all the supply-side variables. This preliminary paper identifies a number of the variables that may be constraining the supply of land for residential development in South East Queensland under the current regulatory environment. It examines the interrelationship between these supply-side constraints, a full understanding of which is required by government in order for its policies to stimulate, rather than restrict, the supply of land in this high growth region.

Relevance: 20.00%

Abstract:

Optical flow (OF) is a powerful motion cue that captures the fusion of two properties important for the task of obstacle avoidance: 3D self-motion and the 3D environmental surroundings. The problem of extracting such information for obstacle avoidance is commonly addressed through quantitative techniques such as time-to-contact and divergence, which are highly sensitive to noise in the OF image. This paper presents a new strategy for obstacle avoidance in an indoor setting, using a combination of quantitative and structural properties of the OF field, coupled with the flexibility and efficiency of a machine learning system. The resulting system is able to effectively control the robot in real time, avoiding obstacles in familiar and unfamiliar indoor environments under given motion constraints. Furthermore, through examination of the network's internal weights, we show how OF properties are used in the detection of these indoor obstacles.
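The quantitative OF properties mentioned above have a compact form. The sketch below, assuming a dense flow field stored as two NumPy arrays (u, v) in pixels per frame, estimates the flow divergence and the corresponding time-to-contact for pure forward translation toward a fronto-parallel surface; it is illustrative only and is not the paper's avoidance system.

```python
import numpy as np

def flow_divergence(u, v, spacing=1.0):
    """Divergence of a dense optical-flow field: du/dx + dv/dy."""
    du_dx = np.gradient(u, spacing, axis=1)
    dv_dy = np.gradient(v, spacing, axis=0)
    return du_dx + dv_dy

def time_to_contact(u, v, spacing=1.0, eps=1e-6):
    """For pure forward translation toward a fronto-parallel surface the
    flow expands radially and time-to-contact ~= 2 / divergence (frames)."""
    div = flow_divergence(u, v, spacing)
    return 2.0 / np.clip(div, eps, None)   # clip to avoid dividing by ~0
```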

Relevance: 20.00%

Abstract:

BACKGROUND: Support and education for parents faced with managing a child with atopic dermatitis are crucial to the success of current treatments. Interventions aiming to improve parent management of this condition are promising. Unfortunately, evaluation is hampered by a lack of precise research tools to measure change. OBJECTIVES: To develop a suite of valid and reliable research instruments to appraise parents' self-efficacy for performing atopic dermatitis management tasks; outcome expectations of performing management tasks; and self-reported task performance in a community sample of parents of children with atopic dermatitis. METHODS: The Parents' Eczema Management Scale (PEMS) and the Parents' Outcome Expectations of Eczema Management Scale (POEEMS) were developed from an existing self-efficacy scale, the Parental Self-Efficacy with Eczema Care Index (PASECI). Each scale was presented in a single self-administered questionnaire to measure self-efficacy, outcome expectations, and self-reported task performance related to managing child atopic dermatitis. Each was tested with a community sample of parents of children with atopic dermatitis, and psychometric evaluation of the scales' reliability and validity was conducted. SETTING AND PARTICIPANTS: A community-based convenience sample of 120 parents of children with atopic dermatitis completed the self-administered questionnaire. Participants were recruited through schools across Australia. RESULTS: Satisfactory internal consistency and test-retest reliability were demonstrated for all three scales. Construct validity was satisfactory, with positive relationships between self-efficacy for managing atopic dermatitis and general perceived self-efficacy; self-efficacy for managing atopic dermatitis and self-reported task performance; and self-efficacy for managing atopic dermatitis and outcome expectations. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks and to managing the child's symptoms and behaviour. Factor analysis applied to POEEMS resulted in a three-factor structure, with factors relating to independent management of atopic dermatitis by the parent, involvement of healthcare professionals in management, and involvement of the child in the management of atopic dermatitis. Parents' self-efficacy and outcome expectations had a significant influence on self-reported task performance. CONCLUSIONS: Findings suggest that PEMS and POEEMS are valid and reliable instruments worthy of further psychometric evaluation. Likewise, the validity and reliability of PASECI were confirmed.
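For readers unfamiliar with the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha for a multi-item scale from a respondents-by-items score matrix. It is a generic illustration assuming NumPy; it is not the analysis code used in the study.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Internal-consistency estimate for a multi-item scale.

    item_scores: 2-D array, rows = respondents, columns = scale items.
    """
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                              # number of items
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)
```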

Relevance: 20.00%

Abstract:

Experts in injection molding often refer to previous solutions to find a mold design similar to the current mold and use previous successful molding process parameters, with intuitive adjustment and modification, as a start for the new molding application. This approach saves a substantial amount of time and cost in the experiment-based corrective actions required to reach optimum molding conditions. A Case-Based Reasoning (CBR) system can perform the same task by retrieving a similar case from the case library, applying it to the new case, and using modification rules to adapt a solution to the new case. Therefore, a CBR system can simulate human expertise in injection molding process design. This research is aimed at developing an interactive Hybrid Expert System to reduce the expert dependency needed on the production floor. The Hybrid Expert System (HES) comprises CBR, flow analysis, post-processor and trouble-shooting systems. The HES can provide the first set of operating parameters in order to achieve moldability and produce moldings free of stress cracks and warpage. In this work the C++ programming language is used to implement the expert system. The Case-Based Reasoning sub-system is constructed to derive the optimum magnitude of process parameters in the cavity. Toward this end, the Flow Analysis sub-system is employed to calculate the pressure drop and temperature difference in the feed system to determine the required magnitude of parameters at the nozzle. The Post-Processor is implemented to convert the molding parameters to machine setting parameters. The parameters designed by the HES are implemented using the injection molding machine. In the presence of any molding defect, a trouble-shooting sub-system can determine which combination of process parameters must be changed during the process to deal with possible variations. Constraints on the application of this HES are as follows: flow length (L): 40 mm < L < 100 mm; flow thickness (Th): 1 mm < Th < 4 mm; flow type: unidirectional flow; material types: High Impact Polystyrene (HIPS) and Acrylic. In order to test the HES, experiments were conducted and satisfactory results were obtained.
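The retrieval step of the CBR sub-system described above can be sketched as a weighted nearest-neighbour search over stored molding cases. The example below is a minimal sketch; the feature names, weights and case structure are hypothetical and are not taken from the thesis.

```python
import math

def retrieve_case(query, case_library, weights):
    """Return the stored case most similar to the new molding problem.

    query: dict of normalised feature values for the new case
           (e.g. {"flow_length": 0.6, "thickness": 0.3, "material": 0.0}).
    case_library: list of dicts, each with "features" and "parameters".
    weights: relative importance of each feature in the similarity measure.
    """
    def similarity(case):
        d = sum(weights[f] * (query[f] - case["features"][f]) ** 2
                for f in weights)
        return 1.0 / (1.0 + math.sqrt(d))   # weighted inverse distance

    best = max(case_library, key=similarity)
    # The retrieved case's process parameters would then be adapted by the
    # modification rules before being passed to the flow-analysis sub-system.
    return best
```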

Relevance: 20.00%

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product-code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20 bits, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
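The basic encode/decode step of a VQ spectrum coder, and the subsequent lossless coding of the index stream, can be sketched as follows. This is a generic full-search illustration assuming NumPy; the codebook structure of the thesis' PCVQ system is more elaborate, and all names here are illustrative.

```python
import numpy as np

def vq_encode(frames, codebook):
    """Full-search VQ: map each spectral-parameter vector to the index of
    its nearest codebook entry under squared Euclidean distortion.

    frames: (n_frames, dim) array; codebook: (n_codewords, dim) array.
    """
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct quantised vectors from their codebook indices."""
    return codebook[indices]

def index_entropy_bits(indices, n_codewords):
    """Empirical first-order entropy of the index stream in bits per index;
    a lossless coder over the indices can approach this rate."""
    p = np.bincount(indices, minlength=n_codewords) / len(indices)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```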

Relevance: 20.00%

Abstract:

Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides great incentives for pursuing better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network on the basis of the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections are synonymous with jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve Parallel-Machine Job-Shop Scheduling (PMJSS) problems without the blocking conditions. Due to the lack of buffer space, real-life train scheduling should consider blocking or hold-while-wait constraints, which means that a track section cannot release, and must hold, a train until the next section on the routing becomes available. As a consequence, the problem has been considered as BPMJSS with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies on non-classical scheduling problems regarding the various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer) have been carried out. In this procedure, an alternative graph, as an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To indicate the pre-eminence of the proposed algorithm, we compare it with two known algorithms (i.e. Recursive Procedure and Directed Graph) in the literature. Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical FSS (FSS) with infinite buffer, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm named the LK algorithm to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. By adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints.
Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used for illustrating and comparing the PMJSS and BPMJSS models. Some real-life applications, including considering the train length, upgrading the track sections, accelerating a tardy train and changing the bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive constraints than blocking constraints, arise when considering the prioritised trains, which should traverse continuously without any interruption or unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the routing becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under different conditions in train scheduling environments. To construct the feasible train schedule, the proposed SE algorithm consists of individual modules including the feasibility-satisfaction procedure, time-determination procedure, tune-up procedure and conflict-resolve procedure algorithms. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (i.e. the SE algorithm) and the local-search heuristic (i.e. the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints. The computational results validate that the proposed methodology is very promising, as it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
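The blocking (hold-while-wait) condition that distinguishes BPMJSS from classical job-shop models can be illustrated with a very reduced example: trains that traverse the same ordered sequence of single-track sections in a fixed dispatch order, where a train may not enter a section until the train in front has moved into the following section. The sketch below is only a toy timing calculation under those assumptions, not the FSP or SE algorithms of the thesis.

```python
def blocking_entry_times(trains, run_time):
    """Entry times under blocking for trains traversing sections 0..m-1 in a
    fixed dispatch order on a line with no intermediate buffers.

    trains:   list of (train_id, earliest_departure) in dispatch order.
    run_time: run_time[s] = traversal time of section s.
    Returns {train_id: [entry into section 0, ..., exit of last section]}.
    """
    m = len(run_time)
    entry, prev = {}, None
    for train_id, departure in trains:
        t = [0.0] * (m + 1)
        t[0] = departure
        for s in range(m):
            if prev is not None:
                # blocking: section s is released only once the train in
                # front has entered section s + 1
                t[s] = max(t[s], prev[s + 1])
            t[s + 1] = t[s] + run_time[s]
        entry[train_id] = t
        prev = t
    return entry

# Example: the second train is held at departure until the first train
# clears section 0, even though it is ready to depart earlier.
print(blocking_entry_times([("coal_1", 0.0), ("coal_2", 2.0)], [5.0, 3.0, 4.0]))
```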

Relevance: 20.00%

Abstract:

Vigilance declines when people are exposed to highly predictable and uneventful tasks. Monotonous tasks provide little cognitive and motor stimulation and contribute to human errors. This paper aims to model and detect vigilance decline in real time through participants' reaction times during a monotonous task. A lab-based experiment adapting the Sustained Attention to Response Task (SART) is conducted to quantify the effect of monotony on overall performance. Relevant parameters are then used to build a model that detects hypovigilance throughout the experiment. The accuracy of different mathematical models in detecting lapses in vigilance in real time, minute by minute, during the task is compared. We show that monotonous tasks can lead to an average decline in performance of 45%. Furthermore, vigilance modelling enables vigilance decline to be detected from reaction times with 72% accuracy and a 29% false alarm rate. Bayesian models are identified as better at detecting lapses in vigilance than Neural Networks and Generalised Linear Mixed Models. This modelling could be used as a framework to detect vigilance decline in any human performing monotonous tasks.
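The kind of minute-by-minute detection described above can be sketched as a simple two-state probabilistic classifier over windows of reaction times. In the illustration below the alert and hypovigilant reaction-time distributions are assumed Gaussian with made-up parameters; it is not the Bayesian model fitted in the study.

```python
import numpy as np
from scipy.stats import norm

def detect_lapses(reaction_times, window=60.0, threshold=0.5,
                  rt_alert=(0.35, 0.05), rt_lapse=(0.55, 0.10)):
    """Minute-by-minute hypovigilance detection from reaction times.

    reaction_times: list of (timestamp_s, rt_s) pairs from a SART-like task.
    rt_alert, rt_lapse: assumed (mean, std) of reaction time in each state.
    Returns a list of (window_start, p_lapse, flagged) tuples.
    """
    end = max(t for t, _ in reaction_times)
    results = []
    for start in np.arange(0.0, end, window):
        rts = [rt for t, rt in reaction_times if start <= t < start + window]
        if not rts:
            continue
        # accumulate log-likelihoods over the window (equal priors assumed)
        ll_alert = norm.logpdf(rts, *rt_alert).sum()
        ll_lapse = norm.logpdf(rts, *rt_lapse).sum()
        p_lapse = 1.0 / (1.0 + np.exp(ll_alert - ll_lapse))
        results.append((float(start), float(p_lapse), p_lapse > threshold))
    return results
```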

Relevance: 20.00%

Abstract:

A nutrient amendment experiment was conducted for two growing seasons in two alpine tundra communities to test the hypotheses that: (1) primary production is limited by nutrient availability, and (2) physiological and developmental constraints act to limit the responses of plants from a nutrient-poor community more than plants from a more nutrient-rich community to increases in nutrient availability. Experimental treatments consisted of N, P, and N+P amendments applied to plots in two physiognomically similar communities, dry and wet meadows. Extractable N and P from soils in nonfertilized control plots indicated that the wet meadow had higher N and P availability. Photosynthetic, nutrient uptake, and growth responses of the dominants in the two communities showed little difference in the relative capacity of these plants to respond to the nutrient additions. Aboveground production responses of the communities to the treatments indicated N availability was limiting to production in the dry meadow community while N and P availability colimited production in the wet meadow community. There was a greater production response to the N and N+P amendments in the dry meadow relative to the wet meadow, despite equivalent functional responses of the dominant species of both communities. The greater production response in the dry meadow was in part related to changes in community structure, with an increase in the proportion of graminoid and forb biomass, and a decrease in the proportion of community biomass made up by the dominant sedge Kobresia myosuroides. Species richness increased significantly in response to the N+P treatment in the dry meadow. Graminoid biomass increased significantly in the wet meadow N and N+P plots, while forb biomass decreased significantly, suggesting a competitive interaction for light. Thus, the difference in community response to nutrient amendments was not the result of functional changes at the leaf level of the dominant species, but rather was related to changes in community structure in the dry meadow, and to a shift from a nutrient to a light limitation of production in the wet meadow.

Relevance: 20.00%

Abstract:

In open railway access markets, a train service provider (TSP) negotiates with an infrastructure provider (IP) for track access rights. This negotiation has been modeled by a multi-agent system (MAS) in which the IP and TSP are represented by separate software agents. One task of the IP agent is to generate feasible (and preferably optimal) track access rights, subject to the constraints submitted by the TSP agent. This paper formulates an IP-TSP transaction and proposes a branch-and-bound algorithm for the IP agent to identify the optimal track access rights. Empirical simulation results show that the model is able to emulate rational agent behaviors. The simulation results also show good consistency between timetables attained from the proposed methods and those derived by the scheduling principles adopted in practice.
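A branch-and-bound search of the kind proposed for the IP agent can be sketched, in a heavily abstracted form, as choosing a maximum-value conflict-free subset of the track access rights requested by the TSP agent. The sketch below uses a trivial upper bound and hypothetical data structures; the actual IP-TSP formulation in the paper is richer than this.

```python
def branch_and_bound(requests, conflicts):
    """Grant a maximum-value, conflict-free subset of requested train paths.

    requests:  list of (path_id, value) pairs requested by the TSP agent.
    conflicts: set of frozensets {path_a, path_b} that cannot both be granted.
    Depth-first branch-and-bound with the bound
    'value granted so far + value of everything still undecided'.
    """
    requests = sorted(requests, key=lambda r: -r[1])
    remaining = [0.0] * (len(requests) + 1)
    for i in range(len(requests) - 1, -1, -1):
        remaining[i] = remaining[i + 1] + requests[i][1]

    best = {"value": 0.0, "granted": frozenset()}

    def search(i, value, granted):
        if value + remaining[i] <= best["value"]:
            return                                    # prune: bound cannot win
        if i == len(requests):
            best["value"], best["granted"] = value, granted
            return
        path_id, v = requests[i]
        if all(frozenset((path_id, g)) not in conflicts for g in granted):
            search(i + 1, value + v, granted | {path_id})   # branch: grant
        search(i + 1, value, granted)                        # branch: reject

    search(0, 0.0, frozenset())
    return best["value"], best["granted"]
```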

Relevance: 20.00%

Abstract:

With daily commercial and social activity in cities, regulation of train service in mass rapid transit railways is necessary to maintain service and passenger flow. Dwell-time adjustment at stations is one commonly used approach to regulating train service, but its control space is very limited. Coasting control is a viable means of meeting the specified run time in an inter-station run. The current practice is to start coasting at a fixed distance from the departed station. Hence, it is only optimal with respect to a nominal operational condition of the train schedule, not the current service demand. The advantage of coasting can only be fully secured when coasting points are determined in real time. However, identifying the necessary starting point(s) for coasting under the constraints of current service conditions is no simple task, as train movement is governed by a large number of factors. The feasibility and performance of classical and heuristic searching measures in locating coasting point(s) are studied with the aid of a single-train simulator, according to specified inter-station run times.
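For a single coasting point and a simulator that returns the inter-station run time as a function of where coasting starts, the classical search reduces to a one-dimensional root-finding problem. The sketch below uses simple bisection and assumes the run time is monotone in the coasting position (coasting later keeps the train powered longer and shortens the run); the simulator callable and all names are hypothetical.

```python
def find_coasting_point(run_time_of, target_run_time, lo, hi, tol=0.5):
    """Locate a single coasting point meeting a target inter-station run time.

    run_time_of(x): simulated run time (s) when coasting starts x metres
                    from the departed station; assumed decreasing in x.
    lo, hi:         search interval for the coasting position (m).
    tol:            acceptable run-time deviation (s).
    """
    while hi - lo > 1.0:                     # stop at roughly 1 m resolution
        mid = 0.5 * (lo + hi)
        t = run_time_of(mid)
        if abs(t - target_run_time) <= tol:
            return mid
        if t > target_run_time:
            lo = mid                         # run too slow: start coasting later
        else:
            hi = mid                         # run too fast: start coasting earlier
    return 0.5 * (lo + hi)
```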

Relevance: 20.00%

Abstract:

Background / context: The ALTC WIL Scoping Study identified a need to develop innovative assessment methods for work integrated learning (WIL) that encourage reflection and integration of theory and practice within the constraints that result from the level of engagement of workplace supervisors and the ability of academic supervisors to become involved in the workplace. Aims: The aim of this paper is to examine how poster presentations can be used to authentically assess student learning during WIL. Method / Approach: The paper uses a case study approach to evaluate the use of poster presentations for assessment in two internship units at the Queensland University of Technology. The first is a unit in the Faculty of Business where students majoring in advertising, marketing and public relations are placed in a variety of organisations. The second is a law unit where students complete placements in government legal offices. Results / Discussion: While poster presentations are commonly used for assessment in the sciences, they are an innovative approach to assessment in the humanities. This paper argues that posters are one way that universities can overcome the substantial challenges of assessing work integrated learning. The two units in the case study adopt different approaches to the poster assessment: the Business unit is non-graded, and its poster assessment task requires students to reflect on their learning during the internship; the Law unit is graded and requires students to present on a research topic related to their internship. In both units the posters were presented during a poster showcase attended by students, workplace supervisors and members of faculty. The paper evaluates the benefits of poster presentations for students, workplace supervisors and faculty, and proposes some criteria for poster assessment in WIL. Conclusions / Implications: The paper concludes that posters can effectively and authentically assess various learning outcomes in WIL across different disciplines, while at the same time offering a means to engage workplace supervisors with academic staff and with the other students and supervisors participating in the unit. Posters can demonstrate reflection in learning and are an excellent vehicle for experiential learning and authentic assessment. Keywords: work integrated learning, assessment, poster presentations, industry engagement.

Relevance: 20.00%

Abstract:

The track allocation problem (TAP) at a multi-track, multi-platform mainline railway station is defined by the station track layout and the service timetable, which together imply combinations of spatial and temporal conflicts. Feasible solutions are available from either traditional planning or advanced intelligent searching methods, and their evaluation with respect to operational requirements is essential for operators. To facilitate thorough analysis, a timed Coloured Petri Nets (CPN) model is presented here to encapsulate the inter-relationships of the spatial and temporal constraints in the TAP.
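The spatial-temporal conflicts that the CPN model encapsulates reduce, in their simplest form, to pairs of services assigned to the same platform track with overlapping occupation intervals. The sketch below checks a candidate allocation for such conflicts; it is a plain procedural illustration with hypothetical data, not a Coloured Petri Net.

```python
from itertools import combinations

def track_conflicts(allocation, headway=120):
    """Spatial-temporal conflict check for a candidate track allocation.

    allocation: list of (service_id, track, arrival_s, departure_s).
    headway:    minimum reoccupation gap on the same track, in seconds.
    Two services conflict if they share a track and their occupation
    intervals, padded by the headway, overlap.
    """
    clashes = []
    for a, b in combinations(allocation, 2):
        if a[1] != b[1]:
            continue                     # different tracks: no spatial conflict
        if a[2] < b[3] + headway and b[2] < a[3] + headway:
            clashes.append((a[0], b[0]))
    return clashes

# Example: two services on platform track 2 whose occupations overlap.
print(track_conflicts([("IC101", 2, 600, 780), ("RE205", 2, 700, 900),
                       ("IC102", 3, 650, 820)]))
```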