Abstract:
This research project examines the application of the Suzuki Actor Training Method (the Suzuki Method) within the work of Tadashi Suzuki's company in Japan, the Shizuoka Performing Arts Complex (SPAC), within the work of Brisbane theatre company Frank:Austral Asian Performance Ensemble (Frank:AAPE), and as related to the development of the theatre performance Surfacing. These three theatrical contexts have been studied from the viewpoint of a "participant-observer". The researcher has trained in the Suzuki Method with Frank:AAPE and SPAC, performed with Frank:AAPE, and was the solo performer and collaborative developer of the performance Surfacing (directed by Leah Mercer). Observations of these three groups are based on a phenomenological definition of the "integrated actor": an actor who is able to achieve a totality or unity between the body and the mind, and between the body and the voice, through a powerful sense of intention. The term "integrated actor" has been informed by the philosophy of Merleau-Ponty and his concept of the "lived body". Three main hypotheses are presented in this study: that the Suzuki Method focuses on actors learning through their body; that the Suzuki Method presents a holistic approach to the body and the voice; and that the Suzuki Method develops actors with a strong sense of intention. These three aspects of the Suzuki Method are explored in relation to the stylistic features of the work of SPAC, Frank:AAPE and the performance Surfacing.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square error criterion as well as human visual sensitivity. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach.
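The shape parameter of a generalized Gaussian model can also be estimated with the classic moment-matching approach, a simpler alternative to the least squares formulation described above (this sketch is not the thesis estimator). Assuming zero-mean subband coefficients, the ratio of the squared absolute moment to the second moment depends only on the shape parameter, and can be inverted by bisection:

```python
import numpy as np
from math import gamma

def ggd_ratio(beta):
    # r(beta) = (E|X|)^2 / E[X^2] for a zero-mean generalized Gaussian;
    # r is monotonically increasing in beta
    return gamma(2.0 / beta) ** 2 / (gamma(1.0 / beta) * gamma(3.0 / beta))

def estimate_shape(samples, lo=0.1, hi=10.0, iters=60):
    """Moment-matching estimate of the GGD shape parameter via bisection."""
    x = np.asarray(samples, dtype=float)
    target = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:   # ratio too small -> beta too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A Gaussian source yields a shape estimate near 2 and a Laplacian source near 1; the more sharply peaked a subband histogram is, the smaller the estimated shape.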
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
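The roles of the two lattice parameters, the scaling factor and the truncation level, can be illustrated with the simplest case, a cubic (Z^n) lattice. The thesis algorithm handles arbitrary lattices and adapts both parameters to the source statistics, so this fixed cubic version is only a minimal sketch:

```python
import numpy as np

def lvq_cubic(vectors, scale, trunc):
    """Quantize vectors onto a scaled, truncated cubic (Z^n) lattice.

    scale : lattice scaling factor (cell size)
    trunc : truncation level -- lattice points outside the L-infinity
            ball of radius `trunc` are clipped back onto its surface
    """
    v = np.asarray(vectors, dtype=float)
    pts = np.rint(v / scale)            # nearest lattice point on Z^n
    pts = np.clip(pts, -trunc, trunc)   # enforce the truncation level
    return pts * scale                  # reconstructed values
```

For example, with `scale=0.5` and `trunc=3`, the vector `[0.34, -1.9]` maps to lattice point `[1, -4]`, is clipped to `[1, -3]`, and is reconstructed as `[0.5, -1.5]`.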
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
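The local ridge dominant direction is commonly estimated with the classic gradient-based structure tensor average; the sketch below shows that standard estimator, which is not necessarily the exact formulation used in the thesis:

```python
import numpy as np

def dominant_direction(block):
    """Estimate the dominant ridge orientation of an image block (radians).

    Uses the standard structure-tensor average of image gradients; the
    ridge direction is perpendicular to the dominant gradient direction.
    """
    gy, gx = np.gradient(np.asarray(block, dtype=float))
    gxx = np.sum(gx * gx)
    gyy = np.sum(gy * gy)
    gxy = np.sum(gx * gy)
    # orientation of the dominant gradient axis (doubled-angle form
    # avoids the 180-degree ambiguity of raw gradient directions)
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
    # ridges run perpendicular to the gradient
    return theta + np.pi / 2.0
```

On a synthetic block of vertical stripes (intensity varying only along x), the estimator returns an orientation of pi/2, i.e. vertical ridges.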
Abstract:
LiteSteel Beam (LSB) is a new cold-formed steel beam produced by OneSteel Australian Tube Mills. The new beam is effectively a channel section with two rectangular hollow flanges and a slender web, and is manufactured using a combined cold-forming and electric resistance welding process. OneSteel Australian Tube Mills is promoting the use of LSBs as flexural members in a range of applications, such as floor bearers. When LSBs are used as back to back built-up sections, their moment capacity is likely to improve, extending their applications further. However, the structural behaviour of built-up beams is not well understood. Many steel design codes include guidelines for connecting two channels to form a built-up I-section, including the required longitudinal spacing of connections, but these rules were found to be inadequate in some applications. Currently the safe spans of built-up beams are determined based on twice the moment capacity of a single section. Research has shown that these guidelines are conservative. Therefore large scale lateral buckling tests and advanced numerical analyses were undertaken to investigate the flexural behaviour of back to back LSBs connected by fasteners (bolts) at various longitudinal spacings under uniform moment conditions. In this research an experimental investigation was first undertaken to study the flexural behaviour of back to back LSBs, including their buckling characteristics. This experimental study included tensile coupon tests, initial geometric imperfection measurements and lateral buckling tests. The initial geometric imperfection measurements taken on several back to back LSB specimens showed that the back to back bolting process is not likely to alter the imperfections, and the measured imperfections are well below the fabrication tolerance limits.
Twelve large scale lateral buckling tests were conducted to investigate the behaviour of back to back built-up LSBs with various longitudinal fastener spacings under uniform moment conditions. Tests also included two single LSB specimens. Test results showed that the back to back LSBs gave higher moment capacities in comparison with single LSBs, and the fastener spacing influenced the ultimate moment capacities. As the fastener spacing was reduced the ultimate moment capacities of back to back LSBs increased. Finite element models of back to back LSBs with varying fastener spacings were then developed to conduct a detailed parametric study on the flexural behaviour of back to back built-up LSBs. Two finite element models were developed, namely experimental and ideal finite element models. The models included the complex contact behaviour between LSB web elements and intermittently fastened bolted connections along the web elements. They were validated by comparing their results with experimental results and numerical results obtained from an established buckling analysis program called THIN-WALL. These comparisons showed that the developed models could accurately predict both the elastic lateral distortional buckling moments and the non-linear ultimate moment capacities of back to back LSBs. Therefore the ideal finite element models incorporating ideal simply supported boundary conditions and uniform moment conditions were used in a detailed parametric study on the flexural behaviour of back to back LSB members. In the detailed parametric study, both elastic buckling and nonlinear analyses of back to back LSBs were conducted for 13 LSB sections with varying spans and fastener spacings. Finite element analysis results confirmed that the current design rules in AS/NZS 4600 (SA, 2005) are very conservative while the new design rules developed by Anapayan and Mahendran (2009a) for single LSB members were also found to be conservative. 
Thus new member capacity design rules were developed for back to back LSB members as a function of non-dimensional member slenderness. New empirical equations were also developed to aid in the calculation of the elastic lateral distortional buckling moments of intermittently fastened back to back LSBs. Design guidelines were developed for the maximum fastener spacing of back to back LSBs in order to optimise the use of fasteners. A closer fastener spacing of span/6 was recommended for intermediate spans and some long spans where the influence of fastener spacing was found to be high. In the last phase of this research, a detailed investigation was conducted into the potential use of different types of connections and stiffeners in improving the flexural strength of back to back LSB members. It was found that using transverse web stiffeners was the most cost-effective and simple strengthening method. It is recommended that web stiffeners be used at the supports and at third points within the span, with a thickness in the range of 3 to 5 mm depending on the size of the LSB section. The use of web stiffeners eliminated most of the lateral distortional buckling effects and hence improved the ultimate moment capacities. A suitable design equation was developed to calculate the elastic lateral buckling moments of back to back LSBs with the above recommended web stiffener configuration, while the same design rules developed for unstiffened back to back LSBs were recommended for calculating the ultimate moment capacities.
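Member capacity rules expressed as a function of non-dimensional member slenderness generally follow the pattern sketched below: the slenderness compares the first-yield moment with the elastic buckling moment, and the capacity transitions from yield-controlled for stocky members to buckling-controlled for slender ones. The limits `c1` and `c2` and the linear transition are placeholders for illustration only, not the coefficients derived in the thesis or given in AS/NZS 4600:

```python
import math

def member_slenderness(m_y, m_od):
    """Non-dimensional member slenderness lambda_d = sqrt(M_y / M_od),
    with M_y the first-yield moment and M_od the elastic lateral
    distortional buckling moment."""
    return math.sqrt(m_y / m_od)

def member_capacity(m_y, m_od, c1=0.6, c2=1.5):
    """Illustrative capacity curve of the usual piecewise form.

    c1, c2 are PLACEHOLDER slenderness limits, not design values.
    """
    lam = member_slenderness(m_y, m_od)
    if lam <= c1:
        return m_y           # stocky: section yields before buckling
    if lam >= c2:
        return m_od          # slender: governed by elastic buckling
    # linear interpolation across the inelastic transition range
    frac = (c2 - lam) / (c2 - c1)
    return m_od + frac * (m_y - m_od)
```

The actual design rules replace the linear transition with the empirical expressions developed in the thesis for intermittently fastened back to back LSBs.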
Abstract:
Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides great incentives for pursuing better optimisation and control strategies for the operation of the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network through the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections are synonymous with jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve Parallel-Machine Job-Shop Scheduling (PMJSS) problems without the blocking conditions. Due to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints, which means that a track section cannot release a train, and must hold it, until the next section on the routing becomes available. As a consequence, the problem has been considered as BPMJSS with the blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies on the non-classical scheduling problems regarding the various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer) have been conducted.
In this procedure, an alternative graph, an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving the non-classical scheduling problems. To demonstrate the superiority of the proposed algorithm, we compare it with two known algorithms in the literature (i.e. the Recursive Procedure and the Directed Graph algorithms). Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: the classical flow-shop scheduling (FSS) with infinite buffer, the blocking FSS (BFSS) with no buffer, the no-wait FSS (NWFSS) and the limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm named the LK algorithm to construct a feasible CBFSS schedule. Detailed numerical illustrations for the various cases are presented and analysed. By adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints. Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used for illustrating and comparing the PMJSS and BPMJSS models.
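The blocking (hold-while-wait) rule can be illustrated with a minimal greedy dispatcher: trains are inserted in a fixed priority order, and each train's exit from a section is tied to its entry into the next one, so a blocked train keeps occupying its current section. This is only a sketch of the blocking semantics, not the FSP or topological-sequence algorithms themselves:

```python
def blocking_schedule(routes, durs, order):
    """Greedy timetable under blocking (hold-while-wait) constraints.

    routes[t] : ordered list of section ids traversed by train t
    durs[t]   : traversal time of train t on each of those sections
    order     : global dispatch priority (earlier trains scheduled first)

    Returns the entry time of each train into each section on its route.
    """
    avail = {}                      # time each section next becomes free
    entries = {}
    for train in order:
        secs, times = routes[train], durs[train]
        e = [0.0] * len(secs)
        e[0] = max(0.0, avail.get(secs[0], 0.0))
        for k in range(1, len(secs)):
            # blocking: the train leaves section k-1 only at the moment
            # it can actually enter section k
            e[k] = max(e[k - 1] + times[k - 1], avail.get(secs[k], 0.0))
        for k in range(len(secs) - 1):
            avail[secs[k]] = e[k + 1]       # held until the move happens
        avail[secs[-1]] = e[-1] + times[-1]
        entries[train] = e
    return entries
```

With two identical trains over sections A then B (3 and 2 time units), the second train cannot enter A until the first vacates it at time 3, and then waits in A until B is free, entering B at time 6.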
Some real-life applications, including considering the train length, upgrading the track sections, accelerating a tardy train and changing the bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive than blocking constraints, arise when considering the prioritised trains, which should traverse continuously without any interruption or unplanned pauses because of the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the routing becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under different conditions in train scheduling environments. To construct a feasible train schedule, the proposed SE algorithm consists of several individual modules, including the feasibility-satisfaction, time-determination, tune-up and conflict-resolution procedures. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (i.e. the SE algorithm) and a local-search heuristic (i.e. the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints.
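The metaheuristic layer of such a three-stage scheme can be sketched generically. Below is a minimal tabu search over a permutation with pairwise-swap moves, a tabu tenure, and the usual aspiration criterion; in the thesis the equivalent layer wraps the SE-BIH heuristics and the cost function evaluates a full train timetable, so the `cost` callable here is a placeholder:

```python
import itertools

def tabu_search(seq, cost, iters=200, tenure=5):
    """Minimal tabu search over a permutation using pairwise-swap moves."""
    best = list(seq)
    cur = list(seq)
    tabu = {}                                   # move -> iteration it expires
    for it in range(iters):
        moves = []
        for i, j in itertools.combinations(range(len(cur)), 2):
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]
            c = cost(cand)
            # aspiration: a tabu move is allowed if it beats the best so far
            if tabu.get((i, j), -1) < it or c < cost(best):
                moves.append((c, (i, j), cand))
        if not moves:
            break
        c, move, cand = min(moves, key=lambda m: m[0])
        cur = cand
        tabu[move] = it + tenure                # forbid reversing this swap
        if c < cost(best):
            best = cand
    return best
```

For example, with the placeholder cost `sum(abs(v - i))` over positions, the search sorts the permutation `[3, 1, 2, 0]` into `[0, 1, 2, 3]`.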
The computational results show that the proposed methodology is very promising, as it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.