973 results for found


Relevance: 10.00%

Publisher:

Abstract:

An Asset Management (AM) life-cycle constitutes a set of processes that align with the development, operation and maintenance of assets, in order to meet the desired requirements and objectives of the stakeholders of the business. The scope of AM is often broad within an organisation due to the interactions between its internal elements, such as human resources, finance, technology, engineering operation, information technology and management, as well as external elements such as governance and environment. Due to the complexity of AM processes, it has been proposed that process modelling initiatives should be adopted in order to optimise asset management activities. Although organisations adopt AM principles and carry out AM initiatives, most do not document or model their AM processes, let alone enact their processes (semi-)automatically using a computer-supported system. There is currently a lack of knowledge describing how to model AM processes in a methodical and suitable manner so that the processes are streamlined, optimised and ready for deployment in a computerised way. This research aims to overcome this deficiency by developing an approach that will aid organisations in constructing AM process models quickly and systematically whilst using the most appropriate techniques, such as workflow technology. Currently, there is a wealth of information within the individual domains of AM and workflow. Both fields are gaining significant popularity in many industries, thus fuelling the need for research exploring the possible benefits of their cross-disciplinary applications. This research therefore investigates these two domains to exploit the application of workflow technology to the modelling and execution of AM processes. Specifically, it investigates appropriate methodologies for applying workflow techniques to AM frameworks.
One of the benefits of applying workflow models to AM processes is to adapt to, and enable, both ad-hoc and evolutionary changes over time. In addition, this can automate an AM process as well as support the coordination and collaboration of the people involved in carrying out the process. A workflow management system (WFMS) can be used to support the design and enactment (i.e. execution) of processes and cope with changes that occur to a process during enactment. So far, little literature documents a systematic approach to modelling the characteristics of AM processes. In order to obtain a workflow model for AM processes, commonalities and differences between different AM processes need to be identified. This is the fundamental step in developing a sound workflow model for AM processes. Therefore, the first stage of this research focuses on identifying the characteristics of AM processes, especially AM decision-making processes. The second stage reviews a number of contemporary workflow techniques and chooses a suitable technique for application to AM decision-making processes. The third stage develops an intermediate, ameliorated AM decision process definition that improves the current process description and is ready for modelling using the workflow language selected in the previous stage. All of these lead to the fourth stage, in which a workflow model for an AM decision-making process is developed. The process model is then deployed (semi-)automatically in a state-of-the-art WFMS, demonstrating the benefits of applying workflow technology to the domain of AM. Given that the information in the AM decision-making process is captured at an abstract level within the scope of this work, the deployed process model can be used as an executable guideline for carrying out an AM decision process in practice.
Moreover, it can be used as a vanilla system that, once incorporated with rich information from a specific AM decision-making process (e.g. in the case of building construction or power plant maintenance), is able to support the automation of such a process in a more elaborate way.
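The modelling-and-enactment idea described above can be sketched minimally as a workflow definition: a set of tasks with dependencies, enacted in a valid order by a scheduler. The task names below are hypothetical illustrations, not the process model developed in the thesis.

```python
# Minimal sketch: an AM decision-making process expressed as a workflow
# definition and enacted in dependency order. Task names are invented
# for illustration only.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must complete before it starts.
am_decision_workflow = {
    "identify_asset_need": set(),
    "gather_condition_data": {"identify_asset_need"},
    "evaluate_options": {"gather_condition_data"},
    "approve_decision": {"evaluate_options"},
    "schedule_action": {"approve_decision"},
}

def enact(workflow):
    """Execute tasks in an order that respects their dependencies."""
    order = list(TopologicalSorter(workflow).static_order())
    for task in order:
        print(f"executing: {task}")
    return order

order = enact(am_decision_workflow)
```

A full WFMS adds far more (roles, data flow, change handling), but the core of enactment is this ordering of tasks against a declared process model.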

Relevance: 10.00%

Publisher:

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed, in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorised treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m·s⁻¹). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m; range 0.69-2.10 m).
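The "change in GPS position over time" estimate referred to above amounts to dividing the great-circle distance between successive fixes by the time between them; a minimal sketch, with invented coordinates and timestamps, is:

```python
# Illustrative sketch of the position-over-time speed estimate: distance
# between successive GPS fixes divided by the elapsed time. Coordinates
# and timestamps below are made-up examples, not study data.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 fixes."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_from_fixes(fix_a, fix_b):
    """Speed in m/s from two (lat, lon, t_seconds) fixes."""
    (lat1, lon1, t1), (lat2, lon2, t2) = fix_a, fix_b
    return haversine_m(lat1, lon1, lat2, lon2) / (t2 - t1)

# Two fixes one second apart, roughly 5.6 m apart north-south.
v = speed_from_fixes((-27.47700, 153.02800, 0.0), (-27.47695, 153.02800, 1.0))
```

Doppler-shift speed, by contrast, comes directly from the receiver's measured frequency shift of the satellite signals, which is why it showed the lower error here.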
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long-distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group-level speed was well predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill-to-downhill speeds in the Intervention trial by more than 30%, but this did not achieve a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root mean square error across subsections and gradients.
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall times. This suggests that, for some runners, the strategy of varying speed systematically to account for gradients and transitions may benefit race performance on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and improved performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
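The "weighted factor" idea above can be sketched as a level-ground baseline speed adjusted by a weighted blend of the current and prior gradients. The weights and sensitivity below are illustrative placeholders, not the fitted values from the study.

```python
# Hypothetical sketch of a weighted-gradient speed predictor. The
# parameters w (weight on the current vs prior gradient) and k
# (speed sensitivity per % gradient) are invented for illustration.
def predicted_speed(level_speed, current_grad, prior_grad, w=0.7, k=0.05):
    """Predicted speed in m/s; gradients in %, positive uphill."""
    effective_grad = w * current_grad + (1 - w) * prior_grad
    return level_speed * (1 - k * effective_grad)

flat = predicted_speed(4.0, 0.0, 0.0)   # baseline on level ground
up = predicted_speed(4.0, 5.0, 0.0)     # slower when running uphill
down = predicted_speed(4.0, -5.0, 0.0)  # faster when running downhill
```

Including the prior gradient is what lets such a model capture the carry-over effect reported above, where level-section speed remained depressed after an uphill and elevated after a downhill.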

Relevance: 10.00%

Publisher:

Abstract:

In this research I have examined how ePortfolios can be designed for postgraduate music study through a practice-led research enquiry. This process involved designing two Web 2.0 ePortfolio systems for a group of five postgraduate music research students. The design process revolved around the application of an iterative methodology called Software Develop as Research (SoDaR) that seeks to simultaneously develop design and pedagogy. The approach to designing these ePortfolio systems applied four theoretical protocols to examine the use of digitised artefacts in ePortfolio systems, to enable a dynamic and inclusive dialogue around representations of the students' work. The research and design process involved an analysis of existing software and literature, with a focus upon identifying the affordances of available Web 2.0 software and the applications of these ideas within 21st-century life. The five postgraduate music students each presented different needs in relation to the management of digitised artefacts and the communication of their work amongst peers, supervisors and public display. An ePortfolio was developed for each of them that was flexible enough to address their needs within the university setting. However, in the data-gathering phase of this first SoDaR iteration, I identified aspects of the university context that presented a negative case, which impacted upon the design and usage of the ePortfolios and prevented uptake. Whilst the portfolio itself functioned effectively, the university policies and technical requirements prevented serious use. The negative case analysis revealed that the Access and Control and the Implementation, Technical and Policy Constraints protocols were limiting user uptake.
Feedback from the semi-structured interviews carried out as part of this study revealed that, whilst the participants did not use the ePortfolio system I designed, each student was employing Web 2.0 social networking and storage processes in their lives and research. In the subsequent iterations I then designed a more 'ideal' system that could be applied outside of the university context and that draws upon the employment of these resources. In conclusion, I suggest recommendations about ePortfolio design that consider what the applications of the theoretical protocols reveal about creative arts settings. The transferability of these recommendations is of course dependent upon the reapplication of the theoretical protocols in a new context. To address the mobility of ePortfolio design between institutions and wider settings, I have also designed a prototype for a business-card-sized USB portal for the artist's ePortfolio. This research project is not a static one; it stands as an evolving design for a Web 2.0 ePortfolio that seeks to respond to users' needs, institutional and professional contexts, and the development of software that can be incorporated within the design. What it potentially provides to creative artists is an opportunity to have a dialogue about art that includes artefacts of the artist's products and processes in that discussion.

Relevance: 10.00%

Publisher:

Abstract:

Recently it has been shown that the consumption of a diet high in saturated fat is associated with impaired insulin sensitivity and an increased incidence of type 2 diabetes. In contrast, diets that are high in monounsaturated fatty acids (MUFAs) or polyunsaturated fatty acids (PUFAs), especially very-long-chain n-3 fatty acids (FAs), are protective against disease. However, the molecular mechanisms by which saturated FAs induce the insulin resistance and hyperglycaemia associated with metabolic syndrome and type 2 diabetes are not clearly defined. It is possible that saturated FAs act through alternative mechanisms, compared with MUFAs and PUFAs, to regulate hepatic gene expression and metabolism. It is proposed that, like MUFAs and PUFAs, saturated FAs regulate the transcription of target genes. To test this hypothesis, hepatic gene expression analysis was undertaken in a human hepatoma cell line, Huh-7, after exposure to the saturated FA palmitate. These experiments showed that palmitate is an effective regulator of gene expression for a wide variety of genes. A total of 162 genes were differentially expressed in response to palmitate. These changes not only affected the expression of genes related to nutrient transport and metabolism; they also extended to other cellular functions including cytoskeletal architecture, cell growth, protein synthesis and the oxidative stress response. In addition, this thesis has shown that palmitate exposure altered the expression patterns of several genes that have previously been identified in the literature as markers of risk of disease development, including CVD, hypertension, obesity and type 2 diabetes. The altered gene expression patterns associated with an increased risk of disease include apolipoprotein-B100 (apo-B100), apo-CIII, plasminogen activator inhibitor 1, insulin-like growth factor-I and insulin-like growth factor binding protein 3.
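As a toy illustration of the kind of differential-expression screen described above, genes can be flagged by fold change between conditions. The gene names and expression values below are invented, and a real analysis would use replicate-aware statistics rather than a bare threshold.

```python
# Toy illustration of flagging differentially expressed genes by log2
# fold change. Values are invented, not data from the Huh-7 experiments.
import math

control = {"APOB": 100.0, "APOC3": 80.0, "PAI1": 50.0, "ACTB": 200.0}
palmitate = {"APOB": 210.0, "APOC3": 190.0, "PAI1": 45.0, "ACTB": 205.0}

def differentially_expressed(a, b, min_log2_fc=1.0):
    """Return genes whose |log2 fold change| meets the threshold."""
    hits = {}
    for gene in a:
        log2_fc = math.log2(b[gene] / a[gene])
        if abs(log2_fc) >= min_log2_fc:
            hits[gene] = round(log2_fc, 2)
    return hits

hits = differentially_expressed(control, palmitate)
```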
This thesis reports the first observation that palmitate directly signals in cultured human hepatocytes to regulate expression of genes involved in energy metabolism as well as other important genes. Prolonged exposure to long-chain saturated FAs reduces glucose phosphorylation and glycogen synthesis in the liver. Decreased glucose metabolism leads to elevated rates of lipolysis, resulting in increased release of free FAs. Free FAs have a negative effect on insulin action on the liver, which in turn results in increased gluconeogenesis and systemic dyslipidaemia. It has been postulated that disruption of glucose transport and insulin secretion by prolonged excessive FA availability might be a non-genetic factor that has contributed to the staggering rise in prevalence of type 2 diabetes. As glucokinase (GK) is a key regulatory enzyme of hepatic glucose metabolism, changes in its activity may alter flux through the glycolytic and de novo lipogenic pathways and result in hyperglycaemia and ultimately insulin resistance. This thesis investigated the effects of saturated FA on the promoter activity of the glycolytic enzyme, GK, and various transcription factors that may influence the regulation of GK gene expression. These experiments have shown that the saturated FA, palmitate, is capable of decreasing GK promoter activity. In addition, quantitative real-time PCR has shown that palmitate incubation may also regulate GK gene expression through a known FA sensitive transcription factor, sterol regulatory element binding protein-1c (SREBP-1c), which upregulates GK transcription. To parallel the investigations into the mechanisms of FA molecular signalling, further studies of the effect of FAs on metabolic pathway flux were performed. Although certain FAs reduce SREBP-1c transcription in vitro, it is unclear whether this will result in decreased GK activity in vivo where positive effectors of SREBP-1c such as insulin are also present. 
Under these conditions, it is uncertain whether the inhibitory effects of FAs would be overcome by insulin. The effects of a combination of FAs, insulin and glucose on glucose phosphorylation and metabolism in cultured primary rat hepatocytes were therefore examined, at concentrations that mimic those in the portal circulation after a meal. It was found that total GK activity was unaffected by an increased concentration of insulin, but palmitate and eicosapentaenoic acid significantly lowered total GK activity in the presence of insulin. Despite the fact that total GK enzyme activity was reduced in response to FA incubation, GK enzyme translocation from the inactive, nuclear-bound state to the active, cytoplasmic state was unaffected. Interestingly, none of the FAs tested inhibited glucose phosphorylation or the rate of glycolysis when insulin was present. These results suggest that, in the presence of insulin, the levels of active, unbound cytoplasmic GK are sufficient to buffer a slight decrease in GK enzyme activity and the decreased promoter activity caused by FA exposure. Although a high-fat diet has been associated with impaired hepatic glucose metabolism, there is no evidence from this thesis that FAs themselves directly modulate flux through the glycolytic pathway in isolated primary hepatocytes when insulin is also present. Therefore, although FAs affected the expression of a wide range of genes, including GK, this did not affect glycolytic flux in the presence of insulin. However, it may be possible that a saturated FA-induced decrease in GK enzyme activity, when combined with the onset of insulin resistance, may promote the dysregulation of glucose homeostasis and the subsequent development of hyperglycaemia, metabolic syndrome and type 2 diabetes.

Relevance: 10.00%

Publisher:

Abstract:

To date, studies have focused on the acquisition of alphabetic second languages (L2s) by alphabetic first language (L1) users, demonstrating significant transfer effects. The present study examined the process from the reverse perspective, comparing logographic (Mandarin Chinese) and alphabetic (English) L1 users in the acquisition of an artificial logographic script, in order to determine whether similar language-specific advantageous transfer effects occurred. English monolinguals, English-French bilinguals and Chinese-English bilinguals learned a small set of symbols in an artificial logographic script and were subsequently tested on their ability to process this script from three main perspectives: L2 reading, L2 working memory (WM), and inner processing strategies. In terms of L2 reading, a lexical decision task on the artificial symbols revealed markedly faster response times in the Chinese-English bilinguals, indicating a logographic transfer effect suggestive of a visual processing advantage. A syntactic decision task evaluated the degree to which the new language was mastered beyond the single-word level. No L1-specific transfer effects were found for artificial-language strings. In order to investigate visual processing of the artificial logographs further, a series of WM experiments was conducted. Artificial logographs were recalled under concurrent auditory and visuo-spatial suppression conditions to disrupt phonological and visual processing, respectively. No L1-specific transfer effects were found, indicating no visual processing advantage for the Chinese-English bilinguals. However, a bilingual processing advantage was found, indicative of a superior ability to control executive functions. In terms of L1 WM, the Chinese-English bilinguals outperformed the alphabetic L1 users when processing L1 words, indicating a language-experience-specific advantage.
Questionnaire data on the cognitive strategies deployed during the acquisition and processing of the artificial logographic script revealed that the Chinese-English bilinguals rated their use of inner speech lower than the alphabetic L1 users did, suggesting that the alphabetic L1 users were transferring their phonological processing skill set to the acquisition and use of the artificial script. Overall, evidence was found to indicate that language learners transfer specific L1 orthographic processing skills to L2 logographic processing. Additionally, evidence was found indicating that a bilingual history enhances cognitive performance in an L2.
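A lexical decision task of the kind used above yields per-trial response times whose group means are then compared; a toy sketch with invented numbers is:

```python
# Toy comparison of lexical decision response times (ms) between the
# three participant groups. The numbers are invented for illustration;
# the study's actual data and inferential statistics are not reproduced.
from statistics import mean

rt_ms = {
    "chinese_english": [520, 545, 498, 530, 512],
    "english_mono": [610, 640, 590, 625, 602],
    "english_french": [605, 615, 598, 630, 611],
}

group_means = {group: mean(times) for group, times in rt_ms.items()}
fastest = min(group_means, key=group_means.get)
```

In the actual study such group comparisons would be tested inferentially; the sketch only shows the shape of the data and the direction of the reported effect.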

Relevance: 10.00%

Publisher:

Abstract:

This thesis consists of three related studies: an ERP Major Issues Study; an Historical Study of the Queensland Government Financial Management System; and a Meta-Study that integrates these and other related studies conducted under the umbrella of the Cooperative ERP Lifecycle Knowledge Management research program. This research provides a comprehensive view of ERP lifecycle issues encountered in SAP R/3 projects across the Queensland Government. This study follows a preliminary ERP issues study (Chang, 2002) conducted in five Queensland Government agencies. The Major Issues Study aims to: (1) identify and explicate major issues in relation to the ES life-cycle in the public sector; (2) rank the importance of these issues; and (3) highlight areas of consensus and dissent among stakeholder groups. To provide a rich context for this study, this thesis includes an historical recount of the Queensland Government Financial Management System (QGFMS). This recount tells of its inception as a centralised system; the selection of SAP and subsequent decentralisation; and its eventual recentralisation under the Shared Services Initiative and CorpTech. This historical recount gives an insight into the conditions that affected the selection and the ongoing management and support of QGFMS. This research forms part of a program entitled Cooperative ERP Lifecycle Knowledge Management. This thesis provides a concluding report for this research program by summarising related studies conducted in the Queensland Government SAP context: Chan (2003); Vayo et al. (2002); Ng (2003); Timbrell et al. (2001); Timbrell et al. (2002); Chang (2002); Putra (1998); and Niehus et al. (1998). A study of Oracle in the United Arab Emirates by Dhaheri (2002) is also included. The thesis then integrates the findings from these studies in an overarching Meta-Study. The Meta-Study discusses key themes across all of these studies, creating a holistic report for the research program.
Themes discussed in the Meta-Study include the common issues found across the related studies; the knowledge dynamics of the ERP lifecycle; ERP maintenance and support; and the relationship between the key players in the ERP lifecycle.

Relevance: 10.00%

Publisher:

Abstract:

This thesis explores a way to inform the architectural design process for contemporary workplace environments. It reports on both theoretical and practical outcomes through an exclusively Australian case study of a network enterprise comprised of collaborative, yet independent, business entities. The internet revolution, substantial economic and cultural shifts, and an increased emphasis on lifestyle considerations have prompted a radical re-ordering of organisational relationships and of the associated structures, processes and places of doing business. The social milieu of the information age and the knowledge economy is characterised by an almost instantaneous flow of information and capital. This has culminated in a phenomenon Manuel Castells terms the network society, in which physical locations are joined together by continuous communication and virtual connectivity. A new spatial logic, encompassing redefined concepts of space and distance and requiring a comprehensive shift in the approach to designing workplace environments for today's adaptive, collaborative organisations in a dynamic business world, provides the backdrop for this research. Within the duality of space and an augmentation of the traditional notions of place, organisational and institutional structures pose new challenges for the design professions. The literature revealed that there has always been a mono-organisational focus in workplace design strategies. The phenomenon of inter-organisational collaboration enabled the identification of a gap in the knowledge relating to workplace design. This new context generated the formulation of a unique research construct, the NetWorkPlace™©, which captures the complexity of contemporary employment structures embracing both physical and virtual work environments and practices, and provided the basis for investigating the factors that are shaping and defining interactions within and across networked organisational settings.
The methodological orientation and the methods employed follow a qualitative approach and an abductively driven strategy comprising two distinct components: a cross-sectional study of the whole of the network, and a longitudinal study focusing on a single discrete workplace site. The complexity of the context encountered dictated that a multi-dimensional investigative framework be devised. The adoption of a pluralist ontology and the reconfiguration of approaches from traditional paradigms into a collaborative, trans-disciplinary, multi-method epistemology provided an explicit and replicable method of investigation. The identification and introduction of the NetWorkPlace™© phenomenon, by necessity, spans a number of traditional disciplinary boundaries. Results confirm that in this context, architectural research, and by extension architectural practice, must engage with what other disciplines have to offer. The research concludes that no single disciplinary approach to either research or practice in this area of design can suffice. Pierre Bourdieu's philosophy of 'practice' provides a framework within which the governance and technology structures, together with the mechanisms enabling the production of social order in this context, can be understood. This is achieved by applying the concepts of position and positioning to the corporate power dynamics, and integrating the conflict found to exist between enterprise-standard and ferally conceived technology systems. By extending existing theory and conceptions of 'place' and the 'person-environment relationship', relevant understandings of the tensions created between Castells' notions of the space of place and the space of flows are established. The trans-disciplinary approach adopted, underpinned by a robust academic and practical framework, illustrates the potential for expanding the range and richness of understanding applicable to design in this context.
The outcome informs workplace design by extending theoretical horizons, and by the development of a comprehensive investigative process comprising a suite of models and techniques for both architectural and interior design research and practice, collectively entitled the NetWorkPlace™© Application Framework. This work contributes to the body of knowledge within the design disciplines in substantive, theoretical, and methodological terms, whilst potentially also influencing future organisational network theories, management practices, and information and communication technology applications. The NetWorkPlace™© as reported in this thesis, constitutes a multi-dimensional concept having the capacity to deal with the fluidity and ambiguity characteristic of the network context, as both a topic of research and the way of going about it.

Relevance: 10.00%

Publisher:

Abstract:

BACKGROUND: Support and education for parents faced with managing a child with atopic dermatitis are crucial to the success of current treatments. Interventions aiming to improve parents' management of this condition are promising. Unfortunately, evaluation is hampered by a lack of precise research tools to measure change. OBJECTIVES: To develop a suite of valid and reliable research instruments to appraise parents' self-efficacy for performing atopic dermatitis management tasks; outcome expectations of performing management tasks; and self-reported task performance in a community sample of parents of children with atopic dermatitis. METHODS: The Parents' Eczema Management Scale (PEMS) and the Parents' Outcome Expectations of Eczema Management Scale (POEEMS) were developed from an existing self-efficacy scale, the Parental Self-Efficacy with Eczema Care Index (PASECI). Each scale was presented in a single self-administered questionnaire to measure self-efficacy, outcome expectations, and self-reported task performance related to managing child atopic dermatitis. Each was tested with a community sample of parents of children with atopic dermatitis, and psychometric evaluation of the scales' reliability and validity was conducted. SETTING AND PARTICIPANTS: A community-based convenience sample of 120 parents of children with atopic dermatitis completed the self-administered questionnaire. Participants were recruited through schools across Australia. RESULTS: Satisfactory internal consistency and test-retest reliability were demonstrated for all three scales. Construct validity was satisfactory, with positive relationships between self-efficacy for managing atopic dermatitis and general perceived self-efficacy; between self-efficacy for managing atopic dermatitis and self-reported task performance; and between self-efficacy for managing atopic dermatitis and outcome expectations.
Factor analyses revealed two-factor structures for both PEMS and PASECI, with both scales containing factors related to performing routine management tasks and to managing the child's symptoms and behaviour. Factor analysis was also applied to POEEMS, resulting in a three-factor structure. Factors relating to independent management of atopic dermatitis by the parent, involving healthcare professionals in management, and involving the child in the management of atopic dermatitis were identified. Parents' self-efficacy and outcome expectations had a significant influence on self-reported task performance. CONCLUSIONS: Findings suggest that PEMS and POEEMS are valid and reliable instruments worthy of further psychometric evaluation. Likewise, the validity and reliability of PASECI were confirmed.
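Internal consistency of multi-item scales such as PEMS is conventionally summarised by Cronbach's alpha. A minimal sketch of that computation follows; the response data below are invented for illustration and the thesis's actual item sets and scores are not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 parents, 4 Likert items scored 1-5
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 2))  # → 0.96
```

Values above roughly 0.7 are usually read as satisfactory internal consistency, which is the kind of threshold the RESULTS statement refers to.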

This thesis provides a behavioural perspective on the problem of collusive tendering in the construction market by examining the decision-making factors of individuals potentially involved in such agreements, using marketing ethics theory and techniques. The findings of a cross-disciplinary literature review were synthesised into a model of factors theoretically expected to determine the individual's behavioural intent towards a set of collusive tendering agreements and the means of reaching them. The factors were grouped as internal cognitive (the individuals' value systems) and affective (demographic and psychographic characteristics), as well as external environmental (legal, industrial and organisational codes and norms) and situational (company, market and economic conditions). The model was tested using empirical data collected through a questionnaire survey of estimators employed in the largest Australian construction firms. All forms of explicit collusive tendering agreements were considered as having a prohibitive moral content by the majority of respondents, who also clearly differentiated between agreements and discussions of contract terms (which they found to be a moral concern but not prohibitive) or of prices. Comparisons between those respondents who would never participate in a collusive agreement and the potential offenders clearly showed two distinctly different groups. The law-abiding estimators are less reliant on situational factors, happier and more comfortable in their work environments, and they live according to personal value and belief systems. The potential offenders, on the other hand, are mistrustful of colleagues, feel their values are not respected, put company priorities above principles, and none of them is religious or a member of a professional body.
The research results indicate that Australian estimators are, overall, law-abiding and principled, and accept the existing codification of collusion as morally defensible and binding. Professional bodies' and organisational codes of conduct, as well as personal value and belief systems that guide one's own conduct, appear to be deterrents to collusive tendering intent, as are moral comfort and work satisfaction. These observations are potential indicators of areas where intervention and behaviour modification can increase individuals' resistance to collusion.

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined leading to a nonoptimized quantization approach.
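The coefficient model mentioned here is a generalized Gaussian whose shape parameter is fitted per subband. The thesis formulates a least-squares fit on a nonlinear function of the shape parameter; the sketch below instead uses the common moment-matching estimator, purely to illustrate the idea that the shape parameter is recovered by inverting a moment ratio.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs: np.ndarray) -> float:
    """Estimate the generalized Gaussian shape parameter beta for a set of
    wavelet coefficients by matching the ratio E|x| / sqrt(E x^2)."""
    x = np.asarray(coeffs, dtype=float).ravel()
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))  # observed moment ratio
    # For a GGD this ratio equals g(beta); invert g numerically.
    def g(b):
        return gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b))
    return brentq(lambda b: g(b) - r, 0.1, 10.0)

# Sanity check: Gaussian data should give beta close to 2,
# Laplacian-like data would give beta close to 1.
rng = np.random.default_rng(0)
beta_hat = ggd_shape(rng.normal(size=200_000))
print(round(beta_hat, 1))  # ≈ 2.0
```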
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
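Lattice quantization itself reduces to a fast nearest-point search. The thesis's piecewise-uniform pyramid LVQ is not reproduced here, but the classic Conway and Sloane nearest-point rule for the D_n lattice (integer vectors with even coordinate sum) shows the kind of closed-form step that non-cubic lattice quantizers build on.

```python
import numpy as np

def nearest_dn(x):
    """Nearest point in the D_n lattice (integer vectors with even
    coordinate sum), following Conway and Sloane's rounding rule."""
    x = np.asarray(x, dtype=float)
    f = np.round(x)                     # nearest integer point
    if int(f.sum()) % 2 == 0:
        return f.astype(int)
    # Coordinate sum is odd: re-round the worst coordinate the other way.
    i = np.argmax(np.abs(x - f))
    f[i] += 1.0 if x[i] >= f[i] else -1.0
    return f.astype(int)

print(nearest_dn([0.6, 0.6, 0.1, 0.1]))  # [1 1 0 0]
```

A full quantizer would additionally apply the scaling factor and truncation level discussed in the text before and after this nearest-point step.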
To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.

This thesis presents a study of the mechanical properties of thin films. The main aim was to determine the properties of sol-gel derived coatings. These films are used in a range of different applications and are known to be quite porous. Very little work has been carried out in this area, so some of the work was carried out on magnetron sputtered metal coatings in order to validate the techniques developed in this research. The main part of the work has concentrated on the development of various bending techniques to study the elastic modulus of the thin films, including both a small-scale three-point bending technique and a novel bi-axial bending technique based on a disk resting on three supporting balls. The bending techniques involve applying a load to the test sample and recording the bending response to this force. These experiments were carried out using an ultra micro indentation system with very sensitive force and depth recording capabilities. By analysing these forces and deflections using existing theories of elasticity, the elastic modulus may be determined. In addition to the bi-axial bending study, a finite element analysis of the stress distribution in a disk during bending was carried out. The results from the bi-axial bending tests of the magnetron sputtered films were confirmed by ultra micro indentation tests, which gave information on the hardness and elastic modulus of the films. It was found that while the three-point bending method gave acceptable results for uncoated steel substrates, it was very susceptible to slight deformations of the substrate. Improvements were made by more careful preparation of the substrates in order to avoid deformation. However, the technique still failed to give reasonable results for coated specimens.
In contrast, bi-axial bending gave very reliable results even for very thin films, and this technique was also found to be useful for determination of the properties of sol-gel coatings. In addition, an ultra micro indentation study of the hardness and elastic modulus of sol-gel films was conducted. This study included conventionally fired films as well as films ion implanted at a range of doses. The indentation tests showed that for implantation of H+ ions at doses exceeding 3×10¹⁶ ions/cm², the mechanical properties closely resembled those of films that were conventionally fired to 450°C.
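For the small-scale three-point bending tests described above, the elastic modulus follows from the standard centre-loaded beam relation δ = F·L³ / (48·E·I). A sketch under that assumption; the specimen dimensions and load below are hypothetical, not values from the thesis.

```python
def three_point_modulus(force_n, deflection_m, span_m, width_m, thickness_m):
    """Elastic modulus (Pa) from a centre-loaded three-point bend test of a
    rectangular beam: E = F * L^3 / (48 * I * delta), with I = b*h^3/12."""
    inertia = width_m * thickness_m ** 3 / 12.0  # second moment of area
    return force_n * span_m ** 3 / (48.0 * inertia * deflection_m)

# Hypothetical steel strip: 10 mm wide, 1 mm thick, 40 mm span;
# a 1 N load producing 8 um of central deflection.
E = three_point_modulus(1.0, 8.0e-6, 40e-3, 10e-3, 1e-3)
print(f"{E / 1e9:.0f} GPa")  # → 200 GPa
```

The strong sensitivity to thickness (h cubed in I) is consistent with the reported susceptibility of the method to slight substrate deformations.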

Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close range scenes of rocks. A fundamental problem faced by stereo algorithms is the matching or correspondence problem: locating corresponding points or features in the two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to improve the reliability of matching in the presence of radiometric distortion, a significant result since radiometric distortion commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches and improve match accuracy.
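A minimal NumPy sketch of the two non-parametric transforms compared in the thesis. Because both depend only on the ordering of intensities within the window, not their absolute values, they are insensitive to the monotonic radiometric distortion discussed above. The window size and bit ordering here are illustrative choices.

```python
import numpy as np

def rank_transform(img, win=3):
    """Rank transform: each pixel becomes the count of window neighbours
    whose intensity is below the centre pixel's."""
    r = win // 2
    img = np.asarray(img, dtype=int)
    out = np.zeros_like(img)
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(patch < img[y, x])
    return out

def census_transform(img, win=3):
    """Census transform: bit string of centre-vs-neighbour comparisons,
    packed into an integer; candidates are later matched by Hamming
    distance between these codes."""
    r = win // 2
    img = np.asarray(img, dtype=int)
    out = np.zeros(img.shape, dtype=np.uint64)
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            patch = img[y - r:y + r + 1, x - r:x + r + 1].ravel()
            code = np.uint64(0)
            for b in (patch < img[y, x]).astype(np.uint64):
                code = (code << np.uint64(1)) | b
            out[y, x] = code
    return out
```

The rank transform keeps only the count (one small number per pixel), while the census transform keeps the full comparison pattern, which is why the latter retains more discriminating power at a higher bit cost.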

The Inflatable Rescue Boat (IRB) is arguably the most effective rescue tool used by the Australian surf lifesavers. The exceptional features of high mobility and rapid response have enabled it to become an icon on Australia's popular beaches. However, the IRB's extensive use within an environment that is as rugged as it is spectacular has led it to become a danger to those who risk their lives to save others. Epidemiological research revealed lower limb injuries to be predominant, particularly to the right leg. The common types of injuries were fractures and dislocations, as well as muscle or ligament strains and tears. The concern expressed by Surf Life Saving Queensland (SLSQ) and Surf Life Saving Australia (SLSA) led to a biomechanical investigation into this unique and relatively unresearched field. The aim of the research was to identify the causes of injury and propose processes that may reduce the incidence and severity of injury to surf lifesavers during IRB operation. Following a review of related research, a design analysis of the craft was undertaken as an introduction to the craft, its design and uses. The mechanical characteristics of the vessel were then evaluated and the accelerations applied to the crew in the IRB were established through field tests. The data were then combined and modelled in the 3-D mathematical modelling and simulation package, MADYMO. A tool was created to compare various scenarios of boat design and methods of operation to determine possible mechanisms to reduce injuries. The results of this study showed that under simulated wave loading the boats flex around a pivot point determined by the position of the hinge in the floorboard. It was also found that the accelerations experienced by the crew exhibited similar characteristics to road vehicle accidents. Staged simulations indicated the attributes of an optimum foam in terms of thickness and density.
Likewise, modelling of the boat and crew produced simulations that predicted realistic crew response to tested variables. Unfortunately, the observed lack of adherence to the SLSA footstrap Standard has impeded successful epidemiological and modelling outcomes. If uniformity of boat setup can be assured, then epidemiological studies will be able to highlight the influence of implementing changes to the boat design. In conclusion, the research provided a tool to successfully link the epidemiology and injury diagnosis to the mechanical engineering design through the use of biomechanics. This was a novel application of the mathematical modelling software MADYMO. Other craft can also be investigated in this manner to provide solutions to the problem identified and therefore reduce the risk of injury for the operators.

OneSteel Australian Tube Mills has recently developed a new hollow flange channel cold-formed section, known as the LiteSteel Beam (LSB). The innovative LSB sections have the beneficial characteristics of torsionally rigid closed rectangular flanges combined with economical fabrication processes from a single strip of high strength steel. They combine the stability of hot-rolled steel sections with the high strength to weight ratio of conventional cold-formed steel sections. The LSB sections are commonly used as flexural members in residential, industrial and commercial buildings. In order to ensure safe and efficient designs of LSBs, many research studies have been undertaken on the flexural behaviour of LSBs. However, no research has been undertaken on the shear behaviour of LSBs. Therefore this thesis investigated the ultimate shear strength behaviour of LSBs with and without web openings including their elastic buckling and post-buckling characteristics using both experimental and finite element analyses, and developed accurate shear design rules. Currently the elastic shear buckling coefficients of web panels are determined by assuming conservatively that the web panels are simply supported at the junction between the web and flange elements. Therefore finite element analyses were conducted first to investigate the elastic shear buckling behaviour of LSBs to determine the true support condition at the junction between their web and flange elements. An equation for the higher elastic shear buckling coefficient of LSBs was developed and included in the shear capacity equations in the cold-formed steel structures code, AS/NZS 4600. Predicted shear capacities from the modified equations and the available experimental results demonstrated the improvements to the shear capacities of LSBs due to the presence of higher level of fixity at the LSB flange to web juncture. 
A detailed study into the shear flow distribution of LSB was also undertaken prior to the elastic buckling analysis study. The experimental study of ten LSB sections included 42 shear tests of LSBs with aspect ratios of 1.0 and 1.5 that were loaded at midspan until failure. Both single and back to back LSB arrangements were used. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic and elastic shear buckling) occurred in the tests. Experimental results showed that the current cold-formed steel design rules are very conservative for the shear design of LSBs. Significant improvements to web shear buckling occurred due to the presence of rectangular hollow flanges while considerable post-buckling strength was also observed. Experimental results were presented and compared with corresponding predictions from the current design rules. Appropriate improvements have been proposed for the shear strength of LSBs based on AISI (2007) design equations and test results. Suitable design rules were also developed under the direct strength method (DSM) format. This thesis also includes the shear test results of cold-formed lipped channel beams from LaBoube and Yu (1978a), and the new design rules developed based on them using the same approach used with LSBs. Finite element models of LSBs in shear were also developed to investigate the ultimate shear strength behaviour of LSBs including their elastic and post-buckling characteristics. They were validated by comparing their results with experimental test results. Details of the finite element models of LSBs, the nonlinear analysis results and their comparisons with experimental results are presented in this thesis. Finite element analysis results showed that the current cold-formed steel design rules are very conservative for the shear design of LSBs. They also confirmed other experimental findings relating to elastic and post-buckling shear strength of LSBs. 
A detailed parametric study based on the validated experimental finite element model was undertaken to develop an extensive shear strength database, which was then used to confirm the accuracy of the new shear strength equations proposed in this thesis. Experimental and numerical studies were also undertaken to investigate the shear behaviour of LSBs with web openings. Twenty-six shear tests were first undertaken using a three point loading arrangement. It was found that AS/NZS 4600 and Shan et al.'s (1997) design equations are conservative for the shear design of LSBs with web openings, while McMahon et al.'s (2008) design equation is unconservative. Experimental finite element models of LSBs with web openings were then developed and validated by comparing their results with experimental test results. The developed nonlinear finite element model was found to predict the shear capacity of LSBs with web openings with very good accuracy. Improved design equations have been proposed for the shear capacity of LSBs with web openings based on both experimental and FEA parametric study results. This thesis presents the details of experimental and numerical studies of the shear behaviour and strength of LSBs with and without web openings, and the results, including the accurate design rules developed.
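The baseline cold-formed shear rules discussed in this abstract take a three-regime form: shear yielding, inelastic shear buckling, and elastic shear buckling of the web. The sketch below follows the AISI / AS/NZS 4600 style of those rules with the classical simply supported buckling coefficient kv = 5.34, the very assumption the thesis argues is conservative for LSBs; the section dimensions used are illustrative, not LSB catalogue values.

```python
import math

def nominal_shear_capacity(h, t, fy, e=200e3, kv=5.34):
    """Nominal shear capacity V_n (N) of a web panel in the three-regime
    cold-formed form (AISI / AS-NZS 4600 style). h and t in mm, stresses
    in MPa. kv = 5.34 is the classical simply supported coefficient; the
    thesis derives higher kv values from the flange-to-web fixity of LSBs.
    """
    slenderness = h / t
    limit = math.sqrt(e * kv / fy)
    if slenderness <= limit:                            # shear yielding
        fv = 0.60 * fy
    elif slenderness <= 1.51 * limit:                   # inelastic buckling
        fv = 0.60 * math.sqrt(kv * fy * e) / slenderness
    else:                                               # elastic buckling
        fv = 0.904 * e * kv / slenderness ** 2
    return fv * h * t

# Illustrative slender web: 200 mm deep, 2.0 mm thick, fy = 450 MPa
print(round(nominal_shear_capacity(200.0, 2.0, 450.0) / 1e3, 1), "kN")
```

Raising kv shifts the regime boundaries outwards and raises the buckling-governed capacities, which is how a higher flange-to-web fixity translates into the improved shear capacities reported above.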

LiteSteel Beam (LSB) is a new cold-formed steel beam produced by OneSteel Australian Tube Mills. The new beam is effectively a channel section with two rectangular hollow flanges and a slender web, and is manufactured using a combined cold-forming and electric resistance welding process. OneSteel Australian Tube Mills is promoting the use of LSBs as flexural members in a range of applications, such as floor bearers. When LSBs are used as back to back built-up sections, their moment capacity is likely to improve, thus extending their applications further. However, the structural behaviour of built-up beams is not well understood. Many steel design codes include guidelines for connecting two channels to form a built-up I-section, including the required longitudinal spacing of connections, but these rules were found to be inadequate in some applications. Currently the safe spans of built-up beams are determined based on twice the moment capacity of a single section. Research has shown that these guidelines are conservative. Therefore large scale lateral buckling tests and advanced numerical analyses were undertaken to investigate the flexural behaviour of back to back LSBs connected by fasteners (bolts) at various longitudinal spacings under uniform moment conditions. In this research an experimental investigation was first undertaken to study the flexural behaviour of back to back LSBs including their buckling characteristics. This experimental study included tensile coupon tests, initial geometric imperfection measurements and lateral buckling tests. The initial geometric imperfection measurements taken on several back to back LSB specimens showed that the back to back bolting process is not likely to alter the imperfections, and the measured imperfections are well below the fabrication tolerance limits.
Twelve large scale lateral buckling tests were conducted to investigate the behaviour of back to back built-up LSBs with various longitudinal fastener spacings under uniform moment conditions. Tests also included two single LSB specimens. Test results showed that the back to back LSBs gave higher moment capacities in comparison with single LSBs, and the fastener spacing influenced the ultimate moment capacities. As the fastener spacing was reduced the ultimate moment capacities of back to back LSBs increased. Finite element models of back to back LSBs with varying fastener spacings were then developed to conduct a detailed parametric study on the flexural behaviour of back to back built-up LSBs. Two finite element models were developed, namely experimental and ideal finite element models. The models included the complex contact behaviour between LSB web elements and intermittently fastened bolted connections along the web elements. They were validated by comparing their results with experimental results and numerical results obtained from an established buckling analysis program called THIN-WALL. These comparisons showed that the developed models could accurately predict both the elastic lateral distortional buckling moments and the non-linear ultimate moment capacities of back to back LSBs. Therefore the ideal finite element models incorporating ideal simply supported boundary conditions and uniform moment conditions were used in a detailed parametric study on the flexural behaviour of back to back LSB members. In the detailed parametric study, both elastic buckling and nonlinear analyses of back to back LSBs were conducted for 13 LSB sections with varying spans and fastener spacings. Finite element analysis results confirmed that the current design rules in AS/NZS 4600 (SA, 2005) are very conservative while the new design rules developed by Anapayan and Mahendran (2009a) for single LSB members were also found to be conservative. 
Thus new member capacity design rules were developed for back to back LSB members as a function of non-dimensional member slenderness. New empirical equations were also developed to aid in the calculation of elastic lateral distortional buckling moments of intermittently fastened back to back LSBs. Design guidelines were developed for the maximum fastener spacing of back to back LSBs in order to optimise the use of fasteners. A closer fastener spacing of span/6 was recommended for intermediate spans and some long spans where the influence of fastener spacing was found to be high. In the last phase of this research, a detailed investigation was conducted into the potential use of different types of connections and stiffeners in improving the flexural strength of back to back LSB members. It was found that using transverse web stiffeners was the most cost-effective and simple strengthening method. It is recommended that web stiffeners be used at the supports and at every third point within the span, with a thickness in the range of 3 to 5 mm depending on the size of the LSB section. The use of web stiffeners eliminated most of the lateral distortional buckling effects and hence improved the ultimate moment capacities. A suitable design equation was developed to calculate the elastic lateral buckling moments of back to back LSBs with the recommended web stiffener configuration, while the design rules developed for unstiffened back to back LSBs were recommended for calculating the ultimate moment capacities.
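The member capacity design rules referred to above are expressed as a function of non-dimensional member slenderness, lam = sqrt(My / Mo), where My is the first yield moment and Mo the elastic buckling moment. The thesis's modified curves for back to back LSBs are not reproduced here; the sketch below shows only the baseline AS/NZS 4600 lateral buckling curve in that slenderness form, with illustrative moment values.

```python
import math

def member_moment_capacity(my, mo):
    """Member moment capacity Mc from the AS/NZS 4600 lateral buckling
    curve, in terms of non-dimensional slenderness lam = sqrt(My / Mo).
    my: first yield moment, mo: elastic buckling moment (same units)."""
    lam = math.sqrt(my / mo)
    if lam <= 0.60:
        return my                                        # yielding governs
    if lam < 1.336:
        return 1.11 * my * (1.0 - 10.0 * lam ** 2 / 36.0)  # inelastic range
    return my / lam ** 2                                 # elastic buckling

# Illustrative section: My = 40 kNm, elastic buckling moment Mo = 25 kNm
mc = member_moment_capacity(40.0, 25.0)
print(round(mc, 1), "kNm")
```

Empirical equations for the elastic buckling moment Mo of intermittently fastened back to back sections, such as those developed in the thesis, plug directly into this slenderness calculation.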