887 results for Ontologies Representing the same Conceptualisation
Abstract:
Knowledge is a powerful organisational asset, yet it is intangible and hard to manage, particularly in a project environment where there is a tendency to repeat the same mistakes rather than learn from previous project lessons. A lack of effective knowledge sharing across projects causes reinventions that are costly and time consuming. Research on knowledge transfer has focused mainly on functional organisations, and only recently has attention been directed towards knowledge transfer in projects. Furthermore, there is little evidence in the literature examining trust in knowledge transfer processes. This paper studies how three types of trust - ability, benevolence, and integrity - affect knowledge transfer from an inter-project perspective. Three case studies investigated the matter. A detailed description of the work undertaken and an analysis of interviews with project professionals from large project-based organisations are presented in this paper. The key finding identifies the positive impact of ability trust on knowledge transfer. However, it was also found that perceptions of both integrity and benevolence varied across organisations, suggesting a possible impact of organisational factors on the way trust is perceived in inter-project knowledge transfer. The paper concludes with a discussion and recommendations regarding the development of trust for the inter-project environment.
Abstract:
Environmental impacts caused during Australia's comparatively recent settlement by Europeans are evident. Governments (both Commonwealth and States) have been largely responsible for requiring landholders – through leasehold development conditions and taxation concessions – to conduct clearing that is now perceived as damage. Most governments are now demanding resource protection. There is a measure of bewilderment (if not resentment) among landholders because of this change. The more populous States, where most overall damage has been done (i.e. Victoria and New South Wales), provide most support for attempts to stop development in other regions where there has been less damage. Queensland, i.e. the north-eastern quarter of the continent, has been relatively slow to develop. It also holds the largest and most diverse natural environments. Tree clearing is an unavoidable element of land development, whether to access and enhance native grasses for livestock or to allow for urban developments (with exotic tree plantings). The consequences in terms of regulations are particularly complex because of the dynamic nature of vegetation. The regulatory terms used in current legislation – such as 'Endangered' and 'Of concern' – depend on legally-defined, static baselines. Regrowth and fire damage are two obvious causes of change. A less obvious aspect is succession, where ecosystems change naturally over long timeframes. In the recent past, the Queensland Government encouraged extensive tree clearing, e.g. through the State Brigalow Development Scheme (mostly 1962 to 1975), which resulted in the removal of some 97% of the wide-ranging mature forests of Acacia harpophylla. At the same time, this government controls National Parks and other reservations (occupying some 4% of the State's 1.7 million km² area) and also holds major World Heritage Areas (such as the Great Barrier Reef and the Wet Tropics Rainforest) promulgated under Commonwealth legislation. This is a highly prescriptive approach, where the community is directed on the one hand to develop (largely through lease conditions) and on the other to avoid development (largely by unusable reserves). Another approach to development and conservation is still possible in Queensland. For this to occur, however, a more workable and equitable solution than has been employed to date is needed, especially for the remote lands of this State. This must involve resident landholders, who have the capacity (through local knowledge, infrastructure and daily presence) to undertake sustainable land-use management most cost-effectively (with suitable attention to ecosystems requiring special conservation effort), provided they have the necessary direction, encouragement and incentive to do so.
Abstract:
We review and discuss the literature on small firm growth with an intention to provide a useful vantage point for new research studies regarding this important phenomenon. We first discuss conceptual and methodological issues that represent critical choices for those who research growth and which make it challenging to compare results from previous studies. The substantial review of past research is organized into four sections representing two smaller and two larger literatures. The first of the latter focuses on internal and external drivers of small firm growth. Here we find that much has been learnt and that many valuable generalizations can be made. However, we also conclude that more research of the same kind is unlikely to yield much. While interactive and non-linear effects may be worth pursuing, it is unlikely that any new and important growth drivers or strong, linear main effects would be found. The second large literature deals with organizational life-cycles or stages of development. While deservedly criticized for unwarranted determinism and weak empirics, this type of approach addresses problems of high practical and also theoretical relevance, and should not be shunned by researchers. We argue that with a change in the fundamental assumptions and improved empirical design, research on the organizational and managerial consequences of growth is an important line of inquiry. With this, we overlap with one of the smaller literatures, namely studies focusing on the effects of growth. We argue that studies too often assume that growth equals success. We advocate instead the use of growth as an intermediary variable that influences more fundamental goals in ways that should be carefully examined rather than assumed. The second small literature distinguishes between different modes or forms of growth, including, e.g., organic vs. acquisition-based growth, and international expansion. We note that modes of growth are an important topic that has been understudied in the growth literature, whereas in other branches of research aspects of it may have been studied intensely, but not primarily from a growth perspective. In the final section we elaborate on ways forward for research on small firm growth. We point to rich opportunities for researchers who look beyond drivers of growth, where growth is viewed as a homogeneous phenomenon assumed to unambiguously reflect success, and instead focus on growth as a process and a multi-dimensional phenomenon, as well as on how growth relates to more fundamental outcomes.
Abstract:
Web service composition is an important problem in web service based systems. It is about how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality, but may have different QoS values. Thus, a significant research problem in web service composition is how to select a web service implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must be selected. This is the so-called dependency constraint. Sometimes when an implementation for one web service is selected, a set of implementations for another web service must be excluded in the web service composition. This is the so-called conflict constraint. Thus, the optimal web service selection is a typical constrained combinatorial optimization problem from the computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results have shown that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
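As a rough illustration of the problem setting only - not the paper's hybrid algorithm - the sketch below encodes a candidate solution as one implementation index per abstract service and penalises violated dependency and conflict constraints inside the fitness function. The QoS encoding (a single scalar utility per implementation), the penalty value and the genetic operators are assumptions made for this example.

```python
import random

def fitness(candidate, qos, dependencies, conflicts, penalty=1000.0):
    """Total QoS of a candidate selection (one implementation index per
    abstract service), penalised for every violated dependency or conflict."""
    score = sum(qos[s][impl] for s, impl in enumerate(candidate))
    for (s1, i1), (s2, i2) in dependencies:       # if s1 uses i1, s2 must use i2
        if candidate[s1] == i1 and candidate[s2] != i2:
            score -= penalty
    for (s1, i1), (s2, i2) in conflicts:          # if s1 uses i1, s2 must not use i2
        if candidate[s1] == i1 and candidate[s2] == i2:
            score -= penalty
    return score

def genetic_selection(qos, dependencies, conflicts,
                      pop_size=60, generations=200, p_mutate=0.1):
    """Plain GA over per-service implementation choices; qos[s][i] is the
    scalar QoS utility of implementation i of service s (an assumption)."""
    n = len(qos)
    pop = [[random.randrange(len(qos[s])) for s in range(n)] for _ in range(pop_size)]
    key = lambda c: fitness(c, qos, dependencies, conflicts)
    for _ in range(generations):
        pop.sort(key=key, reverse=True)
        parents = pop[: pop_size // 2]                    # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n) if n > 1 else 0  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mutate:                # point mutation
                s = random.randrange(n)
                child[s] = random.randrange(len(qos[s]))
            children.append(child)
        pop = parents + children
    return max(pop, key=key)
```

A hybrid variant in the spirit of the paper would typically add a local search or constraint-repair step to each generation; the plain version above only illustrates how dependency and conflict constraints enter the objective.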
Abstract:
This paper presents a technique for tracking road edges in a panoramic image sequence. The major contribution is that instead of unwarping the image to find parallel lines representing the road edges, we choose to warp the parallel ground-plane lines into the image plane of the equiangular panospheric camera. Updating the parameters of the line thus involves searching a very small number of pixels in the panoramic image, requiring considerably less computation than unwarping. Results using real-world images, including shadows, intersections and curves, are presented.
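As a hedged sketch of the warping idea only (the paper's calibration and line parameterisation are not reproduced here; the idealised equiangular mapping, the gain k and all parameter names are assumptions), sample points of a straight ground-plane line can be projected into a panoramic image like this:

```python
import numpy as np

def warp_ground_line(p0, direction, cam_height, k, centre, n=50):
    """Project sample points of a straight ground-plane line into an
    idealised equiangular panoramic image: the radial pixel distance from
    the image centre is assumed proportional (gain k) to the angle between
    the incoming ray and the downward vertical axis of the camera."""
    t = np.linspace(-10.0, 10.0, n)                      # metres along the line
    x = p0[0] + t * direction[0]
    y = p0[1] + t * direction[1]
    azimuth = np.arctan2(y, x)                           # bearing around the vertical axis
    elevation = np.arctan2(np.hypot(x, y), cam_height)   # angle from straight down
    r = k * elevation                                    # equiangular radial mapping
    u = centre[0] + r * np.cos(azimuth)
    v = centre[1] + r * np.sin(azimuth)
    return np.stack([u, v], axis=1)                      # (n, 2) pixel coordinates
```

Because only these few projected pixels need to be examined when updating the line parameters, no unwarping of the full panoramic image is required, which is the computational saving described above.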
Abstract:
In this paper we introduce the Reaction Wheel Pendulum, a novel mechanical system consisting of a physical pendulum with a rotating bob. This system has several attractive features from both a pedagogical and a research standpoint. From a pedagogical standpoint, the dynamics are the simplest among the various pendulum experiments available, so the system can be introduced to students earlier in their education. At the same time, the system is nonlinear and underactuated, so it can be used as a benchmark experiment to study recent advanced methodologies in nonlinear control, such as feedback linearization, passivity methods, backstepping and hybrid control. In this paper we discuss two control approaches for the problems of swing-up and balance, namely feedback linearization and passivity-based control. We first show that the system is locally feedback linearizable by a local diffeomorphism in state space and nonlinear feedback. We compare the feedback linearization control with a linear pole-placement control for the problem of balancing the pendulum about the inverted position. For the swing-up problem we discuss an energy approach based on collocated partial feedback linearization, and passivity of the resulting zero dynamics. A hybrid/switching control strategy is used to switch between the swing-up and the balance control. Experimental results are presented.
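For reference, a commonly used textbook model of the reaction wheel pendulum is shown below (standard notation rather than the paper's own symbols: θp is the pendulum angle from the upright vertical, θr the rotor angle relative to the pendulum, τ the motor torque, J the combined inertia of the pendulum and rotor mass about the pivot, Jr the rotor inertia about its own axis, and m̄g the effective gravity coefficient).

```latex
% Reaction wheel pendulum: standard two-degree-of-freedom model
% (textbook notation; parameter symbols are assumptions, not the paper's).
\begin{align}
  (J + J_r)\,\ddot{\theta}_p + J_r\,\ddot{\theta}_r - \bar{m} g \sin\theta_p &= 0, \\
  J_r\,\bigl(\ddot{\theta}_p + \ddot{\theta}_r\bigr) &= \tau .
\end{align}
```

The torque appears only in the second equation, so the pendulum is influenced only through the inertial coupling Jr; this underactuated structure is what collocated partial feedback linearization and energy-based swing-up exploit.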
Abstract:
International assessments of student science achievement, and growing evidence of students' waning interest in school science, have ensured that the development of scientific literacy continues to remain an important educational priority. Furthermore, researchers have called for teaching and learning strategies to engage students in the learning of science, particularly in the middle years of schooling. This study extends previous national and international research that has established a link between writing and learning science. Specifically, it investigates the learning experiences of eight intact Year 9 science classes as they engage in the writing of short stories that merge scientific and narrative genres (i.e., hybridised scientific narratives) about the socioscientific issue of biosecurity. This study employed a triangulation mixed methods research design, generating both quantitative and qualitative data, in order to investigate three research questions that examined the extent to which the students' participation in the study enhanced their scientific literacy; the extent to which the students demonstrated conceptual understanding of related scientific concepts through their written artefacts and in interviews about the artefacts; and the extent to which the students' participation in the project influenced their attitudes toward science and science learning. Three aspects of scientific literacy were investigated in this study: conceptual science understandings (a derived sense of scientific literacy), the students' transformation of scientific information in written stories about biosecurity (simple and expanded fundamental senses of scientific literacy), and attitudes toward science and science learning. The stories written by students in a selected case study class (N=26) were analysed quantitatively using a series of specifically-designed matrices that produce numerical scores that reflect students' developing fundamental and derived senses of scientific literacy. All students (N=152) also completed a Likert-style instrument (i.e., BioQuiz), pretest and posttest, that examined their interest in learning science, science self-efficacy, their perceived personal and general value of science, their familiarity with biosecurity issues, and their attitudes toward biosecurity. Socioscientific issues (SSI) education served as a theoretical framework for this study. It sought to investigate an alternative discourse with which students can engage in the context of SSI education, and the role of positive attitudes in engaging students in the negotiation of socioscientific issues. Results of the study have revealed that writing BioStories enhanced selected aspects of the participants' attitudes toward science and science learning, and their awareness and conceptual understanding of issues relating to biosecurity. Furthermore, the students' written artefacts alone did not provide an accurate representation of the level of their conceptual science understandings. An examination of these artefacts in combination with interviews about the students' written work provided a more comprehensive assessment of their developing scientific literacy. These findings support extensive calls for the utilisation of diversified writing-to-learn strategies in the science classroom, and therefore make a significant contribution to the writing-to-learn science literature, particularly in relation to the use of hybridised scientific genres. 
At the same time, this study presents the argument that the writing of hybridised scientific narratives such as BioStories can be used to complement the types of written discourse with which students engage in the negotiation of socioscientific issues, namely, argumentation, as the development of positive attitudes toward science and science learning can encourage students' participation in the discourse of science. The implications of this study for curricular design and implementation, and for further research, are also discussed.
Abstract:
The high morbidity and mortality associated with atherosclerotic coronary vascular disease (CVD) and its complications are being lessened by the increased knowledge of risk factors, effective preventative measures and proven therapeutic interventions. However, significant CVD morbidity remains and sudden cardiac death continues to be a presenting feature for some subsequently diagnosed with CVD. Coronary vascular disease is also the leading cause of anaesthesia-related complications. Stress electrocardiography/exercise testing is predictive of 10 year risk of CVD events, and the cardiovascular variables used to score this test are monitored peri-operatively. Similar physiological time-series datasets are being subjected to data mining methods for the prediction of medical diagnoses and outcomes. This study aims to find predictors of CVD using anaesthesia time-series data and patient risk factor data. Several pre-processing and predictive data mining methods are applied to this data. Physiological time-series data related to anaesthetic procedures are subjected to pre-processing methods for removal of outliers, calculation of moving averages as well as data summarisation and data abstraction methods. Feature selection methods of both wrapper and filter types are applied to derived physiological time-series variable sets alone and to the same variables combined with risk factor variables. The ability of these methods to identify subsets of highly correlated but non-redundant variables is assessed. The major dataset is derived from the entire anaesthesia population, and subsets of this population are considered to be at increased anaesthesia risk based on their need for more intensive monitoring (invasive haemodynamic monitoring and additional ECG leads). Because of the unbalanced class distribution in the data, majority-class under-sampling and the Kappa statistic, together with the misclassification rate and the area under the ROC curve (AUC), are used for evaluation of models generated using different prediction algorithms. The performance of models derived from feature-reduced datasets reveals the filter method, Cfs subset evaluation, to be the most consistently effective, although Consistency-derived subsets tended toward slightly increased accuracy but markedly increased complexity. The use of misclassification rate (MR) for model performance evaluation is influenced by class distribution. This could be eliminated by consideration of the AUC or Kappa statistic, as well as by evaluation of subsets with an under-sampled majority class. The noise and outlier removal pre-processing methods produced models with MR ranging from 10.69 to 12.62, with the lowest value being for data from which both outliers and noise were removed (MR 10.69). For the raw time-series dataset, MR is 12.34. Feature selection reduces MR to between 9.8 and 10.16, with the time segmented summary data (dataset F) MR being 9.8 and the raw time-series summary data (dataset A) being 9.92. However, for all time-series-only based datasets, the complexity is high. For most pre-processing methods, Cfs could identify a subset of correlated and non-redundant variables from the time-series-alone datasets, but models derived from these subsets are of one leaf only. MR values are consistent with the class distribution in the subset folds evaluated in the n-fold cross-validation method.
For models based on Cfs-selected time-series derived and risk factor (RF) variables, the MR ranges from 8.83 to 10.36, with dataset RF_A (raw time-series data and RF) being 8.85 and dataset RF_F (time segmented time-series variables and RF) being 9.09. The models based on counts of outliers and counts of data points outside the normal range (Dataset RF_E), and on derived variables based on time series transformed using Symbolic Aggregate Approximation (SAX) with associated time-series pattern cluster membership (Dataset RF_G), perform the least well, with MRs of 10.25 and 10.36 respectively. For coronary vascular disease prediction, nearest neighbour (NNge) and the support vector machine based method, SMO, have the highest MRs of 10.1 and 10.28, while logistic regression (LR) and the decision tree (DT) method, J48, have MRs of 8.85 and 9.0 respectively. DT rules are the most comprehensible and clinically relevant. The predictive accuracy increase achieved by the addition of risk factor variables to time-series variable based models is significant. The addition of time-series derived variables to models based on risk factor variables alone is associated with a trend to improved performance. Data mining of feature-reduced anaesthesia time-series variables together with risk factor variables can produce compact and moderately accurate models able to predict coronary vascular disease. Decision tree analysis of time-series data combined with risk factor variables yields rules which are more accurate than models based on time-series data alone. The limited additional value provided by electrocardiographic variables when compared to the use of risk factors alone is similar to recent suggestions that exercise electrocardiography (exECG) under standardised conditions has limited additional diagnostic value over risk factor analysis and symptom pattern. The pre-processing used in this study had limited effect when time-series variables and risk factor variables are used as model input. In the absence of risk factor input, the use of time-series variables after outlier removal, and of time-series variables based on physiological variable values being outside the accepted normal range, is associated with some improvement in model performance.
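As a rough, hedged sketch of the style of evaluation described above - not the thesis's actual workflow or toolchain - the fragment below cross-validates a decision tree with majority-class under-sampling of each training fold and reports the misclassification rate (MR), the Kappa statistic and AUC. The classifier choice, fold count and under-sampling scheme are assumptions made for this example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import cohen_kappa_score, roc_auc_score

def evaluate_with_undersampling(X, y, n_splits=10, seed=0):
    """Cross-validated evaluation on an unbalanced dataset: each training
    fold has its majority class under-sampled to the minority-class size,
    and MR (%), Cohen's kappa and AUC are averaged over the test folds."""
    rng = np.random.default_rng(seed)
    mrs, kappas, aucs = [], [], []
    for train, test in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        Xtr, ytr = X[train], y[train]
        pos, neg = np.where(ytr == 1)[0], np.where(ytr == 0)[0]
        minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
        keep = np.concatenate([minority,
                               rng.choice(majority, size=len(minority), replace=False)])
        clf = DecisionTreeClassifier(random_state=seed).fit(Xtr[keep], ytr[keep])
        pred = clf.predict(X[test])
        prob = clf.predict_proba(X[test])[:, 1]
        mrs.append(100.0 * np.mean(pred != y[test]))      # misclassification rate
        kappas.append(cohen_kappa_score(y[test], pred))
        aucs.append(roc_auc_score(y[test], prob))
    return np.mean(mrs), np.mean(kappas), np.mean(aucs)
```

Reporting Kappa and AUC alongside MR is what keeps the comparison meaningful under the unbalanced class distribution noted above.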
Abstract:
This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49m, while 86.5% of static points were within 1.5m of the actual geodetic point (mean error: 1.08 ± 0.34m, range 0.69-2.10m). Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with its reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group-level speed was predicted well using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root mean square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
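As a small, hedged illustration of one of the two speed estimates compared in the first study - speed from the change in GPS position over time, which was benchmarked against the receiver's Doppler-shift speed output - the sketch below computes speed from successive latitude/longitude fixes. The haversine formulation and the spherical earth radius are assumptions made for this example.

```python
import numpy as np

EARTH_RADIUS_M = 6371000.0  # mean spherical earth radius (assumption)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * np.arcsin(np.sqrt(a))

def position_delta_speed(lat, lon, t):
    """Speed (m/s) from the change in GPS position over time, for comparison
    against the receiver's Doppler-shift speed output."""
    lat, lon, t = map(np.asarray, (lat, lon, t))
    d = haversine_m(lat[:-1], lon[:-1], lat[1:], lon[1:])
    return d / np.diff(t)
```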
Abstract:
The paper examines whether there was an excess of deaths and the relative role of temperature and ozone in a heatwave during 7–26 February 2004 in Brisbane, Australia, a subtropical city accustomed to warm weather. The data on daily counts of deaths from cardiovascular disease and non-external causes, meteorological conditions, and air pollution in Brisbane from 1 January 2001 to 31 October 2004 were supplied by the Australian Bureau of Statistics, Australian Bureau of Meteorology, and Queensland Environmental Protection Agency, respectively. The relationship between temperature and mortality was analysed using a Poisson time series regression model with smoothing splines to control for nonlinear effects of confounding factors. The highest temperature recorded in the 2004 heatwave was 42°C compared with the highest recorded temperature of 34°C during the same periods of 2001–2003. There was a significant relationship between exposure to heat and excess deaths in the 2004 heatwave (estimated increase in non-external deaths: 75 [95% confidence interval, CI: 11–138]; cardiovascular deaths: 41 [95% CI: −2 to 84]). There was no apparent evidence of substantial short-term mortality displacement. The excess deaths were mainly attributed to temperature, but exposure to ozone also contributed to these deaths.
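As a rough, hedged sketch of the modelling approach described - not the paper's actual model specification - a Poisson time-series regression with spline terms for slowly varying confounders could be set up as below. The data-frame column names, the spline degrees of freedom, and the use of natural cubic regression splines in place of smoothing splines are assumptions made for this example.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_mortality_model(df: pd.DataFrame):
    """df is assumed to hold daily death counts ('deaths'), daily maximum
    temperature ('temperature'), ozone level ('ozone') and a day-of-study
    index ('time'); cr() gives natural cubic regression splines that stand
    in for smoothing splines controlling for season and long-term trend."""
    model = smf.glm(
        "deaths ~ cr(time, df=7) + cr(temperature, df=4) + ozone",
        data=df,
        family=sm.families.Poisson(),
    )
    return model.fit()
```

The fitted temperature term (or an indicator for heatwave days) then gives the estimated excess risk after adjustment for seasonality and ozone.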
Abstract:
This thesis considers Max Dupain (1911-1992) and his contribution to the development of architectural photography in Australia. Through his continuous and prolific output over six decades of professional photography, Dupain greatly stimulated awareness of and interest in Australian architecture. Before Dupain began specialising in the field, little consistent professional architectural photography had been practised in Australia. He and some of his close associates subsequently developed architectural photography as both a specialised branch of photography and - perhaps more significantly - as a necessary adjunct to architectural practice. In achieving these dual accomplishments, Dupain and like-minded practitioners succeeded in elevating architectural photography to the status of a discipline in its own right. They also gave Australians generally a deeper understanding of the heritage represented by the nation's built environment. At the same time, some of the photographic images he created became firmly fixed in the public imagination as historical icons within the development of a distinctive Australian tradition in the visual arts. Within his chosen field Dupain was the dominant Australian figure of his time. He was instrumental in breaking the link with Pictorialism by bringing Modernist and Documentary perspectives to Australian architectural photography. He was an innovator in the earlier decades of his professional career; however, his photographic techniques and practice did not develop beyond that. By the end of the 1980s he had largely lost touch with the technology and techniques of contemporary practice. Dupain's reputation, which has continued growing since his death in 1992, therefore arises from reasons other than his photographic images alone. It reflects his accomplishment in raising his fellow citizens' awareness of a worthwhile home-grown artistic tradition.
Abstract:
Experts in injection molding often refer to previous solutions to find a mold design similar to the current mold, and use previous successful molding process parameters, with intuitive adjustment and modification, as a start for the new molding application. This approach saves a substantial amount of time and cost in the experiment-based corrective actions required to reach optimum molding conditions. A Case-Based Reasoning (CBR) system can perform the same task by retrieving a similar case from the case library, applying it to the new case, and using modification rules to adapt a solution to the new case. Therefore, a CBR system can simulate human expertise in injection molding process design. This research is aimed at developing an interactive Hybrid Expert System to reduce the expert dependency needed on the production floor. The Hybrid Expert System (HES) is comprised of CBR, flow analysis, post-processor and trouble-shooting systems. The HES can provide the first set of operating parameters in order to achieve moldability conditions and produce moldings free of stress cracks and warpage. In this work the C++ programming language is used to implement the expert system. The Case-Based Reasoning sub-system is constructed to derive the optimum magnitude of process parameters in the cavity. Toward this end, the Flow Analysis sub-system is employed to calculate the pressure drop and temperature difference in the feed system to determine the required magnitude of parameters at the nozzle. The Post-Processor is implemented to convert the molding parameters to machine setting parameters. The parameters designed by the HES are implemented using the injection molding machine. In the presence of any molding defect, a trouble-shooting sub-system can determine which combination of process parameters must be changed during the process to deal with possible variations. Constraints in relation to the application of this HES are as follows: flow length (L): 40 mm < L < 100 mm; flow thickness (Th): 1 mm < Th < 4 mm; flow type: unidirectional flow; material types: High Impact Polystyrene (HIPS) and Acrylic. In order to test the HES, experiments were conducted and satisfactory results were obtained.
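As a small, hedged sketch of the retrieval step in a CBR system of this kind - not the thesis's C++ implementation - the fragment below retrieves the most similar stored case by a weighted distance over normalised case features. The feature encoding, weights and case-record layout are assumptions made for this example.

```python
import numpy as np

def retrieve_similar_case(new_case, case_library, weights):
    """Minimal k=1 case retrieval: each stored case is a dict with a
    'features' vector of normalised mold/part descriptors (e.g. flow length,
    wall thickness, material index) and the process parameters that worked
    for it; the closest case by weighted Euclidean distance is returned so
    its parameters can be adapted to the new molding job."""
    features = np.array([c["features"] for c in case_library], dtype=float)
    diffs = features - np.asarray(new_case, dtype=float)
    distances = np.sqrt((np.asarray(weights) * diffs ** 2).sum(axis=1))
    best = int(np.argmin(distances))
    return case_library[best], distances[best]
```

The retrieved case's process parameters would then be adapted by the modification rules (and checked by the flow analysis and trouble-shooting sub-systems) before being converted by the post-processor into machine settings.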
Abstract:
LiteSteel Beam (LSB) is a new cold-formed steel beam produced by OneSteel Australian Tube Mills. The new beam is effectively a channel section with two rectangular hollow flanges and a slender web, and is manufactured using a combined cold-forming and electric resistance welding process. OneSteel Australian Tube Mills is promoting the use of LSBs as flexural members in a range of applications, such as floor bearers. When LSBs are used as back to back built-up sections, their moment capacity is likely to improve, extending their applications further. However, the structural behaviour of built-up beams is not well understood. Many steel design codes include guidelines for connecting two channels to form a built-up I-section, including the required longitudinal spacing of connections, but these rules were found to be inadequate in some applications. Currently the safe spans of built-up beams are determined based on twice the moment capacity of a single section. Research has shown that these guidelines are conservative. Therefore large scale lateral buckling tests and advanced numerical analyses were undertaken to investigate the flexural behaviour of back to back LSBs connected by fasteners (bolts) at various longitudinal spacings under uniform moment conditions. In this research an experimental investigation was first undertaken to study the flexural behaviour of back to back LSBs including their buckling characteristics. This experimental study included tensile coupon tests, initial geometric imperfection measurements and lateral buckling tests. The initial geometric imperfection measurements taken on several back to back LSB specimens showed that the back to back bolting process is not likely to alter the imperfections, and the measured imperfections are well below the fabrication tolerance limits. Twelve large scale lateral buckling tests were conducted to investigate the behaviour of back to back built-up LSBs with various longitudinal fastener spacings under uniform moment conditions. Tests also included two single LSB specimens. Test results showed that the back to back LSBs gave higher moment capacities in comparison with single LSBs, and the fastener spacing influenced the ultimate moment capacities. As the fastener spacing was reduced, the ultimate moment capacities of back to back LSBs increased. Finite element models of back to back LSBs with varying fastener spacings were then developed to conduct a detailed parametric study on the flexural behaviour of back to back built-up LSBs. Two finite element models were developed, namely experimental and ideal finite element models. The models included the complex contact behaviour between LSB web elements and the intermittently fastened bolted connections along the web elements. They were validated by comparing their results with experimental results and numerical results obtained from an established buckling analysis program called THIN-WALL. These comparisons showed that the developed models could accurately predict both the elastic lateral distortional buckling moments and the non-linear ultimate moment capacities of back to back LSBs. Therefore the ideal finite element models incorporating ideal simply supported boundary conditions and uniform moment conditions were used in a detailed parametric study on the flexural behaviour of back to back LSB members. In the detailed parametric study, both elastic buckling and nonlinear analyses of back to back LSBs were conducted for 13 LSB sections with varying spans and fastener spacings.
Finite element analysis results confirmed that the current design rules in AS/NZS 4600 (SA, 2005) are very conservative, while the new design rules developed by Anapayan and Mahendran (2009a) for single LSB members were also found to be conservative. Thus new member capacity design rules were developed for back to back LSB members as a function of non-dimensional member slenderness. New empirical equations were also developed to aid in the calculation of elastic lateral distortional buckling moments of intermittently fastened back to back LSBs. Design guidelines were developed for the maximum fastener spacing of back to back LSBs in order to optimise the use of fasteners. A closer fastener spacing of span/6 was recommended for intermediate spans and some long spans where the influence of fastener spacing was found to be high. In the last phase of this research, a detailed investigation was conducted into the potential use of different types of connections and stiffeners in improving the flexural strength of back to back LSB members. It was found that using transverse web stiffeners was the most cost-effective and simple strengthening method. It is recommended that web stiffeners are used at the supports and at every third point within the span, and that their thickness is in the range of 3 to 5 mm depending on the size of the LSB section. The use of web stiffeners eliminated most of the lateral distortional buckling effects and hence improved the ultimate moment capacities. A suitable design equation was developed to calculate the elastic lateral buckling moments of back to back LSBs with the above recommended web stiffener configuration, while the same design rules developed for unstiffened back to back LSBs were recommended to calculate the ultimate moment capacities.
Abstract:
The Queensland Coal Industry Employees Health Scheme was implemented in 1993 to provide health surveillance for all Queensland coal industry workers. The government, mining employers and mining unions agreed that the scheme should operate for seven years. At the expiry of the scheme, an assessment of the contribution of health surveillance to meeting coal industry needs would be an essential part of determining a future health surveillance program. This research project has analysed the data made available between 1993 and 1998. All current coal industry employees have had at least one health assessment. The project examined how the centralised nature of the Health Scheme benefits industry by identifying key health issues and exploring their dimensions on a scale not possible by corporate-based health surveillance programs. There is a body of evidence indicating that health awareness - on the scale of the individual, the work group and the industry - is not a part of the mining industry culture. There is also growing evidence that there is a need for this culture to change and that some change is in progress. One element of this changing culture is a growth in the interest by the individual and the community in information on health status and benchmarks that are reasonably attainable. This interest opens the way for health education which contains personal, community and occupational elements. An important element of such education is the data on mine site health status. This project examined the role of health surveillance in the coal mining industry as a tool for generating the necessary information to promote an interest in health awareness. The Health Scheme Database provides the material for the bulk of the analysis in this project. After a preliminary scan of the data set, more detailed analysis was undertaken on key health and related safety issues, including respiratory disorders, hearing loss and high blood pressure. The data set facilitates control for confounding factors such as age and smoking status. Mines can be benchmarked to identify those mines with effective health management and those with particular challenges. While the study has confirmed the very low prevalence of restrictive airway disease such as pneumoconiosis, it has demonstrated a need to examine in detail the emergence of obstructive airway disease such as bronchitis and emphysema, which may be a consequence of the increasing use of high-dust longwall technology. The power of the Health Database's electronic data management is demonstrated by linking the health data to other data sets, such as injury data collected by the Department of Mines and Energy. The analysis examines serious strain-sprain injuries and has identified a marked difference between the underground and open cut sectors of the industry. The analysis also considers productivity and OHS data to examine the extent to which there is correlation between any pairs of these and the previously analysed health parameters. This project has demonstrated that the current structure of the Coal Industry Employees Health Scheme has largely delivered to mines an effective health screening process. At the same time, the centralised nature of data collection and analysis has provided to the mines, the unions and the government substantial statistical cross-sectional data upon which strategies to more effectively manage health and related safety issues can be based.
Abstract:
Our aim in this article is twofold. First, we challenge the essentialized notion of adolescents and young people as perpetually driven to resist the authority of adults. At the same time, we disrupt linguistic conceptions of adolescent discourse, along with the discourse of youth at risk, by analyzing a transcript of classroom discourse that reflects an exchange between a highly regarded and well-liked preservice teacher and his students. This representative transcript highlights the preservice teacher's ability to query, without a concomitant ability to listen, respond, and build a classroom dialogue with his students; what we call here a Socratic monologue. Second, we link the notions of dialogue and responsiveness to Bakhtin's concept of answerability, emphasizing the joint construction of classroom discourse as an ethically answerable relation between teacher and students.