926 results for size-fecundity variation


Relevance: 20.00%

Abstract:

The New Zealand creative sector was responsible for almost 121,000 jobs at the time of the 2006 Census (6.3% of total employment). These are divided between:

• 35,751 creative specialists – persons employed doing creative work in creative industries
• 42,300 support workers – persons providing management and support services in creative industries
• 42,792 embedded creative workers – persons engaged in creative work in other types of enterprise

The most striking feature of this breakdown is that the largest group of creative workers is employed outside the creative industries, i.e. in other types of businesses. Even within the creative industries, fewer people are directly engaged in creative work than in providing management and support. Creative sector employees earned incomes of approximately $52,000 per annum at the time of the 2006 Census. This is relatively uniform across all three types of creative worker, and is significantly above the average for all employed persons (approximately $40,700). Creative employment and incomes grew strongly over both five-year periods between the 1996, 2001 and 2006 Censuses. However, when we compare creative and general trends, we see two distinct phases in the development of the creative sector:

• rapid structural growth over the five years to 2001 (especially led by developments in ICT), with creative employment and incomes increasing rapidly at a time when they were growing modestly across the whole economy;
• subsequent consolidation, with growth driven more by national economic expansion than by structural change, and creative employment and incomes moving in parallel with strong economy-wide growth.

Other important trends revealed by the data are that:

• the strongest growth during the decade was in embedded creative workers, especially over the first five years; the weakest growth was in creative specialists, with support workers in creative industries in the middle rank;
• by far the strongest growth in creative industries' employment was in Software & Digital Content, which trebled in size over the decade.

Comparing New Zealand with the United Kingdom and Australia, the two southern hemisphere nations have significantly lower proportions of total employment in the creative sector (both in creative industries and in embedded employment). New Zealand's and Australia's creative shares in 2001 were similar (5.4% each), but over the following five years New Zealand's share expanded (to 5.7%) whereas Australia's fell slightly (to 5.2%) – in both cases through changes in creative industries' employment.

The creative industries generated $10.5 billion in total gross output in the March 2006 year, resulting in value added totalling $5.1b, or 3.3% of New Zealand's total GDP. Overall, value added in the creative industries represents 49% of industry gross output, higher than the 45% average across the whole economy. This reflects the relatively high labour intensity and high earnings of the creative industries: industries with an above-average ratio of value added to gross output are usually labour-intensive, especially when wages and salaries are above average. However, there is significant variation in this ratio between different parts of the creative industries, with some parts (e.g. Software & Digital Content and Architecture, Design & Visual Arts, at 60.4% and 55.2% respectively) generating even higher value added relative to output, and others (e.g. TV & Radio, Publishing and Music & Performing Arts) less, because of high capital intensity and import content.

When we take into account the impact of the creative industries' demand for goods and services from their suppliers, and consumption spending from incomes earned, we estimate an addition to economic activity of:

• $30.9 billion in gross output ($41.4b in total);
• $15.1b in value added ($20.3b in total);
• 158,100 people employed (234,600 in total).

The total economic impact of the creative industries is approximately four times their direct output and value added, and three times their direct employment. Their effect on output and value added is roughly in line with the average over all industries, although the effect on employment is significantly lower. This is because of the relatively high labour intensity (and high earnings) of the creative industries, which generate below-average demand from suppliers but normal levels of demand through expenditure from incomes.

Drawing on these numbers and conclusions, we suggest some (slightly speculative) directions for future research. The goal is to better understand the contribution the creative sector makes to productivity growth, in particular the distinctive contributions from creative firms and embedded creative workers. The ideas for future research can be organised into several categories:

• Understand the categories of the creative sector – who is doing the business? In other words, examine via more fine-grained research (perhaps at a firm level) just what the creative contribution is from the different parts of the creative sector industries. It may be possible to categorise these in terms of more or less striking innovations.
• Investigate the relationship between the characteristics and the performance of the various creative industries/sectors.
• Look more closely at innovation at an industry level, e.g. using an index of relative growth of exports, and see whether this can be related to intensity of use of creative inputs.
• Undertake case studies of the creative sector.
• Undertake case studies of the embedded contribution to growth in the firms and industries that employ creative workers, by examining several high-performing non-creative industries (in the same way as proposed for the creative sector).
• Look at the aggregates – drawing on the broad picture of the numbers of creative workers embedded within the different industries, consider the extent to which these might explain aspects of the industries' varied performance in terms of exports, growth and so on.
• This might be extended to examine issues like the type of creative workers that are most effective when embedded, or to test the hypothesis that each industry has its own particular requirements for embedded creative workers that overwhelm any generic contributions from, say, design or IT.
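The multiplier claims above ("approximately four times their direct output and value added, and three times their direct employment") can be checked directly from the figures quoted in the abstract. A minimal Python sketch, taking direct employment as total minus flow-on since it is not stated explicitly:

```python
# Figures from the abstract (NZ$ billions, March 2006 year; employment in persons).
direct_output, total_output = 10.5, 41.4
direct_va, total_va = 5.1, 20.3
flow_on_emp, total_emp = 158_100, 234_600
direct_emp = total_emp - flow_on_emp          # 76,500, inferred

# Total impact relative to direct impact.
output_multiplier = total_output / direct_output   # ~3.9, i.e. "four times"
va_multiplier = total_va / direct_va               # ~4.0
emp_multiplier = total_emp / direct_emp            # ~3.1, i.e. "three times"

print(round(output_multiplier, 2), round(va_multiplier, 2), round(emp_multiplier, 2))
```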

Relevance: 20.00%

Abstract:

Background: Mechanical forces, due either to accommodation or to myopia, may stretch the retina and/or cause shear between the retina and choroid. This can be investigated using the Stiles-Crawford effect (SCE), the phenomenon of light changing in apparent brightness as it enters through different positions in the pupil. The SCE can be measured by psychophysical and objective techniques, its parameters being directionality (the rate of change of sensitivity across the pupil) and orientation (the location of peak sensitivity in the pupil).

Aims: 1. To study the changes in the foveal SCE with accommodation in emmetropes and myopes using a subjective (psychophysical) technique. 2. To develop and evaluate a quick objective technique of measuring the SCE using the multifocal electroretinogram.

Methods: The SCE was measured in 6 young emmetropes and 6 young myopes for up to an 8 D accommodation stimulus with a psychophysical technique and its variants. An objective technique using the multifocal electroretinogram was developed and evaluated with 5 emmetropes.

Results: Using the psychophysical technique, SCE directionality increased by similar amounts in both emmetropes and myopes as accommodation increased, with an increase of 15-20% at 6 D of accommodation. However, there were no significant orientation changes. Additional measurements showed that most of the change in directionality was probably an artefact of optical factors such as higher-order aberrations and accommodative lag, rather than a true effect of accommodation. The multifocal technique demonstrated the presence of the SCE, but results were noisy and too variable to detect any changes in SCE directionality or orientation with accommodation.

Conclusion: There is little true change in the SCE with accommodation responses up to 6 D in either emmetropes or myopes, although it is possible that substantial changes might occur at very high accommodation levels. The objective technique using the multifocal electroretinogram was quicker and less demanding for the subjects than the psychophysical technique but, as implemented in this thesis, is not a reliable method of measuring the SCE.
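As a hedged illustration (not the thesis's own analysis code): the SCE is conventionally summarised by a log-parabolic sensitivity function, eta(x) = 10^(-rho·(x - x0)^2), where rho is the directionality and x0 the pupil position of peak sensitivity. Fitting a quadratic to log sensitivity recovers both parameters; the values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 13)              # pupil entry positions (mm), illustrative
rho_true, x0_true = 0.05, 0.4           # illustrative directionality and peak location
eta = 10 ** (-rho_true * (x - x0_true) ** 2)
eta_noisy = eta * (1 + 0.01 * rng.standard_normal(x.size))   # 1% measurement noise

# log10(eta) = -rho*x^2 + 2*rho*x0*x - rho*x0^2, so a quadratic fit gives
# rho = -a and x0 = b / (-2a) from coefficients [a, b, c].
a, b, c = np.polyfit(x, np.log10(eta_noisy), 2)
rho_fit, x0_fit = -a, b / (-2 * a)
print(round(rho_fit, 3), round(x0_fit, 2))
```

A 15-20% directionality change, as reported at 6 D of accommodation, would correspond to rho rising from 0.05 to roughly 0.058-0.060 in this parameterisation.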

Relevance: 20.00%

Abstract:

Objectives: Ecological studies support the hypothesis that there is an association between vitamin D and pancreatic cancer (PaCa) mortality, but observational studies are somewhat conflicting. We sought to contribute further data to this issue by analyzing the differences in PaCa mortality across the eastern states of Australia and investigating whether there is a role for vitamin D-effective ultraviolet radiation (DUVR), which is related to latitude.

Methods: Mortality data from 1968 to 2005 were sourced from the Australian General Record of Incidence and Mortality books. Negative binomial models were fitted to estimate the association between state and PaCa mortality. Clear-sky monthly DUVR in each capital city was also modeled.

Results: Mortality from PaCa was 10% higher in the southern states than in Queensland, with Victoria recording the highest mortality risk (relative risk, 1.13; 95% confidence interval, 1.09-1.17). We found a highly significant association between DUVR and PaCa mortality, with an estimated 1.5% decrease in risk per 10 kJ/m² increase in yearly DUVR.

Conclusions: These data show an association between latitude, DUVR, and PaCa mortality. Although this study cannot be used to infer causality, it supports the need for further investigation of a possible role of vitamin D in PaCa etiology.
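A short sketch of how the headline figure translates between exposure scales. The study's actual model was a negative binomial regression (not reproduced here); with a log link, a coefficient beta implies a rate ratio exp(beta·delta) for an exposure change of delta. Only the 1.5%-per-10-kJ/m² figure comes from the abstract; the 50 kJ/m² difference is an illustrative assumption:

```python
import math

rr_per_10 = 0.985                    # "1.5% decrease per 10 kJ/m2" from the abstract
beta = math.log(rr_per_10) / 10.0    # implied coefficient per 1 kJ/m2 on the log-rate scale

# Implied rate ratio for a hypothetical 50 kJ/m2 difference in yearly DUVR
# between two cities (illustrative delta, not a figure from the paper):
rr_50 = math.exp(beta * 50)
print(round(rr_50, 4))               # about 0.9272, i.e. ~7.3% lower risk
```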

Relevance: 20.00%

Abstract:

There are increasing indications that the contribution of holding costs, and their impact on housing affordability, is very significant. Their importance and perceived high-level impact can be gauged from the unprecedented level of attention policy makers have given them recently. This is evidenced by the embedding of specific strategies to address burgeoning holding costs (particularly the cost savings associated with streamlining regulatory assessment) within statutory instruments such as the Queensland Housing Affordability Strategy and the South East Queensland Regional Plan. However, several key issues require further investigation. Firstly, the computation and methodology behind the calculation of holding costs varies widely; indeed, in some instances holding costs are ignored completely. Secondly, some ambiguity exists regarding the inclusion of the various elements of holding costs and the assessment of their relative contribution. This may in part be explained by their nature: such costs are not always immediately apparent. They are not as visible as the more tangible cost items associated with greenfield development, such as regulatory fees, government taxes, acquisition costs, selling fees and commissions. Holding costs are also more difficult to evaluate since, for the most part, they must ultimately be assessed over time in an ever-changing environment, based on their strong relationship with opportunity cost, which is in turn dependent, inter alia, upon prevailing inflation and/or interest rates. This paper provides a more detailed investigation of the elements related to holding costs and, in so doing, determines the size of their impact specifically on the end user. It extends research in this area by clarifying the extent to which holding costs affect housing affordability. The geographical diversity indicated by the considerable variation between planning instruments and the length of regulatory assessment periods suggests that further research should adopt a case study approach in order to test the relevance of the theoretical modelling conducted.
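The opportunity-cost view of holding costs described above can be sketched with a simple compounding calculation. All figures below (parcel value, interest rate, assessment periods) are illustrative assumptions, not data from the paper:

```python
def holding_cost(land_value: float, annual_rate: float, months_held: float) -> float:
    """Opportunity cost of capital tied up in land for `months_held` months
    at `annual_rate` per annum (compound)."""
    return land_value * ((1 + annual_rate) ** (months_held / 12) - 1)

# A 6-month versus an 18-month regulatory assessment period on an assumed
# $300,000 parcel, with opportunity cost at an assumed 7% per annum.
cost_short = holding_cost(300_000, 0.07, 6)
cost_long = holding_cost(300_000, 0.07, 18)
print(round(cost_short), round(cost_long))   # the longer delay roughly triples the cost
```

The non-linearity of compounding is the reason holding costs "must ultimately be assessed over time": the cost of a delay grows faster than linearly with its length.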

Relevance: 20.00%

Abstract:

This research quantitatively examines the determinants of board size and its consequences for the performance of large companies in Australia. In line with international research, and the prevalent United States research in particular, the results suggest that there is no significant relationship between board size and subsequent performance. In examining whether more complex operations require larger boards, it was found that larger firms, or firms with more lines of business, tended to have more directors. Data analysis from the research supports the proposition that blockholders can affect management practices and that they enhance performance as measured by shareholder return.

Relevance: 20.00%

Abstract:

European American (EA) women report greater body dissatisfaction and less dietary control than do African American (AA) women. This study investigated whether ethnic differences in dieting history contributed to differences in body dissatisfaction and dietary control, or to differential changes that may occur during weight loss and regain. Eighty-nine EA and AA women underwent dual-energy X-ray absorptiometry to measure body composition and completed questionnaires to assess body dissatisfaction and dietary control before, after, and one year following, a controlled weight-loss intervention. While EA women reported a more extensive dieting history than AA women, this difference did not contribute to ethnic differences in body dissatisfaction and perceived dietary control. During weight loss, body satisfaction improved more for AA women, and during weight regain, dietary self-efficacy worsened to a greater degree for EA women. Ethnic differences in dieting history did not contribute significantly to these differential changes. Although ethnic differences in body image and dietary control are evident prior to weight loss, and some change differentially by ethnic group during weight loss and regain, differences in dieting history do not contribute significantly to ethnic differences in body image and dietary control.

Relevance: 20.00%

Abstract:

Automatic recognition of people is an active field of research with important forensic and security applications. In these applications it is not always possible for the subject to be in close proximity to the system. Voice represents a human behavioural trait which can be used to recognise people in such situations. Automatic Speaker Verification (ASV) is the process of verifying a person's identity through the analysis of their speech, and enables recognition of a subject at a distance over a telephone channel – wired or wireless.

A significant amount of research has focussed on the application of Gaussian mixture model (GMM) techniques to speaker verification systems, providing state-of-the-art performance. GMMs are a type of generative classifier trained to model the probability distribution of the features used to represent a speaker. Recently introduced to the field of ASV research is the support vector machine (SVM). An SVM is a discriminative classifier requiring examples from both positive and negative classes to train a speaker model. The SVM is based on margin maximisation, whereby a hyperplane attempts to separate the classes in a high-dimensional space. SVMs applied to the task of speaker verification have shown high potential, particularly when used to complement current GMM-based techniques in hybrid systems. This work aims to improve the performance of ASV systems using novel and innovative SVM-based techniques. Research was divided into three main themes: session variability compensation for SVMs; unsupervised model adaptation; and impostor dataset selection.

The first theme investigated the differences between the GMM and SVM domains for the modelling of session variability – an aspect crucial for robust speaker verification. Techniques developed to improve the robustness of GMM-based classification were shown to bring about similar benefits to discriminative SVM classification through their integration in the hybrid GMM mean supervector SVM classifier. Further, the domains for the modelling of session variation were contrasted to find a number of common factors; however, the SVM domain consistently provided marginally better session variation compensation. Minimal complementary information was found between the techniques, due to the similarities in how they achieve their objectives.

The second theme saw the proposal of a novel model for session variation compensation in ASV systems. Continuous progressive model adaptation attempts to improve speaker models by retraining them after exploiting all encountered test utterances during normal use of the system. The introduction of the weight-based factor analysis model provided significant performance improvements of over 60% in an unsupervised scenario. SVM-based classification was then integrated into the progressive system, providing further performance benefits over the GMM counterpart. Analysis demonstrated that SVMs also hold several characteristics beneficial to the task of unsupervised model adaptation, prompting further research in the area.

In pursuing the final theme, an innovative background dataset selection technique was developed. This technique selects the most appropriate subset of examples from a large and diverse set of candidate impostor observations for use as the SVM background, by exploiting the SVM training process. This selection was performed on a per-observation basis so as to overcome the shortcomings of the traditional heuristic-based approach to dataset selection. Results demonstrate that the approach provides performance improvements over both the complete candidate dataset and the best heuristically selected dataset, whilst being only a fraction of the size. The refined dataset was also shown to generalise well to unseen corpora and to be highly applicable to the selection of the impostor cohorts required in alternative speaker verification techniques.
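The GMM mean supervector referred to above is built by relevance-MAP adapting the means of a universal background model (UBM) towards one utterance's statistics and stacking them into a single fixed-length vector, which an SVM can then classify. A hedged numpy-only sketch; the dimensions, relevance factor, and random stand-ins for the UBM and posteriors are all illustrative assumptions, not the thesis's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_mix, n_dim, relevance = 8, 12, 16.0             # mixtures, feature dim, MAP relevance factor

ubm_means = rng.standard_normal((n_mix, n_dim))   # stand-in for a trained UBM

def mean_supervector(frames: np.ndarray, resp: np.ndarray) -> np.ndarray:
    """MAP-adapt the UBM means towards the utterance statistics and stack them."""
    n_k = resp.sum(axis=0)                          # soft frame counts per mixture
    f_k = resp.T @ frames                           # first-order statistics per mixture
    alpha = (n_k / (n_k + relevance))[:, None]      # data-vs-prior adaptation weight
    adapted = alpha * (f_k / np.maximum(n_k, 1e-8)[:, None]) + (1 - alpha) * ubm_means
    return adapted.reshape(-1)                      # (n_mix * n_dim,) supervector

frames = rng.standard_normal((200, n_dim))          # one utterance's feature frames
resp = rng.dirichlet(np.ones(n_mix), size=200)      # stand-in mixture posteriors
sv = mean_supervector(frames, resp)
print(sv.shape)
```

Each utterance maps to one point in this supervector space, which is what makes the discriminative, margin-based SVM training described above applicable.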

Relevance: 20.00%

Abstract:

We investigate whether the two zero-cost portfolios, SMB and HML, have the ability to predict economic growth for the markets investigated in this paper. Our findings show that the coefficients are positive in only a limited number of cases, and significance is achieved in an even more limited number of cases. Our results are in stark contrast to Liew and Vassalou (2000), who find coefficients to be generally positive and of a similar magnitude. We go a step further and also employ the methodology of Lakonishok, Shleifer and Vishny (1994), and once again fail to support the risk-based hypothesis of Liew and Vassalou (2000). In sum, we argue that the search for a robust economic explanation for the firm size and book-to-market equity effects needs sustained effort, as these two zero-cost portfolios do not represent economically relevant risk.
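A hedged sketch of the mechanics of a Liew and Vassalou (2000) style predictive regression of the kind tested above: future economic growth regressed on lagged SMB and HML returns. The data here are synthetic (with true slope coefficients of zero, mirroring the paper's finding of no robust relation); only the estimation mechanics are illustrated:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 120                                       # illustrative number of periods
smb = rng.standard_normal(T) * 0.03           # synthetic SMB factor returns
hml = rng.standard_normal(T) * 0.03           # synthetic HML factor returns
gdp_growth = 0.005 + rng.standard_normal(T) * 0.01   # growth unrelated to the factors

# OLS of growth on a constant and the two lagged zero-cost portfolio returns.
X = np.column_stack([np.ones(T), smb, hml])
beta, *_ = np.linalg.lstsq(X, gdp_growth, rcond=None)
resid = gdp_growth - X @ beta
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * resid.var(ddof=3))
t_stats = beta / se
print(np.round(t_stats, 2))                   # slope t-stats: typically insignificant here
```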

Relevance: 20.00%

Abstract:

Lifecycle funds offered by retirement plan providers allocate aggressively to risky asset classes when the employee participants are young, gradually switching to more conservative asset classes as they grow older and approach retirement. This approach focuses on maximizing growth of the accumulation fund in the initial years and preserving its value in the later years. The authors simulate terminal wealth outcomes based on conventional lifecycle asset allocation rules as well as on contrarian strategies that reverse the direction of asset switching. The evidence suggests that the growth in portfolio size over time significantly impacts the asset allocation decision. Due to this portfolio size effect, the terminal value of accumulation in retirement accounts is influenced more by the asset allocation strategy adopted in later years than by that adopted in early years. By mechanistically switching to conservative assets in the later years of a plan, lifecycle strategies sacrifice significant growth opportunity and prove counterproductive to the participant's wealth accumulation objective. The authors conclude that this sacrifice does not seem to be adequately compensated in terms of reducing the risk of potentially adverse outcomes.
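The portfolio size effect described above can be demonstrated with a small Monte Carlo sketch: because the accumulation balance is largest near retirement, the returns earned in the final years dominate terminal wealth. The return distributions, glide path, and contribution pattern below are illustrative assumptions, not the authors' calibration:

```python
import numpy as np

rng = np.random.default_rng(3)

def terminal_wealth(equity_path, contribution=1.0, years=40, sims=10_000):
    """Simulate terminal wealth under a per-year equity-weight schedule."""
    wealth = np.zeros(sims)
    for year in range(years):
        w = equity_path[year]
        # Illustrative assumptions: equities ~ N(8%, 17%), bonds ~ N(4%, 5%)
        ret = w * rng.normal(0.08, 0.17, sims) + (1 - w) * rng.normal(0.04, 0.05, sims)
        wealth = (wealth + contribution) * (1 + ret)
    return wealth

lifecycle = np.linspace(1.0, 0.2, 40)   # aggressive early, conservative late
contrarian = lifecycle[::-1]            # reverse the direction of switching

lc, ct = terminal_wealth(lifecycle), terminal_wealth(contrarian)
print(round(float(ct.mean() / lc.mean()), 2))   # contrarian mean is typically higher
```

Because both schedules have the same average equity weight, the gap between them isolates the timing effect: holding growth assets when the balance is large matters more than holding them when it is small.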

Relevance: 20.00%

Abstract:

RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and the environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is only weakly dependent on either parameter in isolation, but strongly dependent on their joint values.

Relevance: 20.00%

Abstract:

Three particular geometrical shapes – parallelepiped, cylindrical and spherical – were selected, from potatoes (aspect ratio = 1:1, 2:1, 3:1), cut beans (length:diameter = 1:1, 2:1, 3:1) and peas respectively. The density variation of the food particulates was studied in a batch fluidised bed dryer connected to a heat pump dehumidifier system. Apparent density and bulk density were evaluated against non-dimensional moisture at three drying temperatures of 30, 40 and 50 °C. The relative humidity of the hot air was kept at 15% at all drying temperatures. Several empirical relationships were developed for determining the changes in densities with moisture content, and simple mathematical models were obtained relating apparent density and bulk density to moisture content.
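A hedged sketch of fitting the kind of simple empirical density-moisture relationship the abstract describes. The data points below are illustrative stand-ins, not the paper's measurements, and a linear model is only one plausible candidate form:

```python
import numpy as np

moisture_ratio = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1])       # non-dimensional X/X0
apparent_density = np.array([1050, 1010, 965, 930, 880, 860])   # kg/m3, illustrative

# Simplest candidate model: rho = a * (X/X0) + b, fitted by least squares.
a, b = np.polyfit(moisture_ratio, apparent_density, 1)
rho_hat = a * moisture_ratio + b
r2 = 1 - np.sum((apparent_density - rho_hat) ** 2) / \
        np.sum((apparent_density - apparent_density.mean()) ** 2)
print(round(a, 1), round(b, 1), round(r2, 3))   # slope, intercept, goodness of fit
```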

Relevance: 20.00%

Abstract:

Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane, as it is located in an area of high environmental value. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement.

The Port currently has a network of stormwater sample collection points where event-based samples, together with grab samples, are tested for a range of water quality parameters. Whilst this information provides a 'snapshot' of the pollutants being washed from the catchment(s), it does not allow a quantifiable assessment of the total contaminant loads being discharged to the waters of Moreton Bay. Nor does it represent pollutant build-up and wash-off from the different land uses across the broader range of rainfall events which might be expected. As such, it is difficult to relate stormwater quality to different pollutant sources within the Port environment. Consequently, source tracking of pollutants to receiving waters would be extremely difficult, and in turn so would the implementation of appropriate mitigation measures. Also, without this detailed understanding, the efficacy of the various stormwater quality mitigation measures implemented cannot be determined with certainty.

Current knowledge on port stormwater runoff quality: Currently, little knowledge exists with regard to the pollutant generation capacity specific to port land uses, as these do not necessarily compare well with conventional urban industrial or commercial land uses, due to the specific nature of port activities such as inter-modal operations and cargo management. Furthermore, traffic characteristics in a port area differ from those of a conventional urban area. Consequently, the use of data inputs based on industrial and commercial land uses for modelling purposes is questionable. A comprehensive review of published research failed to locate any investigations undertaken into pollutant build-up and wash-off for port-specific land uses. Furthermore, very limited information is made available by ports worldwide about the pollution generation potential of their facilities. Published work in this area has essentially focussed on the water quality or environmental values of the receiving waters, such as the downstream bay or estuary.

The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is the undertaking of 'cutting edge' research to strengthen the environmental custodianship of the Port area. This project aims to develop a port-specific stormwater quality model to allow informed decision making in relation to stormwater quality improvement in the context of the increased growth of the Port. Stage 1 of the research project focussed on the assessment of pollutant build-up and wash-off, using rainfall simulation, at the current Port of Brisbane facilities, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. Investigation of complex processes such as pollutant wash-off using naturally occurring rainfall events has inherent difficulties; these can be overcome by using simulated rainfall. The deliverables for Stage 1 included:

* Pollutant build-up and wash-off profiles for six primary land uses within the Port of Brisbane, to be used for water quality model development.
* Recommendations with regard to future stormwater quality monitoring and pollution mitigation measures.

The outcomes are expected to deliver the following benefits to the Port of Brisbane:

* The availability of Port-specific pollutant build-up and wash-off data will enable the implementation of customised stormwater pollution mitigation strategies.
* The water quality data collected will form the baseline for a Port-specific water quality model for mitigation and predictive purposes.
* The Port will be at the cutting edge of water quality management and environmental best practice in the context of port infrastructure.

Conclusions: The important conclusions from the study are:

* The study confirmed that the Port environment is unique in terms of pollutant characteristics and is not comparable to typical urban land uses.
* For most pollutant types, the Port land uses exhibited lower pollutant concentrations than typical urban land uses.
* The pollutant characteristics varied across the different land uses and were not consistent in terms of land use. Hence, the implementation of stereotypical structural water quality improvement devices could be of limited value.
* The <150 μm particle size range was predominant in suspended solids for pollutant build-up as well as wash-off. Therefore, if suspended solids are targeted as the surrogate parameter for water quality improvement, this specific particle size range needs to be removed.

Recommendations: Based on the study results, the following preliminary recommendations are made:

* Due to the appreciable variation in pollutant characteristics between port land uses, water quality monitoring stations should preferably be located such that source areas can be easily identified.
* The identification of the significant pollutants for the different land uses should enable the development of a more customised water quality monitoring and testing regime targeting the critical pollutants.
* A 'one size fits all' approach may not be appropriate for the different port land uses due to their varying pollutant characteristics. As such, pollution mitigation will need to be tailored to suit each specific land use.
* To be effective, any structural measures implemented for pollution mitigation should have the capability to remove suspended solids smaller than 150 μm.
* Based on the results presented, and particularly the fact that Port land uses cannot be compared to conventional urban land uses in relation to pollutant generation, consideration should be given to the development of a port-specific water quality model.
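The build-up and wash-off processes this study profiled are conventionally described by exponential forms of the kind implemented in models such as SWMM. A hedged sketch of those standard forms; all coefficients below are illustrative assumptions, not values measured in this study:

```python
import numpy as np

def buildup(days: np.ndarray, b_max: float = 10.0, k: float = 0.4) -> np.ndarray:
    """Pollutant build-up (e.g. g/m2) approaching an asymptote b_max over
    antecedent dry days, at rate constant k (1/day)."""
    return b_max * (1 - np.exp(-k * days))

def washoff(initial_load: float, intensity: float, duration: float,
            c: float = 0.05) -> float:
    """Load removed by a storm of given intensity (mm/h) and duration (h),
    using the standard exponential wash-off form with coefficient c."""
    return initial_load * (1 - np.exp(-c * intensity * duration))

dry_days = np.array([1, 3, 7, 14])
loads = buildup(dry_days)                                    # load grows with dry period
removed = washoff(loads[-1], intensity=20.0, duration=0.5)   # a short, intense burst
print(np.round(loads, 2), round(removed, 2))
```

In practice, the study's rainfall-simulation measurements would replace the assumed `b_max`, `k` and `c` with land-use-specific values, which is what makes port-specific parameters preferable to generic urban ones.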

Relevance: 20.00%

Abstract:

Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port specific stormwater model for the Fisherman Islands facility. The need has to be considered in the context of the proposed future developments of the Port area. ----------------- The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research to assist the Port in strengthening the environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. ------------------ The project was separated into two stages. The first stage developed a quantitative understanding of the generation potential of pollutant loads in the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet to be developed port expansion area. This is in order to predict pollutant loads associated with stormwater flows from this area with the longer term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. ----------------- Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. 
This uniqueness in land use results in distinctive stormwater quality characteristics different to other conventional urban land uses. Therefore, it was not scientifically valid to consider the Port as belonging to a single land use category or to consider as being similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies where modelling parameters are developed using calibration. The field investigations undertaken in Stage 1 of the overall study helped to create fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off can be replicated. This meant that no calibration processes were involved due to the use of measured parameters for build-up and wash-off. ---------------- Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and catchment variability considered was adequate to accommodate the temporal and spatial variability of input parameters and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in research literature. However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will reduce the runoff volume generated as well as the frequency of runoff events significantly. Apart from initial losses, most of the other parameters used in SWMM modelling are generic to most modelling studies. 
Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the given investigation period does not fit a normal distribution. This is possibly because only one specific location, the Port of Brisbane, was considered, unlike the MUSIC model, for which a range of areas with different geographic and climatic conditions was investigated. Consequently, the assumptions used in MUSIC are not fully applicable to the analysis of water quality in Port land uses. Therefore, when using the parameters included in this report for MUSIC modelling, it is important to note that they may result in under- or over-estimation of annual pollutant loads. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6. ------------------ Recommendations: The following recommendations are provided to further strengthen the cutting edge nature of the work undertaken: * It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios. * In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses.
* The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 μm particle size range was predominant in the suspended solids washed off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 μm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
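The mismatch noted above between MUSIC's normal-distribution assumption and the observed EMC variation can be illustrated with a simple skewness comparison: EMC data are often strongly right-skewed on the raw scale but closer to symmetric after a log transform. The EMC values below are hypothetical, not data from this study.

```python
import math
import statistics

def skewness(xs):
    """Sample skewness; values near zero are consistent with normality,
    strongly positive values indicate a right-skewed distribution."""
    n = len(xs)
    mean = statistics.fmean(xs)
    sd = statistics.stdev(xs)
    return sum(((x - mean) / sd) ** 3 for x in xs) * n / ((n - 1) * (n - 2))

# Hypothetical TSS EMCs (mg/L) -- right-skewed, as EMC data often are
emcs = [12, 15, 18, 22, 25, 30, 41, 55, 80, 140]
raw_skew = skewness(emcs)
log_skew = skewness([math.log(x) for x in emcs])
# A large positive raw skew combined with a much smaller log-scale skew
# suggests a lognormal rather than normal distribution of EMCs.
```

A check of this kind on the measured Port EMCs would indicate how far the normal-distribution inputs to MUSIC source nodes depart from the observed data.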

Abstract:

Obese children move less, and with greater difficulty, than their normal-weight counterparts, yet expend comparable energy. Increased metabolic costs have been attributed to poor biomechanics, but few studies have investigated the influence of obesity on the mechanical demands of gait. This study assessed three-dimensional lower extremity joint powers at two walking cadences in 28 obese and normal-weight children. 3D motion analysis was conducted for five trials of barefoot walking at self-selected and 30% greater than self-selected cadences. Mechanical power was calculated at the hip, knee and ankle in the sagittal, frontal and transverse planes. Significant group differences were seen for all power phases in the sagittal plane; for hip and knee power at weight acceptance and hip power at propulsion in the frontal plane; and for knee power during mid-stance in the transverse plane. After adjusting for body weight, group differences remained in hip and knee power phases at weight acceptance in the sagittal and frontal planes, respectively. Differences between cadences existed for all hip joint powers in the sagittal plane and for frontal plane hip power at propulsion. Frontal plane knee power at weight acceptance and sagittal plane knee power at propulsion were also significantly different between cadences. Larger joint powers in obese children contribute to their difficulty performing locomotor tasks, potentially decreasing motivation to exercise.
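The joint powers reported here come from standard inverse dynamics, in which instantaneous joint power is the product of the net joint moment and the joint angular velocity, with positive values indicating power generation and negative values absorption. A minimal sketch of that relation, with illustrative values and a body-mass normalisation of the kind implied by the "adjusting for body weight" step:

```python
def joint_power(moment_nm, angular_velocity_rad_s):
    """Instantaneous joint power (W): net joint moment (N*m) times joint
    angular velocity (rad/s). Positive = generation, negative = absorption."""
    return moment_nm * angular_velocity_rad_s

def normalise_to_body_mass(power_w, mass_kg):
    """Express joint power in W/kg so obese and normal-weight groups can be
    compared independently of body mass."""
    return power_w / mass_kg

# Illustrative numbers only (not the study's data):
p = joint_power(50.0, 2.0)            # 100 W generated
p_rel = normalise_to_body_mass(p, 50.0)  # 2.0 W/kg
```

Comparing powers both in absolute terms and per kilogram is what separates "obese children produce larger powers" from "obese children produce larger powers even relative to their mass".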

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001, Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m·s⁻¹). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001, Δ GPS position/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49m, while 86.5% of static points were within 1.5m of the actual geodetic point (mean error: 1.08 ± 0.34m, range 0.69-2.10m).
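The "change in GPS position over time" speed method that study one compared against the receiver's Doppler-shift speed can be sketched as a great-circle distance between successive fixes divided by the time step; the coordinates below are illustrative, not the study's data.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude
    positions (degrees), using a spherical Earth of radius 6371 km."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_from_positions(fixes):
    """Speeds (m/s) between successive (time_s, lat, lon) GPS fixes --
    the delta-position/time method; Doppler speed is instead reported
    directly by the receiver from carrier frequency shift."""
    return [haversine_m(la0, lo0, la1, lo1) / (t1 - t0)
            for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:])]

# Two hypothetical fixes 1 s apart, ~1.1 m apart in latitude:
speeds = speed_from_positions([(0.0, -27.39, 153.18), (1.0, -27.39001, 153.18)])
```

Position noise enters this method twice (once per fix), which is one reason Doppler-derived speed typically shows lower error, as the study's results indicate.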
Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners’ individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections. Group level speed was highly predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners’ speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds.
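One plausible reading of the weighted gradient factor is a linear blend of the current section's gradient with the preceding one, reflecting the carry-over effect on level-section speed described above, feeding a simple linear speed model. The weight and sensitivity values below are illustrative assumptions, not the thesis's fitted coefficients.

```python
def weighted_gradient(current_pct, prior_pct, w_prior=0.3):
    """Effective gradient (%) blending the current section's gradient with
    the preceding section's, to capture the carry-over effect on speed.
    The 0.3 prior weight is an assumed, illustrative value."""
    return (1.0 - w_prior) * current_pct + w_prior * prior_pct

def predicted_speed(level_speed_ms, gradient_pct, sensitivity=0.04):
    """Linear speed model: positive (uphill) gradients slow the runner,
    negative (downhill) gradients speed them up. `sensitivity` per
    percent gradient is an assumed, illustrative value."""
    return level_speed_ms * (1.0 - sensitivity * gradient_pct)

# A level section entered straight after a 5% climb is predicted slower
# than the same section entered after level running:
after_climb = predicted_speed(4.0, weighted_gradient(0.0, 5.0))
after_level = predicted_speed(4.0, weighted_gradient(0.0, 0.0))
```

Under this reading, weighting the prior gradient is what lets the model reproduce the finding that level-section speed remains depressed for over a minute after an uphill.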
Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492m) was completed without pacing. Goals for the Intervention trial were based on findings from study two using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption with only one runner showing a change of more than 10%. Group level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy which was gauged by a low Root Mean Square error across subsections and gradients. 
Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that, for some runners, the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners’ speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was a much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption. Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners’ range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
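Adherence to the pacing strategy in the third study was gauged by a Root Mean Square error between achieved and goal times across subsections. A minimal sketch of that adherence measure, with hypothetical section times:

```python
import math

def rmse(actual_times_s, goal_times_s):
    """Root-mean-square error (s) between achieved and goal section times;
    lower values indicate closer adherence to the pacing strategy."""
    return math.sqrt(sum((a - g) ** 2 for a, g in zip(actual_times_s, goal_times_s))
                     / len(actual_times_s))

# Hypothetical subsection times (seconds) for one lap:
goals = [61.0, 49.0, 55.0]
adherent = rmse([62.0, 48.5, 55.2], goals)   # close to the prescribed splits
drifting = rmse([66.0, 44.0, 59.0], goals)   # drifting from the prescription
```

Computing the error per gradient category, as well as overall, is what allowed the study to identify that adherence was lowest on downhill sections.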