39 results for EXTRAPOLATION
Abstract:
From 19 authoritative lists containing 164 entries of ‘endangered’ Australian mammal species, 39 species have been reported as extinct. When examined in the light of field conditions, the 18 of these species thought to occur in Queensland consist of (a) species described from fragmentary museum material collected in the earliest days of exploration, (b) populations inferred to exist in Queensland by extrapolation from distribution records in neighbouring States or countries, (c) inhabitants of remote and harsh locations where searching is extraordinarily difficult (especially under conditions of drought or flooding), and/or (d) individuals that are clearly transitory or peripheral in distribution. ‘Rediscovery’ of such scarce species, a not infrequent occurrence, is nowadays attracting increasing attention. Management of any scarce wildlife in Queensland presently derives from such official lists. The analyses here indicate that this method of prioritizing action needs review, especially because action then tends to centre on species chosen from the lists for populist reasons and mostly addresses Crown lands. There is reason to believe that the preferred management may lie on private lands, where casual observation has provided for rediscovery and where management is most desirable and practicable.
Abstract:
Policies that encourage greenhouse-gas emitters to mitigate emissions through terrestrial carbon (C) offsets – C sequestration in soils or biomass – will promote practices that reduce erosion and build soil fertility, while fostering adaptation to climate change, agricultural development, and rehabilitation of degraded soils. However, none of these benefits will be possible until changes in C stocks can be documented accurately and cost-effectively. This is particularly challenging when dealing with changes in soil organic C (SOC) stocks. Precise methods for measuring C in soil samples are well established, but spatial variability in the factors that determine SOC stocks makes it difficult to document change. Widespread interest in the benefits of SOC sequestration has brought this issue to the fore in the development of US and international climate policy. Here, we review the challenges to documenting changes in SOC stocks, how policy decisions influence offset documentation requirements, and the benefits and drawbacks of different sampling strategies and extrapolation methods.
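The sampling-strategy trade-off described above can be illustrated with a small simulation; the field parameters, noise levels, and sample sizes below are entirely hypothetical. Resampling fixed locations (paired sampling) cancels most of the spatial variability that swamps independent random sampling when trying to document a small SOC change:

```python
import random
import statistics

random.seed(0)

# Hypothetical field of 10,000 locations whose baseline SOC stocks vary
# spatially (mean 50, sd 10 Mg C/ha), with a true gain of 2 Mg C/ha
# between the two sampling campaigns plus small measurement noise.
N, TRUE_GAIN = 10_000, 2.0
baseline = [random.gauss(50.0, 10.0) for _ in range(N)]
followup = [b + TRUE_GAIN + random.gauss(0.0, 1.0) for b in baseline]

def estimate_change(n_samples, paired):
    """Estimate the mean SOC change from n_samples field samples."""
    idx0 = random.sample(range(N), n_samples)
    # Paired design revisits the same locations; independent design
    # draws a fresh random set for the second campaign.
    idx1 = idx0 if paired else random.sample(range(N), n_samples)
    m0 = statistics.mean(baseline[i] for i in idx0)
    m1 = statistics.mean(followup[i] for i in idx1)
    return m1 - m0

paired = [estimate_change(50, True) for _ in range(300)]
indep = [estimate_change(50, False) for _ in range(300)]
```

With these (invented) numbers the paired estimates scatter around the true gain far more tightly than the independent ones, because spatial variability cancels within each revisited location.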
Abstract:
Purpose: Leadership styles are reviewed and reassessed in light of recent research linking destructive leadership behaviours exhibited by unscrupulous executives with traits commonly identified as indicators of corporate psychopathy. Method/approach: A review of the literature describing the various theories on the nature of leadership styles and the rise of interest in corporate psychopathy and destructive leadership. Implications: This paper offers a psychological perspective for future research, providing both impetus and additional support for further analysis and exploration of such leadership styles in the business environment. One distinct advantage of this extrapolation is the articulation of insights into leaders' decision making, informing the formulation of leadership development programs in organisations and of business school courses training future leaders.
Abstract:
This article provides an overview of some of the key aspects of the co-evolution of languages and their associated content in the Internet environment. A focus on this co-evolution is pertinent because the evolution of languages in the Internet environment can be better understood if the development of their existing and emerging content, that is, the content in the respective language, is taken into consideration. The article examines two related aspects. The first is the governance of languages at critical sites of the Internet environment, including ICANN, Wikipedia and Google Translate. Following on from this examination, the second part outlines how the co-evolution of languages and associated content in the Internet environment extends policy-making related to linguistic pluralism. It is argued that policies which centre on language availability in the Internet environment must shift their focus to the dynamics of available content instead. The notion of language pairs as a new regime of intersection for both languages and content is discussed to introduce an extended understanding of the uses of linguistic pluralism in the Internet environment. The ultimate extrapolation of such an enhanced approach, it is argued, centres less on 6,000 languages than on 36 million language pairs. The article describes how such a powerful resource evolves in the Internet environment.
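The jump from 6,000 languages to 36 million language pairs is simple combinatorics: if every ordered pairing of roughly 6,000 languages counts as its own resource (direction matters for translation), there are 6,000² = 36,000,000 pairs. A two-line check, with the 6,000 figure taken from the abstract:

```python
# Ordered pairings of ~6,000 languages; the 36 million figure in the
# abstract matches the square, which includes same-language pairs.
n_languages = 6_000
ordered_pairs = n_languages * n_languages          # includes self-pairs
directed_pairs = n_languages * (n_languages - 1)   # excludes self-pairs
```

Whether self-pairs are counted barely changes the order of magnitude: 36,000,000 versus 35,994,000.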
Abstract:
Student satisfaction data has been collected on a national basis in Australia since 1972. In recent years this data has been used by federal government agencies to allocate funding, and by students in selecting their universities of choice. The purpose of this paper is to present the findings of an action research project designed to identify and implement unit improvement initiatives over a three-year period for an underperforming unit. This research utilises student survey data and teacher reflections to identify areas for unit improvement, with a view to aligning learning experiences, teaching and assessment with learning outcomes and improving student satisfaction. This research concludes that whilst a voluntary student survey system may be imperfect, it nevertheless provides important data that can be utilised to the benefit of the unit, its learning outcomes and student satisfaction ratings, as well as wider course-related outcomes. Extrapolation of these findings to other underperforming units is recommended.
Abstract:
In this study x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise-reduction achieved using the ‘zero-scan’ method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner’s x-ray tube, this study used a large slice thickness (1 cm), to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63), to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full, 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
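The zero-scan method described above reduces to a per-pixel linear regression of Hounsfield units against scan number, with the intercept taken as the noise-reduced "zero-scan" value. A minimal single-pixel sketch; the HU level, per-scan drift, and noise figures are invented for illustration, not taken from the study:

```python
import random
import statistics

random.seed(1)

def zero_scan_pixel(hu_values):
    """Least-squares linear fit of HU vs. scan number; the intercept is
    the extrapolated 'zero-scan' value for this pixel."""
    n = len(hu_values)
    xs = range(1, n + 1)                      # scan index 1..n
    x_mean = statistics.mean(xs)
    y_mean = statistics.mean(hu_values)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, hu_values)) \
            / sum((x - x_mean) ** 2 for x in xs)
    return y_mean - slope * x_mean            # intercept at scan 0

# Hypothetical pixel: true zero-scan HU of 40, a small per-scan drift
# (e.g. from dose delivered by the repeated scanning), plus scanner noise.
# 63 repeated scans, matching the number used in the study.
true_hu, drift = 40.0, 0.05
scans = [true_hu + drift * k + random.gauss(0.0, 2.0) for k in range(1, 64)]

estimate = zero_scan_pixel(scans)
```

Because the fit pools all 63 scans, the intercept estimate is far less noisy than any single scan of the same pixel; applying this to every pixel yields the noise-reduced 3D image.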
Abstract:
Reliability analysis is crucial to reducing unexpected downtime, severe failures and the ever-tightening maintenance budgets of engineering assets. Hazard-based reliability methods are of particular interest as the hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution is accurate for the population concerned, and that the assumed effects of covariates on the hazard hold. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to deal with the limitations arising from these two assumptions of statistical models. With the success of failure-prevention efforts, less failure history becomes available for reliability analysis. Involving condition data, or covariates, is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality, owing to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research therefore investigates the incomplete-covariate problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and their effects on reliability analysis results.
Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs are extended to handle incomplete covariates as an integral part of the model. The extended NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical incomplete-covariate handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation under the influence of both degradation and changes in environmental settings. Commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, this research projects covariate states. The estimated covariate states and the unknown covariate values at future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill demonstrates that this new multi-step reliability analysis procedure generates more accurate results.
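As a rough illustration of the kind of model the thesis describes, the sketch below pushes a covariate vector through a one-hidden-layer network whose sigmoid output is a hazard in (0, 1); missing covariates are zero-filled and flagged with indicator inputs. This masking mechanism, and the random untrained weights, are assumptions for illustration only; the thesis's actual treatment of incomplete covariates may differ:

```python
import math
import random

random.seed(2)

def nn_hazard(covariates, w1, b1, w2, b2):
    """One-hidden-layer network mapping a covariate vector (entries may
    be None for missing values) to a hazard in (0, 1). Missing values
    are zero-filled and flagged via appended indicator inputs, so
    incompleteness is part of the model input."""
    x = [(c if c is not None else 0.0) for c in covariates]
    x += [0.0 if c is not None else 1.0 for c in covariates]  # missing flags
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    z = sum(wi * hi for wi, hi in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid keeps the hazard in (0, 1)

# Random illustrative weights: 4 inputs (2 covariates + 2 flags) -> 3 hidden -> 1.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [random.uniform(-1, 1) for _ in range(3)]
w2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

complete = nn_hazard([0.8, 1.2], w1, b1, w2, b2)
incomplete = nn_hazard([0.8, None], w1, b1, w2, b2)
```

The point of the sketch is structural: the same network accepts complete and incomplete covariate vectors without a separate imputation stage, which is one way incomplete-covariate handling can become "an integral part" of the model.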
Abstract:
This thesis takes a new data mining approach to analysing road/crash data by developing models for the whole road network and generating a crash risk profile. Roads with an elevated crash risk due to road surface friction deficit are identified. The regression tree model, predicting road segment crash rate, is applied in a novel deployment, termed regression tree extrapolation, that produces a skid resistance/crash rate curve. Using extrapolation allows the method to be applied across the network and to cope with the high proportion of missing road surface friction values. This risk profiling method can be applied in other domains.
Abstract:
Road surface skid resistance has been shown to have a strong relationship to road crash risk; however, the current method of using investigatory levels to identify crash-prone roads is problematic, as these levels may fail to identify risky roads outside the norm. The proposed method uses data mining to analyse a complex and formerly impenetrable volume of road and crash data. It rapidly identifies roads with elevated crash rates, potentially due to skid resistance deficit, for investigation. A hypothetical skid resistance/crash risk curve is developed for each road segment, driven by the model deployed in a novel regression tree extrapolation method. The method potentially solves the problem of missing skid resistance values that occurs during network-wide crash analysis, and allows risk assessment of the major proportion of roads without skid resistance values.
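A toy version of the idea can be sketched as a one-dimensional regression tree fitted to skid resistance versus crash rate, then swept across the full friction range to produce a curve that also covers segments with no friction measurement. All data values below are invented, and a production system would use a full tree library rather than this minimal implementation:

```python
def build_tree(xs, ys, min_leaf=2):
    """Tiny 1-D regression tree: recursively pick the split threshold
    that minimises the summed squared error of the two child means."""
    if len(set(xs)) <= 1 or len(xs) <= 2 * min_leaf:
        return sum(ys) / len(ys)                     # leaf: mean crash rate
    best = None
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue
        sse = sum((y - sum(left) / len(left)) ** 2 for y in left) \
            + sum((y - sum(right) / len(right)) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t)
    if best is None:
        return sum(ys) / len(ys)
    _, t = best
    lx, ly = zip(*[(x, y) for x, y in zip(xs, ys) if x < t])
    rx, ry = zip(*[(x, y) for x, y in zip(xs, ys) if x >= t])
    return (t, build_tree(list(lx), list(ly), min_leaf),
               build_tree(list(rx), list(ry), min_leaf))

def predict(tree, x):
    while isinstance(tree, tuple):
        t, left, right = tree
        tree = left if x < t else right
    return tree

# Invented training data: skid resistance (friction units) vs. crash
# rate (crashes per million vehicle-km); lower friction, more crashes.
skid = [0.30, 0.32, 0.35, 0.38, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70]
rate = [2.1, 1.9, 1.8, 1.6, 0.9, 0.8, 0.5, 0.4, 0.4, 0.3]

tree = build_tree(skid, rate)
# "Extrapolation": sweep the fitted curve across the whole friction
# range, covering segments whose skid resistance was never measured.
curve = [(round(x / 100, 2), predict(tree, x / 100)) for x in range(25, 80, 5)]
```

The swept curve is the skid resistance/crash rate relationship; a segment with a missing friction value can then be risk-assessed from any available friction estimate by reading off the curve.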
Abstract:
Accurately quantifying total methane release to atmosphere from a freshwater storage requires the spatial–temporal measurement of both diffusive and ebullitive emissions. Existing floating chamber techniques provide localised assessment of methane flux; however, significant errors can arise when weighting and extrapolating these measurements to the entire storage, particularly when ebullition is significant. An improved technique has been developed that complements traditional chamber-based experiments to quantify the storage-scale release of methane gas to atmosphere through ebullition, using measurements from an Optical Methane Detector (OMD) and a robotic boat. This provides a conservative estimate of the methane emission rate from ebullition along with the bubble volume distribution. It also georeferences areas of ebullition activity across entire storages at short temporal scales. An assessment of Little Nerang Dam in Queensland, Australia, demonstrated that whole-storage methane release differed significantly spatially and throughout the day. Total methane emission estimates showed a potential 32-fold variation in whole-of-dam rates depending on the measurement and extrapolation method and the time of day used. The combined chamber and OMD technique showed that 1.8–7.0% of the surface area of Little Nerang Dam accounts for up to 97% of total methane release to atmosphere throughout the day. Additionally, over 95% of detectable ebullition occurred at depths of less than 12 m during the day and 6 m at night. This difference in the spatial and temporal distribution of methane release rates highlights the need to monitor significant regions of, if not the entire, water storage in order to provide an accurate estimate of ebullition rates and their contribution to annual methane emissions.
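The sensitivity to extrapolation method follows directly from area weighting: whole-storage emission is the area-weighted sum of zone fluxes, so a small high-flux ebullition zone can dominate the total, and a chamber placed only in quiescent water misses it entirely. A sketch with invented numbers (not measurements from Little Nerang Dam):

```python
# Hypothetical two-zone extrapolation: a small ebullition hotspot with
# a high flux and a large quiescent area with a low diffusive flux.
storage_area_m2 = 500_000.0
hotspot_fraction = 0.05                  # ~5% of surface area
hotspot_flux = 250.0                     # mg CH4 m^-2 d^-1 (ebullition)
background_flux = 0.7                    # mg CH4 m^-2 d^-1 (diffusion)

def whole_storage_emission(hot_frac, hot_flux, bg_flux, area):
    """Area-weighted daily emission (mg CH4/day) for the whole storage."""
    return area * (hot_frac * hot_flux + (1 - hot_frac) * bg_flux)

with_hotspot = whole_storage_emission(hotspot_fraction, hotspot_flux,
                                      background_flux, storage_area_m2)
# A chamber deployed only in quiescent water extrapolates the background
# flux everywhere and misses the hotspot entirely:
naive = whole_storage_emission(0.0, hotspot_flux, background_flux,
                               storage_area_m2)
# Share of total emission contributed by the hotspot:
hotspot_share = (hotspot_fraction * hotspot_flux) / (
    hotspot_fraction * hotspot_flux + (1 - hotspot_fraction) * background_flux)
```

Even with these modest invented fluxes, the hotspot contributes roughly 95% of the total and the quiescent-only extrapolation is low by more than an order of magnitude, which is the kind of spread the 32-fold variation reflects.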
Abstract:
Background Physiotherapists are a professional group with a high rate of attrition and at high risk of musculoskeletal disorders. The purpose of this investigation was to examine the physical activity levels and health-related quality of life of physiotherapists working in metropolitan clinical settings in an Australian hospital and health service. It was hypothesized that practicing physiotherapists would report excellent health-related quality of life and would already be physically active. Such a finding would add weight to a claim that general physical activity conditioning strategies may not be useful for preventing musculoskeletal disorders among active healthy physiotherapists, but rather, future investigations should focus on the development and evaluation of role-specific conditioning strategies. Methods A questionnaire was completed by 44 physiotherapists from three inpatient units and three ambulatory clinics (63.7% response rate). Physical activity levels were reported using the Active Australia Survey. Health-related quality of life was examined using the EQ-5D instrument. Physical activity and EQ-5D data were examined using conventional descriptive statistics, with domain responses for the EQ-5D presented in a frequency histogram. Results The majority of physiotherapists in this sample were younger than 30 years of age (n = 25, 56.8%), consistent with the presence of a high attrition rate. Almost all respondents exceeded minimum recommended physical activity guidelines (n = 40, 90.9%). Overall the respondents engaged in more vigorous physical activity (median = 180 minutes) and walking (median = 135 minutes) than moderate exercise (median = 35 minutes) each week. Thirty-seven (84.1%) participants reported no pain or discomfort impacting their health-related quality of life, with most (n = 35, 79.5%) being in full health.
Conclusions Physical-conditioning-based interventions for the prevention of musculoskeletal disorders among practicing physiotherapists may be better targeted at role- or task-specific conditioning than at general physical conditioning in this physically active population. It is plausible that an inherent attrition of physiotherapists occurs among those not as active or healthy as the therapists who cope with the physical demands of clinical practice. Extrapolation of the findings of this study may be limited by the sample characteristics. However, this investigation addressed the study objectives and provides a foundation for larger-scale longitudinal investigations in this field.
Abstract:
Biomonitoring has become the ‘gold standard’ in assessing chemical exposures, and plays an important role in risk assessment. The pooling of biological specimens – combining multiple individual specimens into a single sample – can be used in biomonitoring studies to monitor levels of exposure and identify exposure trends, or to identify susceptible populations in a cost-effective manner. Pooled samples provide an estimate of central tendency, and may also reveal information about variation within the population. The development of a pooling strategy requires careful consideration of the type and number of samples collected, the number of pools required, and the number of specimens to combine per pool in order to maximize the type and robustness of the data. Creative pooling strategies can be used to explore exposure-outcome associations, and extrapolation from other larger studies can be useful in identifying elevated exposures in specific individuals. The use of pooled specimens is advantageous as it saves significantly on analytical costs, may reduce the time and resources required for recruitment, and in certain circumstances, allows quantification of samples approaching the limit of detection. In addition, use of pooled samples can provide population estimates while avoiding ethical difficulties that may be associated with reporting individual results.
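The pooling arithmetic can be sketched with invented concentration data: combining equal aliquots makes a pool's measured concentration the mean of its members, so a handful of pooled analyses recovers the same population mean as analysing every specimen, while the spread between pools hints at population variation:

```python
import random
import statistics

random.seed(3)

# Hypothetical biomonitoring study: 200 individual specimens with
# log-normally distributed chemical concentrations (units arbitrary).
specimens = [random.lognormvariate(0.0, 0.5) for _ in range(200)]

def make_pools(samples, pool_size):
    """Combine consecutive specimens into pools; with equal aliquots,
    a pool's measured concentration is the mean of its members."""
    return [statistics.mean(samples[i:i + pool_size])
            for i in range(0, len(samples), pool_size)]

pools = make_pools(specimens, pool_size=10)   # 20 analyses instead of 200
pooled_mean = statistics.mean(pools)          # estimate of central tendency
individual_mean = statistics.mean(specimens)
between_pool_sd = statistics.stdev(pools)     # hints at population variation
```

With equal-size pools the pooled mean coincides with the individual mean exactly, at a tenth of the analytical cost; the between-pool spread understates individual-level variation (averaging within pools shrinks it), which is one of the trade-offs a pooling strategy must weigh.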