883 results for Rischio finanziario, Value-at-Risk, Expected Shortfall


Relevance:

30.00%

Publisher:

Abstract:

In Australia, communities are concerned about atrazine being detected in drinking water supplies. It is important to understand the mechanisms by which atrazine is transported from paddocks to waterways if we are to reduce movement of agricultural chemicals from the site of application. Two paddocks cropped with grain sorghum on a Black Vertosol were monitored for atrazine, potassium chloride (KCl) extractable atrazine, desethylatrazine (DEA), and desisopropylatrazine (DIA) at four soil depths (0-0.05, 0.05-0.10, 0.10-0.20, and 0.20-0.30 m) and in runoff water and runoff sediment. Atrazine + DEA + DIA (total atrazine) had a half-life in soil of 16-20 days, a more rapid dissipation than in many earlier reports. Atrazine extracted in dilute potassium chloride, considered available for weed control, was initially 34% of the total and had a half-life of 15-20 days until day 30, after which it dissipated rapidly with a half-life of 6 days. We conclude that, in this region, atrazine may not pose a risk for groundwater contamination, as only 0.5% of applied atrazine moved deeper than 0.20 m into the soil, where it dissipated rapidly. In runoff (including suspended sediment), atrazine concentrations were greatest during the first runoff event (57 days after application; 85 μg/L) and declined with time. After 160 days, the total atrazine lost in runoff was 0.4% of the initial application. The total atrazine concentration in runoff was strongly related to the total concentration in soil, as expected. Even after 98% of the KCl-extractable atrazine had dissipated (and no longer provided weed control), runoff concentrations still exceeded the human health guideline value of 40 μg/L. For total atrazine in soil (0-0.05 m), the soil sorption coefficient (Kd) ranged from 1.9 to 28.4 mL/g and the organic carbon sorption coefficient (KOC) from 100 to 2184 mL/g, both increasing with time of contact with the soil and with the rapid dissipation of the more soluble, available phase. Partition coefficients in runoff for total atrazine were initially 3, increasing to 32 and 51 with time; values for DEA were half these. To minimise atrazine losses, cultural practices should be adopted that maximise rain infiltration (thereby minimising runoff) and minimise atrazine concentrations at the soil surface.
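
As a note on the arithmetic behind these figures, the short sketch below (not from the study) shows how first-order dissipation from a half-life and the Kd/KOC sorption coefficients reported above are computed; the 60-day example and the organic-carbon fraction are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): first-order dissipation and
# sorption partitioning of the kind summarised in the abstract above.

def fraction_remaining(days: float, half_life_days: float) -> float:
    """Fraction of atrazine remaining after `days`, assuming first-order decay."""
    return 0.5 ** (days / half_life_days)

def kd(sorbed_ug_per_g: float, solution_ug_per_ml: float) -> float:
    """Soil sorption coefficient Kd (mL/g) = sorbed / solution concentration."""
    return sorbed_ug_per_g / solution_ug_per_ml

def koc(kd_ml_per_g: float, organic_carbon_fraction: float) -> float:
    """Organic-carbon-normalised sorption coefficient KOC = Kd / fOC."""
    return kd_ml_per_g / organic_carbon_fraction

# With a 16-20 day half-life, roughly 7-13% of total atrazine remains after 60 days.
print(fraction_remaining(60, 16), fraction_remaining(60, 20))
# A Kd of 10 mL/g in a soil with 1% organic carbon gives KOC = 1000 mL/g.
print(koc(kd(10.0, 1.0), 0.01))
```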

Relevance:

30.00%

Publisher:

Abstract:

The complexity, variability and vastness of the northern Australian rangelands make it difficult to assess the risks associated with climate change. In this paper we present a methodology to help industry and primary producers assess the risks associated with climate change and the effectiveness of adaptation options in managing those risks. Our assessment involved three steps. Initially, the impacts and adaptation responses were documented in matrices by ‘experts’ (rangeland and climate scientists). Then, a modified risk management framework was used to develop risk management matrices that identified important impacts, areas of greatest vulnerability (a combination of potential impact and adaptive capacity) and priority areas for action at the industry level. The process was easy to implement and useful for arranging and analysing large amounts of complex, interacting information. Lastly, regional extension officers (after minimal ‘climate literacy’ training) could build on the knowledge provided here and implement the risk management process in workshops with rangeland land managers. Their participation should identify relevant and robust adaptive responses that are likely to be incorporated into regional and property management decisions. The process developed here for the grazing industry could be modified and used in other industries and sectors. By 2030, some areas of northern Australia are expected to experience more droughts and lower summer rainfall. This poses a serious threat to the rangelands. Although the impacts and adaptive responses will vary between ecological and geographic systems, climate change is expected to have noticeable detrimental effects: reduced pasture growth and surface water availability; increased competition from woody vegetation; decreased production per head (beef and wool) and gross margin; and adverse impacts on biodiversity. Further research and development are needed to identify the most vulnerable regions and to inform policy in time to facilitate transitional change and enable land managers to implement those changes.
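
A minimal sketch of the risk-matrix step described above, assuming a hypothetical 1-5 scoring of potential impact and adaptive capacity; the scales, the ranking rule and the example impacts are illustrative and are not the matrices developed by the experts.

```python
# Hypothetical scoring sketch: vulnerability is treated as high potential
# impact combined with low adaptive capacity, both scored 1-5 (assumed scales).

def vulnerability(potential_impact: int, adaptive_capacity: int) -> int:
    """Rank vulnerability on a 1-25 scale from 1-5 impact and capacity scores."""
    return potential_impact * (6 - adaptive_capacity)

impacts = {
    "reduced pasture growth": (5, 2),
    "increased woody competition": (4, 3),
    "reduced surface water availability": (4, 2),
}
ranked = sorted(impacts.items(), key=lambda kv: vulnerability(*kv[1]), reverse=True)
for name, (impact, capacity) in ranked:
    print(f"{name}: vulnerability {vulnerability(impact, capacity)}")
```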

Relevance:

30.00%

Publisher:

Abstract:

While the use of specialist herbivores to manage invasive plants (classical biological control) is regarded as relatively safe and cost-effective compared with other methods of management, the rarity of strict monophagy among insect herbivores shows that, like any management option, biological control is not risk-free. The challenge for classical biological control is therefore to predict risks and benefits a priori. In this study we develop a simulation model that may aid in this process. We use this model to predict the risks and benefits of introducing the chrysomelid beetle Charidotis auroguttata to manage the invasive liana Macfadyena unguis-cati in Australia. Preliminary host-specificity testing of this herbivore indicated that there was limited feeding on a non-target plant, although the non-target was only able to sustain some transitions of the herbivore's life cycle. The model includes herbivore, target and non-target life history and incorporates spillover dynamics of herbivore populations from the target to the non-target under a variety of scenarios. Data from studies of this herbivore in the native range and under quarantine were used to parameterize the model and predict the relative risks and benefits of this herbivore when the target and non-target plants co-occur. Key model outputs include herbivore population dynamics on the target (apparent benefit) and non-target (apparent risk), and the fitness consequences of herbivore damage to the target (actual benefit) and non-target plant (actual risk). The model predicted that risk to the non-target became unacceptable (i.e. significant negative effects on fitness) when the ratio of target to non-target in a given patch ranged from 1:1 to 3:2. By comparing the current known distribution of the non-target with the predicted distribution of the target, we were able to identify regions in Australia where the agent may pose an unacceptable risk. By considering risk and benefit simultaneously, we highlight how such a simulation modelling approach can assist scientists and regulators in making more objective a priori decisions on the value of releasing specialist herbivores as biological control agents.
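
The following is a minimal sketch, not the authors' model, of the kind of discrete-time spillover dynamics described: a herbivore population builds up on the target plant and a fraction spills over to a non-target that supports only part of its life cycle. The growth rates, spillover fraction, carrying capacity and patch compositions are assumptions for illustration.

```python
# Toy spillover model: herbivores on the target grow towards a cap; a small
# fraction spills onto the non-target in proportion to its share of the patch.

def simulate(weeks=52, r_target=1.3, r_nontarget=0.6, spill=0.05,
             target_patches=3, nontarget_patches=2, carrying_capacity=1000.0):
    """Return herbivore numbers on target and non-target after `weeks`."""
    on_target, on_nontarget = 10.0, 0.0
    for _ in range(weeks):
        # spillover scales with how common the non-target is in the patch
        moved = spill * on_target * nontarget_patches / (target_patches + nontarget_patches)
        on_target = min(carrying_capacity, on_target * r_target) - moved
        # r_nontarget < 1: the non-target sustains only part of the life cycle
        on_nontarget = on_nontarget * r_nontarget + moved
    return round(on_target), round(on_nontarget)

print(simulate(target_patches=1, nontarget_patches=1))  # 1:1 target:non-target
print(simulate(target_patches=3, nontarget_patches=2))  # 3:2 target:non-target
```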

Relevance:

30.00%

Publisher:

Abstract:

When exposed to hot (22-35°C) and dry climatic conditions in the field during the final 4-6 weeks of pod filling, peanuts (Arachis hypogaea L.) can accumulate highly carcinogenic and immunosuppressive aflatoxins. Forecasting the risk posed by these conditions can assist in minimizing pre-harvest contamination. A model was therefore developed as part of the Agricultural Production Systems Simulator (APSIM) peanut module, which calculated an aflatoxin risk index (ARI) using four temperature response functions when fractional available soil water was <0.20 and the crop was in the last 0.40 of the pod-filling phase. ARI explained 0.95 (P ≤ 0.05) of the variation in aflatoxin contamination, which varied from 0 to c. 800 μg/kg in 17 large-scale sowings in tropical and four sowings in sub-tropical environments, carried out in Australia between 13 November and 16 December 2007. ARI also explained 0.96 (P ≤ 0.01) of the variation in the proportion of aflatoxin-contaminated loads (>15 μg/kg) of peanuts in the Kingaroy region of Australia between the 1998/99 and 2007/08 seasons. Simulation of ARI using historical climatic data from 1890 to 2007 indicated a three-fold increase in its value since 1980 compared with the entire previous period. The increase was associated with increases in ambient temperature and decreases in rainfall. To facilitate routine monitoring of aflatoxin risk by growers in near real time, a web interface for the model was also developed. The ARI predicted using this interface for eight growers correlated significantly with the level of contamination in their crops (r = 0.95, P ≤ 0.01). These results suggest that the ARI simulated by the model is a reliable indicator of aflatoxin contamination that can be used in aflatoxin research as well as a decision-support tool to monitor pre-harvest aflatoxin risk in peanuts.
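
A hedged sketch of how an index of this kind could be accumulated is given below; the triangular temperature response, its cardinal temperatures and the daily records are assumptions for illustration and do not reproduce the APSIM peanut module's four response functions.

```python
# Sketch of an ARI-style accumulation: the index grows only on days that pass
# the water-deficit and late-pod-fill gates described in the abstract.

def temp_response(t_mean, t_min=22.0, t_opt=30.0, t_max=35.0):
    """Assumed 0-1 triangular response, peaking at t_opt, zero outside [t_min, t_max]."""
    if t_mean <= t_min or t_mean >= t_max:
        return 0.0
    if t_mean <= t_opt:
        return (t_mean - t_min) / (t_opt - t_min)
    return (t_max - t_mean) / (t_max - t_opt)

def aflatoxin_risk_index(daily_records):
    """Sum the daily temperature response on days meeting both gating conditions."""
    ari = 0.0
    for t_mean, avail_soil_water_frac, frac_podfill_elapsed in daily_records:
        if avail_soil_water_frac < 0.20 and frac_podfill_elapsed >= 0.60:
            ari += temp_response(t_mean)
    return ari

days = [(33.0, 0.15, 0.7), (29.0, 0.25, 0.8), (31.0, 0.10, 0.9)]
print(aflatoxin_risk_index(days))  # only days 1 and 3 pass the gates
```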

Relevance:

30.00%

Publisher:

Abstract:

Type 2 diabetes is an increasing, serious, and costly public health problem. The increase in the prevalence of the disease can mainly be attributed to changing lifestyles leading to physical inactivity, overweight, and obesity. These lifestyle-related risk factors also offer an opportunity for preventive interventions. Until recently, proper evidence regarding the prevention of type 2 diabetes had been virtually lacking. To be cost-effective, intensive interventions to prevent type 2 diabetes should be directed to people at an increased risk of the disease. The aim of this series of studies was to investigate whether type 2 diabetes can be prevented by lifestyle intervention in high-risk individuals, and to develop a practical method to identify individuals who are at high risk of type 2 diabetes and would benefit from such an intervention. To study the effect of lifestyle intervention on diabetes risk, we recruited 522 volunteer, middle-aged (aged 40-64 at baseline), overweight (body mass index > 25 kg/m2) men (n = 172) and women (n = 350) with impaired glucose tolerance to the Diabetes Prevention Study (DPS). The participants were randomly allocated either to the intensive lifestyle intervention group or the control group. The control group received general dietary and exercise advice at baseline and had an annual physician's examination. The participants in the intervention group received, in addition, individualised dietary counselling by a nutritionist. They were also offered circuit-type resistance training sessions and were advised to increase overall physical activity. The intervention goals were to reduce body weight (5% or more reduction from baseline weight), limit dietary fat (< 30% of total energy consumed) and saturated fat (< 10% of total energy consumed), and to increase dietary fibre intake (15 g/1000 kcal or more) and physical activity (≥ 30 minutes/day). Diabetes status was assessed annually by repeated 75 g oral glucose tolerance testing. The first end-point analysis was completed after a mean follow-up of 3.2 years, and the intervention phase was terminated after a mean duration of 3.9 years. After that, the study participants continued to visit the study clinics for the annual examinations, for a mean of 3 years. The intervention group showed significantly greater improvement in each intervention goal. After 1 and 3 years, mean weight reductions were 4.5 and 3.5 kg in the intervention group and 1.0 and 0.9 kg in the control group. Cardiovascular risk factors improved more in the intervention group. After a mean follow-up of 3.2 years, the risk of diabetes was reduced by 58% in the intervention group compared with the control group. The reduction in the incidence of diabetes was directly associated with achieved lifestyle goals. Furthermore, those who consumed a moderate-fat, high-fibre diet achieved the largest weight reduction and, even after adjustment for weight reduction, the lowest diabetes risk during the intervention period. After discontinuation of the counselling, the differences in lifestyle variables between the groups still remained favourable for the intervention group. During the post-intervention follow-up period of 3 years, the risk of diabetes was still 36% lower among the former intervention group participants compared with the former control group participants.
To develop a simple screening tool to identify individuals who are at high risk of type 2 diabetes, follow-up data from two population-based cohorts of 35-64-year-old men and women were used. The National FINRISK Study 1987 cohort (model development data) included 4435 subjects, with 182 new drug-treated cases of diabetes identified during ten years of follow-up, and the FINRISK Study 1992 cohort (model validation data) included 4615 subjects, with 67 new cases of drug-treated diabetes during five years, ascertained using the Social Insurance Institution's Drug Register. Baseline age, body mass index, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity, and daily consumption of fruits, berries or vegetables were selected into the risk score as categorical variables. In the 1987 cohort the optimal cut-off point of the risk score identified 78% of those who developed diabetes during the follow-up (the sensitivity of the test) and 77% of those who remained free of diabetes (the specificity of the test). In the 1992 cohort the risk score performed equally well. The final Finnish Diabetes Risk Score (FINDRISC) form includes, in addition to the predictors of the model, a question about family history of diabetes and the age category of over 64 years. When applied to the DPS population, the baseline FINDRISC value was associated with diabetes risk among the control group participants only, indicating that the intensive lifestyle intervention given to the intervention group participants abolished the diabetes risk associated with baseline risk factors. In conclusion, the intensive lifestyle intervention produced long-term beneficial changes in diet, physical activity, body weight, and cardiovascular risk factors, and reduced diabetes risk. Furthermore, the effects of the intervention were sustained after the intervention was discontinued. The FINDRISC proved to be a simple, fast, inexpensive, non-invasive, and reliable tool to identify individuals at high risk of type 2 diabetes. The use of FINDRISC to identify high-risk subjects, followed by lifestyle intervention, provides a feasible scheme for preventing type 2 diabetes that could be implemented in the primary health care system.
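
The sensitivity and specificity figures quoted above follow the standard definitions sketched below; the 2x2 counts are hypothetical values chosen only to reproduce the quoted 78%/77% for the 1987 cohort, not the study's actual classification table.

```python
# Standard screening-test metrics at a given risk-score cut-off.

def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity among future cases; specificity among those who stay diabetes-free."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical counts consistent with the quoted figures: 142 of 182 future
# cases flagged, 3275 of 4253 non-cases below the cut-off.
print(sensitivity_specificity(142, 40, 3275, 978))  # ~ (0.78, 0.77)
```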

Relevance:

30.00%

Publisher:

Abstract:

Because humans lack uricase, the final oxidation product of purine catabolism is uric acid (UA). The prevalence of hyperuricemia has been increasing around the world, accompanied by a rapid increase in obesity and diabetes. Since hyperuricemia was first described as being associated with hyperglycemia and hypertension by Kylin in 1923, there has been growing interest in the association between elevated UA and other metabolic abnormalities of hyperglycemia, abdominal obesity, dyslipidemia, and hypertension. The direction of causality between hyperuricemia and metabolic disorders, however, is uncertain, and the association of UA with metabolic abnormalities still needs to be delineated in population samples. Our overall aims were to study the prevalence of hyperuricemia and the metabolic factors clustering with hyperuricemia, to explore the dynamic changes in blood UA levels with the deterioration in glucose metabolism, and to estimate the predictive capability of UA in the development of diabetes. Four population-based surveys for diabetes and other non-communicable diseases were conducted in 1987, 1992, and 1998 in Mauritius, and in 2001-2002 in Qingdao, China. The Qingdao study comprised 1 288 Chinese men and 2 344 women aged 20-74, and the Mauritius study consisted of 3 784 Mauritian Indian and Mauritian Creole men and 4 442 women aged 25-74. In Mauritius, re-examinations were made in 1992 and/or 1998 for 1 941 men (1 409 Indians and 532 Creoles) and 2 318 non-pregnant women (1 645 Indians and 673 Creoles), free of diabetes, cardiovascular diseases, and gout at the baseline examinations in 1987 or 1992, using the same study protocol. The questionnaire collected demographic details, and physical examinations and standard 75 g oral glucose tolerance tests were performed in all cohorts. Fasting blood UA and lipid profiles were also determined. The age-standardized prevalence in Chinese adults aged 20-74 living in Qingdao was 25.3% for hyperuricemia (defined as fasting serum UA > 420 μmol/l in men and > 360 μmol/l in women) and 0.36% for gout. Hyperuricemia was more prevalent in men than in women. A one standard deviation increase in UA concentration was associated with the clustering of metabolic risk factors for both men and women in all three ethnic groups. Waist circumference, body mass index, and serum triglycerides appeared to be independently associated with hyperuricemia in both sexes and in all ethnic groups except Chinese women, in whom triglycerides, high-density lipoprotein cholesterol, and total cholesterol were associated with hyperuricemia. Serum UA increased with increasing fasting plasma glucose levels up to a value of 7.0 mmol/l, but decreased significantly thereafter in mainland Chinese. An inverse relationship occurred between 2-h plasma glucose and serum UA when 2-h plasma glucose was higher than 8.0 mmol/l. In the prospective study in Mauritius, 337 (17.4%) men and 379 (16.4%) women developed diabetes during the follow-up. Elevated baseline UA levels were associated with a 1.14-fold increase in the risk of incident diabetes in Indian men and a 1.37-fold increase in Creole men, but no significant association was observed in women. In conclusion, the prevalence of hyperuricemia was high in Chinese in Qingdao; blood UA was associated with the clustering of metabolic risk factors in Mauritian Indians, Mauritian Creoles, and Chinese living in Qingdao; and a high baseline UA level independently predicted the development of diabetes in Mauritian men.
The clinical use of UA as a marker of hyperglycemia and other metabolic disorders needs to be further studied.
Keywords: Uric acid, Hyperuricemia, Risk factors, Type 2 Diabetes, Incidence, Mauritius, Chinese
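
The age-standardised prevalence quoted above is a weighted average of age-specific prevalences; the sketch below shows the standard direct method with illustrative age bands, weights and prevalences rather than the survey data.

```python
# Direct age standardisation: weight age-specific prevalences by a standard
# population's age distribution. All numbers here are illustrative assumptions.

def age_standardised_prevalence(prevalence_by_band, standard_weights):
    assert abs(sum(standard_weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(prevalence_by_band[band] * w for band, w in standard_weights.items())

prevalence = {"20-34": 0.18, "35-49": 0.24, "50-64": 0.30, "65-74": 0.33}
weights    = {"20-34": 0.35, "35-49": 0.30, "50-64": 0.25, "65-74": 0.10}
print(age_standardised_prevalence(prevalence, weights))  # ~0.24
```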

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Given moderately strong genetic contributions to variation in alcoholism and heaviness of drinking (50% to 60% heritability) with high correlation of genetic influences, we have conducted a quantitative trait genome-wide association study (GWAS) for phenotypes related to alcohol use and dependence. METHODS: Diagnostic interview and blood/buccal samples were obtained from sibships ascertained through the Australian Twin Registry. Genome-wide single nucleotide polymorphism (SNP) genotyping was performed on 8754 individuals (2062 alcohol-dependent cases) selected for informativeness for alcohol use disorder and associated quantitative traits. Family-based association tests were performed for alcohol dependence, dependence factor score, and heaviness of drinking factor score, with confirmatory case-population control comparisons using an unassessed population control series of 3393 Australians with genome-wide SNP data. RESULTS: No findings reached genome-wide significance (p = 8.4 × 10⁻⁸ for this study), with the lowest p value for the primary phenotypes being 1.2 × 10⁻⁷. Convergent findings for quantitative consumption and diagnostic and quantitative dependence measures suggest possible roles for a transmembrane protein gene (TMEM108) and for ANKS1A. The major finding, however, was the small effect sizes estimated for individual SNPs, suggesting that hundreds of genetic variants make modest contributions (0.25% of variance or less) to alcohol dependence risk. CONCLUSIONS: We conclude that (1) meta-analyses of consumption data may contribute usefully to gene discovery; (2) translation of human alcoholism GWAS results to drug discovery or clinically useful prediction of risk will be challenging; and (3) through accumulation across studies, GWAS data may become valuable for improved genetic risk differentiation in research in biological psychiatry (e.g., prospective high-risk or resilience studies).
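
For orientation, the genome-wide significance threshold quoted above is consistent with a Bonferroni-style correction, as sketched below; whether the study derived it exactly this way, and the variant count at the end, are assumptions for illustration.

```python
# Bonferroni-style per-test significance threshold for a GWAS.

def genome_wide_threshold(alpha=0.05, effective_tests=595_000):
    """Per-test p-value cut-off that keeps the family-wise error rate at alpha."""
    return alpha / effective_tests

print(genome_wide_threshold())  # ~8.4e-08, matching the quoted study-wide cut-off

# A variant explaining 0.25% of trait variance has R^2 = 0.0025; reaching a
# heritability of ~55% would take on the order of a couple of hundred such variants.
print(0.55 / 0.0025)  # 220.0, purely illustrative
```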

Relevance:

30.00%

Publisher:

Abstract:

A case study was undertaken to determine the economic impact of a change in management class as detailed in the A, B, C and D management class framework. This document focuses on the implications of changing from D to C, C to B and B to A class management in the Burdekin River Irrigation Area (BRIA) and on whether the change is worthwhile from an economic perspective. This report provides a guide to the economic impact that may be expected when undertaking a particular change in farming practices and is intended to support more informed decisions by key industry stakeholders. It is recognised that these management classes have certain limitations and that in many cases the grouping of practices may not reflect the real situation. The economic case study is based on the A, B, C and D management class framework for water quality improvement developed in 2007/2008 for the Burdekin natural resource management region. The framework for the Burdekin is currently being updated to clarify some issues and incorporate new knowledge gained since the earlier version. However, this updated version is not yet complete, so the Paddock to Reef project has used the most current available version of the framework for the modelling and economics. As part of the project specification, sugarcane crop production data for the BRIA were provided by the APSIM model. The information obtained from the APSIM crop modelling programme included sugarcane yields and legume grain yield (legume grain yield applies only to A class management practice). Because of the complexity involved in the economic calculations, a combination of FEAT, PiRisk and a custom-made spreadsheet was used for the economic analysis. Figures calculated in the FEAT program were transferred to the custom-made spreadsheet to develop a discounted cash flow analysis. The marginal cash flow differences for each farming system were simulated over 5-year and 10-year planning horizons to determine the net present value (NPV) of changing between management practices. PiRisk was used to test uncertain parameters in the economic analysis and the potential risk associated with a change in value.
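
A minimal sketch of the discounted-cash-flow step described above (the same structure applies to the companion Delta and Tully case studies that follow) is given below; the discount rate, upfront cost and marginal cash flows are illustrative assumptions, not figures from the FEAT/PiRisk analysis.

```python
# NPV of the marginal cash flows from moving between management classes,
# evaluated over 5- and 10-year planning horizons.

def npv(cash_flows, discount_rate):
    """Net present value of year-1..n marginal cash flows."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))

upfront_cost = -25_000               # assumed year-0 outlay to change practices
annual_gain = [4_000] * 10           # assumed marginal gain per year ($)

print(upfront_cost + npv(annual_gain[:5], 0.07))  # 5-year horizon: still negative
print(upfront_cost + npv(annual_gain, 0.07))      # 10-year horizon: turns positive
```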

Relevance:

30.00%

Publisher:

Abstract:

A case study was undertaken to determine the economic impact of a change in management class as detailed in the A, B, C and D management class framework. This document focuses on the implications of changing from D to C, C to B and B to A class management in the Burdekin Delta region and on whether the change is worthwhile from an economic perspective. This report provides a guide to the economic impact that may be expected when undertaking a particular change in farming practices and is intended to support more informed decisions by key industry stakeholders. It is recognised that these management classes have certain limitations and that in many cases the grouping of practices may not reflect the real situation. The economic case study is based on the A, B, C and D management class framework for water quality improvement developed in 2007/2008 for the Burdekin natural resource management region. The framework for the Burdekin is currently being updated to clarify some issues and incorporate new knowledge gained since the earlier version. However, this updated version is not yet complete, so the Paddock to Reef project has used the most current available version of the framework for the modelling and economics. As part of the project specification, sugarcane crop production data for the Burdekin Delta region were provided by the APSIM model. The information obtained from the APSIM crop modelling programme included sugarcane yields and legume grain yield (legume grain yield applies only to A class management practice). Because of the complexity involved in the economic calculations, a combination of FEAT, PiRisk and a custom-made spreadsheet was used for the economic analysis. Figures calculated in the FEAT program were transferred to the custom-made spreadsheet to develop a discounted cash flow analysis. The marginal cash flow differences for each farming system were simulated over 5-year and 10-year planning horizons to determine the net present value of changing between management practices. PiRisk was used to test uncertain parameters in the economic analysis and the potential risk associated with a change in value.

Relevance:

30.00%

Publisher:

Abstract:

A case study was undertaken to determine the economic impact of a change in management class as detailed in the A, B, C and D management class framework. This document focuses on the implications of changing from D to C, C to B and B to A class management in the Tully region and on whether the change is worthwhile from an economic perspective. This report provides a guide to the economic impact that may be expected when undertaking a particular change in farming practices and is intended to support more informed decisions by key industry stakeholders. It is recognised that these management classes have certain limitations and that in many cases the grouping of practices may not reflect the real situation. The economic case study is based on the A, B, C and D management class framework for water quality improvement developed in 2007/2008 for the Wet Tropics natural resource management region. The framework for the Wet Tropics is currently being updated to clarify some issues and incorporate new knowledge gained since the earlier version. However, this updated version is not yet complete, so the Paddock to Reef project has used the most current available version of the framework for the modelling and economics. As part of the project specification, sugarcane crop production data for the Tully region were provided by the APSIM model. Because of the complexity involved in the economic calculations, a combination of FEAT, PiRisk and a custom-made spreadsheet was used for the economic analysis. Figures calculated in the FEAT program were transferred to the custom-made spreadsheet to develop a discounted cash flow analysis. The marginal cash flow differences for each farming system were simulated over 5-year and 10-year planning horizons to determine the net present value of changing between management practices. PiRisk was used to test uncertain parameters in the economic analysis and the potential risk associated with a change in value.

Relevance:

30.00%

Publisher:

Abstract:

Preputial prolapse is an obvious condition affecting bulls of many breeds. Unfortunately, the losses in production and welfare concerns associated with preputial prolapse can remain undetected for long periods of time in the extensive beef areas of northern Australia, where the bulls are not inspected regularly. Thus, there is a critical need to identify the structural factors predisposing to preputial prolapse in young bulls so that they can be culled early. Despite there being no firm scientific evidence of an association between preputial eversion and preputial prolapse, it seems logical that the increased exposure of the sensitive prepuce as a consequence of preputial eversion may increase the risk of bulls developing preputial pathology, in particular preputial prolapse. This may be particularly relevant in Bos indicus bulls as they have a more pendulous sheath, and thus eversion of the prepuce may be associated with a greater risk of injury to the prepuce than in Bos taurus bulls. Further, studies of preputial eversion in Bos taurus bulls have concluded that there is an association between polledness and increased prevalence and severity (length of everted prepuce and duration of eversion) of preputial eversion, due primarily to the absence or poor development of the caudal preputial muscles. No similar definitive work has been reported for Bos indicus bulls, so the anatomical studies reported in this thesis were undertaken to determine whether a similar association occurs in these bulls. A survey of a sample of large beef breeding herds in northern Australia found that preputial prolapse is a significant problem in Bos indicus and Bos indicus derived bulls and affects both young and older bulls. The importance of preputial prolapse confirmed the value of further research into the causes of this problem. A series of anatomical studies confirmed that preputial eversion in Bos indicus derived bulls was not more prevalent in polled bulls than in horned bulls and was not associated with deficiency of the caudal preputial muscles, as was established in Bos taurus bulls. An anatomical study of Bos indicus derived bulls with preputial prolapse found that preputial prolapse occurred in horned bulls of varying ages and that these bulls did not have any evidence of deficiency in the caudal preputial muscles. However, preputial prolapse was observed in young polled bulls that had poorly developed or absent caudal preputial muscles. It was concluded that deficiency of the caudal preputial muscles in polled Bos indicus derived bulls may predispose to preputial prolapse at an early age, but no predisposing anatomical factors were found for horned Bos indicus derived bulls. In these studies, preputial eversion and preputial prolapse were found in horned Bos indicus derived bulls that did not have any preputial muscle deficiency, and it was noted that preputial eversion was not related to the length of the prepuce. Further studies confirmed that preputial eversion was linearly and consistently associated with the position of the glans penis within the sheath in Bos indicus derived bulls, and that movement of the glans penis towards the preputial orifice consistently resulted in preputial eversion in these bulls. A method to objectively measure the relationship between movement of the glans penis within the sheath and preputial eversion was developed. Studies in humans have linked the function of some abdominal muscles to the function of the pelvic organs.
This relationship was investigated in Bos indicus derived bulls to determine whether the function of specific abdominal muscles affected the position of the penis in the sheath. Using the method developed to objectively measure the relationship between penis movement and preputial eversion, the abdominal muscles that were potentially associated with movement of the glans penis or preputial eversion were examined, but no significant relationships were observed. In the anatomical study of Bos indicus derived bulls not affected by preputial prolapse, a more pendulous sheath was associated with an increased prevalence of preputial eversion. This relationship was confirmed for horned and polled bulls in the penis movement studies. Bos indicus derived bulls with more pendulous sheaths evert their prepuces more than bulls with less pendulous sheaths, thus increasing the risk of damage to the prepuce either from the environment, from other bulls, or from inadvertently stepping on the everted prepuce when they get to their feet. Culling Bos indicus derived bulls with more pendulous sheaths should reduce the incidence of preputial eversion and possibly preputial prolapse. The anatomical study of Bos indicus derived bulls that did not have preputial prolapse demonstrates that there are herds in which the polled bulls do not have any evidence of deficiency of the caudal preputial muscles. There is a need to develop a practical and cost-effective test to identify polled Bos indicus bulls that have a deficiency in their caudal preputial muscles.

Relevance:

30.00%

Publisher:

Abstract:

Arguing the value of effective HRM practice has long been a focus in the HRM literature. However, there is also a case for identifying the risks presented by inappropriate or absent HRM practices. Although risk management has been established in the broader management literature for over two decades, human resource-related risks have not featured as prominently as other types of risk. HRM as a discipline has a role to play in addressing this situation and raising awareness of human resource issues as risks for the organization. A review of papers published since 2000 in a broad range of high-quality management journals identifies that limited research has thus far taken a risk management perspective on human resources. Although the HRM and risk management disciplines stand to benefit from drawing the two areas together, this review concludes that further research and development of the phenomenon of human resource risk management is needed.

Relevance:

30.00%

Publisher:

Abstract:

In irrigated cropping, as in any other industry, profit and risk are interdependent. An increase in profit would normally coincide with an increase in risk, which means that risk can be traded for profit. It is desirable to manage a farm so that it achieves the maximum possible profit for the desired level of risk. This paper identifies risk-efficient cropping strategies that allocate land and water between crop enterprises for a case study of an irrigated farm in Southern Queensland, Australia. This is achieved by applying stochastic frontier analysis to the output of a simulation experiment. The simulation experiment involved changes to the levels of business risk by systematically varying the crop sowing rules in a bioeconomic model of the case study farm. This model utilises the multi-field capability of the process-based Agricultural Production Systems Simulator (APSIM) and is parameterised using data collected from interviews with a collaborating farmer. We found that sowing rules which increased the farm area sown to cotton caused the greatest increase in risk-efficiency. Increasing maize area also improved risk-efficiency, but to a lesser extent than cotton. Sowing rules that increased the area sown to wheat reduced the risk-efficiency of the farm business. Sowing rules were identified that had the potential to improve the expected farm profit by ca. $50,000 annually without significantly increasing risk. The concept of the shadow price of risk is discussed, and an expression that quantifies the trade-off between profit and risk is derived from the estimated frontier equation.
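
The shadow price of risk mentioned above is the slope of the estimated profit-risk frontier; the sketch below assumes a quadratic frontier with illustrative coefficients rather than the paper's estimated equation.

```python
# Assumed quadratic frontier fitted to simulated (risk, expected profit) points;
# the shadow price of risk is its slope dE[profit]/d(risk).

def frontier_profit(risk_sd, a=120_000.0, b=2.5, c=-1.2e-5):
    """Expected profit ($) attainable at a given profit standard deviation ($)."""
    return a + b * risk_sd + c * risk_sd ** 2

def shadow_price_of_risk(risk_sd, b=2.5, c=-1.2e-5):
    """Marginal expected profit gained per extra dollar of risk (frontier slope)."""
    return b + 2 * c * risk_sd

for sd in (20_000, 60_000, 100_000):
    print(sd, frontier_profit(sd), shadow_price_of_risk(sd))
# The slope shrinks as risk rises: each extra dollar of risk buys less expected profit.
```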

Relevance:

30.00%

Publisher:

Abstract:

The research project developed a quantitative approach to assess the risk to human health from heavy metals and polycyclic aromatic hydrocarbons in urban stormwater, based on traffic and land use factors. The research outcomes are expected to strengthen the scientifically robust management and reuse of urban stormwater. The innovative methodology developed can be applied to evaluate human health risk from toxic chemical pollutants in urban stormwater runoff and to develop effective risk mitigation strategies.
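
By way of illustration only, a standard hazard-quotient calculation of the kind used in quantitative chemical health-risk assessment is sketched below; this is the generic US EPA-style formulation with hypothetical parameter values, not necessarily the specific methodology developed in this project.

```python
# Generic non-carcinogen risk screening: chronic daily intake vs. reference dose.
# All parameter values below are hypothetical.

def chronic_daily_intake(conc_mg_per_l, intake_l_per_day, exposure_freq_days_per_year,
                         exposure_years, body_weight_kg, averaging_days):
    """Chronic daily intake (mg/kg/day) from incidental ingestion of stormwater."""
    return (conc_mg_per_l * intake_l_per_day * exposure_freq_days_per_year *
            exposure_years) / (body_weight_kg * averaging_days)

def hazard_quotient(cdi_mg_per_kg_day, reference_dose_mg_per_kg_day):
    """HQ > 1 flags a potential non-carcinogenic health concern."""
    return cdi_mg_per_kg_day / reference_dose_mg_per_kg_day

cdi = chronic_daily_intake(0.01, 0.05, 350, 30, 70, 30 * 365)
print(hazard_quotient(cdi, 0.003))  # well below 1 for these assumed inputs
```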

Relevance:

30.00%

Publisher:

Abstract:

Puccinia psidii, the causal agent of myrtle rust, was first recorded in Latin America more than 100 years ago. It occurs on many native species of Myrtaceae in Latin America and also infects non-native plantation-grown Eucalyptus species in the region. The pathogen has gradually spread to new areas, including Australia and, most recently, South Africa. The aim of this study was to assess the susceptibility of selected Eucalyptus genotypes, particularly those of interest to South African forestry, to infection by P. psidii. In addition, risk maps were compiled based on suitable climatic conditions and the occurrence of potentially susceptible tree species. This made it possible to identify the season in which P. psidii infection is most likely and to define the geographic areas of South Africa where the rust disease is most likely to establish. As expected, variation in susceptibility was observed among the eucalypt genotypes tested. Importantly, species commonly planted in South Africa show good potential for yielding disease-tolerant material for future planting. Myrtle rust is predicted to be more common in spring and summer. Coastal areas, as well as areas of South Africa with subtropical climates, are more conducive to outbreaks of the pathogen.