Abstract:
Business models have, to date, remained the creation of management; however, the authors believe that designers should be critically approaching, challenging and creating new business models as part of their practice. This belief heralds a new era in which business model constructs become the design brief of the future and fuel design and innovation to work together at the strategic level of an organisation. Innovation can no longer rely on technology and R&D alone but must incorporate business models. Business model innovation has become a strong source of competitive advantage, as firms choose to compete not only on price but through the delivery of a unique value proposition that engages customers and differentiates a company within a competitive market. The purpose of this paper is to explore and investigate business model design through various product and/or service deliveries, and to identify common drivers that are catalysts for business model innovation. Fifty companies spanning a diverse range of criteria were chosen to evaluate and compare commonalities and differences in the design of their business models. The analysis of these business cases uncovered commonalities in the key strategic drivers behind these innovative business models, and five Meta Models were derived from this content analysis: Customer Led, Cost Driven, Resource Led, Partnership Led and Price Led. These five foci provide a designer with starting points from which quick prototypes of new business models can be created. The implications of this research suggest there is no ‘one right’ model; rather, experimentation and the generation of many unique and diverse concepts can result in greater possibilities for future innovation and sustained competitive advantage.
Abstract:
This study examined the influence that a set of Professional Development (PD) initiatives had on faculty use of Moodle, a well-known Course Management System. The context of the study was a private language university just outside Tokyo, Japan. Specifically, it aimed to identify the way in which the PD initiatives adhered to professional development best-practice criteria; how faculty members perceived the PD initiatives; what impact the PD initiatives had on faculty use of Moodle; and other variables that may have influenced faculty in their use of Moodle. The study utilised a mixed-methods approach. Participants were 42 teachers who worked at the university in the academic year 2008/9. Data were collected through an online survey, semi-structured face-to-face interviews, post-workshop surveys, and a collection of textual artefacts; the online survey consisted of 115 items, factored into 10 constructs. The quantitative data were analysed in SPSS, using descriptive statistics, Spearman's rank-order correlation tests and a Kruskal-Wallis test. The qualitative data were used to develop and expand findings and ideas. The results indicated that the PD initiatives adhered closely to technology-related professional development best-practice criteria. Further, results from the online survey, post-workshop surveys, and follow-up face-to-face interviews indicated that while the PD initiatives were positively perceived by faculty, they did not have the anticipated impact on Moodle use among faculty. Further results indicated that other variables, such as perceptions of Moodle and institutional issues, had a considerable influence on Moodle use.
The findings of the study further strengthened the idea that the five variables Everett Rogers lists in his Diffusion of Innovations model (perceived attributes of an innovation; type of innovation decision; communication channels; nature of the social system; and extent of change agents' promotion efforts) most influence the adoption of an innovation. However, the results also indicated that some of the variables in Rogers' DOI model seem to have more influence than others, particularly the perceived attributes of an innovation. In addition, the findings of the study could serve to inform universities that have Course Management Systems (CMSs), such as Moodle, about how to utilise them most efficiently and effectively. The findings could also help to inform universities about how to help faculty members acquire the skills necessary to incorporate CMSs into curricula and teaching practice. A limitation of this study was the use of a non-randomised sample, which limits the generalisability of the findings to this particular Japanese context.
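The survey analyses described above were run in SPSS; a minimal SciPy sketch of the same two tests (Spearman's rank-order correlation and Kruskal-Wallis) is shown below. The construct names and all scores are invented for illustration.

```python
from scipy import stats

# Invented Likert-scale construct scores for eight faculty respondents.
perceived_usefulness = [4, 5, 3, 4, 2, 5, 4, 3]
moodle_use = [3, 5, 2, 4, 1, 5, 4, 2]

# Spearman's rank-order correlation between two survey constructs.
rho, p_rho = stats.spearmanr(perceived_usefulness, moodle_use)

# Kruskal-Wallis test: Moodle-use scores across three faculty groups.
group_a = [3, 4, 5, 4]
group_b = [2, 3, 3, 2]
group_c = [1, 2, 2, 1]
h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)

print(f"rho = {rho:.2f}, H = {h_stat:.2f}")
```

Both tests are non-parametric, which suits ordinal Likert-style survey data where normality cannot be assumed.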
Abstract:
In Victoria, as in other jurisdictions, there is very little research on the potential risks and benefits of lane filtering by motorcyclists, particularly from a road safety perspective. This on-road proof-of-concept study aimed to investigate whether and how lane filtering influences motorcycle rider situation awareness at intersections, and to address factors that need to be considered in the design of a larger study in this area. Situation awareness refers to road users' understanding of ‘what is going on’ around them and is a critical commodity for safe performance. Twenty-five experienced motorcyclists rode their own instrumented motorcycles around an urban test route in Melbourne whilst providing verbal protocols. Lane filtering occurred in 27% of the 43 possible instances in which there were one or more vehicles in the traffic queue and the traffic lights were red on approach to the intersection. A network analysis procedure, based on the verbal protocols provided by the motorcyclists, was used to identify differences in situation awareness between filtering and non-filtering events. Although similarities in situation awareness across filtering and non-filtering motorcyclists were found, the analysis revealed some differences. For example, filtering motorcyclists placed more emphasis on the timing of the traffic light sequence and on their own actions when moving to the front of the traffic queue, whilst non-filtering motorcyclists paid greater attention to traffic moving through the intersection and approaching from behind. Based on these results, the paper discusses some methodological and theoretical issues to be addressed in a larger study comparing situation awareness between filtering and non-filtering motorcyclists.
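The network analysis of verbal protocols can be sketched in pure Python: build a weighted transition network between consecutively mentioned concepts in each transcript, then compare node degrees across conditions. The transcripts and concept names below are invented, and this is a simplified stand-in for the study's actual procedure.

```python
from collections import defaultdict

# Invented, highly simplified concept sequences from two verbal protocols.
filtering = ["light", "timing", "queue", "own_position", "light", "timing"]
non_filtering = ["traffic", "intersection", "behind", "traffic", "intersection"]

def concept_network(transcript):
    """Weighted edges between consecutively mentioned concepts."""
    edges = defaultdict(int)
    for a, b in zip(transcript, transcript[1:]):
        edges[(a, b)] += 1
    return edges

def weighted_degree(edges):
    """Total edge weight touching each concept node."""
    degree = defaultdict(int)
    for (a, b), weight in edges.items():
        degree[a] += weight
        degree[b] += weight
    return degree

deg_filtering = weighted_degree(concept_network(filtering))
print(dict(deg_filtering))
```

Comparing which concepts carry the highest degree in each condition mirrors the abstract's finding that filtering riders emphasised light timing and their own actions, while non-filtering riders emphasised surrounding traffic.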
Abstract:
Background and significance: Nurses' job dissatisfaction is associated with negative nursing and patient outcomes. One of the most powerful reasons for nurses to stay in an organisation is satisfaction with leadership. However, nurses are frequently promoted to leadership positions without appropriate preparation for the role. Although a number of leadership programs have been described, none have been tested for effectiveness using a randomised controlled trial methodology. Aims: The aims of this research were to develop an evidence-based leadership program and to test its effectiveness on nurse unit managers' (NUMs') and nursing staff's (NS's) job satisfaction, and on the leader behaviour scores of nurse unit managers. Methods: First, a comprehensive literature review examined the evidence on job satisfaction, leadership and front-line manager competencies. From this evidence a summary of leadership practices was developed to construct a two-component leadership model. The components of this model were then combined with the evidence distilled from previous leadership development programs to develop a Leadership Development Program (LDP). This evidence informed the program's design, its contents, teaching strategies and learning environment. Central to the LDP were the evidence-based leadership practices associated with increasing nurses' job satisfaction. A randomised controlled trial (RCT) design was employed to test the effectiveness of the LDP. An RCT is one of the most powerful research designs, and its use makes this study unique, as an RCT has never previously been used to evaluate a leadership program for front-line nurse managers. Thirty-nine consenting nurse unit managers from a large tertiary hospital were randomly allocated to receive either the leadership program or only the program's written information about leadership.
Demographic baseline data were collected from participants in the NUM groups and the nursing staff who reported to them. Validated questionnaires measuring job satisfaction and leader behaviours were administered to the nurse unit managers and to the NS at baseline, and at three and six months after the commencement of the intervention. Independent and paired t-tests were used to analyse continuous outcome variables and Chi-square tests were used for categorical data. Results: The study found that the nurse unit managers' overall job satisfaction score was higher in the intervention group than in the control group at 3 months (p = 0.016) and at 6 months (p = 0.027) post commencement of the intervention. Similarly, at 3-month testing, mean scores in the intervention group were higher in five of the six "positive" sub-categories of the leader behaviour scale when compared with the control group, with a significant difference in one sub-category, effectiveness (p = 0.015). No differences in leadership behaviour scores were observed between groups by 6 months post commencement of the intervention. Over time, at 3- and 6-month testing, there were significant increases in four transformational leader behaviour scores and in one positive transactional leader behaviour score in the intervention group. Over time at 3-month testing there were significant increases in the three leader behaviour outcome scores; however, at 6-month testing only one of these remained significantly increased. Job satisfaction scores were not significantly different between the NS groups at three and six months post commencement of the intervention. However, over time within the intervention group, at 6-month testing there was a significant increase in the job satisfaction scores of NS.
There were no significant increases in NUM leader behaviour scores in the intervention group as rated by the nursing staff who reported to them. Over time, at 3-month testing, NS rated nurse unit managers' leader behaviour scores significantly lower in two leader behaviours and two leader behaviour outcome scores. At 6-month testing, over time, one leader behaviour score was rated significantly lower and the non-transactional leader behaviour was rated significantly higher. Discussion: The study represents the first attempt to test the effectiveness of a leadership development program (LDP) for nurse unit managers using an RCT. The program's design, contents, teaching strategies and learning environment were based on a summary of the literature. The overall improvement in job satisfaction was sustained for at least 6 months post intervention. The study's results may reflect the evidence-based approach taken to developing the LDP, which increased the nurse unit managers' confidence in their role and thereby their job satisfaction. Two other factors possibly contributed to nurse unit managers' increased job satisfaction scores: the program's teaching strategies, which included the involvement of the hospital's executive nursing team, and the fact that the LDP provided recognition of the importance of the NUM role within the hospital. Consequently, participating in the program may have led to nurse unit managers feeling valued and rewarded for their service, and hence more satisfied. The lack of change in leadership behaviours between groups at the 6-month data collection point may indicate that the LDP needs to be conducted over a longer period. This is suggested because, within the intervention group, there were significant increases in self-reported leader behaviours over time at 3 and 6 months.
The lack of significant changes in leader behaviour scores between groups may equally signify that leader behaviours require different interventions to achieve change. The nursing staff results suggest that the LDP's design needs to consider involving NS in the program's aims and progress from the outset. It is also possible that including regular feedback from NS to the nurse unit managers during the LDP may alter NS's job satisfaction and their perception of nurse unit managers' leader behaviours. Conclusion/Implications: This study highlights the value of providing an evidence-based leadership program to nurse unit managers to increase their job satisfaction. The evidence-based leadership program increased job satisfaction, but its effect on leadership behaviour was only seen over time. Further research is required to test interventions that attempt to change leader behaviours. Further research on NS's job satisfaction is also required to test the indirect effects of LDPs on NS whose nurse unit managers participate in them.
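The paired-comparison step described above (baseline vs follow-up scores within one group) can be sketched by computing the paired t statistic directly with the standard formula t = mean(d) / (sd(d) / sqrt(n)). The satisfaction scores below are invented for illustration and do not come from the study.

```python
import math
import statistics

# Invented job-satisfaction scores for five managers: baseline vs 3 months.
baseline = [3.1, 3.4, 2.9, 3.0, 3.5]
followup = [3.6, 3.8, 3.2, 3.4, 3.9]

diffs = [f - b for f, b in zip(followup, baseline)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)           # sample standard deviation of differences
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t statistic, df = n - 1

print(f"t({n - 1}) = {t_stat:.2f}")
```

With df = 4, a |t| above the two-tailed critical value of about 2.78 would be significant at the 0.05 level.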
Abstract:
Early warning based on real-time prediction of rain-induced instability of natural residual slopes can help minimise human casualties from such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks; the results can be used to warn against potential rain-induced slope failures. The literature review on rain-induced instability of unsaturated residual soil slopes with cracks revealed that only limited work has been done in the following areas:
- methods for detecting deep cracks in residual soil slopes;
- practical application of unsaturated soil theory in slope stability analysis; and
- mechanistic methods for real-time prediction of rain-induced instability of critical residual soil slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all relevant field geometry, including the slope, roads, rivers, buildings, and slope boundaries. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks.
The two ERT array models employed in this research were Dipole-dipole and Azimuthal. Next, bore-hole tests were conducted at different locations on the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis; at the same bore-hole locations, Standard Penetration Tests (SPT) were undertaken. Undisturbed samples taken from the bore-holes were tested in the laboratory to determine the variation of the following soil properties with depth:
- classification and physical properties, such as grain size distribution, Atterberg limits, water content, dry density and specific gravity;
- saturated and unsaturated shear strength properties, using a direct shear apparatus;
- soil water characteristic curves (SWCC), using the filter paper method; and
- saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope: (1) the electrical resistivity distribution of the sub-soil obtained from ERT; (2) the profile of classification and physical properties of the soil, based on laboratory testing of bore-hole samples and visual observation of cracks on the slope surface; and (3) the stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area. It was assumed that the deep cracks in the slope under investigation were generated by earthquakes. Good agreement was obtained when comparing the location and orientation of the cracks detected by Method-1 and Method-2. However, the cracks simulated by Method-3 did not agree well with the output of Method-1 and Method-2, which may be due to the material properties used and the assumptions made for the analysis.
From Method-1 and Method-2, it can be concluded that ERT can be used to detect the location and orientation of a crack in a soil slope when it is conducted in very dry or very wet soil conditions. In this study, the cracks detected by ERT were used for the stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained in SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling a transient seepage analysis of the slope performed in the finite-element-based software SEEP/W. A parametric study on the stability of an investigated slope revealed that the existence of deep cracks, and their location in the soil slope, are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks. (a) Step-1: a transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the current date, using measured rainfall data; the stability analyses are then continued for the next 12 months using annual rainfall predicted from the previous five years' rainfall data for the area. (b) Step-2: the stability of the slope is calculated in real time using measured rainfall; rainfall is predicted for the next hour or 24 hours, and the stability of the slope is calculated one hour or 24 hours in advance. If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against future failure of the slope.
In this research, the application of Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value during 2012 (until 31st December 2012), and therefore the application of Step-2 was not necessary for that year. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide occurred at Slope-2 on 31st October 2010. Transient seepage and stability analyses of the slope, using data obtained from field tests (bore-hole, SPT and ERT) and laboratory tests, were conducted on 12th June 2010 following Step-1, and found that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure with sufficient warning time.
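The study's factor-of-safety calculations used SLOPE/W with SEEP/W pore pressures; as a much simpler, hypothetical illustration of the underlying mechanism (rising pore-water pressure eroding the factor of safety), the classic infinite-slope expression can be evaluated directly. All parameter values below are invented and are not the study's measured soil properties.

```python
import math

def infinite_slope_fos(c_kpa, gamma, depth, beta_deg, phi_deg, pore_pressure):
    """Infinite-slope factor of safety with pore-water pressure u (kPa).

    FOS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi')) /
          (gamma*z*sin(beta)*cos(beta))
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2 - pore_pressure
    resisting = c_kpa + normal * math.tan(phi)
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Invented parameters: c' = 10 kPa, gamma = 18 kN/m3, slip depth 3 m,
# slope angle 30 deg, phi' = 30 deg.
fos_dry = infinite_slope_fos(10, 18, 3, 30, 30, pore_pressure=0)
fos_wet = infinite_slope_fos(10, 18, 3, 30, 30, pore_pressure=20)

print(f"dry FOS = {fos_dry:.2f}, wet FOS = {fos_wet:.2f}")
```

Rainfall infiltration raises u, shrinking the effective normal stress and hence the shear resistance; in this invented example the slope drops from stable (FOS > 1) to unstable (FOS < 1), which is the behaviour the real-time prediction scheme monitors.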
Abstract:
The inconsistent findings of past board diversity research demand a test of competing linear and curvilinear diversity–performance predictions. This research focuses on board age and gender diversity, and presents a positive linear prediction based on resource dependence theory, a negative linear prediction based on social identity theory, and an inverted U-shaped curvilinear prediction based on the integration of resource dependence theory with social identity theory. The predictions were tested using archival data on 288 large organizations listed on the Australian Securities Exchange, with a 1-year time lag between diversity (age and gender) and performance (employee productivity and return on assets). The results indicate a positive linear relationship between gender diversity and employee productivity, a negative linear relationship between age diversity and return on assets, and an inverted U-shaped curvilinear relationship between age diversity and return on assets. The findings provide additional evidence on the business case for board gender diversity and refine the business case for board age diversity.
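An inverted U-shaped prediction of this kind is typically tested by adding a squared diversity term to the regression and checking the sign of its coefficient. A minimal numpy sketch on invented data illustrates the check; the variable names echo the abstract but the data are fabricated for illustration only.

```python
import numpy as np

# Invented data in which return on assets peaks at moderate age diversity.
age_diversity = np.linspace(0, 10, 21)
roa = -(age_diversity - 5) ** 2 + 25

# Fit ROA = b2*d^2 + b1*d + b0; an inverted U implies b2 < 0 with an
# interior maximum at d = -b1 / (2*b2).
b2, b1, b0 = np.polyfit(age_diversity, roa, 2)

print(f"b2 = {b2:.2f}, turning point at d = {-b1 / (2 * b2):.1f}")
```

In practice the curvilinear term is entered alongside controls in a hierarchical regression, but the sign test on the quadratic coefficient is the same.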
Abstract:
Introduction Malnutrition is common among hospitalised patients, with poor follow-up of nutrition support post-discharge. Published studies on the efficacy of ambulatory nutrition support (ANS) for malnourished patients post-discharge are scarce. The aims of this study were to evaluate the rate of dietetics follow-up of malnourished patients post-discharge, before (2008) and after (2010) implementation of a new ANS service, and to evaluate nutritional outcomes post-implementation. Materials and Methods Consecutive samples of 261 (2008) and 163 (2010) adult inpatients referred to dietetics and assessed as malnourished using Subjective Global Assessment (SGA) were enrolled. All subjects received inpatient nutrition intervention and dietetic outpatient clinic follow-up appointments. For the 2010 cohort, ANS was initiated to provide telephone follow-up and home visits for patients who failed to attend the outpatient clinic. Subjective Global Assessment, body weight, quality of life (EQ-5D VAS) and handgrip strength were measured at baseline and five months post-discharge. Paired t-test was used to compare pre- and post-intervention results. Results In 2008, only 15% of patients returned for follow-up with a dietitian within four months post-discharge. After implementation of ANS in 2010, the follow-up rate was 100%. Mean weight improved from 44.0 ± 8.5 kg to 46.3 ± 9.6 kg, EQ-5D VAS from 61.2 ± 19.8 to 71.6 ± 17.4, and handgrip strength from 15.1 ± 7.1 kg force to 17.5 ± 8.5 kg force (p < 0.001 for all). Seventy-four percent of patients improved in SGA score. Conclusion Ambulatory nutrition support resulted in significant improvements in follow-up rate, nutritional status and quality of life of malnourished patients post-discharge.
Abstract:
The pull-through/local dimpling failure strength of screwed connections is very important in the design of profiled steel cladding systems to help them resist storms and hurricanes. The current American and European provisions recommend four different test methods for screwed connections in tension, but the accuracy of these methods in determining the connection strength is not known. It is unlikely that the four test methods are equivalent in all cases, and thus it is necessary to reduce the number of methods recommended. This paper presents a review of these test methods based on laboratory tests on crest- and valley-fixed claddings, and then recommends alternative test methods that reproduce the real behavior of the connections, including the bending and membrane deformations of the cladding around the screw fasteners and the tension load in the fastener.
Abstract:
Background & Aims: Access to sufficient amounts of safe and culturally-acceptable foods is a fundamental human right. Food security exists when all people, at all times, have physical, social, and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life. Food insecurity therefore occurs when the availability of, or access to, sufficient amounts of nutritionally-adequate, culturally-appropriate and safe foods, or the ability to acquire such foods in socially-acceptable ways, is limited. Food insecurity may result in significant adverse effects for the individual, and these outcomes may vary between adults and children. Among adults, food insecurity may be associated with overweight or obesity, poorer self-rated general health, depression, increased health-care utilisation and dietary intakes less consistent with national recommendations. Among children, food insecurity may result in poorer self- or parent-reported general health, behavioural problems, lower levels of academic achievement and poor social outcomes. The majority of research investigating the potential correlates of food insecurity has been undertaken in the United States (US), where regular national screening for food insecurity is undertaken using a comprehensive multi-item measure. In Australia, screening for food insecurity takes place on a three-yearly basis via a crude, single-item measure included in the National Health Survey (NHS). This measure has been shown to underestimate the prevalence of food insecurity by five percentage points. From 1995 to 2004, the prevalence of food insecurity among the Australian population remained stable at 5%. Due to the perceived low prevalence of this issue, screening for food insecurity was not undertaken in the most recent NHS. Furthermore, there are few Australian studies investigating the potential determinants of food insecurity and none investigating potential outcomes among adults and children.
This study aimed to examine these issues by a) investigating the prevalence of food insecurity among households residing in disadvantaged urban areas, comparing prevalence rates estimated by the more comprehensive 18-item and 6-item United States Department of Agriculture (USDA) Food Security Survey Module (FSSM) to those estimated by the current single-item measure used for surveillance in Australia; and b) investigating the potential determinants and outcomes of food insecurity. Methods: A comprehensive literature review was undertaken to investigate the potential determinants and consequences of food insecurity in developed countries. This was followed by a cross-sectional study in which 1000 households from the most disadvantaged 5% of Brisbane areas were sampled and data collected via a mail-based survey (final response rate = 53%, n = 505). Data were collected on food security status, sociodemographic characteristics (household income, education, age, gender, employment status, housing tenure and living arrangements), fruit and vegetable intakes, meat and take-away consumption, presence of depressive symptoms, presence of chronic disease, and body mass index (BMI) among adults. Among children, data pertaining to BMI, parent-reported general health, days away from school and activities, and behavioural problems were collected. Rasch analysis was used to investigate the psychometric properties of the 18-, 10- and 6-item adaptations of the USDA-FSSM, and McNemar's test was used to investigate the difference in the prevalence of food insecurity as measured by these three adaptations compared with the current single-item measure used in Australia. Chi-square tests and logistic regression were used to investigate differences in dietary and health outcomes among adults, and health and behavioural outcomes among children. Results were adjusted for equivalised household income and, where necessary, for indigenous status, education and family type.
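Rasch analysis, used above to test the psychometric properties of the FSSM adaptations, models the probability that a household affirms a food-insecurity item as a logistic function of person severity theta and item difficulty b. A minimal sketch of the dichotomous Rasch model with invented parameters:

```python
import math

def rasch_probability(theta, difficulty):
    """Probability of affirming an item under the dichotomous Rasch model:
    P(X=1) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# A household whose severity equals the item's difficulty has a 50% chance
# of affirming the item; more severe households are more likely to affirm.
p_equal = rasch_probability(0.0, 0.0)
p_severe = rasch_probability(2.0, 0.0)

print(f"{p_equal:.2f}, {p_severe:.2f}")
```

Fit statistics from this model are what indicate whether the shorter 10- and 6-item adaptations order households along the same severity scale as the full 18-item module.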
Results: Overall, 25% of households in these urbanised, disadvantaged areas reported experiencing food insecurity; this increased to 34% when only households with children were analysed. The current reliance on a single-item measure to screen for food insecurity may underestimate the true burden among the Australian population, as this measure was shown to significantly underestimate the prevalence of food insecurity by five percentage points. Internationally, major potential determinants of food insecurity included poverty and indicators of poverty, such as low income, unemployment and lower levels of education. Ethnicity, age, transportation, and cooking and financial skills were also found to be potential determinants of food insecurity. Among Australian adults in disadvantaged urban areas, food insecurity was associated with a three-fold increase in the risk of poorer self-rated general health and a two-to-five-fold increase in the risk of depression. Furthermore, adults from food-insecure households were two-to-three times more likely to have seen a general practitioner and/or been admitted to hospital within the previous six months, compared with their food-secure counterparts. Weight status and intakes of fruits, vegetables and meat were not associated with food insecurity. Among Australian households with children, those in the lowest income tertile were over 16 times more likely to experience food insecurity than those in the highest income tertile. After adjustment for equivalised household income, children from food-insecure households were three times more likely to have missed days of school or other activities. Furthermore, children from food-insecure households displayed a two-fold increase in atypical emotions and behavioural difficulties.
Conclusions: Food insecurity is an important public health issue and may contribute to the burden on the health-care system through its associations with depression and increased health-care utilisation among adults, and behavioural and emotional problems among children. Current efforts to monitor food insecurity in Australia are infrequent and use a tool that may underestimate its prevalence. Efforts should be made to improve the regularity of screening for food insecurity via a more accurate screening measure. Most current strategies that aim to alleviate food insecurity do not sufficiently address insufficient financial resources for acquiring food, an important determinant of food insecurity. Programs to address this issue should be developed in collaboration with groups at higher risk of food insecurity and should incorporate strategies to address low income as a barrier to food acquisition.
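Effect sizes like the "three-fold increase" reported above are typically odds ratios from logistic regression; for a single binary exposure the odds ratio and its Wald confidence interval can be computed directly from a 2x2 table. The counts below are invented and do not reproduce the study's data.

```python
import math

# Invented 2x2 table: rows = food insecure / food secure,
# columns = poor self-rated health yes / no.
a, b = 30, 70   # food insecure: poor health, good health
c, d = 10, 90   # food secure:   poor health, good health

odds_ratio = (a * d) / (b * c)

# 95% confidence interval via the standard error of the log odds ratio.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

A confidence interval that excludes 1 corresponds to a statistically significant association; in full analyses the odds ratio would be adjusted for covariates (here, equivalised household income) inside a logistic regression rather than read off a raw table.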
Abstract:
Diagnosis threat is a psychosocial factor that has been proposed to contribute to poor outcomes following mild traumatic brain injury (mTBI). This threat is thought to impair the cognitive test performance of individuals with mTBI because of negative injury stereotypes. University students (N = 45, 62.2% female) with a history of mTBI were randomly allocated to a diagnosis threat (DT, n = 15), reduced threat (DT-reduced, n = 15) or neutral (n = 15) group. The reduced threat condition invoked a positive stereotype (i.e., that people with mTBI can perform well on cognitive tests). All participants were given neutral instructions before they completed baseline tests of: a) objective cognitive function across a number of domains; b) psychological symptoms; and c) post-concussion syndrome (PCS) symptoms, including self-reported cognitive and emotional difficulties. Participants then received either neutral, DT or DT-reduced instructions before repeating the tests. Results were analysed using separate mixed model ANOVAs, one for each dependent measure. The only significant result was for the 2 × 3 ANOVA on an objective test of attention/working memory, Digit Span, p < .05, such that the DT-reduced group performed better than the other groups, which did not differ from each other. Although not consistent with predictions or earlier DT studies, the absence of group differences on most tests fits with several recent DT findings. The results of this study suggest that it is timely to reconsider the role of DT as a unique contributor to poor mTBI outcome.
Abstract:
Asset service organisations often recognise asset management as a core competence to deliver benefits to their business. But how do organisations know whether their asset management processes are adequate? Asset management maturity models, which combine best practices and competencies, provide a useful approach to test the capacity of organisations to manage their assets. Asset management frameworks are required to meet the dynamic challenges of managing assets in contemporary society. Although existing models are subject to wide variations in their implementation and sophistication, they also display a distinct weakness in that they tend to focus primarily on the operational and technical level and neglect the levels of strategy, policy and governance, as well as the social and human resources – the people elements. Moreover, asset management maturity models have to respond to external environmental factors, such as climate change and sustainability, stakeholders, and community demand management. Drawing on five dimensions of effective asset management – spatial, temporal, organisational, statistical, and evaluation – as identified by Amadi Echendu et al. [1], this paper carries out a comprehensive comparative analysis of six existing maturity models to identify the gaps in key process areas. Results suggest incorporating these key process areas into an integrated approach to assessing the maturity of asset-intensive organisations. It is contended that the adoption of an integrated asset management maturity model will enhance effective and efficient delivery of services.
Abstract:
Consumer awareness and usage of Unit Price (UP) information continues to hold academic interest. Originally designed as a device to enable shoppers to make comparisons between grocery products, it is argued consumers still lack a sufficient understanding of the device. Previous research has tended to focus on product choice, effect of time, and structural changes to price presentation. No studies have tested the effect of UP consumer education on grocery shopping expenditure. Supported by distributed learning theories, this is the first study to condition participants over a twenty-week period to comprehend and employ UP information while shopping. A 3 × 5 mixed factorial design was employed to collect data from 357 shoppers. A 3 (Control, Massed, Spaced) × 5 (Time Point: Week 0, 5, 10, 15 and 20) mixed factorial analysis of variance (ANOVA) was performed to analyse the data. Preliminary results revealed that the three groups differed in their average expenditure over the twenty weeks. The Control group remained stable across the five time points. Results indicated that both intensive (Massed) and less intensive (Spaced) exposure to UP information achieved similar results, with both groups reducing average expenditure similarly by Week 5. These patterns held for twenty weeks, with conditioned groups reducing their grocery expenditure by over 10%. This research has academic value as a test of applied learning theories. We argue retailers can attain considerable market advantages, as efforts to enhance customers’ knowledge through consumer education campaigns can have a positive and strong impact on customer trust and goodwill toward the organisation. Hence, major practical implications for both regulators and retailers exist.
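The mixed factorial design described above (a between-subjects group factor crossed with a within-subjects time factor) can be sketched computationally. The following is a minimal illustration, not the authors' analysis: it partitions the sums of squares of a simulated 3 groups × 5 time points dataset by hand and returns the three F ratios (group, time, group × time). The simulated expenditure values, subject count per group, and effect sizes are all assumptions for demonstration only.

```python
import numpy as np

def mixed_anova(data):
    """Two-way mixed ANOVA on data shaped (groups, subjects, timepoints).

    Group is between-subjects; time is within-subjects (repeated measures).
    Returns F ratios and the sums-of-squares partition.
    """
    a, n, b = data.shape
    gm = data.mean()

    # Between-subjects partition
    subj_means = data.mean(axis=2)                      # (a, n)
    ss_between_subj = b * ((subj_means - gm) ** 2).sum()
    group_means = data.mean(axis=(1, 2))                # (a,)
    ss_group = n * b * ((group_means - gm) ** 2).sum()
    ss_subj_within = ss_between_subj - ss_group         # between-subjects error

    # Within-subjects partition
    ss_total = ((data - gm) ** 2).sum()
    time_means = data.mean(axis=(0, 1))                 # (b,)
    ss_time = a * n * ((time_means - gm) ** 2).sum()
    cell_means = data.mean(axis=1)                      # (a, b)
    ss_inter = n * ((cell_means - group_means[:, None]
                     - time_means[None, :] + gm) ** 2).sum()
    ss_err = ss_total - ss_between_subj - ss_time - ss_inter  # within error

    df_err = a * (n - 1) * (b - 1)
    return {
        "F_group": (ss_group / (a - 1)) / (ss_subj_within / (a * (n - 1))),
        "F_time": (ss_time / (b - 1)) / (ss_err / df_err),
        "F_interaction": (ss_inter / ((a - 1) * (b - 1))) / (ss_err / df_err),
        "ss": (ss_group, ss_subj_within, ss_time, ss_inter, ss_err, ss_total),
    }

# Hypothetical data loosely echoing the design: Control stays flat while the
# Massed and Spaced groups drop after Week 0 (values are invented, not real).
rng = np.random.default_rng(0)
base = rng.normal(100.0, 10.0, size=(3, 119, 1))        # subject baselines
effect = np.array([[0, 0, 0, 0, 0],                     # Control
                   [0, -10, -10, -10, -10],             # Massed
                   [0, -10, -10, -10, -10]])            # Spaced
data = base + effect[:, None, :] + rng.normal(0, 5.0, size=(3, 119, 5))
res = mixed_anova(data)
```

In practice such an analysis would normally be run through a statistics package rather than by hand; the by-hand partition is shown only to make the design's error structure (separate between- and within-subjects error terms) explicit.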
Death of a five-year-old from meningococcal disease in Darwin: a case of unprecedented public alarm
Abstract:
On Saturday, 25 October 1997, a five-year-old boy died in the Intensive Care Unit (ICU) of Royal Darwin Hospital (RDH) from meningococcal disease. While the disease is expected throughout Australia during the late winter and early spring months, and deaths do occur, this case was remarkable in the Northern Territory for the unprecedented public response to media reports of the death.