275 results for demand-side management
Abstract:
This study examined the effects of role demand on both work–family conflict and family–work conflict, and the moderating effects of role salience and support on these relationships. Based on 391 dual-career couples (managerial and blue-collar employees) from a Taiwanese company in China, the results of this survey study showed clear gender differences in the patterns of relationships observed. For men, the demands that most strongly increased work–family conflict were frequency of overtime and frequency of socializing for work purposes (yingchou), and supervisory support buffered the negative impact of frequent overtime. For women, however, strong supervisory support and low work role salience were more important for reducing work–family conflict, and no significant main effect was found for any of the role demand factors. Furthermore, women with high work role salience were more likely to feel the impact of yingchou on work–family conflict. In the family domain, the most influential demand for men was hours spent on household tasks, but for women it was the frequency of family-related leave. Interestingly, males reported higher family role salience than females, and spouse support intensified rather than buffered the positive impact of hours spent on household tasks on family–work conflict for males.
Abstract:
Organisations constantly seek efficiency gains for their business processes in terms of time and cost. Management accounting enables detailed cost reporting of business operations for decision-making purposes, although significant effort is required to gather accurate operational data. Process mining, on the other hand, may provide valuable insight into processes through analysis of events recorded in logs by IT systems, but its primary focus is not on cost implications. In this paper, a framework is proposed which aims to exploit the strengths of both fields in order to better support management decisions on cost control. This is achieved by automatically merging cost data with historical data from event logs for the purposes of monitoring, predicting, and reporting process-related costs. The on-demand generation of accurate, relevant and timely cost reports, in a style akin to reports in the area of management accounting, is also illustrated; this is achieved by extending the open-source process mining framework ProM.
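The kind of merge this framework automates can be sketched loosely as follows. The data model, resource rates, and log entries below are invented for illustration and do not reflect ProM's actual API or the paper's implementation:

```python
from collections import defaultdict

# Hypothetical sketch: join per-resource cost rates (management-accounting data)
# with events from an event log, then roll costs up per case for on-demand
# case-level cost reporting. All names and numbers are assumptions.

hourly_rates = {"clerk": 30.0, "manager": 80.0}  # assumed cost data

event_log = [  # assumed event log: (case_id, activity, resource, duration_hours)
    ("c1", "register claim", "clerk", 0.5),
    ("c1", "approve claim", "manager", 0.25),
    ("c2", "register claim", "clerk", 0.75),
]

cost_per_case = defaultdict(float)
for case_id, activity, resource, hours in event_log:
    # cost of an event = time spent by the resource * that resource's rate
    cost_per_case[case_id] += hours * hourly_rates[resource]
```

Aggregating the same annotated events by activity or resource instead of case would yield the other report styles the framework targets.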
Abstract:
Previous research into the potential ‘dark’ side of trait emotional intelligence (EI) has repeatedly demonstrated that trait EI is negatively associated with Machiavellianism. In this study, we reassess the potential dark side of trait EI by testing whether Agreeableness mediates and/or moderates the relationship between trait EI and Machiavellianism. Hypothesized mediation and moderation effects were tested using a large sample of 884 workers who completed several self-report questionnaires. Results provide support for both hypotheses: Agreeableness was found to both mediate and moderate the relationship between trait EI and Machiavellianism. Overall, results indicate that individuals high in trait EI tend to have low levels of Machiavellianism because they generally have a positive nature (i.e. are agreeable) and not because they are emotionally competent per se. Results also indicate that individuals high in ‘perceived emotional competence’ have the potential to be high in Machiavellianism, particularly when they are low in Agreeableness.
Abstract:
It has been reported that poor nutritional status, in the form of weight loss and resulting body mass index (BMI) changes, is an issue in people with Parkinson's disease (PWP). The symptoms resulting from Parkinson's disease (PD) and the side effects of PD medication have been implicated in the aetiology of nutritional decline. However, the evidence on which these claims are based is, on one hand, contradictory, and on the other, restricted primarily to otherwise healthy PWP. Despite the claims that PWP suffer from poor nutritional status, evidence is lacking to inform nutrition-related care for the management of malnutrition in PWP. The aims of this thesis were to better quantify the extent of poor nutritional status in PWP, determine the important factors differentiating the well-nourished from the malnourished, and evaluate the effectiveness of an individualised nutrition intervention on nutritional status.

Phase DBS: Nutritional status in people with Parkinson's disease scheduled for deep-brain stimulation surgery. The pre-operative rate of malnutrition in a convenience sample of PWP scheduled for deep-brain stimulation (DBS) surgery was determined. Poorly controlled PD symptoms may result in a higher risk of malnutrition in this sub-group of PWP. Fifteen patients (11 male, median age 68.0 (42.0 – 78.0) years, median PD duration 6.75 (0.5 – 24.0) years) participated and data were collected during hospital admission for the DBS surgery. The scored PG-SGA was used to assess nutritional status; anthropometric measures (weight, height, mid-arm circumference, waist circumference, body mass index (BMI)) were taken; and body composition was measured using bioelectrical impedance spectroscopy (BIS). Six (40%) of the participants were malnourished (SGA-B), while 53% reported significant weight loss following diagnosis. BMI was significantly different between SGA-A and SGA-B (25.6 vs 23.0 kg/m2, p<.05).
There were no differences in any other variables, including PG-SGA score and the presence of non-motor symptoms. The conclusion was that malnutrition in this group is higher than that in other studies reporting malnutrition in PWP, and it is under-recognised. As poorer surgical outcomes are associated with poorer pre-operative nutritional status in other surgeries, it might be beneficial to identify patients at nutritional risk prior to surgery so that appropriate nutrition interventions can be implemented.

Phase I: Nutritional status in community-dwelling adults with Parkinson's disease. The rate of malnutrition in community-dwelling adults (>18 years) with Parkinson's disease was determined. One hundred twenty-five PWP (74 male, median age 70.0 (35.0 – 92.0) years, median PD duration 6.0 (0.0 – 31.0) years) participated. The scored PG-SGA was used to assess nutritional status, and anthropometric measures (weight, height, mid-arm circumference (MAC), calf circumference, waist circumference, body mass index (BMI)) were taken. Nineteen (15%) of the participants were malnourished (SGA-B). All anthropometric indices were significantly different between SGA-A and SGA-B (BMI 25.9 vs 20.0 kg/m2; MAC 29.1 vs 25.5 cm; waist circumference 95.5 vs 82.5 cm; calf circumference 36.5 vs 32.5 cm; all p<.05). The PG-SGA score also differed significantly between the well-nourished and the malnourished (2 vs 8, p<.05). The nutrition impact symptoms which differentiated between well-nourished and malnourished were no appetite, constipation, diarrhoea, problems swallowing and feeling full quickly. This study concluded that malnutrition in community-dwelling PWP is higher than that documented in community-dwelling elderly (2 – 11%), yet is likely to be under-recognised. Nutrition impact symptoms play a role in reduced intake. Appropriate screening and referral processes should be established for early detection of those at risk.
Phase I: Nutrition assessment tools in people with Parkinson's disease. There are a number of validated and reliable nutrition screening and assessment tools available for use. None of these tools had been evaluated in PWP. In the sample described above, the World Health Organisation (WHO) BMI cut-off (≤18.5 kg/m2), age-specific BMI cut-offs (≤18.5 kg/m2 for those under 65 years, ≤23.5 kg/m2 for those 65 years and older) and the revised Mini-Nutritional Assessment short form (MNA-SF) were evaluated as nutrition screening tools. The PG-SGA (including the SGA classification) and the MNA full form were evaluated as nutrition assessment tools, using the SGA classification as the gold standard. For screening, the MNA-SF performed best, with a sensitivity (Sn) of 94.7% and a specificity (Sp) of 78.3%. For assessment, the PG-SGA with a cut-off score of 4 (Sn 100%, Sp 69.8%) performed better than the MNA (Sn 84.2%, Sp 87.7%). As the MNA has been recommended more for use as a nutrition screening tool, the MNA-SF might be more appropriate and takes less time to complete. The PG-SGA might be useful to inform and monitor nutrition interventions.

Phase I: Predictors of poor nutritional status in people with Parkinson's disease. A number of assessments were conducted as part of the Phase I research, including those for the severity of PD motor symptoms, cognitive function, depression, anxiety, non-motor symptoms, constipation, freezing of gait and the ability to carry out activities of daily living. A higher score in each of these assessments indicates greater impairment. In addition, information about medical conditions, medications, age, age at PD diagnosis and living situation was collected. These were compared between those classified as SGA-A and as SGA-B. Regression analysis was used to identify which factors were predictive of malnutrition (SGA-B).
Differences between the groups included disease severity (4% more severe SGA-A vs 21% SGA-B, p<.05), activities of daily living score (13 SGA-A vs 18 SGA-B, p<.05), depressive symptom score (8 SGA-A vs 14 SGA-B, p<.05) and gastrointestinal symptoms (4 SGA-A vs 6 SGA-B, p<.05). Significant predictors of malnutrition according to SGA were age at diagnosis (OR 1.09, 95% CI 1.01 – 1.18), amount of dopaminergic medication per kg body weight (mg/kg) (OR 1.17, 95% CI 1.04 – 1.31), more severe motor symptoms (OR 1.10, 95% CI 1.02 – 1.19), less anxiety (OR 0.90, 95% CI 0.82 – 0.98) and more depressive symptoms (OR 1.23, 95% CI 1.07 – 1.41). Significant predictors of a higher PG-SGA score included living alone (β=0.14, 95% CI 0.01 – 0.26), more depressive symptoms (β=0.02, 95% CI 0.01 – 0.02) and more severe motor symptoms (β=0.01, 95% CI 0.01 – 0.02). More severe disease is associated with malnutrition, and this may be compounded by lack of social support.

Phase II: Nutrition intervention. Nineteen of the people identified in Phase I as requiring nutrition support were included in Phase II, in which a nutrition intervention was conducted. Nine participants were in the standard care group (SC), which received an information sheet only, and the other 10 participants were in the intervention group (INT), which received individualised nutrition information and weekly follow-up. The INT group gained 2.2% of starting body weight over the 12-week intervention period, resulting in significant increases in weight, BMI, mid-arm circumference and waist circumference. The SC group gained 1% of starting weight over the 12 weeks, which did not result in any significant changes in anthropometric indices. Energy and protein intake (18.3 kJ/kg vs 3.8 kJ/kg and 0.3 g/kg vs 0.15 g/kg) increased in both groups. The increase in protein intake was only significant in the SC group. The changes in intake did not differ between the groups.
There were no significant changes in any motor or non-motor symptoms, or in "off" times or dyskinesias, in either group. Aspects of quality of life, especially emotional well-being, improved over the 12 weeks. This thesis makes a significant contribution to the evidence base for the presence of malnutrition in Parkinson's disease and for the identification of those who would potentially benefit from nutrition screening and assessment. The nutrition intervention demonstrated that a traditional high-protein, high-energy approach to the management of malnutrition resulted in improved nutritional status and anthropometric indices, with no effect on Parkinson's disease symptoms and a positive effect on quality of life.
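The screening-tool comparison in Phase I rests on sensitivity and specificity against the SGA gold standard. A minimal sketch of that computation follows; the confusion counts are invented to roughly reproduce the reported MNA-SF figures (94.7% / 78.3%) and are not the study's raw data:

```python
# Hypothetical sketch: evaluate a screening tool against a gold-standard
# malnutrition classification. Counts below are reconstructed assumptions.

def sensitivity_specificity(results):
    """results: list of (screen_positive, truly_malnourished) boolean pairs."""
    tp = sum(1 for s, m in results if s and m)          # flagged, malnourished
    fn = sum(1 for s, m in results if not s and m)      # missed, malnourished
    tn = sum(1 for s, m in results if not s and not m)  # cleared, well-nourished
    fp = sum(1 for s, m in results if s and not m)      # flagged, well-nourished
    sensitivity = tp / (tp + fn)  # proportion of malnourished correctly flagged
    specificity = tn / (tn + fp)  # proportion of well-nourished correctly cleared
    return sensitivity, specificity

# Invented example mirroring a 125-person sample with 19 malnourished:
# 18 of 19 malnourished flagged, 83 of 106 well-nourished cleared.
example = ([(True, True)] * 18 + [(False, True)] * 1 +
           [(False, False)] * 83 + [(True, False)] * 23)
sn, sp = sensitivity_specificity(example)
```

The same routine applied at different PG-SGA cut-off scores is how a cut-off such as 4 would be selected.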
Abstract:
Asset service organisations often recognise asset management as a core competence for delivering benefits to their business. But how do organisations know whether their asset management processes are adequate? Asset management maturity models, which combine best practices and competencies, provide a useful approach to test the capacity of organisations to manage their assets. Asset management frameworks are required to meet the dynamic challenges of managing assets in contemporary society. Although existing models are subject to wide variations in their implementation and sophistication, they also display a distinct weakness: they tend to focus primarily on the operational and technical level and neglect the levels of strategy, policy and governance, as well as the social and human resources (the people elements). Moreover, asset management maturity models have to respond to external environmental factors, including climate change and sustainability, stakeholders and community demand management. Drawing on five dimensions of effective asset management (spatial, temporal, organisational, statistical, and evaluation) as identified by Amadi-Echendu et al. [1], this paper carries out a comprehensive comparative analysis of six existing maturity models to identify the gaps in key process areas. Results suggest incorporating these into an integrated approach to assess the maturity of asset-intensive organisations. It is contended that the adoption of an integrated asset management maturity model will enhance effective and efficient delivery of services.
Abstract:
Executive Summary: Emergency health is a critical component of Australia's health system, and emergency departments (EDs) are increasingly congested from growing demand and blocked access to inpatient beds. The Emergency Health Services Queensland (EHSQ) study aims to identify the factors driving increased demand for emergency health and to evaluate strategies which may safely reduce future demand growth. This monograph addresses the perspectives of users of both ambulance services and EDs. The research reported here aimed to identify the perspectives of users of emergency health services (both ambulance services and public hospital EDs) and the factors they took into consideration when exercising their choice of location for acute health care. A cross-sectional survey design was used, involving a survey of patients or their carers presenting to the EDs of a stratified sample of eight hospitals. A specific-purpose questionnaire was developed based on a novel theoretical model which had been derived from analysis of the literature (Monograph 1). Two survey versions were developed: one for adult patients (self-complete) and one for children (to be completed by parents/guardians). The questionnaires measured perceptions of social support, health status, illness severity and self-efficacy; beliefs and attitudes towards ED and ambulance services; reasons for using these services; and actions taken prior to the service request. The survey was conducted at a stratified sample of eight hospitals representing major cities (four), inner regional (two) and outer regional and remote (two). Due to practical limitations, data were collected for ambulance and ED users within hospital EDs, while patients were waiting for or under treatment. A sample size quota was determined for each ED based on its 2009/10 presentation volumes.
The data collection was conducted by four members of the research team and a group of eight interviewers between March and May 2011 (corresponding to the autumn season). Of the total of 1608 patients in all eight emergency departments, the interviewers were able to approach 1361 (85%) patients and seek their consent to participate in the study. In total, 911 valid surveys were available for analysis (response rate = 67%). These studies demonstrate that patients elected to attend hospital EDs in a considered fashion after weighing up alternatives, and there is no evidence of deliberate or ill-informed misuse.
• Patients attending EDs have high levels of social support and self-efficacy that speak to the considered and purposeful nature of the exercise of choice.
• About one third of patients have new conditions, while two thirds have chronic illnesses.
• More than half the attendees (53.1%) had consulted a healthcare professional prior to making the decision.
• The decision to seek urgent care at an ED was mostly constructed around the patient's perception of the urgency and severity of their illness, reinforced by a strong perception that the hospital ED was the correct location for them (better specialised staff, better care for my condition, other options not as suitable).
• 33% of respondents held private hospital insurance but nevertheless attended a public hospital ED.
Similarly, patients exercised considered and rational judgements in their choice to seek help from the ambulance service.
• The decision to call for ambulance assistance was based on a strong perception about the severity of the illness (too severe to use other means of transport) and that other options were not considered appropriate.
• The decision also appeared influenced by a perception that the ambulance provided appropriate access to the ED considered most suitable for their particular condition (too severe to go elsewhere, all facilities in one spot, better specialised and better care).
• In 43.8% of cases a health care professional advised use of the ambulance.
• Only a small number of people perceived that ambulance services should be freely available regardless of severity or appropriateness.
These findings confirm a growing understanding that the choice of professional emergency health care services is not made lightly, but rather is made by reasonable people exercising a judgement which is influenced by public awareness of the risks of acute illness and which is most often informed by health professionals. It is also made on the basis of a rational weighing up of alternatives and a deliberate and considered choice to seek assistance from a service which the patient perceived was most appropriate to their needs at that time. These findings add weight to dispensing with public perceptions that ED and ambulance congestion is a result of inappropriate choice by patients. The challenge for health services is to better understand patients' needs and to design and validate services that meet those needs. The failure of our health system to do so should not be grounds for blaming patients or claiming that their choices were inappropriate.
Abstract:
Metastatic breast cancer (MBC) may present de novo but more commonly develops in women initially presenting with early breast cancer, despite the widespread use of adjuvant hormonal and cytotoxic chemotherapy. MBC is incurable. Hormone-sensitive MBC eventually becomes resistant to endocrine therapy in most women. Anthracyclines are the agents of choice in the treatment of endocrine-resistant MBC. With the widespread use of anthracyclines in the adjuvant setting, taxanes have become the agents of choice for many patients. Recently, capecitabine has become established as a standard of care for patients pretreated with anthracyclines and taxanes. However, a range of agents have activity as third-line treatment, including gemcitabine, vinorelbine and platinum analogues. The sequential use of non-cross-resistant single agents, rather than combination therapy, is preferable in most women with MBC. Even though combination therapy can improve response rates and increase the progression-free interval, there is no robust evidence of an advantage in terms of overall survival. Moreover, combination therapy is associated with a higher toxicity rate and poorer quality of life. There is no role for dose-intense therapy, high-dose therapy or maintenance chemotherapy outside the context of a clinical trial. The introduction of trastuzumab, a monoclonal antibody targeting the HER2 growth factor receptor, has improved the therapeutic options for women with tumours overexpressing HER2/neu. DNA microarray profiles of tumours can potentially help to individualise therapy in the future. Molecular targeted therapy has the potential to revolutionise the management of MBC.
Abstract:
Background: Radiographic examinations of the ankle are important in the clinical management of ankle injuries in hospital emergency departments. National (Australian) Emergency Access Targets (NEAT) stipulate that 90 percent of presentations should leave the emergency department within 4 hours. For a radiological report to have clinical usefulness and relevance to clinical teams treating patients with ankle injuries in emergency departments, the report needs to be prepared and available to the clinical team within the NEAT 4-hour timeframe, before the patient has left the emergency department. However, little is known about the demand profile of ankle injuries requiring radiographic examination, or the time until radiological reports are available, for this clinical group in Australian public hospital emergency settings. Methods: This study utilised a prospective cohort of consecutive ankle examinations from patients (n=437) with suspected traumatic ankle injuries presenting to the emergency department of a tertiary hospital facility. Time stamps from the hospital Picture Archiving and Communication System were used to record three processing milestones for each patient's radiographic examination: the time of image acquisition, the time a provisional radiological report became available for viewing by referring clinical teams, and the time of final verification of the radiological report. Results: Radiological reports and all three time stamps were available for 431 (98.6%) cases, which were included in the analysis. The total time between image acquisition and final radiological report verification exceeded 4 hours for 404 (92.5%) cases. The peak demand for radiographic examination of ankles was on weekend days and in the afternoon and evening. The majority of examinations were provisionally reported and verified during weekday daytime shift hours.
Conclusions: Provisional or final radiological reports were frequently not available within 4 hours of image acquisition in this sample. Effective and cost-efficient strategies to improve the support provided to referring clinical teams by medical imaging departments may enhance emergency care for people presenting to emergency departments with ankle injuries, particularly those with imaging findings that may be challenging for junior clinical staff to interpret without a definitive radiological report.
Abstract:
Numerous initiatives have been employed around the world in order to address rising greenhouse gas (GHG) emissions originating from the transport sector. These measures include: travel demand management (congestion‐charging), increased fuel taxes, alternative fuel subsidies and low‐emission vehicle (LEV) rebates. Incentivizing the purchase of LEVs has been one of the more prevalent approaches in attempting to tackle this global issue. LEVs, whilst having the advantage of lower emissions and, in some cases, more efficient fuel consumption, also bring the downsides of increased purchase cost, reduced convenience of vehicle fuelling, and operational uncertainty. To stimulate demand in the face of these challenges, various incentive‐based policies, such as toll exemptions, have been used by national and local governments to encourage the purchase of these types of vehicles. In order to address rising GHG emissions in Stockholm, and in line with the Swedish Government’s ambition to operate a fossil free fleet by 2030, a number of policies were implemented targeting the transport sector. Foremost amongst these was the combination of a congestion charge – initiated to discourage emissions‐intensive travel – and an exemption from this charge for some LEVs, established to encourage a transition towards a ‘green’ vehicle fleet. Although both policies shared the aim of reducing GHG emissions, the exemption for LEVs carried the risk of diminishing the effectiveness of the congestion charging scheme. As the number of vehicle owners choosing to transition to an eligible LEV increased, the congestion‐reduction effectiveness of the charging scheme weakened. In fact, policy makers quickly recognized this potential issue and consequently phased out the LEV exemption less than 18 months after its introduction (1). 
Several studies have investigated the demand for LEVs through stated-preference (SP) surveys across multiple countries, including Denmark (2), Germany (3, 4), the UK (5), Canada (6), the USA (7, 8) and Australia (9). Although each of these studies differed in approach, all involved SP surveys in which differing characteristics of various vehicle types, including LEVs, were presented to respondents, who in turn made hypothetical decisions about which vehicle they would be most likely to purchase. Although these studies revealed a number of interesting findings regarding the potential demand for LEVs, they relied on SP data. In contrast, this paper employs an approach where LEV choice is modelled retrospectively using revealed preference (RP) data. By examining the revealed preferences of vehicle owners in Stockholm, this study overcomes one of the principal limitations of SP data, namely that stated preferences may not in fact reflect individuals' actual choices, such as when cost, time, and inconvenience factors are real rather than hypothetical. This paper's RP approach involves modelling the characteristics of individuals who purchased new LEVs, whilst estimating the effect of the congestion charging exemption upon choice probabilities and subsequent aggregate demand. The paper contributes to the current literature by examining the effectiveness of a toll exemption under revealed preference conditions, and by assessing the total effect of the policy based on key indicators for policy makers, including vehicle owner home location, commuting patterns, number of children, age, gender and income.

(Extended Abstract Submission for Kuhmo Nectar Conference 2014)

The two main research questions motivating this study were: Which individuals chose to purchase a new LEV in Stockholm in 2008? And how did the congestion charging exemption affect the aggregate demand for new LEVs in Stockholm in 2008?
In order to answer these research questions the analysis was split into two stages. First, a multinomial logit (MNL) model was used to identify which demographic characteristics were most significantly related to the purchase of an LEV over a conventional vehicle. The three most significant variables were found to be: intra-cordon residency (positive); commuting across the cordon (positive); and distance of residence from the cordon (negative). In order to estimate the effect of the exemption policy on vehicle purchase choice, the model included variables to control for geographic differences in preferences, based on the location of the vehicle owners' homes and workplaces in relation to the congestion-charging cordon boundary. These variables included one indicator representing commutes across the cordon and another representing intra-cordon residency. The effect of the exemption policy on the probability of purchasing LEVs was estimated in the second stage of the analysis by focusing on the groups of vehicle owners most likely to have been affected by the policy, i.e. those commuting across the cordon boundary (in both directions). Given the inclusion of the indicator variable representing commutes across the cordon, it is assumed that the estimated coefficient of this variable captures the effect of the exemption policy on the utility of choosing to purchase an exempt LEV for these two groups of vehicle owners. The intra-cordon residency indicator variable also controls for differences between the two groups, based upon direction of travel across the cordon boundary. A counter-hypothesis to this assumption is that the coefficient of the variable representing commutes across the cordon boundary instead captures only geo-demographic differences that lead to variations in LEV ownership across the different groups of vehicle owners in relation to the cordon boundary.
In order to address this counter-hypothesis, an additional analysis was performed on data from Gothenburg, Sweden's second-largest city, which has a geodemographic pattern similar to Stockholm's. The results of this analysis provided evidence to support the argument that the coefficient of the variable representing commutes across the cordon was capturing the effect of the exemption policy. Based upon this framework, the predicted vehicle type shares were calculated using the estimated coefficients of the MNL model and compared with predicted vehicle type shares from a simulated scenario in which the exemption policy was inactive. This simulated scenario was constructed by setting the coefficient of the variable representing commutes across the cordon boundary to zero for all observations, to remove the utility benefit of the exemption policy. Overall, this second stage of the analysis showed that the exemption had a substantial effect upon the probability of purchasing, and the aggregate demand for, exempt LEVs in Stockholm during 2008. By making use of unique evidence of the revealed preferences of LEV owners, this study identifies the common characteristics of new LEV owners and estimates the effect of Stockholm's congestion charging exemption upon the demand for new LEVs during 2008. It was found that the variables with the greatest effect upon the choice of purchasing an exempt LEV included intra-cordon residency (positive), distance of home from the cordon (negative), and commuting across the cordon (positive). It was also determined that owners under the age of 30 years preferred non-exempt LEVs (low-CO2 LEVs), whilst those over the age of 30 years preferred electric vehicles. In terms of electric vehicles, it was apparent that individuals living within the city had the highest propensity towards purchasing this vehicle type.
A negative relationship between choosing an electric vehicle and the distance of an individual's residence from the cordon was also evident. Overall, the congestion charging exemption was found to have increased the share of exempt LEVs in Stockholm by 1.9%, with, as expected, a much stronger effect on those commuting across the boundary: owners living inside the cordon showed a 13.1% increase, and owners living outside the cordon a 5.0% increase. This increase in demand corresponded to an additional 538 (+/- 93; 95% C.I.) new exempt LEVs purchased in Stockholm during 2008 (out of a total of 5 427; 9.9%). Policy makers can take note that an incentive-based policy can increase the demand for LEVs and appears to be an appropriate approach to adopt when attempting to reduce transport emissions through encouraging a transition towards a 'green' vehicle fleet.
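The two-stage counterfactual procedure described above (estimate an MNL model, then zero out the commute-across-cordon coefficient to simulate a policy-off scenario) can be sketched as follows. The coefficients, utilities, and owner data are invented for illustration and are not the study's estimates:

```python
import math

# Hypothetical sketch of the MNL counterfactual. The exempt LEV's utility
# includes a commute-across-cordon term whose coefficient proxies the
# exemption's benefit; the conventional car is the reference alternative.

def mnl_probs(utilities):
    """Softmax over alternative utilities -> choice probabilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def predicted_lev_share(owners, beta_commute):
    """Average probability of choosing the exempt LEV across owners."""
    share = 0.0
    for owner in owners:
        u_lev = owner["asc_lev"] + beta_commute * owner["commutes_across_cordon"]
        p_lev, _ = mnl_probs([u_lev, 0.0])  # reference alternative has utility 0
        share += p_lev
    return share / len(owners)

owners = [  # invented sample: two cross-cordon commuters, two non-commuters
    {"asc_lev": -2.0, "commutes_across_cordon": 1},
    {"asc_lev": -2.0, "commutes_across_cordon": 1},
    {"asc_lev": -2.0, "commutes_across_cordon": 0},
    {"asc_lev": -2.0, "commutes_across_cordon": 0},
]

share_with_exemption = predicted_lev_share(owners, beta_commute=1.0)
# Simulated policy-off scenario: coefficient set to zero for all observations.
share_without = predicted_lev_share(owners, beta_commute=0.0)
policy_effect = share_with_exemption - share_without
```

Multiplying `policy_effect` by the size of the new-vehicle market would give the aggregate count of additional exempt LEVs attributable to the policy, in the spirit of the 538-vehicle estimate.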
Abstract:
Background & Objectives: Emergency health services (EHS) throughout the world are increasingly congested. Factors such as population growth and aging cannot fully explain this increase in use. Moreover, a focus on patients' clinical characteristics ignores the role that attitudinal and perceptual factors and motivations play in directing their decisions and actions. The aim of this study is to review and synthesize an integrated conceptual framework for understanding the social psychological factors underpinning demand for EHS. Methodology: A comprehensive search and review of empirical and theoretical studies of EHS utilization was conducted using major medical, health, social and behavioral sciences databases. Results: A small number of studies used a relevant conceptual framework (e.g. the Health Services Utilization Model or the Health Belief Model) or their components to analyze patients' decisions to use EHS. The studies evidenced that demand was affected by perceived severity of the condition; perceived costs and benefits (e.g. availability, accessibility and affordability of alternative services); experience, preference and knowledge; perceived and actual social support; and demographic characteristics (e.g. age, sex, socioeconomic status, ethnicity, marital and living circumstances, place of residence). Conclusions: Conceptual models that are commonly used in areas like the social and behavioral sciences have rarely been applied in the EHS utilization field. Understanding patients' decision-making and associated factors will lay the groundwork for identifying evidence to inform improved policy responses and the development of demand management strategies. An integrated conceptual framework will be introduced as part of this study.
Resumo:
INTRODUCTION Managing spinal deformities in young children is challenging, particularly early onset scoliosis (EOS). Surgical intervention is often required if EOS has been unresponsive to conservative treatment, particularly with rapidly progressive curves. An emerging treatment option for EOS is fusionless scoliosis surgery. Similar to bracing, this surgical option potentially harnesses growth, motion and function of the spine while correcting spinal deformity. Dual growing rods are one such fusionless treatment, which aims to modulate growth of the vertebrae. The aim of this study was to ascertain the extent to which semi-constrained growing rods (Medtronic, Sofamor, Danek, Memphis, TN) with a telescopic sleeve component reduce rotational constraint on the spine compared with standard "constrained/rigid" rods, and hence potentially provide a more physiological mechanical environment for the growing spine. METHODS Six 40-60 kg English Large White porcine spines served as a model for the paediatric human spine. Each spine was dissected into a 7-level thoracolumbar multi-segment unit (MSU), removing all non-ligamentous soft tissues and leaving 3 cm of ribs on either side. Pure non-destructive axial rotation moments of ±4 Nm at a constant rotation rate of 8 deg.s-1 were applied to the mounted MSU spines using a biaxial Instron testing machine. Displacement of each vertebral level was captured using a 3D motion tracking system (Optotrak 3020, Northern Digital Inc, Waterloo, ON). Each spine was tested first in an un-instrumented state and then with appropriately sized semi-constrained growing rods and rigid rods in alternating sequence. The rods were secured by multi-axial pedicle screws (Medtronic CD Horizon) at levels 2 and 6 of the construct. The range of motion (ROM), neutral zone (NZ) size and stiffness (Nm.deg-1) were calculated from the Instron load-displacement data, and intervertebral ROM was calculated from the Optotrak data using a MATLAB algorithm.
RESULTS Irrespective of the order of testing, rigid rods significantly reduced the total ROM compared with semi-constrained rods (p<0.05), resulting in a significantly stiffer spine for both left and right axial rotation (p<0.05). Analysis of intervertebral motion within the instrumented levels 2-6 showed that rigid rods reduced ROM compared with semi-constrained growing rods and with un-instrumented motion segments. CONCLUSION Semi-constrained growing rods maintain axial rotation stiffness similar to that of un-instrumented spines, while dual rigid rods significantly reduce axial rotation. Clinically, the effect of semi-constrained growing rods observed in this study is that they would be expected to allow growth via the telescopic rod components while maintaining the axial flexibility of the spine, which may reduce the occurrence of the crankshaft phenomenon.
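As an illustration of the kind of post-processing the methods describe, ROM, NZ and stiffness can be estimated from a torque-rotation curve along the following lines. This is a hypothetical sketch on synthetic data, not the authors' MATLAB code; the thresholds for the neutral zone and the linear region are illustrative assumptions.

```python
import numpy as np

# Synthetic torque-rotation data standing in for the Instron load-displacement
# log: applied moment sweep of ±4 Nm with a toy nonlinear spine response.
torque = np.linspace(-4.0, 4.0, 201)                       # applied moment, Nm
rotation = 2.5 * np.sign(torque) * np.abs(torque) ** 0.6   # rotation, deg

# Total range of motion at the ±4 Nm load limits.
rom = rotation.max() - rotation.min()

# Neutral zone: motion occurring under near-zero load
# (here, |torque| < 0.25 Nm — an assumed threshold).
nz = np.ptp(rotation[np.abs(torque) < 0.25])

# Stiffness: slope of a linear fit over the high-load region of one branch
# (here, torque > 3 Nm), in Nm per degree.
hi = torque > 3.0
stiffness = np.polyfit(rotation[hi], torque[hi], 1)[0]

print(f"ROM {rom:.1f} deg, NZ {nz:.1f} deg, stiffness {stiffness:.2f} Nm/deg")
```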
Resumo:
This thesis introduces advanced Demand Response algorithms for residential appliances that provide benefits for both the utility and customers. The algorithms schedule appliances appropriately on a critical peak day to alleviate network peaks, adverse voltage conditions and wholesale price spikes, while also reducing the cost of residential energy consumption. Initially, a demand response technique via customer reward is proposed, in which the utility controls appliances to achieve network improvement. An improved real-time pricing scheme is then introduced, with energy management schedulers supporting customers to participate in it actively. Finally, the demand response algorithm is extended to provide frequency regulation services.
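The price-responsive scheduling idea can be sketched in a few lines. This is a minimal illustration with a hypothetical 24-hour price profile and a single deferrable appliance, not the thesis algorithms, which also account for network constraints and voltage conditions:

```python
# Hypothetical hourly real-time prices ($/kWh) with an evening peak.
prices = [0.10] * 7 + [0.25] * 3 + [0.15] * 7 + [0.40] * 4 + [0.12] * 3

duration, load_kw = 2, 1.5   # deferrable appliance: runs 2 h at 1.5 kW

def cheapest_window(prices, duration):
    """Return the start hour of the cheapest contiguous run window."""
    costs = [sum(prices[t:t + duration]) for t in range(len(prices) - duration + 1)]
    return costs.index(min(costs))

start = cheapest_window(prices, duration)
cost = load_kw * sum(prices[start:start + duration])
print(f"run at hour {start}, energy cost ${cost:.2f}")
```

Shifting the appliance out of the 0.40 $/kWh evening block into the cheapest off-peak window is the basic mechanism by which real-time pricing flattens the network peak.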
Resumo:
In recent years, there has been a significant trend toward land acquisition in developing countries to establish forestry plantations that offset carbon pollution generated in the Global North. Badged as "green economic development," global carbon markets are often championed not only as solutions to climate change, but as drivers of positive development outcomes for local communities. But there is mounting evidence that these corporate land acquisitions for climate change mitigation—including forestry plantations—severely compromise not only local ecologies but also the livelihoods of some of the world's most vulnerable people, living at subsistence level in rural areas of developing countries.
Resumo:
In current bridge management systems (BMSs), load and speed restrictions are applied to unhealthy bridges to keep the structure safe and serviceable for as long as possible. The question, however, is whether applying these restrictions always decreases the internal forces in critical components and thereby enhances the safety of unhealthy bridges. To answer this, the paper, for the first time in the literature, examines the design aspects by studying changes in the demand-by-capacity ratios of a bridge's critical components under train loads. For this purpose, a structural model of a simply supported bridge, whose dynamic behaviour is similar to that of a group of real railway bridges, is developed. Demand-by-capacity ratios of the critical components are calculated to identify their sensitivity to increases in speed and in the magnitude of the live load. The outcomes are significant: contrary to expectation, applying a speed restriction can increase the demand-by-capacity ratio of some components and make the bridge unsafe for carrying live load. Suggestions are made to address this problem.
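The counter-intuitive finding can be illustrated with a toy demand-by-capacity calculation. All numbers here are hypothetical, not taken from the paper; the point is only that a restriction changing the crossing speed also changes the dynamic amplification, so the D/C ratio need not fall:

```python
def dc_ratio(static_demand_knm, dyn_amp_factor, capacity_knm):
    """Demand-by-capacity ratio: amplified demand over component capacity."""
    return static_demand_knm * dyn_amp_factor / capacity_knm

# Unrestricted crossing speed: assumed dynamic amplification of 1.30.
unrestricted = dc_ratio(800, 1.30, 1200)

# Restricted (lower) speed that happens to excite a resonance-like response,
# with an assumed higher amplification of 1.45.
restricted = dc_ratio(800, 1.45, 1200)

print(f"unrestricted D/C ≈ {unrestricted:.2f}, restricted D/C ≈ {restricted:.2f}")
# With these assumed factors, the restriction raises the D/C ratio.
```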
A framework for understanding and generating integrated solutions for residential peak energy demand
Resumo:
Supplying peak energy demand in a cost-effective, reliable manner is a critical focus for utilities internationally. Successfully addressing peak energy concerns requires understanding all the factors that affect electricity demand, especially at peak times. This paper builds on past attempts to propose models that aid our understanding of the influences on residential peak energy demand in a systematic and comprehensive way. Our model was developed through a group model building process as a systems framework of the problem situation, modelling the complexity within and between systems and indicating how changes in one element might flow on to others. It comprises themes (social, technical and change management options) networked together in a way that captures their influence on and association with each other, as well as their influence, association and impact on appliance usage and residential peak energy demand. The real value of the model lies in creating awareness, understanding and insight into the complexity of residential peak energy demand, and in working with this complexity to identify and integrate the social, technical and change management option themes and their impact on appliance usage and residential energy demand at peak times.
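The networked-themes idea can be sketched as a small influence graph. The theme names and links below are hypothetical stand-ins, not the authors' model; the sketch only shows how a change in one element propagates downstream to appliance usage and peak demand:

```python
# Hypothetical influence network: each theme points to the elements it affects.
influences = {
    "tariff_change":   ["appliance_usage"],
    "social_norms":    ["appliance_usage"],
    "efficiency_tech": ["appliance_usage", "peak_demand"],
    "appliance_usage": ["peak_demand"],
}

def downstream(start, graph):
    """Return every element reachable from `start` via influence links."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(downstream("tariff_change", influences)))
# A tariff change flows on to appliance usage and, through it, to peak demand.
```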