Abstract:
This is the protocol for a review and there is no abstract. The objectives are as follows: The main aim of the review is to determine the effectiveness of using incentive-based approaches (IBAs) (financial and non-financial) to increase physical activity in community-dwelling children and adults. A secondary objective will be to address the use of incentives to improve cardiovascular and metabolic fitness. A final objective will be to explore: whether there are any adverse effects associated with the use of IBAs for increasing physical activity; whether there are any differential effects of IBAs within and between study populations by age, gender, education, inequalities and health status; and whether the use of disincentive/aversive approaches leads to a reduction in sedentary behaviour.
Abstract:
Objective In Parkinson's disease (PD), risk factors for malnutrition reported in other populations commonly occur. Few studies have explored which of these factors are of particular importance in malnutrition in PD. The aim was to identify the determinants of nutritional status in people with Parkinson's disease (PWP). Methods Community-dwelling PWP (>18 years) were recruited (n = 125; 73M/52F; Mdn 70 years). Self-report assessments included Beck's Depression Inventory (BDI), Spielberger Trait Anxiety Inventory (STAI), Scales for Outcomes in Parkinson's disease – Autonomic (SCOPA-AUT), Modified Constipation Assessment Scale (MCAS) and Freezing of Gait Questionnaire (FOG-Q). Information about age, PD duration, medications, co-morbid conditions and living situation was obtained. Addenbrooke's Cognitive Examination (ACE-R), Unified Parkinson's Disease Rating Scale (UPDRS) II and UPDRS III were performed. Nutritional status was assessed using the Subjective Global Assessment (SGA) as part of the scored Patient-Generated Subjective Global Assessment (PG-SGA). Results Nineteen (15%) were malnourished (SGA-B). Median PG-SGA score was 3. More of the malnourished were elderly (84% vs. 71%) and had more severe disease (H&Y: 21% vs. 5%). UPDRS II and UPDRS III scores and levodopa equivalent daily dose (LEDD)/body weight (mg/kg) were significantly higher in the malnourished (Mdn 18 vs. 15; 20 vs. 15; 10.1 vs. 7.6 respectively). Regression analyses revealed older age at diagnosis, higher LEDD/body weight (mg/kg), greater UPDRS III score, lower STAI score and higher BDI score as significant predictors of malnutrition (SGA-B). Living alone and higher BDI and UPDRS III scores were significant predictors of a higher log-adjusted PG-SGA score. Conclusions In this sample of PWP, the rate of malnutrition was higher than that previously reported in the general community. Nutrition screening should occur regularly in those with more severe disease and depression.
Community support should be provided to PWP living alone. Dopaminergic medication should be reviewed when body weight changes.
Abstract:
Climate change is leading to an increased frequency and severity of heat waves. Spells of several consecutive days of unusually high temperatures have led to increased mortality rates among the more vulnerable in the community. The problem is compounded by escalating energy costs and increasing peak electrical demand as people become more reliant on air conditioning. Domestic air conditioning is the primary determinant of peak power demand, which has been a major driver of higher electricity costs. This report presents the findings of multidisciplinary research which develops a national framework to evaluate the potential impacts of heat waves. It presents a technical, social and economic approach to adapting Australian residential buildings to ameliorate the impact of heat waves in the community and reduce the risk of their adverse outcomes. Through the development of a methodology for estimating the impact of global warming on key weather parameters in 2030 and 2050, it is possible to re-evaluate the size and anticipated energy consumption of air conditioners in future years for various climate zones in Australia. Over the coming decades it is likely that mainland Australia will require more cooling than heating. While in some parts the total electricity usage for heating and cooling may remain unchanged, there is a significant overall increase in peak electricity demand, likely to further drive up electricity prices. Through monitoring groups of households in South Australia, New South Wales and Queensland, the impact of heat waves on both thermal comfort sensation and energy consumption for air conditioning has been evaluated. The results show that households are likely to be able to tolerate slightly increased indoor temperatures during periods of high outside temperatures. The research identified that household electricity costs are likely to rise above what is currently projected due to the impact of climate change.
Through a number of regulatory changes to both household design and air conditioners, this impact can be minimised. A number of proposed retrofit and design measures are provided, which can readily reduce electricity usage for cooling at minimal cost to the household. Using a number of social research instruments, it is evident that households are willing to change behaviour rather than spend money. Those on lower incomes and elderly individuals are the least able to afford the use of air conditioning and should be a priority for interventions and assistance. Increasing community awareness of cost-effective strategies to manage comfort and health during heat waves is a high-priority recommended action. Overall, the research showed that a combined approach including behaviour change, dwelling modification and improved air conditioner selection can readily adapt Australian households to the impact of heat waves, reducing the risk of heat-related deaths and household energy costs.
Abstract:
BACKGROUND: US Centers for Disease Control guidelines recommend replacement of peripheral intravenous (IV) catheters no more frequently than every 72 to 96 hours. Routine replacement is thought to reduce the risk of phlebitis and bloodstream infection. Catheter insertion is an unpleasant experience for patients and replacement may be unnecessary if the catheter remains functional and there are no signs of inflammation. Costs associated with routine replacement may be considerable. This is an update of a review first published in 2010. OBJECTIVES: To assess the effects of removing peripheral IV catheters when clinically indicated compared with removing and re-siting the catheter routinely. SEARCH METHODS: For this update the Cochrane Peripheral Vascular Diseases (PVD) Group Trials Search Co-ordinator searched the PVD Specialised Register (December 2012) and CENTRAL (2012, Issue 11). We also searched MEDLINE (last searched October 2012) and clinical trials registries. SELECTION CRITERIA: Randomised controlled trials that compared routine removal of peripheral IV catheters with removal only when clinically indicated in hospitalised or community dwelling patients receiving continuous or intermittent infusions. DATA COLLECTION AND ANALYSIS: Two review authors independently assessed trial quality and extracted data. MAIN RESULTS: Seven trials with a total of 4895 patients were included in the review. Catheter-related bloodstream infection (CRBSI) was assessed in five trials (4806 patients). There was no significant between group difference in the CRBSI rate (clinically-indicated 1/2365; routine change 2/2441). The risk ratio (RR) was 0.61 but the confidence interval (CI) was wide, creating uncertainty around the estimate (95% CI 0.08 to 4.68; P = 0.64). No difference in phlebitis rates was found whether catheters were changed according to clinical indications or routinely (clinically-indicated 186/2365; 3-day change 166/2441; RR 1.14, 95% CI 0.93 to 1.39). 
This result was unaffected by whether infusion through the catheter was continuous or intermittent. We also analysed the data by number of device days and again no differences between groups were observed (RR 1.03, 95% CI 0.84 to 1.27; P = 0.75). One trial assessed all-cause bloodstream infection. There was no difference in this outcome between the two groups (clinically-indicated 4/1593 (0.02%); routine change 9/1690 (0.05%); P = 0.21). Cannulation costs were lower by approximately AUD 7.00 in the clinically-indicated group (mean difference (MD) -6.96, 95% CI -9.05 to -4.86; P ≤ 0.00001). AUTHORS' CONCLUSIONS: The review found no evidence to support changing catheters every 72 to 96 hours. Consequently, healthcare organisations may consider changing to a policy whereby catheters are changed only if clinically indicated. This would provide significant cost savings and would spare patients the unnecessary pain of routine re-sites in the absence of clinical indications. To minimise peripheral catheter-related complications, the insertion site should be inspected at each shift change and the catheter removed if signs of inflammation, infiltration, or blockage are present.
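As an aside for readers unfamiliar with the statistics quoted above, the sketch below shows how a risk ratio and its 95% confidence interval are derived from a single 2×2 table, using the raw CRBSI counts reported in the review. Note that the review's pooled estimate (RR 0.61, 95% CI 0.08 to 4.68) comes from a meta-analysis across trials, so this crude single-table calculation will not reproduce it exactly.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio of group A vs group B, with a 95% CI computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for a single 2x2 table
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Raw CRBSI counts from the review: 1/2365 clinically-indicated vs 2/2441 routine change
rr, lo, hi = risk_ratio(1, 2365, 2, 2441)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

With event counts this small the interval is very wide, which is exactly the uncertainty the review describes.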
Abstract:
It has been reported that poor nutritional status, in the form of weight loss and resulting body mass index (BMI) changes, is an issue in people with Parkinson's disease (PWP). The symptoms resulting from Parkinson's disease (PD) and the side effects of PD medication have been implicated in the aetiology of nutritional decline. However, the evidence on which these claims are based is, on one hand, contradictory, and on the other, restricted primarily to otherwise healthy PWP. Despite the claims that PWP suffer from poor nutritional status, evidence is lacking to inform nutrition-related care for the management of malnutrition in PWP. The aims of this thesis were to better quantify the extent of poor nutritional status in PWP, determine the important factors differentiating the well-nourished from the malnourished and evaluate the effectiveness of an individualised nutrition intervention on nutritional status. Phase DBS: Nutritional status in people with Parkinson's disease scheduled for deep-brain stimulation surgery The pre-operative rate of malnutrition in a convenience sample of people with Parkinson's disease (PWP) scheduled for deep-brain stimulation (DBS) surgery was determined. Poorly controlled PD symptoms may result in a higher risk of malnutrition in this sub-group of PWP. Fifteen patients (11 male, median age 68.0 (42.0 – 78.0) years, median PD duration 6.75 (0.5 – 24.0) years) participated and data were collected during hospital admission for the DBS surgery. The scored PG-SGA was used to assess nutritional status, anthropometric measures (weight, height, mid-arm circumference, waist circumference, body mass index (BMI)) were taken, and body composition was measured using bioelectrical impedance spectroscopy (BIS). Six (40%) of the participants were malnourished (SGA-B) while 53% reported significant weight loss following diagnosis. BMI was significantly different between SGA-A and SGA-B (25.6 vs 23.0 kg/m2, p<.05).
There were no differences in any other variables, including PG-SGA score and the presence of non-motor symptoms. The conclusion was that malnutrition in this group is higher than that in other studies reporting malnutrition in PWP, and it is under-recognised. As poorer surgical outcomes are associated with poorer pre-operative nutritional status in other surgeries, it might be beneficial to identify patients at nutritional risk prior to surgery so that appropriate nutrition interventions can be implemented. Phase I: Nutritional status in community-dwelling adults with Parkinson's disease The rate of malnutrition in community-dwelling adults (>18 years) with Parkinson's disease was determined. One hundred twenty-five PWP (74 male, median age 70.0 (35.0 – 92.0) years, median PD duration 6.0 (0.0 – 31.0) years) participated. The scored PG-SGA was used to assess nutritional status, and anthropometric measures (weight, height, mid-arm circumference (MAC), calf circumference, waist circumference, body mass index (BMI)) were taken. Nineteen (15%) of the participants were malnourished (SGA-B). All anthropometric indices were significantly different between SGA-A and SGA-B (BMI 25.9 vs 20.0 kg/m2; MAC 29.1 vs 25.5 cm; waist circumference 95.5 vs 82.5 cm; calf circumference 36.5 vs 32.5 cm; all p<.05). The PG-SGA score was also significantly higher in the malnourished (2 vs 8, p<.05). The nutrition impact symptoms which differentiated between the well-nourished and malnourished were no appetite, constipation, diarrhoea, problems swallowing and feeling full quickly. This study concluded that malnutrition in community-dwelling PWP is higher than that documented in the community-dwelling elderly (2 – 11%), yet is likely to be under-recognised. Nutrition impact symptoms play a role in reduced intake. Appropriate screening and referral processes should be established for early detection of those at risk.
Phase I: Nutrition assessment tools in people with Parkinson's disease There are a number of validated and reliable nutrition screening and assessment tools available for use. None of these tools have been evaluated in PWP. In the sample described above, the use of the World Health Organisation (WHO) cut-off (≤18.5kg/m2), age-specific BMI cut-offs (≤18.5kg/m2 for under 65 years, ≤23.5kg/m2 for 65 years and older) and the revised Mini-Nutritional Assessment short form (MNA-SF) were evaluated as nutrition screening tools. The PG-SGA (including the SGA classification) and the MNA full form were evaluated as nutrition assessment tools using the SGA classification as the gold standard. For screening, the MNA-SF performed the best with sensitivity (Sn) of 94.7% and specificity (Sp) of 78.3%. For assessment, the PG-SGA with a cut-off score of 4 (Sn 100%, Sp 69.8%) performed better than the MNA (Sn 84.2%, Sp 87.7%). As the MNA has been recommended more for use as a nutrition screening tool, the MNA-SF might be more appropriate and take less time to complete. The PG-SGA might be useful to inform and monitor nutrition interventions. Phase I: Predictors of poor nutritional status in people with Parkinson's disease A number of assessments were conducted as part of the Phase I research, including those for the severity of PD motor symptoms, cognitive function, depression, anxiety, non-motor symptoms, constipation, freezing of gait and the ability to carry out activities of daily living. A higher score in all of these assessments indicates greater impairment. In addition, information about medical conditions, medications, age, age at PD diagnosis and living situation was collected. These were compared between those classified as SGA-A and as SGA-B. Regression analysis was used to identify which factors were predictive of malnutrition (SGA-B). 
Differences between the groups included disease severity (4% more severe SGA-A vs 21% SGA-B, p<.05), activities of daily living score (13 SGA-A vs 18 SGA-B, p<.05), depressive symptom score (8 SGA-A vs 14 SGA-B, p<.05) and gastrointestinal symptoms (4 SGA-A vs 6 SGA-B, p<.05). Significant predictors of malnutrition according to SGA were age at diagnosis (OR 1.09, 95% CI 1.01 – 1.18), amount of dopaminergic medication per kg body weight (mg/kg) (OR 1.17, 95% CI 1.04 – 1.31), more severe motor symptoms (OR 1.10, 95% CI 1.02 – 1.19), less anxiety (OR 0.90, 95% CI 0.82 – 0.98) and more depressive symptoms (OR 1.23, 95% CI 1.07 – 1.41). Significant predictors of a higher PG-SGA score included living alone (β=0.14, 95% CI 0.01 – 0.26), more depressive symptoms (β=0.02, 95% CI 0.01 – 0.02) and more severe motor symptoms (β=0.01, 95% CI 0.01 – 0.02). More severe disease is associated with malnutrition, and this may be compounded by lack of social support. Phase II: Nutrition intervention Nineteen of the people identified in Phase I as requiring nutrition support were included in Phase II, in which a nutrition intervention was conducted. Nine participants were in the standard care group (SC), which received an information sheet only, and the other 10 participants were in the intervention group (INT), which received individualised nutrition information and weekly follow-up. The INT group gained 2.2% of starting body weight over the 12-week intervention period, resulting in significant increases in weight, BMI, mid-arm circumference and waist circumference. The SC group gained 1% of starting weight over the 12 weeks, which did not result in any significant changes in anthropometric indices. Energy and protein intake (18.3kJ/kg vs 3.8kJ/kg and 0.3g/kg vs 0.15g/kg) increased in both groups. The increase in protein intake was only significant in the SC group. The changes in intake did not differ between the groups.
There were no significant changes in any motor or non-motor symptoms or in "off" times or dyskinesias in either group. Aspects of quality of life improved over the 12 weeks as well, especially emotional well-being. This thesis makes a significant contribution to the evidence base for the presence of malnutrition in Parkinson's disease as well as for the identification of those who would potentially benefit from nutrition screening and assessment. The nutrition intervention demonstrated that a traditional high protein, high energy approach to the management of malnutrition resulted in improved nutritional status and anthropometric indices with no effect on the presence of Parkinson's disease symptoms and a positive effect on quality of life.
Abstract:
Passenger flow studies in airport terminals have shown consistent statistical relationships between airport spatial layout and pedestrian movement, facilitating prediction of movement from terminal designs. However, these studies are done at an aggregate level and do not incorporate how individual passengers make decisions at a microscopic level. Therefore, they do not explain the formation of complex movement flows. In addition, existing models mostly focus on standard airport processing procedures such as immigration and security, but seldom consider discretionary activities of passengers, and thus are not able to truly describe the full range of passenger flows within airport terminals. As the route-choice decision-making of passengers involves many uncertain factors within airport terminals, mechanisms for managing route-choice have proven difficult to capture and quantify. Could the study of cognitive factors of passengers (i.e. passengers' mental preferences in deciding which on-airport facility to use) be useful to tackle these issues? Assuming that movement in simulated virtual environments is analogous to movement in real environments, passenger behaviour dynamics can be reproduced in virtual experiments. Three levels of dynamics have been devised for motion control: the localised field, the tactical level, and the strategic level. The localised field refers to basic motion capabilities, such as walking speed, direction and avoidance of obstacles. The other two levels represent cognitive route-choice decision-making. This research views passenger flow problems via a "bottom-up approach", regarding individual passengers as independent intelligent agents who can behave autonomously and are able to interact with others and the ambient environment. In this regard, passenger flow formation becomes an emergent phenomenon of large numbers of passengers interacting with others.
In the thesis, first, the passenger flow in airport terminals was investigated. Discretionary activities of passengers were integrated with standard processing procedures in the research. The localised field for passenger motion dynamics was constructed by a devised force-based model. Next, advanced traits of passengers (such as their desire to shop, their comfort with technology and their willingness to ask for assistance) were formulated to facilitate tactical route-choice decision-making. The traits consist of quantified measures of the mental preferences of passengers when they travel through airport terminals. Each category of the traits indicates a decision which passengers may take. They were inferred through a Bayesian network model by analysing the probabilities based on currently available data. Route-choice decision-making was finalised by calculating corresponding utility results based on those observed probabilities. Three sorts of simulation outcomes were generated: namely, queuing length before checkpoints, average dwell time of passengers at service facilities, and instantaneous space utilisation. Queuing length reflects the number of passengers who are in a queue. Long queues no doubt cause significant delay in processing procedures. The dwell time of each passenger agent at the service facilities was recorded. The overall dwell times of passenger agents at typical facility areas were analysed so as to demonstrate portions of utilisation in the temporal aspect. For the spatial aspect, the number of passenger agents dwelling within specific terminal areas can be used to estimate service rates. All outcomes were demonstrated by typical simulated passenger flows. They directly reflect terminal capacity.
The simulation results strongly suggest that integrating discretionary activities of passengers makes the simulated passenger flows more realistic, and that observing the probabilities of mental preferences by inferring advanced traits constitutes an approach capable of carrying out tactical route-choice decision-making. On the whole, the research studied passenger flows in airport terminals with an agent-based model, which investigated individual characteristics of passengers and their impact on passengers' psychological route-choice decisions. Finally, realistic passenger flows in airport terminals were able to be reproduced in simulation.
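The "localised field" described above handles basic motion: steering toward a target while avoiding obstacles. The thesis's actual force-based model is not specified in this abstract, so the following is only a generic, illustrative sketch of that idea; the attraction/repulsion terms, parameter values and coordinates are all invented for the example.

```python
import math

def step(pos, goal, obstacles, speed=1.4, dt=0.1):
    """One update of a toy 'localised field': a unit attraction toward the
    goal plus exponentially decaying repulsion from each obstacle."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    fx, fy = gx / dist, gy / dist            # unit vector toward the goal
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        push = math.exp(-d)                  # repulsion decays with distance
        fx += push * dx / d
        fy += push * dy / d
    norm = math.hypot(fx, fy) or 1e-9
    # Move at constant walking speed along the combined force direction
    return (pos[0] + speed * dt * fx / norm,
            pos[1] + speed * dt * fy / norm)

pos = (0.0, 0.0)
for _ in range(50):
    pos = step(pos, goal=(5.0, 0.0), obstacles=[(2.5, 0.2)])
```

Running the loop, the agent detours slightly around the obstacle and closes in on the goal; the tactical and strategic levels would sit above this, choosing which goal (facility) to walk to next.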
Abstract:
This paper aims to evaluate the brand value of property in subdivision developments in the Bangkok Metropolitan Region (BMR), Thailand. The result has been determined by the application of a hedonic price model. The model is developed based on a sample of 1,755 property sales during the period 1992-2010 in eight zones of the BMR. The results indicate that a semi-logarithmic model has stronger explanatory power and is more reliable. Branding increases the property price by 12.90%. Meanwhile, the price increases 2.96% annually; lot size and dwelling area have positive impacts on the price. In contrast, duplexes and townhouses have a negative impact on the price compared to single detached houses. Moreover, the price of properties located outside the Bangkok inner city area is reduced by 21.26% to 43.19%. These findings also contribute towards a new understanding of the positive impact of branding on property prices in the BMR. The result is useful for setting selling prices for branded and unbranded properties, and the model could provide a reference for setting property prices in subdivision developments in the BMR.
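For readers unfamiliar with semi-logarithmic hedonic models: when the dependent variable is ln(price), a dummy-variable coefficient b translates into a percentage price effect of exp(b) − 1, not b itself. The sketch below back-computes a hypothetical brand coefficient from the reported 12.90% premium; the actual estimated coefficient is not given in the abstract.

```python
import math

# Semi-log hedonic model: ln(price) = b0 + b1*brand + b2*lot_size + ...
# A dummy coefficient b1 implies a percentage price effect of exp(b1) - 1.
def pct_effect(beta):
    """Percentage price effect of a dummy coefficient in a semi-log model."""
    return (math.exp(beta) - 1) * 100

# Hypothetical coefficient, chosen to reproduce the reported 12.90% premium
beta_brand = math.log(1.1290)
effect = pct_effect(beta_brand)
print(round(effect, 2))
```

The same transformation applies to the locational dummies (the 21.26% to 43.19% discounts outside the inner city area).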
Abstract:
A key challenge for the 21st century is to make our cities more liveable and foster economically sustainable, environmentally responsible, and socially inclusive communities. Design thinking, particularly a human-centred approach, offers a way to tackle this challenge. Findings from two recent Australian research projects highlight how facilitating sustainable, liveable communities in a humid sub-tropical environment requires an in-depth understanding of people’s perspectives, experiences and practices. Project 1 (‘Research House’) documents the reflections of a family who lived in a ‘test’ sustainable house for two years, outlining their experience and evaluations of universal design and sustainable technologies. The study family was very impressed with the natural lighting, natural ventilation, spaciousness and ease of access, which contributed significantly to their comfort and the liveability of their home. Project 2 (‘Inner-Urban High Density Living’) explored Brisbane residents’ opinions about high-density living through a survey (n=636), interviews (n=24), site observations (over 300 hours) and environmental monitoring, assessing opinions on the liveability of their individual dwelling, the multi-unit host building and the surrounding neighbourhood. Nine areas, categorised into three general domains, were identified as essential for enhancing high-density liveability. In terms of the dwelling, thermal comfort/ventilation, natural light and noise mitigation were important; shared space, good-neighbour protocols and support for environmentally sustainable behaviour were desired in the building/complex; and accessible/sustainable transport, amenities and services, and a sense of community were considered important in the surrounding neighbourhood.
Combined, these findings emphasise the importance and complexity associated with designing liveable buildings, cities and communities, illustrating how adopting a design-thinking, human-centred approach will help create sustainable communities that meet the needs of current and future generations.
Abstract:
Background & Aims Nutrition screening and assessment enable early identification of malnourished people and those at risk of malnutrition. Appropriate assessment tools assist with informing and monitoring nutrition interventions. Tool choice needs to be appropriate to the population and setting. Methods Community-dwelling people with Parkinson’s disease (>18 years) were recruited. Body mass index (BMI) was calculated from weight and height. Participants were classified as underweight according to World Health Organisation (WHO) (≤18.5 kg/m2) and age-specific (<65 years, ≤18.5 kg/m2; ≥65 years, ≤23.5 kg/m2) cut-offs. The Mini-Nutritional Assessment (MNA) screening (MNA-SF) and total assessment scores were calculated. The Patient-Generated Subjective Global Assessment (PG-SGA), including the Subjective Global Assessment (SGA), was performed. Sensitivity, specificity, positive predictive value, negative predictive value and the weighted kappa statistic of each of the above compared to the SGA were determined. Results Median age of the 125 participants was 70.0 (35 – 92) years. Age-specific BMI categories (Sn 68.4%, Sp 84.0%) performed better than WHO categories (Sn 15.8%, Sp 99.1%). The MNA-SF performed better (Sn 94.7%, Sp 78.3%) than both BMI categorisations for screening purposes. The MNA had higher specificity but lower sensitivity than the PG-SGA (MNA Sn 84.2%, Sp 87.7%; PG-SGA Sn 100.0%, Sp 69.8%). Conclusions BMI lacks sensitivity to identify malnourished people with Parkinson’s disease and should be used with caution. The MNA-SF may be a better screening tool in people with Parkinson’s disease. The PG-SGA performed well and may assist with informing and monitoring nutrition interventions. Further research should be conducted to validate screening and assessment tools in Parkinson’s disease.
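The sensitivity and specificity figures above follow from simple 2×2 counts against the SGA reference standard. The sketch below reconstructs plausible MNA-SF cell counts from the reported percentages, assuming 19 malnourished (SGA-B) and 106 well-nourished participants out of the 125; the exact cell counts are an assumption, not taken from the paper.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Reconstructed MNA-SF counts (assumption): of 19 malnourished, 18 screened
# positive; of 106 well-nourished, 83 screened negative.
sn, sp = sensitivity_specificity(tp=18, fn=1, tn=83, fp=23)
print(f"Sn {sn:.1%}, Sp {sp:.1%}")
```

These counts reproduce the reported Sn 94.7% and Sp 78.3%, which is why they are a plausible reconstruction.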
Abstract:
Electricity network investment and asset management require accurate estimation of future demand in energy consumption within specified service areas. For this purpose, simple models are typically developed to predict future trends in electricity consumption using various methods and assumptions. This paper presents a statistical model to predict electricity consumption in the residential sector at the Census Collection District (CCD) level over the state of New South Wales, Australia, based on spatial building and household characteristics. Residential household demographic and building data from the Australian Bureau of Statistics (ABS) and actual electricity consumption data from electricity companies are merged for 74% of the 12,000 CCDs in the state. Eighty percent of the merged dataset is randomly set aside to establish the model using regression analysis, and the remaining 20% is used to independently test the accuracy of model prediction against actual consumption. In 90% of the cases, the predicted consumption is shown to be within 5 kWh per dwelling per day of actual values, with an overall state accuracy of -1.15%. Given a future scenario with a shift in climate zone and a growth in population, the model is used to identify the geographical or service areas that are most likely to have increased electricity consumption. Such geographical representation can be of great benefit when assessing alternatives to the centralised generation of energy; having such a model gives a quantifiable method for selecting the 'most' appropriate system when a review or upgrade of the network infrastructure is required.
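The validation approach described (fit a regression on 80% of the data, then check how many held-out predictions fall within 5 kWh per dwelling per day of actual values) can be sketched as follows. The data here are synthetic stand-ins, since the ABS and electricity-company datasets are not public; the predictors and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the CCD dataset: two predictors (e.g. household
# size, floor area) and daily electricity consumption in kWh per dwelling.
X = rng.uniform(1, 10, size=(1000, 2))
y = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, 1000)

# Random 80/20 split: fit by ordinary least squares, test on the held-out 20%
idx = rng.permutation(1000)
train, test = idx[:800], idx[800:]
A = np.column_stack([np.ones(len(train)), X[train]])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
pred = np.column_stack([np.ones(len(test)), X[test]]) @ coef

# Share of held-out predictions within 5 kWh/dwelling/day of actual values
within_5 = np.mean(np.abs(pred - y[test]) < 5)
```

On this synthetic data nearly all predictions land within the 5 kWh band; on the real merged dataset the paper reports 90%.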
Abstract:
We identify the 10 major terrestrial and marine ecosystems in Australia most vulnerable to tipping points, in which modest environmental changes can cause disproportionately large changes in ecosystem properties. To accomplish this we independently surveyed the coauthors of this paper to produce a list of candidate ecosystems, and then refined this list during a 2-day workshop. The list includes (1) elevationally restricted mountain ecosystems, (2) tropical savannas, (3) coastal floodplains and wetlands, (4) coral reefs, (5) drier rainforests, (6) wetlands and floodplains in the Murray-Darling Basin, (7) the Mediterranean ecosystems of southwestern Australia, (8) offshore islands, (9) temperate eucalypt forests, and (10) salt marshes and mangroves. Some of these ecosystems are vulnerable to widespread phase-changes that could fundamentally alter ecosystem properties such as habitat structure, species composition, fire regimes, or carbon storage. Others appear susceptible to major changes across only part of their geographic range, whereas yet others are susceptible to a large-scale decline of key biotic components, such as small mammals or stream-dwelling amphibians. For each ecosystem we consider the intrinsic features and external drivers that render it susceptible to tipping points, and identify subtypes of the ecosystem that we deem to be especially vulnerable. © 2011 Elsevier Ltd.
Abstract:
Most urban-dwelling Australians take secure and safe water supplies for granted. That is, they have an adequate quantity of water at a quality that can be used by people without harm from human and animal wastes, salinity and hardness, or pollutants from agriculture and manufacturing industries. Australia-wide, urban and peri-urban dwellers use safe water for all domestic as well as industrial purposes. However, this is not the situation in remote regions of Australia, where limited availability and poor water quality can constrain development. Nor is it the case in Sri Lanka, where people in rural regions are struggling to obtain a secure supply of water, irrespective of whether it is safe, given the impact of faecal and other contaminants. The purposes of this paper are to overview: the population and environmental health challenges arising from the lack of safe water in rural and remote communities; response pathways to address water quality issues; and the status of and need for integrated catchment management (ICM) in selected remote regions of Australia and vulnerable and lagging rural regions in Sri Lanka. Conclusions are drawn that focus on the opportunity for inter-regional collaborations between Australia and Sri Lanka for the delivery of safe water through ICM.
Abstract:
In Australia, the idea of home ownership, "The Great Australian Dream", is still perceived as a central achievement in every Australian's life. Perceptions of the ideal home, however, change over the decades: each generation has particular requirements and criteria that shape its dwelling space. This research identifies and compares three generations' (Baby Boomers, Generation X and Generation Y) demographics, requirements and perceptions of their ideal home. A review of previous research and literature in the Queensland context reveals that the population aged 65 and older is currently 11.8% of the state population and is expected to grow to almost one quarter of the population by 2051, the highest growth rate among the three generations. Further analysis of the three generations' status and requirements shows that ageing is the most critical issue for housing systems. This is especially the case for Baby Boomers, given their demand for in-home support services and health care. The study reveals that "ageing in place" is the preferred option for the aged. This raises questions as to how well the housing system and neighbourhood environments can support ageing in place, and which ageing factors should be taken into consideration when designing Baby Boomers' homes to facilitate health and wellbeing. This research therefore adopted a qualitative approach, investigating Australian Baby Boomers' homes around Queensland, predominantly in the Brisbane area, using semi-structured interviews and observations. It aims to establish Australian Baby Boomers' level of satisfaction with their current homes, and their preferences and requirements in light of their ideal home. The findings contribute new knowledge on ideal-home mechanisms.
A set of strategies has been developed from the findings that may help improve the level of comfort, safety and satisfaction that Baby Boomers experience in their current and future homes.
Abstract:
Purpose To evaluate the validity of a uniaxial accelerometer (MTI Actigraph) for measuring physical activity in people with acquired brain injury (ABI) using portable indirect calorimetry (Cosmed K4b²) as a criterion measure. Methods Fourteen people with ABI and related gait pattern impairment (age 32 ± 8 yr) wore an MTI Actigraph that measured activity (counts·min⁻¹) and a Cosmed K4b² that measured oxygen consumption (mL·kg⁻¹·min⁻¹) during four activities: quiet sitting (QS) and comfortable paced (CP), brisk paced (BP), and fast paced (FP) walking. MET levels were predicted from Actigraph counts using a published equation and compared with Cosmed measures. Predicted METs for each of the 56 activity bouts (14 participants × 4 bouts) were classified (light, moderate, vigorous, or very vigorous intensity) and compared with Cosmed-based classifications. Results Repeated-measures ANOVA indicated that walking condition intensities were significantly different (P < 0.05) and the Actigraph detected the differences. Overall correlation between measured and predicted METs was positive, moderate, and significant (r = 0.74). Mean predicted METs were not significantly different from measured for CP and BP, but for FP walking, predicted METs were significantly less than measured (P < 0.05). The Actigraph correctly classified intensity for 76.8% of all activity bouts and 91.5% of light- and moderate-intensity bouts. Conclusions Actigraph counts provide a valid index of activity across the intensities investigated in this study. For light to moderate activity, Actigraph-based estimates of METs are acceptable for group-level analysis and are a valid means of classifying activity intensity. The Actigraph significantly underestimated higher intensity activity, although, in practice, this limitation will have minimal impact on activity measurement of most community-dwelling people with ABI.
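The pipeline described in the Methods (predict METs from accelerometer counts with a published linear equation, then bin the result into intensity categories) can be sketched as below. The abstract does not name the equation; the Freedson et al. (1998) equation and the MET cut-points used here are common choices and are assumptions for this illustration, not necessarily what the study used.

```python
# Assumption: Freedson et al. (1998) linear counts-to-METs equation;
# the study only says "a published equation".
def predict_mets(counts_per_min: float) -> float:
    """Predict METs from Actigraph activity counts per minute."""
    return 1.439008 + 0.000795 * counts_per_min

# Assumption: conventional MET cut-points for the four intensity bands
# named in the abstract (light, moderate, vigorous, very vigorous).
def classify_intensity(mets: float) -> str:
    if mets < 3.0:
        return "light"
    if mets < 6.0:
        return "moderate"
    if mets < 9.0:
        return "vigorous"
    return "very vigorous"

# Example bout: a hypothetical count rate during brisk-paced walking.
counts = 4000
predicted = predict_mets(counts)
print(f"{predicted:.2f} METs -> {classify_intensity(predicted)}")
```

Agreement between these predicted classifications and the Cosmed-based (measured) classifications is what the 76.8% overall figure in the Results summarises.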
Abstract:
A Space for Spirituality: Dutton Park Community House. Exhibition of QUT student design work for the Murri Watch Men's Shed, Dutton Park. As designers, we must work with communities to develop inclusive spaces and be mindful of the diversity of cultures, histories and, indeed, spirituality. This exhibition includes a selection of proposals from QUT Interior Design students for the adaptation of the Murri Watch Men's Shed, Dutton Park. The designs respond to local community narratives and environmental qualities, such as site texture, landscape and light, to propose a dwelling space for spirituality and gathering.