614 results for "Leave alone"
Abstract:
Introduction: Lower-limb amputations are a serious adverse consequence of lifestyle-related chronic conditions and a serious concern among the aging population in Australia. Lower-limb amputations have severe personal, social and economic impacts on the individual, the healthcare system and the broader community. This study aimed to address a critical gap in the research literature by investigating the physical functioning and social characteristics of lower-limb amputees at discharge from tertiary hospital inpatient rehabilitation. Method: A cohort study was implemented among patients with lower-limb amputations admitted for rehabilitation to a Geriatric Assessment and Rehabilitation Unit at a tertiary hospital. Conventional descriptive statistics were used to examine the demographic, physical functioning and social living outcomes recorded for patients admitted between 2005 and 2011. Results: A total of 423 admissions occurred during the study period; 313 (74%) were male. The sample included admissions for left (n = 189, 45%), right (n = 220, 52%) and bilateral (n = 14, 3%) lower-limb amputations, with 15 (3%) patients dying while an inpatient. The mean (standard deviation) age was 65 (13.9) years. Amputations attributed to vascular causes accounted for 333 (78%) admissions; 65 (15%) of these patients had previously had an amputation. The mean (SD) length of stay in the rehabilitation unit was 56 (42) days. Prior to admission, 123 (29%) patients were living alone, 289 (68%) were living with another person and 3 (0.7%) were living in residential care. Following this amputation-related admission, 89 (21%) patients did not return to their prior living situation. Of those admitted, 187 (44%) patients were discharged with a lower-limb prosthesis. Conclusion: This clinical group is predominantly older adults, with a male-to-female ratio of approximately 3:1. Over half did not return to walking and many were not able to return to their prior accommodation. However, few patients died during their admission.
Abstract:
Preparing valuations is a time-consuming process involving site inspections, research and report formulation. The ease of access to the internet has changed how and where valuations may be undertaken. It is no longer necessary to return to the office to finalise reports, or to leave your desk to undertake research. This enables more streamlined service delivery and is viewed as a positive; however, it is not without negative impacts. This paper seeks to inform practitioners of the work-environment changes flowing from increased access to the internet. It identifies how increased accessibility to, and use of, technology and the internet has affected valuation service provision, and will continue to affect it into the future.
Abstract:
Mental health is a major global health issue. Neuropsychiatric conditions are the most significant cause of disability worldwide, and account for 14% of the global burden of disease. Depression in particular places a huge burden on society, with the Global Burden of Disease 2000 study listing it as the fourth leading cause of disease burden worldwide and the largest non-fatal disease burden. In Australia, mental disorders are startlingly common and related to significant disability. The 2007 National Survey of Mental Health and Wellbeing revealed that the lifetime prevalence of any mental disorder was 45%, and within the last 12 months 20% of Australians met criteria for a mental disorder. Many of the articles in this issue explore mental health issues in young people. Indeed, mental health issues account for a large proportion of the disease burden in young people. Across the globe, mental health disorders caused the greatest number of years lost to disability (YLDs) amongst young people aged 10 to 24 years (45% of total YLDs). Depression caused the highest number of disability-adjusted life-years (DALYs) across this age group, accounting for 8.2% of DALYs alone [6]. It is clear that mental health is a critical area of focus for researchers, practitioners, and policy makers.
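The burden-of-disease metrics cited above relate in a simple way: DALYs are the sum of years of life lost to premature mortality (YLL) and years lived with disability (YLD). A minimal sketch of that arithmetic, with purely illustrative numbers rather than figures from the Global Burden of Disease study:

```python
# Hedged sketch of the burden-of-disease metrics mentioned above.
# All numeric inputs below are illustrative, not GBD study values.

def yll(deaths, remaining_life_expectancy):
    """Years of Life Lost: deaths x standard remaining life expectancy."""
    return deaths * remaining_life_expectancy

def yld(prevalent_cases, disability_weight):
    """Years Lived with Disability: cases x disability weight (0..1)."""
    return prevalent_cases * disability_weight

def daly(yll_years, yld_years):
    """Disability-Adjusted Life Years: YLL + YLD."""
    return yll_years + yld_years
```

For example, 10 deaths each losing 30 years, plus 1,000 prevalent cases at a disability weight of 0.145, yield 445 DALYs.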
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole-image scan time. A fusion of foreground segmentation and skin-colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to the selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640 × 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
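The sparse-grid, skin-colour pre-selection stage described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid step and the RGB skin rule are a common heuristic assumed here for demonstration, and the actual system additionally fuses foreground segmentation before handing candidate regions to the Viola-Jones cascade.

```python
import numpy as np

# Hedged sketch of region pre-selection: sample a sparse grid of pixels and
# mark grid points that look skin-coloured. The step size and the RGB skin
# thresholds below are assumptions for illustration, not the paper's values.

def sparse_skin_candidates(img, step=8):
    """img: H x W x 3 uint8 RGB image. Returns a boolean grid of candidate
    points; True entries indicate sampled pixels passing the skin rule."""
    grid = img[::step, ::step, :].astype(int)      # sparse pixel sampling
    r, g, b = grid[..., 0], grid[..., 1], grid[..., 2]
    # A widely used heuristic RGB skin rule (an assumption here).
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    return skin
```

In a full pipeline, connected clusters of True grid points would be grown into bounding boxes, and only those boxes scanned by the cascade classifier.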
Abstract:
A comparison was made of accelerated professional development (APD) for nurses (n=64), involving peer consultation and reflective practice, and peer consultation alone (n=30). Although APD participants had a higher completion rate, improvements in caregiver behaviors and work environment were not significantly different.
Abstract:
The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operational downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small populations of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspension data. These data contain significant information about the state and health of an asset. Condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When these data are available, an alternative to traditional reliability analysis is to model condition indicators, operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model. The literature review indicates that a number of covariate-based hazard models have been developed, all of them based on the underlying theory of the Proportional Hazards Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, owing to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models fail to fully incorporate all three types of asset health information (failure event data, i.e. observed and/or suspended failure times; condition data; and operating environment data) into a single model for more effective hazard and reliability predictions. In addition, current research shows that condition indicators and operating environment indicators have different characteristics: they are non-homogeneous covariate data. Condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. However, these non-homogeneous covariate data were modelled in the same way in the existing covariate-based hazard models. The related and more imperative question is how both types of indicator should be effectively modelled and integrated into a covariate-based hazard model. This work presents a new approach to addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three sources of asset health information into hazard and reliability prediction, and also captures the relationship between actual asset health and condition measurements as well as operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few.
Operating environment indicators in this model are failure accelerators and/or decelerators; they are included in the covariate function of EHM and may increase or decrease the hazard relative to the baseline. These indicators arise from the environment in which an asset operates and are not explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of operating environment indicators may be absent in EHM, condition indicators are always present, because they are observed and measured for as long as an asset is operational. EHM has several advantages over the existing covariate-based hazard models. One is that it utilises three different sources of asset health data (i.e. population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between condition indicators and operating environment indicators and their association with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM takes two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) for the baseline hazard. However, in many industrial applications, failure event data are sparse and their analysis often involves complex distributional shapes about which little is known. Therefore, to avoid this restrictive distributional assumption, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in two forms is a further merit of the model.
A case study was conducted using laboratory experiment data to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimated results with those of the other existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including a new parameter estimation method for the case of time-dependent covariate effects and missing data, application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
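For context, the Proportional Hazards Model family that EHM builds on (and departs from) scales a baseline hazard by an exponential function of the covariates: h(t | z) = h0(t) · exp(γ·z). A minimal sketch with a Weibull baseline follows; all parameter values are illustrative, and note that EHM itself goes further by letting the baseline hazard depend on condition indicators as well as time.

```python
import math

# Hedged sketch of a PHM-style covariate-based hazard model with a Weibull
# baseline. Parameter values (beta, eta, gamma) are illustrative only.

def weibull_baseline_hazard(t, beta, eta):
    """Weibull baseline hazard h0(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def phm_hazard(t, z, gamma, beta=2.0, eta=100.0):
    """PHM-style hazard h(t | z) = h0(t) * exp(sum_i gamma_i * z_i)."""
    link = math.exp(sum(g * zi for g, zi in zip(gamma, z)))
    return weibull_baseline_hazard(t, beta, eta) * link

def reliability(t, z, gamma, beta=2.0, eta=100.0, steps=1000):
    """R(t) = exp(-integral_0^t h(u | z) du), via the midpoint rule."""
    du = t / steps
    h = [phm_hazard((i + 0.5) * du, z, gamma, beta, eta) for i in range(steps)]
    return math.exp(-sum(h) * du)
```

With all covariate coefficients set to zero the model collapses to the plain Weibull reliability function, which is one way the proportionality assumption criticised above shows up: covariates can only rescale the baseline, never reshape it over time.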
Abstract:
Ingredients:
- 1 cup Vision
- 100 ml 'Real World' Application
- 100 ml Unit Structure/Organisation
- 100 ml Student-centric Approach [optional: add Social Media/Popular Culture for extra goodness]
- Large dollop of Passion + Enthusiasm
- Sprinkle of Approachability
Mix all ingredients well. Cover and leave to rise in a Lecture Theatre for 1.5 hours. Cook in a Classroom for 1.5 hours. Garnish with a dash of Humour before serving. Serves 170 Students.
Abstract:
Adolescent idiopathic scoliosis is a complex three-dimensional deformity affecting 2-3% of the general population. The resulting spinal deformity consists of coronal curvature, hypokyphosis of the thoracic spine and vertebral rotation in the axial plane, with the posterior elements turned into the curve concavity. The potential for curve progression is heightened during the adolescent growth spurt. Successful correction of the scoliosis deformity depends on solid bony fusion between adjacent vertebrae after the intervertebral (IV) discs have been surgically cleared and the disc spaces filled with graft material. Recently, a bioactive and resorbable scaffold fabricated from medical-grade polycaprolactone (PCL) has been developed for bone regeneration at load-bearing sites. Combined with rhBMP-2, this has been shown to act successfully as a bone graft substitute in a porcine lumbar interbody fusion model when compared to autologous bone graft alone. The study aimed to establish a large-animal thoracic spine interbody fusion model, to develop biodegradable spinal scaffolds (PCL) in combination with biologics (rhBMP-2), and to establish a platform for research into spinal tissue-engineering constructs. Preliminary results demonstrate higher grades of radiologically evident bony fusion across all levels when comparing fusion scores between the 3- and 6-month post-operative groups at the CaP-coated PCL scaffold level, observed to be of a similar grade to autograft, while no fusion is seen at the scaffold-only level. Results to date suggest that the combination of rhBMP-2 and scaffold engineering actively promotes bone formation, laying the basis for viable tissue-engineered constructs.
Abstract:
This paper is concerned with the unsupervised learning of object representations by fusing visual and motor information. The problem is posed for a mobile robot that develops its representations as it incrementally gathers data. The scenario is problematic as the robot only has limited information at each time step with which it must generate and update its representations. Object representations are refined as multiple instances of sensory data are presented; however, it is uncertain whether two data instances correspond to the same object, and this process can easily become unstable. The premise of the presented work is that a robot's motor information instigates successful generation of visual representations. An understanding of self-motion enables a prediction to be made before performing an action, resulting in a stronger belief of data association. The system is implemented as a data-driven partially observable semi-Markov decision process. Object representations are formed as the process's hidden states and are coordinated with motor commands through state transitions. Experiments show the prediction process is essential in enabling the unsupervised learning method to converge to a solution, improving precision and recall over using sensory data alone.
Abstract:
International comparison is complicated by the use of different terms, classification methods, policy frameworks and system structures, not to mention different languages and terminology. Multi-case studies can assist in the understanding of the influence wielded by cultural, social, economic, historical and political forces upon educational decisions, policy construction and changes over time. But case studies alone are not enough. In this paper, we argue for an ecological or scaled approach that travels through macro, meso and micro levels to build nested case-studies to allow for more comprehensive analysis of the external and internal factors that shape policy-making and education systems. Such an approach allows for deeper understanding of the relationship between globalizing trends and policy developments.
Abstract:
Conspicuity limitations make bicycling at night dangerous. This experiment quantified bicyclists' estimates of the distance at which approaching drivers would first recognize them. Twenty-five participants (including 13 bicyclists who rode at least once per week, and 12 who rode once per month or less) cycled in place on a closed-road circuit at night-time and indicated when they were confident that an approaching driver would first recognize that a bicyclist was present. Participants wore black clothing alone or together with a fluorescent bicycling vest, a fluorescent bicycling vest with additional retroreflective tape, or the fluorescent retroreflective vest plus ankle and knee reflectors in a modified 'biomotion' configuration. The bicycle had a light mounted on the handlebars which was either static, flashing or off. Participants judged that black clothing made them least visible, that retroreflective strips on the legs in addition to a retroreflective vest made them most visible, and that adding retroreflective materials to a fluorescent vest provides no conspicuity benefits. Flashing bicycle lights were associated with higher conspicuity than static lights. Additionally, occasional bicyclists judged themselves to be more visible than did frequent bicyclists. Overall, bicyclists overestimated their conspicuity compared to previously collected recognition distances and underestimated the conspicuity benefits of retroreflective markings on their ankles and knees. Participants mistakenly judged that a fluorescent vest that did not include retroreflective material would enhance their night-time conspicuity. These findings suggest that bicyclists have dangerous misconceptions concerning the magnitude of the night-time conspicuity problem and the potential value of conspicuity treatments.
Abstract:
Load modeling plays an important role in power system dynamic stability assessment. One of the widely used methods for assessing the impact of load models on system dynamic response is parametric sensitivity analysis, and load ranking provides an effective measure of that impact. Traditionally, load ranking has been based on either a static or a dynamic load model alone. In this paper, a load ranking framework based on a composite load model is proposed. It enables comprehensive investigation of load modeling impacts on system stability, considering the dynamic interactions between load and system dynamics. The impact of load composition on the overall sensitivity, and therefore on the ranking of the load, is also investigated. Dynamic simulations are performed to further elucidate the results obtained through the sensitivity-based load ranking approach.
Abstract:
Organizations from every industry sector seek to enhance their business performance and competitiveness through the deployment of contemporary information systems (IS), such as Enterprise Systems (ERP). Investments in ERP are complex and costly, attracting scrutiny and pressure to justify their cost. Thus, IS researchers highlight the need for systematic evaluation of information system success, or impact, which has resulted in the introduction of varied models for evaluating information systems. One of these systematic measurement approaches is the IS-Impact Model introduced by a team of researchers at Queensland University of Technology (QUT) (Gable, Sedera, & Chan, 2008). The IS-Impact Model is conceptualized as a formative, multidimensional index that consists of four dimensions. Gable et al. (2008) define IS-Impact as "a measure at a point in time, of the stream of net benefits from the IS, to date and anticipated, as perceived by all key-user-groups" (p. 381). The IT Evaluation Research Program (ITE-Program) at QUT has grown the IS-Impact Research Track with the central goal of conducting further studies to enhance and extend the IS-Impact Model. The overall goal of the IS-Impact research track at QUT is "to develop the most widely employed model for benchmarking information systems in organizations for the joint benefit of both research and practice" (Gable, 2009). To achieve that, the IS-Impact research track advocates programmatic research guided by the principles of tenacity, holism, and generalizability through extension research strategies. This study was conducted within the IS-Impact Research Track to further generalize the IS-Impact Model by extending it to the Saudi Arabian context. According to Hofstede (2012), the national culture of Saudi Arabia is significantly different from the Australian national culture, making the Saudi Arabian culture an interesting context for testing the external validity of the IS-Impact Model.
The study re-visits the IS-Impact Model from the ground up. Rather than assume the existing instrument is valid in the new context, or simply assess its validity through quantitative data collection, the study takes a qualitative, inductive approach to re-assessing the necessity and completeness of existing dimensions and measures. This is done in two phases: an Exploratory Phase and a Confirmatory Phase. The Exploratory Phase addresses the first research question of the study: "Is the IS-Impact Model complete and able to capture the impact of information systems in Saudi Arabian organizations?". The content analysis, used to analyze the Identification Survey data, indicated that 2 of the 37 measures of the IS-Impact Model are not applicable to the Saudi Arabian context. Moreover, no new measures or dimensions were identified, evidencing the completeness and content validity of the IS-Impact Model. In addition, the Identification Survey data suggested several concepts related to IS-Impact, the most prominent of which was "Computer Network Quality" (CNQ). The literature supported the existence of a theoretical link between IS-Impact and CNQ, with CNQ viewed as an antecedent of IS-Impact. With the primary goal of validating the IS-Impact Model within its extended nomological network, CNQ was introduced to the research model. The Confirmatory Phase addresses the second research question of the study: "Is the Extended IS-Impact Model valid as a hierarchical multidimensional formative measurement model?". The objective of the Confirmatory Phase was to test the validity of the IS-Impact Model and the CNQ Model. To achieve that, IS-Impact, CNQ, and IS-Satisfaction were operationalized in a survey instrument, and the research model was then assessed by employing the Partial Least Squares (PLS) approach. The CNQ model was validated as a formative model. Similarly, the IS-Impact Model was validated as a hierarchical multidimensional formative construct.
However, the analysis indicated that one of the IS-Impact Model indicators was insignificant and could be removed from the model. Thus, the resulting Extended IS-Impact Model consists of 4 dimensions and 34 measures. Finally, the structural model was also assessed against two aspects: explanatory and predictive power. The analysis revealed that the path coefficient between CNQ and IS-Impact is significant (t-value = 4.826) and relatively strong (β = 0.426), with CNQ explaining 18% of the variance in IS-Impact. These results supported the hypothesis that CNQ is an antecedent of IS-Impact. The study demonstrates that the quality of the computer network affects the quality of the Enterprise System (ERP) and consequently the impacts of the system; therefore, practitioners should pay attention to computer network quality. Similarly, the path coefficient between IS-Impact and IS-Satisfaction was significant (t-value = 17.79) and strong (β = 0.744), with IS-Impact alone explaining 55% of the variance in IS-Satisfaction, consistent with the results of the original IS-Impact study (Gable et al., 2008). The research contributions include: (a) supporting the completeness and validity of the IS-Impact Model as a hierarchical multidimensional formative measurement model in the Saudi Arabian context; (b) operationalizing Computer Network Quality as conceptualized in ITU-T Recommendation E.800 (ITU-T, 1993); (c) validating CNQ as a formative measurement model and as an antecedent of IS-Impact; and (d) conceptualizing and validating IS-Satisfaction as a reflective measurement model and as an immediate consequence of IS-Impact. The CNQ model provides a framework to perceptually measure Computer Network Quality from multiple perspectives, and features an easy-to-understand, easy-to-use, and economical survey instrument.
Abstract:
It is certain that there will be changes in environmental conditions across the globe as a result of climate change. Such changes will require the building of biological, human and infrastructure resilience. In some instances the building of such resilience will be insufficient to deal with extreme changes in environmental conditions, and legal frameworks will be required to provide recognition and support for people dislocated because of environmental change. Such dislocation may occur internally, within the country of origin, or externally, into another State's territory. International and national legal frameworks do not currently recognise or assist people displaced as a result of environmental factors, including displacement caused by climate change. Legal frameworks developed to deal with this issue will need to consider the legal rights of those people displaced and the legal responsibilities of those countries required to respond to such displacement. The objective of this article is to identify the most suitable international institution to host a program addressing climate displacement. There are a number of areas of international law that are relevant to climate displacement, including refugee law, human rights law and international environmental law. These regimes, however, were not designed to protect people relocating as a result of environmental change. As such, while they may be indirectly relevant to climate displacement, they currently do nothing to directly address this complex issue. In order to determine the most appropriate institution to address and regulate climate displacement, it is imperative to consider issues of governance.
This paper seeks to examine this issue and determine whether it is preferable to place climate displacement programs into existing international legal frameworks or whether it is necessary to regulate this area in an entirely new institution specifically designed to deal with the complex and cross-cutting issues surrounding the topic. Commentators in this area have proposed three different regulatory models for addressing climate displacement: (a) expand the definition of refugee under the Refugee Convention to encompass persons displaced by climate change; (b) implement a new stand-alone Climate Displacement Convention; and (c) implement a Climate Displacement Protocol to the UNFCCC. This article will examine each of these proposed models against a number of criteria to determine the model that is most likely to address the needs and requirements of people displaced by climate change. It will also identify the model that is likely to be most politically acceptable and realistic for those countries likely to attract responsibilities through its implementation. To assess whether the rights and needs of the people to be displaced will be met, theories of procedural, distributive and remedial justice will be used to consider the equity of the proposed schemes. To identify the most politically palatable and realistic scheme, reference will be made to previous state practice and compliance with existing obligations in the area. It is suggested that the criteria identified by this article should underpin any future climate displacement instrument.
Abstract:
The fungal genera Ustilago, Sporisorium and Macalpinomyces represent an unresolved complex. Taxa within the complex often possess characters that occur in more than one genus, creating uncertainty for species placement. Previous studies have indicated that the genera cannot be separated by morphology alone. Here we chronologically review the history of the Ustilago-Sporisorium-Macalpinomyces complex, argue for its resolution and suggest methods to accomplish a stable taxonomy. A combined molecular and morphological approach is required to identify synapomorphic characters that underpin a new classification. Ustilago, Sporisorium and Macalpinomyces require explicit re-description and new genera, based on monophyletic groups, are needed to accommodate taxa that no longer fit the emended descriptions. A resolved classification will end the taxonomic confusion that surrounds generic placement of these smut fungi.