255 results for Dipl.-Ing. Frank Abelbeck
Abstract:
- Objective To compare health service cost and length of stay between a traditional and an accelerated diagnostic approach to assessing acute coronary syndromes (ACS) among patients who presented to the emergency department (ED) of a large tertiary hospital in Australia. - Design, setting and participants This historically controlled study analysed data collected from two independent patient cohorts presenting to the ED with potential ACS. The first cohort of 938 patients was recruited in 2008–2010 and assessed using the traditional diagnostic approach detailed in the national guideline. The second cohort of 921 patients was recruited in 2011–2013 and assessed with the accelerated diagnostic approach known as the Brisbane protocol. The Brisbane protocol applied early serial troponin testing at 0 and 2 h after presentation to the ED, compared with 0 and 6 h testing in the traditional assessment process. The Brisbane protocol also defined a low-risk group of patients in whom no objective testing was performed. A decision tree model was used to compare the expected cost and length of stay in hospital between the two approaches. Probabilistic sensitivity analysis was used to account for model uncertainty. - Results Compared with the traditional diagnostic approach, the Brisbane protocol was associated with a reduced expected cost of $1229 (95% CI −$1266 to $5122) and a reduced expected length of stay of 26 h (95% CI −14 to 136 h). The Brisbane protocol allowed physicians to discharge a higher proportion of low-risk and intermediate-risk patients from the ED within 4 h (72% vs 51%). Sensitivity analysis suggested the Brisbane protocol had a high chance of being both cost-saving and time-saving. - Conclusions This study provides some evidence of cost savings from a decision to adopt the Brisbane protocol. Benefits would arise for the hospital and for patients and their families.
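The comparison above rests on a decision tree: each diagnostic pathway is a set of mutually exclusive outcomes, each with a probability and a cost, and the pathway's expected cost is the probability-weighted sum. A minimal sketch of that calculation, using hypothetical probabilities and costs rather than the study's actual parameters:

```python
# Sketch of a decision-tree expected-cost comparison between two diagnostic
# pathways. All probabilities and dollar costs below are hypothetical
# placeholders, not the study's estimates.

def expected_value(branches):
    """Expected value of a decision branch: sum of p_i * v_i."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * v for p, v in branches)

# Hypothetical (probability, cost in $) pairs, e.g. early discharge vs admission.
traditional = [(0.6, 2000), (0.4, 8000)]
brisbane    = [(0.75, 1500), (0.25, 8000)]

saving = expected_value(traditional) - expected_value(brisbane)
print(f"Expected cost saving per patient: ${saving:.0f}")
```

A probabilistic sensitivity analysis would repeat this calculation many times with probabilities and costs drawn from distributions, yielding the reported confidence intervals.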
Abstract:
- Introduction Heat-based training (HT) is becoming increasingly popular as a means of inducing acclimation before athletic competition in hot conditions and/or of augmenting the training impulse beyond that achieved in thermo-neutral conditions. Importantly, the effects of HT on regenerative processes such as sleep, and their interactions with common recovery interventions, remain unknown. This study aimed to examine sleep characteristics during five consecutive days of training in the heat, with the inclusion of cold-water immersion (CWI), compared to baseline sleep patterns. - Methods Thirty recreationally trained males completed HT in 32 ± 1 °C and 60% rh for five consecutive days. Conditions included: 1) 90 min cycling at 40% of power at VO2max (Pmax) (90CONT; n = 10); 2) 90 min cycling at 40% Pmax with 20 min CWI (14 ± 1 °C; 90CWI; n = 10); and 3) 30 min cycling alternating between 40 and 70% Pmax every 3 min, with no recovery intervention (30HIT; n = 10). Sleep quality and quantity were assessed during HT and four nights of 'baseline' sleep (BASE). Actigraphy provided measures of time in and out of bed, sleep latency, efficiency, total time in bed, total time asleep, wake after sleep onset, number of awakenings, and awakening duration. Subjective ratings of sleep were also recorded using a 1-5 Likert scale. Repeated measures analysis of variance (ANOVA) was used to determine the effects of time and condition on sleep quality and quantity. Cohen's d effect sizes were also applied to determine the magnitude of, and trends in, the data. - Results Sleep latency, efficiency, total time in bed and number of awakenings were not significantly different between BASE and HT (P > 0.05). However, total time asleep was significantly reduced (P = 0.01; d = 1.46) and the duration of wakefulness after sleep onset was significantly greater during HT compared with BASE (P = 0.001; d = 1.14).
Comparison between training groups showed that latency was significantly higher for the 30HIT group than for 90CONT (P = 0.02; d = 1.33). Nevertheless, there were no differences between training groups for sleep efficiency, total time in bed or asleep, wake after sleep onset, number of awakenings or awake duration (P > 0.05). Further, cold-water immersion recovery had no significant effect on sleep characteristics (P > 0.05). - Discussion Sleep plays an important role in athletic recovery and has previously been shown to be influenced by both exercise training and thermal strain. The present data highlight the effect of HT on sleep quality, specifically a reduction in total time asleep due to longer periods of wakefulness after sleep onset. Importantly, although cold-water recovery accelerates the removal of thermal load, this intervention did not blunt the negative effects of HT on sleep characteristics. - Conclusion Training in hot conditions may reduce both sleep quantity and quality, and this should be taken into consideration when administering this training intervention in the field.
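The Cohen's d values reported above express each group difference in pooled standard-deviation units. A minimal sketch of that computation for two independent samples, using hypothetical minutes of total sleep time rather than the study's data:

```python
import statistics

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Hypothetical total sleep time (minutes): baseline nights vs heat-training nights.
base = [430, 445, 460, 420, 455]
heat = [390, 400, 415, 385, 405]
print(round(cohens_d(base, heat), 2))
```

By the usual convention, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 or above large, so the reported d = 1.46 for total time asleep is a large effect.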
Abstract:
This paper investigates the challenges of delivering a parent training intervention for autism over video. We conducted a qualitative field study of an intervention based on a well-established training program for parents of children with autism, called Hanen More Than Words. The study was conducted with a Hanen Certified speech pathologist who delivered video-based training to two mothers, each with a son with autism. We conducted observations of 14 sessions of the intervention spanning 3 months, along with 3 semi-structured interviews with each participant. We identified the different activities that participants performed across sessions and analysed their implications for technology. We found that all participants welcomed video-based training, but they also faced several difficulties, particularly in establishing rapport with other participants, inviting equal participation, and observing and providing feedback on parent-child interactions. Finally, we reflect on our findings and motivate further investigations by defining three design sensitivities: Adaptation, Group Participation, and Physical Setup.
Abstract:
Recent research on technology during mealtimes has been mostly concerned with developing technology rather than with creating a deeper understanding of the context of family mealtimes and their associated practices. In this paper, we present a two-phase study discussing how temporal, social, and food-related features are intertwined with technology use during mealtimes. Our findings show how people differentiate technology usage between weekday meals, weekend meals, and different meals of the day. We identify and analyse prototypical situations ranging from the use of arbitrary technologies while eating alone, to idiosyncratic family norms and practices associated with shared technologies. We discuss the use of mealtime technology to create an appropriate ambience for meals with guests, and demonstrate how technology can complement food in everyday meals and on special occasions. Our findings make recommendations about the need for HCI research to recognize the contextual nature of technology usage during family mealtimes and to adopt appropriate design strategies.
Abstract:
Natural User Interfaces (NUIs) offer rich ways of interacting with the digital world that make innovative use of existing human capabilities. They include, and often combine, different input modalities such as voice, gesture, eye gaze, body interactions, and touch and touchless interactions. However, much of the focus of NUI research and development has been on enhancing the experience of individuals interacting with technology. Effective NUIs must also acknowledge our innately social characteristics, and support how we communicate with each other, play together, learn together and work together collaboratively. This workshop concerns the social aspects of NUIs. It seeks to better understand the social uses and applications of these new NUI technologies -- how we design them for new social practices, and how we understand their use in key social contexts.
Abstract:
In this paper we report the results of a study comparing implicit-only and explicit-only interactions in a collaborative, video-mediated task with shared content. Expanding on earlier work, which has typically only evaluated how implicit interaction can augment primarily explicit systems, we report issues surrounding control, anxiety and negotiation in the context of video-mediated collaboration. We conclude that implicit interaction has the potential to improve collaborative work, but that a multitude of issues must first be negotiated.
Abstract:
Exposure to ambient air pollution is a major risk factor for global disease. Assessment of the impacts of air pollution on population health, and evaluation of trends relative to other major risk factors, require regularly updated, accurate, spatially resolved exposure estimates. We combined satellite-based estimates, chemical transport model (CTM) simulations and ground measurements from 79 different countries to produce new global estimates of annual average fine particle (PM2.5) and ozone concentrations at 0.1° × 0.1° spatial resolution for five-year intervals from 1990–2010 and for the year 2013. These estimates were then applied to assess population-weighted mean concentrations for 1990–2013 for each of 188 countries. In 2013, 87% of the world's population lived in areas exceeding the World Health Organization (WHO) Air Quality Guideline of 10 μg/m3 PM2.5 (annual average). Between 1990 and 2013, decreases in population-weighted mean concentrations of PM2.5 were evident in most high-income countries, in contrast to the increases estimated in South Asia, throughout much of Southeast Asia, and in China. Population-weighted mean concentrations of ozone increased in most countries from 1990 to 2013, with modest decreases in North America, parts of Europe, and several countries in Southeast Asia.
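A population-weighted mean, as used above to summarise gridded concentrations per country, weights each grid cell's concentration by the share of the population living in that cell. A minimal sketch with hypothetical grid values, not the study's data:

```python
# Population-weighted mean concentration over a set of grid cells.
# The PM2.5 values and cell populations below are hypothetical.

def population_weighted_mean(concentrations, populations):
    """Weight each grid cell's concentration by its population share."""
    total_pop = sum(populations)
    return sum(c * p for c, p in zip(concentrations, populations)) / total_pop

pm25 = [8.0, 25.0, 60.0]                    # µg/m3 per grid cell
pop  = [1_000_000, 5_000_000, 2_000_000]    # residents per grid cell
mean = population_weighted_mean(pm25, pop)
print(f"{mean:.1f} µg/m3; exceeds WHO guideline of 10: {mean > 10}")
```

Note that a heavily populated polluted cell dominates the country mean even when most of the land area is clean, which is why the population-weighted figure differs from a simple spatial average.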
Abstract:
Background Risk-stratification of diffuse large B-cell lymphoma (DLBCL) requires identification of patients whose disease is not cured despite initial R-CHOP. Although the prognostic importance of the tumour microenvironment (TME) is established, the optimal strategy to quantify it is unknown. Methods The relationship between immune-effector and inhibitory (checkpoint) genes was assessed by NanoString™ in 252 paraffin-embedded DLBCL tissues. A model quantifying net anti-tumoural immunity as an outcome predictor was tested in 158 R-CHOP treated patients, and validated in tissue and blood from two independent R-CHOP treated cohorts of 233 and 140 patients, respectively. Findings T- and NK-cell immune-effector molecule expression correlated with tumour-associated macrophage and PD-1/PD-L1 axis markers, consistent with malignant B-cells triggering a dynamic checkpoint response to adapt to and evade immune surveillance. A tree-based survival model was used to test whether immune-effector to checkpoint ratios were prognostic. The CD4*CD8:(CD163/CD68)*PD-L1 ratio stratified overall survival better than any single immune marker or combination of markers, distinguishing groups with disparate 4-year survival (92% versus 47%). The immune ratio was independent of, and added to, the revised international prognostic index (R-IPI) and cell-of-origin (COO). The tissue findings were validated in 233 DLBCL R-CHOP treated patients. Furthermore, within the blood of 140 R-CHOP treated patients, immune-effector:checkpoint ratios were associated with differential interim-PET/CT +ve/-ve status.
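The prognostic ratio named above divides a product of T-cell effector markers by a macrophage-weighted checkpoint marker, so higher values indicate greater net anti-tumoural immunity. A minimal sketch of that arithmetic on hypothetical normalised expression values (the study's normalisation and cut-points may differ):

```python
# Immune-effector : checkpoint ratio from the abstract,
# CD4*CD8 : (CD163/CD68)*PD-L1, on hypothetical normalised expression values.

def immune_ratio(cd4, cd8, cd163, cd68, pdl1):
    """(CD4 * CD8) / ((CD163 / CD68) * PD-L1); higher = more net effector activity."""
    return (cd4 * cd8) / ((cd163 / cd68) * pdl1)

# Two hypothetical patients: effector-dominant vs checkpoint-dominant profile.
print(round(immune_ratio(5.0, 4.0, 6.0, 3.0, 2.0), 2))
print(round(immune_ratio(1.0, 1.0, 8.0, 2.0, 6.0), 3))
```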
Abstract:
Homozygosity has long been associated with rare, often devastating, Mendelian disorders [1], and Darwin was one of the first to recognize that inbreeding reduces evolutionary fitness [2]. However, the effect of the more distant parental relatedness that is common in modern human populations is less well understood. Genomic data now allow us to investigate the effects of homozygosity on traits of public health importance by observing contiguous homozygous segments (runs of homozygosity), which are inferred to be homozygous along their complete length. Given the low levels of genome-wide homozygosity prevalent in most human populations, information is required on very large numbers of people to provide sufficient power [3,4]. Here we use runs of homozygosity to study 16 health-related quantitative traits in 354,224 individuals from 102 cohorts, and find statistically significant associations between summed runs of homozygosity and four complex traits: height, forced expiratory lung volume in one second, general cognitive ability and educational attainment (P < 1 × 10^-300, 2.1 × 10^-6, 2.5 × 10^-10 and 1.8 × 10^-10, respectively). In each case, increased homozygosity was associated with decreased trait value, equivalent to the offspring of first cousins being 1.2 cm shorter and having 10 months' less education. Similar effect sizes were found across four continental groups and across populations with different degrees of genome-wide homozygosity, providing evidence that homozygosity, rather than confounding, directly contributes to phenotypic variance. Contrary to earlier reports in substantially smaller samples [5,6], no evidence was seen of an influence of genome-wide homozygosity on blood pressure, low-density lipoprotein cholesterol, or ten other cardio-metabolic traits. Since directional dominance is predicted for traits under directional evolutionary selection [7], this study provides evidence that increased stature and cognitive function have been positively selected in human evolution, whereas many important risk factors for late-onset complex diseases may not have been.
Abstract:
Grand Push Auto (GPA) is an exertion game in which players aim to push a full-sized car to ever increasing speeds. The re-appropriation of a car as essentially a large weight allows us to create a highly portable and distributable exertion game in which the main game element weighs over 1000 kilograms. In this paper we discuss initial experiences with GPA, and present three questions for ongoing study identified from our early testing: How might we appropriate existing objects in exertion game design, and does appropriation change how we think about these objects in different contexts, for example environmental awareness? How does this relate to more traditional sled-based weight training? How can we create exertion games that allow truly brutal levels of force?
Abstract:
Improved forecasting of urban rail patronage is essential for effective policy development and efficient planning of new rail infrastructure. Past modelling and forecasting of urban rail patronage has been based on legacy modelling approaches and has often been conducted at the general level of public transport demand rather than being specific to urban rail. This project canvassed current Australian practice and international best practice to develop and estimate time series and cross-sectional models of rail patronage for Australian mainland state capital cities. This involved a large online survey of rail riders and non-riders in each state capital city, resulting in a comprehensive database of respondent socio-economic profiles, travel experience, attitudes to rail and other modes of travel, together with stated preference responses to a wide range of urban travel scenarios. Estimation of the models demonstrated their ability to provide information on the major influences on the urban rail travel decision. Rail fares, congestion and rail service supply all have a strong influence on rail patronage, while less significant factors such as fuel price and access to a motor vehicle are also influential. Of note, too, is the relative homogeneity of rail user profiles across the state capitals. Rail users tend to have higher incomes and education levels, and are younger and more likely to be in full-time employment than non-rail users. The analysis reported here represents only a small proportion of what could be accomplished using the survey database; more comprehensive investigation was beyond the scope of the project and has been left for future work.
Abstract:
This study examined the effect of exercise intensity and duration during 5-day heat acclimation (HA) on cycling performance and neuromuscular responses. Twenty recreationally trained males completed a 'baseline' trial, followed by 5 consecutive days of HA and a 'post-acclimation' trial. Baseline and post-acclimation trials consisted of maximal voluntary contractions (MVC), a single and repeated countermovement jump protocol, a 20 km cycling time trial (TT) and 5 x 6 s maximal sprints (SPR). Cycling trials were undertaken in 33.0 ± 0.8 °C and 60 ± 3% relative humidity. Core (Tcore) and skin (Tskin) temperatures, heart rate (HR), rating of perceived exertion (RPE) and thermal sensation were recorded throughout the cycling trials. Participants were assigned to either a 30 min high-intensity (30HI) or a 90 min low-intensity (90LI) cohort for HA, conducted in environmental conditions of 32.0 ± 1.6 °C. Percentage change in time to complete the 20 km TT was significantly improved post-acclimation in the 90LI cohort (-5.9 ± 7.0%; P=0.04) compared to the 30HI cohort (-0.18 ± 3.9%; P<0.05). The 30HI cohort showed the greatest improvements in power output (PO) during post-acclimation SPR1 and SPR2 compared to 90LI (546 ± 128 W and 517 ± 87 W, respectively; P<0.02). No differences were evident for MVC within the 30HI cohort; however, a reduced performance was indicated by the percentage change within the 90LI cohort (P=0.04). Compared to baseline, mean Tcore was reduced post-acclimation within the 30HI cohort (P=0.05), while mean Tcore and HR were significantly reduced within the 90LI cohort (P=0.01 and 0.04, respectively). Greater physiological adaptations and performance improvements were noted within the 90LI cohort than within the 30HI cohort. However, 30HI did provide some benefit to anaerobic performance, including sprint PO and MVC. These findings suggest that specifying training duration and intensity during heat acclimation may be useful for targeting specific post-acclimation performance.
Abstract:
Objective: To evaluate the feasibility, reliability and acceptability of the mini clinical evaluation exercise (mini-CEX) for performance assessment among international medical graduates (IMGs). Design, setting and participants: Observational study of 209 patient encounters involving 28 IMGs and 35 examiners at three metropolitan teaching hospitals in New South Wales, Victoria and Queensland, September-December 2006. Main outcome measures: The reliability of the mini-CEX was estimated using generalisability (G) analysis, and its acceptability was evaluated by a written survey of the examiners and IMGs. Results: The G coefficient for eight encounters was 0.88, suggesting that the reliability of the mini-CEX would be 0.90 for 10 encounters. Almost half of the IMGs (7/16) and most examiners (14/18) were satisfied with the mini-CEX as a learning tool. Most of the IMGs and examiners valued the immediate feedback, which is a strong component of the tool. Conclusion: The mini-CEX is a reliable tool for performance assessment of IMGs, and is acceptable to and well received by both learners and supervisors.
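The jump from a G coefficient of 0.88 over eight encounters to a projected 0.90 over ten is consistent with a Spearman-Brown style extrapolation. A minimal sketch under that assumption (the study's exact generalisability computation may differ):

```python
# Spearman-Brown style reliability extrapolation. Assumes the classical
# prophecy formula; shown only to illustrate how 0.88 over 8 encounters
# projects to roughly 0.90 over 10.

def per_item_reliability(r_k, k):
    """Invert Spearman-Brown: reliability attributable to a single encounter."""
    return r_k / (k - (k - 1) * r_k)

def projected_reliability(r1, n):
    """Spearman-Brown prophecy: reliability of the mean of n encounters."""
    return n * r1 / (1 + (n - 1) * r1)

r1 = per_item_reliability(0.88, 8)
print(round(projected_reliability(r1, 10), 2))  # ≈ 0.90, matching the abstract
```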
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to separate fully inundated areas from 'wet' areas where trees and houses are partly covered by water. This can be regarded as a typical instance of the mixed-pixel problem. A number of automatic information extraction and image classification algorithms have been developed over the years for flood mapping using optical remote sensing images. Most classification algorithms generally assign a pixel to the class label with the greatest likelihood. However, these hard classification methods often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve the mixed-pixel problem, advanced image processing techniques are adopted; linear spectral unmixing is one of the most popular soft classification techniques used for mixed-pixel analysis. Good performance of linear spectral unmixing depends on two important issues: the method of selecting endmembers, and the method of modelling the endmembers for unmixing. This paper presents an improvement in the adaptive selection of an endmember subset for each pixel in spectral unmixing for reliable flood mapping. Using a fixed set of endmembers to unmix all pixels in an entire image can overestimate the endmember spectra residing in a mixed pixel and hence reduce the performance of spectral unmixing. In contrast, applying an adaptively estimated subset of endmembers for each pixel can decrease the residual error in the unmixing results and provide reliable output. This paper also shows that the proposed method improves the accuracy of conventional linear unmixing methods and is easy to apply. Three different linear spectral unmixing methods were applied to test the improvement in unmixing results. Experiments were conducted on three different sets of Landsat-5 TM images of three different flood events in Australia to examine the method under different flooding conditions, and satisfactory flood mapping outcomes were achieved.
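Linear spectral unmixing models each pixel as a convex combination of endmember spectra. For the two-endmember case relevant to flood mapping (e.g. water vs dry land), the water fraction has a closed-form least-squares solution. A minimal sketch with hypothetical reflectance values, not the paper's Landsat-5 TM endmembers:

```python
# Two-endmember linear spectral unmixing: pixel ≈ f*e1 + (1-f)*e2.
# Endmember spectra below are hypothetical per-band reflectances.

def unmix_two(pixel, e1, e2):
    """Least-squares fraction f of endmember e1, clipped to the
    physically meaningful range [0, 1]."""
    d = [a - b for a, b in zip(e1, e2)]
    num = sum((p - b) * di for p, b, di in zip(pixel, e2, d))
    den = sum(di * di for di in d)
    f = num / den
    return max(0.0, min(1.0, f))

water = [0.05, 0.04, 0.03]   # hypothetical reflectance per band
land  = [0.25, 0.30, 0.35]
mixed = [0.15, 0.17, 0.19]   # a partly inundated pixel
print(round(unmix_two(mixed, water, land), 2))  # estimated water fraction
```

The adaptive scheme described in the abstract would, per pixel, first choose which endmembers to include before solving a fit like this, rather than using one fixed endmember set for the whole image.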