889 results for SOFA SCORE


Relevance: 10.00%

Abstract:

Background Previous research on the role of dietary composition in the development of childhood obesity remains inconclusive. Several studies investigating the relationship between body mass index (BMI), waist circumference (WC) and/or skinfold measurements and energy intake have suggested that the macronutrient composition of the diet (protein, carbohydrate, fat) may play an important contributing role in childhood obesity, as it does in adults. This study investigated the possible relationship of BMI and WC with energy intake and percentage energy intake from macronutrients in Australian children and adolescents. Methods Height, weight and WC measurements, along with 24 h food and drink record (FDR) intake data, were collected from 2460 boys and girls aged 5-17 years living in the state of Queensland, Australia. Results Statistically significant yet weak correlations between BMI z-score and WC with total energy intake were observed in grades 1, 5 and 10, with only 55% of subjects having a physiologically plausible 24 h FDR. Using Pearson correlations to examine the relationship of BMI and WC with energy intake and percentage macronutrient intake, no significant correlations were observed between BMI z-score or WC and percentage energy intake from protein, carbohydrate or fat. One-way ANOVAs showed that those with a higher BMI z-score or WC consumed significantly more energy than their lean counterparts. Conclusion No evidence of an association between percentage macronutrient intake and BMI or WC was found. More robust longitudinal studies are needed to elucidate the relationship between obesity and dietary intake.
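The core analysis here is a set of simple bivariate correlations. A minimal sketch of that calculation is below, using synthetic data; the sample size, coupling strength, and variable names are assumptions for illustration only, not the study's data.

```python
# Hedged sketch of a Pearson correlation between BMI z-score and energy
# intake; all numbers here are synthetic, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 200                                    # hypothetical subsample size
bmi_z = rng.normal(0, 1, n)                # BMI z-scores
# Energy intake (kJ/day) weakly coupled to BMI z-score, mirroring the
# "significant yet weak" correlations the study reports.
energy = 8000 + 300 * bmi_z + rng.normal(0, 1500, n)

r, p = pearsonr(bmi_z, energy)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```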

Relevance: 10.00%

Abstract:

Using American panel data from the National Education Longitudinal Study of 1988, this article investigates the effect of working during grade 12 on attainment. We employ, for the first time in the related literature, a semiparametric propensity score matching approach combined with difference-in-differences. We address selection on both observables and unobservables associated with part-time work decisions, without the need for an instrumental variable. Once such factors are controlled for, little to no effect on reading and math scores is found. Overall, our results therefore suggest a negligible academic cost of part-time working by the end of high school.
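The matching-plus-differencing strategy can be illustrated compactly. The sketch below uses a plain nearest-neighbour match on a logistic propensity score rather than the paper's semiparametric estimator, and all data and variable names are invented; it is meant only to show how matching on observables combines with differencing-out time-invariant unobservables.

```python
# Simplified propensity score matching + difference-in-differences.
# Synthetic data; the true effect of "working" is set to zero, echoing
# the paper's finding of little to no effect.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 3))                              # observed covariates
worked = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # part-time work in grade 12
score_pre = X @ np.array([1.0, 0.5, 0.2]) + rng.normal(size=n)
score_post = score_pre + 0.5 + 0.0 * worked + rng.normal(size=n)

# Step 1: propensity scores from observables.
ps = LogisticRegression().fit(X, worked).predict_proba(X)[:, 1]

# Step 2: match each worker to the nearest non-worker on the score.
t_idx, c_idx = np.where(worked == 1)[0], np.where(worked == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
match = c_idx[nn.kneighbors(ps[t_idx].reshape(-1, 1))[1].ravel()]

# Step 3: difference-in-differences across matched pairs removes
# time-invariant unobserved differences between the two groups.
did = np.mean((score_post[t_idx] - score_pre[t_idx])
              - (score_post[match] - score_pre[match]))
print(f"Matched DiD estimate: {did:.3f}")                # should be near 0
```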

Relevance: 10.00%

Abstract:

Background Tumour necrosis (TN) is recognized to be a consequence of chronic cellular hypoxia. TN and hypoxia correlate with poor prognosis in solid tumours. Methods In a retrospective study, the prognostic implications of the extent of TN were evaluated in non-small cell lung cancer (NSCLC) and correlated with clinicopathological variables and expression of epidermal growth factor receptor, Bcl-2, p53 and matrix metalloproteinase-9 (MMP-9). Tissue specimens from 178 cases of stage I-IIIA NSCLC surgically resected with curative intent were studied. The specimens were routinely processed, formalin-fixed and paraffin-embedded. TN was graded as either extensive or limited/absent by two independent observers; disagreements were resolved using a double-headed microscope. The degree of reproducibility was estimated by re-interpreting 40 randomly selected cases after a 4-month interval. Results Reproducibility was attained in 36/40 cases (kappa score = 0.8, P < 0.001). TN correlated with T-stage (P=0.001), platelet count (P=0.004) and p53 expression (P=0.031). Near-significant associations of TN with N-stage (P=0.063) and MMP-9 expression (P=0.058) were seen. No association was found with angiogenesis (P=0.98). On univariate (P=0.0016) and multivariate analysis (P=0.023), TN was prognostic. Conclusion These results indicate that extensive TN reflects an aggressive tumour phenotype in NSCLC and may improve the predictive power of the TNM staging system. The lack of association between TN and angiogenesis may be important, although these variables were not evaluated on serial sections.
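The reproducibility figure reported above (agreement in 36/40 re-read cases, kappa = 0.8) can be reproduced directly. The sketch below invents two rating vectors with exactly that agreement pattern; the labels are hypothetical.

```python
# Cohen's kappa for two readings of the same 40 cases. The ratings are
# constructed so that 36/40 agree, matching the abstract's figures.
from sklearn.metrics import cohen_kappa_score

first_reading  = ["extensive"] * 18 + ["limited/absent"] * 22
second_reading = (["extensive"] * 16 + ["limited/absent"] * 2
                  + ["limited/absent"] * 20 + ["extensive"] * 2)

kappa = cohen_kappa_score(first_reading, second_reading)
print(f"Cohen's kappa = {kappa:.2f}")      # ~0.8, substantial agreement
```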

Relevance: 10.00%

Abstract:

Interpreting acoustic recordings of the natural environment is an increasingly important technique for ecologists wishing to monitor terrestrial ecosystems. Technological advances make it possible to accumulate many more recordings than can be listened to or interpreted, necessitating automated assistance to identify elements in the soundscape. In this paper we examine the problem of estimating avian species richness by sampling from very long acoustic recordings. We work with data recorded under natural conditions, with all the attendant problems of undefined and unconstrained acoustic content (wind, rain, traffic, etc.) which can mask the content of interest (in our case, bird calls). We describe 14 acoustic indices calculated at one-minute resolution for the duration of a 24-hour recording. An acoustic index is a statistic that summarizes some aspect of the structure and distribution of acoustic energy and information in a recording. Some of the indices we calculate are standard (e.g. signal-to-noise ratio), some have been reported useful for the detection of bioacoustic activity (e.g. temporal and spectral entropies), and some are directed to avian sources (spectral persistence of whistles). We rank the one-minute segments of a 24-hour recording in descending order according to an "acoustic richness" score derived from a single index or a weighted combination of two or more. We describe combinations of indices which lead to more efficient estimates of species richness than random sampling from the same recording, where efficiency is defined as total species identified for given listening effort. Using random sampling, we achieve a 53% increase in species recognized over traditional field surveys, and an increase of 87% using combinations of indices to direct the sampling. We also demonstrate how combinations of the same indices can be used to detect long-duration acoustic events (such as heavy rain and cicada chorus) and to construct long-duration (24 h) spectrograms.
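To make the ranking idea concrete, here is a hedged sketch with two toy indices (a crude signal-to-noise ratio and temporal entropy) computed over synthetic one-minute clips. The index definitions, equal weighting, and synthetic whistles are assumptions for illustration; the paper's 14 indices are richer than this.

```python
# Rank one-minute segments by a combined "acoustic richness" score.
import numpy as np

rng = np.random.default_rng(2)
sr = 8000
t = np.arange(sr * 60) / sr                # one minute of samples

def make_minute(n_calls):
    """Synthesize a noisy minute containing n_calls short whistles."""
    x = rng.normal(0, 0.05, t.size)
    for s in rng.uniform(0, 58, n_calls):  # call onset times (s)
        seg = (t > s) & (t < s + 0.5)
        x[seg] += 0.3 * np.sin(2 * np.pi * 3000 * t[seg])
    return x

minutes = [make_minute(k) for k in (0, 1, 5, 0, 8, 2, 0, 3, 6, 1)]

def snr_db(x):
    env = np.abs(x)
    return 20 * np.log10(env.max() / (np.median(env) + 1e-12))

def temporal_entropy(x):
    p = x ** 2 / np.sum(x ** 2)            # normalized energy envelope
    return -np.sum(p * np.log2(p + 1e-12)) / np.log2(p.size)

snrs = np.array([snr_db(m) for m in minutes])
ents = np.array([temporal_entropy(m) for m in minutes])
# Equal-weight combination of z-scored indices (weighting is assumed);
# high SNR and low entropy both suggest bioacoustic activity.
score = (snrs - snrs.mean()) / snrs.std() - (ents - ents.mean()) / ents.std()
print("Suggested listening order:", np.argsort(score)[::-1])
```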

Relevance: 10.00%

Abstract:

OBJECTIVE To compare different reliability coefficients (exact agreement, and variations of the kappa coefficient: generalised, Cohen's, and the Prevalence-Adjusted Bias-Adjusted Kappa (PABAK)) for four physiotherapists conducting visual assessments of scapulae. DESIGN Inter-therapist reliability study. SETTING Research laboratory. PARTICIPANTS 30 individuals with no history of neck or shoulder pain and no obvious significant postural abnormalities were recruited. MAIN OUTCOME MEASURES Ratings of scapular posture were recorded in multiple biomechanical planes under four test conditions (at rest and under three isometric conditions) by four physiotherapists. RESULTS The magnitude of discrepancy between the two therapist pairs was 0.04 to 0.76 for Cohen's kappa and 0.00 to 0.86 for PABAK. In comparison, the generalised kappa provided a score between the two paired kappa coefficients. The differences between the mean generalised kappa coefficient and the mean Cohen's kappa (0.02), and between the mean generalised kappa and the mean PABAK (0.02), were negligible, but the magnitude of difference between the generalised kappa and the paired kappa within each plane and condition was substantial: 0.02 to 0.57 for Cohen's kappa and 0.02 to 0.63 for PABAK. CONCLUSIONS Calculating coefficients for therapist pairs alone may result in inconsistent findings. In contrast, the generalised kappa provided a coefficient close to the mean of the paired kappa coefficients. These findings support the assertion that generalised kappa may lead to a better representation of reliability between three or more raters, and that reliability studies calculating agreement between only two raters should be interpreted with caution. However, generalised kappa may mask more extreme cases of agreement (or disagreement) that paired comparisons may reveal.
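The gap between Cohen's kappa and PABAK comes from the prevalence and bias adjustment: for k rating categories, PABAK = (k*p_o - 1)/(k - 1), which reduces to 2*p_o - 1 for binary ratings. The sketch below contrasts the two for one invented rater pair with skewed prevalence.

```python
# Cohen's kappa vs PABAK for a single rater pair (invented ratings).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1])
rater_b = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1])

p_o = np.mean(rater_a == rater_b)          # observed agreement
pabak = 2 * p_o - 1                        # binary-case PABAK
print(f"Cohen's kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")  # ~0.58
print(f"PABAK         = {pabak:.2f}")                                # ~0.73
```

With skewed prevalence (mostly 1s here), chance agreement is high and Cohen's kappa is pulled down relative to PABAK, which is exactly the kind of pair-level discrepancy the study documents.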

Relevance: 10.00%

Abstract:

This study examined the prevalence of depressive symptoms and elucidated the causal pathway between socioeconomic status (SES) and depression in a community in the central region of Vietnam. The study used a combination of qualitative and quantitative research methods. In-depth interviews were conducted with two local psychiatric experts and ten residents for the qualitative research. A cross-sectional survey using a structured interview technique was implemented with 100 residents in the pilot quantitative survey. The Center for Epidemiological Studies-Depression Scale (CES-D) was applied to evaluate depressive symptoms (CES-D score over 21) and depression (CES-D score over 25). Ordinary least squares regression following the three steps of Baron and Kenny's framework was employed to test mediation models. There was a strong social gradient with respect to depressive symptoms. People with higher education levels reported fewer depressive symptoms (lower CES-D scores). Income was also inversely associated with depressive symptoms, but only for those in the bottom income quartile. Individuals with low-level and unstable occupations reported more depressive symptoms compared with the highest occupation group. Employment status showed the strongest gradient with respect to its impact on the burden of depressive symptoms compared with the other indicators of SES. Findings from this pilot study suggest a pattern of negative association between socioeconomic status and depression in Vietnamese adults.
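Baron and Kenny's three steps map directly onto three OLS fits. The sketch below uses synthetic data and a hypothetical mediator; the abstract does not name its mediator variables, so everything here is an assumption made for illustration.

```python
# Three-step Baron & Kenny mediation test with OLS (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 100                                    # mirrors the pilot's sample size
ses = rng.normal(size=n)                   # socioeconomic status (composite)
mediator = 0.6 * ses + rng.normal(size=n)  # hypothetical mediator
cesd = -0.5 * ses - 0.4 * mediator + rng.normal(size=n)  # CES-D score

def ols(y, *xs):
    return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit()

step1 = ols(cesd, ses)            # Step 1: SES must predict CES-D
step2 = ols(mediator, ses)        # Step 2: SES must predict the mediator
step3 = ols(cesd, ses, mediator)  # Step 3: mediator predicts CES-D with
                                  # SES controlled; SES coefficient shrinks
print("Step 1 SES coef:", round(step1.params[1], 3))
print("Step 2 SES coef:", round(step2.params[1], 3))
print("Step 3 SES coef:", round(step3.params[1], 3),
      "| mediator coef:", round(step3.params[2], 3))
```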

Relevance: 10.00%

Abstract:

In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously been solved only by integrating over 320-frame-long image sequences. The system is able to achieve 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
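A minimal sketch of the whole-image matching stage with cohort normalization is given below. The tolerant patch-verification stage is omitted and the image database is random vectors; this illustrates only the scoring idea, not the authors' implementation.

```python
# Low-resolution whole-image matching with a cohort-normalized score.
import numpy as np

rng = np.random.default_rng(4)
H, W, N = 8, 16, 50                        # tiny thumbnails, 50-place database
database = rng.random((N, H * W))
query = database[17] + rng.normal(0, 0.05, H * W)   # noisy revisit of place 17

d = np.linalg.norm(database - query, axis=1)        # raw match distances
# Cohort normalization: judge each candidate against the distribution of
# all candidates, so a true match must stand out from its cohort.
z = (d - d.mean()) / d.std()
best = int(np.argmin(z))
print(f"Best match: place {best}, cohort-normalized score {z[best]:.2f}")
```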

Relevance: 10.00%

Abstract:

Objectives To evaluate the feasibility, acceptability and effects of a Tai Chi and Qigong exercise programme in adults with elevated blood glucose. Design, Setting, and Participants A single group pre–post feasibility trial with 11 participants (3 male and 8 female; aged 42–65 years) with elevated blood glucose. Intervention Participants attended Tai Chi and Qigong exercise training for 1 to 1.5 h, 3 times per week for 12 weeks, and were encouraged to practise the exercises at home. Main Outcome Measures Indicators of metabolic syndrome (body mass index (BMI), waist circumference, blood pressure, fasting blood glucose, triglycerides, HDL-cholesterol); glucose control (HbA1c, fasting insulin and insulin resistance (HOMA)); health-related quality of life; stress and depressive symptoms. Results There was good adherence and high acceptability. There were significant improvements in four of the seven indicators of metabolic syndrome including BMI (mean difference −1.05, p<0.001), waist circumference (−2.80 cm, p<0.05), and systolic (−11.64 mm Hg, p<0.01) and diastolic blood pressure (−9.73 mm Hg, p<0.001), as well as in HbA1c (−0.32%, p<0.01), insulin resistance (−0.53, p<0.05), stress (−2.27, p<0.05), depressive symptoms (−3.60, p<0.05), and the SF-36 mental health summary score (5.13, p<0.05) and subscales for general health (19.00, p<0.01), mental health (10.55, p<0.01) and vitality (23.18, p<0.05). Conclusions The programme was feasible and acceptable and participants showed improvements in metabolic and psychological variables. A larger controlled trial is now needed to confirm these promising preliminary results.
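The abstract does not state which tests produced the p-values above, but a paired pre-post comparison of one outcome can be sketched as follows; the blood pressure numbers are invented to mirror the reported direction and size of change.

```python
# Paired pre-post comparison for one outcome (synthetic data).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
n = 11                                     # the trial's sample size
baseline = rng.normal(135, 8, n)           # hypothetical systolic BP (mm Hg)
week12 = baseline - 11.6 + rng.normal(0, 5, n)   # ~ -11.64 mm Hg reported

t, p = ttest_rel(baseline, week12)
print(f"Mean change = {np.mean(week12 - baseline):+.1f} mm Hg, p = {p:.4f}")
```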

Relevance: 10.00%

Abstract:

Different reputation models are used on the web to generate reputation values for products from users' review data. Most current reputation models use review ratings and neglect users' textual reviews, because text is more difficult to process. However, we argue that an overall reputation score for an item does not reflect the actual reputation of each of its features, which is why using users' textual reviews is necessary. In this work we introduce a new reputation model that defines a new aggregation method for users' opinions about product features extracted from review text. Our model uses a feature ontology to define the general features and sub-features of a product, and it reflects the frequencies of positive and negative opinions. We provide a case study to show how our results compare with other reputation models.
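A toy version of the feature-level aggregation might look like the sketch below. The feature names, the two-level ontology, and the opinion counts are all invented; the point is only that reputation is computed per feature (and rolled up through the ontology) from positive/negative opinion frequencies rather than from a single overall rating.

```python
# Feature-level reputation from extracted opinions (toy data).
from collections import defaultdict

# (feature, polarity) pairs as they might emerge from opinion extraction
opinions = ([("battery", +1)] * 40 + [("battery", -1)] * 10
            + [("screen", +1)] * 5 + [("screen", -1)] * 15)

# Toy ontology: sub-features roll up to a general feature.
ontology = {"battery": "hardware", "screen": "hardware"}

counts = defaultdict(lambda: [0, 0])       # feature -> [positive, negative]
for feature, polarity in opinions:
    idx = 0 if polarity > 0 else 1
    counts[feature][idx] += 1
    counts[ontology[feature]][idx] += 1    # propagate to the parent feature

for feature, (pos, neg) in counts.items():
    print(f"{feature:>8}: reputation = {pos / (pos + neg):.2f} "
          f"({pos}+ / {neg}-)")
```

Note how the rolled-up "hardware" score (0.64) hides that "screen" is rated poorly (0.25), which is the motivating problem for feature-level reputation.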

Relevance: 10.00%

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which gives an advantage to those involved in prediction markets. Often such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are pre-formed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting those which need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
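The first paper's content-analysis scoring can be sketched as a simple keyword-weighted triage function. The term lists, weights, and capacity figure below are assumptions for illustration; a real system would use trained coders or classifiers.

```python
# Toy content-analysis triage: score tweets for topical relevance and
# urgency, then surface only as many as responders can handle.
TOPIC_TERMS = {"flood", "evacuation", "bridge", "#floodsafety"}
URGENCY_TERMS = {"trapped", "help", "urgent", "injured", "now"}

def tweet_score(text: str) -> float:
    tokens = set(text.lower().split())     # naive tokenizer, fine for a sketch
    relevance = len(tokens & TOPIC_TERMS)
    urgency = len(tokens & URGENCY_TERMS)
    return relevance + 2.0 * urgency       # weight urgency more heavily

stream = [
    "bridge out near the river flood water rising now",
    "lovely sunset tonight",
    "family trapped on roof urgent help needed #floodsafety",
]
capacity = 2                               # expected responder capacity
for tweet in sorted(stream, key=tweet_score, reverse=True)[:capacity]:
    print(tweet)
```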

Relevance: 10.00%

Abstract:

Background Post-stroke recovery is demanding, and a growing number of studies have examined the effectiveness of self-management programs for stroke survivors. However, no systematic review has been conducted to summarize the effectiveness of theory-based stroke self-management programs. Objectives The aim is to present the best available research evidence on the effectiveness of theory-based self-management programs on community-dwelling stroke survivors' recovery. Inclusion criteria Types of participants All community-residing adults aged 18 years or above with a clinical diagnosis of stroke. Types of interventions Studies which examined the effectiveness of a self-management program underpinned by a theoretical or conceptual framework for community-dwelling stroke survivors. Types of studies Randomized controlled trials. Types of outcomes Primary outcomes included health-related quality of life and self-management behaviors. Secondary outcomes included physical (activities of daily living), psychological (self-efficacy, depressive symptoms), and social outcomes (community reintegration, perceived social support). Search Strategy A three-step approach was adopted to identify all relevant published and unpublished studies in English or Chinese. Methodological quality The methodological quality of the included studies was assessed using the Joanna Briggs Institute critical appraisal checklist for experimental studies. Data Collection A standardized JBI data extraction form was used. There was no disagreement between the two reviewers on the data extraction results. Data Synthesis Two studies gave incomplete details about the number of participants and the results, which made it impossible to perform a meta-analysis; a narrative summary of the effectiveness of stroke self-management programs is presented instead. Results Three studies were included. The key methodological concerns were insufficient information about random assignment and allocation concealment, unreported reliability and validity of the measuring instruments, absence of intention-to-treat analysis, and small sample sizes. The three programs were based on the Stanford Chronic Disease Self-Management Program and underpinned by the principles of self-efficacy. One study showed improvement in the intervention group in family and social roles three months after program completion, and in work productivity at six months, as measured by the Stroke Specific Quality of Life Scale (SSQOL). The intervention group also had an increased mean self-efficacy score in communicating with physicians six months after program completion. The mean changes from baseline in these variables were significantly different from the control group. No significant difference was found in time spent in aerobic exercise between the intervention and control groups at three and six months after program completion. Another study, using the SSQOL, showed a significant interaction effect of treatment and time on family roles, fine motor tasks, self-care, and work productivity; however, there was no significant interaction of treatment and time on self-efficacy. The third study showed improvement in quality of life, community participation, and depressive symptoms among participants receiving the stroke self-management program, the Stanford Chronic Disease Self-Management Program, or usual care six months after program completion; however, there was no significant difference between the groups.
Conclusions There is inconclusive evidence about the effectiveness of theory-based stroke self-management programs on community-dwelling stroke survivors' recovery. However, the preliminary evidence suggests potential benefits in improving stroke survivors' quality of life and self-efficacy.

Relevance: 10.00%

Abstract:

BACKGROUND: Variations in 'slope' (how steep or flat the ground is) may be good for health. As walking up hills is a physiologically vigorous physical activity and can contribute to weight control, greater neighbourhood slopes may provide a protective barrier to weight gain and help prevent Type 2 diabetes onset. We explored whether living in 'hilly' neighbourhoods was associated with diabetes prevalence among the Australian adult population. METHODS: Participants (≥25 years; n=11,406) who completed the Western Australian Health and Wellbeing Surveillance System Survey (2003-2009) were asked whether or not they had medically diagnosed diabetes. Geographic Information Systems (GIS) software was used to calculate a neighbourhood mean slope score, and other built environment measures, at 1600 m around each participant's home. Logistic regression models were used to predict the odds of self-reported diabetes after progressive adjustment for individual measures (i.e., age, sex), socioeconomic status (i.e., education, income), built environment, destinations, nutrition, and amount of walking. RESULTS: After full adjustment, the odds of self-reported diabetes were 0.72 (95% CI 0.55-0.95) and 0.52 (95% CI 0.39-0.69) for adults living in neighbourhoods with moderate and higher levels of slope, respectively, compared with adults living in neighbourhoods with the lowest levels of slope. The odds of having diabetes were 13% lower (odds ratio 0.87; 95% CI 0.80-0.94) for each one percent increase in mean slope. CONCLUSIONS: Living in a hilly neighbourhood may be protective against diabetes onset, or this finding may be spurious. Nevertheless, the results are promising and have implications for future research and for the practice of flattening land in new housing developments.
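The headline odds ratio (0.87 per one percent of mean slope) comes from a covariate-adjusted logistic regression; a hedged sketch of that kind of model on synthetic data is below. The coefficients and sample are invented, chosen so the fitted odds ratio lands near the reported value.

```python
# Logistic regression of self-reported diabetes on neighbourhood slope.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 5000
slope = rng.uniform(0, 10, n)              # mean slope (%) within 1600 m
age = rng.uniform(25, 80, n)
# Generate outcomes with a true OR of 0.87 per one-percent slope.
logit = -4 + 0.04 * (age - 50) + np.log(0.87) * slope
diabetes = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([slope, age]))
fit = sm.Logit(diabetes, X).fit(disp=False)
print(f"OR per 1% slope = {np.exp(fit.params[1]):.2f}")   # ~0.87
```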

Relevance: 10.00%

Abstract:

We explored the impact of neighborhood walkability on young adults, early-middle adults, middle-aged adults, and older adults' walking across different neighborhood buffers. Participants completed the Western Australian Health and Wellbeing Surveillance System Survey (2003–2009) and were allocated a neighborhood walkability score at 200 m, 400 m, 800 m, and 1600 m around their home. We found little difference in strength of associations across neighborhood size buffers for all life stages. We conclude that neighborhood walkability supports more walking regardless of adult life stage and is relevant for small (e.g., 200 m) and larger (e.g., 1600 m) neighborhood buffers.

Relevance: 10.00%

Abstract:

Research Statement: An urban film produced by Luke Harrison Mitchell Benham, Sharlene Anderson, Tristan Clark. RIVE NOIR explores the film noir tradition, shot on location in a dark urban space between high-rises and the river, sheltered by a highway. With an original score and striking cinematography, Rive Noir radically transforms the abandoned river's edge through the production of an amplified reality ordinarily unseen in the Northbank. The work, produced under my supervision, was selected to appear in the Expanded Architecture Research Group's International Architecture Film Festival and Panel Discussion in Sydney: The University of Sydney and Carriageworks Performance Space, 06 November 2011. The QUT School of Design research submission was selected alongside exhibits by the AA School of Architecture, London; The Bartlett School of Architecture, London; University of the Arts, London; Aarhus School of Architecture, Denmark; Dublin as a Cinematic City, Ireland; Design Lab Screen Studio, Australia; and Sona Cinecity, The University of Melbourne. The exhibit included not only the screening of the film but also the design project that derived from and extended the aesthetics of the urban film. The urban proposal and architectural intervention that followed the film was subsequently published in the Brisbane Times after winning first place in The Future of Brisbane architecture competition, which demonstrates the impact of the research project as a whole. EXPANDED ARCHITECTURE 2011 (6 November, Architecture Film Night + Panel Discussion @ Performance Space CarriageWorks) was Sydney's first international architectural film festival, featuring over 40 architectural films by local and international artists, filmmakers and architects, followed by a panel discussion of esteemed academics and artists working in the field of architectural film.

Relevance: 10.00%

Abstract:

Background: Appropriate disposition of emergency department (ED) patients with chest pain is dependent on clinical evaluation of risk. A number of chest pain risk stratification tools have been proposed. The aim of this study was to compare the predictive performance for major adverse cardiac events (MACE) of risk assessment tools from the National Heart Foundation of Australia (HFA), the Goldman risk score and the Thrombolysis in Myocardial Infarction risk score (TIMI RS). Methods: This prospective observational study evaluated ED patients aged ≥30 years with non-traumatic chest pain for which no definitive non-ischemic cause was found. Data collected included demographic and clinical information, investigation findings and the occurrence of MACE by 30 days. The outcome of interest was the comparative predictive performance of the risk tools for MACE at 30 days, analyzed using receiver operating characteristic (ROC) curves. Results: Two hundred eighty-one patients were studied; the rate of MACE was 14.1%. The area under the curve (AUC) of the HFA, TIMI RS and Goldman tools for the endpoint of MACE was 0.54, 0.71 and 0.67, respectively, with the difference between the tools in predictive ability for MACE being highly significant [χ2(3) = 67.21, N = 276, p < 0.0001]. Conclusion: The TIMI RS and Goldman tools performed better than the HFA in this undifferentiated ED chest pain population, but the selection of cutoffs balancing sensitivity and specificity was problematic. There is an urgent need for validated risk stratification tools specific to the ED chest pain population.
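Comparing tools by AUC is straightforward to sketch. The scores below are simulated so that one tool discriminates better than another; none of this is the study's data, and the formal significance test the authors report is omitted for brevity.

```python
# ROC AUC comparison of two simulated risk scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 281
mace = rng.binomial(1, 0.141, n)           # 30-day MACE at the reported rate
timi_like = 1.0 * mace + rng.normal(0, 1.2, n)   # more discriminative tool
hfa_like = 0.1 * mace + rng.normal(0, 1.2, n)    # barely discriminative tool

print(f"AUC (TIMI-like): {roc_auc_score(mace, timi_like):.2f}")
print(f"AUC (HFA-like):  {roc_auc_score(mace, hfa_like):.2f}")
```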