897 results for Score Normalisation
Abstract:
This study examined the prevalence of depressive symptoms and elucidated the causal pathway between socioeconomic status and depression in a community in the central region of Vietnam. The study used a combination of qualitative and quantitative research methods. In-depth interviews were conducted with two local psychiatric experts and ten residents for the qualitative research. A cross-sectional survey using a structured interview technique was implemented with 100 residents in the pilot quantitative survey. The Center for Epidemiological Studies-Depression Scale (CES-D) was used to evaluate depressive symptoms (CES-D score over 21) and depression (CES-D score over 25). Ordinary least squares regression following the three steps of Baron and Kenny's framework was employed to test the mediation models. There was a strong social gradient with respect to depressive symptoms. People with higher education levels reported fewer depressive symptoms (lower CES-D scores). Income was also inversely associated with depressive symptoms, but only for those in the bottom income quartile. Individuals in low-level and unstable occupations reported more depressive symptoms than the highest occupation group. Employment status showed the strongest gradient with respect to its impact on the burden of depressive symptoms compared with other indicators of SES. Findings from this pilot study suggest a pattern of negative association between socioeconomic status and depression in Vietnamese adults.
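The mediation analysis mentioned above follows Baron and Kenny's three OLS regression steps. The sketch below illustrates that general pattern with statsmodels on synthetic data; the variable names (ses, mediator, cesd) and coefficients are illustrative assumptions, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100                                    # pilot-sized sample, purely illustrative
ses = rng.normal(size=n)                   # socioeconomic status (standardised)
mediator = 0.5 * ses + rng.normal(size=n)  # hypothetical mediating variable
cesd = -0.6 * ses + 0.4 * mediator + rng.normal(size=n)  # CES-D score

# Step 1: outcome regressed on predictor (the total effect must be significant)
step1 = sm.OLS(cesd, sm.add_constant(ses)).fit()
# Step 2: mediator regressed on predictor
step2 = sm.OLS(mediator, sm.add_constant(ses)).fit()
# Step 3: outcome regressed on predictor and mediator; mediation is suggested when
# the predictor's coefficient shrinks (partial) or loses significance (full)
step3 = sm.OLS(cesd, sm.add_constant(np.column_stack([ses, mediator]))).fit()

print(step1.params[1], step3.params[1])    # compare total vs. mediator-adjusted SES effect
```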
Abstract:
In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines intolerant but fast low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalisation score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has only previously been solved by integrating over 320-frame-long image sequences. The system is able to achieve 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
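The cohort normalisation score mentioned above can be illustrated with a generic z-score-style normalisation of a candidate match score against the remaining candidates; this is a sketch of the general idea, not the authors' exact formulation.

```python
import numpy as np

def cohort_normalised_score(scores: np.ndarray, candidate: int) -> float:
    """Normalise one candidate's raw match score against the cohort of all other
    candidates' scores; higher values indicate a match that stands out from the
    distribution of non-matching places."""
    cohort = np.delete(scores, candidate)
    return float((scores[candidate] - cohort.mean()) / (cohort.std() + 1e-9))

# Example: raw whole-image matching scores for five candidate places
raw = np.array([0.31, 0.29, 0.78, 0.33, 0.30])
print(cohort_normalised_score(raw, candidate=int(raw.argmax())))
```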
Abstract:
Objectives To evaluate the feasibility, acceptability and effects of a Tai Chi and Qigong exercise programme in adults with elevated blood glucose. Design, Setting, and Participants A single group pre–post feasibility trial with 11 participants (3 male and 8 female; aged 42–65 years) with elevated blood glucose. Intervention Participants attended Tai Chi and Qigong exercise training for 1 to 1.5 h, 3 times per week for 12 weeks, and were encouraged to practise the exercises at home. Main Outcome Measures Indicators of metabolic syndrome (body mass index (BMI), waist circumference, blood pressure, fasting blood glucose, triglycerides, HDL-cholesterol); glucose control (HbA1c, fasting insulin and insulin resistance (HOMA)); health-related quality of life; stress and depressive symptoms. Results There was good adherence and high acceptability. There were significant improvements in four of the seven indicators of metabolic syndrome including BMI (mean difference −1.05, p<0.001), waist circumference (−2.80 cm, p<0.05), and systolic (−11.64 mm Hg, p<0.01) and diastolic blood pressure (−9.73 mm Hg, p<0.001), as well as in HbA1c (−0.32%, p<0.01), insulin resistance (−0.53, p<0.05), stress (−2.27, p<0.05), depressive symptoms (−3.60, p<0.05), and the SF-36 mental health summary score (5.13, p<0.05) and subscales for general health (19.00, p<0.01), mental health (10.55, p<0.01) and vitality (23.18, p<0.05). Conclusions The programme was feasible and acceptable and participants showed improvements in metabolic and psychological variables. A larger controlled trial is now needed to confirm these promising preliminary results.
Abstract:
Different reputation models are used on the web to generate reputation values for products from users' review data. Most current reputation models use review ratings and neglect users' textual reviews, because text is more difficult to process. However, we argue that the overall reputation score for an item does not reflect the actual reputation of all of its features, which is why the use of users' textual reviews is necessary. In our work we introduce a new reputation model that defines a new aggregation method for users' opinions about product features extracted from review text. Our model uses a feature ontology to define the general features and sub-features of a product, and it reflects the frequencies of positive and negative opinions. We provide a case study to show how our results compare with other reputation models.
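To illustrate the kind of aggregation described, the sketch below rolls extracted positive/negative opinion counts up a small feature ontology into per-feature reputation scores; the ontology, counts, and mention-count weighting are assumptions for illustration, not the paper's exact method.

```python
# Hypothetical feature ontology: general feature -> sub-features
ontology = {"camera": ["lens", "flash"], "battery": ["charging", "life"]}

# Extracted opinion counts per sub-feature: (positive, negative)
opinions = {"lens": (40, 10), "flash": (5, 15), "charging": (20, 5), "life": (30, 30)}

def feature_score(pos: int, neg: int) -> float:
    """Frequency-based score in [0, 1]; 0.5 when no opinions were extracted."""
    total = pos + neg
    return pos / total if total else 0.5

def general_feature_reputation(feature: str) -> float:
    """Aggregate sub-feature scores, weighted by how often each sub-feature was mentioned."""
    subs = ontology[feature]
    weights = [sum(opinions[s]) for s in subs]
    scores = [feature_score(*opinions[s]) for s in subs]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

for f in ontology:
    print(f, round(general_feature_reputation(f), 2))
```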
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
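The content-analysis scoring described for the first paper can be sketched as a simple keyword-plus-authority triage score; the term lists, weights, and account names below are illustrative assumptions, not the panel's actual system.

```python
RELEVANCE_TERMS = {"flood": 2.0, "evacuate": 3.0, "trapped": 4.0, "#qldfloods": 2.5}
URGENT_TERMS = {"help", "urgent", "now", "emergency"}
AUTHORITATIVE_USERS = {"qpsmedia", "bomqld"}        # hypothetical responder accounts

def tweet_score(text: str, author: str) -> float:
    """Score a tweet for responder triage: keyword relevance + urgency + author authority."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(RELEVANCE_TERMS.get(w, 0.0) for w in words)
    score += 2.0 * sum(1 for w in words if w in URGENT_TERMS)
    if author.lower() in AUTHORITATIVE_USERS:
        score *= 1.5                                # boost known authoritative sources
    return score

stream = [("Please help, family trapped by flood near the bridge", "resident42"),
          ("Lovely sunny day at the beach", "tourist99")]
triaged = sorted(stream, key=lambda t: tweet_score(*t), reverse=True)
print(triaged[0][0])                                # highest-priority tweet surfaces first
```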
Abstract:
Background Post-stroke recovery is demanding. An increasing number of studies have examined the effectiveness of self-management programs for stroke survivors. However, no systematic review has been conducted to summarize the effectiveness of theory-based stroke self-management programs. Objectives The aim is to present the best available research evidence on the effectiveness of theory-based self-management programs for community-dwelling stroke survivors' recovery. Inclusion criteria Types of participants All community-residing adults aged 18 years or above with a clinical diagnosis of stroke. Types of interventions Studies that examined the effectiveness of a self-management program underpinned by a theoretical or conceptual framework for community-dwelling stroke survivors. Types of studies Randomized controlled trials. Types of outcomes Primary outcomes included health-related quality of life and self-management behaviors. Secondary outcomes included physical (activities of daily living), psychological (self-efficacy, depressive symptoms), and social outcomes (community reintegration, perceived social support). Search Strategy A three-step approach was adopted to identify all relevant published and unpublished studies in English or Chinese. Methodological quality The methodological quality of the included studies was assessed using the Joanna Briggs Institute critical appraisal checklist for experimental studies. Data Collection A standardized JBI data extraction form was used. There was no disagreement between the two reviewers on the data extraction results. Data Synthesis Two studies provided incomplete details about the number of participants and their results, which made it impossible to perform a meta-analysis. A narrative summary of the effectiveness of stroke self-management programs is presented. Results Three studies were included. The key concerns in methodological quality included insufficient information about random assignment, allocation concealment, and the reliability and validity of the measuring instruments, the absence of intention-to-treat analysis, and small sample sizes. The three programs were designed based on the Stanford Chronic Disease Self-management program and were underpinned by the principles of self-efficacy. One study showed improvement in the intervention group in family and social roles three months after program completion, and in work productivity at six months, as measured by the Stroke Specific Quality of Life Scale (SSQOL). The intervention group also had an increased mean self-efficacy score in communicating with physicians six months after program completion. The mean changes from baseline in these variables were significantly different from the control group. No significant difference was found in time spent in aerobic exercise between the intervention and control groups at three and six months after program completion. Another study, using the SSQOL, showed a significant interaction effect of treatment and time on family roles, fine motor tasks, self-care, and work productivity. However, there was no significant interaction of treatment and time on self-efficacy. The third study showed improvement in quality of life, community participation, and depressive symptoms among the participants receiving the stroke self-management program, the Stanford Chronic Disease Self-management program, or usual care six months after program completion. However, there was no significant difference between the groups.
Conclusions There is inconclusive evidence about the effectiveness of theory-based stroke self-management programs on community-dwelling stroke survivors' recovery. However, the preliminary evidence suggests potential benefits in improving stroke survivors' quality of life and self-efficacy.
Abstract:
This paper proposes techniques to improve the performance of i-vector based speaker verification systems when only short utterances are available. Short-length utterance i-vectors vary with speaker, session variations, and the phonetic content of the utterance. Well-established methods such as linear discriminant analysis (LDA), source-normalized LDA (SN-LDA) and within-class covariance normalisation (WCCN) exist for compensating the session variation, but we have identified the variability introduced by phonetic content due to utterance variation as an additional source of degradation when short-duration utterances are used. To compensate for utterance variations in short-utterance i-vector speaker verification systems using cosine similarity scoring (CSS), we have introduced a short utterance variance normalization (SUVN) technique and a short utterance variance (SUV) modelling approach at the i-vector feature level. A combination of SUVN with LDA and SN-LDA is proposed to compensate for the session and utterance variations and is shown to provide an improvement in performance over the traditional approach of using LDA and/or SN-LDA followed by WCCN. An alternative approach is also introduced, using a probabilistic linear discriminant analysis (PLDA) approach to directly model the SUV. The combination of SUVN, LDA and SN-LDA followed by SUV PLDA modelling provides an improvement over the baseline PLDA approach. We also show that for this combination of techniques, the utterance variation information needs to be artificially added to full-length i-vectors for PLDA modelling.
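Cosine similarity scoring between channel-compensated i-vectors, the scoring approach the proposed techniques build on, can be sketched as follows; the LDA/SN-LDA and WCCN projection matrices are assumed to be pre-trained elsewhere, and the random data is purely illustrative.

```python
import numpy as np

def css_score(w_target: np.ndarray, w_test: np.ndarray,
              A_lda: np.ndarray, B_wccn: np.ndarray) -> float:
    """Cosine similarity score between two i-vectors after an LDA (or SN-LDA)
    projection followed by WCCN; both matrices are assumed pre-trained."""
    proj = B_wccn @ A_lda
    a, b = proj @ w_target, proj @ w_test
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example with random "pre-trained" projections and 400-dimensional i-vectors
rng = np.random.default_rng(1)
A, B = rng.normal(size=(150, 400)), rng.normal(size=(150, 150))
print(css_score(rng.normal(size=400), rng.normal(size=400), A, B))
```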
Abstract:
A security system based on the recognition of the iris of human eyes using the wavelet transform is presented. The zero-crossings of the wavelet transform are used to extract the unique features obtained from the grey-level profiles of the iris. The recognition process is performed in two stages. The first stage consists of building a one-dimensional representation of the grey-level profiles of the iris, followed by obtaining the wavelet transform zero-crossings of the resulting representation. The second stage is the matching procedure for iris recognition. The proposed approach uses only a few selected intermediate resolution levels for matching, thus making it computationally efficient as well as less sensitive to noise and quantisation errors. A normalisation process is implemented to compensate for size variations due to possible changes in the camera-to-face distance. The technique has been tested on real images in both noise-free and noisy conditions. The technique is being investigated for real-time implementation, as a stand-alone system, for access control to high-security areas.
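A rough sketch of extracting zero-crossing signatures from a one-dimensional grey-level profile at a few intermediate resolution levels is shown below; the difference-of-smoothings transform stands in for a proper dyadic wavelet transform and is an assumption, not the paper's implementation.

```python
import numpy as np

def zero_crossings(x: np.ndarray) -> np.ndarray:
    """Indices where the signal changes sign."""
    return np.where(np.signbit(x[:-1]) != np.signbit(x[1:]))[0]

def wavelet_zero_crossing_signature(profile: np.ndarray, levels=(2, 3, 4)) -> dict:
    """Keep only the zero-crossing positions of a band-pass detail signal at a few
    intermediate dyadic scales (a crude stand-in for the dyadic wavelet transform)."""
    signature = {}
    for j in levels:
        scale = 2 ** j
        kernel = np.ones(scale) / scale
        smooth = np.convolve(profile, kernel, mode="same")
        detail = profile - smooth          # crude band-pass detail at this scale
        signature[j] = zero_crossings(detail)
    return signature

# Synthetic iris grey-level profile
profile = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.1 * np.random.default_rng(0).normal(size=512)
sig = wavelet_zero_crossing_signature(profile)
print({j: len(z) for j, z in sig.items()})  # number of zero-crossings per level
```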
Abstract:
BACKGROUND: Variations in 'slope' (how steep or flat the ground is) may be good for health. As walking up hills is a physiologically vigorous physical activity and can contribute to weight control, greater neighbourhood slopes may provide a protective barrier to weight gain and help prevent Type 2 diabetes onset. We explored whether living in 'hilly' neighbourhoods was associated with diabetes prevalence among the Australian adult population. METHODS: Participants (≥25 years; n=11,406) who completed the Western Australian Health and Wellbeing Surveillance System Survey (2003-2009) were asked whether or not they had medically diagnosed diabetes. Geographic Information Systems (GIS) software was used to calculate a neighbourhood mean slope score and other built environment measures within 1600 m around each participant's home. Logistic regression models were used to predict the odds of self-reported diabetes after progressive adjustment for individual measures (i.e., age, sex), socioeconomic status (i.e., education, income), built environment, destinations, nutrition, and amount of walking. RESULTS: After full adjustment, the odds of self-reported diabetes were 0.72 (95% CI 0.55-0.95) and 0.52 (95% CI 0.39-0.69) for adults living in neighbourhoods with moderate and higher levels of slope, respectively, compared with adults living in neighbourhoods with the lowest levels of slope. The odds of having diabetes were 13% lower (odds ratio 0.87; 95% CI 0.80-0.94) for each one percent increase in mean slope. CONCLUSIONS: Living in a hilly neighbourhood may be protective against diabetes onset, or this finding may be spurious. Nevertheless, the results are promising and have implications for future research and for the practice of flattening land in new housing developments.
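The progressive-adjustment logistic regression described above follows a standard pattern; a minimal sketch with the statsmodels formula API is shown below, where the data frame and column names are hypothetical placeholders rather than the survey's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: self-reported diabetes, neighbourhood mean slope,
# and individual covariates (all columns are illustrative only)
rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "diabetes": rng.binomial(1, 0.07, n),
    "mean_slope": rng.uniform(0, 10, n),
    "age": rng.integers(25, 85, n),
    "sex": rng.integers(0, 2, n),
})

# Model 1: unadjusted; Model 2: adjusted for individual measures
m1 = smf.logit("diabetes ~ mean_slope", data=df).fit(disp=False)
m2 = smf.logit("diabetes ~ mean_slope + age + C(sex)", data=df).fit(disp=False)

# Odds ratio per one-unit (one percent) increase in mean slope
print(np.exp(m2.params["mean_slope"]))
```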
Abstract:
We explored the impact of neighborhood walkability on young adults, early-middle adults, middle-aged adults, and older adults' walking across different neighborhood buffers. Participants completed the Western Australian Health and Wellbeing Surveillance System Survey (2003–2009) and were allocated a neighborhood walkability score at 200 m, 400 m, 800 m, and 1600 m around their home. We found little difference in strength of associations across neighborhood size buffers for all life stages. We conclude that neighborhood walkability supports more walking regardless of adult life stage and is relevant for small (e.g., 200 m) and larger (e.g., 1600 m) neighborhood buffers.
Abstract:
Research Statement: An urban film produced by Luke Harrison Mitchell Benham, Sharlene Anderson, Tristan Clark. RIVE NOIR explores the film noir tradition, shot on location in a dark urban space between high-rises and the river, sheltered by a highway. With an original score and striking cinematography, Rive Noir radically transforms the abandoned river's edge through the production of an amplified reality ordinarily unseen in the Northbank. The work produced under my supervision was selected to appear in the Expanded Architecture Research Group's International Architecture Film Festival and Panel Discussion in Sydney: The University of Sydney and Carriageworks Performance Space, 06 November 2011. The QUT School of Design research submission was selected alongside exhibits by the AA School of Architecture, London; The Bartlett School of Architecture, London; University of the Arts, London; Aarhus School of Architecture, Denmark; Dublin as a Cinematic City, Ireland; Design Lab Screen Studio, Australia; and Sona Cinecity, The University of Melbourne. The exhibit included not only the screening of the film but also the design project that derived from and extended the aesthetics of the urban film. The urban proposal and architectural intervention that followed the film was subsequently published in the Brisbane Times, after the urban proposal won first place in The Future of Brisbane architecture competition, which demonstrates the impact of the research project as a whole. EXPANDED ARCHITECTURE 2011 - 6th November Architecture Film Night + Panel Discussion @ Performance Space CarriageWorks was Sydney's first International Architectural Film Festival, with over 40 architectural films by local and international artists, film makers and architects. It was followed by a panel discussion of esteemed academics and artists working in the field of architectural film.
Abstract:
Background: Appropriate disposition of emergency department (ED) patients with chest pain is dependent on clinical evaluation of risk. A number of chest pain risk stratification tools have been proposed. The aim of this study was to compare the predictive performance for major adverse cardiac events (MACE) of risk assessment tools from the National Heart Foundation of Australia (HFA), the Goldman risk score and the Thrombolysis in Myocardial Infarction risk score (TIMI RS). Methods: This prospective observational study evaluated ED patients aged ≥30 years with non-traumatic chest pain for which no definitive non-ischemic cause was found. Data collected included demographic and clinical information, investigation findings and the occurrence of MACE by 30 days. The outcome of interest was the comparative predictive performance of the risk tools for MACE at 30 days, as analyzed by receiver operating characteristic (ROC) curves. Results: Two hundred eighty-one patients were studied; the rate of MACE was 14.1%. The area under the curve (AUC) of the HFA, TIMI RS and Goldman tools for the endpoint of MACE was 0.54, 0.71 and 0.67, respectively, with the difference between the tools in predictive ability for MACE being highly significant [χ2(3) = 67.21, N = 276, p < 0.0001]. Conclusion: The TIMI RS and Goldman tools performed better than the HFA in this undifferentiated ED chest pain population, but selection of cutoffs balancing sensitivity and specificity was problematic. There is an urgent need for validated risk stratification tools specific to the ED chest pain population.
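Comparing the tools' discrimination by area under the ROC curve, as reported above, can be sketched with scikit-learn; the outcome and score arrays below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 281
mace = rng.binomial(1, 0.141, n)             # 30-day MACE outcome (illustrative)

# Hypothetical risk scores assigned by each tool to the same patients
hfa = rng.normal(size=n) + 0.1 * mace
timi = rng.normal(size=n) + 0.8 * mace
goldman = rng.normal(size=n) + 0.6 * mace

for name, score in [("HFA", hfa), ("TIMI RS", timi), ("Goldman", goldman)]:
    print(name, round(roc_auc_score(mace, score), 2))   # AUC per tool
```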
Abstract:
AIMS: Recent studies on corneal markers have advocated corneal nerve fibre length as the most important measure of diabetic peripheral neuropathy. The aim of this study was to determine if standardizing corneal nerve fibre length for tortuosity increases its association with other measures of diabetic peripheral neuropathy. METHODS: Two hundred and thirty-one individuals with diabetes with either predominantly mild or absent neuropathic changes and 61 control subjects underwent evaluation of diabetic neuropathy symptom score, neuropathy disability score, testing with 10-g monofilament, quantitative sensory testing (warm, cold, vibration detection) and nerve conduction studies. Corneal nerve fibre length and corneal nerve fibre tortuosity were measured using corneal confocal microscopy. A tortuosity-standardised corneal nerve fibre length variable was generated by dividing corneal nerve fibre length by corneal nerve fibre tortuosity. Differences in corneal nerve morphology between individuals with and without diabetic peripheral neuropathy and control subjects were determined and associations were estimated between corneal morphology and established tests of, and risk factors for, diabetic peripheral neuropathy. RESULTS: The tortuosity-standardised corneal nerve fibre length variable was better than corneal nerve fibre length in demonstrating differences between individuals with diabetes, with and without neuropathy (tortuosity-standardised corneal nerve fibre length variable: 70.5 ± 27.3 vs. 84.9 ± 28.7, P < 0.001, receiver operating characteristic area under the curve = 0.67; corneal nerve fibre length: 15.9 ± 6.9 vs. 18.4 ± 6.2 mm/mm², P = 0.004, receiver operating characteristic area under the curve = 0.64). Furthermore, the tortuosity-standardised corneal nerve fibre length variable demonstrated a significant difference between the control subjects and individuals with diabetes, without neuropathy, while corneal nerve fibre length did not (tortuosity-standardised corneal nerve fibre length variable: 94.3 ± 27.1 vs. 84.9 ± 28.7, P = 0.028; corneal nerve fibre length: 20.1 ± 6.3 vs. 18.4 ± 6.2 mm/mm², P = 0.084). Correlations between corneal nerve fibre length and established measures of neuropathy and risk factors for neuropathy were higher when a correction was made for the nerve tortuosity. CONCLUSIONS: Standardizing corneal nerve fibre length for tortuosity enhances the ability to differentiate individuals with diabetes, with and without neuropathy.
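The tortuosity-standardised measure described above is simply corneal nerve fibre length divided by corneal nerve fibre tortuosity; a minimal sketch with hypothetical per-patient values follows.

```python
import numpy as np

cnfl = np.array([15.2, 18.9, 21.4])         # corneal nerve fibre length, mm/mm² (illustrative)
tortuosity = np.array([0.21, 0.19, 0.24])   # corneal nerve fibre tortuosity (illustrative)

cnfl_standardised = cnfl / tortuosity        # tortuosity-standardised CNFL
print(cnfl_standardised)
```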
Abstract:
Background Dietary diversity is recognized as a key element of a high-quality diet. However, diets that offer a greater variety of energy-dense foods could increase food intake and body weight. The aim of this study was to explore the association of dietary diversity with obesity in Sri Lankan adults. Methods Six hundred adults aged >18 years were randomly selected using a multi-stage stratified sample. Dietary intake assessment was undertaken by a 24-hour dietary recall. Three dietary scores, the Dietary Diversity Score (DDS), the Dietary Diversity Score with Portions (DDSP) and the Food Variety Score (FVS), were calculated. Body mass index (BMI) ≥ 25 kg·m⁻² was defined as obese, and Asian waist circumference cut-offs were used to diagnose abdominal obesity. Results Mean DDS for men and women was 6.23 and 6.50 (p=0.06), while DDSP was 3.26 and 3.17, respectively (p=0.24). FVS values were significantly different between men and women (9.55 vs 10.24; p=0.002). Dietary diversity among Sri Lankan adults was significantly associated with gender, residency, ethnicity and education level, but not with diabetes status. As dietary scores increased, the percentage of consumption increased in most food groups except starches. Obese and abdominally obese adults had the highest DDS compared with non-obese groups (p<0.05). With increased dietary diversity, BMI, waist circumference and energy consumption increased significantly in this population. Conclusion Our data suggest that dietary diversity is positively associated with several socio-demographic characteristics and with obesity among Sri Lankan adults. Although high dietary diversity is widely recommended, public health messages should emphasize improving dietary diversity in selected food items.
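A Dietary Diversity Score of this kind is typically computed as a count of distinct food groups consumed in the 24-hour recall; the sketch below assumes a simplified food-group mapping rather than the paper's exact scoring rules.

```python
# Simplified item -> food group mapping (illustrative assumption)
FOOD_GROUPS = {
    "rice": "starches", "bread": "starches", "lentils": "pulses",
    "chicken": "meat", "fish": "fish", "milk": "dairy",
    "banana": "fruit", "spinach": "vegetables", "coconut oil": "fats",
}

def dietary_diversity_score(recall_items: list) -> int:
    """Count the number of distinct food groups represented in a 24-hour recall."""
    groups = {FOOD_GROUPS[item] for item in recall_items if item in FOOD_GROUPS}
    return len(groups)

print(dietary_diversity_score(["rice", "fish", "spinach", "banana", "milk"]))  # -> 5
```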
Abstract:
Importance Approximately one-third of patients with peripheral artery disease experience intermittent claudication, with consequent loss of quality of life. Objective To determine the efficacy of ramipril for improving walking ability, patient-perceived walking performance, and quality of life in patients with claudication. Design, Setting, and Patients Randomized, double-blind, placebo-controlled trial conducted among 212 patients with peripheral artery disease (mean age, 65.5 [SD, 6.2] years), initiated in May 2008 and completed in August 2011 and conducted at 3 hospitals in Australia. Intervention Patients were randomized to receive 10 mg/d of ramipril (n = 106) or matching placebo (n = 106) for 24 weeks. Main Outcome Measures Maximum and pain-free walking times were recorded during a standard treadmill test. The Walking Impairment Questionnaire (WIQ) and Short-Form 36 Health Survey (SF-36) were used to assess walking ability and quality of life, respectively. Results At 6 months, relative to placebo, ramipril was associated with a 75-second (95% CI, 60-89 seconds) increase in mean pain-free walking time (P < .001) and a 255-second (95% CI, 215-295 seconds) increase in maximum walking time (P < .001). Relative to placebo, ramipril improved the WIQ median distance score by 13.8 (Hodges-Lehmann 95% CI, 12.2-15.5), speed score by 13.3 (95% CI, 11.9-15.2), and stair climbing score by 25.2 (95% CI, 25.1-29.4) (P < .001 for all). The overall SF-36 median Physical Component Summary score improved by 8.2 (Hodges-Lehmann 95% CI, 3.6-11.4; P = .02) in the ramipril group relative to placebo. Ramipril did not affect the overall SF-36 median Mental Component Summary score. Conclusions and Relevance Among patients with intermittent claudication, 24-week treatment with ramipril resulted in significant increases in pain-free and maximum treadmill walking times compared with placebo. This was associated with a significant increase in the physical functioning component of the SF-36 score. Trial Registration clinicaltrials.gov Identifier: NCT00681226