811 results for Recall interval
Abstract:
Objectives: Ecological studies support the hypothesis that there is an association between vitamin D and pancreatic cancer (PaCa) mortality, but observational studies are somewhat conflicting. We sought to contribute further data to this issue by analyzing the differences in PaCa mortality across the eastern states of Australia and investigating whether vitamin D-effective ultraviolet radiation (DUVR), which is related to latitude, plays a role. ---------- Methods: Mortality data from 1968 to 2005 were sourced from the Australian General Record of Incidence and Mortality books. Negative binomial models were fitted to estimate the association between state and PaCa mortality. Clear-sky monthly DUVR in each capital city was also modeled. ---------- Results: Mortality from PaCa was 10% higher in the southern states than in Queensland, with Victoria recording the highest mortality risk (relative risk, 1.13; 95% confidence interval, 1.09-1.17). We found a highly significant association between DUVR and PaCa mortality, with an estimated 1.5% decrease in risk per 10 kJ/m² increase in yearly DUVR. ---------- Conclusions: These data show an association between latitude, DUVR, and PaCa mortality. Although this study cannot be used to infer causality, it supports the need for further investigation of a possible role of vitamin D in PaCa etiology.
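Under a log-linear (negative binomial) rate model such as the one described above, the reported effect size maps onto the model coefficient as follows. This is a hedged reading: the paper's exact parameterization is not reproduced here.

```latex
% Assumed log-linear rate model:
%   \log E[\text{deaths}] = \alpha + \beta\,\mathrm{DUVR} + \text{offset}(\log \text{population})
\mathrm{RR}\bigl(\Delta\,\mathrm{DUVR} = 10\ \mathrm{kJ/m^2}\bigr)
  \;=\; e^{10\hat{\beta}} \;\approx\; 0.985,
```

i.e., a 1.5% decrease in risk per 10 kJ/m² of additional yearly DUVR.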
Abstract:
Information Retrieval is an important albeit imperfect component of information technologies. One of the primary issues studied in this research is the insufficient diversity of retrieved documents. This study shows that this problem leads to a decrease in precision and recall, the traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is to optimize retrieval precision after all feedback has been issued, which is done by increasing the diversity of retrieved documents; this study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the "bedrock" of current probabilistic Information Retrieval theory. Neither the proposed approach nor other methods from the literature for diversifying retrieved documents conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (for which it may not have been designed but is actively used). To accomplish the aim, retrieval precision of the search session should be optimized with a multistage stochastic programming model. However, such models are computationally intractable. Therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The proposed optimization models rest on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics. The use of clusters is the primary reason why a new method of probability estimation is proposed. The adaptive dual-control topic-based IR (ADTIR) system was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC document collections. The Wikipedia experiment revealed that the dual-control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR, mainly because the insufficient quality of the clusters generated for the TREC collection violated the underlying assumption.
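For concreteness, here is a minimal sketch of the effectiveness measures named above: precision, recall, and S-recall (subtopic recall, i.e., the fraction of distinct subtopics covered by the retrieved set). The document IDs and subtopic labels are hypothetical, and the exact S-recall definition used in the thesis is assumed to be the standard one.

```python
def precision(retrieved, relevant):
    """Fraction of retrieved documents that are relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved, relevant):
    """Fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

def s_recall(retrieved, subtopics_of):
    """S-recall: fraction of distinct subtopics covered by the retrieved set."""
    covered, all_subtopics = set(), set()
    for doc, subs in subtopics_of.items():
        all_subtopics |= set(subs)
        if doc in retrieved:
            covered |= set(subs)
    return len(covered) / len(all_subtopics) if all_subtopics else 0.0

# Hypothetical example: 3 of 4 retrieved documents are relevant,
# and they cover 2 of the topic's 3 subtopics.
retrieved = ["d1", "d2", "d3", "d4"]
relevant = ["d1", "d2", "d3", "d5", "d6"]
subtopics = {"d1": ["s1"], "d2": ["s1"], "d3": ["s2"], "d5": ["s3"], "d6": ["s3"]}
print(precision(retrieved, relevant))  # 0.75
print(recall(retrieved, relevant))     # 0.6
print(s_recall(retrieved, subtopics))  # ~0.667
```

A diversified ranking raises S-recall by covering more subtopics in the same number of retrieved documents, which is the sense in which recall reflects diversity above.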
Abstract:
PURPOSE: To examine the association between neighborhood disadvantage and physical activity (PA). ---------- METHODS: We use data from the HABITAT multilevel longitudinal study of PA among mid-aged (40-65 years) men and women (n = 11,037; 68.5% response rate) living in 200 neighborhoods in Brisbane, Australia. PA was measured using three questions from the Active Australia Survey (general walking, moderate, and vigorous activity), one indicator of total activity, and two questions about walking and cycling for transport. The PA measures were operationalized using multiple categories based on time and estimated energy expenditure that were interpretable with reference to the latest PA recommendations. The association between neighborhood disadvantage and PA was examined using multilevel multinomial logistic regression and Markov chain Monte Carlo simulation. The contribution of neighborhood disadvantage to between-neighborhood variation in PA was assessed using the 80% interval odds ratio (see the sketch following this abstract). ---------- RESULTS: After adjustment for sex, age, living arrangement, education, occupation, and household income, reported participation in all measures and levels of PA varied significantly across Brisbane's neighborhoods, and neighborhood disadvantage accounted for some of this variation. Residents of advantaged neighborhoods reported significantly higher levels of total activity, general walking, moderate, and vigorous activity; however, they were less likely to walk for transport. There was no statistically significant association between neighborhood disadvantage and cycling for transport. In terms of total PA, residents of advantaged neighborhoods were more likely to exceed PA recommendations. ---------- CONCLUSIONS: Neighborhoods may exert a contextual effect on residents' likelihood of participating in PA. The greater propensity of residents in advantaged neighborhoods to do high levels of total PA may contribute to lower rates of cardiovascular disease and obesity in these areas.
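For reference, the 80% interval odds ratio in its standard multilevel formulation is shown below. This is an assumption: the paper's exact specification is not reproduced here, so the formula follows the usual definition from the multilevel-modelling literature.

```latex
% 80% interval odds ratio (IOR-80) for a cluster-level covariate with
% coefficient beta and between-neighborhood variance sigma_u^2:
\mathrm{IOR}_{80}
  = \Bigl[\, e^{\hat{\beta} - 1.2816\sqrt{2\hat{\sigma}_u^2}},\;
             e^{\hat{\beta} + 1.2816\sqrt{2\hat{\sigma}_u^2}} \,\Bigr],
\qquad 1.2816 = \Phi^{-1}(0.90).
```

A wide interval containing 1 indicates that residual between-neighborhood heterogeneity is large relative to the disadvantage effect.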
Abstract:
In this work, we investigate an alternative bootstrap approach based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296-1301] and on the Durbin-Levinson algorithm to obtain a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in an extensive Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to assess the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d, improving the coverage level of the interval.
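A minimal sketch of one plausible reading of this surrogate construction, assuming the straightforward route: estimate the sample autocovariances of the observed series, run the Durbin-Levinson recursion to obtain one-step predictor coefficients and innovation variances, and simulate a Gaussian series from them. The lag range and autocovariance estimator are illustrative choices, not the paper's.

```python
import numpy as np

def sample_acov(x, max_lag):
    """Biased sample autocovariances gamma(0), ..., gamma(max_lag)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([x[:n - k] @ x[k:] / n for k in range(max_lag + 1)])

def durbin_levinson_surrogate(x, rng=None):
    """Draw one Gaussian surrogate series with (approximately) the same
    second-order structure as x. The Durbin-Levinson recursion turns the
    autocovariances into predictor coefficients phi_{k,j} and innovation
    variances v_k, which drive the simulation."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    gamma = sample_acov(x, n - 1)   # in practice one might truncate the lag
    eps = rng.standard_normal(n)
    out = np.empty(n)
    v = gamma[0]                    # v_0: variance of the first value
    out[0] = np.sqrt(v) * eps[0]
    phi = np.zeros(n - 1)           # phi[j-1] holds phi_{k,j}
    for k in range(1, n):
        prev = phi[:k - 1].copy()                           # phi_{k-1,1..k-1}
        pacf = (gamma[k] - prev @ gamma[k - 1:0:-1]) / v    # phi_{k,k}
        phi[:k - 1] = prev - pacf * prev[::-1]
        phi[k - 1] = pacf
        v *= 1.0 - pacf ** 2                                # v_k
        # One-step-ahead simulation from the order-k predictor
        out[k] = phi[:k] @ out[k - 1::-1] + np.sqrt(v) * eps[k]
    return out
```

Because the biased autocovariance estimator is positive semidefinite, the recursion keeps the innovation variances non-negative; repeated draws of `durbin_levinson_surrogate` give the bootstrap replicates from which the memory parameter d can be re-estimated.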
Abstract:
RatSLAM is a vision-based SLAM system based on extended models of the rodent hippocampus. RatSLAM creates environment representations that can be processed by the experience mapping algorithm to produce maps suitable for goal recall. The experience mapping algorithm also allows RatSLAM to map environments many times larger than could be achieved with a one-to-one correspondence between the map and the environment, by reusing the RatSLAM maps to represent multiple sections of the environment. This paper describes experiments investigating the effects of the environment-representation size ratio and visual ambiguity on mapping and goal navigation performance. The experiments demonstrate that system performance is only weakly dependent on either parameter in isolation, but strongly dependent on their joint values.
Abstract:
Background: The Functional Capacity Index (FCI) was designed to predict physical function 12 months after injury. We report a validation study of the FCI. Methods: This was a consecutive case series of patients registered in the Queensland Trauma Registry who consented to a prospective 12-month telephone-administered follow-up study. FCI scores measured at 12 months were compared with those originally predicted. Results: Complete Abbreviated Injury Scale score information was available for 617 individuals, of whom 587 (95%) could be assigned at least one FCI score (range, 1-17). Agreement between the largest predicted FCI score and the observed FCI score was poor (κ = 0.05; 95% confidence interval, 0.00 to 0.10), and the prediction explained only 1% of the variability in observed FCI. Using an encompassing model that included all FCI assignments, agreement remained poor (κ = 0.05; 95% confidence interval, −0.02 to 0.12), and the model explained only 9% of the variability in observed FCI. Conclusion: The predicted functional capacity agrees poorly with actual functional outcomes. Further research should consider including other (noninjury) explanatory factors in predicting FCI at 12 months.
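A minimal sketch of the agreement statistic reported above, Cohen's kappa between predicted and observed scores. The score vectors are hypothetical stand-ins; the study compared predicted FCI with observed FCI at 12 months.

```python
# Cohen's kappa: agreement between two raters/score assignments,
# corrected for the agreement expected by chance.
from sklearn.metrics import cohen_kappa_score

predicted = [3, 5, 5, 1, 7, 3, 9, 5, 1, 3]   # hypothetical predicted FCI scores
observed  = [3, 4, 6, 1, 5, 2, 9, 7, 2, 3]   # hypothetical observed FCI scores

kappa = cohen_kappa_score(predicted, observed)
print(f"kappa = {kappa:.2f}")  # values near 0 indicate chance-level agreement
```

A kappa of 0.05, as reported, sits barely above chance, which is why the conclusion is that the index predicts 12-month function poorly.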
Abstract:
Immersive environments are part of a recent media innovation that allows users to become so involved within a computer-based simulated environment that they feel part of that virtual world (Grigorovici, 2003). A specific example is Second Life, an internet-based, three-dimensional immersive virtual world in which users create an online representation of themselves (an avatar) to play games and interact socially with thousands of people simultaneously. This study focuses on Second Life as an example of an immersive environment, as it is the largest adult freeform virtual world, home to 12 million avatars (Iowa State University, 2008). Second Life already hosts more than 100 real-life brands from a range of industries, including automotive, professional services, consumer goods and travel, among others (KZero, 2007; New Business Horizons, 2009). Compared to traditional advertising media, this interactive medium can immerse users in the environment. As a result of this interactivity, users can become more involved with a virtual environment, resulting in prolonged usage over weeks, months and even years; it can also facilitate presence. Despite these developments, little is known about the effectiveness of marketing messages in a virtual world context. Marketers are incorporating products into Second Life using a strategy of online product placement. This study therefore explores the perceived effectiveness of online product placement in Second Life in terms of effects on product/brand recall, purchase intentions and trial. This research examines the association between individuals' involvement with Second Life and online product placement effectiveness. In addition, it investigates the association between immersion and product placement involvement, as well as the impact of product placement involvement on online product placement effectiveness and the role of presence in affecting this relationship. An exploratory study was conducted for this research using semi-structured in-depth interviews conducted face-to-face, by email and in-world. The sample comprised 24 active Second Life users. Results indicate that product placement effectiveness is not directly associated with Second Life involvement; rather, effectiveness is impacted through the effect of Second Life involvement on product placement involvement. A positive relationship was found between individuals' product placement involvement and online product placement effectiveness. Findings also indicate that online product placement effectiveness is not directly associated with immersion; rather, it appears that effectiveness is impacted through the effect of immersion on product placement involvement. Moreover, higher levels of presence appear to have a positive impact on the relationship between product placement involvement and product placement effectiveness. Finally, a model was developed from this qualitative study for future testing. In terms of theoretical contributions, this study provides a new model for testing the effectiveness of product placement within immersive environments. From a methodological perspective, in-world interviews were undertaken as a new research method. In terms of a practical contribution, the findings identify useful information for marketers and advertising agencies that aim to promote their products in immersive virtual environments like Second Life.
Abstract:
This paper reports the distribution of polycyclic aromatic hydrocarbons (PAHs) in wash-off in urban stormwater in Gold Coast, Australia. Runoff samples collected from residential, industrial and commercial sites were separated into a dissolved fraction (<0.45 µm) and three particulate fractions (0.45-75 µm, 75-150 µm and >150 µm). Patterns in the distribution of PAHs in the fractions were investigated using Principal Component Analysis. Regardless of land use and particle size fraction characteristics, the presence of organic carbon plays a dominant role in the distribution of PAHs. PAH concentrations were also found to decrease with rainfall duration. Generally, the 1- and 2-year average recurrence interval rainfall events were associated with the majority of the PAHs, and the wash-off was a source-limiting process. In the context of stormwater quality mitigation, targeting the initial part of the rainfall event is the most effective treatment strategy. The implications of the study results for urban stormwater quality management are also discussed.
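A hedged sketch of the pattern analysis described above: Principal Component Analysis applied to a matrix of PAH concentrations per sample. The data matrix, its dimensions and the standardisation step are illustrative assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: rows = runoff samples (site x size fraction),
# columns = concentrations of individual PAH compounds.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(12, 6))

# Standardise each compound, then project onto the leading components;
# clustering of samples in PC space reveals shared distribution patterns.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)
print(pca.explained_variance_ratio_)  # variance captured by PC1 and PC2
```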
Abstract:
Wide-angle images exhibit significant distortion, for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The required scale-space images for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector based on this principle is developed, and several computational approaches to the convolution are investigated in both the spatial and frequency domains. In particular, a close approximation is developed that has computation time comparable to conventional SIFT but improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
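A minimal sketch of the matching evaluation named above: sweeping a descriptor-distance threshold and scoring recall against 1-precision. Ground-truth correspondence labels are assumed available (e.g., from known camera geometry); the function and its inputs are illustrative.

```python
import numpy as np

def recall_vs_one_minus_precision(distances, is_correct, n_correspondences, thresholds):
    """For each threshold, accept candidate matches whose descriptor
    distance falls below it, then score:
      recall        = correct accepted / ground-truth correspondences
      1 - precision = false accepted   / total accepted
    """
    distances = np.asarray(distances)
    is_correct = np.asarray(is_correct, dtype=bool)
    curve = []
    for t in thresholds:
        accepted = distances < t
        n_acc = int(accepted.sum())
        n_ok = int((accepted & is_correct).sum())
        recall = n_ok / n_correspondences
        one_minus_precision = (n_acc - n_ok) / n_acc if n_acc else 0.0
        curve.append((one_minus_precision, recall))
    return curve
```

A detector/descriptor whose curve lies higher (more recall) at the same 1-precision is the better matcher, which is the sense in which the spherical approximation above improves on conventional SIFT.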
Abstract:
Background: Up to 1% of adults will suffer from leg ulceration at some time. The majority of leg ulcers are venous in origin and are caused by high pressure in the veins due to blockage or weakness of the valves in the veins of the leg. Prevention and treatment of venous ulcers is aimed at reducing this pressure, either by removing or repairing the veins, or by applying compression bandages or stockings to reduce the pressure in the veins. The vast majority of venous ulcers are healed using compression bandages. Once healed, ulcers often recur, so it is customary to continue applying compression in the form of bandages, tights, stockings or socks in order to prevent recurrence. Objectives: To assess the effects of compression hosiery (socks, stockings, tights) or bandages in preventing the recurrence of venous ulcers, and to determine whether there is an optimum pressure or type of compression to prevent recurrence. Search methods: The searches for the review were first undertaken in 2000. For this update we searched the Cochrane Wounds Group Specialised Register (October 2007), the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library 2007 Issue 3, Ovid MEDLINE (1950 to September Week 4 2007), Ovid EMBASE (1980 to 2007 Week 40) and Ovid CINAHL (1982 to October Week 1 2007). Selection criteria: Randomised controlled trials evaluating compression bandages or hosiery for preventing venous leg ulcers. Data collection and analysis: Data extraction and assessment of study quality were undertaken by two authors independently. Results: No trials compared recurrence rates with and without compression. One trial (300 patients) compared high (UK Class 3) compression hosiery with moderate (UK Class 2) compression hosiery. An intention-to-treat analysis found no significant reduction in recurrence at five years' follow-up associated with high compression hosiery compared with moderate compression hosiery (relative risk of recurrence 0.82, 95% confidence interval 0.61 to 1.12). This analysis would tend to underestimate the effectiveness of high compression hosiery because a significant proportion of people changed from high compression to medium compression hosiery. Compliance rates were significantly higher with medium compression than with high compression hosiery. One trial (166 patients) found no statistically significant difference in recurrence between two types of medium (UK Class 2) compression hosiery (relative risk of recurrence with Medi was 0.74, 95% confidence interval 0.45 to 1.2). Both trials reported that not wearing compression hosiery was strongly associated with ulcer recurrence, which is circumstantial evidence that compression reduces ulcer recurrence. No trials were found which evaluated compression bandages for preventing ulcer recurrence. Authors' conclusions: No trials compared compression with no compression for prevention of ulcer recurrence. Not wearing compression was associated with recurrence in both studies identified in this review; this is circumstantial evidence of the benefit of compression in reducing recurrence. Recurrence rates may be lower with high compression hosiery than with medium compression hosiery, and therefore patients should be offered the strongest compression with which they can comply. Further trials are needed to determine the effectiveness of hosiery prescribed in other settings, i.e. in the UK community and in countries other than the UK.
Abstract:
Background: Ambiguity remains about the effectiveness of wearing surgical face masks. The purpose of this study was to assess the impact on surgical site infections when non-scrubbed operating room staff did not wear surgical face masks. Design: Randomised controlled trial. Participants: Patients undergoing elective or emergency obstetric, gynecological, general, orthopaedic, breast or urological surgery in an Australian tertiary hospital. Intervention: 827 participants were enrolled and complete follow-up data were available for 811 (98.1%) patients. Operating room lists were randomly allocated to a 'Mask group' (all non-scrubbed staff wore a mask) or 'No Mask group' (none of the non-scrubbed staff wore masks). Primary end point: Surgical site infection (identified using in-patient surveillance, post-discharge follow-up and chart reviews). Patients were followed for up to six weeks. Results: Overall, 83 (10.2%) surgical site infections were recorded; 46/401 (11.5%) in the Mask group and 37/410 (9.0%) in the No Mask group; odds ratio (OR) 0.77 (95% confidence interval (CI) 0.49 to 1.21), p = 0.151. Independent risk factors for surgical site infection included any pre-operative stay (adjusted odds ratio [aOR] 0.43; 95% CI 0.20 to 0.95), high BMI (aOR 0.38; 95% CI 0.17 to 0.87), and any previous surgical site infection (aOR 0.40; 95% CI 0.17 to 0.89). Conclusion: Surgical site infection rates did not increase when non-scrubbed operating room personnel did not wear a face mask.
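A hedged arithmetic check of the reported effect size, using the 2x2 counts given above and the standard Woolf (log-OR) confidence interval. The orientation of the ratio (No Mask versus Mask) is inferred from the reported value of 0.77; the trial may have used a different CI method.

```python
import math

a, b = 37, 410 - 37   # No Mask group: infected, not infected
c, d = 46, 401 - 46   # Mask group: infected, not infected

odds_ratio = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR), Woolf method
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
# -> OR = 0.77 (95% CI 0.48 to 1.21); the slight difference from the
#    reported lower bound (0.49) is likely due to the CI method used.
```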
Abstract:
Product placement is a fast-growing multi-billion dollar industry, yet measures of its effectiveness, which influence the critical area of pricing, have been problematic. Past attempts to measure the effect of a placement, and therefore to provide a basis for pricing of placements, have been confounded by the effect on consumers of multiple prior exposures to a brand name across all marketing communications. Virtual product placement offers several advantages: it serves as a tool to measure the effectiveness of product placements, assists with the lack of audience selectivity in traditional product placement, allows different audiences to be tested for brands, and addresses a gap in the existing academic literature by focusing on the impact of product placement on recall and recognition of new brands.
Abstract:
The paper examines whether there was an excess of deaths, and the relative roles of temperature and ozone, in a heatwave during 7–26 February 2004 in Brisbane, Australia, a subtropical city accustomed to warm weather. The data on daily counts of deaths from cardiovascular disease and non-external causes, meteorological conditions, and air pollution in Brisbane from 1 January 2001 to 31 October 2004 were supplied by the Australian Bureau of Statistics, the Australian Bureau of Meteorology, and the Queensland Environmental Protection Agency, respectively. The relationship between temperature and mortality was analysed using a Poisson time series regression model with smoothing splines to control for nonlinear effects of confounding factors. The highest temperature recorded in the 2004 heatwave was 42°C, compared with a highest recorded temperature of 34°C during the same periods of 2001–2003. There was a significant relationship between exposure to heat and excess deaths in the 2004 heatwave (estimated increase in non-external deaths: 75; 95% confidence interval (CI): 11–138; cardiovascular deaths: 41; 95% CI: −2 to 84). There was no apparent evidence of substantial short-term mortality displacement. The excess deaths were mainly attributed to temperature, but exposure to ozone also contributed to these deaths.
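A hedged sketch of the class of model described above: a Poisson time-series regression of daily deaths on temperature, with a spline of time standing in for the smoothing splines that control seasonal and long-term trends. The data frame, variable names and spline basis are illustrative assumptions; the paper's exact specification may differ.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical daily data: death counts, temperature, and a day index.
df = pd.DataFrame({
    "deaths": [12, 15, 11, 18, 22, 19, 14, 13, 16, 12],
    "temp":   [28, 30, 27, 33, 38, 36, 31, 29, 32, 28],
    "day":    range(10),
})

# Poisson GLM with a B-spline basis bs() for time (via patsy formulas);
# exp(coefficient on temp) is the rate ratio per 1 degree C.
model = smf.glm(
    "deaths ~ temp + bs(day, df=4)",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```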
Abstract:
BACKGROUND: A number of epidemiological studies have examined the adverse effects of air pollution on mortality and morbidity. Several studies have also investigated the associations between air pollution and specific diseases, including arrhythmia, myocardial infarction, and heart failure. However, little is known about the relationship between air pollution and the onset of hypertension. OBJECTIVE: To explore the effect of particulate matter air pollution on the risk of emergency hospital visits (EHVs) for hypertension in Beijing, China. METHODS: We gathered data on daily EHVs for hypertension, fine particulate matter less than 2.5 µm in aerodynamic diameter (PM2.5), particulate matter less than 10 µm in aerodynamic diameter (PM10), sulfur dioxide, and nitrogen dioxide in Beijing, China, during 2007. A time-stratified case-crossover design with a distributed lag model was used to evaluate associations between ambient air pollutants and hypertension. Daily mean temperature and relative humidity were controlled for in all models. RESULTS: There were 1,491 EHVs for hypertension during the study period. In single-pollutant models, a 10 µg/m³ increase in PM2.5 and PM10 was associated with EHVs for hypertension, with odds ratios (overall effect over five days) of 1.084 (95% confidence interval (CI): 1.028, 1.139) and 1.060 (95% CI: 1.020, 1.101), respectively. CONCLUSION: Elevated levels of ambient particulate matter are associated with an increase in EHVs for hypertension in Beijing, China.
Abstract:
Speaker verification is the process of verifying the identity of a person by analysing their speech. There are several important applications for automatic speaker verification (ASV) technology, including suspect identification, tracking terrorists and detecting a person's presence at a remote location in the surveillance domain, as well as person authentication for phone banking and credit card transactions in the private sector. Telephones and telephony networks provide a natural medium for these applications. The aim of this work is to improve the usefulness of ASV technology for practical applications in the presence of adverse conditions. In a telephony environment, background noise, handset mismatch, channel distortions, room acoustics and restrictions on the available testing and training data are common sources of errors for ASV systems. Two research themes were pursued to overcome these adverse conditions: modelling mismatch and modelling uncertainty. To directly address the performance degradation incurred through mismatched conditions, it was proposed to model this mismatch directly. Feature mapping was evaluated for combating handset mismatch and was extended through the use of a blind clustering algorithm to remove the need for accurate handset labels for the training data. Mismatch modelling was then generalised by explicitly modelling the session conditions as a constrained offset of the speaker model means. This session variability modelling approach enabled the modelling of arbitrary sources of mismatch, including handset type, and halved the error rates in many cases. Methods to model the uncertainty in speaker model estimates and verification scores were developed to address the difficulties of limited training and testing data. The Bayes factor was introduced to account for the uncertainty of the speaker model estimates in testing by applying Bayesian theory to the verification criterion, with improved performance in matched conditions. Modelling the uncertainty in the verification score itself met with significant success: estimating a confidence interval for the "true" verification score enabled an order-of-magnitude reduction in the average quantity of speech required to make a confident verification decision based on a threshold. The confidence measures developed in this work may also have significant applications for forensic speaker verification tasks.
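A minimal sketch of the early-decision idea described above: accumulate per-frame verification scores, maintain a confidence interval for the mean score, and stop as soon as the interval clears the decision threshold. The i.i.d.-score assumption, the normal interval and the parameter values are illustrative, not the thesis's exact confidence measure.

```python
import math

def verify_early_stop(frame_scores, threshold, z=1.96, min_frames=10):
    """Return (decision, n_frames_used). Accept or reject as soon as the
    confidence interval for the mean score excludes the threshold;
    otherwise decide on the full utterance."""
    n, total, total_sq = 0, 0.0, 0.0
    for s in frame_scores:
        n += 1
        total += s
        total_sq += s * s
        if n < min_frames:
            continue
        mean = total / n
        var = max(total_sq / n - mean * mean, 0.0)
        half_width = z * math.sqrt(var / n)   # normal CI for the mean score
        if mean - half_width > threshold:
            return True, n                    # confidently above: accept early
        if mean + half_width < threshold:
            return False, n                   # confidently below: reject early
    return (total / n) > threshold, n         # fall back to full-utterance decision
```

Stopping once the interval excludes the threshold is what yields the reduction in the average quantity of speech needed per decision.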