649 results for survivorship care models


Relevance:

20.00%

Publisher:

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that help to explain unaccounted-for variation in crashes across sites.
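As a concrete sketch of the idea (not the authors' code, and with simulated rather than Georgia data), the following fits a negative binomial (NB2) model in which the dispersion parameter is itself a log-linear function of a covariate, by maximum likelihood rather than MCMC; all variable names and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(42)

# Simulated site data: one covariate (e.g. standardized log traffic flow).
n = 2000
x = rng.normal(size=n)
mu = np.exp(0.5 + 0.3 * x)                            # mean crash count
alpha = np.exp(-0.5 + 0.4 * x)                        # site-varying dispersion
lam = rng.gamma(shape=1.0 / alpha, scale=alpha * mu)  # gamma-Poisson mixture
y = rng.poisson(lam)                                  # NB2 crash counts

def nll(params):
    """Negative log-likelihood of NB2 with dispersion alpha_i = exp(z_i'gamma)."""
    b, g = params[:2], params[2:]
    m = np.exp(np.clip(b[0] + b[1] * x, -20.0, 20.0))       # clipped for safety
    inv_a = np.exp(-np.clip(g[0] + g[1] * x, -20.0, 20.0))  # 1 / alpha_i
    ll = (gammaln(y + inv_a) - gammaln(inv_a) - gammaln(y + 1)
          + inv_a * np.log(inv_a / (inv_a + m))
          + y * np.log(m / (inv_a + m)))
    return -ll.sum()

res = minimize(nll, np.zeros(4), method="L-BFGS-B")
beta_hat, gamma_hat = res.x[:2], res.x[2:]
print("mean coefficients:", beta_hat, "dispersion coefficients:", gamma_hat)
```

Testing whether the dispersion coefficients beyond the intercept differ from zero is the likelihood-based analogue of the significance tests for extra-variation functions suggested above.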

Relevance:

20.00%

Publisher:

Abstract:

Predicting safety on roadways is standard practice for road safety professionals and has a corresponding extensive literature. The majority of safety prediction models are estimated using roadway segment and intersection (microscale) data, while more recently efforts have been undertaken to predict safety at the planning level (macroscale). Safety prediction models typically include roadway, operations, and exposure variables—factors known to affect safety in fundamental ways. Environmental variables, in particular variables attempting to capture the effect of rain on road safety, are difficult to obtain and have rarely been considered. In the few cases where weather variables have been included, historical averages rather than the actual weather conditions during which crashes were observed have been used. Without the inclusion of weather-related variables, researchers have had difficulty explaining regional differences in the safety performance of various entities (e.g., intersections, road segments, highways). As part of the NCHRP 8-44 research effort, researchers developed PLANSAFE, or planning level safety prediction models. These models make use of socio-economic, demographic, and roadway variables for predicting planning level safety. Accounting for regional differences - similar to the experience with microscale safety models - has been problematic during the development of planning level safety prediction models. More specifically, without weather-related variables there is an insufficient set of variables for explaining safety differences across regions and states. Furthermore, omitted variable bias resulting from excluding these important variables may adversely impact the coefficients of included variables, thus contributing to difficulty in model interpretation and accuracy.
This paper summarizes the results of an effort to include weather-related variables, particularly various measures of rainfall, into accident frequency prediction and the prediction of the frequency of fatal and/or injury severity crash models. The purposes of the study were to determine whether these variables improve the overall goodness of fit of the models, whether they explain some or all of the observed regional differences, and to identify the estimated effects of rainfall on safety. The models are based on Traffic Analysis Zone level datasets from Michigan, and Pima and Maricopa Counties in Arizona. Numerous rain-related variables were found to be statistically significant, selected rain-related variables improved the overall goodness of fit, and inclusion of these variables reduced the portion of the model explained by the constant in the base models without weather variables. Rain tends to diminish safety, as expected, in fairly complex ways, depending on rain frequency and intensity.
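The omitted-variable concern can be illustrated with a small simulation (hypothetical variable names, simulated data, and a plain Poisson fit via scipy rather than the paper's models): leaving a rain covariate out of a crash-frequency model biases the coefficient of a correlated included variable:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)
n = 3000
log_vmt = rng.normal(size=n)                   # exposure proxy (standardized)
rain = 0.6 * log_vmt + rng.normal(size=n)      # rainfall correlated with exposure
mu = np.exp(0.2 + 0.5 * log_vmt + 0.3 * rain)  # true model includes rain
y = rng.poisson(mu)

def fit_poisson(X):
    """Poisson MLE with log link; eta clipped only for numerical safety."""
    def nll(b):
        eta = np.clip(X @ b, -30.0, 30.0)
        return np.sum(np.exp(eta) - y * eta + gammaln(y + 1))
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

X_full = np.column_stack([np.ones(n), log_vmt, rain])
X_omit = np.column_stack([np.ones(n), log_vmt])
b_full, b_omit = fit_poisson(X_full), fit_poisson(X_omit)
print("exposure coefficient with rain included:", b_full[1])
print("exposure coefficient with rain omitted: ", b_omit[1])
```

With rain omitted, the exposure coefficient absorbs part of the rain effect, the kind of interpretation problem the paragraph above describes.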

Relevance:

20.00%

Publisher:

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
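The low-exposure argument can be reproduced in a few lines (simulated data, illustrative parameter values, not the paper's experiment): sites with small, heterogeneous per-passage crash probabilities—Poisson trials—produce more zeros than a homogeneous Poisson with the same mean predicts, with no dual-state process involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each site: low exposure and a small, heterogeneous crash probability per
# vehicle passage (independent Bernoulli trials with unequal probabilities).
n_sites = 5000
exposure = rng.integers(50, 500, size=n_sites)   # vehicle passages observed
p = np.exp(rng.normal(-7.0, 1.0, size=n_sites))  # per-passage crash probability
counts = rng.binomial(exposure, p)               # crash counts per site

zero_share = np.mean(counts == 0)
poisson_zero = np.exp(-counts.mean())  # zero prob. of a homogeneous Poisson fit
print(f"observed zeros: {zero_share:.3f}  "
      f"homogeneous Poisson predicts: {poisson_zero:.3f}")
```

The observed share of zeros exceeds the homogeneous Poisson prediction purely because of low exposure and cross-site heterogeneity, which is the point the simulation experiment above makes against the dual-state interpretation.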

Relevance:

20.00%

Publisher:

Abstract:

It is important to examine the nature of the relationships between roadway, environmental, and traffic factors and motor vehicle crashes, with the aim of improving the collective understanding of causal mechanisms involved in crashes and better predicting their occurrence. Statistical models of motor vehicle crashes are one path of inquiry often used to gain these initial insights. Recent efforts have focused on the estimation of negative binomial and Poisson regression models (and related variants) due to their relatively good fit to crash data. Of course, analysts constantly seek methods that offer greater consistency with the data generating mechanism (motor vehicle crashes in this case), provide better statistical fit, and provide insight into data structure that was previously unavailable. One such opportunity exists with some types of crash data, in particular crash-level data that are collected across roadway segments, intersections, etc. It is argued in this paper that some crash data possess a hierarchical structure that has not routinely been exploited. This paper describes the application of binomial multilevel models of crash types using 548 motor vehicle crashes collected from 91 two-lane rural intersections in the state of Georgia. Crash prediction models are estimated for angle, rear-end, and sideswipe (both same direction and opposite direction) crashes. The contributions of the paper are the realization of hierarchical data structure and the application of a theoretically appealing and suitable analysis approach for multilevel data, yielding insights into intersection-related crashes by crash type.
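A minimal stand-in for such a multilevel analysis (simulated data, not the Georgia sample, and a hand-rolled random-intercept logistic model fitted by Gauss-Hermite quadrature rather than any particular package):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(3)

# Simulated hierarchy: crashes (level 1) nested within intersections (level 2).
J, n_j = 200, 10
x = rng.normal(size=(J, n_j))             # crash-level covariate
b_site = rng.normal(0.0, 1.0, size=J)     # intersection random intercepts
eta = -0.5 + 0.8 * x + b_site[:, None]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))  # e.g. angle crash yes/no

nodes, weights = np.polynomial.hermite.hermgauss(15)  # quadrature rule

def nll(params):
    b0, b1, log_sigma = params
    b = np.sqrt(2.0) * np.exp(log_sigma) * nodes       # (K,) intercept nodes
    e = (b0 + b1 * x)[:, :, None] + b[None, None, :]   # (J, n_j, K)
    # stable Bernoulli log-likelihood at each quadrature node
    loglik = -(1 - y)[:, :, None] * e - np.logaddexp(0.0, -e)
    # integrate the random intercept out, group by group
    group_ll = (logsumexp(loglik.sum(axis=1) + np.log(weights), axis=1)
                - 0.5 * np.log(np.pi))
    return -group_ll.sum()

res = minimize(nll, np.zeros(3), method="Nelder-Mead")
b0_hat, b1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])
print("fixed effects:", b0_hat, b1_hat, "random-intercept SD:", sigma_hat)
```

The estimated random-intercept standard deviation quantifies between-intersection variation, the hierarchical structure the paper argues should be exploited.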

Relevance:

20.00%

Publisher:

Abstract:

A study was done to develop macrolevel crash prediction models that can be used to understand and identify effective countermeasures for improving signalized highway intersections and multilane stop-controlled highway intersections in rural areas. Poisson and negative binomial regression models were fit to intersection crash data from Georgia, California, and Michigan. To assess the suitability of the models, several goodness-of-fit measures were computed. The statistical models were then used to shed light on the relationships between crash occurrence and traffic and geometric features of the rural signalized intersections. The results revealed that traffic flow variables significantly affected the overall safety performance of the intersections regardless of intersection type and that the geometric features of intersections varied across intersection type and also influenced crash type.
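The Poisson-versus-negative-binomial comparison can be sketched as follows (simulated overdispersed counts and an illustrative covariate, not the Georgia/California/Michigan data; AIC and the Pearson chi-square ratio serve as the goodness-of-fit measures):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(11)
n = 1000
log_aadt = rng.normal(size=n)        # standardized log traffic flow
mu = np.exp(0.3 + 0.6 * log_aadt)
y = rng.poisson(mu * rng.gamma(shape=2.0, scale=0.5, size=n))  # overdispersed
X = np.column_stack([np.ones(n), log_aadt])

def poisson_nll(b):
    eta = np.clip(X @ b, -30.0, 30.0)
    return np.sum(np.exp(eta) - y * eta + gammaln(y + 1))

def nb_nll(params):
    b, inv_a = params[:2], np.exp(-np.clip(params[2], -20.0, 20.0))
    m = np.exp(np.clip(X @ b, -20.0, 20.0))
    return -np.sum(gammaln(y + inv_a) - gammaln(inv_a) - gammaln(y + 1)
                   + inv_a * np.log(inv_a / (inv_a + m))
                   + y * np.log(m / (inv_a + m)))

pois = minimize(poisson_nll, np.zeros(2), method="BFGS")
nb = minimize(nb_nll, np.zeros(3), method="BFGS")
aic_pois, aic_nb = 2 * pois.fun + 2 * 2, 2 * nb.fun + 2 * 3

mu_hat = np.exp(X @ pois.x)
pearson_ratio = np.sum((y - mu_hat) ** 2 / mu_hat) / (n - 2)  # >1: overdispersion
print(f"Poisson AIC {aic_pois:.1f}  NB AIC {aic_nb:.1f}  "
      f"Pearson X2/df {pearson_ratio:.2f}")
```

A Pearson ratio well above one flags overdispersion, and the lower NB AIC is the kind of goodness-of-fit evidence used to choose between the two model families.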

Relevance:

20.00%

Publisher:

Abstract:

The intent of this note is to succinctly articulate additional points that were not provided in the original paper (Lord et al., 2005) and to help clarify a collective reluctance to adopt zero-inflated (ZI) models for modeling highway safety data. A dialogue on this important issue, just one of many important safety modeling issues, is healthy discourse on the path towards improved safety modeling. This note first provides a summary of prior findings and conclusions of the original paper. It then presents two critical and relevant issues: the maximizing statistical fit fallacy and logic problems with the ZI model in highway safety modeling. Finally, we provide brief conclusions.

Relevance:

20.00%

Publisher:

Abstract:

Background: Physical activity (PA) is recommended for managing osteoarthritis (OA). However, few people with OA are physically active. Understanding the factors associated with PA is necessary to increase PA in this population. This cross-sectional study examined factors associated with leisure-time PA, stretching exercises, and strengthening exercises in people with OA. Methods: In a mail survey, 485 individuals with hip or knee OA (mean age 68.0 years, SD = 10.6) were asked about factors that may influence PA participation, including use of non-PA OA management strategies and both psychological and physical health-related factors. Associations between factors and each PA outcome were examined in multivariable logistic regression models. Results: Non-PA management strategies were the main factors associated with the outcomes. Information/education courses, heat/cold treatments, and paracetamol were associated with stretching and strengthening exercises (P<0.05). Hydrotherapy and magnet therapy were associated with leisure-time PA; using orthotics and massage therapy, with stretching exercises; and occupational therapy, with strengthening exercises (P<0.05). Few psychological or health-related factors were associated with the outcomes. Conclusions: Some management strategies may make it easier for people with OA to be physically active, and could be promoted to encourage PA. Providers of strategies are potential avenues for recruiting people with OA into PA programs.
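A multivariable logistic model of this kind can be sketched as follows (simulated data and hypothetical predictors, not the survey's variables); the exponentiated coefficients are the adjusted odds ratios such analyses report:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 500
# Hypothetical predictors: use of two management strategies, standardized age.
heat_cold = rng.binomial(1, 0.4, size=n)
hydro = rng.binomial(1, 0.25, size=n)
age_z = rng.normal(size=n)
X = np.column_stack([np.ones(n), heat_cold, hydro, age_z])
beta_true = np.array([-0.8, 0.9, 0.6, -0.3])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta_true))))  # active: yes/no

def nll(b):
    """Logistic negative log-likelihood, numerically stable via logaddexp."""
    eta = X @ b
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

b_hat = minimize(nll, np.zeros(4), method="BFGS").x
odds_ratios = np.exp(b_hat[1:])
print("adjusted odds ratios (heat/cold, hydrotherapy, age):", odds_ratios)
```

An odds ratio above one for a strategy indicates an association with being active after adjusting for the other covariates, the pattern reported for heat/cold treatments and hydrotherapy above.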

Relevance:

20.00%

Publisher:

Abstract:

The learning experiences of student nurses undertaking clinical placement are reported widely; however, little is known about the learning experiences of health professionals undertaking continuing professional development (CPD) in a clinical setting, especially in palliative care. The aim of this study, which was conducted as part of the national evaluation of a professional development program involving clinical attachments with palliative care services (The Program of Experience in the Palliative Approach [PEPA]), was to explore factors influencing the learning experiences of participants over time. Thirteen semi-structured, one-to-one telephone interviews were conducted with five participants throughout their PEPA experience. The analysis was informed by the traditions of adult, social and psychological learning theories and relevant literature. The participants' learning was enhanced by engaging interactively with host site staff and patients, and by the validation of their personal and professional life experiences together with the reciprocation of their knowledge with host site staff. Self-directed learning strategies maximised the participants' learning outcomes. Inclusion in team activities helped the participants feel accepted within the host site. Personal interactions with host site staff and patients shaped the social/cultural environment of the host site. Optimal learning was promoted when participants were actively engaged, felt accepted and supported by, and experienced positive interpersonal interactions with, the host site staff.

Relevance:

20.00%

Publisher:

Abstract:

We advance the proposition that dynamic stochastic general equilibrium (DSGE) models should not only be estimated and evaluated with full information methods. These require that the complete system of equations be specified properly. Some limited information analysis, which focuses upon specific equations, is therefore likely to be a useful complement to full system analysis. Two major problems occur when implementing limited information methods. These are the presence of forward-looking expectations in the system as well as unobservable non-stationary variables. We present methods for dealing with both of these difficulties, and illustrate the interaction between full and limited information methods using a well-known model.
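The limited information idea can be illustrated on a toy forward-looking equation (all parameter values hypothetical, and a deliberately simple setup rather than a full DSGE): replace the unobservable expectation by the realized lead and instrument it with dated variables, estimating a single equation without specifying the complete system:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 5000

# Observable exogenous driver: x_t = 0.5 x_{t-1} + 0.3 x_{t-2} + e_t.
x = np.zeros(T)
e = rng.normal(size=T)
for t in range(2, T):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + e[t]

# Structural equation y_t = a*E_t[y_{t+1}] + c*x_t + u_t with a=0.5, c=1.
# Its rational-expectations solution is y_t = psi1*x_t + psi2*x_{t-1} + u_t.
a, c = 0.5, 1.0
psi1 = c / (1 - a * 0.5 - a ** 2 * 0.3)
psi2 = a * 0.3 * psi1
u = rng.normal(scale=0.5, size=T)
y = np.zeros(T)
y[1:] = psi1 * x[1:] + psi2 * x[:-1] + u[1:]

# Limited information estimation of the single equation: substitute realized
# y_{t+1} for the expectation and run 2SLS with dated-t instruments.
t = np.arange(2, T - 1)
Xr = np.column_stack([y[t + 1], x[t]])           # y_{t+1} is endogenous
Z = np.column_stack([x[t], x[t - 1], x[t - 2]])  # instruments dated <= t
Xhat = Z @ np.linalg.lstsq(Z, Xr, rcond=None)[0]  # first stage
a_hat, c_hat = np.linalg.lstsq(Xhat, y[t], rcond=None)[0]  # second stage
print(f"a_hat = {a_hat:.3f} (true 0.5), c_hat = {c_hat:.3f} (true 1.0)")
```

The expectational error in y_{t+1} is uncorrelated with instruments dated t or earlier, which is what makes this single-equation (limited information) estimator consistent without specifying the rest of the system.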

Relevance:

20.00%

Publisher:

Abstract:

In general, the performance of construction projects, including their sustainability performance, does not meet optimal expectations. One aspect of this is the performance of the participants, who are independent and make a significant impact on overall project outcomes. Of these participants, the client is traditionally the owner of the project, the architect or engineer is engaged as the lead designer, and a contractor is selected to construct the facilities. Generally, the performance of the participants is gauged by considering three main factors, namely time, cost and quality. As the level of satisfaction is a subjective issue, it is rarely used in the performance evaluation of construction work. Recently, various approaches to the measurement of satisfaction have been developed in an attempt to determine the performance of construction project outcomes - for instance, client satisfaction, customer satisfaction, contractor satisfaction, occupant satisfaction and home buyer satisfaction. These not only identify the performance of the construction project but are also used to improve and maintain relationships. In addition, these assessments are necessary for the continuous improvement and enhanced cooperation of participants. The measurement of satisfaction levels primarily involves expectations and perceptions. An expectation can be regarded as a comparative standard of different needs, motives and beliefs, while a perception is a subjective interpretation that is influenced by moods, experiences and values. This suggests that the disparity between perceptions and expectations may possibly be used to represent different levels of satisfaction. However, this concept is rather new and in need of further investigation. This chapter examines the methods commonly practised in measuring satisfaction levels today and the advantages of promoting these methods.
The results provide a preliminary review of the advantages of satisfaction measurement in the construction industry and recommendations are made concerning the most appropriate methods to use in identifying the performance of project outcomes.
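The perception-expectation disparity reduces to simple arithmetic; a sketch with hypothetical 1-7 ratings on illustrative criteria (not data from the chapter):

```python
import numpy as np

# Hypothetical 1-7 ratings on five criteria (time, cost, quality,
# communication, safety) from one client.
expectations = np.array([6.0, 5.5, 6.5, 5.0, 6.0])
perceptions = np.array([5.0, 6.0, 6.0, 4.0, 6.5])

gap = perceptions - expectations   # negative = expectation not met
satisfaction_index = gap.mean()    # simple unweighted overall index
print("criterion gaps:", gap, "overall index:", round(satisfaction_index, 2))
```

Per-criterion gaps show where expectations were unmet even when the overall index looks acceptable; weighted versions of the same calculation are a common refinement.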

Relevance:

20.00%

Publisher:

Abstract:

To undertake exploratory benchmarking of a set of clinical indicators of quality care in residential care in Australia, data were collected from 107 residents within four medium-sized facilities (40–80 beds) in Brisbane, Australia. The proportion of residents in each sample facility with a particular clinical problem was compared with US Minimum Data Set quality indicator thresholds. Results demonstrated variability within and between clinical indicators, suggesting breadth of assessment using various clinical indicators of quality is an important factor when monitoring quality of care. More comprehensive and objective measures of quality of care would be of great assistance in determining and monitoring the effectiveness of residential aged care provision in Australia, particularly as demands for accountability by consumers and their families increase. What is known about the topic? The key to quality improvement is effective quality assessment, and one means of evaluating quality of care is through clinical outcomes. The Minimum Data Set quality indicators have been credited with improving quality in United States nursing homes. What does this paper add? The Clinical Care Indicators Tool was used to collect data on clinical outcomes, enabling comparison of data from a small Australian sample with American quality benchmarks to illustrate the utility of providing guidelines for interpretation. What are the implications for practitioners? Collecting and comparing clinical outcome data would enable practitioners to better understand the quality of care being provided and whether practices required review. The Clinical Care Indicator Tool could provide a comprehensive and systematic means of doing this, thus filling a gap in quality monitoring within Australian residential aged care.
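The benchmarking step amounts to comparing each facility's proportion of residents with a clinical problem against an indicator threshold; a sketch with hypothetical counts and thresholds (the actual US MDS quality-indicator thresholds are not reproduced here):

```python
# Hypothetical resident counts per facility and illustrative thresholds.
residents = {"A": 40, "B": 55, "C": 62, "D": 80}
with_problem = {  # residents flagged for each clinical indicator
    "falls":       {"A": 6, "B": 14, "C": 9, "D": 12},
    "weight_loss": {"A": 2, "B": 3, "C": 10, "D": 4},
}
thresholds = {"falls": 0.20, "weight_loss": 0.10}  # flag if proportion exceeds

flags = []
for indicator, counts in with_problem.items():
    for facility, k in counts.items():
        prop = k / residents[facility]   # facility proportion for this indicator
        if prop > thresholds[indicator]:
            flags.append((indicator, facility, round(prop, 3)))
print(flags)
```

Facilities exceeding a threshold are candidates for practice review, while the spread of proportions across indicators illustrates the within- and between-indicator variability the study reports.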

Relevance:

20.00%

Publisher:

Abstract:

Background: Patterns of diagnosis and management for men diagnosed with prostate cancer in Queensland, Australia, have not yet been systematically documented and so assumptions of equity are untested. This longitudinal study investigates the association between prostate cancer diagnostic and treatment outcomes and key area-level characteristics and individual-level demographic, clinical and psychosocial factors.---------- Methods/Design: A total of 1064 men diagnosed with prostate cancer between February 2005 and July 2007 were recruited through hospital-based urology outpatient clinics and private practices in the centres of Brisbane, Townsville and Mackay (82% of those referred). Additional clinical and diagnostic information for all 6609 men diagnosed with prostate cancer in Queensland during the study period was obtained via the population-based Queensland Cancer Registry. Respondent data are collected using telephone and self-administered questionnaires at pre-treatment and at 2 months, 6 months, 12 months, 24 months, 36 months, 48 months and 60 months post-treatment. Assessments include demographics, medical history, patterns of care, disease and treatment characteristics together with outcomes associated with prostate cancer, as well as information about quality of life and psychological adjustment. Complementary detailed treatment information is abstracted from participants’ medical records held in hospitals and private treatment facilities and collated with health service utilisation data obtained from Medicare Australia. Information about the characteristics of geographical areas is being obtained from data custodians such as the Australian Bureau of Statistics. Geo-coding and spatial technology will be used to calculate road travel distances from patients’ residences to treatment centres. 
Analyses will be conducted using standard statistical methods along with multilevel regression models including individual and area-level components.---------- Conclusions: Information about the diagnostic and treatment patterns of men diagnosed with prostate cancer is crucial for rational planning and development of health delivery and supportive care services to ensure equitable access to health services, regardless of geographical location and individual characteristics. This study is a secondary outcome of the randomised controlled trial registered with the Australian New Zealand Clinical Trials Registry (ACTRN12607000233426).

Relevance:

20.00%

Publisher:

Abstract:

Background: Factors that individually influence blood sugar control, health-related quality of life, and diabetes self-care behaviors have been widely investigated; however, most previous diabetes studies have not tested an integrated association between a series of factors and multiple health outcomes. ---------- Objectives: The purposes of this study are to identify risk factors and protective factors and to examine the impact of risk factors and protective factors on adaptive outcomes in people with type 2 diabetes.---------- Design: A descriptive correlational design was used to examine a theoretical model of risk factors, protective factors, and adaptive outcomes.---------- Settings: This study was conducted at the endocrine outpatient departments of three hospitals in Taiwan.---------- Participants: A convenience sample of 334 adults with type 2 diabetes aged 40 and over.---------- Methods: Data were collected by a self-reported questionnaire and physiological examination. Using the structural equation modeling technique, measurement and structural regression models were tested.---------- Results: Age and life events reflected the construct of risk factors. The construct of protective factors was explained by diabetes symptoms, coping strategy, and social support. The construct of adaptive outcomes comprised HbA1c, health-related quality of life, and self-care behaviors. Protective factors had a significant direct effect on adaptive outcomes (β = 0.68, p < 0.001); however, risk factors did not predict adaptive outcomes (β = − 0.48, p = 0.118).---------- Conclusions: Identifying and managing risk factors and protective factors are an integral part of diabetes care. This theoretical model provides a better understanding of how risk factors and protective factors work together to influence multiple adaptive outcomes in people living with type 2 diabetes.
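A crude two-step stand-in for the structural part of such a model (simulated data, and composite scores instead of full latent-variable estimation, so this is emphatically not the study's SEM):

```python
import numpy as np

rng = np.random.default_rng(13)
n = 334  # sample size echoes the study; the data are simulated, not the study's

protective = rng.normal(size=n)   # latent protective factors
risk = rng.normal(size=n)         # latent risk factors
# Observed indicators load on the latents (e.g. support, coping; age, events).
support = protective + rng.normal(scale=0.6, size=n)
coping = protective + rng.normal(scale=0.6, size=n)
age_z = risk + rng.normal(scale=0.8, size=n)
events = risk + rng.normal(scale=0.8, size=n)
# Adaptive outcome driven mainly by protective factors, as the study found.
outcome = 0.68 * protective - 0.05 * risk + rng.normal(scale=0.7, size=n)

def composite(*cols):
    """Average the z-scored indicators into a composite score."""
    Z = np.column_stack([(c - c.mean()) / c.std() for c in cols])
    return Z.mean(axis=1)

P, R = composite(support, coping), composite(age_z, events)
X = np.column_stack([np.ones(n), P, R])
b = np.linalg.lstsq(X, outcome, rcond=None)[0]
print("protective path:", round(b[1], 2), " risk path:", round(b[2], 2))
```

Full SEM would instead estimate the measurement loadings and structural paths jointly, but even this composite-score shortcut reproduces the qualitative pattern: a strong protective path and a risk path near zero.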

Relevance:

20.00%

Publisher:

Abstract:

Background: For CAM to feature prominently in health care decision-making there is a need to expand the evidence-base and to further incorporate economic evaluation into research priorities. In a world of scarce health care resources and an emphasis on efficiency and clinical efficacy, CAM, as indeed do all other treatments, requires rigorous evaluation to be considered in budget decision-making. Methods: Economic evaluation provides the tools to measure the costs and health consequences of CAM interventions and thereby inform decision making. This article offers CAM researchers an introductory framework for understanding, undertaking and disseminating economic evaluation. The types of economic evaluation available for the study of CAM are discussed, and decision modelling is introduced as a method for economic evaluation with much potential for use in CAM. Two types of decision models are introduced, decision trees and Markov models, along with a worked example of how each method is used to examine costs and health consequences. This is followed by a discussion of how this information is used by decision makers. Conclusions: Undoubtedly, economic evaluation methods form an important part of health care decision making. Without formal training it can seem a daunting task to consider economic evaluation; however, multidisciplinary teams provide an opportunity for health economists, CAM practitioners and other interested researchers to work together to further develop the economic evaluation of CAM.
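A worked Markov cohort model of the kind described, with illustrative transition probabilities, costs and utilities (hypothetical values, not from any real evaluation):

```python
import numpy as np

# Three-state Markov cohort model (Well, Sick, Dead), annual cycles.
P_usual = np.array([[0.85, 0.10, 0.05],   # rows: from-state, cols: to-state
                    [0.00, 0.80, 0.20],
                    [0.00, 0.00, 1.00]])  # Dead is absorbing
P_new = P_usual.copy()
P_new[0] = [0.90, 0.05, 0.05]             # new therapy halves Well -> Sick risk

cost = {"usual": np.array([500.0, 5000.0, 0.0]),
        "new":   np.array([3500.0, 8000.0, 0.0])}  # therapy adds 3000/yr alive
utility = np.array([0.95, 0.60, 0.0])              # QALY weight per state
discount, cycles = 0.035, 20

def run(P, c):
    dist = np.array([1.0, 0.0, 0.0])      # cohort starts Well
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t   # discount factor for cycle t
        total_cost += d * dist @ c
        total_qaly += d * dist @ utility
        dist = dist @ P                   # advance cohort one cycle
    return total_cost, total_qaly, dist

cost_u, qaly_u, _ = run(P_usual, cost["usual"])
cost_n, qaly_n, dist_n = run(P_new, cost["new"])
icer = (cost_n - cost_u) / (qaly_n - qaly_u)  # incremental cost per QALY gained
print(f"ICER: {icer:.0f} per QALY gained")
```

The incremental cost-effectiveness ratio (ICER) is then compared against a willingness-to-pay threshold by decision makers; a decision tree would handle the same question for a short, non-recurring care pathway.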

Relevance:

20.00%

Publisher:

Abstract:

With the advances in computer hardware and software development techniques in the past 25 years, digital computer simulation of train movement and traction systems has been widely adopted as a standard computer-aided engineering tool [1] during the design and development stages of existing and new railway systems. Simulators of different approaches and scales are used extensively to investigate various kinds of system studies. Simulation is now proven to be the cheapest means to carry out performance prediction and system behaviour characterisation. When computers were first used to study railway systems, they were mainly employed to perform repetitive but time-consuming computational tasks, such as matrix manipulations for power network solution and exhaustive searches for optimal braking trajectories. With only simple high-level programming languages available at the time, full advantage of the computing hardware could not be taken. Hence, structured simulations of the whole railway system were not very common. Most applications focused on isolated parts of the railway system. It is more appropriate to regard those applications as primarily mechanised calculations rather than simulations. However, a railway system consists of a number of subsystems, such as train movement, power supply and traction drives, which inevitably contain many complexities and diversities. These subsystems interact frequently with each other while the trains are moving, and they have their special features in different railway systems. To further complicate the simulation requirements, constraints like track geometry, speed restrictions and friction have to be considered, not to mention possible non-linearities and uncertainties in the system.
In order to provide a comprehensive and accurate account of system behaviour through simulation, a large amount of data has to be organised systematically to ensure easy access and efficient representation, and the interactions and relationships among the subsystems should be defined explicitly. These requirements call for sophisticated and effective simulation models for each component of the system. The software development techniques available nowadays allow the evolution of such simulation models. Advanced software design not only greatly enhances the applicability of the simulators; it also encourages maintainability and modularity for easy understanding and further development, and portability across hardware platforms. The objective of this paper is to review the development of a number of approaches to simulation models. Attention is, in particular, given to models for train movement, power supply systems and traction drives. These models have been successfully used to enable various ‘what-if’ issues to be resolved effectively in a wide range of applications, such as speed profiles, energy consumption and run times.
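A minimal single-train movement sketch in the spirit of such simulators (point-mass dynamics, Davis-type resistance, and illustrative parameter values, not any particular railway's data):

```python
# Point-mass train movement: maximum-effort acceleration (force- then
# power-limited), cruising at line speed, and constant-rate braking to
# stop at the next station. All parameter values are illustrative.
mass = 200e3                 # train mass [kg]
F_max, P_max = 200e3, 3e6    # tractive effort limit [N], power limit [W]
v_max = 25.0                 # line speed limit [m/s]
b = 0.8                      # service braking rate [m/s^2]
L = 2000.0                   # distance to next stop [m]
A, B, C = 2000.0, 50.0, 6.0  # Davis resistance R(v) = A + B*v + C*v^2 [N]

dt, t, pos, v, energy = 0.1, 0.0, 0.0, 0.0, 0.0
braking = False
while pos < L and t < 600.0:
    resist = A + B * v + C * v * v
    if braking or (L - pos) <= v * v / (2.0 * b):
        braking = True                       # latch: once braking, keep braking
        F, acc = 0.0, -b
    elif v < v_max:
        F = min(F_max, P_max / max(v, 1.0))  # effort- then power-limited traction
        acc = (F - resist) / mass
    else:
        F, acc = resist, 0.0                 # cruise: effort balances resistance
    energy += F * v * dt                     # traction energy at the wheel [J]
    v_new = v + acc * dt
    if braking and v_new <= 0.0:
        v = 0.0
        break
    v = min(v_new, v_max)
    pos += v * dt
    t += dt

print(f"run time {t:.1f} s, stop error {pos - L:+.1f} m, "
      f"traction energy {energy / 3.6e6:.2f} kWh")
```

Run time and traction energy are exactly the kind of 'what-if' outputs mentioned above; a full simulator would couple many such train models to power supply and traction drive models.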