Abstract:
Determining the ecologically relevant spatial scales for predicting species occurrences is central to characterising species–environment relationships, so species distribution modelling should consider all ecologically relevant spatial scales. While several recent studies have addressed this problem in artificially fragmented landscapes, few have examined the relevant ecological scales for organisms that live in naturally fragmented landscapes. Such landscapes are exemplified by the rugged terrain preferred by Australian rock-wallabies, and we addressed the issue of scale using the threatened brush-tailed rock-wallaby (Petrogale penicillata) in eastern Australia. We surveyed for brush-tailed rock-wallabies at 200 sites in southeast Queensland, collecting potentially influential site-level and landscape-level variables. We applied classification trees at each scale to capture a hierarchy of relationships between the explanatory variables and brush-tailed rock-wallaby presence/absence. Habitat complexity at the site level and geology at the landscape level were the best predictors of where we observed brush-tailed rock-wallabies. Our study showed that the distribution of the species is affected by both site-scale and landscape-scale factors, reinforcing the need for a multi-scale approach to understanding the relationship between a species and its environment. We demonstrate that careful design of data collection, using coarse-scale spatial datasets and finer-scale field data, can provide useful information for identifying the ecologically relevant scales for studying species–environment relationships. Our study highlights the need to determine patterns of environmental influence at multiple scales to conserve specialist species such as the brush-tailed rock-wallaby in naturally fragmented landscapes.
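As an illustration of the classification-tree step described in this abstract, the following sketch fits a presence/absence tree with scikit-learn. The predictor names and simulated data are hypothetical stand-ins for the study's site-level variables, not its actual dataset.

```python
# Sketch: classification tree for presence/absence against site-level
# predictors. Predictor names and simulated data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 200  # survey sites, matching the study design
sites = pd.DataFrame({
    "habitat_complexity": rng.integers(1, 6, n),   # ordinal field score
    "slope": rng.uniform(0, 45, n),                # degrees
    "rock_cover": rng.uniform(0, 1, n),            # proportion
})
# Toy presence signal: complex, rocky sites are more likely occupied.
p = 1 / (1 + np.exp(-(sites["habitat_complexity"] - 3 + 2 * sites["rock_cover"] - 1)))
present = rng.binomial(1, p)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10).fit(sites, present)
# The printed rules expose the hierarchy of splits, analogous to the
# hierarchy of species-environment relationships the abstract describes.
print(export_text(tree, feature_names=list(sites.columns)))
```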
Abstract:
A growing literature seeks to explain differences in individuals' self-reported satisfaction with their jobs. The evidence so far has mainly been based on cross-sectional data, and where panel data have been used, individual unobserved heterogeneity has been modelled with a random effects ordered probit. This article makes use of longitudinal data for Denmark, taken from the 1995-1999 waves of the European Community Household Panel, and estimates fixed effects ordered logit models using the estimation methods proposed by Ferrer-i-Carbonell and Frijters (2004) and Das and van Soest (1999). For comparison and testing purposes a random effects ordered probit is also estimated. Estimations are carried out separately on the samples of men and women for individuals' overall satisfaction with the jobs they hold. We find that using the fixed effects approach (which clearly rejects the random effects specification) considerably reduces the number of key explanatory variables, though the impact of central economic factors is the same as in previous studies. Moreover, the determinants of job satisfaction differ considerably between the genders, in particular once individual fixed effects are allowed for.
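Fixed effects ordered logit estimators of this family work by collapsing the ordered response into a binary one at an individual-specific threshold and then applying a conditional (fixed effects) logit. The sketch below uses a simplified cut at each person's own mean rather than the exact Ferrer-i-Carbonell/Frijters rule, and the simulated panel is purely illustrative.

```python
# Sketch of the dichotomisation-plus-conditional-logit idea behind fixed
# effects ordered logit estimators. Simplified variant: cut at the
# within-person mean (not the exact Ferrer-i-Carbonell/Frijters rule);
# the simulated panel is hypothetical.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(1)
n_people, n_waves = 300, 5                     # e.g. ECHP 1995-1999
pid = np.repeat(np.arange(n_people), n_waves)
alpha = np.repeat(rng.normal(size=n_people), n_waves)  # fixed effects
log_wage = rng.normal(size=pid.size)
# Ordered satisfaction on a 1-6 scale, driven by wage plus the fixed effect.
latent = 0.8 * log_wage + alpha + rng.logistic(size=pid.size)
satisfaction = pd.cut(latent, bins=6, labels=False) + 1

panel = pd.DataFrame({"person_id": pid, "log_wage": log_wage,
                      "satisfaction": satisfaction})
# Dichotomise at the within-person mean, sweeping out the fixed effect
# through a person-specific threshold, then fit a conditional logit.
above = (panel["satisfaction"]
         > panel.groupby("person_id")["satisfaction"].transform("mean"))
fit = ConditionalLogit(above.astype(int), panel[["log_wage"]],
                       groups=panel["person_id"]).fit()
print(fit.summary())
```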
Abstract:
Background: Apart from promoting physical recovery and assisting in activities of daily living, a major challenge in stroke rehabilitation is to minimize psychosocial morbidity and to promote the reintegration of stroke survivors into their family and community. The identification of key factors influencing long-term outcome is essential in developing more effective rehabilitation measures for reducing stroke-related morbidity. The aim of this study was to test a theoretical model of predictors of participation restriction which included the direct and indirect effects between psychosocial outcomes, physical outcome, and socio-demographic variables at 12 months after stroke. Methods: Data were collected from 188 stroke survivors at 12 months following their discharge from one of two rehabilitation hospitals in Hong Kong. The settings included patients' homes and residential care facilities. Path analysis was used to test a hypothesized model of participation restriction at 12 months. Results: The path coefficients show functional ability having the largest direct effect on participation restriction (β = 0.51). The results also show that more depressive symptoms (β = -0.27), low state self-esteem (β = 0.20), female gender (β = 0.13), older age (β = -0.11) and living in a residential care facility (β = -0.12) have a direct effect on participation restriction. The explanatory variables accounted for 71% of the variance in participation restriction at 12 months. Conclusion: Identification of stroke survivors at risk of high levels of participation restriction, depressive symptoms and low self-esteem will assist health professionals to devise appropriate rehabilitation interventions that target improving both physical and psychosocial functioning.
Abstract:
Disability following a stroke can impose various restrictions on patients' attempts at participating in life roles. The measurement of social participation, for instance, is important in estimating recovery and assessing quality of care at the community level. Thus, the identification of factors influencing social participation is essential in developing effective measures for promoting the reintegration of stroke survivors into the community. Data were collected from 188 stroke survivors (mean age 71.7 years) 12 months after discharge from a stroke rehabilitation hospital. Of these survivors, 128 (61%) had suffered a first-ever stroke, and 81 (43%) had a right hemisphere lesion. Most (n = 156, 83%) were living in their own home, though 32 (17%) were living in residential care facilities. Path analysis was used to test a hypothesized model of participation restriction which included the direct and indirect effects between social, psychological and physical outcomes and demographic variables. Participation restriction was the dependent variable. Exogenous independent variables were age, functional ability, living arrangement and gender. Endogenous independent variables were depressive symptoms, state self-esteem and social support satisfaction. The path coefficients showed functional ability having the largest direct effect on participation restriction. The results also showed that more depressive symptoms, low state self-esteem, female gender, older age and living in a residential care facility had a direct effect on participation restriction. The explanatory variables accounted for 71% of the variance in participation restriction. Prediction models have empirical and practical applications, such as suggesting important factors to be considered in promoting stroke recovery. The findings suggest that interventions offered over the course of rehabilitation should be aimed at improving functional ability and promoting psychological aspects of recovery. These are likely to help stroke survivors resume or maximize their social participation so that they may fulfill productive and positive life roles.
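A recursive path model of the kind tested in these two stroke studies can be estimated as a chain of OLS regressions on standardised variables, with indirect effects obtained as products of path coefficients. The sketch below is a minimal illustration under that assumption; the variable names mirror the abstracts, but the simulated data are not the study's data.

```python
# Sketch: a recursive path model estimated as a chain of OLS regressions on
# standardised variables, so coefficients are path coefficients (betas).
# Simulated data for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 188  # sample size reported in the abstract
df = pd.DataFrame({
    "function": rng.normal(size=n),
    "age": rng.normal(size=n),
    "female": rng.integers(0, 2, n),
    "residential_care": rng.integers(0, 2, n),
})
df["depression"] = -0.5 * df["function"] + rng.normal(size=n)
df["self_esteem"] = 0.4 * df["function"] - 0.3 * df["depression"] + rng.normal(size=n)
df["participation"] = (0.5 * df["function"] - 0.25 * df["depression"]
                       + 0.2 * df["self_esteem"] + rng.normal(scale=0.5, size=n))

num = ["participation", "function", "depression", "self_esteem", "age"]
df[num] = (df[num] - df[num].mean()) / df[num].std()  # z-scores -> betas

# Mediator equation, then the outcome equation.
m_dep = smf.ols("depression ~ function + age + female", data=df).fit()
m_out = smf.ols("participation ~ function + depression + self_esteem"
                " + age + female + residential_care", data=df).fit()

# Example indirect effect: function -> depression -> participation.
print("indirect:", m_dep.params["function"] * m_out.params["depression"])
print("R-squared of outcome equation:", round(m_out.rsquared, 2))
```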
Abstract:
The typical daily decision-making process of individuals regarding use of the transport system involves mainly three types of decisions: mode choice, departure time choice and route choice. This paper focuses on the mode and departure time choice processes and studies different model specifications for a combined mode and departure time choice model. The paper compares different sets of explanatory variables as well as different model structures to capture the correlation among alternatives and taste variations among commuters. The main hypothesis tested in this paper is that departure time alternatives are also correlated, through the amount of delay. Correlation among different alternatives is confirmed by analyzing different nesting structures as well as error component formulations. Random coefficient logit models confirm the presence of random taste heterogeneity across commuters. Mixed nested logit models are estimated to jointly account for the random taste heterogeneity and the correlation among different alternatives. Results indicate that accounting for random taste heterogeneity as well as inter-alternative correlation improves model performance.
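For readers unfamiliar with the nesting structures mentioned above, the sketch below computes nested logit choice probabilities from the standard closed form, with departure-time alternatives sharing a nest. The utilities, nest layout and common nest scale are illustrative assumptions, not the paper's estimates.

```python
# Sketch: nested logit choice probabilities for departure-time alternatives
# grouped into a nest (e.g. alternatives with similar delay). Utilities and
# the nest layout are made up for illustration.
import numpy as np

def nested_logit_probs(V, nests, mu):
    """V: dict alt -> systematic utility; nests: list of lists of alts;
    mu: within-nest scale (mu > 1 implies correlation inside the nest)."""
    logsums = [np.log(sum(np.exp(mu * V[a]) for a in nest)) for nest in nests]
    denom = sum(np.exp(ls / mu) for ls in logsums)
    probs = {}
    for nest, ls in zip(nests, logsums):
        p_nest = np.exp(ls / mu) / denom           # marginal nest probability
        for a in nest:
            probs[a] = p_nest * np.exp(mu * V[a]) / np.exp(ls)  # conditional
    return probs

V = {"early": -1.2, "on_time": -0.4, "late": -1.0, "transit": -0.9}
print(nested_logit_probs(V, [["early", "on_time", "late"], ["transit"]], mu=2.0))
```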
Abstract:
One major gap in transportation system safety management is the lack of ability to assess the safety ramifications of design changes, both for new road projects and for modifications to existing roads. To fill this need, FHWA and its many partners are developing a safety forecasting tool, the Interactive Highway Safety Design Model (IHSDM). The tool will be used by roadway design engineers, safety analysts, and planners throughout the United States. As such, the statistical models embedded in IHSDM will need to forecast safety impacts under a wide range of roadway configurations and environmental conditions for a wide range of driver populations, and will need to capture elements of driving risk across states. One of the IHSDM algorithms developed by FHWA and its contractors forecasts accidents on rural road segments and rural intersections. The methodological approach is to use predictive models for specific base conditions, with traffic volume information as the sole explanatory variable for crashes, and then to apply regional or state calibration factors and accident modification factors (AMFs) to estimate the impact on accidents of geometric characteristics that differ from the base model conditions. In the majority of past approaches, AMFs are derived from parameter estimates associated with the explanatory variables. A recent study for FHWA used a multistate database to examine in detail the use of the algorithm with the base model-AMF approach, and explored alternative base model forms as well as full models that included nontraffic-related variables and other approaches to estimating AMFs. That research effort is reported here. The results support the IHSDM methodology.
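The base model-AMF prediction chain described above multiplies a traffic-volume-only base estimate by a calibration factor and the product of the applicable AMFs. A minimal sketch, with entirely made-up coefficients and AMF values:

```python
# Sketch of the base-model-plus-AMF prediction: expected crashes =
# base model estimate x calibration factor x product of AMFs.
# All numbers are purely illustrative.
import math

def predicted_crashes(aadt, length_mi, calibration, amfs):
    """Illustrative base model: crashes/yr as a function of traffic volume
    (AADT) and segment length only, as in the base-condition approach."""
    base = math.exp(-8.0) * aadt ** 0.9 * length_mi   # made-up coefficients
    return base * calibration * math.prod(amfs)

# E.g. a segment with narrow lanes (AMF 1.15) and paved shoulders (AMF 0.95):
print(predicted_crashes(aadt=6000, length_mi=1.2,
                        calibration=1.1, amfs=[1.15, 0.95]))
```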
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, exploring additional dispersion functions on an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted for variation in crashes across sites.
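One way to make the varying-dispersion idea concrete is to let the negative binomial dispersion parameter be a log-linear function of covariates and estimate everything by maximum likelihood, as sketched below. The cited study instead used Bayesian MCMC, and the simulated data here are purely illustrative.

```python
# Sketch: negative binomial likelihood in which the dispersion parameter is
# itself a log-linear function of covariates, fitted by maximum likelihood.
# The cited work used Bayesian MCMC; data here are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(theta, y, X_mu, X_disp):
    k = X_mu.shape[1]
    mu = np.exp(X_mu @ theta[:k])               # mean structure
    alpha = np.exp(X_disp @ theta[k:])          # dispersion structure
    r = 1.0 / alpha                             # NB2 "size" parameter
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -ll.sum()

rng = np.random.default_rng(3)
n = 1000
X_mu = np.column_stack([np.ones(n), rng.normal(size=n)])  # e.g. log flows
X_disp = np.ones((n, 1))                                  # constant alpha here
mu_true = np.exp(X_mu @ np.array([0.5, 0.8]))
r_true = 2.0                                              # true alpha = 0.5
y = rng.negative_binomial(r_true, r_true / (r_true + mu_true))

res = minimize(neg_loglik, x0=np.zeros(3), args=(y, X_mu, X_disp))
print(res.x)  # mean coefficients, then log(alpha); expect ~[0.5, 0.8, -0.69]
```

Adding columns to `X_disp` turns this into a test of whether extra-variation is a function of covariates, which is the question the study poses.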
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions within each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
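The low-exposure mechanism for excess zeros is easy to reproduce. The sketch below simulates Poisson trials (Bernoulli trials with unequal, independent probabilities) at many sites and compares the observed share of zero counts with what a single Poisson rate would imply; all parameters are illustrative.

```python
# Sketch of the simulation argument: Poisson trials under low exposure
# generate far more zeros than a single-rate Poisson model implies, with no
# dual-state process involved. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_sites, n_trials = 20000, 200     # sites x vehicles passing in the period

# Heterogeneous, mostly tiny per-trial crash probabilities across sites.
p = np.minimum(rng.gamma(shape=0.1, scale=0.02, size=n_sites), 1.0)
counts = rng.binomial(n_trials, p)  # observed crash counts per site

lam = counts.mean()
print(f"observed zero share  : {np.mean(counts == 0):.3f}")
print(f"Poisson-implied zeros: {np.exp(-lam):.3f}")  # e^-lambda
# The gap between the two lines is the "excess zeros" an analyst would see.
```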
Abstract:
Background, aim, and scope Urban motor vehicle fleets are a major source of particulate matter pollution, especially of ultrafine particles (diameters < 0.1 µm), and exposure to particulate matter has known serious health effects. A considerable body of literature is available on vehicle particle emission factors derived using a wide range of measurement methods for different particle sizes, conducted in different parts of the world. Choosing the most suitable particle emission factors to use in transport modelling and health impact assessments is therefore a very difficult task. The aim of this study was to derive a comprehensive set of tailpipe particle emission factors for different vehicle and road type combinations, covering the full size range of particles emitted, suitable for modelling urban fleet emissions. Materials and methods A large body of data available in the international literature on particle emission factors for motor vehicles derived from measurement studies was compiled and subjected to advanced statistical analysis, to determine the most suitable emission factors to use in modelling urban fleet emissions. Results This analysis resulted in the development of five statistical models which explained 86%, 93%, 87%, 65% and 47% of the variation in published emission factors for particle number, particle volume, PM1, PM2.5 and PM10 respectively. A sixth model for total particle mass was proposed, but no significant explanatory variables were identified in the analysis. From the outputs of these statistical models, the most suitable particle emission factors were selected. This selection was based on examination of the statistical robustness of the model outputs, including consideration of conservative average particle emission factors with the lowest standard errors, narrowest 95% confidence intervals and largest sample sizes, and on the explanatory model variables, which were Vehicle Type (all particle metrics), Instrumentation (particle number and PM2.5), Road Type (PM10), and Size Range Measured and Speed Limit on the Road (particle volume). Discussion A multiplicity of factors needs to be considered in determining emission factors that are suitable for modelling motor vehicle emissions, and this study derived a set of average emission factors suitable for quantifying motor vehicle tailpipe particle emissions in developed countries. Conclusions The comprehensive set of tailpipe particle emission factors presented in this study for different vehicle and road type combinations enables the full size range of particles generated by fleets to be quantified, including ultrafine particles (measured in terms of particle number). These emission factors have particular application for regions which lack the funding to undertake measurements, or have insufficient measurement data upon which to derive emission factors for their region. Recommendations and perspectives In urban areas motor vehicles continue to be a major source of particulate matter pollution and of ultrafine particles. To manage this major pollution source, it is critical that methods are available to quantify the full size range of particles emitted, for use in traffic modelling and health impact assessments.
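A statistical model of the kind described, regressing published emission factors on categorical study characteristics, might look like the following weighted least-squares sketch. The simulated "published estimates", column names and weighting rule are assumptions for illustration, not the study's actual specification.

```python
# Sketch: weighted least squares on log emission factors with categorical
# study characteristics as predictors, echoing the explanatory variables
# named above. The simulated estimates are purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 120  # pretend published emission-factor estimates
studies = pd.DataFrame({
    "vehicle_type": rng.choice(["car", "bus", "truck"], n),
    "road_type": rng.choice(["urban", "freeway"], n),
    "sample_size": rng.integers(5, 200, n),
})
base = studies["vehicle_type"].map({"car": -4.0, "bus": -2.5, "truck": -2.0})
studies["log_ef"] = base + rng.normal(scale=0.5, size=n)  # log g/km, toy

# Weight estimates by sample size so larger studies dominate the averages.
model = smf.wls("log_ef ~ C(vehicle_type) + C(road_type)", data=studies,
                weights=studies["sample_size"]).fit()
print(model.params)
# Fleet-average factors follow by back-transforming fitted values for each
# vehicle/road combination of interest.
```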
Abstract:
Numerous econometric models have been proposed for forecasting property market performance, but limited success has been achieved in finding a reliable and consistent model to predict property market movements over a five- to ten-year timeframe. This research focuses on office rental growth forecasts and reviews many of the office rent models that have evolved over the past 20 years. A model by DiPasquale and Wheaton was selected for testing in the Brisbane, Australia office market. This adaptation did not yield explanatory variables that could assist in developing a reliable, predictive model of office rental growth. In light of this result, the paper suggests a system dynamics framework that includes an econometric model based on historical data as well as user-input guidance for the primary variables. The rent forecast outputs would be assessed against market expectations, with probability profiling undertaken for use in simulation exercises. The paper concludes with ideas for ongoing research.
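One common econometric ingredient of such rent models is a rent adjustment equation in which rental growth responds to the gap between the natural and actual vacancy rates. The sketch below illustrates that mechanism with made-up coefficients; it is not an estimate for the Brisbane market or the DiPasquale-Wheaton specification itself.

```python
# Sketch of a rent adjustment equation: real rent growth responds to the
# gap between the natural and the actual vacancy rate. Coefficients are
# illustrative, not estimates.
def simulate_rents(rent0, vacancy_path, natural_vacancy=0.08, alpha=0.4):
    """Return a rent path given a vacancy-rate path (annual steps)."""
    rents = [rent0]
    for v in vacancy_path:
        growth = alpha * (natural_vacancy - v)   # tight market -> rent growth
        rents.append(rents[-1] * (1.0 + growth))
    return rents

# A market tightening from 12% to 5% vacancy over four years:
print(simulate_rents(500.0, [0.12, 0.10, 0.07, 0.05]))
```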
Abstract:
Durland and McCurdy [Durland, J.M., McCurdy, T.H., 1994. Duration-dependent transitions in a Markov model of US GNP growth. Journal of Business and Economic Statistics 12, 279–288] investigated the issue of duration dependence in US business cycle phases using a Markov regime-switching approach, introduced by Hamilton [Hamilton, J., 1989. A new approach to the analysis of time series and the business cycle. Econometrica 57, 357–384] and extended to the case of variable transition parameters by Filardo [Filardo, A.J., 1994. Business cycle phases and their transitional dynamics. Journal of Business and Economic Statistics 12, 299–308]. In Durland and McCurdy's model, duration alone was used as an explanatory variable of the transition probabilities. They found that recessions were duration dependent whilst expansions were not. In this paper, we explicitly incorporate the widely accepted US business cycle phase change dates as determined by the NBER, and use a state-dependent multinomial logit modelling framework. The model incorporates both duration and movements in two leading indexes – one designed to have a short lead (SLI) and the other designed to have a longer lead (LLI) – as potential explanatory variables. We find that current duration is not only a significant determinant of the transition out of recessions, but that there is also some evidence that it is weakly significant in the case of expansions. Furthermore, we find that SLI has more informational content for the termination of recessions whilst LLI does so for expansions.
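In the spirit of the state-dependent transition models discussed above, exit from a recession can be modelled as a logit in current phase duration and a leading index. The sketch below is a simplified two-outcome version (the paper uses a multinomial framework), and the simulated monthly data are hypothetical.

```python
# Sketch: exit from a recession as a logit in current duration and a
# short-lead index change; a simplified two-outcome stand-in for the
# state-dependent multinomial logit described above. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400  # recession-month observations
rec = pd.DataFrame({
    "duration": rng.integers(1, 24, n),          # months in recession so far
    "sli_change": rng.normal(size=n),            # short-lead index movement
})
# Toy duration dependence: longer spells and rising SLI raise exit odds.
p_exit = 1 / (1 + np.exp(-(-3.0 + 0.12 * rec["duration"] + 0.8 * rec["sli_change"])))
rec["exit"] = rng.binomial(1, p_exit)

fit = smf.logit("exit ~ duration + sli_change", data=rec).fit()
print(fit.params)
# A positive, significant duration coefficient is the signature of
# duration dependence in the termination of recessions.
```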
Abstract:
Childhood sun exposure has been associated with increased risk of developing melanoma later in life. Sunscreen, children's preferred method of sun protection, has been shown to reduce skin cancer risk. However, the effectiveness of sunscreen is largely dependent on user compliance, such as the thickness of application. To achieve the labelled sun protection factor (SPF), sunscreen must be applied at a thickness of 2 mg/cm2. It has been demonstrated that adults tend to apply less than half of the recommended 2 mg/cm2. This was the first study to measure the thickness at which children apply sunscreen. We recruited 87 primary-school-aged children (n=87, median age 8.7, range 5-12 years) from seven state schools within one Brisbane education district (32% consent rate). The children were supplied with sunscreen in three dispenser types (pump, squeeze and roll-on) and were asked to use each for one week. We measured the weight of the sunscreen before and after use, and calculated each child's body surface area (based on height and weight) and the area to which sunscreen was applied (based on children's self-reported body coverage of application). Combined, these measurements yielded an average thickness of sunscreen application, our main outcome measure. We asked parents to complete a self-administered questionnaire which captured information about potential explanatory variables. Children applied sunscreen at a median thickness of 0.48 mg/cm2, significantly less than the recommended 2 mg/cm2 (p<0.001). When using the roll-on dispenser (median 0.22 mg/cm2), children applied a significantly thinner layer than with the pump (median 0.75 mg/cm2, p<0.001) and squeeze (median 0.57 mg/cm2, p<0.001) dispensers. School grade (1-7) was significantly associated with thickness of application (p=0.032), with children in the youngest grades applying the most. Other variables significantly associated with the outcome included the number of siblings (p=0.001), household annual income (p<0.001), and the number of lifetime sunburns the child had experienced (p=0.007). This work is the first to measure children's sunscreen application thickness and demonstrates that, regardless of their age or the type of dispenser they use, children do not apply enough sunscreen to achieve the advertised SPF. It is envisaged that this study will assist in the formulation of recommendations for future research, practice and policy aimed at improving childhood sun protection and reducing skin cancer incidence in the future.
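The outcome measure reduces to simple arithmetic: sunscreen mass used divided by the skin area covered. The sketch below assumes the Mosteller body-surface-area formula, which the abstract does not specify, and the inputs are illustrative.

```python
# Sketch of the outcome computation: application thickness as sunscreen
# mass used divided by the skin area covered. The Mosteller BSA formula is
# an assumption (the abstract only says "based on height and weight").
import math

def application_thickness(mass_used_g, height_cm, weight_kg, coverage_frac):
    """Return mean application thickness in mg/cm^2."""
    bsa_m2 = math.sqrt(height_cm * weight_kg / 3600.0)  # Mosteller BSA
    covered_cm2 = bsa_m2 * 1e4 * coverage_frac          # m^2 -> cm^2
    return mass_used_g * 1000.0 / covered_cm2           # g -> mg

# A 130 cm, 28 kg child using 1 g of sunscreen on ~20% of the body:
print(f"{application_thickness(1.0, 130, 28, 0.20):.2f} mg/cm^2")
```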
Abstract:
This paper seeks to identify and quantify sources of the lagging productivity in Singapore's retail sector, as reported in the Economic Strategies Committee 2010 report. A two-stage analysis is adopted. In the first stage, the Malmquist productivity index is employed, which provides measures of productivity change, technological change and efficiency change. In the second stage, technical efficiency estimates are regressed against explanatory variables using a truncated regression model. Sources of technical efficiency were attributed to the quality of workers, while product assortment and competition negatively impacted efficiency.
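The first-stage efficiency measurement behind a Malmquist analysis can be sketched as an output-oriented, constant-returns-to-scale DEA linear programme solved once per firm, as below. The toy input/output data are made up; a real application would use industry capital, labour and output series.

```python
# Sketch: output-oriented DEA under constant returns to scale, solved as a
# linear programme per unit. Toy data for illustration only.
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o):
    """Technical efficiency of unit o (1 = on the frontier). X: (n, inputs),
    Y: (n, outputs). LP: max phi s.t. lam'X <= x_o, lam'Y >= phi * y_o."""
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[0] = -1.0                                      # maximise phi
    A_in = np.hstack([np.zeros((X.shape[1], 1)), X.T])   # input usage rows
    A_out = np.hstack([Y[o][:, None], -Y.T])             # output scaling rows
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([X[o], np.zeros(Y.shape[1])])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return 1.0 / res.x[0]                            # efficiency = 1/phi

X = np.array([[2.0], [3.0], [5.0]])   # single input (e.g. labour)
Y = np.array([[4.0], [5.0], [6.0]])   # single output (e.g. value added)
print([round(dea_output_efficiency(X, Y, o), 3) for o in range(3)])
# Second stage: regress these scores on explanatory variables
# (e.g. workforce quality, competition) with a truncated regression.
```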
Abstract:
Background: In-depth investigations of crash risks inform prevention and safety promotion programmes. Traditionally, such investigations are conducted using exposure-controlled or case-control methodologies. However, these studies need either observational data for control cases or exogenous exposure data, such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection effort on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group for the same set of explanatory variables. The combination of these two techniques therefore provides relative measures of crash risk under various roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interactive factors shows that poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way-violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the quasi-induced exposure technique is a promising methodology and can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
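The quasi-induced exposure component can be illustrated in a few lines: in two-vehicle crashes the not-at-fault party is treated as a random sample of the road users present, so a group's at-fault share divided by its not-at-fault share gives its relative crash involvement. The toy data below are invented.

```python
# Sketch of quasi-induced exposure: the not-at-fault party in two-vehicle
# crashes proxies exposure, so relative involvement = at-fault share /
# not-at-fault share for each road user group. Invented toy data.
import pandas as pd

crashes = pd.DataFrame({
    "road_user": ["motorcycle"] * 4 + ["car"] * 4,
    "at_fault":  [1, 1, 1, 0, 1, 0, 0, 0],
})

at_fault = crashes[crashes["at_fault"] == 1]["road_user"].value_counts(normalize=True)
not_fault = crashes[crashes["at_fault"] == 0]["road_user"].value_counts(normalize=True)
# Ratios above 1 indicate over-involvement relative to exposure.
print((at_fault / not_fault).rename("relative_involvement_ratio"))
```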
Abstract:
This paper seeks to explain the lagging productivity in Singapore's manufacturing sector noted in the Economic Strategies Committee Report 2010. Two methods are employed: the Malmquist productivity index, to measure total factor productivity change, and Simar and Wilson's (J Econ, 136:31–64, 2007) bootstrapped truncated regression approach. In the first stage, nonparametric data envelopment analysis is used to measure technical efficiency. To quantify the economic drivers underlying inefficiency, the second stage employs a bootstrapped truncated regression in which bias-corrected efficiency estimates are regressed against explanatory variables. The findings reveal that growth in total factor productivity was attributable to efficiency change, with no technical progress. Most industries were technically inefficient throughout the period, except for 'Pharmaceutical Products'. Sources of efficiency were attributed to the quality of workers and flexible work arrangements, while incessant use of foreign workers lowered efficiency.
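For completeness, the Malmquist index referred to in the last two abstracts decomposes total factor productivity change into efficiency change and technical change from four distance-function values (own- and cross-period). The sketch below applies the standard Färe et al. decomposition; the distance values would come from DEA runs like the first-stage sketch above, and the numbers here are made up.

```python
# Sketch: Malmquist TFP index decomposed into efficiency change and
# technical change, from four distance-function values. Inputs are made up.
import math

def malmquist(d_t_xt, d_t_xt1, d_t1_xt, d_t1_xt1):
    """d_a_xb: distance of the period-b observation to the period-a frontier."""
    eff_change = d_t1_xt1 / d_t_xt                     # catching up
    tech_change = math.sqrt((d_t_xt1 / d_t1_xt1)       # frontier shift
                            * (d_t_xt / d_t1_xt))
    return eff_change * tech_change, eff_change, tech_change

tfp, eff, tech = malmquist(0.80, 0.95, 0.78, 0.90)
print(f"TFP change {tfp:.3f} = efficiency {eff:.3f} x technology {tech:.3f}")
```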