363 results for Statistical method
Abstract:
Fractional Fokker–Planck equations have been used to model several physical situations that present anomalous diffusion. In this paper, a class of time- and space-fractional Fokker–Planck equations (TSFFPE), which involve the Riemann–Liouville time-fractional derivative of order 1 − α (α ∈ (0, 1)) and the Riesz space-fractional derivative (RSFD) of order μ (μ ∈ (1, 2)), are considered. The solution of TSFFPE is important for describing the competition between subdiffusion and Lévy flights. However, effective numerical methods for solving TSFFPE are still in their infancy. We present three computationally efficient numerical methods to deal with the RSFD, and approximate the Riemann–Liouville time-fractional derivative using the Grünwald method. The TSFFPE is then transformed into a system of ordinary differential equations (ODE), which is solved by the fractional implicit trapezoidal method (FITM). Finally, numerical results are given to demonstrate the effectiveness of these methods. These techniques can also be applied to solve other types of fractional partial differential equations.
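The Grünwald method mentioned in this abstract approximates the Riemann–Liouville derivative by a weighted sum over the function's history. A minimal sketch of the standard Grünwald–Letnikov weights, checked against the known fractional derivative of x² (the step size and test function are illustrative choices, not taken from the paper):

```python
import math

def gl_weights(alpha, n):
    # w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k, i.e. (-1)^k * C(alpha, k)
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f, alpha, x, h):
    # D^alpha f(x) ~= h^(-alpha) * sum_k w_k * f(x - k*h)
    n = round(x / h)
    w = gl_weights(alpha, n)
    return sum(wk * f(x - k * h) for k, wk in enumerate(w)) / h**alpha

# Riemann–Liouville derivative of x^2 is Gamma(3)/Gamma(3 - alpha) * x^(2 - alpha)
alpha, x = 0.5, 1.0
approx = gl_derivative(lambda t: t * t, alpha, x, h=1e-3)
exact = math.gamma(3) / math.gamma(3 - alpha) * x ** (2 - alpha)
```

The scheme is first-order accurate in the step size h; weights of this kind underlie the discretisation of the time-fractional term described in the abstract.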
Abstract:
The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for its hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized emission rates for various motor vehicle groups as a function of the conditions under which the vehicles are operating. The validation of aggregate measurements, such as speed and acceleration profiles, is performed on an independent data set using three statistical criteria. The MEASURE algorithms have been shown to provide significant improvements in both average emission estimates and explanatory power over earlier models for pollutants across almost every operating cycle tested.
Abstract:
Crash prediction models are used for a variety of purposes including forecasting the expected future performance of various transportation system segments with similar traits. The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes compared to other segments in the transportation system. The effects of left-turn lanes at intersections in particular have seen mixed results in the literature. Some researchers have found that left-turn lanes are beneficial to safety while others have reported detrimental effects on safety. This inconsistency is not surprising given that the installation of left-turn lanes is often endogenous, that is, influenced by crash counts and/or traffic volumes. Endogeneity creates problems in econometric and statistical models and is likely to account for the inconsistencies reported in the literature. This paper reports on a limited-information maximum likelihood (LIML) estimation approach to compensate for endogeneity between left-turn lane presence and angle crashes. The effects of endogeneity are mitigated using the approach, revealing the unbiased effect of left-turn lanes on crash frequency for a dataset of Georgia intersections. The research shows that without accounting for endogeneity, left-turn lanes ‘appear’ to contribute to crashes; however, when endogeneity is accounted for in the model, left-turn lanes reduce angle crash frequencies as expected by engineering judgment. Other endogenous variables may lurk in crash models as well, suggesting that the method may be used to correct simultaneity problems with other variables and in other transportation modeling contexts.
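The paper's LIML estimator addresses the same problem that instrumental-variables methods address in general: a regressor correlated with the error term. As a hedged illustration (not the authors' model), the sketch below simulates an endogenous "left-turn lane" variable and shows that naive OLS flips the sign of its effect, while a plain two-stage least squares fit recovers it; all variables and coefficients are invented:

```python
import numpy as np

# Illustrative simulation: an endogenous regressor x (think "left-turn lane
# presence") is driven by the same unobservable u that drives crashes y,
# while z is an exogenous instrument. The true effect of x on y is -0.5.
rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
x = z + u + rng.normal(size=n)               # endogenous regressor
y = -0.5 * x + 2.0 * u + rng.normal(size=n)  # outcome

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
naive = ols(np.column_stack([ones, x]), y)[1]   # biased: comes out positive

# two-stage least squares: regress x on z, then y on the fitted values
first = ols(np.column_stack([ones, z]), x)
x_hat = np.column_stack([ones, z]) @ first
iv = ols(np.column_stack([ones, x_hat]), y)[1]  # close to the true -0.5
```

The naive estimate's sign flip mirrors the paper's observation that left-turn lanes 'appear' to contribute to crashes until endogeneity is addressed; LIML itself differs from this 2SLS sketch, notably in its behaviour under weak instruments.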
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions and the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics.
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e., the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted for variation in crashes across sites.
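The extra-Poisson variation this study models can be reproduced in a few lines: when site-level expected crash counts vary (e.g. because of omitted covariates), the marginal counts are over-dispersed, with variance μ + μ²/k rather than μ. A simulation sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites = 50000
k, mu = 2.0, 3.0   # gamma shape (inverse dispersion) and overall mean

# gamma-distributed site means mixed with Poisson counts give a
# negative binomial marginal: var = mu + mu^2 / k
lam = rng.gamma(shape=k, scale=mu / k, size=n_sites)
crashes = rng.poisson(lam)

sample_mean = crashes.mean()   # close to mu = 3
sample_var = crashes.var()     # close to mu + mu^2/k = 7.5, over-dispersed
```

Modeling the dispersion parameter, as the study does, amounts to letting k itself depend on site covariates rather than holding it fixed.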
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in practice. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
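The "excess zeros" argument can be reproduced in a few lines: heterogeneous Poisson means with low exposure yield more zeros than a single Poisson matched to the overall mean, with no dual-state process involved. A sketch using an arbitrary lognormal mean distribution (chosen for illustration only, not the paper's experimental design):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100000
# skewed site means with low average exposure
mu = rng.lognormal(mean=-1.0, sigma=1.0, size=n)
counts = rng.poisson(mu)

zero_frac = (counts == 0).mean()
# a single Poisson matched to the overall mean predicts fewer zeros,
# so the heterogeneous data appear to contain "excess" zeros
matched_zero_frac = float(np.exp(-mu.mean()))
```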
Abstract:
Now in its second edition, this book describes tools that are commonly used in transportation data analysis. The first part of the text provides statistical fundamentals while the second part presents continuous dependent variable models. With a focus on count and discrete dependent variable models, the third part features new chapters on mixed logit models, logistic regression, and ordered probability models. The last section provides additional coverage of Bayesian statistical modeling, including Bayesian inference and Markov chain Monte Carlo methods. Data sets are available online to use with the modeling techniques discussed.
Abstract:
This paper outlines a method of constructing narratives about an individual’s self-efficacy. Self-efficacy is defined as “people’s judgments of their capabilities to organise and execute courses of action required to attain designated types of performances” (Bandura, 1986, p. 391), and as such represents a useful construct for thinking about personal agency. Social cognitive theory provides the theoretical framework for understanding the sources of self-efficacy, that is, the elements that contribute to a sense of self-efficacy. The narrative approach adopted offers an alternative to traditional, positivist psychology, characterised by a preoccupation with measuring psychological constructs (like self-efficacy) by means of questionnaires and scales. It is argued that these instruments yield scores which are somewhat removed from the lived experience of the person—respondent or subject—associated with the score. The method involves a cyclical and iterative process using qualitative interviews to collect data from participants – four mature aged university students. The method builds on a three-interview procedure designed for life history research (Dolbeare & Schuman, cited in Seidman, 1998). This is achieved by introducing reflective homework tasks, as well as written data generated by research participants, as they are guided in reflecting on those experiences (including behaviours, cognitions and emotions) that constitute a sense of self-efficacy, in narrative and by narrative. The method illustrates how narrative analysis is used “to produce stories as the outcome of the research” (Polkinghorne, 1995, p.15), with detail and depth contributing to an appreciation of the ‘lived experience’ of the participants. The method is highly collaborative, with narratives co-constructed by researcher and research participants. 
The research outcomes suggest an enhanced understanding of self-efficacy contributes to motivation, application of effort and persistence in overcoming difficulties. The paper concludes with an evaluation of the research process by the students who participated in the author’s doctoral study.
Abstract:
Proper application of sunscreen is essential for it to be effective as a public health strategy for skin cancer prevention. Insufficient application is common among sunbathers; it results in decreased sun protection and may therefore lead to increased UV damage of the skin. However, no objective measure of sunscreen application thickness (SAT) is currently available for field-based use. We present a method to detect SAT on human skin for determining the amount of sunscreen applied, enabling comparisons with manufacturer recommendations. Using a skin swabbing method and subsequent spectrophotometric analysis, SAT on skin (in mg sunscreen per cm2 of skin area) was derived through the concentration–absorption relationship of sunscreen determined in laboratory experiments. The analysis differentiated SATs between 0.25 and 4 mg cm−2 and showed a small but significant decrease in concentration over time postapplication. A field study was performed in which the heterogeneity of sunscreen application could be investigated. The proposed method is a low-cost, noninvasive method for the determination of SAT on skin and can be used as a valid tool in field- and population-based studies.
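The concentration–absorption step described in this abstract amounts to inverting a laboratory calibration curve. A hypothetical sketch (all numbers invented; the actual calibration may be nonlinear or wavelength-specific):

```python
import numpy as np

# hypothetical calibration data: absorbance measured from swab extracts at
# known application thicknesses, assumed linear over the working range
sat_known = np.array([0.25, 0.5, 1.0, 2.0, 4.0])        # mg/cm^2
absorbance = np.array([0.11, 0.21, 0.40, 0.82, 1.59])   # arbitrary units

slope, intercept = np.polyfit(sat_known, absorbance, 1)

def sat_from_absorbance(a):
    # invert the calibration line to estimate application thickness
    return (a - intercept) / slope

estimate = sat_from_absorbance(0.40)   # close to 1.0 mg/cm^2
```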
Abstract:
Over the past ten years, minimally invasive plate osteosynthesis (MIPO) for the fixation of long bone fractures has become a clinically accepted method with good outcomes when compared to the conventional open surgical approach (open reduction internal fixation, ORIF). However, while MIPO offers some advantages over ORIF, it also has some significant drawbacks, such as a more demanding surgical technique and increased radiation exposure. No clinical or experimental study to date has shown a difference between the healing outcomes in fractures treated with the two surgical approaches. Therefore, a novel, standardised severe trauma model in sheep has been developed and validated in this project to examine the effect of the two surgical approaches on soft tissue and fracture healing. Twenty-four sheep were subjected to severe soft tissue damage and a complex distal femur fracture. The fractures were initially stabilised with an external fixator. After five days of soft tissue recovery, internal fixation with a plate was applied, randomised to either MIPO or ORIF. Within the first fourteen days, the soft tissue damage was monitored locally with a compartment pressure sensor and systemically by blood tests. Fracture healing progress was assessed fortnightly by X-rays. The sheep were sacrificed in two groups after four and eight weeks, and CT scans and mechanical testing were performed. Soft tissue monitoring showed significantly higher postoperative creatine kinase and lactate dehydrogenase values in the ORIF group compared to MIPO. After four weeks, the torsional stiffness was significantly higher in the MIPO group (p=0.018) compared to the ORIF group. The torsional strength also showed increased values for the MIPO technique (p=0.11). The measured total mineralised callus volumes were slightly higher in the ORIF group.
However, a newly developed morphological callus bridging score showed significantly higher values for the MIPO technique (p=0.007), with a high correlation to the mechanical properties (R2=0.79). After eight weeks, the same trends continued, but without statistical significance. In summary, this clinically relevant study, using the newly developed severe trauma model in sheep, clearly demonstrates that the minimally invasive technique minimises additional soft tissue damage and improves fracture healing in the early stage compared to the open surgical approach.
Abstract:
Extensive groundwater withdrawal has resulted in a severe seawater intrusion problem in the Gooburrum aquifers at Bundaberg, Queensland, Australia. Better management strategies can be implemented by understanding the seawater intrusion processes in those aquifers. To study the seawater intrusion process in the region, a two-dimensional density-dependent, saturated and unsaturated flow and transport computational model is used. The model consists of a coupled system of two non-linear partial differential equations. The first equation describes the flow of a variable-density fluid, and the second equation describes the transport of dissolved salt. A two-dimensional control volume finite element model is developed for simulating the seawater intrusion into the heterogeneous aquifer system at Gooburrum. The simulation results provide a realistic mechanism by which to study the convoluted transport phenomena evolving in this complex heterogeneous coastal aquifer.
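As a drastically simplified, hypothetical analogue of the salt-transport equation in the coupled system (ignoring density coupling, unsaturated flow and the 2-D control-volume finite-element discretisation the model actually uses), a 1-D explicit finite-volume advection–dispersion step can be sketched as:

```python
import numpy as np

# assumed parameters for a toy 1-D domain (illustrative only)
nx, L = 100, 1.0
dx = L / nx
v, D = 0.5, 1e-3                            # fixed velocity, dispersion
dt = 0.4 * min(dx / v, dx * dx / (2 * D))   # stable explicit step

c = np.zeros(nx)
c[0] = 1.0   # Dirichlet boundaries: seawater (c=1) left, fresh (c=0) right

for _ in range(500):
    # upwind advective flux plus dispersive flux at each interior interface
    flux = v * c[:-1] - D * (c[1:] - c[:-1]) / dx
    c[1:-1] += dt / dx * (flux[:-1] - flux[1:])   # conservative update
```

In the real model the velocity field is not fixed: it comes from the variable-density flow equation, which is what couples the two non-linear PDEs described above.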
Abstract:
In a seminal data mining article, Leo Breiman [1] argued that to develop effective predictive classification and regression models, we need to move away from sole dependency on statistical algorithms and embrace a wider toolkit of modeling algorithms that includes data mining procedures. Nevertheless, many researchers still rely solely on statistical procedures when undertaking data modeling tasks; this sole reliance has led to the development of irrelevant theory and questionable research conclusions ([1], p. 199). We outline initiatives that the HPC & Research Support group is undertaking to engage researchers with data mining tools and techniques, including a new range of seminars, workshops, and one-on-one consultations covering data mining algorithms, the relationship between data mining and the research cycle, and the limitations and problems of these new algorithms. Organisational limitations and restrictions on these initiatives are also discussed.
Abstract:
This study investigates the application of local search methods to the railway junction traffic conflict-resolution problem, with the objective of attaining a quick and reasonable solution. A procedure based on local search relies on finding a better solution than the current one by searching in the current solution's neighbourhood. The structure of the neighbourhood is therefore very important to an efficient local search procedure. In this paper, the formulation of the structure of the solution, which is the right-of-way sequence assignment, is first described. Two new neighbourhood definitions are then proposed and the performance of the corresponding local search procedures is evaluated by simulation. It is shown that they provide similar results but can be used to handle different traffic conditions and system requirements.
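The paper's neighbourhoods are defined on right-of-way sequences; as a generic illustration of the local-search template it builds on (not the authors' neighbourhood definitions), the sketch below runs greedy first-improvement descent with a simple swap neighbourhood on a toy train-sequencing cost, with all numbers invented:

```python
import itertools

def local_search(seq, cost, neighbours):
    # greedy first-improvement descent: move to a better neighbour
    # until none exists (a local optimum for this neighbourhood)
    current, best = list(seq), cost(seq)
    improved = True
    while improved:
        improved = False
        for cand in neighbours(current):
            if cost(cand) < best:
                current, best, improved = cand, cost(cand), True
                break
    return current, best

def swap_neighbours(seq):
    # one illustrative neighbourhood: exchange any two positions
    for i, j in itertools.combinations(range(len(seq)), 2):
        cand = seq[:]
        cand[i], cand[j] = cand[j], cand[i]
        yield cand

# toy cost: weighted completion times of three trains through a junction
weights = [3, 1, 2]   # delay penalties (invented)
proc = [2, 4, 1]      # junction occupation times (invented)

def cost(order):
    t, total = 0, 0
    for train in order:
        t += proc[train]
        total += weights[train] * t
    return total

best_seq, best_cost = local_search([0, 1, 2], cost, swap_neighbours)
```

Changing `swap_neighbours` is exactly the kind of design choice the paper evaluates: different neighbourhood structures trade solution quality against search speed under different traffic conditions.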
Abstract:
This overview focuses on the application of chemometrics techniques for the investigation of soils contaminated by polycyclic aromatic hydrocarbons (PAHs) and metals, because these two important and very diverse groups of pollutants are ubiquitous in soils. The salient features of various studies carried out in the micro- and recreational environments of humans are highlighted in the context of the various multivariate statistical techniques available across discipline boundaries that have been effectively used in soil studies. Particular attention is paid to techniques employed in the geosciences that may be effectively utilized for environmental soil studies; classical multivariate approaches that may be used in isolation or as complementary methods to these are also discussed. Chemometrics techniques widely applied in atmospheric studies for identifying sources of pollutants or for determining the importance of contaminant source contributions to a particular site have seen little use in soil studies, but may be effectively employed in such investigations. Suitable programs are also available for suggesting mitigating measures in cases of soil contamination, and these are also considered. Specific techniques reviewed include pattern recognition techniques such as Principal Components Analysis (PCA), Fuzzy Clustering (FC) and Cluster Analysis (CA); geostatistical tools include variograms, Geographical Information Systems (GIS), contour mapping and kriging; source identification and contribution estimation methods reviewed include Positive Matrix Factorisation (PMF) and Principal Component Analysis on Absolute Principal Component Scores (PCA/APCS). Mitigating measures to limit or eliminate pollutant sources may be suggested through the use of ranking analysis and multi-criteria decision-making (MCDM) methods.
These methods are mainly represented in this review by studies employing the Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and its associated graphic output, Geometrical Analysis for Interactive Aid (GAIA).
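Of the techniques listed, PCA is the simplest to sketch. A minimal implementation via SVD on autoscaled data, applied to an invented "samples × analytes" matrix with two correlated PAH-like variables and one independent metal-like variable:

```python
import numpy as np

def pca(X, n_components):
    # PCA via SVD of the standardised data matrix (autoscaling, common in
    # chemometrics so analytes on different scales are comparable)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:n_components].T
    explained = s**2 / np.sum(s**2)
    return scores, Vt[:n_components], explained[:n_components]

# invented data: two correlated PAH-like columns, one independent column
rng = np.random.default_rng(3)
pah = rng.normal(size=100)
X = np.column_stack([
    pah + 0.1 * rng.normal(size=100),
    pah + 0.1 * rng.normal(size=100),
    rng.normal(size=100),
])

scores, loadings, explained = pca(X, 2)   # PC1 captures the shared PAH signal
```

In a source-apportionment setting, clusters in the score plot group similar samples, while the loadings indicate which analytes drive each component.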
Abstract:
There is a need for educational frameworks for computer ethics education. This discussion paper presents an approach to developing students’ moral sensitivity, an awareness of morally relevant issues, in project-based learning (PjBL). The proposed approach is based on a study of IT professionals’ levels of awareness of ethics. These levels are labelled My world, The corporate world, A shared world, The client’s world and The wider world. We give recommendations for how instructors may stimulate students’ thinking with the levels and how the levels may be taken into account in managing a project course and in an IS department. Limitations of the recommendations are assessed and issues for discussion are raised.
Abstract:
There is a renaissance of interest in public service motivation in public management research. Moynihan and Pandey (2007) assert that public service motivation (PSM) has significant practical relevance as it deals with the relationship between motivation and the public interest. There is a need to explore employee needs generated by public service motivation in order to attract and retain a high-calibre cadre of public servants (Gabris & Simo, 1995). Such exploration is particularly important beyond the American context, which has dominated the literature to date (Taylor, 2007; Vandenabeele, Scheepers, & Hondeghem, 2006; Vandenabeele & Van de Walle, 2008).