Abstract:
In an open railway access market, the provision of railway infrastructure and the provision of train services are separated and independent. Negotiations between the track owner and train service providers are thus required to allocate track capacity and formulate service timetables, in which each party, i.e. a stakeholder, applies intelligence gained from previous negotiation experience to obtain favourable terms and conditions for track access. In order to analyse the realistic interacting behaviour among the stakeholders in open railway access market schedule negotiations, intelligent learning capability should be included in the behaviour modelling. This paper presents a reinforcement learning approach to modelling this intelligent negotiation behaviour. The effectiveness of incorporating learning capability in the stakeholder negotiation behaviour is then demonstrated through simulation.
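The abstract does not specify the learning scheme, so the following is only a rough sketch of how a negotiating stakeholder could be given learning capability with one-step Q-learning; the negotiation moves, reward, and parameters are hypothetical placeholders rather than the paper's formulation.

```python
# Minimal Q-learning sketch of a negotiating stakeholder agent.
# States, actions, and rewards are hypothetical, not the paper's formulation.
import random
from collections import defaultdict

ACTIONS = ["concede", "hold", "counter_offer"]   # hypothetical negotiation moves
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2            # learning rate, discount, exploration

Q = defaultdict(float)                           # Q[(state, action)] -> expected payoff

def choose_action(state):
    """Epsilon-greedy choice over negotiation moves."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update after observing the counterpart's response."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Over repeated negotiation rounds the table Q comes to encode which moves earned favourable access terms in each situation, which is the kind of experience-driven behaviour the abstract refers to.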
Abstract:
Safety at roadway intersections is of significant interest to transportation professionals due to the large number of intersections in transportation networks, the complexity of traffic movements at these locations that leads to large numbers of conflicts, and the wide variety of geometric and operational features that define them. A variety of collision types, including head-on, sideswipe, rear-end, and angle crashes, occur at intersections. While intersection crash totals may not reveal a site deficiency, over-representation of a specific crash type may reveal otherwise undetected deficiencies. Thus, there is a need to be able to model the expected frequency of crashes by collision type at intersections to enable the detection of problems and the implementation of effective design strategies and countermeasures. Statistically, it is important to consider modeling collision type frequencies simultaneously to account for the possibility of common unobserved factors affecting crash frequencies across crash types. In this paper, a simultaneous equations model of crash frequencies by collision type is developed and presented using crash data for rural intersections in Georgia. The model estimation results support the notion of the presence of significant common unobserved factors across crash types, although the impact of these factors on parameter estimates is found to be rather modest.
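As a loose illustration of why joint modelling matters here, the sketch below fits separate negative binomial models by collision type and inspects the correlation of their residuals; strong correlations point to common unobserved factors and motivate a simultaneous-equations specification. It is a simplified stand-in for the paper's estimator, and the file and column names are hypothetical.

```python
# Fit one negative binomial model per collision type and correlate residuals.
# Simplified stand-in for a simultaneous-equations count model; data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("intersections.csv")   # hypothetical: one row per rural intersection

resid = {}
for t in ["angle", "rear_end", "sideswipe", "head_on"]:
    fit = smf.glm(
        f"{t}_crashes ~ np.log(major_aadt) + np.log(minor_aadt) + skew_angle",
        data=df,
        family=sm.families.NegativeBinomial(alpha=1.0),
    ).fit()
    resid[t] = fit.resid_pearson

# Large off-diagonal entries suggest shared unobserved factors across crash types.
print(pd.DataFrame(resid).corr())
```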
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how the “excess” zeros frequently observed in crash count data arise. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
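Since the simulation argument is central here, the following self-contained sketch reproduces its flavour: each site experiences independent Bernoulli trials with small, unequal probabilities (Poisson trials), and low exposure plus across-site heterogeneity alone yield a larger share of zeros than a single Poisson at the overall mean predicts, with no dual-state mechanism involved. All parameter values are illustrative, not those used in the study.

```python
# Illustrative simulation: "excess" zeros from low exposure and heterogeneity,
# not from a dual-state (safe/unsafe) process. Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
n_sites = 1000
exposure = rng.integers(50, 500, size=n_sites)        # trials (e.g., entering vehicles) per site
mean_p = rng.uniform(1e-4, 5e-4, size=n_sites)        # site-specific mean crash probability

counts = []
for n, p in zip(exposure, mean_p):
    p_trials = rng.uniform(0.5 * p, 1.5 * p, size=n)   # unequal trial probabilities (Poisson trials)
    counts.append(rng.binomial(1, p_trials).sum())
counts = np.asarray(counts)

print("observed share of zero-crash sites:", (counts == 0).mean())
print("zero share implied by one Poisson at the overall mean:", np.exp(-counts.mean()))
```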
Abstract:
At least two important transportation planning activities rely on planning-level crash prediction models. One is motivated by the Transportation Equity Act for the 21st Century, which requires departments of transportation and metropolitan planning organizations to consider safety explicitly in the transportation planning process. The second could arise from a need for state agencies to establish incentive programs to reduce injuries and save lives. Both applications require a forecast of safety for a future period. Planning-level crash prediction models for the Tucson, Arizona, metropolitan region are presented to demonstrate the feasibility of such models. Data were separated into fatal, injury, and property-damage crashes. To accommodate overdispersion in the data, negative binomial regression models were applied. To accommodate the simultaneity of fatality and injury crash outcomes, simultaneous estimation of the models was conducted. All models produce crash forecasts at the traffic analysis zone level. Statistically significant (p-values < 0.05) and theoretically meaningful variables for the fatal crash model included population density, persons 17 years old or younger as a percentage of the total population, and intersection density. Significant variables for the injury and property-damage crash models were population density, number of employees, intersection density, percentage of miles of principal arterials, percentage of miles of minor arterials, and percentage of miles of urban collectors. Among several conclusions, it is suggested that planning-level safety models are feasible and may play a role in future planning activities. However, caution must be exercised with such models.
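For concreteness, a single-equation version of such a planning-level model can be sketched with a negative binomial regression at the traffic analysis zone (TAZ) level, as below; the simultaneous estimation of the fatal and injury equations reported in the abstract is not reproduced, and the data file and exact specification are hypothetical.

```python
# Planning-level negative binomial crash model at the TAZ level (single equation).
# Variable names follow the abstract; the file and specification are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

taz = pd.read_csv("taz_data.csv")   # one row per traffic analysis zone

injury = smf.glm(
    "injury_crashes ~ pop_density + employees + intersection_density"
    " + pct_principal_arterial + pct_minor_arterial + pct_urban_collector",
    data=taz,
    family=sm.families.NegativeBinomial(alpha=1.0),   # accommodates overdispersion
).fit()
print(injury.summary())
```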
Abstract:
A number of studies have focused on estimating the effects of accessibility on housing values by using the hedonic price model. In the majority of studies, estimation results have revealed that housing values increase as accessibility improves, although the magnitude of estimates has varied across studies. Adequately estimating the relationship between transportation accessibility and housing values is challenging for at least two reasons. First, the monocentric city assumption applied in location theory is no longer valid for many large or growing cities. Second, rather than being randomly distributed in space, housing values are clustered in space—often exhibiting spatial dependence. Recognizing these challenges, a study was undertaken to develop a spatial lag hedonic price model in the Seoul, South Korea, metropolitan region, which includes a measure of local accessibility as well as systemwide accessibility, in addition to other model covariates. Although the accessibility measures can be improved, the modeling results suggest that the spatial interactions of apartment sales prices occur across and within traffic analysis zones, and the sales prices for apartment communities are devalued as accessibility deteriorates. Consistent with findings in other cities, this study revealed that the distance to the central business district is still a significant determinant of sales price.
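A minimal sketch of a spatial lag hedonic specification of this kind, using PySAL's spreg rather than the study's own implementation, is given below; the data file, covariates, and weights choice are assumptions for illustration.

```python
# Spatial lag hedonic price model sketch: log sale price on accessibility measures,
# with a spatially lagged dependent variable. File and column names are hypothetical.
import numpy as np
import geopandas as gpd
from libpysal.weights import KNN
from spreg import ML_Lag

gdf = gpd.read_file("apartment_sales.shp")            # hypothetical point data of sales
w = KNN.from_dataframe(gdf, k=8)                       # 8 nearest neighbouring sales
w.transform = "r"                                      # row-standardise the weights

y = np.log(gdf[["sale_price"]].values)
X = gdf[["local_access", "system_access", "dist_cbd", "floor_area"]].values

model = ML_Lag(y, X, w,
               name_y="ln_sale_price",
               name_x=["local_access", "system_access", "dist_cbd", "floor_area"])
print(model.summary)   # spatial lag coefficient (rho) and covariate effects
```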
Abstract:
Statisticians along with other scientists have made significant computational advances that enable the estimation of formerly complex statistical models. The Bayesian inference framework combined with Markov chain Monte Carlo estimation methods such as the Gibbs sampler enables the estimation of discrete choice models such as the multinomial logit (MNL) model. MNL models are frequently applied in transportation research to model choice outcomes such as mode, destination, or route choices or to model categorical outcomes such as crash outcomes. Recent developments allow for the modification of the potentially limiting assumptions of MNL such as the independence of irrelevant alternatives (IIA) property. However, relatively little transportation-related research has focused on Bayesian MNL models, the tractability of which is of great value to researchers and practitioners alike. This paper addresses MNL model specification issues in the Bayesian framework, such as the value of including prior information on parameters, allowing for nonlinear covariate effects, and extensions to random parameter models, thereby relaxing the usual limiting IIA assumption. This paper also provides an example that demonstrates, using route-choice data, the considerable potential of the Bayesian MNL approach for many transportation applications. The paper then concludes with a discussion of the pros and cons of this Bayesian approach and identifies when its application is worthwhile.
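To make the approach concrete, here is a minimal Bayesian MNL sketch sampled with MCMC using PyMC on synthetic choice data; the priors, dimensions, and data-generating step are illustrative assumptions, not the paper's specification.

```python
# Bayesian multinomial logit via MCMC (PyMC), on synthetic data for illustration.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, k, J = 500, 3, 4                                  # observations, covariates, alternatives
X = rng.normal(size=(n, k))
beta_true = rng.normal(size=(k, J - 1))
util = np.hstack([np.zeros((n, 1)), X @ beta_true])  # utility of the base alternative fixed at 0
probs = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
y = np.array([rng.choice(J, p=p_i) for p_i in probs])

with pm.Model():
    beta = pm.Normal("beta", mu=0.0, sigma=5.0, shape=(k, J - 1))   # prior information on parameters
    utility = pm.math.concatenate([np.zeros((n, 1)), pm.math.dot(X, beta)], axis=1)
    expu = pm.math.exp(utility)
    p = expu / pm.math.sum(expu, axis=1, keepdims=True)
    pm.Categorical("choice", p=p, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2)

print(idata.posterior["beta"].mean(dim=("chain", "draw")))
```

A random parameter (mixed logit) extension of the kind mentioned above would replace the fixed beta with individual-level draws from a population distribution, which relaxes the IIA property.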
Abstract:
This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
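The union/intersection intuition behind merged models and digests can be illustrated with a toy sketch on edge sets; the actual algorithms operate on full process graphs with connectors and annotations, which this does not attempt to reproduce.

```python
# Toy merge/digest sketch: variants as labelled edge sets, the merged model as an
# annotated union, and the digest as the edges recurring across several variants.
from collections import defaultdict

variants = {
    "claim_v1": {("register", "assess"), ("assess", "pay")},
    "claim_v2": {("register", "assess"), ("assess", "reject")},
    "claim_v3": {("register", "assess"), ("assess", "pay"), ("pay", "archive")},
}

provenance = defaultdict(set)
for name, edges in variants.items():
    for edge in edges:
        provenance[edge].add(name)          # remember which variants contribute each edge

merged = dict(provenance)                                            # annotated union
digest = {e for e, origins in merged.items() if len(origins) >= 2}   # recurring fragments

print("merged model edges:", len(merged))
print("digest:", sorted(digest))
```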
Abstract:
Survival probability prediction using a covariate-based hazard approach is a well-established statistical methodology in engineering asset health management. We have previously reported the semi-parametric Explicit Hazard Model (EHM), which incorporates three types of information for hazard prediction: population characteristics, condition indicators, and operating environment indicators. That model assumes the baseline hazard has the form of a Weibull distribution. To avoid this assumption, this paper presents the non-parametric EHM, a distribution-free covariate-based hazard model. An application of the non-parametric EHM is demonstrated via a case study in which survival probabilities of a set of resistance elements predicted by the non-parametric EHM are compared with those from the Weibull proportional hazards model and the traditional Weibull model. The results show that the non-parametric EHM can effectively predict asset life using the condition indicator, operating environment indicator, and failure history.
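As a rough analogue of the comparison described, the sketch below contrasts a distribution-free covariate-based model (a Cox proportional hazards fit, standing in for the non-parametric idea) with a Weibull-based model using the lifelines library; the Explicit Hazard Model itself is not implemented here and the column names are hypothetical.

```python
# Distribution-free vs Weibull-based covariate hazard models (lifelines).
# Stand-in comparison only; EHM is not implemented and the data are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

df = pd.read_csv("resistance_elements.csv")   # life data with condition/environment covariates
cols = ["duration", "failed", "condition_indicator", "operating_temp"]

cox = CoxPHFitter().fit(df[cols], duration_col="duration", event_col="failed")
weibull = WeibullAFTFitter().fit(df[cols], duration_col="duration", event_col="failed")

# Compare the predicted survival curves for the same assets.
print(cox.predict_survival_function(df[cols]).head())
print(weibull.predict_survival_function(df[cols]).head())
```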
Abstract:
With the recent regulatory reforms in a number of countries, railway resources are no longer managed by a single party but are distributed among different stakeholders. To facilitate the operation of train services, a train service provider (SP) has to negotiate with the infrastructure provider (IP) for a train schedule and the associated track access charge. This paper models the SP and IP as software agents and the negotiation as a prioritized fuzzy constraint satisfaction (PFCS) problem. Computer simulations have been conducted to demonstrate the effects on the train schedule when the SP has different optimization criteria. The results show that, by assigning different priorities to the fuzzy constraints, agents can represent SPs with different operational objectives.
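One common way to score candidate schedules in a prioritized fuzzy constraint setting is to combine each constraint's satisfaction degree with its priority via min-max aggregation, as in the toy sketch below; the constraints, priorities, and candidate schedules are invented for illustration and are not the paper's model.

```python
# Toy prioritized fuzzy constraint satisfaction (PFCS) scoring of candidate schedules.
# Aggregation: score = min_i max(satisfaction_i, 1 - priority_i), a common PFCS scheme.

def sat_access_charge(schedule):
    # Hypothetical: full satisfaction when the charge is near the SP's budget of 100.
    return max(0.0, 1.0 - abs(schedule["charge"] - 100) / 100)

def sat_journey_time(schedule):
    # Hypothetical: satisfied at or below 60 minutes, degrading linearly afterwards.
    return 1.0 if schedule["journey_min"] <= 60 else max(0.0, 1.0 - (schedule["journey_min"] - 60) / 30)

CONSTRAINTS = [(sat_access_charge, 1.0), (sat_journey_time, 0.6)]   # (fuzzy constraint, priority)

def pfcs_score(schedule):
    return min(max(c(schedule), 1.0 - p) for c, p in CONSTRAINTS)

candidates = [{"charge": 90, "journey_min": 70}, {"charge": 120, "journey_min": 58}]
best = max(candidates, key=pfcs_score)
print(best, round(pfcs_score(best), 2))
```

Changing the priorities changes which candidate wins, which is how agents representing SPs with different operational objectives can be expressed.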
Abstract:
Cloninger’s psychobiological model of temperament and character is a general model of personality that has been widely used in clinical psychology, but has seldom been applied in other domains. In this research we apply Cloninger’s model to the study of leadership. Our study comprised 81 participants who took part in a diverse range of small group tasks. Participants rotated through tasks and groups and rated each other on “emergent leadership.” As hypothesized, leader emergence tended to be consistent regardless of the specific tasks and groups. It was found that personality factors from Cloninger, Svrakic, and Przybeck’s (1993) model could explain trait-based variance in emergent leadership. Results also highlight the role of “cooperativeness” in the prediction of leadership emergence. Implications are discussed in terms of our theoretical understanding of trait-based leadership, and more generally in terms of the utility of Cloninger’s model in leadership research.
Abstract:
We present a novel modified theory based upon Rayleigh scattering of ultrasound from composite nanoparticles with a liquid core and solid shell. We derive closed form solutions to the scattering cross-section and have applied this model to an ultrasound contrast agent consisting of a liquid-filled core (perfluorooctyl bromide, PFOB) encapsulated by a polymer shell (poly-caprolactone, PCL). Sensitivity analysis was performed to predict the dependence of the scattering cross-section upon material and dimensional parameters. A rapid increase in the scattering cross-section was achieved by increasing the compressibility of the core, validating the incorporation of high compressibility PFOB; the compressibility of the shell had little impact on the overall scattering cross-section, although a more compressible shell is desirable. Changes in the density of the shell and the core result in predicted local minima in the scattering cross-section, approximately corresponding to the PFOB-PCL contrast agent considered; hence, incorporation of a lower shell density could potentially significantly improve the scattering cross-section. A 50% reduction in shell thickness relative to external radius increased the predicted scattering cross-section by 50%. Although it has often been considered that the shell has a negative effect on the echogenicity due to its low compressibility, we have shown that it can potentially play an important role in the echogenicity of the contrast agent. The challenge for the future is to identify suitable shell and core materials that meet the predicted characteristics in order to achieve optimal echogenicity.
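For reference, the classical Rayleigh-regime scattering cross-section of a small homogeneous fluid sphere is reproduced below; the paper's closed-form solution extends this type of expression to a liquid-core/solid-shell particle and is not restated here.

```latex
% Rayleigh scattering cross-section of a small fluid sphere (ka << 1) of radius a,
% compressibility \kappa_s and density \rho_s, in a medium of compressibility \kappa
% and density \rho, with k the acoustic wavenumber. Classical single-sphere result,
% not the paper's core-shell model.
\sigma_{sc} = \frac{4\pi}{9}\,k^{4}a^{6}
\left[
  \left(\frac{\kappa_s-\kappa}{\kappa}\right)^{2}
  + \frac{1}{3}\left(\frac{3(\rho_s-\rho)}{2\rho_s+\rho}\right)^{2}
\right]
```

The compressibility contrast enters through the first (monopole) term, which is why a highly compressible core such as PFOB raises the cross-section so rapidly, while the density contrast enters through the second (dipole) term, which vanishes when the densities match, consistent with the local minima noted above.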
Abstract:
Students with learning disabilities (LD) often experience significant feelings of loneliness. There is some evidence to suggest that these feelings of loneliness may be related to social difficulties that are linked to their learning disability. Adolescents experience more loneliness than any other age group, primarily because this is a time of identity formation and self-evaluation. Therefore, adolescents with learning disabilities are highly likely to experience the negative feelings of loneliness. Many areas of educational research have highlighted the impact of negative feelings on learning. This raises the question: are adolescents with learning disabilities doubly disadvantaged in regard to their learning? That is, if their learning experience is already problematic, does loneliness exacerbate these learning difficulties? This thesis reveals the findings of a doctoral project which examined this complicated relationship between loneliness and classroom participation using a social cognitive framework. In this multiple case-study design, narratives were constructed using classroom observations and interviews which were conducted with 4 adolescent students (2 girls and 2 boys, from years 9-12) who were identified as likely to be experiencing learning disabilities. Discussion is provided on the method used to identify students with learning disabilities and the related controversy of using disability labels. A key aspect of the design was that it allowed the students to relate their school experiences and have their stories told. The design included an ethnographic element in its focus on the interactions of the students within the school as a culture, and elements of narrative inquiry were used, particularly in reporting the results. The narratives revealed all participants experienced problematic social networks. Further, an alarmingly high level of bullying was discovered. Participants reported that when they were feeling rejected or were missing a valued other they had little cognitive energy for learning and did not want to be in school. Absenteeism amongst the group was high, but this was also true for the rest of the school population. A number of relationships emerged from the narratives using social cognitive theory. These relationships highlighted the impact of cognitive, behavioural and environmental factors in the school experience of lonely students with learning disabilities. This approach reflects the social model of disability that frames the research.
Abstract:
Sustainable urban development and the liveability of a city are increasingly important issues in the context of land use planning and infrastructure management. In recent years, the promotion of sustainable urban development in Australia and overseas has faced various physical, socio-economic and environmental challenges. These challenges and problems arise from the lack of capability of local governments to accommodate the needs of the population and economy in a relatively short timeframe. The planning of economic growth and development is often dealt with separately and not included in the conventional land use planning process. There has also been a sharp rise in the responsibilities and roles of local government in infrastructure planning and management. This increase in responsibilities means that local elected officials and urban planners have less time to prepare background information and make decisions. The Brisbane Urban Growth Model has proven initially successful in providing a dynamic platform to ensure timely and coordinated delivery of urban infrastructure. Most importantly, this model is the first step for local governments in moving toward a systematic approach to pursuing sustainable and effective urban infrastructure management.
Abstract:
This paper introduces an event-based traffic model for railway systems adopting fixed-block signalling schemes. In this model, the events of trains' arrival at and departure from signalling blocks constitute the states of the traffic flow. A state transition corresponds to the advance of a train by one signalling block and is realised by referring to past and present states, as well as a number of pre-calculated look-up tables of run-times in the signalling block under various signalling conditions. Simulation results are compared with those from a time-based multi-train simulator to study the improvement in processing time and accuracy.
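A minimal illustration of the event-driven state transition described above might look like the following, where a pre-calculated run-time is looked up by block and signalling aspect; the block layout, run-times, and aspect rule are invented for illustration rather than taken from the paper.

```python
# Event-based advance of a train by one signalling block using pre-calculated
# run-time look-up tables. Layout, run-times, and aspect logic are illustrative.
RUN_TIME = {   # (block, aspect) -> run-time in seconds, pre-calculated off-line
    ("B1", "green"): 60, ("B1", "yellow"): 80,
    ("B2", "green"): 55, ("B2", "yellow"): 75,
    ("B3", "green"): 65, ("B3", "yellow"): 85,
}
BLOCKS = ["B1", "B2", "B3"]

def advance(train, occupied):
    """Advance a train by one block; the run-time depends on the signalling aspect,
    taken here to be restrictive when the block beyond the next one is occupied."""
    idx = BLOCKS.index(train["block"])
    if idx + 1 >= len(BLOCKS):
        return train                                   # train has left the modelled section
    beyond = BLOCKS[idx + 2] if idx + 2 < len(BLOCKS) else None
    aspect = "yellow" if beyond in occupied else "green"
    t_next = train["t"] + RUN_TIME[(train["block"], aspect)]
    return {"id": train["id"], "block": BLOCKS[idx + 1], "t": t_next}

print(advance({"id": "IC101", "block": "B1", "t": 0}, occupied={"B3"}))
```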
Abstract:
Measuring the comparative sustainability levels of cities, regions, institutions and projects is an essential procedure in creating sustainable urban futures. This paper introduces a new urban sustainability assessment model: “The Sustainable Infrastructure, Land-use, Environment and Transport Model (SILENT)”. The SILENT Model is an advanced geographic information system and indicator-based comparative urban sustainability indexing model. The model aims to assist planners and policy makers in their daily tasks in sustainable urban planning and development by providing an integrated sustainability assessment framework. The paper gives an overview of the conceptual framework and components of the model and discusses the theoretical constructs, methodological procedures, and future development of this promising urban sustainability assessment model.
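While the SILENT methodology itself is not detailed in the abstract, indicator-based comparative indexing of this general kind can be illustrated with a rough sketch in which per-zone indicator scores are combined with theme weights; the indicators, weights, and aggregation rule below are hypothetical and not the model's actual procedure.

```python
# Rough composite-index sketch: weighted combination of per-zone indicator scores.
# Indicator names, weights, and aggregation are hypothetical, not SILENT's method.
import pandas as pd

zones = pd.DataFrame({
    "zone": ["A", "B", "C"],
    "infrastructure": [0.6, 0.8, 0.4],   # indicator scores already normalised to 0-1
    "land_use":       [0.5, 0.7, 0.9],
    "environment":    [0.9, 0.4, 0.6],
    "transport":      [0.3, 0.6, 0.8],
})
weights = {"infrastructure": 0.25, "land_use": 0.25, "environment": 0.25, "transport": 0.25}

zones["sustainability_index"] = sum(zones[c] * w for c, w in weights.items())
print(zones.sort_values("sustainability_index", ascending=False))
```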