356 results for Analytical modeling
at Queensland University of Technology - ePrints Archive
Analytical modeling and sensitivity analysis for travel time estimation on signalized urban networks
Abstract:
This paper presents a model for estimating average travel time and its variability on signalized urban networks using cumulative plots. The plots are generated based on the availability of data: a) case-D, for detector data only; b) case-DS, for detector data and signal timings; and c) case-DSS, for detector data, signal timings and saturation flow rate. The performance of the model across different degrees of saturation and detector detection intervals is consistent for case-DSS and case-DS, whereas for case-D the performance is inconsistent. The sensitivity analysis for case-D indicates that it is sensitive to the detection interval and to the signal timings within that interval: when the detection interval is an integral multiple of the signal cycle, accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both accuracy and reliability are high.
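A minimal illustrative sketch, not taken from the paper, of how average travel time can be read off a pair of cumulative plots: under FIFO, the area between the upstream and downstream cumulative count curves is the total time spent on the link, and dividing by the number of vehicles served gives the average travel time. The counts, timestamps and trapezoidal integration below are assumptions for illustration.

import numpy as np

def average_travel_time(t, upstream_cum, downstream_cum):
    """Estimate average travel time from cumulative plots.

    t              : time stamps (s)
    upstream_cum   : cumulative vehicle count entering the link
    downstream_cum : cumulative vehicle count leaving the link

    The area between the two cumulative curves is the total time spent
    on the link; dividing by the number of vehicles served gives the
    average travel time (assumes FIFO and matched counts).
    """
    area = np.trapz(np.asarray(upstream_cum) - np.asarray(downstream_cum), t)
    vehicles = downstream_cum[-1] - downstream_cum[0]
    return area / vehicles if vehicles > 0 else float("nan")

# Hypothetical 60 s of detector data at 5 s resolution
t = np.arange(0, 65, 5)
upstream = np.array([0, 3, 6, 10, 14, 17, 20, 24, 27, 30, 33, 36, 40])
downstream = np.array([0, 0, 2, 5, 8, 12, 15, 18, 22, 25, 28, 32, 36])
print(f"average travel time ~ {average_travel_time(t, upstream, downstream):.1f} s")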
Abstract:
This paper provides a fundamental understanding of the use of cumulative plots for travel time estimation on signalized urban networks. Analytical modeling is performed to generate cumulative plots based on the availability of data: a) Case-D, for detector data only; b) Case-DS, for detector data and signal timings; and c) Case-DSS, for detector data, signal timings and saturation flow rate. An empirical study and a simulation-based sensitivity analysis show consistent performance for Case-DS and Case-DSS, whereas for Case-D the performance is inconsistent. Case-D is sensitive to the detection interval and to the signal timings within that interval: when the detection interval is an integral multiple of the signal cycle, accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both accuracy and reliability are high.
Abstract:
In this paper, we propose a multivariate GARCH model with a time-varying conditional correlation structure. The new double smooth transition conditional correlation (DSTCC) GARCH model extends the smooth transition conditional correlation (STCC) GARCH model of Silvennoinen and Teräsvirta (2005) by including a second variable according to which the correlations change smoothly between states of constant correlations. A Lagrange multiplier test is derived to test the constancy of correlations against the DSTCC-GARCH model, and another to test for an additional transition in the STCC-GARCH framework. In addition, other specification tests, aimed at aiding the model-building procedure, are considered. Analytical expressions for the test statistics and the required derivatives are provided. Applying the model to stock and bond futures data, we discover that the correlation pattern between them changed dramatically around the turn of the century. The model is also applied to a selection of world stock indices, and we find evidence for an increasing degree of integration in the capital markets.
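A hedged sketch of the general form of such a double smooth transition correlation structure (the exact DSTCC-GARCH specification is given in the paper; the notation below is illustrative): the conditional correlation matrix is a convex combination of constant-correlation states, weighted by logistic transition functions of two transition variables.

\[
R_t = \bigl(1 - G_1(s_{1t})\bigr)\bigl[(1 - G_2(s_{2t}))\,R_{11} + G_2(s_{2t})\,R_{12}\bigr]
    + G_1(s_{1t})\bigl[(1 - G_2(s_{2t}))\,R_{21} + G_2(s_{2t})\,R_{22}\bigr],
\qquad
G_i(s_{it}) = \bigl(1 + e^{-\gamma_i (s_{it} - c_i)}\bigr)^{-1},\ \gamma_i > 0,
\]

where the R_{jk} are constant positive-definite correlation matrices and the s_{it} are the (predetermined or exogenous) transition variables.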
Abstract:
The Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 mandated the consideration of safety in the regional transportation planning process. As part of National Cooperative Highway Research Program Project 8-44, "Incorporating Safety into the Transportation Planning Process," we conducted a telephone survey to assess safety-related activities and expertise at Governors Highway Safety Associations (GHSAs), and GHSA relationships with metropolitan planning organizations (MPOs) and state departments of transportation (DOTs). The survey results were combined with statewide crash data to enable exploratory modeling of the relationship between GHSA policies and programs and statewide safety. The modeling objective was to illuminate current hurdles to ISTEA implementation, so that appropriate institutional, analytical, and personnel improvements can be made. The study revealed that coordination of transportation safety across DOTs, MPOs, GHSAs, and departments of public safety is generally beneficial to the implementation of safety. In addition, better coordination is characterized by more positive and constructive attitudes toward incorporating safety into planning.
Abstract:
Quality and bitrate modeling is essential for effectively adapting the bitrate and quality of videos delivered to multi-platform devices over resource-constrained heterogeneous networks. The recent model proposed by Wang et al. estimates the bitrate and quality of videos in terms of the frame rate and quantization parameter. However, to build an effective video adaptation framework, it is crucial to incorporate the spatial resolution into the analytical model for bitrate and perceptual quality adaptation. Hence, this paper proposes an analytical model that estimates the bitrate of videos in terms of quantization parameter, frame rate, and spatial resolution. The model fits the measured data accurately, as evidenced by a high Pearson correlation. The proposed model is based on the observation that the relative reduction in bitrate due to decreasing spatial resolution is independent of the quantization parameter and frame rate. The model can be used in a rate-constrained bit-stream adaptation scheme that selects the scalability parameters to optimize perceptual quality under a given bandwidth constraint.
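The key observation, that the bitrate reduction from lowering spatial resolution is independent of quantization parameter and frame rate, suggests a separable product form. The sketch below is an assumed illustration of that structure, not the paper's fitted model or parameterization.

\[
R(q, t, s) \approx R_{\max}\, f_q(q)\, f_t(t)\, f_s(s),
\qquad f_q(q_{\min}) = f_t(t_{\max}) = f_s(s_{\max}) = 1,
\]

where R_{\max} is the bitrate at the finest quantization q_{\min}, highest frame rate t_{\max} and full spatial resolution s_{\max}; the abstract's observation corresponds to the spatial reduction factor f_s depending only on s.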
Abstract:
Advances in safety research, aimed at improving the collective understanding of motor vehicle crash causes and contributing factors, rest upon the pursuit of numerous lines of research inquiry. The research community has focused considerable attention on analytical methods development (negative binomial models, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might logically ask which lines of inquiry would provide the most significant improvements in understanding crash causation and/or prediction. It is the contention of this paper that the exclusion of important variables (causal variables or surrogate measures of them) causes omitted variable bias in model estimation and is an important and neglected line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models, yet they offer significant opportunities to better understand contributing factors and/or causes of crashes. This study examines the role of important variables (other than Average Annual Daily Traffic (AADT)) that are generally omitted from intersection crash prediction models. In addition to the geometric and traffic-regulatory information of the intersection, the proposed model includes many spatial factors, such as local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools, representing a mix of potential environmental and human factors that are theoretically important but rarely used. Results suggest that these variables, in addition to AADT, have significant explanatory power and that their exclusion leads to omitted variable bias. Evidence is provided that variable exclusion overstates the effect of minor-road AADT by as much as 40% and major-road AADT by 14%.
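A minimal sketch, not the authors' code, of how the omitted-variable comparison could be set up: an intersection crash frequency model fitted as a negative binomial regression with statsmodels, once with AADT exposure only and once with additional spatial covariates. The variable names and data file are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical intersection-level dataset: crash counts plus exposure and spatial covariates
df = pd.read_csv("intersections.csv")  # hypothetical file

# Baseline model: exposure (AADT) only
base = smf.glm(
    "crashes ~ np.log(major_aadt) + np.log(minor_aadt)",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()

# Expanded model: add spatial/environmental covariates; omitting them can bias the AADT effects
full = smf.glm(
    "crashes ~ np.log(major_aadt) + np.log(minor_aadt) + rain_days"
    " + sun_glare_exposure + bars_within_1km + schools_within_1km",
    data=df,
    family=sm.families.NegativeBinomial(),
).fit()

# Compare the estimated AADT elasticities across the two specifications
print(base.params[["np.log(major_aadt)", "np.log(minor_aadt)"]])
print(full.params[["np.log(major_aadt)", "np.log(minor_aadt)"]])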
Abstract:
The skyrocketing popularity of social media on the Internet is profoundly changing analytical Customer Relationship Management (CRM). Against this backdrop, the purpose of this paper is to advance the conceptual design of Business Intelligence (BI) systems with data identified from social networks. We develop an integrated social network data model based on an in-depth analysis of Facebook. The data model can inform the design of data warehouses in order to offer new opportunities for CRM analyses, leading to a more consistent and richer picture of customers' characteristics, needs, wants, and demands. Four major contributions are offered. First, Social CRM and Social BI are introduced as emerging fields of research. Second, we develop a conceptual data model to identify and systematize the data available on online social networks. Third, based on the identified data, we design a multidimensional data model as an early contribution to the conceptual design of Social BI systems and demonstrate its application by developing management reports in a retail scenario. Fourth, intellectual challenges for advancing Social CRM and Social BI are discussed.
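A toy sketch, not from the paper, of what a multidimensional Social BI structure could look like: a fact table of customer interactions linked to customer, content and date dimensions. All names, fields and measures are hypothetical.

from dataclasses import dataclass
from datetime import date

# Hypothetical dimension tables of a Social BI star schema
@dataclass
class CustomerDim:
    customer_id: int
    name: str
    segment: str          # e.g. derived from profile data

@dataclass
class ContentDim:
    content_id: int
    content_type: str     # post, comment, like, share
    product_ref: str      # product mentioned or tagged

# Fact table row: one measured interaction, keyed to the dimensions above
@dataclass
class InteractionFact:
    customer_id: int
    content_id: int
    day: date
    sentiment_score: float   # measure
    reach: int               # measure

fact = InteractionFact(customer_id=1, content_id=42, day=date(2012, 5, 1),
                       sentiment_score=0.7, reach=130)
print(fact)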
Abstract:
Finding an appropriate linking method to connect different dimensional element types in a single finite element model is a key issue in multi-scale modeling. This paper presents a mixed dimensional coupling method using multi-point constraint equations, derived by equating the work done on either side of the interface connecting beam elements and shell elements, for constructing a finite element multi-scale model. A typical steel truss frame structure is selected as a case example; a reduced-scale specimen of this truss section is studied in the laboratory to measure its dynamic and static behavior in the global truss and in local welded details, while different analytical models are developed for numerical simulation. Comparison of the calculated dynamic and static responses among the different numerical models, together with their good agreement with the experimental results, indicates that the proposed multi-scale model is efficient and accurate.
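A hedged sketch of the work-equivalence idea behind such multi-point constraints (the notation is assumed, not taken from the paper): the work done by the beam end forces on the beam node displacements is set equal to the work done by the interface tractions on the shell edge node displacements, which yields linear constraint equations relating the beam node's degrees of freedom to those of the shell nodes.

\[
\mathbf{F}_b^{\mathsf{T}}\,\mathbf{u}_b \;=\; \sum_{i \in \mathcal{I}} \mathbf{f}_i^{\mathsf{T}}\,\mathbf{u}_i
\quad\Longrightarrow\quad
\mathbf{u}_b \;=\; \sum_{i \in \mathcal{I}} \mathbf{C}_i\,\mathbf{u}_i,
\]

where u_b collects the beam node displacements and rotations, u_i the displacements of the shell nodes on the interface set \mathcal{I}, and the C_i are constraint coefficient matrices obtained from the assumed distribution of interface forces.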
Abstract:
In this work, a Langevin dynamics model of the diffusion of water in articular cartilage was developed. Numerical simulations of the translational dynamics of water molecules and their interaction with collagen fibers were used to study the quantitative relationship between the organization of the collagen fiber network and the diffusion tensor of water in model cartilage. Langevin dynamics was used to simulate water diffusion in both ordered and partially disordered cartilage models. In addition, an analytical approach was developed to estimate the diffusion tensor for a network comprising a given distribution of fiber orientations. The key findings are that (1) an approximately linear relationship was observed between collagen volume fraction and the fractional anisotropy of the diffusion tensor in fiber networks of a given degree of alignment, (2) for any given fiber volume fraction, fractional anisotropy follows a fiber alignment dependency similar to the square of the second Legendre polynomial of cos(θ), with the minimum anisotropy occurring at approximately the magic angle (θ_MA), and (3) a decrease in the principal eigenvalue and an increase in the transverse eigenvalues are observed as the fiber orientation angle θ progresses from 0° to 90°. The corresponding diffusion ellipsoids are prolate for θ < θ_MA, spherical for θ ≈ θ_MA, and oblate for θ > θ_MA. Expansion of the model to include discrimination between the combined effects of alignment disorder and collagen fiber volume fraction on the diffusion tensor is discussed.
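A small sketch, not the authors' code, of the standard fractional anisotropy calculation from the eigenvalues of a diffusion tensor, which is the quantity the abstract relates to collagen volume fraction and fiber alignment. The example eigenvalues are made up.

import numpy as np

def fractional_anisotropy(eigenvalues):
    """Standard fractional anisotropy (FA) of a diffusion tensor from its eigenvalues."""
    lam = np.asarray(eigenvalues, dtype=float)
    mean = lam.mean()
    num = np.sqrt(((lam - mean) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

# Hypothetical eigenvalues (1e-9 m^2/s): a prolate (fiber-aligned) vs. a near-isotropic tensor
print(fractional_anisotropy([1.8, 0.6, 0.6]))    # strongly anisotropic
print(fractional_anisotropy([1.0, 0.95, 0.9]))   # nearly isotropic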
Abstract:
Carbonatites are known to contain the highest concentrations of rare-earth elements (REE) among all igneous rocks. The REE distribution of carbonatites is commonly believed to be controlled by that of the rock-forming Ca minerals (i.e., calcite, dolomite, and ankerite) and apatite because of their high modal content and tolerance for the substitution of Ca by light REE (LREE). Contrary to this conjecture, calcite from the Miaoya carbonatite (China), analyzed in situ by laser-ablation inductively-coupled-plasma mass-spectrometry, is characterized by low REE contents (100–260 ppm) and relatively flat chondrite-normalized REE distribution patterns [average (La/Yb)CN = 1.6]. The carbonatite contains abundant REE-rich minerals, including monazite and fluorapatite, both precipitated earlier than the REE-poor calcite, and REE-fluorocarbonates that postdated the calcite. Hydrothermal REE-bearing fluorite and barite veins are not observed at Miaoya. The textural and analytical evidence indicates that the initially high concentrations of REE and P in the carbonatitic magma facilitated early precipitation of REE-rich phosphates. Subsequent crystallization of REE-poor calcite led to enrichment of the residual liquid in REE, particularly LREE. This implies that REE are generally incompatible with respect to calcite and the calcite/melt partition coefficients for heavy REE (HREE) are significantly greater than those for LREE. Precipitation of REE-fluorocarbonates late in the evolutionary history resulted in depletion of the residual liquid in LREE, as manifested by the development of HREE-enriched late-stage calcite [(La/Yb)CN = 0.7] in syenites associated with the carbonatite. The observed variations of REE distribution between calcite and whole rocks are interpreted to arise from multistage fractional crystallization (phosphates → calcite → REE-fluorocarbonates) from an initially REE-rich carbonatitic liquid.
Abstract:
The business value of information technology (IT) is increasingly being cocreated by multiple parties, opening opportunities for new research initiatives. Previous studies on IT value cocreation have mainly focused on analyzing the sources of cocreated IT value, while inadequately accommodating the influence of competition relationships in IT value cocreation activities. To fill this gap, this in-progress paper suggests an agent-based modeling and simulation approach to investigating the potential influences of the dynamic interplay between cooperation and competition relationships in IT value cocreation settings. In particular, the research proposes a high-level conceptual framework to position general IT value cocreation processes. A relational network view is offered, aiming at decomposing and systemizing several typical cooperation and competition scenarios in practical IT value cocreation settings. The application of the simulation approach to deriving analytical insights and to theory building is illustrated.
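A toy agent-based sketch, purely illustrative and not the paper's model: firms hold cooperative ties that add to cocreated value, while competitive ties involving either partner discount the realized gain. The firms, relationships, parameters and update rule are all assumptions.

import random

# Hypothetical firms and relationship types
firms = ["A", "B", "C", "D"]
cooperate = {("A", "B"), ("B", "C"), ("A", "D")}   # cooperative ties
compete = {("B", "D"), ("C", "D")}                  # competitive ties

def cocreated_value(rounds=100, coop_gain=1.0, comp_discount=0.4, seed=0):
    """Accumulate cocreated IT value over repeated pairwise interactions."""
    rng = random.Random(seed)
    value = {f: 0.0 for f in firms}
    coop_pairs = {tuple(sorted(p)) for p in cooperate}
    for _ in range(rounds):
        a, b = rng.sample(firms, 2)
        if tuple(sorted((a, b))) in coop_pairs:
            gain = coop_gain
            # competitive ties involving either partner discount the realized gain
            rivals = sum(1 for p in compete if a in p or b in p)
            gain *= (1 - comp_discount) ** rivals
            value[a] += gain / 2
            value[b] += gain / 2
    return value

print(cocreated_value())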
Abstract:
In this paper we propose a new multivariate GARCH model with a time-varying conditional correlation structure. The time-varying conditional correlations change smoothly between two extreme states of constant correlations according to a predetermined or exogenous transition variable. An LM test is derived to test the constancy of correlations, and LM and Wald tests are derived to test the hypothesis of partially constant correlations. Analytical expressions for the test statistics and the required derivatives are provided to make computations feasible. An empirical example based on daily return series of five frequently traded stocks in the S&P 500 stock index completes the paper.
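A small sketch, not from the paper, that simulates two return series whose conditional correlation moves smoothly between two constant-correlation states via a logistic transition function of the transition variable (here simply rescaled time). All parameter values are assumptions.

import numpy as np

rng = np.random.default_rng(0)
T = 1000
rho_low, rho_high = 0.1, 0.8            # the two constant-correlation states
gamma, c = 10.0, 0.5                     # slope and location of the transition

s = np.linspace(0.0, 1.0, T)             # transition variable (rescaled time)
G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))
rho = (1 - G) * rho_low + G * rho_high   # smoothly transitioning correlation

returns = np.empty((T, 2))
for t in range(T):
    cov = np.array([[1.0, rho[t]], [rho[t], 1.0]])
    returns[t] = rng.multivariate_normal(np.zeros(2), cov)

# Rolling sample correlation roughly tracks the smooth transition
window = 100
roll = [np.corrcoef(returns[t - window:t].T)[0, 1] for t in range(window, T)]
print(round(roll[0], 2), round(roll[-1], 2))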
Abstract:
This paper proposes an analytical Incident Traffic Management framework for freeway incident modeling and traffic re-routing. The proposed framework incorporates an econometric incident duration model and a traffic re-routing optimization module. The incident duration model is used to estimate the expected duration of the incident and thus determine the planning horizon for the re-routing module. The re-routing module is a CTM-based Single Destination System Optimal Dynamic Traffic Assignment model that generates optimal real-time strategies of re-routing freeway traffic to its adjacent arterial network during incidents. The proposed framework has been applied to a case study network including a freeway and its adjacent arterial network in South East Queensland, Australia. The results from different scenarios of freeway demand and incident blockage extent have been analyzed and advantages of the proposed framework are demonstrated.
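A minimal sketch of the cell transmission model (CTM) flow update that underlies such a re-routing module; this is not the paper's implementation, and the cell parameters below are made up.

import numpy as np

def ctm_step(n, N, Q, v_over_w=1.0, demand=0.0):
    """One cell transmission model update for a linear freeway segment.

    n : current vehicles in each cell
    N : jam capacity of each cell
    Q : maximum flow (per time step) into/out of each cell
    Flow into a cell is limited by the upstream sending flow, the cell's
    capacity, and the cell's spare (receiving) capacity.
    """
    cells = len(n)
    y = np.zeros(cells + 1)
    y[0] = min(demand, Q[0], (N[0] - n[0]) / v_over_w)         # inflow to first cell
    for i in range(1, cells):
        y[i] = min(n[i - 1], Q[i], (N[i] - n[i]) / v_over_w)   # inter-cell flows
    y[cells] = min(n[-1], Q[-1])                                # outflow from last cell
    return n + y[:-1] - y[1:]

# Hypothetical 4-cell segment with a capacity drop (incident) in cell 2
n = np.array([10.0, 12.0, 20.0, 5.0])
N = np.array([40.0, 40.0, 40.0, 40.0])
Q = np.array([15.0, 15.0, 6.0, 15.0])   # reduced Q models the incident blockage
for _ in range(5):
    n = ctm_step(n, N, Q, demand=12.0)
print(np.round(n, 1))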
Abstract:
Statistical comparison of oil samples is an integral part of oil spill identification, which deals with the process of linking an oil spill to its source of origin. In current practice, a frequentist hypothesis test is often used to evaluate evidence in support of a match between a spill and a source sample. As frequentist tests are only able to evaluate evidence against a hypothesis, not in support of it, we argue that this leads to unsound statistical reasoning. Moreover, currently only verbal conclusions on a very coarse scale can be made about the match between two samples, whereas a finer quantitative assessment would often be preferred. To address these issues, we propose a Bayesian predictive approach for evaluating the similarity between the chemical compositions of two oil samples. We derive the underlying statistical model from some basic assumptions on modeling assays in analytical chemistry and, to further facilitate and improve numerical evaluations, we develop analytical expressions for the key elements of Bayesian inference for this model. The approach is illustrated with both simulated and real data and is shown to have appealing properties in comparison with both standard frequentist and Bayesian approaches.
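A simplified sketch of the predictive idea, not the paper's model: under a normal assay model with a noninformative prior, the posterior predictive distribution for a new measurement given the source replicates is a Student-t, so a spill measurement can be scored by its log predictive density under each candidate source. The compound values below are hypothetical.

import numpy as np
from scipy import stats

def log_predictive_score(source_measurements, spill_value):
    """Log posterior predictive density of a spill measurement for one compound,
    using the standard Student-t predictive under a normal model with a
    noninformative prior on the source sample."""
    x = np.asarray(source_measurements, dtype=float)
    n, m, s2 = len(x), x.mean(), x.var(ddof=1)
    scale = np.sqrt(s2 * (1 + 1 / n))
    return stats.t.logpdf(spill_value, df=n - 1, loc=m, scale=scale)

# Hypothetical replicate assays of two candidate sources and one spill,
# for a single diagnostic compound ratio
source_a = [1.02, 0.98, 1.01, 1.00]
source_b = [1.35, 1.40, 1.32, 1.38]
spill = 1.01

print("score vs source A:", log_predictive_score(source_a, spill))
print("score vs source B:", log_predictive_score(source_b, spill))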