297 results for causal modeling
Abstract:
The realistic strength and deflection behavior of industrial and commercial steel portal frame buildings are understood only if the effects of the rigidity of end frames and profiled steel claddings are included. Conventional designs ignore these effects and are based largely on idealized two-dimensional (2D) frame behavior. Full-scale tests of a 12 × 12 m steel portal frame building under a range of design load cases indicated that the observed deflections and bending moments in the portal frame were considerably different from those obtained from a 2D frame analysis that ignores these effects. Three-dimensional (3D) analyses of the same building, including the effects of end frames and cladding, were carried out, and the results agreed well with the full-scale test results. The results clearly indicated the need for such an analysis and for testing to study the true behavior of steel portal frame buildings. It is expected that such a 3D analysis will lead to lighter steel frames, as the maximum moments and deflections are reduced.
Abstract:
Conservation of free-ranging cheetah (Acinonyx jubatus) populations is multifaceted and needs to be addressed from ecological, biological and management perspectives. There is a wealth of published research, each study focusing on a particular aspect of cheetah conservation. Identifying the most important factors, making sense of various (and sometimes contrasting) findings, and taking decisions when little or no empirical data are available are everyday challenges facing conservationists. Bayesian networks (BNs) provide a statistical modeling framework that enables analysis and integration of information addressing different aspects of conservation. There has been increased interest in the use of BNs to model conservation issues; however, the development of more sophisticated BNs utilizing object-oriented (OO) features is still at the frontier of ecological research. We describe an integrated, parallel modeling process followed during a BN modeling workshop held in Namibia to combine expert knowledge and data about free-ranging cheetahs. The aim of the workshop was to obtain a more comprehensive view of the current viability of the free-ranging cheetah population in Namibia, and to predict the effect different scenarios may have on the future viability of this population. A complementary aim was to identify influential parameters of the model, so that the parameters with the greatest impact on population viability can be targeted more effectively. The BN was developed by aggregating diverse perspectives from local and independent scientists, agents from the national ministry, conservation agency members and local fieldworkers. This integrated BN approach facilitates OO modeling in a multi-expert context, lending itself to a series of integrated yet independent subnetworks describing different scientific and management components. We created three subnetworks in parallel: a biological, an ecological and a human-factors network, which were then combined to create a complete representation of free-ranging cheetah population viability. Such OOBNs have widespread relevance to the effective and targeted conservation management of vulnerable and endangered species.
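For readers who want to see how such a network can be assembled and queried programmatically, the sketch below builds a toy discrete Bayesian network in Python with the pgmpy library. The node names, states and probabilities are invented placeholders standing in for the workshop's biological, ecological and human-factors subnetworks; they are not the actual OOBN developed in Namibia.

```python
# Minimal sketch of combining expert-defined subnetworks into one Bayesian
# network with pgmpy. All node names, states, and probabilities here are
# hypothetical placeholders, not the workshop's actual OOBN.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Edges drawn from three conceptual subnetworks feeding a viability node.
model = BayesianNetwork([
    ("PreyAvailability", "Viability"),   # ecological subnetwork
    ("DiseasePrevalence", "Viability"),  # biological subnetwork
    ("FarmerTolerance", "Viability"),    # human-factors subnetwork
])

cpd_prey = TabularCPD("PreyAvailability", 2, [[0.6], [0.4]])
cpd_disease = TabularCPD("DiseasePrevalence", 2, [[0.7], [0.3]])
cpd_tolerance = TabularCPD("FarmerTolerance", 2, [[0.5], [0.5]])
# P(Viability | PreyAvailability, DiseasePrevalence, FarmerTolerance)
cpd_viability = TabularCPD(
    "Viability", 2,
    [[0.9, 0.7, 0.6, 0.4, 0.6, 0.4, 0.3, 0.1],
     [0.1, 0.3, 0.4, 0.6, 0.4, 0.6, 0.7, 0.9]],
    evidence=["PreyAvailability", "DiseasePrevalence", "FarmerTolerance"],
    evidence_card=[2, 2, 2],
)
model.add_cpds(cpd_prey, cpd_disease, cpd_tolerance, cpd_viability)
assert model.check_model()

# Scenario analysis: query viability under fixed evidence.
posterior = VariableElimination(model).query(
    variables=["Viability"], evidence={"PreyAvailability": 1}
)
print(posterior)
```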
Abstract:
This paper comprehensively reviews recent developments in modeling lane-changing behavior. The major lane-changing models in the literature are categorized into two groups: models that aim to capture the lane-changing decision-making process, and models that aim to quantify the impact of lane-changing behavior on surrounding vehicles. The methodologies and important features (including limitations) of representative models in each category are outlined and discussed, and future research needs are identified.
Abstract:
A fractional differential equation is used to describe a fractal model of mobile/immobile transport with a power-law memory function. This equation is the limiting equation that governs continuous-time random walks with heavy-tailed random waiting times. In this paper, we first propose a finite difference method to discretize the time variable and obtain a semi-discrete scheme, and we discuss its stability and convergence. Second, we consider a meshless method based on radial basis functions (RBFs) to discretize the space variable. In contrast to the conventional finite difference method (FDM) and finite element method (FEM), the meshless method is shown to have distinct advantages: calculations can be performed without a mesh, higher accuracy can be achieved, and complex problems can be handled. Finally, the convergence order is verified using a numerical example that describes a fractal mobile/immobile transport process over different problem domains. The numerical results indicate that the present meshless approach is very effective for modeling and simulating fractional differential equations, and that it has good potential for the development of a robust simulation tool for problems in engineering and science governed by various types of fractional differential equations.
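A rough sketch of the kind of solver described above is given below: the Caputo time-fractional term is discretized with a standard L1 finite difference weighting, and the spatial operator is handled by multiquadric RBF collocation. The governing equation form, parameters, boundary conditions and RBF shape parameter are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (not the paper's scheme verbatim): an L1 finite-difference
# discretization in time of the Caputo term combined with multiquadric RBF
# collocation in space for a 1D mobile/immobile transport equation
#   u_t + beta * D_t^alpha u = D * u_xx,   u(0,t) = u(1,t) = 0.
# Parameters, RBF shape, and the test problem are illustrative assumptions.
import numpy as np
from math import gamma

alpha, beta, D = 0.7, 1.0, 0.05        # fractional order, capacity ratio, diffusivity
N, tau, steps, c = 41, 1e-3, 200, 0.1  # nodes, time step, steps, RBF shape parameter

x = np.linspace(0.0, 1.0, N)
r = x[:, None] - x[None, :]
Phi = np.sqrt(r**2 + c**2)              # multiquadric phi(r)
Phi_xx = c**2 / (r**2 + c**2)**1.5      # d^2 phi / dx^2
L = Phi_xx @ np.linalg.inv(Phi)         # differentiation matrix: u_xx ~ L @ u

gam = beta * tau**(-alpha) / gamma(2.0 - alpha)
b = lambda j: (j + 1)**(1.0 - alpha) - j**(1.0 - alpha)   # L1 weights, b(0) = 1

A = (1.0 / tau + gam) * np.eye(N) - D * L
A[0, :], A[-1, :] = 0.0, 0.0
A[0, 0] = A[-1, -1] = 1.0               # Dirichlet boundary rows

u_hist = [np.sin(np.pi * x)]            # initial condition
for n in range(steps):
    # history sum of the L1 scheme: sum_{j=1}^{n} b_j (u^{n+1-j} - u^{n-j})
    hist = sum(b(j) * (u_hist[n + 1 - j] - u_hist[n - j]) for j in range(1, n + 1))
    rhs = (1.0 / tau + gam) * u_hist[n] - gam * hist
    rhs[0] = rhs[-1] = 0.0
    u_hist.append(np.linalg.solve(A, rhs))

print("u(x=0.5, t=%.2f) ~ %.4f" % (steps * tau, u_hist[-1][N // 2]))
```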
Abstract:
Focused on the alternative futures of terrorism, this study engages with the different levels of terrorism knowledge to identify and challenge the restrictive narratives that define terrorism: that "society must be defended" from the "constant and evolving terrorist threat". When Causal Layered Analysis is used to deconstruct and reconstruct strategies, alternative scenarios emerge. These alternative futures are depicted collectively as a maze, highlighting the prospect of navigating towards preferred and even shared terrorism futures, once these are supported by new and inclusive metaphors and stakeholder engagement.
Abstract:
The method of generalized estimating equations (GEE) is a popular tool for analysing longitudinal (panel) data. Often, the covariates collected are time-dependent in nature, for example, age, relapse status, and monthly income. When using GEE to analyse longitudinal data with time-dependent covariates, crucial assumptions about the covariates are necessary for valid inferences to be drawn. When those assumptions do not hold or cannot be verified, Pepe and Anderson (1994, Communications in Statistics, Simulation and Computation 23, 939–951) advocated using an independence working correlation assumption in the GEE model as a robust approach. However, using GEE with the independence correlation assumption may lead to significant efficiency loss (Fitzmaurice, 1995, Biometrics 51, 309–317). In this article, we propose a method that extracts additional information from the estimating equations that are excluded by the independence assumption. The method always includes the estimating equations under the independence assumption, and the contribution from the remaining estimating equations is weighted according to the likelihood of each equation being a consistent estimating equation and the information it carries. We apply the method to a longitudinal study of the health of a group of Filipino children.
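As a point of reference, the snippet below shows how a GEE with the independence working correlation advocated by Pepe and Anderson can be fitted in Python with statsmodels on synthetic longitudinal data; it illustrates the robust baseline only, not the authors' proposed weighted extension, and the variable names and data are made up.

```python
# Illustrative sketch of fitting a GEE with an independence working
# correlation via statsmodels (not the authors' weighted estimating-equation
# method; variable names and data are synthetic).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_child, n_visits = 100, 4
child = np.repeat(np.arange(n_child), n_visits)
age = np.tile(np.arange(n_visits), n_child) + rng.normal(0, 0.1, n_child * n_visits)
income = rng.normal(0, 1, n_child * n_visits)          # time-dependent covariate
height = 100 + 5 * age + 1.5 * income + rng.normal(0, 2, n_child * n_visits)
df = pd.DataFrame({"child": child, "age": age, "income": income, "height": height})

model = sm.GEE.from_formula(
    "height ~ age + income",
    groups="child",
    data=df,
    cov_struct=sm.cov_struct.Independence(),   # robust choice of Pepe & Anderson (1994)
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())
```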
Abstract:
Previous studies have shown that users' cognitive styles play an important role in Web searching. However, only a limited number of studies have examined the relationship between cognitive styles and Web search behavior, and it is not clear which components of Web search behavior are influenced by cognitive styles. This paper examines the relationships between users' cognitive styles and their Web searching and develops a model that portrays these relationships. The study uses qualitative and quantitative analyses of data gathered from 50 participants. A questionnaire was used to collect participants' demographic information, and Riding's (1991) Cognitive Style Analysis (CSA) test was used to assess their cognitive styles. Results show that users' cognitive styles influenced their information searching strategies, query reformulation behavior, Web navigational styles and information processing approaches. The user model developed in this study depicts the fundamental relationships between users' Web search behavior and their cognitive styles. Modeling Web search behavior with a greater understanding of users' cognitive styles can help information science researchers and information systems designers to bridge the semantic gap between users and systems. Implications of the research for theory and practice, and future work, are discussed.
Abstract:
Carbonatites are known to contain the highest concentrations of rare-earth elements (REE) among all igneous rocks. The REE distribution of carbonatites is commonly believed to be controlled by that of the rock forming Ca minerals (i.e., calcite, dolomite, and ankerite) and apatite because of their high modal content and tolerance for the substitution of Ca by light REE (LREE). Contrary to this conjecture, calcite from the Miaoya carbonatite (China), analyzed in situ by laser-ablation inductively-coupled-plasma mass-spectrometry, is characterized by low REE contents (100–260 ppm) and relatively flat chondrite-normalized REE distribution patterns [average (La/Yb)CN=1.6]. The carbonatite contains abundant REE-rich minerals, including monazite and fluorapatite, both precipitated earlier than the REE-poor calcite, and REE-fluorocarbonates that postdated the calcite. Hydrothermal REE-bearing fluorite and barite veins are not observed at Miaoya. The textural and analytical evidence indicates that the initially high concentrations of REE and P in the carbonatitic magma facilitated early precipitation of REE-rich phosphates. Subsequent crystallization of REE-poor calcite led to enrichment of the residual liquid in REE, particularly LREE. This implies that REE are generally incompatible with respect to calcite and the calcite/melt partition coefficients for heavy REE (HREE) are significantly greater than those for LREE. Precipitation of REE-fluorocarbonates late in the evolutionary history resulted in depletion of the residual liquid in LREE, as manifested by the development of HREE-enriched late-stage calcite [(La/Yb)CN=0.7] in syenites associated with the carbonatite. The observed variations of REE distribution between calcite and whole rocks are interpreted to arise from multistage fractional crystallization (phosphates → calcite → REE-fluorocarbonates) from an initially REE-rich carbonatitic liquid.
Abstract:
Background: The application of theoretical frameworks to modeling predictors of drug risk among male street laborers remains limited. The objective of this study was to test a modified version of the Information-Motivation-Behavioral Skills (IMB) model, which includes psychosocial stress, and to compare this modified version with the original IMB model in terms of goodness of fit for predicting risky drug use behavior in this population. Methods: In a cross-sectional study, a social mapping technique was used to recruit 450 male street laborers from 135 street venues across 13 districts of Hanoi city, Vietnam, for face-to-face interviews. Structural equation modeling (SEM) was used to analyze the interview data. Results: Overall measures of fit from the SEM indicated that the original IMB model provided a better fit to the data than the modified version. Although the former explained less variance than the latter (55% vs. 62%), its fit was better. The findings suggest that men who are better informed about and motivated for HIV prevention are more likely to report higher behavioral skills and, in turn, are less likely to engage in risky drug use behavior. Conclusions: This was the first application of the modified IMB model to drug use in men who were unskilled, unregistered laborers in urban settings. An AIDS prevention program for these men should not only distribute information and enhance motivation for HIV prevention, but also consider interventions that could improve self-efficacy for preventing HIV infection. Future public health research and action may also consider broader factors such as structural social capital and social policy to alter the conditions that drive risky drug use among these men.
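To make the modeling step concrete, the sketch below specifies and fits a simplified IMB-style structural equation model with the semopy package on simulated data; the latent structure, indicator names and data are placeholders rather than the study's instrument or sample.

```python
# Hedged sketch of an IMB-style SEM in semopy; indicators, loadings, and
# simulated data are placeholders, not the study's questionnaire or dataset.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
n = 450
info = rng.normal(size=n)            # latent: HIV prevention information
motiv = rng.normal(size=n)           # latent: motivation
skills = 0.5 * info + 0.6 * motiv + rng.normal(scale=0.5, size=n)
risk = -0.7 * skills + rng.normal(scale=0.5, size=n)   # risky drug use score

df = pd.DataFrame({
    "inf1": info + rng.normal(scale=0.3, size=n),
    "inf2": info + rng.normal(scale=0.3, size=n),
    "mot1": motiv + rng.normal(scale=0.3, size=n),
    "mot2": motiv + rng.normal(scale=0.3, size=n),
    "skl1": skills + rng.normal(scale=0.3, size=n),
    "skl2": skills + rng.normal(scale=0.3, size=n),
    "risk": risk,
})

desc = """
Information =~ inf1 + inf2
Motivation  =~ mot1 + mot2
Skills      =~ skl1 + skl2
Skills ~ Information + Motivation
risk   ~ Skills
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())             # path coefficients and standard errors
print(semopy.calc_stats(model))    # fit indices (CFI, RMSEA, etc.)
```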
Abstract:
This paper presents mathematical models for BRT station operation, calibrated using microscopic simulation modelling. Models are presented for station capacity and bus queue length. No reliable model presently exists to estimate bus queue length. The proposed bus queue model is analogous to an unsignalized intersection queuing model.
Abstract:
Stations on Bus Rapid Transit (BRT) lines ordinarily control line capacity because they act as bottlenecks. At stations with passing lanes, congestion may occur when buses maneuvering into and out of the platform stopping lane interfere with bus flow, or when a queue of buses forms upstream of the station and blocks inflow. We contend that, as bus inflow to the station area approaches capacity, queuing becomes excessive in a manner similar to the operation of a minor movement at an unsignalized intersection. This analogy is used to treat BRT station operation and to analyze the relationship between station queuing and capacity. In the first of three stages, we conducted microscopic simulation modeling to study the operating characteristics of the station under near-steady-state conditions, using capacity, degree of saturation and queuing as output variables. A mathematical model was then developed to estimate the relationship between average queue and degree of saturation, and was calibrated over a specified range of controlled scenarios for the mean and coefficient of variation of dwell time. Finally, the model was calibrated and validated against the simulation results.
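The queuing analogy can be illustrated with a toy steady-state calculation: in a simple M/M/1-type queue the average queue grows sharply as the degree of saturation approaches one, which is the qualitative behavior the calibrated station model captures. The snippet below is only this generic illustration, not the paper's calibrated relationship.

```python
# Not the paper's calibrated model: a toy illustration of how average queue
# grows with degree of saturation in a steady-state M/M/1-like analogy.
def average_queue(degree_of_saturation: float) -> float:
    """Steady-state mean number in an M/M/1-type queue, L = x / (1 - x)."""
    x = degree_of_saturation
    if not 0.0 <= x < 1.0:
        raise ValueError("degree of saturation must lie in [0, 1) at steady state")
    return x / (1.0 - x)

for x in (0.5, 0.7, 0.8, 0.9, 0.95):
    print(f"x = {x:.2f} -> average queue ~ {average_queue(x):.1f} buses")
```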
Abstract:
Understanding public transport travel time variability (PTTV) is essential for explaining deteriorations in travel time reliability and for optimizing transit schedules and route choices. This paper establishes key definitions of PTTV that, first, include all buses on a route and, second, include only a single service from that bus route. The paper then analyses the day-to-day distribution of public transport travel time using Transit Signal Priority data. A comprehensive approach using both a parametric-bootstrap Kolmogorov-Smirnov test and the Bayesian Information Criterion is developed, and it recommends the lognormal distribution as the best descriptor of bus travel time on urban corridors. The probability density function of the lognormal distribution is finally used to calculate probability indicators of PTTV. The findings of this study are useful to both traffic managers and statisticians for planning and for research on transit systems.
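The distribution-selection step can be sketched as follows: fit candidate distributions, compare them with the Bayesian Information Criterion, and check the lognormal with a parametric-bootstrap Kolmogorov-Smirnov test. The travel times below are simulated placeholders, and the candidate set and sample size are assumptions.

```python
# Sketch of the distribution-selection idea: fit candidate distributions to
# bus travel times, compare BIC, and run a parametric-bootstrap KS test for
# the lognormal. The travel-time data here are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
travel_time = rng.lognormal(mean=np.log(600), sigma=0.25, size=500)  # seconds

candidates = {"lognorm": stats.lognorm, "norm": stats.norm, "gamma": stats.gamma}
for name, dist in candidates.items():
    params = dist.fit(travel_time)
    loglik = np.sum(dist.logpdf(travel_time, *params))
    bic = len(params) * np.log(len(travel_time)) - 2.0 * loglik
    print(f"{name:8s} BIC = {bic:.1f}")

# Parametric-bootstrap KS test for the lognormal (accounts for fitted params).
params = stats.lognorm.fit(travel_time)
d_obs = stats.kstest(travel_time, "lognorm", args=params).statistic
d_boot = []
for _ in range(500):
    sample = stats.lognorm.rvs(*params, size=len(travel_time), random_state=rng)
    p_hat = stats.lognorm.fit(sample)
    d_boot.append(stats.kstest(sample, "lognorm", args=p_hat).statistic)
p_value = np.mean(np.array(d_boot) >= d_obs)
print(f"bootstrap KS p-value for lognormal: {p_value:.3f}")
```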
Abstract:
Because the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behavior. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics underestimates the thermodynamic behavior of soft matter (e.g., microfilaments in cells), which weakens the ability of the modeled material to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated against all-atom MD simulations. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter systems.
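For illustration, the snippet below implements a generic Langevin-type stochastic thermostat (the exact Ornstein-Uhlenbeck velocity update used in BAOAB-style integrators) for a coarse-grained bead model; it shows the general idea of injecting thermal fluctuations at a coarse level and is not the authors' new algorithm or parameterization.

```python
# Generic Langevin-type stochastic thermostat for a coarse-grained bead model,
# shown only to illustrate injecting thermal fluctuations at a coarse level;
# masses, friction, and time step are assumed values.
import numpy as np

kB, T = 1.380649e-23, 300.0      # J/K, K
m, gamma_f = 1e-21, 1e12         # bead mass (kg), friction coefficient (1/s) -- assumed
dt, steps, n_beads = 1e-15, 10000, 64

rng = np.random.default_rng(7)
v = np.zeros((n_beads, 3))

# "O" step: exact Ornstein-Uhlenbeck update of velocities,
#   v <- c1 * v + c2 * sqrt(kB T / m) * xi,  xi ~ N(0, 1)
c1 = np.exp(-gamma_f * dt)
c2 = np.sqrt(1.0 - c1**2)
for _ in range(steps):
    # (deterministic force/position updates of the coarse-grained model go here)
    v = c1 * v + c2 * np.sqrt(kB * T / m) * rng.standard_normal(v.shape)

kinetic_T = m * np.mean(np.sum(v**2, axis=1)) / (3.0 * kB)
print(f"sampled kinetic temperature ~ {kinetic_T:.1f} K (target {T} K)")
```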
Abstract:
Hot spot identification (HSID) aims to identify potential sites—roadway segments, intersections, crosswalks, interchanges, ramps, etc.—with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that are often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst the concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean as in most methods in practice, which corresponds more closely with how black spots are identified in the field. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional EB method with negative binomial regression. Application of a quantile regression model to equivalent PDO crashes enables identification of a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitations of the traditional NB model in dealing with the preponderance-of-zeros problem and right-skewed data.
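The core of the approach can be sketched as follows: compute an equivalent-PDO (EPDO) score with assumed severity weights, regress it on segment attributes at an upper quantile, and flag sites whose observed EPDO exceeds the fitted quantile. The weights, covariates and data below are illustrative assumptions, not the Korean dataset or the paper's calibrated model.

```python
# Sketch of identifying high-risk segments by regressing equivalent-PDO (EPDO)
# crash scores on segment attributes at an upper quantile with statsmodels;
# the EPDO weights, covariates, and data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "aadt": rng.uniform(1_000, 20_000, n),   # annual average daily traffic
    "length_km": rng.uniform(0.5, 5.0, n),
    "pdo": rng.poisson(3, n),
    "minor": rng.poisson(1, n),
    "severe": rng.poisson(0.2, n),
})
# Equivalent-PDO score with assumed severity weights (PDO=1, minor=5, severe=50).
df["epdo"] = df["pdo"] + 5 * df["minor"] + 50 * df["severe"]

# Model the 90th percentile of EPDO rather than the mean.
fit = smf.quantreg("epdo ~ aadt + length_km", df).fit(q=0.90)
print(fit.summary())

# Flag candidate hot spots: observed EPDO above the fitted 90th-percentile value.
df["hot_spot"] = df["epdo"] > fit.predict(df)
print(df["hot_spot"].sum(), "candidate hot spots flagged")
```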
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
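A simplified version of the mapping transform and its first-order error model is sketched below: a sensor-frame point is rotated and translated into the navigation frame, and a numerical Jacobian propagates an assumed attitude covariance into a mapped-point covariance. The Euler convention, offsets and uncertainty magnitudes are made-up values for illustration, not the paper's full spatial error model.

```python
# Simplified sketch of the mapping transform: a range observation in the
# sensor frame is rotated/translated into the navigation frame, and a
# first-order (Jacobian-based) model propagates attitude uncertainty into
# mapped-point uncertainty. All numeric values are assumed for illustration.
import numpy as np

def rot_zyx(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation matrix from body to navigation frame (Z-Y-X Euler convention)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def sensor_to_nav(p_s, att, t_bs, R_bs, p_nb):
    """p_n = R_nb(att) @ (R_bs @ p_s + t_bs) + p_nb"""
    return rot_zyx(*att) @ (R_bs @ p_s + t_bs) + p_nb

# Example point, vehicle pose, and extrinsic calibration (all assumed values).
p_s = np.array([20.0, 5.0, 0.0])                 # LIDAR return in sensor frame (m)
att = np.array([0.02, -0.01, 0.8])               # roll, pitch, yaw (rad)
t_bs, R_bs = np.array([1.2, 0.0, 1.5]), rot_zyx(0.0, 0.05, 0.0)
p_nb = np.array([100.0, 50.0, 0.0])              # vehicle position in nav frame (m)

# Numerical Jacobian of the mapped point with respect to the attitude angles.
eps = 1e-6
J = np.zeros((3, 3))
base = sensor_to_nav(p_s, att, t_bs, R_bs, p_nb)
for i in range(3):
    d = np.zeros(3)
    d[i] = eps
    J[:, i] = (sensor_to_nav(p_s, att + d, t_bs, R_bs, p_nb) - base) / eps

P_att = np.diag([np.deg2rad(0.1)**2] * 3)        # assumed attitude covariance
P_map = J @ P_att @ J.T                          # first-order mapped-point covariance
print("mapped point (m):", np.round(base, 2))
print("1-sigma mapping error (m):", np.round(np.sqrt(np.diag(P_map)), 3))
```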