994 results for injection modeling
Abstract:
Public transport travel time variability (PTTV) is essential for understanding deterioration in travel time reliability and for optimizing transit schedules and route choices. This paper establishes key definitions of PTTV that firstly include all buses, and secondly include only a single service from a bus route. The paper then analyses the day-to-day distribution of public transport travel time using Transit Signal Priority data. A comprehensive approach combining a parametric bootstrapped Kolmogorov-Smirnov test with the Bayesian Information Criterion is developed, which recommends the lognormal distribution as the best descriptor of bus travel time on urban corridors. The probability density function of the lognormal distribution is finally used to calculate probability indicators of PTTV. The findings of this study are useful to both traffic managers and statisticians for planning and researching transit systems.
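As an illustration of the selection procedure described above, the following sketch fits candidate distributions to day-to-day travel times, scores them with the Bayesian Information Criterion, and applies a parametric bootstrap Kolmogorov-Smirnov test to the winner. The data array is synthetic and the candidate set is an assumption; SciPy is used here in place of whatever software the authors employed.

    # Hedged sketch: BIC-based distribution selection plus a parametric
    # bootstrap KS test, on synthetic bus travel times (seconds).
    import numpy as np
    from scipy import stats

    travel_times = np.random.default_rng(0).lognormal(6.0, 0.25, 500)

    def bic(dist, data):
        params = dist.fit(data)
        loglik = np.sum(dist.logpdf(data, *params))
        return len(params) * np.log(len(data)) - 2.0 * loglik

    candidates = {"lognorm": stats.lognorm, "norm": stats.norm, "gamma": stats.gamma}
    scores = {name: bic(d, travel_times) for name, d in candidates.items()}
    best = min(scores, key=scores.get)   # lognorm expected to win here

    # Parametric bootstrap KS test for the selected distribution.
    params = candidates[best].fit(travel_times)
    d_obs = stats.kstest(travel_times, best, args=params).statistic
    rng = np.random.default_rng(1)
    d_boot = []
    for _ in range(200):
        sample = candidates[best].rvs(*params, size=len(travel_times), random_state=rng)
        d_boot.append(stats.kstest(sample, best, args=candidates[best].fit(sample)).statistic)
    p_value = np.mean(np.array(d_boot) >= d_obs)   # large p => fit not rejected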
Abstract:
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scales accessible when modeling the mechanical behavior of soft matter. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics underestimates the thermodynamic behavior of soft matter (e.g. microfilaments in cells), which weakens the ability of the material to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated against all-atom MD simulation. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter systems.
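The abstract does not specify the authors' algorithm, so the sketch below shows a generic Langevin-type stochastic thermostat step for a coarse-grained bead model: friction plus random kicks sized by the fluctuation-dissipation theorem hold the beads at the target temperature. All parameter values are illustrative, not taken from the paper.

    # Generic Langevin (stochastic) thermostat step -- an illustration of
    # the idea only, not the authors' specific algorithm.
    import numpy as np

    kB = 1.380649e-23   # Boltzmann constant, J/K

    def langevin_step(x, v, force, m, gamma, T, dt, rng):
        # Friction -gamma*v plus thermal noise scaled so that the bead
        # samples the target temperature T (fluctuation-dissipation).
        noise = rng.standard_normal(np.shape(v))
        v = v + (force(x) / m - gamma * v) * dt \
              + np.sqrt(2.0 * gamma * kB * T * dt / m) * noise
        x = x + v * dt
        return x, v

    # Example: one bead in a weak harmonic trap (all values illustrative).
    rng = np.random.default_rng(0)
    x, v = np.zeros(3), np.zeros(3)
    for _ in range(1000):
        x, v = langevin_step(x, v, lambda r: -1e-3 * r,
                             m=1e-21, gamma=1e12, T=300.0, dt=1e-15, rng=rng)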
Abstract:
Hot spot identification (HSID) aims to identify potential sites—roadway segments, intersections, crosswalks, interchanges, ramps, etc.—with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might result in either identifying a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor-injury and property-damage-only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that is often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and quantile regression to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean as in most methods, which corresponds more closely with how black spots are identified in practice. The proposed methodology is illustrated using rural road segment data from Korea and compared against the traditional empirical Bayes (EB) method with negative binomial regression. Application of a quantile regression model to equivalent PDO crashes identifies a set of high-risk sites that reflect the true safety costs to society, simultaneously reduces the influence of under-reported PDO and minor-injury crashes, and overcomes the limitations of the traditional negative binomial model in dealing with the preponderance of zeros and right-skewed data.
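A minimal sketch of the screening step, assuming hypothetical site-level data and illustrative EPDO severity weights: fit an upper quantile of equivalent-PDO crashes with statsmodels' quantile regression and flag sites that exceed it. This is one plausible reading of the method, not the authors' code.

    # Hedged sketch: quantile regression on equivalent-PDO crashes.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical site-level data (the paper uses rural segments from Korea).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "aadt": rng.uniform(500, 20000, 300),   # traffic exposure
        "length": rng.uniform(0.2, 5.0, 300),   # segment length, km
        "fatal": rng.poisson(0.05, 300),
        "injury": rng.poisson(0.5, 300),
        "pdo": rng.poisson(2.0, 300),
    })
    # Equivalent-PDO crashes: severity-weighted sum (weights are illustrative).
    df["epdo"] = 12 * df["fatal"] + 4 * df["injury"] + df["pdo"]

    # Fit the 90th percentile of EPDO against exposure covariates.
    fit = smf.quantreg("epdo ~ np.log(aadt) + length", df).fit(q=0.90)
    df["q90"] = fit.predict(df)

    # Sites whose observed EPDO exceeds the fitted 90th percentile are
    # candidate hot spots.
    hot_spots = df[df["epdo"] > df["q90"]]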
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data.
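A minimal sketch of the mapping transformation and its first-order error propagation, with hypothetical extrinsic calibration, pose, and noise values. A full version of the paper's error model would also propagate pose, calibration, and timing uncertainty through the same chain.

    # Hedged sketch: map a LIDAR return into the navigation frame and
    # propagate its uncertainty to first order. All numbers are placeholders.
    import numpy as np

    def rot_z(yaw):
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    p_sensor = np.array([10.0, 2.0, 0.5])                      # point in sensor frame, m
    R_bs, t_bs = rot_z(0.02), np.array([1.2, 0.0, 0.8])        # sensor -> body (extrinsics)
    R_nb, t_nb = rot_z(1.30), np.array([500.0, 300.0, 20.0])   # body -> navigation (pose)

    # Chained transform: p_nav = R_nb (R_bs p_sensor + t_bs) + t_nb
    p_nav = R_nb @ (R_bs @ p_sensor + t_bs) + t_nb

    # First-order propagation of the sensor noise: Sigma_nav = J Sigma J^T,
    # where J is the Jacobian of p_nav w.r.t. the sensor point (rotations only,
    # since the translations drop out of the derivative).
    J = R_nb @ R_bs
    Sigma_sensor = np.diag([0.02**2, 0.02**2, 0.02**2])   # assumed range noise, m^2
    Sigma_nav = J @ Sigma_sensor @ J.T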
Abstract:
The purpose of this exploratory study is to contribute to the understanding of current music industry dynamics. The situation is undeniably dramatic: since the turn of the millennium, the global music industry has declined by US$6.2 billion in value—a fall of 16.3% in constant-dollar terms. IFPI, the trade organization representing the international recording industry, identifies a number of exogenous factors as the main drivers of the downturn. This article suggests that other factors, in addition to those identified by IFPI, may have contributed to the current difficulties. A model is presented which indicates that business strategies designed to cope with the challenging business environment have reduced product diversity, damaged profitability, and contributed to the very problem they were intended to solve.
Abstract:
Surveying 1,700 journalists from seventeen countries, this study investigates perceived influences on news work. The analysis reveals a dimensional structure of six distinct domains—political, economic, organizational, professional, and procedural influences, as well as reference groups. Across countries, these six dimensions form a hierarchical structure in which organizational, professional, and procedural influences are perceived as more powerful limits on journalists' work than political and economic influences.
Abstract:
After the terrorist attacks in the United States on 11 September 2001, terrorism and counter-terrorism efforts moved to the forefront of popular consciousness and became the focus of national security for governments worldwide. With this increased attention came an urgent interest in understanding and identifying what works in fighting terrorism (Belasco 2010). For Australia, understanding the relative effectiveness of counter-terrorism efforts in the nearby neighbours Indonesia, Thailand and the Philippines is highly relevant to national security. Indonesia, Thailand and the Philippines are all important to Australia not just because of geographic proximity, but also because of a history of economic ties and the role these countries play as Australia's regional partners...
Abstract:
Experience gained from numerous projects conducted by the U.S. Environmental Protection Agency's (EPA) Environmental Monitoring Systems Laboratory in Las Vegas, Nevada has provided insight into functional issues in the mapping, monitoring, and modeling of wetland habitats. Three case studies in poster form describe these issues as they pertain to managing wetland resources mandated under Federal laws. A multiphase project was initiated by the EPA Alaska operations office to provide detailed wetland mapping of arctic plant communities in an area under petroleum development pressure. Existing classification systems did not meet EPA needs, so a Habitat Classification System (HCS) derived from aerial photography was compiled, and photointerpretive keys were developed in conjunction with it. These products enable EPA personnel to map large, inaccessible areas of the arctic coastal plain and evaluate the sensitivity of various wetland habitats relative to petroleum development needs.
Abstract:
A numerical investigation of the behaviour of fuel injection through a porous surface in an inlet-fuelled, radical-farming scramjet is presented. The performance of porous fuel injection is compared to discrete port-hole injection at an equivalence ratio of φ ≈ 0.4 in both cases. The comparison is performed at a Mach 6.5 flow condition with a total specific enthalpy of 4.3 MJ/kg. The numerical results are compared to experiments performed in the T4 shock tunnel where available. The presented results demonstrate for the first time that porous fuel injection has the potential to outperform port-hole injectors in scramjet engines in terms of fuel-air mixing, ignition delays and achievable combustion efficiencies, despite reduced fuel penetration heights.
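For reference, the equivalence ratio quoted above is the fuel-air mass ratio normalized by its stoichiometric value. A small sketch for hydrogen-air fuelling, with hypothetical mass flows:

    # Equivalence ratio for hydrogen-air fuelling (phi ~ 0.4 above).
    # The stoichiometric ratio follows from 2 H2 + O2 -> 2 H2O with air
    # ~23.2% oxygen by mass; the mass flows are illustrative only.
    FA_STOICH = (4.0 / 32.0) * 0.232   # ~0.029 kg H2 per kg air

    def equivalence_ratio(mdot_fuel, mdot_air):
        return (mdot_fuel / mdot_air) / FA_STOICH

    print(equivalence_ratio(0.012, 1.0))   # ~0.41 for these example flows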
Abstract:
This paper reports on the experimental testing of oxygen-compatible ceramic matrix composite porous injectors in a nominally two-dimensional, hydrogen-fuelled, oxygen-enriched radical-farming scramjet in the T4 shock tunnel facility. All experiments were performed at a dynamic pressure of 146 kPa, an equivalent flight Mach number of 9.7, a stagnation pressure and enthalpy of 40 MPa and 4.3 MJ/kg respectively, and at a fuelling condition that resulted in an average equivalence ratio of 0.472. Oxygen was pre-mixed with the fuel prior to injection to achieve enrichment percentages of approximately 13%, 15% and 17%. These levels ensured that the hydrogen-oxidiser mix injected into the engine always remained too fuel-rich to sustain a flame without additional mixing with the captured air. Addition of pre-mixed oxygen to the fuel was found to significantly alter the performance of the engine, enhancing both combustion and ignition and converting a previously observed limited-combustion condition into one with a sustained and noticeable combustion-induced pressure rise. Increases in the enrichment percentage led to further increases in combustion levels and acted to reduce ignition lengths within the engine. Suppressed-combustion runs, in which a nitrogen test gas was used, confirmed that the pressure rise observed in these experiments was attributable to the oxygen enrichment and not to the increased mass injected.
Abstract:
Perfluorooctanoic acid (PFOA) and perfluorooctane sulfonic acid (PFOS) have been used for a variety of applications, including fluoropolymer processing, fire-fighting foams and surface treatments, since the 1950s. Both PFOS and PFOA are polyfluoroalkyl chemicals (PFCs), man-made compounds that are persistent in the environment and in humans; some PFCs have shown adverse effects in laboratory animals. Here we describe the application of a simple one-compartment pharmacokinetic model to estimate total intakes of PFOA and PFOS for the general population of urban areas on the east coast of Australia. Key parameters for this model are the elimination rate constants and the volume of distribution within the body. A volume of distribution of 170 mL/kg bw was calibrated for PFOA using data from two communities in the United States where residents' serum concentrations could be assumed to result primarily from a known and characterized source: drinking water contaminated with PFOA by a single fluoropolymer manufacturing facility. For PFOS, a value of 230 mL/kg bw was used, based on adjustment of the PFOA value. Applying measured Australian serum data to the model gave mean ± standard deviation intake estimates for PFOA of 1.6 ± 0.3 ng/kg bw/day for males and females >12 years of age combined, based on samples collected in 2002-2003, and 1.3 ± 0.2 ng/kg bw/day based on samples collected in 2006-2007. Mean intakes of PFOS were 2.7 ± 0.5 ng/kg bw/day for males and females >12 years of age combined for the 2002-2003 samples, and 2.4 ± 0.5 ng/kg bw/day for the 2006-2007 samples. An ANOVA of PFOA intake demonstrated significant differences by age group (p=0.03), sex (p=0.001) and date of collection (p<0.001). Estimated intake rates were highest in those aged >60 years, higher in males than in females, and higher in 2002-2003 than in 2006-2007. The same pattern was seen for PFOS intake, with significant differences by age group (p<0.001), sex (p=0.001) and date of collection (p=0.016).
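A sketch of the steady-state one-compartment back-calculation described above: intake equals serum concentration times volume of distribution times the elimination rate constant. The half-life and serum value used below are assumed illustrative inputs, not figures from the paper.

    # One-compartment steady-state intake back-calculation:
    # intake = C_serum * V_d * k_e, with k_e = ln 2 / half-life.
    import math

    def daily_intake(c_serum_ng_per_ml, half_life_years, v_d_ml_per_kg):
        k_e = math.log(2) / (half_life_years * 365.0)    # elimination rate, 1/day
        return c_serum_ng_per_ml * v_d_ml_per_kg * k_e   # ng/kg bw/day

    # PFOA: V_d = 170 mL/kg bw (from the abstract); half-life ~3.8 y and a
    # serum level of 7 ng/mL are assumed for illustration.
    print(daily_intake(7.0, 3.8, 170))   # ~0.6 ng/kg bw/day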
Abstract:
This paper presents two novel nonlinear models of U-shaped anti-roll tanks for ships, together with their linearizations. In addition, a third, simplified nonlinear model is presented. The models are derived using Lagrangian mechanics. This formulation not only simplifies the modeling process but also yields models that satisfy energy-related physical properties. The proposed nonlinear models and their linearizations are validated using model-scale experimental data. Unlike other models in the literature, the nonlinear models in this paper are valid for large roll amplitudes. Even at moderate roll angles, the nonlinear models have a mean square error, relative to experimental data, three orders of magnitude lower than that of the linear models.
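The Lagrangian workflow the authors describe can be illustrated symbolically: write kinetic and potential energies, form L = T - V, and apply the Euler-Lagrange equations. The toy quadratic energies below are placeholders for a generic coupled roll/tank-fluid system, not the paper's tank model.

    # Hedged sketch of the Lagrangian-mechanics workflow with SymPy.
    import sympy as sp

    t = sp.symbols("t")
    phi = sp.Function("phi")(t)    # ship roll angle
    tau = sp.Function("tau")(t)    # tank fluid angle
    I44, It, c, k1, k2 = sp.symbols("I44 It c k1 k2", positive=True)

    # Toy quadratic kinetic and potential energies (illustrative only).
    T = sp.Rational(1, 2) * (I44 * phi.diff(t)**2 + It * tau.diff(t)**2) \
        + c * phi.diff(t) * tau.diff(t)
    V = sp.Rational(1, 2) * (k1 * phi**2 + k2 * tau**2)
    L = T - V

    # Euler-Lagrange: d/dt(dL/dq') - dL/dq = 0 for each generalized coordinate.
    eqs = [sp.Eq(sp.diff(L.diff(q.diff(t)), t) - L.diff(q), 0) for q in (phi, tau)]
    for e in eqs:
        sp.pprint(sp.simplify(e))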
Abstract:
Process models are usually depicted as directed graphs, with nodes representing activities and directed edges representing control flow. While structured processes with pre-defined control flow have been studied in detail, flexible processes that include ad-hoc activities need further investigation. This paper presents the flexible process graph, a novel approach to modeling processes in the context of dynamic environments and adaptive process-participant behavior. The approach allows execution constraints to be defined that are more restrictive than traditional ad-hoc processes and less restrictive than traditional control flow, thereby balancing structured control flow with unstructured ad-hoc activities. The flexible process graph focuses on what can be done to perform a process; process participants' routing decisions are based on the current process state. As a formal grounding, the approach uses hypergraphs, in which each edge can associate any number of nodes. Hypergraphs are used to define the execution semantics of processes formally. We provide a process scenario to motivate and illustrate the approach.
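A minimal sketch of one way to encode such a hypergraph in code, where each hyperedge associates any number of activity nodes and carries an enabling constraint evaluated against the current process state. The activities and constraints are illustrative; the paper's formal execution semantics are richer than this.

    # Hedged sketch: hyperedges as (node set, enabling predicate) pairs.
    state = {"done": set()}

    hyperedges = [
        (frozenset({"draft", "review"}), lambda s: True),
        (frozenset({"approve"}), lambda s: "review" in s["done"]),
        (frozenset({"publish", "archive"}), lambda s: "approve" in s["done"]),
    ]

    def enabled(state):
        # An activity is enabled when its hyperedge's constraint holds in
        # the current state and the activity has not yet been performed.
        acts = set()
        for nodes, constraint in hyperedges:
            if constraint(state):
                acts |= nodes - state["done"]
        return acts

    state["done"].update({"draft", "review"})
    print(enabled(state))   # {'approve'}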
Abstract:
Lean construction and building information modeling (BIM) are quite different initiatives, but both are having profound impacts on the construction industry. A rigorous analysis of the myriad specific interactions between them indicates that a synergy exists which, if properly understood in theoretical terms, can be exploited to improve construction processes beyond the degree to which either paradigm might improve them independently. Using a matrix that juxtaposes BIM functionalities with prescriptive lean construction principles, 56 interactions have been identified, all but four of which represent constructive interaction. Although evidence for the majority of these has been found, the matrix is not considered complete, but rather a framework for research exploring the degree of validity of the interactions. Construction executives, managers, designers, and developers of information technology systems for construction can also benefit from the framework as an aid to recognizing potential synergies when planning their lean and BIM adoption strategies.
Abstract:
Unstable density-driven flow can lead to enhanced solute transport in groundwater. Only recently has the complex fingering pattern associated with free convection been documented in field settings. Electrical resistivity (ER) tomography has been used to capture a snapshot of convective instabilities at a single point in time, but a thorough transient analysis is still lacking in the literature. We present the results of a two-year experimental study at a shallow aquifer in the United Arab Emirates that was designed specifically to explore the transient nature of free convection. ER tomography data documented the presence of convective fingers following a significant rainfall event. We demonstrate that the complex fingering pattern had completely disappeared a year after the rainfall event. The observation is supported by an analysis of the aquifer halite budget and hydrodynamic modeling of the transient character of the fingering instabilities. Modeling results show that the transient dynamics of the gravitational instabilities (their initial development, infiltration into the underlying lower-density groundwater, and subsequent decay) are in agreement with the timing observed in the time-lapse ER measurements. All experimental observations and modeling results are consistent with the hypothesis that a dense brine that infiltrated into the aquifer from a surficial source was the cause of free convection at this site, and that the finite nature of the dense brine source and dispersive mixing led to the decay of instabilities with time. This study highlights the importance of the transience of free convection phenomena and suggests that these processes are more rapid than previously understood.
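Whether free convection sets in is commonly judged with a solutal Rayleigh number for a porous layer; a back-of-envelope sketch with hypothetical aquifer parameters (not the site values from this study) is below.

    # Solutal Rayleigh number, Ra = delta_rho * g * k * H / (phi * D * mu).
    # All values are illustrative placeholders, not the study's parameters.
    g = 9.81           # gravity, m/s^2
    delta_rho = 80.0   # brine vs groundwater density contrast, kg/m^3
    k = 1e-11          # permeability, m^2
    H = 10.0           # thickness of the convecting layer, m
    phi = 0.3          # porosity
    D = 1e-9           # effective solute diffusivity, m^2/s
    mu = 1e-3          # dynamic viscosity, Pa s

    Ra = delta_rho * g * k * H / (phi * D * mu)
    print(Ra)   # ~2.6e5, far above the critical ~4*pi^2, so fingering is expected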