Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods: the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method, and the Mode Central Gap (MCG) method, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and for a variety of minor-stream movement types, to compare critical gap estimates using MLE against MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
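The Monte Carlo data-generation step described above can be sketched as follows. The lognormal critical-gap distribution and exponential offered-gap headways are illustrative assumptions of this sketch, not necessarily those of the study; the (maximum rejected gap, accepted gap) pairs it produces are the input that estimators such as MLE consume.

```python
import math
import random

def simulate_drivers(n_drivers, mean_cg, sd_cg, gap_rate, seed=1):
    """Draw (max rejected gap, accepted gap) pairs for a sample of drivers.

    Each driver has a fixed critical gap (here lognormal, an illustrative
    choice) and is offered exponential major-stream gaps at `gap_rate`
    vehicles per second until the first gap at least as large as the
    critical gap is accepted."""
    rng = random.Random(seed)
    # lognormal parameters matching the target mean and standard deviation
    mu = math.log(mean_cg**2 / math.sqrt(sd_cg**2 + mean_cg**2))
    sigma = math.sqrt(math.log(1.0 + (sd_cg / mean_cg) ** 2))
    pairs = []
    for _ in range(n_drivers):
        cg = rng.lognormvariate(mu, sigma)   # this driver's critical gap (s)
        max_rejected = 0.0
        while True:
            gap = rng.expovariate(gap_rate)  # offered major-stream gap (s)
            if gap >= cg:                    # first acceptable gap is taken
                pairs.append((max_rejected, gap))
                break
            max_rejected = max(max_rejected, gap)
    return pairs

# 300 drivers, as in the study; the true critical gap of every driver lies
# between the two recorded values, which is what the estimators exploit
pairs = simulate_drivers(n_drivers=300, mean_cg=4.0, sd_cg=0.8, gap_rate=0.2)
```

The censoring is visible immediately in the output: the accepted gap overshoots the critical gap, which is why naive averaging of accepted gaps is biased and likelihood-based estimators are preferred.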
Abstract:
The authors present a qualitative and quantitative comparison of various similarity measures that form the kernel of common area-based stereo-matching systems. The authors compare classical difference and correlation measures as well as nonparametric measures based on the rank and census transforms for a number of outdoor images. For robotic applications, important considerations include robustness to image defects such as intensity variation and noise, the number of false matches, and computational complexity. In the absence of ground truth data, the authors compare the matching techniques based on the percentage of matches that pass the left-right consistency test. The authors also evaluate the discriminatory power of several match validity measures reported in the literature for eliminating false matches and for estimating match confidence. For guidance applications, it is essential to have an estimate of confidence in the three-dimensional points generated by stereo vision. Finally, a new validity measure, the rank constraint, is introduced that is capable of resolving ambiguous matches for rank transform-based matching.
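A minimal sketch of area-based matching with the left-right consistency test used above for evaluation. SAD stands in for the classical difference measures compared in the paper (rank/census matching differs only in the per-pixel cost); the tiny synthetic image pair with a known shift is our illustrative test data.

```python
import numpy as np

def sad_match(left, right, max_d, win=1):
    """Winner-take-all SAD matching along rectified scanlines."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            x0, x1 = max(x - win, 0), min(x + win + 1, w)
            best, best_d = np.inf, 0
            for d in range(min(max_d, x0) + 1):  # keep right window in bounds
                cost = np.abs(left[y, x0:x1].astype(int)
                              - right[y, x0 - d:x1 - d].astype(int)).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def lr_consistent(dl, dr, tol=1):
    """Left-right consistency test: keep matches that agree both ways."""
    h, w = dl.shape
    ok = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xr = x - dl[y, x]           # where this match lands on the right
            if 0 <= xr < w and abs(dl[y, x] - dr[y, xr]) <= tol:
                ok[y, x] = True
    return ok

# synthetic pair: the right image is the left shifted by 2 pixels
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(4, 16))
right = np.zeros_like(left)
right[:, :-2] = left[:, 2:]
dl = sad_match(left, right, max_d=4)
# right-to-left disparities via the mirror trick
dr = sad_match(np.fliplr(right), np.fliplr(left), max_d=4)[:, ::-1]
mask = lr_consistent(dl, dr)
```

Pixels near the border, where the true match is out of view, are exactly the ones the consistency test rejects, which is the behaviour the paper uses as its false-match proxy.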
Abstract:
This paper gives a review of recent progress in the design of numerical methods for computing the trajectories (sample paths) of solutions to stochastic differential equations. We give a brief survey of the area focusing on a number of application areas where approximations to strong solutions are important, with a particular focus on computational biology applications, and give the necessary analytical tools for understanding some of the important concepts associated with stochastic processes. We present the stochastic Taylor series expansion as the fundamental mechanism for constructing effective numerical methods, give general results that relate local and global order of convergence and mention the Magnus expansion as a mechanism for designing methods that preserve the underlying structure of the problem. We also present various classes of explicit and implicit methods for strong solutions, based on the underlying structure of the problem. Finally, we discuss implementation issues relating to maintaining the Brownian path, efficient simulation of stochastic integrals and variable-step-size implementations based on various types of control.
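The first member of the stochastic Taylor hierarchy discussed above is the Euler-Maruyama method, of strong order 0.5. A minimal sketch, using geometric Brownian motion as an assumed test problem (the drift and diffusion coefficients are illustrative):

```python
import math
import random

def euler_maruyama(f, g, x0, t_end, n_steps, seed=0):
    """Euler-Maruyama scheme for dX = f(X) dt + g(X) dW.

    Truncating the stochastic Taylor expansion after the dW term gives
    strong order 0.5; retaining 0.5*g*g'*(dW**2 - dt) as well would give
    the order-1.0 Milstein scheme."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        x = x + f(x) * dt + g(x) * dW
        path.append(x)
    return path

# geometric Brownian motion dX = mu*X dt + sigma*X dW
mu, sigma = 0.05, 0.2
path = euler_maruyama(lambda x: mu * x, lambda x: sigma * x,
                      x0=1.0, t_end=1.0, n_steps=1000)
```

Variable-step-size and higher-strong-order variants, as surveyed above, change only how the Brownian path is maintained and how many expansion terms are retained.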
Abstract:
Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer.
Abstract:
Now, as in earlier periods of acute change in the media environment, new disciplinary articulations are producing new methods for media and communication research. At the same time, established media and communication studies methods are being recombined, reconfigured, and remediated alongside their objects of study. This special issue of JOBEM seeks to explore the conceptual, political, and practical aspects of emerging methods for digital media research. It does so at the conjuncture of a number of important contemporary trends: the rise of a "third wave" of the Digital Humanities and the "computational turn" (Berry, 2011) associated with natively digital objects and the methods for studying them; the apparently ubiquitous Big Data paradigm, with its various manifestations across academia, business, and government, that brings with it a rapidly increasing interest in social media communication and online "behavior" from the "hard" sciences; along with the multisited, embodied, and emplaced nature of everyday digital media practice.
Abstract:
Fractional mathematical models represent a new approach to modelling complex spatial problems in which there is heterogeneity at many spatial and temporal scales. In this paper, a two-dimensional fractional FitzHugh-Nagumo monodomain model with zero Dirichlet boundary conditions is considered. The model consists of a coupled space fractional diffusion equation (SFDE) and an ordinary differential equation. For the SFDE, we first consider the numerical solution of the Riesz fractional nonlinear reaction-diffusion model and compare it to the solution of a fractional-in-space nonlinear reaction-diffusion model. We present two novel numerical methods for the two-dimensional fractional FitzHugh-Nagumo monodomain model, using the shifted Grünwald-Letnikov method and the matrix transform method, respectively. Finally, some numerical examples are given to exhibit the consistency of our computational solution methodologies. The numerical results demonstrate the effectiveness of the methods.
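A minimal sketch of the (shifted) Grünwald-Letnikov discretisation named above. The test function x² and the step size are our choices; the exact Riemann-Liouville derivative of x² is known in closed form, which makes the check self-contained.

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    generated by the standard recurrence w_k = w_{k-1} * (k-1-alpha)/k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f, alpha, x, h, shift=0):
    """Grunwald-Letnikov approximation of the Riemann-Liouville derivative
    with lower limit 0; shift=1 gives the shifted formula typically used
    for stable diffusion schemes when 1 < alpha <= 2."""
    n = int(round(x / h))
    w = gl_weights(alpha, n)
    s = sum(wk * f(x - (k - shift) * h)
            for k, wk in enumerate(w) if x - (k - shift) * h >= 0.0)
    return s / h**alpha

# check against the exact result: D^alpha x^2 = Gamma(3)/Gamma(3-alpha) x^(2-alpha)
alpha = 1.5
exact = 2.0 / math.gamma(3.0 - alpha)                  # value at x = 1
approx = gl_derivative(lambda t: t * t, alpha, x=1.0, h=1e-3)
```

The unshifted formula is first-order accurate; in implicit diffusion schemes the one-point shift is what restores unconditional stability for orders between 1 and 2.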
Abstract:
Transport processes within heterogeneous media may exhibit non-classical diffusion or dispersion; that is, behaviour not adequately described by the classical theory of Brownian motion and Fick's law. We consider a space fractional advection-dispersion equation based on a fractional Fick's law. The equation involves the Riemann-Liouville fractional derivative, which arises from assuming that particles may make large jumps. Finite difference methods for solving this equation have been proposed by Meerschaert and Tadjeran. In the variable coefficient case, the product rule is first applied, and then the Riemann-Liouville fractional derivatives are discretised using standard and shifted Grünwald formulas, depending on the fractional order. In this work, we consider a finite volume method that deals directly with the equation in conservative form. Fractionally-shifted Grünwald formulas are used to discretise the fractional derivatives at control volume faces. We compare the two methods for several case studies from the literature, highlighting the convenience of the finite volume approach.
Abstract:
Researchers in the social sciences and humanities have been interested in blogs, online diaries, and online journals for a decade now. Even though the growth rate of the blogosphere has stagnated since the heyday of blogging in the 2000s, blogs remain one of the most significant genres of internet-based communication. Indeed, after the mass migration to Facebook, Twitter, and other more recently emerged communication tools, what remains is a somewhat smaller but all the more firmly established blogosphere of committed and dedicated participants. Blogs are now accepted as part of institutional, personal, and group communication strategies. In style and content, they sit between the more static information of conventional websites and the constantly updated Facebook and Twitter newsfeeds. Blogs allow their authors (and their commenters) to think through particular topics at lengths ranging from a few hundred to a few thousand words, to go into detail in shorter posts, and, where appropriate, to publish more thoroughly considered texts elsewhere. They are also a very flexible medium: images, audio, video, and other materials can be embedded effortlessly, as can, of course, the fundamental instrument of blogging: hyperlinks.
Abstract:
There has been considerable recent work on the development of energy conserving one-step methods that are not symplectic. Here we extend these ideas to stochastic Hamiltonian problems with additive noise and show that there are classes of Runge-Kutta methods that are very effective in preserving the expectation of the Hamiltonian, but care has to be taken in how the Wiener increments are sampled at each timestep. Some numerical simulations illustrate the performance of these methods.
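The paper's Runge-Kutta methods are not reproduced here, but the property at stake can be illustrated with the implicit midpoint rule (our choice of method) on a linear oscillator with additive noise, for which the expected Hamiltonian should grow exactly as E[H(t)] = H(0) + σ²t/2:

```python
import math
import random

def midpoint_oscillator(q, p, dt, sigma, n_steps, rng):
    """Implicit midpoint rule for dq = p dt, dp = -q dt + sigma dW.
    For this linear problem the implicit stage has a closed-form solve."""
    a = dt / 2.0
    for _ in range(n_steps):
        s = sigma * rng.gauss(0.0, math.sqrt(dt))  # Wiener increment term
        p_new = ((1 - a * a) * p - 2 * a * q + s) / (1 + a * a)
        q = q + a * (p + p_new)
        p = p_new
    return q, p

# Monte Carlo check of the expected energy H = (q^2 + p^2)/2
rng = random.Random(42)
sigma, T, n_steps, n_paths = 0.5, 1.0, 50, 4000
total = 0.0
for _ in range(n_paths):
    q, p = midpoint_oscillator(1.0, 0.0, T / n_steps, sigma, n_paths and n_steps, rng)
    total += (q * q + p * p) / 2.0
mean_H = total / n_paths   # should be close to 0.5 + 0.5 * sigma**2 * T
```

As the abstract cautions, how the Wiener increments are sampled matters; here each step draws an independent N(0, dt) increment, and the deterministic part of the map is exactly energy-conserving.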
Abstract:
Background: The management of unruptured aneurysms is controversial, with the decision to treat influenced by aneurysm characteristics including size and morphology. Aneurysmal bleb formation is thought to be associated with an increased risk of rupture. Objective: To correlate computational fluid dynamics (CFD) indices with bleb formation. Methods: Anatomical models were constructed from three-dimensional rotational angiogram (3DRA) data in 27 patients with cerebral aneurysms harbouring single blebs. Additional models representing the aneurysm before bleb formation were constructed by digitally removing the bleb. We characterised the haemodynamic features of models both with and without the bleb using CFD. Flow structure, wall shear stress (WSS), pressure and oscillatory shear index (OSI) were analysed. Results: Blebs were located at or adjacent to the point of maximal WSS in a statistically significant proportion of cases (74.1%, p=0.019), irrespective of rupture status. Aneurysmal blebs were related to the inflow or outflow jet in 88.9% of cases (p<0.001), whilst 11.1% were unrelated. Maximal wall pressure and OSI were not significantly related to bleb location. The bleb region attained a lower WSS following its formation in 96.3% of cases (p<0.001), and this WSS was also lower than the average aneurysm WSS in 86% of cases (p<0.001). Conclusion: Cerebral aneurysm blebs generally form at or adjacent to the point of maximal WSS and are aligned with major flow structures. Wall pressure and OSI do not contribute to determining bleb location. The measurement of WSS using CFD models may potentially predict bleb formation and thus improve the assessment of rupture risk in unruptured aneurysms.
Abstract:
Flexible fixation, or so-called 'biological fixation', has been shown to encourage the formation of fracture callus, leading to better healing outcomes. However, the nature of the relationship between the degree of mechanical stability provided by flexible fixation and optimal healing outcomes is not fully understood. In this study, we developed a validated quantitative model to predict how cells in the fracture callus might respond to changes in their mechanical microenvironment due to different configurations of the locking compression plate (LCP) in clinical practice, particularly in the early stage of healing. The model predicts that increasing the flexibility of the LCP, by changing the bone-plate distance (BPD) or the plate working length (WL), could enhance interfragmentary strain in the presence of a relatively large gap size (>3 mm). Furthermore, a conventional LCP normally results in asymmetric tissue development during the early stage of callus formation, and increasing BPD or WL is insufficient to alleviate this problem.
Abstract:
Carbon nanotubes with specific nitrogen doping are proposed for controllable, highly selective, and reversible CO2 capture. Using density functional theory incorporating long-range dispersion corrections, we investigated the adsorption behavior of CO2 on (7,7) single-walled carbon nanotubes (CNTs) with several nitrogen doping configurations and varying charge states. Pyridinic-nitrogen incorporation in CNTs is found to induce an increasing CO2 adsorption strength with electron injection, leading to highly selective CO2 adsorption in comparison with N2. This functionality could enable intrinsically reversible CO2 adsorption, as capture/release can be controlled by switching the charge-carrying state of the system on/off. This phenomenon is verified for a number of different models and theoretical methods, with clear ramifications for the possibility of implementation with a broader class of graphene-based materials. A scheme for the implementation of this remarkable reversible electrocatalytic CO2-capture phenomenon is considered.
Abstract:
The reaction of the aromatic distonic peroxyl radical cations N-methyl pyridinium-4-peroxyl (PyrOO•⁺) and 4-(N,N,N-trimethyl ammonium)phenyl peroxyl (AnOO•⁺) with symmetrical dialkyl alkynes 10 a–c was studied in the gas phase by mass spectrometry. PyrOO•⁺ and AnOO•⁺ were produced through reaction of the respective distonic aryl radical cations Pyr•⁺ and An•⁺ with oxygen, O₂. For the reaction of Pyr•⁺ with O₂, an absolute rate coefficient of k₁ = 7.1×10⁻¹² cm³ molecule⁻¹ s⁻¹ and a collision efficiency of 1.2 % were determined at 298 K. The strongly electrophilic PyrOO•⁺ reacts with 3-hexyne and 4-octyne with absolute rate coefficients of k(hexyne) = 1.5×10⁻¹⁰ cm³ molecule⁻¹ s⁻¹ and k(octyne) = 2.8×10⁻¹⁰ cm³ molecule⁻¹ s⁻¹, respectively, at 298 K. The reaction of both PyrOO•⁺ and AnOO•⁺ proceeds by radical addition to the alkyne, whereas propargylic hydrogen abstraction was observed as a very minor pathway, and only in the reactions involving PyrOO•⁺. A major reaction pathway of the vinyl radicals 11 formed upon PyrOO•⁺ addition to the alkynes involves γ-fragmentation of the peroxyl O–O bond and formation of PyrO•⁺. The PyrO•⁺ is rapidly trapped by intermolecular hydrogen abstraction, presumably from a propargylic methylene group in the alkyne. The reaction of the less electrophilic AnOO•⁺ with alkynes is considerably slower and resulted in formation of AnO•⁺ as the only charged product. These findings suggest that electrophilic aromatic peroxyl radicals act as oxygen atom donors, which can be used to generate α-oxo carbenes 13 (or isomeric species) from alkynes in a single step. Besides γ-fragmentation, a number of competing unimolecular dissociative reactions also occur in the vinyl radicals 11. The potential energy diagrams of these reactions were explored with density functional theory and ab initio methods, which enabled identification of the chemical structures of the most important products.
Jacobian-free Newton-Krylov methods with GPU acceleration for computing nonlinear ship wave patterns
Abstract:
The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-dimensional ship wave patterns, such as the shape of steep waves close to their limiting configuration, in a manner that has been possible in the two-dimensional analogue for some time.
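A sketch of the Jacobian-free Newton-Krylov approach with a preconditioner taken from a linearised operator, using SciPy's `newton_krylov`. The boundary-value problem u'' = e^u on (0,1) with zero Dirichlet data is an illustrative stand-in for the paper's free-surface equations; the banded preconditioner here is simply the tridiagonal discrete Laplacian, i.e. the Jacobian of the linearised problem.

```python
import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, splu

n = 100                      # interior grid points
h = 1.0 / (n + 1)

def residual(u):
    """F(u) = u'' - exp(u), second-order central differences,
    with u(0) = u(1) = 0 folded into the end stencils."""
    r = np.empty_like(u)
    r[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
    r[0] = (-2 * u[0] + u[1]) / h**2
    r[-1] = (u[-2] - 2 * u[-1]) / h**2
    return r - np.exp(u)

# banded (tridiagonal) preconditioner from the linearised operator u''
lap = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tocsc() / h**2
lu = splu(lap)
M = LinearOperator((n, n), matvec=lu.solve)

# Jacobian-free: newton_krylov only ever evaluates residual(u), approximating
# Jacobian-vector products by finite differences inside the Krylov iteration
u = newton_krylov(residual, np.zeros(n), method='lgmres',
                  inner_M=M, f_tol=1e-10)
```

Since u'' = e^u > 0, the converged solution is convex with zero boundary values and hence negative in the interior, which provides an easy sanity check; the same structure (cheap residual, frozen banded preconditioner) is what makes the GPU acceleration described above effective.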