402 results for Gaussian probability function


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present the application of a non-linear dimensionality reduction technique for the learning and probabilistic classification of hyperspectral images. Hyperspectral imaging spectroscopy is an emerging technique for geological investigations from airborne or orbital sensors. It gives much greater information content per pixel than a normal colour image, which should greatly help with the autonomous identification of natural and man-made objects in unfamiliar terrains for robotic vehicles. However, the large information content of such data makes interpretation of hyperspectral images time-consuming and user-intensive. We propose the use of Isomap, a non-linear manifold learning technique, combined with Expectation Maximisation in graphical probabilistic models for learning and classification. Isomap is used to find the underlying manifold of the training data. This low-dimensional representation of the hyperspectral data facilitates the learning of a Gaussian Mixture Model representation, whose joint probability distributions can be calculated offline. The learnt model is then applied to the hyperspectral image at runtime and data classification can be performed.
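As a rough illustration of the pipeline described above (an Isomap embedding followed by Gaussian Mixture Model learning), the Python sketch below uses scikit-learn; the neighbourhood size, embedding dimension and component count are placeholder assumptions, not values taken from the paper.

```python
# Sketch of the Isomap -> GMM pipeline (scikit-learn).
# n_neighbors, n_components and the mixture size are illustrative assumptions.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_train = rng.random((500, 100))   # stand-in for hyperspectral pixel spectra

# Offline: learn the low-dimensional manifold of the training data.
isomap = Isomap(n_neighbors=10, n_components=3)
Z_train = isomap.fit_transform(X_train)

# Offline: fit a Gaussian Mixture Model in the embedded space.
gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0)
gmm.fit(Z_train)

# Runtime: embed new pixels and classify by most probable mixture component.
X_new = rng.random((20, 100))
Z_new = isomap.transform(X_new)
labels = gmm.predict(Z_new)            # hard class labels
posteriors = gmm.predict_proba(Z_new)  # per-class probabilities
```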

Relevance:

20.00%

Publisher:

Abstract:

Chlamydia trachomatis is an obligate intracellular bacterial pathogen that infects the genital and ocular mucosa of humans, causing infections that can lead to pelvic inflammatory disease, infertility, and blinding trachoma. C. pneumoniae is a respiratory pathogen that causes 12–15% of community-acquired pneumonia. Both chlamydial species were long believed to be restricted to the epithelia of the genital, ocular, and respiratory mucosa; however, increasing evidence suggests that both pathogens can be isolated from the peripheral blood of both healthy individuals and patients with inflammatory conditions such as coronary artery disease and asthma. Chlamydia can also be isolated from the brain tissues of patients with degenerative neurological disorders such as Alzheimer’s disease and multiple sclerosis, and from certain lymphomas. An increasing number of in vitro studies suggest that some chlamydial species can infect immune cells, at least at low levels. These infections may alter immune cell function in a way that promotes chlamydial persistence in the host and contributes to the progression of several chronic inflammatory diseases. In this paper, we review the evidence for the growth of Chlamydia in immune cells, particularly monocytes/macrophages and dendritic cells, and describe how infection may affect the function of these cells.

Relevance:

20.00%

Publisher:

Abstract:

DNA exists predominantly in a duplex form that is preserved via specific base pairing. This base pairing affords a considerable degree of protection against chemical or physical damage and preserves coding potential. However, there are many situations, e.g. during DNA damage and programmed cellular processes such as DNA replication and transcription, in which the DNA duplex is separated into two single-stranded DNA (ssDNA) strands. This ssDNA is vulnerable to attack by nucleases, binding by inappropriate proteins and chemical attack. It is very important to control the generation of ssDNA and to protect it when it forms, and for this reason all cellular organisms and many viruses encode a ssDNA binding protein (SSB). All known SSBs use an oligonucleotide/oligosaccharide-binding (OB) fold domain for DNA binding. SSBs have multiple roles in binding and sequestering ssDNA, detecting DNA damage, stimulating strand-exchange proteins and helicases, and mediating protein–protein interactions. Recently, two additional human SSBs have been identified that are more closely related to bacterial and archaeal SSBs; prior to this, it was believed that replication protein A (RPA) was the only human equivalent of bacterial SSB. RPA is thought to be required for most aspects of DNA metabolism, including DNA replication, recombination and repair. This review discusses in further detail the biological pathways in which human SSBs function.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a method of voice activity detection (VAD) for high-noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second system uses a novel combination of cross-correlation and the zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise-robust. The scores output by the two systems are then merged using weighted sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. The accuracy of AZR was compared to state-of-the-art and standardised VAD methods and was shown to outperform the best-performing system, with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, created using real recordings from high-noise environments.
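For illustration, below is a minimal Python/NumPy sketch of the MaxPeak feature described above (the maximum peak of the normalised time-domain autocorrelation) together with a generic weighted sum fusion; the frame length, lag search range and fusion weight are assumptions, not the paper's settings.

```python
# Sketch of the MaxPeak feature: maximum peak of the normalised
# time-domain autocorrelation of a speech frame. Frame length and
# lag search range are illustrative assumptions.
import numpy as np

def max_peak(frame, min_lag=20, max_lag=320):
    """Largest normalised autocorrelation value over plausible pitch lags."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0
    ac = ac / ac[0]                       # normalise so ac[0] == 1
    return float(np.max(ac[min_lag:max_lag]))

def fused_vad_score(score_maxpeak, score_crosscorr, w=0.5):
    """Weighted sum fusion of the two system scores (weight is assumed)."""
    return w * score_maxpeak + (1.0 - w) * score_crosscorr

# Example on a synthetic voiced-like frame (200 Hz tone at 16 kHz):
frame = np.sin(2 * np.pi * 200 * np.arange(640) / 16000)
print(max_peak(frame))
```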

Relevance:

20.00%

Publisher:

Abstract:

The success rate of carrier phase ambiguity resolution (AR) is the probability that the ambiguities are successfully fixed to their correct integer values. In existing work, an exact success rate formula for the integer bootstrapping estimator has been used as a sharp lower bound for the integer least squares (ILS) success rate. Rigorous computation of the success rate for the more general ILS solutions has been considered difficult because of the complexity of the ILS ambiguity pull-in region and the computational load of integrating the multivariate probability density function. The contributions of this work are twofold. First, the pull-in region, mathematically expressed as the vertices of a polyhedron, is represented by a multi-dimensional grid, over which the cumulative probability can be integrated with the multivariate normal cumulative distribution function (mvncdf) available in Matlab. The bivariate case is studied, where the pull-in region is usually defined as a hexagon and the probability is easily obtained using mvncdf at all the grid points within the convex polygon. Second, the paper compares the computed integer rounding and integer bootstrapping success rates, and the lower and upper bounds of the ILS success rate, to the actual ILS AR success rates obtained from a 24 h GPS data set for a 21 km baseline. The results demonstrate that the upper bound on the ILS AR probability given in the existing literature agrees well with the actual ILS success rate, while the success rate computed with the integer bootstrapping method is a quite sharp approximation to it. The results also show that variations or uncertainty in the unit-weight variance estimates from epoch to epoch significantly affect the success rates computed by the different methods, and thus deserve more attention in order to obtain useful success probability predictions.
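For reference, the exact success rate of the integer bootstrapping estimator mentioned above has the well-known closed form P = Π_i (2Φ(1/(2σ_i)) − 1), where the σ_i are the conditional standard deviations of the (ideally decorrelated) ambiguities and Φ is the standard normal CDF. Below is a minimal Python sketch, with SciPy standing in for the Matlab routines named in the abstract and with illustrative σ values:

```python
# Sketch of the standard integer bootstrapping success rate
#   P = prod_i ( 2 * Phi(1 / (2 * sigma_i)) - 1 ),
# where sigma_i are the conditional standard deviations of the
# (ideally decorrelated) ambiguities and Phi is the standard normal CDF.
import numpy as np
from scipy.stats import norm

def bootstrapping_success_rate(cond_std):
    """Exact success rate of the integer bootstrapping estimator."""
    cond_std = np.asarray(cond_std, dtype=float)
    return float(np.prod(2.0 * norm.cdf(1.0 / (2.0 * cond_std)) - 1.0))

# Example with assumed conditional standard deviations (cycles):
print(bootstrapping_success_rate([0.05, 0.08, 0.12]))
```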

Relevance:

20.00%

Publisher:

Abstract:

Skid resistance is a condition parameter characterising the contribution that a road makes to the friction between a road surface and a vehicle tyre. Studies of traffic crash histories around the world have consistently found that a disproportionate number of crashes occur where the road surface has a low level of surface friction and/or surface texture, particularly when the road surface is wet. Various research results published over many years have tried to quantify the influence of skid resistance on accident occurrence and to characterise a correlation between skid resistance and accident frequency. Most of these studies used simple statistical correlation methods in analysing skid resistance and crash data.

Preliminary findings of a systematic and extensive literature search conclude that there is rarely a single causation factor in a crash. Findings from research projects do affirm various levels of correlation between skid resistance and accident occurrence. Studies indicate that the level of skid resistance at critical places such as intersections, curves, roundabouts, ramps and approaches to pedestrian crossings needs to be well maintained.

Management of risk is an integral aspect of the Queensland Department of Main Roads (QDMR) strategy for managing its infrastructure assets. The risk-based approach has been used in many areas of infrastructure engineering; however, very limited information has been reported on using a risk-based approach to mitigate crash rates related to the road surface. Low skid resistance and surface texture may increase the risk of traffic crashes.

The objectives of this paper are to explore current issues of skid resistance in relation to crashes, to provide a framework for a probability-based approach to be adopted by QDMR in assessing the relationship between crashes and pavement properties, and to explain why the probability-based approach is a suitable tool for QDMR in reducing accident rates related to skid resistance.

Relevance:

20.00%

Publisher:

Abstract:

Road accidents are of great concern to road and transport departments around the world, as they cause tremendous loss and danger to the public. Reducing accident rates and crash severity are imperative goals that governments, road and transport authorities, and researchers aim to achieve. In Australia, road crash trauma costs the nation A$15 billion annually. Five people are killed, and 550 are injured, every day. Each fatality costs the taxpayer A$1.7 million, and serious injury cases can cost the taxpayer many times the cost of a fatality. Crashes are in general uncontrolled events and are dependent on a number of interrelated factors such as driver behaviour, traffic conditions, travel speed, road geometry and condition, and vehicle characteristics (e.g. tyre type, pressure and condition, and suspension type and condition). Skid resistance is considered one of the most important surface characteristics as it has a direct impact on traffic safety. Attempts have been made worldwide to study the relationship between skid resistance and road crashes. Most of these studies used statistical regression and correlation methods in analysing the relationships between skid resistance and road crashes, and their outcomes were mixed and not conclusive. The objective of this paper is to present a probability-based method, from an ongoing study, for identifying the relationship between skid resistance and road crashes. Historical skid resistance and crash data for a road network located on the tropical east coast of Queensland were analysed using the probability-based method. The analysis methodology and results of the relationships between skid resistance, road characteristics and crashes are presented.

Relevance:

20.00%

Publisher:

Abstract:

The vibration serviceability limit state is an important design consideration for two-way, suspended concrete floors that is not always well understood by many practicing structural engineers. Although the field of floor vibration has been extensively developed, at present there are no convenient design tools that deal with this problem. Results from this research have enabled the development of a much-needed new method for assessing the vibration serviceability of flat, suspended concrete floors in buildings, named the Response Coefficient-Root Function (RCRF) method. Full-scale laboratory tests were conducted on a post-tensioned floor specimen at Queensland University of Technology’s structural laboratory. Special support brackets were fabricated to perform as frictionless, pinned connections at the corners of the specimen. A series of static and dynamic tests were performed in the laboratory to obtain basic material and dynamic properties of the specimen. Finite element models were calibrated against data collected from the laboratory experiments, and computational finite element analysis was extended to investigate a variety of floor configurations. Field measurements of floors in existing buildings are in good agreement with the computational studies. Results from this parametric investigation have led to the development of a new approach for predicting the design frequencies and accelerations of flat concrete floor structures. The RCRF method is a convenient tool to assist structural engineers in designing for the vibration serviceability limit state of in-situ concrete floor systems.

Relevance:

20.00%

Publisher:

Abstract:

This paper develops a general theory of validation gating for non-linear non-Gaussian models. Validation gates are used in target tracking to cull very unlikely measurement-to-track associations, before remaining association ambiguities are handled by a more comprehensive (and expensive) data association scheme. The essential property of a gate is to accept a high percentage of correct associations, thus maximising track accuracy, but provide a sufficiently tight bound to minimise the number of ambiguous associations. For linear Gaussian systems, the ellipsoidal validation gate is standard, and possesses the statistical property whereby a given threshold will accept a certain percentage of true associations. This property does not hold for non-linear non-Gaussian models. As a system departs from linear-Gaussian, the ellipsoid gate tends to reject a higher than expected proportion of correct associations and permit an excess of false ones. In this paper, the concept of the ellipsoidal gate is extended to permit correct statistics for the non-linear non-Gaussian case. The new gate is demonstrated by a bearing-only tracking example.
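For context, the standard ellipsoidal gate for the linear Gaussian case accepts a measurement when its squared Mahalanobis distance to the predicted measurement falls below a chi-square threshold; the sketch below shows only this baseline gate (not the paper's non-linear non-Gaussian extension), with an arbitrarily chosen acceptance probability.

```python
# Baseline ellipsoidal validation gate for the linear Gaussian case
# (the paper's contribution extends this to non-linear non-Gaussian
# models; this sketch is only the standard gate it builds on).
import numpy as np
from scipy.stats import chi2

def ellipsoidal_gate(z, z_pred, S, accept_prob=0.99):
    """Accept measurement z if its Mahalanobis distance to the predicted
    measurement z_pred (innovation covariance S) lies inside the gate."""
    innovation = np.asarray(z) - np.asarray(z_pred)
    d2 = innovation @ np.linalg.solve(S, innovation)   # squared Mahalanobis distance
    threshold = chi2.ppf(accept_prob, df=len(innovation))
    return d2 <= threshold

# Example: 2-D measurement with an assumed innovation covariance.
S = np.array([[2.0, 0.3], [0.3, 1.0]])
print(ellipsoidal_gate([1.1, -0.4], [1.0, 0.0], S))
```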

Relevance:

20.00%

Publisher:

Abstract:

The combination of alcohol and driving is a major health and economic burden to most communities in industrialised countries. The total cost of crashes for Australia in 1996 was estimated at approximately 15 billion dollars, and the costs for fatal crashes were about 3 billion dollars (BTE, 2000). According to the Bureau of Infrastructure, Transport and Regional Development and Local Government (2009; BITRDLG), the overall cost of road fatality crashes for 2006 was $3.87 billion, with a single fatal crash costing an estimated $2.67 million. A major contributing factor to crashes involving serious injury is alcohol intoxication while driving. It is a well-documented fact that consumption of liquor impairs judgment of speed and distance and increases involvement in higher-risk behaviours (Waller, Hansen, Stutts, & Popkin, 1986a; Waller et al., 1986b). Waller et al. (1986a; b) assert that liquor impairs psychomotor function and therefore renders the driver impaired in a crisis situation. This impairment includes: vision (degraded), information processing (slowed), steering, and performing two tasks at once in congested traffic (Moskowitz & Burns, 1990). As blood alcohol concentration (BAC) levels increase, the risk of crashing and of fatality increases exponentially (Department of Transport and Main Roads, 2009; DTMR). According to Compton et al. (2002), as cited in the Department of Transport and Main Roads (2009), crash risk based on probability is five times higher at a BAC of 0.10 than at a BAC of 0.00. The type of injury patterns sustained also tends to be more severe when liquor is involved, especially with injuries to the brain (Waller et al., 1986b). Single and Rohl (1997) reported that 30% of all fatal crashes in Australia where alcohol involvement was known were associated with a BAC above the legal limit of 0.05 g/100 ml. Alcohol-related crashes therefore contribute a third of the total cost of fatal crashes (i.e. $1 billion annually), and crashes where alcohol is involved are more likely to result in death or serious injury (ARRB Transport Research, 1999). It is a major concern that a drug capable of such impairment is the most available and popular drug in Australia (Australian Institute of Health and Welfare, 2007; AIHW). According to the AIHW (2007), 89.9% of the approximately 25,000 Australians over the age of 14 surveyed had consumed alcohol at some point in time, and 82.9% had consumed liquor in the previous year. This study found that 12.1% of individuals admitted to driving a motor vehicle whilst intoxicated. In general, males consumed more liquor in all age groups. In Queensland there were 21,503 road crashes in 2001, involving 324 fatalities, with alcohol and/or drugs the largest contributing factor (Road Traffic Report, 2001); there were 23,438 road crashes in 2004, involving 289 fatalities, again with alcohol and/or drugs as the largest contributing factor (DTMR, 2009). Although measures such as random breath testing have been effective in reducing the road toll (Watson, Fraine & Mitchell, 1995), the recidivist drink driver remains a serious problem. These findings were later supported by research by Leal, King, and Lewis (2006). This Queensland study found that of the 24,661 drink drivers intercepted in 2004, 3,679 (14.9%) were recidivists with multiple drink driving convictions in the previous three years (Leal et al., 2006).

The legal definition of the term “recidivist” is consistent with the Transport Operations (Road Use Management) Act (1995) and is assigned to individuals who have been charged with multiple drink driving offences in the previous five years. In Australia, relatively little attention has been given to prevention programs that target high-risk repeat drink drivers. However, over the last ten years a rehabilitation program specifically designed to reduce recidivism among repeat drink drivers has been operating in Queensland. The program, formally known as the “Under the Limit” (UTL) drink driving rehabilitation program, was designed and implemented by the research team at the Centre for Accident Research and Road Safety in Queensland with funding from the Federal Office of Road Safety and the Institute of Criminology (see Sheehan, Schonfeld & Davey, 1995). By 2009 over 8,500 drink driving offenders had been referred to the program (Australian Institute of Crime, 2009).

Relevance:

20.00%

Publisher:

Abstract:

Estimating and predicting degradation processes of engineering assets is crucial for reducing cost and ensuring the productivity of enterprises. Assisted by modern condition monitoring (CM) technologies, most asset degradation processes can be revealed by various degradation indicators extracted from CM data. Maintenance strategies developed using these degradation indicators (i.e. condition-based maintenance) are more cost-effective, because unnecessary maintenance activities are avoided when an asset is still in a decent health state. A practical difficulty in condition-based maintenance (CBM) is that, in most situations, degradation indicators extracted from CM data can only partially reveal asset health states. Underestimating this uncertainty in the relationships between degradation indicators and health states can cause excessive false alarms or failures without pre-alarms. The state space model provides an efficient approach to describing a degradation process using indicators that only partially reveal health states. However, existing state space models that describe asset degradation processes largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete time assumption requires that failures and inspections only happen at fixed intervals. The discrete state assumption entails discretising continuous degradation indicators, which requires expert knowledge and often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This research proposes a Gamma-based state space model, free of the discrete time, discrete state, linear and Gaussian assumptions, to model partially observable degradation processes. Monte Carlo-based algorithms are developed to estimate model parameters and asset remaining useful lives. In addition, this research also develops a continuous state partially observable semi-Markov decision process (POSMDP) to model a degradation process that follows the Gamma-based state space model and is subject to various maintenance strategies; optimal maintenance strategies are obtained by solving the POSMDP. Simulation studies were performed in MATLAB, and case studies using data from an accelerated life test of a gearbox and from the liquefied natural gas industry were also conducted. The results show that the proposed Monte Carlo-based EM algorithm can estimate model parameters accurately. The results also show that the proposed Gamma-based state space model fits the monotonically increasing degradation data from the gearbox accelerated life test better than linear Gaussian state space models. Furthermore, both the simulation studies and the case studies show that the prediction algorithm based on the Gamma-based state space model can accurately identify the mean value and confidence interval of asset remaining useful lives. In addition, the simulation study shows that the proposed maintenance strategy optimisation method based on the POSMDP is more flexible than one that assumes a predetermined strategy structure and uses renewal theory. Moreover, the simulation study also shows that the proposed maintenance optimisation method can obtain more cost-effective strategies than a recently published maintenance strategy optimisation method, by optimising the next maintenance activity and the waiting time until the next maintenance activity simultaneously.
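As a toy illustration of the Gamma-process view of degradation underlying the model above, the sketch below simulates monotonically increasing degradation paths and estimates remaining useful life by Monte Carlo; the shape and scale parameters and the failure threshold are invented for illustration, and this is not the thesis's full state space model or EM algorithm.

```python
# Toy Monte Carlo sketch of a Gamma-process degradation model:
# increments are Gamma-distributed, so paths are monotonically
# increasing (unlike a linear Gaussian state space model).
# Shape/scale parameters and the failure threshold are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate_rul(x0, threshold, shape, scale, n_paths=10000, max_steps=1000):
    """Monte Carlo estimate of remaining useful life (in time steps)
    from current degradation level x0 until the failure threshold."""
    lives = np.full(n_paths, max_steps, dtype=float)
    x = np.full(n_paths, x0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    for t in range(1, max_steps + 1):
        x[alive] += rng.gamma(shape, scale, size=alive.sum())
        newly_failed = alive & (x >= threshold)
        lives[newly_failed] = t
        alive &= ~newly_failed
        if not alive.any():
            break
    return lives.mean(), np.percentile(lives, [5, 95])

mean_rul, ci = simulate_rul(x0=2.0, threshold=10.0, shape=0.5, scale=0.2)
print(mean_rul, ci)   # mean RUL and a 90% interval
```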

Relevance:

20.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence no “gold standard” test is currently available to assess tear film integrity. Improving techniques for the assessment of tear film quality is therefore of clinical significance and the main motivation for the work described in this thesis.

In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and their reflection from the ocular surface imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern; when the tear film surface presents irregularities, the pattern becomes irregular due to light scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film. However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to clearly show a difference between bare-eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on TFSQ.

Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analyzing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome. The LSI technique gave the best results under both natural blinking conditions and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during this clinical study, was a lack of sensitivity to quantify the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis is transformed from Cartesian to polar coordinates, converting the concentric-circles pattern into a quasi-straight-line image from which a block statistics value is extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study, based on ray-tracing techniques and topographical models of the tear film, was undertaken to fully understand the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model such time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria. Special attention was given to a commonly used fit, the polynomial function, and to considerations for selecting the appropriate model order to ensure the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool for assessing tear film surface quality in the future.
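A rough sketch of the Cartesian-to-polar transformation step described above, in which OpenCV's warpPolar unwraps the concentric Placido rings into quasi-straight lines before a block statistic is extracted; the block size and the choice of local standard deviation as the statistic are assumptions for illustration.

```python
# Sketch of the polar-transform metric: the annular Placido pattern is
# unwrapped so concentric rings become quasi-straight lines, then a
# block statistic summarises local pattern disturbance.
# Block size and the use of the standard deviation are assumptions.
import cv2
import numpy as np

def polar_block_metric(image, center, max_radius, block=16):
    """Unwrap the ring pattern to polar coordinates and return the mean
    per-block standard deviation as a simple surface-quality score."""
    polar = cv2.warpPolar(image, (360, 360), center, max_radius,
                          cv2.WARP_POLAR_LINEAR)
    h, w = polar.shape[:2]
    stats = [np.std(polar[r:r + block, c:c + block])
             for r in range(0, h - block + 1, block)
             for c in range(0, w - block + 1, block)]
    return float(np.mean(stats))

# Example with a synthetic ring image (stand-in for a video frame):
yy, xx = np.mgrid[0:480, 0:640]
rings = ((np.hypot(xx - 320, yy - 240) % 20) < 10).astype(np.uint8) * 255
print(polar_block_metric(rings, center=(320, 240), max_radius=200))
```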

Relevance:

20.00%

Publisher:

Abstract:

The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation and can also improve productivity and enhance system safety. In condition-based maintenance (CBM), maintenance is performed based on information collected through condition monitoring and assessment of the machine health. Effective diagnostics and prognostics are important aspects of CBM, allowing maintenance engineers to schedule a repair and to acquire replacement components before the components actually fail. Although a variety of prognostic methodologies have been reported recently, their application in industry is still relatively new and mostly focused on the prediction of specific component degradations. Furthermore, they require a significant and sufficient number of fault indicators to accurately prognose component faults. Hence, better use of health indicators in prognostics, for the effective interpretation of the machine degradation process, is still required. Major challenges for accurate long-term prediction of remaining useful life (RUL) also remain to be addressed. Therefore, continuous development and improvement of machine health management systems and accurate long-term prediction of machine remnant life are required in real industry applications. This thesis presents an integrated diagnostics and prognostics framework based on health state probability estimation for accurate, long-term prediction of machine remnant life. In the proposed model, prior empirical (historical) knowledge is embedded in the integrated diagnostics and prognostics system for the classification of impending faults in the machine system and accurate probability estimation of discrete degradation stages (health states). The methodology assumes that machine degradation consists of a series of degraded states (health states) which effectively represent the dynamic and stochastic process of machine failure. The estimation of discrete health state probabilities for the prediction of machine remnant life is performed using classification algorithms. To select the appropriate classifier for health state probability estimation in the proposed model, comparative intelligent diagnostic tests were conducted using five different classifiers applied to the progressive fault data of three different faults in a high-pressure liquefied natural gas (HP-LNG) pump. As a result of this comparison study, SVMs were employed for health state probability estimation and the prediction of machine failure in this research. The proposed prognostic methodology has been successfully tested and validated using a number of case studies, from simulation tests to real industry applications. The results from two actual failure case studies, using simulations and experiments, indicate that accurate estimation of health states is achievable and that the proposed method provides accurate long-term prediction of machine remnant life. In addition, the results of experimental tests show that the proposed model is capable of providing early warning of abnormal machine operating conditions by identifying the transitional states of machine fault conditions. Finally, the proposed prognostic model is validated through two industrial case studies. The optimal number of health states, which can minimise the model training error without a significant decrease in prediction accuracy, was also examined through several health states of bearing failure. The results were very encouraging and show that the proposed prognostic model based on health state probability estimation has the potential to be used as a generic and scalable asset health estimation tool in industrial machinery.
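A minimal sketch of the health state probability estimation step described above: a multi-class SVM with probability outputs classifies condition monitoring features into discrete health states, from which a probability-weighted remnant life can be formed; the features, number of states and per-state life values are invented for illustration.

```python
# Sketch of health state probability estimation with an SVM classifier
# (scikit-learn). Features, number of health states and the per-state
# remaining-life values are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_states = 4
X_train = rng.random((400, 8))                 # condition monitoring features
y_train = rng.integers(0, n_states, 400)       # labelled health states

clf = SVC(kernel='rbf', probability=True, random_state=0)
clf.fit(X_train, y_train)

# Probability of each discrete health state for a new observation.
x_new = rng.random((1, 8))
state_probs = clf.predict_proba(x_new)[0]

# Expected remnant life as a probability-weighted mix of assumed
# per-state remaining-life estimates (hours).
life_per_state = np.array([1000.0, 600.0, 250.0, 50.0])
expected_rul = float(state_probs @ life_per_state)
print(state_probs, expected_rul)
```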

Relevance:

20.00%

Publisher:

Abstract:

Background: In order to provide insights into the complex biochemical processes inside a cell, modelling approaches must find a balance between achieving an adequate representation of the physical phenomena and keeping the associated computational cost within reasonable limits. This issue is particularly stressed when spatial inhomogeneities have a significant effect on the system's behaviour. In such cases, a spatially resolved stochastic method can better portray the biological reality, but the corresponding computer simulations can in turn be prohibitively expensive.

Results: We present a method that incorporates spatial information by means of tailored, probability-distributed time-delays. These distributions can be obtained directly from single in silico experiments or from a suitable set of in vitro experiments, and are subsequently fed into a delay stochastic simulation algorithm (DSSA), achieving a good compromise between computational cost and a much more accurate representation of spatial processes such as molecular diffusion and translocation between cell compartments. Additionally, we present a novel alternative approach based on delay differential equations (DDE) that can be used in scenarios of high molecular concentrations and low noise propagation.

Conclusions: Our proposed methodologies accurately capture and incorporate certain spatial processes into temporal stochastic and deterministic simulations, increasing their accuracy at low computational cost. This is of particular importance given that the time spans of cellular processes are generally larger (possibly by several orders of magnitude) than those achievable by current spatially resolved stochastic simulators. Hence, our methodology allows users to explore cellular scenarios under the effects of diffusion and stochasticity over time spans that were, until now, simply unfeasible. Our methodologies are supported by theoretical considerations on the different modelling regimes, i.e. spatial vs. delay-temporal, as indicated by the corresponding Master Equations and presented elsewhere.
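A bare-bones sketch of a delay stochastic simulation algorithm of the kind referred to above: a standard Gillespie step, except that products of delayed reactions are queued and released only when their sampled delay elapses; the toy birth-death network and the Gamma delay distribution are assumptions, not the paper's model.

```python
# Bare-bones delay SSA (Gillespie with a queue of delayed completions)
# for a toy birth-death process where the "birth" product appears after
# a probability-distributed delay. Rates and the delay distribution
# (here Gamma) are illustrative assumptions.
import heapq
import numpy as np

rng = np.random.default_rng(3)

def delay_ssa(t_end=100.0, k_birth=1.0, k_death=0.1, x0=10):
    t, x = 0.0, x0
    pending = []                       # min-heap of delayed completion times
    history = [(t, x)]
    while t < t_end:
        a_birth = k_birth
        a_death = k_death * x
        a_total = a_birth + a_death
        tau = rng.exponential(1.0 / a_total)
        # If a queued delayed product appears before the next reaction,
        # advance to it instead (tau may be resampled by memorylessness).
        if pending and pending[0] <= t + tau:
            t = heapq.heappop(pending)
            x += 1                     # delayed birth completes now
        else:
            t += tau
            if t >= t_end:
                break
            if rng.random() < a_birth / a_total:
                # Birth fires now, but its product is released after a delay.
                heapq.heappush(pending, t + rng.gamma(2.0, 1.5))
            else:
                x -= 1                 # death is instantaneous
        history.append((t, x))
    return history

print(delay_ssa()[-1])                 # final (time, copy number)
```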