45 results for Stochastic simulation methods
Abstract:
Indoor wireless-network-based client localisation requires the use of a radio map to relate received signal strength to specific locations. However, signal strength measurements are time-consuming, expensive and usually require unrestricted access to all parts of the building concerned. An obvious option for circumventing this difficulty is to estimate the radio map using a propagation model. This paper compares the effect of measured and simulated radio maps on the accuracy of two different methods of wireless-network-based localisation. The results presented indicate that, although the propagation model used underestimated the signal strength by up to 15 dB at certain locations, there was not a significant reduction in localisation performance. In general, the difference in performance between the simulated and measured radio maps was around a 30% increase in RMS error.
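The fingerprinting step described above can be illustrated with a minimal k-nearest-neighbour matcher; this is only a generic sketch with made-up data, and the two localisation methods compared in the paper are not specified here.

```python
# Minimal k-nearest-neighbour fingerprinting (illustrative only; the two
# localisation methods compared in the paper are not specified here).
import numpy as np

def knn_localise(radio_map_rss, radio_map_xy, query_rss, k=3):
    """radio_map_rss: (N, n_aps) RSS values in dBm; radio_map_xy: (N, 2) coordinates."""
    dists = np.linalg.norm(radio_map_rss - query_rss, axis=1)   # signal-space distance
    nearest = np.argsort(dists)[:k]                             # k best-matching fingerprints
    return radio_map_xy[nearest].mean(axis=0)                   # centroid of their locations

# Hypothetical radio map: 4 fingerprints from 3 access points.
rss_map = np.array([[-40., -70., -60.], [-55., -50., -65.],
                    [-70., -45., -50.], [-60., -60., -40.]])
xy_map = np.array([[0., 0.], [5., 0.], [5., 5.], [0., 5.]])
print(knn_localise(rss_map, xy_map, np.array([-50., -55., -62.]), k=2))
```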
Abstract:
Previous papers have noted the difficulty in obtaining neural models which are stable under simulation when trained using prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex. (c) 2006 Elsevier B.V. All rights reserved.
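As a concrete illustration of the two identification structures, the sketch below evaluates a toy linear ARX model both one step ahead (series-parallel, driven by measured outputs) and in free run (parallel, driven by its own past outputs). It stands in for the paper's neural models; the system and coefficients are made up.

```python
# Toy illustration of series-parallel (one-step-ahead) vs parallel (free-run)
# evaluation of an identified model; a linear ARX model stands in for the
# paper's neural models (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)
y = np.zeros(200)
for t in range(1, 200):                                  # "true" system
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.01 * rng.standard_normal()

a_hat, b_hat = 0.78, 0.52                                # slightly mis-estimated model

# Series-parallel: predictor is driven by measured past outputs.
y_sp = np.array([a_hat * y[t - 1] + b_hat * u[t - 1] for t in range(1, 200)])

# Parallel: predictor is driven by its own past outputs (pure simulation).
y_par = np.zeros(200)
for t in range(1, 200):
    y_par[t] = a_hat * y_par[t - 1] + b_hat * u[t - 1]

print("one-step-ahead RMSE:", np.sqrt(np.mean((y[1:] - y_sp) ** 2)))
print("free-run RMSE:      ", np.sqrt(np.mean((y[1:] - y_par[1:]) ** 2)))
```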
Abstract:
Purpose: Positron emission tomography (PET), in addition to computed tomography (CT), has an effect on target volume definition for radical radiotherapy (RT) for non–small-cell lung cancer (NSCLC). In previously PET-CT staged patients with NSCLC, we assessed the effect of using an additional planning PET-CT scan for gross tumor volume (GTV) definition. Methods and Materials: A total of 28 patients with Stage IA-IIIB NSCLC were enrolled. All patients had undergone staging PET-CT to ensure suitability for radical RT. Of the 28 patients, 14 received induction chemotherapy. In place of an RT planning CT scan, patients underwent scanning on a PET-CT scanner. In a virtual planning study, four oncologists independently delineated the GTV on the CT scan alone and then on the PET-CT scan. Intraobserver and interobserver variability were assessed using the concordance index (CI), and the results were compared using the Wilcoxon signed-rank test. Results: PET-CT improved the CI between observers when defining the GTV using the PET-CT images compared with using CT alone for matched cases (median CI, 0.57 for CT and 0.64 for PET-CT; p = .032). The median of the mean percentage volume change from GTV(CT) to GTV(FUSED) was 5.21% for the induction chemotherapy group and 18.88% for the RT-alone group; this difference was significant on the Mann-Whitney U test (p = .001). Conclusion: A PET-CT RT planning scan, in addition to a staging PET-CT scan, reduces interobserver variability in GTV definition for NSCLC. Compared with CT, the GTV size with PET-CT increased in the RT-alone group and was reduced in the induction chemotherapy group.
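In delineation studies the concordance index is commonly defined as the ratio of the intersection to the union of two contoured volumes; the sketch below assumes that definition and uses hypothetical masks, not data from the study.

```python
# Concordance index between two delineated volumes, assuming the common
# intersection-over-union definition (sketch; not taken from the paper).
import numpy as np

def concordance_index(mask_a, mask_b):
    """mask_a, mask_b: boolean arrays of the same shape (voxelised GTVs)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union else 1.0

# Hypothetical 1-D example standing in for two observers' voxel masks.
a = np.array([0, 1, 1, 1, 0, 0], bool)
b = np.array([0, 0, 1, 1, 1, 0], bool)
print(concordance_index(a, b))   # 2 overlapping voxels / 4 in the union = 0.5
```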
Abstract:
We present experimental results on benchmark problems in 3D cubic lattice structures with the Miyazawa-Jernigan energy function for two local search procedures that utilise the pull-move set: (i) population-based local search (PLS) that traverses the energy landscape with greedy steps towards (potential) local minima followed by upward steps up to a certain level of the objective function; (ii) simulated annealing with a logarithmic cooling schedule (LSA). The parameter settings for PLS are derived from short LSA runs executed in pre-processing, and the procedure utilises tabu lists generated for each member of the population. In terms of the total number of energy function evaluations, both methods perform equally well; however, PLS has the potential of being parallelised with an expected speed-up in the region of the population size. Furthermore, both methods require a significantly smaller number of function evaluations when compared to Monte Carlo simulations with kink-jump moves. (C) 2009 Elsevier Ltd. All rights reserved.
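A generic simulated-annealing loop with the logarithmic cooling schedule T(k) = c / ln(k + 2) is sketched below; the energy function and move set are placeholders rather than the Miyazawa-Jernigan lattice model and pull-move neighbourhood used in the paper.

```python
# Generic simulated annealing with a logarithmic cooling schedule
# T(k) = c / ln(k + 2) (sketch; energy and neighbourhood are placeholders,
# not the lattice/pull-move machinery used in the paper).
import math
import random

def simulated_annealing(energy, neighbour, x0, c=1.0, n_steps=10_000, seed=0):
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for k in range(n_steps):
        temperature = c / math.log(k + 2)
        cand = neighbour(x, rng)
        e_cand = energy(cand)
        # Accept improvements always, uphill moves with the Metropolis probability.
        if e_cand <= e or rng.random() < math.exp(-(e_cand - e) / temperature):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy example: minimise a 1-D energy with +/-1 integer moves.
best = simulated_annealing(lambda x: (x - 7) ** 2,
                           lambda x, rng: x + rng.choice((-1, 1)), x0=0)
print(best)
```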
Abstract:
A family of stochastic gradient algorithms and their behaviour in a data echo cancellation framework are presented. The cost function adaptation algorithms use an error exponent update strategy based on an absolute error mapping, which is updated at every iteration. The quadratic and nonquadratic cost functions are special cases of the new family. Several possible realisations are introduced using these approaches. The noisy error problem is discussed and a digital recursive filter estimator is proposed. The simulation outcomes confirm the effectiveness of the proposed family of algorithms.
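The family can be viewed as stochastic gradient descent on an instantaneous cost |e|^p with the exponent p adjusted on-line. The sketch below uses an illustrative exponent mapping (not the paper's update strategy) in a toy channel-identification setting reminiscent of echo cancellation.

```python
# Sketch of a stochastic-gradient adaptive filter whose instantaneous cost is
# |e|^p, giving the update w += mu * p * |e|**(p-1) * sign(e) * x; p = 2
# recovers LMS and p = 1 the sign-error algorithm. The mapping from |e| to the
# exponent below is purely illustrative, not the paper's rule.
import numpy as np

def adaptive_exponent_filter(x, d, n_taps=8, mu=0.01, p_min=1.0, p_max=3.0):
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]                   # current regressor
        e = d[n] - w @ xn                                    # a-priori error
        p = p_min + (p_max - p_min) * min(abs(e), 1.0)       # illustrative exponent map
        w += mu * p * (abs(e) ** (p - 1.0)) * np.sign(e) * xn
    return w

# Toy echo-path identification (stand-in for the echo cancellation setting).
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.1, 0.05])                         # unknown echo path
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(adaptive_exponent_filter(x, d)[:4], 2))
```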
Abstract:
In this paper, a complete method for finite-difference time-domain modeling of rooms in 2-D using compact explicit schemes is presented. A family of interpolated schemes using a rectilinear, nonstaggered grid is reviewed, and the most accurate and isotropic schemes are identified. Frequency-dependent boundaries are modeled using a digital impedance filter formulation that is consistent with locally reacting surface theory. A structurally stable and efficient boundary formulation is constructed by carefully combining the boundary condition with the interpolated scheme. An analytic prediction formula for the effective numerical reflectance is given, and a stability proof provided. The results indicate that the identified accurate and isotropic schemes are also very accurate in terms of numerical boundary reflectance, and outperform directly related methods such as Yee's scheme and the standard digital waveguide mesh. In addition, one particular scheme, referred to here as the interpolated wideband scheme, is suggested as the best scheme for most applications.
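For reference, the sketch below implements the plain (non-interpolated) 2-D explicit update on a rectilinear grid with a simple pressure-release boundary; the interpolated schemes and frequency-dependent impedance boundaries analysed in the paper are not reproduced.

```python
# Plain 2-D explicit FDTD update for the wave equation on a rectilinear grid
# (the simplest member of the family; the interpolated schemes and impedance
# boundaries analysed in the paper are not reproduced here).
import numpy as np

c, dx = 343.0, 0.05                      # sound speed (m/s), grid spacing (m)
dt = dx / (c * np.sqrt(2.0))             # Courant limit of the standard 2-D scheme
lam2 = (c * dt / dx) ** 2                # squared Courant number (0.5 here)

nx = ny = 101
p_prev = np.zeros((nx, ny))
p = np.zeros((nx, ny))
p[nx // 2, ny // 2] = 1.0                # impulse excitation at the centre

for _ in range(60):
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
    p_next = 2.0 * p - p_prev + lam2 * lap
    # Simple pressure-release (p = 0) boundary, kept only for brevity.
    p_next[0, :] = p_next[-1, :] = p_next[:, 0] = p_next[:, -1] = 0.0
    p_prev, p = p, p_next

print("peak |p| after 60 steps:", float(np.abs(p).max()))
```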
Abstract:
This paper presents methods for simulating room acoustics using the finite-difference time-domain (FDTD) technique, focusing on boundary and medium modeling. A family of nonstaggered 3-D compact explicit FDTD schemes is analyzed in terms of stability, accuracy, and computational efficiency, and the most accurate and isotropic schemes based on a rectilinear grid are identified. A frequency-dependent boundary model that is consistent with locally reacting surface theory is also presented, in which the wall impedance is represented with a digital filter. For boundaries, accuracy in numerical reflection is analyzed and a stability proof is provided. The results indicate that the proposed 3-D interpolated wideband and isotropic schemes outperform directly related techniques based on Yee's staggered grid and standard digital waveguide mesh, and that the boundary formulations generally have properties that are similar to those of the basic scheme used.
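To show how a locally reacting boundary enters the update equations, the sketch below eliminates the ghost node of a 1-D scheme for a frequency-independent specific impedance ξ (boundary condition dp/dt = -c·ξ·dp/dx). This is only the simplest special case in 1-D; the paper generalises the constant ξ to a digital impedance filter and works in 3-D.

```python
# Frequency-independent locally reacting boundary for the 1-D scheme, obtained
# by discretising dp/dt = -c*xi*dp/dx with centred differences and eliminating
# the ghost node. Only the simplest special case of the paper's boundary model.
import numpy as np

c, dx = 343.0, 0.05
lam = 1.0                      # Courant number (stability limit in 1-D)
dt = lam * dx / c
xi = 10.0                      # specific acoustic impedance of the wall

n = 200
p_prev = np.zeros(n)
p = np.zeros(n)
p[n // 2] = 1.0                # impulse in the middle of the "tube"

for _ in range(300):
    p_next = np.empty(n)
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + lam**2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
    for i, j in ((0, 1), (n - 1, n - 2)):          # both ends are impedance walls
        p_next[i] = (2 * p[i] + (lam / xi - 1.0) * p_prev[i]
                     + 2 * lam**2 * (p[j] - p[i])) / (1.0 + lam / xi)
    p_prev, p = p, p_next

print("residual energy proxy (decays because the walls absorb):", float(np.sum(p**2)))
```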
Abstract:
The stochastic nature of oil price fluctuations is investigated over a twelve-year period, using data from an existing database (USA Energy Information Administration database, available online). We evaluate the scaling exponents of the fluctuations by employing different statistical analysis methods, namely rescaled range analysis (R/S), scaled windowed variance analysis (SWV) and the generalized Hurst exponent (GH) method. Relying on the scaling exponents obtained, we apply a rescaling procedure to investigate the complex characteristics of the probability density functions (PDFs) dominating oil price fluctuations. It is found that the PDFs exhibit scale invariance, and in fact collapse onto a single curve when increments are measured over microscales (typically less than 30 days). The time evolution of the distributions is well fitted by a Levy-type stable distribution. The relevance of a Levy distribution is made plausible by a simple model of nonlinear transfer. Our results also exhibit a degree of multifractality as the PDFs change and converge toward a Gaussian distribution at the macroscales.
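A minimal rescaled-range (R/S) estimate of the Hurst exponent is sketched below on synthetic white-noise increments; the SWV and generalized Hurst exponent estimators used in the paper are not reproduced, and the known upward bias of raw R/S for short windows is ignored.

```python
# Minimal rescaled-range (R/S) Hurst exponent estimate (sketch; the SWV and
# generalized Hurst exponent estimators used in the paper are not reproduced,
# and no small-sample bias correction is applied).
import numpy as np

def hurst_rs(increments, window_sizes=(16, 32, 64, 128, 256)):
    x = np.asarray(increments, float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())          # mean-adjusted cumulative sum
            r = dev.max() - dev.min()              # range of the cumulative deviation
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)        # H is the log-log slope
    return float(slope)

rng = np.random.default_rng(2)
# Theoretical H = 0.5 for uncorrelated increments; the raw estimate sits a
# little above that for these window sizes.
print("R/S Hurst estimate for white-noise increments:",
      round(hurst_rs(rng.standard_normal(4096)), 2))
```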
Abstract:
Objective: Positron emission tomography (PET)/CT scans can improve target definition in radiotherapy for non-small cell lung cancer (NSCLC). As staging PET/CT scans are increasingly available, we evaluated different methods for co-registration of staging PET/CT data to radiotherapy simulation (RTP) scans.
Methods: 10 patients underwent staging PET/CT followed by RTP PET/CT. On both scans, gross tumour volumes (GTVs) were delineated using CT (GTVCT) and PET display settings. Four PET-based contours (manual delineation, two threshold methods and a source-to-background ratio method) were delineated. The CT component of the staging scan was co-registered using both rigid and deformable techniques to the CT component of RTP PET/CT. Subsequently rigid registration and deformation warps were used to transfer PET and CT contours from the staging scan to the RTP scan. Dice’s similarity coefficient (DSC) was used to assess the registration accuracy of staging-based GTVs following both registration methods with the GTVs delineated on the RTP PET/CT scan.
Results: When the GTVCT delineated on the staging scan after both rigid registration and deformation was compared with the GTVCT on the RTP scan, a significant improvement in overlap (registration) using deformation was observed (mean DSC 0.66 for rigid registration and 0.82 for deformable registration, p = 0.008). A similar comparison for PET contours revealed no significant improvement in overlap with the use of deformable registration.
Conclusions: No consistent improvements in similarity measures were observed when deformable registration was used for transferring PET-based contours from a staging PET/CT. This suggests that currently the use of rigid registration remains the most appropriate method for RTP in NSCLC.
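Dice's similarity coefficient used here is DSC = 2|A ∩ B| / (|A| + |B|) for two contours rasterised to binary masks; a small sketch on hypothetical masks follows.

```python
# Dice's similarity coefficient between two contours rasterised to binary
# masks: DSC = 2|A ∩ B| / (|A| + |B|) (sketch, not tied to the study's data).
import numpy as np

def dice(mask_a, mask_b):
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical voxel masks: a staging-scan GTV mapped onto the RTP scan and
# the GTV drawn directly on the RTP scan.
gtv_mapped = np.zeros((20, 20), bool); gtv_mapped[5:15, 5:15] = True
gtv_rtp    = np.zeros((20, 20), bool); gtv_rtp[7:17, 6:16] = True
print(round(dice(gtv_mapped, gtv_rtp), 2))   # 2*72 / (100 + 100) = 0.72
```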
Abstract:
In this paper the use of eigenvalue stability analysis of very large dimension aeroelastic numerical models arising from the exploitation of computational fluid dynamics is reviewed. A formulation based on a block reduction of the system Jacobian proves powerful, allowing various numerical algorithms to be exploited, including frequency domain solvers, reconstruction of a term describing the fluid–structure interaction from the sparse data which incurs the main computational cost, and sampling to place the expensive samples where they are most needed. The stability formulation also allows non-deterministic analysis to be carried out very efficiently through the use of an approximate Newton solver. Finally, the system eigenvectors are exploited to produce nonlinear and parameterised reduced order models for computing limit cycle responses. The performance of the methods is illustrated with results from a number of academic and large dimension aircraft test cases.
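The block reduction mentioned above can be pictured as a Schur-complement elimination of the fluid unknowns from the coupled Jacobian. The sketch below checks, on a random stand-in system (not a CFD-based Jacobian), that an eigenvalue of the full Jacobian also makes the reduced block singular.

```python
# Sketch of the block (Schur-complement) reduction behind eigenvalue-based
# stability analysis of a coupled Jacobian J = [[A_ff, A_fs], [A_sf, A_ss]]:
# eliminating the fluid unknowns turns (J - lam*I)v = 0 into the small
# nonlinear eigenproblem det(A_ss - lam*I - A_sf (A_ff - lam*I)^{-1} A_fs) = 0.
# The matrices below are random stand-ins, not a CFD-based Jacobian.
import numpy as np

rng = np.random.default_rng(3)
nf, ns = 40, 4                                    # "fluid" and "structural" sizes
A_ff = -np.eye(nf) + 0.1 * rng.standard_normal((nf, nf))
A_fs = 0.1 * rng.standard_normal((nf, ns))
A_sf = 0.1 * rng.standard_normal((ns, nf))
A_ss = np.array([[0.0, 1.0, 0.0, 0.0], [-4.0, -0.02, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0], [0.0, 0.0, -9.0, -0.05]])  # two lightly damped modes

J = np.block([[A_ff, A_fs], [A_sf, A_ss]])
lam = max(np.linalg.eigvals(J), key=lambda z: z.real)    # least stable eigenvalue

# Verify that lam also satisfies the reduced (Schur-complement) problem.
S = A_ss - lam * np.eye(ns) - A_sf @ np.linalg.solve(A_ff - lam * np.eye(nf), A_fs)
print("least stable eigenvalue:", np.round(lam, 4))
print("smallest singular value of reduced block (close to 0):",
      np.linalg.svd(S, compute_uv=False)[-1])
```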
Abstract:
Wind power generation differs from conventional thermal generation due to the stochastic nature of wind. Thus wind power forecasting plays a key role in dealing with the challenges of balancing supply and demand in any electricity system, given the uncertainty associated with the wind farm power output. Accurate wind power forecasting reduces the need for additional balancing energy and reserve power to integrate wind power. Wind power forecasting tools enable better dispatch, scheduling and unit commitment of thermal generators, hydro plant and energy storage plant and more competitive market trading as wind power ramps up and down on the grid. This paper presents an in-depth review of the current methods and advances in wind power forecasting and prediction. Firstly, numerical wind prediction methods from global to local scales, ensemble forecasting, upscaling and downscaling processes are discussed. Next, statistical and machine learning methods are detailed. Then the techniques used for benchmarking and uncertainty analysis of forecasts are reviewed, and the performance of various approaches over different forecast time horizons is examined. Finally, current research activities, challenges and potential future developments are appraised. (C) 2011 Elsevier Ltd. All rights reserved.
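As a reminder of the benchmarking baseline commonly used in this literature, the sketch below computes persistence forecasts and their RMSE on synthetic data; it is illustrative only and not taken from the review.

```python
# Persistence forecast, the usual benchmark in wind power forecasting
# (illustrative sketch; the power series, horizons and units are made up).
import numpy as np

def persistence_forecast(power, horizon):
    """Forecast P(t + horizon) = P(t); returns aligned (forecast, actual) arrays."""
    return power[:-horizon], power[horizon:]

def rmse(forecast, actual):
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

rng = np.random.default_rng(4)
# Synthetic "wind power" series: smoothed noise clipped to [0, 1] of rated power.
raw = np.convolve(rng.standard_normal(2000), np.ones(24) / 24, mode="same")
power = np.clip(0.5 + raw, 0.0, 1.0)

for h in (1, 6, 24):   # hours ahead
    f, a = persistence_forecast(power, h)
    print(f"persistence RMSE at {h:2d} h ahead: {rmse(f, a):.3f}")
```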
Abstract:
To develop real-time simulations of wind instruments, digital waveguide filters can be used as an efficient representation of the air column. Many aerophones are shaped as horns which can be approximated using conical sections. Therefore the derivation of conical waveguide filters is of special interest. When these filters are used in combination with a generalized reed excitation, several classes of wind instruments can be simulated. In this paper we present the methods for transforming a continuous description of conical tube segments to a discrete filter representation. The coupling of the reed model with the conical waveguide and a simplified model of the termination at the open end are described in the same way. It turns out that the complete lossless conical waveguide requires only one type of filter. Furthermore, we developed a digital reed excitation model, which is purely based on numerical integration methods, i.e., without the use of a look-up table.
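The delay-line-plus-reflection-filter structure underlying such models can be illustrated with a much simpler cylindrical (rather than conical) bore and no reed: two delay lines, a +1 reflection at the closed end and an inverting one-pole lowpass at the open end. All parameter values below are made up, and the paper's conical filters and reed model are not reproduced.

```python
# Minimal cylindrical-bore digital waveguide (illustrative only): two delay
# lines carry right- and left-travelling pressure waves, the closed end
# reflects with +1 and the open end with an inverting one-pole lowpass that
# lumps radiation/viscous losses.
import numpy as np

fs, c, L = 44100, 343.0, 0.3
n_delay = int(round(L / c * fs))       # one-way propagation delay in samples

right = np.zeros(n_delay)              # wave travelling towards the open end
left = np.zeros(n_delay)               # wave travelling back to the closed end
lp_state, a = 0.0, 0.7                 # one-pole lowpass state and coefficient

right[0] = 1.0                         # impulse "excitation" at the closed end
out = np.zeros(4096)
for n in range(len(out)):
    p_open = right[-1]                             # wave arriving at the open end
    lp_state = (1 - a) * p_open + a * lp_state     # lowpass the arriving wave
    refl_open = -lp_state                          # open end: inverting reflection
    p_closed = left[-1]                            # wave arriving at the closed end
    out[n] = 2.0 * p_closed                        # pressure observed at the closed end
    right = np.concatenate(([p_closed], right[:-1]))   # +1 reflection re-enters
    left = np.concatenate(([refl_open], left[:-1]))

spectrum = np.abs(np.fft.rfft(out * np.hanning(len(out))))
peak_hz = np.fft.rfftfreq(len(out), 1.0 / fs)[np.argmax(spectrum)]
print(f"dominant mode around {peak_hz:.0f} Hz (ideal closed-open tube: c/(4L) = {c / (4 * L):.0f} Hz)")
```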
Abstract:
The Finite Difference Time Domain (FDTD) method is becoming increasingly popular for room acoustics simulation. Yet, the literature on grid excitation methods is relatively sparse, and source functions are traditionally implemented in a hard or additive form using arbitrarily shaped functions which do not necessarily obey the physical laws of sound generation. In this paper we formulate a source function based on a small pulsating sphere model. A physically plausible method to inject a source signal into the grid is derived from first principles, resulting in a source with a near-flat spectrum that does not scatter incoming waves. In the final discrete-time formulation, the source signal is the result of passing a Gaussian pulse through a digital filter simulating the dynamics of the pulsating sphere, hence facilitating a physically correct means to design source functions that generate a prescribed sound field.
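The difference between hard and additive injection mentioned above can be seen in a few lines of a plain 2-D FDTD scheme; the pulsating-sphere source filter proposed in the paper is not reproduced, and the Gaussian pulse is just a conventional test signal.

```python
# Hard vs. additive ("soft") source injection in a plain 2-D FDTD scheme
# (sketch). The pulsating-sphere source filter proposed in the paper is not
# reproduced; the Gaussian pulse below is just a conventional test signal.
import numpy as np

def run_fdtd(hard_source, n_steps=50, nx=81, ny=81):
    lam2 = 0.5                                    # squared Courant number (2-D limit)
    src = (nx // 2, ny // 2)
    pulse = np.exp(-0.5 * ((np.arange(n_steps) - 20) / 5.0) ** 2)
    p_prev, p = np.zeros((nx, ny)), np.zeros((nx, ny))
    for n in range(n_steps):
        lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
               np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p)
        p_next = 2.0 * p - p_prev + lam2 * lap
        if hard_source:
            p_next[src] = pulse[n]     # hard: overwrite the node (scatters incoming waves)
        else:
            p_next[src] += pulse[n]    # additive: superimpose the signal on the field
        p_prev, p = p, p_next
    return p

print("peak |p|, additive source:", float(np.abs(run_fdtd(False)).max()))
print("peak |p|, hard source:    ", float(np.abs(run_fdtd(True)).max()))
```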