129 results for 2447: modelling and forecasting


Relevance:

100.00%

Publisher:

Abstract:

The material flow in friction stir spot welding of aluminium to both aluminium and steel has been investigated, using pinless tools in a lap joint geometry. The flow behaviour was revealed experimentally using dissimilar Al alloys of similar strength. The effect on the material flow of tool surface features, welding conditions (rotation speed, plunge depth, dwell time), and the surface state of the steel sheet (uncoated or galvanized) has been systematically studied. A novel kinematic flow model is presented, which successfully predicts the observed layering of the dissimilar Al alloys under a range of conditions. The model and the experimental observations provide a consistent interpretation of the stick-slip conditions at the tool-workpiece interface, addressing an elusive and long-standing issue in the modelling of heat generation in friction stir processing.


Over the last decade an Auburn-Rollins-Strathclyde consortium has developed several suites of parallel R-matrix codes [1, 2, 3] that can meet the fundamental data needs required for the interpretation of astrophysical observations and/or plasma experiments. Traditionally our collisional work on light fusion-related atoms has been focused towards spectroscopy and impurity transport for magnetically confined fusion devices. Our approach has been to provide a comprehensive data set for the excitation/ionization for every ion stage of a particular element. As we progress towards a burning fusion plasma, there is a demand for the collisional processes involving tungsten, which has required a revitalization of the relativistic R-matrix approach. The implementation of these codes on massively parallel supercomputers has facilitated the progression to models involving thousands of levels in the close-coupling expansion required by the open d and f sub-shell systems of mid-Z tungsten. This work also complements the electron-impact excitation of Fe-peak elements required by astrophysics, in particular the near-neutral species, which offer similar atomic structure challenges. Although electron-impact excitation work is our primary focus in terms of fusion application, the single-photon photoionisation codes are also being developed in tandem, and benefit greatly from this ongoing work.


Passive intermodulation (PIM) often limits the performance of communication systems with analog and digitally modulated signals, especially of systems supporting multiple carriers. Since the origins of the apparently multiple physical sources of nonlinearity causing PIM are not fully understood, behavioral models are frequently used to describe the process of PIM generation. In this paper, a polynomial model of memoryless nonlinearity is deduced from PIM measurements of a microstrip line with distributed nonlinearity, using two-tone CW signals. The analytical model of nonlinearity is incorporated in Keysight Technologies' ADS simulator to evaluate the metrics of signal fidelity in the receive band for analog and digitally modulated signals. PIM-induced distortion and cross-band interference with modulated signals are compared to those with two-tone CW signals. It is shown that conventional metrics can be applied to quantify the effect of distributed nonlinearities on signal fidelity. It is found that the two-tone CW test provides a worst-case estimate of cross-band interference for two-carrier modulated signals, whereas with a three-carrier signal PIM interference in the receive band is noticeably overestimated. The simulated constellation diagrams for QPSK signals demonstrate that PIM interference exhibits the distinctive signatures of correlated distortion; this indicates that there are opportunities for mitigating PIM interference and that it cannot be treated as noise. One interesting result is that PIM distortion on a transmission line results in asymmetrical regrowth of output PIM interference for modulated signals.
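As a rough illustration of this kind of behavioral model (not the coefficients fitted in the abstract above), the sketch below passes a two-tone CW signal through a memoryless cubic polynomial and locates the third-order intermodulation products at 2f1 − f2 and 2f2 − f1. The tone frequencies, sample rate, and polynomial coefficients are all arbitrary assumptions.

```python
import numpy as np

def two_tone(f1, f2, fs, n):
    """Two equal-amplitude CW carriers, as in a classical two-tone PIM test."""
    t = np.arange(n) / fs
    return np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

def memoryless_poly(x, a1=1.0, a3=1e-3):
    """Illustrative memoryless polynomial nonlinearity y = a1*x + a3*x**3."""
    return a1 * x + a3 * x ** 3

fs, n = 1_000_000.0, 100_000
f1, f2 = 100_000.0, 110_000.0        # tone frequencies (Hz), exact FFT bins
y = memoryless_poly(two_tone(f1, f2, fs, n))

spec = np.abs(np.fft.rfft(y)) / n    # single-sided amplitude spectrum / 2
freqs = np.fft.rfftfreq(n, 1 / fs)

# Third-order intermodulation products fall at 2*f1 - f2 and 2*f2 - f1.
im3_lo = spec[np.argmin(np.abs(freqs - (2 * f1 - f2)))]
im3_hi = spec[np.argmin(np.abs(freqs - (2 * f2 - f1)))]
print(im3_lo, im3_hi)                # nonzero IM3 levels from the cubic term
```

For the cubic term with unit-amplitude tones, each IM3 product has amplitude (3/4)·a3, which is why even a tiny odd-order coefficient produces measurable interference near the carriers.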


Artificial neural networks (ANNs) can be readily applied to short-term load forecasting (STLF) models for electric power distribution applications. However, they are not typically used in medium- and long-term load forecasting (MLTLF) models because of the difficulties associated with collecting and processing the necessary data. Virtual instrument (VI) techniques can be applied to electric power load forecasting, but this is rarely reported in the literature. In this paper, we investigate the modelling and design of a VI for short-, medium- and long-term load forecasting using ANNs. Three ANN models were built for STLF of electric power. These networks were trained using historical load data together with weather data known to have a significant effect on the use of electric power (such as wind speed, precipitation, atmospheric pressure, temperature and humidity). To do this, a V-shaped temperature processing model is proposed. With regard to MLTLF, a model was developed using radial basis function neural networks (RBFNNs). Results indicate that the forecasting model based on the RBFNN has high accuracy and stability. Finally, a virtual load forecaster which integrates the VI and the RBFNN is presented.
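A V-shaped temperature model of this general kind can be sketched as a feature transform: load rises as temperature moves away from a balance point in either direction (heating below, cooling above). The balance temperature and the two-feature split below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def v_shape_temperature(temps, t_balance=18.0):
    """Illustrative V-shaped temperature transform (t_balance is an assumed
    comfort point): temperature-driven load grows with distance from the
    balance point, so heating and cooling demand are split into two
    non-negative input features for the forecasting network."""
    temps = np.asarray(temps, dtype=float)
    heating = np.maximum(t_balance - temps, 0.0)   # degrees below balance
    cooling = np.maximum(temps - t_balance, 0.0)   # degrees above balance
    return np.column_stack([heating, cooling])

features = v_shape_temperature([5.0, 18.0, 30.0])
print(features)   # the row for 18.0 C is [0, 0]: no weather-driven load
```

Splitting the V into two features, rather than using |T − T_balance| directly, lets a network learn different slopes for heating and cooling demand.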


Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. As wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time-series forecasting is introduced in this study to address these issues. This new method is shown to be capable of reducing computational complexity and increasing prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. Further, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm to show the effectiveness of the proposed method.
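For orientation, standard Gaussian Process regression (the baseline the variant improves on) can be sketched in a few lines: the posterior mean and variance at new time points follow from a kernel matrix over the training times. The squared-exponential kernel, toy wind series, and hyperparameters below are assumptions, not the study's model.

```python
import numpy as np

def rbf_kernel(a, b, length=2.0, var=1.0):
    """Squared-exponential covariance between time indices a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_forecast(t_train, y_train, t_test, noise=0.1):
    """Minimal GP regression sketch (not the paper's variant): posterior
    mean and variance of the series at t_test.  The O(n^3) solve here is
    the computational bottleneck the abstract refers to."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf_kernel(t_test, t_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = rbf_kernel(t_test, t_test).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

t = np.arange(20.0)                  # hourly wind power, toy data
power = np.sin(0.3 * t) + 0.05 * np.random.default_rng(0).standard_normal(20)
mean, var = gp_forecast(t, power, np.array([20.0, 21.0]))
print(mean, var)                     # point forecasts with uncertainty
```

The growing predictive variance at points further beyond the training window is exactly the uncertainty information that makes GP forecasts attractive for stochastic wind power.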


The standard linear-quadratic (LQ) survival model for external beam radiotherapy is reviewed with particular emphasis on studying how different schedules of radiation treatment planning may be affected by different tumour repopulation kinetics. The LQ model is further examined in the context of tumour control probability (TCP) models. The application of the Zaider and Minerbo non-Poissonian TCP model incorporating the effect of cellular repopulation is reviewed. In particular the recent development of a cell cycle model within the original Zaider and Minerbo TCP formalism is highlighted. Application of this TCP cell-cycle model in clinical treatment plans is explored and analysed.
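The core quantities reviewed above can be written down compactly. The sketch below uses the standard LQ survival fraction after fractionated treatment and the simple Poissonian TCP (the Zaider-Minerbo model generalizes the latter to non-Poissonian statistics with repopulation). The radiosensitivity parameters and cell number are illustrative assumptions only.

```python
import numpy as np

def lq_survival(d, n, alpha=0.3, beta=0.03):
    """Linear-quadratic survival after n fractions of dose d (Gy):
    S = exp(-n * (alpha*d + beta*d**2)).
    alpha (1/Gy) and beta (1/Gy^2) are illustrative values."""
    return np.exp(-n * (alpha * d + beta * d ** 2))

def poisson_tcp(n_cells, survival):
    """Simple Poissonian tumour control probability, TCP = exp(-N * S):
    the probability that no clonogenic cell survives."""
    return np.exp(-n_cells * survival)

s = lq_survival(d=2.0, n=30)     # conventional 30 x 2 Gy schedule
tcp = poisson_tcp(1e7, s)        # assumed 10^7 clonogenic cells, no repopulation
print(s, tcp)
```

Adding repopulation amounts to multiplying the surviving fraction by an exponential growth term between fractions, which is where the different repopulation kinetics discussed in the review enter.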


Value-at-risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram-Charlier expansion (GCE), augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms other estimates generated by both constant and time-varying higher-moment models.
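The GCE density perturbs the standard normal with Hermite-polynomial terms carrying skewness and excess kurtosis, and VaR is then the corresponding return quantile. The sketch below finds that quantile numerically; the moment values, mean, and volatility are illustrative assumptions, not estimates from the paper's data.

```python
import numpy as np

def gce_density(x, skew=0.0, exkurt=0.0):
    """Gram-Charlier expansion of the standard normal density using
    Hermite polynomials H3 (skewness) and H4 (excess kurtosis)."""
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
    h3 = x ** 3 - 3 * x
    h4 = x ** 4 - 6 * x ** 2 + 3
    return phi * (1 + skew / 6 * h3 + exkurt / 24 * h4)

def gce_var(alpha=0.01, skew=-0.5, exkurt=1.0, mu=0.0, sigma=0.02):
    """VaR as the alpha-quantile of the GCE return density, located
    numerically on a grid (illustrative moment values only)."""
    z = np.linspace(-8, 8, 100_001)
    cdf = np.cumsum(gce_density(z, skew, exkurt)) * (z[1] - z[0])
    z_alpha = z[np.searchsorted(cdf, alpha)]
    return -(mu + sigma * z_alpha)       # loss reported as a positive number

print(gce_var())   # larger than the Gaussian VaR of ~0.0465 for sigma=0.02
```

With negative skewness and positive excess kurtosis the left tail fattens, so the GCE VaR exceeds the Gaussian one; when both corrections are zero the density reduces to the standard normal exactly.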


This paper compares the applicability of three ground survey methods for modelling terrain: one-man electronic tachymetry (TPS), real-time kinematic GPS (GPS), and terrestrial laser scanning (TLS). Vertical accuracy of digital terrain models (DTMs) derived from GPS, TLS and airborne laser scanning (ALS) data is assessed. Point elevations acquired by the four methods represent two sections of a mountainous area in Cumbria, England, chosen so that the presence of non-terrain features was minimized. The vertical accuracy of the DTMs was assessed by subtracting each DTM from TPS point elevations. The error was examined using exploratory measures including statistics, histograms, and normal probability plots. The results showed that the internal measurement accuracy of TPS, GPS, and TLS was below a centimetre. TPS and GPS can be considered equally applicable alternatives for sampling the terrain in areas accessible on foot. The highest DTM vertical accuracy was achieved with GPS data, both on sloped terrain (RMSE 0.16 m) and flat terrain (RMSE 0.02 m). TLS surveying was the most efficient overall, but the veracity of terrain representation was limited by dense vegetation cover. Consequently, the DTM accuracy was the lowest for the sloped area with dense bracken (RMSE 0.52 m), although it was the second highest on the flat unobscured terrain (RMSE 0.07 m). ALS data represented the sloped terrain more realistically (RMSE 0.23 m) than the TLS. However, due to a systematic bias identified on the flat terrain, the DTM accuracy there was the lowest (RMSE 0.29 m), which exceeded the level stated by the data provider. Error distributions were more closely approximated by a normal distribution defined using the median and the normalized median absolute deviation, which supports the use of robust measures in DEM error modelling and its propagation.
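The two spread measures contrasted in the abstract behave very differently in the presence of gross errors. The sketch below computes RMSE and the normalized median absolute deviation (NMAD) on synthetic elevation errors; the error magnitudes are assumptions for illustration.

```python
import numpy as np

def rmse(errors):
    """Root-mean-square error of DTM-minus-checkpoint elevation differences."""
    e = np.asarray(errors, dtype=float)
    return np.sqrt(np.mean(e ** 2))

def nmad(errors):
    """Normalized median absolute deviation:
    1.4826 * median(|e - median(e)|).
    Matches the standard deviation for normally distributed errors but is
    insensitive to outliers, hence its use as a robust DEM error measure."""
    e = np.asarray(errors, dtype=float)
    return 1.4826 * np.median(np.abs(e - np.median(e)))

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 0.05, 1000)          # well-behaved 5 cm errors
errors_out = np.append(errors, [2.0, -3.0])   # plus a few gross blunders
print(rmse(errors), nmad(errors))             # both close to 0.05
print(rmse(errors_out), nmad(errors_out))     # RMSE inflated, NMAD stable
```

This is why a heavy-tailed DTM error distribution is better summarized by median/NMAD than by mean/RMSE, as the paper concludes.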


We present optical and near-infrared (NIR) photometry and spectroscopy, as well as modelling of the lightcurves, of the Type IIb supernova (SN) 2011dh. Our extensive dataset, for which we present the observations obtained after day 100, spans two years; complemented with Spitzer mid-infrared (MIR) data, we use it to build an optical-to-MIR bolometric lightcurve between days 3 and 732. To model the bolometric lightcurve before day 400 we use a grid of hydrodynamical SN models, which allows us to determine the errors in the derived quantities, and a bolometric correction determined with steady-state non-local thermodynamic equilibrium (NLTE) modelling. Using this method we find a helium core mass of 3.1 (+0.7/−0.4) M⊙ for SN 2011dh, consistent within error bars with previous results obtained using the bolometric lightcurve before day 80. We compute bolometric and broad-band lightcurves between days 100 and 500 from spectral steady-state NLTE models, presented and discussed in a companion paper. The preferred 12 M⊙ (initial mass) model, previously found to agree well with the observed spectra, shows good overall agreement with the observed lightcurves, although some discrepancies exist. Time-dependent NLTE modelling shows that after day ∼600 a steady-state assumption is no longer valid. The radioactive energy deposition in this phase is likely dominated by the positrons emitted in the decay of 56Co, but seems insufficient to reproduce the lightcurves, and it is unclear what energy source dominates the emitted flux. We find an excess in the K and MIR bands developing between days 100 and 250, during which an increase in the optical decline rate is also observed. A local origin of the excess is suggested by the depth of the He I 20 581 Å absorption. Steady-state NLTE models with a modest dust opacity in the core (τ = 0.44), turned on during this period, reproduce the observed behaviour, but an additional excess in the Spitzer 4.5 μm band remains. Carbon monoxide (CO) first-overtone band emission is detected at day 206, and possibly at day 89, and assuming the additional excess to be dominated by CO fundamental band emission, we find fundamental-to-first-overtone band ratios considerably higher than observed in SN 1987A. The profiles of the [O I] 6300 Å and Mg I] 4571 Å lines show a remarkable similarity, suggesting that these lines originate from a common nuclear burning zone (O/Ne/Mg), and using small-scale fluctuations in the line profiles we estimate a filling factor of ≲ 0.07 for the emitting material. This paper concludes our extensive observational and modelling work on SN 2011dh. The results from hydrodynamical modelling, steady-state NLTE modelling, and stellar evolutionary progenitor analysis are all consistent, and suggest an initial mass of ∼12 M⊙ for the progenitor.
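The late-time positron dominance mentioned above follows from the 56Ni → 56Co → 56Fe decay chain: once the ejecta become transparent to gamma-rays, only the small positron share of the 56Co decay energy is deposited. The sketch below uses standard literature lifetimes and decay energies but an arbitrary initial atom count; it is a back-of-the-envelope illustration, not the paper's NLTE deposition model.

```python
import numpy as np

# Standard e-folding lifetimes (days) and mean energies per decay (MeV)
# for the 56Ni -> 56Co -> 56Fe chain; n0 below is arbitrary.
TAU_NI, TAU_CO = 8.8, 111.3
Q_NI = 1.75                   # gamma-ray energy per 56Ni decay
Q_CO_G, Q_CO_P = 3.61, 0.12   # gamma-ray vs positron energy per 56Co decay

def radioactive_deposition(t, n0, f_gamma=1.0):
    """Energy deposition rate (MeV/day) from n0 initial 56Ni atoms.
    f_gamma is the gamma-ray trapping fraction; positrons (Q_CO_P) are
    taken as fully trapped, which is why they dominate at late times
    when the ejecta are optically thin to gamma-rays."""
    lam_ni, lam_co = 1.0 / TAU_NI, 1.0 / TAU_CO
    n_ni = n0 * np.exp(-lam_ni * t)
    # Bateman solution for the 56Co population
    n_co = n0 * lam_ni / (lam_ni - lam_co) * (
        np.exp(-lam_co * t) - np.exp(-lam_ni * t))
    return (lam_ni * n_ni * Q_NI * f_gamma
            + lam_co * n_co * (Q_CO_G * f_gamma + Q_CO_P))

# With gamma-rays escaping entirely (f_gamma -> 0), only positrons remain.
late = radioactive_deposition(600.0, 1.0, f_gamma=0.0)
full = radioactive_deposition(600.0, 1.0, f_gamma=1.0)
print(late / full)   # positron share of the day-600 deposition, a few percent
```

That few-percent positron share is why the day ~600 lightcurve is hard to power by radioactivity alone once gamma-ray trapping becomes inefficient.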


Trends and foci of interest in atomic modelling and data are identified in connection with recent observations and experiments in fusion and astrophysics. In the fusion domain, spectral observations of core, beam-penetrated and divertor plasmas are included. The helium beam experiments at JET and the studies with very heavy species at ASDEX and JET are noted. In the astrophysics domain, illustrations are given from the SOHO and CHANDRA spacecraft, which span from the solar upper atmosphere, through soft X-rays from comets, to supernova remnants. It is shown that non-Maxwellian, dynamic and possibly optically thick regimes must be considered. The generalized collisional-radiative model properly describes the collisional regime of most astrophysical and laboratory fusion plasmas and yields self-consistent derived data for spectral emission, power balance and ionization state studies. The tuning of this method to routine analysis of the spectral observations is described. A forward look is taken at how such atomic modelling, and the atomic data which underpin it, ought to evolve to deal with the extended conditions and novel environments of the illustrations. It is noted that atomic physics influences most aspects of fusion and astrophysical plasma behaviour, but the effectiveness of analysis depends on the quality of the bi-directional pathway from fundamental data production, through atomic/plasma model development, to the confrontation with experiment. The principal atomic data capability at JET, and at other fusion and astrophysical laboratories, is supplied via the Atomic Data and Analysis Structure (ADAS) Project. The close ties between the various experiments and ADAS have helped in this path of communication.
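The essence of a collisional-radiative balance can be seen in a two-level toy model (a drastic simplification of the generalized model described above): electron collisions populate the excited level, while radiative decay and collisional de-excitation depopulate it. The rate coefficients below are illustrative round numbers, not ADAS data.

```python
def excited_fraction(ne, q12, q21, a21):
    """Steady-state two-level collisional-radiative balance:
        n2/n1 = ne*q12 / (A21 + ne*q21)
    ne in cm^-3, q12/q21 excitation/de-excitation rate coefficients
    in cm^3/s, A21 the radiative decay rate in 1/s."""
    return ne * q12 / (a21 + ne * q21)

q12, q21, a21 = 1e-8, 3e-8, 1e8   # illustrative round-number rates

low = excited_fraction(1e10, q12, q21, a21)   # coronal limit: ~ ne*q12/A21
high = excited_fraction(1e20, q12, q21, a21)  # collision-dominated: -> q12/q21
print(low, high)
```

The two limits bracket the regimes mentioned in the abstract: at low density the population ratio scales linearly with electron density (coronal picture), while at high density collisions dominate and the ratio saturates, which is why line ratios can serve as density diagnostics in between.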