29 results for Autoregressive-Moving Average model
Abstract:
A nature-inspired decentralised multi-agent algorithm is proposed to solve a problem of distributed task selection in which cities produce and store batches of different mail types. Agents must collect and process the mail batches without a priori knowledge of the available mail at the cities or inter-agent communication. In order to process a different mail type than the previous one, an agent must undergo a change-over during which it remains inactive. We propose a threshold-based algorithm in order to maximise the overall efficiency (the average amount of mail collected). We show that memory, i.e. the possibility for agents to develop preferences for certain cities, not only leads to emergent cooperation between agents, but also to a significant increase in efficiency (above the theoretical upper limit for any memoryless algorithm), and we systematically investigate the influence of the various model parameters. Finally, we demonstrate the flexibility of the algorithm to changes in circumstances, and its excellent scalability.
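As an illustration of the kind of response-threshold rule with memory described above, the minimal Python sketch below (hypothetical parameter names and update rule; not the authors' implementation) lets an agent accept a mail type with a probability that rises once the perceived stimulus exceeds the agent's threshold, and lowers the threshold for recently chosen types to mimic memory-based preference.

```python
import random

def choice_probability(stimulus, threshold, steepness=2):
    """Classic response-threshold rule: acceptance probability grows with the
    stimulus and is suppressed while the stimulus lies below the threshold."""
    return stimulus**steepness / (stimulus**steepness + threshold**steepness)

class Agent:
    def __init__(self, mail_types, learning=0.1, forgetting=0.05):
        # One threshold per mail type; a lower threshold means a stronger preference.
        self.thresholds = {m: 0.5 for m in mail_types}
        self.learning, self.forgetting = learning, forgetting
        self.current_type = None

    def select(self, stimuli):
        """stimuli: dict mapping mail type -> perceived demand at the visited city."""
        for mail_type, s in stimuli.items():
            if random.random() < choice_probability(s, self.thresholds[mail_type]):
                # Reinforce the chosen type (memory) and relax the others.
                for m in self.thresholds:
                    if m == mail_type:
                        self.thresholds[m] = max(0.01, self.thresholds[m] - self.learning)
                    else:
                        self.thresholds[m] = min(1.0, self.thresholds[m] + self.forgetting)
                self.current_type = mail_type
                return mail_type
        return None  # remain idle this step
```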
Abstract:
For 35 years, Arnstein's ladder of citizen participation has been a touchstone for policy makers and practitioners promoting user involvement. This article critically assesses Arnstein's writing in relation to user involvement in health, drawing on evidence from the United Kingdom, the Netherlands, Finland, Sweden and Canada. Arnstein's model, however, by solely emphasizing power, limits effective responses to the challenge of involving users in services and undermines the potential of the user involvement process. Such an emphasis on power assumes that it has a common basis for users, providers and policy makers, and ignores the existence of different relevant forms of knowledge and expertise. It also fails to recognise that, for some users, participation itself may be a goal. We propose a new model to replace the static image of a ladder and argue that, for user involvement to improve health services, it must acknowledge the value of the process and the diversity of knowledge and experience of both health professionals and lay people.
Abstract:
We show that the variation in dispersion managed soliton energy that occurs as the amplifier position varies within the dispersion map, for a fixed map strength, can be interpreted using the concept of effective average dispersion. Using this concept we physically explain why the location of the amplifier can produce a greater or lesser energy enhancement factor than the lossless model. © 2001 Elsevier Science B.V. All rights reserved.
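For orientation, one plausible formalisation of an effective average dispersion (an assumption for illustration only; the abstract does not give the authors' exact definition) weights the local group-velocity dispersion by the local pulse power, which varies between amplifiers because of fibre loss:

```latex
% Hypothetical power-weighted definition of an effective average dispersion over
% one dispersion map of length L, with local dispersion D(z) and power P(z).
\[
  \bar{D}_{\mathrm{eff}} \;=\;
  \frac{\int_{0}^{L} D(z)\, P(z)\, \mathrm{d}z}{\int_{0}^{L} P(z)\, \mathrm{d}z},
  \qquad
  P(z) = P_{0}\, e^{-\alpha (z - z_{a})} \ \text{after an amplifier at } z = z_{a}.
\]
```

Under this reading, shifting the amplifier position within the map changes the power profile and hence the relative weighting of the map's two dispersion sections, consistent with the abstract's claim that the amplifier location alters the energy enhancement factor.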
Abstract:
Improving the performance of private sector small and medium-sized enterprises (SMEs) in a cost-effective manner is a major concern for government. Governments have saved costs by moving information online rather than providing it through more expensive face-to-face exchanges between advisers and clients. Building on previous work that distinguished between types of advice, this article evaluates whether these changes to delivery mechanisms affect the type of advice received. Using a multinomial logit model of 1334 cases of business advice to small firms collected in England, the study found that advice to improve capabilities was taken by smaller firms, which were less likely to have limited liability or undertake business planning. SMEs sought word-of-mouth referrals before taking internal, capability-enhancing advice. This was also the case when that advice was part of a wider package of assistance involving both internal and external aspects. Only when firms took advice that used extant capabilities did they rely on the Internet. Therefore, when the Internet is privileged over face-to-face advice, the changes made by each recipient of advice are likely to diminish, reducing the impact of advice within the economy. This implies that fewer firms will adopt the sorts of management practices that would improve their productivity. © 2014 Taylor & Francis.
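A multinomial logit of this kind regresses a categorical outcome (the type of advice taken) on firm characteristics. The sketch below is only illustrative: the data file and column names are hypothetical, not the authors' dataset.

```python
# Minimal sketch of a multinomial logit of "type of advice taken" on firm
# characteristics, in the spirit of the study described above.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("business_advice_cases.csv")   # hypothetical file

# Outcome: categorical advice type (e.g. capability-enhancing, external, mixed).
y = df["advice_type"].astype("category").cat.codes

# Explanatory firm characteristics (hypothetical column names).
X = df[["employees", "limited_liability", "business_plan", "word_of_mouth_referral"]]
X = sm.add_constant(X)

model = sm.MNLogit(y, X)
result = model.fit(disp=False)
print(result.summary())
```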
Abstract:
This chapter explains a functional-integral approach to an impurity in the Tomonaga–Luttinger model. The Tomonaga–Luttinger model of one-dimensional (1D) strongly correlated electrons gives a striking example of non-Fermi-liquid behavior. For simplicity, the chapter considers only a single-mode Tomonaga–Luttinger model, with one species of right- and left-moving electrons, thus omitting spin indices and ultimately considering the simplest linearized model of a single-valley parabolic electron band. The standard operator bosonization is one of the most elegant methods developed in theoretical physics. The main advantage of bosonization, either in standard or functional form, is that including the quartic electron–electron interaction does not substantially change the free action. The chapter demonstrates how to develop the formalism of bosonization based on the functional-integral representation of observable quantities within the Keldysh formalism.
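As a reminder of why the interaction is harmless in the bosonic language, the bosonized Tomonaga–Luttinger action remains Gaussian, with the interaction absorbed into the Luttinger parameter K and a renormalised velocity v. The form below is the standard textbook one (sign and normalisation conventions vary between authors):

```latex
\[
  S[\phi] \;=\; \frac{1}{2\pi K}\int \mathrm{d}x\,\mathrm{d}\tau
  \left[ v\,(\partial_x \phi)^2 + \frac{1}{v}\,(\partial_\tau \phi)^2 \right],
\]
```

so the quartic fermionic interaction only shifts $v$ and $K$ away from their non-interacting values ($K = 1$ for free electrons).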
Surface roughness after excimer laser ablation using a PMMA model: profilometry and effects on vision
Abstract:
PURPOSE: To show that the limited quality of surfaces produced by one model of excimer laser system can degrade visual performance, using a polymethylmethacrylate (PMMA) model. METHODS: A range of lenses of different powers was ablated in PMMA sheets using five DOS-based Nidek EC-5000 laser systems (Nidek Technologies, Gamagori, Japan) from different clinics. Surface quality was objectively assessed using profilometry. Contrast sensitivity and visual acuity were measured through the lenses when their powers were neutralized with suitable spectacle trial lenses. RESULTS: Average surface roughness was found to increase with lens power, roughness values being higher for negative lenses than for positive lenses. Losses in visual contrast sensitivity and acuity measured in two subjects were found to follow a similar pattern. Findings are similar to those previously published with other excimer laser systems. CONCLUSIONS: Levels of surface roughness produced by some laser systems may be sufficient to degrade visual performance under some circumstances.
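Average surface roughness in profilometry is commonly reported as the arithmetic mean deviation Ra of the measured profile; the abstract does not specify the exact metric used, so the short sketch below simply illustrates that conventional definition on a hypothetical profilometer trace.

```python
import numpy as np

def average_roughness(profile_um: np.ndarray) -> float:
    """Arithmetic mean roughness Ra: mean absolute deviation of the surface
    profile from its mean line (values in micrometres)."""
    mean_line = profile_um.mean()
    return float(np.mean(np.abs(profile_um - mean_line)))

# Hypothetical profilometer trace (micrometres) across an ablated PMMA lens.
trace = np.array([0.12, 0.18, 0.05, -0.07, 0.22, 0.31, -0.02, 0.09])
print(f"Ra = {average_roughness(trace):.3f} um")
```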
Abstract:
In this paper, the authors use an exponential generalized autoregressive conditional heteroscedastic (EGARCH) error-correction model (ECM), that is, EGARCH-ECM, to estimate the pass-through effects of foreign exchange (FX) rates and producers' prices for 20 U.K. export sectors. The long-run adjustment of export prices to FX rates and producers' prices ranges from -1.02% (for the Textiles sector) to -17.22% (for the Meat sector). The contemporaneous pricing-to-market (PTM) coefficient ranges from -72.84% (for the Fuels sector) to -8.05% (for the Textiles sector). Short-run FX rate pass-through is not complete even after several months. Rolling EGARCH-ECMs show that the short- and long-run effects of FX rates and producers' prices fluctuate substantially, as do the asymmetry and volatility estimates, before equilibrium is achieved.
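The paper estimates the EGARCH-ECM jointly; the sketch below is only a rough two-step approximation for illustration, with hypothetical column names and data file: a long-run (cointegrating) regression and an error-correction model for export prices, followed by an EGARCH(1,1) fitted to the ECM residuals.

```python
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

df = pd.read_csv("uk_export_sector.csv", index_col=0, parse_dates=True)  # hypothetical
p, fx, ppi = df["log_export_price"], df["log_fx_rate"], df["log_producer_price"]

# Step 1a: long-run relation; its lagged residual is the error-correction term.
long_run = sm.OLS(p, sm.add_constant(pd.concat([fx, ppi], axis=1))).fit()
ect = long_run.resid.shift(1)

# Step 1b: short-run ECM in first differences with the lagged error-correction term.
X = pd.concat([fx.diff(), ppi.diff(), ect], axis=1).dropna()
X.columns = ["d_fx", "d_ppi", "ect_lag1"]
y = p.diff().loc[X.index]
ecm = sm.OLS(y, sm.add_constant(X)).fit()

# Step 2: EGARCH(1,1) with an asymmetry term (o=1) on the ECM residuals.
egarch = arch_model(ecm.resid, mean="Zero", vol="EGARCH", p=1, o=1, q=1)
print(egarch.fit(disp="off").summary())
```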
Abstract:
This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. We use non-linear, artificial intelligence techniques, namely recurrent neural networks, evolution strategies and kernel methods, in our forecasting experiment. In the experiment, these three methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. There is evidence in the literature that evolutionary methods can be used to evolve kernels; hence, our future work should combine the evolutionary and kernel methods to get the benefits of both.
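To make the comparison concrete, the sketch below (not the authors' models; lag order, kernel choice and data file are hypothetical) fits a kernel-based non-linear autoregression to lagged inflation and compares its out-of-sample error with a naive random walk forecast.

```python
import numpy as np
import pandas as pd
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_squared_error

inflation = pd.read_csv("us_inflation.csv", index_col=0, parse_dates=True)["inflation"]

n_lags = 4
lagged = pd.concat({f"lag{k}": inflation.shift(k) for k in range(1, n_lags + 1)}, axis=1)
data = pd.concat([inflation.rename("target"), lagged], axis=1).dropna()

split = int(len(data) * 0.8)
train, test = data.iloc[:split], data.iloc[split:]

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
model.fit(train.drop(columns="target"), train["target"])
kernel_pred = model.predict(test.drop(columns="target"))

# Naive random walk forecast: next period's inflation equals the current period's.
rw_pred = test["lag1"]

print("Kernel AR RMSE:   ", np.sqrt(mean_squared_error(test["target"], kernel_pred)))
print("Random walk RMSE: ", np.sqrt(mean_squared_error(test["target"], rw_pred)))
```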
Abstract:
Background: Coronary heart disease (CHD) is a public health priority in the UK. The National Service Framework (NSF) has set standards for the prevention, diagnosis and treatment of CHD, which include the use of cholesterol-lowering agents aimed at achieving targets of blood total cholesterol (TC) < 5.0 mmol/L and low-density lipoprotein cholesterol (LDL-C) < 3.0 mmol/L. In order to achieve these targets cost-effectively, prescribers need to make an informed choice from the range of statins available. Aim: To estimate the average and relative cost effectiveness of atorvastatin, fluvastatin, pravastatin and simvastatin in achieving the NSF LDL-C and TC targets. Design: Model-based economic evaluation. Methods: An economic model was constructed to estimate the number of patients achieving the NSF targets for LDL-C and TC at each dose of statin, and to calculate the average drug cost and incremental drug cost per patient achieving the target levels. The population baseline LDL-C and TC, drug efficacy and drug costs were taken from previously published data. Estimates of the distribution of patients receiving each dose of statin were derived from the UK national DIN-LINK database. Results: The estimated annual drug cost per 1000 patients treated was £289 000 with atorvastatin, £315 000 with simvastatin, £333 000 with pravastatin and £167 000 with fluvastatin. The percentages of patients achieving target were 74.4%, 46.4%, 28.4% and 13.2% for atorvastatin, simvastatin, pravastatin and fluvastatin, respectively. The incremental drug costs per extra patient treated to the LDL-C and TC targets, compared with fluvastatin, were £198 and £226 for atorvastatin, £443 and £567 for simvastatin, and £1089 and £2298 for pravastatin, using 2002 drug costs. Conclusions: As a result of its superior efficacy, atorvastatin generates a favourable cost-effectiveness profile as measured by drug cost per patient treated to LDL-C and TC targets. For a given drug budget, more patients would achieve the NSF LDL-C and TC targets with atorvastatin than with any of the other statins examined.
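The incremental figures can be reproduced from the numbers quoted above: the incremental drug cost per extra patient treated to target, relative to fluvastatin, is the cost difference divided by the difference in patients reaching target. The short sketch below reruns that arithmetic for the LDL-C target only; small rounding differences from the published £198/£443/£1089 are expected.

```python
# Reproduce the incremental drug cost per extra patient reaching the LDL-C
# target, relative to fluvastatin, from the figures quoted in the abstract.
annual_cost_per_1000 = {        # GBP, 2002 drug costs
    "atorvastatin": 289_000,
    "simvastatin": 315_000,
    "pravastatin": 333_000,
    "fluvastatin": 167_000,
}
pct_to_ldl_target = {           # % of patients reaching LDL-C < 3.0 mmol/L
    "atorvastatin": 74.4,
    "simvastatin": 46.4,
    "pravastatin": 28.4,
    "fluvastatin": 13.2,
}

base_cost = annual_cost_per_1000["fluvastatin"]
base_patients = 10 * pct_to_ldl_target["fluvastatin"]   # patients to target per 1000 treated

for statin in ("atorvastatin", "simvastatin", "pravastatin"):
    extra_cost = annual_cost_per_1000[statin] - base_cost
    extra_patients = 10 * pct_to_ldl_target[statin] - base_patients
    print(f"{statin}: ~GBP {extra_cost / extra_patients:.0f} per extra patient to target")
```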
Abstract:
In this work we propose an NLSE-based model of the power and spectral properties of the random distributed feedback (DFB) fiber laser. The model is based on a set of coupled nonlinear Schrödinger equations for the pump and Stokes waves with distributed feedback due to Rayleigh scattering. The model treats the random backscattering through its average strength, i.e. we assume that the feedback is incoherent. In addition, this allows us to speed up simulations substantially (by up to several orders of magnitude). We found that the model of incoherent feedback predicts a smooth generation spectrum in the random DFB fiber laser that is narrow compared with the gain spectral profile. The model allows one to optimise the width of the random laser generation spectrum by varying the dispersion and nonlinearity values: we found that high dispersion and low nonlinearity result in a narrower spectrum, which suggests that four-wave mixing between different spectral components in the quasi-mode-less spectrum of the random laser under study could play an important role in the spectrum formation. Note that the physical mechanism of spectrum formation and broadening in the random DFB fiber laser has not yet been identified. We investigate the temporal and statistical properties of the random DFB fiber laser dynamics. Interestingly, we found that the intensity statistics are not Gaussian. The intensity autocorrelation function also reveals that correlations do exist. The possibility of optimising the system parameters to enhance the observed intrinsic spectral correlations, and thus potentially to achieve pulsed (mode-locked) operation of the mode-less random distributed feedback fiber laser, is discussed.
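The full model couples pump and Stokes waves with distributed Rayleigh backscattering; as a much-reduced illustration of the numerical machinery involved, the sketch below propagates a single field with a standard symmetric split-step Fourier scheme for an NLSE with scalar gain. It is not the authors' coupled model, and all parameter values are placeholders.

```python
import numpy as np

def split_step_nlse(field, dz, n_steps, beta2, gamma, g):
    """Propagate a complex field envelope with a symmetric split-step Fourier
    scheme for i*dA/dz = (beta2/2)*d2A/dt2 - gamma*|A|^2*A + i*(g/2)*A.
    A toy single-equation stand-in, not the coupled pump/Stokes model."""
    n = field.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0)               # normalised frequency grid
    half_linear = np.exp((1j * beta2 / 2 * omega**2 + g / 2) * dz / 2)
    for _ in range(n_steps):
        field = np.fft.ifft(half_linear * np.fft.fft(field))   # half linear step
        field *= np.exp(1j * gamma * np.abs(field)**2 * dz)    # full nonlinear step
        field = np.fft.ifft(half_linear * np.fft.fft(field))   # half linear step
    return field

# Placeholder run: a noise seed evolving under dispersion, Kerr nonlinearity and gain.
rng = np.random.default_rng(0)
seed = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) * 1e-3
out = split_step_nlse(seed, dz=1e-3, n_steps=2000, beta2=-1.0, gamma=1.0, g=0.1)

spectrum = np.abs(np.fft.fftshift(np.fft.fft(out)))**2
fwhm_bins = int(np.count_nonzero(spectrum > spectrum.max() / 2))
print("Spectral width at half maximum (frequency bins):", fwhm_bins)
```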
Abstract:
A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting the analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength and width at half-height. The analytical lines to be used as input variables of the regression model are determined adaptively according to the samples used for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given with probabilistic confidence intervals, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
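Scikit-learn has no RVM implementation; as a rough stand-in for illustration, the sketch below uses Bayesian ARD regression, which likewise yields sparse weights and probabilistic predictions with a standard deviation. The inputs are hypothetical: rows are the intensities of the adaptively selected analytical lines, and targets are the certified Cr concentrations of the standard samples.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

line_intensities = np.load("selected_line_intensities.npy")   # (n_samples, n_lines), hypothetical
cr_concentration = np.load("certified_cr_percent.npy")        # (n_samples,), hypothetical

model = ARDRegression()
model.fit(line_intensities[:-5], cr_concentration[:-5])        # hold out 5 samples for testing

mean, std = model.predict(line_intensities[-5:], return_std=True)
for m, s, truth in zip(mean, std, cr_concentration[-5:]):
    print(f"predicted {m:.3f} +/- {1.96 * s:.3f} wt%   (certified {truth:.3f} wt%)")
```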
Abstract:
Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model, while information from outliers is restrained or removed. Copper concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function offered better overall performance in model robustness and convergence speed than four established weighting functions.
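The improved segmented weighting function itself is not given in the abstract; for orientation, the sketch below implements the classical weighting scheme used in standard WLS-SVM, the baseline the authors modify, which keeps residuals consistent with the bulk error distribution at full weight and down-weights or effectively removes outliers.

```python
import numpy as np

def wlssvm_weights(residuals, c1=2.5, c2=3.0, eps=1e-4):
    """Classical WLS-SVM weighting: residuals within the bulk of the error
    distribution keep weight 1, moderate outliers are down-weighted linearly,
    and gross outliers receive a negligible weight."""
    # Robust scale estimate of the residuals from the interquartile range.
    iqr = np.subtract(*np.percentile(residuals, [75, 25]))
    s_hat = iqr / 1.349
    z = np.abs(residuals) / s_hat

    weights = np.ones_like(z)
    mid = (z > c1) & (z <= c2)
    weights[mid] = (c2 - z[mid]) / (c2 - c1)
    weights[z > c2] = eps
    return weights

# Example: weights for a residual vector with two obvious outliers.
res = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 2.5, -3.0])
print(wlssvm_weights(res).round(3))
```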
Abstract:
The cell:cell bond between an immune cell and an antigen-presenting cell is a necessary event in the activation of the adaptive immune response. At the juncture between the cells, cell surface molecules on the opposing cells form non-covalent bonds and a distinct patterning is observed that is termed the immunological synapse. An important binding molecule in the synapse is the T-cell receptor (TCR), which is responsible for antigen recognition through its binding with a major histocompatibility complex with bound peptide (pMHC). This bond leads to intracellular signalling events that culminate in the activation of the T-cell, and ultimately leads to the expression of the immune effector function. The temporal analysis of the TCR bonds during the formation of the immunological synapse presents a problem to biologists, due to the spatio-temporal scales (nanometers and picoseconds) that are comparable with experimental uncertainty limits. In this study, a linear stochastic model, derived from a nonlinear model of the synapse, is used to analyse the temporal dynamics of the bond attachments for the TCR. Mathematical analysis and numerical methods are employed to analyse the qualitative dynamics of the nonequilibrium membrane dynamics, with the specific aim of calculating the average persistence time for the TCR:pMHC bond. A single-threshold method, which has previously been used to successfully calculate the TCR:pMHC contact path sizes in the synapse, is applied to produce results for the average contact times of the TCR:pMHC bonds. This method is extended through the development of a two-threshold method, which produces results suggesting the average persistence time for the TCR:pMHC bond is on the order of 2-4 seconds, values that agree with experimental evidence for TCR signalling. The study reveals two distinct scaling regimes in the time-persistent survival probability density profile of these bonds, one dominated by thermal fluctuations and the other associated with TCR signalling. Analysis of the thermal fluctuation regime reveals a minimal contribution to the average persistence time calculation, which has an important biological implication when comparing the probabilistic models to experimental evidence: in cases where only a few statistics can be gathered from experimental conditions, the results are unlikely to match the probabilistic predictions. The results also identify a rescaling relationship between the thermal noise and the bond length, suggesting that a recalibration of the experimental conditions to adhere to this scaling relationship will enable biologists to identify the start of the signalling regime for previously unobserved receptor:ligand bonds. Also, the regime associated with TCR signalling exhibits a universal decay rate for the persistence probability that is independent of the bond length.
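To give a flavour of the two-threshold idea, the sketch below measures persistence times on a simulated noisy signal using a hysteresis rule: an "attachment" event starts when the signal drops below a lower threshold and ends only once it rises back above an upper threshold, suppressing spurious terminations caused by thermal noise. The Ornstein-Uhlenbeck trajectory and threshold values are placeholders, not the membrane model of the study.

```python
import numpy as np

def persistence_times(signal, dt, lower, upper):
    """Durations for which the signal remains 'attached' under hysteresis rules."""
    times, attached, start = [], False, 0
    for i, x in enumerate(signal):
        if not attached and x < lower:
            attached, start = True, i
        elif attached and x > upper:
            attached = False
            times.append((i - start) * dt)
    return np.array(times)

rng = np.random.default_rng(1)
dt, n = 1e-3, 200_000
x = np.empty(n)
x[0] = 1.0
for i in range(1, n):                       # Ornstein-Uhlenbeck toy trajectory
    x[i] = x[i - 1] - 0.5 * (x[i - 1] - 1.0) * dt + 0.3 * np.sqrt(dt) * rng.normal()

durations = persistence_times(x, dt, lower=0.8, upper=0.9)
print(f"{durations.size} events, mean persistence {durations.mean():.3f} s")
```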
Abstract:
We investigate the mobility of nonlinear localized modes in a generalized discrete Ginzburg-Landau-type model, describing a one-dimensional waveguide array in an active Kerr medium with intrinsic, saturable gain and damping. It is shown that exponentially localized, traveling discrete dissipative breather-solitons may exist as stable attractors supported only by intrinsic properties of the medium, i.e., in the absence of any external field or symmetry-breaking perturbations. Through an interplay between the gain and damping effects, the moving soliton may overcome the Peierls-Nabarro barrier, present in the corresponding conservative system, by self-induced time-periodic oscillations of its power (norm) and energy (Hamiltonian), yielding exponential decays to zero with different rates in the forward and backward directions. In certain parameter windows, bistability appears between fast modes with small oscillations and slower, large-oscillation modes. The velocities and the oscillation periods are typically related by lattice commensurability and exhibit period-doubling bifurcations to chaotically "walking" modes under parameter variations. If the model is augmented by intersite Kerr nonlinearity, thereby reducing the Peierls-Nabarro barrier of the conservative system, the existence regime for moving solitons increases considerably, and a richer scenario appears, including Hopf bifurcations to incommensurately moving solutions and phase-locking intervals. Stable moving breathers also survive in the presence of weak disorder. © 2014 American Physical Society.
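For readers unfamiliar with this class of lattice models, one generic discrete Ginzburg-Landau-type form combining on-site Kerr nonlinearity, linear damping and saturable gain is shown below. This is an illustrative assumption only; the abstract does not state the authors' exact equation, and their intersite Kerr extension would add further nonlinear coupling between neighbouring sites.

```latex
% Generic discrete Ginzburg-Landau-type equation with inter-site coupling C,
% on-site Kerr nonlinearity, linear damping \Gamma and saturable gain \gamma
% (illustrative form, not necessarily the authors' exact model).
\[
  i\,\dot{u}_n + C\,(u_{n+1} + u_{n-1} - 2u_n) + |u_n|^{2} u_n
  \;=\; -\,i\,\Gamma\, u_n \;+\; \frac{i\,\gamma\, u_n}{1 + |u_n|^{2}/I_{\mathrm{sat}}},
\]
```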