49 results for Discrete analytic function theory
Abstract:
A modified radial basis function (RBF) neural network and its identification algorithm based on observational data with heterogeneous noise are introduced. The Box-Cox transformed system output is represented by the RBF neural network. To identify the model from observational data, the singular value decomposition of the full regression matrix, consisting of basis functions formed by the system input data, is first carried out; a fast identification method is then developed using the Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator (MLE) for a model base spanned by the largest eigenvectors. Finally, the Box-Cox transformation-based RBF neural network is identified from the derived optimal Box-Cox transformation by an orthogonal forward regression algorithm using a pseudo-PRESS statistic, yielding a sparse RBF model with good generalisation. The proposed algorithm and its efficacy are demonstrated with numerical examples.
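The core ingredients here, a Box-Cox transformation and its maximum-likelihood selection, can be illustrated with a minimal sketch. This is not the paper's Gauss-Newton/SVD procedure: it is a simplified grid-search MLE for a constant-mean Gaussian model, and all names (`boxcox`, `best_lambda`) and the synthetic data are invented for illustration.

```python
import math

def boxcox(y, lam):
    """Box-Cox transform of positive data; log transform for lam == 0."""
    if abs(lam) < 1e-12:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def boxcox_loglik(y, lam):
    """Profile log-likelihood of lam under a constant-mean Gaussian model."""
    z = boxcox(y, lam)
    n = len(z)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    # Jacobian term (lam - 1) * sum(log y) makes likelihoods comparable across lam.
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in y)

def best_lambda(y, grid):
    """Grid-search MLE for the Box-Cox exponent."""
    return max(grid, key=lambda lam: boxcox_loglik(y, lam))

# Skewed positive data: the MLE picks a variance-stabilising exponent.
data = [math.exp(0.1 * k) for k in range(1, 40)]
grid = [i / 10 for i in range(-20, 21)]
lam = best_lambda(data, grid)
```

In the paper this selection is done jointly with the RBF model fit; the grid search above only shows the shape of the likelihood criterion.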
Abstract:
In a previous paper (J. of Differential Equations, Vol. 249 (2010), 3081-3098) we examined a family of periodic Sturm-Liouville problems with boundary and interior singularities which are highly non-self-adjoint but have only real eigenvalues. We now establish Schatten class properties of the associated resolvent operator.
Abstract:
This study explores the implications of an organization moving toward service-dominant logic (S-D logic) for the sales function. Driven by its customers’ needs, a service orientation by its nature requires personal interaction, and sales personnel are in an ideal position to develop offerings with the customer. However, the development of S-D logic may require sales staff to develop additional skills. Employing a single case study, the study identified that sales personnel are quick to appreciate the advantages of S-D logic for customer satisfaction, and six specific skills were highlighted and explored. Further, three propositions were identified: in an organization adopting S-D logic, the sales process needs to elicit needs at both embedded-value and value-in-use levels. In addition, the sales process needs to coproduce not just goods and service attributes but also attributes of the customer’s usage processes.
Abstract:
A neural network enhanced self-tuning controller is presented, which combines the attributes of neural network mapping with a generalised minimum variance self-tuning control (STC) strategy. In this way the controller can deal with nonlinear plants which exhibit features such as uncertainties, nonminimum phase behaviour and coupling effects, may have unmodelled dynamics, and whose nonlinearities are assumed to be globally bounded. The unknown nonlinear plants to be controlled are approximated by an equivalent model composed of a simple linear submodel plus a nonlinear submodel. A generalised recursive least squares algorithm is used to identify the linear submodel, and a layered neural network is used to model the unknown nonlinear submodel, with the weights updated based on the error between the plant output and the output of the linear submodel. The controller design procedure is based on the equivalent model; the nonlinear submodel is therefore naturally accommodated within the control law. Two simulation studies are provided to demonstrate the effectiveness of the control algorithm.
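The linear-submodel identification step can be sketched with a standard recursive least squares (RLS) update. The sketch below is a generic RLS on a noise-free first-order plant, not the paper's generalised algorithm, and it omits the neural-network residual model; the plant coefficients and input signal are invented for illustration.

```python
import math

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least squares update: regressor phi, forgetting factor lam."""
    Pphi = [sum(P[i][j] * phi[j] for j in range(2)) for i in range(2)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(2))
    gain = [v / denom for v in Pphi]                 # Kalman-style gain
    err = y - sum(theta[i] * phi[i] for i in range(2))
    theta = [theta[i] + gain[i] * err for i in range(2)]
    P = [[(P[i][j] - gain[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta, P

# Identify y_t = 0.8*y_{t-1} + 0.5*u_{t-1} from a noise-free simulation
# driven by a persistently exciting multi-sine input.
a_true, b_true = 0.8, 0.5
theta, P = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]
y_prev = 0.0
for t in range(200):
    u = math.sin(0.9 * t) + 0.5 * math.sin(0.3 * t)
    y = a_true * y_prev + b_true * u
    theta, P = rls_step(theta, P, [y_prev, u], y)
    y_prev = y
```

With exact data and a persistently exciting input, `theta` converges to the true plant coefficients; in the paper the residual between plant output and this linear prediction drives the neural network's weight updates.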
Abstract:
An efficient numerical self-consistent field theory (SCFT) algorithm is developed for treating structured polymers on spherical surfaces. The method solves the diffusion equations of SCFT with a pseudospectral approach that combines a spherical-harmonics expansion for the angular coordinates with a modified real-space Crank–Nicolson method for the radial direction. The self-consistent field equations are solved with Anderson-mixing iterations using dynamical parameters and an alignment procedure to prevent angular drift of the solution. A demonstration of the algorithm is provided for thin films of diblock copolymer grafted to the surface of a spherical core, in which the sequence of equilibrium morphologies is predicted as a function of diblock composition. The study reveals an array of interesting behaviors as the block copolymer pattern is forced to adapt to the finite surface area of the sphere.
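The Crank–Nicolson ingredient of the radial step can be illustrated in a simplified setting. The sketch below (assumed names throughout) advances plain 1D diffusion q_s = q_xx, with zero boundary values, by one contour step using a tridiagonal (Thomas) solve; the paper's method applies this idea to the radial direction of the full SCFT diffusion equation, combined with a spherical-harmonics expansion for the angles.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = rhs."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(q, ds, dx):
    """One Crank-Nicolson step of q_s = q_xx with zero Dirichlet boundaries."""
    n = len(q)
    r = ds / (2.0 * dx * dx)
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    d = [0.0] * n
    for i in range(1, n - 1):   # explicit half of the average
        d[i] = r * q[i - 1] + (1.0 - 2.0 * r) * q[i] + r * q[i + 1]
    # Pin the boundary values to zero.
    a[0] = c[0] = a[-1] = c[-1] = 0.0
    b[0] = b[-1] = 1.0
    d[0] = d[-1] = 0.0
    return thomas(a, b, c, d)

# One contour step on a sine profile; the peak decays smoothly.
dx = 1.0 / 40
q0 = [math.sin(math.pi * i * dx) for i in range(41)]
q1 = cn_step(q0, 0.01, dx)
```

The scheme is unconditionally stable and second-order accurate in both step sizes, which is why it is a common choice for the stiff radial direction.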
Abstract:
This paper compares a number of different extreme value models for determining the value at risk (VaR) of three LIFFE futures contracts. A semi-nonparametric approach is also proposed, where the tail events are modeled using the generalised Pareto distribution, and normal market conditions are captured by the empirical distribution function. The value at risk estimates from this approach are compared with those of standard nonparametric extreme value tail estimation approaches, with a small sample bias-corrected extreme value approach, and with those calculated from bootstrapping the unconditional density and bootstrapping from a GARCH(1,1) model. The results indicate that, for a holdout sample, the proposed semi-nonparametric extreme value approach yields superior results to other methods, but the small sample tail index technique is also accurate.
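The semi-nonparametric idea, empirical distribution below a high threshold and a generalised Pareto tail above it, can be sketched as follows. For simplicity this uses a method-of-moments GPD fit rather than the paper's estimators, and synthetic unit-exponential losses; all function names are illustrative.

```python
import math

def gpd_moment_fit(excesses):
    """Method-of-moments fit of a generalised Pareto to threshold excesses."""
    n = len(excesses)
    m = sum(excesses) / n
    s2 = sum((e - m) ** 2 for e in excesses) / n
    xi = 0.5 * (1.0 - m * m / s2)           # shape
    sigma = 0.5 * m * (m * m / s2 + 1.0)    # scale
    return xi, sigma

def evt_var(losses, p, tail_frac=0.05):
    """VaR at level p: empirical body, GPD tail above the (1 - tail_frac) quantile."""
    x = sorted(losses)
    n = len(x)
    u = x[int((1.0 - tail_frac) * n)]       # high threshold
    exc = [v - u for v in x if v > u]
    nu = len(exc)
    xi, sigma = gpd_moment_fit(exc)
    t = (n / nu) * (1.0 - p)                # rescaled tail probability
    if abs(xi) < 1e-8:                      # exponential-tail limit
        return u - sigma * math.log(t)
    return u + (sigma / xi) * (t ** (-xi) - 1.0)

# Synthetic unit-exponential losses via the inverse CDF on a grid.
losses = [-math.log(1.0 - (k + 0.5) / 1000) for k in range(1000)]
var99 = evt_var(losses, 0.99)   # the true exponential 99% quantile is about 4.61
```

Quantiles at or below the threshold would be read directly from the empirical distribution; only the tail beyond it uses the fitted GPD.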
Abstract:
The Routh-stability method is employed to reduce the order of discrete-time system transfer functions. It is shown that the Routh approximant is well suited to reducing both the denominator and the numerator polynomials, although alternative methods, such as the Padé–Markov approximation, are also used to fit the model numerator coefficients.
Abstract:
This paper derives exact discrete time representations for data generated by a continuous time autoregressive moving average (ARMA) system with mixed stock and flow data. The representations for systems comprised entirely of stocks or of flows are also given. In each case the discrete time representations are shown to be of ARMA form, the orders depending on those of the continuous time system. Three examples and applications are also provided, two of which concern the stationary ARMA(2, 1) model with stock variables (with applications to sunspot data and a short-term interest rate) and one concerning the nonstationary ARMA(2, 1) model with a flow variable (with an application to U.S. nondurable consumers’ expenditure). In all three examples the presence of an MA(1) component in the continuous time system has a dramatic impact on eradicating unaccounted-for serial correlation that is present in the discrete time version of the ARMA(2, 0) specification, even though the form of the discrete time model is ARMA(2, 1) for both models.
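The simplest instance of such an exact discretisation, well below the paper's mixed stock-flow ARMA(2, 1) cases, is the first-order stock case, sketched below under assumed notation: a continuous-time AR(1) (Ornstein–Uhlenbeck) process dX = −κX dt + σ dW, sampled at a fixed interval h, satisfies an exact discrete AR(1).

```python
import math

def exact_ar1_from_car1(kappa, sigma, h):
    """Exact discrete AR(1) implied by dX = -kappa*X dt + sigma dW at spacing h."""
    phi = math.exp(-kappa * h)                                   # AR coefficient
    var_eps = sigma ** 2 * (1.0 - math.exp(-2.0 * kappa * h)) / (2.0 * kappa)
    return phi, var_eps

phi, var_eps = exact_ar1_from_car1(0.5, 1.0, 1.0)
# Consistency check: the discrete stationary variance var_eps / (1 - phi^2)
# equals the continuous one, sigma^2 / (2*kappa), at any sampling interval h.
```

In the higher-order and flow-variable cases treated in the paper, time aggregation additionally induces moving-average terms, which is why the discrete representations come out in ARMA rather than pure AR form.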
Abstract:
In this paper we study generalised prime systems for which the integer counting function N_P(x) is asymptotically well behaved, in the sense that N_P(x) = ρx + O(x^β), where ρ is a positive constant and β < 1. For such systems, the associated zeta function ζ_P(s) is holomorphic for Re s > β. We prove an upper bound for ζ_P(σ + it), polynomial in |t|, for any ε > 0, and also for ε = 0 for all such σ except possibly one value. The Dirichlet divisor problem for generalised integers concerns the size of the error term in N_{k,P}(x) − Res_{s=1}(ζ_P(s)^k x^s/s), which is O(x^θ) for some θ < 1. Letting α_k denote the infimum of such θ, we derive an upper bound for α_k.
Abstract:
We study generalised prime systems (both discrete and continuous) for which the `integer counting function' N(x) has the property that N(x) − cx is periodic for some c > 0. We show that this is extremely rare. In particular, we show that the only such system for which N is continuous is the trivial system with N(x) − cx constant, while if N has finitely many discontinuities per bounded interval, then N must be the counting function of the g-prime system containing the usual primes except for finitely many. Keywords and phrases: Generalised prime systems.
Abstract:
This paper investigates the frequency of extreme events for three LIFFE futures contracts for the calculation of minimum capital risk requirements (MCRRs). We propose a semiparametric approach where the tails are modelled by the Generalized Pareto Distribution and smaller risks are captured by the empirical distribution function. We compare the capital requirements from this approach with those calculated from the unconditional density and from a conditional density - a GARCH(1,1) model. Our primary finding is that, both in-sample and for a hold-out sample, our extreme value approach yields results superior to those of the other two models, which do not explicitly model the tails of the return distribution. Since the use of these internal models will be permitted under the EC-CAD II, they could be widely adopted in the near future for determining capital adequacies. Hence, close scrutiny of competing models is required to avoid a potentially costly misallocation of capital resources while at the same time ensuring the safety of the financial system.
Abstract:
As the field of international business has matured, there have been shifts in the core unit of analysis. First, there was analysis at country level, using national statistics on trade and foreign direct investment (FDI). Next, the focus shifted to the multinational enterprise (MNE) and the parent’s firm specific advantages (FSAs). Eventually the MNE was analysed as a network and the subsidiary became a unit of analysis. We untangle the last fifty years of international business theory using a classification by these three units of analysis. This is the country-specific advantage (CSA) and firm-specific advantage (FSA) matrix. Will this integrative framework continue to be useful in the future? We demonstrate that this is likely as the CSA/FSA matrix permits integration of potentially useful alternative units of analysis, including the broad region of the triad. Looking forward, we develop a new framework, visualized in two matrices, to show how distance really matters and how FSAs function in international business. Key to this are the concepts of compounded distance and resource recombination barriers facing MNEs when operating across national borders.
Abstract:
Military doctrine is one of the conceptual components of war. Its raison d’être is that of a force multiplier. It enables a smaller force to take on and defeat a larger force in battle. This article’s departure point is the aphorism of Sir Julian Corbett, who described doctrine as ‘the soul of warfare’. The second dimension to creating a force multiplier effect is forging doctrine with an appropriate command philosophy. The challenge for commanders is how, in unique circumstances, to formulate, disseminate and apply an appropriate doctrine and combine it with a relevant command philosophy. This can only be achieved by policy-makers and senior commanders successfully answering the Clausewitzian question: what kind of conflict are they involved in? Once an answer has been provided, a synthesis of these two factors can be developed and applied. Doctrine has implications for all three levels of war. Tactically, doctrine does two things: first, it helps to create a tempo of operations; second, it develops a transitory quality that will produce operational effect, and ultimately facilitate the pursuit of strategic objectives. Its function is to provide both training and instruction. Second, at the operational level, instruction and understanding are critical functions. Third, at the strategic level, it provides understanding and direction. Using John Gooch’s six components of doctrine, it will be argued that there is a lacuna in the theory of doctrine, as these components can manifest themselves in very different ways at the three levels of war. They can in turn affect the transitory quality of tactical operations. Doctrine is pivotal to success in war. Without doctrine and the appropriate command philosophy military operations cannot be successfully concluded against an active and determined foe.
Abstract:
In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems. In this setting, recent works have shown how to get a statistics of extremes in agreement with the classical Extreme Value Theory. We pursue these investigations by giving analytical expressions of Extreme Value distribution parameters for maps that have an absolutely continuous invariant measure. We compare these analytical results with numerical experiments in which we study the convergence to limiting distributions using the so-called block-maxima approach, pointing out in which cases we obtain robust estimation of parameters. In regular maps for which mixing properties do not hold, we show that the fitting procedure to the classical Extreme Value Distribution fails, as expected. However, we obtain an empirical distribution that can be explained starting from a different observable function for which Nicolis et al. (Phys. Rev. Lett. 97(21): 210602, 2006) have found analytical results.
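The block-maxima procedure referred to above can be sketched for a concrete chaotic map. Everything in the sketch is an illustrative choice, not the paper's setup: the logistic map x → 4x(1−x), the distance observable −log|x − x₀|, and a simple Gumbel moment fit standing in for a full GEV maximum-likelihood fit.

```python
import math

def block_maxima(series, block_len):
    """Split a series into consecutive blocks and keep each block's maximum."""
    return [max(series[i:i + block_len])
            for i in range(0, len(series) - block_len + 1, block_len)]

def gumbel_moment_fit(maxima):
    """Moment estimates of Gumbel location/scale (a stand-in for a GEV MLE)."""
    n = len(maxima)
    m = sum(maxima) / n
    s = math.sqrt(sum((v - m) ** 2 for v in maxima) / n)
    beta = s * math.sqrt(6.0) / math.pi          # scale
    mu = m - 0.5772156649 * beta                 # location (Euler-Mascheroni shift)
    return mu, beta

# Orbit of the chaotic logistic map, observed through a distance observable
# that blows up near the reference point x0.
x, x0, obs = 0.2, 0.3, []
for _ in range(50_000):
    x = 4.0 * x * (1.0 - x)
    obs.append(-math.log(max(abs(x - x0), 1e-12)))

mu, beta = gumbel_moment_fit(block_maxima(obs, 1000))
```

For mixing maps with an absolutely continuous invariant measure, fits of this kind converge to the limiting distribution as the block length grows; for the regular maps discussed above, the same procedure fails, which is the paper's point.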