954 results for Short Loadlength, Fast Algorithms
Abstract:
This paper examines the asymmetric behavior of the conditional mean and variance. Short-horizon mean-reversion in the mean is modeled with an asymmetric nonlinear autoregressive model, and the variance is modeled with an Exponential GARCH-in-Mean model. The results of the empirical investigation of the Nordic stock markets indicate that negative returns revert faster, while positive returns generally persist longer. Asymmetry in both mean and variance is seen in all included markets and is fairly similar across them. Volatility rises more following negative returns than following positive returns, which is an indication of overreaction. Negative returns lead to increased variance, while positive returns even lead to decreased variance.
Abstract:
Increased media exposure to layoffs and corporate quarterly financial reporting has arguably created a common perception – especially favored by the media itself – that companies have been forced to improve their financial performance from quarter to quarter. Academically, the relevant question is whether companies themselves feel exposed to short-term pressure to perform, even if it means compromising the company's long-term future. This paper studies this issue using results from a survey conducted among the 500 largest companies in Finland. The results show that companies in general feel moderate short-term pressure, with reasonable dispersion across firms. There seems to be a link between the degree of pressure felt and the firm's ownership structure, i.e., we find support for the existence of short-term versus long-term owners. We also find significant ownership-related differences, in line with expectations, in how such short-term pressure is reflected in actual decision variables such as the investment criteria used.
Abstract:
Grover's database search algorithm, although discovered in the context of quantum computation, can be implemented using any physical system that allows superposition of states. A physical realization of this algorithm is described using coupled simple harmonic oscillators, which can be exactly solved in both classical and quantum domains. Classical wave algorithms are far more stable against decoherence compared to their quantum counterparts. In addition to providing convenient demonstration models, they may have a role in practical situations, such as catalysis.
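The abstract's coupled-oscillator realization is specific to the paper, but the underlying Grover iteration is standard and can be simulated classically as simple amplitude arithmetic. A minimal sketch (the problem size N and the marked index are illustrative assumptions, not from the paper):

```python
import math

def grover_search(n_items: int, marked: int) -> list[float]:
    """Simulate Grover's algorithm on a vector of n_items real amplitudes."""
    # Start in the uniform superposition over all items.
    amp = [1.0 / math.sqrt(n_items)] * n_items
    # The optimal number of iterations is about (pi/4) * sqrt(N).
    iterations = int(math.floor(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        # Oracle: flip the sign of the marked item's amplitude.
        amp[marked] = -amp[marked]
        # Diffusion: reflect every amplitude about the mean amplitude.
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]
    return [a * a for a in amp]  # measurement probabilities

probs = grover_search(8, marked=5)
```

On N = 8 this runs two iterations and boosts the marked item's probability from 1/8 to about 0.95, illustrating why any wave system supporting superposition can realize the algorithm.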
Abstract:
Four algorithms, all variants of Simultaneous Perturbation Stochastic Approximation (SPSA), are proposed. The original one-measurement SPSA uses an estimate of the gradient of the objective function L that contains an additional bias term not present in two-measurement SPSA. As a result, the asymptotic covariance matrix of the iterate convergence process has a bias term. We propose a one-measurement algorithm that eliminates this bias and has asymptotic convergence properties that make for easier comparison with two-measurement SPSA. Under certain conditions, the algorithm outperforms both forms of SPSA, with the only overhead being the storage of a single measurement. We also propose a similar algorithm that uses perturbations obtained from normalized Hadamard matrices. The convergence with probability 1 of both algorithms is established. We extend measurement reuse to design two second-order SPSA algorithms and sketch their convergence analysis. Finally, we present simulation results on an illustrative minimization problem.
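For reference, the standard two-measurement SPSA recursion that the proposed one-measurement variants are compared against can be sketched as follows (the quadratic objective, gain constants, and dimension are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def spsa_minimize(loss, theta0, iters=2000, a=0.2, c=0.1, A=50, seed=0):
    """Two-measurement SPSA: estimate the gradient from two loss
    evaluations along one random simultaneous (Rademacher) perturbation."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(iters):
        a_k = a / (k + 1 + A) ** 0.602   # standard gain-decay exponents
        c_k = c / (k + 1) ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # +/-1 components
        # Two measurements of the objective, one per perturbation sign.
        g = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2 * c_k * delta)
        theta = theta - a_k * g
    return theta

target = np.array([1.0, -2.0, 0.5])
theta = spsa_minimize(lambda x: np.sum((x - target) ** 2), np.zeros(3))
```

The one-measurement variants discussed in the abstract replace the two evaluations with a single one, trading extra gradient bias (or, in the proposed algorithms, extra storage) for half the measurement cost per iteration.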
Abstract:
Utilization bounds for Earliest Deadline First (EDF) and Rate Monotonic (RM) scheduling are known and well understood for uniprocessor systems. In this paper, we derive limits on similar bounds for the multiprocessor case, where the individual processors need not be identical. Tasks are partitioned among the processors, and RM scheduling is assumed to be the policy used on individual processors. A minimum limit on the bounds for a 'greedy' class of algorithms is given and proved, since the actual value of the bound depends on the algorithm that allocates the tasks. We also derive the utilization bound of an algorithm that allocates tasks in decreasing order of utilization factors. Knowledge of such bounds allows us to carry out very fast schedulability tests, although we are constrained by the fact that the tests are sufficient but not necessary to ensure schedulability.
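The uniprocessor RM bound the abstract builds on is the classic Liu–Layland sufficient test U ≤ n(2^(1/n) − 1). A minimal sketch of that test (the sample task sets are illustrative assumptions):

```python
def rm_sufficient_test(tasks):
    """Sufficient (not necessary) RM schedulability test for one processor.

    tasks: list of (C, T) pairs -- worst-case execution time and period.
    Returns True if total utilization is within the Liu-Layland bound
    n * (2**(1/n) - 1); False only means the test is inconclusive.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three tasks with total utilization 0.6 pass the n=3 bound (~0.7798).
ok = rm_sufficient_test([(1, 5), (2, 10), (2, 10)])
```

Such tests are fast precisely because they compare a single utilization sum against a closed-form bound, which is the property the paper extends to partitioned heterogeneous multiprocessors.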
Abstract:
A relationship between 2-monotonicity and 2-asummability has been established, and thereby a fast method for testing 2-asummability of switching functions is derived. The approach is based on the fact that only a particular type of 2-sum need be examined for 2-asummability testing of 2-monotonic switching functions: those 2-sums which contain more than five 1's. 2-asummability testing for these 2-sums can be easily done using the authors' technique.
Abstract:
Recent studies have shown that changes in global mean precipitation are larger for solar forcing than for CO2 forcing of similar magnitude. In this paper, we use an atmospheric general circulation model to show that the differences originate from differing fast responses of the climate system. We estimate the adjusted radiative forcing and fast response using Hansen's "fixed-SST forcing" method. The total climate system response is calculated using mixed layer simulations with the same model. Our analysis shows that the fast response is almost 40% of the total response for a few key variables such as precipitation and evaporation. We further demonstrate that the hydrologic sensitivity, defined as the change in global mean precipitation per unit warming, is the same for the two forcings when the fast responses are excluded from the definition of hydrologic sensitivity, suggesting that the slow response (feedback) of the hydrological cycle is independent of the forcing mechanism. Based on our results, we recommend that the fast and slow responses be compared separately in multi-model intercomparisons to discover and understand robust responses in the hydrologic cycle. The significance of this study for geoengineering is discussed.
Abstract:
Silver iodide-based fast ion conducting glasses containing silver phosphate and silver borate have been studied. An attempt is made to identify the interaction between anions by studying the chemical shifts of 31P and 11B atoms in high resolution (HR) magic angle spinning (MAS) NMR spectra. Variation in the chemical shifts of 31P or 11B has been observed, which is attributed to the change in the partial charge on the 31P or 11B. This is indicative of the change in the electronegativity of the anion matrix as a whole. This in turn is interpreted as due to significant interaction among anions. The significance of such interaction to the concept of structural unpinning of silver ions in fast ion conducting glasses is discussed.
Abstract:
The work looks at the response to three-point loading of carbon-epoxy (CF-EP) composites with inserted buffer strip (BS) material. Short beam shear tests were performed to study the load-deflection response, as well as fracture features through macroscopy, of the CF-EP system containing the interleaved PTFE-coated fabric material. Significant differences were noticed in the response of the CF-EP system to the bending process consequent to the architectural modification. It was inferred that the introduction of small amounts of less adherent layers of material at specific locations causes a decrease in the load-carrying capability. Further, the number of interface separations and the ease with which they occur are found to depend on the extent to which the inserted layer is present in single or multiple layer positions.
Abstract:
The conformational dependence of interproton distances in model proline peptides has been investigated in order to facilitate interpretation of the results of Nuclear Overhauser Effect (NOE) studies on such peptides. For this purpose two model systems, namely Ac-Pro-NHMe and Ac-Pro-X-NHMe, have been chosen. In the former, short interproton distances detectable in NOE experiments permit a clear distinction between conformations with Pro ψ = -30° (helical region) and those in which ψ is around 120° (polyproline region). For the latter, the variation of distances between the protons of the methyl amide and the Pro ring has been studied by superimposing them on the Ramachandran map in the (φ3, ψ3) plane. The results show that β-turn and non-β-turn conformations can be readily distinguished from NOE data, and such long-range NOEs should be detectable for specific non-β-turn conformations. NOEs involving Cβ and Cγ protons are particularly sensitive to the state of pyrrolidine ring puckering.
Abstract:
In many problems of decision making under uncertainty, the system has to acquire knowledge of its environment and learn the optimal decision through experience. Such problems may also involve the system having to arrive at the globally optimal decision when, at each instant, only a subset of the entire set of possible alternatives is available. These problems can be successfully modelled and analysed by learning automata. In this paper, an estimator learning algorithm, which maintains estimates of the reward characteristics of the random environment, is presented for an automaton with a changing number of actions. A learning automaton using the new scheme is shown to be ε-optimal. The simulation results demonstrate the fast convergence properties of the new algorithm. The results of this study can be extended to the design of other types of estimator algorithms with good convergence properties.
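The estimator idea, maintaining running reward estimates and steering the action-probability vector toward the currently best-estimated action, can be sketched with a simple pursuit-style automaton in a stationary Bernoulli environment (the reward probabilities, learning rate, and fixed action set below are illustrative assumptions; the paper's scheme additionally handles a changing action set):

```python
import numpy as np

def pursuit_automaton(reward_probs, steps=5000, lam=0.01, seed=0):
    """Pursuit-style estimator automaton: keep sample-mean reward
    estimates and move the action probabilities toward the best estimate."""
    rng = np.random.default_rng(seed)
    r = len(reward_probs)
    p = np.full(r, 1.0 / r)          # action probability vector
    pulls = np.zeros(r)
    wins = np.zeros(r)
    # Initialize the estimates by sampling every action a few times.
    for i in range(r):
        for _ in range(20):
            pulls[i] += 1
            wins[i] += rng.random() < reward_probs[i]
    for _ in range(steps):
        a = rng.choice(r, p=p)       # choose an action per current p
        pulls[a] += 1
        wins[a] += rng.random() < reward_probs[a]
        best = int(np.argmax(wins / pulls))
        e = np.zeros(r); e[best] = 1.0
        p = (1 - lam) * p + lam * e  # pursue the estimated best action
    return p

p = pursuit_automaton([0.2, 0.8, 0.5])
```

With these settings the probability mass concentrates on the action with the highest reward probability, which is the fast-convergence behavior estimator algorithms are prized for.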
Abstract:
Let G = (V, E) be a weighted undirected graph with nonnegative edge weights. An estimate δ̂(u, v) of the actual distance d(u, v) between u, v ∈ V is said to be of stretch t if and only if d(u, v) ≤ δ̂(u, v) ≤ t · d(u, v). Computing all-pairs small-stretch distances efficiently (both in terms of time and space) is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs t-stretch distances for a whole range of stretch t, and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24].
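The stretch-t condition itself is easy to state in code; a minimal illustration using exact Floyd–Warshall distances on a toy graph (the graph and the estimates are illustrative assumptions, not the paper's construction):

```python
import itertools

def floyd_warshall(n, edges):
    """All-pairs exact distances; edges maps (u, v) to a weight, undirected."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    # k varies slowest in product(), as Floyd-Warshall requires.
    for k, i, j in itertools.product(range(n), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    return d

def is_stretch_t(est, d, t):
    """Check d(u,v) <= est(u,v) <= t * d(u,v) for all pairs."""
    n = len(d)
    return all(d[i][j] <= est[i][j] <= t * d[i][j]
               for i in range(n) for j in range(n))

d = floyd_warshall(3, {(0, 1): 1, (1, 2): 1, (0, 2): 3})
```

The point of small-stretch algorithms is to produce estimates satisfying this check while beating the time and space costs of computing d exactly for all pairs.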
Abstract:
A common and practical paradigm in cooperative communications is the use of a dynamically selected 'best' relay to decode and forward information from a source to a destination. Such a system consists of two core phases: a relay selection phase, in which the system expends resources to select the best relay, and a data transmission phase, in which it uses the selected relay to forward data to the destination. In this paper, we study and optimize the trade-off between the selection and data transmission phase durations. We derive closed-form expressions for the overall throughput of a non-adaptive system that includes the selection phase overhead, and then optimize the selection and data transmission phase durations. Corresponding results are also derived for an adaptive system in which the relays can vary their transmission rates. Our results show that the optimal selection phase overhead can be significant even for fast selection algorithms. Furthermore, the optimal selection phase duration depends on the number of relays and whether adaptation is used.
Abstract:
There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (such as power, water, messages, or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs.
To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables). The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables x and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has been embedded into the first one, called the total residue approach. It changes the equality constraints so that we can obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimal communication.
These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
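The two-stage Lagrange multiplier structure described above can be illustrated on a toy version of the base problem: minimize a quadratic cost of nodal inputs subject to linear flow-balance equalities, whose necessary conditions form a single KKT system in the variables x and multipliers λ (the specific Q, A, and b below are illustrative assumptions, not the paper's network):

```python
import numpy as np

# Toy equality-constrained problem: minimize x' Q x subject to A x = b.
# Stationarity (2 Q x + A' lam = 0) plus feasibility (A x = b) give the
# KKT system; the same coefficient matrix serves both the variables and
# the multipliers, mirroring the shared-Jacobian observation above.
Q = np.diag([1.0, 2.0, 4.0])          # convex nodal input costs
A = np.array([[1.0, 1.0, 1.0]])       # flow balance: total input is fixed
b = np.array([6.0])

n, m = Q.shape[0], A.shape[0]
kkt = np.block([[2 * Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), b])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:n], sol[n:]
```

For a nonlinear network, the same structure recurs per iteration: stage one solves a linearized system of this form for x, and stage two updates λ using the same Jacobian.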