996 results for Optimal tests
Abstract:
The aim of this thesis is to assess the fishery of Baltic cod, herring and sprat by simulation over a 50-year period. We formulate a bioeconomic multispecies model for these species and include species interactions, because the cod and sprat stocks in particular have significant effects on each other. We model the development of population dynamics, catches and profits of the fishery under current fishing mortalities as well as under the optimal, profit-maximizing fishing mortalities. Thus we see how the fishery would develop with current mortalities and how it should be managed in order to yield maximal profits. The cod stock in particular has been quite low recently, and it could recover if fishing mortality were optimized. In addition, we assess what would happen to the fisheries of these species if environmental conditions more favourable for cod recruitment prevailed in the Baltic Sea. The results may yield new information for fisheries management. According to the results, the fishery of Baltic cod, herring and sprat is not at its most profitable level: the fishing mortality of each species should be lower in order to maximize profits. With optimized fishing mortalities, the net present value over the simulation period would be almost three times higher. The lower fishing mortality of cod would result in a recovery of the cod stock. If environmental conditions in the Baltic Sea improved, the cod stock would recover even without a decrease in fishing mortality; the increased cod stock would then restrict the herring and sprat stocks considerably, and harvesting these species would no longer be as profitable.
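As a rough illustration of the kind of computation involved (not the thesis model), the sketch below simulates a toy multispecies surplus-production system over 50 years and searches for the constant fishing mortalities that maximize discounted profit; all parameter values, the cod-on-sprat interaction term, and the price and cost figures are made-up assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative surplus-production model with a crude cod-on-sprat predation term.
# All parameters (growth rates, carrying capacities, prices, costs) are made up.
SPECIES = ["cod", "herring", "sprat"]
r = np.array([0.4, 0.5, 0.6])        # intrinsic growth rates
K = np.array([1.0, 2.0, 3.0])        # carrying capacities (relative units)
price = np.array([3.0, 1.0, 0.8])    # price per unit of catch
cost = np.array([0.5, 0.3, 0.3])     # cost per unit of fishing mortality
alpha = 0.3                          # strength of cod predation on sprat
T, discount = 50, 0.95               # horizon (years) and annual discount factor

def npv(F):
    """Net present value of profits over T years with constant fishing mortalities F."""
    B = 0.5 * K.copy()               # start each stock at half its carrying capacity
    total = 0.0
    for t in range(T):
        catch = F * B
        total += discount ** t * (price @ catch - cost @ F)
        growth = r * B * (1.0 - B / K)
        growth[2] -= alpha * B[0] * B[2]   # a larger cod stock depresses sprat growth
        B = np.clip(B + growth - catch, 1e-6, None)
    return total

# Search for the constant fishing mortalities in [0, 1] that maximize NPV.
res = minimize(lambda F: -npv(F), x0=[0.3, 0.3, 0.3],
               bounds=[(0.0, 1.0)] * 3, method="L-BFGS-B")
print("NPV at current-like F=0.3:", round(npv(np.full(3, 0.3)), 3))
print("optimal F:", dict(zip(SPECIES, res.x.round(3))), "NPV:", round(-res.fun, 3))
```

Comparing the NPV at the optimized mortalities with the NPV under a fixed "current" mortality mirrors the kind of comparison the thesis makes, though with entirely illustrative numbers here.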
Abstract:
A model comprising several servers, each equipped with its own queue and possibly a different service speed, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive to a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is the one that balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for nonexponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms utilize server idle-time measurements which are sent periodically to the central job scheduler. A model is developed for these measurements, and the result mentioned above is used to cast the problem as one of finding a projection of the root of an affine function when only noisy values of the function can be observed.
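As a rough numerical illustration of the allocation problem (not the paper's derivation or its adaptive algorithm), the sketch below finds the splitting probabilities that minimize the mean number of jobs in a set of M/M/1 queues, which by Little's law also minimizes the average response time; the arrival and service rates are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical rates: dedicated Poisson arrival rates lam, service speeds mu,
# and a generic Poisson stream of rate lam_g to be split with probabilities p.
lam = np.array([0.3, 0.5, 0.2])
mu = np.array([1.0, 1.5, 0.8])
lam_g = 0.6

def mean_jobs(p):
    """Total mean number of jobs across the M/M/1 queues under split p.
    Since the total arrival rate is fixed, minimizing this quantity also
    minimizes the average job response time (Little's law)."""
    rho = (lam + p * lam_g) / mu
    if np.any(rho >= 1.0):
        return 1e9                    # unstable queue: treat as infeasible
    return float(np.sum(rho / (1.0 - rho)))

constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
res = minimize(mean_jobs, x0=np.full(3, 1.0 / 3.0),
               bounds=[(0.0, 1.0)] * 3, constraints=constraints, method="SLSQP")
p_opt = res.x
print("optimal split of the generic stream:", p_opt.round(4))

# The paper's characterization suggests that, at the optimum, the server idle
# probabilities should be balanced in a weighted least-squares sense with
# weights related to the service speeds; printing them allows a quick look.
print("server idle probabilities:", (1.0 - (lam + p_opt * lam_g) / mu).round(4))
```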
Abstract:
Details of an efficient optimal closed-loop guidance algorithm for a three-dimensional launch are presented along with simulation results. Two types of orbital injection, with either the true anomaly or the argument of perigee free at injection, are considered. The resulting steering-angle profile under the assumption of uniform gravity lies in a canted plane, which transforms the three-dimensional problem into an equivalent two-dimensional one. Effects of thrust are estimated using a series evaluated recursively. Encke's method is used to predict the trajectory during powered flight and then to compute the changes due to actual gravity using two gravity-related vectors. Guidance parameters are evaluated using the linear differential correction method. Optimality of the algorithm is tested against a standard ground-based trajectory optimization package. The performance of the algorithm is tested for accuracy, robustness, and efficiency on a sun-synchronous mission involving guidance for a multistage vehicle that requires large pitch and yaw maneuvers. To demonstrate applicability of the algorithm to a range of missions, injection into a geostationary transfer orbit is also considered. The performance of the present algorithm is found to be much better than that of other algorithms.
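The guidance law itself is not reproduced here; the sketch below only illustrates the generic linear differential correction step the abstract mentions: guidance parameters are adjusted iteratively with a numerically estimated sensitivity matrix until the predicted injection errors vanish. The toy predictor and its nominal parameters are stand-ins for the actual trajectory prediction.

```python
import numpy as np

P_NOMINAL = np.array([0.3, 0.8, 1.2])   # parameters giving zero injection error (by construction)

def predicted_injection_error(p):
    """Stand-in for the trajectory predictor: a smooth map from guidance
    parameters (e.g. steering coefficients, burn time) to injection errors,
    constructed so that the error vanishes exactly at P_NOMINAL."""
    def g(q):
        a, b, tb = q
        return np.array([a + 0.2 * np.sin(b),
                         b + 0.1 * a * tb,
                         tb + 0.05 * a ** 2])
    return g(p) - g(P_NOMINAL)

def differential_correction(p0, tol=1e-10, max_iter=20, h=1e-6):
    """Newton-style linear differential correction of the guidance parameters."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        e = predicted_injection_error(p)
        if np.linalg.norm(e) < tol:
            break
        # Finite-difference sensitivity of the injection errors to the parameters.
        J = np.empty((e.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = h
            J[:, j] = (predicted_injection_error(p + dp) - e) / h
        p = p - np.linalg.solve(J, e)   # linear correction step
    return p

p = differential_correction([0.0, 0.5, 1.0])
print("corrected parameters:", p.round(6))
print("remaining injection errors:", predicted_injection_error(p))
```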
Abstract:
The superconducting transition temperatures in Bi2Ca1−xLnxSr2Cu2O8+δ, TlCa1−xLnxSr2Cu2O6+δ, and Tl0.8Ca1−xLnxBa2Cu23O6+δ (Ln = Y or rare earth) vary with composition and show a maximum at a specific value of x or δ. This observation suggests that an optimal carrier concentration is required to attain the maximum Tc in such cuprates, which seem to be two-band systems.
Abstract:
Equilibrium sediment volume tests are conducted on field soils to classify them according to their degree of expansivity and/or to predict the liquid limit of soils. This technical paper examines the different equilibrium sediment volume tests and critically evaluates each of them. It discusses the settling behavior of fine-grained soils during soil sediment formation in order to develop a rationale for conducting the latest version of the equilibrium sediment volume test. Probable limitations of the equilibrium sediment volume test, and possible ways to overcome them, are also indicated.
Abstract:
A simple and efficient algorithm for the bandwidth reduction of sparse symmetric matrices is proposed. It involves column-row permutations and is well suited to mapping onto the linear array topology of SIMD architectures. The efficiency of the algorithm is compared with that of other existing algorithms. The interconnectivity and the memory requirement of the linear array are discussed, and the complexity of its layout area is derived. The parallel version of the algorithm mapped onto the linear array is then introduced and explained with the help of an example. The optimality of the parallel algorithm is proved by deriving the time complexities of the algorithm on a single processor and on the linear array.
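The paper's parallel column-row permutation algorithm is not reproduced here; for orientation, the sketch below uses the standard (serial) reverse Cuthill-McKee heuristic from SciPy on a random symmetric matrix, simply to show how a symmetric permutation reduces the measured bandwidth.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Half-bandwidth of a sparse symmetric matrix: max |i - j| over nonzero entries."""
    coo = A.tocoo()
    return int(np.max(np.abs(coo.row - coo.col))) if coo.nnz else 0

# Small symmetric test matrix with a scattered sparsity pattern.
rng = np.random.default_rng(0)
n = 12
dense = np.zeros((n, n))
for _ in range(20):
    i, j = rng.integers(0, n, size=2)
    dense[i, j] = dense[j, i] = 1.0
np.fill_diagonal(dense, 1.0)
A = csr_matrix(dense)

# Reverse Cuthill-McKee returns a symmetric row/column permutation that
# tends to cluster the nonzeros near the diagonal.
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_perm = A[perm, :][:, perm]

print("bandwidth before:", bandwidth(A))
print("bandwidth after :", bandwidth(A_perm))
```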
Abstract:
Assuming an entropic origin for phason elasticity in quasicrystals, we derive predictions for the temperature dependence of grain-boundary structure and free energy, the nature of the elastic instability in these systems, and the behavior of sound damping near the instability. We believe that these will provide decisive tests of the entropic model for quasicrystals.
Abstract:
The K-means clustering algorithm is strongly dependent on the initial seed values. We use a genetic algorithm to find a near-optimal partitioning of the given data set by selecting proper initial seed values for the K-means algorithm. The results obtained are very encouraging: in most cases, on data sets having well-separated clusters, the proposed scheme reached a global minimum.
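A compact sketch of the general idea, not the authors' exact scheme: candidate seed sets are encoded as index vectors into the data, the K-means inertia obtained from those seeds serves as the fitness, and a simple genetic algorithm with truncation selection, one-point crossover, and point mutation searches for good seeds. Population size, mutation rate, and the synthetic data are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)
K, POP, GENS, MUT = 4, 20, 30, 0.1
rng = np.random.default_rng(0)

def fitness(seed_idx):
    """Run K-means from the given seeds; lower inertia means a better individual."""
    km = KMeans(n_clusters=K, init=X[seed_idx], n_init=1, max_iter=100)
    km.fit(X)
    return km.inertia_

# Each individual is a set of K distinct indices into X used as initial seeds.
pop = [rng.choice(len(X), size=K, replace=False) for _ in range(POP)]
for gen in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)
    survivors = [pop[i] for i in order[: POP // 2]]     # truncation selection
    children = []
    while len(children) < POP - len(survivors):
        a, b = rng.choice(len(survivors), size=2, replace=False)
        cut = rng.integers(1, K)                        # one-point crossover
        child = np.concatenate([survivors[a][:cut], survivors[b][cut:]])
        if rng.random() < MUT:                          # mutation: replace one seed
            child[rng.integers(K)] = rng.integers(len(X))
        while len(np.unique(child)) < K:                # repair duplicate seeds
            child[rng.integers(K)] = rng.integers(len(X))
        children.append(child)
    pop = survivors + children

best = min(pop, key=fitness)
print("best inertia from GA-selected seeds:", round(fitness(best), 2))
```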
Abstract:
Stress relaxation testing is often utilised to determine whether athermal straining contributes to plastic flow: if the plastic strain rate is continuous across the transition from tension to relaxation, then plastic strain is fully thermally activated. This method was applied to an aged type 316 stainless steel tested in the temperature range 973–1123 K and to high-purity Al in the recrystallised, annealed condition tested in the temperature range 274–417 K. The results indicated that plastic strain is thermally activated in these materials at the corresponding test temperatures. For Al, because of its high strain rate sensitivity, it was necessary to adopt a back-extrapolation procedure to correct for the finite period that the crosshead requires to decelerate from the constant speed used during tension to a dead stop for stress relaxation.
Abstract:
Bayesian networks are compact, flexible, and interpretable representations of a joint distribution. When the network structure is unknown but observational data are at hand, one can try to learn the network structure; this is called structure discovery. This thesis contributes to two areas of structure discovery in Bayesian networks: space-time tradeoffs and learning ancestor relations. The fastest exact algorithms for structure discovery in Bayesian networks are based on dynamic programming and use excessive amounts of space. Motivated by the space usage, several schemes for trading space against time are presented. These schemes are presented in a general setting for a class of computational problems called permutation problems; structure discovery in Bayesian networks is seen as a challenging variant of the permutation problems. The main contribution in the area of space-time tradeoffs is the partial order approach, in which the standard dynamic programming algorithm is extended to run over partial orders. In particular, a certain family of partial orders called parallel bucket orders is considered, and a partial order scheme that provably yields an optimal space-time tradeoff within parallel bucket orders is presented. Practical issues concerning parallel bucket orders are also discussed. Learning ancestor relations, that is, directed paths between nodes, is motivated by the need for robust summaries of the network structures when there are unobserved nodes at work. Ancestor relations are nonmodular features, and hence learning them is more difficult than learning modular features. A dynamic programming algorithm is presented for computing posterior probabilities of ancestor relations exactly. Empirical tests suggest that ancestor relations can be learned from observational data almost as accurately as arcs, even in the presence of unobserved nodes.
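The exact dynamic programming that such algorithms build on can be sketched, for a handful of variables, as a recursion over variable subsets (in the style of Silander and Myllymäki); the Gaussian BIC local score and the synthetic data below are only for illustration, and the partial-order and ancestor-relation machinery of the thesis is not shown.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 500
# Synthetic data from a small linear-Gaussian network: 0 -> 2 <- 1, 2 -> 3, 4 isolated.
X = np.empty((m, n))
X[:, 0] = rng.normal(size=m)
X[:, 1] = rng.normal(size=m)
X[:, 2] = 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * rng.normal(size=m)
X[:, 3] = 0.9 * X[:, 2] + 0.3 * rng.normal(size=m)
X[:, 4] = rng.normal(size=m)

def bic(v, parents):
    """Gaussian BIC score (lower is better) of variable v given a parent set."""
    y = X[:, v]
    A = np.column_stack([X[:, list(parents)], np.ones(m)]) if parents else np.ones((m, 1))
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    rss = float(resid @ resid)
    return m * np.log(rss / m) + A.shape[1] * np.log(m)

full = range(n)
# best_ps[v][S]: best score of v over all parent sets contained in the bitmask S.
best_ps = [dict() for _ in full]
for v in full:
    others = [u for u in full if u != v]
    for size in range(len(others) + 1):
        for subset in itertools.combinations(others, size):
            S = sum(1 << u for u in subset)
            cand = bic(v, subset)
            for u in subset:                       # reuse already-computed smaller subsets
                cand = min(cand, best_ps[v][S & ~(1 << u)])
            best_ps[v][S] = cand

# best[S]: best total score of a network over the variable set S, built by
# repeatedly choosing a sink v whose parents lie in S \ {v}.
best = {0: 0.0}
for S in range(1, 1 << n):
    best[S] = min(best[S & ~(1 << v)] + best_ps[v][S & ~(1 << v)]
                  for v in full if S & (1 << v))
print("optimal network score over all 5 variables:", round(best[(1 << n) - 1], 1))
```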
Abstract:
It is observed that general explicit guidance schemes exhibit numerical instability close to the injection point. This difficulty is normally attributed to the demand for exact injection, which in turn calls for finite corrections to be enforced in a relatively short time. The deviations in vehicle state that need corrective maneuvers are caused by off-nominal operating conditions; hence, the onset of terminal instability depends on the type of off-nominal conditions encountered. The proposed separate terminal guidance scheme overcomes this difficulty by minimizing a quadratic penalty on injection errors rather than demanding exact injection. There is also a special requirement in the terminal phase for faster guidance computations, which permit more frequent guidance updates and thereby an accurate terminal thrust cutoff. The objective of faster computations is achieved in the terminal guidance scheme by employing realistic assumptions that are accurate enough for a short terminal trajectory. It is observed from simulations that one of the guidance parameters (P), related to the thrust steering angular rates, can indicate the onset of terminal instability due to different off-nominal operating conditions. Therefore, the terminal guidance scheme can be invoked dynamically based on monitoring of deviations in the lone parameter P.
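A schematic of the quadratic-penalty idea only, not the paper's terminal guidance law: near injection the correction is chosen to minimize a weighted quadratic penalty on the predicted injection errors (plus a small control-effort term) instead of forcing the errors exactly to zero, which stays well behaved even when the sensitivity matrix becomes nearly singular near cutoff. The sensitivity matrix, weights, and regularization below are hypothetical numbers.

```python
import numpy as np

# Hypothetical linearized model near injection: predicted injection errors
# e(u) = e0 + S u for a small correction u of the steering parameters.
e0 = np.array([120.0, -35.0, 4.0])       # current predicted errors (mixed units)
S = np.array([[0.9, 0.1, 0.00],          # sensitivity of errors to the correction
              [0.2, 1.1, 0.30],
              [1.1, 1.2, 0.31]])          # nearly dependent rows near cutoff
W = np.diag([1.0, 1.0, 10.0])             # penalty weights on the injection errors
R = 1e-3 * np.eye(3)                      # small weight on control effort

# Exact-injection approach: solve S u = -e0 directly (ill conditioned near cutoff).
u_exact = np.linalg.solve(S, -e0)

# Quadratic-penalty approach: minimize e^T W e + u^T R u, a regularized
# weighted least-squares problem with a closed-form solution.
u_pen = np.linalg.solve(S.T @ W @ S + R, -S.T @ W @ e0)

for name, u in [("exact injection", u_exact), ("quadratic penalty", u_pen)]:
    e = e0 + S @ u
    print(f"{name}: |u| = {np.linalg.norm(u):10.2f}, residual errors = {e.round(3)}")
```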
Abstract:
We consider the problem of optimally scheduling a processor that executes a multilayer protocol in an intelligent Network Interface Controller (NIC). In particular, we assume a typical LAN environment with a class 4 transport service, a connectionless network service, and a class 1 link-level protocol. We develop a queuing model for the problem. In the most general case this becomes a cyclic queuing network in which some queues have dedicated servers and the others share a common schedulable server. We use sample path arguments and Markov decision theory to determine optimal service schedules. The optimal throughputs are compared with those obtained with simple policies. The optimal policy yields up to a 25% improvement in some cases; in other cases, it does only slightly better than much simpler policies.
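The cyclic queueing network of the paper is not modeled here; the sketch below only illustrates the Markov-decision-theory machinery on a toy uniformized model in which one schedulable server decides, state by state, which of two queues to serve so as to minimize discounted holding cost. Arrival and service rates, buffer size, and the discount factor are arbitrary.

```python
import itertools

lam = (0.25, 0.20)      # arrival rates to the two queues
mu = (0.60, 0.50)       # service rate when the schedulable server works on queue 0 or 1
N = 10                  # per-queue buffer size (the state space is truncated)
beta = 0.95             # discount factor for value iteration
Lambda = sum(lam) + max(mu)            # uniformization constant

states = list(itertools.product(range(N + 1), repeat=2))
V = {s: 0.0 for s in states}

def q_value(s, a, V):
    """One-step cost plus discounted expected value when serving queue a in state s;
    the transition probabilities come from uniformizing the continuous-time chain."""
    cost = s[0] + s[1]                 # holding cost: number of jobs in the system
    val = 0.0
    for q in (0, 1):                   # arrivals (blocked when the buffer is full)
        ns = (min(N, s[0] + 1), s[1]) if q == 0 else (s[0], min(N, s[1] + 1))
        val += (lam[q] / Lambda) * V[ns]
    rate = mu[a] if s[a] > 0 else 0.0  # service completion at the chosen queue
    if s[a] > 0:
        ns = (s[0] - 1, s[1]) if a == 0 else (s[0], s[1] - 1)
        val += (rate / Lambda) * V[ns]
    val += ((Lambda - lam[0] - lam[1] - rate) / Lambda) * V[s]   # fictitious self-loop
    return cost + beta * val

for _ in range(300):                   # value iteration
    V = {s: min(q_value(s, 0, V), q_value(s, 1, V)) for s in states}

policy = {s: int(q_value(s, 1, V) < q_value(s, 0, V)) for s in states}
print("queue chosen by the computed policy in a few states:",
      {s: policy[s] for s in [(1, 1), (4, 1), (1, 4), (5, 5)]})
```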
Abstract:
Maintaining quantum coherence is a crucial requirement for quantum computation; hence protecting quantum systems against their irreversible corruption due to environmental noise is an important open problem. Dynamical decoupling (DD) is an effective method for reducing decoherence with a low control overhead. It also plays an important role in quantum metrology, where, for instance, it is employed in multiparameter estimation. While a sequence of equidistant control pulses [the Carr-Purcell-Meiboom-Gill (CPMG) sequence] has been ubiquitously used for decoupling, Uhrig recently proposed that a nonequidistant pulse sequence [the Uhrig dynamic decoupling (UDD) sequence] may enhance DD performance, especially for systems where the spectral density of the environment has a sharp frequency cutoff. On the other hand, equidistant sequences outperform UDD for soft cutoffs. The relative advantage provided by UDD for intermediate regimes is not clear. In this paper, we analyze the relative DD performance in this regime experimentally, using solid-state nuclear magnetic resonance. Our system qubits are C-13 nuclear spins and the environment consists of a H-1 nuclear spin bath whose spectral density is close to a normal (Gaussian) distribution. We find that in the presence of such a bath, the CPMG sequence outperforms the UDD sequence. An analogy between dynamical decoupling and interference effects in optics provides an intuitive explanation as to why the CPMG sequence performs better than any nonequidistant DD sequence in the presence of this kind of environmental noise.
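A numerical sketch of the comparison in the filter-function picture (not the NMR experiment): CPMG and UDD pulse timings are plugged into the standard dephasing filter function and integrated against a Gaussian (soft-cutoff) spectral density; the bath width, total time, and pulse number are arbitrary, and the attenuation is computed only up to a convention-dependent prefactor, which cancels in the CPMG/UDD comparison.

```python
import numpy as np

def filter_function(delta, wt):
    """Standard dephasing filter function for n ideal pi pulses applied at
    fractional times delta_j of the total evolution time (Cywinski-style form)."""
    n = len(delta)
    f = 1.0 + (-1) ** (n + 1) * np.exp(1j * wt)
    for j, d in enumerate(delta, start=1):
        f = f + 2.0 * (-1) ** j * np.exp(1j * wt * d)
    return np.abs(f) ** 2

def cpmg_times(n):   # equidistant pulses
    return np.array([(j - 0.5) / n for j in range(1, n + 1)])

def udd_times(n):    # Uhrig's nonequidistant pulse times
    return np.array([np.sin(np.pi * j / (2 * n + 2)) ** 2 for j in range(1, n + 1)])

def spectral_density(w, sigma=1.0):
    """Soft-cutoff (Gaussian) bath spectrum, qualitatively like a nuclear spin bath."""
    return np.exp(-w ** 2 / (2 * sigma ** 2))

def attenuation(delta, T, wmax=20.0, num=20000):
    """Decoherence exponent chi(T), up to a convention-dependent prefactor."""
    w = np.linspace(1e-4, wmax, num)
    integrand = spectral_density(w) * filter_function(delta, w * T) / w ** 2
    return float(np.sum(integrand) * (w[1] - w[0]))

n_pulses, T = 8, 10.0
chi_cpmg = attenuation(cpmg_times(n_pulses), T)
chi_udd = attenuation(udd_times(n_pulses), T)
print(f"chi_CPMG / chi_UDD = {chi_cpmg / chi_udd:.3f}  (smaller chi = better decoupling)")
```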
Abstract:
It is possible to sample signals at a sub-Nyquist rate and still reconstruct them with reasonable accuracy provided they exhibit local Fourier sparsity. The underdetermined systems of equations that arise from undersampling can be solved to yield sparse solutions using compressed sensing algorithms. In this paper, we propose a framework for real-time sampling of multiple analog channels with a single A/D converter, achieving a higher effective sampling rate. Signal reconstruction from noisy measurements on two different synthetic signals is presented, and a scheme for implementing the algorithm in hardware is also suggested.
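A generic compressed-sensing reconstruction sketch, not the paper's multichannel hardware scheme: a signal that is sparse in the DFT basis is observed at randomly chosen sub-Nyquist time instants and reconstructed with orthogonal matching pursuit; the signal, the number of measurements, and the sparsity level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 3                     # signal length, measurements, number of tones

# Fourier-sparse test signal: a sum of K cosines at distinct integer frequencies.
t = np.arange(N)
freqs = rng.choice(np.arange(5, 100), size=K, replace=False)
x = sum(np.cos(2 * np.pi * f * t / N) for f in freqs)

# Sub-Nyquist sampling: keep only M randomly chosen time samples, with a little noise.
keep = np.sort(rng.choice(N, size=M, replace=False))
y = x[keep] + 0.01 * rng.normal(size=M)

B = np.fft.ifft(np.eye(N), axis=0)       # columns are the DFT synthesis vectors
A = B[keep, :]                            # measurement matrix: synthesis rows at kept times

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms, refit by least squares."""
    resid, support = y.astype(complex), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y.astype(complex), rcond=None)
        resid = y - A[:, support] @ coef
    z = np.zeros(A.shape[1], dtype=complex)
    z[support] = coef
    return z

z_hat = omp(A, y, 2 * K)                 # each real cosine occupies two DFT bins
x_hat = np.real(B @ z_hat)
print("relative reconstruction error:",
      round(float(np.linalg.norm(x - x_hat) / np.linalg.norm(x)), 4))
```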