894 results for NOMINAL INTEREST-RATES
Abstract:
Longline hook rates of bigeye and yellowfin tunas in the eastern Pacific Ocean were standardized by maximum depth of fishing, area, and season, using generalized linear models (GLMs). The annual trends of the standardized hook rates differ from those of the unstandardized rates, and are more likely to represent the changes in abundance of tunas in the age groups most vulnerable to longliners in the fishing grounds. For both species, all of the interactions in the GLMs involving years, depths of fishing, areas, and seasons were significant. This means that the annual trends in hook rates depend on which depths, areas, and seasons are being considered. The overall average hook rates for each species were estimated by weighting each 5-degree quadrangle equally and each season by the number of months in it. Since the annual trends in hook rates for each fishing depth category are roughly the same for bigeye, total average annual hook rate estimates are possible with the GLM. For yellowfin, the situation is less clear because of a preponderance of empty cells in the model. The full models explained 55% of the variation in bigeye hook rate and 33% of that of yellowfin. (PDF contains 19 pages.)
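A minimal sketch of the kind of GLM standardization described above, using Python/statsmodels with hypothetical column names (hook_rate, year, depth_cat, area, season); the bulletin's exact model form, link, and error family are not restated here:

```python
# Hedged sketch of hook-rate standardization with a GLM; the data file
# and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("longline_sets.csv")  # hypothetical per-set records

model = smf.glm(
    "hook_rate ~ C(year) + C(depth_cat) + C(area) + C(season)",
    data=df,
    family=sm.families.Gaussian(),  # assumption; the paper's family is not given here
).fit()

# The standardized annual trend would then be read off the year effects,
# with predictions averaged so each 5-degree quadrangle is weighted equally
# and each season by its number of months, as the abstract describes.
print(model.params.filter(like="year"))
```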
Abstract:
Increments in otoliths (sagittae) were examined, using light and scanning electron microscopy, to determine ages and estimate growth rates of larval and early-juvenile black skipjack, Euthynnus lineatus. Larvae and juveniles were collected between 1987 and 1989 from coastal waters of Panama in the eastern Pacific Ocean. Results from a laboratory experiment indicated that immersion for 6 and 12 hours in a 200 mg/L solution of tetracycline hydrochloride adequately marks otoliths and that increments are formed daily in the sagittae of postflexion larvae and early juveniles. Further, survival rates of tetracycline-treated fish were not significantly different from those of control fish. Growth rates were derived from length-age relationships of 218 field-collected specimens ranging in size from 5.7 to 20.3 mm SL. A growth rate of 0.70 mm/d was estimated from the weighted regression of standard length on age for all specimens. This rate lies within the range reported for larvae and early juveniles of other species of subtropical and tropical scombrids. Growth rates of postflexion larvae and early juveniles were not significantly different between the rainy season in July-August 1988 and the dry, upwelling season in January-February 1989. Growth was, however, significantly more variable for older individuals in July-August than in January-February, and may correspond, in part, to seasonal patchiness of prey. The growth rates of the otoliths relative to fish length were also not significantly different between seasons; however, the otoliths were larger relative to the lengths of fish collected in the rainy season, which may reflect slower growth during earlier larval stages. (PDF contains 42 pages.)
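The growth-rate estimate comes from a weighted regression of standard length on age; a minimal sketch of that calculation follows, with hypothetical column names and weights (the paper's weighting scheme is not restated here):

```python
# Hedged sketch: weighted least-squares fit of standard length (mm) on
# age (days); the slope estimates the growth rate in mm/d. The data file,
# column names, and choice of weights are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("otolith_ages.csv")  # hypothetical: columns sl_mm, age_d, w

fit = smf.wls("sl_mm ~ age_d", data=df, weights=df["w"]).fit()
print(f"growth rate: {fit.params['age_d']:.2f} mm/d")  # abstract reports 0.70 mm/d
```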
Abstract:
Trawl surveys to assess the stocks of Lake Victoria (Tanzania), estimate biomass and yield, and establish exploitation patterns are being undertaken under the Lake Victoria Fisheries Research Project. Preliminary surveys to establish the sampling stations and strategy were carried out between October 1997 and February 1998. Three cruises covering the whole of Tanzanian waters were undertaken, with 133 sampling stations. Data on catch rates, species composition, and distribution were collected. Three sampling areas were designated: areas A, B, and C. In each area, almost the same distribution pattern over depth was found. Lates niloticus (L.) formed over 90% of the total catch. Most L. niloticus were from 5-40 cm TL. Abundance decreased with depth; few fish were found deeper than 40 m, and most fish were caught at depths of less than 20 m. Catch rates varied considerably between stations and areas. Area A had the highest catch rates, with little variation over the stations. There is an indication of recovery of species diversity compared with the surveys of RV Kiboko (1985 and 1989).
Abstract:
The October meeting of the ACFM of ICES gave advice for a number of North Atlantic fish stocks. The results for the most important stocks are given here from the perspective of German fishery management. These are chiefly North Sea plaice and sole, for which a reduction of 25% in fishing mortality (F) is recommended for 1998, and North Sea saithe (minus 20% in F); North Sea cod is in the process of recovery, and North Sea haddock is within safe biological limits. The mackerel stock of the North Sea has not yet recovered, while the western mackerel stock as an entity has stabilised at a level of about 2.3 million t.
Abstract:
Over the past decade, scholarly interest concerning the use of limitations to constrain government spending and taxing has noticeably increased. The call for constitutional restrictions can be credited, in part, to Washington's apparent inability to legislate any significant reductions in government expenditures or in the size of the national debt. At the present time, the federal government is far from instituting any constitutional limitations on spending or borrowing; however, the states have incorporated many controls on revenues and expenditures, the oldest being strictures on full faith and credit borrowing. This dissertation examines the efficacy of these restrictions on borrowing across the states (excluding Alaska) for the period from 1961 to 1990 and also studies the limitations on taxing and spending associated with the Tax Revolt.
We include socio-economic information in our calculations to control for factors other than the institutional variables that affect state borrowing levels. Our results show that certain constitutional restrictions (in particular, the referendum requirement and the dollar debt limit) are more effective than others. The apparent ineffectiveness of other limitations, such as the flexible debt limit, seems related to the bindingness of the limitations in at least half of the cases. Other variables, such as crime rates, the number of school-age children, and state personal income, do affect the levels of full faith and credit debt, but not as strongly as the limitations. While some degree of circumvention can be detected (the amount of full faith and credit debt does inversely affect the levels of nonguaranteed debt), it is so small when compared to the effectiveness of the constitutional restrictions that it is almost negligible. The examination of the Tax Revolt era limitations yielded quite similar conclusions, with the additional finding that constitutional restrictions appear more binding than statutory ones. Our research demonstrates that constitutional limitations on borrowing can be applied effectively to constrain excessive borrowing, but caution must be used: the efficacy of these restrictions decreases dramatically as the number of loopholes increases.
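A minimal sketch of the kind of cross-state regression described above, with entirely hypothetical variable names (debt per capita regressed on institutional dummies plus socio-economic controls); the dissertation's actual specification is not reproduced here:

```python
# Hedged sketch: regress state full-faith-and-credit debt on institutional
# limits plus socio-economic controls, as the abstract describes.
# The data file and variable names (debt_pc, referendum_req, ...) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_debt_panel.csv")  # hypothetical panel: 49 states, 1961-1990

model = smf.ols(
    "debt_pc ~ referendum_req + dollar_limit + flexible_limit"
    " + crime_rate + schoolage_children + personal_income",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(model.summary())
```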
Abstract:
In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected, despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings. This model improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test this theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab. This study provides clues to the prevalence of extreme-valued ratings in field implementations. We show theoretically that in equilibrium, ratings distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is realized in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, they are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency despite the fact that such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than does a theory of rational, Bayesian behavior, accurately predicting key comparative statics. However, the theories fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
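A toy simulation, not the authors' model, illustrating the participation effect described above: if raters whose valuations sit near the prior mean abstain (because rating is costly), the submitted ratings over-represent extreme valuations. All parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
true_values = rng.normal(0.0, 1.0, size=10_000)  # agents' valuations of the product
prior_mean = 0.0
cost_threshold = 0.8  # hypothetical: rate only if |valuation - prior| exceeds this

participates = np.abs(true_values - prior_mean) > cost_threshold
ratings = true_values[participates]

print(f"participation rate: {participates.mean():.2f}")
print(f"mean |rating| among raters: {np.abs(ratings).mean():.2f} "
      f"vs mean |valuation| overall: {np.abs(true_values).mean():.2f}")
# The submitted ratings are more extreme on average than the underlying
# valuations, so the aggregated distribution misrepresents sincere values.
```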
Abstract:
Nuclear weak interaction rates, including electron and positron emission rates and continuum electron and positron capture rates, as well as the associated neutrino (ν) and antineutrino (ν̄) energy-loss rates, are calculated on a detailed grid of temperature and density for the free nucleons and 226 nuclei with masses between A = 21 and 60. Gamow-Teller and Fermi discrete-state transition matrix element systematics and the Gamow-Teller T< ↔ T> resonance transitions are discussed in depth and are implemented in the stellar rate calculations. Results of the calculations are presented on an abbreviated grid of temperature and density, and comparison is made to terrestrial weak transition rates where possible. Neutron shell blocking of allowed electron capture on heavy nuclei during stellar core collapse is discussed, along with several unblocking mechanisms operative at high temperature and density. The results of one-zone collapse calculations are presented which suggest that the effect of neutron shell blocking is to produce a larger core lepton fraction at neutrino trapping, which leads to a larger inner-core mass and hence a stronger post-bounce shock.
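For orientation, the continuum electron-capture rate for an allowed transition is conventionally written as a phase-space integral of the following general form; this is a standard textbook (FFN-style) expression, not quoted from the paper itself:

```latex
% Standard form for an allowed continuum electron-capture rate
% (generic textbook expression, not reproduced from the paper):
\lambda_{ec} \;=\; \frac{\ln 2}{ft}\,
  \int_{w_{\min}}^{\infty} w\, p\, (q + w)^{2}\, F(Z, w)\, S_e(w)\, dw
```

Here w is the electron energy in units of m_e c², p = (w² − 1)^{1/2}, q is the transition energy, F(Z, w) is the Coulomb correction factor, and S_e is the Fermi-Dirac distribution of the stellar electron gas at the given temperature and density, which is why the rates must be tabulated on a grid of both variables.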
Abstract:
This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for symmetric positive-semidefinite (SPSD) matrices.
Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.
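A minimal sketch of one common nonuniform sparsification scheme (keep each entry with probability proportional to its squared magnitude and rescale kept entries so the sparsifier is unbiased); the thesis's specific schemes may differ:

```python
import numpy as np

def sparsify(A: np.ndarray, budget: int, rng=np.random.default_rng()) -> np.ndarray:
    """Nonuniform randomized sparsification (a common scheme, not
    necessarily the thesis's): keep entry (i, j) with probability p_ij
    proportional to A_ij**2, capped at 1, and rescale kept entries by
    1/p_ij so that E[S] = A."""
    p = A**2 / np.sum(A**2)              # sampling probabilities
    p = np.minimum(1.0, budget * p)      # expected number of kept entries ~ budget
    keep = rng.random(A.shape) < p
    S = np.zeros_like(A)
    S[keep] = A[keep] / p[keep]          # unbiased rescaling
    return S
```

The returned matrix has roughly `budget` nonzeros, so downstream computations can use sparse kernels while the spectral-norm error stays controlled by bounds of the kind the thesis derives.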
Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided.
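As a concrete illustration, one common transform-based scheme applies a subsampled randomized Fourier transform (SRFT) to mix the columns cheaply before subsampling, then projects onto the range of the sketch. This is a generic sketch of the approach, not the thesis's exact algorithm:

```python
import numpy as np

def srft_lowrank(A: np.ndarray, k: int, oversample: int = 10,
                 rng=np.random.default_rng()):
    """Rank-(k + oversample) approximation via an SRFT-style sketch
    (generic scheme; the thesis's variants may differ). The FFT makes
    the sketch cheap, and random signs plus subsampling mix the columns."""
    m, n = A.shape
    ell = k + oversample
    d = rng.choice([-1.0, 1.0], size=n)            # random signs (diagonal D)
    Y = np.fft.fft(A * d, axis=1)                  # mix columns: A @ D @ F
    cols = rng.choice(n, size=ell, replace=False)  # subsample ell columns
    Y = Y[:, cols]
    Q, _ = np.linalg.qr(Y)                         # orthonormal range basis
    B = Q.conj().T @ A                             # project A onto that range
    return Q, B                                    # A ~= Q @ B, rank <= ell
```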
The last class of algorithms considered comprises SPSD "sketching" algorithms. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds.
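One widely used SPSD sketch of this type is the Nyström-style approximation A ≈ (AS)(SᵀAS)⁺(AS)ᵀ for a sketching matrix S. A minimal sketch follows, with a Gaussian S chosen purely for illustration; column sampling or an SRFT would serve equally well:

```python
import numpy as np

def spsd_sketch(A: np.ndarray, ell: int, rng=np.random.default_rng()):
    """Nystrom-style SPSD sketch A ~= (A S)(S^T A S)^+ (A S)^T, with a
    Gaussian sketching matrix S used here for illustration only."""
    n = A.shape[0]
    S = rng.standard_normal((n, ell))
    Y = A @ S                               # sketch of the range of A
    W = S.T @ Y                             # small ell x ell core, = S^T A S
    return Y @ np.linalg.pinv(W) @ Y.T      # low-rank SPSD approximation
```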
In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.
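For context, the classical matrix Chernoff bound for the largest eigenvalue, which frameworks of this kind extend to the interior eigenvalues, has the following familiar (Tropp-style) form; this is a standard result quoted for orientation, not taken from the thesis:

```latex
% Standard matrix Chernoff bound for the maximum eigenvalue; the thesis
% derives analogous bounds for all eigenvalues.
\Pr\Big\{ \lambda_{\max}\Big(\textstyle\sum_i X_i\Big) \ge (1+\delta)\,\mu_{\max} \Big\}
  \;\le\; d \left[ \frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right]^{\mu_{\max}/R},
  \qquad \delta \ge 0
```

for independent random positive-semidefinite matrices X_i of dimension d with λ_max(X_i) ≤ R almost surely, where μ_max = λ_max(Σ_i E X_i).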
Abstract:
In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies, which do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance and no information about the system's probable performance, which would be of interest to civil engineers.
The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
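The weighting-and-integration step described above has the generic form below (the notation is mine, not the thesis's): the probable performance is a model-averaged failure probability, approximated asymptotically by expanding the integrand about its peak.

```latex
% Model-averaged ("probable") performance over the set of possible models,
% in generic notation; theta parametrizes the models in class M:
P(F \mid \mathcal{M}) \;=\; \int_{\Theta} P(F \mid \theta)\,
  p(\theta \mid \mathcal{M})\, d\theta
```

When response data D become available, Bayes's Theorem updates the model probabilities, p(θ | D, M) ∝ p(D | θ, M) p(θ | M), and the same integral is re-evaluated with the updated distribution.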
The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers for the same benchmark system based on other approaches. The second application is to the Caltech Flexible Structure, a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system, and differ from traditional methods, in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort, and examples are presented for which the accuracy of the proposed approximations compares favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
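For reference, the stationary Fokker-Planck equation whose solutions are being approximated has the standard form below (generic notation, not the thesis's):

```latex
% Stationary Fokker-Planck equation for the state PDF p(x) of a diffusion
% dx = a(x) dt + b(x) dW (standard form; notation is generic):
0 \;=\; -\sum_i \frac{\partial}{\partial x_i}\big[a_i(x)\, p(x)\big]
  \;+\; \frac{1}{2} \sum_{i,j} \frac{\partial^{2}}{\partial x_i \partial x_j}
  \big[(b\, b^{\mathsf T})_{ij}(x)\, p(x)\big]
```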
Laplace's method of asymptotic approximation is applied to approximate the probability integrals that arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates of systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations, and results are compared with existing approximations.
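Laplace's method as used here rests on the standard expansion below (generic notation): for a smooth function g with an interior minimizer θ* and Hessian H(θ*),

```latex
% Laplace's asymptotic approximation of a multidimensional integral
% (standard form; theta* minimizes g, H is the Hessian of g at theta*):
\int_{\mathbb{R}^{n}} e^{-g(\theta)}\, d\theta
  \;\approx\; (2\pi)^{n/2}\, \big|\det H(\theta^{*})\big|^{-1/2}\, e^{-g(\theta^{*})}
```

which replaces the multidimensional integration with a single minimization of g, exactly the reduction the abstract describes.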