989 results for Technical reports
Abstract:
Enclosed is a bibliography of 556 published articles, technical reports, theses, dissertations, and books that form the basis for a conceptual model of salt marsh management on Merritt Island, Florida (Section 1). A copy of each item is available on file at the Florida Cooperative Fish and Wildlife Research Unit, Gainesville. Some relevant proprietary items and unpublished drafts have not been included pending the authors' permission. We will continue to add pertinent references to our bibliography and files. Currently, some topics are represented by very few items. As our synthesis develops, we will be able to indicate a subset of papers most pertinent to an understanding of the ecology and management of Merritt Island salt marshes. (98-page document)
Abstract:
(287-page document)
Abstract:
Executive Summary: The information in this report covers the years 1986 through 2005. Mussel Watch began by monitoring a suite of trace metals and organic contaminants such as DDT, PCBs, and PAHs; additional chemicals were added over time, and today approximately 140 analytes are monitored. The Mussel Watch Program is the longest-running estuarine and coastal pollutant monitoring effort in the United States that is national in scope. Hundreds of scientific journal articles and technical reports based on Mussel Watch data have been written; however, this report is the first to present local, regional, and national findings across all years in a Quick Reference format suitable for policy makers, scientists, resource managers, and the general public. Pollution often starts at the local scale, where high concentrations point to a specific source of contamination, yet some contaminants, such as PCBs, are transported atmospherically across regional and national scales, resulting in contamination far from its origin. The findings presented here show few national trends for trace metals and decreasing trends for most organic contaminants; however, a wide variety of trends, both increasing and decreasing, emerges at regional and local levels. For most organic contaminants, these trends are the result of state and federal regulation. The highest concentrations of both metal and organic contaminants are found near urban and industrial areas. In addition to monitoring the nation's coastal shores and Great Lakes, Mussel Watch stores samples in a specimen bank so that trends for new and emerging contaminants of concern can be determined retrospectively. For example, there is heightened awareness of a group of flame retardants, known as polybrominated diphenyl ethers (PBDEs), that are finding their way into the marine environment; these compounds are now being studied using historic samples from the specimen bank and current samples to determine their spatial distribution. We will continue to use this kind of investigation to assess new contaminant threats. We hope you find this document valuable, and that you continue to look to the Mussel Watch Program for information on the condition of your coastal waters. (PDF contains 118 pages)
Abstract:
This is only the table of contents for a series of technical reports produced from 1975 to 1978. The papers were prepared under contract to BLM by a number of universities and consulting firms, including Science Applications, Inc., the University of Southern California, Scripps Institution of Oceanography, Moss Landing Marine Laboratories, and various campuses of the University of California and California State University. (PDF contains 36 pages)
Abstract:
This report carries out a technical assessment of a dwelling. To that end, the necessary information about the house was gathered, and its heat losses were quantified using the CE3X program. Several improvements were then developed to minimize those heat losses, and the legislation in force in the European Union and in Spain was reviewed to determine whether financial aid is available for carrying them out. With these improvements, the energy rating rises from F to C. Finally, a technical report and a budget were prepared for each improvement.
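To make the scale of such an assessment concrete, here is a minimal sketch of the transmission heat-loss calculation that tools like CE3X perform in far greater detail. The formula Q = U · A · ΔT is standard building physics, but the element names, U-values, and areas below are invented for illustration.

```python
# Illustrative transmission heat-loss calculation (not CE3X itself):
# Q = U * A * dT per envelope element, U in W/(m^2*K), A in m^2, dT in K.
# All element values below are invented for the example.
elements = [
    ("facade wall", 1.4, 95.0),
    ("roof",        0.9, 60.0),
    ("windows",     3.1, 18.0),
]
dT = 20.0  # assumed indoor-outdoor temperature difference, K

for name, U, area in elements:
    print(f"{name}: {U * area * dT:.0f} W")
print("total:", sum(U * a * dT for _, U, a in elements), "W")
```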
Abstract:
This paper analyses the biometric challenges in fisheries research, drawing on a cross-section of activities in the core research areas, annual reports, technical reports, dissertations, and field visits, and provides insight into the development of possible remedies to these challenges.
Abstract:
Quadsim is an intermediate-code simulator. It allows you to "run" programs that your compiler generates in intermediate-code format. Its user interface is similar to that of most debuggers: you can step through your program instruction by instruction, set breakpoints, examine variable values, and so on. The intermediate-code format used by Quadsim is the one described in [Aho 86]. If your compiler generates intermediate code in this format, you will be able to take the intermediate-code files it generates, load them into the simulator, and watch them "run." You are provided with functions that hide the internal representation of the intermediate code; you can use these functions within your compiler to generate intermediate-code files that the simulator can read. Quadsim was inspired and greatly influenced by [Aho 86]. The material in Chapter 8 (Intermediate Code Generation) of [Aho 86] should be considered background material for users of Quadsim.
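For readers unfamiliar with the format, the sketch below illustrates the quadruple ("quad") intermediate representation described in [Aho 86], where each instruction is a tuple (op, arg1, arg2, result). The Quad type, the emit helper, and the printed layout are hypothetical; they are not Quadsim's actual interface or file format.

```python
# A minimal sketch of the quadruple format from [Aho 86]; the names and
# layout here are illustrative, not Quadsim's actual emit functions.
from dataclasses import dataclass

@dataclass
class Quad:
    op: str      # operator, e.g. "+", "*", ":=", "goto"
    arg1: str    # first operand (name, constant, or temporary)
    arg2: str    # second operand, or "" when unused
    result: str  # destination temporary/name, or branch target

def emit(quads, op, arg1, arg2, result):
    """Append one quadruple to the program being generated."""
    quads.append(Quad(op, arg1, arg2, result))

# Intermediate code for: a := b * c + d
quads = []
emit(quads, "*", "b", "c", "t1")
emit(quads, "+", "t1", "d", "t2")
emit(quads, ":=", "t2", "", "a")

for i, q in enumerate(quads):
    print(i, q.op, q.arg1, q.arg2, q.result)
```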
Abstract:
Speculative Concurrency Control (SCC) [Best92a] is a new concurrency control approach especially suited to real-time database applications. It relies on the use of redundancy to ensure that serializable schedules are discovered and adopted as early as possible, thus increasing the likelihood of the timely commitment of transactions with strict timing constraints. [Best92b] described SCC-nS, a generic algorithm that characterizes a family of SCC-based algorithms, and established its correctness by showing that it admits only serializable histories. In this paper, we evaluate the performance of the Two-Shadow SCC algorithm (SCC-2S), a member of the SCC-nS family that is notable for its minimal use of redundancy. In particular, we show that SCC-2S (as a representative of SCC-based algorithms) provides significant performance gains over the widely used Optimistic Concurrency Control with Broadcast Commit (OCC-BC) under a variety of operating conditions and workloads.
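The following toy sketch conveys the two-shadow idea in skeletal form: an optimistic shadow reads freely, and on a detected read-write conflict a speculative shadow is forked that can be adopted if the conflicting writer commits, avoiding a restart from scratch. The class, method names, and bookkeeping are invented for illustration and omit the full protocol (validation, commit ordering, timing constraints).

```python
# Toy sketch of the two-shadow idea behind SCC-2S; names are illustrative.
class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.optimistic = {"reads": set(), "state": "running"}
        self.speculative = None   # forked lazily on first conflict

    def on_conflict(self, item, writer_tid):
        """Another transaction wrote an item this transaction has read."""
        if self.speculative is None:
            # Fork a second shadow that excludes the possibly stale read
            # and blocks until the writer resolves.
            self.speculative = {
                "reads": self.optimistic["reads"] - {item},
                "state": f"blocked-on-{writer_tid}",
            }

    def on_writer_commit(self):
        """The conflicting writer committed: adopt the speculative shadow."""
        if self.speculative is not None:
            self.optimistic = self.speculative  # promote; resume from here
            self.speculative = None

    def on_writer_abort(self):
        """The conflict vanished: discard the speculative shadow."""
        self.speculative = None

t = Transaction(1)
t.optimistic["reads"].add("x")
t.on_conflict("x", writer_tid=2)
t.on_writer_commit()
print(t.optimistic)   # resumed from the speculative shadow, without "x"
```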
Abstract:
Swiss National Science Foundation; Austrian Federal Ministry of Science and Research; Deutsche Forschungsgemeinschaft (SFB 314); Christ Church, Oxford; Oxford University Computing Laboratory
Abstract:
The proliferation of inexpensive workstations and networks has prompted several researchers to use such distributed systems for parallel computing. Attempts have been made to offer a shared-memory programming model on such distributed-memory computers. Most systems provide a shared memory that is coherent, in that all processes that use it agree on the order of all memory events. This dissertation explores the possibility of a significant improvement in the performance of some applications when they use non-coherent memory. First, a new formal model to describe existing non-coherent memories is developed. I use this model to prove that certain problems can be solved using asynchronous iterative algorithms on shared memory in which the coherence constraints are substantially relaxed. In the course of developing the model, I discovered a new type of non-coherent behavior called Local Consistency. Second, a programming model, Mermera, is proposed. It provides programmers with a choice of hierarchically related non-coherent behaviors along with one coherent behavior; thus, one can trade off the ease of programming with coherent memory against the improved performance of non-coherent memory. As an example, I present a program that solves a linear system of equations using an asynchronous iterative algorithm. This program uses all the behaviors offered by Mermera. Third, I describe the implementation of Mermera on a BBN Butterfly TC2000 and on a network of workstations. The performance of a version of the equation-solving program that uses all the behaviors of Mermera is compared with that of a version that uses coherent behavior only. For a system of 1000 equations, the former exhibits at least a 5-fold improvement in convergence time over the latter. The version using coherent behavior only does not benefit from employing more than one workstation to solve the problem, while the program using non-coherent behavior continues to achieve improved performance as the number of workstations is increased from 1 to 6. This measurement corroborates our belief that non-coherent shared memory can be a performance boon for some applications.
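As a minimal sketch of the kind of computation involved, the following asynchronous Jacobi solver lets some unknowns lag behind others within a sweep, loosely mimicking reads from a relaxed memory. It is illustrative only and does not use Mermera's interface; the staleness pattern here is simulated by updating a random subset of rows per sweep.

```python
# Sketch of an asynchronous iterative solver for Ax = b (Jacobi-style).
# For diagonally dominant A, such "chaotic" iteration converges even when
# updates use stale values, which is why relaxed coherence suffices.
import random

def async_jacobi(A, b, sweeps=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        rows = random.sample(range(n), k=max(1, n // 2))  # some rows lag
        for i in rows:
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant test system; exact solution is [1.0, 1.0, 1.0].
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
print(async_jacobi(A, b))
```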
Abstract:
Coherent shared memory is a convenient, but inefficient, method of inter-process communication for parallel programs. By contrast, message passing can be less convenient, but more efficient. To get the benefits of both models, several non-coherent memory behaviors have recently been proposed in the literature. We present an implementation of Mermera, a shared memory system that supports both coherent and non-coherent behaviors in a manner that enables programmers to mix multiple behaviors in the same program [HS93]. A programmer can debug a Mermera program using coherent memory, and then improve its performance by selectively reducing the level of coherence in the parts that are critical to performance. Mermera thus permits a trade-off of coherence for performance. We analyze this trade-off through measurements of our implementation, and through an example that illustrates the style of programming needed to exploit non-coherence. We find that, even on a small network of workstations, the performance advantage of non-coherence is compelling: raw non-coherent memory operations perform 20-40 times better than coherent memory operations, and an example application program runs 5-11 times faster when permitted to exploit non-coherence. We conclude by commenting on our use of the Isis Toolkit of multicast protocols in implementing Mermera.
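The sketch below illustrates the programming style described: debug with coherent writes, then selectively weaken coherence on the performance-critical path and make the buffered writes visible at synchronization points. The Memory class and its operations are invented for illustration and do not reflect Mermera's actual API.

```python
# Hypothetical sketch of mixing coherence levels in one program.
class Memory:
    def __init__(self):
        self.cells = {}
        self.pending = []   # writes issued under the weak behavior

    def write(self, addr, value, coherent=True):
        if coherent:
            self.cells[addr] = value            # visible immediately, ordered
        else:
            self.pending.append((addr, value))  # cheap; may be observed late

    def flush(self):
        """Synchronization point: make all weak writes visible."""
        for addr, value in self.pending:
            self.cells[addr] = value
        self.pending.clear()

mem = Memory()
mem.write("x", 1)                              # coherent: all agree on order
mem.write("partial_sum", 42, coherent=False)   # weak: hot path
mem.flush()
print(mem.cells)
```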
Abstract:
We investigate the problem of learning disjunctions of counting functions, which are general cases of parity and modulo functions, with equivalence and membership queries. We prove that, for any prime number p, the class of disjunctions of integer-weighted counting functions with modulus p over the domain Z_q^n (or Z^n) for any given integer q ≥ 2 is polynomial-time learnable using at most n + 1 equivalence queries, where the hypotheses issued by the learner are disjunctions of at most n counting functions with weights from Z_p. The result is obtained through learning linear systems over an arbitrary field. In general, a counting function may have a composite modulus. We prove that, for any given integer q ≥ 2, over the domain Z_2^n, the class of read-once disjunctions of Boolean-weighted counting functions with modulus q is polynomial-time learnable with only one equivalence query, and the class of disjunctions of log log n Boolean-weighted counting functions with modulus q is polynomial-time learnable. Finally, we present an algorithm for learning graph-based counting functions.
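Since the main result is obtained by learning linear systems over a field, the following sketch shows the core subroutine such a learner can rely on: Gauss-Jordan elimination over Z_p for prime p, where each counterexample contributes one linear constraint on the unknown weights. The routine is illustrative, not the paper's algorithm.

```python
# Solve A x = b over Z_p (p prime) by Gauss-Jordan elimination.
def solve_mod_p(A, b, p):
    """Return one solution of A x = b over Z_p, or None if inconsistent."""
    n, m = len(A), len(A[0])
    A = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    row, pivots = 0, []
    for col in range(m):
        piv = next((r for r in range(row, n) if A[r][col] % p), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        inv = pow(A[row][col], p - 2, p)               # Fermat inverse
        A[row] = [(a * inv) % p for a in A[row]]
        for r in range(n):                             # clear column elsewhere
            if r != row and A[r][col] % p:
                f = A[r][col]
                A[r] = [(a - f * c) % p for a, c in zip(A[r], A[row])]
        pivots.append(col)
        row += 1
    # A zero row with nonzero right-hand side means no solution exists.
    if any(all(a % p == 0 for a in A[r][:m]) and A[r][m] % p for r in range(n)):
        return None
    x = [0] * m
    for r, col in enumerate(pivots):
        x[col] = A[r][m]
    return x

print(solve_mod_p([[1, 2], [3, 4]], [1, 0], 5))  # [3, 4]: x+2y=1, 3x+4y=0 mod 5
```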
Abstract:
For communication-intensive parallel applications, the maximum degree of concurrency achievable is limited by the communication throughput made available by the network. In previous work [HPS94], we showed experimentally that the performance of certain parallel applications running on a workstation network can be improved significantly if a congestion control protocol is used to enhance network performance. In this paper, we characterize and analyze the communication requirements of a large class of supercomputing applications that fall under the category of fixed-point problems amenable to solution by parallel iterative methods. From this we derive a set of interface and architectural features sufficient for the efficient implementation of such applications over a large-scale distributed system. In particular, we propose a direct link between the application and network layers, supporting congestion control actions at both ends. This in turn enhances the system's responsiveness to network congestion, improving performance. Measurements are given showing the efficacy of our scheme in supporting large-scale parallel computations.
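As an illustration of the kind of congestion control action such an application/network interface enables, the sketch below drives a send window by additive increase and multiplicative decrease (AIMD) in response to congestion signals delivered to the application. It is a generic mechanism shown for orientation, not the paper's protocol.

```python
# AIMD send-window sketch: grow slowly while the network is clear,
# back off sharply when the network layer signals congestion.
def aimd_window(events, w=1.0, w_max=64.0):
    """events: iterable of booleans, True = congestion signal this round."""
    history = []
    for congested in events:
        if congested:
            w = max(1.0, w / 2.0)    # multiplicative decrease: back off fast
        else:
            w = min(w_max, w + 1.0)  # additive increase: probe for bandwidth
        history.append(w)
    return history

print(aimd_window([False] * 5 + [True] + [False] * 3))
# window grows 2..6, halves to 3 on the signal, then grows again: 4, 5, 6
```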
Abstract:
We give a hybrid algorithm for parsing ϵ-grammars, based on Tomita's non-ϵ-grammar parsing algorithm ([Tom86]) and Nozohoor-Farshi's ϵ-grammar recognition algorithm ([NF91]). The hybrid parser handles the same set of grammars handled by Nozohoor-Farshi's recognizer. The algorithm's details and an example of its use are given. We also discuss the deployment of the hybrid algorithm within a GB parser, and the reason an ϵ-grammar parser is needed in our GB parser.
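A concrete ingredient every ϵ-grammar parser needs is the set of nullable nonterminals, those that can derive the empty string. The standard fixed-point computation below is illustrative background, not the hybrid algorithm itself; the grammar encoding is an assumption made for the example.

```python
# Compute the nullable nonterminals of a context-free grammar by
# iterating to a fixed point.
def nullable_nonterminals(grammar):
    """grammar: dict mapping nonterminal -> list of right-hand sides,
    each RHS a tuple of symbols; () encodes an epsilon-production."""
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, rhss in grammar.items():
            if lhs not in nullable and any(
                all(sym in nullable for sym in rhs) for rhs in rhss
            ):
                nullable.add(lhs)
                changed = True
    return nullable

# S -> A B | 'a';  A -> epsilon;  B -> A A
g = {"S": [("A", "B"), ("a",)], "A": [()], "B": [("A", "A")]}
print(nullable_nonterminals(g))  # {'A', 'B', 'S'}
```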
Abstract:
By utilizing structure sharing among its parse trees, a GB parser can increase its efficiency dramatically. Using a GB parser whose phrase-structure recovery component is an implementation of Tomita's algorithm (as described in [Tom86]), we investigate how a GB parser can preserve the structure sharing output by Tomita's algorithm. In this report, we discuss the implications of using Tomita's algorithm in GB parsing, and we give some details of the structure-sharing parser currently under construction. We also discuss a method of parallelizing a GB parser and relate it to the existing literature on parallel GB parsing. Our approach to preserving sharing within a shared-packed forest is applicable not only to GB parsing, but to any setting where we want to preserve structure sharing in a parse forest in the presence of features.
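The following sketch illustrates the kind of structure sharing at issue: hash-consing parse-forest nodes so that identical subtrees are represented exactly once, which is what feature annotation must then be careful not to destroy. It is a generic technique shown for orientation, not the report's parser.

```python
# Hash-consed parse-forest nodes: identical (label, children) pairs are
# built once and shared thereafter.
class Forest:
    def __init__(self):
        self._nodes = {}

    def node(self, label, children=()):
        """Return the unique node for (label, children), creating it once."""
        key = (label, tuple(id(c) for c in children))
        if key not in self._nodes:
            self._nodes[key] = {"label": label, "children": children}
        return self._nodes[key]

f = Forest()
np = f.node("NP", (f.node("she"),))
vp1 = f.node("VP", (f.node("saw"), np))
vp2 = f.node("VP", (f.node("saw"), np))
print(vp1 is vp2)     # True: the VP subtree is shared, not duplicated
print(len(f._nodes))  # 4 distinct nodes: she, NP, saw, VP
```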