11 results for complex data

in Aquatic Commons


Relevance: 60.00%

Abstract:

ADMB2R is a collection of AD Model Builder routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. ADMB2R provides both the means to transfer data structures significantly more complex than simple tables and an archive mechanism for storing data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder, and then analyse the results with R. We wanted to automate data transfer to speed diagnostics during working-group meetings. We thus developed the ADMB2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, data frames, vectors, or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. The capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to use model results in analyses or meta-analyses devised months, or even years, later. We offer ADMB2R to others in the hope that they will find it useful. (PDF contains 30 pages)
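The plain-text transfer format these tools share can be sketched in miniature. This is a hypothetical illustration in Python (the actual routines are written in AD Model Builder, C, and Fortran, and also handle matrices, data frames, and nested lists); the field names and values below are invented, but the output is an R `list()` literal of the kind a single `dget()` call can read.

```python
# Hypothetical sketch of the plain-text format described above: an R list
# literal readable with a single call to R's dget(). The names and values
# are invented for illustration only.

def write_r_list(path, data):
    """Write a dict of name -> sequence of numbers as an R list() literal."""
    entries = []
    for name, values in data.items():
        vec = ", ".join(str(float(v)) for v in values)
        entries.append("%s = c(%s)" % (name, vec))
    with open(path, "w") as f:
        f.write("list(" + ",\n     ".join(entries) + ")\n")

write_r_list("results.txt", {"years": [2001, 2002, 2003],
                             "biomass": [12.5, 13.1, 11.8]})
# In R: results <- dget("results.txt"); results$biomass
```

Because the file is an ordinary R expression, no package is needed on the R side; `dget` parses and returns the whole structure at once.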

Relevance: 60.00%

Abstract:

C2R is a collection of C routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. C2R provides both the means to transfer data structures significantly more complex than simple tables and an archive mechanism for storing data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder, and then analyse the results with R. We wanted to automate data transfer to speed diagnostics during working-group meetings. We thus developed the C2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, data frames, vectors, or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. The capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to use model results in analyses or meta-analyses devised months, or even years, later. We offer C2R to others in the hope that they will find it useful. (PDF contains 27 pages)

Relevance: 60.00%

Abstract:

For2R is a collection of Fortran routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. For2R provides both the means to transfer data structures significantly more complex than simple tables and an archive mechanism for storing data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder, and then analyse the results with R. We wanted to automate data transfer to speed diagnostics during working-group meetings. We thus developed the For2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, data frames, vectors, or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. The capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to use model results in analyses or meta-analyses devised months, or even years, later. We offer For2R to others in the hope that they will find it useful. (PDF contains 31 pages)

Relevance: 60.00%

Abstract:

Policy makers, natural resource managers, regulators, and the public often call on scientists to estimate the potential ecological changes caused by both natural and human-induced stresses, and to determine how those changes will impact people and the environment. To develop accurate forecasts of ecological changes we need to: 1) increase understanding of ecosystem composition, structure, and functioning, 2) expand ecosystem monitoring and apply advanced scientific information to make these complex data widely available, and 3) develop and improve forecast and interpretative tools that use a scientific basis to assess the results of management and science policy actions. (PDF contains 120 pages)

Relevance: 60.00%

Abstract:

Steady-state procedures, by their very nature, cannot deal with dynamic situations. Statistical models require extensive calibration, and predictions often have to be made for environmental conditions outside the original calibration range. In addition, the calibration requirement makes them difficult to transfer to other lakes. To date, no computer programs have been developed that successfully predict changes in the species composition of algae. The obvious solution to these limitations is to apply our limnological knowledge to the problem and develop functional models, thereby reducing the requirement for such rigorous calibration. Reynolds has proposed a model, based on fundamental principles of algal response to environmental events, which has successfully recreated the maximum observed biomass, the timing of events, and a fair simulation of the species succession in several lakes. A forerunner of this model was developed jointly with Welsh Water under contract to Messrs. Wallace Evans and Partners for use in the Cardiff Bay Barrage study. In this paper the authors test a much-developed form of this original model against a more complex data set and, using a simple example, show how it can be applied as an aid in choosing a management strategy for reducing the problems caused by eutrophication. Some further developments of the model are indicated.

Relevance: 60.00%

Abstract:

Water quality management programs are a necessary and inevitable means of preserving and sustainably using water resources. One of the important issues in determining the quality of water in rivers is the design of effective quality control networks, so that the quality variables measured at these stations are, as far as possible, indicative of overall changes in water quality. One way to achieve this is to increase the number of quality monitoring stations and sampling occasions. Since this dramatically increases the annual cost of monitoring, identifying which stations and parameters are the most important, and which sampling frequencies capture the greatest change in the system under study, can inform future decisions on optimizing the existing monitoring network: removing or adding stations or parameters and decreasing or increasing sampling frequency. To this end, this thesis studied the efficiency of multivariate statistical procedures. Given their features, multivariate statistical procedures can serve as a practical and useful method for recognizing and analyzing river pollution and, consequently, for understanding, controlling, and making sound decisions in water quality management. This research applied multivariate statistical techniques to analyze water quality and the variables affecting it in the Gharasou River, Ardabil Province, in the northwest of Iran. Over one year, 28 physical and chemical parameters were sampled at 11 stations. The measurements were analyzed by multivariate procedures including cluster analysis (CA), principal component analysis (PCA), factor analysis (FA), and discriminant analysis (DA).
Based on the findings from cluster analysis, principal component analysis, and factor analysis, the stations were divided into three groups: highly polluted (HP), moderately polluted (MP), and less polluted (LP). This study thus illustrates the usefulness of multivariate statistical techniques for the analysis and interpretation of complex data sets and, in water quality assessment, for identifying pollution sources and factors and understanding spatial variations in water quality for effective river water quality management. It also shows the effectiveness of these techniques for obtaining better information about water quality and for designing monitoring networks for effective management of water resources. Based on the results, a water quality monitoring program for the Gharasou River was accordingly developed and presented.
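The ordination step behind such a station grouping can be sketched in miniature. Everything below is invented for illustration: a toy matrix of 5 stations by 4 standardized water-quality parameters, with power iteration standing in for a full PCA. The first principal component gives each station a score along which polluted and clean stations separate.

```python
import math

# Invented toy data: 5 stations x 4 standardized parameters (positive =
# worse than average). Rows 0-1 mimic highly polluted stations, row 2 a
# moderately polluted one, rows 3-4 less polluted ones.
X = [
    [ 1.2,  0.9,  1.1,  0.8],
    [ 1.0,  1.1,  0.9,  1.2],
    [ 0.1, -0.2,  0.0,  0.1],
    [-1.1, -0.8, -1.0, -1.0],
    [-1.2, -1.0, -1.0, -1.1],
]

def first_pc(X, iters=200):
    """First principal component via power iteration on X^T X."""
    p = len(X[0])
    v = [1.0] * p
    for _ in range(iters):
        Xv = [sum(row[j] * v[j] for j in range(p)) for row in X]              # X v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(p)]   # X^T X v
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

pc1 = first_pc(X)
scores = [sum(row[j] * pc1[j] for j in range(len(pc1))) for row in X]
# Clustering these scores (or the full score matrix from several
# components) then yields groups like HP / MP / LP.
```

In the thesis the same idea is applied to 11 stations and 28 parameters, with cluster and factor analysis confirming the grouping.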

Relevance: 40.00%

Abstract:

NMFS bottom trawl survey data were used to describe changes in distribution, abundance, and rates of population change in the Gulf of Maine–Georges Bank herring (Clupea harengus) complex during 1963–98. Herring in the region have fully recovered following severe overfishing during the 1960s and 1970s. Three distinct, but seasonally intermingling, components from the Gulf of Maine, Nantucket Shoals (Great South Channel area), and Georges Bank appear to compose the herring resource in the region. Distribution ranges contracted as herring biomass declined in the late 1970s and then expanded in the 1990s as abundance increased. Analysis of research survey data suggests that herring are currently at high levels of abundance and biomass. All three components of the stock complex, including the Georges Bank component, have recovered to pre-1960s abundance. Survey data support the theory that herring recolonized the Georges Bank region in stages from adjacent components during the late 1980s, most likely from herring spawning in the Gulf of Maine.

Relevance: 30.00%

Abstract:

The mapping and geospatial analysis of benthic environments are multidisciplinary tasks that have become more accessible in recent years because of advances in technology and cost reductions in survey systems. The complex relationships that exist among physical, biological, and chemical seafloor components require advanced, integrated analysis techniques to enable scientists and others to visualize patterns and, in so doing, allow inferences to be made about benthic processes. Effective mapping, analysis, and visualization of marine habitats are particularly important because the subtidal seafloor environment is not readily viewed directly by eye. Research in benthic environments relies heavily, therefore, on remote sensing techniques to collect effective data. Because many benthic scientists are not mapping professionals, they may not adequately consider the links between data collection, data analysis, and data visualization. Projects often start with clear goals, but may be hampered by the technical details and skills required for maintaining data quality through the entire process from collection through analysis and presentation. The lack of technical understanding of the entire data handling process can represent a significant impediment to success. While many benthic mapping efforts have detailed their methodology as it relates to the overall scientific goals of a project, only a few published papers and reports focus on the analysis and visualization components (Paton et al. 1997, Weihe et al. 1999, Basu and Saxena 1999, Bruce et al. 1997). In particular, the benthic mapping literature often briefly describes data collection and analysis methods, but fails to provide sufficiently detailed explanation of particular analysis techniques or display methodologies so that others can employ them. 
In general, such techniques are in large part guided by the data acquisition methods, which can include both aerial and water-based remote sensing methods to map the seafloor without physical disturbance, as well as physical sampling methodologies (e.g., grab or core sampling). The terms benthic mapping and benthic habitat mapping are often used synonymously to describe seafloor mapping conducted for the purpose of benthic habitat identification. There is a subtle yet important difference, however, between general benthic mapping and benthic habitat mapping. The distinction is important because it dictates the sequential analysis and visualization techniques that are employed following data collection. In this paper general seafloor mapping for identification of regional geologic features and morphology is defined as benthic mapping. Benthic habitat mapping incorporates the regional scale geologic information but also includes higher resolution surveys and analysis of biological communities to identify the biological habitats. In addition, this paper adopts the definition of habitats established by Kostylev et al. (2001) as a “spatially defined area where the physical, chemical, and biological environment is distinctly different from the surrounding environment.” (PDF contains 31 pages)

Relevance: 30.00%

Abstract:

The eastern Bering Sea is a major marine ecosystem containing some of the largest populations of groundfish, crabs, birds, and marine mammals in the world. Commercial catches of groundfish in this region averaged about 1.6 million tons (t) annually during 1970–86. This report describes the species in the eastern Bering Sea groundfish complex and their relative importance, the environment in which they live, and the history of the fisheries and their management during 1954–1985. Historical changes in abundance and the condition of the principal species at the end of this first 30 years of exploitation are also examined. The results suggest that the biomass of the groundfish complex is characterized by variability rather than stability. The most reliable data (1979 to 1985) suggest that the biomass of the complex fluctuated between 11.8 and 15.7 million t. Even greater variability is suggested by the less reliable data from earlier years. Because of its dominance in the complex and its wide fluctuations in abundance, walleye pollock (Theragra chalcogramma) is primarily responsible for the major variations in the abundance of the complex. After 30 years of exploitation, the complex was generally in excellent condition. (PDF file contains 100 pages.)

Relevance: 30.00%

Abstract:

We develop and test a method for estimating relative abundance from catch and effort data using neural networks. Most stock assessment models use time series of relative abundance as their major source of information on abundance levels. These time series are frequently derived from catch-per-unit-of-effort (CPUE) data using generalized linear models (GLMs). GLMs attempt to remove variation in CPUE that is not related to the abundance of the population. However, GLMs are restricted in the types of relationships they can represent between CPUE and the explanatory variables. An alternative approach is to use structural models, based on scientific understanding, to develop complex non-linear relationships between CPUE and the explanatory variables. Unfortunately, the scientific understanding required to develop these models may not be available. In contrast to structural models, neural networks use the data to estimate the structure of the non-linear relationship between CPUE and the explanatory variables; neural networks may therefore provide a better alternative when the structure of the relationship is uncertain. We used simulated data based on a habitat-based method to test the neural network approach and to compare it with the GLM approach. Cross-validation and simulation tests show that the neural network performed better than nominal effort and the GLM approach, although the improvement over GLMs was not substantial. We applied the neural network model to CPUE data for bigeye tuna (Thunnus obesus) in the Pacific Ocean.
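The core idea, that a network learns the CPUE-covariate shape from data rather than from a prespecified functional form, can be sketched with a toy one-hidden-layer network in pure Python. Everything here is invented for illustration (the simulated "CPUE" curve, the network size, the training settings); it is not the paper's model.

```python
import math, random

random.seed(1)

# Invented toy problem: "CPUE" depends non-linearly on one standardized
# covariate x. A small one-hidden-layer network learns the shape from the
# data; a GLM would need this form specified in advance.
xs = [i / 20.0 for i in range(-20, 21)]
ys = [math.exp(-x * x) + 0.05 * math.sin(5.0 * x) for x in xs]

H = 8                                    # hidden units (arbitrary choice)
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.05

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
for _ in range(1000):                    # plain stochastic gradient descent
    for x, y in zip(xs, ys):
        yhat, h = forward(x)
        err = yhat - y
        for j in range(H):
            gh = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * gh * x
            b1[j] -= lr * gh
        b2 -= lr * err
loss_after = mse()
```

The fitted curve can then play the role a GLM prediction plays in CPUE standardization, without committing in advance to a particular link or polynomial form.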

Relevance: 30.00%

Abstract:

Southern bluefin tuna (SBT) (Thunnus maccoyii) growth rates are estimated from tag-return data associated with two time periods, the 1960s and 1980s. The traditional von Bertalanffy growth (VBG) model and a two-phase VBG model were fitted to the data by maximum likelihood. The traditional VBG model did not provide an adequate representation of growth in SBT, and the two-phase VBG model yielded a significantly better fit. The results indicated that a significant change occurs in the pattern of growth, relative to a VBG curve, during the juvenile stages of the SBT life cycle, which may be related to the transition from a tightly schooling fish that spends substantial time in nearshore and surface waters to one found primarily in deeper, more offshore waters. The results suggest that more complex growth models should be considered for other tunas and for other species that show a marked change in habitat use with age. The likelihood surface for the two-phase VBG model was found to be bimodal, and some implications of this are investigated. Significant and substantial differences were found in the growth of fish spawned in the 1960s and in the 1980s, such that after age four there is a difference of about one year in the expected age of a fish of a given length, a difference that persists over the size range for which meaningful recapture data are available. This difference may be a density-dependent response to the marked reduction in the SBT population. Given the key role that growth estimates play in most stock assessments, the results indicate a need both for regular monitoring of growth rates and for provisions for changes in growth over time (possibly related to changes in abundance) in the stock assessment models used for SBT and other species.
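The contrast between the two growth models can be written down compactly. The sketch below uses the standard VBG form and one common way of defining a two-phase variant (the growth coefficient k changes at a switch age, with length kept continuous at the switch); the parameter values are illustrative only, not the paper's estimates.

```python
import math

def vbg(age, l_inf, k, t0):
    """Standard von Bertalanffy growth: L(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return l_inf * (1.0 - math.exp(-k * (age - t0)))

def two_phase_vbg(age, l_inf, k1, k2, t0, t_switch):
    """Two-phase VBG: coefficient k1 before t_switch, k2 after,
    with length-at-age continuous at the switch."""
    if age <= t_switch:
        return vbg(age, l_inf, k1, t0)
    l_switch = vbg(t_switch, l_inf, k1, t0)
    # second phase approaches the same asymptotic length at rate k2
    return l_inf - (l_inf - l_switch) * math.exp(-k2 * (age - t_switch))

# Illustrative parameters only (not estimates from the paper):
L_INF, K1, K2, T0, T_SWITCH = 185.0, 0.35, 0.12, -0.5, 3.0
lengths = [two_phase_vbg(a, L_INF, K1, K2, T0, T_SWITCH) for a in range(1, 11)]
```

Fitting such a model by maximum likelihood adds the switch age and second coefficient as free parameters, which is one route by which the likelihood surface can become bimodal.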