779 results for Aiken Technical College--Statistics
Abstract:
ADMB2R is a collection of AD Model Builder routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. ADMB2R provides both the means to transfer data structures significantly more complex than simple tables and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the ADMB2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, data frames, vectors, or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer ADMB2R to others in the hope that they will find it useful. (PDF contains 30 pages)
Abstract:
C2R is a collection of C routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. C2R provides both the means to transfer data structures significantly more complex than simple tables and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the C2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, data frames, vectors, or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer C2R to others in the hope that they will find it useful. (PDF contains 27 pages)
Abstract:
For2R is a collection of Fortran routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. For2R provides both the means to transfer data structures significantly more complex than simple tables and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the For2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, data frames, vectors, or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer For2R to others in the hope that they will find it useful. (PDF contains 31 pages)
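All three *2R packages above share one mechanism: the compiled model writes a single R list(...) expression to a plain-text file, and R recovers the whole structure with one call to dget. As a rough illustration only, and not the packages' actual output format, the following minimal Python sketch emits such a dget-readable file; the file name, field names, and values are invented.

    # Minimal sketch: write one R list(...) expression to a text file so that
    # R can restore it with dget("model_output.rdat"). All names and values
    # here are invented for illustration.

    def r_vector(values):
        # Render a sequence of numbers as an R c(...) call.
        return "c(" + ", ".join(repr(float(v)) for v in values) + ")"

    def r_matrix(rows):
        # Render equal-length rows as an R matrix(...) call, filled by row.
        flat = [v for row in rows for v in row]
        return "matrix(" + r_vector(flat) + f", nrow = {len(rows)}, byrow = TRUE)"

    def write_rdat(path, scalars, vectors, matrices):
        # Emit a single R expression holding scalars, vectors, and matrices.
        parts = [f"{k} = {float(v)!r}" for k, v in scalars.items()]
        parts += [f"{k} = {r_vector(v)}" for k, v in vectors.items()]
        parts += [f"{k} = {r_matrix(m)}" for k, m in matrices.items()]
        with open(path, "w") as f:
            f.write("list(\n  " + ",\n  ".join(parts) + "\n)\n")

    write_rdat("model_output.rdat",
               scalars={"nyears": 3},
               vectors={"ssb": [410.2, 398.7, 372.1]},
               matrices={"naa": [[1200, 800], [1100, 760]]})
    # In R: out <- dget("model_output.rdat"); out$ssb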
Abstract:
A Knowledge Exchange workshop took place in March 2010 to bring together technical experts working in partner projects collecting usage statistics, including the PIRUS2, OAstatistik, and SURFsure projects. Experts from other related projects (RePEc and NEEO) were also involved. Following the workshop, the experts concluded that it would be valuable for institutions in all four Knowledge Exchange partner countries, as well as countries outside KE, to have an agreed set of guidelines for collecting and comparing usage statistics. This work connects with activities already planned in funded projects in three of the countries. Together they have drawn up a set of guidelines which will be tested and refined by Autumn 2010. One element of these guidelines is an agreed robot list. This list extends the COUNTER list of robots whose accesses should be excluded when counting downloads.
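To make the robot-list idea concrete, here is a minimal Python sketch of filtering download records against such an exclusion list; the patterns below are illustrative stand-ins, not the actual COUNTER or Knowledge Exchange lists.

    import re

    # Illustrative robot patterns only; the real exclusion lists (e.g.
    # COUNTER's) are longer and maintained externally.
    ROBOT_PATTERNS = ["bot", "crawler", "spider", "slurp", "wget"]
    ROBOT_RE = re.compile("|".join(ROBOT_PATTERNS), re.IGNORECASE)

    def count_human_downloads(records):
        # Count download records whose user agent matches no robot pattern.
        return sum(1 for r in records if not ROBOT_RE.search(r["user_agent"]))

    records = [
        {"item": "oai:repo:123", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
        {"item": "oai:repo:123", "user_agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"},
    ]
    print(count_human_downloads(records))  # prints 1: the Googlebot hit is excluded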
Abstract:
Knowledge Exchange hosted a workshop in March 2010 with the aim of bringing together technical experts working in partner projects collecting usage statistics, including the PIRUS2, OAstatistik, and SURFsure projects. Experts from other related projects (RePEc and NEEO) were also involved. The workshop produced a briefing paper on combined usage statistics as a basis for research intelligence. In this paper, the experts make a case for collecting and exchanging usage statistics, as this can provide valuable insight into how research is being used, not only by the research community but also by business and society in general. This would provide a basis for 'Research Intelligence': the intelligent use of reliable numerical data as a basis for decision making in higher education and research. Usage statistics are a clear example of data which can offer a valuable contribution to the information required to make informed decisions. To allow for the meaningful collection, exchange, and analysis of usage statistics, a number of challenges and opportunities need to be addressed and further steps need to be taken.
Abstract:
Atlantic menhaden, Brevoortia tyrannus, the object of a major purse-seine fishery along the U.S. east coast, are landed at plants from northern Florida to central Maine. The National Marine Fisheries Service has sampled these landings since 1955 for length, weight, and age. Together with records of landings at each plant, the samples are used to estimate numbers of fish landed at each age. This report analyzes the sampling design in terms of probability sampling theory. The design is classified as two-stage cluster sampling, the first stage consisting of purse-seine sets randomly selected from the population of all sets landed, and the second stage consisting of fish randomly selected from each sampled set. Implicit assumptions of this design are discussed with special attention to current sampling procedures. Methods are developed for estimating mean fish weight, numbers of fish landed, and age composition of the catch, with approximate 95% confidence intervals. Based on specific results from three ports (Port Monmouth, N.J.; Reedville, Va.; and Beaufort, N.C.) for the 1979 fishing season, recommendations are made for improving sampling procedures to comply more exactly with assumptions of the sampling design. These recommendations include adopting more formal methods for randomizing set and fish selection, increasing the number of sets sampled, considering the bias introduced by unequal set sizes, and developing methods to optimize the use of funds and personnel. (PDF file contains 22 pages.)
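As a rough numerical illustration of the kind of estimator described above, the following Python sketch computes mean fish weight with an approximate 95% confidence interval from set-level subsamples. It uses only first-stage (between-set) variation, assumes equal set weights, and ignores the second-stage variance component; the report's own estimators handle both stages. All data values are invented.

    import math

    # Each inner list holds the weights (kg) of fish subsampled from one
    # purse-seine set (the first-stage cluster). Values are invented.
    sets = [
        [0.21, 0.24, 0.19, 0.22],
        [0.30, 0.27, 0.29],
        [0.18, 0.20, 0.22, 0.21, 0.19],
    ]

    set_means = [sum(s) / len(s) for s in sets]   # per-set mean weight
    n = len(set_means)                            # number of sampled sets
    ybar = sum(set_means) / n                     # overall mean (equal set weights)
    s2 = sum((m - ybar) ** 2 for m in set_means) / (n - 1)  # between-set variance
    se = math.sqrt(s2 / n)                        # standard error of the mean
    half = 1.96 * se   # large-sample 95% half-width; a t quantile suits small n better
    print(f"mean weight = {ybar:.3f} kg, 95% CI = ({ybar - half:.3f}, {ybar + half:.3f})")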
Abstract:
Results are presented for the first 4 years of data (1994-1998) from the Kainji Lake catch assessment survey, collected and analysed by the Nigerian-German Kainji Lake Fisheries Promotion Project. The following areas are covered: methodology and alterations of the original sampling concept; frame survey results and factors relating to the CAS; general catch assessment survey results; the gill net, drift net, beach seine, cast net, longline, and trap fisheries; and catch statistics from fisherwomen. (PDF contains 143 pages)
Abstract:
John Nathan Cobb (1868–1930) became the founding Director of the College of Fisheries, University of Washington, Seattle, in 1919 without the benefit of a college education. An inquisitive and ambitious man, he began his career in the newspaper business and was introduced to commercial fisheries when he joined the U.S. Fish Commission (USFC) in 1895 as a clerk, and he was soon promoted to a “Field Agent” in the Division of Statistics, Washington, D.C. During the next 17 years, Cobb surveyed commercial fisheries from Maine to Florida, Hawaii, the Pacific Northwest, and Alaska for the USFC and its successor, the U.S. Bureau of Fisheries. In 1913, he became editor of the prominent west coast trade magazine, Pacific Fisherman, of Seattle, Wash., where he became known as a leading expert on the fisheries of the Pacific Northwest. He soon joined the campaign, led by his employer, to establish the nation’s first fisheries school at the University of Washington. After a brief interlude (1917–1918) with the Alaska Packers Association in San Francisco, Calif., he was chosen as the School’s founding director in 1919. Reflecting his experience and mindset, as well as the University’s apparent initial desire, Cobb established the College of Fisheries primarily as a training ground for those interested in applied aspects of the commercial fishing industry. Cobb attracted sufficient students, was a vigorous spokesman for the College, and had ambitious plans for expansion of the school’s faculty and facilities. He became aware that the College was not held in high esteem by his faculty colleagues or by the University administration because of the school’s failure to emphasize scholastic achievement, and he attempted to correct this deficiency. Cobb became ill with heart problems in 1929 and died on 13 January 1930. The University soon thereafter dissolved the College and dismissed all but one of its faculty. A Department of Fisheries, in the College of Science, was then established in 1930 and was led by William Francis Thompson (1888–1965), who emphasized basic science and fishery biology. The latter format continues to the present in the Department’s successor, the School of Aquatic and Fishery Sciences.
Abstract:
This study examined technical efficiency in artisanal fisheries in Lagos State, Nigeria. The study employed a two-stage random sampling procedure to select 120 respondents. The analytical techniques involved descriptive statistics and estimation of technical efficiency by the maximum likelihood estimation (MLE) procedure available in FRONTIER 4.1. The MLE results for the stochastic frontier production function showed that hired labour, cost of repairs, and capital items are critical factors that influence the productivity of artisanal fishermen, with the coefficient of hired labour being highly elastic. This implies that employing more labour will significantly increase the catch in the study area. The predicted efficiency scores, with an average value of 0.92, showed that there is a marginal potential of about 8 percent to increase the catch, and hence the income of the fishermen. The study further examined the factors that influence the productivity of fishermen in the study area. Years of education, mode of operation, and frequency of fishing have important implications for the technical efficiency of fishermen in the study area.
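For readers unfamiliar with the method, the sketch below simulates data and fits a normal/half-normal stochastic frontier production function by maximum likelihood, the model family that FRONTIER 4.1 estimates. The data, variable names, and settings are invented for illustration, not taken from the study.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Simulate a Cobb-Douglas frontier in logs: y = X beta + v - u, with
    # symmetric noise v ~ N(0, sv^2) and one-sided inefficiency u ~ |N(0, su^2)|.
    rng = np.random.default_rng(0)
    n = 120
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 log-inputs
    beta_true = np.array([1.0, 0.6, 0.3])
    v = rng.normal(0, 0.2, n)
    u = np.abs(rng.normal(0, 0.3, n))
    y = X @ beta_true + v - u

    def negloglik(theta):
        # Aigner-Lovell-Schmidt log-likelihood for the normal/half-normal model.
        beta, lsv, lsu = theta[:3], theta[3], theta[4]
        sv, su = np.exp(lsv), np.exp(lsu)      # log-parameterised to stay positive
        sigma = np.hypot(sv, su)               # sqrt(sv^2 + su^2)
        lam = su / sv
        eps = y - X @ beta
        ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
              + norm.logcdf(-eps * lam / sigma))
        return -ll.sum()

    res = minimize(negloglik, x0=np.zeros(5), method="BFGS")
    print("beta_hat =", res.x[:3])  # should land near beta_true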
Abstract:
Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them aside from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where a lot more sensors exist. Thus, we envision that a fundamental service that a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, we call DIP, which allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers as in existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
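A minimal sketch of the inference idea the abstract attributes to DIP, under a simplified slotted-contention model of our own choosing: if each of n neighbors transmits in a slot with known probability p, a slot is busy with probability 1 - (1 - p)^n, so the busy fraction a node observes can be inverted to estimate n. The protocol's actual analysis is more involved; the parameter names and model below are illustrative assumptions.

    import math
    import random

    def estimate_neighbors(busy_fraction, p):
        # Invert P(busy) = 1 - (1 - p)^n to estimate neighborhood size n.
        if not 0 < busy_fraction < 1:
            raise ValueError("need an interior busy fraction")
        return math.log(1 - busy_fraction) / math.log(1 - p)

    def simulate_busy_fraction(n, p, slots, rng):
        # A node listens to `slots` contention slots; a slot is busy if any
        # of its n neighbors transmits in it (each with probability p).
        busy = sum(any(rng.random() < p for _ in range(n)) for _ in range(slots))
        return busy / slots

    rng = random.Random(1)
    n_true, p = 40, 0.02
    b = simulate_busy_fraction(n_true, p, slots=5000, rng=rng)
    print(f"busy fraction {b:.3f} -> n_hat = {estimate_neighbors(b, p):.1f} (true {n_true})")

Note that no node identifiers are exchanged: the estimate comes entirely from locally measured contention, which is the property the abstract highlights.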