993 results for Correlation (Statistics)
Abstract:
Random field theory has been used to model spatially averaged soil properties, while geostatistics, the most widely used approach, which shares a common basis (the covariance function), has been used successfully to model and estimate natural resources since the 1960s. Geostatistics should therefore, in principle, be an efficient way to model soil spatial variability. On this basis, the paper presents an alternative approach to estimating the scale of fluctuation, or correlation distance, of a soil stratum by geostatistics. The procedure comprises four steps: calculating the experimental variogram from measured data; selecting a suitable theoretical variogram model; fitting the theoretical model to the experimental variogram; and substituting the parameters obtained from the optimization into a simple relationship between the correlation distance δ and the range a. The paper also gives eight typical expressions relating a and δ. Finally, a practical example is presented to illustrate the methodology.
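The four-step procedure can be sketched in R as follows; the synthetic data, the choice of an exponential variogram model, and the relation δ = 2r used in the last step are illustrative assumptions, not values taken from the paper (which gives eight such range expressions).

# Minimal sketch of the four-step procedure, assuming an exponential
# variogram model; the data and parameter names are illustrative only.
set.seed(1)
z <- seq(0, 20, by = 0.5)                            # depths (m), synthetic profile
x <- 10 + arima.sim(list(ar = 0.8), n = length(z))   # synthetic soil property

# Step 1: experimental semivariogram, gamma(h) = 0.5 * mean[(x(z) - x(z + h))^2]
lags  <- 1:15
h     <- lags * 0.5
gamma <- sapply(lags, function(k) {
  d <- x[seq_len(length(x) - k)] - x[-seq_len(k)]
  0.5 * mean(d^2)
})

# Steps 2-3: select an exponential model and fit it by nonlinear least squares
fit <- nls(gamma ~ c0 + c1 * (1 - exp(-h / r)),
           start = list(c0 = 0.1, c1 = var(x), r = 2))

# Step 4: convert the fitted range parameter into the correlation distance.
# For an exponential autocorrelation exp(-h/r), delta = integral of rho = 2 * r.
r_hat     <- coef(fit)[["r"]]
delta_hat <- 2 * r_hat
delta_hat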
Abstract:
This paper deals with turbulence behavior in benthal boundary layers by means of large eddy simulation (LES). The flow is modeled by moving an infinite plate in otherwise quiescent water with an oscillatory and a steady velocity component. The oscillatory component is intended to simulate the wave effect on the flow. A number of large-scale turbulence databases have been established, from which we have obtained turbulence statistics of the boundary layers, such as Reynolds stress, turbulence intensity, skewness and flatness of turbulence, and temporal and spatial scales of turbulent bursts. Particular attention is paid to the dependence of those statistics on two nondimensional parameters, namely the Reynolds number and the current-wave velocity ratio, defined as the steady current velocity over the oscillatory velocity amplitude. It is found that the Reynolds stress and turbulence intensity profiles differ from phase to phase and exhibit two types of distributions in an oscillatory cycle. One is monotonic, occurring when the current and wave-induced components are in the same direction; the other is inflectional, occurring when the current and wave-induced components are in opposite directions. The current component makes the time series of Reynolds stress, as well as of turbulence intensity, asymmetrical, although the mean velocity series is symmetrical as a sine/cosine function. The skewness and flatness variations suggest that the turbulence distribution is not normal, but approaches a normal distribution as both the Reynolds number and the current-wave velocity ratio increase. As for turbulent bursting, the dimensionless period and the mean area of all bursts per unit bed area tend to increase with Reynolds number and current-wave velocity ratio, rather than being constant as in steady channel flows.
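The phase-dependent statistics named above (Reynolds stress, turbulence intensity, skewness and flatness) can be illustrated with a short R sketch; the synthetic velocity record, the number of phase bins and the current-wave velocity ratio of 0.4 below are invented for illustration and are not the LES database described in the paper.

# Phase-averaged turbulence statistics from a synthetic oscillatory record;
# all signals and parameters here are illustrative stand-ins.
set.seed(2)
n_cyc <- 200; n_ph <- 32                          # 200 wave cycles, 32 phase bins
phase <- rep(seq_len(n_ph), times = n_cyc)
U0 <- 0.2; Uw <- 0.5                              # steady current and wave amplitude
u_mean <- U0 + Uw * sin(2 * pi * phase / n_ph)    # current-wave velocity ratio U0/Uw = 0.4
u <- u_mean + rnorm(length(phase), sd = 0.05)     # streamwise velocity
w <- rnorm(length(phase), sd = 0.03)              # vertical velocity

stats_by_phase <- t(sapply(seq_len(n_ph), function(p) {
  up <- u[phase == p] - mean(u[phase == p])       # fluctuations about the phase average
  wp <- w[phase == p] - mean(w[phase == p])
  c(reynolds_stress = -mean(up * wp),             # -<u'w'>
    intensity       = sd(up),                     # turbulence intensity
    skewness        = mean(up^3) / sd(up)^3,      # third standardized moment
    flatness        = mean(up^4) / sd(up)^4)      # fourth standardized moment
}))
head(stats_by_phase)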
Abstract:
ADMB2R is a collection of AD Model Builder routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. ADMB2R provides both the means to transfer data structures significantly more complex than simple tables, and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the ADMB2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, dataframes, vectors or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer ADMB2R to others in the hope that they will find it useful. (PDF contains 30 pages)
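Reading such a file on the R side uses only base R; the file name and list contents below are hypothetical stand-ins for ADMB2R output, written here by hand so the example is self-contained.

# A compiled model writes a plain-text R expression to a file; here we write a
# hypothetical one by hand so the sketch runs without ADMB2R itself.
writeLines('list(
  info     = list(model = "example", run.date = "2010-01-01"),
  parms    = c(M = 0.3, steepness = 0.8),
  n.at.age = matrix(1:6, nrow = 2,
                    dimnames = list(c("2008", "2009"), c("age1", "age2", "age3")))
)', "model_results.rdat")

# A single call to dget() recovers the whole structure as an R list.
res <- dget("model_results.rdat")
res$parms["M"]          # a named parameter value
res$n.at.age["2009", ]  # one row of a matrix, selected by its dimnames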
Abstract:
C2R is a collection of C routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. C2R provides both the means to transfer data structures significantly more complex than simple tables, and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the C2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, dataframes, vectors or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer C2R to others in the hope that they will find it useful. (PDF contains 27 pages)
Abstract:
For2R is a collection of Fortran routines for saving complex data structures into a file that can be read in the R statistics environment with a single command. For2R provides both the means to transfer data structures significantly more complex than simple tables, and an archive mechanism to store data for future reference. We developed this software because we write and run computationally intensive numerical models in Fortran, C++, and AD Model Builder. We then analyse results with R. We desired to automate data transfer to speed diagnostics during working-group meetings. We thus developed the For2R interface to write an R data object (of type list) to a plain-text file. The master list can contain any number of matrices, values, dataframes, vectors or lists, all of which can be read into R with a single call to the dget function. This allows easy transfer of structured data from compiled models to R. Having the capacity to transfer model data, metadata, and results has sharply reduced the time spent on diagnostics, and at the same time, our diagnostic capabilities have improved tremendously. The simplicity of this interface and the capabilities of R have enabled us to automate graph and table creation for formal reports. Finally, the persistent storage in files makes it easier to treat model results in analyses or meta-analyses devised months—or even years—later. We offer For2R to others in the hope that they will find it useful. (PDF contains 31 pages)
Abstract:
A Knowledge Exchange workshop took place in March 2010 to bring together technical experts working in partner projects collecting usage statistics, including the PIRUS2, OAstatistik and SURFsure projects. Experts from other related projects (RePec and NeeO) were also involved. Following the workshop, the experts concluded that it would be valuable for institutions in all four Knowledge Exchange partner countries, as well as countries outside of KE, to have an agreed set of guidelines which could be used when collecting and comparing data on usage statistics. This work connects with work already planned in funded projects in three of the countries. Together they have drawn up a set of guidelines which will be tested and refined by Autumn 2010. One element in these guidelines is an agreed robot list. This list extends the COUNTER list of robots which should be excluded when counting downloads.
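In practice, applying such a robot list amounts to filtering download records by user-agent before counting; the log entries and the pattern in this R sketch are illustrative stand-ins, not the agreed KE/COUNTER list itself.

# Exclude robot traffic from a download log before counting; the log and the
# user-agent pattern below are invented for illustration.
log <- data.frame(
  item       = c("hdl:1/1", "hdl:1/1", "hdl:1/2", "hdl:1/2"),
  user_agent = c("Mozilla/5.0 (Windows NT 6.1)",
                 "Googlebot/2.1 (+http://www.google.com/bot.html)",
                 "msnbot/2.0b",
                 "Mozilla/5.0 (X11; Linux x86_64)"),
  stringsAsFactors = FALSE
)

robot_pattern <- "bot|crawl|spider|slurp"          # illustrative, not the full list
is_robot <- grepl(robot_pattern, log$user_agent, ignore.case = TRUE)

table(log$item[!is_robot])                         # downloads per item, robots excluded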
Abstract:
Knowledge Exchange hosted a workshop in March 2010 with the aim of bringing together technical experts working in partner projects collecting usage statistics, including the PIRUS2, OAstatistik and SURFsure projects. Experts from other related projects (RePec and NeeO) were also involved. The workshop produced a briefing paper on combined usage statistics as a basis for research intelligence. In this paper, the experts make a case for collecting and exchanging usage statistics, as this can provide valuable insight into how research is being used, not only by the research community but also by business and society in general. This would provide a basis for 'Research Intelligence': the intelligent use of reliable numerical data as a basis for decision making in higher education and research. Usage statistics are a clear example of data which can make a valuable contribution to the information required to make informed decisions. To allow for the meaningful collection, exchange and analysis of usage statistics, a number of challenges and opportunities need to be addressed and further steps need to be taken.
Abstract:
Atlantic menhaden, Brevoortia tyrannus, the object of a major purse-seine fishery along the U.S. east coast, are landed at plants from northern Florida to central Maine. The National Marine Fisheries Service has sampled these landings since 1955 for length, weight, and age. Together with records of landings at each plant, the samples are used to estimate numbers of fish landed at each age. This report analyzes the sampling design in terms of probability sampling theory. The design is classified as two-stage cluster sampling, the first stage consisting of purse-seine sets randomly selected from the population of all sets landed, and the second stage consisting of fish randomly selected from each sampled set. Implicit assumptions of this design are discussed with special attention to current sampling procedures. Methods are developed for estimating mean fish weight, numbers of fish landed, and age composition of the catch, with approximate 95% confidence intervals. Based on specific results from three ports (Port Monmouth, N.J., Reedville, Va., and Beaufort, N.C.) for the 1979 fishing season, recommendations are made for improving sampling procedures to comply more exactly with assumptions of the sampling design. These recommendations include adopting more formal methods for randomizing set and fish selection, increasing the number of sets sampled, considering the bias introduced by unequal set sizes, and developing methods to optimize the use of funds and personnel. (PDF file contains 22 pages.)
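For the mean-fish-weight estimate, a textbook two-stage cluster (ratio) calculation in R looks like the sketch below; the set sizes and sampled weights are invented, and the variance keeps only the between-set term, so this illustrates the kind of estimate described rather than the report's exact estimator.

# Textbook ratio estimate of mean fish weight from two-stage cluster sampling,
# with an approximate 95% CI; all numbers are invented, and only the
# between-set (first-stage) variance term is kept for simplicity.
set.seed(3)
n_sets  <- 8
M       <- sample(2e4:6e4, n_sets)              # fish landed per sampled set
samples <- lapply(seq_len(n_sets),              # ~10 fish weighed per set (g)
                  function(i) rnorm(10, mean = 250 + 5 * i, sd = 30))

y_hat <- M * sapply(samples, mean)              # estimated total weight per set
R_hat <- sum(y_hat) / sum(M)                    # ratio estimate of mean weight per fish

resid <- y_hat - R_hat * M                      # ratio residuals by set
var_R <- sum(resid^2) / (n_sets - 1) / (n_sets * mean(M)^2)
ci    <- R_hat + c(-1, 1) * qt(0.975, n_sets - 1) * sqrt(var_R)

c(mean_weight = R_hat, lower95 = ci[1], upper95 = ci[2])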