916 results for Sampling (Statistics)
Abstract:
The uncertainty of any analytical determination depends on both the analysis and the sampling. Uncertainty arising from sampling is usually not controlled, and methods for its evaluation are still little known. Pierre Gy's sampling theory is currently the most complete theory of sampling, and it also takes the design of the sampling equipment into account. Guides dealing with the practical issues of sampling also exist, published by international organizations such as EURACHEM, IUPAC (International Union of Pure and Applied Chemistry) and ISO (International Organization for Standardization). In this work, Gy's sampling theory was applied to several cases, including the analysis of chromite concentration estimated from SEM (Scanning Electron Microscope) images and the estimation of the total uncertainty of a drug dissolution procedure. The results clearly show that Gy's sampling theory can be utilized in both of the above-mentioned cases and that the uncertainties obtained are reliable. Variographic experiments, introduced in Gy's sampling theory, are beneficially applied to analyzing the uncertainty of auto-correlated data sets such as industrial process data and environmental discharges. The periodic behaviour of such processes can be observed by variographic analysis as well as with fast Fourier transformation and auto-correlation functions. With variographic analysis, the uncertainties are estimated as a function of the sampling interval. This is advantageous when environmental or process data are analyzed, as it is easy to estimate how the sampling interval affects the overall uncertainty. If the sampling frequency is too high, unnecessary resources are used; on the other hand, if the frequency is too low, the uncertainty of the determination may be unacceptably high. Variographic methods can also be utilized to estimate the uncertainty of spectral data produced by modern instruments.
Since spectral data are multivariate, methods such as Principal Component Analysis (PCA) are needed when the data are analyzed. Optimization of a sampling plan increases the reliability of the analytical process, which may in the end have beneficial effects on the economics of chemical analysis.
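The variographic approach described above rests on the empirical variogram of a process series: the mean squared difference between observations a given lag apart, which reveals periodicity and shows how uncertainty grows with the sampling interval. A minimal sketch of the core calculation, assuming a synthetic 24-step periodic series (the function name and data are illustrative, not from the thesis):

```python
import math
import random

def empirical_variogram(series, max_lag):
    """Empirical semivariogram: V(j) = mean((x[i+j] - x[i])^2) / 2 for lags j."""
    n = len(series)
    v = []
    for j in range(1, max_lag + 1):
        diffs = [(series[i + j] - series[i]) ** 2 for i in range(n - j)]
        v.append(sum(diffs) / (2 * len(diffs)))
    return v

# Illustrative data: a hypothetical periodic process with measurement noise.
random.seed(0)
series = [math.sin(2 * math.pi * t / 24) + random.gauss(0, 0.2)
          for t in range(200)]
v = empirical_variogram(series, max_lag=48)
# v[11] (lag 12, half a period) is high, while v[23] (lag 24, a full period)
# dips back toward the noise level, exposing the 24-step periodicity.
```

Sampling at an interval near the dip (here, the process period) keeps the variogram contribution to the overall uncertainty low.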
Abstract:
Aim: To investigate and understand patients' satisfaction with nursing care in the intensive care unit and to identify the dimensions of the concept of "satisfaction" from the patient's point of view; to design and validate a questionnaire that measures satisfaction levels in critical patients. Background: There are many instruments capable of measuring satisfaction with nursing care; however, they do not address the reality of critical patients, nor are they applicable in our context. Design: A dual-approach study comprising a qualitative phase employing Grounded Theory and a quantitative, descriptive phase to prepare and validate the questionnaire. Methods: Data collection in the qualitative phase will consist of in-depth interviews after theoretical sampling, an on-site diary and an expert discussion group. The sample size will depend on the expected theoretical saturation (n = 27-36). Analysis will be based on Grounded Theory. For the quantitative phase, convenience sampling will be used (n = 200). A questionnaire will be designed on the basis of the qualitative data. Descriptive and inferential statistics will be used. Validation will be based on content validity, construct criteria and instrument reliability, assessed by Cronbach's alpha and a test-retest approach. The approval date for this protocol was November 2010. Discussion: Self-perceptions, beliefs, experiences, and demographic, socio-cultural, epistemological and political factors are determinants of satisfaction, and these should be taken into account when compiling a questionnaire on satisfaction with nursing care among critical patients.
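Cronbach's alpha, named above as the reliability criterion, measures the internal consistency of a set of questionnaire items: it compares the sum of the individual item variances with the variance of the total scores. A minimal sketch of the standard formula, assuming hypothetical responses (three items scored by five patients; the values are illustrative only):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical responses: 3 questionnaire items scored 1-5 by 5 patients.
items = [[4, 5, 3, 4, 5],
         [4, 4, 3, 5, 5],
         [3, 5, 3, 4, 4]]
alpha = cronbach_alpha(items)  # ~0.83: acceptable internal consistency
```

Values above roughly 0.7 are conventionally read as acceptable reliability for a new instrument.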
Abstract:
Selected papers from the workshop "Development of models and forest soil surveys for monitoring of soil carbon", Koli, Finland, April 5-9, 2006.
Abstract:
This study was carried out to evaluate the efficiency of the Bitterlich method in growth and yield modeling of even-aged Eucalyptus stands. Twenty-five plots were set up in Eucalyptus grandis cropped under a high-bole system in the central-western region of Minas Gerais, Brazil. The sampling points were set up at the center of each plot. Data from four annual measurements were collected and used to fit three model types using age, site index and basal area as independent variables. The growth models were fitted for tree volume and mass. The efficiency of the Bitterlich method for generating data for growth and yield modeling was confirmed.
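The Bitterlich (angle-count) method estimates stand basal area without fixed-area plots: at each sampling point, every tree whose breast-height diameter subtends more than a fixed critical angle is counted as "in", and the count times the basal area factor (BAF) gives basal area per hectare. A minimal sketch of that arithmetic, assuming hypothetical counts and BAF (not values from the study):

```python
def basal_area_per_ha(tree_counts, baf):
    """Bitterlich point sampling: each point's basal area estimate (m^2/ha)
    is the number of 'in' trees times the basal area factor (BAF)."""
    return [count * baf for count in tree_counts]

# Hypothetical counts of qualifying trees at 4 sampling points, BAF = 2 m^2/ha.
estimates = basal_area_per_ha([12, 15, 10, 13], baf=2.0)
mean_g = sum(estimates) / len(estimates)  # stand-level basal area estimate
```

The per-point estimates can then feed basal area into growth models as an independent variable, as in the study above.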
Abstract:
This study verified the accuracy and efficiency of the Point-Centered Quarter Method (PCQM), using different numbers of individuals per sampled area, at 28 quarter points in an Araucaria forest in southern Paraná, Brazil. Three variations of the PCQM, differing in the number of individual trees sampled, were compared: the standard PCQM (SD-PCQM), with four individuals sampled per point (one in each quarter); a second variation (VAR1-PCQM), with eight individuals per point (two in each quarter); and a third variation (VAR2-PCQM), with 16 individuals per point (four in each quarter). Thirty-one tree species were recorded by the SD-PCQM method, 48 by VAR1-PCQM and 60 by VAR2-PCQM. The completeness of the vegetation census and the diversity index increased with the number of individuals considered per quadrant, indicating that VAR2-PCQM was the most accurate and efficient method compared with VAR1-PCQM and SD-PCQM.
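In the point-centered quarter method, the distance from each point to the nearest tree in each quadrant is recorded, and stand density can be estimated from the mean distance. A minimal sketch of the classical Cottam and Curtis estimator, assuming hypothetical distances (the abstract does not give the measured values):

```python
def pcqm_density(distances):
    """Cottam & Curtis point-centered quarter estimate:
    density = 1 / (mean point-to-tree distance)^2, in trees per unit area."""
    mean_d = sum(distances) / len(distances)
    return 1.0 / mean_d ** 2

# Hypothetical point-to-tree distances (m): 2 points x 4 quarters each.
distances = [2.0, 3.0, 2.5, 2.5, 3.0, 2.0, 2.5, 2.5]
density = pcqm_density(distances)   # trees per m^2
per_ha = density * 10_000           # trees per hectare
```

The VAR1 and VAR2 variations above enlarge the per-quarter sample (two or four trees per quadrant), which lengthens the distance list and stabilizes the mean.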
Abstract:
Through site-specific management, precision agriculture brings new techniques to the agricultural sector, as well as greater detail in the methods used and an increase in the overall efficiency of the system. The objective of this work was to analyze two techniques for defining management zones using soybean yield maps, in a productive area managed partly with site-specific fertilization and partly with conventional fertilization. The sampling area covers 1.74 ha, with 128 plots under site-specific fertilization and 128 plots under conventional fertilization. The productivity data were normalized by two techniques (normalized equivalent productivity and standardized equivalent productivity) and later classified into management zones. It can be concluded that the two methods of management zone definition proved efficient, showing similar data arrangements. Because the standardized equivalent productivity uses the standard score, it has a sounder statistical justification.
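The standard score (z-score) behind the standardized equivalent productivity puts yields from differently managed plots on a common, unitless scale before they are classified into management zones. A minimal sketch, assuming hypothetical yield values (not data from the study):

```python
def standardize(yields):
    """Standard score (z-score): (y - mean) / sample standard deviation,
    giving each plot's yield on a common, unitless scale."""
    n = len(yields)
    mean = sum(yields) / n
    sd = (sum((y - mean) ** 2 for y in yields) / (n - 1)) ** 0.5
    return [(y - mean) / sd for y in yields]

# Hypothetical soybean yields (t/ha) for 5 plots.
z = standardize([2.8, 3.1, 3.4, 2.9, 3.3])
# Plots with z > 0 yielded above the field mean; z < 0, below it.
```

Zone classification can then use the same thresholds on z for both fertilization regimes, which is the statistical justification the abstract alludes to.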
Abstract:
Considering that the sampling intensity of soil attributes is a determining factor for applying the concepts of precision agriculture, this study aimed to determine the spatial distribution pattern of soil attributes and corn yield at four soil sampling intensities, and to verify how sampling intensity affects the cause-effect relationship between soil attributes and corn yield. A 100-point referenced sample grid was imposed on the experimental site. Each sampling cell encompassed an area of 45 m² and was composed of five 10-m long crop rows, with the referenced point taken as the center of the cell. Samples were taken at depths of 0-0.1 m and 0.1-0.2 m. Soil chemical attributes and clay content were evaluated. Sampling intensities were established from the initial 100-point sampling, resulting in data sets of 100, 75, 50 and 25 points. The data were submitted to descriptive statistical and geostatistical analyses. The best sampling intensity for characterizing the spatial distribution pattern depended on the soil attribute under study. The P and K+ contents showed higher spatial variability, while the clay content, Ca2+, Mg2+ and base saturation (V) values showed lower spatial variability. The spatial distribution patterns of clay content and V at the 100-point sampling were the ones that best explained the spatial distribution pattern of corn yield.
Abstract:
Mechanical harvesting is an important stage in the soybean production process, and in this process the loss of a significant amount of grain is common. Despite the existence of mechanisms to monitor these losses, sampling methods are still essential to quantify them. Assuming that the size of the sample area affects the reliability of, and variability between, samples when quantifying losses, this paper aimed to analyze the variability and feasibility of using different sample-area sizes (1, 2 and 3 m²) in quantifying losses in the mechanical harvesting of soybeans. Thirty-six sites were sampled, and cutting losses, losses from other combine mechanisms and total losses were evaluated, as well as the water content of the seeds, straw distribution and crop productivity. Data were subjected to statistical analysis (descriptive statistics and analysis of variance) and Statistical Process Control (SPC). The coefficients of variation were similar for the three frame sizes evaluated. Combine losses showed stable behavior, whereas cutting losses and total losses showed unstable behavior. The frame size did not affect the quantification or variability of losses in the mechanical harvesting of soybeans; thus a 1 m² frame can be used for determining losses.
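In SPC, a loss series is judged stable when all samples fall within control limits derived from the data's own short-term variation. A minimal sketch of an individuals chart, one common SPC tool (the abstract does not specify which chart was used, and the loss values below are hypothetical):

```python
def control_limits(samples):
    """Individuals control chart: center line at the mean, limits at
    mean +/- 2.66 * average moving range (the classical d2-based estimate)."""
    n = len(samples)
    mean = sum(samples) / n
    moving_ranges = [abs(samples[i] - samples[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Hypothetical total-loss measurements (kg/ha) at 8 sampled sites.
losses = [55, 60, 52, 58, 61, 57, 54, 59]
lcl, cl, ucl = control_limits(losses)
stable = all(lcl <= x <= ucl for x in losses)  # no point beyond the limits
```

"Stable behavior" for combine losses, as reported above, corresponds to all points staying inside such limits; points outside them signal the "unstable behavior" seen for cutting and total losses.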
Abstract:
In the current study, we performed a spatial distribution analysis of soybean production in Paraná State. Data from seven crop years, 2003-04 to 2009-10, obtained from the Paraná Department of Agriculture and Supply (SEAB), were used to develop a Boxmap for each crop year, showing soybean production throughout this time interval. Moran's index was used to measure spatial autocorrelation among municipalities at an aggregate level, while the LISA index measured local correlation. For each index, different contiguity matrices and orders were used, and a significance-level study was carried out. As a result, we showed a spatial relationship among municipalities regarding production, which allowed the identification of high- and low-production clusters. Finally, the main soybean-producing municipalities were identified, which may provide supply-chain members with information to strengthen crop production in Paraná.
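Global Moran's index compares each area's deviation from the mean with that of its neighbours, as defined by a contiguity (weight) matrix; positive values indicate that similar production levels cluster in space. A minimal sketch of the standard formula, assuming hypothetical municipality values and first-order rook contiguity on a line (not the SEAB data):

```python
def morans_i(values, weights):
    """Global Moran's I for `values` given a weight matrix `weights`
    (weights[i][j] > 0 when areas i and j are contiguous)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)          # total weight
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * num / den

# Hypothetical production values for 4 municipalities arranged on a line,
# with symmetric first-order (rook) contiguity weights.
values = [10.0, 12.0, 30.0, 33.0]
weights = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
moran = morans_i(values, weights)  # > 0: low-low and high-high clustering
```

The LISA index decomposes this global statistic into one local value per municipality, which is what lets individual high- and low-production clusters be mapped.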
Abstract:
This dissertation examines knowledge and industrial knowledge creation processes. It looks at the way knowledge is created in industrial processes from data, which is transformed into information and finally into knowledge. In the context of this dissertation, the main tools for industrial knowledge creation are different statistical methods. This dissertation strives to define industrial statistics. This is done using an expert opinion survey, which was sent to a number of industrial statisticians. The survey was conducted to create a definition for this field of applied statistics and to demonstrate the wide applicability of statistical methods to industrial problems. In this part of the dissertation, traditional methods of industrial statistics are introduced. As industrial statistics is the main tool for knowledge creation, the basics of statistical decision making and statistical modeling are also included. The widely known Data-Information-Knowledge-Wisdom (DIKW) hierarchy serves as a theoretical background for this dissertation. The way that data is transformed into information, information into knowledge and knowledge finally into wisdom is used as a theoretical frame of reference. Some scholars have, however, criticized the DIKW model. Based on these different perceptions of the knowledge creation process, a new knowledge creation process based on statistical methods is proposed. In the context of this dissertation, data is the source of knowledge in industrial processes. Because of this, the mathematical categorization of data into continuous and discrete types is explained. Different methods for gathering data from processes are clarified as well. Two methods of data gathering appear in this dissertation: survey methods and measurements. The enclosed publications provide an example of the wide applicability of statistical methods in industry. In these publications, data are gathered using surveys and measurements.
The enclosed publications have been chosen so that each employs different statistical methods in analyzing the data. There are some similarities between the analysis methods used in the publications, but mainly different methods are used. Based on this dissertation, the use of statistical methods for industrial knowledge creation is strongly recommended. With statistical methods it is possible to handle large data sets, and the different types of statistical analysis results can easily be transformed into knowledge.