993 results for "Over sampling"
Abstract:
This paper deals with the joint economic design of X̄ and R charts when the occurrence times of assignable causes follow Weibull distributions with increasing failure rates. The variable quality characteristic is assumed to be normally distributed, and the process is subject to two independent assignable causes (such as tool wear-out, overheating, or vibration). One cause changes the process mean and the other changes the process variance. However, the occurrence of one kind of assignable cause does not preclude the occurrence of the other. A cost model is developed and a non-uniform sampling interval scheme is adopted. A two-step search procedure is employed to determine the optimum design parameters. Finally, a sensitivity analysis of the model is conducted, and the cost savings associated with the use of non-uniform sampling intervals instead of constant sampling intervals are evaluated.
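The abstract does not spell out the non-uniform sampling interval scheme. A common choice in this literature keeps the conditional probability of an assignable cause constant in every interval, which for a Weibull shift-time distribution gives sampling times t_j = h_1 · j^(1/β). The sketch below illustrates that scheme for a single Weibull cause; the parameter values are illustrative placeholders, not the optimized design.

```python
import numpy as np

def weibull_intervals(h1, beta, n_intervals):
    """Non-uniform sampling times with constant conditional shift probability.

    For a Weibull shift-time distribution with shape beta > 1 (increasing
    failure rate), requiring every interval to carry the same conditional
    probability of an assignable cause gives sampling times
    t_j = h1 * j**(1/beta), so the intervals shrink as the process ages.
    """
    j = np.arange(0, n_intervals + 1)
    times = h1 * j ** (1.0 / beta)
    return np.diff(times)

# Illustrative values only (h1 = first interval, beta = Weibull shape):
print(weibull_intervals(h1=2.0, beta=2.0, n_intervals=6))
# intervals: 2.00, 0.83, 0.64, ...  (decreasing, as expected for an IFR process)
```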
Abstract:
When joint X̄ and R charts are in use, samples of fixed size are regularly taken from the process, and their means and ranges are plotted on the X̄ and R charts, respectively. In this article, joint X̄ and R charts are used for monitoring continuous production processes. The sampling is performed in two stages. During the first stage, one item of the sample is inspected and, depending on the result, the sampling is interrupted if the process is found to be in control; otherwise, it goes on to the second stage, where the remaining sample items are inspected. The two-stage sampling procedure speeds up the detection of process disturbances. The proposed joint X̄ and R charts are easier to administer and more efficient than the joint X̄ and R charts with variable sample size, where the quality characteristic of interest can be evaluated either by attribute or by variable. Copyright (C) 2004 John Wiley & Sons, Ltd.
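As a rough illustration of the two-stage procedure described above, the sketch below simulates one sampling epoch. The first-stage limit, sample size, and control limit coefficient are hypothetical placeholders, not the chart parameters optimized in the article, and only the X̄ decision is checked.

```python
import numpy as np

rng = np.random.default_rng(3)

def two_stage_sample(mean=0.0, sd=1.0, n=5, first_limit=1.0, k=3.0):
    """One sampling epoch of a two-stage joint X-bar/R scheme (sketch).

    Stage 1: inspect a single item; if it lies within +/- first_limit
    (in sigma units of the target), sampling is interrupted.  Stage 2:
    otherwise inspect the remaining n - 1 items and plot the sample mean
    and range.  All limit values here are illustrative assumptions.
    """
    x1 = rng.normal(mean, sd)
    if abs(x1) <= first_limit:
        return "interrupted", None, None
    rest = rng.normal(mean, sd, size=n - 1)
    sample = np.concatenate(([x1], rest))
    xbar = sample.mean()
    r = sample.max() - sample.min()
    signal = abs(xbar) > k * sd / np.sqrt(n)   # X-bar check only in this sketch
    return ("signal" if signal else "in control"), xbar, r

print(two_stage_sample())               # process on target
print(two_stage_sample(mean=1.5))       # shifted process
```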
Abstract:
The usual practice in using a control chart to monitor a process is to take samples of size n from the process every h hours. This article considers the properties of the X̄ chart when the size of each sample depends on what is observed in the preceding sample. The idea is that the sample should be large if the sample point of the preceding sample is close to, but not actually outside, the control limits, and small if the sample point is close to the target. The properties of the variable sample size (VSS) X̄ chart are obtained using Markov chains. The VSS X̄ chart is substantially quicker than the traditional X̄ chart in detecting moderate shifts in the process.
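The article derives the VSS chart's properties analytically with Markov chains; the sketch below instead uses a small Monte Carlo simulation just to illustrate the sample-size switching rule. The warning limit w, control limit k, and the two sample sizes are illustrative assumptions, not the designs studied in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def vss_run_length(shift=0.5, n_small=3, n_large=9, w=1.0, k=3.0):
    """Number of samples until a VSS X-bar chart signals (Monte Carlo).

    The next sample is small when the current standardized mean falls in
    the central region (|z| <= w) and large when it falls in the warning
    region (w < |z| <= k); |z| > k is an out-of-control signal.
    """
    n = n_small
    samples = 0
    while True:
        samples += 1
        z = rng.normal(loc=shift * np.sqrt(n), scale=1.0)  # standardized sample mean
        if abs(z) > k:
            return samples
        n = n_large if abs(z) > w else n_small

# Estimated average run length for a 0.5-sigma shift (illustrative):
print(np.mean([vss_run_length(shift=0.5) for _ in range(5000)]))
```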
Abstract:
This paper presents an economic design of X̄ control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X̄ value falls close to the target and is tightened when an X̄ value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X̄ chart parameters.
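A minimal sketch of the relax/tighten switching logic described above. The (n, h, k) values are placeholders; the economically optimal values would come from minimizing the cost model, which is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Design:
    n: int      # sample size
    h: float    # sampling interval (hours)
    k: float    # control-limit coefficient

# Placeholder relaxed/tightened designs (illustrative assumptions only).
RELAXED   = Design(n=3,  h=2.0, k=3.2)
TIGHTENED = Design(n=12, h=0.5, k=2.8)

def next_design(z: float, w: float = 1.0) -> Design:
    """Pick the next (n, h, k): relax control when the standardized mean z
    falls close to the target (|z| <= w), tighten it otherwise."""
    return RELAXED if abs(z) <= w else TIGHTENED

print(next_design(0.3))   # relaxed control
print(next_design(1.7))   # tightened control
```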
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The purpose of the current study was to investigate the role of visual information in gait control in people with Parkinson's disease as they crossed over obstacles. Twelve healthy individuals and 12 patients with mild to moderate Parkinson's disease walked at their preferred speeds along a walkway and stepped over obstacles of varying heights (ankle height or half-knee height) under three visual sampling conditions: dynamic (normal lighting), static (static visual samples, similar to stroboscopic lighting), and voluntary visual sampling. Subjects wore liquid crystal glasses for visual manipulation. In the static visual sampling condition only, the patients with Parkinson's disease made contact with the obstacle more often than did the control subjects. In the successful trials, the patients increased their crossing step width in the static visual sampling condition as compared to the dynamic and voluntary visual sampling conditions; the control group maintained the same step width for all visual sampling conditions. The patients showed lower horizontal mean velocity values during obstacle crossing than did the controls. The patients with Parkinson's disease were more dependent on optic flow information for successful task performance and postural stability than were the control subjects. Bradykinesia influenced obstacle crossing in the patients with Parkinson's disease. © 2013 Elsevier B.V.
Abstract:
Classical sampling methods can be used to estimate the mean of a finite or infinite population. Block kriging also estimates the mean, but of an infinite population in a continuous spatial domain. In this paper, I consider a finite population version of block kriging (FPBK) for plot-based sampling. The data are assumed to come from a spatial stochastic process. Minimizing mean-squared-prediction errors yields best linear unbiased predictions that are a finite population version of block kriging. FPBK has versions comparable to simple random sampling and stratified sampling, and includes the general linear model. This method has been tested for several years for moose surveys in Alaska, and an example is given where results are compared to stratified random sampling. In general, assuming a spatial model gives three main advantages over classical sampling: (1) FPBK is usually more precise than simple or stratified random sampling, (2) FPBK allows small area estimation, and (3) FPBK allows nonrandom sampling designs.
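A simplified, constant-mean sketch of the finite-population block-kriging idea described above, assuming the spatial covariance parameters are known; the paper's general-linear-model version and its mean-squared-prediction-error derivation are not reproduced. The covariance function, parameter values, and plot counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(a, b, sill=1.0, range_=3.0):
    """Exponential spatial covariance between two sets of plot coordinates."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return sill * np.exp(-d / range_)

def fpbk_mean(coords_obs, y_obs, coords_unobs, nugget=0.1, **cov_kw):
    """Predict the finite-population mean of all plots (sampled + unsampled).

    Constant-mean version with covariance parameters treated as known:
    GLS estimate of the mean, then kriging-style prediction of the
    unsampled plots, then the mean over the whole finite population.
    """
    C_oo = exp_cov(coords_obs, coords_obs, **cov_kw) + nugget * np.eye(len(y_obs))
    C_uo = exp_cov(coords_unobs, coords_obs, **cov_kw)
    Ci = np.linalg.inv(C_oo)
    ones = np.ones(len(y_obs))
    mu = ones @ Ci @ y_obs / (ones @ Ci @ ones)        # GLS mean estimate
    y_pred = mu + C_uo @ Ci @ (y_obs - mu)             # predict unsampled plots
    return (y_obs.sum() + y_pred.sum()) / (len(y_obs) + len(y_pred))

# Synthetic example: a 10 x 10 grid of plots, 30 of them sampled (fake counts).
coords = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
y = 5.0 + rng.normal(0.0, 1.0, size=len(coords))
sampled = np.zeros(len(coords), dtype=bool)
sampled[rng.choice(len(coords), size=30, replace=False)] = True
print(fpbk_mean(coords[sampled], y[sampled], coords[~sampled]))
```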
Abstract:
Contamination by butyltin compounds (BTs) has been reported in estuarine environments worldwide, with serious impacts on the biota of these areas. Considering that BTs can be degraded by varying environmental conditions such as incident light and salinity, short-term variations in such factors may lead to inaccurate estimates of BT concentrations in nature. Therefore, the present study aimed to evaluate the possibility that measurements of BTs in estuarine sediments are influenced by different sampling conditions, including period of the day (day or night), tidal zone (intertidal or subtidal), and tide (high or low). The study area is located on the Brazilian southeastern coast, in the Sao Vicente Estuary at Pescadores Beach, where BT contamination had previously been detected. Three replicate samples of surface sediment were collected randomly in each combination of period of the day, tidal zone, and tide condition, from three subareas along the beach, totaling 72 samples. BTs were analyzed by GC-PFPD using a tin filter and a VF-5 column, by means of a validated method. The concentrations of tributyltin (TBT), dibutyltin (DBT), and monobutyltin (MBT) ranged from undetectable to 161 ng Sn g⁻¹ (d.w.). In most samples (71%), only MBT was quantifiable, whereas TBT was measured in only 14, suggesting either old contamination or rapid degradation processes. DBT was found in 27 samples, but could be quantified in only one. MBT concentrations did not differ significantly with period of the day, tidal zone, or tide condition. DBT and TBT could not be compared under all these environmental conditions, because only a few samples were above the quantification limit. Pooled samples of TBT did not reveal any difference between day and night. These results indicate that, in assessing contamination by butyltin compounds, surface-sediment samples can be collected under any of these environmental conditions. However, the wide variation of BT concentrations in the study area, i.e., over a very small geographic scale, illustrates the need for representative hierarchical and composite sampling designs that are compatible with the multiscalar temporal and spatial variability common to most marine systems. The use of such sampling designs will be necessary for future attempts to quantitatively evaluate and monitor the occurrence and impact of these compounds in nature.
Abstract:
Within-site variability in species detectability is a problem common to many biodiversity assessments and can strongly bias the results. Such variability can be caused by many factors, including simple counting inaccuracies, which can be addressed by increasing sample size, or by temporal changes in species behavior, meaning that the way the temporal sampling protocol is designed is also very important. Here we use the example of mist-netted tropical birds to determine how design decisions in the temporal sampling protocol can alter the data collected and how these changes might affect the detection of ecological patterns, such as the species-area relationship (SAR). Using data from almost 3400 birds captured during 21,000 net-hours at 31 sites in the Brazilian Atlantic Forest, we found that the magnitude of ecological trends remained fairly stable, but the probability of detecting statistically significant ecological patterns varied depending on sampling effort, time of day, and season in which sampling was conducted. For example, more species were detected in the wet season, but the SAR was strongest in the dry season. We found that the temporal distribution of sampling effort was more important than its total amount: similar ecological results could have been obtained with one-third of the total effort, as long as each site had been sampled equally over 2 yr. Projects with the same sampling effort and spatial design but different temporal sampling protocols are therefore likely to report different ecological patterns, which may ultimately lead to inappropriate conservation strategies.
Abstract:
An extensive study of the morphology and the dynamics of the equatorial ionosphere over South America is presented here. A multi-parametric approach is used to describe the physical characteristics of the ionosphere in the regions where the combination of the thermospheric electric field and the horizontal geomagnetic field creates the so-called Equatorial Ionization Anomalies. Ground-based measurements from GNSS receivers are used to link the Total Electron Content (TEC), its spatial gradients, and the phenomenon known as scintillation, which can lead to GNSS signal degradation or even to a GNSS signal 'loss of lock'. A new algorithm to highlight the features characterizing the TEC distribution is developed in the framework of this thesis, and the results obtained are validated and used to improve the performance of a GNSS positioning technique (long-baseline RTK). In addition, the correlation between scintillation and the dynamics of the ionospheric irregularities is investigated. By means of software implemented here, the velocity of the ionospheric irregularities is evaluated using high-sampling-rate GNSS measurements. The results highlight the parallel behaviour of the amplitude scintillation index (S4) occurrence and the zonal velocity of the ionospheric irregularities, at least under severe scintillation conditions (post-sunset hours). This suggests that scintillations are driven by TEC gradients as well as by the dynamics of the ionospheric plasma. Finally, given the importance of such studies for technological applications (e.g. GNSS high-precision applications), a validation of the NeQuick model (i.e. the model used in the new GALILEO satellites for TEC modelling) is performed. The NeQuick performance improves dramatically when data from HF radar sounding (ionograms) are ingested. A custom-designed algorithm, based on image recognition techniques, is developed to properly select the ingested data, leading to further improvement of the NeQuick performance.
Abstract:
The Advanced Very High Resolution Radiometer (AVHRR) carried on board the National Oceanic and Atmospheric Administration (NOAA) and the Meteorological Operational Satellite (MetOp) polar orbiting satellites is the only instrument offering more than 25 years of satellite data for analysing aerosols on a daily basis. The present study assessed a modified AVHRR aerosol optical depth τa retrieval over land for Europe. The algorithm might also be applied to other parts of the world with surface characteristics similar to Europe's; only the aerosol properties would have to be adapted to the new region. The initial approach used a relationship between Sun photometer measurements from the Aerosol Robotic Network (AERONET) and the satellite data to post-process the retrieved τa. Herein a quasi-stand-alone procedure, which is more suitable for the pre-AERONET era, is presented. In addition, the estimation of surface reflectance, the aerosol model, and other processing steps have been adapted. The method's cross-platform applicability was tested by validating τa from NOAA-17 and NOAA-18 AVHRR at 15 AERONET sites in Central Europe (40.5° N–50° N, 0° E–17° E) from August 2005 to December 2007. Furthermore, the accuracy of the AVHRR retrieval was compared with products from two newer instruments, the Medium Resolution Imaging Spectrometer (MERIS) on board the Environmental Satellite (ENVISAT) and the Moderate Resolution Imaging Spectroradiometer (MODIS) on board Aqua/Terra. In terms of the linear correlation coefficient R, the AVHRR results were similar to those of MERIS, with an even lower root mean square error (RMSE). Not surprisingly, MODIS, with its high spectral coverage, gave the highest R and lowest RMSE. Regarding monthly averaged τa, the results were ambiguous. Focusing on small-scale structures, R was reduced for all sensors, whereas the RMSE increased substantially only for MERIS. Regarding larger areas like Central Europe, the error statistics were similar to those of the individual match-ups. This was mainly explained by sampling issues. With the successful validation of AVHRR we are now able to concentrate on our large data archive dating back to 1985. This is a unique opportunity for both climate and air pollution studies over land surfaces.
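The match-up statistics quoted above (linear correlation coefficient R and RMSE) can be computed as in the sketch below; the data are synthetic placeholders, not AVHRR or AERONET retrievals, and the retrieval algorithm itself is not reproduced.

```python
import numpy as np

def matchup_stats(tau_satellite, tau_aeronet):
    """Linear correlation coefficient R and RMSE for AOD match-ups."""
    r = np.corrcoef(tau_satellite, tau_aeronet)[0, 1]
    rmse = np.sqrt(np.mean((tau_satellite - tau_aeronet) ** 2))
    return r, rmse

# Illustrative synthetic match-ups (not real AVHRR/AERONET data):
rng = np.random.default_rng(42)
truth = rng.gamma(shape=2.0, scale=0.08, size=200)       # plausible tau_a values
retrieved = truth + rng.normal(0.0, 0.04, size=200)      # noisy "retrieval"
print(matchup_stats(retrieved, truth))
```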
Abstract:
BACKGROUND: For almost 30 years, phosphatidylethanol (PEth) has been known as a direct marker of alcohol consumption. This marker indicates consumption in high amounts over a longer time period, but it has also been detected after a single high intake of ethanol (EtOH). The aim of this study was to obtain further information about the formation and elimination of PEth 16:0/18:1 by simulating extensive drinking. METHODS: After 3 weeks of alcohol abstinence, 11 test persons drank an amount of EtOH leading to an estimated blood ethanol concentration of 1 g/kg on each of 5 successive days. After the drinking episode, they stayed abstinent for 16 days with regular blood sampling. PEth 16:0/18:1 analysis was performed using liquid chromatography-tandem mass spectrometry (high-performance liquid chromatography 1100 system and QTrap 2000 triple quadrupole linear ion trap mass spectrometer). Blood alcohol values were obtained using a standardized method with headspace gas chromatography and flame ionization detection. RESULTS: Maximum measured concentrations of EtOH were 0.99 to 1.83 g/kg (mean 1.32 g/kg). These values were reached 1 to 3 hours after the start of drinking (mean 1.9 hours). For comparison, 10 of 11 volunteers had detectable PEth 16:0/18:1 values 1 hour after the start of drinking, ranging from 45 to 138 ng/ml. Over the following days, concentrations of PEth 16:0/18:1 increased continuously and reached maximum concentrations of 74 to 237 ng/ml between days 3 and 6. CONCLUSIONS: This drinking experiment led to measurable PEth concentrations. However, PEth 16:0/18:1 concentrations stayed rather low compared with those of alcohol abusers from previous studies.
Abstract:
Background: The lectin pathway of complement activation, in particular mannose-binding lectin (MBL), has been extensively investigated over recent years. So far, studies have been based exclusively on venous samples. The aim of this study was to investigate whether measurements of lectin pathway proteins obtained by capillary sampling agree with those from venous samples. Methods: A prospective study including 31 infants admitted with suspected early-onset sepsis. Lectin pathway proteins were measured in simultaneously obtained capillary and venous samples. Bland–Altman plots of logarithmized results were constructed, and the mean capillary-to-venous ratios (ratio_cap/ven) were calculated with their 95% confidence intervals (CI). Results: The agreement between capillary and venous sampling was very high for MBL (mean ratio_cap/ven, 1.01; 95% CI, 0.85–1.19). Similarly, high agreement was observed for H-ficolin (mean ratio_cap/ven, 1.02; 95% CI, 0.72–1.44), MASP-2 (1.04; 0.59–1.84), MASP-3 (0.96; 0.71–1.28), and MAp44 (1.01; 0.82–1.25), while the agreement was moderate for M-ficolin (mean ratio_cap/ven, 0.78; 95% CI, 0.27–2.28). Conclusions: The results of this study show excellent agreement between capillary and venous samples for most lectin pathway proteins. Except for M-ficolin, small-volume capillary samples can thus be used when assessing lectin pathway proteins in neonates and young children.
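A sketch of the log-scale agreement analysis described above, assuming paired capillary and venous concentrations are available; the numbers below are synthetic, and whether one reports limits of agreement or a confidence interval of the mean ratio is a design choice noted in the comments.

```python
import numpy as np

def ratio_agreement(cap, ven):
    """Bland-Altman-style agreement on the log scale for paired samples.

    Returns the geometric mean capillary/venous ratio and a 95% interval
    back-transformed from the log differences (mean +/- 1.96 SD, i.e.
    limits of agreement; divide SD by sqrt(n) instead for a CI of the mean).
    """
    d = np.log(np.asarray(cap)) - np.log(np.asarray(ven))
    mean, sd = d.mean(), d.std(ddof=1)
    return np.exp(mean), np.exp(mean - 1.96 * sd), np.exp(mean + 1.96 * sd)

# Illustrative synthetic paired concentrations (not the study data):
rng = np.random.default_rng(7)
ven = rng.lognormal(mean=7.0, sigma=1.0, size=31)
cap = ven * rng.lognormal(mean=0.0, sigma=0.08, size=31)
print(ratio_agreement(cap, ven))
```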
Abstract:
Purpose: Development of an interpolation algorithm for re-sampling spatially distributed CT data with the following features: global and local integral conservation, avoidance of negative interpolation values for positively defined datasets, and the ability to control re-sampling artifacts. Method and Materials: The interpolation can be separated into two steps: first, the discrete CT data have to be continuously distributed by an analytic function that respects the boundary conditions. Generally, this function is determined by piecewise interpolation. Instead of using linear or higher-order polynomial interpolations, which do not fulfill all of the above-mentioned features, a special form of Hermitian curve interpolation is used to solve the interpolation problem with respect to the required boundary conditions. A single parameter is determined, by which the behavior of the interpolation function is controlled. Second, the interpolated data have to be re-distributed with respect to the requested grid. Results: The new algorithm was compared with commonly used interpolation functions based on linear and second-order polynomials. It is demonstrated that these interpolation functions may over- or underestimate the source data by about 10%–20%, while the parameter of the new algorithm can be adjusted to reduce these interpolation errors significantly. Finally, the performance and accuracy of the algorithm were tested by re-gridding a series of X-ray CT images. Conclusion: Inaccurate sampling values may occur due to the lack of integral conservation. Re-sampling algorithms using higher-order polynomial interpolation functions may produce significant artifacts in the re-sampled data. Such artifacts can be avoided by using the new algorithm based on Hermitian curve interpolation.
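The abstract does not give the specific Hermitian interpolation or its control parameter, so the sketch below uses a related, off-the-shelf monotone Hermite (PCHIP) approach to illustrate the same two requirements, integral conservation and non-negativity, for 1-D resampling. It is an analogous technique under stated assumptions, not the article's algorithm.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def conservative_resample(edges_old, values_old, edges_new):
    """Integral-conserving, positivity-preserving 1-D resampling.

    Interpolates the cumulative integral with a monotone cubic Hermite
    spline (PCHIP) and differences it onto the new bin edges.  Because the
    cumulative curve of non-negative data is non-decreasing and PCHIP
    preserves monotonicity, the resampled values stay >= 0 and the total
    integral over the common support is conserved exactly.
    """
    widths = np.diff(edges_old)
    cum = np.concatenate(([0.0], np.cumsum(values_old * widths)))
    cum_new = PchipInterpolator(edges_old, cum)(edges_new)
    return np.diff(cum_new) / np.diff(edges_new)

# Example: resample a coarse 1-D profile onto a finer grid (made-up values).
old_edges = np.linspace(0.0, 10.0, 11)
old_vals = np.array([0, 5, 20, 80, 120, 90, 30, 10, 2, 0], float)
new_edges = np.linspace(0.0, 10.0, 41)
new_vals = conservative_resample(old_edges, old_vals, new_edges)
print(new_vals.sum() * np.diff(new_edges)[0], old_vals.sum() * 1.0)  # equal totals
```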
Abstract:
Despite the widespread use of species-area relationships (SARs), dispute remains over the most representative SAR model. Using data on small-scale SARs of Estonian dry grassland communities, we address three questions: (1) Which model describes these SARs best when known artifacts are excluded? (2) How do deviating sampling procedures (marginal instead of central position of the smaller plots in relation to the largest plot; single values instead of average values; randomly located subplots instead of nested subplots) influence the properties of the SARs? (3) Are those effects likely to bias the selection of the best model? Our general dataset consisted of 16 series of nested plots (1 cm² to 100 m², any-part system), each of which comprised five series of subplots located in the four corners and the centre of the 100-m² plot. Data for the three pairs of compared sampling designs were generated from this dataset by subsampling. Five function types (power, quadratic power, logarithmic, Michaelis-Menten, Lomolino) were fitted with non-linear regression. In some of the communities, we found extremely high species densities (including bryophytes and lichens), namely up to eight species in 1 cm² and up to 140 species in 100 m², which appear to be the highest documented values on these scales. For SARs constructed from nested-plot average-value data, the regular power function generally was the best model, closely followed by the quadratic power function, while the logarithmic and Michaelis-Menten functions performed poorly throughout. The relative fit of the latter two models increased significantly relative to the respective best model when the single-value or random-sampling method was applied; even so, the power function normally remained far superior. These results confirm the hypothesis that both single-value and random-sampling approaches cause artifacts by increasing stochasticity in the data, which can lead to the selection of inappropriate models.
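As an illustration of the model fitting described above, the sketch below fits the power function S = c·A^z by non-linear regression; the species counts are invented for the example, and the other four function types would be fitted and compared analogously.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_sar(area, c, z):
    """Power-function SAR: S = c * A**z."""
    return c * area ** z

# Illustrative nested-plot data (areas in m^2; species counts are made up):
areas   = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])
species = np.array([3,    6,    11,   20,   38,  72,   140])

params, _ = curve_fit(power_sar, areas, species, p0=(30.0, 0.25))
c_hat, z_hat = params
pred = power_sar(areas, c_hat, z_hat)
# AIC up to an additive constant, with k = 2 fitted parameters:
aic = len(areas) * np.log(np.mean((species - pred) ** 2)) + 2 * 2
print(f"c = {c_hat:.1f}, z = {z_hat:.3f}, AIC = {aic:.1f}")
```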