960 results for Calibration curves
Abstract:
Person-to-stock order picking is highly flexible and requires minimal investment costs in comparison to automated picking solutions. For these reasons, traditional picking is widespread in distribution and production logistics. Due to its typically large proportion of manual activities, picking causes the highest operative personnel costs of all intralogistics processes. The required personnel capacity in picking varies in the short and medium term due to fluctuations in capacity requirements. These dynamics are often balanced by employing a minimal permanent staff and using seasonal help when needed. The resulting high personnel fluctuation necessitates the frequent training of new pickers, which, in combination with increasingly complex work contents, highlights the importance of learning processes in picking. In industrial settings, learning is often quantified based on diminishing processing time and cost requirements with increasing experience. The best-known industrial learning curve models include those from Wright, de Jong, Baloff and Crossman, which are typically applied to the learning effects of an entire work crew rather than of individuals. These models have been validated in largely static work environments with homogeneous work contents. Little is known about learning effects in picking systems, where work contents are heterogeneous and individual work strategies vary among employees. A mix of temporary and steady employees with varying degrees of experience necessitates the observation of individual learning curves. In this paper, the individual picking performance development of temporary employees is analyzed and compared to that of steady employees in the same working environment.
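Wright's model, named above, expresses the cumulative-average processing time as a power law of cumulative output. A minimal sketch, with an invented first-pick time and an assumed 80 % learning rate (illustrative values, not figures from the study):

```python
import math

def wright_avg_time(t_first, n_units, learning_rate):
    """Wright's learning curve: cumulative-average time per unit after
    n_units, where each doubling of cumulative output multiplies the
    average time by learning_rate (e.g. 0.8 for an 80 % curve)."""
    exponent = math.log(learning_rate) / math.log(2.0)  # negative for rates < 1
    return t_first * n_units ** exponent

# Illustrative only: 100 s for the first pick on an 80 % curve means the
# cumulative-average time falls to 80 s after 2 picks and 64 s after 4.
avg_after_4 = wright_avg_time(100.0, 4, 0.8)
```

Fitting `t_first` and `learning_rate` per employee, rather than per crew, is the individual-curve perspective the abstract argues for.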
Abstract:
High-resolution, well-calibrated records of lake sediments are critically important for quantitative climate reconstructions, but they remain a methodological and analytical challenge. While several comprehensive paleotemperature reconstructions have been developed across Europe, only a few quantitative high-resolution studies exist for precipitation. Here we present a calibration and verification study of lithoclastic sediment proxies from proglacial Lake Oeschinen (46°30′N, 7°44′E, 1,580 m a.s.l., north-west Swiss Alps) that are sensitive to rainfall for the period AD 1901–2008. We collected two sediment cores, one in 2007 and another in 2011. The sediments are characterized by two facies: (A) mm-laminated clastic varves and (B) turbidites. The annual character of the laminae couplets was confirmed by radiometric dating (210Pb, 137Cs) and independent flood-layer chronomarkers. Individual varves consist of a dark sand-size spring–summer layer enriched in siliciclastic minerals and a lighter clay-size calcite-rich winter layer. Three subtypes of varves are distinguished: Type I with a 1–1.5 mm fining-upward sequence; Type II with a distinct fine-sand base up to 3 mm thick; and Type III containing multiple internal microlaminae caused by individual summer rainstorm deposits. Delta-fan surface samples and sediment trap data fingerprint different sediment source areas and transport processes in the watershed and confirm the instant response of sediment flux to rainfall and erosion. Based on a highly accurate, precise and reproducible chronology, we demonstrate that sediment accumulation (varve thickness) is a quantitative predictor of cumulative boreal alpine spring (May–June) and spring/summer (May–August) rainfall (r_MJ = 0.71, r_MJJA = 0.60, p < 0.01).
Bootstrap-based verification of the calibration model yields root mean squared errors of prediction (RMSEP_MJ = 32.7 mm, RMSEP_MJJA = 57.8 mm) on the order of 10–13 % of the mean MJ and MJJA cumulative precipitation, respectively. These results highlight the potential of the Lake Oeschinen sediments for high-resolution reconstructions of past rainfall conditions in the northern Swiss Alps, central and eastern France and south-west Germany.
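A bootstrap verification of a linear calibration can be sketched as follows; the ordinary-least-squares calibration line and the out-of-bag error scheme are assumptions for illustration, not necessarily the authors' exact procedure:

```python
import random
import statistics

def fit_line(x, y):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def bootstrap_rmsep(x, y, n_boot=200, seed=42):
    """Root mean squared error of prediction: refit the calibration on
    bootstrap resamples and score each fit on the cases it left out."""
    rng = random.Random(seed)
    n = len(x)
    sq_errors = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        out_of_bag = set(range(n)) - set(idx)
        if not out_of_bag:
            continue
        a, b = fit_line([x[i] for i in idx], [y[i] for i in idx])
        sq_errors.extend((y[i] - (a + b * x[i])) ** 2 for i in out_of_bag)
    return statistics.mean(sq_errors) ** 0.5
```

With varve thickness as the predictor and cumulative May–June rainfall as the response, the returned value plays the role of RMSEP_MJ.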
Abstract:
Water-conducting faults and fractures were studied in the granite-hosted Äspö Hard Rock Laboratory (SE Sweden). On a scale of decametres and larger, steeply dipping faults dominate and contain a variety of different fault rocks (mylonites, cataclasites, fault gouges). On a smaller scale, somewhat less regular fracture patterns were found. Conceptual models of the fault and fracture geometries and of the properties of rock types adjacent to fractures were derived and used as input for the modelling of in situ dipole tracer tests that were conducted in the framework of the Tracer Retention Understanding Experiment (TRUE-1) on a scale of metres. After the identification of all relevant transport and retardation processes, blind predictions of the breakthroughs of conservative to moderately sorbing tracers were calculated and then compared with the experimental data. This paper provides the geological basis and model calibration, while the predictive and inverse modelling work is the topic of the companion paper [J. Contam. Hydrol. 61 (2003) 175]. The TRUE-1 experimental volume is highly fractured and contains the same types of fault rocks and alterations as on the decametric scale. The experimental flow field was modelled on the basis of a 2D-streamtube formalism with an underlying homogeneous and isotropic transmissivity field. Tracer transport was modelled using the dual porosity medium approach, which is linked to the flow model by the flow porosity. Given the substantial pumping rates in the extraction borehole, the transport domain has a maximum width of only a few centimetres. It is concluded that both the uncertainty with regard to the length of individual fractures and the detailed geometry of the network along the flowpath between injection and extraction boreholes are not critical, because flow is largely one-dimensional, whether through a single fracture or a network.
Process identification and model calibration were based on a single uranine breakthrough (test PDT3), which clearly showed that matrix diffusion had to be included in the model even over the short experimental time scales, as evidenced by the characteristic shape of the trailing edge of the breakthrough curve. Using the geological information and therefore considering limited matrix diffusion into a thin fault gouge horizon resulted in a good fit to the experiment. On the other hand, fresh granite was found not to interact noticeably with the tracers over the time scales of the experiments. While fracture-filling gouge materials are very efficient in retarding tracers over short periods of time (hours to days), their volume is very small and, as time progresses, retardation will be dominated by altered wall rock and, finally, by fresh granite. In such rocks, both the porosity (and therefore the effective diffusion coefficient) and the sorption Kds are more than one order of magnitude smaller than in fault gouge, indicating that long-term retardation is expected to occur but to be less pronounced.
Abstract:
Based on the results from detailed structural and petrological characterisation and on up-scaled laboratory values for sorption and diffusion, blind predictions were made for the STT1 dipole tracer test performed in the Swedish Äspö Hard Rock Laboratory. The tracers used were nonsorbing, such as uranine and tritiated water; weakly sorbing 22Na+, 85Sr2+ and 47Ca2+; and more strongly sorbing 86Rb+, 133Ba2+ and 137Cs+. Our model consists of two parts: (1) a flow part based on a 2D-streamtube formalism, accounting for the natural background flow field, with an underlying homogeneous and isotropic transmissivity field; and (2) a transport part in terms of the dual porosity medium approach, which is linked to the flow part by the flow porosity. The calibration of the model was done using the data from a single uranine breakthrough (PDT3). The study clearly showed that matrix diffusion into a highly porous material, fault gouge, had to be included in our model, as evidenced by the characteristic shape of the breakthrough curve and in line with geological observations. After the disclosure of the measurements, it turned out that, in spite of the simplicity of our model, the predictions for the nonsorbing and weakly sorbing tracers were fairly good. The blind prediction for the more strongly sorbing tracers was in general less accurate. The good predictions are deemed to result from the choice of a model structure strongly based on geological observation. The breakthrough curves were inversely modelled to determine in situ values for the transport parameters and to draw conclusions about the applied model structure. For good fits, only one additional fracture family in contact with cataclasite had to be taken into account, and no new transport mechanisms had to be invoked. The in situ values of the effective diffusion coefficient for fault gouge are a factor of 2–15 larger than the laboratory data.
For cataclasite, the in situ values are comparable to the laboratory data. The extracted Kd values for the weakly sorbing tracers are larger than the Swedish laboratory data by a factor of 25–60, but agree within a factor of 3–5 for the more strongly sorbing nuclides. The reason for the inconsistency concerning the Kds is the use of fresh granite in the laboratory studies, whereas the tracers in the field experiments interact only with fracture fault gouge and, to a lesser extent, with cataclasite, both of which are mineralogically very different (e.g. clay-bearing) from the intact wall rock.
Abstract:
PURPOSE Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS To solve the problem of image normalization, the theory of Transconvolution was mathematically established, together with new methods to handle the point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows one to determine a Transconvolution function that converts one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when certain boundary conditions are met, such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of the point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating 68Ge/68Ga-filled spheres was developed.
To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, the simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function of the virtual PET. The Hann window's apodization properties suppress high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two different PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS By matching different tomographs to a virtual standardized imaging system, Transconvolution offers a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
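The core Fourier-domain step of Transconvolution (divide out the source system's modulation transfer function, multiply by the target's) can be sketched as follows; the Gaussian PSFs and the simple eps threshold are stand-ins for the paper's measured PSFs and Hann-window band limit:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Isotropic Gaussian point spread function, normalized to unit sum."""
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def blur(image, psf):
    """Circular convolution of an image with a centered PSF via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

def transconvolve(image, psf_src, psf_dst, eps=1e-6):
    """Transform an image acquired with psf_src into the image a system
    with psf_dst would have produced: divide out the source MTF and
    multiply by the target MTF in Fourier space.  The eps threshold
    suppresses frequencies where the source MTF is near zero, the role
    the Hann window plays in the original method."""
    f_img = np.fft.fft2(image)
    f_src = np.fft.fft2(np.fft.ifftshift(psf_src))
    f_dst = np.fft.fft2(np.fft.ifftshift(psf_dst))
    keep = np.abs(f_src) > eps
    f_out = np.where(keep, f_img * f_dst / np.where(keep, f_src, 1.0), 0.0)
    return np.real(np.fft.ifft2(f_out))
```

Matching a sharper system onto a blurrier virtual one, as in the paper, keeps the MTF ratio bounded; the band limit matters wherever the source MTF approaches zero.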
Abstract:
We measured the concentrations and isotopic compositions of He, Ne, and Ar in bulk samples and metal separates of 14 ordinary chondrite falls with long exposure ages and high metamorphic grades. In addition, we measured concentrations of the cosmogenic radionuclides 10Be, 26Al, and 36Cl in metal separates and in the nonmagnetic fractions of the selected meteorites. Using cosmogenic 36Cl and 36Ar measured in the metal separates, we determined 36Cl-36Ar cosmic-ray exposure (CRE) ages, which are shielding-independent and therefore particularly reliable. Using the cosmogenic noble gases and radionuclides, we are able to decipher the CRE history for the studied objects. Based on the correlation 3He/21Ne versus 22Ne/21Ne, we demonstrate that, among the meteorites studied, only one suffered significant diffusive losses (about 35%). The data confirm that the linear correlation 3He/21Ne versus 22Ne/21Ne breaks down at high shielding. Using 36Cl-36Ar exposure ages and measured noble gas concentrations, we determine 21Ne and 38Ar production rates as a function of 22Ne/21Ne. The new data agree with recent model calculations for the relationship between 21Ne and 38Ar production rates and the 22Ne/21Ne ratio, which does not always provide unique shielding information. Based on the model calculations, we determine a new correlation line for 21Ne and 38Ar production rates as a function of the shielding indicator 22Ne/21Ne for H, L, and LL chondrites with preatmospheric radii less than about 65 cm. We also calculated the 10Be/21Ne and 26Al/21Ne production rate ratios for the investigated samples, which show good agreement with recent model calculations.
Abstract:
A non-parametric method was developed and tested to compare the partial areas under two correlated Receiver Operating Characteristic (ROC) curves. Based on the theory of generalized U-statistics, mathematical formulas were derived for computing the ROC area and the variance and covariance between the portions of two ROC curves. A practical SAS application was also developed to facilitate the calculations. The accuracy of the non-parametric method was evaluated by comparing it to other methods. Applying our method to the data from a published ROC analysis of CT images yielded results very close to the published ones. A hypothetical example was used to demonstrate the effect of two crossed ROC curves: the two ROC areas are the same, yet each portion of the area between the two curves was found to be significantly different by the partial ROC curve analysis. For computation of ROC curves with large-scale data, such as a logistic regression model, we applied our method to a breast cancer study with Medicare claims data; it yielded the same ROC area as the SAS LOGISTIC procedure. Our method also provides an alternative to the global summary of ROC area comparison by directly comparing the true-positive rates for two regression models and by determining the range of false-positive values where the models differ.
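The ROC area itself, viewed as a two-sample U-statistic, can be sketched as follows; the paper's variance, covariance and partial-area formulas are not reproduced here:

```python
def roc_area(pos_scores, neg_scores):
    """ROC area as a Mann-Whitney U-statistic: the probability that a
    randomly chosen positive case scores above a randomly chosen
    negative case, counting ties as one half."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfect separation gives area 1.0; identical score lists give 0.5.
auc = roc_area([0.9, 0.8, 0.7], [0.1, 0.2, 0.9])
```

Two models fitted to the same cases produce correlated estimates of this quantity, which is why the covariance terms derived in the paper are needed for a valid comparison.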
Abstract:
A detailed microdosimetric characterization of the M. D. Anderson 42 MeV (p,Be) fast neutron beam was performed using microdosimetric techniques and a 1/2-inch-diameter Rossi proportional counter. These measurements were performed at 5, 15, and 30 cm depths on the central axis, 3 cm inside, and 3 cm outside the field edge for 10 × 10 and 20 × 20 cm field sizes. Spectra were also measured at 5 and 15 cm depth on the central axis for a 6 × 6 cm field size. Continuous slowing down approximation calculations were performed to model the nuclear processes that occur in the fast neutron beam. Irradiation of the CR-39 was performed using a tandem electrostatic accelerator with protons of 10, 6, and 3 MeV and alpha particles of 15, 10, and 7 MeV incident energy on target, at angles of incidence from 0 to 85 degrees. The critical angle, as well as the track etch rate and normal-incidence diameter versus linear energy transfer (LET), were obtained from these measurements. The bulk etch rate was also calculated from these measurements. The dose response of the material was studied, and the angular distribution of charged particles created by the fast neutron beam was measured with CR-39. The efficiency of CR-39 was calculated relative to that of the Rossi chamber, and an algorithm was devised for deriving LET spectra from the major and minor axis dimensions of the observed tracks. The CR-39 was irradiated in the same positions as the Rossi chamber, and the derived spectra were compared directly.
Abstract:
The reliability of millimeter and sub-millimeter wave radiometer measurements depends on the accuracy of the loads they employ as calibration targets. In the recent past, on-board calibration loads have been developed for a variety of satellite remote sensing instruments. Unfortunately, some of these have suffered from calibration inaccuracies whose root cause was poor thermal performance of the calibration target. Stringent performance parameters of the calibration target, such as low reflectivity, high temperature uniformity, low mass and low power consumption, combined with tight volume requirements, remain a challenge for the space instrument developer. In this paper we present a novel multi-layer absorber concept for a calibration load that offers an excellent compromise between very good radiometric performance and temperature uniformity on the one hand, and the mass and volume constraints imposed on space-borne calibration targets on the other.
Abstract:
X-ray imaging is one of the most commonly used medical imaging modalities. Although X-ray radiographs provide important clinical information for diagnosis, planning and post-operative follow-up, their challenging interpretation, due to the 2D projection characteristics and the unknown magnification factor, constrains the full benefit of X-ray imaging. To overcome these drawbacks, we proposed an easy-to-use X-ray calibration object and developed an optimization method to robustly find correspondences between the 3D fiducials of the calibration object and their 2D projections. In this work we present all the details of this concept. Moreover, we demonstrate the potential of using such a method to precisely extract information from calibrated X-ray radiographs in two different orthopedic applications: post-operative measurement of acetabular cup implant orientation and 3D vertebral body displacement measurement during preoperative traction tests. In the first application, we achieved a clinically acceptable accuracy of below 1° for both anteversion and inclination angles, while in the second application an average displacement of 8.06 ± 3.71 mm was measured. The results of both applications indicate the importance of using X-ray calibration in the clinical routine.
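As context for the unknown magnification factor mentioned above, a minimal idealized cone-beam projection can be sketched as follows; the point-source model and its parameters are illustrative and are not the authors' calibration object or optimization method:

```python
import numpy as np

def project_fiducials(points_3d, source_to_detector):
    """Idealized cone-beam projection: X-ray point source at the origin,
    detector plane at z = source_to_detector.  A fiducial at depth z is
    magnified by source_to_detector / z, so its apparent size on the
    radiograph depends on a depth that a single 2D image cannot reveal."""
    pts = np.asarray(points_3d, dtype=float)
    magnification = source_to_detector / pts[:, 2]
    return pts[:, :2] * magnification[:, None]

# A fiducial halfway between source and detector is magnified by 2.
uv = project_fiducials([[10.0, 5.0, 500.0]], source_to_detector=1000.0)
```

Calibration objects with fiducials at known 3D positions let the projection geometry, and hence the magnification, be recovered from the radiograph.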
Abstract:
We propose notions of calibration for probabilistic forecasts of general multivariate quantities. Probabilistic copula calibration is a natural analogue of probabilistic calibration in the univariate setting. It can be assessed empirically by checking for the uniformity of the copula probability integral transform (CopPIT), which is invariant under coordinate permutations and coordinatewise strictly monotone transformations of the predictive distribution and the outcome. The CopPIT histogram can be interpreted as a generalization and variant of the multivariate rank histogram, which has been used to check the calibration of ensemble forecasts. Climatological copula calibration is an analogue of marginal calibration in the univariate setting. Methods and tools are illustrated in a simulation study and applied to compare raw numerical model and statistically postprocessed ensemble forecasts of bivariate wind vectors.
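Probabilistic copula calibration reduces, in one dimension, to the classical probability integral transform (PIT), and checking CopPIT uniformity follows the same pattern. A univariate sketch, where the Gaussian forecast and the simulated outcomes are illustrative assumptions:

```python
import random
import statistics
from math import erf, sqrt

def normal_cdf(y, mu=0.0, sigma=1.0):
    """CDF of a Normal(mu, sigma) predictive distribution."""
    return 0.5 * (1.0 + erf((y - mu) / (sigma * sqrt(2.0))))

def pit(forecast_cdf, observations):
    """Probability integral transform: a probabilistically calibrated
    forecast yields PIT values that are uniform on [0, 1], which is
    what the (Cop)PIT histogram checks visually."""
    return [forecast_cdf(y) for y in observations]

# Outcomes drawn from the forecast distribution itself -> calibrated.
rng = random.Random(0)
obs = [rng.gauss(0.0, 1.0) for _ in range(2000)]
u = pit(normal_cdf, obs)
pit_mean = statistics.mean(u)        # ~0.5 for a uniform PIT
pit_var = statistics.pvariance(u)    # ~1/12 for a uniform PIT
```

A histogram of u that deviates from uniformity flags miscalibration; the CopPIT applies the analogous check to the copula of a multivariate forecast.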