158 results for Calibration measurements
Abstract:
A sequence of moments obtained from statistical trials encodes a classical probability distribution. However, it is well known that an incompatible set of moments arises in the quantum scenario when correlation outcomes associated with measurements on spatially separated entangled states are considered. This feature, viz., the incompatibility of moments with a joint probability distribution, is reflected in the violation of Bell inequalities. Here, we focus on sequential measurements on a single quantum system and investigate whether moments and joint probabilities are compatible with each other. By considering sequential measurements of a dichotomic dynamical observable at three different times, we explicitly demonstrate that the moments and the probabilities are inconsistent with each other. Experimental results obtained with a nuclear magnetic resonance system are reported to corroborate this theoretical observation, viz., the incompatibility of the three-time joint probabilities with those extracted from the moment sequence when sequential measurements on a single-qubit system are considered.
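As a concrete sketch of the consistency test described above (a minimal illustration, not the paper's data or code): for three +/-1-valued outcomes, the moments uniquely determine a candidate joint distribution through the Fourier expansion on {-1,+1}^3, and compatibility requires all eight reconstructed probabilities to be non-negative.

    import itertools

    def joint_from_moments(m1, m2, m3):
        """Candidate three-time joint distribution of a dichotomic (+/-1)
        observable from its moments: m1[i] = <Q_i>, m2[(i, j)] = <Q_i Q_j>,
        m3 = <Q_1 Q_2 Q_3>."""
        probs = {}
        for q in itertools.product([-1, 1], repeat=3):
            p = 1.0
            p += sum(q[i] * m1[i] for i in range(3))
            p += sum(q[i] * q[j] * m2[(i, j)] for (i, j) in m2)
            p += q[0] * q[1] * q[2] * m3
            probs[q] = p / 8.0
        return probs

    # Illustrative (hypothetical) moments: a negative entry means the moment
    # sequence admits no genuine three-time joint probability distribution.
    probs = joint_from_moments([0.0, 0.0, 0.0],
                               {(0, 1): 0.9, (1, 2): 0.9, (0, 2): -0.9}, 0.0)
    print(min(probs.values()))  # negative here => moments and probabilities clash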
Abstract:
Wind stress is the most important ocean forcing for driving tropical surface currents. Stress can be estimated from scatterometer-reported wind measurements at 10 m that have been extrapolated to the surface, assuming a neutrally stable atmosphere and no surface current. Scatterometer calibration is designed to account for the assumption of neutral stability; however, the assumptions of a particular sea state and of negligible current often introduce errors into wind stress estimates. Since the fundamental scatterometer measurement is the surface radar backscatter (sigma-0), which is related to surface roughness and, thus, stress, we develop a method to estimate wind stress directly from the scatterometer measurements of sigma-0 and their associated azimuth and incidence angles using a neural network approach. We compare the results with in situ estimates and observe that the wind stress estimates from this approach are more accurate than those obtained from the conventional estimates using 10-m-height wind measurements.
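A minimal sketch of the regression setup described above, on synthetic stand-in data; the feature encoding, network size, and the synthetic stress relation are illustrative choices, not the authors' configuration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 2000
    sigma0_db = rng.uniform(-30.0, 0.0, n)   # radar backscatter (dB)
    azimuth = rng.uniform(0.0, 360.0, n)     # antenna azimuth angle (deg)
    incidence = rng.uniform(20.0, 55.0, n)   # incidence angle (deg)
    # Synthetic stand-in for collocated in-situ wind stress (N m^-2); real
    # training targets would come from buoy or ship estimates.
    stress = 0.05 * 10 ** (sigma0_db / 20.0) * (1 + 0.1 * np.cos(np.deg2rad(azimuth)))

    # Encode azimuth periodically so that 0 deg and 360 deg coincide.
    X = np.column_stack([sigma0_db,
                         np.cos(np.deg2rad(azimuth)),
                         np.sin(np.deg2rad(azimuth)),
                         incidence])
    Xs = StandardScaler().fit_transform(X)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(Xs, stress)
    print(net.score(Xs, stress))  # in practice, validate on held-out collocations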
Abstract:
SARAS is a correlation spectrometer purpose-designed for precision measurements of the cosmic radio background and of faint features in the sky spectrum at long wavelengths that arise from redshifted 21-cm emission from gas in the reionization epoch. SARAS operates in the octave band 87.5-175 MHz. We present herein the system design, arguing for a complex correlation spectrometer concept. The SARAS design concept provides a differential measurement between the antenna temperature and that of an internal reference termination, with measurements in switched system states allowing for cancellation of additive contaminants from a large part of the signal flow path, including the digital spectrometer. A switched noise injection scheme provides absolute spectral calibration. Additionally, we argue for an electrically small, frequency-independent antenna over an absorber ground. Various critical design features that aid in the avoidance of systematics and provide calibration products for the parametrization of other, unavoidable systematics are described, and the rationale is discussed. The signal flow and processing are analyzed, and the response to the noise temperatures of the antenna, reference termination, and amplifiers is computed. Multi-path propagation arising from internal reflections is considered in the analysis, which includes a harmonic series of internal reflections. We argue that the SARAS design concept is advantageous for precision measurement of the absolute cosmic radio background spectrum; the design features and analysis methods presented here are therefore expected to serve as a basis for implementations tailored to measurements of a multiplicity of features in the background sky at long wavelengths, which may arise from events in the dark ages and the subsequent reionization era.
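A schematic of the switched differential measurement, under simplifying assumptions (ideal switching, a common gain g and receiver noise temperature T_rx in both states; the notation is illustrative, not the paper's):

\[
P_1 = g\,(T_\mathrm{A} + T_\mathrm{rx}), \qquad
P_2 = g\,(T_\mathrm{ref} + T_\mathrm{rx}), \qquad
P_1 - P_2 = g\,(T_\mathrm{A} - T_\mathrm{ref}),
\]

so additive receiver contributions cancel in the difference, and a further switched state with a calibrated noise step fixes the gain g, giving the absolute spectral calibration mentioned above.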
Abstract:
Operator perception influences ultrasound image acquisition and processing. Lower costs are attracting new users to medical ultrasound. Anticipating an increase in this trend, we conducted a study to quantify the variability in ultrasonic measurements made by novice users and to identify methods to reduce it. We designed a protocol with four presets and trained four new users to scan and manually measure the head circumference of a fetal phantom with an ultrasound scanner. In the first phase, the users followed this protocol in seven distinct sessions. They then received feedback on the quality of the scans from an expert. In the second phase, two of the users repeated the entire protocol aided by visual cues provided to them during scanning. We performed off-line measurements on all the images using a fully automated algorithm capable of measuring the head circumference from fetal phantom images. The ground truth (198.1 +/- 1.6 mm) was based on sixteen scans and measurements made by an expert. Our analysis shows that: (1) the inter-observer variability of manual measurements was 5.5 mm, whereas the inter-observer variability of automated measurements was only 0.6 mm in the first phase; (2) the consistency of image appearance improved and mean manual measurements were 4-5 mm closer to the ground truth in the second phase; and (3) automated measurements were more precise, more accurate, and less sensitive to the different presets than manual measurements in both phases. Our results show that visual aids and automation can bring more reproducibility to ultrasonic measurements made by new users.
Abstract:
We study consistency properties of surrogate loss functions for general multiclass classification problems, defined by a general loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting. We then introduce the notion of classification calibration dimension of a multiclass loss matrix, which measures the smallest 'size' of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, as one application, we provide a different route from the recent result of Duchi et al. (2010) for analyzing the difficulty of designing 'low-dimensional' convex surrogates that are consistent with respect to pairwise subset ranking losses. We anticipate that the classification calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
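For concreteness, the calibration condition can be stated as follows (a sketch in commonly used notation, assumed here to match the paper's definition): for a loss matrix L in R_+^{n x k}, a surrogate psi : [n] x R^d -> R_+ with decoding map pred : R^d -> [k] is classification calibrated with respect to L if, for every class-probability vector p in the simplex Delta_n,

\[
\inf_{t \,:\, \mathrm{pred}(t) \notin \arg\min_{r} \sum_{y} p_y L_{y r}} \; \sum_{y} p_y\, \psi(y, t)
\;>\;
\inf_{t} \sum_{y} p_y\, \psi(y, t),
\]

i.e., driving the expected surrogate toward its infimum forces predictions that are optimal for L.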
Abstract:
Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity, etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e., the variability of the thickness of conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith), with the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated from electrical resistivity tomography (ERT) profiles performed along a few cross sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thicknesses were conditioned on the available data from the geophysics. In order to simulate steady-state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect in reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology for improving the simulation of groundwater flow in complex hard-rock aquifers.
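A minimal sketch of the conditioning step described above, for a 1-D Gaussian random field of layer thickness along a transect, conditioned on thickness picks at ERT profile locations by simple-kriging residual substitution; the grid, covariance model, and data values are illustrative assumptions, not the study's.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1000.0, 101)          # positions along a transect (m)
    x_obs = np.array([100.0, 450.0, 800.0])    # ERT-derived pick locations (m)
    z_obs = np.array([12.0, 18.0, 9.0])        # observed regolith thickness (m)

    mean, sill, corr_len = 14.0, 9.0, 250.0    # illustrative field statistics
    cov = lambda h: sill * np.exp(-np.abs(h) / corr_len)  # exponential model

    C = cov(x[:, None] - x[None, :])           # grid-to-grid covariances
    C_gd = cov(x[:, None] - x_obs[None, :])    # grid-to-data covariances
    C_dd = cov(x_obs[:, None] - x_obs[None, :])
    w = np.linalg.solve(C_dd, C_gd.T).T        # simple-kriging weights

    # Unconditional realization, then condition by kriging the mismatch
    # between observed and simulated values at the data locations.
    L = np.linalg.cholesky(C + 1e-8 * np.eye(x.size))
    z_uncond = mean + L @ rng.standard_normal(x.size)
    z_cond = z_uncond + w @ (z_obs - np.interp(x_obs, x, z_uncond))
    print(z_cond[[10, 45, 80]])                # honors the data (up to jitter)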
Abstract:
The occurrence of musth, a period of elevated androgen levels and heightened sexual activity, has been well documented in the male Asian elephant (Elephas maximus). However, the relationship between androgen-dependent musth and adrenocortical function in this species is unclear. The current study is the first assessment of testicular and adrenocortical function in free-ranging male Asian elephants, carried out by measuring levels of testosterone (androgen) and cortisol (glucocorticoid, a physiological indicator of stress) metabolites in faeces. During musth, males expectedly showed a significant elevation in faecal testosterone metabolite levels. Interestingly, glucocorticoid metabolite concentrations remained unchanged between musth and non-musth periods. This observation is contrary to those made in wild and captive African elephant bulls and in captive Asian bull elephants. Our results show that musth may not necessarily represent a stressful condition in free-ranging male Asian elephants.
Missing (in-situ) snow cover data hampers climate change and runoff studies in the Greater Himalayas
Abstract:
The Himalayas presently hold the largest ice masses outside the polar regions and thus (temporarily) store important freshwater resources. In contrast to the attention devoted to glaciers, the role of runoff from snow cover has received comparably little attention in the past, although (i) its contribution is thought to be at least as important as, or even more important than, that of ice melt in many Himalayan catchments, and (ii) climate change is expected to have widespread and significant consequences for snowmelt runoff. Here, we show that change assessment of snowmelt runoff and its timing is not as straightforward as often postulated, mainly because larger partial pressures of H2O, CO2, CH4, and other greenhouse gases might increase the net long-wave input for snowmelt quite significantly in a future atmosphere. In addition, changes in the short-wave energy balance, such as pollution of the snow cover by black carbon, or changes in the sensible or latent heat contribution to snowmelt, are likely to alter future snowmelt and runoff characteristics as well. For the assessment of snow cover extent and depletion, but also for its monitoring over the extremely large areas of the Himalayas, remote sensing has been used in the past and is likely to become even more important in the future. However, for the calibration and validation of remotely sensed data, and even more so in light of possible changes in the snow-cover energy balance, we strongly call for more in-situ measurements across the Himalayas, in particular daily data on new snow and snow cover water equivalent, and the respective energy balance components. Moreover, the data should be made accessible to the scientific community, so that the latter can more accurately estimate climate change impacts on Himalayan snow cover and the possible consequences thereof for runoff.
Abstract:
Background: Deviated nasal septum (DNS) is one of the major causes of nasal obstruction. The polyvinylidene fluoride (PVDF) nasal sensor is a new technique developed to assess the nasal obstruction caused by DNS. This study evaluates PVDF nasal sensor measurements in comparison with peak nasal inspiratory flow (PNIF) measurements and a visual analog scale (VAS) of nasal obstruction. Methods: Owing to their piezoelectric property, two PVDF nasal sensors provide output voltage signals corresponding to the right and left nostrils when they are subjected to nasal airflow. The peak-to-peak amplitude of the voltage signal corresponding to nasal airflow was analyzed to assess the nasal obstruction. PVDF nasal sensor and PNIF measurements were performed on 30 healthy subjects and 30 DNS patients. Receiver operating characteristic (ROC) analysis was used to assess the detection of DNS by these two methods. Results: Measurements of the PVDF nasal sensor strongly correlated with findings of PNIF (r = 0.67; p < 0.01) in DNS patients. A significant difference (p < 0.001) was observed between the PVDF nasal sensor measurements and the PNIF measurements of the DNS and control groups. A cutoff between normal and pathological of 0.51 Vp-p for the PVDF nasal sensor and 120 L/min for PNIF was calculated. No significant difference was found between the PVDF nasal sensor and PNIF in terms of sensitivity (89.7% versus 82.6%) or specificity (80.5% versus 78.8%). Conclusion: The results show that PVDF measurements closely agree with PNIF findings. The developed PVDF nasal sensor is an objective method that is simple, inexpensive, fast, and portable for determining DNS in clinical practice.
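A short sketch of how the reported cutoff translates into sensitivity and specificity, on synthetic Vp-p readings (the 0.51 Vp-p threshold is from the abstract; the data and the assumed direction, lower voltage indicating obstruction, are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    controls = rng.normal(0.70, 0.12, 30)      # synthetic Vp-p, healthy subjects
    dns = rng.normal(0.40, 0.12, 30)           # synthetic Vp-p, DNS patients

    cutoff = 0.51                              # Vp-p, from the study
    sensitivity = np.mean(dns < cutoff)        # DNS cases correctly flagged
    specificity = np.mean(controls >= cutoff)  # controls correctly passed
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")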
Abstract:
Chalcogenide glasses are interesting materials for their infrared transmitting properties and photo-induced effects. This paper reports the influence of light on the optical properties of Sb10S40Se50 thin films. The amorphous nature and chemical composition of the deposited film were studied by X-ray diffraction and energy dispersive X-ray analysis (EDAX). The optical constants, i.e., refractive index, extinction coefficient, and optical band gap, as well as the film thickness, are determined from the measured transmission spectra using the Swanepoel method. The dispersion of the refractive index is discussed in terms of the single-oscillator Wemple-DiDomenico model. The dispersion energy parameter was found to be smaller for the laser-irradiated film, which indicates that the laser-irradiated film is more microstructurally disordered than the as-prepared film. It is observed that laser irradiation of the films leads to a decrease in the optical band gap (photo-darkening) and an increase in the refractive index. The decrease in the optical band gap is explained on the basis of a change in the nature of the films due to chemical disorder, and the increase in refractive index may be due to densification of the films, with an improved grain structure resulting from the microstructural disorder in the films. The optical changes are supported by X-ray photoelectron spectroscopy data.
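The single-oscillator dispersion relation referred to above has the standard Wemple-DiDomenico form (with E_0 the oscillator energy, E_d the dispersion energy, and h-nu the photon energy):

\[
n^2(h\nu) - 1 = \frac{E_d\, E_0}{E_0^2 - (h\nu)^2},
\]

so a plot of (n^2 - 1)^{-1} against (h\nu)^2 is linear, with slope -1/(E_d E_0) and intercept E_0/E_d, from which both parameters follow.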
Abstract:
The present article describes a working, or combined, calibration curve for laser-induced breakdown spectroscopic analysis, which is the cumulative result of the calibration curves obtained from neutral and singly ionized atomic emission spectral lines. This working calibration curve reduces the effect of the change in matrix between different zone soils and certified soil samples because it includes the concentrations of both species (neutral and singly ionized) of the element of interest. The limit of detection obtained using the working calibration curve is better than those of its constituent (i.e., individual) calibration curves. The quantitative results obtained using the working calibration curve are in better agreement with the results of inductively coupled plasma-atomic emission spectroscopy than those obtained using its constituent calibration curves.
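One plausible reading of the combined-curve construction, sketched with illustrative numbers (the linear fits, intensities, and concentrations below are hypothetical, not the article's data): fit calibration curves to a neutral and a singly ionized line of the same element, and fit the working curve to their summed intensities.

    import numpy as np

    conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])  # ppm, certified samples
    I_neutral = 1.8 * conc + 40 + np.random.default_rng(0).normal(0, 8, 5)
    I_ion = 0.9 * conc + 15 + np.random.default_rng(1).normal(0, 8, 5)

    fit_n = np.polyfit(conc, I_neutral, 1)             # neutral-line curve
    fit_i = np.polyfit(conc, I_ion, 1)                 # ionic-line curve
    fit_w = np.polyfit(conc, I_neutral + I_ion, 1)     # working (combined) curve

    # Invert the working curve for an unknown's summed line intensity.
    I_unknown = 260.0
    print((I_unknown - fit_w[1]) / fit_w[0])           # estimated concentration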
Abstract:
We develop iterative diffraction tomography algorithms, similar to the distorted Born algorithms, for inverting scattered intensity data. Within the Born approximation, the unknown scattered field is expressed as a multiplicative perturbation to the incident field. With this, the forward equation becomes stable, which helps us compute nearly oscillation-free solutions that have an immediate bearing on the accuracy of the Jacobian computed for use in a deterministic Gauss-Newton (GN) reconstruction. However, since the data are inherently noisy and the sensitivity of the measurements to the refractive index away from the detectors is poor, we report a derivative-free evolutionary stochastic scheme, providing strictly additive updates to bridge the measurement-prediction misfit, to arrive at the refractive index distribution from intensity transport data. The superiority of the stochastic algorithm over the GN scheme in similar settings is demonstrated by the reconstruction of the refractive index profile from simulated and experimentally acquired intensity data.
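For reference, the deterministic GN step that the stochastic scheme is weighed against takes the familiar regularized form (x the refractive-index parameters, F the forward model, J its Jacobian, y the measured intensities, lambda a regularization parameter; a generic statement, not necessarily the paper's exact update):

\[
x_{k+1} = x_k + \left(J_k^{\mathsf{T}} J_k + \lambda I\right)^{-1} J_k^{\mathsf{T}} \bigl(y - F(x_k)\bigr).
\]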
Abstract:
An optimal measurement selection strategy is proposed for near-infrared diffuse optical tomography, based on the incoherence among the rows (corresponding to measurements) of the sensitivity (or weight) matrix. Since incoherence among measurements can be seen as a proxy for the amount of independent information each contributes to the estimation of optical properties, it provides the means to quantify how independent a particular measurement is of its counterparts. The proposed method was compared with the recently established data-resolution matrix-based approach for the optimal choice of independent measurements and was shown, using simulated and experimental gelatin phantom data sets, to be superior, as it does not require an optimal regularization parameter to provide the same information.
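A sketch of one way such row-incoherence selection could be implemented (a greedy rule on a random stand-in for the sensitivity matrix; the seeding, sizes, and greedy criterion are assumptions, not the paper's exact algorithm): repeatedly pick the measurement whose normalized sensitivity row is least coherent with the rows already chosen.

    import numpy as np

    rng = np.random.default_rng(0)
    J = rng.standard_normal((200, 500))  # stand-in sensitivity (weight) matrix
    Jn = J / np.linalg.norm(J, axis=1, keepdims=True)   # unit-norm rows

    n_select = 20
    selected = [int(np.argmax(np.linalg.norm(J, axis=1)))]  # seed: strongest row
    for _ in range(n_select - 1):
        # Maximum coherence of every row with the already-selected set.
        coh = np.abs(Jn @ Jn[selected].T).max(axis=1)
        coh[selected] = np.inf           # never re-select a chosen row
        selected.append(int(np.argmin(coh)))
    print(selected)                      # indices of mutually incoherent measurements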