922 results for Measurement Error Estimation
Abstract:
The Smart Grid needs a large amount of information to be operated, and new information is required every day to improve operation performance. It is also fundamental that the available information be reliable and accurate. The role of metrology is therefore crucial, especially when applied to distribution grid monitoring and electrical asset diagnostics. This dissertation aims at better understanding the sensors and instrumentation employed by power system operators in the above-mentioned applications and at studying new solutions. Concerning the research on measurements applied to electrical asset diagnostics, an innovative drone-based measurement system is proposed for monitoring medium voltage surge arresters; the system is described and its metrological characterization is presented. The research regarding measurements applied to grid monitoring consists of three parts. The first part concerns the metrological characterization of electronic energy meters operating under off-nominal power conditions. Original test procedures have been designed for both frequency and harmonic distortion as influence quantities, aiming at defining realistic scenarios. The second part deals with medium voltage inductive current transformers: an in-depth investigation of their accuracy behavior in the presence of harmonic distortion is carried out by applying realistic current waveforms. The accuracy is evaluated by means of the composite error index and its approximated version. Based on the same test setup, a closed-form expression for estimating the uncertainty of the measured current total harmonic distortion has been experimentally validated. The metrological characterization of a virtual phasor measurement unit is the subject of the third and last part: first, a calibrator was designed and the uncertainty associated with its steady-state reference phasor was evaluated; this calibrator then served as a reference to characterize the phasor measurement unit implemented within a real-time simulator.
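As a hedged illustration of two quantities this abstract relies on, the sketch below computes the total harmonic distortion (THD) of a sampled current and the composite error of a current transformer from their standard definitions. The sampling rate, signal parameters, and the error injected into the secondary current are assumptions for demonstration, not data from the dissertation.

```python
# Minimal sketch: THD of a sampled current and composite error of an
# instrument transformer, from the standard definitions. All signal
# parameters below are illustrative assumptions.
import numpy as np

fs = 10_000          # sampling rate [Hz] (assumed)
f0 = 50              # fundamental frequency [Hz]
t = np.arange(0, 0.2, 1 / fs)

# Reference (primary) current: fundamental plus a 5th-harmonic component.
i_p = np.sqrt(2) * (10 * np.sin(2 * np.pi * f0 * t)
                    + 0.8 * np.sin(2 * np.pi * 5 * f0 * t))
# Secondary current scaled by the rated ratio, with a small assumed error.
k_r = 1.0            # rated transformation ratio (assumed 1 for simplicity)
i_s = (i_p + 0.05 * np.sin(2 * np.pi * 5 * f0 * t + 0.1)) / k_r

def thd(x, fs, f0, n_harm=40):
    """THD from the DFT: harmonic RMS over fundamental RMS."""
    spectrum = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    def mag(f):
        return spectrum[np.argmin(np.abs(freqs - f))]
    harms = np.array([mag(h * f0) for h in range(2, n_harm + 1)])
    return np.sqrt(np.sum(harms ** 2)) / mag(f0)

def composite_error(i_s, i_p, k_r):
    """Composite error: RMS of (k_r*i_s - i_p) relative to the primary RMS."""
    return np.sqrt(np.mean((k_r * i_s - i_p) ** 2)) / np.sqrt(np.mean(i_p ** 2))

print(f"THD of primary current: {100 * thd(i_p, fs, f0):.2f} %")
print(f"Composite error:        {100 * composite_error(i_s, i_p, k_r):.3f} %")
```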
Abstract:
In the agri-food sector, measurement and monitoring activities contribute to high-quality end products. For food of plant origin in particular, several product quality attributes can be monitored. Among non-destructive measurement techniques, a large variety of optical techniques are available, including hyperspectral imaging (HSI) in the visible/near-infrared (Vis/NIR) range, which, owing to its capacity to integrate image analysis and spectroscopy, has proved particularly useful in agronomy and food science. Many published studies on HSI systems were carried out under controlled laboratory conditions; in contrast, few describe the application of HSI technology directly in the field, in particular for high-resolution proximal measurements carried out on the ground. Against this background, the activities of the present PhD project aimed at exploring and deepening knowledge of the application of optical techniques for estimating quality attributes of agri-food plant products. First, laboratory trials on apricots and kiwis are reported, in which soluble solids content (SSC) and flesh firmness (FF) were estimated through HSI; subsequently, FF was estimated on kiwis using a NIR-sensitive device; finally, the procyanidin content of red wine was estimated with a device based on the pulsed spectral sensitive photometry technique. In the second part, trials were carried out directly in the field to assess the ripeness of red wine grapes by estimating SSC through HSI, and a method for the automatic selection of regions of interest in hyperspectral images of the vineyard was developed. These activities have revealed the potential of optical techniques for sorting-line applications; moreover, the application of HSI directly in the field proved particularly interesting, suggesting further investigations to address the many environmental variables that may affect the results of the analyses.
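For readers unfamiliar with how spectral data are turned into quality-attribute estimates, the sketch below shows a chemometric pipeline of the kind commonly paired with Vis/NIR hyperspectral measurements: partial least squares (PLS) regression predicting SSC from per-sample spectra. The synthetic spectra, band range, and the 960 nm absorption feature are illustrative assumptions, not the thesis dataset or model.

```python
# Minimal sketch: PLS regression of SSC on Vis/NIR spectra.
# Data are synthetic placeholders, not real hyperspectral measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 200                 # fruits x spectral bands (assumed)
wavelengths = np.linspace(400, 1000, n_bands) # Vis/NIR range [nm]

# Synthetic reflectance: smooth baseline plus an SSC-correlated absorption
# feature around 960 nm (a water/sugar-related NIR band), plus noise.
ssc = rng.uniform(8, 18, n_samples)           # soluble solids [Brix]
baseline = rng.normal(0.5, 0.05, (n_samples, 1))
feature = np.exp(-((wavelengths - 960) / 30) ** 2)
X = (baseline + 0.01 * ssc[:, None] * feature
     + rng.normal(0, 0.005, (n_samples, n_bands)))

# Cross-validated R^2 is a standard figure of merit for such calibrations.
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, X, ssc, cv=5, scoring="r2")
print(f"Cross-validated R^2: {r2.mean():.3f} +/- {r2.std():.3f}")
```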
Abstract:
While operations are performed on qubits, various errors can occur, altering the information they carry. Quantum Error Correction provides algorithms that make it possible to tolerate these errors and protect the information being processed. This thesis focuses on 3-qubit codes, which can correct a single bit-flip error or a single phase-flip error. More precisely, within these algorithms, attention is placed on the encoding procedure, which aims to better protect the information carried by a qubit against errors, and on the syndrome measurement, which identifies the qubit on which an error occurred without altering the state of the system. Furthermore, by exploiting the syndrome measurement procedure, the probabilities of bit-flip and phase-flip errors on a qubit were estimated using the IBM Quantum Experience.
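Since bit-flip errors act classically on the computational basis, the encoding, syndrome measurement and correction of the 3-qubit bit-flip code can be illustrated with a purely classical simulation, as in the hedged sketch below; this is not the thesis's IBM Quantum Experience implementation, and the error probability is an assumed value. The phase-flip code works identically after a change to the Hadamard basis.

```python
# Minimal classical simulation of the 3-qubit bit-flip code: encode one
# logical bit into three qubits, apply noise, read the syndrome from the
# two parity checks Z1Z2 and Z2Z3, and correct.
import random

def encode(bit):
    """|0> -> |000>, |1> -> |111> (repetition encoding via two CNOTs)."""
    return [bit, bit, bit]

def apply_bitflip_noise(qubits, p):
    """Flip each qubit independently with probability p (assumed value)."""
    return [q ^ (random.random() < p) for q in qubits]

def syndrome(qubits):
    """Parities (q0 xor q1, q1 xor q2): locate the flipped qubit, if any."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def correct(qubits):
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(qubits))
    if flip is not None:
        qubits[flip] ^= 1
    return qubits

# The code fails only when two or more qubits flip in the same round.
p, trials, failures = 0.05, 100_000, 0
for _ in range(trials):
    failures += correct(apply_bitflip_noise(encode(0), p)) != [0, 0, 0]
print(f"physical p = {p}, logical rate ~ {failures / trials:.4f} "
      f"(theory: 3p^2 - 2p^3 = {3*p**2 - 2*p**3:.4f})")
```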
Abstract:
In the last few years, techniques such as quantum computers and quantum communication systems have developed rapidly, owing to their huge potential and the growing number of applications. However, physical qubits suffer from many nonidealities, such as measurement errors and decoherence, which cause failures in quantum computation. This work shows how concepts from classical information theory can be exploited to realize quantum error-correcting codes by adding redundancy qubits. In particular, the threshold theorem states that the decoding failure rate can be lowered at will, provided the physical error rate is below a given accuracy threshold. The focus is on codes belonging to the family of topological codes, such as the toric, planar and XZZX surface codes. First, they are compared from a theoretical point of view to show their advantages and disadvantages. The algorithms behind the minimum-weight perfect matching decoder, the most popular for such codes, are presented. The last section is dedicated to analyzing the performance of these topological codes under different error channel models, showing interesting results. In particular, while the error correction capability of surface codes decreases in the presence of biased errors, XZZX codes possess intrinsic symmetries that allow them to improve their performance when one kind of error occurs more frequently than the others.
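The threshold behavior invoked above can be illustrated in miniature. A full surface-code study would pair the codes with a minimum-weight perfect matching decoder (e.g., via the PyMatching library); the hedged sketch below instead uses the distance-d repetition code with majority-vote decoding, whose threshold sits at p* = 0.5 (artificially high, which makes the crossover easy to see in a small Monte Carlo). All parameters are illustrative assumptions.

```python
# Threshold theorem in miniature: below the threshold, increasing the code
# distance d suppresses the logical error rate; above it, it makes it worse.
import numpy as np

rng = np.random.default_rng(1)

def logical_error_rate(d, p, trials=200_000):
    """Fraction of trials in which more than half of d qubits flip."""
    flips = rng.random((trials, d)) < p
    return np.mean(flips.sum(axis=1) > d // 2)

for p in (0.05, 0.30, 0.55):   # below, well below, and above p* = 0.5
    rates = {d: logical_error_rate(d, p) for d in (3, 7, 15)}
    trend = "improves" if rates[15] < rates[3] else "degrades"
    print(f"p = {p:.2f}: "
          + ", ".join(f"d={d}: {r:.4f}" for d, r in rates.items())
          + f"  -> larger distance {trend}")
```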
Abstract:
The comfort level of the seat has a major effect on the usage of a vehicle; thus, car manufacturers have been working to make car seats as comfortable as possible. Even so, comfort testing and evaluation are still done through exhaustive trial-and-error testing and data evaluation. In this thesis, we resort to machine learning and Artificial Neural Networks (ANN) to develop a fully automated approach. Although this approach has the advantage of minimizing time and exploiting a large set of data, it takes away the engineer's freedom to make decisions. The focus of this study is on filling the gap in a two-step comfort level evaluation, which used pressure mapping with body regions to evaluate the average pressure supported by specific body parts, and Self-Assessment Exam (SAE) questions to evaluate the person's impressions. This study created a machine learning algorithm that gives the engineer a degree of freedom in decision-making when mapping pressure values to body regions using an ANN. The mapping is done with 92% accuracy and is supported by a Graphical User Interface (GUI) that facilitates the comfort level evaluation of the car seat during testing, decreasing the duration of test analysis from days to hours.
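As a hedged illustration of the core mapping task described here, the sketch below trains a small ANN (a scikit-learn MLPClassifier) to assign pressure-mat cells to body regions. The region names, features, and Gaussian synthetic data are assumptions for demonstration, not the thesis's dataset, architecture, or its 92% result.

```python
# Minimal sketch: an ANN that classifies pressure-mat cells into body regions.
# Synthetic data stand in for real seat pressure maps.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Features per cell: (row, col, pressure). Each assumed region is simulated
# as a Gaussian cluster on the mat with its own typical pressure level.
centers = {"left_thigh": (2, 2, 30), "right_thigh": (2, 8, 30),
           "buttocks": (6, 5, 60), "lower_back": (10, 5, 20)}
X, y = [], []
for label, (r, c, pr) in centers.items():
    n = 500
    X.append(np.column_stack([rng.normal(r, 1.0, n),
                              rng.normal(c, 1.0, n),
                              rng.normal(pr, 5.0, n)]))
    y += [label] * n
X = np.vstack(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=800, random_state=0)
ann.fit(X_train, y_train)
print(f"Region-mapping accuracy: {ann.score(X_test, y_test):.2%}")
```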
Abstract:
We report measurements of single- and double-spin asymmetries for W± and Z/γ* boson production in longitudinally polarized p+p collisions at √s = 510 GeV by the STAR experiment at RHIC. The asymmetries for W± were measured as a function of the decay lepton pseudorapidity, which provides a theoretically clean probe of the proton's polarized quark distributions at the scale of the W mass. The results are compared to theoretical predictions, constrained by polarized deep inelastic scattering measurements, and show a preference for a sizable, positive up antiquark polarization in the range 0.05
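For reference, the single- and double-spin asymmetries quoted in such analyses are conventionally defined as follows (the + and − subscripts denote the helicity states of the polarized proton beams):

```latex
% Standard definitions of the longitudinal single- and double-spin asymmetries
A_L    = \frac{\sigma_{+} - \sigma_{-}}{\sigma_{+} + \sigma_{-}}, \qquad
A_{LL} = \frac{\sigma_{++} + \sigma_{--} - \sigma_{+-} - \sigma_{-+}}
              {\sigma_{++} + \sigma_{--} + \sigma_{+-} + \sigma_{-+}}
```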
Abstract:
One of the most important properties of quantum dots (QDs) is their size, which determines their optical properties and, in a colloidal medium, their range of interaction. The most common techniques used to measure QD size are transmission electron microscopy (TEM) and X-ray diffraction. However, these techniques require the sample to be dried and kept under vacuum. In this way, any hydrodynamic information is lost, and the preparation process may even alter the size of the QDs. Fluorescence correlation spectroscopy (FCS) is an optical technique with single-molecule sensitivity capable of extracting the hydrodynamic radius (HR) of QDs. The main drawback of FCS is the blinking phenomenon, which distorts the correlation function and makes the apparent QD size smaller than it really is. In this work, we developed a method to exclude blinking from the FCS analysis and measured the HR of colloidal QDs. We compared our results with TEM images; the HR obtained by FCS is larger than the radius measured by TEM. We attribute this difference to the cap layer of the QD, which cannot be seen in TEM images.
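The route from an FCS fit to a hydrodynamic radius follows two standard relations: the diffusion time τ_D of the correlation decay gives the diffusion coefficient D = w₀²/(4τ_D), and the Stokes-Einstein relation converts D into R_h. The hedged sketch below applies both; the beam waist and diffusion time are assumed illustrative values, not measurements from this work.

```python
# Minimal sketch: FCS diffusion time -> diffusion coefficient -> hydrodynamic
# radius via the Stokes-Einstein relation.
import math

k_B = 1.380649e-23      # Boltzmann constant [J/K]
T = 298.15              # temperature [K]
eta = 0.89e-3           # viscosity of water at 25 C [Pa*s]
w0 = 0.25e-6            # lateral waist of the confocal volume [m] (assumed)
tau_D = 120e-6          # diffusion time from the FCS fit [s] (assumed)

D = w0 ** 2 / (4 * tau_D)                 # diffusion coefficient [m^2/s]
R_h = k_B * T / (6 * math.pi * eta * D)   # Stokes-Einstein hydrodynamic radius

print(f"D   = {D:.3e} m^2/s")
print(f"R_h = {R_h * 1e9:.2f} nm")
```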
Abstract:
Measurement instruments are an integral part of clinical practice, health evaluation and research. These instruments are only useful and able to produce scientifically robust results when they are developed properly and have appropriate psychometric properties. Despite the significant increase in rating scales, the literature suggests that many of them have not been adequately developed and validated. The scope of this study was to conduct a narrative review of the process of developing new measurement instruments and to present some tools that can be used at certain stages of the development process. The steps described were: I) establishment of a conceptual framework and definition of the objectives of the instrument and the population involved; II) development of the items and of the response scales; III) selection and organization of the items and structuring of the instrument; IV) content validity; V) pre-test. This study also included a brief discussion of the evaluation of psychometric properties, given their importance for instruments to be accepted and acknowledged in both scientific and clinical environments.
Abstract:
The purpose of this study was to correlate pre-operative imaging, vascularity of the proximal pole, and histology of the proximal pole bone in established scaphoid fracture non-union. This was a prospective, non-controlled experimental study. Patients were evaluated pre-operatively for necrosis of the proximal scaphoid fragment by radiography, computed tomography (CT) and magnetic resonance imaging (MRI). The vascular status of the proximal scaphoid was determined intra-operatively by the presence or absence of punctate bone bleeding. Samples were harvested from the proximal scaphoid fragment and sent for pathological examination. We determined the association between the imaging and intra-operative examinations and the histological findings. We evaluated 19 male patients diagnosed with scaphoid nonunion. CT evaluation showed no correlation with necrosis of the proximal scaphoid fragment. MRI showed marked low signal intensity on T1-weighted images, which confirmed the histological diagnosis of necrosis of the proximal scaphoid fragment in all patients. Intra-operative assessment showed that 90% of bones had no punctate bone bleeding, which was confirmed as necrosis by microscopic examination. In scaphoid nonunion, marked low signal intensity on T1-weighted MRI images and the absence of intra-operative punctate bone bleeding are strong indicators of osteonecrosis of the proximal fragment.
Abstract:
A method to quantify lycopene and β-carotene in freeze-dried tomato pulp by high performance liquid chromatography (HPLC) was validated according to the criteria of selectivity, sensitivity, precision and accuracy, and the measurement uncertainty was estimated from the data obtained in the validation. The validated method is selective, with good precision and accuracy. The detection limits for lycopene and β-carotene were 4.2 and 0.23 mg 100 g⁻¹, respectively. The expanded uncertainty (k = 2) for lycopene was 104 ± 21 mg 100 g⁻¹ and for β-carotene 6.4 ± 1.5 mg 100 g⁻¹.
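As a hedged illustration of the expanded-uncertainty estimation mentioned above, the sketch below follows the usual GUM recipe: independent standard uncertainty contributions are combined in quadrature and multiplied by the coverage factor k = 2 (~95% confidence). The individual contributions are invented placeholders, not the paper's actual uncertainty budget.

```python
# Minimal sketch: combined and expanded uncertainty per the GUM approach.
import math

# Assumed standard uncertainty contributions for a lycopene result
# [mg 100 g^-1]; placeholder values for illustration only.
u_contributions = {
    "repeatability": 6.0,
    "calibration_curve": 7.5,
    "sample_mass": 1.2,
    "dilution": 2.0,
}

u_c = math.sqrt(sum(u ** 2 for u in u_contributions.values()))  # combined
k = 2.0                                                         # coverage factor
U = k * u_c                                                     # expanded

print(f"combined standard uncertainty u_c = {u_c:.1f} mg 100 g^-1")
print(f"expanded uncertainty U = k*u_c    = {U:.1f} mg 100 g^-1")
```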
Abstract:
Prey size is an important factor in food consumption. In studies of feeding ecology, prey items are usually measured individually using calipers or ocular micrometers. Among amphibians and reptiles, there are species that feed on large numbers of small prey items (e.g. ants, termites). This high intake makes it difficult to estimate the prey size consumed by these animals. We addressed this problem by developing and evaluating a procedure for subsampling the stomach contents of such predators in order to estimate prey size. Specifically, we developed a protocol based on a bootstrap procedure to obtain a subsample with a precision error of at most 5%, at a confidence level of at least 95%. This guideline should reduce sampling effort, facilitate future studies on the feeding habits of amphibians and reptiles, and provide a means of obtaining precise estimates of prey size.
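A hedged sketch of the kind of bootstrap check described above: given a set of measured prey sizes, it tests how large a subsample must be for the bootstrap precision error of the mean to stay within 5% at 95% confidence. The prey-size distribution is synthetic; only the 5%/95% criteria come from the abstract.

```python
# Minimal sketch: bootstrap test of whether a subsample of prey sizes meets
# a 5% precision criterion at 95% confidence.
import numpy as np

rng = np.random.default_rng(7)
# Synthetic prey lengths [mm]: gamma-distributed, mean 4 mm, sd 1 mm (assumed).
prey_sizes = rng.gamma(shape=16.0, scale=0.25, size=600)

def precision_ok(sample, n_boot=2000, tol=0.05, conf=0.95):
    """True if the bootstrap relative error of the mean is <= tol
    with probability >= conf."""
    boots = rng.choice(sample, size=(n_boot, len(sample)), replace=True)
    rel_err = np.abs(boots.mean(axis=1) - sample.mean()) / sample.mean()
    return np.mean(rel_err <= tol) >= conf

for n in (25, 50, 100, 200):
    sub = rng.choice(prey_sizes, size=n, replace=False)
    print(f"subsample n = {n:3d}: precision criterion met: {precision_ok(sub)}")
```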