957 results for: Planar jet, hot-wire anemometry, calibration procedure, experiments for the characterization
Abstract:
The work focused on preparing the NP EN ISO/IEC 17025 accreditation of the Metrology Laboratory of the company Frilabo to provide services in the temperature field, namely the testing of thermal chambers and the calibration of industrial thermometers. Given the scope of the work, this thesis covers theoretical concepts of temperature and uncertainty, as well as technical considerations on temperature measurement and uncertainty calculation. Considerations on the different types of thermal chambers and thermometers are also presented. The text presents the documents prepared by the author on the procedures for testing thermal chambers and the corresponding uncertainty calculation procedure, as well as the documents prepared by the author on the procedures for calibrating industrial thermometers and the corresponding uncertainty calculation procedure. For both the thermal chamber tests and the thermometer calibrations, the author prepared flowcharts describing the temperature measurement methodology in the tests, the temperature measurement methodology in the calibrations, and the corresponding uncertainty calculations. The annexes contain several documents, such as the spreadsheet template for processing test data, the spreadsheet template for processing calibration data, the test report template, the calibration certificate template, and spreadsheets for client/equipment management and for the automatic numbering of test reports and calibration certificates, all of which comply with the laboratory's management requirements. The annexes also include all the figures on temperature monitoring in the thermal chambers, as well as the figures showing the arrangement of the thermometers inside the chambers. All figures throughout the document that are not referenced were adapted or produced by the author. The decision to extend the scope of accreditation of Frilabo's Metrology Laboratory to thermometer calibration stems from the fact that, being accredited as a testing laboratory in the temperature field, establishing the traceability of the measurement standards in-house would allow an optimised and more cost-effective management of resources. The methodology for preparing the entire accreditation process of Frilabo's Metrology Laboratory was developed by the author and is presented throughout the thesis, including the data relevant to achieving accreditation in both scopes. All of the work will be assessed by IPAC (Instituto Português de Acreditação), the body that grants accreditation in Portugal. This body will audit the company on the basis of the procedures developed and the results obtained, the most important of which is the Best Measurement Uncertainty (BMI) budget, also known as the Best Measurement Capability (MCM), both for the thermal chamber tests and for the thermometer calibrations, thereby complementing the services provided to Frilabo's loyal customers. Thermal chambers and industrial thermometers are widely used in many industrial segments, in engineering, medicine and teaching, and also in research institutions, their purposes being, respectively, the simulation of specific controlled conditions and the measurement of temperature.
For accredited entities, such as laboratories, it is essential that measurements made with, and inside, these types of equipment have metrological reliability, since inadequate measurement results can lead to erroneous conclusions about the tests performed. The results obtained in the thermal chamber tests and in the thermometer calibrations are considered good and acceptable, since the best uncertainties obtained can be compared, through public consultation of the IPAC Technical Annex, with the uncertainties of other accredited laboratories in Portugal. From a more experimental standpoint, in thermal chamber testing the achievement of lower or higher uncertainties depends mostly on the behaviour, characteristics and state of repair of the chambers, which makes the temperature stabilisation process inside them particularly relevant. Most of the uncertainty sources in thermometer calibration come from the characteristics and manufacturer specifications of the equipment, which contribute with equal weight to the expanded uncertainty calculation (the manufacturer's accuracy, the uncertainties inherited from calibration certificates, and the stability and uniformity of the thermal medium in which the calibrations are performed). In thermometer calibration, the lowest uncertainties are obtained for thermometers with the coarsest resolution: thermometers with a resolution of 1 ºC did not detect the variations of the thermal bath. For thermometers with finer resolutions, the weight of the reading-dispersion contribution in the uncertainty calculation can vary with the characteristics of the thermometer; thermometers with a resolution of 0.1 ºC, for example, showed the largest contribution from the reading-dispersion component. It can be concluded that accrediting a laboratory is by no means an easy process. Some aspects can compromise the accreditation, such as a poor selection of the technician(s) and equipment that will carry out the measurements (poorly trained technicians, equipment unsuited to the measurement range or badly calibrated, and so on); if this selection is not done well, it compromises all the subsequent steps of the process. All laboratory staff must also be involved, including the quality manager, the technical manager and the technicians; only then is it possible to reach the intended quality and the continuous improvement of the laboratory's accreditation. Another important aspect in preparing a laboratory accreditation is researching the necessary and appropriate documentation in order to make correct decisions when drafting the procedures leading to it. The laboratory must demonstrate its competence through records. Finally, it can be said that competence is the key word of an accreditation, as it manifests itself in the people, equipment, methods, facilities and other aspects of the institution to which the laboratory under accreditation belongs.
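As a point of reference for the uncertainty budgets described above, the following is a minimal sketch of the GUM-style combination of uncertainty components (assuming uncorrelated input quantities such as the manufacturer's accuracy, inherited calibration uncertainties, stability and uniformity of the thermal medium, and reading dispersion); it is written in LaTeX notation and is not taken from the thesis itself:

% Combined standard uncertainty for a measurand y = f(x_1, ..., x_N),
% assuming uncorrelated input quantities x_i (GUM law of propagation):
u_c(y) = \sqrt{ \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i) }

% Expanded uncertainty reported in test reports and calibration certificates,
% with coverage factor k (k = 2 for approximately 95 % coverage):
U = k \, u_c(y)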
Abstract:
Considering the social and economic importance of milk, the objective of this study was to evaluate the incidence of, and quantify, antimicrobial residues in this food. Samples were collected from dairy plants in southwestern Paraná state, covering all ten municipalities of the Pato Branco region. The work focused on the development of appropriate models for the identification and quantification of the analytes tetracycline, sulfamethazine, sulfadimethoxine, chloramphenicol and ampicillin, all antimicrobials of health interest. For the calibration and validation of the models, Fourier transform infrared spectroscopy was used, combined with a chemometric method based on Partial Least Squares (PLS) regression. To prepare the antimicrobial working solution, the five analytes of interest were used in increasing doses: tetracycline from 0 to 0.60 ppm, sulfamethazine from 0 to 0.12 ppm, sulfadimethoxine from 0 to 2.40 ppm, chloramphenicol from 0 to 1.20 ppm and ampicillin from 0 to 1.80 ppm, in keeping with the multiresidue focus of the work. The performance of the models was evaluated through the figures of merit: mean square errors of calibration and cross-validation, correlation coefficients and offset performance ratio. For the purposes of this work, the models generated for tetracycline, sulfadimethoxine and chloramphenicol were considered viable, having the greatest predictive power and efficiency, and were therefore employed to evaluate the quality of raw milk from the Pato Branco region. Of the samples analyzed by NIR, 70% were in conformity with sanitary legislation, and 5% of these had concentrations below the maximum residue limit, which is also satisfactory. However, 30% of the sample set showed unsatisfactory results for antimicrobial residue contamination, a non-conformity related to the presence of antimicrobials whose use is unauthorized or to concentrations above the permitted limits. This work shows that laboratory testing in the food area using infrared spectroscopy with multivariate calibration is effective, fast, low-cost and generates minimal laboratory waste. Thus, the proposed alternative method meets the quality and efficiency demands of industry and society in general.
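As an illustration of the calibration step described above, the short sketch below builds a PLS model and computes calibration and cross-validation errors; the spectra, concentrations and number of latent variables are synthetic placeholders, not data from the study:

# Sketch of a PLS calibration with cross-validation; the data are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 500
concentration = rng.uniform(0.0, 0.60, n_samples)            # e.g. tetracycline, ppm
spectra = (np.outer(concentration, rng.normal(size=n_wavenumbers))
           + 0.05 * rng.normal(size=(n_samples, n_wavenumbers)))  # placeholder "FTIR" spectra

pls = PLSRegression(n_components=5)       # number of latent variables is an assumption
pls.fit(spectra, concentration)

# Figures of merit analogous to those reported in the abstract
y_cal = pls.predict(spectra).ravel()
y_cv = cross_val_predict(pls, spectra, concentration, cv=10).ravel()
rmsec = np.sqrt(np.mean((concentration - y_cal) ** 2))        # RMSE of calibration
rmsecv = np.sqrt(np.mean((concentration - y_cv) ** 2))        # RMSE of cross-validation
r_cal = np.corrcoef(concentration, y_cal)[0, 1]               # correlation coefficient
print(f"RMSEC={rmsec:.4f} ppm  RMSECV={rmsecv:.4f} ppm  r={r_cal:.3f}")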
Abstract:
When components of a propulsion system are exposed to elevated flow temperatures there is a risk of catastrophic failure if the components are not properly protected from the thermal loads. Among several strategies, slot film cooling is one of the most commonly used, yet poorly understood, active cooling techniques. Tangential injection of a relatively cool fluid layer protects the surface(s) in question, but the turbulent mixing between the hot mainstream and the cooler film, along with the presence of the wall, presents an inherently complex problem where kinematics, thermal transport and multimodal heat transfer are coupled. Furthermore, new propulsion designs rely heavily on CFD analysis to verify their viability. These CFD models require validation of their results, and the current literature does not provide a comprehensive data set for film cooling that meets all the demands for proper validation, namely a comprehensive data set (kinematic, thermal and boundary-condition data) obtained over a wide range of conditions. This body of work aims at solving the fundamental issue of validation by providing high-quality, comprehensive film cooling data (kinematics, thermal mixing, heat transfer). Three distinct velocity ratios (VR=uc/u∞) are examined, corresponding to wall-wake (VR~0.5), min-shear (VR~1.0), and wall-jet (VR~2.0) type flows at injection, while the temperature ratio TR=T∞/Tc is approximately 1.5 for all cases. Turbulence intensities at injection are 2-4% for the mainstream (urms/u∞, vrms/u∞), and on the order of 8-10% for the coolant (urms/uc, vrms/uc). A special emphasis is placed on inlet characterization, since inlet data in the literature are often incomplete or of relatively low quality for CFD development. The data reveal that min-shear injection provides the best performance, followed by the wall-jet. The wall-wake case performs comparatively poorly. The comprehensive data suggest that this relative performance is due to the mixing strength of each case, as well as the location of regions of strong mixing with respect to the wall. Kinematic and thermal data show that strong mixing occurs in the wall-jet away from the wall (y/s>1), while strong mixing in the wall-wake occurs much closer to the wall (y/s<1). Min-shear cases exhibit noticeably weaker mixing confined to about y/s=1. In addition to these general observations, the experimental data obtained in this work are analyzed to reveal scaling laws for the inlets and near-wall scaling, to detect and characterize coherent structures in the flow, and to provide data reduction strategies for comparison to CFD models (RANS and LES).
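For quick reference, the nondimensional parameters used above can be collected in one place (LaTeX notation; the grouping is editorial, not an addition to the data set):

VR = u_c / u_\infty     % velocity ratio: wall-wake ~0.5, min-shear ~1.0, wall-jet ~2.0
TR = T_\infty / T_c     % temperature ratio, ~1.5 for all cases
Tu = u_{rms} / u_{ref}  % turbulence intensity at injection: ~2-4 % for the mainstream (u_ref = u_\infty),
                        % ~8-10 % for the coolant (u_ref = u_c)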
Abstract:
Context. With about 2000 extrasolar planets confirmed, the results show that planetary systems have a whole range of unexpected properties. This wide diversity provides fundamental clues to the processes of planet formation and evolution. Aims: We present a full investigation of the HD 219828 system, a bright metal-rich star for which a hot Neptune has previously been detected. Methods: We used a set of HARPS, SOPHIE, and ELODIE radial velocities to search for the existence of orbiting companions to HD 219828. The spectra were used to characterise the star and its chemical abundances, as well as to check for spurious, activity induced signals. A dynamical analysis is also performed to study the stability of the system and to constrain the orbital parameters and planet masses. Results: We announce the discovery of a long period (P = 13.1 yr) massive (m sini = 15.1 MJup) companion (HD 219828 c) in a very eccentric orbit (e = 0.81). The same data confirm the existence of a hot Neptune, HD 219828 b, with a minimum mass of 21 M⊕ and a period of 3.83 days. The dynamical analysis shows that the system is stable, and that the equilibrium eccentricity of planet b is close to zero. Conclusions: The HD 219828 system is extreme and unique in several aspects. First, among all known exoplanet systems it presents an unusually high mass ratio. We also show that systems like HD 219828, with a hot Neptune and a long-period massive companion, are more frequent than similar systems with a hot Jupiter instead. This suggests that the formation of hot Neptunes follows a different path than the formation of their hot Jovian counterparts. The high mass, long period, and eccentricity of HD 219828 c also make it a good target for Gaia astrometry as well as a potential target for atmospheric characterisation, using direct imaging or high-resolution spectroscopy. Astrometric observations will allow us to derive its real mass and orbital configuration. If a transit of HD 219828 b is detected, we will be able to fully characterise the system, including the relative orbital inclinations. With a clearly known mass, HD 219828 c may become a benchmark object for the range in between giant planets and brown dwarfs.
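A minimal sketch of the single-planet Keplerian radial-velocity model that underlies such orbital fits is given below; the period and eccentricity of HD 219828 c are used only as example inputs, while the semi-amplitude K and the argument of periastron are arbitrary placeholders, not values from the paper:

# Sketch of a Keplerian radial-velocity curve for a single companion.
import numpy as np

def kepler_E(M, e, tol=1e-10, max_iter=100):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = M + e * np.sin(M)                     # reasonable starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def radial_velocity(t, P, K, e, omega, t_peri):
    """RV(t) = K * [cos(nu + omega) + e*cos(omega)] for one Keplerian orbit."""
    M = 2.0 * np.pi * (t - t_peri) / P        # mean anomaly
    E = kepler_E(np.mod(M, 2.0 * np.pi), e)   # eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2.0),
                          np.sqrt(1 - e) * np.cos(E / 2.0))  # true anomaly
    return K * (np.cos(nu + omega) + e * np.cos(omega))

# P = 13.1 yr and e = 0.81 are the abstract's values for HD 219828 c;
# K = 270 m/s and omega = 1.0 rad are placeholders for illustration only.
t = np.linspace(0.0, 13.1 * 365.25, 2000)     # days
rv_c = radial_velocity(t, P=13.1 * 365.25, K=270.0, e=0.81, omega=1.0, t_peri=0.0)
print(f"RV excursion of the toy model: {rv_c.min():.1f} to {rv_c.max():.1f} m/s")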
Abstract:
How can we control the experimental conditions towards the isolation of specific structures? Why do particular architectures form? These are some challenging questions that synthetic chemists try to answer, specifically within polyoxometalate (POM) chemistry, where much remains unknown regarding the synthesis of novel molecular structures in a controlled and predictive manner. This work covers a wide range of POM chemistry, exploring the redox self-assembly of polyoxometalate clusters using “one-pot”, flow and hydrothermal conditions. For this purpose, different vanadium, molybdenum and tungsten reagents, heteroatoms, inorganic salts and reducing agents have been used. The template effect of lone-pair-containing pyramidal heteroatoms has been investigated. Efforts to synthesize new POM clusters displaying pyramidal heteroanions (XO32-, where X = S, Se, Te, P) are reported. The reaction of molybdenum with vanadium in the presence of XO32- heteroatoms is explored, showing how, via cation and experimental control, it is possible to direct the self-assembly process and to isolate isostructural compounds. A series of four isostructural, disordered, egg-shaped polyoxometalates (two new, namely {Mo11V7P} and {Mo11V7Te}, and two already known, namely {Mo11V7Se} and {Mo11V7S}) is reported. The compounds were characterized by X-ray structural analysis, TGA, UV-Vis, FT-IR, elemental and Flame Atomic Absorption Spectroscopy (FAAS) analysis and Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES). Cyclic voltammetry measurements were carried out on all four compounds, showing the effect of the ionic density of the heteroatom on the potential. High-resolution ESI-MS studies revealed that the structures retain their integrity in solution. Efforts to synthesize new mixed-metal compounds led to the isolation and the structural and electronic characterization of the theoretically predicted, but experimentally elusive, δ-isomer of the Keggin polyoxometalate cluster anion, {H2W4V9O33(C6H13NO3)}, by the reaction of tungstate(VI) and vanadium(V) with triethanolammonium ions (TEAH) acting as a tripodal ligand grafted to the surface of the cluster. Control experiments (in the absence of the organic compound) proved that the tripodal ligand plays a crucial role in the formation of the isomer. The six vanadium metal centres, which constitute the upper part of the cluster, are bonded to the “capping” TEA tripodal ligand. This metal-ligand bonding directs and stabilises the formation of the final product. The δ-Keggin species was characterized by single-crystal X-ray diffraction, FT-IR, UV-vis, NMR and ESI-MS spectrometry. Electronic structure and structure-stability correlations were evaluated by means of DFT calculations. The compounds exhibited photochromic properties, undergoing single-crystal-to-single-crystal (SC-SC) transformations and changing colour under light. Non-conventional synthetic approaches are also used for the synthesis of POM clusters, comparing the classical “one-pot” reaction conditions and exploring the synthetic parameters of POM synthesis. Reactions under hydrothermal and flow conditions, in which single crystals whose formation depends on the solubility of the minerals in hot water under high pressure can be grown, resulted in the isolation of two isostructural compounds, namely {Mo12V3Te5}.
The compound isolated from the continuous processing method crystallizes in a hexagonal crystal system, forming a 2D porous plane net, while the compound isolated under harsh experimental conditions (high temperature and pressure) crystallizes in a monoclinic system, resulting in a different packing configuration. Using these alternative synthetic approaches, both the kinetically and the thermodynamically favoured compounds can potentially be isolated. These compounds were characterised by single-crystal X-ray diffraction, FT-IR and UV-vis spectroscopy. Finally, the redox-controlled oscillatory template exchange between phosphate (P) and vanadate (V) anions enclosed in an {M18O54(XO4)2} cluster was further investigated using UV-vis spectroscopy as a function of reaction time, showing that more than six complete oscillations interconverting the capsule species present in solution from {P2M18} to {V2M18} were possible, provided that a sufficient concentration of the TEA reducing agent was present in solution. In an effort to investigate the periodicity of the exchange of the phosphate and vanadate anions, time-dependent UV-vis measurements were performed over periods in the range of 170-550 hours. Different experimental conditions were also applied in order to investigate the role of the reducing agent, as well as the effect of other experimental variables on the oscillatory system.
Abstract:
Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more chemically complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2-dimensional projection (shadow) images of the 3D structure, leaving the 3-dimensional information hidden, which can lead to incomplete or erroneous characterization. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometer resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the quality of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, the sparsity is applied on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies were performed with a variety of setups. Reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select or whether the images used strictly follow the pre-conditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). It can also avoid artifacts introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable elementally sensitive tomography using EELS is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
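The patch-based sparsity that DLET exploits can be illustrated with a generic dictionary-learning denoising sketch; this is not the authors' DLET algorithm (which alternates dictionary learning with tomogram restoration), only the ingredient of learning a dictionary on overlapping patches and re-expressing an image sparsely in it:

# Generic sketch: learn a dictionary on overlapping 2D patches of a noisy slice
# and rebuild the slice from sparse codes in that dictionary.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                                   # toy "particle" slice
noisy = clean + 0.3 * rng.normal(size=clean.shape)

patches = extract_patches_2d(noisy, (7, 7))                 # overlapping patches
X = patches.reshape(patches.shape[0], -1)
mean = X.mean(axis=1, keepdims=True)
X = X - mean                                                 # work on zero-mean patches

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
code = dico.fit(X).transform(X)                              # sparse coding (OMP by default)
X_rec = code @ dico.components_ + mean

denoised = reconstruct_from_patches_2d(X_rec.reshape(patches.shape), noisy.shape)
print("RMSE noisy:", np.sqrt(np.mean((noisy - clean) ** 2)),
      " RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))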
Abstract:
Microfluidic technologies have great potential to help create automated, cost-effective, portable devices for rapid point-of-care (POC) diagnostics in diverse patient settings. Unfortunately, commercialization is currently constrained by the materials, reagents, and instrumentation required and by detection element performance. While most microfluidic studies utilize planar detection elements, this dissertation demonstrates the utility of porous volumetric detection elements to improve detection sensitivity and reduce assay times. Impedemetric immunoassays were performed utilizing silver-enhanced gold nanoparticle immunoconjugates (AuIgGs) and porous polymer monolith or silica bead bed detection elements within a thermoplastic microchannel. For a direct assay with 10 µm spaced electrodes the detection limit was 0.13 fM AuIgG with a 3 log dynamic range. The same assay was performed with electrode spacings of 15, 40, and 100 µm with no significant difference between configurations. For a sandwich assay the detection limit was 10 ng/mL with a 4 log dynamic range. While most impedemetric assays rely on expensive high-resolution electrodes to enhance planar sensor performance, this study demonstrates the use of porous volumetric detection elements to achieve similar performance using lower-resolution electrodes and shorter incubation times. Optical immunoassays were performed using porous volumetric capture elements perfused with refractive-index-matching solutions to limit light scattering and enhance signal. First, fluorescence signal enhancement was demonstrated with a porous polymer monolith within a silica capillary. Next, transmission enhancement of a direct assay was demonstrated by infusing aqueous sucrose solutions through silica bead beds with captured silver-enhanced AuIgGs, yielding a detection limit of 0.1 ng/mL and a 5 log dynamic range. Finally, ex situ functionalized porous silica monolith segments were integrated into thermoplastic channels for a reflectance-based sandwich assay, yielding a detection limit of 1 ng/mL and a 5 log dynamic range. The simple techniques for optical signal enhancement and ex situ element integration enable development of sensitive, multiplexed microfluidic sensors. Collectively, the demonstrated experiments validate the use of porous volumetric detection elements to enhance impedemetric and optical microfluidic assays. The techniques rely on commercial reagents, materials compatible with manufacturing, and measurement instrumentation adaptable to POC diagnostics.
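As a side note on how detection limits such as those quoted above can be estimated, one common recipe is to fit the calibration curve and take the concentration whose signal exceeds the blank by three standard deviations; the sketch below uses invented numbers purely for illustration:

# Sketch: detection limit from a log-linear calibration and a blank-plus-3-sigma threshold.
import numpy as np

conc = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])            # ng/mL standards (placeholder)
signal = np.array([0.05, 0.12, 0.31, 0.55, 0.80])            # normalized responses (placeholder)
blank = np.array([0.020, 0.022, 0.018, 0.021, 0.019])        # replicate blank readings (placeholder)

slope, intercept = np.polyfit(np.log10(conc), signal, 1)     # log-linear calibration fit
threshold = blank.mean() + 3.0 * blank.std(ddof=1)           # decision threshold
lod = 10 ** ((threshold - intercept) / slope)                # back-transform to concentration
print(f"estimated LOD ~ {lod:.2g} ng/mL")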
Abstract:
A comparison of the Rietveld quantitative phase analyses (RQPA) obtained using Cu-Kα1, Mo-Kα1, and synchrotron strictly monochromatic radiations is presented. The main aim is to test a simple hypothesis: high-energy Mo radiation, combined with high-resolution laboratory X-ray powder diffraction optics, could yield more accurate RQPA for challenging samples than well-established Cu-radiation procedures. To do so, three sets of mixtures with increasing amounts of a given phase (spiking method) were prepared and the corresponding RQPA results were evaluated. Firstly, a series of crystalline inorganic phase mixtures with increasing amounts of an analyte was studied in order to determine whether the Mo-Kα1 methodology is as robust as the well-established Cu-Kα1 one. Secondly, a series of crystalline organic phase mixtures with increasing amounts of an organic compound was analyzed; this type of mixture can give rise to transparency problems in reflection and to inhomogeneous loading in narrow capillaries for transmission studies. Finally, a third series with variable amorphous content was studied. Limits of detection in the Cu patterns, ~0.2 wt%, are slightly lower than those derived from the Mo patterns, ~0.3 wt%, for similar recording times, and the limit of quantification for a well-crystallized inorganic phase using laboratory powder diffraction was established at ~0.10 wt%. However, the accuracy was compromised, as relative errors were ~100%. Contents higher than 1.0 wt% yielded analyses with relative errors lower than 20%. From the obtained results it is inferred that RQPA from Mo-Kα1 radiation have slightly better accuracies than those obtained from Cu-Kα1. This behavior has been established from the calibration graphs obtained through the spiking method and also from Kullback-Leibler distance statistic studies. We attribute this outcome, in spite of the lower diffracting power of Mo radiation (compared to Cu radiation), to the larger volume probed with Mo and to the fact that the higher energy minimizes systematic pattern errors and the microabsorption effect.
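A hedged sketch of the Kullback-Leibler distance statistic mentioned above, applied to weighed versus Rietveld-derived phase compositions, is shown below; all compositions are invented for illustration and are not results from the study:

# Sketch: Kullback-Leibler distance between the weighed (as-prepared) composition
# and a Rietveld quantitative phase analysis result.
import numpy as np

def kl_distance(p, q):
    """D(p || q) for two discrete, normalized weight-fraction vectors."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

weighed = [0.50, 0.30, 0.19, 0.01]    # spiked mixture, wt fractions (placeholder)
rqpa_cu = [0.49, 0.31, 0.18, 0.02]    # hypothetical Cu-Kalpha1 result
rqpa_mo = [0.50, 0.30, 0.19, 0.01]    # hypothetical Mo-Kalpha1 result
print("KLD Cu:", kl_distance(weighed, rqpa_cu), " KLD Mo:", kl_distance(weighed, rqpa_mo))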
Abstract:
In the present study, embryotoxicity experiments using the sea urchin Lytechinus variegatus were carried out to better clarify the ecotoxicological effects of tributyltin (TBT) and triphenyltin (TPT) (recently banned antifouling agents), and Irgarol and Diuron (two of the new, commonly used booster biocides). Organisms were individually examined to evaluate the intensity and type of effects on embryo-larval development; this procedure is not commonly used, but it proved to be a potentially suitable approach for toxicity assessment. NOEC and LOEC values were similar for compounds of the same chemical class, and IC10 values were very close and showed overlapping confidence intervals between TBT and TPT, and between Diuron and Irgarol. In addition, IC10 values were similar to NOEC values. Regardless of this, the observed effects were different. Embryo development was interrupted at the gastrula and blastula stages at 1.25 and 2.5 µg l⁻¹ of TBT, respectively, whereas the pluteus stage was reached at the corresponding concentrations of TPT. Furthermore, embryos reached the prism and morula stages at 5 µg l⁻¹ of TPT and TBT, respectively. The effects induced by Irgarol were also more pronounced than those caused by Diuron. The pluteus stage was always reached at any tested Diuron concentration, while embryogenesis was interrupted at the blastula/gastrula stages at the highest concentrations of Irgarol. Therefore, this study proposes a complementary approach for interpreting embryo-larval responses that may be employed together with the traditional way of analysis. Consequently, this application leads to a more powerful ecotoxicological assessment tool focused on embryotoxicity.
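As an illustration of how an IC10 of the kind compared above can be obtained, the sketch below fits a two-parameter log-logistic dose-response curve and inverts it at the 10% effect level; all concentrations and responses are invented placeholders, not data from the study:

# Sketch: fit a log-logistic dose-response curve and compute the IC10.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, ec50, hill):
    """Fraction of normally developed larvae as a function of concentration c."""
    return 1.0 / (1.0 + (c / ec50) ** hill)

conc = np.array([0.156, 0.3125, 0.625, 1.25, 2.5, 5.0])       # ug/L (placeholder)
frac_normal = np.array([0.98, 0.95, 0.85, 0.55, 0.20, 0.05])   # placeholder responses

(ec50, hill), _ = curve_fit(log_logistic, conc, frac_normal, p0=[1.0, 2.0])
ic10 = ec50 * (0.1 / 0.9) ** (1.0 / hill)    # solve 1/(1+(c/EC50)^h) = 0.9 for c
print(f"EC50 = {ec50:.2f} ug/L   IC10 = {ic10:.2f} ug/L")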
Abstract:
Dissertation of the Integrated Master's in Veterinary Medicine
Abstract:
Master's dissertation, Qualidade em Análises, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2014
Abstract:
This document aims to improve the quality of production tests in vertical tanks with free-water drain pipes, through a device to control the draining system. The proposal consists of an interface detector close to the tank bottom and a control valve on the drain pipe, both connected to a remote supervisory system, which minimizes human influence on the conclusion of the test result. To give context, the work shows the importance of well production tests in monitoring and diagnosing the production process, reporting the large number of tests executed and the problems of the procedure currently adopted in the field. There are many possible sources of uncertainty in this kind of test, as shown by the experiments carried out in the field; the prototype proposed in this dissertation will be built in the field, based on the definition of the parameters and characteristics of the proposed devices. For a better definition of the draining process, the results of the assessment tests are presented, some of them adapted to aid understanding of the real process. The proposal details and the configuration to be used in the tank of the Monte Alegre field production station are presented, explaining the type of interface detector and the control system. This is the basis for a pilot project now under development, classified as a new technology and production improvement project of PETROBRAS in Rio Grande do Norte and Ceará. The dissertation concludes that automating the conventional test with the draining system will bring both economic and metrological benefits, because it reduces the uncertainty of test procedures with free-water draining and also decreases the number of problematic tests.
Abstract:
Cloud edge mixing plays an important role in the life cycle and development of clouds. Entrainment of subsaturated air affects the cloud at the microscale, altering the number density and size distribution of its droplets. The resulting effect is determined by two timescales: the time required for the mixing event to complete, and the time required for the droplets to adjust to their new environment. If mixing is rapid, evaporation of droplets is uniform and said to be homogeneous in nature. In contrast, slow mixing (compared to the adjustment timescale) results in the droplets adjusting to the transient state of the mixture, producing an inhomogeneous result. Studying this process in real clouds involves the use of airborne optical instruments capable of measuring clouds at the `single particle' level. Single particle resolution allows for direct measurement of the droplet size distribution. This is in contrast to other `bulk' methods (i.e. hot-wire probes, lidar, radar) which measure a higher order moment of the distribution and require assumptions about the distribution shape to compute a size distribution. The sampling strategy of current optical instruments requires them to integrate over a path tens to hundreds of meters to form a single size distribution. This is much larger than typical mixing scales (which can extend down to the order of centimeters), resulting in difficulties resolving mixing signatures. The Holodec is an optical particle instrument that uses digital holography to record discrete, local volumes of droplets. This method allows for statistically significant size distributions to be calculated for centimeter scale volumes, allowing for full resolution at the scales important to the mixing process. The hologram also records the three dimensional position of all particles within the volume, allowing for the spatial structure of the cloud volume to be studied. Both of these features represent a new and unique view into the mixing problem. In this dissertation, holographic data recorded during two different field projects is analyzed to study the mixing structure of cumulus clouds. Using Holodec data, it is shown that mixing at cloud top can produce regions of clear but humid air that can subside down along the edge of the cloud as a narrow shell, or advect down shear as a `humid halo'. This air is then entrained into the cloud at lower levels, producing mixing that appears to be very inhomogeneous. This inhomogeneous-like mixing is shown to be well correlated with regions containing elevated concentrations of large droplets. This is used to argue in favor of the hypothesis that dilution can lead to enhanced droplet growth rates. I also make observations on the microscale spatial structure of observed cloud volumes recorded by the Holodec.
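The two-timescale argument above is often condensed into a single Damköhler-type ratio; the following is a minimal sketch of that formulation in LaTeX notation (the symbols are generic, not the dissertation's own):

Da = \frac{\tau_{mix}}{\tau_{phase}}
% tau_mix:   time for the entrained subsaturated blob to be mixed down to small scales
% tau_phase: time for the droplets to evaporate / adjust to the new environment
% Da << 1  -> homogeneous mixing (all droplets respond to the fully mixed state)
% Da >> 1  -> inhomogeneous mixing (droplets respond to transient, local mixtures)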
Abstract:
Detection canines represent the fastest and most versatile means of illicit material detection. At its simplest, this research endeavor is the improvement of detection canines through training, training aids, and calibration. This study focuses on developing a universal calibration compound with which all detection canines, regardless of detection substance, can be tested daily to ensure that they are working within acceptable parameters. Surrogate continuation aids (SCAs) were developed for peroxide-based explosives, along with the validation of the SCAs already developed within the International Forensic Research Institute (IFRI) prototype surrogate explosives kit. Storage parameters of the SCAs were evaluated to give recommendations to the detection canine community on the best possible training aid storage solution that minimizes the likelihood of contamination. Two commonly used and accepted detection canine imprinting methods were also evaluated for the speed with which the canine is trained and for their reliability. As a result of this study, SCAs have been developed for explosive detection canine use covering peroxide-based explosives, TNT-based explosives, nitroglycerin-based explosives, tagged explosives, plasticized explosives, and smokeless powders. Through the use of these surrogate continuation aids, a more uniform and reliable system of training can be implemented in the field than is currently used today. By examining the storage parameters of the SCAs, an ideal storage system has been developed using three levels of containment to reduce possible contamination. The developed calibration compound will ease growing concerns over the legality and reliability of detection canine use by detailing the daily working parameters of the canine, allowing the Daubert rules of evidence admissibility to be applied. Through canine field testing, it has been shown that the IFRI SCAs outperform other commercially available training aids. Additionally, of the imprinting methods tested, no difference was found in the speed with which the canines are trained or in their reliability to detect illicit materials. Therefore, if the recommendations from this study are followed, the detection canine community will greatly benefit from the use of scientifically validated training techniques and training aids.
Abstract:
In the study of the spatial characteristics of the visual channels, the power spectrum model of visual masking is one of the most widely used. When the task is to detect a signal masked by visual noise, this classical model assumes that the signal and the noise are first processed by a bank of linear channels and that the power of the signal at threshold is proportional to the power of the noise passing through the visual channel that mediates detection. The model also assumes that this visual channel is the one with the highest ratio of signal power to noise power at its output. Accordingly, there are masking conditions where the highest signal-to-noise ratio (SNR) occurs in a channel centered on a spatial frequency different from the spatial frequency of the signal (off-frequency looking). Under these conditions the channel mediating detection could vary with the type of noise used in the masking experiment, and this could affect the estimation of the shape and the bandwidth of the visual channels. It is generally believed that notched noise, white noise and double bandpass noise prevent off-frequency looking, and that high-pass, low-pass and bandpass noises can promote it independently of the channel's shape. In this study, by means of a procedure that finds the channel that maximizes the SNR at its output, we performed numerical simulations using the power spectrum model to study the characteristics of masking caused by six types of one-dimensional noise (white, high-pass, low-pass, bandpass, notched, and double bandpass) for two types of channel shape (symmetric and asymmetric). Our simulations confirm that (1) high-pass, low-pass, and bandpass noises do not prevent off-frequency looking, (2) white noise satisfactorily prevents off-frequency looking independently of the shape and bandwidth of the visual channel, and, interestingly, we show for the first time that (3) notched and double bandpass noises prevent off-frequency looking only when the noise cutoffs around the spatial frequency of the signal match the shape of the visual channel (symmetric or asymmetric) involved in the detection. In order to test the explanatory power of the model with empirical data, we performed six visual masking experiments. We show that this model, with only two free parameters, fits the empirical masking data with high precision. Finally, we provide equations of the power spectrum model for the six masking noises used in the simulations and in the experiments.
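A minimal numerical sketch of the channel-selection rule described above (finding the channel that maximizes the SNR at its output) is given below; the channel shape and bandwidth, the noise spectrum and the signal frequency are placeholder assumptions, not the parameters used in the study:

# Sketch of the power spectrum model's channel-selection step: compute the output
# SNR of a bank of log-Gaussian channels and pick the channel with the highest SNR.
import numpy as np

f = np.linspace(0.25, 32.0, 4096)                    # spatial frequency axis (c/deg)
f_signal = 4.0                                        # signal frequency (placeholder)
signal_power = np.exp(-0.5 * ((np.log2(f) - np.log2(f_signal)) / 0.05) ** 2)  # narrowband signal
noise_power = np.where(f < 3.0, 1.0, 1e-6)            # low-pass masking noise (placeholder)

def channel_gain(f, fc, bw_oct=1.0):
    """Log-Gaussian channel tuning centred on fc with a given octave bandwidth (FWHM)."""
    sigma = bw_oct / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM in octaves -> sigma
    return np.exp(-0.5 * ((np.log2(f) - np.log2(fc)) / sigma) ** 2)

centers = 2.0 ** np.arange(0.0, 5.01, 0.05)           # candidate channel centres (c/deg)
snr = [(signal_power * channel_gain(f, fc) ** 2).sum()
       / (noise_power * channel_gain(f, fc) ** 2).sum() for fc in centers]
best = centers[int(np.argmax(snr))]
print(f"channel maximizing output SNR: {best:.2f} c/deg (signal at {f_signal} c/deg)")

With a low-pass masker and a 4 c/deg signal, the channel maximizing the output SNR sits above the signal frequency, which is the off-frequency-looking behaviour discussed in the abstract.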