641 results for Cameras.
Abstract:
The work focused on preparing the NP EN ISO/IEC 17025 accreditation of the Metrology Laboratory of the company Frilabo to provide services in the field of temperature, namely the testing of thermal chambers and the calibration of industrial thermometers. Given the scope of the work, this thesis covers theoretical concepts of temperature and measurement uncertainty, as well as technical considerations on temperature measurement and uncertainty calculation. Considerations on the different types of thermal chambers and thermometers are also presented. The text presents the documents prepared by the author on the procedures for testing thermal chambers and the corresponding uncertainty calculation procedure, together with the documents prepared by the author on the procedures for calibrating industrial thermometers and the corresponding uncertainty calculation procedure. For both the thermal chamber tests and the thermometer calibrations, the author prepared flowcharts describing the temperature measurement methodology in the tests, the temperature measurement methodology in the calibrations, and the corresponding uncertainty calculations. The annexes contain several documents, such as the spreadsheet template for processing test data, the spreadsheet template for processing calibration data, the test report template, the calibration certificate template, and spreadsheets for client/equipment management and for the automatic numbering of test reports and calibration certificates, which meet the laboratory's management requirements. The annexes also include all the figures concerning temperature monitoring in the thermal chambers, as well as the figures showing the placement of the thermometers inside the chambers. All figures throughout the document that are not referenced were adapted or produced by the author. The decision to extend the scope of accreditation of Frilabo's Metrology Laboratory to thermometer calibration stemmed from the fact that, for a laboratory already accredited for testing in the field of temperature, establishing the traceability of the measurement standards in-house would allow an optimized and more cost-effective management of resources. The methodology for preparing the entire accreditation process of Frilabo's Metrology Laboratory was developed by the author and is set out throughout the thesis, including data relevant to achieving accreditation in both scopes. All the work will be evaluated by IPAC (Instituto Português de Acreditação), the body that grants accreditation in Portugal. This body will audit the company based on the procedures developed and on the results obtained, the most important of which is the Best Uncertainty Budget (BMI, from the Portuguese "Balanço da Melhor Incerteza"), also known as the Best Measurement Capability (MCM, "Melhor Capacidade de Medição"), both for the thermal chamber tests and for the thermometer calibrations, thereby complementing the services provided to Frilabo's loyal customers. Thermal chambers and industrial thermometers are widely used in many industrial sectors, in engineering, medicine, and education, and in research institutions, their purposes being, respectively, the simulation of specific controlled conditions and the measurement of temperature.
For accredited entities such as laboratories, it is essential that measurements performed with and on these types of equipment exhibit metrological reliability, since inadequate measurement results can lead to mistaken conclusions about the tests performed. The results obtained in the thermal chamber tests and in the thermometer calibrations are considered good and acceptable, since the best uncertainties obtained can be compared, through public consultation of the IPAC Technical Annex, with the uncertainties of other accredited laboratories in Portugal. From a more experimental standpoint, in thermal chamber testing the attainment of lower or higher uncertainties depends mostly on the behavior, characteristics, and state of repair of the chambers, which makes the temperature stabilization process inside them particularly relevant. Most of the uncertainty sources in thermometer calibration derive from the characteristics and manufacturer specifications of the equipment, and they contribute with equal weight to the calculation of the expanded uncertainty (the manufacturer's accuracy, the uncertainties inherited from calibration certificates, and the stability and uniformity of the thermal medium in which the calibrations are performed). In thermometer calibration, the lowest uncertainties are obtained for thermometers with finer resolutions. It was found that thermometers with a resolution of 1 °C did not detect the variations of the thermal bath. For thermometers with finer resolutions, the weight of the reading-dispersion contribution in the uncertainty calculation can vary with the characteristics of the thermometer; for example, thermometers with a resolution of 0.1 °C showed the largest contribution from the reading-dispersion component. It can be concluded that accrediting a laboratory is by no means an easy process. Some aspects can compromise the accreditation, such as a poor selection of the technician(s) and of the equipment that will perform the measurements (a poorly trained technician, equipment not suited to the measuring range or badly calibrated, etc.); if this selection is not done well, it will compromise the whole process in the subsequent steps. There must also be involvement of everyone in the laboratory (the quality manager, the technical manager, and the technicians); only then is it possible to reach the intended quality and the continuous improvement of the laboratory's accreditation. Another important aspect in preparing a laboratory accreditation is researching the documentation necessary and adequate for making correct decisions when drafting the procedures leading to it. The laboratory must demonstrate its competence through records. Finally, it can be said that competence is the key word of an accreditation, as it manifests itself in the people, equipment, methods, facilities, and other aspects of the institution to which the laboratory under accreditation belongs.
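To make the expanded-uncertainty calculation discussed above concrete, the following Python sketch combines typical contributions of a thermometer calibration point in the usual GUM fashion: standard uncertainties added in quadrature, then expanded with a coverage factor k = 2 (roughly 95 % coverage). All component values are hypothetical examples, not Frilabo's actual budget.

```python
import math

# Hypothetical uncertainty budget for one calibration point of an industrial
# thermometer. Each entry is a *standard* uncertainty in degrees Celsius.
components = {
    "reference standard (certificate, k=2)": 0.050 / 2,           # Type B
    "bath stability (half-width 0.020)": 0.020 / math.sqrt(3),    # rectangular
    "bath uniformity (half-width 0.030)": 0.030 / math.sqrt(3),   # rectangular
    "resolution 0.1 degC (half the digit)": 0.05 / math.sqrt(3),  # rectangular
    "reading dispersion (std. dev. of the mean)": 0.015,          # Type A
}

# GUM combination: quadrature sum, then expansion with k = 2.
u_c = math.sqrt(sum(u ** 2 for u in components.values()))
U = 2 * u_c

for name, u in components.items():
    print(f"{name:45s} u = {u:.4f} degC")
print(f"combined standard uncertainty u_c = {u_c:.4f} degC")
print(f"expanded uncertainty U (k = 2)    = {U:.4f} degC")
```

The rectangular (divide-by-root-3) terms show why resolution matters in the abstract's conclusions: a 1 °C digit alone would contribute about 0.29 °C and dominate the budget, while a 0.1 °C digit leaves the reading dispersion as a major component.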
Abstract:
Optical full-field measurement methods such as Digital Image Correlation (DIC) provide a new opportunity for measuring deformations and vibrations with high spatial and temporal resolution. However, application to full-scale wind turbines is not trivial: careful preparation of the experiment is vital, and sophisticated post-processing of the DIC results is essential. In the present study, a rotor blade of a 3.2 MW wind turbine is equipped with a random black-and-white dot pattern at four different radial positions. Two cameras are located in front of the wind turbine, and the response of the rotor blade is monitored using DIC for different turbine operating conditions. In addition, a Light Detection and Ranging (LiDAR) system is used to measure the wind conditions. Wind fields are created based on the LiDAR measurements and used to perform aeroelastic simulations of the wind turbine by means of advanced multibody codes. The results from the optical DIC system appear plausible when checked against common and expected results. In addition, the comparison of relative out-of-plane blade deflections shows good agreement between DIC results and aeroelastic simulations.
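The core of DIC is subset matching between a reference image and a deformed image of the speckle pattern. The toy Python/NumPy sketch below recovers an integer-pixel displacement by maximizing the zero-normalized cross-correlation; it is only an illustration of the matching principle, not the authors' processing chain, which also involves stereo calibration and subpixel refinement.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_subset(ref, cur, y, x, half=15, search=20):
    """Integer-pixel displacement of the subset centered at (y, x) in ref."""
    tmpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    best_score, best_uv = -np.inf, (0, 0)
    for dv in range(-search, search + 1):        # vertical shift
        for du in range(-search, search + 1):    # horizontal shift
            cand = cur[y + dv - half:y + dv + half + 1,
                       x + du - half:x + du + half + 1]
            if cand.shape != tmpl.shape:
                continue  # candidate window fell off the image
            score = zncc(tmpl, cand)
            if score > best_score:
                best_score, best_uv = score, (du, dv)
    return best_uv, best_score

# Toy check: a synthetic speckle image shifted by (du, dv) = (3, 5).
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
cur = np.roll(ref, shift=(5, 3), axis=(0, 1))
print(track_subset(ref, cur, 64, 64))  # -> ((3, 5), ~1.0)
```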
Abstract:
Gas-liquid two-phase flow is very common in industrial applications, especially in the oil and gas, chemical, and nuclear industries. As operating conditions change, such as the flow rates of the phases, the pipe diameter, and the physical properties of the fluids, different configurations called flow patterns take place. In oil production, the most frequent pattern is slug flow, in which continuous liquid plugs (liquid slugs) and gas-dominated regions (elongated bubbles) alternate. Offshore scenarios where the pipe lies on the seabed with slight changes of direction are extremely common. With those scenarios and issues in mind, this work presents an experimental study of two-phase gas-liquid slug flow in a duct with a slight change of direction, represented by a horizontal section followed by a downward-sloping pipe stretch. The experiments were carried out at NUEM (Núcleo de Escoamentos Multifásicos, UTFPR). The flow was initiated and developed under controlled conditions, and its characteristic parameters were measured with resistive sensors installed at four pipe sections. Two high-speed cameras were also used. From the measured results, the influence of a slight direction change on the slug flow structures and on the transition between slug flow and stratified flow in the downward section was evaluated.
Abstract:
The Control Camera IP application, developed as a final-year project at the ETS de Ingeniería Informática of the Universidad de Málaga, was conceived as a user interface for remotely monitoring and controlling IP cameras, capable of running on different platforms, including mobile devices with Android systems. At the time, however, Android platforms did not have an official library within the development framework used (the cross-platform Qt development library), so an unofficial alternative version called Necessitas Qt for Android was used. Today, with version 5 of Qt, Android platforms can be supported officially, so it is possible to adapt the application to this new version. In this bachelor's final project, the Control Camera IP application has been ported to Qt 5, thereby officially targeting Android devices. In addition, the OpenCV library is used to implement several processing methods for the image received from the IP camera, as well as motion detection and face detection algorithms, using computer vision techniques. Finally, the possibility of using standardized APIs for connecting the application to low-cost IP cameras is introduced, adapting some of their functions to the Control Camera IP application.
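The motion and face detection mentioned above can be prototyped with standard OpenCV building blocks. The Python sketch below is only an illustration of those two techniques (the actual application is Qt/C++, and the stream URL is hypothetical): frame differencing for motion, and a Haar cascade for faces.

```python
import cv2

# Haar cascade shipped with OpenCV; path assumes a standard opencv-python install.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("http://192.168.1.10/videostream.cgi")  # hypothetical IP camera URL
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Motion detection by differencing consecutive frames.
    if prev_gray is not None:
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 0.01 * mask.size:  # >1 % of pixels changed
            print("motion detected")
    prev_gray = gray

    # Face detection with a Haar cascade; draw a box around each hit.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Control Camera IP (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```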
Abstract:
In Robot-Assisted Rehabilitation (RAR), the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs), e.g., optical and electromagnetic, to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect the limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP on the rehabilitation exoskeleton. The GH joint angles are then estimated by combining the estimated marker poses with the exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch between the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drift, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR.
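A minimal sketch of the pose-chaining idea, under assumptions of my own (frame names and all numeric values are hypothetical, not from the paper): the camera pose comes from the exoskeleton forward kinematics, the marker pose from the optical MOCAP, and the limb pose follows by composing homogeneous transforms.

```python
import numpy as np

def homog(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H

def rot_z(a):
    """Rotation of angle a (rad) about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Illustrative chain (frames and numbers are hypothetical):
T_world_cam   = homog(rot_z(0.10), [0.30, 0.00, 1.20])   # camera pose from exoskeleton FK
T_cam_marker  = homog(rot_z(0.35), [0.05, 0.02, 0.60])   # marker pose from the optical MOCAP
T_marker_limb = homog(np.eye(3),   [0.00, -0.04, 0.00])  # anatomical calibration offset

# The limb pose in the world frame is the composition of the chain;
# joint angles would then be extracted from its rotation block.
T_world_limb = T_world_cam @ T_cam_marker @ T_marker_limb
print(T_world_limb.round(3))
```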
Abstract:
Nowadays, the latest generation of computers provides performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robotic task and an essential prerequisite for robots to move through their environments. Traditionally, mobile robots have used a combination of several sensors based on different technologies: lasers, sonars, and contact sensors have typically been used in mobile robotic architectures. However, color cameras are an important sensor, because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of the scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to a range of powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide colored 3D point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing, and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is available, furniture can be added using augmented reality techniques. In this thesis, we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric features. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, self-organizing maps have primarily been computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data.
This is because our target environments are man-made and therefore contain many points that belong to planar surfaces. Our proposed method achieves good compression results in such man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the effectiveness of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
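Plane detection as a compression primitive can be illustrated with a basic RANSAC fit. The Python sketch below is a generic textbook version (not the thesis' GPU implementation): it finds the dominant plane of a point cloud, whose inliers could then be stored as plane parameters plus a 2D footprint instead of individual 3D points.

```python
import numpy as np

def ransac_plane(points, n_iters=500, tol=0.01):
    """Dominant plane (unit normal n, offset d with n.p + d = 0) by RANSAC."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (nearly collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Toy cloud: noisy points on the plane z = 0 plus uniform outliers.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (200, 2)),
                             rng.normal(0.0, 0.002, 200)])
cloud = np.vstack([plane_pts, rng.uniform(-1, 1, (40, 3))])
(n, d), inliers = ransac_plane(cloud)
# Inlier points could now be replaced by (n, d) plus their 2D footprint.
print("normal:", n.round(3), " inliers:", int(inliers.sum()), "/", len(cloud))
```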
Abstract:
Animal welfare issues have received much attention, not only to meet the requirements of farmed animals, but also to address ethical and cultural public concerns. Daily collected information, as well as the systematic follow-up of production stages, produces important statistical data for production assessment and control, as well as for identifying improvement opportunities. In this scenario, this research study analyzed behavioral, production, and environmental data using multivariate principal component analysis, which correlated observed behaviors, recorded using video cameras and electronic identification, with performance parameters of female broiler breeders. The aim was to start building a system to support decision-making in broiler breeder housing based on bird behavioral parameters. Birds were housed in an environmental chamber with three pens under different controlled environments. Bird sensitivity to environmental conditions was indicated by their behaviors, stressing the importance of behavioral observations for modern poultry management. A strong association was found between performance parameters and behavior at the nest, suggesting that this behavior may be used to predict productivity. The behaviors of ruffling feathers, opening wings, preening, and being at the drinker were negatively correlated with environmental temperature, suggesting that an increase in the frequency of these behaviors indicates improved thermal welfare.
Abstract:
Ocean environmental monitoring and seafloor exploitation require in situ sensors and optical devices (cameras, lights) in various locations and on various carriers, in order to initialize and calibrate environmental models or to supervise underwater industrial processes. For more than 10 years, Ifremer has deployed in situ monitoring systems for various seawater parameters, as well as in situ observation systems based on lights and HD cameras. To be economically operational, these systems must be equipped with biofouling protection dedicated to the sensors and optical devices used in situ. Indeed, in less than 15 days [1], biofouling will modify the transducing interfaces of the sensors and cause unacceptable bias in the measurements provided by the in situ monitoring system. Likewise, biofouling will degrade the optical properties of windows, thus altering the lighting and the quality of the images recorded by the camera.
Abstract:
Recent advances in mobile phone cameras have poised them to take over compact hand-held cameras as the consumer's preferred camera option. Along with advances in the number of pixels, motion blur removal, face-tracking, and noise reduction algorithms play significant roles in the internal processing of these devices. An undesired effect of severe noise reduction is the loss of texture (i.e., low-contrast fine details) of the original scene. Current established methods for resolution measurement fail to accurately portray the texture loss incurred in a camera system, so the development of an accurate objective method to quantify the texture preservation or texture reproduction capability of a camera device is important. The 'Dead Leaves' target has been used extensively as a method to measure the modulation transfer function (MTF) of cameras that employ highly non-linear noise-reduction methods. This stochastic model consists of a series of overlapping circles with radii r distributed as r^(-3) and with uniformly distributed gray levels, which gives an accurate model of occlusion in a natural setting and hence mimics a natural scene. This target can be used to model the texture transfer through a camera system when a natural scene is captured. In the first part of our study, we identify various factors that affect the MTF measured using the 'Dead Leaves' chart, including variations in illumination, distance, exposure time, and ISO sensitivity, among others. We discuss the main differences between this method and existing resolution measurement techniques and identify its advantages. In the second part of this study, we propose an improvement to the current texture MTF measurement algorithm. High-frequency residual noise in the processed image contains the same frequency content as fine texture detail and is sometimes reported as such, leading to inaccurate results. A wavelet-thresholding-based denoising technique is used to model the noise present in the final captured image. This updated noise model is then used to calculate an accurate texture MTF. We present comparative results for both algorithms under various image capture conditions.
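The dead-leaves target is easy to synthesize from the description above. This Python sketch draws occluding disks with radii sampled from p(r) proportional to r^(-3) by inverse-transform sampling, with uniformly distributed gray levels; the chart parameters (size, radius range, disk count) are illustrative choices, not values from the study.

```python
import numpy as np

def dead_leaves(size=512, n_disks=5000, r_min=2.0, r_max=200.0, seed=0):
    """Render a dead-leaves chart: occluding disks, radii ~ r^-3, uniform gray."""
    rng = np.random.default_rng(seed)
    # Inverse-transform sampling of p(r) proportional to r^-3 on [r_min, r_max].
    u = rng.random(n_disks)
    r = (u / r_max ** 2 + (1.0 - u) / r_min ** 2) ** -0.5
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[:size, :size]
    for radius in r:  # later disks are painted on top, i.e. occlude earlier ones
        cx, cy = rng.uniform(0, size, 2)
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = rng.random()
    return img

chart = dead_leaves()
print(chart.shape, float(chart.min()), float(chart.max()))
```

The r^-3 radius law is what makes the chart approximately scale-invariant, so its power spectrum resembles that of natural scenes, which is why it is suitable for texture MTF measurement.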
Abstract:
Dissertation (Master's), Universidade de Brasília, Faculdade de Ciências da Saúde, Programa de Pós-Graduação em Ciências da Saúde, 2015.
Abstract:
Visual inputs to artificial and biological visual systems are often quantized: cameras accumulate photons from the visual world, and the brain receives action potentials from visual sensory neurons. Collecting more information quanta leads to a longer acquisition time and better performance. In many visual tasks, collecting a small number of quanta is sufficient to solve the task well. The ability to determine the right number of quanta is pivotal in situations where visual information is costly to obtain, such as photon-starved or time-critical environments. In these situations, conventional vision systems that always collect a fixed and large amount of information are infeasible. I develop a framework that judiciously determines the number of information quanta to observe based on the cost of observation and the requirement for accuracy. The framework implements the optimal speed versus accuracy tradeoff when two assumptions are met, namely that the task is fully specified probabilistically and constant over time. I also extend the framework to address scenarios that violate the assumptions. I deploy the framework to three recognition tasks: visual search (where both assumptions are satisfied), scotopic visual recognition (where the model is not specified), and visual discrimination with unknown stimulus onset (where the model is dynamic over time). Scotopic classification experiments suggest that the framework leads to dramatic improvement in photon-efficiency compared to conventional computer vision algorithms. Human psychophysics experiments confirmed that the framework provides a parsimonious and versatile explanation for human behavior under time pressure in both static and dynamic environments.
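The abstract does not give the decision rule, but the classical instance of an optimal speed-accuracy tradeoff for a binary, fully specified, time-invariant task is Wald's sequential probability ratio test (SPRT). The toy Python sketch below, offered only as an illustration of that idea and not as the author's framework, applies the SPRT to photon counts drawn from two hypothetical Poisson rates.

```python
import numpy as np

def sprt(observations, log_lr, log_a, log_b):
    """Wald's SPRT: accumulate log-likelihood ratios until a threshold is hit."""
    s = 0.0
    for n, x in enumerate(observations, start=1):
        s += log_lr(x)
        if s >= log_a:
            return "H1", n   # enough evidence for H1
        if s <= log_b:
            return "H0", n   # enough evidence for H0
    return "undecided", len(observations)

# Toy task: photon counts per frame, H0: rate 2.0 vs H1: rate 5.0 (Poisson).
lam0, lam1 = 2.0, 5.0
log_lr = lambda k: k * np.log(lam1 / lam0) - (lam1 - lam0)

alpha = beta = 0.01  # target error probabilities set the Wald thresholds
log_a, log_b = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

rng = np.random.default_rng(0)
decision, n_used = sprt(rng.poisson(lam1, 100), log_lr, log_a, log_b)
print(decision, "after", n_used, "quanta")  # typically stops after a few frames
```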
Abstract:
Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used for estimating the radiation dose for a therapy, evaluating the progression of a disease, and as prognostic indicators for predicting outcome. PET images have low resolution and high noise, and are affected by the partial volume effect (PVE). Manually segmenting each tumor is very cumbersome and very hard to reproduce. To solve this problem, I developed the iterative deconvolution thresholding segmentation (IDTS) algorithm, which segments the tumor, measures the FV, corrects for the PVE, and calculates the mAC. The algorithm corrects for the PVE without needing to estimate the camera's point spread function (PSF), and it does not require optimization for a specific camera. The algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution. It was also tested on irregularly shaped tumors with a heterogeneous activity profile, acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and different acquisition times (1-5 min). The algorithm was applied to ten clinical datasets, and the results were compared with manual segmentation and with the fixed-percentage thresholding methods T50 and T60, in which 50% and 60% of the maximum intensity, respectively, is used as the threshold. The average errors in the FV and mAC calculations were 30% and -35% for the 0.5 ml tumor, and about 5% for the 16 ml tumor. The overall FV error was about 10% for heterogeneous tumors in the physical and simulated phantom data. For the clinical images, the FV and mAC errors relative to manual segmentation were around -17% and 15%, respectively. In summary, the algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF, and it can also improve dose estimation and treatment planning.
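The baseline T50/T60 methods the author compares against are simple to state in code. The Python sketch below segments a toy PET-like volume at a fixed percentage of the maximum intensity and reports FV and mAC; IDTS itself, with its iterative deconvolution and PVE correction, is not reproduced here, and the toy volume and voxel size are illustrative.

```python
import numpy as np

def fixed_threshold_segment(vol, voxel_ml, pct=0.50):
    """T50/T60-style segmentation: keep voxels above pct of the maximum
    intensity, then report functional volume and mean activity concentration."""
    mask = vol >= pct * vol.max()
    fv_ml = mask.sum() * voxel_ml   # functional volume in ml
    mac = vol[mask].mean()          # mean activity concentration in the mask
    return mask, fv_ml, mac

# Toy example: a synthetic "hot sphere" (radius 8 voxels) in background activity.
z, y, x = np.mgrid[:64, :64, :64]
vol = 1.0 + 9.0 * ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 8 ** 2)
_, fv, mac = fixed_threshold_segment(vol, voxel_ml=0.008, pct=0.50)  # T50
print(f"FV = {fv:.1f} ml, mAC = {mac:.2f}")
```

On real PET data, such fixed-percentage thresholds are sensitive to noise, SBR, and the PVE near the tumor boundary, which is precisely the weakness the IDTS algorithm is designed to address.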
Abstract:
In the medical field, images obtained from high-definition cameras and other medical imaging systems are an integral part of medical diagnosis. The analysis of these images is usually performed by physicians, who sometimes need to spend long hours reviewing the images before they are able to come up with a diagnosis and decide on a course of action. In this dissertation, we present a framework for computer-aided analysis of medical imagery via the use of an expert system. While this problem has been discussed before, we consider a system based on mobile devices. Since the release of the iPhone in June 2007, the popularity of mobile devices has increased rapidly, and our lives have become more reliant on them. This popularity and the ease of development of mobile applications have made it possible to perform on these devices many of the image analyses that previously required a personal computer. All of this has opened the door to a whole new set of possibilities and freed physicians from their reliance on desktop machines. The approach proposed in this dissertation aims to capitalize on these opportunities by providing a framework for the analysis of medical images that physicians can use from their mobile devices, thus removing their reliance on desktop computers. We also provide an expert system to aid in the analysis and to advise on the selection of medical procedures. Finally, we allow other mobile applications to be developed by providing a generic mobile application development framework that brings other applications into the mobile domain. In this dissertation, we outline our work towards the development of the proposed methodology and the remaining work needed to find a solution to the problem. In order to make this difficult problem tractable, we divide it into three parts: the development of a user-interface modeling language and tooling, the creation of a game development modeling language and tooling, and the development of a generic mobile application framework. To make the problem more manageable, we narrow the initial scope to the hair transplant and glaucoma domains.
Abstract:
Lithium-ion (Li-ion) batteries have received attention in recent decades because of their indisputable advantages over other types of batteries. They are used in many of the devices we rely on in daily life, such as cell phones, laptop computers, cameras, and many other electronic devices. They are also used in smart grid technology, stand-alone wind and solar systems, Hybrid Electric Vehicles (HEVs), and Plug-in Hybrid Electric Vehicles (PHEVs). Despite the rapid increase in the use of Li-ion batteries, the scarcity of available battery models, together with the inadequate and very complex models developed by chemists, makes the lack of useful models a significant issue. A battery management system (BMS) aims to optimize the use of the battery, making the whole system more reliable, durable, and cost effective. Perhaps the most important function of the BMS is to provide an estimate of the State of Charge (SOC). SOC is the ratio of the available ampere-hours (Ah) in the battery to the total Ah of a fully charged battery. The Open Circuit Voltage (OCV) of a fully relaxed battery has an approximately one-to-one relationship with the SOC; therefore, if this voltage is known, the SOC can be found. However, the relaxed OCV can only be measured when the battery is relaxed and its internal chemistry has reached equilibrium. This thesis focuses on Li-ion battery cell modelling and SOC estimation. In particular, the thesis introduces a simple but comprehensive model for the battery and a novel online, accurate, and fast SOC estimation algorithm intended primarily for electric and hybrid-electric vehicles and microgrid systems. The thesis aims to (i) form a baseline characterization for dynamic modeling and (ii) provide a tool for use in state-of-charge estimation. The proposed modelling and SOC estimation schemes are validated through comprehensive simulation and experimental results.
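The SOC definition and the OCV relationship described above map directly onto the two standard estimation ingredients. The Python sketch below is a minimal illustration, not the thesis' algorithm: it initializes SOC from a rested OCV reading via table inversion (the OCV-SOC values are made-up placeholders) and then propagates SOC by coulomb counting.

```python
import numpy as np

# Hypothetical OCV-SOC table for a Li-ion cell (values illustrative only).
SOC_PTS = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])
OCV_PTS = np.array([3.00, 3.45, 3.60, 3.70, 3.85, 4.05, 4.20])  # volts

def soc_from_ocv(v_rest):
    """Invert the (monotonic) OCV curve of a fully relaxed cell."""
    return float(np.interp(v_rest, OCV_PTS, SOC_PTS))

def coulomb_count(soc0, current_a, dt_s, capacity_ah, eta=1.0):
    """Propagate SOC by integrating current (positive = discharge)."""
    soc = soc0 - eta * current_a * dt_s / (capacity_ah * 3600.0)
    return min(max(soc, 0.0), 1.0)  # clamp to the physical range

soc = soc_from_ocv(3.70)                              # rested OCV reading -> SOC = 0.5
soc = coulomb_count(soc, 2.0, 60.0, capacity_ah=2.5)  # 2 A discharge for 60 s
print(f"SOC = {soc:.3f}")
```

In practice the two ingredients are usually fused (e.g., by a Kalman-type observer), since coulomb counting drifts with current-sensor bias and the OCV is only observable at rest.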
Abstract:
The main objective of blasting is to produce optimum fragmentation for downstream processing. Fragmentation is usually considered optimum when the average fragment size is at a minimum and the fragment size distribution is as uniform as possible. One of the parameters believed to affect blasting fragmentation is the time delay between holes of the same row. Although a significant number of studies in the literature examine the relationship between time delay and fragmentation, their results have often been contradictory. The purpose of this work is to improve the understanding of how the time delay between holes of the same row affects fragmentation. Two series of experiments were conducted for this purpose. The first series involved tests on small-scale grout and granite blocks to determine the moment of burden detachment. The instrumentation used for these experiments consisted mainly of strain gauges and piezoelectric sensors, and some experiments were also recorded with a high-speed camera. It was concluded that the time of detachment for this specific setup is between 300 and 600 μs. The second series of experiments involved blasting a 2-meter-high granite bench, and its purpose was to determine the hole-to-hole delay that provides optimum fragmentation. The fragmentation results were assessed with image analysis software. Moreover, vibration was measured close to the blast, and the experiments were recorded with high-speed cameras. The results suggest that fragmentation was optimum when delays between 4 and 6 ms were used for this specific setup. It was also found that the moment at which gases first appear to vent from the face was consistently around 6 ms after detonation.