915 results for CCD cameras
Abstract:
Extracts of the regional plants Annona squamosa and Annona muricata were analysed by silica gel thin-layer chromatography using suitable solvent systems and spray reagents. Carbohydrates, amino acids, alkaloids, flavonoids and terpenoids were detected in both species. These data agree with the literature on the phytochemistry of the Annonaceae.
Abstract:
A new electroanalytical method coupling TLC and DPV in the solid state was developed for the quantitative determination of phytoantioxidants of medicinal interest, e.g. rosmarinic acid (RA), in samples of phytopharmaceuticals such as rosemary (Rosmarinus officinalis L.). The method proved feasible, showing linearity over concentrations ranging from 0.694 × 10⁻³ to 9.526 × 10⁻³ mol L⁻¹ (r = 0.9945), good sensitivity, selectivity, reproducibility, repeatability, speed and affordable cost. The concentrations of RA in different rosemary extracts ranged from 0.05 to 0.52% (w/w), with high recovery levels when compared to HPLC.
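For illustration, a minimal Python sketch of the linear-calibration step that generally underlies this kind of voltammetric quantitation; the calibration standards and peak currents below are hypothetical placeholders, not data from the study.

```python
# Minimal linear-calibration sketch (assumed, not the authors' exact procedure).
import numpy as np

# Hypothetical calibration standards (mol L^-1) and DPV peak currents (uA)
conc = np.array([0.694e-3, 2.0e-3, 4.0e-3, 6.0e-3, 9.526e-3])
peak = np.array([0.41, 1.18, 2.35, 3.52, 5.60])

slope, intercept = np.polyfit(conc, peak, 1)   # least-squares calibration line
r = np.corrcoef(conc, peak)[0, 1]              # correlation coefficient

def quantify(signal):
    """Convert a measured peak current into an RA concentration."""
    return (signal - intercept) / slope

print(f"r = {r:.4f}, RA in sample = {quantify(2.9):.3e} mol L^-1")
```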
Abstract:
This study validated a simple, readily applicable method for determining the mycotoxins aflatoxin B1, aflatoxin B2, ochratoxin A, zearalenone and deoxynivalenol in water from the rice production chain. Five solvent combinations for extraction were tested, with quantification performed by TLC/HPTLC and confirmation by LC-MS/MS. Mycotoxins in water from the field and from rice industries were evaluated. Mycotoxin recovery levels were around 90%. Two samples of rice parboiling waste were contaminated (deoxynivalenol/aflatoxin B1, 110/9 ng mL⁻¹; and deoxynivalenol, 100 ng mL⁻¹). Zearalenone, deoxynivalenol and ochratoxin A (36, 30 and 28%, respectively) were carried into the soaking water during parboiling.
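A small sketch of the recovery calculation that typically lies behind figures such as the ~90% above; the spiked and measured values are hypothetical, not the study's data.

```python
# Recovery (%) = measured concentration / spiked concentration * 100 (illustrative).
def recovery_percent(measured_ng_ml: float, spiked_ng_ml: float) -> float:
    """Percentage of the spiked mycotoxin recovered after extraction."""
    return 100.0 * measured_ng_ml / spiked_ng_ml

# e.g. a water sample spiked with 100 ng mL^-1 deoxynivalenol, 91 ng mL^-1 found
print(f"{recovery_percent(91.0, 100.0):.1f} %")  # -> 91.0 %
```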
Abstract:
CCD/CBERS-2 images in spectral bands CCD2, CCD3 and CCD4, from 2004 and 2005, of Mirante do Paranapanema - SP, were converted to surface reflectance using the 5S atmospheric correction model and radiometrically normalized. The main objective was to spectrally characterize Brachiaria brizantha pasture areas at the flowering stage, both free of and infected with the disease "mela-das-sementes da braquiária", enabling its detection through comparison of surface reflectance values, expressed as the Surface Bidirectional Reflectance Factor (FRBS). A further objective was to evaluate the effectiveness of CCD/CBERS-2 images for obtaining spectral responses of pastures. Healthy and diseased Brachiaria brizantha canopies were identified by analysing the reflectance values and the Accumulative Relative Crop Water Stress Index (ACWSI) data obtained in the study area. The results indicated that the main differences were a decrease in reflectance in band CCD3 and an increase in reflectance in band CCD4 in the diseased areas. The methodology employed, using CCD/CBERS-2 sensor data combined with the ACWSI, proved effective in discriminating canopies infected with "mela-das-sementes da braquiária".
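For illustration, a minimal Python sketch of the kind of band comparison the abstract describes, where diseased canopies show lower CCD3 (red) and higher CCD4 (near-infrared) surface reflectance than healthy ones; the reflectance arrays are illustrative, not FRBS values from the study.

```python
# Illustrative band-mean comparison between healthy and diseased canopies.
import numpy as np

healthy = {"CCD3": np.array([0.045, 0.050, 0.048]), "CCD4": np.array([0.31, 0.33, 0.32])}
diseased = {"CCD3": np.array([0.038, 0.036, 0.040]), "CCD4": np.array([0.38, 0.40, 0.37])}

for band in ("CCD3", "CCD4"):
    delta = diseased[band].mean() - healthy[band].mean()
    print(f"{band}: mean difference (diseased - healthy) = {delta:+.3f}")
```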
Abstract:
In this work, free, publicly available databases on the World Wide Web (WEB) were used to evaluate the water surface areas of the Furnas and Funil reservoirs in the State of Minas Gerais. The objective was to compare the information obtained from the WEB databases with the areas calculated from images of the CCD sensor on board the CBERS2 and CBERS2B satellites. The area of the Furnas reservoir obtained from CCD/CBERS2B images from 2008 was 1,138 km², whereas in the databases consulted this area ranged from 1,182 to 1,503 km². The Funil reservoir, built in 2003, with a water surface of 29.37 km² and an island of 1.93 km², does not appear in the Atlas, Geominas, IGAM and IBGE databases. The results revealed some discrepancies in the databases published on the WEB, such as differences in areas and suppression or extrapolation of water surface boundaries. It was concluded that, to date, those responsible for some database publications in the State of Minas Gerais have not been sufficiently rigorous with updates. CCD/CBERS images, which are also public data available on the WEB, proved to be suitable products for verifying, updating and improving the published information.
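A minimal Python sketch of how a water-surface area can be derived from a classified CCD/CBERS scene: count the water pixels and multiply by the pixel footprint. The 20 m × 20 m pixel size of the CBERS CCD camera is assumed here, and the mask is a toy example, not a classified scene from the study.

```python
# Area of a water mask, assuming a 20 m CCD pixel (illustrative).
import numpy as np

PIXEL_AREA_KM2 = (20.0 * 20.0) / 1e6   # km^2 per 20 m pixel

def water_area_km2(water_mask: np.ndarray) -> float:
    """water_mask: boolean array, True where the pixel was classified as water."""
    return float(water_mask.sum()) * PIXEL_AREA_KM2

mask = np.zeros((1000, 1000), dtype=bool)
mask[200:800, 300:700] = True              # toy "reservoir" of 600 x 400 pixels
print(f"{water_area_km2(mask):.2f} km^2")  # -> 96.00 km^2
```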
Abstract:
Medium-resolution satellite images have been widely used for the identification and quantification of areas irrigated by center pivot. These areas, which are predominantly circular, can be easily identified by visual analysis of such images. In addition to identifying and quantifying areas irrigated by center pivot, other information associated with these areas is fundamental for producing cadastral maps. The goal of this work was to generate a cadastral map of areas irrigated by center pivots in the State of Minas Gerais, Brazil, in order to supply information on irrigated agriculture. CBERS2B/CCD satellite images were used to identify and quantify irrigated areas and then associate them with a database containing the irrigated area, perimeter, municipality, path/row, basin in which the pivot is located, and the date of image acquisition. A total of 3,781 center pivot systems were identified. The smallest irrigated area was 4.6 hectares and the largest was 192.6 hectares. The total estimated irrigated area was 254,875 hectares. The largest numbers of center pivots were found in the municipalities of Unaí and Paracatu, with 495 and 459 systems, respectively. Cadastral mapping is a very useful tool to support and enhance information on irrigated agriculture in the State of Minas Gerais.
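For illustration, a minimal Python sketch of the circle geometry behind a cadastre entry: from a pivot radius measured on the image, the irrigated area (ha) and perimeter (m) follow directly. The radius value is illustrative, not taken from the mapping.

```python
# Area and perimeter of a circular center-pivot field from its radius (illustrative).
import math

def pivot_attributes(radius_m: float) -> dict:
    area_ha = math.pi * radius_m ** 2 / 10_000.0   # m^2 -> hectares
    perimeter_m = 2.0 * math.pi * radius_m
    return {"area_ha": round(area_ha, 1), "perimeter_m": round(perimeter_m, 1)}

print(pivot_attributes(400.0))   # ~50.3 ha, ~2513.3 m
```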
Abstract:
Compact pulsators are stars showing intrinsic luminosity variations whose surface gravities exceed 100,000 cm/s². Among these objects are two families of pulsating hot subdwarf B (sdB) stars and four distinct families of pulsating white dwarfs. In order to observe the pulsations of such objects and then analyse their properties through asteroseismology, the Université de Montréal, in collaboration with the Imaging Technology Laboratory (ITL - University of Arizona), developed the Mont4K (Montreal4K) CCD camera, which has been, since spring 2007, the main detector used at the 1.55 m Kuiper telescope on Mt Bigelow (Steward Observatory, University of Arizona). With this setup, observations were carried out for several of these compact pulsators. The first target was HS 0702+6043, a hybrid pulsator. A major campaign on this object, carried out from 1 November 2007 to 14 March 2008, identified 28 pulsation modes and also revealed significant amplitude variations for some of these modes. Two further targets were the carbon-atmosphere pulsating white dwarfs of the "Hot DQ" type, SDSS J220029.08-074121.5 and SDSS J234843.30-094245.3. It was possible to show indirectly the presence of a strong magnetic field at the surface of J220029.08-074121.5 through the presence of the first harmonic of the main mode. Moreover, for both targets it was possible to conclude that they do indeed belong to the class of carbon-atmosphere pulsating white dwarfs.
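A minimal Python sketch of how pulsation modes are typically extracted from a photometric light curve: a Lomb-Scargle periodogram of the flux time series shows peaks at the pulsation frequencies. The synthetic light curve and frequencies are illustrative, not Mont4K data.

```python
# Periodogram of a synthetic, unevenly sampled light curve (illustrative).
import numpy as np
from scipy.signal import lombscargle

t = np.sort(np.random.uniform(0, 3600 * 4, 2000))        # 4 h of timestamps (s)
flux = 1.0 + 0.01 * np.sin(2 * np.pi * 2.8e-3 * t)        # one 2.8 mHz mode, 1% amplitude
flux += 0.002 * np.random.randn(t.size)                   # photometric noise

freqs = np.linspace(1e-4, 1e-2, 5000)                     # trial frequencies (Hz)
power = lombscargle(t, flux - flux.mean(), 2 * np.pi * freqs)
print(f"strongest peak near {freqs[power.argmax()] * 1e3:.2f} mHz")
```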
Abstract:
This thesis addresses the reconstruction of a 3D model from several images. The 3D model is built with a hierarchical voxel representation in the form of an octree. A cube enclosing the 3D model is computed from the camera positions. This cube contains the voxels and defines the positions of virtual cameras. The 3D model is initialized with a convex hull based on the uniform background colour of the images. This hull is used to carve the periphery of the 3D model. A weighted cost is then computed to evaluate how well each voxel belongs to the surface of the object. This cost takes into account the similarity of the pixels coming from each image associated with the virtual camera. Finally, for each virtual camera, a surface is computed from the cost using the SGM method. The SGM method takes the neighbourhood into account when computing depth, and this thesis presents a variation of the method that accounts for voxels previously excluded from the model by the initialization step or by carving with another surface. The computed surfaces are then used to carve and finalize the 3D model. This thesis presents an innovative combination of steps for creating a 3D model from an existing set of images, or from a sequence of images captured in series, which can lead to the creation of a 3D model in real time.
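For illustration, a minimal Python sketch of the hierarchical voxel representation described above: an octree node that can subdivide its cube into eight children and be carved away when it falls outside the object. The class name and the carving flag are illustrative, not the thesis implementation.

```python
# Minimal octree voxel node with subdivision and a carving flag (illustrative).
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    center: tuple          # (x, y, z) of the voxel centre
    size: float            # edge length of the cubic voxel
    children: list = field(default_factory=list)
    carved: bool = False   # True once removed from the model

    def subdivide(self):
        """Split this voxel into its eight octants."""
        h = self.size / 4.0
        cx, cy, cz = self.center
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), self.size / 2.0)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]

root = OctreeNode((0.0, 0.0, 0.0), 1.0)
root.subdivide()
print(len(root.children))   # 8
```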
Abstract:
Getting images from your mobile phone is best done using Bluetooth. Remember that the image quality on these phones will not be high, and you may find you can only print very small images; however, camera phones are great for ease of use, and the images look fine on screen.
Abstract:
Capsule: Avian predators are principally responsible. Aims: To document the fate of Spotted Flycatcher nests and to identify the species responsible for nest predation. Methods: During 2005-06, purpose-built, remote digital nest-cameras were deployed at 65 of 141 Spotted Flycatcher nests monitored in two study areas, one in south Devon and the second on the border of Bedfordshire and Cambridgeshire. Results: Of the 141 nests monitored, 90 were successful (non-camera nests: 49 of 76 successful; camera nests: 41 of 65). Fate was determined for 63 of the 65 nests monitored by camera, with 20 predation events documented, all of which occurred during daylight hours. Avian predators carried out 17 of the 20 predations, with the principal nest predator identified as the Eurasian Jay Garrulus glandarius. The only mammal recorded predating nests was the Domestic Cat Felis catus; the study therefore provided no evidence that Grey Squirrels Sciurus carolinensis are an important predator of Spotted Flycatcher nests. There was no evidence of differences in nest survival rates between nests with and without cameras. Nest remains following predation events gave little clue as to the identity of the predator species responsible. Conclusions: Nest-cameras can be useful tools in the identification of nest predators, and may be deployed with no subsequent effect on nest survival. The majority of predation of Spotted Flycatcher nests in this study was by avian predators, principally the Jay. There was little evidence of predation by mammalian predators. Identification of specific nest predators enhances studies of breeding productivity and predation risk.
Abstract:
This paper presents an image motion model for airborne three-line-array (TLA) push-broom cameras. Both aircraft velocity and attitude instability are taken into account in modeling image motion. The effects of aircraft pitch, roll, and yaw on image motion are analyzed based on geometric relations in designated coordinate systems. Image motion is mathematically modeled as image motion velocity multiplied by exposure time. A quantitative analysis of image motion velocity is then conducted in simulation experiments. The results show that image motion caused by aircraft velocity is space-invariant, while image motion caused by attitude instability is more complicated. Pitch, roll and yaw all contribute to image motion to different extents: pitch dominates the along-track image motion, and both roll and yaw contribute greatly to the cross-track image motion. These results provide a valuable basis for image motion compensation to ensure high-accuracy imagery in aerial photogrammetry.
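A minimal Python sketch of the two image-motion terms the abstract analyses: translational motion from aircraft ground speed and a small attitude rate (pitch/roll), each converted to a focal-plane velocity and multiplied by the exposure time. The focal length, speed, height and rate values are illustrative assumptions, not the paper's parameters.

```python
# Image motion = image motion velocity * exposure time (illustrative values).
import math

def forward_motion(focal_mm, speed_m_s, height_m):
    """Along-track image velocity (mm/s) from aircraft ground speed: f * V / H."""
    return focal_mm * speed_m_s / height_m

def attitude_motion(focal_mm, rate_deg_s):
    """Image velocity (mm/s) from a small pitch or roll rate: f * omega."""
    return focal_mm * math.radians(rate_deg_s)

exposure_s = 1.0 / 500.0
v_img = forward_motion(100.0, 70.0, 2000.0) + attitude_motion(100.0, 1.0)
print(f"image motion during exposure: {v_img * exposure_s * 1000:.2f} micrometres")
```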
Abstract:
This work presents a method of information fusion involving data captured by both a standard charge-coupled device (CCD) camera and a time-of-flight (ToF) camera to be used in the detection of the proximity between a manipulator robot and a human. Both cameras are assumed to be located above the work area of an industrial robot. The fusion of colour images and time-of-flight information makes it possible to know the 3D localisation of objects with respect to a world coordinate system, and at the same time provides their colour information. Considering that the ToF information given by the range camera contains inaccuracies including distance error, border error, and pixel saturation, some corrections to the ToF information are proposed and developed to improve the results. The proposed fusion method uses the calibration parameters of both cameras to reproject 3D ToF points, expressed in a common coordinate system for both cameras and a robot arm, into 2D colour images. In addition, using the 3D information, motion detection in an industrial robot environment is achieved, and the fusion of information is applied to the foreground objects previously detected. This combination of information results in a matrix that links colour and 3D information, giving the possibility of characterising an object by its colour in addition to its 3D localisation. Further development of these methods will make it possible to identify objects and their position in the real world and to use this information to prevent possible collisions between the robot and such objects.
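For illustration, a minimal Python sketch of the reprojection step: a 3D point expressed in the common world frame is mapped into the colour image using the CCD camera's extrinsics (R, t) and intrinsics K via the standard pinhole model. The matrices below are placeholder values, not the paper's calibration.

```python
# Pinhole reprojection of a 3D world point into the colour image (illustrative calibration).
import numpy as np

K = np.array([[800.0, 0.0, 320.0],      # intrinsics of the colour camera
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # extrinsic rotation (world -> camera)
t = np.array([0.0, 0.0, 0.0])            # extrinsic translation

def reproject(point_world: np.ndarray) -> tuple:
    """Project a 3D world point into pixel coordinates of the colour image."""
    p_cam = R @ point_world + t
    u, v, w = K @ p_cam
    return u / w, v / w

print(reproject(np.array([0.1, -0.05, 2.0])))   # -> (360.0, 220.0)
```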
Abstract:
In the last decade, several research results have presented formulations for the auto-calibration problem. Most of these have relied on the evaluation of vanishing points to extract the camera parameters. Normally, vanishing points are evaluated using pedestrians or the Manhattan World assumption, i.e. it is assumed that the scene is composed of orthogonal planar surfaces. In this work, we present a robust framework for auto-calibration, with improved results and better generalisability to real-life situations. This framework is capable of handling problems such as occlusions and the presence of unexpected objects in the scene. In our tests, we compare our formulation with the state of the art in auto-calibration based on pedestrians and on Manhattan World assumptions. This paper reports on experiments conducted using publicly available datasets; the results show that our formulation represents an improvement over the state of the art.
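A minimal Python sketch of the classic vanishing-point calibration that this line of work builds on: given two orthogonal vanishing points, square pixels, and a principal point assumed at the image centre, the focal length follows from f² = -(v1 - c)·(v2 - c). The pixel coordinates below are illustrative, not from the paper's datasets.

```python
# Focal length from two orthogonal vanishing points (principal point assumed known).
import math

def focal_from_vps(v1, v2, principal_point):
    """f^2 = -(v1 - c) . (v2 - c) for orthogonal vanishing directions."""
    cx, cy = principal_point
    dot = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if dot >= 0:
        raise ValueError("vanishing points not consistent with orthogonal directions")
    return math.sqrt(-dot)

# e.g. a 640x480 image with vanishing points estimated from scene structure
print(f"f = {focal_from_vps((900.0, 250.0), (-350.0, 230.0), (320.0, 240.0)):.1f} px")
```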