Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
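As a concrete illustration of the linear mixing model and its least-squares inversion, the following sketch (our own illustration, not code from the chapter) estimates abundance fractions for known endmember signatures, enforcing the sum-to-one constraint with the common row-augmentation trick; a full fully-constrained solver would additionally impose nonnegativity.

```python
import numpy as np

def unmix_scls(M, y, delta=1e3):
    """Sum-to-one constrained least-squares unmixing.

    M : (bands, p) matrix whose columns are endmember signatures.
    y : (bands,) observed mixed-pixel spectrum.
    The sum-to-one constraint on the abundances is enforced softly by
    appending a heavily weighted row of ones to the system.
    """
    Ma = np.vstack([M, delta * np.ones((1, M.shape[1]))])
    ya = np.append(y, delta)
    a, *_ = np.linalg.lstsq(Ma, ya, rcond=None)
    return a

# Synthetic example: two endmembers mixed with abundances 0.3 / 0.7.
rng = np.random.default_rng(0)
M = rng.random((50, 2))          # 50 spectral bands, 2 endmembers
a_true = np.array([0.3, 0.7])
y = M @ a_true                   # noise-free linear mixture
a_hat = unmix_scls(M, y)
print(np.round(a_hat, 3))        # → [0.3 0.7]
```

The recovered abundances sum to one and match the true fractions in this noise-free case; with noise, the estimate degrades gracefully in the least-squares sense.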
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than that defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
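The skewer-projection scoring step of PPI described above can be sketched as follows. This is a minimal illustration of the voting idea only: the MNF preprocessing is omitted and all names are ours.

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """Pixel Purity Index scores (skewer voting step only).

    X : (n_pixels, bands) array of spectral vectors.
    Each random skewer direction votes for the pixels whose projections
    are extreme (min and max); the purest pixels accumulate the most votes.
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros(X.shape[0], dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=X.shape[1])
        proj = X @ skewer
        scores[np.argmax(proj)] += 1
        scores[np.argmin(proj)] += 1
    return scores

# Toy data: 3 pure pixels (simplex vertices) plus strictly interior mixtures.
rng = np.random.default_rng(1)
E = np.eye(3)                           # three "endmembers" in 3 bands
A = rng.dirichlet(np.ones(3) * 5, 200)  # strictly mixed abundances
X = np.vstack([E, A @ E])
scores = ppi_scores(X)
# The pure pixels (rows 0-2) collect the extreme votes; mixtures get none.
print(scores[:3], scores[3:].max())
```

Because a linear functional over a simplex attains its extremes at the vertices, only the pure pixels can win a skewer's vote in this toy setup.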
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
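The iteration just described, projecting onto a direction orthogonal to the span of the endmembers found so far and taking the extreme of the projection, can be sketched as below. This is a simplified illustration under the pure-pixel assumption, not the full VCA algorithm (no SNR-dependent projection and no signal-subspace identification).

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """Simplified VCA-style endmember extraction (pure-pixel assumption).

    X : (n_pixels, bands). At each step, a random direction is made
    orthogonal to the span of the endmembers found so far; the pixel
    with the extreme projection becomes the next endmember.
    """
    rng = np.random.default_rng(seed)
    idx = []
    E = np.zeros((X.shape[1], 0))
    for _ in range(p):
        w = rng.normal(size=X.shape[1])
        if E.shape[1]:
            # Remove the component of w lying in span(E).
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        proj = X @ w
        idx.append(int(np.argmax(np.abs(proj))))
        E = X[idx].T
    return idx

# Toy data: three pure pixels plus strictly mixed interior pixels.
rng = np.random.default_rng(1)
E_true = np.eye(3)
A = rng.dirichlet(np.ones(3) * 5, 200)
X = np.vstack([E_true, A @ E_true])
idx = extract_endmembers(X, 3)
print(sorted(idx))  # → [0, 1, 2]
```

Each orthogonalization zeroes the projection of the endmembers already found, so every iteration necessarily picks a new simplex vertex.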
Abstract:
In the current socioeconomic landscape, cost containment and cuts in the funding of resource-consuming secondary services are driving public institutions to reformulate their processes and methods, as they seek to maintain their citizens' quality of life through more efficient and economical programs. The sustained growth of mobile technologies, together with the emergence of new human-computer interaction paradigms based on sensors and context-aware systems, has created business opportunities in the development of civic-oriented applications for individuals and companies, raising awareness of the value of providing citizen-oriented services. These business opportunities prompted the project team to develop an urban problem reporting platform for municipal entities, built on its geographic information system. The main goal of this research is the conception, design, and implementation of a complete solution for reporting non-urgent urban problems, distinguished from competing offerings by the ease with which citizens can report situations that affect their daily lives. To achieve this distinction, several studies were carried out to determine innovative features to implement, as well as all the basic functionality expected in this type of system. These studies led to the implementation of techniques for manually outlining problem areas and for automatically recognizing, in images, the type of problem reported, both developed within the scope of this project. To properly implement the outlining and image recognition modules, state-of-the-art surveys of these areas were conducted, supporting the choice of methods and technologies to integrate into the project.
In this context, the various phases of the platform's development process are presented in detail, from the study and comparison of tools, methodologies, and techniques for each of the concepts addressed, through the proposal of a resolution model, to a detailed description of the implemented algorithms. Finally, the performance of the developed algorithm/classifier pair is evaluated by defining metrics that estimate the success or failure of the object classifier. The evaluation is based on a set of test images, collected manually from public problem-reporting platforms, comparing the algorithm's results with the expected results.
Abstract:
The visual image is a fundamental component of epiphany, stressing its immediacy and vividness, corresponding to the enargeia of traditional ekphrasis and also playing with cultural and social meanings. Morris Beja, in his seminal book Epiphany in the Modern Novel, draws our attention to the distinction made by Joyce between the epiphany originating in a common object, in a discourse or gesture, and the one arising in “a memorable phase of the mind itself”. The latter type materializes in the “dream-epiphany” and in the epiphany based on memory. On the other hand, Robert Langbaum, in his study of the epiphanic mode, suggests that the category of “visionary epiphany” could account for the modern effect of an internally glowing vision like Blake's “The Tyger”, which projects the vitality of a real tiger. The short story, whose length renders it a fitting genre for the use of different types of epiphany, has dealt with the impact of the visual image in this technique, to convey different effects and different aesthetic aims. This paper presents some examples of this occurrence in short stories by authors in whose work epiphany is a fundamental concept and literary technique: Walter Pater, Joseph Conrad, K. Mansfield, and Clarice Lispector. Pater's “imaginary portraits” concentrate on “privileged moments” in the lives of the characters, depicting their impressions through pictorial language; Conrad tries to show “moments of awakening” that can be remembered by the eye; Mansfield suggests that epiphany, the “glimpse”, should replace plot as an internal ordering principle of her impressionist short stories; in Lispector, the visualization of some situations is so aggressive that it causes nausea and a radical revelation in the protagonist.
Abstract:
Work presented within the scope of the Mestrado em Engenharia Informática (Master's in Informatics Engineering), as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
This article follows from a workshop, “Une langue étrangère, un ordinateur, une image: c'est simple comme bonjour!”, held within the XXI Congress of the Associação Portuguesa dos Professores de Francês, Images et imaginaires pour agir. Its purpose was to disseminate, experiment with, and reflect on digital resources that can make a good contribution to the teaching and learning of French as a Foreign Language (FLE). It highlights the power of the image in the construction of knowledge, challenging creativity and new ways of teaching and learning. Teachers showed interest in the digital tools and recognized their importance and applicability in educational contexts. In this sense, the article presents software tools focused on developing speaking, reading, and writing skills in French as a foreign language and reports good practices for their use in the classroom, thus contributing to the renewal of the school.
Abstract:
A new biomimetic sensor for leucomalachite green, based on host-guest interactions and potentiometric transduction, is presented. The artificial host was imprinted in methacrylic acid or 2-acrylamido-2-methyl-1-propanesulfonic acid-based polymers. Molecularly imprinted particles were dispersed in 2-nitrophenyloctyl ether and trapped in poly(vinyl chloride). The potentiometric sensors exhibited a near-Nernstian response in steady-state evaluations, with slopes and detection limits ranging from 45.8 to 81.2 mV and 0.28 to 1.01 , respectively. They were independent of the pH of the test solutions within 3 to 5. Good selectivity was observed towards drugs that may contaminate water near fish cultures, such as oxycycline, doxycycline, enrofloxacin, trimethoprim, creatinine, chloramphenicol, and dopamine. The sensors were successfully applied to field monitoring of leucomalachite green in river samples. The method offered the advantages of simplicity, accuracy, applicability to colored and turbid samples, and automation feasibility.
Abstract:
A novel optical disposable probe for screening fluoroquinolones in fish farming waters is presented, having norfloxacin (NFX) as the target compound. The colorimetric reaction takes place at the solid/liquid interface between a plasticized PVC layer carrying the colorimetric reagent and the sample solution. NFX solutions dropped on top of this solid sensory surface produced a colour change from light yellow to dark orange. Several metals were tested as colorimetric reagents, and Fe(III) was selected. The main parameters affecting the obtained colour were assessed and optimised in both liquid and solid phases. The corresponding studies were conducted by visible spectrophotometry and digital image acquisition. The three coordinates of the HSL model of the collected image (Hue, Saturation and Lightness) were obtained by simple image management (enabled on any computer). The analytical response of the optimised solid-state optical probe against concentration was tested for several mathematical transformations of the colour coordinates. Linear behaviour was observed for the logarithm of NFX concentration against Hue + Lightness. Under this condition, the sensor exhibited a limit of detection below 50 μM (corresponding to about 16 mg/L). Visual inspection also enabled semi-quantitative information. Selectivity was ensured against drugs from chemical groups other than fluoroquinolones. Finally, a similar procedure was used to prepare an array of sensors for NFX, consisting of different metal species. Cu(II), Mn(II) and aluminon were selected for this purpose. The sensor array was used to detect NFX in aquaculture water, without any prior sample manipulation.
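The calibration workflow described above, extracting colour coordinates from an image and fitting Hue + Lightness linearly against the logarithm of concentration, can be sketched as follows. The RGB patch values and concentrations are illustrative assumptions, not the paper's measured data.

```python
import colorsys
import numpy as np

def hue_plus_lightness(rgb):
    """Map an (R, G, B) tuple in [0, 1] to Hue + Lightness (HSL)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)  # note: colorsys order is H, L, S
    return h + l

# Hypothetical calibration patches read from probe images at known NFX
# concentrations, yellow darkening to orange (illustrative values only).
conc_uM = np.array([50, 100, 250, 500, 1000], dtype=float)
patches = [(0.95, 0.90, 0.55), (0.95, 0.80, 0.45), (0.93, 0.66, 0.35),
           (0.90, 0.55, 0.28), (0.88, 0.42, 0.20)]
response = np.array([hue_plus_lightness(p) for p in patches])

# Linear calibration: Hue + Lightness against log10(concentration).
slope, intercept = np.polyfit(np.log10(conc_uM), response, 1)

# Invert the fit to estimate an unknown concentration from its colour.
unknown = hue_plus_lightness((0.94, 0.72, 0.40))
conc_est = 10 ** ((unknown - intercept) / slope)
print(round(conc_est))  # estimated concentration in μM
```

With darkening colours, both Hue and Lightness fall as concentration rises, so the fitted slope is negative and the inversion is well defined.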
Abstract:
A new tailor-made biomimetic sensor for chlorpromazine (CPZ), based on host-guest interactions and potentiometric transduction, is presented. The artificial host was imprinted within methacrylic acid, 2-vinyl pyridine and 2-acrylamido-2-methyl-1-propanesulfonic acid-based polymers. Molecularly imprinted particles were dispersed in 2-nitrophenyloctyl ether and entrapped in a poly(vinyl chloride) matrix. Slopes and detection limits ranged from 51 to 67 mV/decade and from 0.46 to 3.9 μg/mL, respectively, under steady-state conditions. The sensors were independent of the pH of test solutions within 2.0–5.5. Good selectivity was observed towards oxytetracycline, doxytetracycline, ciprofloxacin, enrofloxacin, nalidixic acid, sulfadiazine, trimethoprim, glycine, hydroxylamine, cysteine and creatinine. Analytical features in flowing media were evaluated on a double-channel manifold, with a carrier solution of 5.0 × 10−2 mol/L phosphate buffer. Near-Nernstian response was observed over the concentration range 1.0 × 10−4 to 1.0 × 10−2 mol/L. Average slopes were about 48 mV/decade. The sensors were successfully applied to field monitoring of CPZ in fish samples, offering the advantages of simplicity, accuracy, automation feasibility and applicability to complex samples.
Abstract:
A biomimetic sensor for norfloxacin is presented, based on host-guest interactions and potentiometric transduction. The artificial host was imprinted into polymers made from methacrylic acid and/or 2-vinyl pyridine. The resulting particles were entrapped in a plasticized poly(vinyl chloride) (PVC) matrix. The sensors exhibit a near-Nernstian response in steady-state evaluations, with detection limits ranging from 0.40 to 1.0 μg mL−1, and are independent of pH between 2 and 6 and between 8 and 11. Good selectivity was observed over several potential interferents. In flowing media, the sensors exhibit fast response, a sensitivity of 68.2 mV per decade, a linear range from 79 μM to 2.5 mM, a detection limit of 20 μg mL−1, and a stable baseline. The sensors were successfully applied to field monitoring of norfloxacin in fish samples, biological samples, and pharmaceutical products.
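The "mV per decade" figures quoted in these potentiometric abstracts come from fitting the Nernst-type response E = E0 + S·log10(a) to a calibration series. A minimal sketch with illustrative (not measured) data, chosen so the fitted slope lands near the 68.2 mV/decade sensitivity reported above:

```python
import numpy as np

# Hypothetical potentiometric calibration of an ion-selective sensor:
# electrode potential E (mV) recorded at known analyte activities.
# A near-Nernstian sensor obeys E = E0 + S * log10(a), with S near
# 59.2/z mV per decade at 25 °C for an ion of charge z.
activity = np.array([1e-5, 1e-4, 1e-3, 1e-2])       # mol/L
potential = np.array([120.0, 188.0, 256.5, 324.5])  # mV (illustrative)

# Least-squares fit of E against log10(activity).
S, E0 = np.polyfit(np.log10(activity), potential, 1)
print(round(S, 1))  # → 68.2 (slope in mV per decade)
```

The detection limit is then read off as the activity where the measured curve departs from this fitted line.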
Abstract:
III Jornadas de Electroquímica e Inovação (Electroquímica e Nanomateriais), held at the Universidade de Trás-os-Montes e Alto Douro, Vila Real, 16–17 September 2013.
Abstract:
Graduate Student Symposium on Molecular Imprinting 2013, held at Queen’s University, Belfast, United Kingdom, 15–17 August 2013.
Abstract:
Currently, due to the widespread use of computers and the internet, students are trading libraries for the World Wide Web and laboratories for simulation programs. In most courses, simulators are made available to students and can be used to verify theoretical results or to test hardware/products under development. Although this is an attractive solution (low cost, and an easy and fast way to carry out some coursework), it has major disadvantages. As everything is currently done with, and in, a computer, students are losing the "feel" for the real values of physical magnitudes. For instance, in engineering studies, and mainly in the first years, students need to learn electronics, algorithmics, mathematics and physics. All of these areas can use numerical analysis software, simulation software or spreadsheets, and in the majority of cases the data used are either simulated or random numbers, but real data could be used instead. For example, if a course uses numerical analysis software and needs a dataset, the students can learn to manipulate arrays. Also, when using spreadsheets to build graphics, instead of using a random table, students could use a real dataset based, for instance, on the room temperature and its variation across the day. In this work we present a framework with a simple interface that can be used across different courses where computers are part of the teaching/learning process, in order to give students a more realistic feel by using real data. The framework is based on a set of low-cost sensors for different physical magnitudes, e.g. temperature, light and wind speed, which are connected to a central server that the students access over Ethernet, or are connected directly to the student's computer/laptop. These sensors use the communication ports available, such as serial ports, parallel ports, Ethernet or Universal Serial Bus (USB).
Since a central server is used, the students are encouraged to use the sensor values in their different courses and consequently in different types of software, such as numerical analysis tools and spreadsheets, or simply inside any programming language whenever a dataset is needed. To this end, small pieces of hardware were developed, each containing at least one sensor and using a different type of computer communication. As long as the sensors are attached to a server connected to the internet, these tools can also be shared between different schools. This allows sensors that are not available at a given school to be used by getting the values from other places that share them. Another remark is that students in the more advanced years, with (theoretically) more know-how, can build new sensor modules in courses related to electronics development and expand the framework further. The final solution is very attractive: low cost, simple to develop, and flexible, since the same materials can be reused in several courses, bringing real-world data into the students' computer work.
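The client/server idea above can be sketched as a student program fetching a live sensor reading from the central server over a TCP connection. The plain-text protocol, host and port are assumptions for illustration; a local stub thread stands in for the real sensor server.

```python
import socket
import threading

# Hypothetical stub standing in for the framework's central sensor
# server: it answers each connection with the latest reading.
def serve_reading(sock, value):
    conn, _ = sock.accept()
    conn.sendall(f"{value}\n".encode())
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # bind to any free local port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_reading, args=(server, 21.5)).start()

# Student-side client: fetch a real sensor value to use as a dataset.
with socket.create_connection(("127.0.0.1", port)) as client:
    temperature = float(client.makefile().readline())
server.close()
print(temperature)  # → 21.5
```

The same few lines work from a spreadsheet macro or any language with sockets, which is what lets one physical sensor feed many different courses.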
Abstract:
It is well known that ROVs require human intervention to guarantee the success of their assignment, as well as the safety of the equipment. However, since their teleoperation is quite complex to perform, there is a need for assisted teleoperation. This study takes on this challenge by developing vision-based assisted teleoperation maneuvers, since a standard camera is present in any ROV. The proposed approach is a visual servoing solution that allows the user to select between several standard image processing methods, applied to a 3-DOF ROV. The most interesting characteristic of the presented system is the exclusive use of camera data to improve the teleoperation of an underactuated ROV. Through the comparison and evaluation of standard implementations of different vision methods, and the execution of simple maneuvers to acquire experimental results, it is demonstrated that the teleoperation of a small ROV can be drastically improved without the need to install additional sensors.
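The core of a visual servoing loop of this kind is a proportional law that turns the pixel error of a tracked image feature into velocity commands. The sketch below is our own toy illustration for a 3-DOF vehicle, not the study's actual controller: the feature encoding (x, y, apparent size), the axis mapping, and the gain are all assumptions.

```python
import numpy as np

def servo_step(target_px, feature_px, gain=0.5):
    """One proportional visual-servoing step for a 3-DOF vehicle.

    Maps the pixel error of a tracked image feature to velocity
    commands: x-offset drives yaw, y-offset drives heave, and the
    apparent-size error drives surge (distance to the target).
    """
    error = np.asarray(target_px, float) - np.asarray(feature_px, float)
    yaw, heave, surge = gain * error
    return yaw, heave, surge

# Toy closed loop with an idealized plant: each command moves the
# feature proportionally, so the pixel error shrinks geometrically.
feature = np.array([80.0, 40.0, 10.0])    # current (x, y, size)
target = np.array([160.0, 120.0, 30.0])   # desired (image centre, size)
for _ in range(20):
    cmd = servo_step(target, feature)
    feature += np.array(cmd)
print(np.round(feature, 2))  # → [160. 120.  30.]
```

A real loop replaces the idealized plant with the ROV dynamics and the feature tracker, but the camera-only structure of the controller is the same.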
Abstract:
In a time of fierce competition between regions, an image serves as a basis for developing a strong sense of community, which fosters trust and cooperation that can be mobilized for regional growth. A positive image and reputation can be used in the promotional activities of the region, benefiting all the stakeholders as a whole. Mega cultural events are frequently used to attract tourists and investment to a region, but also to enhance a city's image. This study adopts a marketing/communication perspective on city image and intends to explain how the image of the city is perceived by its residents. Specifically, we compare the perceptions of residents who actively participated in the Guimarães European Capital of Culture (ECOC) 2012 (engaged residents) with those of residents who merely attended the event (attendees). Several significant findings are reported, and their implications for event managers and public policy administrators are presented, along with the limitations of the study.
Abstract:
This work aims to evaluate the feasibility of using image-based cytometry (IBC) for algal cell quantification and viability analysis, using Pseudokirchneriella subcapitata as a cell model. Cell concentration determined by IBC was linear over the range 1 × 105 to 8 × 106 cells mL−1. Algal viability was defined on the basis that the intact membrane of viable cells excludes the SYTOX Green (SG) probe. The disruption of membrane integrity represents irreversible damage and consequently results in cell death. Using IBC, we were able to successfully discriminate between live (SG-negative) and dead algal cells (heat-treated at 65 °C for 60 min; SG-positive). The observed viability of algal populations containing different proportions of killed cells was well correlated (R2 = 0.994) with the theoretical viability. The use of this technology was validated by exposing algal cells of P. subcapitata to a copper stress test for 96 h. IBC allowed us to follow the evolution of cell concentration and the viability of copper-exposed algal populations. This technology overcomes several drawbacks usually associated with microscopy counting, such as labour-intensive and tedious work and poor representativeness of the counts. In conclusion, IBC allowed fast and automated determination of the total number of algal cells and analysis of their viability. This technology can provide a useful tool for a wide variety of fields that utilise microalgae, such as aquatic toxicology and biotechnology.
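The live/dead discrimination described above amounts to thresholding the per-cell SYTOX Green intensity: cells below the threshold are scored as live. A minimal sketch with synthetic intensities; the threshold value and the intensity distributions are illustrative assumptions, not the study's data.

```python
import numpy as np

def viability(sg_intensity, threshold):
    """Percent viable cells. SYTOX Green stains only membrane-
    compromised (dead) cells, so cells whose fluorescence stays
    below the threshold are counted as live."""
    sg = np.asarray(sg_intensity)
    return 100.0 * np.mean(sg < threshold)

# Illustrative population: 70 dim (live) and 30 bright (dead) cells,
# in arbitrary fluorescence units with well-separated distributions.
rng = np.random.default_rng(0)
live = rng.normal(50, 10, 70)
dead = rng.normal(500, 50, 30)
print(viability(np.concatenate([live, dead]), 200.0))  # → 70.0
```

Repeating this over mixtures with known killed-cell proportions yields the observed-versus-theoretical viability correlation reported in the abstract.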