909 results for Concept-based Terminology
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10].

Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and the interaction among distinct endmembers is negligible [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27].

Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which does not hold for hyperspectral data: since the abundance fractions sum to a constant, they are statistically dependent. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift-wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm it uses must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) to an exemplar is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
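To make the projection step concrete, the following minimal Python/NumPy sketch renders the iterative extraction loop just described. It is an illustration under stated assumptions, not the chapter's exact implementation: the data matrix is assumed to be already reduced to the p-dimensional signal subspace, and the random-direction and projector details are simplified.

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Minimal sketch of the VCA endmember-extraction loop.

    Y : (L, N) array of N spectral vectors (assumed already projected
        onto the signal subspace); p : number of endmembers.
    Returns the indices of the pixels selected as endmembers.
    """
    rng = np.random.default_rng(seed)
    L = Y.shape[0]
    E = np.zeros((L, p))              # endmember signatures found so far
    indices = []
    for i in range(p):
        # direction orthogonal to the subspace spanned by current endmembers
        w = rng.standard_normal(L)
        f = (np.eye(L) - E @ np.linalg.pinv(E)) @ w
        f /= np.linalg.norm(f)
        # the new endmember is the extreme of the projection onto f
        k = int(np.argmax(np.abs(f @ Y)))
        indices.append(k)
        E[:, i] = Y[:, k]
    return indices
```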
Abstract:
Text based on the paper presented at the conference "Autonomous Systems: Inter-Relations of Technical and Societal Issues", held at Monte de Caparica (Portugal), Universidade Nova de Lisboa, on November 5–6, 2009, and organized by IET-Research Centre on Enterprise and Work Innovation.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
The wide use of antibiotics in aquaculture has led to the emergence of resistant microbial species. This should be avoided or minimized by controlling the amount of drug employed in fish farming. For this purpose, the present work proposes test-strip papers for the detection and semi-quantitative determination of organic drugs by visual comparison of color changes, in an analytical procedure similar to that of pH monitoring with universal pH paper. This is done by establishing suitable chemical changes upon the cellulose, giving the paper the ability to react with the organic drug and produce a color change. Quantitative data are also enabled by taking a picture and applying a suitable mathematical treatment to the color coordinates given by the HSL system used by Windows. As a proof of concept, this approach was applied to oxytetracycline (OXY), one of the antibiotics frequently used in aquaculture. A bottom-up modification of the paper was established, starting with the reaction of the glucose moieties on the paper with 3-triethoxysilylpropylamine (APTES). The amine layer so formed allowed binding of a metal ion by coordination chemistry, and the metal ion in turn reacted with the drug to produce a colored compound. The most suitable metals for this modification were selected by bulk studies, and the several stages of the paper modification were optimized to produce an intense color change against the concentration of the drug. The paper strips were applied to the analysis of spiked environmental water, allowing quantitative determination of OXY concentrations as low as 30 ng/mL. Overall, this work provided a simple method to screen and discriminate tetracycline drugs in aquaculture, a promising tool for local, quick, and cheap monitoring of drugs.
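As an illustration of the picture-based readout, the hypothetical Python sketch below extracts an HSL hue from a cropped strip image and fits a calibration line; the hue-versus-log-concentration relationship and all function names are assumptions for illustration, not the procedure reported in the work.

```python
import colorsys
import numpy as np

def strip_hue(rgb_pixels):
    """Mean hue (degrees) of the reacted strip area; rgb_pixels is an
    (n, 3) array of 0-255 RGB values cropped from the photograph."""
    r, g, b = rgb_pixels.mean(axis=0) / 255.0
    h, l, s = colorsys.rgb_to_hls(r, g, b)   # colorsys orders the triple HLS
    return h * 360.0

def fit_calibration(hues, conc_ng_ml):
    """Hypothetical calibration: hue assumed linear in log10(concentration)."""
    slope, intercept = np.polyfit(np.log10(conc_ng_ml), hues, 1)
    return slope, intercept
```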
Abstract:
Astringency is an organoleptic property of beverages and food products resulting mainly from the interaction of salivary proteins with dietary polyphenols. It is of great importance to consumers, but the only effective way of measuring it involves trained sensory panellists, providing subjective and expensive responses. Concurrent chemical evaluations try to screen food astringency by means of polyphenol and protein precipitation procedures, but these are far from the real human astringency sensation, where not all polyphenol–protein interactions lead to the occurrence of precipitate. Here, a novel chemical approach that tries to mimic protein–polyphenol interactions in the mouth is presented to evaluate astringency. A protein, acting as a salivary protein, is attached to a solid support to which the polyphenol binds (just as happens when drinking wine), with a subsequent colour alteration that is fully independent of the occurrence of precipitate. Employing this simple concept, Bovine Serum Albumin (BSA) was selected as the model salivary protein and used to cover the surface of silica beads. Tannic Acid (TA), employed as the model polyphenol, was allowed to interact with the BSA on the silica support, and its adsorption to the protein was detected by reaction with Fe(III) and subsequent colour development. Quantitative data on TA in the samples were extracted by colorimetric or reflectance studies over the solid materials. The colorimetric analysis was done by taking a regular picture with a digital camera, opening the image file in common software, and extracting the colour coordinates from the HSL (Hue, Saturation, Lightness) and RGB (Red, Green, Blue) colour model systems; linear ranges were observed from 10.6 to 106.0 μmol L−1. The reflectance approach was based on the Kubelka–Munk response, showing a linear gain with concentrations from 0.3 to 10.5 μmol L−1. In either of these two approaches, semi-quantitative estimation of TA was enabled by direct eye comparison. The correlation between the levels of adsorbed TA and the astringency of beverages was tested by using the assay to check the astringency of wines and comparing the results to the responses of sensory panellists. The results of the two methods correlated well. The proposed sensor has significant potential as a robust tool for the quantitative/semi-quantitative evaluation of astringency in wine.
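The Kubelka–Munk response mentioned above is the remission function F(R) = (1 − R)² / (2R), which for an opaque, diffusely scattering layer grows approximately linearly with absorber concentration. A minimal sketch follows; the array handling and usage note are illustrative, not the work's exact data treatment.

```python
import numpy as np

def kubelka_munk(reflectance):
    """Remission function F(R) = (1 - R)^2 / (2R) for reflectance R
    expressed as a fraction (0 < R <= 1)."""
    R = np.asarray(reflectance, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

# e.g., a linear calibration over the reported range (0.3-10.5 umol/L)
# would map F(R) readings to TA concentration.
```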
Abstract:
This paper discusses the added value of a methodological design grounded in a concept-based approach to Terminology, applied to the harmonization of the definition of the most promising educational scenario in today's higher education: blended learning. As Terminology is a discipline concerned with the representation, description, and definition of specialized knowledge through language, the essence of this field of knowledge answers a fundamental need of today's society: putting order into our universe, in the words of Nuopponen (2011). In this context, concepts, as elements of the structure of knowledge (Sager, 1990), constitute a research object of considerable complexity: although language is postulated to be a fundamental tool for describing and organizing knowledge, the principle of isomorphism cannot be taken for granted. The concept-based approach in Terminology proposes a precise view of the role of language in terminological work, its basic premise being that there is no one-to-one correspondence between the atomistic elements of knowledge and the elements of linguistic expression. For these reasons, methodological options confined to the analysis of specialized text will be considered imprecise. This reflection argues that the key concept of a concept-based approach to terminological work implies combining a process of tacit-knowledge elicitation, through concept-oriented discursive negotiation, with the analysis of textual corpora. Consequently, it is argued that the interaction strategies between the terminologist and the domain expert deserve detailed attention, as they are strongly reflected in the quality of the results obtained. Accordingly, the methodological model we propose rests on three stages that favor a refinement of this interaction, allowing the terminologist to act as a conceptualizing, decision-making, and intervening agent: (1) an exploratory stage of the domain under study; (2) a stage of onomasiological analysis of textual and discursive evidence; (3) a stage of modeling and validation of results. The productivity of a cyclical sequence between textual and discursive analysis for onomasiological purposes, collaborative interaction, and introspection will be defended.
Abstract:
Quality of life is a concept influenced by social, economic, psychological, spiritual, and medical factors. More specifically, the perceived quality of an individual's daily life is an assessment of their well-being or lack thereof. In this context, information technologies may help in the management of healthcare services for chronic patients, for example by estimating a patient's quality of life and helping the medical staff take appropriate measures to increase it. This paper describes a quality-of-life estimation system developed using information technologies and the application of data mining algorithms to the clinical data of cancer patients from the Otorhinolaryngology and Head and Neck services of an oncology institution. The system was evaluated with a sample of 3013 patients. The results show that some variables may be significant predictors of a patient's quality of life: years of smoking (p value 0.049) and size of the tumor (p value < 0.001). For classifying quality of life from these variables, the best accuracy was obtained with John Platt's sequential minimal optimization algorithm for training a support vector classifier. In conclusion, data mining techniques give physicians access to additional patient information, helping them assess quality of life and make well-informed clinical decisions.
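For context, the sketch below shows a comparable classifier setup in Python with scikit-learn, whose SVC backend (libsvm) trains with an SMO-type solver in the spirit of Platt's algorithm cited above. The toy feature rows (years of smoking, tumor size in cm) and labels are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# hypothetical rows: [years of smoking, tumor size in cm]; labels = QoL class
X = np.array([[20, 3.1], [0, 1.2], [35, 4.5], [5, 0.8], [15, 2.0], [2, 1.0]])
y = np.array([0, 1, 0, 1, 0, 1])

# scale features, then train an RBF support vector classifier
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=3)   # cross-validated accuracy
```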
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
OBJECTIVE: To empirically test, based on a large multicenter, multinational database, whether a modified PIRO (predisposition, insult, response, and organ dysfunction) concept could be applied to predict mortality in patients with infection and sepsis. DESIGN: Substudy of a multicenter multinational cohort study (SAPS 3). PATIENTS: A total of 2,628 patients with signs of infection or sepsis who stayed in the ICU for >48 h. Three boxes of variables were defined, according to the PIRO concept. Box 1 (Predisposition) contained information about the patient's condition before ICU admission. Box 2 (Injury) contained information about the infection at ICU admission. Box 3 (Response) was defined as the response to the infection, expressed as a Sequential Organ Failure Assessment score after 48 h. INTERVENTIONS: None. MAIN MEASUREMENTS AND RESULTS: Most of the infections were community acquired (59.6%); 32.5% were hospital acquired. The median age of the patients was 65 (50–75) years, and 41.1% were female. About 22% (n=576) of the patients presented with infection only, 36.3% (n=953) with signs of sepsis, 23.6% (n=619) with severe sepsis, and 18.3% (n=480) with septic shock. Hospital mortality was 40.6% overall, greater in those with septic shock (52.5%) than in those with infection (34.7%). Several factors related to predisposition, infection, and response were associated with hospital mortality. CONCLUSION: The proposed three-level system, by using objectively defined criteria for risk of mortality in sepsis, could be used by physicians to stratify patients at ICU admission or shortly thereafter, contributing to a better selection of management according to the risk of death.
Abstract:
Today, a marked process of technological evolution can be observed across the globe. Companies, whether small, medium-sized, or large, are increasingly dependent on computerized systems to carry out their business processes, and consequently on the generation of business information, where the data often bear no relationship to one another. Most conventional computer systems are not designed to manage and store strategic information, which prevents that information from serving as a strategic resource. Decisions are therefore made based on the experience of managers, when they could be based on historical facts stored by the various systems. In general, organizations have a great deal of data but in most cases extract little information from it, which is a problem in competitive markets. As organizations seek to evolve and outperform the competition in decision making, the term Business Intelligence (BI) arises in this context. GisGeo Information Systems is a company that develops GIS-based (geographic information systems) software following an open-source tool philosophy. Its main product is based on the geographic location of various types of vehicles, on data collection, and consequently on its analysis (kilometers traveled, duration of a trip between two defined points, fuel consumption, etc.). This is the setting for this project, whose goal is to give a different perspective on the existing data, crossing BI concepts with the system implemented at the company in accordance with its philosophy. The project covers some of the most important concepts underlying BI, such as the dimensional model, the data warehouse, the ETL process, and OLAP, following Ralph Kimball's methodology, as sketched below. Some of the main open-source tools on the market are also studied, along with their advantages and disadvantages relative to one another. In conclusion, the solution developed according to the criteria set out by the company is presented as a proof of concept of the applicability of Business Intelligence to the field of Geographic Information Systems (GIS), using an open-source tool that supports data visualization through dashboards.
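As a sketch of the dimensional-modeling and ETL ideas mentioned above, the hypothetical Python fragment below maps raw GPS trip records onto a Kimball-style star schema; the table and field names are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class VehicleDim:
    """Vehicle dimension row (surrogate key plus descriptive attributes)."""
    vehicle_key: int
    plate: str
    vehicle_type: str

@dataclass(frozen=True)
class TripFact:
    """Fact row: one trip, keyed to the vehicle and date dimensions."""
    vehicle_key: int
    trip_date: date
    km_traveled: float
    duration_min: float
    fuel_liters: float

def transform(raw_rows, vehicle_dim):
    """Minimal ETL 'transform' step: resolve each plate to its surrogate
    key in the vehicle dimension and emit fact rows."""
    return [TripFact(vehicle_dim[row["plate"]].vehicle_key, row["date"],
                     row["km"], row["minutes"], row["fuel"])
            for row in raw_rows]
```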
Abstract:
As we move closer to the practical concept of the Internet of Things and our reliance on public and private APIs increases, web services and their related topics have become crucial to the informatics community. However, the question of which style of web services would best solve a particular problem can raise significant and multifarious debates. Two implementation styles stand out: the RPC-oriented style, represented by implementations of the SOAP protocol, and the hypermedia style, represented by implementations of the REST architectural style. Looking at examples of established web services, one can find a handful of robust and reliable public and private SOAP APIs; nevertheless, RESTful services seem to be gaining popularity in the enterprise community. For the current generation of developers working on informatics solutions, REST seems to represent a fundamental and straightforward alternative to SOAP, and even a more deeply rooted approach. But are they comparable? Does each approach have specific scenarios for which it is best suited? Such a study is briefly carried out in the present document's chapters, starting with the respective background study, followed by an analysis of the hypermedia approach and an instantiation of its architecture in a particular case study applied in a BPM context.
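To make the stylistic contrast concrete, the hypothetical Python fragment below sketches a hypermedia (REST) interaction; the base URL, endpoints, and HAL-style link relations are invented for illustration.

```python
import requests  # generic HTTP client; all URLs below are hypothetical

BASE = "https://example.org/api"

# REST style: the task is a resource manipulated through the uniform HTTP
# interface, and the response carries hypermedia links to the next actions.
task = requests.post(f"{BASE}/tasks", json={"name": "review order"}).json()
approve_url = task["_links"]["approve"]["href"]   # follow a hypermedia control
requests.post(approve_url)

# An RPC/SOAP service would instead expose one endpoint and encode the
# operation (e.g., "approveTask") inside an XML envelope, not in the URL.
```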
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering.
Abstract:
Dissertation presented to obtain the degree of Master in Electrical and Computer Engineering.
Abstract:
Dissertation submitted to obtain the degree of Doctor in Informatics.