838 results for Multiple methods framework


Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE To describe the trend for malignant skin neoplasms in subjects under 40 years of age in a region with high ultraviolet radiation indices. METHODS A descriptive epidemiological study on melanoma and nonmelanoma skin cancers conducted in Goiania, Midwest Brazil, with 1,688 people under 40 years of age, between 1988 and 2009. Cases were obtained from the Registro de Câncer de Base Populacional de Goiânia (Goiania's Population-Based Cancer Registry). Frequency, trends, and incidence of cases with single and multiple lesions were analyzed; cases with multiple lesions were screened for transplants and genetic skin diseases. RESULTS Over the period, 1,995 skin cancer cases were found, of which 1,524 (90.3%) had single lesions and 164 (9.7%) had multiple lesions. For single lesions, incidence in men rose from 2.4 to 3.1/100,000 inhabitants; for women it differed significantly, shifting from 2.3 to 5.3/100,000 (annual percentage change [APC] 3.0%, p = 0.006). For multiple lesions, incidence in men rose from 0.30 to 0.98/100,000 inhabitants; for women, it rose from 0.43 to 1.16/100,000 (APC 8.6%, p = 0.003). Genetic skin diseases or transplants were associated with 10.0% of cases with multiple lesions, with an average of 5.1 lesions per patient; the average was 2.5 in cases without that association. CONCLUSIONS Skin cancer in women under 40 years of age is increasing, for cases with both single and multiple lesions. Multiple tumors are not unusual in young people and, in most cases, are not associated with genetic skin diseases or transplants. Excessive exposure to ultraviolet radiation should be avoided from childhood.
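For readers unfamiliar with the APC statistic quoted above: it is conventionally estimated by a log-linear regression of the incidence rate on calendar year. A minimal sketch, using made-up illustrative rates rather than the study's data:

```python
import numpy as np

# APC (annual percentage change) from a log-linear trend model:
# log(rate) = a + b * year  =>  APC = (exp(b) - 1) * 100.
years = np.arange(1988, 2010)
rates = np.linspace(2.3, 5.3, years.size)  # illustrative values only

b = np.polyfit(years, np.log(rates), 1)[0]  # slope of log(rate) vs. year
apc = (np.exp(b) - 1) * 100
print(f"APC ~ {apc:.1f}% per year")
```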

Relevance:

30.00%

Publisher:

Abstract:

To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these disciplines as applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. The workshop presents six papers dealing with different approaches to multilingual knowledge representation, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims at optimally traversing Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently tested. In the second paper, Fumiko Kano presents a work comparing four feature-based similarity measures derived from the cognitive sciences.
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To this end, datasets based on standardized, pre-defined feature dimensions and values obtainable from the UNESCO Institute for Statistics (UIS) have been used for the comparative analysis of the similarity measures, so that the measures are verified against objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In the third paper, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present a complementary approach to the direct localization/translation of ontology labels: acquiring terminologies through the access and harvesting of multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support multilingual ontology specification. In the fifth paper, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, namely the translation office of the Ministry of Justice. The project aims at developing an advanced tool that embeds expert knowledge in the algorithms extracting specialized language from textual data (legal documents), and whose outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, in which they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors are using User Experience (UX) analysis to provide subject librarians with visual support, by means of "ontology tables" depicting conceptual links and the connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.

Relevance:

30.00%

Publisher:

Abstract:

The need for better adaptation of networks to the transported flows has led to research on new approaches such as content-aware networks and network-aware applications. In parallel, recent developments in multimedia and content-oriented services and applications such as IPTV, video streaming, video on demand, and Internet TV have reinforced interest in multicast technologies. IP multicast has not been widely deployed due to interdomain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management-driven hybrid multicast solution that is multi-domain and media-oriented, and combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content-aware network and network-aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content-aware networks, spanning a single or multiple IP domains, customized to the type of content to be transported while fulfilling the quality of service requirements of the service provider.

Relevance:

30.00%

Publisher:

Abstract:

Abstract: The investigation of the web of relationships between the different elements of the immune system has proven instrumental to a better understanding of this complex biological system. This is particularly true of the interactions between B and T lymphocytes, both during cellular development and at the stage of cellular effector functions. Understanding the B–T cell interdependency, and the possibility of manipulating this relationship, may be directly applicable to situations where immunity is deficient, as in cancer or in immune suppression after radio- and chemotherapy. The work presented here started with the development of a novel and accurate tool to directly assess the diversity of the cellular repertoire (Chapter III). Contractions of T cell receptor diversity have been related to a deficient immune status. This method uses gene chip platforms to which nucleic acids coding for lymphocyte receptors are hybridized, and is based on the fact that the frequency of hybridization of nucleic acids to the oligonucleotides on a gene chip varies in direct proportion to diversity. Subsequently, using this new method and other techniques of cell quantification, I examined, in an animal model, the role that polyclonal B cells and immunoglobulin exert upon T cell development in the thymus, specifically on the acquisition of a broader repertoire diversity by the T cell receptors (Chapters IV and V). The hypothesis tested was whether the presence of more diverse peptides in the thymus, namely polyclonal immunoglobulin, would induce the generation of more diverse T cell precursors. The results obtained demonstrated that the diversity of the T cell compartment is increased by the presence of polyclonal immunoglobulin. Polyclonal immunoglobulin, and particularly the Fab fragments of the molecule, represent the most diverse self-molecules in the body, and their peptides are presented by antigen-presenting cells to precursor T cells in the thymus during their development. This probably contributes significantly to the generation of receptor diversity. Furthermore, we also demonstrated that a more diverse repertoire of T lymphocytes is associated with a more effective and robust T cell immune function in vivo, as mice with more diverse T cell receptors reject minor-histocompatibility-discordant skin grafts faster than mice with a shrunken T cell receptor repertoire (Chapter V). We believe that a broader T cell receptor diversity allows a more efficient recognition and rejection of a wider range of external and internal aggressions. In this work it is demonstrated that a reduction of TCR diversity by thymectomy in wild-type mice significantly increased survival of H-Y-incompatible skin grafts, indicating a decrease in T cell function. In addition, reconstitution of T cell diversity with immunoglobulin Fab fragments in mice with decreased T cell repertoire diversity led to an increase in TCR diversity and to significantly decreased survival of the skin grafts (Chapter V). These results strongly suggest that increases in T cell repertoire diversity improve T cell function. Our results may have important implications for therapy and immune reconstitution in the context of AIDS, cancer, autoimmunity and post-myeloablative treatments.
Based on the previous results, we tested the clinical hypothesis that patients with haematological malignancies subjected to stem cell transplantation who recovered a robust immune system would have better survival than patients who did not. This study examined the progression and overall survival of 42 patients with mantle cell non-Hodgkin lymphoma receiving autologous hematopoietic stem cell transplantation (Chapter VI). The results show that patients who recovered higher numbers of lymphocytes soon after autologous transplantation had statistically significantly longer progression-free and overall survival. These results demonstrate the positive impact that a more robust immune reconstitution after stem cell transplantation may have upon the survival of patients with haematological malignancies. In a similar clinical research framework, this dissertation also includes a study of the impact of recovering normal serum levels of polyclonal immunoglobulin on the survival of patients with another B cell haematological malignancy, multiple myeloma, after autologous stem cell transplantation (Chapter VII). The relapse-free survival of the 110 patients with multiple myeloma analysed was associated with their ability to recover normal serum levels of the polyclonal compartment of immunoglobulin. These results again suggest the important effect of polyclonal immunoglobulin on the (re)generation of immune competence. We also studied the impact of a robust immunity on the response to treatment with the anti-CD20 antibody rituximab in patients with non-Hodgkin's lymphoma (NHL) (Chapter VIII). Patients with higher absolute counts of CD4+ T lymphocytes responded better (in terms of longer progression-free survival) to rituximab than patients with lower numbers of CD4+ T lymphocytes. These observations highlight again that a competent immune system is required for the clinical benefit of rituximab therapy in NHL patients. In conclusion, the work presented in this dissertation demonstrates, for the first time, that diverse B cells and polyclonal immunoglobulin promote T cell diversification in the thymus and improve T lymphocyte function. It also shows that, in the setting of immune reconstitution, as after autologous stem cell transplantation for mantle cell lymphoma, and in the setting of immune therapy for NHL, absolute lymphocyte counts are an independent factor predicting progression-free and overall survival. These results can have an important application in clinical practice, since the majority of current treatments for cancer are immunosuppressive and entail a subsequent immune recovery. Also, the effects of a number of antineoplastic treatments, including biological agents, depend on immune system activity. In this way, studies similar to the ones presented here, in which methods to improve immune reconstitution are examined, may prove instrumental for a better understanding of the immune system and may guide more efficient treatment options and the design of future clinical trials.

Relevance:

30.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
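For reference, the linear mixing model and abundance constraints discussed above can be written compactly as follows (a standard formulation; the notation is chosen here for illustration, not taken from the chapter):

\[
\mathbf{y} \;=\; \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
\qquad \alpha_i \ge 0 \;\;(i = 1,\dots,p),
\qquad \sum_{i=1}^{p}\alpha_i = 1,
\]

where \(\mathbf{y}\) is the observed spectral vector, \(\mathbf{M}\) the matrix whose \(p\) columns are the endmember signatures, \(\boldsymbol{\alpha}\) the abundance fractions, and \(\mathbf{n}\) the system noise. The constant-sum constraint is precisely what makes the abundance sources statistically dependent, which is the difficulty for ICA and IFA examined in this chapter.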

Relevance:

30.00%

Publisher:

Abstract:

In hyperspectral imagery a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions. This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used for the estimation of the unknown endmember signatures and their abundance fractions. This method solves a non-convex problem by a sequence of augmented Lagrangian optimizations, where the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image by an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.
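Schematically, the minimum-volume objective with soft positivity constraints that SISAL-type methods optimize can be stated as follows (notation chosen here for illustration, not taken from the paper):

\[
\min_{\mathbf{Q}} \; -\log\lvert\det\mathbf{Q}\rvert \;+\; \lambda \sum_{i,j} \max\!\bigl(0,\,-(\mathbf{Q}\mathbf{Y})_{ij}\bigr),
\]

where \(\mathbf{Y}\) collects the (dimension-reduced) spectral vectors, \(\mathbf{Q}\) is the inverse of the endmember matrix, so that \(\mathbf{Q}\mathbf{Y}\) holds the estimated abundances, and the hinge terms soften the hard constraints \((\mathbf{Q}\mathbf{Y})_{ij}\ge 0\); \(\lambda\) trades simplex volume against constraint violation.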

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The remaining pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
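The projection loop just described is simple enough to sketch. The following is a minimal, simplified rendition of the VCA idea, not the chapter's exact algorithm (it omits the SNR-dependent projections and assumes the data have already been reduced to the signal subspace):

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Simplified VCA: repeatedly project the data onto a direction
    orthogonal to the span of the endmembers found so far and take the
    extreme of the projection as the next endmember.

    Y : (L, N) array, one spectral vector per column, already projected
        onto the p-dimensional signal subspace.
    p : number of endmembers to extract.
    """
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    indices = []
    E = np.zeros((L, 0))  # endmember signatures found so far
    for _ in range(p):
        w = rng.standard_normal(L)
        # Component of w orthogonal to span(E): f = (I - E E^+) w.
        f = w if E.shape[1] == 0 else w - E @ (np.linalg.pinv(E) @ w)
        f /= np.linalg.norm(f)
        v = f @ Y                                  # project every pixel onto f
        indices.append(int(np.argmax(np.abs(v))))  # extreme of the projection
        E = Y[:, indices]                          # grow the endmember set
    return E, indices
```

Because each new direction is orthogonal to the endmembers already found, a pure pixel of a yet-unselected endmember is always the extreme of the projection, which is why the pure-pixel assumption matters.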

Relevance:

30.00%

Publisher:

Abstract:

Today there is a growing need for software tailored to the client, able to adapt quickly to the constant changes in the client's business area. Each client has concrete problems to solve, and often cannot afford to dedicate a large amount of resources to achieve the intended goals. In response to these problems, several software development architectures and methodologies have emerged that allow the agile development of highly configurable applications, which can be personalized by any of their users. This dynamism, brought to applications in the form of models that are personalized by users and interpreted by a generic platform, creates greater challenges when it comes to testing, since there is a considerably larger number of variables than in an application with a traditional architecture. It is necessary, at all times, to guarantee the integrity of all models, as well as of the platform responsible for their interpretation, without requiring the constant development of applications to support the tests over the different models. This thesis focuses on one application, the myMIS platform, which interprets management-oriented models written in a domain-specific language; it assesses the current state of the platform and defines a proposal of testing practices to apply in its development. The proposal resulting from this thesis showed that, despite the difficulties inherent to the application's architecture, developing tests in a generic way is possible, and the same test logic can be used to test several distinct models.
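The "same test logic over distinct models" conclusion can be pictured with a parametrized test. The sketch below is purely hypothetical: `interpret` stands in for a generic model interpreter and does not reflect the actual myMIS API, which the abstract does not describe:

```python
import pytest

def interpret(model: dict) -> dict:
    """Hypothetical stand-in for a generic model interpreter."""
    return {"entities": list(model.get("entities", []))}

# One test body exercised against several distinct models: the same test
# logic validates every model the platform is asked to interpret.
@pytest.mark.parametrize("model", [
    {"entities": ["Invoice"]},
    {"entities": ["Customer", "Order"]},
    {},  # an empty model must also be interpreted without errors
])
def test_interpreter_preserves_entities(model):
    assert interpret(model)["entities"] == list(model.get("entities", []))
```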

Relevance:

30.00%

Publisher:

Abstract:

Video games are increasingly one of the largest areas of the entertainment industry, which has been expanding year after year. Moreover, video games are ever more present in our daily lives, whether through mobile devices or the new consoles. On this premise, it is safe to say that investment in this field will bring more gains than losses. This dissertation aims to study the state of the video game industry, with its main focus on the design of a video game built on a modular framework, also developed within the scope of this dissertation. To this end, a study of the technological state of the art is carried out, in which several game-creation tools are examined and analyzed in order to understand the strengths and weaknesses of each, together with a study of the business side, providing a more concrete idea of the various points needed to create a video game. The different genres of video games are then discussed and a small video game is conceptualized, also taking into account the types of interfaces most used in the video game industry, in order to understand which form is most viable for the genre, and the different mechanics present in the game to be created. The modular framework is developed taking into account all the analysis previously carried out and the conceptualized video game. Its main goals are high customizability and maintainability: every implemented module can be replaced by another without creating conflicts. Finally, to bring together all the topics analyzed throughout this dissertation, a prototype is developed to demonstrate that the framework works as intended, applying all the decisions previously made.
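The "replaceable modules without conflicts" property can be pictured as modules that depend only on a shared interface. A hypothetical sketch (names and interfaces invented here, not taken from the dissertation's framework):

```python
from typing import Protocol

class InputModule(Protocol):
    """Interface every input module must satisfy."""
    def read_direction(self) -> tuple[float, float]: ...

class KeyboardInput:
    def read_direction(self) -> tuple[float, float]:
        return (1.0, 0.0)  # stub: pretend the right arrow key is held

class GamepadInput:
    def read_direction(self) -> tuple[float, float]:
        return (0.0, 1.0)  # stub: pretend the stick is pushed up

def update_player(inp: InputModule) -> tuple[float, float]:
    # Game logic sees only the interface, so modules swap without conflicts.
    return inp.read_direction()

print(update_player(KeyboardInput()))  # (1.0, 0.0)
print(update_player(GamepadInput()))   # (0.0, 1.0)
```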

Relevance:

30.00%

Publisher:

Abstract:

This paper characterizes medium voltage (MV) electric power consumers based on a data clustering approach. It aims to identify typical load profiles by selecting the best partition of a power consumption database from a pool of partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely a data pre-processing phase, the application of clustering algorithms, and the evaluation of the quality of the partitions. To validate our approach, a case study with a real database of 1,022 MV consumers was used.
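A minimal sketch of the partition-selection step, assuming scikit-learn and random stand-in data (the paper's pool spans several algorithms and several validity indices; a single algorithm and the silhouette index are used here for brevity):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-in for the pre-processed database: one row per MV consumer,
# one column per time step of the normalized load diagram.
rng = np.random.default_rng(0)
X = rng.random((200, 96))

# Build a pool of candidate partitions and keep the one preferred by
# the validity index.
pool = [KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        for k in range(2, 10)]
best = max(pool, key=lambda labels: silhouette_score(X, labels))
print("chosen number of clusters:", len(np.unique(best)))
```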

Relevance:

30.00%

Publisher:

Abstract:

Inclusion as an educational paradigm is increasingly accepted today. Several publications in this field, such as the World Forum on Education for All (1990), the Salamanca Statement (1994) and the Dakar Framework for Action (2000), as well as the emphasis given to equality of opportunities, support a policy of education for all. The restrictions on the participation of students with multiple disabilities legitimize a continuum of services that responds to their particular needs. The Centro de Recursos para a Inclusão (CRI) project arises within the reorientation of special schools, as these students move to mainstream schools. The present investigation describes the practices and perceptions of CRI professionals regarding the work of the team and the other participants in the educational process of students with multiple disabilities. To this end, all the professionals (32) of CRI teams in the district of Porto who work with that population in a school context were interviewed. The results showed that the professionals perceive their team as having all the necessary therapeutic specialties, agree with the inclusion of students with multiple disabilities in mainstream schools, and stress the need to change attitudes regarding the practical implementation of this approach. Student assessment practices result from the individual contributions of the participants, although the intervention is carried out in the individuals' real contexts. Finally, the professionals consider it fundamental to participate in the preparation of the documentation concerning the student and, consequently, suggest an effective recognition and involvement of the team in the work carried out in schools.

Relevance:

30.00%

Publisher:

Abstract:

Ecological Water Quality - Water Treatment and Reuse

Relevance:

30.00%

Publisher:

Abstract:

Structural robustness is an emergent concept related to the structural response to damage. At present, robustness is not well defined and much controversy still remains around this subject. Even though robustness has seen growing interest as a consequence of the catastrophic outcomes of extreme events, the concept can also be very useful when applied to more probable exposure scenarios such as deterioration, among others. This paper intends to contribute to the definition of structural robustness, especially for the analysis of reinforced concrete structures subjected to corrosion. To achieve this, several proposed robustness definitions and indicators, as well as commonly misunderstood concepts, are first analyzed and compared. On this basis, and aiming at a concept that could be applied to most types of structures and damage scenarios, a robustness definition is proposed. To illustrate the proposed concept, an example of corroded reinforced concrete structures is analyzed using nonlinear numerical methods based on a continuum strong discontinuities approach and isotropic damage models for concrete. Finally, the robustness of the presented example is assessed.
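For orientation, one risk-based indicator frequently discussed in this literature is the robustness index of Baker, Schubert and Faber (reproduced here with our notation; the definition proposed in this paper may differ):

\[
I_{\mathrm{Rob}} \;=\; \frac{R_{\mathrm{Dir}}}{R_{\mathrm{Dir}} + R_{\mathrm{Ind}}},
\]

where \(R_{\mathrm{Dir}}\) is the direct risk associated with local damage and \(R_{\mathrm{Ind}}\) the indirect risk of subsequent, disproportionate system failure; \(I_{\mathrm{Rob}} = 1\) corresponds to a fully robust structure whose damage does not propagate.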

Relevance:

30.00%

Publisher:

Abstract:

The world is increasingly a global community. The rapid technological development of communication and information technologies allows the transmission of knowledge in real time. In this context, it is imperative that the most developed countries devise their own strategies to stimulate the industrial sector to stay up to date and remain competitive in a dynamic and volatile global market, so as to maintain their competitive capacity and, by consequence, sustain a peaceful social state that meets the human and social needs of the nation. The path of competitiveness through technological differentiation in industrialization opens a wider and more innovative field of research. We are already facing a new phase of industrial organization and technology that is beginning to change, by current standards, the way we relate to industry, to society, and to human interaction in the world of work. This thesis develops an analysis of the Industrie 4.0 framework, its challenges, and its perspectives. It also analyzes how Germany is approaching this future challenge, the competition it expects to win in future global markets, and the domestic concerns felt in its industrial fabric in the face of this challenge, and proposes recommendations for a more effective implementation of its strategy. The research method consisted of a comprehensive review and strategic analysis of the existing global literature on the topic, directly or indirectly related, in parallel with the analysis of questionnaires and data produced by entities representing the industry at national and global level. The results of this multilevel analysis lead to the conclusion that this theme is only at the beginning of building the platform to bring the future Internet of Things into the industrial environment of Industrie 4.0. The dissertation highlights the need for a more strategic and operational approach within society as a whole to address the existing weaknesses in this area, so that the national strategy can be implemented with effective approaches and planned actions, including a training plan that brings the theme into education more efficiently.

Relevance:

30.00%

Publisher:

Abstract:

A crowdsourcing innovation intermediary performs mediation activities between companies that have a problem to solve or that seek a business opportunity, and a group of people motivated to present ideas based on their knowledge, experience and wisdom, taking advantage of the technology sharing and collaboration emerging from Web 2.0. As far as we know, most of the present intermediaries do not yet have an integrated vision that combines the creation of value through community development, brokering and technology transfer. In this paper we present a proposal for a knowledge repository framework for crowdsourcing innovation that enables effective support and integration of the activities developed in the process of value creation (community building, brokering and technology transfer), modeled using ontology engineering methods.