898 results for Bayesian shared component model
Abstract:
Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
The latest LHC data confirmed the existence of a Higgs-like particle and made interesting measurements on its decays into γγ, ZZ*, WW*, τ⁺τ⁻, and b b̄. It is expected that a decay into Zγ might be measured at the next LHC round, for which there already exists an upper bound. The Higgs-like particle could be a mixture of a scalar with a relatively large pseudoscalar component. We compute the decay of such a mixed state into Zγ, and we study its properties in the context of the complex two-Higgs-doublet model, analysing the effect of the current measurements on the four versions of this model. We show that a measurement of the h → Zγ rate at a level consistent with the SM can be used to place interesting constraints on the pseudoscalar component. We also comment on the issue of a wrong-sign Yukawa coupling for the bottom quark in Type II models.
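For orientation, the quantities implicitly involved can be written in generic notation introduced here only for illustration (the symbols a_f, b_f and μ_Zγ are ours, not the article's): the Yukawa coupling of a mixed CP state to a fermion f, and the Zγ signal strength that LHC measurements constrain,

\mathcal{L}_{hf\bar{f}} \propto -\frac{m_f}{v}\,\bar{f}\,(a_f + i\,b_f\,\gamma_5)\,f\,h, \qquad \mu_{Z\gamma} = \frac{\sigma(pp\to h)\,\mathrm{BR}(h\to Z\gamma)}{\big[\sigma(pp\to h)\,\mathrm{BR}(h\to Z\gamma)\big]_{\mathrm{SM}}},

where a_f = 1, b_f = 0 reproduces a purely scalar SM-like coupling and a sizeable b_f corresponds to the pseudoscalar admixture discussed above.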
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm's aim is to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies, and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unit production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, with the effect of the rival's expected production cost dominated by the effect of the firm's own expected production cost.
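As a minimal numerical sketch of the kind of computation involved, assuming a linear differentiated-products demand q_i = a - b*p_i + g*p_j and two equally likely cost types per firm (these specifics are illustrative assumptions, not the paper's model):

import numpy as np

# Bayesian Nash equilibrium prices in a Bertrand duopoly with privately known
# unit costs.  Demand form, parameter values and cost distributions below are
# illustrative assumptions, not taken from the paper.
a, b, g = 10.0, 1.0, 0.5                  # demand intercept, own- and cross-price slopes
costs = {1: [2.0, 4.0], 2: [1.0, 5.0]}    # two possible unit costs per firm
probs = {1: [0.5, 0.5], 2: [0.5, 0.5]}    # probability of each cost type

# Best response of firm i with realised cost c, given the rival's expected price:
#   p_i(c) = (a + b*c + g*E[p_j]) / (2*b)
# Taking expectations over c gives two linear equations in E[p_1] and E[p_2].
Ec = {i: float(np.dot(probs[i], costs[i])) for i in (1, 2)}
A = np.array([[2 * b, -g], [-g, 2 * b]])
rhs = np.array([a + b * Ec[1], a + b * Ec[2]])
Ep = dict(zip((1, 2), np.linalg.solve(A, rhs)))   # expected equilibrium prices

for i, j in ((1, 2), (2, 1)):
    for c in costs[i]:
        p = (a + b * c + g * Ep[j]) / (2 * b)     # type-contingent equilibrium price
        q = a - b * p + g * Ep[j]                 # expected demand faced by that type
        print(f"firm {i}, cost {c}: price {p:.3f}, expected profit {(p - c) * q:.3f}")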
Abstract:
The phase diagram of a simple model with two patches of type A and ten patches of type B (2A10B) on the face-centred cubic lattice has been calculated by simulations and theory. Assuming that there is no interaction between the B patches, the behavior of the system can be described in terms of the ratio of the AB and AA interactions, r. Our results show that, similarly to what happens for related off-lattice and two-dimensional lattice models, the liquid-vapor phase equilibria exhibit reentrant behavior for some values of the interaction parameters. However, for the model studied here the liquid-vapor phase equilibria occur for values of r lower than 1/3, a threshold value which was previously thought to be universal for 2AnB models. In addition, the theory predicts that below r = 1/3 (and above a new condensation threshold which is < 1/3) the reentrant liquid-vapor equilibria are so extreme that they exhibit a closed loop with a lower critical point, a very unusual behavior in single-component systems. An order-disorder transition is also observed at higher densities than the liquid-vapor equilibria, which shows that the liquid-vapor reentrancy occurs in an equilibrium region of the phase diagram. These findings may have implications for the understanding of the condensation of dipolar hard spheres, given the analogy between that system and the 2AnB models considered here.
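In the notation usual for these patchy-particle models (symbols introduced here only for clarity; the abstract does not define them), the control parameter is

r = \epsilon_{AB} / \epsilon_{AA}, \qquad \epsilon_{BB} = 0,

where \epsilon_{AA} and \epsilon_{AB} are the AA and AB patch-patch bonding energies, and r = 1/3 is the condensation threshold previously believed to be universal for 2AnB models.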
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management comprises fundamental processes to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is then clear that access to and representation of knowledge will happen more and more in a multilingual setting, which implies overcoming the difficulties inherent in the presence of multiple languages, through the use of processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support to the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is then an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. In this workshop six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first case, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining, Termontospider, a wiki crawler that aims at optimally traversing Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. In this paper the authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently tested. In the second paper, Fumiko Kano presents a work comparing four feature-based similarity measures derived from cognitive sciences. 
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model that can be applied for mapping independent ontologies in a culturally influenced domain. For that, datasets based on standardized pre-defined feature dimensions and values, which are obtainable from the UNESCO Institute for Statistics (UIS), have been used for the comparative analysis of the similarity measures. The purpose of the comparison is to verify the similarity measures based on the objectively developed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts existing for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present a complementary approach to the direct localization/translation of ontology labels, by acquiring terminologies through the access and harvesting of multilingual Web presences of structured information providers in the field of finance, leading to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform – conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice, a project which aims at developing an advanced tool including expert knowledge in the algorithms that extract specialized language from textual data (legal documents) and whose outcome is a knowledge database including Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at UCSC Central Library, where they propose to adapt to subject librarians, employed in large and multilingual Academic Institutions, the model used by translators working within European Union Institutions. 
The authors are using User Experience (UX) Analysis in order to provide subject librarians with visual support, by means of “ontology tables” depicting conceptual linking and connections of words with concepts presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.
Abstract:
The ecotoxicological response of the living organisms in an aquatic system depends on the physical, chemical and bacteriological variables, as well as the interactions between them. An important challenge to scientists is to understand the interaction and behaviour of factors involved in a multidimensional process such as the ecotoxicological response. With this aim, multiple linear regression (MLR) and principal component regression were applied to the ecotoxicity bioassay response of Chlorella vulgaris and Vibrio fischeri in water collected at seven sites of the Leça river during five monitoring campaigns (February, May, June, August and September of 2006). The river water characterization included the analysis of 22 physicochemical and 3 microbiological parameters. The model that best fitted the data was MLR, which shows: (i) a negative correlation with dissolved organic carbon, zinc and manganese, and a positive one with turbidity and arsenic, regarding the C. vulgaris toxic response; (ii) a negative correlation with conductivity and turbidity and a positive one with phosphorus, hardness, iron, mercury, arsenic and faecal coliforms, concerning the V. fischeri toxic response. This integrated assessment may allow the evaluation of the effect of future pollution abatement measures on the water quality of the Leça River.
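As a rough sketch of the kind of model comparison described above (variable names and data below are synthetic placeholders, not the Leça river dataset):

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Compare multiple linear regression (MLR) with principal component regression
# (PCR) for an ecotoxicity endpoint.  Column names and values are illustrative;
# the study used 22 physicochemical and 3 microbiological parameters.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(35, 5)),
                 columns=["DOC", "turbidity", "Zn", "Mn", "As"])
y = 2.0 * X["turbidity"] - 1.5 * X["DOC"] + rng.normal(scale=0.5, size=35)

mlr = make_pipeline(StandardScaler(), LinearRegression())
pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())

for name, model in [("MLR", mlr), ("PCR", pcr)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {r2:.2f}")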
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixing of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other approaches, using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27], have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. 
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. 
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
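A minimal numerical sketch of the linear mixing model with nonnegative, fully additive abundances that the chapter builds on, together with a fully constrained least-squares inversion when the signatures are known (all sizes and signatures below are synthetic assumptions, not the chapter's data):

import numpy as np
from scipy.optimize import nnls

# Linear mixing model X = M S + N with abundances that are nonnegative and
# sum to one (full additivity).  Everything here is synthetic illustration.
rng = np.random.default_rng(1)
L, p, N = 50, 3, 200                               # bands, endmembers, pixels
M = rng.uniform(0.0, 1.0, size=(L, p))             # synthetic endmember signatures
S = rng.dirichlet(alpha=np.ones(p), size=N).T      # abundances: >= 0 and sum to 1
X = M @ S + rng.normal(scale=0.005, size=(L, N))   # noisy observed spectra

# Fully constrained least squares via the usual augmentation trick: append a
# heavily weighted row enforcing sum(s) = 1, then solve NNLS pixel by pixel.
delta = 1e3
M_aug = np.vstack([M, delta * np.ones((1, p))])
S_hat = np.column_stack(
    [nnls(M_aug, np.append(X[:, n], delta))[0] for n in range(N)]
)
print("mean absolute abundance error:", np.abs(S_hat - S).mean())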
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. 
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. 
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
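A stripped-down sketch of the projection idea behind this family of pure-pixel algorithms (a simplified illustration only, not the actual VCA implementation described in the chapter, which adds dimensionality reduction and SNR-dependent projections):

import numpy as np

def extract_endmembers(X, p, seed=0):
    """Pick p endmembers as extremes of projections onto directions orthogonal
    to the subspace spanned by the endmembers already found (simplified)."""
    rng = np.random.default_rng(seed)
    L, _ = X.shape
    E = np.zeros((L, p))
    indices = []
    for k in range(p):
        if k == 0:
            P = np.eye(L)                          # no endmembers found yet
        else:
            A = E[:, :k]
            P = np.eye(L) - A @ np.linalg.pinv(A)  # projector onto their orthogonal complement
        d = P @ rng.normal(size=L)                 # random direction in that complement
        idx = int(np.argmax(np.abs(d @ X)))        # extreme pixel = candidate endmember
        E[:, k] = X[:, idx]
        indices.append(idx)
    return E, indices

# Tiny synthetic test: pixels are convex combinations of three spectra.
rng = np.random.default_rng(1)
M = rng.uniform(size=(20, 3))                      # true endmember signatures
S = rng.dirichlet(np.ones(3) * 0.3, size=500).T    # abundances with near-pure pixels
E, idx = extract_endmembers(M @ S, p=3)
print("selected pixel indices:", idx)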
Abstract:
Management Information Systems 2000, p. 103-111
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science
Abstract:
It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimises resource consumption by only activating redundant replicas in case of failures, as typically providing and applying state updates is less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are usually designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, lacking the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with a significantly lower overhead compared to a strict active redundancy-based approach, achieving a high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the needed interactions among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.
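A toy sketch of the general primary-backup idea behind passive replication with failure-triggered activation (class and method names are invented for illustration; the paper's feedback-based coordination among components is considerably more elaborate):

from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    state: dict = field(default_factory=dict)
    active: bool = False

class PassiveReplicationCoordinator:
    """Keeps passive replicas warm with cheap state transfers and activates
    one of them only when the primary is detected as failed."""

    def __init__(self, primary: Replica, backups: list):
        self.primary, self.backups = primary, backups

    def checkpoint(self) -> None:
        # Propagating state updates is cheaper than executing requests twice.
        for b in self.backups:
            b.state = dict(self.primary.state)

    def on_failure_detected(self) -> Replica:
        # Promote the first up-to-date backup to active (the "activation" step).
        new_primary = self.backups.pop(0)
        new_primary.active = True
        self.primary = new_primary
        return new_primary

primary = Replica("component-1", {"seq": 42}, active=True)
coord = PassiveReplicationCoordinator(primary, [Replica("component-2"), Replica("component-3")])
coord.checkpoint()
print(coord.on_failure_detected().name, coord.primary.state)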
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Economics from the NOVA – School of Business and Economics
Abstract:
Dissertation for the Master’s Degree in Structural and Functional Biochemistry
Abstract:
Dissertation to obtain PhD in Industrial Engineering
Abstract:
RESUMO: A estrutura demográfica portuguesa é marcada por baixas taxas de natalidade e mortalidade, onde a população idosa representa uma fatia cada vez mais representativa, fruto de uma maior longevidade. A incidência do cancro, na sua generalidade, é maior precisamente nessa classe etária. A par de outras doenças igualmente lesivas (e.g. cardiovasculares, degenerativas) cuja incidência aumenta com a idade, o cancro merece relevo. Estudos epidemiológicos apresentam o cancro como líder mundial na mortalidade. Em países desenvolvidos, o seu peso representa 25% do número total de óbitos, percentagem essa que mais que duplica noutros países. A obesidade, a baixa ingestão de frutas e vegetais, o sedentarismo, o consumo de tabaco e a ingestão de álcool, configuram-se como cinco dos fatores de risco presentes em 30% das mortes diagnosticadas por cancro. A nível mundial e, em particular no Sul de Portugal, os cancros do estômago, recto e cólon apresentam elevadas taxas de incidência e de mortalidade. Do ponto de vista estritamente económico, o cancro é a doença que mais recursos consome enquanto que do ponto de vista físico e psicológico é uma doença que não limita o seu raio de ação ao doente. O cancro é, portanto, uma doença sempre atual e cada vez mais presente, pois reflete os hábitos e o ambiente de uma sociedade, não obstante as características intrínsecas a cada indivíduo. A adoção de metodologia estatística aplicada à modelação de dados oncológicos é, sobretudo, valiosa e pertinente quando a informação é oriunda de Registos de Cancro de Base Populacional (RCBP). A pertinência é justificada pelo fato destes registos permitirem aferir numa população específica, o risco desta sofrer e/ou vir a sofrer de uma dada neoplasia. O peso que as neoplasias do estômago, cólon e recto assumem foi um dos elementos que motivou o presente estudo que tem por objetivo analisar tendências, projeções, sobrevivências relativas e a distribuição espacial destas neoplasias. Foram considerados neste estudo todos os casos diagnosticados no período 1998-2006, pelo RCBP da região sul de Portugal (ROR-Sul). O estudo descritivo inicial das taxas de incidência e da tendência em cada uma das referidas neoplasias teve como base uma única variável temporal - o ano de diagnóstico - também designada por período. Todavia, uma metodologia que contemple apenas uma única variável temporal é limitativa. No cancro, para além do período, a idade à data do diagnóstico e a coorte de nascimento, são variáveis temporais que poderão prestar um contributo adicional na caracterização das taxas de incidência. A relevância assumida por estas variáveis temporais justificou a sua inclusão numa classe de modelos designada por modelos Idade-Período-Coorte (Age-Period-Cohort models - APC), utilizada na modelação das taxas de incidência para as neoplasias em estudo. Os referidos modelos permitem ultrapassar o problema de relações não lineares e/ou de mudanças súbitas na tendência linear das taxas. Nos modelos APC foram consideradas a abordagem clássica e a abordagem com recurso a funções suavizadoras. A modelação das taxas foi estratificada por sexo. Foram ainda estudados os respectivos submodelos (apenas com uma ou duas variáveis temporais). Conhecido o comportamento das taxas de incidência, uma questão subsequente prende-se com a sua projeção em períodos futuros. Porém, o efeito de mudanças estruturais na população, ao qual Portugal não é alheio, altera substancialmente o número esperado de casos futuros com cancro. 
Estimativas da incidência de cancro a nível mundial obtidas a partir de projeções demográficas apontam para um aumento de 25% dos casos de cancro nas próximas duas décadas. Embora a projeção da incidência esteja associada a alguma incerteza, as projeções auxiliam no planeamento de políticas de saúde para a afetação de recursos e permitem a avaliação de cenários e de intervenções que tenham como objetivo a redução do impacto do cancro. O desconhecimento de projeções da taxa de incidência destas neoplasias na área abrangida pelo ROR-Sul, levou à utilização de modelos de projeção que diferem entre si quanto à sua estrutura, linearidade (ou não) dos seus coeficientes e comportamento das taxas na série histórica de dados (e.g. crescente, decrescente ou estável). Os referidos modelos pautaram-se por duas abordagens: (i) modelos lineares no que concerne ao tempo e (ii) extrapolação de efeitos temporais identificados pelos modelos APC para períodos futuros. Foi feita a projeção das taxas de incidência para os anos de 2007 a 2010 tendo em conta o género, idade e neoplasia. É ainda apresentada uma estimativa do impacto económico destas neoplasias no período de projeção. Uma questão pertinente e habitual no contexto clínico e a que o presente estudo pretende dar resposta, reside em saber qual a contribuição da neoplasia em si para a sobrevivência do doente. Nesse sentido, a mortalidade por causa específica é habitualmente utilizada para estimar a mortalidade atribuível apenas ao cancro em estudo. Porém, existem muitas situações em que a causa de morte é desconhecida e, mesmo que esta informação esteja disponível através dos certificados de óbito, não é fácil distinguir os casos em que a principal causa de morte é devida ao cancro. A sobrevivência relativa surge como uma medida objetiva que não necessita do conhecimento da causa específica da morte para o seu cálculo e dar-nos-á uma estimativa da probabilidade de sobrevivência caso o cancro em análise, num cenário hipotético, seja a única causa de morte. Desconhecida a principal causa de morte nos casos diagnosticados com cancro no registo ROR-Sul, foi determinada a sobrevivência relativa para cada uma das neoplasias em estudo, para um período de follow-up de 5 anos, tendo em conta o sexo, a idade e cada uma das regiões que constituem o registo. Foi adotada uma análise por período e as abordagens convencional e por modelos. No epílogo deste estudo, é analisada a influência da variabilidade espaço-temporal nas taxas de incidência. O longo período de latência das doenças oncológicas, a dificuldade em identificar mudanças súbitas no comportamento das taxas, populações com dimensão e riscos reduzidos, são alguns dos elementos que dificultam a análise da variação temporal das taxas. Nalguns casos, estas variações podem ser reflexo de flutuações aleatórias. O efeito da componente temporal aferida pelos modelos APC dá-nos um retrato incompleto da incidência do cancro. A etiologia desta doença, quando conhecida, está associada com alguma frequência a fatores de risco tais como condições socioeconómicas, hábitos alimentares e estilo de vida, atividade profissional, localização geográfica e componente genética. O “contributo” dos fatores de risco é, por vezes, determinante e não deve ser ignorado. Surge, assim, a necessidade em complementar o estudo temporal das taxas com uma abordagem de cariz espacial. 
Assim, procurar-se-á aferir se as variações nas taxas de incidência observadas entre os concelhos inseridos na área do registo ROR-Sul poderiam ser explicadas quer pela variabilidade temporal e geográfica quer por fatores socioeconómicos ou, ainda, pelos desiguais estilos de vida. Foram utilizados os Modelos Bayesianos Hierárquicos Espaço-Temporais com o objetivo de identificar tendências espaço-temporais nas taxas de incidência bem como quantificar alguns fatores de risco ajustados à influência simultânea da região e do tempo. Os resultados obtidos pela implementação de todas estas metodologias considera-se ser uma mais valia para o conhecimento destas neoplasias em Portugal.------------ABSTRACT: The Portuguese demographic structure is marked by low birth and mortality rates, with the elderly being an increasingly representative sector of the population, mainly due to greater longevity. The incidence of cancer, in general, is greater precisely in that age group. Alongside other equally damaging diseases (e.g. cardiovascular, degenerative), whose incidence increases with age, cancer is of special note. In epidemiological studies, cancer is the global leader in mortality. In developed countries its weight represents 25% of the total number of deaths, a percentage that more than doubles in other countries. Obesity, a reduced consumption of fruit and vegetables, physical inactivity, smoking and alcohol consumption are the five risk factors present in 30% of deaths due to cancer. Globally, and in particular in the South of Portugal, stomach, rectum and colon cancers have high incidence and mortality rates. From a strictly economic perspective, cancer is the disease that consumes the most resources, while from a physical and psychological point of view, it is a disease whose reach is not limited to the patient. Cancer is therefore an ever-present disease of increasing importance, since it reflects the habits and the environment of a society, regardless of the intrinsic characteristics of each individual. The adoption of statistical methodology applied to cancer data modelling is especially valuable and relevant when the information comes from population-based cancer registries (PBCR). Such registries allow the assessment, in a specific population, of the risk of suffering and/or coming to suffer from a given neoplasm. The weight that stomach, colon and rectum cancers assume in Portugal was one of the motivations of the present study, which focuses on analyzing trends, projections, relative survival and the spatial distribution of these neoplasms. The data considered in this study are all cases diagnosed between 1998 and 2006 by the PBCR of the southern region of Portugal (ROR-Sul). The year of diagnosis, also called period, was the only time variable considered in the initial descriptive analysis of the incidence rates and trends for each of the three neoplasms. However, a methodology that only considers a single time variable will probably fall short on the conclusions that could be drawn from the data under study. In cancer, apart from the variable period, the age at diagnosis and the birth cohort are also temporal variables and may provide an additional contribution to the characterization of the incidence. The relevance assumed by these temporal variables justified their inclusion in a class of models called Age-Period-Cohort (APC) models. This class of models was used for the analysis of the incidence rates of the three cancers under study. APC models make it possible to model nonlinear relationships and/or sudden changes in the linear trends of the rates. 
Two approaches to APC models were considered: the classical one and one using smoothing functions. The models were stratified by gender and, when justified, further analyses explored sub-models where only one or two temporal variables were considered. After the analysis of the incidence rates, a subsequent goal is their projection into future periods. Although the effect of structural changes in the population, to which Portugal is not immune, may substantially change the expected number of future cancer cases, the results of these projections can help in planning health policies and allocating resources, allowing for the evaluation of scenarios and interventions that aim to reduce the impact of cancer in a population. It is worth noting that worldwide cancer incidence estimates obtained from demographic projections point to an increase of 25% in cancer cases over the next two decades. The lack of projections of incidence rates for the three cancers under study in the area covered by ROR-Sul led us to use a variety of forecasting models that differ in nature and structure, for example, in the linearity or nonlinearity of their coefficients and in the trend of the incidence rates in the historical data series (e.g. increasing, decreasing or stable). The models followed two approaches: (i) models linear in time and (ii) extrapolation to future periods of the temporal effects identified by the APC models. The study provides projections of incidence rates and of the number of newly diagnosed cases for the years 2007 to 2010, taking into account gender, age and the type of cancer. In addition, an estimate of the economic impact of these neoplasms is presented for the projection period considered. This research also tries to address a relevant and common clinical question in this type of study, regarding the contribution of the cancer itself to patient survival. In such studies, the primary cause of death is commonly used to estimate the mortality specifically attributable to the cancer under study. However, there are many situations in which the cause of death is unknown or, even if this information is available through the death certificates, it is not easy to distinguish the cases where the primary cause of death is the cancer. With this in mind, relative survival is an alternative measure that does not require knowledge of the specific cause of death to be calculated. This estimate represents the survival probability in the hypothetical scenario in which the cancer under analysis is the only cause of death. Since the primary cause of death is unknown for the cases diagnosed with cancer in the ROR-Sul registry, the relative survival was calculated for each of the cancers under study, for a follow-up period of 5 years, considering gender, age and each of the regions that are part of the registry. A period analysis was undertaken, considering both the conventional and the model-based approaches. In the final part of this study, we analyzed the influence of space-time variability on the incidence rates. The long latency period of oncologic diseases, the difficulty in identifying subtle changes in the behavior of the rates, and populations of reduced size and low risk are some of the elements that make the analysis of temporal variations in rates challenging; in some cases, these variations can reflect simple random fluctuations. The effect of the temporal component measured by the APC models gives an incomplete picture of cancer incidence. 
The etiology of this disease, when known, is frequently associated with risk factors such as socioeconomic conditions, eating habits and lifestyle, occupation, geographic location and genetic makeup. The "contribution" of such risk factors is sometimes decisive in the evolution of the disease and should not be ignored. Therefore, there was a need to consider an additional approach in this study, one of a spatial nature, addressing whether the changes in incidence rates observed across the ROR-Sul area could be explained either by temporal and geographical variability or by unequal socio-economic or lifestyle factors. Thus, Bayesian hierarchical space-time models were used with the purpose of identifying space-time trends in incidence rates, together with the analysis of the effect of the risk factors considered in the study. The results obtained and the implementation of all these methodologies are considered an added value to the knowledge of these neoplasms in Portugal.
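For orientation, the three model families referred to above can be written in generic notation introduced here only for illustration (the thesis may use different symbols):

\log \lambda_{ap} = f(a) + g(p) + h(c), \quad c = p - a
r(t) = S_{\mathrm{obs}}(t) / S_{\mathrm{exp}}(t)
\log \theta_{it} = \mu + u_i + v_i + \gamma_t + \delta_{it}

where \lambda_{ap} is the incidence rate for age a and period p in the age-period-cohort decomposition; r(t) is the relative survival, the ratio of the observed survival of the patients to the expected survival of a comparable group from the general population; and \theta_{it} is the relative risk in area i and period t, with a spatially structured effect u_i (e.g. with a conditional autoregressive prior), an unstructured effect v_i, a temporal trend \gamma_t and a space-time interaction \delta_{it}.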