852 results for Initial data problem
Abstract:
The quantity and variety of multimedia content currently available pose a challenge to users, since the space of sources and contents to search and choose from exceeds users' time and processing capacity. This problem of selecting information from large, heterogeneous data sets according to the user's profile is complex and requires dedicated tools. Recommender Systems arise in this context: drawing on artificial intelligence methodologies, they suggest to the user items that match his or her tastes, interests, or needs, i.e., his or her profile. The main goal of this thesis is to demonstrate that it is possible to recommend multimedia content in useful time from the user's personal and social profile, relying exclusively on public, heterogeneous data sources. To this end, a content-based multimedia Recommender System was designed and developed, i.e., one based on item features, on the user's history and personal preferences, and on the user's social interactions. The recommended multimedia contents, i.e., the items suggested to the user, come from the British television broadcaster, the British Broadcasting Corporation (BBC), and are classified according to the BBC programme categories. The user profile is built taking into account the viewing history, the context, the personal preferences, and the social activities. YouTube is the source of personal history used, simulating the main source of this type of data, the Set-Top Box (STB). The user's history consists of the set of YouTube videos and BBC programmes watched by the user. YouTube video content is classified according to YouTube's own video categories, which are then mapped to the BBC programme categories.
The social information, which comes from the social networks Facebook and Twitter, is collected through the Beancounter platform. The user's social activities are filtered to extract films and series, which are in turn semantically enriched using open linked-data repositories. In this case, films and series are classified by IMDb genres and subsequently mapped to the BBC programme categories. Finally, the user's context information and explicit preferences, given through ratings of the recommended items, are also taken into account. The system performs real-time recommendations based on the Facebook and Twitter social activities, on the history of YouTube videos and BBC programmes watched, and on explicit preferences. Tests were conducted with five users, and the system's average response time to build the initial set of recommendations was 30 s. The personalized recommendations are generated and updated at the user's explicit request.
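The category-mapping and profile-building step described above can be sketched in a few lines. The category names and mapping tables below are hypothetical placeholders for illustration, not the actual tables used by the system.

```python
# Hypothetical mappings from YouTube video categories and IMDb genres to
# BBC programme categories; the real system defines its own tables.
YOUTUBE_TO_BBC = {
    "Film & Animation": "Drama",
    "News & Politics": "News",
    "Comedy": "Comedy",
    "Education": "Factual",
}
IMDB_TO_BBC = {"Sci-Fi": "Drama", "Documentary": "Factual"}

def profile_from_history(watched):
    """Build a BBC-category frequency profile from (source, category) pairs."""
    profile = {}
    for source, category in watched:
        table = YOUTUBE_TO_BBC if source == "youtube" else IMDB_TO_BBC
        bbc = table.get(category)
        if bbc is not None:
            profile[bbc] = profile.get(bbc, 0) + 1
    return profile

history = [("youtube", "Comedy"), ("youtube", "Education"), ("imdb", "Sci-Fi")]
profile = profile_from_history(history)
```

Items whose BBC category scores highest in the profile would then be ranked first when building the recommendation set.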
Abstract:
This dissertation addresses the problem of building a data warehouse for AdClick, a company operating in digital marketing. Digital marketing uses digital communication channels with the same purpose as the traditional approach: promoting goods, businesses, and services and attracting new customers. Several digital marketing strategies exist to meet these goals, most notably organic traffic and paid traffic. Organic traffic comprises marketing actions that involve no costs for promotion and/or acquisition of potential customers, whereas paid traffic requires investment in campaigns capable of driving and attracting new customers. The dissertation first reviews the state of the art in business intelligence and data warehousing and presents their main advantages for companies. Business intelligence systems are necessary because companies today hold large volumes of information-rich data that can only be properly explored by exploiting the capabilities of such systems. Accordingly, the first step in developing a business intelligence system is to concentrate all data in a single, integrated system able to support decision-making; the data warehouse is the system of choice for these requirements. In this dissertation, the data sources that will feed the data warehouse were surveyed and the company's existing business processes were characterized. The data warehouse was then built: the dimensions and fact tables were created, the processes for extracting and loading data into the data warehouse were defined, and the various views were created.
Regarding the impact of this dissertation, the partner company gains several business-level advantages from the implementation of the data warehouse and of the ETL processes that load all the information sources, including centralized information, greater flexibility for managers in how they access that information, and data processing that makes it possible to extract information from the data.
Abstract:
Dissertation presented to obtain the degree of Doctor in Chemical Engineering from Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Fractional dynamics is a growing topic in theoretical and experimental scientific research. A classical problem is the initialization required by fractional operators. While the problem is clear from the mathematical point of view, it constitutes a challenge in applied sciences. This paper addresses the problem of initialization and its effect upon dynamical system simulation when adopting numerical approximations. The results are compatible with system dynamics and clarify the formulation of adequate values for the initial conditions in numerical simulations.
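The initialization issue discussed above can be illustrated numerically. The sketch below (not code from the paper) uses the Grünwald-Letnikov approximation of a fractional derivative: because the fractional operator weights the entire past of the signal, truncating the supplied history (i.e., a poor initialization) visibly changes the computed value.

```python
import math

def gl_weights(alpha, n):
    # Grünwald-Letnikov binomial weights w_k = (-1)^k * C(alpha, k),
    # via the recurrence w_k = w_{k-1} * (1 - (alpha + 1) / k).
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1 - (alpha + 1) / k))
    return w

def gl_derivative(samples, alpha, h):
    # Approximate the fractional derivative of order alpha at the last
    # sample, using whatever history is supplied in `samples`.
    w = gl_weights(alpha, len(samples) - 1)
    return sum(wk * samples[-1 - k] for k, wk in enumerate(w)) / h ** alpha

# f(t) = t, whose half-order derivative is t^{1/2} / Gamma(3/2).
h, t = 0.01, 1.0
full_history = [k * h for k in range(int(t / h) + 1)]   # samples from t = 0
short_history = full_history[-20:]                       # truncated history
exact = t ** 0.5 / math.gamma(1.5)

err_full = abs(gl_derivative(full_history, 0.5, h) - exact)
err_short = abs(gl_derivative(short_history, 0.5, h) - exact)
# err_short is much larger than err_full: discarding the history that the
# fractional operator requires degrades the simulated dynamics.
```

This is the numerical counterpart of the paper's point: adequate initial conditions must account for the memory of the fractional operator.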
Abstract:
The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
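The orthogonal projection just described can be sketched on toy data (the signatures below are assumptions for illustration, not from the chapter): the projector P = I - U (UᵀU)⁻¹ Uᵀ annihilates the undesired signatures collected in U while preserving a component of the target signature.

```python
import numpy as np

def osp_projector(U):
    # Projector onto the orthogonal complement of the column space of U
    # (the undesired signatures): P = I - U (U^T U)^{-1} U^T.
    # For full-column-rank U, np.linalg.pinv(U) equals (U^T U)^{-1} U^T.
    U = np.atleast_2d(np.asarray(U, dtype=float))
    return np.eye(U.shape[0]) - U @ np.linalg.pinv(U)

# Toy scene with 3 spectral bands: one undesired signature u, one target d.
u = np.array([[1.0], [1.0], [0.0]])
d = np.array([1.0, 0.0, 1.0])
P = osp_projector(u)

pixel = 0.7 * u[:, 0] + 0.3 * d   # linear mixture of the two signatures
residual = P @ pixel              # the contribution of u is suppressed
```

After projection, the residual is exactly the target's abundance times P d, so the undesired signature no longer masks the signature of interest.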
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, which plays the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are complex from the computational point of view: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
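The abundance-dependence argument above can be reproduced numerically. The sketch below (toy signatures, a simplified version of the generative model, not the chapter's full model) draws Dirichlet abundances, which are nonnegative and sum to one, and shows the negative pairwise correlation that the sum-to-one constraint induces, violating the independence assumption behind ICA and IFA.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_bands, n_end = 1000, 5, 3

# Hypothetical nonnegative endmember signatures (columns of M).
M = rng.uniform(0.1, 0.9, size=(n_bands, n_end))

# Abundances from a symmetric Dirichlet: positivity + full additivity.
S = rng.dirichlet(np.ones(n_end), size=n_pixels)

# Linear mixing model with additive system noise.
X = S @ M.T + 0.01 * rng.standard_normal((n_pixels, n_bands))

# The sum-to-one constraint makes the abundances dependent: pairwise
# correlations are negative (about -1/(n_end - 1) for this Dirichlet).
corr = np.corrcoef(S.T)
```

Because the sources are correlated by construction, an unmixing matrix chosen to minimize mutual information need not coincide with the true one, which is the chapter's central observation.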
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], the spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
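The PPI skewer-projection step just described can be sketched on toy data (a minimal illustration, not the reference implementation): pixels that turn up as extremes of many random projections accumulate high purity scores.

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=1):
    # Project every spectral vector onto random directions ("skewers")
    # and count how often each pixel is an extreme of the projection.
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X), dtype=int)
    for _ in range(n_skewers):
        proj = X @ rng.standard_normal(X.shape[1])
        scores[np.argmin(proj)] += 1
        scores[np.argmax(proj)] += 1
    return scores

# Toy data: three pure pixels (simplex vertices) plus interior mixtures.
E = np.eye(3)
mixed = np.array([[0.2, 0.3, 0.5], [0.6, 0.2, 0.2], [0.1, 0.8, 0.1]])
X = np.vstack([E, mixed])
scores = ppi_scores(X)
# Only the pure pixels (rows 0-2) collect extremes: a convex mixture of
# the vertices can never strictly exceed the vertices on any projection.
```

This also makes the pure-pixel assumption concrete: if no vertex of the simplex is actually present in the data, the highest-scoring pixels are merely the least mixed ones, not true endmembers.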
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
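The iterative projection at the heart of the algorithm can be sketched as follows. This is a simplified illustration on toy data under the pure-pixel assumption; the actual VCA also handles dimensionality reduction and noise.

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    # Simplified VCA-style loop: project the data onto a random direction
    # orthogonal to the subspace spanned by the endmembers found so far;
    # the extreme of the projection is taken as the next endmember.
    rng = np.random.default_rng(seed)
    indices, E = [], np.zeros((X.shape[1], 0))
    for _ in range(p):
        f = rng.standard_normal(X.shape[1])
        if E.shape[1]:
            f = f - E @ (np.linalg.pinv(E) @ f)   # remove span(E) component
        k = int(np.argmax(np.abs(X @ f)))
        indices.append(k)
        E = np.column_stack([E, X[k]])
    return indices

# Toy data: rows 0-2 are pure pixels (simplex vertices), the rest mixtures.
X = np.vstack([np.eye(3),
               [[0.5, 0.25, 0.25], [0.1, 0.2, 0.7], [0.3, 0.4, 0.3]]])
endmembers = extract_endmembers(X, p=3)
```

On each pass, every already-found endmember projects to zero and every convex mixture is dominated by some remaining vertex, so the extreme of the projection is always a new pure pixel.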
Abstract:
Dissertation presented to obtain the degree of Doctor in Environmental Engineering from Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Breast cancer is a worldwide public health concern due to its incidence, mortality, and associated economic costs. Although effective, the therapies used in its treatment lead to changes in all Quality of Life (QoL) dimensions of a woman suffering from breast cancer. QoL is an outcome measure, and the assurance of the quality of care provided should be a priority for health organizations. Taking into consideration that in Portugal there is a potential difference in the way women with breast cancer are provided with physical therapy, it is important to know whether physical therapy does or does not influence the QoL of women with breast cancer. If it does, it will lead to a health care quality improvement for cancer patients. The goal of this study is to build an analysis model in order to answer the initial research question: "Does physical therapy contribute to enhancing the Quality of Life of women with breast cancer who underwent surgery and other oncology treatments?" The project was divided into different stages.
Initially, a literature revision was elaborated and exploratory interviews were held, which allowed an actual knowledge of the themes that define the variables and the object of study. The next stage included a critical analysis of the theme, which allowed the definition of variables of study, the choice of instrument of measure and the acquisition of some knowledge on how to proceed. After the definition of the general goal (to evaluate the influence of physical therapy on the QoL of women with breast cancer who underwent surgery and other oncology treatments) and specific goals, the choice of a right methodology took place, in order to answer the investigation questions (type of study, variables, unit analysis, methods and techniques on data collection, procedures and data treatment). In the scope of the project, it is decided to put out on the field an effective case-study which assures a real contribution on the choice of te methodology. In this particular work, there was a pilot study, included in the methodology procedures, with the goal of obtaining conclusions on the applicability of the instrument of measure; the length of time to collect data, the socio-demographic and clinical characteristics of the sample; the investigation questions. The pilot study consisted on a one group pretest-postest design, with a sample of 35 individuals who underwent surgery and other oncology treatments. Dimensions such as physical well-being and everyday life activities, psychological well-being, social relationships, symptoms and socio-demographical/clinical characteristics were assessed at the beginning of physical therapy individual treatment and at the moment of release. The instrument of measure used was the EORTC QLQ–30 questionnaire and its complementary questionnaire EORTC QLQ–23. A chart was made in order to collect socio-demographic and clinical data. Statistic significance was accepted for values of p<0,05. 
To compare groups and to detect the evolution within each group, Student's t-test and the Mann-Whitney test were applied. The outcome analysis of the pilot study allowed us to verify that: - The proposed measurement instrument (EORTC QLQ-C30 and BR23) was easy to apply, and the subjects showed no difficulty in filling it out; there was also no problem in calculating or interpreting the scores; - A considerable proportion of women with breast cancer will be submitted to protocols that may extend over several months after surgery (e.g., QT+RT+HT). This reality leads us to suggest assessing the QoL dimensions at several moments of the different treatment protocols. We consider that the ideal number of evaluations would be 4 (3-4 weeks, 3 months, 6 months and 9 months after surgery). We also suggest the use of a larger sample; - Since the pilot study resorted to a one-group pretest-posttest design (with no control group and only two moments of assessment), the results lack consistency. However, the results obtained indicate that physical therapy influences the QoL dimensions of women with breast cancer who underwent surgery and other oncology treatments, which may be an asset to the quality of care provided to cancer patients. The outcome of the pilot study made it possible to redefine the methodology considered adequate to answer the initial research question. Our suggestion is as follows: a quasi-experimental design, with a sample of 120 subjects (2 groups of 60 women) with breast cancer who underwent surgery and other oncology treatments. The experimental group will be submitted to individual physical therapy treatments. Dimensions such as physical well-being and everyday life activities, psychological well-being, social relationships and symptoms will be assessed. Data collection will occur at 3 weeks, 3 months, 6 months and 9 months after surgery.
The measurement instrument is the EORTC QLQ-C30 questionnaire and its complementary breast module EORTC QLQ-BR23; socio-demographic and clinical information will also be collected. Statistical significance will be accepted for p < 0.05. Parametric and non-parametric tests will be used to compare groups and to detect the evolution within each group. Carrying out a study following the methodology discussed above would allow better consistency of results, possibly enabling the confirmation that physical therapy influences the QoL of women with breast cancer who underwent surgery and other oncology treatments. The evidence that physical therapy influences the QoL of women with breast cancer, and the fact that QoL is an indicator of the quality of care provided to cancer patients, may act as a facilitating agent for change in the human resources management of health organizations with oncology services, leading to a change in oncology physical therapy practice patterns in Portugal and, in turn, to improved quality of care for cancer patients.
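The questionnaire scoring mentioned in the abstract above follows the EORTC scoring manual: each scale's raw score is the mean of its items, which is then linearly transformed to a 0-100 range. A minimal sketch (the example item lists are illustrative, not a full questionnaire):

```python
# Hedged sketch of EORTC QLQ-C30 scale scoring per the EORTC scoring manual:
# raw score = mean of a scale's items, then a linear transform to 0-100.
# The example items below are illustrative, not a real questionnaire.

def raw_score(items):
    """Mean of the answered items of one scale."""
    return sum(items) / len(items)

def functional_score(items, item_range=3):
    """Functional scales (items answered 1-4): higher = better functioning."""
    return (1 - (raw_score(items) - 1) / item_range) * 100

def symptom_score(items, item_range=3):
    """Symptom scales/items (answered 1-4): higher = more symptoms."""
    return (raw_score(items) - 1) / item_range * 100

# A respondent not limited at all (all items = 1) scores 100 on function;
# maximal symptom answers (all items = 4) score 100 on symptoms.
print(functional_score([1, 1, 1, 1, 1]))  # 100.0
print(symptom_score([4, 4]))              # 100.0
```

Functional scales are oriented so that higher means better functioning, while symptom scales score higher for more symptoms, which is why the two linear transforms differ.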
Abstract:
Electric power networks, namely distribution networks, have undergone several changes in recent years due to changes in power systems operation, towards the implementation of smart grids. Several approaches to the operation of resources have been introduced, such as demand response, making use of the new capabilities of smart grids. In the early stages of smart grid implementation, only reduced amounts of data are generated, namely consumption data. The methodology proposed in the present paper makes use of demand response consumer performance evaluation methods to determine the expected consumption of a given consumer. Potential commercial losses are then identified using monthly historic consumption data. Real consumption data is used in the case study to demonstrate the application of the proposed method.
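The paper's method is far richer than this, but the core idea of comparing observed consumption against an expectation built from monthly history can be sketched as follows (the threshold rule and the readings are invented for illustration):

```python
# Hedged sketch (invented data and threshold): estimate a consumer's expected
# monthly consumption from history and flag months low enough to suggest a
# potential commercial (non-technical) loss worth inspecting.
from statistics import mean, stdev

def flag_potential_losses(history_kwh, k=2.0):
    """Return indices of months falling below mean - k * stdev."""
    mu, sigma = mean(history_kwh), stdev(history_kwh)
    threshold = mu - k * sigma
    return [i for i, c in enumerate(history_kwh) if c < threshold]

# Twelve hypothetical monthly readings; month index 6 drops sharply.
monthly = [310, 295, 305, 300, 290, 315, 120, 298, 302, 310, 305, 299]
print(flag_potential_losses(monthly))  # [6]
```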
Abstract:
ABSTRACT - Background/objectives: Despite the high level of commitment to effective strategies for tuberculosis control worldwide, tuberculosis is still a serious public health problem, with a global estimate of 9.4 million new cases in 2008 and 1.8 million deaths per year. The poor understanding of the barriers to, and facilitators of, treatment success is an important obstacle in the search for effective solutions to improve the quality of tuberculosis control programs. This study seeks to contribute to the timely identification of patients with profiles predictive of unsuccessful treatment, through the initial identification of potential outcome determinants, based on an epidemiological and statistical model. Methods: A case-control study was developed for the population of cases notified to the National Program for Tuberculosis Control (n=24,491) between 2000 and 2007. Predictive factors for unsuccessful treatment were identified in bivariate and multivariate analyses, with a significance level of 5%; logistic regression was used to estimate the odds ratio of unsuccessful versus successful treatment for several factors identified in the literature and for which data were available. Results: Alcohol dependence (OR=2.889), country of origin (OR=3.910), homelessness (OR=3.919), HIV co-infection (OR=5.173), interruption of (OR=60.615) or failure in the previous treatment (OR=67.345), and treatment duration below 165 days (OR=1930.133) were identified as predictive factors for unsuccessful treatment. Treatment duration below 165 days proved to be the most important determinant of treatment outcome.
Conclusions: The results suggest that an immigrant, homeless, alcohol-dependent patient with previous tuberculosis treatments and HIV co-infection has a high probability of unsuccessful treatment. Specific, patient-centered strategies should therefore be defined to prevent this outcome. The database (SVIG-TB) proved to be a quality tool for research on various aspects of tuberculosis control. ------------------------------- ABSTRACT - Background/Objective: Despite the high commitment to good strategies for tuberculosis control worldwide, this is still a serious public health problem, with global estimates of 9.4 million new cases in 2008 and 1.8 million deaths per year. The poor understanding of the barriers and facilitators to treatment success is a major obstacle to finding effective solutions to improve the quality of tuberculosis programs. This study tries to contribute to the timely identification of patients with predictive profiles of unsuccessful treatment outcomes, through the initial identification of characteristics probably affecting treatment outcome, found on the basis of an epidemiological and statistical model. Methods: A case-control study was conducted for the population of cases notified to the National Program for Tuberculosis Control (n=24,491) between 2000 and 2007. Predictive factors for unsuccessful outcome were assessed in a bivariate and multivariate analysis, using a significance level of 5%; a logistic regression was used to estimate the odds ratio of unsuccessful, as compared to successful, outcome for several factors identified in the literature and for which data were available.
Results: Alcohol abuse (OR=2.889), the patient's foreign origin (OR=3.910), homelessness (OR=3.919), HIV co-infection (OR=5.173), interruption of (OR=60.615) or unsuccessful outcome in the previous treatment (OR=67.345), and treatment duration below 165 days (OR=1930.133) were identified as predictive of unsuccessful outcomes. A short treatment duration proved to be the most powerful factor affecting treatment outcome. Conclusions: Results suggest that a foreign-born, homeless patient who abuses alcohol, has had a previous treatment for tuberculosis and is co-infected with HIV is very likely to have an unsuccessful outcome. Therefore, specific, patient-centered strategies should be adopted to prevent an unsuccessful outcome. The database (SVIG-TB) has proved to be a quality tool for research on various aspects of tuberculosis control.
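The odds ratios reported above come from a logistic regression, but each one can be read as the familiar 2x2-table odds ratio. A hedged sketch with invented counts (not the study's data):

```python
# Hedged sketch (invented counts, not the study's data): each reported OR can
# be read as a 2x2-table odds ratio; logistic regression estimates the same
# quantity, adjusted for the other factors, on the log-odds scale.
import math

def odds_ratio(exposed_fail, exposed_ok, unexposed_fail, unexposed_ok):
    """Odds of unsuccessful outcome in exposed vs unexposed patients."""
    return (exposed_fail * unexposed_ok) / (exposed_ok * unexposed_fail)

# Hypothetical counts for a single factor, e.g. HIV co-infection.
or_hiv = odds_ratio(exposed_fail=50, exposed_ok=100,
                    unexposed_fail=200, unexposed_ok=2000)
log_or = math.log(or_hiv)  # the scale of a logistic regression coefficient
print(or_hiv)  # 5.0
```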
Abstract:
An intensive use of dispersed energy resources is expected in future power systems, including distributed generation, especially based on renewable sources, and electric vehicles. System operation methods and tools must be adapted to the increased complexity, especially for the optimal resource scheduling problem; metaheuristics are therefore required to obtain good solutions in a reasonable amount of time. This paper proposes two new heuristics, called "naive electric vehicles charge and discharge allocation" and "generation tournament based on cost", developed to obtain an initial solution for the energy resource scheduling methodology based on simulated annealing previously developed by the authors. The case study considers two scenarios with 1000 and 2000 electric vehicles connected to a distribution network. The proposed heuristics are compared with a deterministic approach, presenting a very small error in the objective function with a low execution time for the scenario with 2000 vehicles.
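The authors' scheduling model and heuristics are not reproduced here; the sketch below only illustrates, on a toy one-dimensional cost, where a heuristic initial solution enters a generic simulated annealing loop:

```python
# Illustrative only: a generic simulated annealing loop on a toy 1-D cost,
# showing where a heuristic initial solution plugs in. The paper's scheduling
# model, neighborhoods and the two proposed heuristics are much richer.
import math
import random

def simulated_annealing(cost, initial, neighbor, t0=10.0, cooling=0.95, iters=500):
    """Minimize `cost` starting from the heuristic `initial` solution."""
    random.seed(0)  # reproducible toy run
    current = best = initial
    t = t0
    for _ in range(iters):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Accept improvements always; worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current
        t *= cooling  # geometric cooling schedule
    return best

cost = lambda x: (x - 3.0) ** 2            # toy objective, optimum at x = 3
neighbor = lambda x: x + random.uniform(-0.5, 0.5)
best = simulated_annealing(cost, initial=2.5, neighbor=neighbor)
print(cost(best) <= cost(2.5))  # never worse than the heuristic start
```

The better the heuristic start, the less work the annealing loop has to do, which is the motivation for the two initial-solution heuristics the abstract describes.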
Abstract:
This study first describes the epidemiology of malaria in Roraima, in the Brazilian Amazon Basin, from 1991 to 1993: the predominance of Plasmodium species, the distribution of blood slides examined, malaria risk and seasonality. Second, it investigates whether population growth from 1962 to 1993 was associated with an increasing risk of malaria. The frequency of malaria varied significantly by municipality. Marginally more malaria cases were reported during the dry season (from October to April), even after controlling for year and municipality. P. vivax was the predominant species in all municipalities, but the ratio of Plasmodium species varied between municipalities. No direct association between population growth and increasing risk of malaria from 1962 to 1993 was detected. Malaria in Roraima is of the "frontier" epidemiological type, with high epidemic potential.
Abstract:
Fractional dynamics is a growing topic in theoretical and experimental scientific research. A classical problem is the initialization required by fractional operators. While the problem is clear from the mathematical point of view, it constitutes a challenge in applied sciences. This paper addresses the problem of initialization and its effect upon dynamical system simulation when adopting numerical approximations. The results are compatible with system dynamics and clarify the formulation of adequate values for the initial conditions in numerical simulations.
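As a concrete illustration of the initialization problem (not the paper's formulation), consider the Grünwald-Letnikov approximation, whose weights reach arbitrarily far into the signal's past; truncating that history, as numerical implementations must, shifts the computed value:

```python
# Illustrative only (not the paper's formulation): a Grünwald-Letnikov (GL)
# approximation of D^alpha f at the last sample. The GL sum reaches back over
# the whole history, so truncating that history (the usual numerical shortcut)
# changes the result: the initialization problem in miniature.

def gl_weights(alpha, n):
    """Recursive GL weights w_k = (-1)^k * binom(alpha, k), k = 0..n."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(samples, alpha, h):
    """GL fractional derivative at the last sample, using all given history."""
    w = gl_weights(alpha, len(samples) - 1)
    return sum(wk * samples[-1 - k] for k, wk in enumerate(w)) / h ** alpha

h, alpha = 0.01, 0.5
f = [k * h for k in range(1001)]           # f(t) = t sampled on [0, 10]
full = gl_derivative(f, alpha, h)          # full history back to t = 0
short = gl_derivative(f[-100:], alpha, h)  # truncated memory of 100 samples
print(abs(full - short) > 1e-6)            # the truncation is visible
```

Implicitly, the truncated call treats the signal as zero before the retained window, which is exactly the kind of inadequate initial condition the paper analyzes.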
Abstract:
Demo at the Workshop on ns-3 (WNS3 2015), 13-14 May 2015, Castelldefels, Spain.
Abstract:
ABSTRACT: Recurrent miscarriage (RM) is an extremely traumatic event with a great impact on couples' lives. Despite significant advances in medical research, about 50% of cases still have no identified cause. Aspects such as the inadequate characterization of patients and of pregnancy losses, as well as the different methodologies used in their study, have influenced the reported prevalence of some causal factors and hampered the understanding of RM. Likewise, little is known about gender differences in the psychological experience of recurrent miscarriage and its possible repercussions for the couple's relationship, as the few existing studies focus mainly on the woman. For this reason, the objective of this thesis was to characterize the medical factors associated with RM and the psychological consequences of this condition, helping to promote specific evidence-based clinical strategies. In the first part of this thesis (Chapters 1 and 2), after a brief general introduction and through a literature review, the topic is discussed, covering the epidemiology of recurrent miscarriage and the associated medical factors and psychological aspects. In Chapters 3 and 4 we describe three studies carried out in Portuguese women with recurrent miscarriage. The first study aimed to characterize the medical factors and to determine the pattern of recurrent pregnancy loss in a cohort of women submitted to a defined diagnostic protocol. Participants were grouped according to parity (primary or secondary RM) and the gestational age of the losses (embryonic or fetal). Uterine cavity anomalies, antiphospholipid syndrome (APS) and parental balanced translocations were the most prevalent factors. 15.6% of the participants were obese. In 55.5% of cases no factor was identified.
Maternal obstetric history significantly influenced the results: anatomical factors and APS were more prevalent in nulliparous women, and unexplained losses were more frequent in women with secondary RM. Our data thus reinforce the results of previous research on the importance of obesity, antiphospholipid syndrome and structural uterine anomalies as factors associated with RM, and show that parity is a moderator of the importance of those factors. The absence of consensual results in the literature on the etiology of RM conditions the systematic investigation of some factors, involving expensive examinations, often without evidence supporting their association with this condition. Inherited thrombophilia is one of the conditions frequently investigated in these patients. Our second study aims to help clarify the role of two mutations (factor V Leiden and prothrombin G20210A) in recurrent pregnancy loss and to determine the need for their screening in these situations. These polymorphisms were investigated in 100 women with unexplained RM and in a control group of parous women with no history of pregnancy loss. In our sample, no association was found between recurrent embryonic losses and these mutations. In women with this type of loss, the prevalence of FVL was in fact lower than that observed in controls. Conversely, in participants with fetal losses the prevalence of these polymorphisms was much higher than in controls, suggesting a possible association between the two. The small size of this last subgroup of women did not, however, allow us to draw conclusions. A prospective multicenter investigation is needed before recommending inherited thrombophilia screening in the investigation of RM.
We also sought to give this thesis a psychological dimension and thus contribute to the knowledge of the relational processes triggered by RM. The third study investigated gender differences in the experience of RM and its impact on the couple's relationship and sexuality. Thirty childless couples with at least 3 consecutive miscarriages participated in this study. Each member of the couple answered a set of questionnaires (Impact of Events Scale, Perinatal Grief Scale, Partnership Questionnaire and Intimate Relationship Scale). The results show that women suffer more intensely from RM than men, and that the intensity of their suffering is related to the quality of the marital relationship. The couple's sexuality is also affected by the stress and grief associated with RM. Assessment and follow-up of this type of problem are essential to help these couples maintain the affective and sexual quality of their relationship. Finally, in Chapter 5 we summarize the conclusions of our entire personal contribution to research on the factors associated with recurrent pregnancy loss and its repercussions for the couple.-------------------ABSTRACT: Recurrent miscarriage (RM), a rare condition, has been described as a traumatic event for couples. Parental chromosomal anomalies, maternal thrombophilic disorders and structural uterine anomalies have been directly associated with RM. However, despite significant advances in medical research, the vast majority of cases remain unexplained. Aspects such as the ethnic diversity of the population, with different expression of genes, the inappropriate characterization of patients and of pregnancy losses, as well as the different methodologies used in their study, have influenced the reported prevalence of etiological factors and have hampered the understanding of this problem.
Similarly, little is known about gender differences in the psychological experience of RM and its implications for the couple's relationship. The first objective of this thesis is the characterization of the medical factors and psychological consequences related to RM in the Portuguese population, helping to promote specific evidence-based clinical strategies. In the first part of this thesis, after a brief general introduction (Chapter 1), a critical review of the literature on the definition, the epidemiology and the dimensions involved, with special emphasis on the medical and psychological aspects associated with recurrent miscarriage, is presented (Chapter 2). In Chapters 3 and 4 we describe three studies carried out in Portuguese couples with RM. The first study aimed to investigate the etiological factors and the pattern of pregnancy loss in a cohort of women with RM. Subjects were divided into groups according to parity (primary or secondary RM) and time of pregnancy loss (embryonic or fetal). Parental chromosome anomalies, uterine anomalies and antiphospholipid syndrome were the most prevalent medical factors. 15.6% of the women were obese. In the majority of cases (55.5%) no identifiable cause was detected. Parity significantly influenced our results: there was a higher prevalence of anatomic factors and antiphospholipid syndrome in primary RM, whereas unexplained losses were more frequent in secondary RM. Except for parental chromosomal abnormalities, the frequency of risk factors was similar among women with fetal or embryonic losses. Our data reinforce the results of previous research on the importance of obesity, antiphospholipid syndrome and structural uterine abnormalities as known risk factors for RM, and show that parity is an important moderator of the weight of those risk factors.
Our second study aims to clarify the role of two mutations (factor V Leiden and prothrombin G20210A) and to elucidate the need for their screening in Portuguese women with RM. FVL and PT G20210A analysis was carried out in 100 women with three or more consecutive miscarriages and in a control group of 100 parous women with no history of pregnancy loss. A secondary analysis was made regarding gestational age at miscarriage (embryonic or fetal loss). Overall, the prevalence of FVL and PT G20210A was similar in RM women compared with controls. In the RM embryonic subgroup of women, FVL prevalence was in fact lower than that of controls. Conversely, in women with fetal losses both polymorphisms were much more frequent, although statistical significance was not reached due to the small size of this subgroup of patients. These data indicate that inherited maternal thrombophilia is not associated with RM prior to 10 weeks of gestation. Therefore, its screening is not indicated as an initial approach in Portuguese women with RM and a negative personal history of thromboembolic disease. In our third study, we investigated gender differences in the experience of RM and its impact on the couple's relationship and sexuality. Each member of 30 couples with RM answered a set of questionnaires, including the Impact of Events Scale (Horowitz et al., 1979), the Perinatal Grief Scale (Toedter et al., 1988), the Partnership Questionnaire (Hahlweg, 1979) and the Intimate Relationship Scale (Hetherington & Soeken, 1990). Results showed that men do grieve, but less intensely than women. Although the quality of the couple's relationship did not seem to be adversely affected by RM, both partners described sexual changes after those events. Grief was related to the quality of communication in the couple for women, and to the quality of sex life for men. An understanding of such issues is critical in helping these couples maintain the sexual and affective quality of their relationship. Finally, in Chapter 5, the conclusions and clinical implications of our entire personal contribution to the investigation of the factors associated with, and the relational consequences of, recurrent miscarriage are presented.