861 results for "experience-based knowledge"
Abstract:
Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool to unmix hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without the use of any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units. The impact of the proposed parallel implementation on the complexity and on the accuracy of VCA is examined using both simulated and real hyperspectral datasets.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
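The linear mixing model described above can be written as x = M a + n, where the columns of M are endmember signatures, a holds the abundance fractions (nonnegative, summing to one), and n is noise. A minimal numerical sketch, with made-up dimensions and values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, p = 50, 3

M = rng.uniform(0.0, 1.0, size=(n_bands, p))   # endmember signatures (columns)
a = np.array([0.6, 0.3, 0.1])                  # abundance fractions, sum to one
x = M @ a + rng.normal(0.0, 0.01, n_bands)     # observed mixed pixel
```

Unmixing is the inverse problem: recovering M (and a, per pixel) from many observed vectors x.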
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
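The statistical-dependence argument against ICA can be checked numerically: because the abundance fractions sum to one, they are necessarily correlated. A small sketch, assuming Dirichlet-distributed abundances (a common simulation choice, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
# 100,000 simulated pixels, each with 3 abundance fractions summing to one.
a = rng.dirichlet([1.0, 1.0, 1.0], size=100_000)
corr = np.corrcoef(a, rowvar=False)
# Off-diagonal correlations are negative (about -0.5 for a symmetric
# 3-component Dirichlet), so the independence assumption behind ICA fails.
```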
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some datasets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
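The PPI project-and-count procedure described above can be sketched as follows; this is a minimal illustration of the skewer step only, not a full PPI implementation (it omits the MNF preprocessing):

```python
import numpy as np

def ppi(spectra, n_skewers=1000, seed=0):
    """Pixel purity index sketch: project every spectral vector onto random
    'skewer' directions and count how often each pixel is the extreme
    (maximum or minimum) of a projection.

    spectra: (n_pixels, n_bands) array, assumed already dimension-reduced.
    Returns an integer purity score per pixel; highest scores = purest pixels.
    """
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = spectra.shape
    scores = np.zeros(n_pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(n_bands)
        proj = spectra @ skewer
        scores[np.argmax(proj)] += 1   # extreme in the positive direction
        scores[np.argmin(proj)] += 1   # extreme in the negative direction
    return scores
```

A pixel strictly inside the convex hull of the data is never an extreme of any projection, so its score stays at zero; vertices of the data cloud accumulate the counts.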
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
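The iterative orthogonal-projection step described above can be sketched as follows. This is only a simplified illustration of the idea, not the full VCA algorithm (it omits the signal-subspace projection and SNR-dependent details):

```python
import numpy as np

def extract_endmembers(spectra, p, seed=0):
    """Sketch of the iterative extraction step: at each iteration, project
    the data onto a direction orthogonal to the subspace spanned by the
    endmembers found so far, and take the pixel with the largest projection
    magnitude as the next endmember.

    spectra: (n_pixels, n_bands) array; p: number of endmembers.
    Returns the indices of the selected (purest) pixels.
    """
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = spectra.shape
    indices = []
    E = np.zeros((n_bands, 0))               # endmember signatures found so far
    for _ in range(p):
        f = rng.standard_normal(n_bands)     # random direction
        if E.shape[1] > 0:
            proj_E = E @ np.linalg.pinv(E)   # orthogonal projector onto span(E)
            f = f - proj_E @ f               # make f orthogonal to span(E)
        v = np.abs(spectra @ f)              # projection of every pixel onto f
        idx = int(np.argmax(v))              # extreme of the projection
        indices.append(idx)
        E = np.column_stack([E, spectra[idx]])
    return indices
```

Because an already-selected endmember lies in span(E), its projection onto the new orthogonal direction is zero, so it cannot be selected twice; mixed pixels, being convex combinations of the vertices, always project less far than some vertex.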
Abstract:
In a wastewater treatment plant (WWTP), costs are high not only for treating the wastewater but also for maintaining the equipment installed there, so there is an incentive to use processes capable of turning waste into useful products. Anaerobic digestion (AD) is a currently available process that can help reduce environmental pollution while adding value to the by-products generated. During the AD process a gas, biogas, is produced that can be used as an energy source, thereby reducing the WWTP's energy dependence and the emission of greenhouse gases into the atmosphere. Optimizing the AD of sludge is essential to increase biogas production, but the complexity of the process is an obstacle to its optimization. In this work, artificial neural networks (ANN) were applied to the AD of WWTP sludge. ANNs are simplified models inspired by the functioning of human neural cells, and they acquire knowledge through experience. Once an ANN is created and trained, it produces approximately correct output values for the inputs provided. This was the motivation for using ANNs to optimize biogas production in digester I of the ETAR Norte of SIMRIA, using Palisade's NeuralTools software to develop the ANNs. To this end, data from the last four years of the digester's operation were analysed and pre-processed. The results obtained showed that the modelled ANNs generalize the AD process well. This case study is considered promising, providing a good basis for the development of possibly more general ANN models that, applied together with the operating characteristics of a digester and the AD process, will make it possible to optimize biogas production in WWTPs.
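For illustration only: the study used Palisade's NeuralTools, which is proprietary, so the sketch below uses a generic feed-forward network from scikit-learn as a stand-in. The input variables and all data are hypothetical, invented to show the train-then-check-generalization workflow the abstract describes:

```python
# Hypothetical stand-in for the NeuralTools workflow described in the text.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400  # roughly four years of operating records (made-up number)

# Hypothetical inputs: sludge feed flow, temperature, pH, volatile solids.
X = rng.uniform([50, 30, 6.5, 1.0], [150, 40, 7.8, 3.0], size=(n, 4))
# Synthetic biogas production with a mild nonlinearity plus noise.
y = 2.0 * X[:, 0] + 15.0 * (X[:, 2] - 7.0) ** 2 + 30.0 * X[:, 3] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
x_scaler = StandardScaler().fit(X_tr)
y_mean, y_std = y_tr.mean(), y_tr.std()

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(x_scaler.transform(X_tr), (y_tr - y_mean) / y_std)  # train on scaled data

# Generalization check on held-out data (R^2 on the test split).
pred = net.predict(x_scaler.transform(X_te)) * y_std + y_mean
r2 = 1.0 - ((y_te - pred) ** 2).sum() / ((y_te - y_te.mean()) ** 2).sum()
```

The held-out R^2 plays the role of the "good generalization capacity" assessment mentioned in the abstract.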
Abstract:
The year 2012 was the "boom year" for MOOCs, and their outstanding growth since then led us to design the first MOOC in our institution (and the third in our country, Portugal). Most MOOCs are based on video lectures, and the learning-analytics process for them is just taking its first steps. Designing a video lecture seems, at first glance, very easy: one can just record a live lesson or lecture and turn it directly into a video lecture (even here one may experience some "sound" and "camera" problems); but developing an engaging, appealing video lecture that motivates students to embrace knowledge and really contributes to the teaching/learning process is not an easy task. Therefore questions like "What kind of information can induce knowledge construction in a video lecture?", "How can a professor interact in a video lecture when he is not really there?", "Which video-lecture attributes contribute the most to viewer engagement?", "What seems to be the maximum 'time resistance' of a viewer?", and many others arose in our minds when designing video lectures for a Mathematics MOOC from scratch. We believe this technological resource can be a powerful tool to enhance students' learning process. Students who were born in the digital/image era respond and react slightly differently to outside stimuli than their teachers/professors ever did or do. In this article we describe how we have tried to overcome some of the difficulties and challenges we tackled when producing our own video math lectures, and in what way, we feel, videos can contribute to the teaching and learning process at the higher-education level.
Abstract:
RESUMO: Circulating calcium concentrations are remarkably constant despite daily variations in the intestinal absorption and renal elimination of this element. The regulation of calcemia is a complex system comprising several controlling factors (calcemia, phosphatemia, the circulating concentrations of parathyroid hormone (PTH) and calcitriol, as well as many other factors such as steroid hormones in general, other ions such as magnesium, and other hormonal factors) and several target organs (parathyroid glands, bone, kidney and intestine). The responses of the target organs are also highly varied. In the simplest case, the crystallization of calcium salts corresponds to a phase change in which organic molecules participate, initiating, accelerating or inhibiting it. In general, the combination of a controlling factor with its membrane receptor (for polypeptides or ions) or intracellular receptor (steroid hormones) is only the first step of a biochemical chain that introduces enormous amplification into the response. This variety of response mechanisms is matched by large differences in response times, which can range from minutes to weeks. It is now possible to "observe" (measure) with appreciable accuracy in biological fluids (blood, urine, faeces, etc.) the most important factors of the calcemia regulation system (calcium, phosphorus, parathyroid hormone and calcitriol), as well as to administer these factors in acute experiments. This possibility is reflected in the literature in this field, which keeps growing. The advent of molecular biology techniques has allowed the molecular characterization of some dysfunctions of calcium homeostasis, and an increasingly rigorous physiopathological diagnosis of these dysfunctions is to be expected. With the ever-growing knowledge in this area we have an ever greater capacity to make diagnoses, and it is increasingly difficult to interpret the corresponding metabolic pictures rigorously.
The analysis or synthesis of complex systems is the noblest activity of engineers, allowing them to design bridges, dams, ships, airplanes or automobiles. With the advent of medium- and large-scale computers it became possible to use mathematical descriptions not only to design systems but also to interpret possible failures in their operation. These mathematical descriptions consist of a sequence of operations performed on a computer according to a "computer program"; they have received the generic designation of models, by analogy with the famous laws (equations) of physics, which were deduced from a certain number of postulates and allow physical processes to be represented mathematically. Newton's laws are perhaps the most famous examples of "models" of physical systems. The introduction of mathematical models in biology, and particularly in medicine, is recent. METHODS: In the work presented here, a simplified model of calcium homeostasis was built for the computation of observable variables (concentrations of calcium, phosphorus, PTH and calcitriol), so that computed values could be compared with observed values. The choice of the model's components was determined by our clinical experience and by the published physiopathological and clinical information. Care was taken to build the model in a modular fashion, so that it could be expanded without major changes to the existing mathematical (and computational) description. In its current state the model cannot be used as a diagnostic instrument. It is rather a tool intended to clarify physiopathological mechanisms "in principle". The model was used to simulate a number of published observations and to exemplify its possible clinical application in the simulation of hypothetical situations and in the analysis of possible physiopathological mechanisms responsible for hypo- or hypercalcemia.
At the same time, an analysis was made of the accumulated data on patients seen at the Endocrinology Department of the Instituto Português de Oncologia de Francisco Gentil – Centro Regional Oncológico de Lisboa, S.A. CONCLUSIONS: In a population of 894 patients with varied pathologies from the Instituto Português de Oncologia de Lisboa, calcemia values had a unimodal normal distribution with a mean of 9.56 mg/dl and a standard error of 0.41 mg/dl. These observations suggest that calcemia is subject to regulation. From published results in which calcium metabolism was perturbed by infusions of calcium, calcitriol or PTH, from biochemical and physiological studies of the mechanisms of action of the factors controlling calcemia, and from the study of the behaviour of the target organs (parathyroids, intestine, bone and kidney), it was possible to build a lumped-parameter mathematical model of the calcemia regulation system. The analytical expressions used were based on enzyme kinetics, so that their parameters would have a simple physical or physiological meaning. The model showed appreciable robustness and flexibility. It is stable when unperturbed and transitions between steady states when perturbed. In its current form it generates simulations that satisfactorily reproduce an appreciable number of experimental data collected from patients. This does not mean that it can be used as a diagnostic instrument applicable to individual patients. The model's design allows the later addition of new relations whenever situations arise for which it proves insufficient.
The exhaustive use of the model made it possible to make explicit aspects of calcium metabolism that either are not contained in its current formulation (the appearance of hypertrophy or adenomas of the parathyroids, changes in bone structure, the participation of other controlling factors such as magnesium) or are insufficiently described (changes in phosphorus metabolism in hypoparathyroidism). The analysis of the data on patients of the IPO's Endocrinology Department allowed a first characterization of the types of pathology they represent and of possible underlying physiopathological mechanisms. These observations are the starting point for future analyses. Examples of the relations found include: the distribution of patients into two large groups according to whether calcemia is determined by the circulating PTH concentrations or these are determined by calcemia; the seasonal distribution of blood vitamin D25 concentrations; and the negative correlation between these and blood PTH concentrations. It was also possible to extract the kinetics of the control exerted by PTH on calcitriol synthesis. The study of circulating PTH levels in the immediate postoperative period of parathyroidectomized patients allowed their metabolic degradation rates to be determined. The model made it possible to simulate the relations between Ca and PTH in blood, between Ca and the excreted fraction of the tubular load, and between Ca and P in blood, for normal or high values of Ca. Simulations of physiopathological situations were performed (in "virtual patients"): chronic infusions of calcium, PTH and calcitriol; changes in the behaviour of receptors. These simulations correspond to experiments that cannot be performed in humans. They are examples of the use of the model to explore possible physiopathological mechanisms through the observation of quantitative results inaccessible to intuition.
The model was useful in two phases of the work. First, during its synthesis it required a critically selective choice of information, its quantitative analysis and processing, a rigorous (analytical) statement of the functional relations between the controllers and the variables, and their integration into a global structure. Second, the simulation of experimental or clinical situations (data from the IPO's Endocrinology Department) in patients forced physiopathological reasoning, usually formulated on purely intuitive grounds, to be made explicit. This practice revealed behaviours that became obvious after the simulations: the reduced effect of PTH infusions (simulating primary hyperparathyroidism) as long as the corresponding secretion is not totally inhibited, the need for an increase in the secretory mass of the parathyroid in advanced renal insufficiency, etc. The synthesis and use of the model did not require advanced mathematical training; they were possible thanks to the availability of interactive software specifically designed for the simulation of dynamic systems, in which programs are written in English using the simple symbols of elementary algebra. The noble function of models of this nature is similar to that of the models used by physicists since the seventeenth century: to allow explanations of a general character, functioning as an intellectual tool for the manipulation of concepts and for carrying out "thought experiments" while respecting certain physical principles (conservation principles) that set the boundaries of reality. ------- ABSTRACT: Calcium blood levels are remarkably constant despite great variations in daily calcium intake, intestinal absorption and renal excretion.
The regulation of the calcium concentration in the blood is achieved by a complex system that includes several controller factors (mainly the serum levels of calcium, phosphorus, parathyroid hormone (PTH) and calcitriol, but also steroid hormones, ions such as magnesium and other hormonal factors) and several target organs (parathyroid glands, bone, kidney and intestine). The functional response to the controlling factors obeys a variety of kinetics. The precipitation of calcium salts is a simple phase transition in which organic molecules may provide nucleation centres or inhibit the process. The combination of a controller factor with its receptor, located in the cell membrane (for peptides or ions) or in the nucleus (for steroid hormones), is only the first step of a biochemical chain that introduces a huge amplification in the response. To this great variability of response we must add response times that vary from minutes to weeks. It is possible to "observe" (measure) with great accuracy in biological fluids (blood, urine, faeces, etc.) the most important factors intervening in calcium regulation (calcium, phosphorus, PTH and calcitriol). The response of the system to acute infusions of the controlling factors has also been studied. Using molecular biology techniques it has been possible to characterize some calcium homeostasis dysfunctions, and better physiopathological diagnoses are expected. With the increasingly new knowledge in this area we have a better capacity to diagnose, but it is harder to explain correctly the underlying metabolic mechanisms. The analysis or synthesis of complex systems is the noble activity of engineers that enables them to design bridges, dams, boats, airplanes or cars. With the availability of medium- and large-scale computers it became possible to use mathematical descriptions not only to design systems but also to explain flaws in their operation.
These mathematical descriptions are generally known as models, by analogy with the laws (equations) of physics that allow the mathematical description of physical processes. In practice it is not possible to find general solutions for the mathematical descriptions of complex systems, but (numeric) computations for specific situations can be obtained with digital computers. The introduction of mathematical models in biology, and particularly in medicine, is a recent event. METHODS: In this thesis a simplified model of calcium homeostasis was built that enables the computation of observable variables (concentrations of calcium, phosphorus, PTH and calcitriol) and allows the comparison between simulated and observed values. The choice of the model's components was made according to our clinical experience and to the published clinical and physiopathological data. The model has a modular design that allows future expansions with minor alterations to its structure. In its present form the model cannot be used for diagnosis. It is a tool designed to enlighten physiopathological processes. To exemplify its possible clinical application in the simulation of hypothetical situations and in the analysis of possible mechanisms responsible for hypo- or hypercalcemia, the model was used to simulate a certain number of published observations. An analysis of clinical and laboratory data from the Endocrinology Department of the Portuguese Cancer Institute (I.P.O.F.G.-C.R.O.L., S.A.) is also presented. CONCLUSIONS: In a population of 188 patients without an identifiable disease of the calcium metabolism at the Portuguese Cancer Institute, calcemia levels had a unimodal distribution with an average of 9.56 mg/dL and an S.E.M. of 0.41 mg/dL. This observation confirms that serum calcium is regulated.
Using published data in which calcium metabolism was disrupted by calcium, PTH or calcitriol infusions, biochemical and physiological studies of the action of controller factors on calcemia, and studies of the response of the target organs (parathyroid glands, intestine, bone, kidney), it was possible to build a lumped-parameter mathematical model of calcium homeostasis. The analytical expressions used were based on enzyme kinetics. The model is flexible and robust. It is stable when not disturbed and changes between steady states when disturbed. In its present form it provides simulations that closely reproduce a number of experimental clinical data. This does not mean that it can be used as a diagnostic tool for individual patients. The exhaustive utilisation of the model revealed the need for future expansions to include aspects of the calcium metabolism not covered in its present form (hypertrophy or adenomas of the parathyroid glands, bone structure changes, participation of other controller factors such as magnesium) or insufficiently described (phosphate metabolism in hypoparathyroidism). The analysis of the data collected from the I.P.O.'s Endocrinology Department allowed an initial characterization of the different pathologies represented and of their possible physiopathological mechanisms. These observations are a starting point for future analyses. Examples of the relations found include: the distribution of patients in two groups according to whether calcium is determined by PTH levels or PTH levels are determined by calcium concentration; the seasonal distribution of the serum concentrations of D25; and its negative correlation with PTH concentration. It was also possible to extract the kinetics of the control of calcitriol synthesis by PTH. The analysis of immediate post-surgical levels of PTH in parathyroidectomized patients allowed the determination of its metabolic clearance.
The model also allowed the simulation of the relations between Ca and PTH in blood, between serum Ca and the excreted fraction of the tubular load, and between Ca and P in blood, for normal and high values of calcium. Simulations were made of pathological situations (in "virtual patients"): chronic infusions of calcium, PTH and calcitriol; changes in the characteristics of receptors. These simulations are not possible in real persons. They are an example of the use of this model in exploring possible mechanisms of disease through the observation of quantitative results not accessible to simple intuition. This model was useful in two phases. Firstly, its construction required a careful choice of data, its quantitative analysis and processing, an analytical description of the relations between controller factors and variables, and their integration into a global structure. Secondly, the simulation of experimental or clinical data (from the I.P.O.'s Endocrinology Department) implied testing physiopathological explanations that previously were based on intuition. The construction and utilisation of the model did not demand an advanced mathematical preparation, since user-friendly interactive software was used. This software was specifically designed for the simulation of dynamic systems; the programs are written in English using elementary algebra symbols. The essential function of this type of model is identical to that of the models used by physicists since the seventeenth century, which describe natural processes quantitatively and serve as an intellectual tool for the manipulation of concepts and for the performance of "thought experiments" based on certain physical principles (conservation principles) that set the frontiers of reality. ------- RÉSUMÉ: Circulating calcium concentrations are constant even during variations in the intestinal absorption and renal elimination of this element.
The regulation of calcemia is a complex system comprising several controlling elements (calcemia, phosphatemia, the circulating concentrations of parathyroid hormone (PTH) and calcitriol, and others such as steroid hormones or ions such as magnesium) and several organs (parathyroid glands, bone, kidney and intestine). The responses of these organs are varied. In the simplest case, the crystallization of calcium salts corresponds to a phase change in which organic molecules participate, initiating, accelerating or inhibiting it. Generally, the combination of a controlling element with its membrane receptor (for peptides or ions) or intracellular receptor (for steroid hormones) is only the first step of a biochemical chain that introduces a large amplification of the response. To this variety of responses correspond large differences in response times, ranging from minutes to weeks. It is possible to "observe" (measure) in biological fluids (blood, urine, faeces, etc.) the most important elements of the calcemia regulation system (calcium, phosphate, PTH and calcitriol) and to administer them in acute experiments. This possibility is visible in the published literature in this field, which is growing steadily. The advent of molecular biology techniques has allowed numerous dysfunctions of calcemia regulation to be characterized, and an ever more rigorous physiopathological diagnosis of these dysfunctions is expected. Knowledge in this field keeps growing; we have an ever greater capacity to make diagnoses, and it is ever more difficult to interpret them. The analysis or synthesis of complex systems is the noblest activity of engineers, allowing them to design bridges, ships, airplanes or automobiles.
With medium- and large-scale computers they can use mathematical descriptions to design systems and to interpret possible failures in their operation. These mathematical descriptions are a sequence of operations performed on a computer according to a "computer program"; they have received the generic designation of models, by analogy with the equations of physics, which were deduced from a number of postulates and make it possible to represent physical processes as mathematical equations. Newton's equations are perhaps the best-known examples of models of physical systems. The introduction of mathematical models in biology, and in particular in medicine, is a recent event. In this work, a simplified model of calcium homeostasis was built to compute the observable variables (concentrations of calcium, phosphate, PTH and calcitriol) in order to compare computed and observed values. The choice of components was determined by our clinical experience and by the published physiopathological and clinical information. The model was built in a modular fashion, which allows its later expansion without major changes to the existing mathematical and computational description. In this form the model cannot be used as a diagnostic instrument. It is a tool for clarifying physiopathology. The model was used to simulate a number of published observations and to exemplify its possible clinical use in the simulation of hypotheses and of the physiopathology of hypo- or hypercalcemic situations. An analysis was made of data from the clinical records of patients seen at the Endocrinology Department of IPOFG-CROL, SA. In a population of 894 patients with various pathologies, calcemia values had a unimodal distribution with a mean of 9.56 mg/dL and a standard error of 0.41 mg/dL.
Ces observations suggèrent que la calcémie soit sujette de régulation. En utilisant des résultats de travaux publiés dans lesquels le métabolisme du calcium a été changé par des infusions de calcium, calcitriol ou PTH, des études biochimiques et physiologiques sur des mécanismes d’action des éléments contrôleurs de la calcémie et de l’étude du comportement des organes cible (parathyroïdes, intestin, rein, os), il a été possible de construire un modèle mathématique de paramètres concentrés du système de régulation de la calcémie. Les expressions analytiques utilisées ont été basées sur la cinétique enzymatique de façon à que les paramètres aient eu une signification physique ou biologique. Le modèle est stable quand il n’est pas perturbé et transit entre états stationnaires quand il est sujet a des perturbations. A ce moment il fait des simulations qui reproduisent de façon satisfaisant un nombre d’observations expérimentales. La construction du modèle permit l’addiction de nouvelles relations dans les cas ou il est insuffisant. L’utilisation exhaustive du modèle a permit expliciter des aspects du métabolisme du calcium qui y ne sont pas compris – l’hyperplasie ou la formation des adénomes des parathyroïdes, les altérations de la structure des os, la participation d’outres éléments régulateurs (magnésium), ou sont insuffisamment décrites – les altérations du métabolisme des phosphates dans l’hypoparathyroidism. L’analyse de l’information des malades du Service d’Endocrinologie a permit caractériser les pathologies représentées et leurs possibles mécanismes physiopathologiques. Ces observations sont le point de départ pour les analyses futures. Sont des exemples des relations trouvées: la distribution des malades par deux groupes: ceux dans lequel la calcémie est déterminée par la PTH ou ceux dans lesquels la PTH est déterminée par la calcémie; la distribution sazonale de la concentration de la vitamine D; la corrélation négative entre la vitamine D et la PTH. 
On a eu la possibilité de déduire la cinétique de control de la PTH sur la synthèse du calcitriol. L’étude des niveaux circulants de PTH sur des sujets parathyroidectomisées a permit déduire leur taux de dégradation métabolique. Le modèle a permit simuler les relations Ca/PTH dans le sang, Ca/fraction éliminée par le rein, Ca/P dans le sang pour des valeurs normales ou hautes de calcium. On a fait des simulations de situations physiopathologiques (dans “malades virtuelles”): Infusions chroniques de calcium, PTH ou calcitriol; altérations des récepteurs. Ces simulations ne peuvent pas être réalisées dans les humains. Sont des exemples d’utilisation du modèle dans l’exploration des possibles mécanismes de la physiopathologie en observant des résultats quantitatifs inaccessibles à l’intuition. Le modèle a été utile pendant deux étapes des travaux: La première, dans sa construction on a choisi l’information disponible, son analyse quantitative, l’explicitation rigoureuse (analytique) des relations fonctionnelles entre les contrôleurs et les variables et sa intégration dans une structure globale. La deuxième, la simulation de situations expérimentales ou cliniques (du Service d’Endocrinologie) a obligé d’expliciter des raisonnements physiopathologiques généralement formulés utilisant l’intuition. Cette pratique a montré des comportements – action réduite des infusions de PTH (jusqu’à l’inhibition totale de leur respective sécrétion), nécessité d’augmenter la masse sécréteuse de la parathyroïde dans les insuffisants rénales, etc. La synthèse et utilisation du modèle n’ont pas besoin d’une formation avancée en mathématique et sont possibles grâce à un programme interactif qui a été conçu pour la simulation des systèmes dynamiques dans lesquels le programme se construit en anglais en utilisant la symbolique élémentaire de l’algèbre. 
La fonction noble de ces modèles est semblable à celles des physiques du XVII siècle: Permettre établir explications générales en fonctionnant comme un outil intellectuel pour manipuler des concepts et pour la réalisation d’expérimentes pensées en respectant certains principes de la physique (principe de la conservation) qu’établissent les frontières de la réalité.
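The lumped-parameter, enzyme-kinetics style of model described above can be sketched in a few lines. Everything here is an illustrative assumption, not the thesis's actual equations: a toy two-state Ca/PTH loop with a sigmoidal secretion curve, first-order PTH degradation and a saturating (Michaelis-Menten-like) calcemic effect of PTH, integrated to a steady state with and without a chronic calcium infusion.

```python
# Hedged sketch (hypothetical parameters) of a lumped-parameter calcemia
# regulation loop: controller-target relations written as saturating
# kinetic terms, integrated by forward Euler until a steady state.

def simulate(ca0=9.5, pth0=40.0, infusion=0.0, t_end=2000.0, dt=0.1):
    """Integrate a toy two-state Ca/PTH loop; returns (calcemia, PTH)."""
    ca, pth = ca0, pth0
    for _ in range(int(t_end / dt)):
        # PTH secretion falls sigmoidally as calcemia rises (set-point ~9.5 mg/dL)
        secretion = 100.0 / (1.0 + (ca / 9.5) ** 8)
        dpth = secretion - 1.0 * pth              # first-order metabolic degradation
        # PTH raises calcemia with saturation; linear renal clearance of Ca
        dca = 0.5 * pth / (pth + 50.0) + infusion - 0.02 * ca
        pth += dpth * dt
        ca += dca * dt
    return ca, pth

baseline = simulate()                  # unperturbed steady state
loaded = simulate(infusion=0.05)      # chronic calcium infusion
```

Even this caricature reproduces the qualitative behavior the abstract mentions: the model settles to a steady state when unperturbed, and a chronic calcium load raises calcemia while suppressing PTH.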
Resumo:
The increasing integration of distributed generation and demand response in power systems operation and planning, namely at the lower voltage levels of distribution networks and in the competitive environment of electricity markets, leads to the concept of smart grids. In both traditional and smart grid operation, non-technical losses are a great economic concern that must be addressed. In this context, the ELECON project addresses the use of demand response contributions to the identification of non-technical losses. The present paper proposes a methodology to be used by Virtual Power Players (VPPs), entities able to aggregate distributed small-size resources, aiming to define the best electricity tariffs for several clusters of consumers. A case study based on real consumption data demonstrates the application of the proposed methodology.
Resumo:
This document presents a tool able to automatically gather data provided by real energy markets and to generate scenarios, capturing and improving market players' profiles and strategies through knowledge discovery in databases supported by artificial intelligence techniques, data mining algorithms and machine learning methods. It provides the means to generate scenarios with different dimensions and characteristics, ensuring the representation of real and adapted markets and their participating entities. The scenarios generator module enhances the MASCEM (Multi-Agent Simulator of Competitive Electricity Markets) simulator, making it a more effective tool for decision support. The implementation of the proposed module enables researchers and electricity market participants to analyze data, create realistic scenarios and experiment with them. On the other hand, applying knowledge discovery techniques to real data also allows the improvement of MASCEM agents' profiles and strategies, resulting in a better representation of real market players' behavior. This work aims to improve the comprehension of electricity markets and of the interactions among the involved entities through adequate multi-agent simulation.
Resumo:
Text based on the paper presented at the Conference "Autonomous systems: inter-relations of technical and societal issues", held at Monte de Caparica (Portugal), Universidade Nova de Lisboa, November 5th and 6th, 2009, and organized by IET-Research Centre on Enterprise and Work Innovation.
Resumo:
In competitive electricity markets, a profit-seeking load-serving entity (LSE) must optimally adjust the financial incentives offered to end users who buy electricity at regulated rates, so that they reduce consumption during periods of high market prices. In this model, the LSE manages demand response (DR) by offering financial incentives to retail customers in order to maximize its expected profit and reduce its exposure to market power. The stochastic formulation is implemented on a test system in which a number of loads are supplied through LSEs.
Resumo:
Following the deregulation experience of retail electricity markets in most countries, the majority of new entrants to the liberalized retail market were pure REPs (retail electricity providers). These entities were exposed to financial risks because of unexpected price variations, price spikes, volatile loads and the potential for market power exertion by GENCOs (generation companies). A REP can manage the market risks by employing DR (demand response) programs and using its generation and storage assets at the distribution network to serve the customers. The proposed model suggests how a REP with light physical assets, such as DG (distributed generation) units and ESS (energy storage systems), can survive in a competitive retail market. The paper discusses effective risk management strategies for REPs to deal with the uncertainties of the DAM (day-ahead market) and to hedge financial losses in the market. A two-stage stochastic programming problem is formulated that establishes the financial incentive-based DR programs and the optimal dispatch of the DG units and ESSs. The uncertainty of the forecasted day-ahead load demand and electricity price is also taken into account with a scenario-based approach. The principal advantage of this model for REPs is reducing the risk of financial losses in DAMs, and the main benefit for the whole system is market power mitigation, obtained by virtually increasing the price elasticity of demand and reducing the peak demand.
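The scenario-based, two-stage idea behind such formulations can be sketched simply: the retailer fixes a DR incentive here-and-now (first stage), then for each price/load scenario dispatches a small DG unit and buys the residual energy day-ahead (recourse). The numbers, the linear DR response and the threshold dispatch rule below are illustrative assumptions, not the paper's actual model, which is a full stochastic program.

```python
# Hedged sketch of a scenario-based two-stage decision: pick one DR
# incentive up front, evaluate recourse (DG dispatch + market purchase)
# per scenario, and minimize the expected serving cost.

def expected_cost(incentive, scenarios, dg_cap=20.0, dg_cost=60.0, resp=0.5):
    """Expected cost of serving the load over (prob, price, load) scenarios."""
    total = 0.0
    for prob, price, load in scenarios:
        curtail = min(resp * incentive, 0.2 * load)   # assumed linear DR response, capped
        dg = dg_cap if price > dg_cost else 0.0       # run DG only when the market is dearer
        market = max(load - curtail - dg, 0.0)        # residual bought in the DAM
        total += prob * (market * price + dg * dg_cost + incentive * curtail)
    return total

# (probability, day-ahead price, load) scenarios -- purely illustrative
scenarios = [(0.6, 45.0, 100.0), (0.3, 80.0, 120.0), (0.1, 200.0, 140.0)]
best = min(range(0, 41), key=lambda i: expected_cost(float(i), scenarios))
```

With a rare price-spike scenario in the mix, a nonzero incentive beats doing nothing, which is exactly the market-power-mitigation effect the abstract describes: paid demand response flattens exposure to the spike.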
Resumo:
This paper presents an electricity medium voltage (MV) customer characterization framework supported by knowledge discovery in databases (KDD). The main idea is to identify typical load profiles (TLP) of MV consumers and to develop a rule set for the automatic classification of new consumers. To achieve our goal a methodology is proposed consisting of several steps: data pre-processing; application of several clustering algorithms to segment the daily load profiles; selection of the best partition, corresponding to the best consumers' segmentation, based on the assessment of several clustering validity indices; and, finally, construction of a classification model based on the resulting clusters. To validate the proposed framework, a case study which includes a real database of MV consumers is performed.
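The cluster-then-classify pipeline outlined above can be sketched with a plain k-means pass over daily profiles followed by nearest-centroid assignment of a new consumer. The synthetic profiles, the fixed k, the Euclidean distance and the nearest-centroid rule are all illustrative assumptions; the paper itself compares several clustering algorithms, selects the partition via validity indices, and builds a rule-based classifier.

```python
# Hedged sketch of the TLP pipeline: cluster daily load profiles, keep the
# centroids as typical load profiles, classify a new consumer by proximity.

def dist(a, b):
    """Euclidean distance between two equal-length load profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def kmeans(profiles, k=2, iters=20):
    """Plain k-means with deterministic seeding from the first k profiles."""
    centroids = [list(p) for p in profiles[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in profiles:
            groups[min(range(k), key=lambda c: dist(p, centroids[c]))].append(p)
        for j, g in enumerate(groups):
            if g:  # recompute each centroid as the mean of its group
                centroids[j] = [sum(col) / len(g) for col in zip(*g)]
    return centroids

def classify(profile, centroids):
    """Assign a new consumer to the typical load profile it is closest to."""
    return min(range(len(centroids)), key=lambda j: dist(profile, centroids[j]))

# Two synthetic consumer types: daytime-peaking vs. flat consumption
day_peak = [[1, 5, 9, 4], [2, 6, 8, 3], [1, 6, 9, 4]]
flat = [[5, 5, 5, 5], [6, 5, 6, 5], [5, 6, 5, 6]]
centroids = kmeans(day_peak + flat, k=2)
```

A new profile such as `[1, 5, 9, 5]` then lands in the daytime-peaking cluster, which is the automatic-classification step the abstract's rule set serves.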
Resumo:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Resumo:
Dissertation presented at Faculty of Sciences and Technology of the New University of Lisbon to attain the Master degree in Electrical and Computer Science Engineering
Resumo:
Dissertation presented at Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa to obtain the Master degree in Electrical and Computer Engineering
Resumo:
Taking the medical imaging report as its object of study, this research sits between two fields of knowledge, Information Science and the Health Sciences (more specifically, medical imaging). The imaging report is a textual document, physical and/or digital, confidential and of legal standing, which contains medical information relating to one or more medical examinations of a patient and whose main purpose is to provide data/indicators for specialized medical diagnosis. This research aims to position the object of study, document its intellectual and material genesis, reflect on the entire information flow of the document, and show the importance of its production and standardization phases. Bibliographic surveys and reflective approaches to the importance and production of the medical report were carried out and then cross-referenced with case studies based on different institutions, where the professional experience of the physician (intellectual producer) and of the typist (material producer) sheds light on the subject and allowed us to understand information practices and flows. At the end of the study we analyze the role of the typist as the material producer of the medical report, highlighting their ethical and professional awareness and suggesting a set of good practices for the preparation and standardization of medical imaging reports.