967 results for Processing methods
Abstract:
Arguably, the most difficult task in text classification is choosing an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g., nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, while being much simpler in the sense that they require no text pre-processing or feature engineering.
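In concrete terms (a minimal sketch, not necessarily the authors' exact pipeline): one widely used universal, information-theoretic dissimilarity is the normalized compression distance (NCD), computable with any off-the-shelf compressor. The Python sketch below uses zlib as the compressor, maps raw texts (no pre-processing) to vectors of dissimilarities against a prototype set, and hands them to a nearest-neighbor classifier; the prototype choice and classifier settings are illustrative assumptions.

    import zlib
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def clen(s):
        # compressed length: a computable proxy for Kolmogorov complexity
        return len(zlib.compress(s.encode("utf-8")))

    def ncd(x, y):
        # normalized compression distance between two raw strings
        cx, cy = clen(x), clen(y)
        return (clen(x + y) - min(cx, cy)) / max(cx, cy)

    def dissimilarity_space(texts, prototypes):
        # dissimilarity-based representation: one NCD coordinate per prototype
        return np.array([[ncd(t, p) for p in prototypes] for t in texts])

    # usage (train_texts, train_labels, test_texts are raw strings/labels):
    # prototypes = train_texts[:50]                 # illustrative prototype set
    # X_train = dissimilarity_space(train_texts, prototypes)
    # X_test = dissimilarity_space(test_texts, prototypes)
    # clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, train_labels)
    # predictions = clf.predict(X_test)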
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It assumes that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ denotes the largest integer not greater than x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to ensure convergence (in probability) to the desired solution.

Aiming at lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory, consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smallest convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
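In symbols, the linear mixing model referred to above is

    \[
      \mathbf{r} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
      \qquad \alpha_i \ge 0, \quad \sum_{i=1}^{p} \alpha_i = 1,
    \]

where r is the observed spectral vector, M = [m_1, ..., m_p] collects the p endmember signatures, α holds the abundance fractions (non-negative and summing to one, which is what places the data in a simplex with the endmembers as vertices), and n is noise. The pure-pixel extraction loop described above can be sketched as follows (a schematic in Python/NumPy under the stated pure-pixel assumption; the subspace-identification step is omitted, and the random initial direction and variable names are illustrative, not the chapter's exact procedure):

    import numpy as np

    def extract_endmembers(R, p, seed=0):
        # R: (bands, pixels) matrix of spectral vectors; p: number of endmembers
        rng = np.random.default_rng(seed)
        bands = R.shape[0]
        E = np.zeros((bands, p))              # endmember signatures (columns)
        d = rng.standard_normal(bands)        # initial projection direction
        for k in range(p):
            proj = R.T @ d                    # project every pixel onto d
            j = int(np.argmax(np.abs(proj)))  # extreme of the projection
            E[:, k] = R[:, j]                 # new endmember signature
            # next direction: orthogonal to the span of the endmembers found
            Q, _ = np.linalg.qr(E[:, :k + 1])
            d = rng.standard_normal(bands)
            d -= Q @ (Q.T @ d)                # remove components in span(E)
        return E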
Abstract:
Hyperspectral unmixing methods aim at decomposing a hyperspectral image into a collection of endmember signatures, i.e., the radiance or reflectance of the materials present in the scene, and the corresponding abundance fractions at each pixel in the image. This paper introduces a new unmixing method termed dependent component analysis (DECA). The method is blind and fully automatic, and it overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometry-based approaches. DECA is based on the linear mixture model, i.e., each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the non-negativity and constant-sum constraints imposed by the acquisition process. The endmember signatures are inferred by a generalized expectation-maximization (GEM) type algorithm. The paper illustrates the effectiveness of DECA on synthetic and real hyperspectral images.
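In symbols (a sketch in standard notation; the number of mixture components K and the Dirichlet parameters are generic, not values from the paper): each observed spectral vector y is modeled as y = Mα, with the abundance vector α drawn from a mixture of Dirichlet densities,

    \[
      p(\boldsymbol{\alpha}) = \sum_{k=1}^{K} \epsilon_k \,
      \mathrm{Dir}(\boldsymbol{\alpha} \mid \boldsymbol{\theta}_k),
      \qquad \alpha_i \ge 0, \quad \sum_i \alpha_i = 1,
    \]

where the Dirichlet support is exactly the probability simplex, which is how the non-negativity and constant-sum constraints are enforced; the mixing weights ε_k, the Dirichlet parameters θ_k, and the mixing matrix M are the quantities estimated by the GEM-type algorithm.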
Abstract:
In this paper, a new parallel method for sparse spectral unmixing of remotely sensed hyperspectral data on commodity graphics processing units (GPUs) is presented. A semi-supervised approach is adopted, which relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. The method is based on sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL), which estimates the materials' abundance fractions. The parallel method operates in a pixel-by-pixel fashion, and its implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for simulated and real hyperspectral datasets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
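For reference (one common formulation; the exact constraint set varies among SUNSAL variants), the per-pixel problem being solved is a constrained sparse regression against the spectral library A:

    \[
      \min_{\mathbf{x}} \; \tfrac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{y}\|_2^2
      + \lambda \|\mathbf{x}\|_1
      \quad \text{subject to} \quad \mathbf{x} \ge \mathbf{0},
    \]

where y is the observed pixel spectrum and x is the abundance vector over the library members. Because each pixel is solved independently, the computation maps naturally onto the GPU, e.g., one thread (or thread block) per pixel, which is what the reported pixel-by-pixel parallelization exploits.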
Abstract:
Dissertation presented to obtain a Ph.D. degree in Engineering and Technology Sciences, Biotechnology at the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa
Abstract:
Dissertation presented to obtain the degree of Doctor in Biochemistry from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. This dissertation was prepared under the bilateral agreement between the Universidade Nova de Lisboa and the Universidade de Vigo.
Abstract:
The main motivation for this work was the development of robotic technology enabling efficient diving to, and ascent from, great depths. The work began with an analysis and study of the robotic systems available on the market, as well as of the methods used, identifying advantages and disadvantages with respect to the intended type of vehicle. This was followed by a design and mechanical study phase, aimed at developing a vehicle whose ballast is varied by pumping oil into an external reservoir, thereby changing the vehicle's total volume and hence its buoyancy. To operate AUVs at great depth it is convenient to perform the up/down trajectory efficiently, and ballast variation offers advantages in this respect; unlike gliders, however, the interest here lies in the ability to ascend and descend vertically. To control buoyancy while monitoring the vehicle's depth in real time, a central processing system was required to acquire the pressure-sensor data and communicate with the ballast-variation system, so as to achieve the desired vertical position control. From a technological point of view, the goal was to develop and evaluate volume-variation solutions intermediate between those of gliders (a few grams) and those of work-class ROVs (tens or hundreds of kilograms). Subsequently, a simulator reflecting the vehicle's descent behavior was developed in MATLAB (Simulink), allowing vehicle parameters to be changed and their practical effects analyzed, so that the real vehicle could be tuned. The simulation results include the limit velocities reached by the vehicle for different drag coefficients, as well as the behavior of the vehicle's ballast variation during vertical displacement. Finally, the vehicle's ability to hold a given depth was verified, and simulations run with parameters very close to those of the real trials were compared with the corresponding real trials.
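A minimal sketch of the kind of vertical-dynamics simulation described (in plain Python/NumPy rather than Simulink; the mass, drag, and geometry values are illustrative assumptions, not the thesis's parameters):

    import numpy as np

    # illustrative parameters (not from the thesis)
    m, g, rho = 50.0, 9.81, 1025.0   # mass [kg], gravity [m/s^2], seawater density [kg/m^3]
    Cd, A = 1.0, 0.12                # drag coefficient, frontal area [m^2]
    V0 = m / rho                     # displaced volume at neutral buoyancy [m^3]

    def simulate_descent(dV, t_end=600.0, dt=0.05):
        # dV: ballast-induced volume offset [m^3]; dV < 0 makes the vehicle sink
        z, v = 0.0, 0.0              # depth [m, positive down], vertical speed [m/s]
        depths = []
        for _ in np.arange(0.0, t_end, dt):
            buoyancy = rho * g * (V0 + dV)
            drag = 0.5 * rho * Cd * A * v * abs(v)
            a = (m * g - buoyancy - drag) / m   # net downward acceleration
            v += a * dt
            z = max(0.0, z + v * dt)
            depths.append(z)
        return np.array(depths)

    # the limit (terminal) speed follows from the balance weight = buoyancy + drag:
    # v_lim = sqrt(2 * abs(m*g - rho*g*(V0 + dV)) / (rho * Cd * A))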
Abstract:
Journal of Proteome Research (2006)5: 2720-2726
Abstract:
Forensic anthropology is a forensic science discipline concerned with the analysis of human skeletal remains for legal purposes. One of its most common applications is forensic identification, which consists in determining the biological profile (age, sex, ancestry, and stature) of an individual. This process is often hampered, however, when the body is in an advanced state of decomposition and only skeletal remains are available. In such cases, the medical fields commonly used in the identification of cadavers, such as pathology, must be set aside, and other techniques need to be applied. In this context, many anthropometric methods have been proposed to characterize a person through the skeleton. Most of the suggested procedures, however, are based on basic measuring equipment and do not take advantage of contemporary technology. Thus, in partnership with the Northern Delegation of the NMLCF, I. P., this thesis originated in the creation of a computational system based on Computed Tomography (CT) images of skeletal remains that, using open source tools, enables forensic identification. The work presented covers the information management, acquisition, processing, and visualization of CT images. In the course of this thesis, a database was developed to organize the information on each set of remains, and algorithms were implemented that allow a far broader feature extraction than that performed manually with classical measuring equipment. The final result of this study is a set of techniques that can be integrated into a computational forensic identification system, thereby creating an application with clear technological advantages.
Abstract:
Calcium blood levels are remarkably constant despite great variations in daily calcium intake, intestinal absorption and renal excretion. The regulation of the calcium concentration in the blood is achieved by a complex system that includes several controlling factors (mainly the serum levels of calcium, phosphorus, parathyroid hormone (PTH) and calcitriol, but also steroid hormones, ions such as magnesium, and other hormonal factors) and several target organs (parathyroid glands, bone, kidney and intestine). The functional response to the controlling factors obeys a variety of kinetics. The precipitation of calcium salts is a simple phase transition in which organic molecules may provide nucleation centres or inhibit the process. The combination of a controlling factor with its receptor, located in the cell membrane (for peptides or ions) or in the nucleus (for steroid hormones), is only the first step of a biochemical chain that introduces a huge amplification in the response. To this great variability of response we must add the response times, which vary from minutes to weeks. It is possible to "observe" (measure) with great accuracy in biological fluids (blood, urine, faeces, etc.) the most important factors intervening in calcium regulation (calcium, phosphorus, PTH and calcitriol). The response of the system to acute infusions of the controlling factors has also been studied. Using molecular biology techniques it has been possible to characterize some calcium homeostasis dysfunctions, and increasingly accurate physiopathological diagnoses are expected.
With the ever-increasing knowledge in this area we have a better capacity to diagnose, but it is harder to interpret the underlying metabolic mechanisms correctly. The analysis or synthesis of complex systems is the noble activity of engineers, enabling them to design bridges, dams, boats, airplanes or cars. With the availability of medium and large computers it became possible to use mathematical descriptions not only to design systems but also to explain flaws in their operation. These mathematical descriptions are generally known as models, by analogy with the laws (equations) of physics that allow the mathematical description of physical processes. In practice it is not possible to find general solutions for the mathematical descriptions of complex systems, but (numeric) computations for specific situations can be obtained with digital computers. The introduction of mathematical models in biology, and particularly in medicine, is a recent event. METHODS: In this thesis a simplified model of calcium homeostasis was built that enables the computation of observable variables (concentrations of calcium, phosphorus, PTH and calcitriol) and allows the comparison of simulated values with observed values. The choice of the model's components was made according to our clinical experience and to the published clinical and physiopathological data. The model has a modular design that allows future expansion with minor alterations to its structure. In its present form the model cannot be used for diagnosis; it is a tool designed to elucidate physiopathological processes. To exemplify its possible clinical application in the simulation of hypothetical situations and in the analysis of possible mechanisms responsible for hypo- or hypercalcemia, the model was used to simulate a number of published observations. An analysis of clinical and laboratory data from the Endocrinology Department of the Portuguese Cancer Institute (I.P.O.F.G.-C.R.O.L., S.A.) is also presented. CONCLUSIONS: In a population of 188 patients without an identifiable disease of calcium metabolism at the Portuguese Cancer Institute, calcemia had a unimodal distribution with an average of 9.56 mg/dL and an S.E.M. of 0.41 mg/dL. This observation confirms that serum calcium is regulated. Using published data in which calcium metabolism was disrupted by calcium, PTH or calcitriol infusions, together with biochemical and physiological studies of the action of the controlling factors on calcemia and of the response of the target organs (parathyroid glands, intestine, bone, kidney), it was possible to build a lumped-parameter mathematical model of calcium homeostasis. The analytical expressions used were based on enzyme kinetics. The model is flexible and robust: it is stable when not disturbed and moves between steady states when disturbed. In its present form it provides simulations that closely reproduce a number of experimental and clinical data. This does not mean that it can be used as a diagnostic tool for individual patients. Exhaustive use of the model revealed the need for future expansions to include aspects of calcium metabolism not covered in its present form (hypertrophy or adenomas of the parathyroid glands, changes in bone structure, the participation of other controlling factors such as magnesium) or insufficiently described (phosphate metabolism in hypoparathyroidism).
The analysis of the data collected from the I.P.O.'s Endocrinology Department allowed an initial characterization of the different pathologies represented and of their possible physiopathological mechanisms. These observations are a starting point for future analyses. Examples of the relations found are: the distribution of patients into two groups, according to whether calcemia is determined by PTH levels or PTH levels are determined by calcium concentration; the seasonal distribution of serum D25 concentrations; and the negative correlation of the latter with PTH concentration. It was also possible to extract the kinetics of the control of calcitriol synthesis by PTH. The analysis of immediate post-surgical PTH levels in parathyroidectomized patients allowed the determination of its metabolic clearance. The model also allowed the simulation of the relations between Ca and PTH in blood, between serum Ca and the excreted fraction of the tubular load, and between Ca and P in blood, for normal and high values of calcium. Simulations were made of pathological situations (in "virtual patients"): chronic infusions of calcium, PTH and calcitriol, and changes in the characteristics of receptors. These simulations are not possible in real persons; they are an example of the use of this model in exploring possible mechanisms of disease through the observation of quantitative results not accessible to simple intuition. The model was useful in two phases. First, its construction required a careful choice of data, their quantitative analysis and processing, an analytical description of the relations between controlling factors and variables, and their integration into a global structure. Second, the simulation of experimental or clinical data (from the I.P.O.'s Endocrinology Department) implied testing physiopathological explanations that were previously based on intuition. The construction and use of the model did not demand advanced mathematical preparation, since user-friendly interactive software, specifically designed for the simulation of dynamic systems, was used; the programs are written in English using elementary algebra symbols. The essential function of this type of model is identical to that of the models used by physicists since the XVII century: to describe natural processes quantitatively and to serve as an intellectual tool for the manipulation of concepts and the performance of "thought experiments" within certain physical principles (conservation principles) that set the frontiers of reality.
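As an illustration of the "analytical expressions based on enzyme kinetics with physically meaningful parameters" mentioned above: PTH secretion is commonly modeled as a sigmoidal (Hill-type) function of the serum calcium concentration, e.g.

    \[
      \mathrm{PTH}(\mathrm{Ca}) = P_{\min} +
      \frac{P_{\max}-P_{\min}}{1+(\mathrm{Ca}/K)^{h}},
    \]

where P_max and P_min are the maximal and basal secretion rates, K is the set point (the calcium concentration giving a half-maximal response), and h is a Hill coefficient controlling the steepness. This generic form is a sketch of the modeling style, not the thesis's actual equations.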
Abstract:
In this paper we address an order processing optimization problem known as the minimization of open stacks problem (MOSP). We present an integer programming model, based on the existence of a perfect elimination scheme in interval graphs, which finds an optimal sequence for the customers' orders.
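The objective can be made concrete with a small evaluator (a sketch of the problem, not of the paper's integer programming model): given a sequence of orders, each touching a set of customer stacks, a stack is open from the first to the last order that touches it, and the cost of the sequence is the maximum number of simultaneously open stacks.

    def max_open_stacks(sequence):
        # sequence: list of sets, each the customer stacks an order touches
        first, last = {}, {}
        for t, order in enumerate(sequence):
            for s in order:
                first.setdefault(s, t)
                last[s] = t
        peak = 0
        for t in range(len(sequence)):
            open_now = sum(1 for s in first if first[s] <= t <= last[s])
            peak = max(peak, open_now)
        return peak

    # example: max_open_stacks([{"a", "b"}, {"b"}, {"b", "c"}]) returns 2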
Abstract:
This paper addresses the characterization of medium voltage (MV) electric power consumers based on a data clustering approach. The aim is to identify typical load profiles by selecting the best partition of a power consumption database from a pool of partitions produced by several clustering algorithms. The best partition is selected using several cluster validity indices. These methods are intended to be used in a smart grid environment to extract useful knowledge about customers' behavior. The data-mining-based methodology presented throughout the paper consists of several steps, namely a data pre-processing phase, the application of clustering algorithms, and the evaluation of the quality of the resulting partitions. To validate our approach, a case study with a real database of 1,022 MV consumers was used.
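A minimal sketch of the partition-selection idea, using k-means over several cluster counts and a single validity index (the silhouette score) in place of the paper's pool of algorithms and indices; X stands for a matrix of, e.g., normalized daily load profiles:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    def best_partition(X, ks=range(2, 11), seed=0):
        # X: (consumers, periods) load-profile matrix
        best = None
        for k in ks:
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
            score = silhouette_score(X, labels)   # one possible validity index
            if best is None or score > best[0]:
                best = (score, k, labels)
        return best   # (index value, number of clusters, cluster assignment)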
Abstract:
Demand response has gained increasing importance in the context of competitive electricity markets and smart grid environments. In addition to the attention given to the development of business models for integrating demand response, several methods have been developed to evaluate consumers' performance after participation in a demand response event. The present paper uses those performance evaluation methods, namely customer baseline load calculation methods, to determine the expected consumption in each period of the consumer's historical data. When the actual consumption differs significantly from the estimated consumption, the consumer is identified as a potential source of non-technical losses. A case study demonstrates the application of the proposed method to real consumption data.
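A minimal sketch of one common customer-baseline-load family (an average over recent similar days) and of the resulting loss flag; the window length and the 20% threshold are illustrative assumptions, not the paper's values:

    import numpy as np

    def baseline(history, days=10):
        # history: (days, periods) consumption matrix for similar (e.g. working) days
        return history[-days:].mean(axis=0)       # expected consumption per period

    def flag_potential_loss(actual, expected, tol=0.20):
        # flag periods where metered consumption is well below the baseline
        return actual < (1.0 - tol) * expected

    # usage (hypothetical data):
    # cbl = baseline(past_days)                   # past_days: recent similar days
    # suspicious = flag_potential_loss(today, cbl)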
Abstract:
Demand response has gained increasing importance in the context of competitive electricity markets. The use of demand resources is also advantageous in the context of smart grid operation. In addition to the need for new business models for integrating demand response, adequate methods are necessary for an accurate evaluation of consumers' performance after participation in a demand response event. The present paper compares some of the existing baseline methods for consumer performance evaluation, contrasting the results obtained with these methods with one another and with a method proposed by the authors. A case study demonstrates the application of these methods to real consumption data from a consumer connected to a distribution network.
Abstract:
Electric power networks, namely distribution networks, have undergone several changes in recent years due to changes in power system operation towards the implementation of smart grids. Several approaches to the operation of resources, such as demand response, have been introduced, making use of the new capabilities of smart grids. In the initial stages of smart grid implementation, only limited amounts of data are generated, namely consumption data. The methodology proposed in the present paper uses demand response consumer performance evaluation methods to determine the expected consumption for a given consumer. Potential commercial losses are then identified using monthly historical consumption data. Real consumption data is used in the case study to demonstrate the application of the proposed method.