978 results for Correlation structure
Abstract:
Previous work by our group introduced a novel concept and sensor design for “off-the-person” ECG, for which evidence on how it compares against standard clinical-grade equipment has been largely missing. Our objectives with this work are to characterise the off-the-person approach in light of the current ECG systems landscape, and assess how the signals acquired using this simplified setup compare with clinical-grade recordings. Empirical tests have been performed with real-world data collected from a population of 38 control subjects, to analyze the correlation between both approaches. Results show off-the-person data to be correlated with clinical-grade data, demonstrating the viability of this approach to potentially extend preventive medicine practices by enabling the integration of ECG monitoring into multiple dimensions of people’s everyday lives. © 2015, IUPESM and Springer-Verlag Berlin Heidelberg.
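The abstract does not say which correlation measure was used; as a minimal sketch (assuming Pearson correlation on time-aligned segments, with synthetic signals standing in for the real recordings), the comparison could look like:

```python
import numpy as np

def ecg_correlation(off_person: np.ndarray, clinical: np.ndarray) -> float:
    """Pearson correlation between two time-aligned ECG segments."""
    # Remove each channel's mean so baseline offsets do not bias the score.
    a = off_person - off_person.mean()
    b = clinical - clinical.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic example: a "clinical" trace and a noisier, scaled copy of it.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
clinical = np.sin(2 * np.pi * 5 * t)
off_person = 0.8 * clinical + 0.05 * rng.standard_normal(t.size)
r = ecg_correlation(off_person, clinical)
```

In practice the hard part is the time alignment of the two acquisition systems, which this sketch assumes has already been done.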
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Mechanical Engineering, Specialization in Design and Production
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
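The linear mixing model described above can be sketched numerically as follows; the dimensions and signatures are invented for illustration, not taken from any real scene:

```python
import numpy as np

# Linear mixing: each observed pixel spectrum is a convex combination of
# endmember signatures plus additive noise (all values illustrative).
rng = np.random.default_rng(1)
L, p = 50, 3                        # spectral bands, number of endmembers
M = rng.uniform(0.1, 0.9, (L, p))   # columns = endmember signatures
a = np.array([0.5, 0.3, 0.2])       # abundance fractions: nonnegative, sum to 1
x = M @ a + 0.001 * rng.standard_normal(L)  # observed mixed-pixel spectrum
```

The nonnegativity and sum-to-one constraints on the abundance vector are exactly what makes the noiseless data lie in a simplex whose vertices are the endmembers.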
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
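One of the solutions cited above, constrained least squares, can be sketched for the sum-to-one constraint alone (adding nonnegativity would require an iterative solver). The closed form below follows from a standard Lagrange-multiplier argument and is an illustration, not the specific method of Ref. [26]:

```python
import numpy as np

def scls(M: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Sum-to-one constrained least-squares abundance estimate:
    minimize ||M a - x||^2 subject to 1^T a = 1."""
    G = M.T @ M
    ones = np.ones(M.shape[1])
    a_ls = np.linalg.solve(G, M.T @ x)   # unconstrained LS solution
    z = np.linalg.solve(G, ones)
    # Lagrange correction that enforces the sum-to-one constraint.
    return a_ls - z * ((ones @ a_ls - 1.0) / (ones @ z))

rng = np.random.default_rng(2)
M = rng.uniform(0.1, 0.9, (50, 3))       # known endmember signatures
a_true = np.array([0.6, 0.25, 0.15])
x = M @ a_true                           # noiseless mixed pixel
a_hat = scls(M, x)
```

On noiseless data the estimate recovers the true abundances exactly, since the unconstrained minimizer already satisfies the constraint.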
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
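The skewer-based PPI scoring described above can be sketched in a few lines; this is a bare-bones illustration (no MNF preprocessing, synthetic data, all parameters invented):

```python
import numpy as np

def ppi_scores(X: np.ndarray, n_skewers: int = 500, seed: int = 0) -> np.ndarray:
    """Pixel Purity Index sketch: count how often each pixel is an extreme
    of the projection onto random directions ("skewers").
    X has shape (pixels, bands)."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(X.shape[0], dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(X.shape[1])
        proj = X @ skewer
        scores[np.argmax(proj)] += 1   # extremes of this skewer direction
        scores[np.argmin(proj)] += 1
    return scores

# Synthetic scene: 3 pure endmember pixels followed by strictly mixed pixels.
rng = np.random.default_rng(1)
E = rng.uniform(0.1, 0.9, (3, 20))             # pure signatures
A = rng.dirichlet([5.0, 5.0, 5.0], size=200)   # interior abundance vectors
X = np.vstack([E, A @ E])
scores = ppi_scores(X)
```

Since every mixed pixel is a strict convex combination of the pure ones, the extremes of each projection fall on the pure pixels, which therefore accumulate the highest scores.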
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
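The iterative projection step can be sketched as follows. This is a simplified illustration of the idea only, not the full VCA algorithm (it omits the signal-subspace estimation and the affine-set details), and all data are synthetic:

```python
import numpy as np

def extract_endmembers(X: np.ndarray, p: int, seed: int = 0) -> list:
    """VCA-style sketch: repeatedly project the data onto a direction
    orthogonal to the endmembers already found and take the extreme pixel.
    X has shape (pixels, bands); returns p row indices."""
    rng = np.random.default_rng(seed)
    indices = []
    for _ in range(p):
        d = rng.standard_normal(X.shape[1])
        if indices:
            # Remove the component of d lying in the span of the
            # endmembers found so far.
            Q, _ = np.linalg.qr(X[indices].T)
            d = d - Q @ (Q.T @ d)
        k = int(np.argmax(np.abs(X @ d)))   # extreme of the projection
        indices.append(k)
    return indices

# Synthetic scene: 3 pure pixels followed by strictly mixed pixels.
rng = np.random.default_rng(4)
E = rng.uniform(0.1, 0.9, (3, 10))
A = rng.dirichlet([5.0, 5.0, 5.0], size=100)
X = np.vstack([E, A @ E])
found = extract_endmembers(X, 3)
```

Because already-found endmembers project to zero along the orthogonal direction, each iteration picks a new vertex of the data simplex.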
Abstract:
A correlation and predictive scheme for the viscosity and self-diffusivity of liquid dialkyl adipates is presented. The scheme is based on the kinetic theory for dense hard-sphere fluids, applied to the van der Waals model of a liquid to predict the transport properties. A "universal" curve for a dimensionless viscosity of dialkyl adipates was obtained using recently published experimental viscosity and density data of compressed liquid dimethyl (DMA), dipropyl (DPA), and dibutyl (DBA) adipates. The experimental data are described by the correlation scheme with a root-mean-square deviation of +/- 0.34 %. The parameters describing the temperature dependence of the characteristic volume, V-0, and the roughness parameter, R-eta, for each adipate are well correlated with one single molecular parameter. Recently published experimental self-diffusion coefficients of the same set of liquid dialkyl adipates at atmospheric pressure were correlated using the characteristic volumes obtained from the viscosity data. The roughness factors, R-D, are well correlated with the same single molecular parameter found for viscosity. The root-mean-square deviation of the data from the correlation is less than 1.07 %. Tests are presented in order to assess the capability of the correlation scheme to estimate the viscosity of compressed liquid diethyl adipate (DEA) in a range of temperatures and pressures by comparison with literature data and of its self-diffusivity at atmospheric pressure in a range of temperatures. It is noteworthy that no data for DEA were used to build the correlation scheme. The deviations encountered between predicted and experimental data for the viscosity and self-diffusivity do not exceed 2.0 % and 2.2 %, respectively, which are commensurate with the estimated experimental measurement uncertainty, in both cases.
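The abstract quotes root-mean-square deviations of the data from the correlation (±0.34 %, 1.07 %, 2.0 %, 2.2 %); a generic sketch of that statistic follows, with purely illustrative numbers rather than the adipate data:

```python
import numpy as np

def rms_percent_deviation(predicted, experimental) -> float:
    """Root-mean-square relative deviation, in percent, of correlated
    (or predicted) values from experimental ones."""
    pred = np.asarray(predicted, dtype=float)
    exp_ = np.asarray(experimental, dtype=float)
    rel = (pred - exp_) / exp_
    return float(100.0 * np.sqrt(np.mean(rel ** 2)))

# Illustrative viscosity values only (mPa*s), not the published DMA/DPA/DBA data.
eta_exp = np.array([1.52, 1.31, 1.14, 1.00])
eta_corr = np.array([1.53, 1.30, 1.15, 1.00])
dev = rms_percent_deviation(eta_corr, eta_exp)
```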
Abstract:
Binary operations on commutative Jordan algebras (CJA) can be used to study interactions between sets of factors belonging to a pair of models in which one nests the other. Note that, through these binary operations, two CJA yield another CJA. Thus, when we nest the treatments of one model inside each treatment of another model, we can study the interactions between the sets of factors of the first and the second models.
Abstract:
In the present study we report the results of an analysis, based on serotyping, multilocus enzyme electrophoresis (MEE), and ribotyping, of N. meningitidis serogroup C strains isolated from patients with meningococcal disease (MD) in the Rio Grande do Sul (RS) and Santa Catarina (SC) States, Brazil, after the Center of Epidemiology Control of the Ministry of Health detected an increase in MD cases due to this serogroup in the last two years (1992-1993). We have demonstrated that the MD due to N. meningitidis serogroup C strains in the RS and SC States over the last 4 years was caused mainly by one clone of strains (ET 40), with isolates indistinguishable by serogroup, serotype, subtype and even by ribotyping. A small number of cases that were not due to ET 40 strains represent closely related clones that are probably new lineages generated from the ET 40 clone, referred to as the ET 11A complex. We have also analyzed N. meningitidis serogroup C strains isolated in greater São Paulo in 1976, as representative of the first post-epidemic year in that region. The ribotyping method, as well as MEE, could provide useful information about the clonal characteristics of those isolates and also of the strains isolated in south Brazil. By ribotyping, sulfonamide sensitivity, and MEE results, the strains from 1976 are more similar to the present endemic strains than to the epidemic ones. In conclusion, serotyping with monoclonal antibodies (C:2b:P1.3), MEE (ET 11 and ET 11A complex), and ribotyping using the ClaI restriction enzyme (Rb2) were useful to characterize these epidemic strains of N. meningitidis related to the increased incidence of MD in different states of south Brazil. It is most probable that these N. meningitidis serogroup C strains have little or no genetic correlation with the 1971-1975 epidemic serogroup C strains. The genetic similarity of the members of the ET 11 and ET 11A complex was confirmed by ribotyping using three restriction endonucleases.
Abstract:
As the wireless cellular market reaches competitive levels never seen before, network operators need to keep Quality of Service (QoS) a main priority if they wish to attract new subscribers while keeping existing customers satisfied. Speech quality as perceived by the end user is one major example of a characteristic in constant need of maintenance and improvement, and it is in this topic that this Master's Thesis project fits. It makes use of an intrusive method of speech quality evaluation as a means to further study and characterize the performance of speech codecs in second-generation (2G) and third-generation (3G) technologies, seeking correlations between codecs with similar bit rates and exploring certain transmission parameters which may aid in the assessment of speech quality. Due to some limitations of the audio analyzer equipment that was to be employed, a different system for recording the test samples was sought. Although the newly designed system is not standard, after extensive testing and optimization of the system's parameters the final results were found reliable and satisfactory. Tests include a set of high and low bit rate codecs for both 2G and 3G, whose values were compared and analysed, leading to the outcome that 3G speech codecs perform better than 2G codecs under approximately the same conditions, reinforcing the idea that 3G is without doubt the best choice if the customer looks for the best possible listening speech quality. The transmission parameters chosen for the experiment, the Receiver Quality (RxQual) and the Received Energy per Chip to Power Density Ratio (Ec/N0), were subjected to speech quality correlation tests. The final RxQual results were compared to those of prior studies by different researchers and are considered of important relevance, leading to the confirmation of RxQual as a reliable indicator of speech quality.
As for Ec/N0, it is not possible to establish it as a speech quality indicator; however, it shows clear thresholds below which the MOS values decrease significantly. The studied transmission parameters can therefore be used not only for network management purposes but also to give the communications engineer (or technician) an idea of the expected end-to-end speech quality. With the conclusion of the work, new ideas for future studies come to mind. Considering that fourth-generation (4G) cellular technologies are now beginning to take an important place in the global market, as the first all-IP network structure, it seems of great relevance that 4G speech quality should be evaluated, comparing it to 3G not only in narrowband but also in wideband scenarios, with the most recent standard objective method of speech quality assessment, POLQA. Also, the new data found in the Ec/N0 tests justify further research aimed at validating the assumptions made in this work.
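The abstract does not state which correlation statistic was used for the RxQual and Ec/N0 tests; as an illustration, a rank correlation between hypothetical paired RxQual and MOS measurements (all numbers invented) could be computed as:

```python
import numpy as np

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation (no tie handling, for the sketch):
    the Pearson correlation of the two rank vectors."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.dot(rx, ry) / (np.linalg.norm(rx) * np.linalg.norm(ry)))

# Hypothetical paired measurements: lower RxQual (a better radio link)
# tends to go with a higher MOS score.
rxqual = np.array([0, 1, 2, 3, 4, 5, 6, 7])
mos = np.array([4.2, 4.1, 3.9, 3.6, 3.1, 2.5, 1.9, 1.4])
rho = spearman(rxqual, mos)
```

A rank correlation is a natural choice here because RxQual is an ordinal scale (0-7) rather than a linear measure of link quality.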
Abstract:
RESUMO: Circulating calcium concentrations are remarkably constant despite the daily variations in the intestinal absorption and renal elimination of this element. The regulation of calcemia is a complex system comprising several controlling factors (calcemia, phosphatemia, the circulating concentrations of parathyroid hormone (PTH) and calcitriol, besides many other factors such as steroid hormones in general, other ions such as magnesium, and other hormonal factors) and several target organs (parathyroid glands, bone, kidney and intestine). The responses of the target organs are also very varied. In the simplest case, the crystallization of calcium salts corresponds to a phase change in which organic molecules participate, initiating, accelerating or inhibiting it. In general, the combination of a controlling factor with its membrane receptor (for polypeptides or ions) or intracellular receptor (steroid hormones) is only the first step of a biochemical chain that introduces an enormous amplification into the response. To this variety of response mechanisms correspond large differences in response times, which may range from minutes to weeks. It is nowadays possible to "observe" (measure) with appreciable accuracy in biological fluids (blood, urine, faeces, etc.) the most important factors of the calcemia regulation system (calcium, phosphorus, parathyroid hormone and calcitriol), as well as to administer these factors in acute experiments. This possibility is reflected in the growing literature in this field. The advent of molecular biology techniques has allowed the molecular characterization of some dysfunctions of calcium homeostasis, and an increasingly rigorous pathophysiological diagnosis of these dysfunctions is to be expected. With the ever-growing knowledge in this area we have an ever greater capacity to make diagnoses, and it is increasingly difficult to interpret the corresponding metabolic pictures rigorously.
The analysis or synthesis of complex systems is the noblest activity of engineers, allowing them to design bridges, dams, ships, airplanes or automobiles. With the appearance of medium and large computers it became possible to use mathematical descriptions not only to design systems but also to interpret eventual failures in their operation. These mathematical descriptions consist of a sequence of operations performed on a computer according to a "computer program"; they received the generic designation of models, by analogy with the famous laws (equations) of physics that were deduced from a certain number of postulates and that allow physical processes to be represented mathematically. Newton's laws are perhaps the most famous examples of "models" of physical systems. The introduction of mathematical models in biology, and particularly in medicine, is a recent event. METHODS In the work presented here a simplified model of calcium homeostasis was built, intended for the computation of observable variables (concentrations of calcium, phosphorus, PTH and calcitriol) so that computed values can be compared with observed values. The choice of the model's components was determined by our clinical experience and by the published pathophysiological and clinical information. Care was taken to build the model in a modular fashion, so that it can be expanded without major transformations of the existing mathematical (and computational) description. In its current stage the model cannot be used as a diagnostic instrument. It is rather a tool intended to clarify pathophysiological mechanisms "in principle". The model was used to simulate a number of published observations and to exemplify its possible clinical application in the simulation of hypothetical situations and in the analysis of possible pathophysiological mechanisms responsible for hypo- or hypercalcemia.
At the same time, an analysis was made of the accumulated data on patients seen at the Endocrinology Department of the Instituto Português de Oncologia de Francisco Gentil – Centro Regional Oncológico de Lisboa, S.A. CONCLUSIONS In a population of 894 patients with varied pathologies from the Instituto Português de Oncologia de Lisboa, calcemia values had a unimodal normal distribution with a mean of 9.56 mg/dL and a standard error of 0.41 mg/dL. These observations suggest that calcemia is subject to regulation. From published results in which calcium metabolism was disturbed by infusions of calcium, calcitriol or PTH, from biochemical and physiological studies on the mechanisms of action of the factors controlling calcemia, and from the study of the behaviour of the target organs (parathyroids, intestine, bone and kidney), it was possible to build a lumped-parameter mathematical model of the calcemia regulation system. The analytical expressions used were based on enzyme kinetics, so that their parameters would have a simple physical or physiological meaning. The model showed appreciable robustness and flexibility. It is stable when undisturbed and moves between steady states when disturbed. In its current form it generates simulations that satisfactorily reproduce an appreciable number of experimental data collected from patients. This does not mean that it can be used as a diagnostic instrument applicable to individual patients. The design of the model allows the later addition of new relations whenever situations arise for which it proves insufficient.
Exhaustive use of the model made it possible to make explicit aspects of calcium metabolism that either are not contained in its current formulation (the appearance of parathyroid hypertrophy or adenomas, the changes in bone structure, and the participation of other controlling factors such as magnesium) or are insufficiently described (changes in phosphorus metabolism in hypoparathyroidism). The analysis of the data on the patients of the Endocrinology Department of the IPO allowed the beginning of the characterization of the types of pathology they represent and of the possible underlying pathophysiological mechanisms. These observations are the starting point for future analyses. Examples of the relations found are: the distribution of patients into two large groups according to whether calcemia is determined by the circulating PTH concentrations or these are determined by calcemia; the seasonal distribution of vitamin D25 concentrations in blood; the negative correlation between these and blood PTH concentrations. It was also possible to extract the kinetics of the control exerted by PTH over calcitriol synthesis. The study of circulating PTH levels in the immediate postoperative period of parathyroidectomized patients made it possible to determine their metabolic degradation rates. The model allowed the simulation of the relations Ca/PTH in blood, Ca/excreted fraction of the tubular load, and Ca/P in blood for normal or high values of Ca. Simulations were made of pathophysiological situations (in "virtual patients"): chronic infusions of calcium, PTH and calcitriol; changes in receptor behaviour. These simulations correspond to experiments that cannot be carried out in humans. They are examples of the use of the model in exploring possible pathophysiological mechanisms through the observation of quantitative results inaccessible to intuition.
The model was useful in two phases of the work. First, during its synthesis it required a critically selective choice of information, its quantitative analysis and processing, a rigorous (analytical) statement of the functional relations between the controllers and the variables, and their integration into a global structure. Second, the simulation of experimental or clinical situations (data from the Endocrinology Department of the IPO) in patients forced us to make explicit pathophysiological reasoning usually formulated on purely intuitive grounds. This practice revealed behaviours that became obvious after the simulations: the reduced effect of PTH infusions (simulating primary hyperparathyroidism) as long as there is no total inhibition of PTH secretion, the need for an increase in the secretory mass of the parathyroids in advanced renal failure, etc. The synthesis and use of the model did not require advanced mathematical preparation and were possible thanks to the availability of interactive software specifically designed for the simulation of dynamic systems, in which the programs are written in English using the simple symbols of elementary algebra. The noble function of models of this nature is similar to that of the models used by physicists since the seventeenth century: to allow explanations of a general character, functioning as an intellectual tool for the manipulation of concepts and for carrying out "thought experiments" while respecting certain physical principles (conservation principles) that establish the boundaries of reality. -------ABSTRACT: Calcium blood levels are remarkably constant despite great variations in daily calcium intake, intestinal absorption and renal excretion.
The regulation of the calcium concentration in the blood is achieved by a complex system that includes several controlling factors (mainly the serum levels of calcium, phosphorus, parathyroid hormone (PTH) and calcitriol, but also steroid hormones, ions such as magnesium, and other hormonal factors) and several target organs (parathyroid glands, bone, kidney and intestine). The functional response to the controlling factors obeys a variety of kinetics. The precipitation of calcium salts is a simple phase transition in which organic molecules may provide nucleation centres or inhibit the process. The combination of a controlling factor with its receptor, located in the cell membrane (for peptides or ions) or in the nucleus (for steroid hormones), is only the first step of a biochemical chain that introduces a huge amplification in the response. To this great variability of response we must add response times that vary from minutes to weeks. It is possible to "observe" (measure) with great accuracy in biological fluids (blood, urine, faeces, etc.) the most important factors intervening in calcium regulation (calcium, phosphorus, PTH and calcitriol). The response of the system to acute infusions of the controlling factors has also been studied. Using molecular biology techniques it has been possible to characterize some calcium homeostasis dysfunctions, and increasingly accurate pathophysiological diagnoses are expected. With ever-increasing knowledge in this area we have a better capacity to diagnose, but it is harder to explain correctly the underlying metabolic mechanisms. The analysis or synthesis of complex systems is the noble activity of engineers that enables them to design bridges, dams, boats, airplanes or cars. With the availability of medium and large computers it became possible to use mathematical descriptions not only to design systems but also to explain flaws in their operation.
These mathematical descriptions are generally known as models, by analogy with the laws (equations) of physics that allow the mathematical description of physical processes. In practice it is not possible to find general solutions for the mathematical descriptions of complex systems, but (numeric) computations for specific situations can be obtained with digital computers. The introduction of mathematical models in biology, and particularly in medicine, is a recent event. METHODS In this thesis a simplified model of calcium homeostasis was built that enables the computation of observable variables (concentrations of calcium, phosphorus, PTH and calcitriol) and allows the comparison between simulated and observed values. The choice of the model's components was made according to our clinical experience and to the published clinical and pathophysiological data. The model has a modular design that allows future expansions with minor alterations to its structure. In its present form the model cannot be used for diagnosis. It is a tool designed to enlighten pathophysiological processes. To exemplify its possible clinical application in the simulation of hypothetical situations and in the analysis of possible mechanisms responsible for hypo- or hypercalcemia, the model was used to simulate a number of published observations. An analysis of clinical and laboratory data from the Endocrinology Department of the Portuguese Cancer Institute (I.P.O.F.G.-C.R.O.L., S.A.) is also presented. CONCLUSIONS In a population of 188 patients without an identifiable disease of calcium metabolism at the Portuguese Cancer Institute, the calcemia levels had a unimodal distribution with an average of 9.56 mg/dL and an S.E.M. of 0.41 mg/dL. This observation confirms that serum calcium is regulated.
Using published data in which calcium metabolism was disrupted by calcium, PTH or calcitriol infusions, biochemical and physiological studies of the action of the controlling factors on calcemia, and studies of the response of the target organs (parathyroid glands, intestine, bone, kidney), it was possible to build a lumped-parameter mathematical model of calcium homeostasis. The analytical expressions used were based on enzyme kinetics. The model is flexible and robust. It is stable when not disturbed and changes between steady states when disturbed. In its present form it provides simulations that closely reproduce a number of experimental clinical data. This does not mean that it can be used as a diagnostic tool for individual patients. The exhaustive use of the model revealed the need for future expansions to include aspects of calcium metabolism not included in its present form (hypertrophy or adenomas of the parathyroid glands, bone structure changes, and the participation of other controlling factors such as magnesium) or insufficiently described (phosphate metabolism in hypoparathyroidism). The analysis of the data collected from the I.P.O.'s Endocrinology Department allowed an initial characterization of the different pathologies represented and of their possible pathophysiological mechanisms. These observations are a starting point for future analysis. Examples of the relations found include: the distribution of patients into two groups according to whether calcemia is determined by the PTH levels or the PTH levels by the calcium concentration; the seasonal distribution of the serum concentrations of vitamin D25; its negative correlation with the PTH concentration. It was also possible to extract the kinetics of the control of calcitriol synthesis by PTH. The analysis of immediate post-surgical levels of PTH in parathyroidectomized patients allowed the determination of its metabolic clearance.
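The enzyme-kinetics basis of the model's analytical expressions can be illustrated with the classic inverse-sigmoid relation between serum calcium and PTH secretion. The function and every parameter value below are illustrative assumptions for the sketch, not the thesis's actual equations:

```python
import numpy as np

def pth_secretion(ca, a=100.0, b=5.0, m=1.2, s=8.0):
    """Illustrative inverse sigmoid: PTH secretion falls as serum calcium
    rises. All parameter values are invented (a, b: maximal and minimal
    secretion; s: set point in mg/dL; m: steepness at the set point)."""
    ca = np.asarray(ca, dtype=float)
    return b + (a - b) / (1.0 + np.exp(m * (ca - s)))

ca_grid = np.linspace(6.0, 12.0, 61)   # mg/dL, illustrative range
pth = pth_secretion(ca_grid)
```

A curve of this shape gives the negative feedback needed for stability: any rise in calcemia suppresses PTH, which in turn lowers calcium mobilization.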
The model also allowed the simulation of the relations between Ca/PTH in blood, serum Ca/Fraction of tubular load excreted and Ca/P in blood for normal and high values of calcium. Simulations were made of pathological situations (in “virtual patients”): chronic infusions of calcium, PTH and calcitriol; changes in the characteristics of receptors. These simulations are not possible in real persons. They are an example of the use of this model in exploring possible mechanisms of disease through the observation of quantitative results not accessible to simple intuition. This model was useful in two phases: Firstly, its construction required a careful choice of data, its quantitative analysis and processing, an analytical description of the relations between controller factors and variables and their integration in a global structure. Secondly, the simulation of experimental or clinical (I.P.O.’s Endocrinology Department) data implied testing physiopathological explanations that previously were based on intuition. The construction and utilisation of the model didn’t demand an advanced mathematical preparation since user-friendly interactive software was used. This software was specifically designed for the simulation of dynamic systems. The programs are written in English using elementary algebra symbols. The essential function of this type of models is identical to that of those used by physicists since the XVII century which describe quantitatively natural processes and are an intellectual tool for the manipulation of concepts and the performance of “thought experiments” based in certain physical principles (conservation principles) that are the frontiers of reality.------------------RESUMÉE: Les concentrations circulantes de calcium sont constantes même pendant des variations de l’absorption intestinale et de l’élimination rénale de cet élément. 
The regulation of calcemia is a complex system that comprises several controller factors (calcemia, phosphatemia, the circulating concentrations of parathyroid hormone (PTH) and calcitriol, and others such as steroid hormones or ions like magnesium) and several organs (parathyroid glands, bone, kidney and intestine). The responses of these organs are varied. In the simplest case, the crystallisation of calcium salts corresponds to a phase change in which organic molecules participate, initiating, accelerating or inhibiting it. Generally, the binding of a controller factor to its membrane receptor (for peptides or ions) or intracellular receptor (for steroid hormones) is only the first step of a biochemical cascade that greatly amplifies the response. This variety of responses is matched by large differences in response times, ranging from minutes to weeks. The most important elements of the calcemia regulation system (calcium, phosphate, PTH and calcitriol) can be observed (measured) in biological fluids (blood, urine, faeces, etc.) and administered in acute experiments. This possibility is visible in the continuously growing published literature in this field. The advent of molecular biology techniques has made it possible to characterise numerous dysfunctions of calcemia regulation, and ever more rigorous physiopathological diagnoses of these dysfunctions are expected. Knowledge in this field keeps expanding, diagnostic capabilities keep increasing, and the results become ever harder to interpret. The analysis and synthesis of complex systems is the noblest activity of engineers, allowing them to design bridges, ships, airplanes or automobiles.
With medium or large computers, mathematical descriptions can be used to design such systems and to interpret possible operating faults. These mathematical descriptions are sequences of operations performed by a computer according to a program; they have received the generic designation of models, by analogy with the equations of physics, which were deduced from a number of postulates and made it possible to represent physical processes as mathematical equations. Newton's famous equations are perhaps the best-known examples for physical systems. The introduction of mathematical models in biology, and in medicine in particular, is a recent event. In this work, a simplified model of calcium homeostasis was built to compute the observable variables (concentrations of calcium, phosphate, PTH and calcitriol) for comparison with observations. The choice of components was determined by our clinical experience and by the published physiopathological and clinical information. The model was built in a modular fashion, which allows later expansion without major changes to the existing mathematical and computational description. In its present form the model cannot be used as a diagnostic instrument; it is a tool for clarifying physiopathology. The model was used to simulate a number of published observations and to exemplify its possible clinical use in simulating hypotheses about the physiopathology of hypo- and hypercalcemic situations. The clinical records of the patients observed at the Endocrinology Department of IPOFG-CROL, SA were analysed. In a population of 894 patients with different pathologies, calcemia values showed a unimodal distribution with a mean of 9.56 mg/dL and a standard error of 0.41 mg/dL.
These observations suggest that calcemia is subject to regulation. Using results of published work in which calcium metabolism was altered by infusions of calcium, calcitriol or PTH, biochemical and physiological studies of the mechanisms of action of the controller factors of calcemia, and the study of the behaviour of the target organs (parathyroid glands, intestine, kidney, bone), it was possible to build a lumped-parameter mathematical model of the calcemia regulation system. The analytical expressions used were based on enzymatic kinetics, so that the parameters would have a physical or biological meaning. The model is stable when undisturbed and transitions between steady states when perturbed. At present it produces simulations that satisfactorily reproduce a number of experimental observations. The way the model was built allows new relations to be added where it proves insufficient. Its exhaustive use made explicit the aspects of calcium metabolism that it does not cover (hyperplasia or adenoma formation in the parathyroid glands, alterations of bone structure, the participation of other regulating factors such as magnesium) or describes insufficiently (the alterations of phosphate metabolism in hypoparathyroidism). The analysis of the records of the Endocrinology Department's patients allowed the characterisation of the pathologies represented and of their possible physiopathological mechanisms. These observations are a starting point for future analyses. Examples of the relations found include: the distribution of patients into two groups, those in whom calcemia is determined by PTH and those in whom PTH is determined by calcemia; the seasonal distribution of vitamin D concentration; and the negative correlation between vitamin D and PTH.
It was also possible to deduce the kinetics of the control of calcitriol synthesis by PTH. The study of circulating PTH levels in parathyroidectomised subjects allowed its metabolic degradation rate to be deduced. The model allowed the simulation of the relations between Ca and PTH in blood, between Ca and the fraction eliminated by the kidney, and between Ca and P in blood, for normal and high calcium values. Simulations of physiopathological situations were made (in "virtual patients"): chronic infusions of calcium, PTH or calcitriol, and alterations of receptors. These simulations cannot be performed in humans. They are examples of the use of the model in exploring possible physiopathological mechanisms by observing quantitative results inaccessible to intuition. The model was useful in two phases of the work. First, its construction required choosing the available information, analysing it quantitatively, making the functional relations between the controllers and the variables rigorously (analytically) explicit, and integrating them into a global structure. Second, the simulation of experimental or clinical situations (from the Endocrinology Department) forced the explicit formulation of physiopathological reasoning that is usually intuitive. This practice revealed behaviours such as the reduced effect of PTH infusions (up to the total inhibition of endogenous PTH secretion) and the need to increase the secreting mass of the parathyroid glands in renal insufficiency. Building and using the model did not require advanced mathematical training; it was made possible by an interactive program designed for the simulation of dynamic systems, in which the program is written in English using elementary algebraic symbols.
The noble function of these models is similar to that of the physics of the 17th century: to enable general explanations by acting as an intellectual tool for manipulating concepts and performing thought experiments while respecting certain physical principles (conservation principles) that establish the frontiers of reality.
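The feedback structure of such a lumped-parameter model can be illustrated with a minimal two-variable sketch: PTH secretion falls sigmoidally with calcium (an enzymatic-kinetics-style response), PTH in turn mobilises calcium, and both are cleared at first-order rates. All parameter values, function shapes and names below are hypothetical illustrations, not the fitted values or equations of the thesis model.

```python
# Hypothetical parameters for illustration only (not fitted values from the thesis).
PTH_MAX = 10.0       # maximal PTH secretion rate
CA_SET = 9.5         # calcium level at half-maximal PTH secretion (mg/dL)
HILL = 8.0           # steepness of the sigmoidal secretion response
K_PTH = 0.5          # first-order PTH clearance rate
K_CA_IN = 0.08       # calcium mobilisation per unit PTH (bone/kidney/intestine lumped)
K_CA_OUT = 0.1       # first-order calcium removal rate

def pth_secretion(ca):
    """Decreasing sigmoid: parathyroid secretion is inhibited by high calcium."""
    return PTH_MAX / (1.0 + (ca / CA_SET) ** HILL)

def simulate(ca0=9.5, pth0=5.0, ca_infusion=0.0, t_end=200.0, dt=0.01):
    """Forward-Euler integration of the two-variable lumped model."""
    ca, pth = ca0, pth0
    for _ in range(int(t_end / dt)):
        dca = K_CA_IN * pth + ca_infusion - K_CA_OUT * ca
        dpth = pth_secretion(ca) - K_PTH * pth
        ca += dca * dt
        pth += dpth * dt
    return ca, pth

ca_ss, pth_ss = simulate()                 # undisturbed steady state
ca_hi, pth_hi = simulate(ca_infusion=0.3)  # chronic calcium infusion ("virtual patient")
```

Run undisturbed, the sketch settles to a steady state; a chronic calcium infusion, one of the "virtual patient" experiments described above, moves it to a new steady state with higher calcium and suppressed PTH.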
Resumo:
The positioning of consumers in power systems operation has changed in recent years, namely due to the implementation of competitive electricity markets. Demand response is an opportunity for consumers' participation in electricity markets, and smart grids can give important support for its integration. The methodology proposed in the present paper aims to create an improved demand response program definition and remuneration scheme for aggregated resources. The consumers are aggregated into a certain number of clusters, each corresponding to a distinct demand response program, according to the economic impact of the resulting remuneration tariff. The knowledge about the consumers is obtained from their demand price elasticity values. The illustrative case study included in the paper is based on a scenario with 218 consumers.
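The aggregation step, in which consumers are grouped into clusters that each define a demand response program, can be sketched with a simple one-dimensional k-means over elasticity values. The consumer data, the number of clusters and the function names below are illustrative assumptions, not the paper's actual algorithm or case-study data.

```python
import random

def kmeans_1d(values, k, iters=50, seed=0):
    """Simple 1-D k-means: group consumers by demand price elasticity."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        # Assign each consumer to the nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Illustrative elasticity values for a small consumer set (not the paper's data).
elasticities = [-0.10, -0.12, -0.35, -0.40, -0.38, -0.80, -0.75, -0.11]
centroids, clusters = kmeans_1d(elasticities, k=3)
```

Each resulting cluster would then receive its own program definition and remuneration tariff.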
Resumo:
The implementation of competitive electricity markets has changed the position of consumers and distributed generation in power systems operation. The use of distributed generation and participation in demand response programs, namely in smart grids, bring several advantages for consumers, aggregators, and system operators. The present paper proposes a remuneration structure for aggregated distributed generation and demand response resources. A virtual power player aggregates all the resources. The resources are aggregated into a certain number of clusters, each corresponding to a distinct tariff group, according to the economic impact of the resulting remuneration tariff. The determined tariffs are intended to be used for several months; the aggregator can define the periodicity of the tariff definition. The case study in this paper includes 218 consumers and 66 distributed generation units.
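One plausible way for an aggregator to derive a single remuneration tariff per cluster, consistent with grouping resources by the economic impact of the tariff, is an energy-weighted average of the individual remunerations. The figures and group names below are hypothetical; the paper's actual tariff-setting procedure may differ.

```python
# Illustrative resources: (energy offered in kWh, individual remuneration in EUR/kWh).
# Values and grouping are hypothetical, not taken from the paper's case study.
clusters = {
    "tariff_group_A": [(120.0, 0.05), (80.0, 0.06), (200.0, 0.055)],
    "tariff_group_B": [(50.0, 0.09), (70.0, 0.11)],
}

def cluster_tariff(resources):
    """Energy-weighted average remuneration, used as the single tariff of the group."""
    total_energy = sum(e for e, _ in resources)
    return sum(e * r for e, r in resources) / total_energy

tariffs = {name: round(cluster_tariff(res), 4) for name, res in clusters.items()}
```

The resulting per-cluster tariff could then be fixed for several months, as the abstract describes, and recomputed at the periodicity the aggregator defines.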
Resumo:
Since the discovery of the first penicillin, bacterial resistance to β-lactam antibiotics has spread and evolved, promoting new resistances in pathogens. The most common mechanism of resistance is the production of β-lactamases, which have spread throughout nature and evolved into complex phenotypes such as CMT-type enzymes. New antibiotics have been introduced into clinical practice, and a concise summary of their molecular targets, specific uses and other properties therefore becomes necessary. β-lactamases are still a major medical concern, and they have been extensively studied and described in the scientific literature. Several authors agree that Glu166 should act as the general base and that Ser70 should perform the nucleophilic attack on the carbonyl carbon of the β-lactam ring; nevertheless, there is still controversy about their catalytic mechanism. TEMs evolve at an incredible pace, presenting ever more complex phenotypes due to their tolerance to mutations. These mutations lead to an increasing need for novel, stronger, more specific and more stable antibiotics. The present review summarises key structural, molecular and functional aspects of the properties of ESBL, IRT and CMT TEM β-lactamases, together with up-to-date diagrams of the TEM variants with defined phenotypes. The activity and structural characteristics of several TEMs available in the NCBI-PDB are presented, as well as the relation between the various mutated residues and their specific properties, and some previously proposed catalytic mechanisms.
Resumo:
The liberalisation of the electricity sector in mainland Portugal followed a methodology similar to that of most European countries, with market opening carried out progressively. In the context of monitoring the national electricity sector, it is therefore of particular interest to characterise the most recent evolution of the liberalised market, namely with respect to the price of electricity. Electricity price forecasting is a very important issue for all participants in the electricity market and, given its importance, it has been the subject of several studies and several methodologies have been proposed. This question is addressed in the present dissertation using forecasting techniques, namely methods based on the history of the variable under study. Forecasts are, according to some specialists, one of the essential inputs that managers develop to support the decision process; virtually every relevant operational decision depends on a forecast. The electricity price forecasting model was built with Autoregressive Integrated Moving Average (ARIMA) models, which generate forecasts from the information contained in the time series itself. Since the goal is to evaluate the structure of the electricity price in the energy market, it is important to identify which of the variables under study are most related to the price. To this end, an exploratory analysis is carried out in parallel, through the correlation between the electricity price and the other study variables, using the Pearson correlation coefficient. The Pearson correlation coefficient is a measure of the degree and direction of the linear relation between two quantitative variables.
The developed model was applied to the electricity price history since the beginning of the liberalised market, in order to obtain daily, monthly and annual electricity price forecasts. The methodology proved efficient in obtaining the solutions and fast enough to forecast the electricity price within a few seconds, serving as decision support in a market environment.
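The exploratory step based on the Pearson coefficient can be reproduced in a few lines of code, using the standard formula r = cov(x, y) / (sx * sy). The price and demand series below are invented for illustration and are not market data.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative daily series (hypothetical, not real market data):
price = [42.1, 45.3, 44.0, 50.2, 48.7, 52.9]   # electricity price
demand = [610, 640, 625, 700, 680, 720]        # a candidate explanatory variable
r = pearson(price, demand)
```

A value of r near +1 or -1 flags a variable as strongly linearly related to the price and thus worth keeping in the analysis; values near 0 indicate little linear relation.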
Resumo:
PLOS ONE, 4(8): e6820
Resumo:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Chemical and Biochemical Engineering