973 results for Large Linear System
Abstract:
The ecotoxicological response of the living organisms in an aquatic system depends on the physical, chemical and bacteriological variables, as well as the interactions between them. An important challenge to scientists is to understand the interaction and behaviour of factors involved in a multidimensional process such as the ecotoxicological response. With this aim, multiple linear regression (MLR) and principal component regression were applied to the ecotoxicity bioassay response of Chlorella vulgaris and Vibrio fischeri in water collected at seven sites of the Leça river during five monitoring campaigns (February, May, June, August and September of 2006). The river water characterization included the analysis of 22 physicochemical and 3 microbiological parameters. The model that best fitted the data was MLR, which shows: (i) a negative correlation with dissolved organic carbon, zinc and manganese, and a positive one with turbidity and arsenic, regarding the C. vulgaris toxic response; (ii) a negative correlation with conductivity and turbidity and a positive one with phosphorus, hardness, iron, mercury, arsenic and faecal coliforms, concerning the V. fischeri toxic response. This integrated assessment may allow the evaluation of the effect of future pollution abatement measures on the water quality of the Leça River.
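As a hedged illustration of the kind of model referred to above, the sketch below fits a multiple linear regression of a toxicity response on a few water-quality predictors using scikit-learn; the data, predictor names and coefficients are invented and are not those of the Leça river study.

```python
# Illustrative only: synthetic data standing in for the bioassay response and
# for water-quality predictors (DOC, turbidity, zinc, arsenic are assumed names).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((35, 4))                      # e.g. 7 sites x 5 campaigns = 35 samples
y = -0.5 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * rng.standard_normal(35)  # synthetic response

mlr = LinearRegression().fit(X, y)
print("coefficients:", mlr.coef_)            # sign indicates the direction of each correlation
print("R^2:", mlr.score(X, y))
```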
Abstract:
Nowadays there is an increasing number of location-aware mobile applications. However, these applications typically retrieve location only from the mobile device's GPS chip, which means that indoors or in denser environments they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error because, in order to make the system wearable, they use low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with the accelerometer data to obtain a better detection of the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides sensor fusion, an information fusion architecture is proposed, based on the information from GPS and several inertial units placed on the pedestrian's body, that is used to learn the pedestrian gait behaviour and correct, in real time, the inertial sensor errors, thus improving location estimation.
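A minimal sketch of the kind of stance-phase (zero-velocity) detection described above, assuming a simple thresholding of the accelerometer norm and of a foot force sensor; the thresholds and function names are hypothetical and are not the paper's implementation.

```python
# Minimal sketch (assumed thresholds): combining accelerometer magnitude with a
# foot force sensor to flag the stance phase of the gait cycle, the instant at
# which a zero-velocity update can correct inertial drift.
import numpy as np

G = 9.81            # gravity [m/s^2]
ACC_TOL = 0.4       # tolerated deviation from gravity during stance (assumed)
FORCE_MIN = 20.0    # minimum foot force indicating ground contact [N] (assumed)

def stance_mask(acc_xyz: np.ndarray, force: np.ndarray) -> np.ndarray:
    """Return a boolean array marking samples where the foot is on the ground."""
    acc_norm = np.linalg.norm(acc_xyz, axis=1)
    near_gravity = np.abs(acc_norm - G) < ACC_TOL
    in_contact = force > FORCE_MIN
    return near_gravity & in_contact
```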
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of data of high spectral resolution of the Earth surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually done at a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on the vertex component analysis (VCA) and works without a dimensionality reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose programmable logic is based on the Artix-7 FPGA family, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform to implement high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
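For orientation only, the sketch below shows the projection-and-selection step at the heart of VCA-like endmember extraction in plain NumPy; it is a software simplification under assumed conventions (a bands x pixels data matrix), not the paper's FPGA architecture.

```python
# Simplified sketch of a VCA-like projection loop (illustrative only).
import numpy as np

def extract_endmembers(Y: np.ndarray, p: int) -> np.ndarray:
    """Y: (bands, pixels) data matrix; p: number of endmembers to extract."""
    L, N = Y.shape
    E = np.zeros((L, p))
    f = np.random.default_rng(0).standard_normal(L)   # initial random direction
    for i in range(p):
        f /= np.linalg.norm(f)
        idx = np.argmax(np.abs(f @ Y))                 # pixel most aligned with f
        E[:, i] = Y[:, idx]
        # next direction: orthogonal to the span of the endmembers found so far
        A = E[:, : i + 1]
        P = np.eye(L) - A @ np.linalg.pinv(A)
        f = P @ np.random.default_rng(i + 1).standard_normal(L)
    return E
```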
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
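To make the abundance constraints and the linear observation model concrete, the toy sketch below generates synthetic pixels with Dirichlet-distributed abundances (which satisfy positivity and full additivity by construction) and then recovers the abundances of one pixel by constrained least squares, with the sum-to-one constraint enforced softly by augmenting the system. Everything here is an illustration under assumed sizes and parameters, not the chapter's experimental setup or algorithm.

```python
# Toy generative model + constrained least-squares unmixing (illustrative only).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
L, p, N = 50, 3, 1000                                 # bands, endmembers, pixels (arbitrary)
M = rng.random((L, p))                                # synthetic endmember signatures
A = rng.dirichlet(alpha=[2.0, 5.0, 3.0], size=N).T    # p x N abundances, each column sums to 1
Y = M @ A + 0.01 * rng.standard_normal((L, N))        # linear mixing model with additive noise

def unmix_pixel(y, M, delta=1e3):
    """Nonnegative abundances with a soft sum-to-one constraint (augmented NNLS)."""
    M_aug = np.vstack([M, delta * np.ones((1, M.shape[1]))])
    y_aug = np.append(y, delta)
    a, _ = nnls(M_aug, y_aug)
    return a

a_hat = unmix_pixel(Y[:, 0], M)
print("true:", np.round(A[:, 0], 3), "estimated:", np.round(a_hat, 3))
```

The Dirichlet draw above is only meant to illustrate the kind of prior invoked in the blind unmixing scheme sketched at the end of the chapter, where the mixing matrix itself is unknown and inferred by an EM-type algorithm.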
Abstract:
The intensive use of distributed generation based on renewable resources increases the complexity of power systems management, particularly of short-term scheduling. Demand response, storage units and electric and plug-in hybrid vehicles also pose new challenges to short-term scheduling. However, these distributed energy resources can contribute significantly to making short-term scheduling more efficient and effective, improving power system reliability. This paper proposes a short-term scheduling methodology based on two distinct time horizons: hour-ahead scheduling and real-time scheduling, considering the point of view of an aggregator agent. In each scheduling process, it is necessary to update the generation and consumption operation and the storage and electric vehicle status. Besides the new operating conditions, more accurate forecast values of wind generation and consumption are available, resulting from short-term and very short-term forecasting methods. In this paper, the aggregator has the main goal of maximizing its profits while fulfilling the established contracts with the aggregated and external players.
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its adequacy to solve the problem at hand. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different numbers of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with a difference of about one percent in the objective function cost value.
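The sketch below is a generic PSO with a velocity limit that shrinks over the iterations, included only to illustrate the kind of velocity-limit adjustment that ASMPSO builds on; it is not the paper's algorithm, and every parameter value is an assumption.

```python
# Generic PSO with a shrinking velocity limit (illustrative, assumed parameters).
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-10.0, 10.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration (assumed)
    for t in range(iters):
        vmax = (hi - lo) * (0.2 * (1.0 - t / iters) + 0.05)  # velocity limit shrinks over time
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda z: np.sum(z**2), dim=5)   # toy quadratic test function
```

In ASMPSO, according to the abstract, the adjustment is intelligent and self-parameterized rather than a fixed schedule like the one shown here.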
Abstract:
In Angola, only about 30% of the population has access to electricity, a figure that drops below 10% in the more remote rural areas. This problem is aggravated by the fact that, in most cases, the existing infrastructure is either damaged or has not kept pace with the development of the region. This is particularly the case in the Angolan capital, Luanda, which, despite being the smallest province of Angola, currently has the highest population density. With a population of about 5 million inhabitants, there are not only frequent problems related to power supply failures but also a considerable percentage of municipalities that the electricity grid has not yet reached. The government of Angola, in its effort to grow and exploit the country's enormous potential, has defined the energy sector as one of the critical factors for the country's sustainable development, assuming it as one of the priority axes until 2016. There are clear objectives regarding the rehabilitation and expansion of the electricity sector infrastructure, increasing the country's installed capacity and creating an adequate national grid, with the aim not only of improving the quality and reliability of the existing network but also of expanding it. This dissertation work consisted of gathering real data on the electricity distribution network of Luanda, analysing and planning the most pressing actions regarding its expansion, selecting the sites where it is feasible to locate new substations, adequately modelling the real problem, and proposing an optimal solution for the expansion of the existing network. After analysing different mathematical models applied to the electricity distribution network expansion problem found in the literature, a mixed-integer linear programming (MILP) model was chosen and proved adequate. Once the model of the problem had been developed, it was solved using the optimization software Analytic Solver and CPLEX. To validate the results obtained, the network solution was implemented in the PowerWorld 8.0 OPF simulator, a software package that allows the simulation of the power flow operation of the system.
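For orientation only, the toy model below illustrates the flavour of a mixed-integer formulation for substation siting, written with the PuLP modelling library; the sites, costs and coverage data are invented, and the dissertation's actual MILP, data and solvers (Analytic Solver and CPLEX) are different.

```python
# Toy substation-siting MILP (illustrative data, not the dissertation's model).
import pulp

sites = ["S1", "S2", "S3"]                          # candidate substation locations
loads = ["L1", "L2", "L3", "L4"]
build_cost = {"S1": 5.0, "S2": 4.0, "S3": 6.0}      # hypothetical investment costs
covers = {                                          # which loads each site can feed
    "S1": ["L1", "L2"], "S2": ["L2", "L3"], "S3": ["L3", "L4"],
}

prob = pulp.LpProblem("substation_siting", pulp.LpMinimize)
build = pulp.LpVariable.dicts("build", sites, cat="Binary")
prob += pulp.lpSum(build_cost[s] * build[s] for s in sites)
for ld in loads:  # every load must be reachable from at least one built substation
    prob += pulp.lpSum(build[s] for s in sites if ld in covers[s]) >= 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(build[s].value()) for s in sites})
```

A real expansion-planning model would add feeder routing, capacity and voltage constraints; this toy only shows how binary build decisions and coverage constraints fit together.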
Abstract:
The sustainability of the energy system is crucial for the economic and social development of present and future societies. To guarantee the proper operation of power systems, action is typically taken on generation and on the transmission and distribution networks. However, the growing integration of distributed generation, mainly in medium- and low-voltage distribution networks, the liberalization of energy markets, the development of energy storage mechanisms, the development of automated load control systems and the technological advances in communication infrastructures demand the development of new methods for the management and control of power systems. The contribution of this work is the development of an energy resource management methodology in a SmartGrid context, considering an entity designated as a VPP that manages a set of installations (generation units, consumers and storage units) and, in some cases, is responsible for managing part of the electricity network. The methods developed account for the intensive penetration of distributed generation, the emergence of Demand Response programs and the development of new storage systems. Hierarchical levels of control and decision making are also proposed, managed by entities that act in an environment of cooperation but also of competition with one another. The proposed methodology was developed using deterministic techniques, namely mixed-integer nonlinear programming, considering three distinct objective functions (minimum cost, minimum emissions and minimum load curtailment), which were later combined into a global objective function, allowing the determination of Pareto optima. The locational marginal cost at each bus is also determined, and the uncertainties of the input data, namely generation and consumption, are taken into account. The VPP thus has at its disposal a set of solutions that allow it to make better-founded decisions in accordance with its operating profile. Two case studies are presented. The first uses a 32-bus distribution network published by Baran & Wu. The second case study uses a 114-bus distribution network adapted from the IEEE 123-bus network.
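As a small illustration of how a weighted global objective can trace Pareto-optimal trade-offs, the sketch below sweeps a weight between two toy objectives (cost and emissions) for a single dispatch variable; it only conveys the idea of the weighted-sum approach and is not the thesis's MINLP formulation.

```python
# Weighted-sum sweep over two toy objectives to sample Pareto trade-offs.
import numpy as np

g = np.linspace(0.0, 10.0, 201)            # candidate dispatch levels (toy)
cost = 1.0 * g + 0.05 * g**2               # hypothetical cost curve
emissions = 20.0 - 1.5 * g + 0.08 * g**2   # hypothetical emissions curve

pareto = []
for w in np.linspace(0.0, 1.0, 11):        # sweep the weight between objectives
    idx = np.argmin(w * cost + (1.0 - w) * emissions)
    pareto.append((g[idx], cost[idx], emissions[idx]))
for point in sorted(set(pareto)):
    print("g=%.2f  cost=%.2f  emissions=%.2f" % point)
```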
Abstract:
Distribution network planning aims to ensure that networks have the capacity to supply electricity with good levels of quality of service while taking the associated economic factors into account. Within the scope of the work presented in this dissertation, a planning model was developed that determines the network configuration resulting from the minimization of the costs associated with: 1) Joule losses; 2) investment in new components; 3) energy not supplied. The uncertainty associated with the consumption of each load is modelled using fuzzy logic. The optimization problem defined is solved by the Benders decomposition method, which comprises two optimal power flows (a DC model and an AC model) in the master and slave problems, respectively, for constraint validation (a generic sketch of this master/subproblem loop is given below). Stopping criteria for the Benders decomposition method were also defined. The proposed model is classified as mixed-integer nonlinear programming and was implemented in the General Algebraic Modeling System (GAMS) optimization tool. The model developed takes all network components into account in the planning optimization, as can be seen in the case studies implemented. Each case study is defined by varying the importance given to each of the problem's variables, in order to cover, to some extent, all the expected operating scenarios. These case studies show the various configurations that the network can assume, depending on the importance assigned to each of the variables, as well as the respective costs associated with each solution. This work offers a considerable contribution in the field of distribution network planning, since it encompasses different variables in its execution. It is also a fairly robust model that does not lose its way when finding solutions for large networks with a larger number of components.
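A generic sketch of the Benders iteration described above, with investment (configuration) decisions in the master problem and operating checks in the subproblem; the symbols are generic and do not reproduce the dissertation's exact DC/AC formulation.

```latex
% Generic Benders decomposition structure (illustrative notation)
\begin{aligned}
\text{master (iteration } k\text{):}\quad
  & \min_{y,\;\alpha}\; c^{\top} y + \alpha
    \quad \text{s.t.}\quad \alpha \ge \theta_j + \lambda_j^{\top}(y - y_j),
    \quad j = 1,\dots,k-1,\\
\text{subproblem:}\quad
  & \theta_k = \min_{x}\; q(x, y_k)
    \quad \text{s.t.}\quad \text{operating constraints for the fixed plan } y_k .
\end{aligned}
```

Here $y$ collects the investment decisions, $x$ the operating variables, and $\lambda_k$ the subproblem duals that generate a new cut; the loop stops when the master's lower bound and the subproblem's upper bound agree within a tolerance (the stopping criteria mentioned above).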
Abstract:
This thesis presents the development and validation of a simple and original method to identify vascular calcifications in dialysis patients, using a semi-quantitative score that we have created and that is obtained from a plain X-ray of the pelvis and hands.
This score was named in different publications as the "simple vascular calcification score". We have demonstrated that this score is a predictor of higher cardiovascular risk in dialysis patients. The simple vascular calcification score was also associated with lower bone mineral density evaluated by DXA in the femoral neck. In hemodialysis patients, coronary calcifications evaluated by the coronary Agatston score and by the simple vascular calcification score were associated with lower bone volume analysed in bone biopsies. These studies corroborate the hypothesis of the existence of a link between bone disease and vascular disease in dialysis patients, and one of the elements of this link may be vascular calcifications. This simple vascular calcification score identifies calcifications in large, medium and small calibre arteries and includes the two radiological patterns of arterial calcification: linear calcification, which has been associated with calcification of the media layer of the arterial wall, and irregular and patchy calcification, which has been associated with calcification of the intima layer of the arterial wall1. In the several studies that we have published we have demonstrated that vascular calcifications evaluated by this simple and inexpensive method allow the identification of patients with high cardiovascular risk. This simple vascular calcification score is an independent predictor of cardiovascular mortality2, all-cause mortality3, cardiovascular hospitalizations2, cardiovascular disease2, peripheral artery disease2,4, valvular calcifications5 and arterial stiffness3. KDIGO (Kidney Disease: Improving Global Outcomes) guidelines published in 2009 suggest that chronic kidney disease patients in stages 3 to 5 with vascular and valvular calcifications should be considered to be at the highest cardiovascular risk6. The high mortality of chronic kidney disease patients is not completely explained by the traditional risk factors7, and the KDIGO group has supported, since 2006, the hypothesis of the existence of a link between bone disease and vascular disease8. This link may be explained by the alterations of bone and mineral metabolism and their interaction with the development and progression of vascular calcifications. We have also verified in our studies the existence of an association between vascular calcifications and bone disease. Low bone volume diagnosed by histomorphometric analysis of bone biopsies, in a group of dialysis patients, was independently associated with the simple vascular calcification score (data presented in this thesis, chapter 6) and with coronary calcifications evaluated by the Agatston score9. The original contribution of this article published in CJASN9 deserved a commentary in an editorial written by Prof. Gérard London10, a leading investigator in this area and current EDTA (European Dialysis and Transplantation Association) President. We were also the first group to describe an independent and inverse association between bone mineral density evaluated in the femoral neck by DXA (dual energy X-ray absorptiometry) and vascular calcifications evaluated by the simple vascular calcification score, arterial stiffness evaluated by carotid-femoral pulse wave velocity, and peripheral artery disease diagnosed by clinical criteria11.
We were also the first group to demonstrate a significant correlation between bone mineral density evaluated by DXA in the femoral neck, but not in the lumbar spine, and cortical thickness evaluated by histomorphometric analysis of bone biopsy12. Our study attributed to DXA, for the first time, a role in the diagnosis of cortical porosity in dialysis patients. The clinical utility of the differential evaluation of bone mineral density in cortical or trabecular bone needs, however, to be confirmed in prospective studies. This original finding of our study was mentioned by the ERBP (European Renal Best Practice) in its commentary on the KDIGO position regarding the reduced utility of bone mineral density evaluation in dialysis patients13. Two of the studies included in this thesis were part of the group of studies selected as references by the KDIGO guidelines published in 2009 to evaluate the prevalence of vascular calcifications in CKD patients (KDIGO 2009: Supplementary Table 10, Fig. 3.6) and to corroborate the association between vascular calcifications and cardiovascular mortality (KDIGO 2009: Supplementary Table 12, Fig. 3.7)6. The inclusion of both studies as references in the KDIGO guidelines, which used the exacting GRADE system (Grades of Recommendation, Assessment, Development, and Evaluation) in the classification and selection of studies, validates the scientific value of our work. The diagnosis of vascular calcifications has a practical interest for chronic kidney disease patients. The presence of vascular calcifications is an alert sign of a high cardiovascular risk, and this information may be used to modify the treatment of these patients6. Different methods may be used to detect the presence of vascular calcifications in dialysis patients14,15. The simple vascular calcification score has the advantage of being simple, inexpensive and easily evaluated by the nephrologist without the need for a radiologist's interpretation. The reproducibility of this method has already been demonstrated by other groups in national and international studies16-24. It was demonstrated in those studies that vascular calcifications evaluated by the method created by us predict a higher risk of cardiovascular events16, a higher risk of lower limb amputations17, higher pulse wave velocity18,19, corneal and conjunctival calcifications20 and coronary calcifications21. A negative association between the simple vascular calcification score and PTH levels21, 25(OH) vitamin D levels22,23 and fetuin A levels19,24 has also been demonstrated. All these studies performed by different groups, which used the simple vascular calcification score in their methods, demonstrate that this score is simple, useful and reproducible in the evaluation of chronic kidney disease patients.
Abstract:
Dissertation submitted to obtain the Master's degree in Molecular Genetics and Biomedicine
Abstract:
A theory of free vibrations of discrete fractional order (FO) systems with a finite number of degrees of freedom (dof) is developed. A FO system with a finite number of dof is defined by means of three matrices: mass inertia, system rigidity and FO elements. By adopting a matrix formulation, a mathematical description of the free vibrations of FO discrete systems is obtained in the form of coupled fractional order differential equations (FODE). The corresponding solutions in analytical form, for the special case of the matrix of FO element properties, are determined and expressed as a polynomial series along time. For the eigen characteristic numbers, the system eigen main coordinates and the independent eigen FO modes are determined. A generalized function of viscoelastic creep FO dissipation of energy and generalized forces of a system with non-ideal viscoelastic creep FO dissipation of energy for generalized coordinates are formulated. Extended Lagrange FODE of second kind, for FO system dynamics, are also introduced. Two examples of FO chain systems are analyzed and the corresponding eigen characteristic numbers determined. It is shown that the oscillatory phenomena of a FO mechanical chain have analogies to electrical FO circuits. A FO electrical resistor is introduced and its constitutive voltage–current relation is formulated. A function of thermal energy FO dissipation of the FO electrical resistor is also discussed.
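Read directly from the description above (mass inertia, rigidity and FO element matrices), the coupled FODE governing free vibrations can be written, under assumed notation, as:

```latex
% Illustrative matrix form of the coupled fractional order differential equations
\mathbf{M}\,\ddot{\mathbf{x}}(t) \;+\; \mathbf{C}_{\alpha}\,\mathcal{D}^{\alpha}\mathbf{x}(t)
\;+\; \mathbf{K}\,\mathbf{x}(t) \;=\; \mathbf{0}, \qquad 0 < \alpha < 1,
```

where $\mathbf{M}$ is the mass inertia matrix, $\mathbf{K}$ the rigidity matrix, $\mathbf{C}_{\alpha}$ the matrix of FO element properties and $\mathcal{D}^{\alpha}$ a fractional derivative operator; the paper's exact notation and choice of fractional derivative may differ.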
Abstract:
The theory of fractional calculus goes back to the beginning of the theory of differential calculus, but its inherent complexity postponed the application of the associated concepts. In the last decade, progress in the areas of chaos and fractals revealed subtle relationships with fractional calculus, leading to an increasing interest in the development of the new paradigm. In the area of automatic control, preliminary work has already been carried out, but the proposed algorithms are restricted to the frequency domain. The paper discusses the design of fractional-order discrete-time controllers. The algorithms studied adopt the time domain, which makes them suited for z-transform analysis and discrete-time implementation. The performance of discrete-time fractional-order controllers with linear and non-linear systems is also investigated.
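A common time-domain route to discrete-time fractional-order operators is the truncated Grünwald–Letnikov approximation; the sketch below computes its coefficients and applies it to a sampled signal. This is a standard construction offered for orientation, under the assumption that it matches the general approach, and is not a reproduction of the paper's controller design.

```python
# Truncated Grunwald-Letnikov approximation of a fractional derivative of order alpha.
import numpy as np

def gl_coefficients(alpha: float, n: int) -> np.ndarray:
    """Coefficients (-1)^k * C(alpha, k), k = 0..n, via the standard recursion."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_derivative(x: np.ndarray, alpha: float, h: float, memory: int = 100) -> np.ndarray:
    """Approximate D^alpha x at each sample using a truncated memory of past samples."""
    c = gl_coefficients(alpha, memory)
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        k_max = min(n, memory)
        y[n] = np.dot(c[: k_max + 1], x[n - np.arange(k_max + 1)]) / h**alpha
    return y
```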
Abstract:
Robotica 2012: 12th International Conference on Autonomous Robot Systems and Competitions April 11, 2012, Guimarães, Portugal
Abstract:
This paper describes the development and testing of a robotic capsule for search and rescue operations at sea. The capsule is able to operate autonomously or under remote control, is transported and deployed by a larger USV (unmanned surface vehicle) into a designated disaster area, and is used to carry a life raft and inflate it close to survivors in large-scale maritime disasters. The ultimate goal of this development is to endow search and rescue teams with tools that extend their operational capability in scenarios with adverse atmospheric or maritime conditions.