10 results for Domain-specific analysis
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Functionally graded composite materials can provide continuously varying properties, whose distribution varies with location within the composite. Most frequently, functionally graded materials follow a through-thickness variation law, which can be more or less smooth but always retains one important characteristic: a continuous property-variation profile, which eliminates the abrupt stress discontinuities found in laminated composites. This study aims to analyze the transient dynamic behavior of sandwich structures with a metallic core and functionally graded outer layers. To this purpose, the properties of the particulate metal-ceramic composite outer layers are estimated using the Mori-Tanaka scheme, and the dynamic analyses consider first-order and higher-order shear deformation theories implemented through the kriging finite element method. The transient dynamic response of these structures is obtained through the Bossak-Newmark method. The illustrative cases presented in this work consider the influence of the shape functions' interpolation domain, the through-thickness distribution of the properties, and the effect of different materials, aspect ratios and boundary conditions. (C) 2014 Elsevier Ltd. All rights reserved.
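As an illustration of the through-thickness gradation described above, the sketch below evaluates a power-law ceramic volume fraction and combines the constituent moduli with a simple Voigt rule of mixtures. This is an assumed, simplified stand-in for the Mori-Tanaka scheme, not the authors' implementation; all function names and material values are hypothetical.

```python
# Sketch (not the paper's code): power-law through-thickness gradation of
# a metal-ceramic functionally graded layer. The Voigt rule of mixtures
# below is a deliberately simplified stand-in for Mori-Tanaka homogenization.

def ceramic_fraction(z, h, p):
    """Ceramic volume fraction at height z in [-h/2, h/2], power-law exponent p."""
    return (z / h + 0.5) ** p

def effective_modulus(z, h, p, e_metal, e_ceramic):
    """Voigt (rule-of-mixtures) estimate of the local Young's modulus."""
    vc = ceramic_fraction(z, h, p)
    return vc * e_ceramic + (1.0 - vc) * e_metal

# Example: aluminium (70 GPa) bottom face grading to alumina (380 GPa) top face
profile = [effective_modulus(z / 10.0 - 0.5, 1.0, 2.0, 70.0, 380.0)
           for z in range(11)]  # smooth, monotone profile with no jumps
```

The continuous profile is the point of the gradation: unlike a laminated stack, there is no interface at which the modulus jumps.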
Abstract:
Several studies have shown that patients with congestive heart failure (CHF) have impaired health-related quality of life (HRQL), which in recent years has become a primary endpoint when assessing the impact of treatment of chronic conditions such as CHF. Objectives: To evaluate the psychometric properties of the Portuguese version of a new specific instrument to measure HRQL in patients hospitalized for CHF: the Kansas City Cardiomyopathy Questionnaire (KCCQ). Population and Methods: The KCCQ was applied to a consecutive sample of 193 patients hospitalized for CHF. Of these, 105 repeated the assessment 3 months after admission, with no events occurring during this period. Mean age was 64.4±12.4 years (range 21-88), 72.5% of patients were male, and CHF was of ischemic etiology in 42% of cases. Results: This version of the KCCQ was subjected to statistical validation similar to that of the American version, with assessment of reliability and validity. Reliability was assessed by the internal consistency of the domains and summary scores, which showed comparable Cronbach alpha values across domains and summary scores (0.50 to 0.94). Validity was assessed by convergence, by sensitivity to differences between groups and by sensitivity to changes in clinical condition.
We evaluated the convergent validity of all domains related to functionality through their relationship with a measure of functionality, the New York Heart Association (NYHA) classification; significant correlations were found (p<0.01), supporting the NYHA classification as a measure of functionality in patients with CHF. Analysis of variance between the physical limitation domain, the summary scores and NYHA class showed statistically significant differences (F=23.4; F=36.4; F=37.4; p=0.0001) in the ability to discriminate the severity of the clinical condition. A second evaluation was performed on 105 patients at the 3-month follow-up outpatient appointment, and significant changes were observed in the mean scores of the domains assessed between hospital admission and the clinic appointment (differences of 14.9 to 30.6 on a 0-100 scale), indicating that the domains assessed are sensitive to changes in clinical condition. The correlation between the quality-of-life dimensions that make up this instrument is moderate, suggesting independent dimensions and supporting its multifactorial structure and the suitability of this measure for HRQL assessment. Conclusion: The KCCQ is a valid instrument, sensitive to change and specific for measuring HRQL in a Portuguese population with dilated cardiomyopathy and CHF.
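The internal-consistency statistic reported above, Cronbach's alpha, follows a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with purely illustrative data (not the study's) is:

```python
# Cronbach's alpha from item-level scores (illustrative data only).
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of respondent scores per questionnaire item."""
    k = len(items)
    item_var = sum(pvariance(scores) for scores in items)
    totals = [sum(col) for col in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1.0 - item_var / pvariance(totals))

# Three hypothetical items answered by five respondents
data = [[3, 4, 3, 5, 2],
        [2, 4, 3, 5, 1],
        [3, 5, 4, 5, 2]]
alpha = cronbach_alpha(data)  # items that move together give high alpha
```

Values near the reported upper bound (0.94) indicate items measuring the same construct; values near the lower bound (0.50) indicate weaker internal consistency.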
Abstract:
Electricity is an essential good for most societies. Its supply has traditionally been regarded as a public service, the responsibility of governments, provided through monopolistic companies, both public and private. The Iberian Electricity Market (MIBEL) was created with the objective of integrating the Portuguese and Spanish electricity sectors and fostering cooperation between them, allowing prices and energy volumes to be negotiated. Currently, entities can trade through a pool (exchange) market or through a bilateral contracts market. An analysis of existing electricity markets shows that they are far from liberalized: tariffs do not reflect the effect of competition, and the use of bilateral contracts often ties customers to a single electricity supplier. In recent years, a number of computational tools have emerged that make it possible to simulate part or all of an electricity market. However, despite their potential, many simulators lack flexibility and generality. In this context, the main objective of this dissertation is the development of an electricity market simulator capable of dealing with the difficulties inherent in this new market model, using autonomous computational agents. The dissertation describes the design and implementation of a simplified simulator for the negotiation of bilateral contracts in energy markets, with particular emphasis on the design of the strategies used by the negotiating parties. In addition, a case study with MIBEL data is described, along with several computational simulations involving electricity retailers and consumers using different negotiation strategies, followed by a detailed analysis of the results obtained.
In short, the results show that the best strategies for each entity in the case study are: the fixed-concessions strategy for the retailer, and the energy-volume-based concessions strategy for the consumer.
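The two winning strategies named in the conclusion can be sketched as simple concession rules. The functions and formulas below are assumptions for illustration only, not the dissertation's actual strategy definitions.

```python
# Illustrative sketch (names and formulas are hypothetical): two concession
# strategies for price negotiation in a bilateral electricity contract.

def fixed_concession(ask_price, step):
    """Fixed-concessions strategy (retailer side): concede a fixed
    amount per negotiation round, regardless of context."""
    return ask_price - step

def volume_based_concession(bid_price, ask_price, volume, ref_volume):
    """Volume-based concessions strategy (consumer side): concede a
    fraction of the gap that grows with the energy volume at stake."""
    rate = min(1.0, volume / ref_volume) * 0.1  # assumed concession rate
    return bid_price + rate * (ask_price - bid_price)

# One round: retailer asks 60 EUR/MWh, consumer bids 40 EUR/MWh for 500 MWh
retailer_ask = fixed_concession(60.0, 2.0)
consumer_bid = volume_based_concession(40.0, 60.0, 500.0, 1000.0)
```

Rounds alternate until the ask and bid cross, at which point a contract price can be struck.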
Abstract:
Interest rate risk is one of the major financial risks faced by banks due to the very nature of the banking business. The most common approach in the literature has been to estimate the impact of interest rate risk on banks using a simple linear regression model. However, the relationship between interest rate changes and bank stock returns need not be exclusively linear. This article provides a comprehensive analysis of the interest rate exposure of the Spanish banking industry employing both parametric and nonparametric estimation methods. Its main contribution is to use, for the first time in the context of banks' interest rate risk, a nonparametric regression technique that avoids the assumption of a specific functional form. On the one hand, it is found that the Spanish banking sector exhibits a remarkable degree of interest rate exposure, although the impact of interest rate changes on bank stock returns has significantly declined following the introduction of the euro. Further, a pattern of positive exposure emerges during the post-euro period. On the other hand, the results corresponding to the nonparametric model support the expansion of the conventional linear model in an attempt to gain a greater insight into the actual degree of exposure.
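The conventional linear model mentioned above regresses bank stock returns on interest-rate changes, R_t = a + b*dI_t + e_t, with the exposure coefficient b estimated by ordinary least squares. A minimal sketch with synthetic data (not the article's sample) is:

```python
# OLS slope as the interest-rate exposure coefficient b in
# R_t = a + b * dI_t + e_t. Data below are synthetic.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

rate_changes = [0.10, -0.05, 0.00, 0.20, -0.10]   # dI_t (synthetic)
stock_returns = [-0.8, 0.4, 0.1, -1.5, 0.9]       # R_t (synthetic)
exposure = ols_slope(rate_changes, stock_returns)  # negative: rate rises hurt returns
```

The nonparametric alternative discussed in the article replaces the fixed linear form b*dI_t with a smooth function estimated from the data, so nonlinear exposure patterns are not forced into a straight line.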
Abstract:
Functionally graded materials are composite materials wherein the composition of the constituent phases varies in a smooth, continuous way, with a gradation that is a function of the spatial coordinates. This characteristic proves to be an important asset, as it can minimize the abrupt variations of material properties that are usually responsible for localized high stresses, while simultaneously providing an effective thermal barrier in specific applications. In the present work, the static and free vibration behaviour of functionally graded sandwich plate-type structures is studied, using B-spline finite strip element models based on different shear deformation theories. The effective properties of the functionally graded materials are estimated according to the Mori-Tanaka homogenization scheme. These sandwich structures can also include outer skins of piezoelectric materials, thus giving them adaptive characteristics. The performance of the models is illustrated through a set of test cases. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
We report the nucleotide sequence of a 17,893 bp DNA segment from the right arm of Saccharomyces cerevisiae chromosome VII. This fragment begins at 482 kb from the centromere. The sequence includes the BRF1 gene, encoding TFIIIB70, the 5' portion of the GCN5 gene, an open reading frame (ORF) previously identified as ORF MGA1, whose translation product shows similarity to heat-shock transcription factors and five new ORFs. Among these, YGR250 encodes a polypeptide that harbours a domain present in several polyA binding proteins. YGR245 is similar to a putative Schizosaccharomyces pombe gene, YGR248 shows significant similarity with three ORFs of S. cerevisiae situated on different chromosomes, while the remaining two ORFs, YGR247 and YGR251, do not show significant similarity to sequences present in databases.
Abstract:
Internship report presented to the Escola Superior de Educação de Lisboa for the degree of Master in Teaching of the 1st and 2nd Cycles of Basic Education
Abstract:
In humans, brain cancer is an aggressive, malignant form of tumour; it is highly infiltrative in nature, is associated with cellular heterogeneity, and affects the cerebral hemispheres. Current drug therapies are inadequate and an unmet clinical need exists to develop new, improved therapeutics. The ability to silence genes associated with disease progression by using short interfering RNA (siRNA) presents the potential to develop safe and effective therapies. In this work, in order to protect the siRNA from degradation, promote cell-specific uptake and enhance gene silencing efficiency, a PEGylated cyclodextrin (CD)-based nanoparticle, tagged with a CNS-targeting peptide derived from the rabies virus glycoprotein (RVG), was formulated and characterized. The modified cyclodextrin derivatives were synthesized and co-formulated to form siRNA-containing nanoparticles, which were analysed for size, surface charge, stability, cellular uptake and gene knockdown in brain cancer cells. The results identified an optimised co-formulation prototype at a molar ratio of 1:1.5:0.5 (cationic cyclodextrin:PEGylated cyclodextrin:RVG-tagged PEGylated cyclodextrin) with a size of 281±39.72 nm, a surface charge of 26.73±3 mV, efficient cellular uptake and a 27% gene-knockdown ability. This CD-based formulation represents a potential nanocomplex for systemic delivery of siRNA targeting brain cancer.
Abstract:
Infrared spectroscopy, in both the near- and mid-infrared (NIR/MIR) regions of the spectrum, has gained great acceptance in industry for bioprocess monitoring according to Process Analytical Technology, due to its rapid, economical, highly sensitive mode of application and its versatility. Given the relevance of cyprosin (mostly for the dairy industry), and since NIR and MIR spectroscopy present specific characteristics that may ultimately complement each other, in the present work these techniques were compared for monitoring and characterizing recombinant cyprosin production by Saccharomyces cerevisiae, by in situ analysis and by at-line high-throughput analysis, respectively. Partial least-squares regression models relating NIR and MIR spectral features with biomass, cyprosin activity, specific activity, glucose, galactose, ethanol and acetate concentration were developed, in general presenting high regression coefficients and low prediction errors. For biomass and glucose, slightly better models were achieved by in situ NIR spectroscopic analysis, while for cyprosin activity and specific activity slightly better models were achieved by at-line MIR spectroscopic analysis. Both techniques therefore enabled monitoring of the highly dynamic cyprosin production bioprocess, thereby providing more efficient platforms for bioprocess optimization and control.
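The calibration models above relate spectra to analyte concentrations via partial least-squares (PLS) regression. The sketch below implements a single-latent-variable PLS1 fit in pure Python on hypothetical three-band spectra; it is a minimal illustration of the technique, not the models developed in the study.

```python
# One-component PLS1 regression: mean-centre X and y, take the weight
# vector w proportional to X^T y, compute scores t = Xc w, and regress
# y on t. Data are hypothetical, not the study's.

def pls1_one_component(X, y):
    """Fit PLS1 with one latent variable; return a prediction function."""
    n, m = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(m)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(m)] for row in X]
    yc = [yi - ym for yi in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(m)]
    norm = sum(wj * wj for wj in w) ** 0.5
    w = [wj / norm for wj in w]                      # unit weight vector
    t = [sum(Xc[i][j] * w[j] for j in range(m)) for i in range(n)]  # scores
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    def predict(spectrum):
        score = sum((spectrum[j] - xm[j]) * w[j] for j in range(m))
        return ym + b * score
    return predict

# Hypothetical 3-band spectra and corresponding analyte concentrations
X = [[0.1, 0.5, 0.2], [0.2, 0.9, 0.4], [0.3, 1.4, 0.6], [0.4, 1.9, 0.8]]
y = [1.0, 2.0, 3.0, 4.0]
model = pls1_one_component(X, y)
```

Practical NIR/MIR models use many bands and several latent variables, but the mechanics per component are the same.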
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
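For the two-endmember case, the sum-to-one constrained least-squares unmixing mentioned above has a closed form: with y = a*m1 + (1-a)*m2 + noise, the abundance is a = (y - m2)·(m1 - m2) / ||m1 - m2||². The sketch below uses hypothetical four-band signatures.

```python
# Constrained least-squares unmixing of a two-endmember linear mixture:
# y = a*m1 + (1-a)*m2 + noise, with abundances summing to one.
# Endmember signatures are hypothetical 4-band reflectances.

def unmix_two(y, m1, m2):
    """Least-squares abundance of m1 in pixel y under the sum-to-one constraint."""
    d = [u - v for u, v in zip(m1, m2)]
    num = sum((yi - vi) * di for yi, vi, di in zip(y, m2, d))
    den = sum(di * di for di in d)
    return num / den

m1 = [0.8, 0.6, 0.4, 0.2]   # hypothetical endmember signature
m2 = [0.1, 0.3, 0.5, 0.7]
a_true = 0.3
pixel = [a_true * u + (1 - a_true) * v for u, v in zip(m1, m2)]
abundance = unmix_two(pixel, m1, m2)  # recovers a_true for a noise-free pixel
```

With more endmembers the same idea becomes a constrained least-squares problem over the simplex, solved numerically rather than in closed form.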
MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
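The skewer-projection step of PPI described above can be sketched in a few lines. This is a minimal illustration of the idea (no MNF preprocessing), with toy two-band data and an arbitrary skewer count.

```python
# Pixel purity index (PPI) sketch: project every spectral vector onto
# random "skewers" and count how often each pixel is an extreme of the
# projection. Data and skewer count are illustrative.
import random

def ppi_scores(pixels, n_skewers=200, seed=0):
    rng = random.Random(seed)
    bands = len(pixels[0])
    scores = [0] * len(pixels)
    for _ in range(n_skewers):
        skewer = [rng.gauss(0.0, 1.0) for _ in range(bands)]
        proj = [sum(p * s for p, s in zip(pix, skewer)) for pix in pixels]
        scores[proj.index(max(proj))] += 1  # extreme at one end
        scores[proj.index(min(proj))] += 1  # extreme at the other end
    return scores

# Two pure pixels and two 50/50 mixtures: only the pure pixels can be extremes
pure1, pure2, mix = [1.0, 0.0], [0.0, 1.0], [0.5, 0.5]
scores = ppi_scores([pure1, pure2, mix, mix])
```

A mixture always projects between its endmembers on every skewer, so its score stays at zero; the pure pixels accumulate all the counts.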
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-square sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
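The iterative step just described can be sketched as follows. This is a deliberately simplified, deterministic illustration of the project-and-take-the-extreme idea, not the authors' VCA implementation (which works in the estimated signal subspace and uses random probing directions); the data are toy two-band pixels.

```python
# VCA-like sketch: repeatedly project the data onto a direction
# orthogonal to the endmembers found so far and take the extreme pixel.
# Simplified and deterministic; illustrative only.

def extract_endmembers(pixels, k):
    found = []
    basis = []                      # orthonormal basis of found endmembers
    direction = [1.0] * len(pixels[0])  # fixed probing direction (simplification)
    for _ in range(k):
        # make the probing direction orthogonal to the current basis
        d = list(direction)
        for b in basis:
            dot = sum(di * bi for di, bi in zip(d, b))
            d = [di - dot * bi for di, bi in zip(d, b)]
        proj = [sum(p * di for p, di in zip(pix, d)) for pix in pixels]
        best = max(range(len(pixels)), key=lambda i: abs(proj[i]))
        found.append(pixels[best])
        # extend the basis with the new endmember (Gram-Schmidt step)
        v = list(pixels[best])
        for b in basis:
            dot = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - dot * bi for vi, bi in zip(v, b)]
        norm = sum(vi * vi for vi in v) ** 0.5
        basis.append([vi / norm for vi in v])
    return found

m1, m2 = [0.8, 0.1], [0.1, 0.6]       # hypothetical pure-pixel signatures
mixed = [0.45, 0.35]                   # 50/50 mixture of m1 and m2
endmembers = extract_endmembers([mixed, m1, m2], 2)
```

After the first endmember is found, projecting orthogonally to it drives its own projection to zero, so the next extreme is necessarily a different vertex of the simplex.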