889 results for Time-frequency analysis
Abstract:
Aims To determine whether the financial incentives for tight glycaemic control, introduced in the UK as part of a pay-for-performance scheme in 2004, increased the rate at which people with newly diagnosed Type 2 diabetes were started on anti-diabetic medication.
Methods A secondary analysis of data from the General Practice Research Database for the years 1999-2008 was performed using an interrupted time series analysis of the treatment patterns for people newly diagnosed with Type 2 diabetes (n=21 197).
Results Overall, the proportion of people with newly diagnosed diabetes managed without medication 12 months after diagnosis was 47%, and after 24 months it was 40%. The annual rate of initiation of pharmacological treatment within 12 months of diagnosis was decreasing by 1.2% per year (95% CI -2.0, -0.5%) before the introduction of the pay-for-performance scheme and increased by 1.9% per year (95% CI 1.1, 2.7%) after its introduction. The equivalent figures for treatment within 24 months of diagnosis were -1.4% (95% CI -2.1, -0.8%) before the scheme was introduced and 1.6% (95% CI 0.8, 2.3%) after.
Conclusion The present study suggests that the introduction of financial incentives in 2004 effected a change in the management of people newly diagnosed with diabetes. We conclude that a greater proportion of people with newly diagnosed diabetes are being started on medication within 1 and 2 years of diagnosis as a result of the introduction of financial incentives for tight glycaemic control.
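The before/after slope comparison described above is the classic segmented-regression form of interrupted time series analysis. A minimal sketch of that model, fitted by ordinary least squares on hypothetical yearly initiation rates (not the study's data; the numbers below are illustrative only):

```python
import numpy as np

# Hypothetical yearly treatment-initiation rates (%), NOT the study's dataset:
# a declining trend before the 2004 scheme, rising afterwards.
years = np.arange(1999, 2009)
rate = np.array([52.0, 50.8, 49.5, 48.4, 47.1, 46.0, 47.8, 49.9, 51.7, 53.5])

t = years - years[0]                       # time since start of series
post = (years >= 2004).astype(float)       # indicator: after the intervention
t_post = post * (years - 2004)             # time since the intervention

# Segmented regression: rate = b0 + b1*t + b2*post + b3*t_post,
# where b1 is the pre-intervention trend and b3 the change in trend.
X = np.column_stack([np.ones_like(t), t, post, t_post])
b0, b1, b2, b3 = np.linalg.lstsq(X, rate, rcond=None)[0]

print(f"pre-scheme trend:   {b1:+.2f} % per year")
print(f"change in trend:    {b3:+.2f} % per year")
print(f"post-scheme trend:  {b1 + b3:+.2f} % per year")
```

With the illustrative numbers the fitted pre-intervention slope is negative and the post-intervention slope positive, mirroring the pattern the abstract reports.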
Abstract:
Objectives: To determine whether adjusting the denominator of the common hospital antibiotic use measurement unit (defined daily doses/100 bed-days) by including age-adjusted comorbidity score (100 bed-days/age-adjusted comorbidity score) would result in more accurate and meaningful assessment of hospital antibiotic use.
Methods: The association between the monthly sum of age-adjusted comorbidity and monthly antibiotic use was measured using time-series analysis (January 2008 to June 2012). For the purposes of conducting internal benchmarking, two antibiotic usage datasets were constructed, i.e. 2004-07 (first study period) and 2008-11 (second study period). Monthly antibiotic use was normalized per 100 bed-days and per 100 bed-days/age-adjusted comorbidity score.
Results: Antibiotic use showed a significant positive relationship with the sum of age-adjusted comorbidity scores (P = 0.0004), and negative relationships with (i) alcohol-based hand rub use (P = 0.0370) and (ii) clinical pharmacist activity (P = 0.0031). Normalizing antibiotic use per 100 bed-days gave a comparative usage rate of 1.31, i.e. average antibiotic use during the second period was 31% higher than during the first. However, normalizing antibiotic use per 100 bed-days per age-adjusted comorbidity score gave a comparative usage rate of 0.98, i.e. average antibiotic use was 2% lower in the second study period. Importantly, the latter comparative usage rate is independent of differences in patient density and case mix characteristics between the two studied populations.
Conclusions: The proposed modified antibiotic measure provides an innovative approach to compare variations in antibiotic prescribing while taking account of patient case mix effects.
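The arithmetic behind the two comparative usage rates is simple to make concrete. A sketch with hypothetical period totals (function names and all figures are illustrative, not the study's data) showing how the case-mix adjustment can reverse the apparent direction of change:

```python
# Illustrative comparison of the two normalizations described above.
# All figures are hypothetical, NOT the study's data.

def usage_per_100_bed_days(ddd, bed_days):
    """Defined daily doses per 100 bed-days (the conventional unit)."""
    return ddd / bed_days * 100

def usage_adjusted(ddd, bed_days, comorbidity_score):
    """DDD per 100 bed-days, further divided by the age-adjusted comorbidity score."""
    return usage_per_100_bed_days(ddd, bed_days) / comorbidity_score

# Hypothetical totals: the second period treats a sicker case mix.
first = dict(ddd=40_000, bed_days=120_000, comorbidity_score=1.00)
second = dict(ddd=55_000, bed_days=125_000, comorbidity_score=1.48)

crude_ratio = (usage_per_100_bed_days(second["ddd"], second["bed_days"])
               / usage_per_100_bed_days(first["ddd"], first["bed_days"]))
adjusted_ratio = usage_adjusted(**second) / usage_adjusted(**first)

# The crude ratio suggests a rise that the case-mix-adjusted ratio does not.
print(f"crude comparative usage rate:    {crude_ratio:.2f}")
print(f"adjusted comparative usage rate: {adjusted_ratio:.2f}")
```

The point of the adjustment is visible in the two ratios: the crude rate exceeds 1 while the adjusted rate falls below it, just as in the 1.31 vs. 0.98 contrast reported above.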
Abstract:
Over the last decade, wireless communication networks have grown exponentially, both in the penetration rate of the services provided and in the deployment of new infrastructure across the globe. It is now well established that this trend will not only continue but strengthen, driven by the expected convergence between mobile wireless networks and the provision of broadband services over the fixed Internet, in an evolution towards an integrated architecture based on IP services and applications. For this reason, mobile wireless communications will play a fundamental role in the development of the information society in the medium and long term. The design and implementation of current-generation (2G and 3G) cellular networks followed a strategy of stratifying the protocol architecture into a modular structure of watertight layers, in which each layer is responsible for implementing a set of functions. In this model, communication takes place only between adjacent layers, through pre-established communication primitives. Such an architecture makes it easier to implement and introduce new functionality into the network. However, the fact that the lower layers of the protocol stack do not use information made available by the upper layers, and vice versa, degrades system performance. This issue becomes particularly important when multiple-antenna (MIMO) systems are deployed, since multiple antennas introduce an additional degree of freedom in radio resource allocation: the spatial domain.
Unlike resource allocation in the time and frequency domains, radio resources mapped onto the spatial domain cannot be assumed to be fully orthogonal, owing to the interference that results from several terminals transmitting on the same channel and/or time slots but on different spatial beams. Making information about the state of the radio resources available to the upper layers of the protocol stack is therefore of fundamental importance for meeting the required quality-of-service criteria. Efficient radio resource management demands low-complexity packet scheduling algorithms that set users' priority levels for accessing those resources based on information provided by both the lower and the upper layers of the model. This new communication paradigm, known as cross-layer design, maximizes the data-carrying capacity of the mobile radio channel while satisfying the quality-of-service requirements derived from the application layer. In its drafting, the IEEE 802.16e standard, known as Mobile WiMAX, was intended to meet the specifications associated with fourth-generation cellular systems. Its scalable architecture, low deployment cost and high data rates result in efficient data multiplexing and low packet transmission delays, which are fundamental attributes for the provision of broadband services. Likewise, the packet-switched communication inherent in its medium access layer is fully compatible with the quality-of-service demands of such applications. Mobile WiMAX therefore appears to satisfy the demanding requirements of fourth-generation mobile networks.
This thesis investigates, designs and implements packet scheduling algorithms for the efficient management of the radio resources of cellular networks in the time, frequency and spatial domains, taking networks based on the IEEE 802.16e standard as a practical case study. The proposed algorithms combine metrics from the physical layer with the quality-of-service requirements of the upper layers, in accordance with the cross-layer network architecture paradigm. The performance of these algorithms is analysed through simulations carried out on a system-level simulator platform that implements the physical and medium access layers of the IEEE 802.16e standard.
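The cross-layer scheduling idea described above can be sketched in a few lines: a per-user priority metric that mixes a physical-layer measure (channel quality) with an application-layer QoS measure (head-of-line delay against a delay budget). The metric, names, and weighting below are illustrative assumptions, not the thesis's actual algorithms:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    channel_quality: float   # e.g. normalized SNR reported by the PHY layer
    hol_delay_ms: float      # head-of-line packet delay at the application layer
    delay_budget_ms: float   # QoS delay budget of the user's service class

def priority(u: User) -> float:
    # Cross-layer metric (illustrative): favour good channels, but let urgency
    # grow as the head-of-line delay approaches the QoS delay budget.
    urgency = u.hol_delay_ms / u.delay_budget_ms
    return u.channel_quality * (1.0 + urgency)

def schedule(users: list[User]) -> User:
    """Pick the user to serve in the next slot: highest cross-layer priority."""
    return max(users, key=priority)

users = [
    # Delay-sensitive flow near its deadline, on a mediocre channel.
    User("voice-call", channel_quality=0.6, hol_delay_ms=80, delay_budget_ms=100),
    # Throughput flow with a good channel but a generous delay budget.
    User("file-download", channel_quality=0.9, hol_delay_ms=10, delay_budget_ms=1000),
]
print(schedule(users).name)
```

With these numbers the near-deadline voice flow wins the slot despite its worse channel, which is exactly the behaviour a purely physical-layer (channel-only) scheduler would not produce.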
Abstract:
Non-enzymatic glycation and oxidative stress are two important processes, as they play a significant role in the complications of several pathophysiological conditions. The association between non-enzymatic glycation and protein oxidation is now recognized as one of the main contributors to the accumulation of non-functional proteins, which in turn continuously sensitizes cells to increased oxidative stress. Although considerable information is available on both processes and their structural and functional consequences, questions remain about what takes place at the molecular level. To contribute to a better understanding of the relationship between non-enzymatic glycation and oxidation, model proteins (albumin, insulin and histones H2B and H1) were subjected to in vitro non-enzymatic glycation and oxidation systems under controlled conditions and for a specific period of time. Glycation and oxidation sites were identified through a proteomics approach in which, after enzymatic digestion, samples were analysed by liquid chromatography coupled to tandem mass spectrometry (MALDI-TOF/TOF). This approach yielded high coverage of the protein sequences, allowing the preferential glycation and oxidation sites in the different proteins studied to be identified. As expected, lysine residues were the ones preferentially glycated. Regarding oxidation, in addition to modifications involving hydroxylation and oxygen addition, deamidations, carbamylations and specific oxidative conversions of several amino acids were identified. Overall, the residues most affected by oxidation were cysteine, methionine, tryptophan, tyrosine, proline, lysine and phenylalanine.
Over the time period studied, the results indicated that oxidation began in exposed regions of the protein and/or in the vicinity of cysteine and methionine residues, rather than behaving randomly, and proceeded in a non-linear fashion that depended on the conformational stability of the protein. The time-course study also showed that pre-glycated proteins were oxidized faster and more extensively, suggesting that the structural changes induced by glycation promote a pro-oxidative state. In pre-glycated and oxidized proteins, a greater number of oxidative modifications, and of modified residues in the vicinity of glycated residues, was identified. This approach makes an important contribution to the investigation of the molecular-level consequences of 'glyco-oxidative' damage to proteins by combining mass spectrometry and bioinformatics.
Abstract:
The exponential growth of the world population has led to an increase of settlements often located in areas prone to natural disasters, including earthquakes. Consequently, despite the important advances in the field of natural catastrophe modelling and risk mitigation actions, overall human losses have continued to increase and unprecedented economic losses have been registered. In the research work presented herein, various areas of earthquake engineering and seismology are thoroughly investigated, and a case study application for mainland Portugal is performed. Seismic risk assessment is a critical link in the reduction of casualties and damage due to earthquakes. Recognition of this relation has led to a rapid rise in demand for accurate, reliable and flexible numerical tools and software. In the present work, an open-source platform for seismic hazard and risk assessment is developed. This software is capable of computing the distribution of losses or damage for an earthquake scenario (deterministic event-based) or earthquake losses due to all the possible seismic events that might occur within a region for a given interval of time (probabilistic event-based). This effort has been developed following an open and transparent philosophy and is therefore available to any individual or institution. The estimation of seismic risk depends mainly on three components: seismic hazard, exposure and vulnerability. The latter component assumes special importance, as by intervening with appropriate retrofitting solutions it may be possible to directly decrease the seismic risk. The employment of analytical methodologies is fundamental in the assessment of structural vulnerability, particularly in regions where post-earthquake building damage data might not be available. Several common methodologies are investigated, and conclusions are drawn regarding the method that provides an optimal balance between accuracy and computational effort.
In addition, a simplified approach based on displacement-based earthquake loss assessment (DBELA) is proposed, which allows the rapid estimation of fragility curves while considering a wide spectrum of uncertainties. A novel vulnerability model for the reinforced concrete building stock in Portugal is proposed in this work, using statistical information collected from hundreds of real buildings. An analytical approach based on nonlinear time history analysis is adopted, and the impact of a set of key parameters is investigated, including the damage state criteria and the chosen intensity measure type. A comprehensive review of previous studies that contributed to the understanding of seismic hazard and risk for Portugal is presented. An existing seismic source model was employed with recently proposed attenuation models to calculate probabilistic seismic hazard throughout the territory. These results are combined with information from the 2011 Building Census and the aforementioned vulnerability model to estimate economic loss maps for a return period of 475 years. The losses are disaggregated across the different building typologies, and conclusions are drawn regarding the types of construction most vulnerable to seismic activity.
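The 475-year return period quoted above corresponds to the conventional "10% probability of exceedance in 50 years" criterion. A quick check of that equivalence under the usual Poisson-occurrence assumption:

```python
import math

def exceedance_probability(return_period_years: float, window_years: float) -> float:
    """P(at least one exceedance within the window) for a Poisson process:
    P = 1 - exp(-t / T), with T the return period and t the exposure window."""
    return 1.0 - math.exp(-window_years / return_period_years)

p = exceedance_probability(475.0, 50.0)
print(f"P(exceedance in 50 years) = {p:.3f}")   # ~ 0.100, i.e. the 10%-in-50-years criterion
```

This is why 475-year loss maps are the standard product for building-code-level risk statements.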
Abstract:
Doctoral thesis, Fisheries Sciences and Technology, Faculdade de Ciências do Mar e do Ambiente, Universidade do Algarve, 2005
Abstract:
Evaluation of the spectral content of blood-flow Doppler ultrasound is currently performed in clinical diagnosis. Since the mean frequency and bandwidth spectral parameters are determinant in quantifying the degree of stenosis, estimators more precise than the conventional Fourier transform should be sought. This paper summarizes studies led by the author in this field, as well as the strategies used to implement the methods in real time. Taking into account the stationary and nonstationary characteristics of the blood-flow signal, different models were assessed. When autoregressive and autoregressive moving average models were compared with the traditional Fourier-based methods in terms of their statistical performance in estimating both spectral parameters, the Modified Covariance model was identified by the cost/benefit criterion as the best-performing estimator. The performance of three time-frequency distributions and the Short Time Fourier Transform was also compared, and the Choi-Williams distribution proved to be more accurate than the other methods. The identified spectral estimators were developed and optimized using high-performance techniques; homogeneous and heterogeneous architectures supporting multiple-instruction multiple-data parallel processing were evaluated. The results proved that real-time implementation of the blood-flow estimators is feasible, encouraging the use of more complex spectral models in other ultrasonic systems.
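The two spectral parameters named above have simple moment definitions: the mean frequency is the first moment of the normalized power spectrum, and the bandwidth is the square root of its second central moment. A sketch of both on a synthetic tone-in-noise signal, using a plain windowed periodogram as the spectral estimate (the paper's point is precisely that AR/ARMA and Choi-Williams estimators do this job better; the signal and sampling rate are illustrative):

```python
import numpy as np

fs = 10_000.0                                  # sampling rate, Hz (illustrative)
t = np.arange(2048) / fs
rng = np.random.default_rng(0)
# Synthetic "Doppler-like" signal: a 1 kHz tone buried in light noise.
x = np.sin(2 * np.pi * 1000.0 * t) + 0.1 * rng.standard_normal(t.size)

# Windowed periodogram as a (crude) one-sided spectral estimate.
w = np.hanning(x.size)
spectrum = np.abs(np.fft.rfft(x * w)) ** 2
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

p = spectrum / spectrum.sum()                  # normalize to a distribution over frequency
mean_freq = np.sum(freqs * p)                  # first spectral moment
bandwidth = np.sqrt(np.sum((freqs - mean_freq) ** 2 * p))   # RMS spectral width

print(f"mean frequency ~ {mean_freq:.0f} Hz, bandwidth ~ {bandwidth:.0f} Hz")
```

Any of the estimators compared in the paper can be dropped in place of the periodogram line; the moment formulas for the two clinical parameters stay the same.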
Abstract:
Doctoral thesis, History and Philosophy of Science, Universidade de Lisboa, Faculdade de Ciências, 2016
Abstract:
Introduction: Osteoarthritis (OA) is a degenerative joint disease affecting more than 8.5 million people in the UK. It involves disruption of the catabolic and anabolic balance, with the catabolic cytokine interleukin 1 beta (IL-1β) implicated in the initiation and progression of OA (1). Melanocortin peptides (α-MSH and D[Trp8]-γ-MSH) exert their anti-inflammatory effects via activation of melanocortin receptors (MC), with both MC1 and MC3 identified as promising novel targets for OA (2). This study aims to assess the chondroprotective and anti-inflammatory effects of the pan-melanocortin receptor agonist α-MSH and the MC3 agonist D[Trp8]-γ-MSH following IL-1β stimulation of chondrocytes. Methods: RT-PCR/Western blot: human C-20/A4 chondrocytic cells were cultured in 6-well plates (1×10⁶ cells/well) and harvested to determine MC and IL-1β expression by RT-PCR and Western blot. Cell culture: cells were cultured in 96-well plates (1×10⁶ cells/well) and stimulated with H2O2 (0.3%), TNF-α (60 pg/ml) or IL-1β (0-5000 pg/ml) for 0-72 h, and cell viability was determined. Drug treatment: in separate experiments, cells were pre-treated with 3 μg/ml α-MSH (Sigma-Aldrich Inc., Poole, UK) or D[Trp8]-γ-MSH (Phoenix Pharmaceuticals, Karlsruhe, Germany) (all dissolved in PBS) for 30 minutes prior to IL-1β (5000 pg/ml) stimulation for 6-24 h. Analysis: cell viability was determined using three assays: Alamar Blue, MTT and Neutral Red (NR). Cell-free supernatants were collected and analysed for interleukin-6 (IL-6) and IL-8 release by ELISA. Data are expressed as mean ± SD of n = 4-8 determinations in quadruplicate; *p ≤ 0.05 vs. control. Results: Both RT-PCR and Western blot showed MC1 and MC3 expression on C-20/A4 cells. Cell viability analysis: IL-1β stimulation led to a maximal cell death of 35% at 6 h (Alamar Blue), and of 40% and 75% with MTT and Neutral Red respectively at 24 h, compared to control.
The three cell viability assays have different cellular uptake pathways, which accounts for the variations observed in cell viability in response to IL-1β concentration and time. Cytokine analysis by ELISA: IL-1β (5000 pg/ml) stimulation for 6 and 24 h produced maximal IL-6 production of 292.3 ± 3.8 and 275.5 ± 5.0 respectively, and IL-8 production of 353.3 ± 2.6 and 598.3 ± 8.6 respectively. Pre-treatment of cells with α-MSH and D[Trp8]-γ-MSH caused significant reductions in both IL-6 and IL-8 following IL-1β stimulation at 6 h. Conclusion: MC1/3 are expressed on C-20/A4 cells, and their activation by melanocortin peptides inhibited IL-1β-induced cell death and pro-inflammatory cytokine release.
Abstract:
Scientific dissertation submitted to obtain the degree of Master in Civil Engineering, specialization in Building Construction
Abstract:
Phenol is a toxic compound present in a wide variety of foundry resins. Its quantification is important for the characterization of the resins as well as for the evaluation of free contaminants present in foundry wastes. Two chromatographic methods, liquid chromatography with ultraviolet detection (LC-UV) and gas chromatography with flame ionization detection (GC-FID), were developed for the analysis of free phenol in several foundry resins after a simple extraction procedure (30 min). Both chromatographic methods were suitable for the determination of phenol in the studied furanic and phenolic resins, showing good selectivity, accuracy (recovery 99–100%; relative deviations <5%) and precision (coefficients of variation <6%). The ASTM reference method used was found to be useful only for the analysis of phenolic resins, whereas the LC and GC methods were applicable to all the studied resins. The developed methods reduce the analysis time from 3.5 hours to about 30 min and can readily be used in routine quality control laboratories.
Abstract:
In this paper we address the real-time capabilities of P-NET, which is a multi-master fieldbus standard based on a virtual token passing scheme. We show how P-NET's medium access control (MAC) protocol is able to guarantee a bounded access time to message requests. We then propose a model for implementing fixed priority-based dispatching mechanisms at each master's application level. In this way, we diminish the impact of the first-come-first-served (FCFS) policy that P-NET uses at the data link layer. The proposed model raises several issues well known within the real-time systems community: message release jitter; pre-run-time schedulability analysis in non-pre-emptive contexts; non-independence of tasks at the application level. We identify these issues in the proposed model and show how results available for priority-based task dispatching can be adapted to encompass priority-based message dispatching in P-NET networks.
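The core idea above — queue message requests by fixed priority at the application level so the FCFS data link layer always receives the most urgent pending request first — can be sketched with a priority queue. Message names and priority values are illustrative, not from the paper:

```python
import heapq
import itertools

class PriorityDispatcher:
    """Dispatch message requests by fixed priority (lower number = more urgent),
    breaking ties in FIFO (arrival) order — a sketch of application-level
    priority dispatching in front of an FCFS link layer."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker preserves arrival order

    def submit(self, priority: int, message: str) -> None:
        heapq.heappush(self._heap, (priority, next(self._seq), message))

    def dispatch(self) -> str:
        """Hand the highest-priority pending request to the link layer."""
        return heapq.heappop(self._heap)[2]

d = PriorityDispatcher()
d.submit(3, "log-upload")     # arrives first, but low urgency
d.submit(1, "alarm")          # arrives later, high urgency
d.submit(2, "sensor-poll")
print([d.dispatch() for _ in range(3)])   # → ['alarm', 'sensor-poll', 'log-upload']
```

Under plain FCFS the alarm would wait behind the log upload; with the dispatcher it goes out first, which is the behaviour the proposed model recovers on top of P-NET's link layer.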
Abstract:
This paper applied MDS and the Fourier transform to analyze different periods of the business cycle. For this purpose, four important stock market indexes (Dow Jones, Nasdaq, NYSE, S&P500) were studied over time. The analysis under the lens of the Fourier transform showed that the indexes have characteristics similar to those of fractional noise. On the other hand, the analysis under the MDS lens identified patterns in the stock markets specific to each economic expansion period. Although the identification of patterns characteristic of each expansion period is interesting to practitioners (even if only in an a posteriori fashion), further research should explore the meaning of such regularities and aim to find a method to estimate future crises.
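"Characteristics similar to fractional noise" is typically diagnosed from the slope of the power spectrum on a log-log plot, S(f) ~ 1/f^α. A minimal sketch of that diagnostic on a synthetic random walk (whose spectrum falls off with α ≈ 2) rather than the actual index data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(8192))     # random walk: ~1/f^2 spectrum

# Periodogram of the series and its (normalized) frequency axis.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size)

# Linear fit in log-log coordinates over the low-frequency range (skip DC):
# log S(f) ~ -alpha * log f + const.
mask = (freqs > 0) & (freqs < 0.1)
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
alpha = -slope
print(f"estimated spectral exponent alpha ~ {alpha:.1f}")
```

Applied to an index series, an estimated α between 0 and 2 (neither white noise's flat spectrum nor a pure random walk) is what the "fractional noise" characterization refers to.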
Abstract:
Project work presented to the Instituto de Contabilidade e Administração do Porto to obtain the degree of Master in Specialized Translation and Interpretation, supervised by Dr Clara Sarmento
Abstract:
Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2013