987 results for phase shift errors
Abstract:
Dissertation presented to obtain the PhD degree in Biology/Molecular Biology by Universidade Nova de Lisboa, Instituto de Tecnologia Química e Biológica
Abstract:
Proper ventilation of rooms housing electrical technical services, namely transformer substations and generator-set rooms, is of utmost importance as a guarantee of the continuity and quality of the service provided, of the durability of materials and equipment, and of the safety of installations and users. The ventilation of such rooms may be natural or mechanical, depending on their characteristics and on the air required for ventilation and, where applicable, combustion. The engineers responsible for electrical installation design do not, as a rule, have in-depth knowledge of this subject, and their designs are based on general specifications and methodologies provided by manufacturers and suppliers of materials and equipment. Designing a ventilation solution for an electrical technical room requires knowledge of all heat gains inside the space, of the available ventilation techniques and technologies, and of the sizing methodologies applicable to each situation. Since the electrical design phase usually runs on tight deadlines, particular aspects that require research and time to develop may be neglected, which can lead to designs and bills of quantities that deviate from the ideal solution for the client and to higher investments, both during construction and during operation of the installations. This work therefore addresses the ventilation of rooms housing electrical technical services, covering the normative and regulatory framework of the installations, the technical and technological solutions available on the market, and the sizing methodologies set out in normative and regulatory documents. It also aimed to develop a software tool to assist in sizing ventilation solutions for electrical technical rooms housing transformer substations and generator sets, in order to reduce the time normally required by this task. This makes design time more productive, standardises the solutions presented, and minimises the probability of sizing errors, thereby reducing the likelihood of additional costs arising from design errors, saving materials in the bill of quantities, improving the efficiency of the construction work, reducing operating costs, and bringing stakeholders closer to the sizing of ventilation for the electrical technical space.
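As an illustration of the kind of sizing calculation such a tool automates, the sketch below estimates the airflow needed to remove the heat dissipated inside a transformer room, assuming steady-state sensible heat removal (Q = P / (ρ·cp·ΔT)). The loss figure and temperature limits are hypothetical placeholders, not values from the dissertation.

```python
# Minimal sketch: airflow required to remove heat losses from a transformer room.
# Assumes steady-state sensible heat removal; all numeric inputs are illustrative.

RHO_AIR = 1.2      # air density [kg/m^3]
CP_AIR = 1005.0    # specific heat of air [J/(kg.K)]

def required_airflow(heat_loss_w: float, t_out: float, t_max_in: float) -> float:
    """Volumetric airflow [m^3/s] needed to keep the room below t_max_in."""
    delta_t = t_max_in - t_out
    if delta_t <= 0:
        raise ValueError("Outdoor air must be cooler than the admissible room temperature.")
    return heat_loss_w / (RHO_AIR * CP_AIR * delta_t)

# Example: transformer dissipating ~7 kW, 35 degC outdoors, 45 degC admissible indoors.
flow = required_airflow(heat_loss_w=7000.0, t_out=35.0, t_max_in=45.0)
print(f"Required airflow: {flow:.2f} m^3/s ({flow * 3600:.0f} m^3/h)")
```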
Abstract:
This work surveys and analyses the concepts of inertial navigation used to estimate the distance travelled by a person. A hardware platform was developed to implement the inertial navigation algorithms and to study human gait. The tests carried out made it possible to adapt the inertial navigation algorithms to humans and to evaluate several techniques for reducing the error in the estimated distance travelled. The system developed is modular and allows the effect of adding new sensors to be studied; the navigation algorithms were therefore adapted to use the information from force sensors placed on the sole of the user's foot. On this architecture, two approaches to computing the distance travelled by a person were implemented. The first approach estimates the distance from the step count. The second estimates it using the inertial navigation algorithms. A set of tests was carried out to compare the errors in the distance estimated by the two approaches. The first approach achieved a mean error of 4.103% across several step cadences, after tuning for the user in question. The second approach yielded an error of 9.423%. To reduce this error, a Kalman filter was applied, lowering it to 9.192%; the force sensors then allowed a further reduction to 8.172%. Although the second approach has a larger error, it is user-independent, since it requires no per-person parameter tuning to estimate the distance. The tests also showed, through the force sensors, the importance of the force felt on the sole of the foot in identifying the phase of the gait cycle; this capability reduces the errors in the estimated distance and makes this type of system more robust.
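As an illustration of how a sole-mounted force sensor can be used to bound inertial drift, the sketch below gates zero-velocity updates (ZUPT) on the force reading: while the foot is judged to be on the ground, the integrated velocity is reset. The threshold, sample rate, and data layout are illustrative assumptions, not the implementation described above.

```python
import numpy as np

# Minimal sketch: strap-down distance estimation with zero-velocity updates (ZUPT)
# gated by a sole-mounted force sensor. Thresholds and data layout are illustrative.

DT = 0.01                # sample period [s]
STANCE_FORCE_N = 50.0    # force above which the foot is assumed to be on the ground

def estimate_distance(accel: np.ndarray, force: np.ndarray) -> float:
    """accel: (N, 3) gravity-compensated accelerations in the navigation frame [m/s^2];
    force: (N,) readings from the sole force sensor [N]. Returns distance travelled [m]."""
    velocity = np.zeros(3)
    distance = 0.0
    for a, f in zip(accel, force):
        velocity += a * DT
        if f > STANCE_FORCE_N:
            velocity[:] = 0.0          # ZUPT: foot on ground, integration drift reset
        distance += np.linalg.norm(velocity) * DT
    return distance
```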
Abstract:
In a globalised market, the continuous search for competitive advantage is a crucial factor in the success of organisations. Continuous process improvement is a common approach, since such improvements translate directly into product quality. In this context, the Failure Mode and Effects Analysis (FMEA) methodology is widely used, particularly because of its proactive character, which allows process errors to be identified and prevented. The more effectively the tool is applied, the more the organisation benefits: when used effectively, Process FMEA is not only a powerful method of process analysis but also enables continuous improvement and cost reduction [1]. The objective of this dissertation was to evaluate the effectiveness of the Process FMEA tool in an organisation certified to ISO/TS 16949. The proposed methodology is based on the analysis of real data, that is, comparing the failures observed in the field with the failures that had been identified in the FMEA. By analysing how many failures were identified or missed during the FMEA and how those failures appeared in the field, it is possible to determine how effective the FMEA was and to identify factors that hinder its best use. The study is organised in three phases: the first presents the proposed methodology, with a flowchart of the evaluation process and the metrics used; the second applies the proposed model to two case studies; and the last consists of a comparative, individual and global analysis that, besides comparing results, identifies weak points in the execution of the FMEA. The case-study results indicate that the FMEA tool has been used effectively, since a significant number of potential failures were identified and prevented. Nevertheless, some failures not identified in the FMEA reached the customer, as did some failures that had been identified. These failures translate into poor quality and costs for the business, so improvement actions are proposed. It can be concluded that good use of FMEA can be an important factor in the quality of customer service, with an impact on costs as well.
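One plausible way to quantify the effectiveness discussed above is the share of field failures that had already been anticipated in the Process FMEA. The sketch below computes that share on hypothetical failure lists; the dissertation's actual metrics are not specified in the abstract.

```python
# Minimal sketch of one possible FMEA-effectiveness metric: the share of field
# failures that had already been identified in the Process FMEA.
# The failure lists are hypothetical placeholders.

fmea_failures = {"solder bridge", "missing screw", "wrong label", "loose connector"}
field_failures = {"solder bridge", "loose connector", "cracked housing"}

identified = field_failures & fmea_failures
missed = field_failures - fmea_failures

effectiveness = len(identified) / len(field_failures)
print(f"Field failures anticipated by the FMEA: {effectiveness:.0%}")
print(f"Failures missed by the FMEA: {sorted(missed)}")
```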
Abstract:
The European Union Emissions Trading Scheme (EU ETS) is a cornerstone of the European Union's policy to combat climate change and its key tool for reducing industrial greenhouse gas emissions cost-effectively. The purpose of the present work is to evaluate the influence of the CO2 opportunity cost on the Spanish wholesale electricity price. Our sample covers all of Phase II of the EU ETS and the first year of Phase III, from January 2008 to December 2013. A vector error correction model (VECM) is applied to estimate not only long-run equilibrium relations but also short-run interactions between the electricity price and the fuel (natural gas and coal) and carbon prices. The four commodity prices are modeled as joint endogenous variables, with air temperature and renewable energy as exogenous variables. We found a long-run relationship (cointegration) between the electricity price, the carbon price, and fuel prices. By estimating the dynamic pass-through of the carbon price into the electricity price for different periods of our sample, it is possible to observe the weakening of the link between carbon and electricity prices as a result of the collapse in CO2 prices, which compromises the efficacy of the system in reaching the proposed environmental goals. This conclusion is in line with the need to shape new policies within the EU ETS framework that prevent excessively low carbon prices over extended periods of time.
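For readers who want to see what such a specification looks like in code, the sketch below fits a VECM of the kind described, with the four prices as joint endogenous variables and temperature plus renewable generation as exogenous regressors, using the VECM class from statsmodels. The data are synthetic placeholders and the lag order is arbitrary, not the paper's actual sample or settings.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

# Minimal sketch of the model described: a VECM with electricity, carbon, gas and
# coal prices as joint endogenous variables and temperature plus renewable
# generation as exogenous variables. The data below are synthetic placeholders.

rng = np.random.default_rng(0)
n = 300                                   # ~ number of trading periods
common = np.cumsum(rng.normal(size=n))    # shared stochastic trend (forces cointegration)
endog = pd.DataFrame({
    "electricity": common + rng.normal(scale=0.5, size=n),
    "carbon":      common + rng.normal(scale=0.5, size=n),
    "gas":         common + rng.normal(scale=0.5, size=n),
    "coal":        common + rng.normal(scale=0.5, size=n),
})
exog = pd.DataFrame({
    "temperature": rng.normal(20, 5, size=n),
    "renewables":  rng.normal(100, 10, size=n),
})

model = VECM(endog, exog=exog, k_ar_diff=2, coint_rank=1)
results = model.fit()

print(results.beta)   # long-run (cointegration) relation between the four prices
print(results.alpha)  # speed-of-adjustment coefficients toward that relation
```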
Abstract:
The last decade has witnessed a major shift towards the deployment of embedded applications on multi-core platforms. However, real-time applications have not been able to fully benefit from this transition, as the computational gains offered by multi-cores are often offset by performance degradation due to shared resources, such as main memory. To use multi-core platforms efficiently for real-time systems, it is hence essential to tightly bound the interference incurred when accessing shared resources. Although there has been much recent work in this area, a remaining key problem is to address the diversity of memory arbiters in the analysis, so that it is applicable to a wide range of systems. This work handles diverse arbiters by proposing a general framework to compute the maximum interference caused by the shared memory bus and its impact on the execution time of the tasks running on the cores, considering different bus arbiters. Our novel approach clearly demarcates the arbiter-dependent and arbiter-independent stages in the analysis of these upper bounds. The arbiter-dependent phase takes the arbiter and the task memory-traffic pattern as inputs and produces a model of the availability of the bus to a given task. Based on that availability, the arbiter-independent phase then determines the worst-case request-release scenario that maximizes the interference experienced by the tasks due to contention for the bus. We show that the framework addresses the diversity problem by applying it to a memory bus shared by a fixed-priority arbiter, a time-division multiplexing (TDM) arbiter, and an unspecified work-conserving arbiter, using applications from the MediaBench test suite. We also experimentally evaluate the quality of the analysis by comparing it with a state-of-the-art TDM analysis approach, consistently showing a considerable reduction in maximum interference.
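As a concrete instance of the arbiter-dependent reasoning, the sketch below bounds the interference a core can suffer under a TDM arbiter, where each request may have to wait for every other core's slot before its own slot recurs. This is a simplified illustration of the idea, not the paper's analysis; the parameters are arbitrary.

```python
# Minimal sketch of a TDM-style interference bound: with num_cores slots per frame,
# each of slot_cycles bus cycles, a request issued just after the core's slot has
# passed waits for the remainder of the frame, so interference grows linearly with
# the number of requests. Parameters are illustrative, not from the evaluation.

def tdm_interference_bound(num_requests: int, num_cores: int, slot_cycles: int) -> int:
    """Upper bound (in bus cycles) on the delay suffered by a core's requests
    when each of num_cores cores owns one slot of slot_cycles cycles per TDM frame."""
    frame_cycles = num_cores * slot_cycles
    worst_wait_per_request = frame_cycles - slot_cycles   # all other slots pass first
    return num_requests * worst_wait_per_request

# Example: 1000 requests, 4 cores, 10-cycle slots -> at most 30000 cycles of interference.
print(tdm_interference_bound(num_requests=1000, num_cores=4, slot_cycles=10))
```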
Abstract:
This paper applies Pseudo Phase Plane (PPP) and Fractional Calculus (FC) mathematical tools to the modeling of world economies. A challenging global rivalry among the largest international economies began in the early 1970s, when post-war prosperity declined, and it continues to this day. Even if worrying threats of ambitious military aggression, invasion, or hegemony exist today, the countries' relative positions in the PPP can say something about the current global peaceful equilibrium. A political downturn of the USA's global hegemony in favor of Asian partners is possible, but may still not be accomplished in the coming decades. Although the 1973 oil shock marked the beginning of a long-run recession, the PPP analysis of the last four decades (1972–2012) does not point to global dominance by other partners (Russia, Brazil, Japan, and Germany) in terms of reaching high degrees of similarity with the most developed countries. The synergy of the proposed mathematical tools leads to a better understanding of the dynamics underlying world economies and points towards the estimation of future states based on the memory of each time series.
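For context, a Pseudo Phase Plane is built by plotting a scalar time series against a delayed copy of itself, so that one signal becomes a two-dimensional trajectory whose shape can be compared across countries. The sketch below illustrates this construction on a synthetic series; the delay and the data are arbitrary, not the indicators used in the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch of a Pseudo Phase Plane (PPP): plot x(t) against x(t + d).
# The series below is synthetic; the paper uses economic indicators for 1972-2012.

def pseudo_phase_plane(x: np.ndarray, delay: int):
    """Return the delayed pair (x[t], x[t + delay]) used to draw the PPP."""
    return x[:-delay], x[delay:]

t = np.arange(0, 40, 0.25)                             # "years"
x = np.sin(0.5 * t) + 0.1 * np.random.randn(t.size)    # placeholder economic series

u, v = pseudo_phase_plane(x, delay=8)
plt.plot(u, v, lw=0.8)
plt.xlabel("x(t)")
plt.ylabel("x(t + d)")
plt.title("Pseudo phase plane of a synthetic series")
plt.show()
```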
Abstract:
Proceedings of the 10th Conference on Dynamical Systems Theory and Applications
Abstract:
This work aims to characterize the levels and phase distribution of polycyclic aromatic hydrocarbons (PAHs) in the indoor air of a preschool environment and to assess the impact of outdoor PAH emissions on the indoor environment. Gaseous and particulate (PM1 and PM2.5) PAHs (the 16 USEPA priority pollutants, plus dibenzo[a,l]pyrene and benzo[j]fluoranthene) were concurrently sampled indoors and outdoors at one urban preschool located in the north of Portugal for 35 days. The total concentration of the 18 PAHs (ΣPAHs) in indoor air ranged from 19.5 to 82.0 ng/m3; gaseous compounds (range 14.1–66.1 ng/m3) accounted for 85% of ΣPAHs. Particulate PAHs (range 0.7–15.9 ng/m3) were predominantly associated with PM1 (76% of particulate ΣPAHs), with 5-ring PAHs being the most abundant. Mean indoor/outdoor (I/O) ratios of individual PAHs indicated that outdoor emissions contributed significantly to indoor PAH levels; emissions from motor vehicles and fuel burning were the major sources.
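The indoor/outdoor (I/O) ratio mentioned above is simply the indoor concentration of a compound divided by its outdoor concentration, with values below 1 pointing to an outdoor origin and values above 1 to indoor sources. The sketch below computes it for a few PAHs; the concentrations are placeholders, not the study's measurements.

```python
# Minimal sketch of the indoor/outdoor (I/O) ratio used to judge whether indoor
# PAH levels are driven by outdoor emissions. Concentrations are placeholders (ng/m3).

indoor = {"naphthalene": 12.0, "phenanthrene": 4.5, "benzo[a]pyrene": 0.15}
outdoor = {"naphthalene": 15.0, "phenanthrene": 6.0, "benzo[a]pyrene": 0.25}

io_ratios = {pah: indoor[pah] / outdoor[pah] for pah in indoor}
for pah, ratio in io_ratios.items():
    origin = "mainly outdoor origin" if ratio < 1 else "possible indoor source"
    print(f"{pah}: I/O = {ratio:.2f} ({origin})")
```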
Abstract:
This study aims to compare two methods of assessing the postural phase of gait initiation in terms of intrasession reliability, in healthy and post-stroke subjects. As a secondary aim, it analyses anticipatory postural adjustments during gait initiation based on centre of pressure (CoP) displacements in post-stroke participants. The CoP signal was acquired during gait initiation in fifteen post-stroke subjects and twenty-three healthy controls. The postural phase was identified with a baseline-based method and a maximal-displacement-based method. In both healthy and post-stroke participants, higher intra-class correlation coefficients and lower coefficients of variation were obtained with the baseline-based method than with the maximal-displacement-based method. Post-stroke participants presented decreased CoP displacement backward and toward the first swing limb compared with controls when the baseline-based method was used. With the maximal-displacement-based method, there were differences between groups only in backward CoP displacement. Postural phase duration in the medial-lateral direction was also increased in post-stroke participants when the maximal-displacement-based method was used. These findings indicate that the baseline-based method is more reliable for detecting the onset of gait initiation in both groups, while the maximal-displacement-based method presents greater sensitivity for post-stroke participants.
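As an illustration of what a baseline-based onset detector can look like, the sketch below flags the first sample at which the CoP signal deviates from its quiet-standing baseline by more than k standard deviations. The baseline window and threshold are illustrative choices, not the study's exact parameters.

```python
import numpy as np

# Minimal sketch of a baseline-based onset detector for the postural phase of gait
# initiation: onset = first sample deviating from the quiet-standing baseline by
# more than k standard deviations. Window length and k are illustrative.

def detect_onset(cop: np.ndarray, fs: float, baseline_s: float = 1.0, k: float = 3.0) -> int:
    """Return the index of the first sample exceeding baseline mean +/- k*SD, or -1."""
    n_base = int(baseline_s * fs)
    mean, sd = cop[:n_base].mean(), cop[:n_base].std()
    above = np.abs(cop - mean) > k * sd
    above[:n_base] = False                     # ignore the baseline window itself
    idx = np.flatnonzero(above)
    return int(idx[0]) if idx.size else -1
```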
Abstract:
Introduction: Lesions of the ipsilateral systems related to postural control on the ipsilesional side may explain the lower walking performance of stroke subjects. Purpose: To analyse bilateral ankle antagonist coactivation during double support in stroke subjects. Methods: Sixteen subjects (8 females, 8 males) with a first ischaemic stroke and twenty-two controls (12 females, 10 males) participated in this study. The double-support phase was identified from ground reaction forces, and electromyography of the ankle muscles was recorded in both limbs. Results: The ipsilesional limb presented statistically significant differences from controls when assuming specific roles during double support, with the tibialis anterior and soleus pair being the one in which this atypical behavior was most pronounced. Conclusion: The ipsilesional limb presents dysfunctional behavior when higher postural control activity is demanded.
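One common way to quantify antagonist coactivation from EMG is to take twice the overlapping area of the two rectified envelopes divided by their total area; the abstract does not state which index this study used, so the sketch below is purely illustrative.

```python
import numpy as np

# Minimal sketch of a common antagonist coactivation index over the double-support
# phase: 100 * 2 * (common area of the two EMG envelopes) / (sum of their areas).
# Illustrative only; the study's actual index is not specified in the abstract.

def coactivation_index(emg_ta: np.ndarray, emg_sol: np.ndarray) -> float:
    """Inputs are rectified, normalised EMG envelopes of tibialis anterior and soleus."""
    common = np.minimum(emg_ta, emg_sol).sum()
    total = emg_ta.sum() + emg_sol.sum()
    return 100.0 * 2.0 * common / total   # percentage of coactivation
```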
Abstract:
Antigenic preparations from Sporothrix schenckii usually involve material from mixed cultures of yeast and mycelia, which presents cross-reactions with other deep mycoses. We standardized a pure yeast phase with high cell viability, suitable for obtaining specific excretion-secretion products without somatic contamination. These excretion-secretion products were highly immunogenic and did not produce noticeable cross-reactions in either double immunodiffusion or Western blot. The antigenic preparation consists mainly of proteins with molecular weights between 40 and 70 kDa, some of them with proteolytic activity under mildly acidic conditions. We also observed cathepsin-like activity at two days of culture and chymotrypsin-like activity at four days of culture, consistent with the change in concentration of the different secreted proteins. The proteases were able to cleave different subclasses of human IgG, suggesting a sequential production of antigens and molecules that could interact and interfere with the host immune response.
Abstract:
In a few rare diseases, specialised studies of cerebrospinal fluid (CSF) are required to identify the underlying metabolic disorder. We aimed to explore the possibility of detecting key synaptic proteins in the CSF, in particular dopaminergic and GABAergic ones, as new procedures that could be useful for both pathophysiological and diagnostic purposes in the investigation of inherited disorders of neurotransmission. Dopamine receptor type 2 (D2R), dopamine transporter (DAT) and vesicular monoamine transporter type 2 (VMAT2) were analysed in CSF samples from 30 healthy controls (11 days to 17 years) by western blot analysis. Because VMAT2 was the only protein with intracellular localisation, and in order to compare results, the vesicular GABA transporter, another intracellular protein, was also studied. Spearman's correlation and Student's t tests were applied to compare optical density signals between the different proteins. All these synaptic proteins could be easily detected and quantified in the CSF. DAT, D2R and GABA VT expression decreases with age, particularly in the first months of life, reflecting the expected intense synaptic activity and formation of neuronal circuitry. A statistically significant relationship was found between D2R and DAT expression, reinforcing previous evidence of DAT regulation by D2R. To our knowledge, there are no previous studies on human CSF reporting a reliable analysis of these proteins. Studies of this kind could help elucidate new causes of disturbed dopaminergic and GABAergic transmission, as well as explain different responses to L-dopa in inherited disorders affecting dopamine metabolism. Moreover, this approach to synaptic activity in vivo can be extended to different groups of proteins and diseases.
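For illustration, the sketch below applies the two statistics named above (Spearman's correlation and Student's t test) to western-blot optical densities using scipy; the arrays and the age split are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr, ttest_ind

# Minimal sketch of the statistics named in the abstract, applied to western-blot
# optical densities. The values and the age grouping are placeholders.

d2r = np.array([1.8, 1.5, 1.3, 1.1, 0.9, 0.8])   # optical density, arbitrary units
dat = np.array([2.0, 1.7, 1.4, 1.2, 1.0, 0.7])

rho, p_rho = spearmanr(d2r, dat)
print(f"Spearman rho(D2R, DAT) = {rho:.2f} (p = {p_rho:.3f})")

infants, older = d2r[:3], d2r[3:]                 # hypothetical age split
t_stat, p_t = ttest_ind(infants, older)
print(f"t test, infants vs older children: t = {t_stat:.2f}, p = {p_t:.3f}")
```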
Abstract:
The purpose of our study was to evaluate the accuracy of early-phase dynamic incremental bolus-enhanced conventional CT (DICT) with intravenous contrast administration in the diagnosis of malignancy of focal liver lesions. A total of 122 lesions were selected in 74 patients according to the following criteria: lesion diameter of 10 mm or more, fewer than six lesions per study (except in multiple angiomatosis), and the existence of a valid criterion for definitive diagnosis. Lesions were categorized into seven levels of diagnostic confidence of malignancy and compared with the definitive diagnosis to build a receiver operating characteristic (ROC) curve analysis and to determine the sensitivity and specificity of the technique. Forty-six and 70 lesions were correctly diagnosed as malignant and benign, respectively; there were 2 false-positive and 4 false-negative diagnoses of malignancy, and the sensitivity and specificity obtained were 92% and 97%. Early-phase DICT was confirmed as a highly accurate method for the characterization and diagnosis of malignancy of focal liver lesions, requiring optimal technical performance and judicious analysis of the available semiological data.
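The reported sensitivity and specificity follow directly from the stated counts; the sketch below reproduces them from the 46 true positives, 70 true negatives, 2 false positives, and 4 false negatives given above.

```python
# Minimal sketch reproducing the accuracy figures reported in the abstract from the
# stated counts for the diagnosis of malignancy.

tp, tn, fp, fn = 46, 70, 2, 4

sensitivity = tp / (tp + fn)   # 46 / 50  = 0.92
specificity = tn / (tn + fp)   # 70 / 72 ~= 0.97

print(f"Sensitivity: {sensitivity:.0%}")   # 92%
print(f"Specificity: {specificity:.0%}")   # 97%
```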