68 results for Complementary metal–oxide–semiconductor (CMOS)
Abstract:
Electricity markets are complex environments with very particular characteristics. A critical issue regarding these characteristics concerns the constant changes they are subject to. This is a result of the restructuring of electricity markets, which was undertaken to increase competitiveness but also greatly increased the complexity and unpredictability of those markets. The constant growth in market unpredictability amplified the need for market participants to foresee market behaviour. The need to understand market mechanisms, and how the interactions of the involved players affect market outcomes, has contributed to the growing use of simulation tools. Multi-agent based software is particularly well suited to analysing dynamic and adaptive systems with complex interactions among their constituents, such as electricity markets. This dissertation presents ALBidS – Adaptive Learning strategic Bidding System, a multi-agent system created to provide decision support to market negotiating players. The system is integrated with the MASCEM electricity market simulator, so that its value in supporting a market player can be tested in cases based on real market data. ALBidS considers several methodologies based on very distinct approaches, providing alternative suggestions for the best actions the supported player should perform. The approach chosen as the player's actual action is selected by reinforcement learning algorithms, which decide, for each situation, set of simulation circumstances and context, which proposed action has the highest chance of success. Some of the considered approaches are supported by a mechanism that builds profiles of competitor players. These profiles are built according to the competitors' observed past actions and reactions when faced with specific situations, such as success and failure. The system's context awareness and its analysis of simulation circumstances, both in terms of results performance and execution-time adaptation, are complementary mechanisms that endow ALBidS with further adaptation and learning capabilities.
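The abstract does not specify which reinforcement learning algorithms ALBidS employs; purely as an illustration of the selection mechanism it describes, a minimal epsilon-greedy sketch of choosing among competing strategy suggestions might look like the following (all names are hypothetical):

```python
import random

class StrategySelector:
    """Epsilon-greedy selection among competing bidding strategies.

    A minimal sketch: each strategy's estimated value is updated from the
    reward observed after its suggestion is executed in the market.
    """

    def __init__(self, strategies, epsilon=0.1, alpha=0.2):
        self.strategies = list(strategies)    # hypothetical strategy labels
        self.values = {s: 0.0 for s in self.strategies}
        self.epsilon = epsilon                # exploration rate
        self.alpha = alpha                    # learning rate

    def choose(self):
        # Explore occasionally; otherwise exploit the best-valued strategy.
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies, key=self.values.get)

    def update(self, strategy, reward):
        # Incremental value update from the observed market outcome.
        self.values[strategy] += self.alpha * (reward - self.values[strategy])

selector = StrategySelector(["neural_net", "game_theory", "statistical"])
action = selector.choose()             # strategy whose suggested bid is used
selector.update(action, reward=1.0)    # reward derived from market results
```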
Abstract:
This Master's dissertation aims, in a first phase, to identify the general conditions and assumptions for applying the Value Analysis (VA) tool and to integrate it into Quality Management Systems. The goal is to demonstrate the technique and broaden knowledge of it, as well as of the several approaches to the process and the advantages and constraints of its use, leading to the idea that it may be useful to carry out an organised and systematic analysis of the products/services existing in organisations, opening up the possibility of new product/service solutions that are easier to produce/deliver and test at lower cost. The importance of the Value Analysis concept is highlighted, showing that it can become an effective tool for improving not only products but also manufacturing processes and even administrative processes. With Quality understood as the set of characteristics of a good, product or service that make it fit to fully satisfy a given need of its user, this work also establishes the link to Quality Management Systems by comparing two reference frameworks, the NP EN 12973 standard and ISO 9001:2008. In a second phase, an in-depth treatment of the QFD (Quality Function Deployment) tool is carried out, as a technique complementary to the practical application of VA, and a study is performed on an after-sales service that embodies many of its concepts and principles. The work was carried out at the company where I have been an employee for about 10 years, holding the position of "Service Manager Press/Post Press" in the technical service and customer support department. The practical demonstration was very useful for understanding the difficulties experienced and the obstacles to be overcome. The work ends with the conclusions of the practical case and the general conclusions, setting out the factors that accelerate or hinder the application of VA.
Abstract:
The immobilization of biological molecules is one of the most important steps in the construction of a biosensor. In the case of DNA, the way it exposes its bases can yield electrochemical signals at acceptable levels. The use of a self-assembled monolayer (SAM), which provides a thiol-gold bond and the binding of DNA to an aldehyde ligand, made it possible to detect DNA hybridization. Single-stranded DNA (ssDNA) from calf thymus was immobilized on an alkanethiol film pre-formed by incubation in a solution of 2-aminoethanethiol (Cys) followed by glutaraldehyde (Glu). Cyclic voltammetry (CV) was used to characterize the self-assembled monolayer on the gold electrode and also to study the immobilization of the ssDNA probe and its hybridization with the complementary sequence (target ssDNA). The ssDNA probe presents a well-defined oxidation peak at +0.158 V. When hybridization occurs this peak disappears, which confirms the efficacy of the annealing and the formation of the DNA double helix without the presence of electroactive indicators. The use of the SAM resulted in stable immobilization of the ssDNA probe, enabling label-free hybridization detection. This study represents a promising approach to molecular biosensors with sensitive and reproducible results.
Abstract:
Master's degree in Electrical Engineering – Electrical Power Systems
Abstract:
23rd SPACE AGM and Conference, 9–12 May 2012. Conference theme: The Role of Professional Higher Education: Responsibility and Reflection. Venue: Mikkeli University of Applied Sciences, Mikkeli, Finland.
Abstract:
The indiscriminate use of antibiotics in food-producing animals has received increasing attention as a contributory factor in the international emergence of antibiotic-resistant bacteria (Woodward in Pesticide, veterinary and other residues in food, CRC Press, Boca Raton, 2004). Numerous analytical methods for quantifying antibacterial residues in edible animal products have been developed over the years (Woodward in Pesticide, veterinary and other residues in food, CRC Press, Boca Raton, 2004; Botsoglou and Fletouris in Handbook of food analysis, residues and other food component analysis, Marcel Dekker, Ghent, 2004). As amoxicillin (AMOX) is one of these critical veterinary drugs, efforts have been made to develop simple and expeditious methods for its control in food samples. In the literature, only one AMOX-selective electrode has been reported so far; in that work, a phosphotungstate:amoxycillinium ion exchanger was used as the electroactive material (Shoukry et al. in Electroanalysis 6:914–917, 1994). Designing new materials based on molecularly imprinted polymers (MIPs), complementary to the size and charge of AMOX, could lead to very selective interactions, thus enhancing the selectivity of the sensing unit. The AMOX-selective electrodes described here used imprinted polymers as electroactive materials, with AMOX as the target molecule used to shape a biomimetic imprinted cavity. Poly(vinyl chloride) sensors based on methacrylic acid displayed Nernstian slopes (60.7 mV/decade) and low detection limits (2.9×10−5 mol/L). The potentiometric responses were not affected by pH within the range 4–5 and showed good selectivity. The electrodes were applied successfully to the analysis of real samples.
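For context, the reported slope can be checked against the theoretical Nernstian slope 2.303RT/(zF), about 59.2 mV per decade for a monovalent ion at 25 °C; a minimal sketch of the arithmetic (assuming z = 1):

```python
import math

R = 8.314     # gas constant, J/(mol K)
F = 96485.0   # Faraday constant, C/mol
T = 298.15    # 25 degrees Celsius, in kelvin
z = 1         # assumed charge of the sensed ion

# Theoretical Nernstian slope, in mV per decade of activity.
slope_mV = 1000 * math.log(10) * R * T / (z * F)
print(f"theoretical slope: {slope_mV:.1f} mV/decade")  # ~59.2 mV/decade
# The reported experimental slope (60.7 mV/decade) is close to this value,
# i.e. the sensors behave in a near-Nernstian way.
```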
Abstract:
Catastrophic events, such as wars and terrorist attacks, tornadoes and hurricanes, earthquakes, tsunamis, floods and landslides, are always accompanied by a large number of casualties. The size distributions of these casualties have separately been shown to follow approximate power law (PL) distributions. In this paper, we analyze the statistical distributions of the number of victims of catastrophic phenomena, in particular terrorism, and find double PL behavior: the data sets are better approximated by two PLs than by a single one. We plot the PL parameters corresponding to several events and observe an interesting pattern in the charts, where the lines that connect each pair of points defining the double PLs are almost parallel to each other. A complementary data analysis is performed by computing the entropy. The results reveal relationships hidden in the data that may trigger a future comprehensive explanation of this type of phenomenon.
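The paper's exact fitting procedure is not reproduced here; as a hedged sketch of the double-PL idea, the two exponents can be estimated by log-log regression on the empirical complementary cumulative distribution, split at a breakpoint (the data and breakpoint below are synthetic placeholders):

```python
import numpy as np

def double_pl_exponents(samples, breakpoint):
    """Fit a power law on each side of a breakpoint via log-log regression.

    Uses the empirical complementary cumulative distribution (CCDF): a
    power-law tail appears as a straight line in log-log coordinates.
    """
    x = np.sort(np.asarray(samples, dtype=float))
    ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
    x, ccdf = x[ccdf > 0], ccdf[ccdf > 0]        # drop the final zero point
    exponents = []
    for mask in (x <= breakpoint, x > breakpoint):
        slope, _ = np.polyfit(np.log(x[mask]), np.log(ccdf[mask]), 1)
        exponents.append(-slope)                 # PL exponent on this segment
    return tuple(exponents)

# Synthetic heavy-tailed data (Pareto), purely for illustration.
rng = np.random.default_rng(0)
data = rng.pareto(1.5, 10_000) + 1.0
print(double_pl_exponents(data, breakpoint=10.0))
```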
Abstract:
Soil vapor extraction (SVE) and bioremediation (BR) are two of the most common soil remediation technologies. Their application is widespread; however, both present limitations, namely the efficiency of SVE on organic soils and the remediation times of some BR processes. This work studied the combination of the two technologies to verify whether the legal clean-up goals can be achieved in remediation projects involving seven different simulated soils, separately contaminated with toluene and xylene. The remediations consisted of applying SVE followed by biostimulation. The results show that combining the two technologies is effective and achieves the clean-up goals imposed by Spanish legislation. Under the experimental conditions used in this work, SVE alone is sufficient for the remediation of soils, contaminated separately with toluene and xylene, with organic matter contents (OMC) below 4 %. In soils with higher OMC, using BR as a complementary technology, once the contaminant concentration in the soil gas phase reaches values near 1 mg/L, allows the clean-up goals to be achieved. The OMC was a key parameter: it hindered SVE due to adsorption phenomena but enhanced the BR process because it acted as a source of microorganisms and nutrients.
Abstract:
Cu2ZnSnS4 (CZTS) is a p-type semiconductor that has been seen as a possible low-cost replacement for Cu(In,Ga)Se2 in thin-film solar cells. So far, this compound has presented difficulties in its growth, mainly because of the formation of secondary phases such as ZnS, CuxSnSx+1, SnxSy, Cu2−xS and MoS2. X-ray diffraction (XRD) analysis, which is mostly used for phase identification, cannot resolve some of these phases from kesterite/stannite CZTS, so a complementary technique is needed. Raman scattering analysis can help distinguish these phases not only laterally but also in depth. Knowing the absorption coefficient and using different excitation wavelengths in Raman scattering analysis, one can profile the different phases present in multi-phase CZTS thin films. This work describes concisely the methods used to grow chalcogenide compounds such as CZTS, CuxSnSx+1, SnxSy and cubic ZnS, based on the sulphurization of stacked metallic precursors. The results of the films' characterization by XRD, electron backscatter diffraction and scanning electron microscopy/energy dispersive spectroscopy are presented for the CZTS phase. The limitations of XRD in identifying some of the phases that can remain after the sulphurization process are investigated. The results of the Raman analysis of the phases formed in this growth method, and the advantage of this technique in identifying them, are presented. Using different excitation wavelengths, the CZTS film is also analysed in depth, showing that Raman scattering can be used as a non-destructive method to detect secondary phases.
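As a back-of-the-envelope illustration of the depth-profiling argument: the Raman probing depth is commonly approximated as 1/(2α), where α is the absorption coefficient at the excitation wavelength, so different wavelengths sample different depths. A sketch with purely illustrative α values (not measured CZTS data):

```python
# Approximate Raman probing depth d ~ 1/(2*alpha): the light must travel
# into the film and the scattered photons back out, hence the factor of 2.
# The absorption coefficients below are illustrative placeholders, not
# measured CZTS values.
alphas_per_cm = {
    "488 nm": 1.0e5,   # hypothetical alpha at a blue/green excitation
    "785 nm": 2.0e4,   # hypothetical alpha at a near-infrared excitation
}

for wavelength, alpha in alphas_per_cm.items():
    depth_nm = 1.0 / (2.0 * alpha) * 1e7   # convert cm to nm
    print(f"{wavelength}: probing depth ~ {depth_nm:.0f} nm")
```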
Abstract:
The Cooperativa Agrícola de Vila do Conde runs a business of manufacturing and selling complementary feed mixtures for cattle, mainly for dairy cows. For some years now, the Cooperative has known that it will have to relocate its existing plant due to requirements of the Direção Geral de Alimentação e Veterinária related to environmental matters. The need for a new investment, to guarantee the sustainability of the most profitable business managed by this Cooperative, led to the idea of building a new, larger plant, capable of serving other cooperatives, aiming at the desired agreement among cooperatives around a common objective and at achieving economies of scale, which are extremely important for the survival of the dairy sector in the Entre Douro e Minho region. To this end, a new private limited company will be incorporated, named AGRIVIL XXI, Lda., with exclusively cooperative capital, making it possible, at any moment, to assess the economic and financial situation of the business in a more rigorous and autonomous way. This led to the preparation of the present Business Plan, which is expected to be valuable in defining the objectives and targets to be achieved in the near future by the Cooperativa Agrícola de Vila do Conde. The feasibility and risk analyses of the project showed that the conditions for its acceptance are met, with an expected NPV of 1,371,764 euros, an IRR of 12.04% and a payback period close to 11 years. Nevertheless, there is a noticeable risk inherent in the investment, in that the amount of the cash flows generated tends to approach the amount invested, not generating a significant wealth surplus.
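For readers less familiar with the reported indicators, NPV, IRR and the payback period are all derived from the project's cash-flow series; a minimal sketch of the computations, using hypothetical cash flows rather than the project's actual figures:

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows, starting at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # NPV still positive: the IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """First year in which cumulative (undiscounted) cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical cash flows: an initial investment then 15 equal annual inflows.
flows = [-1_000_000] + [120_000] * 15
print(npv(0.05, flows), irr(flows), payback_period(flows))
```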
Abstract:
With progressing CMOS technology miniaturization, leakage power consumption is starting to dominate dynamic power consumption. Recent technology trends have equipped modern embedded processors with several sleep states and have reduced the overhead (energy/time) of sleep transitions. The potential of dynamic voltage and frequency scaling (DVFS) to save energy is diminishing due to these efficient (low-overhead) sleep states and the increased static (leakage) power consumption. The state-of-the-art research on static power reduction at the system level is based on assumptions that cannot easily be integrated into practical systems. We propose a novel enhanced race-to-halt approach (ERTH) to reduce overall system energy consumption. Exhaustive simulations demonstrate the effectiveness of our approach, showing an improvement of up to 8 % over an existing work.
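The abstract does not detail ERTH itself; as an illustration of the general race-to-halt argument it builds on, the sketch below compares running a job at full speed and then sleeping against stretching it with DVFS, under an assumed simple power model (all constants are illustrative):

```python
# Simple, illustrative power model: dynamic power scales with f^3, static
# (leakage) power is constant while the core is active, and deep sleep
# consumes very little. All constants are assumptions, not measurements.
P_STATIC = 0.5   # W, leakage while active
P_SLEEP = 0.01   # W, deep sleep power
K_DYN = 1.0      # W of dynamic power at normalized max frequency f = 1.0

def energy_race_to_halt(utilization, period=1.0):
    """Run at max frequency, then sleep for the rest of the period."""
    busy = utilization * period
    return (K_DYN + P_STATIC) * busy + P_SLEEP * (period - busy)

def energy_dvfs(utilization, period=1.0):
    """Stretch the job over the whole period at the minimum frequency."""
    f = utilization                   # just enough speed to finish in time
    return (K_DYN * f ** 3 + P_STATIC) * period

for u in (0.2, 0.5, 0.8):
    print(u, energy_race_to_halt(u), energy_dvfs(u))
# With leakage this large, race-to-halt wins at low utilization, where
# sleeping avoids paying static power; DVFS can still win at high load.
```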
Abstract:
There is no single definition of a long-memory process. Such a process is generally defined as a series whose correlogram decays slowly, or whose spectrum is infinite at frequency zero. It is also said that a series with this property is characterized by long-range dependence and non-periodic long cycles, or that this characteristic describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of a power-law decline of the autocovariance function. The growing international research interest in this topic is justified by the search for a better understanding of the dynamic nature of the time series of financial asset prices. First, the lack of consistency among results calls for new studies and the use of several complementary methodologies. Second, the confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and valuation models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain concerning the identification of the general theoretical market model most suitable for modelling the diffusion of the series. Fourth, regulators and risk managers want to know whether there are persistent, and therefore inefficient, markets, which could thus produce abnormal returns. The aim of the dissertation's research is twofold. On the one hand, it intends to provide additional knowledge for the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it intends to contribute to the improvement of the capital asset pricing model (CAPM), by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes lack independent and identically distributed (i.i.d.) increments. The empirical study indicates the possibility of alternatively using treasury bonds (OT's) with long-term maturity in the computation of market returns, given that their behaviour in sovereign debt markets reflects investors' confidence in the financial conditions of the States and measures how investors assess the respective economies based on the performance of their assets in general. Although the price diffusion model defined by geometric Brownian motion (gBm) claims to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long-memory property in the markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are used, under the fractional Brownian motion (fBm) approach, to estimate the Hurst exponent H for the complete data series and to compute the "local" Hurst exponent H_t in moving windows. In addition, statistical hypothesis tests are performed by means of the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing test (GPH).
In terms of a single conclusion from all methods about the nature of dependence for the stock market in general, the empirical results are inconclusive. That is, the degree of long memory, and thus any classification, depends on each particular market. However, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are more subject to greater predictability (the "Joseph effect"), but also to trends that may be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore they tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies, it refutes the random walk hypothesis with i.i.d. increments, which is the basis of the EMH in its weak form. Accordingly, contributions to the improvement of the CAPM are proposed, through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The new proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long-term lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process described by gBm (in which H = 0.5, confirming the EMH and making the CAPM adequate) and when they follow a process with statistical dependence described by fBm (in which H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and FSML is that the long-memory measure, defined by H, is the appropriate reference for expressing risk in models that can be applied to data series following i.i.d. processes as well as processes with nonlinear dependence. These formulations thus encompass the EMH as a possible particular case.
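A minimal sketch of the rescaled-range (R/S) estimation of the Hurst exponent H used in the dissertation (a generic textbook version, not the author's exact implementation):

```python
import numpy as np

def hurst_rs(series, window_sizes):
    """Estimate the Hurst exponent H by rescaled-range (R/S) analysis.

    For each window size n the series is split into blocks; in each block
    the range of the cumulative mean-adjusted sums is divided by the block
    standard deviation. Since E[R/S] ~ c * n**H, H is the slope of
    log(R/S) against log(n).
    """
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            block = np.asarray(series[start:start + n], dtype=float)
            z = np.cumsum(block - block.mean())
            r = z.max() - z.min()                # range of cumulative sums
            s = block.std(ddof=1)                # block standard deviation
            if s > 0:
                rs_values.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_values)))
    slope, _ = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(1)
iid_returns = rng.standard_normal(4096)
# For i.i.d. noise H should be near 0.5 (classical R/S has a known
# small-sample upward bias); persistence gives H > 0.5.
print(hurst_rs(iid_returns, [16, 32, 64, 128, 256, 512]))
```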
Abstract:
In an increasingly competitive and globalized world, companies need effective training methodologies and tools for their employees. However, selecting the most suitable ones is not an easy task: it depends on the requirements of the target group (namely time restrictions), on the specificities of the contents, etc. This is typically the case for training in Lean, the waste-elimination manufacturing philosophy. This paper presents and compares two different approaches to lean training methodologies and tools: a simulation game based on a single realistic manufacturing platform, involving production and assembly operations, that allows learning by playing; and a digital game that helps trainees understand lean tools. This paper shows that both tools have advantages in terms of trainee motivation and knowledge acquisition. Furthermore, they can be used in a complementary way, reinforcing the acquired knowledge.
Abstract:
In the last twenty years, genetic algorithms (GAs) have been applied in a plethora of fields, such as control, system identification, robotics, planning and scheduling, image processing, and pattern and speech recognition (Bäck et al., 1997). In robotics, the problems of trajectory planning, collision avoidance and manipulator structure design considering a single criterion have been solved using several techniques (Alander, 2003). Most engineering applications require the optimization of several criteria simultaneously. Often the problems are complex, include discrete and continuous variables, and there is no prior knowledge about the search space. These kinds of problems are much more complex, since they consider multiple design criteria simultaneously within the optimization procedure. This is known as multi-criteria (or multi-objective) optimization, and it has been addressed successfully through GAs (Deb, 2001). The overall aim of multi-criteria evolutionary algorithms is to achieve a set of non-dominated optimal solutions known as the Pareto front. At the end of the optimization procedure, instead of a single optimal (or near-optimal) solution, the decision maker can select a solution from the Pareto front. Some of the key issues in multi-criteria GAs are: i) the number of objectives, ii) obtaining a Pareto front as wide as possible and iii) achieving a uniformly spread Pareto front. Indeed, multi-objective techniques using GAs have been increasing in relevance as a research area. In 1989, Goldberg suggested the use of a GA to solve multi-objective problems, and since then other researchers have developed new methods, such as the multi-objective genetic algorithm (MOGA) (Fonseca & Fleming, 1995), the non-dominated sorting genetic algorithm (NSGA) (Deb, 2001), and the niched Pareto genetic algorithm (NPGA) (Horn et al., 1994), among several other variants (Coello, 1998). In this work, the trajectory planning problem considers: i) robots with 2 and 3 degrees of freedom (dof), ii) the inclusion of obstacles in the workspace and iii) up to five criteria used to qualify the evolving trajectory, namely the joint traveling distance, joint velocity, end-effector/Cartesian distance, end-effector/Cartesian velocity and the energy involved. These criteria are used to minimize the joint and end-effector traveled distances, the trajectory ripple and the energy required by the manipulator to reach the destination point. Bearing these ideas in mind, the paper addresses the planning of robot trajectories, meaning the development of an algorithm to find a continuous motion that takes the manipulator from a given starting configuration to a desired end position without colliding with any obstacle in the workspace. The chapter is organized as follows. Section 2 describes trajectory planning and several approaches proposed in the literature. Section 3 formulates the problem, namely the representation adopted to solve the trajectory planning and the objectives considered in the optimization. Section 4 studies the algorithm's convergence. Section 5 studies a 2R manipulator (i.e., a robot with two rotational joints/links) when the trajectory optimization considers two and five objectives. Sections 6 and 7 show the results for the 3R redundant manipulator with five goals and for other complementary experiments, respectively. Finally, section 8 draws the main conclusions.
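To make the notion of a Pareto front concrete, the sketch below filters the non-dominated solutions from a set of candidate objective vectors, assuming all criteria are minimized; the candidate values are hypothetical:

```python
def dominates(a, b):
    """True if a is at least as good as b in every (minimized) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

# Hypothetical (traveled distance, energy) pairs for candidate trajectories.
candidates = [(2.0, 5.0), (3.0, 3.0), (4.0, 2.5), (3.5, 4.0), (5.0, 2.4)]
print(pareto_front(candidates))
# -> [(2.0, 5.0), (3.0, 3.0), (4.0, 2.5), (5.0, 2.4)]
```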
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features provided by OCD infrastructures constitute a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting the cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL, and the results were obtained by simulation within the same fault injection environment. The focus of this paper is the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.