13 results for OPERATING CHARACTERISTIC CURVES
at Instituto Politécnico do Porto, Portugal
Abstract:
The study of a transistor's characteristic curves provides a set of parameters essential to its use both in signal amplification and in switching circuits. It also yields data for operating conditions that are often not covered in the documentation supplied by manufacturers. The work presented here consists of the development of a system that obtains the characteristic curves of a transistor (bipolar junction, junction field-effect and metal-oxide-semiconductor field-effect) in a simple, efficient and inexpensive way, and that can also serve as a teaching tool for introducing semiconductor devices or for designing transistor amplifiers. The system comprises a signal-conditioning unit, a data-processing unit (hardware) and a computer program that graphically processes the acquired data, that is, plots the transistor's characteristic curves. Its operating principle relies on a digital-to-analog converter (DAC) used as a variable voltage source that drives the base (BJT) or the gate (JFET and MOSFET) of the device under test. A second converter provides the VCE or VDS voltage sweep needed to trace each curve. The process is controlled by a local processing unit, based on an 8051-family microcontroller, which reads the current and voltage values through analog-to-digital converters (ADC). Once processed, the data are sent over a USB link to a computer, where a program plots the output characteristic curves and determines other characteristic parameters of the semiconductor device under test. The use of off-the-shelf components and the constructive simplicity of the design make this system inexpensive, easy to use and flexible, since with small modifications it allows
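As a rough illustration of how the PC-side program might turn the swept DAC/ADC data into a family of output curves, the following Python sketch plots IC versus VCE for several base currents. The simple first-order BJT model stands in for real measurements; all values and the model itself are illustrative assumptions, not the authors' firmware or software.

```python
# Illustrative sketch: plotting a BJT output-characteristic family,
# as the PC-side program described in the abstract would do with real
# ADC readings. The simple model below stands in for measured data.
import numpy as np
import matplotlib.pyplot as plt

beta = 200                                        # assumed current gain
v_ce = np.linspace(0.0, 10.0, 200)                # VCE sweep (second DAC)
i_b_steps = np.array([10, 20, 30, 40, 50]) * 1e-6  # base-current steps

for i_b in i_b_steps:
    # Smooth saturation-to-active transition, purely for illustration.
    i_c = beta * i_b * np.tanh(v_ce / 0.3)
    plt.plot(v_ce, i_c * 1e3, label=f"IB = {i_b * 1e6:.0f} uA")

plt.xlabel("VCE (V)")
plt.ylabel("IC (mA)")
plt.title("Illustrative BJT output characteristic curves")
plt.legend()
plt.show()
```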
Abstract:
Objective: Public health organizations recommend that preschool-aged children accumulate at least 3 h of physical activity (PA) daily. Objective monitoring using pedometers offers an opportunity to measure preschoolers' PA and assess compliance with this recommendation. The purpose of this study was to derive step-based recommendations consistent with the 3-h PA recommendation for preschool-aged children. Method: The study sample comprised 916 preschool-aged children, aged 3 to 6 years (mean age = 5.0 ± 0.8 years). Children were recruited from kindergartens located in Portugal between 2009 and 2013. Children wore an ActiGraph GT1M accelerometer that measured PA intensity and steps per day simultaneously over a 7-day monitoring period. Receiver operating characteristic (ROC) curve analysis was used to identify the daily step count threshold associated with meeting the daily 3-h PA recommendation. Results: A significant correlation was observed between minutes of total PA and steps per day (r = 0.76, p < 0.001). The optimal step count for ≥ 3 h of total PA was 9099 steps per day (sensitivity = 90%, specificity = 66%), with area under the ROC curve = 0.86 (95% CI: 0.84 to 0.88). Conclusion: Preschool-aged children who accumulate fewer than 9000 steps per day may be considered insufficiently active.
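A minimal sketch of the ROC-based cutoff derivation described above, using synthetic data and scikit-learn. The study's 9099-step threshold comes from the real sample; here the cutoff is simply the point maximizing Youden's J (sensitivity + specificity − 1), which is one common criterion and an assumption on my part, not necessarily the criterion used by the authors.

```python
# Sketch: deriving a step-count cutoff for meeting the 3-h PA
# recommendation via ROC analysis (synthetic data for illustration).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
n = 916
steps = rng.normal(9500, 2500, n).clip(1000)              # steps/day
meets_3h = ((steps + rng.normal(0, 2000, n)) > 9000).astype(int)  # 1 = meets 3-h recommendation

fpr, tpr, thresholds = roc_curve(meets_3h, steps)
auc = roc_auc_score(meets_3h, steps)

j = tpr - fpr                       # Youden's J statistic
best = np.argmax(j)
print(f"AUC = {auc:.2f}")
print(f"Optimal cutoff ~ {thresholds[best]:.0f} steps/day "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```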
Abstract:
Power system organization has gone through huge changes in recent years. A significant increase in distributed generation (DG) and operation in the scope of liberalized markets are two relevant driving forces for these changes. More recently, the smart grid (SG) concept has gained increased importance and is being seen as a paradigm able to support power system requirements for the future. This paper proposes a computational architecture to support day-ahead Virtual Power Player (VPP) bid formation in the smart grid context. This architecture includes a forecasting module, a resource optimization and Locational Marginal Price (LMP) computation module, and a bid formation module. Due to the characteristics of the problems involved, the implementation of this architecture requires the use of Artificial Intelligence (AI) techniques. Artificial Neural Networks (ANN) are used for resource and load forecasting, and Evolutionary Particle Swarm Optimization (EPSO) is used for energy resource scheduling. The paper presents a case study that considers a 33-bus distribution network with 67 distributed generators, 32 loads and 9 storage units.
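As an illustration of the forecasting module, the sketch below trains a small neural network to predict next-day hourly load from recent history. The features, network size and data are assumptions for demonstration only, not the ANN configuration used in the paper.

```python
# Sketch: day-ahead load forecasting with a small ANN (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                       # 60 days of hourly data
load = 50 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Features: the 24 hourly loads of day d; target: the 24 loads of day d+1.
days = load.reshape(-1, 24)
X, y = days[:-1], days[1:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:-1], y[:-1])                        # hold out the last day
forecast = model.predict(X[-1:])                 # day-ahead hourly forecast
print(np.round(forecast, 1))
```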
Abstract:
Solvent extraction is considered a multi-criteria optimization problem, since several chemical species with similar extraction kinetic properties are frequently present in the aqueous phase and selective extraction is not practicable. This optimization, applied to mixer–settler units, considers the best parameters and operating conditions as well as the best structure or process flow-sheet. Global process optimization is performed for a specific flow-sheet, and Pareto curves for different flow-sheets are compared. The positive weight-sum approach linked to the sequential quadratic programming method is used to obtain the Pareto set. In all the structures investigated, recovery increases with hold-up, residence time and agitation speed, while purity shows the opposite behaviour. For the same treatment capacity, counter-current arrangements are shown to promote recovery without significant impairment of purity. Recycling the aqueous phase is shown to be irrelevant, but organic recycling with as many stages as economically feasible clearly improves the design criteria and reduces the most efficient organic flow-rate.
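The positive weight-sum/SQP approach can be sketched as follows: two conflicting criteria (recovery and purity, both to be maximized) are combined into a single weighted objective and solved repeatedly with an SQP solver for different positive weights, tracing a Pareto curve. The toy objective functions, variables and bounds below are assumptions, not the actual mixer–settler model.

```python
# Sketch: tracing a Pareto curve by the positive weight-sum method
# with sequential quadratic programming (SciPy's SLSQP solver).
import numpy as np
from scipy.optimize import minimize

def recovery(x):   # toy model: grows with residence time x[0] and hold-up x[1]
    return 1.0 - np.exp(-2.0 * x[0] * x[1])

def purity(x):     # toy model: decreases as recovery-favouring conditions grow
    return 1.0 / (1.0 + 3.0 * x[0] * x[1])

bounds = [(0.1, 2.0), (0.1, 1.0)]
pareto = []
for w in np.linspace(0.05, 0.95, 10):            # strictly positive weights
    obj = lambda x: -(w * recovery(x) + (1 - w) * purity(x))
    res = minimize(obj, x0=[1.0, 0.5], bounds=bounds, method="SLSQP")
    pareto.append((recovery(res.x), purity(res.x)))

for r, p in pareto:
    print(f"recovery = {r:.3f}, purity = {p:.3f}")
```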
Abstract:
A flow injection analysis (FIA) system with a chlormequat-selective electrode is proposed. Several electrodes with poly(vinyl chloride)-based membranes were constructed for this purpose. Comparative characterization suggested the use of a membrane with chlormequat tetraphenylborate and dibutylphthalate. On a single-line FIA set-up, operating at 1×10−2 mol L−1 ionic strength and pH 6.3, calibration curves presented slopes of 53.6±0.4 mV decade−1 between 5.0×10−6 and 1.0×10−3 mol L−1, with squared correlation coefficients >0.9953. The detection limit was 2.2×10−6 mol L−1 and the repeatability was ±0.68 mV (0.7%). A dual-channel FIA manifold was therefore constructed, enabling automatic attainment of the previous ionic strength and pH conditions and thus eliminating sample preparation steps. Slopes of 45.5±0.2 mV decade−1 over a concentration range of 8.0×10−6 to 1.0×10−3 mol L−1, with a repeatability of ±0.4 mV (0.69%), were obtained. Analyses of real samples were performed, and recoveries ranged from 96.6 to 101.1%.
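The reported calibration slope (about 53.6 mV per decade) corresponds to fitting the measured potential against the logarithm of concentration. A minimal sketch of that fit is shown below; the readings are invented for illustration, not data from the paper.

```python
# Sketch: fitting an ion-selective-electrode calibration curve
# (potential vs. log10 of concentration) to obtain the slope in mV/decade.
import numpy as np

conc = np.array([5.0e-6, 1.0e-5, 1.0e-4, 1.0e-3])   # mol L-1 (illustrative)
emf = np.array([120.0, 136.5, 190.1, 243.8])         # mV (illustrative)

slope, intercept = np.polyfit(np.log10(conc), emf, 1)
r2 = np.corrcoef(np.log10(conc), emf)[0, 1] ** 2
print(f"slope = {slope:.1f} mV/decade, intercept = {intercept:.1f} mV, r^2 = {r2:.4f}")
```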
Abstract:
The electrochemical behaviour of the pesticide metam (MT) at a glassy carbon working electrode (GCE) and at a hanging mercury drop electrode (HMDE) was investigated. Different voltammetric techniques, including cyclic voltammetry (CV) and square wave voltammetry (SWV), were used. An anodic peak (independent of pH) at +1.46 V vs AgCl/Ag was observed in MT aqueous solution using the GCE. SWV calibration curves were plotted under optimized conditions (pH 2.5 and frequency 50 Hz), which showed a linear response for 17–29 mg L−1. Electrochemical reduction was also explored using the HMDE. A well-defined cathodic peak, dependent on pH, was recorded at −0.72 V vs AgCl/Ag. After optimizing the operating conditions (pH 10.1, frequency 150 Hz, deposition potential −0.20 V for 10 s), calibration curves were measured in the concentration range 2.5×10−1 to 1.0 mg L−1 using SWV. The electrochemical behaviour of this compound facilitated the development of a flow injection analysis (FIA) system with amperometric detection for the quantification of MT in commercial formulations and spiked water samples. An assessment of the optimal FIA conditions indicated that the best analytical results were obtained at a potential of +1.30 V, an injection volume of 207 μL and an overall flow rate of 2.4 mL min−1. Real samples were analysed via calibration curves over the concentration range 1.3×10−2 to 1.3 mg L−1. Recoveries from the real samples (spiked waters and commercial formulations) were between 97.4 and 105.5%. The precision of the proposed method was evaluated by assessing the relative standard deviation (RSD %) of ten consecutive determinations of one sample (1.0 mg L−1), and the value obtained was 1.5%.
Abstract:
Embedded systems are increasingly complex and dynamic, imposing progressively higher development time and costs. Tuning a particular system for deployment is thus becoming more demanding, even more so for systems which have to adapt themselves to evolving requirements and changing service requests. In this perspective, run-time monitoring of system behaviour becomes an important requirement, allowing the actual scheduling progress and resource utilization to be captured dynamically. For this to succeed, operating systems need to expose their internal behaviour and state, making them available to external applications, and a run-time monitoring mechanism must be available. However, such a mechanism can impose a burden on the system itself if not used wisely. In this paper we explore this problem and propose a framework intended to provide this run-time mechanism while achieving code separation, run-time efficiency and flexibility for the final developer.
Abstract:
Our day-to-day life depends on several embedded devices, and in the near future many more objects will have computation and communication capabilities, enabling an Internet of Things. Correspondingly, with the increasing interaction of these devices around us, developing novel applications is set to become challenging with current software infrastructures. In this paper, we argue that a new paradigm for operating systems needs to be conceptualized to provide a conducive base for application development on cyber-physical systems. We demonstrate its need and importance using a few use-case scenarios and provide the design principles behind, and an architecture of, a co-operating system or CoS that can serve as an example of this new paradigm.
Abstract:
IEEE 802.15.4 is the most widely used protocol for Wireless Sensor Networks (WSNs) and serves as a baseline for several higher-layer protocols such as ZigBee, 6LoWPAN or WirelessHART. Its MAC (Medium Access Control) supports both contention-free (CFP, based on the reservation of guaranteed time-slots, GTS) and contention-based (CAP, ruled by CSMA/CA) access when operating in beacon-enabled mode. Thus, it enables differentiation between real-time and best-effort traffic. However, some WSN applications and higher-layer protocols may strongly benefit from support for more traffic classes. This happens, for instance, in dense WSNs used in time-sensitive industrial applications. In this context, we propose to differentiate traffic classes within the CAP, enabling lower transmission delays and a higher success probability for time-critical messages, such as those for event detection, GTS reservation and network management. Building upon a previously proposed methodology (TRADIF), in this paper we outline its implementation and experimental validation over a real-time operating system. Importantly, TRADIF is fully backward compatible with the IEEE 802.15.4 standard, enabling different traffic classes to be created just by tuning some MAC parameters.
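Traffic differentiation inside the CAP amounts to giving time-critical frames more aggressive CSMA/CA parameters than best-effort frames. The sketch below shows, with illustrative values that are not the actual TRADIF settings, how two classes of the slotted CSMA/CA parameters (macMinBE, macMaxBE, macMaxCSMABackoffs) translate into different average initial backoff delays at 2.4 GHz.

```python
# Sketch: two CSMA/CA parameter sets for traffic differentiation in the
# IEEE 802.15.4 CAP. Values are illustrative, not the TRADIF settings.
from dataclasses import dataclass

UNIT_BACKOFF_US = 20 * 16   # aUnitBackoffPeriod = 20 symbols, 16 us/symbol at 2.4 GHz

@dataclass
class CsmaClass:
    name: str
    mac_min_be: int              # initial backoff exponent
    mac_max_be: int              # maximum backoff exponent
    mac_max_csma_backoffs: int   # attempts before declaring channel-access failure

    def mean_first_backoff_us(self) -> float:
        # The first backoff is uniform in [0, 2^macMinBE - 1] backoff periods.
        return (2 ** self.mac_min_be - 1) / 2 * UNIT_BACKOFF_US

classes = [
    CsmaClass("time-critical", mac_min_be=1, mac_max_be=3, mac_max_csma_backoffs=2),
    CsmaClass("best-effort",   mac_min_be=3, mac_max_be=5, mac_max_csma_backoffs=4),
]

for c in classes:
    print(f"{c.name}: mean initial backoff = {c.mean_first_backoff_us():.0f} us")
```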
Abstract:
The increase in electricity demand in Brazil, the lack of new major hydroelectric reservoirs, and growing environmental concerns lead utilities to seek improved system planning to meet these energy needs. The country's great diversity of economic, social, climatic, and cultural conditions makes power system planning more difficult. The work presented in this paper concerns the development of an algorithm that aims to study the influence of the issues mentioned above on load curves. The focus is on residential consumers. The consumption device with the highest influence on the load curve is also identified. The methodology developed gains increasing importance in system planning and operation, namely in the smart grid context.
Abstract:
Salmonella is a microorganism responsible for a large share of foodborne diseases and can compromise public health in the contaminated area. Rapid, efficient and highly sensitive detection is therefore extremely important, being a field in rapid development and the target of multiple studies in the current scientific community. A potentiometric method was developed for the detection of Salmonella, with ion-selective electrodes (ISEs) built in the laboratory from micropipette tips, silver wires and sensors with optimized composition. The indicator electrode chosen was a cadmium-selective ISE, in order to reduce the probability of interferences, given the low abundance of cadmium in food samples. Sodium-selective electrodes and single- and double-junction Ag/AgCl electrodes were also built and characterized for use as reference electrodes. Additionally, the operating conditions for the potentiometric analysis were optimized, namely the reference electrode used, electrode conditioning, pH effect and sample solution volume. The ability of polymeric-membrane ISEs to perform readings in very small volumes, with detection limits in the micromolar range, was integrated into a non-competitive sandwich-type ELISA assay using a primary antibody bound to Fe@Au nanoparticles, allowing the antibody–antigen complexes formed to be separated from the remaining components at each step of the assay by simply applying a magnetic field. The secondary antibody was labelled with CdS nanocrystals, which are quite stable and easily converted into free Cd2+, enabling the potentiometric reading. Several hydrogen peroxide concentrations and the effect of light were tested to optimize the dissolution of the CdS. The method developed allowed calibration curves to be traced with Salmonella solutions incubated in PBS (pH 4.4), with detection limits of 1100 CFU/mL and 20 CFU/mL for sample volumes of 10 µL and 100 µL, respectively, over the linear range of 10 to 10^8 CFU/mL. The method was applied to a bovine milk sample. The mean recovery obtained was 93.7% ± 2.8 (mean ± standard deviation), considering the two recovery assays performed (with two replicates each), using a 100 µL sample volume and incubated Salmonella concentrations of 100 and 1000 CFU/mL.
Abstract:
The creation of infrastructure involves building roads that connect strategic points, providing convenient and safe access to goods and services. This work addresses the study and design of an urban bypass in the municipality of Cinfães, covering alignment, pavements and signage. It begins with a presentation of the work, its objectives, structure and the methodology used in its preparation. The software tools used are presented, such as image editors (Google Earth, Microsoft ICE and Caesium), which allow panoramic images to be obtained and processed, Civil 3D, which enables a road project to be carried out efficiently, and Alize-LCPC, which determines the design characteristics of a flexible pavement. The studies required for the construction of the bypass are presented, covering the location of the road, the work on the topographic survey provided by the Municipal Council, alignment constraints and affected services. Subsequently, some theoretical concepts are addressed, such as alignment geometry, speed, traffic and visibility. The geometric characteristics of road infrastructure that must be known before a detailed road design are described, such as the horizontal alignment (straight sections, curves, radii, superelevation, widening), the longitudinal profile (grades, slopes, vertical curves) and the cross-section (carriageway, shoulders, gutters and slopes). A presentation is also given of the elements that make up a road platform and sidewalk and of their design criteria, such as traffic characterization, service temperatures and deformations, as well as of the theoretical elements for the drainage study (return period, rainfall and types of devices). The general characteristics of a signing and safety design are also presented, covering road markings and vertical signage. The work ends by presenting the solutions found and the means used to prepare the design of a new road, the widening of an existing road and the pavement rehabilitation of a section connecting to the EN222, and by setting out the conclusions drawn from the project together with proposals for future developments.
Abstract:
The study of agent diffusion in biological tissues is very important for understanding and characterizing the optical clearing effects and the mechanisms involved: tissue dehydration and refractive index matching. From measurements made to study optical clearing, it is clear that light scattering is reduced and that the optical properties of the tissue are controlled in the process. On the other hand, optical measurements do not allow direct determination of the diffusion properties of the agent in the tissue, and some calculations are necessary to estimate those properties. This is imposed by the occurrence of two fluxes during optical clearing: water, typically directed out of the tissue, and agent, directed into the tissue. When the water content in the immersion solution is approximately the same as the free water content of the tissue, a balance is established for water and the agent flux dominates. To prove this concept experimentally, we measured the collimated transmittance of skeletal muscle samples under treatment with aqueous solutions containing different concentrations of glucose. After estimating the mean diffusion time values for each of the treatments, we represented those values as a function of the glucose concentration in solution. Such a representation presents a maximum diffusion time when the water content in solution equals the tissue's free water content. That maximum represents the real diffusion time of glucose in the muscle, and with this value we could calculate the corresponding diffusion coefficient.
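As an illustration of the last step, a diffusion coefficient can be estimated from a measured diffusion time using the commonly used thin-slab relation D ≈ d²/(π²τ), where d is the sample thickness; whether this is the exact expression used by the authors is an assumption, and the numbers below are placeholders, not values from the study.

```python
# Sketch: estimating the glucose diffusion coefficient from the mean
# diffusion time via the thin-slab relation D ~ d^2 / (pi^2 * tau).
# Thickness and diffusion time are placeholders, not measured values.
import math

d_m = 0.5e-3    # sample thickness (m), illustrative
tau_s = 300.0   # mean diffusion time (s), illustrative

D = d_m ** 2 / (math.pi ** 2 * tau_s)
print(f"D = {D:.2e} m^2/s  ({D * 1e4:.2e} cm^2/s)")
```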