880 results for Three phase system
Abstract:
Lamotrigine (LTG) is a phenyltriazine-class drug used in the treatment of generalized and focal epileptic seizures and as adjunct therapy in refractory epilepsy. Because of the high inter-individual variability, drug interactions, and adverse effects observed during LTG administration, therapeutic drug monitoring of patients taking this drug is necessary for individual dose adjustment and to avoid adverse effects. The aim of this work was therefore to evaluate two microextraction techniques, hollow-fiber liquid-phase microextraction (HF-LPME) and dispersive liquid-liquid microextraction (DLLME), for the analysis of lamotrigine in plasma samples from epileptic patients. The electrophoretic conditions were defined first: a fused-silica capillary of 75 µm internal diameter and 50 cm effective length was used. The background electrolyte (BGE) consisted of 2-(N-morpholino)ethanesulfonic acid (MES) at 130 mmol L-1 and pH 5.0. Analyses were carried out at 20 °C and 15 kV. Samples were injected hydrodynamically (0.5 psi for 10 s) and detection was performed at 214 nm. Under these conditions LTG and the internal standard (IS), lidocaine, could be analyzed in less than 7 minutes. HF-LPME was evaluated in the three-phase mode, using 500 µL of plasma and 3.5 mL of 50 mmol L-1 sodium phosphate solution at pH 9.0 as the donor phase. The solvent used to impregnate the fiber was 1-octanol. The acceptor phase was 60 µL of hydrochloric acid solution at pH 4.0. For the DLLME evaluation, a sample pretreatment step (500 µL of plasma with 1 mL of acetonitrile) was required. Then 1.3 mL of the supernatant was added to 4 mL of 50 mmol L-1 sodium phosphate solution at pH 9.0, 120 µL of chloroform (extraction solvent) was injected into this aqueous sample, and 165 µL of sedimented phase was recovered. The analytical performance characteristics of both methods were evaluated: linearity was obtained over the plasma concentration range of 1-20 µg/mL, with a lower limit of quantification (LLOQ) of 1 µg mL-1. Precision and accuracy assays gave values in accordance with the official guidelines. In addition, the methods were selective, showed no carryover, and the samples were stable. Recoveries were 54.3% and 23% for HF-LPME and DLLME, respectively. The validated methods were successfully applied to plasma samples from epileptic patients under LTG treatment. The two techniques were also compared, and HF-LPME showed advantages over DLLME, proving to be a promising technique for the analysis of complex matrices, with low organic solvent consumption and potential for automation.
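As an aside on the arithmetic behind these figures, the sketch below (Python; the phase volumes are those reported above, the concentrations are hypothetical) shows how recovery and enrichment factor are typically computed for a three-phase HF-LPME setup:

```python
# Recovery and enrichment-factor arithmetic commonly used to evaluate
# microextraction methods such as HF-LPME. Phase volumes follow the
# abstract; concentration values are hypothetical.

V_donor = 4.0        # mL: 0.5 mL plasma diluted in 3.5 mL phosphate buffer
V_acceptor = 0.060   # mL: 60 uL acceptor phase (HCl solution, pH 4.0)

C_donor_initial = 10.0    # ug/mL, hypothetical spiked plasma concentration
C_acceptor_final = 362.0  # ug/mL, hypothetical measured acceptor concentration

# Enrichment factor: ratio of final acceptor to initial donor concentration.
EF = C_acceptor_final / C_donor_initial

# Recovery: percentage of analyte mass transferred into the acceptor phase.
R = 100.0 * (C_acceptor_final * V_acceptor) / (C_donor_initial * V_donor)

print(f"Enrichment factor: {EF:.1f}")  # 36.2
print(f"Recovery: {R:.1f}%")           # 54.3%, the value reported for HF-LPME
```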
Abstract:
This work proposes a concurrent multiscale modeling technique for concrete considering two distinct scales: the mesoscale, where concrete is modeled as a heterogeneous material, and the macroscale, where concrete is treated as a homogeneous material. The heterogeneity of the mesoscopic structure of concrete is idealized by considering three distinct phases: the coarse aggregates and the mortar (matrix), both regarded as homogeneous materials, and the interfacial transition zone (ITZ), treated as the weakest of the three phases. The coarse aggregate is generated from a grading curve and positioned randomly in the matrix. Its mechanical behavior is described by a linear-elastic constitutive model, owing to its higher strength compared with the other two phases of the concrete. Continuum finite elements with a high aspect ratio, together with a damage constitutive model, are used to represent the nonlinear behavior of the concrete, which stems from crack initiation in the ITZ and subsequent propagation into the matrix, leading to the formation of macrocracks. The high-aspect-ratio interface finite elements are inserted between all regular elements of the matrix and between those of the matrix and the aggregates, representing the ITZ and becoming potential crack propagation paths. In the limit state, when the thickness of the interface element tends to zero (h → 0) and, consequently, the aspect ratio tends to infinity, these elements exhibit the same kinematics as the continuum strong discontinuity approach (CSDA), making them suitable for representing the formation of discontinuities associated with cracks, similarly to cohesive models. A tensile damage model is proposed to represent the nonlinear mechanical behavior of the interfaces, associated with crack formation or even eventual crack closure. To circumvent the problems caused by the finite element transition mesh between the macroscale and mesoscale meshes, which in general differ markedly in refinement, a recent technique for coupling non-conforming meshes is used. This technique is based on the definition of coupling finite elements (CFEs), which are able to enforce displacement continuity between completely independently generated meshes without increasing the total number of degrees of freedom of the problem, and which can be used to couple non-overlapping as well as overlapping meshes. To make multiscale analysis possible in cases where the strain localization region cannot be defined a priori, an adaptive multiscale technique is proposed. In this approach, the macroscale stress distribution is used as an indicator to change the modeling of critical regions, replacing the macroscale with the mesoscale during the analysis. Consequently, the macroscopic mesh is automatically replaced by a mesoscopic mesh wherever nonlinear behavior is about to occur. Numerical tests are carried out to show the capability of the proposed model to represent the process of crack initiation and propagation in the tensile region of concrete. The numerical results are compared with experimental results or with those obtained by direct mesoscale simulation (DMS).
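To make the interface constitutive behavior concrete, the sketch below gives a minimal one-dimensional tensile damage update of the general kind described above. The exponential softening law and every parameter value are illustrative assumptions for a sketch, not the thesis's actual model:

```python
import numpy as np

# 1-D sketch of a tensile damage update: sigma = (1 - d) * E * eps, with
# exponential softening once the strain history passes the onset threshold.
E = 30e9      # Pa, stiffness of the interface material (illustrative)
f_t = 3.0e6   # Pa, tensile strength (illustrative)
G_f = 100.0   # N/m, fracture energy (illustrative)
h = 1e-3      # m, interface bandwidth (tends to zero at high aspect ratio)

eps_0 = f_t / E          # strain at damage onset
eps_f = G_f / (f_t * h)  # characteristic softening strain (crack-band regularized)

def stress_update(eps, kappa):
    """Return (stress, updated history variable) for a strain eps."""
    kappa = max(kappa, eps, eps_0)  # history variable never decreases
    d = 1.0 - (eps_0 / kappa) * np.exp(-(kappa - eps_0) / eps_f)
    # Damage degrades tension only; full stiffness is recovered on closure.
    sigma = (1.0 - d) * E * eps if eps > 0.0 else E * eps
    return sigma, kappa

# Example: load past the peak, then unload into compression (crack closure).
kappa = 0.0
for eps in (5e-5, 2e-4, 5e-4, -1e-4):
    s, kappa = stress_update(eps, kappa)
    print(f"eps = {eps:+.1e} -> sigma = {s / 1e6:+.2f} MPa")
```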
Abstract:
Induction motors play an important role in industry, which underlines the importance of correctly diagnosing and classifying faults while they are still at an early stage of evolution, enabling productivity gains and, above all, preventing serious damage to processes and machines. This thesis therefore proposes an intelligent multiclassifier for diagnosing healthy motors, stator winding short-circuit faults, rotor faults, and bearing faults in three-phase induction motors driven by different models of frequency inverters, based on the analysis of the amplitudes of the stator current signals in the time domain. To assess classification accuracy across several fault severity levels, the performance of four distinct machine learning techniques was compared, namely: (i) Fuzzy ARTMAP network, (ii) multilayer perceptron network, (iii) support vector machine, and (iv) k-nearest neighbors. Experimental results obtained from 13,574 experimental trials are presented to validate the study, covering a wide range of operating frequencies and load torque regimes on 5 different motors.
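For illustration, a minimal scikit-learn sketch of this kind of comparison, using three of the four techniques (Fuzzy ARTMAP has no standard scikit-learn implementation and is omitted). The feature matrix and labels are placeholders for the time-domain current amplitudes and the four fault classes:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))    # placeholder: 40 time-domain amplitude samples
y = rng.integers(0, 4, size=1000)  # placeholder: healthy/stator/rotor/bearing

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000),
    "SVM": SVC(kernel="rbf", C=10.0),
    "kNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    # Scale features before each classifier, then 5-fold cross-validate.
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```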
Abstract:
Three-phase induction motors are the main elements for converting electrical energy into mechanical driving power across many production sectors. Identifying a defect in an operating motor, before it fails, can provide greater confidence in maintenance decision-making, cost reduction, and increased availability. This thesis first presents a literature review and the general methodology for reproducing the motor defects and for applying the technique of discretizing the current and voltage signals in the time domain. A comparative study of pattern classification methods for identifying defects in these machines is also developed, covering: Naive Bayes, k-Nearest Neighbor, Support Vector Machine (Sequential Minimal Optimization), Artificial Neural Network (Multilayer Perceptron), Repeated Incremental Pruning to Produce Error Reduction, and C4.5 Decision Tree. The concept of Multi-Agent Systems (MAS) was also applied to support the use of multiple concurrent methods, in a distributed fashion, for recognizing defect patterns in faulty bearings, broken rotor squirrel-cage bars, and short circuits between coils of the stator winding of three-phase induction motors. In addition, strategies for grading the severity of the above defects were explored, including an investigation of the influence of supply voltage unbalance on the detection of these anomalies. The experimental data were acquired on a laboratory test bench with 1 and 2 hp motors connected directly to the mains, operating under various voltage unbalance conditions and variations of the mechanical load applied to the motor shaft.
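A minimal sketch of the time-domain discretization step described above, under the assumption that each steady-state current cycle is resampled to a fixed number of amplitude points to form a feature vector (the sampling details are assumptions, not the thesis's exact procedure):

```python
import numpy as np

def discretize_cycle(signal, fs, f_line=60.0, n_points=40):
    """Resample one fundamental cycle of `signal` to n_points amplitudes."""
    n_cycle = int(fs / f_line)        # samples per fundamental cycle
    cycle = signal[:n_cycle]          # first full cycle (assumed steady state)
    idx = np.linspace(0, n_cycle - 1, n_points)
    return np.interp(idx, np.arange(n_cycle), cycle)

# Example: a synthetic 60 Hz phase current sampled at 12 kHz.
fs = 12_000
t = np.arange(0, 0.1, 1 / fs)
i_a = np.sin(2 * np.pi * 60 * t)
features = discretize_cycle(i_a, fs)
print(features.shape)  # (40,) -> one feature vector for the classifiers
```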
Abstract:
Frequency of exposure to very low- and high-frequency words was manipulated in a three-phase (familiarisation, study, and test) design. During familiarisation, words were presented with their definition (once, four times, or not presented). One week (Experiment 1) or one day (Experiment 2) later, participants studied a list of homogeneous pairs (i.e., pair members were matched on background and familiarisation frequency). Item and associative recognition of high- and very low-frequency words presented in intact, rearranged, old-new, or new-new pairs were tested in Experiment 1. Associative recognition of very low-frequency words was tested in Experiment 2. Results showed that prior familiarisation improved associative recognition of very low-frequency pairs, but had no effect on high-frequency pairs. The role of meaning in the formation of item-to-item and item-to-context associations and the implications for current models of memory are discussed.
Abstract:
Froth recovery measurements have been conducted in both the presence (three-phase froth) and absence (two-phase froth) of particles of different contact angles in a specially modified laboratory flotation column. Increasing the particle hydrophobicity increased the flow rate of particles entering the froth, while the recovery of particles across the froth phase itself also increased for particle contact angles up to 63° and at all vertical heights of the froth column. However, a further increase in the contact angle to 69° resulted in lower particle recovery across the froth phase. The reduced froth recovery for particles of 69° contact angle was linked to significant bubble coalescence within the froth phase. The reduced froth recovery occurred uniformly across the entire particle size range, and was, presumably, a result of particle detachment from coalescing bubbles. Water flow rates across the froth phase also varied with particle contact angle. The general trend was a decrease in the concentrate flow rate of water with increasing particle contact angle. An inverse relationship between water flow rate and bubble radius was also observed, possibly allowing prediction of water flow rate from bubble size measurements in the froth. Comparison of the froth structure, defined by bubble size, gas hold-up and bubble layer thickness, for two- and three-phase froths, at the same frother concentration, showed there was a relationship between water flow rate and froth structure.
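The suggested prediction of water flow rate from bubble size can be sketched as a simple fit of that inverse relationship; all data below are hypothetical:

```python
import numpy as np

# Hypothetical paired measurements of bubble radius and concentrate water flow.
r_b = np.array([0.4, 0.6, 0.8, 1.0, 1.3])      # mm, bubble radii
q_w = np.array([25.0, 16.0, 12.5, 10.0, 7.5])  # mL/s, water flow rates

# Simple estimate of the constant k in the inverse model Q_w = k / r_b.
k = np.mean(q_w * r_b)
print(f"k = {k:.2f} mL*mm/s")
print(f"predicted Q_w at r_b = 0.7 mm: {k / 0.7:.1f} mL/s")
```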
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
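For reference, the model the ZNZ pattern constrains has the standard vector error-correction form (shown here for the I(1) case; the paper itself treats series integrated of order two, which adds a second-difference layer to the same structure):

```latex
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{p-1} \Gamma_i \, \Delta y_{t-i} + \varepsilon_t
```

Here β holds the cointegrating vectors and α the loading vectors; ZNZ patterning allows individual entries of α, β, and the short-run matrices Γ_i to be fixed at zero, which is what the selection algorithm exploits.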
Abstract:
A three-phase longitudinal study examined the origins of grammatical sensitivity and its usefulness as a predictor of early word-level reading. At about 4 years of age, children were given a range of language and cognitive tests. One year later, the children were given a further series of language and cognitive tests, this time including grammatical sensitivity, phonological sensitivity, and nonword repetition. Another year later, word-level reading achievement was assessed. Overall, grammatical sensitivity and phonological sensitivity were more firmly grounded in earlier language ability than in cognitive ability. Phonological sensitivity and nonword repetition showed reliable predictive associations with subsequent word reading skills. Grammatical sensitivity did not.
Abstract:
To participate effectively in the post-industrial information societies and knowledge/service economies of the 21st century, individuals must be better informed; have greater thinking and problem-solving abilities; be self-motivated; have a capacity for cooperative interaction; possess varied and specialised skills; and be more resourceful and adaptable than ever before. This paper reports on one outcome of a national project funded by the Ministerial Council on Education, Employment, Training and Youth Affairs, which investigated what practices, processes, strategies and structures best promote lifelong learning and the development of lifelong learners in the middle years of schooling. The investigation linked lifelong learning with middle schooling because there were indications that middle schooling reform practices also lead to the development of lifelong learning attributes, which is regarded as a desirable outcome of schooling in Australia. While the larger project provides depth around these questions, this paper specifically reports on the development of a three-phase model that can guide the sequence in which schools undertaking middle schooling reform attend to particular core component changes. The model is developed from an extensive analysis of 25 innovative schools around the nation, and provides a unique insight into the desirable sequences of, and time spent achieving, reforms, along with typical pitfalls that lead to a regression in the reform process. Importantly, the model confirms that schooling reform takes much more time than planners typically expect or allocate, and that there are predictable and identifiable inhibitors to achieving it.
Abstract:
This study is the first phase of a three-phase study continuing over three years. Twenty health professionals from different disciplinary backgrounds (medical doctors, nurses, allied health professionals) and 20 patients across a range of medical conditions, education, gender, and socio-economic backgrounds, participated in one-on-one semi-structured interviews. Participants described their experiences and perceptions of both effective and satisfying medical consultations and dissatisfying and ineffective ones. They also discussed their individual goals and needs in the consultation process. Results indicated that while there were some similarities in consultation goals and needs between health professionals, there were also clear differences across the different disciplines. In addition, there were clear differences in goals and needs across the twenty patients. These findings are discussed within the framework of communication accommodation theory (CAT) and the linguistic model of patient participation (LMOPP) and focus on understanding the different dynamics that underpin varying health professional and patient interactions.
Abstract:
Purpose – The purpose of this paper is to explore the role and relevance of external standards in demonstrating the value and impact of academic library services to their stakeholders. Design/methodology/approach – Two UK standards, Charter Mark and Customer Service Excellence, are evaluated via an exploratory case study employing multiple data collection techniques. Methods and results of phases 1-2 of a three-phase research project are outlined. Findings – Despite some limitations, standards may assist the manager in demonstrating the value, impact and quality of academic libraries in a recessionary environment. Active engagement and partnership with customers is imperative if academic libraries are to be viewed as vital to their parent organisations and thus survive. Originality/value – This paper provides a systematic evaluation of the role of external accreditation standards in measuring academic library service value and impact.
Abstract:
A study of the hydrodynamics and mass transfer characteristics of a liquid-liquid extraction process in a 450 mm diameter, 4.30 m high Rotating Disc Contactor (R.D.C.) has been undertaken. The literature relating to this type of extractor and the relevant phenomena, such as droplet break-up and coalescence, drop mass transfer and axial mixing, has been reviewed. Experiments were performed using the system Clairsol-350-acetone-water, and the effects of drop size, drop size distribution and dispersed phase hold-up on the performance of the R.D.C. were established. The results obtained for the two-phase system Clairsol-water have been compared with published correlations: since most of these correlations are based on data obtained from laboratory-scale R.D.C.s, a wide divergence was found. The hydrodynamic data from this study have therefore been correlated to predict the drop size and the dispersed phase hold-up, and agreement has been obtained with the experimental data to within ±8% for the drop size and ±9% for the dispersed phase hold-up. The correlations obtained were modified to include terms involving column dimensions, and the data have been correlated with the results obtained from this study together with published data; agreement was generally within ±17% for drop size and within ±14% for the dispersed phase hold-up. The experimental drop size distributions obtained were in excellent agreement with the upper-limit log-normal distributions, which should therefore be used in preference to other distribution functions. In the calculation of the overall experimental mass transfer coefficient, the mean driving force was determined from the concentration profile along the column using Simpson's rule, and a novel method was developed to calculate the overall theoretical mass transfer coefficient Kcal, involving the drop size distribution diagram to determine the volume percentage of stagnant, circulating and oscillating drops in the sample population. Individual mass transfer coefficients were determined for the corresponding droplet state using different single-drop mass transfer models. Kcal was then calculated as the fractional sum of these individual coefficients and their proportions in the drop sample population. Very good agreement was found between the experimental and theoretical overall mass transfer coefficients. Drop sizes under mass transfer conditions were strongly dependent upon the direction of mass transfer. Drop sizes in the absence of mass transfer were generally larger than those with solute transfer from the continuous to the dispersed phase, but smaller than those with solute transfer in the opposite direction at corresponding phase flow rates and rotor speed. Under similar operating conditions hold-up was also affected by mass transfer; it was higher when solute transferred from the continuous to the dispersed phase and lower when the direction was reversed, compared with non-mass-transfer operation.
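The two calculations described above can be sketched as follows (Python, with hypothetical profile data and coefficients): Simpson's rule gives the mean driving force from the concentration profile, and the overall coefficient is taken as the fractional sum over the three droplet states:

```python
import numpy as np
from scipy.integrate import simpson

# Hypothetical local driving-force profile sampled along the 4.30 m column.
z = np.linspace(0.0, 4.3, 9)                  # m, sampling heights
delta_c = np.array([8.0, 7.1, 6.3, 5.6, 4.9,  # kg/m^3, local driving force
                    4.2, 3.6, 3.1, 2.7])

# Mean driving force: Simpson's-rule integral divided by the column height.
mean_driving_force = simpson(delta_c, x=z) / (z[-1] - z[0])

# Volume fractions of each droplet state (read off the drop-size
# distribution diagram) and single-drop coefficients from the
# corresponding mass transfer models; all values hypothetical.
fractions = {"stagnant": 0.20, "circulating": 0.55, "oscillating": 0.25}
k = {"stagnant": 1.2e-5, "circulating": 3.5e-5, "oscillating": 8.0e-5}  # m/s

K_overall = sum(fractions[s] * k[s] for s in fractions)
print(f"mean driving force = {mean_driving_force:.2f} kg/m^3")
print(f"overall coefficient = {K_overall:.2e} m/s")
```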
Abstract:
Fast pyrolysis liquid or bio-oil has been used in engines with limited success. It requires a pilot fuel and/or an additive for successful combustion and there are problems with materials and liquid properties. It is immiscible with all conventional hydrocarbon fuels. Biodiesel, a product of esterification of vegetable oil with an alcohol, is widely used as a renewable liquid fuel as an additive to diesel at up to 20%. There are however limits to its use in conventional engines due to poor low temperature performance and variability in quality from a variety of vegetable oil qualities and variety of esterification processes. Within the European Project Bioliquids-CHP - a joint project between the European Commission and Russia - a study was undertaken to develop small scale CHP units based on engines and microturbines fuelled with bioliquids from fast pyrolysis and methyl esters of vegetable oil. Blends of bio-oil and biodiesel were evaluated and tested to overcome some of the disadvantages of using either fuel by itself. An alcohol was used as the co-solvent in the form of ethanol, 1-butanol or 2-propanol. Visual inspection of the blend homogeneity after 48 h was used as an indicator of the product stability and the results were plotted in a three phase chart for each alcohol used. An accelerated stability test was performed on selected samples in order to predict its long term stability. We concluded that the type and quantity of alcohol is critical for the blend formation and stability. Using 1-butanol gave the widest selection of stable blends, followed by blends with 2-propanol and finally ethanol, thus 1-butanol blends accepted the largest proportion of bio-oil in the mixture.
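A minimal sketch of how such a three-phase (ternary) stability chart can be produced; the composition triples and stability flags below are hypothetical stand-ins for the 48 h inspection results:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical (bio-oil, biodiesel, alcohol) mass fractions, summing to 1.
blends = np.array([
    [0.10, 0.70, 0.20],
    [0.20, 0.55, 0.25],
    [0.30, 0.40, 0.30],
    [0.45, 0.30, 0.25],
])
stable = np.array([True, True, False, False])  # 48 h homogeneity outcome

# Barycentric projection onto 2-D: vertices at bio-oil (0,0),
# biodiesel (1,0) and alcohol (0.5, sqrt(3)/2).
x = blends[:, 1] + 0.5 * blends[:, 2]
y = (np.sqrt(3) / 2) * blends[:, 2]

plt.scatter(x[stable], y[stable], marker="o", label="stable")
plt.scatter(x[~stable], y[~stable], marker="x", label="unstable")
plt.legend()
plt.title("Bio-oil / biodiesel / 1-butanol blend stability (illustrative)")
plt.show()
```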
Abstract:
Two-way power flow is nothing new and has been in practical use with line-commutated converters for at least 50 years. With these types of converters, reversal of power flow can be achieved by increasing the firing angle of the devices beyond 90 degrees, thus producing a negative DC voltage. Line-commutated converters have several known disadvantages: the direct current cannot be reversed, the power factor decreases as the firing angle increases, and the harmonics in the line current are high. To tackle these problems a forced-commutated converter can be used: the power factor can be unity and the harmonics can be reduced. Many researchers have used PWM with different control techniques to these ends. In each converter arm they used a forced-commutated device with an antiparallel diode. In the rectification mode of operation the current path is predominantly through the diodes, and in the inverter operation the current flows predominantly through the forced-commutated devices. Although their results were encouraging and gave a unity power factor with nearly sinusoidal current, the main disadvantage was the difficulty of controlling the power factor when the system must operate at lagging or leading power factor. In this work a new idea was introduced by connecting two GTOs antiparallel instead of a diode and a GTO. A single-phase system using two GTO converters connected in series was built: one converter operates as a rectifier and the other operates as an inverter. In the inversion mode, in each inverter arm one GTO is operated as a diode, simply by keeping it always on, while the other antiparallel GTO is operated as a normal device to carry the inverter current. In the rectification mode, in each arm one GTO is always off and the other GTO is operated as a controlled device. The main advantage is that the system can be operated at lagging or leading power factor.
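The firing-angle behavior described at the start follows from the standard average-output relation of a fully controlled line-commutated bridge; for a single-phase full converter with continuous current, Vdc = (2 Vm / pi) cos(alpha), so angles beyond 90 degrees give a negative DC voltage. A quick numeric check with an illustrative supply voltage:

```python
import numpy as np

Vm = np.sqrt(2) * 230.0  # peak of a 230 V RMS supply (illustrative)

for alpha_deg in (0, 45, 90, 120, 150):
    # Average DC output of a single-phase fully controlled bridge.
    vdc = (2 * Vm / np.pi) * np.cos(np.radians(alpha_deg))
    mode = "rectifying" if vdc > 0 else "inverting" if vdc < 0 else "zero"
    print(f"alpha = {alpha_deg:3d} deg -> Vdc = {vdc:7.1f} V ({mode})")
```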