881 results for Linear operators


Relevance: 20.00%

Abstract:

Control Centre operators are essential to ensure good performance of Power Systems. Operators' actions are critical in dealing with incidents, especially severe faults such as blackouts. In this paper we present an Intelligent Tutoring approach for training Portuguese Control Centre operators in incident analysis and diagnosis, and in service restoration of Power Systems, offering context awareness and easy integration into the working environment.

Relevance: 20.00%

Abstract:

Electricity market players operating in a liberalized environment require access to an adequate decision support tool that allows them to consider all business opportunities and to take strategic decisions. Ancillary services represent a good negotiation opportunity that must be considered by market players; to this end, decision support tools must include ancillary market simulation. This paper proposes two different methods (Linear Programming and Genetic Algorithm approaches) for ancillary services dispatch. The methodologies are implemented in MASCEM, a multi-agent based electricity market simulator. A test case concerning the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services is included in this paper.
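As a hedged illustration of the Linear Programming approach (not the actual MASCEM implementation): for a single service with simple capacity-limited bids, the least-cost dispatch LP reduces to merit-order allocation by price. The function name and bid format below are assumptions made for this sketch.

```python
def dispatch_service(bids, requirement):
    """Least-cost dispatch of one ancillary service.

    bids: list of (agent, capacity_MW, price_per_MW) tuples.
    For a single service with capacity-limited bids, the LP
    'minimise total cost subject to meeting the requirement'
    reduces to merit-order allocation by ascending price.
    """
    schedule, cost, remaining = {}, 0.0, requirement
    for agent, capacity, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        awarded = min(capacity, remaining)  # take as much cheap capacity as needed
        schedule[agent] = awarded
        cost += awarded * price
        remaining -= awarded
    if remaining > 1e-9:
        raise ValueError("offered capacity cannot meet the requirement")
    return schedule, cost
```

In a full simulator each of the four services (Regulation Down/Up, Spinning and Non-Spinning Reserve) would get its own requirement and bid stack, plus coupling constraints, which is where a genuine LP or the Genetic Algorithm variant becomes necessary.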

Relevance: 20.00%

Abstract:

ABSTRACT: Introduction – Radiotherapy (RT) is a therapeutic approach for treating breast cancer; however, different irradiation techniques (IT) may be used. Objectives – To compare four ITs with respect to irradiation of the planning target volumes (PTV) and of the organs at risk (OAR). Methodology – Seven patients with an indication for RT of the left breast were selected. The PTV and OAR were contoured on computed tomography images, and four treatment plans were calculated per patient, one for each IT: external conformal radiotherapy (EBRT), intensity-modulated radiotherapy with 2 (IMRT2) and 5 fields (IMRT5), and dynamic arc therapy (DART). Results – Dose-volume histograms were compared across all ITs using IBM SPSS v20 statistical software. With IMRT5 and DART, the OAR receive more low doses; however, IMRT5 shows better conformity and homogeneity indices for the PTV. Conclusions – IMRT5 has the best conformity index; EBRT and IMRT2 give better results than DART. There are statistically significant differences between the ITs, above all in the lower doses delivered to the OAR.

Relevance: 20.00%

Abstract:

OBJECTIVE: To identify the determinants of protein-energy malnutrition underlying weight and linear-growth deficits in children. METHODS: Cross-sectional study of 1,041 children under two years of age from 10 municipalities of the State of Bahia, Brazil, from 1999 to 2000. Logistic regression and a hierarchical modelling strategy were used to identify the factors associated with anthropometric status. RESULTS: In the final model for linear-growth deficit, the basic determinant was ownership of two or fewer household appliances (OR=2.9; 95% CI: 1.74–4.90); at the underlying level, absence of prenatal care (OR=2.7; 95% CI: 1.47–4.97); and, among the immediate determinants, low birth weight (<2,500 g) (OR=3.6; 95% CI: 1.72–7.70) and a report of hospitalisation in the 12 months before the interview (OR=2.4; 95% CI: 1.42–4.10). The determinants of weight deficit at the basic, underlying and immediate levels were, respectively: monthly per capita income below one quarter of the minimum wage (OR=3.4; 95% CI: 1.41–8.16), absence of prenatal care (OR=2.1; 95% CI: 1.03–4.35), and low birth weight (OR=4.8; 95% CI: 2.00–11.48). CONCLUSIONS: The children's weight and linear-growth deficits were explained by the interplay between precarious material living conditions, restricted access to health care, and the burden of morbidity. Interventions that improve living conditions and broaden access to health services are strategies that advance equity in child health and nutrition.
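The adjusted odds ratios above come from logistic regression; the standard conversion from a fitted coefficient to an OR with a 95% confidence interval can be sketched as follows. The coefficient and standard error in the example are illustrative, not values taken from the study.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a confidence interval:
    OR = exp(beta), CI = exp(beta +/- z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative only: a coefficient whose OR is 3.6, with an assumed se.
or_, lo, hi = odds_ratio_ci(math.log(3.6), 0.38)
```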

Relevance: 20.00%

Abstract:

OBJECTIVE: Methodological strategies have been developed to minimise the effect of dietary measurement error. The aim of this study was to describe the application of a strategy for correcting dietary information for measurement error. METHODS: Food-intake data were obtained by applying a Food Frequency Questionnaire to 79 adolescents from the city of São Paulo in 1999. The dietary data were corrected by linear regression, after energy adjustment using the residual method. The reference method was the 24-hour recall, applied on three separate occasions. RESULTS: The corrected values approached the reference values. The lambda correction factor was 0.89 for energy; for the macronutrients, the factors were 0.41, 0.22 and 0.20 for carbohydrates, lipids and proteins, respectively. CONCLUSIONS: The means and standard deviations of the corrected values indicate that the measurement error was corrected. Even so, the performance of these methods is debatable: they are notoriously imperfect when their theoretical assumptions are not met, which is common in dietary studies that use measurement instruments based on self-report.
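The energy adjustment by the residual method mentioned above can be sketched as a minimal ordinary-least-squares computation; the function name and the data in the test are illustrative, not the study's.

```python
def energy_adjusted(nutrient, energy):
    """Residual method for energy adjustment: regress nutrient
    intake on total energy by ordinary least squares and return
    the residuals recentred on the mean intake, so the adjusted
    values are uncorrelated with energy."""
    n = len(nutrient)
    mx = sum(energy) / n
    my = sum(nutrient) / n
    sxx = sum((x - mx) ** 2 for x in energy)
    sxy = sum((x - mx) * (y - my) for x, y in zip(energy, nutrient))
    slope = sxy / sxx
    # residual + mean intake = energy-adjusted intake
    return [y - (my + slope * (x - mx)) + my
            for x, y in zip(energy, nutrient)]
```

By construction the adjusted intakes keep the original mean and have zero covariance with energy, which is what makes the subsequent regression-calibration step (the lambda factors above) meaningful.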

Relevance: 20.00%

Abstract:

Manufacturers of automotive components and accessories are a key link in the automotive industry's production chain. Within this industrial landscape, Preh Portugal, Lda supplies electronic components, specifically climate-control panels. The panels Preh delivers to its customers are subject to rigorous quality and functional tests. One of these is the functional key test, which relates key travel to the applied actuation force; this relationship must match a standard characteristic curve for the key type, and the key must also close and open its electrical contact. This thesis focuses on the development of the key test, proposing a change to the current system through the introduction of an embedded system, with the aim of making the test system more flexible and reducing costs. The embedded system provides the test with processing capability, replacing the current computer as the processing element. The implemented solution consists of a structural change in which the embedded system is placed between the computer and the displacement system. Since the core of the test process now resides in the embedded system, it must communicate with the remaining elements involved in the test: RS-232 serial with the displacement system (reading key travel and force), Ethernet with the computer (commands, parameters and results), and CAN with the climate-control panel (closing/opening of the electrical contact). The project resulted in a new structure and application that is easily integrated into the production line, with the advantages of being cheaper and more flexible, as intended.

Relevance: 20.00%

Abstract:

In basaltic dykes the magnetic lineation K1 (maximum magnetic susceptibility axis) is generally taken to indicate the flow direction during solidification of the magma. This assumption was tested in Tertiary basaltic dykes from Greenland displaying independent evidence of subhorizontal flow. The digital processing of microphotographs from thin sections cut in (K1, K2) planes yields the preferred linear orientation of plagioclase, which apparently marks the magma flow lineation. In up to 60% of cases, the angular separation between K1 and the assumed flow direction is greater than 45°. This suggests that the uncorroborated use of magnetic lineations in dykes is risky. A simple geometrical method is proposed to infer the flow vector from AMS in dykes based solely on magnetic foliations.
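The angular separation quoted above is simply the angle between two undirected lines (axes) in 3-D; a minimal sketch, with an illustrative function name:

```python
import math

def angular_separation(u, v):
    """Angle in degrees between two 3-D direction vectors.
    Fabric axes such as K1 are undirected, so we compare via
    |cos|, keeping the result in [0, 90]."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    c = min(1.0, abs(dot) / (nu * nv))  # clamp against float round-off
    return math.degrees(math.acos(c))
```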

Relevance: 20.00%

Abstract:

Chapter in book proceedings with peer review: Second Iberian Conference, IbPRIA 2005, Estoril, Portugal, June 7–9, 2005, Proceedings, Part II.

Relevance: 20.00%

Abstract:

We use a simple model of associating fluids which consists of spherical particles having a hard-core repulsion, complemented by three short-ranged attractive sites on the surface (sticky spots). Two of the spots are of type A and one is of type B; the bonding interactions between each pair of spots have strengths ε_AA, ε_BB, and ε_AB. The theory is applied over the whole range of bonding strengths and the results are interpreted in terms of the equilibrium cluster structures of the phases. In addition to our numerical results, we derive asymptotic expansions for the free energy in the limits for which there is no liquid-vapor critical point: linear chains (ε_AA ≠ 0, ε_AB = ε_BB = 0), hyperbranched polymers (ε_AB ≠ 0, ε_AA = ε_BB = 0), and dimers (ε_BB ≠ 0, ε_AA = ε_AB = 0). These expansions also allow us to calculate the structure of the critical fluid by perturbing around the above limits, yielding three different types of condensation: of linear chains (AA clusters connected by a few AB or BB bonds); of hyperbranched polymers (AB clusters connected by AA bonds); or of dimers (BB clusters connected by AA bonds). Interestingly, there is no critical point when ε_AA vanishes, despite the fact that AA bonds alone cannot drive condensation.

Relevance: 20.00%

Abstract:

Screening of topologies developed by hierarchical heuristic procedures can be carried out by comparing their optimal performance. In this work we explore mono-objective process optimization using two algorithms, simulated annealing and tabu search, and four different objective functions: two of the net present value type (one of them including environmental costs) and two of the global potential impact type. The hydrodealkylation of toluene to produce benzene was used as case study, considering five topologies with different complexities, mainly obtained by including or excluding liquid recycling and heat integration. The performance of the algorithms together with the objective functions was observed, analyzed and discussed from various perspectives: average deviation of results for each algorithm, capacity for producing high purity product, screening of topologies, robustness of the objective functions in screening of topologies, trade-offs between economic and environmental objective functions, and variability of optimum solutions.
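A generic mono-objective simulated-annealing loop of the kind used for this screening can be sketched as follows; this is a minimal one-dimensional version with illustrative parameters, not the authors' implementation:

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.95,
                        iters=500, seed=1):
    """Minimise `objective` by simulated annealing: accept worse
    moves with probability exp(-delta/T) while the temperature T
    is geometrically cooled, so the search degrades gracefully
    into hill climbing."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest, t = x, fx, t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # random neighbour
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

In the actual screening, the decision variables would be the process topology's continuous and discrete design parameters and the objective one of the four economic or environmental functions; tabu search replaces the probabilistic acceptance with a memory of recently visited moves.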

Relevance: 20.00%

Abstract:

A multi-residue methodology based on solid phase extraction followed by gas chromatography–tandem mass spectrometry was developed for trace analysis of 32 compounds in water matrices, including estrogens and several pesticides from different chemical families, some of them with endocrine disrupting properties. Matrix standard calibration solutions were prepared by adding known amounts of the analytes to a residue-free sample, to compensate for the matrix-induced chromatographic response enhancement observed for certain pesticides. Validation was done mainly according to the International Conference on Harmonisation recommendations, as well as some European and American validation guidelines with specifications for pesticide analysis and/or GC–MS methodology. As the assumption of homoscedasticity was not met for the analytical data, a weighted least squares linear regression procedure was applied as a simple and effective way to counteract the greater influence of the greater concentrations on the fitted regression line, improving accuracy at the lower end of the calibration curve. The method was considered validated for 31 compounds after consistent evaluation of the key analytical parameters: specificity, linearity, limit of detection and quantification, range, precision, accuracy, extraction efficiency, stability and robustness.
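The weighted least squares step can be sketched as below: weighting each calibration point by, for example, 1/x² keeps the low-concentration end of the curve from being swamped by the high end. The function name and weighting scheme are illustrative assumptions, not the paper's exact choice.

```python
def wls_fit(x, y, w):
    """Weighted least-squares straight line y = a + b*x.
    Each calibration point contributes with weight w_i
    (e.g. 1/x_i**2 or 1/s_i**2 under heteroscedasticity)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    a = my - b * mx
    return a, b
```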

Relevance: 20.00%

Abstract:

Ancillary services represent a good business opportunity that must be considered by market players. This paper presents a new methodology for ancillary services market dispatch. The method considers the bids submitted to the market and includes a market clearing mechanism based on deterministic optimization. An Artificial Neural Network is used for day-ahead prediction of Regulation Down, Regulation Up, Spin Reserve and Non-Spin Reserve requirements. Two test cases based on California Independent System Operator data concerning the dispatch of Regulation Down, Regulation Up, Spin Reserve and Non-Spin Reserve services are included in this paper to illustrate the application of the proposed method: (1) dispatch considering simple bids; (2) dispatch considering complex bids.

Relevance: 20.00%

Abstract:

This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aimed to improve its adequacy to solve the mentioned problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios should be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about one percent of difference in the objective function cost value.
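A minimal particle swarm loop with clamped velocities illustrates the kind of mechanism the ASMPSO modifies; here the velocity limit is fixed rather than adaptively adjusted during the search, and all names and parameters are illustrative rather than the paper's.

```python
import random

def pso_minimise(objective, bounds, n_particles=20, iters=200, seed=7):
    """Minimal 1-D particle swarm: velocities are clamped to a
    fraction of the search range (a fixed velocity limit; the
    ASMPSO's contribution is to adjust such limits adaptively)."""
    rng = random.Random(seed)
    lo, hi = bounds
    vmax = 0.2 * (hi - lo)                       # fixed velocity limit
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                                # personal bests
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]        # global best
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            vs[i] = max(-vmax, min(vmax, vs[i]))  # clamp velocity
            xs[i] = max(lo, min(hi, xs[i] + vs[i]))
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i], f
    return gbest, gbest_f
```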

Relevance: 20.00%

Abstract:

We perform a comparison between the fractional iteration and decomposition methods applied to the wave equation on a Cantor set. The operators are taken in the local sense. The results illustrate the significant features of the two methods, both of which are effective and straightforward for solving differential equations with local fractional derivatives.

Relevance: 20.00%

Abstract:

The definition and programming of distributed applications has become a major research issue due to the increasing availability of (large scale) distributed platforms and the requirements posed by economic globalization. However, such a task requires a huge effort due to the complexity of distributed environments: large numbers of users may communicate and share information across different authority domains; moreover, the "execution environment" or "computations" are dynamic, since the number of users and the computational infrastructure change over time. Grid environments, in particular, promise to be an answer to such complexity, by providing high performance execution support to large numbers of users and resource sharing across different organizations. Nevertheless, programming in Grid environments is still a difficult task: there is a lack of high level programming paradigms and support tools that may guide the application developer and allow reusability of state-of-the-art solutions. Specifically, the main goal of the work presented in this thesis is to contribute to the simplification of the development cycle of applications for Grid environments by bringing structure and flexibility to three stages of that cycle through a common model. The stages are: the design phase, the execution phase, and the reconfiguration phase. The common model is based on the manipulation of patterns through pattern operators, and the division of both patterns and operators into two categories, namely structural and behavioural. Moreover, both structural and behavioural patterns are first class entities at each of the aforesaid stages. At the design phase, patterns can be manipulated like other first class entities such as components; this allows a more structured way to build applications by reusing and composing state-of-the-art patterns.
At the execution phase, patterns are units of execution control: it is possible, for example, to start, stop, or resume the execution of a pattern as a single entity. At the reconfiguration phase, patterns can also be manipulated as single entities, with the additional advantage that it is possible to perform a structural reconfiguration while keeping some of the behavioural constraints, and vice-versa; for example, it is possible to replace a behavioural pattern, which was applied to some structural pattern, with another behavioural pattern. In this thesis, besides the proposal of the methodology for distributed application development sketched above, a relevant set of pattern operators was defined. The methodology and the expressivity of the pattern operators were assessed through the development of several representative distributed applications. To support this validation, a prototype was designed and implemented, encompassing some relevant patterns and a significant part of the pattern operators defined. This prototype was based on the Triana environment; Triana supports the development and deployment of distributed applications in the Grid through a dataflow-based programming model. Additionally, this thesis also presents the analysis of a mapping of some operators for execution control onto the Distributed Resource Management Application API (DRMAA). This assessment confirmed the suitability of the proposed model, as well as the generality and flexibility of the defined pattern operators.
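To make the idea of pattern operators concrete, a deliberately simplified sketch follows; the classes and the `apply_behaviour` operator are hypothetical illustrations of composing a structural pattern with a behavioural one, not the Triana-based prototype's API.

```python
class Pattern:
    """Hypothetical first-class pattern: `kind` distinguishes the
    two categories the thesis defines, structural and behavioural."""
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind  # "structural" | "behavioural"

def apply_behaviour(structural, behavioural):
    """Illustrative pattern operator: attach a behavioural pattern
    (e.g. a dataflow execution discipline) to a structural pattern
    (e.g. a pipeline of components), yielding a new first-class
    pattern that can itself be composed, executed or reconfigured."""
    assert structural.kind == "structural"
    assert behavioural.kind == "behavioural"
    combined = Pattern(f"{behavioural.name}({structural.name})", "structural")
    combined.behaviour = behavioural  # kept so it can later be swapped
    return combined

# Usage: compose a (hypothetical) streaming behaviour with a pipeline.
pipeline = Pattern("pipeline", "structural")
streaming = Pattern("streaming", "behavioural")
app = apply_behaviour(pipeline, streaming)
```

Keeping the behavioural pattern as a separate, replaceable attribute mirrors the reconfiguration-phase property described above: the structure can change while behavioural constraints are kept, and vice-versa.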