873 results for Multi-Agent Model


Relevance: 80.00%

Abstract:

The Distribution System Expansion Planning (DSEP) problem seeks to determine guidelines for expanding the network in the face of growing consumer demand. In this context, electricity distribution companies are responsible for proposing actions in the distribution system so that the energy supply meets the standards required by regulatory agencies. Traditionally, only the minimization of the overall investment cost of expansion plans is considered, neglecting system reliability and robustness. As a consequence, the resulting expansion plans lead the distribution system to configurations that are vulnerable to heavy load shedding when network contingencies occur. This work develops a methodology that adds reliability and risk considerations to the traditional DSEP problem, in order to choose expansion plans that maximize network robustness and thus mitigate the damage caused by contingencies in the system. A multi-objective formulation of the DSEP problem was developed that minimizes two objectives: the overall cost (comprising investment, maintenance, operation and energy-production costs) and the implementation risk of expansion plans. For both objectives, mixed-integer linear models are formulated and solved with the CPLEX solver through the GAMS software. To manage the search for optimal solutions, two evolutionary algorithms were implemented in C++: the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). These algorithms proved effective in this search, as confirmed by expansion-planning simulations on two test systems adapted from the literature.
The solution sets found in the simulations contain expansion plans with different levels of overall cost and implementation risk, highlighting the diversity of the proposed solutions. Some of these topologies are illustrated to make their differences evident.
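Both evolutionary algorithms used here rank candidate plans by Pareto dominance over the two objectives (overall cost, implementation risk). As a hypothetical illustration, not the thesis's C++ implementation, the fast non-dominated sorting step of NSGA-II can be sketched as:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Return Pareto fronts (lists of indices), front 0 first, as in NSGA-II."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # solutions that each i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:     # all its dominators already placed
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]

# Invented (cost, risk) pairs for four candidate expansion plans
plans = [(10, 5), (8, 7), (9, 4), (12, 6)]
print(non_dominated_sort(plans))
```

Front 0 holds the non-dominated plans; NSGA-II's crowding-distance and selection machinery would then operate on these fronts.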

Relevance: 80.00%

Abstract:

Three-phase induction motors are the main devices for converting electrical energy into mechanical driving power across many productive sectors. Identifying a fault in an operating motor before it fails can make maintenance decisions safer, reduce costs and increase availability. This thesis first presents a literature review and the general methodology for reproducing the motor faults and for applying a time-domain discretization technique to the current and voltage signals. A comparative study of pattern-classification methods for identifying faults in these machines is then developed, covering: Naive Bayes, k-Nearest Neighbor, Support Vector Machine (Sequential Minimal Optimization), Artificial Neural Network (Multilayer Perceptron), Repeated Incremental Pruning to Produce Error Reduction, and the C4.5 Decision Tree. The concept of Multi-Agent Systems (MAS) was also applied to support the distributed use of multiple concurrent methods for recognizing fault patterns associated with defective bearings, broken rotor squirrel-cage bars and inter-turn short circuits in the stator winding of three-phase induction motors. In addition, strategies for grading the severity of these faults were explored, including an investigation of how voltage unbalance in the machine's supply affects the detection of these anomalies. The experimental data were acquired on a laboratory test bench with motors rated 1 and 2 cv (metric horsepower) connected directly to the grid, operating under several voltage-unbalance conditions and under variations of the mechanical load applied to the motor shaft.
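Of the classifiers compared, k-Nearest Neighbor is the simplest to sketch. The features and labels below are invented for illustration only; the thesis extracts its features from discretized current and voltage signals:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D features derived from stator-current signatures
train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.5, 0.9)]
labels = ["healthy", "healthy", "broken_bar", "broken_bar", "bearing"]
print(knn_predict(train, labels, (0.15, 0.15)))
```

In a MAS setting like the one described, each classification method could run inside its own agent, with the agents' verdicts combined by a coordinating agent.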

Relevance: 80.00%

Abstract:

Smart grids represent the new generation of electric power systems, combining advances in computing, communication systems, distributed processing and artificial intelligence to provide new capabilities, such as real-time monitoring of electricity demand and consumption and large-scale management of distributed generators, built on a distributed control system over the electrical network. This structure deeply changes how power systems, especially distribution systems, are planned and operated today, and the pursuit of these capabilities opens interesting possibilities for research and development. Against this background, the present work uses a multi-agent-systems approach to simulate this kind of power distribution system under different control options. The use of multi-agent technology for the simulation rests on conceptualizing smart grids as distributed systems, a conceptualization also carried out in this work. To validate the proposal, three capabilities expected from these networks were simulated: classification of non-linear loads; voltage-profile management; and topological reconfiguration aimed at reducing electrical losses. All the modeling and development behind these studies is reported here. Finally, the work identifies multi-agent systems as a technology to be employed both for research on these networks and for their implementation.
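A minimal sketch of the multi-agent idea, with invented names and a deliberately simplified voltage rule (not the control logic developed in the work), could pair per-bus agents with a coordinator for the voltage-profile management function:

```python
class BusAgent:
    """Hypothetical agent monitoring one bus of a distribution feeder."""
    def __init__(self, name, voltage_pu):
        self.name, self.voltage_pu = name, voltage_pu

    def handle(self, message):
        # Toy local control: each switching action nudges the voltage 0.02 pu
        if message == "raise":
            self.voltage_pu += 0.02
        elif message == "lower":
            self.voltage_pu -= 0.02

def coordinate(agents, low=0.95, high=1.05):
    """Coordinator agent: message any bus agent whose voltage leaves the band."""
    for a in agents:
        while a.voltage_pu < low:
            a.handle("raise")
        while a.voltage_pu > high:
            a.handle("lower")
    return {a.name: round(a.voltage_pu, 3) for a in agents}

agents = [BusAgent("bus1", 0.92), BusAgent("bus2", 1.00), BusAgent("bus3", 1.08)]
print(coordinate(agents))
```

The point of the multi-agent decomposition is that each bus agent only needs local state; the coordinator exchanges messages rather than solving a centralized power-flow problem.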

Relevance: 80.00%

Abstract:

The economic efficiency of dairy cattle farming depends on using animals that simultaneously perform well in production, reproduction, health and longevity. Here the selection index is an important tool for increasing profitability, since it allows sires to be selected for several traits simultaneously, accounting both for the relationships among the traits and for their economic relevance. The recent availability of genomic data has further made it possible to broaden the scope and accuracy of selection indices by increasing the number and quality of the information considered. In this context, two studies were developed. In the first, the objective was to estimate genetic parameters and breeding values (VG) for traits related to milk production and quality, including genomic information in the genetic evaluation. Records of age at first calving (IPP), milk yield (PROD), fat content (GOR), protein (PROT), lactose, casein, somatic cell score (ECS) and fatty-acid profile from 4,218 cows were used, along with genotypes of 755 cows for 57,368 single-nucleotide polymorphisms (SNP). Variance components and VG were obtained with an animal mixed model including the effects of contemporary group, lactation order and days in milk, together with additive genetic, permanent environmental and residual effects. Two approaches were compared: a traditional one, in which the relationship matrix is based on pedigree, and a genomic one, in which this matrix is built by combining pedigree and SNP information. Heritabilities ranged from 0.07 to 0.39. Genetic correlations between PROD and the milk components ranged from -0.45 to -0.13, while high positive correlations were estimated between GOR and the fatty acids.
The genomic approach did not change the estimates of genetic parameters; however, the accuracy of the VG increased by 1.5% to 6.8%, except for IPP, for which there was a 1.9% reduction. In the second study, the objective was to incorporate genomic information into the development of economic selection indices. Here, the VG for PROD, GOR, PROT, unsaturated fatty-acid content (INSAT), ECS and productive life were combined into selection indices weighted by economic values estimated under three payment scenarios: payment exclusively for milk volume (PAG1); for volume and milk components (PAG2); and for volume and components including INSAT (PAG3). These VG were predicted from phenotypes of 4,293 cows and genotypes of 755 animals with a multiple-trait model under both the traditional and the genomic approach. Genomic information affected the variance components, the VG and the response to selection. Nevertheless, ranking correlations between the approaches were high in all three scenarios, with values between 0.91 and 0.99. Differences appeared mainly between PAG1 and the other scenarios, with correlations between 0.67 and 0.88. The relative importance of the traits and the profile of the best animals were sensitive to the payment scenario considered. Considering the economic values of the traits in genetic evaluation and selection decisions therefore proved essential.
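The selection-index step can be sketched with invented numbers: each animal's index is the dot product of an economic-weight vector with its breeding values, and payment scenarios are compared through rank correlations, as in the PAG1-PAG3 comparison above. The weights and breeding values below are illustrative only:

```python
def selection_index(ebvs, weights):
    """Index value per animal: weighted sum of its estimated breeding values."""
    return [sum(w * v for w, v in zip(weights, animal)) for animal in ebvs]

def rank(values):
    """Rank positions, 1 = highest index value (no ties assumed)."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman rank correlation between two index rankings (no ties)."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# Hypothetical breeding values (milk, fat, protein) for four bulls
ebvs = [(500, 10, 8), (300, 20, 15), (700, 5, 4), (400, 15, 12)]
pag1 = selection_index(ebvs, (1.0, 0.0, 0.0))   # paid on volume only
pag2 = selection_index(ebvs, (0.2, 4.0, 6.0))   # volume plus components
print(spearman(pag1, pag2))
```

In this toy example the two payment scenarios reverse the ranking entirely (correlation -1), exaggerating the scenario sensitivity the study reports.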

Relevance: 80.00%

Abstract:

Computational Swarms, consisting of the integration of smart sensors and actuators into our connected world, enable an extension of the infosphere into the physical world. We call this extended, cyber-physical infosphere the Swarm. This work proposes a vision of the Swarm in which computational devices cooperate dynamically and opportunistically, generating organic, heterogeneous networks. The thesis presents a computational architecture for the Control Plane of the Swarm Operating System, a distributed software layer embedded in every device belonging to the Swarm. This layer is responsible for managing resources and defining actors, and for specifying how services and resources are described and used: how they are advertised and discovered, and how transactions, content adaptation and multi-agent cooperation are carried out. The architecture design began with a review of the characterization of the Swarm concept, revisiting definitions and establishing the terminology to be used. Requirements and challenges were identified and an operational vision was proposed. This vision was exercised through use cases, from which the architectural elements were extracted and organized into an architecture. The architecture was then tested against the use cases, producing revisions of the system. Each of the architectural elements required a review of the state of the art. A proof of concept of the Control Plane was implemented, and a demonstration was proposed and built. The selected demonstration was the Smart Jukebox, which exercises the distributed aspects and the dynamicity of the proposed system. This work presents the vision of the computational Swarm and delivers a practically applicable platform.
The evolution of this architecture can become the basis of a global, heterogeneous and organic network of computational device networks, leveraging the integration of cyber-physical systems in the cloud and allowing scalable, flexible systems to cooperate and interoperate toward common goals.
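Advertisement and discovery of services is one Control Plane responsibility named above. A minimal in-memory sketch, with invented device and service names and none of the distribution, transactions or content adaptation the actual architecture handles, might look like:

```python
class ControlPlane:
    """Hypothetical in-memory sketch of Swarm service advertisement/discovery."""
    def __init__(self):
        self.registry = {}   # service name -> set of provider device ids

    def advertise(self, device, service):
        """A device announces that it offers a service."""
        self.registry.setdefault(service, set()).add(device)

    def withdraw(self, device, service):
        """A device leaves the Swarm or stops offering the service."""
        self.registry.get(service, set()).discard(device)

    def discover(self, service):
        """Return the providers currently offering the service, sorted."""
        return sorted(self.registry.get(service, set()))

cp = ControlPlane()
cp.advertise("phone-1", "audio/playback")
cp.advertise("speaker-7", "audio/playback")
cp.withdraw("phone-1", "audio/playback")
print(cp.discover("audio/playback"))
```

In the real system this registry would itself be distributed across devices; the single-dictionary version only shows the advertise/discover contract.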

Relevance: 80.00%

Abstract:

Numerical simulations of turbulence-driven flow in a dense medium cyclone with magnetite medium have been conducted using Fluent. The predicted air-core shape and diameter were found to be close to the experimental results measured by gamma-ray tomography. The large eddy simulation (LES) turbulence model combined with the Mixture multi-phase model can plausibly predict the air/slurry interface accurately, although the LES may need a finer grid. Multi-phase simulations (air/water/medium) show appropriate medium segregation effects but over-predict the level of segregation compared with that measured by gamma-ray tomography, in particular over-predicting medium concentrations near the wall. Further, the accurate prediction of axial segregation of magnetite was investigated using the LES turbulence model together with the multi-phase mixture model and viscosity corrections according to the feed particle loading factor. Adding lift forces and the viscosity correction improved the predictions, especially near the wall. Predicted density profiles are very close to the gamma-ray tomography data, showing a clear density drop near the wall. The effect of the magnetite size distribution has been fully studied. It is interesting to note that the ultra-fine magnetite sizes (i.e. 2 and 7 µm) are distributed uniformly throughout the cyclone. As the magnetite size increases, more segregation of magnetite occurs close to the wall. The cut size (d50) of the magnetite segregation is 32 µm, which is expected with a superfine magnetite feed size distribution. At higher feed densities the agreement between the correlations of [Dungilson, 1999; Wood, J.C., 1990. A performance model for coal-washing dense medium cyclones, Ph.D. Thesis, JKMRC, University of Queensland] and the CFD is reasonably good, but the overflow density is lower than the model predictions.
It is believed that the excessive underflow volumetric flow rates are responsible for the under-prediction of the overflow density. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 80.00%

Abstract:

In this thesis we study, at the perturbative level, correlation functions of Wilson loops (and local operators) and their relations to localization, integrability and other quantities of interest, such as the cusp anomalous dimension and the Bremsstrahlung function. First we consider a general class of 1/8 BPS Wilson loops and chiral primaries in N=4 super Yang-Mills theory. We perform explicit two-loop computations, for a particular but still rather general configuration, that confirm the elegant results expected from the localization procedure. Notably, we find full consistency with the multi-matrix model averages, obtained from 2D Yang-Mills theory on the sphere, when interacting diagrams do not cancel and contribute non-trivially to the final answer. We also discuss the near-BPS expansion of the generalized cusp anomalous dimension with L units of R-charge. Integrability provides an exact solution, obtained by solving a general TBA equation in the appropriate limit; we propose here an alternative method based on supersymmetric localization. The basic idea is to relate the computation to the vacuum expectation value of certain 1/8 BPS Wilson loops with local operator insertions along the contour. These observables also localize on a two-dimensional gauge theory on S^2, opening the possibility of exact calculations. As a test of our proposal, we reproduce the leading Lüscher correction at weak coupling to the generalized cusp anomalous dimension. This result is also checked against a genuine Feynman diagram approach in N=4 super Yang-Mills theory. Finally, we study the cusp anomalous dimension in N=6 ABJ(M) theory, identifying a scaling limit in which the ladder diagrams dominate. The resummation is encoded in a Bethe-Salpeter equation that is mapped to a Schrödinger problem, exactly solvable due to the surprising supersymmetry of the effective Hamiltonian.
In the ABJ case the solution implies the diagonalization of the U(N) and U(M) building blocks, suggesting the existence of two independent cusp anomalous dimensions and an unexpected exponentiation structure for the related Wilson loops.
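The near-BPS limit mentioned above is usually phrased through the Bremsstrahlung function. As a sketch of the standard statement from the N=4 SYM localization literature (conventions vary; the symbols here are not taken from this abstract):

```latex
% Small-angle (near-BPS) expansion of the cusp anomalous dimension:
\Gamma_{\mathrm{cusp}}(\varphi,\lambda) \simeq -B(\lambda)\,\varphi^{2},
\qquad \varphi \to 0,
% with the Bremsstrahlung function fixed by localization in terms of
% the 1/2 BPS circular Wilson loop expectation value:
B(\lambda) = \frac{1}{2\pi^{2}}\,\lambda\,\partial_{\lambda}
\log \langle W_{\bigcirc} \rangle .
```

The generalized cusp with L units of R-charge studied in the thesis admits an analogous expansion governed by an L-dependent Bremsstrahlung function.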

Relevance: 80.00%

Abstract:

Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models, "job openings" are a key component. Thus, when taking these models to the data, we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured either in survey or register data. Insofar as this measure captures the concept of job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better-fitting aggregate matching function vis-à-vis the traditional vacancy measure. Firm Level Evidence from Two Vacancy Measures. Using firm-level survey and register data for both Sweden and Denmark, we show systematic mis-measurement in both vacancy measures. While the register-based measure on the aggregate constitutes a quarter of the survey-based measure, the latter is not a superset of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement, and are not necessarily changes in the actual number of vacancies. Swedish Unemployment Dynamics.
We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for slow convergence to steady state, which is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows to and outflows from unemployment are roughly 60/30. The remaining 10% are due to flows not involving unemployment. Even though temporary contracts only cover 9-11% of the working-age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics. The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a 2-agent version of the standard New Keynesian model, where a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
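The abstract does not state the functional form of the aggregate matching function; the workhorse specification in the search-and-matching literature, against which the fit of a vacancy measure is typically judged, is the Cobb-Douglas form:

```latex
% Hires H_t produced from unemployment U_t and job openings V_t:
H_t = A\,U_t^{\eta}\,V_t^{1-\eta}, \qquad 0 < \eta < 1,
% usually estimated in logs:
\log H_t = \log A + \eta \log U_t + (1-\eta)\log V_t .
```

Under this specification, "better fitting" means that replacing the traditional vacancy series V_t with the proposed job-openings measure improves the fit of the log regression.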

Relevance: 80.00%

Abstract:

Underpinned by the resource-based view (RBV), social exchange theory (SET), and a theory of intrinsic motivation (empowerment), I proposed and tested a multi-level model that simultaneously examines the intermediate linkages, or mechanisms, through which HPWS impact individual and organizational performance. First, underpinned by the RBV, I examined, at the unit level, collective human capital and competitive advantage as pathways through which the use of HPWS influences branch market performance. Second, underpinned by social exchange (perceived organizational support) and intrinsic motivation (psychological empowerment) theories, I examined cross-level and individual-level mechanisms through which experienced HPWS may influence employee performance. I tested the propositions of this study with multi-source data obtained from junior and senior customer contact employees, and managers, of 37 branches of two banks in Ghana. Results of the structural equation modeling (SEM) analysis revealed that collective human capital partially mediated the relationship between management-rated HPWS and competitive advantage, while competitive advantage completely mediated the influence of human capital on branch market performance. Consequently, management-rated HPWS influenced branch market performance indirectly through collective human capital and competitive advantage. Additionally, results of hierarchical linear modeling (HLM) tests of the cross-level influences on the motivational implications of HPWS revealed that (i) management-rated HPWS influenced experienced HPWS; (ii) perceived organizational support (POS) and psychological empowerment fully mediated the influence of experienced HPWS on service-oriented organizational citizenship behaviour (OCB); and (iii) service-oriented OCB mediated the influence of psychological empowerment and POS on service quality and task performance. I discuss the theoretical and practical implications of these findings.

Relevance: 80.00%

Abstract:

This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchically structured data are available. Using robust FDH estimates, we show how to decompose the overall inefficiency of a unit (a pupil) into a unit-specific and a higher-level (a school) component. Using a sample of entry and exit attainments of 3017 girls in British ordinary single-sex schools, we test the robustness of the non-parametric and parametric estimates. Finally, the paper uses the traditional MLM model in a best-practice framework so that pupil and school efficiencies can be computed.
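An output-oriented FDH score can be sketched in a few lines: a pupil's efficiency is her exit attainment relative to the best exit attainment achieved by any pupil who entered with no higher a score. The attainment pairs below are invented:

```python
def fdh_output_efficiency(units):
    """
    Output-oriented FDH score for each (entry_score, exit_score) unit:
    exit attainment divided by the best exit attainment among units whose
    entry attainment is no higher than the unit's own. 1.0 = on the frontier.
    """
    scores = []
    for x_i, y_i in units:
        best = max(y for x, y in units if x <= x_i)
        scores.append(y_i / best)
    return scores

# Hypothetical (entry, exit) attainment pairs for five pupils
pupils = [(40, 50), (40, 60), (55, 55), (55, 70), (70, 70)]
print(fdh_output_efficiency(pupils))
```

Decomposing inefficiency as in the paper would further compare each pupil's score against the frontier of her own school versus the frontier over all schools; that refinement is omitted here.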

Relevance: 80.00%

Abstract:

In many models of edge analysis in biological vision, the initial stage is a linear 2nd-derivative operation. Such models predict that adding a linear luminance ramp to an edge will have no effect on the edge's appearance, since the ramp has no effect on the 2nd derivative. Our experiments did not support this prediction: adding a negative-going ramp to a positive-going edge (or vice versa) greatly reduced the perceived blur and contrast of the edge. The effects on a fairly sharp edge were accurately predicted by a nonlinear multi-scale model of edge processing [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision], in which a half-wave rectifier comes after the 1st-derivative filter. But we also found that the ramp affected perceived blur more profoundly when the edge blur was large, and this greater effect was not predicted by the existing model. The model's fit to these data was much improved when the simple half-wave rectifier was replaced by a threshold-like transducer [May, K. A. & Georgeson, M. A. (2007). Blurred edges look faint, and faint edges look sharp: The effect of a gradient threshold in a multi-scale edge coding model. Vision Research, 47, 1705-1720]. This modified model correctly predicted that the interaction between ramp gradient and edge scale would be much larger for blur perception than for contrast perception. In our model, the ramp narrows an internal representation of the gradient profile, leading to a reduction in perceived blur. This in turn reduces perceived contrast because estimated blur plays a role in the model's estimation of contrast. Interestingly, the model predicts that analogous effects should occur when the width of the window containing the edge is made narrower.
This has already been confirmed for blur perception; here, we further support the model by showing a similar effect for contrast perception. © 2007 Elsevier Ltd. All rights reserved.
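The pipeline described, a linear 1st-derivative filter followed by a half-wave rectifier or a threshold-like transducer, can be sketched numerically. The signal values and the threshold below are invented; the point is only that a shallow opposing ramp pushes the tails of the gradient profile below threshold, narrowing the supra-threshold profile:

```python
def gradient(signal):
    """Discrete 1st derivative (the model's initial linear filter)."""
    return [b - a for a, b in zip(signal, signal[1:])]

def half_wave(g):
    """Half-wave rectifier: keep only positive gradients."""
    return [max(v, 0.0) for v in g]

def threshold_transducer(g, t=0.05):
    """Threshold-like transducer: gradients below t are not transmitted."""
    return [v - t if v > t else 0.0 for v in g]

# A blurred positive-going edge, with and without a negative-going ramp
edge = [0.0, 0.1, 0.3, 0.7, 0.9, 1.0]
ramp_slope = -0.06
edge_ramp = [v + ramp_slope * i for i, v in enumerate(edge)]

profile_plain = threshold_transducer(half_wave(gradient(edge)))
profile_ramp = threshold_transducer(half_wave(gradient(edge_ramp)))
# The ramp narrows the supra-threshold gradient profile, which the
# model reads as a sharper (less blurred) edge:
print(sum(1 for v in profile_plain if v > 0),
      sum(1 for v in profile_ramp if v > 0))
```

With the half-wave rectifier alone the profile width is unchanged by the ramp; the threshold is what produces the narrowing, mirroring the paper's reason for preferring the transducer.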

Relevance: 80.00%

Abstract:

The goal of evidence-based medicine is to uniformly apply evidence gained from scientific research to aspects of clinical practice. In order to achieve this goal, new applications that integrate increasingly disparate health care information resources are required. Access to and provision of evidence must be seamlessly integrated with existing clinical workflow and evidence should be made available where it is most often required - at the point of care. In this paper we address these requirements and outline a concept-based framework that captures the context of a current patient-physician encounter by combining disease and patient-specific information into a logical query mechanism for retrieving relevant evidence from the Cochrane Library. Returned documents are organized by automatically extracting concepts from the evidence-based query to create meaningful clusters of documents which are presented in a manner appropriate for point of care support. The framework is currently being implemented as a prototype software agent that operates within the larger context of a multi-agent application for supporting workflow management of emergency pediatric asthma exacerbations. © 2008 Springer-Verlag Berlin Heidelberg.

Relevance: 80.00%

Abstract:

This thesis reports a cross-national study carried out in England and India in an attempt to clarify the association of certain cultural and non-cultural characteristics with people's work-related attitudes and values, and with the structure of their work organizations. Three perspectives are considered to be relevant to the objectives of the study. The contingency perspective suggests that a 'fit' between an organization's context and its structural arrangements will be fundamentally necessary for achieving success and survival. The political economy perspective argues for the determining role of the social and economic structures within which the organization operates. The culturalist perspective looks to cultural attitudes and values of organizational members for an explanation for their organization's structure. The empirical investigation was carried out in three stages in each of the two countries involved by means of surveys of cultural attitudes, work-related attitudes and organizational structures and systems. The cultural surveys suggested that Indian and English people were different from one another with regard to fear of, and respect and obedience to, their seniors, ability to cope with ambiguity, honesty, independence, expression of emotions, fatalism, reserve, and care for others; they were similar with regard to tolerance, friendliness, attitude to change, attitude to law, self-control and self-confidence, and attitude to social differentiation. The second stage of the study, involving the employees of fourteen organizations, found that the English ones perceived themselves to have more power at work, expressed more tolerance for ambiguity, and had different expectations from their job than did the Indian equivalents. The two samples were similar with respect to commitment to their company and trust in their colleagues. The findings also suggested that employees' occupations, education and age had some influences on their work-related attitudes. 
The final stage of the research was a study of structures, control systems, and reward and punishment policies of the same fourteen organizations which were matched almost completely on their contextual factors across the two countries. English and Indian organizations were found to be similar in terms of centralization, specialization, chief executive's span of control, height and management control strategies. English organizations, however, were far more formalized, spent more time on consultation and their managers delegated authority lower down the hierarchy than Indian organizations. The major finding of the study was the multiple association that cultural, national and contingency factors had with the structural characteristics of the organizations and with the work-related attitudes of their members. On the basis of this finding, a multi-perspective model for understanding organizational structures and systems is proposed in which the contributions made by contingency, political economy and cultural perspectives are recognized and incorporated.

Relevance: 80.00%

Abstract:

Aim. To test a model of eight thematic determinants of whether nurses intend to remain in nursing roles. Background. Despite the dramatic increase in the supply of nurses in England over the past decade, a combination of the economic downturn, funding constraints and, more generally, an ageing nursing population means that healthcare organizations are likely to encounter long-term problems in the recruitment and retention of nursing staff. Design. Survey. Method. Data were collected from a large staff survey conducted in the National Health Service in England between September and December 2009. A multi-level model was tested using Mplus statistical software on a sub-sample of 16,707 nurses drawn from 167 healthcare organizations. Results. Findings were generally supportive of the proposed model. Nurses who reported being psychologically engaged with their jobs reported a lower intention to leave their current job. The perceived availability of developmental opportunities, being able to achieve a good work-life balance and whether nurses encountered work pressures were also influencing factors on their turnover intentions. However, relationships formed with colleagues and patients displayed comparatively small relationships with turnover intentions. Conclusion. The focus at the local level needs to be on promoting employee engagement by equipping staff with the resources (physical and monetary) and control to enable them to perform their tasks to the standards they aspire to, and on creating a work environment where staff are fully involved in the wider running of their organizations, communicating to staff that patient care is important and the top priority of the organization. © 2012 Blackwell Publishing Ltd.