829 results for Adaptive Equalization. Neural Networks. Optic Systems. Neural Equalizer
Abstract:
Differently from theoretical scale-free networks, most real networks present multi-scale behavior, with nodes structured into different types of functional groups and communities. While the majority of approaches for classifying nodes in a complex network have relied on local measurements of the topology/connectivity around each node, valuable information about node functionality can be obtained by concentric (or hierarchical) measurements. This paper extends previous methodologies based on concentric measurements by studying the possibility of using agglomerative clustering methods in order to obtain a set of functional groups of nodes, considering a particular institutional collaboration network that includes several known communities (departments of the University of Sao Paulo). Among the findings obtained, we emphasize the scale-free nature of the resulting network, as well as the identification of different authorship patterns emerging from different areas (e.g. human and exact sciences). Another interesting result concerns the relatively uniform distribution of hubs along concentric levels, in contrast to the non-uniform pattern found in theoretical scale-free networks such as the BA model. (C) 2008 Elsevier B.V. All rights reserved.
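To make the methodology above concrete, the following is a minimal Python sketch, under assumed parameters, of concentric (hierarchical) node measurements followed by agglomerative clustering; the feature set, graph generator and number of groups are illustrative stand-ins, not the paper's exact pipeline.

```python
# Sketch: concentric (ring) measurements per node + agglomerative clustering.
import networkx as nx
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def concentric_features(G, node, max_level=3):
    """Number of nodes found at each concentric level (ring) around `node`."""
    lengths = nx.single_source_shortest_path_length(G, node, cutoff=max_level)
    counts = np.zeros(max_level, dtype=float)
    for _, d in lengths.items():
        if 1 <= d <= max_level:
            counts[d - 1] += 1
    return counts

G = nx.barabasi_albert_graph(300, 3, seed=1)          # stand-in for the collaboration network
X = np.array([concentric_features(G, v) for v in G])  # one feature vector per node

Z = linkage(X, method="ward")                         # agglomerative (Ward) clustering
groups = fcluster(Z, t=4, criterion="maxclust")       # e.g. 4 functional groups
print(np.bincount(groups)[1:])                        # group sizes
```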
Abstract:
Methods for the compensation of harmonic currents and voltages have been widely used, since they reduce the harmonic distortion of the voltages and currents in a power system to acceptable levels and also compensate reactive power. The reduction of harmonics and reactive power contributes to lower losses in transmission lines and electrical machinery, a higher power factor, and fewer occurrences of overvoltage and overcurrent. The shunt active power filter is the most efficient method for compensating harmonic currents and voltages, but it requires current and voltage control loops. Conventionally, the current and voltage control loops of the active filter have been implemented with proportional-integral controllers. This work investigates the use of a robust adaptive control technique in the current and voltage control loops of a shunt active power filter, in order to increase robustness and improve the filter's harmonic-compensation performance. The proposed control scheme is based on a combination of adaptive pole placement and variable structure control techniques. The advantages of the proposed method over conventional ones are lower total harmonic distortion and greater flexibility, adaptability and robustness. Moreover, the proposed control scheme improves the transient behavior of the active filter. The proposed technique was validated first through a simulation program implemented in C++ and then through experimental results obtained with a 1 kVA three-phase active filter prototype.
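As a rough illustration of the variable-structure ingredient alone (not the paper's full adaptive pole placement plus variable structure scheme), the sketch below applies a relay control law to a simple first-order model of the filter's coupling inductor; all component values and the reference current are assumptions.

```python
# Toy variable-structure (relay) current control of an L-R branch:
# L di/dt = -R i + u, with u switched to drive the tracking error to zero.
import numpy as np

L, R = 5e-3, 0.1          # assumed inductance [H] and resistance [ohm]
Umax = 100.0              # assumed relay amplitude (inverter voltage bound)
dt, T = 1e-5, 0.05
t = np.arange(0.0, T, dt)

i_ref = 10*np.sin(2*np.pi*60*t) + 3*np.sin(2*np.pi*300*t)  # fundamental + 5th harmonic
i = 0.0
err = np.empty_like(t)
for k, ik_ref in enumerate(i_ref):
    e = ik_ref - i
    u = Umax * np.sign(e)              # variable-structure (relay) law
    i += dt * (-R*i + u) / L           # forward-Euler plant update
    err[k] = e

print("max |error| after transient:", np.max(np.abs(err[len(t)//2:])))
```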
Abstract:
In this work, a Variable Structure Adaptive Backstepping Controller (VS-ABC) is presented for single-input single-output, linear, time-invariant plants with unitary relative degree. Instead of the traditional integral laws for estimating the plant parameters, switching laws are used in order to increase robustness to parametric uncertainties and external disturbances, as well as to improve the transient performance of the system. Additionally, the design of the new controller is more intuitive than that of the original backstepping controller, since the amplitudes of the introduced relays are directly related to the nominal plant parameters. This new variable-structure approach also reduces the complexity of practical implementations, encouraging the use of industrial components such as FPGAs (Field Programmable Gate Arrays), MCUs (microcontrollers) and DSPs (Digital Signal Processors). Preliminary simulations for first- and second-order unstable systems are presented to support the study. One of Rohrs' examples is also addressed through simulations, for both adaptive scenarios: the original adaptive backstepping controller and the VS-ABC.
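For illustration, the sketch below applies relay (switching) parameter laws to a first-order unstable plant in the classical VS-MRAC structure, which captures the same idea for relative-degree-one plants; the paper's VS-ABC uses a backstepping design, and all numerical values here are assumptions.

```python
# Relay parameter laws for y' = a_p*y + k_p*u tracking y_m' = -a_m*y_m + a_m*r,
# with u = theta1*y + theta2*r, theta_i = -theta_i_bar*sgn(e*omega_i), e = y - y_m.
import numpy as np

a_p, k_p = 1.0, 2.0                  # unstable plant (a_p > 0), known sign of k_p
a_m = 3.0                            # reference model pole
theta1_bar, theta2_bar = 3.0, 2.0    # relay amplitudes >= |ideal parameters|

dt, T = 1e-4, 10.0
t = np.arange(0.0, T, dt)
r = np.sign(np.sin(0.5*t))           # square-wave reference

y = y_m = 0.0
for k, rk in enumerate(r):
    e = y - y_m
    theta1 = -theta1_bar*np.sign(e*y)
    theta2 = -theta2_bar*np.sign(e*rk)
    u = theta1*y + theta2*rk
    y   += dt*(a_p*y + k_p*u)        # plant
    y_m += dt*(-a_m*y_m + a_m*rk)    # reference model

print("final tracking error:", abs(y - y_m))
```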
Abstract:
The identification of genes essential for survival is important for understanding the minimal requirements for cellular life and for drug design. As experimental studies aimed at building a catalog of essential genes for a given organism are time-consuming and laborious, a computational approach that could predict gene essentiality with high accuracy would be of great value. We present here a novel computational approach, called NTPGE (Network Topology-based Prediction of Gene Essentiality), that relies on the network topology features of a gene to estimate its essentiality. The first step of NTPGE is to construct the integrated molecular network for a given organism, comprising protein physical, metabolic and transcriptional regulation interactions. The second step consists of training a decision-tree-based machine-learning algorithm on known essential and non-essential genes of the organism of interest, considering as learning attributes the network topology information for each of these genes. Finally, the decision-tree classifier generated is applied to the set of genes of this organism to estimate the essentiality of each gene. We applied the NTPGE approach to discover essential genes in Escherichia coli and then assessed its performance. (C) 2007 Elsevier B.V. All rights reserved.
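A minimal sketch of the NTPGE idea under stated assumptions: topology features are extracted for each gene from a synthetic stand-in interaction network, and a decision tree is trained on known labels; the feature set and the labels below are illustrative only, not the paper's exact attributes or data.

```python
# Sketch: network topology features per gene + decision-tree classification.
import networkx as nx
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

G = nx.powerlaw_cluster_graph(500, 3, 0.1, seed=0)   # stand-in for the integrated network

deg  = dict(G.degree())
clus = nx.clustering(G)
btw  = nx.betweenness_centrality(G, k=100, seed=0)   # approximate betweenness

genes = list(G.nodes())
X = np.array([[deg[g], clus[g], btw[g]] for g in genes])

# Synthetic labels just for the sketch: hubs are more often "essential".
rng = np.random.default_rng(0)
y = np.array([rng.random() < min(0.9, deg[g] / 20) for g in genes], dtype=int)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```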
Abstract:
This paper presents a mathematical model and a methodology to solve the transmission network expansion planning problem considering open access. The methodology finds the optimal transmission network expansion plan that allows the power system to operate adequately in an environment with multiple generation scenarios. The model presented is solved using a specialized genetic algorithm. The methodology is tested on a system from the literature. ©2008 IEEE.
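A toy sketch of the genetic-algorithm idea follows, assuming a drastically simplified fitness function (investment cost plus a penalty for capacity shortfall across several generation scenarios) in place of the paper's full operation model; all candidate-line data are invented for illustration.

```python
# Toy GA over candidate line additions for expansion planning.
import numpy as np

rng = np.random.default_rng(1)
cost     = np.array([10.0, 14.0, 8.0, 20.0, 12.0, 9.0])    # candidate line costs
capacity = np.array([60.0, 90.0, 45.0, 130.0, 70.0, 50.0]) # MW each line adds
required = np.array([150.0, 180.0, 120.0])                 # MW needed per generation scenario
PENALTY  = 5.0                                             # cost per MW of shortfall

def fitness(x):
    cap = capacity @ x
    shortfall = np.maximum(required - cap, 0.0).sum()
    return cost @ x + PENALTY * shortfall

def evolve(pop_size=40, gens=60, pm=0.1):
    pop = rng.integers(0, 2, size=(pop_size, len(cost)))
    for _ in range(gens):
        f = np.array([fitness(x) for x in pop])
        parents = pop[np.argsort(f)[:pop_size // 2]]        # truncation selection
        children = parents.copy()
        cuts = rng.integers(1, len(cost), size=len(children))
        for c, child in zip(cuts, children):                # one-point crossover
            mate = parents[rng.integers(len(parents))]
            child[c:] = mate[c:]
        mutate = rng.random(children.shape) < pm            # bit-flip mutation
        children[mutate] ^= 1
        pop = np.vstack([parents, children])
    best = min(pop, key=fitness)
    return best, fitness(best)

plan, obj = evolve()
print("selected candidate lines:", np.flatnonzero(plan), "objective:", obj)
```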
Abstract:
This paper deals with the problem of establishing stabilizing state-dependent switching laws for DC-DC converters operating in continuous conduction mode (CCM) and comparing their performance indexes. Firstly, the nature of the problem is defined, that is, the study of switched affine systems, whose subsystems may not share a common equilibrium point; the concept of stability is therefore broadened. Then, the central theorem is proposed, from which a family of switching laws can be derived, namely the minimum law and the hold-state law. Some of these laws are proved to stabilize the basic DC-DC converters, and their performance is then compared by simulation with a law from a previous work, obtaining a great reduction in overshoot. © 2011 IEEE.
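A minimal simulation sketch of a "minimum"-type switching law for a buck converter in CCM, modeled as a switched affine system x' = A x + b_s: the law picks the mode with the most negative Lyapunov-function derivative, with P obtained from a Lyapunov equation for the converter's (Hurwitz) state matrix. Component values and this particular stability certificate are illustrative assumptions, not the paper's exact construction.

```python
# Minimum switching law for a buck converter: state x = [iL, vC].
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

Vin, L, C, R = 24.0, 1e-3, 100e-6, 10.0        # assumed component values
A = np.array([[0.0, -1.0/L],
              [1.0/C, -1.0/(R*C)]])            # same A in both modes (buck, CCM)
b = {0: np.array([0.0, 0.0]),                  # switch off
     1: np.array([Vin/L, 0.0])}                # switch on

P = solve_continuous_lyapunov(A.T, -np.eye(2)) # A^T P + P A = -I (A is Hurwitz)

v_ref = 12.0
x_ref = np.array([v_ref/R, v_ref])             # target equilibrium [iL, vC]

dt = 1e-7
x = np.zeros(2)
for _ in range(50_000):
    # minimum law: switch to the mode with the most negative V-dot
    s = min(b, key=lambda m: (x - x_ref) @ P @ (A @ x + b[m]))
    x = x + dt * (A @ x + b[s])

print("final state [iL, vC]:", x)              # expected to hover near x_ref
```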
Abstract:
We have completed a high-contrast direct imaging survey for giant planets around 57 debris disk stars as part of the Gemini NICI Planet-Finding Campaign. We achieved median H-band contrasts of 12.4 mag at 0.5″ and 14.1 mag at 1″ separation. Follow-up observations of the 66 candidates with projected separation <500 AU show that all of them are background objects. To establish statistical constraints on the underlying giant planet population based on our imaging data, we have developed a new Bayesian formalism that incorporates (1) non-detections, (2) single-epoch candidates, (3) astrometric and (4) photometric information, and (5) the possibility of multiple planets per star to constrain the planet population. Our formalism allows us to include in our analysis the previously known β Pictoris and HR 8799 planets. Our results show at 95% confidence that <13% of debris disk stars have a ≥5 M_Jup planet beyond 80 AU, and <21% of debris disk stars have a ≥3 M_Jup planet outside of 40 AU, based on hot-start evolutionary models. We model the population of directly imaged planets as d²N/(dm da) ∝ m^α a^β, where m is planet mass and a is orbital semi-major axis (with a maximum value of a_max). We find that β < -0.8 and/or α > 1.7. Likewise, we find that β < -0.8 and/or a_max < 200 AU. For the case where the planet frequency rises sharply with mass (α > 1.7), this occurs because all the planets detected to date have masses above 5 M_Jup, but planets of lower mass could easily have been detected by our search. If we ignore the β Pic and HR 8799 planets (should they belong to a rare and distinct group), we find that <20% of debris disk stars have a ≥3 M_Jup planet beyond 10 AU, and β < -0.8 and/or α < -1.5. Likewise, β < -0.8 and/or a_max < 125 AU. Our Bayesian constraints are not strong enough to reveal any dependence of the planet frequency on stellar host mass. Studies of transition disks have suggested that about 20% of stars are undergoing planet formation; our non-detections at large separations show that planets with orbital separation >40 AU and planet masses >3 M_Jup do not carve the central holes in these disks.
Abstract:
Concentration photovoltaic (CPV) systems can produce quite uneven irradiance distributions (both in level and in spectral content) on the solar cell. This effect can be even more evident when the CPV system is slightly off-axis, since such systems are often designed to assure good uniformity only at normal incidence. The non-uniformities in both absolute irradiance and spectral content produced by CPV systems can cause electrical losses in multi-junction solar cells (MJSC). This work focuses on integrating ray-tracing methods, used to simulate the irradiance and spectrum maps produced by different optical systems across the solar cell surface, with a 3D fully distributed circuit model that simulates the electrical behavior of a state-of-the-art triple-junction solar cell under the different light distributions obtained with ray-tracing. In this study, four different CPV systems (SILO, XTP, RTP, and FK) comprising Fresnel lenses that concentrate sunlight onto the same solar cell are modeled when working on-axis and 0.6 degrees off-axis. The study reveals the impact of non-uniformities on CPV system behavior. The FK outperforms the other Fresnel-based CPV systems in both on-axis and off-axis conditions.
Abstract:
Security in computer networks is an area that has been widely studied and the focus of extensive research in recent years. Due to the continuous increase in the complexity and sophistication of attacks, the increase in their propagation speed, and the current slowness of reaction against intrusions, there is a clear need for intrusion detection and response mechanisms that not only detect attacks but are also able to block them and mitigate their impact as far as possible. Intrusion Detection Systems (IDSs) are fairly mature technologies whose aim is to detect any malicious behavior occurring in the network. These systems have evolved rapidly in recent years into very mature tools based on different paradigms (statistical anomaly-based, signature-based and hybrid), which improve their detection capability and give them a high level of reliability. An Intrusion Response System (IRS), on the other hand, is a security component that can be present in the architecture of a computer network and is capable of reacting to the incidents detected by an Intrusion Detection System (IDS). Unfortunately, this technology has not evolved at the same pace as IDSs: the reaction against detected attacks is slow and basic, and these systems have problems executing responses automatically. This doctoral dissertation addresses the existing problem of automated reaction against intrusions by using ontologies, formal behaviour specification languages and semantic reasoners as the basis of the architecture of an automated intrusion response system (AIRS).
The aim of the approach is to take advantage of ontologies in heterogeneous environments, as well as of their ability to specify behavior for the objects that represent the elements of the modeled domain. This ability to specify behavior is of great use for the AIRS to infer the optimum response to an intrusion in the shortest possible time.
Abstract:
The growing complexity, heterogeneity and dynamism inherent in telecommunications networks, distributed systems and the emerging advanced information and communication services, as well as their increased criticality and strategic importance, call for the adoption of increasingly sophisticated technologies for their management, coordination and integration by network operators, service providers and end-user companies, in order to assure adequate levels of functionality, performance and reliability. The management strategies adopted traditionally follow models that are too static and centralised, have a high supervision component and are difficult to scale. The pressing need to make management more flexible and, at the same time, more scalable and robust has recently led to considerable interest in developing new paradigms based on hierarchical and distributed models, as a natural evolution from the first weakly distributed hierarchical models that succeeded the centralised paradigm. Thus new models based on management by delegation, the mobile code paradigm, distributed object technologies and web services came into being. These alternatives have turned out to be enormously robust, flexible and scalable compared with the traditional management strategies; however, many problems still remain to be solved. Current research lines assume that the distributed hierarchical paradigm has as yet failed to solve many of the problems related to robustness, scalability and flexibility, and they advocate migration towards a strongly distributed cooperative paradigm.
These lines of research were spawned by Distributed Artificial Intelligence (DAI) and, specifically, by the autonomous agent paradigm and Multi-Agent Systems (MAS). They all revolve around a set of objectives that can be summarised as achieving greater autonomy in management functionality and a greater self-configuration capability, which solves the problems of scalability and the need for supervision that plague current systems, evolving towards strongly distributed, goal-driven cooperative control techniques, and semantically enriching the information models. More and more researchers are starting to use agents for network and distributed systems management. However, the boundaries established in their work between mobile agents (which follow the mobile code paradigm) and autonomous agents (which really follow the cooperative paradigm) are fuzzy. Many of these works focus on the use of mobile agents, which, as was the case with the above-mentioned mobile code techniques, allows them to inject more dynamism into the traditional concept of management by delegation. In this way they are able to make management more flexible, distribute the management logic close to the data and distribute control; however, they remain within the distributed hierarchical paradigm. While a management architecture faithful to the strongly distributed cooperative paradigm has yet to be defined, these lines of research have revealed that the information, communication and organisational models of existing management architectures are far from adequate.
In this context, this dissertation presents an architectural model for the holonic management of distributed systems and services through societies of autonomous agents. The main objectives of this model are to raise the level of management task automation, increase the scalability of management solutions, provide support for delegation by both domains and macro-tasks, and achieve a high level of interoperability in open environments. Bearing in mind these objectives, a formal semantic information model based on description logic has been developed, which increases management automation through the use of rational autonomous agents capable of reasoning, inferring and dynamically integrating knowledge and services conceptualised by means of the CIM model and formalised at the semantic level by means of description logic. The information model also includes a mapping, at the CIM metamodel level, to the OWL ontology specification language, which amounts to a significant advance in the field of XML-based model and metainformation representation and exchange. At the interaction level, the model introduces a formal specification language (ACSL) for conversations between agents based on speech act theory and contributes an operational semantics for this language that eases the task of verifying formal properties associated with the interaction protocol. A role-oriented holonic organisational model has also been developed, whose main features meet the requirements demanded by emerging distributed services, including the absence of centralised control, dynamic restructuring capabilities, cooperative skills and facilities for adaptation to different organisational cultures. The model includes a normative submodel suited to the autonomous nature of the management holons and based on the deontic and action modal logics.
Abstract:
Singular-value decomposition (SVD)-based multiple-input multiple-output (MIMO) systems, where the whole MIMO channel is decomposed into a number of unequally weighted single-input single-output (SISO) channels, have attracted a lot of attention in the wireless community. The unequal weighting of the SISO channels has led to intensive research on bit and power allocation, in particular for MIMO channels with poor scattering conditions, known as the antenna correlation effect, where the unequal weighting of the SISO channels becomes even more pronounced. In comparison to SVD-assisted MIMO transmission, geometric mean decomposition (GMD)-based MIMO systems are able to compensate for the drawback of the unequally weighted SISO channels produced by the SVD, since the decomposition result is nearly independent of the antenna correlation effect. The interference remaining after the GMD-based signal processing can easily be removed by dirty-paper precoding, as demonstrated in this work. Our results show that GMD-based MIMO transmission has the potential to significantly simplify the bit- and power-loading processes and outperforms SVD-based MIMO transmission as long as the same QAM constellation size is used on all equally weighted SISO channels.
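As a quick numerical illustration (not the paper's system model), the sketch below compares the unequal SVD layer gains of a correlated MIMO channel with the single equal gain a GMD would assign to every layer, namely the geometric mean of the singular values; the correlation model and array sizes are assumptions.

```python
# SVD layer gains vs. the equal GMD gain for uncorrelated and correlated channels.
import numpy as np

rng = np.random.default_rng(0)
nT = nR = 4

def correlated_channel(rho):
    """Kronecker-correlated Rayleigh channel with exponential correlation rho."""
    idx = np.arange(nR)
    Rc = rho ** np.abs(idx[:, None] - idx[None, :])     # same correlation at both ends
    H = (rng.standard_normal((nR, nT)) + 1j*rng.standard_normal((nR, nT))) / np.sqrt(2)
    Rsqrt = np.linalg.cholesky(Rc)
    return Rsqrt @ H @ Rsqrt.conj().T

for rho in (0.0, 0.9):
    s = np.linalg.svd(correlated_channel(rho), compute_uv=False)
    gmd_gain = s.prod() ** (1/len(s))                   # equal diagonal value a GMD would give
    print(f"rho={rho}: SVD layer gains {np.round(s, 2)}, GMD gain {gmd_gain:.2f}")
```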
Abstract:
Several languages have been proposed for describing networks of systems, whether to help manage them, simulate them, or deploy testbeds for testing purposes. However, none is specifically designed to describe honeynets, covering the specific characteristics of the applications and tools included in the honeypot systems that make up the honeynet. In this paper, the requirements for honeynet description are studied and a survey of existing description languages is presented, concluding that the CIM (Common Information Model) matches the basic requirements. Thus, a CIM-like, technology-independent honeynet description language (TIHDL) is proposed. The language is defined independently of the platform where the honeynet will later be deployed, and it can be translated, either using model-driven techniques or other translation mechanisms, into the description languages of honeynet deployment platforms and tools. This approach gives the flexibility to use a combination of heterogeneous deployment platforms. Besides, a flexible virtual honeynet generation tool (HoneyGen), based on the proposed approach and description language and capable of deploying honeynets over the VNX (Virtual Networks over LinuX) and Honeyd platforms, is presented for validation purposes.
Abstract:
The analysis of clusters has attracted considerable interest over the last few decades. The articulation of clusters into complex networks and systems of innovation -- generally known as regional innovation systems -- has, in particular, been associated with the delivery of greater innovation and growth. However, despite the growing economic and policy relevance of clusters, little systematic research has been conducted into their association with other factors promoting innovation and economic growth. This article addresses this issue by looking at the relationship between innovation and economic growth in 152 regions of Europe during the period between 1995 and 2006. Using an econometric model with a static and a dynamic dimension, the results of the analysis highlight that: a) regional growth through innovation in Europe is fundamentally connected to the presence of an adequate socioeconomic environment and, in particular, to the existence of a well-trained and educated pool of workers; b) the presence of clusters matters for regional growth, but only in combination with a good ‘social filter’, and this association wanes over time; c) more traditional R&D variables have a weak initial connection to economic development, but this connection increases over time and is, once again, contingent on the existence of adequate socioeconomic conditions.