829 results for Multi-classifier systems
Abstract:
Postgraduate Program in Computer Science - IBILCE
Abstract:
Connectivity is the basic factor for the proper operation of any wireless network. In a mobile wireless sensor network it is a challenge for applications and protocols to deal with connectivity problems, as links may go up and down frequently. In these scenarios, knowledge of a node's remaining connectivity time could both improve the performance of the protocols (e.g. handoff mechanisms) and save possibly scarce node resources (CPU, bandwidth, and energy) by preventing unfruitful transmissions. This paper provides a solution called the Genetic Machine Learning Algorithm (GMLA) to forecast the remaining connectivity time in mobile environments. It consists of combining Classifier Systems with a Markov chain model of the RF link quality. The main advantage of using an evolutionary approach is that the Markov model parameters can be discovered on the fly, making it possible to cope with unknown environments and mobility patterns. Simulation results show that the proposal is a very suitable solution, as it outperforms similar approaches.
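To make the idea concrete, here is a minimal sketch assuming a two-state (UP/DOWN) link model: a genetic algorithm evolves the transition probabilities of a Markov chain from an observed link trace, and the fitted chain yields an expected remaining connectivity time. All names, parameters, and the fitness definition are illustrative assumptions, not the paper's actual GMLA implementation.

    import math
    import random

    def expected_up_time(p_stay_up, slot=1.0):
        # Mean remaining UP time for a geometric sojourn: slot / P(leave UP).
        return slot / max(1.0 - p_stay_up, 1e-9)

    def fitness(p_stay_up, p_stay_down, trace):
        # Log-likelihood of the observed UP(1)/DOWN(0) trace under the chain.
        ll = 0.0
        for a, b in zip(trace, trace[1:]):
            if a == 1:
                ll += math.log(p_stay_up if b == 1 else 1.0 - p_stay_up)
            else:
                ll += math.log(p_stay_down if b == 0 else 1.0 - p_stay_down)
        return ll

    def evolve(trace, pop_size=30, generations=50):
        # Each genome is a pair of transition probabilities.
        pop = [(random.uniform(0.01, 0.99), random.uniform(0.01, 0.99))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda g: -fitness(g[0], g[1], trace))
            parents = pop[:pop_size // 2]
            children = []
            for _ in range(pop_size - len(parents)):
                a, b = random.sample(parents, 2)
                child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
                child = tuple(min(0.99, max(0.01, x + random.gauss(0, 0.05)))
                              for x in child)                    # mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=lambda g: fitness(g[0], g[1], trace))

    trace = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1]  # observed link states
    p_up, p_down = evolve(trace)
    print("predicted remaining UP time:", expected_up_time(p_up))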
Abstract:
This paper presents a multi-agent architecture designed for developing process supervision and control systems, with the main objective of automating tasks that are repetitive, stressful, and error prone when performed by humans. Based on a study of applications in the literature that use multi-agent systems for data integration and process monitoring aimed at fault detection and diagnosis, a set of agents was identified; these agents form the basis of the proposed multi-agent architecture. A prototype system for the analysis of abnormalities during oil well drilling was developed.
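The following toy sketch illustrates the kind of agent roles the abstract describes (sensor agents for data integration, a monitoring agent, a diagnosis agent). All class names, thresholds, and rules are invented for illustration and are not taken from the proposed architecture.

    # Hypothetical sketch of the agent pipeline: sensor agents feed readings
    # to a monitor, which raises alarms that a diagnosis agent interprets.

    class SensorAgent:
        def __init__(self, name, read_fn):
            self.name, self.read = name, read_fn

    class MonitorAgent:
        def __init__(self, limits):
            self.limits = limits  # {sensor_name: (low, high)}

        def check(self, readings):
            alarms = []
            for name, value in readings.items():
                low, high = self.limits[name]
                if not low <= value <= high:
                    alarms.append((name, value))
            return alarms

    class DiagnosisAgent:
        def diagnose(self, alarms):
            # Trivial rule base standing in for real fault models.
            return [f"abnormality on {name}: value {value}"
                    for name, value in alarms]

    sensors = [SensorAgent("mud_flow", lambda: 9.7),
               SensorAgent("pressure", lambda: 151.0)]
    monitor = MonitorAgent({"mud_flow": (8.0, 12.0), "pressure": (100.0, 150.0)})
    readings = {s.name: s.read() for s in sensors}
    for msg in DiagnosisAgent().diagnose(monitor.check(readings)):
        print(msg)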
Abstract:
Agent Communication Languages (ACLs) have been developed to provide a way for agents to communicate with each other, supporting cooperation in Multi-Agent Systems. In the past few years many ACLs have been proposed for Multi-Agent Systems, such as KQML and FIPA-ACL. The goal of these languages is to support high-level, human-like communication among agents, exploiting Knowledge Level features rather than symbol-level ones. Adopting these ACLs, and mainly the FIPA-ACL specifications, many agent platforms and prototypes have been developed. Despite these efforts, an important issue in the research on ACLs is still open: how these languages should deal (at the Knowledge Level) with possible failures of agents. Indeed, the notion of Knowledge Level cannot be straightforwardly extended to a distributed framework such as MASs, because problems concerning communication and concurrency may arise when several Knowledge Level agents interact (for example deadlock or starvation). The main contribution of this Thesis is the design and implementation of NOWHERE, a platform to support Knowledge Level Agents on the Web. NOWHERE exploits an advanced Agent Communication Language, FT-ACL, which provides high-level fault-tolerant communication primitives and satisfies a set of well-defined Knowledge Level programming requirements. NOWHERE is well integrated with current technologies, for example providing full integration with Web services. Since it supports different message-transport middleware, it can be adapted to various scenarios. In this Thesis we present the design and implementation of the architecture, together with a discussion of the most interesting details and a comparison with other emerging agent platforms. We also present several case studies where we discuss the benefits of programming agents using the NOWHERE architecture, comparing the results with other solutions. Finally, the complete source code of the basic examples can be found in the appendix.
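As a rough illustration of what a fault-tolerant, Knowledge Level primitive might look like, the sketch below gives an ask operation with both a success and a failure continuation, so a crashed or silent peer is reported instead of hanging the asker. The API names are hypothetical; they are not FT-ACL's actual primitives.

    # Hypothetical sketch in the spirit of FT-ACL: the caller supplies both a
    # success and a failure continuation, so agent failures surface at the
    # Knowledge Level. Names are invented, not FT-ACL's real API.

    import queue
    import threading

    class Agent:
        def __init__(self, name):
            self.name, self.inbox, self.alive = name, queue.Queue(), True

        def ask_one(self, peer, query, on_answer, on_failure, timeout=1.0):
            if not peer.alive:
                on_failure(peer.name, query)   # failure continuation
                return
            peer.inbox.put((query, self))
            try:
                answer = self.inbox.get(timeout=timeout)
                on_answer(answer)              # success continuation
            except queue.Empty:
                on_failure(peer.name, query)   # peer went silent

    responder, asker = Agent("kb-agent"), Agent("client")

    def serve():
        query, sender = responder.inbox.get()
        sender.inbox.put(f"answer to {query!r}")

    threading.Thread(target=serve, daemon=True).start()
    asker.ask_one(responder, "price(book)?",
                  on_answer=lambda a: print("got:", a),
                  on_failure=lambda who, q: print("agent", who, "failed on", q))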
Abstract:
The advent of distributed and heterogeneous systems has laid the foundation for the birth of new architectural paradigms, in which many separate and autonomous entities collaborate and interact with the aim of achieving complex strategic goals that would be impossible to accomplish on their own. A non-exhaustive list of systems targeted by such paradigms includes Business Process Management, Clinical Guidelines and Careflow Protocols, and Service-Oriented and Multi-Agent Systems. It is largely recognized that engineering these systems requires novel modeling techniques. In particular, many authors claim that an open, declarative perspective is needed to complement the closed, procedural nature of state-of-the-art specification languages. For example, the ConDec language has recently been proposed to target the declarative and open specification of Business Processes, overcoming the over-specification and over-constraining issues of classical procedural approaches. On the one hand, the success of such novel modeling languages strongly depends on their usability by non-IT-savvy users: they must provide an appealing, intuitive graphical front-end. On the other hand, they must be amenable to verification, in order to guarantee the trustworthiness and reliability of the developed model, as well as to ensure that the actual executions of the system effectively comply with it. In this dissertation, we claim that Computational Logic is a suitable framework for dealing with the specification, verification, execution, monitoring and analysis of these systems. We propose to adopt an extended version of the ConDec language for specifying interaction models with a declarative, open flavor. We show how all the (extended) ConDec constructs can be automatically translated to the CLIMB Computational Logic-based language, and illustrate how its corresponding reasoning techniques can be successfully exploited to provide support and verification capabilities along the whole life cycle of the targeted systems.
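For a flavour of the declarative style involved, the sketch below checks one typical ConDec construct, response(A, B) (every occurrence of A must eventually be followed by B), against a finished execution trace. It only illustrates the kind of property the CLIMB machinery reasons about; it is not the actual ConDec-to-CLIMB translation.

    # Minimal compliance check for the ConDec "response" constraint:
    # every occurrence of a must eventually be followed by b.

    def satisfies_response(trace, a, b):
        pending = False
        for event in trace:
            if event == a:
                pending = True     # an occurrence of a is now waiting for b
            elif event == b:
                pending = False    # the latest pending a has been answered
        return not pending         # no a left waiting at the end of the trace

    print(satisfies_response(["register", "pay", "ship"], "pay", "ship"))  # True
    print(satisfies_response(["register", "pay"], "pay", "ship"))          # False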
Abstract:
Communication and coordination are two key aspects in open distributed agent systems, both being responsible for the integrity of the system's behaviour. An infrastructure that handles these issues, such as TuCSoN, should be able to exploit the modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the ability of the TuCSoN infrastructure to cope with the new hardware and software possibilities offered by mobile technology. The scenarios we are going to configure relate to the distributed nature of multi-agent systems, where an agent may be located and run directly on a mobile device. We address the new frontiers of mobile technology concerned with smartphones running Google's Android operating system. The analysis and deployment of such a distributed agent-based system first has to face qualitative and quantitative considerations about the available resources. The engineering issue at the base of our research is to run TuCSoN within the reduced memory and computing capability of a smartphone, without loss of functionality, efficiency, or integrity for the infrastructure. The thesis work proceeds on two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, totally orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad-hoc client-side release.
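For readers unfamiliar with the model, TuCSoN coordinates agents through tuple centres offering Linda-style primitives; the toy sketch below shows the basic out/rd/in operations on an in-memory tuple space. It carries none of TuCSoN's distribution, ReSpecT programmability, or the Android-specific work described above.

    # A toy, in-memory tuple space with Linda-style primitives, the
    # coordination model underlying TuCSoN tuple centres.

    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, t):                 # insert a tuple
            self.tuples.append(t)

        def _match(self, template):       # None in a template matches anything
            for t in self.tuples:
                if len(t) == len(template) and all(
                        p is None or p == v for p, v in zip(template, t)):
                    return t
            return None

        def rd(self, template):           # read without removing
            return self._match(template)

        def in_(self, template):          # read and remove
            t = self._match(template)
            if t is not None:
                self.tuples.remove(t)
            return t

    ts = TupleSpace()
    ts.out(("temperature", "room1", 21))
    print(ts.rd(("temperature", "room1", None)))   # ('temperature', 'room1', 21)
    print(ts.in_(("temperature", None, None)))     # removes the tuple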
Abstract:
Nano(bio)science and nano(bio)technology attract growing and tremendous interest in both academic and industrial contexts. They are undergoing rapid developments on many fronts such as genomics, proteomics, systems biology, and medical applications. However, the lack of characterization tools for nano(bio)systems is currently considered a major limiting factor to the final establishment of nano(bio)technologies. Flow Field-Flow Fractionation (FlFFF) is a separation technique that is definitely emerging in the bioanalytical field, and the number of applications to nano(bio)analytes such as high molar-mass proteins and protein complexes, sub-cellular units, viruses, and functionalized nanoparticles is constantly increasing. This can be ascribed to the intrinsic advantages of FlFFF for the separation of nano(bio)analytes. FlFFF is ideally suited to separate particles over a broad size range (1 nm-1 μm) according to their hydrodynamic radius (rh). The fractionation is carried out in an empty channel by a flow stream of a mobile phase of any composition. For these reasons, fractionation proceeds without surface interaction of the analyte with packing or gel media, and there is no stationary phase able to induce mechanical or shear stress on nanosized analytes, which are therefore kept in their native state. Characterization of nano(bio)analytes is made possible after fractionation by interfacing the FlFFF system with detection techniques for morphological, optical or mass characterization. For instance, FlFFF coupling with multi-angle light scattering (MALS) detection allows for absolute molecular weight and size determination, and mass spectrometry has made FlFFF enter the field of proteomics. The potential of FlFFF couplings with multi-detection systems is discussed in the first section of this dissertation. The second and third sections are dedicated to new methods that have been developed for the analysis and characterization of different samples of interest in the fields of diagnostics, pharmaceutics, and nanomedicine. The second section focuses on biological samples such as protein complexes and protein aggregates. In particular it focuses on FlFFF methods developed to give new insights into: a) the chemical composition and morphological features of blood serum lipoprotein classes, b) the time-dependent aggregation pattern of the amyloid protein Aβ1-42, and c) the aggregation state of antibody therapeutics in their formulation buffers. The third section is dedicated to the analysis and characterization of structured nanoparticles designed for nanomedicine applications. The discussed results indicate that FlFFF with on-line MALS and fluorescence detection (FD) may become an unparalleled methodology for the analysis and characterization of new, structured, fluorescent nanomaterials.
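For orientation, the size selectivity of FlFFF follows from two textbook relations, the FFF retention parameter and the Stokes-Einstein equation (standard FFF theory, not results specific to this dissertation):

    % Standard FFF retention relations, stated for orientation. Symbols:
    % t^0 void time, t_r retention time, w channel thickness, V^0 channel
    % volume, \dot{V}_c cross-flow rate, D diffusion coefficient, \eta
    % mobile-phase viscosity, r_h hydrodynamic radius.
    \begin{align}
      R = \frac{t^0}{t_r} \approx 6\lambda, &&
      \lambda = \frac{D\,V^0}{\dot{V}_c\,w^2}, &&
      D = \frac{k_B T}{6\pi\eta\,r_h}
    \end{align}
    % Hence, at fixed flow conditions, t_r grows with r_h: larger analytes
    % elute later, which is the size-based separation FlFFF exploits.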
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist of assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem over the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
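As a minimal illustration of the problem class (not of the thesis' exact hybrid algorithms), the sketch below list-schedules precedence-connected tasks on identical processors with a greedy longest-task-first rule; the CP/OR methods above replace such heuristics with provably optimal search.

    # Toy list scheduler: precedence-connected tasks on n identical
    # processors. Greedy heuristic for illustration only.

    import heapq

    def list_schedule(durations, preds, n_procs):
        # durations: {task: time}; preds: {task: set of predecessor tasks}
        done_at, schedule = {}, {}
        free = [(0.0, p) for p in range(n_procs)]   # (time available, proc)
        heapq.heapify(free)
        remaining = dict(preds)
        while remaining:
            ready = [t for t, ps in remaining.items()
                     if all(p in done_at for p in ps)]
            task = max(ready, key=lambda t: durations[t])  # longest first
            t_free, proc = heapq.heappop(free)
            start = max(t_free, max((done_at[p] for p in remaining[task]),
                                    default=0.0))
            end = start + durations[task]
            schedule[task] = (proc, start, end)
            done_at[task] = end
            heapq.heappush(free, (end, proc))
            del remaining[task]
        return schedule, max(e for _, _, e in schedule.values())

    durations = {"a": 3, "b": 2, "c": 4, "d": 1}
    preds = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
    sched, makespan = list_schedule(durations, preds, n_procs=2)
    print(sched, "makespan:", makespan)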
Abstract:
While the use of distributed intelligence has been incrementally spreading in the design of a great number of intelligent systems, the field of Artificial Intelligence in Real Time Strategy games has remained a mostly centralized environment. While turn-based games have attained world-class AIs, the fast-paced nature of RTS games has proven to be a significant obstacle to the quality of their AIs. Chapter 1 introduces RTS games, describing their characteristics, mechanics and elements. Chapter 2 introduces Multi-Agent Systems and the use of the Beliefs-Desires-Intentions abstraction, analysing the possibilities offered by self-computing properties. Chapter 3 analyses the current state of AI development in RTS games, highlighting the struggles of the gaming industry to produce valuable AI: the focus on improving the multiplayer experience has gravely impacted the quality of the AIs, leaving them with serious flaws that impair their ability to challenge and entertain players. Chapter 4 explores different aspects of AI development for RTS, evaluating the potential strengths and weaknesses of an agent-based approach and analysing which aspects can benefit the most compared with centralized AIs. Chapter 5 describes a generic agent-based framework for RTS games where every game entity becomes an agent, each having its own knowledge and set of goals. Different aspects of the game, like economy, exploration and warfare, are also analysed, and some agent-based solutions are outlined. The possible exploitation of self-computing properties to efficiently organize the agents' activity is then inspected. Chapter 6 presents the design and implementation of an AI for an existing open-source game in beta development stage: 0 A.D., a historical RTS game on ancient warfare which features a modern graphical engine and evolved mechanics. The entities of the conceptual framework are implemented in a new agent-based platform seamlessly nested inside the existing game engine, called ABot, described at length in Chapters 7, 8 and 9. Chapters 10 and 11 cover the design and realization of a new agent-based language useful for defining behavioural modules for the agents in ABot, paving the way for a wider spectrum of contributors. Chapter 12 concludes the work by analysing the outcome of tests meant to evaluate strategies, realism and pure performance; conclusions and future work are finally drawn in Chapter 13.
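The sketch below illustrates the per-entity BDI loop suggested by the framework of Chapter 5: each unit revises beliefs from perception, deliberates an intention, and acts. All names are illustrative assumptions and do not reflect ABot's actual API.

    # Minimal BDI-flavoured loop for an RTS unit agent; names are invented.

    class UnitAgent:
        def __init__(self, name):
            self.name = name
            self.beliefs = {"enemy_visible": False, "resources_nearby": True}
            self.desires = ["stay_safe", "gather"]
            self.intention = None

        def deliberate(self):
            # Pick the highest-priority applicable desire.
            if self.beliefs["enemy_visible"]:
                self.intention = "flee"
            elif self.beliefs["resources_nearby"]:
                self.intention = "gather"
            else:
                self.intention = "explore"

        def act(self):
            print(f"{self.name} -> {self.intention}")

    agent = UnitAgent("worker_1")
    for percept in [{"enemy_visible": False}, {"enemy_visible": True}]:
        agent.beliefs.update(percept)   # belief revision from game perception
        agent.deliberate()              # beliefs + desires -> intention
        agent.act()                     # intention -> in-game command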
Deposition and Characterization of Fluorocarbon- and Siloxane-Based Plasma Polymer Films
Abstract:
In this work, fluorocarbon-based and organosilicon plasma polymer films were prepared and investigated with respect to their structural and functional properties. Both material systems are of great scientific and application-oriented interest in coating technology. The films were deposited by plasma-enhanced chemical vapour deposition (PECVD) in parallel-plate reactors. The investigations of fluorocarbon plasma polymerization focused on the preparation of ultra-thin films, i.e. films less than 5 nm thick. This was realized by pulsed plasma excitation and the use of a gas mixture of trifluoromethane (CHF3) and argon. The bonding structure of the films was analysed as a function of the input power, which determines the degree of fragmentation of the monomers in the plasma. For this purpose, X-ray photoelectron spectroscopy (XPS), atomic force microscopy (AFM), time-of-flight secondary ion mass spectrometry (ToF-SIMS) and X-ray reflectometry (XRR) were employed. The deposited films showed homogeneous growth behaviour and no pronounced interface regions towards the substrate or the surface. The XPS analyses indicate that chain-forming reactions of CF2 radicals in the plasma play an important role in the film formation process. Furthermore, it was shown that the chosen coating process enables a targeted reduction of the wettability of various substrates. Film thicknesses of less than 3 nm suffice to achieve a Teflon-like surface character with surface energies around 20 mN/m. This opens up new application possibilities for ultra-thin fluorocarbon films, which is demonstrated by an example from the field of nano-optics. For the organosilicon films, prepared using the monomer hexamethyldisiloxane (HMDSO), the first task was to identify the process parameters that determine their organic or glass-like character. To this end, the influence of power input and of the addition of oxygen as a reactive gas on the elemental composition of the films was investigated. At low plasma powers and oxygen flows, predominantly carbon-rich films are deposited, which was attributed to a lower fragmentation of the hydrocarbon groups. Varying the oxygen fraction in the process gas was found to allow very precise control of the film properties. By means of secondary neutral mass spectrometry (SNMS), the process feasibility and analytical quantifiability of alternating layer systems of polymer-like and glass-like layers were demonstrated. The hydrogen content could be determined from the intensity ratio of Si:H molecules to Si atoms in the SNMS spectrum. Furthermore, it was shown that the deposition of HMDSO-based gradient layers can achieve a significant reduction of friction and wear in elastomer components.
Abstract:
Extended X-ray absorption fine structure (EXAFS) spectroscopy is an important method for the speciation of heavy metals in a wide range of environmentally relevant systems. To determine structural parameters such as coordination number, interatomic distance, and Debye-Waller factors for the nearest neighbours of an absorbing atom, experimental EXAFS spectra are commonly subjected to a least-squares fit using model structures. Often, different model structures with entirely different chemical meaning can describe the experimental EXAFS data equally well. The modified Tikhonov regularization method offers a good alternative to the conventional curve fit. In addition to the standard Tikhonov variational method, the algorithm presented in this work contains two further steps, namely the application of the method of separating functionals and an iteration procedure with filtration in real space. To test and validate the modified Tikhonov regularization method, both simulated and experimentally measured EXAFS spectra of a crystalline U(VI) compound with known structure, namely soddyite (UO2)2SiO4 x 2H2O, were investigated. The power of this new method for the evaluation of EXAFS spectra is demonstrated by its application to the analysis of samples with unknown structure, as they occur in the sorption of U(VI) and of Pu(III)/Pu(IV) onto kaolinite. The aim of the dissertation was to demonstrate the still not fully exploited possibilities of the modified Tikhonov regularization method for the evaluation of EXAFS spectra. The results fall into two categories. The first comprises the development of the Tikhonov regularization method for the analysis of EXAFS spectra of multi-component systems, in particular the choice of certain regularization parameters and the influence of multiple scattering, experimental noise, etc. on the structural parameters. The second part comprises the speciation of sorbed U(VI) and Pu(III)/Pu(IV) on kaolinite, based on experimental EXAFS spectra that were evaluated with the modified Tikhonov regularization method and confirmed by conventional EXAFS analysis via least-squares fitting.
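For orientation, the generic Tikhonov step underlying the modified scheme can be stated as follows (standard formulation; the operator names are illustrative):

    % Generic Tikhonov regularization as used for EXAFS inversion: the pair
    % distribution function g(r) is recovered from the measured spectrum
    % \chi(k) through the integral operator A, with the regularization
    % parameter \alpha balancing data fidelity against solution stability.
    \begin{equation}
      g_\alpha = \arg\min_{g}\;
      \bigl\lVert A\,g - \chi \bigr\rVert^2
      + \alpha\,\bigl\lVert L\,g \bigr\rVert^2
    \end{equation}
    % where L is the identity or a derivative (smoothing) operator. The
    % modified scheme described above adds the separation of functionals and
    % iterative real-space filtering on top of this standard step.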
Abstract:
This thesis presents different techniques designed to drive a swarm of robots in an a priori unknown environment in order to move the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two different theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between different entities (also called agents or units) in Multi-Agent Systems (MAS). The first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that arises from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common main target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent updated across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus, i.e. the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO and to control part of its random behaviour with a distributed control algorithm like Consensus.
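A minimal sketch of how the two ingredients combine, assuming illustrative gains and a ring communication topology (not the thesis' tuned parameters): each robot's velocity is the PSO update plus a consensus term over its neighbours.

    # PSO velocity update (navigation) plus a consensus coupling term
    # (formation keeping) for a 1-D swarm; gains and topology are invented.

    import random

    def step(positions, velocities, pbest, gbest, neighbours,
             w=0.7, c1=1.5, c2=1.5, eps=0.1):
        new_p, new_v = [], []
        for i, (x, v) in enumerate(zip(positions, velocities)):
            r1, r2 = random.random(), random.random()
            v_pso = w * v + c1 * r1 * (pbest[i] - x) + c2 * r2 * (gbest - x)
            # Consensus (agreement protocol) over the communication graph:
            v_con = eps * sum(positions[j] - x for j in neighbours[i])
            new_v.append(v_pso + v_con)
            new_p.append(x + new_v[-1])
        return new_p, new_v

    # Four robots on a line, ring topology, goal region at x = 10.
    goal = 10.0
    pos, vel = [0.0, 1.0, 2.0, 3.0], [0.0] * 4
    pbest = list(pos)
    neigh = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    for _ in range(60):
        pos, vel = step(pos, vel, pbest, goal, neigh)
        pbest = [p if abs(p - goal) < abs(b - goal) else b
                 for p, b in zip(pos, pbest)]
    print([round(x, 2) for x in pos])  # robots approach the goal, clustered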
Abstract:
In the research field of Artificial Intelligence, particularly in machine learning, a whole series of methods inspired by biological models has become established. The most prominent representatives of such methods are Evolutionary Algorithms on the one hand and Artificial Neural Networks on the other. The present work is concerned with the development of a machine learning system that unites characteristics of both paradigms: the Hybrid Learning Classifier System (HCS) is developed on the basis of the real-valued eXtended Learning Classifier System (XCS), which contains a Genetic Algorithm as its learning mechanism, and the Growing Neural Gas (GNG). Like the XCS, the HCS uses a Genetic Algorithm to evolve a population of classifiers, that is, rules of the form [IF condition THEN action], where the condition specifies in which region of the state space of a learning problem a classifier is applicable. In the XCS, the condition usually specifies an axis-parallel hyperrectangle, which often does not allow an appropriate partitioning of the state space. In the HCS, by contrast, the conditions of the classifiers are described by weight vectors, as possessed by the neurons of the GNG. Each classifier is applicable within its cell of the Voronoi tessellation of the state space induced by the population of the HCS, so the state space can be partitioned more flexibly than in the XCS. The use of weight vectors furthermore makes it possible to employ a mechanism derived from the neuron adaptation procedure of the GNG as a second learning method alongside the Genetic Algorithm. Whereas learning in the XCS is purely evolutionary, i.e. only through the creation of new classifiers, this enables the HCS to adapt and improve existing classifiers. To evaluate the HCS, various learning experiments are carried out with it. The capability of the approach is demonstrated on a series of learning problems from the areas of classification, function approximation, and learning of actions in an interactive learning environment.
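The matching rule can be sketched in a few lines, assuming illustrative data and a Euclidean nearest-neighbour match: a classifier applies to a state exactly when its weight vector is the nearest one, i.e. the state lies in that classifier's Voronoi cell.

    # HCS-style matching: the classifier whose weight vector is closest to
    # the current state is the applicable one (its Voronoi cell contains the
    # state). Contrast: XCS conditions are axis-parallel hyperrectangles.

    import math

    def nearest_classifier(population, state):
        # population: list of (weight_vector, action) pairs
        return min(population, key=lambda c: math.dist(c[0], state))

    population = [
        ((0.2, 0.2), "action_A"),
        ((0.8, 0.3), "action_B"),
        ((0.5, 0.9), "action_C"),
    ]
    state = (0.6, 0.4)
    w, action = nearest_classifier(population, state)
    print("matching classifier:", w, "->", action)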
Abstract:
The present thesis is focused on the study of innovative Si-based materials for third-generation photovoltaics. In particular, silicon oxynitride (SiOxNy) thin films and multilayers of Silicon Rich Carbide (SRC)/Si have been characterized in view of their application in photovoltaics. SiOxNy is a promising material for applications in thin-film solar cells as well as for wafer-based silicon solar cells, like silicon heterojunction solar cells. However, many issues relevant to the material properties have not been studied yet, such as the role of the deposition conditions and precursor gas concentrations on the optical and electronic properties of the films, and the composition and structure of the nanocrystals. The results presented in the thesis aim to clarify the effects of annealing and oxygen incorporation within nc-SiOxNy films on their properties in view of photovoltaic applications. Silicon nanocrystals (Si NCs) embedded in a dielectric matrix were proposed as absorbers in all-Si multi-junction solar cells due to the quantum confinement capability of Si NCs, which allows a better match to the solar spectrum thanks to the size-induced tunability of the band gap. Despite the efficient solar radiation absorption capability of this structure, its charge collection and transport properties have yet to be fully demonstrated. The results presented in the thesis aim at understanding the transport mechanisms at the macroscopic and microscopic scale. Experimental results on SiOxNy thin films and SRC/Si multilayers have been obtained at the macroscopic and microscopic level using different characterization techniques, such as Atomic Force Microscopy, Reflection and Transmission measurements, High Resolution Transmission Electron Microscopy, Energy-Dispersive X-ray spectroscopy and Fourier Transform Infrared Spectroscopy. The deep knowledge and improved understanding of the basic physical properties of these quite complex, multi-phase and multi-component systems, made of nanocrystals and amorphous phases, will contribute to improving the efficiency of Si-based solar cells.
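For orientation, the size-induced gap tunability mentioned above is commonly estimated with a Brus-type effective-mass expression (a textbook approximation, not a result of this thesis):

    % Effective-mass (Brus-type) estimate of the size-dependent gap of a
    % semiconductor nanocrystal, stated for orientation. d is the NC
    % diameter, \mu the reduced exciton mass, \varepsilon the relative
    % dielectric constant of the embedded crystal.
    \begin{equation}
      E_g(d) \approx E_g^{\mathrm{bulk}}
      + \frac{2\hbar^2 \pi^2}{\mu\, d^2}
      - \frac{3.6\, e^2}{4\pi \varepsilon_0 \varepsilon\, d}
    \end{equation}
    % Shrinking d widens the gap, which is how stacked Si-NC absorber layers
    % can be matched to different parts of the solar spectrum in all-Si
    % multi-junction cells.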
Abstract:
In this thesis we have extended the methods for microscopic charge-transport simulations for organic semiconductors. In these materials the weak intermolecular interactions lead to spatially localized charge carriers, and charge transport occurs as an activated hopping process between diabatic states. In addition to weak electronic couplings between these states, different electrostatic environments in the organic material lead to a broadening of the density of states for the charge energies, which limits carrier mobilities. The contributions to the method development include (i) the derivation of a bimolecular charge-transfer rate, (ii) the efficient evaluation of intermolecular (outer-sphere) reorganization energies, (iii) the investigation of effects of conformational disorder on intramolecular reorganization energies or internal site energies, and (iv) the inclusion of self-consistent polarization interactions for the calculation of charge energies. These methods were applied to study charge transport in amorphous phases of small molecules used in the emission layer of organic light-emitting diodes (OLEDs). When bulky substituents are attached to an aromatic core in order to adjust energy levels or prevent crystallization, a small amount of delocalization of the frontier orbital onto the substituents can increase electronic couplings between neighboring molecules. This leads to improved charge-transfer rates and, hence, larger charge mobility. We therefore suggest using the mesomeric effect (as opposed to the inductive effect) when attaching substituents to aromatic cores, which is necessary for example in deep blue OLEDs, where the energy levels of a host molecule have to be adjusted to those of the emitter. Furthermore, the energy landscape for charges in an amorphous phase cannot be predicted by mesoscopic models, because they approximate the realistic morphology by a lattice and represent molecular charge distributions in a multipole expansion. The microscopic approach shows that a polarization-induced stabilization of a molecule in its charged and neutral states can lead to large shifts, broadening, and traps in the distribution of charge energies. These results are especially important for multi-component systems (the emission layer of an OLED or the donor-acceptor interface of an organic solar cell) if the change in polarizability upon charging (or excitation, in the case of energy transport) is different for the components. Thus, the polarizability change upon charging or excitation should be added to the set of molecular parameters essential for understanding charge and energy transport in organic semiconductors. We also studied charge transport in self-assembled systems, where intermolecular packing motifs induced by side chains can increase electronic couplings between molecules. This leads to larger charge mobility, which is essential to improve devices such as organic field-effect transistors, where low carrier mobilities limit the switching frequency. However, it is not sufficient to match the average local molecular order induced by the side chains (such as the pitch angle between consecutive molecules in a discotic mesophase) with maxima of the electronic couplings. It is also important to make the corresponding distributions as narrow as possible compared to the window determined by the closest minima of the electronic couplings. This is especially important in one-dimensional systems, where charge transport is limited by the smallest electronic couplings. The immediate implication for compound design is that the side chains should assist the self-assembling process not only via soft entropic interactions, but also via stronger specific interactions, such as hydrogen bonding.
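For orientation, the standard high-temperature Marcus rate behind such hopping simulations reads as follows (textbook form; the thesis derives a bimolecular generalization of such a rate):

    % High-temperature Marcus rate for hopping between diabatic states i, j.
    % J_{ij} is the electronic coupling, \lambda the reorganization energy,
    % and \Delta E_{ij} = E_i - E_j the site-energy difference.
    \begin{equation}
      k_{ij} = \frac{2\pi}{\hbar}\,
      \frac{|J_{ij}|^2}{\sqrt{4\pi\lambda k_B T}}\,
      \exp\!\left[-\frac{(\Delta E_{ij}-\lambda)^2}{4\lambda k_B T}\right]
    \end{equation}
    % The quantities studied above enter directly: couplings J_{ij} (packing,
    % substituents), reorganization energies \lambda (items i-iii), and site
    % energies \Delta E_{ij} broadened by polarization (item iv).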