898 results for LARGE SYSTEMS
Abstract:
Our study addresses the state of Rio Grande do Norte in the face of the deployment of oil activity in its territory. The aim of this work was to analyze the presence of the spatial circuit of oil production, with the objects and actions linked to it, in the territory of Rio Grande do Norte. After the so-called "oil shock", an event with repercussions in several countries, Petróleo Brasileiro S/A (PETROBRAS) increased its investments in drilling the geological basins of Brazil. In 1973, a well drilled offshore led to commercial production of oil and gas in Rio Grande do Norte. From that point on, large systems of objects coupled to actions driven by several agents were added to parts of the Potiguar territory. In this context, geographic situations were reorganized by an unprecedented spatial circuit of production accompanied by a new circle of cooperation. All stages of the circuit take place in the state: production, distribution and consumption. In light of the theory of geographic space, we direct our reflections to the operation of these bodies, linked as they are to multiscale material and immaterial flows. This perspective allows us to see the territory of Rio Grande do Norte as inserted into a new territorial division of labor characterized by regional productive specialization. Oil activity was implemented in the territory of Rio Grande do Norte at a time marked by the productive restructuring of various economic sectors. The oil sector has increasingly operated in a scientific and informational mode, with a view to raising productivity. The presence of this circuit demanded of the territory, specifically of Mossoró, a distinctive organizational structure, ranging from the vast, nationally integrated system of private commercial corporations down to small firms, all of them related directly or indirectly to PETROBRAS.
The flows between companies whose headquarters lie in distant states and even countries have generated a continuous movement of goods, people, information and ideas, which is also producing new materialities in the territory.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
The techniques used for static security assessment of electric power systems depend on running a large number of load-flow cases for many system topologies and operating conditions. In real-time operating environments this practice is hard to carry out, especially for large systems, where running all the required load-flow cases demands considerable time and computational effort even with the resources available today. Data mining techniques such as decision trees have been employed in recent years and have achieved good results in static and dynamic security assessment of electric power systems. This work presents a methodology for real-time static security assessment of electric power systems using decision trees: from off-line load-flow simulations, executed with the Anarede software (CEPEL), an extensive labeled database of system states was generated for a variety of operating conditions. This database was used to induce decision trees, yielding a fast and accurate prediction model that classifies the system state (secure or insecure) for real-time application. The methodology reduces the computational burden of the on-line environment, since evaluating a decision tree requires only a few if-then logical checks: a small number of numerical tests at binary nodes to decide whether the attribute values satisfy the rules, with the number of tests equal to the number of hierarchical levels of the tree, which is usually small. With such simple processing, static security assessment can be executed in a fraction of the time required even by the fastest traditional methods.
To validate the methodology, a case study based on a real power system was carried out, in which, for each contingency classified as insecure, a corrective control action is executed based on the decision tree's information about the critical attribute that most affects security. The results showed the methodology to be an important tool for real-time static security assessment for use in a system operation center.
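The on-line classification step described above amounts to a handful of if-then tests. As an illustration, a tiny induced tree might look like the following; the attributes and thresholds are hypothetical, not taken from the thesis's case study:

```python
def classify_state(min_bus_voltage_pu, max_line_loading, spinning_reserve_mw):
    """Hypothetical three-level decision tree classifying static security.

    Each binary node is a single numerical test, so one classification
    costs at most as many comparisons as the tree has levels (here, 3).
    """
    if min_bus_voltage_pu <= 0.95:           # level 1: voltage criterion
        return "insecure"
    if max_line_loading > 0.90:              # level 2: thermal loading criterion
        if spinning_reserve_mw < 50.0:       # level 3: reserve criterion
            return "insecure"
    return "secure"
```

This is why the on-line cost is a fraction of a full load-flow run: the expensive work happens off-line, when the tree is induced from the labeled database.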
Abstract:
Despite intensive research during the last decades, the theoretical understanding of supercooled liquids and the glass transition is still far from complete. Besides analytical investigations, the so-called energy-landscape approach has turned out to be very fruitful. In the literature, many numerical studies have demonstrated that, at sufficiently low temperatures, all thermodynamic quantities can be predicted with the help of the properties of local minima in the potential energy landscape (PEL). The main purpose of this thesis is to strive for an understanding of dynamics in terms of the potential energy landscape. In contrast to the study of static quantities, this requires knowledge of the barriers separating the minima. Up to now, it has been the general viewpoint that thermally activated processes ('hopping') determine the dynamics only below Tc (the critical temperature of mode-coupling theory), in the sense that relaxation rates follow from local energy barriers. As we show here, this viewpoint should be revised, since the temperature dependence of the dynamics is governed by hopping processes already below 1.5 Tc. Using the example of a binary mixture of Lennard-Jones particles (BMLJ), we establish a quantitative link from the diffusion coefficient, D(T), to the PEL topology. This is achieved in three steps. First, we show that it is essential to consider whole superstructures of many PEL minima, called metabasins, rather than single minima; this is a consequence of strong correlations within groups of PEL minima. Second, we show that D(T) is inversely proportional to the average residence time in these metabasins. Third, the temperature dependence of the residence times is related to the depths of the metabasins, as given by the surrounding energy barriers. We further discuss that the study of small (but not too small) systems is essential, in that one deals with a less complex energy landscape than in large systems.
In a detailed analysis of different system sizes, we show that the small BMLJ system considered throughout the thesis is free of major finite-size-related artifacts.
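The second step of the argument, D(T) being inversely proportional to the mean metabasin residence time, can be sketched numerically. The prefactor below (an effective jump length and the factor 6 for three-dimensional diffusion) is an illustrative assumption, not a value from the thesis:

```python
def diffusion_from_residence_times(residence_times, jump_length=1.0):
    """Estimate D from metabasin residence times via D ~ a**2 / (6 * <tau>).

    residence_times: times spent in successive metabasins (assumed data).
    jump_length: effective displacement per metabasin transition (assumed).
    """
    mean_tau = sum(residence_times) / len(residence_times)
    return jump_length ** 2 / (6.0 * mean_tau)
```

The key point carried over from the thesis is only the inverse proportionality: doubling the average residence time halves the estimated diffusion coefficient.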
Abstract:
In this thesis I study differential equations of Feynman integrals. A Feynman integral depends on a dimension parameter D and, for integer dimension, can be represented as a projective integral; this is the so-called Feynman parameter representation. Depending on the dimension, such an integral may diverge. As a function of D one obtains a meromorphic function on all of C. A divergent integral can therefore be replaced by a Laurent series, and its coefficients move into the center of interest. This procedure is known as dimensional regularization. All terms of such a Laurent series of a Feynman integral are periods in the sense of Kontsevich and Zagier. I describe a new method for computing differential equations of Feynman integrals. Usually one employs the so-called integration-by-parts (IBP) identities for this purpose. The new method uses the theory of Picard-Fuchs differential equations. In the case of projective or quasi-projective varieties, the computation of such a differential equation is based on the so-called Griffiths-Dwork reduction. First I describe the method for fixed integer dimension. After a suitable shift of the dimension one directly obtains a period and hence a Picard-Fuchs differential equation. This equation is inhomogeneous, because the integration domain has a boundary and therefore only represents a relative cycle. With the help of dimensional recurrence relations, which go back to Tarasov, the solution in the original dimension can then be determined in a second step. I also describe a method, based on the Griffiths-Dwork reduction, to compute the differential equation directly for arbitrary dimension. This method is generally applicable and avoids changes of dimension.
The success of the method depends on the ability to solve large systems of linear equations. I give examples of integrals of graphs with two and three loops. Tarasov gives a basis of integrals determined by graphs with two loops and two external edges; I determine differential equations for the integrals of this basis. As the most important example I compute the differential equation of the so-called sunrise graph with two loops in the general case of arbitrary masses. For special values of D this is an inhomogeneous Picard-Fuchs equation of a family of elliptic curves. The sunrise graph is particularly interesting because an analytic solution could only be found with this method, and because it is the simplest graph whose master integrals are not given by polylogarithms. I also give an example of a graph with three loops, where the Picard-Fuchs equation of a family of K3 surfaces appears.
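The dimensional-regularization step described above can be summarized in one formula. This is a generic sketch (with the conventional choice D = 4 - 2ε and some finite pole order n), not an equation taken from the thesis:

```latex
I(D) \;=\; I(4 - 2\varepsilon) \;=\; \sum_{k=-n}^{\infty} c_k\, \varepsilon^{k},
```

where the divergence of the integral shows up as the finitely many pole terms with k < 0, and each coefficient c_k is a period in the sense of Kontsevich and Zagier.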
Abstract:
This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, its computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that, owing to localization, the Hamiltonian and the density matrix are sparse. This reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane exposed to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulation, methane dissociates at temperatures above 4000 K. The formation of sp²-bonded polymeric carbon is observed. The simulations provide no evidence for the formation of diamond and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also treated. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing roots of functions.
It is proved that the order of convergence is always at least quadratic, and that adaptively adjusting a parameter q leads to better results in all cases.
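The Newton-Schulz iteration that the generalized formula builds on can be sketched for its simplest case, the plain matrix inverse (p = 1). This is an illustrative NumPy implementation under standard assumptions (a scaled initial guess guaranteeing convergence), not the thesis's generalized method:

```python
import numpy as np

def newton_schulz_inverse(A, tol=1e-10, max_iter=100):
    """Approximate A^{-1} by the Newton-Schulz iteration X <- X (2I - A X).

    Converges quadratically when the initial residual ||I - A X0|| < 1;
    the scaling X0 = A^T / (||A||_1 ||A||_inf) guarantees this.
    """
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        X = X @ (2 * I - A @ X)                  # one Newton-Schulz step
        if np.linalg.norm(I - A @ X) < tol:      # stop once the residual is small
            break
    return X
```

Because the iteration uses only matrix multiplications, it maps well onto sparse linear-algebra frameworks, which is what makes such iterations attractive in linear-scaling electronic-structure methods.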
Abstract:
Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. Transactions in JEAs simplify the development of code that deals with failure recovery and multi-user coordination by guaranteeing atomicity of sets of operations. The heterogeneous nature of JEAs, however, can obfuscate conceptual errors in the application code, and in particular can hide incorrect declarations of transaction scope. In this paper we present a technique to expose and analyze the application transaction scope in JEAs by merging and analyzing information from multiple sources. We also present several novel visualizations that aid in the analysis of transaction scope by highlighting anomalies in the specification of transactions and violations of architectural constraints. We have validated our approach on two versions of a large commercial case study.
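The role transactions play here, guaranteeing atomicity of a set of operations, can be illustrated with a minimal language-agnostic sketch (in Python rather than Java, over a toy in-memory store; none of this is the paper's tooling):

```python
class Transaction:
    """Toy transactional scope over a dict-backed store: all-or-nothing updates."""

    def __init__(self, store):
        self.store = store

    def __enter__(self):
        self._snapshot = dict(self.store)   # save state at transaction start
        return self.store

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:            # any failure: roll back atomically
            self.store.clear()
            self.store.update(self._snapshot)
        return False                        # do not swallow the exception
```

An incorrectly declared scope, e.g. two small transactions where the application logic needs one, silently loses atomicity; that is exactly the kind of conceptual error the paper's analysis and visualizations aim to expose.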
Abstract:
Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. To support the analysis of JEAs we have developed MooseJEE, an extension of the Moose environment capable of modeling the typical elements of JEAs.
Abstract:
The molecular interactions between the host molecule, perthiolated beta-cyclodextrin (CD), and the guest molecules, adamantaneacetic acid (AD) and ferroceneacetic acid (FC), have been investigated theoretically in both the gas and aqueous phases. The major computations were carried out at the RHF/6-31G and B3LYP/6-31G levels of theory. MP2 electronic energies were also computed at the geometries optimized by both the RHF and B3LYP methods in the gas phase, to establish a better estimate of the correlation effect. The solvent-phase computations were completed at the RHF/6-31G and B3LYP/6-31G levels using the PCM model. The most stable structures optimized in the gas phase by both the RHF and B3LYP methods were used for the computations in solution. A method to systematically manipulate the relative position and orientation of the interacting molecules is proposed. In the gas phase, six trials with different host-guest relative positions and orientations were completed successfully with the B3LYP method for both the CD-AD and CD-FC complexes; only four trials were completed with the RHF method. In the gas phase, the best RHF results give association Gibbs free energies (ΔG°) of -32.21 kJ/mol for CD-AD and -25.73 kJ/mol for CD-FC, while the best B3LYP results give ΔG° of -47.57 kJ/mol for CD-AD and -41.09 kJ/mol for CD-FC. The MP2 correction significantly lowers ΔG° for the geometries from both methods: for the RHF structures, the MP2 computations lowered ΔG° to -60.64 kJ/mol for CD-AD and -54.10 kJ/mol for CD-FC; for the B3LYP structures, it was reduced to -59.87 kJ/mol for CD-AD and -54.84 kJ/mol for CD-FC. The RHF solvent-phase calculations yielded ΔG°(aq) equal to 107.2 kJ/mol for CD-AD and 111.4 kJ/mol for CD-FC.
Compared with the RHF results, the B3LYP method provided clearly better solvent-phase results, with ΔG°(aq) equal to 38.64 kJ/mol for CD-AD and 39.61 kJ/mol for CD-FC. These results qualitatively explain the experimental observations; quantitatively, however, they are in poor agreement with the experimental values available in the literature and those recently published by Liu et al., and the reason is believed to be the omission of the hydrophobic contribution to the association. Determining the global geometrical minima for these very large systems was difficult and computationally time-consuming, but after a very thorough search they were identified. A relevant result of this search is that when the CD-AD and CD-FC complexes are formed, the AD and FC molecules are only partially embedded inside the CD cavity; the totally embedded complexes were found to have significantly higher energies. The semiempirical method ZINDO was employed to investigate the effect of complexation on the first electronic excitation of CD anchored to a metal nanoparticle. The computational results revealed that after complexation with FC the transition intensity declines to about 25% of its original value, and after complexation with AD the intensity drops by almost 50%. The tighter binding and higher transition intensity of CD-AD qualitatively agree with the experimental result that the addition of AD to a solution of CD and FC restores the fluorescence of CD that was quenched by the addition of FC. A method to evaluate the "hydrophobic force" effect is proposed for future work.
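To relate computed association free energies such as those above to measurable binding constants, one can use the standard thermodynamic relation K = exp(-ΔG°/RT). A minimal sketch; the temperature and the example value fed to it are illustrative, not results from this work:

```python
import math

R = 8.314462618e-3  # gas constant in kJ/(mol K)

def binding_constant(dg_kj_per_mol, temperature_k=298.15):
    """Association constant from a standard free energy of binding (kJ/mol)."""
    return math.exp(-dg_kj_per_mol / (R * temperature_k))
```

A negative ΔG° thus corresponds to K > 1 (favorable association), which is why the sign and magnitude of the computed ΔG° values are directly comparable against experimentally measured binding constants.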
Abstract:
Software developers often ask questions about software systems and software ecosystems that entail exploration and navigation, such as "who uses this component?" and "where is this feature implemented?". Software visualisation can be a great aid to understanding and exploring the answers to such questions, but visualisations require expertise to implement effectively, and they do not always scale well to large systems. We propose to automatically generate software visualisations based on software models derived from open source software corpora and from an analysis of the properties of typical developers' queries and commonly used visualisations. The key challenges we see are (1) understanding how to match queries to suitable visualisations, and (2) scaling visualisations effectively to very large software systems and corpora. In the paper we motivate the idea of automatic software visualisation, we enumerate the challenges and our proposals to address them, and we describe some very initial results in our attempts to develop scalable visualisations of open source software corpora.
Abstract:
The general goal of this thesis is correlating observable properties of organic and metal-organic materials with their ground-state electron density distribution. In the long term, we expect to develop empirical or semi-empirical approaches to predict materials properties from the electron density of their building blocks, thus allowing one to rationally engineer molecular materials from their constituent subunits, such as their functional groups. In particular, we have focused on the linear optical properties of naturally occurring amino acids and their organic and metal-organic derivatives, and on the magnetic properties of metal-organic frameworks. For analysing the optical properties and the magnetic behaviour of the molecular or sub-molecular building blocks of materials, we mostly used the traditional QTAIM partitioning scheme of the molecular or crystalline electron densities; however, we have also investigated a new approach, namely X-ray Constrained Extremely Localized Molecular Orbitals (XC-ELMO), that can be used in the future to extract the electron densities of crystal subunits. With the purpose of rationally engineering linear optical materials, we have calculated atomic and functional-group polarizabilities of amino acid molecules, their hydrogen-bonded aggregates and their metal-organic frameworks. This has enabled the identification of the most efficient functional groups, able to build up larger electric susceptibilities in crystals, as well as the quantification of the role played by intermolecular interactions and coordination bonds in modifying the polarizability of the isolated building blocks. Furthermore, we analysed the dependence of the polarizabilities on the one-electron basis set and the many-electron Hamiltonian, which is useful for selecting the most efficient level of theory for estimating susceptibilities of molecular-based materials.
With the purpose of rationally designing molecular magnetic materials, we have investigated the electron density distributions and the magnetism of two copper(II) pyrazine nitrate metal-organic polymers. High-resolution X-ray diffraction and DFT calculations were used to characterize the magnetic exchange pathways and to establish relationships between the electron densities and the exchange-coupling constants. Moreover, molecular orbital and spin-density analyses were employed to understand the role of different magnetic exchange mechanisms in determining the bulk magnetic behaviour of these materials. As anticipated, we finally investigated a modified version of the X-ray constrained wavefunction technique, XC-ELMO, which is not only a useful tool for the determination and analysis of experimental electron densities, but also enables one to derive transferable molecular orbitals strictly localized on atoms, bonds or functional groups. In the future, we expect to use XC-ELMOs to predict materials properties of large systems that are currently challenging to calculate from first principles, such as macromolecules or polymers. Here, we point out the advantages, requirements and pitfalls of the technique. This work fulfils, at least partially, the prerequisites for understanding the materials properties of organic and metal-organic materials from the perspective of the electron density distribution of their building blocks. Empirical or semi-empirical evaluation of optical or magnetic properties from a preconceived assembly of building blocks could be extremely important for rationally designing new materials, a field where accurate but expensive first-principles calculations are generally not used. This research could impact the community in the fields of crystal engineering, supramolecular chemistry and, of course, electron density analysis.
Abstract:
Essential biological processes are governed by organized, dynamic interactions between multiple biomolecular systems. Complexes are thus formed to enable the biological function and are disassembled as the process is completed. Examples of such processes include the translation of messenger RNA into protein by the ribosome, the folding of proteins by chaperonins, and the entry of viruses into host cells. Understanding these fundamental processes by characterizing the molecular mechanisms that enable them would allow the (better) design of therapies and drugs. Such molecular mechanisms may be revealed through the structural elucidation of the biomolecular assemblies at the core of these processes. Various experimental techniques may be applied to investigate the molecular architecture of biomolecular assemblies. High-resolution techniques, such as X-ray crystallography, may solve the atomic structure of the system, but are typically constrained to biomolecules of reduced flexibility and dimensions. In particular, X-ray crystallography requires the sample to form a three-dimensional (3D) crystal lattice, which is technically difficult, if not impossible, to obtain, especially for large, dynamic systems. Often these techniques solve the structures of the different constituent components of the assembly, but encounter difficulties when investigating the entire system. On the other hand, imaging techniques such as cryo-electron microscopy (cryo-EM) are able to depict large systems in a near-native environment without requiring the formation of crystals. The structures solved by cryo-EM cover a wide range of resolutions, from very low levels of detail, where only the overall shape of the system is visible, to high resolutions that approach, but do not yet reach, atomic level of detail.
In this dissertation, several modeling methods are introduced either to integrate cryo-EM datasets with structural data from X-ray crystallography, or to directly interpret the cryo-EM reconstruction. These computational techniques were developed with the goal of creating an atomic model for the cryo-EM data. Low-resolution reconstructions lack the level of detail needed for a direct atomic interpretation, i.e. one cannot reliably locate the atoms or amino-acid residues within the structure obtained by cryo-EM. One therefore needs to consider additional information, for example structural data from other sources such as X-ray crystallography, to enable such a high-resolution interpretation. Modeling techniques are thus developed to integrate the structural data from the different biophysical sources; examples include the work described in manuscripts I and II of this dissertation. At intermediate and high resolution, cryo-EM reconstructions depict consistent 3D folds, such as the tubular features that generally correspond to alpha-helices. Such features can be annotated and later used to build the atomic model of the system, as in manuscript III. Three manuscripts are presented as part of this PhD dissertation, each introducing a computational technique that facilitates the interpretation of cryo-EM reconstructions. The first manuscript is an application paper that describes a heuristic to generate the atomic model of the protein envelope of the Rift Valley fever virus. The second manuscript introduces evolutionary tabu search strategies to enable the integration of multiple component atomic structures with the cryo-EM map of their assembly. Finally, the third manuscript develops the latter technique further and applies it to annotate consistent 3D patterns in intermediate-resolution cryo-EM reconstructions.
The first manuscript, titled "An assembly model for Rift Valley fever virus", was submitted for publication in the Journal of Molecular Biology. The cryo-EM structure of the Rift Valley fever virus was previously solved at 27 Å resolution by Dr. Freiberg and collaborators. This reconstruction shows the overall shape of the virus envelope, yet the reduced level of detail prevents a direct atomic interpretation. High-resolution structures are not yet available for the entire virus, nor for the two different component glycoproteins that form its envelope. However, homology models may be generated for these glycoproteins based on similar structures that are available at atomic resolution. The manuscript presents the steps required to identify an atomic model of the entire virus envelope, based on the low-resolution cryo-EM map of the envelope and the homology models of the two glycoproteins. Starting from the results of an exhaustive search to place the two glycoproteins, the model is built iteratively by running multiple multi-body refinements to hierarchically generate models for the different regions of the envelope. The generated atomic model is supported by prior knowledge of virus biology and contains valuable information about the molecular architecture of the system. It provides the basis for further investigations seeking to reveal the different processes in which the virus is involved, such as assembly or fusion. The second manuscript was recently published in the Journal of Structural Biology (doi:10.1016/j.jsb.2009.12.028) under the title "Evolutionary tabu search strategies for the simultaneous registration of multiple atomic structures in cryo-EM reconstructions". This manuscript introduces evolutionary tabu search strategies applied to enable multi-body registration. The technique is a hybrid approach that combines a genetic algorithm with a tabu search strategy to promote proper exploration of the high-dimensional search space.
As with the Rift Valley fever virus, it is common that the structure of a large multi-component assembly is available at low resolution from cryo-EM, while high-resolution structures are solved for the different components but not for the entire system. Evolutionary tabu search strategies enable the building of an atomic model for the entire system by considering the different components simultaneously. Such a registration indirectly introduces spatial constraints, since all components need to be placed within the assembly, enabling their proper docking into the low-resolution map of the entire assembly. Along with the method description, the manuscript covers the validation, presenting the benefit of the technique in both synthetic and experimental test cases. The approach successfully docked multiple components at resolutions up to 40 Å. The third manuscript is entitled "Evolutionary Bidirectional Expansion for the Annotation of Alpha Helices in Electron Cryo-Microscopy Reconstructions" and was submitted for publication in the Journal of Structural Biology. The modeling approach described in this manuscript applies the evolutionary tabu search strategies in combination with a bidirectional expansion to annotate secondary structure elements in intermediate-resolution cryo-EM reconstructions. In particular, secondary structure elements such as alpha helices show consistent patterns in cryo-EM data, visible as rod-like regions of high density. The evolutionary tabu search strategy is applied to identify the placement of the different alpha helices, while the bidirectional expansion characterizes their length and curvature. The manuscript presents the validation of the approach at resolutions ranging between 6 and 14 Å, a level of detail at which alpha helices are visible. Up to a resolution of 12 Å, the method measures sensitivities between 70 and 100% as estimated in experimental test cases, i.e. 70-100% of the alpha helices were correctly predicted in an automatic manner in the experimental data.
The three manuscripts presented in this PhD dissertation cover different computational methods for the integration and interpretation of cryo-EM reconstructions. The methods were developed in the molecular modeling software Sculptor (http://sculptor.biomachina.org) and are available to the scientific community interested in the multi-resolution modeling of cryo-EM data. The work spans a wide range of resolutions, covering multi-body refinement and registration at low resolution along with the annotation of consistent patterns at high resolution. Such methods are essential for the modeling of cryo-EM data and may be applied in other fields where similar spatial problems are encountered, such as medical imaging.
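The hybrid search at the heart of the second and third manuscripts, a genetic-style population combined with a tabu memory that blocks revisits, can be sketched generically. The objective, the neighbourhood and the parameters below are illustrative placeholders, not the cryo-EM registration problem itself:

```python
def evolutionary_tabu_search(score, neighbours, seed_pop, generations=50, tabu_size=1000):
    """Generic evolutionary tabu search: minimize `score` over hashable states.

    Each generation, every member of the population moves greedily to its
    best non-tabu neighbour; visited states enter a bounded tabu memory,
    and the population is truncated back to its original size.
    """
    tabu = set()
    pop = list(seed_pop)
    best = min(pop, key=score)
    for _ in range(generations):
        children = []
        for parent in pop:
            cands = [c for c in neighbours(parent) if c not in tabu]
            if not cands:
                continue                      # all moves tabu: parent stalls
            child = min(cands, key=score)     # greedy local move
            tabu.add(child)
            if len(tabu) > tabu_size:
                tabu.pop()                    # bounded memory (arbitrary eviction)
            children.append(child)
        pop = sorted(pop + children, key=score)[:len(seed_pop)]
        if score(pop[0]) < score(best):
            best = pop[0]
    return best
```

In the registration setting, a state would encode the placements of all components at once, and the score would measure the fit to the cryo-EM map; the tabu memory is what keeps the population from collapsing back into already-explored regions of the high-dimensional search space.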
Abstract:
The increasing complexity of current software systems is encouraging the development of self-managed software architectures, i.e. systems capable of reconfiguring their structure at runtime to fulfil a set of goals. Several approaches have covered different aspects of their development, but some issues remain open, such as the maintainability and scalability of self-management subsystems. Centralized approaches, like self-adaptive architectures, offer good maintenance properties but do not scale well to large systems. On the contrary, decentralized approaches, like self-organising architectures, offer good scalability but are not maintainable: reconfiguration specifications are spread across the system and often tangled with functional specifications. In order to address these issues, this paper presents an aspect-oriented autonomic reconfiguration approach where: (1) each subsystem is provided with self-management properties, so it can evolve itself and the components it is composed of; (2) self-management concerns are isolated and encapsulated into aspects, thus improving their reuse and maintenance. Povzetek: An approach with self-reconfiguration of software architectures is presented.
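The idea of encapsulating a self-management concern into an aspect can be loosely illustrated in Python with a decorator that weaves a monitor-and-reconfigure step around a component's operations. The paper targets component architectures rather than single objects, so this is only an analogy, and all names here are invented:

```python
def self_managed(is_healthy, reconfigure):
    """Aspect: weave a self-management concern around a component operation,
    keeping it isolated from the functional code it decorates."""
    def weave(operation):
        def wrapper(component, *args, **kwargs):
            result = operation(component, *args, **kwargs)
            if not is_healthy(component):    # monitor after each operation
                reconfigure(component)       # reconfigure at runtime
            return result
        return wrapper
    return weave

class Worker:
    """Functional component: knows nothing about self-management."""
    def __init__(self):
        self.load = 0
        self.replicas = 1

    @self_managed(is_healthy=lambda w: w.load <= 10,
                  reconfigure=lambda w: setattr(w, "replicas", w.replicas + 1))
    def handle(self, jobs):
        self.load += jobs
```

The point mirrored from the paper is the separation of concerns: the reconfiguration policy lives entirely in the aspect, so it can be reused or replaced without touching the functional specification of `Worker`.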