964 results for Abstraction layers
Abstract:
Dissertation for obtaining the degree of Master in Informatics Engineering
Abstract:
Modeling ERP software means capturing the information necessary to support enterprise management. This modeling process goes down through different abstraction layers, from enterprise modeling to code generation. Thus ERP is the kind of system where enterprise engineering undoubtedly has, or should have, a strong influence. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools can jeopardize the advantage brought by source code availability. Therefore, the aim of this paper is to present a development process proposal for the Open Source ERP5 system. The proposed development process aims to cover different abstraction levels, taking into account well-established standards and common practices, as well as platform issues. Its main goal is to provide an adaptable meta-process for ERP5 adopters. © 2006 IEEE.
Abstract:
The design and implementation of an ERP system involves capturing the information necessary for implementing the system's structure and behavior in support of enterprise management. This process should start at the enterprise modeling level and finish at the coding level, going down through different abstraction layers. In the case of Free/Open Source ERP, the lack of proper modeling methods and tools jeopardizes the advantages of source code availability. Moreover, the development culture of open source communities, which is distributed, decentralized in its decision-making, and source-code driven, generally does not rely on methods for modeling the higher abstraction levels necessary for an ERP solution. The aim of this paper is to present a model-driven development process for the open source ERP system ERP5. The proposed process covers the different abstraction levels involved, taking into account well-established standards and common practices, as well as new approaches, by supplying Enterprise, Requirements, Analysis, Design, and Implementation workflows. Copyright 2008 ACM.
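The idea of descending abstraction layers down to code generation can be illustrated with a minimal, generic sketch in Python: an analysis-level model of a business entity is turned into implementation-level code. The model dictionary, the `generate_class` helper, and the `Invoice` entity are all hypothetical; this is not ERP5's actual meta-model or tooling.

```python
# Toy model-driven generation: an analysis-level description of an entity is
# lowered to implementation-level code. Purely illustrative; not ERP5 tooling.
analysis_model = {
    "entity": "Invoice",                                  # hypothetical entity
    "attributes": ["customer", "total_amount", "due_date"],
}

def generate_class(model: dict) -> str:
    """Produce implementation-level source code from an analysis-level model."""
    lines = [f"class {model['entity']}:"]
    args = ", ".join(model["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    lines += [f"        self.{attr} = {attr}" for attr in model["attributes"]]
    return "\n".join(lines)

source = generate_class(analysis_model)
print(source)                                             # the generated class

namespace = {}
exec(source, namespace)                                   # "code generation" step
invoice = namespace["Invoice"]("ACME", 1200.0, "2024-07-01")
```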
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed, on every desk, in every home, and in every pocket. As a consequence, in recent years software has been undergoing an epochal turn toward concurrency, distribution, and interaction, pushed by the evolution of hardware architectures and the growth of network availability. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, the dissertation first constructs the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then focuses on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. It then shifts the perspective from the development of intelligent software systems toward general-purpose software development. Building on the expertise gained during the background phase, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development while providing an agent-oriented level of abstraction for the engineering of general-purpose software systems.
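The convergence toward the actor paradigm that the abstract describes can be made concrete with a minimal, queue-based actor in Python. This is a generic sketch of the actor model (encapsulated state, asynchronous message passing, one message at a time), not simpAL code and not the thesis's implementation.

```python
# Minimal actor-model sketch: each actor owns its state and processes messages
# from a private mailbox sequentially. Generic illustration only.
import queue
import threading

class Actor:
    def __init__(self):
        self._inbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Asynchronous, non-blocking message send."""
        self._inbox.put(message)

    def _run(self):
        while True:
            message = self._inbox.get()
            if message is None:            # poison pill stops the actor
                break
            self.receive(message)

    def receive(self, message):
        raise NotImplementedError

class Counter(Actor):
    """State is encapsulated: only this actor's own thread ever touches it."""
    def __init__(self):
        self.count = 0
        super().__init__()

    def receive(self, message):
        if message == "inc":
            self.count += 1

counter = Counter()
for _ in range(3):
    counter.send("inc")                    # senders never share the counter's state
counter.send(None)
counter._thread.join()
print(counter.count)                       # 3
```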
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24/7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity used in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions.

This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption of a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective.

When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed: as room temperature rises, the efficiency of the data room cooling units improves, but CPU temperature also rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective.

Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, e.g. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing the energy consumption of the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, and to data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
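The server-level tradeoff described above (cubic fan power versus exponentially temperature-dependent leakage) can be sketched numerically. Every constant below (fan coefficient, toy thermal model, leakage fit) is a hypothetical placeholder, not a value from the thesis; the point is only that the joint optimum lies between under- and over-cooling.

```python
# Toy leakage-cooling tradeoff at the server level. All coefficients are
# illustrative assumptions, not measured or fitted values.
import numpy as np

def fan_power(speed):
    """Fan power grows with the cube of fan speed (speed normalized to [0, 1])."""
    return 40.0 * speed**3                       # W, hypothetical constant

def cpu_temperature(speed, dynamic_power=80.0, ambient=25.0):
    """More airflow lowers steady-state CPU temperature (toy thermal model)."""
    thermal_resistance = 0.5 / (0.5 + speed)     # K/W, hypothetical
    return ambient + thermal_resistance * dynamic_power

def leakage_power(temp_c):
    """Leakage grows exponentially with temperature (hypothetical fit)."""
    return 5.0 * np.exp(0.04 * (temp_c - 25.0))

speeds = np.linspace(0.1, 1.0, 200)
total = [fan_power(s) + leakage_power(cpu_temperature(s)) for s in speeds]
best = speeds[int(np.argmin(total))]
print(f"fan speed minimizing fan + leakage power: {best:.2f}")
# Over-provisioning cold air (speed -> 1.0) wastes cubic fan power, while too
# little airflow lets leakage grow exponentially: the optimum lies in between.
```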
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing, and managing typically distributed software systems at runtime. However, engineers today often work with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers, and as a result, most of them are at an early stage, still in the first context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating methodologies. In fact, a meta-model specifies the concepts, rules, and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent the logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the system representation complexity. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
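The kind of building blocks such a meta-model enumerates (agents, environment abstractions that encapsulate functions, topology abstractions that capture spatial structure, grouped into layers) can be sketched as plain data structures. This is a generic illustration, not the actual SODA meta-model; all names are hypothetical.

```python
# Generic sketch of meta-model building blocks for a multi-agent system
# description. Illustrative only; not the SODA meta-model.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    tasks: list[str] = field(default_factory=list)

@dataclass
class EnvironmentAbstraction:
    name: str
    functions: list[str] = field(default_factory=list)   # functions it encapsulates

@dataclass
class TopologyAbstraction:
    name: str
    connects: list[str] = field(default_factory=list)    # logical/physical places it links

@dataclass
class Layer:
    """One level of the multi-layered system description."""
    level: int
    agents: list[Agent] = field(default_factory=list)
    environment: list[EnvironmentAbstraction] = field(default_factory=list)
    topology: list[TopologyAbstraction] = field(default_factory=list)

warehouse = Layer(
    level=0,
    agents=[Agent("picker", ["collect-order"])],
    environment=[EnvironmentAbstraction("order-board", ["publish", "claim"])],
    topology=[TopologyAbstraction("aisle-map", ["dock", "shelf-A"])],
)
print(warehouse)
```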
Abstract:
110 p.
Abstract:
Low-density nanostructured foams are often limited in applications due to their low mechanical and thermal stabilities. Here we report an approach to building the structural units of three-dimensional (3D) foams using hybrid two-dimensional (2D) atomic layers made of stacked graphene oxide layers reinforced with conformal hexagonal boron nitride (h-BN) platelets. The ultra-low-density (1/400 the density of graphite) 3D porous structures are scalably synthesized using a solution processing method. A layered 3D foam structure forms due to the presence of h-BN, and significant improvements in the mechanical properties are observed for the hybrid foam structures, over a range of temperatures, compared with pristine graphene oxide or reduced graphene oxide foams. It is found that domains of h-BN layers on the graphene oxide framework help to reinforce the 2D structural units, providing the observed improvement in the mechanical integrity of the 3D foam structure.
Abstract:
The ethanol oxidation reaction (EOR) is investigated on Pt/Au(hkl) electrodes. The Au(hkl) single crystals used belong to the [n(111)x(110)] family of planes. Pt is deposited by galvanic exchange of a previously deposited Cu monolayer using a Pt²⁺ solution. Deposition is not epitaxial, and the defects on the underlying Au(hkl) substrates are partially transferred to the Pt films. Moreover, an additional (100)-step-like defect is formed, probably as a result of the strain caused by the Pt and Au lattice mismatch. Regarding the EOR, both vicinal Pt/Au(hkl) surfaces exhibit a behavior that differs from that expected for stepped Pt; for instance, the smaller the step density on the underlying Au substrate, the greater the ability to break the C-C bond in the ethanol molecule, as determined by in situ Fourier transform infrared spectroscopy measurements. We also found that acetic acid production is favored as the terrace width decreases, reflecting the inefficiency of the surface array at cleaving the ethanol molecule.
Abstract:
Three welding procedures used to rebuild worn shafts in sugar cane mills were analysed: two submerged arc welding processes and one flux cored arc welding (FCAW) process. Sliding wear tests were performed in accordance with the ASTM G 77 standard, using rings of welding material, blocks of SAE 67 bronze, and oil as lubricant. The worn surfaces of the rings and blocks were analysed by scanning electron microscopy to determine the wear mechanisms. High contact pressure, high operating temperature, and low relative speed were applied in the sliding wear tests to match the conditions in sugar cane mills. Transferred material and evidence of adhesive junctions were detected. Additionally, hardened fragments produced abrasive grooves on the worn surfaces. The welding deposits that presented strong adhesion on the worn surface showed higher mass loss than the materials that presented more abrasive characteristics. Plastic mechanical properties were measured and related to the mass loss. The tested materials presented similar hardness but different yield stress and hardening coefficient. A relationship between wear, strain hardening coefficient, and yield stress was found. The welding deposit that presented the highest hardening coefficient showed the highest mass loss, with evidence of severe adhesion on the worn surface.
Abstract:
We have investigated the fundamental structural properties of conducting thin films formed by implanting gold ions into polymethylmethacrylate (PMMA) polymer at 49 eV using a repetitively pulsed cathodic arc plasma gun. Transmission electron microscopy images of these composites show that the implanted ions form gold clusters of diameter approximately 2-12 nm distributed throughout a shallow, buried layer of average thickness 7 nm, and small-angle x-ray scattering (SAXS) reveals the structural properties of the PMMA-gold buried layer. The SAXS data have been interpreted using a theoretical model that accounts for peculiarities of disordered systems.
Abstract:
PMMA (polymethylmethacrylate) was ion implanted with gold at very low energy and over a range of different doses using a filtered cathodic arc metal plasma system. A nanometer-scale conducting layer was formed, fully buried below the polymer surface at low implantation dose, and evolving to include a gold surface layer as the dose was increased. Depth profiles of the implanted material were calculated using the Dynamic TRIM computer simulation program. The electrical conductivity of the gold-implanted PMMA was measured in situ as a function of dose. Samples formed at a number of different doses were subsequently characterized by Rutherford backscattering spectrometry, and test patterns were formed on the polymer by electron beam lithography. Lithographic patterns were imaged by atomic force microscopy and demonstrated that the contrast properties of the lithography were well maintained in the surface-modified PMMA.
Abstract:
AISI D2 is the most commonly used cold-work tool steel of its grade. It offers high hardenability, low distortion after quenching, high resistance to softening, and good wear resistance. The use of appropriate hard coatings on this steel can further improve its wear resistance. Boronizing is a surface treatment based on the diffusion of boron into the substrate. In this work, boride layers were formed on AISI D2 steel using borax baths containing iron-titanium and aluminium, at 800 °C and 1000 °C for 4 h. The borided steel was characterized by optical microscopy, Vickers microhardness, X-ray diffraction (XRD), and glow discharge optical spectroscopy (GDOS) to verify the effect of the bath compositions and treatment temperatures on layer formation. Depending on the bath composition, Fe₂B or FeB was the predominant phase in the boride layers. The layers exhibited a "saw-tooth" morphology at the substrate interface; layer thicknesses varied from 60 to 120 μm, and hardness values in the range of 1596-1744 HV were obtained. © 2009 Elsevier Ltd. All rights reserved.
Abstract:
Due to rain events, historical monuments exposed to the atmosphere are frequently subjected to wet and dry cycles. During drying periods, wetness is retained in some confined regions and the corrosion product layer, generally called a patina, builds up and gets thicker. The aim of this study is to use electrochemical impedance spectroscopy (EIS) to investigate the electrochemical behaviour of pure copper coated with two artificial patina layers and subjected either to continuous or to intermittent immersion tests, the latter aiming to simulate wet and dry cycles. The experiments were performed in 0.1 mol dm⁻³ NaCl solution and in artificial rainwater containing the most significant pollutants of the city of São Paulo. The results of the continuous immersion tests in the NaCl solution show that the coated samples behave like a porous electrode with finite pore length. In the intermittent tests, on the other hand, a porous electrode response with semi-infinite pore length can develop. The results were interpreted based on the model of de Levie, and a critical comparison with previous interpretations reported in the literature for similar systems is presented. © 2011 Elsevier Ltd. All rights reserved.
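The difference between the finite- and semi-infinite-pore responses mentioned above can be made concrete with a short sketch of the de Levie transmission-line impedance. The parameter values are illustrative placeholders, not fits to the patina data in the paper.

```python
# de Levie impedance of a single cylindrical pore; illustrative parameters only.
import numpy as np

def de_levie_impedance(freq_hz, r_per_len, c_per_len, pore_length=None):
    """r_per_len: electrolyte resistance per unit pore length (ohm/cm).
    c_per_len: interfacial capacitance per unit pore length (F/cm).
    pore_length: pore depth in cm; None means a semi-infinite pore."""
    omega = 2 * np.pi * freq_hz
    z_int = 1.0 / (1j * omega * c_per_len)     # interfacial impedance per unit length
    z_inf = np.sqrt(r_per_len * z_int)         # semi-infinite pore: constant -45 deg phase
    if pore_length is None:
        return z_inf
    # Finite pore: coth correction; tends toward capacitive (-90 deg) at low frequency.
    return z_inf / np.tanh(pore_length * np.sqrt(r_per_len / z_int))

freqs = np.logspace(-2, 4, 7)
z_semi = de_levie_impedance(freqs, r_per_len=1e3, c_per_len=1e-5)
z_fin = de_levie_impedance(freqs, r_per_len=1e3, c_per_len=1e-5, pore_length=0.01)
for f, zs, zf in zip(freqs, z_semi, z_fin):
    print(f"{f:10.2f} Hz   semi-infinite {np.degrees(np.angle(zs)):6.1f} deg   "
          f"finite {np.degrees(np.angle(zf)):6.1f} deg")
```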