14 results for Expandability
Abstract:
Our main result is a new sequential method for the design of decentralized control systems. Controller synthesis is conducted on a loop-by-loop basis, and at each step the designer obtains an explicit characterization of the class C of all compensators for the loop being closed that results in closed-loop system poles being in a specified closed region D of the s-plane, instead of merely stabilizing the closed-loop system. Since one of the primary goals of control system design is to satisfy basic performance requirements that are often directly related to closed-loop pole location (bandwidth, percentage overshoot, rise time, settling time), this approach immediately allows the designer to focus on other concerns such as robustness and sensitivity. By considering only compensators from class C and seeking the optimum member of that set with respect to sensitivity or robustness, the designer has a clearly-defined limited optimization problem to solve without concern for loss of performance. A solution to the decentralized tracking problem is also provided. This design approach has the attractive features of expandability, the use of only 'local models' for controller synthesis, and fault tolerance with respect to certain types of failure.
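To make the pole-region requirement concrete, the sketch below checks whether the closed-loop poles obtained with a candidate compensator lie inside an example region D defined by a minimum decay rate and a minimum damping ratio. The plant, compensator and thresholds are hypothetical; this illustrates the criterion itself, not the paper's synthesis procedure.

```python
import numpy as np

def closed_loop_poles(num_g, den_g, num_k, den_k):
    """Poles of the unity-feedback loop K(s)G(s)/(1 + K(s)G(s)).

    Transfer functions are given as polynomial coefficient lists
    (highest power first); the characteristic polynomial is
    den_g*den_k + num_g*num_k.
    """
    char_poly = np.polyadd(np.polymul(den_g, den_k),
                           np.polymul(num_g, num_k))
    return np.roots(char_poly)

def in_region_D(poles, sigma=0.5, zeta_min=0.5):
    """Check that every pole satisfies Re(s) <= -sigma and has a
    damping ratio of at least zeta_min (both thresholds are
    illustrative placeholders for the designer's region D)."""
    ok_decay = np.all(poles.real <= -sigma)
    zeta = -poles.real / np.maximum(np.abs(poles), 1e-12)
    return ok_decay and np.all(zeta >= zeta_min)

# Example: plant 1/(s^2 + s), candidate compensator 10(s + 2)/(s + 20)
poles = closed_loop_poles([1], [1, 1, 0], [10, 20], [1, 20])
print(poles, in_region_D(poles))
```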
Abstract:
A robot software system was designed on top of the QNX real-time multitasking operating system. The system employs two shared data areas, RTM (real-time monitoring) and DCP (device control), and uses QNX's message-passing mechanism to communicate between RTM and DCP through the communication interfaces of the two shared data areas. This data-isolation mechanism ensures the modularity and expandability of the program.
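As an illustration of the data-isolation pattern described above, the minimal Python sketch below keeps two separate data areas (stand-ins for RTM and DCP) that exchange information only through a message queue; the field names are hypothetical and the in-process queue merely stands in for QNX native message passing.

```python
import queue
import threading

# Hypothetical shared data areas; in the real system these are the
# RTM (real-time monitoring) and DCP (device control) regions.
rtm_data = {"joint_angles": [0.0] * 6}
dcp_data = {"motor_commands": [0.0] * 6}

# The only channel between the two areas is a message queue, standing
# in here for QNX message passing.
rtm_to_dcp = queue.Queue()

def rtm_task():
    # RTM publishes a command message; it never touches dcp_data directly.
    rtm_to_dcp.put({"cmd": "move", "targets": [0.1] * 6})

def dcp_task():
    # DCP consumes the message and updates its own data area.
    msg = rtm_to_dcp.get()
    if msg["cmd"] == "move":
        dcp_data["motor_commands"] = msg["targets"]

t1 = threading.Thread(target=rtm_task)
t2 = threading.Thread(target=dcp_task)
t1.start(); t2.start(); t1.join(); t2.join()
print(dcp_data["motor_commands"])
```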
Abstract:
Emerging healthcare applications can benefit enormously from recent advances in pervasive technology and computing. This paper introduces the CLARITY Modular Ambient Health and Wellness Measurement Platform, which is a heterogeneous and robust pervasive healthcare solution currently under development at the CLARITY Center for Sensor Web Technologies. This intelligent and context-aware platform comprises the Tyndall Wireless Sensor Network prototyping system, augmented with an agent-based middleware and frontend computing architecture. The key contribution of this work is to highlight how interoperability, expandability, reusability and robustness can be manifested in the modular design of the constituent nodes and the inherently distributed nature of the controlling software architecture.
Abstract:
Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and resulting land use patterns. An essential methodology to study and quantify such interactions is provided by the adoption of land-use models. By the application of land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to also determine the relevance of driving forces. Modeling land use and land-use change has a long tradition. On the regional scale in particular, a variety of models has been created for different regions and research questions. Modeling capabilities grow with steady advances in computer technology, which are driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, will be introduced and discussed. Particular features of SITE are the notably extended capability to integrate models and the strict separation of application and implementation. These features enable efficient development, testing and use of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over the responsibility for their administration. By means of a scripting language (Python) that has been extended by language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or by using a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model testing and support for the analysis of simulation results, have been integrated into the generic framework. During the implementation of SITE, particular emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. Analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area could mainly be characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal or at least adequate solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The time period for the calibration ranged from 1981 to 2002. For this period, corresponding reference land-use maps were compiled. It could be shown that efficient, automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters requires detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination and the resulting coffee fruit set on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
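The figure-of-merit objective used in the calibration can be sketched as follows, assuming integer-coded land-use grids and the common definition of the measure for land-change models (hits divided by the sum of hits, misses, wrong-category hits and false alarms, relative to the initial map); the array names and class codes are hypothetical.

```python
import numpy as np

def figure_of_merit(initial, reference, simulated):
    """Figure of merit for land-change model validation.

    All inputs are integer-coded land-use grids of equal shape.
    hits         : observed change simulated correctly
    misses       : observed change simulated as persistence
    wrong        : observed change simulated as change to the wrong class
    false_alarms : observed persistence simulated as change
    """
    obs_change = reference != initial
    sim_change = simulated != initial

    hits = np.sum(obs_change & (simulated == reference))
    misses = np.sum(obs_change & ~sim_change)
    wrong = np.sum(obs_change & sim_change & (simulated != reference))
    false_alarms = np.sum(~obs_change & sim_change)

    denom = hits + misses + wrong + false_alarms
    return hits / denom if denom else np.nan

# Toy 3x3 grids with hypothetical class codes (1 = forest, 2 = agriculture).
initial   = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
reference = np.array([[2, 2, 1], [1, 2, 1], [1, 1, 1]])
simulated = np.array([[2, 1, 1], [1, 2, 2], [1, 1, 1]])
print(figure_of_merit(initial, reference, simulated))  # 0.5
```

In a calibration run, a genetic algorithm would propose parameter sets, the model would produce a simulated map for each, and this score would be maximized.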
Abstract:
Fully connected cubic networks (FCCNs) are a class of newly proposed hierarchical interconnection networks for multicomputer systems, which enjoy the strengths of constant node degree and good expandability. The shortest path routing in FCCNs is an open problem. In this paper, we present an oblivious routing algorithm for the n-level FCCN with N = 8^n nodes, and prove that this algorithm creates a shortest path from the source to the destination. At the cost of both an O(N)-parallel-step off-line preprocessing phase and a list of size N stored at each node, the proposed algorithm is carried out at each related node in O(n) time. In some cases the proposed algorithm is superior to the one proposed by Chang and Wang in terms of the length of the routing path. This justifies the utility of our routing strategy. (C) 2006 Elsevier Inc. All rights reserved.
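As a rough illustration of the hierarchical addressing that routing in an FCCN works with (not the authors' shortest-path algorithm), the sketch below labels each node of an n-level FCCN with n octal digits and finds the level at which a source and a destination address diverge; this addressing scheme is an assumption made for illustration.

```python
def octal_address(node_id, n_levels):
    """Label a node of an n-level FCCN (N = 8**n nodes) with n octal
    digits, most significant level first."""
    digits = []
    for _ in range(n_levels):
        digits.append(node_id % 8)
        node_id //= 8
    return digits[::-1]

def divergence_level(src, dst, n_levels):
    """Highest level at which the two addresses differ (0 = identical).

    A route between the nodes has to leave their common sub-network at
    this level; routing inside the sub-network can proceed recursively."""
    a, b = octal_address(src, n_levels), octal_address(dst, n_levels)
    for level in range(n_levels):
        if a[level] != b[level]:
            return n_levels - level
    return 0

n = 3                               # 3-level FCCN, 8**3 = 512 nodes
print(octal_address(83, n))         # [1, 2, 3]
print(divergence_level(83, 84, n))  # addresses differ only at the lowest level -> 1
```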
Abstract:
In this study, experimental procedures were conducted to determine the variation in the expandability of rigid polyurethane foam (PUR) made from a natural oil polyol (NOP), specifically castor oil (Ricinus communis), both pure and with vermiculite added as a dispersed phase at mass-replacement percentages ranging from 0% to 20%. From the information acquired, the parameters for producing the test specimens were defined: plates obtained through controlled expansion with a fixed final volume. Initially, the plates were subjected to thermal performance tests and their temperature profiles were evaluated; samples were later extracted and prepared in accordance with the conditions required for each test. The thermal conductivity coefficient, volumetric heat capacity and thermal diffusivity were then measured. The values found were compared with the results obtained in the thermal performance tests, contributing to their validation. Finally, the influence that changes in the physical-chemical structure of the material exerted on the variation of the thermophysical quantities was investigated through gas pycnometry, scanning electron microscopy (SEM) combined with energy-dispersive X-ray fluorescence spectroscopy (EDXRF), Fourier-transform infrared spectroscopy (FTIR), thermogravimetric analysis (TGA) and differential thermal analysis (DTA). Based on the results obtained, it was possible to demonstrate that every filler percentage analyzed promoted an increase in the expansion potential (PE) of the resin. In the production of the plates, the composites with densities close to that of free expansion presented high contraction during curing, so the higher density was adopted as the definitive standard. In the thermal performance tests, the heating and cooling curves of the different composites were symmetrical, with very similar temperature values. The results obtained for the thermophysical properties of the composites showed little difference with respect to the pure foam. The percentage of open pores and the irregularities in the morphology of the composites were proportional to the vermiculite content. In the interaction between the matrix and the dispersed phase, there were no chemical transformations in the interface region and no new compounds were generated. The PUR-NOP and vermiculite composites presented thermal insulating properties close to those of the pure foam, with a significantly lower percentage of plastic in their composition for the formulation with 10% filler.
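The three thermophysical quantities measured above are linked by the standard definition of thermal diffusivity, which is presumably what allows them to cross-check the thermal performance tests; a minimal sketch with hypothetical values of the right order of magnitude for rigid PUR foam:

```python
def thermal_diffusivity(k, volumetric_heat_capacity):
    """alpha = k / (rho * c_p); the volumetric heat capacity is rho * c_p,
    so alpha follows directly from the two measured quantities
    (SI units: W/(m*K) and J/(m^3*K) give m^2/s)."""
    return k / volumetric_heat_capacity

# Hypothetical values, order of magnitude typical for rigid PUR foam.
k = 0.025          # thermal conductivity, W/(m*K)
rho_cp = 45_000.0  # volumetric heat capacity, J/(m^3*K)
print(thermal_diffusivity(k, rho_cp))  # ~5.6e-7 m^2/s
```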
Abstract:
This dissertation describes the development of a project for a demonstrator within Supply Chain 6 of the Internet of Energy (IoE) project: the Remote Monitoring Emulator, to which I contributed in several sections. IoE is a project of international relevance that aims to establish an interoperability standard for the electric power production and utilization infrastructure, using Smart Space platforms. The future perspectives of IoE concern a platform for electric power trading, the Smart Grid, whose energy is produced by decentralized renewable sources and whose services are exploited primarily according to the Internet of Things philosophy. The main consumers of this kind of smart technology will be Smart Houses (that is, buildings controlled by an autonomous system for electrical energy management that is interoperable with the Smart Grid) and Electric Mobility, i.e. the smart, automated management of the movement and, above all, the recharging of electric vehicles. It is in this latter case study that the Remote Monitoring Emulator project is situated. It consists of the development of a simulated platform for managing the recharging of an electric vehicle in a city. My personal contribution to this project lies in the development and modeling of the simulation platform and of its counterpart in a mobile application, and in the implementation of a city-service prototype. This platform will ultimately make up a demonstrator system that uses the same device a real user would use inside his or her vehicle. The main requirements the platform must satisfy are interoperability, expandability and compliance with standards, as it needs to communicate with other development groups and to respond effectively to internal changes that can affect IoE.
Abstract:
Kaolinite, goethite, minor hematite, and gibbsite were found in fluvial upper Lower Cretaceous basal sediment from the Southern Kerguelen Plateau, Sites 748 and 750, 55°S latitude. This mineral assemblage, derived from the weathering of basalt, indicates near-tropical weathering conditions with high orographic rainfall, at least 100 cm per year. The climate deteriorated by the Turonian or Coniacian, as indicated by the decline in kaolinite content of this sediment. The Upper Cretaceous sediment at Site 748 consists of 200 m of millimeter-laminated, sparsely fossiliferous, wood-bearing glauconitic siltstone and claystone with siderite concretions deposited on a shelf below wave base. Some graded and cross beds indicate that storms swept over the shelf and reworked the sediment. Overlying this unit is 300 m of intermittently partly silicified, bryozoan-inoceramid-echinoderm-rich glauconitic packstones, grainstones, and wackestones. The dominant clay mineral in both units is identical to the mineral composition of the glauconite pellets: randomly interstratified smectite-mica. The clay fraction has a higher percent of expandable layers than the mineral of the glauconite pellets, and the clay of the underlying subunit has a higher percentage of expandable layers than the clay of the carbonate subunit. Potassium levels mirror these mineral variations, with higher K levels in minerals that have a lower percentage of expandable layers. The decrease in expandability of the mineral in the upper subunit is attributed to diagenesis, the result of higher porosity.
Abstract:
How the micro-scale fabric of clay-rich mudstones evolves during consolidation in early burial is critical to how these rocks are interpreted in the deeper portions of sedimentary basins. Core samples from the Integrated Ocean Drilling Program Expedition 308, Ursa Basin, Gulf of Mexico, covering the seafloor to 600 meters below seafloor (mbsf), are ideal for studying the micro-scale fabric of mudstones. Mudstones of consistent composition and grain size decrease in porosity from 80% at the seafloor to 37% at 600 mbsf. Argon-ion milling produces flat surfaces to image this pore evolution over a vertical effective stress range of 0.25 MPa (71 mbsf) to 4.05 MPa (597 mbsf). With increasing burial, pores become elongated, mean pore size decreases, and there is preferential loss of the largest pores. With burial, there is also a small increase in clay mineral preferred orientation, as recorded by high-resolution X-ray goniometry.
Abstract:
X-ray diffraction analyses of the clay-sized fraction of sediments from the Nankai Trough and Shikoku Basin (Sites 1173, 1174, and 1177 of the Ocean Drilling Program) reveal spatial and temporal trends in clay minerals and diagenesis. More detrital smectite was transported into the Shikoku Basin during the early-middle Miocene than what we observe today, and smectite input decreased progressively through the late Miocene and Pliocene. Volcanic ash has been altered to dioctahedral smectite in the upper Shikoku Basin facies at Site 1173; the ash alteration front shifts upsection to the outer trench-wedge facies at Site 1174. At greater depths (lower Shikoku Basin facies), smectite alters to illite/smectite mixed-layer clay, but reaction progress is incomplete. Using ambient geothermal conditions, a kinetic model overpredicts the amount of illite in illite/smectite clays by 15%-20% at Site 1174. Numerical simulations come closer to observations if the concentration of potassium in pore water is reduced or the time of burial is shortened. Model results match X-ray diffraction results fairly well at Site 1173. The geothermal gradient at Site 1177 is substantially lower than at Sites 1173 and 1174; consequently, volcanic ash alters to smectite in lower Shikoku Basin deposits but smectite-illite diagenesis has not started. The absolute abundance of smectite in mudstones from Site 1177 is sufficient (30-60 wt%) to influence the strata's shear strength and hydrogeology as they subduct along the Ashizuri Transect.
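The sensitivity to pore-water potassium and burial time noted above can be illustrated with a schematic Arrhenius-type smectite-to-illite kinetic expression; the rate form, constants and conditions below are assumptions chosen for illustration only, not the kinetic model used in the study.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def smectite_fraction(t_myr, temp_c, k_conc_molar, a=0.1, ea=8.0e4):
    """Schematic smectite-to-illite kinetics.

    dS/dt = -A * [K+] * exp(-Ea / RT) * S, integrated at constant
    temperature; S is the fraction of smectite layers remaining in
    mixed-layer illite/smectite. A, Ea and the first-order dependence
    on [K+] are illustrative assumptions, not fitted values.
    """
    t_sec = t_myr * 1e6 * 365.25 * 24 * 3600
    temp_k = temp_c + 273.15
    rate = a * k_conc_molar * math.exp(-ea / (R * temp_k))
    return math.exp(-rate * t_sec)

# Halving pore-water [K+] or the burial time raises the predicted
# remaining smectite, i.e. lowers the predicted illite content --
# the direction of the adjustments discussed in the abstract.
print(smectite_fraction(10, 90, 0.01))
print(smectite_fraction(10, 90, 0.005))
print(smectite_fraction(5, 90, 0.01))
```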
Abstract:
This report describes the results of semiquantitative analysis of clay mineral composition by X-ray diffraction. The samples consist of hemipelagic mud and mudstone cored from Hydrate Ridge during Leg 204 of the Ocean Drilling Program. We analyzed oriented aggregates of the clay-sized fractions (<2 µm) to estimate relative percentages of smectite, illite, and chlorite (+ kaolinite). For the most part, stratigraphic variations in clay mineral composition are modest and there are no significant differences among the seven sites that were included in the study. On average, early Pleistocene to Holocene trench slope and slope basin deposits contain 29% smectite, 31% illite, and 40% chlorite (+ kaolinite). Late Pliocene to early Pleistocene strata from the underlying accretionary prism contain moderately larger proportions of smectite with average values of 38% smectite, 27% illite, and 35% chlorite (+ kaolinite). There is no evidence of clay mineral diagenesis at the depths sampled. The expandability of smectite is, on average, equal to 64%, and there are no systematic variations in expandability as a function of burial depth or depositional age. The absence of clay mineral diagenesis is consistent with the relatively shallow sample depths and corresponding maximum temperatures of only 24°-33°C.
Abstract:
Software development is an inherently complex activity that requires specific problem-comprehension and problem-solving abilities. It can even be stated flatly that there is no perfect method for each of the development stages, nor a perfect life-cycle model: each new problem differs from the previous ones in some respect, and techniques that worked in earlier projects can fail in new ones. The current trend is therefore an integrative approach that uses, in each case, the techniques, methods and tools best suited to the characteristics of the problem posed to the engineer. This point of view raises new problems. The first is the selection of development approaches: if there is no single best approach, how does one choose the most adequate one from the set of available options? A second problem concerns the relationship between the analysis and design phases, which carries two major risks. On the one hand, the analysis of the problem can be too shallow, creating a gap between analysis and design that often makes the transition between them impossible. On the other hand, the analysis can be expressed in design terms, so that instead of focusing on the problem it becomes a first version of the solution, i.e. a preliminary design. From this follows the analysis dilemma, which can be stated as follows: for each problem the most adequate techniques must be chosen, which requires knowing the characteristics of the problem; to know them, the problem must first be analysed, which means choosing an analysis technique before the problem is known. If the chosen technique uses design terminology, the solution paradigm has been preconditioned and may not be the most adequate one for solving the problem. Finally, there are pragmatic barriers that limit the adoption of formally based methods and hinder their application in everyday practice. Given these problems, analysis methods are needed that fulfil several goals. The first is a formal basis, to avoid ambiguity and allow verification of the correctness of the generated models. The second is design independence: the method should use terms with no direct counterpart in the design, so that it can focus on the characteristics of the problem. The method should also allow the analysis of problems of any kind: algorithmic, decision-support or knowledge-based, among others. Two further goals concern pragmatic aspects. First, the method should incorporate a formal but non-mathematical textual notation, so that the models can be understood and validated by people without deep mathematical knowledge while remaining rigorous enough to support verification. Second, a complementary graphical notation is required to represent the models, so that clients and users can understand and validate them comfortably.
This doctoral thesis presents SETCM, an analysis method that meets these goals. All the elements that make up the analysis models have been defined using terminology that is independent of design paradigms, and these definitions have been formalised using the fundamental concepts of set theory: elements, sets and correspondences between sets. In addition, a formal language has been defined to represent the elements of the analysis models, avoiding mathematical notation as far as possible, complemented by a graphical notation that visually represents the most relevant parts of the models. The proposed method has been subjected to an intensive experimentation phase, during which it was applied to 13 case studies, all of them real projects that concluded in products transferred to public or private organisations. The experimentation evaluated the adequacy of SETCM for analysing problems of different sizes and for systems whose final design used different, and even mixed, paradigms. Its use by analysts with different levels of experience (novice, intermediate and expert) was also evaluated, together with the corresponding learning curve, in order to determine whether the method is easy to learn regardless of prior knowledge of other analysis techniques. The expandability of models generated with SETCM was also studied, to verify whether it supports projects carried out in several phases, in which the analysis of one phase extends the analysis of the preceding one. In short, the aim was to assess whether SETCM can be adopted within an organisation as the preferred analysis technique for software development. The results of this experimentation have been very positive: a high degree of fulfilment of all the goals set for the method was achieved.
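As a rough illustration of the kind of set-theoretic formalisation described above (elements, sets and correspondences between sets), the following minimal sketch checks that a hypothetical model fragment is well formed; the concept and relation names are invented and are not taken from SETCM.

```python
# Hypothetical fragment of an analysis model expressed with sets and
# correspondences between sets, in the spirit of the formalisation
# described above (concept names are invented for illustration).
concepts = {"Customer", "Order", "Product"}

# A correspondence (binary relation) between two sets, given as pairs.
places = {("Customer", "Order")}
contains = {("Order", "Product")}

def is_correspondence(relation, domain, codomain):
    """Check that every pair of the relation draws its components
    from the declared domain and codomain sets."""
    return all(a in domain and b in codomain for a, b in relation)

# Verification step: the model fragment is well formed if every
# declared correspondence respects its domain and codomain.
assert is_correspondence(places, concepts, concepts)
assert is_correspondence(contains, concepts, concepts)
print("analysis model fragment is well formed")
```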