948 results for Two-domain architecture
Abstract:
Holocene records documenting variations in the direction and intensity of the geomagnetic field over approximately the last seven and a half millennia are presented for Northwest Africa. High-resolution paleomagnetic analyses of two marine sediment sequences recovered from around 900 m water depth on the upper continental slope off Cape Ghir (30°51'N, 10°16'W) were supplemented by magnetic measurements characterizing the composition, concentration, grain size and coercivity of the magnetic mineral assemblage. Age control for the high-sedimentation-rate deposits (~60 cm/kyr) was established by AMS radiocarbon dates. The natural remanent magnetization (NRM) is predominantly carried by a fine-grained, mostly single-domain (titano-)magnetite fraction, allowing reliable determination of stable NRM inclinations and declinations by alternating-field demagnetization and principal component analysis. Predictions of the Korte and Constable (2005) geomagnetic field model CALS7K.2 for the study area are in fair agreement with the Holocene directional records for the most part, yet noticeable differences exist in some intervals. The magnetic mineral inventory of the sediments reveals various climate-controlled variations, particularly in concentration and grain size. The mid-Holocene environmental change from humid to arid conditions on the African continent had a very strong impact, which also clearly affects relative paleointensity (RPI) estimates based on different remanence normalizers. To overcome this problem, the pseudo-Thellier RPI technique was applied. The resulting record is the first Holocene record of Earth's magnetic field intensity variations in the NW Africa region. It displays long-term trends similar to those of the model predictions, but also conspicuous millennial-scale differences.
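The pseudo-Thellier approach mentioned above normalizes the NRM against a laboratory-induced ARM over matched alternating-field steps. The minimal sketch below, with illustrative values and variable names (not the study's data or code), shows the basic slope calculation that yields a relative paleointensity estimate.

```python
import numpy as np

def pseudo_thellier_slope(nrm_left, arm_gained):
    """Relative paleointensity as the slope of NRM remaining vs. ARM gained
    over matched alternating-field (AF) steps (pseudo-Thellier approach)."""
    nrm_left = np.asarray(nrm_left, dtype=float)
    arm_gained = np.asarray(arm_gained, dtype=float)
    # Least-squares slope through the data points; its magnitude is the RPI estimate.
    slope, intercept = np.polyfit(arm_gained, nrm_left, 1)
    return abs(slope)

# Illustrative demagnetization/acquisition data (arbitrary units).
nrm = [1.00, 0.82, 0.61, 0.43, 0.28, 0.17]   # NRM remaining after each AF step
arm = [0.00, 0.15, 0.33, 0.52, 0.68, 0.80]   # ARM acquired at the same steps
print(pseudo_thellier_slope(nrm, arm))
```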
Abstract:
Zircons from the oldest magmatic and metasedimentary rocks in the Podolia domain of the Ukrainian shield were studied and dated by the U-Pb method on a NORDSIM secondary-ion mass spectrometer. The ages of zircon cores in enderbite gneisses sampled in the Kazachii Yar and Odessa quarries, on opposite banks of the Yuzhnyi Bug River, reach 3790 Ma. Cores of terrigenous zircons in quartzites from the Odessa quarry, as well as in garnet gneisses from the Zaval'e graphite quarry, yield ages of 3650-3750 Ma. Zircon rims record two metamorphic events at around 2750-2850 Ma and 1900-2000 Ma. The extremely low U content of zircons in the second age group indicates granulite-facies metamorphic conditions in the Podolia domain during the Paleoproterozoic. The data measured on orthorocks (enderbite gneiss) and metasedimentary rocks unambiguously indicate the existence of ancient Paleoarchean crust in the Podolia (Dniester-Bug) domain of the Ukrainian shield and add to our knowledge of the extent of formation and the geochemical features of the primordial crust.
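For context on the U-Pb ages quoted above, here is a minimal sketch of the standard 206Pb*/238U age equation, t = ln(1 + 206Pb*/238U) / λ238. The isotope ratio used below is purely illustrative, since the abstract reports ages rather than measured ratios.

```python
import math

LAMBDA_238U = 1.55125e-10  # decay constant of 238U in 1/yr (Jaffey et al., 1971)

def pb206_u238_age(pb206_u238_ratio):
    """206Pb*/238U age in Ma from the radiogenic 206Pb/238U ratio."""
    t_years = math.log(1.0 + pb206_u238_ratio) / LAMBDA_238U
    return t_years / 1e6

# Illustrative ratio only; a ratio near 0.8 corresponds to roughly 3790 Ma.
print(round(pb206_u238_age(0.80)))
```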
Abstract:
PURPOSE The purpose of this study was to detect and classify intraretinal hemorrhage (IRH) on spectral-domain optical coherence tomography (SD-OCT). METHODS Initially, the presentation of IRH on SD-OCT in patients with branch retinal vein occlusion (BRVO) was described by one reader comparing color fundus (CF) images and SD-OCT using dedicated software. Based on these established characteristics, the presence and severity of IRH on SD-OCT and CF were assessed by two other masked readers, and the inter-device and inter-observer agreements were evaluated. Furthermore, the area of IRH was compared between the two modalities. RESULTS About 895 single B-scans of 24 eyes were analyzed. About 61% of SD-OCT scans and 46% of the CF images were graded as showing IRH (concordance: 73%, inter-device agreement: κ = 0.5). However, when subdivided into the previously established severity levels of dense (CF: 21.3% versus SD-OCT: 34.7%, κ = 0.2), flame-like (CF: 15.5% versus SD-OCT: 45.5%, κ = 0.3) and dot-like (CF: 32% versus SD-OCT: 24.4%, κ = 0.2) IRH, the inter-device agreement was weak. The inter-observer agreement was strong, with κ = 0.9 for SD-OCT and κ = 0.8 for CF. The mean area of IRH detected on SD-OCT was significantly greater than on CF (SD-OCT: 11.5 ± 4.3 mm² versus CF: 8.1 ± 5.5 mm², p = 0.008). CONCLUSIONS IRH appears to be detectable on SD-OCT; however, the previously established severity grading agreed only weakly with that assessed on CF.
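The inter-device and inter-observer agreements above are reported as kappa values. As a reminder of what that statistic measures, here is a small, self-contained sketch of Cohen's kappa for two raters; the gradings below are made up for illustration.

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels
    (e.g., presence/absence or severity level of IRH per B-scan)."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    categories = np.union1d(a, b)
    p_o = np.mean(a == b)  # observed agreement
    # Expected chance agreement from the marginal frequencies of each category.
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_o - p_e) / (1.0 - p_e)

# Illustrative gradings (1 = IRH present, 0 = absent) for a few B-scans.
reader_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reader_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(reader_1, reader_2), 2))
```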
Abstract:
This doctoral thesis is born of a strong pedagogical vocation. The working hypothesis is built around a question of personal interest, a theme that has structured the doctoral courses and research work from the outset: the Casa Domínguez as a paradigm of the dialectic in the work of Alejandro de la Sota. The classification of reality into antagonistic categories determines a polarized conceptual order, a network of mutually exclusive affiliations on which Sota builds his personal operative protocol: intellectual or popular architecture, experimental or traditional, universal or local, light or heavy, raised or buried, and so on. A question latent in the whole of the 'Sotian' oeuvre is approached here through the dissection and analysis of one of his smallest works: the Casa Domínguez. It is an organization without precedent, one that takes the dialectical strategy to the point of paroxysm: the house is split into two independent strata, the daytime zone raised above ground and the night-time zone buried beneath it. Each stratum establishes its own geometrical and structural order, its own language and character, its own identity and even its own budget. The relations between interior and exterior are specialized according to activity or rest, establishing a complex network of relationships between the different levels, some evident and others jealously veiled. The room intended for active tasks is designed as an object with a light frame and a cool skin; the precise geometry of the cube delimits a room that keeps watch over the conquered landscape. The inhabited slope is given over to rest and is configured as a green topography beneath which the bedrooms unfold around patios, crevices and skylights, generating a landscape of their own: the construction of the object versus the construction of the place.
The Casa Domínguez is one of the least studied, and therefore least celebrated, projects in Don Alejandro's oeuvre. Successive publications reproduce the graphic documentation together with the report (epic) that Sota himself composed for the project's publication. Scarcely more than a couple of brief critical texts, by Miguel Ángel Baldellou and, more recently, by Moisés Puente, address the house as a monographic subject. Yet the project and its construction occupied De la Sota for no less than ten years, with almost a hundred drawings for two versions of the project, the first of which remains unpublished. 
The determination to resolve even the last detail of this 'small' work led Sota to control even the interior furniture, as he had done in other 'important' works such as the Civil Government of Tarragona, the César Carlos hall of residence or the Post and Telecommunications building in León. The client's complicity, sustained over almost forty years, enabled the deployment of an important collection of design resources and tools. The choice of the Casa Domínguez as the central subject of this thesis therefore pursues a threefold objective: first, to approach the project as a paradigm of the 'Sotian' dialectic, analysing the coherence between the heroic discourse and the work as finally built; second, a rigorous, scientific investigation through the dissection and progressive disassembly of the architectural object; and finally, a reflection on the design themes and devices that encode the identification between the act of building and the fact of inhabiting, recording its achievements and critically assessing those elements that are less coherent with the internal order of the proposal.
Abstract:
The electroencephalogram (EEG) signal is one of the most widely used signals in biomedicine because of the rich information it carries about human tasks. This study describes a new approach based on: i) building reference models from a set of time series by analysing the events they contain, which is suitable for domains where the relevant information is concentrated in specific regions of the time series, known as events; to handle them, each event is characterized by a set of attributes; and ii) applying the discrete wavelet transform to the EEG data to extract temporal information in the form of changes in the frequency domain over time, that is, to extract non-stationary signals embedded in the noisy background of the human brain. The performance of the model was evaluated in terms of training performance and classification accuracy, and the results confirm that the proposed scheme has potential for classifying EEG signals.
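As a sketch of the wavelet step described in (ii), the snippet below decomposes one EEG epoch with the discrete wavelet transform and summarizes each sub-band with simple statistics. It assumes the PyWavelets package; the wavelet, decomposition level and feature choices are illustrative rather than those of the study.

```python
import numpy as np
import pywt

def dwt_features(epoch, wavelet="db4", level=4):
    """Decompose one EEG epoch and summarize each sub-band by simple statistics."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)  # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for band in coeffs:
        feats += [np.mean(band), np.std(band), np.sum(band ** 2)]  # mean, spread, energy
    return np.array(feats)

# Illustrative use on a synthetic 1 s epoch sampled at 256 Hz.
rng = np.random.default_rng(0)
epoch = np.sin(2 * np.pi * 10 * np.linspace(0, 1, 256)) + 0.5 * rng.standard_normal(256)
print(dwt_features(epoch).shape)  # one feature vector per epoch, fed to a classifier
```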
Abstract:
Runtime management of distributed information systems is a complex and costly activity. One of the main challenges to be addressed is obtaining a complete and up-to-date view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems. It is composed of two elements: an information model and an agent infrastructure. The model abstracts away the complexity and variability of these systems, hiding non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at run time. The agent infrastructure is described in detail, explaining its components and the relationships between them. Moreover, the proposal is validated through a set of agents that instrument the JEE GlassFish application server, paying special attention to support for distributed configuration scenarios.
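The split between the information model and the agent infrastructure could look roughly like the following sketch, in which an agent probes a resource and records state changes in a shared model. The class and method names are illustrative assumptions, not the article's API.

```python
from dataclasses import dataclass, field

@dataclass
class InformationModel:
    """Abstracted view of the managed resources (name -> last observed state)."""
    resources: dict = field(default_factory=dict)

    def update(self, name, state):
        old = self.resources.get(name)
        self.resources[name] = state
        return old != state  # True if the monitored state changed

class MonitoringAgent:
    def __init__(self, name, probe, model):
        self.name, self.probe, self.model = name, probe, model

    def poll(self):
        state = self.probe()                      # instrument the real resource
        if self.model.update(self.name, state):   # reflect it in the shared model
            print(f"[{self.name}] change detected: {state}")

model = InformationModel()
agent = MonitoringAgent("app-server-1",
                        probe=lambda: {"status": "running", "sessions": 12},
                        model=model)
agent.poll()  # the first observation is reported as a change
```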
Abstract:
In an increasingly competitive higher education market, collaboration between universities is an effective strategy for gaining access to the global market. The development of joint degrees is an important mechanism for strengthening academic collaborations and diversifying knowledge. Joint degrees are being implemented increasingly in universities around the world. In Europe, the Bologna process and the Erasmus programme have encouraged the recognition of joint and double degrees and promoted close collaboration between academic institutions. In the unstoppable process of globalization and educational convergence, the use of e-learning systems to support both blended and online courses is a growing trend. Since e-learning systems support a wide range of courses, a suitable solution is needed that enables universities to support and manage joint degrees through their e-learning systems in accordance with the collaboration agreements established by the universities involved. This dissertation addresses the following research questions: 1. What factors need to be considered in the implementation and management of joint degrees? 2. How can current e-learning systems support the development of joint degrees? 3. What other services and systems need to be adapted by universities interested in participating in a joint degree through their e-learning systems? The implementation of joint degrees using e-learning systems is complex and involves technical, administrative, security, cultural, financial and legal challenges. This dissertation proposes a series of contributions to help solve some of the identified challenges. First, a conceptual model has been developed that captures the contextual information about joint degrees that is relevant to their implementation in e-learning systems. Building on this conceptual model, the dissertation proposes a policy-driven architecture for implementing inter-institutional degree collaborations through e-learning systems as stipulated in the collaboration agreements signed by the participating universities. The author has focused on the workflow management component of this architecture. Finally, the building blocks for achieving interoperability of learning object repositories have been identified and validated. The use of multimedia services in education is a growing trend, providing rich e-learning services that improve communication and interaction between teachers and students. Among these services, we have focused on videoconferencing and lecture recording as the services best suited to supporting courses taught in collaborative learning scenarios. The contributions have been validated in national and European research projects in which the author has participated.
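As an illustration of the policy-driven idea, the sketch below checks a hypothetical enrolment request against a collaboration agreement. The agreement fields and the rule are assumptions made for the example, not the architecture's actual policy language.

```python
from dataclasses import dataclass

@dataclass
class CollaborationAgreement:
    partners: tuple             # universities that signed the agreement
    shared_courses: set         # course codes offered jointly
    max_external_students: int  # per-course cap for students from a partner

def may_enrol(agreement, student_home, course, current_external):
    """Decide whether a student from a partner university may join a shared course."""
    if student_home not in agreement.partners:
        return False, "home university is not a partner"
    if course not in agreement.shared_courses:
        return False, "course is not covered by the agreement"
    if current_external >= agreement.max_external_students:
        return False, "external enrolment cap reached"
    return True, "enrolment allowed"

# Hypothetical agreement between two partner universities.
agreement = CollaborationAgreement(("UPM", "KTH"), {"NET-501", "SEC-510"}, 30)
print(may_enrol(agreement, "KTH", "NET-501", current_external=12))
```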
Abstract:
This article proposes a multi-agent system (MAS) architecture for network diagnosis under uncertainty. Network diagnosis is divided into two inference processes: hypothesis generation and hypothesis confirmation. The first process is distributed among several agents based on a multiply sectioned Bayesian network (MSBN), while the second is carried out by agents using semantic reasoning. A diagnosis ontology has been defined in order to combine both inference processes. To drive the deliberation process, dynamic data about the influence of observations are collected during the diagnosis process. In order to achieve quick and reliable diagnoses, this influence is used to choose the best action to perform. The approach has been evaluated in a P2P video streaming scenario. Computational and time improvements are highlighted in the conclusions.
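The influence-driven choice of the next diagnostic action could be sketched as follows. The influence-to-cost scoring rule and the candidate observations are illustrative stand-ins, not the paper's exact criterion.

```python
# Among the remaining observations, pick the one with the best influence-to-cost ratio.
def choose_next_action(candidate_observations):
    """candidate_observations: dicts with an estimated influence on the current
    hypotheses and a cost (e.g., probing time)."""
    return max(candidate_observations, key=lambda o: o["influence"] / o["cost"])

candidates = [
    {"name": "probe_peer_latency",   "influence": 0.6, "cost": 2.0},
    {"name": "check_buffer_level",   "influence": 0.4, "cost": 0.5},
    {"name": "query_tracker_status", "influence": 0.7, "cost": 5.0},
]
print(choose_next_action(candidates)["name"])  # check_buffer_level (0.8 per unit cost)
```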
Abstract:
Emission inventories are databases that aim to describe the polluting activities occurring across a given geographic domain. Depending on the spatial scale, both the availability of information and the assumptions applied vary, strongly influencing an inventory's quality, accuracy and representativeness. This study compared and contrasted two emission inventories describing the Greater Madrid Region (GMR) under an air quality simulation approach. The chosen inventories were the National Emissions Inventory (NEI) and the Regional Emissions Inventory of the Greater Madrid Region (REI). Both were used to feed air quality simulations with the CMAQ modelling system, and the results were compared with observations from the air quality monitoring network in the modelled domain. Through the application of statistical tools, the analysis of emissions at cell level and cell-expansion procedures, it was observed that the National Inventory better described on-road traffic activities (SNAP07) and agriculture (SNAP10). The accurate description of activities, the good characterization of the vehicle fleet and the correct use of traffic emission factors were the main causes of such a good correlation. On the other hand, the Regional Inventory provided better descriptions of non-industrial combustion (SNAP02) and industrial activities (SNAP03). It incorporated realistic emission factors and a reasonable fuel mix, and it drew upon local information sources to describe these activities, whereas NEI relied on surrogates and national datasets, which led to a poorer representation. Off-road transportation (SNAP08) was described similarly by both inventories, while the remaining SNAP activities made only a marginal contribution to the overall emissions.
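A comparison of the two inventory-driven CMAQ runs against monitoring data typically relies on statistics such as mean bias, RMSE and correlation. The sketch below shows how such metrics are computed; the concentration values are made up for illustration.

```python
import numpy as np

def evaluation_stats(modelled, observed):
    m, o = np.asarray(modelled, float), np.asarray(observed, float)
    return {
        "mean_bias": np.mean(m - o),             # systematic over/underestimation
        "rmse": np.sqrt(np.mean((m - o) ** 2)),  # overall error magnitude
        "r": np.corrcoef(m, o)[0, 1],            # correlation with monitoring data
    }

# Hourly NO2 at one monitoring station (illustrative values, ug/m3).
obs = [34, 41, 55, 62, 48, 37]
run_nei = [30, 44, 50, 66, 45, 40]   # simulation fed by the national inventory
run_rei = [25, 35, 47, 55, 41, 33]   # simulation fed by the regional inventory
print(evaluation_stats(run_nei, obs))
print(evaluation_stats(run_rei, obs))
```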
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
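As a toy illustration of point (ii) and of how ontologies support interoperability, the sketch below maps tags from two different POS tagsets onto shared ontological categories and keeps only agreeing annotations. The tag-to-category tables, the "onto:" terms and the agreement rule are simplifications for illustration, not OntoTag itself.

```python
# Tool-specific tags mapped onto a common ontological vocabulary.
PENN_TO_ONTO = {"NN": "onto:Noun", "NNS": "onto:Noun", "VB": "onto:Verb", "JJ": "onto:Adjective"}
EAGLES_TO_ONTO = {"NC": "onto:Noun", "VM": "onto:Verb", "AQ": "onto:Adjective"}

def combine_annotations(token, tag_a, tag_b):
    """Map two tool-specific tags to the common vocabulary and keep the result
    only when both tools agree, reducing individual tool errors."""
    cat_a = PENN_TO_ONTO.get(tag_a)
    cat_b = EAGLES_TO_ONTO.get(tag_b)
    return (token, cat_a) if cat_a == cat_b and cat_a else (token, None)

print(combine_annotations("house", "NN", "NC"))   # ('house', 'onto:Noun')
print(combine_annotations("house", "NN", "VM"))   # disagreement -> ('house', None)
```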
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Cloud computing is one of the most relevant computing paradigms available nowadays. Its adoption has increased in recent years owing to the large investment and research effort from business enterprises and academic institutions. Among all the services cloud providers usually offer, Infrastructure as a Service (IaaS) has gained momentum for solving HPC problems in a more dynamic way, without the need for expensive investments. The integration of a large number of providers is a major goal, as it enables the improvement of the quality of the selected resources in terms of pricing, speed, redundancy, etc. In this paper, we propose a system architecture, based on semantic solutions, to build an interoperable scheduler for federated clouds that works with several IaaS providers in a uniform way. Based on this architecture, we implement a proof-of-concept prototype and test it with two different cloud solutions, providing experimental results on the viability of our approach.
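A federated scheduler ultimately has to rank providers on attributes such as pricing, speed and redundancy. The sketch below shows one simple weighted-score approach; the provider names, attributes and weights are illustrative assumptions rather than part of the proposed system.

```python
providers = [
    {"name": "provider-a", "price_per_hour": 0.12, "cpu_score": 95, "regions": 3},
    {"name": "provider-b", "price_per_hour": 0.09, "cpu_score": 80, "regions": 1},
    {"name": "provider-c", "price_per_hour": 0.15, "cpu_score": 99, "regions": 5},
]

def rank(providers, w_price=0.5, w_speed=0.3, w_redundancy=0.2):
    """Score each provider on price, speed and redundancy; higher is better."""
    def score(p):
        return (w_price * (1.0 / p["price_per_hour"])   # cheaper -> higher score
                + w_speed * p["cpu_score"]
                + w_redundancy * p["regions"])
    return sorted(providers, key=score, reverse=True)

for p in rank(providers):
    print(p["name"])
```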
Abstract:
Set-Sharing analysis, the classic Jacobs and Langen's domain, has been widely used to infer several interesting properties of programs at compile time, such as occurs-check reduction, automatic parallelization, finite-tree analysis, etc. However, performing abstract unification over this domain implies the use of a closure operation that makes the number of sharing groups grow exponentially. Much attention has been given in the literature to mitigating this key inefficiency in this otherwise very useful domain. In this paper we present two novel alternative representations for the traditional set-sharing domain, tSH and tNSH, which efficiently compress the number of elements into fewer elements, enabling more efficient abstract operations, including abstract unification, without any loss of accuracy. Our experimental evaluation shows that both representations can dramatically reduce the number of sharing groups, indicating that they can be more practical solutions towards scalable set-sharing.
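The exponential growth mentioned above comes from the closure step of abstract unification, which must include the union of every non-empty subset of the relevant sharing groups. The sketch below implements that star-union operation naively to make the blow-up visible; variable names are illustrative.

```python
from itertools import combinations

def star_union(sharing_groups):
    """Closure under union: all unions of non-empty subsets of the sharing groups.
    sharing_groups: iterable of frozensets of program variables."""
    groups = list(sharing_groups)
    closure = set()
    for r in range(1, len(groups) + 1):
        for subset in combinations(groups, r):
            closure.add(frozenset().union(*subset))
    return closure

sh = [frozenset({"X"}), frozenset({"Y"}), frozenset({"X", "Z"})]
print(len(star_union(sh)))  # up to 2^n - 1 groups from n input groups
```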
Abstract:
In this introductory chapter we put into context and briefly outline the work that we present thoroughly in the rest of the dissertation. We consider this work to be divided into two main parts. The first part is the Firenze Framework, a knowledge-level description framework rich enough to express the semantics required for describing both semantic Web services and semantic Grid services. We start by defining what the Semantic Grid is, its relation to the Semantic Web, and the possibility of their convergence, since both initiatives have become mainly service-oriented. We also introduce the main motivations for the creation of this framework: one is to provide a valid description framework that works at the knowledge level; the other is to provide a description framework that takes into account the characteristics of Grid services in order to be able to describe them properly. The other part of the dissertation is devoted to Vega, an event-driven architecture that, by means of the proposed knowledge-level description framework, is able to achieve large-scale provisioning of knowledge-intensive services. In this introductory chapter we also portray the anatomy of a generic event-driven architecture and briefly enumerate its main characteristics, which are the reason it is our choice.
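A generic event-driven architecture of the kind Vega builds on can be reduced to a publish/subscribe core, sketched below with illustrative event names. This is not Vega's implementation, only the basic pattern.

```python
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Producers and consumers are decoupled: they only share the event type.
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("service.requested", lambda p: print("provisioning", p["service"]))
bus.publish("service.requested", {"service": "knowledge-intensive-service-42"})
```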
Abstract:
The fragmented condition of our everyday life brings us closer to the risks of hyper-expression. Against this, two positions unfold to help us face a world that escapes our capacities: familiarity and poetic recognition. In the latter, the role of the insignificant is crucial, as a dynamic and relational instigator of a conscious threading of reality through the actions of the Poeta Faber and his careful gaze upon the world. / The production of the common as the material and symbolic fabric of the city, an unstable reality in perpetual becoming, leads us to a new and much-needed reconsideration of the public/private division born of the modern state. Immersed in the confusion between the public and the common, we have not perceived that, through the expropriation of the former, we have been prepared for the willing surrender of the latter. / From insignificance to rebellion as an affirmative move into action, related to the idea of minor architecture as a common and intensely political production, born from inside a society that no longer has an outside.
Abstract:
This paper describes a new category of CAD applications devoted to the definition and parameterization of hull forms, called programmed design. Programmed design relies on two prerequisites. The first is a product model with a variety of types large enough to face the modeling of any type of ship. The second is a design language dedicated to creating the product model. The main purpose of the language is to publish the modeling algorithms of the application in the designer's knowledge domain, letting the designer create parametric model scripts. Programmed design is an evolution of parametric design, but it is not just parametric design: it is a tool to create parametric design tools. It provides a methodology to extract design knowledge by abstracting a design experience in order to store and reuse it. Programmed design is concerned with the organizational and architectural aspects of CAD applications, not with the development of modeling algorithms; it is built on top of, and relies on, existing algorithms provided by a comprehensive product model. Programmed design can be useful for developing new applications, supporting the evolution of existing applications, or even integrating different types of application into a single one. A three-level software architecture is proposed to make the implementation of programmed design easier. These levels are the conceptual level, based on the design language; the mathematical level, based on the geometric formulation of the product model; and the visual level, based on the polyhedral representation of the model as required by the graphics card. Finally, some scenarios for the use of programmed design are discussed, for instance the development of specialized parametric hull form generators for a ship type or a family of ships, or the creation of palettes of hull form components to be used as parametric design patterns. In addition, two new reverse-engineering processes that can considerably improve the application have been identified: the creation of the mathematical level from the visual level, and the creation of the conceptual level from the mathematical level.
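The paper's design language itself is not reproduced here. Purely as an illustration of what a parametric hull-form script computes, the sketch below derives section half-breadths from a few global parameters, using an invented, simplistic shape function rather than the product model's actual algorithms.

```python
import math

def section_offsets(length, beam, cp=0.65, stations=7, waterlines=5):
    """Return (station position, half-breadths) pairs for a crude parametric hull."""
    sections = []
    for i in range(stations):
        t = i / (stations - 1)                 # 0 at bow, 1 at stern (normalized)
        x = t * length                         # station position along the hull
        # Longitudinal fullness: sine-based distribution scaled by the prismatic coefficient.
        fullness = cp * math.sin(math.pi * t) ** 0.5
        halfbreadths = []
        for j in range(waterlines):
            z = j / (waterlines - 1)           # 0 at keel, 1 at design waterline
            halfbreadths.append(0.5 * beam * fullness * z ** 0.5)
        sections.append((x, halfbreadths))
    return sections

for x, row in section_offsets(length=120.0, beam=20.0):
    print("%.1f m:" % x, ["%.2f" % y for y in row])
```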