875 results for Computation in architecture


Relevance:

90.00%

Publisher:

Abstract:

Intensity non-uniformity (bias field) correction, contextual constraints on the spatial intensity distribution, and non-spherical cluster shapes in the feature space are incorporated into fuzzy c-means (FCM) for the segmentation of three-dimensional multi-spectral MR images. The bias field is modeled by a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of either intensity or membership are added to the FCM cost functions. Since the feature space is not isotropic, distance measures other than the Euclidean distance are used to account for the shape and volumetric effects of clusters in the feature space. Segmentation performance is improved by combining the adaptive FCM scheme with the criteria used in the Gustafson-Kessel (G-K) and Gath-Geva (G-G) algorithms through the inclusion of a cluster scatter measure. The performance of this integrated approach is quantitatively evaluated on normal MR brain images using similarity measures. The improvement in segmentation quality obtained with our method is also demonstrated by comparing our results with those produced by FSL (FMRIB Software Library), a software package commonly used for tissue classification.
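As an illustration of the kind of non-Euclidean, cluster-adaptive distance described above, the following minimal sketch implements fuzzy c-means with a Gustafson-Kessel style norm. It is a simplified illustration, not the paper's method: the bias-field and neighborhood regularization terms are omitted, and the demo data are synthetic stand-ins for multi-spectral voxel intensities.

```python
# A minimal sketch of fuzzy c-means with a Gustafson-Kessel (G-K) style
# cluster-adaptive distance. Variable names and the demo data are
# hypothetical; bias-field and neighborhood terms are omitted.
import numpy as np

def gk_fcm(X, c=3, m=2.0, n_iter=50, eps=1e-9, seed=0):
    """X: (N, d) feature vectors; returns memberships U (c, N) and centers V (c, d)."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    U = rng.random((c, N))
    U /= U.sum(axis=0)                       # fuzzy memberships sum to 1 per voxel
    for _ in range(n_iter):
        W = U ** m                           # fuzzified weights
        V = (W @ X) / (W.sum(axis=1, keepdims=True) + eps)   # cluster centers
        D2 = np.empty((c, N))
        for i in range(c):
            diff = X - V[i]                  # (N, d)
            F = (W[i][:, None] * diff).T @ diff / (W[i].sum() + eps)  # fuzzy covariance
            F += eps * np.eye(d)             # regularize for invertibility
            A = np.linalg.det(F) ** (1.0 / d) * np.linalg.inv(F)      # G-K norm matrix
            D2[i] = np.einsum('nd,de,ne->n', diff, A, diff) + eps     # Mahalanobis-like
        U = 1.0 / (D2 ** (1.0 / (m - 1)))    # standard FCM membership update
        U /= U.sum(axis=0)
    return U, V

# Toy demo on synthetic two-feature "voxels" (hypothetical stand-in for MR intensities)
X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, (200, 2)) for mu in (0.0, 1.5, 3.0)])
U, V = gk_fcm(X, c=3)
print(V)
```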

Relevance:

90.00%

Publisher:

Abstract:

Within the regression framework, we show how different levels of nonlinearity influence the instantaneous firing-rate prediction of single neurons. Nonlinearity can be achieved in several ways. In particular, we can enrich the predictor set with basis expansions of the input variables (enlarging the number of inputs) or train a simple but different model for each region of the data domain. Spline-based models are popular within the first category; kernel smoothing methods fall into the second. Whereas the first choice is useful for globally characterizing complex functions, the second is very handy for temporal data and able to include inner-state subject variations. Interactions among stimuli are also considered. We compare state-of-the-art firing-rate prediction methods with more sophisticated spline-based nonlinear methods: multivariate adaptive regression splines and sparse additive models. We also study the impact of kernel smoothing. Finally, we explore the combination of various local models in an incremental learning procedure. Our goal is to demonstrate that appropriate treatment of nonlinearity can greatly improve the results. We test our hypothesis on both synthetic data and real neuronal recordings in cat primary visual cortex, giving a plausible explanation of the results from a biological perspective.
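The two routes to nonlinearity named above can be contrasted in a small sketch: a global spline basis expansion fitted once by least squares, versus a kernel smoother that fits a local average at every query point. This is a hedged illustration on hypothetical stimulus/firing-rate data, not the paper's models (MARS and sparse additive models are not reproduced here).

```python
# A minimal sketch contrasting the two nonlinearity strategies the abstract
# names: a global spline basis expansion of the inputs vs. local kernel
# smoothing. Data and variable names are hypothetical illustrations.
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power cubic spline basis: global, fitted once by least squares."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def kernel_smoother(x_train, y_train, x_eval, bandwidth=0.3):
    """Nadaraya-Watson estimator: a separate local average at every query point."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical "stimulus -> firing rate" data with a nonlinear response
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 4, 300))
rate = np.exp(np.sin(2 * x)) + rng.normal(0, 0.2, x.size)   # noisy rates

B = cubic_spline_basis(x, knots=np.linspace(0.5, 3.5, 6))
beta, *_ = np.linalg.lstsq(B, rate, rcond=None)              # one global fit
spline_pred = B @ beta

kernel_pred = kernel_smoother(x, rate, x)                    # one local fit per point
print(np.mean((spline_pred - rate) ** 2), np.mean((kernel_pred - rate) ** 2))
```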

Relevance:

90.00%

Publisher:

Abstract:

The twentieth century brought a new sensibility, characterized by the discredit of Cartesian rationality and the weakening of universal truths, together with the aesthetic values associated with them, such as order, proportion and harmony. In the middle of the century, theorists such as Theodor Adorno, Rudolf Arnheim and Anton Ehrenzweig drew attention to the transformation under way in the artistic field. Contemporary aesthetics seemed to have a new goal: to deny the idea of art as an organized, finished and coherent structure. Order had lost its privileged position. Disorder, probability, arbitrariness, accidentality, randomness, chaos, fragmentation, indeterminacy... gradually, new terms were coined by aesthetic criticism to explain what had been happening since the beginning of the century. The first essays on the matter sought to provide new interpretative models based on, among other arguments, the phenomenology of perception, the recent discoveries of quantum mechanics, the deeper layers of the psyche, and information theory. Overall, these were worthy attempts to give theoretical content to a situation as obvious as it was devoid of a founding charter. Finally, in 1962, Umberto Eco brought all these efforts together by proposing a single theoretical framework in his book Opera Aperta. From his point of view, all the aesthetic production of the twentieth century had one characteristic in common: its capacity to express multiplicity. For this reason, he considered the nature of contemporary art to be, above all, ambiguous. The aim of this research is to clarify the consequences of the incorporation of ambiguity into architectural theoretical discourse. We should begin with a precise analysis of this concept. The task is difficult, however, because ambiguity does not allow itself to be clearly defined: its signifier is as imprecise as its signified. In addition, the negative connotations that ambiguity still carries outside the aesthetic field stigmatize the term and make its use problematic. Another difficulty is that the contemporary subject is able to locate ambiguity in all situations, distinguishing it not only in contemporary productions but also in works belonging to remote ages and styles. For that reason, it could be said that everything is ambiguous. And that is correct, because ambiguity is somehow present in any creation of the imperfect human being. However, as Eco, Arnheim and Ehrenzweig pointed out, there are two major differences between the current and past contexts, one affecting the subject and the other the object. First, it is the contemporary subject, and no other, who has acquired the ability to value and assimilate ambiguity. Second, ambiguity was an unexpected aesthetic result in former periods, whereas in the contemporary object it has been codified and is deliberately present. In any case, like Eco, we consider the term ambiguity appropriate to the contemporary aesthetic field. Any other term with a more specific meaning would only show partial and limited aspects of a situation that is complex and difficult to diagnose. Contrary to what might normally be expected, ambiguity is in this case the term that fits best, precisely because of its particular lack of specificity. In fact, this lack of specificity is what makes it possible to assign a dynamic condition to the idea of ambiguity that other terms could hardly sustain.
Thus, instead of trying to define the idea of ambiguity, we will analyze how it has evolved and what its consequences have been for the architectural discipline. Instead of trying to define what it is, we will examine what its presence has meant at each moment. We will treat ambiguity as a constant presence, always latent in architectural production, whose nature has nevertheless been modified over time. Eco, in the mid-twentieth century, distinguished between classical ambiguity and contemporary ambiguity. Now, half a century later, the challenge is to discern whether the idea of ambiguity has remained unchanged or has undergone a new transformation. What this research will demonstrate is that a new transformation can indeed be detected, one that has much to do with the cultural and aesthetic context of recent decades: the transition from modernism to postmodernism. This assumption leads us to establish two different levels of contemporary ambiguity, each related to one of these periods. The first level of ambiguity has been widely known for many years. Its main characteristics are a codified multiplicity, an interpretative freedom and an active subject who brings to conclusion an object that is incomplete or indefinite. This level of ambiguity is related to the idea of indeterminacy, a concept successfully introduced into contemporary aesthetic language. The second level of ambiguity has gone almost unnoticed by architectural criticism, although it has been identified and studied in other theoretical disciplines. Much of the work of Fredric Jameson and Jean-François Lyotard offers reasonable evidence that the aesthetic production of postmodernism has transcended modern ambiguity to reach a new level in which, despite the existence of multiplicity, the interpretative freedom and the active subject have been questioned and, at last, denied. In this period, ambiguity seems to have reached a stage at which it is no longer possible to obtain a conclusive and complete interpretation of the object, because the object has become an unreadable device. Postmodern production offers a kind of inaccessible multiplicity, and its nature is deeply contradictory. This hypothetical transformation of the idea of ambiguity has a striking analogy in the poetic analysis William Empson published in 1930 in his Seven Types of Ambiguity. Empson established different levels of ambiguity and classified them according to their poetic effect, in an arrangement ascending towards incoherence. At the seventh level, where ambiguity is greatest, he located the contradiction between irreconcilable opposites. It could be said that contradiction, once it undermines the coherence of the object, was the best way contemporary aesthetics found to confirm the Hegelian judgment according to which art would ultimately renounce its capacity to express truth. Much of the transformation of architecture over the last century is related to the active involvement of ambiguity in its theoretical discourse. In modern architecture, ambiguity is present retrospectively, in the critical reviews by theoreticians such as Colin Rowe, Manfredo Tafuri and Bruno Zevi. The publication of several studies on Mannerism in the forties and fifties rescued certain virtues of a historical style that had been undervalued for its deviation from the Renaissance canon. Rowe, Tafuri and Zevi, among others, pointed out the similarities between Mannerism and certain qualities of modern architecture, both devoted to breaking with previous dogmas.
The recovery of Mannerism made it possible to join ambiguity and modernity, for the first time, in the same sentence. In postmodernism, on the other hand, ambiguity is present ex professo, playing a prominent role in the theoretical discourse of the period. The distance between its analytical identification and its operational use quickly disappeared owing to structuralism, an analytical methodology that aspired to become a modus operandi. Under its influence, architecture began to be identified and studied as a language. Thus, the postmodern theoretical project distinguished between the components of architectural language and developed them separately. Consequently, there is not one but three projects related to postmodern contradiction: the semantic project, the syntactic project and the pragmatic project. Leading these projects are the prominent architects whose work manifested a special interest in exploring and developing the potential of the use of contradiction in architecture. It was Robert Venturi, Peter Eisenman and Rem Koolhaas who established the main features through which architecture developed the dialectics of ambiguity, at its last and most extreme level, as a theoretical project in each component of architectural language. Robert Venturi developed a new interpretation of architecture based on its semantic component, Peter Eisenman did the same with its syntactic component, and Rem Koolhaas with its pragmatic component. With this approach, this research aims to establish a new reflection on the architectural transformation from modernity to postmodernity. It may also shed light on certain still-overlooked aspects that have shaped the architectural heritage of recent decades, the consequence of a fruitful relationship between architecture and ambiguity and its provocative consummation in a contradictio in terminis. This research therefore focuses on the repercussions of the incorporation of ambiguity, in the form of contradiction, into postmodern architectural discourse, through each of its three theoretical projects. It is accordingly structured around a main chapter entitled Dialectics of ambiguity as a postmodern theoretical project, divided into three parts: Semantic project. Robert Venturi; Syntactic project. Peter Eisenman; and Pragmatic project. Rem Koolhaas. The central chapter is complemented by two others placed at the beginning. The first, entitled Dialectics of contemporary ambiguity. An approach, offers a chronological analysis of the evolution of the idea of ambiguity in twentieth-century aesthetic theory, without yet entering into architectural questions. The second, entitled Dialectics of ambiguity as a critique of the modern project, examines the gradual incorporation of ambiguity into the critical review of modernity, which would prove vital in enabling its later operational introduction into postmodernity. A final chapter, placed at the end of the text, proposes a series of Projections which, in light of the preceding chapters, attempt a rereading of the current architectural context and its possible evolution, on the understanding that reflection on ambiguity still allows new discursive horizons to be glimpsed. Each double page of the thesis synthesizes the tripartite structure of the central chapter and, broadly speaking, the main methodological tool used in the research.
Thus, the threefold semantic, syntactic and pragmatic dimension with which the postmodern theoretical project has been identified is reproduced here in a specific arrangement of images, footnotes and the main body of the text. The images accompanying the main text are placed in the left-hand column; their distribution follows aesthetic and compositional criteria, qualifying their semantic condition as far as possible. To their right are the footnotes, arranged in a column, each note placed at the same height as its corresponding reference in the main text; their regulated distribution, their value as notation and their possible equivalence to a deep structure allude to their syntactic condition. Finally, the main body of the text occupies the entire right half of each double page; conceived as a continuous narrative with hardly any interruptions, its role in satisfying the discursive demands of a doctoral investigation corresponds to its pragmatic condition.

Relevance:

90.00%

Publisher:

Abstract:

The fragmented condition of our everyday life brings us closer to the risks of hyper-expression. Against it, two positions unfold to help us face a world that escapes our capacities: familiarity and poetic recognition. In the latter, the role of the insignificant is crucial as a dynamic and relational instigator of a conscious threading of reality through the actions of the Poeta Faber and his careful gaze onto the world. / The production of the common as the material and symbolic fabric of the city, an unstable reality in perpetual becoming, leads us to a new and much-needed reconsideration of the public/private division born of the modern state. Immersed in the confusion between public and common, we have failed to perceive that through the expropriation of the former we have been prepared to willingly surrender the latter. / From insignificance to rebellion as an affirmative passage into action, related to the idea of a minor architecture as a common and intensely political production, born from within a society that no longer has an outside.

Relevance:

90.00%

Publisher:

Abstract:

The recent establishment of a digital culture and society, together with the current financial crisis and urgent energy and climate needs, has radically changed the architectural scene, from the optimism of a few years ago to a situation of uncertainty and huge social demands and challenges. In this context, it is suggested that the role of structure in architecture be rethought: as an enabler, a guide and a catalyst. The purpose of this paper is to present the economic, cultural and social context in which architecture develops today. The method: to open a discussion on the role structure may adopt in the architecture to come. The achievement: to highlight its potential to face current requirements and challenges.

Relevance:

90.00%

Publisher:

Abstract:

Building is a sector of enormous influence on the evolution of energy consumption and CO2 emissions. Given that there are currently 3.5 million empty dwellings and that demand for new housing will not grow in the coming years, the sustainable retrofitting of the existing residential stock is a priority task deserving special attention. Since much remains to be done, it is necessary to reflect on how retrofitting is currently being carried out in order to improve it in the future. The aim of this work is therefore to analyze some of the strategies adopted so far in the building stock and their application to a case study, through the improvement of the façade of a residential building in Madrid.

Relevance:

90.00%

Publisher:

Abstract:

Theoretical computer science is a basic discipline, since most advances in computing rest on solid results from this field. In recent years, owing both to the increase in the power of computers and to the approaching physical limit on the miniaturization of electronic components, interest has revived in formal models of computation alternative to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems. Most are computationally complete and intrinsically parallel, and for this reason they are coming to be regarded as new paradigms of computation (natural computing). A range of abstract architectures is thus available that are as powerful as conventional computers and sometimes more efficient: some of them improve the running time on NP-complete problems by providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, a complete formal representation of a NEP in general involves both syntactic and semantic restrictions, so that many apparently (syntactically) correct representations of particular instances of these devices would be meaningless because they might violate other, semantic restrictions. Applying semantic grammatical evolution to NEPs requires choosing a subset of them within which to search for those that solve a given problem. This work studies a model inspired by cellular biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one type of point mutation (insertion, deletion or substitution of a symbol). These nodes are associated with a filter defined by some random-context or membership condition. Networks of at most six nodes, with filters defined by membership in regular languages, are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, since similar results have been documented in the literature. If one considers networks with nodes and filters defined by random contexts, which seem closer to biological implementations, then more complex languages, such as non-context-free languages, can be generated. These very simple mechanisms are nevertheless able to solve complex problems in polynomial time: a linear solution is presented for an NP-complete problem, the 3-colorability problem. As a first significant contribution, a new dynamics has been proposed for networks of evolutionary processors with non-deterministic, massively parallel behavior [55], so that all research work in the area of processor networks can be carried over to massively parallel networks. For example, massively parallel networks can be modified according to certain rules so as to move the filters onto the connections. Each connection is then seen as a bidirectional channel in which the input and output filters coincide. Despite this, these networks remain computationally complete. 
Other kinds of rules can also be introduced to extend this computational model. Replacing the point mutations associated with each node by the splicing operation yields a new type of processor, the splicing processor. This computational model, the network of splicing processors (ANSP), is in some respects similar to distributed test-tube systems based on splicing. In addition, a new model has been defined [56], networks of evolutionary processors with filters on the connections, in which the processors hold only rules while the filters have been moved to the connections. Under certain restrictions this model is equivalent to classical networks of evolutionary processors; without those restrictions, it is a superset of classical NEPs. The main advantage of moving the filters to the connections lies in the simplicity of the modeling. A further contribution of this work is the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this thesis. Regarding the term "evolutionary processor" used in this thesis, the computational process described here is not exactly an evolutionary process in the Darwinian sense; however, the rewriting operations considered can be interpreted as mutations, and the filtering processes can be viewed as selection. This work does not address the possible biological implementation of these networks, important though that is. Throughout this thesis, the complexity measure adopted for ANSPs is one we call size, namely the number of nodes of the underlying graph. It is shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognizing L. Following the concept of universal ANSPs introduced by Manea [65], it is shown that an ANSP with a fixed graph structure can accept any recursively enumerable language. An ANSP can thus be regarded as a problem-solving device with a further property of practical relevance: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depends on the language. This characteristic can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. The solution of any NP problem is then uniform, in the sense that the network, except for the universal subnetwork, can be seen as a program: to adapt it to the problem instance, one chooses the filters and the rules that do not belong to the universal subnetwork. An interesting open problem, from our point of view, is how to choose the optimal size of this network.---ABSTRACT---This thesis deals with recent research in the area of natural computing (bio-inspired models), more precisely networks of evolutionary processors, first developed by Victor Mitrana and based on the P systems introduced by Gheorghe Paun. In these models, a set of processors is connected in an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, called evolution rules, that transform the objects inside it [55, 53]. 
These objects can be sent and received along the graph connections provided they satisfy the constraints defined by the processors' input and output filters. This symbolic model, which is non-deterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There are a great number of variants, such as hybrid networks and splicing processors, that give the model a computational power equivalent to that of Turing machines. The origin of networks of evolutionary processors (NEPs for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as to the Logic Flow paradigm: several processors, each placed in a node of a virtual complete graph, handle the data associated with their respective nodes. All the nodes send their data simultaneously, and the receiving nodes likewise handle all the arriving messages simultaneously, according to some strategy. In a series of papers, each node is viewed as a cell holding genetic information encoded in DNA sequences that may evolve by local evolutionary events, that is, point mutations; each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node are organized as multisets of words (each word appearing in an arbitrarily large number of copies), and all the copies are processed in parallel such that all the events that can take place do actually take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations we have considered may be interpreted as mutations and the filtering process may be viewed as selection. Recombination is missing, but it has been asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. Clearly, the filters associated with each node allow strong control of the computation: every node has an input and an output filter, and two nodes can exchange data only if the data pass the output filter of the sender and the input filter of the receiver. Moreover, if some data item is sent out by a node and is not able to enter any node, it is lost. Here we simplify the ANSP model by moving the filters from the nodes to the edges. Each edge is viewed as a two-way channel such that the input and output filters coincide. The possibility of controlling the computation in such networks thus seems diminished; for instance, there is no way to lose data during the communication steps. In spite of this, and despite the fact that splicing is not a powerful operation (recall that splicing systems generate only regular languages), we prove that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space. 
Since every word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each with an exponential number of copies. This does not seem to be a major drawback, however, since by PCR (polymerase chain reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables. The solution is therefore uniform in the sense that the network, except for the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words and the rules; we then assign all possible values to the variables and evaluate the formula. We have proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If not, what other problems can be efficiently solved by ANSPFCs? Moreover, the complexity class NP is exactly the class of all languages decided by ANSPs in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
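To make the mechanics concrete, the following minimal sketch simulates one evolutionary step and one communication step of a toy NEP: each node applies a single type of point mutation to all its words in parallel, and words migrate along edges subject to output/input filters. Node names, rules and filters are hypothetical illustrations under simplifying assumptions (plain sets instead of multisets, predicate filters instead of regular languages), not the thesis's constructions.

```python
# A minimal sketch of one evolutionary + communication step in a network of
# evolutionary processors (NEP), under simplifying assumptions: words are kept
# as plain sets, each node applies a single point-mutation type, and filters
# are membership tests given as predicates.
from dataclasses import dataclass, field
from typing import Callable, Set

@dataclass
class Node:
    mutate: Callable[[str], Set[str]]          # point-mutation rule of this node
    out_filter: Callable[[str], bool]          # words allowed to leave
    in_filter: Callable[[str], bool]           # words allowed to enter
    words: Set[str] = field(default_factory=set)

def substitute(a: str, b: str) -> Callable[[str], Set[str]]:
    """All words obtained by replacing one occurrence of a with b."""
    return lambda w: {w[:i] + b + w[i+1:] for i, c in enumerate(w) if c == a} or {w}

def insert(a: str) -> Callable[[str], Set[str]]:
    """All words obtained by inserting symbol a at any position."""
    return lambda w: {w[:i] + a + w[i:] for i in range(len(w) + 1)}

def step(nodes, edges):
    # Evolutionary step: every node rewrites all its words in parallel.
    for n in nodes:
        n.words = set().union(*(n.mutate(w) for w in n.words)) if n.words else set()
    # Communication step: words passing an out-filter migrate along edges and
    # are absorbed by neighbors whose in-filter they pass (lost otherwise).
    moved = [(j, w) for i, j in edges for w in nodes[i].words if nodes[i].out_filter(w)]
    for i, j in edges:
        nodes[i].words = {w for w in nodes[i].words if not nodes[i].out_filter(w)}
    for j, w in moved:
        if nodes[j].in_filter(w):
            nodes[j].words.add(w)

# Toy two-node network: node 0 rewrites X -> a, node 1 appends b's.
n0 = Node(substitute('X', 'a'), out_filter=lambda w: 'X' not in w,
          in_filter=lambda w: False, words={'XX'})
n1 = Node(insert('b'), out_filter=lambda w: False,
          in_filter=lambda w: 'X' not in w)
nodes, edges = [n0, n1], [(0, 1), (1, 0)]
for _ in range(3):
    step(nodes, edges)
print(n0.words, n1.words)
```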

Relevance:

90.00%

Publisher:

Abstract:

A winter garden. Animated architecture / Inanimate nature

Relevance:

90.00%

Publisher:

Abstract:

This doctoral thesis is framed within the concept of the systematization of knowledge in architecture, more specifically in the field of building construction and decision making during the design stage of multilayer building envelopes. 
Therefore, the main objective is to establish the bases for informed decision making during the design of a multilayer building envelope, in order to contribute to its optimization. Just as the history of architecture is connected to the history of innovation in construction, construction itself is subject to change in response to previous failures. On this basis, decision making in the design phase is identified as the initial stage at which to establish a strategic point of reflection on, and control over, constructive processes. Conceptually, this research defines the parameters involved in the design of multilayer building envelopes on the basis of a classification and systematization of all the components (elements, constructive units and constructive systems) used in multilayer façades. This systematization is materialized in a data matrix sheet in which, within a tree-like organization, every component and its characterization can be consulted. The matrix allows the future incorporation of any new component or system that appears on the market, relating it to those with which it shares location, type of material, and so on. Based on this data matrix, the systematization of decision making in the design stage of a building envelope, specifically a façade, is devised. In practice, the result is presented as a tool that allows the architect or designer to reflect and to select the most suitable building system when facing the various possible decisions. The tool is built on the designer's initial choices and is then structured, successively, into different approaches, criteria, sub-criteria and possibilities that correspond to the progressive definition of the building system. A set of operative checking sheets is proposed to report the stage of decision and of project definition reached in each case. The system also allows connection with other project-review methods, encouraging reflection on the standardization of the risks associated both with the system itself and with its future construction process and performance. The tool thus provides support for the decision-making process in the design phase of a multilayer façade, minimizing arbitrariness and offering a qualification prior to the quantification that the elaboration of the construction detail and its bill of quantities will entail in subsequent project stages. At the same time, this systematization of design-stage decision making can serve as a checking system across the different phases of the design decision process and of the definition of a building's envelope.
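As a sketch of how such a tree-like data matrix of façade components might be organized, the snippet below models components with their characterization and lets a new product be related to existing ones sharing attributes. All component names and attributes are hypothetical illustrations, not the thesis's actual matrix.

```python
# A minimal sketch of a tree-like "data matrix" of façade components:
# components characterized by attributes, organized so a new market product
# can be filed next to those sharing location, material, etc.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str                    # "element", "constructive unit" or "constructive system"
    attributes: dict             # characterization: location, material, ...
    children: list = field(default_factory=list)

    def add(self, child: "Component") -> "Component":
        self.children.append(child)
        return child

    def find(self, **criteria):
        """Walk the tree and yield components matching all given attributes."""
        if all(self.attributes.get(k) == v for k, v in criteria.items()):
            yield self
        for c in self.children:
            yield from c.find(**criteria)

# Hypothetical fragment of the matrix for a multilayer façade
facade = Component("multilayer facade", "constructive system", {})
outer = facade.add(Component("outer leaf", "constructive unit", {"location": "outer"}))
outer.add(Component("ceramic cladding", "element", {"location": "outer", "material": "ceramic"}))
core = facade.add(Component("core", "constructive unit", {"location": "core"}))
core.add(Component("mineral wool insulation", "element", {"location": "core", "material": "mineral wool"}))

# A new product entering the market is related to components sharing attributes
for match in facade.find(location="outer"):
    print(match.name)
```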

Relevance:

90.00%

Publisher:

Abstract:

Postprint

Relevance:

90.00%

Publisher:

Abstract:

The research described here is supported by the award made by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1.

Relevance:

90.00%

Publisher:

Abstract:

Electromagnetic coupling phenomena between overhead power transmission lines and other nearby structures are inevitable, especially in densely populated areas. The undesired effects of this proximity are manifold, ranging from the establishment of hazardous potentials to the onset of alternating-current corrosion. The study of this class of problems is necessary to ensure safety in the vicinity of the interaction zone and to preserve the integrity of the equipment and devices present there. However, complete modeling of this type of application requires a three-dimensional representation of the region of interest and specific numerical methods for field computation. In this work, problems arising from the flow of electrical currents in the ground (the so-called conductive coupling) are addressed with the finite element method, while those resulting from the time variation of the electromagnetic fields (the so-called inductive coupling) are treated with the generalized PEEC (Partial Element Equivalent Circuit) method. More specifically, a special boundary condition on the electric potential is proposed for truncating the computational domain in the finite element analysis of conductive coupling problems, and a complete PEEC formulation for modeling inductive coupling problems is presented. Test configurations of increasing complexity are considered to validate these approaches. This work aims to contribute to the modeling of this class of problems, which tend to become more common as power grids expand.
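As one concrete ingredient of the inductive-coupling side, the sketch below evaluates the mutual partial inductance between two straight filaments, the elementary quantity a PEEC formulation assembles into its circuit matrices, using Neumann's double integral and simple midpoint quadrature. The geometry (an overhead conductor span and a parallel buried pipeline) is a hypothetical illustration, not a configuration from the paper; a full PEEC solver would combine many such terms with partial capacitances and resistances.

```python
# A minimal sketch of the mutual partial inductance between two straight
# current filaments, evaluated with Neumann's double integral by simple
# numerical quadrature. Geometry values are hypothetical.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mutual_partial_inductance(p1, p2, q1, q2, n=200):
    """Neumann formula M = mu0/(4 pi) * intint (dl1 . dl2) / r for two filaments."""
    t = (np.arange(n) + 0.5) / n                     # midpoint quadrature
    a = p1 + np.outer(t, p2 - p1)                    # points on filament 1
    b = q1 + np.outer(t, q2 - q1)                    # points on filament 2
    dl1 = (p2 - p1) / n                              # vector segment lengths
    dl2 = (q2 - q1) / n
    r = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return MU0 / (4 * np.pi) * np.dot(dl1, dl2) * np.sum(1.0 / r)

# Hypothetical geometry: a 10 m span of an overhead conductor at 10 m height
# and a parallel buried pipeline section 2 m below grade, 5 m to the side.
line = (np.array([0.0, 0.0, 10.0]), np.array([10.0, 0.0, 10.0]))
pipe = (np.array([0.0, 5.0, -2.0]), np.array([10.0, 5.0, -2.0]))
M = mutual_partial_inductance(*line, *pipe)
print(f"mutual partial inductance ~ {M*1e6:.3f} uH")  # couples dI/dt into the pipe
```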