61 results for Outstanding housewives
at Universidad Politécnica de Madrid
Abstract:
Starting from the inaugural text of Philibert de L'Orme, stereotomic treatises and manuscripts are subject to the opposing forces of reason and fancy. The Nativity Chapel in Burgos Cathedral provides an outstanding case study on this subject. It was built in 1571-1582 by Martín de Bérriz and Martín de la Haya, using an oval vault resting on trumpet squinches to span a rectangular bay. Bed joints and rib axes are not planar curves, as they usually are in oval vaults. This warping is not capricious; we shall argue that it is the outcome of a systematic tracing method. As a result of this process, the slope of the bed joints increases slightly in the first courses, but stays fairly constant after the third course; this solution prevents the upper courses from slipping. Thus, in the Nativity Chapel of Burgos Cathedral, the constraints of masonry construction fostered a singular solution verging on capriccio. It is also worth remarking that the warping of the joints is not easily perceptible to the eye and that the tracing process does not seem to start from a prior conception of the resulting form. All this suggests that we should be quite careful when talking about the whimsical character of the Late Gothic and Early Renaissance; on some occasions, apparent caprice is the offspring of practical thinking.
Abstract:
En esta tesis se aborda la detección y el seguimiento automático de vehículos mediante técnicas de visión artificial con una cámara monocular embarcada. Este problema ha suscitado un gran interés por parte de la industria automovilística y de la comunidad científica ya que supone el primer paso en aras de la ayuda a la conducción, la prevención de accidentes y, en última instancia, la conducción automática. A pesar de que se le ha dedicado mucho esfuerzo en los últimos años, de momento no se ha encontrado ninguna solución completamente satisfactoria y por lo tanto continúa siendo un tema de investigación abierto. Los principales problemas que plantean la detección y seguimiento mediante visión artificial son la gran variabilidad entre vehículos, un fondo que cambia dinámicamente debido al movimiento de la cámara, y la necesidad de operar en tiempo real. En este contexto, esta tesis propone un marco unificado para la detección y seguimiento de vehículos que afronta los problemas descritos mediante un enfoque estadístico. El marco se compone de tres grandes bloques, i.e., generación de hipótesis, verificación de hipótesis, y seguimiento de vehículos, que se llevan a cabo de manera secuencial. No obstante, se potencia el intercambio de información entre los diferentes bloques con objeto de obtener el máximo grado posible de adaptación a cambios en el entorno y de reducir el coste computacional. Para abordar la primera tarea de generación de hipótesis, se proponen dos métodos complementarios basados respectivamente en el análisis de la apariencia y la geometría de la escena. Para ello resulta especialmente interesante el uso de un dominio transformado en el que se elimina la perspectiva de la imagen original, puesto que este dominio permite una búsqueda rápida dentro de la imagen y por tanto una generación eficiente de hipótesis de localización de los vehículos. Los candidatos finales se obtienen por medio de un marco colaborativo entre el dominio original y el dominio transformado. Para la verificación de hipótesis se adopta un método de aprendizaje supervisado. Así, se evalúan algunos de los métodos de extracción de características más populares y se proponen nuevos descriptores con arreglo al conocimiento de la apariencia de los vehículos. Para evaluar la efectividad en la tarea de clasificación de estos descriptores, y dado que no existen bases de datos públicas que se adapten al problema descrito, se ha generado una nueva base de datos sobre la que se han realizado pruebas masivas. Finalmente, se presenta una metodología para la fusión de los diferentes clasificadores y se plantea una discusión sobre las combinaciones que ofrecen los mejores resultados. El núcleo del marco propuesto está constituido por un método Bayesiano de seguimiento basado en filtros de partículas. Se plantean contribuciones en los tres elementos fundamentales de estos filtros: el algoritmo de inferencia, el modelo dinámico y el modelo de observación. En concreto, se propone el uso de un método de muestreo basado en MCMC que evita el elevado coste computacional de los filtros de partículas tradicionales y por consiguiente permite que el modelado conjunto de múltiples vehículos sea computacionalmente viable. Por otra parte, el dominio transformado mencionado anteriormente permite la definición de un modelo dinámico de velocidad constante ya que se preserva el movimiento suave de los vehículos en autopistas. Por último, se propone un modelo de observación que integra diferentes características. 
En particular, además de la apariencia de los vehículos, el modelo tiene en cuenta también toda la información recibida de los bloques de procesamiento previos. El método propuesto se ejecuta en tiempo real en un ordenador de propósito general y da unos resultados sobresalientes en comparación con los métodos tradicionales.
ABSTRACT
This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community as it is the first step for driver assistance and collision avoidance systems and for eventual autonomous driving. Although much effort has been devoted to it in recent years, no satisfactory solution has yet been devised and thus it remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost alleviated. Two complementary strategies are proposed to address the first task, i.e., hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting the knowledge of vehicle appearance. Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is proven to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
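The tracking core described above combines a constant-velocity dynamic model defined in the rectified (bird's-eye) domain with MCMC-based sampling. The sketch below is a rough illustration of that idea only, not the thesis implementation: the state layout, noise levels, likelihood and function names are all assumptions made for the example.

```python
# Illustrative sketch: constant-velocity prediction plus a Metropolis-Hastings
# refinement step, loosely inspired by the MCMC-based tracking in the abstract.
# State, noise levels and likelihood are placeholders, not the thesis method.
import numpy as np

rng = np.random.default_rng(0)

def predict_constant_velocity(state, dt=0.04, accel_std=0.5):
    """Propagate a [x, y, vx, vy] state in the rectified (bird's-eye) domain."""
    x, y, vx, vy = state
    noise = rng.normal(0.0, accel_std, size=2) * dt
    return np.array([x + vx * dt, y + vy * dt, vx + noise[0], vy + noise[1]])

def log_likelihood(state, measurement, meas_std=0.3):
    """Gaussian likelihood of an observed (x, y) position (placeholder cue)."""
    diff = state[:2] - measurement
    return -0.5 * np.dot(diff, diff) / meas_std**2

def mcmc_update(state, measurement, n_steps=50, prop_std=0.2):
    """Metropolis-Hastings refinement of the predicted state."""
    current = state.copy()
    current_ll = log_likelihood(current, measurement)
    for _ in range(n_steps):
        proposal = current + rng.normal(0.0, prop_std, size=current.shape)
        proposal_ll = log_likelihood(proposal, measurement)
        if np.log(rng.uniform()) < proposal_ll - current_ll:
            current, current_ll = proposal, proposal_ll
    return current

# One predict/update cycle with made-up numbers (metres, metres per second).
state = np.array([10.0, 3.5, 25.0, 0.0])
measurement = np.array([11.1, 3.4])
state = mcmc_update(predict_constant_velocity(state), measurement)
print(state)
```

In a joint multi-vehicle setting, the thesis-style MCMC scheme would propose moves over the concatenated states of all tracked vehicles instead of a single state, which is what keeps the cost from growing exponentially with the number of targets.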
Abstract:
At the beginning of the 1990s, ontology development was closer to an art than to an engineering activity: ontology developers did not have clear guidelines on how to build ontologies, only some design criteria to be followed. Work on principles, methods and methodologies, together with supporting technologies and languages, turned ontology development into an engineering discipline, the so-called Ontology Engineering. Ontology Engineering refers to the set of activities that concern the ontology development process and the ontology life cycle, the methods and methodologies for building ontologies, and the tool suites and languages that support them. Thanks to the work done in the Ontology Engineering field, the development of ontologies within and between teams has increased and improved, as has the possibility of reusing ontologies in other developments and in final applications. Currently, ontologies are widely used in (a) Knowledge Engineering, Artificial Intelligence and Computer Science, (b) applications related to knowledge management, natural language processing, e-commerce, intelligent information integration, information retrieval, database design and integration, bio-informatics, education, and (c) the Semantic Web, the Semantic Grid, and the Linked Data initiative. In this paper, we provide an overview of Ontology Engineering, mentioning the most outstanding and widely used methodologies, languages, and tools for building ontologies. In addition, we include some words on how all these elements can be used in the Linked Data initiative.
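As a minimal illustration of the kind of languages and tooling such an overview covers, the snippet below builds a tiny ontology fragment with the rdflib Python library and serializes it as Turtle, the form in which it could be published as Linked Data. The namespace, class names and identifiers are invented for the example and do not come from the paper.

```python
# Minimal ontology fragment built with rdflib and serialized as Turtle.
# The example.org namespace and all terms are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, RDFS, URIRef

EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("ex", EX)

# A small class hierarchy and one labelled individual.
g.add((EX.ResearchGroup, RDF.type, RDFS.Class))
g.add((EX.OntologyGroup, RDFS.subClassOf, EX.ResearchGroup))
g.add((URIRef("http://example.org/id/group1"), RDF.type, EX.OntologyGroup))
g.add((URIRef("http://example.org/id/group1"), RDFS.label, Literal("Example ontology group")))

print(g.serialize(format="turtle"))
```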
Abstract:
GRC is a cementitious composite material made up of a cement mortar matrix and chopped glass fibers. Due to its outstanding mechanical properties, GRC has been widely used to produce cladding panels and some civil engineering elements. Impact failure of GRC cladding panels may occur during production if a tool falls onto the panel, in service when stones or other objects strike at low velocities, or when debris is projected after a blast. Impact failure of the front panel of a building may not only have an important economic cost; human lives may also be at risk if broken pieces of the panel fall from the building to the pavement. Therefore, knowing the impact strength of GRC is necessary to prevent economic losses and risks to human life. A one-stage light-gas gun is an impact test machine capable of testing different materials subjected to impact loads. An experimental program was carried out, testing GRC samples of five different formulations commonly used in the building industry. Steel spheres were shot at different velocities onto square GRC samples. The residual velocity of the projectiles was obtained both with a high-speed camera using multiframe exposure and by measuring the projectile's penetration depth in molding clay blocks. Tests were performed on young and artificially aged GRC samples to compare GRC's behavior when subjected to high strain rates. Numerical simulations using a hydrocode were carried out to analyze which parameters are most important during an impact event. The impact strength of GRC was obtained from the test results, which also show that the embrittlement caused by GRC aging has no influence on its impact behavior, owing to the small size of the projectile, and that the glass fibers used in GRC production only maintain the panels' integrity and have no influence on GRC's impact strength. The numerical models reproduced the impact tests accurately.
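The residual-velocity measurements described above lend themselves to a simple energy balance: the energy absorbed by the panel is the projectile's kinetic-energy loss between impact and exit. A minimal sketch, with a made-up projectile mass and made-up velocities rather than values from the test campaign:

```python
# Energy absorbed by a GRC panel estimated from the projectile's
# kinetic-energy loss. Mass and velocities are illustrative values,
# not results from the experimental program.
def absorbed_energy(mass_kg, impact_velocity, residual_velocity):
    """Return the kinetic energy (J) lost by the projectile in the impact."""
    return 0.5 * mass_kg * (impact_velocity**2 - residual_velocity**2)

m = 8.4e-3          # ~8.4 g steel sphere (assumed)
v_impact = 140.0    # m/s, assumed impact velocity
v_residual = 95.0   # m/s, assumed residual velocity (camera or clay-block estimate)

print(f"Absorbed energy: {absorbed_energy(m, v_impact, v_residual):.1f} J")
```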
Abstract:
The magazine of the Spanish Nuclear Society (SNE), “Nuclear España”, is a scientific-technical publication with almost thirty years of uninterrupted edition and more than 300 issues published. Its pages address technical subjects related to nuclear energy, as well as the activities carried out by the SNE, especially its national and international meetings. The main part of the magazine is composed of articles written by well-known specialists of the energy industry. One of the top goals of the magazine is to help transfer knowledge from the generation that built the nuclear power plants in Spain to the new generation of professionals who have started their nuclear careers in recent years. Each issue is monographic, trying to cover as many aspects of a topic as possible, with contributions from companies, research centers and universities that help to provide complementary points of view. The articles thus allow readers to go deeper into the issue's topic, broadening their view of the nuclear field and helping to share knowledge across the industry. The news section of the magazine covers current events in the sector as a whole. The editorial section reflects the opinion of the SNE Governing Board and the Magazine Committee on the subjects of interest in this field. In addition, the monthly interview presents the opinions of outstanding professionals. Of the eleven issues published per year, three have a markedly international character: the one dedicated to operating experience in Spanish and European nuclear power plants, the monographic issue devoted to the Annual Meeting of the SNE, and the international issue, which covers the latest activities of the Spanish industry in international projects. The first two are bilingual (Spanish-English), whereas the international edition is published entirely in English. Besides its distribution to all members of the SNE, the magazine is sent, at the national level, to companies and organizations related to nuclear power, universities, research centers, representatives of the central, regional and local administrations, the mass media and communication professionals. It is also sent to utilities and research centers in Europe, the United States, South America and Asia.
Abstract:
Metal grid lines are a vital element in multijunction solar cells, needed to extract the generated photocurrent from the cell. Nevertheless, they imply a certain shadowing factor and thus a certain reflectivity on the cell surface that lowers its light absorption. This reflectivity produces a loss in electrical efficiency and thus a loss in global energy production for CPV systems. We present here an optical design for recovering this portion of reflected light, thus leading to an increase in system efficiency. This new design is based on an external confinement cavity, an optical element able to redirect the light reflected by the cell back towards its surface. It has been made possible by the recent invention of the advanced Köhler concentrators by LPI, which can easily integrate one of these cavities. We have proven the excellent performance of these cavities integrated in this kind of CPV module, with outstanding results: 33.2% module electrical efficiency at Tcell = 25 °C and relative efficiency and Isc gains of over 6%.
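A back-of-the-envelope model helps to see where relative gains "of over 6%" can come from: if a fraction r of the incident light is reflected by the grid and cell surface, and the cavity returns a fraction f of that light to the cell, the effective absorption becomes a geometric series over successive bounces. The numbers below are illustrative assumptions, not measured module parameters.

```python
# Simple geometric-series model of an external confinement cavity.
# r = fraction of incident light reflected by the cell (grid shading plus
#     surface reflectivity); f = fraction of the reflected light the cavity
#     returns to the cell. Both values are illustrative assumptions.
def relative_gain(r, f):
    """Relative photocurrent gain with the cavity vs. without it."""
    absorption_no_cavity = 1.0 - r
    absorption_with_cavity = (1.0 - r) / (1.0 - r * f)  # sum over bounces
    return absorption_with_cavity / absorption_no_cavity - 1.0

r = 0.08  # assumed reflected fraction
f = 0.80  # assumed cavity recovery fraction
print(f"Relative Isc gain: {relative_gain(r, f) * 100:.1f} %")
```

With these assumed values the model predicts a gain of roughly 6-7%, of the same order as the measured figure quoted in the abstract.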
Abstract:
Multijunction solar cells present a certain reflectivity on their surface that lowers their light absorption. This reflectivity produces a loss in electrical efficiency and thus a loss in global energy production for CPV systems. We present here an optical design for recovering this portion of reflected light, thus leading to an increase in system efficiency. This new design is based on an external confinement cavity, an optical element able to redirect the light reflected by the cell back towards its surface. We have proven the excellent performance of these cavities integrated in CPV modules, with outstanding results: 33.2% module electrical efficiency at Tcell = 25 °C and relative efficiency and Isc gains of over 6%.
Abstract:
Cable-stayed bridges nowadays represent key points in transport networks, and their seismic behavior needs to be fully understood, even beyond the elastic range of materials. Both nonlinear dynamic (NL-RHA) and static (pushover) procedures are currently available to face this challenge, each with intrinsic advantages and disadvantages, and their applicability to the study of the nonlinear seismic behavior of cable-stayed bridges is discussed here. The seismic response of a large number of finite element models with different span lengths, tower shapes and classes of foundation soil is obtained with the different procedures and compared. Several features of the original Modal Pushover Analysis (MPA) are modified in light of cable-stayed bridge characteristics; furthermore, an extension of MPA and a new coupled pushover analysis (CNSP) are suggested to estimate the complex inelastic response of such outstanding structures subjected to multi-axial strong ground motions.
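Pushover-based procedures such as MPA estimate the total seismic demand by combining peak responses computed mode by mode, commonly with the SRSS rule (or CQC when modes are closely spaced). A minimal sketch of the SRSS step with invented modal peak displacements, purely to illustrate the combination and not the specific CNSP procedure proposed in the paper:

```python
# SRSS combination of peak modal responses, as used in modal pushover
# procedures. The modal peak values below are invented for illustration.
import numpy as np

def srss(modal_peaks):
    """Square-root-of-sum-of-squares combination of peak modal responses."""
    modal_peaks = np.asarray(modal_peaks, dtype=float)
    return np.sqrt(np.sum(modal_peaks**2))

# Assumed peak deck displacements (m) from three independent modal pushover runs.
peaks = [0.21, 0.07, 0.03]
print(f"Combined peak displacement (SRSS): {srss(peaks):.3f} m")
```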
Abstract:
Matsukawa and Habeck (2007) analyse the main instruments for risk mitigation in infrastructure financing with Multilateral Financial Institutions (MFIs). Their review coincided with the global financial crisis of 2007-08, and it is highly relevant in current times considering the sovereign debt crisis, the lack of available capital and the increases in bank regulation in Western economies. The current macroeconomic environment has seen a slowdown in the level of finance for infrastructure projects, as they pose a higher credit risk given their requirements for long-term investments. The rationale for this work is to look for innovative solutions focused on the credit risk mitigation of infrastructure and energy projects whilst optimizing the economic capital allocation for commercial banks. This objective is achieved through risk-sharing with MFIs and by seeking capital relief in project finance transactions. This research answers the main question: "What is the impact of risk-sharing with MFIs on project finance transactions to increase their efficiency and viability?", and is developed from the perspective of a commercial bank assessing the economic capital used and analysing the relevant variables for it: Probability of Default, Loss Given Default and Recovery Rates (Altman, 2010). An overview of project finance for the infrastructure and energy sectors in terms of the volume of transactions worldwide is outlined, along with a summary of risk-sharing financing with MFIs. The current regulatory framework underlying risk-sharing in structured finance with MFIs is also reviewed. From here, the impact of risk-sharing and the diversification effect in infrastructure and energy projects is assessed from the perspective of economic capital allocation for a commercial bank. CreditMetrics (J. P. Morgan, 1997) is applied to an existing, well-diversified portfolio of project finance infrastructure and energy investments, working with the main risk capital measures: economic capital, RAROC, and EVA. The conclusions of this research show that economic capital allocation on a portfolio of project finance, along with risk-sharing with MFIs, has a huge impact on capital relief whilst increasing profitability for commercial banks. There is an outstanding diversification effect due to the portfolio, which is combined with risk mitigation and an improvement in recovery rates through Partial Credit Guarantees issued by MFIs. A stress-test scenario analysis is applied to the current assumptions and credit risk model, considering a downgrade in the rating of the commercial bank (lender) and an increase of defaults in emerging countries, which has a direct impact on economic capital through an increase in expected loss and a decrease in profitability. Obtaining capital relief through risk-sharing makes it more viable for commercial banks to finance infrastructure and energy projects, with the beneficial effect of a direct impact of these investments on GDP growth and employment. The main contribution of this work is to promote a strategic economic capital allocation in infrastructure and energy financing through innovative risk-sharing with MFIs and economic pricing, in order to create economic value added for banks and to allow the financing of more infrastructure and energy projects. This work suggests several topics for further research in relation to the issues analysed.
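The capital measures named above relate through a few standard formulas: expected loss EL = PD x LGD x EAD, economic capital as the high-quantile credit loss in excess of EL, RAROC as risk-adjusted return over economic capital, and EVA as the return in excess of the cost of that capital. The toy single-exposure sketch below uses made-up inputs and a fixed loss quantile; a CreditMetrics-style portfolio model, as used in the thesis, would instead derive the loss quantile from correlated rating migrations across the whole portfolio.

```python
# Toy single-exposure capital calculation. All inputs are illustrative;
# a CreditMetrics-style portfolio model would obtain the loss quantile
# from correlated rating migrations rather than a fixed multiplier.
ead = 100_000_000        # exposure at default, EUR (assumed)
pd_ = 0.02               # probability of default (assumed)
lgd = 0.55               # loss given default = 1 - recovery rate (assumed)
loss_quantile = 0.06     # 99.9% credit-loss quantile as a fraction of EAD (assumed)
net_income = 2_500_000   # margin plus fees net of funding and costs (assumed)
hurdle_rate = 0.12       # required return on capital (assumed)

expected_loss = pd_ * lgd * ead
economic_capital = loss_quantile * ead - expected_loss
raroc = (net_income - expected_loss) / economic_capital
eva = net_income - expected_loss - hurdle_rate * economic_capital

print(f"EL    = {expected_loss:,.0f}")
print(f"EC    = {economic_capital:,.0f}")
print(f"RAROC = {raroc:.1%}")
print(f"EVA   = {eva:,.0f}")
```

In this framing, a Partial Credit Guarantee from an MFI shows up as a lower LGD (higher recovery rate) and a lower loss quantile, which is precisely the capital-relief mechanism the abstract describes.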
(Matsukawa and Habeck, 2007) analizan los principales instrumentos de mitigación de riesgos en las Instituciones Financieras Multilaterales (IFMs) para la financiación de infraestructuras. Su presentación coincidió con el inicio de la crisis financiera en Agosto de 2007, y sus consecuencias persisten en la actualidad, destacando la deuda soberana en economías desarrolladas y los problemas capitalización de los bancos. Este entorno macroeconómico ha ralentizado la financiación de proyectos de infraestructuras. El actual trabajo de investigación tiene su motivación en la búsqueda de soluciones para la financiación de proyectos de infraestructuras y de energía, mitigando los riesgos inherentes, con el objeto de reducir el consumo de capital económico en los bancos financiadores. Este objetivo se alcanza compartiendo el riesgo de la financiación con IFMs, a través de estructuras de risk-sharing. La investigación responde la pregunta: "Cuál es el impacto de risk-sharing con IFMs, en la financiación de proyectos para aumentar su eficiencia y viabilidad?". El trabajo se desarrolla desde el enfoque de un banco comercial, estimando el consumo de capital económico en la financiación de proyectos y analizando las principales variables del riesgo de crédito, Probability of Default, Loss Given Default and Recovery Rates, (Altman, 2010). La investigación presenta las cifras globales de Project Finance en los sectores de infraestructuras y de energía, y analiza el marco regulatorio internacional en relación al consumo de capital económico en la financiación de proyectos en los que participan IFMs. A continuación, el trabajo modeliza una cartera real, bien diversificada, de Project Finance de infraestructuras y de energía, aplicando la metodología CreditMet- rics (J. P. Morgan, 1997). Su objeto es estimar el consumo de capital económico y la rentabilidad de la cartera de proyectos a través del RAROC y EVA. La modelización permite estimar el efecto diversificación y la liberación de capital económico consecuencia del risk-sharing. Los resultados muestran el enorme impacto del efecto diversificación de la cartera, así como de las garantías parciales de las IFMs que mitigan riesgos, mejoran el recovery rate de los proyectos y reducen el consumo de capital económico para el banco comercial, mientras aumentan la rentabilidad, RAROC, y crean valor económico, EVA. En escenarios económicos de inestabilidad, empeoramiento del rating de los bancos, aumentos de default en los proyectos y de correlación en las carteras, hay un impacto directo en el capital económico y en la pérdida de rentabilidad. La liberación de capital económico, como se plantea en la presente investigación, permitirá financiar más proyectos de infraestructuras y de energía, lo que repercutirá en un mayor crecimiento económico y creación de empleo. La principal contribución de este trabajo es promover la gestión activa del capital económico en la financiación de infraestructuras y de proyectos energéticos, a través de estructuras innovadoras de risk-sharing con IFMs y de creación de valor económico en los bancos comerciales, lo que mejoraría su eficiencia y capitalización. La aportación metodológica del trabajo se convierte por su originalidad en una contribución, que sugiere y facilita nuevas líneas de investigación académica en las principales variables del riesgo de crédito que afectan al capital económico en la financiación de proyectos.
Abstract:
The twentieth century brought a new sensibility characterized by the discredit of Cartesian rationality and the weakening of universal truths, associated with aesthetic values such as order, proportion and harmony. In the middle of the century, theorists such as Theodor Adorno, Rudolf Arnheim and Anton Ehrenzweig warned about the transformation under way in the artistic field. Contemporary aesthetics seemed to have a new goal: to deny the idea of art as an organized, finished and coherent structure. Order had lost its privileged position. Disorder, probability, arbitrariness, accidentality, randomness, chaos, fragmentation, indeterminacy... Gradually, new terms were coined by aesthetic criticism to explain what had been happening since the beginning of the century. The first essays on the matter sought to provide new interpretative models based on, among other arguments, the phenomenology of perception, the recent discoveries of quantum mechanics, the deeper layers of the psyche or information theory. Overall, these were worthy attempts to give theoretical content to a situation as obvious as it was devoid of a founding charter. Finally, in 1962, Umberto Eco brought together all these efforts by proposing a single theoretical frame in his book Opera Aperta. From his point of view, all the aesthetic production of the twentieth century had one characteristic in common: its capacity to express multiplicity. For this reason, he considered that the nature of contemporary art was, above all, ambiguous. The aim of this research is to clarify the consequences of the incorporation of ambiguity into architectural theoretical discourse. We should start by making an accurate analysis of this concept. However, this task is quite difficult because ambiguity does not allow itself to be clearly defined. The concept has the disadvantage that its signifier is as imprecise as its signified. In addition, the negative connotations that ambiguity still has outside the aesthetic field stigmatize the term and make its use problematic. Another problem with ambiguity is that the contemporary subject is able to locate it in all situations: in addition to distinguishing ambiguity in contemporary productions, the subject also finds it in works belonging to remote ages and styles. For that reason, it could be said that everything is ambiguous. And that is correct, because somehow ambiguity is present in any creation of the imperfect human being. However, as Eco, Arnheim and Ehrenzweig pointed out, there are two major differences between the current and past contexts. One affects the subject and the other the object. First, it is the contemporary subject, and no other, who has acquired the ability to value and assimilate ambiguity. Secondly, ambiguity was an unexpected aesthetic result in former periods, while in the contemporary object it has been codified and is deliberately present. In any case, as Eco did, we consider it appropriate to use the term ambiguity to refer to the contemporary aesthetic field. Any other term with a more specific meaning would only show partial and limited aspects of a situation that is quite complex and difficult to diagnose. Contrary to what might normally be expected, in this case ambiguity is the term that fits best precisely because of its lack of specificity. In fact, this lack of specificity is what allows a dynamic condition to be assigned to the idea of ambiguity that with other terms would hardly be operative.
Thus, instead of trying to define the idea of ambiguity, we will analyze how it has evolved and its consequences for the architectural discipline. Instead of trying to define what it is, we will examine what its presence has meant at each moment. We will deal with ambiguity as a constant presence that has always been latent in architectural production but whose nature has been modified over time. Eco, in the mid-twentieth century, distinguished between classical ambiguity and contemporary ambiguity. Currently, half a century later, the challenge is to discern whether the idea of ambiguity has remained unchanged or has undergone a new transformation. What this research will demonstrate is that it is possible to detect a new transformation that has much to do with the cultural and aesthetic context of the last decades: the transition from modernism to postmodernism. This assumption leads us to establish two different levels of contemporary ambiguity, each related to one of these periods. The first level of ambiguity has been widely known for many years. Its main characteristics are a codified multiplicity, an interpretative freedom and an active subject who gives conclusion to an object that is incomplete or indefinite. This level of ambiguity is related to the idea of indeterminacy, a concept successfully introduced into contemporary aesthetic language. The second level of ambiguity has gone almost unnoticed by architectural criticism, although it has been identified and studied in other theoretical disciplines. Much of the work of Fredric Jameson and Jean-François Lyotard shows reasonable evidence that the aesthetic production of postmodernism has transcended modern ambiguity to reach a new level in which, despite the existence of multiplicity, the interpretative freedom and the active subject have been questioned, and finally denied. In this period ambiguity seems to have reached a new level in which it is no longer possible to obtain a conclusive and complete interpretation of the object because it has become an unreadable device. Postmodern production offers a kind of inaccessible multiplicity, and its nature is deeply contradictory. This hypothetical transformation of the idea of ambiguity has an outstanding analogy with the one shown in the poetic analysis carried out by William Empson, published in 1930 in his Seven Types of Ambiguity. Empson established different levels of ambiguity and classified them according to their poetic effect, in an arrangement with an ascending logic towards incoherence. At the seventh level, where ambiguity is highest, he located the contradiction between irreconcilable opposites. It could be said that contradiction, once it undermines the coherence of the object, was the best way that contemporary aesthetics found to confirm the Hegelian judgment according to which art would ultimately reject its capacity to express truth. Much of the transformation of architecture throughout the last century is related to the active involvement of ambiguity in its theoretical discourse. In modern architecture ambiguity is present a posteriori, in the critical review carried out by theoreticians such as Colin Rowe, Manfredo Tafuri and Bruno Zevi. The publication of several studies on Mannerism in the forties and fifties rescued certain virtues of a historical style that had been undervalued because of its deviation from the Renaissance canon. Rowe, Tafuri and Zevi, among others, pointed out the similarities between Mannerism and certain qualities of modern architecture, both devoted to breaking with previous dogmas.
The recovery of Mannerism allowed ambiguity and modernity to be joined for the first time in the same sentence. In postmodernism, on the other hand, ambiguity is present ex professo, playing a prominent role in the theoretical discourse of this period. The distance between its analytical identification and its operational use quickly disappeared because of structuralism, an analytical methodology with the aspiration of becoming a modus operandi. Under its influence, architecture began to be identified and studied as a language. Thus, the postmodern theoretical project distinguished between the components of architectural language and developed them separately. Consequently, there is not one but three projects related to postmodern contradiction: the semantic project, the syntactic project and the pragmatic project. Leading these projects are those prominent architects whose work showed a special interest in exploring and developing the potential of the use of contradiction in architecture. Thus, Robert Venturi, Peter Eisenman and Rem Koolhaas were the ones who established the main features through which architecture developed the dialectics of ambiguity, at its last and most extreme level, as a theoretical project in each component of architectural language. Robert Venturi developed a new interpretation of architecture based on its semantic component, Peter Eisenman did the same with its syntactic component, and Rem Koolhaas with its pragmatic component. With this approach, this research aims to establish a new reflection on the architectural transformation from modernity to postmodernity. It may also serve to shed light on certain still unnoticed aspects that have shaped the architectural heritage of recent decades, the consequence of a fruitful relationship between architecture and ambiguity and its provocative consummation in a contradictio in terminis.
This research focuses fundamentally on the repercussions of the incorporation of ambiguity, in the form of contradiction, into postmodern architectural discourse, through each of its three theoretical projects. It is therefore structured around a main chapter entitled Dialectics of ambiguity as a postmodern theoretical project, which is divided into three parts: Semantic project. Robert Venturi; Syntactic project. Peter Eisenman; and Pragmatic project. Rem Koolhaas. The central chapter is complemented by two others placed at the beginning. The first, entitled Dialectics of contemporary ambiguity. An approach, carries out a chronological analysis of the evolution of the idea of ambiguity in twentieth-century aesthetic theory, without yet entering into architectural matters. The second, entitled Dialectics of ambiguity as a critique of the modern project, examines the gradual incorporation of ambiguity into the critical review of modernity, which would prove vital in enabling its later operational introduction into postmodernity. A final chapter, placed at the end of the text, proposes a series of Projections which, in the light of what has been analyzed in the previous chapters, attempt to establish a rereading of the current architectural context and its possible evolution, considering at all times that reflection on ambiguity still allows new discursive horizons to be glimpsed. Each double page of the thesis synthesizes the tripartite structure of the central chapter and, broadly speaking, the main methodological tool used in the research.
In this way, the threefold semantic, syntactic and pragmatic nature with which the postmodern theoretical project has been identified is reproduced here in a specific arrangement of images, footnotes and the main body of the text. In the left-hand column are placed the images that accompany the main text. Their distribution follows aesthetic and compositional criteria, qualifying, as far as possible, their semantic condition. Next, to their right, are placed the footnotes. They are arranged in a column, and each note sits at the same height as its corresponding reference in the main text. Their regulated distribution, their value as notation and their possible equation with a deep structure allude to their syntactic condition. Finally, the main body of the text completely occupies the right-hand half of each double page. Conceived as a continuous narrative, with hardly any interruptions, its role in satisfying the discursive demands posed by a doctoral investigation corresponds to its pragmatic condition.
Abstract:
The stonemasonry of the Gothic vault is based in its entirety upon the geometry of the line, whereas classical stereotomy relies on a comprehensive knowledge of the surface and of the highly sophisticated faces of the voussoirs necessary for its vaults. It is obvious that this leap in the art of construction was paralleled and accompanied by an extension of the horizons of geometry. In Spain, it was made possible thanks to the centuries-old tradition of stone building begun in the most remote medieval times and to the presence of outstanding architects and stonemasons such as Juan de Álava, whose professional work surpassed the established limits and provided the art of building with new instruments.
Abstract:
The arrival of European master masons in Burgos and Toledo during the mid-fifteenth century was essential for the promotion of late Gothic ribbed vault design techniques in Spain. The Antigua Chapel in Seville Cathedral, designed by the Spanish master mason Simón de Colonia in 1497, provides an outstanding case study on this subject. The vault is characterized by the interlacing of the ribs near the springing, reflecting the influence of German ribbed vault designs. This paper analyses the relationship of German ribbed vaults and their design methods with those of Spanish ribbed vaults, with particular attention to the presence of ribs that cut through one another above the springing, materialized in the work of Simón de Colonia. This characteristic is reflected in some manuscripts from the German area, such as the Wiener Sammlungen (15th-16th centuries) and the Codex Miniatus 3 (ca. 1560-1570), but no Spanish documents of the same period make reference to it.
Abstract:
The vault of the sacristy of the Cathedral of Saint-Jean Baptiste in Perpignan (France), constructed by the Majorcan architect Guillem Sagrera between 1433 and 1447, is an outstanding, yet strikingly little-known, example of rib vaulting. This paper analyzes the overall construction of the form of the vault, characterized by its highly irregular perimeter, with particular attention to an isolated decorated corbel which solves the problem of the wall support of a group of six ribs and stands in stark contrast with the rest of the supports, which are completely unadorned. Given Sagrera's extreme rigour in all his works (and in this one in particular), this apparent “capriccio” must be justified not only by decorative or formal requirements, but also by the constructive logic of the Gothic vaulting system.
Abstract:
A Kuhnian approach to research assessment requires us to consider that the important scientific breakthroughs that drive scientific progress are infrequent and that the progress of science does not depend on normal research. Consequently, indicators of research performance based on the total number of papers do not accurately measure scientific progress. Similarly, those universities with the best reputations in terms of scientific progress differ widely from other universities in the scale of investments made in research and in their higher concentrations of outstanding scientists, but less so in the total number of papers or citations. This study argues that indicators for the 1% high-citation tail of the citation distribution reveal the contribution of universities to the progress of science and provide quantifiable justification for the large investments in research made by elite research universities. In this tail, which follows a power law, the number of the less frequent and highly cited important breakthroughs can be predicted from the frequencies of papers in the upper part of the tail. This study quantifies the false impression of excellence produced by multinational papers, and by other types of papers that do not contribute to the progress of science. Many of these papers are concentrated in, and dominate, lists of highly cited papers, especially in lower-ranked universities. The h-index obscures the differences between higher- and lower-ranked universities because the proportion of h-core papers in the 1% high-citation tail is not proportional to the value of the h-index.
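The extrapolation argument can be made concrete: if the top of the citation distribution follows a power law, P(C > c) is proportional to c^(-alpha), so the expected count of papers above a very high threshold follows from the counts at two lower thresholds. A sketch with invented counts and thresholds (not data from the study):

```python
# Extrapolating the number of very highly cited papers from the counts
# at two lower citation thresholds, assuming a power-law (Pareto) tail
# P(C > c) ~ c**(-alpha). All counts and thresholds are invented.
import math

def tail_exponent(c1, n1, c2, n2):
    """Estimate alpha from counts n1, n2 of papers above thresholds c1 < c2."""
    return math.log(n1 / n2) / math.log(c2 / c1)

def predicted_count(c_ref, n_ref, c_target, alpha):
    """Expected number of papers cited more than c_target times."""
    return n_ref * (c_ref / c_target) ** alpha

# Assumed: 400 papers above 100 citations, 100 papers above 200 citations.
alpha = tail_exponent(100, 400, 200, 100)
breakthroughs = predicted_count(200, 100, 1000, alpha)
print(f"alpha = {alpha:.2f}")
print(f"Predicted papers above 1000 citations: {breakthroughs:.1f}")
```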
Abstract:
The rheological and tribological properties of single-walled carbon nanotube (SWCNT)-reinforced poly(phenylene sulphide) (PPS) and poly(ether ether ketone) (PEEK) nanocomposites prepared via melt-extrusion were investigated. The effectiveness of employing a dual-nanofiller strategy combining polyetherimide (PEI)-wrapped SWCNTs with inorganic fullerene-like tungsten disulfide (IF-WS2) nanoparticles for property enhancement of the resulting hybrid composites was evaluated. Viscoelastic measurements revealed that the complex viscosity η*, storage modulus G′, and loss modulus G″ increased with SWCNT content. In the low-frequency region, G′ and G″ became almost independent of frequency at higher SWCNT loadings, suggesting a transition from liquid-like to solid-like behavior. The incorporation of increasing IF-WS2 contents led to a progressive drop in η* and G′ due to a lubricant effect. PEEK nanocomposites showed a lower percolation threshold than those based on PPS, ascribed to an improved SWCNT dispersion owing to the higher affinity between PEI and PEEK. The SWCNTs significantly lowered the wear rate but only slightly reduced the coefficient of friction. Composites with both nanofillers exhibited improved wear behavior, attributed to the outstanding tribological properties of the IF-WS2 nanoparticles and a synergistic reinforcement effect. The combination of SWCNTs with IF-WS2 is a promising route for improving the tribological and rheological performance of thermoplastic nanocomposites.
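Two of the quantities discussed follow from oscillatory-shear data in a standard way: the complex viscosity magnitude is |η*| = sqrt(G′² + G″²)/ω, and the rheological percolation threshold is typically estimated by fitting the scaling G′ ∝ (φ − φc)^t above φc. The sketch below applies the first relation to invented frequency-sweep values, not data from the paper:

```python
# Complex viscosity magnitude from storage and loss moduli measured in an
# oscillatory frequency sweep: |eta*| = sqrt(G'^2 + G''^2) / omega.
# The G', G'' values below are invented, not data from the study.
import numpy as np

omega = np.array([0.1, 1.0, 10.0, 100.0])          # angular frequency, rad/s
g_storage = np.array([5.0, 40.0, 300.0, 2000.0])   # G', Pa
g_loss = np.array([20.0, 120.0, 700.0, 3500.0])    # G'', Pa

complex_viscosity = np.sqrt(g_storage**2 + g_loss**2) / omega  # Pa*s
for w, eta in zip(omega, complex_viscosity):
    print(f"omega = {w:7.1f} rad/s   |eta*| = {eta:10.1f} Pa*s")
```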