16 results for Research Subject Categories::TECHNOLOGY::Civil engineering and architecture::Other civil engineering and architecture


Relevance:

100.00%

Publisher:

Abstract:

Abstract is not available

Relevance:

100.00%

Publisher:

Abstract:

In order to establish an active internal know-how reserve in an information processing and engineering services company, a training architecture tailored to the company as a whole must be defined. When a company's earnings come from advisory services dynamically structured in the form of projects, as is the case at hand, difficulties arise that must be taken into account in the architectural design. The first difficulties are of a psychological nature, and the design method proposed here begins with the definition of the highest training metasystem, which is aimed at adjusting for the variety of perceptions of the company's human components, before the architecture can be designed. This approach may be considered an application of the cybernetic Law of Requisite Variety (Ashby) and of the Principle of Conceptual Integrity (Brooks). Also included is a description of some of the results of the first steps of metasystems at the level of company organization.

Relevance:

100.00%

Publisher:

Abstract:

In the last two decades, there has been an important increase in research on speech technology in Spain, mainly due to a higher level of funding from European, Spanish and local institutions, and also due to a growing interest in these technologies for developing new services and applications. This paper provides a review of the main areas of speech technology addressed by research groups in Spain, their main contributions in recent years and their main focus of interest these days. This description is classified into five main areas: audio processing including speech, speaker characterization, speech and language processing, text-to-speech conversion, and spoken language applications. This paper also introduces the Spanish Network of Speech Technologies (RTTH, Red Temática en Tecnologías del Habla) as the research network that includes almost all the researchers working in this area, presenting some figures, its objectives and the main activities it has developed in recent years.

Relevance:

100.00%

Publisher:

Abstract:

This work is a contribution to the research and development of the intermediate band solar cell (IBSC), a high-efficiency photovoltaic concept that combines the advantages of both low- and high-bandgap solar cells. The resemblance to a low-bandgap solar cell comes from the fact that the IBSC hosts an electronic energy band, the intermediate band (IB), within the semiconductor bandgap. This IB allows the collection of sub-bandgap photons by means of two-step photon absorption processes, from the valence band (VB) to the IB and from there to the conduction band (CB). The exploitation of these low-energy photons implies a more efficient use of the solar spectrum. The resemblance of the IBSC to a high-bandgap solar cell is related to the preservation of the voltage: the open-circuit voltage (VOC) of an IBSC is not limited by any of the sub-bandgaps (involving the IB), but only by the fundamental bandgap (defined from the VB to the CB). Nevertheless, the presence of the IB opens new paths for electronic recombination, and the performance of the IBSC is degraded under 1-sun operating conditions. A theoretical argument is presented regarding the need for concentrated illumination in order to circumvent the voltage degradation derived from the increase in recombination. This theory is supported by experimental verification carried out with our novel characterization technique, consisting of the acquisition of photogenerated current (IL)-VOC pairs under low temperature and concentrated light. Moreover, at this stage of IBSC research, several new IB materials are being engineered, and our novel characterization tool can be very useful to provide feedback on their capability to perform as real IBSCs, verifying or disproving the fulfillment of the "voltage preservation" principle. An analytical model has also been developed to assess the potential of quantum-dot (QD) IBSCs.
It is based on the calculation of the band alignment of III-V alloyed heterojunctions, the estimation of the confined energy levels in a QD and the calculation of the detailed balance efficiency. Several potentially useful QD materials have been identified, such as InAs/AlxGa1-xAs, InAs/GaxIn1-xP, InAs1-yNy/AlAsxSb1-x or InAs1-zNz/Alx[GayIn1-y]1-xP. Finally, a model for the analysis of the series resistance of a concentrator solar cell has also been developed in order to design and fabricate IBSCs adapted to 1,000 suns.
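The detailed-balance reasoning behind the IB concept can be made concrete with a toy flux calculation: the IB divides the fundamental bandgap into two sub-gaps, and the extra two-step current is limited by the weaker of the two sub-gap photon fluxes. A minimal sketch, assuming a 6000 K blackbody sun and illustrative gap values (Eg = 1.95 eV, IB at 0.71 eV above the VB); these numbers are for illustration only, not results from this thesis:

```python
import math

def photon_flux(e_low, e_high, temp=6000.0, steps=2000):
    """Blackbody photon flux (arbitrary units) between photon energies
    e_low and e_high (eV), integrating the Planck distribution."""
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    de = (e_high - e_low) / steps
    total = 0.0
    for i in range(steps):
        e = e_low + (i + 0.5) * de  # midpoint rule
        total += e * e / (math.exp(e / (k_b * temp)) - 1.0) * de
    return total

# A single-gap cell with Eg = 1.95 eV only collects photons above 1.95 eV.
single_gap = photon_flux(1.95, 10.0)

# An IB at 0.71 eV enables two-step absorption: VB->IB uses photons in
# [0.71, 1.24] eV and IB->CB uses [1.24, 1.95] eV; the added current is
# limited by the weaker of the two sub-gap fluxes (current matching).
extra = min(photon_flux(0.71, 1.24), photon_flux(1.24, 1.95))
```

The voltage-preservation argument is then that VOC remains bounded by the full 1.95 eV gap even though photons below it now contribute current.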

Relevance:

100.00%

Publisher:

Abstract:

OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web 1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs. These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools. Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate. However, linguistic annotation tools still have some limitations, which can be summarised as follows: 1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.). 2. They usually introduce a certain rate of errors and ambiguities when tagging.
This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts. 3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc. A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved. In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool. Therefore, it would be quite useful to find a way to (i) correct or, at least, reduce the errors and inaccuracies of lower-level linguistic tools; and (ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools. Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section. 2. GOALS OF THE PRESENT WORK As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents).
This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based <Subject, Predicate, Object> triples, as in the usual Semantic Web languages (namely RDF(S) and OWL), in order for the model to be considered suitable for the Semantic Web. Besides, to be useful for the Semantic Web, this model should provide a way to automate the annotation of web pages. As for the present work, this requirement involved reusing the linguistic annotation tools purchased by the OEG research group (http://www.oeg-upm.net), but solving beforehand (or, at least, minimising) some of their limitations. Therefore, this model had to minimise these limitations by means of the integration of several linguistic annotation tools into a common architecture. Since this integration required the interoperation of the tools and of their annotations, ontologies were proposed as the main technological component to make them effectively interoperate. From the very beginning, it seemed that the formalisation of the elements and the knowledge underlying linguistic annotations within an appropriate set of ontologies would be a great step forward towards the formulation of such a model (henceforth referred to as OntoTag). Obviously, first, to combine the results of the linguistic annotation tools that operated at the same level, their annotation schemas had to be unified (or, preferably, standardised) in advance. This entailed the unification (i.e., standardisation) of their tags (both their representation and their meaning) and of their format or syntax.
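The requirement that annotations be structured as ontology-based <Subject, Predicate, Object> triples can be illustrated with a toy example. A minimal sketch, using plain Python tuples and hypothetical namespace IRIs (OntoTag's real ontology IRIs and property names are not reproduced here):

```python
# Hypothetical namespace IRIs standing in for OntoTag's ontologies.
LUO = "http://example.org/ontotag/luo#"  # Linguistic Unit Ontology (assumed)
LAO = "http://example.org/ontotag/lao#"  # Linguistic Attribute Ontology (assumed)
LVO = "http://example.org/ontotag/lvo#"  # Linguistic Value Ontology (assumed)
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

def annotate_token(token_iri, written_form, pos):
    """Return a morphosyntactic annotation of one token as a list of
    <Subject, Predicate, Object> triples."""
    return [
        (token_iri, RDF_TYPE, LUO + "Word"),          # the unit's type (LUO)
        (token_iri, LAO + "writtenForm", written_form),  # an attribute (LAO)
        (token_iri, LAO + "partOfSpeech", LVO + pos),    # its value (LVO)
    ]

triples = annotate_token("http://example.org/doc1#t1", "cinema", "Noun")
```

In the actual model such triples would be serialised in RDF(S)/OWL, so that standard Semantic Web tooling can consume the annotations directly.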
Second, to merge the results of the linguistic annotation tools operating at different levels, their respective annotation schemas had to be (a) made interoperable and (b) integrated. And third, in order for the resulting annotations to suit the Semantic Web, they had to be specified by means of an ontology-based vocabulary and structured by means of ontology-based <Subject, Predicate, Object> triples, as hinted above. Therefore, a new annotation scheme had to be devised, based both on ontologies and on this type of triples, which allowed for the combination and the integration of the annotations of any set of linguistic annotation tools. This annotation scheme was considered a fundamental part of the model proposed here, and its development was, accordingly, another major objective of the present work. All these goals, aims and objectives can be re-stated more clearly as follows: Goal 1: Development of a set of ontologies for the formalisation of the linguistic knowledge relating to linguistic annotation. Sub-goal 1.1: Ontological formalisation of the EAGLES (1996a; 1996b) de facto standards for morphosyntactic and syntactic annotation, in a way that respects the triple structure recommended for annotations in these works (which is isomorphic to the <Subject, Predicate, Object> triple structures used in the context of the Semantic Web). Sub-goal 1.2: Incorporation into this preliminary ontological formalisation of other existing standards and standard proposals relating to the levels mentioned above, such as those currently under development within ISO/TC 37 (the ISO Technical Committee dealing with Terminology, which also deals with linguistic resources and annotations). Sub-goal 1.3: Generalisation and extension of the recommendations in EAGLES (1996a; 1996b) and ISO/TC 37 to the semantic level, for which no ISO/TC 37 standards have been developed yet.
Sub-goal 1.4: Ontological formalisation of the generalisations and/or extensions obtained in the previous sub-goal as generalisations and/or extensions of the corresponding ontology (or ontologies). Sub-goal 1.5: Ontological formalisation of the knowledge required to link, combine and unite the knowledge represented in the previously developed ontology (or ontologies). Goal 2: Development of OntoTag’s annotation scheme, a standard-based abstract scheme for the hybrid (linguistically motivated and ontology-based) annotation of texts. Sub-goal 2.1: Development of the standard-based morphosyntactic annotation level of OntoTag’s scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996a) and also the recommendations included in the ISO/MAF (2008) standard draft. Sub-goal 2.2: Development of the standard-based syntactic annotation level of the hybrid abstract scheme. This level should include, and possibly extend, the recommendations of EAGLES (1996b) and the ISO/SynAF (2010) standard draft. Sub-goal 2.3: Development of the standard-based semantic annotation level of OntoTag’s (abstract) scheme. Sub-goal 2.4: Development of the mechanisms for a convenient integration of the three annotation levels already mentioned. These mechanisms should take into account the recommendations included in the ISO/LAF (2009) standard draft. Goal 3: Design of OntoTag’s (abstract) annotation architecture, an abstract architecture for the hybrid (semantic) annotation of texts (i) that facilitates the integration and interoperation of different linguistic annotation tools, and (ii) whose results comply with OntoTag’s annotation scheme. Sub-goal 3.1: Specification of the decanting processes that allow for the classification and separation, according to their corresponding levels, of the results of the linguistic tools annotating at several different levels.
Sub-goal 3.2: Specification of the standardisation processes that allow (a) complying with the standardisation requirements of OntoTag’s annotation scheme, as well as (b) combining the results of those linguistic tools that share some level of annotation. Sub-goal 3.3: Specification of the merging processes that allow for the combination of the output annotations and the interoperation of those linguistic tools that share some level of annotation. Sub-goal 3.4: Specification of the merging processes that allow for the integration of the results and the interoperation of those tools performing their annotations at different levels. Goal 4: Generation of OntoTagger’s schema, a concrete instance of OntoTag’s abstract scheme for a concrete set of linguistic annotations. These linguistic annotations result from the tools and the resources available in the research group, namely • Bitext’s DataLexica (http://www.bitext.com/EN/datalexica.asp), • LACELL’s (POS) tagger (http://www.um.es/grupos/grupo-lacell/quees.php), • Connexor’s FDG (http://www.connexor.eu/technology/machinese/glossary/fdg/), and • EuroWordNet (Vossen et al., 1998). This schema should help evaluate OntoTag’s underlying hypotheses, stated below. Consequently, it should implement, at least, those levels of the abstract scheme dealing with the annotations of the set of tools considered in this implementation. This includes the morphosyntactic, the syntactic and the semantic levels. Goal 5: Implementation of OntoTagger’s configuration, a concrete instance of OntoTag’s abstract architecture for this set of linguistic tools and annotations. This configuration (1) had to use the schema generated in the previous goal; and (2) should help support or refute the hypotheses of this work as well (see the next section).
Sub-goal 5.1: Implementation of the decanting processes that facilitate the classification and separation of the results of those linguistic resources that provide annotations at several different levels (on the one hand, LACELL’s tagger operates at the morphosyntactic level and, minimally, also at the semantic level; on the other hand, FDG operates at the morphosyntactic and the syntactic levels and, minimally, at the semantic level as well). Sub-goal 5.2: Implementation of the standardisation processes that allow (i) specifying the results of those linguistic tools that share some level of annotation according to the requirements of OntoTagger’s schema, as well as (ii) combining these shared-level results. In particular, all the tools selected perform morphosyntactic annotations, and they had to be conveniently combined by means of these processes. Sub-goal 5.3: Implementation of the merging processes that allow for the combination (and possibly the improvement) of the annotations and the interoperation of the tools that share some level of annotation (in particular, those relating to the morphosyntactic level, as in the previous sub-goal). Sub-goal 5.4: Implementation of the merging processes that allow for the integration of the different standardised and combined annotations aforementioned, relating to all the levels considered. Sub-goal 5.5: Improvement of the semantic level of this configuration by adding a named entity recognition, (sub-)classification and annotation subsystem, which also uses the annotated named entities to populate a domain ontology, in order to provide a concrete application of the present work in the two areas involved (the Semantic Web and Corpus Linguistics). 3.
MAIN RESULTS: ASSESSMENT OF ONTOTAG’S UNDERLYING HYPOTHESES The model developed in the present thesis tries to shed some light on (i) whether linguistic annotation tools can effectively interoperate; (ii) whether their results can be combined and integrated; and, if they can, (iii) how they can, respectively, interoperate and be combined and integrated. Accordingly, several hypotheses had to be supported (or rejected) by the development of the OntoTag model and OntoTagger (its implementation). The hypotheses underlying OntoTag are surveyed below. Only one of the hypotheses (H.6) was rejected; the other five could be confirmed. H.1 The annotations of different levels (or layers) can be integrated into a sort of overall, comprehensive, multilayer and multilevel annotation, so that their elements can complement and refer to each other. • CONFIRMED by the development of: o OntoTag’s annotation scheme, o OntoTag’s annotation architecture, o OntoTagger’s (XML, RDF, OWL) annotation schemas, o OntoTagger’s configuration. H.2 Tool-dependent annotations can be mapped onto a sort of tool-independent annotations and, thus, can be standardised. • CONFIRMED by means of the standardisation phase incorporated into OntoTag and OntoTagger for the annotations yielded by the tools. H.3 Standardisation should ease: H.3.1: The interoperation of linguistic tools. H.3.2: The comparison, combination (at the same level and layer) and integration (at different levels or layers) of annotations. • H.3 was CONFIRMED by means of the development of OntoTagger’s ontology-based configuration: o Interoperation, comparison, combination and integration of the annotations of three different linguistic tools (Connexor’s FDG, Bitext’s DataLexica and LACELL’s tagger); o Integration of EuroWordNet-based, domain-ontology-based and named entity annotations at the semantic level. o Integration of morphosyntactic, syntactic and semantic annotations. 
H.4 Ontologies and Semantic Web technologies (can) play a crucial role in the standardisation of linguistic annotations, by providing consensual vocabularies and standardised formats for annotation (e.g., RDF triples). • CONFIRMED by means of the development of OntoTagger’s RDF-triple-based annotation schemas. H.5 The rate of errors introduced by a linguistic tool at a given level, when annotating, can be reduced automatically by contrasting and combining its results with the ones coming from other tools operating at the same level, even when these other tools are built following a different technological (stochastic vs. rule-based, for example) or theoretical (dependency-based vs. HPSG-based, for instance) approach. • CONFIRMED by the results yielded by the evaluation of OntoTagger. H.6 Each linguistic level can be managed and annotated independently. • REJECTED by OntoTagger’s experiments and by the dependencies observed among the morphosyntactic annotations, and between them and the syntactic annotations. In fact, Hypothesis H.6 was already rejected when OntoTag’s ontologies were developed. We observed then that several linguistic units stand on an interface between levels, thereby belonging to both of them (such as morphosyntactic units, which belong to both the morphological level and the syntactic level). Therefore, the annotations of these levels overlap and cannot be handled independently when merged into a unique multileveled annotation. 4. OTHER MAIN RESULTS AND CONTRIBUTIONS First, interoperability is a hot topic for both the linguistic annotation community and the whole Computer Science field. The specification (and implementation) of OntoTag’s architecture for the combination and integration of linguistic (annotation) tools and annotations by means of ontologies shows a way to make these different linguistic annotation tools and annotations interoperate in practice.
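Hypothesis H.5, reducing a tool's error rate by contrasting its output with other tools at the same level, can be pictured with a simplified stand-in for the combination phase. OntoTagger combines standardised, ontology-based annotations with more elaborate decision logic; plain per-token majority voting, shown below with hypothetical tool names and tags, is only the simplest instance of the idea:

```python
from collections import Counter

def combine_pos(tool_tags):
    """Per-token majority vote over the POS tags proposed by several taggers.
    `tool_tags` maps a tool name to its tag sequence (all the same length)."""
    combined = []
    for tags in zip(*tool_tags.values()):
        winner, _ = Counter(tags).most_common(1)[0]
        combined.append(winner)
    return combined

# Hypothetical outputs for a three-token sentence.
tags = {
    "tagger_a": ["DET", "NOUN", "VERB"],
    "tagger_b": ["DET", "NOUN", "NOUN"],  # one error
    "tagger_c": ["DET", "ADJ",  "VERB"],  # a different error
}
print(combine_pos(tags))  # -> ['DET', 'NOUN', 'VERB']
```

Because the taggers err on different tokens, the combined sequence is correct even though no single tool is, which is the intuition behind H.5.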
Second, as mentioned above, the elements involved in linguistic annotation were formalised in a set (or network) of ontologies (OntoTag’s linguistic ontologies). • On the one hand, OntoTag’s network of ontologies consists of − The Linguistic Unit Ontology (LUO), which includes a mostly hierarchical formalisation of the different types of linguistic elements (i.e., units) identifiable in a written text; − The Linguistic Attribute Ontology (LAO), which also includes a mostly hierarchical formalisation of the different types of features that characterise the linguistic units included in the LUO; − The Linguistic Value Ontology (LVO), which includes the corresponding formalisation of the different values that the attributes in the LAO can take; − The OIO (OntoTag’s Integration Ontology), which (a) includes the knowledge required to link, combine and unite the knowledge represented in the LUO, the LAO and the LVO; and (b) can be viewed as a knowledge representation ontology that describes the most elementary vocabulary used in the area of annotation.
• On the other hand, OntoTag’s ontologies incorporate the knowledge included in the different standards and recommendations for linguistic annotation released so far, such as those developed within the EAGLES and the SIMPLE European projects or by the ISO/TC 37 committee: − As far as morphosyntactic annotations are concerned, OntoTag’s ontologies formalise the terms in the EAGLES (1996a) recommendations and their corresponding terms within the ISO Morphosyntactic Annotation Framework (ISO/MAF, 2008) standard; − As for syntactic annotations, OntoTag’s ontologies incorporate the terms in the EAGLES (1996b) recommendations and their corresponding terms within the ISO Syntactic Annotation Framework (ISO/SynAF, 2010) standard draft; − Regarding semantic annotations, OntoTag’s ontologies generalise and extend the recommendations in EAGLES (1996a; 1996b) and, since no stable standards or standard drafts have been released for semantic annotation by ISO/TC 37 yet, they incorporate the terms in SIMPLE (2000) instead; − The terms coming from all these recommendations and standards were supplemented by those within the ISO Data Category Registry (ISO/DCR, 2008) and the ISO Linguistic Annotation Framework (ISO/LAF, 2009) standard draft when developing OntoTag’s ontologies. Third, we showed that the combination of the results of tools annotating at the same level can yield better results (both in precision and in recall) than each tool separately. In particular, 1. OntoTagger clearly outperformed two of the tools integrated into its configuration, namely DataLexica and FDG, in all the combination sub-phases in which they overlapped (i.e. POS tagging, lemma annotation and morphological feature annotation). As far as the remaining tool is concerned, i.e. LACELL’s tagger, it was also outperformed by OntoTagger in POS tagging and lemma annotation, and it did not behave better than OntoTagger in the morphological feature annotation layer. 2.
As an immediate result, this implies that (a) this type of combination architecture configurations can be applied in order to improve significantly the accuracy of linguistic annotations; and (b) concerning the morphosyntactic level, this can be regarded as a way of constructing more robust and more accurate POS tagging systems. Fourth, Semantic Web annotations are usually performed either by humans or by machine learning systems. Both leave much to be desired: the former, with respect to their annotation rate; the latter, with respect to their (average) precision and recall. In this work, we showed how linguistic tools can be wrapped in order to annotate Semantic Web pages automatically using ontologies. This entails their fast, robust and accurate semantic annotation. By way of example, as mentioned in Sub-goal 5.5, we developed a particular OntoTagger module for the recognition, classification and labelling of named entities, according to the MUC and ACE tagsets (Chinchor, 1997; Doddington et al., 2004). These tagsets were further specified by means of a domain ontology, namely the Cinema Named Entities Ontology (CNEO). This module was applied to the automatic annotation of ten different web pages containing cinema reviews (that is, around 5000 words). In addition, the named entities annotated with this module were also labelled as instances (or individuals) of the classes included in the CNEO and were then used to populate this domain ontology. • The statistical results obtained from the evaluation of this particular module of OntoTagger can be summarised as follows. On the one hand, as far as recall (R) is concerned, (R.1) the lowest value was 76.40% (for file 7); (R.2) the highest value was 97.50% (for file 3); and (R.3) the average value was 88.73%.
On the other hand, as far as the precision rate (P) is concerned, (P.1) its minimum was 93.75% (for file 4); (P.2) its maximum was 100% (for files 1, 5, 7, 8, 9, and 10); and (P.3) its average value was 98.99%. • These results, which apply to the tasks of named entity annotation and ontology population, are extraordinarily good for both of them. They can be explained on the basis of the high accuracy of the annotations provided by OntoTagger at the lower levels (mainly at the morphosyntactic level). However, they should be conveniently qualified, since they might be too domain- and/or language-dependent. Further experimentation should assess how our approach works in a different domain or a different language, such as French, English, or German. • In any case, the results of this application of Human Language Technologies to Ontology Population (and, accordingly, to Ontological Engineering) seem very promising and encouraging for these two areas to collaborate and complement each other in the area of semantic annotation. Fifth, as shown in the State of the Art of this work, there are different approaches and models for the semantic annotation of texts, but all of them focus on a particular view of the semantic level. Clearly, all these approaches and models should be integrated in order to yield a coherent and joint semantic annotation level. OntoTag shows how (i) these semantic annotation layers can be integrated together; and (ii) how they can be integrated with the annotations associated with other annotation levels. Sixth, we identified some recommendations, best practices and lessons learned for annotation standardisation, interoperation and merging. They show how standardisation (via ontologies, in this case) enables the combination, integration and interoperation of different linguistic tools and their annotations into a multilayered (or multileveled) linguistic annotation, which is one of the hot topics in the area of Linguistic Annotation.
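Per-file precision and recall figures like those reported for the named-entity module follow from simple counts of correct, spurious and missed annotations. A minimal sketch with illustrative counts (not the thesis's per-file data):

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Precision and recall for one annotated file.
    true_pos: entities correctly annotated; false_pos: spurious annotations;
    false_neg: entities present in the text but missed."""
    precision = true_pos / (true_pos + false_pos) if true_pos + false_pos else 0.0
    recall = true_pos / (true_pos + false_neg) if true_pos + false_neg else 0.0
    return precision, recall

# Illustrative file: 45 entities correctly annotated, 1 spurious, 5 missed.
p, r = precision_recall(45, 1, 5)
print(f"P = {p:.2%}, R = {r:.2%}")  # -> P = 97.83%, R = 90.00%
```

A file with no spurious annotations has 100% precision regardless of how many entities it misses, which is why precision can reach 100% on files whose recall does not.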
And last but not least, OntoTag’s annotation scheme and OntoTagger’s annotation schemas show a way to formalise and annotate, coherently and uniformly, the different units and features associated with the different levels and layers of linguistic annotation. This is a great scientific step towards the global standardisation of this area, which is the aim of ISO/TC 37 (in particular, Subcommittee 4, which deals with the standardisation of linguistic annotations and resources).
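The workflow of the named-entity module evaluated above (recognise and classify entities against an ontology-specified tagset, then add them as instances of the corresponding classes to populate the ontology) can be sketched minimally as follows. This is an illustrative reconstruction, not OntoTagger's actual code: the class names, the toy gazetteer, the naive substring matching and the sample review are all assumptions.

```python
# Minimal sketch of ontology-driven named-entity annotation and
# ontology population (illustrative only; class names, gazetteer
# entries and matching strategy are assumptions, not OntoTagger's
# actual resources).

# A toy "domain ontology": entity classes and their known instances.
ontology = {
    "Film": set(),
    "Person": set(),
}

# A toy gazetteer mapping surface forms to ontology classes.
gazetteer = {
    "Vertigo": "Film",
    "Alfred Hitchcock": "Person",
}

def annotate(text):
    """Label each recognised entity with its ontology class and add it
    as an instance of that class (the ontology-population step)."""
    annotations = []
    for surface, cls in gazetteer.items():
        if surface in text:                 # naive substring matching
            annotations.append((surface, cls))
            ontology[cls].add(surface)      # populate the ontology
    return annotations

review = "Vertigo, directed by Alfred Hitchcock, remains a classic."
print(annotate(review))
# → [('Vertigo', 'Film'), ('Alfred Hitchcock', 'Person')]
```

After annotation, both entities are simultaneously labelled in the text and registered as individuals of their ontology classes, which is the dual role the module plays above.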


Resumo:

La Tesis Doctoral nace con una intensa vocación pedagógica. La hipótesis de trabajo se establece en torno a una cuestión de interés personal, un tema sobre el que se vertebran, desde el comienzo del doctorado, los diferentes cursos y trabajos de investigación: LA CASA DOMÍNGUEZ como paradigma de la dialéctica en la obra de Alejandro de la Sota. La clasificación de la realidad en categorías antagónicas determina un orden conceptual polarizado, una red de filiaciones excluyentes sobre las que Sota construye su personal protocolo operativo: la arquitectura intelectual o popular, experimental o tradicional, universal o local, ligera o pesada, elevada o enterrada, etc. Se propone el abordaje de una cuestión latente en el conjunto de la obra ‘sotiana’, desde la disección y el análisis de una de sus obras más pequeñas: la casa Domínguez. Se trata de una organización sin precedentes, que eleva la estrategia dialéctica al paroxismo: la vivienda se separa en dos estratos independientes, la zona de día, elevada, y la zona de noche, enterrada; cada uno de los estratos establece su propio orden geométrico y constructivo, su propio lenguaje y carácter, su propia identidad e incluso su propio presupuesto. Las relaciones entre interior y exterior se especializan en función de la actividad o el reposo, estableciéndose una compleja red de relaciones, algunas evidentes y otras celosamente veladas, entre los diferentes niveles. La estancia destinada a las tareas activas se proyecta como un objeto de armazón ligero y piel fría; la precisa geometría del cubo delimita la estancia vigilante sobre el paisaje conquistado. 
La ladera habitada se destina al reposo y se configura como una topografía verde bajo la que se desarrollan los dormitorios en torno a patios, grietas y lucernarios, generando un paisaje propio: la construcción del objeto frente a la construcción del lugar. La casa Domínguez constituye uno de los proyectos menos estudiados, y por lo tanto menos celebrados, de la obra de Don Alejandro. Las publicaciones sucesivas reproducen la documentación gráfica junto a la memoria (epopeya) que el propio Sota compone para la publicación del proyecto. Apenas un par de breves textos críticos de Miguel Ángel Baldellou y, recientemente, de Moisés Puente, abordan la vivienda como tema monográfico. Sin embargo, la producción de proyecto y obra ocupó a De la Sota un periodo no inferior a diez años, con casi cien planos dibujados para dos versiones de proyecto, la primera de ellas, inédita. El empeño por determinar hasta el último detalle de la ‘pequeña’ obra conduce a Sota a controlar incluso el mobiliario interior, como hiciera en otras obras ‘importantes’ como el Gobierno Civil de Tarragona, el colegio mayor César Carlos o el edificio de Correos y Telecomunicaciones de León. La complicidad del cliente, mantenida durante casi cuarenta años, habilita el despliegue de una importante colección de recursos y herramientas de proyecto. 
La elección de la casa Domínguez como tema central de la tesis persigue por lo tanto un triple objetivo: en primer lugar, el abordaje del proyecto como paradigma de la dialéctica ‘sotiana’, analizando la coherencia entre el discurso de carácter heroico y la obra finalmente construida; en segundo lugar, la investigación rigurosa, de corte científico, desde la disección y progresivo desmontaje del objeto arquitectónico; y por último, la reflexión sobre los temas y dispositivos de proyecto que codifican la identificación entre la acción de construir y el hecho de habitar, registrando los aciertos y valorando con actitud crítica aquellos elementos poco coherentes con el orden interno de la propuesta. This doctoral thesis is the fruit of a profound pedagogical vocation. The central hypothesis was inspired by a question of great personal interest, and this interest has, since the very beginning of the doctorate, been the driving force behind all subsequent lines of research and investigation. The “Casa Domínguez” represents a paradigm of the dialectics found in the work of Alejandro de la Sota. The perception of reality as antagonistic categories determines a polarized conceptual order, a network of mutually excluding associations upon which Sota builds his own personal operative protocol: intellectual or popular architecture, experimental or traditional, universal or local, heavy or light, above or below ground, etc. Through the analysis and dissection of the “Casa Domínguez”, one of Sota’s smallest projects, an attempt is made to approach the underlying question posed in “Sotian” work as a whole. This is about organization without precedent, raising the strategic dialectics to levels of paroxysm. The house is divided into two separate levels, the day-time level above ground, and the lower night-time level beneath the surface of the ground. 
Each level has its own geometrical and structural order, its own language and character, its own identity and even its own construction budget. The interaction between the two areas is centered on the two functions of rest and activity, and this in turn establishes a complex relationship network between both, which is sometimes self-evident, but at other times jealously guarded. The living area designed for daily activity is presented as an object of light structure and delicate skin; the precise geometry of the cube delimiting the ever watchful living area’s domain over the land it has conquered. A green topography is created on the slope below which lies an area adapted for rest and relaxation. The bedrooms, built around patios, skylights and light crevices, generate an entirely independent environment: the construction of an object as opposed to the creation of a landscape. The “Casa Domínguez” project has been subject to much less scrutiny and examination than Don Alejandro’s other works, and is consequently less well-known. A succession of journals have printed the blueprint document together with a poetic description (epopee), composed by Sota himself, to mark the project’s publication. There have, however, scarcely been more than two brief critical appraisals, those by Miguel Ángel Baldellou and more recently by Moisés Puente, that have regarded the project as a monographic work. The project and works nevertheless occupied no less than ten years of De La Sota’s life, with over a hundred draft drawings for two separate versions of the project, the first of which remains unpublished. The sheer determination to design this “small” work in the most meticulous detail drove Sota to manage and select its interior furniture, as indeed he had previously done with more “important” works like the Tarragona Civil Government, César Carlos College, or the Post Office telecommunications building in León. 
Client collaboration, maintained over a period of almost forty years, has facilitated an impressive array of the project’s tools and resources. The choice of “Casa Domínguez” as the central subject matter of this thesis, was made in pursuance of a triple objective: firstly, to approach the project as a paradigm of the “Sotian” dialectic, the analysis of the discourse between the heroic character and the finished building; secondly, a rigorous scientific investigation, and progressive disassembling and dissecting of the architectonic object; and finally, a reflection on aspects of the project and its technology which codify the identification between the action of construction and the reality of living, thus marking its achievements, whilst at the same time subjecting incoherent elements of the proposal’s established order to a critical evaluation.


Resumo:

Se presenta la tesis doctoral, titulada ‘TRANS Arquitectura. Imaginación, Invención e individuación del objeto técnico arquitectónico. Transferencia tecnológica desde la Industria del Transporte al Proyecto de Arquitectura [1900-1973]’, que aborda la relación entre la Arquitectura y el Objeto Técnico durante la Modernidad. La temática de la tesis gravita en torno a la cultura técnica, la cultura material y la historia de la Tecnología del siglo XX. Hipótesis Se sostiene aquí la existencia de unas arquitecturas que se definen como Objetos Técnicos. Para demostrarlo se estudia si éstas comparten las mismas propiedades ontológicas de los objetos técnicos. Industria y Arquitectura La historia de la Arquitectura Moderna es la historia de la Industria Moderna y sus instalaciones industriales, sus productos y artefactos o sus procedimientos y procesos productivos. Fábricas, talleres, acerías, astilleros, minas, refinerías, laboratorios, automóviles, veleros, aviones, dirigibles, transbordadores, estaciones espaciales, electrodomésticos, ordenadores personales, teléfonos móviles, motores, baterías, turbinas, aparejos, cascos, chasis, carrocerías, fuselajes, composites, materiales sintéticos, la cadena de montaje, la fabricación modular, la cadena de suministros, la ingeniería de procesos, la obsolescencia programada… Todos estos objetos técnicos evolucionan constantemente gracias al inconformismo de la imaginación humana y, como intermediarios que son, cambian nuestra manera de relacionarnos con el mundo. La Arquitectura, al igual que otros objetos técnicos, media entre el hombre y el mundo. Con el objetivo de reducir el ámbito tan vasto de la investigación, éste se ha filtrado a partir de varios parámetros y cualidades de la Industria, estableciendo un marco temporal vinculado con un determinado modo de hacer, basado en la ciencia. 
El inicio del desarrollo industrial basado en el conocimiento científico se da desde la Segunda Revolución Industrial, por consenso en el último tercio del siglo XIX. Este marco centra el foco de la tesis en el proceso de industrialización experimentado por la Arquitectura desde entonces, y durante aproximadamente un siglo, recorriendo la Modernidad durante los 75 primeros años del siglo XX. Durante este tiempo, los arquitectos han realizado transferencias de imágenes, técnicas, procesos y materiales desde la Industria, que ha servido como fuente de conocimiento para la Arquitectura, y ha evolucionado como disciplina. Para poder abordar más razonablemente un periodo tan amplio, se ha elegido el sector industrial del transporte, que históricamente ha sido no sólo fuente de inspiración para los arquitectos, sino también fuente de transferencia tecnológica para la Arquitectura. Conjuntos técnicos como los astilleros, fábricas de automóviles o hangares de aviones, individuos técnicos como barcos, coches o aviones, y elementos técnicos como las estructuras que les dan forma y soporte, son todos ellos objetos técnicos que comparten propiedades con las arquitecturas que aquí se presentan. La puesta en marcha de la cadena móvil de montaje en 1913 se toma instrumentalmente como primer foco temporal desde el que relatar la evolución de numerosos objetos técnicos en la Primera Era de la Máquina; un segundo foco se sitúa en 1958, año de la creación de la Agencia Espacial norteamericana (NASA), que sirve de referencia para situar la Segunda Era de la Máquina. La mayoría de los objetos técnicos arquitectónicos utilizados para probar la hipótesis planteada gravitan en torno a estas fechas, con un rango de más o menos 25 años, con una clara intención de sincronizar el tiempo de la acción y el tiempo del pensamiento. Arquitectura y objeto técnico Los objetos técnicos han estado siempre relacionados con la Arquitectura. 
En el pasado, el mismo técnico que proyectaba y supervisaba una estructura se ocupaba de inventar los ingenios y máquinas para llevarlas a cabo. Los maestros de obra eran verdaderos ‘agentes de transferencia tecnológica’ de la Industria y su conocimiento relacionaba técnicas de fabricación de diferentes objetos técnicos. Brunelleschi inventó varias grúas para construir la cúpula de Santa Maria dei Fiori (ca.1461), seguramente inspirado por la reedición del tratado de Vitruvio, De Architectura (15 A.C.), cuyo último capítulo estaba dedicado a las máquinas de la arquitectura clásica romana, y citaba a inventores como Archimedes. El arquitecto florentino fue el primero en patentar un invento en 1421: una embarcación anfibia que serviría para transportar mármol de Carrara por el río Arno, para su obra en Florencia. [Imágenes: J. Paxton, Crystal Palace, Londres 1851, viga-columna; Robert McCormick, cosechadora de 1831, 2ª patente de 1845.] La Segunda Revolución Industrial nos dejó un primitivo ejemplo moderno de la relación entre la Arquitectura y el objeto técnico. El mayor edificio industrializado hasta la fecha, el Crystal Palace de Londres, obra de Joseph Paxton, fue montado en Londres con motivo de la Gran Exposición sobre la Industria Mundial de 1851, y siempre estará asociado a la cosechadora McCormick, merecedora del Gran Premio del Jurado. De ambos objetos técnicos podrían destacarse características similares, como su origen industrial, y ser el complejo resultado de un ensamblaje simple de elementos técnicos. Desde entonces, el desarrollo tecnológico ha experimentado una aceleración continuada, dando lugar a una creciente especialización y separación del conocimiento sobre las técnicas antes naturalmente unidas. Este proceso se ha dado a expensas del conocimiento integrador y en detrimento de la promiscuidad entre la Industria y la Arquitectura. 
Este es, sin lugar a dudas, un signo consustancial a nuestro tiempo, que provoca un natural interés de los arquitectos y otros tecnólogos por las transferencias, trans e interdisciplinariedades que tratan de restablecer los canales de relación entre los diferentes campos del conocimiento. La emergencia de objetos técnicos como los vehículos modernos a principios del siglo XX (el automóvil, el trasatlántico, el dirigible o el aeroplano) está relacionada directamente con la Arquitectura de la Primera Era de la Máquina. La fascinación de los arquitectos modernos por aquellas nuevas estructuras habitables se ha mantenido durante más de un siglo, con diferente intensidad y prestando atención a unos objetos técnicos u otros, oscilando entre el dominio del valor simbólico de los vehículos como objetos-imágenes, durante el periodo heroico de la Primera Era de la Máquina, y la mirada más inquisitiva durante la Segunda, que perseguía un conocimiento más profundo de la organización de los mismos y del sistema técnico en el que estaban incluidos. La relación homóloga que existe entre arquitecturas y vehículos, por su condición de estructuras habitables, es algo de sobra conocido desde que Le Corbusier utilizara aquellas imágenes de barcos, coches y aviones para ilustrar su manifiesto Vers une architecture, de 1923. Los vehículos modernos han sido los medios con los que transmitir los conceptos que ansiaban transformar las propiedades tradicionales de la Arquitectura, relativas a su factura, su habitabilidad, su duración, su funcionalidad o su estética. Destaca particularmente el caso del automóvil en las décadas de los años 30 y 50, y los vehículos del programa espacial en las décadas de los 60 y 70. El conocimiento y la documentación previa de estos hechos fueron un buen indicio para identificar y confirmar que el sector industrial del transporte era un proveedor especialmente trascendente y fértil de casos de transferencia tecnológica para la Arquitectura. 
La tradición Moderna inaugurada por Le Corbusier en los años 20 ha sido mantenida y defendida por una multitud de arquitectos modernos como Albert Frey, Richard Neutra, Ralph Soriano, Charles Eames o Craig Ellwood, cuyo trabajo, animado por el legado de anteriores tecnólogos como Bucky Fuller o Jean Prouvé, fue fundamental y referencia obligada para la siguiente generación de arquitectos como Cedric Price, Archigram, Norman Foster, Richard Rogers, Renzo Piano, Jean Kaplicky o Richard Horden, entre otros. Todos ellos han contribuido a engrosar el imaginario del objeto técnico, aportando sus obras arquitectónicas. Estos arquitectos, que aparecen repetidamente en el discurrir de la tesis, pertenecen a un mismo linaje y son agrupados según una estructura ‘genealógica’ que se ha denominado ‘Estirpe Técnica’. Unidos por intereses comunes y similares enfoques o actitudes ante el proyecto de arquitectura, entendida como objeto técnico, han operado mediante la práctica de la transferencia tecnológica, sin limitarse a las técnicas compositivas propias de la disciplina arquitectónica. Durante la investigación se ha recopilado una selección de menciones explícitas -hechas por arquitectos- sobre otros objetos técnicos para referirse a la Arquitectura, mostrando las constantes y las variaciones de sus intereses a lo largo del siglo, lo que nos ha llevado a conclusiones como, por ejemplo, que los conjuntos técnicos (fábricas de zepelines, aviones, automóviles o trasatlánticos) eran tomados por los arquitectos de la primera Modernidad como un modelo imaginario, formal y compositivo, mientras que los de la Segunda Era de la Máquina los tomaban como modelo espacial y organizativo para la arquitectura. La mencionada estirpe de tecnólogos incluye líneas de descendencia conocidas, como Eiffel-Suchov-Behrens o Gropius-Mies-Le Corbusier-Lods-Prouvé en la Europa continental, o una rama británica como Loudon-Paxton-Williams-Stirling & Gowan-Smithsons-Price-Archigram-Foster-Rogers-Piano-Kaplicky-Horden. 
También podemos encontrar conexiones intercontinentales como Fuller-Eames-Rudolph-Foster-Rogers, o ramificaciones menos previsibles como Le Ricolais-Kahn-Piano-Kaplicky, o Le Corbusier-Frey-Lacaton & Vassal… Seguramente muchos más merecerían incluirse en esta lista y, de hecho, la tesis asume la imposibilidad de incluirlo todo (por motivos prácticos), aunque contempla la posibilidad de ser ampliada en un futuro. Con lo aquí incluido se pretende mostrar la continuidad en los enfoques, planteamientos y técnicas de proyecto aplicadas, de los que podemos deducir algunas conclusiones, como por ejemplo que, en los periodos inmediatamente posteriores a las dos Guerras Mundiales, aumentó la intensidad de aportaciones de nuevas imágenes de vehículos al imaginario del objeto técnico utilizado por los arquitectos, a través de publicaciones y exposiciones. Hoy, cien años después de que Ford pusiera en marcha la cadena móvil de montaje, aún encontramos viva esta tradición en las palabras de un arquitecto, Richard Horden, cuyo trabajo porta consigo -como la información embebida en los elementos técnicos- toda una cultura técnica de una tradición moderna. Horden representa uno de los exponentes de la que he denominado estirpe de tecnólogos. Es por ello que he querido concluir la tesis con una entrevista, realizada en mayo de 2015 en su estudio de Berkeley Square en Londres (ver Apéndices). Guías Para el desarrollo de la presente tesis se ha tomado como principal obra de referencia otra tesis, titulada El modo de existencia de los objetos técnicos, leída y publicada en 1958 por el filósofo francés Gilbert Simondon [1924-89], dedicada a la ontología del objeto técnico. Esta obra enmarca el enfoque intelectual de la tesis, que entronca con la fenomenología, para movilizar una visión particular de la Arquitectura, a la que sirve como modelo de análisis ontológico para estudiar sus procesos de génesis, invención e individuación. 
Para el desarrollo de éstos, se ha utilizado como complemento bibliográfico otra obra del mismo autor, titulada Imaginación e invención (1965-66). En cuanto a las fuentes historiográficas disciplinares, se ha elegido utilizar a Reyner P. Banham [1922-1988] y a Martin E. Pawley [1938-2008] como guías a través de la arquitectura del siglo XX. Sus crónicas sobre la Primera y Segunda Era de la Máquina y su obra crítica han servido como índices desde los que reconstruir el imaginario del objeto técnico moderno, y del que aprovisionarse de proyectos y obras de Arquitectura como casos de estudio para la tesis. Estas obras han servido además como índices de otra bibliografía, que ha sido complementaria a la de éstos. Objetivos de la Tesis El principal objetivo de la tesis es demostrar la hipótesis: si una obra de arquitectura puede ser considerada un objeto técnico y bajo qué condiciones, construyendo un criterio que permita reconocer cuándo una obra de Arquitectura responde a la definición de objeto técnico. Otro objetivo es demostrar la importancia y potencia de la transferencia tecnológica en el proceso evolutivo de la Arquitectura, y para ello se presentan ejemplos de una metodología de proyecto por ensamblaje, que Martin Pawley denominaba ‘Design by Assembly’. También es un objetivo el de reconstruir un Atlas del Imaginario del objeto técnico moderno, con el fin de conocer mejor las causas, razones y finalidades que llevaron a los arquitectos modernos a perseguir una arquitectura como objeto técnico. Este Atlas permite relacionar panópticamente los distintos objetos técnicos entre sí, revelando la verdadera importancia y trascendencia de aquéllos y de las arquitecturas con las que se relacionan. En él, las arquitecturas vuelven a situarse en el contexto más extenso y complejo de la industria y la historia de la tecnología, al que siempre pertenecieron. 
De este modo, éstas son capaces de desvelar todo el conocimiento -en forma de información- que portan en su propio código ‘genético’, desplegando capítulos completos de cultura tecnológica, tan antigua como la Humanidad y en constante y creciente evolución. Estructura de la tesis Tras una Introducción en la que se presentan algunos de los conceptos principales que se instrumentalizan en la tesis sobre la ontología simondoniana del objeto técnico y sobre la transferencia tecnológica aplicada al proyecto de Arquitectura, el texto principal de la tesis consta de tres partes: la primera se dedica a la Imaginación, una segunda parte a la Invención y una tercera a la Individuación o evolución del objeto técnico. Se termina con una Discusión de la tesis y un apartado de Conclusiones. En la Introducción al objeto técnico, éste se define ontológicamente y se distinguen sus diferentes categorías (conjuntos técnicos, individuos técnicos y elementos técnicos). Se explica el proceso de génesis del objeto técnico y sus fases de imaginación, invención e individuación. También se presentan los conceptos de transducción, tecnicidad y sistema técnico, fundamentales para entender el concepto de transferencia tecnológica que se desarrollará después. La concretización explica el modo particular de individuación y evolución de los objetos técnicos, un proceso por el que las diferentes partes de un objeto técnico se integran y tienden hacia la propia convergencia. Aquí se comprueba la efectividad del concepto simondoniano de Transducción, como señal o información transmitida y transformada, y se relaciona con la Transferencia Tecnológica -un proceso sinergético por el que un sector industrial se beneficia del desarrollo de otro sector- a la que se han referido explícitamente arquitectos e historiadores para explicar sus obras durante la Segunda Era de la Máquina, y que es determinante para el desarrollo de la Industria. 
La transferencia tecnológica sería la transmisión del conjunto de conocimientos sobre la técnica, que incluyen su esfera fáctica, pero también la esfera sensible de la experiencia. En su aplicación a la arquitectura, las transferencias se han clasificado según tres tipos: Eidéticas, Tectónicas y Orgánicas. En la primera parte, dedicada a la Imaginación del objeto técnico arquitectónico, se realiza una reconstrucción ‘arqueológica’ (y parcial) del imaginario del objeto técnico moderno, con la intención de conocer mejor su génesis y la relación con otros objetos técnicos. Las fuentes de ese imaginario se buscan en las instalaciones de la Industria de principios del siglo XX, en particular en las fábricas de vehículos, con la finalidad de comprobar hasta qué punto esos objetos técnicos fueron importantes para imaginar la Arquitectura moderna. La reconstrucción se continúa hasta la Segunda Era de la Máquina, cuando una nueva mirada más inquisitiva y precisa se dirige a otras fábricas, vehículos y componentes, interesándose por sus cualidades materiales y organizativas. Las transferencias Eidéticas operan desde un conocimiento intuitivo y son útiles para transmitir información sobre la esencia de un objeto técnico que sirve de fuente: conceptos abstractos se transmiten por medio de las imágenes-objeto, para producir una transformación en su equivalente arquitectónico. Fruto de la investigación, se ha detectado un grupo de conceptos que han sido objeto de transferencias tecnológicas de naturaleza eidética, provenientes del imaginario del objeto técnico moderno: FABRICADO, HABITABLE, FUNCIONAL, EFICIENTE, OBSOLESCENTE y BELLO. 
En la segunda parte, dedicada a la Invención del objeto técnico arquitectónico, las transferencias también pueden ser Tectónicas, cuando lo que se transmite es una técnica constructiva o estructural, aplicada mediante MATERIALES artificiales (como los metales, los composites como el ferrocemento y el plywood, o las aleaciones como el aluminio) o mediante el ensamblaje de ESTRUCTURAS o partes componentes de otro objeto técnico (como cascos, fuselajes, carrocerías o aparejos), y tiene como resultado la invención de un nuevo objeto técnico arquitectónico. En la tercera parte, dedicada a la individuación, se abordan las transferencias ORGÁNICAS: lo que se transfiere es una técnica organizativa, aplicada a través de PROCEDIMIENTOS que definen la actividad del arquitecto como tecnólogo e inventor de objetos técnicos. Estos procedimientos tienen un efecto transformador en tres instituciones tradicionales para la Arquitectura: la Escuela, el Estudio y la Obra, y sus resultados se resumen en nuevos modelos de organización de la Educación de la Arquitectura, con la aparición de los Talleres de proyectos; nuevos modelos de organización del ejercicio de arquitecto: la Oficina técnica; nuevos modelos de organización del espacio, basados en la organización espacial de la Industria, que da lugar a patrones o Matrices espaciales; un nuevo modelo de organización del proyecto, que utiliza las herramientas gráficas de la industria y el ensamblaje como metodología; y un nuevo modelo de producción arquitectónica, basado en la Industrialización. Tras explicar los conceptos y la génesis del ensamblaje y el montaje, se presenta el proyecto por ensamblaje (Design by Assembly) como un método que promueve la invención arquitectónica. Se demuestra utilizando algunos casos analizados en la tesis, en los que se ha realizado alguna transferencia conceptual, constructiva u organizativa. 
Tras analizar las arquitecturas estudiadas en la tesis, se ha utilizado el método genético propuesto por Simondon para comprender cada evolución particular, reconstruyendo las líneas genealógicas hasta sus ancestros, e identificando una serie de linajes genéticos, que corresponderían con los conjuntos técnicos estudiados en la tesis: el astillero, la fábrica de coches, y la fábrica de aeronaves: los Ancestros de la Modernidad. Los sistemas de organización espacial de estos conjuntos técnicos, están directamente relacionados con el objeto técnico que se produce en él. A partir de ellos se definen una serie de matrices operativas (MILL, SHOP, SHED), que sirven para hacer una taxonomía del objeto técnico arquitectónico. Esto se ejemplifica con algunos proyectos de Norman Foster, Richard Rogers, Renzo Piano, Nicholas Grimshaw, Jean Kaplicky y Richard Horden. Tesis: Comprobación de la hipótesis Simondon definía ontológicamente el Objeto técnico como aquello de lo que existe génesis y que desarrolla una tendencia hacia la solidaridad y unidad. Para que una Arquitectura pueda ser reconocida como un Objeto técnico, se deben dar una serie de condiciones, en las sucesivas fases que intervienen en su modo de existencia: Imaginación. Estas arquitecturas remiten a un imaginario protagonizado por imágenes-objeto de otros objetos técnicos (conjuntos técnicos, individuos técnicos y elementos técnicos). Esas imágenes-objeto vehiculizan una transferencia eidética de los objetos técnicos que simbolizan. Invención. Estas arquitecturas son el resultado de transferencias tectónicas, que se producen durante el proceso de proyecto, mediante el ensamblaje de materiales, componentes o procedimientos, utilizados en la industria para la producción de otros objetos técnicos. Individuación. 
Estas arquitecturas evolucionan y se individualizan por concretización, un proceso por el que los objetos técnicos se organizan para seguir su tendencia hacia la integración de sus partes, con el fin de alcanzar la convergencia de funciones en una única estructura. Esta integración tiende hacia la naturalización del objeto técnico, mediante la inclusión simbiótica de sus medios naturales asociados. En este caso, veremos cómo se han producido transferencias orgánicas, o lo que es lo mismo, cómo los objetos técnicos -en el nivel de los conjuntos técnicos- se han tomado como modelo de organización por la arquitectura. Tras comprobar que de ellas existe una génesis, que evoluciona por las fases de imaginación, invención y concretización, se analiza su imaginario, su materialidad, sus estructuras y su organización, con el fin de detectar patrones y principios organizativos comunes a otros objetos técnicos. Interés de la tesis Desde el comienzo del nuevo siglo, diversos autores han demostrado un renovado interés por definir qué es el proyecto, qué lo constituye y para qué sirve. Las aproximaciones al tema provienen de la filosofía analítica (Galle, 2008) o de la filosofía de la tecnología (Verbeek, 2005; Vermaas, 2009) y a menudo versan sobre la relación entre el diseño y la cultura material (Dorschel 2003, Boradkar 2010 o Preston 2012). Es importante indicar el reciente y también creciente interés suscitado por la obra del filósofo francés Gilbert Simondon [1924-1989], reconocida por su importante contribución a la filosofía de la técnica y la fenomenología, y por su influencia en el pensamiento de filósofos como Gilles Deleuze, autor presente en multitud de tesis doctorales e investigaciones teóricas llevadas a cabo en las principales escuelas de Arquitectura de todo el mundo desde los años 90 hasta el presente. La reedición y traducción de la obra de Simondon (ing. 1980, esp. 
2008) ha recibido la atención de filósofos actuales como Paolo Virno, Bruno Latour o Bernard Stiegler, que siguen recurriendo a su estudio y análisis para avanzar en su pensamiento, estando por tanto presente en el debate contemporáneo sobre la técnica. Tras su reciente traducción al español, el pensamiento de Simondon ha despertado un gran interés en América Latina, como demuestra la organización de varios congresos y simposios, así como la proliferación de publicaciones en torno a su obra y pensamiento. Las futuras traducciones del resto de sus principales obras asegurarán una introducción cada vez mayor en la comunidad académica. Se ha procurado presentar una mirada alternativa de la Historia de la Arquitectura Moderna, utilizando como guía a un cronista como Reyner Banham. La Era de la Máquina se ha cruzado con la Mecanología y el “vitalismo técnico” de Simondon, obteniendo como resultado una interpretación fresca, renovada y optimista de algunas de las más importantes obras de Arquitectura del siglo XX, que seguro contribuirán al desarrollo de la del siglo XXI, inmerso ya en el cambio de paradigma hacia la sostenibilidad y la ecología. ABSTRACT 'TRANS architecture. Imagination, invention and technical individuation of the architectural technical object. Technology transfer from the Transport Industry to Architectural Design [1900-1973]' is a thesis dealing with the relationship between Architecture and the Technical Object during Modernity. The theme of the thesis revolves around technical culture, material culture and the history of twentieth-century technology. Hypothesis Held here is the existence of certain architectures defined as technical objects. A study has been developed to prove whether those architectures share the ontological properties of a technical object. Industry and Architecture The history of Modern Architecture is also the history of modern industry and its facilities, its products and devices, its procedures and production processes. 
Factories, workshops, steel mills, shipyards, mines, refineries, laboratories, cars, yachts, airplanes, airships, shuttles, space stations, home appliances, personal computers, mobile phones, motors, batteries, turbines, rigs, hulls, chassis, bodies, fuselages, composites and synthetic materials, the assembly line, modular manufacturing, the supply chain, process engineering, planned obsolescence... All these technical objects are constantly evolving thanks to the restlessness of the human imagination and, as our intermediaries, keep changing our way of relating to and being in the world. Architecture, like other technical objects, mediates between man and the world. In order to frame the vast field of the research, it has been filtered according to various parameters and qualities of Industry, also establishing a time frame related to a particular, science-based way of making. The start of an industrial development based on scientific knowledge is set at the Second Industrial Revolution (by consensus, the last third of the nineteenth century). This frame puts the focus of the thesis on the process of industrialization experienced by Architecture over at least a century, touring Modernity during the first 75 years of the twentieth century. During this time, architects have made transfers of images, techniques, processes and materials from Industry, which has served as a source of knowledge and thus allowed Architecture to evolve as a discipline. To reasonably address the enormous scope of the thesis, the industrial sector of transportation has been chosen. It is not only a historical source of inspiration for architects, but also a traditional source of technology transfer for Modern Architecture.
Technical sets such as shipyards, automobile factories or aircraft hangars, technical individuals such as boats, cars or planes, and technical elements like the structures shaping and supporting them, are all technical objects which share properties with the architectures presented here. The launch of the moving assembly line in 1913 is instrumentally taken as a first temporal focus, from which to describe the evolution of many technical objects in the First Machine Age; a second focus can be found in 1958, the year of the creation of the National Aeronautics and Space Administration (NASA), serving as a reference for the Second Machine Age. Most of the architectural technical objects used to test the hypothesis gravitate around this second focus, in a range of plus or minus 25 years, with a clear intention to synchronize the time of action and the time of thought.

Architecture and Technical Object. Technical objects have always been related to Architecture. In the past, the same technician who planned and oversaw a building structure invented the devices and machines to carry it out. The foremen were the true 'technology transfer agents' from Industry. Their knowledge naturally related the different manufacturing techniques used to make diverse technical objects. Brunelleschi invented various cranes to build the dome of Santa Maria del Fiore in Florence (ca. 1461). Probably inspired by the re-edition of Vitruvius' treatise De Architectura (15 BC), whose last chapter was dedicated to the machines of classical Roman architecture and cited inventors such as Archimedes, the Florentine architect was the first to patent an invention, in 1421: an amphibious craft serving as a means of transportation for Carrara marble along the Arno river.
At the dawn of the Second Industrial Revolution, whose development was based on scientific knowledge, we find a primitive modern example of the relationship between Architecture and a Technical Object: the Crystal Palace, built in London for the Great Exhibition of 1851 and designed by Joseph Paxton, was the largest industrialized building to date, and it will always be associated with the McCormick Reaper, worthy of the Grand Jury's Prize. Both technical objects share characteristics such as their industrial origin and their being the complex result of a simple assembly of technical elements. Since then, technological development has experienced a continued acceleration, resulting in an increasing specialization and separation of knowledge about techniques which were naturally attached in the past. This process has happened at the expense of an integrative knowledge and against promiscuity between Industry and Architecture. This is, undoubtedly, an inherent sign of our time, which explains the natural interest of architects and other technicians in transfers, trans-disciplinarity and inter-disciplinarity, as a reaction seeking to re-establish channels of relationship between these different fields of knowledge. The emergence of technical objects such as modern vehicles in the early twentieth century (the car, the ocean liner, the airship or the airplane) is directly related to the Architecture of the First Machine Age. Modern architects' fascination with those new 'inhabitable' structures has been maintained for over a century, with varying intensity and attention to one technical object or another, ranging from the dominance of the symbolic value of vehicles as object-images during the heroic period of the First Machine Age, to the more inquisitive gaze characterizing the Second Machine Age, which sought a deeper understanding of the organization of such objects and of the technical system to which they belonged.
The periods immediately following both World Wars showed a concentrated effort to bring new images of vehicles into the imaginary of architects, by means of publications and exhibitions. The homologous relationship between architectures and vehicles, in their capacity as inhabitable structures, has been well known since Le Corbusier used images of cars, boats and airplanes to illustrate his 1923 manifesto Towards an Architecture. Modern vehicles have been the means by which to convey concepts meant to transform the traditional attributes of Architecture: those relating to its manufacture, habitability, duration, functionality or aesthetics. The automobile stands out during the 1930s to 1950s, and the new vehicles of the Space Program during the 1960s and 1970s. Prior knowledge and documentation of these events were a good indication for identifying the industrial sector of transportation as one of special importance and as a fertile provider of technology transfer cases for Architecture. The modern tradition, inaugurated by Le Corbusier in the 1920s, has been maintained and defended by a host of modern architects like Albert Frey, Richard Neutra, Raphael Soriano, Charles Eames and Craig Ellwood, whose work, inspired by the legacy of previous technologists such as Bucky Fuller or Jean Prouvé, was fundamental and a mandatory reference for the next generation of architects like Cedric Price, Archigram, Norman Foster, Richard Rogers, Renzo Piano, Jan Kaplicky and Richard Horden, among others. They have all contributed to enlarging the imaginary of the technical object, adding their architectural works to it. Throughout the thesis we repeatedly find a number of architects, who have been grouped according to a 'genealogical' structure, which has been called 'Technical Lineage'.
Gathered by common interests and similar views of or attitudes to architectural design, understood as a technical object, they have operated through the practice of technology transfer, without limiting themselves to the specific compositional techniques of the architectural discipline. During the investigation, a selection of explicit references made by those architects to other technical objects in connection with their Architecture has been compiled, showing constants and variations in their interests throughout the century. This has led to conclusions such as that technical sets (zeppelin factories, airship hangars, car factories and shipyards) were taken by the architects of the first Modernity as their main formal, compositional and imaginary models, while those of the Second Machine Age took them as spatial and organizational models for their architecture. The above-mentioned lineage of technologists includes well-known 'seed lines' such as Eiffel-Shukhov-Behrens and Gropius-Mies-Le Corbusier-Lods-Prouvé in continental Europe; British branches such as Loudon-Paxton-Williams-Stirling-Gowan-Smithsons-Price-Archigram-Foster-Rogers-Piano-Kaplicky-Horden; intercontinental connections such as Fuller-Eames-Rudolph-Foster-Rogers; and other less predictable ramifications such as Le Ricolais-Kahn-Piano-Kaplicky, or Le Corbusier-Frey-Lacaton & Vassal... Many more would surely deserve to be included in this list, and indeed, the thesis assumes the impossibility of including them all (for practical reasons) and even contemplates possible future extensions. The material included herein demonstrates the continuity in approaches, statements and applied architectural design techniques, from which some conclusions can be drawn.
Today, one hundred years after Ford launched the moving assembly line, we still find this tradition alive in the words of the architect Richard Horden, whose work carries with it, like the information embedded in every technical element, the whole technical culture of a modern tradition. Horden is represented here as one of the exponents of what I have called the lineage of technologists. That is why I wanted to conclude the thesis with an interview with Richard Horden, held in May 2015 in his studio in London's Berkeley Square (see Appendices).

Guides. For the development of this thesis, another thesis, entitled On the Mode of Existence of Technical Objects, is taken as the main reference work. Defended and published in 1958 by the French philosopher Gilbert Simondon [1924-1989], it was dedicated to the ontology of the technical object. This work frames the intellectual approach of the thesis, which connects with phenomenology to mobilize a particular vision of Architecture. It is used as a model of ontological analysis to study the genesis, invention and evolutionary processes of these architectures. To develop these, another work by the same author, titled Imagination and Invention (1965-1966), has been used as a bibliographical complement. As for the disciplinary historical sources, Reyner P. Banham [1922-1988] and Martin E. Pawley [1938-2008] have been chosen as guides through the modern Architecture of the twentieth century. Their chronicles of the First and Second Machine Age and their critical works have served as an index from which to reconstruct the imaginary of the modern technical object in the Machine Age, and to gather projects and works of architecture used as case studies for the thesis. These works have also been used as triggers for other literature, which has been complementary to the former.
Objectives of the Thesis. The main objective of the thesis is to test its hypothesis: whether a work of architecture can be considered a technical object, and under what conditions, thereby building a criterion for recognizing when a work of architecture meets the definition of a technical object. Another aim is to demonstrate the importance and power of technology transfer in the evolutionary process of Architecture; to this end, some examples of a methodology for architectural design that Martin Pawley called 'Design by Assembly' are presented. It is also an objective to reconstruct an Atlas of the imaginary of the modern technical object, in order to better understand the causes, reasons and purposes that led modern architects to pursue architecture as a technical object. This Atlas allows the various technical objects to be related panoptically, revealing the true importance and significance of those objects and of the architectures with which they interact. Architectures are thus set back into the larger and more complex context of industry and the history of technology, to which they have always belonged. Thus, they are able to reveal all the knowledge (in the shape of information) carried in their own 'genetic' code, displaying full chapters of a technological culture as old as mankind, constantly growing and evolving.

Thesis: Proving the Hypothesis. Simondon ontologically defined the technical object as 'that of which a genesis exists' and that develops 'a tendency towards solidarity and unity'. For an architecture to be recognized as a technical object, a number of conditions should be met in the successive phases involved in its mode of existence. Imagination: these architectures refer to an imaginary featuring object-images of other technical objects (technical sets, technical individuals and technical elements); these images are the means of an eidetic transfer of the technical objects they symbolize. Invention:
These architectures are the result of tectonic transfers, which occur during the architectural design process by assembling materials, components or procedures used in industry for the production of other technical objects. Individuation: these architectures evolve and are individualized by 'concretization', a process leading to the full integration of their parts and aiming at the full convergence of their functions in a single structure. This integration tends towards the naturalization of the technical object, by means of a symbiotic incorporation of its associated milieus. After verifying that they have a genesis, which evolves through the phases of imagination, invention and concretization, their imaginary, materiality, structure and organization are analyzed in order to detect patterns and organizational principles they share with other technical objects.

Structure. The main text of the thesis consists of three parts, preceded by an Introduction to the main concepts exploited in the thesis: the Simondonian ontology of the technical object, and technology transfer applied to Architecture. A first part covers the Imaginary of the modern technical object, a second part is dedicated to Invention, and a third part to the individuation process. The thesis ends with a section for the Discussion and the Conclusions. In the Introduction, the technical object is ontologically defined and its different categories are distinguished. The process of genesis of the technical object and the phases of imagination, invention and individuation are explained. Concepts such as transduction, technicity and technical system are presented, as they are fundamental to understanding the concept of technology transfer introduced later.
Concretization is explained as the particular mode of individuation and evolution of technical objects: a process by which the different parts of a technical object are integrated and begin a tendency towards convergence in a single structure. The first part, dedicated to the Imagination of the architectural technical object, presents a partial 'archaeological' reconstruction of the imaginary of the modern technical object, intended to better understand its genesis and its relationship with other technical objects. The sources of this imaginary are sought in the premises of early twentieth-century Industry, and particularly in the factories of modern vehicles, in order to see to what extent these technical objects were important in imagining modern architecture. The reconstruction continues up to the Second Machine Age, when a new, more inquisitive and precise gaze turns to other factories, other vehicles and other components and materials, inquiring now about their organizational qualities. The second part is devoted to the Invention of the architectural technical object. The effectiveness of the Simondonian concept of transduction (a sign or piece of information that is transmitted and transformed) is checked. It relates to technology transfer, a synergetic process by which one industrial sector benefits from the development of another, to which some architects and historians have explicitly referred in explaining their works during the Machine Age, and which is crucial for the development of industry. Technology transfer would be the transmission of a set of information or knowledge about technique, including not only the factual sphere of technique but also the sensitive sphere of experience. In their application to Architecture, these transfers have been classified into three types: eidetic, tectonic and organic. Eidetic transfers operate from intuitive knowledge and are useful for transmitting information about the essence of the technical object serving as the source.
Abstract concepts are transmitted through the object-images to produce an equivalent transformation in Architecture. A group of concepts that have been the subject of technology transfers of an eidetic nature, originating in the imaginary of the modern technical object, has been detected as a result of the research: FABRICATED, INHABITABLE, FUNCTIONAL, EFFICIENT, OBSOLESCENT and BEAUTIFUL. Transfers can also be tectonic, when what is transferred is a constructive or structural technique, applied through artificial MATERIALS such as metals, composites such as ferrocement or plywood, or alloys such as aluminum; or by means of the assembly of STRUCTURES or parts of other technical objects such as hulls, fuselages, car bodies or rigs, resulting in the invention of a new architectural technical object. In the case of ORGANIC transfers, what is transferred is an organizational technique, applied by means of a set of PROCEDURES defining the activity of the architect as a technologist and inventor of technical objects. These procedures have a transformative effect on three traditional institutions of Architecture: the School, the Atelier and the Work. The results are summarized in new models of organization of architectural education, with the onset of architectural design studios or workshops; new models of organization of the architect's practice, such as the technical office; new models of spatial organization, based on the spatial organization of industry and resulting in spatial patterns or matrices; a new model of organization of the project, which uses graphical tools and industrial protocols such as assembly as a methodology; and a new model of architectural production based on industrialization.
After explaining the concepts and the genesis of assembly and montage, Design by Assembly is presented as a method that promotes architectural invention, and is illustrated through some of the case studies analyzed in the thesis, in which a conceptual, constructive or organizational transfer has been made. After analyzing the architectures studied in the thesis, the genetic method proposed by Simondon was used to understand each particular evolution, reconstructing their genealogical lines back to their ancestors and identifying a series of genetic lineages, which correspond to the technical sets studied in the thesis: the shipyard, the car factory and the aircraft factory; the real ancestors of Modernity. The spatial organization systems of these technical sets are directly related to the technical objects fabricated within them. From that point, a number of operational matrices (MILL, SHOP, SHED) are defined and used to build a taxonomy of the architectural technical object. This is exemplified by projects by architects such as Norman Foster, Richard Rogers, Renzo Piano, Nicholas Grimshaw, Jan Kaplicky and Richard Horden.

Interest of the thesis. Since the beginning of the new century, several authors have shown a renewed interest in defining what a project is, how it is constituted and what it is for. Approaches to the subject are brought from analytic philosophy (Galle, 2008) or from the philosophy of technology (Verbeek, 2005; Vermaas, 2009), and they often speak about the relationship between design and material culture (Dorschel 2003, Boradkar 2010 or Preston 2012).
It is also important to note the recent and growing interest in the work of the French philosopher Gilbert Simondon [1924-1989], mainly known for his important contribution to the philosophy of technology and the phenomenology of the technical object, and for his influence on the thinking of contemporary philosophers such as Paolo Virno, Bruno Latour or Gilles Deleuze, the latter being an author present in many doctoral theses and theoretical research conducted at major architecture schools around the world from the 1990s to the present. The republication and translation of the work of Simondon (English 1980, Spanish 2008) has received attention from current philosophers such as Bernard Stiegler, who continues to draw on its study and analysis to advance his thinking; it is thus present in the contemporary debate on technics. After its recent translation into Spanish, the thought of Simondon has aroused great interest in Latin America, as evidenced by the organization of various conferences and symposia, as well as the proliferation of publications about his work and thought. Future translations of the rest of his major works will ensure its increasing introduction into the academic community. Efforts have been made to present an alternative view of the History of Modern Architecture, using a chronicler such as Reyner P. Banham as a guide. The Machine Age has been crossed with Simondon's mechanology and his 'technical vitalism', resulting in a fresh, renewed and optimistic interpretation of some of the most important works of Architecture of the twentieth century, which will surely contribute to the development of this century's Architecture, already immersed in the paradigm shift towards sustainability and ecology.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Under-deck cable-stayed bridges are very effective structural systems in which the strong contribution of the stay cables under live loading allows for the design of very slender decks for persistent and transient loading scenarios. Their behaviour when subjected to seismic excitation is investigated herein, and a set of design criteria is presented relating to the type and arrangement of bearings, the number and configuration of struts, and the transverse distribution of stay cables. The nonlinear behaviour of these bridges when subjected to both near-field and far-field accelerograms has been thoroughly investigated through the use of incremental dynamic analyses. An intensity measure that reflects the pertinent contributions to response when several vibration modes are activated is proposed and shown to be effective for the analysis of this structural type. The under-deck cable-stay system contributes very positively to reducing the response when the bridges are subjected to very strong seismic excitation. In such scenarios, the reduction in the stiffness of the deck due to crack formation, when prestressed concrete decks are used, mobilises the cable system and enhances the overall performance of the system. Sets of natural accelerograms compliant with the prescriptions of Eurocode 8 were also applied in order to propose a set of design criteria for this bridge type in areas prone to earthquakes. Particular attention is given to outlining the optimal strategies for the deployment of bearings.
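The abstract does not specify the form of the proposed multi-mode intensity measure. As a hedged illustration only, one common way to build an intensity measure that reflects several activated vibration modes is a weighted geometric mean of spectral ordinates, sketched below together with the record scaling used in an incremental dynamic analysis (all function names and numerical values here are hypothetical, not from the paper):

```python
import math

def multimode_im(sa, weights=None):
    """Hypothetical multi-mode intensity measure: a weighted geometric
    mean of spectral ordinates Sa(T_i) at the periods of the modes that
    contribute to the response.  The weights (e.g. mass-participation
    ratios) are normalised internally, uniform if omitted."""
    if weights is None:
        weights = [1.0] * len(sa)
    total = sum(weights)
    return math.prod(s ** (w / total) for s, w in zip(sa, weights))

def ida_scale_factors(unscaled_im, target_levels):
    """Incremental dynamic analysis scales one accelerogram to a ladder
    of intensity levels; each factor multiplies the record so that its
    intensity measure matches the target level."""
    return [level / unscaled_im for level in target_levels]

# Example with made-up spectral ordinates for three modes:
im = multimode_im([0.8, 0.5, 0.3], weights=[0.6, 0.3, 0.1])
factors = ida_scale_factors(im, [0.2, 0.4, 0.8, 1.6])
```

A geometric (rather than arithmetic) mean is often preferred for such measures because spectral ordinates are roughly lognormally distributed, which keeps the scaled IM multiplicative.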

Relevância:

100.00% 100.00%

Publicador:

Resumo:

When we look at the history of electricity and electromagnetism in Spain, we discover that until the 20th century the most important Spanish researchers generally worked outside official institutions or stable research groups [1] [2]. In the 20th century most scientific research was done in stable research institutions and universities, and the most important electromagnetism research centres in Spain are located in the Faculties of Physics of the major universities, the National Scientific Research Council (CSIC) and the School for Telecommunication Engineering created in 1923. However, the greatest impulse for research in the antenna and radiowave propagation field came after 1960, leading to the first national URSI conference in 1980. Since that year, the number of research groups and the relationships among them have grown continuously, and their relationship with industry has also increased. When Spain joined the European research organizations (COST, ERC...) and the European Union in 1985, research support experienced fast growth, as did participation in European research structures. In the antenna design field there are some specializations, although most of the groups have done specific projects in almost all antenna analysis and design fields. Here, we have selected the most important and characteristic area related to each of the research groups and institutions. The easiest way to classify the research work in antennas is the distinction between antenna analysis, design and measurement. Beyond that, the selected frequency bands, the technology, the types of antennas and the related circuits can be good criteria for describing the variety of research work and specialization among groups.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Experimental software engineering includes several processes, the most representative being running experiments, running replications and synthesizing the results of multiple replications. Of these processes, only the first is relatively well established in software engineering. Problems of information management and communication among researchers are among the obstacles to progress in the replication and synthesis processes. Software engineering experimentation has expanded considerably over the last few years. This has brought with it proposals for supporting the experimental process. However, few of these proposals provide integral support, including the replication and synthesis processes; most focus on experiment execution. This paper proposes an infrastructure providing integral support for the experimental research process, specializing in the replication and synthesis of a family of experiments. The research has been divided into stages or phases, whose transition milestones are marked by the attainment of their goals. Each goal exactly matches an artifact or product. Within each stage, we adopt cycles of successive approximations (generate-and-test cycles), where each approximation includes a different viewpoint or input. Each cycle ends with product approval.
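The staged process above is described in prose only; as a rough sketch (all names here are hypothetical, not the paper's API), the generate-and-test cycle of a single stage can be expressed as a loop of successive approximations that ends when the stage's product is approved:

```python
def run_stage(generate, test, approve, max_cycles=10):
    """One stage of the experimental research process: repeat cycles of
    successive approximation until the stage's product is approved.
    `generate` refines the artifact using a new viewpoint or input,
    `test` evaluates the candidate, and `approve` decides whether the
    stage's goal (its artifact) has been attained."""
    artifact = None
    for cycle in range(1, max_cycles + 1):
        artifact = generate(artifact, cycle)  # new approximation
        feedback = test(artifact)
        if approve(feedback):
            return artifact, cycle            # milestone reached
    raise RuntimeError("stage goal not attained within cycle budget")

# Toy usage: each cycle contributes one viewpoint; approve after three.
artifact, cycles = run_stage(
    generate=lambda art, c: (art or []) + [f"viewpoint-{c}"],
    test=len,
    approve=lambda n: n >= 3,
)
```

The point of the sketch is the control flow: the milestone (returning the artifact) is reached only through approval, mirroring the paper's rule that each goal exactly matches an artifact or product.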

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The subject of Operating Systems presents learning difficulties, but little is known about them, since they have not been determined or studied in the literature. Likewise, existing works on the teaching and learning of Operating Systems are limited to proposing different approaches for delivering the subject and, in general, neither evaluate students' learning to verify the effectiveness of the proposed method nor use rigorous research methodologies. Moreover, the teaching of Operating Systems online has been scarcely studied and may involve difficulties additional to those of the face-to-face format, since the online context imposes a series of restrictions on both teacher and student. In this thesis, a formative assessment has been carried out in the Operating Systems course, part of the Degree in Information Technology Engineering at an online university. The initial objective of the assessment was to discover students' difficulties in understanding the course concepts. Later, given the good acceptance of the assessment by the students, the objectives of the work were expanded to explore the effects of the assessment on learning. The formative assessment designed is based on the revised Bloom's taxonomy and its main objectives are: (a) to promote meaningful learning and (b) to make students aware of their learning process. The research methodology used is the qualitative case study, and the sample consists of 9 of the 13 students enrolled in the course. The qualitative data analyzed come from the formative assessment tests taken by the students during the course. The operating systems concepts that proved hardest to understand in the online course studied were interrupts and semaphores. Around these concepts, the specific difficulties and their possible causes have been identified. The difficulties discovered regarding semaphores corroborate existing research in the area of concurrent programming; the remaining identified difficulties had not been determined by the existing literature. As for the effects of the formative assessment on learning, the empirical evidence shows that it prompted students to reflect deeply on the course concepts and on their own learning process. The case study presented can help teachers in the engineering area to create formative assessments in online courses. The thesis therefore makes relevant contributions to the areas of teaching and learning of operating systems, formative assessment, qualitative methodologies and online education.

ABSTRACT Operating Systems is a difficult subject to learn; however, little is known about these difficulties, as they have not been studied or determined in the relevant literature. Existing studies on teaching and learning the subject of operating systems are limited to presenting different approaches for teaching the subject and generally do not evaluate students' learning to verify the effectiveness of the proposed methods, nor do they use rigorous research methodologies. On the other hand, there are very few studies on teaching operating systems online, a format which may inherently present more difficulties than the in-person format, since an online context imposes a series of restrictions on both professors and students, such as not having face-to-face interaction for communications.
This thesis studies a formative assessment in the subject of operating systems, part of the Degree in Information Technology Engineering at an online university. The initial objective of this assessment was to determine the students' difficulties in conceptual comprehension of this subject. Given the students' positive reception of the assessment, the study's objectives were expanded to include an investigation of the effects of the assessment on learning. The designed formative assessment was based on the revised Bloom's taxonomy, with the following main objectives: (a) to promote meaningful learning and (b) to make students aware of their learning process. The research methodology involves a qualitative case study with a sample consisting of 9 of the 13 students registered for this course. The qualitative data analyzed come from the formative assessment tests taken by these students during the course. The most difficult operating systems concepts for students in the online course were interrupts and semaphores. Additionally, the specific difficulties and their possible causes have been identified. The students' comprehension difficulties with semaphores corroborate the existing research in the area of concurrent programming; the other identified difficulties were not discussed in the existing literature. Regarding the effects of the formative assessment on learning, the empirical evidence shows that it causes students to reflect carefully on the subject's concepts as well as on their own learning process. The presented case study can help professors in the area of engineering to create formative assessments for online courses. This thesis therefore makes relevant contributions to the areas of teaching and learning operating systems, formative assessment, qualitative methodologies, and online education.
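Since semaphores surface above as one of the hardest concepts, a minimal illustration may help. This Python sketch (not from the thesis) shows the signalling use of a semaphore, a pattern that concurrent-programming research often reports as confusing precisely because it differs from the mutual-exclusion use of locks:

```python
import threading

# A semaphore initialised to 0 acts as a signal, not a lock:
# the consumer blocks on acquire() until the producer release()s.
ready = threading.Semaphore(0)
data = []
out = []

def producer():
    data.append("item")   # produce first...
    ready.release()       # ...then signal (the classic "V" operation)

def consumer():
    ready.acquire()       # wait for the signal (the classic "P" operation)
    out.append(data[0])   # safe: the item is guaranteed to exist

threads = [threading.Thread(target=consumer),
           threading.Thread(target=producer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Regardless of thread scheduling, out ends up as ["item"].
```

Starting the semaphore at 0 (so that the first acquire blocks) is exactly the point students tend to miss when they only know semaphores as mutexes initialised to 1.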

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Chinese architecture has undergone great changes over a long historical process. The most important milestone is the one that opens the so-called Modern Times, the period in which Western architecture broke into China for the first time and began to exert a very active and significant influence on the features and identity of traditional Chinese architecture, until then the only style or way of building (very different, in conception and appearance, from Western approaches) that had survived without significant deviations, forming a fairly homogeneous millennia-old panorama in the technical and artistic aspects of that architecture's development. For a host of complex reasons, most of the Chinese architecture of the feudal period, that is, of the years before 1849, has disappeared. However, from that date until the Revolution of 1949 (the so-called semi-colonial or semi-feudal period), many buildings do survive; they were better built and better maintained, and among them the Christian churches stand out in importance. Chronologically, these churches represent not only the first irruption of classical Western architecture in China, but also the beginning of a process of modernization of the deeply rooted and largely stagnant vernacular architecture, combining techniques and styles from both traditions to produce original buildings of a singular eclecticism that would characterize much of the architecture of that semi-colonial stage. In general terms, this Modern Times architecture has lately received growing attention, yet the Christian churches of Shaanxi Province have not been the object of specific study, even though their typology is highly representative of buildings of this kind in other inland regions of China.
The research developed in this doctoral thesis addresses that deficiency, opening the door to the continuation of this work, extended to other areas or architectures, and, by extension, to a deeper analysis of the architectural and cultural hybridization between China and the West. On the basis of documentary research, fieldwork and drawing, the thesis presents a clarifying study of the features and roots of traditional Chinese architecture, followed by a historical and typological study of the Christian churches in Shaanxi Province, dwelling on their fundamental characteristics, current situation (use) and state of conservation. It was considered essential to add to the work, as an appendix, an elaborate illustrated conceptual glossary of basic architectural and construction terms in Chinese, English and Spanish. ABSTRACT Chinese architecture has gone through great changes over its long history. The period of greatest change was the so-called Modern Times of China, when Western architecture was introduced into China for the very first time and began to exert a major influence on traditional Chinese architecture. Before that, traditional Chinese architecture, with its own system entirely different from the Western one, was the only architectural style to be found in China. Although, for many historical, conceptual and architectural reasons, a large amount of the ancient architecture built in feudal China was not preserved, many buildings of semi-feudal China were well constructed and conserved. The most important architectural type of semi-feudal China is the Christian church: not only was it the first Western architectural form brought into and developed in China, it also marked the beginning of the modernization of Chinese architecture.
Because of the deep roots of two thousand years of traditional Chinese architecture, the Christian churches built in China during the semi-colonial period combine the style of traditional Chinese architecture with that of the classic Western church, and they are a priceless asset of Chinese architectural history. Recently, more and more attention has been paid to Chinese Modern Times architecture; however, the Christian churches of Shaanxi Province, a province with a unique Christian history but less economic development, had never been researched. The Christian churches of Shaanxi Province reflect the general features of the development of Christian churches in China's ordinary inland regions. This research opens the door to further study of other Christian churches and related buildings, and of Chinese-Western architectural and cultural exchange. On the basis of document research, field survey and mapping, this thesis presents an in-depth study of the features and roots of traditional Chinese architecture, the development of the Christian churches of Shaanxi Province, and their architectural types, examples, characteristics, present situation and conservation status. By comparing the Christian churches of the cities of Shaanxi Province with those of more developed cities, and the Christian churches in China with classic Western churches, the hybrid character of the Christian churches in China is highlighted. The thesis is a foundational study on which further research into the architectural development, characteristics and conservation of the Christian churches in China can build. It was considered essential to add to the work, as an appendix, an elaborate illustrated conceptual glossary of architectural and construction terms in Chinese, English and Spanish.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The work in this doctoral thesis falls within the development of electronically reconfigurable antennas capable of providing competitive performance to the increasingly common applications operating at frequencies above 60 GHz. Specifically, the thesis focuses on the study, design and implementation of reflectarray antennas into which liquid crystal is introduced as the characteristic element with which electronic beam reconfiguration is achieved. From a very general point of view, a liquid crystal can be described as a material whose electric permittivity is variable and controlled by an external excitation, generally a quasi-static (AC) electric field. Liquid crystal reflectarray antennas were chosen as the object of study for several reasons. The first has to do with the advantages that reflectarrays, especially those in a planar configuration, provide over other high-gain antennas such as reflectors or phased arrays. In reflectarrays, feeding through a common primary source (characteristic of reflectors) and the high number of degrees of freedom of the cells that compose them (characteristic of arrays) allow these antennas to provide electrical performance equal to or better than the former, at lower cost and with more compact antenna structures. The second reason lies in the flexibility of liquid crystal to be confined and biased in enclosures of varied geometry, a consequence of its fluidity (a property of liquids). Liquid crystal technology thus allows the reconfigurable element itself in the reflectarray cells to adapt to the planar configuration, so that the liquid crystal itself forms one or more of the characteristic layers of that configuration.
This drastically simplifies the structure and fabrication of this type of antenna, even compared with reconfigurable reflectarrays based on other technologies such as diodes or MEMS. Their cost and development effort are therefore very low, so electrically large reconfigurable reflectarrays can be fabricated at low cost and in high volume. A clear example of a similar structure, and a commercial success, is the liquid crystal display. The third reason lies in the fact that liquid crystal is, to date, one of the few technologies capable of offering beam reconfiguration at frequencies above 60 GHz. In fact, liquid crystal allows reconfiguration over a wide frequency range, from DC to the visible spectrum, including microwaves and THz. Other technologies, such as ferroelectric materials, graphene or CMOS on-chip technology, can also switch the beam at these frequencies. However, CMOS technology is expensive and currently limited to frequencies below 150 GHz, and although ferroelectric materials and graphene can switch at higher frequencies and over a wider range, they face serious difficulties that leave them immature: for ferroelectric materials, the high voltages needed to switch the material make them unattractive, while for graphene the modelling is still under discussion and no experimental results validating its suitability have yet been reported. These three reasons make liquid crystal reflectarrays attractive for a multitude of reconfigurable-beam applications at frequencies above 60 GHz, including high-resolution imaging radar, molecular spectroscopy, radiometers for atmospheric observation, and high-frequency wireless communications (WiGig).
The thesis is structured in three parts. The first describes the most common characteristics of liquid crystals, focusing in detail on the properties offered by the material in the nematic phase. In particular, it studies the dielectric anisotropy (Δε) of uniaxial liquid crystals, the kind used in this thesis, defined as the difference between the parallel permittivity (ε∥) and the perpendicular permittivity (ε⊥): Δε = ε∥ - ε⊥. It also studies the variation of this parameter with frequency, and the most general macroscopic electromagnetic model which, derived from it, describes the liquid crystal at each bias voltage in cells of planar geometry. This model is of the utmost importance to guarantee accuracy in the phase shift provided by the different reconfigurable reflectarray cells described in the next part of the thesis. The second part of the thesis focuses on the design of resonant reflectarray cells based on liquid crystal. These cells were chosen because they are the only ones capable of providing large phase ranges given the small dielectric anisotropy that liquid crystals offer. The aim of this part is therefore to obtain reflectarray cell structures capable of providing good electrical performance at the antenna level, substantially improving on the cells reported in the state of the art, and to develop a general design tool for them. To this end, the electrical performance of different types of resonant liquid crystal elements is studied, from the simplest, which consists of a single resonator and had limited the state of the art until this thesis, to elements consisting of several resonators (multi-resonant), which may be single-layer or multilayer.
As a first step, the design procedure for these structures uses a conventional liquid crystal model that has been used in the state of the art for this type of cell, which treats the liquid crystal as a homogeneous, isotropic material whose permittivity varies between ε∥ and ε⊥. This part of the thesis shows, however, that such modelling is not sufficient to describe the behaviour of the liquid crystal in reflectarray cells generically. More exact analysis and design procedures are proposed, based on a more general model that defines the liquid crystal as an anisotropic, three-dimensionally inhomogeneous material, and a technique is implemented that efficiently optimizes multi-resonant cells to achieve high performance in terms of bandwidth, phase range, losses and sensitivity to the angle of incidence. The errors incurred by the conventional model at the cell level (amplitude and phase) are analysed for several geometries, using measurements of several antenna prototypes that use a real liquid crystal at frequencies above 100 GHz. The measurements were carried out in a periodic environment using a quasi-optical bench designed especially for this purpose. One of these prototypes was optimized at 100 GHz to achieve a relatively large bandwidth (10%), low losses, a phase range of more than 360º, low sensitivity to the angle of incidence, and low influence of the transverse inhomogeneity of the liquid crystal in the cell. These cell-level results clearly surpass those achieved by other elements reported in the literature, so this prototype is used in the last part of the thesis to build several scanning antennas.
Finally, this part presents a strategy for characterizing the macroscopic anisotropy from quasi-optical-bench measurements of the designed reflectarray elements, obtaining results both at the RF frequencies of interest and at AC, and comparing them with those obtained by other methods. The third part of the thesis covers the study, design, fabrication and measurement of reconfigurable liquid crystal antennas in complex configurations. In passive reflectarrays, the antenna design procedure is limited to adjusting, in each cell, the dimensions of the metallizations used for phase control, through well-known optimization processes. In reconfigurable liquid crystal reflectarrays, however, an additional step is needed: properly calculating the control voltages in each cell of the reflectarray to configure the phase required in each of them, and designing the structure and control circuits that deliver to each element its corresponding voltage. Voltage synthesis is therefore as important as, or more important than, the design of the cell geometry, since the voltages are what directly determine the phase. In the state of the art there are several voltage-synthesis strategies based on the experimental characterization of the phase-versus-voltage curve. However, this characterization can only be done at a single angle of incidence and for particular cell dimensions, so the synthesized voltages differ from the appropriate ones, ultimately yielding phase errors of more than 70º. As a result, the antenna-level performance achieved to date has been limited in bandwidth, scan range and side-lobe level.
In this last part of the thesis, a new voltage-synthesis strategy is introduced that can predict by simulation, with high accuracy, the voltages to be applied to each cell, taking into account its angle of incidence, its dimensions, the frequency, and the bias signal defined by its AC frequency and waveform. The strategy is based on modelling each permittivity state of the liquid crystal as an anisotropic substrate with longitudinal (1D) inhomogeneity or, in certain cases, as an equivalent homogeneous tensor. The accuracy of both electromagnetic models is also discussed. To obtain an efficient voltage-calculation tool, an analysis tool based on the Method of Moments in the Spectral Domain (SD-MoM) for anisotropic stratified substrates was also written and implemented, and is used in each iteration of the synthesis procedure to analyse each cell of the antenna. The voltage synthesis is furthermore designed to minimize the effect of amplitude ripple on the radiation pattern, which is characteristic of reflectarrays formed by cells with high losses, and which in itself represents an additional step towards better antenna performance. To compute the radiation patterns used in the synthesis procedure, an element-by-element analysis assuming local periodicity is used, and a method is proposed that models the incident field so as to remove the limitation that local periodicity imposes on the excitation.
Once the appropriate strategy for calculating the voltages to apply to the liquid crystal in each cell was defined, together with the structure for addressing them in the antenna and the control circuits, two different electronically scanned antenna prototypes were designed, fabricated and measured at 100 GHz using the cells presented above. The first is a reflectarray in a single-offset configuration with scanning capability in one plane (elevation or azimuth). Although 2D scanning antennas were previously designed at several millimetre- and submillimetre-wave frequencies, and addressing strategies to achieve that goal were proposed, the prototype was built with one-dimensional addressing to reduce the number of controls and possible fabrication errors, and thus also to validate the design tool. For a medium aperture size (between 30 and 50 rows and columns, that is, a reflectarray with more than 900 elements), the single-offset configuration provides wide scan ranges and gains that can range between 20 and 30 dBi. Specifically, the measured prototype provides a scanned beam over an angular range of 55º, within which the side-lobe level (SLL) remains better than -13 dB over an 8% bandwidth; the maximum gain is 19.4 dBi. These results clearly surpass those achieved by other authors. The second prototype is a dual-reflector antenna that uses the liquid crystal reflectarray as a sub-reflector to scan the beam in one plane (elevation or azimuth). The basic objective of this geometry is to obtain higher gain than the single-offset reflectarray with a more compact structure, though at the expense of a reduced scan range. Specifically, a maximum gain of 35 dBi and a scan range of 12º are obtained.
Together, the voltage-synthesis and cell-design procedures form a complete, accurate and efficient design tool for reconfigurable reflectarray antennas based on liquid crystals. The tool was validated through the design, fabrication and measurement of the prototypes mentioned above at 100 GHz, which achieve something never before reached in research on this type of antenna: competitive performance and excellent prediction of the results. The procedure is general and can therefore be used at any frequency at which the liquid crystal offers dielectric anisotropy, including the THz range. The prototypes developed in this doctoral thesis are also among the first truly scanning antennas at frequencies above 100 GHz. In particular, the dual-reflector scanning antenna is the first electronically reconfigurable antenna above 60 GHz to exceed 25 dBi of gain, and at the same time the first dual-reflector antenna containing a reconfigurable reflectarray as sub-reflector. Finally, certain improvements that must still be made for these antennas to become a fully developed, market-competitive product are proposed. ABSTRACT The work presented in this thesis focuses on the development of electronically reconfigurable antennas able to provide competitive electrical performance for the increasingly common applications operating at frequencies above 60 GHz. Specifically, this thesis presents the study, design and implementation of reflectarray antennas that incorporate liquid crystal (LC) materials to scan or reconfigure the beam electronically.
From a general point of view, a liquid crystal can be defined as a material whose dielectric permittivity is variable and can be controlled with an external excitation, which usually corresponds to a quasi-static electric field (AC). By changing the dielectric permittivity at each cell that makes up the reflectarray, the phase shift across the aperture is controlled, so that a prescribed radiation pattern can be configured. Liquid crystal-based reflectarrays have been chosen for several reasons. The first has to do with the advantages provided by the reflectarray antenna with respect to other high-gain antennas, such as reflectors or phased arrays. The RF feeding in reflectarrays is achieved by using a common primary source (as in reflectors). This arrangement, together with the large number of degrees of freedom provided by the cells that make up the reflectarray (as in arrays), allows these antennas to provide electrical performance similar to or even better than that of other low-profile antennas (reflectors and arrays), but at lower cost and with greater compactness. The second reason is the flexibility of the liquid crystal to be confined in an arbitrary geometry due to its fluidity (a property of liquids). The liquid crystal can therefore adapt to a planar geometry, forming one or more of the typical layers of this configuration. This drastically simplifies both the structure and the manufacture of this type of antenna, even when compared with reconfigurable reflectarrays based on other technologies, such as diodes or MEMS. The cost of developing this type of antenna is therefore very low, which means that electrically large reconfigurable reflectarrays could be manufactured at low cost and in high volume. A paradigmatic example of a similar structure, already commercialized successfully, is the liquid crystal display panel.
The third reason lies in the fact that, at present, liquid crystal is one of the few technologies capable of providing switching capabilities at frequencies above 60 GHz. In fact, liquid crystal allows its permittivity to be switched over a wide range of frequencies, from DC to the visible spectrum, including microwaves and THz. Other technologies, such as ferroelectric materials, graphene or CMOS on-chip technology, also allow the beam to be switched at these frequencies. However, CMOS technology is expensive and currently limited to frequencies below 150 GHz, and although ferroelectric materials and graphene can switch at higher frequencies and over a wider range, they have serious difficulties that leave them immature. Ferroelectric materials require very high switching voltages, making them unattractive, whereas the electromagnetic modelling of graphene is still under discussion, so experimental results from devices based on the latter technology have not yet been reported. These three reasons make LC-based reflectarrays attractive for many applications involving electronically reconfigurable beams at frequencies beyond 60 GHz. Applications such as high-resolution imaging radar, molecular spectroscopy, radiometers for atmospheric observation, and high-frequency wireless communications (WiGig) are just some of them. This thesis is divided into three parts. In the first part, the most common properties of liquid crystal materials are described, especially those exhibited in the nematic phase. The study focuses on the dielectric anisotropy (Δε) of uniaxial liquid crystals, defined as the difference between the parallel (ε∥) and perpendicular (ε⊥) permittivities: Δε = ε∥ - ε⊥.
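The anisotropy definition above can be illustrated numerically. The permittivity values and the voltage interpolation in this sketch are assumptions for illustration, not measured data from the thesis; a real design would solve the variational LC model described in the text rather than interpolate.

```python
# Illustrative sketch with assumed values: the dielectric anisotropy of a
# uniaxial nematic liquid crystal, and a crude monotonic interpolation of
# the effective permittivity between the two extreme bias states.
eps_par = 3.3    # assumed parallel permittivity (eps_parallel)
eps_perp = 2.5   # assumed perpendicular permittivity (eps_perp)

delta_eps = eps_par - eps_perp   # anisotropy: delta_eps = eps_par - eps_perp

def eps_eff(v, v_th=1.0, v_max=15.0):
    """Placeholder interpolation: eps_perp below the threshold voltage,
    eps_par at saturation, linear in between. Hypothetical, not the
    thesis' electromagnetic model."""
    if v <= v_th:
        return eps_perp
    if v >= v_max:
        return eps_par
    t = (v - v_th) / (v_max - v_th)
    return eps_perp + delta_eps * t

print(round(delta_eps, 2))           # 0.8
print(eps_eff(0.0), eps_eff(20.0))   # 2.5 3.3
```

The small value of delta_eps relative to the permittivities themselves is what motivates the resonant cells of the second part: only near resonance does such a small permittivity swing translate into a large phase swing.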
This parameter allows the permittivity of an LC confined in an arbitrary volume at a given bias voltage to be described by solving a variational problem involving both the electrostatic and elastic energies. The frequency dependence of Δε is also described and characterized. Note that an appropriate LC model is very important to ensure sufficient accuracy in the phase shift provided by each cell that makes up the reflectarray, and therefore to achieve good electrical performance at the antenna level. The second part of the thesis focuses on the design of resonant reflectarray cells based on liquid crystal. Resonant cells have been chosen because they are able to provide a sufficient phase range given the typically small dielectric anisotropy of liquid crystals. The aim of this part is thus to investigate several reflectarray cell architectures capable of providing good electrical performance at the antenna level, significantly improving on the cells reported in the literature. A further objective is to develop a general tool to design these cells. To fulfil these objectives, the electrical performance of different types of resonant reflectarray elements is investigated, beginning with the simplest, which is made up of a single resonator and had limited the state of the art. To overcome the electrical limitations of the single-resonator cell, several elements consisting of multiple resonators, either single-layer or multilayer, are considered. In a first step, the design procedure for these structures uses the conventional electromagnetic model found in the literature, which assumes that the liquid crystal behaves as a homogeneous and isotropic material whose permittivity varies between ε∥ and ε⊥.
However, this part of the thesis shows that the conventional model is not sufficient to accurately describe the physical behaviour of the liquid crystal in reflectarray cells. A more accurate analysis and design procedure is therefore proposed and developed, based on a more general model that defines the liquid crystal as an anisotropic, three-dimensionally inhomogeneous material. The design procedure is able to optimize multi-resonant cells efficiently to achieve good electrical performance in terms of bandwidth, phase range, losses and sensitivity to the angle of incidence. The errors made when the conventional model is used (in amplitude and phase) have also been analysed for various cell geometries, using measured results from several antenna prototypes made up of real liquid crystals at frequencies above 100 GHz. The measurements were performed in a periodic environment using a quasi-optical bench designed especially for this purpose. One of these prototypes has been optimized to achieve a relatively large bandwidth (10%) at 100 GHz, low losses, a phase range of more than 360º, low sensitivity to the angle of incidence, and a low influence of the transverse inhomogeneity of the liquid crystal in the cell. The cell-level performance of this prototype improves on that achieved by other elements reported in the literature, so this prototype has been used in the last part of the thesis to build several complete antennas for beam-scanning applications. Finally, this second part of the thesis presents a novel strategy to characterize the macroscopic anisotropy using reflectarray cells; the results at both RF and AC frequencies are compared with those obtained by other methods. The third part of the thesis covers the study, design, manufacture and testing of LC-based reflectarray antennas in complex configurations.
Note that the design procedure for a passive reflectarray antenna consists simply of finding the dimensions of the metallizations of each cell (which are used for phase control) through well-known optimization processes. In the case of reconfigurable reflectarrays based on liquid crystals, however, an additional step must be taken: accurately calculating the control voltages to be applied to each cell to configure the required phase-shift distribution on the surface of the antenna. The structure that addresses the voltages to each cell, and the control circuitry, must also be considered. The voltage synthesis is therefore as important as, or even more important than, the design of the cell geometries (dimensions), since the voltages are directly related to the phase shift. Several voltage-synthesis procedures have been proposed in the state of the art, based on the experimental characterization of the phase/voltage curve. However, this characterization can only be carried out at a single angle of incidence and for particular cell dimensions, so the synthesized voltages differ from those needed, giving rise to phase errors of more than 70°. As a result, the electrical performance of the LC reflectarrays reported in the literature is limited in terms of bandwidth, scan range and side-lobe level. In this last part of the thesis, a new voltage-synthesis procedure has been defined and developed, which allows the required voltage at each cell to be calculated from simulations that take into account the particular dimensions of the cells, their angles of incidence, the frequency, and the AC bias signal (frequency and waveform). The strategy is based on modelling each permittivity state of the liquid crystal as an anisotropic substrate with longitudinal (1D) inhomogeneity or, in certain cases, as an equivalent homogeneous tensor. The accuracy of both electromagnetic models is also discussed.
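The phase-shift distribution that the voltages must realize can be sketched directly from the standard reflectarray design relation. The feed position and element coordinates below are assumed for illustration and do not correspond to the thesis' actual prototypes:

```python
import math

# Standard reflectarray phase-design relation (illustrative geometry):
# the phase required at an element at (x, y) to collimate the beam
# toward (theta_b, phi_b) is
#   phase = k0 * (d - (x*cos(phi_b) + y*sin(phi_b)) * sin(theta_b))
# where d is the path length from the feed phase centre to the element.
f = 100e9                    # 100 GHz, the prototypes' operating frequency
c = 3e8
k0 = 2 * math.pi * f / c     # free-space wavenumber (rad/m)

feed = (0.0, -0.05, 0.12)    # assumed feed phase centre (m), offset feed

def required_phase(x, y, theta_b, phi_b):
    d = math.dist(feed, (x, y, 0.0))  # feed-to-element path length
    phase = k0 * (d - (x * math.cos(phi_b) + y * math.sin(phi_b))
                  * math.sin(theta_b))
    return math.degrees(phase) % 360.0  # wrapped to [0, 360)

# phase needed at two neighbouring elements for a beam steered
# 20 degrees in elevation
p1 = required_phase(0.0000, 0.0000, math.radians(20), math.radians(90))
p2 = required_phase(0.0015, 0.0015, math.radians(20), math.radians(90))
print(round(p1, 1), round(p2, 1))
```

The voltage synthesis then picks, for each cell, the bias voltage whose simulated reflection phase (at that cell's dimensions and angle of incidence) matches this required phase.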
The phase errors made using the proposed voltage synthesis are better than 7º. In order to obtain an efficient tool to analyse and design the reflectarray, an electromagnetic analysis tool based on the Method of Moments in the Spectral Domain (SD-MoM) has also been written and developed for anisotropic stratified media; it is used at each iteration of the voltage-synthesis procedure. The voltage synthesis is also designed to minimize the effect of amplitude ripple on the radiation pattern, which is typical of reflectarrays made up of cells exhibiting high losses, and represents a further advance towards better antenna performance. To calculate the radiation patterns used in the synthesis procedure, an element-by-element analysis is assumed, considering the local-periodicity approach; under this approach, a novel method is proposed that avoids the limitation that local periodicity imposes on the excitation. Once the appropriate strategy to calculate the voltages to be applied to each cell was developed, and once the structure for addressing the voltages to the antenna and the control circuits were designed and manufactured, two complete LC-based reflectarray antennas operating at 100 GHz were designed, manufactured and tested using the previously presented cells. The first prototype consists of a single-offset reflectarray with beam-scanning capability in one plane (elevation or azimuth). Although several LC reflectarray antennas with 2D scanning capability were also designed, and strategies for 2D voltage addressing were proposed, the manufactured prototype addresses the voltages in one dimension in order to reduce the number of controls and fabrication errors, thereby also validating the design tool.
For an average aperture size (between 30 and 50 rows and columns, that is, a reflectarray with more than 900 cells), the single-offset configuration provides an antenna gain of between 20 and 30 dBi and a large scanning range. The prototype tested at 100 GHz exhibits an electronically scanned beam over an angular range of 55° and a bandwidth of 8%, within which the side-lobe level (SLL) remains better than -13 dB. The maximum gain is 19.4 dBi. The electrical performance of the antenna clearly improves on that achieved by other authors in the state of the art. The second prototype corresponds to a dual-reflector antenna with a liquid-crystal-based reflectarray used as a sub-reflector for beam scanning in one plane (azimuth or elevation). The main objective is to obtain a higher gain than that provided by the single-offset configuration, but using a more compact architecture. In this case, a maximum gain of 35 dBi is achieved, although at the expense of reducing the scanning range to 12°, a limitation inherent to this type of structure. As a general statement, the voltage synthesis and the design procedure of the cells jointly make up a complete, accurate and efficient design tool for reconfigurable reflectarray antennas based on liquid crystals. The tool has been validated by testing the previously mentioned prototypes at 100 GHz, which achieve something never reached before for this type of antenna: competitive electrical performance together with an excellent prediction of the results. The design procedure is general and can therefore be used at any frequency at which the liquid crystal exhibits dielectric anisotropy. The two prototypes designed, manufactured and tested in this thesis are also among the first such antennas operating at frequencies above 100 GHz.
In fact, the dual-reflector antenna is the first electronically scanned dual-reflector antenna operating above 60 GHz (the operating frequency is 100 GHz) with a gain greater than 25 dBi, and is in turn the first dual-reflector antenna with a truly reconfigurable sub-reflectarray. Finally, some improvements that should still be investigated to make these antennas commercially competitive are proposed.
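The gain figures quoted above are consistent with the standard aperture relation G = η·4πA/λ². A back-of-the-envelope check (not from the thesis; the 30 × 30 half-wavelength grid and the efficiency value are assumptions chosen to reflect the high losses of liquid-crystal cells):

```python
import numpy as np

c = 3e8
f = 100e9
lam = c / f                        # wavelength at 100 GHz: 3 mm

n = 30                             # assumed 30 x 30 cells, half-wavelength pitch
side = n * lam / 2                 # 45 mm aperture side
area = side**2

d_ideal = 4 * np.pi * area / lam**2   # lossless-aperture directivity (linear)

def gain_dbi(eta):
    """Realized gain in dBi for an assumed aperture efficiency eta."""
    return 10 * np.log10(eta * d_ideal)

ideal = 10 * np.log10(d_ideal)     # lossless bound, about 34.5 dBi
lossy = gain_dbi(0.03)             # a few percent efficiency, about 19 dBi
```

An efficiency of only a few percent reproduces the measured 19.4 dBi of the single-offset prototype, which is plausible given the LC cell losses; the 35 dBi of the dual-reflector prototype corresponds to its larger main aperture.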

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Precarious habitability (PHa) is today the foremost global problem of urbanism, land-use planning and several other disciplines, such as architecture and various branches of engineering, which together structure and drive the worldwide construction sector devoted to human housing in its diversity of functions. At the Habitat II Conference, held in Istanbul in 1996, in view of the excessive quantitative growth of global PHa, the priority of preventing the problem of new precarious settlements was raised: "to alleviate the problems related to spontaneous human settlements through programmes and policies that anticipate unplanned settlements". However, after almost twenty years, there is still no substantive, specific tool that helps the policy makers responsible for managing urban development in low-income cities of developing countries to take decisions that transform, as efficiently as possible, the proliferation of informal settlements into an opportunity for progress and prosperity for their cities. This thesis starts from the conviction, which it seeks to substantiate objectively throughout its documentation, that such a fundamental tool must be sought in the theory of Basic Habitability, as an essential support with which to redirect future processes of spontaneous peri-urban occupation. The purpose of the research is to characterize, and to determine the optimal applicability of, an elementary operational instrument that assists the responsible authorities in making strategic decisions about the best location of settlements which, until this instrument exists and is applied, are considered spontaneous.
Under normal conditions these spontaneous settlements would remain subject to precariousness for years, whereas by means of such an instrument they would abandon their spontaneous genesis and gain access, through elementary planning, to conditions of Basic Habitability. The concrete materialization of this tool would be a synthetic plan of territorial and urban planning guidelines called the Site Election Plan (SEP; Plano de Elección del Sitio, PES). Designed as an Elementary Theoretical Model, its application would preferably be oriented to small cities in developing countries with a weak institutional level, limited economic and technical capacity, and absent or ineffective planning.
Through a research process based on the aforementioned theory of Basic Habitability, the scientific literature on the subject and the experience of recent cases of urban planning through the application of land information systems, a characterization and preliminary applicability of the tool are proposed. After analysing its strengths and weaknesses in detail, and with the participation of a group of independent experts, the work concludes with a new characterization of the tool and the reformulation of the initial hypothesis.
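The site-selection decision support described above is, in essence, a multi-criteria ranking of candidate locations. The following is only an illustrative sketch of that kind of weighted overlay, as used in land-information-system-based planning; the criteria, weights and scores are entirely invented and are not the SEP itself.

```python
# Hypothetical candidate sites with criteria pre-normalised to [0, 1],
# where 1 means most suitable for settlement.
candidate_sites = {
    "site_A": {"slope": 0.9, "flood_risk": 0.8, "water_access": 0.6, "road_access": 0.7},
    "site_B": {"slope": 0.5, "flood_risk": 0.9, "water_access": 0.9, "road_access": 0.4},
    "site_C": {"slope": 0.7, "flood_risk": 0.3, "water_access": 0.8, "road_access": 0.9},
}

# Assumed relative importance of each criterion (weights sum to 1).
weights = {"slope": 0.2, "flood_risk": 0.4, "water_access": 0.25, "road_access": 0.15}

def suitability(scores):
    """Weighted sum of normalised criterion scores for one site."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank candidate sites from most to least suitable.
ranked = sorted(candidate_sites,
                key=lambda s: suitability(candidate_sites[s]),
                reverse=True)
```

In a real application the criterion layers would come from land information systems (topography, flood maps, infrastructure networks), and the weights would encode planning priorities such as the primacy of risk avoidance.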

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Industriales Research Meeting 2016 (IRM16) is an event to showcase the research activities at the School of Industrial Engineering (ETSII) of the Technical University of Madrid (UPM). The main purpose of this event is to present the ongoing research carried out by professors and researchers of the Institutes, Research Centres, Research Groups and Departments of this School, through funded research projects in close collaboration with public and private institutions and companies, some of them listed on the IBEX 35. This book contains the 138 posters presented, covering different branches of engineering such as acoustics, aerospace, bioengineering, chemical, electrical, electronics, automation, energy, environmental, management and industrial organization, laser technology and applications, materials, mathematics and statistics, mechanics, manufacturing, structures, nuclear technology, seismic engineering, vehicles and railways.