12 results for Lower level relaxation
at Universidad Politécnica de Madrid
Abstract:
Most implementations of parallel logic programming rely on complex low-level machinery which is arguably difficult to implement and modify. We explore an alternative approach aimed at taming that complexity by raising core parts of the implementation to the source language level for the particular case of and-parallelism. Therefore, we handle a significant portion of the parallel implementation mechanism at the Prolog level with the help of a comparatively small number of concurrency-related primitives which take care of lower-level tasks such as locking, thread management, stack set management, etc. The approach does not altogether eliminate modifications to the abstract machine, but it does greatly simplify them and it also facilitates experimenting with different alternatives. We show how this approach allows implementing both restricted and unrestricted (i.e., non-fork-join) parallelism. Preliminary experiments show that the amount of performance sacrificed is reasonable, although granularity control is required in some cases. Also, we observe that the availability of unrestricted parallelism contributes to better observed speedups.
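Purely as an illustration of the distinction the abstract draws between restricted (fork-join) and unrestricted and-parallelism, the following sketch (in Python with threads, not the paper's actual Prolog-level primitives, and with invented goal names) contrasts a scheduling discipline that joins every forked goal before continuing with one where the continuation may overlap a goal it does not depend on:

```python
# Hypothetical sketch: fork-join vs. unrestricted scheduling of two "goals".
from concurrent.futures import ThreadPoolExecutor, wait

def goal_a():
    return "a"

def goal_b():
    return "b"

def continuation(a_result):
    # The continuation only needs goal_a's answer.
    return f"cont({a_result})"

with ThreadPoolExecutor(max_workers=4) as pool:
    # Restricted (fork-join): fork both goals, join both, then continue.
    fa, fb = pool.submit(goal_a), pool.submit(goal_b)
    wait([fa, fb])                                   # the join point acts as a barrier
    print(continuation(fa.result()), fb.result())

    # Unrestricted: the continuation starts as soon as goal_a finishes,
    # overlapping with the still-running goal_b.
    fb = pool.submit(goal_b)
    fa = pool.submit(goal_a)
    fcont = pool.submit(lambda: continuation(fa.result()))   # waits on fa only
    print(fcont.result(), fb.result())
```

Under this reading, the better speedups reported for unrestricted parallelism come from removing the barrier when the continuation does not actually depend on every forked goal.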
Abstract:
The modelling of critical infrastructures (CIs) is an important issue that needs to be properly addressed, for several reasons. It is a basic support for making decisions about operation and risk reduction. It might help in understanding high-level states at the system-of-systems layer, which are not readily evident to the organisations that manage the lower-level technical systems. Moreover, it is also indispensable for setting a common reference between operators and authorities for agreeing on the incident scenarios that might affect those infrastructures. So far, critical infrastructures have been modelled ad hoc, on the basis of knowledge and practice derived from less complex systems. As there is no theoretical framework, most of these efforts proceed without clear guides and goals, using informally defined schemas based mostly on boxes and arrows. Different CIs (electricity grid, telecommunications networks, emergency support, etc.) have been modelled using particular schemas that were not directly translatable from one CI to another. If there is a desire to build a science of CIs, it is because there are some observable commonalities that different CIs share. Up until now, however, those commonalities have not been adequately compiled or categorised, so building models of CIs that are rooted in such commonalities was not possible. This report explores the issue of which elements underlie every CI and how those elements can be used to develop a modelling language that will enable CI modelling and, subsequently, analysis of CI interactions, with a special focus on resilience.
Abstract:
Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low-overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation detail and to the architectural implications of the execution model behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.
Abstract:
This article presents a cartographic system to facilitate cooperative manoeuvres among autonomous vehicles in a well-known environment. The main objective is to design an extended cartographic system to help in the navigation of autonomous vehicles. This system has to allow the vehicles not only to access the reference points needed for navigation, but also noticeable information such as the location and type of traffic signals, the proximity to a crossing, the streets en route, etc. To do this, a hierarchical representation of the information has been chosen, where the information is stored in two levels. The lower level contains the files with the Universal Transverse Mercator (UTM) coordinates of the points that define the reference segments to follow. The upper level contains a directed graph with the relational database in which streets, crossings, roundabouts and other points of interest are represented. Using this new system it is possible to know when the vehicle approaches a crossing, what other paths arrive at that crossing, and, should there be other vehicles circulating on those paths and arriving at the crossing, which one has the highest priority. The data obtained from the cartographic system are used by the autonomous vehicles for cooperative manoeuvres.
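As an illustration only (the type and field names below are assumptions, not the schema described in the article), the two-level organisation could be sketched as an upper-level directed graph of places whose edges point at lower-level files of UTM waypoints:

```python
# Hypothetical sketch of a two-level map: upper level = directed graph of
# crossings/roundabouts with attributes; lower level = UTM waypoints per segment.
from dataclasses import dataclass, field

@dataclass
class Segment:                                    # lower level: geometry to follow
    utm_points: list[tuple[float, float]]         # (easting, northing) in metres

@dataclass
class Node:                                       # upper level: crossing, roundabout, ...
    name: str
    kind: str                                     # e.g. "crossing", "roundabout"
    signals: list[str] = field(default_factory=list)
    outgoing: dict[str, Segment] = field(default_factory=dict)   # target name -> segment

graph = {
    "crossing_1": Node("crossing_1", "crossing", ["stop"], {
        "roundabout_A": Segment([(440100.0, 4474200.0), (440180.0, 4474260.0)]),
    }),
    "roundabout_A": Node("roundabout_A", "roundabout"),
}

# A vehicle approaching crossing_1 can query which paths reach it and read the
# UTM waypoints of the segment it must follow next.
nxt = graph["crossing_1"].outgoing["roundabout_A"]
print(nxt.utm_points[0])
```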
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
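By way of illustration only (the text does not prescribe any particular tool), a minimal POS-tagging module could be as small as the following sketch, which assumes the NLTK library and its pre-trained English tokenizer and tagger models have been downloaded:

```python
# Minimal POS-tagging module using NLTK as one possible off-the-shelf tagger.
# Assumes the models are available, e.g. via
#   nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk

def pos_tag_text(text: str) -> list[tuple[str, str]]:
    tokens = nltk.word_tokenize(text)
    return nltk.pos_tag(tokens)          # e.g. [('The', 'DT'), ('tagger', 'NN'), ...]

print(pos_tag_text("The tagger labels each word with its grammatical category."))
```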
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
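As a hedged sketch of the kind of combination suggested above (the tool outputs and tag names are invented for illustration, and majority voting is only one of several possible combination schemes), annotations produced by several tools for the same level could be merged so that individual errors are outvoted:

```python
# Combine per-token annotations from several tools by simple majority vote.
from collections import Counter

def combine_annotations(per_tool_tags: list[list[str]]) -> list[str]:
    """per_tool_tags[i][j] is tool i's tag for token j; ties are broken arbitrarily."""
    combined = []
    for token_tags in zip(*per_tool_tags):
        label, _count = Counter(token_tags).most_common(1)[0]
        combined.append(label)
    return combined

tool_a = ["DET", "NOUN", "VERB"]
tool_b = ["DET", "VERB", "VERB"]      # one disagreement
tool_c = ["DET", "NOUN", "VERB"]
print(combine_annotations([tool_a, tool_b, tool_c]))   # ['DET', 'NOUN', 'VERB']
```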
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
We describe the current status of and provide performance results for a prototype compiler of Prolog to C, ciaocc. ciaocc is novel in that it is designed to accept different kinds of high-level information, typically obtained via an automatic analysis of the initial Prolog program and expressed in a standardized language of assertions. This information is used to optimize the resulting C code, which is then processed by an off-the-shelf C compiler. The basic translation process essentially mimics the unfolding of a bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. This is facilitated by a flexible design of the instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: predicates already written in C, data definitions, memory management routines and areas, etc., as well as mixing emulated bytecode with native code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
Improving the compilation of Prolog to C using type and determinism information: Preliminary results
Abstract:
We describe the current status of and provide preliminary performance results for a compiler of Prolog to C. The compiler is novel in that it is designed to accept different kinds of high-level information (typically obtained via an analysis of the initial Prolog program and expressed in a standardized language of assertions) and use this information to optimize the resulting C code, which is then further processed by an off-the-shelf C compiler. The basic translation process used essentially mimics an unfolding of a C-coded bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. Optimizations are then applied to this unfolded program. This is facilitated by a more flexible design of the bytecode instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: ancillary pieces of C code, data definitions, memory management routines and areas, etc., as well as mixing bytecode-emulated code with natively compiled code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
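As a purely conceptual sketch of the translation scheme described in this and the previous abstract (written in Python for brevity; the instruction names and the is_int/fail helpers are invented, not the actual WAM or ciaocc machinery), unfolding an emulator with respect to a known bytecode sequence, and dropping run-time checks when type information is available, might look like this:

```python
# Unfold a toy emulator's dispatch loop at "compile time": one C chunk per
# instruction instead of a run-time interpreter loop. Known type information
# (arg_is_integer) lets the generated code omit a tag check.
def emit_c_for(bytecode, arg_is_integer=False):
    chunks = []
    for op, arg in bytecode:                      # the unfolded dispatch loop
        if op == "get_integer":
            if arg_is_integer:                    # type info: skip the tag test
                chunks.append(f"r0 = {arg};")
            else:
                chunks.append(f"if (!is_int(r0)) fail(); r0 = {arg};")
        elif op == "proceed":
            chunks.append("return r0;")
    return "\n".join(chunks)

print(emit_c_for([("get_integer", 7), ("proceed", None)], arg_is_integer=True))
```

The point of the sketch is only the shape of the optimization: with analysis information the emitted C is straight-line code with fewer checks, while without it the same unfolded structure keeps the emulator's guards.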
Abstract:
The main aim of this research was to determine, from a biomechanical point of view, the running technique profile of triathletes during the running segment of a triathlon competition, and to analyse how the triathlete's gender and performance level influence that profile in competition. To do so, an experimental technique had to be developed and validated that was sufficiently accurate (internal validity), highly reliable and, given the competition setting, of high external (ecological) validity. The sample comprised 64 athletes: 32 triathletes (16 men and 16 women) competing in the Madrid 2008 Triathlon World Cup and 32 triathletes (16 men and 16 women) competing in the qualifying event for the Spanish Elite Championship. Running technique was analysed with a 2D photogrammetric system that computed the (x, y) coordinates of the joint centres with an error of 1.66% on the x axis and 2.10% on the y axis. The images were captured by a camera placed perpendicular to the triathletes' trajectory, filming the sagittal plane. Algorithms based on the DLT (Abdel-Aziz & Karara, 1971) were used to obtain the real coordinates from the digitised ones and, from them, the variables analysed. The biomechanical analysis of the run was carried out at four different moments during the competition, corresponding to each of the 2.5 km laps the triathletes had to complete. Running speed was closely related to the athlete's performance level. Three of the four groups analysed ran faster than 3 minutes 30 seconds per kilometre, which reflects the very high level of the subjects studied. Men appear to achieve higher speeds thanks to a longer stride length, whereas women show higher stride frequency values. Stride frequency was highest in the first lap for all the athletes analysed, with the international-level triathletes and the women showing the highest values.
Stride length showed different tendencies depending on gender and performance level. International-level triathletes and men showed their highest values in the first lap, followed by a decreasing tendency, with accumulated fatigue being the probable cause. In contrast, national-level triathletes and women showed higher values in the second lap than in the first, indicating that, in addition to fatigue, the preceding cycling segment has a direct effect on their performance. Flight times remained constant throughout the run, while contact times showed an evolution that modified the relative flight-time percentages. The lowest contact times were found in the first lap and in the men and international-level triathletes, and these two groups were also the most consistent across laps. Increasing tendencies were found in the women and national-level triathletes, who were unable to maintain the same values, probably because of their lower performance level. Vertical oscillation of the hip remained constant in the higher-level triathletes, with increasing tendencies in the lower-level ones; the highest values corresponded to the women and the national-level triathletes. The horizontal distance from the hip to the support foot remained constant across the laps in all groups, with higher values in the international-level triathletes and the men. The angle of the support knee at toe-off showed no clear tendency; the international-level triathletes and the men presented the lowest values. The angle of the free knee at toe-off correlated strongly with running speed, and the smallest angles were found in the international-level triathletes and the men, probably because of the higher speeds reached by these two groups. Ankle angles showed no clear tendency during the competition analysed, and the four groups presented similar values, so they do not seem to be a variable that affects the triathlete's biomechanical performance. The results obtained in this study support the use of 2D video photogrammetry to analyse running technique during triathlon competition. Its application in a top-level international competition has made it possible to characterise the technical profile of triathletes over the running segment, and it has also shown that laboratory studies do not reflect the competitive reality of a top-level triathlon.
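For reference, the planar (2D) form of the DLT used to map digitised image coordinates (u, v) onto real-plane coordinates (x, y) is commonly written with eight calibration parameters; this is the standard textbook formulation, not necessarily the exact variant used in the thesis:

\[
x = \frac{L_1 u + L_2 v + L_3}{L_7 u + L_8 v + 1}, \qquad
y = \frac{L_4 u + L_5 v + L_6}{L_7 u + L_8 v + 1}
\]

The parameters L_1 to L_8 are obtained once from control points of known position, after which every digitised joint centre can be converted to real coordinates.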
Abstract:
This research addressed the development of a consolidated model designed especially to cover the security and usability attributes of a software product. As a starting point, we built a new usability model on the basis of well-known quality standards and models. We then used an existing security model to analyse the relationship between these two approaches. This analysis consisted of a systematic mapping study of the relationship between security and usability as global quality factors. We identified five relationship types: inverse, direct, relative, one-way inverse, and no relationship. Most authors agree that there is an inverse relationship between security and usability. However, this is not a unanimous finding, and this study unveils a number of open questions, like application domain dependency and the need to explore lower-level relationships between attribute subcharacteristics. In order to clarify the questions raised during the research, we conducted a second systematic mapping to further analyse the finer-grained structure of these factors, such as authentication as a subset of security and user efficiency as a subset of usability. The most relevant finding is that efficiency does not depend on the security level during the authentication process. There are other subfactors that require analysis. Accordingly, this research is the first part of a larger project to develop a full-blown consolidated model for security and usability.
Abstract:
The aim of this Final Degree Project is the design of the public address and PAGA (Public Address / General Alarm) system for the Waipahu Transit Center train station in Honolulu, Hawaii. The station is part of a new rail line currently under construction, the Honolulu Rail Transit. Initially the line will have 21 stations, almost all of them designed as elevated structures that follow the highways crossing the island. Completion is scheduled for 2019, although the first stations are expected to open in 2017. The work begins with an acoustic study of the space to be covered and the selection of the necessary equipment: switches, loudspeakers, amplifiers, a processor, control equipment and microphones. This first study provides an approximation of the equipment required and of its possible placement within the station. The station is then simulated with the acoustic and electroacoustic simulation software EASE 4.4. To do so, a 3D model of the station is built in which each surface is assigned its corresponding material. To ease the design and the computation of the simulations, the station is divided into three separate parts, one per level: Ground Level, the lower level containing the entrance; Concourse Level, the corridor connecting the two platforms; and Platform Level, where the trains stop. Once the model is built, loudspeakers are positioned on the different levels of the station. Because the island's climate stays around 20 °C all year round, no air-conditioning or heating systems are needed and the station is not fully enclosed. This poses a problem for the EASE simulations: since the space is open, parameters such as the reverberation time and the equivalent volume must be obtained by other means. To that end, the Ray Tracing method is used to derive the reverberation time from the impulse response of the room, and an equivalent volume of the enclosure is then computed with Eyring's formula. With these data, the required parameters can be calculated: direct sound pressure level, total sound pressure level and STI (Speech Transmission Index). To obtain the STI, each level of the station must first be equalised. Once the simulations are done, the sound pressure levels and intelligibility values are checked against the requirements given by the client. The loudspeaker loops are then laid out and the number of amplifiers needed is calculated. The placement of the microphones, which allow the power delivered by the loudspeakers to be adjusted according to the noise level in the station, is also studied. With all the equipment defined, the interconnections are drawn up, both in a simplified form showing the loudspeaker loops on each level of the station and in a more detailed form showing the connections between every piece of equipment in the rack. Finally, the equipment is labelled and an estimated budget for the cost of the PAGA system design is produced.
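For reference, Eyring's reverberation formula (in SI units) relates the reverberation time T60 to the room volume V, the total surface area S and the mean absorption coefficient; rearranged, it yields the equivalent volume once the reverberation time has been obtained from the ray-traced impulse response. This is the standard textbook form, shown here only as a reminder of the relation the project relies on:

\[
T_{60} = \frac{0.161\,V}{-S\,\ln(1-\bar{\alpha})}
\qquad\Longrightarrow\qquad
V = \frac{-\,T_{60}\,S\,\ln(1-\bar{\alpha})}{0.161}
\]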
Abstract:
This doctoral thesis was born with a strong pedagogical vocation. Its working hypothesis is built around a question of personal interest, a theme on which the different courses and research works of the doctorate have hinged from the beginning: the Casa Domínguez as a paradigm of the dialectic in the work of Alejandro de la Sota. The classification of reality into antagonistic categories determines a polarised conceptual order, a network of mutually exclusive affiliations on which Sota builds his own operative protocol: intellectual or popular architecture, experimental or traditional, universal or local, light or heavy, raised or buried, and so on. The thesis approaches a question latent in the whole of the 'Sotian' oeuvre through the dissection and analysis of one of his smallest works, the Casa Domínguez. It is an organisation without precedent that takes the dialectical strategy to the point of paroxysm: the house is split into two independent strata, the raised daytime zone and the buried night-time zone; each stratum establishes its own geometric and constructive order, its own language and character, its own identity and even its own budget. The relationships between interior and exterior are specialised according to activity or rest, establishing a complex network of relations between the levels, some evident and others jealously veiled. The room devoted to active tasks is designed as an object with a light frame and a cold skin; the precise geometry of the cube delimits the room that keeps watch over the conquered landscape. The inhabited slope is devoted to rest and is configured as a green topography beneath which the bedrooms develop around patios, crevices and skylights, generating a landscape of their own: the construction of the object versus the construction of the place.
The Casa Domínguez is one of the least studied, and therefore least celebrated, projects in Don Alejandro's work. Successive publications reproduce the graphic documentation together with the descriptive text (epopee) that Sota himself composed for the publication of the project. Scarcely a couple of brief critical texts, by Miguel Ángel Baldellou and, more recently, by Moisés Puente, address the house as a monographic subject. Yet the project and the construction occupied De la Sota for no less than ten years, with almost a hundred drawings for two versions of the project, the first of them unpublished. The determination to settle every last detail of this 'small' work led Sota to control even the interior furniture, as he had done in other 'important' works such as the Civil Government of Tarragona, the César Carlos hall of residence and the Post and Telecommunications building in León. The client's complicity, maintained for almost forty years, enabled the deployment of an important collection of design resources and tools.
The choice of the Casa Domínguez as the central subject of the thesis therefore pursues a triple objective: first, to approach the project as a paradigm of the 'Sotian' dialectic, analysing the coherence between the heroic discourse and the work finally built; second, a rigorous investigation of a scientific nature, based on the dissection and progressive disassembly of the architectural object; and finally, a reflection on the themes and design devices that codify the identification between the act of building and the fact of inhabiting, recording the successes and critically assessing those elements that are not fully coherent with the internal order of the proposal.
Abstract:
This research studied the effects of additional fiber in the rearing phase diets on egg production, gastrointestinal tract (GIT) traits, and body measurements of brown egg-laying hens fed diets varying in energy concentration from 17 to 46 wk of age. The experiment was completely randomized with 10 treatments arranged as a 5 × 2 factorial with 5 rearing phase diets and 2 laying phase diets. During the rearing phase, treatments consisted of a control diet based on cereals and soybean meal and 4 additional diets with a combination of 2 fiber sources (cereal straw and sugar beet pulp, SBP) at 2 levels (2 and 4%). During the laying phase, diets differed in energy content (2,650 vs. 2,750 kcal AMEn/kg) but had the same amino acid content per unit of energy. The rearing diet did not affect any production trait except egg production, which was lower in birds fed SBP than in birds fed straw (91.6 and 94.1%, respectively; P < 0.05). Laying hens fed the high energy diet had lower feed intake (P < 0.001), better feed conversion (P < 0.01), and greater BW gain (P < 0.05) than hens fed the low energy diet, but egg production and egg weight were not affected. At 46 wk of age, none of the GIT traits was affected by previous dietary treatment. At this age, hen BW was positively related to body length (r = 0.500; P < 0.01), tarsus length (r = 0.758; P < 0.001), and body mass index (r = 0.762; P < 0.001), but no effects of type of diet on these traits were detected. In summary, the inclusion of up to 4% of a fiber source in the rearing diets did not affect GIT development of the hens, but SBP reduced egg production. An increase in the energy content of the laying phase diet reduced ADFI and improved feed efficiency but did not affect any of the other traits studied.