942 results for Adaptable seat


Relevance: 10.00%

Abstract:

Transport capacity is one of the fundamental yardsticks for evaluating the progress an economic and social area can achieve, and transport is a sector of great importance for today's society. Among the different modes of transport, rail is one of those currently on the rise: for both passengers and freight, the train has become a very useful means of transport. It operates within cities, between cities that are close to one another and, increasingly, thanks to high speed, between cities separated by long distances. This Thesis aims to assist in the design of one of the most important stages of a railway installation project: the traction power supply system. The design phase of a railway traction power system faces many questions that must be resolved with precision, and the ability of the railway operation to meet its energy demands depends on the success of this phase. Installation and operating costs, both direct and indirect, must also be considered. The Methodology presented in this Thesis offers the designer an expert system that proposes, as solutions, a set of correct power supply scenarios, verified by solving equation-based models: correct from the point of view of the relevant electrical parameters as well as of budgeted costs and the impact of indirect costs. Using this Methodology, the designer obtains, in a relatively short time, a set of feasible solutions from which to choose the one that best suits the final requirements.

This Thesis was developed within a research line of the Railway Research Centre CITEF (Technical University of Madrid, UPM). Among other projects and research lines, CITEF has carried out validation and dimensioning studies of railway power supply systems for a wide variety of clients and railway systems. Throughout these projects, interest has centred mainly on the following parameters of the electrical system:

- Number and position of traction substations, and the power of each substation.
- Type of catenary along the route, the conductors that make it up, and their characteristics.
- Number and position of autotransformers for systems operating in bi-voltage AC, or 2x25 kV.
- Position of neutral zones.
- Validation, according to the applicable standards, of: voltage drops along the line; maximum voltages in the return circuit; overheating of the catenary conductors; overheating of the traction substation transformers.

The aim is that the solutions provided by the Methodology suggest scenarios in which these parameters lie within the limits set by the standards. Having a repository of candidate scenarios whose parameters and electrical elements have already been verified as correct saves time and testing, and would markedly improve the usual design process for railway power supply systems. Direct costs, associated with elements such as traction substations, autotransformers and neutral zones, take up a large share of the budget of a railway system.

This Thesis also examines the effect of the indirect costs incurred in the installation and operation of electrical systems: those derived from environmental impact; the costs of maintaining the electrical equipment and the catenary installation; the costs of connecting the traction substations to the general or distribution grid; and, finally, the installation costs of each element. Based on experience, these are the indirect costs considered relevant enough to keep under some control. The Methodology covers the possibility that the proposed electrical designs account for unacceptable cost variations or, for equal electrical performance, directly proposes the cheapest options in terms of the costs mentioned. Analysing direct and indirect costs, their impact is divided between those incurred during installation and those that occur later, during operation of the railway line. These costs are usually opposed (the better one is, the worse the other tends to be, and vice versa), so a system is needed that treats both objectives separately.

To achieve these objectives, the Methodology is built on three basic pillars:

- The railway simulator Hamlet, which integrates modules for building complete railway track layouts, a module for the mechanical and traction simulation of rolling stock, a railway signalling module, and an electrical system module. The software is written in C++ and Matlab.
- An analysis and study of how to focus the different possible electrical scenarios so that they can be examined quickly, based on the peak power demand produced by the railway traffic.
- Optimization algorithms. After studying which algorithms could be adapted to a system as complex as the one proposed, genetic algorithms were chosen. Three genetic algorithms were selected, making it possible to gather information on the behaviour and results of each of them. They were chosen for their response times, multi-objective capability, ease of adaptation, and broad application in engineering projects: NSGA-II, AMGA-II and ε-MOEA.
- The design of objective functions and an equation model prepared to work with the direct and indirect costs and with the basic constraints that the electrical scenarios must not violate; these constraints monitor the electrical behaviour and the budgetary stability of each scenario.

The tests carried out with the system either reproduced situations that can arise in practice or used real systems and problems directly; in particular, three real railway lines were modelled and configured in order to evaluate the results produced by the Methodology. This made it possible to validate the Methodology, to compare the genetic algorithms with one another, to compare the selected power supply systems with the real ones, and to reach very satisfactory conclusions. The Methodology suggests a very interesting line of work, both for the results already obtained and for the opportunities its evolution may create. This Thesis was developed with that idea in mind, and it is hoped that it can serve as another factor in the validation and design of railway power supply systems.
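The Methodology weighs each candidate electrification scenario against two opposed cost objectives (installation and operation) while enforcing electrical constraints such as maximum voltage drop. The sketch below is only a minimal illustration of that selection logic, not the thesis's Hamlet-based implementation; the scenario fields, limits and cost figures are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    """One candidate power supply layout (all values hypothetical)."""
    name: str
    installation_cost: float    # costs incurred when the line is built
    operation_cost: float       # costs incurred while the line is operated
    max_voltage_drop: float     # worst-case voltage drop on the catenary (V)
    max_return_voltage: float   # worst-case voltage in the return circuit (V)

def is_feasible(s: Scenario, drop_limit: float, return_limit: float) -> bool:
    # Basic electrical restrictions the scenario must not violate.
    return s.max_voltage_drop <= drop_limit and s.max_return_voltage <= return_limit

def dominates(a: Scenario, b: Scenario) -> bool:
    # a dominates b if it is no worse in both cost objectives and better in at least one.
    no_worse = (a.installation_cost <= b.installation_cost
                and a.operation_cost <= b.operation_cost)
    better = (a.installation_cost < b.installation_cost
              or a.operation_cost < b.operation_cost)
    return no_worse and better

def pareto_front(scenarios: List[Scenario],
                 drop_limit: float, return_limit: float) -> List[Scenario]:
    # Keep only feasible, non-dominated scenarios: the set offered to the designer.
    feasible = [s for s in scenarios if is_feasible(s, drop_limit, return_limit)]
    return [s for s in feasible
            if not any(dominates(o, s) for o in feasible if o is not s)]

if __name__ == "__main__":
    candidates = [
        Scenario("A: 4 substations", 42.0e6, 1.9e6, 310.0, 48.0),
        Scenario("B: 5 substations", 50.0e6, 1.5e6, 260.0, 40.0),
        Scenario("C: 3 substations", 36.0e6, 2.6e6, 420.0, 70.0),  # violates the limits
    ]
    for s in pareto_front(candidates, drop_limit=350.0, return_limit=60.0):
        print(s.name, s.installation_cost, s.operation_cost)
```

In the thesis itself this trade-off is explored with multi-objective genetic algorithms (NSGA-II, AMGA-II, ε-MOEA); the snippet only shows how a feasible, non-dominated set of scenarios might be extracted once each candidate has been evaluated.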

Relevance: 10.00%

Abstract:

This project proposes a high-definition television production environment in which the magnetic tapes used for capturing, editing, managing and transferring audiovisual content are replaced by servers and storage systems based on information technologies. The mission of this tapeless environment is to carry out the production phase of television content. It is an independent centre, at a location remote from the central facilities of the broadcaster. The tapeless environment is connected to the broadcaster's central services through high-speed data networks and a radio-frequency link. Over these links the newsroom systems exchange data and rundowns, the contribution feeds used in the programmes are received, the programme signal is sent for broadcast, and recorded material is transferred to the Post-production area for final editing, playout and archive. Two television studios are proposed, equipped with video servers and shared media storage for agile, unified and flexible management of the programmes' requirements. Besides eliminating the slow and heavy work of handling tapes, production becomes much more agile because waiting times disappear: several users can access and view the same content simultaneously, and the time spent digitising and downloading recorded material is also removed, since the systems allow direct ingest of the received feeds. The content of several days of recording, in HD quality, is kept in the storage system for the preparation of material at the centre itself and for transfer to the corresponding central department. Software applications integrate the newsroom work on the programmes with the production processes of the studios. The design proposed for the different technical subsystems of the studios aims at high reliability, operability and adaptability to the technical demands of the audiovisual production of different types of programmes. As this is a conceptual proposal, it is generally not based on equipment from specific brands or manufacturers but rather on specific working methods; when a particular device is mentioned, it is because its technological concept is novel or stands out clearly from the general run of existing equipment for that function.

Relevance: 10.00%

Abstract:

One of the most important issues in the contemporary debate is the long-term sustainability of society as we understand it today. Human beings are recovering the lost sensitivity that conceived us as one more piece in the natural cycle of life; we have finally understood that we cannot be self-sufficient and independent of the natural environment that surrounds us. Beyond respect and care, nature opens the door to infinite knowledge at all levels and at all scales. Within the architectural discipline there have been examples, such as Antoni Gaudí or Frei Otto, who grounded their work in the natural world, finding in it the strategies and bases for architectural design; they were, however, a minority within the enormous cast of architects who defend the right angle. In recent decades the trend is changing. We are not referring so much to the growing sensitivity towards energy efficiency, which has led to a revaluation of vernacular architecture and the transfer of its wisdom to bioclimatic strategies, as to a specific case within the wide range of architectural forms that have appeared thanks to the incorporation of computational tools into design and production. The architectures that interest us are those that use these techniques to analyse and interpret the complex, highly efficient strategies found in nature and to transfer them to the architectural discipline. This trend, framed within Biomimicry or biomimetics, is known as Bioarchitecture.

This thesis deals with morphology and, above all, with morphogenesis. Morphology refers to the study of a concrete form, which allows us to understand a specific case; our focus, however, is on morphogenesis, that is, on the study of the processes that generate those forms, in order to reproduce patterns and generate families of adaptable and reconfigurable cases. Studying form does not make this a "formalist" thesis in the pejorative, gestural sense usually attached to that term. The research conceives form as the natural world does: form as a synthesis of efficiency. No natural form is gratuitous; each one fulfils a specific function and develops with the minimum material and the least possible energy. This drive to find the efficient form is what takes us beyond formalist architecture. The path of morphological research follows, as the title of the thesis indicates, the specific thread of the radiolaria. These single-celled microorganisms possess skeletons so complex that understanding their morphology requires a journey spanning more than 4,000 years of human knowledge: from the discovery of the Platonic solids, polyhedra that configure many of the overall shapes of these skeletons, to the application of generative algorithms, which make it possible to understand and reproduce the behavioural patterns behind the packing and irregular tessellation systems of radiolarian skeletons.

The thesis does not set out the problem from a biological or palaeontological point of view, although the first chapter inevitably includes a referenced review of the current state of scientific knowledge. Morphological questions are analysed in greater depth, together with the different positions from which these microorganisms have served as a reference in the architectural discipline; it also proved necessary to analyse other natural patterns that share generative strategies with radiolarian skeletons. The second chapter traces a route from the most basic geometries to the most complex ones related to the generation strategies detected in the microorganisms, interleaving the analysis of these geometries with examples of applications in architecture, design and art, and ending with a timeline that synthesises and relates the three lines of research addressed: natural, geometric and architectural. After the two central chapters, the final chapter recapitulates the strategies analysed and applies the knowledge acquired in the thesis through the production of different prototypes, ranging from traditional analytical drawing to digital fabrication and parametric design, by way of analogue models in plaster, metal bars, resin, silicone, latex, etc.

Relevance: 10.00%

Abstract:

Introduction: In team sports training, it is common to describe the periodization of training in terms of the physical capacities worked in each microcycle. This planning and programming process has a scientific basis and is necessary, but it is not incompatible with recording and programming the tactical content to be worked on. To do so, there must be a game model based on the characteristics of the players and a detailed record of each match, based on performance indicators that make it possible to evaluate the assimilation of that game model. Objective: To review the most recent literature on planning in football and, from it, to define tasks for developing a specific game model, together with a method for recording data and evaluating the tactical-strategic teaching-learning process in high-performance football. Material and methods: Using the literature, the information on the game model, and the full-match videos provided by the coaching staff of Club Deportivo Leganés, I developed a proposal for a training methodology and a set of tasks as the basis for the teaching and learning of the game model. Results: The result of this work is a methodological proposal that can be adapted to other coaches, game models and even other sports, since it is a flexible tool. Conclusions: The detailed recording of a game model, of tasks for developing each of its aspects, and its evaluation through performance indicators designed by the coach together form a practical tool for developing that game model and, therefore, for improving the team's performance.

Relevance: 10.00%

Abstract:

This dissertation reveals the modern origin of the approach to the architectural project through methods of ordering. These procedures, faithful to the poetics that back them, establish principles that precede and constitute the basis of the method, and these principles are technical, functional and social. A cartography of the principles proposed by architects and architectural theorists provides the research medium of the thesis: architects' books. The intellectualization and conceptualization that architecture underwent during the 20th century, favoured by the gathering of architects, historians and critics in meetings and debates, encouraged the appearance of texts in which the architectural project is contextualized in its surroundings. In this way, the resolution of a specific project through a choice among various contingent possibilities is set aside, and the act of designing is established as an abstract problem. This stance changes the resolution of the architectural project, which is now undertaken as a particular case to be solved according to the proposed principles and methods. Architects' books emerge as the privileged medium for presenting these principles and organizational methods, positioning them in their cultural and social environment.

The principles of technique, function and city that had fascinated architects since the 1920s went through a process of crisis between the end of the Second World War and the 1973 oil crisis. From the 1970s onwards they lost their validity and no longer dazzled; they were relegated to being one more principle that affects the architectural project but does not determine it. Rather than weakening them, this displacement allows them to show their full creative power. The tools that make these principles explicit, such as seriation, modulation, change of scale, hierarchical or adaptable methods of organization, taxonomies, diagrams and narratives, lose their charge of novelty and certainty and their metaphorical power, reaching the present day converted into a conceptual structure on which architectural projects are organized.

Relevance: 10.00%

Abstract:

This project focuses on building a tool for managing content of very diverse types that is easily adaptable to each context. It stores the required content through a previously customized form: an editor works only on entering content, while an administrator customizes the form fields according to the content. In essence the tool supports two types of user, developers (administrators) and writers (editors). For the former it simplifies the task of conceptualizing the data structures that need to be persisted and serves as the basis for building the editors that the writers use; it also provides a simple, powerful and agile API for retrieving the data entered by the writers. The tool is also designed to be interoperable, that is, it does not force the use of any particular kind of persistent storage. It can work with simple text files, so it can be deployed on extremely basic servers. If search power is needed, nothing prevents the use of relational databases such as MySql. And if one wants to go a step further and take advantage of the flexibility, power and malleability of NoSql databases (such as MongoDB), it is not costly: one only needs to implement a new class of the PersistentManager type and develop the required kinds of content search and retrieval. The initial version of the tool implements these three kinds of store; nothing prevents using only some of them and discarding the rest, or implementing a new one. From the writers' point of view, it offers a simple and powerful environment for the typical CRUD tasks (Create, Read, Update, Delete): a writer can create, search, re-use and even schedule the publication of content over time.
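As a rough illustration of the PersistentManager idea described above, the sketch below defines a minimal storage contract with a plain-text-file backend. The actual tool's API is not documented here; every name other than PersistentManager, and the JSON-file layout, are hypothetical choices made for the example.

```python
import json
import os
import uuid
from abc import ABC, abstractmethod

class PersistentManager(ABC):
    """Minimal storage contract: concrete backends (text files, MySQL, MongoDB)
    expose the same CRUD operations to the editor-facing API."""

    @abstractmethod
    def create(self, content: dict) -> str: ...

    @abstractmethod
    def read(self, content_id: str) -> dict: ...

    @abstractmethod
    def update(self, content_id: str, content: dict) -> None: ...

    @abstractmethod
    def delete(self, content_id: str) -> None: ...

class FilePersistentManager(PersistentManager):
    """Backend based on plain JSON files, one file per content item."""

    def __init__(self, directory: str):
        self.directory = directory
        os.makedirs(directory, exist_ok=True)

    def _path(self, content_id: str) -> str:
        return os.path.join(self.directory, f"{content_id}.json")

    def create(self, content: dict) -> str:
        content_id = uuid.uuid4().hex
        with open(self._path(content_id), "w", encoding="utf-8") as fh:
            json.dump(content, fh)
        return content_id

    def read(self, content_id: str) -> dict:
        with open(self._path(content_id), encoding="utf-8") as fh:
            return json.load(fh)

    def update(self, content_id: str, content: dict) -> None:
        with open(self._path(content_id), "w", encoding="utf-8") as fh:
            json.dump(content, fh)

    def delete(self, content_id: str) -> None:
        os.remove(self._path(content_id))

# A MySQL- or MongoDB-backed manager would subclass PersistentManager the same
# way, so the writer-facing CRUD code never depends on the chosen storage.
manager = FilePersistentManager("content_store")
item_id = manager.create({"title": "Example", "body": "Hello", "publish_on": "2024-01-01"})
print(manager.read(item_id)["title"])
```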

Relevance: 10.00%

Abstract:

Admission: LU Students, Faculty and Staff interested in reserving a seat to participate must sign up by November 1, 2015 here: http://goo.gl/forms/WDqM7RXlic

Relevance: 10.00%

Abstract:

Recent advances in single molecule manipulation methods offer a novel approach to investigating the protein folding problem. These studies usually are done on molecules that are naturally organized as linear arrays of globular domains. To extend these techniques to study proteins that normally exist as monomers, we have developed a method of synthesizing polymers of protein molecules in the solid state. By introducing cysteines at locations where bacteriophage T4 lysozyme molecules contact each other in a crystal and taking advantage of the alignment provided by the lattice, we have obtained polymers of defined polarity up to 25 molecules long that retain enzymatic activity. These polymers then were manipulated mechanically by using a modified scanning force microscope to characterize the force-induced reversible unfolding of the individual lysozyme molecules. This approach should be general and adaptable to many other proteins with known crystal structures. For T4 lysozyme, the force required to unfold the monomers was 64 ± 16 pN at the pulling speed used. Refolding occurred within 1 sec of relaxation with an efficiency close to 100%. Analysis of the force versus extension curves suggests that the mechanical unfolding transition follows a two-state model. The unfolding forces determined in 1 M guanidine hydrochloride indicate that in these conditions the activation barrier for unfolding is reduced by 2 kcal/mol.
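For scale, a 2 kcal/mol reduction of the unfolding activation barrier corresponds, in a simple transition-state (Arrhenius-like) picture, to roughly a 30-fold increase in the unfolding rate at room temperature. The short calculation below is only an illustrative back-of-the-envelope check under that standard assumption; it is not part of the original analysis.

```python
import math

R = 1.987e-3          # gas constant, kcal / (mol K)
T = 298.0             # room temperature, K
delta_barrier = 2.0   # reduction of the unfolding activation barrier, kcal/mol

# Transition-state picture: k ~ exp(-dG_act / RT), so lowering the barrier by
# delta_barrier multiplies the unfolding rate by exp(delta_barrier / RT).
rate_factor = math.exp(delta_barrier / (R * T))
print(f"Unfolding rate enhancement: ~{rate_factor:.0f}x")   # ~29x
```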

Relevance: 10.00%

Abstract:

Detection of similarity is particularly difficult for small proteins and thus connections between many of them remain unnoticed. Structure and sequence analysis of several metal-binding proteins reveals unexpected similarities in structural domains classified as different protein folds in SCOP and suggests unification of seven folds that belong to two protein classes. The common motif, termed treble clef finger in this study, forms the protein structural core and is 25–45 residues long. The treble clef motif is assembled around the central zinc ion and consists of a zinc knuckle, loop, β-hairpin and an α-helix. The knuckle and the first turn of the helix each incorporate two zinc ligands. Treble clef domains constitute the core of many structures such as ribosomal proteins L24E and S14, RING fingers, protein kinase cysteine-rich domains, nuclear receptor-like fingers, LIM domains, phosphatidylinositol-3-phosphate-binding domains and His-Me finger endonucleases. The treble clef finger is a uniquely versatile motif adaptable for various functions. This small domain with a 25 residue structural core can accommodate eight different metal-binding sites and can have many types of functions from binding of nucleic acids, proteins and small molecules, to catalysis of phosphodiester bond hydrolysis. Treble clef motifs are frequently incorporated in larger structures or occur in doublets. Present analysis suggests that the treble clef motif defines a distinct structural fold found in proteins with diverse functional properties and forms one of the major zinc finger groups.

Relevance: 10.00%

Abstract:

Wnt1 signaling has been implicated as one factor involved in neural crest-derived melanocyte (NC-M) development. Mice deficient for both Wnt1 and Wnt3a have a marked deficiency in trunk neural crest derivatives including NC-Ms. We have used cell lineage-directed gene targeting of Wnt signaling genes to examine the effects of Wnt signaling in mouse neural crest development. Gene expression was directed to cell lineages by infection with subgroup A avian leukosis virus vectors in lines of transgenic mice that express the retrovirus receptor tv-a. Transgenic mice with tva in either nestin-expressing neural precursor cells (line Ntva) or dopachrome tautomerase (DCT)-expressing melanoblasts (line DCTtva) were analyzed. We overstimulated Wnt signaling in two ways: directed gene transfer of Wnt1 to Ntva+ cells and transfer of β-catenin to DCTtva+ NC-M precursor cells. In both methods, NC-M expansion and differentiation were effected. Significant increases were observed in the number of NC-Ms [melanin+ and tyrosinase-related protein 1 (TYRP1)+ cells], the differentiation of melanin− TYRP1+ cells to melanin+ TYRP1+ NC-Ms, and the intensity of pigmentation per NC-M. These data are consistent with Wnt1 signaling being involved in both expansion and differentiation of migrating NC-Ms in the developing mouse embryo. The use of lineage-directed gene targeting will allow the dissection of signaling molecules involved in NC development and is adaptable to other mammalian developmental systems.

Relevance: 10.00%

Abstract:

Recent studies on proteins whose N and C termini are in close proximity have demonstrated that folding of polypeptide chains and assembly of oligomers can be accomplished with circularly permuted chains. As yet no methodical study has been conducted to determine how extensively new termini can be introduced and where such termini cannot be tolerated. We have devised a procedure to generate random circular permutations of the catalytic chains of Escherichia coli aspartate transcarbamoylase (ATCase; EC 2.1.3.2) and to select clones that produce active or stable holoenzyme containing permuted chains. A tandem gene construct was made, based on the desired linkage between amino acid residues in the C- and N-terminal regions of the polypeptide chain, and this DNA was treated with a suitable restriction enzyme to yield a fragment containing the rearranged coding sequence for the chain. Circularization achieved with DNA ligase, followed by linearization at random with DNase I, and incorporation of the linearized, repaired, blunt-ended, rearranged genes into a suitable plasmid permitted the expression of randomly permuted polypeptide chains. The plasmid with appropriate stop codons also contained pyrI, the gene encoding the regulatory chain of ATCase. Colonies expressing detectable amounts of ATCase-like molecules containing permuted catalytic chains were identified by an immunoblot technique or by their ability to grow in the absence of pyrimidines in the growth medium. Sequencing of positive clones revealed a variety of novel circular permutations. Some had N and C termini within helices of the wild-type enzyme as well as deletions and insertions. Permutations were concentrated in the C-terminal domain and only few were detected in the N-terminal domain. The technique, which is adaptable generally to proteins whose N and C termini are near each other, can be of value in relating in vivo folding of nascent, growing polypeptide chains to in vitro renaturation of complete chains and determining the role of protein sequence in folding kinetics.
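At the sequence level, a circular permutation simply joins the native termini and reopens the chain at a new position. The sketch below enumerates such permutants for a toy amino-acid string; it illustrates only the string operation, not the tandem-gene/DNase I protocol described in the abstract, and the toy sequence and linker are arbitrary choices made for the example.

```python
def circular_permutants(sequence: str, linker: str = "GS"):
    """Yield (new_start, permuted_sequence) for every possible new N-terminus.

    The native N and C termini are fused through `linker` (as in a tandem
    construct), and the chain is reopened just before position `new_start`.
    """
    n = len(sequence)
    for new_start in range(1, n):       # skip 0: that is the wild-type arrangement
        yield new_start, sequence[new_start:] + linker + sequence[:new_start]

if __name__ == "__main__":
    toy = "ACDEFGHIKLMNPQRSTVWY"        # arbitrary toy sequence, not a real protein
    for start, perm in list(circular_permutants(toy))[:3]:
        print(start, perm)
```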

Relevance: 10.00%

Abstract:

We present a shape-recovery technique in two dimensions and three dimensions with specific applications in modeling anatomical shapes from medical images. This algorithm models extremely corrugated structures like the brain, is topologically adaptable, and runs in O(N log N) time, where N is the total number of points in the domain. Our technique is based on a level set shape-recovery scheme recently introduced by the authors and the fast marching method for computing solutions to static Hamilton-Jacobi equations.
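The abstract cites the fast marching method and its O(N log N) running time. The sketch below is a generic first-order fast-marching solver on a uniform 2-D grid (heap-ordered, Dijkstra-like), offered only to make that complexity claim concrete; it is not the authors' implementation and omits the level-set shape-recovery machinery.

```python
import heapq
import numpy as np

def fast_marching(speed, sources, h=1.0):
    """Solve the eikonal equation |grad T| * F = 1 on a 2-D grid.

    speed   : 2-D array of positive propagation speeds F
    sources : list of (i, j) grid points where T = 0
    h       : grid spacing
    Returns the arrival-time field T; the heap gives O(N log N) overall.
    """
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    heap = []
    for (i, j) in sources:
        T[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # First-order upwind solve using only already-accepted neighbours.
        def val(ii, jj):
            return T[ii, jj] if accepted[ii, jj] else np.inf
        tx = min(val(i, j - 1) if j > 0 else np.inf,
                 val(i, j + 1) if j < nx - 1 else np.inf)
        ty = min(val(i - 1, j) if i > 0 else np.inf,
                 val(i + 1, j) if i < ny - 1 else np.inf)
        f = h / speed[i, j]
        if abs(tx - ty) >= f:            # only one upwind direction contributes
            return min(tx, ty) + f
        # Solve (T - tx)^2 + (T - ty)^2 = f^2 and take the larger root.
        return 0.5 * (tx + ty + np.sqrt(2.0 * f * f - (tx - ty) ** 2))

    while heap:
        t, i, j = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                t_new = update(ni, nj)
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T

# Example: arrival times from the centre of a 100x100 grid with unit speed.
times = fast_marching(np.ones((100, 100)), [(50, 50)])
```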

Relevance: 10.00%

Abstract:

A base-pair resolution method for determining nucleosome position in vitro has been developed to complement existing, less accurate methods. Cysteaminyl EDTA was tethered to a recombinant histone octamer via a mutant histone H4 with serine 47 replaced by cysteine. When assembled into nucleosome core particles, the DNA could be cut site specifically by hydroxyl radical-catalyzed chain scission by using the Fenton reaction. Strand cleavage occurs mainly at a single nucleotide close to the dyad axis of the core particle, and assignment of this location via the symmetry of the nucleosome allows base-pair resolution mapping of the histone octamer position on the DNA. The positions of the histone octamer and H3H4 tetramer were mapped on a 146-bp Lytechinus variegatus 5S rRNA sequence and a twofold-symmetric derivative. The weakness of translational determinants of nucleosome positioning relative to the overall affinity of the histone proteins for this DNA is clearly demonstrated. The predominant location of both histone octamer and H3H4 tetramer assembled on the 5S rDNA is off center. Shifting the nucleosome core particle position along DNA within a conserved rotational phase could be induced under physiologically relevant conditions. Since nucleosome shifting has important consequences for chromatin structure and gene regulation, an approach to the thermodynamic characterization of this movement is proposed. This mapping method is potentially adaptable for determining nucleosome position in chromatin in vivo.

Relevance: 10.00%

Abstract:

We report the detection of the first extrasolar planet, ET-1 (HD 102195b), using the Exoplanet Tracker (ET), a new-generation Doppler instrument. The planet orbits HD 102195, a young star with solar metallicity that may be part of the local association. The planet imparts radial velocity variability to the star with a semiamplitude of 63.4 ± 2.0 m s^-1 and a period of 4.11 days. The planetary minimum mass (m sin i) is 0.488 ± 0.015 M_J. The planet was initially detected in the spring of 2005 with the Kitt Peak National Observatory (KPNO) 0.9 m coudé feed telescope. The detection was confirmed by radial velocity observations with the ET at the KPNO 2.1 m telescope and also at the 9 m Hobby-Eberly Telescope (HET) with its High Resolution Spectrograph. This planetary discovery with a 0.9 m telescope around a V = 8.05 magnitude star was made possible by the high throughput of the instrument: 49% measured from the fiber output to the detector. The ET's interferometer-based approach is an effective method for planet detection. In addition, the ET concept is adaptable to multiple-object Doppler observations or very high precision observations with a cross-dispersed echelle spectrograph to separate stellar fringes over a broad wavelength band. In addition to spectroscopic observations of HD 102195, we obtained brightness measurements with one of the automated photometric telescopes at Fairborn Observatory. Those observations reveal that HD 102195 is a spotted variable star with an amplitude of ~0.015 mag and a 12.3 ± 0.3 day period. This is consistent with spectroscopically observed Ca II H and K emission levels and line-broadening measurements but inconsistent with rotational modulation of surface activity as the cause of the radial velocity variability. Our photometric observations rule out transits of the planetary companion.
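As a rough consistency check of the quoted minimum mass, the standard radial-velocity relation m sin i ≈ K (P / 2πG)^(1/3) M_*^(2/3) √(1−e²) (valid for a planet much less massive than its star) can be evaluated. In the sketch below the stellar mass (~0.93 M_sun for HD 102195) and the circular orbit are assumptions not stated in the abstract.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_JUP = 1.898e27         # kg

K = 63.4                 # RV semiamplitude, m/s (from the abstract)
P = 4.11 * 86400.0       # orbital period, s (from the abstract)
M_star = 0.93 * M_SUN    # assumed stellar mass for HD 102195 (not in the abstract)
e = 0.0                  # assumed circular orbit

# m sin i ~ K * (P / (2*pi*G))**(1/3) * M_star**(2/3) * sqrt(1 - e**2)
m_sini = (K * (P / (2.0 * math.pi * G)) ** (1.0 / 3.0)
          * M_star ** (2.0 / 3.0) * math.sqrt(1.0 - e ** 2))
print(f"m sin i ~ {m_sini / M_JUP:.2f} M_Jup")   # ~0.48 M_Jup, near the quoted 0.488
```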

Relevance: 10.00%

Abstract:

Backbone networks are responsible for long-haul data transport, serving many clients with a large volume of data. Since long-haul data transport must rely on a robust, high-capacity network, the technology currently adopted by the industry is Wavelength Division Multiplexing (WDM). WDM networks enable a single fiber to operate with multiple high-capacity channels, drastically increasing the fiber capacity. In WDM networks each channel is associated with an individual wavelength, so a whole wavelength's capacity is assigned to a connection, wasting bandwidth whenever the connection's bandwidth requirement is less than the channel's total capacity. In the last half decade, Elastic Optical Networks (EONs) have been proposed and developed based on the flexible use of the optical spectrum known as the flexigrid. EONs adapt to clients' requirements and may enhance optical network performance. For these reasons, the research community and data transport providers have shown increasing interest in EONs, which are likely to replace WDM as the universally adopted technology in backbone networks in the near future. EONs have two characteristics that may limit the efficient use of their resources. Spectrum fragmentation, inherent to dynamic EON operation, decreases the network's capacity to assign resources to connection requests, increasing the network blocking probability. Spectrum fragmentation also intensifies the denial of service to higher-rate requests, inducing service unfairness. Because EONs were developed and proposed only recently, these issues have not yet been extensively studied and solutions are still being proposed. Furthermore, EONs do not yet provide specific features such as differentiated service mechanisms. Differentiated service strategies are important in backbone networks to guarantee clients' diverse requirements in case of a network failure, or during the natural congestion and resource contention that may occur in a network at certain periods. Impelled by the foregoing facts, the objective of this thesis is threefold. By developing and proposing a mechanism for routing and resource assignment in EONs, we intend to provide differentiated service while decreasing the fragmentation level and increasing service fairness. The mechanism proposed and explained in this thesis was tested in an EON simulation environment, and performance results indicated that it yields beneficial performance enhancements compared with benchmark algorithms.
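To make the fragmentation problem concrete, the sketch below models a single link's spectrum as an array of frequency slots, assigns contiguous slots first-fit, and computes a simple external-fragmentation figure. It is a toy illustration under assumed parameters (32 slots, hypothetical demands), not the differentiated-service routing and spectrum assignment mechanism proposed in the thesis.

```python
from typing import List, Optional

def free_blocks(slots: List[bool]) -> List[int]:
    """Lengths of maximal runs of free slots (False = free) on one link."""
    blocks, run = [], 0
    for used in slots:
        if not used:
            run += 1
        elif run:
            blocks.append(run)
            run = 0
    if run:
        blocks.append(run)
    return blocks

def first_fit(slots: List[bool], demand: int) -> Optional[int]:
    """Allocate `demand` contiguous slots; return the start index or None (blocked)."""
    for start in range(len(slots) - demand + 1):
        if not any(slots[start:start + demand]):
            for i in range(start, start + demand):
                slots[i] = True
            return start
    return None

def external_fragmentation(slots: List[bool]) -> float:
    """1 - largest_free_block / total_free_slots (0 = unfragmented, -> 1 = fragmented)."""
    blocks = free_blocks(slots)
    total_free = sum(blocks)
    return 0.0 if total_free == 0 else 1.0 - max(blocks) / total_free

if __name__ == "__main__":
    link = [False] * 32                  # 32 frequency slots, all initially free
    for demand in (3, 4, 2, 6, 3):       # arriving connection requests (slot counts)
        print("request", demand, "-> start slot", first_fit(link, demand))
    # Tearing down a connection in the middle of the spectrum creates fragmentation.
    for i in range(3, 7):                # release the 4-slot connection
        link[i] = False
    print("fragmentation:", round(external_fragmentation(link), 2))
```

Higher fragmentation means that, even with plenty of free slots in total, a wide (higher-rate) request may find no contiguous block and be blocked, which is exactly the unfairness effect the abstract describes.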