901 results for design or documentation process


Relevance: 100.00%

Publisher:

Abstract:

La arquitectura histórica constituye un ámbito de notable singularidad dentro del patrimonio cultural, ya que representa uno de los máximos exponentes de la cultura material de las sociedades precedentes. Su adecuada conservación y preservación debe ir necesariamente precedida de un riguroso y profundo conocimiento de sus valores culturales, de ahí la importancia de las investigaciones en este campo. Entre todos los elementos que configuran las edificaciones históricas, son probablemente las bóvedas los elementos más singulares, dada su relevancia desde un punto de vista tanto estético como estructural y constructivo. Hasta la fecha, los estudios centrados en los abovedamientos medievales góticos han aportado visiones generales del conjunto de obras, estableciendo las pertinentes clasificaciones y poniendo de manifiesto la notable variedad de tipos de bóvedas de crucería. La presente investigación tiene su origen en la necesidad de profundizar en el conocimiento de este sistema constructivo mediante el estudio específico y sistemático de un tipo concreto de abovedamiento: las bóvedas de crucería rebajadas que sustentan los coros altos de los templos. En concreto, el análisis se ha centrado en aquellos abovedamientos construidos en la Corona de Castilla durante los reinados de los Reyes Católicos y Carlos I, puesto que es en este momento de transición entre el mundo medieval y la Edad Moderna, con una coexistencia de la tradición medieval y las nuevas ideas renacentistas, cuando se crean las más singulares obras. Por lo tanto, el trabajo desarrollado se ha centrado en el estudio e interpretación de los procesos de diseño, trazado y construcción específicos de cada una de las bóvedas. Más allá de un enfoque descriptivo o basado en una visión actual, se ha tratado de profundizar en los métodos, sistemas y recursos que los maestros canteros emplearon, lo que ha obligado a adoptar, en la medida de lo posible, la mentalidad y el conocimiento bajomedievales. Con estas premisas se ha desarrollado una investigación que necesariamente se ha apoyado en la contextualización histórica de cada una de las bóvedas, generándose un catálogo completo de las obras. Posteriormente, se ha desarrollado una toma de datos y un análisis individualizado de cada una de ellas, para poder obtener una interpretación de su proceso de diseño y construcción. Finalmente, se ha abordado un estudio comparativo del conjunto de las obras, poniendo en relación sus características históricas, geométricas, constructivas y estructurales. Ello ha permitido obtener unos resultados novedosos respecto a las principales cuestiones sobre el diseño y construcción de las bóvedas de crucería rebajadas, poniendo de relieve su singularidad y el profundo conocimiento de los maestros canteros que las crearon. De este modo, se ha pretendido avanzar en la investigación y sentar las bases para posteriores trabajos en el ámbito de los abovedamientos de crucería.

ABSTRACT

Historical architecture is a singular field within cultural heritage, since it is one of the most important exponents of the material culture of previous societies. Its proper conservation and restoration must be preceded by rigorous and deep knowledge of its cultural values, which is why research in this field is so important. The study of historical architecture has traditionally been developed from the viewpoint of the History of Art and Architecture. Thanks to this discipline, it has been possible to establish and systematize several architectural types and styles. However, there has been a lack of analyses focused on historical structural and constructive systems, which has recently been compensated by the gradual development of the discipline of Construction History. Among the several elements which form historical buildings, vaults are probably the most singular, thanks to their aesthetic, constructive and structural relevance. To date, studies focused on medieval Gothic vaults have provided general visions of the whole group of works, which has allowed defining the proper classifications and underlining the great variety of kinds of ribbed vaults. The present research has its origin in the need for deeper knowledge of this specific constructive system. For that reason, a specific and systematic analysis of a particular kind of vault has been developed, focused on the surbased ribbed vaults which support the elevated choirs of some churches. In particular, it includes the works built in the Crown of Castile during the reigns of the Catholic Monarchs and Carlos I, because it was at this precise moment of transition from the medieval world into the Modern Age, with a coexistence of the medieval tradition and the new classicist ideas, that the most singular and relevant surbased vaults were built. The analysis has thus been focused on the study and interpretation of the design, tracing and construction methods of each vault. More than a descriptive approach or an analysis based on our contemporary point of view and knowledge, this research has studied the methods, systems and resources of the master masons in depth. It has therefore been necessary to adopt, as far as possible, their mentality as well as late medieval knowledge. With the above-mentioned premises, the research has been developed including the historical contextualization of each vault, also providing a complete catalogue of such works. After obtaining the proper survey, measurements and other complementary data, each vault has been analyzed in order to develop a hypothesis of its design and construction process. Finally, a comparative study has been carried out, which has made it possible to relate the historical, geometrical, constructive and structural features of the whole group of vaults. This research has provided novel results about the design and construction of surbased ribbed vaults, underlining their singularity as well as the deep knowledge of the master masons who created them. In this way, we have tried to go further in this scientific field and to set the basis for later research focused on ribbed vaults.
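An aside not drawn from the thesis itself: the tracing analysis above turns on the geometry of surbased (segmental) rib arches. As a minimal sketch of the kind of setting-out calculation involved, assuming circular rib arcs (an illustrative assumption, not a claim about the surveyed vaults), the following recovers the radius and embraced arc of a rib from its span and rise:

```python
import math

def segmental_rib(span: float, rise: float):
    """Radius and embraced arc of a circular surbased rib.

    For a circular arc of chord `span` and sagitta `rise`:
        R = (rise^2 + (span/2)^2) / (2 * rise)
    """
    if not 0 < rise <= span / 2:
        raise ValueError("a surbased rib has 0 < rise <= span/2")
    half = span / 2.0
    radius = (rise**2 + half**2) / (2.0 * rise)
    # Half-angle subtended by the chord at the arc's centre.
    half_angle = math.asin(half / radius)
    return radius, math.degrees(2 * half_angle)

# Example: a 6 m span choir-vault rib rising only 1.5 m.
R, arc = segmental_rib(6.0, 1.5)
print(f"radius = {R:.2f} m, embraced arc = {arc:.1f} deg")  # 3.75 m, 106.3 deg
```

In this example the rise is a quarter of the span, well below the 3 m rise a semicircular arch of the same span would need, which is precisely what makes the vault surbased.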

Relevance: 100.00%

Publisher:

Abstract:

El auge que ha surgido en los últimos años por la reparación de edificios y estructuras construidas con hormigón ha llevado al desarrollo de morteros de reparación cada vez más tecnológicos. En el desarrollo de estos morteros por parte de los fabricantes, surge la disyuntiva en el uso de los polímeros en sus formulaciones, por no encontrarse justificado en ocasiones el trinomio prestaciones/precio/aplicación. En esta tesis se ha realizado un estudio exhaustivo para justificar la utilización de estos morteros como morteros de reparación estructural, como respuesta a la demanda actual, estructurándolo en tres partes:

En la primera parte se realizó un estudio del estado del arte de los morteros y sus constituyentes. El uso de los morteros se remonta a la antigüedad, utilizándose como componentes yeso y cal fundamentalmente. Los griegos y romanos desarrollaron el concepto de morteros de cal, introduciendo componentes como las puzolanas, cales hidráulicas y áridos de polvo de mármol, dando origen a morteros muy parecidos a los hormigones actuales. En la Edad Media y el Renacimiento se perdió la tecnología desarrollada por los romanos, debido al extenso uso de la piedra en las construcciones civiles, defensivas y religiosas. Hubo que esperar hasta el siglo XIX para que J. Aspdin descubriese el actual cemento como el principal compuesto hidráulico. Por último, ya en el siglo XX, con la aparición de moléculas tales como estireno, melamina, cloruro de vinilo y poliésteres, se comenzó a desarrollar la industria de los polímeros, que se añadieron a los morteros dando lugar a los “composites”. El uso de polímeros en matrices cementantes dota al mortero de propiedades tales como adherencia, flexibilidad y trabajabilidad, como ya se tiene constancia desde los años 30 con el uso de cauchos naturales. En la actualidad, el uso de polímeros de síntesis (polivinilacetato, estireno-butadieno, vinilacrílico y resinas epoxi) hace que el mortero tenga principalmente mayor resistencia al ataque del agua y, por lo tanto, aumente su durabilidad, ya que se minimizan todas las reacciones de deterioro (hielo, humedad, ataque biológico, …). En el presente estudio el polímero que se utilizó fue en estado polvo: un polímero redispersable. Estos polímeros están encapsulados y, cuando se ponen en contacto con el agua, se liberan de la cápsula formando de nuevo el gel. En los morteros de reparación el único compuesto hidráulico que hay es el cemento, hoy en día el principal constituyente de los materiales de construcción. El cemento se obtiene por molienda conjunta de clínker y yeso. El clínker se obtiene por cocción de una mezcla de arcillas y calizas hasta una temperatura de 1450-1500 °C, por reacción en estado fundente. Para esta reacción se deben premachacar y homogeneizar las materias primas extraídas de la cantera, que se dosifican en el horno en proporciones tales que cumplan una relación de óxidos que permita formar las fases anhidras del clínker: C3S, C2S, C3A y C4AF. De la hidratación de las fases se obtiene el gel CSH, que es el que proporciona al cemento sus propiedades. Existe una norma (UNE-EN 197-1) que establece la composición, especificaciones y tipos de cementos que se fabrican en España. La tendencia actual en la fabricación del cemento pasa por el uso de cementos con mayores contenidos de adiciones (cal, puzolana, cenizas volantes, humo de sílice, …) con el objeto de obtener cementos más sostenibles. Otros componentes que influyen en las características de los morteros son:
- Áridos. En el desarrollo de los morteros se suelen usar áridos naturales, bien calizos o silícicos. Hacen la función de relleno y de cohesión de la matriz cementante. Deben ser inertes.
- Aditivos. Son aquellos componentes del mortero que se dosifican en una proporción menor al 5 %. Los más usados son los superplastificantes, por su acción reductora de agua, que revierte en una mayor durabilidad del mortero.
Una vez analizada la composición de los morteros, su mejora tecnológica está orientada al aumento de la durabilidad de su vida en obra. La durabilidad se define como la capacidad que el material tiene de resistir la acción del ambiente y los ataques químicos, físicos, biológicos o cualquier proceso que tienda a su destrucción. Estos procesos dependen de factores tales como la porosidad del hormigón y la exposición al ambiente. En cuanto a la porosidad, hay que tener en cuenta la distribución de macroporos, mesoporos y microporos de la estructura del hormigón, ya que no todos son susceptibles de que en ellos se produzca el transporte de agentes deteriorantes, que provoca tensiones internas en sus paredes y destruye la matriz cementante. Por otro lado, los procesos de deterioro están relacionados con la acción del agua, bien como agente directo o como vehículo de transporte del agente deteriorante. Un ambiente que resulta muy agresivo para los hormigones es el marino. En este caso los procesos de deterioro están relacionados con la presencia de cloruros y de sulfatos, tanto en el agua de mar como en la atmósfera, que en combinación con el CO2 y el O2 forman la sal de Friedel. El deterioro de las estructuras en ambientes marinos se produce por la debilitación de la matriz cementante y la posterior corrosión de las armaduras, que provoca un aumento de volumen en el interior y la rotura de la matriz cementante por tensiones capilares. Otras reacciones que pueden producir estos efectos son la árido-álcali y la difusión de iones cloruro. La durabilidad de un hormigón también depende del tipo de cemento y su composición química (los cementos con altos contenidos de adición son más resistentes), de la relación agua/cemento y del contenido de cemento. La Norma UNE-EN 1504, que consta de 10 partes, define los productos para la protección y reparación de estructuras de hormigón, el control de calidad de los productos y las propiedades físico-químicas y de durabilidad que deben cumplir. En esta norma se referencian otras 65 normas que ofrecen los métodos de ensayo para la evaluación de los sistemas de reparación.

En la segunda parte de esta tesis se hizo un diseño de experimentos con diferentes morteros poliméricos (con concentraciones de polímero entre 0 y 25 %), tomando como referencia un mortero control sin polímero, y se estudiaron sus propiedades físico-químicas, mecánicas y de durabilidad. Para morteros con baja proporción de polímero se recurre a sistemas monocomponentes y, para concentraciones altas, a sistemas bicomponentes en los que el polímero está en dispersión acuosa. Las propiedades mecánicas medidas fueron: resistencia a compresión, resistencia a flexión, módulo de elasticidad, adherencia por tracción directa y expansión-retracción, todas ellas bajo normas UNE. Como ensayos de caracterización de la durabilidad: absorción capilar, resistencia a carbonatación y adherencia a tracción después de ciclos hielo-deshielo. El objeto de este estudio es seleccionar el mortero con mejor resultado general para posteriormente hacer una comparativa entre un mortero con polímero (cantidad optimizada) y un mortero sin polímero.
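As an aside not part of the original abstract: the anhydrous clinker phases named above (C3S, C2S, C3A, C4AF) are conventionally estimated from the oxide analysis with the Bogue equations. A minimal sketch follows, using the standard Bogue coefficients for an ordinary Portland clinker (verify against ASTM C150 practice before relying on them; the oxide values in the example are merely typical figures):

```python
def bogue(cao: float, sio2: float, al2o3: float, fe2o3: float, so3: float) -> dict:
    """Potential clinker phase composition (wt %) via the Bogue equations."""
    c3s = 4.071 * cao - 7.600 * sio2 - 6.718 * al2o3 - 1.430 * fe2o3 - 2.852 * so3
    c2s = 2.867 * sio2 - 0.7544 * c3s
    c3a = 2.650 * al2o3 - 1.692 * fe2o3
    c4af = 3.043 * fe2o3
    return {"C3S": c3s, "C2S": c2s, "C3A": c3a, "C4AF": c4af}

# Illustrative oxide analysis of an ordinary Portland clinker (wt %).
for phase, pct in bogue(cao=65.0, sio2=21.0, al2o3=5.5, fe2o3=3.0, so3=2.5).items():
    print(f"{phase}: {pct:.1f} %")
```

The C3S/C2S balance estimated this way is what governs the hydration that forms the CSH gel mentioned above.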
Para seleccionar esa cantidad óptima de polímero se han tenido en cuenta los siguientes criterios: el mortero debe tener una clasificación R4 en cuanto a prestaciones mecánicas, al igual que al evaluar sus propiedades de durabilidad frente a los ciclos realizados, siempre teniendo en cuenta que la adición de polímero no puede ser elevada si se quiere que el mortero sea competitivo. De este estudio se obtuvieron las siguientes conclusiones generales:
- Un mortero normalizado no cumple las propiedades para ser clasificado como R3 o R4.
- Sin necesidad de polímero se puede obtener un mortero que cumpliría con R4 para gran parte de las características medidas.
- Es necesario usar relaciones a/c < 0.5 para conseguir morteros R4.
- La adición de polímero mejora siempre la adherencia, la resistencia a abrasión, la absorción capilar y la resistencia a carbonatación.
- Las diferentes proporciones de polímero usadas suponen siempre una mejora tecnológica en propiedades mecánicas y de durabilidad.
- El polímero no influye sobre la expansión y retracción del mortero.
- La adherencia se mejora notablemente con el uso del polímero.
- La presencia de polímero en los morteros mejora las propiedades relacionadas con la acción del agua, por aumento del poder cementante y, por lo tanto, de la cohesión. El poder cementante disminuye la porosidad.
Como consecuencia final de este estudio se determinó que la cantidad óptima de polímero es del 2.0-3.5 %.

La tercera parte consistió en el estudio comparativo de dos morteros: uno sin polímero (mortero A) y otro con la cantidad optimizada de polímero concluida en la parte anterior (mortero B). Una vez definido el porcentaje de polímero que mejor se adapta a los resultados, se plantea un nuevo esqueleto granular mejorado, tomando una nueva dosificación de tamaños de áridos tanto para el mortero de referencia como para el mortero con polímero, y se procede a realizar los ensayos para su caracterización física, microestructural y de durabilidad. Además de los ensayos de la primera parte, se midieron las propiedades microestructurales, estudiadas mediante las técnicas de porosimetría de mercurio y microscopía electrónica de barrido (SEM), así como las propiedades del mortero en estado fresco (consistencia, contenido de aire ocluido y tiempo final de fraguado). El uso del polímero, frente a su no incorporación en la formulación del mortero, proporcionó al mismo las siguientes ventajas:
- Respecto a sus propiedades en estado fresco: el mortero B presentó mayor consistencia y menor cantidad de aire ocluido, lo cual da un mortero más trabajable y más dúctil, al igual que más resistente, porque al endurecer dejará menos huecos en su estructura interna y aumentará su durabilidad. Su mayor tiempo de fraguado, sin ser excesivo, permite además una mayor manejabilidad para la puesta en obra.
- Respecto a sus propiedades mecánicas: destaca la mejora en la adherencia, una de las principales propiedades que confiere el polímero a los morteros. Esta mayor adherencia revierte en una mejor adherencia al soporte, en la minimización de las posibles reacciones en la interfase hormigón-mortero y, por lo tanto, en un aumento de la durabilidad de la reparación ejecutada con el mortero y, en consecuencia, del hormigón.
- Respecto a propiedades microestructurales: la porosidad del mortero con polímero es menor, con un menor tamaño de poro crítico susceptible de ser atacado por agentes externos causantes de deterioro. De los datos obtenidos por SEM no se observaron grandes diferencias.
- En cuanto a abrasión y absorción capilar, el mortero B presentó mejor comportamiento, como consecuencia de su menor porosidad y su estructura microscópica.
- Por último, el comportamiento frente al ataque de sulfatos y agua de mar, así como frente a la carbonatación, fue más resistente en el mortero con polímero, por su menor permeabilidad y su menor porosidad.
Para completar el estudio de esta tesis, y debido a la gran importancia que están tomando en la actualidad factores como la sostenibilidad, se ha realizado un análisis de ciclo de vida de los dos morteros objeto de estudio de la segunda parte experimental.

ABSTRACT

In recent years, the extended use of repair materials for buildings and structures has led to the development of increasingly technological repair mortars. In the development of these mortars by producers, the use of polymers in the formulations is a key point, because sometimes this use is not justified when looking at performance/price/application as a whole. This thesis is an exhaustive study to justify the use of these mortars as a response to the current growing demand for structural repair. The thesis is organized in three parts:

The first part is the study of the state of the art of mortars and their constituents. In ancient times, widely used mortars were based on lime and gypsum. The Greeks and Romans developed the concept of lime mortars, introducing components such as pozzolans, hydraulic limes and marble dust as aggregates, giving mortars very similar to current concretes. In the Middle Ages and the Renaissance, the technology developed by the Romans was lost, due to the extensive use of stone in civil, religious and defensive constructions. It was not until the 19th century that J. Aspdin discovered the current cement as the main hydraulic compound. Finally, in the 20th century, with the appearance of molecules such as styrene, melamine, vinyl chloride and polyester, the polymer industry began to develop, and polymers were added to the binder to form special “composites”. The use of polymers in cementitious matrixes gives the mortar properties such as adhesion, flexibility and workability, as has been known since the 1930s with the use of natural rubbers. Currently, the use of synthetic polymers (polyvinyl acetate, styrene-butadiene, vinyl acrylic and epoxy resins) means that mortars mainly have increased resistance to water attack and therefore increased durability, since all deterioration reactions are minimised (ice, humidity, biological attack, ...). In the present study the polymer used was a redispersible polymer powder. These polymers are encapsulated and, when in contact with water, they are released from the capsule, forming the gel again. In repair mortars, the only hydraulic compound is the cement, which is nowadays the main constituent of building materials. The current trend is centred on the use of higher contents of additions (lime, pozzolana, fly ash, silica fume, ...) in order to obtain more sustainable cements. Once the composition of mortars is analyzed, their technological improvement is centred on increasing the durability of their working life. Durability is defined as the ability to resist the action of the environment, chemical, physical and biological attacks, or any process that tends to destroy the material. These processes depend on factors such as the concrete porosity and the environmental exposure. In terms of porosity, the distribution of macropores, mesopores and micropores in the concrete structure must be considered, since not all of them allow the transport of damaging agents, which causes internal stresses on the pore walls and destroys the cementing matrix. In general, deterioration processes are related to the action of water, either as a direct agent or as a transport vehicle. Concrete durability also depends on the type of cement and its chemical composition (cements with high addition contents are more resistant), the water/cement ratio and the cement content. The standard UNE-EN 1504 consists of 10 parts and defines the products for the protection and repair of concrete, the quality control of the products, and the physical-chemical and durability properties they must fulfil. Another 65 standards that provide the test methods for the evaluation of repair systems are referenced in this standard.

In the second part of this thesis, a design of experiments with different polymer mortars (with polymer concentrations between 0 and 25 %) was carried out, taking a control mortar without polymer as a reference, and their physico-chemical, mechanical and durability properties were studied. For mortars with a low proportion of polymer, one-component systems (powder polymer) are used and, for high polymer concentrations, water-dispersion polymers are used. The mechanical properties measured were: compressive strength, flexural strength, modulus of elasticity, adhesion by direct traction and expansion-shrinkage, all of them under UNE standards. As characterization of durability, the following tests were carried out: capillary absorption, resistance to carbonation and pull-off adhesion after freeze-thaw cycles. The target of this study is to select the mortar with the best overall result in order to then compare a mortar with polymer (optimized amount) and a mortar without polymer. To select the optimum amount of polymer, the following criteria were considered: the mortar must reach an R4 classification in terms of mechanical performance as well as in durability properties against the performed cycles, always bearing in mind that the addition of polymer cannot be too high if the mortar is to remain competitive in price. The following general conclusions were obtained from this study:
- A standard mortar does not fulfil the properties to be classified as R3 or R4.
- Without polymer, a mortar may fulfil R4 for most of the measured characteristics.
- It is necessary to use w/c ratios < 0.5 to obtain R4 mortars.
- The addition of polymer always improves adhesion, abrasion resistance, capillary absorption and carbonation resistance.
- The different proportions of polymer used always improve the mechanical properties and durability.
- The polymer has no influence on the expansion and shrinkage of the mortar.
- Adhesion is improved significantly with the use of polymer.
- The presence of polymer in mortars improves the properties related to the action of water, by increasing the cementing capacity and therefore the cohesion. The cementing capacity decreases the porosity.
As the final result of this study, it was determined that the optimum amount of polymer is 2.0-3.5 %.

The third part is the comparative study of two mortars: one without polymer (mortar A) and another with the optimized amount of polymer determined in the previous part (mortar B). Once the percentage of polymer is defined, a new granular skeleton is proposed, with a new dosing of aggregate sizes, for both the reference mortar and the mortar with polymer, and the tests for physical, microstructural and durability characterization are performed. In addition to the tests of the first part, the microstructural properties were studied by scanning electron microscopy (SEM) and mercury porosimetry, as well as the properties of the mortar in the fresh state (consistency, entrained air content and final setting time). The use of polymer, versus its absence from the formulation, provided the mortar with the following advantages:
- In the fresh state: the mortar with polymer presented higher consistency and a lower amount of entrained air, which makes the mortar more workable and more ductile, as well as more resistant, because on hardening it will leave fewer gaps in its internal structure and increase its durability. The longer (but not excessive) setting time also allows better workability for placement on site.
- Regarding the mechanical properties: the improvement in adhesion stands out, as it is one of the main properties that the polymer gives to mortars. This higher adhesion results in improved adhesion to the substrate, minimization of possible reactions at the concrete-mortar interface and, therefore, an increase in the durability of the repair carried out with the mortar and, consequently, of the concrete.
- With respect to microstructural properties: the porosity of the mortar with polymer is lower, with a smaller critical pore size susceptible to attack by external deteriorating agents. No major differences were observed in the data obtained by SEM.
- In terms of abrasion and capillary absorption, mortar B presented better performance as a result of its lower porosity and its microscopic structure.
- Finally, the behaviour against attack by sulfates and seawater, as well as against carbonation, was better in the mortar with polymer because of its lower permeability and lower porosity.
To complete the study, and given the great importance that sustainability is currently acquiring, a life cycle analysis of the two mortars studied in the experimental part was carried out.
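The R4/R3 selection criterion described above lends itself to a simple screening pass over the experimental series. The sketch below is illustrative only: the thresholds are assumed indicative values taken from EN 1504-3 (compressive strength ≥ 45 MPa and bond ≥ 2.0 MPa for R4; ≥ 25 MPa and ≥ 1.5 MPa for R3 — check the standard before relying on them), and the data rows and field names are invented:

```python
# Hypothetical screening of candidate formulations against EN 1504-3 classes.
R_CLASSES = [  # (class, min compressive strength MPa, min pull-off bond MPa)
    ("R4", 45.0, 2.0),
    ("R3", 25.0, 1.5),
]

def classify(compressive_mpa: float, bond_mpa: float) -> str:
    """Return the highest structural repair class the measurements reach."""
    for name, min_fc, min_bond in R_CLASSES:
        if compressive_mpa >= min_fc and bond_mpa >= min_bond:
            return name
    return "below R3"

# Invented series: polymer dosage (% of binder) vs. measured results.
series = [
    {"polymer": 0.0, "fc": 48.0, "bond": 1.6},
    {"polymer": 2.5, "fc": 52.0, "bond": 2.3},
    {"polymer": 10.0, "fc": 46.0, "bond": 2.8},
]
for m in series:
    print(f"{m['polymer']:>4.1f} % polymer -> {classify(m['fc'], m['bond'])}")
```

A cost-aware selection would then take the lowest dosage that still reaches R4, which is consistent with the 2.0-3.5 % optimum reported above.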

Relevance: 100.00%

Publisher:

Abstract:

Distributed computing models typically assume that every process in the system has a distinct identifier (ID) or that each process is programmed differently; such a system is called an eponymous system. In this kind of distributed system, unique IDs are helpful for solving problems: they can be incorporated into messages to make them traceable (i.e., to identify which process they are sent to or from), facilitating message transmission; several problems (leader election, consensus, etc.) can be solved without a priori information about network properties if processes have unique IDs; values in the register of one process will not be overwritten by other processes when that process announces itself; and they are useful for breaking symmetry. Hence, eponymous systems have influenced the distributed computing community significantly, both in theory and in practice. However, unique IDs also have disadvantages: they can leak information about the network (e.g., its size); processes in the system have no privacy; and assigning unique IDs is costly in bulk production (e.g., sensors). Hence, homonymous systems appeared. A system in which some processes share the same ID and are programmed identically is called a homonymous system. Furthermore, a system in which all processes share the same ID, or have no ID at all, is called an anonymous system. In homonymous or anonymous distributed systems, the symmetry problem (i.e., how to distinguish which process sent a message) is the main obstacle in the design of algorithms. This thesis proposes different symmetry-breaking methods (e.g., random functions, counting techniques, etc.) to solve agreement problems. Agreement is a fundamental problem in distributed computing comprising a family of abstractions. In this thesis, we mainly focus on the design of consensus, set agreement and broadcast algorithms in anonymous and homonymous distributed systems. Firstly, the fault-tolerant broadcast abstraction is studied in anonymous systems with reliable or fair lossy communication channels separately. Two classes of anonymous failure detectors, AΘ and AP∗, are proposed; both of them, together with the already proposed failure detector ψ, are implemented and used to enrich the system model in order to implement the broadcast abstraction. Then, in the study of the consensus abstraction, it is proved that the failure detector class AΩ′ is strictly weaker than AΩ and that AΩ′ is implementable. The first implementation of consensus in anonymous asynchronous distributed systems augmented with AΩ′, where a majority of processes do not crash, is presented. Finally, a generalization of consensus, the k-set agreement problem, is studied together with the weakest failure detector L needed to solve it, in asynchronous message-passing systems where processes may crash and recover, with homonyms (i.e., processes may have equal identities), and without complete initial knowledge of the membership.
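As an aside, the random-function approach to symmetry breaking mentioned above can be illustrated with a toy simulation. This sketch is not from the thesis; it only shows why random draws let identically programmed anonymous processes differentiate themselves with high probability:

```python
import random

def try_elect(n_processes: int, id_space: int) -> bool:
    """One round: each anonymous process draws a random value.

    Symmetry is broken if exactly one process drew the maximum
    (that process can then act as leader); otherwise the round fails.
    """
    draws = [random.randrange(id_space) for _ in range(n_processes)]
    return draws.count(max(draws)) == 1

def rounds_until_broken(n_processes: int, id_space: int = 1 << 16) -> int:
    rounds = 1
    while not try_elect(n_processes, id_space):
        rounds += 1
    return rounds

random.seed(0)
trials = [rounds_until_broken(50) for _ in range(1000)]
print("average rounds to break symmetry:", sum(trials) / len(trials))
```

With 50 processes drawing from a 2^16 space, ties at the maximum are rare, so symmetry is usually broken in the first round; deterministic techniques such as counting trade this probabilistic guarantee for stronger model assumptions.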

Relevance: 100.00%

Publisher:

Abstract:

La industria metalúrgica estatal venezolana ha vivido, desde sus inicios, procesos cíclicos de cambios y ajustes tecnológicos. Estos procesos no han sido objeto de una sistematización que asegure el aprendizaje y la apropiación del conocimiento. Este hecho, aún hoy, ha obstaculizado los procesos de apropiación y mejora de las tecnologías asociadas al sector. A partir del acompañamiento a iniciativas de participación de grupos de interés surgidos del seno de los trabajadores, se planteó esta investigación, que tuvo como propósito la determinación de condiciones y relaciones para su participación directa en los procesos de mejora de las tecnologías existentes y el fortalecimiento del aprendizaje asociado. Se consideraron dos ámbitos latinoamericanos donde hay manifestaciones de gestión colectiva y participación: Venezuela y Argentina. En el caso venezolano, el abordaje se realizó bajo la Investigación Acción Participativa (IAP), desarrollando la “investigación próxima” como estrategia de acompañamiento, mediante “talleres de formación-investigación” y la sistematización de experiencias considerando la perspectiva y necesidades de los actores. En el caso argentino, el abordaje se realizó mediante visitas, entrevistas, reuniones y encuentros. Los talleres realizados en Venezuela, en un contexto de diálogo de saberes, facilitaron el surgimiento de herramientas prácticas para la sistematización de su propia experiencia (“preguntas generadoras”, “Mi historia con la tecnología”, “bitácora de aprendizaje”). El intercambio con los pares argentinos ha generado una red que plantea la posibilidad de construcción y nucleación conjunta de saberes y experiencia, tanto para los trabajadores como para los investigadores. Los casos estudiados referidos a las empresas recuperadas por los trabajadores (ERTs) argentinas evidencian un proceso de participación marcado por su autonomía en la gestión de la empresa, dadas las circunstancias que los llevaron a asumirla para conservar sus puestos de trabajo. De estos casos emergieron categorías asociadas con elementos de gestión de un proceso técnico-tecnológico, como la participación en la planificación, concepción o diseño de la mejora. La participación en general está asociada al hecho asambleario, vinculado a las prácticas de toma de decisiones autogestionarias como expresión de una alta participación. La Asamblea, como máxima instancia de participación, y el Consejo de Administración son las formas de participación prevalecientes. En cuanto al aprendizaje, los trabajadores de las ERTs argentinas aportaron categorías de gran significación para los procesos de socialización del conocimiento: conocimiento colectivo y cooperación del conocimiento, rescate de los saberes y formación de trabajadores que tomen el relevo. Las categorías surgidas de las ERTs argentinas, los referentes teóricos y el interés de los trabajadores venezolanos fueron la base para la valoración tanto de su grado de participación en las mejoras a procesos tecnológicos emprendidas como del aprendizaje asociado. Esta valoración se realizó bajo una aproximación borrosa, dado el carácter ambiguo de estas categorías, que fueron trabajadas como conjuntos que se relacionan más que como variables. Se encontró que la participación se configura como un subconjunto del aprendizaje que contribuye a su fortalecimiento. Las condiciones y relaciones para fortalecer la participación en los asuntos tecnológicos surgieron a partir de la sistematización y síntesis de ambas experiencias (Venezuela y Argentina), conjugando una estructura que contempla la formación para la nucleación de colectivos de saberes (proyectos de mejora o innovaciones), las redes por afinidad, la sistematización de su propia experiencia técnica y los enlaces institucionales. Estos resultados dan cuenta de la integración de los intereses de las partes (trabajadores, investigadores, instituciones) mediante las estrategias de encuentro, de sistematización de los propios métodos y de conformación de los “colectivos de saberes”, y la red de IAP en la industria (IAP Industrial), considerando la “deriva de la investigación”, bajo discursos práctico-teóricos propios, como posibilidad de posicionamiento de su participación en los asuntos tecnológicos de sus respectivas organizaciones, abriendo una oportunidad de ampliación de la experiencia a otros ámbitos y sectores.

ABSTRACT

Venezuela's state-owned steel industry has experienced, since its earliest years, cycles of change and technological adjustment. These processes have not been systematized in a way that ensures learning and the appropriation of knowledge in those organizations. This fact, even today, has hindered the processes of appropriation and improvement of the technologies associated with the sector. In order to support participation initiatives of interest groups that emerged from among the workers, this research was aimed at determining conditions and relations for their direct participation in the improvement of existing technologies and at strengthening the associated learning. Two Latin American countries, Venezuela and Argentina, were considered on the basis of their collective management and participation experiences. The Venezuelan approach was carried out under the Participatory Action Research (PAR) strategy, through ‘proximal research’ as a support strategy, by means of ‘training-research workshops’ and the systematization of experiences considering the perspective and needs of the actors. Workshops were carried out in steel and aluminium metallurgical enterprises at Guayana, Venezuela, and their affiliates in the Central region; those industries have promoted collective management. The Argentine approach was carried out through visits, interviews, meetings and gatherings. The workshops held in Venezuela, in a context of dialogue of knowledge, facilitated the emergence of tools for the systematization of the workers' own experience (‘generating questions’, ‘My history with technology’, ‘learning log’). The relation with the Argentine peers has generated a network that creates opportunities for the joint construction and nucleation of knowledge and experience, for both workers and researchers. The cases studied concerning the Argentine workers' recuperated enterprises (ERTs) show a participatory process marked by autonomy in the management of the factory, given the circumstances that led the workers to take it over in order to keep their jobs. From these cases emerged categories associated with the management of a technical-technological process, such as participation in the planning, design or implementation of the improvement. Participation, in general, is associated with assemblies, joined to the practices of self-managed decision-making as an expression of high participation. The Cooperative General Assembly, as the highest instance of participation, and the Board of Directors are the prevalent forms of participation. In relation to learning, the Argentine workers' recuperated enterprises provided categories of great significance for the socialization of knowledge: collective knowledge and knowledge cooperation, the recovery of knowledge, and the training of workers to take over. Based on the categories arising from the Argentine experience, the theoretical framework and the interest of the Venezuelan workers, the assessment of both their degree of participation in technical improvements and the associated technological learning was made using a fuzzy approach, given the ambiguous nature of these categories, which were treated as related sets rather than as variables. It was found that participation is configured as a subset of learning, contributing to its strengthening. The conditions and relations to strengthen participation in technological matters emerged from the systematization and synthesis of both experiences (Venezuela and Argentina), combining a structure that provides training for the nucleation of collectives of knowledge (improvement projects or innovations), affinity networks, the systematization of the workers' own technical expertise, and institutional links. These results show the integration of the interests of the stakeholders (workers, researchers, institutions) through strategies such as meetings, the systematization of their own methods, the formation of ‘collectives of technological knowledge’ and a participatory action research network in this industry (Industrial PAR) considering the ‘research drift’, under their own practical-theoretical discourses, positioned as a possibility for their participation in the technological matters of their respective organizations, opening an opportunity for scaling the experience to other areas and sectors.
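The fuzzy reading above ("participation is configured as a subset of learning") can be made concrete with Kosko's subsethood measure, S(A, B) = Σ min(aᵢ, bᵢ) / Σ aᵢ. The sketch below is only an illustration of that idea with invented membership degrees; the thesis's actual fuzzy operators and data are not given in the abstract:

```python
def subsethood(a: list[float], b: list[float]) -> float:
    """Kosko's degree to which fuzzy set A is contained in fuzzy set B."""
    assert len(a) == len(b)
    return sum(min(x, y) for x, y in zip(a, b)) / sum(a)

# Hypothetical membership degrees, one per assessed category.
participation = [0.6, 0.5, 0.8, 0.3]
learning = [0.7, 0.4, 0.9, 0.6]
print(f"S(participation in learning) = {subsethood(participation, learning):.2f}")
```

A value close to 1 expresses that, category by category, participation is (almost) contained in the associated learning, which is the relation the study reports.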

Relevance: 100.00%

Publisher:

Abstract:

La metodología Integrated Safety Analysis (ISA), desarrollada en el área de Modelación y Simulación (MOSI) del Consejo de Seguridad Nuclear (CSN), es un método de Análisis Integrado de Seguridad que está siendo evaluado y analizado mediante diversas aplicaciones impulsadas por el CSN; el análisis integrado de seguridad combina las técnicas evolucionadas de los análisis de seguridad al uso: deterministas y probabilistas. Se considera adecuado para sustentar la Regulación Informada por el Riesgo (RIR), actual enfoque dado a la seguridad nuclear, que está siendo desarrollado y aplicado en todo el mundo. En este contexto se enmarcan los proyectos Safety Margin Action Plan (SMAP) y Safety Margin Assessment Application (SM2A), impulsados por el Comité para la Seguridad de las Instalaciones Nucleares (CSNI) de la Agencia de la Energía Nuclear (NEA) de la Organización para la Cooperación y el Desarrollo Económicos (OCDE) para el desarrollo del enfoque adecuado para el uso de las metodologías integradas en la evaluación del cambio en los márgenes de seguridad debidos a cambios en las condiciones de las centrales nucleares. El comité constituye un foro para el intercambio de información técnica y de colaboración entre las organizaciones miembro, que aportan sus propias ideas en investigación, desarrollo e ingeniería. La propuesta del CSN es la aplicación de la metodología ISA, especialmente adecuada para el análisis según el enfoque desarrollado en el proyecto SMAP, que pretende obtener los valores best-estimate con incertidumbre de las variables de seguridad que se comparan con los límites de seguridad, para obtener la frecuencia con la que estos límites son superados. La ventaja que ofrece la ISA es que permite el análisis selectivo y discreto de los rangos de los parámetros inciertos que tienen mayor influencia en la superación de los límites de seguridad, o frecuencia de excedencia del límite, permitiendo así evaluar los cambios producidos por variaciones en el diseño u operación de la central que serían imperceptibles o complicados de cuantificar con otro tipo de metodologías. La ISA se engloba dentro de las metodologías de APS dinámico discreto que utilizan la generación de árboles de sucesos dinámicos (DET) y se basa en la Theory of Stimulated Dynamics (TSD), teoría de fiabilidad dinámica simplificada que permite la cuantificación del riesgo de cada una de las secuencias. Con la ISA se modelan y simulan todas las interacciones relevantes en una central: diseño, condiciones de operación, mantenimiento, actuaciones de los operadores, eventos estocásticos, etc. Por ello requiere la integración de códigos de: simulación termohidráulica y procedimientos de operación; delineación de árboles de sucesos; cuantificación de árboles de fallos y sucesos; tratamiento de incertidumbres e integración del riesgo. La tesis contiene la aplicación de la metodología ISA al análisis integrado del suceso iniciador de la pérdida del sistema de refrigeración de componentes (CCWS), que genera secuencias de pérdida de refrigerante del reactor a través de los sellos de las bombas principales del circuito de refrigerante del reactor (SLOCA). Se utiliza para probar el cambio en los márgenes, con respecto al límite de la máxima temperatura de pico de vaina (1477 K), que sería posible en virtud de un potencial aumento de potencia del 10 % en el reactor de agua a presión de la C.N. Zion. El trabajo realizado para la consecución de la tesis, fruto de la colaboración de la Escuela Técnica Superior de Ingenieros de Minas y Energía y la empresa de soluciones tecnológicas Ekergy Software S.L. (NFQ Solutions) con el área MOSI del CSN, ha sido la base para la contribución del CSN en el ejercicio SM2A. Este ejercicio ha sido utilizado como evaluación del desarrollo de algunas de las ideas, sugerencias y algoritmos detrás de la metodología ISA. Como resultado se ha obtenido un ligero aumento de la frecuencia de excedencia del daño (DEF) provocado por el aumento de potencia. Este resultado demuestra la viabilidad de la metodología ISA para obtener medidas de las variaciones en los márgenes de seguridad provocadas por modificaciones en la planta. También se ha mostrado que es especialmente adecuada en escenarios donde los eventos estocásticos o las actuaciones de recuperación o mitigación de los operadores pueden tener un papel relevante en el riesgo. Los resultados obtenidos no tienen más validez que la de mostrar la viabilidad de la metodología ISA. La central nuclear en la que se aplica el estudio está clausurada y la información relativa a sus análisis de seguridad es deficiente, por lo que han sido necesarias asunciones sin comprobación o aproximaciones basadas en estudios genéricos o de otras plantas. Se han establecido tres fases en el proceso de análisis: primero, obtención del árbol de sucesos dinámico de referencia; segundo, análisis de incertidumbres y obtención de los dominios de daño; y tercero, cuantificación del riesgo. Se han mostrado diversas aplicaciones de la metodología y las ventajas que presenta frente al APS clásico. También se ha contribuido al desarrollo del prototipo de herramienta para la aplicación de la metodología ISA (SCAIS).

ABSTRACT

The Integrated Safety Analysis (ISA) methodology, developed in the Modelling and Simulation (MOSI) area of the Consejo de Seguridad Nuclear (CSN), is being assessed in various applications encouraged by the CSN. An Integrated Safety Analysis merges the evolved techniques of the usual safety analysis methodologies, deterministic and probabilistic. It is considered a suitable tool for assessing risk within a Risk-Informed Regulation framework, the approach that is being developed and adopted for nuclear safety around the world. Within this policy framework, the projects Safety Margin Action Plan (SMAP) and Safety Margin Assessment Application (SM2A), set up by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency (NEA) within the Organisation for Economic Co-operation and Development (OECD), aimed at obtaining a methodology, and its application, for the integration of risk and safety margins in the assessment of changes to the overall safety resulting from changes in plant conditions. The committee provides a forum for the exchange of technical information and cooperation among member organizations, which contribute their respective approaches in research, development and engineering. The ISA methodology, proposed by the CSN, fits especially well with the SMAP approach, which aims at obtaining Best Estimate Plus Uncertainty values of the safety variables to be compared with the safety limits. This makes it possible to obtain the exceedance frequencies of the safety limits. The ISA has the advantage over other methods of allowing the specific and discrete evaluation of the uncertain parameters that are most influential on the limit exceedance frequency. In this way, the changes due to design or operation variations, imperceptible or difficult to quantify with other methods, are correctly evaluated. The ISA methodology is one of the discrete methodologies of the dynamic PSA framework that use the generation of dynamic event trees (DET). It is based on the Theory of Stimulated Dynamics (TSD), a simplified version of the theory of Probabilistic Dynamics that allows the quantification of the risk of each sequence. The ISA models and simulates all the important interactions in a nuclear power plant: design, operating conditions, maintenance, operator actions, stochastic events, etc. To that end, it requires the integration of codes for: thermal-hydraulic simulation and operating procedures; event tree delineation; fault tree and event tree quantification; uncertainty treatment and risk integration. This dissertation presents the application of the ISA methodology to the integrated analysis of the initiating event of loss of the Component Cooling Water System (CCWS), which generates sequences of loss of reactor coolant through the seals of the reactor coolant pumps (SLOCA). It is used to test the change in margins, with respect to the maximum peak clad temperature limit (1477 K), that would result from a potential 10 % power up-rate in the pressurized water reactor of Zion NPP. The work done to achieve this thesis, fruit of the collaboration of the School of Mining and Energy Engineering and the technological solutions company Ekergy Software S.L. (NFQ Solutions) with the MOSI area of the CSN, has been the basis for the contribution of the CSN to the SM2A exercise. This exercise has been used as an assessment of the development of some of the ideas, suggestions and algorithms behind the ISA methodology. A slight increase in the Damage Exceedance Frequency (DEF) caused by the power up-rate has been obtained. This result shows that the ISA methodology allows quantifying the safety margin change when design modifications are performed in an NPP, and that it is especially suitable for scenarios where stochastic events or operator recovery and mitigation actions play an important role in the risk. The results have no validity beyond showing the viability of the ISA methodology: Zion NPP has been retired and the information on its safety analyses is scarce, so unverified assumptions and approximations based on generic or other-plant studies have been required. Three phases were established in the analysis process: first, obtaining the reference dynamic event tree; second, uncertainty analysis and obtaining the damage domains; and third, risk quantification. Various applications of the methodology, and its advantages over classical PSA, have been shown. The work has also contributed to the development of the prototype tool for the application of the ISA methodology (SCAIS).
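Not from the thesis itself, the following toy sketch illustrates the kind of computation behind a limit exceedance frequency: sample the uncertain parameters of a sequence, evaluate a (here, deliberately fake) surrogate for the peak clad temperature, and weight the exceedance indicator by the sequence frequency. The surrogate model, distributions and numbers are all invented for illustration; the real analysis uses coupled thermal-hydraulic codes and operating procedures within SCAIS:

```python
import random

CLAD_LIMIT_K = 1477.0    # peak clad temperature limit used above
SEQ_FREQUENCY = 1.2e-4   # hypothetical frequency of the SLOCA sequence (/yr)

def peak_clad_temperature(seal_leak_kgps: float, recovery_s: float) -> float:
    """Invented surrogate: hotter with a larger leak and a later recovery.

    Stands in for a thermal-hydraulic simulation of the sequence.
    """
    return 600.0 + 14.0 * seal_leak_kgps + 0.35 * recovery_s

def damage_exceedance_frequency(samples: int = 100_000) -> float:
    hits = 0
    for _ in range(samples):
        leak = random.uniform(5.0, 40.0)         # uncertain leak rate (kg/s)
        recovery = random.expovariate(1 / 900)   # uncertain operator recovery time (s)
        if peak_clad_temperature(leak, recovery) > CLAD_LIMIT_K:
            hits += 1
    return SEQ_FREQUENCY * hits / samples

random.seed(1)
print(f"DEF ~ {damage_exceedance_frequency():.3e} per year")
```

In the actual methodology the exceedance indicator is evaluated on the damage domains obtained per sequence of the dynamic event tree, and the contributions of all sequences are then integrated into the risk.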

Relevance: 100.00%

Publisher:

Abstract:

The concept of the growing house, as we know it today, was coined in 1932 in the competition Das Wachsende Haus, organized by Martin Wagner and Hans Poelzig within the framework of the International Exhibition Sonne, Luft und Haus für alle, promoted by the Tourist Office of the city of Berlin. In that competition, this type of dwelling was defined as a basic cell or seed house which, depending on the needs and means of its inhabitants, could grow by adding further rooms, constituting a complete dwelling in itself at every stage of growth. Leading architects such as Walter Gropius, Bruno Taut, Erich Mendelsohn and Hans Scharoun took part in the competition, opening a new line of exploration within flexible housing: programmed growth. From that moment on, in Europe, and subsequently in the USA and other developed regions, numerous theoretical and practical investigations into the phenomenon of housing growth were undertaken from an approach tied to innovation, both spatial and technical. Meanwhile, within the vernacular architecture of other countries, growing houses had already been tried since the eighteenth century because their small size made them more affordable on the housing market. From the 1930s onwards, many developing countries had to cope with massive rural-to-urban migration, and the large housing estates built in response were, in many cases, made up of growing houses. In all of them, housing growth was approached from a perspective different from that of the developed countries: economy of means took priority, low-cost building systems were used and, in many cases, guided self-construction was even encouraged, as opposed to the prefabricated constructions assembled by specialized technicians proposed, for example, in the European cases.

To carry out this research, information on these and other dwellings was gathered. Distinct ways of producing growth were then identified according to their position relative to the seed house; these were termed enlargement mechanisms, and they turn out to be used interchangeably regardless of the geographical location of each house. The question of why one mechanism is preferred over another in a given case prompted the main objective of this thesis: the development of a system of analysis and diagnosis for the growing house which, according to certain parameters, indicates the optimal enlargement, or sequence of enlargements, for a specific family in a given location. The starting point was the idea that the growth of the dwelling is closely tied to the evolution of the household living in it, so that the house becomes a dynamic habitat. The complexity and variability of the phenomenon were also considered: it is subject to numerous socio-economic factors that are difficult to foresee over time yet easy to monitor according to certain patterns linked to regulations, the number of inhabitants, average savings, and so on. Consequently, evolutionary patterns were used to design the optimization system for the growing house. These patterns, far removed from the spatial and morphological concept usually employed in architecture by figures such as C. Alexander or J. Habraken, came to be understood as a sequence of events in time (spatial, social, economic, legal, etc.) that describes the transformation process and is peculiar to each dwelling. Time thus acquired special importance, becoming one more material of the architectural project.

It was in the construction of these patterns that the aforementioned enlargement mechanisms were identified, understood also as systems for compacting the city through the three-dimensional occupation of space. In studying density, through the concepts of spaciousness and overcrowding, urban congestion was accepted as a positive value. In this way, the transformations carried out by the inhabitants (planned from the outset) on the setting of their dwelling (the seed house) also became tools of urban design, responding to the conditions of the place and of the inhabitants with different intensities of growth, occupation and density. Likewise, in the process of designing the optimization system, strategies for the adaptability and transformation of the growing house were identified: the series of actions aimed at altering the dwelling to facilitate its enlargement, ranging from building systems held in waiting, which ease the seams between the growth and the seed house, to spatial systems that allow the house to change its use, becoming a productive habitat or a rental asset. Whereas the enlargement mechanisms are associated with morphology and their use proved independent of location, the adaptability strategies are tied to building systems or management processes specific to a particular region. The combination of mechanisms and strategies thus characterizes the evolution process of the dwelling, linking it to particular social, geographical and, therefore, constructional conditions. Finally, through the appropriate combination of enlargement mechanisms and adaptability strategies in the design of a house with programmed growth, its development can be optimized in economic, constructional, social and spatial terms. As a result, this would help not only to improve the lives of the inhabitants of the seed house in qualitative and quantitative terms, but also to compact cities by means of inclusive systems, since growing houses provide a greater complexity of uses and diversity of social relations.
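To make the notions of evolutionary patterns and enlargement mechanisms concrete, the following minimal Python sketch models a pattern as a dated sequence of household events and selects, for each event, the cheapest feasible enlargement. The mechanism names, costs and plot constraints are hypothetical placeholders rather than values taken from the thesis.

    from dataclasses import dataclass
    from typing import Dict, List, Optional

    # Hypothetical enlargement mechanisms. The thesis classifies mechanisms by
    # their position relative to the seed house; these names and figures are
    # illustrative only.
    MECHANISMS: Dict[str, dict] = {
        "infill":   {"cost_per_m2": 250, "needs_free_plot": 0.0, "max_m2": 20},
        "lateral":  {"cost_per_m2": 300, "needs_free_plot": 0.4, "max_m2": 40},
        "vertical": {"cost_per_m2": 420, "needs_free_plot": 0.0, "max_m2": 60},
    }

    @dataclass
    class Event:
        """One step of an evolutionary pattern: a dated change in the household."""
        year: int
        inhabitants: int
        savings: float   # funds available for building at this point in time
        needed_m2: int   # extra floor area the household now requires

    def best_mechanism(event: Event, free_plot_ratio: float) -> Optional[str]:
        """Return the cheapest mechanism that satisfies the event, if any."""
        feasible = []
        for name, m in MECHANISMS.items():
            if m["max_m2"] < event.needed_m2:
                continue  # cannot deliver the required extra area
            if m["needs_free_plot"] > free_plot_ratio:
                continue  # not enough unbuilt plot left for this kind of growth
            cost = m["cost_per_m2"] * event.needed_m2
            if cost <= event.savings:
                feasible.append((cost, name))
        return min(feasible)[1] if feasible else None

    # A pattern is a sequence of events in time, peculiar to one dwelling.
    pattern: List[Event] = [
        Event(year=2, inhabitants=3, savings=6000.0, needed_m2=15),
        Event(year=9, inhabitants=5, savings=14000.0, needed_m2=30),
    ]
    for e in pattern:
        print(e.year, best_mechanism(e, free_plot_ratio=0.3))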

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The aim of this work was to develop a generic methodology for evaluating and selecting, at the conceptual design phase of a project, the best process technology for Natural Gas conditioning. A generic approach would be simple, would require less time, and would give a better understanding of why one process is to be preferred over another. Such a methodology would be useful in evaluating existing, novel and hybrid technologies; however, to date no such generic approach to gas processing has appeared in the published literature. It is believed that the generic methodology presented here is the first available for choosing the best or cheapest method of separation for natural gas dew-point control. Process cost data are derived from evaluations carried out by the vendors. These evaluations are then modelled using a steady-state simulation package. From the results of the modelling, the cost data received are correlated and expressed with respect to the design or sizing parameters, which allows different process systems to be compared in terms of the overall process. The generic methodology is based on the concept of a Comparative Separation Cost, which takes into account the efficiency of each process, the value of its products, and the associated costs. To illustrate the general applicability of the methodology, three different cases suggested by BP Exploration are evaluated. This work has shown that it is possible to identify the most competitive process operations at the conceptual design phase and to illustrate why one process has an advantage over another. Furthermore, the same methodology has been used to identify and evaluate hybrid processes, which in some cases offer substantial advantages over the separate process techniques.
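By way of illustration, the sketch below shows one plausible form of a Comparative Separation Cost calculation, folding process efficiency, product value and cost into a single comparable figure. Both the formula and the numbers are assumptions for demonstration; the thesis's actual definition is not reproduced here.

    def comparative_separation_cost(capex, opex_per_year, years,
                                    product_value_per_year, recovery_efficiency):
        """Hypothetical Comparative Separation Cost: lifetime cost net of product
        revenue, normalised by separation efficiency, so that a lower value
        indicates a more competitive process."""
        lifetime_cost = capex + opex_per_year * years
        lifetime_revenue = product_value_per_year * years
        return (lifetime_cost - lifetime_revenue) / recovery_efficiency

    # Compare two candidate dew-point control technologies (made-up numbers).
    jt_valve = comparative_separation_cost(4.0e6, 0.6e6, 10, 0.9e6, 0.85)
    turbo_expander = comparative_separation_cost(6.5e6, 0.4e6, 10, 1.1e6, 0.95)
    print(f"Joule-Thomson valve: {jt_valve:,.0f}")
    print(f"Turbo-expander:      {turbo_expander:,.0f}")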

Relevância:

100.00% 100.00%

Publicador:

Resumo:

In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate, if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators over simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques to act as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software. It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work is focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators whilst removing any limitation on its potential range of applications. Reference is given to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
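A minimal sketch of the kind of design argued for here: a data-driven simulator whose construction is object-oriented throughout, so that new modelling functionality is added by subclassing rather than by editing the core. The class names and the plant model are invented for the example and do not come from the thesis.

    import heapq
    from typing import Callable, Dict

    class SimObject:
        """Base class: every plant element is a simulation object; new element
        types extend the simulator simply by subclassing."""
        registry: Dict[str, type] = {}

        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            SimObject.registry[cls.__name__] = cls  # enables data-driven lookup

        def __init__(self, name, sim):
            self.name, self.sim = name, sim

    class Simulator:
        def __init__(self):
            self.now, self._events, self.objects = 0.0, [], {}

        def build(self, spec):
            """Instantiate plant elements from plain data (the data-driven part)."""
            for item in spec:
                cls = SimObject.registry[item["type"]]
                self.objects[item["name"]] = cls(item["name"], self,
                                                 **item.get("args", {}))

        def schedule(self, delay: float, action: Callable):
            heapq.heappush(self._events, (self.now + delay, id(action), action))

        def run(self, until: float):
            while self._events and self._events[0][0] <= until:
                self.now, _, action = heapq.heappop(self._events)
                action()

    class Machine(SimObject):
        def __init__(self, name, sim, cycle_time=1.0):
            super().__init__(name, sim)
            self.cycle_time, self.count = cycle_time, 0
            sim.schedule(cycle_time, self.finish_part)

        def finish_part(self):
            self.count += 1
            self.sim.schedule(self.cycle_time, self.finish_part)

    # Usage: the model is described as data, not code. Adding a new element
    # type (a conveyor, say) only requires another SimObject subclass.
    sim = Simulator()
    sim.build([{"type": "Machine", "name": "M1", "args": {"cycle_time": 2.0}}])
    sim.run(until=10.0)
    print(sim.objects["M1"].count)  # -> 5 parts completed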

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The potential benefits of implementing Component-Based Development (CBD) methodologies in a globally distributed environment are many. Lessons from the aeronautics, automotive, electronics and computer hardware industries, in which Component-Based (CB) architectures have been successfully employed for setting up globally distributed design and production activities, have consistently shown that firms have managed to increase the rate of reused components and sub-assemblies, and to speed up the design and production process of new products.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Optimal design for parameter estimation in Gaussian process regression models with input-dependent noise is examined. The motivation stems from the area of computer experiments, where computationally demanding simulators are approximated using Gaussian process emulators to act as statistical surrogates. In the case of stochastic simulators, which produce a random output for a given set of model inputs, repeated evaluations are useful, supporting the use of replicate observations in the experimental design. The findings are also applicable to the wider context of experimental design for Gaussian process regression and kriging. Designs are proposed with the aim of minimising the variance of the Gaussian process parameter estimates. A heteroscedastic Gaussian process model is presented which allows for an experimental design technique based on an extension of Fisher information to heteroscedastic models. It is empirically shown that the error of the approximation of the parameter variance by the inverse of the Fisher information is reduced as the number of replicated points is increased. Through a series of simulation experiments on both synthetic data and a systems biology stochastic simulator, optimal designs with replicate observations are shown to outperform space-filling designs both with and without replicate observations. Guidance is provided on best practice for optimal experimental design for stochastic response models.
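To make the design criterion concrete, the sketch below computes the Fisher information for a GP length-scale under two candidate designs, one space-filling and one with replicate observations. It assumes a squared-exponential kernel and constant per-point noise, a deliberate simplification of the paper's heteroscedastic model; a larger information value means a lower asymptotic parameter variance.

    import numpy as np

    def rbf_kernel(x, lengthscale, variance):
        """Squared-exponential covariance matrix on 1-D inputs x."""
        d2 = (x[:, None] - x[None, :]) ** 2
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    def fisher_information(x, noise, lengthscale=0.3, variance=1.0):
        """Fisher information for the length-scale of a zero-mean GP,
        I = 0.5 * tr(K^-1 dK/dl K^-1 dK/dl), with per-point noise on the
        diagonal standing in crudely for the heteroscedastic model."""
        K = rbf_kernel(x, lengthscale, variance) + np.diag(noise)
        d2 = (x[:, None] - x[None, :]) ** 2
        dK = rbf_kernel(x, lengthscale, variance) * d2 / lengthscale**3
        Kinv_dK = np.linalg.solve(K, dK)
        return 0.5 * np.trace(Kinv_dK @ Kinv_dK)

    noise = np.full(12, 0.1)
    space_filling = np.linspace(0, 1, 12)            # 12 distinct sites
    replicated = np.repeat(np.linspace(0, 1, 4), 3)  # 3 replicates at 4 sites

    for name, x in [("space-filling", space_filling), ("replicated", replicated)]:
        info = fisher_information(x, noise)
        print(f"{name:13s} Fisher info = {info:10.2f}, approx var = {1/info:.4g}")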

Relevância:

100.00% 100.00%

Publicador:

Resumo:

High-quality software documentation is essential for understanding software systems, and ever shorter time-to-market cycles increase the importance of automation for keeping that documentation up to date. In this paper, we describe automatic support of the software documentation process using semantic technologies. We introduce a software documentation ontology as the underlying knowledge base. The ontology is populated automatically by analysing source code, software documentation and code execution. Through selected results we demonstrate that such semantic systems can support software documentation processes efficiently.
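A minimal sketch of the population step, assuming Python's standard ast module for the source-code analysis and an invented doc: vocabulary in place of the paper's actual ontology; each code element is mapped to subject-predicate-object triples.

    import ast

    SOURCE = '''
    class Parser:
        """Parses configuration files."""
        def load(self, path):
            """Read and parse the file at *path*."""
    '''

    def populate_ontology(source: str):
        """Walk the AST and emit (subject, predicate, object) triples for a
        toy documentation ontology with two concepts, Class and Method."""
        triples = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.ClassDef):
                triples.append((node.name, "rdf:type", "doc:Class"))
                triples.append((node.name, "doc:comment", ast.get_docstring(node)))
                for item in node.body:
                    if isinstance(item, ast.FunctionDef):
                        triples.append((item.name, "rdf:type", "doc:Method"))
                        triples.append((item.name, "doc:definedIn", node.name))
                        triples.append((item.name, "doc:comment",
                                        ast.get_docstring(item)))
        return triples

    import textwrap
    for triple in populate_ontology(textwrap.dedent(SOURCE)):
        print(triple)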

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This research aimed at developing a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers. Thus, developing the framework made use of system theory and IDEF methodologies. This study defined ESE as an engineering discipline that develops and applies systems theory and engineering techniques to specification, analysis, design, and implementation of an enterprise for its life cycle. The proposed ESE classification scheme breaks down an enterprise system into four elements. They are work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. This framework is intended for identifying research voids in the ESE discipline. It also helps to apply engineering and systems tools to this emerging field. It harnesses the relationships among various enterprise aspects and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model. Each activity is defined with its input, output, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or an engineering change in an existing system. The long-term goal of this study aims at development of a scientific foundation for ESE research and development.
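The classification scheme lends itself to a direct rendering as data; the sketch below simply enumerates the sixteen element-facet cells named in the abstract and attaches the four-step engineering process and performance measures to each. It is an enumeration only and implies nothing about the framework's deeper semantics.

    from itertools import product

    # The four enterprise elements and four system facets named in the abstract.
    ELEMENTS = ["work", "resources", "decision", "information"]
    FACETS = ["strategy", "competency", "capacity", "structure"]
    PROCESS = ["specification", "analysis", "design", "implementation"]
    MEASURES = ["cost", "time", "quality", "benefit"]

    # Each element-facet combination is subject to the full engineering
    # process, giving a 4 x 4 grid of engineering efforts.
    classification = {
        (element, facet): list(PROCESS)
        for element, facet in product(ELEMENTS, FACETS)
    }

    print(len(classification))                       # -> 16 element-facet cells
    print(classification[("decision", "capacity")])  # engineering steps per cell
    print(MEASURES)                                  # performance dimensions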

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This research addresses the problem of cost estimation for product development in engineer-to-order (ETO) operations. An ETO operation starts the product development process with a product specification and ends with delivery of a rather complicated, highly customized product. ETO operations are practiced in various industries such as engineering tooling, factory plants, industrial boilers, pressure vessels, shipbuilding, bridges and buildings. ETO views each product as a delivery item in an industrial project and needs to make an accurate estimation of its development cost at the bidding and/or planning stage, before any design or manufacturing activity starts. Many ETO practitioners rely on an ad hoc approach to cost estimation, using past projects as references and adapting them to the new requirements. This process is often carried out on a case-by-case basis and in a non-procedural fashion, thus limiting its applicability to other industry domains and its transferability to other estimators. In addition to being time-consuming, this approach usually does not lead to an accurate cost estimate: errors typically range from 30% to 50%. This research proposes a generic cost modeling methodology for application in ETO operations across various industry domains. Using the proposed methodology, a cost estimator will be able to develop a cost estimation model for a chosen ETO industry in a more expeditious, systematic and accurate manner. The development of the proposed methodology was carried out by following the meta-methodology outlined by Thomann. Deploying the methodology, cost estimation models were created in two industry domains (building construction and steel-milling equipment manufacturing). The models were then applied to real cases; the resulting cost estimates were significantly more accurate than the estimates actually made in those projects, with a mean absolute error rate of 17.3%. This research fills an important need for quick and accurate cost estimation across various ETO industries. It differs from existing approaches in that a methodology is developed that can quickly customize a cost estimation model for a chosen application domain. Beyond more accurate estimation, the major contributions are its transferability to other users and its applicability to different ETO operations.
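As a toy illustration of one plausible building block of such a methodology, though not the thesis's actual procedure, the sketch below calibrates a linear parametric cost model on past projects by least squares; the cost drivers and figures are invented.

    import numpy as np

    # Invented cost drivers for past ETO projects (tonnage, complexity rating,
    # engineering hours); real driver sets are domain-specific.
    drivers = np.array([
        [120.0, 2.0,  900.0],
        [300.0, 3.5, 2100.0],
        [ 80.0, 1.5,  600.0],
        [210.0, 3.0, 1500.0],
        [160.0, 2.5, 1200.0],
        [260.0, 4.0, 1800.0],
    ])
    actual_costs = np.array([1.9e6, 5.2e6, 1.1e6, 3.4e6, 2.6e6, 4.6e6])

    # Calibrate a linear parametric model, cost ~ X @ beta, on past projects.
    X = np.column_stack([np.ones(len(drivers)), drivers])
    beta, *_ = np.linalg.lstsq(X, actual_costs, rcond=None)

    # Estimate a new bid at the planning stage, before any design work starts.
    new_project = np.array([1.0, 150.0, 2.5, 1100.0])
    print(f"estimated cost: {new_project @ beta:,.0f}")

    # Mean absolute percentage error on the calibration set itself.
    fitted = X @ beta
    mape = np.mean(np.abs(fitted - actual_costs) / actual_costs)
    print(f"calibration MAPE: {mape:.1%}")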