16 results for leave to proceed
at Universidad Politécnica de Madrid
Abstract:
The objective of this thesis is to analyze various diversification options available to professional maritime fishing, with an emphasis on respect for the environment. Specifically, fishing tourism (pesca-turismo) appears as one of the most viable alternatives, both for its environmental friendliness and for the relative ease with which it could be introduced in Spain. To enable its development, the legislative changes needed for its implementation in Spain are proposed, starting from a study of the current legal situation of fishing activity in the country, with special consideration of the management and conservation of fishery resources. The study of its possible implementation in Spain begins by analyzing which administrations influence this activity and to what extent, at both the national and the supranational level, with special reference to the European Union, as well as how those administrations are organized. The relevant legislation, case law and legal doctrine applicable to maritime fishing in Spain are then examined and, on that basis, the material and human requirements imposed on those who wish to carry out the activity are reviewed. Next, the regime of offences and penalties is studied, before turning to resource conservation proper, where the measures established for that purpose in the Spanish legal system are examined. Recreational fishing is then analyzed, if only briefly, since, although its impact is much smaller, it is nonetheless another form of extractive activity. Similar international experiences are also studied in order to determine the situation in countries where fishing tourism was introduced earlier; to that end, the legislation on fishing tourism in other countries with a long tradition in this field is analyzed, chiefly in Italy, but also in France and Portugal. In view of the above, it can be concluded that fishing-tourism activities are not feasible today with the legal instruments available in Spain, since these contain substantial points that either prevent or do not allow their development; undertaking such activities therefore necessarily entails a series of regulatory changes. As a next step, in connection with those changes, an alternative text is offered for the legal texts to be amended (Law 3/2001 on State Maritime Fishing and Royal Decree 1027/1989), together with precise notes on the regulatory framework that, in implementing those amendments, would make the exercise of fishing tourism possible in Spain. Finally, an analysis of the sector's opinion was carried out to verify whether the empirically obtained data were also reflected in the attitudes of the final addressees of such rules, namely professional fishermen. To this end, the opinions of various groups in the sector on fishing-tourism activities were collected through both the fishermen's guilds (cofradías) and the federations, seeking representation of all geographical areas. This investigation leads to the conclusion that the proposed fishing-tourism activities enjoy a favourable opinion within the sector, an overwhelming majority of which declared itself in favour of developing the possibility of exercising them.
Abstract:
Using photocatalysis for energy applications depends, more than its use for environmental purposes or selective chemical synthesis, on converting as much of the solar spectrum as possible, and the best photocatalyst, titania, is far from this goal. Many efforts are being pursued to make better use of that spectrum in photocatalysis, by doping titania or using other materials (mainly oxides, nitrides and sulphides) to obtain a lower bandgap, even if this means decreasing the chemical potential of the electron-hole pairs. Here we introduce an alternative scheme, using an idea recently proposed for photovoltaics: intermediate band (IB) materials. It consists of introducing, within the gap of a semiconductor, an intermediate level which, acting like a stepping stone, allows an electron to jump from the valence band to the conduction band in two steps, each absorbing one sub-bandgap photon. For this the IB must be partially filled, to allow both sub-bandgap transitions to proceed at comparable rates; it must be made of delocalized states to minimize nonradiative recombination; and it should not communicate electronically with the outer world. For photovoltaic use the optimum efficiency thus achievable, over 1.5 times that given by a normal semiconductor, is obtained with an overall bandgap around 2.0 eV (which would also be near-optimal for water photosplitting). Note that this scheme differs from the doping principle usually considered in photocatalysis, which simply tries to decrease the bandgap; its aim is to keep the full-bandgap chemical potential while also using lower-energy photons. In the past we have proposed several IB materials based on extensively doping known semiconductors with light transition metals, checking first of all with quantum calculations that the desired IB structure results. Subsequently we have synthesized two of them in powder form: the thiospinel In2S3 and the layered compound SnS2 (having bandgaps of 2.0 and 2.2 eV respectively), in which the octahedral cation is substituted with vanadium at a ~10% level, and we have verified that this substitution introduces into the absorption spectrum the sub-bandgap features predicted by the calculations. With these materials we have verified, using a simple reaction (formic acid oxidation), that the photocatalytic spectral response is indeed extended to longer wavelengths, even allowing the use of 700 nm photons, without largely degrading the response for above-bandgap photons (i.e. strong recombination is not induced) [3b, 4]. These materials are thus promising for the efficient photoevolution of hydrogen from water; work on this is being pursued, and its results will be presented.
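As a back-of-the-envelope illustration of the two-step scheme described above (the 0.7/1.3 eV split is an illustrative assumption, not a value from the abstract), the overall gap is shared between the two sub-bandgap transitions:

\[ E_g = E_{\mathrm{VB}\to\mathrm{IB}} + E_{\mathrm{IB}\to\mathrm{CB}}, \qquad 2.0\ \mathrm{eV} \approx 0.7\ \mathrm{eV} + 1.3\ \mathrm{eV} \]

so a photon need only exceed the relevant sub-gap to drive each step, while the electron-hole pair finally delivered still spans the full 2.0 eV gap.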
Abstract:
The aim of this project is to study the secondary recovery of oil from the southeast Ayoluengo layer of the Ayoluengo field, Burgos (Spain), and its conversion into an underground gas storage facility. The Ayoluengo layer has been treated as an inclined layer measuring 60 km by 10 km in area and 30 m in thickness, penetrated by 20 wells, in which primary recovery has been 19%. A history match of the primary recovery of gas, oil and water was carried out for the period from 1965 to 2011. The conversion into underground gas storage is performed through cycles of gas injection, from March to October, and gas withdrawal, from November to February, so that the reservoir pressure increases until it reaches the initial pressure. Gas is injected and withdrawn through 5 wells located in the upper part of the layer. At the same time, a 20-year secondary recovery driven by the natural gas injection takes place, with oil produced through 14 wells located in the lower part of the layer. The history match, the conversion into storage and the secondary recovery were simulated with the Eclipse100 simulator. The results were a secondary oil recovery 9% higher than the primary one; as for the natural gas storage, the initial pressure was reached, yielding a working gas of 300 Mm3 and a cushion gas of 217.3 Mm3.
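A minimal Python sketch of the storage and recovery bookkeeping implied by the figures reported above (plain arithmetic on the quoted values; the variable names are illustrative):

# Gas storage bookkeeping from the reported Ayoluengo results.
working_gas = 300.0    # Mm3, gas cycled each injection/withdrawal season
cushion_gas = 217.3    # Mm3, gas kept in place to sustain reservoir pressure
total_inventory = working_gas + cushion_gas
print(f"working-gas fraction: {working_gas / total_inventory:.0%}")   # ~58%

primary_rf = 0.19    # recovery factor achieved before the conversion
secondary_rf = 0.09  # additional recovery from the 20-year gas injection
print(f"cumulative recovery factor: {primary_rf + secondary_rf:.0%}") # 28%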
Abstract:
Rapid microbiological methods are becoming essential quality-control tools in biotechnological industries such as food, pharmaceuticals and biochemistry. In this scenario, the objective of this doctoral thesis is to develop a rapid, ultrasound-based microorganism inspection technique. The proposed hypothesis is that the combination of an ultrasonic measuring device with a liquid medium specifically designed to produce and trap bubbles can form the basis of a sensitive and rapid detection method for microbial contamination. The technique is effective for catalase-positive bacteria and relies on the well-known catalase-induced hydrolysis of hydrogen peroxide, whose physical consequence is a medium with an increasing concentration of bubbles. Such a medium has been studied and modeled from the point of view of ultrasonic propagation, and the properties deduced from the analysis of the enzyme kinetics have been used to evaluate the method as a microbial inspection technique. In this thesis, theoretical and experimental aspects of hydrogen peroxide hydrolysis were analyzed in order to quantitatively describe and understand the detection of catalase-positive microorganisms by means of ultrasonic measurements. More concretely, the experiments performed show how the oxygen produced in the form of bubbles is trapped using a new agar-based gel medium, designed and prepared specifically for this application. Ultrasonic attenuation and backscattering were measured in this medium with a pulse-echo technique throughout the hydrogen peroxide hydrolysis process. Catalase activity was detected down to 0.001 units/ml. Moreover, this study shows that the proposed method can achieve microbial detection down to 10^5 cells/ml in a short time, of the order of a few minutes. These results represent a significant improvement of three orders of magnitude over other ultrasonic detection methods for microorganisms, and the sensitivity reached is competitive with modern rapid microbiological methods such as ATP detection by bioluminescence. Above all, this work points out a way to proceed in developing new rapid, ultrasound-based bacterial detection techniques.
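For intuition, a small Python sketch of the catalase-driven hydrogen peroxide decomposition underlying the method, assuming simple Michaelis-Menten kinetics (the rate constants are illustrative placeholders, not values identified in the thesis):

import numpy as np

# Michaelis-Menten model of catalase-driven H2O2 decomposition:
#   2 H2O2 -> 2 H2O + O2,   v = Vmax * S / (Km + S)
Vmax = 1e-3   # mol/(L*s), illustrative; proportional to catalase activity
Km = 0.08     # mol/L, illustrative Michaelis constant
S = 0.1       # mol/L, initial H2O2 concentration
dt, t_end = 0.1, 60.0
oxygen = 0.0
for _ in np.arange(0.0, t_end, dt):
    v = Vmax * S / (Km + S)   # instantaneous H2O2 consumption rate
    S = max(S - v * dt, 0.0)
    oxygen += 0.5 * v * dt    # one O2 produced per two H2O2 consumed
print(f"O2 evolved after {t_end:.0f} s: {oxygen:.2e} mol/L")

The evolved O2, trapped as bubbles in the gel, is what the pulse-echo measurement senses through attenuation and backscattering.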
Abstract:
Neuro-evolutive development from birth until the age of six is a decisive factor in a child's quality of life. Early detection of developmental disorders in early childhood can facilitate the necessary diagnosis and/or treatment. Primary-care pediatricians play a key role in this detection, as they can undertake the preventive and therapeutic actions required to promote a child's optimal development. However, lack of time and limited specific knowledge in primary care prevent the continuous application of procedures for the early detection of anomalies. This research paper focuses on the deployment and evaluation of a smart system that enhances the screening of language disorders in primary care, supporting pediatricians in the early referral of language disorders. The proposed model provides them with a decision-support tool for referral actions that trigger the diagnostic and/or therapeutic actions essential for comprehensive individual development. The research started from a sample of 60 cases of children with language disorders. Validation was carried out in two complementary steps: first, with a team of seven experts from the fields of neonatology, pediatrics, neurology and language therapy, and second, through the evaluation of 21 additional previously diagnosed cases. The results obtained show that therapists accepted the system's proposal in 18 cases (86%) and suggested a system redesign for direct referral to a speech therapist in the three remaining cases.
Abstract:
Aeroelasticity was defined by Arthur Collar in 1947 as "the study of the mutual interaction among inertial, elastic and aerodynamic forces acting on structural elements exposed to an airstream". Today this definition has been extended to include the influence of controls ('aeroservoelasticity') and even of temperature ('aerothermoelasticity'). Within aeronautical engineering, aeroelastic phenomena, whether static (divergence, control reversal) or dynamic (flutter, buzz), have been well known since the early days of aviation. Lessons learned throughout aeronautical history have established design criteria intended to mitigate the probability of encountering adverse aeroelastic phenomena during the operational life of an aircraft. In addition, the great advances of the last decade in computational aerodynamics and in aeroelastic modeling have improved the reliability of flutter-onset calculations during the design phase of an aircraft. Even so, flight testing remains necessary to validate aeroelastic models, to verify that an aircraft is free of aeroelastic instabilities, and to certify its envelopes. In particular, during the expansion of an aircraft's altitude/airspeed envelope, flutter conditions must be predicted in real time so that they can be avoided. To that end, the flight-test community has developed several methodologies that predict flutter conditions in real time from flight conditions already verified as flutter-free. Among them, the technique that relates damping and velocity through a specific parameter known as the Flutter Margin remains the most common way to proceed with envelope expansion in altitude/airspeed. Despite its popularity and ease of application, however, this technique is not adequate when mechanical non-linearities, such as freeplay, are present in the aircraft under test. In particular, during test flights devoted specifically to envelope expansion, Limit Cycle Oscillation (LCO) conditions cannot be accurately distinguished from flutter conditions, leading to an excessively conservative envelope determination. This Thesis develops a novel methodology, based on the Flutter Margin concept, that predicts limit-cycle conditions in real time, whenever they exist, distinguishing them from flutter without degrading the ability to predict the flutter onset speed. The first part reviews the literature on the test methods available for expanding an aircraft's altitude/airspeed envelope and on the effect of mechanical non-linearities on its aeroelastic behavior, together with the civil and military certification regulations on the subject. The second part proposes a real-time envelope-expansion methodology, based on the Flutter Margin concept, that accounts for freeplay-type non-linearities in the aeroelastic system under study. The methodology is validated against a parametric, interactive two-dimensional aeroelastic model programmed in Matlab/Simulink: the unsteady aeroelastic equations of a two-dimensional airfoil are formulated in state space, and the methodology is implemented through a signal-analysis module and a prediction module. The third part compares the conclusions obtained with those in the current literature and applies the proposed methodology to experimental results from real flight tests. In summary, the main results of this Thesis are: 1. A state-of-the-art review of the test methods applied to altitude/airspeed envelope expansion and of the influence of mechanical non-linearities on its determination. 2. A review of the civil and military certification regulations concerning the aeroelastic verification of aircraft and the limits allowed in the presence of non-linearities. 3. The development of an envelope-expansion methodology based on the Flutter Margin. 4. The validation of that methodology against a parametric, interactive two-dimensional aeroelastic model programmed in Matlab/Simulink. 5. An analysis of the results obtained and their comparison with experimental results.
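By way of illustration, a minimal Python sketch of the classic damping-versus-velocity extrapolation mentioned above (not the methodology developed in the Thesis): modal damping identified at already-cleared airspeeds is fitted and extrapolated to its zero crossing to estimate the flutter onset speed. The data points are invented.

import numpy as np

# Airspeeds already flight-tested (m/s) and identified modal damping ratios.
V = np.array([120.0, 140.0, 160.0, 180.0])
zeta = np.array([0.060, 0.048, 0.033, 0.015])   # illustrative, decreasing with speed

# Quadratic fit zeta(V); flutter onset estimated where damping crosses zero.
coeffs = np.polyfit(V, zeta, 2)
roots = np.roots(coeffs)
flutter_speed = min(r.real for r in roots if r.real > V[-1] and abs(r.imag) < 1e-9)
print(f"predicted flutter onset: {flutter_speed:.0f} m/s")   # ~195 m/s here

With freeplay present, damping inferred from LCO conditions no longer follows such a smooth trend, which is precisely why the Thesis distinguishes limit-cycle conditions from flutter.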
Abstract:
Cascade is an information reconciliation protocol proposed in the context of secret-key agreement in quantum cryptography. This protocol allows discrepancies to be removed from two partially correlated sequences held by distant parties connected through a public noiseless channel. It is highly interactive, requiring a large number of channel communications between the parties to proceed, and although its efficiency is not optimal, it has become the de facto standard for practical implementations of information reconciliation in quantum key distribution. The aim of this work is to analyze the performance of Cascade and to discuss its strengths, weaknesses and optimization possibilities, comparing it with some of the modified versions that have been proposed in the literature. When all design trade-offs are considered, a new view emerges that allows us to put forward a number of guidelines and to propose near-optimal parameters for the practical implementation of Cascade, improving performance significantly in comparison with all previous proposals.
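For illustration, a Python sketch of the interactive bisection (BINARY) step at the heart of Cascade; here the public channel is modeled by letting Bob query block parities of Alice's string directly, with each query standing for one channel exchange:

def parity(bits, lo, hi):
    """Parity of bits[lo:hi]; in Cascade each call costs one channel exchange."""
    return sum(bits[lo:hi]) % 2

def binary_locate(alice, bob, lo, hi):
    """Locate one error in bob[lo:hi], given that the block parities differ."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Compare half-block parities; recurse into the half that mismatches.
        if parity(alice, lo, mid) != parity(bob, lo, mid):
            hi = mid
        else:
            lo = mid
    return lo

# Toy usage: Bob's copy of Alice's key differs in exactly one position.
alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob = alice.copy()
bob[5] ^= 1
pos = binary_locate(alice, bob, 0, len(bob))
bob[pos] ^= 1                      # correct the located bit
print(pos, bob == alice)           # -> 5 True

The full protocol runs several passes with shuffled blocks of increasing size, so each corrected bit can expose further errors in earlier passes; this back-tracking is what drives both its efficiency and its high interactivity.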
Abstract:
Context: Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? Which elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment? Objective: To improve our understanding of SE experiment replication, in this work we propose a classification intended to provide experimenters with guidance about what types of replication they can perform. Method: The research approach is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of the typical elements that compose an experimental configuration, (3) identification of different replication purposes, and (4) development of a classification of experiment replications for SE. Results: We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it is able to account for replications ranging from less similar to more similar to the baseline experiment. Conclusion: The aim of replication is to verify results, but different types of replication serve specific verification purposes and afford different degrees of change. Each replication type helps to discover the particular experimental conditions that might influence the results. The proposed classification can be used to identify the changes made in a replication and, based on these, to understand the level of verification achieved.
Abstract:
This Master's Thesis is aimed at modeling active faults for estimating the seismic hazard in Haiti. A zoned probabilistic method has been used, in both its classical and hybrid variants, considering the incorporation of active faults as independent units in the seismic hazard calculation; in this case, the seismic moment rate is divided between the faults and the seismogenic zone of the same region. The faults included in this study are the Septentrional, Matheux and Enriquillo faults. The results obtained by both methods were compared to determine the importance of considering the faults in the calculation. First, the seismic catalog had to be updated, homogenized, analyzed for completeness and purged in order to obtain a catalog ready for the hazard estimation. With the seismogenic zoning defined in previous studies and the updated seismic catalog, Gutenberg-Richter recurrence relations were obtained for the shallow and deep seismicity of each zone. The attenuation models selected were those used in Benito et al. (2011), as the tectonic setting of the study area is very similar to that of Central America; they were implemented through a logic tree in which each branch is weighted by an index based on the relevance of each combination of models. Results are presented as seismic hazard maps for return periods of 475, 975 and 2475 years and for spectral acceleration (SA) at structural periods of 0.1, 0.2, 0.5, 1.0 and 2.0 seconds, together with maps of the acceleration differences between the classical and the hybrid method. The maps show the importance of including faults as separate units in the hazard calculation. The zoned maps present higher values in the area where the shallow and deep zones overlap. The results show that the minimum values of the zoned approach exceed those of the hybrid method, especially in areas where there are no faults, while the highest values are those obtained in the fault zones by the hybrid method, indicating that the contribution of the faults in this method is very important. The maximum PGA obtained is 963 gal close to the Septentrional fault, near 460 gal at Matheux, and, along the Enriquillo fault, 760 gal in the eastern segment and 730 gal in the western segment, compared with the 240 gal obtained for this area with the zoned approach. These values are compared with those obtained by Frankel et al. (2011), which show great similarity in values and morphology, in contrast to those presented by Benito et al. (2012) and by the Dominican Republic seismic code.
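The Gutenberg-Richter recurrence relations fitted per zone take the standard form

\[ \log_{10} N(m) = a - b\,m \]

where N(m) is the annual rate of earthquakes with magnitude at least m, a measures the overall activity of the zone and b the relative proportion of small to large events; both constants are estimated from the updated catalog for each seismogenic zone.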
Abstract:
Nowadays robots have made their way into real applications that were prohibitive and unthinkable thirty years ago, mainly owing to the increase in computational power and to the evolution of the theoretical fields of robotics and control. Even though there is plenty of information on these topics in the current literature, it is not easy to find clear guidance on how to proceed in order to design and implement a controller for a robot. In general, the design of a controller requires a complete understanding and knowledge of the system to be controlled; therefore, for advanced control techniques, the system must first be identified. This particular task is cumbersome and never straightforward, requiring great expertise, and certain criteria must be adopted. On the other hand, the problem of designing a controller is even more complex when dealing with parallel manipulators (PMs), since their closed-loop structures give rise to highly nonlinear systems. On this basis the present work is developed, intending to summarize and gather all the concepts and experience involved in the control of a hydraulic parallel manipulator. The main objective of this thesis is to provide a guide covering all the steps involved in designing advanced control techniques for PMs. The analysis of the PM under study is broken down to the core of the mechanism: the hydraulic actuators. The actuators are modeled and experimentally identified. Additionally, some considerations regarding traditional PID controllers are presented, and an adaptive controller is finally implemented. From a macro perspective, the kinematic and dynamic models of the PM are presented. Based on the system model, and extending the adaptive controller of the actuator, a control strategy for the PM is developed and its performance is analyzed in simulation.
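As context for the controller discussion above, here is a textbook discrete-time PID loop in Python applied to a toy first-order plant (the gains and plant model are illustrative placeholders, not the identified hydraulic actuator):

class PID:
    """Textbook discrete PID with a backward-difference derivative term."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy first-order "actuator": x' = (u - x) / tau, integrated with Euler steps.
dt, tau = 0.01, 0.2
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
x = 0.0
for _ in range(500):
    u = pid.update(setpoint=1.0, measurement=x)
    x += dt * (u - x) / tau
print(f"position after 5 s: {x:.3f}")   # should settle near 1.0

An adaptive controller, as implemented in the thesis, replaces these fixed gains with parameters updated online from the identified model, which is what makes the prior identification step essential.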
Abstract:
The growing house concept, as we know it today, was coined in 1932 in the competition Das Wachsende Haus organized by Martin Wagner and Hans Poelzig within the framework of the International Exhibition Sonne, Luft und Haus für alle, promoted by Berlin's Tourist Office. The competition defined this type of dwelling as a basic cell or seed house that, depending on the needs and means of its inhabitants, could grow through additional rooms, constituting a complete dwelling in itself at each stage of growth. Leading architects such as Walter Gropius, Bruno Taut, Erich Mendelsohn and Hans Scharoun took part in the competition, opening a new line of exploration within flexible housing: scheduled growth. From that moment on, in Europe, and subsequently in the USA and other developed regions, numerous theoretical and practical investigations into housing growth were undertaken from an approach linked to spatial and technical innovation. On the other hand, within the popular architecture of other countries, growing houses had already been tried since the eighteenth century because their size made them more affordable on the market. From the 1930s onwards, many developing countries had to deal with massive migration from the countryside to the cities, and large housing estates were built which, in many cases, consisted of growing houses. In all of them, housing growth was approached from a different perspective than in the developed countries: economy of means and low-cost construction systems were prioritized, and in many cases guided self-construction was even encouraged, as opposed to the prefabricated constructions assembled by specialized technicians that were proposed, for example, in the European cases. For this research, information on these and other dwellings was compiled. Different ways of producing growth were then identified according to their position relative to the seed house; these were called enlargement mechanisms, and they are used interchangeably regardless of the geographical location of each house. The question of why one mechanism is preferred over another in a given case triggered the main objective of this Thesis: the development of a system for analyzing and diagnosing the growing house that, according to certain parameters, indicates the optimal enlargement, or sequence of enlargements, for a specific family in a given location. The starting point was the idea that the growth of the dwelling is closely linked to the evolution of the household that lives in it, so that the house becomes a dynamic habitat. The complexity and variability of the phenomenon were also taken into account, subject as it is to numerous socio-economic factors that are difficult to foresee over time but easy to monitor through patterns linked to regulations, number of inhabitants, average savings, and so on. Consequently, evolutionary patterns were used to design the optimization system for the growing house. These patterns, far removed from the spatial and morphological concept usually employed in architecture by figures such as C. Alexander or J. Habraken, came to be understood as a sequence of events in time (spatial, social, economic, legal, etc.) that describe the transformation process and are peculiar to each dwelling. Time thus acquired special importance, becoming one more material of the architectural project. It was in the construction of the patterns that the aforementioned enlargement mechanisms were identified, understood also as systems for compacting the city through the three-dimensional occupation of space. By studying density through the concepts of spaciousness and overcrowding, urban congestion was accepted as a positive value. In this way, the transformations that inhabitants may make (foreseen from the outset) to the scenario of dwelling (the seed house) also become urban design tools that respond to the conditions of the place and of the inhabitants with different intensities of growth, occupation and density. Likewise, in the process of designing the optimization system, strategies for the adaptability and transformation of the growing house were identified: the series of actions aimed at altering the dwelling to facilitate its enlargement, ranging from construction systems held in waiting, which ease the seams between the growth and the seed house, to spatial systems that allow the house to change its use, becoming a productive habitat or a rental asset. While the enlargement mechanisms are associated with morphology and their use was found to be independent of location, the adaptability strategies are tied to construction systems or management processes linked to a specific region. The combination of mechanisms and strategies thus characterizes the evolution process of the dwelling, linking it to particular social, geographical and, therefore, constructional conditions. Finally, through the appropriate combination of enlargement mechanisms and adaptability strategies in the design of housing with scheduled growth, its development can be optimized in economic, constructional, social and spatial terms. As a result, this would help not only to improve the lives of the inhabitants of the seed house in qualitative and quantitative terms, but also to compact cities through inclusive systems, since growing houses provide a greater complexity of uses and diversity of social relations.
Abstract:
Until a few years ago, most network communications used wire as the physical medium, but thanks to the advances and maturity of wireless communications this is changing. Nowadays wireless communications offer fast, secure, efficient and reliable connections. Mobile communications are expanding, clearly driven by the use of smartphones and other mobile devices, laptops, and so on. Moreover, the investment needed to install and maintain the physical medium is much lower than in wired communications, not only because the air has no cost, but because installing and maintaining cable carries a high economic cost; besides the cost, wire is also a medium more vulnerable to external threats such as noise, eavesdropping and sabotage. There are two types of wireless networks: those in which an infrastructure forms part of the network itself, and those lacking any structure or centralization, in which the participating devices can connect to one another dynamically and arbitrarily while also handling the routing of all control and information messages; the latter are known as ad-hoc networks. This work studies one of the many wireless protocols that enable mobile communications: Optimized Link State Routing (hereafter OLSR), a standard proactive routing mechanism that works in a distributed fashion to establish connections among the nodes of an ad-hoc wireless network, which has no central node and no pre-existing infrastructure. Thanks to this protocol, the routing tables of all devices are kept correctly updated at all times through the periodic transmission of control messages, allowing full connectivity among the devices in the network as well as access to external networks such as virtual private networks or the Internet. This protocol could be used in environments such as airports, shopping malls, etc. For the study of OLSR we rely on Network Simulator 2, a freeware discrete-event network simulator written in C++. This simulator is used mainly in educational and research environments, supports both unicast and multicast protocols, and is especially popular for research on mobile ad-hoc networks; it implements a very wide range of protocols for both wired and wireless networks, which is very useful for simulating different network and protocol configurations. This final-year project also studies several simulations with Network Simulator 2 in different scenarios and configurations, covering wired networks and ad-hoc wireless networks running OLSR. The project consists of four parts. First, a complete study of the OLSR protocol is carried out, covering the benefits and drawbacks it offers, the different message types defined by the protocol, and small examples of its operation. Next, a short introduction to Network Simulator 2 is given, including its history and the companion tool NAM, which allows the packet exchanges between the devices in our simulations to be visualized in an intuitive and friendly way; the MASIMUM platform, which provides students with software and documentation to support research on and simulation of ad-hoc networks and sensors in an academic environment, is also mentioned. Finally, two examples are presented: a simulation between two PCs in an Ethernet environment, and a wireless simulation among five mobile devices using the protocol under study, OLSR.
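As an illustration of how OLSR limits the flooding of its periodic control messages, here is a Python sketch of a simplified greedy multipoint-relay (MPR) selection, in the spirit of the RFC 3626 heuristic (the topology is a toy example):

def select_mprs(one_hop, two_hop_of):
    """Greedy MPR choice: pick 1-hop neighbors until all 2-hop nodes are covered.

    one_hop    -- set of 1-hop neighbor ids
    two_hop_of -- dict: 1-hop neighbor -> set of 2-hop nodes reachable through it
    """
    uncovered = set().union(*two_hop_of.values()) - one_hop
    mprs = set()
    while uncovered:
        # Choose the neighbor covering the most still-uncovered 2-hop nodes.
        best = max(one_hop - mprs, key=lambda n: len(two_hop_of[n] & uncovered))
        if not two_hop_of[best] & uncovered:
            break  # remaining nodes are unreachable through 1-hop neighbors
        mprs.add(best)
        uncovered -= two_hop_of[best]
    return mprs

# Toy topology: the node has neighbors A, B, C and 2-hop nodes X, Y, Z.
print(select_mprs({"A", "B", "C"},
                  {"A": {"X", "Y"}, "B": {"Y"}, "C": {"Z"}}))   # {'A', 'C'}

Only the selected MPRs retransmit a node's broadcast control traffic, which is what keeps OLSR's periodic flooding affordable in dense ad-hoc networks.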
Abstract:
Improving the energy efficiency of existing buildings is always a challenge because of their particular, and sometimes protected, constructive solutions. The new construction regulations in Spain leave a large undefined gap where restoration is concerned, because they were developed for new buildings. However, rehabilitation is an opportunity for many properties, since it allows owners to obtain benefits from the use of their buildings. The current financial and housing crisis has turned society's attention to existing buildings, and making them more efficient is one of the Spanish government's aims. The economic viability of a rehabilitation action should take all factors into account: both the construction costs and the future operating costs of the building must be considered. Nevertheless, the application of these regulations in Spain is left to the designer's judgment, always from a subjective point of view. With the research work described in this paper, and with the help of several case studies, the cost of adapting an existing building to the new construction regulations is studied, and energy efficiency is evaluated according to how the investment is recovered. The interest of the research lies in showing how new constructive solutions can achieve higher levels of efficiency in energy, construction and economic terms, and in demonstrating that life cycle costing analysis can be a mechanism for finding the advantages and disadvantages of using these new constructive solutions. This paper therefore has the following objectives: to analyse constructive solutions in existing buildings; to establish a process for assessing total life cycle costs (LCC) during the planning stages, taking future operating costs into consideration; to select the most advantageous operating system; and to determine the return on investment, in terms of construction costs, based on the new techniques, the energy savings achieved and the investment payback periods.
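A minimal Python sketch of the kind of life-cycle cost comparison described above (the discount rate, retrofit cost and annual savings are invented placeholders):

# Compare a retrofit against doing nothing over a study period, in NPV terms.
def npv(rate, cashflows):
    """Net present value of yearly cashflows, counted from year 1 onwards."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

retrofit_cost = 60_000.0   # construction cost of the upgrade (EUR), illustrative
annual_saving = 4_500.0    # energy-bill reduction (EUR/year), illustrative
years, rate = 30, 0.03     # study period and discount rate

savings_npv = npv(rate, [annual_saving] * years)
print(f"NPV of savings: {savings_npv:,.0f} EUR")
print(f"life-cycle benefit: {savings_npv - retrofit_cost:,.0f} EUR")
print(f"simple payback: {retrofit_cost / annual_saving:.1f} years")

The decision rule the paper argues for is precisely this one: a constructive solution is advantageous when the discounted operating savings over the life cycle exceed its construction cost within an acceptable payback period.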
Abstract:
The new-user cold start issue represents a serious problem in recommender systems, as it can lead to the loss of new users who decide to stop using the system because of the poor accuracy of the recommendations received in that first stage, in which they have not yet cast a significant number of votes with which to feed the recommender system's collaborative filtering core. For this reason it is particularly important to design new similarity metrics that provide greater precision in the results offered to users who have cast few votes. This paper presents a new similarity measure perfected through optimization based on neural learning, which exceeds the best results obtained with current metrics. The metric has been tested on the Netflix and Movielens databases, obtaining important improvements in the measures of accuracy, precision and recall when applied to new-user cold start situations. The paper includes the mathematical formalization describing how to obtain the main quality measures of a recommender system using leave-one-out cross validation.
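For illustration, a Python sketch of how a quality measure such as the mean absolute error can be computed with the leave-one-out cross validation mentioned above; a trivial user-mean baseline stands in here for the paper's neural-optimized similarity metric:

from statistics import mean

def loo_mae(ratings):
    """Leave-one-out MAE: hide each rating and predict it from the user's others."""
    errors = []
    for user, items in ratings.items():
        if len(items) < 2:
            continue   # nothing left to predict from
        for item, r in items.items():
            others = [v for i, v in items.items() if i != item]
            errors.append(abs(r - mean(others)))   # user-mean baseline prediction
    return mean(errors)

# Toy ratings: user -> {item: rating}.
ratings = {"u1": {"m1": 4, "m2": 5, "m3": 3},
           "u2": {"m1": 2, "m3": 1}}
print(f"LOO MAE: {loo_mae(ratings):.2f}")   # -> 1.00 for this toy data

In an actual collaborative filtering evaluation, the hidden rating would instead be predicted from the votes of the most similar users under the metric being assessed, which is exactly where a cold-start-aware similarity measure changes the outcome.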