30 results for Frontal Analysis Continuous Capillary Electrophoresis
Abstract:
This paper deals with the theoretical method and the modelling problems in the analysis of pyrotechnic shock propagation in the Vehicle Equipment Bay structure of the ARIANE 5 during the separation of the upper stage. This work was developed under a contract with the Spanish firm Construcciones Aeronáuticas S.A. From all the analyses and studies it can be concluded that: 1. The mathematical method used for the study of pyrotechnic shock phenomena is very well suited for conducting parametric studies. 2. A careful model of the structure should be developed, taking into account realistic stiffness and dissipation properties at the junctions. 3. The load produced by the pyrotechnic device should be carefully calibrated to reach good agreement between theoretical and test results. 4. In any case, with the acquired experience it can be said that, with continuous-element modelling, the order of magnitude of the accelerations can be predicted with the accuracy needed in practical applications.
Abstract:
In this paper a new method for fault isolation in a class of continuous-time stochastic dynamical systems is proposed. The method is framed in the context of model-based analytical redundancy, consisting of the generation of a residual signal by means of a diagnostic observer for its subsequent analysis. Once a fault has been detected, and assuming some basic a priori knowledge about the set of possible failures in the plant, the isolation task is formulated as a type of on-line statistical classification problem. The proposed isolation scheme employs different hypothesis tests in parallel on a statistic of the residual signal, one test for each possible fault. The isolation method is characterized, for the one-dimensional case, by deriving a sufficient isolability condition as well as an upper bound on the probability of missed isolation. Simulation examples illustrate the applicability of the proposed scheme.
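As a rough illustration of the isolation scheme described above, the sketch below (Python, not from the paper) runs parallel hypothesis tests on a windowed mean of a one-dimensional residual; the fault signatures, noise level, window length and threshold are all invented for the example.

```python
# Hypothetical sketch of parallel hypothesis tests for fault isolation on a
# one-dimensional residual with Gaussian noise; all values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assumed fault signatures: each fault shifts the residual mean to mu_i.
fault_means = {"f1": 1.0, "f2": -1.5, "f3": 3.0}
sigma, N = 0.5, 200                       # noise std and window length

# Simulated residual window after fault "f2" has occurred.
residual = fault_means["f2"] + sigma * rng.standard_normal(N)

# Statistic: windowed sample mean, distributed N(mu_i, sigma^2/N) under H_i.
stat = residual.mean()

# One test per hypothesised fault, run in parallel: accept H_i if the
# statistic falls within a confidence band around mu_i.
z = 3.0                                   # ~99.7% band
accepted = [f for f, mu in fault_means.items()
            if abs(stat - mu) <= z * sigma / np.sqrt(N)]
print("isolated fault(s):", accepted)     # expected: ['f2']
```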
Abstract:
A comprehensive assessment of the liquidity development of the Iberian power futures market managed by OMIP ("Operador do Mercado Ibérico de Energia, Pólo Português") in its first four years of existence is performed. This market started in July 2006. A regression model tracking the evolution of the traded volumes in the continuous market is built as a function of 12 potential liquidity drivers. The only significant drivers are the traded volumes in OMIP compulsory auctions, the traded volumes in the Over-The-Counter (OTC) market, and the OTC volumes cleared by the OMIP clearing house (OMIClear). Furthermore, the enrollment of financial members shows strong correlation with the traded volumes in the continuous market. OMIP liquidity is still far from the levels reached by the most mature European markets (Nord Pool and EEX). The market operator and its clearing house could develop efficient marketing actions to attract new entrants active in the spot market (energy-intensive industries, suppliers, and small producers) as well as volumes from the opaque OTC market, and to improve the performance of existing illiquid products. An active dialogue with all stakeholders (market participants, the spot market operator, and supervisory authorities) will help to implement such actions.
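The following sketch (hypothetical, with synthetic data) shows the shape of such a regression model: traded volume in the continuous market regressed on a few candidate liquidity drivers via ordinary least squares. The driver names, sample size and coefficients are placeholders, not OMIP figures.

```python
# Minimal OLS sketch of a traded-volume regression on liquidity drivers.
# Synthetic placeholder data; not OMIP data.
import numpy as np

rng = np.random.default_rng(1)
n = 48                                    # e.g. four years of monthly data

# Hypothetical drivers (columns): auction volume, OTC volume, OTC cleared.
X = rng.lognormal(mean=3.0, sigma=0.4, size=(n, 3))
beta_true = np.array([0.8, 0.3, 0.5])
y = X @ beta_true + rng.normal(0, 5.0, n) # continuous-market volume

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and driver coefficients:", np.round(coef, 3))
```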
Abstract:
In this paper we present an innovative technique to tackle the problem of automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. First, a color- and appearance-based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of 3D camera motion in traffic environments. Additionally, the estimation of the plane allows deformations due to perspective to be corrected, thus easing further sign classification.
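A minimal sketch of the plane-fitting step is given below, assuming a generic RANSAC over a synthetic back-projected point cloud; the tolerance, iteration budget and outlier fraction are illustrative choices, not values from the paper.

```python
# Illustrative RANSAC plane fit over a 3D point cloud; parameters assumed.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cloud: points near the plane z = 0.1x - 0.2y + 5, plus outliers.
pts = rng.uniform(-1.0, 1.0, (300, 3))
pts[:, 2] = 0.1 * pts[:, 0] - 0.2 * pts[:, 1] + 5.0 + rng.normal(0, 0.01, 300)
pts[:30] += rng.uniform(-2.0, 2.0, (30, 3))       # ~10% gross outliers

best_inliers, tol = None, 0.03
for _ in range(200):                              # fixed iteration budget
    sample = pts[rng.choice(len(pts), 3, replace=False)]
    n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
    if np.linalg.norm(n) < 1e-9:                  # skip degenerate triples
        continue
    n /= np.linalg.norm(n)
    d = -n @ sample[0]
    inliers = np.abs(pts @ n + d) < tol           # point-to-plane distance test
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

print("inlier ratio of best plane: %.2f" % best_inliers.mean())
```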
Abstract:
According to the PMBOK (Project Management Body of Knowledge), project management is "the application of knowledge, skills, tools, and techniques to project activities to meet the project requirements" [1]. Project Management has proven to be one of the most important disciplines in determining the success of any project [2][3][4]. Given that many of the activities covered by this discipline can be considered "horizontal" to any kind of domain, the importance of knowing its concepts and practices becomes even more obvious. Projects that fall in the domain of Software Engineering are no exception to the great influence of Project Management on their success. The critical role that this discipline plays in the industry can be quantified. A report by McKinsey & Co [4] shows that establishing programs for teaching critical project management skills can improve project performance in time and costs. As an example, the report states: "One defense organization used these programs to train several waves of project managers and leaders who together administered a portfolio of more than 1,000 capital projects ranging in size from $100,000 to $500 million. Managers who successfully completed the training were able to cut costs on most projects by between 20 and 35 percent. Over time, the organization expects savings of about 15 percent of its entire baseline spending". In a white paper by the PMI (Project Management Institute) about the value of project management [5], it is stated that: "Leading organizations across sectors and geographic borders have been steadily embracing project management as a way to control spending and improve project results". According to the research made by the PMI for the paper, after the economic crisis "Executives discovered that adhering to project management methods and strategies reduced risks, cut costs and improved success rates—all vital to surviving the economic crisis". In every elite company, proper execution of the project management discipline has become a must. Several members of the software industry have put effort into finding ways of assuring high-quality results from projects; many standards, best practices, methodologies and other resources have been produced by experts from different fields of expertise. In both industry and the academic community, there is continuous research on how to better teach software engineering together with project management [4][6]. For the general practices of Project Management, the PMI produced a guide to the knowledge that any project manager should have in their toolbox to lead any kind of project; this guide is called the PMBOK. On the side of best practices and required knowledge for the Software Engineering discipline, the IEEE (Institute of Electrical and Electronics Engineers) developed the SWEBOK (Software Engineering Body of Knowledge) in collaboration with software industry experts and academic researchers, compiling in the guide much of the knowledge expected of a software engineer with five years of experience [7]. The SWEBOK also covers management from the perspective of a software project. This thesis is developed to provide guidance to practitioners and members of the academic community about project management applied to software engineering.
The approach used in this thesis to obtain useful information for practitioners is to take an industry-approved guide for software engineering professionals, the SWEBOK, and compare its content to that of the PMBOK. After comparing the contents of the SWEBOK and the PMBOK, what is found missing in the SWEBOK is used to give recommendations on how to enrich the project management skills of a software engineering professional. Recommendations for members of the academic community, on the other hand, are given taking into account the GSwE2009 (Graduate Software Engineering 2009) standard [8]. GSwE2009 is often used as a main reference for software engineering master's programs [9]. The standard is mostly based on the content of the SWEBOK, plus some contents considered to reinforce the education of software engineers. Given the similarities between the SWEBOK and the GSwE2009, the results of comparing the SWEBOK and the PMBOK are also considered valid for enriching what the GSwE2009 proposes. In the end, the recommendations for practitioners are thus also useful to the academic community and its strategies for teaching project management in the context of software engineering.
Abstract:
Over the past few years, the common practice within air traffic management has been for commercial aircraft to fly along a set of predefined routes to reach their destination. Currently, aircraft operators are requesting more flexibility to fly according to their preferences, in order to achieve their business objectives. For this reason, much research effort is being invested in developing techniques to evaluate optimal aircraft trajectories and traffic synchronisation. A further source of inefficiency is the use of barometric altitude, above all in the landing and takeoff phases and in Continuous Descent Approach (CDA) trajectories, where the appropriate reference setting (QNH or QFE) must currently be introduced. The interest of this research was born out of the need to solve this problem and to permit better airspace management; its main goals are to evaluate the impact, weaknesses and strengths of using geometric altitude instead of barometric altitude. Moreover, this dissertation proposes the design of a simplified trajectory simulator able to predict aircraft trajectories. The model is based on a three-degrees-of-freedom point-mass aircraft model that can adapt aircraft performance data from the Base of Aircraft Data, together with meteorological information. A feature of this trajectory simulator is to support the improvement of strategic and pre-tactical trajectory planning in future Air Traffic Management. To this end, the error of the tool (the aircraft Trajectory Simulator) is measured by comparing its performance variables with actual flown trajectories obtained from Flight Data Recorder information. The trajectory simulator is validated by analysing the performance of different types of aircraft on different routes. A fuel consumption estimation error was identified, and a correction is proposed for each aircraft model type. In the future Air Traffic Management (ATM) system, the trajectory becomes the fundamental element of a new set of operating procedures collectively referred to as Trajectory-Based Operations (TBO). Thus, governmental institutions, academia and industry have shown renewed interest in the application of trajectory optimisation techniques to commercial aviation. The trajectory optimisation problem can be solved using optimal control methods. In this research we present and discuss the existing methods for solving optimal control problems, focusing on direct collocation, which has received recent attention from the scientific community. In particular, two families of collocation methods are analysed: Hermite-Legendre-Gauss-Lobatto collocation and pseudospectral collocation. They are first compared on a benchmark case study: the minimum-fuel trajectory problem with fixed arrival time. For the sake of scalability to more realistic problems, the different methods are also tested on a real Airbus 319 Cairo-Madrid flight. Results show that pseudospectral collocation, which proved numerically more accurate and computationally much faster, is suitable for the type of problems arising in trajectory optimisation with application to ATM. Fast and accurate optimal trajectories can contribute to meeting the new challenges of future ATM. As atmospheric uncertainties are one of the most important issues in trajectory planning, the final objective of this dissertation is to obtain an order of magnitude of how much fuel consumption differs under different atmospheric conditions.
It is important to note that in the strategic planning phase the optimal trajectories are determined from meteorological predictions, which differ from the conditions encountered at the moment of the flight. The optimal trajectories have shown savings of at least 500 kg under most atmospheric conditions (different pressure and temperature at Mean Sea Level, and different temperature lapse rates) with respect to the conventional procedure simulated under the same atmospheric conditions. These results show that the implementation of optimal profiles is beneficial under current Air Traffic Management (ATM).
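For orientation, the sketch below shows the skeleton of a three-degrees-of-freedom point-mass model of the kind such a simulator is built on. The mass, wing area, drag polar, thrust and atmosphere model are generic placeholder values, not Base of Aircraft Data coefficients, and the flight-path angle is treated as a given control.

```python
# Minimal 3-DOF longitudinal point-mass model; all numbers are illustrative.
import numpy as np

g = 9.81
S, m = 122.6, 60000.0                     # wing area [m2], mass [kg] (assumed)
CD0, k = 0.024, 0.045                     # parabolic drag polar (assumed)

def derivatives(state, thrust, gamma):
    """State: [x, h, V]; flight-path angle gamma treated as a control here."""
    x, h, V = state
    rho = 1.225 * np.exp(-h / 8500.0)     # simple exponential atmosphere
    CL = 2 * m * g * np.cos(gamma) / (rho * S * V**2)   # quasi-steady lift
    D = 0.5 * rho * S * V**2 * (CD0 + k * CL**2)        # drag polar
    Vdot = (thrust - D) / m - g * np.sin(gamma)
    return np.array([V * np.cos(gamma), V * np.sin(gamma), Vdot])

# Forward-Euler integration of a short climb segment (10 minutes).
state, dt = np.array([0.0, 3000.0, 150.0]), 1.0
for _ in range(600):
    state = state + dt * derivatives(state, thrust=120e3, gamma=0.05)
print("range [m], altitude [m], speed [m/s]:", np.round(state, 1))
```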
Abstract:
Extensive research is available nowadays on the characterization of hydraulic fills, in terms of both their static and their dynamic behavior. However, reported comprehensive analyses of these soils in relation to their main uses and problems in port and mining works are scarce. Moreover, the semi-empirical procedures for assessing the silo effect on the cells of floating caissons, and the liquefaction potential of these soils under sudden loads or earthquakes, are based on studies in which the influence of the underlying parameters is not well known, yielding results with significant scatter. This is the case, for instance, of the hazards reported by the Barcelona working group, namely the failure of the harbor caissons at the Port of Barcelona in 2007.
By virtue of this, an approach has been undertaken to evaluate these problems through a proposed theoretical-numerical and experimental methodology. Within the theoretical and numerical scope, the study focuses on the theoretical framework and the numerical tools capable of facing the different challenges of this problem. The complexity is manifold: the highly non-linear behavior of loose, lightly confined soils consolidating under self-weight; their potentially liquefiable nature; the hydromechanical characterization of the soil-structure contacts, which act as preferential paths for water flow and lateral consolidation; and the nearly null effective stress state taken as the initial condition. Within the experimental scope, a straightforward laboratory methodology is introduced for the hydromechanical characterization of the soil and the interfaces, without the need for complex laboratory devices or cumbersome procedures. This study therefore includes a brief overview of hydraulic fill execution, its main uses (land reclamation, filled cells, tailings dams, etc.) and the underlying phenomena (self-weight consolidation, silo effect, liquefaction, etc.). It ranges from the evolution of the traditional consolidation equations (Terzaghi, 1943) (Gibson, English & Hussey, 1967) and solving methodologies (Townsend & McVay, 1990) (Fredlund, Donaldson & Gitirana, 2009) to the contributions on the silo effect (Janssen, 1895) (Ravenet, 1977) and on liquefaction phenomena (Casagrande, 1936) (Castro, 1969) (Been & Jefferies, 1985) (Pastor & Zienkiewicz, 1986). The novelty of the study lies in the development of a Finite Element Method (FEM) code, formulated exclusively for this problem in MATLAB. Subsequently, a theoretical (Biot, 1941) (Zienkiewicz & Shiomi, 1984) (Segura & Carol, 2004) and numerical framework (Zienkiewicz & Taylor, 1989) (Huerta & Rodríguez, 1992) (Segura & Carol, 2008) is introduced for multidimensional consolidation problems with frictional contacts, together with the corresponding constitutive models (Pastor & Zienkiewicz, 1986) (Fu & Liu, 2011). An experimental methodology is presented for the laboratory tests and material characterization (Castro, 1969) (Bahda, 1997) (Been & Jefferies, 2006), using Hostun sand as the reference hydraulic fill. A series of novel direct shear tests for the hydromechanical characterization of the soil-concrete interface, for different formwork types and roughnesses, is included. Finally, a specific algorithm for the solution of the set of governing differential equations of the problem is presented. It comprehensively simulates the transient process of decantation and consolidation, the build-up of the silo effect in caisson cells, and related phenomena such as self-compaction and liquefaction. To this end, a 2D axisymmetric model with a coupled u-p formulation, combining continuum elements and zero-thickness interface elements, has been implemented, aimed at simulating the conditions and self-weight consolidation of hydraulic fills once placed into floating caisson cells or close to retaining structures. This basically concerns a loose granular soil with a negligible initial effective stress level at the onset of the process, so that practically all the overburden is initially carried as excess pore pressure. The implementation requires specific numerical algorithms as well as specific constitutive models for both the continuum and the interface elements. Simulating the different fill placement procedures required modifying the algorithm so that these procedures could be represented numerically, allowing a comparison of the results for the different procedures.
Furthermore, the continuous updating of the model parameters provides insightful profiles of variables such as density, void ratio, solid fraction, total and excess pore pressure, and stresses and strains. This leads to a better understanding of complex phenomena such as the transient gradient of lateral pressures on silo-like retaining structures due to the silo effect in saturated soils. Comparisons between the model results and the technical literature are included, both for self-weight consolidation (Fredlund, Donaldson & Gitirana, 2009) and for the silo effect (Puertos del Estado, 2006; EuroCode, 2006; Japan Tech. Stands., 2009). This study closes with the design of a decantation column prototype with frictional walls, proposed as the main future line of research.
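As a heavily simplified, self-contained stand-in for the consolidation problem treated in the thesis, the sketch below integrates the classical Terzaghi equation du/dt = cv d²u/dz² in 1D, starting from a self-weight excess pore pressure profile; the coupled u-p FEM with interface elements developed in the work is far more general, and all parameter values here are assumptions.

```python
# Explicit finite-difference solution of 1D Terzaghi consolidation, starting
# from full self-weight excess pore pressure. A toy stand-in; values assumed.
import numpy as np

cv = 1e-6                                  # consolidation coeff. [m2/s] (assumed)
H, nz = 10.0, 101                          # fill height [m], grid points
z = np.linspace(0.0, H, nz)
dz = z[1] - z[0]
dt = 0.4 * dz**2 / cv                      # below the explicit stability limit

gamma_sub = 9.0e3                          # submerged unit weight [N/m3] (assumed)
u = gamma_sub * (H - z)                    # initial excess pressure = overburden
u[-1] = 0.0                                # drained top surface

for _ in range(20000):
    u[1:-1] += cv * dt / dz**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u[0] += cv * dt / dz**2 * 2 * (u[1] - u[0])   # impervious base (no flow)
    u[-1] = 0.0

print("average degree of consolidation: %.2f" % (1 - u.mean() / (gamma_sub * H / 2)))
```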
Abstract:
The stability of an infinitely long compound liquid column is analysed by using a one-dimensional inviscid slice model. Results obtained from this one-dimensional linear analysis are applicable to the study of compound capillary jets, which are used in the ink-jet printing technique. Stability limits and the breaking regimes of such fluid configurations are established, and, whenever possible, theoretical results are compared with experimental ones.
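For orientation, the classical Plateau-Rayleigh criterion for a simple (single-fluid) infinite inviscid column of radius R is recalled below; the stability limits derived in the paper generalize this kind of criterion to the compound configuration.

```latex
% Classical axisymmetric stability limit for a single inviscid liquid column
% of radius R (Plateau-Rayleigh); the compound-column analysis generalizes it.
\[
  \text{unstable} \;\Longleftrightarrow\; kR < 1
  \;\Longleftrightarrow\; \lambda = \frac{2\pi}{k} > 2\pi R .
\]
```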
Abstract:
This article presents a new and computationally efficient method of analysis of a railway track modelled as a continuous beam of 2N spans supported by elastic vertical springs. The main feature of this method is a substantial reduction in computational effort with respect to standard matrix methods of structural analysis. In this article, the whole structure is considered to be a repetition of a single unit. The analysis presented is applied to a simple railway track model, i.e. to a repetitive beam supported on vertical springs (sleepers). The proposed method of analysis is based on the general theory of spatially periodic structures. The main feature of this theory is the possibility of applying the Discrete Fourier Transform (DFT) in order to reduce a large system of q(2N + 1) linear stiffness equilibrium equations to a set of 2N + 1 uncoupled systems of q equations each. In this way, a dramatic reduction in the computational effort of solving the large system of equations is achieved. This fact is particularly important in the analysis of railway track structures, in which N is a very large number (several thousands) and q is very small (q = 2: the vertical displacement and the rotation). The proposed method allows us to easily obtain the exact solution given by Samartín [1], i.e. the continuous-beam railway track response. The comparison between the proposed method and other methods of analysis of railway tracks, such as those of Lorente de Nó and Zimmermann-Timoshenko, clearly shows the accuracy of the results obtained with the proposed method, even for low values of N. In addition, identical results have been found between the proposed method and the Lorente method, although the proposed method seems simpler to apply and computationally more efficient than the Lorente one. Small but significant differences occur between these two methods and the one developed by Zimmermann-Timoshenko. This article also presents a detailed sensitivity analysis of the vertical displacement of the sleepers. Although standard matrix methods of structural analysis can handle this railway model, one of the objectives of this article is to show the efficiency of the DFT method with respect to standard matrix structural analysis. A comparative analysis between standard matrix structural analysis and the proposed method (DFT), in terms of computational time, input, output and software programming, is carried out. Finally, a URL link to a MATLAB computer program listing, based on the proposed method, is given.
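The sketch below illustrates the algebraic core of such a reduction on a toy problem: for a block-circulant (cyclically periodic) stiffness system with q = 2 degrees of freedom per node, the DFT decouples the N·q equations into N independent q × q solves. The numeric blocks are invented, and the cyclic layout is a simplification of the article's (2N + 1)-span track.

```python
# DFT block-diagonalization of a block-circulant system K u = f (toy data).
import numpy as np

q, N = 2, 8                                # dofs per node, number of bays
# Assumed toy blocks: diagonal block K0 and coupling to each neighbour K1.
K0 = np.array([[4.0, 1.0], [1.0, 6.0]])
K1 = np.array([[-1.0, 0.0], [0.0, -2.0]])

rng = np.random.default_rng(3)
f = rng.normal(size=(N, q))                # nodal load vectors

# DFT of the load pattern over the periodic direction (one DFT per dof).
f_hat = np.fft.fft(f, axis=0)

u_hat = np.empty_like(f_hat)
for j in range(N):                         # one small q x q solve per harmonic
    phase = np.exp(2j * np.pi * j / N)
    Kj = K0 + K1 * phase + K1.T * np.conj(phase)
    u_hat[j] = np.linalg.solve(Kj, f_hat[j])

u = np.fft.ifft(u_hat, axis=0).real        # back-transform to nodal dofs

# Check against the fully assembled block-circulant system.
K = np.zeros((N * q, N * q))
for i in range(N):
    K[i*q:(i+1)*q, i*q:(i+1)*q] = K0
    K[i*q:(i+1)*q, ((i+1) % N)*q:((i+1) % N + 1)*q] += K1
    K[i*q:(i+1)*q, ((i-1) % N)*q:((i-1) % N + 1)*q] += K1.T
print(np.allclose(u.ravel(), np.linalg.solve(K, f.ravel())))  # expect True
```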
Abstract:
Since the advent of the computer in the engineering field, the application of numerical methods to the solution of engineering problems has grown very rapidly. Among the different computer methods of structural analysis, the Finite Element Method (FEM) has been predominantly used. Shells and space structures are very attractive and have been constructed to solve a large variety of functional problems (roofs, industrial buildings, aqueducts, reservoirs, footings, etc.). In this type of structure, aesthetics, structural efficiency and concept play a very important role. This class of structures can be divided into three main groups, namely continuous (concrete) shells, space frames and tension (fabric, pneumatic, cable, etc.) structures. In the following, only the current applications of the FEM to the analysis of continuous shell structures will be discussed. However, some of the comments on this class of shells can also be applied, to some extent, to the others, although specific computational problems will be restricted to continuous shells. Different aspects of the analysis of shells by the FEM, such as the type of elements and input-output computational techniques, will be described below. Clearly, the improvements and developments occurring in the FEM in general since its first appearance in the fifties have had a significant impact on the particular class of structures under discussion.
Abstract:
The availability of suitable laser sources is one of the main challenges in future space missions for accurate measurement of atmospheric CO2. The main objective of the European project BRITESPACE is to demonstrate the feasibility of an all-semiconductor laser source to be used as a space-borne laser transmitter in an Integrated Path Differential Absorption (IPDA) lidar system. We present here the proposed transmitter and system architectures, the initial device design and the results of the simulations performed in order to estimate the source requirements, in terms of power, beam quality, and spectral properties, needed to achieve the required measurement accuracy. The laser transmitter is based on two InGaAsP/InP monolithic Master Oscillator Power Amplifiers (MOPAs), providing the ON and OFF wavelengths close to the selected absorption line around 1.57 µm. Each MOPA consists of a frequency-stabilized Distributed Feedback (DFB) master oscillator, a modulator section, and a tapered semiconductor amplifier optimized to maximize the optical output power. The design of the space-compliant laser module includes the beam-forming optics and the thermoelectric coolers. The proposed system replaces the conventional pulsed source with a modulated continuous-wave source using the Random Modulation-Continuous Wave (RM-CW) approach, making the designed semiconductor MOPA applicable in such applications. The system requirements for obtaining a CO2 retrieval accuracy of 1 ppmv and a spatial resolution of less than 10 meters have been defined. Envelope estimates of the returns indicate that the average power needed is a few watts and that the main noise source is ambient noise.
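The sketch below gives a toy numeric illustration of the RM-CW principle named above: the continuous-wave output is modulated with a pseudorandom ±1 sequence, and the noisy return is cross-correlated with the transmitted code to recover the round-trip delay. Code length, delay, attenuation and noise level are arbitrary assumptions, not BRITESPACE parameters.

```python
# Toy RM-CW ranging: pseudorandom code modulation + correlation detection.
import numpy as np

rng = np.random.default_rng(4)
n, true_delay = 1024, 137                  # code length [chips], delay [chips]

code = rng.choice([-1.0, 1.0], size=n)     # pseudorandom +/-1 modulation
echo = 0.1 * np.roll(code, true_delay)     # weak delayed return (attenuated)
echo = echo + 0.5 * rng.standard_normal(n) # ambient + detector noise

# Circular cross-correlation via FFT; the peak position gives the delay.
xcorr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(code))).real
print("estimated delay [chips]:", int(np.argmax(xcorr)))   # expect 137
```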
Abstract:
This paper presents a project for providing students of Structural Engineering with the flexibility to learn outside classroom schedules. The goal is a framework for adaptive e-learning based on a repository of open educational courseware covering a set of basic Structural Engineering concepts and fundamentals. These are paramount for students to expand their technical knowledge and skills in the structural analysis and design of tall buildings, arch-type structures and bridges. Concepts related to structural behaviour, such as linearity, compatibility, stiffness and influence lines, have traditionally been elusive for students. The objective is to provide the student with a teaching-learning process through which to acquire the necessary intuitive knowledge and cognitive skills, and the basis for further technological modules and professional development in this area. As a side effect, the system is expected to help students improve their preparation for exams on the subject. In this project, a web-based open-source system for studying influence lines on continuous beams is presented. It encompasses a collection of interactive, user-friendly applications accessible via the Web, written in JavaScript using the jQuery and Dygraphs libraries, taking advantage of their efficiency and graphic capabilities. It is available in both Spanish and English. The student can set the geometric, topologic, boundary and mechanical layout of a continuous beam. As the loading and support conditions are changed, the changes in the beam response appear on the screen, so that the effects of the several issues involved in structural analysis become apparent. This open interaction with the user allows the student to simulate and virtually infer the structural response. Different levels of complexity can be handled, and ongoing help is at hand for any of them. Students can freely boost their experiential learning on this subject at their own pace, in order to further share, process, generalize and apply the relevant essential concepts of Structural Engineering analysis. Besides, this collection is being added to the "Virtual Lab of Continuum Mechanics" of the UPM, launched in 2013 (http://serviciosgate.upm.es/laboratoriosvirtuales/laboratorios/medios-continuos-en-construcci%C3%B3n).
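As a hint of the structural computation such an applet performs, the sketch below (Python rather than the project's JavaScript) evaluates the influence line of the central support reaction of a two-equal-span continuous beam by the force method (consistent deformations); the normalised span and sampling points are illustrative choices.

```python
# Influence line of the central support reaction of a two-equal-span
# continuous beam via the force method; EI cancels, so it is set to 1.
import numpy as np

L = 1.0                                    # each span (normalised)
l = 2 * L                                  # primary structure: simple beam

def defl_mid(a):
    """Midspan deflection of a simply supported beam (length l, EI = 1)
    under a unit load at distance a from the left support."""
    a = min(a, l - a)                      # use symmetry so that a <= l/2
    x = l / 2                              # evaluation point (x >= a)
    return a * (l - x) * (2 * l * x - a**2 - x**2) / (6 * l)

delta_bb = l**3 / 48                       # midspan deflection, unit midspan load
xs = np.linspace(0, l, 9)                  # load positions along both spans
R_B = [defl_mid(a) / delta_bb for a in xs] # influence ordinates
print(np.round(R_B, 3))                    # 1.0 when the load sits on support B
```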
Abstract:
Optimizing the Quality of Experience (QoE) of HTTP adaptive video streaming (HAS) is receiving increasing attention nowadays. The growth of interest is mainly caused by the fact that current HAS solutions are not QoE-driven, i.e. end-user quality perception is not an integral part of the adaptation logic. However, obtaining the necessary reliable ground truths on HAS QoE faces substantial challenges, since the subjective video quality assessment methodologies proposed by current standards are not well suited to dealing with the time-varying quality properties that are characteristic of HAS. This thesis investigates the influence of dynamic quality adaptation on the QoE of streaming video by means of subjective evaluation approaches. Based on a comprehensive survey of related work on subjective HAS QoE assessment, the associated challenges and open research questions are highlighted and discussed. As a result, two main research directions are selected for further investigation: analysis of the QoE impact of different technical adaptation parameters, and investigation of testing methodologies suitable for HAS QoE evaluation. To investigate the related research issues and questions, a set of laboratory experiments was conducted using different subjective testing methodologies. Our statistical analysis demonstrates that not all assumptions and claims reported in the literature are robust, particularly as regards the QoE impact of switching frequency, smooth versus abrupt switching, and quality oscillation. On the other hand, our results confirm the influence of other parameters, such as chunk length and switching amplitude, on perceived quality. We also show that taking the objective characteristics of the content into account can be beneficial for improving the adaptation viewing experience. In addition, all of the aforementioned findings are validated by means of an extensive cross-experimental analysis involving external laboratory and crowdsourcing studies. Finally, to address the methodological aspects of subjective QoE testing, a comparison was performed between the experimental results obtained from a standardized short-stimuli-based method (ACR) and a semi-continuous method developed for the assessment of long video sequences. In spite of some observed differences, the statistical analysis does not show any significant effect of the testing methodology. Similarly, although the influence of audio presence on the evaluation of video-related degradations is perceived, no statistically significant effect of audio presence could be found. Motivated by these findings (no effect of testing method or audio presence), a subsequent analysis investigated the impact of performing multiple statistical comparisons on statistical significance levels, which increases the likelihood of Type-I errors (false positives). Our results show that, in order to obtain a strong effect from the statistical analysis of the subjective results, it is necessary to increase the number of test subjects well beyond the sample sizes proposed by current quality assessment standards and recommendations.
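The multiple-comparison effect mentioned above can be made concrete with a two-line computation: with m independent tests at per-test significance level alpha, the familywise probability of at least one Type-I error is 1 - (1 - alpha)^m, which a Bonferroni-style correction (alpha/m) restores to roughly alpha. The numbers below are purely illustrative.

```python
# Familywise error rate (FWER) for m independent tests, with and without
# the Bonferroni correction; m and alpha are illustrative values.
m, alpha = 20, 0.05
fwer_uncorrected = 1 - (1 - alpha) ** m
fwer_bonferroni = 1 - (1 - alpha / m) ** m
print(f"uncorrected FWER: {fwer_uncorrected:.3f}")   # ~0.642
print(f"Bonferroni FWER:  {fwer_bonferroni:.3f}")    # ~0.049
```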
Abstract:
The Illinois Institute of Technology (iit) campus, Chicago, by architect Ludwig Mies van der Rohe, is often considered a transitional work, usually acknowledged as significant for the reorientation of his professional career after he emigrated to the United States. Moreover, its favorable recognition today is somehow indicative of its relevance as a model for urban intervention in the contemporary American city, and for contemporary city planning in general, not to mention the profound impact that it had on the cityscape of Chicago. However, today we know it was rather the result of a close collaboration between Mies and Ludwig Hilberseimer, later completed with Alfred Caldwell, who merged their personal ideas and expertise in the design for the first time. In addition, when one tries to locate the design within its own historical context and evaluate the sources of its approach, some contradictions arise. The major impact of the images produced by Mies to promote its realization, widely disseminated in most contemporary architectural periodicals, probably outshone the particular circumstances in which the design was conceived. In fact, it would never be materialized as originally presented; it was, instead, continuously reworked according to land availability on the site, a circumstance often ignored by subsequent architectural criticism, which enthusiastically praised the design even before it was fully completed. One of the main consequences of looking at iit from such a standpoint is that, when historically contextualized, one can appreciate that, due to the urban scale of its implementation process, the design had to face a complex reality very different from that initially planned by the architect, often beyond his actual possibilities of intervention. Such an approach contradicts the common description of the design as a 'tabula rasa' that allegedly would have been formulated on the basis of a full denial of its context. On the contrary, the ever-changing circumstances of the design motivated a necessary reinterpretation of the relation between its executed fragments, in order to keep the original identity of the whole in an ever-changing context. This situation implied a continuous transformation of the design by means of a steady recomposition of its elements: as the number of completed buildings increased in its successive stages, their relation to their site-specific context changed, in a very particular process that these lines try to delineate. Requiring decades to be erected, none of its authors would ever see the design finished as planned, partly because of the difficulties in acquiring the extension of land that it required. Considering that the study of this process can provide a valuable gateway to understanding the urban discourse of the architects involved, the aim of these lines is to analyze the problems that the iit campus design had to face. As a starting point, a relationship between practice and theory in the activity of the authors involved in the iit campus design has been assumed. Far from being interrupted during World War ii, strong historical evidence can be found to infer that both were developed in parallel. Consequently, the historical sequence of the preserved testimonies has been put into context, as well as their transformation while Mies remained in charge of the campus Master Plan.
Notably, when seen from this perspective, some ideas already expressed during his previous European practice were still present during the design process. In particular, Mies's particular understanding of certain architectural concepts, such as those of 'order' and 'structure', can be traced in parallel with his collaborators' theories on urban planning, a fact that possibly facilitated the successful development of the campus. The study of the way these ideas were actually redeveloped and modified in the American urban context, added to the specific process of the implementation of the iit campus design, sheds new light for a critical interpretation of the reasons that made it possible, and of the actual responsibility of Mies's collaborators in its overall development and final completion.
Abstract:
Innovation is a process that faces several market-failure situations. For this reason, and because it is considered one of the main drivers of economic growth, a large number of governmental and supranational policies are designed to foster technological progress. Along with these policies, there is an increasing concern with their continuous evaluation, aimed at providing valuable feedback for the adaptation and adequacy of these programmes to firms' needs. The paper develops an evaluation of the influence of innovation-focused programmes on firms' innovation and economic performance by means of a comparative analysis of the results obtained by Spanish firms that participated in national R&D programmes and those achieved by Spanish firms participating in the international EUREKA programme. Findings show that the programmes were effective in achieving their objective of promoting technological innovation but, as regards the economic effects, the results were less conclusive, since some differences were observed depending on the programme. The EUREKA companies displayed better behaviour, with positive differences in their returns on assets and labour productivity. The results also confirm the importance of designing more detailed and rigorous evaluation processes, taking into account the risk variable, in order to draw a more realistic picture of the impact of national and international programmes.