949 results for Measurement method


Relevance:

60.00%

Publisher:

Abstract:

This doctoral thesis contributes to the analysis and development of new building elements that integrate electricity generation through photovoltaic (PV) cells, in particular thin-film PV technology. The architectural integration process of these elements, known internationally as Building Integrated Photovoltaics (BIPV), is studied through several methodologies. The work begins with a study of existing PV elements, continues with the materials that currently make up building skins, and examines their possible adaptation to the different PV technologies. A strategy for integrating PV elements into building materials is then proposed. This strategy considers the dual function of BIPV elements, electrical and architectural, and in particular addresses the integration of heat-dissipation and heat-storage elements based on phase change materials (PCM), with the aim of supporting passive thermal conditioning through the BIPV element. To validate the strategy, an experimental methodology is developed that consists of the design and construction of a prototype, the BIPV/TF-PCM element, together with a method for its measurement and characterisation under laboratory conditions. The main achievements include the multifunctionality of the BIPV elements, the exploitation of the element's residual energy, the reduction of excess heat that could alter the thermal balance of the building envelope, and the improvement in the electricity production of the PV modules obtained by lowering their operating temperature, all of which make the BIPV solution more sustainable. Finally, as a result of the theoretical and experimental analysis, this thesis contributes significantly to the practical study of the adaptability of BIPV elements in the urban environment, through a methodology based on the development and deployment of a software tool that allows both engineers and architects to verify the quality of the architectural integration and the electrical performance of PV elements before, during and after the execution of a building project.
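
The electrical gain from cooling the modules can be illustrated with the standard linear power-temperature model for PV generators. The sketch below is only illustrative: the temperature coefficient and the operating temperatures are assumed example values, not results from the thesis.

```python
# Illustrative sketch: effect of module temperature on PV output, using the
# standard linear power-temperature model
#   P(T) = P_stc * (1 + gamma * (T - 25 degC))
# The temperature coefficient gamma and the temperatures below are assumed
# example values, not measurements from the thesis.

def pv_power(p_stc_w: float, t_module_c: float, gamma_per_c: float = -0.0025) -> float:
    """Module power at temperature t_module_c, given the STC rating p_stc_w."""
    return p_stc_w * (1.0 + gamma_per_c * (t_module_c - 25.0))

p_hot = pv_power(100.0, 65.0)  # un-cooled element, assumed 65 degC
p_pcm = pv_power(100.0, 45.0)  # PCM-cooled element, assumed 45 degC
print(f"without PCM: {p_hot:.1f} W, with PCM: {p_pcm:.1f} W "
      f"({100 * (p_pcm - p_hot) / p_hot:+.1f}%)")
```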

Relevance:

60.00%

Publisher:

Abstract:

Noise from leisure activities is one of the most significant sources of noise pollution in today's society. It is present not only around bars, pubs and nightclubs, but also in the areas where a city's festive events take place. However, few studies or actions have addressed leisure noise from an environmental standpoint, so little is known about its main characteristics, the appropriate measurement methods or the most suitable parameters for evaluating it. These gaps define the objectives of this doctoral thesis. For night-time leisure noise, a measurement method has been developed and evaluated, based on binaural measurements taken along a walking route (the soundwalk technique) and on long-duration measurements at fixed points in various leisure areas of Madrid and Cuenca. From the results, an acoustic characterisation of leisure noise has been produced, an action procedure that includes a prediction model has been defined, and a classification model capable of distinguishing leisure noise from road traffic noise has been developed. For leisure events, an evaluation and measurement method adapted to their characteristics has also been developed and applied to the most important events held over one year in Madrid and Cuenca; the analysis of these measurements identified the noisiest events, their main characteristics and the differences between them. This study is intended to support the management of environmental noise from leisure activities, providing qualitative and quantitative data on this type of noise in its different facets and contributing new tools that facilitate its management.
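
The long-duration fixed-point measurements reduce naturally to equivalent continuous sound levels. As a minimal illustration, not the thesis' actual processing chain, the sketch below computes LAeq from a series of short-interval A-weighted levels; the sample values are assumed.

```python
import math

# Minimal sketch: equivalent continuous sound level (LAeq) from a series of
# short-interval A-weighted levels L_i (dB), each covering an equal time slice:
#   LAeq = 10 * log10( (1/N) * sum(10 ** (L_i / 10)) )
# The level series below is an assumed example, not measured data.

def laeq(levels_db):
    mean_energy = sum(10 ** (l / 10) for l in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

night_levels = [62.0, 65.5, 71.2, 68.4, 64.9]  # assumed 1-minute LAeq values
print(f"LAeq over the period: {laeq(night_levels):.1f} dB(A)")
```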

Relevance:

60.00%

Publisher:

Abstract:

The number of mobile devices we use every day keeps growing. These devices rely on wireless technologies such as cellular networks, WiFi and Bluetooth, which entail high energy consumption, while battery capacity remains a hard limit: a smartphone used daily typically lasts a day or little more on a charge. Faced with this problem of high energy consumption, the consumer electronics industry is forced to develop applications and operating systems that consume power more efficiently, batteries with other chemistries, and so on, all of which requires an effective way to measure energy consumption. In the GDEM laboratory (Grupo de Diseño Electrónico y Microelectrónico) there are currently two lines of work to address or mitigate this problem: work aimed at making the system use energy more efficiently, and work aimed at obtaining more precise measurements of that consumption so that the system itself can use them to decide how to act. With these motivations, a board has been designed that measures the power consumed by the BeagleBoard using a novel measurement method. The results obtained validate the design, and the total manufacturing cost was under ten euros. The objectives have therefore been met, producing a board characterised by its simplicity and low cost, and opening the door, together with future work on the necessary software, to making the BeagleBoard aware of its own power consumption in real time.
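
The abstract does not detail the novel measurement method, but a common way to measure board-level power consumption is to sample the voltage drop across a small shunt resistor in the supply rail. The sketch below illustrates that generic approach; the circuit values and samples are assumptions, not the design actually built.

```python
# Illustrative sketch of shunt-based power measurement, a common approach for
# boards like the BeagleBoard. The resistance, supply voltage and ADC samples
# below are assumed example values, not the parameters of the board described.

R_SHUNT_OHM = 0.05   # assumed shunt resistance placed in the supply rail
V_SUPPLY_V = 5.0     # assumed supply voltage

def instantaneous_power(v_shunt_v: float) -> float:
    """Power drawn by the load, from the voltage drop across the shunt."""
    current_a = v_shunt_v / R_SHUNT_OHM
    return (V_SUPPLY_V - v_shunt_v) * current_a

# Assumed ADC samples of the shunt voltage drop (volts):
samples = [0.021, 0.024, 0.019, 0.030, 0.026]
powers = [instantaneous_power(v) for v in samples]
print(f"average power: {sum(powers) / len(powers):.3f} W")
```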

Relevance:

60.00%

Publisher:

Abstract:

Multiple indicators are of interest in smart cities, at different scales and for different stakeholders. In open environments such as the Web, or when indicator information has to be interchanged across systems, contextual information (e.g., unit of measurement, measurement method) should be transmitted together with the data; the lack of such information can cause undesirable effects. Describing the data by means of ontologies increases interoperability among datasets and applications. However, methodological guidance is crucial during ontology development in order to turn the art of modelling into an engineering activity. In this paper, we present a methodological approach for modelling data about Key Performance Indicators and their context, together with an application example of these guidelines.
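
The paper's own ontology and guidelines are not reproduced here; as a rough sketch of the underlying idea, publishing an indicator value together with its unit and measurement method so that it remains interpretable across systems, one might express a KPI observation in RDF. All namespaces and terms below are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

# Rough sketch: a KPI observation carrying its own context (unit, measurement
# method) as explicit triples. The EX namespace and every term below are
# invented for illustration; they are not the paper's actual ontology.
EX = Namespace("http://example.org/kpi#")

g = Graph()
g.bind("ex", EX)

obs = EX.observation1
g.add((obs, RDF.type, EX.KPIObservation))
g.add((obs, EX.indicator, EX.EnergyConsumptionPerCapita))
g.add((obs, EX.value, Literal("1234.5", datatype=XSD.decimal)))
g.add((obs, EX.unit, EX.KilowattHour))                    # context: unit
g.add((obs, EX.measurementMethod, EX.SmartMeterReading))  # context: method

print(g.serialize(format="turtle"))
```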

Relevance:

60.00%

Publisher:

Abstract:

Over the last two decades, the importance of knowledge acquisition and dissemination processes within companies has come to the fore, and the study of these processes, together with the implementation of technologies that facilitate them, has attracted growing interest in the scientific community. To ease and optimise knowledge acquisition and dissemination, hierarchical organisations have evolved towards flatter configurations, with more agile network structures that reduce dependence on a centralised authority and favour team-oriented work. At the same time, Web 2.0 collaboration tools such as blogs and wikis have developed rapidly. These tools are characterised by a strong social component and reach their full potential when deployed in flat organisational structures. Web 2.0, based on the participation of the users themselves, emerged in contrast to the website-based technologies of the late 1990s. Fortune 500 companies (HP, IBM, Xerox, Cisco) adopted these tools immediately, although there is no unanimity about their real usefulness or how to measure it. This is partly because the factors that lead employees to adopt them are not well understood, which has caused implementation failures due to various barriers. Given this situation, and in view of the theoretical advantages that Web 2.0 collaboration tools offer companies, managers and the scientific community show a growing interest in answering one question: which factors contribute to a company's employees adopting these Web 2.0 tools for collaboration? The answer is complex, since these are relatively new tools in the business context, tools through which knowledge management, rather than mere information handling, can be carried out. The approach taken in this work to answer that question is the application of technology adoption models, which are based on individuals' perceptions of different aspects of technology use. Under this approach, the main objective of this work is to study the factors that influence the adoption of blogs and wikis in companies by means of a unified, theoretical, predictive model of technology adoption, built with a holistic approach from the literature on technology adoption models and from the particularities of the tools under study in their specific context. This theoretical model makes it possible to determine the factors that predict the intention to use these tools and their actual use. 
The research is structured in five parts: introduction to the research topic, development of the theoretical framework, design of the research, empirical analysis, and conclusions. In the thesis these five parts unfold sequentially across seven chapters: part one corresponds to chapter 1, part two to chapters 2 and 3, part three to chapters 4 and 5, part four to chapter 6, and part five to chapter 7. Chapter 1 focuses on the statement of the research problem and on the main and secondary objectives to be met throughout the work; it also presents the concept of collaboration and its fit with the Web 2.0 collaborative tools considered in the research, an introduction to technology adoption models, the justification of the research, its objectives and the work plan. Chapter 2 reviews the evolution of the main existing technology adoption models (IDT, TRA, SCT, TPB, DTPB, C-TAM-TPB, UTAUT, UTAUT2), setting out their foundations and the factors they employ. Building on these models, chapter 3 studies the same factors adapted to the context of the Web 2.0 collaborative tools under study, blogs and wikis; to make the final model easier to understand, the factors are grouped into four types: technological, control, socio-normative, and other factors specific to the collaborative tools. Chapter 4 selects the factors most appropriate for studying the adoption of the collaborative tools and defines a model that specifies the relationships between them; these relationships become the working hypotheses to be tested in the empirical study. 
Chapter 5 specifies the characteristics of the empirical work carried out to test the hypotheses stated in chapter 4. The research is social in nature and exploratory in type, and rests on a quantitative empirical study analysed with multivariate techniques. This chapter describes the construction of the scales of the measurement instrument and the data collection methodology, and then presents a detailed analysis of the sample, together with a check for bias attributable to the measurement method, known as common method bias. Chapter 6 is devoted to the analysis of results; it first presents the statistical technique employed, PLS-SEM, as a multivariate analysis tool with predictive capability, along with the methodology used to validate the model in two stages, the measurement model and the structural model, the requirements the sample must meet, and the thresholds for the parameters considered. The second part of chapter 6 carries out the empirical analysis of the data for the two samples, one for blogs and one for wikis, in order to validate the research hypotheses put forward in chapter 4. Finally, chapter 7 reviews the degree of fulfilment of the objectives set out in chapter 1 and presents the theoretical, methodological and practical contributions of the work, followed by the general conclusions and the detailed conclusions for each group of factors, as well as the practical recommendations that can be drawn to guide the implantation of these tools in real situations in companies. The chapter closes with the limitations of the study and a series of suggested future lines of work, together with the partial research results obtained over the course of the investigation.
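
One widely used diagnostic for common method bias, not necessarily the exact procedure applied in the thesis, is Harman's single-factor test: if a single unrotated factor explains most of the variance across all survey items, method bias is a concern. A minimal sketch, with random stand-in data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of Harman's single-factor test, a common (if coarse) check for
# common method bias. The item responses below are random stand-ins for
# real questionnaire data.
rng = np.random.default_rng(0)
items = rng.normal(size=(200, 12))   # 200 respondents x 12 Likert-type items

pca = PCA(n_components=items.shape[1])
pca.fit(items)
first_factor_share = pca.explained_variance_ratio_[0]
print(f"variance explained by first factor: {first_factor_share:.1%}")
if first_factor_share > 0.5:
    print("warning: possible common method bias")
```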

Relevance:

60.00%

Publisher:

Abstract:

The primary aim of this master's thesis, "Análisis de la precisión en la medida del tiempo de reverberación y de los parámetros asociados" (Analysis of the accuracy in measuring the reverberation time and the associated parameters), is to evaluate the parameters obtained from the reverberation time and the methods used to obtain them, both globally, over the set of all methods, and for each method separately. A secondary objective is to evaluate the measurement uncertainty as a function of the measurement method used. The work draws on the measurements made for the final-year project [1], in which the reverberation time was measured in two different rooms using the interrupted noise method and the integrated impulse response method with different signals. The signals used were impulsive signals (balloon bursts, pistol shots and clapperboards) and, through digital processing, pseudorandom MLS sequences and pure-tone sweeps. The evaluation applied to each parameter follows UNE 89002 [2], [3] and [4]: the presence of outliers, that is, values inconsistent with the rest of the set, is tested with both the Grubbs and the Cochran methods, and the trueness, precision, repeatability and reproducibility of the results obtained from the first part of this standard are examined. The parameters studied and evaluated are the reverberation time evaluated over a 10 dB decay (T10), a 15 dB decay (T15), a 20 dB decay (T20) and a 30 dB decay (T30), the early decay time (EDT), the final time (Ts), clarity (C20, C30, C50 and C80) and definition (D50 and D80). Depending on whether a parameter refers to the room as a whole or varies with the relative positions of source and microphone, its study follows a different evaluation procedure.
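
The integrated impulse response method referred to above rests on Schroeder backward integration of the squared impulse response. The following sketch estimates T20 from a synthetic exponential decay; it illustrates the standard procedure, not the specific processing of [1].

```python
import numpy as np

# Minimal sketch of reverberation-time estimation by Schroeder backward
# integration: the decay curve is
#   E(t) = 10 * log10( integral_t^inf h^2 / integral_0^inf h^2 )
# and T20 extrapolates a -5 to -25 dB linear fit to 60 dB of decay. The
# synthetic noiseless impulse response below stands in for a measurement.

fs = 8000                                   # sample rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
rt_true = 1.2                               # assumed "true" RT of the room
h = np.exp(-3 * np.log(10) * t / rt_true)   # exponential decay envelope

edc = np.flip(np.cumsum(np.flip(h ** 2)))   # Schroeder backward integral
edc_db = 10 * np.log10(edc / edc[0])

# Linear fit between -5 dB and -25 dB, extrapolated to -60 dB for T20.
mask = (edc_db <= -5) & (edc_db >= -25)
slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
t20 = -60.0 / slope
print(f"estimated T20: {t20:.2f} s")        # ~1.2 s for this synthetic decay
```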

Relevance:

60.00%

Publisher:

Abstract:

The research examines the deposition of airborne particles which contain heavy metals and investigates the methods that can be used to identify their sources. The research focuses on lead and cadmium because these two metals are of growing public and scientific concern on environmental health grounds. The research consists of three distinct parts. The first is the development and evaluation of a new deposition measurement instrument - the deposit canister - designed specifically for large-scale surveys in urban areas. The deposit canister is specifically designed to be cheap, robust and versatile, and therefore to permit comprehensive high-density urban surveys. The siting policy reduces contamination from locally resuspended surface dust. The second part of the research involved detailed surveys of heavy metal deposition in Walsall, West Midlands, using the new high-density measurement method. The main survey, conducted over a six-week period in November - December 1982, provided 30-day samples of deposition at 250 different sites. The results have been used to examine the magnitude and spatial variability of deposition rates in the case-study area, and to evaluate the performance of the measurement method. The third part of the research was a source-identification exercise. The methods used were receptor models - factor analysis and cluster analysis - and a predictive source-based deposition model. The results indicate that there are six main source processes contributing to the deposition of metals in the Walsall area: coal combustion, vehicle emissions, ironfounding, copper refining and two general industrial/urban processes. A source-based deposition model has been calibrated using factor scores for one source factor as the dependent variable, rather than metal deposition rates, thus avoiding problems traditionally encountered in calibrating models in complex multi-source areas. Empirical evidence supports the hypothesised association of this factor with emissions of metals from the ironfounding industry.
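
The receptor-model step can be pictured as a factor analysis of a sites-by-metals deposition matrix, with each extracted factor read as a candidate source process. A minimal sketch, using random data standing in for the 250-site survey:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Sketch of the receptor-model idea: factor analysis of a sites x metals
# deposition matrix. Six factors mirror the number of source processes the
# study identified; the data matrix here is random noise standing in for
# the actual 250-site deposition survey.
rng = np.random.default_rng(1)
deposition = rng.lognormal(mean=0.0, sigma=1.0, size=(250, 10))  # 10 metals

# Standardise, then extract six factors.
z = (deposition - deposition.mean(axis=0)) / deposition.std(axis=0)
fa = FactorAnalysis(n_components=6, random_state=0)
scores = fa.fit_transform(z)   # per-site factor scores (used in the study
loadings = fa.components_      # as the dependent variable of the model)

print("loadings shape:", loadings.shape)   # (6 factors, 10 metals)
print("scores shape:", scores.shape)       # (250 sites, 6 factors)
```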

Relevance:

60.00%

Publisher:

Abstract:

This thesis describes an investigation into methods for controlling the mode distribution in multimode optical fibres. The major contributions presented in this thesis are summarised below. Emerging standards for Gigabit Ethernet transmission over multimode optical fibre have led to a resurgence of interest in the precise control, and specification, of modal launch conditions. In particular, commercial LED and OTDR test equipment does not, in general, comply with these standards. There is therefore a need for mode control devices which can ensure compliance with the standards. A novel device consisting of a point-load mode-scrambler in tandem with a mode-filter is described in this thesis. The device, which has been patented, may be tuned to achieve a wide range of mode distributions and has been implemented in a ruggedised package for field use. Various other techniques for mode control are described in this work, including the use of Long Period Gratings and air-gap mode-filters. Some of the methods have been applied to other applications, such as speckle suppression and sensor technology. A novel, self-referencing sensor comprising two modal groups in the Mode Power Distribution has been designed and tested. The feasibility of a two-channel Mode Group Diversity Multiplexed system has been demonstrated over 985 m. A test apparatus for measuring mode distribution has been designed and constructed. The apparatus consists of a purpose-built video microscope and comprehensive control and analysis software written in Visual Basic. The system may be fitted with a silicon camera or an InGaAs camera, for measurement in the 850 nm and 1300 nm transmission windows respectively. A limitation of the measurement method, when applied to well-filled fibres, has been identified and an improvement to the method has been proposed, based on modelled Laguerre-Gauss field solutions.

Relevance:

60.00%

Publisher:

Abstract:

2002 Mathematics Subject Classification: 62P30, 62P10.

Relevance:

60.00%

Publisher:

Abstract:

This dissertation delivers a framework to diagnose the bull-whip effect (BWE) in supply chains and then identify methods to minimize it. Such a framework is needed because, in spite of the significant amount of literature discussing the bull-whip effect, many companies continue to experience the wide variations in demand that are indicative of it. While the theory and knowledge of the bull-whip effect are well established, there is still no engineering framework and method to systematically identify the problem, diagnose its causes, and identify remedies. The present work seeks to fill this gap by providing a holistic, systems perspective on bull-whip identification and diagnosis. The framework employs the SCOR reference model to examine the supply chain processes with a baseline measure of demand amplification. The structural and behavioral features of the supply chain are then investigated by means of the system dynamics modeling method. The contribution of the diagnostic framework, called the Demand Amplification Protocol (DAMP), relies not only on the improvement of existing methods but also introduces original developments needed for successful diagnosis. DAMP contributes a comprehensive methodology that captures the dynamic complexities of supply chain processes. It also contributes a BWE measurement method that is suitable for actual supply chains because of its low data requirements, and introduces a BWE scorecard for relating established causes to a central BWE metric. In addition, the dissertation makes a methodological contribution to the analysis of system dynamics models with a statistical screening technique called SS-Opt, which determines the inputs with the greatest impact on the bull-whip effect by means of perturbation analysis and subsequent multivariate optimization. The dissertation describes the implementation of the DAMP framework in an actual case study that sets out the approach, analysis, results and conclusions. The case study suggests that a solution balancing costs against demand amplification can better serve both the firms' and the supply chain's interests. The insights point to supplier network redesign, postponement in manufacturing operations and collaborative forecasting agreements with the main distributors.
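
The baseline measure of demand amplification is conventionally the variance ratio of upstream orders to customer demand. The sketch below illustrates that classic measurement with an assumed order-up-to replenishment policy; it is not the dissertation's DAMP protocol or its case-study data.

```python
import numpy as np

# Sketch of the classic bull-whip measurement:
#   BWE = Var(orders) / Var(demand),
# with BWE > 1 indicating amplification. The demand series, lead time and
# order-up-to policy below are assumed illustrations only.
rng = np.random.default_rng(2)
demand = 100 + rng.normal(0, 10, size=520)   # weekly customer demand, assumed

LEAD_TIME = 4        # replenishment lead time in periods, assumed
alpha = 0.3          # exponential-smoothing constant, assumed
forecast = demand[0]
prev_level = (LEAD_TIME + 1) * forecast
orders = []
for d in demand:
    forecast = alpha * d + (1 - alpha) * forecast
    level = (LEAD_TIME + 1) * forecast       # order-up-to level
    orders.append(d + level - prev_level)    # order = demand + level change
    prev_level = level

bwe = np.var(orders) / np.var(demand)
print(f"bull-whip ratio: {bwe:.2f}")         # > 1 means amplification
```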

Relevance:

60.00%

Publisher:

Abstract:

We present a morphology study of intermediate-redshift (0.2 < z < 1.2) luminous infrared galaxies (LIRGs) and general field galaxies in the GOODS fields using a revised asymmetry measurement method optimized for deep fields. By taking careful account of the importance of the underlying sky-background structures, our new method does not suffer from systematic bias and offers small uncertainties. By redshifting local LIRGs and low-redshift GOODS galaxies to different higher redshifts, we have found that the redshift dependence of the galaxy asymmetry due to surface-brightness dimming is a function of the asymmetry itself, with larger corrections for more asymmetric objects. By applying redshift-, infrared (IR)-luminosity- and optical-brightness-dependent asymmetry corrections, we have found that intermediate-redshift LIRGs generally show highly asymmetric morphologies, with implied merger fractions of ~50% up to z = 1.2, although they are slightly more symmetric than local LIRGs. For general field galaxies, we find an almost constant, relatively high merger fraction (20%-30%). The B-band luminosity functions (LFs) of galaxy mergers are derived at different redshifts up to z = 1.2 and confirm the weak evolution of the merger fraction after breaking the luminosity-density degeneracy. The IR LFs of galaxy mergers are also derived, indicating a larger merger fraction at higher IR luminosity. The integral of the merger IR LFs indicates a dramatic evolution of the merger-induced IR energy density, approximately as (1 + z)^(5-6), and that galaxy mergers start to dominate the cosmic IR energy density at z ≳ 1.
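
For reference, a rotational asymmetry estimator of the general kind used here compares an image with its 180-degree rotation and subtracts a blank-sky term, which is exactly where the sky-background bias discussed above enters. The sketch below shows the generic CAS-style form (conventions differ in normalisation and centring), not the paper's revised method, on a synthetic image.

```python
import numpy as np

# Generic CAS-style asymmetry:
#   A = sum|I - I_180| / sum|I|  -  A_sky,
# where I_180 is the image rotated 180 degrees and A_sky is the same
# statistic on a blank-sky patch, correcting the noise bias. The "galaxy"
# below is synthetic; real measurements also minimise A over centres.

def asymmetry(img, sky):
    a_raw = np.abs(img - np.rot90(img, 2)).sum() / np.abs(img).sum()
    a_sky = np.abs(sky - np.rot90(sky, 2)).sum() / np.abs(img).sum()
    return a_raw - a_sky

rng = np.random.default_rng(3)
y, x = np.mgrid[-32:32, -32:32]
galaxy = np.exp(-(x**2 + y**2) / 150.0)                    # symmetric disc
galaxy += 0.4 * np.exp(-((x - 8)**2 + (y - 5)**2) / 40.0)  # off-centre clump
noise = 0.01
galaxy += rng.normal(0, noise, galaxy.shape)
sky_patch = rng.normal(0, noise, galaxy.shape)             # blank-sky estimate
print(f"A = {asymmetry(galaxy, sky_patch):.3f}")
```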

Relevance:

60.00%

Publisher:

Abstract:

A newly developed framework for quantifying aerosol particle diversity and mixing state based on information-theoretic entropy is applied for the first time to single particle mass spectrometry field data. Single particle mass fraction estimates for black carbon, organic aerosol, ammonium, nitrate and sulfate, derived using single particle mass spectrometer, aerosol mass spectrometer and multi-angle absorption photometer measurements are used to calculate single particle species diversity (Di). The average single particle species diversity (Dα) is then related to the species diversity of the bulk population (Dγ) to derive a mixing state index value (χ) at hourly resolution. The mixing state index is a single parameter representation of how internally/externally mixed a particle population is at a given time. The index describes a continuum, with values of 0 and 100% representing fully external and internal mixing, respectively. This framework was applied to data collected as part of the MEGAPOLI winter campaign in Paris, France, 2010. Di values are low (∼ 2) for fresh traffic and wood-burning particles that contain high mass fractions of black carbon and organic aerosol but low mass fractions of inorganic ions. Conversely, Di values are higher (∼ 4) for aged carbonaceous particles containing similar mass fractions of black carbon, organic aerosol, ammonium, nitrate and sulfate. Aerosol in Paris is estimated to be 59% internally mixed in the size range 150-1067 nm, and mixing state is dependent both upon time of day and air mass origin. Daytime primary emissions associated with vehicular traffic and wood-burning result in low χ values, while enhanced condensation of ammonium nitrate on existing particles at night leads to higher χ values. Advection of particles from continental Europe containing ammonium, nitrate and sulfate leads to increases in Dα, Dγ and χ. The mixing state index represents a useful metric by which to compare and contrast ambient particle mixing state at other locations globally.
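
A minimal sketch of the entropy-based metrics described above, following the information-theoretic mixing-state framework the study applies (per-particle diversity Di, average single-particle diversity Dα, bulk diversity Dγ, and mixing state index χ = (Dα - 1)/(Dγ - 1)); the three-particle population below is an assumed toy example, not field data.

```python
import numpy as np

# Entropy-based mixing-state metrics: for particle i with species mass
# fractions p_ia,
#   D_i     = exp( -sum_a p_ia * ln p_ia )     per-particle species diversity
#   D_alpha = mass-weighted geometric mean of the D_i
#   D_gamma = diversity of the bulk population
#   chi     = (D_alpha - 1) / (D_gamma - 1)    mixing state index (0-100%)
# The particles and masses below are an assumed toy example.

def diversity(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.exp(-(p * np.log(p)).sum())

# Rows: particles; columns: species mass fractions (e.g. BC, OA, nitrate).
fractions = np.array([[0.8, 0.2, 0.0],
                      [0.1, 0.6, 0.3],
                      [0.3, 0.4, 0.3]])
mass = np.array([1.0, 1.0, 2.0])               # per-particle mass, assumed

d_i = np.array([diversity(row) for row in fractions])
w = mass / mass.sum()
d_alpha = np.exp((w * np.log(d_i)).sum())      # mass-weighted geometric mean
bulk = (fractions * mass[:, None]).sum(axis=0) / mass.sum()
d_gamma = diversity(bulk)
chi = (d_alpha - 1) / (d_gamma - 1)
print(f"D_alpha={d_alpha:.2f}, D_gamma={d_gamma:.2f}, chi={chi:.1%}")
```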

Relevance:

60.00%

Publisher:

Abstract:

Ambient wintertime background urban aerosol in Cork city, Ireland, was characterized using aerosol mass spectrometry. During the three-week measurement study in 2009, 93% of the ca. 1 350 000 single particles characterized by an Aerosol Time-of-Flight Mass Spectrometer (TSI ATOFMS) were classified into five organic-rich particle types, internally mixed to different proportions with elemental carbon (EC), sulphate and nitrate, while the remaining 7% was predominantly inorganic in nature. Non-refractory PM1 aerosol was characterized using a High Resolution Time-of-Flight Aerosol Mass Spectrometer (Aerodyne HR-ToF-AMS) and was also found to comprise organic aerosol as the most abundant species (62 %), followed by nitrate (15 %), sulphate (9 %) and ammonium (9 %), and chloride (5 %). Positive matrix factorization (PMF) was applied to the HR-ToF-AMS organic matrix, and a five-factor solution was found to describe the variance in the data well. Specifically, "hydrocarbon-like" organic aerosol (HOA) comprised 20% of the mass, "low-volatility" oxygenated organic aerosol (LV-OOA) comprised 18 %, "biomass burning" organic aerosol (BBOA) comprised 23 %, non-wood solid-fuel combustion "peat and coal" organic aerosol (PCOA) comprised 21 %, and finally a species type characterized by primary m/z peaks at 41 and 55, similar to previously reported "cooking" organic aerosol (COA), but possessing different diurnal variations to what would be expected for cooking activities, contributed 18 %. Correlations between the different particle types obtained by the two aerosol mass spectrometers are also discussed. Despite wood, coal and peat being minor fuel types used for domestic space heating in urban areas, their relatively low combustion efficiencies result in a significant contribution to PM1 aerosol mass (44% and 28% of the total organic aerosol mass and non-refractory total PM1, respectively).
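
Positive matrix factorization decomposes the time-by-m/z organic matrix into non-negative factor mass spectra and time series. Real PMF weights residuals by measurement uncertainty; as a rough unweighted stand-in, sklearn's NMF conveys the idea. The data below are random surrogates, not the campaign measurements.

```python
import numpy as np
from sklearn.decomposition import NMF

# Rough sketch of the factorisation behind PMF: the time x m/z organic matrix
# X is approximated as G @ F with non-negative factors, where rows of F are
# factor mass spectra (HOA, LV-OOA, BBOA, PCOA, COA-like) and columns of G
# their time series. Unweighted NMF is only a stand-in for error-weighted PMF;
# the data below are random surrogates.
rng = np.random.default_rng(4)
X = rng.random((500, 120))        # 500 time steps x 120 m/z channels, assumed

model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)        # factor time series (contributions)
F = model.components_             # factor profiles (mass spectra)
print("G:", G.shape, "F:", F.shape)
```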

Relevance:

60.00%

Publisher:

Abstract:

An aerosol time-of-flight mass spectrometer (ATOFMS) was deployed for the measurement of the size resolved chemical composition of single particles at a site in Cork Harbour, Ireland for three weeks in August 2008. The ATOFMS was co-located with a suite of semi-continuous instrumentation for the measurement of particle number, elemental carbon (EC), organic carbon (OC), sulfate and particulate matter smaller than 2.5 μm in diameter (PM2.5). The temporality of the ambient ATOFMS particle classes was subsequently used in conjunction with the semi-continuous measurements to apportion PM2.5 mass using positive matrix factorisation. The synergy of the single particle classification procedure and positive matrix factorisation allowed for the identification of six factors, corresponding to vehicular traffic, marine, long-range transport, various combustion, domestic solid fuel combustion and shipping traffic with estimated contributions to the measured PM2.5 mass of 23%, 14%, 13%, 11%, 5% and 1.5% respectively. Shipping traffic was found to contribute 18% of the measured particle number (20–600 nm mobility diameter), and thus may have important implications for human health considering the size and composition of ship exhaust particles. The positive matrix factorisation procedure enabled a more refined interpretation of the single particle results by providing source contributions to PM2.5 mass, while the single particle data enabled the identification of additional factors not possible with typical semi-continuous measurements, including local shipping traffic.

Relevance:

60.00%

Publisher:

Abstract:

The purpose of this bachelor's thesis was to examine the criticality of the fillet weld root. The topic arose from observations made during tests on structural hollow sections. The work reviews the sizing methods for fillet welds and the background research on how they are applied in practice to high-strength steels. The research methods used are presented, along with how methodological triangulation was achieved. The research question concerned the adequacy of the strength design of the welds. The investigation examined statically loaded fillet welds. For the fillet-welded pieces, a laboratory test specimen and an FEM model were produced, and their results were compared. In the laboratory test, DIC (digital image correlation) measurement was used, which allowed post-processing and the extraction of the desired data points. In the calculations the largest stress concentrations arose at the weld, but in the tensile test the specimen failed at the fusion line and at the weld toe of the attachment weld of the loading lug. At this point the material model was found to be insufficient, because no parameters had been defined in it for the heat-affected zone.
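
As context for the sizing methods the thesis reviews, a standard reference point for statically loaded fillet welds is the directional method of EN 1993-1-8. The sketch below applies that textbook check; the stresses and material values are assumed example numbers, and the thesis' own calculations are not reproduced.

```python
import math

# Sketch of the directional method of EN 1993-1-8 for fillet welds:
#   sqrt(sigma_perp**2 + 3 * (tau_perp**2 + tau_par**2))
#       <= f_u / (beta_w * gamma_M2)
#   and sigma_perp <= 0.9 * f_u / gamma_M2.
# The weld stresses and material values below are assumed example numbers,
# not data from the thesis.

def fillet_weld_ok(sigma_perp, tau_perp, tau_par, f_u, beta_w, gamma_m2=1.25):
    comparison = math.sqrt(sigma_perp**2 + 3 * (tau_perp**2 + tau_par**2))
    return (comparison <= f_u / (beta_w * gamma_m2)
            and sigma_perp <= 0.9 * f_u / gamma_m2)

# Assumed: S355 steel (f_u = 510 MPa, beta_w = 0.9) and example weld stresses
# in MPa on the weld throat plane.
print(fillet_weld_ok(sigma_perp=180.0, tau_perp=180.0, tau_par=60.0,
                     f_u=510.0, beta_w=0.9))
```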