16 results for Traditional enrichment method


Relevance:

80.00%

Abstract:

Nowadays, the standardization of test methods and the accreditation of laboratories are common practice, since they allow the procedures carried out by professionals in a technological sector to be evaluated and ensure a minimum level of quality in the final results. In the case of acoustics laboratories, achieving and maintaining accreditation requires active participation in intercomparison exercises, which are used to assure the quality of the methods employed. The drawback of these exercises is their high cost, which is sometimes unaffordable for laboratories, forcing them to give up their accreditation. This Final Degree Project focuses on the development of a Virtual Laboratory, implemented as a software tool, for carrying out non-attendance intercomparison exercises, thereby widening the e-comparison concept and laying the groundwork for this kind of remote exercise to eventually replace those currently carried out in person. The report first gives a short introduction presenting the evolution and current importance of acoustic quality procedures. It then discusses the international standard on which the project rests, ISO 145-5, as well as the mathematical methods used in its implementation: the statistical methods for uncertainty propagation specified by the JCGM (Joint Committee for Guides in Metrology). Next, the structure of the project is described, covering both the type of programming used in its development and the calculation methodology adopted so that all the functionalities required in this type of test are correctly implemented. A statistical validation is then carried out, based on comparing data generated by the program and processed with a Monte Carlo simulation against analytical calculations, in order to verify that the program works as predicted in the theoretical study. The program is also tested in the way a laboratory technician would use it, evaluating the measurement uncertainty with the traditional calculation method and comparing the data obtained with those that should be obtained. Finally, the conclusions drawn from the development and testing of the Virtual Laboratory are discussed, and future lines of research related to the e-comparison concept and to improvements of the Virtual Laboratory are proposed.
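As an illustration of the validation step described above, the sketch below contrasts the two uncertainty evaluations mentioned in the abstract: the traditional analytical propagation of uncertainties and the JCGM Monte Carlo method. It is a minimal sketch under assumed values; the measurement model, the input estimates and the uncertainties are hypothetical and not taken from the project.

```python
import numpy as np

def model(L1, L2):
    # Hypothetical measurement model, e.g. a level difference (dB)
    return L1 - L2

mu1, u1 = 85.0, 0.5   # input estimate and standard uncertainty (assumed)
mu2, u2 = 62.0, 0.7

# Traditional (analytical) propagation: the sensitivity coefficients of this
# model are +1 and -1, so u(y) = sqrt(u1^2 + u2^2) for uncorrelated inputs.
u_analytical = np.hypot(u1, u2)

# Monte Carlo propagation (JCGM 101): sample the inputs, push the samples
# through the model and take the standard deviation of the output.
rng = np.random.default_rng(1)
M = 10**6
y = model(rng.normal(mu1, u1, M), rng.normal(mu2, u2, M))
u_montecarlo = y.std(ddof=1)

print(f"analytical u(y) = {u_analytical:.4f}, Monte Carlo u(y) = {u_montecarlo:.4f}")
```

For a linear model like this one the two figures agree, which is exactly the kind of check the statistical validation described in the abstract relies on.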

Relevance:

80.00%

Abstract:

The aim of this research is to develop a fast, efficient and accurate calculation model for estimating the final costs of construction during the preliminary stages of the architectural project. It is a tool to be used during the preliminary study, draft design and basic project phases, so the complete graphic and written definition of the project is not needed to calculate the cost-scaling. The working hypothesis is that, in practical application, the model will not deviate by more than 10% from the final cost of the projected work. To that purpose, five levels of cost estimation are formulated in the scaling model, of increasing conceptual and graphic definition of the architectural project. The five calculation levels are: two that take as their reference the "exogenous" sale values of the dwellings (initial development and basic development), and three based on "endogenous" cost calculations of the projected work (preliminary study, draft design and basic project). The first "exogenous" estimation level (level .1) is calculated from the market valuation of the real-estate development and the percentage of land repercussion over the sale value of the dwellings. The fifth valuation level, also "exogenous" (level .5), is calculated from the contrast between the basic external market value, the construction costs and the estimated development expenses of the projected work. This contrast between the "repercussion of the construction cost" and the market value is an innovation with respect to existing cost-scaling models, acting as a methodological process of extrinsic verification and validation of the accuracy and validity of the estimates produced by the practical application of the model, which is called Pcr.5n (reference cost-scaling with .5 calculation levels according to the stage of project definition / architectural conception). The other three, "endogenous" construction cost-scaling levels are estimated through internal analytical calculations by work units and synthetic calculations by construction systems and functional spaces, carried out during the initial stages of the project corresponding to the preliminary study (level .2), draft design (level .3) and basic project (level .4). These internal theoretical calculations are finally evaluated and validated through the practical application of the model to residential building works whose real final settlement costs are known. As the level of definition and development of the project increases, from preliminary study to basic project, the calculation improves in efficiency and estimation accuracy, following the applied methodology of successive approximations in finite intervals, the basic hypothesis being, as stated above, a maximum deviation of one tenth in the estimated cost-scaling of the real cost of the work. The cost of material execution of the work is computed from "three-dimensional" functional cubic parameters of the projected space and "two-dimensional" constructive metric parameters of the outer roof/facade envelope and of the building's footprint on the plot. The functional and construction costs are weighted at each stage of the calculation process with the "thematic/specific" parameters of management (Pg), project (Pp) and execution (Pe) of the specific work being budgeted, to finally estimate the contract construction cost, obtained by increasing the material execution cost by the percentage corresponding to the thematic/specific parameter of the projected work.

The Pcr.5n construction cost-scaling model will be a tool of great interest and usefulness in professional practice for estimating the cost of the Basic Project as prescribed in the applicable technical and legal framework. According to Annex I of the Spanish Technical Building Code (CTE), the basic project must contain an "approximate valuation of the material execution of the projected work, by chapters"; that is, the Basic Project must include at least an "approximate budget" by chapters, trades or technologies. This approximate budget necessarily has to be produced with the cost-scaling technique, given that at this stage of the architectural project no structural calculations, services and installations drawings, or constructive resolution of the envelope are yet available, insofar as the specifications belonging to the later execution project have not been developed. This approximate estimate of the cost of the work is simple to obtain through the practical application of the model, both for students and for professionals of the building sector.

As described and justified in this work, the practical application of the model for cost calculation in the preliminary stages of the project is fast and accurate, and easy to apply both to single-family houses (detached and semi-detached) and to collective housing (blocks and city blocks). The model is also applicable in the field of real-estate valuation, appraisals, economic feasibility analysis of real-estate developments, cost estimation of finished works and, in general, whenever the execution project is not available and the construction costs of the projected works must be calculated. In addition, the model can be used to check budgets calculated by the traditional analytical method (detailed bill of quantities with unit prices and cost breakdowns), both in privately promoted works and in works promoted by the Public Administrations. Finally, as open lines for future research, the "reference cost-scaling with 5 calculation levels" model could be adapted and applied to uses and typologies other than residential, such as public facility buildings, valuation of historical buildings, urbanization works inside and outside the plot, park and garden projects, etc. These research lines run parallel to the work developed here and, as partial previews, are contained in the papers presented at the international congresses Scieconf (June 2013) and Rics-Cobra (September 2013), and at the IV National Congress on Building Pathology, Ucam (April 2014).
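The abstract does not reproduce the formulas of the Pcr.5n model, so the following is only a hedged sketch of the kind of calculation it describes: a level .1 "exogenous" reference derived from the market sale value and the land repercussion, and the final step that turns the material execution cost into a contract cost through the thematic parameters Pg, Pp and Pe. Every coefficient and the cost split below are illustrative assumptions, not figures from the thesis.

```python
# Hedged sketch of two steps of a Pcr.5n-style estimate. The real model's
# formulas are not given in the abstract; all coefficients are assumptions.

def level1_construction_cost(sale_value, land_share, other_dev_share=0.30):
    """Level .1 ("exogenous"): back out a construction-cost reference from
    the market sale value of the dwellings once the land repercussion and
    the remaining development costs (fees, finance, profit...) are deducted."""
    return sale_value * (1.0 - land_share - other_dev_share)

def contract_cost(material_execution_cost, pg=0.13, pp=0.06, pe=0.06):
    """Contract construction cost: material execution cost increased by the
    percentages tied to the management (Pg), project (Pp) and execution (Pe)
    thematic/specific parameters of the projected work."""
    return material_execution_cost * (1.0 + pg + pp + pe)

def within_hypothesis(estimate, final_cost, max_dev=0.10):
    # Validation criterion stated in the abstract: the estimate should
    # deviate less than one tenth from the real final cost of the work.
    return abs(estimate - final_cost) / final_cost <= max_dev

ref = level1_construction_cost(sale_value=2000.0, land_share=0.25)   # EUR/m2
print(f"level .1 reference: {ref:.0f} EUR/m2 -> contract: {contract_cost(ref):.0f} EUR/m2")
```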

Relevance:

40.00%

Abstract:

The purpose of this study is to determine the stress distribution in the carpentry joint of halved and tabled scarf joint type with the finite element method (FEM), and to compare it with the values obtained using the theory of Strength of Materials. The stress concentration areas were analyzed, and the influence of mesh refinement on the results was studied in order to determine the mesh size that yields stress values most consistent with the theory. In areas of low stress concentration, different mesh sizes give similar stress values. In areas where stress concentration occurs, the stress values increase considerably with mesh refinement. The results show a central symmetry in the distribution of the isobar lines, with the centre of symmetry corresponding to the geometric centre of the joint. Comparison of the normal stress levels obtained by the FEM and by the classical theory shows small differences, except at the points of stress concentration.
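For context, the classical Strength of Materials reference values against which the FEM stresses are checked can be computed directly. The sketch below does this for a halved scarf joint under axial tension; the load, cross-section and lap length are hypothetical, and the formulas are the usual net-section and mean shear-plane expressions rather than anything taken from the paper.

```python
# Strength-of-Materials reference stresses for a halved scarf joint under
# axial tension, of the kind used as the benchmark for the FEM results.
# Geometry and load are hypothetical.

N = 20e3           # axial force (N)
b, h = 0.15, 0.20  # cross-section width and depth (m)
L_lap = 0.40       # length of the halved (overlapping) portion (m)

A_net = b * (h / 2)      # net section: only half the depth carries the load
sigma_net = N / A_net    # normal stress at the halved section (Pa)

A_shear = b * L_lap      # longitudinal shear plane of the lap
tau_mean = N / A_shear   # mean shear stress along the lap (Pa)

print(f"sigma = {sigma_net/1e6:.2f} MPa, tau = {tau_mean/1e6:.2f} MPa")
```

In an FEM model these uniform theoretical values are recovered away from the notches, while the mesh-refinement study concentrates on the corners where the computed stresses keep growing with the mesh density.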

Relevance:

30.00%

Abstract:

Researchers in ecology commonly use multivariate analyses (e.g. redundancy analysis, canonical correspondence analysis, Mantel correlation, multivariate analysis of variance) to interpret patterns in biological data and relate these patterns to environmental predictors. There has been, however, little recognition of the errors associated with biological data and of the influence these may have on predictions derived from ecological hypotheses. We present a permutational method that assesses the effects of taxonomic uncertainty on the multivariate analyses typically used with ecological data. The procedure is based on iterative randomizations that randomly re-assign non-identified species in each site to any of the other species found in the remaining sites. After each re-assignment of species identities, the multivariate method in question is run and a parameter of interest is calculated. Consequently, one can estimate a range of plausible values for the parameter of interest under different scenarios of re-assigned species identities. We demonstrate the use of our approach in the calculation of two parameters with an example involving tropical tree species from western Amazonia: (1) the Mantel correlation between compositional similarity and environmental distances between pairs of sites, and (2) the variance explained by environmental predictors in redundancy analysis (RDA). We also investigated the effects of increasing taxonomic uncertainty (i.e. the number of unidentified species), and of the taxonomic resolution at which morphospecies are determined (genus resolution, family resolution, or fully undetermined species), on the uncertainty range of these parameters. To achieve this, we performed simulations on a tree dataset from southern Mexico by randomly selecting a portion of the species contained in the dataset and classifying them as unidentified at each level of decreasing taxonomic resolution. An analysis of covariance showed that both taxonomic uncertainty and resolution significantly influence the uncertainty range of the resulting parameters. Increasing taxonomic uncertainty widens the uncertainty range of the parameters estimated in both the Mantel test and the RDA. The effects of increasing taxonomic resolution, however, are not as evident. The method presented in this study improves on traditional approaches to the study of compositional change in ecological communities by accounting for some of the uncertainty inherent to biological data. We hope that this approach can be routinely used to estimate any parameter of interest obtained from compositional data tables when faced with taxonomic uncertainty.
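A minimal sketch of the permutational procedure for the Mantel-correlation case is given below. It assumes a sites-by-species abundance table, and it simplifies the re-assignment rule by merging each unidentified morphospecies into one randomly chosen identified species; the function names, the dissimilarity metric and the percentile range are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def mantel_r(d1, d2):
    # Mantel statistic as the Pearson correlation between the two
    # condensed (pairwise) distance vectors
    return pearsonr(d1, d2)[0]

def uncertainty_range(comm, env, unid, known, n_iter=999, seed=None):
    """comm: sites x species abundance table; env: sites x variables;
    unid/known: column indices of unidentified and identified species."""
    rng = np.random.default_rng(seed)
    env_d = pdist(env)                           # environmental distances
    stats = []
    for _ in range(n_iter):
        c = comm.astype(float).copy()
        for j in unid:                           # random re-assignment of
            k = rng.choice(known)                # each unidentified species
            c[:, k] += c[:, j]
            c[:, j] = 0.0
        comp_d = pdist(c, metric="braycurtis")   # compositional dissimilarity
        stats.append(mantel_r(comp_d, env_d))
    return np.percentile(stats, [2.5, 97.5])     # plausible parameter range
```

The same loop applies to any other statistic (e.g. the variance explained in an RDA): only the calculation inside the loop changes.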

Relevance:

30.00%

Abstract:

Land value bears significant weight in house prices in historical town centres. An essential aim in regulating the mortgage market, particularly in the financial and property crisis that countries such as Spain are undergoing, is to have at hand objective procedures for its valuation, whatever the conditions (location, construction, planning). Of all the factors contributing to house-price make-up, the land is the only one whose value does not depend on acquisition cost, but rather on the location-time binomial: that is to say, the specific circumstances at that point and at the exact moment of valuation. For this reason, the most commonly applied procedure for land valuation in town centres is the residual method: once the selling price of new housing in a district is known, the other necessary costs and expenses of development are deducted, including those of building and the developer's profit; the value left is that of the land. To apply this procedure it is vital to have figures such as building costs, technical fees, tax costs, etc. Above all, it is essential to obtain the selling price of new housing, which is not always feasible on account of the lack of newbuild development in the location. This shortage of information occurs in historical town centres, where urban renewal is slight due to heritage-protection policies and where, nevertheless, there is substantial activity in the secondary market. In these circumstances, an alternative for land valuation in consolidated urban areas is the adaptation of the residual method to the particular characteristics of the secondary market. To this end, an appreciation procedure for the dwelling is proposed, which applies in reverse the traditional depreciation methods proposed by the various valuation manuals and guidelines. The reliability of the results obtained is analyzed by contrasting them with published figures for newly built properties, according to the different rules applied in administrative appraisals in Spain, and with the incidence of a possible correction for the state of conservation.
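As a minimal sketch of the two ideas above, and under assumptions of my own (a linear depreciation model standing in for the manuals' methods, and an illustrative cost breakdown), the residual method and the reverse-depreciation "appreciation" step could look like this:

```python
def residual_land_value(sale_price, construction_cost, fees_taxes, developer_profit):
    # Static residual method: the land value is what is left of the selling
    # price of new housing after deducting every other development cost.
    return sale_price - (construction_cost + fees_taxes + developer_profit)

def equivalent_new_price(used_price, age, life=100):
    # "Appreciation" step: run a depreciation model backwards to turn a
    # secondary-market price into a newbuild-equivalent price. Linear
    # depreciation is an illustrative stand-in for the manuals' methods.
    return used_price / (1.0 - age / life)

# Hypothetical figures in EUR/m2 for a dwelling 40 years old.
new_equiv = equivalent_new_price(used_price=1800.0, age=40)
land = residual_land_value(new_equiv, 900.0, 250.0, 300.0)
print(f"newbuild-equivalent: {new_equiv:.0f} EUR/m2 -> land: {land:.0f} EUR/m2")
```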

Relevance:

30.00%

Abstract:

A metrological confirmation process must be designed and implemented to ensure that the metrological characteristics of the measurement system meet the metrological requirements of the measurement process. The aim of this paper is to present an alternative to the traditional metrological requirement on the relationship between tolerance and measurement uncertainty for developing such confirmation processes. The proposed route to metrological confirmation considers a given inspection task of the measurement process within the manufacturing system, and it is based on the Index of Contamination of the Capability (ICC). The metrological confirmation process is then developed taking into account producer risks and economic considerations on this index. As a consequence, depending on the capability of the manufacturing process, the measurement system will or will not be in an adequate state of metrological confirmation for the measurement process.
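The abstract does not give the ICC formula, so the sketch below only illustrates the traditional tolerance-versus-uncertainty requirement to which the paper proposes an alternative; the acceptance fraction is one of the commonly quoted values, not a figure from the paper.

```python
def traditional_confirmation(tolerance, expanded_uncertainty, max_fraction=0.25):
    """Traditional rule, sketched for contrast: the expanded measurement
    uncertainty U must not exceed a fixed fraction of the tolerance T
    (references commonly quote fractions between 1/10 and 1/3)."""
    return expanded_uncertainty <= max_fraction * tolerance

# Example: a 0.020 mm tolerance inspected with U = 0.004 mm passes a 1/4 rule.
print(traditional_confirmation(tolerance=0.020, expanded_uncertainty=0.004))
```

The paper's point is that this fixed-ratio rule ignores the capability of the manufacturing process; the ICC-based confirmation makes the decision depend on it.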

Relevance:

30.00%

Abstract:

The Simultaneous Multiple Surface (SMS) method in planar geometry (2D) is applied to imaging designs, generating lenses that compare well with aplanatic designs. When the merit function evaluates image quality over the entire field (not just paraxially), the SMS strategy is superior. In fact, the traditional aplanatic approach is a particular case of the SMS strategy.
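A sketch of the kind of full-field merit function this comparison relies on is shown below: the RMS spot radius averaged over several field positions, instead of a paraxial-only figure. The spot data are assumed to come from any ray tracer; nothing here reproduces the authors' actual merit function.

```python
import numpy as np

def full_field_merit(spot_xy_by_field, weights=None):
    """RMS spot radius at each field point, averaged over the whole field.
    spot_xy_by_field: list of (n_rays, 2) arrays with ray intersections on
    the image plane, one array per field position (from any ray tracer)."""
    rms = []
    for pts in spot_xy_by_field:
        centroid = pts.mean(axis=0)
        rms.append(np.sqrt(((pts - centroid) ** 2).sum(axis=1).mean()))
    w = np.ones(len(rms)) if weights is None else np.asarray(weights, float)
    return float(np.average(rms, weights=w))

# Toy usage: three field points with a blur spot of growing size (mm).
rng = np.random.default_rng(0)
spots = [rng.normal(0.0, s, size=(500, 2)) for s in (0.01, 0.02, 0.04)]
print(f"full-field merit: {full_field_merit(spots):.4f} mm")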

Relevance:

30.00%

Abstract:

Active learning is one of the most efficient mechanisms for learning, according to the psychology of learning. When students act as teachers for other students, communication is more fluent and knowledge is transferred more easily than in a traditional classroom. This teaching method is referred to in the literature as reciprocal peer teaching. In this study, the method is applied to the laboratory sessions of a higher-education course, and the students who act as teachers are referred to as "laboratory monitors". A particular way of selecting the monitors, and its impact on the final marks, is proposed. A total of 181 students participated in the experiment; experiences with laboratory monitors are discussed, and methods for motivating and training laboratory monitors and regular students are proposed. The types of laboratory sessions that can be led by classmates are also discussed. This work is related to the changes in teaching methods in the Spanish higher-education system prompted by the Bologna Process for the construction of the European Higher Education Area.

Relevance:

30.00%

Abstract:

Classical imaging optics has been developed over centuries in many areas, such as its paraxial imaging theory and practical design methods like multi-parametric optimization techniques. Although these imaging optical design methods can provide elegant solutions to many traditional optical problems, there are more and more new design problems, like solar concentrators, illumination systems, ultra-compact cameras, etc., that require maximum energy transfer efficiency or an ultra-compact optical structure. These problems do not have simple solutions from classical imaging design methods, because not only paraxial rays but also non-paraxial rays must be well considered in the design process. Non-imaging optics is a newly developed optical discipline that does not aim to form images but to maximize energy transfer efficiency. One important concept developed in non-imaging optics is the "edge-ray principle", which states that the energy flow contained in a bundle of rays will be transferred to the target if all its edge rays are transferred to the target. Based on this concept, many CPC solar concentrators have been developed with efficiency close to the thermodynamic limit. When more than one bundle of edge rays needs to be considered in the design, one way to obtain solutions is to use the SMS method. SMS stands for Simultaneous Multiple Surface, meaning that several optical surfaces are constructed simultaneously. The SMS method was developed as a design method in non-imaging optics during the 90s. It can be considered an extension of the Cartesian oval calculation. In the traditional Cartesian oval calculation, one optical surface is built to transform an input wave-front into an output wave-front. The SMS method, however, is dedicated to solving transformation problems involving more than one wave-front. In the beginning, only the transformation of 2 input wave-fronts into 2 output wave-fronts was considered in the SMS design process, for rotational or free-form optical systems. Usually "SMS 2D" stands for the SMS procedure developed for rotational optical systems, and "SMS 3D" for the procedure for free-form optical systems. Although the SMS method was originally employed in non-imaging optical system designs, it has been found during this thesis that, with the improved capability to design more surfaces and control more input and output wave-fronts, the SMS method can also be applied to imaging system designs, where it possesses great advantages over traditional design methods. In this thesis, one of the main goals is to further develop the existing SMS-2D method to design with more surfaces and to improve the stability of the SMS-2D and SMS-3D algorithms, so that further optimization processes can be combined with the SMS algorithms. The benefits of the SMS-plus-optimization strategy over the traditional optimization strategy are explained in detail for both rotational and free-form imaging optical system designs. Another main goal is to develop novel design concepts and methods suitable for challenging non-imaging applications, e.g. solar concentrators and solar trackers. This thesis comprises 9 chapters and can be grouped into two parts: the first part (chapters 2-5) contains research work in the imaging field, and the second part (chapters 6-8) contains work in the non-imaging field. In the first chapter, an introduction to basic imaging and non-imaging design concepts and theories is given. Chapter 2 presents a basic SMS-2D imaging design procedure using meridian rays.
In this chapter, we set the imaging design problem from the SMS point of view and try to solve it numerically. The stability of this SMS-2D design procedure is also discussed. The design concepts and procedures developed in this chapter lay the path for further improvements. Chapter 3 presents two improved SMS three-surface design procedures using meridian rays (SMS-3M) and skew rays (SMS-1M2S), respectively. The major improvement concerns the selection of the central segments, which makes the whole SMS procedure more stable than the procedures described in Chapter 2. Since these two algorithms represent two types of phase-space sampling, their image-forming capabilities are compared in a simple objective design. Chapter 4 deals with an ultra-compact SWIR camera design using the SMS-3M method. The difficulty in this wide-band camera design is how to maintain high image quality while reducing the overall system length. This interesting camera design provides a playground for the classical design method and the SMS design method: designs and optical performance from both are shown, and a tolerance study is given at the end of the chapter. Chapter 5 develops a two-stage SMS-3D-based optimization strategy for an imaging system of two freeform mirrors. In the first optimization phase, the SMS-3D method is integrated into the optimization process to construct the two mirrors in an accurate way, drastically reducing the unknowns to only a few system configuration parameters. In the second optimization phase, the previously optimized mirrors are parameterized as Qbfs-type polynomials and set up in Code V. The Code V optimization results demonstrate the effectiveness of this design strategy for the two-mirror system. Chapter 6 shows an etendue-squeezing condenser optic, prepared for the 2010 IODC illumination contest. This interesting design employs many non-imaging techniques, such as the SMS method, etendue-squeezing tessellation and groove surface design, and has a theoretical efficiency limit as high as 91.9%. Chapter 7 presents a freeform mirror-type solar concentrator with uniform irradiance on the solar cell. The traditional parabolic mirror concentrator has many drawbacks, such as hot-spot irradiance at the centre of the cell, insufficient use of the active cell area due to its rotational irradiance pattern, and a small acceptance angle. In order to overcome these limitations, a novel irradiance homogenization concept is developed, which leads to a free-form mirror design. Simulation results show that the free-form mirror reflector has a rectangular irradiance pattern, uniform irradiance distribution and a large acceptance angle, which confirms the viability of the design concept. Chapter 8 presents a novel beam-steering array optics design strategy. The goal of the design is to track large-angle parallel rays by only moving optical arrays laterally, and to convert them into small-angle parallel output rays. The design concept is developed as an extended SMS method. Potential applications of this beam-steering device are: skylights providing steerable natural illumination, building-integrated CPV systems, and steerable LED illumination. Conclusions and future lines of work are given in Chapter 9.
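Since the abstract presents SMS as an extension of the Cartesian oval calculation, a minimal sketch of that base case may help: the points of a single refractive surface that couples one input wave-front (from a point source) into one output wave-front (converging to an image point) are obtained from a constant optical path length. The indices and conjugate distances below are hypothetical.

```python
import numpy as np

# Cartesian oval sketch: a single refractive surface coupling a point source
# S (origin, medium n1) to an image point F (on axis, medium n2) by holding
# the optical path length (OPL) constant. Values are hypothetical.
n1, n2 = 1.0, 1.5
s, f = 100.0, 150.0          # source-vertex and vertex-image distances (mm)
xf = s + f                   # x-coordinate of F, with S at the origin
C = n1 * s + n2 * f          # OPL fixed by the axial ray through the vertex

def oval_point(theta):
    """Surface point on the ray leaving S at angle theta, obtained from
    n1*|SP| + n2*|PF| = C, a quadratic in the radial distance r = |SP|."""
    a = n2**2 - n1**2
    b = 2.0 * (C * n1 - n2**2 * xf * np.cos(theta))
    c = n2**2 * xf**2 - C**2
    r = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)   # nearer (physical) root
    return r * np.cos(theta), r * np.sin(theta)

profile = [oval_point(t) for t in np.linspace(0.0, 0.3, 16)]
print(profile[0])   # (100.0, 0.0): the vertex, as expected
```

SMS generalizes exactly this construction: instead of one wave-front pair and one surface, several surfaces are grown simultaneously so that several input wave-fronts are coupled into their prescribed output wave-fronts.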

Relevance:

30.00%

Abstract:

The correct assignment of high-molecular-weight glutenin subunit variants is a key task in wheat breeding. However, the traditional analysis by protein electrophoresis is sometimes difficult and not very precise. This work describes a novel DNA marker for the accurate discrimination between the Glu-B1 locus subunits Bx7 and Bx7*. The analysis of 142 bread wheat cultivars from different countries has highlighted a great number of misclassifications in the literature, which could lead to wrong conclusions in studies of the relationship between glutenin composition and wheat quality.

Relevance:

30.00%

Abstract:

In order to achieve total selectivity in electrical distribution networks, it is of great importance to analyze the fault currents in ungrounded power systems. This information helps to ensure selectivity in electrical distribution networks, so that only the faulty line or feeder is removed from service. In the present work a new selective, directional protection method for ungrounded power systems is evaluated. The new method measures only fault currents to detect earth faults and uses a directional criterion to determine the line under faulty conditions. The main contribution of this new technique is that it can detect earth faults on outgoing lines at any type of substation, avoiding the possible mismatch of traditional directional earth-fault relays. The detection technique is based on comparing the direction of a reference current with the direction of the capacitive earth-fault currents of all the feeders connected to the same busbars. The new method has been validated through computer simulations. The results for the different cases studied are remarkable, proving the validity and usefulness of the new method.
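A hedged sketch of the directional comparison described above: each feeder's zero-sequence earth-fault current phasor is compared with the direction of a reference current, and the feeder whose current flows in the opposite sense is flagged as faulty. The phasors, the 90-degree threshold and the naming are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def faulty_feeders(i_feeders, i_ref):
    """Directional criterion sketch: compare the phase of each feeder's
    earth-fault (capacitive) current with a reference current direction.
    i_feeders: dict name -> complex phasor; i_ref: complex reference phasor.
    Returns name -> True when the current opposes the reference direction."""
    out = {}
    for name, i in i_feeders.items():
        delta = np.degrees(np.angle(i / i_ref))   # phase difference (-180, 180]
        out[name] = abs(delta) > 90.0             # True -> opposite direction
    return out

# Hypothetical phasors: healthy feeders carry capacitive current towards the
# busbars; the faulty one carries the sum of the others in the opposite sense.
feeders = {"F1": 4 * np.exp(1j * np.radians(85)),
           "F2": 6 * np.exp(1j * np.radians(88)),
           "F3": 10 * np.exp(1j * np.radians(-93))}   # faulty feeder
print(faulty_feeders(feeders, i_ref=np.exp(1j * np.radians(90))))
```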

Relevance:

30.00%

Abstract:

Comparing the different bids submitted in the tender for a project under the traditional contract system of open (re-measured) quantities and fixed unit prices requires analysis tools able to discriminate between proposals that, while having a similar overall amount, may produce a very different economic impact during the execution of the works. One situation not easily detected by traditional methods is the behaviour of the real cost in response to deviations of the quantities actually executed on site from those estimated in the project. This paper proposes to address this situation through a quantitative risk-analysis technique, the Monte Carlo method. This procedure, as is well known, lets the input data defining the problem vary according to defined probability functions, generates a large number of trial cases, and treats the results statistically to obtain the most probable final values, together with the parameters needed to measure the reliability of the estimate. A model for the comparison of bids is presented, designed so that it can be applied to real cases by imposing on the known data variation conditions that are easy to establish by the professionals who carry out these tasks.
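A minimal sketch of the proposed kind of analysis, under assumptions of my own: the unit rates are fixed per bid, the executed quantities vary around the project estimates following a chosen probability function (uniform here), and the simulated final-cost distribution is summarized per bid. All figures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Project bill of quantities (estimates) and two bids' unit rates; values
# are hypothetical. Unit prices are fixed, quantities are re-measured.
qty_est = np.array([1200.0, 340.0, 80.0])        # m3, m2, t ...
bids = {"A": np.array([18.0, 55.0, 910.0]),
        "B": np.array([15.0, 62.0, 980.0])}       # unit rates (EUR/unit)

def simulate_final_cost(rates, n=100_000, spread=0.20):
    # Executed quantities modelled as uniform within +/-spread of the
    # estimate; any probability function agreed by the analyst would do.
    q = qty_est * rng.uniform(1 - spread, 1 + spread, size=(n, qty_est.size))
    return q @ rates                              # final cost of each trial

for name, rates in bids.items():
    cost = simulate_final_cost(rates)
    p5, p50, p95 = np.percentile(cost, [5, 50, 95])
    print(f"bid {name}: median {p50:,.0f} EUR (90% range {p5:,.0f}-{p95:,.0f} EUR)")
```

Two bids with similar totals at the estimated quantities can show clearly different medians and spreads once the quantities are allowed to vary, which is the discrimination the paper is after.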

Relevance:

30.00%

Abstract:

The technique of reinforcement of wooden floors is a clearly multidisciplinary subject. Teaching it with the "traditional" method, explaining the theory first and then proposing and solving problems, has not been successful. This paper discusses the results of a teaching experience: the subject was taught through the case method. The results are clearly superior to those obtained with the traditional methodology.

Relevance:

30.00%

Abstract:

Most red wines commercialized in the market use the malolactic fermentation process in order to ensure stability from a microbiological point of view. In this second fermentation, malic acid is converted into L-lactic acid under controlled setups. However, this process is not free from possible collateral effects that on some occasions produce off-flavors, wine quality loss and human health problems. In warm viticulture regions such as the south of Spain, the risk of suffering a deviation during the malolactic fermentation process increases due to the high must pH. This contributes to produce wines with high volatile acidity and biogenic amine values. This manuscript develops a new red winemaking methodology that consists of combining the use of two non-Saccharomyces yeast strains as an alternative to the traditional malolactic fermentation. In this method, malic acid is totally consumed by Schizosaccharomyces pombe, thus achieving the microbiological stabilization objective, while Lachancea thermotolerans produces lactic acid in order not to reduce, and even to increase, the acidity of wines produced from low-acidity musts. This technique reduces the risks inherent to the malolactic fermentation process when performed in warm regions. The result is more fruity wines that contain less acetic acid and biogenic amines than the traditional controls that have undergone the classical malolactic fermentation.

Relevance:

30.00%

Abstract:

Electric probes are objects immersed in the plasma, with sharp boundaries, which collect or emit charged particles. Consequently, the nearby plasma evolves under abruptly imposed and/or naturally emerging conditions. There can be localized currents, different time scales for the evolution of each plasma species, charge separation and absorbing-emitting walls. Traditional numerical schemes based on finite differences often transform these disparate boundary conditions into computational singularities. This is the case for models using advection-diffusion differential equations with source-sink terms (also called Fokker-Planck equations). These equations are used, in both fluid and kinetic descriptions, to obtain the distribution functions or the density of each plasma species close to the boundaries. We present a resolution method grounded on an integral advancing scheme that uses approximate Green's functions, also called short-time propagators. All the integrals, as in a path-integration process, are calculated numerically, which yields a robust, grid-free computational integral method that is unconditionally stable for any time step. Hence, sharp boundary conditions, such as current emission from a wall, can be treated during the short-time regime, providing solutions that behave at each time step as if they were known analytically. The form of the propagator (typically a multivariate Gaussian) is not unique, and it can be adjusted during the advancing scheme to preserve the conserved quantities of the problem. The effects of electric or magnetic fields can be incorporated into the iterative algorithm. The method allows smooth transitions of the evolving solutions even when abrupt discontinuities are present. This work proposes a procedure to incorporate, for the first time, the boundary conditions into the numerical integral scheme. The scheme is applied to model the interaction of the plasma bulk with a charge-emitting electrode, solving fluid diffusion equations combined self-consistently with the Poisson equation. The stability of this computational method has been verified for any number of iterations, even when advancing in time electrons and ions with different time scales. This work establishes the basis for dealing in future work with problems related to plasma thrusters or emissive probes in electromagnetic fields.
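A minimal sketch of the integral advancing scheme for the simplest case, 1D free diffusion: each time step convolves the solution with the Gaussian short-time propagator, and the step size is not limited by any stability condition. The paper's boundary treatment and field coupling are omitted; all parameters are illustrative.

```python
import numpy as np

# Integral advancing scheme for 1D diffusion u_t = D u_xx: each step
# convolves the solution with the Gaussian short-time propagator
# G(x, dt) = exp(-x^2 / (4 D dt)) / sqrt(4 pi D dt), i.e.
# u(x, t+dt) = integral of G(x - x', dt) u(x', t) dx'.
D, dt = 1.0, 0.05
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]

u = np.where(np.abs(x) < 0.5, 1.0, 0.0)     # sharp initial profile
mass0 = u.sum() * dx

xk = x - x[x.size // 2]                     # kernel offsets centred at 0
G = np.exp(-xk**2 / (4 * D * dt)) / np.sqrt(4 * np.pi * D * dt)

for _ in range(40):                         # path-integral style advancement
    u = np.convolve(u, G, mode="same") * dx

print(f"mass: initial {mass0:.3f}, after 40 steps {u.sum() * dx:.3f}")
```

The step remains stable for any dt because the propagator is an exact solution of the equation over the interval; the propagator shape is where boundary conditions and conservation constraints can be built in, as the paper describes.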