912 results for: Dwarf Galaxy Fornax Distribution Function Action Based


Relevance: 100.00%
Abstract:

We present studies of the spatial clustering of inertial particles embedded in turbulent flow. A major part of the thesis is experimental, involving the technique of Phase Doppler Interferometry (PDI). The thesis also includes a significant amount of simulation studies and some theoretical considerations. We describe the details of PDI and explain why it is suitable for the study of particle clustering in turbulent flow with a strong mean velocity. We introduce the concept of the radial distribution function (RDF) as our chosen way of quantifying inertial particle clustering and present some original work on foundational and practical considerations related to it. These include methods for treating finite sample size, the interpretation of the magnitude of the RDF, and the possibility of isolating the RDF signature of inertial clustering from that of large-scale mixing. In the experimental work, we used PDI to observe clustering of water droplets in a turbulent wind tunnel. From that we present, in the form of a published paper, evidence of dynamical similarity (Stokes number similarity) of inertial particle clustering, together with other results in qualitative agreement with available theoretical predictions and simulation results. We next show detailed quantitative comparisons of results from our experiments, direct numerical simulation (DNS) and theory. Very promising agreement was found for like-sized particles (mono-disperse). The theory is found to be incorrect regarding clustering of different-sized particles, and we propose an empirical correction based on the DNS and experimental results. Besides this, we also discovered a few interesting characteristics of inertial clustering. Firstly, through observations, we found an intriguing possibility for modeling the RDF arising from inertial clustering with only one (sensitive) parameter. We also found that clustering becomes saturated at high Reynolds numbers.
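
As a rough illustration of the central quantity here, the sketch below estimates an RDF g(r) by comparing observed pair counts with those expected for uniformly distributed particles in a periodic box. The bin edges, box size and particle number are arbitrary choices for the example, and none of the thesis' finite-sampling corrections are included.

    import numpy as np

    def radial_distribution_function(pos, box_size, r_edges):
        """pos: (N, 3) particle positions in a periodic cube of edge box_size;
        r_edges: bin edges for the pair separation r. Returns bin centres and g(r)."""
        n = len(pos)
        d = pos[:, None, :] - pos[None, :, :]            # pairwise separation vectors
        d -= box_size * np.round(d / box_size)           # minimum-image convention
        r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]   # unique pairs only
        counts, _ = np.histogram(r, bins=r_edges)
        shell_vol = 4.0 / 3.0 * np.pi * (r_edges[1:] ** 3 - r_edges[:-1] ** 3)
        pair_density = n * (n - 1) / 2.0 / box_size ** 3          # pairs per unit volume
        g = counts / (pair_density * shell_vol)          # observed / expected-for-uniform
        return 0.5 * (r_edges[1:] + r_edges[:-1]), g

    # For inertially clustered droplets g(r) rises above 1 at small r; for a uniform
    # (well-mixed) field g(r) stays near 1 at all separations.
    rng = np.random.default_rng(0)
    r_mid, g = radial_distribution_function(rng.random((1000, 3)), 1.0,
                                            np.linspace(0.02, 0.3, 15))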

Relevance: 100.00%
Abstract:

The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity to deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linearly correlated stochastic signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears to be more sensitive to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
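
A minimal sketch of the underlying idea, under assumptions not taken from the paper: a delay-embedding, nearest-neighbour forecast is scored once by an amplitude-based error (RMS) and once by a rank-based score (Spearman correlation between forecasts and observations). The embedding parameters and the test signal are arbitrary, and the exact rank-based nonlinear predictability score of the paper is not reproduced.

    import numpy as np
    from scipy.stats import spearmanr

    def nn_forecast(x, dim=3, lag=1, horizon=1):
        """Predict x[t + horizon] from the image of the nearest neighbour of each delay vector."""
        n = len(x) - (dim - 1) * lag - horizon
        emb = np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])
        target = x[(dim - 1) * lag + horizon:(dim - 1) * lag + horizon + n]
        pred = np.empty(n)
        for i in range(n):
            d = np.abs(emb - emb[i]).max(axis=1)     # Chebyshev distance to all delay vectors
            d[i] = np.inf                            # exclude the self-match
            pred[i] = target[np.argmin(d)]           # forecast = image of the nearest neighbour
        return pred, target

    rng = np.random.default_rng(1)
    x = np.sin(0.3 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)   # toy noisy signal
    pred, obs = nn_forecast(x)
    rms_error = np.sqrt(np.mean((pred - obs) ** 2))   # amplitude-based prediction error
    rank_score, _ = spearmanr(pred, obs)              # rank-based predictability score (illustrative)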

Relevance: 100.00%
Abstract:

The hyDRaCAT Spectral Reflectance Library for tundra provides surface reflectance data and the bidirectional reflectance distribution function (BRDF) of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites. The aim of this dataset is the hyperspectral and spectro-directional reflectance characterization as a basis for the extraction of vegetation parameters and for the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. The spectroscopic and field spectro-goniometric measurements were undertaken at representative Siberian vegetation sites during the YAMAL2011 expedition and at Alaskan vegetation sites during the North American Arctic Transect NAAT2012 expedition, both belonging to the Greening-of-the-Arctic (GOA) program. For the field spectroscopy, each 100 m2 vegetation study grid was divided into quadrats of 1 × 1 m. The averaged reflectance of all quadrats represents the spectral reflectance of the whole grid at the 10 × 10 m scale. For the surface radiometric measurements, two GER1500 portable field spectroradiometers (Spectra Vista Corporation, Poughkeepsie, NY, USA) were used. The GER1500 measures radiance across the wavelength range of 350-1,050 nm, with sampling intervals of 1.5 nm and a radiance accuracy of 1.2 × 10**-1 W/cm**2/nm/sr. To increase the signal-to-noise ratio, 32 individual measurements were averaged per target scan. To minimize variations in the target reflectance due to changes in the sun zenith angle, all measurements at one study location were performed under similar sun zenith angles and during clear-sky conditions. The field spectrometer measurements were carried out with a GER1500 UV-VIS spectrometer. The spectro-goniometer measurements were carried out with a self-designed spectro-goniometer: the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS, patent publication number: DE 10 2011 117 713.A1). The ManTIS was equipped with the GER1500 spectrometer, allowing spectro-directional measurements at up to 30° viewing zenith angle over the full 360° of viewing azimuth. Measurements in central Yamal (Siberia) at the research site 'Vaskiny Dachi' were carried out in the late-summer phenological state from 12 to 28 August 2011. All measurements in Alaska, along the north-south transect on the North Slope, were taken between 29 June and 11 July 2012, ensuring that the vegetation was in the same phenological state near peak growing season.
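
A minimal sketch of the aggregation step described above, under assumed variable names and a simple white-reference reflectance calculation that is not spelled out in the dataset description: 32 radiance scans are averaged per target, each 1 × 1 m quadrat spectrum is formed, and the quadrat spectra are averaged up to the 10 × 10 m grid scale.

    import numpy as np

    def quadrat_reflectance(target_radiance, reference_radiance, panel_calibration=1.0):
        """Reflectance factor = target radiance / radiance of a calibrated white reference panel."""
        return panel_calibration * target_radiance / reference_radiance

    # scans: (n_quadrats, n_scans_per_quadrat, n_wavelengths) target radiance; refs: matching
    # white-reference radiance. Synthetic numbers stand in for real GER1500 readings.
    rng = np.random.default_rng(2)
    wavelengths = np.arange(350.0, 1051.0, 1.5)                      # nm, per the instrument range
    scans = rng.uniform(0.8, 1.2, (100, 32, wavelengths.size))
    refs = np.full_like(scans, 2.0)

    quadrat_spectra = quadrat_reflectance(scans, refs).mean(axis=1)  # average of 32 scans per quadrat
    grid_spectrum = quadrat_spectra.mean(axis=0)                     # 10 x 10 m grid-scale reflectance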

Relevance: 100.00%
Abstract:

The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
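
As an illustrative sketch (not the authors' analysis), the snippet below compares empirical cumulative distribution functions of two synthetic volume catalogs and estimates the power-law exponent of the frequency-magnitude distribution above a cutoff by maximum likelihood; the cutoff volume and the synthetic exponents are assumptions for the example.

    import numpy as np

    def ecdf(values):
        """Empirical cumulative distribution function: sorted values and F(x)."""
        x = np.sort(values)
        return x, np.arange(1, len(x) + 1) / len(x)

    def power_law_exponent(volumes, v_min):
        """Maximum-likelihood (Hill) estimate of alpha in n(V) ~ V**(-alpha) for V >= v_min."""
        v = volumes[volumes >= v_min]
        return 1.0 + len(v) / np.sum(np.log(v / v_min))

    rng = np.random.default_rng(3)
    v_min = 100.0                                                   # m^3, assumed lower cutoff
    post_fire = v_min * (1.0 - rng.random(500)) ** (-1.0 / 1.4)     # synthetic: steeper slope, more small events
    non_fire = v_min * (1.0 - rng.random(500)) ** (-1.0 / 0.9)      # synthetic: shallower slope, more large events

    x_pf, F_pf = ecdf(post_fire)                                    # compare the two ECDF curves
    x_nf, F_nf = ecdf(non_fire)
    alpha_pf = power_law_exponent(post_fire, v_min)
    alpha_nf = power_law_exponent(non_fire, v_min)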

Relevance: 100.00%
Abstract:

Context. The ESA Rosetta spacecraft, currently orbiting comet 67P/Churyumov-Gerasimenko, has already provided in situ measurements of dust grain properties from several instruments, particularly OSIRIS and GIADA. We propose adding value to those measurements by combining them with ground-based observations of the dust tail to monitor the overall, time-dependent dust-production rate and size distribution. Aims. To constrain the dust grain properties, we take Rosetta OSIRIS and GIADA results into account, and combine OSIRIS data during the approach phase (from late April to early June 2014) with a large data set of ground-based images that were acquired with the ESO Very Large Telescope (VLT) from February to November 2014. Methods. A Monte Carlo dust tail code, which has already been used to characterise the dust environments of several comets and active asteroids, has been applied to retrieve the dust parameters. Key properties of the grains (density, velocity, and size distribution) were obtained from Rosetta observations; these parameters were used as inputs to the code, considerably reducing the number of free parameters. In this way, the overall dust mass-loss rate and its dependence on the heliocentric distance could be obtained accurately. Results. The dust parameters derived from the inner-coma measurements by OSIRIS and GIADA and from distant imaging using VLT data are consistent, except for the power index of the size-distribution function, which is alpha = -3, instead of alpha = -2, for grains smaller than 1 mm. This is possibly linked to the presence of fluffy aggregates in the coma. The onset of cometary activity occurs at approximately 4.3 AU, with a dust production rate of 0.5 kg/s, increasing up to 15 kg/s at 2.9 AU. This implies a dust-to-gas mass ratio varying between 3.8 and 6.5 for the best-fit model when combined with water-production rates from the MIRO experiment.
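
As a worked illustration of one ingredient of such a Monte Carlo dust tail model, the sketch below draws grain radii from a power-law size distribution dn/da ∝ a**alpha by inverse-CDF sampling and converts them into a mean grain mass; alpha = -3 is taken from the abstract, while the size limits and bulk density are assumptions.

    import numpy as np

    def sample_grain_radii(n, alpha, a_min, a_max, rng):
        """Inverse-CDF sampling of a power law dn/da ~ a**alpha on [a_min, a_max] (alpha != -1)."""
        u = rng.random(n)
        k = alpha + 1.0
        return (a_min ** k + u * (a_max ** k - a_min ** k)) ** (1.0 / k)

    rng = np.random.default_rng(4)
    a = sample_grain_radii(100_000, alpha=-3.0, a_min=1e-6, a_max=1e-3, rng=rng)   # radii in m (limits assumed)
    rho = 1000.0                                          # grain bulk density in kg/m^3 (assumed)
    mean_grain_mass = (4.0 / 3.0) * np.pi * rho * np.mean(a ** 3)
    # Multiplying mean_grain_mass by a grain production rate (grains/s) gives a dust mass-loss
    # rate in kg/s, the quantity quoted above as 0.5-15 kg/s between 4.3 and 2.9 AU.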

Relevance: 100.00%
Abstract:

Introduction: The source and deployment of finance are central issues in economic development. Since 1966, when the Soeharto Administration was inaugurated, Indonesian economic development has relied on funds in the form of aid from international organizations and foreign countries. After the 1990s, a further abundant inflow of capital sustained rapid economic development. Foreign funding was the basis of Indonesian economic growth. This paper describes the mechanism for allocating funds in the Indonesian economy, identifies the problems this mechanism generated in the Indonesian experience, and attempts to explain why the financial system collapsed in the wake of the Asian Currency Crisis of 1997.

History of the Indonesian financial system: The year 1966 saw the emergence of commercial banks in Indonesia. It can be said that before 1966 a financial system hardly existed, a fact commonly attributed to economic disruptions such as the consecutive runs of fiscal deficit and hyperinflation under the Soekarno Administration. After 1966, with the inauguration of Soeharto, a regulatory system of financial legislation (e.g. central banking law and banking regulation) was introduced and implemented, and the banking sector that is the basis of the current financial system in Indonesia was built up. The Indonesian financial structure was significantly altered by the first financial reform of 1983. Between 1966 and 1982, the banking sector consisted of Bank Indonesia (the central bank) and the state-owned banks. There was also a system for distributing the abundant public revenue derived from the soaring oil prices of the 1970s. The public-finance distribution function incorporated in the Indonesian financial system changed after the successive financial reforms of 1983 and 1988, when there was a move away from the monopoly-market style dominated by state-owned banks (a system of public-finance distribution that operated at the discretion of the government) towards a modern market mechanism.

The five phases of development: The Indonesian financial system developed in five phases between 1966 and the present time. The first period (1966-72) was its formative period, the second (1973-82) its policy-based finance period under soaring oil prices, the third (1983-91) its financial-reform period, the fourth (1992-97) its period of expansion, and the fifth (1998-) its period of financial restructuring. The first section of this paper summarizes the financial policies operative during each of the periods identified above. In the second section, changes in the financial sector in response to these policies are examined; the analysis shows that an important development of the financial sector occurred during the financial-reform period. In the third section, the focus shifts from the general financial sector to the performance of individual commercial banks: changes in their lending and fund-raising behaviour after the 1990s are analysed by comparing several banking groups in terms of their ownership and time of foundation. The last section summarizes the foregoing analyses and examines the problems that remain in the Indonesian financial sector, which is still undergoing restructuring.

Relevance: 100.00%
Abstract:

In the photovoltaic field, back-contact solar cell technology has appeared as an alternative to traditional silicon modules. This type of cell places both the positive and the negative contacts on the back side of the cell, maximizing the surface exposed to light and making the interconnection of the cells in the module easier. The Emitter Wrap-Through solar cell structure presents thousands of tiny holes that wrap the emitter from the front surface to the rear surface. These holes are made on the silicon wafers in a first step by means of a laser drilling process. This step is quite harmful from a mechanical point of view, since the holes act as stress concentrators, leading to a reduction in the strength of the wafers. This paper presents the results of the strength characterization of drilled wafers. The study is carried out by testing the samples with the ring-on-ring device. Finite Element models are developed to simulate the tests. The stress concentration factor of the drilled wafers under these load conditions is determined from the FE analysis. Moreover, the material strength is characterized by fitting the fracture stress of the samples to a three-parameter Weibull cumulative distribution function. The parameters obtained are compared with those obtained from a set of samples without holes in order to validate the method employed for studying the strength of drilled silicon wafers.
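
A minimal sketch of the fitting step, using synthetic fracture stresses in place of the ring-on-ring results; scipy's weibull_min already carries the three parameters (shape = Weibull modulus, location = threshold stress, scale), so this only illustrates the idea and is not the authors' fitting procedure.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    # Synthetic fracture stresses in MPa standing in for the measured ring-on-ring results.
    fracture_stress = stats.weibull_min.rvs(c=3.0, loc=80.0, scale=60.0, size=60, random_state=rng)

    # Maximum-likelihood fit of the three-parameter Weibull cdf:
    # F(s) = 1 - exp(-((s - loc) / scale)**shape) for s >= loc.
    shape, loc, scale = stats.weibull_min.fit(fracture_stress)
    prob_failure_at_150_MPa = stats.weibull_min.cdf(150.0, shape, loc, scale)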

Relevance: 100.00%
Abstract:

The paper discusses the dispersion relation for longitudinal electron waves propagating in a collisionless, homogeneous, isotropic plasma that contains both Maxwellian and suprathermal electrons. It is found that the dispersion curve, known to have two separate branches for zero suprathermal energy spread, depends sensitively on this quantity. As the energy half-width of the suprathermal population increases, the branches approach each other until they touch at a connexion point, for a small critical value of that half-width. The topology of the dispersion curves is different for half-widths above and below critical, and this can affect the use of wave-propagation measurements as a diagnostic technique for the determination of the electron distribution function. Both the distance between the branches and the spatial damping near the connexion frequency depend on the half-width, if below critical, and can be used to determine it. The theory is applied to experimental data.
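
For orientation, the kinetic dispersion relation for electrostatic (longitudinal) waves in an unmagnetized plasma with several electron populations can be written in the standard textbook form below; this is a generic form, not necessarily the exact model used in the paper, and a drift u_s is included only to allow for a displaced suprathermal population.

    \varepsilon_L(k,\omega) \;=\; 1 \;+\; \sum_{s}\frac{1}{k^{2}\lambda_{Ds}^{2}}
      \left[\,1 + \zeta_s\, Z(\zeta_s)\right] \;=\; 0,
    \qquad
    \zeta_s \;=\; \frac{\omega - k\,u_s}{\sqrt{2}\,k\,v_{ts}},

where the sum runs over the Maxwellian bulk and the suprathermal population, Z is the plasma dispersion function, and \lambda_{Ds} and v_{ts} are the Debye length and thermal speed of population s. The two branches discussed above appear as distinct roots \omega(k) of this equation.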

Relevance: 100.00%
Abstract:

Traumatic brain injury and spinal cord injury have recently been put under the spotlight as major causes of death and disability in the developed world. Despite the important ongoing experimental and modeling campaigns aimed at understanding the mechanics of tissue and cell damage typically observed in such events, the differentiated roles of strain, stress and their corresponding loading rates on the damage level itself remain unclear. More specifically, the direct relations between brain and spinal cord tissue or cell damage, and electrophysiological functions are still to be unraveled. Whereas mechanical modeling efforts are focusing mainly on stress distribution and mechanistic-based damage criteria, simulated function-based damage criteria are still missing. Here, we propose a new multiscale model of myelinated axon associating electrophysiological impairment to structural damage as a function of strain and strain rate. This multiscale approach provides a new framework for damage evaluation directly relating neuron mechanics and electrophysiological properties, thus providing a link between mechanical trauma and subsequent functional deficits.

Relevance: 100.00%
Abstract:

In this thesis, a procedure to evaluate the mechanical strength of crystalline silicon wafers is proposed and applied in several studies of interest for industry. The photovoltaic industry is dominated by technology based on crystalline silicon modules. These modules are composed of solar cells connected in series, and the cells are made from silicon wafers. To reduce the cost of the modules, a clear tendency towards thinner wafers has been observed in recent years. Since the stiffness varies with thickness, handling techniques have had to be modified in order to keep the breakage rate low. To this end, the mechanical strength has to be characterized correctly. In the first part of the thesis, silicon wafers are described, from the way they are produced to their mechanical properties of interest. The influence of the crystallographic structure on the strength and on the behaviour (the silicon crystal is anisotropic) is shown. In addition, a method to characterize the mechanical strength is proposed. This probabilistic procedure is based on design methods for brittle materials, and the strength is described by the three parameters of a Weibull cumulative distribution function (cdf). The proposed method requires carrying out test campaigns, simulating the tests with Finite Element models, and applying an iterative algorithm to fit the parameters of the Weibull cdf to the results. In the second part of the thesis, the different types of test usually employed with this material are described. Moreover, different Finite Element models for the simulation of each test are compared with regard to the information supplied by each model and the calculation times. Finally, the characterization procedure is applied to three practical cases. The first application is the comparison of the mechanical strength of silicon wafers as a function of the ingot growth method: conventional monocrystalline wafers obtained by the Czochralski method and multicrystalline wafers are compared with the new quasi-monocrystalline wafers obtained by casting methods. The second application is the estimation of the depth of the cracks introduced when the ingot is sawn into wafers. An indirect approach is used: several sets of silicon wafers are subjected to chemical etchings of different duration, which reduce the thickness of the wafers by removing the most damaged layers. The strength of each set is obtained by means of the proposed method, and the comparison makes it possible to estimate the depth of the sawing-induced cracks. Finally, the procedure is applied to a group of wafers with very particular characteristics: wafers prepared for back-contact EWT (Emitter Wrap-Through) cells. These wafers are drilled in a first step, resulting in thousands of tiny holes that weaken them considerably. The strength obtained for a group of these drilled wafers is compared with that of a reference group without holes. In addition, a simplified approach based on a stress intensification surface is proposed.

Relevance: 100.00%
Abstract:

Traumatic brain injury and spinal cord injury have recently been put under the spotlight as major causes of death and disability in the developed world. Despite the important ongoing experimental and modeling campaigns aimed at understanding the mechanics of tissue and cell damage typically observed in such events, the differentiated roles of strain, stress and their corresponding loading rates on the damage level itself remain unclear. More specifically, the direct relations between brain and spinal cord tissue or cell damage, and electrophysiological functions are still to be unraveled. Whereas mechanical modeling efforts are focusing mainly on stress distribution and mechanistic-based damage criteria, simulated function-based damage criteria are still missing. Here, we propose a new multiscale model of myelinated axon associating electrophysiological impairment to structural damage as a function of strain and strain rate. This multiscale approach provides a new framework for damage evaluation directly relating neuron mechanics and electrophysiological properties, thus providing a link between mechanical trauma and subsequent functional deficits.

Relevance: 100.00%
Abstract:

Glass cannot be treated as a conventional structural material from the point of view of mechanical strength. Its brittle nature, together with the inevitable presence of micro-cracks on its surface and the consequences of possible failures, demands rigorous methods that guarantee a safe design of structural glass elements, whose failure strength depends strongly on the integrity of the surface, the size of the element and the type of loading to which it is subjected. Its design must therefore rely on probabilistic concepts and fracture mechanics criteria, replacing the conventional glass design based on charts derived from experimental programmes and the subsequent application of the admissible-stress concept. In order to analyze and compare the strength characteristics of tempered, heat-strengthened and annealed glass, a large experimental programme of four-point bending and coaxial double ring (small surface area) tests was performed, and the results were fitted with a three-parameter Weibull cumulative distribution function.
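
The three-parameter Weibull cumulative distribution function referred to here (and in the silicon wafer abstracts above) has the standard form

    F(\sigma) \;=\; 1 - \exp\!\left[-\left(\frac{\sigma-\sigma_u}{\sigma_0}\right)^{m}\right],
    \qquad \sigma \ge \sigma_u \quad (F(\sigma)=0 \text{ for } \sigma < \sigma_u),

where \sigma_u is the threshold stress below which the failure probability is zero, \sigma_0 is the scale parameter, and m is the Weibull modulus; the fracture stresses measured in the bending and ring tests are fitted to this form.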

Relevance: 100.00%
Abstract:

Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LEDs provide advantages in efficiency, energy consumption, design, size and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is rapidly increasing. This thesis is focused on one important field of application: spotlights. They are used to focus light on defined areas and outstanding objects under professional conditions. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiencies, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white and several light colors; secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source and optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be resolved in the optical design. However, there is no need for perfect uniformity in the spot because of the threshold in the visual perception of the human eye. Therefore, a mathematical description of the level of color uniformity with regard to visual perception is required. This thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided the research, chapter 2 introduces the scientific basics of color uniformity in spotlights: the applied color space CIELAB, visual color perception, the fundamentals of spotlight design with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different factors influencing vision and requires a different treatment of the data, together with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In chapter 4, the merit function Usl for the estimation of the perceived color uniformity in spotlights is derived. It is based on the results of two sets of human factor experiments performed to evaluate the visual perception of typical spotlight patterns by subjects. The first human factor experiment resulted in the perceived rank order of the spotlights; this rank order was used to correlate the mathematical descriptions of the basic and weighted functions of the spatial color distribution, which led to the Usl function. The second human factor experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. The validation of the Usl function, concerning its application range and conditions as well as its limitations and restrictions, is carried out in chapter 5. Measured and simulated data of several optical systems are compared, and the fields of application, validations and restrictions of the function are discussed. Chapter 6 presents the design of spotlight systems and their optimization. An evaluation analyzes reflector-based and TIR-lens systems; the simulated optical systems are compared in terms of color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. It was found that no single system performed best in all categories, and that excellent color uniformity could be reached by two different system assemblies. Finally, chapter 7 summarizes the conclusions of the thesis and gives an outlook on further investigation topics.
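
As a rough sketch of two of the basic descriptors listed above (not the Usl merit function itself), the snippet computes the average color deviation and the maximum color difference in the a*,b*-plane for a grid of CIELAB values sampled across a spot pattern; the synthetic pattern and grid size are placeholders.

    import numpy as np
    from scipy.spatial.distance import pdist

    def color_uniformity_descriptors(lab):
        """lab: (H, W, 3) array of CIELAB values sampled across the spot pattern."""
        ab = lab[..., 1:].reshape(-1, 2)                 # chromaticity coordinates (a*, b*)
        mean_dev = np.linalg.norm(ab - ab.mean(axis=0), axis=1).mean()   # average color deviation
        max_diff = pdist(ab).max()                       # maximum pairwise color difference
        return {"average_deviation": mean_dev, "max_color_difference": max_diff}

    rng = np.random.default_rng(6)
    lab_map = np.dstack([np.full((32, 32), 70.0),                       # L*
                         2.0 * rng.standard_normal((32, 32)),           # a*
                         5.0 + 2.0 * rng.standard_normal((32, 32))])    # b*
    print(color_uniformity_descriptors(lab_map))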

Relevance: 100.00%
Abstract:

This thesis compares and develops methodologies to improve the estimation of the design and extreme floods used in assessing the hydrologic safety of dams. First, the work focuses on fitting frequency laws (distribution functions) to maximum peak flows and extrapolating them to high return periods. This has become a major issue, since the adoption of ever stricter standards of dam hydrologic safety involves design return periods whose estimation entails great uncertainty. Accordingly, it is important to incorporate all available techniques for the estimation of design peak flows in order to reduce this uncertainty. Selection of the statistical model (distribution function and fitting method) is also important, since its ability both to describe the sample and to make robust predictions of high-return-period quantiles must be guaranteed. To provide a practical application of these methodologies, studies were carried out on a national scale with the aim of determining the regionalization scheme that gives the best results for the annual maximum peak flows of Spanish basins, taking into account the length of the available records. The methodology starts with the delimitation of homogeneous regions, whose boundaries take into account the physiographic and climatic characteristics of the basins and the variability of their statistics, followed by homogeneity testing. Then, the statistical model for annual maximum peak flows with the best behaviour in the different regions of peninsular Spain is selected, both for describing the sample data and for extrapolating to the highest return periods. This selection is based, among other things, on the generation of synthetic data series by Monte Carlo simulation and on the statistical analysis of the results obtained from fitting distribution functions to these series under different hypotheses. Secondly, the work addresses the relationship between peak flow and volume and the definition of design flood hydrographs based on it, which can be highly important for reservoirs with large storage volumes. However, commonly used hydrologic procedures do not take the statistical dependence between these variables into account. A simple and robust method to characterize this statistical dependence has been developed, representing the joint distribution function of peak flow and volume by the marginal distribution function of the peak flow and the conditional distribution function of the volume given the peak flow. The latter is determined by a log-normal distribution function fitted with a regional procedure. Its practical application is proposed through a probabilistic procedure based on the stochastic generation of a large number of hydrographs. Using this procedure for dam hydrologic safety requires a proper interpretation of the return period concept applied to bivariate hydrologic variables, and an interpretation is proposed: the return period is understood as the inverse of the probability of exceeding a given reservoir level. When this return period is related to the hydrological variables, the design flood hydrograph is no longer a single hydrograph but a family of hydrographs that produce the same maximum reservoir level, represented by a curve in the peak flow-volume plane. This family of design hydrographs depends on the dam itself; the peak flow-volume curves vary with, for example, the reservoir volume or the spillway length. The proposed procedure is illustrated by its application to two case studies. Finally, the work addresses the calculation of seasonal floods, which is essential when establishing the reservoir operation rules and can also be fundamental for studying the hydrologic safety of existing dams. However, the calculation of seasonal floods is complex and not yet fully resolved, and the procedures commonly used can present certain problems. The statistical method of partial duration series, or peaks over threshold, can be a valid alternative that solves these problems in those cases in which the floods of the different seasons are generated by the same type of event. A study was carried out to verify whether the hypothesis of statistical homogeneity of flood peak data from different seasons of the year holds in Spain. The seasonal periods for which the study is most appropriate were also analysed, a question of great relevance to guarantee correct results, and a simple procedure was developed to determine the threshold for selecting the data in a way that guarantees their independence, one of the main difficulties in the practical application of partial duration series. Moreover, the practical application of seasonal frequency laws requires a correct interpretation of the seasonal return period. A criterion is proposed to determine seasonal return periods consistently with the annual return period and with an adequate distribution of the probability among the seasons. Finally, a procedure for the calculation of seasonal peak flows is presented and illustrated by its application to a case study.
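
A minimal sketch of the stochastic generation step described above: peak flows Q are drawn from a marginal distribution (here a Gumbel with assumed parameters) and volumes V from a conditional log-normal whose parameters are likewise assumed, yielding the cloud of (Q, V) pairs from which a family of synthetic hydrographs can be built.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 10_000

    # Marginal distribution of the annual maximum peak flow Q: Gumbel with assumed parameters (m^3/s).
    loc_q, scale_q = 400.0, 150.0
    q = loc_q - scale_q * np.log(-np.log(rng.random(n)))
    q = np.maximum(q, 1.0)                                # guard against the rare non-physical draw

    # Conditional distribution of volume given peak: log V | Q ~ Normal(a + b*log Q, s), i.e. log-normal.
    a, b, s = 1.0, 0.9, 0.25                              # assumed regional parameters
    v = np.exp(a + b * np.log(q) + s * rng.standard_normal(n))   # volumes in hm^3

    # Each (q, v) pair summarizes one synthetic hydrograph; routing the whole cloud through the
    # reservoir gives the probability (inverse return period) of exceeding a given water level.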

Relevance: 100.00%
Abstract:

A representation of the color gamut of special effect coatings is proposed and shown for six different samples, whose colors were calculated from spectral bidirectional reflectance distribution function (BRDF) measurements at different geometries. The most important characteristic of the proposed representation is that it allows the color shift to be understood straightforwardly, both in terms of conventional irradiation and viewing angles and in terms of flake-based parameters. A different line is proposed to assess the color shift of special effect coatings on a*,b*-diagrams: the absorption line. Similar to the interference and aspecular lines (constant aspecular and irradiation angles, respectively), an absorption line is the locus of color coordinates calculated from measurement geometries with a fixed bistatic angle. The advantages of using absorption lines to characterize the contributions to the spectral BRDF of scattering at the absorption pigments and of reflection at the interference pigments for different geometries are shown.
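
As a sketch of the measurement geometry behind these lines, the snippet below computes an aspecular angle (between the viewing direction and the specular direction) and, as an assumption about the fixed quantity along an absorption line, a bistatic angle taken here as the angle between the illumination and viewing directions; the directions and angles in the example are arbitrary.

    import numpy as np

    def unit(v):
        return np.asarray(v, float) / np.linalg.norm(v)

    def geometry_angles(incident_dir, view_dir, normal=(0.0, 0.0, 1.0)):
        """incident_dir points from the source to the surface; view_dir from the surface to the detector."""
        i, v, n = unit(incident_dir), unit(view_dir), unit(normal)
        specular = unit(i - 2.0 * np.dot(i, n) * n)               # mirror direction of the incident beam
        aspecular = np.degrees(np.arccos(np.clip(np.dot(specular, v), -1.0, 1.0)))
        bistatic = np.degrees(np.arccos(np.clip(np.dot(-i, v), -1.0, 1.0)))   # assumed definition
        return aspecular, bistatic

    # Example only: 45 degrees irradiation, detector 20 degrees from the specular direction.
    print(geometry_angles(incident_dir=(np.sin(np.radians(45)), 0, -np.cos(np.radians(45))),
                          view_dir=(np.sin(np.radians(25)), 0, np.cos(np.radians(25)))))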