9 results for Being and Time

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

Related to the research line of the GDS group at ISOM; see http://www.isom.upm.es/dsemiconductores.php

Relevance:

100.00%

Publisher:

Abstract:

Improvements in neuroimaging methods have afforded significant advances in our knowledge of the cognitive and neural foundations of aesthetic appreciation. We used magnetoencephalography (MEG) to register brain activity while participants decided on the beauty of visual stimuli. The data were analyzed with event-related field (ERF) and time-frequency (TF) procedures. ERFs revealed no significant differences between brain activity related to stimuli rated as “beautiful” and “not beautiful.” TF analysis showed clear differences between the two conditions 400 ms after stimulus onset. Oscillatory power was greater for stimuli rated as “beautiful” than for those regarded as “not beautiful” in all four frequency bands (theta, alpha, beta, and gamma). These results are interpreted within the framework of synchronization studies.
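The band-power comparison at the heart of the TF analysis can be sketched as follows; this is a minimal illustration with a hypothetical signal and canonical band limits, not the study's actual MEG pipeline:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within [f_lo, f_hi] Hz via a plain DFT.
    Illustrates the idea of comparing oscillatory band power
    across experimental conditions."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n
    return total

# hypothetical example: a 10 Hz (alpha-band) oscillation sampled at 100 Hz
fs = 100
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 12)   # power concentrated here
theta = band_power(sig, fs, 4, 7)    # essentially zero
```

In the study, such per-band power would be computed per time window after stimulus onset and contrasted between the "beautiful" and "not beautiful" conditions.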

Relevance:

100.00%

Publisher:

Abstract:

This paper analyzes the correlation between the fluctuations of the electrical power generated by the ensemble of 70 DC/AC inverters from a 45.6 MW PV plant. The use of real electrical power time series from a large collection of photovoltaic inverters of the same plant is an important contribution in the context of models built upon simplified assumptions to overcome the absence of such data. This data set is divided into three different fluctuation categories with a clustering procedure that performs well using the clearness index and the wavelet variances. Afterwards, the time-dependent correlation between the electrical power time series of the inverters is estimated with the wavelet transform. The wavelet correlation depends on the distance between the inverters, the wavelet time scales, and the daily fluctuation level. Correlation values for time scales below one minute are low, with no dependence on the daily fluctuation level. For time scales above 20 minutes, high positive correlation values are obtained, and their decay rate with distance depends on the daily fluctuation level. At intermediate time scales the correlation depends strongly on the daily fluctuation level. The proposed methods have been implemented using free software. Source code is available as supplementary material.
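The scale-dependent correlation idea can be sketched with a crude moving-average "detail" standing in for one level of a wavelet decomposition; the inverter data below are hypothetical, and the paper's actual analysis uses wavelet variances:

```python
import random

def detail(series, scale):
    """Fluctuation of a power series at a given time scale: the series
    minus its centred moving average (a crude stand-in for one wavelet
    detail level)."""
    out = []
    for i in range(scale, len(series) - scale):
        window = series[i - scale:i + scale + 1]
        out.append(series[i] - sum(window) / len(window))
    return out

def correlation(x, y):
    """Plain Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# hypothetical data: two inverters seeing the same cloud passage plus noise
random.seed(0)
base = [1.0 if 40 <= t < 60 else 0.2 for t in range(100)]
p1 = [b + random.gauss(0, 0.01) for b in base]
p2 = [b + random.gauss(0, 0.01) for b in base]
rho = correlation(detail(p1, 10), detail(p2, 10))  # high: shared fluctuation
```

A shared large-scale fluctuation (the cloud passage) dominates the details, so the correlation at this scale is close to one, mirroring the paper's finding that long time scales correlate strongly across inverters.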

Relevance:

100.00%

Publisher:

Abstract:

In the last few years, technical debt has been used as a useful means of making visible the intrinsic cost of internal software quality weaknesses, by quantifying this cost. Specifically, technical debt is expressed in terms of two main concepts: principal and interest. The principal is the cost of eliminating or reducing the impact of a so-called technical debt item in a software system, whereas the interest is the recurring cost, over a period of time, of not eliminating it. Previous works on technical debt have mainly focused on estimating principal and interest, and on performing a cost-benefit analysis. This cost-benefit analysis makes it possible to determine whether removing technical debt is profitable and to prioritize which technical debt items should be fixed first. Nevertheless, in these previous works technical debt is flat over time. The introduction of new factors into the estimation of technical debt may produce non-flat models that allow more accurate predictions. These factors should be used to estimate principal and interest, and to perform the related cost-benefit analysis. In this paper, we take a step forward by introducing uncertainty about the interest, together with the time frame, as factors, so that it becomes possible to depict a number of possible future scenarios. Estimations obtained without considering the possible evolution of the interest over time may be less accurate, as they assume simplistic scenarios without changes.
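The principal/interest cost-benefit logic, extended to uncertain interest, can be sketched as follows; names and numbers are illustrative and do not reproduce the paper's model:

```python
def keep_cost(interest_per_period, periods):
    """Accumulated interest of NOT removing a technical debt item over a
    given time frame (flat-interest simplification)."""
    return interest_per_period * periods

def worth_fixing(principal, interest_per_period, periods):
    """Cost-benefit test: fixing pays off when the principal is lower than
    the interest that would otherwise accumulate."""
    return principal < keep_cost(interest_per_period, periods)

def scenario_analysis(principal, interest_range, periods):
    """Non-flat view: the interest is uncertain, so depict several possible
    futures instead of relying on a single flat estimate."""
    lo, hi = interest_range
    return {
        "optimistic": worth_fixing(principal, lo, periods),
        "pessimistic": worth_fixing(principal, hi, periods),
    }

# hypothetical item: principal 100, interest uncertain between 2 and 10
# per period, 12-period time frame
result = scenario_analysis(principal=100, interest_range=(2, 10), periods=12)
```

The two scenarios can disagree, which is exactly why a single flat estimate can mislead prioritization.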

Relevance:

100.00%

Publisher:

Abstract:

The Mediterranean climate is characterized by hot summers, high evapotranspiration rates, and scarce precipitation (around 400 mm per year) during the grapevine cycle. These extremely dry conditions affect vineyard productivity and sustainability, so supplementary irrigation is a necessary practice in order to maintain yield and quality. Almost all Spanish grape-growing regions fall within this context, especially the central region, where this study was performed. The main objective of this work was to study the influence of irrigation on yield and quality. To this end, we applied different levels of irrigation (mm of water applied) during different stages of growth and berry maturity. Four experimental treatments were applied, defined by the amount of water and the timing of the application:
T1: irrigation (420 mm) applied from bloom to maturity.
T2: the traditional irrigation scheduling, from preveraison to maturity (154 mm).
T3: irrigation from bloom to preveraison, and water deficit from veraison to maturity (312 mm).
T4: irrigation applied from preveraison to maturity (230 mm).
The experimental vineyard, cv. Cabernet Sauvignon, was located in a commercial vineyard (Bodegas Licinia S.L.) in the hot region of Morata de Tajuña (Madrid). The trial was performed during the 2010 and 2011 seasons. Our results showed that yield increased from 2010 to 2011 in the treatments with a higher amount of water applied, T1 and T3 (24% and 10% yield increases, respectively). This was mainly due to an increase in bud fertility (number of bunches per shoot). Furthermore, sugar content was highest in T3 (27.3 ºBrix), followed by T2 (27 ºBrix). By contrast, T4 (irrigation from preveraison) presented the lowest soluble solids concentration and the highest acidity. These results suggest that the grapevine has an intrinsic capacity to adapt to its environment. However, this adaptation capacity should be evaluated considering the sensitivity of quality parameters during the maturity period (acidity, pH, aroma, color...) and its impact on yield. Here, we demonstrated that a higher amount of irrigation water applied was not linked to a negative effect on quality.

Relevance:

100.00%

Publisher:

Abstract:

The development of a global instability analysis code coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analysis [1, 2], with finite-volume-based spatial discretization, as used in standard aerodynamics codes, is presented. The key advantage of the time-stepping method over matrix-formulation approaches is that the former solves the computer-storage issues associated with the latter methodology. To date, both approaches have been used successfully to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of the spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated against solutions of the BiGlobal EVP, and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability analysis has been successfully developed, based on finite-volume spatial discretization and the time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
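The essence of the time-stepping approach, repeatedly applying a time integrator instead of forming and storing the stability matrix, can be sketched with power iteration on a toy 2x2 linear flow; explicit Euler stands in for the DNS code, and all values are illustrative:

```python
import math

def propagate(v, A, dt, steps):
    """Advance a linearized perturbation v by the flow of dv/dt = A v with
    explicit Euler steps: the 'time stepper' whose repeated application
    replaces building the stability matrix explicitly."""
    for _ in range(steps):
        v = [v[i] + dt * sum(A[i][j] * v[j] for j in range(len(v)))
             for i in range(len(v))]
    return v

def leading_growth(A, dt=1e-3, steps=1000, iters=50):
    """Power iteration on the propagator exp(A*T), T = steps*dt: its
    dominant eigenvalue gives the growth of the leading global mode."""
    v = [1.0, 1.0]
    for _ in range(iters):
        w = propagate(v, A, dt, steps)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = propagate(v, A, dt, steps)
    # Rayleigh-quotient estimate of the propagator eigenvalue
    return sum(a * b for a, b in zip(v, w))

# toy 'flow' with eigenvalues 0.5 (unstable) and -1.0 (stable)
A = [[0.5, 0.0], [0.0, -1.0]]
mu = leading_growth(A)  # approaches exp(0.5 * 1.0) for T = 1
```

Production codes use Krylov (Arnoldi) iteration rather than plain power iteration, but the storage advantage is the same: only the action of the time stepper on a vector is ever needed.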

Relevance:

100.00%

Publisher:

Abstract:

Laser material processing is being extensively used in photovoltaic applications, both for the fabrication of thin-film modules and for the enhancement of crystalline silicon solar cells. The two-temperature model for thermal diffusion was numerically solved in this paper. Laser pulses of 1064, 532 or 248 nm, with durations of 35, 26 or 10 ns, were considered as the thermal source leading to material ablation. Considering high irradiance levels (10^8–10^9 W cm^-2), total absorption of the energy during the ablation process was assumed in the model. The materials analysed in the simulation were aluminium (Al) and silver (Ag), which are commonly used as metallic electrodes in photovoltaic devices. Moreover, thermal diffusion was also simulated for crystalline silicon (c-Si). A similar trend of temperature as a function of depth and time was found for both metals and c-Si regardless of the employed wavelength. For each material, the dependence of the ablation depth on the laser pulse parameters was determined by means of an ablation criterion: after the laser pulse, the maximum depth at which the total energy stored in the material equals the vaporisation enthalpy was taken as the ablation depth. For all cases, the ablation depth increased with the laser pulse fluence and did not exhibit a clear correlation with the radiation wavelength. Finally, experimental validation of the simulation results was carried out, confirming the ability of the model, with the initial hypothesis of total energy absorption, to closely fit the experimental results.
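The ablation criterion described above can be sketched as follows; the energy-density profile and vaporisation enthalpy are hypothetical placeholders, and the paper's two-temperature model itself is not reproduced:

```python
import math

def ablation_depth(depths, energy_density, h_vap):
    """Ablation criterion, sketched: the ablation depth is the maximum
    depth at which the stored energy density (J m^-3) still reaches the
    vaporisation enthalpy per unit volume."""
    deepest = 0.0
    for z, e in zip(depths, energy_density):
        if e >= h_vap:
            deepest = z
    return deepest

# hypothetical post-pulse stored-energy profile, decaying with depth
depths = [i * 10e-9 for i in range(50)]                    # 0..490 nm
profile = [5e10 * math.exp(-z / 100e-9) for z in depths]   # J m^-3
d = ablation_depth(depths, profile, h_vap=1e10)            # ~161 nm here
```

In the paper, the profile would come from the numerical solution of the two-temperature model for each pulse fluence and wavelength, with the criterion applied afterwards.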

Relevance:

100.00%

Publisher:

Abstract:

In many engineering fields, the integrity and reliability of structures are extremely important. They are controlled through adequate knowledge of existing damage. Typically, achieving the level of knowledge necessary to characterize structural integrity involves the use of non-destructive testing techniques, which are often expensive and time-consuming. Nowadays, many industries seek to increase the reliability of the structures they use. By using leading-edge techniques it is possible to monitor these structures and, in some cases, to detect incipient damage that could trigger catastrophic failures. Unfortunately, as the complexity of structures, components, and systems increases, the risk of damage and failure also increases, and at the same time the detection of such failures and defects becomes more difficult. In recent years, the aerospace industry has made great efforts to integrate sensors within structures and to develop algorithms for determining structural integrity in real time. This philosophy has been called “Structural Health Monitoring”, and such structures have been called “smart structures”. These new types of structures integrate materials, sensors, actuators, and algorithms to detect, quantify, and locate damage within themselves. A novel methodology for damage detection in structures is proposed in this work. The methodology is based on strain measurements and consists in the development of strain-field pattern recognition techniques based on PCA (Principal Component Analysis) and other dimensionality reduction techniques. The use of fiber Bragg gratings and distributed sensing as strain sensors is proposed. The methodology has been validated through laboratory-scale tests and real-scale tests with complex structures. The effects of variable load conditions were studied, and several experiments were performed under static and dynamic load conditions, demonstrating that the methodology is robust under unknown load conditions.
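The PCA-based pattern-recognition step can be sketched with a two-sensor toy example: power iteration finds the leading component of the baseline strain snapshots, and a Q-like residual flags snapshots that break the baseline pattern. All data here are hypothetical:

```python
import math

def first_pc(data):
    """Leading principal component via power iteration on the covariance
    of mean-centred snapshots (rows = snapshots, cols = strain sensors)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[k][i] * x[k][j] for k in range(n)) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(100):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(t * t for t in w))
        v = [t / norm for t in w]
    return means, v

def residual(sample, means, pc):
    """Q-like statistic: distance of a new strain snapshot from the
    baseline subspace spanned by the first principal component."""
    x = [s - m for s, m in zip(sample, means)]
    proj = sum(a * b for a, b in zip(x, pc))
    return math.sqrt(sum((xi - proj * pi) ** 2 for xi, pi in zip(x, pc)))

# hypothetical baseline: two sensors whose strains co-vary with load
baseline = [[t, 2.0 * t] for t in range(1, 9)]
means, pc = first_pc(baseline)
healthy = residual([5.0, 10.0], means, pc)  # follows the baseline pattern
damaged = residual([5.0, 4.0], means, pc)   # pattern broken, large residual
```

Because the residual is measured against the pattern rather than the raw strain level, a change in overall load magnitude alone does not trigger an alarm, which is one reason such methods can be robust to varying load conditions.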

Relevance:

100.00%

Publisher:

Abstract:

Assuring the sustainability of quality in photovoltaic rural electrification programmes involves enhancing the reliability of the components of solar home systems as well as characterizing the overall programme cost structure. Batteries and photovoltaic modules have a great impact on both reliability and cost: the battery is the weakest component of the solar home system and consequently the most expensive element of the programme, while the photovoltaic module, despite being the most reliable component, has a significant impact on the initial investment, even at current market prices. This paper focuses on the in-field testing of both batteries and photovoltaic modules working under real operating conditions within a sample of 41 solar home systems belonging to a large photovoltaic rural electrification programme with more than 13,000 installed photovoltaic systems. Different reliability parameters, such as lifetime, have been evaluated, taking into account factors such as energy consumption rates and the manufacturing quality of the batteries. A degradation model has been proposed relating loss of capacity to time of operation. The user-solar home system relationship is also analysed in order to understand the meaning of battery lifetime in rural electrification.
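A capacity-fade model of the kind described, relating loss of capacity to time of operation, can be sketched as follows; the linear form and all parameters are illustrative and do not reproduce the programme's fitted model:

```python
def remaining_capacity(c0, fade_per_cycle, cycles):
    """Illustrative linear degradation model: capacity lost in proportion
    to the number of charge/discharge cycles, floored at zero."""
    return max(c0 * (1 - fade_per_cycle * cycles), 0.0)

def lifetime_cycles(c0, fade_per_cycle, end_of_life=0.8):
    """Battery lifetime: cycles until capacity drops to the end-of-life
    fraction of the initial capacity (80% is a common convention)."""
    cycles = 0
    while remaining_capacity(c0, fade_per_cycle, cycles) > end_of_life * c0:
        cycles += 1
    return cycles

# hypothetical battery: 100 Ah initial capacity, 0.1% capacity lost per cycle
life = lifetime_cycles(c0=100.0, fade_per_cycle=0.001)
```

Under in-field conditions, the fade rate itself would be estimated per system from measured capacity versus operating time, which is where factors such as energy consumption rate and manufacturing quality enter.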