17 results for "One-step electrospin technique" at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

Medicine has evolved to the point where digital images play a major role in disease diagnosis. The vocal apparatus can present a wide variety of problems, and a preliminary step in characterising digital images of the larynx is the segmentation of the vocal folds. To date, algorithms have been developed that segment the glottis; this project goes one step further and also seeks to segment the vocal folds. To do so, the colour information in the images must be exploited, since it is what distinguishes one region of the image from another. In this project a novel method for segmenting stroboscopic colour images of the larynx has been developed, based on region growing from seed pixels. Because of the problems inherent in images obtained by stroboscopy, optimal segmentation results require a preprocessing stage, consisting of the removal of areas of high brightness and the application of an anisotropic diffusion filter. After preprocessing, the region grows from previously obtained seeds. The condition for including a pixel in the region is based on a tolerance parameter that is determined adaptively: it starts with a very low value and is increased recursively until a stop condition is reached. This condition is based on the analysis of the statistical distribution of the pixels within the growing region. The last phase of the project comprises the tests needed to verify that the designed system works correctly, obtaining good results for the segmentation of the glottis and encouraging results for continuing to improve the system towards the segmentation of the vocal folds.
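
A minimal sketch of the adaptive-tolerance region growing described above may help fix ideas. The function below is not the project's implementation; the colour-distance test and the `stop_fraction` stall criterion are illustrative assumptions standing in for the statistical stop condition of the abstract, and the image is assumed to be already preprocessed.

```python
import numpy as np
from collections import deque

def region_grow(image, seeds, max_tolerance=60.0, step=2.0, stop_fraction=0.02):
    """Grow a region from seed pixels with a recursively increased tolerance.

    `image` is an H x W x 3 colour array (already preprocessed), `seeds` is a
    list of (row, col) tuples.  The stall test stands in for the statistical
    stop condition described in the abstract.
    """
    img = np.asarray(image, dtype=float)
    h, w, _ = img.shape
    seed_colour = np.mean([img[r, c] for r, c in seeds], axis=0)
    region = np.zeros((h, w), dtype=bool)
    for r, c in seeds:
        region[r, c] = True

    tol, prev_size = step, region.sum()
    while tol <= max_tolerance:
        frontier = deque(map(tuple, np.argwhere(region)))
        while frontier:
            r, c = frontier.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not region[rr, cc]:
                    if np.linalg.norm(img[rr, cc] - seed_colour) < tol:
                        region[rr, cc] = True
                        frontier.append((rr, cc))
        grown = region.sum()
        if grown - prev_size < stop_fraction * grown:   # growth has stalled
            break
        prev_size, tol = grown, tol + step              # relax the tolerance
    return region
```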

Relevance: 100.00%

Abstract:

Software development is an inherently complex activity that requires specific problem-comprehension and problem-solving skills. It is so complex that it can safely be said that there is no perfect method for each of the development stages, nor a perfect life-cycle model: each new problem differs from the preceding ones in some respect, and techniques that worked in earlier projects can fail in new ones. Given this situation, the current trend is an integrative approach that seeks to use, in each case, the methods, tools and techniques best suited to the characteristics of the problem at hand. This approach, however, raises new problems. The first is the selection of development approaches: if no single best approach exists, how does one choose the most adequate one from the array of available options? The second problem concerns the relationship between the analysis and design phases, which carries two major risks. On the one hand, the analysis may be too shallow, creating such a distance between analysis and design that the transition between them becomes very difficult. On the other hand, the analysis may be expressed in design terminology, so that instead of focusing on the problem it becomes a first version of the solution, i.e. a preliminary design. From this follows the analysis dilemma, which can be stated as follows: for each problem the most adequate techniques must be chosen, which requires knowing the characteristics of the problem; but to know them the problem must first be analysed, which means choosing an analysis technique before the problem is really known. If the chosen technique uses design terminology, the solution paradigm has been preconditioned and may turn out not to be the most adequate one for solving the problem. Finally, there are pragmatic barriers that limit the adoption of formally based methods and hinder their application in everyday practice. Considering all of these problems, analysis methods are needed that fulfil several goals. The first is a formal base, which prevents ambiguity and allows the correctness of the generated models to be verified. The second is design independence: the method should use terms with no direct counterpart in design, so that the analysis can focus on the characteristics of the problem. The method should also allow problems of any kind to be analysed: algorithmic, decision-support or knowledge-based, among others. Next come goals related to pragmatic aspects. On the one hand, the method should incorporate a textual notation that is formal but not mathematical, so that the models can be validated and understood by people without deep mathematical knowledge while remaining rigorous enough to be verified. On the other hand, a complementary graphical notation is required, so that the models can be comfortably understood and validated by clients and users. This doctoral thesis presents SETCM, an analysis method that fulfils these goals.

All the elements that make up the analysis models have been defined using a terminology that is independent of design paradigms, and these definitions have been formalised using the fundamental notions of set theory: elements, sets and relations between sets. In addition, a formal language has been defined to represent the elements of the analysis models, avoiding mathematical notation as far as possible, complemented by a graphical notation that represents the most relevant parts of the models visually. The proposed method has undergone an intensive experimentation phase, during which it was applied to 13 case studies, all of them real projects that concluded in products transferred to public or private entities. The experimentation evaluated the adequacy of SETCM for analysing problems of different sizes and for systems whose final design used different, and even mixed, paradigms. Its use by analysts with different levels of experience (novice, intermediate and expert) was also evaluated, analysing the learning curve in every case in order to determine whether the method is easy to learn regardless of whether any other analysis technique is already known. The capacity to extend models generated with SETCM was also studied, to check whether it supports projects carried out in several phases, in which the analysis of one phase consists of extending the analysis of the previous one. In short, the aim was to evaluate whether SETCM can be adopted within an organisation as the preferred analysis technique for software development. The results obtained from this experimentation have been very positive, with a high degree of fulfilment of all the goals set when the method was defined.
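
Since the abstract states that the analysis models are formalised with the basic notions of set theory (elements, sets and relations between sets), a small, purely hypothetical sketch of such a representation is given below; the class names and the well-formedness check are illustrative and are not taken from SETCM itself.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Element:
    """An individual entity of the problem domain."""
    name: str

@dataclass
class ConceptSet:
    """A named set of elements (e.g. 'Customers', 'Invoices')."""
    name: str
    members: frozenset = field(default_factory=frozenset)

@dataclass
class Relation:
    """A relation between two sets, stored as pairs of elements."""
    name: str
    source: ConceptSet
    target: ConceptSet
    pairs: frozenset = field(default_factory=frozenset)

    def is_well_formed(self) -> bool:
        # Verification step: every pair must respect the declared domains.
        return all(a in self.source.members and b in self.target.members
                   for a, b in self.pairs)

# Illustrative usage with made-up domain content
alice, inv1 = Element("Alice"), Element("Invoice-001")
customers = ConceptSet("Customers", frozenset({alice}))
invoices = ConceptSet("Invoices", frozenset({inv1}))
issued_to = Relation("issued_to", invoices, customers, frozenset({(inv1, alice)}))
assert issued_to.is_well_formed()
```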

Relevance: 100.00%

Abstract:

In these times of crisis it is imperative to make the consumption of public resources as rational as possible. Urban public transport is a sector that receives large investments and whose services are heavily subsidised. Increasing the technical efficiency of the sector, understood as the ratio between service output and resource consumption, can help to achieve better management of public funds. A first step towards such an improvement is the development of a methodology for evaluating the technical efficiency of public transport companies. Different methods exist for the technical evaluation of a set of companies within a sector. One of the most widely used is the frontier approach, which includes Data Envelopment Analysis (DEA). This method establishes a technical efficiency frontier relative to a given group of companies, based on a limited number of variables. The variables must quantify, on the one hand, the services provided by the different companies (outputs) and, on the other, the resources consumed in producing those services (inputs). The objective of this thesis is to analyse, using the DEA method, the technical efficiency of urban bus services in Spain. To this end, the most suitable number of variables for the models used to obtain the efficiency frontiers is studied. The methodology is developed using indicators of the urban bus services of the main cities of the Spanish metropolitan areas for the period 2004-2009.
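
As a generic illustration of DEA (not the specific models of the thesis), the sketch below solves the standard input-oriented CCR envelopment model as a linear programme; the input/output indicators in the toy example are made up.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.

    X: (n_dmus, n_inputs)  e.g. fleet size, staff
    Y: (n_dmus, n_outputs) e.g. trips supplied
    Returns one efficiency score per company (DMU).
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    _, s = Y.shape
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimise theta
        # inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # outputs: -sum_j lambda_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Toy example: 3 bus companies, 2 inputs (fleet, staff), 1 output (trips, millions)
print(dea_ccr_input(X=[[40, 120], [55, 150], [30, 100]],
                    Y=[[2.1], [2.4], [1.6]]))
```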

Relevance: 100.00%

Abstract:

Wind power time series usually show complex dynamics, mainly due to non-linearities related to the wind physics and to the power transformation process in wind farms. This article provides an approach to the incorporation of observed local variables (wind speed and direction) to model some of these effects by means of statistical models. To this end, a benchmark between two different families of varying-coefficient models (regime-switching and conditional parametric models) is carried out. The case of the offshore wind farm of Horns Rev in Denmark has been considered. The analysis is focused on one-step-ahead forecasting at a time series resolution of 10 min. It has been found that the local wind direction contributes to modelling some features of the prevailing winds, such as the impact of the wind direction on the wind variability, whereas the non-linearities related to the power transformation process can be introduced by considering the local wind speed. In both cases, conditional parametric models showed better performance than that achieved by the regime-switching strategy. The results attained reinforce the idea that each explanatory variable allows the modelling of different underlying effects in the dynamics of wind power time series.
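
As a rough illustration of the conditional parametric family mentioned above, the sketch below estimates AR(1) coefficients by locally weighted least squares, with kernel weights driven by the observed wind direction. It is a toy stand-in under that assumption, not the estimation scheme used in the paper.

```python
import numpy as np

def conditional_ar1_forecast(power, direction, dir_now, bandwidth=30.0):
    """One-step-ahead forecast from a conditional parametric AR(1) model.

    The AR(1) coefficients are fitted by locally weighted least squares, with
    weights depending on how close each past observation's wind direction is
    to the current one (`dir_now`, in degrees).
    """
    p_lag, p_now = power[:-1], power[1:]
    d = direction[:-1]
    delta = np.abs((d - dir_now + 180.0) % 360.0 - 180.0)   # circular distance
    w = np.exp(-0.5 * (delta / bandwidth) ** 2)             # Gaussian kernel weights
    A = np.column_stack([np.ones_like(p_lag), p_lag])
    Aw = A * w[:, None]
    theta, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ p_now, rcond=None)
    intercept, slope = theta
    return intercept + slope * power[-1]                    # forecast for t+1

# usage with synthetic 10-minute data
rng = np.random.default_rng(0)
power = np.clip(np.cumsum(rng.normal(0, 0.5, 500)) + 50, 0, 160)
direction = rng.normal(270, 20, 500) % 360
print(conditional_ar1_forecast(power, direction, direction[-1]))
```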

Relevance: 100.00%

Abstract:

This Ph.D. dissertation proposes a pan-European, transnational telematic voting system capable of meeting the strictest security requirements. The transnational approach is a significant innovation that entails identifying citizens beyond the borders of their own country, which in turn requires that every European citizen have a digital identity that is recognised beyond the borders of their country of origin. On these premises, the proposal is developed along two complementary lines: first, the design of a voting scheme capable of winning the confidence of European governments and citizens and, second, a solution to the problem of interoperability of Identity Management Systems (IDMs), consistent with the work currently being carried out by the EU to integrate the services provided by the public administrations of the different European countries. The starting point of this work is the identification of the requirements that determine the proper functioning of a telematic voting system; from them, a set of elements and criteria is proposed that makes it possible both to compare different telematic voting systems and to evaluate the suitability of the proposed one. The most recent and significant telematic voting experiences carried out by different countries to automate their electoral processes are then analysed in depth, showing that even the most recent systems still suffer from significant security shortcomings. Moreover, a significant portion of the population remains wary and often questions the validity of the published results. A system that aspires to win the trust of citizens and governments must therefore not only operate correctly, transferring traditional voting processes to a telematic context, but must also provide additional mechanisms capable of overcoming the fears aroused by the new voting system.

Accordingly, this thesis focuses, first, on creating irrefutable, understandable and auditable proofs throughout the voting process that can demonstrate with certainty, to all actors involved (government, political parties, voters, polling station officers, observers, the electoral board, judges, etc.), that the published results are trustworthy and that the principles of anonymity and "one person, one vote" have not been violated. Under this approach, the solution presented here not only provides mechanisms to minimise the risk of vote buying, but also incorporates robust security mechanisms that make it possible to detect attempts to manipulate the system and to identify the responsible agent. In addition, the thesis goes one step further and moves the voting scenario to a pan-European setting, where new problems appear. Indeed, one of the main challenges currently facing transnational voting is the lack of rigorous, dynamic procedures for the synchronised updating of the voter rolls of the different countries, free from errors that would make it impossible to prevent a person from casting more than one vote or that could prevent someone from exercising their right to vote at all. This recognition of transnational identity requires interoperability between the IDMs of the different European countries. To solve this problem, the thesis builds on proposals emerging within the EU, which are expected to be consolidated in the coming years, both in the field of digital identity (with the launch of the European Citizen Card) and in the deployment of an identity management infrastructure that will make the IDMs of the different member states interoperable. On this basis, the thesis proposes a telematic infrastructure that enables interoperability between the census management systems of the European states in which an election is jointly held. The result is a versatile, secure, robust, reliable and auditable system that can be applied to pan-European elections and that treats the dynamic updating of the voter rolls as a critical part of the voting process.
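
The census-synchronisation problem described above can be pictured with a deliberately simplified sketch: merging national voter rolls keyed by a pan-European identity token and flagging citizens registered in more than one state. The data model is hypothetical and ignores the anonymity and auditability mechanisms the thesis actually addresses.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CensusEntry:
    eu_identity: str      # identifier from a (hypothetical) European Citizen Card
    member_state: str

def merge_census(national_rolls):
    """Merge national voter rolls into a single pan-European census.

    `national_rolls` maps a member-state code to a list of CensusEntry.  A
    citizen registered in more than one state keeps a single entry, so that a
    'one person, one vote' check can be enforced centrally.  Illustrative
    sketch only, not the protocol proposed in the thesis.
    """
    merged, duplicates = {}, []
    for state, roll in national_rolls.items():
        for entry in roll:
            if entry.eu_identity in merged:
                duplicates.append((entry.eu_identity, state))
            else:
                merged[entry.eu_identity] = entry
    return merged, duplicates

rolls = {
    "ES": [CensusEntry("EU-0001", "ES"), CensusEntry("EU-0002", "ES")],
    "PT": [CensusEntry("EU-0002", "PT")],   # same citizen registered in a second state
}
census, dups = merge_census(rolls)
print(len(census), dups)
```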

Relevance: 100.00%

Abstract:

A one-step extraction procedure and a leaching column experiment were performed to assess the effects of citric and tartaric acids on Cu and Zn mobilization in naturally contaminated mine soils, with a view to facilitating assisted phytoextraction. Speciation modeling of the soil solution and metal fractionation of the soils were performed to elucidate the chemical processes that affected metal desorption by the organic acids. Different extracting solutions were prepared, all containing 0.01 M KNO3 and different concentrations of organic acids: a control without organic acids, 0.5 mM citric, 0.5 mM tartaric, 10 mM citric, 10 mM tartaric, and 5 mM citric + 5 mM tartaric. The results of the extraction procedure showed that higher concentrations of organic acids increased metal desorption and that citric acid was more effective than tartaric acid at facilitating metal desorption. Metal desorption was mainly influenced by the decreasing pH and the dissolution of Fe and Mn oxides, not by the formation of soluble metal–organic complexes as predicted by the speciation modeling. The column study showed that low concentrations of organic acids did not significantly increase metal mobilization and that the higher doses were also unable to mobilize Zn. However, 5–10 mM citric acid significantly promoted Cu mobilization (from 1 mg kg−1 in the control to 42 mg kg−1 with 10 mM citric acid) and reduced the exchangeable (from 21 to 3 mg kg−1) and the Fe and Mn oxide (from 443 to 277 mg kg−1) fractions. Citric acid could therefore efficiently facilitate assisted phytoextraction techniques.

Relevance: 100.00%

Abstract:

In this talk we address a proposal concerning a methodology for extracting universal, domain-neutral architectural design patterns from the analysis of biological cognition. This will render a set of design principles and design patterns oriented towards the construction of better machines. Bio-inspiration cannot be a one-step process if we are going to build robust, dependable autonomous agents; we must first build solid theories, departing from natural systems, and use them to support our designs of artificial ones.

Relevance: 100.00%

Abstract:

The offshore wind industry has grown exponentially in recent years. Despite this growth, many uncertainties remain in this field. This paper analyses some of the current uncertainties in the offshore wind market, with the aim of going one step further in the development of the sector. To this end, uncertainties already known to compromise the structural design of offshore wind farms are identified and described in the paper; examples are the design of the transition piece and the difficulty of characterising the soil properties. Furthermore, the paper deals with uncertainties not yet identified, owing to the limited experience in the sector. To do so, the current and most widely used offshore wind standards and recommendations for the design of foundations and support structures (IEC 61400-1, 2005; IEC 61400-3, 2009; DNV-OS-J101, Design of Offshore Wind Turbine, 2013; and the Germanischer Lloyd Rules and Guidelines, WindEnergie, 2005) have been analysed. The newly identified uncertainties concern the lifetime and return period, load combinations, the scour phenomenon and its protection, the Morison and Froude-Krylov and diffraction regimes, wave theory, scale differences and liquefaction. Indeed, there is much room for improvement in this field; some improvements are mentioned in this paper, but future experience in the matter will make it possible to detect further issues to be solved and improved.
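
Of the uncertainties listed above, the relation between design lifetime and return period is the easiest to illustrate numerically: assuming independent yearly maxima, the probability of at least one exceedance of a T-year event during an L-year lifetime is 1 − (1 − 1/T)^L. The numbers below are illustrative only and are not taken from the paper.

```python
def exceedance_probability(return_period_years, lifetime_years):
    """Probability of at least one exceedance during the design lifetime,
    assuming independent yearly maxima (standard textbook relation, used here
    only to illustrate the lifetime / return-period trade-off)."""
    return 1.0 - (1.0 - 1.0 / return_period_years) ** lifetime_years

# e.g. a 50-year return-period load over a 20- or 25-year turbine lifetime
for life in (20, 25):
    print(life, round(exceedance_probability(50, life), 3))
```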

Relevance: 100.00%

Abstract:

Short-run forecasting of electricity prices has become necessary for scheduling power generation units, since it is the basis of every profit-maximization strategy. In this article a new and very simple method for computing accurate electricity price forecasts using mixed models is proposed. The main idea is to develop an efficient tool for one-step-ahead forecasting, combining several prediction methods whose forecasting performance has been checked and compared over a span of several years. As a further novelty, the 24 hourly time series are modelled separately, instead of the complete price series, which makes it possible to exploit the homogeneity of these 24 series. The purpose of this paper is to select the model that leads to the smallest prediction errors and to obtain the appropriate length of time to use for forecasting. These results have been obtained by means of a computational experiment. A mixed model that combines the advantages of the two new models discussed is proposed. Some numerical results for the Spanish market are shown, but the new methodology can be applied to other electricity markets as well.
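
To illustrate the idea of modelling each of the 24 hourly series separately and then mixing predictors, the sketch below combines two deliberately simple component models per hour (persistence and a rolling AR(1) fit). These stand in for the models actually compared in the article, and the error-based weighting rule is an assumption of this sketch.

```python
import numpy as np

def hourly_mixed_forecast(prices, window=90):
    """One-step-ahead (next-day) forecast of the 24 hourly electricity prices.

    `prices` is a (days, 24) array.  Each hour is modelled separately, as in
    the paper, but the two component models here are simple stand-ins; the
    per-hour mixture weight comes from recent in-sample errors.
    """
    prices = np.asarray(prices, float)
    forecast = np.empty(24)
    for h in range(24):
        series = prices[-window:, h]
        naive = series[-1]                              # model 1: persistence
        x, y = series[:-1], series[1:]
        b, a = np.polyfit(x, y, 1)                      # model 2: AR(1) fit
        ar1 = a + b * series[-1]
        e_naive = np.mean(np.abs(y - x))
        e_ar1 = np.mean(np.abs(y - (a + b * x)))
        w = e_ar1 / (e_naive + e_ar1 + 1e-12)           # more weight to the better model
        forecast[h] = w * naive + (1.0 - w) * ar1
    return forecast

# usage with synthetic data shaped like (days, 24)
rng = np.random.default_rng(1)
history = 45 + 10 * np.sin(np.arange(24) / 24 * 2 * np.pi) + rng.normal(0, 3, (120, 24))
print(hourly_mixed_forecast(history).round(2))
```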

Relevance: 100.00%

Abstract:

Numerical simulations of axisymmetric reactive jets with one-step Arrhenius kinetics are used to investigate the problem of deflagration initiation in a premixed fuel–air mixture by the sudden discharge of a hot jet of its adiabatic reaction products. For the moderately large values of the jet Reynolds number considered in the computations, chemical reaction is seen to occur initially in the thin mixing layer that separates the hot products from the cold reactants. This mixing layer is wrapped around by the starting vortex, thereby enhancing mixing at the jet head, and is followed by an annular mixing layer that trails behind, connecting the leading vortex with the orifice rim. A successful deflagration is seen to develop for values of the orifice radius larger than a critical value a_c of the order of the flame thickness δL of the planar deflagration. Introduction of appropriate scales provides the dimensionless formulation of the problem, with flame initiation characterised in terms of a critical Damköhler number Δc = (a_c/δL)², whose parametric dependence is investigated. The numerical computations reveal that, while the jet Reynolds number exerts a limited influence on the criticality conditions, the effect of the reactant diffusivity on ignition is much more pronounced, with the value of Δc increasing significantly with increasing Lewis number. The reactant diffusivity also affects the way ignition takes place: for reactants with Lewis numbers of order unity the flame develops as a result of ignition in the annular mixing layer surrounding the developing jet stem, whereas for highly diffusive reactants with Lewis numbers sufficiently smaller than unity combustion is initiated in the mixed core formed around the starting vortex. The analysis provides increased understanding of deflagration-initiation processes, including the effects of differential diffusion, and points to the need for further investigations incorporating detailed chemistry models for specific fuel–air mixtures.
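
For reference, the sketch below writes out the generic one-step Arrhenius rate and the Damköhler number appearing in the criticality criterion above; the numerical values are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

R_UNIV = 8.314  # J / (mol K)

def one_step_arrhenius_rate(rho, Y_reactant, T, B, E_act):
    """Generic one-step Arrhenius consumption rate  w = B * rho * Y * exp(-E/(R T)).
    B and E_act are illustrative model constants, not values from the paper."""
    return B * rho * Y_reactant * np.exp(-E_act / (R_UNIV * T))

def damkohler(orifice_radius, flame_thickness):
    """Delta = (a / delta_L)**2 ; ignition requires Delta above the critical value."""
    return (orifice_radius / flame_thickness) ** 2

# illustrative numbers: a 2 mm orifice and a 0.4 mm planar-flame thickness
print(damkohler(2.0e-3, 0.4e-3))
print(one_step_arrhenius_rate(rho=1.0, Y_reactant=0.05, T=1500.0,
                              B=1.0e9, E_act=1.3e5))
```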

Relevance: 100.00%

Abstract:

Conditions are identified under which analyses of laminar mixing layers can shed light on aspects of turbulent spray combustion. With this in mind, laminar spray-combustion models are formulated for both non-premixed and partially premixed systems. The laminar mixing layer separating a hot-air stream from a monodisperse spray carried by either an inert gas or air is investigated numerically and analytically in an effort to increase understanding of the ignition process leading to stabilization of high-speed spray combustion. The problem is formulated in an Eulerian framework, with the conservation equations written in the boundary-layer approximation and with a one-step Arrhenius model adopted for the chemistry description. The numerical integrations unveil two different types of ignition behaviour depending on the fuel availability in the reaction kernel, which in turn depends on the rates of droplet vaporization and fuel-vapour diffusion. When sufficient fuel is available near the hot boundary, as occurs when the thermochemical properties of heptane are employed for the fuel in the integrations, combustion is established through a precipitous temperature increase at a well-defined thermal-runaway location, a phenomenon that is amenable to a theoretical analysis based on activation-energy asymptotics, presented here, following earlier ideas developed in describing unsteady gaseous ignition in mixing layers. By way of contrast, when the amount of fuel vapour reaching the hot boundary is small, as is observed in the computations employing the thermochemical properties of methanol, the incipient chemical reaction gives rise to a slowly developing lean deflagration that consumes the available fuel as it propagates across the mixing layer towards the spray. The flame structure that develops downstream from the ignition point depends on the fuel considered and also on the spray carrier gas, with fuel sprays carried by air displaying either a lean deflagration bounding a region of distributed reaction or a distinct double-flame structure with a rich premixed flame on the spray side and a diffusion flame on the air side. Results are calculated for the distributions of mixture fraction and scalar dissipation rate across the mixing layer that reveal complexities that serve to identify differences between spray-flamelet and gaseous-flamelet problems.

Relevance: 100.00%

Abstract:

An analysis of the structure of flame balls encountered under microgravity conditions, which are stable due to radiant energy losses from H₂O, is carried out for fuel-lean hydrogen-air mixtures. It is seen that, because of radiation losses, in stable flame balls the maximum flame temperature remains close to the crossover temperature, at which the rate of the branching step H + O₂ -> OH + O equals that of the recombination step H + O₂ + M -> HO₂ + M. Under those conditions, all chemical intermediates have very small concentrations and follow the steady-state approximation, while the main species react according to the overall step 2H₂ + O₂ -> 2H₂O, so that a one-step chemical-kinetic description, recently derived by asymptotic analysis for near-limit fuel-lean deflagrations, can be used with excellent accuracy to describe the whole branch of stable flame balls. Besides molecular diffusion in a binary-diffusion approximation, Soret diffusion is included, since it exerts a non-negligible effect that extends the flammability range. When the large value of the activation energy of the overall reaction is taken into account, the leading-order analysis in the reaction-sheet approximation is seen to determine the flame-ball radius as that required for radiant heat losses to remove enough of the heat released by the chemical reaction at the flame to keep the flame temperature at a value close to crossover. The results are relevant to burning velocities at lean equivalence ratios and may influence fire-safety issues associated with hydrogen utilization.
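
The crossover condition quoted above (branching rate equal to the effective recombination rate) can be illustrated with a small root-finding sketch; the rate parameters below are rough placeholders chosen only to show the mechanics, not the detailed-chemistry values used in the study.

```python
import numpy as np
from scipy.optimize import brentq

def crossover_temperature(A_b, Ta_b, A_r, third_body_conc):
    """Temperature at which the branching rate  k_b = A_b * exp(-Ta_b / T)
    equals the effective recombination rate  A_r * [M].

    All parameters are illustrative placeholders, not mechanism values.
    """
    f = lambda T: A_b * np.exp(-Ta_b / T) - A_r * third_body_conc
    return brentq(f, 600.0, 2500.0)

# placeholder parameters for illustration only
print(crossover_temperature(A_b=3.5e15, Ta_b=8.6e3,
                            A_r=6.0e15, third_body_conc=4.0e-6))
```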

Relevance: 100.00%

Abstract:

It has been reasoned that the structures of strongly cellular flames in very lean mixtures approach an array of flame balls, each burning as if it were isolated, thereby indicating a connection between the critical conditions required for the existence of steady flame balls and those necessary for the occurrence of self-sustained premixed combustion. This is the starting assumption of the present study, in which the structures of near-limit steady spherico-symmetrical flame balls are investigated with the objective of providing analytic expressions for critical combustion conditions in ultra-lean hydrogen-oxygen mixtures diluted with N₂ and water vapor. If attention were restricted to planar premixed flames, the lean-limit mole fraction of H₂ would be found to be roughly ten percent, more than twice the observed flammability limit, thereby emphasizing the relevance of the flame-ball phenomena. Numerical integrations using detailed models for chemistry and radiation show that a one-step chemical-kinetic reduced mechanism based on steady-state assumptions for all chemical intermediates, together with a simple, optically thin approximation for water-vapor radiation, can be used to compute near-limit fuel-lean flame balls with excellent accuracy. The previously developed one-step reaction rate includes a crossover temperature that determines, in the first approximation, a chemical-kinetic lean limit below which combustion cannot occur, with critical conditions achieved when the diffusion-controlled, radiation-free peak temperature, computed with account taken of hydrogen Soret diffusion, equals the crossover temperature. First-order corrections are found by activation-energy asymptotics in a solution that involves a near-field radiation-free zone surrounding a spherical flame sheet, together with a far-field radiation-conduction balance for the temperature profile. Different scalings are found depending on whether or not the surrounding atmosphere contains water vapor, leading to different analytic expressions for the critical conditions for flame-ball existence, which give results in very good agreement with those obtained by detailed numerical computations.

Relevance: 100.00%

Abstract:

Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments defined by agreements between the ISP and its customers. This procedure, known as Quality of Service (QoS) Management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the most commonly chosen mechanisms). However, the incremental growth of Internet users and services, together with the application of recent Machine Learning techniques, opens up the possibility of going one step forward in the smart management of network traffic. In this paper we first survey current tools and techniques for QoS Management. We then introduce clustering and classification Machine Learning techniques for traffic characterization, along with the concept of Quality of Experience. Finally, with all these components, we present a brand new framework that manages Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
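
As a toy illustration of the clustering step mentioned above, the sketch below groups per-flow features with k-means; the feature set and the flows are made up, and a real deployment would extract far richer features from the traffic records stored in the Big Data platform.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-flow features: mean packet size (bytes), flow duration (s),
# bytes uploaded, bytes downloaded.
flows = np.array([
    [120,   0.4,  2.0e3, 1.5e4],   # short web-like flow
    [1400, 35.0,  4.0e6, 9.0e8],   # long video-like flow
    [200,   1.2,  8.0e3, 6.0e4],
    [1350, 48.0,  6.0e6, 1.2e9],
    [90,    0.2,  1.0e3, 7.0e3],
])

# log-scale and standardise before clustering, then group into traffic classes
X = StandardScaler().fit_transform(np.log1p(flows))
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)   # classes that a QoS policy could then map to priorities
```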

Relevance: 100.00%

Abstract:

In nature, several types of landforms have simple shapes: as they evolve they tend to take on an ideal, simple geometric form such as a cone, an ellipsoid or a paraboloid. Volcanic landforms are possibly the best examples of this 'ideal' geometry, since they develop as regular surface features due to the point-like (circular) or fissure-like (linear) manifestation of volcanic activity. In this paper we present a geomorphometric method for fitting the 'ideal' surface onto the real surface of regular-shaped volcanoes through a number of case studies (Mt. Mayon, Mt. Somma, Mt. Semeru and Mt. Cameroon). Volcanoes with circular as well as elliptical symmetry are addressed. For the best surface fit, we use the minimization library MINUIT, which is made freely available by CERN (the European Organization for Nuclear Research). This library enables us to handle all the available surface data (every point of the digital elevation model) in a one-step, half-automated way regardless of the size of the dataset, and to consider simultaneously all the relevant parameters of the problem, such as the position of the center of the edifice, the apex height and the cone slope, thanks to the high performance of the adopted procedure. Fitting the geometric surface, along with calculating the related error, demonstrates the twofold advantage of the method. Firstly, we can determine quantitatively to what extent a given volcanic landform is regular, i.e. how closely it follows the expected regular shape; deviations from the ideal shape due to degradation (e.g. sector collapse and normal erosion) can be used in erosion-rate calculations. Secondly, if we have a degraded volcanic landform whose geometry is not clear, this method of surface fitting reconstructs the original shape with maximum precision. Obviously, in addition to volcanic landforms, the method is also capable of constraining the shapes of other regular surface features such as aeolian, glacial or periglacial landforms.
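
A compact sketch of the surface-fitting idea is given below: an ideal circular cone is fitted to DEM points by least squares. The original work uses CERN's MINUIT minimiser; scipy.optimize.least_squares is used here as a stand-in, and the synthetic DEM is purely illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_cone(x, y, z):
    """Fit an ideal circular cone  z = h_apex - s * sqrt((x-x0)^2 + (y-y0)^2)
    to DEM points, estimating apex position (x0, y0), apex height h_apex and
    slope s.  The RMS misfit serves as a simple 'regularity' measure."""
    def residuals(p):
        x0, y0, h_apex, s = p
        r = np.hypot(x - x0, y - y0)
        return z - (h_apex - s * r)

    p0 = [x.mean(), y.mean(), z.max(), 0.1]            # simple initial guess
    fit = least_squares(residuals, p0)
    rms = np.sqrt(np.mean(fit.fun ** 2))
    return fit.x, rms

# synthetic DEM of a slightly noisy cone, as a usage example
rng = np.random.default_rng(2)
x = rng.uniform(-5000, 5000, 4000)
y = rng.uniform(-5000, 5000, 4000)
z = 2500 - 0.3 * np.hypot(x - 150, y + 80) + rng.normal(0, 15, x.size)
params, rms = fit_cone(x, y, z)
print(params.round(2), round(rms, 1))
```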