899 results for One Step dentin bonding system


Relevance: 100.00%

Abstract:

The great developments that have occurred in recent years in the finite element method and its applications have kept other options for computation out of sight. The boundary integral equation method now appears as a valid alternative and, in certain cases, offers significant advantages. This method deals only with the boundary of the domain, while the F.E.M. analyses the whole domain, which has the following advantages: the dimension of the problem to be studied is reduced by one, simplifying the system of equations and the preparation of input data, and infinite domains can be analysed without discretization errors. These simplifications have the drawbacks of having to solve a full, non-symmetric matrix, and some difficulties arise in the imposition of boundary conditions when complicated variations of the function over the boundary are assumed. In this paper a practical treatment of these problems, in particular the imposition of boundary conditions, is carried out using the computer program described below. Program SERBA solves general elastostatic problems in two-dimensional continua using the boundary integral equation method. The boundary of the domain is discretized into line elements over which the functions are assumed to vary linearly. Data (stresses and/or displacements) are introduced in the local co-ordinate system (element co-ordinates). The resulting stresses are obtained in local co-ordinates and the displacements in a general system. The program has been written in Fortran ASCII and implemented on a Univac 1108 computer; for 100 elements the core requirement is about 40 Kwords. A Fortran IV version (3 segments), implemented on a Hewlett-Packard 21MX computer and using 15 Kwords, is also available.
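For reference, the discretization described above is usually built on the standard boundary integral (Somigliana) identity of two-dimensional elastostatics; the form below is the textbook version, not quoted from the paper:

\[ c_{ij}(\xi)\,u_j(\xi) + \int_{\Gamma} T_{ij}(\xi,x)\,u_j(x)\,d\Gamma(x) = \int_{\Gamma} U_{ij}(\xi,x)\,t_j(x)\,d\Gamma(x), \]

where U_ij and T_ij are the Kelvin displacement and traction fundamental solutions, c_ij depends on the local boundary geometry (c_ij = δ_ij/2 on a smooth boundary), and, with the linear elements mentioned in the abstract, the displacements u_j and tractions t_j are interpolated linearly between element end nodes, which leads to the full, non-symmetric system of equations noted above.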

Relevance: 100.00%

Abstract:

A one-step extraction procedure and a leaching column experiment were performed to assess the effects of citric and tartaric acids on Cu and Zn mobilization in naturally contaminated mine soils, with a view to facilitating assisted phytoextraction. Speciation modeling of the soil solution and metal fractionation of the soils were performed to elucidate the chemical processes that affected metal desorption by the organic acids. Different extracting solutions were prepared, all containing 0.01 M KNO3 and different concentrations of organic acids: a control without organic acids, 0.5 mM citric, 0.5 mM tartaric, 10 mM citric, 10 mM tartaric, and 5 mM citric + 5 mM tartaric. The results of the extraction procedure showed that higher concentrations of organic acids increased metal desorption and that citric acid was more effective than tartaric acid at facilitating it. Metal desorption was mainly influenced by the decreasing pH and the dissolution of Fe and Mn oxides, not by the formation of soluble metal–organic complexes as predicted by the speciation modeling. The results of the column study showed that low concentrations of organic acids did not significantly increase metal mobilization and that the higher doses were also unable to mobilize Zn. However, 5–10 mM citric acid significantly promoted Cu mobilization (from 1 mg kg−1 in the control to 42 mg kg−1 with 10 mM citric acid) and reduced the exchangeable fraction (from 21 to 3 mg kg−1) and the Fe and Mn oxides fraction (from 443 to 277 mg kg−1). Citric acid could therefore efficiently facilitate assisted phytoextraction techniques.

Relevance: 100.00%

Abstract:

The influence of anemometer rotor shape parameters, such as the cups' front area or their center rotation radius, on the anemometer's performance was analyzed. The analysis was based on calibrations performed on two different anemometers (one with a magnet-based output signal and the other with an opto-electronic output signal), tested with 21 different rotors. The results were compared with those of classical analytical models. The results clearly showed a linear dependence of both calibration constants, the slope and the offset, on the cups' center rotation radius, with the influence of the cups' front area also being observed. The analytical model of Kondo et al. proved accurate provided it is fed with precise data on the aerodynamic behavior of the rotor's cups.
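As a reading aid, the linear dependences reported above can be written compactly. Writing f for the rotor's rotation frequency and R_c for the cups' center rotation radius (our notation, not the paper's), the standard cup-anemometer transfer function and the reported behaviour of its constants are

\[ V = A\,f + B, \qquad A(R_c) \approx a_1 R_c + a_0, \qquad B(R_c) \approx b_1 R_c + b_0, \]

where V is the wind speed, A the calibration slope and B the offset; the abstract reports that the cups' front area also influences these constants.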

Relevance: 100.00%

Abstract:

In this talk we address a proposal concerning a methodology for extracting universal, domain-neutral architectural design patterns from the analysis of biological cognition. This will render a set of design principles and design patterns oriented towards the construction of better machines. Bio-inspiration cannot be a one-step process if we are going to build robust, dependable autonomous agents; we must first build solid theories, departing from natural systems, that support our designs of artificial ones.

Relevance: 100.00%

Abstract:

The offshore wind industry has grown exponentially in recent years. Despite this growth, there are still many uncertainties in this field. This paper analyzes some current uncertainties in the offshore wind market, with the aim of going one step further in the development of this sector. To this end, some already identified uncertainties affecting offshore wind farm structural design are described in the paper; examples are the design of the transition piece and the difficulties in characterizing soil properties. Furthermore, the paper deals with other uncertainties not yet identified because of the limited experience in the sector. To do so, the current and most widely used offshore wind standards and recommendations related to the design of foundations and support structures (IEC 61400-1, 2005; IEC 61400-3, 2009; DNV-OS-J101, Design of Offshore Wind Turbine Structures, 2013; and the Germanischer Lloyd WindEnergie Rules and Guidelines, 2005) have been analyzed. These newly identified uncertainties relate to the lifetime and return period, load combinations, the scour phenomenon and its protection, the Morison, Froude-Krylov and diffraction regimes, the wave theory, scale differences and liquefaction. In fact, there are many improvements still to be made in this field. Some of them are mentioned in this paper, but future experience in the matter will make it possible to detect further issues to be solved and improved.
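For context on the regimes mentioned above, the classical Morison formulation they refer to gives the in-line force per unit length on a slender cylindrical member of diameter D as (standard textbook form, not reproduced from the paper)

\[ F = \tfrac{1}{2}\,\rho\,C_D\,D\,u\,\lvert u \rvert + \rho\,C_M\,\frac{\pi D^2}{4}\,\dot{u}, \]

with u the water-particle velocity and C_D, C_M empirical drag and inertia coefficients; when the member diameter is no longer small compared with the wavelength, diffraction effects dominate and this formulation ceases to apply, which is part of the design uncertainty discussed in the paper.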

Relevance: 100.00%

Abstract:

Short-run forecasting of electricity prices has become necessary for power generation unit scheduling, since it is the basis of every profit maximization strategy. In this article a new and very simple method for computing accurate electricity price forecasts using mixed models is proposed. The main idea is to develop an efficient tool for one-step-ahead forecasting, combining several prediction methods whose forecasting performance has been checked and compared over a span of several years. As a further novelty, the 24 hourly time series are modelled separately, instead of the complete price series, which makes it possible to exploit the homogeneity of each of these 24 series. The purpose of this paper is to select the model that leads to the smallest prediction errors and to obtain the appropriate length of the historical window to use for forecasting. These results have been obtained by means of a computational experiment. A mixed model that combines the advantages of the two new models discussed is proposed. Some numerical results for the Spanish market are shown, but the new methodology can be applied to other electricity markets as well.
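A minimal sketch of the per-hour modelling idea described above; the autoregressive form, the lag orders and the equal-weight combination are illustrative assumptions, not the models actually compared in the paper:

    import numpy as np

    def fit_ar(series, p=7):
        """Ordinary least-squares AR(p) fit; returns intercept plus lag coefficients."""
        X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
        y = series[p:]
        A = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def forecast_one_step(series, coef):
        p = len(coef) - 1
        return coef[0] + coef[1:] @ series[-p:]

    # prices: array of shape (n_days, 24), one column per hour of the day
    def forecast_next_day(prices, p=7):
        preds = []
        for h in range(24):                     # model each hourly series separately
            hourly = prices[:, h]
            coef = fit_ar(hourly, p)
            preds.append(forecast_one_step(hourly, coef))
        return np.array(preds)

    # "Mixed" forecast: equal-weight average of two component forecasters
    # (here the same AR with two lag orders, purely for illustration).
    def mixed_forecast(prices):
        return 0.5 * forecast_next_day(prices, p=7) + 0.5 * forecast_next_day(prices, p=14)

In practice each component model would be selected, and the combination weights tuned, on the multi-year comparison the abstract refers to.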

Relevance: 100.00%

Abstract:

Numerical simulations of axisymmetric reactive jets with one-step Arrhenius kinetics are used to investigate the problem of deflagration initiation in a premixed fuel–air mixture by the sudden discharge of a hot jet of its adiabatic reaction products. For the moderately large values of the jet Reynolds number considered in the computations, chemical reaction is seen to occur initially in the thin mixing layer that separates the hot products from the cold reactants. This mixing layer is wrapped around by the starting vortex, thereby enhancing mixing at the jet head, and is followed by an annular mixing layer that trails behind, connecting the leading vortex with the orifice rim. A successful deflagration is seen to develop for values of the orifice radius larger than a critical value a_c of the order of the thickness δ_L of the planar deflagration. Introduction of appropriate scales provides the dimensionless formulation of the problem, with flame initiation characterised in terms of a critical Damköhler number Δ_c = (a_c/δ_L)², whose parametric dependence is investigated. The numerical computations reveal that, while the jet Reynolds number exerts a limited influence on the criticality conditions, the effect of the reactant diffusivity on ignition is much more pronounced, with the value of Δ_c increasing significantly with increasing Lewis number. The reactant diffusivity also affects the way ignition takes place: for reactants with Lewis numbers of order unity or larger the flame develops as a result of ignition in the annular mixing layer surrounding the developing jet stem, whereas for highly diffusive reactants with Lewis numbers sufficiently smaller than unity combustion is initiated in the mixed core formed around the starting vortex. The analysis provides increased understanding of deflagration-initiation processes, including the effects of differential diffusion, and points to the need for further investigations incorporating detailed chemistry models for specific fuel–air mixtures.
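For orientation, a one-step Arrhenius model of the kind used above is generically of the form (our notation; the paper's exact non-dimensionalisation is not reproduced here)

\[ \mathrm{Fuel} \rightarrow \mathrm{Products}, \qquad \omega = B\,\rho\,Y\,\exp\!\left(-\frac{E_a}{R^{0}T}\right), \]

with B a frequency factor, Y the reactant mass fraction and E_a the activation energy; the laminar flame thickness δ_L set by this chemistry is the length entering the critical Damköhler number Δ_c = (a_c/δ_L)² quoted in the abstract.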

Relevance: 100.00%

Abstract:

Conditions are identified under which analyses of laminar mixing layers can shed light on aspects of turbulent spray combustion. With this in mind, laminar spray-combustion models are formulated for both non-premixed and partially premixed systems. The laminar mixing layer separating a hot-air stream from a monodisperse spray carried by either an inert gas or air is investigated numerically and analytically in an effort to increase understanding of the ignition process leading to stabilization of high-speed spray combustion. The problem is formulated in an Eulerian framework, with the conservation equations written in the boundary-layer approximation and with a one-step Arrhenius model adopted for the chemistry description. The numerical integrations reveal two different types of ignition behaviour depending on the fuel availability in the reaction kernel, which in turn depends on the rates of droplet vaporization and fuel-vapour diffusion. When sufficient fuel is available near the hot boundary, as occurs when the thermochemical properties of heptane are employed for the fuel in the integrations, combustion is established through a precipitous temperature increase at a well-defined thermal-runaway location, a phenomenon that is amenable to a theoretical analysis based on activation-energy asymptotics, presented here, following earlier ideas developed in describing unsteady gaseous ignition in mixing layers. By contrast, when the amount of fuel vapour reaching the hot boundary is small, as is observed in the computations employing the thermochemical properties of methanol, the incipient chemical reaction gives rise to a slowly developing lean deflagration that consumes the available fuel as it propagates across the mixing layer towards the spray. The flame structure that develops downstream from the ignition point depends on the fuel considered and also on the spray carrier gas, with fuel sprays carried by air displaying either a lean deflagration bounding a region of distributed reaction or a distinct double-flame structure with a rich premixed flame on the spray side and a diffusion flame on the air side. Distributions of mixture fraction and scalar dissipation rate across the mixing layer are also calculated; they reveal complexities that serve to identify differences between spray-flamelet and gaseous-flamelet problems.
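The flamelet quantities quoted at the end are the usual ones: in standard notation (not taken from the paper), the scalar dissipation rate of the mixture fraction Z is

\[ \chi = 2\,D_Z\,\lvert \nabla Z \rvert^{2}, \]

with D_Z the diffusivity used in the definition of Z; in spray flamelets droplet vaporization acts as a source term for Z, which is one reason the spray-flamelet and gaseous-flamelet problems differ.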

Relevance: 100.00%

Abstract:

An analysis of the structure of flame balls encountered under microgravity conditions, which are stable owing to radiant energy losses from H₂O, is carried out for fuel-lean hydrogen-air mixtures. It is seen that, because of the radiation losses, in stable flame balls the maximum flame temperature remains close to the crossover temperature, at which the rate of the branching step H + O₂ → OH + O equals that of the recombination step H + O₂ + M → HO₂ + M. Under those conditions all chemical intermediates have very small concentrations and follow the steady-state approximation, while the main species react according to the overall step 2H₂ + O₂ → 2H₂O, so that a one-step chemical-kinetic description, recently derived by asymptotic analysis for near-limit fuel-lean deflagrations, can be used with excellent accuracy to describe the whole branch of stable flame balls. Besides molecular diffusion in a binary-diffusion approximation, Soret diffusion is included, since it exerts a non-negligible effect that extends the flammability range. When the large value of the activation energy of the overall reaction is taken into account, the leading-order analysis in the reaction-sheet approximation is seen to determine the flame-ball radius as that required for radiant heat losses to remove enough of the heat released by chemical reaction at the flame to keep the flame temperature at a value close to crossover. The results are relevant to burning velocities at lean equivalence ratios and may influence fire-safety issues associated with hydrogen utilization.
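In the usual notation (ours, not quoted from the paper), the crossover temperature T_c referred to above is defined by the balance of the branching and recombination rates,

\[ k_{\mathrm{H+O_2 \to OH+O}}(T_c) = k_{\mathrm{H+O_2+M \to HO_2+M}}(T_c)\,[\mathrm{M}], \]

so that below T_c chain termination outruns chain branching and the radical pool cannot grow; radiation losses keep stable flame balls close to this temperature, as described in the abstract.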

Relevance: 100.00%

Abstract:

It has been reasoned that the structures of strongly cellular flames in very lean mixtures approach an array of flame balls, each burning as if it were isolated, thereby indicating a connection between the critical conditions required for existence of steady flame balls and those necessary for occurrence of self-sustained premixed combustion. This is the starting assumption of the present study, in which structures of near-limit steady sphericosymmetrical flame balls are investigated with the objective of providing analytic expressions for critical combustion conditions in ultra-lean hydrogen-oxygen mixtures diluted with N₂ and water vapor. If attention were restricted to planar premixed flames, then the lean-limit mole fraction of H₂ would be found to be roughly ten percent, more than twice the observed flammability limits, thereby emphasizing the relevance of the flame-ball phenomena. Numerical integrations using detailed models for chemistry and radiation show that a one-step chemical-kinetic reduced mechanism based on steady-state assumptions for all chemical intermediates, together with a simple, optically thin approximation for water-vapor radiation, can be used to compute near-limit fuel-lean flame balls with excellent accuracy. The previously developed one-step reaction rate includes a crossover temperature that determines in the first approximation a chemical-kinetic lean limit below which combustion cannot occur, with critical conditions achieved when the diffusion-controlled radiation-free peak temperature, computed with account taken of hydrogen Soret diffusion, is equal to the crossover temperature. First-order corrections are found by activation-energy asymptotics in a solution that involves a near-field radiation-free zone surrounding a spherical flame sheet, together with a far-field radiation-conduction balance for the temperature profile. Different scalings are found depending on whether or not the surrounding atmosphere contains water vapor, leading to different analytic expressions for the critical conditions for flame-ball existence, which give results in very good agreement with those obtained by detailed numerical computations.
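The optically thin water-vapor radiation model mentioned above is usually written as a volumetric loss of the form (standard expression with the Planck-mean absorption coefficient κ_P of H₂O; not reproduced from the paper)

\[ q_{\mathrm{rad}} = 4\,\kappa_P\,\sigma\,\bigl(T^{4} - T_{\infty}^{4}\bigr), \]

where σ is the Stefan-Boltzmann constant and T_∞ the ambient temperature; this loss is what fixes the far-field radiation-conduction balance referred to in the abstract.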

Relevance: 100.00%

Abstract:

Telepresence combines different sensory modalities, including vision and touch, to produce the feeling of being present in a remote location. The key element needed to successfully implement a telepresence system, and thus to allow telemanipulation of a remote environment, is force feedback. During telemanipulation, mechanical energy is conveyed between the human operator and the remote environment. In general, energy is a property of all physical objects, fundamental to their mutual interactions, in which energy can be transferred among the objects and can change form but cannot be created or destroyed.
In this thesis, this fundamental principle is exploited to derive a novel bilateral control mechanism that allows stable teleoperation systems to be designed for any conceivable communication architecture. The rationale starts from the fact that the mechanical energy injected by the human operator into the system must be conveyed to the remote environment and vice versa. As will be seen, using energy as the control variable allows a more general treatment of the controlled system than the conventional control of specific system variables. Through the Time Delay Power Network (TDPN) concept, the issue of defining the energy flows involved in a teleoperation system is solved independently of the communication architecture. In particular, communication time delays are found to be a source of virtual energy; this effect is observed with delays starting from 1 millisecond. Since this energy is added to the system, the resulting teleoperation can become non-passive and thus unstable. Time Delay Power Networks are found to be carriers of the desired energy exchanged between master and slave, but also generators of virtual energy due to the time delay. Once these networks are identified, the Time Domain Passivity Control approach for TDPNs is proposed as a control mechanism to ensure system passivity and therefore stability. The proposed method is based on the simple fact that the energy intrinsically added by the communication must be transformed into dissipation. The system then approaches the desired one, in which only the energy injected at one end is conveyed to the other. The resulting system presents two benefits: on one hand, stability is guaranteed through passivity, independently of the chosen control architecture and communication channel; on the other, performance is maximized in terms of the faithfulness of the energy transfer. The proposed methods are supported by experimental implementations with different control architectures and communication delays ranging from 2 to 900 milliseconds. The thesis concludes with an experiment that includes a space communication link based on the geostationary satellite ASTRA.
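A minimal sketch of the passivity-observer/passivity-controller idea on which this kind of time-domain passivity control rests; the discrete-time form, the variable names and the simple adjustable damper are illustrative choices, not the exact scheme derived in the thesis:

    # Generic passivity observer (PO) plus passivity controller (PC) for one
    # port of a sampled teleoperation channel. Illustrative sketch only.

    class PassivityController:
        def __init__(self, dt):
            self.dt = dt          # sample period [s]
            self.energy = 0.0     # observed net energy at the port [J]

        def step(self, force, velocity):
            # Passivity observer: accumulate the energy flowing through the port.
            self.energy += force * velocity * self.dt

            # Passivity controller: a negative observer means the port has
            # generated (virtual) energy, e.g. because of the time delay; add
            # just enough damping to dissipate it and restore passivity.
            if self.energy < 0.0 and abs(velocity) > 1e-9:
                alpha = -self.energy / (velocity * velocity * self.dt)  # N*s/m
                force = force + alpha * velocity
                self.energy = 0.0
            return force

As described in the abstract, the thesis applies this kind of energy accounting to the Time Delay Power Networks of the delayed communication channel rather than to a single local port.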

Relevance: 100.00%

Abstract:

Software development is an inherently complex activity that requires specific problem-comprehension and problem-solving abilities. It is so difficult that it can even be said that there is no perfect method for each development stage and no perfect life-cycle model: each new problem differs from the preceding ones in some respect, and techniques that worked in earlier projects can fail in new ones. Given that situation, the current trend is to integrate different methods, tools and techniques, using those best suited to each situation. This trend, however, raises new problems. The first is the selection of development approaches: if no single approach is manifestly the best, how does one choose from the array of available options? The second problem has to do with the relationship between the analysis and design phases, which entails two major risks. On the one hand, the analysis may be too shallow and too far from the design, making the transition between them very difficult. On the other hand, the analysis may be expressed in design terminology, thus becoming a kind of preliminary design rather than a model of the problem to be solved. In third place there is the analysis dilemma, which can be stated as follows: the developer has to choose the most adequate techniques for each problem, and to make this decision it is necessary to know the most relevant properties of the problem; this implies that the developer has to analyse the problem, choosing an analysis technique before really knowing the problem. If the chosen technique uses design terminology, then the solution paradigm has been preconditioned, and it is possible that, once the problem is well known, that paradigm would not be the one chosen. The last problem consists of the pragmatic barriers that limit the applicability of formally based methods, making them difficult to use in everyday practice.
To solve these problems, analysis methods are needed that fulfil several goals. The first is the need for a formal base, which prevents ambiguity and allows the analysis models to be verified. The second goal is design independence: the analysis should use a terminology different from that of design, to enable a real comprehension of the problem under study. In third place, the analysis method should allow the developer to study problems of any kind: algorithmic, decision-support, knowledge-based, and others. Next come two goals related to pragmatic aspects. First, the method should have a formal but non-mathematical textual notation, so that people without deep mathematical knowledge can understand and validate the resulting models without losing the rigour needed for verification. Second, the method should have a complementary graphical notation so that the relevant parts of the analysis can be understood and validated comfortably by clients and users. This thesis proposes such a method, called SETCM. The elements that make up the analysis models have been defined using a terminology that is independent of design paradigms, and those definitions have been formalised using the main concepts of set theory: elements, sets and correspondences between sets. In addition, a formal language has been defined to represent the elements of the analysis models, avoiding mathematical notation as far as possible, complemented by a graphical notation that can visually represent the most relevant parts of the models.
The proposed method was thoroughly tested during the experimentation phase, in which it was applied to 13 case studies, all of them real projects that concluded in products transferred to public or private entities. The experimentation evaluated the adequacy of SETCM for analysing problems of varying size and for systems whose final design used different, and even mixed, paradigms. Its use by analysts with different levels of expertise (novice, intermediate and expert) was also evaluated, analysing the learning curve in every case in order to assess whether the method is easy to learn regardless of previous knowledge of other analysis techniques. In addition, the expandability of the models generated with SETCM was studied, to check whether the method can support projects organised in several phases, in which the analysis of one phase extends the analysis of the previous one. In short, the aim was to assess whether SETCM can be adopted within an organisation as the preferred analysis technique for software development. The results of this experimentation have been very positive, with a high degree of fulfilment of all the goals set when the method was defined.
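Purely as an illustration of the set-theoretic formalisation described above (the names Task and Concept and the correspondence uses are hypothetical examples, not SETCM's actual vocabulary), an analysis model can be written as

\[ M = (E, S, R), \qquad \mathit{Task}, \mathit{Concept} \in S, \quad S \subseteq \mathcal{P}(E), \qquad \mathit{uses} \in R, \quad \mathit{uses} \subseteq \mathit{Task} \times \mathit{Concept}, \]

where E is the set of elements identified in the problem, S a family of sets that group them, and R a family of correspondences between those sets.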

Relevance: 100.00%

Abstract:

Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments defined by agreements between the ISP and its customers. This procedure, known as Quality of Service management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the mechanisms most often chosen). However, the incremental growth of Internet users and services, together with the application of recent Machine Learning techniques, opens up the possibility of going one step further in the smart management of network traffic. In this paper, we first survey current tools and techniques for QoS management. We then introduce clustering and classification Machine Learning techniques for traffic characterization, and the concept of Quality of Experience. Finally, with all these components, we present a brand new framework that manages Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
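A minimal sketch of traffic characterization by clustering, as referred to above; the per-flow features, their values and the number of clusters are illustrative assumptions, not the feature set or algorithms of the paper:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # Hypothetical per-flow features: [mean packet size (bytes), flow duration (s),
    # bytes transferred, packets per second]
    flows = np.array([
        [120,   0.4,   9_000,   45.0],   # e.g. a short web request
        [1400, 300.0,  2.1e9,  800.0],   # e.g. video streaming
        [200,  120.0,  1.5e6,   30.0],   # e.g. an interactive session
        # ... one row per observed flow
    ])

    # Scale the features so no single magnitude dominates the distance metric,
    # then group flows into traffic classes without inspecting payloads.
    X = StandardScaler().fit_transform(flows)
    model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    for flow, label in zip(flows, model.labels_):
        print(f"cluster {label}: {flow}")

A classifier trained on labelled flows, together with Quality of Experience indicators, would complete the kind of framework outlined in the abstract.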

Relevance: 100.00%

Abstract:

In nature, several types of landforms have simple shapes: as they evolve they tend to take on an ideal, simple geometric form such as a cone, an ellipsoid or a paraboloid. Volcanic landforms are possibly the best examples of this 'ideal' geometry, since they develop as regular surface features due to the point-like (circular) or fissure-like (linear) manifestation of volcanic activity. In this paper, we present a geomorphometric method of fitting the 'ideal' surface onto the real surface of regular-shaped volcanoes through a number of case studies (Mt. Mayon, Mt. Somma, Mt. Semeru, and Mt. Cameroon). Volcanoes with circular as well as elliptical symmetry are addressed. For the best surface fit, we use the minimization library MINUIT, made freely available by CERN (the European Organization for Nuclear Research). This library enables us to handle all the available surface data (every point of the digital elevation model) in a one-step, half-automated way regardless of the size of the dataset, and to consider simultaneously all the relevant parameters of the selected problem, such as the position of the center of the edifice, the apex height, and the cone slope, thanks to the high performance of the adopted procedure. Fitting the geometric surface, along with calculating the related error, demonstrates the twofold advantage of the method. Firstly, we can determine quantitatively to what extent a given volcanic landform is regular, i.e. how closely it follows the expected regular shape; deviations from the ideal shape due to degradation (e.g. sector collapse and normal erosion) can be used in erosion-rate calculations. Secondly, if we have a degraded volcanic landform whose geometry is not clear, this method of surface fitting reconstructs the original shape with maximum precision. Obviously, in addition to volcanic landforms, this method is also capable of constraining the shapes of other regular surface features such as aeolian, glacial or periglacial landforms.
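A minimal sketch of the kind of whole-DEM surface fit described above, using a circular cone as the 'ideal' form; SciPy's least-squares routine stands in here for the MINUIT library actually used by the authors, and the parameterization is an illustrative assumption:

    import numpy as np
    from scipy.optimize import least_squares

    def cone(params, x, y):
        """Ideal circular cone: apex at (x0, y0, h), constant flank slope s."""
        x0, y0, h, s = params
        r = np.hypot(x - x0, y - y0)
        return h - s * r

    def residuals(params, x, y, z):
        # One residual per DEM cell: ideal surface minus observed elevation.
        return cone(params, x, y) - z

    # x, y, z: flattened DEM coordinates and elevations (one entry per grid point)
    def fit_cone(x, y, z):
        p0 = [x.mean(), y.mean(), z.max(), 0.1]          # crude initial guess
        fit = least_squares(residuals, p0, args=(x, y, z))
        rmse = np.sqrt(np.mean(fit.fun ** 2))            # misfit of the fitted surface
        return fit.x, rmse

An elliptical or paraboloidal form can be fitted in the same way by swapping the model function, and the residual misfit provides the kind of quantitative regularity measure mentioned in the abstract.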

Relevance: 100.00%

Abstract:

Compostable polymers account for around 30% of the bioplastics used for packaging, this application being, in turn, the main destination of a production that exceeded 1.6 million tonnes in 2013. This thesis deals with the aerobic biodegradation of compostable household packaging waste for two formats and materials: a rigid PLA container (Class I) and two types of PBAT+PLA bags (Classes II and III). Several laboratory-scale studies have addressed this issue, but for other kinds of packaging and biopolymers and under controlled composting conditions, with only limited extrapolation to real plants. This thesis goes one step further and investigates the real behaviour of compostable plastic packaging in composting practice with pile and tunnel technologies, both at pilot and at industrial scale, within the procedures and environmental conditions of specific facilities. Following this method, the basic requirements that a compostable package must meet according to the UNE-EN 13432 standard have been analysed, evaluating the biodegradation percentage of the packaging under study, based on the loss of dry weight after the composting process, and the quality of the compost obtained, by means of physical-chemical and phytotoxicity analyses to verify that the studied materials add no toxicity.
Regarding biodegradability levels, the results show that Class I packaging composts properly in both technologies and does not require very demanding process conditions to reach 100% biodegradation. Class II packaging composts properly in industrial-scale pile and tunnel but requires demanding conditions to reach 100% biodegradation, since it is clearly affected by the location of the samples within the composting mass, especially in the case of tunnel technology: while 90% of the samples reach 100% biodegradation in the industrial-scale pile, only 50% do so in the tunnel at the same scale. Class III packaging composts properly in the industrial tunnel but requires somewhat demanding conditions to reach 100% biodegradation, since it may be affected by the location of the samples in the composting mass; 75% of the samples tested in the industrial-scale tunnel reach 100% biodegradation. Although this type of packaging was not tested with pile technology, because no samples were available, the biodegradability results it could have reached would presumably have been at least those obtained for Class II packaging, as the materials are very similar in composition. Finally, it is concluded that pile technology is more suitable for achieving higher biodegradation levels in PBAT+PLA bag-type packaging. The results also lead to the conclusion that, when designing composting facilities for the treatment of selectively collected organic fraction, it would be advisable to recirculate the refining rejects of the composted material in order to increase the probability of exposing this kind of material to adequate environmental conditions; if, in addition, the waste is shredded at the entrance of the process, the specific surface in contact with the organic mass would increase and the biodegradation conditions would be further favoured.
Regarding the quality of the compost obtained in the tests, the physical-chemical and phytotoxicity analyses reveal that the concentrations of pathogenic microorganisms and heavy metals exceed, in practically all the samples, the maximum levels allowed by the current legislation applicable to fertilizer products made from waste. Analysis of the composition of the tested packaging confirms that the cause of this contamination lies in the organic matter used for composting in the tests, which came from household waste of the so-called "rest fraction". This conclusion confirms the need for selective collection of the organic fraction at source, and existing studies show the improved quality of the waste collected as the so-called "selectively collected organic fraction" (FORM).
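As a reading aid, the dry-weight-loss criterion used above reduces to (our notation, not the thesis's)

\[ B\,(\%) = 100\,\frac{m_{\mathrm{dry,initial}} - m_{\mathrm{dry,final}}}{m_{\mathrm{dry,initial}}}, \]

where the masses are the dry weights of the packaging sample before and after composting; a sample counts as reaching 100% biodegradation when no measurable residue of it is recovered after the process.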