927 results for ONE-STEP PLUS


Relevance: 80.00%

Abstract:

Medicine has evolved to a point where digital images play a central role in disease diagnosis. The vocal apparatus can present a wide variety of problems, and a preliminary step in the characterization of digital images of the larynx is the segmentation of the vocal folds. To date, algorithms have been developed that segment the glottis; this project aims to go one step further by also seeking the segmentation of the vocal folds. To do this, the colour information in the images must be exploited, since it is what distinguishes one region of the image from another. In this project a novel method for segmenting stroboscopic colour images of the larynx is developed, based on region growing from seed pixels. Because of the artefacts inherent in images obtained by stroboscopy, achieving good segmentation results requires a preprocessing stage consisting of the removal of specular highlights and the application of an anisotropic diffusion filter. After preprocessing, the region is grown from previously obtained seeds. The condition for including a pixel in the region is based on a tolerance parameter that is determined adaptively: it starts at a very low value and is increased recursively until a stopping condition is reached, based on the statistical distribution of the pixels already inside the growing region. The final phase of the project comprises the tests needed to verify the behaviour of the designed system, which yield good results for the segmentation of the glottis and encouraging results on which to keep improving the system for the segmentation of the vocal folds.
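The pipeline described above (highlight removal, anisotropic diffusion, then seeded region growing with an adaptive tolerance) can be illustrated with a minimal sketch. The function below only illustrates the general region-growing idea with an adaptive colour-distance tolerance; the seed selection, the exact stopping statistic and the preprocessing parameters used in the project are not given in the abstract and are assumptions here.

```python
import numpy as np
from collections import deque

def grow_region(img, seeds, tol0=2.0, tol_step=2.0, max_tol=40.0):
    """Seeded region growing on a colour image (H x W x 3, float).

    The tolerance on the colour distance to the region mean starts low and
    is raised recursively until the region statistics stabilise (here: the
    relative growth per pass falls below 1%), a stand-in for the statistical
    stopping condition described in the abstract.
    """
    h, w, _ = img.shape
    region = np.zeros((h, w), dtype=bool)
    for y, x in seeds:
        region[y, x] = True

    tol = tol0
    prev_size = region.sum()
    while tol <= max_tol:
        mean = img[region].mean(axis=0)          # current region colour mean
        frontier = deque(map(tuple, np.argwhere(region)))
        while frontier:
            y, x = frontier.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                    if np.linalg.norm(img[ny, nx] - mean) < tol:
                        region[ny, nx] = True
                        frontier.append((ny, nx))
        size = region.sum()
        if size - prev_size < 0.01 * prev_size:  # growth has stalled: stop
            break
        prev_size = size
        tol += tol_step                          # relax the tolerance and repeat
    return region
```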

Relevance: 80.00%

Abstract:

The offshore wind industry has grown exponentially in recent years. Despite this growth, there are still many uncertainties in the field. This paper analyzes some current uncertainties in the offshore wind market, with the aim of going one step further in the development of the sector. To do this, uncertainties already known to compromise the structural design of offshore wind farms are identified and described in the paper; examples are the design of the transition piece and the difficulties of characterizing soil properties. Furthermore, the paper deals with uncertainties not yet identified, owing to the limited experience in the sector. To that end, the current and most widely used offshore wind standards and recommendations for the design of foundations and support structures (IEC 61400-1, 2005; IEC 61400-3, 2009; DNV-OS-J101, Design of Offshore Wind Turbine, 2013; and Rules and Guidelines Germanischer Lloyd, WindEnergie, 2005) have been analyzed. The newly identified uncertainties concern the lifetime and return period, load combinations, the scour phenomenon and its protection, the Morison, Froude-Krylov and diffraction regimes, the wave theory, scale effects and liquefaction. There is thus considerable room for improvement in this field; some of the issues are discussed in this paper, and future experience will make it possible to detect further issues to be solved and improved.
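One of the uncertainties listed above, the link between return period and design lifetime, can be made concrete with the standard exceedance-probability relation (a generic textbook expression, not a formula taken from the cited standards):

```latex
% Probability that the T-year return-period event is exceeded at least once
% during a design lifetime of L years (independent years assumed):
P_{\mathrm{exc}} = 1 - \left(1 - \tfrac{1}{T}\right)^{L},
\qquad \text{e.g. } T = 50~\text{yr},\ L = 25~\text{yr} \;\Rightarrow\; P_{\mathrm{exc}} \approx 0.40 .
```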

Relevance: 80.00%

Abstract:

Short-run forecasting of electricity prices has become necessary for power-generation unit scheduling, since it is the basis of every profit-maximization strategy. In this article a new and very simple method to compute accurate forecasts of electricity prices using mixed models is proposed. The main idea is to develop an efficient tool for one-step-ahead forecasting, combining several prediction methods whose forecasting performance has been checked and compared over a span of several years. Also as a novelty, the 24 hourly time series have been modelled separately, instead of the complete price series, which makes it possible to take advantage of the homogeneity of each of these 24 series. The purpose of the paper is to select the model that leads to the smallest prediction errors and to obtain the appropriate length of the data history to use for forecasting. These results have been obtained by means of a computational experiment. A mixed model that combines the advantages of the two new models discussed is proposed. Some numerical results for the Spanish market are shown, but the new methodology can be applied to other electricity markets as well.
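A minimal sketch of the idea of modelling each of the 24 hourly series separately and combining two predictors into a one-step-ahead forecast is given below; the AR order, the window length and the equal-weight combination are illustrative assumptions, not the models of the paper.

```python
import numpy as np

def ar_one_step(series, p=2):
    """One-step-ahead forecast from an AR(p) model fitted by least squares."""
    x = np.asarray(series, dtype=float)
    # build the lagged design matrix: rows [x_{t-1}, ..., x_{t-p}, 1]
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)]
                        + [np.ones(len(x) - p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    last = np.r_[x[-1:-p - 1:-1], 1.0]           # most recent p values, newest first
    return float(last @ coef)

def forecast_next_day(prices_by_hour, window=90):
    """prices_by_hour: dict hour -> list of past daily prices for that hour."""
    forecast = {}
    for hour, series in prices_by_hour.items():
        hist = series[-window:]
        f_ar = ar_one_step(hist, p=2)                        # predictor 1: AR(2)
        f_naive = hist[-7] if len(hist) >= 7 else hist[-1]   # predictor 2: same hour, last week
        forecast[hour] = 0.5 * f_ar + 0.5 * f_naive          # simple mixed forecast
    return forecast
```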

Relevance: 80.00%

Abstract:

Numerical simulations of axisymmetric reactive jets with one-step Arrhenius kinetics are used to investigate the problem of deflagration initiation in a premixed fuel–air mixture by the sudden discharge of a hot jet of its adiabatic reaction products. For the moderately large values of the jet Reynolds number considered in the computations, chemical reaction is seen to occur initially in the thin mixing layer that separates the hot products from the cold reactants. This mixing layer is wrapped around by the starting vortex, thereby enhancing mixing at the jet head, which is followed by an annular mixing layer that trails behind, connecting the leading vortex with the orifice rim. A successful deflagration is seen to develop for values of the orifice radius larger than a critical value a_c of the order of the flame thickness δL of the planar deflagration. Introduction of appropriate scales provides the dimensionless formulation of the problem, with flame initiation characterised in terms of a critical Damköhler number Δc = (a_c/δL)², whose parametric dependence is investigated. The numerical computations reveal that, while the jet Reynolds number exerts a limited influence on the criticality conditions, the effect of the reactant diffusivity on ignition is much more pronounced, with the value of Δc increasing significantly with increasing Lewis number. The reactant diffusivity also affects the way ignition takes place: for reactants with Lewis numbers of order unity or larger, the flame develops as a result of ignition in the annular mixing layer surrounding the developing jet stem, whereas for highly diffusive reactants with Lewis numbers sufficiently smaller than unity, combustion is initiated in the mixed core formed around the starting vortex. The analysis provides increased understanding of deflagration-initiation processes, including the effects of differential diffusion, and points to the need for further investigations incorporating detailed chemistry models for specific fuel–air mixtures.
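For reference, a schematic form of the one-step Arrhenius kinetics and of the Damköhler number used above is given below (generic notation; the exact nondimensionalisation of the paper is not reproduced):

```latex
% One-step irreversible reaction F -> P with Arrhenius rate (schematic):
\omega = B\,\rho Y_F \exp\!\left(-\frac{E}{R T}\right),
\qquad
% laminar flame thickness built from thermal diffusivity and flame speed:
\delta_L \sim \frac{D_T}{S_L},
\qquad
% Damköhler number based on the orifice radius a; ignition requires
\Delta = \left(\frac{a}{\delta_L}\right)^{2} > \Delta_c .
```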

Relevance: 80.00%

Abstract:

Conditions are identified under which analyses of laminar mixing layers can shed light on aspects of turbulent spray combustion. With this in mind, laminar spray-combustion models are formulated for both non-premixed and partially premixed systems. The laminar mixing layer separating a hot-air stream from a monodisperse spray carried by either an inert gas or air is investigated numerically and analytically in an effort to increase understanding of the ignition process leading to stabilization of high-speed spray combustion. The problem is formulated in an Eulerian framework, with the conservation equations written in the boundary-layer approximation and with a one-step Arrhenius model adopted for the chemistry description. The numerical integrations unveil two different types of ignition behaviour depending on the fuel availability in the reaction kernel, which in turn depends on the rates of droplet vaporization and fuel-vapour diffusion. When sufficient fuel is available near the hot boundary, as occurs when the thermochemical properties of heptane are employed for the fuel in the integrations, combustion is established through a precipitous temperature increase at a well-defined thermal-runaway location, a phenomenon that is amenable to a theoretical analysis based on activation-energy asymptotics, presented here, following earlier ideas developed in describing unsteady gaseous ignition in mixing layers. By way of contrast, when the amount of fuel vapour reaching the hot boundary is small, as is observed in the computations employing the thermochemical properties of methanol, the incipient chemical reaction gives rise to a slowly developing lean deflagration that consumes the available fuel as it propagates across the mixing layer towards the spray. The flame structure that develops downstream from the ignition point depends on the fuel considered and also on the spray carrier gas, with fuel sprays carried by air displaying either a lean deflagration bounding a region of distributed reaction or a distinct double-flame structure with a rich premixed flame on the spray side and a diffusion flame on the air side. Results are calculated for the distributions of mixture fraction and scalar dissipation rate across the mixing layer that reveal complexities that serve to identify differences between spray-flamelet and gaseous-flamelet problems.
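The last sentence refers to the mixture fraction and its scalar dissipation rate; for reference, these are conventionally defined as follows (standard flamelet definitions, not expressions quoted from the paper):

```latex
% Mixture fraction from a conserved scalar (e.g. an element mass fraction beta),
% normalised between the air-side (2) and spray-side (1) boundary values:
Z = \frac{\beta - \beta_2}{\beta_1 - \beta_2},
\qquad
% scalar dissipation rate, with D the diffusivity of Z:
\chi = 2\,D\,\lvert \nabla Z \rvert^{2}.
```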

Relevance: 80.00%

Abstract:

An analysis of the structure of flame balls encountered under microgravity conditions, which are stable owing to radiant energy losses from H₂O, is carried out for fuel-lean hydrogen-air mixtures. It is seen that, because of the radiation losses, in stable flame balls the maximum flame temperature remains close to the crossover temperature, at which the rate of the branching step H + O₂ -> OH + O equals that of the recombination step H + O₂ + M -> HO₂ + M. Under those conditions, all chemical intermediates have very small concentrations and follow the steady-state approximation, while the main species react according to the overall step 2H₂ + O₂ -> 2H₂O, so that a one-step chemical-kinetic description, recently derived by asymptotic analysis for near-limit fuel-lean deflagrations, can be used with excellent accuracy to describe the whole branch of stable flame balls. Besides molecular diffusion in a binary-diffusion approximation, Soret diffusion is included, since it has a non-negligible effect that extends the flammability range. When the large value of the activation energy of the overall reaction is taken into account, the leading-order analysis in the reaction-sheet approximation is seen to determine the flame-ball radius as that required for radiant heat losses to remove enough of the heat released by chemical reaction at the flame to keep the flame temperature at a value close to crossover. The results are relevant to burning velocities at lean equivalence ratios and may influence fire-safety issues associated with hydrogen utilization.
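The crossover condition mentioned above can be written schematically as the balance of the branching and recombination rates (standard definition with k₁ and k₂ the rate constants of the two steps and C_M the third-body concentration; not a formula quoted from the paper):

```latex
% Crossover temperature T_c: branching and recombination proceed at equal rates
k_1(T_c)\,C_{\mathrm{H}}\,C_{\mathrm{O_2}} \;=\; k_2(T_c)\,C_{\mathrm{H}}\,C_{\mathrm{O_2}}\,C_M
\quad\Longleftrightarrow\quad
k_1(T_c) \;=\; k_2(T_c)\,C_M .
```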

Relevance: 80.00%

Abstract:

It has been reasoned that the structures of strongly cellular flames in very lean mixtures approach an array of flame balls, each burning as if it were isolated, thereby indicating a connection between the critical conditions required for the existence of steady flame balls and those necessary for the occurrence of self-sustained premixed combustion. This is the starting assumption of the present study, in which the structures of near-limit steady spherico-symmetrical flame balls are investigated with the objective of providing analytic expressions for critical combustion conditions in ultra-lean hydrogen-oxygen mixtures diluted with N2 and water vapor. If attention were restricted to planar premixed flames, the lean-limit mole fraction of H2 would be found to be roughly ten percent, more than twice the observed flammability limits, thereby emphasizing the relevance of the flame-ball phenomenon. Numerical integrations using detailed models for chemistry and radiation show that a one-step chemical-kinetic reduced mechanism based on steady-state assumptions for all chemical intermediates, together with a simple, optically thin approximation for water-vapor radiation, can be used to compute near-limit fuel-lean flame balls with excellent accuracy. The previously developed one-step reaction rate includes a crossover temperature that determines, in the first approximation, a chemical-kinetic lean limit below which combustion cannot occur, with critical conditions achieved when the diffusion-controlled radiation-free peak temperature, computed with account taken of hydrogen Soret diffusion, equals the crossover temperature. First-order corrections are found by activation-energy asymptotics in a solution that involves a near-field radiation-free zone surrounding a spherical flame sheet, together with a far-field radiation-conduction balance for the temperature profile. Different scalings are found depending on whether or not the surrounding atmosphere contains water vapor, leading to different analytic expressions for the critical conditions for flame-ball existence, which give results in very good agreement with those obtained by detailed numerical computations.
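The "optically thin approximation for water-vapor radiation" mentioned above is usually written as a volumetric loss term of the form below (standard textbook expression, with σ the Stefan-Boltzmann constant and a_P the Planck-mean absorption coefficient of the radiating mixture; not a formula reproduced from the paper):

```latex
% Optically thin radiative loss per unit volume from H2O:
q_{\mathrm{rad}} \;=\; 4\,\sigma\, a_P(T)\,\bigl(T^{4} - T_{\infty}^{4}\bigr).
```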

Relevance: 80.00%

Abstract:

Software development is an inherently complex activity that requires specific problem-comprehension and problem-solving abilities. It is difficult to the point that there is no perfect method for each of the development stages, nor a perfect life-cycle model: each new problem differs from the previous ones in some respect, and techniques that worked on earlier projects can fail on new ones. Given this situation, the current trend is to integrate different methods, tools and techniques, using those best suited to each situation. This trend, however, raises some new problems. The first is the selection of development approaches: if there is no single best approach, how does one choose the most adequate from the array of available options? The second problem concerns the relationship between the analysis and design phases, which carries two major risks. On the one hand, the analysis may be too shallow and too far removed from the design, making the transition between them very difficult. On the other hand, the analysis may be expressed in design terminology, becoming a kind of preliminary design rather than a model of the problem to be solved. Third, there is the analysis dilemma, which can be stated as follows: the developer has to choose the most adequate techniques for each problem, and to make this decision it is necessary to know the most relevant properties of the problem.
This implies that the developer has to analyse the problem, choosing an analysis method before really knowing it. If the chosen technique uses design terminology, then the solution paradigm has been preconditioned, and it is possible that, once the problem is well understood, that paradigm would not be the one chosen. The last problem consists of the pragmatic barriers that limit the applicability of formally based methods and make them difficult to use in everyday practice. To solve these problems, analysis methods are needed that fulfil several goals. The first is a formal basis, which prevents ambiguity and allows the analysis models to be verified. The second is design independence: the analysis should use terminology different from that of the design, to facilitate a real comprehension of the problem under study. Third, the analysis method should allow the developer to study different kinds of problems: algorithmic, decision-support, knowledge-based, and so on. Next come two goals related to pragmatic aspects. First, the method should have a formal but non-mathematical textual notation, so that people without deep mathematical knowledge can understand and validate the resulting models without losing the rigour needed for verification. Second, the method should have a complementary graphical notation that makes the understanding and validation of the relevant parts of the analysis more natural. This Thesis proposes such a method, called SETCM. The elements composing the analysis models have been defined using terminology that is independent of design paradigms, and those terms have been formalised using the basic concepts of set theory: elements, sets and correspondences between sets. In addition, a formal language has been defined that avoids mathematical notation as far as possible, complemented by a graphical notation that can visually represent the most relevant parts of the models. The proposed method was thoroughly tested during the experimentation phase: it was used to perform the analysis of 13 real projects, all of which resulted in products transferred to public or private entities. This experimentation made it possible to evaluate the adequacy of SETCM for the analysis of problems of varying size, in systems whose final design used different or even mixed paradigms. The use of the method by analysts with different levels of expertise (novice, intermediate and expert) was also evaluated, together with the corresponding learning curve, in order to assess whether the method is easy to learn regardless of previous knowledge of other analysis techniques. In addition, the expandability of the analysis models was evaluated, assessing whether the technique is adequate for projects organised in incremental phases, in which the analysis of one phase grows from the models of the preceding one. The final goal was to assess whether SETCM can be adopted within an organisation as the preferred analysis method for software development. The results obtained have been very positive, with SETCM achieving a high degree of fulfilment of the goals stated for the method.
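Since SETCM's concrete notation is not given in the abstract, the following is only a generic illustration of the kind of set-theoretic formalisation it describes (elements, sets and correspondences between sets), using hypothetical names; it is not SETCM itself.

```python
# Illustrative only: a tiny "analysis model" expressed with sets and
# correspondences between sets, in the spirit of the description above.
# All names (CONCEPTS, ACTIONS, INVOLVES) are hypothetical, not SETCM terms.

# Sets of elements identified during analysis of a hypothetical library system
CONCEPTS = {"Reader", "Book", "Loan"}
ACTIONS = {"borrow", "return", "reserve"}

# A correspondence (relation) between sets: which actions involve which concepts
INVOLVES = {
    ("borrow", "Reader"), ("borrow", "Book"), ("borrow", "Loan"),
    ("return", "Loan"),
    ("reserve", "Reader"), ("reserve", "Book"),
}

def image(relation, element):
    """Set of elements related to `element` through `relation`."""
    return {b for (a, b) in relation if a == element}

def is_total(relation, domain):
    """True if every element of `domain` appears on the left of `relation`,
    i.e. the correspondence is total on that set."""
    return all(image(relation, a) for a in domain)

# Simple verifiable properties of the model
assert is_total(INVOLVES, ACTIONS)                 # every action involves some concept
assert {b for _, b in INVOLVES} <= CONCEPTS        # relation targets are known concepts
assert image(INVOLVES, "borrow") == {"Reader", "Book", "Loan"}
```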

Relevance: 80.00%

Abstract:

Since the beginning of the Internet, Internet Service Providers (ISPs) have seen the need to give users' traffic different treatments, defined by agreements between the ISP and its customers. This procedure, known as Quality of Service Management, has not changed much in recent years (DiffServ and Deep Packet Inspection have been the most commonly chosen mechanisms). However, the continuing growth of Internet users and services, together with the application of recent Machine Learning techniques, opens up the possibility of going one step further in the smart management of network traffic. In this paper, we first survey current tools and techniques for QoS Management. We then introduce clustering and classification Machine Learning techniques for traffic characterization, together with the concept of Quality of Experience. Finally, with all these components, we present a brand-new framework that manages Quality of Service in a smart way in a telecom Big Data scenario, for both mobile and fixed communications.
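A minimal sketch of the kind of traffic characterisation mentioned above (unsupervised clustering of flow-level features followed by a supervised classifier) is shown below; the feature names and the choice of k-means plus a random forest are illustrative assumptions, not the techniques selected in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-flow features: [mean packet size, flow duration (s),
# bytes up/down ratio, mean inter-arrival time (ms)]
rng = np.random.default_rng(0)
flows = rng.random((500, 4))

# Unsupervised step: group flows into traffic profiles
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(flows)

# Supervised step: once some flows have been labelled (e.g. video, web,
# VoIP, bulk), train a classifier to assign QoS treatment to new flows.
labels = profiles  # stand-in labels purely for the sketch
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(flows, labels)

new_flow = rng.random((1, 4))
print("predicted traffic class:", clf.predict(new_flow)[0])
```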

Relevance: 80.00%

Abstract:

In nature, several types of landforms have simple shapes: as they evolve they tend to take on an ideal, simple geometric form such as a cone, an ellipsoid or a paraboloid. Volcanic landforms are possibly the best examples of this 'ideal' geometry, since they develop as regular surface features due to the point-like (circular) or fissure-like (linear) manifestation of volcanic activity. In this paper, we present a geomorphometric method of fitting the 'ideal' surface onto the real surface of regular-shaped volcanoes through a number of case studies (Mt. Mayon, Mt. Somma, Mt. Semeru, and Mt. Cameroon). Volcanoes with circular as well as elliptical symmetry are addressed. For the best surface fit, we use the minimization library MINUIT, which is made freely available by CERN (the European Organization for Nuclear Research). This library enables us to handle all the available surface data (every point of the digital elevation model) in a one-step, half-automated way regardless of the size of the dataset, and to consider simultaneously all the relevant parameters of the problem, such as the position of the center of the edifice, the apex height and the cone slope, thanks to the efficiency of the adopted procedure. Fitting the geometric surface, along with calculating the related error, demonstrates the twofold advantage of the method. Firstly, we can determine quantitatively to what extent a given volcanic landform is regular, i.e. how closely it follows the expected regular shape; deviations from the ideal shape due to degradation (e.g. sector collapse and normal erosion) can be used in erosion-rate calculations. Secondly, if we have a degraded volcanic landform whose geometry is not clear, this method of surface fitting reconstructs the original shape with maximum precision. Obviously, in addition to volcanic landforms, this method is also capable of constraining the shapes of other regular surface features such as aeolian, glacial or periglacial landforms.
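The surface-fitting step can be sketched as a least-squares problem over the DEM points. The example below fits a circular cone with scipy rather than MINUIT (the library the authors actually use), so the optimiser, the cone parameterisation and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def cone_residuals(params, x, y, z):
    """Residuals between DEM elevations and an ideal circular cone
    z = h - s * r, with apex above (cx, cy) at height h and slope s."""
    cx, cy, h, s = params
    r = np.hypot(x - cx, y - cy)
    return (h - s * r) - z

# x, y, z: flattened DEM coordinates and elevations (here, synthetic data)
rng = np.random.default_rng(1)
x = rng.uniform(-5000, 5000, 20000)
y = rng.uniform(-5000, 5000, 20000)
z = 2500 - 0.4 * np.hypot(x - 120, y + 80) + rng.normal(0, 15, x.size)

fit = least_squares(cone_residuals, x0=[0.0, 0.0, 2000.0, 0.3], args=(x, y, z))
cx, cy, h, s = fit.x
rmse = np.sqrt(np.mean(fit.fun ** 2))   # misfit: how far the edifice is from 'ideal'
print(f"apex at ({cx:.0f}, {cy:.0f}), height {h:.0f} m, slope {s:.2f}, RMSE {rmse:.1f} m")
```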

Relevance: 80.00%

Abstract:

The human brain is probably one of the most complex systems we face, and thus a timely and fascinating object of study. Characterizing how the brain organizes its activity to carry out complex tasks is highly non-trivial. While early neuroimaging and electrophysiological studies typically aimed at identifying patches of task-specific activation or local time-varying patterns of activity, there is now consensus that task-related brain activity has a temporally multiscale, spatially extended character, with networks of coordinated brain areas being continuously formed and destroyed.
Until recently, though, the emphasis of functional brain activity studies has been on the identity of the particular nodes forming these networks, and on the characterization of connectivity metrics between them, the underlying covert hypothesis being that each node, constituting a coarse-grained representation of a given brain region, provides a unique contribution to the whole. Thus, functional neuroimaging initially integrated the two basic ingredients of early neuropsychology: the localization of cognitive function into specialized brain modules and the role of connection fibres in the integration of those modules. Lately, brain structure and function have started being investigated using Network Science, a statistical-mechanics reading of an old branch of pure mathematics: graph theory. Network Science allows networks to be endowed with a great number of quantitative properties, thus vastly enriching the set of objective descriptors of brain structure and function at neuroscientists' disposal. The link between Network Science and Neuroscience has shed light on how entangled the anatomy of the brain is, and on how cortical activations may synchronize to generate the so-called functional brain networks, the principal object of study of this PhD Thesis. Within this context, complexity appears as the bridge between the topological and dynamical properties of biological systems and, more specifically, the interplay between the organization and dynamics of functional brain networks. This PhD Thesis is, in general terms, a study of how cortical activations can be understood as the output of a network of dynamical systems intimately related to the processes occurring in the brain. To that end, I performed five studies that encompass both the topological and the dynamical aspects of such functional brain networks. The Thesis is accordingly divided into three major parts: Introduction, Results and Discussion. In the first part, comprising Chapters 1, 2 and 3, I give an overview of the main concepts of Network Science related to the analysis of brain imaging. More specifically, Chapter 1 is devoted to introducing the reader to the world of complexity, especially the topological and dynamical complexity of networked systems. Chapter 2 aims to develop the biological, topological and functional fundamentals of the brain when it is seen as a complex network. Next, Chapter 3 summarizes the main objectives and tasks developed in the following Chapters. The second part of the Thesis is, in turn, its core, since it contains the results obtained over the last four years. This part is divided into five Chapters, containing a detailed version of the publications carried out during the Thesis. Chapter 4 is concerned with the topology of functional networks and, more specifically, with the detection and quantification of the leading nodes of the network: the hubs. In Chapter 5 I show that functional brain networks can be viewed not as a single network but as a network-of-networks, whose components have to co-exist in a trade-off situation; I investigate how the brain hemispheres compete to acquire centrality in the network-of-networks and how this interplay is maintained (or not) when failures are introduced into the functional network. Chapter 6 goes one step further by considering functional networks as living systems.
In this Chapter I show how analyzing the evolution of the network topology, instead of treating it as a static system, makes it possible to better characterize functional networks. This fact is especially relevant when trying to find differences between groups performing certain memory tasks, where functional networks show strong fluctuations. In Chapter 7 I define how to create parenclitic networks from brain-imaging datasets. This new kind of network, recently introduced to study abnormalities between control and anomalous groups, had not previously been applied to brain datasets, and in this Chapter I explain how to do so when evaluating the consistency of brain dynamics. To conclude this part of the Thesis, Chapter 8 is devoted to the interplay between the topological properties of the nodes within a network and their dynamical features. As I show, there is an interplay between them which reveals that the position of a node in a network is intimately related to its dynamical properties. Finally, the last part of this PhD Thesis consists of Chapter 9 alone, which contains the conclusions and the future perspectives that may arise from the presented results. In view of all this, I hope that reading this Thesis will give a complementary perspective on one of the most extraordinary complex systems we face: the human brain.
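As an illustration of the hub detection discussed in Chapter 4, the sketch below builds a functional network by thresholding a correlation matrix of regional time series and ranks nodes by centrality; the threshold and the choice of centrality measures are illustrative assumptions, not the criteria used in the Thesis.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
signals = rng.normal(size=(32, 500))            # 32 hypothetical brain regions x 500 samples

corr = np.corrcoef(signals)                     # functional connectivity (correlation) matrix
np.fill_diagonal(corr, 0.0)
adjacency = (np.abs(corr) > 0.1).astype(float)  # keep only the strongest links (assumed threshold)

G = nx.from_numpy_array(adjacency)
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Rank candidate hubs: nodes that are central under both measures
hubs = sorted(G.nodes, key=lambda n: degree[n] + betweenness[n], reverse=True)[:5]
print("candidate hub regions:", hubs)
```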

Relevance: 80.00%

Abstract:

Compostable polymers account for around 30% of the bioplastics used for packaging, and packaging is in turn the main destination of the production of this type of material, which exceeded 1.6 million tonnes in 2013. This thesis deals with the aerobic biodegradation of compostable household packaging waste for two formats and materials: a rigid PLA container (Class I) and two types of PBAT+PLA bags (Classes II and III). Several studies on this subject have been carried out at laboratory scale, but for other kinds of packaging and biopolymers and under controlled composting conditions, with only some limited extrapolation to particular plants. This thesis goes one step further and investigates the real behaviour of compostable plastic packaging in composting practice, in both pile and tunnel technologies and at both pilot and industrial scale, within the procedures and environmental conditions of specific facilities. To this end, following the adopted method, the basic requirements that a compostable package must meet according to the UNE-EN 13432 standard have been analysed, evaluating the biodegradation percentage of the packaging under study, based on the loss of dry weight after the composting process, and the quality of the compost obtained, through physico-chemical and phytotoxicity analyses that check that the studied materials do not introduce toxicity. Regarding biodegradability levels, the results show that Class I packaging composts properly in both technologies and does not require very demanding process conditions to reach 100% biodegradation. Class II packaging composts properly in industrial pile and tunnel, but requires demanding conditions to reach 100% biodegradation, since it is clearly affected by the location of the samples within the composting mass, especially in the case of tunnel technology: while 90% of the samples reach 100% biodegradation in an industrial pile, only 50% do so in tunnel technology at the same scale. Class III packaging composts properly in an industrial tunnel but requires somewhat demanding conditions to reach 100% biodegradation, since it may be affected by the location of the samples within the composting mass; 75% of the samples tested in a tunnel at industrial scale reach 100% biodegradation.
Although this type of packaging was not tested in pile technology, because no samples were available, the biodegradability levels it could have reached are expected to be at least those obtained for Class II packaging, since the two materials are very similar in composition. Finally, it is concluded that pile technology is more suitable for achieving the highest biodegradation levels in PBAT+PLA bag-type packaging. The results also suggest that, when designing composting facilities for the treatment of the selectively collected organic fraction, it would be advisable to recirculate the refuse from the refining of the composted material, in order to increase the probability of exposing this type of material to adequate environmental conditions. If, in addition, the waste is shredded at the entrance to the process, the specific surface in contact with the mass of organic matter would also increase and biodegradation conditions would therefore be more favourable. Regarding the quality of the compost obtained in the tests, the physico-chemical and phytotoxicity analyses reveal that the concentrations of pathogenic microorganisms and heavy metals exceed, in practically all the samples, the maximum levels allowed by the current legislation applicable to fertilisers made from waste. Analysis of the composition of the tested packaging confirms that the source of this contamination is the organic matter used for composting in the tests, which comes from household waste of the so-called "rest fraction". This conclusion confirms the need for selective collection of the organic fraction at source, and existing studies show the improved quality of the waste collected as the so-called "selectively collected organic fraction" (FORM).
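The biodegradation percentage used as the main metric above can be written, in the dry-weight form described, as follows (schematic expression; the full UNE-EN 13432 procedure involves additional requirements not reproduced here):

```latex
% Biodegradation estimated from the loss of dry mass of the packaging sample
\%\,\mathrm{biodegradation}
  = \frac{m_{\mathrm{dry,\,initial}} - m_{\mathrm{dry,\,recovered}}}{m_{\mathrm{dry,\,initial}}}
    \times 100 .
```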

Relevance: 80.00%

Abstract:

The contemporary consumer, immersed in a new communication environment, amplifies his or her expressions, being able to evaluate a brand or product and share that opinion through social networks; that is, consumers express their opinions and desires by talking spontaneously with their peers on online social networks. It is in this environment of participation and interaction (cyberspace) that our object of study is located: online word of mouth, the voice of the contemporary consumer, also known as a personal informative manifestation or a conversation, that is, opinion sharing. Produced by consumers on online social networks, word of mouth is strengthened by the possibilities of interaction that characterize the network society. In this scenario, the objective of this research is to characterize online word of mouth as a new communication flow among consumers, now amplified by new communication technologies and capable of altering the perception of a brand, and to show that brands still use online social networks as a one-way communication environment. Based on three cases selected by convenience (two national and one international), the corpus of analysis was limited to the 5,084 comments posted after the publication of news stories on Portal G1 and on the corresponding fanpages (Facebook) related to the selected cases. Through content analysis of the posts, we identified and categorized the voice of the contemporary consumer, which made it possible to show that organizations/brands still rely on mass-communication culture and do not engage in dialogue with their consumers: they still use online social networks in a one-way fashion and do not pay due attention to the current flow in which the shared opinions of consumers in the network society become evident.

Relevance: 80.00%

Abstract:

Cell proliferation is regulated by the induction of growth promoting genes and the suppression of growth inhibitory genes. Malignant growth can result from the altered balance of expression of these genes in favor of cell proliferation. Induction of the transcription factor, c-Myc, promotes cell proliferation and transformation by activating growth promoting genes, including the ODC and cdc25A genes. We show that c-Myc transcriptionally represses the expression of a growth arrest gene, gas1. A conserved Myc structure, Myc box 2, is required for repression of gas1, and for Myc induction of proliferation and transformation, but not for activation of ODC. Activation of a Myc-estrogen receptor fusion protein by 4-hydroxytamoxifen was sufficient to repress gas1 gene transcription. These findings suggest that transcriptional repression of growth arrest genes, including gas1, is one step in promotion of cell growth by Myc.

Relevance: 80.00%

Abstract:

Polyethylene chains in the amorphous region between two crystalline lamellae M units apart are modeled as random walks with one-step memory on a cubic lattice between two absorbing boundaries. These walks avoid the two preceding steps, though they are not true self-avoiding walks. Systems of difference equations are introduced to calculate the statistics of the restricted random walks. They yield that the fraction of loops is (2M - 2)/(2M + 1), the fraction of ties 3/(2M + 1), the average length of loops 2M - 0.5, the average length of ties (2/3)M² + (2/3)M - 4/3, the average length of walks 3M - 3, the variance of the loop length (16/15)M³ + O(M²), the variance of the tie length (28/45)M⁴ + O(M³), and the variance of the walk length 2M³ + O(M²).
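A small Monte Carlo sketch of the model described above is given below: walks on a cubic lattice between two absorbing planes z = 0 and z = M that avoid the sites visited in the two preceding steps, classified as loops (absorbed on the starting lamella) or ties (absorbed on the opposite one). It is only a numerical illustration to compare against the quoted fractions (2M - 2)/(2M + 1) and 3/(2M + 1); the starting-site and absorption conventions are assumptions, so the simulated fractions need not match those formulas exactly, and the exact difference-equation treatment of the paper is not reproduced.

```python
import random

def simulate(M, n_walks=100_000, seed=0):
    """Monte Carlo estimate of loop/tie fractions for walks on a cubic
    lattice between absorbing planes z = 0 and z = M.  Each step avoids
    the sites visited in the two preceding steps (assumed reading of the
    restriction quoted above); boundary conventions are assumptions.
    """
    rng = random.Random(seed)
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    loops = ties = 0
    for _ in range(n_walks):
        prev2, prev1 = None, (0, 0, 0)        # walk leaves the lamella at z = 0
        pos = (0, 0, 1)
        while 0 < pos[2] < M:
            choices = []
            for dx, dy, dz in moves:
                nxt = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
                if nxt != prev1 and nxt != prev2:
                    choices.append(nxt)
            prev2, prev1, pos = prev1, pos, rng.choice(choices)
        if pos[2] <= 0:
            loops += 1                        # returned to the starting lamella: loop
        else:
            ties += 1                         # reached the opposite lamella: tie
    return loops / n_walks, ties / n_walks

# Example: compare with the quoted fractions for M = 10,
# (2M-2)/(2M+1) = 18/21 ≈ 0.857 for loops and 3/21 ≈ 0.143 for ties.
if __name__ == "__main__":
    print(simulate(10, n_walks=20_000))
```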