15 results for Three-photon processes

at Universidad Politécnica de Madrid


Relevance:

80.00%

Publisher:

Abstract:

This paper proposes a new method for crop row detection in images of maize fields with high weed pressure. The vision system is designed to be installed onboard a mobile agricultural vehicle, and is therefore subject to sudden turns, vibrations and other undesired movements. The images are captured in perspective and are affected by these undesired effects. The image processing consists of three main processes: image segmentation, double thresholding based on Otsu's method, and crop row detection. Image segmentation is based on the application of a vegetation index, the double thresholding achieves the separation between weeds and crops, and the crop row detection applies least squares linear regression for line fitting. Crop and weed separation becomes effective, and the crop row detection compares favorably against the classical approach based on the Hough transform. Both gain effectiveness and accuracy thanks to the double thresholding, which is the main contribution of the paper.
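As a rough illustration of the three-stage pipeline described above (vegetation index, double Otsu thresholding, least-squares line fitting), the following Python sketch reconstructs the idea. It is not the authors' code, and taking the brighter vegetation pixels as crop via a second Otsu pass is a simplification of the paper's double thresholding.

```python
# Illustrative reconstruction of the pipeline described in the abstract:
# 1) excess-green vegetation index, 2) double Otsu thresholding to split
# soil / weeds / crop, 3) one least-squares line fit per crop row band.
import numpy as np
from skimage.filters import threshold_otsu

def segment_and_fit_rows(rgb, n_rows=4):
    """rgb: float image in [0, 1], shape (H, W, 3). Returns fitted lines."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                  # excess-green vegetation index
    t1 = threshold_otsu(exg)             # first threshold: vegetation vs soil
    veg = exg[exg > t1]
    t2 = threshold_otsu(veg)             # second threshold: crop vs weeds
    crop = exg > t2                      # simplification: brighter pixels = crop
    # Fit one least-squares line per vertical image band (one per expected row).
    lines = []
    bands = np.array_split(np.arange(rgb.shape[1]), n_rows)
    for cols in bands:
        ys, xs = np.nonzero(crop[:, cols])
        if len(xs) > 1:
            slope, intercept = np.polyfit(ys, xs + cols[0], 1)  # x = f(y)
            lines.append((slope, intercept))
    return lines
```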

Relevance:

80.00%

Publisher:

Abstract:

In this doctoral work, the in vitro bioaccessibility of As, Co, Cr, Cu, Ni, Pb and Zn (in the fraction below 100 μm) was evaluated in 32 surface soil samples collected from 16 playground areas in the city of Madrid, using three different extraction procedures. Two of the extraction methods (SBET and HCl extraction at pH 1.5) reproduce only the gastric phase, while the third (RIVM) simulates a complete gastric + intestinal digestion sequence. Bioaccessibility (%) was defined against the pseudo-total concentrations of the trace elements studied (aqua regia), using a linear regression model forced through the origin. The two gastric methods gave similar results, consistent with data from other studies, with bioaccessibility following the order As ≈ Cu ≈ Pb ≈ Zn > Co > Ni > Cr and ranging from 63% down to 7%. The RIVM (gastric + intestinal) procedure yielded values of a similar order to those obtained in the gastric phase for As, Cu, Pb and Zn (very similar for Zn, somewhat higher for Cu and Pb, and somewhat lower for As). By contrast, the bioaccessibility of Co and Cu is in this case much higher than that resulting from the gastric-phase assays; the order of bioaccessibility is Co ≈ Cu ≈ Pb > As ≈ Cr ≈ Zn, ranging between 42 and 69%. The results of the three procedures correlate very strongly for the trace elements As, Cu, Pb and Zn, and strong correlations exist among almost all the elements studied for the two gastric-phase methods, but not for the complete digestion assay. Selected physico-chemical properties of the sampled soils, as well as their content of some major elements, were studied in order to assess their influence on bioaccessibility. The bioaccessibility (%) of several elements was observed to depend on certain soil properties, namely Fe, Ca (carbonate) and P content, organic matter and pH. The Fe content turns out to be highly relevant: in all cases it correlates negatively with the bioaccessibility percentage, and this effect is most significant in the gastric-phase extractions. It is suggested that, given the low solubility of iron oxides in the extractant media used, anionic complexes (metal-chloride) are strongly adsorbed onto the surface of these Fe oxides, with a consequent decrease in bioaccessibility. The calcium (carbonate) content appears highly relevant to the bioaccessibility of As: As appears bound to the soil Ca, so its solubilisation in acid media would increase As bioaccessibility, while its precipitation on passing to basic pH (intestinal phase) would reduce it. Organic matter proved relevant only to the pseudo-total content of Zn, although it is significant for the gastric-phase bioaccessibility of many elements. The pH of the soils studied only appears highly significant in the case of Cr; the highly homogeneous pH values of these soils likely prevent this parameter from being significant for more elements, unlike what has been observed in previous studies.
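The zero-intercept regression used to define bioaccessibility is simple enough to show directly. A minimal sketch with hypothetical Pb data (illustrative values, not the thesis measurements):

```python
# Zero-intercept linear regression: bioaccessible = slope * pseudo_total.
# The slope estimates the mean bioaccessibility fraction of an element.
import numpy as np

def bioaccessibility_slope(pseudo_total, bioaccessible):
    x = np.asarray(pseudo_total, dtype=float)
    y = np.asarray(bioaccessible, dtype=float)
    # Least squares through the origin: slope = sum(x*y) / sum(x*x).
    return np.dot(x, y) / np.dot(x, x)

# Hypothetical Pb concentrations (mg/kg) in 5 soil samples:
pb_total = [120.0, 85.0, 240.0, 60.0, 150.0]
pb_gastric = [70.0, 50.0, 140.0, 38.0, 90.0]
print(f"Pb bioaccessibility ~ {100 * bioaccessibility_slope(pb_total, pb_gastric):.0f}%")
```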

Relevance:

80.00%

Publisher:

Abstract:

Comparatively few social housing estates in Europe are legally protected as cultural heritage. However, when a physical intervention (renewal or refurbishment) is announced, the ensuing negotiation often defies explanation unless one assumes that the estate is regarded as cultural heritage by some of the players involved. One should therefore speak not of a consolidated heritage repertoire, but of a process of patrimonialisation, understood not only as the change of these estates from the status of non-heritage to that of cultural heritage through legal or planning protection, but also as the construction of sufficient social consensus around the possession of certain pre-existing values, and as the social and historical production and reproduction of those values. Against the background of the incipient patrimonialisation of certain Barrios de Promoción Oficial in Madrid, this thesis tackles two main issues: the mechanisms that lead to their social and historical construction as new heritage elements, and the evolution and current status of the set of potentially patrimonialisable estates, the Barrios de Promoción Oficial of Madrid. It proposes a theoretical model of the process of patrimonialisation of social housing estates, based on the identification of discourses, values, subjects and phases, as well as on their relationship with the original physical and social features of these estates and their evolution up to the present. To that end, the physical and social evolution of the Barrios de Promoción Oficial and the changes in their heritage protection are reconstructed, and three particular processes of patrimonialisation are analysed: the role of the Colonia del Tercio y Terol in the citizens' movement for the preservation of the Colonias Históricas in Madrid (1973-1979), the incorporation of conservation criteria in the comprehensive refurbishment of the Poblado Dirigido de Caño Roto (1991-2004), and heritage protection as an obstacle to the urban renewal of the U.V.A. de Hortaleza (2004-2015).

Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to clarify the role played by the most commonly used viscous terms in simulating viscous laminar flows with the weakly compressible smoothed particle hydrodynamics (WCSPH) approach. To this end, the viscous terms of Takeda et al. (Prog. Theor. Phys. 1994; 92(5):939–960), Morris et al. (J. Comput. Phys. 1997; 136:214–226) and Monaghan–Cleary–Gingold (Appl. Math. Model. 1998; 22(12):981–993; Monthly Notices of the Royal Astronomical Society 2005; 365:199–213) are analysed, discussing their origins, structures and conservation properties. Their performance is monitored on canonical flows whose viscosity-related phenomena are well understood and in which boundary effects are not relevant. Following a validation process on three previously published examples, two vortex flows of engineering importance have been studied: first, the evolution of an isolated Lamb–Oseen vortex, where viscous effects are dominant, and second, a pair of co-rotating vortices, in which viscous effects are combined with transport phenomena. The SPH solutions have been compared with finite-element numerical solutions. The behaviour of the SPH viscosity models in capturing the viscosity-related effects of these canonical flows is found to be adequate.
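Of the three formulations compared, the Morris et al. (1997) term is perhaps the easiest to sketch. The following Python fragment is an illustrative reconstruction under stated assumptions (precomputed kernel gradients, a dense O(N²) neighbour loop), not the paper's implementation:

```python
# Sketch of the Morris et al. (1997) SPH viscous acceleration for particle i:
#   sum_j m_j (mu_i + mu_j) (r_ij . grad_i W_ij)
#         / (rho_i rho_j (|r_ij|^2 + 0.01 h^2)) * (v_i - v_j)
import numpy as np

def morris_viscous_acceleration(r, v, m, rho, mu, h, grad_w):
    """r, v: (N, dim) positions/velocities; m, rho, mu: (N,) arrays;
    grad_w[i, j]: (N, N, dim) precomputed kernel gradients (assumed given)."""
    n, dim = r.shape
    acc = np.zeros((n, dim))
    eps = 0.01 * h * h                       # regularisation of the denominator
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = r[i] - r[j]
            vij = v[i] - v[j]
            coef = (m[j] * (mu[i] + mu[j]) * np.dot(rij, grad_w[i, j])
                    / (rho[i] * rho[j] * (np.dot(rij, rij) + eps)))
            acc[i] += coef * vij
    return acc
```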

Relevance:

30.00%

Publisher:

Abstract:

Oxygen 1s excitation and ionization processes in the CO2 molecule have been studied with dispersed and non-dispersed fluorescence spectroscopy as well as with the vacuum ultraviolet (VUV) photon–photoion coincidence technique. The intensity of the neutral O emission line at 845 nm shows particular sensitivity to core-to-Rydberg excitations and core–valence double excitations, while shape resonances are suppressed. In contrast, the partial fluorescence yield in the wavelength window 300–650 nm and the excitation functions of selected O+ and C+ emission lines in the wavelength range 400–500 nm display all of the absorption features. The relative intensity of ionic emission in the visible range increases towards higher photon energies, which is attributed to O 1s shake-off photoionization. VUV photon–photoion coincidence spectra reveal major contributions from the C+ and O+ ions and a minor contribution from C2+. No conclusive changes in the intensity ratios among the different ions are observed above the O 1s threshold. The line shape of the VUV–O+ coincidence peak in the mass spectrum carries some information on the initial core excitation.

Relevance:

30.00%

Publisher:

Abstract:

A Visual Basic application for Microsoft® Excel 2007 has been developed as a helpful tool to perform mass, energy, exergy and thermoeconomic (MHBT) calculations during the systematic analysis of energy processes simulated with Aspen Plus®. The application reads an Excel workbook containing three sheets with the matter, work and heat stream results of an Aspen Plus® simulation. The information required from the Aspen Plus® simulation and the algorithm/calculations of the application are described and applied to an Air Separation Unit (ASU). This application helps the designer when MHBT analyses are performed, as it increases knowledge of the process simulated with Aspen Plus®. It is a valuable tool not only because of the calculations performed, but also because it creates a new Excel workbook where the results and the formulae written in the cells are fully visible and editable. The application is freely accessible and unprotected, so changes and improvements can be made.
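The original tool is a VBA macro, but the workflow it automates can be sketched in Python for illustration. The sheet names, column names and the simplified exergy expressions below are assumptions, not those of the actual workbook:

```python
# Illustrative Python analogue of the described workflow: read three stream
# sheets from a simulation workbook and derive simple exergy quantities.
# Sheet/column names and formulas are hypothetical simplifications.
import pandas as pd

T0 = 298.15  # assumed dead-state temperature (K) for exergy calculations

sheets = pd.read_excel("asu_simulation.xlsx",
                       sheet_name=["Matter", "Work", "Heat"])

matter = sheets["Matter"]  # assumed columns: Stream, H_kW, S_kW_K
# Simplified stream exergy relative to the dead state: Ex = H - T0 * S.
matter["Ex_kW"] = matter["H_kW"] - T0 * matter["S_kW_K"]

work_total = sheets["Work"]["W_kW"].sum()
heat = sheets["Heat"]  # assumed columns: Q_kW, T_K
# Exergy of heat streams via the Carnot factor (1 - T0/T):
heat_exergy = (heat["Q_kW"] * (1 - T0 / heat["T_K"])).sum()

print(matter[["Stream", "Ex_kW"]])
print(f"Total work: {work_total:.1f} kW, heat exergy: {heat_exergy:.1f} kW")
```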

Relevance:

30.00%

Publisher:

Abstract:

The Cu2ZnSnS4 (CZTS) semiconductor is a potential photovoltaic material due to its optoelectronic properties, which could be improved by the insertion of intermediate states into the energy bandgap. We explore this possibility using Cr as an impurity. We carried out first-principles calculations within density functional theory, analyzing three substitutions: Cu, Sn or Zn replaced by Cr. In all cases, Cr introduces a deep band inside the host energy bandgap; depending on the substitution, this band is full, empty, or partially filled. The absorption coefficients in the independent-particle approximation have also been obtained. Comparison between the absorption coefficients of the pure and doped host shows that this deep band opens additional photon absorption channels and could therefore increase solar-light absorption with respect to the host.

Relevance:

30.00%

Publisher:

Abstract:

Coarse particles, of aerodynamic diameter between 2.5 and 10 μm (PMc), are produced by a range of natural (windblown dust and sea spray) and anthropogenic processes (non-exhaust vehicle emissions; industrial, agricultural, construction and quarrying activities). Although current ambient air quality regulations focus on PM2.5 and PM10, coarse particles are of interest from a public health point of view, as they have been associated with certain mortality and morbidity outcomes. In this paper, an analysis of coarse particle levels in three European capitals (London, Madrid and Athens) is presented and discussed. For all three cities we analysed data from both traffic and urban background monitoring sites. The results showed that coarse particle levels present significant seasonal, weekly and daily variability. Their wind-driven and non-wind-driven resuspension, as well as their roadside increment due to traffic, were estimated. Both the local meteorological conditions and the air mass history, indicating long-range atmospheric transport of particles of natural origin, are significant parameters that influence coarse particle levels in the three cities, especially during episodic events.
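The two derived quantities underlying such an analysis, PMc as the PM10 minus PM2.5 difference and the roadside increment as the traffic-site minus background-site difference, can be sketched as follows (hypothetical column names, not the study's dataset):

```python
# Derive coarse PM (PMc = PM10 - PM2.5) and the roadside (traffic) increment
# relative to urban background, then summarise its temporal variability.
import pandas as pd

df = pd.read_csv("hourly_pm.csv", parse_dates=["time"])
# Assumed columns: time, pm10_traffic, pm25_traffic, pm10_bg, pm25_bg
df["pmc_traffic"] = df["pm10_traffic"] - df["pm25_traffic"]
df["pmc_bg"] = df["pm10_bg"] - df["pm25_bg"]
df["roadside_increment"] = df["pmc_traffic"] - df["pmc_bg"]

# Seasonal, weekly and daily variability of PMc at the traffic site:
print(df.groupby(df["time"].dt.month)["pmc_traffic"].mean())
print(df.groupby(df["time"].dt.dayofweek)["pmc_traffic"].mean())
print(df.groupby(df["time"].dt.hour)["pmc_traffic"].mean())
```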

Relevance:

30.00%

Publisher:

Abstract:

The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. The technique was demonstrated using experimental data obtained on the osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of non-dominated (Pareto-optimal) solutions. Multiple non-linear regression analysis was performed on the experimental data in order to obtain the individual objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The proposed technique can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent and more adequate final compromise solution or food process. It can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
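The first step, computing a non-dominated (Pareto-optimal) set, can be illustrated with a minimal sketch. This shows the dominance filter only, not the authors' Adaptive Random Search or AHP/TM procedures:

```python
# Extract the non-dominated (Pareto-optimal) rows from a set of candidates.
import numpy as np

def pareto_front(objectives):
    """objectives: (n, k) array; all objectives to be minimised.
    Returns a boolean mask of the non-dominated rows."""
    obj = np.asarray(objectives, dtype=float)
    nondominated = np.ones(obj.shape[0], dtype=bool)
    for i in range(obj.shape[0]):
        # j dominates i if j is <= i everywhere and < i somewhere.
        dominated = (np.all(obj <= obj[i], axis=1)
                     & np.any(obj < obj[i], axis=1))
        if dominated.any():
            nondominated[i] = False
    return nondominated

# Hypothetical (-water_loss, solute_gain, colour_diff) triples to minimise:
cand = np.array([[-0.50, 0.10, 3.2], [-0.45, 0.08, 2.9], [-0.50, 0.12, 3.5]])
print(pareto_front(cand))  # [True, True, False]: third point is dominated
```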

Relevance:

30.00%

Publisher:

Abstract:

Soil tomography and morphological functions built over Minkowski functionals were used to describe the impact of two soil management practices on pore structure in a Mediterranean vineyard. Soil structure controls important physical and biological processes in soil-plant-microbial systems. Those processes are dominated by the geometry of the soil pore structure, and a correct model of this geometry is critical for understanding them. Soil tomography has been shown to provide rich three-dimensional digital information on soil pore geometry. Recently, mathematical morphological techniques have been proposed as powerful tools to analyze and quantify the geometrical features of porous media. Minkowski functionals, and morphological functions built over them, provide computationally efficient means to measure four fundamental geometrical features of three-dimensional objects: volume, boundary surface, mean boundary surface curvature, and connectivity. We used thresholding and the dilation and erosion of three-dimensional images to generate morphological functions and to explore the evolution of the Minkowski functionals as the threshold and the degree of dilation and erosion change. We analyzed the three-dimensional geometry of soil pore space with X-ray computed tomography (CT) of intact soil columns from a Spanish Mediterranean vineyard under two different management practices (conventional tillage versus a permanent cover crop of resident vegetation). Our results suggest that morphological functions built over Minkowski functionals are promising tools to characterize soil macropore structure, and that the evolution of morphological features under dilation and erosion is a more informative indicator of structure than a moving threshold for both soil managements studied.
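A minimal sketch of a morphological function, tracking two Minkowski-type measures (volume and Euler characteristic) as a binary pore image is successively dilated, might look as follows. This is an illustration on a toy random medium, not the study's CT pipeline:

```python
# Track Minkowski-type measures of a 3-D binary pore image under dilation.
import numpy as np
from scipy import ndimage
from skimage.measure import euler_number

def morphological_function(pores, steps=5):
    """pores: 3-D boolean array (True = pore voxel).
    Returns (volume, euler) measured after 0..steps dilations."""
    volumes, eulers = [], []
    current = pores.copy()
    for _ in range(steps + 1):
        volumes.append(int(current.sum()))                    # volume (M0)
        eulers.append(euler_number(current, connectivity=3))  # connectivity
        current = ndimage.binary_dilation(current)
    return np.array(volumes), np.array(eulers)

rng = np.random.default_rng(0)
pores = rng.random((40, 40, 40)) < 0.1    # toy porous medium, 10% porosity
vol, eul = morphological_function(pores)
print(vol, eul)
```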

Relevance:

30.00%

Publisher:

Abstract:

Purely data-driven approaches for machine learning present difficulties when data are scarce relative to the complexity of the model or when the model is forced to extrapolate. On the other hand, purely mechanistic approaches need to identify and specify all the interactions in the problem at hand (which may not be feasible) and still leave the issue of how to parameterize the system. In this paper, we present a hybrid approach using Gaussian processes and differential equations to combine data-driven modeling with a physical model of the system. We show how different, physically inspired, kernel functions can be developed through sensible, simple, mechanistic assumptions about the underlying system. The versatility of our approach is illustrated with three case studies from motion capture, computational biology, and geostatistics.
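As a toy illustration of a physically inspired kernel (a simple stand-in for the paper's ODE-derived kernels, not the authors' construction), one can encode a mechanistic assumption, here a known oscillation period, directly into the covariance and let the GP extrapolate beyond the data:

```python
# GP regression with a mechanistically motivated kernel: a known period T
# (e.g., from a harmonic-oscillator model) gives a periodic covariance.
import numpy as np

def periodic_kernel(x1, x2, period=2.0, lengthscale=1.0, variance=1.0):
    d = np.subtract.outer(x1, x2)
    return variance * np.exp(-2 * np.sin(np.pi * d / period) ** 2
                             / lengthscale ** 2)

rng = np.random.default_rng(1)
x_train = np.linspace(0, 4, 12)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(12)
x_test = np.linspace(0, 8, 100)              # extrapolates beyond the data

K = periodic_kernel(x_train, x_train) + 1e-2 * np.eye(12)  # noise variance
K_star = periodic_kernel(x_test, x_train)
mean = K_star @ np.linalg.solve(K, y_train)  # GP posterior mean
print(mean[:5].round(2))
```

Because the period is baked into the kernel, the posterior mean keeps oscillating sensibly well outside the training interval, which is exactly where a purely data-driven kernel would struggle.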

Relevance:

30.00%

Publisher:

Abstract:

The biggest problem when analyzing the brain is that its synaptic connections are extremely complex. Generally, the billions of neurons making up the brain exchange information through two types of highly specialized structures: chemical synapses (the vast majority) and so-called gap junctions (a substrate of one class of electrical synapse). Here we are interested in exploring the three-dimensional spatial distribution of chemical synapses in the cerebral cortex. Recent research has shown that the three-dimensional spatial distribution of synapses in layer III of the neocortex can be modeled by a random sequential adsorption (RSA) point process, i.e., synapses are distributed in space almost randomly, with the only constraint that they cannot overlap. In this study we hypothesize that RSA processes can also explain the distribution of synapses in all cortical layers. We also investigate whether there are differences in the synaptic density and in the spatial distribution of synapses between layers. Using combined focused ion beam milling and scanning electron microscopy (FIB/SEM), we obtained three-dimensional samples from the six layers of the rat somatosensory cortex and identified and reconstructed the synaptic junctions. A total tissue volume of approximately 4500 μm³ and around 4000 synapses from three different animals were analyzed. Different samples, layers and/or animals were aggregated and compared using RSA replicated spatial point processes. The results showed no significant differences in the synaptic distribution across the different rats used in the study. We found that RSA processes described the spatial distribution of synapses in all samples of each layer. We also found that the synaptic distribution in layers II to VI conforms to a common underlying RSA process with different densities per layer. Interestingly, synapses in layer I had a slightly different spatial distribution from the other layers.
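A minimal RSA simulation in three dimensions, uniform random proposals rejected if they fall within a hard-core exclusion distance of an already accepted point, can be sketched as follows (illustrative only, not the study's replicated point-process analysis):

```python
# Random sequential adsorption (RSA) in a unit cube: points are placed
# uniformly at random and rejected if they overlap an accepted point.
import numpy as np

def rsa_points(n_target, box=1.0, r_excl=0.05, max_tries=100_000, seed=0):
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(max_tries):
        p = rng.uniform(0, box, size=3)
        # Hard-core constraint: keep only proposals at least r_excl away.
        if all(np.linalg.norm(p - q) >= r_excl for q in pts):
            pts.append(p)
            if len(pts) == n_target:
                break
    return np.array(pts)

synapses = rsa_points(500)   # toy "synapse" centres with hard-core exclusion
print(len(synapses), "points placed")
```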

Relevance:

30.00%

Publisher:

Abstract:

This doctoral work explores the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms based on pseudo-inverse techniques. The algorithms were designed with their possible implementation on low-complexity, special-purpose processors in mind. The first chapter reviews the techniques for the detection and measurement of gamma radiation used to construct the spectra treated in this work. The basic concepts related to the nature of electromagnetic radiation are re-examined, together with the physical and electronic processes involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, understood as a classification of the number of individual photon detections as a function of the notionally continuous energy associated with each one. A brief description is given of the main radiation-matter interaction phenomena that condition the detection and spectrum-formation processes. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process; the main detector types are therefore examined, with particular attention to semiconductor detectors, which are the most widely used today. Finally, the fundamental electronic subsystems for conditioning and pre-treating the detector signal, traditionally referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem of greatest interest for this work is the multichannel analyzer, which carries out the qualitative treatment of the signal and builds a histogram of radiation intensity over the energy range to which the detector is sensitive. The resulting N-dimensional vector is what is generally known as the radiation spectrum. The different radionuclides contributing to a non-pure radiation source leave their fingerprint in this spectrum.

The second chapter gives an exhaustive review of the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to quantify their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the constraints and restrictions of the problem: the ability to deal with low-resolution spectra, the absence of a human operator (unsupervised operation), and the possibility of being supported by low-complexity algorithms that can be implemented on dedicated VLSI processors. The analysis problem is formally stated in the third chapter along these lines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on this kind of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms for constructing the operator are proposed, whose arithmetic characteristics make them especially suitable for implementation on VLSI processors. Their adaptive nature gives the associative memory great flexibility for incorporating new information progressively.

The fourth chapter addresses an additional, highly complex problem: the spectral deformations introduced by instrumental drifts in the detector and in the pre-conditioning electronics. These deformations invalidate the linear regression model used to describe the problem spectrum. A model is therefore derived that includes these deformations as additional contributions to the composite spectrum, leading to a simple extension of the associative memory that tolerates drifts in the problem mixture and performs a robust analysis of contributions. The extension method is based on a small-perturbation hypothesis. Laboratory practice shows, however, that instrumental drifts can occasionally cause severe spectral distortions that this model cannot handle. The fifth chapter therefore addresses measurements affected by strong drifts from the standpoint of non-linear optimization theory. This reformulation leads to a recursive algorithm inspired by the Gauss-Newton method, which introduces the concept of a feedback linear memory: an operator with a markedly improved capability for decomposing mixtures with strong drift, without the excessive computational load of classical non-linear optimization algorithms. The work closes with a discussion of the results obtained at the three main levels of study addressed in chapters three, four and five, together with the main conclusions derived from the study and an outline of possible lines of future work.
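The core linear model, a measured spectrum expressed as a mixture of known radionuclide reference spectra and solved via the Moore-Penrose pseudo-inverse, can be sketched as follows. This illustrates the principle only, not the adaptive associative-memory or feedback-memory algorithms of the thesis; the Gaussian reference spectra are toy data:

```python
# Decompose a measured spectrum as a linear mixture of known radionuclide
# reference spectra using the Moore-Penrose pseudo-inverse.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_nuclides = 256, 3

# Columns of A: reference spectra of the pure radionuclides (toy Gaussians).
channels = np.arange(n_channels)
peaks = [60, 120, 200]
A = np.stack([np.exp(-0.5 * ((channels - p) / 8.0) ** 2) for p in peaks],
             axis=1)

true_activities = np.array([5.0, 2.0, 1.0])
measured = A @ true_activities + 0.05 * rng.standard_normal(n_channels)

# Pseudo-inverse solution of the linear model: measured = A @ activities.
activities = np.linalg.pinv(A) @ measured
print(np.round(activities, 2))   # close to [5.0, 2.0, 1.0]
```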

Relevance:

30.00%

Publisher:

Abstract:

In the past three decades, the dynamics of global economic restructuring have radically redefined the role of cities. The transition from Keynesianism to neoliberalism has caused a shift in the urban policies of local governments, which have progressively abandoned the tasks of regulation and redistribution to focus on promoting economic growth and competitiveness. In this context, many critics have pointed out that urban regeneration has become a vehicle for extracting value from the city and is causing the expulsion of the most vulnerable citizens. However, the regeneration of consolidated areas is also an opportunity to improve the living conditions of the resident population, and is a necessary policy to control the expansion of the city and reduce the need for transportation, thus promoting more sustainable cities. Starting from the hypothesis that the governance of urban regeneration processes is key to the final outcome of the plans and determines the resulting city model, the goal of this research is to verify whether urban regeneration is necessarily a value-extraction mechanism or whether it can improve the quality of life in cities through citizen participation. It proposes a framework for analysing decision-making in urban regeneration processes and its impact on the results of the plans, taking as a case study the city of Boston, which since the 1990s has been trying to become a "city of neighborhoods", encouraging citizen participation while positioning itself in the global economic scene. The analysis focuses on two regeneration plans initiated in the late 1990s. On the one hand, the Jackson Square case allows us to understand the role of civil society and the third sector in the regeneration of disadvantaged neighborhoods, in a clear example of bottom-up planning. On the other, the conversion of the South Boston waterfront to build the Innovation District takes us to the large regeneration operations undertaken for economic stimulus, traditionally linked to downtowns and led by government and economic elites (the local "growth machine") through more technocratic, top-down processes. The research is based on a qualitative analysis of the decision-making processes and of the relationships between the agents involved, as well as on an evaluation of the implementation of those decisions and their influence on the resulting urban model.

The analysis of the cases suggests that the governance of urban regeneration processes decisively influences the outcome of the interventions; however, community engagement in decision-making is not enough for the result of urban regeneration to counteract the effects of neoliberalization, especially if it is limited to the planning phase and does not extend to implementation, and if it is not supported by a broader political mobilization ensuring redistributive public action. Moreover, urban regeneration processes entail a redefinition of the urban model, since the choice of intervention areas has consequences for the territorial balance of the city. The results of this study have implications for the discipline of urban planning. On the one hand, they confirm the validity of the "negotiated planning" paradigm, albeit under a discourse of public leadership and without direct appeal to the leading role of the private sector. On the other hand, collaborative planning in a context of "responsibilization" of community-based organizations can deactivate the political power of citizen participation and serve as a "buffer" towards local government. Furthermore, the replacement of comprehensive planning, as a tool for defining the city's future, by opportunistic planning based on intervention in strategic areas that are expected to induce change in the rest of the city neither allows a coherent and consensual model of the collectively desired city to be defined, nor allows planning to be used as a redistribution mechanism.

Relevance:

30.00%

Publisher:

Abstract:

Physical and social transformation processes that take place in urban contexts with strong spatial growth and hardly any economic development frequently have significant adverse impacts on the affected people, impacts that tend to be made invisible. This paper presents an analytical framework to explore different ways of approaching urban transformation processes (supply side), their impacts on the set of needs of the community (demand side), and their consequences for the urban environment as a whole (context). The proposed method has been used to assess three actions related to the physical and social transformation of the largest self-built settlement in the city of Dakar, Senegal, during the 2005-2012 period. The research findings show how exogenous interests are privileged over the common good when the affected citizens are not effectively involved in decision-making processes.