871 results for Feed-back multi-source
Abstract:
Using non-identical quantum wells as the active material, a new distributed-feedback laser with a period-varied Bragg grating is fabricated. The amplified spontaneous emission spectrum of this material shows a full width at half maximum of 115 nm, flatter and wider than that of identical quantum wells. Two wavelengths, 1.51 μm and 1.53 μm, are realized under different operating conditions, and the side-mode suppression ratio at both wavelengths reaches 40 dB. The device can serve as a light source for coarse wavelength-division multiplexing (CWDM) communication systems.
Abstract:
The Back Light Unit (BLU) and the color filter are the two key components for accurate color reproduction in a Liquid Crystal Display (LCD). Because an LCD cannot emit light itself, it needs a source of illumination, the Back Light Unit. The color filter, consisting of the RGB primaries, generates the three basic colors for the display. The traditional CCFL backlight source has several disadvantages, whereas LED backlight technology gives the LCD considerably higher display quality: an LED-backlit LCD is more efficient and can reach a color gamut above 100% of the NTSC specification. We further put forward a color-filter-less design: a film patterned with red- and green-emitting phosphors is excited by a blue-LED panel that we fabricate. Owing to this emission mechanism, the film produces the RGB basic colors and can therefore replace the color filter of the LCD. This architecture benefits lighting uniformity, provides a high light-utilization ratio, and simplifies the backlight structure, thereby cutting cost.
Abstract:
Radiative transfer research is the thread that runs through forest ecosystems: solar radiation provides plants with photosynthetic energy, a suitable ambient temperature, and developmental cues. On the one hand, climate change alters the quality and quantity of radiant energy reaching the ground, affecting the growth and development of vegetation and changing forest structure, while changes in forest structure in turn affect the distribution and quality of radiant energy within the canopy. These changes further affect sub-canopy soil temperature, altering root activity and the efficiency of soil nutrient turnover; the resulting chain reaction may change the productivity of the forest ecosystem and shift the regulation of carbon and nitrogen sources and sinks, thereby feeding back on the global climate system. On the other hand, humans, as members of the ecosystem, inevitably demand more raw materials and better ecological services from forest ecosystems; meeting these goals requires appropriately adjusting the mode and frequency of human intervention. Building on a leaf-area measurement technique, a light-radiation model, and a soil-temperature model suited to the subalpine forests of western Sichuan, this thesis analyzes the radiative-transfer characteristics of the subalpine forest ecosystems of western Sichuan and examines, from the perspective of forest structure, the distribution of radiation within stands and its influence on soil temperature. The main results are as follows:

1. A photographic method for measuring leaf area is proposed. Leaves laid on a flat surface are photographed, a projective transformation converts the non-orthographic image into an orthographic one, and computer image processing then yields the area, perimeter, length, width, and other parameters of each leaf. The method lets users photograph leaves on a plane from any direction and distance, can process many leaves at once, and is suitable for detached or attached leaves in the field. The leaf-area resolution is adjustable, can match or exceed that of common laser leaf-area meters, and the leaf images can be archived for later reference.

2. A model simulating light variation within stands is proposed. Hemispherical canopy photographs record the spatial distribution of canopy elements in the hemisphere above a viewpoint and serve as the canopy submodel; the sky-radiance submodel uses the CIE standard clear-sky and overcast models together with an interpolation between them. The model can simulate real-time sunfleck variation at a given point beneath the canopy.

3. A soil-temperature model is proposed. The soil is treated as a structure with capacitance and resistance, and resistor and capacitor elements are used to build a model of soil energy distribution. External solar radiant energy passes through the vegetation and other energy-partitioning components before entering the soil, where part of it is converted into soil potential energy, i.e., soil temperature; the variation of soil temperature resembles the charging and discharging of a battery. With known model parameters, soil-temperature variation can be computed from solar radiation; with unknown parameters, the parameters can be estimated from input and output values, and since the time constant among them depends on soil composition and water content, soil-moisture variation can thus be inferred.

4. Measurements of stand structure in typical plots of the Wanglang subalpine forest yielded three-dimensional stand maps, crown morphology, leaf-area density, and other parameters, which were fed into the tRAYci model developed by Brunner (1998) to compute light values at arbitrary positions within the stands over a period of time. Comparison with sub-canopy radiometer measurements and with hemispherical-photograph calculations shows that the model basically satisfies the requirements for understanding the stand light environment.

5. From the perspective of forest productivity in western Sichuan, the methods of forest-productivity research and the region's research history and results are reviewed, revealing several regularities and problems; in particular, leaf-area measurements have not used the standard definition of the leaf area index. Overall, the leaf area index of coniferous forests in western Sichuan (half of the total canopy leaf area per unit land area) should be between 4 and 5. The rain-rich western China rain belt has the highest forest productivity in the region, and productivity declines toward the northwest. The average productivity of the spruce-fir forests of western Sichuan is about 600 gDM m-2 a-1, whereas the potential productivity calculated from radiant energy reaches 1800 gDM m-2 a-1; the large gap between actual and potential productivity indicates the influence of other factors.

6. Among the three typical stands of the Wanglang subalpine forest, the herb layer of the birch plot (BF) is dominated by heliophytes such as Deyeuxia scabrescens, Origanum vulgare and Aster, and its understory transmittance is high. The fir plot (FF) has the lowest understory transmittance and is dominated by shade-tolerant species such as Impatiens noli-tangere, Cacalia and Pternopetalum. The spruce plot (SF) is the oldest stand, its understory transmittance lies between those of FF and BF, and its herb layer is still dominated by shade-tolerant species such as Fragaria orientalis, Cardamine tangutorum and Oxalis. The shrub layers of the fir and spruce plots are also rich in Euonymus, Acanthopanax, Ribes and Lonicera, whereas the birch plot is dominated by Cotoneaster, Corylus and Carpinus. Mosses are most abundant in the spruce plot, where they form a carpet layer, and sparse in the other two stands. Stand structure in the three plots correlates strongly with the understory light environment, so stand structure can to some extent be inferred from light-environment characteristics. Ranked solely by NPP estimated from tree-layer timber volume, the plots fall in the order BF > FF > SF, the same order as understory radiation transmittance and stand age, suggesting that radiation drives community succession.

7. The effective leaf area index measured by hemispherical photography in plots BF, FF and SF is highest in SF and lowest in BF. If the clumped distribution of conifer needles on shoots is corrected for with values from boreal coniferous forests, the LAI of plot SF increases markedly (by 89%), while that of the other plots is nearly unchanged or slightly lower. The corrected values are close to ground-based measurements reported in the literature, indicating that a clumping correction is necessary when hemispherical photography is used to measure the LAI of the subalpine coniferous forests of western Sichuan.

8. In the three plots, the populations of birch, Abies faxoniana and Sabina saltuaria are clumped, while Picea purpurea is essentially randomly distributed in plots FF and SF. The shortest distance at which clumping appears for the three species is about 2 m; within that distance the distribution is random. The shortest distance may be related to crown size, and seed-dispersal characteristics and light requirements may explain the distribution patterns.

Radiative transfer plays a key role in forest ecosystems. Solar radiation provides energy for photosynthesis, an appropriate ambient temperature and developmental information for plants. However, the quality and quantity of radiation reaching the land surface are affected by weather and subsequently influence the growth and development of plants, which in turn changes the radiation budget of the forest. Soil temperature changes with the variation of radiation under the forest canopy and influences root activity and the rate of nutrient turnover. Thus, any change in radiation induces chain reactions in the entire ecosystem, shows up in the net primary productivity, may shift the relationship between carbon source and sink at local or regional scale, and feeds back to the global climate system. On the other hand, as a component of ecosystems, human beings naturally demand more materials and better services from ecosystems; for these purposes, they must adapt the pattern and frequency of their interference with ecosystems. This thesis investigates canopy structure, radiation distribution and their influence on soil temperature through the process of radiative transfer in the subalpine forest ecosystems of western Sichuan. The main results are:

1. A new photogrammetric method for measuring leaf area is presented. The main idea is to convert non-vertically taken images of planar leaves to orthoimages through a projective transformation; the resulting images are then processed to obtain leaf morphological parameters. The method enables users to take photos at almost any orientation and distance, provided the leaves lie in a single plane, and to process large numbers of leaves in a short time, which makes it suitable for field measurement. The resolution of the leaf-area estimate is adjustable to fit special requirements.
2. A model combining hemispherical photos with solar tracks and the course of radiation is provided to simulate light variation in the forest. Hemispherical photos of the canopy record the real spatial distribution of every plant element viewed from a point; sky radiance is simulated with the CIE standard clear-sky or overcast-sky model. The model can simulate real-time light variation under the canopy.

3. A soil-temperature model is presented. The soil can be regarded as a body of resistors and capacitors: part of the solar-radiation budget of the soil body is transformed into soil potential energy, i.e., soil temperature. The variation of soil temperature is driven by solar radiation, vegetation, soil properties, etc. The model has two parameters, one of which is a time constant related to soil water content; the inverted model can therefore be used to infer the variation of soil water.

4. Using the tRAYci model developed by Brunner (1998), the three-dimensional distribution of light in three subalpine forest stands of the Wanglang Nature Reserve was simulated and validated against radiometer readings in these stands. The model can basically satisfy the need for understanding the light regimes of these stands.

5. Some principles and open questions of NPP (net primary productivity) research in western Sichuan are presented. The standard leaf area index (LAI) defined by Chen and Black (1997) has not been used in this region; total leaf area and projected leaf area index are still used in NPP research and may differ by about a factor of two. The standard LAI, half of the total leaf area above unit land area, should be between 4 and 5 for typical subalpine coniferous forest of western Sichuan, as concluded from the literature. The maximum forest NPP occurs in the West China rain belt and decreases northwestwards. The average NPP of spruce-fir forest in western Sichuan is about 600 gDM m-2 a-1, well below the potential NPP of 1800 gDM m-2 a-1 based on measured radiation in the region. The significant difference between potential and real NPP suggests that other factors limit the growth of the stands.

6. In the three subalpine forest stands of the Wanglang Nature Reserve, the herb layer of the birch stand (BF), aged 40, is dominated by heliophytes such as Deyeuxia scabrescens, Origanum vulgare and Aster tongoloa. The other two stands are dominated by shade-tolerant species: Impatiens noli-tangere, Impatiens dicentra, Cacalia deltophylla and Pternopetalum tanakae in the fir stand (FF), aged 180, and Fragaria orientalis, Cardamine tangutorum and Oxalis corniculata in the spruce stand (SF), aged 330. Shrub species in the latter two stands are relatively rich, the typical dominant genera being Euonymus, Acanthopanax, Ribes and Lonicera, whereas the birch stand has relatively sparse shrubs dominated by Cotoneaster, Corylus and Carpinus. Mosses are significant only in the spruce stand. Canopy structure controls the light regime of the stand, which influences the composition of the herb layer beneath the canopy; this light regime-community structure relationship can be used to infer the herb community from canopy structure. The NPP derived from the timber volume of the tree layer decreases from BF to SF, in the same order as transmitted total radiation under the canopy and stand age, suggesting a driving effect of radiation in community succession.

7. Of the effective LAI values of the three stands obtained from hemispherical photos, plot SF has the highest and plot BF the lowest. After correcting for the clumping of needles on shoots, the real LAI of plot SF increases significantly (by 89%) and approximates the average LAI of coniferous forest in western Sichuan. LAI obtained from hemispherical photos therefore needs a clumping correction.

8. The spatial distribution patterns of Betula platyphylla, Abies faxoniana and Sabina saltuaria are clumped, while Picea purpurea is almost random in plots FF and SF. The shortest distance for clumped distribution is 1.5 m for Betula platyphylla and Sabina saltuaria, and 2 m for Abies faxoniana; within this range, which roughly coincides with crown diameter, the pattern is random. Seed-dispersal characteristics and light requirements may explain the different spatial patterns.
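The soil-temperature model of result 3 treats the soil as an RC network charged by incoming radiation, like a battery charging and discharging. A minimal numerical sketch of that idea follows; the forcing series, time step and time constant are illustrative placeholders, not values from the thesis:

```python
import numpy as np

def soil_temperature(forcing, dt, tau, T0=0.0):
    """First-order RC response C*dT/dt = (T_forcing - T)/R, with tau = R*C.

    forcing: hypothetical effective driving temperature derived from the
    radiation reaching the soil surface after canopy partitioning.
    """
    T = np.empty(len(forcing) + 1)
    T[0] = T0
    a = dt / tau
    for k, Tf in enumerate(forcing):
        # Soil warms ("charges") toward the forcing and cools back
        # ("discharges") when the forcing drops, like an RC circuit.
        T[k + 1] = T[k] + a * (Tf - T[k])
    return T[1:]

# A step in forcing: soil temperature relaxes toward 10 degC with time
# constant tau, reaching the forcing value after several tau.
resp = soil_temperature(np.full(500, 10.0), dt=1.0, tau=20.0)
```

As the thesis describes, fitting tau to measured input/output series would then also track soil moisture, since the time constant depends on soil composition and water content.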
Abstract:
The past decade has witnessed the publication of a growing number of important ethnographic studies investigating the schooling experiences of Black students. Their focus has largely been on student-teacher relations during the students' last few years of compulsory schooling, and they have highlighted the complexity of racism and the varied nature of Black students' experiences of schooling. Drawing on data from a year-long ethnographic study of an inner-city, multi-ethnic primary school, this paper aims to complement these studies in two ways. Firstly, it broadens the focus to examine how student peer-group relations play an integral role, within the context of student-teacher relations, in shaping many Black students' schooling experiences. By focussing on African/Caribbean infant boys, it shows how student-teacher relations on the one hand, and peer-group relations on the other, form a continuous feedback loop, the products of each tending to exacerbate and inflate the other. Secondly, by concentrating on infant children, the paper assesses the extent to which these resultant processes and practices are already evident for Black pupils at the beginning of their school careers, at the ages of five and six.
Abstract:
This doctoral thesis offers conceptual and methodological contributions from systemic analyses, spanning the social and natural sciences, to the debate on the applicability of sustainable development to the Amazonian coastal territory. Its main challenge is the use of an innovative theoretical framework, articulating social-ecological systems (SES) and resilience, in the analysis of primary and secondary data. The research covers the Bragantina coastal region, taking the social systems (fishing communities) and ecological systems (mangrove) as the sample area. The MADAM program (Mangrove Dynamics and Management), totalling ten years of interdisciplinary research, serves as the main source of information. Based on the concepts of SES and resilience, the relations between natural-resource use and local socio-economic organization and structure are analyzed. The objective is to analyze the resilience of the coastal social-ecological system of Pará in the light of ongoing processes of socio-economic development, identifying the changes generated and how the coastal system reacts and adapts to new configurations, in order to provide alternatives for the sound development of the area. The result is a panorama of the current conditions of the Bragantina coastal zone. The study finds that the main factors that increase or decrease the social-ecological resilience of this region, understood as its capacity to adapt and reorganize in the face of change and disturbance, are the endogenous driving forces, especially social capital and Local Ecological Knowledge (LEK), the latter offering reflexive potential for sustainable planning in the context of the Amazonian coast.
Abstract:
In cardiac muscle, a number of posttranslational protein modifications can alter the function of the Ca(2+) release channel of the sarcoplasmic reticulum (SR), also known as the ryanodine receptor (RyR). During every heartbeat, RyRs are activated by the Ca(2+)-induced Ca(2+) release mechanism and contribute a large fraction of the Ca(2+) required for contraction. Some posttranslational modifications of the RyR are known to affect its gating and Ca(2+) sensitivity. Presently, research in a number of laboratories is focused on RyR phosphorylation, both by PKA and by CaMKII, and on RyR modifications caused by reactive oxygen and nitrogen species (ROS/RNS). Both classes of posttranslational modification are thought to play important roles in the physiological regulation of channel activity, but are also known to provoke abnormal alterations in various diseases. Only recently was it realized that several types of posttranslational modification are tightly connected and form synergistic (or antagonistic) feedback loops with additive and potentially detrimental downstream effects. This review summarizes recent findings on such posttranslational modifications, attempts to bridge molecular and cellular findings, and opens a perspective for future work aimed at understanding the ramifications of crosstalk in these multiple signaling pathways. Clarifying these complex interactions will be important for the development of novel therapeutic approaches, since it may lay the foundation for multi-pronged treatment regimes in the future. This article is part of a Special Issue entitled: Cardiomyocyte Biology: Cardiac Pathways of Differentiation, Metabolism and Contraction.
Abstract:
The aim of this study was to explore potential causes and mechanisms for the sequence and temporal pattern of tree taxa, specifically for the shift from shrub-tundra to birch-juniper woodland during and after the transition from the Oldest Dryas to the Bølling-Allerød in the region surrounding the lake Gerzensee in southern Central Europe. We tested the influence of climate, forest dynamics, and community dynamics against other possible causes of delays. To this end, temperature reconstructed from a δ18O record was used as input to drive the multi-species forest-landscape model TreeMig. In a stepwise scenario analysis, population dynamics along with pollen production and transport were simulated and compared with pollen-influx data, under scenarios of different δ18O/temperature sensitivities, different precipitation levels, with/without inter-specific competition, and with/without prescribed arrival of species. In the best-fitting scenarios, the effects on competitive relationships, pollen production, spatial forest structure, albedo, and surface roughness were examined in more detail. The appearance of most taxa in the data could only be explained by the coldest temperature scenario, with a sensitivity of 0.3‰/°C, corresponding to an anomaly of −15 °C. Once the taxa were present, their temporal pattern was shaped by competition. The later arrival of Pinus could not be explained even by the coldest temperatures, and its timing had to be prescribed from the first observations in the pollen record. After its arrival in the simulation area, the expansion of Pinus was further influenced by competitors and minor climate oscillations. The rapid change in the simulated species composition went along with a drastic change in forest structure, leaf area, albedo, and surface roughness. Pollen increased only shortly after biomass.
Based on our simulations, two alternative scenarios can explain the pollen pattern: either very cold climate suppressed most species in the Oldest Dryas, or they were delayed by soil formation or migration. One taxon, Pinus, was delayed by migration and then additionally hindered by competition. Community dynamics affected the pattern in two ways: potentially by facilitation, i.e. by nitrogen-fixing pioneer species at the onset, whereas the later pattern was clearly shaped by competition. The simulated structural changes illustrate how vegetation on a larger scale could feed back to the climate system. A better understanding would require a more integrated simulation approach that also covers immigration from refugia, combining climate-driven population dynamics, migration, individual pollen production and transport, soil dynamics, and the physiology of individual pollen production.
Abstract:
We re-evaluate the Greenland mass balance for the recent period using low-pass Independent Component Analysis (ICA) post-processing of Level-2 GRACE data (2002-2010) from the different official providers (UTCSR, JPL, GFZ) and confirm the important present-day ice mass loss of this ice sheet, in the range of -70 to -90 Gt/y, due to negative contributions of the glaciers on the east coast. We highlight the high interannual variability of mass variations of the Greenland Ice Sheet (GrIS), especially the recent deceleration of ice loss in 2009-2010, once seasonal cycles are robustly removed by Seasonal Trend Loess (STL) decomposition. Interannual variability leads to trend estimates that vary with the considered time span. Correction of post-glacial rebound effects on ice-mass trend estimates represents no more than 8 Gt/y over the whole ice sheet. We also investigate possible climatic causes that can explain these interannual mass variations, as strong correlations between the GRACE-based mass balance and atmosphere/ocean parameters are established: (1) changes in snow accumulation, and (2) the influence of inputs of warm ocean water that periodically accelerate the calving of glaciers in coastal regions, together with feedback effects of coastal-water cooling by fresh currents from glacier melting. These results suggest that the Greenland mass balance is driven by coastal sea surface temperature at time scales shorter than accumulation.
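The trend estimates above rest on robustly removing the seasonal cycle before fitting (done in the paper with STL). A minimal numpy stand-in on synthetic numbers illustrates the step; the -80 Gt/yr trend, 120 Gt seasonal amplitude and noise level are illustrative, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(96)                      # 8 years of monthly solutions
true_trend = -80.0 / 12.0                   # Gt per month (~ -80 Gt/yr)
seasonal = 120.0 * np.sin(2 * np.pi * months / 12.0)
mass = true_trend * months + seasonal + rng.normal(0.0, 10.0, months.size)

# Crude stand-in for STL: subtract a monthly climatology computed on the
# detrended residuals, so the seasonal estimate is not biased by the trend.
rough = np.polyfit(months, mass, 1)
resid = mass - np.polyval(rough, months)
clim = np.array([resid[m::12].mean() for m in range(12)])
deseason = mass - clim[months % 12]

slope_per_yr = 12.0 * np.polyfit(months, deseason, 1)[0]
```

Unlike this fixed climatology, STL lets the seasonal component evolve over time, which is why the paper calls its removal "robust" for interannual analysis.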
Abstract:
This work investigates the feasibility of automatic decomposition of gamma-radiation spectra by algorithms for solving systems of linear algebraic equations based on pseudoinversion techniques. The algorithms were chosen with a view to their possible implementation on special-purpose processors of low complexity. The first chapter summarizes the techniques for the detection and measurement of gamma radiation on which the spectra treated in this work are based. The concepts associated with the nature of electromagnetic radiation are reviewed, together with the physical processes and electronic treatment involved in its detection, highlighting the intrinsically statistical nature of the process that forms the associated spectrum, understood as a classification of the number of detections as a function of the supposedly continuous energy associated with them. A brief description is given of the main phenomena of interaction between radiation and matter, which condition the detection process and the formation of the spectrum. The radiation detector is considered the critical element of the measurement system, since it strongly conditions the detection process; the main types of detector are therefore examined, with special emphasis on semiconductor detectors, as they are the most widely used today. Finally, the fundamental electronic subsystems for conditioning and pre-processing the signal coming from the detector, traditionally referred to as Nuclear Electronics, are described.

As far as spectroscopy is concerned, the main subsystem of interest for this work is the multichannel analyzer, which performs the qualitative treatment of the signal and builds a histogram of radiation intensity over the energy range to which the detector is sensitive. This N-dimensional vector is what is generally known as the radiation spectrum, and the various radionuclides contributing to a non-pure radiation source leave their imprint on it. The second chapter gives an exhaustive review of the mathematical methods devised to date for identifying the radionuclides present in a composite spectrum and for determining their relative activities. One of them, multiple linear regression, is proposed as the approach best suited to the conditions and constraints of the problem: the ability to deal with low-resolution spectra, the absence of a human operator (no supervision), and the possibility of being supported by low-complexity algorithms implementable on dedicated VLSI processors. The analysis problem is formally posed in the third chapter along these lines, and it is shown that it admits a solution within the theory of linear associative memories: an operator based on this type of structure can provide the desired spectral decomposition. In the same context, a pair of complementary adaptive algorithms is proposed for constructing the operator, with arithmetic characteristics especially suited to implementation on high-scale-integration processors.

The adaptive character gives the associative memory great flexibility with regard to the progressive incorporation of new information. The fourth chapter deals with an additional, highly complex problem: the treatment of the deformations that instrumental drifts in the detector and in the pre-conditioning electronics introduce into the spectrum. These deformations invalidate the linear regression model used to describe the problem spectrum. A model is therefore derived that includes the deformations as additional contributions to the composite spectrum, leading to a simple extension of the associative memory able to tolerate drifts in the problem mixture and to carry out a robust analysis of contributions. The extension method is based on the assumption of small perturbations. Laboratory practice shows that instrumental drifts can sometimes cause severe distortions in the spectrum that the previous model cannot handle. The fifth chapter therefore addresses measurements affected by strong drifts from the standpoint of nonlinear optimization theory. This reformulation leads to a recursive algorithm inspired by the Gauss-Newton method that introduces the concept of a feedback linear memory. This operator offers a markedly improved capability to decompose mixtures with strong drift without the excessive computational load of classical nonlinear optimization algorithms.
The work closes with a discussion of the results obtained at the three main levels of study, presented in chapters three, four and five, together with the final statement of the main conclusions and an outline of possible lines of future work.---ABSTRACT---This research explores the feasibility of automatic gamma-radiation spectral decomposition by linear algebraic equation-solving algorithms using pseudo-inverse techniques. The algorithms have been designed taking into account their possible implementation on special-purpose processors of low complexity. In the first chapter, the techniques for the detection and measurement of gamma radiation employed to construct the spectra used throughout the research are reviewed. The basic concepts related to the nature and properties of hard electromagnetic radiation are also re-examined, together with the physical and electronic processes involved in its detection, with special emphasis on the intrinsically statistical nature of the spectrum build-up process, considered as a classification of the number of individual photon detections as a function of the energy associated with each photon. For this, a brief description of the most important matter-energy interaction phenomena conditioning the detection and spectrum-formation processes is given. The radiation detector is considered the most critical element in the measurement system, as it strongly conditions the detection process. For this reason, the characteristics of the most common detectors are re-examined, with special emphasis on those of semiconductor type, as these are the most frequently employed nowadays.

Finally, the fundamental electronic subsystems for pre-conditioning and treating the signal delivered by the detector, classically referred to as Nuclear Electronics, are described. As far as spectroscopy is concerned, the subsystem most relevant to the present research is the so-called multichannel analyzer, which performs the qualitative treatment of the signal, building a histogram of radiation intensity over the range of energies to which the detector is sensitive. The resulting N-dimensional vector is generally known as the radiation spectrum. The different radionuclides contributing to a composite source leave their fingerprint in the resulting spectrum. The second chapter gives an exhaustive review of the mathematical methods devised to date to identify the radionuclides present in a composite spectrum and to quantify their relative contributions. One of the most popular is multiple linear regression, which is proposed as the approach best suited to the constraints and restrictions of the problem, i.e., the need to treat low-resolution spectra, the absence of control by a human operator (un-supervision), and the possibility of implementation as low-complexity algorithms amenable to being supported by VLSI specific processors. The analysis problem is formally stated in the third chapter, following the guidelines established above, and it is shown that it can be satisfactorily solved from the point of view of linear associative memories: an operator based on this kind of structure can provide the solution to the spectral decomposition problem posed.

In the same context, a pair of complementary adaptive algorithms for constructing the solving operator is proposed, sharing arithmetic characteristics that render them especially suitable for implementation on VLSI processors. The adaptive nature of the associative memory gives this operator great flexibility with regard to the progressive inclusion of new information in the knowledge base. The fourth chapter then addresses a new and quite complex problem: the treatment of the deformations appearing in the spectrum when instrumental drifts in both the detecting device and the pre-conditioning electronics are taken into account. These deformations render the proposed linear regression model almost useless for describing the resulting spectrum. A new model including the drifts is derived as an extension of the individual contributions to the composite spectrum, implying a simple extension of the associative memory that enables it to accept drifts in the composite spectrum and thus produce a robust analysis of contributions. The extension method is based on the low-amplitude perturbation hypothesis. Experimental practice shows that in certain cases instrumental drifts may provoke severe distortions in the resulting spectrum that cannot be treated under this hypothesis. To cover these less frequent cases, the fifth chapter treats the problem of strong drifts from the point of view of nonlinear optimization techniques. This reformulation leads to recursive algorithms based on the Gauss-Newton method, which allow the introduction of feedback memories, computing elements with a markedly improved capability to decompose spectra affected by strong drifts.

The research concludes with a discussion of the results obtained at the three main levels of study, presented in the third, fourth and fifth chapters, together with a review of the main conclusions derived from the study and an outline of the main research lines opened by the present work.
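The core operation of the third chapter, decomposing a composite spectrum y = A x into radionuclide contributions with a pseudoinverse-based linear associative memory, can be sketched as follows; the Gaussian signature spectra and activity values are hypothetical stand-ins for a measured reference library:

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 256

# Hypothetical library: one smooth signature spectrum per radionuclide
# (the columns of A); a real library would hold measured reference spectra.
E = np.linspace(0.0, 1.0, n_channels)
A = np.column_stack([np.exp(-((E - c) ** 2) / 0.005) for c in (0.2, 0.5, 0.8)])

x_true = np.array([3.0, 1.5, 0.5])                   # relative activities
y = A @ x_true + rng.normal(0.0, 0.01, n_channels)   # composite + noise

# The Moore-Penrose pseudoinverse acts as the least-squares "associative
# memory" operator: applied to the mixture, it returns the contributions.
M = np.linalg.pinv(A)
x_hat = M @ y
```

The thesis's adaptive algorithms build the operator M incrementally rather than computing the pseudoinverse directly, and the fourth-chapter extension augments the columns of A with drift terms so the same solve tolerates instrumental deformations.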
Abstract:
In this work, we present a multi-camera surveillance system based on self-organizing neural networks for representing events in video. The system processes several tasks in parallel on GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher-level systems and monitoring and making decisions in real time, it must meet a set of requirements: time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.
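A minimal example of the kind of self-organizing network such a system can use to represent scene structure: a 1-D SOM over 2-D points (e.g. object centroids extracted from frames). This is a generic sketch; the data, map size and training schedule are hypothetical, and all the multi-camera/GPU plumbing is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(data, n_units=10, epochs=30, lr0=0.5, sigma0=3.0):
    """Minimal 1-D self-organizing map over 2-D points."""
    w = rng.uniform(data.min(0), data.max(0), (n_units, 2))
    idx = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - e / epochs), 0.5)  # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(1))   # best-matching unit
            h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)           # pull units toward sample
    return w

# Points along a corridor-like trajectory; after training, the map's
# units spread out along that structure and summarize it compactly.
pts = np.column_stack([np.linspace(0, 1, 200), 0.1 * rng.normal(size=200)])
som = train_som(pts)
```

The trained weight vectors give a compact representation of where activity occurs, which downstream analysis (e.g. trajectory characterization) can consume instead of raw pixels.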
Abstract:
Nonlinear, non-stationary signals are commonly found in a variety of disciplines such as biology, medicine, geology and financial modeling. The complexity (e.g., nonlinearity and non-stationarity) of such signals and their low signal-to-noise ratios often make it challenging to use them in critical applications. In this paper we propose a new neural-network-based technique to address these problems. We show that a feed-forward, multi-layered neural network can conveniently capture the states of a nonlinear system in its connection weight space after a process of supervised training. The performance of the proposed method is investigated via computer simulations.
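A minimal version of the idea: a feed-forward network with one hidden layer learns a nonlinear map through supervised training, encoding it in its connection weights. The sin(3x) target, layer sizes and learning rate below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (256, 1))
y = np.sin(3 * X)                            # hypothetical nonlinear "system"

# One hidden tanh layer, linear output, plain batch gradient descent.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                 # forward pass
    out = H @ W2 + b2
    err = out - y                            # gradient of 0.5*MSE w.r.t. out
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)         # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

After training, the weights W1, b1, W2, b2 hold the state of the learned nonlinear map, which is the "weight-space" capture the abstract refers to.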
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are of central importance to both industry and research, given the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of norms and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research across many industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, an effective and complete integration of such technologies is currently impractical, owing both to technical constraints and to the sheer share of energy production, now met by fossil sources, that alternative technologies would have to cover. Optimizing energy production and management, together with developing technologies that reduce energy consumption, is on the other hand an adequate solution to the problem, and one that can be deployed over shorter time horizons. The objective of this thesis is to investigate, develop, and apply a set of numerical tools for optimizing the design and management of energy processes, to be used to reduce fuel consumption and optimize energy efficiency. The methodology developed rests on a model-based numerical approach, exploiting the predictive capabilities that derive from a mathematical representation of the processes to develop optimization strategies for them under realistic operating conditions.
In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the plants analyzed, so as to obtain the best performance during actual operation. The thesis addresses the energy-optimization problem with reference to three different technological applications. The first considers a multi-source plant meeting the energy demand of a commercial building. Since this system uses several different technologies to produce the thermal and electrical energy required by the users, the correct load-sharing strategy must be identified to guarantee the plant's maximum energy efficiency. Based on a simplified model of the plant, the problem was solved by applying a deterministic Dynamic Programming algorithm, and the results were compared with those obtained with a simpler rule-based strategy, thereby demonstrating the advantages of an optimal control strategy. The second application investigates the design of a hybrid solution for energy recovery from a hydraulic excavator. Since several technological layouts can be conceived to implement this solution, and the additional components they introduce require proper sizing, a methodology is needed to evaluate the maximum performance attainable by each alternative. The comparison between the different layouts was therefore conducted on the basis of the machine's energy performance during a standardized digging cycle, estimated with the aid of a detailed model of the system.
Since the addition of energy-recovery devices introduces additional degrees of freedom into the system, it was also necessary to determine their optimal control strategy in order to evaluate the maximum performance attainable by each layout. This problem was again solved with a Dynamic Programming algorithm, exploiting a simplified model of the system devised for the purpose. Once the optimal performance of each design solution had been determined, a fair comparison between the alternatives was possible. The third and final application analyzes an organic Rankine cycle (ORC) plant for recovering waste heat from the exhaust gases of passenger cars. Although ORC plants can potentially yield significant improvements in a vehicle's fuel savings, their correct operation requires complex control strategies able to cope with the variability of the process heat source; moreover, while fuel savings are maximized, the system must be kept in safe operating conditions. To address this problem, a robust and effective model of the plant was built, based on the Moving Boundary methodology, to simulate the phase-change dynamics of the organic fluid and estimate the plant's performance. This model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimization problem, an algorithm based on Particle Swarm Optimization was developed.
The results obtained with this controller were compared with those attainable with a classical proportional-integral (PI) controller, once again showing the energy advantages of adopting an optimal control strategy.
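The deterministic Dynamic Programming used for the load-sharing problem can be illustrated on a toy dispatch model: a grid connection with time-varying prices plus a small storage unit whose discrete state of charge is the DP state. All numbers (demand, prices, storage levels) are hypothetical; the thesis's actual plant model is far richer.

```python
# Backward deterministic DP over a discretized battery state of charge.
# value[t][s] = minimal cost from hour t onward, starting with charge s.
demand = [2, 3, 1, 4]          # kWh required each hour (toy data)
price  = [1.0, 3.0, 1.0, 3.0]  # grid cost per kWh each hour (toy data)
SOC    = range(0, 5)           # discrete battery states of charge, kWh

INF = float("inf")
T = len(demand)
value  = [[INF] * len(SOC) for _ in range(T + 1)]
policy = [[None] * len(SOC) for _ in range(T)]
for s in SOC:
    value[T][s] = 0.0          # no cost after the horizon

for t in reversed(range(T)):
    for s in SOC:
        for s_next in SOC:
            dis = s - s_next            # discharge (negative = charging)
            grid = demand[t] - dis      # grid covers the remainder
            if grid < 0:
                continue                # no selling back in this toy model
            c = price[t] * grid + value[t + 1][s_next]
            if c < value[t][s]:
                value[t][s] = c
                policy[t][s] = s_next   # optimal next state of charge
```

The optimal strategy charges during cheap hours and discharges during expensive ones; starting from an empty battery, the minimal total cost here is `value[0][0]`. A rule-based strategy (e.g. always buy exactly the demand) would pay the full time-varying price, which is the kind of gap the thesis quantifies.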
Resumo:
Structure–activity relationships are indispensable for identifying optimal antioxidants. The advantages of in vitro over in vivo experiments for obtaining these relationships are that the structure is better defined in vitro, since less metabolism takes place, and that the concentration, a parameter directly linked to activity, is more accurately controlled. Moreover, the reactions that occur in vivo, including feedback mechanisms, are often too multi-faceted and diverse to be compensated for during the assessment of a single structure–activity relationship. Pitfalls of in vitro antioxidant research include: (i) by definition, antioxidants are not stable, and substantial amounts of oxidation products are formed; and (ii) during the scavenging of reactive species, reaction products of the antioxidants accumulate. Another problem is that the maintenance of a defined antioxidant concentration is subject to processes such as oxidation, the formation of reaction products during the actual antioxidant reaction, and the compartmentalization of the antioxidant and the reactive species in the in vitro test system. Determinations of in vitro structure–activity relationships are therefore subject to many competing variables and should always be evaluated critically. (c) 2005 Published by Elsevier B.V.
Resumo:
The idea of comparative performance assessment is crucial. Recent study findings show that in South Florida the use of external benchmarks for performance comparison by most municipalities is virtually non-existent. On one level, this study sought to identify the factors shaping resident perceptions of municipal service quality. On a different and more practical level, it sought to identify a core set of measures that could serve for multi-jurisdictional comparisons of performance. This study empirically tested three groups of hypotheses. Data were collected via custom-designed survey instruments from multiple jurisdictions, representing diverse socioeconomic backgrounds, across two counties. A second layer of analysis examined municipal budget documents for the presence of performance measures. A third layer of analysis was conducted via face-to-face interviews with residents at the point of service delivery. Research questions were analyzed using descriptive and inferential statistical methods. Results of the survey data yielded inconsistent findings. In absolute aggregated terms, using sociological determinants to guide the inquiry failed to yield conclusive answers about the factors shaping resident perceptions of municipal service quality. At disaggregated community levels, definite differences did emerge, but these had weak predictive ability. More useful were the findings on performance-measure reporting in municipal budget documents and the analyses of interviews with residents at the point of service delivery. Regardless of socio-economic profile, neighborhood characteristics, level of civic engagement or type of community, the same aspects were important to citizens when assessing service quality.
For parks and recreation, respondents most frequently cited maintenance, facility amenities, and program offerings as important, while for garbage collection services timely and consistent delivery mattered most. Surprisingly, the municipalities participating in the study track performance data on the very items citizens identify as important, yet they rarely seek regular feedback from residents or report results back to them. These findings suggest that endeavors such as the one undertaken in this study can help determine a core set of measures for cross-jurisdictional comparisons of municipal service quality, improve municipal service delivery, and communicate with the public.
Resumo:
Denial-of-service attacks (DoS) and distributed denial-of-service attacks (DDoS) attempt to temporarily disrupt users or computer resources to cause service unavailability to legitimate users in the internetworking system. The most common type of DoS attack occurs when adversaries flood a large amount of bogus data to interfere with or disrupt the service on the server. The attack can be either a single-source attack, which originates at only one host, or a multi-source attack, in which multiple hosts coordinate to flood a large number of packets to the server. Cryptographic mechanisms in authentication schemes are an example approach to help the server validate malicious traffic. Since authentication in key establishment protocols requires the verifier to spend some resources before successfully detecting bogus messages, adversaries might be able to exploit this flaw to mount an attack that overwhelms the server's resources. The attacker is able to perform this kind of attack because many key establishment protocols incorporate strong authentication at the beginning phase, before they can identify the attacks. This is an example of DoS threats in most key establishment protocols: they have been implemented to support confidentiality and data integrity, but do not carefully consider other security objectives, such as availability. The main objective of this research is to design denial-of-service resistant mechanisms in key establishment protocols. In particular, we focus on the design of cryptographic protocols related to key establishment protocols that implement client puzzles to protect the server against resource exhaustion attacks. Another objective is to extend formal analysis techniques to include DoS-resistance. Basically, the formal analysis approach is used not only to analyse and verify the security of a cryptographic scheme carefully, but also to help in the design stage of new protocols with a high level of security guarantee.
In this research, we focus on an analysis technique based on Meadows' cost-based framework, and we implement a DoS-resistant model using Coloured Petri Nets. Meadows' cost-based framework is proposed specifically to assess denial-of-service vulnerabilities in cryptographic protocols using mathematical proof, while Coloured Petri Nets are used to model and verify communication protocols through interactive simulations. In addition, Coloured Petri Nets help the protocol designer clarify and reduce inconsistencies in the protocol specification. Therefore, the second objective of this research is to explore vulnerabilities in existing DoS-resistant protocols, as well as to extend a formal analysis approach into our new framework for improving DoS-resistance and evaluating the performance of the newly proposed mechanism. In summary, the specific outcomes of this research include the following results: 1. a taxonomy of denial-of-service resistant strategies and techniques used in key establishment protocols; 2. a critical analysis of existing DoS-resistant key exchange and key establishment protocols; 3. an implementation of Meadows' cost-based framework using Coloured Petri Nets for modelling and evaluating DoS-resistant protocols; and 4. the development of new efficient and practical DoS-resistant mechanisms to improve resistance to denial-of-service attacks in key establishment protocols.
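The client-puzzle idea mentioned above can be sketched with a hash-based proof-of-work puzzle: the client must search for a solution while the server verifies with a single cheap hash, shifting the resource cost onto the initiator before any expensive authentication work is done. The encoding, difficulty, and function names below are illustrative assumptions, not the thesis's own construction.

```python
import hashlib
import itertools
import os

# Hash-based client puzzle (illustrative sketch): the server issues a
# fresh nonce; the client brute-forces x so that SHA-256(nonce || x)
# starts with DIFFICULTY zero bits; the server verifies in one hash.
DIFFICULTY = 12  # required leading zero bits (~2**12 hashes to solve)

def make_puzzle() -> bytes:
    return os.urandom(16)  # fresh server nonce per connection attempt

def leading_zero_bits(digest: bytes) -> int:
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def solve(nonce: bytes) -> int:
    # Client side: expensive brute-force search.
    for x in itertools.count():
        d = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(d) >= DIFFICULTY:
            return x

def verify(nonce: bytes, x: int) -> bool:
    # Server side: one hash, negligible cost.
    d = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(d) >= DIFFICULTY
```

The asymmetry is the point: raising `DIFFICULTY` by one bit doubles the client's expected work while the server's verification cost stays constant, which is the cost imbalance Meadows' framework formalizes when assessing resource-exhaustion resistance.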