888 results for power to extend time
Abstract:
The electrical power distribution and commercialization scenario is evolving worldwide, and electricity companies, faced with the challenge of new information requirements, are demanding IT solutions for the smart monitoring of power networks. Two main challenges arise from data management and smart monitoring of power networks: real-time data acquisition and big data processing over short time periods. We present a solution in the form of a system architecture that addresses real-time issues and has the capacity for big data management.
Abstract:
In this work, educational software for an intuitive understanding of the basic dynamic processes of semiconductor lasers is presented. The proposed tool is addressed to students of optical communication courses, encouraging self-consolidation of the subjects learned in lectures. The semiconductor laser model is based on the well-known rate equations for the carrier density, photon density and optical phase. Direct modulation of the laser is considered, with input parameters that can be selected by the user. Different options can be selected for the waveform, amplitude and frequency of the modulation, as well as for the laser bias point. Simulation results are plotted for carrier density and output power versus time. Instantaneous frequency variations of the laser output are numerically shifted to the audible frequency range and sent to the computer loudspeakers. This results in an intuitive description of the “chirp” phenomenon due to amplitude-phase coupling, typical of directly modulated semiconductor lasers. In this way, the student can actually listen to the time-resolved spectral content of the laser output. By changing the laser parameters and/or the modulation parameters, the consequent variation of the laser output can be appreciated in an intuitive manner. The proposed educational tool had previously been implemented by the same authors as locally executable software. In the present manuscript, we extend our previous work to a web-based platform, offering improved distribution and allowing its use by the wide audience of the web.
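As a rough illustration of the model this abstract describes, the sketch below integrates the standard single-mode rate equations under sinusoidal direct modulation and derives the chirp from the phase equation. It is not the authors' web tool; all parameter values (active volume, lifetimes, gain coefficient, linewidth enhancement factor, modulation settings) are generic textbook assumptions chosen only for demonstration.

```python
# Illustrative sketch (not the authors' tool): integrate the single-mode laser
# rate equations under direct sinusoidal current modulation and compute the
# chirp from the phase equation. All parameter values are typical textbook
# numbers chosen only for demonstration.
import numpy as np

q = 1.602e-19        # electron charge [C]
V = 1.5e-16          # active volume [m^3]
tau_n = 2e-9         # carrier lifetime [s]
tau_p = 2e-12        # photon lifetime [s]
g0 = 1.5e-12         # differential gain coefficient [m^3/s]
N0 = 1.5e24          # transparency carrier density [1/m^3]
Gamma = 0.3          # confinement factor
beta_sp = 1e-4       # spontaneous emission factor
alpha_H = 5.0        # linewidth enhancement (amplitude-phase coupling) factor

def drive_current(t, I_bias=50e-3, I_mod=15e-3, f_mod=1e9):
    """Sinusoidally modulated drive current around the bias point."""
    return I_bias + I_mod * np.sin(2 * np.pi * f_mod * t)

def simulate(t_end=20e-9, dt=1e-13):
    t = np.arange(0.0, t_end, dt)
    N = np.full_like(t, N0)          # carrier density
    S = np.full(t.shape, 1e18)       # photon density
    chirp = np.zeros_like(t)         # instantaneous frequency deviation [Hz]
    for k in range(len(t) - 1):
        G = g0 * (N[k] - N0)                       # net stimulated emission rate
        dN = drive_current(t[k]) / (q * V) - N[k] / tau_n - G * S[k]
        dS = Gamma * G * S[k] - S[k] / tau_p + beta_sp * N[k] / tau_n
        N[k + 1] = N[k] + dt * dN
        S[k + 1] = max(S[k] + dt * dS, 0.0)
        # chirp = (1/2pi) dphi/dt with dphi/dt = (alpha/2)(Gamma*G - 1/tau_p)
        chirp[k + 1] = alpha_H / (4 * np.pi) * (Gamma * G - 1 / tau_p)
    return t, N, S, chirp

if __name__ == "__main__":
    t, N, S, chirp = simulate()
    print("peak-to-peak chirp [GHz]:", (chirp.max() - chirp.min()) / 1e9)
```

In the real tool this chirp trace is what gets shifted into the audible band and played through the loudspeakers.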
Abstract:
A scenario commonly considered for the sustainable, long-term use of nuclear energy envisages a fleet of liquid-metal-cooled fast reactors (LMFR) dedicated to Pu recycling and the transmutation of minor actinides (MA). Another option is to combine such reactors with some accelerator-driven subcritical systems (ADS) devoted exclusively to MA elimination. The design and licensing of these innovative reactors require practical and accurate computational tools that incorporate the knowledge obtained from experimental research on new reactor configurations, materials and systems. Although a number of fast reactors have been built and operated worldwide, the operational experience is still limited and not all transients are fully understood. Safety analyses of new LMFRs are therefore based mainly on deterministic methods, unlike the modern approaches for light water reactors (LWR), which also benefit from probabilistic methods. The most common approach in LMFR safety studies is to use a variety of codes, developed on the basis of different theories, in search of comprehensive solutions for the transients, including uncertainties. In this framework, new best-estimate codes, which do not include conservative approximations, are of prime importance for analyzing steady states and transients in fast reactors. This thesis focuses on the development of a coupled code system for performing realistic analyses of critical fast reactors using the Monte Carlo method. Nowadays, given the greater availability of computational resources, Monte Carlo neutron transport codes can be used in practice to perform detailed full-core calculations, even for highly heterogeneous material configurations. Moreover, Monte Carlo codes are usually taken as a reference for deterministic multigroup diffusion codes in fast reactor applications, because they use point-wise cross sections, an exact geometry model, and intrinsically account for the angular dependence of the flux. This thesis presents a coupling methodology between the well-known MCNP code, which calculates the power generation in the reactor, and the subchannel thermal-hydraulics code COBRA-IV, which obtains the temperature and density distributions in the system. COBRA-IV is a suitable code for fast reactor applications since it has been validated against experimental results in sodium rod bundles and includes the most appropriate correlations for liquid metals. In a first phase of the thesis, both codes have been coupled at steady state using an iterative method with external file exchange. The main problem in steady-state neutronic and thermal-hydraulic coupling with Monte Carlo codes is the handling of cross sections to account for Doppler broadening when the fuel temperature increases. Among all the available options, the pseudo-material approach has been chosen in this thesis, and it has been shown to provide acceptable results in fast reactor applications.
Furthermore, the geometrical changes caused by large temperature gradients in the core of fast reactors are important for the neutronics as a consequence of the long neutron mean free path in these systems. An additional module has therefore been developed that simulates the reactor geometry in the hot state and allows the reactivity due to core expansion in a transient to be estimated. This module automatically calculates the fuel length, the cladding radius, the fuel assembly pitch and the diagrid radius as a function of temperature. This effect is very relevant in transients without insertion of the shutdown banks. Also related to geometrical changes, a tool has been implemented that automates the movement of the control rods in search of reactor criticality, or calculates the control rod worth for a given axial insertion. A second phase of the calculation platform developed in this work is the dynamic simulation. Since MCNP only performs steady-state calculations for critical or supercritical systems, the most direct solution proposed, without modifying the MCNP source code, is to use the flux factorization approach, which solves the flux shape and amplitude separately. Two approximations have been studied in depth in this case: adiabatic and quasistatic. The adiabatic method uses a coupling scheme that alternates the neutronic and thermal-hydraulic calculations in time. MCNP calculates the fundamental mode of the neutron distribution and the reactivity at the end of each time step, and COBRA-IV calculates the thermal properties at the midpoint of each time step. The evolution of the flux amplitude is obtained by solving the point kinetics equations. This method calculates the static reactivity at each time step which, in general, differs from the dynamic reactivity that would be obtained with the exact, time-dependent flux distribution. Nevertheless, for conditions not too far from criticality, both reactivities are similar and the method leads to acceptable practical results. Along this line, an improved method has subsequently been developed in an attempt to take into account the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme consists of performing a quasistationary calculation per time step with MCNP. The quasistationary simulation is based on the constant delayed-neutron-source approximation, and consists of giving a specific weight or importance to each computational cycle of the MCNP criticality calculation in the estimation of the final flux. Both methods have been verified, taking as a reference the results of the diffusion code COBAYA3 for a common and sufficiently significant exercise. Finally, in order to demonstrate the practical usability of the code, a transient has been simulated in the MYRRHA/FASTEF critical reactor concept, currently in the design phase, with a thermal power of 100 MW and cooled by lead-bismuth. ABSTRACT Long-term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for Pu recycling and minor actinide (MA) transmutation, or combined with some accelerator driven systems (ADS) just for MA elimination.
Design and licensing of these innovative reactor concepts require accurate computational tools, implementing the knowledge obtained in experimental research for new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still limited, especially for lead reactors, and not all the transients are fully understood. The safety analysis approach for LMFR is therefore based only on deterministic methods, different from the modern approach for Light Water Reactors (LWR), which also benefit from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different from each other, to analyze transients looking for a comprehensive solution and including uncertainties. In this frame, new best-estimate simulation codes are of prime importance in order to analyze fast reactor steady states and transients. This thesis is focused on the development of a coupled code system for best-estimate analysis in fast critical reactors. Currently, due to the increase in computational resources, Monte Carlo methods for neutron transport can be used for detailed full-core calculations. Furthermore, Monte Carlo codes are usually taken as reference for deterministic multigroup diffusion codes in fast reactor applications because they employ point-wise cross sections in an exact geometry model and intrinsically account for the directional dependence of the flux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor. The subchannel code COBRA-IV calculates the temperature and density distribution within the reactor. COBRA-IV is suitable for fast reactor applications because it has been validated against experimental results in sodium rod bundles. The proper correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external file exchange. The main issue in the Monte Carlo/thermal-hydraulics steady-state coupling is the cross section handling to take into account Doppler broadening when temperature rises. Among all the available options, the pseudo-material approach has been chosen in this thesis. This approach obtains reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core are of major importance in fast reactors due to the large neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in the hot state or to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius with the temperature. This effect will be crucial in some unprotected transients. Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to achieve a just-critical reactor or to calculate control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady-state calculations for critical systems, the most straightforward option without modifying the MCNP source code is to use the flux factorization approach, solving separately the flux shape and amplitude. In this thesis two options have been studied to tackle time-dependent neutronic simulations using a Monte Carlo code: adiabatic and quasistatic methods.
The adiabatic method uses a staggered time coupling scheme for the time advance of the neutronics and thermal-hydraulics calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step, and COBRA-IV the thermal properties at the midpoint of each time step. To calculate the flux amplitude evolution, a solver of the point kinetics equations is used. This method calculates the static reactivity in each time step which, in general, differs from the dynamic reactivity calculated with the exact flux distribution. Nevertheless, for close-to-critical situations, both reactivities are similar and the method leads to acceptable practical results. Along this line, an improved method has been developed in an attempt to take into account the effect of the delayed neutron source on the evolution of the transient flux shape. The scheme performs a quasistationary calculation per time step with MCNP. This quasistationary simulation is based on the constant delayed source approach, taking into account the importance of each criticality cycle in the final flux estimation. Both the adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3, using a theoretical kinetics exercise. Finally, a transient in a critical 100 MWth lead-bismuth-eutectic reactor concept is analyzed using the adiabatic method as an application example in a real system.
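The flux-amplitude step of the adiabatic scheme relies on the point kinetics equations. A minimal sketch of such a solver is given below; the delayed-neutron data, generation time and reactivity history are illustrative assumptions, whereas in the thesis the reactivity comes from MCNP at each time step.

```python
# Minimal sketch of a point kinetics solver of the kind used to evolve the
# flux amplitude between Monte Carlo shape calculations. Kinetic parameters
# and the reactivity history are illustrative assumptions only.
import numpy as np

BETA = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])  # delayed fractions
LAMBDA = np.array([0.012, 0.031, 0.11, 0.32, 1.4, 3.9])            # decay constants [1/s]
GEN_TIME = 5e-7                                                     # prompt generation time [s]

def point_kinetics(reactivity, t_end=1.0, dt=1e-5, n0=1.0):
    """Integrate amplitude and precursor equations for a reactivity(t) history."""
    beta_tot = BETA.sum()
    n = n0
    C = BETA * n0 / (LAMBDA * GEN_TIME)   # equilibrium precursor concentrations
    ts = np.arange(0.0, t_end, dt)
    amp = np.empty_like(ts)
    for i, t in enumerate(ts):
        rho = reactivity(t)
        dn = ((rho - beta_tot) / GEN_TIME) * n + np.dot(LAMBDA, C)
        dC = BETA * n / GEN_TIME - LAMBDA * C
        n += dt * dn
        C += dt * dC
        amp[i] = n
    return ts, amp

if __name__ == "__main__":
    # Example: a +0.1 $ reactivity step at t = 0.1 s
    step = lambda t: 0.1 * BETA.sum() if t >= 0.1 else 0.0
    ts, amp = point_kinetics(step)
    print("amplitude at t_end:", amp[-1])
```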
Abstract:
Telepresence combines different sensory modalities, including, among others, vision and touch, to produce a feeling of remote presence in the operator. A key element in the implementation of telepresence systems that allow telemanipulation of the remote environment is force feedback. During a telemanipulation, mechanical energy is transferred between the human operator and the remote environment. In general, energy is a property of physical objects, fundamental to their mutual interaction. In this interaction, energy can be transmitted between objects and can change form, but it cannot be created or destroyed. In this thesis, this fundamental principle is applied to derive a new bilateral control method that allows the design of stable teleoperation systems for any conceivable architecture. The reasoning starts from the fact that the mechanical energy injected by the human operator into the system must be transferred to the remote environment and vice versa. As will be seen, using energy as the control variable allows a more general treatment of the system than conventional control based on system-specific variables. Through the concept of the Time Delay Power Network (TDPN), the problem of defining the energy flows in a teleoperation system is solved independently of the communication architecture. As will be shown, time delays are the main cause of virtual energy generation; this effect is observed with delays as small as 1 millisecond. This virtual energy is intrinsically added to the system and is the main cause of instability. It is shown that TDPNs are carriers of the desired energy exchanged between master and slave, but at the same time generators of virtual energy due to the time delay. Once these networks are identified, the Time Domain Passivity Control approach for TDPNs is proposed as a control mechanism to ensure the passivity of the system, and thus its stability. The method is based on the simple fact that this virtual energy due to the delay must be transformed into dissipation. The system thus approaches the desired one, in which only the energy injected at one end is transferred to the other. The resulting system presents two qualities: on the one hand, system stability is guaranteed independently of the system architecture and the communication channel; on the other, performance is maximized in terms of energy transfer fidelity. The proposed methods are supported by experimental systems with different control architectures and delays between 2 and 900 ms. The thesis concludes with an experiment that includes a space communication link based on the geostationary satellite ASTRA. ABSTRACT Telepresence combines different sensorial modalities, including vision and touch, to produce a feeling of being present in a remote location. The key element to successfully implement a telepresence system and thus to allow telemanipulation of a remote environment is force feedback. In a telemanipulation, mechanical energy must be conveyed from the human operator to the manipulated object found in the remote environment. In general, energy is a property of all physical objects, fundamental to their mutual interactions, in which energy can be transferred among the objects and can change form but cannot be created or destroyed.
In this thesis, we exploit this fundamental principle to derive a novel bilateral control mechanism that allows designing stable teleoperation systems with any conceivable communication architecture. The rationale starts from the fact that the mechanical energy injected by a human operator into the system must be conveyed to the remote environment and vice versa. As will be seen, setting energy as the control variable allows a more general treatment of the controlled system in contrast to the more conventional control of specific system variables. Through the Time Delay Power Network (TDPN) concept, the issue of defining the energy flows involved in a teleoperation system is solved with independence of the communication architecture. In particular, communication time delays are found to be a source of virtual energy. This fact is observed with delays starting from 1 millisecond. Since this energy is added, the resulting teleoperation system can be non-passive and thus become unstable. The Time Delay Power Networks are found to be carriers of the desired exchanged energy but also generators of virtual energy due to the time delay. Once these networks are identified, the Time Domain Passivity Control approach for TDPNs is proposed as a control mechanism to ensure system passivity and therefore system stability. The proposed method is based on the simple fact that this intrinsically added energy due to the communication must be transformed into dissipation. Then the system becomes closer to the desired one, where only the energy injected from one end of the system is conveyed to the other one. The resulting system presents two benefits: on the one hand, system stability is guaranteed through passivity independently from the chosen control architecture and communication channel; on the other, performance is maximized in terms of energy transfer faithfulness. The proposed methods are supported by a set of experimental implementations using different control architectures and communication delays ranging from 2 to 900 milliseconds. An experiment that includes a space communication link based on the geostationary satellite ASTRA concludes this thesis.
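The core mechanism of time-domain passivity control can be illustrated with a small sketch of the per-port observer/controller pair: the observer integrates the energy entering the port and, when it detects a negative balance (virtual energy generated by the delay), an adaptive damper dissipates the excess. The code below is an illustrative mock under a simple sign convention (positive power flows into the network), not the thesis implementation.

```python
# Illustrative mock (not the thesis code) of the basic passivity observer /
# passivity controller building block applied at a port of the delayed
# communication network. Convention: force * velocity is the power flowing
# into the network at this port.
class PassivityObserverController:
    def __init__(self, dt):
        self.dt = dt
        self.energy = 0.0   # observed net energy entering the port [J]

    def step(self, force, velocity):
        """Process one sample of the port variables; return the corrected force."""
        self.energy += force * velocity * self.dt
        if self.energy < 0.0 and abs(velocity) > 1e-9:
            # Damping gain chosen so the extra dissipation cancels the deficit
            alpha = -self.energy / (self.dt * velocity ** 2)
            force = force + alpha * velocity
            self.energy += alpha * velocity ** 2 * self.dt  # balance restored to ~0
        return force
```

In the TDPN formulation, one such observer/controller pair is attached to each port of every identified delay power network, so that only the energy actually injected at one end is delivered at the other.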
Abstract:
The European chestnut (Castanea sativa Mill.) is a multipurpose species that has been widely cultivated around the Mediterranean basin since ancient times. New varieties were brought to the Iberian Peninsula during the Roman Empire, and these have coexisted since then with native populations that survived the last glaciation. The relevance of chestnut cultivation had been steadily growing since the Middle Ages, until the rural decline of the past century put a stop to this trend; forest fires and diseases were also major factors. Chestnut cultivation is gaining momentum again due to its economic (wood, fruits) and ecological relevance, and currently represents an important asset in many rural areas of Europe. In this Thesis we apply different molecular tools to help improve current management strategies. For this study we have chosen El Bierzo (Castile and Leon, NW Spain), which has a centenary tradition of chestnut cultivation and management, and also presents several unique features from a genetic perspective (see below). Moreover, its nuts are widely appreciated in Spain and abroad for their organoleptic properties. We have focused our experimental work on two major problems faced by breeders and the industry: the lack of a fine-grained genetic characterization and the need for new strategies to control blight disease. To characterize with sufficient detail the genetic diversity and structure of El Bierzo orchards, we analyzed DNA from 169 trees grafted for nut production covering the entire region. We also analyzed 62 nuts from all traditional varieties. El Bierzo constitutes an outstanding scenario to study chestnut genetics and the influence of human management because: (i) it is located at one extreme of the distribution area; (ii) it is a major glacial refuge for the native species; (iii) it has a long tradition of human management (since Roman times, at least); and (iv) its geographical setting ensures an unusual degree of genetic isolation. Thirteen microsatellite markers provided enough informativeness and discrimination power to genotype at the individual level. Together with an unexpected level of genetic variability, we found evidence of genetic structure, with three major gene pools giving rise to the current population. High levels of genetic differentiation between groups supported this organization. Interestingly, genetic structure does not match spatial boundaries, suggesting that the exchange of material and cultivation practices have strongly influenced natural gene flow. The microsatellite markers selected for this study were also used to classify a set of 62 samples belonging to all traditional varieties. We identified several cases of synonymies and homonymies, evidencing the need to substitute traditional classification systems with new tools for genetic profiling. Management and conservation strategies should also benefit from these tools. The advent of high-throughput sequencing technologies, combined with the development of bioinformatics tools, has paved the way to study transcriptomes without the need for a reference genome. We took advantage of RNA sequencing and de novo assembly tools to determine the transcriptional landscape of chestnut in response to blight disease. In addition, we have selected a set of candidate genes with high potential for developing resistant varieties via genetic engineering. Our results evidenced a deep transcriptional reprogramming upon fungal infection. The plant hormones ET and JA appear to orchestrate the defensive response.
Interestingly, our results also suggest a role for auxins in modulating such a response. Many transcription factors were identified in this work that interact with promoters of genes involved in disease resistance. Among these genes, we have conducted a functional characterization of two major thaumatin-like proteins (TLPs) that belong to the PR5 family. Two genes encoding chestnut cotyledon TLPs, termed CsTL1 and CsTL2, had been previously characterized. We substantiate here, for the first time, their protective role against blight disease, including in silico, in vitro and in vivo evidence. The synergy between TLPs and other antifungal proteins, particularly endo-β-1,3-glucanases, bolsters their interest for future control strategies based on biotechnological approaches.
Abstract:
Traffic flow time series data are usually high-dimensional and very complex. They are also sometimes imprecise and distorted due to data collection sensor malfunction. Additionally, events like congestion caused by traffic accidents add more uncertainty to real-time traffic conditions, making traffic flow forecasting a complicated task. This article presents a new data preprocessing method targeting multidimensional time series with a very high number of dimensions and shows its application to real traffic flow time series from the California Department of Transportation (PEMS web site). The proposed method consists of three main steps. First, based on a language for defining events in multidimensional time series, mTESL, we identify a number of types of events in time series that correspond to either incorrect data or data with interference. Second, each event type is restored utilizing an original method that combines real observations, locally forecasted values and historical data. Third, an exponential smoothing procedure is applied globally to eliminate noise interference and other random errors so as to provide good-quality source data for future work.
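As an illustration of the third step, the sketch below applies simple exponential smoothing to a multidimensional traffic flow series. The smoothing factor and the synthetic data are assumptions for demonstration; the mTESL event detection and restoration steps are not reproduced here.

```python
# Illustrative sketch of the global smoothing pass: exponential smoothing
# applied independently to each dimension of a (time, stations) flow array.
# The smoothing factor and synthetic data are assumptions for demonstration.
import numpy as np

def exponential_smoothing(series: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Smooth each dimension of a (time, stations) array independently."""
    smoothed = np.empty_like(series, dtype=float)
    smoothed[0] = series[0]
    for t in range(1, len(series)):
        smoothed[t] = alpha * series[t] + (1.0 - alpha) * smoothed[t - 1]
    return smoothed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic day of 5-minute counts for 4 detector stations
    flow = 500 + 100 * np.sin(np.linspace(0, 6, 288))[:, None] + rng.normal(0, 30, (288, 4))
    clean = exponential_smoothing(flow, alpha=0.2)
    print(clean.shape)  # (288, 4): one smoothed series per station
```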
Abstract:
In this contribution, a novel iterative bit and power allocation (IBPA) approach is developed for transmitting a given bit/s/Hz data rate over a correlated, frequency non-selective (4 × 4) Multiple-Input Multiple-Output (MIMO) channel. The iterative resource allocation algorithm developed in this investigation aims at achieving the minimum bit-error rate (BER) in a correlated MIMO communication system. In order to achieve this goal, the available bits are iteratively allocated to the MIMO active layers that present the minimum transmit power requirement per time slot.
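The spirit of such an iterative allocation can be illustrated with a greedy bit-loading sketch: bits are assigned one at a time to the layer whose required transmit power increment is smallest, using the standard QAM SNR-gap approximation. The layer gains, SNR gap and target rate below are assumptions for demonstration, not the algorithm or parameters of the paper.

```python
# Illustrative greedy bit/power loading sketch: each bit goes to the active
# MIMO layer that needs the smallest extra transmit power, using the QAM
# SNR-gap approximation P(b) ~ gap * (2**b - 1) / gain. Gains, gap and the
# target rate are assumptions for demonstration only.
def required_power(bits: int, gain: float, gap: float = 4.0) -> float:
    """Transmit power needed to carry `bits` per symbol on a layer."""
    return gap * (2 ** bits - 1) / gain

def greedy_bit_allocation(layer_gains, total_bits, step=1):
    bits = [0] * len(layer_gains)
    for _ in range(0, total_bits, step):
        # Power increment if `step` more bits were placed on each layer
        increments = [
            required_power(b + step, g) - required_power(b, g)
            for b, g in zip(bits, layer_gains)
        ]
        best = min(range(len(layer_gains)), key=increments.__getitem__)
        bits[best] += step
    powers = [required_power(b, g) for b, g in zip(bits, layer_gains)]
    return bits, powers

if __name__ == "__main__":
    gains = [2.1, 1.2, 0.6, 0.15]      # example layer gains of a 4x4 MIMO channel
    bits, powers = greedy_bit_allocation(gains, total_bits=8)
    print("bits per layer:", bits, "total power:", round(sum(powers), 2))
```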
Abstract:
One of the main objectives of the European Commission related to climate and energy is the well-known set of 20-20-20 targets to be achieved by 2020: Europe has to reduce greenhouse gas emissions to at least 20% below 1990 levels, 20% of EU energy consumption has to come from renewable resources and, finally, a 20% reduction in primary energy use compared with projected levels has to be achieved by improving energy efficiency. In order to reach these objectives, it is necessary to reduce overall emissions, mainly in transport (reducing CO2, NOx and other pollutants), and to increase the penetration of intermittent renewable energy. A high deployment of battery electric vehicles (BEVs) and plug-in hybrid electric vehicles (PHEVs), with a low-cost source of energy storage, could help to achieve both targets. Hybrid electric vehicles (HEVs) use a combination of a conventional internal combustion engine (ICE) with one (or more) electric motors. There are different degrees of hybridization, from micro-hybrids with start-stop capability, through mild hybrids (with kinetic energy recovery) and medium hybrids (mild hybrids plus energy assist), to full hybrids (medium hybrids plus electric launch capability). These last types of vehicles use a typical battery capacity of around 1-2 kWh. Plug-in hybrid electric vehicles (PHEVs) use larger battery capacities to achieve a limited electric-only driving range. These vehicles are charged by on-board electricity generation or by plugging into electric outlets; a typical battery capacity is around 10 kWh. Battery Electric Vehicles (BEVs) are driven only by electric power and their typical battery capacity is around 15-20 kWh. One type of PHEV, the Extended Range Electric Vehicle (EREV), operates as a BEV until its plug-in battery capacity is depleted, at which point its gasoline engine powers an electric generator to extend the vehicle's range. The charging of PHEVs (including EREVs) and BEVs will have different impacts on the electric grid, depending on the number of vehicles and the start time for charging. The lecture will start by analyzing the electrical power requirements for charging PHEVs and BEVs in the Flanders region (Belgium) under different charging scenarios. Secondly, based on an activity-based microsimulation mobility model, an efficient method to reduce this impact will be presented.
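A simplified version of the kind of calculation behind such an impact analysis is sketched below: the aggregate grid load of a charging fleet is built from per-vehicle charging windows under an assumed start-time scenario. Fleet size, charger power and the start-time distribution are illustrative assumptions, not the Flanders data or the activity-based model.

```python
# Illustrative sketch: aggregate grid load from a fleet of plug-in vehicles
# under an "uncontrolled" charging scenario. All numbers are assumptions
# chosen only to show the shape of the calculation.
import numpy as np

def fleet_load(battery_kwh, charger_kw, start_hours, hours=24, step=0.25):
    """Return the time axis and aggregate load profile (kW) over one day."""
    t = np.arange(0, hours, step)
    load = np.zeros_like(t)
    duration = battery_kwh / charger_kw          # hours to fully recharge
    for start in start_hours:                    # one start time per vehicle
        charging = (t >= start) & (t < start + duration)
        load[charging] += charger_kw
    return t, load

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Uncontrolled scenario: most drivers plug in when arriving home around 18h
    starts = rng.normal(18.0, 1.5, size=1000) % 24
    t, load = fleet_load(battery_kwh=10, charger_kw=3.7, start_hours=starts)
    print("peak load [MW]:", round(load.max() / 1000, 2))
```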
Abstract:
The development of magnetic resonance imaging techniques has made it possible to study and quantify, in vivo, the changes in brain morphology associated with processes such as neurodevelopment, aging, learning or disease. A large number of morphometry methods have been developed to extract the information contained in these images and translate it into shape or size indicators, such as volume or cortical thickness; these markers are subsequently used to find statistical differences between populations of subjects or to correlate brain morphology with, for example, age or the severity of a given disease. Despite the wide variety of biomarkers and morphometry methodologies, many studies bias their hypotheses, and with them their experimental results, by using a reduced number of biomarkers or a single processing methodology. The present work aims to demonstrate the importance of using several morphometry methods to achieve a better characterization of the process under study. Shape analysis is used to detect both global and local differences in thalamic morphology between adolescent patients with early-onset psychosis and healthy adolescents. The results obtained show that the difference in thalamic volume between the two populations, previously described in the literature, is due to a reduction in the volume of the anterior-mediodorsal region and the pulvinar nucleus of the thalamus in patients with respect to healthy subjects. In addition, a longitudinal study in healthy subjects is described that simultaneously employs different biomarkers to characterize and quantify the changes in the morphology of the cerebral cortex during adolescence. This study reveals that the “flattening” of the cerebral cortex during adolescence is a consequence of a decrease in the depth, together with an increase in the width, of the cortical sulci. Finally, this methodology is applied, in a cross-sectional design, to study the causes of the decrease in both cortical thickness and gyrification index in adolescents with early-onset psychosis. ABSTRACT The ever-evolving sophistication of magnetic resonance imaging techniques continues to provide new tools to characterize and quantify, in vivo, brain morphologic changes related to neurodevelopment, senescence, learning or disease. The majority of morphometric methods extract shape or size descriptors such as volume, surface area, and cortical thickness from the MRI image. These morphological measurements are commonly entered into statistical analytic approaches for testing between-group differences or for correlations between the morphological measurement and other variables such as age, sex, or disease severity. A wide variety of morphological biomarkers are reported in the literature. Despite this wide range of potentially useful biomarkers and available morphometric methods, the hypotheses and findings of the grand majority of morphological studies are biased because reports assess only one morphometric feature and usually use only one image processing method.
Throughout this dissertation biomarkers and image processing strategies are combined to provide innovative and useful morphometric tools for examining brain changes during neurodevelopment. Specifically, a shape analysis technique allowing for a fine-grained assessment of regional thalamic volume in early-onset psychosis patients and healthy comparison subjects is implemented. Results show that disease-related reductions in global thalamic volume, as previously described by other authors, could be particularly driven by a deficit in the anterior-mediodorsal and pulvinar thalamic regions in patients relative to healthy subjects. Furthermore, in healthy adolescents different cortical features are extracted and combined and their interdependency is assessed over time. This study attempts to extend current knowledge of normal brain development, specifically the largely unexplored relationship between changes of distinct cortical morphological measurements during adolescence. This study demonstrates that cortical flattening, present during adolescence, is produced by a combination of age-related increase in sulcal width and decrease in sulcal depth. Finally, this methodology is applied to a cross-sectional study, investigating the mechanisms underlying the decrease in cortical thickness and gyrification observed in psychotic patients with a disease onset during adolescence.
Abstract:
This doctoral thesis falls within the field of reconfigurable embedded systems, wireless sensor networks for high-performance applications, and distributed computing. The document focuses on the study of processing alternatives for High-Performance Autonomous Distributed Systems (HPADS), as well as their evolution towards high-performance processing. The study has been carried out both at the platform level and at the level of the processing architectures within the platform, with the aim of optimizing aspects as relevant as the energy efficiency, computing capability and fault tolerance of the system. HPADS are feedback systems, normally formed by distributed elements, networked or not, with a certain capacity for adaptation and with enough intelligence to carry out prognosis and/or self-evaluation tasks. This class of systems usually forms part of more complex systems called Cyber-Physical Systems (CPSs). CPSs cover a huge spectrum of applications, ranging from medical applications to manufacturing or aerospace applications, among many others. For the design of this type of system, aspects such as dependability, the definition of models of computation, or the use of methodologies and/or tools that facilitate scalability and complexity management are fundamental. The first part of this doctoral thesis focuses on the study of those state-of-the-art platforms which, due to their characteristics, can be applied in the field of CPSs, as well as on the proposal of a new high-performance platform design that better fits the new and more demanding requirements of the new applications. This first part includes the description, implementation and validation of the proposed platform, as well as conclusions on its usability and limitations. The main objectives for the design of the proposed platform are listed below: • To study the feasibility of using a RAM-based FPGA as the main processor of the platform in terms of energy consumption and computing capability. • To propose energy management techniques for each stage of the working profile of the platform. • To propose the inclusion of Dynamic Partial Reconfiguration (DPR) of the FPGA, so that it is possible to change certain parts of the system at run time without interrupting the rest, and to evaluate its applicability in the case of HPADS. The new applications and new scenarios faced by CPSs impose new requirements in terms of the bandwidth needed for data processing, as well as for data acquisition and communication, in addition to a clear increase in the complexity of the algorithms employed. In order to meet these new requirements, platforms are migrating from traditional 8-bit single-processor systems to hybrid hardware-software systems that include several processors, or several processors and programmable logic.
Among these new architectures, FPGAs and Systems on Chip (SoC) that include embedded processors and programmable logic provide solutions with very good results in terms of energy consumption, price, computing capability and flexibility. These results are even better when the applications have high computing requirements and when the working conditions are very likely to change at run time. The platform proposed in this doctoral thesis has been named HiReCookie. The architecture includes a RAM-based FPGA as its only processor, as well as a design compatible with the wireless sensor network platform developed at the Centro de Electrónica Industrial of the Universidad Politécnica de Madrid (CEI-UPM), known as Cookies. This FPGA, a Spartan-6 LX150, was, at the time this work started, the best option in terms of power consumption and amount of integrated resources, while also allowing the use of dynamic partial reconfiguration. It is important to note that, although its power figures are the lowest in this family of devices, the instantaneous power consumed is still very high for systems that have to work distributed, autonomously, and in most cases powered by batteries. For this reason, it is necessary to include energy-saving strategies in the design to increase the usability and lifetime of the platform. The first strategy implemented consists of dividing the platform into different power islands, so that only those elements that are strictly necessary remain powered, while the rest can be completely switched off. In this way, it is possible to combine different operating modes and thus greatly optimize energy consumption. Switching off the FPGA to save energy during idle periods implies the loss of its configuration, since the configuration memory is volatile. To reduce the impact, in consumption and time, of completely reconfiguring the platform once it is powered on again, this work includes a technique for compressing the FPGA configuration file, so as to reduce the configuration time and hence the energy consumed. Although several of the design requirements can be satisfied with the design of the HiReCookie platform, it is necessary to keep optimizing parameters such as energy consumption, fault tolerance and processing capability. This is only possible by exploiting all the possibilities offered by the processing architecture inside the FPGA. Therefore, the second part of this doctoral thesis focuses on the design of a reconfigurable architecture named ARTICo3 (Arquitectura Reconfigurable para el Tratamiento Inteligente de Cómputo, Confiabilidad y Consumo de energía) to improve these parameters through a dynamic use of resources. ARTICo3 is a processing architecture for RAM-based FPGAs, with bus-based communication, prepared to support the dynamic management of the internal resources of the FPGA at run time thanks to the inclusion of dynamic partial reconfiguration.
Thanks to this partial reconfiguration capability, it is possible to adapt the levels of processing capability, energy consumption or fault tolerance to respond to the demands of the application, the environment, or internal metrics of the device by adapting the number of resources assigned to each task. This second part of the thesis details the design of the architecture, its implementation on the HiReCookie platform as well as on another FPGA family, and its validation by means of different tests and demonstrations. The main objectives set for the architecture are the following: • To propose a methodology based on a multi-thread approach, such as those proposed by CUDA (Compute Unified Device Architecture) or OpenCL, in which different kernels, or execution units, run on a variable number of hardware accelerators without requiring changes to the application code. • To propose a design and provide an architecture in which the working conditions change dynamically depending either on external parameters or on parameters that indicate the state of the platform, such changes in the working point of the architecture being made possible by the dynamic partial reconfiguration of hardware accelerators at run time. • To exploit the possibilities of concurrent processing, even in a bus-based architecture, by optimizing burst data transactions towards the accelerators. • To take advantage of the acceleration achieved by purely hardware modules to obtain better energy efficiency. • To be able to change the hardware redundancy levels dynamically according to the needs of the system, in real time and without changes to the application code. • To propose an abstraction layer between the application code and the dynamic use of the FPGA resources. Design on FPGAs allows the use of hardware modules specifically created for a particular application. In this way it is possible to obtain much higher performance than with general-purpose architectures. Moreover, some FPGAs allow dynamic partial reconfiguration of certain parts of their logic at run time, which gives the design great flexibility. FPGA manufacturers offer predefined architectures with the possibility of adding pre-designed blocks to build systems on chip in a more or less direct way. However, the way in which these hardware modules are organized within the internal architecture, either statically or dynamically, or the way in which information is exchanged between them, greatly influences the computing capability and energy efficiency of the system. Likewise, the ability to load hardware modules on demand makes it possible to add redundant blocks that increase the fault tolerance level of the systems. However, the complexity involved in the design of dedicated hardware blocks must not be underestimated. It must be taken into account that the design of a hardware block is not only the block itself, but also the design of its interfaces and, in some cases, of the software drivers to manage it. Furthermore, as more blocks are added, the design space becomes more complex and its programming more difficult.
Although most manufacturers offer predefined interfaces, commercial IPs (Intellectual Property blocks) and templates to help with system design, in order to exploit the real possibilities of the system it is necessary to build architectures on top of the established ones to facilitate the use of parallelism and redundancy, and to provide an environment that supports the dynamic management of resources. To provide this kind of support, ARTICo3 works within a solution space formed by three fundamental axes: computation, energy consumption and dependability. In this way, each working point is obtained as a trade-off solution among these three parameters. Through the use of dynamic partial reconfiguration and an improvement in the data transfers between the main memory and the accelerators, it is possible to dedicate a variable number of resources over time to each task, which makes the internal resources of the FPGA virtually unlimited. This variation over time of the number of resources per task can be used either to increase the level of parallelism, and hence of acceleration, or to increase the redundancy, and therefore the level of fault tolerance. At the same time, using an optimal number of resources for a task improves energy consumption, since it is possible to reduce either the instantaneous power consumed or the processing time. With the aim of keeping the complexity levels within reasonable limits, it is important that the changes made in the hardware are completely transparent to the application code. In this respect, different levels of transparency are included: • Scalability transparency: the resources used by a given task can be modified without any change to the application code. • Performance transparency: the system increases its performance when the workload increases, without changes to the application code. • Replication transparency: it is possible to use multiple instances of the same module either to add redundancy or to increase the processing capability, all without changes to the application code. • Location transparency: the physical position of the hardware modules is arbitrary as far as their addressing from the application code is concerned. • Failure transparency: if there is a failure in a hardware module, thanks to the redundancy the application code directly obtains the correct result. • Concurrency transparency: the fact that a task is carried out by more or fewer blocks is transparent to the code that invokes it. This doctoral thesis therefore contributes along two different lines: first, with the design of the HiReCookie platform, and second, with the design of the ARTICo3 architecture. The main contributions of this thesis are summarized below. • Architecture of the HiReCookie platform, including: o Compatibility with the Cookies platform in order to increase its capabilities. o Division of the architecture into different power islands. o Implementation of the various low-power modes and wake-up policies of the node. o Creation of a compressed FPGA configuration file to reduce the time and energy of the initial configuration.
• Design of the ARTICo3 reconfigurable architecture for RAM-based FPGAs: o A model of computation and execution modes inspired by the CUDA model but based on reconfigurable hardware, with a variable number of thread blocks per execution unit. o A structure to optimize burst data transactions, providing data in cascade or in parallel to the different modules, including a majority voting process and reduction operations. o An abstraction layer between the main processor, which runs the application code, and the resources assigned to the different tasks. o An architecture for the reconfigurable hardware modules that preserves scalability, adding an interface for new functionality through a simple access to an internal RAM memory. o Online characterization of the tasks to provide information to a resource management module in order to improve operation in terms of energy and processing, also when operating across different fault tolerance levels. The document is divided into two main parts comprising a total of five chapters. First, after motivating the need for new platforms to cover the new applications, the design of the HiReCookie platform is detailed: its parts, the possibilities for lowering energy consumption, use cases of the platform, and validation tests of the design. The second part of the document describes the reconfigurable architecture, its implementation on several FPGAs, and validation tests in terms of processing capability and energy consumption, including how these aspects are affected by the chosen fault tolerance level. The chapters of the document are the following: Chapter 1 analyses the main objectives, motivation and theoretical background needed to follow the rest of the document. Chapter 2 focuses on the design of the HiReCookie platform and its possibilities for reducing energy consumption. Chapter 3 describes the ARTICo3 reconfigurable architecture. Chapter 4 focuses on the validation tests of the architecture, using the HiReCookie platform for most of the tests; an application example is shown to analyse the operation of the architecture. Chapter 5 concludes this doctoral thesis with the conclusions obtained, the original contributions of the work, the results, and future lines of work. ABSTRACT This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for high-performance autonomous distributed systems (HPADS) as well as their evolution towards High Performance Computing (HPC) systems. The approach of the study is focused on both platform and processor levels to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are considered feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as military, health care, manufacturing, aerospace, etc. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools to support scalability and complexity management are required.
The first part of the document studies the different possibilities at platform design level in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following: • Study the feasibility of using SRAM-based FPGAs as the main processor of the platform in terms of energy consumption and performance for highly demanding applications. • Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform. • Provide a solution with dynamic partial and wireless remote HW reconfiguration (DPR) to be able to change certain parts of the FPGA design at run time and on demand without interrupting the rest of the system. • Demonstrate the applicability of the platform in different test-bench applications. In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-critical systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth or in the complexity of the algorithms used are provoking the migration of the platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instantaneous power consumption but higher energy efficiency. Within these solutions, FPGAs and Systems on Chip including FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when they are used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required. The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power. Although its power levels are the lowest of this kind of device, they can still be very high for distributed systems that normally work powered by batteries. For that reason, it is necessary to include different energy-saving possibilities to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows a combination of different low-power modes to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not alive at power up. Therefore, recovering the system from a switch-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce time and energy during the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance.
This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the so-called ARTICo3 FPGA architecture to enhance the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable, bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs. The architecture provides support for dynamic resource management in real time. In this way, by using DPR, it will be possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows: • Provide a method based on a multithread approach, such as the kernel executions offered by CUDA (Compute Unified Device Architecture) or OpenCL, where kernels are executed on a variable number of HW accelerators without requiring application code changes. • Provide an architecture to dynamically adapt working points according to either self-measured or external parameters in terms of energy consumption, fault tolerance and computing performance; taking advantage of DPR capabilities, the architecture must provide support for a dynamic use of resources in real time. • Exploit concurrent processing capabilities in a standard bus-based system by optimizing data transactions to and from HW accelerators. • Measure the advantage of HW acceleration as a technique to boost performance, improve processing times and save energy by reducing active times for distributed embedded systems. • Dynamically change the levels of HW redundancy to adapt fault tolerance in real time. • Provide HW abstraction from SW application design. FPGAs give the possibility of designing specific HW blocks for every required task to optimise performance, while some of them also support DPR. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption. At the same time, fault tolerance and security techniques can also be dynamically included using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible. It consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by the majority of manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities. By adding new architectures on top of them, it is possible to take advantage of parallelization and HW redundancy while providing a framework to ease the use of dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: Computation, Consumption, and Fault Tolerance. Therefore, every working point is found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of available resources for any application is virtually unlimited.
Taking advantage of DPR capabilities and of a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task in order either to boost computing speed or to add HW redundancy and a voting process that increases the fault-tolerance level (a minimal sketch of this voting step is given below). At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instantaneous power or computing time. In order to keep complexity within reasonable limits, it is important that HW changes remain transparent to the application code. Therefore, different levels of transparency are targeted by the system:
• Scalability transparency: a task must be able to expand its resources without changing the system structure or the application algorithms.
• Performance transparency: the system must reconfigure itself as the load changes.
• Replication transparency: multiple instances of the same task are loaded to increase reliability and performance.
• Location transparency: resources are accessed by the application code with no knowledge of their location.
• Failure transparency: a task must be completed despite the failure of some components.
• Concurrency transparency: different tasks work concurrently in a way that is transparent to the application code.
Therefore, as can be seen, the Thesis contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below:
• Architecture of the HiReCookie platform, including:
 o Compatibility of the processing layer for high-performance applications with the Cookies Wireless Sensor Network platform, for fast prototyping and implementation.
 o A division of the architecture into power islands.
 o All the different low-power modes.
 o The creation of the partial-initial bitstream together with the wake-up policies of the node.
• The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3, including:
 o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW, with a dynamic number of thread blocks per kernel.
 o A structure to optimise burst data transactions, providing coalesced or parallel data to the HW accelerators, a parallel voting process and a reduction operation.
 o The abstraction provided to the host processor with respect to the operation of the kernels in terms of number of replicas, modes of operation, location in the reconfigurable area and addressing.
 o The architecture of the modules representing the thread blocks, which makes the system scalable: new functional units are added simply by adding an access to a BRAM port.
 o The online characterization of the kernels, which provides a scheduler or resource manager with information on energy consumption and processing time when changing among fault-tolerance levels, as well as on whether a kernel is expected to operate in the memory-bound or compute-bound region.
The document of the Thesis is divided into two main parts, with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications. Therefore, the second part describes the architecture implemented inside the FPGA, called ARTICo3, proposed in this PhD Thesis.
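As anticipated above, replication plus voting is what turns redundant accelerators into fault tolerance. The sketch below only shows the principle, in software and with an injected fault; in the architecture described here the voting would be performed in hardware during the data transfer, and the names used are invented for illustration.

```python
# Illustrative majority voting over the outputs of replicated accelerators.
# An upset is injected into one replica to show how the vote masks it.

from collections import Counter

def vote(replica_outputs):
    """Word-by-word majority vote across replicated outputs."""
    voted = []
    for words in zip(*replica_outputs):
        value, count = Counter(words).most_common(1)[0]
        if count <= len(words) // 2:
            raise RuntimeError("no majority: unrecoverable fault")
        voted.append(value)
    return voted

golden = [1, 2, 3, 4]
faulty = [1, 2, 99, 4]                   # one replica suffers an upset
print(vote([golden, golden, faulty]))    # -> [1, 2, 3, 4]
```

With three replicas a single faulty word is masked; with two replicas the same fault could only be detected, not corrected, which is the usual trade-off between resource cost and fault-tolerance level.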
The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how this impacts energy and processing time. Chapter 1 presents the main goals of this PhD Thesis and the technology background required to follow the rest of the document. Chapter 2 gives all the details about the design of the FPGA-based HiReCookie platform. Chapter 3 describes the ARTICo3 architecture. Chapter 4 focuses on the validation tests of the ARTICo3 architecture: a proof-of-concept application is explained in which typical kernels related to image processing and encryption algorithms are used, and further experimental analyses are performed with these kernels. Chapter 5 concludes the document, summarizing the conclusions, the contributions of the work, and possible future lines of research.
Resumo:
Frequently, the population ecology of marine organisms uses a descriptive approach in which sizes and densities are plotted over time. This approach has limited usefulness for designing management strategies or modelling different scenarios. Population projection matrix models are among the most widely used tools in ecology. Unfortunately, for the majority of pelagic marine organisms it is difficult to mark individuals and follow them over time in order to determine their vital rates and build a population projection matrix model. Nevertheless, it is possible to obtain time-series data on size structure and the density of each size class, and from these to determine the matrix parameters. This approach is known as the "demographic inverse problem"; it is based on quadratic programming methods but has rarely been applied to aquatic organisms. We used unpublished field data on a population of the cubomedusa Carybdea marsupialis to construct a population projection matrix model and compare two different management strategies for lowering the population to its pre-2008 values, when there was no significant interaction with bathers: direct removal of medusae and reduction of prey. Our results showed that removal of jellyfish from all size classes was more effective than removing only juveniles or adults. When reducing prey, the highest efficiency in lowering the C. marsupialis population occurred when prey depletion affected the prey of all medusa sizes. Our model fitted the field data well and may serve to design an efficient management strategy or to build hypothetical scenarios such as removal of individuals or prey reduction. This method is applicable to other marine or terrestrial species for which density and population structure over time are available.
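For readers unfamiliar with projection matrix models, the sketch below shows the basic projection step the abstract refers to: a size-structured (Lefkovitch-type) matrix is iterated forward in time, here with a removal rate applied to all size classes. The matrix entries and rates are invented for illustration and are not the fitted C. marsupialis parameters.

```python
import numpy as np

# Hypothetical 3-stage projection matrix (juvenile, subadult, adult).
# Entries (survival, growth, fecundity) are illustrative only.
A = np.array([
    [0.10, 0.00, 4.50],   # juvenile stasis + fecundity of adults
    [0.30, 0.40, 0.00],   # growth juvenile -> subadult, subadult stasis
    [0.00, 0.35, 0.55],   # growth subadult -> adult, adult stasis
])

def project(n0, years, removal=0.0):
    """Project stage densities, removing a fraction of every class each year."""
    n = np.array(n0, dtype=float)
    trajectory = [n.copy()]
    for _ in range(years):
        n = A @ n * (1.0 - removal)
        trajectory.append(n.copy())
    return np.array(trajectory)

baseline = project([100, 40, 20], years=10)
managed  = project([100, 40, 20], years=10, removal=0.3)
print("total density, year 10:", baseline[-1].sum(), "vs", managed[-1].sum())
```

The inverse problem mentioned in the abstract works in the opposite direction: the trajectory (size structure and densities over time) is observed, and the entries of A are estimated, e.g. by quadratic programming, subject to biological constraints.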
Resumo:
Willard apologizes for not writing or visiting in a long time, discusses a problem with getting his sister Sophronia to visit in Deerfield, and writes: “I think I have a wife, who will do everything in her power to make me happy.”
Resumo:
The steep environmental gradients of mountain ecosystems over short distances reflect large gradients of several climatic parameters and hence provide excellent possibilities for ecological research on the effects of environmental change. To gain a better understanding of the dynamics of abiotic and biotic parameters of mountain ecosystems, long-term records are required, since permanent plots in mountain regions cover at best about 50-70 years. In order to extend investigations of ecological dynamics beyond these temporal limitations of permanent plots, paleoecological approaches can be used if the sampling resolution can be adapted to ecological research questions, e.g. a sample every 10 years. Paleoecological studies in mountain ecosystems can provide new ecological insights through the combination of different spatial and temporal scales. If we thus improve our understanding of processes across both steep environmental gradients and different time scales, we may be able to better estimate ecosystem responses to current and future environmental change (Ammann et al. 1993; Lotter et al. 1997). The complexity of ecological interactions in mountain regions forces us to concentrate on a number of sub-systems, without losing sight of the wider context. Here, we summarize a few case studies on the effects of Holocene climate change and disturbance on the vegetation of the Western Alps. To categorize the main response modes of vegetation to climatic change and disturbance in the Alps, we use three classes of ecological behaviour: "resilience", "adjustment", and "vulnerability". We assume a resilient (or elastic) behaviour if vegetation is able to recover to its former state, regaining important ecosystem characteristics such as floristic composition, biodiversity, species abundances, and biomass (e.g. Küttel 1990; Aber and Melillo 1991). Conversely, vegetation displacements may occur in response to climatic change and/or disturbance. In some cases, this may culminate in irreversible large-scale processes such as species and/or community extinctions. Such drastic developments indicate high ecosystem vulnerability (or inelasticity or instability; for detailed definitions see Küttel 1990; Aber and Melillo 1991) to climatic change and/or disturbance. In this sense, the "vulnerability" (or instability) of an ecosystem is expressed by the degree of failure to recover to its original state before disturbance and/or climatic change. Between these two extremes (resilience vs. vulnerability), ecosystem adjustments to climatic change and/or disturbance may occur, including the appearance of new species and/or the disappearance of old ones. The term "adjustment" is hence used to indicate the response of vegetational communities that adapted to new environmental conditions without losing their main character. For forest ecosystems, we assume vegetational adjustments (rather than vulnerability) if the dominant (or co-dominant) tree species are not outnumbered or replaced by formerly unimportant plant species or new invaders. Adaptation as a genetic process is not discussed here and will require additional phylogeographical studies (incorporating the analysis of ancient DNA) in order to fully understand the distributions of ecotypes.
Resumo:
Purpose. Drivers adopt smaller safety margins when pulling out in front of motorcycles compared with cars. This could partly account for why the most common motorcycle/car accident involves a car violating a motorcyclist's right of way. One possible explanation is the size-arrival effect in which smaller objects are perceived to arrive later than larger objects. That is, drivers may estimate the time to arrival of motorcycles to be later than cars because motorcycles are smaller. Methods. We investigated arrival time judgments using a temporal occlusion paradigm. Drivers recruited from the student population (n = 28 and n = 33) saw video footage of oncoming vehicles and had to press a response button when they judged that vehicles would reach them. Results. In experiment 1, the time to arrival of motorcycles was estimated to be significantly later than larger vehicles (a car and a van) for different approach speeds and viewing times. In experiment 2, we investigated an alternative explanation to the size-arrival effect: that the smaller size of motorcycles places them below the threshold needed for observers to make an accurate time to arrival judgment using tau. We found that the motorcycle/car difference in arrival time estimates was maintained for very short occlusion durations when tau could be estimated for both motorcycles and cars. Conclusions. Results are consistent with the size-arrival effect and are inconsistent with the tau threshold explanation. Drivers estimate motorcycles will reach them later than cars across a range of conditions. This could have safety implications.
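For readers unfamiliar with tau, the quantity referred to in experiment 2 is the standard optical time-to-arrival variable (Lee's tau), which relates an object's angular size on the retina to its rate of expansion. The formula below is the textbook definition, not a result taken from this study:

```latex
\tau(t) \;\approx\; \frac{\theta(t)}{\dot{\theta}(t)}
```

where θ(t) is the optical (angular) size of the approaching vehicle and θ̇(t) its rate of expansion. For a small object such as a motorcycle, both quantities are small, which is the basis of the tau-threshold explanation tested and rejected in experiment 2.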
Resumo:
Background and Aims Plants regulate their architecture strongly in response to density, and there is evidence that this involves changes in the duration of leaf extension. This questions the approximation, central in crop models, that development follows a fixed thermal time schedule. The aim of this research is to investigate, using maize as a model, how the kinetics of extension of grass leaves change with density, and to propose directions for inclusion of this regulation in plant models. • Methods Periodic dissection of plants allowed the establishment of the kinetics of lamina and sheath extension for two contrasting sowing densities. The temperature of the growing zone was measured with thermocouples. Two-phase (exponential plus linear) models were fitted to the data, allowing analysis of the timing of the phase changes of extension, and the extension rate of sheaths and blades during both phases. • Key Results The duration of lamina extension dictated the variation in lamina length between treatments. The lower phytomers were longer at high density, with delayed onset of sheath extension allowing more time for the lamina to extend. In the upper phytomers—which were shorter at high density—the laminae had a lower relative extension rate (RER) in the exponential phase and delayed onset of linear extension, and less time available for extension since early sheath extension was not delayed. • Conclusions The relative timing of the onset of fast extension of the lamina with that of sheath development is the main determinant of the response of lamina length to density. Evidence is presented that the contrasting behaviour of lower and upper phytomers is related to differing regulation of sheath ontogeny before and after panicle initiation. A conceptual model is proposed to explain how the observed asynchrony between lamina and sheath development is regulated.
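The "two-phase (exponential plus linear)" model mentioned in the Methods can be fitted, for example, as a continuous piecewise function of thermal time. The sketch below uses synthetic data and scipy; the parameter values and variable names are illustrative assumptions, not the values reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_phase(t, L0, r, t_b):
    """Exponential extension up to breakpoint t_b, then linear extension
    continuing at the rate reached at t_b (keeps the curve continuous)."""
    L_b = L0 * np.exp(r * t_b)        # length at the breakpoint
    slope = r * L_b                    # derivative of the exponential at t_b
    return np.where(t < t_b, L0 * np.exp(r * t), L_b + slope * (t - t_b))

# Synthetic 'lamina length vs thermal time' data (illustrative values only).
t = np.linspace(0, 300, 60)                        # thermal time (degree-days)
true = two_phase(t, L0=2.0, r=0.02, t_b=150.0)
rng = np.random.default_rng(0)
obs = true + rng.normal(0.0, 2.0, size=t.size)

popt, _ = curve_fit(two_phase, t, obs, p0=[1.0, 0.01, 100.0])
print("fitted L0, RER, breakpoint:", popt)
```

In this framing, the fitted exponential rate corresponds to the relative extension rate (RER) discussed in the Key Results, and the breakpoint marks the onset of the linear phase whose timing relative to sheath development drives the density response.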