17 results for Topdown
Abstract:
In primates, the observation of meaningful, goal-directed actions engages a network of cortical areas located within the premotor and inferior parietal lobules. Current models suggest that activity within these regions arises relatively automatically during passive action observation, without the need for top-down control. Here we used functional magnetic resonance imaging to determine whether cortical activity associated with action observation is modulated by the strategic allocation of selective attention. Normal observers viewed movie clips of reach-to-grasp actions while performing an easy or difficult visual discrimination at the fovea. A whole-brain analysis was performed to determine the effects of attentional load on neural responses to observed hand actions. Our results suggest that cortical areas involved in action observation are significantly modulated by attentional load. These findings have important implications for recent attempts to link the human action-observation system to response properties of "mirror neurons" in monkeys.
Abstract:
Top predator loss is a major global problem, with a current trend in biodiversity loss towards high trophic levels that modifies most ecosystems worldwide. Most research in this area is focused on large-bodied predators, despite the high extinction risk of small-bodied freshwater fish that often act as apex consumers. Consequently, it remains unknown whether intermittent streams are affected by the consequences of top-predator extirpations. The aim of our research was to determine how this global problem affects intermittent streams and, in particular, whether the loss of a small-bodied top predator (1) leads to a 'mesopredator release', affects primary consumers and changes whole community structures, and (2) triggers a cascade effect modifying ecosystem function. To address these questions, we studied the top-down effects of a small endangered fish species, Barbus meridionalis (the Mediterranean barbel), conducting an enclosure/exclosure mesocosm experiment in an intermittent stream where B. meridionalis became locally extinct following a wildfire. We found that top predator absence led to 'mesopredator release', and also to 'prey release' despite intraguild predation, which contrasts with traditional food web theory. In addition, B. meridionalis extirpation changed whole macroinvertebrate community composition and increased total macroinvertebrate density. Regarding ecosystem function, periphyton primary production decreased in apex consumer absence. In this study, the apex consumer was functionally irreplaceable; its local extinction led to the loss of an important functional role that resulted in major changes to the ecosystem's structure and function. This study evidences that intermittent streams can be affected by the consequences of apex consumer extinctions, and that the loss of small-bodied top predators can lead to large ecosystem changes.
We recommend the reintroduction of small-bodied apex consumers to systems where they have been extirpated, to restore ecosystem structure and function.
Abstract:
The Finnish Border Guard (Rajavartiolaitos) is a security authority of considerable societal significance. Its maritime role within the field of internal security is particularly pronounced because of its statutory maritime duties, and substantial goals have accordingly been set for that role in the agency's strategy document. The implementation of the Border Guard's strategy, or its performance therein, has not previously been examined by means of scientific research. The purpose of this study was to construct a theoretical framework to support research on the implementation of the Border Guard's strategy. The perspective adopted for constructing the framework was maritime. This framework makes it possible to study success in strategy implementation specifically from a maritime perspective. By examining the realization or manifestations of implementation-related factors in the Border Guard's implementation process, one can also arrive at an assessment of the agency's societal effectiveness and, ultimately, its role. The actual study of implementation, using the theoretical framework constructed here, is left for follow-up research. The research field of implementation encompasses several different theoretical schools. In the absence of an established paradigm, theory-driven content analysis was used to form a comprehensive picture of the prevailing theories. The conceptual world of top-down theory was chosen as the analytical frame for the content analysis. The works analyzed were selected from among research publications and articles representing the strongest tradition of implementation research, with the aim of covering the most significant mainstream approaches in the field. To construct a theory supporting research on the implementation of the Border Guard's strategy, the theories emerging from the content analysis had to be tied, where applicable, to the real world, that is, to the agency's processes and operating environment.
To this end, an understanding of the Border Guard's strategic operating environment was formed, and the implementation processes were tied to governance theory using the Multiple Governance Framework (MGF). The actual framework and its associated factors were selected from the analyzed material through abductive reasoning. The study concludes that the Border Guard exhibits, to a certain extent, features associated with the top-down research tradition. However, that theory does not in all respects account for the special characteristics identified in the study. By combining the properties of the MGF with the applicable factors of top-down theory, a hypothesis of a framework supporting the study of the Border Guard's implementation was formed. The framework was tied to the Border Guard, but its operationalization was left as a task for follow-up research.
Abstract:
Graphene is a material with extraordinary properties. Its mechanical and electrical properties are unparalleled, but the difficulties in its production are hindering its breakthrough in applications. Graphene is a two-dimensional material made entirely of carbon atoms, and it is only a single atom thick. In this work, the properties of graphene and graphene-based materials are described, together with their common preparation techniques and related challenges. This Thesis concentrates on the top-down techniques, in which natural graphite is used as a precursor for graphene production. Graphite consists of graphene sheets stacked tightly together. In the top-down techniques, various physical or chemical routes are used to overcome the forces keeping the graphene sheets together, and many of them are described in the Thesis. The most common chemical method is the oxidation of graphite with strong oxidants, which creates water-soluble graphene oxide. The properties of graphene oxide differ significantly from pristine graphene and, therefore, graphene oxide is often reduced to form materials collectively known as reduced graphene oxide. In the experimental part, the main focus is on the chemical and electrochemical reduction of graphene oxide. A novel chemical route using vanadium is introduced and compared to other common chemical graphene oxide reduction methods. A strong emphasis is placed on the electrochemical reduction of graphene oxide in various solvents. Raman and infrared spectroscopy are both used in in situ spectroelectrochemistry to closely monitor the spectral changes during the reduction process. These in situ techniques allow precise control over the reduction process, and even small changes in the material can be detected. Graphene and few-layer graphene were also prepared using a physical force to separate these materials from graphite.
Special adsorbate molecules in aqueous solutions, together with sonic treatment, produce stable dispersions of graphene and few-layer graphene sheets in water. This mechanical exfoliation method damages the graphene sheets considerably less than the chemical methods, although it suffers from a lower yield.
Abstract:
Floods are a major threat to human existence and historically have both caused the collapse of civilizations and forced the emergence of new cultures. The physical processes of flooding are complex. Increased population, climate variability, changes in catchment and channel management, modified land use and land cover, and natural change of floodplains and river channels all lead to changes in flood dynamics and, as a direct or indirect consequence, in the social welfare of humans. Section 5.16.1 explores the risks and benefits brought about by floods and reviews the responses of floods and floodplains to climate and land-use change. Section 5.08.2 reviews the existing modeling tools, and the top-down and bottom-up modeling frameworks that are used to assess impacts on future floods. Section 5.08.3 discusses changing flood risk and socioeconomic vulnerability based on current trends in emerging or developing countries and presents an alternative paradigm as a pathway to resilience. Section 5.08.4 concludes the chapter by presenting a portfolio of integrated concepts, measures, and avant-garde thinking that would be required to sustainably manage future flood risk.
Abstract:
Phytoplankton growth depends on several abiotic (nutrients, temperature) and biotic (predation by zooplankton) variables. In this work, a mathematical model was developed in the Stella software to understand the planktonic dynamics of Extremoz Lagoon (RN) and to simulate scenarios under different environmental conditions. Data were collected monthly at two points of the lagoon. The state variables are phytoplankton and zooplankton, and the forcing variables are nitrogen, phosphorus and temperature. The results show that: a) the model is well coupled, especially when some constants assume different values; b) simulated reductions in nutrient concentrations decrease phytoplankton biomass, but an increase in nutrients does not stimulate growth; c) changes in temperature do not change phytoplankton biomass; d) changes in zooplankton biomass directly affect and reduce phytoplankton, indicating a top-down control mechanism; e) changes in nutrient concentrations modified zooplankton biomass, suggesting a rapid flow of energy between nutrients, phytoplankton and zooplankton, and an ecosystem likely arranged as an inverted pyramid (higher concentration of zooplankton than phytoplankton).
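The top-down control described in item d) can be sketched with a minimal nutrient-phytoplankton simulation in which zooplankton biomass Z acts as a fixed forcing. All rates, functional forms and initial values below are illustrative assumptions, not the calibrated Stella model from the study.

```python
# Minimal sketch of top-down control: higher (fixed) zooplankton biomass Z
# increases grazing pressure and depresses phytoplankton biomass P.
# Parameters are illustrative assumptions, not the study's Stella model.

def simulate(Z, days=30, dt=0.1):
    """Euler integration of a closed nutrient (N) / phytoplankton (P) pair."""
    N, P = 5.0, 1.0
    for _ in range(int(days / dt)):
        uptake = 1.0 * N / (0.5 + N) * P        # Michaelis-Menten nutrient uptake
        dN = -uptake                            # nutrients consumed by growth
        dP = uptake - 0.4 * P * Z - 0.05 * P    # growth minus grazing and mortality
        N = max(N + dN * dt, 0.0)               # clamp to avoid negative nutrients
        P = P + dP * dt
    return P

# More zooplankton -> more grazing -> less phytoplankton (top-down control).
p_low_grazing = simulate(Z=0.25)
p_high_grazing = simulate(Z=1.0)
```

Running the two scenarios shows the qualitative behavior reported in the abstract: the high-grazing run ends with less phytoplankton biomass than the low-grazing run.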
Abstract:
This paper considers the importance of using a top-down methodology and suitable CAD tools in the development of electronic circuits. The paper presents an evaluation of the methodology used in a computational tool created to support the synthesis of digital-to-analog converter models by translating between different tools used in a wide variety of applications. This tool is named MS2SV and works directly with the following two commercial tools: MATLAB/Simulink and SystemVision. Model translation of an electronic circuit is achieved by translating a mixed-signal block diagram developed in Simulink into a lower level of abstraction in VHDL-AMS, together with the simulation project support structure in SystemVision. The method was validated by analyzing the power spectrum of the signal obtained by the discrete Fourier transform of a digital-to-analog converter simulation model. © 2011 IEEE.
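The validation step, estimating the power spectrum of a converter's output via the discrete Fourier transform, can be sketched as follows. The 4-bit converter, frame length and signal bin are hypothetical, chosen only to illustrate the analysis, and not the models actually translated by MS2SV.

```python
import cmath
import math

def power_spectrum(x):
    """Power spectrum |X[k]|^2 / N of a real signal via a direct DFT (O(N^2))."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 / N
            for k in range(N // 2)]           # real input: keep half the bins

# Hypothetical 4-bit DAC output: a quantized sine at bin 5 of a 64-sample frame.
N, bits, k_sig = 64, 4, 5
levels = 2 ** bits
ideal = [math.sin(2 * math.pi * k_sig * n / N) for n in range(N)]
dac_out = [round((s + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1 for s in ideal]

spectrum = power_spectrum(dac_out)
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
# The dominant line sits at the signal bin; the residual power in the other
# bins is quantization noise, from which SNR/SFDR figures can be derived.
```

In a real evaluation one would compare such spectra between the Simulink reference model and the translated VHDL-AMS simulation to check that the translation preserves converter behavior.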
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The aim of this work is to elucidate the impact of changes in solar irradiance and energetic particles versus volcanic eruptions on tropospheric global climate during the Dalton Minimum (DM, AD 1780–1840). Variations in (i) the solar irradiance in the UV-C at wavelengths λ < 250 nm, (ii) the irradiance at wavelengths λ > 250 nm, (iii) the energetic particle spectrum, and (iv) the volcanic aerosol forcing were analyzed separately, and (v) in combination, by means of small-ensemble calculations using a coupled atmosphere–ocean chemistry–climate model. Global and hemispheric mean surface temperatures show a significant dependence on solar irradiance at λ > 250 nm. Powerful volcanic eruptions in 1809, 1815, 1831 and 1835 also significantly decreased global mean temperature, by up to 0.5 K for 2–3 years after each eruption. However, while the volcanic effect is clearly discernible in the Southern Hemispheric mean temperature, it is less significant in the Northern Hemisphere, partly because the two largest volcanic eruptions occurred in the SH tropics and during seasons when the aerosols were mainly transported southward, and partly because of the higher northern internal variability. In the simulation including all forcings, temperatures are in reasonable agreement with the tree-ring-based temperature anomalies of the Northern Hemisphere. Interestingly, the model suggests that solar irradiance changes at λ < 250 nm and in the energetic particle spectrum have only an insignificant impact on climate during the Dalton Minimum. This reduces the importance of top-down processes (stemming from changes at λ < 250 nm) relative to bottom-up processes (from λ > 250 nm). Reduction of irradiance at λ > 250 nm leads to a significant (up to 2%) decrease in the ocean heat content (OHC) between 0 and 300 m in depth, whereas changes in irradiance at λ < 250 nm or in energetic particles have virtually no effect.
Volcanic aerosol also yields a very strong response, reducing the OHC of the upper ocean by up to 1.5%. In the simulation with all forcings, the OHC of the uppermost levels recovers 8–15 years after a volcanic eruption, while the solar signal and the different volcanic eruptions dominate the OHC changes in the deeper ocean and prevent its recovery during the DM. Finally, the simulations suggest that the volcanic eruptions during the DM had a significant impact on precipitation patterns, caused by a widening of the Hadley cell and a shift in the intertropical convergence zone.
Abstract:
The purpose of this study was to investigate the generality and temporal endurance of the bivalency effect in task switching. This effect refers to the slowing on univalent stimuli that occurs when bivalent stimuli appear occasionally. We used a paradigm involving predictable switches between 3 simple tasks, with bivalent stimuli occasionally occurring on one of the tasks. The generality of the bivalency effect was investigated by using different tasks and different types of bivalent stimuli, and the endurance of this effect was investigated across different intertrial intervals (ITIs) and across the univalent trials that followed trials with bivalent stimuli. In 3 experiments, the results showed a general, robust, and enduring bivalency effect for all ITI conditions. Although the effect declined across trials, it remained significant for about 4 trials following one with a bivalent stimulus. Our findings emphasise the importance of top-down processes in task-switching performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Abstract:
Surface-initiated cracking of asphalt pavements constitutes one of the most frequent and important types of distress that occur in flexible bituminous pavements, as clearly has been demonstrated in the technical and experimental studies done over the past decade. However, this failure mechanism has not been taken into consideration for traditional methods of flexible pavement design.
The concept of long-lasting pavements is based on adequate monitoring of the depth and extent of these deteriorations and on intervention at the most appropriate moment, so as to contain them in the surface layer in the form of easily accessible and repairable partial-depth top-down cracks, thereby prolonging the durability and serviceability of the pavement and reducing the overall cost of its life cycle. Therefore, to select the optimal maintenance strategy for perpetual pavements, it becomes essential to have access to methodologies that enable precise on-site identification, monitoring and control of top-down propagated cracks and that also permit a reliable, high-performance determination of the extent and depth of cracking. This PhD Thesis presents the results of systematic laboratory and in situ research carried out to obtain information about top-down cracking in asphalt pavements and to study methods of depth evaluation of this type of cracking using ultrasonic techniques. These results have demonstrated that the proposed non-destructive methodology (cost-effective, fast and easy to implement, and mainly used to date for concrete and metal structures, due to the difficulties caused by the viscoelastic nature of bituminous materials) can be applied with sufficient reliability and repeatability to asphalt pavements. Measurements are also independent of the asphalt thickness. Furthermore, it resolves some of the common inconveniences presented by other methods used to evaluate pavement cracking, such as core extraction (a destructive and expensive procedure that requires prolonged traffic interruptions) and other non-destructive techniques, such as those based on deflection measurements or ground-penetrating radar, which are not sufficiently precise to measure surface cracks. To obtain these results, extensive tests were performed on laboratory specimens.
Different empirical conditions were studied, such as various types of hot bituminous mixtures (AC, SMA and PA), differing thicknesses of asphalt and adhesions between layers, varied temperatures, surface textures, filling materials and water within the crack, different sensor positions, as well as an ample range of possible crack depths. The methods employed in the study are based on a series of measurements of ultrasonic pulse velocities or transmission times over a single accessible side or surface of the material that make it possible to obtain a signal transmission coefficient (relative or auto-calibrated readings). Measurements were taken at low frequencies by two short-pulse ultrasonic devices: one equipped with dry point contact transducers (DPC) and the other with flat contact transducers that require a specially-selected coupling material (CPC). In this way, some of the traditional inconveniences presented by the use of conventional transducers were overcome and a prior preparation of the surfaces was not required. The auto-compensating technique eliminated systematic errors and the need for previous local calibration, demonstrating the potential for this technology. The experimental results have been compared with simplified theoretical models that simulate ultrasonic wave propagation in cracked bituminous materials, which had been previously deduced using an analytical approach and have permitted the correct interpretation of the aforementioned empirical results. These models were subsequently calibrated using the laboratory results, providing generalized mathematical expressions and graphics for routine use in practical applications. Through a series of on-site ultrasound test campaigns, accompanied by asphalt core extraction, it was possible to evaluate the proposed models, with differences between predicted crack depths and those measured in situ lower than 13% (with a confidence level of 95%). 
Thereby, the criteria and the necessary recommendations for their implementation on in-service asphalt pavements have been established. The experience obtained through this study makes it possible to integrate this methodology into the evaluation techniques for pavement management systems.
Abstract:
We study the problem of efficient, scalable set-sharing analysis of logic programs. We use the idea of representing sharing information as a pair of abstract substitutions, one of which is a worst-case sharing representation called a clique set, which was previously proposed for the case of inferring pair-sharing. We use the clique-set representation for (1) inferring actual set-sharing information, and (2) analysis within a top-down framework. In particular, we define the abstract functions required by standard top-down analyses, both for sharing alone and also for the case of including freeness in addition to sharing. Our experimental evaluation supports the conclusion that, for inferring set-sharing, as was the case for inferring pair-sharing, precision losses are limited, while useful efficiency gains are obtained. At the limit, the clique-set representation allowed analyzing some programs that exceeded memory capacity using classical sharing representations.
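The space saving behind the clique-set idea can be illustrated with a toy sketch: a sharing abstraction is kept as a pair (cliques, sharing groups), where a clique C compactly stands for all non-empty subsets of C. The widening and normalization below are deliberately simplified stand-ins, not the abstract functions defined in the paper.

```python
# Toy illustration of the clique-set representation for sharing analysis.
# A clique C over-approximates every non-empty subset of C, so a large
# family of sharing groups can be collapsed into one clique (losing
# precision, saving memory). These operations are simplified sketches.

from itertools import combinations

def normalize(cliques, groups):
    """Drop sharing groups already covered by some clique (i.e. subsets of it)."""
    return {g for g in groups if not any(g <= c for c in cliques)}

def widen(groups, max_groups=8):
    """If the sharing set grows past a threshold, collapse it into a single
    clique (the union of all groups): sound but less precise."""
    if len(groups) <= max_groups:
        return set(), groups
    clique = frozenset().union(*groups)
    return {clique}, set()

# Example: near-powerset sharing over {X, Y, Z, W} exceeds the threshold,
# so the 14 groups collapse into the single clique {X, Y, Z, W}.
vs = ['X', 'Y', 'Z', 'W']
groups = {frozenset(c) for r in (1, 2, 3) for c in combinations(vs, r)}
cliques, rest = widen(groups, max_groups=8)
```

The trade-off shown here mirrors the abstract's claim: the clique keeps the analysis sound and small at the price of bounded precision loss.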
Abstract:
Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and is therefore not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
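A language-independent fixpoint over a block-based control-flow graph can be sketched with a worklist: per-block results are memoized, and a block is re-analyzed only when a predecessor's output changes (dependency tracking). The lattice used here (sets of "possibly defined" variables, joined by union) and the tiny looping CFG are illustrative assumptions, not the paper's framework.

```python
# Worklist sketch of a least-fixpoint computation over a block-based CFG,
# with memoized per-block outputs and change-driven re-analysis.
# The lattice and transfer function are illustrative, not the paper's.

def fixpoint(cfg, transfer, entry):
    """cfg: block -> list of successors; transfer: (block, in_set) -> out_set."""
    preds = {b: [] for b in cfg}
    for b, succs in cfg.items():
        for s in succs:
            preds[s].append(b)
    out = {b: frozenset() for b in cfg}       # memoized per-block results
    worklist = [entry]
    while worklist:
        b = worklist.pop()
        in_set = (frozenset().union(*(out[p] for p in preds[b]))
                  if preds[b] else frozenset())
        new_out = transfer(b, in_set)
        if new_out != out[b]:                 # only propagate real changes
            out[b] = new_out
            worklist.extend(cfg[b])           # dependency tracking: wake successors
    return out

# Tiny CFG with a loop: entry -> body -> body/exit; each block defines variables.
cfg = {'entry': ['body'], 'body': ['body', 'exit'], 'exit': []}
defs = {'entry': {'x'}, 'body': {'y'}, 'exit': set()}
result = fixpoint(cfg, lambda b, i: i | frozenset(defs[b]), 'entry')
```

Because the transfer function is monotone over a finite lattice, the worklist drains and `result` is the least fixpoint; swapping in a different `transfer` and `cfg` re-targets the same driver to another language or analysis.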
Abstract:
Physiological and neuroimaging studies provide evidence to suggest that attentional mechanisms operating within the fronto-parietal network may exert top-down control on early visual areas, priming them for forthcoming sensory events. The believed consequence of such priming is enhanced task performance. Using the technique of magnetoencephalography (MEG), we investigated this possibility by examining whether attention-driven changes in cortical activity are correlated with performance on a line-orientation judgment task. We observed that, approximately 200 ms after a covert attentional shift towards the impending visual stimulus, the level of phase-resetting (transient neural coherence) within the calcarine significantly increased for 2–10 Hz activity. This was followed by a suppression of alpha activity (near 10 Hz) which persisted until the onset of the stimulus. The levels of phase-resetting, alpha suppression and subsequent behavioral performance varied between subjects in a systematic fashion. The magnitudes of phase-resetting and alpha-band power were negatively correlated, with high levels of coherence associated with high levels of performance. We propose that top-down attentional control mechanisms exert their initial effects within the calcarine through a phase-resetting within the 2–10 Hz band, which in turn triggers a suppression of alpha activity, priming early visual areas for incoming information and enhancing behavioral performance.
Abstract:
When people monitor a visual stream of rapidly presented stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset, a phenomenon known as the attentional blink (AB). We provide an overview of recent neuroscientific studies devoted to analyzing the neural processes underlying the AB and their temporal dynamics. The available evidence points to an attentional network involving temporal, right-parietal and frontal cortex, and suggests that the components of this neural network interact by means of synchronization and stimulus-induced desynchronization in the beta frequency range. We set up a neurocognitive scenario describing how the AB might emerge and why it depends on the presence of masks and the other event(s) the targets are embedded in. The scenario supports the idea that the AB arises from "biased competition", with the top-down bias being generated by parietal-frontal interactions and the competition taking place between stimulus codes in temporal cortex.