771 results for Agent-Based Models
Abstract:
1. A long-standing question in ecology is how natural populations respond to a changing environment. Emergent optimal foraging theory-based models for individual variation go beyond the population level and predict how individuals would respond to disturbances that produce changes in resource availability. 2. Evaluating variations in resource use patterns at the intrapopulation level in wild populations under changing environmental conditions would allow further advances in research on foraging ecology and evolution, by giving a better idea of the underlying mechanisms explaining trophic diversity. 3. In this study, we use a large spatio-temporal scale data set (western continental Europe, 1968-2006) on the diet of Bonelli's Eagle Aquila fasciata breeding pairs to analyse the predator's trophic responses at the intrapopulation level to a prey population crash. In particular, we borrow metrics from studies on network structure and intrapopulation variation to understand how an emerging infectious disease [rabbit haemorrhagic disease (RHD)], which caused the density of the eagle's primary prey (rabbit Oryctolagus cuniculus) to drop dramatically across Europe, affected the resource use patterns of this endangered raptor. 4. Following the major RHD outbreak, substantial changes in Bonelli's Eagle diet diversity and organisation patterns at the intrapopulation level took place. Dietary variation among breeding pairs was larger after the outbreak than before. Before RHD there were no clusters of pairs with similar diets, but significant clustering emerged after RHD. Moreover, diets at the pair level presented a nested pattern before RHD, but not after. 5. Here, we reveal how intrapopulation patterns of resource use can vary quantitatively and qualitatively given drastic changes in resource availability. 6. For the first time, we show that a pathogen of a prey species can indirectly impact the intrapopulation patterns of resource use of an endangered predator.
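The intrapopulation metrics mentioned above (pairwise dietary variation, clustering of pairs with similar diets) can be illustrated with a toy computation; the prey categories and counts below are invented for illustration and are not the study's data.

```python
import numpy as np

# Toy diet matrix: rows = breeding pairs, columns = prey categories
# (rabbit, birds, reptiles). Counts are invented for illustration.
diets = np.array([
    [30,  5,  2],   # pair A: mostly rabbit
    [28,  6,  3],   # pair B: mostly rabbit
    [ 4, 20, 10],   # pair C: diet shifted to alternative prey
], dtype=float)

props = diets / diets.sum(axis=1, keepdims=True)  # diet proportions per pair

def bray_curtis(p, q):
    """Pairwise dietary dissimilarity on proportions (0 = identical, 1 = disjoint)."""
    return 0.5 * np.abs(p - q).sum()

n = len(props)
d = np.array([[bray_curtis(props[i], props[j]) for j in range(n)]
              for i in range(n)])
# Mean off-diagonal dissimilarity as a crude index of among-pair variation.
mean_variation = d[np.triu_indices(n, k=1)].mean()
print(d.round(3))
print("mean among-pair variation:", round(mean_variation, 3))
```

Pairs A and B end up far more similar to each other than either is to C, which is the kind of structure that clustering and nestedness analyses formalise.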
Abstract:
OBJECTIVES: Acute retinal necrosis is a rapidly progressive and devastating viral retinitis caused by the herpesvirus family. Systemic acyclovir is the treatment of choice; however, the progression of retinal lesions only ceases approximately 2 days after treatment initiation. An intravitreal injection of acyclovir may be used as an adjuvant therapy during the first 2 days of treatment, when systemically administered acyclovir has not yet reached therapeutic levels in the retina. The aims of this study were to determine the pharmacokinetic profile of acyclovir in the rabbit vitreous after intravitreal injection and the functional effects of acyclovir on the rabbit retina. METHODS: Acyclovir (Acyclovir; Bedford Laboratories, Bedford, OH, USA) 1 mg in 0.1 mL was injected into the right eye vitreous of 32 New Zealand white rabbits, and 0.1 mL sterile saline solution was injected into the left eye as a control. The animals were sacrificed after 2, 9, 14, or 28 days. The eyes were enucleated, and the vitreous was removed. The half-life of acyclovir was determined using high-performance liquid chromatography. Electroretinograms were recorded on days 2, 9, 14, and 28 in the eight animals that were sacrificed 28 days after injection, according to a modified protocol of the International Society for Clinical Electrophysiology of Vision. RESULTS: Acyclovir rapidly decayed in the vitreous within the first two days after treatment and remained at low levels from day 9 onward. The eyes that were injected with acyclovir did not present any electroretinographic changes compared with the control eyes. CONCLUSIONS: The vitreous half-life of acyclovir is short, and the electrophysiological findings suggest that the intravitreal delivery of 1 mg acyclovir is safe and well tolerated by the rabbit retina.
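A vitreous half-life of the kind reported above is typically estimated from first-order elimination kinetics, C(t) = C0·e^(−kt), so t½ = ln 2 / k. A minimal sketch with invented concentration values (not the study's measurements):

```python
import math

# First-order decay: ln C(t) is linear in t with slope -k.
# Sampling days follow the study design; concentrations are invented.
days = [2, 9, 14, 28]
conc = [50.0, 3.0, 0.4, 0.01]   # hypothetical vitreous levels (ug/mL)

lnc = [math.log(c) for c in conc]
n = len(days)
tbar = sum(days) / n
ybar = sum(lnc) / n
# Least-squares slope of ln(C) vs t gives -k.
slope = sum((t - tbar) * (y - ybar) for t, y in zip(days, lnc)) / \
        sum((t - tbar) ** 2 for t in days)
k = -slope                       # elimination rate constant (1/day)
half_life = math.log(2) / k      # t_1/2 = ln 2 / k
print(f"k = {k:.3f} /day, half-life = {half_life:.2f} days")
```

With these illustrative numbers the fit yields a half-life of about two days, consistent with the rapid decay the abstract describes.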
Abstract:
Air Pollution and Health: Bridging the Gap from Sources to Health Outcomes, an international specialty conference sponsored by the American Association for Aerosol Research, was held to address key uncertainties in our understanding of adverse health effects related to air pollution and to integrate and disseminate results from recent scientific studies that cut across a range of air pollution-related disciplines. The Conference addressed the science of air pollution and health within a multipollutant framework (herein "multipollutant" refers to gases and particulate matter mass, components, and physical properties), focusing on five key science areas: sources, atmospheric sciences, exposure, dose, and health effects. Eight key policy-relevant science questions integrated across various parts of the five science areas and a ninth question regarding findings that provide policy-relevant insights served as the framework for the meeting. Results synthesized from this Conference provide new evidence, reaffirm past findings, and offer guidance for future research efforts that will continue to incrementally advance the science required for reducing uncertainties in linking sources, air pollutants, human exposure, and health effects. This paper summarizes the Conference findings organized around the science questions. A number of key points emerged from the Conference findings. First, there is a need for greater focus on multipollutant science and management approaches that include more direct studies of the mixture of pollutants from sources with an emphasis on health studies at ambient concentrations. Further, a number of research groups reaffirmed a need for better understanding of biological mechanisms and apparent associations of various health effects with components of particulate matter (PM), such as elemental carbon, certain organic species, ultrafine particles, and certain trace elements such as Ni, V, and Fe(II), as well as some gaseous pollutants. 
Although much debate continues in this area, generation of reactive oxygen species induced by these and other species present in air pollution, and the resulting oxidative stress and inflammation, were reiterated as key pathways leading to respiratory and cardiovascular outcomes. The Conference also underscored significant advances in understanding the susceptibility of populations, including the role of genetics and epigenetics and the influence of socioeconomic and other confounding factors and their synergistic interactions with air pollutants. Participants also pointed out that short- and long-term interventions that reduce pollution from sources and improve air quality continue to indicate that when pollution decreases, so do reported adverse health effects. In the limited number of cases where specific sources or PM2.5 species were included in investigations, specific species are often associated with the decrease in effects. Other recent advances toward improved exposure estimates for epidemiological studies included the use of new technologies such as microsensors combined with cell phones and integrated into real-time communications, hybrid air quality modeling such as combined receptor- and emission-based models, and surface observations used with remote sensing such as satellite data.
Market failures and networks in public policy: challenges and possibilities for the Brazilian Unified Health System
Abstract:
The principles and guidelines of the Brazilian Unified Health System (Sistema Único de Saúde, SUS) impose a care structure based on public policy networks which, combined with the funding model adopted, leads to market failures. This creates barriers to the management of the public health system and to the achievement of the SUS's objectives. The institutional characteristics and the heterogeneity of the actors, together with the existence of different healthcare networks, generate analytical complexity in the study of the global dynamics of the SUS network. There are limits to the use of quantitative methods based on static analysis of retrospective data from the public health system. We therefore propose approaching the SUS as a complex system, using an innovative quantitative methodology based on computational simulation. This article analyses the challenges and potential of cellular automata modelling combined with agent-based modelling for simulating the evolution of the SUS service network. Such an approach should allow a better understanding of the organisation, heterogeneity, and structural dynamics of the SUS service network, and make it possible to minimise the effects of market failures in the Brazilian health system.
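The coupling of a cellular automaton with an agent layer that the article proposes can be sketched in miniature; the grid rule, capacity levels, and patient behaviour below are all invented for illustration and bear no relation to the article's actual model.

```python
import random

random.seed(0)
SIZE = 10

# CA layer: each cell holds a service capacity level 0-3 (invented scale).
grid = [[random.randint(0, 3) for _ in range(SIZE)] for _ in range(SIZE)]

def step_ca(grid):
    """Invented CA rule: each cell's capacity drifts toward its neighbours' mean."""
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            nbrs = [grid[(i + di) % SIZE][(j + dj) % SIZE]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            mean = sum(nbrs) / len(nbrs)
            new[i][j] += 1 if mean > grid[i][j] else (-1 if mean < grid[i][j] else 0)
            new[i][j] = max(0, min(3, new[i][j]))
    return new

# Agent layer: patients placed at random cells; a patient is "served"
# if the local cell has nonzero capacity.
patients = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(20)]

def served(grid, patients):
    return sum(1 for (i, j) in patients if grid[i][j] > 0)

for _ in range(5):
    grid = step_ca(grid)
print(served(grid, patients), "of", len(patients), "patients at a served cell")
```

The point of the hybrid design is exactly this separation: the CA evolves the service-network substrate, while agents interact with it locally.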
Abstract:
Ion channels are protein molecules embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements, converting chemical-physical stimuli into ion fluxes. At a glance, ion channels are water-filled pores which can open and close in response to different stimuli (gating) and, once open, select the permeating ion species (selectivity). They play a crucial role in several physiological functions, such as nerve transmission, muscular contraction, and secretion. Ion channels can also be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally resolved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction and on numerical methodologies to compute the channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on electrodiffusion theory, is computationally inexpensive and was used for an extensive analysis of the molecular determinants of channel conductance. The first recording of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single-channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels and the experimental techniques used to measure channel currents.
The abundance of functional data (channel currents) is not matched by an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally solved (1998), and after KcsA the structures of four different potassium channels were revealed. These experimental data inspired a new era in ion channel modeling. Once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. These physically based models can provide an atomic description of ion channel functioning and predict the effect of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level. In Chapters 3 and 4 the ion conduction through potassium channels is analyzed by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory, ion conduction is modeled by the drift-diffusion equations, thus describing the ion distributions by continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3), and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects the conductance (Chapter 4). As a major result, a correlation between the channel conductance and the potassium concentration in the intracellular vestibule emerged. The atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a sub-family of potassium channels with high single-channel conductance.
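The constant-field (Goldman-Hodgkin-Katz) solution of the Nernst-Planck equation gives a quick feel for the electrodiffusion approach described above; this is a textbook special case, not the thesis's solver, and the permeability value is a placeholder.

```python
import math

# Goldman-Hodgkin-Katz flux: the analytic steady-state solution of the
# Nernst-Planck equation for a constant electric field across the membrane.
F = 96485.0      # Faraday constant (C/mol)
R = 8.314        # gas constant (J/(mol K))
T = 298.0        # temperature (K)
z = 1            # K+ valence
P = 1e-6         # permeability (m/s), hypothetical placeholder
c_in, c_out = 140.0, 5.0   # K+-like concentrations (mol/m^3, i.e. mM)

def ghk_flux(V):
    """Molar flux (mol m^-2 s^-1) at transmembrane voltage V (volts)."""
    if abs(V) < 1e-12:
        return P * (c_in - c_out)   # limit of the GHK expression as V -> 0
    u = z * F * V / (R * T)
    return P * u * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u))

# The flux vanishes at the Nernst potential E = (RT/zF) ln(c_out/c_in):
E = (R * T) / (z * F) * math.log(c_out / c_in)
print(f"Nernst potential: {E * 1000:.1f} mV, flux there: {ghk_flux(E):.2e}")
```

A drift-diffusion solver such as the one developed in the thesis generalises this: instead of assuming a constant field, it solves Poisson and Nernst-Planck equations self-consistently along the pore.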
The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinity of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs was compared. Both experimental measurements and molecular modeling were used in order to identify differences in the blocking mechanism of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different role of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule. Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5), and can also be useful to predict the structure of ion channels (Chapters 6-7). In Chapter 6 the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured. All available crystallographic structures of KcsA refer to a channel occupied by potassium ions. To conduct water molecules, potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation are still unknown, and have been investigated by numerical simulations. Molecular dynamics of KcsA identified a possible atomic structure of the potassium-depleted KcsA channel, and a mechanism for water permeation. Depletion of potassium ions is an extreme situation for potassium channels, unlikely under physiological conditions. However, the simulation of such an extreme condition could help to identify the structural conformations, and thus the functional states, accessible to potassium ion channels.
The last chapter of the thesis deals with the atomic structure of the α-Hemolysin channel. α-Hemolysin is the major determinant of Staphylococcus aureus toxicity, and is also the prototype channel for possible use in technological applications. The atomic structure of α-Hemolysin was revealed by X-ray crystallography, but several pieces of experimental evidence suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single-channel currents and numerical simulations. The thesis is organized in two parts: the first part provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second part describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.
Abstract:
Communication and coordination are two key aspects of open distributed agent systems, both being responsible for the integrity of the system's behaviour. An infrastructure capable of handling these issues, such as TuCSoN, should be able to exploit the modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the ability of the TuCSoN infrastructure to cope with the new possibilities, in hardware and software, offered by mobile technology. The scenarios we configure relate to the distributed nature of multi-agent systems, where an agent may be located and run directly on a mobile device. We address the new frontiers of mobile technology represented by smartphones running Google's Android operating system. The analysis and deployment of such a distributed agent-based system first runs into qualitative and quantitative considerations about the available resources. The engineering issue at the base of our research is to run TuCSoN within the reduced memory and computing capability of a smartphone, without loss of functionality, efficiency, or integrity for the infrastructure. The thesis work proceeds on two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, totally orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad hoc client-side release.
Abstract:
Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling (see also the references therein). An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated.
Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because of the different syntax adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within a Semantic Web middleware (analogous approaches are surveyed in the references). However, these appear to be designed to tackle the design of coordination for specific application contexts, such as the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps the simplicity of tuples and tuple matching as far as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components.
The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem suitable as coordination media.
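A Linda tuple space, of which a tuple centre is a programmable extension, can be sketched in a few lines; this toy version (not TuCSoN's actual API) shows template matching with `out`/`rd`/`in`, using `None` as a wildcard in place of Linda formals.

```python
# Toy Linda-style tuple space (illustrative only; not the TuCSoN API).

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Insert a tuple into the space."""
        self.tuples.append(tup)

    def _matches(self, template, tup):
        # A template matches when lengths agree and every non-wildcard
        # field is equal; None acts as a wildcard.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        """Read (without removing) the first matching tuple, or None."""
        return next((t for t in self.tuples if self._matches(template, t)), None)

    def in_(self, template):
        """Withdraw the first matching tuple, or None (non-blocking sketch)."""
        t = self.rd(template)
        if t is not None:
            self.tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("temperature", "room1", 21))
ts.out(("temperature", "room2", 24))
print(ts.rd(("temperature", None, None)))    # -> ('temperature', 'room1', 21)
print(ts.in_(("temperature", "room2", None)))
```

The syntactic matching above is exactly what the thesis points at as limiting: two tuples carrying the same information in different shapes never match, which is what the semantic extension addresses.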
Abstract:
While the use of distributed intelligence has been spreading incrementally in the design of a great number of intelligent systems, Artificial Intelligence in Real-Time Strategy (RTS) games has remained a mostly centralized affair. Although turn-based games have attained AIs of world-class level, the fast-paced nature of RTS games has proven to be a significant obstacle to the quality of their AIs. Chapter 1 introduces RTS games, describing their characteristics, mechanics, and elements. Chapter 2 introduces Multi-Agent Systems and the use of the Beliefs-Desires-Intentions abstraction, analysing the possibilities offered by self-computing properties. Chapter 3 analyses the current state of AI development in RTS games, highlighting the struggles of the gaming industry to produce valuable AIs. The focus on improving the multiplayer experience has gravely impacted the quality of the AIs, leaving them with serious flaws that impair their ability to challenge and entertain players. Chapter 4 explores different aspects of AI development for RTS games, evaluating the potential strengths and weaknesses of an agent-based approach and analysing which aspects can benefit the most compared with centralized AIs. Chapter 5 describes a generic agent-based framework for RTS games where every game entity becomes an agent, each with its own knowledge and set of goals. Different aspects of the game, such as economy, exploration, and warfare, are also analysed, and some agent-based solutions are outlined. The possible exploitation of self-computing properties to efficiently organize the agents' activity is then inspected. Chapter 6 presents the design and implementation of an AI for an existing open-source game in beta development: 0 A.D., a historical RTS game on ancient warfare featuring a modern graphical engine and evolved mechanics.
The entities of the conceptual framework are implemented in a new agent-based platform, called ABot, seamlessly nested inside the existing game engine and described extensively in Chapters 7, 8, and 9. Chapters 10 and 11 cover the design and realization of a new agent-based language for defining behavioural modules for the agents in ABot, paving the way for a wider spectrum of contributors. Chapter 12 analyses the outcome of tests meant to evaluate strategies, realism, and raw performance, and Chapter 13 draws conclusions and outlines future work.
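The "every game entity becomes an agent" idea can be sketched as a minimal sense-deliberate-act loop; all names and rules here are invented for illustration and are not the ABot platform's API.

```python
# Minimal BDI-flavoured agent for one RTS entity (illustrative only).

class WorkerAgent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {"carrying": 0}   # what the agent knows about itself
        self.goal = "gather"             # standing desire

    def sense(self, world):
        # Update beliefs from the (shared) game state.
        self.beliefs["near_resource"] = world["resources"] > 0

    def deliberate(self):
        # Intention selection: drop off when full, otherwise keep gathering.
        return "deposit" if self.beliefs["carrying"] >= 5 else "gather"

    def act(self, intention, world):
        if intention == "gather" and self.beliefs["near_resource"]:
            world["resources"] -= 1
            self.beliefs["carrying"] += 1
        elif intention == "deposit":
            world["stockpile"] += self.beliefs["carrying"]
            self.beliefs["carrying"] = 0

world = {"resources": 8, "stockpile": 0}
agent = WorkerAgent("w1")
for _ in range(10):                      # ten game ticks
    agent.sense(world)
    agent.act(agent.deliberate(), world)
print(world)                             # -> {'resources': 0, 'stockpile': 5}
```

Scaling this up, each unit, building, and resource node runs its own loop, and coordination emerges from agents reading and writing the shared game state rather than from one central controller.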
Abstract:
The present work concerns the study of debris flows and, in particular, the related hazard in the Alpine environment. Over recent years several methodologies have been developed to evaluate the hazard associated with this complex phenomenon, whose velocity, impact force, and poor temporal predictability are responsible for its high hazard level. This research focuses on the depositional phase of debris flows through the application of a numerical model (DFlowz), and on hazard evaluation based on the morphometric, morphological, and geological characterization of watersheds. The main aims are to test the validity of DFlowz simulations and assess sources of error, in order to understand how the empirical uncertainties influence the predictions; on the other side, the research addresses the possibility of performing hazard analysis starting from the identification of debris-flow-susceptible catchments and the definition of their activity level. 25 well-documented debris flow events were back-analyzed with the DFlowz model (Berti and Simoni, 2007): derived from the implementation of empirical relations between event volume and the planimetric and cross-sectional inundated areas, the code delineates the areas affected by an event by taking into account the volume, the preferential flow path, and a digital elevation model (DEM) of the fan area. The analysis uses an objective methodology for evaluating the accuracy of the prediction and involves calibrating the model with factors describing the uncertainty associated with the semi-empirical relationships. The general assumptions on which the model is based were verified, although the predictive capabilities are influenced by the uncertainties of the empirical scaling relationships, which must necessarily be taken into account and depend mostly on errors in the estimation of the deposited volume.
In addition, to test the predictive capabilities of physically based models, some events were simulated with RAMMS (RApid Mass MovementS). The model, developed by the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL) in Birmensdorf together with the Swiss Federal Institute for Snow and Avalanche Research (SLF), takes a one-phase approach based on Voellmy rheology (Voellmy, 1955; Salm et al., 1990). The input combines the total volume of the debris flow, located in a release area, with a mean depth. The model predicts the affected area, the maximum depth, and the flow velocity in each cell of the input DTM. As for hazard analysis based on watershed characterization, the database collected by the Alto Adige Province represents an opportunity to examine debris-flow sediment dynamics at the regional scale and to analyze lithologic controls. With the aim of advancing current understanding of debris flows, this study focuses on 82 events in order to characterize the topographic conditions associated with their initiation, transport, and deposition, identify seasonal patterns of occurrence, and examine the role played by bedrock geology in sediment transfer.
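The semi-empirical scaling relationships underpinning DFlowz-style models typically link event volume V to inundated areas through V^(2/3) power laws (dimensionally, area scales as volume^(2/3)); the coefficients below are placeholders for illustration, not the calibrated values of Berti and Simoni (2007).

```python
# Mobility scaling of the form A = k1 * V^(2/3), B = k2 * V^(2/3):
# A = inundated cross-sectional area, B = inundated planimetric area.
# Coefficients are illustrative placeholders, not calibrated values.
K_CROSS = 0.1    # cross-section coefficient (hypothetical)
K_PLAN = 20.0    # planimetric coefficient (hypothetical)

def inundated_areas(volume_m3):
    """Return (cross-sectional area, planimetric area) in m^2 for V in m^3."""
    scale = volume_m3 ** (2.0 / 3.0)
    return K_CROSS * scale, K_PLAN * scale

for v in (1e3, 1e4, 1e5):
    a, b = inundated_areas(v)
    print(f"V={v:>8.0f} m^3  A={a:8.1f} m^2  B={b:10.0f} m^2")
```

Because the exponent is fixed at 2/3, a tenfold error in the estimated volume propagates as roughly a 4.6-fold error in predicted area, which is why the abstract stresses that volume estimation dominates the prediction uncertainty.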
Abstract:
This doctoral thesis falls within the agreement between ARPA_SIMC (the funding body), the Regional Civil Protection Agency, and the Department of Earth and Geological-Environmental Sciences of the University of Bologna. Its main objective is the determination of possible rainfall thresholds for landslide triggering in Emilia-Romagna, to be used as a forecasting support tool in the Civil Protection operations room. In such a complex geological context, a traditional empirical approach is not sufficient to discriminate unambiguously between triggering and non-triggering meteorological events, and in general the data distribution appears too scattered to draw a statistically significant threshold. It was therefore decided to apply a rigorous Bayesian statistical approach, innovative in that it computes the probability of a landslide given a certain rainfall event, P(A|B), considering not only landslide-triggering rainfall (i.e., the conditional probability of a given rainfall event given the occurrence of a landslide, P(B|A)) but also non-triggering rainfall (which enters the prior probability of a rainfall event, P(B)). The Bayesian approach was applied to the period between 1939 and 2009. The resulting probability isolines minimize false alarms and are easily implementable in a regional warning system, but they may have limited forecasting power for phenomena not represented in the historical dataset or occurring under anomalous conditions. Examples are shallow landslides evolving into debris flows, extremely rare over the last 70 years but recently increasing in frequency.
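The Bayesian relation applied here, with A the occurrence of a landslide and B a rainfall event of given magnitude, is Bayes' theorem:

```latex
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```

P(B|A) is estimated from the record of landslide-triggering rainfall, while P(B) comes from the full rainfall record, triggering and non-triggering events alike, which is what distinguishes this approach from thresholds drawn on triggering events only.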
We attempted to address this problem by testing the forecasting variability of several physically based models specifically developed for this purpose, including X-SLIP (Montrasio et al., 1998), SHALSTAB (SHALlow STABility model, Montgomery & Dietrich, 1994), Iverson (2000), TRIGRS 1.0 (Baum et al., 2002), and TRIGRS 2.0 (Baum et al., 2008).
Abstract:
Photovoltaic (PV) solar panels generally produce electricity in the 6% to 16% efficiency range, the rest of the incident energy being dissipated as thermal losses. To recover this energy, hybrid photovoltaic-thermal (PVT) systems have been devised: devices that simultaneously convert solar energy into electricity and heat. It is thus interesting to study the PVT system globally, from different points of view, in order to evaluate the advantages and disadvantages of this technology and its possible uses. In Chapter II, a numerical optimization of the PVT absorber was carried out by a genetic algorithm, analyzing different internal channel profiles in order to find the right compromise between performance and technical and economic feasibility. In Chapter III, thanks to a mobile structure built at the university lab, the electrical and thermal output power of PVT panels was compared experimentally with separate photovoltaic and solar-thermal production. By collecting a large amount of experimental data under different seasonal conditions (ambient temperature, irradiation, wind, ...), the aim of this mobile structure was to evaluate the average increase or decrease in both thermal and electrical efficiency obtained over the year with respect to separate production. In Chapter IV, new equation-based models of PVT and solar-thermal panels under steady-state conditions were developed with the Dymola software, which uses the Modelica language. Compared with previous system-modelling software, this makes it possible to model and evaluate, in a simplified way, different concepts of the PVT panel structure before prototyping and measuring it. Chapter V concerns the definition of the PVT boundary conditions within an HVAC system. This was done through year-long simulations with the Polysun software, in order to assess the best solar-assisted integrated configuration by means of the F_save (solar energy savings) factor.
Finally, Chapter VI presents the conclusions and perspectives of this PhD work.
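The electrical/thermal split discussed in this abstract follows from a simple steady-state energy balance; the numbers below are illustrative, not the thesis's measured data.

```python
# Steady-state PVT energy balance (all values are illustrative).
G = 1000.0      # irradiance on the panel plane (W/m^2)
AREA = 1.6      # panel aperture area (m^2)

eta_el = 0.14   # electrical efficiency (within the 6-16% range cited)
m_dot = 0.02    # coolant mass flow rate (kg/s)
cp = 4186.0     # specific heat of water (J/(kg K))
t_in, t_out = 25.0, 32.0   # coolant inlet/outlet temperatures (deg C)

p_el = eta_el * G * AREA                # electrical output (W)
p_th = m_dot * cp * (t_out - t_in)      # useful thermal output (W)
eta_th = p_th / (G * AREA)              # thermal efficiency
eta_tot = eta_el + eta_th               # combined PVT efficiency
print(f"electrical {p_el:.0f} W, thermal {p_th:.0f} W, total eff {eta_tot:.2f}")
```

This kind of balance is what the mobile test rig measures in practice: the thermal term from flow rate and temperature rise, the electrical term at the panel terminals, both normalised by the same incident irradiance.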
Abstract:
Efficient energy storage and conversion plays a key role in overcoming present and future challenges in energy supply. Batteries provide portable, electrochemical storage of green energy and potentially allow a reduction of the dependence on fossil fuels, which is of great importance with respect to global warming. In view of both energy density and energy drain, rechargeable lithium ion batteries outperform other present accumulator systems. However, despite great efforts over the last decades, the ideal electrolyte in terms of key characteristics such as capacity, cycle life, and, most important, reliable safety has not yet been identified.

Steps ahead in lithium ion battery technology require a fundamental understanding of lithium ion transport, salt association, and ion solvation within the electrolyte. Well-defined model compounds allow for systematic studies of molecular ion transport. Thus, in the present work, based on the concept of 'immobilizing' ion solvents, three main series with a cyclotriphosphazene (CTP), hexaphenylbenzene (HPB), and tetramethylcyclotetrasiloxane (TMS) scaffold were prepared. Lithium ion solvents, among others ethylene carbonate (EC), which together with propylene carbonate has proven to fulfil safety and market concerns in commercial lithium ion batteries, were attached to the different cores via alkyl spacers of variable length.

All model compounds were fully characterized, pure, and thermally stable up to at least 235 °C, covering the requested broad range of glass transition temperatures from -78.1 °C up to +6.2 °C. While the CTP models tend to rearrange at elevated temperatures over time, which questions the general stability of alkoxide-related (poly)phosphazenes, both the HPB- and CTP-based models show no evidence of core stacking.
In particular, the CTP derivatives represent good solvents for various lithium salts, exhibiting no significant differences in the ionic conductivity σ_dc and thus indicating comparable salt dissociation and rather independent motion of cations and anions.

In general, the temperature-dependent bulk ionic conductivities investigated via impedance spectroscopy follow a Williams-Landel-Ferry (WLF) type behavior. Modifications of the alkyl spacer length were shown to influence ionic conductivities only in combination with changes in glass transition temperatures. Though the glass transition temperatures of the blends are low, their conductivities are only in the range of typical polymer electrolytes. The highest σ_dc obtained at ambient temperature was 6.0 x 10^-6 S·cm^-1, strongly suggesting a rather tight coordination of the lithium ions to the solvating 2-oxo-1,3-dioxolane moieties, supported by the increased σ_dc values for the oligo(ethylene oxide) based analogues.

Further insights into the mechanism of lithium ion dynamics were derived from 7Li and 13C solid-state NMR investigations. While localized ion motion was probed by, e.g., 7Li spin-lattice relaxation measurements with apparent activation energies E_a of 20 to 40 kJ/mol, long-range macroscopic transport was monitored by Pulsed-Field Gradient (PFG) NMR, providing an E_a of 61 kJ/mol. The latter is in good agreement with the values determined from bulk conductivity data, indicating that the long-range transport detected by PFG NMR provides the major contribution to the bulk conductivity. However, the μm-scale diffusion is rather slow, emphasizing the strong lithium coordination to the carbonyl oxygens, which hampers sufficient ionic conductivity and suggests exploring ‘softer’ solvating moieties in future electrolytes.
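The WLF temperature dependence mentioned above can be written out explicitly. The sketch below uses the standard WLF form with the literature's "universal" constants C1 = 17.44 and C2 = 51.6 as placeholders; the thesis's fitted constants and reference conductivities are not given here, so `sigma_g` and the example values are purely hypothetical.

```python
def wlf_conductivity(T, Tg, sigma_g, C1=17.44, C2=51.6):
    """Williams-Landel-Ferry temperature dependence of ionic conductivity:

        log10(sigma(T) / sigma(Tg)) = C1 * (T - Tg) / (C2 + T - Tg)

    C1, C2 default to the 'universal' WLF constants; fitted values
    for a given electrolyte generally differ.
    """
    dT = T - Tg
    return sigma_g * 10 ** (C1 * dT / (C2 + dT))

# Hypothetical example: lowest Tg in the series (-78.1 degC), with an
# assumed (not measured) conductivity sigma_g at the glass transition.
sigma_25 = wlf_conductivity(T=25.0, Tg=-78.1, sigma_g=1e-15)
```

The steep, non-Arrhenius rise of σ_dc between Tg and ambient temperature is the qualitative signature of WLF behavior that the impedance data follow.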
Resumo:
Coastal flooding poses serious threats to coastal areas around the world, causing billions of dollars in damage to property and infrastructure and threatening the lives of millions of people. Disaster management and risk assessment therefore aim at detecting vulnerabilities and capacities in order to reduce coastal flood disaster risk. In particular, non-specialized researchers, emergency management personnel, and land use planners require an accurate, inexpensive method to determine and map the risk associated with storm surge events and with the long-term sea level rise driven by climate change. This study contributes to the spatial evaluation and mapping of social, economic, and environmental vulnerability and risk at sub-national scale through the development of appropriate tools and methods successfully embedded in a Web-GIS Decision Support System (DSS). A new set of raster-based models was studied and developed so as to be easily implemented in the Web-GIS framework, with the purpose of quickly assessing and mapping flood hazard characteristics, damage, and vulnerability in a multi-criteria approach. The Web-GIS DSS is developed using open-source software and programming languages, and its main peculiarity is to be available and usable by coastal managers and land use planners without requiring a strong scientific background in hydraulic engineering. The effectiveness of the system for coastal risk assessment is evaluated through its application to a real case study.
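A raster-based multi-criteria overlay of the kind described can be sketched as a weighted sum of normalized grids. The layer names, weights, and toy 3x3 rasters below are illustrative assumptions, not the study's actual criteria or data:

```python
import numpy as np

def risk_map(layers, weights):
    """Weighted multi-criteria overlay of co-registered raster layers.

    Each layer is min-max normalized to [0, 1] so heterogeneous units
    drop out; weights are normalized to sum to 1, so the output is a
    dimensionless risk index in [0, 1].
    """
    total = sum(weights.values())
    out = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, grid in layers.items():
        g = grid.astype(float)
        span = g.max() - g.min()
        norm = (g - g.min()) / span if span else np.zeros_like(g)
        out += (weights[name] / total) * norm
    return out

# Hypothetical 3x3 rasters: flood depth (m) and population density.
depth = np.array([[0.0, 0.5, 2.0], [0.2, 1.0, 2.5], [0.0, 0.3, 1.5]])
pop = np.array([[10, 200, 50], [5, 300, 80], [0, 150, 20]])
risk = risk_map({"depth": depth, "pop": pop}, {"depth": 0.6, "pop": 0.4})
```

In a Web-GIS setting, each grid would come from a map layer (hazard, exposure, vulnerability), and the resulting index raster is what gets styled and served back to the planner.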
Resumo:
We consider stochastic individual-based models for the social behaviour of groups of animals. In these models the trajectory of each animal is given by a stochastic differential equation (SDE) with interaction. The social interaction is contained in the drift term of the SDE. We consider a global aggregation force and a short-range repulsion force. The repulsion range and strength are rescaled with the number of animals N. We show that as N tends to infinity the stochastic fluctuations disappear and a smoothed version of the empirical process converges uniformly towards the solution of a nonlinear, nonlocal partial differential equation of advection-reaction-diffusion type. The rescaling of the repulsion in the individual-based model implies that the corresponding term in the limit equation is local, while the aggregation term is non-local. Moreover, we discuss the effect of a predator on the system and derive an analogous convergence result. The predator acts as a repulsive force. Different laws of motion for the predator are considered.
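An individual-based model of this type can be simulated directly with an Euler-Maruyama scheme: each animal's drift is the sum of a global attraction toward the group's centre of mass and a short-range pairwise repulsion, plus Brownian noise. The force constants and the specific force shapes below are illustrative assumptions, not the paper's exact model (in particular, the N-dependent rescaling of the repulsion is only hinted at by the 1/N factor):

```python
import numpy as np

def simulate(N=50, steps=200, dt=0.01, sigma=0.1, seed=0):
    """Euler-Maruyama simulation of N interacting animals in 2-D."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(N, 2))      # initial positions
    rep_range, rep_strength = 0.2, 0.5           # hypothetical constants
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]     # pairwise displacements
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)           # no self-interaction
        # global aggregation: attraction toward the centre of mass
        drift = -(x - x.mean(axis=0))
        # short-range repulsion: push away from neighbours within rep_range,
        # with magnitude ~ 1/dist (unit direction times 1/dist)
        close = dist < rep_range
        rep = np.where(close[..., None], diff / dist[..., None] ** 2, 0.0)
        drift += rep_strength * rep.sum(axis=1) / N
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal((N, 2))
    return x

pos = simulate()  # a cohesive group held apart by short-range repulsion
```

As N grows, the simulated empirical density of such a scheme is what the paper's limit theorem describes: the noise averages out and the cloud of particles approximates the solution of the limiting advection-reaction-diffusion PDE.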
Resumo:
Bladder pain syndrome (BPS) is a clinical syndrome of pelvic pain and urinary urgency-frequency in the absence of a specific cause. Investigating the expression levels of genes involved in the regulation of epithelial permeability, bladder contractility, and inflammation, we show that the neurokinin (NK)1 and NK2 tachykinin receptors were significantly down-regulated in BPS patients. The tight junction proteins zona occludens-1, junctional adhesion molecule-1, and occludin were similarly down-regulated, implicating increased urothelial permeability, whereas the bradykinin B(1) receptor, the cannabinoid receptor CB1, and the muscarinic receptors M3-M5 were up-regulated. Using cell-based models, we show that prolonged exposure of NK1R to substance P caused a decrease of NK1R mRNA levels and a concomitant increase of the regulatory micro(mi)RNAs miR-449b and miR-500. In biopsies of BPS patients, the same miRNAs were significantly increased, suggesting that BPS promotes an attenuation of NK1R synthesis via activation of specific miRNAs. We confirm this hypothesis by identifying 31 differentially expressed miRNAs in BPS patients and demonstrate a direct correlation between miR-449b, miR-500, miR-328, and miR-320 and a down-regulation of NK1R mRNA and/or protein levels. Our findings further the knowledge of the molecular mechanisms of BPS and have relevance for other clinical conditions involving the NK1 receptor.