964 results for Computations Driven Systems
Abstract:
Today, third-generation networks are a consolidated reality, and user expectations for new applications and services keep rising. New systems and technologies are therefore necessary to meet market needs and user requirements, and this has driven the development of fourth-generation networks. "Wireless networks for the fourth generation" is the expression used to describe the next step in wireless communications. There is no formal definition of what these fourth-generation networks are; however, we can say that they will be based on the coexistence of heterogeneous networks, on integration with existing radio access networks (e.g. GPRS, UMTS, WiFi) and, in particular, on new emerging architectures that are gaining more and more relevance, such as Wireless Ad Hoc and Sensor Networks (WASNs). Thanks to these characteristics, fourth-generation wireless systems will be able to offer custom-made solutions and applications personalised to user requirements; they will offer all types of services at an affordable cost, with solutions characterised by flexibility, scalability and reconfigurability. This PhD work has focused on WASNs: self-configuring networks that are not based on a fixed infrastructure but are infrastructure-less, in which devices must automatically form the network in the initial phase and maintain it through reconfiguration procedures (if node mobility, energy drain, etc. cause disconnections). The main part of the PhD activity has been an analytical study of connectivity models for wireless ad hoc and sensor networks; a smaller part of the work was experimental. Both the theoretical and the experimental activities have had a common aim: the performance evaluation of WASNs.
Concerning the theoretical analysis, the objective of the connectivity studies has been the evaluation of models for interference estimation, since interference is the most important cause of performance degradation in WASNs. It is therefore very important to find an accurate model for its investigation, and I have tried to obtain a model that is as realistic and general as possible, in particular for the evaluation of interference coming from bounded interfering areas (e.g. a WiFi hotspot or a wirelessly covered research laboratory). The experimental activity, on the other hand, has led to throughput and Packet Error Rate measurements on a real IEEE 802.15.4 Wireless Sensor Network.
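The kind of bounded-area interference model described above can be illustrated with a small Monte Carlo sketch. All values below (node density, path-loss exponent, annulus geometry) are invented for illustration, not taken from the thesis: unit-power interferers are scattered as a Poisson field inside a bounded annulus, their aggregate power at the origin is averaged, and the result is checked against the closed-form mean given by Campbell's theorem.

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's method for a Poisson variate (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mean_interference_mc(density, r_min, r_max, alpha, trials=4000, seed=1):
    """Monte Carlo mean of the aggregate interference at the origin from a
    Poisson field of unit-power interferers confined to the annulus
    r_min <= r <= r_max, with power-law path loss r**-alpha."""
    rng = random.Random(seed)
    area = math.pi * (r_max ** 2 - r_min ** 2)
    total = 0.0
    for _ in range(trials):
        n = poisson_sample(rng, density * area)
        accepted = 0
        while accepted < n:  # rejection-sample points uniformly in the annulus
            x, y = rng.uniform(-r_max, r_max), rng.uniform(-r_max, r_max)
            r = math.hypot(x, y)
            if r_min <= r <= r_max:
                total += r ** -alpha
                accepted += 1
    return total / trials

def mean_interference_exact(density, r_min, r_max, alpha):
    """Campbell's theorem: E[I] = density * 2*pi * integral of r**(1-alpha) dr."""
    return density * 2 * math.pi * (
        r_min ** (2 - alpha) - r_max ** (2 - alpha)) / (alpha - 2)
```

With, say, density 0.5, an annulus from 1 to 5 and alpha = 3.5, the two estimates agree to within a few percent, which is the kind of consistency check such a model admits for bounded interfering areas.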
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a great deal of human work: producing meaningful schemas and populating them with domain-specific data is a very difficult and time-consuming task, all the more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualising domain-specific knowledge and for populating an extracted schema with real data, thus speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognised entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain-pipelined linguistic classifiers performing tasks such as Named Entity recognition, entity resolution, taxonomy and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organise knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20] or, more recently, Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7].
In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organise their knowledge.
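The "plain-pipelined" architecture that the abstract contrasts with frame-based extraction can be sketched as a chain of stages, each enriching a shared annotation record. The toy NER and relation rules below are invented stand-ins for real trained classifiers, included only to make the pipeline shape concrete:

```python
# A toy version of the plain-pipelined flow: each stage is a function that
# enriches a shared annotation dict. The rules are illustrative stand-ins
# for real trained classifiers.
def run_pipeline(stages, text):
    data = {"text": text}
    for stage in stages:
        data = stage(data)
    return data

def named_entities(data):
    # naive NER: treat capitalised tokens as entities
    data["entities"] = [w for w in data["text"].split() if w[:1].isupper()]
    return data

def relations(data):
    # naive relation extraction: link each entity to the next one mentioned
    ents = data["entities"]
    data["relations"] = list(zip(ents, ents[1:]))
    return data

result = run_pipeline([named_entities, relations],
                      "Rome is the capital of Italy")
```

The weakness the abstract points at is visible even here: each stage sees only the output of the previous one, so a pattern spanning several stages (a semantic frame with typed roles) cannot be captured without a richer shared representation.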
Abstract:
The production, segregation and migration of melt and aqueous fluids (henceforth called liquid) play an important role in the transport of mass and energy within the mantle and crust of the Earth. Many properties of large-scale liquid migration processes, such as the permeability of a rock matrix or the initial segregation of newly formed liquid from the host rock, depend on the grain-scale distribution and behaviour of the liquid. Although the general mechanisms of liquid distribution at the grain scale are well understood, the influence of possibly important modifying processes, such as static recrystallization, deformation, and chemical disequilibrium, on the liquid distribution is not well constrained. For this thesis, analogue experiments were used that allowed the interplay of these different mechanisms to be investigated in situ. In high-temperature environments where melts are produced, the grain-scale distribution in "equilibrium" is fully determined by the liquid fraction and the ratio between the solid-solid and the solid-liquid surface energy. The latter is commonly expressed as the dihedral or wetting angle between two grains and the liquid phase (Chapter 2). The interplay of this "equilibrium" liquid distribution with ongoing surface-energy-driven recrystallization is investigated in Chapters 4 and 5 with experiments using norcamphor plus ethanol as liquid. Ethanol in contact with norcamphor forms a wetting angle of about 25°, similar to reported angles of rock-forming minerals in contact with silicate melt. The experiments in Chapter 4 show that previously reported disequilibrium features, such as trapped liquid lenses, fully wetted grain boundaries, and large liquid pockets, can be explained by the interplay of the liquid with ongoing recrystallization. Closer inspection of dihedral angles in Chapter 5 reveals that the wetting angles are themselves modified by grain coarsening.
Ongoing recrystallization constantly moves liquid-filled triple junctions, thereby altering the wetting angles dynamically as a function of the triple-junction velocity. A polycrystalline aggregate will therefore always display a range of equilibrium and dynamic wetting angles at raised temperature, rather than a single wetting angle as previously thought. For the deformation experiments, partially molten KNO3–LiNO3 samples were used in addition to norcamphor–ethanol samples (Chapter 6). Three deformation regimes were observed. At a high bulk liquid fraction (>10 vol.%), the aggregate deformed by compaction and granular flow. At a "moderate" liquid fraction, the aggregate deformed mainly by grain boundary sliding (GBS), localized into conjugate shear zones. At a low liquid fraction, the grains of the aggregate formed a supporting framework that deformed internally by crystal-plastic deformation or diffusion creep. Liquid segregation was most efficient during framework deformation, while GBS led to slow liquid segregation or even liquid dispersion in the deforming areas.
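The surface-energy ratio mentioned above relates to the dihedral (wetting) angle through the standard force balance at a liquid-filled triple junction (standard notation, not quoted from the thesis):

```latex
\cos\!\left(\frac{\theta}{2}\right) \;=\; \frac{\gamma_{ss}}{2\,\gamma_{sl}}
```

where $\theta$ is the dihedral angle, $\gamma_{ss}$ the solid-solid and $\gamma_{sl}$ the solid-liquid surface energy. The reported angle of about 25° for norcamphor in contact with ethanol thus corresponds to a ratio $\gamma_{ss}/\gamma_{sl} = 2\cos(12.5^\circ) \approx 1.95$.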
Towards model driven software development for Arduino platforms: a DSL and automatic code generation
Abstract:
This thesis explores the production of software systems for embedded systems using techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model capturing the fundamental concepts of embedded systems. This model abstracts away from the particular platform used and identifies the abstractions that characterise the embedded-systems world in general; the meta-model is therefore platform-independent. For automatic code generation, a reference platform was adopted: Arduino. Arduino is an embedded system that is gaining more and more traction because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the pins it provides. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, and can therefore be defined by a set of EBNF rules. The technology used to define the meta-model is Xtext: a framework that allows EBNF rules to be written and automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide an editor guided by the syntax defined in the meta-model. Automatic code generation was implemented with the Xtend2 language, which makes it possible to traverse the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files.
The generated code provides essentially the entire schematic part of the application, while leaving the development of the business logic to the application designer. After the definition of the embedded-system meta-model, the level of abstraction was raised further, towards the definition of the part of the meta-model concerning the interaction of an embedded system with other systems. The perspective thus shifted to that of a System, understood as a set of interacting individual systems; this definition is made from the point of view of the individual system whose model is being defined. The thesis also presents a case study which, although fairly simple, provides an example and a tutorial for developing applications with the meta-model. It also shows how the task of the application designer becomes rather simple and immediate, provided it rests on a sound analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
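The generation step can be illustrated with a minimal Python analogue of what the Xtend2 templates do. The toy model format (name, pin, kind tuples) and the device names are invented for illustration; the real generator walks the Ecore AST of a model written in the DSL:

```python
def generate_sketch(devices):
    """Emit an Arduino-style sketch from a toy device model.
    devices: list of (name, pin, kind) tuples, kind in {"sensor", "actuator"}."""
    modes = {"sensor": "INPUT", "actuator": "OUTPUT"}
    decls = "\n".join(f"const int {name} = {pin};"
                      for name, pin, _ in devices)
    setup = "\n".join(f"  pinMode({name}, {modes[kind]});"
                      for name, _, kind in devices)
    return (decls + "\n\n"
            + "void setup() {\n" + setup + "\n}\n\n"
            + "void loop() {\n"
            + "  // business logic is left to the application designer\n"
            + "}\n")

print(generate_sketch([("led", 13, "actuator"), ("button", 2, "sensor")]))
```

As in the thesis, the schematic part (declarations and pin configuration) is produced mechanically from the model, while `loop()` is left as the seam where the application designer adds the business logic.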
Abstract:
This thesis deals with distributed control strategies for cooperative control of multi-robot systems. Specifically, distributed coordination strategies are presented for groups of mobile robots. The formation control problem is initially solved exploiting artificial potential fields. The purpose of the presented formation control algorithm is to drive a group of mobile robots to create an arbitrarily shaped formation. Robots are initially controlled to create a regular polygon formation. A bijective coordinate transformation is then exploited to extend the scope of this strategy and obtain arbitrarily shaped formations. For this purpose, artificial potential fields are specifically designed, and robots are driven to follow their negative gradient. Artificial potential fields are subsequently exploited to solve the coordinated path tracking problem, making the robots autonomously spread along predefined paths and move along them in a coordinated way. The formation control problem is then solved exploiting a consensus-based approach. Specifically, weighted graphs are used both to define the desired formation and to implement collision avoidance. As expected for consensus-based algorithms, this control strategy is experimentally shown to be robust to the presence of communication delays. The global connectivity maintenance issue is then considered. Specifically, an estimation procedure is introduced to allow each agent to compute its own estimate of the algebraic connectivity of the communication graph in a distributed manner. This estimate is then exploited to develop a gradient-based control strategy that ensures that the communication graph remains connected as the system evolves. The proposed control strategy is developed initially for single-integrator kinematic agents, and is then extended to Lagrangian dynamical systems.
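A minimal sketch of the potential-field idea described above, under invented parameters (the gains, step cap and repulsion range below are illustrative, not the thesis design): each robot follows the negative gradient of an attractive potential towards its assigned vertex of a regular polygon, plus a short-range repulsive term for collision avoidance.

```python
import math

def apf_step(positions, targets, k_att=0.2, k_rep=0.05, d0=0.4, max_step=0.2):
    """One gradient-descent step on an attractive-plus-repulsive potential."""
    new_positions = []
    for i, ((px, py), (tx, ty)) in enumerate(zip(positions, targets)):
        # attraction: negative gradient of 0.5 * k_att * |p - t|^2
        fx, fy = k_att * (tx - px), k_att * (ty - py)
        for j, (qx, qy) in enumerate(positions):
            dx, dy = px - qx, py - qy
            d = math.hypot(dx, dy)
            if j != i and 0 < d < d0:  # short-range repulsion
                fx += k_rep * dx / d ** 3
                fy += k_rep * dy / d ** 3
        norm = math.hypot(fx, fy)
        if norm > max_step:  # cap the step to keep the update stable
            fx, fy = fx * max_step / norm, fy * max_step / norm
        new_positions.append((px + fx, py + fy))
    return new_positions

# drive 5 robots from a wide circle onto a regular pentagon formation
n = 5
targets = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
           for i in range(n)]
positions = [(2.5 * math.cos(2 * math.pi * i / n),
              2.5 * math.sin(2 * math.pi * i / n)) for i in range(n)]
for _ in range(400):
    positions = apf_step(positions, targets)
```

The regular polygon is the base case; the bijective coordinate transformation mentioned in the abstract would then warp these target vertices into an arbitrary shape while the same gradient law is followed.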
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry.
The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards - as opposed to becoming so only when the system is final - and more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
Abstract:
The evolution of embedded electronics applications forces electronic systems designers to meet ever-increasing requirements. This evolution pushes up the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches to meeting these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility, which affects non-recurring engineering costs, time to market, and market volumes. The state of the art mainly proposes two solutions to overcome these issues while delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms, in order to address all the constraints introduced above. This thesis focuses on exploring the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems on chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators.
In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.
Abstract:
This thesis was driven by the ambition to create suitable model systems that mimic complex processes in nature, such as intramolecular transitions (e.g. unfolding and refolding of proteins) or intermolecular interactions between different cell components. Novel biophysical approaches were adopted, employing atomic force microscopy (AFM) as the main measurement technique due to its broad versatility. Thus, high-resolution imaging, adhesion measurements, and single-molecule force-distance experiments were performed at the limit of the instrumental capabilities. As a first objective, the interaction between the plasma membrane and the cytoskeleton, mediated by the linker protein ezrin, was pursued. To this end, the adsorption process and the lateral organization of ezrin on PIP2-containing solid-supported membranes were characterized and quantified as a foundation for the establishment of a biomimetic model system. As the second component of the model system, actin filaments were coated on functionalized colloidal probes attached to cantilevers, serving as sensor elements. The endeavour of creating this complex biomimetic system was rewarded by a successful investigation of the activation process of ezrin. As a result, it can be stated that ezrin is activated solely by binding to PIP2, without any further stimulating agents. Additional cofactors may stabilize and prolong the active conformation but are not essentially required for triggering ezrin's transformation into an active conformation. In the second project, single-molecule force-distance experiments were performed on bis-loop tetra-urea calix[4]arene-catenanes at different loading rates (increase in force per second). These macromolecules were specifically designed to investigate the rupture and rejoining mechanism of hydrogen bonds under external load.
The entangled loops of the capsule-like molecules locked the unbound state of the intramolecular hydrogen bonds mechanically, rendering rebinding observable on the experimental time scale. In conjunction with Molecular Dynamics simulations, a three-well potential of the bond rupture process was established, and all kinetically relevant parameters of the experiments were determined by means of Monte Carlo simulations and stochastic modeling. In summary, it can be stated that atomic force microscopy is an invaluable tool to scrutinize relevant processes in nature, such as investigating activation mechanisms in proteins, as shown by the analysis of the interaction between F-actin and ezrin, as well as exploring fundamental properties of single hydrogen bonds that are of paramount interest for the complete understanding of complex supramolecular structures.
Abstract:
In the race to obtain protons with higher energies using ever more compact systems, laser-driven plasma accelerators are becoming an interesting possibility; so far, however, only beams with extremely broad energy spectra and high divergence have been produced. The driving line of this PhD thesis was the study and design of a compact system to extract a high-quality beam from the initial bunch of protons produced by the interaction of a laser pulse with a thin solid target, using experimentally reliable technologies so that such a system can be tested as soon as possible. In this thesis, different transport lines are analysed. The first is based on a high-field pulsed solenoid, some collimators and, for complete filtering and post-acceleration, a high-field high-frequency compact linear accelerator originally designed to accelerate a 30 MeV beam extracted from a cyclotron. The second is based on a quadruplet of permanent magnetic quadrupoles: thanks to its greater simplicity and reliability, it is of great interest for experiments, but its effectiveness is lower than that of the solenoid-based line; in fact, the final beam intensity drops by an order of magnitude. A further considerable decrease in intensity occurs in the third case, where the energy selection is achieved using a chicane, because of its very low efficiency for off-axis protons. The proposed schemes have all been analysed with 3D simulations, and all the significant results are presented. Future experimental work based on the outcome of this thesis can be planned and is being discussed now.
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation typically involves sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author has analysed two different WDNs (the Anytown and Cabrera city networks), formulating and solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, GANetXL, a decision support system generator for multi-objective optimisation developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation algorithm for multi-objective optimisation, which produced the Pareto front of each configuration. The first experiment carried out concerned the network of Anytown city: a large network with a pumping station of four fixed-speed parallel pumps. The main intervention was to replace these pumps with variable-speed driven pumps (VSDPs), installing inverters capable of varying their speed during the day. In this way, substantial energy and cost savings were achieved, together with a reduction in the number of pump switches. The results of the research are thoroughly illustrated in Chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the network of Cabrera city, a smaller WDN with a single fixed-speed (FS) pump.
The optimisation problem was the same, namely the minimisation of energy consumption and, in parallel, the minimisation of TNps, and the same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments over a wide variety of configurations: different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. These different modes produced a large number of results, which are compared in Chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
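The two competing objectives can be made concrete with a small sketch. The power rating, tariff values and schedule below are invented for illustration, not the thesis data: for an on/off pump schedule, one function totals the daily energy cost and another counts the pump switches that NSGA-II trades off against it.

```python
def pump_switches(schedule):
    """Total number of pump switches (TNps): each on/off transition
    between consecutive time steps counts as one switch."""
    return sum(a != b for a, b in zip(schedule, schedule[1:]))

def energy_cost(schedule, power_kw, tariff_per_kwh):
    """Daily energy cost of a fixed-speed pump: rated power times the
    hourly tariff, summed over the hours the pump is on."""
    return sum(power_kw * t
               for on, t in zip(schedule, tariff_per_kwh) if on)

# a toy 12-hour schedule with a cheap night tariff (illustrative values)
schedule = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1]
tariff = [0.08] * 6 + [0.20] * 6
print(pump_switches(schedule),
      energy_cost(schedule, power_kw=50.0, tariff_per_kwh=tariff))
```

Shifting pumping into the cheap-tariff hours lowers the cost objective but tends to fragment the schedule and raise the switch count, which is exactly the conflict that produces a Pareto front rather than a single optimum.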
Abstract:
Molecular recognition and self-assembly represent fundamental issues in the construction of supramolecular systems, structures in which the components are held together through non-covalent interactions. The study of host-guest complexes and mechanically interlocked molecules, important examples in this field, is necessary in order to characterize self-assembly processes, achieve more control over molecular organization, and develop sophisticated structures using properly designed building blocks. The introduction of paramagnetic species, or spin labelling, represents an attractive opportunity that allows their detection and characterization by Electron Spin Resonance (ESR) spectroscopy, a valuable technique that provides information complementary to that obtained by traditional methods. In this thesis, recent progress in the design and synthesis of new paramagnetic host-guest complexes and rotaxanes characterized by the presence of nitroxide radicals, and their investigation by ESR spectroscopy, is reported. In Chapter 1, a brief overview of the principal concepts of supramolecular chemistry, the spin-labelling approach and the development of ESR methods applied to paramagnetic systems is given. Chapters 2 and 3 focus on the introduction of radicals into macrocycles such as cucurbiturils and pillar[n]arenes, chosen for their interesting binding properties and potential employment in rotaxanes, in order to investigate their structures and recognition properties. Chapter 4 deals with one of the most studied mechanically interlocked molecules, the bistable [2]rotaxane reported by Stoddart and Heath based on cyclobis(paraquat-p-phenylene) (CBPQT4+), a well-known example of a molecular switch driven by external stimuli. The spin labelling of analogous architectures allows the switching mechanism involving the ring compound to be monitored by ESR spectroscopy through tuning of the spin-exchange interaction.
Finally, Chapter 5 contains the experimental procedures used for the synthesis of some of the compounds described in Chapters 2–4.
Abstract:
One of the open questions in present-day physics is the understanding of systems out of equilibrium. In contrast to equilibrium physics, no formalism is currently known in this area that allows the different systems to be described systematically. To improve the understanding of such systems, this thesis studies two different systems that show strongly nonlinear behaviour under an external field: on the one hand, the behaviour of particles under the influence of an externally applied force, and on the other, the behaviour of a system near the critical point under shear. The model system in the first part of the thesis is a binary Yukawa mixture, which shows a glass transition at low temperatures. This leads to a strongly increasing relaxation time of the system, so that nonlinear behaviour is observed relatively quickly even for small forces. Depending on the applied constant force, three regimes with markedly different particle behaviour are identified in this work. In the second part of the thesis, the Ising model under shear is considered. Near the critical point, the fluctuations in this model are influenced by the applied shear field. As a consequence, the system becomes strongly anisotropic, and one finds two different correlation lengths that diverge with different exponents. The usual isotropic finite-size scaling formalism can therefore no longer be applied to this system. This thesis shows how it can be generalised to the anisotropic case and how the critical points, as well as the associated critical exponents, can then be computed.
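The two diverging correlation lengths mentioned above can be written, in standard notation for anisotropic criticality (not quoted from the thesis), as

```latex
\xi_\parallel \sim |T - T_c|^{-\nu_\parallel}, \qquad
\xi_\perp \sim |T - T_c|^{-\nu_\perp}, \qquad
\nu_\parallel \neq \nu_\perp ,
```

so that isotropic finite-size scaling fails: system sizes along and transverse to the shear direction must be rescaled with different exponents, for instance by holding the generalized aspect ratio $L_\parallel / L_\perp^{\,\nu_\parallel/\nu_\perp}$ fixed while both lengths are varied.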
Abstract:
Management Control System (MCS) research is undergoing turbulent times. For a long time related only to cybernetic instruments of management accounting, MCS are increasingly seen as complex systems comprising not only formal accounting-driven instruments, but also informal mechanisms of control based on organizational culture. But not only have the means of MCS changed; researchers increasingly apply MCS to organizational goals other than strategy implementation. Taking the question "How do I design a well-performing MCS?" as a starting point, this dissertation aims to provide a comprehensive and integrated overview of the current state of MCS research. Opting for a definition of MCS that is broad in terms of means (all formal as well as informal MCS instruments) but focused in terms of objectives (behavioral control only), the dissertation contributes to MCS theory by a) developing an integrated (contingency) model of MCS, describing its contingencies as well as its subcomponents, b) refining the equifinality model of Gresov/Drazin (1997), and c) synthesizing research findings from contingency and configuration research concerning MCS, taking into account case studies on research topics such as ambidexterity, equifinality and time as a contingency.
Abstract:
Clay mineral-rich sedimentary formations are currently under investigation to evaluate their potential use as host formations for deep underground disposal facilities for radioactive waste (e.g. Boom Clay (BE), Opalinus Clay (CH), Callovo-Oxfordian argillite (FR)). The ultimate safety of the corresponding repository concepts depends largely on the capacity of the host formation to limit the flux of radionuclides (RN) contained in the waste towards the biosphere to acceptably low levels. Data for diffusion-driven transfer in these formations show extreme differences in the measured or modelled behaviour of various radionuclides, e.g. between halogen RN (Cl-36, I-129) and actinides (U-238, U-235, Np-237, Th-232, etc.), which result from major differences between RN in the effects of two phenomena on transport: diffusion and sorption. This paper describes recent research aimed at improving understanding of these two phenomena, focusing on the results of studies carried out during the EC Funmig IP on clayrocks from the above three formations and from the Boda formation (HU). Project results regarding the phenomena governing water, cation and anion distribution and mobility in the pore volumes influenced by the negatively charged surfaces of clay minerals show a convergence between modelling results for behaviour at the molecular scale and descriptions based on electrical double layer models. Transport models exist which couple ion distribution relative to the clay-solution interface with differentiated diffusive characteristics. These codes are able to reproduce the main trends observed experimentally, e.g. D-e(anion) < D-e(HTO) < D-e(cation), and the variation of D-e(anion) as a function of ionic strength and material density. These trends are also well explained by models of transport through ideal porous matrices made up of a charged surface material.
Experimental validation of these models is good for monovalent alkali cations, in progress for divalent electrostatically-interacting cations (e.g. Sr2+), and still relatively poor for 'strongly sorbing', high-K-d cations. Funmig results have clarified understanding of how clayrock mineral composition, and the corresponding organisation of mineral grain assemblages and their associated porosity, can affect mobile solute (anions, HTO) diffusion at different scales (mm to geological formation). In particular, advances made in the capacity to map clayrock mineral grain-porosity organisation at high resolution provide additional elements for understanding diffusion anisotropy and for relating diffusion characteristics measured at different scales. On the other hand, the results of studies focusing on evaluating the potential effects of heterogeneity on mobile species diffusion at the formation scale tend to show that there is a minimal effect when compared to a homogeneous property model. Finally, the results of a natural tracer-based study carried out on the Opalinus Clay formation increase confidence in the use of diffusion parameters measured on laboratory-scale samples for predicting diffusion over geological time-space scales. Much effort was devoted to improving understanding of coupled sorption-diffusion phenomena for sorbing cations in clayrocks. Results regarding sorption equilibrium in dispersed and compacted materials for weakly to moderately sorbing cations (Sr2+, Cs+, Co2+) tend to show that the same sorption model probably holds in both systems. It was not possible to demonstrate this for highly sorbing elements such as Eu(III) because of the extremely long times needed to reach equilibrium conditions, but there does not seem to be any clear reason why such elements should not behave similarly.
Diffusion experiments carried out with Sr2+, Cs+ and Eu(III) on all of the clayrocks gave mixed results and tend to show that coupled diffusion-sorption migration is much more complex than expected, leading generally to greater mobility than that predicted by coupling a batch-determined K-d and Fick's law based on the diffusion behaviour of HTO. If the K-d measured on equivalent dispersed systems holds, as was shown to be the case for Sr and Cs (and probably Co) on Opalinus Clay, these results indicate that these cations have a D-e value higher than that of HTO (up to a factor of 10 for Cs+). Results are as yet very limited for moderately to strongly sorbing species (e.g. Co(II), Eu(III), Cu(II)) because of their very slow transfer characteristics. (C) 2011 Elsevier Ltd. All rights reserved.
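The batch-K-d/Fick coupling referred to above takes, in its usual linear-sorption form (standard notation, not quoted from the paper), the shape

```latex
D_a \;=\; \frac{D_e}{\varepsilon + \rho_b\,K_d}
```

where $D_a$ is the apparent diffusion coefficient, $D_e$ the effective diffusion coefficient, $\varepsilon$ the diffusion-accessible porosity, $\rho_b$ the bulk dry density and $K_d$ the batch-determined distribution coefficient. The discrepancies reported above correspond to sorbing cations showing a larger $D_e$ than HTO, rather than the HTO-like $D_e$ this simple coupling assumes.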