900 results for Chunk-based information diffusion


Relevance:

40.00%

Publisher:

Abstract:

Thesis presented as a partial requirement for obtaining the degree of Doctor in Information Management.

Relevance:

40.00%

Publisher:

Abstract:

Spin-lattice relaxation, self-diffusion coefficients and Residual Dipolar Couplings (RDCs) are the basis of well-established Nuclear Magnetic Resonance techniques for the physicochemical study of small molecules (typically organic compounds and natural products with MW < 1000 Da), as they have proved to be a powerful and complementary source of information about structural dynamic processes in solution. The work developed in this thesis consists in the application of the aforementioned NMR techniques to explore, analyze and systematize patterns in the molecular dynamic behavior of selected small molecules under particular experimental conditions. Two systems were chosen to investigate molecular dynamic behavior by these techniques: the dynamics of ion-pair formation and ion interaction in ionic liquids (ILs), and the dynamics of molecular reorientation when molecules are placed in oriented phases (alignment media). NMR spin-lattice relaxation and self-diffusion measurements were applied to study the rotational and translational molecular dynamics of the IL 1-butyl-3-methylimidazolium tetrafluoroborate, [BMIM][BF4]. The cation-anion dynamics in neat IL and in IL-water mixtures was systematically investigated by combining multinuclear NMR relaxation techniques with diffusion data (using 1H, 13C and 19F NMR spectroscopy). Spin-lattice relaxation times (T1), self-diffusion coefficients and nuclear Overhauser effect experiments were combined to determine the conditions that favor the formation of long-lived [BMIM][BF4] ion pairs in water. For this purpose, and using the self-diffusion coefficients of cation and anion as a probe, different IL-water compositions were screened (from neat IL to infinite dilution) to find the conditions where cation and anion present equal diffusion coefficients (8% water fraction at 25 °C). This condition, as well as the neat IL and infinite dilution, was then further studied by 13C NMR relaxation in order to determine correlation times (τc) for the molecular reorientational motion, using an iterative mathematical procedure and experimental data obtained in the temperature range between 273 and 353 K. The self-diffusion and relaxation data obtained in our experiments point to the combination of an 8% water molar fraction and a temperature of 298 K as the most favorable condition for the formation of long-lived ion pairs. When molecules undergo soft anisotropic motion by being placed in certain special media, Residual Dipolar Couplings (RDCs) can be measured because of the partial alignment induced by these media. RDCs are emerging as a powerful routine tool in conformational analysis, as they complement and even outperform approaches based on the classical NMR NOE or 3J couplings. In this work, three different alignment media were characterized and evaluated in terms of integrity using 2H and 1H 1D-NMR spectroscopy, namely stretched and compressed PMMA gels and the lyotropic liquid crystals CpCl/n-hexanol/brine and cromolyn/water. The influence of different media and degrees of alignment on the dynamic properties of several molecules was explored. Sugars of different sizes were used, and their self-diffusion was determined as well as their conformational features using RDCs.
The results indicate that the diffusion and conformational features of the small molecules are unaffected within the alignment degree range studied (3, 5 and 6% CpCl/n-hexanol/brine for diffusion, and 5 and 7.5% CpCl/n-hexanol/brine for conformation). The self-diffusion coefficients of the small molecules measured in the alignment media were also close to those observed in water, reinforcing the idea that molecular properties are not conditioned in such media.
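As a numerical illustration of the screening step described above, the sketch below interpolates cation and anion self-diffusion coefficients across water mole fractions and locates the composition where the two curves cross. The data arrays are invented placeholders, not the thesis measurements; only the procedure (finding the crossing of the two diffusion curves) reflects the abstract.

```python
import numpy as np

# Hypothetical self-diffusion coefficients (1e-11 m^2/s) of [BMIM]+ and
# [BF4]- versus water mole fraction; illustrative values only.
x_water  = np.array([0.00, 0.02, 0.05, 0.08, 0.20, 0.50, 0.90])
D_cation = np.array([1.8,  2.0,  2.4,  2.9,  4.5,  9.0,  30.0])
D_anion  = np.array([1.5,  1.8,  2.3,  3.0,  5.0,  10.5, 38.0])

# Interpolate D_cation - D_anion on a fine grid and find the sign change:
# the composition where both ions diffuse together, taken in the thesis
# as the condition favouring long-lived ion pairs.
grid = np.linspace(x_water.min(), x_water.max(), 2001)
diff = np.interp(grid, x_water, D_cation) - np.interp(grid, x_water, D_anion)
for i in np.flatnonzero(np.diff(np.sign(diff)) != 0):
    print(f"equal diffusion coefficients near x_water = {grid[i]:.3f}")
```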

Relevance:

40.00%

Publisher:

Abstract:

In the early nineties, Mark Weiser wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. According to Weiser, computers require too much attention from the user, drawing focus away from the tasks at hand. Instead of being the centre of attention, computers should be so natural that they would vanish into the human environment, becoming not only truly pervasive but also effectively invisible and unobtrusive to the user. This calls not only for smaller, cheaper and lower-power computers, but also for equally convenient display solutions that can be harmoniously integrated into our surroundings. With the advent of Printed Electronics, new ways to link the physical and the digital worlds became available. By combining common printing techniques such as inkjet printing with electro-optical functional inks, it is becoming possible not only to mass-produce extremely thin, flexible and cost-effective electronic circuits, but also to introduce electronic functionalities into products where they were previously unavailable. Indeed, Printed Electronics is enabling the creation of novel sensing and display elements for interactive devices, free of form-factor constraints. At the same time, the increasing availability and affordability of digital fabrication technologies, namely 3D printers, to the average consumer is fostering a new industrial (digital) revolution and the democratisation of innovation. Nowadays, end-users are already able to custom-design and manufacture their own physical products on demand, according to their own needs. In the future, they will be able to fabricate interactive digital devices with user-specific form and functionality from the comfort of their homes. This thesis explores how task-specific, low-computation, interactive devices capable of presenting dynamic visual information can be created using Printed Electronics technologies, following an approach based on the ideals behind Personal Fabrication. Focus is given to the use of printed electrochromic displays as a medium for delivering dynamic digital information. Several approaches are highlighted and categorised according to the architecture of the displays. Furthermore, a pictorial computation model based on extended cellular automata principles is used to programme dynamic simulation models into matrix-based electrochromic displays. Envisaged applications include the modelling of physical, chemical, biological, and environmental phenomena.
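The abstract does not specify the "extended cellular automata" rules themselves; as a generic stand-in, the sketch below steps a classic two-state automaton (Conway's Game of Life, used here purely for illustration) and treats each cell as one pixel of a matrix electrochromic display.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update of a two-state cellular automaton
    (Conway's rules), each cell mapped to one display pixel."""
    # Count the eight neighbours of every cell with periodic boundaries.
    n = sum(np.roll(np.roll(grid, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    # Birth on 3 neighbours; survival on 2 or 3.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# A small "display": 1 = pixel in the coloured state, 0 = bleached.
rng = np.random.default_rng(0)
frame = rng.integers(0, 2, size=(16, 16), dtype=np.uint8)
for _ in range(10):        # each iteration is one display refresh
    frame = life_step(frame)
print(frame)
```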

Relevance:

40.00%

Publisher:

Abstract:

Information security is concerned with the protection of information, which can be stored, processed or transmitted within the critical information systems of organizations, against loss of confidentiality, integrity or availability. Protection measures to prevent these problems result from the implementation of controls along several dimensions: technical, administrative or physical. A vital objective for military organizations is to ensure superiority in contexts of information warfare and competitive intelligence. The problem of information security in military organizations has therefore been a topic of intensive work at both national and transnational levels, and extensive conceptual and standardization work is being produced. A current effort is to develop automated decision support systems that assist military decision makers, at different levels in the chain of command, in providing suitable control measures that can effectively deal with potential attacks and, at the same time, prevent, detect and contain vulnerabilities targeted at their information systems. The concepts and processes of the Case-Based Reasoning (CBR) methodology strikingly resemble classical military processes and doctrine, in particular the analysis of "lessons learned" and the definition of "modes of action". The present paper therefore addresses the modeling and design of a CBR system with two key objectives: to support an effective response in the context of information security for military organizations, and to allow for scenario planning and analysis for training and auditing processes.
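A minimal sketch of the CBR retrieve step, assuming a hypothetical case schema (attack vector, asset, severity) and a weighted-overlap similarity; the case base, weights and responses are invented for illustration and are not the paper's design.

```python
from dataclasses import dataclass

@dataclass
class Case:
    attack_vector: str   # e.g. "phishing", "malware" (hypothetical schema)
    asset: str           # e.g. "mail server"
    severity: int        # 1 (low) .. 5 (critical)
    response: str        # the "mode of action" recorded as a lesson learned

def similarity(a: Case, b: Case) -> float:
    """Weighted overlap of case attributes; weights are illustrative."""
    s = 0.5 * (a.attack_vector == b.attack_vector)
    s += 0.3 * (a.asset == b.asset)
    s += 0.2 * (1 - abs(a.severity - b.severity) / 4)
    return s

case_base = [
    Case("phishing", "mail server", 3, "quarantine accounts, reset credentials"),
    Case("malware", "workstation", 4, "isolate host, reimage, scan network"),
    Case("ddos", "web portal", 5, "activate upstream filtering, reroute"),
]

# Retrieve: suggest the response of the most similar past case.
incident = Case("phishing", "workstation", 4, response="?")
best = max(case_base, key=lambda c: similarity(incident, c))
print("suggested mode of action:", best.response)
```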

Relevance:

40.00%

Publisher:

Abstract:

This work presents a molecular-scale agent-based model for the simulation of enzymatic reactions at experimentally measured concentrations. The model incorporates stochasticity and spatial dependence, using diffusing and reacting particles with physical dimensions. We developed strategies to adjust and validate the enzymatic rates and diffusion coefficients against the information required by the computational agents, i.e., collision efficiency, interaction logic between agents, the time scale associated with interactions (e.g., kinetics), and agent velocity. We also tested the impact of molecular location (a source of biological noise) on the speed at which the reactions take place. Simulations were conducted for experimental data on 2-hydroxymuconate tautomerase (EC 5.3.2.6, UniProt ID Q01468) and Steroid Delta-isomerase (EC 5.3.3.1, UniProt ID P07445). The results demonstrate that our approach is in accordance with existing experimental data and long-standing biophysical and biochemical assumptions.
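A toy version of this kind of model (not the authors' implementation) can be sketched as substrate particles random-walking in a box and converting to product when they come within contact distance of an enzyme agent, with an assumed collision-efficiency parameter standing in for the calibrated kinetics.

```python
import numpy as np

rng = np.random.default_rng(1)
BOX, R_ENZ, R_SUB = 100.0, 2.0, 1.0   # box size and particle radii (toy units)
STEP_E, STEP_S = 0.5, 1.5             # rms step per tick ~ sqrt(2*D*dt)
P_REACT = 0.3                         # collision efficiency (assumed value)

enzymes = rng.uniform(0, BOX, size=(5, 2))
substrates = rng.uniform(0, BOX, size=(200, 2))
products = 0

for tick in range(5000):
    # Brownian moves, wrapped back into the box.
    enzymes = (enzymes + rng.normal(0, STEP_E, enzymes.shape)) % BOX
    substrates = (substrates + rng.normal(0, STEP_S, substrates.shape)) % BOX
    # Collision test: substrate within contact distance of any enzyme.
    d = np.linalg.norm(substrates[:, None, :] - enzymes[None, :, :], axis=2)
    hit = (d < R_ENZ + R_SUB).any(axis=1)
    react = hit & (rng.random(len(substrates)) < P_REACT)
    products += int(react.sum())
    substrates = substrates[~react]   # reacted substrate becomes product
    if len(substrates) == 0:
        break
print(f"{products} product molecules after {tick + 1} ticks")
```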

Relevance:

40.00%

Publisher:

Abstract:

Validation is arguably the bottleneck in the diffusion magnetic resonance imaging (MRI) community. This paper evaluates and compares 20 algorithms for recovering the local intra-voxel fiber structure from diffusion MRI data and is based on the results of the "HARDI reconstruction challenge" organized in the context of the "ISBI 2012" conference. The evaluated methods encompass a mixture of classical techniques well known in the literature, such as diffusion tensor, Q-Ball and diffusion spectrum imaging, algorithms inspired by the recent theory of compressed sensing, and brand new approaches proposed for the first time at this contest. To quantitatively compare the methods under controlled conditions, two datasets with known ground truth were synthetically generated, and two main criteria were used to evaluate the quality of the reconstructions in every voxel: correct assessment of the number of fiber populations, and angular accuracy in their orientation. This comparative study investigates the behavior of every algorithm under varying experimental conditions and highlights the strengths and weaknesses of each approach. This information can be useful not only for enhancing current algorithms and developing the next generation of reconstruction methods, but also for assisting physicians in choosing the most adequate technique for their studies.
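Of the two criteria, angular accuracy is straightforward to make concrete. A minimal sketch, assuming each voxel yields sets of unit fiber directions, computes the angle from each ground-truth direction to its closest estimate, taking antipodal symmetry into account (a fiber orientation has no sign):

```python
import numpy as np

def angular_errors(true_dirs: np.ndarray, est_dirs: np.ndarray) -> np.ndarray:
    """For each ground-truth fiber direction, the angle (degrees) to the
    closest estimated direction. Rows are unit vectors; |dot| handles the
    antipodal symmetry of fiber orientations."""
    dots = np.abs(true_dirs @ est_dirs.T).clip(0.0, 1.0)
    return np.degrees(np.arccos(dots.max(axis=1)))

# Two crossing ground-truth populations vs. a slightly off reconstruction.
truth = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
est = np.array([[0.996, 0.087, 0.0], [0.0, -1.0, 0.0]])  # ~5 deg off; flipped
est /= np.linalg.norm(est, axis=1, keepdims=True)
print(angular_errors(truth, est))   # approximately [5., 0.]
```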

Relevance:

40.00%

Publisher:

Abstract:

The objective of this work is to present a multitechnique approach to define the geometry, the kinematics, and the failure mechanism of a retrogressive large landslide (upper part of the La Valette landslide, South French Alps) by combining airborne and terrestrial laser scanning data with ground-based seismic tomography data. The advantage of combining different methods is to constrain the geometrical and failure mechanism models by integrating different sources of information. Because of a high point density at the ground surface (4.1 points m⁻²), a small laser footprint (0.09 m) and accurate three-dimensional positioning (0.07 m), airborne laser scanning data are well suited as a source of information for analyzing morphological structures at the surface. Seismic tomography surveys (P-wave and S-wave velocities) may highlight low-seismic-velocity zones that characterize the presence of dense fracture networks in the subsurface. The surface displacements measured from the terrestrial laser scanning data over a period of 2 years (May 2008 to May 2010) allow one to quantify the landslide activity in the direct vicinity of the identified discontinuities. A significant subsidence of the crown area, with an average subsidence rate of 3.07 m year⁻¹, is determined. The displacement directions indicate that the retrogression is controlled structurally by the preexisting discontinuities. A conceptual structural model is proposed to explain the failure mechanism and the retrogressive evolution of the main scarp. Uphill, the crown area is affected by planar sliding within a deeper wedge failure system constrained by two preexisting fractures. Downhill, the landslide body acts as a buttress for the upper part. Consequently, the progression of the landslide body downhill allows the development of dip-slope failures, and coherent blocks start sliding along planar discontinuities. The volume of the failed mass in the crown area is estimated at 500,000 m³ using the sloping local base level method.
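The displacement quantification can be illustrated by differencing two gridded elevation models of the crown area and dividing by the time span; the grids below are synthetic placeholders tuned to give a rate near the reported one, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic gridded elevations of the crown area (metres) at two epochs,
# roughly two years apart; the ~0.07 m noise mimics positioning accuracy.
z_2008 = 1900.0 + rng.normal(0.0, 0.07, size=(50, 50))
z_2010 = z_2008 - 6.14 + rng.normal(0.0, 0.07, size=(50, 50))

dt_years = 2.0
dz = z_2010 - z_2008                 # negative values = subsidence
rate = dz.mean() / dt_years
print(f"mean subsidence rate: {-rate:.2f} m/year")   # ~3.07 m/year
```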

Relevance:

40.00%

Publisher:

Abstract:

Anti-basal ganglia antibodies (ABGAs) have been suggested to be a hallmark of autoimmunity in Gilles de la Tourette's syndrome (GTS), possibly related to prior exposure to streptococcal infection. In order to detect whether the presence of ABGAs was associated with subtle structural changes in GTS, whole-brain analyses using independent sets of T1 and diffusion tensor imaging MRI-based methods were performed on 22 adults with GTS, with (n = 9) and without (n = 13) detectable ABGAs in the serum. Voxel-based morphometry analysis failed to detect any significant difference in grey matter density between the ABGA-positive and ABGA-negative groups in the caudate nuclei, putamina, thalami and frontal lobes. These results suggest that ABGA synthesis is not related to structural changes in grey and white matter (detectable with these methods) within frontostriatal circuits.

Relevance:

40.00%

Publisher:

Abstract:

Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion, such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology, and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the algorithm has been programmed, the different agents are left to interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on the choices made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence (global divergence and local convergence) that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning not only that time is needed for a policy to deploy its effects, but also that it takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with the theoretical expectations and the empirical evidence.
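A stripped-down sketch of such a learning-driven diffusion model (a simplification, not Braun and Gilardi's specification): agents on a lattice adopt a policy with a probability that grows with the share of adopting neighbors, plus a small baseline standing in for internal drivers, and cumulative adoption traces the S-shaped curve the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50                                  # 50 x 50 lattice of "countries"
adopted = np.zeros((N, N), dtype=bool)
adopted[N // 2, N // 2] = True          # one early adopter

def neighbour_share(a: np.ndarray) -> np.ndarray:
    """Fraction of the four von Neumann neighbours that have adopted."""
    s = (np.roll(a, 1, 0).astype(float) + np.roll(a, -1, 0) +
         np.roll(a, 1, 1) + np.roll(a, -1, 1))
    return s / 4.0

curve = []
for t in range(150):
    # Learning: adoption probability grows with neighbours' adoption;
    # the small baseline stands in for internal drivers of change.
    p = 0.001 + 0.5 * neighbour_share(adopted)
    adopted |= rng.random((N, N)) < p
    curve.append(int(adopted.sum()))
print(curve[::15])   # cumulative adopters: slow start, take-off, saturation
```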

Relevance:

40.00%

Publisher:

Abstract:

The Internet is increasingly used as a source of information on health issues and is probably a major source of patient empowerment. This process is, however, limited by the frequently poor quality of web-based health information designed for consumers. A better diffusion of information about the criteria defining the quality of website content, and about useful methods for searching for such information, could be particularly useful to patients and their relatives. A brief, six-item DISCERN version, characterized by a high specificity for detecting websites with good or very good content quality, was recently developed. This tool could facilitate the identification of high-quality information on the web by patients and may improve the empowerment process initiated by the development of the health-related web.

Relevance:

40.00%

Publisher:

Abstract:

The automatic interpretation of conventional traffic signs is very complex and time consuming. This paper concerns an automatic warning system for driving assistance. It does not interpret the standard traffic signs on the roadside; rather, the proposal is to incorporate into the existing signs another type of traffic sign whose information can be more easily interpreted by a processor. Because the information to be added is profuse, the most important objective is the robustness of the system. The basic premise of this new philosophy is that the co-pilot system for automatic warning and driving assistance can interpret the information contained in the new sign with greater ease, whilst the human driver only has to interpret the "classic" sign. One coding that has been tested with good results, and which seems to us easy to implement, is a rectangular sign with 4 vertical bars of different colours. The size of these signs is equivalent to that of conventional signs (approximately 0.4 m²). The colour information in the sign can be easily interpreted by the proposed processor, and the interpretation is much easier and quicker than for the information shown by the pictographs of classic signs.
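A sketch of the decoding side under assumed conventions: a four-letter colour alphabet and a made-up codebook (neither is the paper's actual scheme). Each bar's mean RGB is classified to the nearest palette colour and the resulting 4-symbol string is looked up, which is why such a sign is far easier for a processor than a pictograph.

```python
# Hypothetical decoder for a 4-bar colour-coded sign; the palette and
# codebook are invented for illustration.
PALETTE = {"R": (220, 40, 40), "G": (40, 180, 60),
           "B": (40, 60, 220), "Y": (230, 220, 50)}
CODEBOOK = {"RGBY": "speed limit 60", "GBRY": "pedestrian crossing ahead",
            "BYGR": "no overtaking"}

def classify(rgb):
    """Nearest palette colour by squared RGB distance."""
    return min(PALETTE, key=lambda k: sum((a - b) ** 2
                                          for a, b in zip(PALETTE[k], rgb)))

def decode(bar_means):
    """bar_means: mean RGB of the 4 vertical bars, left to right."""
    code = "".join(classify(rgb) for rgb in bar_means)
    return CODEBOOK.get(code, f"unknown code {code}")

# Camera measurements are noisy; classification still recovers the code.
print(decode([(200, 60, 55), (60, 160, 70), (70, 60, 200), (210, 200, 80)]))
```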

Relevance:

40.00%

Publisher:

Abstract:

Shape complexity has recently received attention from different fields, such as computer vision and psychology. In this paper, integral geometry and information theory tools are applied to quantify shape complexity from two different perspectives: from the inside of the object, we evaluate its degree of structure, or correlation between its surfaces (inner complexity); from the outside, we compute its degree of interaction with the circumscribing sphere (outer complexity). Our shape complexity measures are based on the following two facts: uniformly distributed global lines crossing an object define a continuous information channel, and the continuous mutual information of this channel is independent of the object discretisation and invariant to translations, rotations, and changes of scale. The measures introduced in this paper can potentially be used as shape descriptors for object recognition, image retrieval, object localisation, tumour analysis, and protein docking, among others.
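The mutual-information machinery can be made concrete in the discrete case. A minimal sketch, assuming the line channel has been reduced to a joint probability table p(x, y), for example by binning global-line intersection data over surface patches, computes I(X;Y) = Σ p(x,y) log₂[p(x,y)/(p(x)p(y))]. Note the paper's measure is the continuous mutual information, of which this is only the discrete analogue.

```python
import numpy as np

def mutual_information(p_xy: np.ndarray) -> float:
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    p_xy = p_xy / p_xy.sum()              # normalise defensively
    px = p_xy.sum(axis=1, keepdims=True)  # marginal p(x)
    py = p_xy.sum(axis=0, keepdims=True)  # marginal p(y)
    mask = p_xy > 0                       # 0 * log 0 = 0 by convention
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px * py)[mask])).sum())

# Illustrative joint table, e.g. counts of global lines binned by the pair
# of surface patches they hit (made-up numbers).
counts = np.array([[30.0, 5.0, 1.0],
                   [4.0, 25.0, 6.0],
                   [2.0, 7.0, 20.0]])
print(f"I(X;Y) = {mutual_information(counts):.3f} bits")
```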

Relevance:

40.00%

Publisher:

Abstract:

Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species, because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results.

Results: Models built with true absences had the best predictive power and the best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data.

Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
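The recommended workflow (random pseudo-absences plus information-theoretic model selection) can be sketched on simulated data. A virtual species below responds to one "true" predictor out of two; candidate logistic models are compared by AIC via statsmodels. All data generation is invented for illustration, not the paper's virtual species.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_bg = 5000
env1 = rng.normal(size=n_bg)              # true driver of the virtual species
env2 = rng.normal(size=n_bg)              # irrelevant predictor
p_pres = 1 / (1 + np.exp(-(2.0 * env1 - 1.0)))   # predefined "true" response

# Presences where the species occurs; pseudo-absences drawn at random
# from the background, as the Conclusion recommends.
present = rng.random(n_bg) < p_pres
pres_idx = np.flatnonzero(present)
pa_idx = rng.choice(n_bg, size=len(pres_idx), replace=False)

idx = np.concatenate([pres_idx, pa_idx])
y = np.concatenate([np.ones(len(pres_idx)), np.zeros(len(pa_idx))])
X = np.column_stack([env1[idx], env2[idx]])

# Compare candidate models by AIC; the model containing only the correct
# predictor (env1) should have the lowest, or essentially tied, AIC.
for name, cols in [("env1", [0]), ("env2", [1]), ("env1+env2", [0, 1])]:
    exog = sm.add_constant(X[:, cols])
    fit = sm.Logit(y, exog).fit(disp=0)
    print(f"{name:10s} AIC = {fit.aic:.1f}")
```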